By Lucy Murdoch, Managing Director, Global Corporate Citizenship Delivery, Accenture in Scotland

IF, like me, you are one of the 81 per cent of business executives who believe that Artificial Intelligence (AI) will be our co-worker, collaborator and future adviser, have you considered what kind of morals it will have?

This is not as weird as it might sound. AI already interacts with people on many levels and therefore has significant responsibility. It needs to be trusted to make the right decisions.

Rather than being programmed to take specific actions over and over, an AI “learns”. It constantly analyses incoming data, adapting its algorithms to make smarter decisions and achieve better outcomes. When researchers at the University of Virginia trained an AI on a widely used photo data set, however, they discovered that it amplified the gender bias already present in the data. In one example, the AI categorised a man standing next to a cooker as a woman.
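To see why a pattern in the data can come out even stronger in a system’s behaviour, consider a deliberately simplified sketch in Python. Everything in it is invented for illustration – it is not the Virginia team’s set-up. A single made-up feature, “the photo shows a cooker”, correlates with the label “woman” two-thirds of the time, yet the trained model turns that tendency into a near-certain rule:

    # A toy illustration of bias amplification; all numbers and feature
    # names are invented, not taken from the University of Virginia study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # One feature: does the photo show a cooker? (1 = yes, 0 = no)
    n = 3000
    cooker = rng.integers(0, 2, n)

    # Biased labels: 67% of cooker photos are labelled "woman" (1),
    # 33% of the rest -- a correlation in the data, not a rule.
    woman = np.where(cooker == 1,
                     rng.random(n) < 2 / 3,
                     rng.random(n) < 1 / 3).astype(int)

    model = LogisticRegression().fit(cooker.reshape(-1, 1), woman)

    # The model predicts the majority label for every cooker photo:
    # a 67% correlation in the data becomes a 100% rule in predictions,
    # so every person at a cooker -- man or woman -- is labelled "woman".
    print(model.predict([[1]]))              # [1], i.e. "woman"
    print(model.predict_proba([[1]])[0, 1])  # roughly 0.67

Because the model rounds a two-thirds tendency up to a certainty, the bias comes out of its decisions stronger than it went in.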

So despite every effort to minimise and eliminate unconscious bias in people, when the data reflects those biases, an AI can quickly amplify them. Left unaddressed, this will have far-reaching consequences in the workplace and, more broadly, across society.

To my horror, my four-year-old daughter came home from nursery saying that doctors were boys and nurses were girls. How did that happen in 2018? Even as I have tried to raise my daughter to be whoever she wants to be, she is influenced by the world around her, and I had to correct her perception with examples of female friends who are doctors. In the same way, the data scientist must not only train AI without bias, but also recognise where it has learned something wrong and correct it. Despite the best intentions of the system designer, bad data can corrupt an AI, and it needs to be spotted.

This is not just a matter for the ubiquitous chatbot. AI increasingly has a place in solving some of society’s big issues. By 2023, for instance, it is predicted that AI techniques will be the primary method behind significant discoveries in the life sciences.

The combination of human ingenuity with advanced, intelligent technologies also has the potential to produce innovation that builds a more equal and inclusive society. One Accenture project has helped to develop an AI-powered solution that improves how visually impaired people experience the world around them. Called Drishti – “vision” in Sanskrit – it can tell the user how many people are in a room, along with their ages and emotions, and offers other environment-scanning capabilities. With nearly 75 per cent of sight-impaired people in this country unemployed, it has the potential to empower them and create new opportunities.

Recognising the impact of AI is critical. Google in the US uses AI to power Google Translate in more than 100 languages; it has 500 million users and counting. Ant Financial Insurance in China uses AI to make insurance payout determinations quickly. Both could be dealing with cultural and highly emotional matters, which demand a high degree of nuance in interpretation and decision-making.

In the same way that a parent nurtures a child, businesses must teach their AIs to learn, communicate and make unbiased decisions. Once an AI has learned how to learn, it needs to rationalise, or explain, its thoughts and actions, and eventually accept responsibility for its decisions. That applies to children and AI alike.

The greater AI’s responsibilities become, the more critical it is to raise it as a responsible, fair and transparent citizen and a contributing member of society. Treating AI as simply a software programme would be a mistake.