By Lucy Murdoch, Managing Director, Global Corporate Citizenship Delivery, Accenture in Scotland
IF, like me, you are one of the 81 percent of business executives who believe that Artificial Intelligence (AI) will be our co-worker, collaborator and future adviser, have you considered what kind of morals it will have?
This is not as weird as it might sound. AI already interacts with people on many levels and therefore has significant responsibility. It needs to be trusted to make the right decisions.
Rather than being programmed to take specific actions over and over, an AI “learns”. It constantly analyses incoming data, adapting its algorithms to make smarter decisions and achieve better outcomes. When researchers at the University of Virginia trained AI on a widely used photo data set, however, they discovered that it amplified predictable gender bias. In one example the AI categorised a man standing next to a cooker as a woman.
So despite every effort to minimise and eliminate unconscious bias in people, when data reflects these biases, they can be quickly amplified. This will have far-reaching consequences in the workplace, and more broadly in society, if not addressed.
To my horror, my four-year-old daughter came home from nursery saying that doctors were boys and nurses were girls. How did that happen in 2018? Even as I have tried to raise my daughter to be who she wants to be, she is influenced by the world around her and I had to correct her perception using examples of female friends who are doctors. In the same way, the data scientist must not only train AI without bias, but also recognise where it has learned something wrong and correct it. Despite the best intentions of the system designer, some bad data can corrupt an AI, and needs to be spotted.
This is not just about the ubiquitous chat bot. AI increasingly has a role in solving some of society’s big issues. By 2023, for instance, it is predicted that AI techniques will be the primary source of significant discoveries in life sciences.
The combination of human ingenuity with advanced and intelligent technologies also has the potential to produce innovation that builds a more equal and inclusive society. One Accenture project has helped to develop an AI-powered solution that improves how visually impaired people experience the world around them. Called Drishti – “vision” in Sanskrit – it can tell the user the number of people in the room, along with their ages and emotions, and offers other environment-scanning capabilities. With nearly 75 per cent of sight-impaired people in this country unemployed, it has the potential to empower them and create new opportunities.
Recognising the impact of AI is critical. Google in the US uses AI to power Google Translate in more than 100 languages; it has 500 million users and counting. Ant Financial Insurance in China uses AI to quickly make insurance payout determinations. Both could be dealing with cultural and highly emotional matters, which require a high degree of nuance in their interpretation and decisions.
In the same way that a parent nurtures a child, businesses must teach their AIs to learn, communicate and make unbiased decisions. After they learn how to learn, they need to rationalise or explain their thoughts and actions, and eventually accept responsibility for their decisions. That applies to children and AI alike.
Raising AI as a responsible, fair and transparent citizen and contributing member of society becomes more critical as its responsibilities grow. Treating AI as simply a piece of software would be a mistake.