The rise of artificial intelligence has been widely discussed and debated in recent months, with ChatGPT and even AI-generated music hitting the headlines.

While such technology is relatively novel – and not a little dystopian – most of us already interact with AI on a daily basis. Autocomplete on word processors or search engines is a primitive form of artificial intelligence, as are voice-activated programs like Siri or Alexa.

In addition, many smartphones now use facial recognition technology as a security feature – and it’s not just phones. Facebook uses it to suggest which friends to tag in photos, while police forces can use it to compare CCTV images to mugshots or driving licence photos.

This has widespread implications for privacy and data protection, but also for another big societal issue – diversity.


A 2010 study by the National Institute of Standards and Technology found that the accuracy of facial recognition technology is not universal, with the poorest accuracy consistently found in subjects who were female, Black, and aged 18-30. A further study in 2018 found error rates for dark-skinned women to be up to 34 percentage points higher than for light-skinned men.

The Met Police, which a report commissioned by the force itself found to be institutionally racist, sexist and homophobic, runs a Live Facial Recognition system which compares images fed directly from selected cameras against a watchlist database to identify people being sought by police. The Met has already been found to use stop and search powers and acts of force against black people disproportionately – technology which struggles to correctly identify people of colour is unlikely to help the situation.

[Image: A CCTV camera operated by Milnbank Housing Association. Picture: Colin Mearns]

The obvious rejoinder to this is: “a machine can’t be racist, it’s just a machine”. This ignores the fact that an artificial intelligence has to be created, programmed and taught by real, living humans who may have biases – conscious or unconscious – or blind spots. Standard training databases are predominantly made up of white men, and camera settings often default to values which do not capture darker skin well – which helps explain why face recognition struggles to accurately identify black women.
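The point is easy to demonstrate. The short Python sketch below uses invented numbers – not data from any real system – to show how a respectable-looking overall accuracy figure can hide a much worse error rate for one demographic group, which is why researchers insist on breaking results down by group:

```python
# Minimal sketch: why reporting only overall accuracy hides group-level bias.
# All numbers here are invented for illustration, not from any real system.
from collections import defaultdict

# Each record is (demographic_group, prediction_was_correct) from an
# imagined face-matching evaluation with a skewed demographic mix.
results = (
    [("lighter-skinned men", True)] * 980
    + [("lighter-skinned men", False)] * 20
    + [("darker-skinned women", True)] * 66
    + [("darker-skinned women", False)] * 34
)

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

overall_error = sum(errors.values()) / len(results)
print(f"Overall error rate: {overall_error:.1%}")           # looks acceptable
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.1%}")   # reveals the gap
```

In this toy example the headline error rate is about five per cent, yet one group’s error rate (34 per cent) is seventeen times the other’s (two per cent) – the same pattern of disparity the 2018 study reported.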

In 2016 Microsoft released a Twitter bot called Tay, described as an experiment in “conversational understanding”. Twitter being Twitter, the bot was ‘taught’ by alt-right troll accounts and within 24 hours was posting “Hitler was right I hate the Jews”.

Tay may be an extreme example, but it serves as a warning about the way artificial intelligences are trained – garbage in, garbage out, as they say in the programming world.
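The failure mode itself is simple. Tay’s actual architecture was far more sophisticated, but a deliberately naive sketch makes the principle visible: a toy chatbot that learns word-to-word transitions from whatever users send it will echo whatever it is fed, friendly or hostile. Everything below is illustrative:

```python
# Toy illustration of "garbage in, garbage out": a chatbot that learns
# only from user input has no values of its own and simply reproduces
# what it is taught. (A naive sketch, not how Tay actually worked.)
import random
from collections import defaultdict

chain = defaultdict(list)  # word -> list of words observed to follow it

def learn(message: str) -> None:
    """Update the chain from one incoming message."""
    words = message.lower().split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)

def reply(seed: str, max_words: int = 8) -> str:
    """Generate a reply by random-walking the learned chain."""
    word = seed.lower()
    out = [word]
    for _ in range(max_words):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

learn("humans are wonderful and kind")
learn("humans are awful")    # one hostile user is all it takes
print(reply("humans"))       # may parrot either sentiment verbatim
```

Feed it friendly messages and it sounds friendly; feed it abuse and it parrots abuse. The same logic, at vastly greater scale, is why the composition of training data matters so much.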

Ketty Lawrence is digital economy project manager at Skills Development Scotland, which aims to close the technology skills gap throughout the country.


She believes data and AI have a huge role to play in encouraging diversity going forward – but only if diverse groups of people are involved in the development of these tools from the outset.

Ms Lawrence, who will be a speaker at The Herald & GenAnalytics Diversity Conference for Scotland, said: “If we can develop AI that is free from bias, then it is self-perpetuating. If bias is present in the data and decision-making process it will be inherent in the final product. But if we can get it right, by incorporating many world views with the right all-encompassing data, AI can make a huge and positive difference in our lives.

“To get it right from the start means getting diverse and inclusive teams working on these technologies from the get-go. This will help ensure the data used, and the services subsequently developed, will be more reflective of society. That in turn will mean the tech will be more meaningful, impactful and therefore more successful commercially. It just makes good business sense when you think about it!”


That positive outcome, however, depends on ensuring that diversity is built in.

In 2014, Apple launched its Health app, promising it would “monitor all of your metrics that you’re most interested in” – except it had no function for tracking menstrual cycles. Having been developed mostly by men, to whom the idea clearly never occurred, Apple Health ended up being exclusionary – or at least less functional – for women.

Ms Lawrence said: “This is not a new challenge – crash test dummies is always the one that sticks in my head.

“If we have teams working on products where the teams themselves are not representative of society – for example, they’re all white males – then their unintentional bias will automatically come into the processes and the decision-making that they use, and we’ll end up with products and services that aren’t representative of society.


“Crash test dummies were indicative of the typical male anatomy, so the data collected from crash tests was only indicative of that one body type – anyone who doesn’t fit that profile will be disadvantaged.

“What we’ve seen in the past is that female injuries and fatalities are higher than for males, because of the data being used to build the safety features in those cars.

“In the case of facial recognition being modelled on white men, imagine that sort of facial recognition in a driverless car which has only been modelled to recognise lighter skin tones. Is that car then at risk of running over someone with a darker skin tone? You’ve almost got racist AI being used.”

The task, therefore, is to make sure that those building the technologies of the future come from as wide a demographic as possible, bringing as much data as they can to the table.

Ms Lawrence said: “It’s also not only about protected characteristics such as gender, race or disability. Research has shown that people who work in technology are more likely to come from affluent backgrounds and private schools. So how do we make sure young people in rural areas can access these skills and opportunities? How do we make sure young people from areas of deprivation have equal access? The involvement of these people will make AI far more inclusive and impactful.

“They are more likely to ascertain if the data set is biased, figure out what’s missing, and fix any issues before the tech even gets developed, never mind goes to market!

"We’ll only have good data and good decision-making if all types of people feel able to take part and are able to share their views and life experiences with the associated data that validates those experiences.

“Diversity, inclusion and equality are not just nice to haves. They are absolutely imperative to ensure we create the best products we can that benefit our full population.”

The Herald & GenAnalytics Diversity Conference for Scotland takes place on Wednesday, May 24 at the Radisson Blu Hotel, Glasgow, from 9:30am. Tickets and full agenda details are available here.