By Meredith Broussard

MAYBE you’ve seen the viral video of the potty-mouthed Scottish pensioner shouting at Alexa? The internet has plenty of examples of situations where Alexa, Siri, and other voice assistants have failed to understand Scottish accents.

Why? The problem is both human and computational.

Amazon claims that its Alexa software is tuned to English in the UK, US, Canada, Australia, and India. Depending on where you use your device, geolocation or other signals can be used to automatically suggest a linguistic variant. So, for a user signing in from the UK, Alexa calculates that “Alexa, order boots” is statistically more likely to be a command to order from Boots the chemist, not a command to order Frye boots from an online retailer.

Most of the time this works, because common requests (“Alexa, what’s the weather?” “Alexa, play music.” “Alexa, tell me a joke.”) are really quite common. The principle behind this is known as the unreasonable effectiveness of data, as I outline in my new book, Artificial Unintelligence. When you have data that shows how millions of people ask about the weather – as Amazon, Google, Facebook, and Twitter do – you don’t need contextual knowledge at all. You can use maths to look at patterns of letters and words (the technical term is n-grams) and calculate the answer that is most likely correct.
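To make that concrete, here is a rough sketch, with invented data, of how pure counting can pick an interpretation. This is not Amazon’s system – it counts whole commands rather than n-grams, and the “query logs” are made up – but it shows why frequency alone, conditioned on something like locale, goes a long way.

```python
# A minimal sketch (not Amazon's system) of how frequency counts alone can
# pick the most likely interpretation of a command. All data is invented.
from collections import Counter

# Pretend query logs: (locale, spoken command, interpretation users chose)
logs = [
    ("UK", "order boots", "Boots the chemist"),
    ("UK", "order boots", "Boots the chemist"),
    ("UK", "order boots", "Frye boots"),
    ("US", "order boots", "Frye boots"),
    ("US", "order boots", "Frye boots"),
]

def most_likely_interpretation(locale, command):
    # Count how often each interpretation followed this command in this locale.
    counts = Counter(
        interp for loc, cmd, interp in logs
        if loc == locale and cmd == command
    )
    if not counts:
        return None  # no data for this command in this locale
    interpretation, _ = counts.most_common(1)[0]
    return interpretation

print(most_likely_interpretation("UK", "order boots"))  # Boots the chemist
print(most_likely_interpretation("US", "order boots"))  # Frye boots
```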

Other important calculations include speech-to-text, where the machine “listens” to the voice command and translates it into text; and response-to-command, where the machine maps that text onto a simple action.
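In outline, those two stages look something like the sketch below. Both functions are hypothetical stand-ins, not Alexa’s actual software; the point is only that the transcription step feeds the action step, so an error in the first ruins the second.

```python
# A hedged sketch of the two stages described above; both functions are
# hypothetical stand-ins, not Alexa's API.
def speech_to_text(audio: bytes) -> str:
    # In a real assistant this is a large machine learning model;
    # here it is stubbed out to return a fixed transcription.
    return "what's the weather"

def response_to_command(text: str) -> str:
    # Map the transcribed text onto a simple, predefined action.
    actions = {
        "what's the weather": "fetch_weather_forecast",
        "play music": "start_music_playback",
    }
    return actions.get(text, "ask_user_to_repeat")

print(response_to_command(speech_to_text(b"...")))  # fetch_weather_forecast
```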

Speech-to-text is where Alexa chokes for many Scottish accents and regionalisms. Speech-to-text depends on machine learning, which sounds like it means there is a little brain inside the computer, but really it’s about breathtakingly complicated maths and statistics. Machine learning depends on lots and lots of data that is painstakingly labelled by humans. “We rely almost entirely on hand-curated, human-labelled data sets. If a person hasn’t spent the time to label something specific in an image, even the most advanced computer vision systems won’t be able to identify it,” Mike Schroepfer, Facebook’s chief technology officer, said at the recent F8 developer conference.
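Here is a toy illustration of that dependence on labels. It is not a real speech recogniser – the “audio features” are invented numbers and the model is a simple nearest-neighbour lookup – but it captures the limitation Schroepfer describes: the system can only ever answer with labels humans have already attached to training examples.

```python
# A toy illustration (not a real speech recogniser) of why labelled data
# matters: the model can only output labels humans attached to the training
# examples. The feature vectors are invented stand-ins for processed audio.
import math

# Human-labelled training data: (features extracted from audio, word)
labelled_audio = [
    ((0.9, 0.1), "weather"),
    ((0.8, 0.2), "weather"),
    ((0.1, 0.9), "music"),
    ((0.2, 0.8), "music"),
]

def transcribe(features):
    # Nearest-neighbour lookup: pick the labelled example closest to the input.
    _, word = min(labelled_audio, key=lambda ex: math.dist(ex[0], features))
    return word

# A pronunciation that falls outside the labelled examples still gets forced
# onto one of the existing labels -- the system cannot say "unfamiliar accent",
# it can only choose from what humans have labelled.
print(transcribe((0.85, 0.15)))  # close to the labelled examples: "weather"
print(transcribe((0.5, 0.5)))    # far from everything: a forced best guess
```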

The machine learning models that power Alexa must be trained on Scots speech data in order to work for Scots speakers. Unfortunately, most programmers use “standard” English datasets that do not include Scots. Programmers do pay some attention to speech recognition for native versus non-native speakers, but few focus on linguistic variants like Scots or Welsh English. A search for “Scottish accent” on arXiv.org, a popular site for computer science papers, reveals only a few works that address the issue. More typical is a paper called “Joint Modeling of Accents and Acoustics for Multi-Accent Speech Recognition,” which trains its machine learning model on only two speech corpora: the American English Wall Street Journal corpus and the British English Cambridge corpus.

It doesn’t have to be this way. There is a Scottish Corpus of Texts & Speech available online, funded by the Arts and Humanities Research Council. But the researchers who published this paper, like many, chose to believe that Cambridge and Wall Street English are the only kinds that matter. Anyone who doesn’t speak in these specific dialects has to do additional work to train their own speech recognition device.
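The extra work is less about new maths than about a data decision. The sketch below is purely illustrative – the paths and corpus names are hypothetical, and real training pipelines are far more involved – but it shows how including Scots speech comes down to a deliberate choice about which data goes in.

```python
# An illustrative sketch: which corpora a speech model is trained on is a
# choice the programmers make. Paths and names here are hypothetical.
training_corpora = [
    "corpora/wall_street_journal_us_english",   # American English
    "corpora/cambridge_british_english",        # Southern British English
]

# Including Scots speech is a decision, not a technical impossibility.
training_corpora.append("corpora/scottish_corpus_of_texts_and_speech")

for path in training_corpora:
    print("would train on:", path)
```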

Programmers like to pretend that tech is “neutral” or “objective,” but in reality all programmers make deliberate choices about whose voices to include and whose to ignore. This bias toward what a small group sees as “standard” English is not inclusive. In order for technology to truly work well for all people, programmers need to make choices that allow a wide variety of voices to be heard.

* Meredith Broussard is the author of a new book, Artificial Unintelligence: How Computers Misunderstand the World. She teaches data journalism at New York University.