AI technology now exists that lets villains and hoaxers create undetectable fake footage of anyone on Earth doing and saying anything imaginable. It’s a recipe for political chaos, fraud, blackmail and revenge. Oh, and there’s every possibility it will allow Brad Pitt to live forever. Writer at Large Neil Mackay speaks to deepfake expert Michael Grothaus

THE internet lights up as footage emerges of Nicola Sturgeon caught on camera admitting: “Independence will ruin Scotland.” SNP support immediately tanks. There’s chaos in the Yes movement; jubilation at Westminster.

The only problem is: the footage – absolutely convincing as it is – has been faked. It’s a brilliant con constructed by political rivals using advanced artificial intelligence (AI) technology. Within hours, the media has investigated and reported that the Sturgeon recording is phoney, but it’s already too late – the footage has gone viral on social media and, in this era of conspiracy theory, swathes of Scots simply won’t accept it was all a hoax.

Welcome to the world of deepfakes, a technology that will be more disruptive than social media.

Clearly, you could take the above hypothetical scenario and flip it: footage emerges of Boris Johnson admitting “Scotland would be better off independent”. Cue chaos for unionists; joy in the Yes camp.

Or, perhaps, deepfakes emerge during the upcoming French presidential election showing Emmanuel Macron saying he wants to “open the borders to migrants”, playing into the hands of the far right. It’s most likely, though, that we’ll first see deepfakes really change the course of politics in the next US presidential battle in 2024, when a well-placed deepfake on election eve could tilt the scales … maybe for Donald Trump.

Deepfakes are already upon us. The technology that lets you or me make utterly convincing hoax footage using AI already exists – it’s cheap, readily available and allows anyone to easily concoct short videos with the power to unleash political chaos. It is only a matter of time until this latest technological threat makes its mark with some shocking stunt. MI6 warned just this week that AI poses a major risk to national security.

If we think Russia caused mayhem in the West simply by using social media, wait until you see what Putin could do with state-of-the-art deepfakes. Then there’s the terrible harm deepfakes could cause as “revenge porn”. Just imagine what might happen to you if an ex or someone who simply hated you deepfaked sexualised footage of you and released it to the world.

To find out just what’s happening with deepfakes, The Herald on Sunday spoke to Michael Grothaus.

The investigative writer has spent years researching this dangerous new technology and has just brought out his new book, Trust No One: Inside the World of Deepfakes. He paints a stark, at times bewildering, picture of a future where nobody can believe anything they see or hear.

What is a deepfake?

“A DEEPFAKE,” says Grothaus, “is any media, usually video, but it could just be audio, or a combination of both, that’s manipulated by AI to show something taking place that’s never actually happened.”

This isn’t just a matter of cutting and pasting the head of a politician or celebrity into an outrageous setting – that kind of fake even a child can spot. AI deepfakes done well, using the best technology in the hands of a skilled operator, are all but impossible for humans to detect.

They’re designed to fool humans entirely. Only another AI can work out that the footage is phoney.

The key to understanding this new technology is in the word “deep”, which refers to “deep learning”, a form of machine learning. Essentially, Grothaus explains, to make deepfakes you use two AIs – one acts as “the forger”, the other as “the inspector”. The forger starts building a deepfake of, say, Joe Biden, making it look as if the US president is sitting in the Oval Office saying he plans to ban guns. The forger AI makes thousands of versions of the fake footage. Every time the inspector AI judges the footage fake, the forger starts again, until the footage is good enough to fool the inspector. Once footage can deceive such an advanced AI, it is all but guaranteed to deceive humans.
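The forger-and-inspector loop Grothaus describes is what AI researchers call a generative adversarial network (GAN). A heavily simplified, illustrative sketch of that loop in Python follows – the “real data”, the threshold and the nudging rule are all invented for this example and bear no relation to any actual deepfake software:

```python
import random

# Toy sketch of the adversarial loop behind deepfakes.
# The "real" data here is just numbers near 10.0; the forger
# learns, by trial and rejection, to produce numbers that the
# inspector can no longer tell apart from the real thing.

REAL_MEAN = 10.0  # stand-in for "what genuine footage looks like"

def inspector(sample: float) -> bool:
    """Return True if the sample looks 'real' (close to the real data)."""
    return abs(sample - REAL_MEAN) < 0.5

def forger(guess: float) -> float:
    """Produce a fake sample based on the forger's current best guess."""
    return guess + random.uniform(-0.1, 0.1)

def train(rounds: int = 10000) -> float:
    guess = 0.0  # the forger starts nowhere near the real data
    for _ in range(rounds):
        fake = forger(guess)
        if inspector(fake):
            return fake  # this fake fooled the inspector
        # Rejected: nudge the guess towards the real data and try again
        guess += 0.01 if guess < REAL_MEAN else -0.01
    return guess
```

In real deepfake systems both the forger and the inspector are deep neural networks, and the “nudge” is gradient-based learning rather than a fixed step – but the structure of the contest is the same.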

The most basic “faking” software is found in phone apps like Reface where users play with images and add their own likeness into something like a clip from a Marvel movie. But that’s easy to detect and would never cut it as a professional deepfake. Essentially, mobile phone software is just copying and pasting faces into video.

However, high-end open source deepfake software, which is thoroughly convincing, is available online – often for free. This software reconstructs faces using AI. After a little experimentation, says Grothaus, “you could probably make your first deepfake in a few weeks, which is about how long it’ll take to master the software”. From there, a user could create deepfakes in “a few days or less”.

Deepfake software runs “on the average laptop or desktop computer” – but the more powerful the computer, the faster it is to make deepfakes and the better the quality. The more audio-visual material deepfakers have of their target, the better. “If you’d an hour of their voice, you could make an incredibly lifelike deepfake saying anything you want,” Grothaus explains. But there’s also “software that can clone someone’s voice with only a minute of recording needed”.

In terms of video: “If you’d about one minute of footage of Biden, that’s enough to ‘train’ your deepfake software.” The software breaks video down into individual frames. Each second has 30 frames – so 60 seconds gives 1,800 separate images to manipulate. “That’s usually more than enough,” says Grothaus.
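The frame arithmetic above is simple to sketch. The 30 frames-per-second figure comes from the article; real footage may also run at 24 or 25 fps, so the rate is a parameter here:

```python
def training_frames(seconds: float, fps: int = 30) -> int:
    """How many individual images a clip yields for training deepfake software."""
    return int(seconds * fps)

# One minute of 30fps footage of a target gives 1,800 separate frames to manipulate.
print(training_frames(60))
```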

Deepfakers can then take the manipulated footage and splice it, unnoticeably, into any location or setting, with any possible action taking place on camera. So, in theory, you could produce thoroughly convincing footage of any politician walking into a massage parlour or snorting cocaine.

Revenge porn

WHAT scares Grothaus most is the conjunction of deepfakes and social media – not just in terms of politics but also pornography. The unleashing of sexualised deepfakes on Twitter or Facebook could end in disaster for ordinary people.

“If someone makes a deepfake of, say, Bill Clinton, he has the platform to get out there in front of the whole world and say, ‘hey, this is a deepfake’. He could be on every news channel. The girl who works at the hospital or the guy at the bank, ordinary people, we don’t have that platform. We can refute it on social media, but our refutations aren’t going to get around to everybody who saw the deepfake in the first place.”

Political chaos

DEEPFAKE technology wasn’t advanced or widespread enough to affect recent elections such as the last American presidential campaign. However, the use of “shallowfakes” during that election does offer a glimpse of the future. Perhaps the best-known example featured the Democratic Party’s Nancy Pelosi: authentic video of her was slowed down to make it appear as if she was drunk and slurring her words. It was crudely executed, without AI technology, and easy to spot, especially as the original unmanipulated footage existed to prove it was phoney.

However, with deepfakes you don’t need real or existing footage. A politician could be deepfaked into a porn film, into video of them punching a child. Nothing in the film would be real or have ever happened but it would still appear entirely authentic and convincing. “It’s terrifying,” says Grothaus. “You can put words in people’s mouths, put them in video doing anything you want.”

Grothaus fears “we’re going to see more nation states moving into the deepfake arena to spread misinformation … and weaponise it against their enemies”.

The death of trust

“WHEN deepfakes become more widespread, it’s going to lead to an erosion of trust in society,” Grothaus says. “Video is kind of the last frontier. We’ve had fake articles and tweets, but if you saw a video it was almost always true. Now, with deepfakes, you can create completely fabricated video. That video might not necessarily convince everybody who sees it, but everybody who does see it is now going to have to question ‘is that real?’.”

Reality could begin to disintegrate. The rise of deepfakes wouldn’t just mean we all have to become fact-checkers, making sure we’re not watching phoney footage; it could – counter-intuitively – make us begin to doubt everything we see, including authentic footage. Deepfakes may also become the ultimate get-out-of-jail-free card. Grothaus poses the scenario where a politician is caught on camera saying something racist, but then claims the footage – which is in fact real – was deepfaked.

Clearly, most members of the public don’t have time to fact-check everything they see. That’s the job of journalists. However, many people today no longer get their news from traditional, trusted sources, instead relying on social media, which will be ground zero for deepfakers. Viewers may see an initial story spawned by a deepfake, but not read follow-ups proving it was all false. It’s a recipe for the complete collapse of trust globally.

Deepfaking live

SO, does this mean that in the future we will only trust what we see with our own eyes? Even then deepfakes can con us. Technology exists, says Grothaus, “to overlay another person’s face in real time”. What this means is that you and I could be on a Zoom call, but I’m not even there. A hoaxer has taken my image and voice and manipulated them so it looks like you’re speaking to me – when in fact you’re speaking to them. It’s a never-ending hall of mirrors.

“Even if you’re seeing media with your own eyes, even if it’s in real time, it could be a deepfake,” Grothaus says. He has even experimented with this technology and had himself deepfaked to sound convincingly like a woman as he spoke live on a podcast.

The death of celebrity

DISNEY, Grothaus explains, “is already heavily researching deepfake technology. I think by the end of this decade, deepfakes will be as transformative in Hollywood as sound was in the 1920s, colour in the 50s and CGI in the 90s. I think Brad Pitt is going to live forever.”

What Grothaus means is that studios could use deepfake technology to shoot movies with actors long after they’re dead, or put them in films where they look much younger. So a 50-something Pitt could appear in a romcom looking like he did aged 20.

“You’re going to have studios keeping actors alive and young forever. Leonardo DiCaprio is one of the biggest stars in the world – if he’s in a movie it puts butts on seats – but DiCaprio has a built-in flaw. He can only make one movie at a time.” With deepfakes, DiCaprio could shoot one film himself on location, while deepfaked Leos are used in multiple films in studios around the world.

“It’s going to happen,” Grothaus says. Deepfakes would massively reduce the cost of moviemaking, and the technology fits the risk-averse business model now underpinning Hollywood: returning repeatedly to the same material and stars, as we’ve seen with superhero franchises.

It seems inevitable we’ll see stars like Marilyn Monroe and James Dean return. Why not make a new Star Wars movie – the origin story of Princess Leia starring a deepfake of the dead Carrie Fisher? “It’s really creepy,” says Grothaus. “It could end celebrity.” How will new actors break through if the current crop of A-listers and dead stars from the past are constantly getting the best roles?

Deep crime

DEEPFAKES have already been used for successful criminal cons. In one British case, a leading energy company executive was hoaxed out of £200,000: deepfakers mimicked their boss’s voice perfectly, tricking them into transferring the money.

Deepfakes can be used to breach facial recognition software. Future identity theft becomes nightmarish. “I could call my mum on Skype,” says Grothaus, “and say I’m travelling and in trouble and need £10,000. My mum would be like, ‘well this is Skype, it sounds like you, it looks like you, yeah, I’ll wire the money’. But it’s just a deepfaker.” That’s “doable” right now, he says.

Rise of the dead

GROTHAUS’s voice breaks a little as he recounts the folly of bringing his father back to life. His dad died in 1999 following a car crash. The pair were exceptionally close. Grothaus hired a deepfaker to make a video of his dad “walking around as if he were alive again”.

He pauses and adds: “I shouldn’t have done that. It felt like it cheapened my experience of my father, as the last image I have of him now is a deepfake. After I watched it several times I deleted it.” Resurrecting the dead was painful for Grothaus but he realises it may be “cathartic” for others.

Deepfakes could be used for dementia sufferers, letting them reconnect to dead loved ones. Would that be wrong? Grothaus ponders. Maybe not if it brought some happiness. But clearly the future “positive” uses of deepfakes are fraught with ethical dilemmas. Perhaps the best therapeutic application of deepfakes would be for patients with degenerative diseases – like Stephen Hawking – who lose the ability to talk. They could have their voices returned, pitch-perfect.

The end of history

“WITH deepfakes, you can rewrite history,” says Grothaus. He poses a scenario where a future populist leader runs for the US presidency. They’re anti-Semitic and want to normalise their hatred of Jews. They could, he suggests, create deepfakes of previous presidents like Kennedy talking disparagingly about Jewish people and say “hey look, I’m just like other past presidents”. Grothaus adds: “It might not be taken seriously by the mainstream media but a lot of people don’t even listen to mainstream media anymore.”

It’s an extreme example but it does back up his point that with deepfakes even history can be distorted. A deepfake already exists of President Nixon delivering a eulogy for Neil Armstrong as if the 1969 moon landing ended in disaster. Countries like North Korea could use deepfakes “to pump out video evidence showing anything they want. It gets really 1984. It scares me”, Grothaus adds. “If Stalin had deepfakes imagine the type of propaganda and disinformation he’d have been pumping out.”

A present-day dictator could simply make a video of dissidents saying something “treasonous” and use it as an excuse to arrest or execute them. Nations guilty of human rights abuse could claim video evidence of atrocities was deepfaked. Rogue states could release deepfaked videos of rebel leaders on WhatsApp asking anti-government protesters to meet at a certain time and place only to round them up or open fire.

One glimmer of positivity is that deepfakes would protect whistleblowers and dissidents, allowing them to disguise their identities online when speaking out against dangerous regimes or corporations.

The future

MOST of the wrongdoing made possible by deepfakes is already covered by existing laws on defamation, harassment, blackmail and fraud. The problem is catching the perpetrators, which is almost impossible when deepfakes are released through the anonymity of social media.

Technologically, we’re also in an arms race. The AI that currently allows us to identify deepfakes is the same technology used to train deepfakes to become more convincing. “It’s a constant cat-and-mouse game,” says Grothaus. Deepfakers are now so sophisticated that they even put natural reflections into the eyeballs of their targets.

“By the end of the decade,” says Grothaus, “probably 90 per cent of the content we watch will have some deepfaking in it … As every year goes on deepfakes will become a bigger part of the disinformation pie – I think by the end of this decade too deepfakes may be used by nation states to inflict damage on their enemies. They’ll regularly be used to dismiss claims that nation states have done something wrong – to control the narrative. So I’m not optimistic.”