Channel 4 News presenter Cathy Newman has revealed her image was used in a sexually explicit 'deepfake' video. "It was violating," she said, explaining that the video showed her face grafted onto another woman's body. She spoke out as the creation of sexually explicit deepfake images is to be made a criminal offence in England and Wales; existing protections in Scots law already cover deepfake pornography. Here, Dr Sajjad Bagheri argues that deepfake technology also threatens to derail democracy.

Rapid advances in artificial intelligence (AI) have prompted serious debate and concern, with the main worry typically being what the technology will mean for jobs and the workplace. While that is a valid concern, a more imminent AI-related threat is poised to cause havoc in the election year of 2024: deepfakes.

With at least 64 nations around the world heading to the polls, there's a real possibility that deepfakes will be used to manipulate democracy on a scale we simply have not seen before.

Deepfakes are images, video or audio manipulated or generated using AI, generally to present someone as doing or saying something that they did not.

Until recently, there's been little cause for concern. In their infancy, deepfakes were easy to identify, and therefore easy to ignore. That is no longer the case. In 2024, deepfakes are more advanced than ever, to the point that some slip past even the most advanced AI-detection tools. When the guardrails are no longer working, there has to be cause for concern.

The believability of these deepfakes has led to several going viral this year before being debunked in publications such as The Herald. AI-generated images of Taylor Swift became so widespread on the social media platform X that the company was forced to block users from searching for the pop star.

Sir Keir Starmer was targeted (Image: free)

Fake audio purporting to be of Sir Keir Starmer berating a member of staff was viewed around 1.5 million times on X within days, but even condemnation from opposition politicians was not enough to convince the platform to take it down. The clip is still easily found, threatening to rear its head and kick up a second viral storm in the future. Even though the audio has been debunked, it's still being shared by people who either don't want to believe the truth, or don't care to.

Part of the problem is that deepfakes are worryingly easy to make, requiring almost no technical knowledge or skill, so the barrier to entry is low. Plenty of free generation tools exist, and even paid-for software is affordable.

You may have noticed a flood of AI-generated music covers on platforms such as YouTube in recent months, with the vocals of celebrities such as Freddie Mercury and Frank Sinatra being used to cover songs released long after their deaths. These exist because they’re easy to make. This ease of use has also meant that some of the most damaging political deepfakes have been traced back to individual users as opposed to rogue nations. Anyone who wishes to cause chaos can attempt to do so at the click of a button.

We've seen in recent years how damaging misinformation can be. Conspiracies such as QAnon in the US have moved from the shadowy corners of the internet to mainstream social media sites and then into the real world, culminating in shocking events such as the storming of the Capitol. Imagine how much more dangerous this misinformation will be when it's accompanied by fake audio, images and video, making the conspiracy seem even more real to those who stumble across it. It's said that the camera never lies, but this phrase has never been less true.


A solution to the threat posed by deepfakes is urgently required, yet there is no easy fix. The technology used to create deepfakes is outpacing the solutions; it's a constant game of catch-up.

One solution that has been touted is stricter verification processes for images uploaded to social media, with more human involvement in approving potentially contentious material. However, it is important that in blocking fake media we don't end up blocking genuine media. Platforms such as X, formerly Twitter, have been invaluable in allowing breaking news to play out in front of us, and their ability to do so should not be hindered.

In recent UK elections, many seats have been extremely marginal, with a handful of votes enough to swing some results. Deepfakes could end up being the difference-makers.

It is vital that public awareness of deepfakes and general media literacy are heightened as we approach the next vote, to ensure people know that seeing is not necessarily believing. Given that a fit-for-purpose technological solution does not yet exist, deepfakes should be a key topic in public discourse.

Stepping away from politics, it's worth noting the destructive potential of this technology at an individual level. A disturbing trend has emerged of AI-generated adult images in which faces are superimposed onto other bodies. Research from Home Security Heroes has shown that a staggering 99% of those affected by this form of deepfake on one website, Cathy Newman among them, were female.

Dr Sajjad Bagheri (Image: free)

It is inevitable that deepfakes will become a dangerous weapon in the cyberbully's toolkit, used to harass and cause serious upset. They are an emerging threat that we must all take extremely seriously.

As we approach the election, it is inevitable that we will see more deepfakes. People will find new, destructive uses for this technology, and public perception of some candidates may be damaged. It's important that we talk about this emerging problem and do what we can to ensure it is not normalised or ignored. This applies to us as voters, but also to the media, and to politicians and their parties, who must be responsible about what they report and share on these platforms.

In the coming months and years we will all have to approach images, audio and video that do not come from a reputable news source with a degree of scepticism, which can be a challenge when what we're being presented with backs up our existing feelings towards a person or subject. For the foreseeable future, deepfakes are going to be part of political campaigns, and that will be difficult to control; how we collectively deal with them will be a shared endeavour.

Dr Sajjad Bagheri is a lecturer in Computer Science at the University of the West of Scotland