The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake.
Many are harmful or outright illegal. The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world.
Is deepfake just about videos?
No. Deepfake technology can create convincing but entirely fictional photos from scratch. Audio can be deepfaked too, to create “voice skins” or “voice clones” of public figures.
How are they made?
It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process.
A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the “wrong” decoder.
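The shared encoder and person-specific decoders described above can be sketched as a toy data-flow diagram in code. This is a minimal illustration only: the matrices are random stand-ins for trained networks, the vector sizes are made up, and real deepfake models are deep convolutional autoencoders trained on thousands of frames.

```python
import random

# Hypothetical tiny linear "networks": one shared encoder and two
# person-specific decoders. This shows only the data flow of the face
# swap, not a working, trained model.

random.seed(0)

DIM = 8      # pretend "image" size
LATENT = 3   # compressed (encoded) representation size

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def apply(matrix, vec):
    # Plain matrix-vector product standing in for a neural network pass.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

encoder   = rand_matrix(LATENT, DIM)  # shared: learns features common to both faces
decoder_a = rand_matrix(DIM, LATENT)  # would be trained only on person A's faces
decoder_b = rand_matrix(DIM, LATENT)  # would be trained only on person B's faces

face_a = [random.uniform(0, 1) for _ in range(DIM)]  # a frame of person A

latent  = apply(encoder, face_a)    # step 1: compress A's face
swapped = apply(decoder_b, latent)  # step 2: feed it to the "wrong" decoder,
                                    # yielding A's expression on B's face
print(len(latent), len(swapped))    # 3 8
```

The key design point the sketch captures is that the encoder is shared while the decoders are not: because both faces pass through the same compression, either decoder can reconstruct a face from either person's encoding.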
What technology do you need?
It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards or better still with computing power in the cloud. This reduces the processing time from days and weeks to hours. But it takes expertise, too.
How do you spot a deepfake?
It gets harder as the technology improves. Poor-quality deepfakes are easier to spot: the lip-syncing might be bad, the skin tone patchy, or the face may not blink naturally. There can be flickering around the edges of transposed faces.
And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe. Badly rendered jewellery and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris.
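One of these tells, unnatural blinking, lends itself to a simple heuristic check. The sketch below assumes a per-frame “eye openness” score is already available from some face-landmark tracker (not shown), and the thresholds are illustrative, not taken from any published detector.

```python
# Hypothetical blink-rate check: count blinks in a sequence of per-frame
# eye-openness scores and flag clips whose blink rate is implausibly low.

def count_blinks(eye_openness, closed_thresh=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks, closed = 0, False
    for score in eye_openness:
        if score < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif score >= closed_thresh:
            closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_min=6):
    # Humans typically blink far more often than 6 times a minute;
    # the threshold here is a deliberately loose, made-up bound.
    minutes = len(eye_openness) / fps / 60
    return count_blinks(eye_openness) < min_blinks_per_min * minutes

# 60 seconds of "footage" containing a single five-frame blink.
scores = [1.0] * 1800
scores[900:905] = [0.1] * 5
print(looks_suspicious(scores))  # True
```

Checks like this are fragile by nature: as the article notes, once a tell is known, deepfake makers can train their models to avoid it.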
What’s the solution?
AI may be the answer. Artificial intelligence already helps to spot fake videos, but many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage.
Another strategy focuses on the provenance of the media. Digital watermarks are not foolproof, but a blockchain online ledger system could hold a tamper-proof record of videos, pictures and audio so their origins and any manipulations can always be checked.
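The tamper-proof record idea can be illustrated with a minimal hash chain: each entry stores a media file's fingerprint plus the hash of the previous entry, so altering any record breaks every later link. This is a sketch of the principle only; a real blockchain system adds distribution and consensus, which are not shown, and all names here are invented.

```python
import hashlib
import json

def fingerprint(media_bytes):
    # SHA-256 digest standing in for a media fingerprint.
    return hashlib.sha256(media_bytes).hexdigest()

class Ledger:
    """Append-only, tamper-evident record of media provenance."""

    def __init__(self):
        self.entries = []

    def record(self, name, media_bytes, note=""):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"name": name, "media": fingerprint(media_bytes),
                "note": note, "prev": prev}
        # Each entry's hash covers its content AND the previous hash.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

ledger = Ledger()
ledger.record("clip.mp4", b"original footage")
ledger.record("clip.mp4", b"colour-graded footage", note="edit")
print(ledger.verify())           # True
ledger.entries[0]["note"] = "x"  # tamper with history
print(ledger.verify())           # False
```

Because every hash depends on the one before it, checking a clip's origin and edit history amounts to re-walking the chain, which is what makes the record tamper-evident.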