Franziska Giffey was sure that it really was Kyiv's mayor, Vitali Klitschko, on the screen in her video conference. In fact, she was dealing with a so-called deep fake. How does this kind of forgery work with the help of artificial intelligence?

Perhaps Berlin’s governing mayor, Franziska Giffey (SPD), should have been suspicious when her interlocutor Vitali Klitschko turned up for the agreed video conference in a thick jacket and sweater in the middle of summer.

As it was, it took some time before the strange questions asked by the supposed mayor of Kyiv made Giffey and her team suspicious. “There was no evidence that the video conference was not being conducted with a real person. To all appearances, it is a deep fake,” the Senate Chancellery said on Twitter.

What is a so-called deep fake?

This refers to media content that has been manipulated using artificial intelligence (AI) techniques, for example a supposedly authentic video or an audio recording. The term “deep fake” is derived from the words “deep learning” and “fake”. Deep learning is an AI method in which a system learns through intensive observation: lip movements, facial expressions and posture are analyzed, as is the way a person moves and speaks.
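To make the “deep learning” part more concrete: early face-swap tools were built around an autoencoder with one shared encoder and one separate decoder per person. The following Python sketch (using PyTorch) shows that architecture in miniature; the image size, layer widths and training step are illustrative assumptions, not the setup of any specific tool.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind early
# face-swap deepfakes. Image resolution (64x64 RGB), layer sizes and training
# details are illustrative assumptions, not a production recipe.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a face crop into a compact code for expression and pose."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop of one specific person from the shared code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

# Training (sketched): reconstruct each person's faces through the shared encoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode a face of person A, but decode it with person B's decoder,
# so the output shows A's expression and pose with B's appearance.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns only the person-independent parts of a face (expression, head pose), while each decoder learns one person's appearance; that division of labor is what makes the swap possible.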

How did the fake Klitschko get into the video conference?

For the fake Klitschko’s appearances before Giffey and other European mayors, the perpetrators most likely used footage from a real interview Klitschko gave to Ukrainian journalist Dmytro Hordon. The lip movements from that video were then combined in real time with the statements of the person who actually spoke to Giffey.
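As a very rough illustration of the principle of driving one face with another video's lip movements, the sketch below extracts the lip region from each frame of a driving video with MediaPipe's face mesh and blends it into a target face image with OpenCV. Real reenactment systems use learned models and far more sophisticated blending; the file names driving.mp4 and target.jpg are placeholders, and the mediapipe and opencv-python packages are assumed to be installed.

```python
# Crude illustration: copy the lip region from a "driving" video into a target
# face image, frame by frame. This only demonstrates the landmark-extraction
# step mentioned in the article, not a real reenactment pipeline.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# Indices of all landmarks belonging to the lips in the 468-point face mesh.
LIP_IDX = {i for pair in mp_face_mesh.FACEMESH_LIPS for i in pair}

def lip_box(frame, face_mesh):
    """Return the pixel bounding box of the lips, or None if no face is found."""
    res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    h, w = frame.shape[:2]
    pts = res.multi_face_landmarks[0].landmark
    xs = [int(pts[i].x * w) for i in LIP_IDX]
    ys = [int(pts[i].y * h) for i in LIP_IDX]
    return min(xs), min(ys), max(xs), max(ys)

driving = cv2.VideoCapture("driving.mp4")  # placeholder: video supplying the lip movements
target = cv2.imread("target.jpg")          # placeholder: face that should appear to speak

with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
    tgt_box = lip_box(target, fm)
    ok, drv_frame = driving.read()
    while ok and tgt_box:
        drv_box = lip_box(drv_frame, fm)
        if drv_box:
            x0, y0, x1, y1 = drv_box
            tx0, ty0, tx1, ty1 = tgt_box
            mouth = cv2.resize(drv_frame[y0:y1, x0:x1], (tx1 - tx0, ty1 - ty0))
            mask = 255 * np.ones(mouth.shape[:2], dtype=np.uint8)
            center = ((tx0 + tx1) // 2, (ty0 + ty1) // 2)
            # Poisson blending hides the seam between the pasted mouth and the face.
            out = cv2.seamlessClone(mouth, target.copy(), mask, center, cv2.NORMAL_CLONE)
            cv2.imshow("crude reenactment", out)
            if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
                break
        ok, drv_frame = driving.read()

driving.release()
cv2.destroyAllWindows()
```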

How long have deep fake systems been around?

An experiment at the University of Washington is considered a milestone in the development of systems suitable for this kind of video manipulation. In 2017, researchers there presented algorithms capable of converting an audio clip into a realistic, lip-synced video of the person speaking. The scientists used this to digitally put sensitive statements on topics such as terrorism or mass unemployment into the mouth of former US President Barack Obama.

Didn’t the researchers create a monster technology with this?

The scientists actually wanted to develop a system to improve the image quality of video conferences. Since streaming audio over the Internet requires far less bandwidth than video, the researchers wanted to use the sound to generate a much higher-quality video. Even then, however, they were already discussing the danger that this technology could be misused.

Do you need a supercomputer for deep fakes?

Not necessarily. There are a number of apps in the app stores that are actually intended to optimize selfies or retouch portraits, but which also make it possible to swap faces in videos. Other programs turn photos into animated clips. These apps quickly reach their limits when it comes to image quality, however; sophisticated deep fake attacks on politicians still require powerful computers.

Are criminals using deep fake technology?

Yes. On the one hand, the technology can be misused for malicious fake videos that expose the perpetrators to prosecution; this includes fake sex videos in which the victim’s face is inserted into pornography. On the other hand, criminals use deep fakes to initiate fraudulent money transfers. In so-called CEO fraud, for example, a company accountant receives a manipulated voice message from the boss ordering a transfer to a specific bank account. The sender is as fake as the audio recording.

However, law enforcement agencies also use video manipulation techniques. This spring, Dutch police digitally brought a teenager back to life in a video almost 20 years after his violent death, and subsequently received dozens of tips.

Who is behind the Klitschko fake?

The course of the conversation suggests that pro-Russian forces are behind it. At present, however, the attack cannot be attributed beyond doubt, not least because perpetrators often deliberately leave traces that point in the wrong direction. It is also conceivable, for example, that political pranksters wanted to discredit Giffey and her counterparts in Vienna, Budapest and Madrid.

How can deep fakes be recognized?

That is going to get harder and harder. In the future, artificial intelligence algorithms and the hardware they run on will be able to produce fake video material that looks completely authentic, so you will no longer be able to trust your eyes and ears alone. It is all the more important to question sensational video clips with logic and common sense. Faced with a completely surprising development or a far-reaching statement, users should always ask themselves: how likely is it that it would really be released in this way? Sometimes artificial intelligence can also help to detect AI fakes: on the deepware.ai website, for example, you can upload videos or links to videos and get an assessment of whether they are deep fakes.
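Automated checks of this kind look for statistical traces that manipulation leaves behind. As a toy example (and explicitly not the method used by deepware.ai or any commercial detector), the sketch below flags video frames in which the detected face is markedly blurrier than the rest of the image, a blending artifact common in crude face swaps; the file name and the 0.5 threshold are assumptions.

```python
# Toy deep fake heuristic: many crude face swaps blur or over-smooth the blended
# face region, so comparing the sharpness (variance of the Laplacian) inside the
# detected face with the rest of the frame can flag suspicious clips.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def sharpness(gray_region):
    """Variance of the Laplacian: a standard blur/sharpness measure."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

cap = cv2.VideoCapture("suspect_clip.mp4")  # placeholder file name
suspicious = total = 0
ok, frame = cap.read()
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        total += 1
        face_sharp = sharpness(gray[y:y + h, x:x + w])
        frame_sharp = sharpness(gray)
        # A face much blurrier than its surroundings is a (weak) warning sign.
        if frame_sharp > 0 and face_sharp / frame_sharp < 0.5:
            suspicious += 1
    ok, frame = cap.read()

cap.release()
if total:
    print(f"{suspicious}/{total} face detections looked suspiciously smooth")
```

Such single-signal heuristics produce many false positives and are easily defeated by better fakes, which is why serious detectors combine many learned cues, and why common sense about the plausibility of a clip remains the first line of defense.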