AI and Deepfake

By dayannastefanny

What is Deepfake?

A deepfake is a video, image, or audio clip generated to mimic the look and sound of a real person. The term combines "deep learning" and "fake": these creations can imitate the real thing so convincingly that they fool both humans and detection algorithms. Deepfakes are produced with AI, and the same underlying technology also powers augmented-reality videos and face filters.

Artificial intelligence can be applied in countless fields and one of the most controversial is video manipulation. These manipulated clips, known as deepfakes, pose a challenge for large social platforms such as Facebook, as they are constantly improving and becoming more difficult to detect. Proof of this is the new AI from Hong Kong tech giant SenseTime, which can create realistic deepfakes.

Summarizing its operation, the AI detects in each video frame elements such as the expression, geometry, and pose of the face. Then, as the authors of the paper explain, "a recurrent network is introduced to translate the source audio into speech parameters that are related to the audio content." These speech parameters are used to create a "photorealistic persona" in each frame, "with the movement of the mouth parts closely related to the original sound."
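The recurrent step described above — per-frame audio features going in, per-frame speech parameters coming out — can be sketched minimally. Everything below (the dimensions, the random weights, the plain tanh RNN) is invented for illustration; SenseTime's actual model is far more complex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 13 MFCC-like audio features per frame,
# a 32-unit hidden state, 6 mouth/speech parameters per video frame.
AUDIO_DIM, HIDDEN_DIM, PARAM_DIM = 13, 32, 6

W_in = rng.normal(0, 0.1, (HIDDEN_DIM, AUDIO_DIM))
W_rec = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0, 0.1, (PARAM_DIM, HIDDEN_DIM))

def audio_to_mouth_params(audio_frames):
    """Run a vanilla RNN over audio frames; return one speech-parameter
    vector per video frame (the values that would drive the mouth)."""
    h = np.zeros(HIDDEN_DIM)
    params = []
    for frame in audio_frames:
        h = np.tanh(W_in @ frame + W_rec @ h)  # recurrent state update
        params.append(W_out @ h)               # per-frame parameters
    return np.array(params)

# 100 frames of synthetic audio features -> 100 parameter vectors
frames = rng.normal(size=(100, AUDIO_DIM))
out = audio_to_mouth_params(frames)
print(out.shape)  # (100, 6)
```

The recurrence is what ties mouth movement to the *sequence* of sounds rather than to each frame in isolation.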

Samsung and 3D appearance

Researchers at the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have developed a method that can transform existing facial images into video footage.

The algorithm is inspired by deepfake technology, a technique designed to create fake videos that deceive people. The subject of the image can even be made to speak and perform actions they never did. As Egor Zakharov, systems development engineer, explains, the software transfers the motion reference between the "target face" and the "source face," which in practice allows the first to control how the second moves.

Unlike earlier deepfakes, Samsung's technology makes it far easier to fake a face and make it move. Previously, it was necessary to build a 3D model of the person to reproduce their appearance and produce a near-perfect, natural-looking clone. Samsung's AI, however, needs only one to eight pictures to bring a portrait to life. With this system you can animate paintings like the Mona Lisa, or photos of deceased people such as Marilyn Monroe or Albert Einstein.

“The training can be based on just a few images and performed quickly despite the need to adjust tens of millions of parameters,” Samsung researchers explained in a scientific publication.

The program learns to identify the most characteristic features of a face, such as the eyes, the shape of the mouth, and the length of the bridge of the nose, and generates expressions and movements from them. Finally, a discriminator network ensures that the result is photorealistic.
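The adversarial setup behind that last step — a generator proposing images and a discriminator judging how real they look — can be sketched with toy numbers. The 1-D "features" and single-weight networks below are invented for illustration; Samsung's actual networks are deep convolutional models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: "real" face features cluster around 1.0; the
# "generator" is a single scale-and-shift of noise; the
# "discriminator" gives a logistic score of how real a sample looks.
real = rng.normal(loc=1.0, scale=0.1, size=256)

def generator(z, w=0.5, b=0.0):
    return w * z + b

def discriminator(x, v=4.0, c=-2.0):
    return 1.0 / (1.0 + np.exp(-(v * x + c)))  # estimated P(x is real)

# Standard GAN objectives: the discriminator wants real samples scored
# high and fakes scored low; the generator wants its fakes scored high.
def d_loss(real_p, fake_p):
    return -np.mean(np.log(real_p + 1e-8) + np.log(1.0 - fake_p + 1e-8))

def g_loss(fake_p):
    return -np.mean(np.log(fake_p + 1e-8))

fake = generator(rng.normal(size=256))
print(d_loss(discriminator(real), discriminator(fake)))
print(g_loss(discriminator(fake)))
```

Training alternates between the two losses, and the generator only stops improving when the discriminator can no longer tell its output from real footage — which is what "ensures the result is photorealistic" amounts to.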

According to the South Korean company, the generated model, which functions as a very similar avatar of a person, will be used both in the special effects industry and in the field of telepresence (especially in video conferences and multiplayer video games).

This new technology is not without its dangers if it falls into the wrong hands. In the case of deepfakes, pornographic videos have been recreated with the faces of many people and uploaded by some users to Reddit. Actresses Scarlett Johansson, Gal Gadot, and Emma Watson have been among the victims of this trend. The method can also make it appear that a famous person or politician has made a statement they never made.

Much more realistic faces

We had already seen this kind of technology a few weeks ago in an impressive video where Nvidia showed off the capabilities of GANs; Nvidia developed this algorithm and remains its main promoter. Behind the 'This Person Does Not Exist' website is Phillip Wang, a software engineer at Uber, who set up the site to showcase what GANs can do. Wang built the site on code based on Nvidia's algorithm, StyleGAN, and put it on the web, where in addition to showing what the network produces, it demonstrates how simple the system is to operate: it requires no user intervention whatsoever.

Explaining StyleGAN, Wang says it is a fresh demonstration of how easy it has become to create "people" who are not real, something we have already seen with 'virtual influencers', and whose possibilities are very, very broad.

Wang says that this type of neural network can be used to make video games more realistic or to build impressive 3D applications without requiring advanced graphics processing power. But there is another side of the coin: because the technology is so easy to misuse, it can also serve dangerous ends, as we have already seen with deepfakes, videos designed to confuse, to put false statements in someone's mouth, to manipulate politics, or to slander an innocent person.

How to distinguish a deepfake?

Rivero explains that it is difficult to distinguish a fake video from a real one, but says there are ways to try. One expert stresses the importance of watching the blinking of the person on the other side of the screen: "People usually blink at a certain rate, and a deepfake can miss that phase."

In addition, a senior security researcher points out that deepfakes often mimic the face convincingly while the neck remains motionless: "There is no movement of the neck." Kaspersky Lab experts cite color, video quality, and sound quality as further telltale signs that detection systems look for.
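The blink heuristic above is simple enough to sketch. Assuming a face tracker that yields a per-frame eye-aspect-ratio (EAR) series (the input here is synthetic, and the thresholds are illustrative guesses, not calibrated values), counting blinks and flagging an unusually low rate looks like this:

```python
def count_blinks(ear_series, threshold=0.2):
    """Count downward crossings of the EAR threshold (one per blink).
    EAR drops sharply while the eye is closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

def suspicious(ear_series, fps=30, min_blinks_per_min=8):
    """Flag footage whose blink rate falls below a typical human rate.
    The 8-per-minute floor is an assumption for illustration."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# One minute of 30 fps video with only two 5-frame blinks
series = [0.3] * 1800
for start in (300, 900):
    for i in range(start, start + 5):
        series[i] = 0.1
print(count_blinks(series), suspicious(series))  # 2 True
```

Real detectors combine many such cues (neck motion, color, audio quality) rather than relying on blinking alone.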

Another abuse is sextortion, where criminals graft victims' faces onto explicit images and demand money in exchange for not sharing them. Computer-security experts also point out cases where attackers impersonate executives using this technology to get bank employees to transfer money or hand over company information.
