This article is from a special issue of Sciences et Avenir n° 199 dated October-November 2019.
While AI propels fake news into a new dimension with deepfakes, it may also offer new keys to the fight against propaganda in all its forms. The European Fandango project, for example, backed by a 3.5-million-euro budget, is banking specifically on machine learning to devise new tools that could emerge within a few years. For now, however, it is on the image side that AI shows its strength.
Methods suited to social networks
“It is possible to train a neural network to detect traces of modification, such as differences in compression,” says CNRS researcher Vincent Claveau of the Institute for Research in Computer Science and Random Systems (IRISA). The problem is that this method is poorly suited to the montages circulating on social networks, which are often low resolution after undergoing compression and decompression repeatedly on their way across the net. So Vincent Claveau devised another approach: exposing the modified image by comparing it with the original.
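The compression traces mentioned here can be illustrated with a toy example: quantizing values twice with different steps, as happens when a JPEG is edited and re-saved, leaves characteristic gaps and spikes in the value histogram. That is exactly the kind of statistical fingerprint a neural network can learn to spot. The steps 7 and 5 below are arbitrary stand-ins for real JPEG quantization tables, not the researcher's actual method.

```python
# Toy illustration: double quantization leaves telltale gaps in a
# value histogram, a trace of repeated lossy compression.
from collections import Counter

def quantize(values, step):
    """Snap each value to the nearest multiple of `step`."""
    return [round(v / step) * step for v in values]

original = list(range(256))          # pristine pixel values
once = quantize(original, 7)         # first compression
twice = quantize(once, 5)            # second compression after editing

hist = Counter(twice)
# Some multiples of 5 (e.g. 10 or 25) never occur, while others are
# over-populated -- a periodic pattern a classifier can learn.
```

The same effect in real JPEGs shows up in the histograms of DCT coefficients rather than raw pixels, but the principle is identical.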
Step 1: get hold of the untouched original. “We use deep learning to describe the content of a suspicious image, for example to identify the presence of a cat or a car. The program then searches for images containing similar objects.” Thus, even if only minor modifications (change of colorimetry, cropping, rotation…) have been made to the original document, the machine manages to find it. It then remains to compare the original and the retouched photos to identify the differences. “Finally, other neural networks try to characterize the modifications that have been made, since not all retouching is intended to misinform,” notes Vincent Claveau. Circling an arrow, a title, or an area of interest on an image is one thing; removing a person from the photo or changing a face is quite another…
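The retrieval step described above relies on deep features, but the core idea of finding an original despite recoloring or recropping can be sketched with a much simpler perceptual fingerprint, the classic “average hash”. This is an illustrative stand-in, not the IRISA system.

```python
# Minimal "average hash" sketch for near-duplicate image retrieval.
# Real systems use learned deep features; this shows the principle.
def average_hash(pixels):
    """pixels: an 8x8 grayscale grid (values 0-255).
    Returns a 64-bit fingerprint: one bit per pixel,
    set when the pixel is at least the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

A small Hamming distance suggests the same underlying picture despite a global brightness change or mild recompression, so a database of fingerprints can surface candidate originals to diff against the suspect image.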
Videos that make anyone say anything
Even for video, AI is the recourse. Before the arrival of deepfakes, Vincent Nozick of the Gaspard Monge Computer Science Laboratory was already using deep learning to distinguish computer-generated images from photographs. He was thus one of the first to develop a piece of software, called MesoNet, able to recognize deepfakes. “After the learning phase, the machine reached a recognition rate of 98%,” says the researcher. “But if we stop training it on the latest deepfakes, this rate eventually drops.” All over the world, faced with the concern raised by these videos that can make anyone say anything, teams are developing such programs. That is the case at DARPA, the research arm of the US Department of Defense, which has already put 68 million dollars on the table. “For now, deepfakes don’t ‘defend’ themselves: their authors are not yet trying to make these videos pass for real. But for how long?” asks Vincent Nozick. “With deep learning, it is easy to teach an autoencoder not to be recognized.” In other words, the fight has not really started yet.
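The drop in recognition rate described above can be sketched with a toy model: a detector calibrated on first-generation fakes loses accuracy once the fake distribution shifts toward cleaner forgeries. The artifact scores, thresholds, and distributions below are invented for illustration; they are not MesoNet's actual features.

```python
# Toy illustration of detector drift: a threshold classifier trained
# on yesterday's deepfakes degrades when the fakes get cleaner.
import random
random.seed(0)

def train_threshold(real, fake):
    """Midpoint between class means -- a stand-in for a learned model."""
    return (sum(real) / len(real) + sum(fake) / len(fake)) / 2

def accuracy(thr, real, fake):
    """Real videos score below the threshold, fakes above."""
    hits = sum(r < thr for r in real) + sum(f >= thr for f in fake)
    return hits / (len(real) + len(fake))

real  = [random.gauss(0.2, 0.05) for _ in range(500)]  # real videos
fake1 = [random.gauss(0.8, 0.05) for _ in range(500)]  # early deepfakes
fake2 = [random.gauss(0.4, 0.05) for _ in range(500)]  # newer, cleaner fakes

thr = train_threshold(real, fake1)
# Accuracy on old fakes is near perfect; on the newer ones it collapses,
# which is why detectors must keep training on the latest deepfakes.
```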
by Yan Chavans