Facebook is developing a new method to reverse engineer deepfakes and track their origin. The work could support future deepfake research.
Deepfakes are not currently a major problem on Facebook, but the company funds research on the technology to guard against future threats.
Its most recent work is a collaboration with scientists at Michigan State University (MSU), where the team worked together to reverse-engineer deepfakes: analyzing AI-generated images to uncover identifying traits of the machine learning model that created them. This is useful because it could help Facebook track down bad actors spreading deepfakes across its social networks.
Such content can include misinformation, as well as pornography produced without the subject's consent, a depressingly common abuse of deepfake technology. The method is not yet ready for deployment. Earlier work could identify which well-known AI model produced a deepfake, but this project, led by MSU's Vishal Asnani, goes further by identifying the architectural traits of lesser-known models.
Facebook can now reverse-engineer #deepfakes and track their source. https://t.co/jkhqWcOf4u #socialmedia
— Fedica (@FedicaHQ) June 16, 2021
These traits, known as hyperparameters, have to be tuned in each machine learning model, like parts of an engine. Together, they leave a distinctive fingerprint on the finished image, which can then be used to identify its source.
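To make the fingerprint idea concrete, here is a minimal, hypothetical sketch (not the MSU team's actual method): it assumes each generative model leaves a subtle high-frequency artifact, estimates a per-model "fingerprint" by averaging image residuals, and attributes a new image to the known model whose fingerprint correlates best with it. All function names and the filtering choice are illustrative assumptions.

```python
import numpy as np

def high_pass_residual(img):
    """Crude high-pass filter: the image minus a 3x3 box blur.

    Generative models tend to leave subtle high-frequency artifacts,
    so the residual is where a "fingerprint" would plausibly live.
    (Illustrative choice; real systems use learned filters.)
    """
    padded = np.pad(img, 1, mode="edge")
    blur = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - blur

def model_fingerprint(images):
    """Average the residuals of many images from one model.

    Averaging suppresses per-image content and noise, leaving the
    model's shared artifact pattern."""
    return np.mean([high_pass_residual(im) for im in images], axis=0)

def attribute(img, fingerprints):
    """Return the name of the known model whose fingerprint has the
    highest cosine similarity with the image's residual."""
    r = high_pass_residual(img).ravel()
    scores = {
        name: float(np.dot(r, fp.ravel())
                    / (np.linalg.norm(r) * np.linalg.norm(fp) + 1e-12))
        for name, fp in fingerprints.items()
    }
    return max(scores, key=scores.get)
```

In this toy setup, a library of fingerprints built from images of known generators acts like the camera-pattern database in photo forensics: a new image is matched against each stored fingerprint, and a low score for every known model would suggest an unfamiliar generator.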
New Forensic Methods
Hassner compared this work to forensic techniques that identify which camera model was used to take a photo by spotting patterns in the resulting images. "However, not everyone can build their own camera," he said. "Whereas anyone with a reasonable amount of experience and a standard computer can build their own model to generate deepfakes."
The resulting algorithm can not only fingerprint the traits of a generative model, but also determine which known model built an image and whether the image is a deepfake in the first place. "We get state-of-the-art results on standard benchmarks," Hassner said. But it's important to know that even these cutting-edge results are far from reliable.
An Unsolved Problem
When Facebook held a deepfake detection competition last year, the winning algorithm could only recognize AI-manipulated videos 65.18% of the time. Scientists say that using algorithms to detect deepfakes "is still an unsolved problem." One reason is that the field of generative AI is extremely active: new methods are released every day, and it is almost impossible for any detector to keep up.
Experts are aware of this dynamic, and when asked whether releasing this new fingerprinting algorithm might lead to research on evading these techniques, Hassner agreed: it's still a game of cat and mouse.
Image courtesy of The Times of India/YouTube