
As artificial intelligence tools become more capable and easier to access, digitally manipulated “deepfake” images are getting harder to detect.
New research led by Binghamton University breaks images down using frequency-domain analysis techniques, looking for anomalies that can indicate they were generated by AI.
In a paper published in Disruptive Technologies in Information Sciences, Nihal Poredi, Deeraj Nagothu MS ’16, PhD ’23, and Professor Yu Chen from the Department of Electrical and Computer Engineering at Binghamton’s Thomas J. Watson College of Engineering and Applied Science compared real and fake images beyond telltale signs of image manipulation such as stretched-out fingers or gibberish background text. Also collaborating on the paper were master’s student Monica Sudarsan and Professor Enoch Solomon of Virginia State University.
The team generated thousands of images using popular generative AI tools such as Adobe Firefly, Pixlr, DALL-E and Google Deep Dream, then analyzed them with signal-processing techniques to understand their frequency-domain features. The differences between the frequency-domain characteristics of AI-generated and natural images form the basis for distinguishing them with a machine-learning model.
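The paper does not publish its feature pipeline, but the idea of comparing frequency-domain characteristics can be sketched as below. This is a minimal illustration only: the radially averaged power spectrum is one common frequency-domain feature and an assumption here, not the authors' method.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Radially averaged log-power spectrum of a grayscale image.

    AI-generated images often show anomalies in how energy is
    distributed across these frequency bins, which a machine-learning
    classifier can learn to separate from camera-captured images.
    """
    f = np.fft.fftshift(np.fft.fft2(img))          # centered 2D spectrum
    power = np.log1p(np.abs(f) ** 2)               # log power, DC at center
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)           # distance from DC
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return np.array([power[bins == b].mean() for b in range(n_bins)])

# Example: a smooth horizontal gradient concentrates energy at low
# frequencies, so the first (low-frequency) bin dominates the last.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
profile = radial_power_spectrum(img)
print(profile[0] > profile[-1])
```

A classifier would be trained on such profiles extracted from both natural and AI-generated image sets.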
By comparing images with a tool called GANIA, the researchers can detect anomalies (known as artifacts) left behind by the way AI generates images. The most common technique used in building AI images is upsampling, which clones pixels to enlarge the image but leaves fingerprints in the frequency domain.
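The kind of frequency-domain fingerprint that pixel replication leaves can be shown in one dimension. For a signal upsampled 2x by repeating each sample, the magnitude spectrum obeys a deterministic relation, |Y[N-k]| / |Y[k]| = tan(pi*k / (2N)), which a natural recording does not satisfy exactly; this toy demonstration is our illustration of the general idea, not the GANIA detector itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)   # stand-in for one row of a natural image
y = np.repeat(x, 2)          # 2x upsample by pixel replication

# DFT of the upsampled signal: Y[k] = X[k mod N] * (1 + exp(-i*pi*k/N)),
# so the high band is a mirrored, cosine-weighted replica of the low band.
Y = np.abs(np.fft.fft(y))
k = 16
predicted = np.tan(np.pi * k / (2 * N))   # fingerprint predicted by theory
observed = Y[N - k] / Y[k]                # measured ratio of spectral bins
print(np.isclose(observed, predicted))    # True: replication detected
```

A detector can test many bins `k` at once; consistent agreement with the tangent curve is strong evidence of replication-based upsampling.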
“When you take a picture with a real camera, you get information from the whole world, not just the person or flower or animal or thing you want to photograph, but all kinds of environmental information are embedded there as well,” Chen said. “With generative AI, the image is focused on what you ask it to generate, no matter how detailed your prompt is. There is no way to describe, for example, what the air quality is, how the wind is blowing, or all the small things that make up the background.”
“While many new AI models are emerging, the fundamental architecture of these models remains largely the same. This allows us to exploit the predictable nature of their content generation and leverage its unique and reliable fingerprints for detection,” Nagothu added.
The research paper also explores ways that GANIA could be used to identify which AI model generated an image, which would help curb misinformation spread through deepfake photos.
“We want to be able to identify the ‘fingerprints’ of different AI image generators,” Poredi said. “That would allow us to build platforms for authenticating visual content and preventing adverse events associated with misinformation campaigns.”
Along with deepfake images, the team has developed a technique to detect fake AI-generated audio and video. The tool, called “DeFakePro,” exploits an environmental fingerprint known as the electrical network frequency (ENF) signal, created by slight fluctuations in the power grid. Like a faint background hum, this signal is naturally embedded in media files at the moment they are recorded.
By analyzing this signal, which is unique to the time and place of a recording, the DeFakePro tool can verify whether a recording is authentic or has been tampered with. This technique is highly effective against deepfakes, and the work also explores how it could secure large-scale smart surveillance networks against AI-based forgery. The approach could prove effective in fighting misinformation and digital fraud in our increasingly connected world.
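The core of ENF analysis is tracking the mains hum around its nominal frequency over time. The sketch below is an illustration of that idea under stated assumptions (synthetic audio, a 60 Hz grid, a deliberately strong hum, and simple per-frame FFT peak picking), not the DeFakePro implementation:

```python
import numpy as np

fs = 1000                                   # sample rate (Hz), assumed
t = np.arange(0, 8, 1 / fs)
true_enf = 60 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # slowly drifting hum
phase = 2 * np.pi * np.cumsum(true_enf) / fs
rng = np.random.default_rng(1)
# Hum made strong relative to noise for the demo; real ENF is far weaker.
audio = 0.05 * np.sin(phase) + 0.1 * rng.standard_normal(t.size)

def extract_enf(signal, fs, frame_len=2000):
    """Dominant frequency in the 55-65 Hz band per non-overlapping frame."""
    freqs = np.fft.rfftfreq(frame_len, 1 / fs)
    band = (freqs >= 55) & (freqs <= 65)
    window = np.hanning(frame_len)
    estimates = []
    for start in range(0, signal.size - frame_len + 1, frame_len):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame_len] * window))
        estimates.append(freqs[band][np.argmax(spectrum[band])])
    return np.array(estimates)

enf = extract_enf(audio, fs)
print(np.all(np.abs(enf - 60) < 0.5))   # estimates stay near nominal 60 Hz
```

Matching the extracted trace against historical grid-frequency records, or checking it for discontinuities, is what lets such a system tie a recording to a time and place or flag tampering.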
“Misinformation is one of the biggest challenges facing the global community today,” Poredi said. “The widespread use of generative AI in many fields has led to its misuse. Combined with our reliance on social media, this has created a flashpoint for a misinformation disaster. This is particularly evident in countries where restrictions on social media and speech are minimal. Therefore, it is imperative to safeguard the integrity of data shared online, especially audiovisual data.”
Although generative AI models have been misused, they also contribute significantly to advances in imaging technology. The researchers want to help the public distinguish fake content from real, but keeping up with the latest innovations can be a challenge.
“AI is moving so fast that once you develop a deepfake detector, the next generation of that AI tool takes those anomalies into account and fixes them,” Chen said. “Our work is trying to do something outside the box.”