Artificial intelligence has proven to be an outstanding resource in many fields, but, like almost any technology, it also has its downsides. As it spreads through society, falsely portrayed photos and videos of individuals, known as “deepfakes,” have been scattered across the internet, and AI is making their creation, spread and believability easier.
This fabricated content can be used to present false information as fact. It appears across a variety of arenas, from politics to celebrities and athletes, and even against the average person.
Using a small amount of data about a person, a convincingly realistic piece of content — photo, video, audio — can be generated using AI, according to Dr. Thiago Serra, assistant professor of Analytics and Operations Management at Bucknell University.
“Because of the amount of data we have nowadays, we can create generative models,” Serra said. “It doesn’t take much to create an image of you doing things you never did.”
Deepfakes are an application of machine learning that uses large amounts of training data to build statistical models of how something works, said Dr. Shomir Wilson, assistant professor in the College of Information Sciences and Technology at Penn State University.
Wilson said this technology enables users to create new content based on existing content, which can be beneficial for things like entertainment and movie production. “People can take part of one video and insert them into another video very seamlessly,” he said.
However, in terms of deepfakes, this seemingly very accessible technology can become problematic. “The technology with digital video has gotten to the point where it is easy for a person with limited technological knowledge to do it,” Wilson said.
Serra said deepfakes will likely become even more convincing with time, practice and AI-generated voicing coming into play.
“I saw something scary about someone trying to replicate a voice to make a phone call,” he said.
Perhaps the best-known recent examples of this sort of content are the AI-rendered images of former President Donald Trump being arrested earlier this year. Trump was not actually arrested at the time, but the fabricated images flooded social media platforms.
In terms of audience, Wilson said deepfakes are most commonly used in an effort to steer politics and agendas.
“We have people putting politicians in situations where they were completely not involved or manipulating video to make them seem like they were acting differently,” Wilson said.
There are countless examples of deepfakes portraying politicians, celebrities and even average citizens. The issue for everyday social media users has become: How do we pick them out?
According to Serra, identifying the fakes is a tough task and may become increasingly difficult. “It will be tricky going forward,” he said. “They’re getting more and more credible.”
Wilson said it is important for social media users to implement good information hygiene habits, which include considerations of both the source of the content and the framing of it.
Source: Effingham Daily