Scrolling through my phone, I come across videos of Ice Spice and Barack Obama on social media teaching students mathematical concepts. These videos look real, but they’re not.
Fake clips, such as those in which celebrities explain educational content or sing humorous songs, have become increasingly widespread and popular due to rapidly evolving AI capabilities.
While some deepfakes are plainly intended only as comedy — often featuring celebrities like Drake, Elon Musk and Taylor Swift — many carry real-world risks.
Former President Donald Trump recently used the social platform X to post a false endorsement from Taylor Swift’s fan base — dubbed Swifties — using multiple AI-generated pictures of women wearing “Swifties for Trump” shirts. Despite Swift’s past opposition to Trump and his candidacy, he posted the deepfakes anyway, angering her and her supporters.
For those who don’t follow pop culture or politics and haven’t noted Swift’s open support for the Democratic Party and its presidential nominee, Kamala Harris, such fake news can have turbulent effects, serving as a weapon to sway and influence voters.
Another increasingly alarming problem is how deepfakes are being used in warfare. Early in the ongoing war between Russia and Ukraine, a deepfake of Ukrainian President Volodymyr Zelenskyy portrayed him urging his troops to stop fighting and drop their weapons. The video clouded soldiers’ judgment, embedding uncertainty and distrust in viewers’ minds every time Zelenskyy appeared in the media and leaving them to wonder: Is this really him?
Fake news and false accusations on social media posed problems even before AI became prevalent. Now, with the refinement of fake video generation and an abundance of face-swap apps, almost anything can be published, leaving viewers vulnerable to false personas and fake testimonies presented as truth.
With a simple Google search, anyone can figure out how to create and promote a deepfake video. In the past year, attorneys general from 54 states and territories urged Congress to address the harms of deepfakes, specifically asking for an investigation of AI’s exploitation of children, as data indicates teens are increasingly using AI-generated deepfakes as the latest way to bully and harass schoolmates.
At Westfield High School in New Jersey, explicit deepfakes of female students were circulated around the school by their male classmates; a similar incident occurred at Beverly Vista Middle School in Beverly Hills just four months later. Dorota Mani, the mother of one of the New Jersey victims, said Westfield administrators told her the accused boys were suspended for only one to two days. Disgusted by that decision, she and her daughter began speaking out about the harms of AI misuse and filed harassment complaints against the school over its weak response to the situation.
While most deepfakes depict real people, some content creators also use the technology to generate fake characters and false personas in an effort to gain views and profit.
These videos and images can be uncannily close to reality, making it ever harder for viewers to tell what is real and what is fake. It is only becoming more likely that each of us will someday be fooled by a deepfake.
In the age of AI, when fiction can quickly be passed off as reality, we need to be more skeptical of content than ever. A quick Google search can often clear up the confusion: if a deepfake has been widely circulated, more often than not there will already be news articles or videos debunking it.
Still, the trend toward deepfakes is deeply distressing. Fake videos and images are likely to grow ever more sophisticated and harder to detect, leading us into a world where the truth can no longer be judged by eye alone. We no longer live in a world where humans can keep up with the rise of technology.