Facing a wall of brightly lit screens, Steven Schorr and his grandson Simeon Pernick bent over a touch-screen, intently studying a video clip. Their task: to figure out if what they saw was real, or an artificial intelligence-generated “deepfake.”

Schorr and Pernick were playing "True or False," a game featured in the MIT Museum's AI: Mind the Gap exhibit. It shows a series of short video clips. Visitors choose whether they think each clip is real or fake, and the screen reveals whether they guessed correctly and why.

Steven Schorr (left) and his grandson Simeon Pernick see whether or not they can detect AI-generated videos at the MIT Museum's AI: Mind the Gap exhibit.
Renuka Balakrishnan / GBH News

Pew Research Center reports from November and December 2025 found that 84% of U.S. adults and 92% of U.S. teens use YouTube, with video platform TikTok coming in as the second-most used social media platform among teens.


As videos become more and more popular, deepfakes are proliferating on social media feeds. The MIT exhibit provides tips visitors can use outside the walls of the museum to improve media literacy in real life.

Both Meta AI and OpenAI's platform, Sora, let anyone create an AI-generated video from a text prompt. Sora 1 featured a feed made up entirely of AI-generated videos based on user prompts, which then flooded other social platforms with no explicit labeling or AI disclaimers.

For many, the rise of what’s colloquially known as “AI slop” is calling into question the credibility of video, long seen as a way for trained fact-checkers and everyday consumers to verify the truth.

“It’s more annoying than anything,” said Pernick, a student at DigiPen Institute of Technology in Washington who is studying computer science and game design. “It used to be a lot easier to receive information that you can trust. With deepfakes and AI, it just makes it harder.”

Lindsay Bartholomew is the MIT Museum’s exhibit content and experience developer. Her goal with AI: Mind the Gap is to address some of the anxieties visitors may feel about the encroachment of AI into mainstream media. But rather than villainizing the technology, she wants to give people tools to navigate it.


Her key piece of advice: If you’re ever in doubt, watch it again.

At the museum, this works by tapping the screen’s lime green “replay” button to watch the clip a second time. But Bartholomew encourages people to take this practice out into the real world, too.

“Technology can advance all at once, but it can’t take our own agency away, of being able to check and double check – having our gut instinct, having a feeling,” she said.

The easiest test is to pay attention to faces, she said. The exhibit points out that deepfakes of people are often created using a medley of altered facial features, which can make it possible to spot when something is off.

Bartholomew added that in examining facial features, being human is actually the biggest advantage we have.

“We instinctually know facial expressions and eye contact,” she said. “When that doesn’t feel quite right, we know it somehow.”

Pernick, the DigiPen student, said at first, he focused on the background of each video, assuming that would give him clues as to what had been altered. But as he continued to watch, he got better at isolating facial features.

"True or False" encourages participants to watch videos again to spot AI-generated content.
Renuka Balakrishnan / GBH News

Each clip shown in "True or False" is fairly short, imitating real-life video content. Because generative AI technology is still limited for now, creators may cap videos at about a minute, or splice AI-generated segments into real footage to create longer videos.

The game encourages participants to look for “seams” – places in longer videos where the artificial portion meets the real. It’s not foolproof, but Bartholomew says our intuitions are sharper than we think.

“We can look for those moments where something looked OK, and all of a sudden, something shifted,” she said. “Even if you can’t put your finger on it, you can still notice that something shifted.”

However, it may be just a matter of time before generative AI manages to outpace these strategies for detection.

OpenAI has since updated its Sora technology, releasing Sora 2 with improved generative capabilities that the company called "a big leap forward in controllability, able to follow intricate instructions spanning multiple shots while accurately persisting world state."

For Schorr, a 78-year-old Navy veteran, the rapid expansion of AI excludes older generations from technology that already feels inaccessible. He’s always been on the cynical side, but says that now he especially avoids the internet.

“For me, being as old as I am, I already don’t believe what I see. So AI is just confirming my skepticism,” he said.

After learning the strategies outlined in the exhibit, however, he said he felt more confident about detecting AI — although he’s still not sure it will increase the little time he spends online.

Bartholomew encourages people not to get too anxious about how fast the technology is advancing.

“When people get scared of the potential for AI manipulating media — which is completely a fair fear — there is that ability for us to look again, look closely,” she said.