Research suggests kids who grow up with Alexa and Google Home think these devices are smarter than they are. But a new kind of summer camp wants kids to know that artificial intelligence (AI) is far from perfect.

Welcome to AI Ethics camp.

In a classroom on the second floor of the Massachusetts Institute of Technology’s Media Lab, two dozen middle-school-age kids in neon green T-shirts sat in clusters around tables. Standing at the front of the room was MIT researcher Blakeley H. Payne, who’s devoted her graduate studies in the Media Lab to the ethics of artificial intelligence.

“How many of you use YouTube?” Payne asked.

The answer: just about everyone in this class. Some 81% of parents with children age 11 or younger let their child watch videos on YouTube, and 34% say they allow it regularly, according to a 2018 survey by the Pew Research Center. The more you watch, the more data YouTube has to target you with content, including advertising.

“Do you ever think about the algorithm behind the ‘Who’s trending’ page?” Payne asked.

Most of the kids shook their heads no.

This was the first time local STEM organization Empow Studios hosted AI Ethics camp, which it developed in partnership with Payne. To keep the camp affordable, they set the price at $150 a week. Payne thinks all kids should not only understand how AI works, but also approach its design through an ethical lens.

And what better way to introduce that idea than with a favorite sandwich?

"At the very beginning, we talk about how algorithms can have different goals and purposes," Payne said. "They make an algorithm to make a peanut butter and jelly sandwich, and then we talk about what is the best peanut butter and jelly sandwich algorithm? And they all have different ideas. Is it the algorithm that makes the tastiest sandwich? Is it the prettiest sandwich, the quickest and easiest to make, the easiest to clean up? And we moved from there into the technical curriculum, so as they're building their machine learning models, they can think, 'Oh wait, what is the goal of my machine learning system?'"

The point is that artificial intelligence is a human creation, prone to flaws and biases. A 2018 study by Joy Buolamwini, a researcher at the Media Lab, found that commercial facial analysis software from IBM, Microsoft and Face++ misclassified the gender of dark-skinned women far more often than that of light-skinned men. The study suggested that when biased data sets are fed into machine-learning algorithms, they perpetuate inequalities wherever artificial intelligence is deployed, from job recruitment to criminal sentencing.
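
The mechanism is simple enough to demonstrate in a few lines of code. Here is a minimal sketch in Python with scikit-learn, using synthetic data invented for illustration (it is not the study's methodology): a model trained on data where one group is underrepresented can look accurate overall while failing that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, direction):
    """Binary-labeled points whose two classes separate along `direction`."""
    y = rng.integers(0, 2, n)
    centers = np.outer(2 * y - 1, direction)  # class centers at +/- direction
    return centers + rng.normal(scale=1.0, size=(n, 2)), y

# Training set: group A dominates; group B's classes separate along a
# different axis, so a model fit mostly to A learns little about B.
Xa, ya = make_group(1900, np.array([1.0, 0.0]))  # 95% of the training data
Xb, yb = make_group(100, np.array([0.0, 1.0]))   # 5% of the training data

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model does well on the majority group and barely better than a coin
# flip on the underrepresented one.
for name, direction in [("group A", [1.0, 0.0]), ("group B", [0.0, 1.0])]:
    Xt, yt = make_group(1000, np.array(direction))
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```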

Three days into the camp, 10-year-old Abhinav had already learned that artificial intelligence is not an independent entity.

“I thought AI would be, like, a robot that thinks freely on its own. That's what I thought before I came here,” he said. “Now I know it's not just something that thinks really on its own, but it's actually something that can recognize things that can help people.”

The capstone project for the week is redesigning YouTube. It may sound like a heady undertaking, but Payne says these digital natives are already sharp media consumers.

“They have such an ability from a young age to empathize with different stakeholders, to put themselves in other people's shoes,” she said. “If you ask them about YouTube, they can exactly explain to you who the recommender algorithm benefits, who it harms and why.”

Just ask Saisha, a rising seventh-grader who has already given this issue some thought. She suggests that YouTube implement more rigorous filters to protect kids from inappropriate content.

“What I would change is that there would be a heavier filter on content,” Saisha said. “They would look not just for what the title says, but the program would run through and if it has any certain negative vocabulary maybe programmed into it, then it could just filter it out.”
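
Her idea maps onto a standard technique: keyword filtering. Here is a toy sketch in Python (the word list and video fields are invented for illustration) that checks a video's transcript, not just its title, against a blocklist:

```python
import re

# Hypothetical "negative vocabulary" blocklist and video record.
BLOCKLIST = {"scary", "violent"}

def is_kid_safe(video: dict) -> bool:
    """Pass a video only if no blocklisted word appears in its title or transcript."""
    text = f"{video['title']} {video['transcript']}".lower()
    words = set(re.findall(r"[a-z]+", text))
    return words.isdisjoint(BLOCKLIST)

video = {
    "title": "Fun science experiments",
    "transcript": "... and this next part gets a little scary ...",
}
print(is_kid_safe(video))  # False: the title passes, but the transcript is flagged
```

Keyword lists alone are easy to evade and quick to over-block, which is why real moderation systems typically layer trained classifiers and human review on top. But the sketch captures her core point: judge the content, not just the title.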

Payne thinks the way to influence the next generation of programmers and consumers is to teach AI ethics more broadly, in schools across the country.

“Because what I don't want to happen is, I don't want the ethics piece to go to an elite few. And then you're just perpetuating these systems of inequality over and over again,” she said. “I love to think about a future where the students in this workshop make up the majority of the people who work in Silicon Valley or the majority of the people who work on Wall Street.”

Correction: An earlier version of this story spelled Blakeley H. Payne's first name incorrectly.