Facebook isn't just a place for individuals to document their lives or keep up with others. It's actively shaping our lives, politics, and society in ways some consider manipulative. Of particular concern is how Facebook uses artificial intelligence - or A.I. for short - and how the technology may be helping spread misinformation online. Reporter Karen Hao has a new article about this at MIT Technology Review, "How Facebook Got Addicted to Spreading Misinformation." Hao discussed her reporting with GBH All Things Considered host Arun Rath. This transcript has been edited for clarity.

Arun Rath: I think when people think about A.I. on Facebook, they're thinking about targeted ads. Tell us about Facebook's use of A.I., because it's a lot more than that.

Karen Hao: Facebook has thousands of A.I. algorithms running at any one time, and some of them are precisely what you say. But that same technology that figures out what you're interested in is also then recommending to you groups you might like, pages you might like, and filtering the content that you see in your news feed. And the goal for all of these algorithms is ultimately to get users to engage as much as possible - to like, to share, to join these groups or to click into these ads.
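To make that concrete, here is a minimal, hypothetical sketch of what engagement-based feed ranking can look like. It is not Facebook's actual code; the features, weights, and numbers are assumptions. It simply shows how sorting purely by predicted likes, shares, and comments pushes the most provocative content to the top.

# A minimal, hypothetical sketch of engagement-based feed ranking.
# The features, weights, and numbers are illustrative assumptions,
# not Facebook's actual models or code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_like: float     # predicted probability the user likes the post
    p_share: float    # predicted probability the user shares it
    p_comment: float  # predicted probability the user comments on it

def engagement_score(post):
    # Weighted sum of predicted interactions; the weights are made up.
    return 1.0 * post.p_like + 3.0 * post.p_share + 2.0 * post.p_comment

def rank_feed(candidates):
    # Show first whatever the user is most likely to engage with.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("local_news", p_like=0.10, p_share=0.01, p_comment=0.02),
    Post("outrage_post", p_like=0.25, p_share=0.12, p_comment=0.20),
])
print([p.post_id for p in feed])  # the more provocative post ranks first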

Rath: This can, in some contexts, contribute to or instigate violence and even genocide, right?

Hao: Yes. So one thing I discovered through my reporting is that, in 2016, a Facebook researcher named Monica Lee started studying whether the company's algorithms were inadvertently contributing to extremism or polarization. She found that their recommendation algorithms were linking users up with extremist groups, and that over 60 percent of the users who joined those extremist groups did so because Facebook recommended them. Mark Zuckerberg has publicly admitted that the closer certain content comes to violating the company's standards, the more users want to engage with it. And because all of these algorithms are trying to maximize your engagement, they inevitably start to amplify misinformation and hate speech. In very sensitive political environments, this can really exacerbate political and social tensions. This is exactly what happened in Myanmar, where the Buddhist majority saw misinformation about the country's Muslim minority on Facebook, and it ultimately escalated into genocide.

Rath: This word is kind of a golden word at Facebook - "engagement." Why is that so crucial to this, and what does that term really mean for Facebook?

Hao: I don't quite get into that in my piece, but many other journalists and writers who have written about it point to Mark Zuckerberg's obsession with growth. When he started the company, his goal was to get every single person in the world on Facebook. Continuing to grow really hinges on the ability to get users to engage and keep them hooked. Facebook has kind of supercharged that with all of these algorithms figuring out exactly what you like, what's going to hook you in, and what will keep you there.

Rath: With the ability to measure engagement with this degree of precision, could Facebook adjust it, turn it off, or tone it down?

Hao: That's really my critique of the company now that I've done this reporting. It's not that Facebook doesn't do anything to solve its misinformation problem. It actually has a really big team, called the integrity team, focused on catching misinformation. But that only addresses the symptom. The root problem is that maximizing engagement rewards inflammatory content, and that content is more likely to be polarizing, more likely to be hateful, more likely to be fake. So they're rewarding this content, and then scrambling to catch it after the fact.
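As a rough illustration of that "treat the symptom" dynamic, here is a hypothetical sketch - the names and numbers are assumptions, not Facebook's integrity pipeline - in which content is first distributed according to predicted engagement and only removed later, once a separate pass flags it.

# A hypothetical sketch of "rank first, moderate later."
# Field names, scores, and the flagged set are illustrative assumptions.

def rank_by_engagement(posts):
    # Engagement-maximizing ordering: provocative content tends to score
    # high, so it is distributed widely by default.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def integrity_pass(posts, flagged_ids):
    # The cleanup step runs separately and removes items only after
    # classifiers or fact-checkers have flagged them.
    return [p for p in posts if p["id"] not in flagged_ids]

posts = [
    {"id": "viral_hoax", "predicted_engagement": 0.9},
    {"id": "straight_news", "predicted_engagement": 0.3},
]

distributed = rank_by_engagement(posts)                # the hoax goes out first...
cleaned = integrity_pass(distributed, {"viral_hoax"})  # ...and is pulled only once flagged
print([p["id"] for p in cleaned])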

Rath: You had this remarkable interview with Facebook's head of A.I., Joaquin Quiñonero Candela, where you point out, not too long after the January 6th insurrection, that we kind of knew there were extremist groups that were going to rally at the Capitol. What did he think about that?

Hao: Joaquin Quiñonero Candela is the main character in the story. The reason I wanted to tell the story through his eyes is that he's the one who first got Facebook hooked on using A.I. He then switched to leading Facebook's 'responsible' A.I. team. So I asked him: what was Facebook's role in the Capitol riots? What was really hard about reporting this story is that a lot of the responses Joaquin gave me were not necessarily his own.

Rath: As you're doing this interview, there's a company handler alongside?

Hao: Exactly. So when I asked him what role Facebook had in the Capitol riots, he said he didn't know. When I asked if he thought he should start working on these problems, he said, 'Well, I think that's the work of other teams, but maybe it's something we'll think about in the future...' Then he said this isn't an A.I. problem, it's just a human nature problem - that people like saying fake things and violent things and hateful things. So I asked him whether he truly believed that the issues with Facebook haven't been made worse by A.I. And he said, 'I don't know.' That was the end of the interview. To this day, I can't really say whether it was the company line or him talking that day.