Generative AI is getting really good, really fast. You’ve probably already seen someone you know, maybe even yourself, mistake a completely AI-generated video or image for the real thing. There are still some tells, but we’re rapidly approaching a point where AI images, videos, and text will be completely indistinguishable from real life. So. How can we continue to trust what we see online?
To get perspective on this and some possible solutions, GBH’s Morning Edition host Mark Herz spoke with David Karger, an MIT professor and member of the Computer Science and Artificial Intelligence Laboratory. What follows is a lightly edited transcript.
Mark Herz: So how good is AI getting at making videos, audio, whatever it is? We always talk about AI slop, but how much of this is getting not-so-sloppy?
David Karger: It’s really getting quite good. We’ve been seeing tremendous progress over the past few years, and I expect it to continue. There are still some limitations: you’ve probably noticed that most AI-generated videos are pretty short, because AI still struggles to maintain consistency over a long span of time. But I think we’re going to get there, and I think it’s going to get easier and cheaper. You’re gonna be able to start doing this on your own devices instead of relying on big, powerful servers in the cloud. I think it’s going to be everywhere, and we’re gonna have to deal with that.
Herz: You’ve talked about how it’s unrealistic for social media websites to flag all AI-generated content or to filter it out. And these websites, notably, and at times under political pressure, have abandoned institutionalized fact-checking, sometimes dangerously. Are you saying that could be okay somehow?
Karger: I’m not saying that it could be okay. I have been arguing for some time that we can’t really leave that kind of checking in the hands of the platforms, because they’re subject to political pressure, as you just mentioned. If the platforms really become the source of fact-checking, then whoever is in power is going to try to push that fact-checking in whatever direction they want. And it’s not just governments: any organization with an axe to grind is going to try to pressure the platforms. So I think we need to involve more entities, more people, more sources in the fact-checking process. And we need to figure out how to ensure that that fact-checking can propagate into the platforms, even though the platforms are not doing the fact-checking themselves.
Herz: So how do we do that?
Karger: Well, there are a number of groups trying to develop standards and techniques for labeling content in various ways. There’s something called the Credible Web Community Group at the World Wide Web Consortium, which is trying to develop standards for annotating information with metadata that says it’s real, or metadata that says it’s AI-generated, and you can imagine tools that know how to look for this metadata and make use of it in appropriate ways. For example, I might want to configure my social media to not show me information that has been labeled as AI-generated in certain contexts. Or, and I actually think we’re going to have to head more in this direction, I might want to configure my social media to only show me things that have been verified as true or accurate or real by some authority that I trust, but not the platforms.
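To make that idea concrete, here is a minimal Python sketch of what metadata-driven filtering could look like on the client side. It does not follow any real standard; the field names, the trust list, and the posts are all hypothetical, invented for illustration.

```python
# A minimal sketch (not any real standard): each post may carry
# metadata labels, and the client filters the feed against the
# verifiers the user has chosen to trust. All field names here
# are hypothetical.

TRUSTED_VERIFIERS = {"example-newsroom.org", "example-factcheck.org"}

posts = [
    {"id": 1, "text": "Flood footage from Main St.",
     "metadata": {"ai_generated": False,
                  "verified_by": "example-newsroom.org"}},
    {"id": 2, "text": "Shocking video!!",
     "metadata": {"ai_generated": True}},
    {"id": 3, "text": "Unlabeled post", "metadata": {}},
]

def show(post, hide_ai=True, require_verification=False):
    """Decide whether to display a post, using only its labels."""
    meta = post["metadata"]
    if hide_ai and meta.get("ai_generated"):
        return False  # the user opted out of AI-labeled content
    if require_verification:
        # Show only content vouched for by a trusted verifier.
        return meta.get("verified_by") in TRUSTED_VERIFIERS
    return True

feed = [p for p in posts if show(p, require_verification=True)]
print([p["id"] for p in feed])  # -> [1]
```

The design point Karger is making is that the trust list belongs to the user, not to the platform: the platform merely carries the labels.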
Herz: But will the platforms let us do that?
Karger: Well, I think we’re gonna have to exert some pressure there. That’s a place for regulation. We’ve seen regulations like the one that allows people to download their data from social platforms, which says: sure, we want to let the platforms operate, but we want to give their users a certain amount of control over their own data and over their experience. In a similar way, we want to ensure that users are able to indicate which sources they trust. I think news organizations are going to play a very important role here going forward; they may move from being creators of content to more of a fact-checking role, publishing what’s true.

I do not think that this is something that can be done with technology. This is all about how we as a society construct knowledge. Most of what each of us thinks we know is something that we have heard from other sources that we trust, whether it be teachers or journalists or governments or whatever. Those aren’t technologies. Those are entities and actors, and they are going to continue to be the sources of insight about what is true and what is not. What technology can do is help transmit that information from the sources somebody trusts to that person, through the tools that they’re using.
Herz: So I’m a little confused because we started by talking about how good AI is getting at fooling people. So how are people going to get un-fooled and flag that for other people?
Karger: It’s all about provenance. You need to understand where this content came from. If I’m looking at a particular piece of video, for starters, I have to believe that it might be real or it might be AI-generated. What differentiates the two is somebody who says, “I took this video at this location at this time, and I am trusted by an organization that you rely on, so you can trust the video that I took because I am asserting its authenticity.” So the technology is going to support making those assertions and delivering them to the people who need them. But the technology is not going to be able to make the decision for you: is this accurate or is it not? The technology is going to be a medium for communicating that information from the people who have it to the people who want it.
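As a toy illustration of such an assertion, here is a Python sketch, again not any real standard: a photographer signs a content hash together with hypothetical location and time claims, and anyone holding the matching public key can check that the assertion is intact. It assumes the third-party cryptography package, and the claims themselves are invented.

```python
# Toy provenance assertion: sign a hash of the content plus claims
# about where and when it was captured. Verifying the signature only
# proves who made the claim; believing the claim is still a matter
# of trusting the signer. Requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the photographer
public_key = signing_key.public_key()       # published for verifiers

video_bytes = b"...raw video data..."
assertion = json.dumps({
    "sha256": hashlib.sha256(video_bytes).hexdigest(),
    "location": "Boston, MA",                # hypothetical claim
    "taken_at": "2025-01-15T09:30:00Z",      # hypothetical claim
}, sort_keys=True).encode()

signature = signing_key.sign(assertion)

def verify(assertion: bytes, signature: bytes, public_key) -> bool:
    """True if the assertion was signed by the holder of this key."""
    try:
        public_key.verify(signature, assertion)
        return True
    except InvalidSignature:
        return False

print(verify(assertion, signature, public_key))                 # True
print(verify(assertion + b" tampered", signature, public_key))  # False
```

Even here, the code only answers “did a particular key sign this?”; whether that key belongs to someone you trust is exactly the human judgment Karger describes.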
Herz: So it sounds like what you’re saying is, it’s gonna be on everybody, and not just journalism, although we may have a big part to play, to be highly skeptical. That reminds me of an old saw they taught us in journalism school: “Your mother says she loves you? Check it out.”
Karger: That’s exactly right. For a long time, people have talked about critical reading and so on and so forth, about being more skeptical of what you see. And I think we need to shift attention away from looking at whether a story is internally consistent, which is what you’re often doing with critical reading. That approach is not going to survive AI’s ability to create internal consistency. Instead: check your sources, check the provenance. Where did this come from? Who says that it’s true? That is something we’re going to have to become much more used to doing. And it’s not enough for us to just develop those habits. We need support from our technology, from our tools, because nobody has the energy or the resources to carefully investigate the sourcing of every single piece of content they encounter on social media. This is where technology can help. Technology is a way to make it easier to do things that we want to do. So if I want to carefully check the sources of every piece of information that crosses my path, technological tools can take care of that for me, under my direction.