Big tech was on Capitol Hill this week defending itself from allegations that the platforms aided and abetted terrorism through their algorithms. Google and Twitter are shielding themselves behind Section 230 of the Communications Decency Act of 1996. This landmark law, which came to be while the internet was still in its infancy, basically lets social media platforms off the hook for what their users say online.

Section 230 critics argue it allows tech companies to propagate harmful content that could incite violence, as they say it did in the case of Nohemi Gonzalez, who was killed in an ISIS attack in Paris in 2015. Section 230 supporters say repealing it would threaten free speech as we know it, while hampering the creativity and innovation that come from diverse online communities and platforms.

It's hard to believe that the fate of a 26-word statute could upend the future of the internet. UMass Amherst Professor of Public Policy, Communication, and Information Ethan Zuckerman sat down with All Things Considered host Arun Rath to discuss what he calls the "Super Bowl of internet law." The following is a lightly edited transcript.

Arun Rath: Section 230 almost has as much history as the Super Bowl. So frame this for us, because people talk about this law or this statute going back to 1996 as being responsible for the internet as we know it. Is that an overstatement?

Ethan Zuckerman: It's not an overstatement. Let me set this up for you. As the early internet is coming into focus, there's a really interesting question about how we should think about internet service providers — big internet companies that are hosting lots and lots of users. One of those companies, CompuServe, gets sued in 1991, and someone essentially says, "Hey, someone posted a comment on CompuServe. It was defamatory. CompuServe, you're responsible for it." And in that case, Cubby v. CompuServe, the court found that no, CompuServe is not the publisher. CompuServe is more like a digital library.

Then in 1995, someone took Prodigy, which was another one of these online platforms, to court. Same basic circumstance: someone is defaming someone on the message boards. The argument this time is: because you are doing moderation on these message boards — because you're taking down some content — you are, therefore, a publisher. And in that case, the New York court does find that Prodigy is more analogous to a book publisher than to a library or a bookstore.

And so this comes in front of Congress in 1996. The idea behind it, the reason this is part of something called the Communications Decency Act, is that the web is very, very new in 1996. People are really concerned about the spread of adult content. What Congress wants to do is allow companies to take down content that violates their terms of service.

So they end up saying, "If you are taking these steps to edit your communities — if you are cleaning them up, if you are moderating them — you are not going to be liable for the users' speech." That happens in '96. Most of the Communications Decency Act gets overturned, so this is the one surviving little bit of a piece of legislation that was otherwise found unconstitutional. It survives because so much of the contemporary internet is really hard to imagine happening without this shield.

"If we argue that YouTube is liable for recommending some of these videos, where does that liability stop?"
Ethan Zuckerman, UMass Amherst professor

Rath: Tell us about how yesterday's case went. It seemed like the justices were pretty skeptical of the arguments to take away the shield of Section 230.

Zuckerman: Well, following the Super Bowl metaphor, we've really had sort of two halves. We had an argument in a case called Gonzalez v. Google, and then yesterday, we had an argument in Twitter v. Taamneh. In both cases, what's happening is the plaintiffs are arguing that a big internet platform — YouTube, in one case, and Twitter, in the other — failed to take steps to prevent ISIS from putting content online. In the Gonzalez v. Google case, the argument is that YouTube's algorithms end up recommending ISIS content, and therefore YouTube is at least partially liable for an ISIS attack that killed Nohemi Gonzalez.

In Twitter v. Taamneh, it's a slightly less direct case. It's really more making the argument that Twitter might bear liability for ISIS' actions in the same way that a bank that's engaged with terrorist financing might be liable. Somehow, Twitter is making it possible for ISIS to recruit on its platform, and by contributing to ISIS being able to reach a larger audience, Twitter might share the responsibility for an attack — in that case, an attack in Istanbul, Turkey, that killed a relative of the plaintiffs.

So in both cases, what people are trying to do is say, "These platforms have to take on some liability, not just for the content that's on the platforms, but in particular for recommending that content" — for putting it in front of users and making what seems like it might be an editorial choice. But, as has become very, very clear, almost every web platform that hosts lots of user content uses some sort of algorithm to deal with the enormous explosion of content, the huge amount of information that's out there.

If we argue that YouTube is liable for recommending some of these videos, where does that liability stop? Does a search engine become liable for giving you information that you have searched for? If the court finds that it does, the internet as we know it is going to change quite radically.

Rath: The sitting justices on the Supreme Court may not be the most tech-savvy. Are they the appropriate group of people to weigh in on a statute like this that's as complicated as you've laid out?

Zuckerman: Well, Elena Kagan actually made that point — that the nine justices were not, in fact, the most tech-savvy group — and got a good laugh from all involved. I have to say, from what I've heard in the hearings and what I've read from fellow commentators in the space, I think the petitioners came off as weaker than expected, and I think the Supreme Court came off as more nuanced than I expected.

The questions that got asked were actually pretty good. It's one of those moments where you could hear the Supreme Court asking the lawyers presenting in front of them for possible solutions. What happens when a technology platform is promoting COVID-19 misinformation? What happens, as in these cases, when tech platforms are promoting terrorist information? Are these platforms, in fact, fair in terms of ideological bias on these things? So we see this happening at the Supreme Court rather than at the legislative level, which is perhaps where it would be more appropriate to happen.

Rath: If the court, say, rules against the tech companies here and says, "You know what, you're not protected by this," does that mean that the big companies have to staff up with a lot more content moderators and face a lot more legal bills, and the smaller sites maybe go out of business?

Zuckerman: Well, Google's counsel made basically that argument. Google basically said, "Look, without Section 230, the internet is going to go in two different directions. You're going to have some platforms that are essentially a toxic swamp. They're going to be wholly unmoderated because the danger of moderation will end up being so high. If being involved with moderating and recommending means that you're a publisher, some people will run platforms that have absolutely no moderation. There will also be a set of platforms that are likely to be very heavily edited. Things will be as carefully chosen as stories in a magazine or a newspaper because there will be the possibility of lawsuits associated with it."

Brett Kavanaugh, I believe, suggested that this is going to be a lawsuit machine, and we could end up with an internet that is very, very different from what we've seen.