Cheating in academia is probably as old as academia itself. Like cheats in other fields, from gambling to banking, academic cheats use the latest technology to get away with it. The internet made it much easier to find content to plagiarize, to buy papers on a variety of topics, and to hire people to write papers or take tests for you. We are now bearing witness to the birth of the next great cheating tool: artificial intelligence.

By now you've probably heard about ChatGPT, a language-model chatbot that can generate unbelievably articulate text about pretty much anything, with a few notable exceptions. ChatGPT can be used to pump out 10-page papers. It can be used to write code. On Wednesday, Massachusetts Rep. Jake Auchincloss even delivered a speech on the House floor written by ChatGPT. So, is academia ready?

To discuss, GBH’s All Things Considered host Arun Rath spoke with Nick Montfort, a professor of digital media at MIT. After some instructors sounded alarm bells about the AI tool and how it might disrupt academia, he co-wrote a memo advising his peers to learn about the systems for themselves and adapt their courses to the technology. What follows is a lightly edited transcript.

Arun Rath: Talk to us about this memo. First, in general, you have advice for academia on how to approach this, because it's not entirely about cheating.

Nick Montfort: It's really a question of how writing instructors, and instructors in what we call “communication-intensive courses” here at MIT, are to deal with and approach technologies like GPT-3 and ChatGPT: what it means for assignments, what it means for evaluation. We've been hearing a lot of consternation, a lot of anxiety around this. So Ed Schiappa and I decided to write something up from our own perspective; we weren't assigned to do this, and it's not an official policy of any sort. My colleague is a professor of rhetoric, and together we tried to formulate a few thoughts and ideas.

Rath: What are the situations where it would be good to use ChatGPT? You lay out that there are times when it would be a useful part of the classroom.

Montfort: Sure. I mean, I've actually assigned students to use a language model — a large language model that is similar to the one that's the basis for ChatGPT.

Rath: And can you explain what a large language model is?

Montfort: Yes, I'll do my best. A language model is a probability distribution over sequences of words: it tells you how likely particular sequences of words are. It's not just used for text generation; it can also be used for speech recognition and for translation. For instance, let's say your speech recognition system was looking at a waveform and trying to figure out whether a person had said, "I'm trying to find a pharmacy." But that waveform was also very similar to "I'm trying to find a farm of me." What the language model would do is say, "Well, people actually say, 'I'm trying to find a pharmacy' much more often. That's a much more likely sequence of words. The other sequence, maybe we can make sense of it, but it's not very probable."
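A minimal sketch of that ranking step, assuming the Hugging Face transformers library and the small, openly available GPT-2 model (a relative of, though not the same as, the model behind ChatGPT):

```python
# A toy illustration: ask a language model how probable each candidate
# transcription is. Assumes the Hugging Face "transformers" library and
# the small, openly available GPT-2 model (not ChatGPT's actual model).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Total log-probability the model assigns to a sequence of words."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model reports the mean negative log-probability
        # of each next token; negate and rescale to get a sequence total.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

for candidate in ("I'm trying to find a pharmacy.",
                  "I'm trying to find a farm of me."):
    print(f"{sequence_log_prob(candidate):9.2f}  {candidate}")
```

Under those assumptions, the pharmacy sentence should come out with the higher total log-probability, which is all a speech recognizer needs to choose between the two candidates.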

So these language models, large language models, are a technology that has come along tremendously in the past few years, and they have all sorts of applications beyond language generation. If you wanted to use them in the classroom, some of the obvious cases would be: What if you're studying educational technology itself? What if you're doing a critical exercise with new digital technologies?

So I assigned students to generate papers using a large language model, instead of writing papers, but then also to reflect on their experience: to figure out whether what they were generating was useful or meaningful to them, whether it advanced their own thinking about the topic, and whether it was true. We actually had a good conversation about that.

Rath: For those situations where the class requires entirely original work, there are technological ways now to detect plagiarism. What can academics and professors use to screen this out?

Montfort: Well, there are ways to sort of play Whac-A-Mole as the AI technology is developed and updated: how do we figure out what the signature of this particular type of writing is, versus human writing? Yes, there are people working on this, including a project originating from Princeton.
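One heuristic behind detectors of that kind can be sketched in a few lines: machine-generated text often looks unusually predictable (low perplexity) to a language model. This is a toy illustration under assumptions (the Hugging Face transformers library, the small open GPT-2 model, and a made-up threshold); it is not the Princeton project's published method.

```python
# A toy detection heuristic (an assumption about the general approach, not
# the Princeton project's actual method): machine-generated text often has
# lower perplexity, i.e. looks more predictable, under a language model
# than human writing does. Assumes Hugging Face "transformers" and GPT-2.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of the text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return math.exp(loss.item())

THRESHOLD = 40.0  # hypothetical cutoff; real detectors tune this empirically

essay = "The industrial revolution transformed how people lived and worked."
verdict = ("possibly machine-generated" if perplexity(essay) < THRESHOLD
           else "more likely human")
print(f"{perplexity(essay):.1f}  {verdict}")
```

Signatures and thresholds like this degrade as the underlying models change, which is the Whac-A-Mole dynamic Montfort describes.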

But I think a better idea is to focus assignments on what people can do that relates to the present-day situation, even the classroom context. So, for instance, if you ask about this week's news, a large language model, which ingested a huge amount of training data at some point in the past, generally months ago, is not going to have much to say about those events; at least, not anything that's correct. It's going to make some obvious errors about incidents that occurred in the last week or two.

If you ask it for very localized information, things people would be writing about regarding their own communities, it's not going to know about that to the same extent that the people in the classroom community would. And if you ask students to respond to the conversations and discussions they've been having in class the previous week, then obviously the language model, and any AI chatbot built on it, is going to be at a loss. That's one type of approach, and I think it's also consonant with educational goals: it's not just a workaround for the fact that AI has popped up; it also helps people actually write and communicate in a meaningful way.

Rath: Finally, I've got to ask, because we're just seeing the beginnings of this now. ChatGPT can, at times, be amazing. It certainly has limitations from what I've seen, but we're kind of at the starting point here. This technology is going to get a lot better, a lot more advanced. Where do you see it going from this point?

Montfort: Well, there's the technology itself, and then there's the fact that it exists in a corporate context: it's being developed by people who want to make money from it. I think that's one of the directions it will go.

One of the things I've found, for instance, is that ChatGPT is pretty good as a type of advanced conversational search. If you've forgotten the name of a short story or a novel and you describe what you're looking for, ChatGPT will provide an answer. It may be right or wrong, but you can find out right away.

There are a lot of problems with the system right now, because it will just brazenly go on and tell you things that might be 80% true, with the rest false. So for this assignment, some of my students actually took to fact-checking what they generated much more assiduously than they would have fact-checked human writing. There can be mistakes in Wikipedia, there can be mistakes in peer-reviewed research, there can be all sorts of mistakes. But people became much more skeptical of what large language models produced, and I think there are good reasons for that.

But its ability to be semantically correct is limited to what is in the massive library of text that it has ingested.