Can We Make Animals Talk? | Erich Jarvis
About The Episode
Scientists are editing genes for human speech into mice to see if they can learn vocal patterns. Neuroscientist Erich Jarvis explains how this could unlock not just speech—but entirely new ways of thinking.
For more, check out the extended interview with Erich Jarvis.
Learn more about NOVA and subscribe to our YouTube channel.
HAKEEM: We now have the ability to edit genes.
ERICH: Yep.
HAKEEM: Can you engineer speech into species or vocal learning into species?
ERICH: This is exactly what we're trying. Yes. So, trying it for several reasons. I mean, of course there's the cool factor. Can you imagine-
HAKEEM: Right, Planet of the Apes.
ERICH: Can you-
HAKEEM: Becoming real, yeah.
ERICH: Yeah. I'm not sure if that'll happen in my lifetime, but it's theoretically possible.
HAKEEM: Yeah, wow.
ERICH: It's theoretically possible, yes.
HAKEEM: Right, right.
ERICH: And so, a number of years ago, we started coming up with hypotheses about how song pathways in songbirds and human speech areas could convergently evolve to function in a similar way, with a similar set of genetic changes. And what is the function of these genes in vocal learning? So we would love to genetically modify these genes in a human and test what they do. But that's tricky and unethical to a certain degree. And we would love to do it in a songbird as well, but the genetic tools to manipulate genes in songbirds are not as advanced as what we can do in mice.
HAKEEM: I see. I see.
ERICH: And so what we're trying to do, and what we are doing, is taking gene variants that we find in humans, that are either unique to humans or unique to vocal learning species, and gene editing them into the mouse genome. What are we looking for? We're looking for changes in the vocalizations in two characteristics.
HAKEEM: Okay.
ERICH: Our words are made up of phonemes, eh, ah, oh, ou, and we sequence those phonemes together to make words, and then we sequence those words together to make sentences. We call those sequences, and the rules on which they're based, syntax.
HAKEEM: Got it.
ERICH: Okay? Well, guess what? These mice, and other species, have individual phonemes, or syllables, as we also call them, and what is learned is the sequences. Okay? And you can change around the sequences. Sometimes those sequences are innate. In us, we can actually learn sequences of sounds and make words and make sentences. And so, what we're looking for is changes to those sequences, as well as to the structure of the individual phonemes.
HAKEEM: I see.
ERICH: Okay.
HAKEEM: So let me ask you a question about the sequences right quick. So if there's a particular mouse that makes vocalizations and you record that, is it the case that you find repetitions of the same sequence?
ERICH: Yes. Yeah. Yeah. And they're repetitions of the same sequences that are innate. Okay?
HAKEEM: So every other mouse will share those?
ERICH: Every other mouse will share them. And what we're trying to do is to see if we can get the mouse to learn new sequences, or change the acoustic structure of each syllable within the sequence.
HAKEEM: So is it incredibly subtle, where it's not obvious to the ear, you have to do some... Take the waveform of this recording and-
ERICH: Yeah. The changes we're seeing thus far, like with changing one gene at a time, are on the subtle side.
HAKEEM: Yeah.
ERICH: But in the direction one would predict.
HAKEEM: Ah, interesting.
ERICH: So, working with Bob Darnell at Rockefeller, we recently published a study on the NOVA1 gene. It's a gene that controls splicing, the cutting up of RNA molecules and putting them back together. There's a human variant that you don't even find in Neanderthals, and we put this human variant into the mouse genome, and these mice start producing more complex syllables.
HAKEEM: Oh, interesting. Interesting.
ERICH: Another gene called Plexin-A1, it's a gene that controls connectivity in the brain, and we see-
HAKEEM: Connectivity between neurons?
ERICH: Between neurons. And this gene actually is turned down in the human speech motor cortex. All right? It's not one that's turned up. It's turned down. And when it gets turned down, in a counterintuitive fashion, it allows certain connections to form from the speech brain areas to the areas that control the vocal organ. So we say a loss of function in the gene causes a gain of function in the behavior.
HAKEEM: Wow.
ERICH: All right? It allows a certain connection to form. We can see this connection form in mice.
HAKEEM: Yeah.
ERICH: And these mice, too, are producing more complex sequences of vocalizations.
HAKEEM: Wow. Wow.
ERICH: We have not yet seen, or thoroughly tested, whether these mice can imitate sounds. That's our next step. But I do think we're going to have to manipulate multiple genes to get imitation.
HAKEEM: Wow.
ERICH: But that's also theoretically possible.
HAKEEM: Geez. So do you have models that basically you can play with the various genes and predict the behavioral output?
ERICH: Yes. Yeah. We have computer algorithms that we developed that look at the regulation of these several hundred genes. If we were to tweak one in one direction, or tweak it in another direction, we can make predictions.
HAKEEM: Well, this takes us in an obvious direction. Are we moving toward a future where we have talking pets? Is that-
ERICH: Yeah, someone asked me about that recently, another scientist.
HAKEEM: Oh yeah.
ERICH: Thinking about can we actually do that?
HAKEEM: Yeah. Yeah.
ERICH: Right, yes.
HAKEEM: Eventually. Not today, but-
ERICH: Yeah, and do people want to know what their pets are thinking?
HAKEEM: Oh, yeah.
ERICH: You see, I do think once you have the ability to imitate sounds, you have this inner speech in your brain, and I believe that the inner speech brain circuit is the same that's being used to produce the sounds, and that is separate from the auditory circuit that's hearing that speech.
HAKEEM: All right, so I got to throw this in. You said you believe?
ERICH: Yes.
HAKEEM: So does that mean that no one has stuck a brain in a scanner and saw what lights up when you're talking to yourself?
ERICH: No. Maybe it's my cautious scientific self, but there's debate.
HAKEEM: Yeah, I see.
ERICH: About whether your consciousness, your inner speech brain areas, are the same as what's used to speak the sounds.
HAKEEM: Is the debate based on data?
ERICH: Yeah, debate's based on data.
HAKEEM: Okay.
ERICH: And I strongly favor the human fMRI studies that show that the brain regions that control speech production are the same brain regions that light up when you're actually thinking in speech.
HAKEEM: Right, right.
ERICH: Yeah.
HAKEEM: Yeah, yeah, yeah.
ERICH: And so, you asked about our pet animals, right?
HAKEEM: Right. Yeah.
ERICH: The pet animals have the brain areas that hear speech, but not the ones that produce it.
HAKEEM: Oh, that's right.
ERICH: So I do think that if we can get our pet animals to speak through genetic manipulations, that will not only allow them to say what they've been thinking in the hearing pathway, it'll even give them a new ability to have inner speech.
HAKEEM: Oh, interesting.
ERICH: And what we call conscious speaking.
HAKEEM: So if you want to know what your pets are thinking, by giving them this ability, you're giving them new thoughts.
ERICH: That's right.