Graphic: NASW at AAAS Travel Fellowship Program, organized by NASW’s Education Committee and mentored by NASW member volunteers

GPT-4 is all talk and not much understanding, says this AI expert

This story was published as part of the 2024 Travel Fellowship Program to AAAS organized by the NASW Education Committee, providing science journalism practice and experience for undergraduate and graduate students.

Story by Grace Huckins
Mentored and edited by Rob Irion

DENVER — In April 2021, a Tesla user uploaded a video to YouTube of his car automatically stopping in front of a stop sign. That would have been a modest feather in Elon Musk’s cap—except the stop sign was being held by a police officer, in a photograph, on a billboard. For Melanie Mitchell, a professor at the Santa Fe Institute, the incident underlined a critical weakness of modern AI: These days, it’s easy to train a computer to recognize a red octagon with a white border, but it’s nearly impossible to teach it to understand what a billboard is.

Even systems that appear to grasp human concepts, like ChatGPT, may be far less intelligent than they seem. In a 2023 preprint, Mitchell and her colleagues reported that GPT-4, the most advanced language model that OpenAI has released, performs poorly on visual puzzles that ask it to deal with concepts like “same and different” or “above and below”—tests on which humans excel. Despite its proficiency at writing poetry in the style of Shakespeare or passing the bar exam, GPT-4 fails to measure up to a human child in rather important ways.

Mitchell, the author of Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019), dissected AI’s limited understanding of the world in a Feb. 17 talk at the American Association for the Advancement of Science (AAAS) annual meeting in Denver. Afterward, she sat down with NASW graduate travel fellow Grace Huckins to map out the frontiers of AI knowledge.

How did you become interested in the question of what AI understands?

I started out in my research as a graduate student thinking about human psychology, and how people make analogies. So all of my work in AI has come from that interest in how humans think and whether we can get machines to think like that. And if not, what are they actually doing instead?

Is that just a theoretical question, or a practical one as well?

It is practical, because it helps you figure out, “Is this system going to work well in some new situation that I put it in, that wasn’t part of its training data?” We’ve seen that self-driving cars sometimes have problems, because they aren’t able to take something they’ve been trained on and apply it to some new situation. If they haven’t seen some situation before, like a construction site, they don’t know what to do.

Do you think it’s possible that current AI systems do understand the world—just very differently from the way humans do?

I do. One example is [the DeepMind AI system] AlphaFold, which predicts protein structure better than any other system, and better than people. The way it does it is probably by having some very complex network of statistical associations. We don’t understand it. So you can think of that as a kind of understanding that’s different from our understanding—and maybe better, because it’s able to do better prediction.

There may be different ways in which we want AI systems to work. Sometimes we want them to work like that, because we want them to do scientific discovery. Sometimes we want them to work more like us, because we want them to drive us around.

You’ve shown in your work that GPT-4 lacks important concepts, like “above” and “same.” But other research suggests GPT-4 can understand certain things. Where would you say GPT-4 lies on the spectrum between “no understanding” and “human-level understanding”?

I think it does understand a lot of stuff. People have looked at [GPT-4’s] understanding of abstract grammatical structures—does it understand the concept of a noun? Does it understand the concept of a verb? The answer is yes, it does seem to have those kinds of abstract concepts. So it’s not like it doesn’t have anything. But I don’t know where I’d put it on the spectrum.

But it’s definitely in between?

It’s definitely in between.

What will it take to push AI systems closer to human-level understanding?

It might require a different approach to training. Right now, these systems need so much data to train, and that just has to change, for many reasons. One of the new frontiers is how to make these things more active—how do we give them actual experiences? How do we give them episodic memory [a recollection of specific experiences]? How do we give them a world to understand? Right now, they don’t have such a thing.

Our own concepts develop through experience, through living in the world, and through curiosity—actively trying to understand the world. It’s one of the reasons that we are able to learn about the world without having to be trained on 50 billion books. I really like the work of Alison Gopnik, who [talks about] babies as scientists: They’re doing experiments, they’re active, they’re actively intervening to try to figure out what’s going on. I think that’s the key to not having to rely on ridiculous amounts of training data.

In the meantime, what are the biggest risks of deploying AI systems that fail to understand the world?

Probably people assuming they understand more than they do. There was that case of the lawyer who used ChatGPT to write his briefs and cited several cases. He asked, “Are all these cases real?” And [ChatGPT] said, “Yes, I can verify that all these cases are real.” [Many were not.] He said he trusted it. I think people trust them too much.


Grace Huckins is a PhD candidate in neuroscience at Stanford University and a freelance science writer. They are a frequent contributor to WIRED, and their writing has appeared in Slate, MIT Technology Review, Scientific American, and various other outlets. You can find all of their coverage of neuroscience, mental health, and AI at gracehuckins.com.

Edited by: Rob Irion, University of California, Santa Cruz

Top Image photo credit: Do Space on Flickr

Founded in 1934 with a mission to fight for the free flow of science news, NASW is an organization of approximately 3,000 professional journalists, authors, editors, producers, public information officers, students and people who write and produce material intended to inform the public about science, health, engineering, and technology. To learn more, visit www.nasw.org.

February 26, 2024