
Many Minds

Jan 25, 2023

By now you’ve probably heard about the new chatbot called ChatGPT. There’s no question it’s something of a marvel. It distills complex information into clear prose; it offers instructions and suggestions; it reasons its way through problems. With the right prompting, it can even mimic famous writers. And it does all this with an air of cool competence, of intelligence. But, if you're like me, you’ve probably also been wondering: What’s really going on here? What are ChatGPT—and other large language models like it—actually doing? How much of their apparent competence is just smoke and mirrors? In what sense, if any, do they have human-like capacities?

My guest today is Dr. Murray Shanahan. Murray is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. He's the author of numerous articles and several books at the lively intersections of artificial intelligence, neuroscience, and philosophy. Very recently, Murray put out a paper titled 'Talking about Large Language Models', and it's the focus of our conversation today. In the paper, Murray argues that—tempting as it may be—it's not appropriate to talk about large language models in anthropomorphic terms. Not yet, anyway.

Here, we chat about the rapid rise of large language models and the basics of how they work. We discuss how a model that—at its base—simply does "next-word prediction" can be engineered into a savvy chatbot like ChatGPT. We talk about why ChatGPT lacks genuine "knowledge" and "understanding"—at least as we currently use those terms. And we discuss what it might take for these models to eventually possess richer, more human-like capacities. Along the way, we touch on: emergence, prompt engineering, embodiment and grounding, image generation models, Wittgenstein, the intentional stance, soft robots, and "exotic mind-like entities."

Before we get to it, just a friendly reminder: applications are now open for the Diverse Intelligences Summer Institute (or DISI). DISI will be held this June/July in St Andrews, Scotland. The program consists of three weeks of intense interdisciplinary engagement with exactly the kinds of ideas and questions we like to wrestle with here on this show. If you're intrigued—and I hope you are!—check out for more info.

Alright friends, on to my decidedly human chat, with Dr. Murray Shanahan. Enjoy!


The paper we discuss is here. A transcript of this episode is here.


Notes and links

6:30 – The 2017 “breakthrough” article by Vaswani and colleagues.

8:00 – A popular article about GPT-3.

10:00 – A popular article about some of the impressive—and not so impressive—behaviors of ChatGPT. For more discussion of ChatGPT and other large language models, see another interview with Dr. Shanahan, as well as interviews with Emily Bender and Margaret Mitchell, with Gary Marcus, and with Sam Altman (CEO of OpenAI, which created ChatGPT).

14:00 – A widely discussed paper by Emily Bender and colleagues on the “dangers of stochastic parrots.”

19:00 – A blog post about “prompt engineering”. Another blog post about the concept of Reinforcement Learning through Human Feedback, in the context of ChatGPT.

30:00 – One of Dr. Shanahan’s books is titled, Embodiment and the Inner Life.

39:00 – An example of a robotic agent, SayCan, which is connected to a language model.

40:30 – On the notion of embodiment in the cognitive sciences, see the classic book by Francisco Varela and colleagues, The Embodied Mind.

44:00 – For a detailed primer on the philosophy of Ludwig Wittgenstein, see here.

45:00 – See Dr. Shanahan's general audience essay on "conscious exotica" and the space of possible minds.

49:00 – See Dennett’s book, The Intentional Stance.


Dr. Shanahan recommends:

Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell

(see also our earlier episode with Dr. Mitchell)

‘Abstraction for Deep Reinforcement Learning’, by M. Shanahan and M. Mitchell


You can read more about Murray’s work on his website and follow him on Twitter.


Many Minds is a project of the Diverse Intelligences Summer Institute (DISI), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts.

**You can now subscribe to the Many Minds newsletter here!**

We welcome your comments, questions, and suggestions. Feel free to email us at:

For updates about the show, visit our website, or follow us on Twitter: @ManyMindsPod.