
Many Minds


Nov 24, 2021

I’m betting you’ve heard about the next generation of artificial intelligence, the one that’s just around the corner. It’s going to be pervasive, all-competent, maybe super-intelligent. We’ll rely on it to drive cars, write novels, diagnose diseases, and make scientific breakthroughs. It will do all these things better, faster, and more safely than we bumbling humans ever could. The thing is, we’ve been promised this for years. If this next level of AI is coming, it seems to be taking its time. Might it be that AI is taking a while because it’s simply harder than we thought?

My guest today is Dr. Melanie Mitchell. She is the Davis Professor at the Santa Fe Institute and the author of a number of books, including her latest, which is titled ‘Artificial Intelligence: A Guide for Thinking Humans.’ 

In this conversation we zoom in on Melanie’s widely discussed recent essay, 'Why AI is harder than we think.’ We talk about the repeating cycle of hype and disenchantment within AI, and how it stretches back to the first years of the field. We walk through the four fallacies Mitchell identifies that lead us to think super-smart AI is closer than it actually is. We talk about self-driving cars, brittleness, adversarial perturbations, Moravec’s paradox, analogy, brains in vats, and embodied cognition, among other topics. And we discuss an all-important concept, one we can’t easily define but can all agree AI is sorely lacking: common sense.

Across her scholarly publications and public-facing essays, Melanie has recently emerged as one of our most cogent and thoughtful guides to AI research. I’ve been following her work for a while now and was really stoked to get to chat with her. Her essay is insightful, lucid, and just plain fun—if you enjoy this conversation, I definitely suggest you check it out for yourselves. 

Alright folks, on to my conversation with Dr. Melanie Mitchell. And for those in the US—happy Thanksgiving!

The paper we discuss is available here. A transcript of this episode is available here.

 

Notes and links

5:00 – A recent essay by Dr. Mitchell on self-driving cars and common sense.

14:00 – An influential paper from 2013 titled ‘Intriguing properties of neural networks.’

16:50 – A video introduction to “deep learning.”

19:00 – A paper on “first step fallacies” in AI by Hubert Dreyfus.

21:00 – For a discussion of AlphaGo’s recent success with the game of Go, see our earlier interview with Dr. Marta Halina.

26:00 – An influential 1976 paper titled, ‘Artificial intelligence meets natural stupidity.’

31:00 – A popular Twitter account that tags recent findings with “In mice.”

38:00 – A paper by Lawrence Barsalou on “grounded cognition.” For related ideas see Lakoff & Johnson’s Metaphors We Live By.

41:00 – A recent book by Brian Cantwell Smith, The Promise of Artificial Intelligence.

43:00 – An article on the idea of “core knowledge.”

47:00 – The CYC project.

49:30 – A recent article by Dr. Mitchell about analogies people have been using to understand COVID-19.

50:30 – An op-ed by Dr. Mitchell about why we should not worry so much about super-intelligence.

 

End-of-show recommendations:

Dr. Mitchell’s 2019 book, Artificial Intelligence: A Guide for Thinking Humans

Lake et al., 2017, ‘Building Machines That Learn and Think Like People’

Chollet, 2019, ‘On the Measure of Intelligence’

 

You can find Dr. Mitchell on Twitter (@MelMitchell1) and follow her research at her website.

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://disi.org), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with help from assistant producer Cecilia Padilla. Creative support is provided by DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/).

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts.

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.

For updates about the show, visit our website (https://disi.org/manyminds/), or follow us on Twitter: @ManyMindsPod.