Many Minds


Feb 17, 2021

Guess what folks: we are celebrating a birthday this week. That’s right, Many Minds has reached the ripe age of one year old. Not sure how old that is in podcast years, exactly, but it’s definitely a landmark that we’re proud of. Please no gifts, but, as always, you’re encouraged to share the show with a friend, write a review, or give us a shout out on social.

To help mark this milestone, we’ve got a great episode for you. My guest is the writer Brian Christian. Brian is a visiting scholar at the University of California, Berkeley, and the author of three widely acclaimed books: The Most Human Human, published in 2011; Algorithms to Live By, co-authored with Tom Griffiths and published in 2016; and, most recently, The Alignment Problem, which was published this past fall and is the focus of our conversation in this episode.

The alignment problem, put simply, is the problem of building artificial intelligences—machine learning systems, for instance—that do what we want them to do, that both reflect and further our values. This is harder to do than you might think, and it’s more important than ever.

As Brian and I discuss, machine learning is becoming increasingly pervasive in everyday life—though it’s sometimes invisible. It’s working in the background every time we snap a photo or hop on Facebook. Companies are using it to sift resumes; courts are using it to make parole decisions. We are already trusting these systems with a bunch of important tasks, in other words. And as we rely on them in more and more domains, the alignment problem will only become that much more pressing.

In the course of laying out this problem, Brian’s book also offers a captivating history of machine learning and AI. Since their very beginnings, these fields have been shaped by their interactions with philosophy, psychology, mathematics, and neuroscience. Brian traces these interactions in fascinating detail—and brings them right up to the present moment. As he describes, machine learning today is not only informed by the latest advances in the cognitive sciences; it’s also propelling those advances.

This is a wide-ranging and illuminating conversation, folks. And, if I may say so, it’s also an important one. Brian makes a compelling case, I think, that the alignment problem is one of the defining issues of our age. And he writes about it—and talks about it here—with such clarity and insight. I hope you enjoy this one. And, if you do, be sure to check out Brian’s book.

Happy birthday to us—and on to my conversation with Brian Christian. Enjoy!

 

A transcript of this show is available here.

 

Notes and links

7:26 - Norbert Wiener’s article from 1960, ‘Some Moral and Technical Consequences of Automation.’

8:35 - ‘The Sorcerer’s Apprentice’ is a segment of the animated film Fantasia (1940). Before that, it was a poem by Goethe.

13:00 - A well-known incident in which Google’s nascent auto-tagging function went terribly awry.

13:30 - The ‘Labeled Faces in the Wild’ database can be viewed here.

18:35 - A groundbreaking article in ProPublica on the biases inherent in the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool.

25:00 - The website of the Future of Humanity Institute, mentioned in several places, is here.

25:55 - For an account of the collaboration between Walter Pitts and Warren McCulloch, see here.

29:35 - An article about the racial biases built into photographic film technology in the 20th century.

31:45 - The much-investigated Tempe crash involving a driverless car and a pedestrian.

37:17 - The psychologist Edward Thorndike developed the “law of effect.” Here is one of his papers on the law.

44:40 - A highly influential 2015 paper in Nature in which a deep Q-network was able to surpass human performance on a number of classic Atari games, and yet not score a single point on ‘Montezuma’s Revenge.’

47:38 - A chapter on the classic “preferential looking” paradigm in developmental psychology.

53:40 - A blog post discussing the relationship between dopamine in the brain and temporal difference learning. Here is the paper in Science in which this relationship was first articulated.

1:00:00 - A paper on the concept of “coherent extrapolated volition.”

1:01:40 - An article on the notion of “iterated distillation and amplification.”

1:10:15 - The fourth edition of a seminal textbook by Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, is available here: http://aima.cs.berkeley.edu/

1:13:00 - An article on Warren McCulloch’s poetry.

1:17:45 - The concept of “reductions” is central in computer science and mathematics.

 

Brian Christian’s end-of-show reading recommendations:

The Alignment Newsletter, written by Rohin Shah

Invisible Women, by Caroline Criado Perez

The Gardener and the Carpenter, by Alison Gopnik

You can keep up with Brian at his personal website or on Twitter.

 

Many Minds is a project of the Diverse Intelligences Summer Institute (DISI) (https://www.diverseintelligencessummer.com/), which is made possible by a generous grant from the Templeton World Charity Foundation to UCLA. It is hosted and produced by Kensy Cooperrider, with creative support from DISI Directors Erica Cartmill and Jacob Foster, and Associate Director Hilda Loury. Our artwork is by Ben Oldroyd (https://www.mayhilldesigns.co.uk/). Our transcripts are created by Sarah Dopierala (https://sarahdopierala.wordpress.com/).

You can subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you like to listen to podcasts.

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.

For updates about the show, follow us on Twitter: @ManyMindsPod.