Human Compatible

Artificial Intelligence and the Problem of Control

4.5 (603 ratings from Audible, Apple, Spotify)

Read by: Raphael Corkhill

Language: English

Length: 11 hours and 38 minutes

Publisher: Penguin Audio

Release date: 2019-10-08

"The most important book on AI this year." (The Guardian)

"Mr. Russell's exciting book goes deep, while sparkling with dry witticisms." (The Wall Street Journal)

"The most important book I have read in quite some time" (Daniel Kahneman)

"A must-read" (Max Tegmark)

"The book we've all been waiting for" (Sam Harris)

A leading artificial intelligence researcher lays out a new approach to AI that will enable us to coexist successfully with increasingly intelligent machines.

In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable.

In this groundbreaking audiobook, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage.

If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursuing our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial.