
The best books on Artificial Intelligence

recommended by Calum Chace

Surviving AI: The promise and peril of artificial intelligence by Calum Chace


It could lead us all to immortality or spell the end of the human race. Author Calum Chace picks the best books on Artificial Intelligence or AI.

Interview by Sophie Roell, Editor


I’ve read a couple of your books now, and what I want to know is this: Do you really think that artificial intelligence is a threat to the human race and could lead to our extinction?

Yes, I do, but it also has the potential for enormous benefit. I do think it’s probably going to be either very, very good for us or very, very bad. It’s a bit like a strange attractor in chaos theory: the outcomes in the middle seem less likely. I’m reasonably hopeful because what will determine whether it’s very good or very bad is largely us. We have time, certainly before artificial general intelligence (AGI) arrives. AGI is an artificial intelligence (AI) that has human-level cognitive ability, so it can outperform us—or at least equal us—in every area of cognitive ability that we have. It also has volition and may be conscious, although that’s not necessary. We have time before that arrives: time to make sure it’s safe.

At the same time as having scary potential, AI also brings the possibility of immortality and living forever by uploading your brain. Is that something you think will happen at some point?

I certainly hope it will. Things like immortality, the complete end of poverty, the abolition of suffering, are all part of the very, very good outcome, if we get it right. If you have a superintelligence that is many, many times smarter than the smartest human, it could solve many of our problems. Problems like ageing, and how to upload a mind into a computer, do seem, in principle, solvable. So yes, I do think they are realistic.

Let’s talk more about some of these themes as we go through the books you’ve chosen. The first one on your list is The Singularity is Near, by Ray Kurzweil. He thinks things are moving along pretty quickly, and that a superintelligence might be here soon. 

He does. He’s fantastically optimistic. He thinks that in 2029 we will have AGI. He’s thought that for a long time; he’s been saying it for years. He then thinks we’ll have an intelligence explosion and achieve uploading by 2045. I’ve never been entirely clear what he thinks will happen in the 16 years in between. He probably does have quite detailed ideas, but I don’t think he’s put them down on paper. Kurzweil is important because he, more than anybody else, has made people think about these things. He has amazing ideas in his books—like many of the ideas in everybody’s books, they’re not completely original to him—but he has been clearly and loudly propounding the idea that we will have AGI soon and that it will create something like utopia.

I came across him in 1999 when I read his book, The Age of Spiritual Machines. The book I’m suggesting here is The Singularity is Near, published in 2005. The reason I point people to it is that it’s very rigorous. A lot of people think Kurzweil is a snake-oil salesman or somebody selling a religious dream. I don’t agree. I don’t agree with everything he says, and he is very controversial. But his book is very rigorous in setting out a lot of the objections to his ideas and then tackling them. He’s brave, in a way, in tackling everything head-on; he has answers for everything.

Can you tell me a bit more about what ‘the singularity’ is and why it’s near?

The term ‘singularity’ is borrowed from the world of physics and math, where it means an event at which the normal rules break down. The classic example is a black hole. There’s a bit of radiation leakage but basically, once you cross its event horizon, you can’t get back out, and the laws of physics break down. Applied to human affairs, the singularity is the idea that we will achieve some technological breakthrough. The usual one is AGI. The machine becomes as smart as humans, continues to improve, and quickly becomes hundreds, thousands, millions of times smarter than the smartest human. That’s the intelligence explosion. When you have an entity of that level of genius around, things that were previously impossible become possible. We get to an event horizon beyond which the normal rules no longer apply.

“If you have a superintelligence that is many, many times smarter than the smartest human, it could solve many of our problems.”

I’ve also started using it to refer to a prior event, which is the ‘economic singularity.’ There’s been a lot of talk in the last few months about the possibility of technological unemployment. Again, it’s something we don’t know for sure will happen, and we certainly don’t know when. But it may be that AIs—and to some extent their peripherals, robots—will become better at doing any job than a human. Better, and cheaper. When that happens, many or perhaps most of us can no longer work, through no fault of our own. We will need a new type of economy. It’s really very early days in terms of working out what that means and how to get there. That’s another event that’s like a singularity, in that it’s really hard to see how things will operate on the other side.

Going back to Ray Kurzweil’s book, you mentioned that there are some criticisms people have raised and that he’s come up with counter-arguments to. Can you give an example?

There are a whole load of criticisms he replies to. The best example might be, ‘Exponential trends don’t last forever.’ There are lots of people who’ve said that Moore’s Law—the observation that computers get twice as powerful every 18 months, so $1,000 of computer will buy you twice as much processing power in 18 months’ time as it does today—is an exponential growth trend, and that these never go on forever; they always turn into ‘S’ curves. They say we’re quite quickly going to get to the point where you can’t get any more transistors on a chip, because if you pack them any closer together they’ll set each other on fire.

He says that that’s true, but that Moore’s Law, when it comes to integrated circuits, is actually the fifth paradigm of this exponential improvement in computing power, which goes back to vacuum tubes and other things before. Another one will replace integrated circuits. It might be optical or 3D or quantum computing. So, he says, the exponential growth will continue. That’s the kind of thing he does — and he goes into a lot of detail on that one: How long he thinks Moore’s Law has gone on and what will replace it.
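To make that doubling concrete, here is a minimal sketch in Python. The only assumption is the 18-month doubling period quoted above; the function name and the sample horizons are illustrative, not from the interview:

```python
# Illustrative sketch only: how processing power per $1,000 compounds,
# assuming the 18-month doubling period quoted in the interview.

def power_multiplier(years: float, doubling_period_years: float = 1.5) -> float:
    """How many times more processing power $1,000 buys after `years` years."""
    return 2 ** (years / doubling_period_years)

for years in (3, 9, 15, 30):
    print(f"After {years:>2} years: ~{power_multiplier(years):,.0f}x the power per $1,000")
```

Fifteen years of that doubling is roughly a thousand-fold increase, and thirty years is about a million-fold, which is why Kurzweil’s timescales lean so heavily on the trend continuing in some form.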

And the reason he argues this is that exponential growth in processing power will have to continue for AGI to happen?

Within the timescale he imagines, yes. It’s always worth illustrating the power of exponential growth, because we’re not wired to understand it intuitively. If you take 30 strides you’ll go roughly 30 meters. If you could walk 30 exponential strides, so the first stride is one meter, the second is two meters, the third is four meters and the fourth is eight meters and so on — you’d get to the moon. To be more precise, you’d get to the moon on the 29th stride, and the 30th would bring you all the way back. That is the power of exponential growth. That is why the smartphone in your pocket has more processing power than NASA had when it sent Neil Armstrong to the moon.
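As a rough check of that arithmetic, here is a short sketch. The only figure assumed is the average Earth-Moon distance, about 384,400 km:

```python
# Rough check of the exponential-strides illustration: stride n covers 2**(n-1) meters.
MOON_DISTANCE_M = 384_400_000  # average Earth-Moon distance, about 384,400 km

total_m = 0
for stride in range(1, 31):
    total_m += 2 ** (stride - 1)
    if total_m >= MOON_DISTANCE_M:
        print(f"Stride {stride}: cumulative distance ~{total_m / 1000:,.0f} km, past the Moon")
        break
```

The cumulative distance passes the Moon during the 29th stride, just as the illustration says.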

Is that why you personally are quite convinced by Kurzweil’s arguments, because you feel we really are in the midst of this incredible speed of change?

I think Kurzweil is too optimistic. He has lately started to acknowledge the downside possibilities more, but he does tend to gloss over them. But yes, I do think we are on an exponential curve. I don’t know—neither does anybody else—how long it will go on for. I certainly think it’s possible we’ll get AGI in the next few decades, i.e. somewhere between four and eight decades from now. My main view is that the impact of this, if it happens, is so massive that we have to take it seriously.

When you say he’s optimistic you mean both in terms of the timeframe and the effects of the arrival of AGI?

Yes. He always seems to be devoid of doubt. I think that’s one of the things that makes him quite controversial. But he is something of a genius. Not only does he write fascinating books, but he’s also had a very successful career as a software entrepreneur.

And he now works for Google?

Yes. He was appointed a Director of Engineering there three years ago. I don’t know what that means in terms of the hierarchy. I went to California on holiday with my family two years ago and managed to wangle a tour of Google. I said to the person who was giving the tour, “So where does Ray Kurzweil work?” She pointed to a building and said he worked there. It was number 42. I thought, this has to be a Google joke. Because 42, obviously, is the answer to the ultimate question of life, the universe and everything in The Hitchhiker’s Guide to the Galaxy.

Shall we go onto your next book, which is Superintelligence by Nick Bostrom?

Nick is another remarkable individual. A nice story he tells is that when he was at university in Sweden, his tutor said to him, “It’s come to my attention that at the same time as you’re doing this degree you’re also doing another one. You can’t do that, you have to stop.” Bostrom says he didn’t stop and he also didn’t tell her he was doing a third degree at the same time! He’s a bit of a genius, brain-the-size-of-the-planet individual. He is a philosophy professor at Oxford University and runs the Oxford Martin School’s Future of Humanity Institute.

He’s said, for a long time, that Kurzweil is half right. If we get AGI, the outcome could be absolutely wonderful. But it could also be terrible. He warns about the possibility not so much of a superintelligence going rogue—like Skynet, or HAL in 2001—but more simply of an immensely powerful entity that would not set out to damage us but would have goals that could do us harm.

Can you give an example?

He uses what he calls a ‘cartoon’ example: The first AGI turns out to be developed by someone who owns a paperclip manufacturing company. The AI has the goal of maximizing the production of paperclips. After a little while, it realizes, “Well these humans, they’re made of atoms, they could be turned into paperclips.” So it turns us all into paperclips. Then it turns its gaze towards the stars and thinks, “Well there’s an awful lot of planets out there and I can turn all those into paperclips!” So it develops a space program and travels around the cosmos and turns the entire universe either into paperclips or something that makes paperclips. It’s an absurd idea, but it shows the possibility of inadvertent damage. An AI doesn’t have to hate humans in the way Hollywood often shows them disliking us. It can just have goals that do us damage.

If you think about it, if you have a superintelligence around and it’s capable of changing the environment on the earth radically, there’s only a narrow range of positions that are good for us. We can’t have anyone tinkering with the mixture of oxygen and nitrogen in the air. We don’t want anyone changing the way gravity works. We don’t want anything taking away rare earths or materials that we use in our smartphones or for food. We need the superintelligence to leave things pretty much as they are and not make any radical changes. But a superintelligence could have any goal. We’ve got no idea what goals it may give itself.


What Bostrom is saying is, ‘These are possibilities which we have to take seriously, because we may get AGI in the next few decades. We need to make sure that the first AGI, and all future AGIs, are safe.’ There’s a project called Friendly AI, or FAI, whose aim is to make sure that AIs are safe. It’s a very, very difficult job, but the good news is we’re a smart little mammal and we’ve got quite a lot of time to find the answers.

Bostrom is personally involved in that, is he?

He is. The Future of Humanity Institute is one of the four main existential risk organizations around the world which are thinking hard about this problem and trying to raise awareness—which is what I’m trying to do as well—so that more smart people can be applied to solving it.

Yes, because Hollywood movies aside, I’ve never really thought of AI as a potential existential danger at all. 

Very few people have. I’ve been slightly obsessive about AI for 15 years or so. Bostrom’s book, Superintelligence, really changed the landscape. It’s a really, really good book. It’s quite technical, in a philosophical sense, and it can be quite hard going. I talk to everybody I meet about AI and I get this glazed look, because people think it’s just Hollywood, that it’s nonsense. Bostrom’s book was so thorough and his credentials so solid that suddenly a lot of people—like Bill Gates and Stephen Hawking and Elon Musk—took notice and started to speak publicly about it. That’s what’s really brought it to public attention over the last year or so.

I just read Frankenstein again. At that time, when it was written, just after the Scientific Revolution, people were beginning to understand the circulation of the blood and how the body works. It must have seemed as if you could put a life together quite easily by sewing a few body parts together. Haven’t we always felt that we’re on the verge of discovering the secret to life, but it’s never actually happened?

For a long time, it was thought to be blasphemous. You shouldn’t do it, even if you could, because it was God’s preserve. But yes, stories about either mortals or gods making other humans have been around pretty much forever. Things are always impossible until they become possible. Ever since people looked at birds, we’ve tried flying. It was always impossible until suddenly a couple of bicycle makers managed to do it. It is a hard thing to do: To create a human-level AI is a massive, massive task. Although, in many ways, AI is near the beginning of its history, it’s come a long way. It was only in 1956 that the discipline got going, at the Dartmouth Conference in America. It’s had periods of over-hype, followed by periods of winter when you couldn’t raise any money for it.

One thing that has happened recently is the application of deep learning to AI: the use of clever statistics and big data. It turns out that giving algorithms datasets of billions or trillions of examples makes them unreasonably useful. If you only give them datasets of millions, they can’t do very much. Deep learning has led to really enormous strides in image recognition and real-time machine translation, for instance. You can also see the glimmerings of an AI acquiring common sense. Geoff Hinton, one of the founders of deep learning, reckons we’ll have the first computers with common sense in 10 years. That’s startling. That’s taking current AI systems, what’s known as artificial narrow intelligence, and improving them — bearing in mind that much of the hardware and a lot of the understanding is on an exponential curve of improvement.

Another way to create an AGI would be to copy the machine that we know works already: the human brain. People have thought for a while that if you slice a human brain incredibly thinly and map where all the neurons are and how they connect to each other, you can then reconstruct that inside a computer. That’s always sounded like a massive task but we now know that it’s harder than we originally thought because neurons are complicated little beasts. They’re little computers in themselves, each one. So it’s not a case of taking the 85 billion neurons in the brain and treating each one like a byte in a computer. You have to treat each one as a computer. That makes the process harder by several orders of magnitude.

I saw a piece recently where someone argued very cogently that the amount of processing that goes on inside a human brain is 10 to the power of 21 FLOPS (floating point operations per second, a measure of processing activity). That’s a huge, huge number, but it’s only a question of time until we get computers that can handle that.

It’s a hard job, it will take a long time, but we’re moving towards it at an exponential rate.
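Purely as an illustration of “a question of time”, here is a small sketch. The 10^21 FLOPS figure is the one quoted above; the starting point of 10^17 FLOPS for a large machine and the 18-month doubling period are assumptions for illustration only:

```python
import math

BRAIN_ESTIMATE_FLOPS = 1e21   # the 10^21 FLOPS figure cited above
STARTING_FLOPS = 1e17         # assumed starting point for a large machine (illustrative only)
DOUBLING_PERIOD_YEARS = 1.5   # the 18-month doubling rate quoted earlier in the interview

doublings = math.log2(BRAIN_ESTIMATE_FLOPS / STARTING_FLOPS)
years = doublings * DOUBLING_PERIOD_YEARS
print(f"About {doublings:.1f} doublings, i.e. roughly {years:.0f} years at that rate")
```

Under those assumptions the gap is a little over 13 doublings, or roughly 20 years — the conclusion changes with the starting point, but the order of magnitude is what “a question of time” means here.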

How does the issue of the soul fit into all this?

For me, that’s a simple question. I’m an atheist, and I don’t think they exist. Those who do think they exist could say, “Whatever you do to replicate the material brain it won’t capture the soul, so you’re not going to create something that is like a human.” If that’s true, maybe what we’ll end up with is an AI that can do everything a human can do, but just doesn’t have a soul. I wonder whether the AI will care very much about that. It may say “You believe you’ve got this thing you call the soul, but you don’t have any evidence for it. Anyway it doesn’t slow me down and I seem to be a lot smarter than you, so I’m not going to worry about it.”

It’s odd to think about what the volition of this thing — this computer — might be. I suppose we would program its goals, but then your argument is that it would change those goals?

We will certainly program the goals that it comes to awareness with. It may accept those goals and continue to operate on them. However, it may reflect on them and think, ‘Well, you’re quite smart little mammals, but I’m a lot smarter and I’ve got a better idea of what the goals should be, so I’m going to change them.’ We don’t really know. One of the interesting things about the Friendly AI project as a whole is that it’s really hard to specify a goal which would be always and forever good for us. If you said, ‘The goal is to enhance the wellbeing of humans…’

That seems a good goal, yes. 

But what is the wellbeing of humans? People don’t agree on that. In fact, all of us contradict ourselves: You don’t have to dig very far to work out that we have internally contradictory goals in our system, and probably had to in order to get by. A superintelligence which is programmed with such an internally inconsistent and possibly incoherent goal is either going to be paralyzed or have to change its goals, or could end up with some pretty perverse outcomes.

So you might say, instead, “Make all humans happy.” What the superintelligence then does is put us all into little coffins, wrap us in straitjackets, and feed us intravenously with all the nutrition we need. It then puts a probe into the pleasure centers of our brains and stimulates them forever. We end up stuck inside these coffins, happy as anything, but effectively gone as a species. The universe might look at that and think, “No big deal, we’ve swapped one smart little mammal for a superintelligence.” But we would care. I’d care.

So when you said at the beginning you were optimistic about the arrival of AGI, you’re also pretty nervous about it.

I am. The subtitle of my book is ‘the promise and peril of AI.’ It has huge promise and huge peril. The worst peril, if you like, or the way to make sure we get the peril, is to ignore it.

Your third book choice is Rise of the Robots by Martin Ford. This is about the economic issues you mentioned earlier: Mass unemployment, for example. 

Yes, the next two books are about what I call the ‘economic singularity.’ Martin Ford is a software company owner in Silicon Valley. He noticed the dramatic improvements in what the computers he was working with could do and started thinking, ‘Won’t there come a time when they take over our jobs?’ His conclusion was yes, they probably will. But maybe we can keep racing with the machines rather than having to race against them. The reason I’ve put his book on the list is that it’s one of the most convincing about the possibility of technological unemployment. His own conclusion is that a lot of people won’t be able to earn enough money to live on, but most of us will be able to continue working. There will be things that we can do better in combination with a computer than a computer can do on its own.

When you get to a point in economic development where most people can’t work—or at least they can’t work enough to make a living—you can’t just allow the whole population to starve. You have to have resources provided which aren’t worked for. In other words, you’ve got to have a sophisticated dole. This is called the ‘universal basic income’ (UBI) or, alternatively, a guaranteed annual income, money that the state or a public organization hands out just because you’re a citizen. Martin does get a bit discouraged here, because he runs up against the problem of whether America could tolerate the idea of universal basic income. There are lots of fans of the idea, but mainstream opinion thinks it’s an appalling one. It smells of socialism, and they don’t like socialism in the United States. So the debate about UBI in the States is very heated.

“An AI doesn’t have to hate humans in the way Hollywood often shows them disliking us.”

In Europe, I think that if and when the time comes when large numbers of people are unemployed through no fault of their own we’ll just say, as long as we can afford to, ‘We have to give these people money, because it wouldn’t be civilized to allow lots of people to starve or be reduced to terrible poverty.’ I don’t think we’ll find that concept as hard. Martin is very struck by the difficulty of persuading his fellow countrymen.

This mass unemployment and huge job losses are things he really thinks will happen and that we have to prepare for?

Yes, I think he does. He thinks many people will have to be given a financial top-up. They won’t be able to get enough work to feed themselves.

But the economy will be extremely efficient because everything is done by robots who do it better. In other words, there will be enough wealth to hand around, provided there’s not too much inequality. Is that right?

Yes. The idea is that you’ll have an economy of radical abundance and lots of people in Silicon Valley get very excited about this. The upside is great: Robots do all the work and we have lives of leisure. We spend our time playing sports, being social, improving ourselves, creating art or whatever we want to do. Maybe simply watching television. Everybody’s rich because robots produce so much and so efficiently that nobody has to worry. I don’t think it’s as simple as that. Most importantly, I think the transition from here to there is going to be quite bumpy and if it’s too bumpy, civilizations could break down. We really have to avoid that.

Does he go into that?

No, he doesn’t really. I think this is a fairly typical American approach. They see the difficulty of introducing UBI as so massive that they don’t see very far beyond it. To me, there are all sorts of problems beyond UBI. If you have a society where AIs are producing so much wealth that lots of humans don’t need to work, you still have massive problems. Where do people find meaning in their lives? How do we allocate the best resources — the beachfront properties, the Aston Martins and so on?

Most important of all, how do we handle the new type of inequality? We’ve got lots of inequality now. I’m one of those people who think it’s not the biggest problem we have. But in a world where the majority of people don’t work, and aren’t involved in the economic life of the country at all, and there’s a minority of people who own all the AIs and, because they own the AIs, own pretty much everything else — they might quickly become a different species. Not only will they own the economic means of production, but they’ll also have privileged access to all the new technologies that are coming along for human improvement: cognitive improvement, physical improvement. They’ll become a different species quickly in the sense that they’ll just be much smarter and faster than anybody else.

I think a society where you get a fracturing like that is very dangerous. One of my favourite writers at the moment is Yuval Harari. He’s got a book coming out shortly where he talks about humanity splitting into two species or groups. He calls them the Gods and the Useless, which is brutal. I wouldn’t be that brutal. But it’s a potential scenario for the future that we have to work out how to avoid.

Tell me about your fourth book, The Second Machine Age, which is by two MIT professors, Erik Brynjolfsson and Andrew McAfee.

This book also points out how massive the improvements in AI are and how we’re entering a new age of automation. It’s no longer just muscle power but also cognitive skills that are being automated, and we need to plan for what comes next. Brynjolfsson and McAfee think that if we get the transition right, people will still work full-time. They will still be able to earn most of their income through work. We will do different types of jobs but full employment will still be a possibility. That’s the big difference between this book and the previous one. I’m more radical than either of them, because I think massive and inevitable unemployment is a distinct possibility.

The sorts of jobs Brynjolfsson and McAfee think we’ll be doing involve working with computers. For example, think about a doctor. At the moment, if there’s anything wrong with you, you go and see a doctor. You wait an hour — because there’s a law somewhere, I don’t know who wrote it, that you have to wait an hour for a doctor. You go into a surgery and the doctor doesn’t actually see you, because she’s busy typing into a computer, but you spend a few minutes there, and she says, “Take some of that and come and see me tomorrow if it’s still a problem.” In future, what will happen is you’ll breathe into your smartphone. It will analyze the components of your breath and say, “You’ve eaten too much spinach, that’s why you’ve got an upset stomach. Go and eat some bread.” It will give you a nice, quick diagnosis and will sort out 99% of problems.

In my view of the future, that means a lot of doctors won’t have a job. In Brynjolfsson and McAfee’s view of the future, it simply means that the appointments you still have with the doctor are about more serious things. She spends longer with you. So the doctor is doing as much work as before, but doing higher-value work, and we are all getting an enormously better service, in fact. This is one of the great things about AI: it could turn healthcare into genuine healthcare. At the moment, healthcare is actually sickcare. We spend something like 80%-90% of the money that we ever spend on each person in the last year of their life. That’s not wasted money, because you wouldn’t want to not do it, but it’s a bad allocation of resources. Healthcare could become, thanks to AI, an industry for keeping people healthy as long as possible.

Brynjolfsson and McAfee also think that people won’t work in call centers or on waste collection services anymore. We’ll work caring for each other, providing empathy and, where appropriate, physical care for each other. We’ll all be in touchy-feely jobs and become artists. I think that’s their view of the future.

The last book you’ve chosen is Permutation City, the only sci-fi novel on your list. Tell me about it. 

This book is by Greg Egan, an Australian science fiction writer. He’s well known in circles that think about what I’ve been talking about and pretty much unknown outside them. To my mind, he’s written better about AI than any other writer, because he takes it seriously. He recognizes it represents enormous change.

Permutation City is about a time in history when uploading becomes possible and very rich people can upload themselves into machines which operate quickly and in real time. Poorer people have to upload themselves into machines which process very slowly and so they live very slow versions of life. As novels have to, it involves all sorts of plots and derring-do and so on. It was written quite a long time ago, before the turn of the century, and to me, the interesting thing about it is that it’s a book which made me think, ‘Oh! Is this going to be possible sometime soon?’

He has some nice vignettes in it. There’s a chap who is uploaded and finds facing immortality rather daunting. So he decides to rewire his own mind inside the computer to find enormous satisfaction in doing very simple things. He spends several years of subjective time carving chair legs. He programs himself to derive huge satisfaction—utter fulfillment—out of carving the perfect chair leg. Once he does one, he starts again and ends up with a virtual room absolutely full of chair legs…

What does that mean for the future of humanity?

I don’t know. As a science fiction author I would love to write a book about the life of a superintelligent creature, but I’ve got a problem, which is that I’m not superintelligent. So I just don’t know what they would think about. What does a superintelligence think about in the shower? Could an ant write a book about the inner life of a human? Probably not. I’d be trying to do the same thing. What Egan does is take seriously the idea that humanity is going to change beyond all recognition if and when we get AGI.

Is it a great read or is it more that the issues it raises are interesting?

I think both. I certainly enjoyed it. He’s a good writer.

Are there any films around at the moment that you think are good on AI? 

Yes. I think the best two are, firstly, Her by Spike Jonze, with Joaquin Phoenix. Jonze pretends it’s just an ordinary romance, but it isn’t; it’s definitely a film about superintelligence. It’s nice in a bunch of ways: it’s quite realistic and quite plausible, which most films about AI are not. Also, the AI is very benign, it doesn’t want to kill us all. So that’s a nice change. The other one is Transcendence with Johnny Depp. It got terrible reviews and it does have some flaws, but it’s a really, really good movie. It shows a character being uploaded, having been shot and fatally wounded. The nature of the uploading is a bit cartoonish, but it’s a really interesting movie. I would not recommend Chappie. Nobody should ever go and see Chappie. It’s appalling.

Reflecting on your books and everything you’ve said today, it makes climate change seem like a minor issue…

You know what? I think that’s true. Climate change might warm us up a bit, but this stuff could kill us.

Interview by Sophie Roell, Editor

November 11, 2015



Calum Chace


Calum Chace is the author of Surviving AI: The Promise and Peril of Artificial Intelligence and a novel on the same topic, Pandora's Brain.