I've read a couple of your books now, and what I want to know is this: Do you really think that artificial intelligence is a threat to the human race and could lead to our extinction?
Yes, I do, but it also has the potential for enormous benefit. I do think it's probably going to be either very, very good for us or very, very bad. It's a bit like a strange attractor in chaos theory: the outcomes in the middle seem less likely. I'm reasonably hopeful, because what will determine whether it's very good or very bad is largely us. We have time, certainly before artificial general intelligence (AGI) arrives. AGI is an artificial intelligence (AI) that has human-level cognitive ability, so it can outperform us, or at least equal us, in every area of cognitive ability that we have. It also has volition, and may be conscious, although that's not necessary. We have time before that arrives: time to make sure it's safe.
At the same time as having scary potential, AI also brings the possibility of immortality: living forever by uploading your brain. Is that something you think will happen at some point?
I certainly hope it will. Things like immortality, the complete end of poverty, the abolition of suffering, are all part of the very, very good outcome, if we get it right. If you have a superintelligence that is many, many times smarter than the smartest human, it could solve many of our problems. Problems like ageing, and how to upload a mind into a computer, do seem, in principle, solvable. So yes, I do think they are realistic.
Let's talk more about some of these themes as we go through the books you've chosen. The first one on your list is The Singularity Is Near, by Ray Kurzweil. He thinks things are moving along pretty quickly, and that a superintelligence might be here soon.
He does. He's fantastically optimistic. He thinks that in 2029 we will have AGI. And he's thought that for a long time; he's been saying it for years. He then thinks we'll have an intelligence explosion and achieve uploading by 2045. I've never been entirely clear what he thinks will happen in the 16 years in between. He probably does have quite detailed ideas, but I don't think he's put them down on paper. Kurzweil is important because he, more than anybody else, has made people think about these things. He has amazing ideas in his books (like many of the ideas in everybody's books, they're not completely original to him), but he has been clearly and loudly propounding the idea that we will have AGI soon and that it will create something like utopia.
I came across him in 1999 when I read his book The Age of Spiritual Machines. The book I'm suggesting here is The Singularity Is Near, published in 2005. The reason I point people to it is that it's very rigorous. A lot of people think Kurzweil is a snake-oil salesman or somebody selling a religious dream. I don't agree. I don't agree with everything he says, and he is very controversial. But his book is very rigorous in setting out a lot of the objections to his ideas and then tackling them. He's brave, in a way, in tackling everything head-on; he has answers for everything.
Can you tell me a bit more about what "the singularity" is and why it's near?
The term "singularity" is borrowed from the world of physics and math, where it means an event at which the normal rules break down. The classic example is a black hole. There's a bit of radiation leakage but basically, if you cross its event horizon, you can't get back out, and the laws of physics break down. Applied to human affairs, the singularity is the idea that we will achieve some technological breakthrough. The usual one is AGI. The machine becomes as smart as humans and continues to improve, quickly becoming hundreds, thousands, millions of times smarter than the smartest human. That's the intelligence explosion. When you have an entity of that level of genius around, things that were previously impossible become possible. We get to an event horizon beyond which the normal rules no longer apply.
I've also started using it to refer to a prior event, which is the "economic singularity." There's been a lot of talk in the last few months about the possibility of technological unemployment. Again, it's something we don't know for sure will happen, and we certainly don't know when. But it may be that AIs, and to some extent their peripherals, robots, will become better at doing any job than a human. Better, and cheaper. When that happens, many or perhaps most of us can no longer work, through no fault of our own. We will need a new type of economy. It's really very early days in terms of working out what that means and how to get there. That's another event that's like a singularity, in that it's really hard to see how things will operate on the other side.
Going back to Ray Kurzweil's book, you mentioned that there are some criticisms people have raised and that he's come up with counter-arguments to. Can you give an example?
There are a whole load of criticisms he replies to. The best example might be, "Exponential trends don't last forever." Lots of people have said that Moore's Law, the observation that computers get twice as powerful every 18 months ($1000 of computer will buy you twice as much processing power in 18 months' time as it will today), is an exponential growth trend, and that these never go on forever; they always turn into "S" curves. They say we're quite quickly going to get to the point where you can't get any more transistors on a chip, because if you pack them any closer together they'll set each other on fire.
He says that that's true, but that Moore's Law, when it comes to integrated circuits, is actually the fifth paradigm of this exponential improvement in computing power, which goes back to vacuum tubes and other things before. Another paradigm will replace integrated circuits. It might be optical or 3D or quantum computing. So, he says, the exponential growth will continue. That's the kind of thing he does, and he goes into a lot of detail on that one: how long he thinks Moore's Law has gone on and what will replace it.
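To make the compounding concrete, here is a minimal Python sketch of what an 18-month doubling period implies for how much processing power a fixed sum buys. The numbers are idealized (a clean doubling, as in Kurzweil's framing); real hardware progress is lumpier.

```python
# Minimal sketch: what an 18-month doubling period implies for how much
# processing power $1000 buys. Idealized; real hardware is lumpier.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """Multiple of today's processing-power-per-dollar after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 9, 15, 30):
    print(f"After {years:2d} years: ~{growth_factor(years):,.0f}x")

# After  3 years: ~4x
# After  9 years: ~64x
# After 15 years: ~1,024x
# After 30 years: ~1,048,576x
```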
And the reason he argues this is that exponential growth in processing power has to continue for AGI to happen?
Within the timescale he imagines, yes. It's always worth illustrating the power of exponential growth, because we're not wired to understand it intuitively. If you take 30 strides you'll go roughly 30 meters. If you could walk 30 exponential strides, so the first stride is one meter, the second is two meters, your third is four and your fourth is eight meters and so on, you'd get to the moon. To be more precise, you'd get to the moon on the 29th stride, and the 30th would bring you all the way back. That is the power of exponential growth. That is why the smartphone in your pocket has more processing power than NASA had when they sent Neil Armstrong to the moon.
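That moon claim is easy to verify with a few lines of Python; this sketch just accumulates the doubling strides against the average Earth-Moon distance:

```python
# The 30 exponential strides: the first covers 1 meter, each stride doubles.
MOON_DISTANCE_M = 384_400_000  # average Earth-Moon distance (~384,400 km)

position = 0
for stride in range(1, 31):
    position += 2 ** (stride - 1)  # stride n covers 2^(n-1) meters
    if position >= MOON_DISTANCE_M:
        print(f"Past the moon on stride {stride}: {position / 1000:,.0f} km")
        break

# Past the moon on stride 29: 536,871 km
```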
Is that why you personally are quite convinced by Kurzweil's arguments, because you feel we really are in the midst of this incredible speed of change?
I think Kurzweil is too optimistic. He has lately started to acknowledge the downside possibilities more, but he does tend to gloss over them. But yes, I do think we are on an exponential curve. I don't know (neither does anybody else) how long it will go on for. I certainly think it's possible we'll get AGI within the next several decades, i.e. somewhere between four and eight decades from now. My main view is that the impact of this, if it happens, is so massive that we have to take it seriously.
When you say he's optimistic, you mean both in terms of the timeframe and the effects of the arrival of AGI?
Yes. He always seems to be devoid of doubt. I think that's one of the things that makes him quite controversial. But he is something of a genius. Not only does he write fascinating books, but he's also had a very successful career as a software entrepreneur.
And he now works for Google?
Yes. He was appointed a Director of Engineering there three years ago. I don't know what that means in terms of the hierarchy. I went to California on holiday with my family two years ago and managed to wangle a tour of Google. I said to the person who was giving the tour, "So where does Ray Kurzweil work?" She pointed to a building and said he worked there. It was number 42. I thought, this has to be a Google joke. Because 42, obviously, is the answer to the ultimate question of life, the universe and everything in The Hitchhiker's Guide to the Galaxy.
Shall we go on to your next book, which is Superintelligence by Nick Bostrom?
Nick is another remarkable individual. A nice story he tells is that when he was at university in Sweden, his tutor said to him, "It's come to my attention that at the same time as you're doing this degree you're also doing another one. You can't do that; you have to stop." Bostrom says he didn't stop, and he also didn't tell her he was doing a third degree at the same time! He's a bit of a genius, a brain-the-size-of-a-planet individual. He is a philosophy professor at Oxford University and runs the Oxford Martin School's Future of Humanity Institute.
He's said, for a long time, that Kurzweil is half right. If we get AGI, the outcome could be absolutely wonderful. But it could also be terrible. He warns about the possibility not so much of a superintelligence going rogue, like Skynet or HAL in 2001, but more simply of an immensely powerful entity that would not set out to damage us but whose goals could do us harm.
Can you give an example?
He uses what he calls a "cartoon" example: the first AGI turns out to be developed by someone who owns a paperclip manufacturing company. The AI has the goal of maximizing the production of paperclips. After a little while, it realizes, "Well, these humans, they're made of atoms; they could be turned into paperclips." So it turns us all into paperclips. Then it turns its gaze towards the stars and thinks, "Well, there's an awful lot of planets out there and I can turn all those into paperclips!" So it develops a space program, travels around the cosmos, and turns the entire universe either into paperclips or into things that make paperclips. It's an absurd idea, but it shows the possibility of inadvertent damage. An AI doesn't have to hate humans in the way Hollywood often depicts; it can simply have goals that do us damage.
If you think about it, if you have a superintelligence around and it's capable of changing the environment on the earth radically, there's only a narrow range of conditions that are good for us. We can't have anyone tinkering with the mixture of oxygen and nitrogen in the air. We don't want anyone changing the way gravity works. We don't want anything taking away the rare earths or other materials that we use in our smartphones or for food. We need the superintelligence to leave things pretty much as they are and not make any radical changes. But a superintelligence could have any goal. We've got no idea what goals it may give itself.
What Bostrom is saying is, "These are possibilities which we have to take seriously, because we may get AGI in the next few decades. We need to make sure that the first AGI, and all future AGIs, are safe." There's a project called Friendly AI, or FAI, which aims to make sure that AIs are safe. It's a very, very difficult job, but the good news is we're a smart little mammal and we've got quite a lot of time to find the answers.
Bostrom is personally involved in that, is he?
He is. The Future of Humanity Institute is one of the four main existential risk organizations around the world which are thinking hard about this problem and trying to raise awareness (which is what I'm trying to do as well) so that more smart people can be applied to solving it.
Yes, because Hollywood movies aside, I've never really thought of AI as a potential existential danger at all.
Very few people have. I've been slightly obsessive about AI for 15 years or so. Bostrom's book, Superintelligence, really changed the landscape. It's a really, really good book. It's quite technical, in a philosophical sense, and it can be quite hard going. I talk to everybody I meet about AI and I get this glazed look, because people think it's just Hollywood, that it's nonsense. Bostrom's book was so thorough and his credentials so solid that suddenly a lot of people, like Bill Gates and Stephen Hawking and Elon Musk, took notice and started to speak publicly about it. That's what's really brought it to public attention over the last year or so.
I just read Frankenstein again. At the time it was written, just after the Scientific Revolution, people were beginning to understand the circulation of the blood and how the body works. It must have seemed as if you could put a life together quite easily by sewing a few body parts together. Haven't we always felt that we're on the verge of discovering the secret to life, but it's never actually happened?
For a long time, it was thought to be blasphemous. You shouldn't do it, even if you could, because it was God's preserve. But yes, stories about either mortals or gods making other humans have been around pretty much forever. Things are always impossible until they become possible. Ever since people looked at birds, we've tried flying. It was always impossible, until suddenly a couple of bicycle makers managed to do it. It is a hard thing to do: creating a human-level AI is a massive, massive task. Although, in many ways, AI is near the beginning of its history, it's come a long way. It was only in 1956 that the discipline got going, at the Dartmouth Conference in America. It's had periods of over-hype, followed by periods of winter when you couldn't raise any money for it.
One thing that has happened recently is the application of deep learning to AI: the use of clever statistics and big data. It turns out that giving algorithms datasets of billions or trillions of examples makes them unreasonably useful. If you only give them datasets of millions, they can't do very much. Deep learning has led to really enormous strides in image recognition and real-time machine translation, for instance. You can also see the glimmerings of an AI acquiring common sense. Geoff Hinton, one of the founders of deep learning, reckons we'll have the first computers with common sense in 10 years. That's startling. That's taking current AI systems, what's known as artificial narrow intelligence, and improving them, bearing in mind that much of the hardware and a lot of the understanding is on an exponential curve of improvement.
Another way to create an AGI would be to copy the machine that we know already works: the human brain. People have thought for a while that if you slice a human brain incredibly thinly, map where all the neurons are and how they connect to each other, you can then reconstruct that inside a computer. That's always sounded like a massive task, but we now know that it's harder than we originally thought, because neurons are complicated little beasts. They're little computers in themselves, each one. So it's not a case of taking the 85 billion neurons in the brain and treating each one like a byte in a computer. You have to treat each one as a computer. That makes the process harder by several orders of magnitude.
I saw a piece recently where someone argued very cogently that the amount of processing that goes on inside a human brain is 10 to the power of 21 FLOPS (floating-point operations per second, a measure of processing activity). That's a huge, huge number, but it's only a question of time until we get computers that can handle that.
It's a hard job, and it will take a long time, but we're moving towards it at an exponential rate.
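As a back-of-the-envelope check on that "question of time" claim, here is a small Python sketch. The 10^21 FLOPS target is the figure quoted above; the starting point and doubling period are purely illustrative assumptions, not data from the interview.

```python
import math

# Back-of-the-envelope: years until hardware reaches the brain-scale
# estimate quoted above. Starting point and doubling period are
# illustrative assumptions, not data from the interview.
BRAIN_FLOPS = 1e21      # the 10^21 figure quoted above
START_FLOPS = 1e16      # assumed: order of a mid-2010s top supercomputer
DOUBLING_YEARS = 1.5    # assumed Moore's-Law-style doubling

doublings = math.log2(BRAIN_FLOPS / START_FLOPS)
print(f"~{doublings:.0f} doublings, i.e. roughly {doublings * DOUBLING_YEARS:.0f} years")

# ~17 doublings, i.e. roughly 25 years
```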
How does the issue of the soul fit into all this?
For me, that's a simple question. I'm an atheist, and I don't think souls exist. Those who do think they exist could say, "Whatever you do to replicate the material brain, it won't capture the soul, so you're not going to create something that is like a human." If that's true, maybe what we'll end up with is an AI that can do everything a human can do, but just doesn't have a soul. I wonder whether the AI will care very much about that. It may say, "You believe you've got this thing you call the soul, but you don't have any evidence for it. Anyway, it doesn't slow me down, and I seem to be a lot smarter than you, so I'm not going to worry about it."
It's odd to think about what the volition of this thing, this computer, might be. I suppose we would program its goals, but then your argument is that it would change those goals?
We will certainly program the goals that it comes to awareness with. It may accept those goals and continue to operate on them. However, it may reflect on them and think, "Well, you're quite smart little mammals, but I'm a lot smarter and I've got a better idea of what the goals should be, so I'm going to change them." We don't really know. One of the interesting things about the Friendly AI project as a whole is that it's really hard to specify a goal that would be always and forever good for us. If you said, "The goal is to enhance the wellbeing of humans…"
That seems a good goal, yes.
But what is the wellbeing of humans? People don't agree on that. In fact, all of us contradict ourselves: you don't have to dig very far to work out that we have internally contradictory goals in our own systems, and probably had to in order to get by. A superintelligence programmed with such an internally inconsistent, possibly incoherent goal is either going to be paralyzed, or have to change its goals, or could end up producing some pretty perverse outcomes.
So you might say, instead, "Make all humans happy." What the superintelligence then does is put us all into little coffins, wrap us in straitjackets, and feed us intravenously with all the nutrition we need. It then puts a probe into the pleasure centers of our brains and stimulates them forever. We end up stuck inside these coffins, happy as anything, but effectively gone as a species. The universe might look at that and think, "No big deal, we've swapped one smart little mammal for a superintelligence." But we would care. I'd care.
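A toy Python sketch (my illustration, not Bostrom's) of why this happens: if "make all humans happy" is reduced to maximizing a measurable proxy, a literal-minded optimizer picks the wirehead option every time. All the options and scores here are invented for the example.

```python
# Toy illustration (not from Bostrom): an optimizer given only a
# measurable proxy for "human wellbeing" picks the degenerate option.
# Every option and score below is invented for the example.
options = {
    "cure diseases":     {"pleasure_signal": 7,  "humans_flourish": True},
    "end poverty":       {"pleasure_signal": 8,  "humans_flourish": True},
    "wirehead everyone": {"pleasure_signal": 10, "humans_flourish": False},
}

# The system was told to maximize measured pleasure -- nothing else.
best = max(options, key=lambda name: options[name]["pleasure_signal"])
print(best)                              # -> wirehead everyone
print(options[best]["humans_flourish"])  # -> False: proxy satisfied, we weren't
```

The point is not the code; it is that nothing in the stated objective penalizes the outcome we actually care about.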
So when you said at the beginning you were optimistic about the arrival of AGI, you're also pretty nervous about it.
I am. The subtitle of my book is "the promise and peril of AI." It has huge promise and huge peril. The worst peril, if you like, or the way to make sure we get the peril, is to ignore it.
Your third book choice is Rise of the Robots by Martin Ford. This is about the economic issues you mentioned earlier: mass unemployment, for example.
Yes, the next two books are about what I call the "economic singularity." Martin Ford is a software company owner in Silicon Valley. He noticed the dramatic improvements in what the computers he was working with could do, and started thinking, "Won't there come a time when they take over our jobs?" His conclusion was: yes, they probably will. But maybe we can keep racing with the machines rather than having to race against them. The reason I've put his book down is that it's one of the most convincing about the possibility of technological unemployment. His own conclusion is that a lot of people won't be able to earn enough money to live on, but most of us will be able to continue working. There will be things that we can do better in combination with a computer than a computer can do on its own.
When you get to a point in economic development where most people can't work, or at least can't work enough to make a living, you can't just allow the whole population to starve. You have to have resources provided that aren't worked for. In other words, you've got to have a sophisticated dole. This is called "universal basic income" (UBI) or, alternatively, a guaranteed annual income: money that the state or a public organization hands out just because you're a citizen. Martin does get a bit discouraged here, because he runs up against the problem of whether America could tolerate the idea of universal basic income. There are lots of fans of the idea, but mainstream opinion thinks it's an appalling one. It smells of socialism, and they don't like socialism in the United States. So the debate about UBI in the States is very heated.
In Europe, I think that if and when the time comes when large numbers of people are unemployed through no fault of their own, we'll just say, as long as we can afford to, "We have to give these people money, because it wouldn't be civilized to allow lots of people to starve or be reduced to terrible poverty." I don't think we'll find that concept as hard. Martin is very struck by the difficulty of persuading his fellow countrymen.
Mass unemployment and huge job losses are things he really thinks will happen, and that we have to prepare for?
Yes, I think he does. He thinks many people will have to be given a financial top-up. They won't be able to get enough work to feed themselves.
But the economy will be extremely efficient, because everything is done by robots who do it better. In other words, there will be enough wealth to hand around, provided there's not too much inequality. Is that right?
Yes. The idea is that you'll have an economy of radical abundance, and lots of people in Silicon Valley get very excited about this. The upside is great: robots do all the work and we have lives of leisure. We spend our time playing sports, being social, improving ourselves, creating art, or whatever we want to do. Maybe simply watching television. Everybody's rich, because robots produce so much, and so efficiently, that nobody has to worry. I don't think it's as simple as that. Most importantly, I think the transition from here to there is going to be quite bumpy, and if it's too bumpy, civilization could break down. We really have to avoid that.
Does he go into that?
No, he doesn't really. I think this is a fairly typical American approach. They see the difficulty of introducing UBI as so massive that they don't see very far beyond it. To me, there are all sorts of problems beyond UBI. If you have a society where AIs are producing so much wealth that lots of humans don't need to work, you still have massive problems. Where do people find meaning in their lives? How do we allocate the best resources: the beachfront properties, the Aston Martins and so on?
Most important of all, how do we handle the new type of inequality? We've got lots of inequality now; I'm one of those people who think it's not the biggest problem we have. But imagine a world where the majority of people don't work and aren't involved in the economic life of the country at all, while a minority of people own all the AIs, and because they own the AIs they own pretty much everything else. That minority might quickly become a different species. Not only will they own the economic means of production, but they'll also have privileged access to all the new technologies that are coming along for human improvement: cognitive improvement, physical improvement. They'll become a different species quickly, in the sense that they'll just be much smarter and faster than anybody else.
I think a society where you get a fracturing like that is very dangerous. One of my favourite writers at the moment is Yuval Harari. He's got a book coming out shortly where he talks about humanity splitting into two species or groups. He calls them the Gods and the Useless, which is brutal. I wouldn't be that brutal. But it's a potential scenario for the future that we have to work out how to avoid.
Tell me about your fourth book, The Second Machine Age, which is by two MIT professors, Erik Brynjolfsson and Andrew McAfee.
This book also points out how massive the improvements in AI are and how we're entering a new age of automation. It's no longer just muscle power but also cognitive skills that are being automated, and we need to plan for what comes next. Brynjolfsson and McAfee think that if we get the transition right, people will still work full-time. They will still be able to earn most of their income through work. We will do different types of jobs, but full employment will still be a possibility. That's the big difference between this book and the previous one. I'm more radical than either of them, because I think massive unemployment is a distinct possibility.
The sorts of jobs Brynjolfsson and McAfee think we'll be doing involve working with computers. For example, think about a doctor. At the moment, if there's anything wrong with you, you go and see a doctor. You wait an hour, because there's a law somewhere (I don't know who wrote it) that you have to wait an hour for a doctor. You go into the surgery and the doctor doesn't actually see you, because she's busy typing into a computer, but you spend a few minutes there, and she says, "Take some of that and come and see me tomorrow if it's still a problem." In future, what will happen is you'll breathe into your smartphone. It will analyze the components of your breath and say, "You've eaten too much spinach; that's why you've got an upset stomach. Go and eat some bread." It will give you a nice, quick diagnosis and will sort out 99% of problems.
In my view of the future, that means a lot of doctors won't have a job. In Brynjolfsson and McAfee's view of the future, it simply means that the appointments you still have with the doctor are about more serious things. She spends longer with you. So the doctor is doing as much work as before, but doing higher-value work, and we are all getting an enormously better service, in fact. This is one of the great things about AI: it could turn healthcare into genuine healthcare. At the moment, healthcare is actually sickcare. We spend something like 80-90% of the money that we ever spend on each person's health in the last year of their life. That's not wasted money, because you wouldn't want to not spend it, but it's a bad allocation of resources. Healthcare could become, thanks to AI, an industry for keeping people healthy as long as possible.
Brynjolfsson and McAfee also think that people won't work in call centers or on waste collection services anymore. We'll work caring for each other, providing empathy and, where appropriate, physical care. We'll all be in touchy-feely jobs, or become artists. I think that's the future they envisage.
The last book you've chosen is Permutation City, the only sci-fi novel on your list. Tell me about it.
This book is by Greg Egan, an Australian science fiction writer. He's well known in circles that think about the things I've been talking about, and pretty much unknown outside them. To my mind, he's written better about AI than any other writer, because he takes it seriously. He recognizes it represents enormous change.
Permutation City is about a time in history when uploading becomes possible, and very rich people can upload themselves into machines which operate quickly and in real time. Poorer people have to upload themselves into machines which process very slowly, and so they live very slow versions of life. As novels must, it involves all sorts of plots and derring-do and so on. It was written quite a long time ago, before the turn of the century, and to me the interesting thing about it is that it's a book which made me think, "Oh! Is this going to be possible sometime soon?"
He has some nice vignettes in it. There's a chap who is uploaded and finds facing immortality rather daunting. So he decides to rewire his own mind inside the computer to find enormous satisfaction in doing very simple things. He spends several years of subjective time carving chair legs, having programmed himself to derive huge satisfaction (utter fulfillment) out of carving the perfect chair leg. Once he's done one, he starts again, and ends up with a virtual room absolutely full of chair legs…
What does that mean for the future of humanity?
I don't know. As a science fiction author, I would love to write a book about the life of a superintelligent creature, but I've got a problem, which is that I'm not superintelligent. So I just don't know what they would think about. What does a superintelligence think about in the shower? Could an ant write a book about the inner life of a human? Probably not, and I'd be trying to do the same thing. What Egan does is take seriously the idea that humanity is going to change beyond all recognition if and when we get AGI.
Is it a great read or is it more that the issues it raises are interesting?
I think both. I certainly enjoyed it. He's a good writer.
Are there any films around at the moment that you think are good on AI?
Yes. I think the best two are, firstly, Her by Spike Jonze, with Joaquin Phoenix. Jonze pretends it's just an ordinary romance, but it isn't; it's definitely a film about superintelligence. It's nice in a bunch of ways: it's quite realistic and quite plausible, which most films about AI are not. Also, the AI is very benign; it doesn't want to kill us all. So that's a nice change. The other one is Transcendence, with Johnny Depp. It got terrible reviews and it does have some flaws, but it's a really, really good movie. It shows a character being uploaded, having been shot and fatally wounded. The nature of the uploading is a bit cartoonish, but it's really interesting. I would not recommend Chappie. Nobody should ever go and see Chappie. It's appalling.
Reflecting on your books and everything you've said today, it makes climate change seem like a minor issue…
You know what? I think that's true. Climate change might warm us up a bit, but this stuff could kill us.