Ethics for Artificial Intelligence Books

recommended by Paula Boddington

Advances in artificial intelligence pose a myriad of ethical questions, but the most incisive thinking on this subject says more about humans than it does about machines, says Paula Boddington, philosopher and author of a recent AI ethics textbook. We first spoke to Paula in 2017—a long time ago in a fast-moving field. This week we caught up with her to find out what's happened since then and which new books have taken the conversation over ethics and AI further.

Interview by Nigel Warburton

It’s been more than five years since we first spoke. What developments should we be aware of in terms of AI and ethics?

There is now a gargantuan effort in ethics and in regulation of AI in general. It’s a big area in academia. There’s also been a growth in civil society, with groups such as AlgorithmWatch and many others looking at these issues. It’s fair to say there’s a big rush to regulate. There’s been lots of effort on law and ethics all around the world, by the European Union in particular, with much current attention on the EU AI Act.

Since I wrote my book about developing codes of ethics around AI there has been a proliferation of such codes. There are even people keeping track of all the different codes of ethics that are being produced. The issues themselves haven’t really changed, I think. There is just growing public awareness of what’s going on and the ethical issues involved.

There has also been growth in companies producing AI saying that they’re ‘doing ethics’. We have to be careful about what the word ‘ethics’ actually means. Sometimes it just means, ‘Don’t worry, we’ve got a board that’s looking at this.’

There’s a sort of cyclical nature to the field. Shortly before we spoke last time, around 2015, there was a boom in concern about ethical issues in AI. We’re currently living through what’s probably the next big boom, arising this year, mostly around ChatGPT. Each time, it’s as if the concern is completely new. The concerns are valid. But one thing people can perhaps be reassured about is that people have been beavering away at this for a long time. Even before the panic around 2015, people had been working in computer ethics for a long time. Many of the issues are really issues around the use of human data. And social scientists, and people working in genetics and genomics, have been looking at ethical issues in human data use for a long time as well.

One of the things that encourages me is that although there are serious concerns that AI might act in ways that dehumanize us, it could do exactly the opposite. It could make us look really, really closely at what it is to be a human being and how we want to live. There are always going to be downsides to anything new, but how can we relate to technology in ways that make things better?

Let’s turn to the books you’re recommending that have come out since 2017. The first one is Atlas of AI by Kate Crawford. Can you tell me what that one’s about and why you’re including it?

I’ve deliberately chosen books that are looking at quite different aspects of AI because one of the questions is what we even mean by AI. How do we think about it? How do we conceptualize what it is? Then there are particular questions about the minutiae of how the technology works. One of the things I really like about Kate Crawford’s book is that she’s looking critically at the way the entire system of AI works and at the impact of AI on a global scale.

So we can think about AI just in terms of the software, what it’s capable of doing. That’s one element of it. But AI is not just software floating around in the ether, it’s actually manifest in hardware, in technology, in data banks, and also in a massive amount of human labor around the world. She looks at questions like mining and workers employed in terrible conditions for very little money.

She also looks at the whole ideology behind AI and technology. Notions of efficiency that might be applied to machines are now coming into the workforce. There are questions about surveillance of workers and concern about how employees at places like Amazon and Uber, but also elsewhere, are being tracked. It’s technology that is enabling us to do it. A lot of that technology is not AI, per se. It’s not a superintelligence. But it is the logic of the technology, the idea that we have to increase outputs and efficiency. Workers are being treated like machines.

So is she mainly focused on the present or the future?

She’s focused on trying to understand and analyze the issues, so that people really understand the questions, and the depth of the problem. One of the main reasons why AI has developed so fast in recent years is that it relies upon masses and masses of data. There’s now an unbelievable amount of data available and it’s being collected all the time. She looks at the assumptions driving it. Can we simply understand the material world around us through data? Can we really understand human beings through data? What is it to be a human being understood solely through the data that’s being collected around us?

So those are issues I think are really important to understand, including how they’re embedded in the technology. Also, any philosopher interested in this will see immediately that collecting data assumes a process of classifying the world. You’re classifying the data in certain ways. That raises powerful questions about how we’re even being seen.

She’s also got a chapter looking at affect. This is about using AI to detect our emotions, and then maybe also to manipulate them. It builds on work about how we interpret human emotions, work that has itself come under considerable critique.

And why does she use the word ‘atlas’: is she going around the world?

The subtitle is ‘Power, Politics, and the Planetary Costs of Artificial Intelligence.’ One of the things I like about it is that it’s very readable. She tells stories about how she drove to this place out in the middle of a desert where there are data centers. She’s taking you on journeys to different places around the world and within workplaces. In a sense, it’s really heavy reading, because it’s serious stuff, but it’s easy to read. She makes it really accessible.

Okay, so let’s go on to the next book that you want to add to this list of AI ethics books. This is The Ethical Algorithm by Michael Kearns and Aaron Roth. What’s good about this one?

I really rate this book. It’s more technical, because Kearns and Roth are coming from the engineering perspective. They both work in computing and algorithmic design. They’re really good at communicating fairly complex stuff about algorithms—with lots of diagrams and examples, stories and graphs—to people who might be slightly frightened of that stuff. They’re really good at explaining it. They’ve also done some really good lectures that you can find on YouTube, which you can watch before plunging into the book.

By now, a lot of people are familiar with concerns about bias in algorithms. Kearns and Roth are looking at how the exact details of the design of the algorithms might be understood to incorporate, for example, aspects of privacy, or aspects of fairness, and how there are tradeoffs between those. We can’t have it all, in a sense.

There are technical mathematical details that they go into, really quite clearly and helpfully, explaining how there may be tradeoffs between different groups. They’re asking philosophers and people working in different fields: ‘Here’s what we can tell you about what the algorithms are doing, which notion of fairness do you like best in these particular circumstances?’ So in terms of being as transparent as possible about what’s going on in AI, and then helping to have a conversation with people who have got things to contribute from other disciplines, I think their book is just brilliant.
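
To make that kind of tradeoff concrete, here is a minimal sketch, with invented scores and outcomes rather than anything from the book, comparing two common fairness measures: demographic parity (do the two groups get selected at equal rates?) and equal opportunity (do the genuinely qualified people in each group get selected at equal rates?).

```python
# Toy illustration of fairness tradeoffs. All data here is invented.

def rates(scores, labels, threshold):
    """Return (selection_rate, true_positive_rate) at a given threshold."""
    selected = [s >= threshold for s in scores]
    sel_rate = sum(selected) / len(selected)
    # Among the genuinely qualified (label == 1), how many were selected?
    qualified_selected = [sel for sel, y in zip(selected, labels) if y == 1]
    tpr = sum(qualified_selected) / max(len(qualified_selected), 1)
    return sel_rate, tpr

# Invented risk scores and true outcomes for two groups, A and B.
scores_a = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels_a = [1, 1, 0, 1, 0, 0]
scores_b = [0.6, 0.5, 0.5, 0.4, 0.2, 0.1]
labels_b = [1, 1, 0, 0, 0, 0]

for thr in (0.45, 0.55):
    sel_a, tpr_a = rates(scores_a, labels_a, thr)
    sel_b, tpr_b = rates(scores_b, labels_b, thr)
    print(f"threshold={thr}")
    print(f"  demographic parity gap: {abs(sel_a - sel_b):.2f}")
    print(f"  equal opportunity gap : {abs(tpr_a - tpr_b):.2f}")
```

In this toy data, the threshold that equalizes selection rates leaves a large gap in true positive rates, and the threshold that narrows the true-positive gap opens a selection-rate gap. Which gap matters more in a given circumstance is exactly the kind of question Kearns and Roth want to hand back to ethicists and other disciplines.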

Again, it’s not incredibly long. It’s fewer than 200 pages, and the pages are not that dense. It’s not so daunting that people coming from a non-mathematical, non-technical background might think, ‘I couldn’t possibly read this.’ Many people could easily get a lot out of it because they’ve got plenty of clear examples.

This book is a good example of real developments in the conversation. It’s not replacing the other books I recommended. It’s advancing the way we help people look at the technical details of what’s going on in algorithms and the challenge of fairness.

Let’s go on to the last new book you’ve chosen, which is by Stuart Russell, co-author of the most popular AI textbook. This book, Human Compatible, is for the general public, though.

Stuart Russell is in a different grand tradition of worrying about AI, where a superintelligence takes us over and turns us all into fodder or whatever it wants. Elsewhere on Five Books, someone recommended Nick Bostrom’s Superintelligence. Bostrom was concerned with what’s become known as ‘perverse instantiation.’ It’s the idea that we get a machine to do something and it does something that we didn’t intend. It’s like The Sorcerer’s Apprentice, with a machine following instructions to the letter, and you end up in a terrible situation. Bostrom famously envisaged a superintelligence instructed to make paperclips that would just carry on trying to make as many as possible and eventually turn us all into paperclips.

Stuart Russell is a leading person in AI and he really knows what he’s talking about. He’s turning his attention to how we might engineer AI so that it doesn’t end up turning us all into paperclips. He’s building on Bostrom and giving a different answer.

What I like about Stuart Russell’s book is that it’s terribly clear. I actually disagree with many of the philosophical suppositions that he makes, but he spells them out incredibly clearly. There’s also a really good reason for reading books that you might strongly disagree with, because it can help to further debate. You can work out why you disagree. One of the big dangers of AI—and you can see this with things like ChatGPT—is that it makes us lazy or controls us in too many ways. We really need to stress our human ability to think for ourselves. Disagreement is really important in ethics. It’s unlikely we’re ever going to completely agree, but if we can understand the basis of disagreement, then that’s the best way forward, really, for maintaining our humanity.

There are some people working in AI who are not too bothered by the prospect that AI might eradicate humans, because they don’t think that humans are the be-all and end-all of value. They think that it might be better to be transhumanist or post-humanist or that there could be more value in the world if we were somehow all replaced by machines.

Stuart Russell is on the side of humans. He’s trying to work out how we can develop AI to make certain that it’s always going to be at least roughly aligned with human values, so that humans are going to be kept at least reasonably happy with whatever it is that the AI is doing. I’m with Stuart Russell, there, I’m afraid: I’m on the side of humans.

What’s his solution?

His solution is to try to engineer AI so that it’s always like a tool, it’s always a machine for us. It’s always got to keep an eye on human preferences and also not assume it knows what human preferences are. We have to make it always doubt that it’s got it right, to keep checking in with humans about what it is they actually want and that it’s really understood what it is they’re trying to do. In The Sorcerer’s Apprentice, it would be as if the broom stopped and said, ‘Are you sure you want me to do this?’ And we would say, ‘No! Can you reprogram?’
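
As a toy sketch of that idea, and emphatically not Russell’s actual machinery, imagine an agent that keeps a probability distribution over what the human wants and defers to the human whenever its confidence falls below a threshold. All the goals, probabilities, and the threshold here are invented:

```python
# Invented sketch of an "ask when uncertain" agent. The agent holds beliefs
# about what the human wants and acts only when it is confident enough;
# otherwise it checks in, and a rejected guess updates its beliefs.

beliefs = {"fetch coffee": 0.55, "fetch tea": 0.40, "do nothing": 0.05}
CONFIDENCE_TO_ACT = 0.9  # below this, the agent defers to the human

def act_or_ask(beliefs):
    best_goal = max(beliefs, key=beliefs.get)
    if beliefs[best_goal] >= CONFIDENCE_TO_ACT:
        print(f"Acting: {best_goal}")
        return
    answer = input(f"I think you want me to {best_goal}. Is that right? (y/n) ")
    if answer.strip().lower() == "y":
        print(f"Acting: {best_goal}")
    else:
        # Rule out the rejected goal and renormalise the remaining beliefs.
        del beliefs[best_goal]
        total = sum(beliefs.values())
        for goal in beliefs:
            beliefs[goal] /= total
        act_or_ask(beliefs)

act_or_ask(beliefs)
```

The design point is the threshold: the agent’s default is to ask rather than act, and a rejected guess removes that goal and redistributes its confidence instead of being overridden.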

In what ways do you disagree with him, then?

It’s generally a good approach, but I disagree with him for a number of reasons. One is the problem of how the AI is going to check back on what our preferences are. He suggests that the AI could observe human behavior and extrapolate from our behavior what our preferences are and what we’re trying to do. But it’s very difficult to extrapolate motivation and intention. Human beings often do things that are irrational. We’re not quite doing what it is we really want to do. Philosophers have grappled with this. You could, perhaps, look at second-order preferences. For example, you might reach for another doughnut but your meta-preference might actually be not to eat it and still fit into your dress.

Nonetheless, there’s a huge amount of complexity in how you would count a preference as a higher preference, and how we would extrapolate real human motivation and value from our behavior.

Stuart Russell also feels that value is subjective, that it comes entirely from the preferences of humans. I’m not really sure that that’s correct, either.

As a philosopher, you’re disagreeing with him. He’s clearly not a philosopher, but a computer scientist.

He’s following a very dominant strain of thought within philosophy, the idea that value stems from wishes and desires and preferences. That’s a really common stream of thought that I happen to disagree with. In terms of the solution he offers, that AI will be able to extrapolate human preferences from behavior, that’s a real problem. Maybe that’s somewhere we could bring in wisdom from social scientists, for example, or novelists or psychologists. That’s where they can say, ‘Well, actually, this is a bit simplistic.’

So there are some problems with this book. I’m glad he’s trying, though. The book is very, very readable and well worth reading.

You’ve also got a new book out, AI Ethics: A Textbook. Who is that aimed at and who might it be useful for?

The textbook grew out of some teaching I’ve been doing on a Master’s course, on philosophy and AI. The audience I’d envisaged was students who were maybe working in computing or engineering, but I’ve tried to make it as broad as possible. There are now lots of arts and social science courses where people are studying elements of ethical issues in AI. As I’ve mentioned, one of the things we need to encourage is interdisciplinary conversations.

I also tried to write it clearly and accessibly so that any interested members of the public could just read it on their own. The emphasis really is on trying to open up dialogues, to show a range of different views and try to encourage readers that they’ve got things they can bring to the debate.

There is a huge range of questions to look at that I try to cover in the book. They’re interlinked with each other, so I try to cross-reference. For example, in the section where I’m talking about Stuart Russell’s book, I look back at how philosophy of mind and psychology have understood the human mind and the problems in philosophical and psychological behaviorism that we just discussed. I’ve tried to be as open-minded as possible, but obviously my own views are going to be in there.

One of the things that’s so interesting about what we’re facing now, with AI and related technologies, is that it makes us ask deep questions about human nature. So I’ve included different ways of approaching what it is to be a human being, of understanding what intelligence is, why we value it, and how that relates to the questions we’re looking at.

I guess a textbook on AI ethics involves explaining what ethics is as well. It’s not necessarily obvious to non-philosophers.

Yes, the book tries to be an introduction to ethics. I’ve tried to make it so that you can skim through summaries of each section and navigate around different parts of the book as easily as possible. I included exercises. Some of the exercises are imaginative things, just to get people thinking, and can be done on your own; some can be done as class exercises. It’s about getting people to realize that everybody’s got something they can contribute to the debate, because it affects everybody.

[End of our 2023 update. The original December 2017 interview appears below]

___________________________

What do you mean by ethics for artificial intelligence?

Well, that’s a good starting point because there are different sorts of questions that are being asked in the books I’m looking at. One of the kinds of questions we’re asking is: what sorts of ethical issues is artificial intelligence, AI, going to bring? AI encompasses so many different applications that it could raise a really wide variety of different questions.

For instance, what’s going to happen to the workforce if AI makes lots of people redundant? That raises ethical issues because it affects people’s well-being and employment. There are also questions about whether we somehow need to build ethics into the sorts of decisions that AI devices are making on our behalf, especially as AI becomes more autonomous and more powerful. For example, one question that is debated a lot at the moment is: what sorts of decisions should be programmed into autonomous vehicles? If they come to a decision where they’re going to have to crash one way or the other, or kill somebody, or kill the driver, what sort of ethics might go into that?

But there are also ethical questions about AI in medicine. For instance, there’s already work developing virtual psychological therapies using AI, such as cognitive behavioural therapy. This might be useful, since it seems people may sometimes open up more freely online. But, obviously, there are going to be ethical issues in how you’re going to respond to someone saying that they’re going to kill themselves, or something along those lines. There are various questions about how you program that in.
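
Even a minimal sketch makes the design question vivid. The following is invented for illustration, not any real product: before the therapy model replies, the system checks for crisis language and escalates to a human. Real systems would need far more sophisticated risk detection than keyword matching:

```python
# Invented sketch: a safety layer in front of a hypothetical CBT chatbot.
# Keyword matching is crude; it only illustrates where the ethical decision
# ("when must a human take over?") gets turned into code.

CRISIS_PHRASES = ["kill myself", "end my life", "suicide"]

def generate_cbt_reply(message: str) -> str:
    # Stand-in for the hypothetical therapy model.
    return "Can you tell me more about what's making you feel this way?"

def respond(message: str) -> str:
    if any(phrase in message.lower() for phrase in CRISIS_PHRASES):
        # Escalate rather than letting the bot handle a crisis on its own.
        return ("I'm concerned about what you've said, so I'm connecting you "
                "with a human counsellor now. If you're in immediate danger, "
                "please call your local emergency number.")
    return generate_cbt_reply(message)

print(respond("I've been feeling anxious at work"))
```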

“AI is pushing us to the limits of various questions about what it is to be a human in the world”

I suppose work in AI can be divided according to whether you’re talking about the sorts of issues we’re facing now, or in the very near future, or about more distant possibilities. The issues we are facing now concern ‘narrow’ AI, which is focused on particular tasks. But there is also speculative work about whether we might develop an artificial general intelligence or, going on from that, a superintelligence.

But if we’re looking at an artificial general intelligence which would mimic human intelligence in general, whether or not we retain control of it, lots of people are arguing that we need to build in some kind of ethics, some way to make certain that the AI isn’t going to turn back and decide to rebel – the sort of thing that happens in many of the Isaac Asimov robot stories. So there are many ethical questions that arise from AI, both as it is now and as it will be in the future.

But there is also a different range of questions raised by AI because AI is, in many ways, pushing us to the limits of various questions about what it is to be a human in the world. Some of the ethical questions in AI are precisely about how we think of ourselves in the world. To give an example: imagine some science fiction future where you’ve got robots doing everything for us. People are talking about making robots do not just mundane jobs and some quite complex jobs, but also creative tasks. If you live in a world where the robots are doing all the creative tasks – robots writing music that is better than, or at least as good as, a human could write – it raises fundamental questions of why on earth we are here. Why are all those youngsters in garage bands thrashing out their not tremendously good riffs? Why are they doing that if a robot or a machine could do it better?

Working in this area is really interesting because it pushes us to ask those kinds of questions. Questions like: what is the nature of human agency? How do we relate to other people? How do we even think of ourselves? So, there are lots of deep ethical issues that come up in this area.

What work have you been doing in this area?

In the last two or three years, a number of prominent individuals have voiced concerns about the need to try to ensure that AI develops in ways that are beneficial. The Future of Life Institute, based in the USA, has a programme of grants, funded by Elon Musk and the Open Philanthropy Project, given to 35 projects working on different questions related to developing beneficial AI. I’ve been working on a project examining the groundwork for how we might develop codes of ethics for AI, and what role such codes might have. I’ve got a book on this topic due out soon.

Let’s go on to your first book choice, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines (2016) by John C Havens.

First of all, I’ll say that I’ve chosen five books which I thought, as a package, would give quite a good introduction to the field, as there’s a big range of issues and quite a wide range of approaches.

Heartificial Intelligence, my first choice, gives a general overview of the issues which we’re presented with. Havens is a really interesting writer. He was formerly an actor and he’s worked a lot in tech journalism, so he knows a lot about tech. He’s also one of the people currently leading the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. So, he’s really got his finger on the pulse of the technological developments and of how people are thinking about the ethics. He wrote this book a couple of years ago, developing it from an article he’d written for Mashable about ethics in AI. There are several things I really like about it, one of them being that it covers a broad range of issues.

He’s also focussing very much on things which are happening now, or which are not in the very distant future and which you could envisage being realised perhaps within our lifetimes. For instance, he asks how we might relate to robots in the house. Technicians are already developing robots that can keep an eye on people or do various domestic tasks, so it’s just extrapolating from that. What he does in the book is start off each chapter with a fictional scenario which is quite well extrapolated and then discusses it. I think one of the things that we do need, in looking at AI, is precisely science fiction to engage our imaginations. Havens does that in quite a lot of detail. He doesn’t then just leave it as fiction but goes on to discuss its implications.

Can I just clarify why you think you need science fiction to get at the ethical issues in AI? It’s tempting to say ‘let’s just stick to the facts here’?

For one thing, we don’t know what the facts are. Secondly, we need to think about how we might feel about them because one of the things that AI is doing is changing how we interact with the world and changing how we interact with other people. So, there are as yet no facts out there about this for most of the scenarios he discusses. It is a question of how we feel running through these different situations.

His first scenario is quite a good illustration of that. He imagines himself as a father a little way into the future and he’s a bit nervous because his daughter is going out on her first date. And it turns out that her first date is with a robot. So, instead of being a bit nervous and thinking you’re not quite sure you like the boy with a reputation from the wrong side of the tracks, his daughter is on a date with a robot. He’s looking at how we might feel about our prejudices about going out with a robot, and what the advantages and disadvantages of going out with a robot might be. You can imagine yourself in that situation. It does seem a bit far-fetched that somebody would want to do that, but then, on the other hand, thinking about it now and knowing how many teenage boys behave, you might think that a lot of teenage girls might prefer to go out with a robot. He might be programmed not to date-rape them, for instance…

This book sounds great. Is there another example, perhaps, that you could entice us with here?

Yes. He’s got an example about a home robot vacuum cleaner. The example works nicely because it’s about how different forces can work towards a bad outcome. It’s not just the AI itself, but poor communication, pressures from online reviews of products, making decisions too quickly, and also how AI is coming into a world where we’re already steeped in technology.

Commercial pressure from bad Amazon reviews leads to a change in a robot vacuum cleaner’s algorithm, which results in one such cleaner unplugging the power source for a baby monitor in order to retain its own power – and a baby chokes alone and unheard. The baby monitor is not AI, but this shows we need to think about how AI is nested within dependence on technology in general. In the olden days, like when I had my babies, I was a complete nervous wreck of a mother and wouldn’t go more than two feet away from them. But if you were relying on technology, you might then become more relaxed. The book is really good at exploring the minutiae of our interactions with technology.

I think it’s so important to think about how we might go step by step by step into situations that we wouldn’t have wanted to go into when viewed from the outside. It’s really good at looking at how AI might affect how we view ourselves and our notions of responsibility.

“If you’re interacting with a robot who is programmed to be nice to you, it might stop us maturing”

Havens looks really closely at how we might become reliant on robots, especially if they’re programmed to interact with us in a certain way. If you interact with another human being, they might or might not be nice back to you. It’s a bit hit and miss, as you may have found. There’s always that element of uncertainty. But if you’re interacting with a robot that is programmed in a certain way, being over-dependent on it might stop us maturing – we assume such machines are more or less infallible. So, he’s considering that. He’s also considering what he calls the possible loss of alterity if we rely too much on machines. If you’re interacting with another human being, there’s the idea that the other human being is always to some extent an unknown – they’ve always got their own consciousness and motivations. Maybe far into the future AI might also be similarly developed, but we’re nowhere near that kind of situation yet. In between getting there, though, we might have robots that are very human-like performing particular tasks, and we really need to look at how that might significantly affect how we interact with the world.

I can see how that’s already happening. We’ve got pretty efficient satnav systems that more often than not take us to the right places. People who have grown up with that kind of system have become incredibly reliant on navigation by machine. If that starts to go wrong at any point, I’d imagine some people who have recently passed their driving test would really struggle to use road signs, or memorised routes, or a conventional map as a way of getting from one place to another.

Yes. That’s precisely the sort of thing that Havens and other people are concerned about because we need to look at whether or not we’re losing skills.

Of course, it could be that we’re actually gaining something because we don’t have to worry ourselves about getting to places. We can use our intelligence and attention to focus on things that are more interesting.

Yes. So, another thing that I like about the book is that Havens is not at all anti-tech. He’s actually quite keen on it. Among the things he wants to do is to avoid polarisation. That’s something else that stories can help with because they can be pretty nuanced. Rather than saying ‘this is terrible’ which a lot of people are doing, or ‘this is going to be brilliant,’ which a lot of other people are doing as well, it’s a matter of looking at the subtleties of how we might be able to use tech for our own advantage, to use it so that it is truly beneficial. Of course, you can have pluses and minuses.

Another way of putting that is to say that you might have a gain from tech, but if what you’ve lost is of equal value, then you haven’t really moved anywhere. You have just made something different, without being in a better position.

There’s something else quite important about this book. John Havens is also very interested in psychology. His father was a psychiatrist, and has always influenced him. The book is influenced by positive psychology, and he’s interested in how we might use AI in line with our values and how we might check whether the AI we’re using is against our own values or not. Not everybody will necessarily buy the particular details of the positive psychology that underpins his stance, but I think it’s a really interesting way of looking at the issues. He’s looking at the grounded details about how we might think that AI is beneficial or not, and he’s linking that to how we might measure and think about human happiness and human welfare and wellbeing. I think that is incredibly important.

Thinking in ethical terms, one of the attractions of something like utilitarianism or consequentialism in looking at something that’s new and that we don’t really understand very much is that you can then look at the harms and the benefits and add them up, rather than having preconceptions about whether doing something in one way is right or wrong. But if AI is changing how we’re thinking about ourselves in relating to other people, how on earth do we even count or measure what the harms and benefits are? So, you’ve got to go and look at the detail, at what effects and impacts it’s having on us. A psychological level is a very good place to start.

Let’s move on to your second choice. This is The Technological Singularity (2015) by Murray Shanahan.

This is a very different sort of book. The ‘technological singularity’ is explained in slightly different ways, but it is roughly the point in AI development where things run away and we can’t predict what happens next. Imagine that we’ve created an AI that is cleverer than us. With a person that is cleverer than you, you probably don’t know what they’re going to do next. So, with an AI that is cleverer than us – especially if it’s one which has got a positive feedback loop and is developing and learning itself – we might be in a situation where we really can’t predict outcomes. That’s one way of looking at the singularity.

This book is quite short, and it’s very well explained. One of my irritations about much of the debate about AI is that some people come along and say: ‘the superintelligence is just around the corner – and any minute now we’re going to be enslaved to robots. There are going to be killer robots and drones everywhere, and nanoparticles that you’re going to breathe in, and they’re going to go into your brain, and this is all going to happen by 2035.’ And other people are saying ‘relax, there’s nothing to worry about; there’s nothing to see here.’

“With an AI cleverer than us – especially if it’s developing and learning itself – we really can’t predict outcomes”

Of course, ordinary people have absolutely no way whatsoever of knowing who’s right. They’ve got no idea whatsoever. What Shanahan does in this book is to look at exactly how we might map what intelligence is, different approaches to doing that, and how we might build up intelligence in different ways. He looks at this in quite a lot of detail, but very simply and very well explained. He looks a lot, for example, at what it would be to model a mouse brain and what exactly you’d have to do to achieve that. Mouse brains are much, much simpler than human brains. I think it’s useful and interesting material to help understand better what’s going on in the debate. So, he’s taking us through the fundamental nuts and bolts of how we might reproduce intelligence. For instance, he talks about whole brain emulations…

Why brain ‘emulation’ rather than ‘simulation’?

There are different ways of trying to produce AI. An emulation is the attempt to make a precise copy of a brain in another form, in an attempt to create the brain’s intelligence artificially. You’d start by looking exactly at how the brain works, at its physiology, and try to build an emulation that replicates the brain’s functions and performs them as closely as possible to the way the brain itself does. And I suppose there might be a question as to whether you could take something which operates in a biological medium and reproduce it exactly in a non-biological material. But a different approach would be to see what intelligence fundamentally is, in terms of what we’re able to do, and see whether you could construct something, maybe in a completely different way, that is capable of that.

Machine learning, for instance, might be able to solve problems but we might not know how it’s doing it. It could do it in a way that is completely different to how our brains work. Looking at those different approaches helps us to see why there are disputes about whether we might be able to exactly reproduce human intelligence or whether it might be different to us in some fundamental way.

And does he have something to say about the ethics of all this?

Yes. As a background to looking at the ethical issues, he helps to give a thorough understanding of the differences between a biological intelligence and a possible non-biological intelligence, and how biological limits might constrain us in certain ways. There are also questions about how our intelligence is linked to our actual bodies. With the brain, for instance, you can’t just take it out of your body. It wouldn’t function the same way. There are hormones, and all the rest of it, that are influencing how you behave; our actual embodiment is important.

“A lot of the central questions about ethics in AI revolve around what’s going to happen when we replace humans”

I think that’s a good grounding for thinking about ethics in AI because a lot of the central ethical questions revolve around what’s going to happen when we replace humans with AI or enhance humans with AI. I suppose you could rephrase it as asking what happens when we replace biological human intelligence or agency with something which is non-human. So, I think raising the question of how our biology affects who we are and how we relate to the world is really important to approaching the ethical questions. And there’s a lot more detail besides.

In both of the books you’ve described, what we say about AI and ethics really reflects back all the time on what human nature is, what we take it to be, and how fixed it is as well. There’s a sense in which we keep coming back to the fundamental philosophical question: what kind of a being are we?

Yes, precisely. One of the things that I like about Shanahan’s book is that he gets to the deep questions about ethics: he ends by saying that AI raises the question that Socrates raised about how we should live. Some of the questions that Murray Shanahan looks at later in the book arise in more futuristic scenarios and are linked to ethical questions, and then to questions about the nature of personhood. A lot of philosophers have been really interested in this.

“AI ends up raising the question that Socrates raised about how we should live”

Supposing, for example, we could somehow upload someone’s mind to a computer, then you immediately get all those questions about what happens if that splits. Once it’s on a computer, you can just copy it. But what then? Which copy is the real you? Should we treat the various copies equally, and so on? One of the points that he makes is that AI is turning the thought experiments that philosophers like Bernard Williams and Derek Parfit have proposed about brain splitting and personal identity into real possibilities. AI can potentially realise some of those philosophical conundrums. It has that feature in common with John Havens’ book: its willingness to take what are currently science fiction tales seriously as tools for thought.

Your next choice is Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016) by Cathy O’Neil.

This is a book that you might not think at first is about AI per se, but it’s got very close links with important ethical issues in AI. One of the reasons why I’ve put this on the list, apart from the fact that it’s really interesting and important in its own right, is that AI makes use of algorithms and a huge amount of data. One of the things that we need to think about is not simply the big-picture Terminator situation where we’re going to get gunned down by killer robots, but also questions about the automated calculation of results, which AI can only speed up, and about how that’s raising really important ethical and social questions that are already with us now.

Cathy O’Neil is a very interesting person. In a way, she’s a bit like somebody who used to belong to a terrorist cell but saw the light and is now spilling the beans. She was working with hedge funds when the financial collapse happened in 2008 and came to a realisation about how the use of numbers to manipulate data was having significant effects on people’s lives. As a result, she became a data analyst and, later, worked directly on exposing these issues. Again, as with John Havens’ book, hers includes many really interesting and gripping examples – in her case, from real life – about how the use of algorithms has affected people’s lives and occasionally ruined them.

For example, she starts off by talking about how schools in the US introduced mechanised, mathematical ways of scoring teachers based on students’ test results, in an attempt to check that teachers were doing a good job. One of the things that was in the algorithm was how well each student had done compared to how they did the year before. So, some of the teachers who everyone thought were really good educators could end up with a very low score. Some teachers were even sacked because of low scores. Some found that they’d get a six one year and ninety-six the next. What happens in this sort of situation is that people try to game the system. We’re not yet talking about sophisticated AI in this example, but these are the sorts of things that AI can easily magnify. Teachers would game the system and do their students’ homework for them, or make the tests easier, or make certain that they had kids who did really badly last year so that they could add a lot of value.
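
A toy simulation, with entirely invented numbers rather than anything from O’Neil’s book, shows why a score like that can swing from six to ninety-six: when the score is an average over one small class, student-level noise swamps the teacher’s true effect.

```python
# Invented toy model: a teacher who genuinely adds 2 points on average,
# scored each year on the mean test-score gain of one class of 25 students.
# The student-level noise (sd 15) dwarfs the effect being measured.
import random

random.seed(1)
TRUE_TEACHER_EFFECT = 2.0
CLASS_SIZE = 25

def value_added_score():
    gains = [TRUE_TEACHER_EFFECT + random.gauss(0, 15) for _ in range(CLASS_SIZE)]
    return sum(gains) / CLASS_SIZE

for year in range(1, 6):
    print(f"year {year}: value-added score = {value_added_score():+.1f}")
```

Run it and the same teacher’s score jumps around by several points from year to year purely from sampling noise; rank or sack people on such a score and you are largely rewarding luck.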

O’Neil has numerous examples of how the system can be gamed. Another example that people in AI have talked about a lot is algorithms that try to predict rates of recidivism, which are then used in sentencing. These are very real issues. A recent court case in Wisconsin concerned the COMPAS algorithm used in sentencing, because it appeared to be biased against black people. You could also have lots of algorithms assessing insurance risk, for instance, which can end up being biased against people who live in certain areas.

She also looked at an algorithm which measures the likelihood of someone staying in a job. One of the factors that makes you more likely to stay in a job is how close you live to the workplace. So, one of the things I really like about the book is how she shows that what seem to be mathematical decisions – made with the lure of rationality, of tech, of numbers, which all seem objective – can end up incorporating poor decisions of the past, or building values in or leaving them out, without us consciously realising. The author is very clear about how something which just looks like a computer programme designed to produce a good outcome is imbued with values and may well reproduce and entrench those values.

“Machine learning is only as good as the data set that you’ve got”

She clearly links the tech to the social issues and shows, in example after example, how it’s often the same group of people who don’t get insurance, don’t get a good education, don’t get the job. It’s the same people who often end up being discriminated against. That’s a really important aspect of AI that we need to look at: how we’re using algorithms. Machine learning is only as good as the data set that you’ve got. So, if we’re starting from where we are now and having a big take-off in using this way of making decisions, what we might not get is some glorious future. What we might get instead is some dystopian version of the present, because we’re just reproducing the data that we’ve already got, and sometimes amplifying its biases.
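
The mechanism is easy to see in miniature. In this invented sketch the ‘model’ is nothing more than the historical hiring rate for each group, but any learner fit to the same biased labels would pick up the same pattern:

```python
# Invented records in which equally qualified applicants from group B
# were hired less often. A model fit to this history turns the past
# disparity into its future policy.

history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def predict_hire_probability(group):
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(f"group {group}: predicted hire probability {predict_hire_probability(group):.2f}")
# Prints 0.75 for A and 0.25 for B: the historical bias, reproduced exactly.
```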

That sounds like a pessimistic view of the use of big data. Does O’Neil provide any solutions to these problems?

I don’t think she’s pessimistic. She’s just warning that we need to be able to do this properly and make very clear what’s going into it. Another thing I really liked about it is that she explains the history of how things came to be as they are. I read this on the train on the way back from visiting a university open day with my daughter and I read the chapter about how various university rankings began. As soon as it’s pointed out to you, it’s obvious that the whole thing is a scam. If you are near the top of the list then you stay near the top of the list because people apply to your university because it’s at the top of the list. Universities will do things to try to game the system. O’Neil emphasises throughout how values are embedded in algorithms, like it or not.

The university ranking system helped to create an appalling runaway problem because fees were not included, so there was no incentive to keep these low. She explains the history of how this happened. It all began in America because a second-rate magazine had run out of things to write about and decided to feature an article about which were the best universities. It all took off from there. It’s just insane. So, one of the things I think is really important in ethics in AI is that we develop and keep a clear and informed view of social history, keep track of why we make particular decisions about tech, and watch how this can run away with us.

Your fourth choice is Moral Machines: Teaching Robots Right from Wrong (2008) by Wendell Wallach and Colin Allen.

This book is slightly older than the other books and is a lot more detailed. I picked it because the authors look at the particular issue of how we might actually program ethics into machines, into what they call ‘artificial moral agents’.

One of the things I liked about it was how they think in really detailed ways about how you might go about constructing a machine that could somehow understand or incorporate ethics, and what that might mean, whether you’re talking about AI performing quite specific tasks or about something more general. But they also talk about what sort of moral theory might be needed. They consider how you might have a ‘top down’ approach, starting off with, say, a set of principles, or a consequentialist approach measuring harm and benefit; or how you might have a ‘bottom up’ approach where you’re trying to teach a robot ethics in the way you might teach a child, and what that would involve. And then they go into quite a lot of detail about the important issues in ethical theory that lie behind this.
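
To give a flavour of what ‘top down’ might mean in code, here is a deliberately crude, invented sketch: candidate actions are screened against explicit rules. Everything about it, from the rules to the action format, is made up for illustration, and its crudeness is itself instructive, since real situations rarely decompose into such tidy flags:

```python
# Invented sketch of a "top down" artificial moral agent: explicit principles
# applied as filters over candidate actions. Real proposals are vastly more
# complex; this only shows the shape of the approach.

RULES = [
    lambda action: not action.get("harms_human", False),  # do no harm
    lambda action: action.get("consented", True),         # respect consent
]

def permitted(action):
    return all(rule(action) for rule in RULES)

print(permitted({"name": "fetch medicine", "harms_human": False}))   # True
print(permitted({"name": "restrain patient", "harms_human": True}))  # False
```

A ‘bottom up’ system, by contrast, would have no such rule list: it would have to learn its judgements from examples, which runs straight into the problem of moral relevance discussed below.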

For instance, they discuss Jonathan Dancy’s work (something that caught my eye because he taught me when I was an undergraduate). Dancy has a theory of moral particularism which is basically the idea that when we make moral decisions or moral judgements, we’re not relying on general principles or rules, but on the specific details of each situation, because these are so individual and so unique. On his view, when we make decisions or judgements they are particular to that situation, and we can’t draw general principles and rules from how we act in these circumstances.

How does that play out with a robot? If you’re programming particularism into a robot’s repertoire, does that mean that it has to gauge the precise situation and then have a unique response that’s not going to be generalisable to any other situation?

Well, I have no idea how you would do it. The book suggests certain models of building AI using neural networks might work better than others if a particularist approach is correct, but we have no answers so far. But one of the things that the authors talk about in the book is the notion of moral relevance. That’s such a key issue. If you’ve got a fairly simple moral theory like utilitarianism, the only things that are relevant are pain and pleasure – which is why the theory is wrong, and why it is crude. It works quite well on some issues, but it’s really crude. What we need to do is to have an appreciation of which elements in a situation are morally relevant.

That’s really interesting in terms of machine learning because we can get machine learning to recognise that something is a cat, for instance, but this is done in a completely different way to how a child recognises a cat. If you show a child a cat once, then the next time they see a cat they will say ‘cat’. But machine learning does it very differently.  It’s a really interesting question whether or not you could ever program machines to be able to pick out moral relevance in the same way that we do.

“If you show a child a cat once, then the next time they see a cat they will say ‘cat’. But machine learning does it very differently”

One of the things we need to think about further from this is that ethics is not just about getting to the right answer. When we’re talking about something being a moral problem or an ethical issue, one of the things that means is that we expect the people involved to be able to explain and justify their actions. If you just decide you’re going to have coffee rather than tea, I don’t expect you to justify that. But if you swear at me and push me down the stairs, then I expect you to have a very good reason for having done that, a reason like, ‘oh, I thought there was a bomb, so I was trying to get you out of harm’s way’.

For a moral issue, then, we expect reasons or an explanation. That’s one of the very important issues we need to think about: whether we really could program machines to be morally autonomous in that kind of way. This book is useful for exposing the level of detail we need to be asking both about machines and about morality to be able to answer those questions. Personally, from my understanding, I don’t think we’re anywhere near developing something which is morally autonomous in that sense. But this book is an excellent account of what the issues are, both in terms of the tech and of the ethics.

And your final choice is 2001: A Space Odyssey (1968) by Arthur C Clarke. For people who haven’t read the book or seen the movie, could you give a brief outline of the plot?

I’ll have to be careful not to spoil it. Basically, it’s about a space mission and a computer that goes bad. The book looks at humanity’s future, but it starts off right at the dawn of humanity. That’s one of the things I really like about it because, as we’ve seen in all these other books, developments in AI are now prompting us to ask questions about who we are as humans and where we’ve come from. One of the dangers from some of the current discussions about AI and ethics is that it would be easy to think that we can just look at where we are now. It’s too easy to think that we’re in a very good position and have made a lot of progress, so let’s just progress a bit further. What we need to do is to keep in view a long historical sweep about where we’ve come from and who we are as humans.

2001 starts off in the cradle of humanity in Africa, with a group of early humans including one named Moon-Watcher. The events of the book span from that historical dawn of humanity to what’s happening to the characters in the book’s own day. I think we do need to have some sense of what we’re doing with AI and technology and what significance it has for the whole of humanity, not just to focus on the small issues that arise with specific uses of it.

In the novel, the spaceship’s computer HAL 9000 ‘goes bad’. The way the story unfolds is a really powerful way of raising all the issues that are being talked about now in relation to AI. I find that the book, especially by being set in space, produces a striking depiction of the central ethical questions about our relationship with AI.

“Arthur C Clarke raises all the issues that are being talked about now in relation to AI”

The astronaut David Bowman ends up alive on his own in a spacecraft with a two-hour delay in radio signals back to earth, with a computer that’s not on his side. One of the things that just about everybody I know working in AI ethics at the moment is talking about is that we need to retain ultimate human control of AI. The book raises exactly that issue. So, David Bowman has to try to turn the computer off and has to go through some sneaky steps to do that. He has to really grapple with the question of whether he should be doing this, because the computer argues back with him as it gradually loses power. Bowman has to make all these decisions entirely on his own. And the image of a spacecraft out in space, with the astronaut having to go outside the craft to mend things, is, as many people have commented, an analogy for a foetus with an umbilical cord attached to the mothership.

For me, it focuses all these questions about who we are as humans and the idea that we might be tethered somehow to this ship that is shutting down, completely in the hands of AI, going off into this uncertain future. Clarke doesn’t go into much detail about these questions but the novel raises them all. For example, for training, the astronauts have to go through a period of hibernation and they are woken up by a computer. There you are, completely and utterly dependent on a computer. What would that be like? What would that mean? It’s really scary.

But, in discussing the other books, you did have some sense that there are these dystopian futures that are speculated on, but also that there are people on the other side saying that much of this is scaremongering. The fact that fiction can make you feel frightened of a possible future doesn’t mean that this is a genuine possibility or risk, or that people are going to be foolish enough to relinquish control to that degree.

Yes, you’re quite right. That’s why I’ve recommended these books as a whole, because they provide a balanced overview. I might also say that I’m really terrible at watching frightening films, so other people might not be quite so scared by this scenario as me. But also remember that we’re already handing a lot over to machines, perhaps more than we have bargained for.

Just out of interest, where do you stand on that dystopian/utopian line of the future of AI in relation to ethics?

Well, I’m fairly worried that we’re already doing things to ourselves that are problematic. There are many indications that how people are using technology is changing how children develop and even changing how our brains develop. It sounds very judgemental, but one thing that I increasingly notice is parents walking along with their children in a pushchair, and the adults are just on their smartphones. They should be interacting with their children. If this is a widespread phenomenon, it’s going to have an impact on child development.

“There are many indications that how people are using technology is changing how children develop”

Somebody told me at the weekend about going to a zoo and seeing a two-year-old child in a buggy who, on seeing a lion behind glass, went up to the glass and tried to swipe left with its finger. It is frightening that the kid thinks it’s something on a screen that you could just get rid of, when it’s an actual real lion. I’m not completely sure if this story is true or not, but it could well be, and it illustrates the issues. Those are things that we really need to talk about and that just shouldn’t be left in the hands of a few techy people. These are things that everybody should be having conversations about.

From what you’ve said here, it seems that the key is to have informed and imaginatively rich conversations rather than conversations based on a hunch about what’s going on.

Yes. That’s why I picked some of these books: so that people who maybe don’t know much about this, or are not tech experts, can read them, get quite a good understanding of what’s going on, and join in the conversation. One of the things I strongly feel is that, in a sense, it’s not simply about ethics. It’s not as if developing a professional code of ethics will sort this all out. We need to ask these basic questions about how we should live and why we want to do things in the world. Going back to the point I made at the beginning of the interview, why would you want to have music written and played by a robot? You don’t want that; you want to go into your garage or wherever and play it yourself.

But from the point of the view of the listener, you might want to listen to the music that was perfectly calibrated to your human brain.

Yes, you might do that, but that’s a question we need to ask. When I listen to Radio 3, it’s not just playing the music: a lot of it is talking about the composers’ lives. That’s part and parcel of why we find it interesting. I was listening to the St Matthew Passion on YouTube while I was working this morning and somebody had written a comment mentioning a merchant in business in Leipzig in 1727 who had stumbled into a church and found Bach conducting the choir and orchestra, and felt himself in heaven. That is an aspect of why we find it all interesting. So, it may be that we’re going to go down the route of preferring AI compositions and creations, but maybe not. It’s certainly something that we need to be thinking about.

Interview by Nigel Warburton

October 1, 2023

Paula Boddington

Paula Boddington is a philosopher whose work includes philosophy of mind, moral philosophy and applied ethics. She has worked extensively in interdisciplinary contexts and is interested in the links between theory, practice, and policy, including the ethical and social issues surrounding the development of new science and technology. She is currently Associate Professor of Philosophy and Healthcare at the University of West London. Previously, she was a senior research fellow in the Department of Computer Science, University of Oxford, on a project funded by the Future of Life Institute, exploring the development of codes of ethics for AI.
