Advances in artificial intelligence pose a myriad of ethical questions, but the most incisive thinking on this subject says more about humans than it does about machines, says Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence.
What do you mean by ethics for artificial intelligence?
Well, that’s a good starting point because there are different sorts of questions that are being asked in the books I’m looking at. One of the kinds of questions we’re asking is: what sorts of ethical issues is artificial intelligence, AI, going to bring? AI encompasses so many different applications that it could raise a really wide variety of different questions.
For instance, what’s going to happen to the workforce if AI makes lots of people redundant? That raises ethical issues because it affects people’s wellbeing and employment. There are also questions about whether we somehow need to build ethics into the sorts of decisions that AI devices are making on our behalf, especially as AI becomes more autonomous and more powerful. For example, one question which is debated a lot at the moment is: what sorts of decisions should be programmed into autonomous vehicles? If they face a situation where they’re going to have to crash one way or the other, or kill somebody, or kill the driver, what sort of ethics might go into that decision?
But there are also ethical questions about AI in medicine. For instance, there’s already work developing virtual psychological therapies, such as cognitive behavioural therapy, delivered using AI. This might be useful since it seems people may sometimes open up more freely online. But, obviously, there are going to be ethical issues in how you’re going to respond to someone saying that they’re going to kill themselves, or something along those lines. There are various ethical issues about how you program that in.
“AI is pushing us to the limits of various questions about what it is to be a human in the world”
I suppose work in AI can be divided according to whether you’re talking about the sorts of issues we’re facing now, or in the very near future, or about more speculative developments. The issues we are facing now concern ‘narrow’ AI, which is focused on particular tasks. But there is also speculative work about whether we might develop an artificial general intelligence or, going on from that, a superintelligence. If we’re looking at an artificial general intelligence that mimics human intelligence in general, then whether or not we retain control of it, lots of people are arguing that we need to build in some kind of ethics, or some way to make certain that the AI isn’t going to turn back and decide to rebel – the sort of thing that happens in many of the Isaac Asimov robot stories. So there are many ethical questions that arise from AI, both as it is now and as it will be in the future.
But there is also a different range of questions raised by AI because AI is, in many ways, pushing us to the limits of various questions about what it is to be a human in the world. Some of the ethical questions in AI are precisely about how we think of ourselves in the world. To give an example, imagine some science fiction future where you’ve got robots doing everything for us. People are talking about making robots not just to do mundane jobs and some quite complex jobs, but also creative tasks. If you live in a world where the robots are doing all the creative tasks, for instance robots writing music that is better than a human could write, or at least as good as a human could write, it just raises fundamental questions of why on earth we are here. Why are all those youngsters in garage bands thrashing out their not tremendously good riffs? Why are they doing that if a robot or a machine could do it better?
Working in this area is really interesting because it pushes us to ask those kinds of questions. Questions like: what is the nature of human agency? How do we relate to other people? How do we even think of ourselves? So, there are lots of deep ethical issues that come up in this area.
What work have you been doing in this area?
In the last two or three years, a number of prominent individuals have voiced concerns about the need to try to ensure that AI develops in ways that are beneficial. The Future of Life Institute, based in the USA, has a programme of grants, funded by Elon Musk and the Open Philanthropy Project, given to 35 projects working on different questions related to developing beneficial AI. I’ve been working on a project examining the groundwork for how we might develop codes of ethics for AI, and what role such codes might have. I’ve got a book on this topic due out soon.
Let’s go on to your first book choice, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines (2016) by John C Havens.
First of all, I’ll say that I’ve chosen five books which I thought, as a package, would give quite a good introduction to the issues in AI, as there’s a big range of issues and quite a wide range of approaches.
Heartificial Intelligence, my first choice, gives a general overview of the issues which we’re presented with. Havens is a really interesting writer. He was formerly an actor and he’s worked a lot in tech journalism, so he knows a lot about tech. He’s also one of the people currently leading the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. So, he’s really got his finger on the pulse of what the technological developments are and how people are thinking about them in ethics. He wrote this book a couple of years ago, developing it from an article he’d written for Mashable about ethics in AI. There are several things I really like about it; one of them is that it covers a broad range of issues.
He’s also focusing very much on things which are happening now, or which are not in the very distant future and could be realised perhaps within our lifetimes. For instance, he asks how we might relate to robots in the house. Technicians are already developing robots that can keep an eye on people or do various domestic tasks, so it’s just extrapolating from that. What he does in the book is start off each chapter with a fictional scenario which is quite well extrapolated, and then discuss it. I think one of the things that we do need, in looking at AI, is precisely science fiction to engage our imaginations. Havens does that in quite a lot of detail. He doesn’t then just leave it as fiction but goes on to discuss its implications.
Can I just clarify why you think you need science fiction to get at the ethical issues in AI? It’s tempting to say ‘let’s just stick to the facts here’?
For one thing, we don’t know what the facts are. Secondly, we need to think about how we might feel about them because one of the things that AI is doing is changing how we interact with the world and changing how we interact with other people. So, there are as yet no facts out there about this for most of the scenarios he discusses. It is a question of how we feel running through these different situations.
His first scenario is quite a good illustration of that. He imagines himself as a father a little way into the future and he’s a bit nervous because his daughter is going out on her first date. And it turns out that her first date is with a robot. So, instead of being a bit nervous and thinking you’re not quite sure that you like the boy with a reputation from the wrong side of the tracks, his daughter is on a date with a robot. He’s looking at how we might feel about our prejudices about going out with a robot, and what the advantages and disadvantages of going out with a robot might be. You can imagine yourself in that situation. It does seem a bit far-fetched that somebody would want to do that, but then, on the other hand, thinking about it now and knowing how many teenage boys behave, you might think that a lot of teenage girls might prefer to go out with a robot. He might be programmed not to date rape them, for instance…
This book sounds great. Is there another example, perhaps, that you could entice us with here?
Yes. He’s got an example about a home robot vacuum cleaner. The example works nicely because it’s about how different forces can work towards a bad outcome. It’s not just the AI itself, but poor communication, pressures from online reviews of products, making decisions too quickly, and also how AI is coming into a world where we’re already steeped in technology.
Commercial pressure from bad Amazon reviews leads to a change in a robot vacuum cleaner’s algorithm, which leads to one such cleaner unplugging the power source for a baby monitor in order to retain its own power – and a baby chokes alone and unheard. The baby monitor is not AI, but this shows we need to think about how AI is nested within our dependence on technology in general. In the olden days, like when I had my babies, I was a complete nervous wreck of a mother and wouldn’t go more than two feet away from them. But if you were relying on technology, you might then become more relaxed. The book is really good at exploring the minutiae of our interactions with technology.
I think it’s so important to think about how we might go, step by step, into situations that, viewed from the outside, we wouldn’t have wanted to go into. It’s really good at looking at how AI might affect how we view ourselves and our notions of responsibility.
“If you’re interacting with a robot who is programmed to be nice to you, it might stop us maturing”
Havens looks really closely at how we might become reliant on robots, especially if they’re programmed to interact with us in a certain way. If you interact with another human being, they might or might not be nice back to you. It’s a bit hit and miss, as you may have found. There’s always that element of uncertainty. But if you’re interacting with a robot who is programmed in a certain way, it might stop us maturing if we’re over-dependent on them – we assume they’re more or less infallible. So, he’s considering that. He’s also considering what he calls the possible loss of alterity if we rely too much on machines. If you’re interacting with another human being, there’s the idea that the other human being is always to some extent an unknown – they’ve always got their own consciousness and motivations. Maybe far into the future AI might also be similarly developed, but we’re nowhere near that kind of situation yet. But, in between getting there, we might have robots that are very human-like performing particular tasks, and we really need to look at how that might significantly affect how we interact with the world.
I can see how that’s already happening. We’ve got pretty efficient satnav systems that more often than not take us to the right places. People who have grown up with that kind of system have become incredibly reliant on navigation by machine. If that starts to go wrong at any point, I’d imagine some people who have recently passed their driving test would really struggle to use road signs, or memorised routes, or a conventional map as a way of getting from one place to another.
Yes. That’s precisely the sort of thing that Havens and other people are concerned about because we need to look at whether or not we’re losing skills.
Of course, it could be that we’re actually gaining something because we don’t have to worry ourselves about getting to places. We can use our intelligence and attention to focus on things that are more interesting.
Yes. So, another thing that I like about the book is that Havens is not at all anti-tech. He’s actually quite keen on it. Among the things he wants to do is to avoid polarisation. That’s something else that stories can help with because they can be pretty nuanced. Rather than saying ‘this is terrible’, which a lot of people are doing, or ‘this is going to be brilliant’, which a lot of other people are doing as well, it’s a matter of looking at the subtleties of how we might be able to use tech for our own advantage, to use it so that it is truly beneficial. Of course, you can have pluses and minuses.
Another way of putting that is to say that you might have a gain from tech, but if what you’ve lost is of equal value, then you haven’t really moved anywhere. You have just made something different, without being in a better position.
There’s something else quite important about this book. John Havens is also very interested in psychology. His father was a psychiatrist, and has always influenced him. The book is influenced by positive psychology, and he’s interested in how we might use AI in line with our values and how we might check whether the AI we’re using is against our own values or not. Not everybody will necessarily buy the particular details of the positive psychology that underpins his stance, but I think it’s a really interesting way of looking at the issues. He’s looking at the grounded details about how we might think that AI is beneficial or not, and he’s linking that to how we might measure and think about human happiness and human welfare and wellbeing. I think that is incredibly important.
Thinking in ethical terms, one of the attractions of something like utilitarianism or consequentialism in looking at something that’s new and that we don’t really understand very much is that you can then look at the harms and the benefits and add them up, rather than having preconceptions about whether doing something in one way is right or wrong. But if AI is changing how we’re thinking about ourselves in relating to other people, how on earth do we even count or measure what the harms and benefits are? So, you’ve got to go and look at the detail, at what effects and impacts it’s having on us. A psychological level is a very good place to start.
Let’s move on to your second choice. This is The Technological Singularity (2015) by Murray Shanahan.
This is a very different sort of book. The ‘technological singularity’ is explained in slightly different ways, but it is roughly the point in AI development where things run away and we can’t predict what happens next. Imagine that we’ve created an AI that is cleverer than us. With a person that is cleverer than you, you probably don’t know what they’re going to do next. So, with an AI that is cleverer than us – especially if it’s one which has got a positive feedback loop and is developing and learning itself – we might be in a situation where we really can’t predict outcomes. That’s one way of looking at the singularity.
This book is quite short, and it’s very well explained. One of my irritations about much of the debate about AI is that some people come along and say: ‘the superintelligence is just around the corner – and any minute now we’re going to be enslaved to robots. There are going to be killer robots and drones everywhere, and nanoparticles that you’re going to breathe in, and they’re going to go into your brain, and this is all going to happen by 2035.’ And other people are saying ‘relax, there’s nothing to worry about; there’s nothing to see here.’
“With an AI cleverer than us – especially if it’s developing and learning itself – we really can’t predict outcomes”
Of course, ordinary people have absolutely no way whatsoever of knowing who’s right. They’ve got no idea whatsoever. What Shanahan does in this book is look at exactly how we might map what intelligence is, the different approaches to doing that, and how we might build up intelligence in different ways. He looks at this in quite a lot of detail, but very simply and it’s very well explained. He looks a lot, for example, at what it would mean to model a mouse brain and exactly what you’d have to do to do that. Mouse brains are much, much simpler than human brains. I think it’s useful and interesting material that helps us understand better what’s going on in the debate. So, he’s taking us through the fundamental nuts and bolts of how we might reproduce intelligence. For instance, he talks about whole brain emulations…
Why brain ‘emulation’ rather than ‘simulation’?
There are different ways of trying to produce AI. An emulation is the attempt to make a precise copy of a brain in another form, so as to recreate the brain’s intelligence artificially. You’d start by looking exactly at how the brain works, at its physiology, and try to build something that replicates the brain’s functions and performs them as closely as possible to the way the brain itself does. And I suppose there might be a question as to whether you could take something which operates in a biological medium and reproduce it exactly in a non-biological material. But a different approach would be to ask what intelligence fundamentally is, in terms of what we’re able to do, and see whether you could construct something, maybe in a completely different way, that is capable of that.
Machine learning, for instance, might be able to solve problems but we might not know how it’s doing it. It could do it in a way that is completely different to how our brains work. By looking at those different ways it helps us to see why there are disputes about whether we might be able to exactly reproduce human intelligence or if it might be different to us in some fundamental way.
And does he have something to say about the ethics of all this?
Yes. As a background to looking at the ethical issues, he gives a thorough understanding of the differences between a biological intelligence and a possible non-biological intelligence, and how biological limits might constrain us in certain ways. There are also questions about how our intelligence is linked to our actual bodies. You can’t just take the brain out of your body, for instance; it wouldn’t function the same way. There are hormones, and all the rest of it, influencing how you behave; our actual embodiment is important.
“A lot of the central questions about ethics in AI revolve around what’s going to happen when we replace humans”
I think that’s a good grounding for thinking about ethics in AI because a lot of the central ethical questions revolve around what’s going to happen when we replace humans with AI or enhance humans with AI. I suppose you could rephrase it as asking ‘what happens when we replace biological human intelligence or agency with something which is non-human?’ So, I think raising the question of how our biology affects who we are and how we relate to the world is really important to approaching the ethical questions. And there’s a lot more detail besides.
In both of the books you’ve described, what we say about AI and ethics really reflects back all the time on what human nature is, what we take it to be, and how fixed it is as well. There’s a sense in which we keep coming back to the fundamental philosophical question: what kind of a being are we?
Yes, precisely. One of the things that I like about Shanahan’s book is that he gets to the deep questions about ethics: he ends by saying that AI raises the question that Socrates raised about how we should live. Some of the questions that Murray Shanahan looks at later in the book arise in more futuristic scenarios, and they are linked to ethical questions and, in turn, to questions about the nature of personhood. A lot of philosophers have been really interested in this.
“AI ends up raising the question that Socrates raised about how we should live”
Supposing, for example, we could somehow upload someone’s mind to a computer, then you immediately get all those questions about what happens if that splits. Once it’s on a computer, you can just copy it. But what then? Which copy is the real you? Should we treat the various copies equally, and so on? One of the points that he makes is that AI is turning the thought experiments that philosophers like Bernard Williams and Derek Parfit have proposed about brain splitting and personal identity into real possibilities. AI can potentially realise some of those philosophical conundrums. It has that feature in common with John Havens’ book: its willingness to take what are currently science fiction tales seriously as tools for thought.
Your next choice is Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016) by Cathy O’Neil.
This is a book that you might not think at first is about AI per se, but it’s got very close links with important ethical issues in AI. One of the reasons why I’ve put this on the list, apart from the fact that it’s really interesting and important in its own right, is that AI makes heavy use of algorithms and huge amounts of data. One of the things that we need to think about is not simply the big-picture Terminator scenario where we’re going to get gunned down by killer robots, but also the automated calculation of results, which AI can only speed up, and how that is raising really important ethical and social questions that are already with us now.
Cathy O’Neil is a very interesting person. In a way, she’s a bit like somebody who used to belong to a terrorist cell, saw the light, and is now spilling the beans. She was working with hedge funds when the financial collapse happened in 2008 and came to a realisation about how the use of numbers to manipulate data was having significant effects on people’s lives. As a result, she became a data analyst and, later, worked directly on exposing these issues. Again, as with John Havens’ book, hers includes many really interesting and gripping examples – in her case, from real life – about how the use of algorithms has affected people’s lives and occasionally ruined them.
For example, she starts off by talking about how schools in the US introduced mechanised, mathematical ways of scoring teachers based on students’ test results, in an attempt to check that teachers were doing a good job. One of the things that was in the algorithm was how well each student had done compared to how they did the year before. So, some of the teachers who everyone thought were really good educators could end up with a very low score. Some teachers were even sacked because of low scores. Some found that they’d get a six one year and a ninety-six the next. What happens in this sort of situation is that people try to game the system. We’re not yet talking about sophisticated AI in this example, but these are the sorts of things that AI can easily magnify. Teachers would game the system and do their students’ homework for them, or make the tests easier, or make certain that they had kids who did really badly last year so that they could add on a lot of value.
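To see why such a score can swing so wildly, here is a minimal illustrative sketch. The data, the formula and the function name are all hypothetical rather than taken from O’Neil’s book, and real value-added models are more elaborate, but the instability has the same source: a noisy baseline and a small sample.

```python
# A deliberately naive "value-added" score: hypothetical data and formula,
# just to show why scores of this kind can swing wildly year to year.

def value_added_score(prior_scores, current_scores):
    """Average gain over a crude prediction: 'this year will look like last year'."""
    gains = [curr - prior for prior, curr in zip(prior_scores, current_scores)]
    return sum(gains) / len(gains)

# The same (hypothetical) teacher, two small cohorts of six students.
year_one = value_added_score(prior_scores=[72, 68, 90, 55, 80, 77],
                             current_scores=[70, 65, 88, 50, 78, 75])
year_two = value_added_score(prior_scores=[60, 62, 58, 65, 59, 61],
                             current_scores=[75, 80, 70, 82, 77, 79])

print(year_one)  # slightly negative: the teacher looks "bad"
print(year_two)  # strongly positive: the same teacher looks "excellent"
```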
O’Neil has numerous examples of how the system can be gamed. Another example that a lot of people in AI have talked about is algorithms that try to predict rates of recidivism, which are then used in sentencing. These are very real issues. A recent court case in Wisconsin concerned the COMPAS algorithm used in sentencing, because it appeared to be biased against black people. You could also have lots of algorithms assessing insurance risk, for instance, which can end up being biased against people who live in certain areas.
“A recent court case in Wisconsin concerned the algorithm used to determine sentencing — it appeared to be biased against black people”
She also looked at an algorithm which measures the likelihood of someone staying in a job. One of the factors which makes you more likely to stay in a job is how close the job is to where you live. So, one of the things I really like about the book is how she shows that what seem to be mathematical decisions, made with the lure of rationality, the lure of tech and the lure of numbers – which seem to be objective – can end up incorporating poor decisions of the past, or incorporating values, or leaving them out, without us consciously realising it. The author is very clear about how something which just looks like a computer programme designed to produce a good outcome is imbued with values and may well reproduce and entrench those values.
“Machine learning is only as good as the data set that you’ve got”
She clearly links the tech to the social issues and also shows how, in example after example, it’s often the same group of people who don’t get insurance, don’t get good education, don’t get the job. It’s the same people who often end up getting discriminated against. I think that’s a really important aspect of AI that we need to look at: how we’re using algorithms. Machine learning is only as good as the data set that you’ve got. So, if we’re suddenly starting from where we are now and having a big take-off in using this way of trying to make decisions, what we might not get is some glorious future. What we might get instead is some dystopian version of the present, because we’re just reproducing the data that we’ve already got, and sometimes amplifying its biases.
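A tiny, made-up sketch shows how directly that reproduction happens. The data and the ‘model’ below are hypothetical and far simpler than any real system, but the mechanism is the one described above: train on biased historical decisions and the bias comes straight back out as a rule.

```python
# Illustrative sketch with made-up data: a trivially simple "learner" trained on
# past decisions just learns to repeat them, bias included. "postcode_area" here
# stands in for any proxy feature.

from collections import Counter

historical_decisions = [
    # (postcode_area, was_approved) - past human decisions, skewed against area "B"
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": for each area, predict whatever outcome was most common in the past.
model = {}
for area in ("A", "B"):
    outcomes = [approved for a, approved in historical_decisions if a == area]
    model[area] = Counter(outcomes).most_common(1)[0][0]

print(model)  # {'A': True, 'B': False} - yesterday's bias becomes today's automated rule
```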
That sounds like a pessimistic view of the use of big data. Does O’Neil provide any solutions to these problems?
I don’t think she’s pessimistic. She’s just warning that we need to do this properly and make very clear what’s going into it. Another thing I really liked about the book is that she explains the history of how things came to be as they are. I read it on the train on the way back from visiting a university open day with my daughter, including the chapter about how various university rankings began. As soon as it’s pointed out to you, it’s obvious that the whole thing is a scam. If you are near the top of the list then you stay near the top of the list, because people apply to your university because it’s at the top of the list. Universities will do things to try to game the system. O’Neil emphasises throughout how values are embedded in algorithms, like it or not.
The university ranking system helped to create an appalling runaway problem because fees were not included, so there was no incentive to keep these low. She explains the history of how this happened. It all began in America because a second-rate magazine had run out of things to write about and decided to run an article on which were the best universities. It all took off from there. It’s just insane. So, one of the things I think is really important in ethics in AI is that we develop and keep a clear and informed view of social history, and keep track of why we make particular decisions about tech and how this can run away with us.
Your fourth choice is Moral Machines: Teaching Robots Right from Wrong (2008) by Wendell Wallach and Colin Allen.
This book is slightly older than the others and is a lot more detailed. I picked it because the authors look at the particular issue of how we might actually program ethics into machines, into what they call ‘artificial moral agents’.
One of the things I liked about it was how they think in really detailed ways about how you might go about constructing a machine that could somehow understand or incorporate ethics, and what that might mean whether you’re talking about AI performing quite specific tasks or about something more general. They also talk about what sort of moral theory might be needed. They consider how you might have a ‘top down’ approach, starting off with, say, a set of principles, or a consequentialist approach that measures harm and benefit; or how you might have a ‘bottom up’ approach where you’re trying to teach a robot ethics in the way you might teach a child, and what that would involve. And then they go into quite a lot of detail about the important issues in ethical theory that lie behind this.
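For a rough feel of the difference, here is a toy sketch of my own (nothing from Wallach and Allen’s book; the feature names are invented): a ‘top down’ agent checks a candidate action against explicit principles, while a ‘bottom up’ agent copies the judgement of the most similar labelled example it has seen.

```python
# Hypothetical sketch contrasting the two approaches described above.
# Neither is a real ethical reasoner; both only illustrate the shape of the idea.

# Top down: start from explicit principles and filter candidate actions against them.
PRINCIPLES = [
    lambda action: not action.get("harms_human", False),  # e.g. 'do no harm'
    lambda action: not action.get("deceives", False),     # e.g. 'do not deceive'
]

def top_down_permissible(action):
    return all(rule(action) for rule in PRINCIPLES)

# Bottom up: learn a judgement from labelled examples, the way you might correct a
# child. A trivial nearest-neighbour rule stands in for real learning here.
examples = [
    ({"harms_human": False, "deceives": False}, "acceptable"),
    ({"harms_human": True,  "deceives": False}, "unacceptable"),
    ({"harms_human": False, "deceives": True},  "unacceptable"),
]

def bottom_up_judgement(action):
    def distance(a, b):
        return sum(a.get(k, False) != b.get(k, False) for k in ("harms_human", "deceives"))
    nearest = min(examples, key=lambda ex: distance(ex[0], action))
    return nearest[1]

candidate = {"harms_human": False, "deceives": True}
print(top_down_permissible(candidate))  # False - violates an explicit principle
print(bottom_up_judgement(candidate))   # 'unacceptable' - closest labelled example agrees
```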
For instance, they discuss Jonathan Dancy’s work (something that caught my eye because he taught me when I was an undergraduate). Dancy has a theory of moral particularism which is basically the idea that when we make moral decisions or moral judgements, we’re not relying on general principles or rules, but on the specific details of each situation, because these are so individual and so unique. On his view, when we make decisions or judgements they are particular to that situation, and we can’t draw general principles and rules from how we act in these circumstances.
How does that play out with a robot? If you’re programming particularism into a robot’s repertoire, does that mean that it has to gauge the precise situation and then have a unique response that’s not going to be generalisable to any other situation?
Well, I have no idea how you would do it. The book suggests that certain models of building AI using neural networks might work better than others if a particularist approach is correct, but we have no answers so far. But one of the things that the authors talk about in the book is the notion of moral relevance. That’s such a key issue. If you’ve got a fairly simple moral theory like utilitarianism, the only things that are relevant are pain and pleasure – which is why the theory is wrong and why it’s crude. It works quite well on some issues, but it’s really crude. What we need to do is to have an appreciation of which elements in a situation are morally relevant.
That’s really interesting in terms of machine learning because we can get machine learning to recognise that something is a cat, for instance, but this is done in a completely different way to how a child recognises a cat. If you show a child a cat once, then the next time they see a cat they will say ‘cat’. But machine learning does it very differently. It’s a really interesting question whether or not you could ever program machines to be able to pick out moral relevance in the same way that we do.
“If you show a child a cat once, then the next time they see a cat they will say ‘cat’. But machine learning does it very differently”
One of the things we need to think about further from this is that ethics is not just about getting to the right answer. When we’re talking about something being a moral problem or an ethical issue, one of the things that means is that we expect the people involved to be able to explain and justify their actions. If you just decide you’re going to have coffee rather than tea, I don’t expect you to justify that. But if you swear at me and push me down the stairs, then I expect you to have a very good reason for having done that, a reason like, ‘oh, I thought there was a bomb, so I was trying to get you out of harm’s way’.
For a moral issue, then, we expect reasons or an explanation. That’s one of the very important issues we need to think about: whether we really could program machines to be morally autonomous in that kind of way. This book is useful for exposing the level of detail we need to be asking both about machines and about morality to be able to answer those questions. Personally, from my understanding, I don’t think we’re anywhere near developing something which is morally autonomous in that sense. But this book is an excellent account of what the issues are, both in terms of the tech and of the ethics.
And your final choice is 2001: A Space Odyssey (1968) by Arthur C Clarke. For people who haven’t read the book or seen the movie, could you give a brief outline of the plot?
I’ll have to be careful not to spoil it. Basically, it’s about a space mission and a computer that goes bad. The book looks at humanity’s future, but it starts off right at the dawn of humanity. That’s one of the things I really like about it because, as we’ve seen in all these other books, developments in AI are now prompting us to ask questions about who we are as humans and where we’ve come from. One of the dangers of some of the current discussions about AI and ethics is that it would be easy to think that we can just look at where we are now. It’s too easy to think that we’re in a very good position and have made a lot of progress, so let’s just take that progress a bit further. But what we need to do is to keep in view a long historical sweep of where we’ve come from and who we are as humans.
2001 starts off in the cradle of humanity in Africa, with a group of early humans including one named Moon-Watcher. The events of the book span from that historical dawn of humanity to what happens to the characters in the book’s own present day. I think we do need to have some sense of what we’re doing with AI and technology and what significance it has for the whole of humanity, not just to focus on the small issues that arise with specific uses of it.
In the novel, the spaceship’s computer HAL 9000 ‘goes bad’. The way in which the story unfolds is a really powerful way of raising all the issues that are being talked about now in relation to AI. I find the book, especially by being set in space, produces a striking depiction of the central ethical questions about our relationship with AI.
“Arthur C Clarke raises all the issues that are being talked about now in relation to AI”
The astronaut David Bowman ends up alive on his own in a spacecraft, with a two-hour delay in radio signals back to Earth and a computer that’s not on his side. One of the things that just about everybody I know working on ethics in AI at the moment is talking about is that we need to retain ultimate human control of AI. The book raises exactly that issue. So, David Bowman has to try to turn the computer off, and has to go through some sneaky steps to do that. He has to really grapple with the question of whether he should be doing this, because the computer argues back with him as it gradually loses power. Bowman has to make all these decisions entirely on his own. So, the image of a spacecraft out in space, with the astronaut having to go outside the craft to mend things, is, as many people have commented, an analogy for a foetus with an umbilical cord attached to the mothership.
For me, it focuses all these questions about who we are as humans and the idea that we might be tethered somehow to this ship that is shutting down, completely in the hands of AI, going off into this uncertain future. Clarke doesn’t go into much detail about these questions but the novel raises them all. For example, for training, the astronauts have to go through a period of hibernation and they are woken up by a computer. There you are, completely and utterly dependent on a computer. What would that be like? What would that mean? It’s really scary. It actually is terribly scary.
But, in discussing the other books, you did have some sense that there are these dystopian futures that are speculated on, but also that there are people on the other side saying that much of this is scaremongering. The fact that fiction can make you feel frightened of a possible future doesn’t mean that this is a genuine possibility or risk, or that people are going to be foolish enough to relinquish control to that degree.
Yes, you’re quite right. That’s why I’ve recommended these books as a whole: they provide a balanced overview. I might also say that I’m really terrible at watching frightening films, so other people might not be quite so scared by this scenario as I am. But also, remember that we’re already handing a lot over to machines, perhaps more than we have bargained for.
Just out of interest, where do you stand on that dystopian/utopian line of the future of AI in relation to ethics?
Well, I’m fairly worried that we’re already doing things to ourselves that are problematic. There are many indications that how people are using technology is changing how children develop and even changing how our brains develop. It sounds very judgemental, but one thing that I increasingly notice is parents walking along with their children in a pushchair while the adults are just on their smartphones. They should be interacting with their children. If this is a widespread phenomenon, it’s going to have an impact on child development.
“There are many indications that how people are using technology is changing how children develop”
Somebody told me at the weekend about going to a zoo where a two-year-old child in a buggy, seeing a lion behind glass, went up to the glass with its finger and tried to swipe left. It is frightening that the kid thinks it’s something on a screen that you can just get rid of, when it’s an actual real lion. I’m not completely sure if this story is true or not, but it could well be, and it illustrates the issues. These are things that we really need to talk about and that just shouldn’t be left in the hands of a few techy people. These are things that everybody should be having conversations about.
From what you’ve said here, it seems that the key is to have informed and imaginatively rich conversations rather than conversations based on a hunch about what’s going on.
Yes. That’s why I picked some of these books: so that people who maybe don’t know much about this, or are not tech experts, can read them, get quite a good understanding of what’s going on, and join in the conversation. One of the things I strongly feel is that in a sense it’s not simply about ethics. It’s not simply a matter of developing a professional code of ethics that will sort this all out. We need to ask these basic questions about how we should live and why we want to do things in the world. Going back to the point I made at the beginning of the interview, why would you want to have music written and played by a robot? You don’t want that; you want to go into your garage or wherever and play it yourself.
But from the point of view of the listener, you might want to listen to music that was perfectly calibrated to your human brain.
Yes, you might do that, but that’s a question we need to ask. When I listen to Radio 3, it’s not just playing the music; a lot of it is talk about the composers’ lives. That’s part and parcel of why we find it interesting. I was listening to the St Matthew Passion on YouTube while I was working this morning, and somebody had written a comment about a merchant in business in Leipzig in 1727 who had stumbled into a church, found Bach conducting the choir and orchestra, and felt himself in heaven. That is an aspect of why we find it all interesting. So, it may be that we’re going to go down the route of preferring AI compositions and creations, but maybe not. It’s certainly something that we need to be thinking about.
Paula Boddington is a philosopher whose work includes philosophy of mind, moral philosophy and applied ethics. She has worked extensively in interdisciplinary contexts and is interested in the links between theory, practice, and policy, including the ethical and social issues surrounding the development of new science and technology. She is currently a senior research fellow in the Department of Computer Science, University of Oxford, on a project funded by the Future of Life Institute, exploring the development of codes of ethics for AI.