
The best books on Digital Ethics

recommended by Carissa Véliz

Philosophers have a lot to add to debates about digital technology and the moral issues raised by its rapid rise, argues Carissa Véliz, a professor at the University of Oxford's Institute for Ethics in AI. Here she talks us through books for the general reader that introduce some of the challenges of digital ethics, from concerns about privacy and bias to the threat to democracy and the future of humanity.  

Interview by Nigel Warburton

Before we get to the books, could you explain what digital ethics is?

I think of digital ethics as the ethical problems that arise from the design and implementation of digital technologies. Some people call it AI ethics, and my impression is that that is becoming a more and more common label. But digital ethics is a more precise term, because the moral problems we face with digital technologies don’t always involve AI. It could also be about an Excel spreadsheet, or how we use screens.

Could you give a few examples?

Digital ethics encompasses things like data collection and analysis and privacy, and the problems that can emerge as a result of these. It includes how algorithms are being used to distribute resources and opportunities—such as jobs and loans—and how they might be biased in different ways. It can be about algorithms that are designed for maximum engagement with social media, or how screens mediate our experience and change our relationship with one another.

You’re at the forefront of this: you’ve written a book about it, Privacy Is Power, and edited The Oxford Handbook of Digital Ethics. Is there much that philosophers can contribute to this area?

I think so. Initially, digital ethics was an area in which other disciplines were more involved—a lot of computer scientists, lawyers, anthropologists, sociologists. All of these disciplines have something to contribute. But philosophers have more experience with practical ethics in general and medical ethics in particular, and there is a lot we can learn from that. More generally, philosophers can contribute clarity of thought: they can clarify the problems that are at stake. What are the premises? What are some of the implications of different decisions that we might make? Much as in medical ethics, philosophers have a role to play in making explicit which values are at stake.

And do you think that’s largely commentary after the fact, or something that’s going to shape the future of AI?

I hope it’s going to influence the future of AI. We can affect the design of AI in at least two ways. One is by producing work that companies are interested in, so that they ask for our opinion. The other is through regulation. There is still not enough regulation of AI. It’s something that we are working on right now, and what we come up with will shape our society for decades to come.

Let’s look at the books you’re recommending on digital ethics. Which one shall we start with?

Let’s start with the fiction because, for a general audience, that is a very intuitive way to get into the topic. The novel I’ve chosen is very philosophical. It’s called Zed and it’s by Joanna Kavenna. I’m not sure if the author is a philosopher, but if she’s not, she’s very philosophical. It’s a funny novel; that’s one of its virtues. But it’s very concerning at the same time. It’s about a big tech dystopia in which a company called Beetle has gained a lot of power and is led by a narcissistic CEO.

Doesn’t sound much like fiction so far.

It’s about how close this corporation becomes with the security services. What started as nudging—a notification that pushes you to stand up when you’ve been sitting down for a long time or to eat the right kinds of food—becomes seriously oppressive. It’s about how digital technologies can interact with society in a way that invites questions about what is fake and what is real. When you start experiencing reality through avatars, for instance, and through reports by companies, there are a lot of questions about whether this information can be trusted. The author plays with this duality of what is fake and what is real and also with self-fulfilling prophecies. How do we know what technology is doing? How do we check when there’s a mistake? How much transparency do people have?

In this case, the technology starts to develop glitches. Of course, the company always tries to explain them away. One of the ways in which they explain the glitches away is by creating very obfuscating language. There’s a case in which somebody gets killed by a drone and the company brands it as ‘suicide by drone.’ It’s a great novel. It’s powerful.

Let’s turn to your nonfiction choices. Shall we start with Weapons of Math Destruction by Cathy O’Neil?

Yes, it’s a good one to start with because it’s very well-written and super accessible. This is a great book with a memorable title. It’s very comprehensive and she has excellent examples—about getting insurance or jobs, how we evaluate people at work and in civic life. The book also has a lot of authority, because the author is a mathematician arguing that maths is not neutral, and that values are always baked into different algorithms.

Yes, it raises interesting issues about how algorithms appear neutral, but actually can be extremely selective and distorting in the way they operate within a range of areas. Do you have a favourite example?

One of the examples that most stuck with me is how we evaluate both students and teachers, and how metrics change the activity that is being measured. If you focus, say, on test scores, then universities try to game the system by getting students to sit the exams many times to push the grades up. There’s another example of an algorithm that was used to assess how good teachers were. The algorithm was quite complicated: it took into account different things, including the grades of the students, which of course can vary depending on how difficult the exam was that year, or other random things. In the end, people got sacked because of this algorithm, which was later shown to be tracking nothing at all. It was self-referencing, but people had to go to court for that to come out.

Let’s move on to your next book, Mindf*ck. Can you explain what it’s about and why you’ve chosen it?

I really liked this book. I wanted to choose a book about privacy; I think privacy is possibly the most concerning challenge we have with digital technologies. Privacy can be tricky because it can feel very abstract. It doesn’t feel like anything to have your data collected. It seems innocent and painless. The consequences are not always tangible, or they are sometimes very far off in the future.

“Privacy protects us from possible abuses of power”

This book is great because it’s written by Christopher Wylie, who was the whistleblower who exposed Cambridge Analytica. He tells the story of exactly how Cambridge Analytica got the idea to use personal data to try to sway elections and how he became part of this. He was the data analyst who made it happen, and he writes about how they built the tool and what exactly that tool could do. The book makes something very abstract and difficult to understand very tangible.

Is he like a criminal who then turns against crime, revealing how it’s done and how many villains there are around us in the process?

It’s a bit like that. It’s a book that also has a lot of narrative interest. Carole Cadwalladr, the Guardian journalist, persuaded him to become a whistleblower. It wasn’t his idea. There is also an interesting ethical story to be told there about how someone becomes a whistleblower, how someone switches from thinking, ‘This is my job, and it’s okay to do it’ to realizing, ‘Maybe I’ve done something really, really wrong and I have to make amends and try to change this.’

In the past, if you were a whistleblower, you’d have to smuggle in a tiny little camera somewhere on your body. You’d have to take pictures of lots of documents and it would be very painstaking. Now, with the press of a button, you can transfer huge amounts of data that are absolutely damning. There are lots of people who’ve done this: Edward Snowden, Chelsea Manning, Frances Haugen. This ease of transferring large amounts of data works both ways. It allows companies to collect a great deal of evidence about us, and there is an associated loss of privacy, but for companies too, unless the security is very tight, there is the problem that insiders who turn can transfer damning evidence to a pen drive or something similar. Was that involved in Christopher Wylie’s whistleblowing? Was that one of those cases where he didn’t just describe what he did, but actually provided detailed data on it?

He did leak documents, but what he revealed was mainly about the tool itself, and how it was developed. But you’re right. Daniel Ellsberg, the Pentagon Papers whistleblower, had to spend hours photocopying those papers as the only way he could take them out. For someone like Snowden, it was very different.

You’re worried about invasions of privacy that are treated as commonplace now in our interactions with digital apps, online shopping or whatever it might be. What’s the big deal about revealing what things you like to buy, or when you get up in the morning, or whether you do exercise and what your heartbeat is?

In general, privacy protects us from possible abuses of power. If you share things like your heartbeat or what you drank last night, that could be used against you by insurance companies.

Likewise your genetics. If you do a genetic test that reveals hereditary conditions you could pay a higher premium, even though it’s through no fault of your own that you have those genes. Furthermore, there’s a Nature study that shows that about 40% of the results of these commercial DNA tests yield false positives. But many insurance companies still take them at face value.

So you might be revealing things which could put you in a category that you don’t want to be in. The data can also introduce systematic errors, so you might wrongly be put into an at-risk group, for instance, on the basis of evidence which doesn’t fully support that. Isn’t that just an argument for having more precise tools?

No, because you can argue that the fundamental principle is unfair. Even if they got it right about your genes, you’re still not to blame for them. There’s an argument to be made for why you shouldn’t be paying more than other people for genes that you didn’t choose and couldn’t change.

There will be winners as well as losers. Presumably, there are people who use their fitness app, and then somebody can see that they are incredibly fit and healthy. They’ll have a lower premium. What’s wrong with that? You’re just assuming that everybody should be treated equally. But if we have differences that are significant, shouldn’t you get the benefits of being a careful eater and a careful exerciser?

That sounds good, but it’s a very sterilized and clean image of a reality that just doesn’t pan out that way. Typically, the people who do exercise are people who are wealthier—they have more time to do exercise than somebody who is working two jobs to survive.

“Cambridge Analytica no longer exists, but there are more than 300 firms that do pretty much the same thing”

Furthermore, there are all kinds of assumptions there. For instance, it may be that you do a kind of exercise that doesn’t get tracked as easily with a watch. So that might nudge you to run, which might actually be harder on the body than doing, say, yoga. Whenever we track things and categorize people, there is a risk of tracking the wrong thing, of nudging people into doing things that are actually not as good for them as they might seem at a superficial level, and of being unfair in different ways, either because we miscategorize them or because the categories don’t respect social realities that should be taken into account. When personal data gets used to treat people differently, it often ends in unfair discrimination, because it weighs things that shouldn’t count and ignores things that should. In the end, we are not being treated as equal citizens anymore, but on the basis of our data, and that’s an affront to democracy.

Getting back to Cambridge Analytica, another aspect of that is the way that people can use large-scale data to swing elections in democracies through social media and targeted messaging. Once they have enough information about who’s on the brink of changing their voting pattern, and what might nudge them into that, this becomes a very powerful tool. Also, perhaps even more worryingly, in the hands of an authoritarian government, this data can be used to target individuals whom they consider subversive.

Yes! Another reason I like this book is that we haven’t changed anything to make sure it doesn’t happen again, so it serves as a warning. What this book reveals is still relevant. Cambridge Analytica no longer exists, but there are more than 300 firms that do pretty much the same thing. We haven’t fixed it. I’m worried that we are building a surveillance structure that could be co-opted by anyone. It could be an authoritarian regime, or it could even be a company.

Something that I’ve been thinking about recently is how digitization is analogous to colonialism. It’s a kind of colonizing of reality, a colonizing of the world to make trackable what wasn’t trackable before, to turn the analog into digital. When we look back to colonialism in India, the default image that comes to our mind—or at least it was for me—is that it was the British government who colonized India. But actually it was the East India Company, which at some point had more soldiers than the UK government.

So the rogue player could be an authoritarian government, but companies could also become oppressive enough to jeopardize our freedom. When I see something like Amazon Ring cameras becoming more and more popular—and having this very close connection to the police—that’s definitely a worry. And when we have rivals like China, which is not very democratic but is keen on collecting data and becoming a leader in AI, that’s a geopolitical risk that we’re taking.

Let’s go on to Algorithms of Oppression by Safiya Umoja Noble, a professor at UCLA.

A lot of people might already appreciate that algorithms can be biased. When this book was published, it really changed the way many people saw search engines. It’s about how algorithms can be sexist and racist, and this is true of Google’s search engine in particular. People tend to think of the Google search engine as something very neutral, very reliable. It’s public information, like a public service. Safiya Umoja Noble reminds us that Google is a commercial enterprise; it’s not the public sphere, it’s not a public square, it’s not an NGO. And, actually, racism and sexism can be quite profitable.

How does that manifest itself through Google?

One story that the author tells is that she was looking for activities to entertain her nieces. She looked up ‘Black girls’ and found that most of the search results were incredibly sexualized and pornographic. Just by trying to find something to entertain the girls, she was confronted with this portrayal of Black girls as sexual objects. Another example was somebody who searched for ‘three black teenagers’ and the images that appeared were mugshots. But if you searched for ‘three white teenagers’ you got images of very wholesome kids smiling.

“Just because something is very popular doesn’t mean that it’s true or that it’s morally acceptable”

Google creates this product that is very profitable and, when something goes wrong, sometimes it then fixes the problem, which shows that they could have fixed it before, had they thought it through. But sometimes they can’t even fix the problem. And then they just shirk responsibility and say, ‘Well, it’s the algorithm; we can’t really do anything about it.’

Another example was searches relating to Judaism, where the first page that came up was full of Holocaust deniers and anti-Semitic content. Google was confronted with this and tried to change it. But because these pages were so popular, they actually couldn’t. The best thing they could come up with was to buy an ad for themselves, so the first thing you saw was a Google ad explaining that some of the top results were unacceptable and that Google didn’t endorse them. Instead of fixing the algorithm, the best they could do was display their own ad as a warning.

Presumably, with that book, the attention it got will have resulted in those examples being fixed, but there may be millions of others. Is this all to do with the way the algorithms are written or is the problem in the content that’s searched?

Both. It’s to do with the way the algorithm is written and the associations that are made, and how pages get ranked through popularity. Just because something is very popular doesn’t mean that it’s true or that it’s morally acceptable. And that’s one of Noble’s points: that companies like Google try to write these mistakes off as ‘glitches,’ when in fact they are part and parcel of how most AI works.

Shocking things are often very popular, because people want to have a look.

There are some tricky ethical questions because when that happens, Google tries to put the burden on people. They say, ‘It’s people who like that kind of thing and it’s not our problem. We don’t decide the content, it’s just the popularity.’ But the decision to defer to popularity is an ethical, morally significant one.

Furthermore, their algorithm makes something popular even more popular, because then, when you search ‘Why do women…’, it gets auto-completed with something completely unacceptable that reinforces whatever prejudice was there already.

The United Nations highlighted this in a campaign. Google search suggestions included:

Women cannot drive/be bishops/be trusted to speak in church.

Women should not have rights/vote/work/box.

Women should stay at home/be slaves/be in the kitchen/not speak in church.

Women need to be put in their place/know their place/be controlled/be disciplined.

Google fixed that right after the campaign happened.

And in the book is it just Google that’s the target or other technology companies too?

It’s mostly Google. A good complement to this book is Race After Technology by Ruha Benjamin. That’s more about how technology is not race-neutral, with many examples of different technologies and how they impact people differently, including in the areas of pre-emptive policing and hiring algorithms. She argues that there is a new ‘Jim Code’, designed and implemented by algorithms, with biases coded into the tech. It looks very objective and scientific, but it simply encodes biases, much like the old Jim Crow laws did.

The books you’ve chosen so far have been illustrative of ethical problems. They’re not obviously books that theorize, from a philosophical point of view, about the conceptual and moral issues raised by AI.

That’s because I chose books for the general public. There was a temptation to include Shannon Vallor’s Technology and the Virtues, but that’s quite specialist and primarily written for professional philosophers.

It strikes me that, beginning with a dystopian novel, you’ve chosen a series of books that describe a dystopian present. It’s almost as if you’re a pessimist or a cynic about digital technology. Does anybody ever write books that are positive and say, ‘Wow, this is amazing. Here are all the things we can do’?

Yes, all the time. We hear a lot of positives, such as in books like The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World by Pedro Domingos. I’m just reading one that’s called Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live by Jeff Jarvis. That’s about how having so much data is going to boost innovation and how much we can learn from each other and everything’s going to be fine.

We also get a lot of positivity from tech companies, who are always telling us how tech is going to save the world and how amazing it is. So, with these book choices, I’m trying to balance the view. There’s a tendency to be very optimistic when it comes to technology, and I think that’s natural, but it’s partly wishful thinking. We’re in this pandemic and we really want it to go away: ‘oh, this app is surely going to solve it…’ Also, human beings are natural makers of things. Technology has been so useful for us in the past that we have a tendency to trust it more than we trust other human beings—even though it’s human beings who are building the technologies.

It’s true that if you look at large-scale analyses across medical journals, where people can extract data and produce really interesting, unexpected results from multiple experiments that have already been done, these sorts of uses of digital resources could have huge benefits for humanity. And there are many other cases like this.

Yes, absolutely. It’s not an easy distinction to make when it comes to practice, but I think a lot of data that is not personal data should be more public and accessible and open. But with personal data, it’s so easy to misuse and so dangerous for individuals and societies that we should be much more careful than we are.

Even anonymous data?

Yes, because it turns out that it’s very hard to anonymize data. It’s in fact very easy to re-identify data. So you can remove the name from someone’s personal data but, actually, if you know where they live and where they work, you can quickly discover who they are.
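To make that concrete, here is a minimal sketch in Python of how re-identification by joining quasi-identifiers can work. Everything in it is invented for illustration: the datasets, column names and matching rule are hypothetical, not taken from any real system or study.

# A purely hypothetical illustration: both datasets are invented.
anonymized_records = [
    {"home_area": "OX1", "workplace": "Oxford", "diagnosis": "asthma"},
    {"home_area": "OX2", "workplace": "Abingdon", "diagnosis": "diabetes"},
]
public_profiles = [
    {"name": "A. Smith", "home_area": "OX1", "workplace": "Oxford"},
    {"name": "B. Jones", "home_area": "OX2", "workplace": "Abingdon"},
]

def reidentify(records, profiles):
    # Match each 'anonymous' record to the named profiles that share its quasi-identifiers.
    matches = []
    for record in records:
        candidates = [p for p in profiles
                      if p["home_area"] == record["home_area"]
                      and p["workplace"] == record["workplace"]]
        if len(candidates) == 1:  # a unique match defeats the anonymization
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(anonymized_records, public_profiles))
# prints [('A. Smith', 'asthma'), ('B. Jones', 'diabetes')]

The point of the sketch is simply that once a couple of attributes narrow the pool of candidates to one person, the missing name adds no protection at all.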

You’re saying that there’s a lot of wishful thinking and optimism, but I think there’s a temptation to accentuate the negative as well. Say you’re worried about surveillance and you see a camera in the store that’s trying to pick up shoplifters. Even if it’s not checking out what you’re buying you might easily think, ‘Big Brother’s watching me.’

The evidence suggests the contrary. Given how much money and investment and how much development we’re putting into AI and surveillance, I think that the general tendency is to be very optimistic about it. We do need to balance that tendency. Oftentimes, when I criticize tech, people assume that I’m arguing that we should ditch it, or that tech is bad. It’s not that. The important question is: How do we design tech to get some of the benefits without incurring many of the costs or the risks? And I don’t think we’re striking that balance well at the moment.

“There is still not enough regulation of AI”

So partly it’s about balancing the optimistic view with a realistic look at what is going on around us. Secondly, I think a big task of ethics is to figure out what could go wrong. Our job is partly about finding the problems and trying to prevent future problems. That can seem like a negative enterprise, but in fact, it’s a worthwhile one—particularly when you propose solutions, and when you propose different ways of designing things.

Ultimately, it’s about trying to have the best society that we can, given the diversity of interests and the technology available. Also, possibly putting legal limits on what people can do in order to shape the way tech develops. Is that what you believe?

Exactly. But that road often goes through a negative process of focusing on what can go wrong and the problems.

Let’s turn to your final book selection, AI Ethics by Mark Coeckelbergh, part of the MIT Essential Knowledge series.

This book is by a philosopher. It’s very clear, and he knows what he’s talking about. He sets out a map of the problems. It covers issues like the problem of superintelligence: the predicted moment when there is an intelligence explosion and AI becomes smarter than we are, working on and improving itself until we become superfluous. The worry is that if we become superfluous, this AI might not care about us. Or it might be totally indifferent to us, and maybe it will even obliterate us. How do we make sure that we design AI so that we have value alignment, and we’re still in the picture? So that’s one classic problem in AI. Another is privacy and surveillance. Another is the problem of unexplainable decisions and black-box algorithms, where we don’t exactly know how they work, what precisely they are inferring, and with what data. The book also covers challenges for policymakers, including those posed by climate change. It’s a kind of taster: a very short, compact survey, academic but very accessible.

Then there is another book I want to mention in passing, called Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford. That one fulfils a similar purpose in that it covers many of the biggest problems with AI and tech. But it does so from a very interesting perspective: the material basis and composition of AI. It’s about what these machines are made of and who makes them. It’s about the mines that are used to extract the metals necessary to build phones, data servers and so on. The main thesis of the book is that artificial intelligence is neither artificial—because it actually depends on the natural environment—nor genuine intelligence. This book is very attuned to problems of power, and to how AI gets used to entrench power asymmetries that are worrisome for labour rights and civil rights.

So it describes how things are made. Does it have proposals for how they ought to be made? Or does it stop at saying, ‘This is terrible. This involves children working in mines, slave labour in a factory, and then loads of carbon burnt by shipping it to America’, that kind of thing?

It doesn’t have many proposals. In my view, that’s something that’s missing from most books in this area, and it’s something that I tried to redress in Privacy Is Power.

Can you say something about your book on digital ethics? It’s a multi-authored book, isn’t it?

Yes. It’s a handbook with 36 chapters, written by philosophers, but not only for philosophers: it also aspires to be a source of information for people working in computer science, law, sociology, surveillance studies, media studies and anthropology. It covers a wide range of topics, including free speech and social media, predictive policing, sex robots, the future of democracy, cybersecurity, friendship online, the future of work, medical AI, the ethics of adblocking, how robots have politics. It’s very, very broad.

When I first had the idea to do this book, very few philosophers were working on AI ethics or digital ethics. I was very frustrated that philosophers weren’t producing more given the importance of the topic. In a matter of a few years, that has dramatically changed. There are so many papers coming out now, so many people getting interested. Hopefully, this book will be a text that can help academics and students get a map of the most important philosophical problems in the digital ethics field.

Interview by Nigel Warburton

November 30, 2022



Carissa Véliz

Carissa Véliz is a philosophy professor at the University of Oxford's Institute for Ethics in AI. In 2021, she received the Herbert A. Simon Award for Outstanding Research in Computing and Philosophy from the International Association of Computing and Philosophy.