
The best books on The Ethics of Technology

recommended by Tom Chatfield

Wise Animals: How Technology Has Made Us What We Are by Tom Chatfield


We are building ever more powerful machines that will compute answers to any questions we care to ask them, says Tom Chatfield, the author and tech philosopher. But are we asking the right questions? Here, he selects five of the best books on the ethics of technology—thoughtful explorations of how our newly made tools might remake us.

Interview by Cal Flyn, Deputy Editor


Before we talk about your recommended books specifically, I wonder if you might walk us through why it’s so crucial to think about the ethics of technology.

For me it begins with the observation that there are no neutral tools. The tech historian Melvin Kranzberg first said this in the 1980s, but it bears repeating. Similarly, to paraphrase Marshall McLuhan, we shape our tools, then our tools shape us. The very idea that the manmade technological world is a neutral environment seems to me dangerous and wrong.

If we’re interested in ethics, and especially an ethics that is literate in the best knowledge we have about the universe and ourselves, we have to take a close interest in the human-made world. Our ability to be good and do good is bound up with the systems through which we’re connected and interconnected. ‘Technology,’ today, is usually taken to mean the shiny, new, increasingly autonomous stuff. But in fact it encompasses everything we have made together as a species: from swords and shields and cooking vessels to written words, clocks and compasses. And it only becomes more important to talk about the propensities and purposes we’re embedding in these creations when they are autonomous or semi-autonomous.

So I think we need to start by acknowledging that our ability to thrive, to live good and meaningful lives, to find meaningful connection, to find purpose, is co-dependent with the tools and technologies that connect us. And it follows that it’s extraordinarily important to have a vocabulary that allows us to talk in a rich way about the values embodied in technology. Otherwise we end up in a flat, hopeless landscape that features nothing but ‘consumer choice’ and ‘features’ and ‘bugs’—plus a bunch of technocrats who may solve our problems, or not. This way of thinking leads to a painfully narrow view of humanity, where we are just antiquated devices, ripe for displacement by super-smart machines.

Absolutely. As you argue in your book, Wise Animals, “we need the right kind of worries. And this means turning our gaze away from gadgets towards the values and assumptions baked into them.” Do you think this more questioning approach is baked into the tech industry as it exists today?

Well, I think a lot of people are increasingly worried about this, and that there’s a real hunger for bringing into these realms questions of what it means to live well or to thrive.

But the broader context is one in which plenty of people still shy away from questioning. There’s a lot of determinism around. By determinism, I mean the idea that in the long run we are powerless, that technology has a certain momentum that will sweep across the world. You can object and be a Luddite—but this is assumed to be futile. In the long term, the course of technological history is simply something that happens to you.

Now, I want to flip that around, and say, actually, if you look at people’s lives, most of the time it’s not only about money and power and features and progress. It’s the other, more mysterious and ancient stuff that matters more. And in this domain, there is a great deal of human agency and questioning to be found.

Once people reach a state of material sufficiency—which is absolutely essential—the most important considerations start to become doing something that provides a sense of purpose, creating meaningful connections with other people and nature, bequeathing to their descendants a world worth living in. So I think any realistic vision of development has to allow for the basic fact that, when people have enough, what they aspire towards is far more complicated than having more or better stuff. It’s animated by a deep, collective interest in realising their potential; questioning and challenging aspects of the present; building a better future.

Let’s look at the first book on tech ethics you’ve chosen to recommend. This is Luciano Floridi’s The 4th Revolution, in which he discusses the intellectual and technological revolutions that have transformed our understanding of the world. Our philosophy editor Nigel Warburton spoke to Floridi a few years ago about the best books on the ‘philosophy of information.’

I love Luciano Floridi—who is a philosopher of technology—because he has a breadth of vision and a genuinely systematic approach to the ethics of technology, but also because he is deeply literate in history and deeply interested in human nature. He’s not a consequentialist, in the sense of being interested in maximising some kind of uber-beneficial long-term outcome for humanity. A lot of tech philosophy naturally leans towards consequentialism in terms of mega payoffs and outputs. This can be a great tool for engaging with the outputs of particular systems, but it’s not a systematic philosophy of human nature or thriving.

In this particular book, Floridi starts by referencing and updating Freud’s account of three historical revolutions in human consciousness. With Copernicus and the birth of heliocentric models of the universe, we gradually learned as a species that we weren’t the centre of the universe. It’s not all about us, in a cosmological sense. Then in the Darwinian revolution of the 19th century, we found evidence that suggested that we are not the unique pinnacle of creation—that there wasn’t this moment where humans were ‘made,’ the best and most intelligent at the top of a pyramid. In fact, we’re connected with and emergent from the rest of nature; and nature turns out to be far vaster and stranger than we previously imagined. The timespan within which we exist is immense while—incidentally—you no longer need a deity to explain our existence.

Next, Freud argued that his own psychoanalytic revolution was another de-centring of human consciousness, because rather than the sublimely lucid self-knowledge of Descartes—I think therefore I am, cogito ergo sum—you can’t be at all sure of what’s going on when you introspect. You’re grasping at straws.

Floridi adds to this account what he calls a ‘fourth revolution’: a similar de-centring of human consciousness where suddenly, through artificial means, we’re creating entities that are capable of incredible feats of information processing. And by doing so we’ve been forced to re-conceive ourselves as informational organisms – and take on board the fact that even our intellectual capabilities may not be beyond replication.

“Our ability to be good and do good is bound up with the systems through which we’re connected and interconnected”

The subtle point he makes—which I think is a Kantian one—is to recognise that human dignity and thriving become more rather than less important in this informational context. For Floridi, the informational webs we weave across the world are themselves sites of ethical activity.

Building on this, he discusses what it means for the technologies we use to be ‘pro-ethical’ by design, in the sense that they enhance or uplift our capacity for accountable, autonomous actions. In order to do so, they need – for example – to give us correct, actionable information. They need to help us arrive at decisions that are appropriate and genuinely linked to our needs and concerns, rather than manipulating or disempowering us.

You can contrast this to what have elsewhere been described as ‘dark patterns,’ where you have systems that are opaque and exploitative: where the interface is more like a casino, and it doesn’t want you to make a good decision, or really to have any choice at all. Some forms of social media might be one example of that—where the incentives are to do with emotional arousal, getting people to act fast and without consideration, with little ability to apply filters of quality to what they are doing.

I didn’t list it among my recommendations, but I love Neal Stephenson’s novel Seveneves, about a hypothetical future in which social media so deranged people’s collective judgement that it nearly led to the human species being wiped out. In his story, social media is up there with gene editing or bioweapons as a forbidden technology, because the danger is so great when its seductions meet human fallibilities and vulnerabilities.

I do like speculative fiction pitched as a kind of modern-day parable. I’m intrigued by what you said about certain technologies being powered by or powering a dark side of human psychology. Do you think the intentions of the creators of these technologies matter? Or is it all about the effects?

I think intentions are important. But it’s very dangerous to ascribe too much foresight and brilliance to the people creating tools. I’ve spent quite a lot of time in the headquarters of tech companies dealing with very smart people, and it’s important to remember that even very smart people often have quite narrow knowledge and incentives. So, yes, intentions are important, but what you really need to be interested in is the blind spots, the lack of foresight, the capacity of people to pursue profit at the expense of other issues.

The big thing for me is what I call the ‘virtuous processes’ of technology’s development and deployment. What I mean by that is forms of regulation and governance where you don’t get to ignore certain consequences, to move fast and break things and not worry about the result: where you are obligated to weigh up the wider impacts of a technology.

You need to assume there are a lot of blind spots, lots of stuff that will emerge over time that you can’t anticipate. It’s one of the great lessons of systems thinking. I quote the French philosopher Paul Virilio in the book: “When you invent the ship, you invent the shipwreck.” This refers to those accidental revelations that will always come with technology. They’re inevitable; but this makes it all the more important to have feedback mechanisms whereby unintended or undesirable results—damage, injustice, unfairness—can be filtered back into the system, and checks and balances and mitigations created.

There’s been a lot of good stuff written recently about the Luddites. Brian Merchant has a great recent book, Blood in the Machine, recasting the Luddites as a sophisticated labour movement: not just people who didn’t like technology, who wasted everyone’s time by busting up factories and resisting the inevitable. He’s partly saying that, actually, all forms of automation potentially bring a great deal of injustice and disempowerment and exploitation. And much of the story of industrialisation has entailed, very gradually, societies working out collectively how industrial processes can be compatible with respect for human bodies and minds.

Of course, there are urgent conversations we still need to have about how models of production and modern lifestyles can be made compatible with sustainability. It’s not about moving back to some mythical Eden before technology; I don’t think that’s possible or meaningful. But I do think that keeping faith with our ability to change and adapt rapidly is important. We have the tools and the compassion, the awareness and the empathy, to come up with solutions for a way to live together. And, ironically, best serving these values tends to mean focusing on the local and the practical and the incremental, not the grand and the hand-waving.

Can we talk about your second book recommendation? This is former Five Books interviewee Alison Gopnik’s The Philosophical Baby. Could you talk us through why you recommend this book in the context of ethics and technology?

I love this book so much. Alison’s work has been so important to me. She brilliantly argues in this book and in her research that if you want to understand our species, in a psychologically and scientifically literate way, you have to be deeply interested in children and childhood. Childhood is key to our uniqueness as a species. Human childhood is a total outlier among mammals in biological terms. We have this incredibly long period of enormous dependency, neuroplasticity and flexibility.

Fairly obviously, to be a language-using and technology-crafting species, you need an incredible capacity for learning: to be able primarily to acquire skills through nurture rather than nature, through teaching and communication rather than through instinct. But how did this come about?

Gopnik makes the case that over millions of years, our lineage doubled and then tripled down on this incredibly long, vulnerable, flexible childhood, because it conferred evolutionary advantages in terms of our ability to create tools, to build protections through technology. But she also points out that all this was necessarily bound up with an incredible capacity for mutual care, compassion, and for nurture. These traits are the absolute fundamentals for human survival and thriving. She calls children the R&D department of the human species. It’s a wonderful image; and it captures the entwined capacities for care and change that underpin technology and culture alike.

The human child is born incredibly vulnerable. It’s very dangerous for the human mother to give birth. And then, once the child is born, it’s utterly dependent for months and months and months—like no other ape. It takes months to even learn to sit up. Reproductive maturity is a decade, a decade and a half away. The prefrontal cortex continues to grow and develop into our twenties. But it brings great gifts: this capacity for change and learning and teaching.


So much of the time, discussions of technology are obsessed with notionally independent adults—perhaps men with disposable income. But that’s an impoverished description of our species, both how we got here and how we build a future together. Pretty obviously, the future—in the most literal of ways—is children. They are born with this incredible plasticity, a wonderful lack of instinctual lock-in. So, for me, any ethics of technology has to be deeply, deeply interested in childhood and children: in how we learn, how we teach, how we change, and how the knowledge of one generation is passed on and adapted.

Gopnik has also written about AI as a social technology, as a kind of reification of the knowledge and understanding latent in language and culture—something more like the library to end all libraries than a mind. I think this is a very useful framing. AI isn’t, or shouldn’t be, our rival. It’s a resource: a cultural and technological compendium of our achievements. But it cannot do any of the most important stuff, in terms of nurture, care, teaching, hoping. If you are thinking about how to create systems that have values compatible with human and planetary thriving, children and child-raising are good models. If you want to look at the conditions a human needs to thrive, look at children.

It’s crushingly obvious that children need love. Of course, they need to be safe and warm and fed more than anything; but it’s love that is the driving and connecting force. The love and support of family and friends and kin is more important than piles of stuff and gadgets. The idea that children might be taught and raised by super-intelligent AI is just delusional. It’s stupid.

That makes me think about Harry Harlow’s baby monkeys clinging miserably to their wire ‘mothers.’

There’s a lovely line in the philosopher Alasdair MacIntyre’s lecture series Dependent Rational Animals, where he says that his attempts in his earlier work to come up with an ethics independent of biology were just wrong and impossible. To paraphrase, you need an ethical self-conception rooted in the fact that you’re part of humanity, part of a species, not just a notionally independent adult.

We have all been, and we all will be again, utterly reliant upon the care of others. Every single one of us was born into total dependency. We will all sicken, age and die. There’s nothing in our lives that is truly autonomous, when you think about it. Even the most libertarian forms of individualism are wholly predicated upon massive shared supplies of goods and services, money and trade, manufacture and technology.

These people in their bunkers with their generators, what are they doing? It’s a weird denial of our profound inter-dependency. And technology is the most inter-dependent thing of all, even more so than us. Technology is incredibly needy. I think the desire to be wholly independent of others embodies a confusion between the enormously important freedom to pursue your goals, to live your life, and the fact that every opportunity and tool you have at your disposal is ultimately the product of countless others’ lives and labour.

Gopnik makes this point very powerfully. She says: I am the child of countless minds. Everything around me—the light, the chair, the clothes I wear—is the product of century after century of human ingenuity and inheritance.

You touched on artificial intelligence a moment ago. Might that lead us to the third book you want to recommend, which is Stuart Russell’s Human Compatible: Artificial Intelligence and the Problem of Control?

Yes, this is a book by a computer scientist who has thought deeply about the history and future trajectory of artificial intelligence, and who is very literate in the cutting-edge discourse around it. He is no Luddite, in the pejorative sense, and in fact embraces many consequentialist ideas because they are extremely fit for some purposes: if you are talking about a system, you do need to take a close pragmatic interest in its inputs and outputs.

One of the reasons I find this book important is that it’s very clearly written. It’s not a long or a difficult book. Russell zeroes in on the significance of doubt. This is a fundamental point that I think a lot of people in computer science miss: any kind of real-world problem (what should we do? what might we do? what would be the best outcome?) is computationally intractable. In other words, there will always be some uncertainty.

There is such a thing as a perfect game of chess. It’s very difficult to compute, but there is such a thing. A rules-based, deterministic game can be optimised. But human life can’t be optimised. It can be improved, but it can’t be optimised. ‘What should I have for lunch?’ is not a question that has an optimal answer. Given all the calculation time in the universe, you still couldn’t calculate the perfect, super-optimal lunch.

Of course, we all know the world is probabilistic. Yet, when it comes to computation, a lot of people seem to forget this and start looking for ‘the answer’ when it comes to complex social challenges, the future of AI, and so on. Russell emphasises the significance of the fact that there will always be a plurality of different priorities, and you’ll never be able to come up with the priority.

Then, crucially, he talks in a practical way about the importance of building doubt into machines. AIs, he argues, should be doubtful to the extent that we are doubtful about what goal is worth pursuing. And they should also be willing to be switched off. The people who construct them should prioritise the existence of an off switch, and a process of constructive doubt, over trying to optimise technology towards a supreme, transcendent final goal for humanity.

Human Compatible is a humane book about the challenges and opportunities of AI. It’s also a very non-hype book. By which I mean, it talks about the trajectory of artificial intelligence, and the enormous potential of the vast-scale pattern recognition it offers, without indulging fantasies. AI has a great potential to do good and to help us solve problems, but the point is not that it or anybody will ever know best, but rather that we should ensure the values and tendencies encoded into powerful systems are compatible with human thriving, and the thriving of life on this planet. And that compatibility must in turn entail doubt, plurality, and an open interrogation of aims and purposes, not optimisation.

The thing I worry about, perhaps more than Russell, is that some technologists seem determined to reason backwards from a hypothetical, imagined future. There’s the so-called ‘singularity,’ the point beyond which computers become self-improving, and potentially solve all human problems. So the only thing that matters is getting to the singularity and having good rather than bad super machines. Then, at the point of singularity or beyond it, death will be solved, the losses of our ecosystem will be redressed, there can be an infinite number of people living infinitely magnificent lives, and so on. Or, if we do things wrong, everyone will be in computer hell. The problem with all this is that you’re reasoning backwards from a fixed conclusion you’ve arrived at through hand-waving and unacknowledged emotion. You’re engaging in metaphysical, or even eschatological speculation, while insisting that it’s all perfectly logical and evidence-based.

It’s really important, I think, to resist focusing on imaginary problems or treating hypotheticals with high degrees of certainty. This is precisely the opposite of what we need to do, which is to focus on real problems and opportunities while preserving uncertainty and constructive doubt: while putting actual ideas and theories to meaningful, scientific tests.

I like that you encoded an element of doubt into your own summing up. And perhaps this idea of querying assumptions, and anticipating unintended consequences, draws us on to your next tech ethics book recommendation: Carissa Véliz’s Privacy is Power: Why and How You Should Take Back Control of Your Data, which looks at the dangers of surveillance through technology.

This is both a philosophical and a polemical book: one that links big ideas to immediate actions in the real world. I see it as of a piece with important writing by philosophers like Evan Selinger and Evan Greer in the US, who have sounded the alarm around the normalisation of various kinds of surveillance and data collection.

A lot of contemporary, AI-enabled technology is data hungry, with the broad promise that if you give it your data, it can do more and more: keep you safe, make employees more efficient and more profitable, track students and optimise their learning, stop accidents, catch thieves or dissidents; and so on.

Some of this may be true, or even desirable. But privacy is incredibly important. It’s the space within which various kinds of trust, self-authorship, self-control and thriving can take place. And it’s also very important to the civic contract: the ability of different people to meaningfully control their lives and have agency within them. A line I like in Véliz’s book is that, contrary to the idea that philosophy is about dispassionate consideration, it’s appropriate and indeed important to protest and to show anger in the face of incursions upon your liberty. Resistance is not futile, but necessary.

In democracies at least, we are lucky enough to be able to say: no, we do not want ubiquitous surveillance on college campuses; no, we do not want everyone’s face being recognised in public places; no, we do not want databases of everyone who goes to a protest. We don’t want certain features of people to be tracked at all. Because the kind of power that this gives to the state, or corporations, or other small bodies of people, is dangerous and corrosive of a lot of the values we need in human society. A loss of privacy makes it easy for other people to have power over you and to manipulate you. Therefore, having control over that thing called data—which sounds so neutral—is powerful. Once again, it’s not neutral at all. It’s information about who you go to bed with, or what your children do, or who your God is.

We’re back to the point I began with. Far from technology being a dispassionate expert arena, letting people harvest data about you in an unfettered way takes away your privacy and gives others power over you. And this doesn’t have to be the case. Some of us, at least, can push back against it, demand protections. It’s not inevitable.

All told, this is an eloquent and important book, and one I enjoyed a lot. It’s a practical call to action, with examples that make the case for acting right now.

Your next book recommendation is Transcendence by Gaia Vince—another previous Five Books interviewee—in which the author argues that we have been transforming our species into a transcendent superorganism she dubs Homo omnis. Tell us more.

Yes. In some ways this book has inspired me most directly in my own writing. It’s big, it’s beautiful, it’s well-written and unashamedly fascinated by the biggest of big pictures in terms of where we’ve come from as a species. And its themes are interwoven with deep readings of history, spirituality, language, belief. It’s almost dizzying, a compendium of fascinating ideas about humanity.

I love the way Vince leaps between the particular—the details of, for example, how archaeological evidence shows us hominins gradually learning to control fire—and the vast sweep of history. Our capacity to combine these perspectives is itself a great human gift. As she notes, we exist at an unprecedented moment in history: we are a planet-altering species, we are transcending the purely biological. Humanity is utterly remarkable. Yet at the same time, we remain a part of nature. We are part of this planetary system; a strange, self-remaking part. And we can’t possibly understand ourselves without being deeply interested in this connectivity and the history of our own emergence.

So, yes, I love the big-ness of this book—its eloquence and insistence that there is a commonality between archaeology, sociology, biology; the spiritual, the mental, the computational.

Why do you consider it a book about the ethics of technology?

I put it down as a book of ethics, because for me, the great task of ethics is to connect the facts we know to the question of what we should do, and why. So: what does thriving look like for our species in the light of what we know about our biology, our technology, our history? I constantly want to make this connection.

You know, I come back to someone like Kant, who is almost a byword for the generalised, the abstract, or the idealistic. He can seem an impossibly demanding ethicist, a philosopher’s philosopher. Yet he was famously awoken from his dogmatic slumber by Hume’s empiricism. He did all the things he did because he cared so deeply about facts—about us as beings, about nature, about truth. He had some appallingly unscientific and ignorant ideas, especially around race. But even this emphasizes that the great challenge for ethics is to address new forms of knowledge and old forms of ignorance: to help us face the facts of existence with clear eyes.

So. Which of our ideas, right now, are ripe for overturning? You could say: the way we treat animals, the planet, even our children. The way we pretend technology may dissolve all our problems. The way we pretend that we are completely rational when actually we are, often, in the grip of unacknowledged emotions.

So I love Vince’s book. It’s deeply attentive to our spiritual side, to our intellectual side—but biologically literate at the same time. I should also say that the author has written more recently about climate change and global migration trends, both incredibly important particularities in the 21st century. Any ethics worth its salt has to be deeply interested in the facts and politics of the present moment. The duty is to keep on trying to understand the world and ourselves – and, if necessary, to keep on changing our minds.

I quite agree. In Wise Animals you quote, as an epigraph, Picasso on computers: “they are useless,” he said. “They can only give you answers.” Do you think we have finally started to ask the right questions?

I love that line. You see it quoted a lot; the Paris Review interview it’s taken from is a great interview. And he wasn’t wrong. One of the most important facts about even the cutting edge of AI—transformer technologies, convolutional deep learning, all that jazz—is that its vast understanding simply compresses and queries huge amounts of our own knowledge. So, yes, we have wonderful machines that can endlessly give us answers. But only we can define the questions worth asking.

This is one reason I keep coming back to ancient myths in Wise Animals: those stories that have immemorially helped us to structure and explore our longings, fallibilities, identifications. Myths often remind us of human potential and hubris in the same breath. And one thing they say again and again is that if you have a tool—a gadget, a ring, a magical sword—that gives you the power to level mountains, to know answers, well, be careful what you wish for. Our species has this Promethean spark, this godlike power. Using it wisely is our defining challenge. And this means, for me, embracing the virtues of compassion and humility alongside our embodied, fallible humanity – and rejecting consequentialist fantasies of optimisation that may lead us to a terrible place at great speed.

Interview by Cal Flyn, Deputy Editor

April 24, 2024

Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at [email protected]

Tom Chatfield

Tom Chatfield is a British tech philosopher and author, with a special interest in critical thinking and the ethics of technology. He has written a dozen books, published in over thirty languages. His latest, Wise Animals (Picador), explores the co-evolution of humanity and technology.