Before we talk about your recommended books specifically, I wonder if you might walk us through why it's so crucial to think about the ethics of technology.
For me it begins with the observation that there are no neutral tools. The tech historian Melvin Kranzberg first said this in the 1980s, but it bears repeating. Similarly, to paraphrase Marshall McLuhan, we shape our tools, then our tools shape us. The very idea that the man-made technological world is a neutral environment seems to me dangerous and wrong.
If we're interested in ethics, and especially an ethics that is literate in the best knowledge we have about the universe and ourselves, we have to take a close interest in the human-made world. Our ability to be good and do good is bound up with the systems through which we're connected and interconnected. "Technology," today, is usually taken to mean the shiny, new, increasingly autonomous stuff. But in fact it encompasses everything we have made together as a species: from swords and shields and cooking vessels to written words, clocks and compasses. And it only becomes more important to talk about the propensities and purposes we're embedding in these creations when they are autonomous or semi-autonomous.
So I think we need to start by acknowledging that our ability to thrive, to live good and meaningful lives, to find meaningful connection, to find purpose, is co-dependent with the tools and technologies that connect us. And it follows that it's extraordinarily important to have a vocabulary that allows us to talk in a rich way about the values embodied in technology. Otherwise we end up in a flat, hopeless landscape that features nothing but "consumer choice" and "features" and "bugs" – plus a bunch of technocrats who may solve our problems, or not. This way of thinking leads to a painfully narrow view of humanity, where we are just antiquated devices, ripe for displacement by super-smart machines.
Absolutely. As you argue in your book, Wise Animals, "we need the right kind of worries. And this means turning our gaze away from gadgets towards the values and assumptions baked into them." Do you think this more questioning approach is baked into the tech industry as it exists today?
Well, I think a lot of people are increasingly worried about this, and that there's a real hunger for bringing into these realms questions of what it means to live well or to thrive.
But the broader context is one in which plenty of people still shy away from questioning. There's a lot of determinism around. By determinism, I mean the idea that in the long run we are powerless, that technology has a certain momentum that will sweep across the world. You can object and be a Luddite – but this is assumed to be futile. In the long term, the course of technological history is simply something that happens to you.
Now, I want to flip that around, and say, actually, if you look at people's lives, most of the time it's not only about money and power and features and progress. It's the other, more mysterious and ancient stuff that matters more. And in this domain, there is a great deal of human agency and questioning to be found.
Once people reach a state of material sufficiency – which is absolutely essential – the most important considerations start to become doing something that provides a sense of purpose, creating meaningful connections with other people and nature, bequeathing to their descendants a world worth living in. So I think any realistic vision of development has to allow for the basic fact that, when people have enough, what they aspire towards is far more complicated than having more or better stuff. It's animated by a deep, collective interest in realising their potential; questioning and challenging aspects of the present; building a better future.
Let's look at the first book on tech ethics you've chosen to recommend. This is Luciano Floridi's The 4th Revolution, in which he discusses the intellectual and technological revolutions that have transformed our understanding of the world. Our philosophy editor Nigel Warburton spoke to Floridi a few years ago about the best books on the "philosophy of information."
I love Luciano Floridi – who is a philosopher of technology – because I think he has a breadth of vision and a genuinely systematic approach to the ethics of technology, but also because he is deeply literate in history and deeply interested in human nature. He's not a consequentialist, in the sense of being interested in maximising some kind of uber-beneficial long-term outcome for humanity. A lot of tech philosophy naturally leans towards consequentialism in terms of mega payoffs and outputs. This can be a great tool for engaging with the outputs of particular systems, but it's not a systematic philosophy of human nature or thriving.
In this particular book, Floridi starts by referencing and updating Freud's account of three historical revolutions in human consciousness. With Copernicus and the birth of heliocentric models of the universe, we gradually learned as a species that we weren't the centre of the universe. It's not all about us, in a cosmological sense. Then in the Darwinian revolution of the 19th century, we found evidence that suggested that we are not the unique pinnacle of creation – that there wasn't this moment where humans were "made," the best and most intelligent at the top of a pyramid. In fact, we're connected with and emergent from the rest of nature; and nature turns out to be far vaster and stranger than we previously imagined. The timespan within which we exist is immense, while – incidentally – you no longer need a deity to explain our existence.
Next, Freud argued that his own psychoanalytic revolution was another de-centring of human consciousness, because rather than the sublimely lucid self-knowledge of Descartes – I think therefore I am, cogito ergo sum – you can't be at all sure of what's going on when you introspect. You're grasping at straws.
Floridi adds to this account what he calls a "fourth revolution": a similar de-centring of human consciousness where suddenly, through artificial means, we're creating entities that are capable of incredible feats of information processing. And by doing so we've been forced to re-conceive ourselves as informational organisms – and take on board the fact that even our intellectual capabilities may not be beyond replication.
The subtle point he makes – which I think is a Kantian one – is to recognise that human dignity and thriving become more rather than less important in this informational context. For Floridi, the informational webs we weave across the world are themselves sites of ethical activity.
Building on this, he discusses what it means for the technologies we use to be "pro-ethical" by design, in the sense that they enhance or uplift our capacity for accountable, autonomous actions. In order to do so, they need – for example – to give us correct, actionable information. They need to help us arrive at decisions that are appropriate and genuinely linked to our needs and concerns, rather than manipulating or disempowering us.
You can contrast this to what have elsewhere been described as "dark patterns," where you have systems that are opaque and exploitative: where the interface is more like a casino, and it doesn't want you to make a good decision, or really to have any choice at all. Some forms of social media might be one example of that – where the incentives are to do with emotional arousal, getting people to act fast and without consideration, with little ability to apply filters of quality to what they are doing.
I didn't list it among my recommendations, but I love Neal Stephenson's novel Seveneves, about a hypothetical future in which social media so deranged people's collective judgement that it nearly led to the human species being wiped out. In his story, social media is up there with gene editing or bioweapons as a forbidden technology, because the danger is so great when its seductions meet human fallibilities and vulnerabilities.
I do like speculative fiction pitched as a kind of modern-day parable. I'm intrigued by what you said about certain technologies being powered by, or powering, a dark side of human psychology. Do you think the intentions of the creators of these technologies matter? Or is it all about the effects?
I think intentions are important. But it's very dangerous to ascribe too much foresight and brilliance to the people creating tools. I've spent quite a lot of time in the headquarters of tech companies dealing with very smart people, and it's important to remember that even very smart people often have quite narrow knowledge and incentives. So, yes, intentions are important, but what you really need to be interested in is the blind spots, the lack of foresight, the capacity of people to pursue profit at the expense of other issues.
The big thing for me is what I call the "virtuous processes" of technology's development and deployment. What I mean by that is forms of regulation and governance where you don't get to ignore certain consequences, to move fast and break things and not worry about the result: where you are obligated to weigh up the wider impacts of a technology.
You need to assume there are a lot of blind spots, lots of stuff that will emerge over time that you can't anticipate. It's one of the great lessons of systems thinking. I quote the French philosopher Paul Virilio in the book: "When you invent the ship, you invent the shipwreck." This refers to those accidental revelations that will always come with technology. They're inevitable; but this makes it all the more important to have feedback mechanisms whereby unintended or undesirable results – damage, injustice, unfairness – can be filtered back into the system, and checks and balances and mitigations created.
There's been a lot of good stuff written recently about the Luddites. Brian Merchant has a great recent book recasting the Luddites as a sophisticated labour movement: not just people who didn't like technology, who wasted everyone's time by busting up factories and resisting the inevitable. He's partly saying that, actually, all forms of automation potentially bring a great deal of injustice and disempowerment and exploitation. And much of the story of industrialisation has entailed, very gradually, societies working out collectively how industrial processes can be compatible with respect for human bodies and minds.
Of course, there are urgent conversations we still need to have about how models of production and modern lifestyles can be made compatible with sustainability. It's not about moving back to some mythical Eden before technology; I don't think that's possible or meaningful. But I do think that keeping faith with our ability to change and adapt rapidly is important. We have the tools and the compassion, the awareness and the empathy, to come up with solutions for a way to live together. And, ironically, best serving these values tends to mean focusing on the local and the practical and the incremental, not the grand and the hand-waving.
Can we talk about your second book recommendation? This is former Five Books interviewee Alison Gopnik's The Philosophical Baby. Could you talk us through why you recommend this book in the context of ethics and technology?
I love this book so much. Alison's work has been so important to me. She brilliantly argues in this book and in her research that if you want to understand our species, in a psychologically and scientifically literate way, you have to be deeply interested in children and childhood. Childhood is key to our uniqueness as a species. Human childhood is a total biological outlier among mammals. We have this incredibly long period of enormous dependency, neuroplasticity and flexibility.
Fairly obviously, to be a language-using and technology-crafting species, you need an incredible capacity for learning: to be able primarily to acquire skills through nurture rather than nature, through teaching and communication rather than through instinct. But how did this come about?
Gopnik makes the case that over millions of years, our lineage doubled and then tripled down on this incredibly long, vulnerable, flexible childhood, because it conferred evolutionary advantages in terms of our ability to create tools, to build protections through technology. But she also points out that all this was necessarily bound up with an incredible capacity for mutual care, compassion, and for nurture. These traits are the absolute fundamentals for human survival and thriving. She calls children the R&D department of the human species. It's a wonderful image; and it captures the entwined capacities for care and change that underpin technology and culture alike.
The human child is born incredibly vulnerable. It's very dangerous for the human mother to give birth. And then, once the child is born, it's utterly dependent for months and months and months – like no other ape. It takes months even to learn to sit up. Reproductive maturity is a decade, a decade and a half away. The prefrontal cortex continues to grow and develop into our twenties. But all this brings great gifts: this capacity for change and learning and teaching.
So much of the time, discussions of technology are obsessed with notionally independent adults – perhaps men with disposable income. But that's an impoverished description of our species, both how we got here and how we build a future together. Pretty obviously, the future – in the most literal of ways – is children. They are born with this incredible plasticity, a wonderful lack of instinctual lock-in. So, for me, any ethics of technology has to be deeply, deeply interested in childhood and children: in how we learn, how we teach, how we change, and how the knowledge of one generation is passed on and adapted.
Gopnik has also written about AI as a social technology, as a kind of reification of the knowledge and understanding latent in language and culture – something more like the library to end all libraries than a mind. I think this is a very useful framing. AI isn't, or shouldn't be, our rival. It's a resource: a cultural and technological compendium of our achievements. But it cannot do any of the most important stuff, in terms of nurture, care, teaching, hoping. If you are thinking about how to create systems that have values compatible with human and planetary thriving, children and child-raising are good models. If you want to look at the conditions a human needs to thrive, look at children.
It's crushingly obvious that children need love. Of course, they need to be safe and warm and fed more than anything; but it's love that is the driving and connecting force. The love and support of family and friends and kin is more important than piles of stuff and gadgets. The idea that children might be taught and raised by super-intelligent AI is just delusional. It's stupid.
That makes me think about Harry Harlow's baby monkeys clinging miserably to their wire "mothers."
There's a lovely line in the philosopher Alasdair MacIntyre's lecture series Dependent Rational Animals, where he says that his attempts in his earlier work to come up with an ethics independent of biology were just wrong and impossible. To paraphrase: you need an ethical self-conception rooted in the fact that you're part of humanity, part of a species, not just a notionally independent adult.
We have all been, and we all will be again, utterly reliant upon the care of others. Every single one of us was born into total dependency. We will all sicken, age and die. There's nothing in our lives that is truly autonomous, when you think about it. Even the most libertarian forms of individualism are wholly predicated upon massive shared supplies of goods and services, money and trade, manufacture and technology.
These people in their bunkers with their generators, what are they doing? It's a weird denial of our profound inter-dependency. And technology is the most inter-dependent thing of all, even more so than us. Technology is incredibly needy. I think the desire to be wholly independent of others embodies a confusion between the enormously important freedom to pursue your goals, to live your life, and the fact that every opportunity and tool you have at your disposal is ultimately the product of countless others' lives and labour.
Gopnik makes this point very powerfully. She says: I am the child of countless minds. Everything around me – the light, the chair, the clothes I wear – is the product of century after century of human ingenuity and inheritance.
You touched on artificial intelligence a moment ago. Might that lead us to the third book you want to recommend, which is Stuart Russell's Human Compatible: Artificial Intelligence and the Problem of Control?
Yes, this is a book by a computer scientist who has thought deeply about the history and future trajectory of artificial intelligence, and who is very literate in the cutting-edge discourse around it. He is no Luddite, in the pejorative sense, and in fact embraces many consequentialist ideas because they are extremely fit for some purposes: if you are talking about a system, you do need to take a close pragmatic interest in its inputs and outputs.
One of the reasons I find this book important is that it's very clearly written. It's not a long or a difficult book. Russell zeroes in on the significance of doubt – a fundamental point that I think a lot of people in the field of computer science miss – which is simply that any kind of real-world problem (what should we do? what might we do? what would be the best outcome?) is computationally intractable. In other words, there will always be some uncertainty.
There is such a thing as a perfect game of chess. It's very difficult to compute, but there is such a thing. A rules-based, deterministic game can be optimised. But human life can't be optimised. It can be improved, but it can't be optimised. "What should I have for lunch?" is not a question that has an optimal answer. Given all the calculation time in the universe, you still couldn't calculate the perfect, super-optimal lunch.
Of course, we all know the world is probabilistic. Yet, when it comes to computation, a lot of people seem to forget this and start looking for "the answer" when it comes to complex social challenges, the future of AI, and so on. Russell emphasises the significance of the fact that there will always be a plurality of different priorities, and you'll never be able to come up with "the" priority.
Then, crucially, he talks in a practical way about the importance of building doubt into machines. AIs, he argues, should be doubtful to the extent that we are doubtful about what goal is worth pursuing. And they should also be willing to be switched off. The people who construct them should prioritise the existence of an off switch, and a process of constructive doubt, over trying to optimise technology towards a supreme, transcendent final goal for humanity.
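You can see the logic in miniature. The following toy sketch is not from Russell's book – the payoff function, the Gaussian spread of doubt, and all the names in it are illustrative assumptions – but it captures the "off-switch" reasoning he describes: a machine that is genuinely uncertain how good its action is comes out ahead, even by its own lights, when a human can veto it.

```python
# A toy model of the off-switch argument: a machine with doubt
# about its own objective prefers to leave the off switch alone.
# Everything here is an illustrative assumption, not Russell's code.
import random

def payoff(defer_to_human: bool, true_value: float) -> float:
    """Machine's payoff for one action of uncertain true value."""
    if not defer_to_human:
        return true_value       # act immediately, for better or worse
    # Defer: the human permits the action only when it actually helps,
    # and otherwise switches the machine off (payoff 0, not a loss).
    return true_value if true_value > 0 else 0.0

def average_payoffs(n: int = 100_000) -> tuple[float, float]:
    """Compare acting blindly with deferring, under uncertainty."""
    act = defer = 0.0
    for _ in range(n):
        v = random.gauss(0.0, 1.0)  # the machine's doubt about its goal
        act += payoff(False, v)
        defer += payoff(True, v)
    return act / n, defer / n

if __name__ == "__main__":
    blind, deferring = average_payoffs()
    print(f"act without asking:   {blind:+.3f}")      # ~ 0.0
    print(f"allow the off switch: {deferring:+.3f}")  # ~ +0.4
```

Acting without asking averages out to roughly zero, because the good and bad cases cancel; deferring keeps the upside while the human veto screens out the harm. That, in miniature, is why doubt plus an off switch beats confident optimisation.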
Human Compatible is a humane book about the challenges and opportunities of AI. It's also a very non-hype book. By which I mean, it talks about the trajectory of artificial intelligence, and the enormous potential of the pattern recognition on a vast scale that it offers, without indulging fantasies. AI has great potential to do good and to help us solve problems, but the point is not that it or anybody will ever know best, but rather that we should ensure the values and tendencies encoded into powerful systems are compatible with human thriving, and the thriving of life on this planet. And that compatibility must in turn entail doubt, plurality, and an open interrogation of aims and purposes, not optimisation.
The thing I worry about, perhaps more than Russell, is that some technologists seem determined to reason backwards from a hypothetical, imagined future. There's the so-called "singularity," the point beyond which computers become self-improving, and potentially solve all human problems. So the only thing that matters is getting to the singularity and having good rather than bad super machines. Then, at the point of singularity or beyond it, death will be solved, the losses of our ecosystem will be redressed, there can be an infinite number of people living infinitely magnificent lives, and so on. Or, if we do things wrong, everyone will be in computer hell. The problem with all this is that you're reasoning backwards from a fixed conclusion you've arrived at through hand-waving and unacknowledged emotion. You're engaging in metaphysical, or even eschatological, speculation, while insisting that it's all perfectly logical and evidence-based.
It's really important, I think, to resist focusing on imaginary problems or treating hypotheticals with high degrees of certainty. That is precisely the opposite of what we need to do, which is to focus on real problems and opportunities while preserving uncertainty and constructive doubt: to put actual ideas and theories to meaningful, scientific tests.
I like that you encoded an element of doubt into your own summing up. And perhaps this idea of querying assumptions, and anticipating unintended consequences, draws us on to your next tech ethics book recommendation: Carissa Véliz's Privacy is Power: Why and How You Should Take Back Control of Your Data, which looks at the dangers of surveillance through technology.
This is both a philosophical and a polemical book: one that links big ideas to immediate actions in the real world. I see it as of a piece with important writing by philosophers like Evan Selinger and Evan Greer in the US, who have sounded the alarm around the normalisation of various kinds of surveillance and data collection.
A lot of contemporary, AI-enabled technology is data-hungry, and it comes with a broad promise: give it your data, and it can do more and more – keep you safe, make employees more efficient and more profitable, track students and optimise their learning, stop accidents, catch thieves or dissidents, and so on.
Some of this may be true, or even desirable. But privacy is incredibly important. It's the space within which various kinds of trust, self-authorship, self-control and thriving can take place. And it's also very important to the civic contract: the ability of different people to meaningfully control their lives and have agency within them. A line I like in Véliz's book is that, contrary to the idea that philosophy is about dispassionate consideration, it's appropriate and indeed important to protest and to show anger in the face of incursions upon your liberty. Resistance is not futile, but necessary.
In democracies at least, we are lucky enough to be able to say: no, we do not want ubiquitous surveillance on college campuses; no, we do not want everyone's face being recognised in public places; no, we do not want databases of everyone who goes to a protest. We don't want certain features of people to be tracked at all. Because the kind of power that this gives to the state, or corporations, or other small bodies of people, is dangerous and corrosive of a lot of the values we need in human society. A loss of privacy makes it easy for other people to have power over you and to manipulate you. Therefore, having control over that thing called data – which sounds so neutral – is powerful. Once again, it's not neutral at all. It's information about who you go to bed with, or what your children do, or who your God is.
We're back to the point I began with. Far from technology being a dispassionate expert arena, letting people harvest data about you in an unfettered way takes away your privacy and gives others power over you. And this doesn't have to be the case. Some of us, at least, can push back against it, demand protections. It's not inevitable.
All in all, this is an eloquent and important book, and one I enjoyed a lot. It's a practical call to action, with examples that make the case for acting right now.
Your next book recommendation is Transcendence by Gaia Vince – another previous Five Books interviewee – in which the author argues that we have been transforming our species into a transcendent superorganism she dubs Homo omnis. Tell us more.
Yes. In some ways this book has inspired me most directly in my own writing. It's big, it's beautiful, it's well-written and unashamedly fascinated by the biggest of big pictures in terms of where we've come from as a species. And its themes are interwoven with deep readings of history, spirituality, language, belief. It's almost dizzying: a compendium of fascinating ideas about humanity.
I love the way Vince leaps between the particular – the details of, for example, how archaeological evidence shows us hominins gradually learning to control fire – and the vast sweep of history. Our capacity to combine these perspectives is itself a great human gift. As she notes, we exist at an unprecedented moment in history: we are a planet-altering species, and we are transcending the purely biological. Humanity is utterly remarkable. Yet at the same time, we remain a part of nature. We are part of this planetary system; a strange, self-remaking part. And we can't possibly understand ourselves without being deeply interested in this connectivity and the history of our own emergence.
So, yes, I love the big-ness of this book – its eloquence and its insistence that there is a commonality between archaeology, sociology, biology; the spiritual, the mental, the computational.
Why do you consider it a book about the ethics of technology?
I put it down as a book of ethics, because for me, the great task of ethics is to connect the facts we know to the question of what we should do, and why. So: what does thriving look like for our species in the light of what we know about our biology, our technology, our history? I constantly want to make this connection.
You know, I come back to someone like Kant, who is almost a byword for the generalised, the abstract, or the idealistic. He can seem an impossibly demanding ethicist, a philosopher's philosopher. Yet he was famously awoken from his dogmatic slumber by Hume's empiricism. He did all the things he did because he cared so deeply about facts – about us as beings, about nature, about truth. He had some appallingly unscientific and ignorant ideas, especially around race. But even this emphasises that the great challenge for ethics is to address new forms of knowledge and old forms of ignorance: to help us face the facts of existence with clear eyes.
So. Which of our ideas, right now, are ripe for overturning? You could say: the way we treat animals, the planet, even our children. The way we pretend technology may dissolve all our problems. The way we pretend that we are completely rational when actually we are, often, in the grip of unacknowledged emotions.
So I love Vince's book. It's deeply attentive to our spiritual side and to our intellectual side – but biologically literate at the same time. I should also say that the author has written more recently about climate change and global migration trends, both incredibly important particularities of the 21st century. Any ethics worth its salt has to be deeply interested in the facts and politics of the present moment. The duty is to keep on trying to understand the world and ourselves – and, if necessary, to keep on changing our minds.
I quite agree. In Wise Animals you quote, as an epigraph, Picasso on computers: "they are useless," he said. "They can only give you answers." Do you think we have finally started to ask the right questions?
I love that line. You see it quoted a lot; the Paris Review interview it's taken from is a great one. And he wasn't wrong. One of the most important facts about even the cutting edge of AI – transformer technologies, convolutional deep learning, all that jazz – is that its vast understanding simply compresses and queries huge amounts of our own knowledge. So, yes, we have wonderful machines that can endlessly give us answers. But only we can define the questions worth asking.
This is one reason I keep coming back to ancient myths in Wise Animals: those stories that have immemorially helped us to structure and explore our longings, fallibilities, identifications. Myths often remind us of human potential and hubris in the same breath. And one thing they say again and again is that if you have a tool – a gadget, a ring, a magical sword – that gives you the power to level mountains, to know answers, well, be careful what you wish for. Our species has this Promethean spark, this godlike power. Using it wisely is our defining challenge. And this means, for me, embracing the virtues of compassion and humility alongside our embodied, fallible humanity – and rejecting consequentialist fantasies of optimisation that may lead us to a terrible place at great speed.
Interview by Cal Flyn, Deputy Editor
April 24, 2024