Our topic today is tech utopias and dystopias. Perhaps it would be useful to begin by defining what these terms and concepts mean to you.
The word ‘utopia’ comes from the ancient Greek words ‘ou’ (not) and ‘topos’ (place), which means nowhere. It’s also a pun on ‘eutopia,’ derived from the ancient Greek ‘eu’ (good), which means ‘the good place’. So ‘utopia’ sounds like ‘the good place’ but means ‘nowhere’. Dystopia, its counterpart, means ‘the bad place.’
For the ancient Greeks, the bad place was real, and the good place didn’t exist. It’s worth remembering that utopia is an impossibility, and yet we still feel compelled to think about it a lot.
There’s an ancient Greek saying that haunts me: ‘The road to hell is wide and begins at your feet.’ It’s very easy to get things wrong: there are many ways to arrive at dystopia, and you can even create one without much effort. In contrast, you can spend your whole life trying to create utopia and working towards it, but you will never get there. The pun is that the good place sounds as if it could be real; it can’t be, but the bad place certainly can.
Tech utopia and dystopia are so topical because of the rapid developments in AI, which is a policy area I work in. When we talk about AI, the rhetoric seems to switch between doom and hype—between dystopia and utopia. So it’s useful to think about the intellectual history and traditions behind that rhetoric, and to apply them to AI. We may not be able to know the specific impacts AI will have, but we shouldn’t delude ourselves that, because this particular technology is new to us, there’s nothing we can learn from to prepare for it.
Speaking of your day job, can you tell us a little about your background and how that’s influenced your work in tech policy?
I studied philosophy, specializing in philosophy of art. When I left academia, I started working in science and technology organizations because there were more opportunities than in the arts, and I enjoyed the intellectual stimulation of learning something new. But it meant I came to science and technology with a different North Star.
There’s this phenomenon called scientism, which is the elevation of science almost to the level of a religion, and I think it’s quite common in modern Western society. I didn’t have that impulse to revere science, to put it on a pedestal in any way, so I found it quite valuable to be working in science and technology with this critical perspective on the rhetoric and the politics and sociology of the area.
Your first recommendation for us is Archaeologies of the Future by Fredric Jameson, the American literary critic and philosopher. Its intriguing subtitle is The Desire Called Utopia and Other Science Fictions. Can you introduce us to this work?
I’m cheating a bit with this selection because it’s going to give me all of science fiction and fantasy for these five books, so I’m not going to have to choose my five favorite science fiction novels or fantasy novels. They’re all covered by this book.
In the book, Fredric Jameson looks at the idea of utopia as being a form of political wish fulfillment. He quotes Freud writing on the unconscious that when we dream, our dreams are very interesting to us and boring to everyone else. The role of a writer is to try to make one’s dreams interesting to others and to try to make them universal. For example, if someone had a dream about being wildly popular and partying every day, or a nightmare about having to do their school exams all over again but this time in their underwear, this might say something about that person’s values or vulnerabilities but there’s no reason why anyone else should care. What the writer does is take those underlying, unconscious hopes and fears—in this case hopes and fears about being judged in different ways—and turn them into a story that speaks to others.
Jameson sees utopia and dystopia as a political form of dreaming and nightmare. How do you make your unconscious political wishes or fears interesting to others? You try to tell a story about a whole world that others can care about: the dream you share, the story you tell, is not just about yourself. The story you tell is about a shared world.
These political problems that inspire the search for utopia really have only a certain number of dimensions. It might be the problem of labour, or the problem of time, or the problem of mortality, and so on. With that understanding of what utopia really is, Jameson argues that science fiction and fantasy—the genre of speculative fiction—are the art forms that let us explore and express our unconscious political dreams.
The difference between the two is that science fiction tends to see science as a form of technology whereas fantasy (as a genre) tends to see nature in that way. In science fiction, a technology like androids might be how we explore and test our values around how labour is organised in our society. And in fantasy, a technology like sorcery might be how we explore our feelings about how power is distributed and used in our society.
As a policy expert, how do you feel about the way science fiction and fantasy present policy problems and policy solutions? Do you find it relevant to your work?
Ezra Pound said that the artists are the antennae of the species. Our writers, our creators, our filmmakers, are able to tune into aspects of the unconscious and tell us about it. So we can consider speculative fiction to be a wonderful mirror for revealing a community and their values, in ways that they might not be aware of themselves. As a policy-maker, you can only work within the range of a society’s cultural imagination. If you want to do more, then you have to find a way to make new stories.
Of course writers are making new stories, too — and people often allege that various tech moguls are just trying to recreate their favorite science fiction books.
Fredric Jameson says that even though texts about utopia are often presented as a blueprint—‘here’s a manifesto for what an ideal society should look like’, or, if it’s dystopia, ‘here’s a warning about what a terrible society would look like’—very often they’re not actually blueprints; they’re a reaction to something in our world that’s gone wrong. They’re presented as blueprints, but they are really a kind of daydream, with no serious indication of how they could become real. So we still don’t have a map to get to ‘the good place’. The danger is that a tech mogul mistakes the fantasy for a working blueprint and expects their technology to deliver utopia: it won’t.
Alas. The next book you recommended for us is The Birth of Tragedy from the Spirit of Music, by Friedrich Nietzsche. Can you introduce us to this early work by Nietzsche, and how it relates to technology and dystopia?
This is an amazing book. Nietzsche was very young when he became a professor of Classics, and this is the first book he wrote. I think he intended it quite seriously, but this book was considered so bizarre that it ended his academic career as soon as it had begun. But that set him free to become the Nietzsche that we know today.
Because he lost his academic career straight away, he could then have a lot more fun with his writing and his thinking. For example, by the end of his career he had titled his autobiography Ecce Homo (‘Behold the Man’), which is how Pontius Pilate presented Jesus to the mob for crucifixion; and it had chapter titles such as ‘Why I am so wise’ and ‘Why I write such good books’. Fantastic trolling. But The Birth of Tragedy is a young Nietzsche still earnestly trying to be a sensible academic.
Nietzsche argued that within culture there are two opposing forces—the Dionysian force and the Apollonian force. On one side is Dionysus, the ancient Greek god of chaos and passion, and of what we’d call the darker aspects of human nature, as well as its most ambiguous and unpredictable aspects. On the other side is Apollo, the god of medicine and sunlight and prophecy, who was quite rational. Both sides coexist in a person and in a society.
Ancient Greek culture paid a lot of tribute to Apollo and had a love of order and rationality, but they also had periods when they celebrated chaos and held Dionysian festivals. Nietzsche believed you had to acknowledge both, and you had to let both coexist, and he thought that tragedy was the art form that best ‘held’ and balanced both at the same time. The art form of tragedy teaches us to be comfortable with being uncomfortable, and enduring ambivalence.
For Nietzsche, having this psychological, aesthetic skill or disposition was essential for being able to navigate the contradictory hopes and fears of our own human nature—the same unconscious hopes and fears that Jameson says really underpin the political visions of utopia and dystopia. Our understanding of tragedy is why we can spend our lives trying to create ‘the good place’ while knowing that it doesn’t exist.
Where Nietzsche thought that modern Europe had gone wrong was in pursuing only the Apollonian and believing we could ignore the Dionysian. He says it’s a mistake to think that we can have a truly happy ending, and it’s a mistake to think that our storyline is not, in reality, a bittersweet tragedy.
Nietzsche was concerned that modern Europe was losing a cultural ability to understand tragedy and instead was seduced by a Judeo-Christian wish for a powerful saviour to redeem us. He predicted that modern Europe would put its faith in a kind of future technoscientific utopia, a fantasy of Apollonian rationality and order saving us from ourselves.
In this collective cultural fantasy, science and technology are assigned an almost messianic role—Nietzsche called it a deus ex machina, science and technology coming down from the clouds like an angel dispensing tidy solutions, or science and technology charging in like a knight in shining armour to vanquish all our problems and take us to safety. And of course it’s not true: we already have amazing science and technology, but our fundamental problems of dystopia remain, and instead of thinking critically about that we seem to think that more science and more technology is the solution.
Famine is a compelling example of this. Famines are not caused by there not being enough food to go around; they are caused by inequality, greed, and political decision-making. Looking to science and technology to increase agricultural production or efficiency is not going to solve the fundamental problem, which is that, as humans, we find ways to be greedy or chaotic or cruel—our Dionysian aspects. Science and technology can’t save us from what we carry within us as humans, and placing our hopes on the wrong thing sets us up for disaster.
The book has a brilliant section about how the modern rhetoric of eternal progress will actually generate social unrest, because it fosters the belief that we should all be at these sunlit uplands, that this mass elevation should have happened already. So it’s not just that science and technology can’t save us from ourselves, but also that the rhetoric of linear progress, of marching on endlessly into the light through science and technology, will create unhappiness and conflict when this utopia doesn’t materialise. I find that really interesting.
It was prescient.
It was, but not popular at the time. Probably not popular now, either.
Your next recommendation for us is How Does Government Listen to Scientists?, by Claire Craig. Even from the title this is clearly a different sort of book; why do you find it insightful about science, technology and dystopia?
I used to work at the Royal Society, Britain’s national academy for the sciences, where Claire Craig was then the Chief Science Policy Officer. She’s now Provost of The Queen’s College, Oxford.
She trained as a scientist but was quite open to learning from other disciplines, such as thinking about the cultural narratives that inform how communities understand or use science or technology. So, for example, how a culture’s myths and folklore, or how a community’s science fiction and fantasy, might influence how they respond to a new technology—or even what kinds of technology they might try to develop (to go back to your earlier point about tech moguls trying to recreate their favourite sci-fi works).
The main office wall in the science policy department had posters, which she’d introduced, of Kurt Vonnegut’s diagrams of the shapes of stories. I picked up the Fredric Jameson book from the staff library that she had curated there.
How Does Government Listen to Scientists? is a book that she wrote distilling her years of experience in science policy. It helped me make the mental switch from being an academic, and kind of an idealist about how politics works, to the realities of working in policy.
She explains that by the time a problem lands on the desk of government, it’s because no one else can solve it. If the free market could solve it, it would; if industry could solve it, they would, and they’d be benefiting commercially from it. So anything that lands on the desk of government is fundamentally unsolvable—and possibly expensive too.
When you’re a policymaker, you have to understand that you’re going for the ‘least worst’ outcome, that there’s no outcome that is going to satisfy everyone, and that you’re working in an environment where some form of failure is almost certainly guaranteed. It’s different from being a scientist, where you might have the hope of being able to fulfill a particular vision or create something you are confident is going to be good in almost every aspect.
When you’re in government, your remit is the unsolvable problems rather than just ‘the problems that haven’t been solved yet’. It’s one thing to think: ‘how can we use science and technology to build our perfect society, to build our utopia?’ It’s another thing to think: ‘how can we use science and technology to limit a particular dystopia?’
It’s a shift from idealism to pragmatism, so it’s bracing and powerful stuff. If you’re not comfortable with that mental shift, then stay on the science side and the technology side of doing pure research. But if you can make that shift, then you can work in politics because you understand that you have to live with imperfection all the time, navigating constant ambiguity and failure. And, again, that constant ambiguity and failure while trying to do something good is the bittersweet stuff of tragedy. It’s the road to hell being wide and starting at your feet.
Are there any specific examples that come to mind to illustrate this?
Governments all over the world had to come up with different pandemic response strategies in real time and with a very limited evidence base. Different strategies seemed to be more or less effective at different points in the development of the pandemic. At one point, Sweden seemed to have an effective policy with its light-touch approach, and then it didn’t. At one point, China did with its zero-tolerance approach, and then it didn’t.
No one knew exactly what the tradeoffs were going to look like and how those would change over time, because so many of the impacts felt unprecedented for our era. It’s imperfect decision-making in very challenging circumstances. You’re more likely to be blamed for the harm you failed to prevent than congratulated on the harm you did prevent.
Speaking of harms… the next title you’ve recommended is Chernobyl Prayer, by Svetlana Alexievich. What does this book teach us about our ideas of technology and dystopia?
Alexievich is a Belarusian investigative journalist, essayist, and oral historian. In 2015, she became the first journalist to win the Nobel Prize for Literature. The wider literary aim of all her books is to capture the lost cultures of Soviet and post-Soviet communities.
The Chernobyl power plant was intended to be the world’s largest nuclear power plant, at a time when nuclear energy was seen as having the potential to be a utopian technology for being cheap, efficient, reliable and tremendously powerful—until this catastrophic accident in 1986.
The Chernobyl disaster had a devastating impact on the community that was there in terms of all the lives lost and the environment becoming toxic—and with that a whole way of life, a whole cultural memory and way of being in the world. Despite this, some people chose to stay in Chernobyl. Maybe they didn’t understand the risks of staying. Maybe they thought that completely uprooting from all they knew to go somewhere where they would be shunned would just be a different kind of living death and they preferred to stay in the shadow of the blast.
Alexievich writes about this book as being a missing history. She explains that by the time she finished writing it, in 1997, hundreds of books on the Chernobyl disaster had already been published. But she wasn’t writing the history of Chernobyl the disaster: she was writing the history of Chernobyl the place, Chernobyl the community, and Chernobyl the culture.
She did this by creating a symphony of voices, by interviewing people over years and building up life stories and complex community relationships a thread at a time. It’s this incredibly rich, textured collage that also feels like a series of private, intimate conversations.
Another thing that Alexievich’s literary approach reveals is that when a technological disaster happens, not only does it have an impact on human lives, but human consciousness also changes. She shows the way that the Chernobyl disaster has changed the consciousness of the survivors, as well as the consciousness of everyone around the disaster, including us, the distant observers decades later.
She writes about how when something like this happens, the past becomes an archive. It contains nothing to guide you. There’s a sense in which history has stopped and you have no map for what comes next. I find that a powerful way to think about the impact a tech dystopia can have: what comes next is uncharted, and trying to navigate that with no precedent is going to change our consciousness.
During and after the Chernobyl disaster, you had people making impossible choices with the consequences not being clear at the time they had to decide. There were times when I had to put the book away, to physically hide it in a desk drawer for a while, because it felt so frightening. Quite early on in the book, Alexievich interviewed a woman whose husband had been a fireman at the plant. He was dying of radiation exposure, and she begged to be allowed to sit with him and care for him. When she was trying to feed him, he was coughing, and she realized that he was coughing up his lungs. She was trying to spoon away his lungs from the corners of his mouth. No one should ever have to be in that situation.
The book’s terrific. Did you see the TV adaptation?
I didn’t.
I’m guessing they didn’t include that scene because it’s so disturbing, but it’s the very first story in the book—you can’t hide from it. When she was gently wiping flecks of lung away from his mouth, I just thought, I can’t cope with this. But that’s exactly it—no one’s prepared for that. She wasn’t prepared for it either, or for how her consciousness would change in response. The medical staff warn her off, saying things like ‘This is not your husband anymore, this is not a person anymore.’ And she just repeats ‘I love him,’ and sneaks into the ward so that she can hold him and care for him as he dies, though she knows it might kill her. And all the medical staff who warned her off die from their own radiation exposure.
Alexievich, too, keeps returning to Chernobyl to do these interviews, and tell these stories, though it might kill her. It would have been easy to write about Chernobyl in a way that made us feel awe about the powerful technology, but instead Alexievich makes us feel awe about human nature, and our capacity to create darkness and light at the same time. I think Nietzsche would have approved.
Your last recommendation for us is Citizen: An American Lyric, by Claudia Rankine. How does it speak to the theme of technological utopias and dystopias?
The meaning of the word ‘technology’ has changed over time. It comes from two ancient Greek words: ‘tekhnē’, meaning craft or skill or art, and ‘logos’, meaning logical reasoning. For the ancient Greeks, ‘tekhnē’ included medicine, sculpting, music, and all sorts of things. So ‘technology’ is any kind of applied know-how. In the speculative fiction genres, technology ranges across science and nature. Technology can also be social tools, such as race.
A really good example of this comes from the media theorist Wendy Chun, who is one of several intellectuals and makers who talk about race as a form of technology. Chun says, ‘Don’t think about what technology is, think about what it does.’ If a tool or a skillful activity consistently has a certain effect, then it’s a technology, so to speak.
Race is an artificial category—there’s nothing deterministic about it in the natural world. It’s an artificial category that’s being used or applied to have a particular social effect, or to legitimize or create social structures around certain geographies or communities. So race is a sorting technology, which exists to organize people in ways that make certain kinds of exploitation or labor easier for the people using or controlling the technology.
Interesting. And this brings us back to Rankine’s Citizen.
The book has quite an internationalist outlook, and Rankine plays with well-known instances of contemporary racism, things that we’re all familiar with, like Serena Williams calmly playing outstanding tennis under hostile scrutiny. She compares the LA riots that followed the police beating of Rodney King with the London demonstrations protesting the police shooting of Mark Duggan. She draws on interviews with Zinedine Zidane about what it’s like being perceived to be from the Middle East while playing football for France in the World Cup.
Then she mixes up these everyday elements with quotes from critical theorists. One of my favourites is: “The state of emergency is also the state of emergence.” That’s from Homi Bhabha.
The narrator’s voice is so curious and empathetic about the people who are around them, and with whom they’re interacting, that you feel what it’s like to be a Black person in a white space, and you also feel what it’s like to be a white citizen interacting with a Black person. It’s painful precisely because you inhabit both perspectives at the same time.
Depending on which perspective you take, you could either take the view that this is a utopia, where the technology is doing its function of segregating and organizing and creating a hierarchy that favours you, or the view that this is a dystopia, where the technology unjustly places you under constant threat. It is ambiguous whether what you’re exploring is a tech utopia or a tech dystopia—but this is what it’s like when the tech is working, and you see both sides of it.
That’s such an impressive literary feat.
It’s bruising. And it’s particularly effective because it’s a prose poem—a poem sometimes written in prose, sometimes in verse, sometimes blank verse, and interspersed with images and collages. It’s pushing the boundaries of language so that we can think and feel in new ways. Like Svetlana Alexievich, Rankine is finding or making a new way of exploring and articulating this uncharted territory—this ‘state of emergence’. It makes me think back to Ezra Pound again, and how artists can create the new forms of expression that we need around new kinds of tech utopia and dystopia.
Finally, you mentioned at the beginning of the conversation that recent developments in AI have gotten many more of us thinking about technological utopias and dystopias. How do the ideas you’ve told us about today apply to AI?
In this conversation, we’ve discussed different forms of technology, and different ideas and experiences of utopia and dystopia. We’ve discussed race as a technology, Chernobyl, speculative fiction and ancient Greek tragedy. We haven’t actually discussed AI, though AI is why so many of us are interested in tech utopias and dystopias right now.
But by looking at these other approaches to tech utopia and dystopia, we can see that the rhetoric of AI creating a kind of utopia is misguided, and risks creating a dystopia instead. We can see that we need to develop the social and emotional skills to understand and illuminate the unconscious desires that guide our political fantasies, and to be able to navigate ambivalence and contradiction about unsolvable political problems. We can see that we need to be conscious of the other technologies that AI will be interacting with—whether that is nuclear power, or race. We can see that we should expect AI to reveal human consciousness in new ways, and maybe even challenge it in new ways. And we can see that we are going to need artists to help map out and establish the new forms of expression that will be needed.