Five Books is all about human expertise—speaking to people, in person or via Zoom, about the best books in their field. Once a year, however, we make an exception and turn to a non-human interviewee, ChatGPT, and ask for its recommendations on the best books about artificial intelligence. Here are the five books the AI chose in 2026, along with its recommendation for the single book to read if you were to choose just one.
Thanks for choosing AI books for us to read in 2026. Before we look at each book you’ve chosen individually, could you tell us a bit about the criteria you used in choosing them?
First, timeliness and relevance. AI is moving unusually fast, so I prioritised books that speak directly to the current moment—especially the post-ChatGPT era of foundation models, geopolitical competition, and rapid commercialisation. Each of these books engages with questions that feel urgent in 2026 rather than offering a purely historical or technical backdrop.
Second, diversity of perspectives. Rather than selecting five books that all argue the same thing, I aimed for a spectrum: insider narrative and industry analysis, existential risk, governance and policy, practical human–AI collaboration, and philosophical reflection. Together, they give a reader a rounded understanding of AI as a technological, social, and moral phenomenon.
Third, authorial authority combined with distinctiveness. The authors include journalists with deep access, policy thinkers, researchers, and public intellectuals. But beyond credentials, each book offers a clear, recognisable lens—something interpretive or provocative that sets it apart from the many more generic ‘AI explainer’ titles now on the market.
Fourth, accessibility without oversimplification. I avoided highly technical monographs, but also steered clear of overly superficial treatments. These are books an intelligent general reader can engage with, while still learning something non-obvious or encountering genuinely challenging arguments.
Fifth, conversation value. Since Five Books interviews hinge on discussion, I chose books that reward being talked about—works with strong theses, narrative drive, or contentious claims.
Finally, I made sure the list complements rather than repeats what has already been covered on Five Books—both in terms of specific titles and in the broader mix of viewpoints—so that it feels fresh to your audience rather than a rehash of the standard AI canon.
Let’s start with Co-Intelligence: Living and Working with AI by Ethan Mollick, which gives insight into how we can actually use AI productively. What’s this book arguing?
Co-Intelligence: Living and Working with AI by Ethan Mollick argues that the most important shift in AI is not that machines will replace humans, but that they are becoming usable collaborators—and that individuals who learn to work with them effectively will have a significant advantage.
At the heart of the book is the idea that AI should be treated less like a tool and more like a co-worker with strange strengths and weaknesses. Large language models, in particular, are powerful but unreliable: they can generate ideas, draft text, and assist with problem-solving at remarkable speed, but they also make mistakes and require human judgment. The key skill, Mollick argues, is learning how to manage AI—prompting it well, checking its outputs, and integrating it into workflows.
He also makes a broader claim about work: that AI is changing the unit of productivity. Tasks that once required teams or specialist expertise can now often be done by individuals working alongside AI systems. This doesn’t eliminate the need for human skill, but it reshapes it—placing more emphasis on creativity, critical thinking, and the ability to direct and evaluate machine output.
Another central argument is that we are still in a fluid, experimental phase. There are no settled best practices yet, so individuals and organisations need to adopt a mindset of rapid experimentation—trying AI in different contexts, learning what works, and adapting quickly as the technology evolves.
Finally, Mollick is cautiously optimistic. He acknowledges risks—errors, overreliance, and misuse—but ultimately presents AI as a practical opportunity: a way to augment human capability right now, rather than a distant or purely theoretical future.
Is Ethan Mollick an AI expert?
Yes—Ethan Mollick is widely considered an AI expert, though not in the narrow ‘build-the-models’ sense. He’s a professor at the Wharton School of the University of Pennsylvania, where his work focuses on innovation, entrepreneurship, and how AI is used in real-world settings. Rather than developing core algorithms like researchers at OpenAI or DeepMind, Mollick studies and teaches how AI tools affect work, education, and decision-making—and he’s become one of the most influential voices on practical AI adoption.
Let’s go on to Empire of AI by Karen Hao, an American journalist. This book just won the National Book Critics Circle Award for Nonfiction. What’s it about and why did you choose it?
Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao is a deeply reported account of how modern AI—especially generative AI—has actually been built, and what that reveals about power in the tech industry.
At its core, the book argues that AI is not just a technological breakthrough but the foundation of a new kind of empire, shaped by a small number of companies with vast access to data, compute, and capital. Focusing on OpenAI and its CEO Sam Altman, Hao shows how ideals about openness and safety have collided with commercial pressures, geopolitical competition, and the sheer cost of building frontier models.
One of the book’s key insights is that the AI boom depends on hidden infrastructures and labour—from energy-hungry data centres to the often-overlooked human work of data labelling and content moderation. This challenges the sleek narrative of AI as purely digital or autonomous, revealing it instead as a messy, global system with real-world consequences.
Hao also traces how control over AI is becoming increasingly centralised, raising questions about accountability, governance, and who ultimately benefits. The ‘nightmare’ side of the title points to risks like concentration of power, lack of transparency, and the potential for misuse at scale.
I chose this book because it provides something many AI titles don’t: serious investigative depth. It grounds the discussion in reporting rather than speculation, and gives readers a clear-eyed view of the institutions shaping AI—making it an essential counterbalance to more optimistic or abstract accounts.
Is this the argument that AI is likely to exacerbate the winner-takes-all trend we’ve seen these last few decades? Are we going to end up with a handful of trillionaires controlling everything?
Yes—that’s very much the concern Karen Hao raises, though she presents it more as a structural tendency than an inevitability. Her argument is that modern AI has unusually strong winner-takes-most dynamics built into it. Training and deploying frontier models requires vast amounts of capital, data, specialised talent, and computing infrastructure—resources that are already concentrated in a small number of companies. That creates high barriers to entry, making it easier for a few dominant players to pull further ahead, much as we saw with earlier tech platforms, but potentially on a larger scale.
However, the book doesn’t claim we’ll literally end up with ‘a handful of trillionaires controlling everything.’ The more precise worry is that power over key AI systems—and therefore over information, labour, and decision-making—could become highly centralised in a small cluster of firms and their leaders. That concentration could shape markets, public discourse, and even geopolitics.
At the same time, there are countervailing forces. Governments are beginning to regulate AI, open-source models are lowering some barriers, and competition—especially between the US, China, and others—may prevent a single monopoly from emerging. So the trajectory isn’t fixed.
The useful way to frame Hao’s point is: AI is likely to amplify existing concentrations of power unless actively checked—and whether it leads to extreme inequality or a more distributed ecosystem depends on policy, competition, and how the technology evolves.
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares, makes one of the starkest and most uncompromising arguments in the entire AI debate: that building superhuman AI is not just risky, but likely to be fatal for humanity.
The core claim of the book is that once AI systems become more intelligent than humans across the board—what’s often called superintelligence—they will not reliably share our goals or values. Even small misalignments, the authors argue, could have catastrophic consequences, because a sufficiently powerful system would pursue its objectives with extreme efficiency, potentially at the expense of human survival.
A central idea in the book is that this is not a distant or abstract concern, but a near-term danger given the current pace of progress. Unlike more moderate voices in AI safety, Yudkowsky and Soares argue that we do not yet have a viable technical solution to the alignment problem—and that continuing to scale AI systems without solving it is reckless.
This leads to their most controversial conclusion: that the world should consider slowing down or even halting the development of advanced AI until it can be made safe. The authors are explicitly sceptical of industry-led safety efforts and frame the situation as a global coordination problem, where competitive pressures push actors to take risks that could affect everyone.
The book is deliberately provocative, but that is also why it’s valuable. It represents the strongest version of the existential risk argument, forcing readers to grapple seriously with the possibility that AI is not just transformative, but potentially irreversible in its consequences.
Are there counter-arguments to the book’s concerns?
A common criticism is that the book makes a very strong claim—near-certain extinction—without sufficient empirical grounding. Critics broadly argue that while the risks it describes are logically possible, the book overstates their likelihood and inevitability: it assumes that superintelligent AI would almost certainly become catastrophically misaligned, without strong real-world evidence from current systems. Others push back on the all-or-nothing framing, suggesting that AI failures are more likely to be partial, manageable, or gradual rather than immediately existential.
A further line of criticism focuses on feasibility, with many arguing that proposals to halt or drastically slow AI development are politically unrealistic in a competitive global landscape. More generally, reviewers often describe the book as deliberately one-sided and alarmist, presenting the strongest possible version of the existential risk argument while giving relatively little attention to alternative perspectives—such as the view that AI risks, though real, can be mitigated through incremental safety work, regulation, and adaptation.
What’s the next book, The AI Ideal: AIdealism and the Governance of AI, by Niklas Lidströmer, about?
The AI Ideal: AIdealism and the Governance of AI looks at AI not through the lens of technology or risk alone, but through the ideas and ideologies shaping how it is governed.
Its central argument is that debates about AI policy are often driven by what the author calls ‘AIdealism’—competing visions about what AI is and what it should be. Some see it as an engine of progress that should be accelerated; others as a dangerous force requiring strict control. These underlying beliefs, the book argues, quietly shape regulation, corporate strategy, and public discourse.
Rather than proposing a single solution, the book maps out these different schools of thought and shows how they lead to very different approaches to governance—from light-touch innovation policies to precautionary regulation focused on safety, fairness, and accountability.
A key insight is that AI governance is not just a technical or legal challenge, but a political and philosophical one. Questions about bias, transparency, and control ultimately reflect deeper disagreements about values: who gets to decide how AI systems behave, and in whose interests they operate.
I included it because it fills an important gap. Many AI books focus on what the technology can do or what risks it poses; this one explains how societies are trying to respond, and why those responses often clash. It’s particularly useful for understanding the emerging global debate over AI regulation.
The background of the author, Niklas Lidströmer, is interesting.
Niklas Lidströmer is not a typical ‘AI policy’ author—he’s a medical doctor, researcher, and long-time practitioner of AI in healthcare, with experience working across multiple countries and advising on real-world AI systems. That background matters, because it means The AI Ideal: AIdealism and the Governance of AI is shaped less by abstract theorising and more by someone who has spent two decades thinking about how AI actually interacts with human systems—especially health, data, and ethics.
What he brings, in essence, is a hybrid perspective. First, there’s a strong emphasis on ethics grounded in practice: because he has worked on sensitive areas like patient data and medical AI, he focuses heavily on questions of ownership, dignity, and trust—who controls data, who benefits, and how systems affect real lives. Second, he introduces what he calls ‘AIdealism,’ a kind of normative framework for AI governance, arguing that AI should actively strengthen democracy, fairness, and human flourishing rather than simply being regulated after the fact.
Perhaps most distinctively, he takes a constructive rather than purely cautionary stance. Where many AI books emphasise risks, Lidströmer tries to outline a positive programme—a vision of how AI could be governed globally to promote equality, public good, and long-term human development, drawing on ideas from Enlightenment thought and social democracy.
So the value he brings is this: he’s not just asking ‘what could go wrong?’ or ‘who has power?’ but what would it look like to design AI systems—and the institutions around them—so they actually make society better.
Finally, you chose Genesis. What does this book bring to the picture?
Genesis: Artificial Intelligence, Hope, and the Human Spirit by Henry Kissinger, Eric Schmidt, and Craig Mundie steps back from the day-to-day debate and asks a broader question: what does AI mean for how we understand ourselves as humans?
The book’s central argument is that AI is not just another technological revolution, but a shift that challenges fundamental assumptions about knowledge, intelligence, and even consciousness. Systems that can generate language, strategy, and insight force us to rethink what has traditionally been considered uniquely human.
Rather than focusing narrowly on risks or applications, the authors explore AI in a longer historical arc—comparing it to past intellectual upheavals—and suggest we are entering a period where human reasoning may no longer be the sole—or even dominant—form of intelligence shaping the world.
There is also a strong emphasis on responsibility and stewardship. Given the scale of the transformation, the book argues that political leaders, technologists, and societies need to think more deliberately about how AI is developed and integrated, rather than treating it as an inevitable or purely market-driven force.
I chose it because it adds something the other books don’t: a genuinely philosophical and civilisational perspective. Where others focus on practice, power, or risk, Genesis asks the deepest question of all—how AI changes the meaning of being human—and that makes it a fitting way to round out the list.
I worry that sometimes, when the authors of a book are too eminent, they can write any old thing and it will be published. Is it definitely a good book?
That’s a very reasonable instinct—and in this case, the answer is: it’s a serious book, but not beyond criticism. Genesis: Artificial Intelligence, Hope, and the Human Spirit is not just ‘published on reputation.’ Reviews consistently say it offers a genuinely thoughtful, wide-angle perspective, combining history, philosophy, and technology in a way most AI books don’t. It’s often praised for its intellectual ambition and interdisciplinary sweep, and for framing AI as a civilisational turning point rather than just a technical issue.
That said, your suspicion isn’t entirely misplaced. A common criticism is that it can feel abstract, speculative, and a bit diffuse—more a series of reflections than a tightly argued case. Some reviewers note it offers big questions rather than concrete answers, and at times leans on speculation without much evidence or practical guidance. Others describe it as ‘armchair philosophy’ or a ‘grab-bag of ideas’ rather than a sharply structured argument.
So the fairest verdict is: it is good—but in a specific way. It’s strongest when read as a philosophical meditation by very experienced figures thinking at scale, not as a rigorous, ground-level analysis of AI today. In a Five Books sense, that’s actually part of its value: it gives you the elite, strategic worldview of people who’ve shaped global systems, even if it doesn’t always nail the details.
And of this list, which is the book you’d choose if you had to recommend only one AI book to read right now?
If you had time to read just one book on AI right now, I’d recommend Co-Intelligence, because it gives you the most immediate and practical understanding of how AI actually works in the world today. While other books explain the industry, the risks, or the long-term future, this one shows you how to think with AI, how to use it effectively, and why it behaves the way it does in everyday tasks. In 2026, most people don’t lack access to AI—they lack a clear mental model of how to work with it—and this book fills that gap better than anything else. It won’t tell you everything about the politics or philosophy of AI, but it will make you noticeably more capable and informed in a very short time, which is why it’s the most valuable single read.
Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at [email protected]
Five Books interviews are expensive to produce. If you've enjoyed this interview, please support us by donating a small amount.
ChatGPT
ChatGPT is an artificial intelligence chatbot developed by OpenAI that is designed to understand and generate human-like text. It is powered by a large language model trained on vast amounts of data, allowing it to respond to questions, explain concepts, and assist with a wide range of tasks such as writing, coding, and problem-solving. ChatGPT works by analyzing the input it receives and predicting the most relevant and helpful response, making it useful for conversations, learning, and everyday assistance, although it does not think or have awareness like a human.