Favorite Books recommended by Daniel Dennett

Daniel Dennett is not only one of the most distinguished philosophers of mind working today, he also writes great books. As he publishes his memoir, I’ve Been Thinking, he talks us through some of the books that most influenced him, including two by evolutionary biologists.

Interview by Nigel Warburton


A lot of people think that philosophers’ memoirs are boring. Philosophers stare at a blank screen or a bit of paper for a lot of their life, chat to some people, give some lectures, publish a few books and papers, and that’s it. Is that your story?

My life hasn’t been like that at all. It’s been a rollicking adventure all over the globe, and I’ve been able to learn from all the people in the world that I wanted to learn from. I learned directly from them, from getting to know them.

I’ve been the beneficiary of an astonishing education, mainly outside of philosophy. I had my philosophical education too, but I’m nowhere near the scholar that a lot of my philosophical colleagues are in philosophy, because I’ve decided to devote my attention to works in other fields as well.

Principally neuroscience and evolutionary theory?

Evolutionary theory, artificial intelligence, computer science, neuroscience, psychology, and linguistics.

From what I’ve read about your life, you’ve encountered some amazing philosophers as peers and as teachers.

When I started off, people thought I was a mathematician, but I wasn’t; I was just well-trained. My introduction to logic was a very advanced course that had me practically drowning. I discovered Willard Van Orman Quine’s From a Logical Point of View when I was a freshman. I found it in the math library and stayed up all night reading it. The next morning I decided to transfer to Harvard so I could work with the great man. This is what I wanted to do. I thought he was wrong. As only a freshman could do, I set out to go to Harvard so I could get close enough to his mind to figure out what he was wrong about and why.

How did that go?

That was a great move. At the time I didn’t know how important it was going to be, but it certainly shaped my life. I arrived at Harvard the year that Word and Object—the first book on my list—came out. Quine taught the book, so I could not have arrived at a better time.

My classmates were also extraordinary. They included Thomas Nagel and David Lewis. There I was, a nineteen-year-old sophomore, thinking, ‘Ooh, this is fast. I’m in fast company here.’ I struggled to master the material, but it changed my life.

You also studied with Gilbert Ryle in Oxford. He seems like a figure from a completely different age now, especially if you look at videos of him—the way he spoke, the style of his examples, and so on. It feels a very long time ago that philosophy was done in that style.

My head is filled with almost entirely happy memories of Ryle. I thought he was a wonderful teacher in a very different way. He never really argued with me. It was like punching a pillow. I didn’t think I was learning anything from him. He wasn’t hammering me with objections the way I expected.

I did my dissertation with him. When, on the eve of submitting it, I compared it to an earlier draft, I could see his hand was all over it. He’d had a tremendous influence on my way of thinking, my way of writing, that I had not appreciated. It was almost subliminal, but he set a wonderful example. He hated what I call ‘phisticuffs.’

Did he mean the martial arts style that has been a feature of much academic philosophy?

That’s right, yes. In his book Philosophical Explanations, Bob Nozick says that some philosophers seem to want an argument such that if you accept the premises and don’t accept the conclusion, you die. Their goal is to write the argument that makes their opponents’ heads explode.

What’s the alternative—painting an attractive picture?

The alternative is using thinking tools of all kinds: thought experiments or intuition pumps, examples, arguments. You need the arguments. You need to keep everything consistent, but arguments play a background policing role that I think a lot of philosophers don’t fully appreciate. They think that through all the examples and imaginative scenarios they’re going to Euclidize philosophy and thought.

The topics that philosophy deals with are not like Euclid’s geometry. There are almost no bright lines, almost no indubitable truths or axioms. You have to learn as a philosopher to expect vagueness at the penumbra on just about every concept. In the end, you have to learn not to be sucked into the phisticuffs involved in counterexample mongering and shoring up the definition.

“I wanted to know how the mind works”

In my collection of eponymous definitions, The Philosophical Lexicon, one of the earliest terms, and one of my favorites, is ‘to chisholm away at a definition’ (after Roderick Chisholm). You start with a definition, somebody proposes a counterexample, so you tweak the definition to rule it out. They come up with another counterexample, and you keep revising. Many philosophers used to do business that way, and almost nothing was learned from it; it deserves the oblivion in which it now rests. This is not a good way to do philosophy.

It’s certainly painful to read. I’m a great admirer of you as a writer, and the way that you craft your books and articles and essays. There is a strong American tradition, it seems to me, of very good writing in philosophy. It’s not such a force within British philosophy, I have to say, and we don’t have so many role models. Ryle crafted his sentences very carefully, with rhythm and balance, and his examples are compelling and memorable, even if a bit quaint now.

I think J. L. Austin was a good writer too, but yes, they are few and far between.

I’m interested who your mentors were there. You’ve talked about philosophical mentors. Who were you mixing with who helped you as a writer?

Both Quine and Ryle. I don’t think either one of them ever wrote a boring sentence. They wrote baffling, sometimes perplexing, initially outrageous sentences.

The other question that fascinates me is how you made that turn to the world of science, computer science, and neuroscience. How did you find a way of connecting philosophy with that? If somebody had to describe you as a philosopher, your use of those ideas is very distinctive.

I wanted to know how the mind works. I wanted to know what consciousness and experience are. I discovered that my fellow graduate students in Oxford thought this was an armchair philosophical question, and it isn’t. They were blithely—one might even say proudly—ignorant about the brain and about psychology. I wondered how they thought they were going to learn about the mind if they didn’t look at the best science. In those days, back in the 60s, the topic of consciousness was forbidden in psychology and neuroscience unless you were an emeritus professor of neurology. Then you got to write a book where you summed up your life’s work and waxed philosophical. Most of that stuff’s dreadful.

I was lucky I got into the field at a time when philosophers in general were entirely ignorant of these areas. Even though I was partly ignorant myself and an autodidact, I knew more than they did. I also had the benefit of talking to scientists who were delighted and surprised to find I was a philosopher who actually wanted to learn something. They were used to philosophers who laid down the law and told them what was wrong with their theories.

Many analytic philosophers seemed to think that philosophical confusions were only the problems of lay people, that scientists didn’t have philosophical confusions. Scientists are as easily seduced and baffled and dumbfounded by philosophical questions as everybody else. I have taken a lot of time to try to help them out of their philosophical quandaries and show them that their thinking is philosophically naïve. That’s been a large part of my effort over the years.

It’s not just as a philosopher—you are who you are, and your personality is part of how you’ve been able to form those connections with people who take you seriously, treat you with respect, and listen to you. It’s not as if you’re putting them down and telling them they’ve made stupid philosophical mistakes; there’s a seriousness about the engagement.

Exactly. Some of my colleagues have been comically arrogant in talking to the scientists they know, and so they come off as fussbudget, know-nothing smarty-pants, and they’re ignored. The contempt in which philosophy is held by many scientists is unsettling. It’s something that I’ve had to deal with my whole professional life. I’ve been lucky to find the scientists who recognized that they had philosophical problems they needed help with. It’s been a great joy to see, in the last few decades, the scientists coming around and saying, ‘Let’s see what these philosophers are doing. They seem to be making some progress here. They seem to be able to help us.’ That’s a very good feeling.

My own take on this is that because philosophy is so often portrayed as a solitary activity, we neglect the importance of social connections in learning it. You can’t possibly get on top of philosophy without forming them. And to venture beyond philosophy, without having trained in those particular sciences and their ways of thinking, you need smart guides, and that requires a social connection, not just a book.

I’d put it even more strongly. Even mathematicians need a community of fellow mathematicians. Take Andrew Wiles’s proof of Fermat’s Last Theorem. Even he couldn’t be sure he had proved it until he got the acquiescence of all the other mathematicians who went over it with a fine-toothed comb and fixed some little mistakes in the original version. Finally, they agreed. Nobody in the world would know whether Andrew Wiles had proved Fermat’s Last Theorem until that consensus emerged from that community.

This is fascinating. I’d love to talk to you longer about this, but we should probably turn to the books you’ve chosen. You’ve already mentioned Quine’s Word and Object and how you first encountered that as a lecture series. Was that before or after he published the book?

His book had just been published. It was the chief textbook, along with some work of Carnap’s, and others. In fact, I had a very delicious experience shortly before Quine’s death. I was going through a box of old papers, and I found my spiral-bound notebook of notes from that 1960 course, together with a copy of the syllabus. I decided I would do a seminar, recreating his course almost forty years later.

I was having lunch with him at the American Academy of Arts and Sciences, and I said, ‘I found my old course notes, and I’m going to give a seminar.’

Quine said, ‘Would you like me to come?’

I said, ‘Oh, yes!’

He didn’t come to every meeting, but he came to about half of them. It was an evening seminar. I’d pick him up at his home on Beacon Hill. It was a good time for him to do this because his wife, Marge, was dying. It got him out of the house, and it gave him the chance to get back into action a little bit. It was a moving experience for me. The students lapped it up, but they were unsympathetic, in some ways, to the fact that when they questioned him on certain points, he said what he’d said back in 1960. I said, ‘Yes, he’s an old man and he’s smart enough to realize that he shouldn’t tamper with the thing he worked on at the height of his powers. He shouldn’t betray his own legacy by venturing novel twists or conceding points. He should probably stick to his guns, and that’s what he’s doing, and it’s fun to watch.’

You’re in the position of having met lots of eminent thinkers. I suspect you know personally every author of the books that you’ve chosen.

That’s true.

I’ve been lucky enough to meet quite a few philosophers through making the podcast Philosophy Bites. I think there’s something different about reading a book written by somebody you’ve encountered in real life from reading a book by a stranger or someone long dead. We see it under a different description. There’s a sense of who they are as a person, whether they’ve got a twinkle in their eye or whether they’re deadly earnest. Are they mocking something or do they write in a rather pedantic tone because that’s how they speak? It’s difficult to imagine for you, presumably, having known Quine, but for a reader who’s not familiar with Quine, what’s his writing like?

I just went through my dog-eared, margin-commented copy that’s over sixty years old, in preparation for this. I find that a lot of his sentences are very arch, and he wanted to show off his linguistic competences and his artistry. There are wonderful turns of phrase and vivid uses of terms.

When I went off to graduate school, I thought I was the village anti-Quinian; then I got to Oxford and found out I was the village Quinian. I accepted more of what he believed than anybody else in Oxford, but I had my disagreements with him. This is what makes him one of my heroes, because you need somebody to bounce off. There’s one passage in that book which threw down the gauntlet to me. That’s where he says that Franz Brentano argued that there’s no way of reducing intentional language (intentional language means talk about beliefs and desires and propositions) to the language of the material world. This was the Brentano thesis of the irreducibility of the mental.


Quine wrote: “One may accept the Brentano thesis either as showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention. My attitude, unlike Brentano’s, is the second.”

Bingo! I went after that. I was saving intentionality, not Brentano’s way—I was just as strong a naturalist as Quine—but Quine and I both agreed there was no reducing intentional language to the language of physics or the mechanistic language of biology. You didn’t have to reduce it; you had to understand it as its own realm and understand the rules under which it operated. It took me a decade to get clear about this. In 1971, I published a paper titled “Intentional Systems,” and that was the basic move in my career.

In your development, Quine’s book Word and Object was absolutely key. Would you recommend it to a reader now, to get going in philosophy, to get ideas moving?

No. Going through it the other day, I found that Quine had a lot of fish to fry with people working on aspects of logic that loomed very large then and that don’t loom as large now—thanks, in part, to the work of all the books on my list.

He had to shock the philosophical world with his amazing doctrine of the indeterminacy of radical translation, which is a key move for Quine. It’s the one that says, ‘If you think there’s an ultimate fact objectively knowable about what something means, you’re wrong. Meaning is always relative to interpretation.’

In principle, you could have two different translations of one language into another, which fit all the facts you can gather scientifically but which differ in the truth values they assign to some sentences. I may be the only person in the world who really accepted, in its strongest form, Quine’s theory of the indeterminacy of radical translation. Donald Davidson, one of his students, accepted a version of it, but it was not as radical as the version that I accepted.

Did accepting that influence your philosophy?

Absolutely. I spent years defending the indeterminacy of radical translation and trying to come up with increasingly better examples of what worked and what didn’t, and why. In my early work, there are several papers that look at that: “Brain Writing and Mind Reading,” “Beyond Belief,” and “The Intentional Stance.”

Looking through Word and Object a couple of days ago, I came across a nice example. He said, consider the sentence “Neutrinos lack mass.” Now, translate that into a jungle language, like Pirahã (a language spoken by an Amazonian tribe). When you think about what’s involved, you begin to realize what the principle of the indeterminacy of radical translation is. In the real world, we can almost always count on there being a practically best translation. But we should not read that as the discovery of the translation; we should realize that it is just the best one, not the uniquely correct one.

I have my little Quinian crossword puzzle. It’s simply a four-by-four square of words. There are two solutions to the crossword puzzle. It’s incredibly difficult to put together a crossword puzzle that has two good solutions. The question is: which is the right one? I deliberately made it so that neither one of them was clearly better than the other.

In normal translation—of bills of lading, newspaper reports, and histories of the Roman Empire—there are too many constraints to come up with two radically different translations that aren’t trivial. It’s important to recognize that. Don’t think of translation as finding the inner essence of meaning. Quine called that the museum myth of meaning, and it is a myth. It still has an incredible hold on many of the people who work in the philosophy of language.

My friend and former colleague, Mark Richard, has a recent book called Meanings as Species, and it’s lovely. He’s finally seen the evolutionary light. Hail to Mark for doing that. Reading his book, I’m amazed at the effort that’s required of him to get his colleagues in the Propositional Attitudes Task Force world to take him seriously. It’s still entrenched.

You mentioned evolution. That’s a good segue through to your next choice, Richard Dawkins’s The Selfish Gene. It was a massive bestseller. I’m not in a position to judge the evolutionary biology and how accurate he was about it, but it is absolutely brilliant as a popular work of science that communicates difficult ideas to a wider public.

That’s the one book on this list that I read not having met the author. I met him later and we’ve become fast friends and had tremendous discussions. I’ve learned a lot from him, and I think he’s learned a lot from me. When it came out, I asked a prominent evolutionary theorist and philosopher of biology about the book, and he dismissed it as a trade book potboiler. Pop science; not worth reading. I believed him, and I postponed reading it. Doug Hofstadter urged me to read it when we were working on The Mind’s I. I read it and I thought, ‘Oh my goodness. Imagine if I hadn’t read this book.’ It’s been that important to me.

Ever since then, I have thought, ‘I’m not going to ding a book without reading it.’ There are many books I’m sure are baloney that I don’t bother reading. But I’m not going to tell people that the books were baloney. I’m going to take my chances.

Was that the germ of Darwin’s Dangerous Idea?

In a way, yes; in a way, no. My dissertation had ‘evolution’ right in the title. My insight was that learning is evolution in the brain. Later, I learned that lots of people have had the same insight. Darwin had a version of it, as did other philosophers and neuroscientists, including J. Z. Young, William Calvin, and John Dewey. The idea that learning is evolution in the brain is a winner. It is becoming clearer and clearer how much of a winner it is. I tied myself to the mast of that idea in my dissertation, and in Content and Consciousness. It’s been at the heart of my work ever since.

Dawkins filled in all the delicious details that I didn’t understand well. I was very naïve. I had never taken a course in evolutionary biology; I’d read some stuff. Dawkins filled in a lot of gaps. He sent me to the literature once I got to know him. That’s been an extremely fruitful part of my life.

I started studying philosophy in 1980, and no one talked about evolution. That was for biologists; it wasn’t anything for philosophers. That’s bizarre, if you think about the fact that a lot of philosophy is about beings that have evolved and are evolving. There are huge implications, if you understand evolutionary theory, for what that means about your picture of humanity. And yet, it wasn’t even a topic. It wasn’t considered something you would bring into philosophy. I think that’s obviously changing.

The principled ignorance of evolution is still an embarrassing bit of ignorance in philosophy. I’ve been thinking about that a lot recently and all the rest of the books on my list deal with this. The philosophers have taken comprehension for granted. They have, since Descartes, taken a first-person point of view. ‘I understand my thoughts: they’re my thoughts. I may mis-express them, but I am the source and the home of understanding. My understanding could not be better if I’m careful.’

“The topics that philosophy deals with are not like Euclid’s geometry”

Descartes’s notion of clear and distinct ideas is the single worst idea to beset philosophy in the last millennium. You don’t know how your thoughts come to you. You are not authoritative about what you mean by what you say. You are not miraculous in your capacity to comprehend. You are only circumstantially better equipped than others to determine what your own words mean. You often betray what’s in your mind by your Freudian slips and your mistakes. You have to abandon that first-person authority view of understanding.

Descartes recognized that if he couldn’t trust his clear and distinct ideas, there was nowhere to go. The only way out he could see was the deus ex machina of a non-deceiving God who would not let him be wrong about his clear and distinct ideas. Give me a break! Maybe Descartes didn’t fully believe it; maybe he did that to pacify the Jesuits.

Five out of six of his Meditations are on that theme.

My one historical scholarship paper is called “Descartes’s Argument from Design.” It looks at the argument in Meditation 3, which was a good argument until Darwin came along. This is the argument for the existence of God based on ‘my idea of God is so wonderful that there must be a God that created it, because it couldn’t be my own creation.’

God left a trace, conveniently.

So he can assign God the role of intelligent creator.

That’s something else you have in common with Richard Dawkins, the unwillingness to go for a theological explanation for anything at all.

Darwin really did destroy the only respectable—and I think it was respectable—argument for the existence of God. ‘Look at the wonders of nature. Look at the brilliant design, right down to the ribosomes and the motor proteins.’ This is ravishingly effective, efficient, beautiful design. No question. Darwin showed how you can get all that design without the creative intelligence. You can get it from natural selection. That wiped out the best reason for believing in God.

To be fair, Hume eliminated a lot of the conclusions that were drawn from the ‘Argument from Design.’ Hume presented some pretty strong arguments about the limitations of that way of thinking long before Darwin.

Yes. One of my favorite philosophical texts is Hume’s Dialogues Concerning Natural Religion. I used it for many years. I read it as a freshman and fell in love with it. It looms large in Darwin’s Dangerous Idea. I call it Hume’s close encounter. He got that close to getting the whole Darwinian picture.

He didn’t get the mechanism, but he did entertain something close to evolution, didn’t he?

He was on the right track. He said some wonderfully prescient things and that’s why he’s my favorite philosopher.

Let’s move on to I Am a Strange Loop by Douglas Hofstadter. Another friend of yours, and one of your co-authors and co-editors. What’s a strange loop? This is a brilliant title.

It is, and I want to stress the most important word in it—’am.’ Not ‘I have a strange loop.’ Not ‘there’s a strange loop in me.’ But ‘I am a strange loop.’ What makes the loop strange is that the ‘I’ is included in the loop. Most theories of consciousness leave the ‘I’ out of it. They talk about access consciousness, but then they don’t say who the ‘I’ is that has the access. You have to.

If you haven’t included the ‘I,’ the self (in Millikan’s terms, the consumer of the representations, the interpreter), if you don’t have a model of the interpreter, then you don’t have a theory of consciousness: you have a theory of television! There’s no Cartesian theater where the homunculus sits. When you get rid of that idea, you take on the responsibility for showing how all that work and play gets done by the ‘I,’ the inner witness. You must have that in your theory of consciousness; until you tackle that problem, you don’t have a theory of consciousness at all.

How would you describe Douglas Hofstadter? He is, by profession, a cognitive scientist, but he’s eclectic.

He is unique. He’s a computer scientist, physicist, philosopher, psychologist, artist, and translator. One of the luckiest days of my life was when I let him talk me into editing The Mind’s I with him.

That’s a brilliant book. I read that very early in studying philosophy. I love that book. A lot of my peers did, too.

I met Doug in 1979, just after Gödel, Escher, Bach had been published. He had reviewed my book Brainstorms in the New York Review of Books. It was a very lovely review. I was out at the center in Palo Alto, and his father, a Nobel laureate in physics, taught at Stanford. Doug had particularly liked my “Where Am I?” essay, and he went out of his way to visit me up at the center, and to twist my arm about joining him in creating The Mind’s I.

I had lots of fish to fry. I had projects that I was into. I thought this wasn’t going to be a worthwhile venture, but it changed my life. He’s taught me so much about many topics.

Here’s one way of viewing I Am a Strange Loop. Lots of people read his first amazing book, Gödel, Escher, Bach—a cult classic, and I don’t think he would mind my calling it that—and they didn’t get it. They were bedazzled by the wordplay, the logic games, the puzzles, the anagrams, the ambigrams, and they didn’t get what his big message was. Much the way Hume wrote A Treatise of Human Nature and it “fell dead-born from the press,” as Hume said, so he wrote An Enquiry Concerning Human Understanding. I view I Am a Strange Loop as Doug’s Enquiry. It’s the clearer version. It’s all about strange loops and patterns, and it puts together an account of how there can be an ‘I’ as a strange loop.

We’re talking about a kind of feedback loop?

It is a kind, but it’s a very strange kind. He says it’s an abstraction. I pulled a sentence out of it that captures the heart of the book:

“It is the upward leap from raw stimuli to symbols that imbues the loop with strangeness.”

Symbols in the sense of one thing standing for another?

Only human minds—language-using minds—have the task of articulating and expressing who they are, what they are, and what their thoughts are, and of formulating their beliefs about what it’s like to be them. It’s that leap—that task of taking states of the brain that are about things and putting them crudely, at first, into symbols and expressing them, maybe just to yourself—that creates a feedback loop, too.

You talk to yourself, you reason to yourself, you imagine to yourself, you feel to yourself, you reflect and reflect. It’s the reflectivity that we’re capable of, and it’s not clear that any other mammal is capable of that kind of reflection, because language is the crutch that gets us up into it. It teaches us how to become reflective. Language is the ladder we climb to gain perspectives on our own minds; once we’re up, we can throw it away. It enables us to enrich our minds and to stock them with thinking tools, of which words are the most important.

Brilliant. That was so clear. Should we move to the next book? Ruth Millikan’s Beyond Concepts. It’s a challenging book in lots of ways.

Ruth takes no prisoners. Ruth is an indomitable force. When she wrote her first book, Language, Thought, and Other Biological Categories, I read it in draft. I recommended it be published, edited the book, and wrote the foreword for it.

I realized this was an amazingly powerful mind because she went into the analytic tradition—the propositional attitude tradition, the Frege, Russell, and Wittgenstein tradition—and really understood it. And she had the strength to find a position that was critical of it in a very powerful way. She was able to pull herself out of that vortex.

I tried for years to do the same. The result is a paper of mine called “Beyond Belief,” which is my attempt to naturalize the propositional attitudes as part of the naturalization of the intentional stance. But in Beyond Concepts Millikan carries it much further, and she goes all the way to the metaphysics, to the ontology.

The brilliant idea in that book is that she throws away the traditional philosophical concept of the concept and replaces it with the ‘unicept.’ We had some wonderful correspondence about whether she should come up with another name. When she came up with unicept, I said, ‘That’s it. Go! That’s what you mean. You want a term that sharply distinguishes your idea from the traditional notion.’

What is a unicept, then?

You and I can both talk about cats and dogs, and I understand you and you understand me. It’s not a hard topic to communicate about. The standard line is, that’s because we each share the concept of ‘cat’ and the concept of ‘dog.’ A lot of philosophy is involved with getting clear about what concepts are. How you grasp them, as Frege says, and so forth.

Millikan says, ‘No. The mistake is right there.’ You and I don’t have the same concept of dog. We each have a unicept. You have your unicept; I have my unicept. We each have our unique ways of telling dogs from cats and tracking dogs and tracking cats, and putting new things we learn about dogs and cats in the right “places” in our minds, but not because we have each acquired or mastered a specific concept that we then might spend our entire lives trying to define. We don’t need to share exactly the same concept (in oldspeak); thanks to the regularities of the patterns in the world—the clumpiness of the world—we can use the world to keep our mutual understanding in registration. We don’t need to have shared concepts in our heads.

Because we can point to dogs and cats, and they’re readily distinguishable?

Large language models, in a way, have unicepts; they don’t have concepts. And neither do we.

Isn’t that close to John Locke’s theory of language, and the idea that we each have subjective associations for commonly shared words?

Yes, it has some similarity with Locke’s attack on innate ideas. The wonderful thing about this is that Millikan saw how to turn her back, elegantly, forcefully, and with lots of good hard argument, against the established traditions of analytic philosophy. Some people were very angry about it. Jerry Fodor, for example—my old dear friend—I was ashamed of him for the way he handled Millikan. He viewed her as a horrible blight on his whole worldview. She was right, and he could never come to grips with that. I have written some interesting stuff in my memoir about Ruth and her interactions with Jerry Fodor. He was unbelievably rude to her.

You are brilliant at coining new words and terms. How important do you think that is in philosophy? Some philosophers just use the tools they’re given; some invent new tools.

I’m a pack rat. I have been all my life. I see a bit of junk and I think, ‘Ooh, I can imagine a use for that.’ I’ll pick it up and put it in my pile of odds and ends of junk. I love tools. I’m a worker with tools and have a collection of tools that I cherish. I also have a collection of thinking tools, and sometimes I make thinking tools of my own. A good term is a thinking tool, and some of them are very simple. I give them a name and now people can use them. One of my favorites is the ‘surely’ alarm. Whenever a philosopher says ‘surely,’ a little bell should ring. Ding! This is usually where the weakest point in the argument is. It’s where it doesn’t go without saying, because the author seems to have to say it.

Thanks to computers and string search, it’s easy to look for the occurrence of ‘surely’ in philosophical texts. Sometimes the alarm rings and it’s a false alarm, but it’s a great way of spotting the weakest place in an argument. It’s the place where an unexamined assumption or presupposition is being allowed to enter without proper vetting, and that’s where you look for mistakes.

You’ve done this lots of times. Skyhooks, intuition pumps, universal acid—there are so many phrases you’ve coined that stick. You could have your own lexicon: a Dennett dictionary of new terms. It’s brilliant, and it’s comparatively rare for philosophers to do that.

I guess I never thought of it in those terms, but I love doing it. I’ve also loved championing terms that others have coined that I think are more important than a lot of their critics think. ‘Meme’ is the obvious choice there. I think Richard Dawkins clarified, crystallized, and brought into focus a very important idea—the idea that there are things that evolve in culture that have their own fitness. We need that idea now because we’re beginning to create new kinds of very fancy memes that are generated by large language models (LLMs) that make counterfeit people, which can then reproduce. This is the most dangerous advance of technology in the last thousand years. It’s more dangerous than gunpowder and nuclear war because it may destroy trust. If we lose trust, we’re toast.

What we’re doing right now, talking to each other, depends (in ways that we have underestimated or taken for granted) on a shared ideal of ‘let’s get it right, let’s go for truth, let’s not lie, let’s not deceive.’ If you can’t trust your interlocutor not to deceive you, and if you find it almost impossible to tell whether you’re talking to a person who shares that trust, then that whole fabric of knowledge (which depends not on God, as Descartes thought, but on human unity and trust) disintegrates.

There have always been liars and deceivers, but we’ve created a new way of making the millions of brooms in The Sorcerer’s Apprentice. They’re out of our control, and they reproduce. That’s the point I want people to understand. Nuclear weapons can’t reproduce; software can reproduce. We’re living in the age of high-fidelity reproduction. This is the ‘warm little pond’ in which Darwin imagined life began. This is the salubrious environment where technological memes can reproduce very quickly and create a pandemic. I’m very worried about this. I think we should take steps as soon as possible to protect ourselves, if it’s not too late. It bothers me that some of the most intelligent people I know, including Geoffrey Hinton and Yoshua Bengio, think that it may be too late, that we’re beyond the tipping point. I hope they’re wrong.

Do you take some consolation from seeing how complex behaviors are so easily built up from very simple, unthinking behavior, to give at least the illusion of thinking behavior? This development seems completely consistent with some of what you’ve written about the building up of minds.

The big difference is that LLMs aren’t persons. Persons are indefinitely reflexive. They have higher-order thoughts and higher-order desires and beliefs. An LLM is a counterfeit person because it doesn’t have any higher-order values. That’s the problem.

My own thought is that there’s nothing magical about the way we think that differs in principle from the computational processes that go on in LLMs. The underlying hardware, though, is profoundly different.

As is the mechanism for building it up, in terms of the data.

Nobody’s got a good theory yet for how (in detail) the computing brain makes a human mind, although people are working on it. My colleague Michael Levin is leading the charge in some ways, in understanding how individual cells, unlike the parts of digital computers, have to work for a living, and have to have agendas. They are collaborating and competing, and that collaboration and competition is the basis for creating the persons that can collaborate and compete the way you and I do.

Could LLMs be encouraged to evolve, or perhaps we could shape them, to build in loops that give them this kind of consciousness or something close to it?

They could. I have a draft of a paper where I talk about how we could, but I think we shouldn’t—we don’t have to go there. It’s called “We Are All Cherry-Pickers.” I’ve been sending it around. My piece ‘The Problem with Counterfeit People’ was published by The Atlantic and has had some serious traction.

We should probably move to your final book choice. This is From Darwin to Derrida, by David Haig, another evolutionary thinker.

David’s an evolutionary biologist who’s also, in the very best sense of the word, an amateur philosopher. He’s an insightful and imaginative reader of the history of philosophy and texts, and his book is a rollercoaster ride of insights and excellent ideas. I had an amazing thought just a couple of days ago about this. I wrote to David and asked him if he’d ever read Word and Object. I don’t think he has. We’ve never talked about it. But he’s doing what Quine was trying to do. Here he is, an evolutionary biologist at Harvard, and he is fleshing out a lot of the details in Quine’s project of finding a place for meaning in the world without elevating it into magic.

Is that what it means to give a naturalized account?

It’s consonant with and contributory to the sort of work that Ruth Millikan’s done, and what I’ve tried to do. He wrote an article called “Making Sense,” which is also a chapter in the book. It’s one of my favorites. Here’s a sentence from it:

“Information is what could have been otherwise before observation. Meaning is what would have been otherwise, had the observation been different.”

That’s one of those sentences you have to stop and think about.

Absolutely. “The non-living world,” he says, “is a repository of unintended information useful for living interpreters.”

He’s an aphorist! Those are brilliant.

Yes. And, of course, the philosophy of language crowd won’t have anything to do with it. I’ve tried.

Because he’s not part of that world, not trained in the philosophy of language with the vocabulary and premises that they share?

I wrote an article framing Haig’s “Making Sense” article, called “Haig’s Strange Inversion of Reason.” We sent both papers to Mind & Language, the British journal that is the current citadel of the traditional view. Due to a mechanical problem with the submission process, they got David’s paper before they got mine. My article was supposed to introduce them to David’s ideas, so that they wouldn’t dismiss them outright. As luck would have it, they read David’s first and rejected it because they didn’t understand it. When mine came in, they said, ‘No point publishing this because this is a framing piece for a piece we’ve already rejected.’ I suggested that since they didn’t read them in the right order, they might consider reading them in the right order and see if they still wanted to reject them. They wouldn’t do it. I have to say that I appreciate the fear and anxiety that leads them to treat David this way, but I don’t respect it.

Is it fear of Darwinist explanations? It would pull the rug from underneath a different way of doing things if it’s correct.

Absolutely. It is fear of having to do that strange inversion and realizing that comprehension—intelligence—is not the source but the effect, the outcome of a lot of mindless mechanistic activity. Competence comes first and then comprehension. Comprehension is not perfect, it’s not Cartesian certainty, it’s not guaranteed by God. When we do the strange inversion, we can understand how we can be the knowers that we are without being magical.

I guess that sums up your philosophy as well. You’ve wanted to remain in the world of science, not armchair philosophy. You’ve wanted to avoid magical thinking. You will absolutely not resort to religious explanation or invent new substances to explain phenomena that are difficult to explain scientifically. And you’ve entertained hypotheses that, in principle, are empirically testable—most of them, it seems to me.

There are two kinds of people. There are people that want magic tricks explained to them, and those that don’t—or they want to figure it out for themselves. I know quite a bit about magic, and some of my good friends, advisors, and mentors are magicians. Sometimes when I ask, ‘Do you want to know how that trick’s done?’ people reply, ‘No, no. I don’t want to know.’ There are the people who don’t want to know. And there are the people like me, who are nosy and troublesome, and we want to know. We’re going to try to figure out the explanations. What I want to say at the end is, ‘Look how wonderful it is!’ The explanations we get from science are so much more ravishingly beautiful, ingenious, and complex than anything from the know-nothing tradition. How they can prefer myth and magic and ignorance to scientific understanding is, to me, astonishing.

Presumably, that’s the kind of thing that a scientist could very easily study as well. Hume had all kinds of explanations about why people want miraculous explanations.

They’re easier. H. L. Mencken said, “For every complex problem there is an answer that is clear, simple, and wrong.” Some people would rather cling to what is, by their lights, a self-evident truth, than go to all the trouble of having to learn some initially difficult and upsetting ideas.

Interview by Nigel Warburton

September 25, 2023

Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at [email protected]


Daniel Dennett

Daniel Dennett (1942-2024) was an American philosopher. He was University Professor Emeritus at Tufts University and best known for his work on consciousness. Immersed in contemporary neuroscience, he had a superb knack in his books for finding memorable images to communicate his theories about the way consciousness works. He was also known for his writing on free will. Dennett was a prominent atheist, one of the ‘four horsemen’ (along with Christopher Hitchens, Sam Harris and Richard Dawkins).

Dennett was a tireless critic of sloppy thinking, and even if you don’t agree with him, his books are always entertaining to read. He was a lively writer who really crafted his books, another reason they have been recommended many, many times in Five Books interviews.
