
Best Books on the Neuroscience of Consciousness

recommended by Anil Seth

Nearly every human has a sense of self, a feeling that we are located in a body that's looking out at the world and experiencing it over the course of a lifetime. Some people even think of it as a soul or other nonphysical reality that is yet somehow connected to the blood and bones that make up our bodies. How things seem, however, is quite often an unreliable guide to how things are, says neuroscientist Anil Seth. Here he recommends five key books that led him to his own understanding of consciousness, and explores why it is that what is likely an illusion can be so utterly convincing.

Interview by Nigel Warburton


We’re talking about books on the neuroscience of consciousness. A difficult question to begin with, but what is consciousness?

It is a difficult question. And it is difficult to come up with a definition that everyone is going to agree on. For me, the best place to start is with a definition that people are unlikely to disagree violently about, so that we’re not talking past each other. The definition I go back to starts with the philosopher Thomas Nagel: for a conscious organism or a conscious system, there is something it is like to be that system. There’s something going on for that system. Colloquially, you can think of consciousness as any kind of subjective experience whatsoever. This may sound a bit circular, but I think it’s useful because it distinguishes consciousness from things that are not coextensive with the concept. Self is not coextensive with consciousness. It’s part of our human experience, but you don’t have to have a conscious self for consciousness to be present. Being conscious is not the same thing as being intelligent. It’s not the same thing as cognition, either. Consciousness is just raw experience. It’s what goes away when you go under general anesthesia and it’s what comes back when you come around again.

The focus of your book, Being You, is human consciousness, but presumably, from that definition, most animals have some level of consciousness?

It depends where the line is drawn. This is another difficult question because here we face one of the big challenges of consciousness science, which is methodological: how do we get the relevant data? The tricky thing about consciousness is that its mode of existence is intrinsically subjective. This has been suggested as a reason why a science of consciousness is—optimistically—very difficult or—pessimistically—actually impossible because we can’t directly, publicly observe conscious experiences. This means that it’s difficult to get data about the experiential qualities, or qualitative aspects of consciousness, even from humans. And it’s even more difficult when it comes to other animals: we can’t speak to them and ask them complicated questions about what’s going on for them. But this doesn’t mean that they don’t have conscious experiences, I think they very likely do. A question that often gets asked here is: Does consciousness fade out gently as you get towards simpler and simpler animals? Or is there a bright line that demarcates the circle of consciousness?

I think the more interesting question—rather than which other animals are conscious or not—is what the space of other minds might be like. There’s a book, which isn’t in the five I’ve chosen (though it could well have been), Peter Godfrey-Smith’s Other Minds, which I think is a beautiful exploration of other minds. I talk about this question—the inner universes of octopuses and birds—a little bit in my book also.

You’re a scientist, and you’re talking about the neuroscience of consciousness because there are very strong correlations between what happens in the brain particularly, and in the nervous system more generally, and human consciousness. What do you feel about people who think that we’re more than our bodies? A lot of people, religious people, believe that there is something completely separable from the body that is nonphysical, and they locate consciousness there. People talk about them as Cartesians, perhaps, but the view predates Descartes by a long, long time, and it seems quite a natural way to speak about our subjective experiences, as something different from the physiological.

It can feel intuitive to think this way, and that’s part of the challenge and part of the problem. The history of science has often been about taking how things seem and getting underneath that to figure out how things are. How things seem is often quite an unreliable guide to how things are. It seems like we’re at the center of the universe. We’re not. It seems like we might be distinct from all other species and special, with God-given rational minds. Oh, actually, we’re not, as Darwin showed. The fact that consciousness may intuitively seem to be nonphysical does not mean that that’s the way it actually is.

“Consciousness is…what goes away when you go under general anesthesia and it’s what comes back when you come around again”

Now, as for the materialistic viewpoint—the idea that there will be a satisfactory explanation of consciousness in terms of physical processes—I can’t rule out the ultimate insufficiency of that kind of explanation. I’m not a gung-ho materialist of the sort who might say, ‘Obviously, it’s going to work.’ But I’m also very reluctant to pronounce its inadequacy without giving it a really, really good shot. And so sometimes I’m a little bit impatient with people who rush, in my view too quickly, to pronounce the insufficiency of materialism. I rather think the fact that dualism (or any sort of nonphysical mode of existence for consciousness, it doesn’t have to be dualism) seems appealing is itself an interesting explanatory target. Why do people have that feeling? David Chalmers has now talked about this as the ‘meta problem’ of consciousness, following on from his famous articulation of the hard problem. The hard problem is, ‘how can any physical system have conscious experiences at all?’ His meta problem is, ‘why do people think there’s a hard problem?’ For me, that’s a more tractable problem to address.

Let’s move to your first book. You’ve chosen Daniel Dennett, Consciousness Explained. It’s a very bold title: does it deliver?

This follows very nicely from the last point that we were discussing because Dennett, in this book, is trying to dismiss the dualist intuition completely, to get rid of it entirely. When he says ‘Consciousness Explained’, in my understanding it’s explicable for Dennett because he thinks we are mistaken in thinking that there is anything beyond what is within the realm of normal physical descriptions of mechanisms and their dispositions and their properties. That’s all we need.

This book has been a massive influence on me, as has Dennett himself, throughout my career. And I’ve been lucky enough to get to know him a bit over the last few years as part of my role in the Canadian Institute for Advanced Research, where I’m a co-director, and he’s one of our main advisors. It’s been a real pleasure. Having first read his book when I was just starting my undergraduate degree at Cambridge, I never thought that a quarter of a century later I’d be discussing consciousness with him in person. In fact, this year, he very kindly set a proof of my book for his philosophy class at Tufts University. They read through the whole thing and grilled me about it for six hours, which was great: an exhilarating torture.

But I’m not sure I’m entirely convinced by his perspective, which is itself a good thing because it maintains an ongoing conversation. When I first read Consciousness Explained, my reaction was ‘No, this doesn’t really explain consciousness because it doesn’t explain the redness of red, it doesn’t explain why a physical system has conscious experiences.’ To me, what Dennett is getting at is why we might be mistaken about the questions we’re asking. He comes up with all these beautiful thought experiments that deconstruct one of the central assumptions people often make when thinking about consciousness: the idea of a ‘Cartesian theatre’. This, to me, is where he’s really strong. The Cartesian theatre is one among a number of assumptions that people frequently make when thinking about consciousness. Another is that consciousness exists. I have this assumption, too. I think consciousness is a real thing—there’s a ‘there’ there. Another common assumption is that there’s some inner observer, some inner self, that is in some sense the experiencer of conscious experiences, the audience for an inner movie that is played out somewhere in the brain. This is the Cartesian theatre. We can say that we’re not dualist and that we think materialism is a good way forward, but we can still fall into this Cartesian theatre fallacy and speak, even if only implicitly, in these terms. As Dennett often says now, the hard question is, ‘and then what happens?’ There’s always this temptation to say ‘Okay, and then what happens? At what point does whatever-is-going-on transmogrify into qualia?’

Dennett’s book does an outstanding job of deconstructing these ideas, of making us realize there is no need—in fact, it doesn’t make any sense—to think of a place, a Cartesian theatre, where everything comes together for the benefit of an inner observer. His ‘multiple drafts’ theory of consciousness is a positive proposal for what happens when you don’t make that assumption. There are just processes unfolding all over the place in time and in space, retrospectively woven together in something the brain later interprets as a stable ‘center of narrative gravity’, as he calls it.

Even though it’s a popular book, it’s also philosophically detailed. I confess I have not read all the way through it again since my first encounter, though I have repeatedly dipped in over the years. I include it here because I am convinced it remains very relevant. And, evidently, it has guided consciousness research, not only mine, but that of many colleagues too, over the years. People sometimes have a love-hate relationship with the book: they either think he’s completely right, or they think he’s completely wrong because he’s missing, or dismissing, the qualitative, phenomenal aspects that are at the heart of the matter. I don’t have that love-hate relationship. I like some bits of it very much indeed. But I’m not convinced by the whole.

A more recent way of putting Dennett’s core thesis would be that it’s a fantastic set of ideas about what the philosopher Ned Block has called ‘access consciousness’—those parts of our conscious experience that are manifest through broad availability to other cognitive functions and processes. Dennett has called this kind of consciousness ‘fame in the brain’. When we’re aware of something in this sense, we can behave very flexibly with respect to it. Dennett’s book does a wonderful job of highlighting how these aspects of consciousness can be explained by mechanisms—a story that has since been elaborated in modern neurobiological theories of consciousness, such as the global workspace theory, developed primarily by Bernard Baars and Stanislas Dehaene.

And maybe that’s all there is to the problem. I mean, some people would say once you’re done explaining access consciousness, that’s it, job done. But some others, like Ned Block, would say, ‘No, there’s also phenomenal consciousness, there’s an aspect of consciousness which doesn’t have to involve cognitive access.’ An example would be the ‘redness’ of seeing red. Sure, we can—and usually do—have cognitive access to our experiences of red. But the redness itself is not defined in terms of this access. Dennett’s theory doesn’t directly go after phenomenal consciousness, it tries to indirectly undermine the need to propose its existence. It’s often thought of as an ‘illusionist’ view on consciousness, where we’re somehow mistaken that redness as a qualitative property exists. As I said, I’m not convinced by that.

What I think he obviously shares with you is a real respect for, and knowledge of, neuroscience. You’re a neuroscientist yourself, but he was coming in as a philosopher, immersing himself in the world of neuroscience. When he started, that wasn’t how most philosophers of consciousness went about their business. I think that’s another way in which he has been extremely influential. It’s now obvious that you have to know the neuroscience to be a philosopher of mind. That wasn’t true in the 80s when he first started publishing extensively.

That’s part of his huge legacy. Let’s make that abundantly clear. One of the joys of my career over the last 25 years has been to witness, and to some extent participate in, this cross-fertilization between philosophy and neuroscience. Dennett is not the only philosopher in my list here. Later, Thomas Metzinger comes up. Then there’s Andy Clark, a close colleague at Sussex. And there are now many brilliant philosophers of mind who are incredibly literate with neuroscience. Some of them, like my colleague Jakob Hohwy, not only know the relevant science but also run experiments themselves. To have this conjunction of philosophically informed neuroscientists on the one hand, and neuroscientifically informed philosophers on the other, is both exciting and necessary. You’re just not going to make progress without having this mixture of expertise.

Let’s go to your second book, which I don’t know. It’s called The Mechanization of Mind.

This is by Jean-Pierre Dupuy. It’s a French book that came out in English translation in 2000. I’ve chosen it because I think one of the main obstacles to developing a broadly materialist view of consciousness is that people get stuck in the metaphor of the brain as a computer. There is a tendency to equate the two, which in turn leads to the assumption that materialism is basically the same thing as having a functionalist perspective on the mind-brain relation: the brain is a computer, in which the wet stuff, the wetware, is the hardware, and the mindware is the software. Where does consciousness fit into this? Is it some specific kind of mindware running on the wetware of the brain? Can we therefore simulate consciousness? Thinking of the brain as a computer overly constrains the way we think about these questions about the scope of materialism in general, and about the way we interpret experimental data.

The computer metaphor also leads to a neglect of the body and the environment, because computers work without a body and an environment. Unfortunately this metaphor, of the brain as a computer, has become very dominant, not just in consciousness research, but in cognitive science generally.

And in movies.

Oh yes. It underlies all these tropes about the simulation hypothesis and mind uploading. There’s no necessity to buy into this computational metaphor of mind in order to pursue a broadly materialist agenda. You just don’t have to do it. We have always had metaphors for the brain – its complexity and opacity have demanded it – but these metaphors have always changed over time, driven often by whatever technology is dominant. In fact, one of my favourite recent books on the history of neuroscience is The Idea of the Brain by Matthew Cobb, which lays out a beautiful account of metaphors about the brain, how they’ve changed over time, and how they’ve influenced thinking about brain and mind and consciousness. I really enjoyed that book.

Going back to Dupuy, reading about the historical roots of the computational metaphor of mind and its alternatives was really enlightening for me when I first encountered it more than twenty years ago. I was doing my PhD at Sussex, the same place where I am now as faculty. We were learning all about dynamical systems theory and embodiment and autopoiesis and all these concepts which really didn’t have much to do with computation or representation, and which were considered rather fringe (some still are, of course). At Sussex we were like the vanguard of the anti-representationalist resistance in cognitive science in the 1990s—showing that you can, for instance, get simple robots to do complicated things without any need for use of representational language. Now, it’s possible to get carried away and say that there’s never any need for representational speak at all when discussing the mind and brain. I don’t agree with that. My preferred way of thinking about the brain now is as a ‘prediction machine’ and there are sensible ways in which to think of the generative models underlying predictions as instantiating representations of some sort.

Avoiding falling into the assumption that the brain is a computer is, I think, powerfully enabling for how we understand mind-brain behavior, and also for thinking about how brain activity might underlie or shape consciousness. Dupuy’s wonderful book takes us back to the Macy conferences, a series of meetings that happened in the late 1940s and early 50s in the US and which were closely associated with the discipline of cybernetics. I don’t know what people think cybernetics means now, because it’s a word that’s been co-opted and corrupted in many different ways, but at the time it was a rich mixture of philosophy, mathematics, the beginnings of computer science and AI, biology, engineering, sociology, even anthropology. The meetings hosted incredibly lively discussions exploring the basic idea that the brain and living systems are machines, and that they exercise control. This conception of the brain as a control system, engaged in regulation, with tight coupling between inputs and outputs and lots of feedback loops and the like was, and is, a very different perspective on brain function to the classical cognitive perspective of input, then computation, then output.

What’s fascinating historically is that, back in the 1950s, there was very little clear air between these ways of thinking. As well as cybernetic pioneers like Norbert Wiener, the Macy conferences attracted some of the big names that we now associate with the computational metaphor of mind—like John von Neumann, the developer of the serial architecture underlying modern computers. At the time, these people were discussing all these ideas, all together.

Even now, reading Dupuy’s book broadens one’s perspective about what kinds of form mechanistic and materialistic explanations can take and reminds us that the brain isn’t just doing computation—if it’s doing computation at all. The brain is embodied, and the body is embedded. The function of the brain can only be understood in terms of a rich context of tightly coupled interactions with strong feedback and recurrency—a very different situation from how a disembodied chess-playing (or Go-playing) computer searches ahead through unfolding trees of possible moves without doing anything.

What’s more, the first neural networks came out of cybernetic thinking, not out of the development of the computational metaphor of mind. The recent avalanche of deep learning traces directly back to the perceptron and, before that, to the model neurons of McCulloch and Pitts, who were central participants in the Macy conferences.

Dupuy’s book relates all this and much more, and it does so in a beautifully written and admirably concise way. I confess, as with Consciousness Explained, I haven’t read this book all the way through since my first encounter, but it made a big impact on me at the time, lending a systematicity to my skepticism about the computational metaphor of mind.

“One of the main obstacles to developing a broadly materialist view of consciousness is that people get stuck in the metaphor of the brain as a computer”

There are a couple of other books I’d have loved to have included here. My PhD supervisor, Phil Husbands, and Owen Holland have a recent book called The Mechanical Mind in History, which is a history of a similar group of meetings that took place in the UK—a parallel cybernetics movement in the 1950s that was called the Ratio Club. A lot of this activity, both in the UK and the US, was spurred by the Second World War, of course: the need to design machines that were useful in particular ways, machines that could guide missiles and track targets by radar. You’re not going to control a missile by building a chess computer (though you might end up breaking the Enigma code). The UK group of cyberneticians included people like Ross Ashby, who has turned out to be very fundamental to my own thinking, and also Alan Turing, Horace Barlow and William Grey Walter, who is one of the great interdisciplinary pioneers of modern neuroscience. Grey Walter built a famous family of tortoises or turtles—it’s never entirely clear which term he preferred—so-called Machina speculatrix, which showed autonomous behavior, navigating back to their hutches when they were running low on battery. This was an early demonstration of how you can get compelling behavior from very simple circuits when they are embedded in a sufficiently rich embodied environment.

Looking back, it’s something of a tragedy that cybernetics lost its way—a message that is prominent in both Dupuy’s book and the treatment by Husbands and Holland. It certainly lost its influence at the time. There was a second wave of cybernetics, but it never really had the impact or integrative coverage that the first wave did. At the same time, of course, the computational metaphor of mind really took off because people were building actual computers. Suddenly it was, ‘Yes! We have a computer that really can play chess’ and all the funding went that way. Everything went that way. Meanwhile, Chile tried to run its government on cybernetic principles and failed badly. There were lots of mistakes or historical contingencies that led to the general supremacy of the computational metaphor of mind. Reading books like Dupuy’s helps remind you that history could have unfolded differently. And it’s not too late. In fact, I think it’s still essential to take on board some of the insights from that time.

One of the things that is already emerging from this conversation is the degree to which neuroscience thrives on interdisciplinarity. The history of neuroscience isn’t a pure history. If you think of chemistry, although there are interactions with biology and physics, you can think of more or less pure chemistry. But neuroscience doesn’t seem to have that purity about it. Maybe this is going too far, but it seems you can’t really do it independently of thinking philosophically, maybe biologically, mathematically and, according to some people, having a deep knowledge of how computers work. It’s not so easily separable from a range of other subjects. Is that fair?

To some extent. I was never formally trained in philosophy, but it’s hard to imagine any discipline of natural science where having a philosophical literacy is not going to be an advantage. It’s always going to help. At a minimum, it helps you avoid making naive and wrong assumptions about what you’re doing. I suppose there will be some areas of neuroscience that can be moderately encapsulated, without needing a day-to-day interaction with philosophy. You will still need to have some basic, background philosophical knowledge of why you’re doing what you’re doing and what it means. But if, for instance, you’re trying to unpack how the stomatogastric ganglion of a sea snail works, sure, you can probably get on with that job and be a relatively pure neuroscientist, and that’s fine. But when it comes to thinking about something like consciousness, absolutely not. There is no way to make good progress without really taking it on the chin that it’s an interdisciplinary enterprise permeated by philosophy and following through with the consequences of that.

We’re at your third book choice now, Consciousness: How Matter Becomes Imagination.

This book marked the start of a new chapter in my career—and indeed in my life. In 2001, I had the opportunity to move to the Neurosciences Institute in San Diego, California, to work with Gerald Edelman. Edelman won his Nobel Prize when he was 43 and capitalized on that to be one of the few people bringing a new legitimacy to the study of consciousness in neuroscientific circles. It had been more or less taboo for a long time, and it certainly wasn’t part of what was on offer when I was an undergraduate student at Cambridge. But Edelman, along with Francis Crick and a few other early pioneers—Wolf Singer in Europe and so on—had fought hard to remind people that consciousness is a central problem in neuroscience and that there are ways to address it reasonably, even if you don’t solve the whole thing.

Now, one of the reasonable approaches to the scientific study of consciousness is the so-called ‘neural correlates of consciousness’ approach, pioneered by Francis Crick and Christof Koch. This approach was, and is, very deliberately pragmatic. I suppose the feeling was, ‘Okay, let’s just put aside these awkward philosophical and metaphysical problems and just remind ourselves that there are intimate relationships between what happens in brains and what happens in conscious experiences when you fall asleep, or if you undergo binocular rivalry, or whatever the manipulation is, so let’s just look for these correlations.’ This was a terrific idea, at least at the time. You can follow this approach and just remember to be modest about what can be concluded. It’s not a theory of consciousness, we’re just finding stuff out. The ability to do this had a massively salutary influence on the field, precisely for this reason. Of course, it is still methodologically problematic: for example, a key question is whether you’re studying the real correlates of consciousness, or the correlates of how people report about their experiences. Lots of debates rumble on about this to this day. But at least you could get on and do it. A lot of the empirical neuroscience of consciousness since the mid-1990s has followed this route, and productively so.

This book takes a different route and that’s why I found it hugely influential and interesting when I read it back in 2000. It was co-authored by Edelman and Giulio Tononi. Tononi is now well-known for pioneering the integrated information theory of consciousness, which builds on ideas in this book. What struck me about it was that it tried to go beyond just establishing correlations, to attempt to explain properties of phenomenology in terms of mechanisms. This seemed midway between philosophy and neuroscience, and exemplified the benefits of bringing both together. The overall approach is something like, ‘Okay, we’re not going to propose some dramatic claim that directly solves Chalmers’s hard problem. Instead, we are going to identify characteristic properties of all conscious experiences, and then ask what properties the underlying mechanisms must have, to account for these properties of phenomenology.’ They came up with a couple of insights into phenomenology that may seem obvious, but which are actually quite deep.

“How things seem is often quite an unreliable guide to how things are”

The first is that every conscious experience is different from every other conscious experience you’ll ever have. This means that consciousness is hugely informative in a very technical, formal sense: every experience rules out many, many alternatives. The second is that every experience is unified. All of our conscious experiences are ‘all of a piece’, bound together—when we are conscious there is a single stream of experiences going on. These two properties seem to coexist and characterize every experience, whether it’s looking out of my window now and seeing the sea in the distance, or meditating, or visiting the dentist and feeling a sharp pain as she drills into my tooth. These two properties obtain in all these cases.

The book then develops that and says, ‘Okay, well, if there’s a characteristic property of consciousness that doesn’t obviously apply to other things in biology, at least not in the same way, then figuring out the mechanisms that could underlie this property should shed light on the brain basis of consciousness.’ Specifically, you can ask what kinds of systems co-express integration and informativeness (or differentiation) – and see if these kinds of systems exist in the brain.

In the book they point out that the cortex has the right kind of anatomical structure for generating both information and integration, but the cerebellum, which has more neurons in it than the rest of the brain, doesn’t have these kinds of properties. This explains why the cerebellum may not be involved in consciousness, and we know as an empirical fact that it isn’t. I was inspired by this book because, at least for me, it was the beginning of a form of consciousness science that had real explanatory potential. There were a few earlier papers that had been developing some of these formal measures of complexity that characterize co-expression of integration and information, but they hadn’t really applied them to consciousness. Reading this book made me realise that we had a real chance of developing a materialist story that takes phenomenology seriously and gets us further than correlation. And it influenced me in a very practical way too, since it was one of the main factors that encouraged me to go to San Diego. So, in 2001, I moved to America for six or seven years and worked on consciousness with Edelman at The Neurosciences Institute.

Tononi had actually left San Diego before I arrived, and had moved to Wisconsin where he started to develop his integrated information theory (IIT) which builds on the core ideas I just mentioned, but takes them in a different direction. One of the main distinguishing features of IIT is that it makes a very strong claim about consciousness: IIT proposes that consciousness is integrated information, which is a very ambitious and provocative thing to say.

The other thing that’s in Edelman and Tononi’s book is a summary of Edelman’s ideas about neural Darwinism—or, to give it its full name, the ‘theory of neuronal group selection’. This is one of the things Edelman is best known for, but also something that has been notoriously hard to understand. Put simply, it’s a theory of biological selection applied to the brain, during both development and everyday function. We all know from Darwin that natural selection drove the evolution of distinct species. But there’s also selection within the body. Edelman’s Nobel Prize was in the field of immunology, where he was known for the idea of immunological selection. According to this idea, antibodies are able to fit to antigens, not because they somehow fold themselves around the invading antigens and then instruct other antibodies to do the same, but because of an internal process of somatic selection. The immune response is generated by diversity and selection within the immune system. It’s a beautiful insight. Edelman then applied the same idea to the brain and asked, ‘Okay, how do neural populations form?’

A neural population is something like clusters of similarly organized neurons?

Broadly, yes. The brain is a vastly complex system in terms of the sheer number of neurons, but more so in the intricacy of their connectivity. It seems very implausible that single neurons end up doing particular things—implementing specific large-scale functions. Everything the brain does involves a network; it takes a village. How do these connections get sculpted? There are so many of them that it’s difficult to imagine they’re all precisely specified in the genome. There’s going to be—or at least this is Edelman’s argument, and I find it convincing—some internal variation and diversity that is then selected on through development and experience. We end up with the fine-grained neuroanatomy that we each possess through this process of internal variation and selection.

Are you speaking about within an individual? This evolution occurs within my lifetime?

Yes.

So it’s very, very different from saying basically the things that work carry on existing because they go on to reproduce.

It’s very different. One of the disanalogies with Darwinian evolution is that there’s no obvious mechanism of replication. In Darwinian evolution, genetic information is encoded in DNA, and DNA gets passed down through the generations. What happens during the lifetime doesn’t affect what’s passed down through the germline, at least to a first approximation. In the brain, it’s not clear there is any equivalent sort of encoding and inheritance, though there’s still variation and selection. Still, I think that selectionist principles offer a powerful way to think about how the brain develops and functions.

Crick was down the road from Edelman and the Neurosciences Institute, at the Salk Institute. They had a beautifully sparky relationship: two old grandees of biology, both with important Nobel prizes in their pockets—and both deciding to study consciousness. And taking very different approaches: Crick with his reductive correlative approach, and Edelman with his grand theories. Crick infamously dubbed neural Darwinism ‘neural Edelmanism’ when it first came out.

Edelman’s books are rarely easy reads. Working there for six years, I was lucky enough to have daily conversations with him. I read his books a few times and we discussed the ideas frequently, so I was able to put them into context. In general I think that Edelman’s ideas—especially those to do with neuronal group selection—may not have had the influence they deserved, in part because of the style of his writing, but in larger part because at the time there was a mismatch between his ideas and the availability of relevant data. These days, we finally have the technologies—like optogenetics—which allow us to observe the activity of very large populations of neurons in real time, and which may deliver the data needed to see whether and, if so, how, population-level processes like selection are actually happening in the brain.

I can see why things might be strengthened when they’re functionally efficient. But why would things which don’t work get destroyed? It’s not the death of an animal that could have reproduced.

No, there has to be weakening as well as strengthening, for all sorts of reasons. Very simply, your skull would run out of room if things only ever got strengthened. At a computational level, learning is just as much about pruning connections as it is about strengthening them. One initially surprising fact about brain development is that synaptic density is at its peak between two and three years of age. From then on, we’re all losing connections. But this is a good thing, because learning requires trimming away the stuff that’s not necessary. The principles of statistics and machine learning have taught us that pruning is necessary in order to enhance generalization—to avoid overfitting to the data on which algorithms are trained. There’s also a metabolic cost to having too many neurons and connections. There are all sorts of reasons why we need to select and finesse rather than merely reinforce.
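The pruning point has a direct analogue in machine learning. A minimal sketch—my example, not anything from the books discussed—is magnitude pruning: after learning, the smallest connections are zeroed out, keeping only the fraction that carries most of the structure.

```python
import random

def magnitude_prune(weights, keep_fraction):
    """Zero out the smallest-magnitude weights, keeping only keep_fraction
    of the connections. A crude but standard form of network pruning."""
    k = max(1, int(len(weights) * keep_fraction))
    # Threshold = magnitude of the k-th largest weight.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

random.seed(0)
# A toy "network": 100 connection weights, most of them small.
weights = [random.gauss(0.0, 1.0) for _ in range(100)]

# Keep only the strongest 20% of connections.
pruned = magnitude_prune(weights, keep_fraction=0.2)
surviving = sum(1 for w in pruned if w != 0.0)
```

In modern deep learning the same idea lets networks shed the majority of their parameters with little loss of performance—pruning as a route to generalization and efficiency, much as in the developmental story above.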

There’s one other book that I wanted to mention in connection with these ideas. This is Christof Koch’s recent The Feeling of Life Itself. Christof, after working very closely with Francis Crick until Crick’s death in 2004, then began collaborating with Giulio Tononi. Over the last 15 years he has become, along with Tononi himself, one of the main proponents of integrated information theory. In a way this seems to be a switch to the other side, but, alternatively, Koch’s trajectory can be thought of as an extension of the neural correlate approach, now through a different theoretical lens. His book, The Feeling of Life Itself, is a very accessible, authoritative and up-to-date manifesto for IIT.

We’re at the fourth book, Thomas Metzinger, Being No One, which sounds like a Buddhist text from the title!

Well the first thing to say here is that this is a big book, both in terms of sheer size—and in the density of ideas on each page. Thomas Metzinger is a German analytic philosopher, and one of my most important mentors and inspirations over the years. Like Dennett, he trained in rigorous philosophy and combined this training with an enviable neuroscientific literacy. He’s one of the pioneers who crossed over from pure philosophy into the Wild West of interdisciplinary research. I first encountered Metzinger in 2005, in Iceland, where we were both giving talks, and we had a really fascinating and, for me, very illuminating conversation. Starting with this conversation, and going on to read his work, Metzinger inspired me to think much more closely about ‘the self.’ Until then, I’d mainly been thinking about consciousness in general and about perceptual experiences of the world. There has been a strong bias in consciousness research to study visual experience—partly because the neural correlates approach can be readily applied in this domain. But of course consciousness is much more than just visual experience. I mentioned at the beginning of this conversation that the experience of ‘being a self’ is not a precondition for consciousness. At the level of self as personal identity, I think this is very likely true. But the experiences of selfhood are nonetheless a central aspect of consciousness as it unfolds for us humans and probably for other animals, too. At some very basic levels, it might be that self-related experiences are foundational to all of consciousness—an idea that I explore in my own book, Being You.

It’s probably a good moment to say, succinctly, what you think is characteristic of ‘being a self’?

Yes, unpacking what ‘self’ means is one of the reasons Metzinger’s book is on this list. My own book follows a similar trajectory. It starts off by talking about consciousness in general, but really where I’m going, what my motivation is, is to unravel what it means to be a self: Being You, being me, Being No One. My debt to Metzinger is obvious even in the title. Many of the ideas within it are continuous with what he was saying nearly 20 years ago. Indeed, they go back at least as far as David Hume. In the 18th century, Hume proposed the idea of a ‘bundle theory’ of self. This is an anti-essentialist view of self, according to which there is no immutable, persistent, stable, single, perhaps even incorporeal essence of me or you. Rather, the self is a bundle of perceptions. The self is not the thing that does the perceiving, the self is a perception too; more specifically, it’s a collection of perceptions that are experienced as a unified whole in normal circumstances—when you’re not neurologically damaged, or psychiatrically ill, or in one of our lab’s experiments. There are many different and potentially separable aspects to the overall experience of being a self.

By being a unified whole, what do you mean, over time?

Over time and in the moment, too. Here, things get interestingly complex. At any given moment, there is the experience that the self is unified—that it is ‘all of a piece’, the essence of you. But of course the way things seem doesn’t mean that’s the way they are, and experiences of selfhood can come apart in all sorts of ways. Here I can mention the much-loved books of Oliver Sacks. His case studies provide beautiful insights into how experiences of self and consciousness can break down and fractionate in ways that don’t seem apparent or even possible, unless or until they’ve actually happened.

Then there’s the stability of self over time—a sort of experiential temporal unity. This is something I go into in my book a bit more than these other books I reference—this sense of being the same person. I draw an analogy with the phenomenon of change blindness, which many people are familiar with in the context of visual perception. If a visual scene changes very, very slowly, and if you’re not focusing on the part that is changing, then you don’t experience the change at all. It seems like it’s a continuous, stable, conscious experience. Change of perception is not the same as perception of change. I think the same thing applies, in buckets, to experiences of being a continuous self. That is, I think there is a form of self-change blindness by which we perceive ourselves as changing less than we actually do, as being more stable and continuous over time than we actually are. And, further, that there are good reasons why things work that way in terms of the functions of self-related experiences in controlling and regulating the body. Of course, these days, with photographs and video, we have all sorts of ways of recognizing that self-change does happen more than it might seem to.

Let me get back to the question of what a self is, and to the basic premise that the self is not a single thing. I can unpack that a little more. The self is a collection of perceptions. There are low-level perceptions and experiences of being a body and being identified with this object in the world that is my body. There is the experience of having a first-person perspective on the world from which I seem to observe the world. There are experiences of agency and volition: I can experience myself as being the cause of actions, and as intending to do things. Then, building on top of this, finally comes the ‘I’—the sense of being a continuous individual over time with a name, an identity, and a set of memories, which are in turn shaped and sharpened by all sorts of cultural and social resonances. All these things are aspects of being a self. This, anyway, is how I’ve come to think of the self, and it was all sparked by reading Metzinger’s book. In my own book I go further into this, and develop these ideas from a more neuroscientific and prediction engine perspective (Metzinger was writing before predictive processing was a thing), but his book was the key starting point.

It is true that this book is ridiculously long, weighing in at 700 pages or so of dense philosophy. Metzinger has often said, in a self-deprecating, Germanic humour kind of way, that he regrets writing it, because its length might have put off many readers. In academic philosophy apparently there is a tradition that it’s the monograph that really counts. Perhaps if he’d written the same ideas as a series of short papers, they would have reached a wider audience. But Being No One is a major piece of work and its influence has been massive, both on me and on the field as a whole.

Myself, I found time to read it in between leaving the Neuroscience Institute in San Diego and coming back to Sussex, to start my faculty job. It was the first time I’d closely read analytic philosophy of this sort applied to the mind and the self, and it taught me a huge amount. I learned a lot of useful concepts, such as ideas about the transparency and opacity of mental representations, which have turned out to be particularly important.

Can you say a bit more about that?

In my understanding, there are some representations that we experience as being representations—and so are opaque—while there are other representations which we do not experience as being representations—in some sense we see through them, hence the term transparency. This distinction has a lot of relevance for how we connect representational speak to descriptions of phenomenology. For example, if you look at the sun and then look away, you will experience an afterimage. But you do not experience the afterimage as being a real thing out there in the world. In this case, your visual experience, insofar as it’s a representation, has the property of opacity. But when I look out of the window, and I see a red car across the street, I do not experience the perceptual predictions that underlie this experience as being representations—here, the process is transparent. I experience redness as a property of the car, not as a property of a representation of the car. Again, this goes back to Hume, who wrote beautifully about how the mind ‘spreads itself’ out into the world, so that we ‘gild and stain’ natural objects with properties derived from the mind. And of course, as Metzinger explained, the same distinctions apply to the self too. We experience the self as being in some sense real because the underlying perceptions are mediated transparently.

I hadn’t realised that the neuroscience of self is such an important area of study.

Yes, it really is, and people come at it from different angles. One challenge is that studying the self is a bit hard empirically. By comparison, visual perception is very easy to study. We can control the input very precisely, and we have many decades of elaborate psychophysics on which to build. Experimental studies of the self are a bit harder, but they can be done. It pays to be wary though. I’m thinking here of the famous rubber hand illusion, which reveals the malleability of experiences of body ownership and has often been used to support the argument that these experiences depend on multisensory integration. The reason I’m wary of it is because of work my colleagues and I have done which indicates that it might just be a suggestion effect.

Really?

Well the data are consistent with this interpretation, but there’s been a lot of debate. Let’s start with suggestibility, which is a stable aspect of selfhood, a psychological trait. People used to call it hypnotic suggestibility, but we—led by my colleagues Pete Lush and Zoltan Dienes—now call it ‘phenomenological control’. This trait reflects how suggestible people are to having experiences that they expect to have, such as those that might be suggested to them—either explicitly or implicitly—by an experimenter. In psychology this is all wrapped up with the issue of demand characteristics. Demand characteristics are those aspects of an experiment that may suggest to the participant—again, either explicitly or implicitly—to respond in a particular way. Typically, you want to avoid or control for demand characteristics when setting up experiments, and people who are highly suggestible may respond more strongly to demand characteristics.

What this means is that highly suggestible people might have experiences that you’re implicitly encouraging them to have, even though they need not be engaging in any kind of explicit response bias, or reactance, or anything of that sort. We suspect that this is what’s going on in the rubber hand illusion. People have the experiences that they expect to have. We don’t know if this is the only thing that’s going on—multisensory integration may play a role too—but there’s no good evidence for that yet.


In one experiment, led by Pete Lush, we found substantial correlations between individual suggestibility and strength of the rubber hand illusion. And we also know, from another of Pete’s experiments, that people have very strong expectations about what they should experience in the rubber hand illusion. Put these two things together, and it becomes immediately very concerning that the experiences people report in the rubber hand illusion may be suggestion effects rather than—or as well as—effects of multisensory integration, the standard explanation. I remember appealing to this standard explanation back in my 2017 TED talk—and I’d now say things differently, for sure.

What’s more, suggestion effects of this sort might apply very broadly across all sorts of experiments in psychology. So one of our big research programs now—this is with Pete Lush, Zoltan Dienes, Warrick Roseboom, Federico Micheli and others—is to map out this landscape and figure out both the problems—which experiments might have been confounded by demand characteristics and suggestibility in ways that were not appreciated—and the opportunities, because we can use suggestion effects to study experience. From this perspective, suggestion—phenomenological control—provides a way of getting people to have experiences through top-down influences.

For me, there is a complete bifurcation between things which are amenable to suggestion in the sense that your mental set can affect your sensory experience and things like the Müller-Lyer illusion (when two lines of the same length appear to be different lengths), which seem absolutely immune to any kind of knowledge affecting the experience.

Well I rather think it’s an empirically open question. Indeed, one of the plans we have is to go back even to the Müller-Lyer illusion and just ask: Does the strength of the illusion correlate with, firstly, people’s expectations about what they should report? Secondly, does it correlate with individual suggestibility? My guess is that it won’t, but the proper experiments have yet to be done, at least to my knowledge. Then you go one stage higher up. A PhD student, Federico Micheli, is looking at the McGurk effect. In this illusion, you see a face mouthing one syllable while listening to a different syllable, and the combined, integrated auditory percept is something else again. For example, if you listen to ‘ba’ while watching a mouth say ‘ga’ you’ll probably report hearing something like ‘da’. It’s very perceptually strong. A canonical example of multisensory integration, one might think. Well, we will see.
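The analysis Seth describes is simple to state: across participants, correlate reported illusion strength with trait suggestibility. A minimal sketch of the core computation, using entirely made-up scores (hypothetical data for illustration only, not results from any of the studies mentioned):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant scores: trait suggestibility
# ("phenomenological control") and reported illusion strength.
suggestibility = [2.1, 3.4, 1.0, 4.2, 2.8, 3.9]
illusion = [1.5, 2.9, 0.8, 3.6, 2.2, 3.1]

r = pearson_r(suggestibility, illusion)
```

A strong positive correlation, as in this toy case, is what would raise the suggestion-effect worry; a near-zero correlation for an illusion like the Müller-Lyer would support the view that it is immune to expectations.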

Really fascinating. Reminiscent of the final chapter of Orwell’s Nineteen Eighty-Four.

I want to say one more thing on Being No One. It really is a book worth making time for, if you can. Metzinger has also written a trade book called The Ego Tunnel, which is a shorter, more accessible account of the same core ideas. It’s from 2009, so it’s been around a while, but it’s a really wonderful book. Another book I wanted to mention in this context is a fairly obvious choice, Antonio Damasio’s Descartes’ Error, which is a foundational book about the self, in particular because it reconnects emotion with rationality and the body. It was the first book that I read that reminded me that the body is not just a robot that takes the brain from meeting to meeting. It’s deeply involved in our cognition and experience. Then, finally, there is Lisa Feldman Barrett’s How Emotions are Made, which is very recent and very up to date. It’s a beautiful exposition of ideas that she and I have developed somewhat independently, thinking about emotions as a certain, specific variety of perceptual prediction—of inference about the state of the body in context.

So your final choice is quite different from everything so far, in that it’s a novel, and a very recent one. This is Klara and the Sun. My son’s just reading it. He says it’s good (and he’s a tough critic).

I am a huge fan of Kazuo Ishiguro. I loved his Never Let Me Go, which I thought was beautifully delicate and raw and powerful. Ishiguro is one of these novelists who is writing science fiction, but it’s not science fiction as we normally encounter it. It’s not the invention of a completely different world. Instead he takes one or maybe two conceits, and then explores their consequences in a setting that is relevant and immediately relatable to normal life. His writing is always so clear, spare, and vivid—and Klara and the Sun is no exception.

The basic premise of the book—I promise I won’t spoil it—is the idea of an ‘artificial friend’. This takes off from ideas emerging in AI that there will come a point soon when we can build robots or artificial virtual systems that seem to be conscious, and that may in fact be conscious. This is an old idea, of course, that’s been explored in many previous books and films. Ex Machina by Alex Garland, for example, is one of the best. There’s also Blade Runner. You can go back in history to wherever you want to go, really. There is Rossum’s Universal Robots by Karel Čapek, in which the term ‘robot’ was coined. Then back to Jacques de Vaucanson and his automata in the 18th century, these lifelike systems…

All the way back to the Ancient Greeks, the myth of Pygmalion, and the statues of Aphrodite that people interacted with and left stains on…

Absolutely, though I was thinking of the myth of the golem. In any case, Klara and the Sun takes us straight to a coexisting reality, or maybe a near future, in which there are systems you can buy in shops that serve as artificial friends for children. What’s beautiful about this book is that we see the world primarily from the perspective of the artificial friend herself. We also get acquainted with other perspectives, but the central character is the AF, Klara. Ishiguro succeeds triumphantly in conveying how such a system might come to terms with its world in ways that are sometimes very surprising, very emotionally affecting, and which have the worthwhile effect of prompting all sorts of thoughts about the consequences of developing these kinds of technologies. As the story unfolds, the emotional heft of Klara’s role within the family that eventually brings her into service becomes deeper and deeper. We are confronted with issues about how we treat others different from ourselves, AF or not, the assumptions we make, whether they’re justified, and what it does to our minds when we interact with systems whose moral, ethical, and conscious status is somehow ambiguous. The film and TV series Westworld does this too, in a more dramatic way, by concocting scenarios in which people are encouraged to express devastatingly horrible instincts of murder and rape and so on without consequence, which of course screws them up badly. Klara and the Sun is, characteristically of Ishiguro, a much more restrained, delicate, but ultimately, I think, for that reason, much more profound exploration of some of these issues.

I wanted to include a novel in my five books because we talked about a number of different disciplines for understanding consciousness, focusing mainly on philosophy and the sciences. But literature is hugely important too, because literature has always been about understanding what it’s like to be a person, what it’s like to have a stream of thought, to be a human, to be an individual, or to be another individual. And that’s a key part of the story. Klara and the Sun is an absolutely remarkable example of how to develop this understanding. In ways I don’t yet know, it will no doubt make me think about my own work differently.

Interview by Nigel Warburton

September 11, 2021

Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at [email protected]

Anil Seth

Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex. He is Co-Director both of the Sackler Centre for Consciousness Science and the CIFAR Program in Brain, Mind, and Consciousness. He is also Editor-in-Chief of the Neuroscience of Consciousness and Engagement Fellow for the Wellcome Trust.
