What do you understand by ‘the philosophy of mind,’ and how does that relate to psychology?
Philosophy of mind is the study of the mind, the part of us that thinks and feels, perceives and wills, imagines and dreams. It asks what the mind is, how it works, what its powers are, and how it’s related to the body and to the rest of the world. This all relates to psychology because there is a continuity of subject matter. Philosophers of mind think about the same things psychologists think about – the nature of thought, perception, emotion, volition, consciousness, and so on. In the past – if you look at David Hume or Thomas Reid in the 18th century, for example – there was no distinction between philosophy and psychology. Psychology split off from philosophy in the 19th century, when people started to develop experimental ways of studying the mind, like the techniques used in other areas of science. So, the detailed experimental investigation of the mind is now the province of psychology and the neurosciences. But, despite this, there is still a lot of work for philosophers of mind to do.
What’s special about the questions philosophers of mind ask is that they are more fundamental and more general than the ones psychologists ask. There are different aspects to this. For one thing, philosophers think about the metaphysics of mind. What kinds of things are minds and mental states? Are they physical things, which can be explained in standard scientific ways? (The view that they are is known as physicalism or materialism.) Or are minds wholly or partly non-physical? These are questions about the limits of psychology rather than questions within psychology.
Philosophers of mind also think about conceptual issues. Take the question of whether we have free will. We might be able to do some relevant scientific experiments. But to answer the question we also need to understand what we mean by ‘free will’. What exactly are we claiming when we say that we do, or do not, have free will? What kind of experiments would settle the matter? Do we have a coherent concept of free will, or does our everyday talk about it conflate different things? We can ask similar questions about other mental concepts, such as those of perception, belief, or emotion. Many philosophers see this kind of work as articulating an everyday theory of the mind – ‘folk psychology’ – and they go on to ask how this everyday theory relates to scientific psychology. Do the two approaches conflict or are they compatible? In part, this is a contrast between the first-person view we have as possessors of minds – the view from the inside, as it were – and the third-person view of scientists studying the minds of other people. Are the two views compatible? Could science correct our first-person picture of our own minds?
That’s not all. Many contemporary philosophers do work that is continuous with scientific psychology. They rarely do experimental work themselves, but they read a lot of it and contribute to psychological theorising. One way they do this is by thinking about the concepts used in scientific psychology – concepts such as mental representation, information, and consciousness – and helping to clarify and refine them. Their aim is not just to analyse the concepts we already have, but to think about what concepts we need for scientific purposes. (I like to think of this activity as conceptual engineering, as opposed to traditional conceptual analysis.) Philosophers of mind also increasingly engage in substantive psychological theorising, trying to synthesise experimental results and paint a big theoretical picture – of, for example, the nature of conscious thought, the architecture of the mind, or the role of bodily processes in cognition. Broad theoretical speculation like this is something that experimental psychologists are often wary of doing, but it’s an important activity, and philosophers have a licence to speculate.
It strikes me that the best philosophy of mind has re-joined psychology and particularly neuroscience. In some ways we're much closer to the kind of interdisciplinary study that was going on in the 18th century than to 1950s Oxford philosophy, which is easily caricatured as a bunch of dons splitting hairs in the comfort of their ivory-tower armchairs, neither using examples informed by the latest science nor seeing any deficiency in their ignorance of contemporary psychology. Whereas now you couldn't really be a serious philosopher of mind without immersing yourself in neuroscience and the best contemporary psychology.
Yes. The modern study of mind – cognitive science – is a cross-disciplinary one, and many philosophers contribute to it without worrying too much whether they are doing philosophy or science. They just bring the tools they have to this joint enterprise. That isn’t to dismiss old-fashioned conceptual analysis. It’s interesting to reflect on how we intuitively conceptualise the mind and how our minds seem to us from the inside – but in the end these are just psychological facts about us. We mustn’t assume that our intuitive picture of the mind is correct. If we want to understand the mind as it really is, then we must go beyond armchair reflection and engage with the science of the mind and brain.
This idea actually leads into your first book choice, because one of the dominant ways of thinking about the mind, within neuroscience and within philosophy, is as a material thing, in the sense of its being intimately connected with the brain. Your first book is David Armstrong’s A Materialist Theory of the Mind. Tell us a little about why you chose this.
It’s a classic work, which helped to establish the foundations for contemporary philosophy of mind. It’s a sort of bridge between the armchair philosophy of mind you mentioned (Armstrong studied at Oxford in the early 1950s) and the later more scientifically oriented approach I was talking about, and it sets the scene for a lot of what was to follow over the next quarter of a century. (In the 1993 reprint Armstrong added a preface discussing what he thought he’d missed in the original; it’s not a huge amount.) The book also functions as a good introduction for anyone new to philosophy of mind because Armstrong begins with a survey of different views of the metaphysics of mind, including Cartesian dualism – the idea that we have an immaterial soul that is completely distinct from the body – and other important theories, such as behaviourism, the view associated with Gilbert Ryle.
Armstrong clearly rejects what Ryle calls ‘the myth of the ghost in the machine’ – the Cartesian dualist theory that there are two types of stuff, one material and one immaterial, and that the mind is an immaterial soul that interacts with the material body. Armstrong’s rejection is implicit, obviously, in the title of his book. Armstrong is presenting a materialist theory, so he clearly stands in opposition to Cartesianism. But where does he stand in terms of behaviourism?
Behaviourism is itself a materialist view, in that it denies that minds are immaterial things. In fact, behaviourists deny that minds are things at all. They argue that when we talk about a person’s mind or mental state we’re not talking about a thing inside the person, but about how the person is disposed to behave. So, for example, to have a sudden pain in your knee is to become disposed to wince, cry out, rub your knee, complain, and so on. Or (to take an example Ryle himself uses) to believe that the ice on a pond is thin is to be disposed to warn people about the ice, be careful when skating on the ice, and so on – the exact actions depending on the circumstances.
Armstrong is quite sympathetic to behaviourism and he explains its advantages over Cartesian dualism and other views. He sees his own view as a natural step on from behaviourism. He agrees with Ryle that there is a very close connection between being in a certain mental state and being disposed to behave in certain ways, but instead of saying that the mental state is the disposition to display a certain pattern of behaviour, he says it is the brain state that causes us to display that pattern of behaviour. A pain in the knee is the brain state that tends to cause wincing, crying out, knee rubbing, and so on. The belief that the ice is thin is the brain state that tends to cause giving warnings, skating with care, and so on. The idea is that there is some specific brain state (the activation of a certain bunch of nerve fibres) that tends to produce the relevant cluster of actions, and that this brain state is the mental state – the pain or the belief, or whatever. Armstrong’s slogan is that mental states are ‘states of the person that are apt for the bringing about of behaviour of a certain sort’. So the mind turns out to be the same thing as the brain or the central nervous system. Armstrong calls this view central-state theory. It’s also known as the mind-brain identity theory or central-state materialism.
Armstrong was Australian, and it’s remarkable to me that for a country with a relatively modest population Australia has produced some of the foremost philosophers of mind in the recent history of the subject.
Yes, Australian philosophers played a central role in developing the mind-brain identity theory – not only Armstrong, but also J J C Smart and U T Place (Smart and Place were both British, but Smart moved to Australia and Place lectured there for some years). Indeed, identity theory was sometimes referred to as Australian materialism – sometimes with the (unwarranted) implication that it was an unsophisticated view. Australia has continued to produce important philosophers of mind – Frank Jackson and David Chalmers, for example, though those two have been critical of materialism.
So to be clear, Armstrong is presenting a theory where the mind is the brain explained in terms of its causal powers. How is that argument presented?
It’s in three parts. In the first part of the book, Armstrong makes a general case for the view that mental states are brain states (the central-state theory). He sets out the view’s advantages – for example, in explaining what distinguishes one mind from another, how minds interact with bodies, and how minds come into being. Then in the second part – which takes up most of the book – he shows how this view could be true, how mental states could be nothing more than brain states. He surveys a wide range of different mental states and processes and argues that they can all be analysed in causal terms – in terms of the behaviour they tend to cause, and also, in some cases, the things that cause them. So when we talk about someone willing, or believing, or perceiving, or whatever, we can translate that into talk about causal processes, about there being an internal state that was caused in a certain way and tends to have certain effects. These analyses are very detailed and often illuminating, and they go a long way towards demystifying the mind. Armstrong shows how mental phenomena that may initially seem mysterious and inexplicable can be naturally understood as complex but unmysterious causal processes.
What then turns that explanation in terms of cause and effect into a materialist theory?
Well, the causal analysis shows that mental states are just states that have certain causes and effects – that play a certain causal role. That doesn’t establish that they are brain states. They could be states of an immaterial soul. But it shows that they could be brain states. And putting that together with the general case for mind-brain identity made in the first part of the book, it’s reasonable to conclude that they are in fact brain states. There’s a short third part to the book in which Armstrong argues that there is no reason to think that brain states couldn’t play the right causal roles, and therefore concludes that central-state theory is true.
Your first book was published in 1968 and obviously there’s been a lot of thought about the nature of the mind since then. The second book you’ve chosen, Daniel Dennett’s confidently titled Consciousness Explained, published in 1991, is another classic. But Dennett isn’t really satisfied with the kind of account Armstrong gives, would that be fair to say?
Well, Dennett is more wary of identifying mental states with brain states. It’s not that he thinks there’s anything nonphysical about the mind – far from it, he’s a committed physicalist. But he doubts that our everyday talk of mental states will map neatly onto scientific talk about brain states – that for every mental state a person has there will be a discrete brain state that causes all the associated behaviour. He sees folk psychology as picking out patterns in people’s behaviour, rather than internal states. (So his view is closer to that of Ryle, with whom he studied in the early 1960s.) That’s a large theme in his work. But in this book he’s addressing a different issue. In the years after Armstrong wrote, the idea that mental states are brain states became widely accepted, though it was tweaked in various ways. But some people argued that the view couldn’t explain all the features of mental states – in particular, consciousness. These people agreed with Armstrong that the mind is a physical thing, but they argued that it’s a physical thing with some non-physical properties – properties that can’t be explained in physical terms. This view is known as property dualism (as opposed to substance, or Cartesian, dualism, which holds that the mind is a non-physical thing).
In simple terms what is the phenomenon that needs explaining that you’ve labelled ‘consciousness’?
There’s a standard story about what consciousness is. When you’re having an experience – let’s say, seeing a blue sky – there’s brain activity going on. Nerve impulses from your retinas travel to your brain and produce a certain brain state, which in turn produces certain effects (it produces the belief that the sky is blue, disposes you to say that the sky is blue, and so on). This is the familiar story from Armstrong. And in principle a neuroscientist could identify that brain state and tell you all about it. But – the story goes – there’s something else going on too. It is like something for you to see the blue sky – the experience has a subjective quality, a phenomenal feel, a quale (from the Latin word ‘qualis’, meaning of what kind; the plural is ‘qualia’). And this subjective quality is something that the neuroscientists couldn’t detect. Only you know what it’s like for you to see blue (maybe blue things look different to other people). The same goes for all other sense experiences. There’s an inner world of qualia – of colours and smells and tastes, pains and pleasures and tickles – which we experience like a show in a private inner theatre. Now if you think about consciousness this way, then it seems incredibly mysterious. How could the brain – a spongy, pinky-grey mass of nerve cells – create this inner qualia show that’s undetectable by scientific methods? This is what David Chalmers has called the hard problem of consciousness.
Dennett’s title Consciousness Explained suggests he believes he has an answer to that problem…
Not an answer to the hard problem exactly. It’s more that he thinks it’s a pseudo-problem. He thinks that that whole picture of consciousness is wrong – there’s no inner theatre and no qualia to be displayed there. Dennett thinks that that picture is a relic of Cartesian dualism, and he calls the supposed inner theatre the Cartesian Theatre. We used to think there really was an inner observer – the immaterial soul. Descartes thought that signals from the sense organs were channelled to the pineal gland in the centre of the brain, from where they were somehow transmitted to the soul. Nowadays few philosophers believe in the soul, but Dennett thinks they still hang on to the idea that there’s a sort of arena in the brain where sensory information is assembled and presented for consciousness. He calls this view Cartesian materialism, and he thinks it’s deeply misconceived. Once we give up Cartesian dualism and accept that mental processes are just hugely complex patterns of neural activity, then we must give up the picture of consciousness that went with it. You’ve got to break down this idea of the inner show standing between us and the world. There’s no need for the brain to recreate an image of the external world for the benefit of some internal observer. It’s a kind of illusion.
How then does Dennett explain consciousness? Because that just sounds like a machine.
I think Dennett would say that’s exactly what it should sound like – after all, if materialism is true, then we are machines, biological machines, made from physical materials. If you’re going to explain consciousness, then you need to show how it is made out of things that aren’t conscious. The 17th century philosopher Gottfried Leibniz said that if you could blow up the brain to the size of a building and walk around it, you wouldn’t see anything there that corresponded to thinking and experience. That can be seen as a problem for materialism, but in fact it’s just what materialism claims. The materialist says that consciousness isn’t something extra, over and above the various brain systems; it’s just the cumulative effect of those systems working as they do. And Dennett thinks that one of the effects of those brain systems is to create in us the sense that we have this inner world. It seems to us when we reflect on our experiences that there is an inner show, but that is an illusion. Dennett’s aim in the book is to break down that illusion, and he uses a variety of thought experiments to do so.
By a thought experiment, you mean an imaginary situation used to clarify our thinking?
Yes, that’s right – though Dennett’s thought experiments often draw on scientific findings. Here’s one he uses in the book. You see a woman jog past. She is not wearing glasses, but she reminds you of someone who does, and that memory immediately contaminates your memory of the running woman so that you become convinced she was wearing glasses. Now Dennett asks how this memory contamination affected your conscious experience. Did the contamination happen post-consciousness, so that you had a conscious experience of the woman without glasses, and then the memory of this experience was wiped and replaced with a false memory of her with glasses? Or did it happen pre-consciousness, so that your brain constructed a false conscious experience of her as having glasses? If there were a Cartesian Theatre, then there should be a fact of the matter: which scene was displayed in the theatre – with glasses or without them? But Dennett argues that, given the short timescale in which all of this happened, there won’t be a fact of the matter. Neuroscience couldn’t tell us.
Suppose we were monitoring your brain as the woman passed and found that your brain detected the presence of a woman without glasses before it activated the memory of the other woman with glasses. That still wouldn’t prove that you had a conscious experience of a woman without glasses, since the detection might have been made non-consciously. Nor would asking you have settled it. Suppose that as the woman passed we had asked you whether she was wearing glasses. If we had put the question at one moment you might have said she wasn’t, but if we’d asked it a fraction of a second later you might have said she was. Which report would have caught the content of your consciousness? We can’t tell – and neither could you. All we – or you – can really be sure of is what you sincerely think you saw, and that depends on the precise timing of the question. The book is packed with thought experiments like this, all designed to undermine the intuitive but misleading picture of the Cartesian Theatre.
If you had to characterise Dennett’s position, and some people find it quite difficult to pin down what his actual position is, what is it? It’d be really useful to know what you think Dennett believes about the nature of the mind.
The first thing to stress is that he’s not trying to provide a theory of consciousness in the qualia-show sense, since he thinks that consciousness in that sense is an illusion. Some critics say that Dennett should have called his book ‘Consciousness Explained Away’, and up to a point they are right. He is trying to explain away consciousness in that sense. He thinks that that conception of consciousness is confused and unhelpful, and his aim is to persuade us to adopt a different one. In this respect Dennett’s book is a kind of philosophical therapy. He’s trying to help us give up a bad way of thinking, into which we easily lapse.
As for what we put in place of the Cartesian Theatre, there are two main parts to Dennett’s story. The first is what he calls the ‘Multiple Drafts’ model of consciousness. This is the idea that there isn’t one canonical version of experience. The brain is continually constructing multiple interpretations of sensory stimuli (woman without glasses, woman with glasses), like multiple drafts of an essay, which circulate and compete for control of speech and other behaviour. Which version we report will depend on exactly when we are questioned – on which version has most influence at that moment. In a later book Dennett speaks of consciousness as fame in the brain. The idea is that those interpretations that are conscious are those that get a lot of influence over other brain processes – that become neurally famous. This may seem a rather vague account, but again I think Dennett would say that that’s how it should seem, since consciousness itself is vague. It isn’t a matter of an inner light being on or off, or of a show playing or not playing.
The second part of Dennett’s story is his account of conscious thought – the stream of consciousness that James Joyce depicted in his novel Ulysses. Dennett argues that this isn’t really a brain system at all; it’s a product of a certain activity we humans engage in. We actively stimulate our own cognitive systems, mainly by talking to ourselves in inner speech. This creates what Dennett calls the Joycean Machine – a sort of program running on the biological brain, which has all kinds of useful effects.
But is there any way of deciding empirically or conceptually between the Cartesian Theatre view and Dennett’s view? Is it just whichever gives the best explanation?
Dennett thinks there are both conceptual and empirical reasons for preferring the Multiple Drafts view. He thinks the idea of a qualia show contains all sorts of confusions and inconsistencies – that’s what the thought experiments are designed to tease out. But he also cites a lot of scientific evidence in support of the Multiple Drafts view – for example, concerning how the brain represents time. And he certainly thinks his view offers a better explanation of our behaviour, including our intuitions about consciousness. Positing a private undetectable qualia show doesn’t explain anything. Of course, Dennett’s views are controversial, and there are many important philosophers who take a very different view – most notably David Chalmers in his 1996 book The Conscious Mind. But for my money Dennett’s line on this is the right one, and I think time will bear that out.
What about your third book, Ruth Millikan’s Varieties of Meaning? I’m unfamiliar with this book.
I chose it to represent another important strand of contemporary philosophy of mind, and that’s work on mental representation. Mental states – thoughts, perceptions, and so on – are ‘about’ things out in the world, and they can be true or false, accurate or inaccurate. For example, I was just thinking about my car, thinking that it is parked outside. Philosophers call this property of aboutness intentionality, and they say that what a mental state is about is its intentional content. Like consciousness, intentionality poses a problem for materialist theories. If mental states are brain states, how do they come to have intentional content? How can a brain state be about something, and how can it be true or false? Many materialists think the answer involves positing mental representations. We’re familiar with physical things that are representations of other things – words and pictures, for example. And the idea is that some brain states are representations, perhaps like sentences in a brain language (‘Mentalese’). Then the next question is how brain states can be representations. A lot of work in contemporary philosophy of mind has been devoted to this task of building a theory of mental representation. There are many books on this topic I could have chosen – by Fred Dretske, for example, or Jerry Fodor. But Ruth Millikan’s work on this is, in my view, some of the best and most profound, and this book, which is based on a series of lectures she gave in 2002, is a good introduction to her views.
Is this the same as meaning? How do mental representations of some kind acquire meaning for us?
Yes, the problem is how mental representations come to mean, or signify, or stand for, things. If there’s a brain language, how do words and sentences of that language get their meaning? As the title indicates, Millikan thinks there are many varieties of meaning. To begin with, she argues that there is a natural form of meaning which is the foundation of it all. We say that dark clouds mean rain, that tracks on the ground mean that pheasants have been there, that geese flying south mean that winter is coming, and so on. There is a reliable connection, or mapping, between occurrences of the two things, which makes the first a sign of the second. You can get information about the second from the first. Millikan calls these natural signs. Other philosophers, including Paul Grice and Fred Dretske, have discussed natural meaning like this, but Millikan’s account of it improves on previous work in various ways, and I think it’s the best around. So this is one basic form of meaning, but it is limited. One thing is a sign of another – carries information about it – only if the other thing really is there. Clouds mean rain only if rain is actually coming. Tracks mean pheasants only if they were made by pheasants, and so on. So natural signs, unlike our thoughts and perceptions, cannot be false, cannot misrepresent.
So mental representations are different from natural signs?
Yes, they are what Millikan calls intentional signs. But normally they are natural signs too. Roughly speaking (Millikan’s account is very subtle and I’m cutting corners), an intentional sign is a sign that is used with the purpose of conveying some information to a recipient. Take a sentence of English, rather than a mental representation. (Sentences of human language are also intentional signs, as are animal calls.) Take ‘Rain is coming’. We say this with the purpose of alerting someone to the fact that rain is coming, and we can do this successfully only if rain is coming. (I can’t alert you to the fact that rain is coming if it’s not.) So if we succeed in our purpose, the sentence we produce will be a natural sign that rain is coming, just as dark clouds are. There’s a reliable connection between the two things. Now if we utter the sentence in error, when rain isn’t coming, then of course it won’t be a natural sign that rain is coming. However, it will still be an intentional sign that rain is coming in virtue of the fact that we used it with the purpose of signifying to someone that rain is coming. (Millikan argues that intentional signs are always designed for some recipient or consumer.) Roughly, then, an intentional sign of something is a sign whose purpose is to be a natural sign of it.
But how then can mental representations have meaning? We don’t use them for a purpose.
No, but our brains do. Millikan has a thoroughly evolutionary approach to the mind. Evolution has built biological mechanisms to do certain things – to have certain purposes or functions. (This doesn’t mean that evolution had intentions and intelligence, just that the mechanisms were naturally selected for because they did these things, rather than because of other things they did.) And the idea is that the mind is composed of a vast array of systems designed to perform specific tasks – detecting features of the world, interpreting them, reacting to them, and selecting actions to perform. These systems pass information to each other using representations which are designed to serve as natural signs of certain things – and which are thus intentional signs of those things. In very general terms, then, the view is that mental representations derive their meaning from the purposes with which they are used. This sort of view is called a teleological theory of meaning. (‘Teleological’ comes from the Greek word ‘telos’, meaning purpose or end.)
What about non-human animals? Does Millikan have a view on them?
Oh yes. As I said, Millikan takes an evolutionary approach to the mind. She thinks that in order to understand how our minds represent things we need to look at the evolution of mental representation, and she devotes a whole section of the book to this, with lots of information about animal psychology and fascinating observations of animal behaviour. Millikan thinks that the basic kind of intentional signs are what she calls pushmi-pullyu signs, which simultaneously represent what is happening and how to react to it. An example is the rabbit-thump. When a rabbit thumps its hind foot, this signals to other rabbits both that danger is present and that they should take cover. The sign is both descriptive and directive, and if used successfully, it will be a natural sign both of what is happening now and of what will happen next. Millikan thinks that the bulk of mental representations are of this kind; they represent both what is happening and what response to make. This enables creatures to take advantage of opportunities for purposive action as they present themselves. But creatures whose minds have only pushmi-pullyu representations are limited in their abilities – they can’t think ahead, can’t check they have reached their goals, and can get trapped in behavioural loops.
Millikan argues that more sophisticated behavioural control requires splitting off the descriptive and directive roles, so that the creature has separate representations of objects and of its goals, expressed in a common mental code, and she devotes two chapters of the book to exploring how this might have happened. Finally, she argues that even with these separate representations, non-human animals are still limited in what they can represent. They can only represent things that have practical significance for them – things relevant in some way to their needs. We, on the other hand, can represent things that have no practical value for us. We can think about distant times and places, and about things we’ll never need or encounter. Millikan describes us as collectors of ‘representational junk’ – though, of course, it’s this collecting of theoretical knowledge that enables us to do science and history and philosophy and so on. To represent this kind of theoretical information, Millikan argues, a new representational medium was needed with a certain kind of structure, and she thinks that this was provided by language. It is language that has enabled us to collect representational junk and do all the wonderful things we do with it.
Does Millikan discuss language and linguistic meaning too?
Yes. In fact, there’s another section of the book on what she calls ‘outer intentional signs’ (animal calls and linguistic signs). Millikan argues that linguistic signs emerge from natural signs and that they are normally read in exactly the same way as natural signs. We read the word ‘pheasant’ as we read pheasant tracks on the ground, as a natural sign of pheasants. We don’t need to think about what the speaker intended or had in mind. This view has some surprising consequences, which Millikan traces out. One of them is that we can directly perceive things through language. When we hear someone say ‘Johnny’s arrived’, we perceive Johnny just as if we were to hear his voice or see his face, Millikan argues. The idea is that the words are a natural sign of Johnny just as the sound of his voice or the pattern of light reflected from his face is. They are all just ways of picking up information about Johnny’s whereabouts. Of course, there is processing involved in getting from the sound of the words to a belief about Johnny, but Millikan argues that the processes involved are not fundamentally different from those involved in sense perception. It’s a controversial view, but it fits in with the wider views about perception and language that she develops.
I should perhaps say that this isn’t an easy book. Millikan writes clearly, but the discussion is complex and subtle. You’ll have to work at it, especially if you’re new to the subject, and you may need to re-read the book several times. But it repays the effort. It’s packed full of insights, and you’ll come away with a much deeper understanding of how our minds latch onto the world.
Now let’s move on to the fourth book, The Architecture of Mind by Peter Carruthers. This is a book with a different approach to the mind?
To some extent. It’s a work of substantive psychological theorising. Carruthers makes the case for the thesis of massive modularity – the view that the mind is composed of numerous separate subsystems, or modules, each of which has a specialised function. This view has been popular with people working in evolutionary psychology, since it explains how the human mind could have developed from simpler precursors by adding or repurposing specific modules. Carruthers argues that this view offers the best explanation of a host of experimental data.
And why did you choose this particular book?
First, it’s an excellent example of what philosophy can contribute to psychology. Carruthers surveys a huge range of scientific work from across the cognitive sciences and fits it together into a big picture. As I said, this is something experimental psychologists are often wary of doing, because it means going beyond their own particular area of expertise. Second, the massive modularity thesis is an important one, and Carruthers’s version of it is the most detailed and persuasive one I’ve met. Third, because of the way Carruthers argues for his views, drawing on masses of empirical data from neuroscience, cognitive psychology, and social psychology, it is a very informative work. Even if you completely disagree with Carruthers’s conclusions, you will learn a vast amount from this book.
What exactly does Carruthers mean by a mental ‘module’?
This notion of a mental module was made famous by Jerry Fodor in his 1983 book, The Modularity of Mind. As I said, a module is a specialist system for performing some specific task – say, for processing visual information. Fodor had a strict conception of what a module was. In particular, he thought of modules as encapsulated – they couldn’t draw on information from other cognitive systems, except for certain specific inputs. Fodor thought that sensory processes were modular in this way, but he denied that central, conceptual processes were – processes of belief formation, reasoning, decision making and so on. Indeed, he couldn’t see how these processes could possibly be modular, since in order to make judgements and decisions we need to draw on information from a variety of sources. Obviously, if the mind is massively modular, then it can’t be so in Fodor’s sense, and Carruthers proposes a looser definition which, among other things, drops the claim that modules can’t share information. He argues that evolution equipped animals with numerous modules like this, each dedicated to a specific task that was important for survival. There are suites of these modules, he thinks: learning modules, for forming beliefs about direction, time, number, food availability, social relations, and other topics; motivational modules, for generating different kinds of desire, emotion, and social motivation; memory modules for storing different kinds of information, and so on. He argues that the human mind has these modules too, together with various additional ones, including a language module and modules for reasoning about people’s minds, living things, physical objects, and social norms.
What is the argument for thinking that the mind is massively modular in this way?
Carruthers has several arguments. One is evolutionary. This is how complex systems evolve. Nature builds them bit by bit from simpler components, which can be modified without disrupting the whole system. This is true of genes, cells, organs, and whole organisms, and we should expect it to be true of minds too. Another argument is from animals. Carruthers argues that the minds of non-human animals are modular, and since our minds evolved from such minds, they will have retained their basic modular structure, with various new modules added on. A third argument turns on considerations of computability. Carruthers argues that the mind is a computational system; it works by manipulating symbols in something like a language of thought. And for these computations to be tractable, they can’t be done by a general system that draws on all potentially relevant information. It would just take too long. Instead, there must be specialised computational systems – modules – that each access only a limited amount of the information available in the wider system. This doesn’t mean that the modules can’t share information, just that they don’t share much of it. Of course, these are arguments only for the general principle of massive modularity; the arguments for the existence of the specific modules come later in the book.
But if our minds are collections of modules designed to deal with specific survival problems, how do we manage to do so many other things? I assume evolution didn’t equip us with modules for doing science, or making art, or playing football.
This is the big challenge for the massive modularity view. How can a collection of specialist modules support flexible, creative, and scientific thinking of the kind we are capable of? We can think about things that are not of immediate practical importance, we can combine concepts from different domains, and we learn to think in new and creative ways. How can we do this if our minds are modular? Carruthers devotes a lot of the book to answering this challenge in its various forms. It’s a long story, but the core idea is that these abilities involve co-opting systems that originally evolved for other purposes. Language plays a crucial role in the story, since it can combine outputs from different modules, and Carruthers argues that flexible and creative thinking involves rehearsing utterances and other actions in imagination, using mechanisms that originally developed for guiding action. (You’ll notice that this picks up a theme from Dennett and Millikan – that language is key to the distinctive powers of the human mind.) Carruthers thinks that we are conscious of things we mentally rehearse, so this is at the same time an account of the nature of conscious thought. It’s a very attractive account in its own right – another reason to read the book – and you might endorse it even if you are sceptical of the modular picture that goes with it. Carruthers has developed his account of conscious thought further in his most recent book The Centred Mind.
Doesn’t Carruthers’s story about modules sound a little speculative? It’s not as though we can open up the brain and view the modular systems. Are there any empirical consequences of this kind of theorising?
The modules might not be evident from the anatomy. Carruthers isn’t claiming that each module is localised to a specific brain region. A module might be spread out across several regions, as the circulatory system is spread out across the body. But the modular theory should generate many testable predictions. For example, we should find distinctive patterns of response under experimental conditions (say, when a task places heavy demands on one module but not on another), distinctive kinds of breakdown (as when a stroke damages one module but leaves others intact), and distinctive patterns of activation in neuroimaging studies. What Carruthers is doing is setting out a research programme for cognitive science, and it’s only by pursuing the programme that we’ll find out whether it’s a good one. Does the programme lead us to new insights and new discoveries? This is very far from armchair conceptual analysis.
And finally, what did you choose for your last book?
Andy Clark’s Supersizing the Mind. It’s about how the mind is embodied and extended. Clark is a fascinating philosopher, and he’s always been a bit ahead of the field. He’s played the role of alerting philosophers to the latest developments in cognitive science and AI, such as connectionism, dynamical systems theory, and predictive coding. If you want to know what philosophers of mind will be thinking about in five or ten years’ time, look at what Andy Clark is thinking about today.
To me, Andy Clark’s theory of the extended mind is fascinating because it’s an example of a philosopher who, a bit like Dennett, causes us to rethink something we thought we understood. It’s also a very attractive picture that he presents, of the way in which things we might not have thought of as parts of our mind really are parts of our mind.
Yes. One way to think of it is in terms of a contrast between two models of the mind. Both are physicalist, but they differ as to the range of physical processes that make up the mind. One is what Clark calls the Brainbound model. This sees the mind confined to the brain, sealed away in the skull. It is the view that Armstrong has – it’s in the name ‘central-state materialism’, where ‘central’ means the central nervous system. In this model, the brain does all the processing work and the body has an ancillary role, sending sensory data to the brain and receiving the brain’s commands. This means that there’s a lot of work for the brain to do. It needs to model the external world in great detail and calculate precisely how to move the body in order to achieve its goals. This contrasts with what Clark calls the ‘Extended Model.’ This sees mental processes as involving the wider body and external artefacts. One aspect of this concerns the role of the body in cognition. The brain can offload some of the work onto the body. For example, our bodies are designed to do some things automatically, in virtue of their structure and dynamics. Walking is an example. So the brain doesn’t need to issue detailed muscular commands for these activities but can just monitor and tweak the process as it unfolds. Another example is that instead of constructing a detailed internal model of the world, the brain can simply probe the world with the sense organs as and when it needs information – using the world as its own model, as the roboticist Rodney Brooks puts it. So the work of controlling behaviour is not all done in the head but involves interaction and feedback between brain and body. Clark lists many examples of this, with data from psychology, neuroscience, and robotics.
There’s a more familiar element of this theory that suggests storage outside the brain could potentially be part of the mind, which is a fascinating idea.
Yes, that’s the other aspect of the Extended model. Mental processes don’t only involve the body but can also extend out into external objects and artefacts. This was an idea made famous by a 1998 article ‘The extended mind’, which Clark co-wrote with David Chalmers and which is included in the book. (Chalmers also contributed a foreword to the book, giving his later thoughts on the topic.) The argument involves what’s called the Parity Principle. This is the claim that if an external object performs a certain function that we’d regard as a mental function if it were performed by a bit of the brain, then that external object is a part of your mind. It’s what a thing does that matters, not where it’s located. Take memory. Our memories store our beliefs (for example, about names or appointments), which we can access as needed to guide our behaviour. Now suppose someone has a memory impairment, and they write down bits of information in a notebook which they carry around with them and consult regularly. Then the notebook is functioning like their memory used to, and the bits of information in it function as beliefs. So, the argument goes, we should think of the notebook as literally part of the person’s mind and its contents as among their mental states. This view may seem counterintuitive, but it isn’t all that far from where we started with Armstrong and the claim that mental states can be defined in terms of their causal roles – what work they do within the mind/brain system. The new claim is just that these causal roles can be played by things outside the brain. It also fits in nicely with Carruthers’s massive modularity. If the brain is itself composed of modules, then why couldn’t there be further modules or subsystems external to the brain? These external modules would need to have interfaces with the brain, of course – in the case of the notepad this would be through the person’s eyes and fingers. But, as Clark notes, internal modules will need interfaces too.
That explains in some ways the psychological phenomenon people experience when they lose a key address book or family album: they really have lost something that is crucial to their mental functioning.
Yes. Of course, this only applies to things that are closely integrated with your brain processes, things that you carry with you, that you consult regularly. Clark doesn’t claim that anything you consult is part of your mind – a book that you look at only once a year, say.
Could a room, or a bookshelf, play the same role?
Yes, I think it could. Clark talks about how we construct cognitive niches – external environments that serve to guide and structure our activities. For example, the arrangement of materials and tools in a workplace might act like a workflow diagram, guiding the workers’ activities. Clark has a nice historical example of this from the Elizabethan theatre. The physical layout of the stage and scenery, combined with a schematic plot summary, enabled actors to master lengthy plays in a short time. We see this with elderly people too. As a person’s mental faculties decline, they become more and more dependent upon the cognitive niche they have created in their own home, and if you take them out of that niche and put them in an institution, they may become unable to do even simple everyday things.
The suggestion is then that an elderly person’s wardrobe and bedside table are actually part of their mind?
Yes. Or rather, the suggestion is that there is a perspective from which they can be seen that way. Clark isn’t dogmatic about this. The point is that the Extended model offers a perspective from which we see patterns and explanations that aren’t visible from the narrower Brainbound perspective. Again, this is nudging us away from this Cartesian view of the mind as something locked away from the world. We have an intuitive picture of our minds as private inner worlds, somehow separate from the physical world, but modern philosophy of mind is increasingly dismantling that picture.
With your book choices there’s a whole interesting set of different ways of thinking about ourselves. So, Armstrong is reacting primarily against Cartesian mind/body dualism, which sees the mind as an immaterial substance. Dennett is rejecting the inner cinema picture of the mind and urging us to rethink what it means to be conscious. Millikan is exploring how our thoughts and perceptions evolved from simpler, more basic signs and representations. Carruthers is suggesting that our mental processes are the product of different systems working with a degree of independence to produce what we think of as our single experience. And Clark is prompting us to see that we think too narrowly about the mind, and that another way of understanding mental activities is to see them as potentially extending far beyond the skull. It’s a very interesting range of books that you’ve chosen.
Maybe there’s a metaphor of Dennett’s that can help us sum this up. Dennett talks about consciousness as a user illusion. He’s thinking of the graphical user interface on a computer, where you have an image of a desktop with files, folders, a waste bin, and so on, and you can do things by moving the icons around – deleting a file by dragging it to the waste bin, for example. Now these icons and operations correspond to things inside the computer – to complex data structures and ultimately to millions of microsettings in the hardware – but they do so only in a very simplified, metaphorical way. So the interface is a kind of illusion. But it’s a helpful illusion, which enables us to use the computer in an intuitive way, without needing any knowledge of its programming or hardware. Dennett suggests that our awareness of our own minds is a bit like this. My mind seems to me to be a private world populated with experiences, images, thoughts, and emotions, which I can survey and control. And Dennett’s idea is that this is a kind of user illusion too. It is useful; it gives us some access to what is going on in our brains and some control over it. But it represents the states and processes there only in a very simplified, schematic way. I think that’s right. And what these books are doing, and what a lot of modern philosophy of mind is doing, is deconstructing this user illusion, showing us how it is created and how it relates to what is actually happening as our brains interact with our bodies and the world around us.
Five Books aims to keep its book recommendations and interviews up to date. If you are the interviewee and would like to update your choice of books (or even just what you say about them) please email us at [email protected]