Scott Soames recommends the best books on

The Philosophy of Language

What a professor of philosophy has on his bookshelf. The languages of logic, mathematics, and science, as well as English, French, and German. The nitty-gritty of truth

  • The Foundations of Arithmetic
    by Gottlob Frege

  • Naming and Necessity
    by Saul A. Kripke

  • Themes from Kaplan
    by Joseph Almog, John Perry and Howard Wettstein

  • Syntactic Structures
    by Noam Chomsky

  • Frege’s Puzzle
    by Nathan Salmon

Scott Soames

Scott Soames is a professor and director of the School of Philosophy at the University of Southern California. He specialises in the philosophy of language and the history of analytic philosophy. He offers a new theory of propositions in his latest book What is Meaning? 

Can you explain what you mean by the philosophy of language?

It’s the study of the most central questions that we raise about language, and an analysis of the most fundamental concepts we apply to language. Among the most important of these are truth, reference and meaning. The task is to say what we mean by these concepts, and then to construct theories of truth, reference and meaning that help us understand not only the languages of logic, mathematics and science but also ordinary languages like English, French, and German.

Let’s start with your first choice, The Foundations of Arithmetic by Gottlob Frege. This is cited as one of the most influential books of the 20th century.

Yes, although it wasn’t written in the 20th century, it became very influential in the 20th century. Frege wrote it in 1884 in the middle of a large project in the philosophy of mathematics. The first part of the project was to develop a new system of logic capable of formalising the notion of proof in mathematics. The result was a system that replaced the classical Aristotelian logic of the syllogism, and represented the most important advance in logic in 2,000 years.

The second part of his project was to demonstrate that the axioms of arithmetic can be derived from his system of logic plus his logical definitions of all arithmetical concepts. He outlines the strategy for doing this in The Foundations of Arithmetic. Simply put, the idea is to show first that arithmetic is at bottom nothing but an elaboration of pure logic, and second that higher mathematics is at bottom nothing more than an elaboration of arithmetic. So the goal, as explained in the book, is to show how all of mathematics can be established with the unchallengeable a priori certainty of pure logic.

And why does this book continue to be so successful?

It is the best, most accessible work ever written in the philosophy of mathematics. It is also beautifully conceived and executed. For those who want to know what philosophical analysis is, this is among the best examples ever produced. Its vision, though complicated in its details, is simple and compelling. In the end, Frege didn’t accomplish everything he hoped for. But he did succeed in laying the foundation for the stunning advances in mathematical logic in the 20th century that themselves provided frameworks for modern theories both of computation and of linguistically encoded information.

Your next choice, Naming and Necessity by Saul Kripke, is also widely seen as one of the most important philosophical works of the 20th century. How does he fit in with Frege?

This work is a kind of bookend, if you like, to Frege, who initiated a tradition which came to be known as analytic philosophy. In 1970, Saul Kripke, who is also an analytic philosopher, pointed out that despite all the progress made in following Frege’s emphasis on logic, language and meaning, there are certain limitations of that project that we must transcend.

Kripke’s central message emerges from a discussion of three distinctions. One is the distinction between necessary and contingent truth. A necessary truth is one that is true, and would have remained so no matter what possible state the world was in. A contingent truth is one that is true, but could have been false. For example, it is true that we are talking today, but we could have decided otherwise, in which case the claim that we are talking would have been false. This distinction between necessary and contingent truths is traditionally illustrated by saying that the truths of logic and mathematics are necessary, whereas those of natural and social science are contingent.

The second distinction is between those truths we can know a priori, just by thinking about them, and other truths, knowledge of which requires empirical observation and experiment for confirmation. As before, this distinction – between a priori and a posteriori truths – is traditionally illustrated by saying that our logical and mathematical knowledge is a priori, whereas our empirical, scientific knowledge is a posteriori. In short, the metaphysical distinction between what is and couldn’t have been otherwise vs what is but could have been otherwise was thought to coincide with the epistemological distinction between what we can know without empirical confirmation vs what we can know only through empirical confirmation.

Why should that be so? Well, that brings us to the third distinction – between analytic and synthetic truths. An analytic truth is a sentence made true simply by what the words mean – bachelors being unmarried, for example. By contrast, a synthetic truth is made true by corresponding to facts in the world. Throughout most of the 20th century this distinction between the meanings of two classes of sentences was assumed to explain the coincidence of the necessary with the a priori, and the contingent with the a posteriori. If a statement is made true by its meaning alone, then of course it would have remained true even if the facts of the world had been different, and of course it can be known without empirical confirmation, since understanding what it means is enough to know that it is true.

What Kripke shows in Naming and Necessity is that the difference between the analytic and the synthetic can’t explain this coincidence because there is, in fact, no coincidence to explain. Contrary to what had been assumed, there are necessary truths knowable only by experience – including many important scientific truths – and there are contingent truths that can be known a priori. Moreover, this difference is not reducible to differences in linguistic meaning or convention.

The reason this is important is that it falsified an assumption crucial to the self-conception of philosophy that had grown up in the first half of the 20th century – the assumption that philosophical truths are all analytic, necessary and a priori. Kripke’s shattering of this idea brought back something that had been missing from philosophy for a long time. It brought back the idea that things in the world have discoverable essences, which are properties not just physically required but metaphysically necessary for their existence. Some of these properties are discoverable by science. But these may not exhaust the essential properties of human beings. The impact of Kripke’s book was its message that, despite the progress philosophers have made in understanding meaning and language, philosophical knowledge is not limited to that, which means that philosophy must reconnect to the non-linguistic world.

Tell me about Themes from Kaplan by Joseph Almog, John Perry and Howard Wettstein.

The book contains one of David Kaplan’s most influential papers, called ‘Demonstratives’, the centre of which is his logic of demonstratives. This is a system of logic which, though a descendant of Frege’s original system, has an unusual feature. It employs terms we use in everyday life that vary their referents from speaker to speaker and use to use – words like ‘I’, ‘you’, ‘he’, ‘today’ and so on.

What makes this interesting is that logic has always aimed, since Frege, at producing systems of proof that are mechanically checkable, so there can be no dispute about whether or not something counts as a proof. It was always thought that in order to do this you had to abstract away from natural language, and eliminate all context-sensitive words, the referents of which change from one context to another.

Kaplan shows that this is not so, which is a great achievement. Other developments in logic since Frege have shown that many aspects of natural language not included in Frege’s system can be incorporated into logical systems. The effect is to give us more interesting logical languages which begin to approach languages like English in their expressive power. By applying the techniques developed for understanding the original logical languages to these richer, more English-like systems, we achieve an understanding of these systems that gives us a kind of understanding of English and other natural languages by proxy.

That is all to the good. On the other hand, the lesson of Kaplan’s paper is that there are limits to this. Even though he succeeds, he still has to abstract away from, and ignore, important aspects of the meanings of context-sensitive sentences in English. Thus we are left wondering whether the aims of logic construction and natural language understanding are still in some tension. I think they are, which is one reason why his paper is still a challenge to me.

Next up is Noam Chomsky’s book Syntactic Structures.

Syntactic Structures tells us what the task of constructing a grammar for a natural language amounts to and how to go about it. It also gives samples of the kinds of rules we need. Chomsky shows that if you start at a great level of abstraction, and produce many intervening structures on your way to generating the strings that count as sentences, you will have levels of description that allow you to compare the similarities and differences in languages in ways that go beyond simply looking at word order and other surface phenomena.

We can say that two languages are similar if the grammars resulting from this project have fundamental similarities at significant computational levels. This is what Chomskians are always looking for – linguistic universals, in the limiting case. What is important for me is that this book brought another piece of the puzzle to the table. By giving us the grammatical part, Chomsky gave us what we need to combine with the meaning part in order to have complete theories of spoken languages.

Your last book is a piece of the puzzle as well – Frege’s Puzzle by Nathan Salmon.

Yes, it is. Nathan wrote the draft of this book when he and I were both assistant professors at Princeton. We had something in common because we had both been influenced by two of my other authors – David Kaplan and Saul Kripke – who had revived and expanded an old idea going back to John Stuart Mill and Bertrand Russell. The idea is that the meanings of many of our words are simply the things in the world they stand for, rather than ideas in our minds. This idea seemed revolutionary at the time because, since Frege, it had been thought that meanings must be internally and transparently accessible to our minds. Frege had convinced people of this using an argument known as ‘Frege’s puzzle’ in one of his most famous papers, published in 1892. The Kripke book had challenged Frege’s idea, and the Kaplan paper had done so too, but no one had taken apart the Frege’s-puzzle argument in a very convincing way until Nathan made real progress doing just that. Although the point remains controversial, Nathan’s book took us a long way in the right direction.

Many of the books that you have mentioned were written by people you discuss in your own book, Philosophy of Language – what progress have you made personally with their theories?

There are two parts to my book. In part one I identify and explain what I take to be the most important lessons to be drawn about language from work in the last century and a quarter. In part two I sketch my own vision of what needs to be done to solve today’s problems and indicate what I believe to be the direction of future progress. My discussion of the authors I have spoken to you about is mostly in part one. In part two I push forward in four main ways. First, I sketch a new theory of what propositions – thought of as meanings of sentences as well as objects of belief and knowledge – really are. Second, I use propositions to explain what possible states of the world are, where our knowledge of them comes from, and how epistemologically possible world-states are related to metaphysically possible world-states. This is needed to explain the truth conditions of sentences, and the difference between necessary and a priori truths. Third, I investigate surprising new aspects of the interaction of indexicality with a priori knowledge. Last, I integrate my own recent thinking with that of others in spelling out how linguistic meaning (semantics) relates to many other aspects of language use (pragmatics) in determining what is asserted, implicated and communicated by our uses of language.

October 15, 2010
