The Standard View is that, other things being equal, speakers' judgments about the meanings of sentences of their language are correct. After all, we make the meanings, so how wrong can we be about them? The Standard View underlies the Elicitation Method, a typical method in semantic fieldwork, according to which we should find out the truth-conditions of a sentence by eliciting speakers' judgments about the truth-value of the sentence in different situations. I put pressure on the Standard View and therefore on the Elicitation Method: for quite straightforward reasons, speakers can be radically mistaken about meanings.

Lewis (1969) gave a theory of convention in a game-theoretic framework. He showed how conventions could arise from repeated coordination games, and, as a special case, how meanings could arise from repeated signaling games. I put pressure on the Standard View by building on Lewis's framework. I construct coordination games in which the players can be wrong about their conventions, and signaling games in which the players can be wrong about their messages' meanings. The key idea is straightforward: knowing your own strategy and payoffs needn't determine what the others do, and so leaves room for false beliefs about the convention and the meanings. The examples are simple, explicit, new in kind, and based on an independently plausible meta-semantic story. [link to full paper]
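As an illustrative aside (a sketch of the general setup, not the constructions from the paper), here is a minimal Lewis signaling game learned by a simple urn-style reinforcement rule; the sizes, payoffs, and learning rule are all assumptions chosen for simplicity:

```python
import random

random.seed(0)

# A minimal 2-state, 2-signal, 2-act signaling game with common
# interests: a successful round reinforces both the sender's and the
# receiver's choice.
N = 2  # number of states = signals = acts

sender = [[1.0] * N for _ in range(N)]    # per state: weights over signals
receiver = [[1.0] * N for _ in range(N)]  # per signal: weights over acts

def draw(weights):
    """Draw an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(10_000):
    state = random.randrange(N)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # coordination succeeded: reinforce both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

# If a signaling system has emerged, each state now strongly favors one
# signal; which of the two possible systems it is was settled by chance.
policy = [max(range(N), key=lambda s: sender[state][s]) for state in range(N)]
print("state -> signal map:", policy)
```

Note that each player's own weights record only their own history of choices and successes; nothing in them directly represents the other player's strategy, which is the kind of gap the abstract's false-belief constructions exploit.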

Work in progress

Click for abstracts. Email me for drafts.

How far can meaning come apart from use? That is The Question. Perhaps The Question seems hopelessly vague or hopelessly broad or hopelessly ill-defined or hopelessly difficult or hopelessly esoteric — hopeless, anyway. I say we should be hopeful: it is a good question. I describe a game-theoretic framework based on Lewis 1969 in which to think about it.

Signaling games crop up in several research programs in philosophy, economics, biology, and linguistics. Different research programs use signaling games in different ways. There are differences in emphasis, aim, analysis, application and direction, some minor and some major. An opinionated survey.

Are agreements to play Nash equilibria always self-enforcing? Aumann (1990) argues not. His argument has been well-received in the literature, at least when we assume the players commit to a move before agreeing. I show that Aumann's argument is self-defeating. He assumes, objectionably, that he is cleverer than the players he is analyzing. Philosophers should care about properly evaluating Aumann's argument for two reasons. First, it bears on the theory of convention: if Aumann's argument is sound, we should give up a platitude about convention. Second, evaluating Aumann's argument illuminates a key idea in the foundations of game theory, an idea which is meant to justify the contrast between decision theory and game theory, but which is not well-understood.

An epistemic characterization of a solution concept shows under what epistemic conditions (for example, what the players believe about each other's actions, rationality and beliefs) the players behave as the solution concept describes. Giving an epistemic characterization of a solution concept which involves mixed strategies, such as Nash equilibrium, is problematic, because mixed strategies don't fit easily into epistemic game theory.

A standard workaround is to reinterpret mixed strategies. On the classical interpretation, a mixed strategy for player i represents player i's randomization, and equilibria are strategic equilibria. On the new interpretation, a mixed strategy for player i represents the other players' beliefs about i's action, and equilibria are doxastic equilibria. Doxastic equilibria fit easily into epistemic game theory and epistemic characterizations of doxastic equilibrium have been given — notably, by Aumann and Brandenburger (1995).
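For concreteness, the textbook example behind both interpretations can be sketched in a few lines (Matching Pennies is my choice of illustration here, not a game discussed above): at the unique equilibrium each player is indifferent, and the same half–half probabilities can be read either as row's randomization (strategic) or as column's belief about row's action (doxastic).

```python
# Matching Pennies: row gets +1 on a match, -1 on a mismatch (column
# gets the negation). Its unique Nash equilibrium is fully mixed: each
# player plays Heads with probability 1/2.
ROW = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

def row_expected(move, q):
    """Row's expected payoff from `move` when column plays H with prob q."""
    return q * ROW[(move, "H")] + (1 - q) * ROW[(move, "T")]

# At q = 1/2, row is exactly indifferent between H and T, so any mixture
# by row is a best reply. On the classical reading, nothing pins down
# row's actual randomization; on the new reading, the 1/2 is the other
# player's belief about row, and no randomization need occur at all.
for q in (0.0, 0.5, 1.0):
    print(f"q={q}: E[H]={row_expected('H', q):+.1f}, "
          f"E[T]={row_expected('T', q):+.1f}")
```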

First, I argue that the difference between strategic and doxastic equilibrium hasn't been appreciated in the literature. As a result, theorists have drawn unwarranted conclusions about when players will play a Nash equilibrium. Second, I assess doxastic equilibrium on its own merits. Following a broader discussion of the role of solution concepts, I argue that doxastic equilibrium doesn't deserve the attention it has received.

I argue that the long-run defense of maximizing expected value isn't sound. First, I describe the long-run defense. I show why it's worth taking seriously, and so worth refuting, and I draw connections to broader issues about dynamic choice, dominance, and long-run defenses in other fields. Second, I adapt an idea well-known in economics but little-known in philosophy — maximizing expected growth rate — to show that a rival bet also has a long-run defense. The long-run defenses are parallel but come to incompatible conclusions, so neither is sound — or so I argue. Third, I show how to formalize a new conjecture, a conjecture with an interesting philosophical upshot: that many bets have a long-run defense, so long-run defenses are cheap.
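A toy calculation (the numbers are mine, chosen for illustration) shows how the two criteria can come apart: a repeated bet can have positive expected value per round yet negative expected growth rate, so an expected-value maximizer takes it every round while repeated play almost surely drives wealth to zero.

```python
import math

# Hypothetical repeated bet: each round your wealth is multiplied by
# 1.5 with probability 1/2, or by 0.6 with probability 1/2.
up, down, p = 1.5, 0.6, 0.5

# Expected one-round multiplier: 1.05 > 1, so maximizing expected value
# recommends taking the bet, every round.
ev = p * up + (1 - p) * down

# Expected growth rate (expected log multiplier): about -0.053 < 0, so
# by the law of large numbers the average log multiplier is eventually
# negative, and wealth tends to 0 almost surely under repeated play.
growth = p * math.log(up) + (1 - p) * math.log(down)

print(f"expected value per round: {ev:.3f}")
print(f"expected growth rate per round: {growth:.3f}")
```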

This is part of a broader project addressing the more general question: When/why/how does the long-run bear on the short-run? For, when you start looking for it, you see that the long-run is taken to bear on the short-run in a wide variety of fields, not just decision theory. In statistics, how an estimator performs as the number of data points goes to infinity is taken to bear on how it performs given, say, 100 data points. In complexity theory, how an algorithm performs as the input size tends to infinity is taken to bear on how it performs given an input of size, say, 25. In game theory, how a strategy performs as the number of iterations goes to infinity is taken to bear on how it performs when the game is played just once. And so on. But why is the long-run taken to bear on the short-run in these cases? After all, "in the long run, we're all dead". I suggest that these cases are instances of a unified phenomenon that calls for a unified analysis.