%0 Conference Paper
%A Buchak, Lara
%D 2012
%F pittphilsci:9066
%K risk; rationality; trade-offs; expected utility; non-expected utility; decision theory; risk aversion
%T Risk and Tradeoffs
%U http://philsci-archive.pitt.edu/9066/
%X The prevailing view in decision theory is that subjective expected utility theory (hereafter, EU theory) characterizes the preferences of all rational decision makers. And yet, there are some preferences that violate EU theory that seem both intuitively appealing and prima facie consistent. An important group of these preferences stems from how ordinary decision makers take risk into account. In particular, EU theory does not allow a decision maker to care about “global” properties of gambles – the minimum, the maximum, the spread of possible values – except insofar as this can be reduced to how he values particular outcomes. But many people do care about these properties in a way that is not reducible to how they value outcomes, and thus cannot be characterized as EU maximizers. Why would an agent find these global considerations relevant to decision making? Decision theory is a theory of instrumental rationality: it formalizes and precisifies means-ends rationality. We are presented with an agent who wants some particular outcome and can attain that outcome through a particular act. Or, more precisely, we are presented with an agent who is faced with a choice among acts that lead to different outcomes, which he values to different degrees. To figure out what to do, the agent must make a judgment about which outcomes he cares about, and how much: this is what the utility function captures (even on views of utility theory on which utility is not meant to correspond to anything “in the head” but is merely a construction from the agent’s preferences). In typical cases, none of the acts available to the agent will lead with certainty to some particular outcome, so he must also make a judgment about the likely result of each of his possible actions. This judgment is captured by the subjective probability function.
Expected utility theory makes precise these two components of means-ends reasoning: how much an agent values various ends, and which courses of action he thinks are likely to realize these ends. But this can’t be the whole story: what we’ve said so far is not enough for an agent to reason to a unique decision, and so we can’t have captured all that is relevant to decision making. An agent might be faced with a choice between one action that guarantees that he will get something he desires somewhat and another action that might lead to something he strongly desires, but which is by no means guaranteed to do so. Knowing how much he values the various ends involved and how likely each act is to lead to each end is not enough to determine what the agent should do in these cases: the agent must make a judgment not only about how much he cares about particular ends, and how effective his actions will be in realizing each of these ends, but also about which sort of strategy to take towards realizing his ends as a whole: how to structure the realization of his aims. This involves deciding whether to prioritize definitely ending up with something of some value or instead to prioritize possibly ending up with something of extraordinarily high value, and by how much: specifically, he must decide the extent to which he is generally willing to accept the risk of something worse in exchange for the possibility of something better. This judgment corresponds to considering global or structural properties of gambles. This third dimension of instrumental reasoning is the dimension of evaluation that standard decision theory has ignored. To be precise, it hasn’t ignored it but rather supposed that there is a single correct answer for all rational agents: one ought to take actions that have higher utility on average, regardless of the spread of possibilities.
And we are not in a position to evaluate arguments for this claim until we get clear on what agents are doing, from the point of view of taking the means to their ends, when they violate EU theory. Economists have done an excellent job developing alternative models of decision making and analyzing their formal properties. However, there are two reasons that these developments have gone largely unnoticed by philosophers: first, these models are widely considered to be descriptive, not normative. Indeed, many of them allow decision makers to violate relatively uncontroversial criteria of rationality. Second, philosophers have found decision theory useful in the study of rational subjective probabilities. The non-expected utility models of economists typically employ either objective probabilities or probabilities that are subjective but non-additive, and so could not be termed rational. Furthermore, the economics literature does not adequately deal with the question of why, from the point of view of instrumental reasoning, agents might fail to maximize expected utility. Drawing on some of the “rank-dependent” theories in the economics literature, as well as a result about probabilistic sophistication due to Mark Machina and David Schmeidler, I propose a theory on which agents subjectively determine all three components relevant to instrumental rationality: a utility function, a (subjective, additive) probability function, and a norm for translating these two things into instrumental value – a norm representable by a risk function, which is a transformation of subjective probabilities that measures the agent’s attitude towards risk. (The norm of EU theory itself corresponds to a particular risk function.) I show, via a representation theorem, that if an agent’s preferences obey certain intuitive axioms – weaker than those of EU theory – then these functions can be derived uniquely (utility unique only up to positive affine transformation).
Using the representation theorem, I show how the risk function corresponds at the level of preferences to how the agent values tradeoffs in different structural parts of the gamble, e.g., to whether he cares more about what happens in the worst-case scenario or the best-case scenario. And I show that an agent who cares disproportionately about the worst-case (or best-case) scenario does so not because his beliefs about the world depend on how good things are for him in various states (he is not “pessimistic” or “optimistic”) but because of how he rationally structures his goals in the face of risk. I thus show how non-EU maximizers can be seen as taking the means to their ends.