I focus my discussion on the well-known Ellsberg paradox. I find good normative reasons for incorporating non-precise belief, as represented by sets of probabilities, in an Ellsberg decision model. This amounts to forgoing the completeness axiom of expected utility theory. Provided that probability sets are interpreted as genuinely indeterminate belief (as opposed to “imprecise” belief), such a model can moreover make the “Ellsberg choices” rationally permissible. Without some further element to the story, however, the model does not explain how an agent may come to have a unique preference among the Ellsberg options. Levi (1986, Hard choices: Decision making under unresolved conflict. Cambridge, New York: Cambridge University Press) holds that the extra element amounts to innocuous secondary “risk” or security considerations that are used to break ties when more than one option is rationally permissible. While I think a lexical choice rule of this kind is very plausible, I argue that it involves a greater break with expected utility theory than mere violation of the ordering axiom.
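The lexical rule described above can be illustrated computationally. The following is a minimal sketch, not Levi's exact formulation: it assumes the standard three-color Ellsberg urn (30 red balls, 60 black or yellow in unknown proportion), represents belief as the set of probability functions consistent with that information, treats an option as permissible iff it maximizes expected utility under at least one probability in the set (E-admissibility), and then breaks ties by worst-case expected utility (security). All function names and the 0/1 utility scale are illustrative assumptions.

```python
# Ellsberg one-urn setup: 30 red balls, 60 black/yellow in unknown proportion.
# Indeterminate belief = the set of priors P(black) = b/90 for b in 0..60.

def expected_utility(payoffs, p_red, p_black, p_yellow):
    r, b, y = payoffs
    return r * p_red + b * p_black + y * p_yellow

# Payoffs as (red, black, yellow); utility 1 for winning the prize, 0 otherwise.
options = {
    "A: bet on red":             (1, 0, 0),
    "B: bet on black":           (0, 1, 0),
    "C: bet on red or yellow":   (1, 0, 1),
    "D: bet on black or yellow": (0, 1, 1),
}

priors = [(30/90, b/90, (60 - b)/90) for b in range(61)]

def e_admissible(names):
    """Options that maximize expected utility under at least one prior."""
    admissible = set()
    for p in priors:
        eus = {n: expected_utility(options[n], *p) for n in names}
        best = max(eus.values())
        admissible |= {n for n, u in eus.items() if abs(u - best) < 1e-12}
    return admissible

def security(name):
    """Worst-case expected utility of an option over the probability set."""
    return min(expected_utility(options[name], *p) for p in priors)

for pair in (["A: bet on red", "B: bet on black"],
             ["C: bet on red or yellow", "D: bet on black or yellow"]):
    permissible = e_admissible(pair)
    choice = max(permissible, key=security)
    print(f"permissible: {sorted(permissible)}; security picks: {choice}")
```

On this sketch, both options in each pair come out rationally permissible (each is optimal under some admissible prior), and the security tie-breaker then selects the unambiguous bet in each pair, reproducing the characteristic Ellsberg choices (A over B, D over C) that jointly violate the sure-thing principle.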