
An action, belief, or desire is rational if we ought to choose it.[1] Rationality is a normative concept that refers to the conformity of one's beliefs with one's reasons to believe, or of one's actions with one's reasons for action. However, the term "rationality" tends to be used differently in different disciplines, including specialized discussions of economics, sociology, psychology, evolutionary biology and political science. A rational decision is one that is not just reasoned, but is also optimal for achieving a goal or solving a problem.

Determining optimality for rational behavior requires a quantifiable formulation of the problem and several key assumptions. When the goal or problem involves making a decision, rationality factors in how much information is available (e.g. complete or incomplete knowledge). Collectively, the formulation and background assumptions constitute the model within which rationality applies. Illustrating the relativity of rationality: if one accepts a model in which benefiting oneself is optimal, then rationality is equated with behavior that is self-interested to the point of being selfish; whereas if one accepts a model in which benefiting the group is optimal, then purely selfish behavior is deemed irrational. It is thus meaningless to assert rationality without also specifying the background model assumptions describing how the problem is framed and formulated.
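The model-relativity of rationality can be made concrete with a minimal sketch. The actions and payoff numbers below are purely hypothetical; the point is only that the same set of actions yields different "rational" choices under different objective functions.

```python
# Hypothetical payoffs (to self, to group) for three actions; the numbers
# are illustrative assumptions, not data from any study.
actions = {
    "hoard":  {"self": 10, "group": 2},
    "share":  {"self": 6,  "group": 10},
    "donate": {"self": 1,  "group": 12},
}

def rational_choice(actions, objective):
    """Return the action that is optimal under the given objective."""
    return max(actions, key=lambda a: actions[a][objective])

print(rational_choice(actions, "self"))   # optimal if self-interest defines the model
print(rational_choice(actions, "group"))  # optimal if group benefit defines the model
```

Under the self-interest model "hoard" is rational and "donate" irrational; under the group-benefit model the verdicts reverse, which is exactly why the background model must be specified.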


The German sociologist Max Weber proposed an interpretation of social action that distinguished between four different idealized types of rationality. The first, which he called Zweckrational or purposive/instrumental rationality, is related to expectations about the behavior of other human beings or objects in the environment. These expectations serve as means for a particular actor to attain ends, ends which Weber noted were "rationally pursued and calculated." The second type, which Weber called Wertrational or value/belief-oriented rationality, covers action undertaken for what one might call reasons intrinsic to the actor: some ethical, aesthetic, religious or other motive, independent of whether it will lead to success. The third type was affectual, determined by an actor's specific affect, feeling, or emotion, which Weber himself said was on the borderline of what he considered "meaningfully oriented." The fourth was traditional or conventional, determined by ingrained habituation. Weber emphasized that it was very unusual to find only one of these orientations: combinations were the norm. His usage also makes clear that he considered the first two as more significant than the others, and it is arguable that the third and fourth are subtypes of the first two.

The advantage in Weber's interpretation of rationality is that it avoids a value-laden assessment, say, that certain kinds of beliefs are irrational. Instead, Weber suggests that a ground or motive can be given – for religious or affect reasons, for example — that may meet the criterion of explanation or justification even if it is not an explanation that fits the Zweckrational orientation of means and ends. The opposite is therefore also true: some means-ends explanations will not satisfy those whose grounds for action are 'Wertrational'.

Weber's constructions of rationality have been critiqued both from a Habermasian (1984) perspective (as devoid of social context and under-theorised in terms of social power)[2] and from a feminist perspective (Eagleton, 2003), whereby Weber's rationality constructs are viewed as imbued with masculine values and oriented toward the maintenance of male power.[3] An alternative position on rationality (which incorporates both bounded rationality (Simons and Hawkins, 1949),[4] and the affective and value-based arguments of Weber) can be found in the critique of Etzioni (1988),[5] who reframes thought on decision-making to argue for a reversal of the position put forward by Weber. Etzioni illustrates how purposive/instrumental reasoning is subordinated by normative considerations (ideas on how people 'ought' to behave) and affective considerations (as a support system for the development of human relationships).

In the psychology of reasoning, psychologists and cognitive scientists have defended different positions on human rationality. One prominent view, defended by Philip Johnson-Laird and Ruth M.J. Byrne among others, is that humans are rational in principle but err in practice; that is, humans have the competence to be rational, but their performance is limited by various factors.[6] However, it has been argued that many standard tests of reasoning, such as those on the conjunction fallacy, the Wason selection task, or the base rate fallacy, suffer from methodological and conceptual problems. This has led to disputes in psychology over whether researchers should (only) use standard rules of logic, probability theory and statistics, or rational choice theory as norms of good reasoning. Opponents of this view, such as Gerd Gigerenzer, favor a conception of bounded rationality, especially for tasks under high uncertainty.[7]
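The base rate fallacy mentioned above can be illustrated with a short Bayesian calculation. The diagnostic-test numbers here are hypothetical, chosen only to show how a low base rate drags down the posterior probability that experimental subjects typically overestimate.

```python
# Base-rate arithmetic for a hypothetical diagnostic test (all numbers
# are illustrative assumptions).
prevalence = 0.01        # P(disease): 1% of the population
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(pos|d) P(d) / P(pos)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_disease_given_positive = prevalence * sensitivity / p_positive

print(round(p_disease_given_positive, 3))  # ≈ 0.154
```

Despite the test's 90% sensitivity, a positive result implies only about a 15% chance of disease, because healthy people vastly outnumber sick ones; neglecting this base rate is the error the experiments probe.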

It is believed by some philosophers (notably A.C. Grayling) that a good rationale must be independent of emotions, personal feelings or any kind of instincts. Any process of evaluation or analysis that may be called rational is expected to be highly objective, logical and "mechanical". If these minimum requirements are not satisfied, i.e., if a person has been influenced, even slightly, by personal emotions, feelings, instincts, or culturally specific moral codes and norms, then the analysis may be termed irrational, due to the injection of subjective bias.

Modern cognitive science and neuroscience, in studying the role of emotion in mental function (on topics ranging from flashes of scientific insight to making future plans), suggest that no human has ever satisfied this criterion, except perhaps a person with no affective feelings, for example an individual with a massively damaged amygdala or severe psychopathy. Thus, such an idealized form of rationality is best exemplified by computers, not people. However, scholars may productively appeal to the idealization as a point of reference.[citation needed]

Kant had distinguished theoretical from practical reason. Rationality theorist Jesús Mosterín makes a parallel distinction between theoretical and practical rationality, although, according to him, reason and rationality are not the same: reason would be a psychological faculty, whereas rationality is an optimizing strategy.[9] Humans are not rational by definition, but they can think and behave rationally or not, depending on whether they apply, explicitly or implicitly, the strategy of theoretical and practical rationality to the thoughts they accept and to the actions they perform. Theoretical rationality has a formal component that reduces to logical consistency and a material component that reduces to empirical support, relying on our inborn mechanisms of signal detection and interpretation. Mosterín distinguishes between involuntary and implicit belief, on the one hand, and voluntary and explicit acceptance, on the other.[10] Theoretical rationality can more properly be said to regulate our acceptances than our beliefs. Practical rationality is the strategy for living one's best possible life, achieving one's most important goals and satisfying one's preferences in so far as possible. Practical rationality also has a formal component, which reduces to Bayesian decision theory, and a material component, rooted in human nature (ultimately, in our genome).
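The formal component of practical rationality, Bayesian decision theory, prescribes choosing the action with the highest expected utility given one's probabilities over states of the world. A minimal sketch, with hypothetical states, probabilities, and utilities:

```python
# Bayesian decision theory in miniature: pick the action that maximizes
# expected utility. The umbrella scenario and all numbers are illustrative.
p_rain = 0.3  # the agent's subjective probability of rain

utility = {
    ("umbrella", "rain"): 0,  ("umbrella", "dry"): -1,    # carrying is a small nuisance
    ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 1,
}

def expected_utility(action):
    """Probability-weighted average utility of an action over the states."""
    return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

best = max(["umbrella", "no_umbrella"], key=expected_utility)
print(best)
```

With these numbers the expected utilities are −0.7 (umbrella) and −2.3 (no umbrella), so taking the umbrella is the practically rational act even though rain is unlikely: the small certain nuisance outweighs the risk of a large loss.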

Individuals or organizations are called rational if they make optimal decisions in pursuit of their goals. It is in these terms that one speaks, for example, of a rational allocation of resources, or of a rational corporate strategy. For such "rationality", the decision maker's goals are taken as part of the model, and not made subject to criticism, ethical or otherwise.

Debates arise in these fields about whether or not people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models. Some have argued that a kind of bounded rationality makes more sense for such models.

Others think that any kind of rationality along the lines of rational choice theory is a useless concept for understanding human behavior; the term homo economicus (economic man: the imaginary man being assumed in economic models who is logically consistent but amoral) was coined largely in honor of this view.

Within artificial intelligence, a rational agent is one that maximizes its expected utility, given its current knowledge. Utility is the usefulness of the consequences of its actions. The utility function is arbitrarily defined by the designer, but should be a function of performance, which is the directly measurable consequences, such as winning or losing money. In order to make a safe agent that plays defensively, a nonlinear function of performance is often desired, so that the reward for winning is lower than the punishment for losing. An agent might be rational within its own problem area, but finding the rational decision for arbitrarily complex problems is not practically possible. The rationality of human thought is a key problem in the psychology of reasoning.[11]