Although the reality of these biases is confirmed by replicable research, there are often controversies about how to classify these biases or how to explain them.[1] Some are effects of information-processing rules (i.e. mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Such effects are called cognitive biases.[2][3] Biases in judgment or decision-making can also result from motivation, such as when beliefs are distorted by wishful thinking.
Some biases have a variety of cognitive ("cold") or motivational
("hot") explanations. Both effects can be present at the same time.[4][5]

There are also controversies as to whether some of these biases count as truly irrational or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. This kind of confirmation bias has been argued to be an example of social skill: a way to establish a connection with the other person.[6]

The research on these biases overwhelmingly involves human subjects.
However, some of the findings have appeared in non-human animals as
well. For example, hyperbolic discounting has also been observed in rats, pigeons, and monkeys.[7]
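Hyperbolic discounting is commonly modelled with Mazur's formula V = A / (1 + kD), where A is the reward amount, D the delay, and k a fitted discount rate. A minimal sketch (the value k = 0.1 is an arbitrary illustrative choice, not a fitted parameter) shows the preference reversal this model predicts:

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Mazur's hyperbolic discounting model: V = A / (1 + k * D)."""
    return amount / (1 + k * delay)

# With a short delay, the smaller-sooner reward is preferred...
assert hyperbolic_value(50, delay=1) > hyperbolic_value(100, delay=30)

# ...but pushing both rewards 30 units further into the future reverses the
# preference, the characteristic signature of hyperbolic (rather than
# exponential) discounting observed in humans, rats, pigeons, and monkeys.
assert hyperbolic_value(50, delay=31) < hyperbolic_value(100, delay=60)
```

The reversal occurs because the hyperbolic curve falls steeply at short delays and flattens at long ones, whereas exponential discounting falls at a constant rate and never reverses preferences.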


The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972[16] and grew out of their experience of people's innumeracy, or inability to reason intuitively with greater orders of magnitude. Tversky, Kahneman and colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory.
Tversky and Kahneman explained human differences in judgement and
decision making in terms of heuristics.

For example, the representativeness heuristic is defined as the
tendency to “judge the frequency or likelihood” of an occurrence by the
extent to which the event “resembles the typical case” (Baumeister &
Bushman, 2010, p. 141). The “Linda problem” illustrates the
representativeness heuristic (Tversky & Kahneman, 1983[18]).
Participants were given a description of a target person, Linda,
which implied that Linda could be a feminist, as she was interested in
discrimination and social justice issues (see Tversky & Kahneman,
1983). Participants were then asked whether they thought Linda was more
likely to be “a) a bank teller” or “b) a bank teller and active in the
feminist movement”. Participants often selected option b). Tversky and
Kahneman (1983) termed this choice a “conjunction fallacy”: participants
chose option b) because the description related to feminism, even though
a conjunction of two events cannot be more probable than either event
alone. More broadly, the representativeness heuristic may lead to errors
such as activating stereotypes and judging others inaccurately
(Haselton et al., 2005, p. 726).
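The probability rule violated in the Linda problem is the conjunction rule, P(A ∧ B) = P(A) · P(B | A) ≤ P(A). A minimal sketch with invented, purely illustrative probabilities:

```python
def conjunction_probability(p_a, p_b_given_a):
    """P(A and B) = P(A) * P(B | A); since P(B | A) <= 1, this never exceeds P(A)."""
    return p_a * p_b_given_a

# Illustrative (invented) numbers: even if Linda is very likely a feminist
# given that she is a bank teller, "bank teller AND feminist" remains no
# more probable than "bank teller" alone.
p_teller = 0.05
p_teller_and_feminist = conjunction_probability(p_teller, 0.95)
assert p_teller_and_feminist <= p_teller
```

Judging option b) as more likely therefore contradicts the probability calculus regardless of how representative the description makes Linda seem.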

In contrast, critics of Kahneman and Tversky, such as Gerd Gigerenzer,
argue that heuristics should not lead us to conceive of human thinking
as riddled with irrational cognitive biases, but rather to conceive of
rationality as an adaptive tool that is not identical to the rules of formal logic or the probability calculus.[19]
Nevertheless, experiments such as the “Linda problem” grew into the
heuristics and biases research program which spread beyond academic
psychology into other disciplines including medicine and political
science.