Have you ever wished you could know the future?
You might want to bet that team A will beat team B in The Ultimate Championship of Some Kind of Sport, decide whether to drink from that carton of milk that has been opened in the fridge too long, or maybe just figure out which way home has the least traffic. Whatever situation you find yourself in, you have to make choices without the benefit of knowing what will happen in advance. To live is to choose.
As humans, we make decisions by trying to predict the future, and we use information from both present cues in the environment and our past experiences to do so. We sense the world and our brain does the math to produce a behavioral output. Although it usually works neatly, the decision-making process is not flawless—we often make mistakes called prediction errors. Moreover, we make systematic errors in decision-making, meaning we repeatedly make choices that deviate from what rational behavior dictates. A common example is the predisposition to search for, or interpret, information in a way that confirms our prior beliefs, known as the confirmation bias (Fig 1).
Fig. 1 Example from Monty Python and the Holy Grail. Peasants believe that a woman is a witch. Thus, they seek out information and misinterpret the facts to confirm their own prior beliefs, disregarding any rational evidence that contradicts them.
Does it mean we are biased?
Yes, we are! Psychologists have studied these phenomena for decades and categorized several types of cognitive biases in humans. Cognitive biases are ways of thinking – and therefore responding – that systematically deviate from rational behavior.
Perhaps you are one of the many people convinced that they do not have any biases. I will give you the benefit of the doubt. However, since you have read this far, take the chance to be part of an experiment. Read these descriptions of two people:

Alan: intelligent – industrious – impulsive – critical – stubborn – envious

Ben: envious – stubborn – critical – impulsive – industrious – intelligent

If you are like most of us, your opinion of Alan is probably better than your opinion of Ben. However, the descriptions are exactly the same; only the order of the words is reversed.
Studies like this were conducted by Solomon Asch, and suggested the existence of a cognitive bias known as the halo effect. The halo effect is the tendency to like – or dislike – everything about a person based on a single trait or a small number of traits. In this experiment, the halo effect might have driven you to associate the complete description of each person with the positive or negative first terms you read. For example, 'stubborn' is an ambiguous word, and its value gets assigned according to its context, which is strongly conditioned by the first words that appear in each case. Maybe you think that Alan is justified in being stubborn because his first trait is "intelligent". Conversely, maybe you think stubbornness only makes Ben's "envious" personality worse.
Another example is the optimistic bias. Scientists have identified the optimistic bias by polling people about common tasks and traits (e.g., driving ability, susceptibility to disease, life expectancy). The classic result is that more than 50% of the population believe that they are above the median. For example, more than half of people think they have a higher life expectancy than the median value for their population (Fig 2).
Fig. 2 Bell-shaped curve showing the number of people measured for a continuous variable (e.g., life expectancy). By definition, half of the population is indeed above the median (blue people), but more than half of the total think of themselves as being above it (red people). This is mathematically impossible!
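To see why more than half of a population cannot sit above its own median, here is a quick sketch with made-up numbers (the population size and life-expectancy values below are purely illustrative):

```python
import random

# Simulate a hypothetical population of 10,000 life-expectancy values
random.seed(0)
population = [random.gauss(80, 10) for _ in range(10_000)]

# The median is the middle value once the population is sorted
values = sorted(population)
median = (values[4999] + values[5000]) / 2

# Count how many individuals are actually above the median
above = sum(1 for v in population if v > median)
print(above / len(population))  # prints 0.5
```

No matter how the values are distributed, at most half of the individuals can exceed the median, so any survey in which well over 50% place themselves above it reveals a bias, not a fact.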
Is there any behavioral evidence that supports this idea in other organisms?
Cognitive psychologists and neuroscientists are increasingly asking questions like these in rodents, flies, bees and other animal models. Enkel and collaborators designed a simple task to evaluate how rats interpret an ambiguous cue. They trained the rats using two different tones that predicted either a positive outcome (a sugar reward after a lever press) or a negative outcome that rats could avoid by pressing a different lever. On the test day, they challenged the animals with intermediate tones to understand whether their expectations were biased towards positive or negative outcomes. These kinds of tasks are useful for digging deeper into the molecular mechanisms and brain circuits that explain these phenomena.
Fig. 3 Experimental paradigm developed for testing bias in rats. Rats can press a lever (dark rectangle) after a positive or negative tone is played. During the test, ambiguous tones are presented and lever presses are recorded. Adapted from Enkel 2010.
While it’s true that our decisions are subject to biases, many of these biases act against each other or appear only when a large number of conditions are met. There is plenty of room for flexibility in animal (hence human) behavior. Sometimes, biases actually help us make faster, even life-saving, decisions. Maybe it’s not so bad to be biased after all?
Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011)
Thomas Enkel, Donya Gholizadeh, Oliver von Bohlen und Halbach, Carles Sanchis-Segura, Rene Hurlemann, Rainer Spanagel, Peter Gass and Barbara Vollmayr, ‘Ambiguous-Cue Interpretation is Biased Under Stress- and Depression-Like States in Rats’, Neuropsychopharmacology 35 (2010), 1008–1015 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3055368/
Solomon Asch, ‘Forming impressions of personality’, The Journal of Abnormal and Social Psychology 41(3) (1946), 258–290 http://dx.doi.org/10.1037/h0055756
Gerd Gigerenzer and Daniel Goldstein, ‘Reasoning the fast and frugal way: Models of bounded rationality’, Psychological Review 103(4) (1996), 650–669 https://doi.org/10.1037/0033-295X.103.4.650