• Chapter 17, “Regression to the Mean” (originally referred to as “regression to mediocrity”), illustrates how people mistakenly expect that someone (e.g., an athlete, interviewee, student, or trainee) or something (e.g., stocks, horses in a race, business or team performance) that performed especially well will maintain that performance, and that someone or something that performed especially poorly will repeat that poor performance the next time, without considering the statistical rule of regression to the mean. Regression to the mean is the phenomenon of extreme outcomes drifting back toward the mean, or average: someone who performs especially well one day is likely to perform less well the next, and someone who performs very badly one day is likely to perform better the next.
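
    To see that this is pure statistics, here is a minimal simulation sketch (Python with NumPy; the skill-plus-luck model and all numbers are illustrative assumptions, not Kahneman's): each performance is a stable skill component plus independent luck, and the standout performers on day one score closer to average on day two with no coaching or scolding involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Illustrative model: performance = stable skill + independent luck.
    skill = rng.normal(0, 1, n)
    day1 = skill + rng.normal(0, 1, n)
    day2 = skill + rng.normal(0, 1, n)  # same skill, fresh luck

    # People who did exceptionally well on day 1...
    top = day1 > 2.0
    print(f"day 1 mean of top group: {day1[top].mean():.2f}")
    # ...score noticeably closer to the average on day 2, purely by statistics.
    print(f"day 2 mean of same group: {day2[top].mean():.2f}")
    ```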

    The chapter discusses people’s misinterpretation of this phenomenon as proof that their behavior (e.g., scolding, coaching, additional training) following a poor performance was responsible for the improvement that followed, or that overconfidence after a good performance, or pressure to perform, caused the poorer performance the next time. In truth, regression to the mean is merely a statistical rule that nature tends to follow; it is evident even in genetics and intelligence. Nevertheless, the unaware tend to attribute outcomes to talent and luck where neither deserves the credit.

    Kahneman goes on to explain some of the history and application of regression analyses through the explanation of correlation coefficients and the weighting of factors. A common error discussed is confusing correlation with causation, along with “errors of intuitive prediction” (p. 183). Kahneman then reiterates that “our mind is strongly biased toward causal explanations and does not deal well with ‘mere statistics’” (p. 182) and that “causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause” (p. 182).
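
    That “explanation without a cause” can be written down compactly. A sketch of the standard result (the symbols x, y, and r are my notation, not the book's): for standardized scores (mean 0, variance 1) with correlation coefficient r between two measurements, the best linear prediction of the second score from the first is:

    ```latex
    % Standardized first score x, predicted second score \hat{y},
    % correlation coefficient r between the two measurements:
    \hat{y} = r\,x
    % Whenever the correlation is imperfect (|r| < 1), the prediction is
    % closer to the mean than the observation: |\hat{y}| = |r||x| < |x|.
    ```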

    Side note: When writing about “Talent and Luck”, Kahneman references “John Brockman, who edits the online magazine Edge”. You might want to check it out.

     

    Reference

    Kahneman, D. (2011). Regression to the mean. In Thinking, fast and slow (pp. 156-165). New York, NY: Farrar, Straus and Giroux.

  • Chapter 16, “Causes Trump Statistics” (Kahneman, 2011), explores Bayesian inference in judgement. Again, it offers an interesting view of statistics from the judgement perspective rather than strictly from an applied experimental perspective. This chapter looks at the influence of a priori information on judgement: information that can psychologically shape which evidence is considered or given more weight. A priori information may automatically trigger a System 1 stereotype.

    Kahneman provides examples comparing “statistical base rates” with “causal base rates”. He defined statistical base rates as “facts about a population to which a case belongs, but… [which] are not relevant to the individual case”, and which may be “underweighted” and “sometimes neglected altogether” (p. 168). For example, the “statistical base rate” information provided was that there were many more cabs of one color (85% Green) than the other (15% Blue) in a city. An eyewitness testified that a Blue cab was involved in a particular accident, and the eyewitness was determined to be 80% reliable. With this information, decision-makers leaned on the reliability of the eyewitness (80%) and judged that “the probability that the cab involved in the accident was Blue rather than Green” was 80%. The statistical base rate, the percentage of Green vs. Blue cabs in the city, was ignored.

    Kahneman defined “causal base rates” as information that “change[s] your view of how the individual case came to be”, which is “treated as information about the individual case” and “easily combined with other case-specific information” (p. 168). The “causal base rate” version of the experiment stated that there were the same number of cabs of each color (50% Green; 50% Blue) but that one color was involved in the majority of accidents (Green cabs in 85% of them). Given that information along with the reliability of the eyewitness (80%), decision-makers gave the accident involvement (85% Green) more weight because they instantly created a stereotype of the group involved in the majority of accidents as accident prone, without considering other factors that may have influenced those accidents (e.g., where and when the cabs operated, and who was deemed at fault).
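
    For reference, Bayes’ theorem gives the same answer in both framings, and it is far from the intuitive 80%. A minimal sketch of the calculation (Python, using the chapter's numbers; the variable names are mine):

    ```python
    # Bayes' rule applied to the cab problem.
    p_blue, p_green = 0.15, 0.85    # base rate (share of cabs, or of accidents)
    p_id_blue_if_blue = 0.80        # witness reliability
    p_id_blue_if_green = 0.20       # witness error rate

    # P(says Blue) = P(says Blue | Blue)P(Blue) + P(says Blue | Green)P(Green)
    p_says_blue = p_id_blue_if_blue * p_blue + p_id_blue_if_green * p_green

    # P(Blue | says Blue) = P(says Blue | Blue) P(Blue) / P(says Blue)
    posterior = p_id_blue_if_blue * p_blue / p_says_blue
    print(f"P(cab was Blue | witness says Blue) = {posterior:.2f}")  # ~0.41
    ```

    The normative answer is about 41%, not 80%: the witness's reliability and the base rate both matter.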

    The causal base rate information seemed to trigger System 1 and the automatic creation of stereotypes, which make decisions easier (it is easier to place blame or assign a cause). With only statistical base rates, people tended to ignore the base rate information when determining the probability and to go with the eyewitness reliability instead (which is easier). Basically, humans are emotionally swayed, quick to create stereotypes, and drawn toward the easiest choice. No matter how well versed one may be in statistics and probability testing, it is hard to fight this urge.

    Despite the concept of stereotyping being frowned upon, Kahneman points out that the use of stereotypes, which “represent categories as norms and prototypical exemplars”, can aid judgement (p. 168). However, he goes on to state that “some stereotypes are perniciously wrong, and hostile” stereotyping “can have dreadful consequences” (pp. 168-169). The conversation makes me think of expertise, where an instant feeling directs judgement, tends to be overwhelmingly correct, and is difficult to explain to anyone else. There might be a distinction to be made about whose System 1 judgement should be relied upon: one person might be stereotyping based on ill-conceived notions while another is categorizing based on experience, and both will be flooded with strong feelings.

    Kahneman goes on to discuss another study illustrating that people’s “unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular” (p. 174). A common reminder in college classes is that having personally experienced a case doesn’t make it more likely to occur in the general population. Nevertheless, people feel more strongly about something they have personally experienced, or have heard about one person experiencing, than about a statistic giving the probability of the occurrence in a large population. He goes on to state that “even compelling causal statistics will not change long-held beliefs [e.g., stereotypes] or beliefs rooted in personal experience” (p. 174). Be it our narcissistic tendencies or evolutionary mechanisms, we are more likely to remember and learn from something surprising in our own behavior or experience than from hearing about others, and this will affect our judgement.

    * Some background definitions: “Bayes’s theorem, in probability theory, [is] a means for revising predictions in light of relevant evidence, also known as conditional probability or inverse probability” (https://www.britannica.com/topic/Bayess-theorem). “Related to the theorem is Bayesian inference, or Bayesianism, based on the assignment of some a priori distribution of a parameter under investigation. In 1854 the English logician George Boole criticized the subjective character of such assignments, and Bayesianism declined in favor of ‘confidence intervals’ and ‘hypothesis tests’—now basic research methods” (https://www.britannica.com/topic/Bayess-theorem).
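
    In symbols, the theorem being described is the standard one (H and E are my labels for hypothesis and evidence):

    ```latex
    % Bayes' theorem: revising the probability of hypothesis H given evidence E.
    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
    % P(H) is the prior (e.g., a base rate), P(E | H) the likelihood of the
    % evidence, and P(H | E) the revised (posterior) probability.
    ```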

     

    Reference

    Kahneman, D. (2011). Causes trump statistics. In Thinking, fast and slow (pp. 156-165). New York, NY: Farrar, Straus and Giroux.

  • Those who are unfamiliar with certain statistical rules may not realize how their judgement is being swayed by illogical, emotional, and unreliable intuition. Kahneman delved into statistics and how important an understanding of statistics is to good judgement (chapters 15, 16, and 17). Even the most scholarly, who are well versed in statistics, tend to have a difficult time fighting their intuitive perceptions. They will often make the logical decision, but it may nonetheless leave them with an irritating feeling of uneasiness.

    Chapter 15 describes the pursuit of evidence about “the role of heuristics in judgement and of their incompatibility with logic” (p. 156). Daniel Kahneman and Amos Tversky devised an experiment describing a person named “Linda”. They then provided a list of eight scenarios and asked two groups to rank them in terms of “representativeness” (group 1) or “probability” (group 2). The list included scenarios such as: “Linda is a bank teller”; “Linda is a bank teller who is active in the feminist movement”; “Linda is a teacher in an elementary school”; “Linda is a member of the League of Women Voters”; etc.

    The conflict arose when people let the scenarios containing more details resembling the earlier description of Linda outweigh the statistical probability that Linda belonged to that scenario. That is, a scenario with more details may appear more representative of Linda’s description but is actually less probable statistically. Think of a Venn diagram: the more conditions added, the smaller the overlapping region, and therefore the lower the probability; a less detailed scenario covers a larger area of the Venn diagram and so has a higher probability. This is simple logic, but it conflicts sharply with what we feel we know about Linda from her description, even when we are told that the description may be unreliable. We cling to an intuitive feeling that the more detailed description is more likely or probable, which is completely untrue statistically. Nevertheless, the intuitive feeling tends to be linked with stronger, more certain emotional judgement, while the statistically correct choice tends to be accompanied by feelings of uncertainty and unease.
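
    The Venn-diagram logic is just the conjunction rule of probability (a standard identity, stated here in my notation): for any two events A and B,

    ```latex
    P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)
    % e.g., A = "Linda is a bank teller", B = "Linda is active in the feminist
    % movement": the conjunction can never be more probable than A alone.
    ```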

    Kahneman then states that “the word fallacy is used, in general, when people fail to apply a logical rule that is obviously relevant” (p. 158). He goes on to discuss the term they coined, the “conjunction fallacy”, which occurs when people judge a scenario combining multiple descriptors as more representative (or stereotypical) and therefore more “plausible”, even though it is less probable. For example, one may judge the conjunction of bank teller and feminist as more plausible than bank teller alone, when it is actually less probable. Kahneman states, “The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary” (p. 159).

    It is great that Kahneman and Tversky documented this phenomenon, making us more conscious of the intuitive pull of the wrong choices and better able to understand the strange uneasiness that comes with the correct ones. Kahneman goes on to state that “the prevalence of bias in human judgement is a large issue” and that in courtrooms and politics a “focus on weakness” is used to raise doubts. I assume he is suggesting that raising doubts exploits the “power of judgement heuristics” (p. 164): the additional information creates a conjunction fallacy, and people feel very good and confident in their incorrect judgements, decisions, and resulting behavior, rather than isolating the important information and making the more difficult but correct choice.

     

    Reference

    Kahneman, D. (2011). Linda: Less is more. In Thinking, fast and slow (pp. 156-165). New York, NY: Farrar, Straus and Giroux.

  • Daniel Kahneman described two systems of thought processing and decision-making, System 1 and System 2 (Kahneman, 2011). These are not literal areas of the brain but rather a way to understand how we process and organize information and how that, in turn, affects our decisions, actions, and beliefs. System 1 is easy, lazy, and automatically activated by heuristics, biases, and the unconscious. System 2 is organized and thoughtful, and requires more effort and energy. Literally placing a frown on your face can help elicit System 2, and a smile, System 1. It’s complicated and discussed in much greater detail in the earlier chapters of the book (which I plan to review and write up after I finish the book).

    This chapter, “Tom W’s Specialty”, discusses the difference between representativeness and base rates in prediction. In psychology, we are always interested in predicting behavior. The chapter looks at how people use the easier, less reliable, sometimes completely wrong representativeness of people (a lazy, System 1 sort of stereotyping) to predict things that should really be based on base rates (a more difficult, System 2 sort of statistics). Most people aren’t trained in statistics, so rules like probability don’t readily come to mind unless they have been taught to consider them. Even when people were asked to ignore a description, or extra details of questionable reliability, they couldn’t help but let it greatly affect their decisions. Even those who were trained in statistics weighed the unreliable information over the statistical base rate because it was easier (System 1 wins over System 2). Kahneman points out that calculating the probability (or likelihood) of something is more difficult than judging its similarity or representativeness, which is a more automatic, stereotypical type of thinking, and that substituting the latter for the former is a big mistake. He continues, making the point that anyone who “ignores base rates and the quality of evidence in probability assessments will certainly make mistakes” (p. 150).
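
    As a sketch of what properly weighing base rates looks like (Python; the fields and all numbers are hypothetical, not taken from the chapter): even a description that fits a stereotype well should be tempered by how rare the category is.

    ```python
    # Hypothetical illustration: base rates vs. representativeness.
    # A description "sounds like" computer science (high likelihood below),
    # but far more graduate students are in the humanities (base rates below;
    # other fields omitted, so we renormalize over these two).
    base_rate = {"computer science": 0.03, "humanities": 0.20}
    fit = {"computer science": 0.70, "humanities": 0.20}  # P(description | field)

    unnormalized = {f: base_rate[f] * fit[f] for f in base_rate}
    total = sum(unnormalized.values())
    for field, value in unnormalized.items():
        print(f"P({field} | description) = {value / total:.2f}")
    # The stereotype-friendly field loses once the base rate is weighed in.
    ```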

    This makes me think about the election of Donald Trump to President of the United States, for which the “subjective degree of belief” in evaluating the probability of an event was sorely misconstrued (p. 150). When confronted with such a hard question, such as the probability of Donald Trump being elected, the mind searches for easier questions to answer, questions that fall into the “automatic assessment of representativeness” (p. 150). So when some people were met with the easier question, “Does Donald Trump appear Presidential?”, they automatically came up with a mental image of a big white male in a suit who seemed authoritarian and powerful and thought “yes”: an easy match. A vague concept, but still a closer match than a woman (Hillary Clinton for President), for whom many people have no mental image of a female U.S. President outside of fictional images (VEEP, movies) or images of powerful women in other countries, such as the Chancellor of Germany, Angela Merkel, or Queen Elizabeth II of the United Kingdom, Canada, Australia, and New Zealand, who are not Presidents. There are several female Presidents, but they don’t readily come to mind for many people, I dare say for the majority of the United States.

    Donald Trump raised that very question himself to the media and other groups, saying he felt that “Hillary Clinton doesn’t have a Presidential look” and asking others the same (New York Times, 9/6/2016). The stories spread quickly, an example of the availability cascade and availability entrepreneurs discussed in chapter 13. So, when people had to weigh the more difficult System 2 questions about the two candidates’ education, experience, and various probabilities against the easier, automatic, stereotypical System 1 question of which one looks more presidential, the easy answer came first and won. This is a good example of the availability heuristic, when people make decisions based on the availability of an example that easily comes to mind and may be heavily swayed by emotion (e.g., prejudices, fear, anxiety, etc.). It is in the availability entrepreneur’s best interest to arouse emotion and System 1 and to align them with easy questions to answer.

     

    Reference

    Kahneman, D. (2011). Tom W.’s specialty. In Thinking, fast and slow (pp. 156-165). New York, NY: Farrar, Straus and Giroux.

  • Daniel Kahneman discussed the availability heuristic: people make decisions based on the availability of an example that easily comes to mind but that may be heavily swayed by emotion. Emotional things often come most easily to mind because the brain is built to protect us from adverse effects. If you burn yourself as a child, you remember that event clearly for a long time and are cautious from then on, even if you burnt yourself only that one time and it was ten years ago. That is a protective emotional response to a real and possible threat. Other events, however, may be very rare (something you saw on the news) and very unlikely to ever affect you; nevertheless, they made a big emotional impact and may quickly come to mind and inappropriately sway your opinion, decision, or action in the future.

    This may be seen in the behavior of investors in the stock market after the 2008 crash. Some people emotionally ripped their money (what was left of it) out of the stock market rather than calmly leaving it alone until the market rebounded. Years later, people still feared putting their money into certain investments because they remembered the crash. Similarly, during Brexit and on the night Donald Trump started winning the presidency, stocks began to plummet. These were people reacting emotionally, rather than logically, to the news.

    The stock market has a long history of being immediately influenced by news stories. The availability heuristic is influenced by the availability cascade, which is when the media and public opinion feed off each other and guide decisions and resources, often for the benefit of “availability entrepreneurs”, such as for ratings or politics (p. 142). The result can be non-expert opinion guided by emotion versus expert opinion guided by real statistical risk. These opinions can guide government policy, correctly or not, for good or bad. So, although public opinion may be strong and heavily influenced by emotion rather than facts or actual risk, it can move government policy in its direction, and millions of dollars may be spent unnecessarily, other than for calming the public, which may have its own worth in the long run. As far as market effects go, Kahneman goes on to discuss the inability of our brains to “deal with small risks”, tending to “ignore them altogether or give them far too much weight – nothing in between” (p. 143).

    These small risks vs. big risks seem to be leveled by time: closer to 2008, the risk of investing in the stock market seemed steep, but it has seemed smaller the farther out we get. Maslow’s hierarchy of needs also comes to mind, as investing in the stock market, or any investment, is a reflection of security, namely financial security. At the base of Maslow’s hierarchy are physiological needs such as food, water, and sleep. The next level above that is safety, or as some say, safety and security: things like family, health, social stability, employment, and property. This is the level under which I think investments, or financial security, fall, making it one of the most basic of human needs (level 2). Above those two levels are three more: love/belonging, esteem, and self-actualization. As you fulfill the lower levels of the pyramid of needs and move up, there seem to be more resources for going back down the pyramid to reassess when needed. This kind of multilayer support system can also affect one’s perceived risk or comfort with investing: whether to invest (safety and security for yourself, your spouse, your children) and what to invest in (self-actualization: riskier returns based on beliefs, such as solar vs. oil). Esteem may also influence perceived risk if people weigh self-esteem against the financial benefits of investment, wanting to be perceived as rich or skilled at investing and at creating money from their money (having their money work for them).

    This influence of esteem on investment risk may really bring us into the realm of motivation psychology. Are investors internally or externally motivated? If they are highly externally motivated and/or surrounded by people who value money over anything else, then they may be more inclined toward riskier investments in the hope of making lots of money fast. Their perceived risk of not fitting into a particular group (however personally defined) is more threatening and emotionally upsetting than the risk of losing money.

    This concept of emotional returns outweighing actual losses also crosses over into gambling and cognitive psychology. The gambler’s mind releases dopamine with the excitement of the risk, and little wins release more dopamine and reinforce this risk-taking behavior. Such a person is likely to start playing the stock market, relishing its wins and discounting its losses, at all costs. This reminds me of Kahneman’s comment about the inability of our brains to “deal with small risks”, tending to “ignore them altogether or give them far too much weight – nothing in between” (p. 143).

     

    Reference

    Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.

  • I started this site to track my research and to document my venture into a new area of interest: Behavioral Economics, or what some were calling Market Psychology, which appears to be rooted in traditional psychology’s cognitive decision-making. Cognitive decision-making was a big part of my Ph.D. in Applied and Experimental Human Factors and Cognitive Psychology.

    My research has focused on the process from attention (psychophysics) and perception (sensory) to decision-making (cognitive processes) and response (behavior).

    • What gets people’s attention and how can we direct their attention?
    • How do people perceive information, and what influences their perception (e.g., their experiences, biological influences, motivation, or emotions)?
    • What decision-making process follows?
    • And what is the ultimate response or behavior? Is it predictable?

    My interest in behavioral economics began with Daniel Kahneman, as is probably the case for many people. Daniel Kahneman, a psychologist, was awarded the Nobel Prize in Economics for his work in behavioral economics. The application of psychology to economics is utilized globally, often to try to motivate people to do what’s best for themselves, such as saving for retirement. That knowledge, in combination with an understanding of the underlying processes of attention, perception, decision-making, and response, can also be applied to situations such as getting people to participate in life-saving medical screenings.

    For anyone who is interested, I have also searched the topic through the American Psychological Association’s website, which led me to market psychologist Richard Peterson, MD, a psychiatrist who founded MarketPsych and has written several books on the subject. I also found several good books related to the topic on Amazon: “Well-Being: Foundations of Hedonic Psychology” (2003) by Daniel Kahneman; “The Joyless Economy: The Psychology of Human Satisfaction” (1992) by Tibor Scitovsky (original 1976?); “Choices, Values, and Frames” (Kindle version; the hardcovers are crazy expensive, over $800.00) by Daniel Kahneman and Amos Tversky; “Misbehaving: The Making of Behavioral Economics” (2015) by Richard H. Thaler; “The Foundations of Behavioral Economic Analysis” (2016; only one review, which is odd) by Sanjit Dhami; and “Behavioral Economics (Routledge Advanced Texts in Economics and Finance)”, 2nd edition (2011), by Edward Cartwright.