• Based on the writing of Daniel Kahneman and his work with Amos Tversky, I would like to further explore the relationship between bad events and risky behavior. I find much of their work pertinent to the crises that plague our country today (e.g., politics and prejudice). A greater understanding of human behavior and its predictability is the cornerstone of preventing large groups of people (e.g., citizens) from making poor decisions and of educating them to recognize risky behavior and its consequences in the future.

    Understanding that “losses loom larger than gains” is among Prospect Theory’s (Kahneman & Tversky, 1979) many contributions to the fields of psychology and behavioral economics. That is, a possible loss is perceived as more important, arousing, or salient to an individual than a possible gain of the same size. This feeling is acted on with an emotionally guided response – whether the person realizes it or not. Such an emotional response contradicts Rational Choice Theory in economics, which views humans as rational decision-makers who are more likely to use probability and logic in their decision-making than to be swayed by emotion.

    Furthermore, people who have more to lose (a greater loss) will weight the threat of that loss even more heavily and will therefore be willing to take even more risk to avoid it. This pattern of over-weighting a threatened loss is not aligned with the actual probability of the event occurring, even when that probability is known (which it seldom is). The emotionally driven estimate assigned to the probability of an event has been referred to as a “decision weight” by Daniel Kahneman (2011, pp. 314-316). Except for probabilities of 0% or 100% (which are weighted as stated), decision weights diverge markedly from the stated probabilities. For example, an event with a 1% chance of happening is given a decision weight of 5.5, an event with a 99% chance of occurrence is given a decision weight of 91.2, a 98% chance a weight of 87.1, a 50% chance a weight of 42.1, and so on (see Kahneman, 2011, p. 315 for the table).
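    To make that table concrete, here is a small sketch using the one-parameter probability weighting function from Tversky and Kahneman (1992), with the curvature value of roughly 0.61 they estimated for gains. Treating this single formula and parameter as the source of Kahneman’s table is an illustrative assumption on my part, but it reproduces very similar decision weights:

```python
# Sketch of the probability weighting function from Tversky & Kahneman (1992):
# w(p) = p^g / (p^g + (1 - p)^g)^(1/g), with curvature g ~= 0.61 (their estimate
# for gains). Using this one formula to stand in for Kahneman's table is an
# illustrative assumption.

def decision_weight(p, g=0.61):
    """Map a stated probability p (between 0 and 1) to a decision weight."""
    if p in (0.0, 1.0):  # impossible and certain events are weighted as stated
        return p
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

for p in (0.01, 0.05, 0.50, 0.95, 0.98, 0.99):
    print(f"{p:.0%} chance -> decision weight {decision_weight(p) * 100:.1f}")

# Low probabilities are over-weighted (1% -> about 5.5) and high probabilities
# are under-weighted (99% -> about 91), the same pattern as the table on p. 315.
```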

    An example of this mismatch between the impact of gains and losses can be found in the study of positive and negative experiences, such as happy relationships (Kahneman, 2011). A happy relationship has been found to need approximately five positive encounters for every negative encounter in order to remain a happy relationship. Another example is found in friendships: a friendship can take years to establish but can be ended quickly over a single negative event. The unequal influence of negative and positive events can be seen in many areas of life, including how easily humans recall negative events. From accidents to bad news and even bad words, bad things are more salient and are remembered and recalled more quickly and easily than good ones.

    Examples of the quicker recall of negative over positive messages are often seen in politics. When one party tries to provide positive messages while the opposing party tries to instill fear, fear will have the bigger impact on the listeners’ memories (Kahneman, 2011, p. 301). Unfortunately, that is contrary to the message most people want to hear. People argue for positive campaigns full of hope for the future, but those in the know realize that nature works to protect humans from negative events by making a negative message stand out and stick in one’s memory longer.

    So, if a candidate wants you to remember their message, they will likely be more successful if they state something negative about their opponent – especially if it arouses an emotion such as anger, disgust, or fear. A scary speech will stand out in the minds of listeners over a hopeful one. A more subtle example is that an angry face will stand out against a crowd of happy faces, but a happy face in a group of angry faces will not (Kahneman, 2011).

    As a matter of fact, any words that trigger our amygdala (an emotion-related area of the brain) may set off a cascade of neurochemicals that alters our interpretation and memory of the message. This may trigger a fight-or-flight response in the listener, resulting in an urge to unify against the threat and side with – or fight alongside – the very messenger they had not wanted to support. An example of this is when both President Clinton and President Trump stated that an attack against them was an attack against the country, in an attempt to make the people of the country feel personally attacked themselves and want to fight back against the attacker (their accusers).

    Kahneman also referred to the emotion of a decision in which “the anxiety of the second situation appears to be more salient than the hope in the first” (Kahneman, 2011, p. 315). He refers to the “psychology of worry” over the “rational model” and to a paper titled “Bad is Stronger than Good” by Baumeister, Bratslavsky, Finkenauer, and Vohs (2001), in which the authors examined the superior influence of bad events over good ones (Kahneman, 2011, p. 316). They found “hardly any exceptions” to the pattern of bad outweighing good – people work harder to avoid bad outcomes than to pursue good ones and are more influenced by bad messages than by good ones. Again, that could shed some light on the state of U.S. politics.

    An example that Kahneman gave to illustrate the unconscious motivation not to lose being stronger than the will to win comes from Pope and Schweitzer’s (2011) study, in which they analyzed over 2.5 million putts in professional golf (and yes, 2.5 million may be statistical overkill that allows very small differences to come out as significant). They found evidence of “loss aversion” even in the most famous of pro golfers, such as Tiger Woods. The golfers were more likely to sink a putt that kept them from going one shot over par (a bogey) than an otherwise similar putt that would leave them one shot under par (a birdie). Of course, the central limit theorem (scores clustering around par) and the fact that these are superior golfers (making them less likely to go over par) come to mind, but I have to assume they considered that in their analyses.

    The point of the example is that people, whether consciously or not – due to heightened alertness and focus, possibly from the increased adrenaline that accompanies fear of a loss – show increased loss aversion and possibly increased risk-seeking behavior, depending on their circumstances. Kahneman and Tversky illustrated these categories in their Fourfold Pattern (Kahneman, 2011, p. 317). Many people will take terrible risks to avoid a “too painful” option that is very likely to happen, such as death or a likely loss (Kahneman, 2011, p. 319). He describes such a move as one where “the hope of complete relief [is] too enticing,” and it is almost always a bad decision and completely irrational (Kahneman, 2011, p. 319). This risk-seeking behavior may be seen in taking on risky medical procedures. It might also explain the drastic, possibly criminal lengths some politicians may be willing to go to in order to secure a win (e.g., in the USA) or at least increase their chances of a win (e.g., in France) – such as teaming up with shady cohorts whose association they feel they must keep quiet, indicating their own awareness that they are engaging in risky behavior in order to increase their chances of “not losing” (i.e., avoiding a loss). The avoidance of the original loss seems to loom larger than any possible new losses that may result from the risky behavior, such as public humiliation, loss of property or businesses, and even prison.

    It is confusing how one can be so focused on the win, or on avoiding the loss, that no consideration is given to the new possible losses on the horizon that come with a win acquired through risky behavior. Possibly that is tied to personality variables: a narcissistic risk-taker doesn’t consider the possible consequences of their actions because they assume a short-term win for themselves is all that matters. Their life experience has shown them that their behavior tends to pay off, at least for themselves (usually financially), even though such people tend not to be good for a business (or a country) (Kahneman, 2011). Why then would they engage in risky loss-avoidance behavior in the first place? It feels a bit circular. If they didn’t believe they could lose in the first place, why take risky steps to avoid a loss? I suppose it is possible that they don’t see their behavior as risky and don’t see it as “loss aversion” but rather as “do anything to win.” They simply believe they will win the next stage of any conflict that arises from their risk-taking because they always have in the past (they may even enjoy it) and have never truly suffered any long-term consequences for their actions (not even tax evasion). As this posts, we have yet to see whether that assumption is wrong.

    Anyway, this concept of losses looming larger than gains can be found in all sorts of applications, such as bargaining or deal-making, in which each side sees any compromise it makes to the other side as a loss. For example, Republicans vs. Democrats trying to pass legislation, or business-to-business negotiations. One party’s loss looms larger in its mind than any gain it may acquire from the other party’s compromise. Both sides feel the same way about any compromises they give, and therefore neither ever feels good about the outcome – unless the only compromises they give are fake, in that they only put up for compromise things they don’t really care about losing, so that they appear to have compromised. It’s all very unproductive and may have to do both with humans’ need for survival (i.e., money) and with humans’ evolutionary tendency toward in-grouping and out-grouping, an us-against-them mentality (often seen in prejudice, religion, sports, and oppression).

    I wonder if there are cultural differences in negotiating behavior based on these criteria. As I’ve mentioned before, there are cultural differences in how people value something they have in their possession to sell. In the United States, people are likely to value their own belongings far higher than what they themselves would be willing to pay for them. Yet in other countries, such as England, people were found to value a belonging they were selling at about what they would be willing to pay to buy it. These cultural differences, for one thing, make life much easier for the average person in England as compared to the USA, and they have been documented in various cultures and on both sides of the continuum.

    I haven’t yet read of a culture in which people value a belonging they have to sell at less than they would be willing to pay to purchase it, but I am eager to look into that. I assume it too exists, and that it would depend on the reference point and life circumstances of the seller and possibly the buyer. For example, a seller who must sell something they highly value for less than it is worth because they need money for food or shelter, or to help someone else (altruism) who could benefit much more from the item than they would. That might be more circumstantial than cultural. Although, I know there are some cultures in Africa that value helping others over personal belongings. There are also some cultures that don’t trust people with a great deal of belongings or wealth, because that signals to them a likely lack of morals and/or ethics. Currently, I prefer both of those cultures over my own (which has been found to ignorantly equate rich and beautiful with intelligent and benevolent), and hopefully we will all learn from them in the near future.

     

    References:

    Baumeister, R. F., Bratslavsky, E., Finkenauer, C. and Vohs, K. D. (2001). Bad is Stronger than Good. Review of General Psychology, 5(4): 323-370. DOI: 10.1037/1089-2680.5.4.323

    Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.

    Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2): 263-291.

    Pope, D. G. and Schweitzer, M. E. (2011). Is Tiger Woods Loss Averse? Persistent Bias in the Face of Experience, Competition, and High Stakes. American Economic Review, 101(1): 129-157. DOI: 10.1257/aer.101.1.129

    Tversky, A. and Kahneman, D. (1992). Advances in Prospect Theory: Cumulative Representation of Uncertainty. Journal of Risk and Uncertainty, 5(4): 297-323. DOI: 10.1007/BF00122574
  • The great contribution of Prospect Theory to expected utility theory, and to economic theory in general, was its consideration of the reference point and its finding that “losses loom larger than gains.” Humans were found to weigh losses approximately twice as heavily as gains. A new psychological perspective considered the impact that emotions have on the (not always rational) human decision-maker. Prospect Theory documented losses looming larger than gains through various examples of loss aversion, in which people made irrational decisions based on a fear of losing what they already had in hand. Regret, the fear of regretting one’s actions later, disappointment, sadness, and the impact such emotions may have on one’s self-worth and confidence are all things psychologists found affect decision-making and future behavior. Predicting the behavior of rational, logical decision-makers morphed into understanding the decision-maker’s perspective – their reference point, where they came from, and where they had arrived at the time of the decision – that is, the emotional human.
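    To put a rough number on “losses loom larger than gains,” here is a minimal sketch of the prospect-theory value function, using the parameters Tversky and Kahneman estimated in their 1992 paper (diminishing sensitivity of about 0.88 and a loss-aversion coefficient of about 2.25). The specific parameter values are an illustrative assumption on my part:

```python
# Sketch of the prospect-theory value function v(x) for a change x from the
# reference point, with parameters from Tversky & Kahneman (1992):
# alpha = 0.88 (diminishing sensitivity) and lambda = 2.25 (loss aversion).
# The exact parameter values are an illustrative assumption.

ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Psychological value of a gain (x > 0) or loss (x < 0) relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(value(100))                     # value of gaining $100
print(value(-100))                    # value of losing $100: roughly 2.25 times as large in magnitude
print(value(100) + value(-100) < 0)   # a 50/50 gamble to win or lose $100 feels like a net negative
```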

    It is believed that, due to the effect of loss aversion, humans have a preference for the status quo. Opting for the status quo, even when it lacks desirability, seems to be preferred over the risk of loss – even when the new option appears to have many preferable qualities. The fear of the unknown, which may be worse, results in an irrational preference for not changing from the status quo. That is, the disadvantages of change loom larger than the advantages of change (Kahneman, 2011, p. 292).

    As humans, we are readily adaptable to new situations, but we then adapt to our current situation and use it as a reference point, favoring small changes over large ones. Nevertheless, the assumption that one’s current state is all that matters is inaccurate as well. “Remember where you came from” is a common phrase that also plays a role in the decision-maker’s perspective. Maybe that is why we somehow trust a person who came up from meager means over a person who came from privilege. Maybe we subconsciously know that the person who came from meager means has more perspective – an ability to consider not only the views of people who share their current status, but also how important certain things were to them when they saw the world from a very different vantage point. Understanding all of this is how the field of behavioral economics developed.

    In the 1970s, Richard Thaler was asking the questions that couldn’t be answered by traditional economic theory and finding answers in the newly developed Prospect Theory. He went on to collaborate with Daniel Kahneman and Jack Knetsch in Canada. Thaler referred to the irrational unwillingness to give up or sell a belonging or investment for anything less than an extreme profit as the endowment effect. He documented repeatedly this emotional insistence on pricing something already in hand far above what one would be willing to pay to buy it – behavior that could not be explained by traditional economic theory, which assumes people will sell something at roughly the price at which they would be willing to buy it.

    Very interesting to me was that they found differences in this behavior between cultures and countries. For example, American sellers wanted more than double what they would be willing to pay to buy the same object. People who could simply trade one object for another also valued the object they had slightly more than the object they were willing to take; the mere fact that they had one of the objects in their possession raised the value of that object in their eyes. In the United Kingdom, by contrast, this trend was not found, and sellers kept their selling price much closer to what they would be willing to pay for the item. That made me wonder how the study would play out in Denmark, a country with a strict sense of equality between people and rated one of the happiest countries in the world.

    Why do Americans, as an individualistic people, tend to overvalue the things we have when considering selling them? It sounds a bit insecure. It sounds like we are fearful of letting go of something we have if it might hold any monetary or emotional value to ourselves or others, now or in the future. Why do we place a value on something that is twice what we would be willing to pay for it? Is it that we are a greedy people, always trying to have or acquire more than the next person, so we want to get the absolute most we can for anything and everything? Are we so insecure in our financial worth that we feel the need to raise it at every opportunity out of fear for our future stability? Are we so insecure in our social status that having something someone else wants makes it worth more to us emotionally? Is it just neurochemical and ingrained in our competitive culture – more dopamine released when we know we have something someone else wants, so we value it more and must keep it as the source of that release? Why do people in the UK not have the same reaction? Are they happier than Americans? Are they more secure in their own financial or social status?

    Reference

    Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux. Part 4 – “Choices”: Chapter 26: Prospect Theory (pp. 269-288) & Chapter 27: The Endowment Effect.

  • Based on the writings of Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux. Part 4 – Choices, Chapter 25: Bernoulli’s Errors (pp. 269-277).

     

    Daniel Kahneman’s chapter 25, “Bernoulli’s Errors,” is basically an introduction to how Prospect Theory developed from the realization that economists’ theories about human decision-making were very different from psychologists’ views of it. Economists think of humans as logical, rational, selfish, and having unchanging tastes. Psychologists, by contrast, think of humans as basing many of their decisions on information that is readily available and easily comes to mind (cognitively lazy) and that makes them feel good – emotionally driven rather than logical or rational; they tend to be neither completely selfish nor completely selfless, and their tastes change regularly and often. Economists’ theories have been established for some 300 years, and Kahneman built his theory off Daniel Bernoulli’s theory, which grew out of his cousin Nicolas Bernoulli’s writings and the St. Petersburg Paradox – so named, apparently, because Daniel was working in St. Petersburg when he published it, or something like that.

    I’m not going to pretend that I’m an economist, because I’m not. I’m a psychologist… cognitive, I guess. My degree was in Applied and Experimental Human Factors Psychology, which is a multi-disciplinary degree. My background covers a great deal of decision-making and human performance. I’ve also studied physiological psychology and sensory perception. Cognitive perception, emotion, and how various influences in one’s life can impact motivation and behavior are also large areas of my background. Attention, perception, decision-making, and response is the usual flow. How all these things interact to shape how real human beings make decisions and behave is complicated. Psychologists and economists strive to make human behavior predictable through theories and formulas. Since I am not an economist and am not currently well versed in all the economic theories that have evolved over the centuries, I cannot easily synopsize here all the scholarship on Daniel Bernoulli, Nicolas Bernoulli, Gabriel Cramer, the St. Petersburg Paradox, Expected Utility Theory, and so on. I will tell you that there are many places on the internet that attempt to do just that, and appear to do it fairly well, but without deeper knowledge of the matter I am unable to determine which one is on the money.

    Basically, for an extraordinarily brief take on the topic: economists utilize expected utility theory, from which the rational-agent model was developed. The rational-agent model was a “logic of choice” model. Kahneman and Tversky decided to look at the “rules that govern people’s choices between different simple gambles and between gambles and sure things” (p. 270). They set out to see “how humans actually make risky choices, without assuming anything about their rationality” (p. 271). After five years they published “Prospect Theory: An Analysis of Decision under Risk” (p. 271). Their model differed from utility theory in that it was a descriptive attempt to document when rationality is NOT used in decision-making. Prospect Theory was a modification of expected utility theory, predicting violations of its axioms or assumptions – such as rationality and logic in human decision-making about gambles.

    Kahneman pointed out that expected utility theory only compared the possible outcomes of a choice, ignoring the perspective from which the decision-maker was starting financially – their wealth, sort of. Bernoulli’s contribution to Nicolas’s St. Petersburg problem seemed to be that one’s wealth helps predict how one will behave – for instance, a poor person is willing to pay a rich person a premium to insure their ship in order to protect against a large loss. The theory has proven useful and applicable for around 300 years, but it doesn’t cover all applications.

    The original theory used probability to weigh the possible outcomes and said something along the lines of: if you end up with, say, 4 million dollars, you should be happy. Prospect Theory points out that if you started with 9 million you won’t be happy, and if you started with one thousand you might be extremely happy. The concept of one’s wealth seems to be in both theories, but applied differently. In one case a person is willing to pay a premium to protect against a loss, and in the other a person is not willing to risk any money even for the chance of ending up with more. To me, both look at one’s original wealth: one person is willing to lose a little money to prevent losing a lot of money or their livelihood, and the other is not willing to risk losing any money for the chance to win more – especially if it means they may simply end up out some money. A guaranteed loss for both, but one has more to lose (their livelihood) and the other has no real reason to risk their money in the first place – they are just gambling what they have for fun, not for necessary gains or protection.
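    A small sketch may make the contrast clearer, using the wealth figures from the example above and assuming a Bernoulli-style logarithmic utility of final states (the log form is the textbook version attributed to Bernoulli; treating it as his exact formula is a simplification on my part):

```python
import math

# Sketch contrasting a Bernoulli-style utility of final wealth with a
# reference-dependent view. The logarithmic utility and the dollar figures
# are illustrative; the figures come from the example in the text above.

def bernoulli_utility(final_wealth):
    """Utility depends only on the final state of wealth."""
    return math.log(final_wealth)

def change_from_reference(starting_wealth, final_wealth):
    """Prospect-theory-style view: what matters is the change from the reference point."""
    return final_wealth - starting_wealth

# Two people both end up with $4 million:
print(bernoulli_utility(4_000_000))                  # identical for both under Bernoulli
print(change_from_reference(9_000_000, 4_000_000))   # one is down $5 million and miserable
print(change_from_reference(1_000, 4_000_000))       # the other is up almost $4 million and thrilled
```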

    It seems that both must consider the impact of the status quo to evaluate the proper risk to the decision-maker (i.e., subjective risk). Kahneman refers to Gustav Fechner’s (1801-1887) insights into subjective perception. For example, a dim light in pitch blackness is subjectively brighter than the same light in broad daylight, and a loud noise following deep silence can be subjectively more intense than the same noise heard in a noisy club. These subjective perceptions can be translated to one’s perception of risk or wealth.

    During this time, Kahneman and Tversky also recognized the effect of framing, or “framing effects” (p. 271) – that is, “the large changes of preferences that are sometimes caused by inconsequential variations in the wording of a choice problem” (p. 272). Often a person’s perception of something can be drastically altered by the framing of the information presented; therefore, one’s perception of a risk can be altered if a question is framed a certain way. Politicians often frame a controversial comment with features that most people would generally find agreeable, such as “for our children’s future,” “our history,” “our culture,” or “economics.” These attempts to sway people’s decision-making about policies and changes to the government and the country are framed with things people approve of, which may alter their perception of a complicated or terrible truth. Remember, from previous posts, that we humans tend to be cognitively lazy and readily welcome any information that justifies or simplifies our decisions, especially information that makes us feel good about ourselves.

    So, back to Daniel Bernoulli, who in 1738 looked at the “psychological value or desirability of money,” or utility (p. 272). Prior to Bernoulli it was assumed that people assessed a risk on the expected value and probability of the possible outcome (e.g., a 10% chance to win x or a 10% chance to lose x). Bernoulli proposed that “the psychological response to a change of wealth is inversely proportional to the initial amount of wealth” (logarithmically – there is a chart). Bernoulli found that most people don’t like to take risks and prefer a sure thing, even if it results in less money than they might get from the risk, and that there is a “diminishing marginal value of wealth.” That is, if you have one million dollars, the addition of another million has more “utility points,” or psychological value, than the chance to gain one million has if you already have nine million (p. 274). Therefore, a person is more likely to be “risk averse” when the situation shows “diminishing marginal utility for wealth” (p. 274). “His utility function explained why poor people buy insurance” (because they are risk averse and willing to pay a premium for some security) “and why rich people sell it to them” (because they are likely to simply collect the money and lose nothing, and if they did have to pay out, it would only be an amount they are willing to risk losing relative to what they have and stand to gain). This is still considered a valuable and useful contribution to economics 300 years later – even though, as Kahneman states, it was “flawed” (p. 274).

    The flaw is what Kahneman noticed, because he was looking with fresh eyes at an area in which he hadn’t been indoctrinated. He was a newcomer to the field of economic theory and to decision-making (his partner Amos Tversky’s area) and therefore didn’t think it was an issue to question everything – even something that had been accepted in economics for 300 years. Kahneman referred to the fact that Bernoulli’s theory went unchallenged for 300 years as “theory-induced blindness” (i.e., when something is so widely accepted that no one sees its flaws) (p. 277). The flaw – what Kahneman and Tversky add to the theory – is the perspective of personal wealth. Personal wealth is easily overlooked because it is already present in the “diminishing marginal utility for wealth,” which influences a person’s risk-taking or risk-aversion behavior. Nevertheless, one’s financial starting point was not applied to predicting people’s decision-making when presented with a gamble. What choice will one make when faced with a chance to gain or lose money, a chance to gain or lose nothing, a sure gain or loss, or some combination thereof? Well, as Bernoulli found, people generally don’t want to lose money or security. People are risk averse.

    Of course, people’s risk aversion may be altered if they value the sacrifice – that is, if they are sacrificing money or time or something of value for something they believe in: supporting a good cause, or benefitting someone or something they value (e.g., my loss is their gain). The utility in that case is the positive feeling they get from giving money, time, or belongings. Kahneman and Tversky found that a person’s “utility of wealth,” or the psychological value placed on a possible gain or loss, is strongly influenced by the amount of money the person started with (i.e., their baseline wealth). Kahneman and Tversky asked: are two people who end up with an equal amount of wealth equally happy? Do they have the same utility? Does the same resulting amount of money mean the same amount of happiness? If you want to be able to predict human behavior, you need to consider how the person expects to feel after the decision is made and they either gain or lose money, having taken either the risk or the sure thing. Their perception of how they will feel after the outcome will influence their risk-taking and risk-averse behavior.

    Bernoulli predicted that people prefer a sure thing to a risk or gamble. Kahneman and Tversky pointed out that this is true unless the sure thing is a sure loss relative to their baseline wealth. Utility is based on where they began – their baseline wealth. If given two bad choices, in which both the gamble and the sure thing result in a loss of wealth from the baseline, people will gravitate toward whichever option seems likely to lose the least; and if there is even a small chance that they won’t lose any of their original wealth, they are more likely to take that risk. This is similar to Bernoulli’s example of preferring to lose some money (e.g., insurance payments) with the possibility of losing nothing more, over any chance of losing an important percentage of one’s wealth (i.e., the lesser of two evils).
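    Putting the same illustrative value function from above to work on a “lesser of two evils” choice shows why the gamble can feel like the better of two bad options. The dollar amounts are made up, and I am ignoring probability weighting for simplicity:

```python
# Sketch of risk seeking in the domain of losses, using the same illustrative
# prospect-theory value function as above (alpha = 0.88, lambda = 2.25).
# The dollar amounts are invented, and probability weighting is ignored here
# to keep the illustration simple.

ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

sure_loss = value(-900)                          # lose $900 for certain
gamble = 0.9 * value(-1000) + 0.1 * value(0)     # 90% chance to lose $1000, 10% chance to lose nothing

print(sure_loss, gamble)
print(gamble > sure_loss)  # the gamble "hurts" less, so the risky option is preferred
```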

    I would agree that you must always consider the individual’s perspective on their individual baseline wealth to predict their behavior, but I would add that one must also consider the personal value one may place on a particular loss (e.g., contributing toward something they believe in – altruism, green investments, or their future). People will take great risks to protect their wealth, and others will take great risks to gain wealth – especially if they have nothing to lose because they have so little. Kahneman refers to entrepreneurs and generals faced with two bad options, but I think of immigrants and refugees seeking a better life for themselves and their families – risking everything for the possibility of a better life, while feeling that losing everything they have, all current wealth, would be worth the possible gain of eventual opportunity, safety, security, and well-being. An odd play on Maslow’s hierarchy of needs, restructuring the perceived value of wealth.

     

    Notes to Self:

    • Read “Mathematical Psychology” by Amos Tversky – it is on decision-making.
    • Look up: John von Neumann – giant intellect of the 20th century; Oskar Morgenstern – economist who, with von Neumann, derived the theory of logical choice between gambles; Gustav Fechner – “psychologist and mystic” – subjective perception relative to where the previous stimulus was established.
    • Check out Econometrica – the journal in which they published their prospect theory paper, which changed their lives. He stated that had they published in a psychology journal, it would have had little impact on economics. Econometrica is where the best papers on decision-making had been published.
  • Kahneman’s 24th chapter, “The Engine of Capitalism,” discusses five major themes: “Optimists,” “Entrepreneurial Delusions,” “Competition Neglect,” “Overconfidence,” and “The Premortem: A Partial Remedy.” In the last chapter we learned how overconfidence is associated with the insider’s view, and how an outside view of similar projects’ past results is necessary to provide a baseline from which to predict the future of a project.

    Overconfidence may also be part of a personality trait – that of an optimist. Everyone loves an optimist rather than a pessimist. Optimists live longer, tend to be happier, and, as Kahneman stated, tend to make more money (at least for themselves). Despite making themselves rich, optimists tend to lose value for their companies and projects – especially when they are optimistic, risk-taking Chief Executive Officers of big companies. These CEOs’ optimism can garner much money for themselves and investment in their companies, but it tends not to show similar returns in their own productivity or in value for the company. Nevertheless, optimists, with their overly confident and risk-taking behavior, tend to occupy positions of “greatest influence” over other people’s lives. This can be a good thing when their optimistic personality is good for morale and motivation, even when delusional (p. 256). Unfortunately, their delusional perspective often leaves them with the belief that they can control events. This can be a bad thing when their risky behavior negatively impacts an entire company or country – putting everyone at risk.

    Kahneman describes optimism as a lucky genetic trait passed on through families. I think this viewpoint may also be influenced by one’s life circumstances, such as being born into poverty or being born into wealth. An extremely wealthy life experience is often described as one of privilege. These lucky few are often provided with greater opportunities and greeted by people who cater to them in every way. They are also more able to avoid criticism and punishment for poor decisions and poor behavior, owing to the influential people who surround them. The less privileged, by contrast, may experience harsh criticism, punishment, and prejudice for poor decisions and poor behavior, and have far fewer opportunities presented to them.

    Kahneman describes the optimistic entrepreneur as one who tends to take bigger risks, even when they have more personally on the line. Of course, a million dollars may not seem like a big risk if one is coming from a multi-million, billion, or multi-billion dollar background; it might even be a good tax write-off to lose. Whereas a million dollars may seem an unattainable and inconceivable amount of money to lose to someone from meager means. These differing consequences may encourage a more cautious perspective.

    That brings us to Kahneman’s section on “Entrepreneurial Delusions.” Americans are often taught, “you can be whatever you want to be” or “you can accomplish anything you put your mind to.” The reality is that only approximately 35% of small businesses succeed. Yet small-business owners have been found to estimate the average success rate at around 60% and their personal chances at 70% or higher (pp. 256-257). Furthermore, approximately “33% of Americans said their chance of failing was zero” (p. 257). Kahneman points out that seeking out the outside view (the baseline success rate of others in similar business ventures) may not even have occurred to the optimistic entrepreneur – an example of “competition neglect,” a term coined by Colin Camerer and Dan Lovallo (pp. 258-260). Kahneman describes competition neglect as occurring when people focus only on themselves (e.g., their goals, their plans, what they want to do, and what they can do) while ignoring what their competition may want to do and can do. They also “neglect the role of luck” and ignore what they don’t know – developing a “planning fallacy” and a sense of “overconfidence” (p. 259).

    Kahneman illustrates a case of overconfidence with a Duke University study of 11,600 forecasts by chief financial officers (CFOs). The correlation between the CFOs’ predictions of the S&P index and the actual outcomes was slightly less than zero – that’s right, if they predicted the S&P to go one way, it was (very slightly) more likely to go the opposite direction. Unfortunately, they developed a coherent story based on information that easily came to mind while ignoring what they didn’t know. This again is an example of System 1 (i.e., quick to mind) and WYSIATI (i.e., what you see is all there is). Kahneman further points out that if the CFOs had reported the actual range of uncertainty (quite a wide range, from -10% to +30%), they would have been “laughed out of the room” (p. 262). Basically, CFOs tend to be paid to be overconfident and unrealistically optimistic because no one really wants to hear the truth. They don’t want to hear that there is no way to know, or that their head person doesn’t really know the answer – or worse, that there is a chance of failure or loss. He warned that “Organizations that take the word of overconfident experts can expect costly consequences” (p. 262). Yet people, firms, and the media tend to “reward the providers of dangerously misleading information more than they reward truth tellers” (p. 262).

    Kahneman described the entrepreneurs of the world as an optimistically minded group of individuals with a “cognitive bias” toward “WYSIATI” (i.e., what you see is all there is). For example, the Inventor’s Assistance Program in Canada was set up to give entrepreneurs a realistic evaluation of the likelihood of their success. It found that approximately 70% of all its participants were “likely to fail.” Nevertheless, 47% of those entrepreneurs judged “likely to fail” persisted with their plans – often losing twice as much money as they originally intended to spend. Camerer and Lovallo referred to these failing entrepreneurs as “optimistic martyrs” (p. 261). The optimistic martyrs’ failures tend to lead to improvements in the systems built by those who follow, having learned from their predecessors’ failures. Those failures create new information to add to the baseline estimate of what is required for predictable success in the future.

    The “premortem” is a process to combat overconfidence developed by Gary Klein (Kahneman’s self-proclaimed “adversarial collaborator”). Basically, after a decision has been made by a group, the team is told: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster” (p. 264). This exercise has been shown to overcome the problem of “groupthink” and to stop “the suppression of doubt” by other team members, especially in a hierarchical structure where the leader has already expressed an opinion in one direction. It has also been found to encourage people to consider possible threats that may have been overlooked.

     

    Reference:

    Kahneman, D. (2011). The Engine of Capitalism. In Thinking, Fast and Slow (pp. 255-265). New York, NY: Farrar, Straus and Giroux.

  • Kahneman, D. (2011). The Outside View. In Thinking, Fast and Slow (pp. 245-254). New York, NY: Farrar, Straus and Giroux.

    The “outside view” and the “inside view” are terms coined by Daniel Kahneman and Amos Tversky referring to the processes people use in forecasting aspects of projects, such as time to completion, likelihood of completion, and cost. The inside view forecasts these aspects, or the successful completion of a project, from the insider’s perspective. It tends to be overly optimistic – or perhaps overly confident in one’s own abilities – and it tends to ignore outside information that might conflict with one’s wishful or hopeful viewpoint.

    Kahneman presented a wonderful example of him and his colleagues falling into this flawed forecasting method. They came together to write a textbook. Approximately two chapters and one year into the project, each member of the group estimated how long it would take to complete, and the consensus fell around two years (1.5-2.5 years). One colleague, who knew of several other groups that had undertaken the same type of project, gave a similar estimate, but when pressed on how long those other groups had actually taken, he remembered that it took them between seven and ten years to complete a textbook. Furthermore, approximately 40% never completed their textbook at all. And when pressed on their own group’s level of expertise compared to those other groups, he estimated their group to be below average. Nevertheless, the entire group decided not to weigh this information in a re-evaluation of their likely timeline and stuck with the two-year estimate. So the colleague who had this information ignored it when making his original estimate, and then the entire group ignored this baseline statistic for any further forecasting of their timeline.

    The outside information the colleague had about similar groups and projects is the type of information later named the outside view. It provides a baseline statistic from which to start when forecasting one project based on the outcomes of other similar projects – a statistical estimate for the field, from which one can then adjust in one direction or another (longer, more expensive, more complications likely, etc.). The colleague said their group was below average compared to the other textbook-writing groups he knew of. Therefore, the logical estimate from the seven-to-ten-year baseline would be slightly longer than those other groups took – if they completed the project at all, given that roughly 40% of such groups never finished.
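    A minimal sketch of that outside-view adjustment, using the numbers from the anecdote (a seven-to-ten-year baseline and roughly 40% of teams never finishing) and an assumed penalty for being a below-average team, since the chapter gives no formula for how much to adjust:

```python
# Minimal sketch of an outside-view ("reference class") forecast using the numbers
# from Kahneman's textbook anecdote. The "below average" adjustment factor is an
# assumption for illustration; the chapter gives no formula for it.

baseline_years = (7, 10)        # similar teams took 7-10 years to finish
completion_rate = 0.60          # roughly 40% of similar teams never finished
below_average_penalty = 1.1     # assumed: a below-average team takes ~10% longer

outside_view = tuple(round(y * below_average_penalty, 1) for y in baseline_years)
inside_view = (1.5, 2.5)        # the team's own optimistic estimate

print(f"Inside view:  {inside_view} years")
print(f"Outside view: {outside_view} years, with a {completion_rate:.0%} chance of finishing at all")
```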

    Kahneman’s textbook-writing group ended up taking eight years to finish. He refers to the miscalculation of two years to completion as an extremely optimistic forecast for the project. It may also have been a case of overconfidence in their own capabilities; possibly they felt they were far superior to the other groups, with whom they were not personally familiar. Whatever the underlying reason, Kahneman refers to this as a “planning fallacy”: a tendency to forecast more of a “best-case scenario,” or even, he says, a “delusional” forecast. He refers to an “irrational perseverance” to continue the project against all odds rather than embarrass themselves for starting a project they might not be able to complete. Kahneman briefly refers to the “sunk-cost fallacy,” in which decisions to continue a project are unwittingly based on the amount of financial, temporal, and emotional investment that has already gone into it, rather than on the logical future probability of success. Kahneman ended up moving and did not participate in the completion of the project, but some of the other team members did see it out to the end – despite the fact that the funding agency lost interest and never ended up using the text.

    Nevertheless, the experience guided Kahneman’s research and likely contributed to his Nobel Prize in economics. He is now a strong advocate of baseline statistics and has had firsthand experience that, despite your knowledge, you may still fall prey to ignoring the baseline statistics available when making predictions. It also shows how people can be overly optimistic about their own likelihood of success. People can have an underlying feeling that if they just want something badly enough and work hard enough, they can make it happen. There is an American saying for this – “you can do whatever you put your mind to.” Unfortunately, the reality is that ignoring the many other variables that are unpredictable or even unfathomable doesn’t guarantee success. Kahneman also mentions certain fields, such as law and medicine, where people may want to believe they are a “special case” and no previous case can predict their outcome. It may be necessary to provide the helpless with hope, and in rare cases it may turn out in their favor.

  • Kahneman, D. (2011). Expert Intuition: When Can We Trust It? In Thinking, Fast and Slow (pp. 234-244). New York, NY: Farrar, Straus and Giroux.

    As we wrapped up Chapter 21, “Intuitions vs. Formulas,” we asked the question: what about all those instances we’ve heard of where expert cognition and behavior were remarkable and impressive? That brings us to Chapter 22, “Expert Intuition: When Can We Trust It?” It is only logical to come to this question after the last chapter determined that a simple statistical equation was a more accurate predictor than the average expert. Additionally, humans are susceptible to environmental and biological factors, and a simple statistic tends not to be. Well, Kahneman and his colleagues had the same questions. Kahneman was a staunch believer that the statistical route was the proper route, but a colleague with opposing views felt just as strongly in the opposite direction. How can two intelligent, well-informed professionals have such opposing views? Well, they decided to work together to figure that out… and they did – for years. Chapter 22 is about that process and its conclusions. To read their resulting article, see “Conditions for Intuitive Expertise: A Failure to Disagree.”

    In a nutshell, an excerpt from their abstract:

    “They conclude that evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual’s opportunity to learn the regularities of that environment. Subjective experience is not a reliable indicator of judgment accuracy.”

    That is, it seemed to come down to the type of expert, the reliability of the variables that go into their evaluations, and therefore the predictability of their field. A firefighter with many years of experience can find reliable indicators within a fire and use them to predict short-term actions that may need to be taken. In politics, by contrast, experts who have worked in the field for years may not realize that their variables lack validity. That is, they commonly look at one question and substitute another they have seen before – one they feel they can answer based on previous experience, even though it is not the same question. They don’t realize they are doing this. The answer comes easily, so they feel overly confident in it. Overconfidence is usually a sign that they are wrongly constructing a coherent story in their mind from invalid variables.

    Altogether, another incredibly interesting chapter, and one that brings to mind the current political environment. As a matter of fact, Kahneman even brings up an example of political substitution of one question for another that results in completely invalid conclusions. Kahneman references Gladwell’s book Blink, in which he describes a failure of so-called intuition. Malcolm Gladwell described the election of President Harding, “whose only qualification for the position was that he perfectly looked the part. Squared jawed and tall, he was the perfect image of a strong and decisive leader” (p. 236). The truth is, he had also previously been a senator and had worked in politics for some time before his fellow Republicans thought he should run for President because he so looked the part. Quite possibly the American people were also swayed by his appearance and voted for someone who looked the part rather than someone they felt was especially qualified for the position. If that was the case, they substituted an easier, superficial question about appearance for the more difficult evaluation of qualifications, resulting in an unreliable prediction based on an invalid question.

    This rings familiar with the 2016 election of a politically inexperienced businessman, Donald Trump, over a very qualified and experienced career politician, lawyer, former First Lady, Secretary of State, and Senator – Hillary Clinton. Donald Trump posed the very question that aided Harding’s election – does she look like a President? He posed this question in many ways, saying she just didn’t look presidential to him, that she didn’t have stamina (a stereotypically male term often used to describe virility), or that she didn’t appear as strong and healthy as himself. If people couldn’t predict his likely success in the role, they could much more easily decide that he looked the part of a big strong man who would be a decisive leader. Harding died two years into office, and although he was said to have “embraced technology and was sensitive to the plights of minorities and women,” his presidency was marred by the scandal of his friends and colleagues profiting off the presidency – i.e., “using their official positions for their own enrichment” (https://www.whitehouse.gov/1600/presidents/warrenharding). Donald Trump’s presidency is still to be written in history, but as they say, if we don’t learn from the past, we are doomed to repeat it (Santayana) – itself an example of searching for reliable variables from the past to improve the predictability of the future, and a warning not to substitute simpler, invalid evaluations such as appearance.

  • Kahneman, D. (2011). Intuitions vs. Formulas. In Thinking, Fast and Slow (pp. 222-233). New York, NY: Farrar, Straus and Giroux.

    In Chapter 21, “Intuitions vs. Formulas,” we explore the validity of expert intuition versus statistical formulas for prediction. Kahneman began the chapter by referencing Paul Meehl, whose work he credits with shaping one of his own earliest accomplishments: an interview for new recruits to the Israeli Defense Forces. Paul Meehl’s work, “Clinical versus statistical prediction: A theoretical analysis and a review of the evidence” (1954), found, in a nutshell, that “simple, statistical rules are superior to intuitive ‘clinical judgements’” (p. 230). This had a great influence on Kahneman early in his career, when he was assigned by the Israeli Defense Forces to create a more reliable interview for incoming soldiers (p. 229). By “focusing on standardized, factual questions,” he created an interview based on “six traits in a fixed sequence, rating each trait on a five-point scale before going on to the next” (p. 231). The interviewers were extremely displeased with the prospect of throwing out all their intuitive expertise for a boring scale, so he added, at the end: “close your eyes, try to imagine the recruit as a soldier, and assign him a score on a scale of 1 to 5” (p. 231). The results were significantly better than the old interview process, and the method was still in use 45 years later when he went back to visit. Additionally, the “close your eyes” intuitive, subjective, expert evaluation at the end was found to be about as accurate as the sum of the six ratings – which suggests that the expert’s intuitive, subjective measure was also valid and reliable, but only after answering the six objective questions. The objective ratings primed the experts’ final judgment. Research methods should include both subjective and objective measures that support each other to illustrate validity.

    Nevertheless, Meehl’s research found that statistical formulas were as good as or better than expert intuition for predicting just about everything. Despite the upset and controversy that his research started (he referred to his book as “my disturbing little book” because it upset so many “experts” in various fields), to this day his findings have only been further supported by subsequent studies. Kahneman cited various famous examples of prediction by simple statistical analyses that outperformed experts’ predictions for years and even decades afterward. Furthermore, it wasn’t just statistical analyses that were found to be superior but “simple” statistical analyses, as illustrated by Robyn Dawes’s famous article “The Robust Beauty of Improper Linear Models in Decision Making” (p. 226). That is, the careful weighting of variables in complicated multiple regression analyses was found to be unnecessary in many cases; simply picking six or so valid predictive variables and weighting them all equally was often enough, and sometimes even optimal, for accurate prediction.
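    As a sketch of what Dawes’s “improper” model looks like in practice (the predictor names and ratings below are made up for illustration), one simply standardizes each valid variable and adds the results with equal weights:

```python
# Sketch of Dawes's "improper linear model": standardize each valid predictor
# and add them with equal (unit) weights instead of fitting regression
# coefficients. The predictor names and ratings are invented for illustration.

from statistics import mean, stdev

def equal_weight_score(candidates, predictors):
    """Return an equal-weight composite score for each candidate dict."""
    # z-score each predictor across candidates, then sum with unit weights
    stats = {p: (mean(c[p] for c in candidates), stdev(c[p] for c in candidates)) for p in predictors}
    return [sum((c[p] - stats[p][0]) / stats[p][1] for p in predictors) for c in candidates]

recruits = [
    {"punctuality": 4, "sociability": 2, "responsibility": 5},
    {"punctuality": 2, "sociability": 5, "responsibility": 3},
    {"punctuality": 5, "sociability": 4, "responsibility": 4},
]
print(equal_weight_score(recruits, ["punctuality", "sociability", "responsibility"]))
```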

    One example presented was of “Princeton economist and wine lover Orley Ashenfelter,” who developed a simple statistical analysis to predict the future price and quality of wine from various regions. His results out-predicted (with a correlation above .90) the experts who traditionally provided such forecasts (p. 224). This sent the experts, and the wine community overall, into upset and denial.

    Virginia Apgar, an anesthesiologist, illustrated the same approach in 1953 by developing a simple score of five variables (heart rate, respiration, reflex, muscle tone, and color), each rated on a three-point scale (0, 1, 2), as a predictor of a newborn baby’s distress one minute after birth – a score still used today. This standardized statistic predicts respiratory function to help prevent brain damage or death and alerts staff to any need to intervene with the baby right after delivery. Previously, there was no consensus among experts as to what to monitor or when, resulting in higher infant mortality.
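    As a sketch, the Apgar score amounts to a five-item, equal-weight checklist. The alert threshold below (a total of 3 or less calling for immediate intervention) reflects common descriptions of the score and is my assumption rather than something from the chapter:

```python
# Sketch of the Apgar score as a simple equal-weight checklist: five signs,
# each rated 0, 1, or 2, summed to a 0-10 total. The alert threshold used
# below is an assumption based on common descriptions of the score.

def apgar(heart_rate, respiration, reflex, muscle_tone, color):
    ratings = (heart_rate, respiration, reflex, muscle_tone, color)
    assert all(r in (0, 1, 2) for r in ratings), "each sign is rated 0, 1, or 2"
    return sum(ratings)

score = apgar(heart_rate=2, respiration=1, reflex=2, muscle_tone=1, color=1)
print(score, "needs immediate attention" if score <= 3 else "monitor / routine care")
```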

    Experts have been found not only to be less accurate than a simple statistical formula, but to actually contradict themselves approximately 20% of the time – even moments after being presented with the same data. This can be very disconcerting in the medical field. As Kahneman stated, “Unreliable judgements cannot be valid predictors of anything” (p. 225). For more examples of the “virtues of checklists and simple rules,” Kahneman refers us to Atul Gawande’s The Checklist Manifesto [http://atulgawande.com/book/the-checklist-manifesto/] (p. 227).

    Kahneman reminded us of another influence on experts’ judgements mentioned previously – that “unnoticed stimuli in our environment [can] have substantial influence on our thoughts and actions” (p. 225). Remember that a “cool breeze on a hot day,” or a judgement made right after a food break, can make you more optimistic – as illustrated by parolees being granted more lenient judgements after lunch than before lunch. Again, this information tends to be met with “hostility,” because experts may know they are skilled but they tend not to know the “boundaries of their skill” or when and why a simple statistic of a few equally weighted variables “could outperform the subtle complexity of human judgement,” resulting in confusion and defensiveness (p. 228). Additionally, Kahneman stated that when a human competes with a machine, “our sympathies lie with our fellow human” (p. 228). He goes on to state that “prejudice against algorithms is magnified when the decisions are consequential,” and that Meehl and others have “argued strongly that it is unethical to rely on intuitive judgments for important decisions if an algorithm is available that will make fewer mistakes” (p. 229).

    This argument relates to today’s transition to more autonomous systems in cars, airplanes, and anything else whose performance can be improved by reducing the impact of human error. Human error has been blamed for up to 80% of airline accidents, 60% of nuclear accidents, and 90% of car accidents. Automation is not 100% accurate either, but it usually produces less error than the human user. Our attention is limited and our response time is slower than automation’s, yet people still feel uncomfortable when the pilot announces that the automated system will be landing the plane today. Naturally, we are biased toward humans despite the facts about human limitations. Kahneman describes how statistical prediction, as opposed to human intuition and judgement, may feel “artificial” or “synthetic” and be difficult to accept, but it will likely gain acceptance over time, as reliance increases with knowledge (p. 228).

    All that being said… there is a large body of research on the ability of human experts to instantly make judgements that can’t be matched by technology, at this time. Malcolm Gladwell’s book Blink nicely summarizes some of this research, if somewhat romantically. Humans also have the still not completely understood ability to take years of experience and synopsize it, in an instant, to make judgements that are complex and accurate. Another example of expertise in action was when Captain Chesley “Sully” Sullenberger saved hundreds of people in 2009 by drawing on decades of expertise to distill complex judgements under extreme time pressure and stress and complete an emergency water landing on the Hudson River after birds took out both of his aircraft’s engines. I must say, that happening so successfully seems like it would have been statistically impossible to predict.

  • Kahneman, D. (2011). The Illusion of Validity. In Thinking, Fast and Slow (pp. 209-221). New York, NY: Farrar, Straus and Giroux.

    In Chapter 20, “The Illusion of Validity,” we continue to explore “Overconfidence” with Daniel Kahneman’s book Thinking, Fast and Slow (2011). Validity of a test or a measure, simply put, means testing what you think you are testing… or measuring, or predicting. Often we think we have found something to measure that will predict something else, but often we are wrong and are not measuring what we think we are measuring. For example, consider testing students on a subject under serious time constraints. You may think you are measuring their knowledge of the topic, but you may just be getting a measure of how well they work under extreme stress and high cortisol levels, or how fast they can regurgitate information on the topic – rather than how much they actually know about it.

    Then there is reliability, the other side of the coin but equally important: the ability of a measurement to show the same results with repeated use. If you test a student on the same subject matter repeatedly and they come up with consistent results, you have good reliability and the measurement is a reliable measure (although, if given the opportunity to learn from their mistakes, they may improve with each re-test). You can also have a very reliable measure that turns out to have very low validity – people consistently perform the same on it, but it turns out not to be measuring what you thought it was measuring. So you can see why validity is so important and why any measure without it is pointless.
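    The timed-test example above can be simulated. Here is a rough sketch – my own illustration, not from the book – of a measure that is highly reliable yet barely valid: two administrations of the test agree with each other, but neither tracks the thing we actually care about.

```python
# Simulated "reliable but not valid" measure. All numbers are invented.
import random

random.seed(0)
n = 1000
true_knowledge = [random.gauss(0, 1) for _ in range(n)]       # what we want to measure
speed_under_stress = [random.gauss(0, 1) for _ in range(n)]   # what the timed test mostly captures

def timed_test(i):
    # Score depends mostly on speed under stress, only a little on knowledge, plus noise.
    return 0.9 * speed_under_stress[i] + 0.1 * true_knowledge[i] + random.gauss(0, 0.2)

test1 = [timed_test(i) for i in range(n)]
test2 = [timed_test(i) for i in range(n)]   # a retest of the same people

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print("reliability (test vs. retest):", round(corr(test1, test2), 2))          # high
print("validity (test vs. knowledge):", round(corr(test1, true_knowledge), 2))  # low
```

    The test-retest correlation comes out very high, while the correlation with actual knowledge is close to zero – consistent scores, measuring the wrong thing.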

    Kahneman gave an example of analyzing a group of cadets’ performance under artificial conditions to predict their performance in officer academy and, ultimately, their performance as real officers in the field. He and his colleagues found their measures of the cadets’ performance in the artificial environment were essentially unpredictive of success in officer academy and, by extension, the real world. Nevertheless, they continued to conduct these evaluations and to report their findings with an air of confidence. Why? Well, this is the complexity of the “illusion of validity,” which Kahneman describes as the discovery of his first “cognitive illusion” (p. 211).

    Despite Kahneman and his colleagues’ knowledge of the lack of predictive power and validity of their measure, they confidently continued to evaluate subsequent cadets in the same manner because, as Kahneman states, “Confidence is a feeling which reflects the coherence of the information and the cognitive ease of processing” (p. 212). He describes these as errors of “substitution” and the “representativeness heuristic” (p. 212). He goes on to warn against trusting people simply because they are highly confident, since “declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true” (p. 212).

    He illustrated this with examples of political pundits’ predictive skills and of people in the financial field who are supposed to have “stock-picking” skills (p. 212). Kahneman explained that “The illusion that we understand the past fosters overconfidence in our ability to predict the future” and recalled Nassim Taleb’s Black Swan theory, discussed in previous chapters (p. 218). Kahneman cited various studies finding that highly experienced, in-demand political pundits, and people in the financial industry who were entrusted with money and paid bonuses based on their ability to beat the stock market, were not much more reliable than chance. Yet when presented with this information, these people ignored it and confidently continued – though they often became a little more defensive about their skills. This has been illustrated by the current political administration as well, which has been described as “doubling down” when its competence or performance comes into question. Kahneman described this “illusion of skill” as more than an “individual aberration” – it is “deeply ingrained in the culture,” whether professional, religious, or national (p. 216). He goes on to explain that “facts that challenge… [or] threaten people’s livelihood and self-esteem” usually cannot be “absorbed” or “digested” by the individual in question (p. 216).

    Furthermore, Kahneman stated that “Cognitive illusions can be more stubborn than visual illusions” (p. 217). A visual illusion is compelling – often impossible not to see even when you know what you are seeing is not true (e.g., the Müller-Lyer optical illusion) – but it is at least checkable: one can take a ruler to the figure and prove to oneself that the two lines are the same length, despite perceiving one as shorter than the other. With cognitive illusions, by contrast, people are often “ignorant of their ignorance.” Their “subjective confidence” is a “feeling” they cannot simply switch off; it can be very strong, and it basically feels better to believe it. He goes on to state that such cognitive illusions can be supported by “powerful” cultures (professional, religious, national, etc.) and create an “unshakable faith,” no matter how “absurd, when they are sustained by a community of like-minded believers” (p. 217) – especially when that faith makes people feel happier or superior in skill, talent, power, or confidence.

    Kahneman cited two personality types from “Isaiah Berlin’s essay on Tolstoy” – “hedgehogs” and “foxes” (p. 220). Hedgehogs were described as those who “know one big thing,” “have a theory about the world,” “account for particular events within a coherent framework, bristle with impatience toward those who don’t see things their way, and are confident in their forecasts. They are also especially reluctant to admit error… and are opinionated and clear, which is exactly what television producers love to see on programs” (p. 220). Foxes, by contrast, are “complex thinkers” who “recognize reality,” including “luck” and “unpredictable outcomes” (p. 220), and are therefore less likely to be of interest to television producers.

  • In “Part 3: Overconfidence” of Daniel Kahneman’s Thinking Fast and Slow we delve into chapter 19, “The Illusion of Understanding.” Here we are introduced to Nassim Taleb’s Black Swan theory, which describes the human tendency to shape our world views around rare and misinterpreted events of the past. Kahneman offers the story of Google’s success as an example. The founders were blessed more with luck than with the many superior attributes everyone now looks to for guidance in acquiring such success for themselves. Had the founders received the mere one million dollars they wanted for the sale of their company, their story would have ended there; they didn’t (bad luck for Yahoo), so they persevered, and today we have an incredible success story.

    When looking back at great stories, we weave them together with intent, talent, and intelligence far more than with what is, more often than not, simply an anomaly of luck. People want answers; they want to learn from these stories. If a story is just about luck, then it can in no way affect their lives, their futures, their wealth. As they say, hindsight is 20/20, but in the subsection “The Social Costs of Hindsight” Kahneman discusses how the human mind is inclined to create narratives about the past to make sense of it and then mistakenly uses these false life lessons to predict the future. Not only do people recollect their own decisions differently based on the outcome of predicted events, they do not even recall their original viewpoint; the final viewpoint completely replaces it. If an event had a bad outcome, they will insist they saw that outcome coming and held a more negative view of it from the start (impossible). If the event had a very good outcome, they will insist they believed beforehand that it was the better way (impossible). Even if you tell them what they originally thought, they will be sure that you are incorrect and that they were on the right side of history all along, with their amazing foresight and deep feelings about how it was going to go.

    This is especially dangerous in jury outcomes, as punishment becomes dependent upon the outcome rather than the intention. Negative outcomes produce more negative impressions of the doctor, or the device, or whoever else there is to blame. Remember, we like blame – that is, a cause. We do not understand, or feel good about, unclear reasons; we need closure, achieved by punishing the entity that is to blame. Kahneman states that “hindsight is especially unkind to decision makers who act as agents for others – physicians, financial advisers, third-base coaches, CEOs, social workers, diplomats, politicians” (p. 203). He refers to this as “outcome bias” and notes that it can unfortunately lead to rewarding “undeserved” and “irresponsible risk seekers” (pp. 203-204). Writing back in 2011, he observed that “a few lucky gambles can crown a reckless leader with a halo of prescience and boldness,” which might be illustrated by the current president, his experiences, and his reputation (p. 204). As an example of an irresponsible risk taker, Kahneman describes “an entrepreneur who took a crazy gamble and won” and notes that “leaders who have been lucky are never punished for having taken too much risk.” “They are believed to have had the flair and the foresight to anticipate success, and the sensible people who doubted them are seen in hindsight as mediocre, timid, and weak” (p. 204).

    He mentions something else that may be pertinent in this political environment: “we all have a need for the reassuring message that actions have appropriate consequences, and that success will reward wisdom and courage” (p. 205). The human mind is inclined toward hindsight’s illusory tendencies because we find it comforting to feel we “can predict and control the future” (p. 205). Of course, real statistics tell us we can’t. Rare, strange events happen more often than we like to accept, and regression to the mean is one statistical regularity we can count on. So next time you start looking for a causal link, remember the problem of induction, the power of luck, and “the inevitability of regression” to the mean (p. 207).
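    Regression to the mean is easy to see in a small simulation. The sketch below is my own illustration (not from the book): performance is modeled as part skill and part luck, and the best performers in round one land noticeably closer to average in round two, with no causal story needed.

```python
# Regression to the mean with invented numbers: skill persists, luck does not.
import random

random.seed(1)
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skill]   # skill + luck
round2 = [s + random.gauss(0, 1) for s in skill]   # same skill, new luck

# Take the top 5% from round 1 and see how they do in round 2.
cutoff = sorted(round1)[int(0.95 * n)]
top = [i for i in range(n) if round1[i] >= cutoff]

avg_r1 = sum(round1[i] for i in top) / len(top)
avg_r2 = sum(round2[i] for i in top) / len(top)
print(f"top group, round 1 average: {avg_r1:.2f}")
print(f"same group, round 2 average: {avg_r2:.2f}")  # noticeably lower, purely by chance
```

    The “decline” in round two is not a slump to be explained; it is what happens whenever the first measurement was partly luck.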

     

    References

    Kahneman, D. (2011). The Illusion of Understanding. In Thinking Fast and Slow (pp. 199-208). New York, NY: Farrar, Straus and Giroux.

    Taleb, N. N. (2010). The Black Swan: The Impact of the Highly Improbable (2nd ed.). New York, NY: Random House.

    Vickers, J. (2016). The Problem of Induction. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2016 ed.). https://plato.stanford.edu/archives/spr2016/entries/induction-problem/

  • Chapter 18, “Taming Intuitive Predictions,” starts by clarifying the difference between those kinds of instant “Blink” decisions that come from expertise (see Gladwell, 2005) and instant decisions that come from heuristics or intuition. The heuristics of intuition often come into play when a question is too hard to answer, so people – unconsciously – substitute an easier question… or, consciously, a preferable one. We’ve seen a lot of this on the news lately, when politicians or ordinary people are asked hard questions they either can’t answer or don’t want to answer. Possibly out of unconscious self-preservation, they reinterpret the question as a simpler one – one they are more comfortable with, or more practiced at – and answer from there. How often we hear, “that was not the question.” Kahneman states that intuitive judgements are often made with “high confidence” even when based on “weak evidence” (p. 185).

    Kahneman’s examples are far more complicated and surprising. They are wonderful illustrations of how people with limited knowledge in a particular domain can access some pertinent bit of information remotely associated with the more complicated, difficult question. The leftover, unanswered part of the question is then converted into an easier, associated question through heuristics; “intensity matching” translates the strength of the resulting impression onto the answer scale, and a feeling of confidence helps settle the answer. The person “eventually settles on the most coherent solution” with a fair degree of conviction, despite the lack of evidence and the obvious substitution of the question (p. 187). This has become fairly commonplace in politics, as the frustrated viewer watches and wonders whether the speaker substituted the question deliberately or unconsciously.

    So, how do we correct for these sorts of intuitive predictions and judgements? I would suggest a regression analysis with careful determination of predictive factors, but Kahneman provides an interesting rough, regression-like procedure that relies on “estimates” of averages and correlations and on “impressions of evidence.” He states you may still have error, but it will be smaller and less biased or extreme. He also cautions that any attempt to correct intuitive predictions takes effort, which won’t likely be expended unless the “stakes are high and when you are particularly keen not to make mistakes” (p. 192). He further reminds us to always remember regression to the mean when estimating, the importance of base rates, and the overconfidence of intuitive predictions.
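    As I read it, the procedure runs roughly: start from the baseline (the relevant average), form the intuitive prediction, estimate how strongly the evidence actually correlates with the outcome, and then move from the baseline toward the intuition only in proportion to that correlation. A minimal sketch of that mean-anchored correction follows; the GPA numbers and the 0.30 correlation are invented for illustration, not Kahneman’s figures.

```python
# Mean-anchored correction of an intuitive prediction. Numbers are invented.

def corrected_prediction(baseline, intuitive, correlation):
    """Move from the baseline toward the intuitive guess in proportion to
    how well the evidence actually predicts the outcome (0 <= correlation <= 1)."""
    return baseline + correlation * (intuitive - baseline)

baseline_gpa = 3.0        # average outcome for the population
intuitive_gpa = 3.8       # what a vivid impression of the student suggests
evidence_validity = 0.30  # rough estimate of how predictive that impression is

print(corrected_prediction(baseline_gpa, intuitive_gpa, evidence_validity))  # 3.24
```

    Notice that with a correlation of zero the corrected prediction collapses to the baseline, and with a perfect correlation it equals the intuition – which is exactly the spirit of tempering weak evidence with the base rate.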

    He also notes that some extreme predictions may be warranted, depending on the application – as when venture capitalists are hunting for those few extreme cases to invest in, or when a conservative banker is overly cautious in an attempt to avoid any chance of loaning money to someone who may declare bankruptcy. Kahneman sums up the chapter nicely when he states that “Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1”… and “Following our intuition is more natural and somehow more pleasant, than acting against them.” Whereas “Regression is … a problem for System 2. The very idea of regression to the mean is alien and difficult to” understand, and “a causal interpretation” will likely be given “that is almost always wrong” (pp. 194-195).

     

    References

    Kahneman, D. (2011). Taming Intuitive Predictions. In Thinking Fast and Slow (pp. 185-195). New York, NY: Farrar, Straus and Giroux.

    Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Boston, MA: Little, Brown and Company.