
Review of THINKING, FAST AND SLOW – Daniel Kahneman

2011, Anchor, 499 pages

I predict time will be unkind to psychologist Daniel Kahneman’s groundbreaking, important, and misguided book. Having heard so many positive reviews of Thinking, Fast and Slow, I had expected to enjoy reading it. But it turns out I am quite allergic to this book. Not since reviewing Sam Harris’ The Moral Landscape: How Science Can Determine Human Values has a book frustrated me to this degree. Do you remember doing math quizzes in grade school? Sometimes you would have a diabolical teacher who would put trick questions on the exams. Invariably, you would get some of these wrong. Then, when reviewing the error, at first you would wonder whether the marker was incorrect. Then, looking closer, you would see that it was a trick question, designed to fool. In many cases, you could have done the math. But you were fooled by a diabolical question designed to trip up your brain in the heat of the moment. Well, Kahneman’s book is filled with trick questions that he and his fellow accomplice Amos Tversky dreamt up over the years. He presents leading questions that point you towards the incorrect answer. When you get the answer wrong, he tells you your brain is not reacting rationally.

That the brain is irrational is an argument I accept. E. O. Wilson makes that claim in On Human Nature, a most excellent book. But I absolutely disagree with the way Kahneman demonstrates the fallibility of the brain, in the same way as I disagreed with math teachers who set snares for students with trick questions. Who likes being fooled?

Less is More

Take this example that asks volunteers to price out two dinnerware sets. Set A has:

8 plates, good condition
8 soup bowls, good condition
8 dessert plates, good condition
8 cups (6 in good condition and 2 broken)
8 saucers (1 in good condition and 7 broken)

Set B has:

8 plates, good condition
8 soup bowls, good condition
8 dessert plates, good condition

When participants could see both sets, they valued, on average, Set A at $32 and Set B at $30. When participants were shown only one set–either Set A or Set B–they priced Set A, on average, at $23 and Set B at $33. Kahneman (and Christopher Hsee, who came up with this experiment) call this the less-is-more effect, and, to them, it shows how the brain fails to handle probability. Their explanation is that, when participants could see both sets, they could see that Set A contains more good-condition pieces than Set B. Therefore, they made the correct call and valued Set A at $32 and Set B at $30. However, when participants could see only one set, they would determine the price of the set by the average value of its pieces. The set with only intact pieces therefore nets $33, while the set with the broken pieces nets $23, because the average value of the dishes, some of which are broken, is perceived to be lower.
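The two modes of judgment can be sketched in a few lines of Python. The per-piece dollar values below are my own assumption for illustration, not figures from the study:

```python
# Toy illustration of the averaging heuristic Kahneman describes:
# judging a set by its average piece rather than its total value.
good_value, broken_value = 1.25, 0.0   # assumed per-piece values

set_a = {"good": 31, "broken": 9}      # 40 pieces, 9 broken
set_b = {"good": 24, "broken": 0}      # 24 pieces, all intact

def total_value(s):
    return s["good"] * good_value + s["broken"] * broken_value

def average_value(s):
    return total_value(s) / (s["good"] + s["broken"])

# Summing, Set A is worth more: it has 7 extra intact pieces.
assert total_value(set_a) > total_value(set_b)

# Averaging, Set A looks worse: the broken pieces drag the mean down.
assert average_value(set_a) < average_value(set_b)
```

The same inventory thus supports two opposite rankings, depending on whether the judge sums or averages.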

To Kahneman and Hsee, the less-is-more effect illustrates the fallibility of the brain: if the cups and saucers (which, between them, include only 7 pieces in good condition) are removed from Set A, Set A becomes worth more. To me, however, if I were shown Set A only, I would also have valued it at around $23, and if I were shown Set B only, I would also have valued it at around $33, and not because my brain is fallible (which it is), but because if I am shown, in isolation, a set of dinnerware with broken pieces, it makes me doubt the quality of the intact pieces! If, however, I can examine both sets, I can quickly see what the researchers are asking, which, to me, is: how much extra would I pay for 6 cups and 1 saucer? So, to me, this is not a case of the less-is-more effect, but rather the effect of the purchaser having less confidence in the quality of Set A because, out of 40 pieces, 9 are broken! That, to me, is a rather rational way of looking at Set A.

The Linda Problem

Imagine you are told this description of Linda:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

After hearing the description, you are then asked:

Which alternative is more likely?
a) Linda is a bank teller, or
b) Linda is a bank teller and is active in the feminist movement

When asked this question, 90% of undergraduates chose “b,” although by the laws of probability it is more likely that Linda is a bank teller than a bank teller who is active in the feminist movement. The reason is that there are more bank tellers than bank tellers who are also feminists. Kahneman takes this as conclusive evidence “of the role of heuristics in judgment and of their incompatibility with logic.” I have a problem with this.
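The conjunction rule behind the “correct” answer can be checked with a toy simulation. The base rates here are made up for illustration; only the subset relationship matters:

```python
# The conjunction rule: P(A and B) can never exceed P(A).
# A small simulated population makes this concrete.
import random

random.seed(0)
population = []
for _ in range(100_000):
    is_teller = random.random() < 0.02     # assumed base rate
    is_feminist = random.random() < 0.30   # assumed base rate
    population.append((is_teller, is_feminist))

tellers = sum(1 for t, f in population if t)
feminist_tellers = sum(1 for t, f in population if t and f)

# Every feminist bank teller is also a bank teller, so the subset
# can never outnumber the whole group:
assert feminist_tellers <= tellers
```

No matter what base rates you plug in, the conjunction can never be more probable than either of its parts.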

I get that there must be more bank tellers than bank tellers who are active in the feminist movement: bank tellers who are active in the feminist movement are a subset of the total number of bank tellers, which must be greater. But if, in the description of Linda, you tell me that she is “deeply concerned with issues of discrimination and social justice,” I am–if I were a participant in this study–going to try to cooperate with the questioners by anticipating what answer they want me to give. In this case, I would, even though I know that there are more bank tellers than feminist bank tellers, answer “b.” That I answered “b” is not, to me, conclusive evidence that my heuristics are incompatible with logic, as Kahneman argues. I was merely trying to be “helpful” by anticipating how the questioner wanted me to respond. And I was right: the questioner was trying to get me to say “b.” Only, the questioner was not on my side and was deliberately trying to deceive me. No fair.

As Kahneman himself writes, without the questioner’s diabolical deception, participants could get this question right. Take this question:

Which alternative is more probable?
a) Mark has hair
b) Mark has blond hair

Participants have no problem getting the answer right. The answer is “a.” What I find insulting about the Linda Problem is that “no good deed goes unpunished.” The participant is trying to be helpful, not knowing the diabolical intentions of the questioner. And when the questioner deceives the participant, the questioner takes this to be proof of an impaired logical system in the brain. This adds insult to injury.

Consider also this scenario. Let’s say I am the questioner and that I am twenty-five pounds overweight. I go up to the participant and ask: “Do you think I should lose some weight?” Let’s say the participant says: “You look great. No need for a diet.” Would a smarty-pants psychologist take this answer as proof that there is something wrong with the participant’s eyesight? I think, if the psychologist thought along the lines of Kahneman, the psychologist would say yes, clearly there is an issue with the participant’s eyesight. But what I would say is that the participant is trying to be a nice person by anticipating the socially correct answer. There is something rational about giving the socially correct rather than the objectively correct answer, and I think Kahneman gives this point less consideration than I would have.

The Hot Hand in Sports

On basketball, Kahneman debunks the idea of the hot hand:

Some years later, Amos and his students Tom Gilovich and Robert Vallone caused a stir with their study of misperceptions of randomness in basketball. The “fact” that players occasionally acquire a hot hand is generally accepted by players, coaches, and fans. The inference is irresistible: a player sinks three or four baskets in a row and you cannot help forming the causal judgment that this player is now hot, with a temporarily increased propensity to score. Players on both teams adapt to this judgment–teammates are more likely to pass to the hot scorer and the defense is more likely to double-team. Analysis of thousands of sequences of shots led to a disappointing conclusion: there is no such thing as a hot hand in professional basketball.

Kahneman explains the fallacy of the hot hand by a belief in what he calls the “law of small numbers,” the error of ascribing the law of large numbers to small numbers as well. What that means is that three or four shots is too small a sample size to demonstrate the presence of the hot hand.
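A quick simulation (my own, not the data of Gilovich, Tversky, and Vallone) shows why short streaks are weak evidence: even a purely random 50% shooter produces “hot” streaks all the time:

```python
# Count how often a 20-shot game by a purely random 50% shooter
# contains a run of 4 or more consecutive makes.
import random

random.seed(1)

def has_streak(shots, length=4):
    run = 0
    for made in shots:
        run = run + 1 if made else 0
        if run >= length:
            return True
    return False

n_games = 10_000
games_with_streak = 0
for _ in range(n_games):
    shots = [random.random() < 0.5 for _ in range(20)]
    if has_streak(shots):
        games_with_streak += 1

# Roughly half of purely random games contain a 4-make streak,
# so a short streak alone says little about a genuinely hot hand.
print(games_with_streak / n_games)
```

This is the statistical core of the “law of small numbers” objection: streaks of three or four are exactly what chance alone produces.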

Famed Boston Celtics coach Red Auerbach, when he heard of the study, said: “Who is this guy? So he makes a study. I could care less.” I agree with him. Suppose you are coach of the Chicago Bulls in the 1990s. You are down two points with ten seconds on the clock. Michael Jordan has been on fire. Or at least he seems to have the hot hand, having sunk his last five shots (some of which are high-percentage dunks). Dennis Rodman, on the other hand, is ice cold, having bricked his last five shots. Let’s say, to make this thought experiment work, that Jordan and Rodman have the same field goal percentage. Who would you pass the ball to? Maybe “Team Psychology” would pass the ball to Rodman: he does not have the cold hand because such a thing does not exist. But the real-world team would pass the ball to Jordan. I think any coach who does not want to be fired or have the players revolt would pass the ball to Jordan. As they say, in theory there is no difference between theory and practice, but in practice, there is.

Again, I understand what Kahneman is saying about small sample sizes. Small sample sizes can lead you astray. But what I have to say is this: in the absence of further data or more samples, you have to go with the data you have. That is the real world. In sports, you don’t have the luxury of looking at the player’s next ten shots to see if the player really has a hot hand. If the player seems to have a hot hand, you go with it.

Another objection I have to Kahneman’s debunking of the hot hand is that basketball players do, in real life, increase their field goal percentage. In his fourth year in the NBA, Shawne Williams, a player for the New York Knicks, improved his 3-point field goal shooting percentage from 6 percent to 51 percent. If you knew him as a 6 percent shooter, and he hit three or four three-pointers in a row, and you dismissed his hot hand, well, you would be wrong: his shooting percentage really did move up from 6 percent to 51 percent! That year, he would have seemed to have the hot hand, and that hot hand was, statistically, real! As players hire shooting coaches and sports psychologists and move their shooting percentages higher, their hot hands will have been a real phenomenon. I don’t see how Kahneman and his friends could argue, on a probabilistic and mathematical basis, against the fact that players sometimes improve and, in the process of improving, have the hot hand.

Regression to the Mean

Air force cadets who do well one day will generally do worse the next day and cadets who do poorly one day will generally do better the next day. It is the same with golfers, claims Kahneman. This phenomenon is called the reversion or regression to the mean. Good performances will be balanced by poor performances so that, in the long term, the average is maintained.
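The mechanism is easy to demonstrate with a toy model in which each day’s score is true skill plus luck. All the numbers below are my own assumptions, chosen only to make the effect visible:

```python
# Regression to the mean in miniature: score = skill + luck.
# Cadets who scored high on day 1 were, on average, also lucky,
# so their day-2 scores drift back toward their true skill.
import random

random.seed(2)

def day_score(skill):
    return skill + random.gauss(0, 10)   # same skill, fresh luck

skills = [random.gauss(50, 5) for _ in range(10_000)]  # true skills
day1 = sorted(((day_score(s), s) for s in skills), reverse=True)

top = day1[: len(day1) // 10]            # best 10% on day 1
avg_day1 = sum(score for score, _ in top) / len(top)
avg_day2 = sum(day_score(skill) for _, skill in top) / len(top)

# Yesterday's stars fall back toward the mean on day 2:
assert avg_day2 < avg_day1
```

No praise or punishment is needed to produce the effect; selection on a noisy score is enough.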

Kahneman extends the phenomenon of the regression to the mean to companies: a business that did poorly last year, he claims, can be expected, by the action of probability, to do better the next year. Now, this idea can be tested in the stock market. There is a strategy called the “Dogs of the Dow” that works by arbitraging the regression to the mean. Each year, an investor buys the ten “dogs,” or poorest performers, in the thirty-stock Dow Jones Industrials Index. At the beginning of each year, the investor sells the previous year’s dogs and buys the new dogs, the poorest performers of the calendar year just ended. If, as Kahneman claims, businesses obey the regression to the mean, then by buying the poor performers, an investor should be able to do better than a buy-and-hold investor who holds all the stocks in the index.

This is not the case. With dividends reinvested, the Dogs of the Dow strategy returned 10.8% over the twenty years ending in 2020. Buying and holding all the Dow stocks for the same twenty-year period would have also returned 10.8%. If Kahneman were correct about the regression to the mean, one would expect the Dogs of the Dow strategy to have produced a return in excess of 10.8%. It did not. There may be momentum effects at play, where winners continue, despite probability, producing outsized returns and losers, despite probability, produce diminished returns.

The regression to the mean is a real phenomenon. That I don’t doubt. But if Kahneman says it applies to businesses, it must be investable in real life. If it isn’t, then it’s just a fancy-sounding term. You know, Kahneman might be right that businesses revert to the mean. But he talks as though he is sure of the phenomenon without giving a real-world proof. Take the entire Japanese stock market, the Nikkei 225. It had a bad year in 1990. A very bad year. If I had listened to Kahneman, I would have backed up the truck to buy Japanese stocks in 1991. Now, almost thirty years later, the Nikkei is still below its 1991 levels. Regression to the mean?

Regression to the mean may be real, but it is not as easy as Kahneman makes it out to be. There is a certain momentum in businesses and countries that defies regression to the mean for years, decades, and centuries. It strikes me that regression to the mean works if you are looking backwards at the data. Say, after a century, you already know what the average is. You already have the data. Of course regression to the mean will work. But if you are looking forwards and do not have the data already, things change, trends emerge, industries fail: for example, when digital photography came into style, a company like Kodak was not going to revert to the mean! It was going to go bankrupt.

Prospect Theory

Prospect Theory is the feather in Kahneman’s cap. He won the 2002 Nobel Prize in Economics for it. Prospect Theory looks at how behaviour changes under the psychological loads of loss or gain. For example:

-In mixed gambles, where both a gain and a loss are possible, loss aversion causes extremely risk-averse choices.
-In bad choices, where a sure loss is compared to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.

Prospect Theory explains why people buy insurance (even though it is an irrational, money-losing practice, in aggregate and in the long run), why people buy lottery tickets, why people pay lawyers too much to settle instead of fighting it out in court (the large “structured settlements” industry), and the psychology that drove a con man like Bernie Madoff to seek more and more risk to avoid loss. To draw his conclusions, Kahneman would ask test participants questions such as:

Problem 1: Which do you choose?
Get $900 for sure OR 90% chance to get $1,000

Problem 2: Which do you choose?
Lose $900 for sure OR 90% chance to lose $1,000
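Note that in each problem the two options have identical expected value, so the questions isolate risk attitude rather than arithmetic. A quick check:

```python
# Both problems pit a sure outcome against a gamble with the same
# expected value; a pure expected-value maximizer is indifferent.
sure_gain = 900
gamble_gain = 0.90 * 1000 + 0.10 * 0      # expected gain = 900
assert sure_gain == gamble_gain

sure_loss = -900
gamble_loss = 0.90 * -1000 + 0.10 * 0     # expected loss = -900
assert sure_loss == gamble_loss
```

The typical finding is that people take the sure $900 in Problem 1 but gamble in Problem 2: risk-averse over gains, risk-seeking over losses.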

His questions are designed to “tell us about the limits of human rationality. For one thing, it helps us see the logical consistency of Human preferences for what it is–a hopeless mirage.” I agree with Kahneman that human rationality is severely limited. Even free will, in my view, could be an illusion. E. O. Wilson, in a series of books including On Human Nature, has laid out an argument that convinces me of the limitations of the mind, which, Wilson argues, is a product of evolution conditioned to Stone Age rather than Space Age environments. Kahneman’s arguments fail to persuade me because they presuppose that, should the participant confront the question in real life, the participant would react in the same way as in the experiment, where the participant knows the question is not real, is only a question in a study. That is a big jump, and one that has been demonstrated to be false. There are, for example, ongoing litigations involving the “Know Your Client” (KYC) forms that investment banks use. Financial advisors gauge their clients’ appetite for or aversion to risk by asking them questions such as the ones Kahneman asks the participants in his studies. As it turns out, some clients said, on paper, that they had a great appetite for risk. But when loss happened, they found that, in real life, this was not true. So they sued. Others said, on paper, that they had little risk tolerance. When, however, in real life, they saw how they missed the boat on outsized investment returns, they found out that they actually had a propensity for risk. And they sued. The Achilles’ heel of Prospect Theory is that Kahneman asks participants questions on paper and draws far-reaching conclusions on the assumption that their answers transfer over to real life. People do not behave the same way in real life as they do on paper. You cannot ask people paper questions and construct a real-world theory from their paper responses. No, no, no!

His method, in my eyes, would be like an anthropologist who polls different tribes. So, instead of observing what a tribe actually does, this anthropologist would give the tribespeople a poll. For example, the anthropologist would ask:

Problem 1: One year, your crop yield goes down 25%. Would you:
a) attack the neighbouring tribe or
b) increase hunting activities

Then, if the participants answer “a,” this anthropologist would conclude that “the tribe is aggressive” or reach some other far-reaching conclusion. But if the participants answer “b,” the anthropologist would conclude that the tribe is pacifist. This would be ludicrous. But this seems to be what Prospect Theory is based upon.

As they say, in theory, there is no difference between theory and practice, but in practice, there is.

Government Spending

During the year that we spent working together in Vancouver, Richard Thaler, Jack Knetsch, and I were drawn into a study of fairness in economic transactions, partly because we were interested in the topic but also because we had an opportunity as well as an obligation to make up a new questionnaire every week. The Canadian government’s Department of Fisheries and Oceans had a program for unemployed professionals in Toronto, who were paid to administer telephone surveys. The large team of interviewers worked every night and new questions were constantly needed to keep operations going. Through Jack Knetsch, we agreed to generate a questionnaire every week, in four color-labeled versions. We could ask about anything; the only constraint was that the questionnaire should include at least one mention of fish, to make it pertinent to the mission of the department. This went on for many months, and we treated ourselves to an orgy of data collection.

That Kahneman mentions this I find disturbing. From what I gather, times were tough. There were many unemployed. So the Canadian government hires three top-gun economists (because purse strings must be tight), two of whom are American (because Canadian economists do not need the work), to conduct surveys which are meaningless to the participants, the government, and Canadian citizens. The government, however, markets this program as being relevant to Canada’s fishing industry: after all, each questionnaire must mention a fish. Of course, after the brilliant economists get the data they want for their pet experiments, they publish it in a book and throw the Canadian government under the bus: the survey, they say, really helped them and had nothing to do with fisheries and oceans. They had gamed taxpayer money for their own benefit. This smacks of elitism. It also strikes me as deeply ironic: the study they were working on was “fairness in economic transactions.” Yikes.

That he printed this makes me wonder if he understands the real world. He talks of Davos, the party place of the billionaires. He goes through his book like some hero-psychologist, looking at everyone else’s blind spots. He talks about how he mentioned one story at Davos, and someone overhearing said “it was worth the whole trip to Davos just to hear that,” and that this person “was a major CEO.” Wow. It would have been fine if someone in another book had said that about Kahneman. But for him to say this about himself in his own book?

Spider-sense Tingles “Danger”

Thinking, Fast and Slow is a book I had wanted very much to like. I had hoped to learn more about mental biases that would have been of use in the new book I’m writing on a theory of comedy. The more I read Thinking, Fast and Slow, however, the more my spider-sense was tingling “danger.” I voiced my disapproval of the book to friends and to my book club. People said: “You don’t like the book because you probably weren’t smart enough to answer his questions.” Other people said: “But he has won a Nobel Prize. Who are you to disagree?” It makes me laugh a little bit that people will say that I am irrational while themselves using ad hominem attacks, the rationality of which itself is doubtful.

I remember a story about two other Nobel Prize winners, also, like Kahneman, in the economics category. In 1997, Myron Scholes and Robert C. Merton won the Nobel Prize in Economics. A few years prior, they had helped start one of the largest hedge funds in the world, Long-Term Capital Management. While they were winning the Nobel Prize, a journalist looked into the workings of their hedge fund and called them out for being overleveraged: with $4 billion of their own and investors’ capital, they had borrowed in excess of $120 billion. The journalist accused them of “picking up pennies in front of a bulldozer.” Scholes and Merton shot back: “Who are you to question us, lowly journalist? We are Nobel Prize winners.” A year later, Long-Term Capital Management collapsed, taking the global financial system to the brink of collapse. How the mighty are fallen.

Kahneman comes across as the hero-psychologist pointing out others’ errors. But I wonder if he ever looked at the beam in his own eye? I did a quick search on Google for the robustness of psychological experiments, the sort that are published in respected peer-reviewed journals. I found that less than half of such studies can be replicated. What sort of “science” is this? It’s as if you had a theory of gravitation, published in a leading journal such as Science, that predicted the moon would be at this place at this time. You “proved” it once and published it. But no one else can replicate it. And your theory is still accepted as canon, not to be questioned? I wonder, down the road, how robust many of Kahneman’s findings will prove. Time will tell.

A 2015 Reproducibility Project study found that only 39 out of 100 psychology experiments could be replicated, even after extensive consultation with the original authors:

https://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html

A 2018 replication study found that only 14 out of 28 classic psychology experiments could be replicated, even under ideal conditions:

https://www.nature.com/articles/d41586-018-07474-y

– – –

Don’t forget me, I’m Edwin Wong, and I do Melpomene’s work.
sine memoria nihil