Statistics, Basketball and Cognitive Bias
The fallacy of the Hot Hand Fallacy
19th June 2020
For decades cognitive psychologists used the ‘hot hand fallacy’ as a perfect example of how human intuition can lead people to false beliefs. Meanwhile, the basketball community stubbornly clung to the idea of streaky shooting. Then two economists discovered a pair of small but crucial errors in the original study that flipped the debate on its head.
Toronto’s Fred VanVleet driving on Washington’s Ian Mahinmi in 2018
By Andrew Hillman
Is basketball shooting any different from flipping coins?
For the second time in as many weeks, the Toronto Raptors were on the brink of elimination from the 2019 NBA playoffs. On May 12th they had squeezed past the Philadelphia 76ers with a sudden-death victory thanks to a miraculous buzzer beater by superstar Kawhi Leonard. By the 19th, the Raptors were in danger of falling to a three-game deficit against the Milwaukee Bucks before further heroics from Leonard ensured a double-overtime victory. Defeat would have left the Raptors in the most precarious of positions – in the seventy-one-year history of the NBA, no team has ever come back from a 0-3 deficit to win a playoff series.
The Raptors were still alive, but for Toronto’s young point guard Fred VanVleet, the celebrations were tempered. It had been another torrid game for VanVleet individually. He scored on just one of eleven attempts and committed three turnovers – a performance that epitomised a playoff campaign in which he was shooting below 20% from three-point range. To date, VanVleet’s average shooting percentage from three-point range across his entire career is 39%.
During slumps in form, a little distraction can be welcome. The following morning VanVleet flew to his hometown of Rockford, Illinois, arriving in time for the birth of his second child, Fred Jr.
This moment marked a turning point for his and the Raptors’ form. Toronto won the next three games to progress to the championship finals, and VanVleet scored a remarkable fourteen of seventeen (82%) from three-point range. His hot streak continued as the Raptors defeated the reigning champion Golden State Warriors in six games.
Cue the tongue-in-cheek suggestions that Fred Jr. deserved some (or even all) of the credit for Toronto’s title run. He was born two weeks early; would the Raptors’ reversal in form have come too late if he had waited for the due date?
US sports magazine The Ringer noted that following the birth of his daughter in January 2018, VanVleet also enjoyed a three-week stretch of exceptional long-range shooting. In the brilliantly titled ‘Did Newborn Babies Drive The Raptors Playoff Success?’, the Huffington Post pointed out that Rocky, the son of Raptors’ coach Nick Nurse, was also born on May 20th. In response to the newborn attention, VanVleet wryly commented, “I wish I could go back in time and not tell anyone that I had a kid so I could get all the glory for turning around my performance.”
Nobody was seriously suggesting well-timed procreation as a strategy for boosting future playoff fortunes, but did Fred Jr. really make a difference? VanVleet said it provided “a little perspective,” while his teammate Kyle Lowry felt it “relaxed him a little bit.”
The answer, according to a 1985 study titled ‘The Hot Hand in Basketball: On the Misperception of Random Sequences’, is an unequivocal no. In fact, the authors – psychologists Thomas Gilovich, Robert Vallone and Amos Tversky – went further; not only is the birth of his son not the reason for VanVleet’s improved shooting, there is no reason – it is simply the product of random variability. In other words, the ways we explain variations in performance – ‘momentum’, ‘rhythm’, being ‘in the zone’ or having a ‘hot hand’ – are all illusory concepts, dreamed up by our overly associative brains.
For the study, Gilovich et al. interviewed one hundred basketball fans. Each participant considered a hypothetical player who scores one in every two shots on average. They were then asked to estimate the player’s scoring percentage on an attempt following a score (on average they answered 61%) and on an attempt following a miss (on average they answered 42%).
Gilovich predicted that the true hot hand premium would be smaller than the fans’ estimates. To test his hypothesis, data was collected for the Philadelphia 76ers home games during the 1980-81 season. The 76ers were chosen, quite simply, because they were the only team recording the order in which shots were attempted. This was thanks to the team’s media relations director Harvey Pollack, an early pioneer of statistics in basketball who was also the first to record assists, blocked shots and separate offensive and defensive rebounds. Gilovich et al. then identified occasions where a 76er scored on three shots in a row and marked the subsequent shot as the ‘event’ – was the player more likely than average to score following three consecutive made field goals (i.e. when he was hot)?
The results did not simply show that the hot hand was smaller than fans expected: the 76ers were slightly less likely than average to score following three consecutive successful attempts.
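The core calculation can be sketched in a few lines of Python; the shot sequence below is invented purely for illustration, not the 76ers’ actual data:

```python
def rate_after_streak(shots, k=3):
    """Hit rate on attempts that immediately follow k consecutive makes.

    `shots` is a sequence of 1s (makes) and 0s (misses) in the order they
    were attempted. Returns None if no attempt follows a k-streak.
    """
    events = [shots[i] for i in range(k, len(shots)) if all(shots[i - k:i])]
    return sum(events) / len(events) if events else None

# A made-up shot sequence purely for illustration
shots = [1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1]
overall = sum(shots) / len(shots)
print(f"overall hit rate: {overall:.2f}")                            # 0.70
print(f"hit rate after three straight makes: {rate_after_streak(shots):.2f}")  # 0.40
```

On this invented sequence the player hits 70% overall but only 40% immediately after three straight makes – the kind of gap the study measured across real attempts.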
This was surprising but hardly conclusive proof that the hot hand was an illusion. Gilovich et al. acknowledged that overconfidence (leading players to attempt more difficult shots) and shifting defensive strategies (leading to more tightly guarded shooters) could mask evidence of a hot hand in open play. Therefore, they conducted a second study looking at free throw data for the Boston Celtics (calculating the probability of scoring the second attempt given that the first shot was successful). Free throws are typically taken in pairs. Unlike shots in open play, each free throw is identical – the shot is taken from the same location and defenders cannot move until the ball is released. Once again, the analysis revealed a small negative relationship – if anything, a slightly cold hand.
Free throws, however, only allow for evaluation of a narrow definition of hot handedness. What if it takes more than one successful shot to get into the zone, or if the stop-start nature of free throw attempts disrupts a shooter’s rhythm?
Any analysis of actual game data seemed susceptible to a long list of similar caveats, and so the researchers’ final experiment became key to the strength of their argument. They replicated their methodology from the study of 76ers shooting (recording the outcome of ‘events’ – shots following three consecutive scores), but this time in a controlled environment, with twenty-six basketball players at Cornell University each attempting 100 shots from the same location on the court and with no defenders. For the third time in a row, Gilovich et al. found a small inverse correlation – it was the researchers, not the shooters, who had hit upon a hot streak.
How did the hot hand fallacy impact the world of psychology?
Initially, the basketball community was dismissive of the study. When sixteen-time NBA champion coach and executive Red Auerbach heard about the findings, he said, “Who is this guy? So he makes a study. I couldn’t care less.” For Gilovich, the backlash was actually convenient. As he explained in his book “How We Know What Isn’t So”:
“In the grand scheme of things, whether or not basketball players shoot in streaks is not particularly important. What is important is the suggestion — conveyed with unusual clarity by the basketball example — that people chronically misconstrue random events, and that there may be other cases in which truly random phenomena are erroneously thought to be ordered and ‘real’.”
The more stubbornly fans, pundits, players and coaches challenged the theory, the stronger Gilovich’s argument appeared. It drew attention to the study’s findings and demonstrated that the intuition to seek patterns is so powerful that even in the face of compelling evidence that no such relationships exist, people were unable to change their minds.
In the subsequent years, the hot hand fallacy has ridden a wave of enthusiasm for cognitive psychology and behavioural economics. The general public’s new curiosity about heuristics, biases and cognitive flaws has propelled two books to the top of bestseller charts: ‘Thinking, Fast and Slow’, written by Tversky’s long-time collaborator and 2002 Nobel Prize recipient Daniel Kahneman, and ‘Nudge’, co-authored by Cass Sunstein and Richard Thaler (Thaler received his own Nobel prize in 2017).
The hot hand even made it to Hollywood when, in the 2015 film ‘The Big Short’, Thaler and Selena Gomez used the concept to explain how the trading of collateralised debt obligations created a bubble that ultimately brought down the US housing market. During this scene, a spectator standing behind Thaler can be seen wearing a number 11 Golden State Warriors jersey – the number of Klay Thompson, widely regarded as the streakiest shooter in basketball. Thaler has claimed the choice of attire was a pure coincidence.
How does our tendency to spot patterns in random data affect our decisions?
Curiously, the hot hand fallacy persists even when we participate in games that we know are based on pure chance. Behavioural economists Rachel Croson and James Sundali studied eighteen hours of video footage from a roulette table at a casino in Nevada, recording 139 individuals placing a total of 24,131 bets. The research assistant who worked on the painstaking process of translating the video footage into numerical data also deserves a share of the credit. Croson and Sundali found that during a hot streak, players were more likely to remain at the table and to bet more aggressively, increasing the number of bets they placed on each spin. A key strength of their research is that it observed authentic behaviour – individuals placed bets with their own money, unaware that their choices would later be used for scientific research. However, this real-life scenario also leads to limitations: players may leave the table after losses because they run out of money, and may bet more aggressively following wins simply because they now have more money to play with. Fortunately, studies in controlled environments have accounted for these factors and still found evidence for the hot hand in gambling decisions.
Croson and Sundali also recorded the outcome of each spin: did the ball land on a red, black or green number? They noticed that following a streak of spins all landing on the same colour, players’ bets changed dramatically. After five consecutive spins of the same colour, 75% of outside bets were placed on the other colour. After six consecutive spins, the percentage rose to 85%.
This is the gambler’s fallacy. It is also often referred to as the Monte Carlo fallacy after the most famous case, from the Casino de Monte-Carlo in 1913, where the ball landed on black twenty-six times in a row and gamblers, believing that the next spin was bound to land on red, lost huge sums of money. When an outcome depends on an individual’s agency or skill, we get the hot hand fallacy: we are inclined to believe that past performance is indicative of future performance. But clearly the mechanical process of a spinning roulette wheel involves neither agency nor skill, and so we tend towards the opposite feeling – that since spins landing red and spins landing black should balance out in the long run, a black outcome becomes increasingly ‘due’ after a streak of red outcomes.
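The independence at the heart of the gambler’s fallacy is easy to check by simulation. The sketch below is my own illustration, not Croson and Sundali’s analysis: it models a European wheel with 18 red pockets out of 37 and compares the frequency of red after five consecutive reds with the unconditional frequency:

```python
import random

random.seed(0)
P_RED, SPINS = 18 / 37, 1_000_000   # European wheel: 18 red pockets out of 37

spins = [random.random() < P_RED for _ in range(SPINS)]   # True = red

# Frequency of red on spins that immediately follow five consecutive reds
after_streak = [spins[i] for i in range(5, SPINS) if all(spins[i - 5:i])]
print(f"P(red) overall:          {sum(spins) / SPINS:.3f}")
print(f"P(red) after five reds:  {sum(after_streak) / len(after_streak):.3f}")
```

Both frequencies come out at roughly 0.486 – each spin is independent, so conditioning on the streak changes nothing, and the feeling that the other colour is ‘due’ has no basis in the mechanics of the wheel.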
In 2016, the Quarterly Journal of Economics published research by Daniel Chen, Tobias Moskowitz and Kelly Shue at Yale showing that baseball umpires are susceptible to the gambler’s fallacy. They found that for a pitch on the edge of the hitter’s strike zone, the umpire was more likely to declare a strike if they had called the previous pitch a ball, and more likely to declare a ball if they had called the previous pitch a strike.
More alarmingly, Chen, Moskowitz and Shue found the same pattern for both loan officers and asylum judges deliberating over applications. An applicant had a lower chance of being granted asylum if the judge evaluated their application following a streak of approved cases.
By focusing on the errors, these examples encourage us to view our penchant for pattern spotting as an undesirable skill. However, Gilovich has pointed out that the same intuitions that cause us difficulty with random sequences also help us to detect genuine relationships hidden within noisy data. “How We Know What Isn’t So” contains two important examples: Ignaz Semmelweis developed the practice of antisepsis after first recognising that instances of puerperal fever were more common among clinic births than street births, and then that mortality rates were highest for clinicians who also conducted autopsies; and Charles Darwin’s theory of evolution arose after he spotted similarities in the beaks of different species of finches and mockingbirds on the Galapagos islands.
This visualisation includes random data generated in the browser. Therefore, it will occasionally produce individual simulations that are incongruous with the clustering illusion.
How was the hot hand fallacy disproven?
The cognitive errors that lead us astray when looking at random data are rarely disputed, but the hot hand fallacy extended beyond cognitive psychology. Gilovich et al. did not just argue that we overlook random variation as an explanation for streaky shooting, they claimed that it was the only explanation – that basketball was just a disguised contest in coin-flipping.
For many academics this was a difficult conclusion to accept. At a critical juncture in the 2018 NBA playoffs, the Houston Rockets missed twenty-seven consecutive three-pointers – undoubtedly an unlucky streak, but was there really nothing else affecting their shooting? As Thaler noted in ‘Nudge’ in 2008, “Many researchers have been so sure that the original Gilovich results were wrong that they set out to find the hot hand. To date, no one has found it.”
Twelve years on, a more nuanced reading of the contesting literature is required. After years of unconvincing challenges, hot hand believers went on a streak of their own, starting with the 2014 and 2016 MIT Sloan Sports Analytics Conference research paper shortlists, both of which included hot hand studies. On the latter occasion FiveThirtyEight reported the event with the headline “The Hot Hand: Part 1,000,000”, in dry reference to the protracted nature of the debate.
The first was published by Andrew Bocskocsky, John Ezekowitz and Carolyn Stein of Harvard University. They noted that a key limitation of the original study – that it did not account for the varying difficulty of each attempted shot in open play – could be corrected using modern technology. Bocskocsky developed a shot difficulty model based on optical tracking data and applied it to the 2012-13 NBA season. Without the model, the analysis matched that of Gilovich et al. thirty years earlier, but when adjusted to control for shot difficulty, the results demonstrated a small positive dependency – a hot hand.
Jeffrey Zwiebel, a professor at Stanford University, had similar concerns. He felt that basketball was an imperfect testing ground because it had too many moving parts (such as shot difficulty) that would be difficult to rigorously control for. In baseball, the ability of the pitcher varies, and batters typically have a better on-base percentage for home games than on the road, but not much else changes – it’s a more uniform contest. Zwiebel and Brett Green, a professor at Berkeley, studied twelve years of MLB batting data. Their analysis ruled in favour of a hot hand – large enough to justify strategic coaching decisions based on recent form.
In 2017, FiveThirtyEight joined the debate, using novel research to show that MLB pitchers experience hot and cold streaks, and that the speed of a pitcher’s fastballs can be used to identify those streaks as they are occurring.
But the biggest breakthrough was made in a small city on the edge of the Mediterranean. Joshua Miller and Adam Sanjurjo, economists at the University of Alicante in Spain, decided to apply Gilovich’s methodology to data from three decades of NBA three-point contests. The NBA three-point contest occurs during the All-Star break, an interlude to the regular season that sees the league’s best players participate in semi-competitive, PR-heavy, sort-of-basketball contests: the skills challenge, the slam-dunk contest, the All-Star game and the three-point contest. The three-point contest sees the league’s top shooters attempt twenty-seven shots from different positions along the three-point line (with a seventy-second time limit). To add drama, there is an overly complicated scoring system that I will not bother to explain. To their surprise, they found that Craig Hodges, who scored nineteen consecutive shots in 1991 and never missed more than five in a row, did not possess a hot hand. Perplexed by the results, Miller and Sanjurjo broke with scientific orthodoxy, refusing to accept the legitimacy of their own analysis.
Their scepticism paid off. As they scrutinised Gilovich’s original study, they identified two peculiar statistical errors that biased the experiment against evidence for streaky shooting. Their resulting paper corrected for this statistical bias and showed evidence of a positive correlation – smaller than many fans or players would predict, but a hot hand nonetheless.
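The bias they identified is easiest to appreciate through simulation (a sketch of my own, not Miller and Sanjurjo’s code). Generate many 100-flip sequences of a fair coin and, within each sequence, measure the proportion of heads on flips that follow three consecutive heads – exactly how Gilovich et al. measured shooting. Although every flip is an independent 50/50 event, the average of these within-sequence proportions falls noticeably below 50%; selecting flips by looking back at streaks within a finite sequence drags the measurement downwards (the bias shrinks as sequences get longer, but is substantial at the sample sizes of the original study):

```python
import random

def rate_after_streak(flips, k=3):
    """Proportion of heads on flips that follow k consecutive heads."""
    events = [flips[i] for i in range(k, len(flips)) if all(flips[i - k:i])]
    return sum(events) / len(events) if events else None

random.seed(7)
rates = []
for _ in range(10_000):                        # many independent 100-flip sessions
    flips = [random.random() < 0.5 for _ in range(100)]
    r = rate_after_streak(flips)
    if r is not None:                          # skip rare sessions with no streaks
        rates.append(r)

print(f"mean within-sequence rate: {sum(rates) / len(rates):.3f}")  # well below 0.5
```

A fair coin measured this way looks like a cold-handed shooter, so a real shooter whose streaks merely cancelled out the bias – as in the 76ers and Cornell data – is actually showing evidence of a hot hand.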
Met Office Integrated Data Archive System (MIDAS)
The daylight temperature data is for Kirkwall, a small town in the Orkney Islands. Because the simulations generate random data live in the browser, results will occasionally occur that are incongruous with the explanation of Miller and Sanjurjo’s research.
How do psychologists, basketball players and coaches talk about the hot hand?
For decades, the basketball community and cognitive psychologists were stuck in stalemate, with Miller and Sanjurjo’s statistical errors lying undiscovered in the no man’s land between them. With neither side able to convince the other, they each resorted to a ‘we know the truth, you can believe what you want to believe’ attitude.
Basketball players, coaches and fans all say the same thing: those who have played the sport know that the hot hand exists – some days you are ‘in the zone’ and cannot miss. Steve Kerr, the Golden State Warriors’ three-time champion coach, thinks the hot hand exists “for sure.” In an interview with the Bay Area’s Mercury News, he described the feeling:
“Everything just feels lighter. The ball feels lighter, your body feels lighter. It just seems like you’re floating out there. The rim looks big. Your thoughts aren’t here nor there. You’re just out there and playing and you just lose yourself in the game. That’s the beauty of being in that zone: It’s the total connection of your mind and body. The beauty of that, as an athlete, is what we’re all looking for; because it’s so easy for the mind to get in the way. So when I see Steph [Curry] and Klay [Thompson] getting into that zone, that’s what I see, is them just completely losing themselves in the game.”
Following Kobe Bryant’s legendary 81-point game in 2006, Los Angeles Lakers five-time NBA champion coach Phil Jackson said, “When you have to win a game, it’s great to have that weapon to be able to do it. We rode the hot hand.” And after Kyrie Irving put up a career-high 57 points to defeat the San Antonio Spurs in 2015, his teammate LeBron James said, “when you have a guy like that who has a hot hand, you figure out a way, find a way, to get him the ball every time down if need be.” The anecdotal evidence that basketball players’ belief in the hot hand influences their decisions on the court is supported by empirical research. Benedict Brady, of the Harvard Sports Analysis Collective (the same place where John Ezekowitz and Carolyn Stein first took an interest in the hot hand), has shown that following a score, players are significantly more likely to take their team’s next shot. For example, during the 2016-17 season, Steph Curry attempted 36% of his team’s shots following a score and only 18% following a miss. Then in February, following a fourth quarter in which he scored five consecutive three-point attempts (once again, in a victory over the Spurs), James responded to a question about the hot hand fallacy by saying, “I guarantee the analytics person or people has never ever been in the zone in their life.” In fact, Gilovich is, by all accounts, a very capable basketball player. In the book ‘The Hot Hand: The Mystery and Science of Streaks’, the author Ben Cohen claims that Gilovich once led the psychology department’s team to the finals of the Stanford intramural tournament.
Cognitive psychologists have never disputed the feeling of being in the zone; they have contested the belief that it correlates with better shooting performance. As Gilovich explained in an interview with Dan Ariely:
“[The hot hand fallacy] conflicts with this very palpable thing that you think you see when you watch a game of basketball, and it’s even more powerful when you play. You play the game, you make several shots, everything just seems to be coming together, it just seems like you’re more likely to make the next shot.”
In other words, the hot hand is so intuitively powerful that basketball players cannot let go of their long-held beliefs about streaky shooting, even in the wake of empirical evidence. According to Gilovich, Tversky used to say, “I’ve been in one-thousand arguments over this topic. I’ve won them all, and I’ve convinced no one.”
Psychologists have also argued that once the hot hand becomes an established belief, it is especially difficult for subjective evidence to do anything but confirm it. When a player makes four shots in a row it is labelled as proof of the hot hand, but when they score three in a row and miss the fourth, this is not interpreted as disconfirming evidence. Instead, it is viewed as further evidence of streaky shooting (where the streak in this case is only three).
As Gilovich et al. point out in the original study, “If random sequences are perceived as streak shooting, then no amount of exposure to such sequences will convince the player, the coach, or the fan that the sequences are in fact random. The more basketball one watches and plays, the more opportunities one has to observe what appears to be streak shooting.”
Did the hot hand fallacy ever really pass the common sense test?
From asylum judges to roulette wheels to music playlists, there is plentiful evidence to support Gilovich, Tversky, Thaler and Kahneman’s assertion that human intuition is predisposed to underestimate the influence of random variation in producing patterns in data.
But they are also guilty of failing to practise what they preached. They too suffered from confirmation bias, leading them to view criticism from people with valuable first-hand experience purely through the lens of cognitive illusions, as further evidence for the irresistible strength of our misguided intuitions. Furthermore, their attachment to the hot hand fallacy led them to overlook the logical fragility of the finding.
Why was it logically fragile? The 1985 study implied one of two possibilities: either the hot hand was a cognitive illusion, or the hot hand was very difficult to detect. Although cognitive psychologists quickly became proponents of the former, it was the latter possibility that always seemed most plausible.
Basketball players are not random processes like coin flips – we would expect their shooting to be affected by a variety of factors: confidence, motivation, the quality and fit of their teammates, the defender’s ability, the tactics deployed by each team, injuries, fatigue and illness. If these factors create a dependency between consecutive shots (which they theoretically should), we will get hot and cold hands. The accompanying methodology article includes an explanation of why these factors create dependencies between consecutive shots and simulates situations where they create small hot and cold hands.
Considering the multitude of different variables involved, all of which are hard to separate from the complex data generated during a season of basketball, finding these dependencies could be extremely challenging. But that does not mean that they do not exist.
Gilovich has spent much of his career exploring the human tendency to jump to conclusions. In a 1991 essay for the Wilson Quarterly, he quoted the British philosopher John Stuart Mill saying, “every erroneous inference involves the intellectual operation of admitting insufficient evidence as sufficient.” And yet, Gilovich failed to recognise the problem with forming conclusions based on a study that produced only statistical noise.
As the famous baseball statistician Bill James argued, “Random data…cannot be used as proof of nothingness…Whenever you do a study, if your study completely fails, you will get random data. Therefore, when you get random data, all you may conclude is that your study has failed.” This quote comes from a fascinating 2004 essay titled ‘Underestimating the Fog’, in which James expressed doubt about many of the statistical results that he had helped to produce. He had reached the conclusion that a popular method of analysis – search for evidence of a widely accepted theory, fail to find evidence, conclude that the theory is illusory – was invalid. James claimed that failing to find evidence could simply mean that the data was much noisier (i.e. patterns were harder to spot) than the statisticians anticipated.
On the hot hand, James said, “The methods that are used to prove that a hot hitter is not really hot, in my opinion, would reach this conclusion whether hot hitters in fact existed or whether they did not.”
In fact, a 2003 study by statisticians Kevin Korb and Michael Stillwell from Monash University demonstrated that Gilovich et al.’s original study of the Philadelphia 76ers was considerably statistically underpowered – if a hot hand did exist, you would need far larger data samples to be sure of finding it. In other words, there is an important difference between ‘we have found no evidence for a hot hand’ and ‘the hot hand does not exist.’ Absence of evidence does not imply evidence of absence.
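Korb and Stillwell’s point about power can be illustrated with a simulation (a sketch with invented parameters – a 50% shooter who gets a genuine 5-percentage-point boost after three straight makes – rather than their actual analysis). Even when a hot hand is deliberately built in, a season-sized sample of shots measures it so noisily that the shooter frequently appears cold:

```python
import random

def simulate_season(n_shots, base=0.5, boost=0.05, k=3):
    """Shots from a player whose hit probability genuinely rises by
    `boost` whenever his last k attempts were all makes."""
    shots = []
    for _ in range(n_shots):
        hot = len(shots) >= k and all(shots[-k:])
        shots.append(random.random() < (base + boost if hot else base))
    return shots

def rate_after_streak(shots, k=3):
    events = [shots[i] for i in range(k, len(shots)) if all(shots[i - k:i])]
    return sum(events) / len(events) if events else None

random.seed(3)
looked_cold, trials = 0, 2_000
for _ in range(trials):
    shots = simulate_season(400)        # roughly a season's worth of attempts
    r = rate_after_streak(shots)
    if r is not None and r < sum(shots) / len(shots):
        looked_cold += 1                # genuinely streaky shooter measured as 'cold'

print(f"streaky shooter looked cold in {looked_cold / trials:.0%} of seasons")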
Should cognitive psychologists have treated the hot hand fallacy with greater scepticism?
The dissonance between the three approaches – subjective experience, logical reasoning and empirical analysis – should have alerted researchers to the possibility of experimental error. However, replication studies in the following years primarily copied the methodology of the original. If the aim was to test the validity of the results, surely it would have been prudent to check that different experimental methods pointed to the same conclusion.
In the original paper, Gilovich et al. were careful not to claim conclusive proof against the hot hand, saying, “detailed analyses…provided no evidence for a positive correlation between the outcomes of successive shots.” However, by the time Kahneman published ‘Thinking, Fast and Slow’ in 2011, this careful and deliberate approach had made way for the more defiant claim that:
“There is no such thing as a hot hand in professional basketball, either in shooting from the field or scoring from the foul line…The hot hand is a massive and widespread cognitive illusion.”
Were cognitive psychologists carried away by the illustrative allure of the hot hand? Jeffrey Zwiebel believes so. While discussing his own analysis of the hot hand in an interview with the Stanford Business School, he argued that, “[behaviourists] have jumped to that conclusion because it fits their story that everyone is making cognitive mistakes and that these mistakes are extraordinarily pervasive.”
Cognitive psychologists saw players’, coaches’ and fans’ stubborn loyalty to the hot hand as a result of motivated reasoning. ‘Momentum’ and the subjective feeling of being ‘in the zone’ are crucial to the thrill of the game, and so it was unsurprising that the basketball community was resistant to evidence showing them to be illusory.
But Gilovich, Tversky, Thaler and Kahneman were also susceptible to motivated reasoning – any researcher of human irrationality would be excited by the opportunity to reveal that the tactical decisions and on-field behaviour of an entire sport had been shaped by a cognitive illusion. As Joshua Miller explained:
“Everyone wants to believe in the hot hand, and we should be suspicious of this kind of motivated reasoning because people are likely to only confirm their priors. On the other hand, there is a dual motivation, held among researchers, to be able to say that these experts don’t know what they are talking about, that with high-powered statistics and without any knowledge of basketball we can know more. This is sometimes true. But we should have some humility. We are looking at 0s and 1s, the player and the coach have a far richer information set.”
The hot hand fallacy was especially treasured by researchers because it had all the ingredients necessary to act as a flagship example of the blind spots in cognitive intuition: a discipline people are familiar with, a simple experiment, and a decisive outcome (it was not ‘the hot hand is smaller than fans would predict’, it was ‘the hot hand does not exist at all’) that emphatically contradicts their subjective experience.
But, as cognitive psychologists know better than anyone, the more invested you are in a belief, the harder it is to renounce in the face of new evidence. With the hot hand fallacy now integral to the general public’s understanding of cognitive biases as a whole, there must be a fear that if it is discredited, the legitimacy of other counter-intuitive findings will also suffer.
This may explain why Gilovich has been reluctant to accept Miller and Sanjurjo’s results. Shortly after the results were first publicised, Gilovich responded in a New York Times article by saying, “The larger the sample of data for a given player, the less of an issue this is. Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand.” Two years later, in 2017, Gilovich told ESPN, “People with tremendous math skills are all over the map on this one. I simply say that because the mathematicians I talk to are befuddled about the proper statistical analysis. Time will have to tell on that one.”
Miller and Sanjurjo’s argument is not for the statistically faint of heart, but it should not take academic researchers years to wrap their heads around it. It would be gratifying if a saga in which two communities with differing perspectives dismissed each other’s views as ignorant finally concluded with some humility and agreement, rather than more foot-dragging. Renaming the hot hand fallacy to the more accurate ‘hot hand bias’, as Joshua Miller has suggested, would be a positive step. After all, as Gilovich himself explains, “it ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so.”
For references, notes on the research required for this feature and a detailed look at the data used for the visualisations, read the accompanying methodology article here.