Why economic experts’ predictions fail
IN DECEMBER 2010 I appeared on John Stossel’s television special on skepticism on the Fox Business Network, during which I debunked numerous pseudoscientific beliefs. Stossel added his own skepticism of a possible financial pseudoscience: active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts at a page of stock listings and compared the performance of those stocks since January 1, 2010, with the picks of the 10 largest managed funds. Results: dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.
Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market, he would have generated a 12 percent increase (the market average), a full 2.5 percentage points higher than the average increase of the 10 largest managed funds. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade “more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period aren’t the same ones who win in the next period.”
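Stossel’s point about sample size is easy to check with a toy simulation. The sketch below (hypothetical numbers: a 12 percent market-average return and an assumed 40 percent spread for individual stocks, drawn from a normal distribution purely for illustration) shows that a 30-stock “dart” portfolio averages out to roughly the market return over many trials, while any single trial can land far above or below it by luck alone:

```python
import random

random.seed(42)

MARKET_MEAN = 0.12   # assumed market-average return (the 12% in the example)
STOCK_STDEV = 0.40   # hypothetical spread of individual stock returns

def dart_portfolio(n_picks):
    """Average return of n_picks stocks drawn at random from a
    hypothetical normal distribution centered on the market mean."""
    return sum(random.gauss(MARKET_MEAN, STOCK_STDEV) for _ in range(n_picks)) / n_picks

# Simulate many 30-dart portfolios and see how widely they scatter.
trials = [dart_portfolio(30) for _ in range(10_000)]
overall = sum(trials) / len(trials)
best, worst = max(trials), min(trials)

print(f"mean of dart portfolios: {overall:.1%}")        # close to the market average
print(f"best trial: {best:.1%}, worst trial: {worst:.1%}")  # wide, luck-driven spread
```

The averaged result hugs the market mean, but the best single trial routinely beats it by a margin comparable to Stossel’s 31 percent, which is exactly why one lucky dartboard proves nothing about stock-picking skill.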
Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009, finding that only 13 beat the market average. Equating managed fund directors to “snake-oil salesmen,” Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They can’t. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: “Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!”
Even in a specific sector, where you might expect a greater level of expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) had just abandoned his “Pickens Plan” of investing in wind energy. Pickens had invested $2 billion based on his prediction that the price of natural gas would stay high. It didn’t: it plummeted as the drilling industry’s ability to unlock methane from shale beds improved, a turn of events that even an expert such as Pickens failed to see.
Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After analyzing a staggering 82,361 predictions about the future made by 284 experts in political science, economics, history and journalism, Tetlock concluded that the experts did little better than “a dart-throwing chimpanzee.”
There was one significant factor in greater prediction success, however, and that was cognitive style: “foxes” who know a little about many things do better than “hedgehogs” who know a lot about one area of expertise. Low scorers, Tetlock wrote, were “thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters.” High scorers in the study were “thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.”
Being deeply knowledgeable on one subject narrows one’s focus and increases confidence, but it also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be skeptical whenever you catch yourself reducing complex phenomena to a single overarching scheme. This type of cognitive trap is why I don’t make predictions and why I never will.