KimMiE”’s #CBR5 Review #4: The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t by Nate Silver


During the 2008 presidential election, statistician and writer Nate Silver correctly predicted the winner of 49 of the 50 states—missing only Indiana, which went to Barack Obama by 1 percentage point—and all 35 of the U.S. Senate races. Given his earlier success, I found it curious that his predictions around the 2012 election were greeted in some circles not just with skepticism, but with outright hostility. When the votes were counted, Silver’s predictions held up once again: he had called all 50 states and the District of Columbia, along with 31 of the 33 Senate races. My curiosity was piqued enough to give this book a shot.

I enjoyed Freakonomics and Moneyball, both of which challenge conventional wisdom using numbers, and I thought Silver’s book might be similar. It is, but more so. Where the other books I mentioned provide bite-sized lessons that the less mathematically inclined among us can digest, The Signal and the Noise is like a crash course in statistics, packed with charts and graphs and terms like Bayesian probability. I enjoyed reading it, but it was hard work at times.

The book is divided into chapters that explore the challenges of predicting things like weather, earthquakes, or batting averages (baseball fans will know Silver as the developer of PECOTA, a system for forecasting ball players’ future performance). It describes not only the limits of our ability to predict future events but also how we can make the most of the information we do have. Silver celebrates fields where we’ve come a long way in making accurate predictions, like weather forecasting (he’s talking about the National Weather Service, not the jokers on local television), and delves into the respective advantages and disadvantages of humans and machines as forecasters. In an absolutely fascinating chapter on chess, Silver describes the legendary 1997 match between World Champion Garry Kasparov and Deep Blue, in which Kasparov lost his cool in the face of what he perceived to be unthinkable advance planning by the computer but which actually turned out to be a programming bug (probably).

In a chapter on gambling called “Less and Less and Less Wrong,” Silver dives into Bayesian reasoning, and this is where my head nearly exploded (I never studied statistics in school, so cut me some slack). According to Bayesian thinking, we learn about the universe “…through approximation, getting closer and closer to the truth as we gather more evidence.” It requires us to make some general predictions about the probability of an event occurring, and then to modify those predictions as new information comes to light. Silver uses the example of a potentially cheating spouse: if you come home and find a stranger’s underwear in your drawer, what is the likelihood that your spouse is cheating on you? Part of the equation for determining the probability depends on how likely it was that your spouse was cheating before you found the illicit undergarments. The new information has to be factored in, but it can’t completely erase the previous probability. Problems in our thinking occur when we either a) are way off in our estimates of the prior probability—because we don’t have enough information, or we just aren’t being honest with ourselves—or b) fail to modify our predictions based on new information. This strikes at the heart of the book: a tremendous degree of uncertainty is obviously at play when we make predictions, and the more we try to deny that uncertainty, the more likely our predictions are to fail.
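The updating rule behind that example is just Bayes’ theorem, and it can be sketched in a few lines of Python. The specific numbers below (a 4 percent prior, and the two likelihoods) are illustrative choices on my part, not necessarily the figures Silver uses:

```python
# A sketch of Bayesian updating for the cheating-spouse example.
# The probabilities here are illustrative assumptions, not Silver's.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: update a prior probability after seeing evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Prior: chance the spouse was cheating before the discovery.
prior = 0.04
# Likelihood of finding a stranger's underwear if cheating vs. if not.
p = posterior(prior, p_evidence_if_true=0.50, p_evidence_if_false=0.05)
print(round(p, 2))  # about 0.29: suspicious, but far from certain

# Each further piece of evidence uses the last posterior as the new
# prior, getting us "less and less and less wrong" over time.
p2 = posterior(p, p_evidence_if_true=0.50, p_evidence_if_false=0.05)
print(round(p2, 2))
```

Note how the 4 percent prior keeps the answer well under certainty even though the evidence looks damning: the new information is factored in, but it can’t erase what came before.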

So why do we deny the uncertainty? In part, it’s because uncertainty is interpreted in our culture as a lack of confidence. Most people are more likely to be persuaded by a pundit from either side of the political spectrum who bellows with absolute authority that his or her candidate is going to win than by a statistician presenting probabilities that may or may not change depending on the outcome of event XYZ (the former also makes for more compelling television). Furthermore, we are often blind to our own biases and, in some cases, there may not be enough incentive even to make accurate predictions.

Pundits blindly cheering the success of their favorite candidates may seem harmless enough, but failure to predict can have crippling consequences. Silver rounds out his book with an exploration of two devastating and unpredicted events: Pearl Harbor and 9/11. Could we have predicted 9/11? Maybe. Silver lists half a dozen alarming facts that could have clued the U.S. in to the possibility of such an attack. He takes pains to point out, however, that these are half a dozen signals among tens of thousands or even hundreds of thousands of pieces of information that national security agencies have to sort through, most of which amount to nothing. Failure to pick them out doesn’t automatically indicate incompetence or (gasp) conspiracy. (Silver quotes Harvard professor Henry Louis Gates in saying that “Conspiracy theories are an irresistible labor-saving device in the face of complexity.”) Yet the most dangerous failure of prediction is the event that we don’t even consider as a possibility: the “unknown unknown” Donald Rumsfeld famously referenced in 2002. It’s the question that we haven’t even thought to ask.

This book asks a lot of questions and challenges the reader to think critically. If you’re looking for a magic bullet, there isn’t one: remember, uncertainty is inevitable and not a sign of weakness. But Silver drives us to work at it, to be better at making predictions, to get things “less and less and less wrong.”