Mo’ Data, Mo’ Problems

Is more data always better? Do we have a natural tendency to make up patterns and trends where there aren’t any? Are computers never wrong?

These are some of the questions posed by data nerd Nate Silver, author of our latest book club entry, The Signal and the Noise.

From his humble roots as an accountant, poker player and then baseball stats number cruncher, Silver hit the big time in 2008 by correctly predicting the presidential election winner in 49 out of 50 states. His blog, featuring stories built on data and statistics, was bought first by the New York Times and more recently by ESPN. Let’s just say he brought sexy back to data.

In his book, Silver uses examples of failed predictions ranging from the 2008 financial crisis to the Fukushima earthquake in Japan to illustrate the following three points:
Big data, big problems: despite an abundant flow of data, humans continue to make flawed assumptions that lead to costly failed predictions. During the financial crisis, for instance, credit rating agencies assumed that housing prices would keep rising every year as they had up to that point, and so overrated the mortgage-tied financial products whose collapse brought down the market.

Desperately seeking signal: Silver argues that, faced with an onslaught of data, humans are wired to detect patterns and trends even where no real correlation exists. We love a narrative and may fit the data to suit the story. Silver cites a study finding that most of the experiments published in a prestigious medical journal did not hold up to the results claimed in the articles: the authors may have presented their data in a way that neatly confirmed their hypotheses, but that data was more “noise” than “signal.”

Feature or bug: computers aren’t perfect. For all our technological advances, they are man-made, after all. Silver cautions against mistaking a bug for a feature, as happened to the world’s top-rated chess player, Garry Kasparov, when he faced IBM’s Deep Blue in a 1990s match. When the computer made a move the reigning champion found completely baffling, he assumed it was so sophisticated that it must be beyond him; intimidated, he gave up and lost the match. In truth, the move was a bug. The computer had messed up, but its error was read as genius.

Stay tuned for my second installment on the book next week.
