Thursday, February 28, 2013

Early Indications February 2013: Big Caveats Regarding Big Data

Review essay

Michael Mauboussin, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Boston: Harvard Business Review Press, 2012)

Nate Silver, The Signal and the Noise: Why So Many Predictions Fail — but Some Don't (New York: Penguin, 2012)

Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012)

To set context, here is a sampling of what IT vendors are saying about Big Data:

Get the tools and technology you need to harness big data from any source – structured or unstructured – for a serious competitive advantage. Our big data solutions can help you capture, analyze, report, predict, and visualize mammoth volumes of data instantly – so you can make the best possible business decision, every time.

Big data is more than simply a matter of size; it is an opportunity to find insights in new and emerging types of data and content, to make your business more agile, and to answer questions that were previously considered beyond your reach.

The hopeful vision for big data is that organizations will be able to harness relevant data and use it to make the best decisions.
Technologies today not only support the collection and storage of large amounts of data, they provide the ability to understand and take advantage of its full value, which helps organizations run more efficiently and profitably.

For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information.
Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.
In sum, the vision of the Big Data movement appears to be as follows:

to measure and capture, in greater detail and quantity, things that have happened in order to analyze the data, find insights/answer hard questions/capitalize on hidden relationships, and act more effectively in the future ("make better decisions").

It all sounds reasonable, except that the foundational logic has yet to be tested. Fortunately, we have some very smart people from diverse backgrounds who can help in that quest. It turns out that if these three gentlemen are correct, the very premises of Big Data need to be tempered, not with better computer science but with a better comprehension both of how people think, act, and decide and of how much luck and randomness still shape our world.

The three books all overlap to a degree, often in their appreciation for the behavioral economics of Daniel Kahneman, and each author brings serious credentials to the table:

-Mauboussin teaches at Columbia in addition to working at Legg Mason; he wrote an early and influential report on the financial implications of power laws back in the late 1990s.

-Silver gained fame on election night 2012 by correctly calling 50 out of 50 state results in the presidential race, having gone 49 for 50 in 2008. His first data-centric venture was in baseball statistics.

-Taleb's previous books, The Black Swan and Fooled by Randomness, provided prescient color commentary on the financial crisis of 2008. Stylistically, existentially, and intellectually, he swims upstream but has repeatedly been proven right.

Three macro-level insights emerged from these books.

A) Luck remains a critically important factor in success, so prediction, even when successful (that is, skillful), may not generate much advantage

Mauboussin looks at the relationship of luck and skill in a variety of domains. The book owes many debts to Moneyball but ranges across more sports and extends convincingly into business. Results in the NBA, for example, are decided by skill to a much higher degree than in the NHL: in 2-1 or 1-0 games on the ice, the slightest deflection or fluke play can decide the outcome. When the Spurs beat the Suns 104-98, by contrast, random chance events are fewer (how many deflected shots actually go through the hoop?) and their impact is minimal.

When he turns to business and investing, Mauboussin makes similarly compelling points. For our purposes, the central insight relevant to Big Data concerns what might be called the water level: as the skill level rises in a population, differences between competitors shrink. Thus luck becomes more of a factor: "if stocks are priced efficiently in the market, luck will determine whether an investor correctly anticipates the next price move up or down. When everyone in business, sports, and investing copies the best practices of others, luck plays a greater role in how they all do." (p. 56)

This insight would seem to apply to the algorithmic arms races in baseball talent scouting, investing, and consumer data mining. In situations where no actor can accumulate a commanding lead (as Google has done and Facebook might), whether in computing horsepower, algorithmic quality, or data to be analyzed, the skill premium dissipates. Luck, by this theory, will play a greater role than skill in such a homogeneous environment.
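The water-level effect lends itself to a toy simulation (my construction, not Mauboussin's): hold the luck term constant, shrink the spread of skill across a league, and watch the final standings decouple from underlying ability.

```python
import random

def season(skill_spread, luck_sd, n_teams=20, n_games=1000, rng=None):
    """Simulate a season of head-to-head games: each margin is the
    skill gap between two random teams plus a random 'luck' shock."""
    rng = rng or random.Random()
    skills = [rng.gauss(0, skill_spread) for _ in range(n_teams)]
    wins = [0] * n_teams
    for _ in range(n_games):
        a, b = rng.sample(range(n_teams), 2)
        margin = (skills[a] - skills[b]) + rng.gauss(0, luck_sd)
        wins[a if margin > 0 else b] += 1
    return skills, wins

def rank_agreement(skills, wins):
    """Fraction of team pairs where the more skilled team also won more games."""
    agree = total = 0
    n = len(skills)
    for i in range(n):
        for j in range(i + 1, n):
            if wins[i] == wins[j]:
                continue
            total += 1
            agree += (skills[i] > skills[j]) == (wins[i] > wins[j])
    return agree / total

rng = random.Random(7)
# Wide skill gaps: the standings track skill closely (more NBA-like).
wide = rank_agreement(*season(skill_spread=3.0, luck_sd=1.0, rng=rng))
# Compressed skill gaps: the same luck now scrambles the standings (more NHL-like).
narrow = rank_agreement(*season(skill_spread=0.3, luck_sd=1.0, rng=rng))
print(wide, narrow)  # agreement drops markedly as skill levels converge
```

The luck term never changes between the two runs; only the compression of skill does, which is exactly Mauboussin's point about copied best practices.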

Mauboussin concludes the book with 10 suggestions to improve the "art of good guesswork":

1 Understand where you are on the luck-skill continuum

2 Assess sample size, significance, and swans

3 Always consider a null hypothesis

4 Think carefully about feedback and rewards [many financial advisors get paid when clients trade, not when clients prosper, for example: what's the feedback loop there?]

5 Make use of counterfactuals

6 Develop aids to guide and improve your skill [checklists are a case in point]

7 Have a plan for strategic interactions [such as asymmetric warfare or disruptive innovations]

8 Make reversion to the mean work for you

9 Develop useful statistics

10 Know your limitations

This tenth maxim serves as a convenient segue to Silver's book.  From its title -- signals and noise are fundamental to information theory -- to its examples (which include economics, earthquakes, and climate change), the book would appear to be enthusiastic about using numbers to predict the future, to realize the promise of Big Data. But as Silver writes very early in the book, his focus is less on data and more on the people who use it:

"Big Data _will_ produce progress -- eventually. How quickly it does, and whether we regress in the meantime, will depend on us. . . .
Our biological instincts are not always very well adapted to the information-rich modern world. Unless we work _actively_ to become aware of the biases we introduce, the returns to additional information may be minimal -- or diminishing." (pp. 12-13)

Thus, the second macro-level idea concerns consciously testing ideas and assumptions, and admitting uncertainty.

B) Bayesian statistics, in particular its insistence on carefully articulated prior probabilities, forces human analysts to attach values to the context for their predictions rather than let them float ahistorically, otherwise known as "letting the data speak for itself."

This illusion of statistical sufficiency, sometimes known as "frequentism," dates to the early 20th century, and the school of thought persists today. As Silver summarizes, "it emphasizes the objective purity of the experiment -- every hypothesis could be tested to a perfect conclusion if only enough data were collected. However, to achieve that purity, it denies the need for Bayesian priors or any other sort of messy real-world context." (p. 255)
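Bayes' rule itself is a one-line calculation; the following sketch, with made-up numbers rather than any of Silver's examples, shows how an honest prior tempers an apparently strong signal.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical numbers: a backtested trading signal "passes."
# Prior: only 5% of signals of this type reflect a real edge.
# The backtest passes 90% of the time when the edge is real,
# but also 30% of the time on pure noise.
posterior = bayes_update(prior=0.05,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.30)
print(round(posterior, 3))  # → 0.136: still probably noise
```

Without the prior -- the frequentist temptation -- the 90%-versus-30% comparison looks decisive; with it, the "discovery" remains more likely noise than edge.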

Echoing his opening assertions in the conclusion, Silver plausibly argues that "distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference." (p. 453)

Possibly because Silver's book ranges more widely than Mauboussin's, it felt more engaging. Written as it was before his successful handicapping of the Obama re-election, The Signal and the Noise is itself something of a prior: a self-aware assessment of Silver’s own methods and their probabilistic limits. The book forces erstwhile predictors to examine their methods, their objectives, and ultimately themselves -- not at all what the two-dimensional stat-geek stereotype would suggest.

In contrast to Mauboussin, Silver offers but two admonitions in his conclusion:

Know Where You're Coming From

Think Probabilistically

In contrast to closed-end events -- when the first snowfall will come, who will win the championship, how many widgets Samsung will sell -- open-end events are the terrain of Nassim Nicholas Taleb: Black Swans, as they have come to be called. As Silver notes, nobody can remotely predict earthquakes or most natural phenomena, with the partial exception of weather. Nor can political revolts (in London or Cairo, for example), equity or currency fluctuations, or other large-scale man-made phenomena be forecast at all reliably. Rather than predicting, Taleb advocates an entirely different approach.

C) Because of the nature of a highly complex and connected world, "Black Swan" events can generate very large, unforeseen effects, very quickly. A prudent strategy for living in such a world is to seek shelter to a substantial (but not complete) degree while seeking exposure to the upside of unforeseeable events with small bets in as many big-multiplier arenas as possible, often via optionality. Taleb calls this a "barbell" strategy for its bimodal distribution: for example, very large positions in cash or other low-risk, low-reward instruments, combined with small but focused investments in high-risk/very-high-reward (and thus probably exotic) positions. Note that the middle is avoided entirely: Taleb's antipathy for bell-curve distributions, especially where misapplied, is vehement.
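A rough Monte Carlo sketch of the barbell's logic (my illustration with invented parameters, not Taleb's numbers): cap the downside with a large cash position, buy convex upside with many small bets, and compare the worst cases against a "medium-risk" portfolio exposed to fat tails.

```python
import random

def barbell_return(rng, cash_frac=0.9, n_bets=10):
    """One period: cash_frac in near-riskless assets, the rest split
    across small convex bets that usually expire worthless."""
    cash = cash_frac * 1.01                  # ~1% riskless yield
    bet_size = (1 - cash_frac) / n_bets
    payoff = 0.0
    for _ in range(n_bets):
        if rng.random() < 0.05:              # rare 30x payoff
            payoff += bet_size * 30
    return cash + payoff

def middle_return(rng):
    """'Medium-risk' portfolio fully exposed to a fat-tailed market."""
    r = rng.gauss(0.06, 0.15)
    if rng.random() < 0.02:                  # occasional crash
        r -= 0.60
    return 1 + r

rng = random.Random(0)
barbell = [barbell_return(rng) for _ in range(100_000)]
middle = [middle_return(rng) for _ in range(100_000)]

print("barbell floor:", round(min(barbell), 3))  # loss capped near 0.909
print("middle worst:", round(min(middle), 3))    # tail losses run much deeper
```

The barbell's worst period is known in advance -- the cash floor -- while the middle portfolio's worst period depends entirely on tail events nobody modeled.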

The title of Taleb's book hints at how unaccustomed we are to thinking this way. Everyone knows that a wine glass is fragile: physical volatility is usually fatal to it. Note that fragility scales non-linearly: a fall from 32 inches onto a hardwood floor is far more than four times as damaging as an 8-inch drop, which is likely survivable. Many people call "robust" the antithesis of fragile, but Taleb disputes this: what about the opposite of fragile phenomena, the things that actively IMPROVE in the presence of volatility? He looked in dozens of languages and found that none had a word to connote this property, which is nonetheless quite real. Taleb's contrarianism is of a high order indeed.
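The wine-glass arithmetic is worth making explicit. With a convex (here, quadratic) damage function -- an assumption for illustration, not a physical law -- the 32-inch fall does sixteen times the damage of the 8-inch drop, not four:

```python
def harm(drop_inches, k=1.0, exponent=2):
    """Toy convex damage function: harm grows with the square of the shock."""
    return k * drop_inches ** exponent

print(harm(32) / harm(8))      # → 16.0, not the 4x a linear model predicts
print(harm(32) > 4 * harm(8))  # → True: one big shock outweighs four small ones
```

This is the signature of fragility: harm from one large stressor exceeds the summed harm of many small ones of the same total size.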

It turns out that the natural world, biology in particular, abounds in situations where volatility improves matters. Young children learning language, muscles after exercise, and immune defenses all qualify. In the human order, Taleb praises the Swiss city-state (canton), noting that many people can pick Switzerland as the most stable regime on earth and yet hardly anybody outside the country knows who its president is: decentralized authority keeps the scale of both problems and solutions closer to human-friendly and risk-limited.

Swiss disorder occurs in domains the exact opposite of "too big to fail," itself a curse in this system of thinking because increasing scale implies massive risk. Man-made "stabilization" often leads to instability, whether in financial systems, forest fires (preventing healthy little ones sets up a later, inevitable inferno), or corporate planning. When small, routine failures are prevented through naive bureaucratic intervention, stressors magnify until the impact is multiplied to the scale of the entire system (witness the mortgage banking mess, rogue traders at Societe Generale and JPMorgan, and the flash crash). And on the basis of what empirical evidence is "equilibrium" the economist's ideal?

Thus, rather than failing to predict the mechanism of [by-definition] unpredictable disasters, we can see the quite foreseeable effects of 100-year-old subway tunnels in New York (whether the stressor is a riot, a terrorist, or a hurricane is irrelevant), of slow responses to climate change, or of overly long supply chains for food. In short, Taleb argues that prediction is systematically broken for both psychological -- yes, Kahneman gets his props here too -- and systemic/organizational reasons. The 425-page excursion into many nooks and crannies of the Western intellectual tradition (Seneca plays a featured role, for example) is itself unpredictable: Taleb does not so much explicate his argument as embody it, with frequent personal examinations that prove he literally has skin in the game. His conclusion is much more straightforward than its telling:

"Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty." (p. 421)

Rather than seek certainty in data* or in anything else, Taleb seeks situations, investments, and modes of living that are not only resistant to volatility but _thrive_ in its inevitable presence. The notion of antifragility thus stands as the most robust challenge to the uncritical application of data, algorithms, and prediction more generally -- especially outside realms (such as weather) where we can actually document a certain degree of success. As for lavish investments in police and fire departments at the city level, in R&D at the corporate level, and in universities in any particular society, Taleb contends that we really know little about correlation vs. causation. This fundamental lack of evidence suggests that, for data to improve our world, there are more intermediate steps between the computer and a better future than the apparent consensus would suggest.

*"There is a nasty phenomenon called 'Big Data' in which researchers bring cherry-picking to an industrial level. Modernity provides too many variables (but too little data per variable), and the spurious relationships grow much, much faster than real information, as noise is convex and information is concave." (p. 418) More simply, "the more data you get, the less you know what's going on." (p. 128)