Tuesday, December 31, 2013

Early Indications December 2013: A Look Ahead

Rather than issue predictions for the year-end letter, I am instead posing some (I hope) pertinent questions that should be at least partially answered in the year ahead.

1) How will enterprise hardware and software companies respond to cloud computing?

At first glance, this isn't a particularly fresh question: clouds are old news, in some ways. But whether one looks at Dell, IBM, Oracle, or HP, it's not at all clear that these companies have an assertive vision of the path forward. In each instance, revenues have been dented (or worse) by the shift away from on-premise servers, but what comes next is still in the process of being formulated, perhaps most painfully at HP, where Unix server revenues fell by more than 50% in just 5 quarters: Q4 2010 was about $820 million while Q1 2012 fell below $400 million. As of a month or two ago, HPQ stock had fallen roughly 60% since January 2010, in an otherwise bull market: the S&P 500 was up 64% over the same span.

Farther up the west coast, meanwhile, the leadership challenge at Microsoft relates as much to cloud as it does to mobile, does it not? Getting someone who can change the culture, the product mix, and the skills mix in the headcount will be a tall order. Absent massive change, the desktop-centrism of MSFT will be its undoing unless new models of computing, user experience, and revenue generation (along with commensurate cost structures) are implemented sooner rather than later.

2) Is Uber the next Groupon?

Before you explain how they're not comparable companies, here's my reasoning. Both companies are two-sided platform plays: Groupon enlists merchants to offer deals, and aggregates audiences of deal-seekers to consume them. Two-sided platforms are historically very profitable once they're up and running, but it's tough getting the flywheel to start spinning: no deals, no deal-seekers. No audience, no merchants. One side of the platform typically subsidizes the other: merchants who pay ~3% to accept credit cards are paying for your frequent flier miles.

Groupon ran into trouble after it scaled extremely fast and had lots of physical infrastructure (local sales offices) and headcount to fund. After a local business offered two $10 lunches for $10 (and paid $5 of that to Groupon), the $15 loss was hard to write off as new-customer acquisition, given that Groupon users frequently did not return to the restaurant to pay full price. Thus using local sales forces to recruit lots of new businesses to try Groupon once, with a low repeat-offer rate, made the initial growth tough to sustain.

Enter Uber. It's a two-sided platform, recruiting both drivers and riders. Because the smartphone app is an improvement on the taxi experience, customers use it a lot and tell their friends -- just like Groupon. Meanwhile, keeping the "merchant" side of the platform (in this case, the drivers) happy and delivering quality service in the midst of rapid scaling is proving difficult: riders increasingly outnumber available cars in many localities. But Uber is not Amazon; it's more like Gilt, a [slightly] affordable luxury play rather than a taxicab replacement. Look at the company's ads and listen to the CEO, who calls the company "a cross between lifestyle and logistics." It can't, and has no reason to, meet the demand it's created.

Thus Uber charges "surge" prices, a significant multiple of the base fare when too many riders strain the system. It's presented as an incentive to get dormant cars into the market, but what's more likely is that the steepness of the price tamps down demand while conveying exclusivity. Given how reliably surge pricing is being invoked, it would appear that Uber is hitting some scale limits. One way to address this is to get more drivers into the pool, and Uber recently announced that it will help drivers buy cars. But like Groupon, this has the feeling of "fool me once, shame on you; fool me twice, shame on me." The indentured servitude of paying off an Uber-financed car will only appeal to a certain number of drivers, and for a certain amount of time. The transience of taxi-driver workforces is well documented, and it's not clear that Uber is immune to the same dynamics.
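Surge pricing of this kind can be sketched as a simple multiplier on the base fare, keyed to the ratio of ride requests to available cars. The threshold, linear ramp, and cap below are my own illustrative assumptions, not Uber's actual algorithm:

```python
def surge_multiplier(requests, available_cars, threshold=1.2, cap=4.0):
    """Toy surge model: the multiplier grows with the demand/supply
    ratio once it crosses a threshold, capped at some maximum.
    Threshold, ramp, and cap are invented for illustration."""
    if available_cars == 0:
        return cap
    ratio = requests / available_cars
    if ratio <= threshold:
        return 1.0  # normal pricing
    # Scale linearly above the threshold, rounded to a tenth
    return round(min(cap, 1.0 + (ratio - threshold)), 1)

print(surge_multiplier(100, 90))   # demand near supply -> 1.0
print(surge_multiplier(300, 100))  # 3x demand -> 2.8
print(surge_multiplier(900, 100))  # extreme demand -> capped at 4.0
```

The interesting design choice is the ramp: a steep one tamps down demand quickly (and signals exclusivity), while a shallow one leans harder on coaxing dormant drivers back into the market.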

Meanwhile Uber is trying to expand its revenue streams: the same infrastructure (physical cars and human drivers) that's needed for only a fraction of the day must be utilized around the clock. In early December the service offered a few Christmas trees ("very limited" availability in only ten markets) for the low low price of $135, at something less than "white-glove delivery" standards: the tree came to the "first point of entry," at which time you were on your own. A similar service was proposed for barbecue grills.

All that is a long windup for my Uber question: what is the ultimate scale this business can sustain, what is the revenue model for a 24-hour day during which the asset fleet is fully occupied less than 25% of the time, and what is the customer service guarantee for an impermanent labor force doing a hard job?

3) What is Google doing with robots?

The news that Google had acquired Boston Dynamics, a leader among DARPA robot-development shops, floored me when I heard it and continues to impress me. The shorthand version is that it feels as if Microsoft had acquired Google in 1999. That is, the leader in the current generation of computing invests heavily but astutely in something that's not quite here yet but will be big when it arrives. The buy doesn't get Google any current revenue streams worth mentioning, but there's astonishing talent on the engineering team and some very good IP, much of it, I suspect, classified.

There are several back-stories here. One is that Andy Rubin, formerly at the helm of Android, has been tasked with ramping up a real robotics business. He's been both hiring (quietly but effectively: I know some of the folks, and they're A+ players) and acquiring. The other key person might be Regina Dugan, a Caltech PhD who ran DARPA from 2009 until 2012, at which point she joined Google. At the time it was speculated that her expertise in cybersecurity would make her a prized hire, given the massive attacks on Google's worldwide networks (and, as we learned later, sizeable NSA demands for data). Now, however, her insight into the DARPA robotics pipeline no doubt accelerated Rubin's discussions with Boston Dynamics and perhaps other firms or individuals.

What could Google do with robots? Plenty:

-Work on battery issues. Cross-fertilization across Android and robotic research on this one issue alone could produce massive license revenue opportunities.

-Work on power and motor issues. Getting the physical world to connect and react to the "Internet of things" requires locomotion, a space where Moore's law-scale acceleration of performance has yet to be discovered.

-Automate server farm maintenance. Swapping out dead hard drives, to take just one example, would seem to be a local task worth doing with robots.

-Tune algorithms, something both Google engineers and roboticists do somewhat regularly.

-Scale down the machine vision, path-planning, dynamic compensation, and other insights gained in cars to smaller robots.

-Apply the same machine learning insights that allow Google to "translate" foreign languages -- without knowing said languages -- to other aspects of human behavior.

-Learn more about human behavior, specifically, human reactions to extremely capable (and biomimetic) robots. There's a great piece on the deep nature of "uncanny" resemblances here, and who better than Google to extend the limits of observed machine-human psychology?

One article mentioned a potential unintended side-effect. Some investment funds have screens that exclude tobacco companies, defense contractors, firms that do business in boycotted/embargoed nations, etc. Unlike iRobot and its Roomba division, Boston Dynamics has no consumer businesses; it appears to be almost entirely funded by defense research. Thus (much like Amazon building a computing cloud for the CIA), Google is now sort of a defense contractor. Apparently that status might change after the current contracts are completed, but even so, will the acquisition change Google's standing as an institutional stock holding?

4) What happens to the smartphone ecosystem as the developed world hits saturation?

For carriers and handset makers, this could feel like déjà vu: once everyone who wants a cell phone (which happened around the year 2000 for the U.S.) or a smartphone (very soon in the U.S.) has one, where does revenue growth come from? New handsets would be one potential stream, but with subsidized phones, that turnover is unlikely to be very rapid. Content deals with HBO, the NFL, and other rights-holders will be a factor, much as ringtones were 8 or 10 years ago. ARPU, the all-important Average Revenue Per User metric by which the cellular industry lives and dies, is unlikely to grow any time soon in the EU or U.S., according to the research firm Strategy Analytics (http://www.totaltele.com/res/plus/tote_OCT2010.pdf -- see p.8).

Apple, Verizon, Disney, Comcast, Microsoft -- some very large companies in a variety of industries will be forced to change product strategies, internal cost structures, and/or revenue generation practices, the latter harder to do as competition increases (and thus one possible impetus for rumored merger talks between Sprint and T-Mobile).

There's also the geographic expansion move: focus on developing markets, selling lower-priced handsets in much, much larger numbers. That move, too, is hardly a sure thing, exposed as it is to competition from firms such as Huawei that are less often encountered in the U.S./EU. Supply chains, sales channels, regulatory compliance, and many other aspects of business practice can surprise a U.S. firm when it seeks to expand outside its traditional markets.

Honorable mention questions:

-How fast will 4K super-definition TV catch on? What geographies will lead the way?

-Will the US see any player gain momentum in the mobile wallet space?

-What will be the fallout of the NSA revelations?

-What will the consumer and B2B 3D printing landscape look like in 18 months? Is there a single killer app that could emerge?

-How will crowdsourcing and especially crowd funding settle into well understood patterns after the current wild-West phase?

-What will the cryptocurrencies that follow Bitcoin mean -- for money laundering, for taxation, for developmental economics, for customer convenience, for alternative trust frameworks?

-When will we see a hot Internet-of-Things startup emerge? Given that AT&T networked lots of phones, Microsoft eventually ran (and sort of connected) hundreds of millions of PCs, Google indexed all the Internet's documents, Apple shrank and refined the PC into a smartphone, and Facebook connected massive numbers of personal contacts, it would appear that whoever can connect (and extract a toll from) some large number of the world's sensors and devices stands to be important in the next chapter of tech history.

Tuesday, November 26, 2013

Early Indications November 2013: What makes a great business book?

I recently read Brad Stone's book on Jeff Bezos and Amazon entitled The Everything Store. It's a good job of reporting on a topic of broad interest, given Amazon's unique history and powerful position. I learned a lot of facts about the key people, filled in some gaps in my understanding of the overall timeline, and heard personal impressions from some of the key people involved. I couldn't point to any particular page and say that I could have done it better.

And yet I wanted more. Stone does a fine job on the "what" questions, he has interviewed perhaps more of the principals than anyone else, and the writing is clear throughout. Why then does this not feel like a great business book? That's where my thinking turned next, and what I will discuss this month: after analyzing some commonalities among books that have changed my thinking, I divide my pantheon into two different camps then try to identify some common aspects of "greatness."

OK, professor know-it-all, what are some great business books? Let's start there, because the alphabetical list shows what I have found sticks with me over the years: hard-won narrative lessons, and deep conceptual muscle.

Peter Bernstein, Against the Gods

Frederick Brooks, The Mythical Man-Month

Alfred Chandler, The Visible Hand

Yvon Chouinard, Let My People Go Surfing

Clayton Christensen, The Innovator's Dilemma

Annabelle Gawer and Michael Cusumano, Platform Leadership

Tracy Kidder, The Soul of a New Machine

Marc Levinson, The Box

Michael Lewis, Moneyball and Liar's Poker

Carlota Perez, Technological Revolutions and Financial Capital

Honorable mention (the authors probably wouldn't call these business books):

Atul Gawande, Better

Bruce Schneier, Secrets and Lies

Nassim Taleb, The Black Swan

What's missing here? I don't know the Drucker corpus well enough to pick a single volume there. I've never had much Velcro for self-help/personal effectiveness books, so that wipes out a lot of people's favorites, including Stephen Covey. I've admired Ron Chernow (Rockefeller) from a distance, so one day that might go on the list. Isaacson's Steve Jobs biography was rushed to market and thus too long. I never read Andy Grove's Only the Paranoid Survive in the day, but maybe it bears a second look.

There's another category of exclusion, the "we found the pattern of success" books that do no such thing. The exemplars here are Good to Great and In Search of Excellence, the Tom Peters/Robert Waterman sensation from 1982 that helped bring McKinsey to the front page of the business press. Taking the more recent book first, of Jim Collins' 11 "great" companies, only 2 (Nucor and Philip Morris) seriously outperformed the S&P over the following decade. Most of the other nine reverted to the mean, or in the case of Gillette, got bought. Most telling, just as Gary Hamel bet his reputation on Enron in his book Leading the Revolution, Collins got stuck with some outright clunkers, most notably Circuit City and Fannie Mae, while Pitney Bowes lost half its market cap. Did they somehow board the wrong people on the bus all of a sudden? I strongly doubt it. As for Peters and Waterman, Business Week published a cover story only TWO YEARS AFTER PUBLICATION showing how many of the 62 "excellent" companies were nothing of the sort. Retrospective pattern discovery at the company level, rife with cherry-picking, has yet to reliably predict future performance (in book form at any rate: I can't speak to what happens inside Berkshire Hathaway).

Ah, but what about the classic strategy tomes? Porter's Competitive Strategy, Hamel and Prahalad's Competing for the Future, and maybe Blue Ocean Strategy have their place, of course, but they all felt like exercises in hindsight bias rather than scientific discovery: the subtitle of Porter's book is, justifiably I think, "Techniques for Analyzing Industries and Competitors" and nothing to do with action. I have yet to see a strategic move in the real world that felt deeply linked to any of these efforts (that doesn't mean there are none, just that I don't see any). There's an old joke that sums up this orientation:

How do you spot the strategy professor at the racetrack?

He's the one who can tell you why the winning horse won.

In contrast, my personal list of the best business books veers away from such methodologies in one of two ways. First, a skilled writer, a self-aware founder/principal, or a combination of the two tells a story rich with personal experience in a highly particular situation. Second, a deep thinker creates a powerful conceptual apparatus that endures over time. (Moore's Crossing the Chasm was a near-miss here.)

Many of these books are striking in the modesty of their origins: Christensen started by knowing the disk drive industry inside and out, while Gawer was able to understand platforms after getting great access at Intel to see the company's handling of the USB standard during her Ph.D. research. Tracy Kidder -- one of our era's great storytellers -- compellingly documents the creation of a computer that never made it to market.

The best first-person tales were not unabated triumphs: Chouinard nearly lost Patagonia after some serious missteps, while Fred Brooks learned about software development the hard way, shipping an IBM operating system late. At the same time, some of the big brains attempted syntheses of stunning breadth: the whole idea of risk, in Bernstein's case, or the modern managerial organization, for Chandler. Both books, I suspect, were decades in the making.

On to the fundamentals: what makes a great business book? I would submit that it have some mix of four qualities:

1) Honesty

It's easy to paper over the messy bits; "authorized biographies" can be so hagiographic that all the sugarcoating makes one's teeth hurt. In contrast, the humility of a Fred Brooks or Yvon Chouinard is refreshing, frankly acknowledging the role of luck in any success that has come their way.

2) Human insight

Business, taken only on its own terms, can be pretty boring. But as part of "life's rich pageant," as Inspector Clouseau put it, business can become part of themes more enduring than inventory turns or new market entry. The best books connect commercial success to aspects of human drama.

3) Continued applicability

The retrospective nature of book publishing can be a curse, in the digital era particularly, but it also means that great research and storytelling stand up over time. A model should continue to help organize reality for years after publication, and the likes of Bernstein, Chandler, Christensen, Levinson, and Perez have earned their stature by delivering not just an investigation but a way of seeing the world.

4) Subtlety of insight

All too often, business books worship at the altar of the obvious. Acknowledging the facts of the situation and then deriving deeper principles, either by astute observation (hello Michael Lewis) or by rigorous scholarship, is a gift.

In the end, we find that a collection of great business books illustrates a conundrum: just as with business itself, knowing the principles of business book greatness makes it no more likely that a given individual will achieve it. Luck still has something to do with it, and I doubt that most artists who set out to create a masterpiece ("the Great American Novel," for instance) actually did it.

Yet for all the dashed hopes of finding a science of success, and for all the ego trips and bad faith, there are times when stories from the arena of commerce transcend the genre and deliver gifts of insight far more meaningful than simply how to make more money.

Friday, October 25, 2013

October 2013 Early Indications: How do companies "do" Big Data?

1) Some of you may have seen a piece I wrote in the October 21 Wall Street Journal, on the risks of "big data" for companies trying to adopt these technologies.

2) Writing for a different audience, the same topic gets a different take:

"A riddle wrapped in a mystery inside an enigma."

That was Winston Churchill speaking of Russian politics in 1939, but it can also apply with uncanny accuracy to what the IT industry refers to as "Big Data." On one hand, it's intuitively obvious that we have more and faster computers, more sensors, and more data storage (including that "cloud" business) than ever before. At the same time, few of us can grasp what astrophysicists, or Facebook software engineers, or biostatisticians actually do, so applying Big Data to commerce can be a bit daunting.

To begin with, big data has a nomenclature problem. Like so many other technologies -- smartphones, robots, or information security -- the popular name doesn't really convey the essence of the situation. Yes, "big data" can involve very large volumes in some cases. But more generally, the phrase refers to new kinds of data, generated, managed, and parsed in new ways, not merely bigger ones.

While "Big Data" is a vague phrase, there is some agreement that it involves changes in scale along three dimensions:
-Volume: Whether it's your own hard disk space, the world's online video feeds, or a wealth of digital sensors measuring many aspects of the world, signs are abundant that data volumes are increasing steadily and substantially.

-Variety: Big data is not only a matter of bigger relational databases. As opposed to the familiar numbers related to customer ID, SKU, or price and quantity, we are living in an age of massive amounts of unstructured data: e-mails, Facebook "likes," tweets, machine traffic, and video.

-Velocity: Overnight batch processes are less and less tenable as the world becomes an "always-on" information environment. When FedEx can tell me where my package is, or Fidelity can tell me my net worth, or Google Analytics can tell me my website performance right now, the pressure is on more and more other systems to do likewise.

Assuming a business can get past the vocabulary, Big Data presents challenges in many ways:

Skills
Here's a quiz: ask someone in the IT shop how many of his or her colleagues are qualified to work in Hive, Pig, Cassandra, MongoDB, or Hadoop. These are some of the tools emerging from the front-runners in Big Data, web-scale companies that include Google (indexing the entire Internet), Facebook (managing a billion users), Amazon (constructing and running the world's biggest online merchant), and Yahoo (figuring out what social media is conveying at the macro scale). Outside this small industry, Big Data skills are rare.

Complicating the matter, most of these tools are open-source projects, which means a) that the code is free, b) that the pace of innovation is rapid, to the point where staying current is an issue, and c) that corporate training and support aren't as robust as they are in the traditional data world. Big data tools are evolving rapidly, aren't being taught in the universities, and require levels of flexibility from their users that more mature tools do not. One telling fact: when non-web companies post a "Big Data" job specification, oftentimes nobody can state what the key skill sets are or how they map to the existing team. Furthermore, knowing Hadoop doesn't mean the skilled IT candidate knows insurance, or pharmaceuticals, or energy: ground truth matters in data analysis, so getting skills plus domain knowledge is a substantial challenge in many cases.

Politics
Control over information is frequently thought to bring power within an organization. Big data, however, is heterogeneous, multi-faceted, and can bring performance metrics where they had not previously operated. If a large retailer, hypothetically speaking, traced its customers' purchase behavior first to social media expressions and then to advertising channel, how will the various budget-holders respond? Uncertainty as to ad spend efficacy is as old as advertising, but tracing ad channels to purchase activity might bring light where perhaps it is not wanted. Information sharing across organizational boundaries ("how are you going to use this data?") can also be unpopular.

Another political danger lies in the realization that "what gets measured gets gamed," as we noted in a recent newsletter. If a senior executive requests a dashboard including high-volume, high-velocity metrics such as web clicks or Twitter mentions, there can be a temptation to abandon revenue-generating activities that operate on a 3- or 6-month sales cycle for the instant reward of a new metric that ultimately does nothing for either the top or bottom line. Budgeting, meanwhile, is complicated by the open-source nature of these tools: the software may be free, but hardware costs, especially at this large scale and even when procured from cloud vendors such as IBM or Amazon, behave differently than traditional IT shops are used to. Add in the scarce skills, and the evolving skills mix, and Big Data can cost more than may initially be projected.

Risk can also fall under the political heading: who is willing to stick his neck out to support adoption of technologies that are both immature and broad-ranging? As more data is gathered, it can leak or be stolen. Supposedly anonymous records can often be de-anonymized; in a famous paper, a former governor of Massachusetts was connected to his own records in a public-health database. Insufficient privacy is in some ways a math problem of large sparse data sets; there are also engineering risks. Here's one: implemented well, the security of large traditional databases can be very sturdy, but information security in the Big Data toolset has far to evolve before it can be called robust.
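The de-anonymization point is easy to demonstrate. In the famous paper alluded to above, Latanya Sweeney re-identified the governor's health records by joining an "anonymized" database to a public voter roll on just ZIP code, birth date, and sex. A toy sketch of such a linkage attack, with invented records and field names:

```python
# "Anonymized" health records: names removed, quasi-identifiers kept.
# All records and field names here are invented for illustration.
health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "dx": "condition A"},
    {"zip": "02139", "dob": "1962-03-14", "sex": "F", "dx": "condition B"},
]

# Public voter roll: names present, same quasi-identifiers.
voters = [
    {"name": "W. Weld", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "J. Doe",  "zip": "02139", "dob": "1970-01-01", "sex": "F"},
]

def link(health_records, voter_roll):
    """Re-identify people by joining on the (zip, dob, sex) triple."""
    index = {(v["zip"], v["dob"], v["sex"]): v["name"] for v in voter_roll}
    matches = []
    for h in health_records:
        key = (h["zip"], h["dob"], h["sex"])
        if key in index:
            matches.append((index[key], h["dx"]))
    return matches

print(link(health, voters))  # [('W. Weld', 'condition A')]
```

The sparser the data set, the more likely that a handful of innocuous-looking fields uniquely identifies a person, which is exactly the math problem described above.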

Technique
Given that relational databases have been around for about 35 years, a substantial body of theory and practice makes these environments predictable. Big Data, by contrast, is still being invented, but already there are some important differences between the two:

Most enterprise data is generated by or about humans and organizations: SKUs are bought by people, bills are paid by people, health care is provided to people, and so on. At some level, many human activities can be understood at human scale. Big data, particularly social media, can come from people too, but in more and more cases, it comes from machines: server logs, POS scanner data, security sensors, GPS traces. Given that these new types of data don't readily fit into relational structures and can get massively large in terms of storage, it's nontrivial to figure out what questions to ask of these data types.

When data is loaded into relational systems, it must fit predefined categories that ensure that what gets put into a system makes sense when it is pulled out. This process implies that the system is defined at the outset for what the designers expect to be queried: the questions are known, more or less, before the data is entered in a highly structured manner. In Big Data practice, meanwhile, data is stored in as complete a form as possible, close to its original state. As little as possible is thrown out so queries can evolve and not be constrained by the preconceptions of the system. Thus these systems can look highly random to traditional database experts.

It's important to stress that Big Data will not replace relational databases in most scenarios; it's simply a matter of having more tools to choose from for a given task.

Traditional databases are designed for a concrete scenario, then populated with examples (customers, products, facilities, or whatever), usually one per row: the questions and answers one can ask are to some degree predetermined. Big data can be harvested in its original form and format, and then analyzed as the questions emerge. This open-ended flexibility can of course be both a blessing and a curse.
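This contrast between predefined schemas and keeping data near its original form can be made concrete in a few lines. A minimal sketch using Python's standard library (the table, events, and field names are invented for illustration):

```python
import json
import sqlite3

# Schema-on-write: the relational table's columns are fixed up front,
# so the questions one can ask are largely predetermined.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (customer_id INTEGER, sku TEXT, qty INTEGER)")
db.execute("INSERT INTO sales VALUES (1, 'A-100', 2)")
# An event with an unanticipated shape simply doesn't fit this table.

# Schema-on-read: keep events close to their original form and impose
# structure only when a question arises.
raw_events = [
    '{"type": "purchase", "customer_id": 1, "sku": "A-100", "qty": 2}',
    '{"type": "like", "customer_id": 1, "page": "brand-page"}',
    '{"type": "check_engine", "vehicle_id": "V9", "odometer": 81214}',
]

events = [json.loads(e) for e in raw_events]
# A question invented long after the data was collected:
likes = [e for e in events if e.get("type") == "like"]
print(len(likes))  # 1
```

The flexibility cuts both ways, as noted: the raw event store answers questions nobody anticipated, but it also looks chaotic to anyone trained on the tidy table.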

Traditional databases measured the world in numbers and letters that had to be predicted: zip codes were 5 or 10 digits, SKU formats were company-specific, or mortgage payments were of predictable amounts. Big Data can accommodate Facebook "likes," instances of the "check engine" light illuminating, cellphone location mapping, and many other types of information.

Traditional databases are limited by the computing horsepower available: to ask harder questions often means buying more hardware. Big Data tools can scale up much more gracefully and cost-effectively, so decision-makers must become accustomed to asking questions they could not contemplate previously. To judge advertising effectiveness, one cable operator analyzed every channel-surfing click of every remote across every household in its territory, for example: not long ago, such an investigation would have been completely impractical.

Both the increasing prevalence of machine data and the storage of data in near-native form generate major differences in information technique. Furthermore, the scientific method can play a more central role given that experimental findings are collected into less constrained systems for analysis. Web businesses have done A/B testing for years: 100 random site visitors get a red banner, a 10% off coupon, or a personalized element while a control group gets a blue banner, a 15% coupon, or a generic site greeting. (In Google's case, 41 shades of blue were run through A/B testing in 2009 before the same color was standardized on both Gmail and the home page.) Superior performance can be assessed, tweaked, and retested.
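The A/B mechanics described above boil down to comparing two conversion rates and asking whether the gap exceeds what chance alone would produce. A bare-bones two-proportion z-test, with traffic numbers invented for illustration:

```python
from math import sqrt

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented results: red banner converts 130 of 1,000 visitors,
# blue banner converts 100 of 1,000.
z = two_proportion_z(130, 1000, 100, 1000)
print(round(z, 2))  # about 2.1; |z| > 1.96 is significant at the 5% level
```

Real experiments need more care (sample-size planning, multiple-comparison corrections when testing 41 shades of anything), but the core computation is this small.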

Harrah's Casinos uses this method to test marketing ideas, as does Capital One. Scientific experimentation does not require Big Data, but Big Data in many cases begs for sophisticated statistical machinery. The R programming language, for example, marks a substantial step beyond even the pivot-table experts who use Excel. While it's obviously an issue that the skills are in short supply, the deeper question is one of mindset within businesses that may use history, intuition, or other methods to ground decisions. Where is the will, the budget, the insight to employ Big Data approaches?

Cognition
What does it mean to think at scale? How do we learn to ask questions of the transmission of every car on the road in a metropolitan area, of the smartphone of every customer of a large retail chain, or of every overnight parcel in a massive distribution center? How can more and more businesspeople learn to think probabilistically rather than anecdotally?

The mantra that "correlation doesn't imply causation" is widely chanted yet frequently ignored; it takes logical reasoning beyond statistical relationships to test what's really going on. Unless the data team can grasp the basic relationships of how a given business works, the potential for complex numerical processing to generate false conclusions is ever-present. Numbers do not speak for themselves; it takes a human to tell stories, but as Daniel Kahneman and others have shown, our stories often embed mental traps.

Spreadsheets remain ubiquitous in the modern enterprise, but numbers at the scale of Google, Facebook, or Amazon must be conveyed in other ways. Sonification -- turning numbers into a range of audible tones -- and visualization show a lot of promise as alternative pathways to the brain, bypassing dense and non-intuitive columns of numerals. In the meantime, the pioneers are both seeing the trail ahead and taking some arrows in the back for their troubles. But the faster managers break the stereotype that "Big Data is what we've always done, just with more records or fields," the faster the breakthrough questions, insights, and solutions will redefine business practice.

Summing Up
Not surprisingly, the principles of good management extend to the domain of Big Data even though the hype and rapid pace of change can create confusion. Before businesses can profit from Big Data, managers must refuse to get lost in the noise that can obscure the basic forces represented by customers, value, and execution. The scale, speed, and diversity of Big Data can make it feel foreign, but by both refusing to be dazzled by numerical tsunamis and insisting on the basics of sound analytical practice (particularly in regard to causal relationships), any executive can contribute to the next generation of performance-enhancing information practice.

Monday, September 30, 2013

Early Indications September 2013: How the Internet is Changing Business-to-Business Marketing

My colleague Ralph Oliva has done a great job running a research center here at Penn State called the Institute for the Study of Business Markets (ISBM). They just celebrated their 30th anniversary, and that milestone prompted me to ask how digital business is reshaping B2B markets. Here are some initial thoughts.

A) Product
Let's start with the product, or more properly, the offer: what customer problem is being solved? The rapid drop in the prices of many sensors, along with nearly universal access to wi-fi, means that B2B providers can instrument a customer's business process and sell "holes rather than drills," as the old saying goes. That is, if I'm a body shop, I don't want to buy sandpaper and abrasives; I want to know that I can prep and paint any car that comes in without delays for out-of-stock supplies. Thus 3M's model, as I understand it, of continuously restocking the abrasives and masking-tape supply closets turns their stuff into a service. Add remote monitoring via sensors on nozzles or chemical tanks; access to product engineers and experts who can explain how best to address a new challenge or custom-design a fresh solution; testing and certification services; or operator training for Green, Lean, or other objectives. The result is that something as mundane as industrial lubricants can become a differentiated offering, as achieved by Castrol Industrial and other providers.

Thus the Internet and related technologies can play a crucial role in "servicization": recall that a Qantas Airbus A380 had its Rolls Royce jet engine fail spectacularly in November 2010. Rolls Royce knew immediately about the issue because the Trent 900 engines are monitored by satellite in real time all over the world. Not long ago, such a capability would have sounded like science fiction, but with the reality of "thrust by the hour" payment options, engine builders must maintain this kind of close oversight of their assets.

Finally, the Internet's power as an information medium makes it well suited for conveying knowledge. In B2B markets, this is often critical: engineers want to know parameters, plant managers want to know work-arounds, and designers want to know next-generation capabilities. It's been said that information about stuff is more valuable than stuff: if a supplier can differentiate its offer with the kind of supplemental expertise Castrol is providing, the purchasing decision can move from a low-price bake-off to a specified non-competitive contract. The customer goes from buying granules or goo to paying a premium for the knowledge of how to extract unconventional or optimal performance of the now-differentiated commodity.

B) Channel (Place, in marketing-speak)
Here's a revealing experiment: search for a B2B product on Google. See what channels have the product available: it's quite likely eBay and/or Amazon have resellers -- sometimes licensed ones -- offering everything from test equipment to medical devices to raw chemicals to forklifts. This isn't just used goods either: some are new, with unclear warranty backing.

Now repeat the experiment on Baidu, enlisting a Chinese speaker if at all possible.

C) Price
Search costs, in the economic sense of the word, have been dramatically altered by search, in the Google sense. Price transparency is a real issue, whether across national boundaries, across competitors, or across primary (new) versus secondary markets. One response is to bundle products and services, which can defeat transparency in some instances, but the once-secret pricing sheet is now often semi-public information.

D) Promotion
Here we will consider what a company can do to reach potential customers.

The first point worth mentioning here is that Internet marketing cannot replace all the things the marketing team has been doing for decades: thought leadership, sales collateral, conferences and trade shows, even direct mail. Thus there are four related challenges raised by Internet methods:

1) Identifying how marketing supports corporate strategy

2) Identifying profitable areas of Internet action

3) Identifying the points of connection with the previous marketing portfolio

4) Identifying the right metrics, communicating them to the proper parties, and adjusting action to enhance profitability, market share, or whatever the high-level objective may be.

Strategy
Companies in B2B often put a premium on innovation because commodity price pressure crashes profit margins. If the objective is to get X% of revenue from products less than Y years old, profitability can be enhanced, but the task becomes to tell the buying public about all the new items in the pipeline, often without the shorthand benefits conferred by a brand.

Other companies in B2B innovate less frequently but at greater scale: WL Gore comes to mind. In these situations, patents and brand can combine to create a high-margin scenario. Steep barriers to entry clarify the marketing objective to focus on product performance.

Still other companies run with ruthless efficiency and can underprice most any competition.

These three types of companies would each require dramatically different marketing strategies. In the real world, markets might not be so clearly sorted out, so aligning marketing effort to larger objectives can be tricky.

Internet marketing
The sheer number of options in this one domain can make priority-setting a complex matter. E-mail, social media, webinars, YouTube videos, search engine marketing, user communities, mobile applications, product configurators and other widgets, knowledge bases, and many other tools can be useful in the right context. Each can also turn out to be an expensive diversion of scarce resources and public goodwill. Devising a fresh portfolio appropriate to a given competitive context, product suite, and customer segment can be extremely challenging.

Integrated marketing -- now with added Internet goodness
As hard as it is to build an effective Internet effort, getting leverage across all the firm's marketing investments challenges lines of authority, budgeting processes, cultural assumptions, and customer habits. In some instances, budget decisions might come down to an either/or: choosing either a local seminar series or a website re-launch. In more promising scenarios, identifying how a Twitter campaign can enhance the trade show presence, or how "chalk talk" videos can drive demand for white papers, or how a mobile app can feed call center volumes can reinvigorate old methods as well as give the team credibility across demographics.

The number of potential combinations in a grid with the following axes grows very big very fast. In many cases, causation is not immediately apparent: in any given cell, which activity is driving outcomes in the other?

Traditional marketing efforts (an incomplete list)

Public relations
Analyst relations
Editorial connections: obtaining product reviews, for example, or placing guest editorials
Advertising: print, TV, other
Trade promotion (rebates, etc)
Thought leadership
Trade shows
Direct mail
Other marketing events
Sponsored content
Brochures
Catalogs
Case studies and testimonials
Certifications (LEED, fair trade, organic, cruelty-free, conflict-free, etc)
Call centers
Sales force support (print, gifts, laptop or tablet demos, pitch decks, proposal templates, etc)
Customer events (golf, auto racing, etc)
Professional associations

Internet marketing (also incomplete)

Website(s)
Estimation and configuration tools
Knowledge bases
Search engine optimization
Customer communities
Chat/discussion boards
Twitter
LinkedIn
YouTube
Email
Webinars
Mobile apps
Blogs
Banner ads
Podcasts

That hypothetical grid should make the point: building an integrated marketing plan with Internet activities connected to a coherent portfolio requires thinking digitally, in terms of word of mouth (no longer controlled by brands), in terms of speed, in terms of customer engagement, and in terms of revenue paths.
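To make the combinatorial point concrete, here is a minimal sketch (assuming only the two incomplete lists above, with nothing added): even restricting attention to pairwise interactions between one traditional and one Internet channel, the grid is already large, before any three-way combinations or directional cause-and-effect questions.

```python
# Sketch of the grid's growth, using only the two (incomplete) lists above:
# 18 traditional-marketing channels crossed with 15 Internet channels.
traditional = 18  # items in the traditional-marketing list
internet = 15     # items in the Internet-marketing list

pairwise_cells = traditional * internet
print(pairwise_cells)  # 270 pairwise cells to evaluate for causation
```

And each cell still leaves open the question the text raises: which activity is driving outcomes in the other?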

Measurement
Knowing how and what to manage becomes more complex in the online era, in part because certain forms of measurement (tweets, clicks, downloads) are so straightforward. Finding true signal, relative to the strategic objectives noted above, amidst the noise is a critical first step -- but only that. Communicating evolving measurement systems, and the rationale behind them, to a broadly distributed, demographically diverse population is a true test of both management and leadership.

Thus there's a lot to discuss here. The game, the rules, the players, and the scoreboard are all in a state of flux, but in such times, the advantage goes not to the biggest, but to the most nimble and the fastest learners. Tallying those winners will be the task of a future newsletter.

Sunday, September 01, 2013

Early Indications August 2013: The Bicycle Issue

Whether one looks at professional cycling, commuting, or recreational riding (including mountain biking), these are interesting days for the bicycle industry. While it's not a massive economic sector -- the biggest player is about a 2 billion USD company -- bicycles are one of the few products that are sold essentially everywhere, from the developing world to New York boutiques. As a result of this ubiquity, bicycles can provide some useful insights into contemporary business and economics.

Sport
As the Tour de France recently showed, cycling is an international sport with great appeal at the professional level: great riders have come from all over Europe, Russia and nearby nations, the US, and now South America. Much like soccer and unlike American football or basketball, physical stature isn't really an advantage: anyone can take up racing. George Hincapie and Bradley Wiggins are both 6' 3" while the great Greg LeMond raced at about 5' 9". Somewhat counterintuitively, as Lance Armstrong realized, the world championship or an Olympic medal can mean less, commercially anyway, than a TdF title. Riders at the top level can earn superstar money: England's Mark Cavendish, a former world champion and superb sprinter currently ranked 23rd in the world, was worth an estimated $8 million in 2012, while Peter Sagan, a promising young Slovak, is reported by an Italian newspaper to be in the running for a 4 million Euro contract for 2015. Add in endorsements, where the big money is (cycling race purses are tiny), and Armstrong's estimated $125 million net worth becomes plausible.

At the corporate level of racing, Amaury Sport Organisation (ASO), a family-run company that owns newspapers and produces sporting events including the TdF and the Paris marathon, is estimated to be a $200 million enterprise. As the Armstrong matter showed, governance in the sport can be complex, with national teams, the global cycling body (UCI), sponsors, broadcasters, and ASO all having various motives and stakes in the sport. Move into extreme sports, and ESPN (via its X games) becomes a very different kind of broadcast partner.

Bikes
On the manufacturing side, there are four major bike makers. The field is dominated by the aptly-named Giant, headquartered in Taiwan, which makes both complete bicycles and frames at all levels, including road, commuter, and mountain models. The other three main players are Cannondale (part of the same Dorel conglomerate that makes Cosco child seats and currently owns the Schwinn brand), Specialized out of California, and Wisconsin-based Trek. Other large frame-makers are based in Italy, Germany, Spain, and North America. The vast majority of frames are manufactured in Asia (usually not by the company whose name goes on the bike), though Trek still lays up some carbon-fiber frames in Waterloo, Wisconsin. For a price, riders can get a US Trek, custom painted, and they're gorgeous.

In components (the moving pieces that attach to the frame), Japan's precision manufacturer Shimano has been around the longest and earned $2.6 billion in 2012 revenues; the bicycle business is about three quarters of that while fishing tackle is most of the remainder. The other major player is younger: SRAM was founded by four MBAs in Chicago and continues to be run with a business focus. In 2008, for example, SRAM took $235 million from the former buyout arm of Lehman Brothers, and an IPO has been mentioned of late, but not scheduled. SRAM has grown through acquisition, most notably of the Rock Shox mountain bike component company, and is currently about a $500 million company. In third, size-wise if not by racing heritage, is the Italian firm Campagnolo.

Stores
At the retail level, bicycle shops are a tough business. In much of the world, there's seasonality. Inventory is expensive, with manufacturers having leverage over mom-and-pop shops that must buy ahead of demand. In the mass market, big retailers like Sears enjoy volume buying power, while in the specialty market, competition can be intense. After 100+ years, genuine innovations are rare, and get copied quickly in most cases. Given the component group oligopoly, sustained radical differentiation in complete bikes is extremely difficult, but branding effects can be significant -- hence the heavy marketing budgets, including racer sponsorships. Not that long ago, Lance Armstrong basically put Trek on the map. Given the carrying costs, the seasonality, and the comparability between rival brands, bike shops must compete on service and high-margin accessories such as clothing, racks, tires, and so on. Here, labor economics enters the picture: lots of bike mechanics love bikes, love riding, and work in part for product discounts. After a time, that can diminish as a motivator, especially as families and mortgages come into play, so employee turnover can be an issue.

No nationwide chain of bike retailers has emerged; instead, several regional chains include Performance Bike out of Chapel Hill (which owns the apparently competing BikeNashbar.com catalog+web business) with 90 stores, BikeStreetUSA's 17 stores in the southeast, and Mike's Bikes' 11 stores in northern California. The economics of local bike shops (LBSs, as they're known in the US) are interesting: according to a trade group, the roughly 5,000 LBSs in the US sold 17% of the units but accounted for half the revenue in 2007. Big chains including Wal-Mart and Toys 'R' Us, meanwhile, moved 73% of the units while collecting 36% of revenues. Online retailers are plentiful, and a constant threat to LBSs. Backcountry.com, based in Salt Lake City, has revenues of roughly $300 million and is part of Liberty Media -- the parent of QVC, the Atlanta Braves, and part owner of Live Nation and other media properties: all told, an $11 billion operation. In 2011, Backcountry bought Competitive Cyclist, with estimated revenues of $30 million in the very high end of the market. Online retailers can offer substantial price discounts, particularly on the high-margin items that keep LBSs afloat.

With any bike, but especially expensive models, fit is highly personalized. While some manufacturers certify dealers in a particular fitting system, the process is generally time-consuming, whether or not a system is used. In a given frame size and system of angles (not all 54 cm frames are shaped the same way), a fitter can:

*adjust the seat up and down
*move the seat back and forth
*tilt the saddle
*raise and lower the handlebars
*rotate the handlebars
*adjust the placement of brakes and shifters on the handlebar
*bring the handlebars closer in or push them farther out
*select pedals from several different system philosophies
*adjust the cleats bolted to the shoes that clip into the pedals
*choose different crank arm lengths to connect the pedal to the front gear, and
*work on the angle of the foot inside the shoe.

Each adjustment affects most of the others, and no two riders are alike: body weight, body proportions and ratios, leg strength, back tightness, neck flexibility, shoulder range of motion, balance, and desired riding goal all vary, sometimes considerably. From a retail perspective, the process can be a time sink -- but it's a crucial advantage of the LBS over online retailers that have considerably greater buying power. For their part, the online retailers have developed sometimes sophisticated fit calculators, liberal return policies, and libraries of YouTube videos to counteract the local advantage.

Global ridership
The global bicycle market has so many local nuances that it's difficult to generalize. One trend is toward off-road cycling in beautiful parts of the developing world as a form of tourism; pedal taxis and farm-produce haulers still constitute a major segment worldwide. In China, bicycle use is declining as automobiles gain market share. For our purposes, the more salient development is the rapid rise of bicycle commuting in the US. The numbers are moving off a small base: Portland, Oregon leads the US with 4.2% of commuters using bikes, but in Copenhagen bicycle commuting is roughly nine times as popular, at 37%. Chicago saw bike traffic increase 120% from 2000 to 2009 -- but the starting figure was only 0.5% of the commuting public. In New York, more people are riding, but issues of statistical definition mean that most people can't agree on the numbers. Among the issues: do you "commute by bicycle" if you ride 3 days a week? What about if you ride to the grocery but not to work? How are people who haul their bike on the bus counted?

Mayor Bloomberg has increased the mileage of bike lanes and encouraged a bike-sharing program, much to some people's dismay and thus one reason for statistical disagreement. (For comparison, New York's public Citi Bike program has 6,000 bikes, while Paris has 14,000, London has 8,000, and San Francisco launches this weekend with 600, helping the total US bike-share fleet double in the past year). The relationship between cyclists and cars can be delicate anywhere, but in a city, tempers are often high. Cyclists are understandably fearful of getting "doored" by a driver who doesn't look; motorists who may be trying to drive carefully grow tired of daredevil cyclists who ignore stop signs, harass pedestrians, and flip a middle finger at people who protest. Similar debates are underway in Chicago, Paris, and elsewhere, where mayors try to increase urban quality of life at the cost of vehicular convenience.

Bike manufacturers and others are attempting to capitalize on the commuter phenomenon. New frame shapes, tires, and seats are aimed at this segment. Gates (the fan belt people) has introduced a belt drive system that eliminates chain ring teeth and chain grease, two enemies of riders wearing work clothes. The challenge is getting the belt inside the rear triangle of the frame; a chain, which can be split at a link, is ideal from that standpoint. For five years Levi's has sold a water-resistant line of jeans that addresses the needs of this population and must be doing well: they are priced at a premium and hardly ever go on sale. Messenger bags are a staple of many companies' lines, joining the ubiquitous backpack as a default shape.

Social media
Much like runners, serious riders can be highly attuned to workout logs, social rankings, and personal bests. A number of online services address this set of impulses, among them Mapmyride, Fitocracy, and Bikemap. One app that has gained traction is called Strava. It runs on a freemium model: no ads, clean interface, GPS and /or heart rate monitor integration. One feature allows anyone to denote a stretch of road as a "segment," so when riders traverse that stretch with the smartphone app running, their performance is automatically plotted on a leader board. The Premium service allows people to slice the leader boards more finely, facilitating comparisons within age groups and not just overall ranking, which is standard on the free version. Knowing the population, it seems like a safe bet that people are paying up.
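The segment leader board mechanic can be sketched in a few lines. This is a hypothetical data model for illustration only, not Strava's actual API or schema; the effort records, names, and the age-group filter (standing in for the premium tier's finer slicing) are all assumptions.

```python
# Hypothetical sketch of a segment leader board: each ride over a segment
# records a rider, an age, and an elapsed time. Not Strava's actual data model.
from dataclasses import dataclass

@dataclass
class Effort:
    rider: str
    age: int
    seconds: float   # elapsed time over the segment

def leaderboard(efforts, age_min=None, age_max=None):
    """Rank efforts by time; optional age bounds sketch age-group slicing."""
    pool = [e for e in efforts
            if (age_min is None or e.age >= age_min)
            and (age_max is None or e.age <= age_max)]
    return sorted(pool, key=lambda e: e.seconds)

efforts = [Effort("Ann", 29, 412.0), Effort("Raj", 45, 398.5), Effort("Lee", 51, 430.2)]
print([e.rider for e in leaderboard(efforts)])          # ['Raj', 'Ann', 'Lee']
print([e.rider for e in leaderboard(efforts, 40, 60)])  # ['Raj', 'Lee']
```

The design point is that the same pool of efforts supports both the free overall ranking and the finer, filtered views sold at the premium tier.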

Much like the GoPro video camera, Strava has unintended side effects among an outdoorsy, adrenaline-driven population. In the process of trying to set speed records on segments, Strava riders have been known to blow through traffic lights and otherwise disregard their surroundings. Just as some ski resorts have contemplated restricting GoPro cameras to try to keep outrageous stunts under control, it seems like a matter of time before law enforcement, park management, or other authorities forbid Strava-powered virtual competitions on certain roads or trails.

Where next?
Bicycles are made of many materials: steel, aluminum, titanium, carbon fiber, and hybrids of the above. But all of these materials are relatively expensive: how can entrepreneurs sell more affordable bikes to the poorest, the "bottom of the pyramid"? Many experiments with paper/cardboard have been tried but none have solved the engineering and economic challenges. Similarly, air-filled tires have their drawbacks but solid alternatives work only in particular circumstances.

At the infrastructure level, the world's growing population is straining bicycle routes just as it stresses roads, bridges, and airports. Given how many people ride to work in Copenhagen, for instance, where are all the bikes parked safely, in a Nordic climate, either in the morning or at home at night? Will mayors preserve bike lanes or relent to the large numbers of frustrated automobile commuters? Will more workplaces install showers to encourage ridership? (In one office where I worked, a colleague took to drying his riding gear in the public bathroom and that didn't work so well.) The only sure bet is that ridership will continue to change shape; will mix pleasure, fitness, and commuting; and will represent the ultimate expression of efficient human-powered locomotion. Literally and figuratively, no other mode of transportation comes close to the mechanical wonder that is cycling.

Tuesday, June 25, 2013

Early Indications May 2013: First-world problems: too much choice

Several smart people have written about choice as a paralyzing force in western consumer economies. A recent experience reminded me of a great consulting parable that has stuck with me for nearly 20 years, and those combined to raise some thoughts about some of the costs we bear for excessive choice.

1) Many of you know Jim Gilmore's work; along with Joe Pine, he wrote The Experience Economy. A long time ago, I worked with him at my first consulting-firm job. He told a great story that I can't find online, and given that this was years ago, I'm no doubt getting some of it wrong. If you read this and I got something wrong, sorry Jim, but the insight is still a good one, and I trust I'm true to the spirit of the tale:

Several businesspeople sit down in a hotel bar in New Orleans and a waitress approaches the table. "Hello everyone - may I bring you anything to drink?" One guy speaks up. "Yes, I'd like a draft beer." The waitress had heard this before, and responded semi-automatically. "I'm sorry, but we don't have any beer on tap. We have a great list of bottled beers though," and rattled off a long list of macro- and micro-brews. "Which of those may I get you?"

There's a reason consultants get a reputation, but knowing the guys in question, I doubt the following exchange was done in a snarky fashion. "I asked for a draft beer, and you don't have draft beers," the man confirmed. "No, we don't, but we have bottled beers," and the waitress recited the list of bottled brews again.

At the end of said recitation, the patron said gently, "You asked if you could bring us something to drink. I asked for a draft beer. Right across the street, I see the Hyatt bar has Bud on tap. Is it possible to bring a beer over from there?" Never having heard this before, the waitress had to go ask her manager, but a few minutes later, the table got its draft beer, ensuring the anonymous server immortality in consulting lore, not to mention a generous tip for being such a good sport.

2) I was thinking of this story this weekend as I was painting my kitchen. Armed with a fat Benjamin Moore color-swatch book, the lady and I held up tiny paint chip after tiny paint chip, settling on a promising candidate. Not wanting to buy multiple quarts of to-be-discarded colors, I bought a gallon of the I-hope-it-will-look-good chip color. After bringing it home to try on the wall, though, the paint was . . . wrong. We decided to try to lighten it to the next higher swatch on the card of six colors. At the store, I was surprised to discover that I would need FOUR gallons of white to dilute my gallon one shade, but the paint guy (at Sherwin-Williams, who used the Moore colors with no problem) was able to add green tint to make the color less rosy in just the gallon I'd already bought.

Back at home, the color was true to a neighboring chip -- the clerk did a great job, at no charge -- but still not right. After some on-line grazing of kitchen color advice, we found the Moore book had a tiny subsection called America's Colors and in that limited palette of maybe 25 colors, there was a color that looked good in pictures, looked good on a swatch, and, I can report, looks good on the walls (not to mention my hands, calves, and hair).

3) In both of these vignettes, the point is the same, and echoes the provocative TED talk by Barry Schwartz on the tyranny of too much choice (http://www.ted.com/talks/barry_schwartz_on_the_paradox_of_choice.html). People don't want infinite choice (not just because of the mental-health implications Schwartz outlines). No, people do not want infinite choice: they want what they want.

What does this idea have to do with technology and business? Two main ideas come to mind. First, extensive choice adds to the customer service burden: ordering a beer from a list of 75 is harder, takes more time, and requires more guidance than ordering from a list of, say, 10 brews. Restaurant servers get asked all the time, "what's the Basement Brewery Old Wheaties taste like?" and skilled waiters and waitresses have a variety of useful answers at the ready: a) compare it to something less obscure, b) offer a personal testimonial or diplomatic warning, or c) offer to bring you a sample. Each of these makes his or her job harder than it might otherwise be, but the burden of service is higher in a high-choice scenario, whether in wedding dresses, paints, restaurants, or car shopping: before Ford rationalized the optioning process, the 2008 F-150 pickup came in more than a billion possible combinations.
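The scale of that F-150 figure is easier to believe once you see how independent options multiply. The sketch below uses a purely hypothetical option list (the real truck's options aren't reproduced here): thirty independent yes/no choices alone already exceed a billion distinct builds.

```python
# Hypothetical illustration of option explosion: 30 independent yes/no
# options (NOT Ford's actual option sheet) multiply to over a billion builds.
option_counts = [2] * 30   # each option has 2 choices: on or off

combos = 1
for n in option_counts:
    combos *= n            # total builds = product of per-option choices

print(combos)              # 1073741824 -- over a billion combinations
```

Add a few multi-way options (colors, trims, engines) and the count dwarfs a billion, which is why rationalizing the option sheet mattered.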

Recommendation engines can help cut through the noise. Amazon's recommendations are so good they sometimes hit too close to home; eBay's are pretty generic and/or too obviously linked to my last visit, which may have been a one-off (e.g., 2005 Toyota Camry door handle). Netflix and iTunes are working hard on this set of technologies, but for domains outside of media, it's hard to build a sufficiently robust profile. No great recommendation engines jump to mind for financial services, consumer electronics, or other expensive, highly complex purchases.

Another alternative to requiring more and more skilled service is to emphasize design. Apple has basically three mobile devices, with varying amounts of memory/storage, usually two colors, and a few connectivity options. Android, meanwhile, offers more than 4,000 different devices. While one may be EXACTLY what I need, cutting through the noise to find it is non-trivial at point of purchase. In the supply chain, meanwhile, keeping parts, contractors, documentation, and manufacturing expertise all current (never mind optimized) for so many devices to be sold in so many geographies is a significant challenge. Furthermore, the network effects that result from a shared platform are constrained somewhat because of Android sub-speciation. An app built for a touch screen display of size x won't render quite the same way on a size y display, and keyboard-driven devices don't enjoy full reciprocity with the glass keyboard on the touchscreen models. I often do an experiment in my classes: swap Android devices with a stranger. Now try to open a familiar app, send a text, or call someone. Very few Android users feel comfortable on kindred but non-identical devices.

Apple, meanwhile, has made iPods, iPhones, and iPads look and feel like siblings; the learning curve, while cumulatively considerable (think about being handed an iPad in 1999), is continuously compatible. The exception is the current Apple TV remote: unintuitive, slow, too easily lost, not predictably responsive. I mention this because Microsoft's next Xbox, revealed last week, was demoed with a convincing, lag-free voice interface. All the Google Glass jokes notwithstanding, voice control will have its place -- but in the living room, most likely, rather than on the subway, in restrooms, or at the grocery.

Voice control, as anyone who has used IVR trees can attest, is non-trivial to get right, but does have the potential to render the negative implications of device proliferation less onerous. In the meantime, especially when dealing with the digital Swiss army knives of our era, consumers will continue to face a bewildering array of choices until Samsung and/or Google get a handle on simplification. At the macro level, choices are often easy for vendors to generate, particularly insofar as they are increasingly defined by software (adding another menu item is "free" to the developer), but consumer frustration would suggest that the insights of the Don Normans, the Barry Schwartzes, the Bruce Schneiers, and the Jared Spools of the world are falling on infertile ground. In plenty of situations, less is truly more.

Early Indications June 2013: What gets measured, gets . . .

It has become a business truism that “what gets measured, gets managed,” a line usually attributed to the great Peter Drucker. (There is no citation, however, and it may be that original credit goes to Lord Kelvin, who stated that “If you cannot measure it, you cannot improve it.”) In the “big data” era, it has become an article of faith that the more measurements we can gather and presumably analyze, the more we can optimize behavior that drives medical outcomes, social welfare, and corporate profitability. While I believe that we will see some extremely positive validations of this hypothesis, there are also enough cautionary tales to suggest some skepticism is warranted before taking the promises of the big data evangelists at face value.

Five unrelated examples combine to suggest an alternative mantra:

what gets measured, gets gamed.

That is, the scorecard gets attention _at the expense of_ the nominal task that was being measured in the first place.
  
Example 1: A former student reported that forecasting tools in a consumer products company were generating remarkably consistent projections, regardless of seasonality, competitors’ new product launches, or other visible alterations to the landscape. After some investigation, it was determined that a specific forecast curve had become popular (whether with procurement, finance, marketing, or plant managers was not made clear). To generate the “acceptable” forecast month after month, analysts took to [essentially] defeating the forecast algorithms by adjusting past actual quantities: to get the future curve they wanted, employees rewrote history.

Sales forecasting is gamed by definition, given the way commissions, market uncertainty, and expectation management affect the process. Numerous attempts have been made to induce “best-guess” estimations by the sales people, but even those companies that deployed prediction markets reported mixed results.

Example 2: For a time there was a breed of financial planner who was paid not on the basis of his or her clients’ rate of return, but by commissions generated by equities trades. Not surprisingly, clients did not get advice based on the long-term growth of their portfolio, but on the hottest stock of the moment. Moving clients in and out of different equities based on magazine cover stories proved to be good business for the planners, and only incidentally and accidentally profitable for the clients.

Example 3: A former colleague of mine recently analyzed the marketing activities of a large technology company. Even though the company sells B-to-B with a direct sales force, an executive dashboard someplace measures website clicks. The word came down through marketing that each product group had to “win the dashboard,” in this case, piling up web clicks through heavy ad placement even though this behavior could in no way be tied to revenue, customer satisfaction, or even lead generation.

Example 4 comes from closer to home. Course evaluations have become the focus of many universities’ professional assessments of non-research faculty, trying to ensure that students feel the instructor did his or her job. At Penn State the forms are called not “course evaluations” but SRTEs: Student Ratings of Teacher Effectiveness, though I doubt I am alone in believing the E stands for Entertainment. In my last teaching job, now 20 years ago, I was known to game the evaluation process, bringing cookies for the class on the last day before passing out paper evaluation forms. In our modern age, however, the assessment has gone online, so students are able to fill out the forms at their convenience and administrators can get scores reported in days rather than the months it took to code paper instruments.

At Penn State, the move toward paperless assessment has coincided with a startling drop in the completion rate. Like some other schools, we have an institute for the advancement of teaching skills. Upon seeing the drop in SRTE completion, that institute undertook a project to try to improve compliance with the assessment. Note that these efforts do nothing to improve pedagogy or to understand why compliance is dropping; the focus on the course assessment process is completely unrelated to helping students learn. Once again, the tail is wagging the dog.

Example 5: Information technology has become the backbone of most modern organizations. Grading the performance of the IS group, however, is extremely difficult. In many IS shops, measuring system uptime is readily quantifiable and usually scores in the high 90s. (For reference, 99.5% is a great score on a test but in this context it means the system was down for almost two full days a year.) What is much more difficult to measure, yet more important to business performance, is whether the right applications were running in the first place, how much inefficiency in the data center was required to get the gaudy uptime number, or how good the data was that the system delivered. Information quality is one of those metrics that is incredibly hard (and sometimes embarrassing) to measure, hard to improve, and hard to justify in terms of conventional ROI. Yet while it is, more often than not, truly critical for business performance, information quality was not in years past a component of a CIO’s performance plan. I’m told the situation is changing, although measuring application portfolio management – how well IS gets the right tools into production and the old ones retired – remains a challenge.
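The arithmetic behind that parenthetical is worth making explicit. A few lines of Python (the 99.9% and 99.99% tiers are added here for comparison) show how quickly a "high-90s" uptime score turns into days of outage:

```python
# Convert an availability percentage into hours of downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours(availability_pct):
    """Annual downtime implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.5, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_hours(pct):.1f} hours down per year")
# 99.5% uptime leaves 43.8 hours of downtime: almost two full days.
```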

The five examples, along with many others from your own experience, suggest two important lessons. First, more data will by definition – thank you Claude Shannon – contain more noise. As Nassim Taleb notes in his critique of uncritical big-data love, more data simply means more cherry-picking (and not, Nate Silver would add, better hypothesis generation).

Second, in the domain of human management, incentive structures remain hard to get right, so there will be more and more temptations to let “numbers speak for themselves.” Such attitudes can emphasize the most readily measured phenomena, often of activity rather than outcomes – web clicks are easier to count than conversions; sales calls are easier to generate than revenues; incoming SAT scores are easier to average than student loan debt or job placement rates of the graduating class.

One would hope that getting the metrics right, even though the right metric usually means counting something that doesn't look as good, would dictate performance assessment. Given so much evidence to the contrary from the worlds of medicine, commerce, sports, the military (remember Robert McNamara's "kill ratios"?), and academia, however, it would appear that these games will forever be with us.

Monday, April 29, 2013

April 2013 Early Indications: What are humans good at?

"So we will be producing about 10^26 to 10^29 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence ... This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars' worth of computation will be equal to 10^26 cps, so the intelligence created per year (at a total cost of about $10^12) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045."

-Ray Kurzweil, The Singularity is Near (New York: Penguin Group, 2005), pp. 135–136.

In the manner of all true technology revolutions, this one has crept up on us. I speak of the vast array of ways that computing is augmenting human life. Rather than think of people and computers, or people and robots, I believe it makes more sense to think of a continuum, with a naked newborn on one end -- pure human, zero augmentation -- and 2001's HAL or an Asimov fictional robot on the other: pure cyborg, with ample human characteristics. Everywhere in between these two poles, we can see combinations of human traits and computo-mechanical assistance. For now, humans call the shots in most scenarios (but not all; see below) and our devices can assist in any of thousands of ways.

I've never really bought Kurzweil's Singularity hypothesis: that machine capability, whether in the solo or collective mode, will eclipse human capability with profound consequences. The simplistic equation of CPU capacity with "living biological human intelligence" has never been argued with any serious evidence. The fact that Kurzweil has recently become a senior exec at Google, meanwhile, raises some pretty interesting questions.

At the same time, there's a tendency to "privilege" (sorry - it's an academic phrasing) humanity. Every robot that is said to be convincing, for example, serves as evidence that people are somehow Special, if less so every year. People also impart human characteristics to machines, naming cars and Roombas, for example, but not mobile phones, as far as I can tell. There's a whole lot to be researched and written about how humans anthropomorphize non-humans (animals and now machines/devices), but that's out of scope for the moment.

I think these two viewpoints -- the Singularity school and human exceptionalism -- both carry substantial unacknowledged baggage. Kurzweil et al. adopt a simplistic understanding of humanity as the sum of our circuits. For those who worry about machines overtaking humans, meanwhile, this is not news. Humans are far from the strongest creatures or even the strongest mammals, and nobody I know seems to feel the lesser for it. In the realm of mental capacity, computers have resoundingly beaten our cerebral gladiators at both chess and trivia. In the realm of the everyday, pocket calculators from 40 years ago outperform everybody at mathematical figuring. Engines, hydraulics, and now computers are clearly better than humans at some tasks.

Here's one example. Inspired by the Economist's cover story of April 20, I conclude that machines are in fact better than humans at driving cars under many circumstances. Consider that calculating the speed of an oncoming car is guesswork for a human: people misjudge their window of opportunity (to make a left turn across oncoming traffic for example) literally hundreds of times every day. LIDAR plus processing power makes that calculation trivial for a computer-car. Knowing the limits of the car's handling, done by feel, is beyond the experience of 99% of drivers, most likely: especially with automatic transmissions, traction control (a computer assist), automatic pitch/yaw compensation, and other safety features, it's very difficult to get a car sideways, deliberately, to figure out how and when to react. For every "driving enthusiast" who squawks in a car magazine about "having all the decisions taken away from us by Big Brother," there will be many people who would LOVE to be chauffeured to their destination, especially when it's the Interstate grind crawling to work at 7:00 rather than the stick-shift byways of rural Virginia on a springtime Sunday morning. Even apart from the Google car, computers are doing more and more driving every year.

Prefaced with the caveat that I am not, nor can I aspire to be, a cognitive scientist, there is a question embedded here: what are humans good at, what are computers good at, and how will the person/machine partnership change shape over the coming years? There must be research on this, but I couldn't find a clear, definitive list in plain English, so this fool will rush in, etc.

Let's start with machines: machines can count, multiply, and divide way faster than any person. Time and distance, easily quantified, are readily calculable. Short- and long-term memory (storage) can be pretty much permanent, especially in a well-engineered system, not to mention effectively infinite at Google scale. If-this/then-that logic, in long long chains, is a computer specialty. IBM's Watson, after winning Jeopardy with a really big rules engine, is now being used for medical diagnosis, where it should do well at both individual scenarios and public-health bigger pictures. Matching numbers, data patterns, and text strings is straightforward.
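The if-this/then-that specialty can be caricatured in a few lines. This toy forward-chaining sketch (the rules and facts are invented for illustration, and bear no relation to Watson's actual machinery) shows why long rule chains are effortless for machines:

```python
# A toy forward-chaining rule engine: each rule fires when all of its
# conditions are known facts, adding its conclusion as a new fact.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Keep firing rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
```

Chaining two rules is trivial here; chaining tens of thousands, tirelessly and without skipped steps, is precisely where machines leave people behind.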

What about people? People can feel empathy. People can create art. People can see visual/logical nuances better than machines: a 5-year-old knows that a spoon is "like" a fork but not like a pencil, while a computer must be taught that in explicit terms. Similarly, machine filters that "know it when they see it," in the Potter Stewart sense of hard-core material, have been spectacularly unreliable. People can read body language better than computers can. People can integrate new experience. People can infer better than machines. People can invent: recipe creation software can't duplicate even middling chefs, for example. Computers can be taught to recognize puns and, more recently, "that's what she said" double-entendres; only humans can create good ones.

Antonio Damasio's brilliant book Descartes' Error should be required reading for the Kurzweilians. Rather than accept the Cartesian split of mind from body -- embodied in the epigram "I think, therefore I am" -- Damasio insists, with evidence, that it is emotion, the blurry juncture of mind and body, that enabled human survival and continues to define the species. All the talk about calculations equaling and surpassing human intelligence ignores this basic reality. Until computers can laugh, cry, sing, and otherwise integrate mind and body, they cannot "surpass" what makes people people.

Here's a nice summary, from INC magazine of all places, in 2002:
*********
    Yet is thinking outside the box all it takes to be innovative? Are reasoning and imagination -- the twin faculties that most of us associate with innovation -- enough for Ray Kurzweil to know which of the formulas that he's dreamed up based on past technological trends will lead to the best mathematical models for predicting future trends?
    No, says Antonio Damasio, head of the neurology department at the University of Iowa College of Medicine. The innovator has to be able to feel outside the box, too -- that is, to make value judgments about the images and ideas that he or she has produced in such abundance. "Invention," as the French mathematician Henri Poincaré said, "is discernment, choice." And choice, notes Damasio, is based on human emotion -- sensations that originate in the brain but loop down into the body and back up again. "What you're really doing in the process of creating is choosing one thing over another, not necessarily because it is factually more positive but because it attracts you more," says Damasio. "Emotion is literally the alarm that permits the detection."
    Kurzweil, for his part, calls that alarm "intuitive judgment." But he disagrees that it -- or reasoning or imagination, for that matter -- is exclusively human. He sees a day in the not-too-distant future when we will merge mechanical processes with biological ones in order to amplify what our brains alone do today. "Ultimately, we'll be able to develop machines that are based on the principles of operation of the  human brain and that have the complexity of human intelligence," he says. "As we get to the 2030s and 2040s, the nonbiological component of our civilization's thinking power will dominate."
**********

As I suggested earlier, the human/machine distinction is far from binary. Even assuming a continuum, however, perhaps the most important category of tasks has been little discussed: computer systems that possess emergent properties no human can understand. Wall Street is in this category, given algorithmic trades occurring in millionths of a second, in a system where the interactions of proprietary codes are occasionally catastrophic yet beyond human comprehension (both in real time and after the fact), not to mention beyond regulation. When algorithmic trades in synthetic instruments inadvertently wipe out underlying assets, who's left holding the bag? Sometimes it's the algorithm's "owner": Knight Capital basically had to sell itself to a rescue party last summer after its bad code (apparently test scripts found their way into the live NYSE and nobody noticed) lost $440 million in less than an hour; the program was buying, in automatic mode, $2.6 million of equities _per second._ Just because people can write algorithms and code doesn't mean they can foresee all potential interactions of that code -- assuming it ran as designed in the first place.

I'm not going to get nostalgic, or apocalyptic, or utopian here. Humanity has always built tools, and the tools always have unintended consequences. Those consequences have been substantial before: the rise of cities, extension of human life spans, atomic bombs, Tang. This time around, however, when the unintended consequence cuts so close to our identity, some self-awareness -- something computers can't do -- is probably in order. On that front, I'm not entirely hopeful: during the recent Boston bomb drama, when people were shown at their worst and finest, news feeds were dominated by updates on a Kardashian divorce development. I don't know if we're "amusing ourselves to death," as Neil Postman put it a long time ago, but maybe there's the chance that some people will dumb themselves down to computers rather than the machines catching up.

Saturday, March 30, 2013

Early Indications March 2013: Digital Heirlooms

I've been thinking a lot lately about the invisible consequences of our smartphone/mobile/digital world. Somewhere down the road, the dematerialization of cultural artifacts will be viewed, I believe, as a major shift. Books are our oldest mass cultural form; between 1880 and 2000, music, movies, and then television followed them into widely available portable formats. Eventually, and rapidly, all of these became digital, and fungible: 15 years ago the radio couldn't play back voicemail, nor could a VCR host video games.

The business competition between Amazon, which won the first leg of the e-book/e-reader race, Netflix (ditto for movies), and Apple (music) is for extremely high stakes, but not our concern today. As the barrier to cultural creation drops, artifacts get easier to make. Compare the process of creating photographs in 1913, 1963, and today. Humanity has never made -- or shared -- so many images, but how will these increasingly ephemeral artifacts get passed down? Finding one or two photos of my grandfather when he was a young boy was lucky and important; in 100 years, what will my grandkids have to show for their infancy, adolescence, and young adulthood?

Google's recent decision to drop the Reader product is instructive here. At what point do changing cloud computing business models endanger and/or support preservation? Is there any conceivable way Facebook can keep adding billions and billions of photo uploads in perpetuity? Given that some kind of limits will be reached, where do our cloud-identities go when businesses fail? As more and more variations emerge, what will be the fate of digital personae after we die? We may well confront a paradox: we make more images than ever before, yet in the future, we could have less of a visual inheritance.

A whole other branch of issues revolves around platform compatibility. Some of my written masterpieces from the 1990s are stuck on 3 1/2" floppies for which I no longer own a working drive. That's a hardware question. What about software compatibility? For how long will Adobe support the PDF standard? In the absence of such support, and the possibility that a given standard will not be open-sourced to a community that can maintain it, we will see further stranding of digital assets.

In such a world, what lasts? I was pondering this question when considering graduation gifts. An Apple device, no matter how sleek and easy to hold, will be obsolete in five years, maybe sooner. Music is hard to give: for how long can we assume most every household will be able to play a CD? The last two computers I bought, not to mention every tablet, lack the capability. One day I will wake up and realize, yet again, that there is another format of information I can't access, joining the floppies, VHS, Jaz, and Zip media boxed up, worthless, in the basement.

Books have played a huge role in my life. When we left grad school, the moving company found that our books outweighed the car that shared the truck. Many books tell a story independent of the printed page. Bookplates were a classy accoutrement of prior generations; inscriptions can still be precious. But the fact remains that, apart from university press books, most paper rots, some startlingly quickly. Books weigh a lot and occupy substantial space. The stereotype of a book-lined academic household is giving way to cloud-ish realities: it's quicker to consult Google to hunt down a footnote than to drive to the campus library or plow through the boxes in the garage, given that my book collection currently surpasses my available wall space for shelving it. Much as I hate to admit it, books are losing their appeal for me as gifts, especially "special" ones. The good news is that books' operating system is now stable, and is likely to remain so.

To return to the question, what lasts in a digital world? Paper is a mixed blessing, but Moleskine has made a very profitable global luxury brand out of blank books (if you are a fan of the Italian-made gems, check out this fascinating article related to the company's upcoming IPO). Pens continue to satisfy; alongside the European classics, several Kickstarter businesses growing out of the cult following that has emerged around the 0.3 mm Pilot Hi-Tec C are fascinating to track. I don't watch people in their 20s and 30s closely enough to know whether pens are being replaced in the preparation of grocery lists, birthday cards, or journals, but sense they are not. (From "Dear Diary" to "Dear Evernote"?) Relating pens to a broader category, tools can be truly lasting gifts, the antithesis of digital ephemera. Specifically, bladed tools seem to hold some deep appeal: knives, kitchen or otherwise, and chisels/planes strike me as heirlooms more than, say, striking tools ("here son, a titanium framing hammer as your graduation present"), mechanics' tools, or even saws. In the grooming arena, shaving razors, and those lovely badger brushes, seem to continue the theme. Scissors, whether run with or standing still, don't hold the same appeal, but I don't write as a quilter or scrapbooker, for whom such tools might indeed be long-lived, essential, and personal.

Ah yes, say some women friends, you're so much of a guy, always missing the point: jewelry has struck a nerve for millennia. Gold, precious stones, and other articles of adornment appeal deeply to many women from many cultures. To this I say: true, but "little jewelry" is an oxymoron in my experience. Finding something well-made, lasting, and appealing for the same price as a Swiss Army knife or decent "graduation" pen has been difficult for me. There's also the strong sentimentality: giving jewelry to the babysitter graduating from high school feels a little too personal. Tools have a safety zone that rings do not. In both cases, however, the appeal relates to hands: things that people before us touched, treasured, and took care of mean so much more than something shiny and new -- unless we can imagine the new present enduring across generations.

The essential role of blades in our species' survival speaks to some deep parts of the psyche located, I suspect, far removed from the dopamine pumps so capably triggered by multitasking, texting, tweeting, and online grazing. To the question of "what lasts in a digital age?" the answer, I submit, is simple: tools that fit the hand of the user. Or gold.

Thursday, February 28, 2013

Early Indications February 2013: Big Caveats Regarding Big Data

Review essay

Michael Mauboussin, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Boston: Harvard Business Review Press, 2012)

Nate Silver, The Signal and the Noise: Why So Many Predictions Fail — but Some Don't (New York: Penguin, 2012)

Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012)

Preface
To set context, here is a sampling of what IT vendors are saying about Big Data:

SAP:
Get the tools and technology you need to harness big data from any source – structured or unstructured – for a serious competitive advantage. Our big data solutions can help you capture, analyze, report, predict, and visualize mammoth volumes of data instantly – so you can make the best possible business decision, every time.

IBM:
Big data is more than simply a matter of size; it is an opportunity to find insights in new and emerging types of data and content, to make your business more agile, and to answer questions that were previously considered beyond your reach.

SAS:
The hopeful vision for big data is that organizations will be able to harness relevant data and use it to make the best decisions.
Technologies today not only support the collection and storage of large amounts of data, they provide the ability to understand and take advantage of its full value, which helps organizations run more efficiently and profitably.

Oracle:
For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information.
Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.
--------
In sum, the vision of the Big Data movement appears to be as follows:

to measure and capture, in greater detail and quantity, things that have happened in order to analyze the data, find insights/answer hard questions/capitalize on hidden relationships, and act more effectively in the future ("make better decisions.")

It all sounds reasonable, except that the foundational logic has yet to be tested. Fortunately, we have some very smart people from diverse backgrounds who can help in that quest. If these three gentlemen are correct, the very premises of Big Data need to be tempered, not with better computer science, but with a better comprehension both of how people think, act, and decide and of how much luck and randomness still shape our world.

The three books all overlap to a degree, often in their appreciation for the behavioral economics of Daniel Kahneman, and each author brings serious credentials to the table:

-Mauboussin teaches at Columbia in addition to working at Legg Mason; he wrote an early and influential report on the financial implications of power laws back in the late 1990s.

-Silver gained fame on election night 2012 after correctly calling 50 out of 50 state results in the presidential race, after going 49 for 50 in 2008. His first data-centric venture was in baseball statistics.

-Taleb's previous books, The Black Swan and Fooled by Randomness, provided prescient color commentary to the financial crisis of 2008. Stylistically, existentially, and intellectually, he swims upstream but has repeatedly been proven right.

Three macro-level insights emerged from these books.

A) Luck remains a critically important factor in success, so prediction, even when successful (that is, skillful), may not generate much advantage

Mauboussin looks at the relationship of luck and skill in a variety of domains. The book owes many debts to Moneyball, but it ranges across more sports and extends convincingly into business. Results in the NBA, for example, are decided by skill to a much higher degree than in the NHL: in 2-1 or 1-0 games on the ice, the slightest deflection or fluke play can determine a game. When the Spurs beat the Suns 104-98, however, random chance events are fewer (how many deflected shots actually go through the hoop?) and their impact is minimal.

When he turns to business and investing, Mauboussin makes similarly compelling points. For our purposes, the central insight relevant to Big Data concerns what might be called the water level: as the skill level rises in a population, differences between competitors shrink. Thus luck becomes more of a factor: "if stocks are priced efficiently in the market, luck will determine whether an investor correctly anticipates the next price move up or down. When everyone in business, sports, and investing copies the best practices of others, luck plays a greater role in how they all do." (p. 56)

This insight would seem to apply to the algorithmic arms races in baseball talent scouting, investing, and consumer data mining. In situations where no actor can accumulate a commanding lead (as Google has and Facebook might, however), whether in computing horsepower, algorithmic quality, or data to be analyzed, the skill premium dissipates. Luck, by this theory, will play a greater role than skill in such a homogeneous environment.
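Mauboussin's rising-water-level argument can be sketched numerically. In a toy model where outcome = skill + luck (all spreads invented for illustration), shrinking the skill differences between competitors mechanically raises luck's share of the result:

```python
# Toy model: outcome = skill + luck, as independent draws. When
# competitors converge in skill, the unchanged luck term explains
# a growing share of the variance in outcomes.

def luck_share(skill_spread, luck_spread):
    """Fraction of outcome variance attributable to luck."""
    return luck_spread**2 / (skill_spread**2 + luck_spread**2)

# Wide skill differences: skill dominates the results.
print(f"{luck_share(3.0, 1.0):.0%} of outcome variance is luck")
# Everyone copies best practices, skill converges: luck takes over.
print(f"{luck_share(0.5, 1.0):.0%} of outcome variance is luck")
```

With a wide skill spread, luck explains about 10% of the variance; once skill converges, the same luck term explains 80%.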

Mauboussin concludes the book with 10 suggestions to improve the "art of good guesswork":

1 Understand where you are on the luck-skill continuum

2 Assess sample size, significance, and swans

3 Always consider a null hypothesis

4 Think carefully about feedback and rewards [many financial advisors get paid when clients trade, not when clients prosper, for example: what's the feedback loop there?]

5 Make use of counterfactuals

6 Develop aids to guide and improve your skill [checklists are a case in point]

7 Have a plan for strategic interactions [such as asymmetric warfare or disruptive innovations]

8 Make reversion to the mean work for you

9 Develop useful statistics

10 Know your limitations

This tenth maxim serves as a convenient segue to Silver's book.  From its title -- signals and noise are fundamental to information theory -- to its examples (which include economics, earthquakes, and climate change), the book would appear to be enthusiastic about using numbers to predict the future, to realize the promise of Big Data. But as Silver writes very early in the book, his focus is less on data and more on the people who use it:

"Big Data _will_ produce progress -- eventually. How quickly it does, and whether we regress in the meantime, will depend on us. . . .
Our biological instincts are not always very well adapted to the information-rich modern world. Unless we work _actively_ to become aware of the biases we introduce, the returns to additional information may be minimal -- or diminishing." (pp. 12-13)

Thus, the second macro-level idea concerns consciously testing ideas, assumptions, and admitting uncertainty.

B) Bayesian statistics, in particular its insistence on carefully articulated prior probabilities, forces human analysts to attach values to the context for their predictions rather than let them float ahistorically, otherwise known as "letting the data speak for itself."

This illusion of statistical sufficiency, sometimes known as "frequentism," dates to the early 20th century, and the school of thought persists today. As Silver summarizes, "it emphasizes the objective purity of the experiment -- every hypothesis could be tested to a perfect conclusion if only enough data were collected. However, to achieve that purity, it denies the need for Bayesian priors or any other sort of messy real-world context." (p. 255)
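A worked example makes the contrast concrete. In Bayesian terms, the prior is exactly the "messy real-world context" the frequentist wants to exclude; the numbers below are invented for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)].
# The prior P(H) encodes real-world context the data alone cannot supply.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Probability of the hypothesis after seeing the evidence."""
    hit = prior * p_evidence_if_true
    miss = (1 - prior) * p_evidence_if_false
    return hit / (hit + miss)

# A signal that is right 90% of the time sounds decisive, but against
# a 1% prior the posterior is only about 8.3%: context dominates data.
print(f"{posterior(0.01, 0.90, 0.10):.3f}")
```

Letting "the data speak for itself" amounts to silently assuming a flat prior, which is itself a value judgment about context.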

Echoing his opening assertions in the conclusion, Silver plausibly argues that "distinguishing the signal from the noise requires both scientific knowledge and self-knowledge: the serenity to accept the things we cannot predict, the courage to predict the things we can, and the wisdom to know the difference." (p. 453)

Possibly because Silver's book ranges more widely than Mauboussin's, it felt more engaging. Written as it was before his successful handicapping of the Obama re-election, The Signal and the Noise is itself something of a prior: a self-aware assessment of Silver's own methods and their probabilistic limits. The book forces erstwhile predictors to examine their methods, their objectives, and ultimately themselves -- not at all what the two-dimensional stat-geek stereotype would suggest.

In contrast to Mauboussin, Silver offers but two admonitions in his conclusion:

Know Where You're Coming From

and

Think Probabilistically.

In contrast to closed-end events -- when the first snowfall will come, who will win the championship, how many widgets Samsung will sell -- open-ended events are the terrain of Nassim Nicholas Taleb: Black Swans, as they have come to be called. As Silver notes, nobody can remotely predict earthquakes or most other natural phenomena, weather being the exception. Nor can political revolts (in either London or Cairo, for example), equity or currency fluctuations, or other large-scale man-made phenomena be forecast at all reliably. Rather than predicting, Taleb advocates an entirely different approach.

C) Because of the nature of a highly complex and connected world, "Black Swan" events can generate very large, unforeseen effects, very quickly. A prudent strategy for living in such a world is to seek shelter to a substantial (but not complete) degree, while finding exposure to the upside of unforeseeable events with small bets in as many big-multiplier arenas as possible, often via optionality. Taleb calls this a "barbell" strategy for its bimodal distribution: for example, very large positions in cash or other low-risk and low-reward instruments, with focused but small investment in high-risk/very high-reward (and thus probably exotic) positions. Note that the middle is avoided entirely: Taleb's antipathy for bell curve distributions, especially where misapplied, is vehement.

The title of Taleb's book hints at how unaccustomed we are to thinking this way. Everyone knows that a wine glass is fragile: physical volatility is usually fatal. Note that fragility scales non-linearly: a fall from 32 inches onto the hardwood floor is far more than four times as damaging as an 8-inch drop, which is likely survivable. Many people say that "robust" is the antithesis of fragile, but Taleb disputes this position: what, instead, are the opposite of fragile phenomena, the things that actively IMPROVE in the presence of volatility? He looked in dozens of languages: none had a word to connote this property, which is, nonetheless, quite real. Taleb's contrarian-ness is of a high order indeed.
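The wine-glass arithmetic can be sketched with a toy damage function; the quadratic form is my assumption for illustration, not Taleb's formula, but it captures the convexity he describes:

```python
# Hypothetical convex damage model: harm grows with the square of the
# shock, so bigger shocks are disproportionately worse -- fragility.

def damage(drop_inches, k=1.0):
    """Toy damage function: damage = k * height**2."""
    return k * drop_inches ** 2

print(damage(32) / damage(8))  # a 4x taller fall, 16x the damage
```

Antifragility would be the mirror image: a payoff that is convex to volatility on the upside, gaining more from large favorable shocks than it loses from equal unfavorable ones.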

It turns out that the natural world, biology in particular, abounds in situations where volatility improves matters. Young children learning language, muscles after exercise, and immune defenses all qualify. In the human order, Taleb praises the Swiss city-state (canton): most people would name Switzerland the most stable regime on earth, yet almost nobody outside the country knows who its president is. Decentralized authority keeps the scale of both problems and solutions human-friendly and risk-limited. Swiss disorder occurs in domains that are the exact opposite of "too big to fail," itself a curse in this system of thinking because increasing scale implies massive risk. Man-made "stabilization" often leads to instability, whether in financial systems, forest fires (preventing healthy little ones guarantees a later, inevitable inferno), or corporate planning. When small, routine failures are prevented through naive bureaucratic intervention, stressors accumulate until the impact is multiplied to the scale of the entire system: witness the mortgage banking mess, rogue traders at Societe Generale and JPMorgan, and the flash crash. And on the basis of what empirical evidence is "equilibrium" the economist's ideal?

Thus rather than failing to predict the mechanism of [by-definition] unpredictable disaster, we can see the quite foreseeable effects of 100-year-old subway tunnels in New York (whether the stressor is a riot, a terrorist, or a hurricane is irrelevant), of slow responses to climate change, or of overly long supply chains for food. In short, Taleb shows that prediction is systematically broken for both psychological reasons -- yes, Kahneman gets his props here too -- and structural/organizational ones. The 425-page excursion into many nooks and crannies of the Western intellectual tradition (Seneca plays a featured role, for example) is itself unpredictable: Taleb does not so much explicate his argument as embody it, with frequent personal asides that prove he literally has skin in the game. His conclusion is much more straightforward than its telling:

"Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty." (p. 421)

Rather than seek certainty in data* or in anything else, Taleb seeks out situations, investments, and modes of living that not only resist volatility but _thrive_ in its inevitable presence. The notion of antifragility thus stands as the most robust challenge to the uncritical application of data, algorithms, and prediction more generally -- especially outside the realms (such as weather) where we can actually document a certain degree of success. As for lavish investments in police and fire departments at the city level, in R&D at the corporate level, and in universities in any particular society, Taleb contends that we really know little about correlation vs. causation. This fundamental lack of evidence suggests that for data to improve our world, there are more intermediate steps between the computer and a better future than the apparent consensus allows.


*"There is a nasty phenomenon called 'Big Data' in which researchers bring cherry-picking to an industrial level. Modernity provides too many variables (but too little data per variable), and the spurious relationships grow much, much faster than real information, as noise is convex and information is concave." (p. 418) More simply, "the more data you get, the less you know what's going on." (p. 128)