Wednesday, October 01, 2014

Early Indications September 2014: Alternatives to Industry

In classic business school strategy formulation, a company’s industry is taken as the determining factor in cost structures, capital utilization, and other constraints on the pursuit of market success. Nowhere is this view more clearly visible than in Michael Porter’s seminal book Competitive Strategy, in which the word “industry” appears regularly.

I have long contended that Internet companies break this formulation. A series of blog posts (especially this one) in the past few weeks has crystallized this idea for me. The different paths pursued by Apple, Amazon, and Google — very different companies when viewed through the lens of industries — lead me to join those who contend that, despite their different microeconomic categories, these three companies are in fact leading competitors in important ways, though not of the Coke/Pepsi variety.

Let us consider the traditional labels first. Amazon is nominally a retailer, selling people (and businesses) physical items that it distributes with great precision from its global network of warehouses. Its margins are thin, in part because of the company’s aggressive focus on delivering value to the customer, often at the cost of profitability for both Amazon itself and its suppliers.

Apple designs, supervises the manufacture of, and distributes digital hardware. Its profit margins are much higher than Amazon’s, in large part because its emphasis on design and usability allows it to command premium prices. Despite these margins and a powerful brand, investors value the company much less aggressively than they do Amazon.

Google, finally, collects vast quantities of data and provides navigation for the digital age: search, maps, email. Algorithms manage everything from web search to data-center power management to geographic way-finding. In the core search business, profit margins are high because of the company’s high degree of automation (self-service ad sales) and the wide moats the company has built around its advertising delivery.

Thus in traditional terms, we have a mega-retailer, a computer hardware company, and a media concern.

When one digs beneath the surface, the picture morphs rather dramatically. Through a different lens, the three companies overlap to a remarkable degree — just not in ways that conform to industry definitions.

All three companies run enormous cloud data center networks, albeit monetized in slightly different ways. Apple and Amazon stream media; Google and Amazon sell enterprise cloud services; Apple and Google power mobile ecosystems with e-mail, maps, and related services. All three companies strive to deepen consumer connections through physical devices.

Apple runs an industry-leading retail operation built on prime real estate at the same time that Amazon is reinventing the supply-chain rule book for its fulfillment centers (FCs) and now sortation centers. (For more on that, see this fascinating ChannelAdvisor analysis of the Amazon network: in many cases, FCs are geographically clustered rather than spread more predictably across the landscape.) Both of these retail models are hurting traditional mall and big-box chains.

At the most abstract but common level, all three companies are spending billions of dollars to connect computing to the physical world — to make reality a peripheral of algorithms, if you will. Amazon’s purchase of Kiva and its FC strategy both express an insistent drive to connect a web browser or mobile device to a purchase, fulfilled in shorter and shorter time lags, with more and more math governing the process. In the case of Kindle and streaming media, that latency is effectively zero, and the publishing industry is still in a profoundly confused and reactive state about the death of the physical book and its business model. The Fire phone fits directly into this model of making the connection between an information company and the humans who buy things from it ever more seamless, but its weak market traction is hardly a surprise, given the strength of the incumbents -- not coincidentally, the other two tech titans.

Apple connects people to the world via the computer in their pocket. Because we no longer have the Moore’s law/Intel scorecard to track computing capacity, it’s easy to lose sight of just how powerful a smartphone or tablet is: Apple's A8 chip in the new iPhone contains 2 billion (with a B) transistors, equivalent to the PC state of the art in 2010. In addition, the complexity of the sensor suite in a smartphone — accelerometers, microphone, compasses, multiple cameras, multiple antennae — is a sharp departure from a desktop computer, which, no matter how powerful, generally had little idea of where it was or what its operator was doing. And for all the emphasis on hardware, Nokia’s rapid fall illustrates the power of effective software in not just serving but involving the human in the experience of computing.

Google obviously has deep capability in wi-fi and GPS geolocation, in the service of deeper knowledge of user behavior. The company’s recent big-bet investments — the Nest thermostat, DARPA robots, Waze, and the self-driving car team — further underline the urgency of integrating the world of physical computing into the Android platform(s) as a conduit for ever deeper knowledge of user behavior, social networks, and probably sentiment, all preconditions to more precise ad targeting.

Because these overlaps fail to fit industry definitions, metrics such as market share or profit margin are of limited utility: the three companies recruit, make money, and innovate in profoundly different ways. Amazon consistently keeps operating information quiet (nobody outside the company knows how many Kindle devices have been sold, for example), so revenue from the cloud operation is a mystery; Google’s finances are also somewhat difficult to parse, and the economics of Android for the company were never really explicated, much less reported. Apple likely provides the most transparency of the three, but that’s not saying a lot, as the highly hypothetical discussion of the company’s massive cash position would suggest.

From a business school or investor perspective, the fact of quasi-competition despite the lack of industry similitude suggests that we are seeing a new phase of strategic analysis and execution, both enabled and complicated by our position with regard to Moore’s law, wireless bandwidth, consumer spending, and information economics. The fact that both Microsoft and Intel are largely irrelevant to this conversation (for the moment anyway) suggests several potential readings: that success is fleeting, that the PC paradigm limited both companies’ leaders from seeing a radically different set of business models, that fashion and habit matter more than licenses and seats, that software has changed from the days of the OSI layer cake.

In any event, the preconditions for an entirely new set of innovations — whether wearable, embedded/machine, algorithmic, entertainment, and/or health-related — are falling into place, suggesting that the next 5-10 years could be even more foreign to established managerial teaching and metrics. Add the external shocks — and shocks don’t get much more bizarre than Ebola and media-savvy beheadings — and it’s clear that the path forward will be completely fascinating and occasionally terrifying to traverse. More than inspiration or insight from our business leaders, we will likely need courage.

Tuesday, July 29, 2014

Early Indications July 2014: Betting the Business


I read with great interest the recent Fortune article on the new Ford F-150 pickup. This is the best-selling vehicle in the U.S. (but sold in few other markets), and contributes mightily to the company’s profitability: it’s straightforward to manufacture, long production runs drive purchasing economies and assembly line efficiency, and option packages contribute to healthy — 40% — gross margins. In short, the light truck franchise at Ford is the one essential business that management has to get right: small cars aren’t that popular or profitable, large SUVs are out of fashion, overall car demand is flat for demographic and other reasons, and financing isn’t the profit center it once was given low interest rates.

A new model of the truck comes out this fall. Ford is reinventing the pickup by making it mostly out of aluminum rather than steel. The weight savings (700 lb was the target) will help the automaker reach government-mandated fuel economy targets, but there are significant risks all across the landscape:

*Body shops need new, specialized equipment to address aluminum body damage. Ford had to create a nationwide network of authorized service centers, especially given how many trucks are sold in rural areas to miners, ranchers, and farmers. If owners have trouble getting repairs done, negative publicity will travel extra fast over social media.

*The aluminum supply as of 2010 was insufficient for the need: 350,000 half-ton pickups in 2012 would be an awful lot of beer cans. Ford has to develop a whole new supply base and watch how one big buyer moves the entire commodity market. (I’ve heard this is why Subway doesn’t offer roasted red pepper strips: they’d need too many.)

*Manufacturing processes are being revised: aluminum can’t be welded the way steel can, so bonding and riveting require new engineering, new skills, new materials, and new assembly sequences.

In short, this is a really big gamble. Ford is messing with the formula that has generated decades of segment leadership, corporate profitability, and brand loyalty. Competitors are circling: Chevy would love to have the Silverado win the category sales crown this year, especially given GM’s horrific year of bad publicity, and Chrysler’s Ram division was renamed solely because of its pickups’ brand equity.

It’s rare that a company takes a position of market leadership and invests in a significantly different platform, risking competitive position, profitability, and customer loyalty. Between us, my former boss John Parkinson and I could come up with only a handful: these kinds of moments seem to happen only about once a decade (unless readers remind me of examples I missed).

Six US business decisions got our attention:

1) Boeing bet on passenger jet airplanes in the 1950s, launching the 707 in 1958. It was not the first such aircraft: the British De Havilland Comet won that honor, but suffered catastrophic structural failures caused by metal fatigue around its window openings. Jets delivered greater power for their size, had fewer moving parts, and burned cheaper fuel. Planes could carry more passengers, fly farther and faster, and required fewer maintenance visits.

2) IBM completely departed from industry practice by announcing the System/360 in 1964. It was a family of highly compatible mainframes, allowing customers to grow in capability without having to learn a new operating system or rebuild applications: previously, customers could buy a small computer that might soon box them in, or overspend on something too big for their needs in the hope of growing into it. Fred Brooks, who managed software development, learned from System/360 about the paradoxes of programming and later wrote the classic Mythical Man-Month, with its still-true insight: adding programmers to a late software project will make it later. Brooks’s team delivered, and S/360 helped IBM dominate the computer market for the next 15 years.
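Brooks’s insight rests on simple arithmetic: a team of n programmers has n(n-1)/2 pairwise communication paths, so coordination overhead grows quadratically while labor grows only linearly. A quick sketch of his formula (the code itself is merely illustrative):

```python
# Brooks's intercommunication formula from The Mythical Man-Month:
# a team of n people has n * (n - 1) / 2 pairwise communication paths.
def comm_paths(n: int) -> int:
    return n * (n - 1) // 2

for team_size in (5, 10, 20, 40):
    print(f"{team_size:>3} people -> {comm_paths(team_size):>4} paths")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: adding people to a late
# project multiplies coordination work faster than it adds labor.
```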

3) Few people remember that Intel has not always been synonymous with microprocessors. Until the early 1980s, the company’s focus was on memory devices. Simultaneously undercut in price by Japanese competition and alert to the rapid growth of the PC segment, Andrew Grove led Intel’s switch to the far more technically demanding microprocessor market in the mid-1980s, followed by the famous “Intel Inside” branding campaign in 1991: it was unheard-of for a B2B supplier to build a consumer brand position. Intel stock in this period enjoyed an enviable run-up, but the success was not preordained.

4) It wasn’t a completely “bet-the-company” decision, but Walter Wriston at Citibank wagered heavily on automatic teller machines beginning in the late 1970s; the machines not only cost a significant amount to develop and install, but also prompted criticism over the loss of personal touch in client service. The decision of course paid off handsomely and revolutionized the finance industry.

5) It wasn’t a bet-the-company decision, as its failure makes clear, but Coke guessed wildly wrong on the flavor of “New Coke” in 1985 yet was able to recover.

6) Verizon made a significant investment in its residential fiber-optic buildout, but the rapid growth in the wireless business eventually eclipsed wireline in general, reducing both the risk and the impact of the original decision in 2005.

What am I missing? In what other situations have CEOs taken a flagship market offering and significantly revamped it, endangering market share, brand equity, and profitability to the extent Ford has, when the entire company’s future rides heavily on this product launch?

Friday, May 30, 2014

Early Indications May 2014: When computing leaves the box

Words can tell us a lot. In particular, when a new innovation emerges, the history of its naming shows how it goes from foreign entity to novelty to invisible ubiquity. A little more than 100 years ago, automobiles were called “horseless carriages,” defined by what they were not rather than what they were. “Cars” were originally parts of trains, as in boxcars or tank cars, but the automobile is now top of mind for most people. More recently, the U.S. military refers to drones as UAVs: unmanned aerial vehicles, continuing the trend of definition by negation. Tablets, the newest form of computing tool, originally were made of clay, yet the name feels appropriate.

The naming issues associated with several emerging areas suggest that we are in the early stages of a significant shift in the landscape. I see four major manifestations of a larger, as yet unnamed, trend that, for lack of better words, I am calling “computing outside the box.” This phrase refers to digital processes — formerly limited to punch cards, magnetic media, keyboards/mice, and display screens — that are now evolving into three-dimensional artifacts that interact with the physical world, both sensing and acting upon it as a result of those digital processes. My current framing of a book project addresses these technologies:

-robotics

-3D printing/additive manufacturing

-the emerging network of sensors and actuators known as the Internet of Things (another limited name that is due for some improvement)

-the aforementioned autonomous vehicles, airborne, wheeled, and otherwise.

Even the word “computer” is of interest here: the first meaning, dating to 1613 and in use for nearly 350 years, referred to a person who calculates numbers. After roughly 50 years of computers being big machines that gradually shrank in size, we now are in a stage where the networked digital computers carried by hundreds of millions of people are no longer called computers, or conceived of as such.

Most centrally, the word “robot” originated in the 1920s and at first denoted a type of slave; even now, robots are often characterized by their capabilities in performing dull, dirty, or dangerous tasks, sparing a human from these efforts. Today, the word has been shaped in public imagination more by science fiction literature and cinema than by wide familiarity with actual artificial creatures. (See my TEDx talk on the topic.) Because the science and engineering of the field continue to evolve rapidly — look no further than this week’s announcement of a Google prototype self-driving car — computer scientists cannot come to anything resembling consensus: some argue that any device that can 1) sense its surroundings, 2) perform logical reasoning with various inputs, and 3) act upon the physical environment qualifies. Others insist that a robot must move in physical space (thus disqualifying the Nest thermostat), while still others say that true robots are autonomous (excluding factory assembly tools).
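To make that three-part test concrete, here is a minimal, purely illustrative sketch of the sense/reason/act loop; the class and method names are my own invention, not any standard robotics API:

```python
# Illustrative only: the sense / reason / act test rendered as a control loop.
class MinimalRobot:
    def sense(self) -> dict:
        """1) Sense the surroundings (here, a fake temperature reading)."""
        return {"temperature_c": 31.0}

    def reason(self, inputs: dict) -> str:
        """2) Apply logical reasoning to the inputs to pick an action."""
        return "cool" if inputs["temperature_c"] > 25.0 else "idle"

    def act(self, decision: str) -> None:
        """3) Act upon the physical environment (motors, relays, HVAC)."""
        print(f"actuating: {decision}")

robot = MinimalRobot()
robot.act(robot.reason(robot.sense()))
# By this three-part definition a Nest thermostat qualifies as a robot;
# those who also require movement in physical space would exclude it.
```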

I recently came across a sensible, nuanced discussion of this issue by Bernard Roth, a longtime professor of mechanical engineering who was associated with the Stanford Artificial Intelligence Lab (SAIL) from its inception.

“I do not think a definition [of what is or is not a robot] will ever be universally agreed upon. . . . My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines. If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the devices get downgraded from ‘robot’ to ‘machine.’” *

However robots are defined, how is computing outside the box different from what we came to know of digital computing from 1950 until 2005 or so? Several factors come into play here.

1) The number of computational devices increases substantially. There were dozens of computers in the 1950s, thousands in the 1960s, millions in the 1980s, and so on. Now, networked sensors will soon number in the tens of billions. This increase in the scale of both the challenges (network engineering, data science, and energy management are being reinvented) and the opportunities requires breakthroughs in creativity: are fitness monitors, which typically get discarded after a few months by all but the hardest-core trainers, really the best we can do for improving health outcomes?

2) With cameras and sensors everywhere — on phone poles, on people’s faces, in people’s pockets, in the ground (on water mains), and in the sky (drone photography is a rapidly evolving body of legal judgment and contestation) — the boundaries of security, privacy, and risk are all being reset. When robots enter combat, how and when will they be hacked? Who will program a self-driving suicide bomb?

3) Computer science, information theory, statistics, and physics (in terms of magnetic media) are all being stress-tested by the huge data volumes generated by an increasingly instrumented planet. A GE jet engine is reported to take off, on average, every two seconds, worldwide. Each engine generates a terabyte of data per flight; 10:1 compression takes this figure down to a mere 100 gigabytes per engine per flight. Dealing with information problems at this scale, in domain after domain (here’s an Economist piece on agriculture), raises grand-challenge-scale hurdles all over the landscape.
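Those figures imply staggering daily volumes. A back-of-envelope sketch, taking the two-second takeoff cadence and the compressed per-flight number above at face value:

```python
# Back-of-envelope from the figures above: one GE-powered takeoff every
# two seconds, and ~100 GB of compressed engine data per flight.
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400
flights_per_day = SECONDS_PER_DAY // 2    # 43,200 takeoffs per day
gb_per_flight = 100                       # compressed, per the 10:1 figure

daily_petabytes = flights_per_day * gb_per_flight / 1e6
print(f"{flights_per_day:,} flights/day -> ~{daily_petabytes:.1f} PB/day compressed")
# ~4.3 PB of compressed engine data per day (~43 PB before compression).
```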

4) The half-life of technical knowledge appears to be shrinking. Machine learning, materials science (we really don’t understand precisely how 3D printing works at the droplet level, apparently), machine vision in robots, and so on will evolve rapidly, making employment issues and career evolution a big deal. Robots obviously displace more and more manual laborers, but engineers, programmers, and scientists will also be hard-pressed to keep up with the state of these fields.

5) What are the rules of engagement with computing moving about in the wild? A woman wearing a Google Glass headset was assaulted in a bar because she violated social norms; self-driving cars don’t yet have clear liability laws; 3D printing of guns and of patented or copyrighted material has yet to be sorted out; nobody yet knows what happens when complete strangers can invoke facial recognition on the sidewalk; Google could see consumer (or EU) blowback when Nest sensor data drives ad targeting.

6) How will these technologies augment and amplify human capability? Whether in exoskeletons, care robots, telepresence, or prostheses (a field perfectly suited to 3D printing), the human condition will change in its shape, reach, and scope in the next 100 years.

To anticipate the book version of this argument, computing outside the box introduces a new layer of complexity into the fields of artificial intelligence, big data, and ultimately, human identity and agency. Not only does the long history of human efforts to create artificial life gain a new chapter, but we are also creating artificial life in vast networks that will behave differently than a single creature: Frankenstein’s creature is a forerunner of Google’s Atlas robot, but I don’t know if we have as visible a precedent/metaphor for self-tuning sensor-nets, bionic humans, or distributed fabrication of precision parts and products outside factories.

That piece of the argument remains to be worked out more completely, but for now, I’m finding validation for the concept every day in both the daily news feed and in the lack of words to talk about what is really happening.

*Bernard Roth, foreword to Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Berlin: Springer-Verlag, 2008), p. viii.

Monday, March 31, 2014

Early Indications March 2014: TED at 30

On the occasion of its 30th birthday, TED is the subject of a number of both critiques and analyses. It’s a tempting target: the brand is both powerful and global, and the sheer numbers are just plain big. 1,700 talks are said to be online, viewed more than a billion times. More than 9,000 mini-TEDs (TEDx events) have been held all over the world. And TED’s successful formula is prone to the perils of any formula: sameness, self-parody, insularity.

But to go so far as to say, as sociologist Benjamin Bratton does, that TED is a recipe for “civilizational disaster” is attention-getting hyperbole. Does Bratton not watch TV, a much more likely candidate for his accusation? (Also: he made the charge in a TED talk, of all places.) Other critiques hit the mark. There can be heavy doses of techno-utopianism, especially in a certain strand of the talks, which is hardly surprising given a heavy Silicon Valley bias among the advisory crew. Politics is often either a) ignored or b) addressed as a quasi-technical problem to be fixed. The stagecraft, coaching, and earnestness of the talks can lend an evangelical cast to the enterprise. Humanity is fallen, in this trope, from a state of “better” that can be reclaimed by more education, more technology, more self-actualization.

At the same time, that narrative contains more than a grain of realism. Civic debate works less wastefully when citizens have richer fact bases from which to reason, and Hans Rosling’s series of talks on international health and economics is an important contribution to that debate. (The same can be said for Ken Robinson and Sal Khan on education.) Medicine and technology can make some of us “better than well,” to quote Carl Elliott, or replace human capacity with machines. The state of prosthetics (not only limbs, but also exoskeletons and tools for cognitive abilities and other functions) is in a state of extreme dynamism right now, and 99% of us will never see the labs and rehab clinics where the revolution is gaining momentum. Finally, education is being enhanced and disrupted by digital media at a time when skills imbalances, economic inequality, and political corruption are crucial topics for much of the globe. The TED agenda includes many worthy elements.

Rather than go with the evangelical line of comparison (as illustrated at The Economist), I tend to look at TED in terms of its reach. Much like the Book of the Month Club that brought middlebrow literature to audiences far from metropolitan booksellers, TED serves as an introduction to ideas one would not encounter otherwise. The conferences and videos illustrate the power of “curation” — a buzzword that fits here — vis-à-vis mass populations relying on search, popular-scientific journals, mass media, or classroom lectures. This curation, coupled with the huge scale of the freely distributed videos and the social networks that propel them, helps explain the TED phenomenon. And if it's "middlebrow," I'm not sure that's such a bad thing: this isn't Babbittry, after all.

In TED-meister Chris Anderson’s own talk, he makes a compelling case for online video as a Gutenberg-scale revolution. In the beginning, says Anderson (the son of missionaries), was the word, and words were spread by humans with gestures, intonation, eye contact, and physical responses of either acknowledgement or confusion. After the inventions of ink, type, paper, and so on, words could be manufactured beyond human scale, but the accompanying nuances were lost: print scaled in a way talking could not. Now, in a brief historical moment (YouTube is not yet 10 years old), we have global scale for words to reach masses of niche audiences, complete with body language, show-and-tell visual explanations, and other attributes that restore the richness of the spoken word.

Bratton’s solution — “More Copernicus, less Tony Robbins” — has much to commend it, yet realistically, how many Copernican giants can any era of human history produce? And of these few, how many could communicate on whiteboards, in articles, or to students the true breadth and depth of their insights and discoveries? The self-help strain of TED-dom is worrisome to me and to many others, but equally unhelpful is science and technology unmoored from human context. If there had been TED talks in 1910, when Fritz Haber fixed atmospheric nitrogen in fertilizers that now feed a third of the world’s population, would anyone have known what he should have said? Or what if Robert Oppenheimer had had a TED-like forum for his concerns regarding atomic weapons in the 1950s? Historically, humanity has frequently misunderstood the geniuses in its midst, so I’m unsure whether TED could actually find, coach, and memorialize on video many of today’s Copernican equivalents. At the same time, I should be on record as wishing for both less of Tony Robbins and fewer media Kardashians of any variety.

For me, these are both academic questions and personal ones: I gave my first TEDx talk in early March, and crafting it was a stiff challenge. I saw pitfalls everywhere: sounding like everyone else, overshooting the audience’s patience and current knowledge, and not giving people anything to _do_ about this urgent issue at the end of the talk. Thus I will leave it to the audience to judge after the edited talk is posted next month, but I must admit I measure great TED talks with a new yardstick after having tried to give even a pedestrian one.

Wednesday, February 26, 2014

Early Indications February 2014: Who will win in an Internet of Things?

As usual, Silicon Valley is in love with tech-y acronyms that sometimes do not translate into wider conversations involving everyday people. We’ve discussed “big data” in a recent newsletter, “cloud computing” has been around for a long time and still confuses people, and now, various firms are pointing toward a future of massively connected devices, controllers, sensors, and objects. Cisco went for the whole enchilada in its branding, opting for “the Internet of Everything.” IBM tended toward cerebral globality with its “smarter planet” branding. GE is focusing on its target markets by trying to build an “industrial Internet.” My preferred term, the Internet of Things (IoT), raises many challenging questions — but they’re worth pondering.

It’s commonly said that “there are more things than people connected to the Internet,” or some variant of that assertion. The current number is 8 to 10 billion “things,” which of course outstrips the population of planet Earth. But what is a “thing”? Out of that total — call it 9 billion for the sake of argument — probably 2 billion are smartphones and tablets. Add in routers, switches, access points, military gear, and PCs, and we’re probably in the 5-6 billion range, total. That means that “things” such as garage-door openers, motor controllers, and webcams probably add up to 2-3 billion — about the same as the number of people who use the Internet on a regular basis, out of a global population of 7 billion. (For detailed numbers see here and here.)
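The back-of-envelope decomposition, with every input being one of the rough estimates quoted above:

```python
# Rough decomposition of the "connected things" estimate above (billions).
total_connected = 9.0      # call it 9 billion for the sake of argument
phones_tablets = 2.0       # smartphones and tablets
with_network_and_pcs = 6.0 # + routers, switches, APs, military gear, PCs
                           # (the running total lands in the 5-6B range)
other_things = total_connected - with_network_and_pcs
print(f"garage-door openers, motor controllers, webcams, etc.: ~{other_things:.0f}B")
# ~2-3 billion -- roughly the number of regular Internet users.
```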

This ratio will change rapidly. Given the inevitability of massive growth in this area, many big players, including some surprising ones, are beginning to fight for mindshare, patent portfolios, and other predecessors to market leverage. I will list the major players I’ve found in alphabetical order, then conclude with some summary thoughts.

Chip companies (ARM, Freescale, Intel, Qualcomm, Texas Instruments)
The magnitude of the opportunity is offset by the low price points required for true ubiquity. The business model should, in theory, be friendlier to an embedded-systems vendor like Freescale than to makers of IP-heavy, super-powerful microprocessors such as Qualcomm and Intel (unless we get data-aggregation appliances, dumber than a PC but capable of managing lots of sensors, in which case a set-top-box might be a useful guidepost). If a company can capture a key choke point with a patent “moat,” as Broadcom and Qualcomm did for previous generations of networking, that position could prove decisive.
Overall grade: lots of ways to win, but unlikely that any one of these players dominates huge markets

Cisco: CEO John Chambers told a tech conference earlier this month that he projects the IoT to be a $19 TRILLION opportunity in the next few years. (For scale, the US GDP is about $15 trillion, with about a fifth of that health care in some form or fashion; total global economic activity amounts to about $72 trillion.) Given such a massive addressable market, Cisco is bundling cloud, collaboration, data analytics, mobility, and security offerings into a go-to-market plan with an emphasis on lowering operational costs, raising business performance, and doing it all fast. This IoT strategy blends strengths in Cisco’s legacy businesses with an anticipation of new market needs.
Overall grade: Given history, the developing hardware stack, and their brainpower, Cisco must be taken very seriously

GE: If you look at how many pieces of capital equipment GE sells (from oil drilling gear, to locomotives, to generators, to MRI machines, to jet engines), and if you look at the profit margins on new sales vs after-market service, it makes complete sense that GE is working to instrument much of its installed base of Big Things That Break. Whether it’s preventive maintenance, or pre-empting a service call by a competitor, or designing more robust products for future release, the Industrial Internet is a smart approach to blending sensors, pervasive networking, and advanced analytics.
Overall grade: in GE's market strongholds, getting to IoT first could be a multi-billion-dollar win.

Google: Given the scale and speed of CEO Larry Page’s recent deals, assessing Google’s IoT future is a guessing game. There are huge building blocks already in place:

-Google mapped millions of wi-fi access points and has a well-regarded GIS competency (even if the privacy stance attached to said capability is less beloved).
-With the experience of owning a hardware company, if only briefly, and by owning the #1 global smartphone platform as measured by units, and by running a massive internal global network, and by trying its hand at municipal fiber, Google has extensive and relevant expertise in the transport layer.
-As possibly the biggest and best machine learning company anywhere, Google knows what to do with large numbers of sensor streams, possessing both the algorithmic know-how and the computational infrastructure to handle heretofore unheard-of data problems.
-With the Nest purchase, Google gets more data streams as well as user experience expertise for IoT products in the home.
-The self-driving car effort has been a large-scale proving ground for sensor integration and actuator mechanics, with advanced vehicle dynamics and GIS/GPS thrown in.
-Perhaps less noticed than the Nest deal, Google’s recent agreement with Foxconn to develop an operating system for manufacturing robots potentially puts Google into closer competition with Amazon’s supply-chain automation investments, most notably Kiva.

Overall grade: Too big, too secretive, and too data-intensive to ignore. A possible 800-lb gorilla in a nascent market.

Software companies (SAS, Oracle, IBM, Cloudera, etc)
Obviously there is sensemaking to be done in the sensor world, and the rapidity with which the "big data" industry is generating multiple layers of toolsets suggests that both incumbent and startup data management and analysis firms will have a chance to win very big. One possible clue comes in the massive number of web tracking and analytics firms: running Ghostery in my browser has been a highly educational exercise as I've learned how many websites run how many beacons, widgets, and cookies. Will intense speciation be the rule in the IoT as well?
Overall grade: way too early to predict

Storage companies (EMC, Western Digital, Amazon)
Obviously there will be plumbing to build and maintain, and selling pickaxes to miners has been good business for centuries. The question of discontinuity looms large: will this wave of computing look similar enough to what we’re doing now that current leaders can make the transition, or will there be major shifts in approach, creating space for new configurations (and providers) of the storage layer?
Overall grade: Someone will win, but there’s no telling who

Wolfram: This was a surprise to me, but their reasoning makes all kinds of sense:

"But in the end our goal is not just to deal with information about devices, but actually be able to connect to the devices, and get data from them—and then do all sorts of things with that data. But first—at least if we expect to do a good job—we must have a good way to represent all the kinds of data that can come out of a device. And, as it turns out, we have a great solution for this coming: WDF, the Wolfram Data Framework. In a sense, what WDF does is to take everything we’ve learned about representing data and the world from Wolfram|Alpha, and make it available to use on data from anywhere."

An initial project is to “curate” a list of IoT things, and that can be found here. The list began at a couple of thousand items, from Nike Fuelbands to a wi-fi slow cooker to lots of industrial products. Just building an authoritative list, in rigorous Wolfram-style calculable form, is a real achievement. (Recall that Yahoo began as a human-powered directory of the World Wide Web, before Google’s automated crawl proved better suited to the vastness of the exercise.)

But Wolfram wants to own a piece of the stack, getting its standard embedded on Raspberry Pi boards for starters. The WDF jumpstarts efforts to share simple things like positional data or physical properties (whether torque or temperature or vibrational frequency). Because the vast variety of sensors lacks an interconnection/handshake standard like USB, Wolfram definitely helps fill a gap. The question is what the long-term play is, and I don’t have the engineering/CS credentials to even speculate.

Overall grade: Asking some pertinent questions, armed with some powerful credentials. Cannot be ignored.

****
A more general question emerges: where do the following conceptual categories each start and finish? One problem is that these new generations of networked sensing, analysis, and action are deeply intertwined with each other:

*cloud computing
*Internet of Things
*robotics
*artificial intelligence
*big data/data analytics
*telemedicine
*social graphs
*GPS/GIS

Let me be clear: I am in no way suggesting these are all the same. Rather, as Erik Brynjolfsson and Andrew McAfee suggest in their book The Second Machine Age (just released), recombinations of a small number of core computational technologies can yield effectively infinite variations. Instagram, which they use as an example, combined social networking, broadband wireless, tagging, photo manipulation software, and smartphone cameras, none of which were new, into an innovative and successful service.

The same goes for the Internet of Things, as sensors (in and outside of wireless devices) generate large volumes of data, stored in cloud architectures and processed using AI and possibly crowdsourcing, that control remote robotic actuators. What does one _call_ such a complex system? I posit that one of the biggest obstacles to be overcome will be not the difficult matters of bandwidth, or privacy, or algorithmic elegance, but the cognitive limitations of existing labels and categories, whether for funding, regulation, or invention. One great thing about the name “smartphones”: designers, marketers, regulators, and customers let go of the preconceptions of the word “computer” even though 95% of what most of us do with them is computational rather than telephonic.

I read the other day that women now outnumber men in intro computer science courses at UC-Berkeley, and developments like that give me confidence that we will solve these challenges with more diverse perspectives and approaches. The sooner we can free up imagination and stop being confined by rigid, archaic, and/or obtuse definitions of what these tools can do (and how they do it), the faster we can solve real problems rather than worry about which journal to publish in, how it should be taxed and regulated, or which market vertical the company should be ranked among. I think this is telling: among the early leaders in this space are a computer science-driven media company with essentially no competitors, a computer science-driven retailer that's not really a retailer and competes with seemingly everybody, and a "company" wrapped around a solitary mathematical genius. There's a lesson there somewhere about the kind of freedom from labels that it will take to compete on this new frontier.

Wednesday, January 29, 2014

Early Indications January 2014: The Incumbent’s Challenge (with apologies to Clayton Christensen)

I.
For all the attention paid to the secrets-of-success business book genre (see last October’s newsletter), very few U.S. companies that win one round of a competition can dominate a second time. Whether or not they are built to last, big winners rarely dominate twice:

-Sports Illustrated did not found ESPN.

-Coke did not invent, or dominate, energy drinks, or bottled water for that matter.

-IBM has yet to rule any of the many markets it competes in the way it dominated mainframes. For many years, the U.S. government considered breaking IBM into smaller businesses, so substantial was its market power. Yet as of 1993, the company nearly failed and posted the largest loss ($8 billion) in U.S. corporate history.

-Microsoft did not dominate search, or social software, or mobile computing in the decade after the U.S. Department of Justice won a case ordering Microsoft to be broken up. (The breakup order was overturned on appeal, and the Bush administration’s Justice Department settled the case in 2001.)

-The Pennsylvania Railroad became irrelevant shortly after reaching its peak passenger load during World War II: the 6th largest company in the nation became the largest bankruptcy in U.S. history to that point.

-Neither Macy’s nor Sears is faring very well in the Wal-Mart/Target axis.

-It’s hard to remember when Digital Equipment Corporation employed 140,000 people and sold more minicomputers (mostly VAXes) than anyone else. Innovation was not the problem: at the very end of its commercial life DEC had built the fastest processor in its market (the Alpha), an early and credible search engine (AltaVista), and one of the first webpages in commercial history.

-After 45 years, Ford, GM, and Chrysler have yet to make a small-to-medium car as successful as the Japanese models. Some efforts — notably the Pinto — are still laughingstocks.

2013 automobile estimated sales by model (excluding pickup trucks and SUVs), rounded to nearest thousand (source: www.motorintelligence.com via www.wsj.com)

Toyota Camry          408,000
Honda Accord          366,000
Honda Civic           336,000
Nissan Altima         321,000
Toyota Corolla        302,000
Ford Fusion           295,000
Chevrolet Cruze       248,000
Hyundai Elantra       248,000
Chevrolet Equinox     238,000
Ford Focus            234,000
Toyota Prius          234,000

II.
There are many reasons for this state of affairs, some enumerated in The Innovator’s Dilemma: managers in charge of current market-leading products get to direct R&D and ad spend, so funding is generally not channeled in the direction of new products. In the tech sector particularly, winning the next market often means switching business models: think how differently Microsoft circa 1996, Google in 2005, Apple as of 2010, and Facebook today generate revenues. Finally, many if not all of the lessons learned in winning one round of competition are not useful going forward, and usually hamper the cognitive awareness of those trying to understand emerging regimes.

Unlike automobiles or soft drinks, moreover, tech markets tend toward extreme oligopoly (SAP and Oracle; Dell and HP on the desktop; iOS and Android) or monopoly (Microsoft, Intel, Qualcomm, Google, Facebook). Thus the stakes are higher than in more competitive markets where 45% share can be a position of strength.

All of this setup brings us to a straightforward question for January 2014: how will Google handle its new acquisitions? The search giant has been acquiring an impressive stable of both visible and stealth-mode companies in the fields of robotics (Boston Dynamics), home automation (Nest), and artificial intelligence (DeepMind). When I saw the Boston Dynamics news, I thought immediately of the scenario in which Microsoft had bought Google in 1998 rather than the companies it actually did target: WebTV ($425 million in 1997), Hotmail ($500 million in 1997), or Visio ($1.375 billion in 2000). That is, what if the leader in desktop computing had acquired the “next Microsoft” in its infancy? Given corporate politics in general, and not any special Microsoft gift for killing good ideas, it’s impossible to believe ad-powered search would have become its own industry.

Google’s track record in acquisitions outside advertising (DoubleClick being a bright exception) is not encouraging: GrandCentral became Google Voice and was orphaned. Dodgeball was orphaned, so its founders started over and maintained the playground naming scheme with Foursquare. Pyra (Blogger), Keyhole (Google Earth), and Picasa (photo sharing) all remain visible, but none has busted out into prominence. YouTube is plenty prominent, but doesn’t generate much apparent revenue.

Let’s assume for the moment that the internet of things and robotics will be foundations of the next generation of computing. Let’s further assume that Google has acquired sufficient talent in these domains to be a credible competitor. There’s one question outstanding: does Google follow its ancient history (in core search), or its post-IPO history?

That is, will great technologies be allowed to mature without viable business models, as was the case with search before ad placement was hit upon as the revenue generator? Or will the current owners of the revenue stream — the ad folks — attempt to turn the Nest, the self-driving car, the Android platform writ large, and people’s wide variety of Google-owned data streams (including Waze, Google Wallet, and the coercive Google+) into ad-related revenue through more precise targeting?


Just as Microsoft was immersed in the desktop metaphor at the time it didn’t buy Google and thus could not have foreseen the rise of ad-supported software, will Google now be blinded to new revenue models for social computing, for way finding, for home automation, and for both humanoid and vehicular robotics? Is Google, as one Twitter post wondered, now a “machine learning” company? Is it, as a former student opined, a de facto ETF for emerging technologies? Most critically, can Google win in a new market, one removed from its glory days in desktop search? Google missed social networking, as Orkut can testify, and CEO Larry Page sounds determined not to make that mistake again. It will bear watching to see if this company, nearly alone in business history, can capture lightning in a bottle more than once to the point where it can dominate two distinct chapters in the history of information technology.

Tuesday, December 31, 2013

Early Indications December 2013: A Look Ahead

Rather than issue predictions for the year-end letter, I am instead posing some (I hope) pertinent questions that should be at least partially answered in the year ahead.

1) How will enterprise hardware and software companies respond to cloud computing?

At first glance, this isn't a particularly fresh question: clouds are old news, in some ways. But whether one looks at Dell, IBM, Oracle, or HP, it's not at all clear that these companies have an assertive vision of the path forward. In each instance, revenues have been dented (or worse) by the shift away from on-premise servers, but what comes next is still in the process of being formulated, perhaps most painfully at HP, where Unix server revenues fell by more than 50% in just 5 quarters: Q4 2010 was about $820 million while Q1 2012 fell below $400 million. As of a month or two ago, HPQ stock had fallen roughly 60% since January 2010, in an otherwise bull market: the S&P 500 was up 64% over the same span.

Farther up the West Coast, meanwhile, the leadership challenge at Microsoft relates as much to cloud as it does to mobile, does it not? Getting someone who can change the culture, the product mix, and the skills mix in the headcount will be a tall order. Absent massive change, the desktop-centrism of MSFT will be its undoing unless new models of computing, user experience, and revenue generation (along with commensurate cost structures) are implemented sooner rather than later.

2) Is Uber the next Groupon?

Before you explain how they're not comparable companies, here's my reasoning. Both companies are two-sided platform plays: Groupon enlists merchants to offer deals, and aggregates audiences of deal-seekers to consume them. Two-sided platforms are historically very profitable once they're up and running, but it's tough getting the flywheel to start spinning: no deals, no deal-seekers. No audience, no merchants. One side of the platform typically subsidizes the other: merchants who pay ~3% to accept credit cards are paying for your frequent flier miles.

Groupon ran into trouble after it scaled really, really fast and had lots of physical infrastructure (local sales offices) and headcount to fund. After a local business offered two $10 lunches for $10 (and paid $5 of that to Groupon), the $15 loss was hard to write off as new customer acquisition, given that Groupon users frequently did not return to the restaurant to pay full price. Thus using local sales forces to recruit lots of new businesses to try Groupon once, with a low repeat-offer rate, made the initial growth tough to sustain.
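Spelled out, the merchant side of that lunch deal looks like this (a sketch of the arithmetic just described; the 50/50 split of the deal price is as reported):

```python
# The restaurant economics of the two-lunches-for-$10 deal described above.
retail_value = 20.00       # two $10 lunches
deal_price = 10.00         # what the deal-seeker pays
groupon_share = 5.00       # ~half the deal price goes to Groupon

merchant_revenue = deal_price - groupon_share       # $5
merchant_subsidy = retail_value - merchant_revenue  # $15 per redemption
print(f"merchant nets ${merchant_revenue:.0f} on ${retail_value:.0f} of food "
      f"-> ${merchant_subsidy:.0f} subsidy per customer")
# That $15 is defensible as customer-acquisition cost only if deal-seekers
# come back at full price -- which, too often, they did not.
```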

Enter Uber. It's a two-sided platform, recruiting both drivers and riders. Because the smartphone app is an improvement on the taxi experience, customers use it a lot, and tell their friends -- just like Groupon. Meanwhile, getting the "merchant" side of the platform (in this case, the drivers) to stay happy and deliver quality service in the midst of rapid scaling is proving to be difficult: riders are coming to outnumber available cars in many localities. But Uber is not Amazon; it's more like Gilt, a [slightly] affordable luxury play rather than a taxicab replacement. Look at the company's ads and listen to CEO Travis Kalanick, who calls the company "a cross between lifestyle and logistics." It can't, and has no reason to, meet the demand it's created.

Thus Uber charges "surge" prices, a significant multiple of the base fare when too many riders strain the system. It's presented as an incentive to get dormant cars into the market, but what's more likely is that the steepness of the price tamps down demand while conveying exclusivity. Given how reliably surge pricing is being invoked, it would appear that Uber is hitting some scale limits. One way to address this is to get more drivers into the pool, and Uber recently announced that it will help drivers buy cars. But like Groupon, this has the feeling of "fool me once, shame on you; fool me twice, shame on me." The indentured servitude of paying off an Uber car will appeal only to a certain number of drivers, and for a certain amount of time. The transience of taxi-driver workforces is well demonstrated, and it's not clear that Uber can be immune to the same dynamics.
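Mechanically, surge pricing is just a demand-responsive multiplier on the base fare. A simplified sketch follows; the thresholds, ramp rate, and cap are invented for illustration, since Uber's actual algorithm is proprietary:

```python
# Hypothetical surge model: the fare scales with the rider/driver imbalance.
# The 0.5 ramp rate and 4x cap below are invented for illustration only.
def surge_multiplier(waiting_riders: int, available_drivers: int) -> float:
    if available_drivers == 0:
        return 4.0                        # cap rather than divide by zero
    ratio = waiting_riders / available_drivers
    if ratio <= 1.0:
        return 1.0                        # supply meets demand: base fare
    return min(1.0 + (ratio - 1.0) * 0.5, 4.0)

base_fare = 12.00
for riders, drivers in [(40, 50), (90, 50), (250, 50)]:
    m = surge_multiplier(riders, drivers)
    print(f"{riders} riders / {drivers} cars -> {m:.1f}x -> ${base_fare * m:.2f}")
# 40/50 -> 1.0x ($12.00); 90/50 -> 1.4x ($16.80); 250/50 -> 3.0x ($36.00)
```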

Meanwhile Uber is trying to expand its revenue streams: the same infrastructure capacity (physical cars and human drivers) that's needed for only a fraction of the 24-hour day needs to be utilized around the clock. In early December the service offered a few Christmas trees ("very limited" availability in only ten markets) for the low low price of $135, at something less than "white-glove delivery" standards: the tree came to the "first point of entry," at which time you were on your own. A similar service was proposed for barbecue grills.

All that is a long windup for my Uber question: what is the ultimate scale this business can sustain, what is the revenue model for a 24-hour day during which the asset fleet is fully occupied less than 25% of the time, and what is the customer-service guarantee for an impermanent labor force doing a hard job?

3) What is Google doing with robots?

The news that Google had acquired Boston Dynamics, a leader among DARPA robot-development shops, floored me when I heard it and continues to impress me. The shorthand version is that it feels as if Microsoft had acquired Google in 1999. That is, the leader in the current generation of computing invests heavily but astutely in something that's not quite here yet but will be big when it arrives. The buy doesn't get Google any current revenue streams worth mentioning, but there's astonishing talent on the engineering team and some very good IP, much of it, I suspect, classified.

There are several back-stories here. One is that Andy Rubin, formerly at the helm of Android, has been tasked with ramping up a real robotics business. He's been both hiring (quietly but effectively: I know some of the folks, and they're A+ players) and acquiring. The other key person might be Regina Dugan, a Caltech PhD who ran DARPA from 2009 until 2012, at which point she joined Google. At the time it was speculated that her expertise in cybersecurity would make her a prized hire, given the massive attacks on Google's worldwide networks (and, as we learned later, sizeable NSA demands for data). Now, however, her insight into the DARPA robotics pipeline no doubt accelerated Rubin's discussions with Boston Dynamics and perhaps other firms or individuals.

What could Google do with robots? Plenty:

-Work on battery issues. Cross-fertilization across Android and robotic research on this one issue alone could produce massive license revenue opportunities.

-Work on power and motor issues. Getting the physical world to connect and react to the "Internet of things" requires locomotion, a space where Moore's law-scale acceleration of performance has yet to be discovered.

-Automate server farm maintenance. Swapping out dead hard drives, to take just one example, would seem to be a natural task for robots.

-Tune algorithms, something both Google engineers and roboticists do regularly.

-Scale down the machine vision, path-planning, dynamic compensation, and other insights gained in cars to smaller robots.

-Apply the same machine learning insights that allow Google to "translate" foreign languages -- without knowing said languages -- to other aspects of human behavior.

-Learn more about human behavior, specifically, human reactions to extremely capable (and biomimetic) robots. There's a great piece on the deep nature of "uncanny" resemblances here, and who better than Google to extend the limits of observed machine-human psychology.

One article mentioned a potential unintended side-effect. Some investment funds have screens that exclude tobacco companies, defense contractors, firms that do business in boycotted/embargoed nations, etc. Unlike iRobot and its Roomba division, Boston Dynamics has no consumer businesses; it appears to be almost entirely funded by defense research. Thus (much like Amazon building a computing cloud for the CIA), Google is now sort of a defense contractor. Apparently that status might change after the current contracts are completed, but even so, will the acquisition change Google's standing as an institutional stock holding?

4) What happens to the smartphone ecosystem as the developed world hits saturation?

For carriers and handset makers, this could feel like déjà vu: once everyone who wants a cell phone (which happened around the year 2000 for the U.S.) or a smartphone (very soon in the U.S.) has one, where does revenue growth come from? New handsets would be one potential stream, but with subsidized phones, that turnover is unlikely to be very rapid. Content deals with HBO, the NFL, and other rights-holders will be a factor, much as ringtones were 8 or 10 years ago. ARPU, the all-important Average Revenue Per User metric by which the cellular industry lives and dies, is unlikely to grow any time soon in the EU or U.S., according to the research firm Strategy Analytics (http://www.totaltele.com/res/plus/tote_OCT2010.pdf -- see p.8).
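For reference, ARPU is a simple ratio, usually quoted per month (the numbers in this sketch are invented for illustration only):

```python
# ARPU = service revenue for a period / average subscribers in that period.
# All figures below are invented for illustration.
quarterly_service_revenue = 17.4e9   # dollars of service revenue in a quarter
average_subscribers = 120e6          # average subscriber count that quarter

arpu_per_month = quarterly_service_revenue / average_subscribers / 3
print(f"ARPU: ${arpu_per_month:.2f} per subscriber per month")  # ~$48.33
```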

Apple, Verizon, Disney, Comcast, Microsoft -- some very large companies in a variety of industries will be forced to change product strategies, internal cost structures, and/or revenue generation practices, the latter harder to do as competition increases (and thus one possible impetus for rumored merger talks between Sprint and T-Mobile).

There's also the geographic expansion move: focus on developing markets, selling lower-priced handsets in much, much larger numbers. That move, too, is hardly a sure thing, exposed as it is to competition from firms such as Huawei that are not encountered as often in the U.S./EU. Supply chains, sales channels, regulatory compliance, and many other aspects of business practice can surprise a U.S. firm when it seeks to expand outside its traditional markets.

Honorable mention questions:

-How fast will 4K super-definition TV catch on? What geographies will lead the way?

-Will the US see any player gain momentum in the mobile wallet space?

-What will be the fallout of the NSA revelations?

-What will the consumer and B2B 3D printing landscape look like in 18 months? Is there a single killer app that could emerge?

-How will crowdsourcing and especially crowdfunding settle into well-understood patterns after the current wild-West phase?

-What will the cryptocurrencies that follow Bitcoin mean -- for money laundering, for taxation, for developmental economics, for customer convenience, for alternative trust frameworks?

-When will we see a hot Internet-of-Things startup emerge? Given that AT&T networked lots of phones, then Microsoft eventually ran (and sort of connected) hundreds of millions of PCs, then Google indexed all the Internet's documents, then Apple shrunk and refined the PC into a smartphone, then Facebook connected massive numbers of personal contacts, it would appear that whoever can connect (and extract a toll from) some large number of the world's sensors and devices stands to be important in the next chapter of tech history.