I didn’t go looking for this particular constellation of ideas, but several good pieces got me connecting the dots, and this month’s letter is an effort to spell out what they suggest about surveillance.
1) On September 13 The Economist published one of its special reports, this one on online advertising. Entitled “Little Brother,” the report argues that
mobile devices combined with social networks are providing advertisers —
and more importantly, a complex ecosystem
of trackers, brokers, and aggregators illustrated in Luma Partners’ now-famous eye-chart slides —
with unprecedented targeting information. One prominently quoted survey
asserts that marketers have seen more change in the past two years than
in the previous 50. Among the biggest of these shifts: programmatic ad
buying now works much like algorithmic trading
on Wall Street, with automated ad bidding and fulfillment occurring in
the 150 milliseconds between website arrival and page load on the
consumer device.
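To make the mechanics of that 150-millisecond window concrete, here is a minimal sketch, in Python, of how such a real-time auction might work. It is illustrative only: the bidder names, the user-profile fields, and the simple second-price rule are my assumptions rather than a description of any actual exchange; the only details carried over from the report are the rough latency budget and the “burdened by debt” segment.

```python
import time

# Hypothetical sketch of a real-time-bidding (RTB) auction: while a page
# loads, an exchange asks bidders to price one ad impression based on
# whatever tracked data they hold about the user. All names and prices
# below are invented for illustration.

LATENCY_BUDGET_MS = 150  # rough window between page request and page render

def collect_bids(user_profile, bidders, deadline):
    """Ask each bidder for a price; stop when the latency budget runs out."""
    bids = []
    for bidder in bidders:
        if time.monotonic() > deadline:
            break  # out of time: late bidders simply miss the auction
        price = bidder(user_profile)
        if price is not None:
            bids.append((price, bidder.__name__))
    return bids

def run_auction(user_profile, bidders):
    deadline = time.monotonic() + LATENCY_BUDGET_MS / 1000.0
    bids = sorted(collect_bids(user_profile, bidders, deadline), reverse=True)
    if not bids:
        return None
    winner = bids[0][1]
    # Many exchanges use a second-price rule: the winner pays roughly the
    # runner-up's bid, which encourages truthful bidding.
    clearing_price = bids[1][0] if len(bids) > 1 else bids[0][0]
    return winner, clearing_price

# Two toy bidders that price the impression off tracked attributes.
def debt_segment_bidder(profile):
    return 4.50 if profile.get("segment") == "burdened by debt" else 0.25

def travel_bidder(profile):
    return 3.10 if "travel" in profile.get("recent_searches", []) else None

if __name__ == "__main__":
    profile = {"segment": "burdened by debt", "recent_searches": ["travel", "loans"]}
    print(run_auction(profile, [debt_segment_bidder, travel_bidder]))
    # ('debt_segment_bidder', 3.1) -- the more surveillance-derived detail a
    # bidder holds, the more it is willing to pay for this particular person.
```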
[As I type this, Facebook has announced that it took in $3.2 billion in revenue in one quarter, mostly from ads, nearly $2 billion of it from mobile.]
Given that surveillance pays dividends in the form of more precise
targeting — one broker sells a segment called “burdened by debt:
small-town singles” — it is no surprise that literally hundreds of
companies are harvesting user information to fuel the bidding
process: online ad inventory is effectively infinite, so user
information is the scarce commodity and thus valued. This marks a
radical reversal from the days of broadcast media, when audience
aggregators such as NBC or the New York Times sold ad availability
that was constrained by time or space. Thus the scarcity has shifted
from publishers to ad brokers who possess the targeting information
gleaned from Facebook, GPS, Twitter, Google searches, etc. Oh, and
anyone who does even rudimentary research on the supposedly
“anonymous” nature of this data knows it isn’t, really: Ed Felten, a
respected computer scientist at Princeton, and others have repeatedly
shown how easy de-anonymization is. (Here’s one widely cited example.)
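To see why the “anonymous” label is so weak, consider the standard linkage attack researchers have demonstrated again and again: join an anonymized log to a public record on a few quasi-identifiers and individual people fall out. The Python below is a toy version with invented records, not anyone’s actual dataset.

```python
# Toy illustration of re-identification by linkage: an "anonymized" ad log
# carries no names, but ZIP code, birth year, and gender together are often
# enough to match a row to a named person in a public directory.
# All records below are fabricated.

anonymized_ad_log = [
    {"zip": "16801", "birth_year": 1971, "gender": "F", "segment": "burdened by debt"},
    {"zip": "16801", "birth_year": 1986, "gender": "M", "segment": "luxury travel"},
]

public_directory = [
    {"name": "Jane Roe", "zip": "16801", "birth_year": 1971, "gender": "F"},
    {"name": "John Doe", "zip": "16801", "birth_year": 1986, "gender": "M"},
]

def reidentify(log, directory):
    """Match 'anonymous' rows to named people via shared quasi-identifiers."""
    hits = []
    for row in log:
        matches = [person for person in directory
                   if (person["zip"], person["birth_year"], person["gender"]) ==
                      (row["zip"], row["birth_year"], row["gender"])]
        if len(matches) == 1:  # a unique match de-anonymizes the row
            hits.append((matches[0]["name"], row["segment"]))
    return hits

print(reidentify(anonymized_ad_log, public_directory))
# [('Jane Roe', 'burdened by debt'), ('John Doe', 'luxury travel')]
```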
2) In another sign that surveillance is a very big deal, not only for
advertising, the always-astute security guru Bruce Schneier announced
that his next book Data and Goliath, due out in March, addresses this
issue.
3) Robots, which for our purposes can be defined as sensor platforms,
are getting better — fast — and Google has acquired expertise in several
forms of the discipline:
-the self-driving car (which still has severe real-world limitations)
-Internet of Things (Nest and Waze)
-autonomous military and rescue robots (Boston Dynamics and Schaft).
4) A September 28 post by Steve Cheney
raised the prospect of Google moving some or all of the aforementioned
robot platforms onto some version of Android. While he predicted that
“everything around you will feel like an app,” I’m more concerned that
every interaction with any computing-driven
platform will be a form of surveillance. From garage-door openers and
thermostats to watches to tablets to “robots” (like the one Lowe’s is prototyping for store assistance)
to cars, the prospect of a Google-powered panopticon feels plausible.
(I looked for any mention of robotics in the Google annual report but
all the major acquisitions
were made in this fiscal year, so next year's 10-K will bear watching
on this topic.)
5) Hence Apple’s recent positioning makes competitive sense. When Tim Cook said, “A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product,” he was ahead of the curve, I believe: according to the Economist report, only 0.00015% of people use those little triangle things to opt out of online ad tracking. In Cook’s and
Apple’s narrative, premium prices implicitly become more reasonable to
those who value privacy insofar as there is no
“audience commodity” as at eBay, Amazon, Google, Twitter, or Facebook.
6) One other thing to consider here is how that information is being
processed at unprecedented scale. When The Economist likens ad buying to algorithmic trading, we enter the world of artificial intelligence, something Google counts as a core competency, with 391 papers published, not to mention untold portions of secret sauce.
Some very smart people are urging caution here. Elon Musk was at MIT for
a fascinating (if you’re a nerd) discussion of rockets, Tesla, the
hyperloop, and space exploration. For someone serious about a Mars base to warn against opening an AI Pandora’s box was quite revealing:
“I think we should be very careful about artificial intelligence. If I
were to guess like what our biggest existential threat is, it’s
probably that. So we need to be very careful with the
artificial intelligence. Increasingly scientists think there should
be some regulatory oversight maybe at the national and international
level, just to make sure that we don’t do something very foolish. With
artificial intelligence we are summoning the demon. In all those stories
where there’s the guy with the pentagram and
the holy water, it’s like yeah he’s sure he can control the demon.
Didn’t work out.”
(The complete MIT talk is here)
Musk is not alone. The University of Oxford’s Nick Bostrom recently
wrote Superintelligence (maybe best thought of as an alternative
excursion into Kurzweil-land), a book that quite evidently is grappling
with the unknown unknowns we are bumping up against.
He knows of what he speaks, but the book is, by his own admission, a
frustrating read: no generation of earth’s population has ever had to
ask these questions before. The book’s incompleteness and tentativeness,
while making for a suboptimal read, are at the
same time reassuring: someone who is both informed and working within a broad context is asking the questions many of us want on the table but lack
the ability, vocabulary, and credibility to raise ourselves.
********
In a nutshell, there it is: mobile devices and social networks generate
data points that supercomputing and sophisticated analytical tools turn
into ad (or terrorist, or tax-cheat) profiling data. Computing liberated
from desktop boxes and data centers moves into, and acts on, the physical world, extending surveillance further.
Apple positions itself as a self-contained entity selling consumers
stuff they pay for, not selling eyeballs/purchase histories/log-in
fields/expressed and implied preferences to advertisers.
In sharp contrast, Google has repeatedly shown — with Street View, wi-fi
access point mapping, Buzz, and Google+ — a desire to collect more
information about individuals than many of those individuals would
voluntarily reveal. With AI in the picture, the prospect
of surveillance producing a very scary scenario — it may not
accurately be called a breach, just as the flash crash wasn’t illegal —
grows far more likely. Human safeguards didn’t work at the NSA; why
should they work in less secure organizations? Like
Bostrom, I have no ready answers other than to lead a relatively
careful digital existence and hope that the wisdom of caution and
respect for privacy will edge out the commercial pressures in the
opposite direction.
Next month: unexpected consequences of a surveillance state.
Wednesday, October 01, 2014
Early Indications September 2014: Alternatives to Industry
In classic business school strategy formulation, a company’s industry is
taken as the determining factor in cost structures, capital utilization,
and other constraints to the pursuit of market success. Nowhere is this
view more clearly visible than in Michael Porter’s seminal book
Competitive Strategy, in which the word “industry” appears regularly.
I have long contended that Internet companies break this formulation. A series of blog posts (especially this one) in the past few weeks have crystallized this idea for me. The different paths pursued by Apple, Amazon, and Google — very different companies when viewed through the lens of industries — lead me to join those who contend that despite their different microeconomic categories, these three companies are in fact leading competitors in important ways: but not of the Coke/Pepsi variety.
Let us consider the traditional labels first. Amazon is nominally a retailer, selling people (and businesses) physical items that it distributes with great precision from its global network of warehouses. Its margins are thin, in part because of the company’s aggressive focus on delivering value to the customer, often at the cost of profitability at both Amazon itself and its suppliers.
Apple designs, supervises the manufacture of, and distributes digital hardware. Its profit margins are much higher than Amazon’s, in large part because its emphasis on design and usability allows it to command premium prices. Despite these margins and a powerful brand, investors value the company much less aggressively than they do Amazon.
Google, finally, collects vast sums of data and provides navigation in the digital age: search, maps, email. Algorithms manage everything from web search to data-center power management to geographic way-finding. In the core search business, profit margins are high because of the company’s high degree of automation (self-service ad sales) and the wide moats the company has built around its advertising delivery.
Thus in traditional terms, we have a mega-retailer, a computer hardware company, and a media concern.
When one digs beneath the surface, the picture morphs rather dramatically. Through a different lens, the three companies overlap to a remarkable degree — just not in ways that conform to industry definitions.
All three companies run enormous cloud data center networks, albeit monetized in slightly different ways. Apple and Amazon stream media; Google and Amazon sell enterprise cloud services; Apple and Google power mobile ecosystems with e-mail, maps, and related services. All three companies strive to deepen consumer connections through physical devices.
Apple runs an industry-leading retail operation built on prime real estate at the same time that Amazon is reinventing the supply-chain rule book for its fulfillment centers (FCs) and now sortation centers. (For more on that, see this fascinating analysis by ChannelAdvisor of the Amazon network. In many cases, FCs are geographically clustered rather than spread more predictably across the landscape.) Both of these retail models are hurting traditional mall and big-box chains.
At the most abstract but common level, all three companies are spending billions of dollars to connect computing to the physical world, to make reality a peripheral of algorithms, if you will. Amazon’s purchase of Kiva and its FC strategy both express an insistent push to connect a web browser or mobile device to a purchase, fulfilled in shorter and shorter time lags with more and more math governing the process. In the case of Kindle and streaming media, that latency is effectively zero, and the publishing industry is still in a profoundly confused and reactive state about the death of the physical book and its business model. The Fire phone fits directly into this model of making the connection between an information company and its human purchasers of stuff ever more seamless, but its weak market traction is hardly a surprise, given the strength of the incumbents: not coincidentally, the other two tech titans.
Apple connects people to the world via the computer in their pocket. Because we no longer have the Moore’s law/Intel scorecard to track computing capacity, it’s easy to lose sight of just how powerful a smartphone or tablet is: Apple's A8 chip in the new iPhone contains 2 Billion (with a B) transistors, equivalent to the PC state of the art in 2010. In addition, the complexity of the sensor suite in a smartphone — accelerometers, microphone, compasses, multiple cameras, multiple antennae — is a sharp departure from a desktop computer, no matter how powerful, that generally had little idea of where it was or what its operator was doing. And for all the emphasis on hardware, Nokia’s rapid fall illustrates the power of effective software in not just serving but involving the human in the experience of computing.
Google obviously has a deep capability in wi-fi and GPS geolocation, for purposes of deeper knowledge of user behavior. The company’s recent big-bet investments — the Nest thermostat, DARPA robots, Waze, and the self-driving car team — further underline the urgency of integrating the world of physical computing on the Android platform(s) as a conduit for ever deeper knowledge of user behavior, social networks, and probably sentiment, all preconditions to more precise ad targeting.
Because these overlaps fail to fit industry definitions, metrics such as market share or profit margin are of limited utility: the three companies recruit, make money, and innovate in profoundly different ways. Amazon consistently keeps operating information quiet (nobody outside the company knows how many Kindle devices have been sold, for example), so revenue from the cloud operation is a mystery; Google’s finances are also somewhat difficult to parse, and the economics of Android for the company were never really explicated, much less reported. Apple is probably the most transparent of the three, but that’s not saying a lot, as the highly hypothetical discussion of the company’s massive cash position would suggest.
From a business school or investor perspective, the fact of quasi-competition despite the lack of industry similitude suggests that we are seeing a new phase of strategic analysis and execution, both enabled and complicated by our position with regard to Moore’s law, wireless bandwidth, consumer spending, and information economics. The fact that both Microsoft and Intel are largely irrelevant to this conversation (for the moment anyway) suggests several potential readings: that success is fleeting, that the PC paradigm kept both companies’ leaders from seeing a radically different set of business models, that fashion and habit matter more than licenses and seats, that software has changed from the days of the OSI layer cake.
In any event, the preconditions for an entirely new set of innovations — whether wearable, embedded/machine, algorithmic, entertainment, and/or health-related — are falling into place, suggesting that the next 5-10 years could be even more foreign to established managerial teaching and metrics. Add the external shocks — and shocks don’t get much more bizarre than Ebola and media-savvy beheadings — and it’s clear that the path forward will be completely fascinating and occasionally terrifying to traverse. More than inspiration or insight from our business leaders, we will likely need courage.
Tuesday, July 29, 2014
Early Indications July 2014: Betting the Business
I read with great interest the recent Fortune article on the new Ford F-150 pickup. This is the best-selling vehicle in the U.S. (but sold in few other markets), and contributes mightily to the company’s profitability: it’s straightforward to manufacture, long production runs drive purchasing economies and assembly line efficiency, and option packages contribute to healthy — 40% — gross margins. In short, the light truck franchise at Ford is the one essential business that management has to get right: small cars aren’t that popular or profitable, large SUVs are out of fashion, overall car demand is flat for demographic and other reasons, and financing isn’t the profit center it once was given low interest rates.
A new model of the truck comes out this fall. Ford is reinventing the pickup by making it mostly out of aluminum rather than steel. The weight savings (700 lb was the target) will help the automaker reach government-mandated fuel economy targets, but there are significant risks all across the landscape:
*Body shops need new, specialized equipment to address aluminum body damage. Ford had to create a nationwide network of authorized service centers, especially given how many trucks are sold in rural areas to miners, ranchers, and farmers. If owners have trouble getting repairs done, negative publicity will travel extra fast over social media.
*The aluminum supply as of 2010 was not sufficient for the need: 350,000 half-ton pickups in 2012 would be an awful lot of beer cans. Ford has to develop a whole new supply base and watch how one big buyer will move the entire commodity market. (I’ve heard this is why Subway doesn’t offer roasted red pepper strips: they’d need too many.)
*Manufacturing processes are being revised: aluminum can’t be welded the way steel can, so bonding and riveting require new engineering, new skills, new materials, and new assembly sequences.
In short, this is a really big gamble. Ford is messing with the formula that has generated decades of segment leadership, corporate profitability, and brand loyalty. Competitors are circling: Chevy would love to have Silverado win the category sales crown this year, especially given GM’s horrific year of bad publicity, and Chrysler’s Ram division was renamed solely because of its pickups’ brand equity.
It’s rare that a company takes a position of market leadership and invests in a significantly different platform, risking competitive position, profitability, and customer loyalty. Between us, my former boss John Parkinson and I could only come up with a handful: these kinds of moments seem to happen only about once a decade (unless readers remind me of examples I missed).
Six US business decisions got our attention:
1) Boeing bet on passenger jet airplanes in the 1950s, launching the 707 in 1958. It was not the first such aircraft: the British de Havilland Comet won that honor, but it had major safety issues: metal fatigue around its window openings led to catastrophic in-flight failures. Jets delivered greater power for their size, had fewer moving parts, and burned cheaper fuel. Planes could carry more passengers, fly farther and faster, and required fewer maintenance visits.
2) IBM completely departed from industry practice by announcing the System/360 in 1964. It was a family of highly compatible mainframes, allowing customers to grow up in capability without having to learn a new operating system or rebuild applications: previously, customers could buy a small computer that might soon box them in, or overspend on something too big for their needs in the hope of growing into it. Fred Brooks, who managed software development, learned from System/360 about the paradoxes of programming and later wrote the classic Mythical Man-Month, with its still-true insight: adding programmers to a late software project will make it later. Brooks’ team delivered, and S/360 helped IBM dominate the computer market for the next 15 years.
3) Few people remember that Intel has not always been synonymous with microprocessors. Until the mid-1980s, the company’s focus was on memory devices. Simultaneously undercut in price by Japanese competition and alert to the rapid growth of the PC segment, Intel’s Andrew Grove led the company’s switch to the far more technically demanding microprocessor market, followed by the famous “Intel Inside” branding campaign in 1991: it was unheard-of for a B2B supplier to build a consumer brand position. Intel stock in this period enjoyed an enviable run-up, but the success was not preordained.
4) It wasn’t a completely “bet-the-company” decision, but Walter Wriston at Citibank wagered heavily on automated teller machines in the late 1970s and 1980s, which not only cost a significant amount to develop and install, but also prompted criticism of a lack of a personal touch in client service. The decision of course paid off handsomely and revolutionized the finance industry.
5) It wasn’t a bet-the-company decision, as its failure makes clear, but Coke guessed wildly wrong on the flavor of “New Coke” in 1985 yet was able to recover.
6) Verizon made a significant investment in its residential fiber-optic buildout, but the rapid growth in the wireless business eventually eclipsed wireline in general, reducing both the risk and the impact of the original decision in 2005.
What am I missing? In what other situations have CEOs taken a flagship market offering and significantly revamped it, endangering market share, brand equity, and profitability to the extent Ford has, when the entire company’s future rides heavily on this product launch?
Friday, May 30, 2014
Early Indications May 2014: When computing leaves the box
Words can tell us a lot. In particular, when a new innovation emerges, the
history of its naming shows how it goes from foreign entity to novelty
to invisible ubiquity. A little more than 100 years ago, automobiles
were called “horseless carriages,” defined by what they were not rather
than what they were. “Cars” were originally parts of trains, as in
boxcars or tank cars, but the automobile is now top of mind for most
people. More recently, the U.S. military refers to drones as UAVs:
unmanned aerial vehicles, continuing the trend of definition by
negation. Tablets, the newest form of computing tool, originally were
made of clay, yet the name feels appropriate.
The naming issues associated with several emerging areas suggest that we are in the early stages of a significant shift in the landscape. I see four major manifestations of a larger, as yet unnamed trend that, for lack of better words, I am calling “computing outside the box.” This phrase refers to digital processes — formerly limited to punchcards, magnetic media, keyboards/mice, and display screens — that now are evolving into three-dimensional artifacts that interact with the physical world, both sensing and acting upon it as a result of those digital processes. My current framing of a book project addresses these technologies:
-robotics
-3D printing/additive manufacturing
-the emerging network of sensors and actuators known as the Internet of Things (another limited name that is due for some improvement)
-the aforementioned autonomous vehicles, airborne, wheeled, and otherwise.
Even the word “computer” is of interest here: the first meaning, dating to 1613 and in use for nearly 350 years, referred to a person who calculates numbers. After roughly 50 years of computers being big machines that gradually shrank in size, we now are in a stage where the networked digital computers carried by hundreds of millions of people are no longer called computers, or conceived of as such.
Most centrally, the word “robot” originated in the 1920s and was at first a type of slave; even now, robots are often characterized by their capabilities in performing dull, dirty, or dangerous tasks, sparing a human from these efforts. Today, the word has been shaped in public imagination more by science fiction literature and cinema than by wide familiarity with actual artificial creatures. (See my TEDx talk on the topic) Because the science and engineering of the field continue to evolve rapidly — look no further than this week’s announcement of a Google prototype self-driving car — computer scientists cannot come to anything resembling consensus: some argue that any device that can 1) sense its surroundings, 2) perform logical reasoning with various inputs, and 3) act upon the physical environment qualifies. Others insist that a robot must move in physical space (thus disqualifying the Nest thermostat), while others say that true robots are autonomous (excluding factory assembly tools).
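Since the definitions are contested, a minimal sketch may help show what the most inclusive camp means by sense, reason, and act. The Python below is a hypothetical stand-in, not any vendor’s logic: by criteria (1)-(3) even a thermostat qualifies, which is precisely why the stricter camps add requirements about movement or autonomy.

```python
import random

def read_temperature_sensor():
    # Stand-in for real hardware: return a plausible room temperature (F).
    return random.uniform(60.0, 75.0)

def set_furnace_relay(command):
    # Stand-in for actuation on the physical world, e.g. switching a relay.
    print("furnace:", command)

class ThermostatRobot:
    """Toy sense-reason-act loop matching the three criteria above."""

    def __init__(self, target_temp_f=68.0):
        self.target = target_temp_f

    def sense(self):
        return read_temperature_sensor()           # (1) sense surroundings

    def decide(self, current_temp):
        # (2) "logical reasoning with various inputs" -- here, a trivial rule
        if current_temp < self.target - 1.0:
            return "heat_on"
        if current_temp > self.target + 1.0:
            return "heat_off"
        return "hold"

    def act(self, command):
        set_furnace_relay(command)                  # (3) act on the environment

    def step(self):
        self.act(self.decide(self.sense()))

if __name__ == "__main__":
    ThermostatRobot().step()
```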
I recently came across a sensible, nuanced discussion of this issue by Bernard Roth, a longtime professor of mechanical engineering who was associated with the Stanford Artificial Intelligence Lab (SAIL) from its inception.
“I do not think a definition [of what is or is not a robot] will ever be universally agreed upon. . . . My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines. If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the devices get downgraded from ‘robot’ to ‘machine.’” *
However robots are defined, how is computing outside the box different from what we came to know of digital computing from 1950 until 2005 or so? Several factors come into play here.
1) The number of computational devices increases substantially. There were dozens of computers in the 1950s, thousands in the 1960s, millions in the 1980s, and so on. Now, networked sensors will soon number in the tens of billions. This increase in the scale of both the challenges (network engineering, data science, and energy management are being reinvented) and the opportunities requires breakthroughs in creativity: are fitness monitors, which typically get discarded after a few months except by the hardest-core trainers, really the best we can do for improving health outcomes?
2) With cameras and sensors everywhere — on phone poles, on people’s faces, in people’s pockets, in the ground (on water mains), and in the sky (drone photography is a rapidly evolving body of legal judgment and contestation) — the boundaries of security, privacy, and risk are all being reset. When robots enter combat, how and when will they be hacked? Who will program a self-driving suicide bomb?
3) Computer science, information theory, statistics, and physics (in terms of magnetic media) are all being stress-tested by the huge data volumes generated by an increasingly instrumented planet. A GE jet engine is reported to take off, on average, every two seconds, worldwide. Each engine generates a terabyte of data per flight. 10:1 compression takes this figure down to a mere 100 gigabytes per engine per flight. Dealing with information problems at this scale, in domain after domain (here’s an Economist piece on agriculture), raises grand-challenge-scale hurdles all over the landscape. (A back-of-envelope version of this arithmetic appears just after this list.)
4) The half-life of technical knowledge appears to be shrinking. Machine learning, materials science (we really don’t understand precisely how 3D printing works at the droplet level, apparently), machine vision in robots, and so on will evolve rapidly, making employment issues and career evolution a big deal. Robots obviously displace more and more manual laborers, but engineers, programmers, and scientists will also be hard-pressed to keep up with the state of these fields.
5) What are the rules of engagement with computing moving about in the wild? A woman wearing a Google Glass headset was assaulted in a bar because she violated social norms; self-driving cars don’t yet have clear liability laws; 3D printing of guns and of patented or copyrighted material has yet to be sorted out; nobody yet knows what happens when complete strangers can invoke facial recognition on the sidewalk; Google could see consumer (or EU) blowback when Nest sensor data drives ad targeting.
6) How will these technologies augment and amplify human capability? Whether in exoskeletons, care robots, telepresence, or prostheses (a field perfectly suited to 3D printing), the human condition will change in its shape, reach, and scope in the next 100 years.
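Returning to the jet-engine figures in point 3: a quick back-of-envelope calculation, taking those reported numbers at face value (and counting only one engine per takeoff, so if anything an underestimate), suggests the daily volumes involved.

```python
# Back-of-envelope arithmetic using the figures cited above: one GE engine
# takeoff every two seconds, one terabyte of raw data per engine per flight,
# and 10:1 compression. Purely illustrative.

SECONDS_PER_DAY = 24 * 60 * 60
takeoffs_per_day = SECONDS_PER_DAY / 2            # ~43,200 takeoffs per day
raw_tb_per_day = takeoffs_per_day * 1.0           # 1 TB per engine per flight
compressed_tb_per_day = raw_tb_per_day / 10       # after 10:1 compression

print(f"takeoffs per day:   {takeoffs_per_day:,.0f}")
print(f"raw data per day:   {raw_tb_per_day / 1000:,.1f} PB")
print(f"compressed per day: {compressed_tb_per_day / 1000:,.1f} PB")
# Roughly 43,200 takeoffs, ~43 PB raw and ~4.3 PB compressed per day --
# for one engine family, before counting any other sensors on the aircraft.
```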
To anticipate the book version of this argument, computing outside the box introduces a new layer of complexity into the fields of artificial intelligence, big data, and ultimately, human identity and agency. Not only does the long history of human efforts to create artificial life see a new chapter, but also we are creating artificial life in vast networks that will behave differently than a single creature: Frankenstein’s creature is a forerunner of Google’s Atlas robot, but I don’t know if we have as visible a precedent/metaphor for self-tuning sensor-nets, bionic humans, or distributed fabrication of precision parts and products outside factories.
That piece of the argument remains to be worked out more completely, but for now, I’m finding validation for the concept every day in both the daily news feed and in the lack of words to talk about what is really happening.
*Bernard Roth, Foreword to Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Berlin: Springer-Verlag, 2008), p. viii.
Monday, March 31, 2014
Early Indications March 2014: TED at 30
On the occasion of its 30th birthday, TED is the
subject of a number of critiques and analyses. It’s a tempting target: the brand is both powerful and global, and the sheer numbers are just plain big. Some 1,700 talks are said to be online, viewed more than a
billion times. More than 9,000 mini-TEDs (TEDx events) have been held
all over the world. And TED’s successful formula is prone to the perils
of any formula: sameness, self-parody, insularity.
But to go so far as to say, as sociologist Benjamin Bratton does, that TED is a recipe for “civilizational disaster,” is attention-getting hyperbole. Does Bratton not watch TV, a much more likely candidate for his accusation? (Also: he made the charge in a TED talk, of all places.) Other critiques hit the mark. There can be heavy doses of techno-utopianism, especially in a certain strand of the talks, which is hardly surprising given a heavy Silicon Valley bias among the advisory crew. Politics is often either a) ignored or b) addressed as a quasi-technical problem to be fixed. The stagecraft, coaching, and earnestness of the talks can lend an evangelical cast to the enterprise. Humanity is fallen, in this trope, from a state of “better” that can be reclaimed by more education, more technology, more self-actualization.
At the same time, that narrative contains more than a grain of realism. Civic debate works less wastefully when citizens have richer fact bases from which to reason, and Hans Rosling’s series of talks on international health and economics is an important contribution to that debate. (The same can be said for Ken Robinson and Sal Khan on education.) Medicine and technology can make some of us “better than well,” to quote Carl Elliott, or replace human capacity with machines. The field of prosthetics (not only limbs, but also exoskeletons and tools for cognitive abilities and other functions) is in a state of extreme dynamism right now, and 99% of us will never see the labs and rehab clinics where the revolution is gaining momentum. Finally, education is being enhanced and disrupted by digital media at a time when skills imbalances, economic inequality, and political corruption are crucial topics for much of the globe. The TED agenda includes many worthy elements.
Rather than go with the evangelical line of comparison (as illustrated at The Economist), I tend to look at TED in terms of its reach. Much like the Book of the Month Club that brought middlebrow literature to audiences far from metropolitan booksellers, TED serves as an introduction to ideas one would not encounter otherwise. The conferences and videos illustrate the power of “curation,” a buzzword that fits here, vis-à-vis mass populations utilizing search, popular-scientific journals, mass media, or classroom lectures. This curation, coupled with the huge scale of the freely distributed videos and the social networks that propel them, helps explain the TED phenomenon. And if it's "middlebrow," I'm not sure that's such a bad thing: this isn't Babbittry, after all.
In TED-meister Chris Anderson’s own talk, he makes a compelling case for online video as a Gutenberg-scale revolution. In the beginning, says Anderson (the son of missionaries), was the word, and words were spread by humans with gestures, intonation, eye contact, and physical responses of either acknowledgement or confusion. After the inventions of ink, type, paper, and so on, words could be manufactured beyond human scale, but the accompanying nuances were lost: print scaled in a way talking could not. Now, in a brief historical moment (YouTube is not yet 10 years old), we have global scale for words to reach masses of niche audiences, complete with body language, show-and-tell visual explanations, and other attributes that restore the richness of the spoken word.
Bratton’s solution — “More Copernicus, less Tony Robbins” — has much to commend it, yet realistically, how many Copernican giants can any era of human history produce? And of these few, how many could communicate on whiteboards, in articles, or to students, the true breadth and depth of their insights and discoveries? The self-help strain of TED-dom is worrisome to me and to many others, but equally unhelpful is science and technology unmoored from human context. If there had been TED talks in 1910, when Fritz Haber fixed atmospheric nitrogen in fertilizers that now feed a third of the world’s population, would anyone have known what he should have said? Or what if Robert Oppenheimer had a TED-like forum for his concerns regarding atomic weapons in the 1950s? Historically, humanity has frequently misunderstood the geniuses in its midst, so I’m unsure if TED could actually find, coach, and memorialize on video many of today’s Copernican equivalents. At the same time, I should be on record as wishing for both less of Tony Robbins and fewer media Kardashians of any variety.
For me, these are both academic questions and personal ones: I gave my first TEDx talk in early March, and crafting it was a stiff challenge. I saw pitfalls everywhere: sounding like everyone else, overshooting the audience’s patience and current knowledge, and not giving people anything to _do_ about this urgent issue at the end of the talk. Thus I will leave it to the audience to judge after the edited talk is posted next month, but I must admit I measure great TED talks with a new yardstick after having tried to give even a pedestrian one.
Wednesday, February 26, 2014
Early Indications February 2014: Who will win in an Internet of Things?
As usual, Silicon Valley is in love with tech-y acronyms that sometimes do
not translate into wider conversations involving everyday people. We’ve
discussed “big data” in a recent newsletter, “cloud computing” has been
around for a long time and still confuses people, and now, various
firms are pointing toward a future of massively connected devices,
controllers, sensors, and objects. Cisco went for the whole enchilada in
its branding, opting for “the Internet of Everything.” IBM tended
toward cerebral globality with its “smarter planet” branding. GE is
focusing on its target markets by trying to build an “industrial
Internet.” My preferred term, the Internet of Things (IoT), raises many
challenging questions — but they’re worth pondering.
It’s commonly said that “there are more things than people connected to the Internet,” or some variant of that assertion. The current number is 8 to 10 billion “things,” which of course outstrips the population of planet Earth. But what is a “thing”? Out of that total — call it 9 billion for the sake of argument — probably 2 billion are smartphones and tablets. Add in routers, switches, access points, military gear, and PCs, and we’re probably in the 5-6 billion range, total. That means that “things” such as garage-door openers, motor controllers, and webcams probably add up to 2-3 billion — about the same as the number of people who use the Internet on a regular basis, out of a global population of 7 billion. (For detailed numbers see here and here.)
This ratio will change rapidly. Given the inevitability of massive growth in this area, many big players, including some surprising ones, are beginning to fight for mindshare, patent portfolios, and other predecessors to market leverage. I will list the major players I’ve found in alphabetical order, then conclude with some summary thoughts.
Chip companies (ARM, Freescale, Intel, Qualcomm, Texas Instruments)
The magnitude of the opportunity is offset by the low price points required for true ubiquity. The business model should, in theory, be friendlier to an embedded-systems vendor like Freescale than to makers of IP-heavy, super-powerful microprocessors such as Qualcomm and Intel (unless we get data-aggregation appliances, dumber than a PC but capable of managing lots of sensors, in which case a set-top box might be a useful guidepost). If a company can capture a key choke point with a patent “moat,” as Broadcom and Qualcomm did for previous generations of networking, this could be a key development.
Overall grade: lots of ways to win, but unlikely that any of these players dominate huge markets
Cisco: CEO John Chambers told a tech conference earlier this month that he projects the IoT to be a $19 TRILLION profit market in the next few years. (For scale, the US GDP is about $15 trillion, with about a fifth of that health care in some form or fashion; total global economic activity amounts to about $72 trillion.) Given such a massive addressable market, Cisco is bundling cloud, collaboration, data analytics, mobility, and security offerings into a go-to-market plan with an emphasis on lowering operational costs, raising business performance, and doing it all fast. This IoT strategy blends strengths in Cisco’s legacy businesses with an anticipation of new market needs.
Overall grade: Given history, the developing hardware stack, and their brainpower, Cisco must be taken very seriously
GE: If you look at how many pieces of capital equipment GE sells (from oil drilling gear, to locomotives, to generators, to MRI machines, to jet engines), and if you look at the profit margins on new sales vs after-market service, it makes complete sense that GE is working to instrument much of its installed base of Big Things That Break. Whether it’s preventive maintenance, or pre-empting a service call by a competitor, or designing more robust products for future release, the Industrial Internet is a smart approach to blending sensors, pervasive networking, and advanced analytics.
Overall grade: in GE's market strongholds, getting to IoT first could be a multi-billion-dollar win.
Google: Given the scale and speed of CEO Larry Page’s recent deals, assessing Google’s IoT future is a guessing game. There are huge building blocks already in place:
-Google mapped millions of wi-fi access points and has a well-regarded GIS competency (even if the privacy stance attached to said capability is less beloved).
-With the experience of owning a hardware company, if only briefly, of owning the #1 global smartphone platform as measured by units, of running a massive internal global network, and of trying its hand at municipal fiber, Google has extensive and relevant expertise in the transport layer.
-As possibly the biggest and best machine learning company anywhere, Google knows what to do with large numbers of sensor streams, possessing both the algorithmic know-how and the computational infrastructure to handle heretofore unheard-of data problems.
-With the Nest purchase, Google gets more data streams as well as user experience expertise for IoT products in the home.
-The self-driving car effort has been a large-scale proving ground for sensor integration and actuator mechanics, with advanced vehicle dynamics and GIS/GPS thrown in.
-Perhaps less noticed than the Nest deal, Google’s recent agreement with Foxconn to develop an operating system for manufacturing robots potentially puts Google into closer competition with Amazon’s supply-chain automation investments, most notably Kiva.
Overall grade: Too big, too secretive, and too data-intensive to ignore. A possible 800-lb gorilla in a nascent market.
Software companies (SAS, Oracle, IBM, Cloudera, etc)
Storage companies (EMC, Western Digital, Amazon)
Obviously there will be plumbing to build and maintain, and selling pickaxes to miners has been good business for centuries. The question of discontinuity looms large: will this wave of computing look similar enough to what we’re doing now that current leaders can make the transition, or will there be major shifts in approach, creating space for new configurations (and providers) of the storage layer?
Overall grade: Someone will win, but there’s no telling who
Wolfram: This was a surprise to me, but their reasoning makes all kinds of sense:
"But in the end our goal is not just to deal with information about devices, but actually be able to connect to the devices, and get data from them—and then do all sorts of things with that data. But first—at least if we expect to do a good job—we must have a good way to represent all the kinds of data that can come out of a device. And, as it turns out, we have a great solution for this coming: WDF, the Wolfram Data Framework. In a sense, what WDF does is to take everything we’ve learned about representing data and the world from Wolfram|Alpha, and make it available to use on data from anywhere."
An initial project is to “curate” a list of IoT things, and that can be found here. The list began at a couple thousand items, from Nike Fuelbands to a wi-fi slow cooker to lots of industrial products. Just building an authoritative list, in rigorous Wolfram-style calculable form, is a real achievement. (Recall that Yahoo began as a human-powered directory of the World Wide Web, before Google’s automated crawl proved more adequate to the vastness of the exercise.)
But Wolfram wants to own a piece of the stack, getting its software bundled on the Raspberry Pi for starters. The WDF jumpstarts efforts to share simple things like positional data or physical properties (whether torque or temperature or vibrational frequency). Because the vast variety of sensors lacks an interconnection/handshake standard like USB, Wolfram definitely helps fill a gap; a toy sketch of the kind of uniform record involved follows the grade below. As for the long-term play, I don’t have the engineering/CS credentials to even speculate.
Overall grade: Asking some pertinent questions, armed with some powerful credentials. Cannot be ignored.
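As a toy illustration of the representation problem just described, here is a minimal Python sketch of a uniform sensor-reading record. The class and field names are hypothetical stand-ins invented for this example, not the actual Wolfram Data Framework.

    # Hypothetical uniform reading record, in the spirit of what a shared data
    # framework has to capture. Invented for illustration; not the actual WDF.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional, Tuple

    @dataclass
    class DeviceReading:
        device_id: str                           # stable identifier for the "thing"
        quantity: str                            # what is measured, e.g. "temperature"
        value: float                             # the numeric reading
        unit: str                                # explicit unit: "degC", "N*m", "Hz"
        timestamp: datetime                      # when the reading was taken
        location: Optional[Tuple[float, float]]  # optional (latitude, longitude)

    # Two very different devices, one shared shape:
    readings = [
        DeviceReading("slowcooker-42", "temperature", 93.5, "degC",
                      datetime.now(timezone.utc), None),
        DeviceReading("turbine-007", "vibration_frequency", 118.0, "Hz",
                      datetime.now(timezone.utc), (40.79, -77.86)),
    ]
    for r in readings:
        print(f"{r.device_id}: {r.quantity} = {r.value} {r.unit}")

The point is simply that torque, temperature, and position all need the same explicit envelope (identity, quantity, unit, time, place) before streams from different vendors can be combined, and that envelope is what a framework like WDF is trying to standardize.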
****
A more general question emerges: where do the following conceptual categories each start and finish? One problem is that these new generations of networked sensing, analysis, and action are deeply intertwined with each other:
*cloud computing
*Internet of Things
*robotics
*artificial intelligence
*big data/data analytics
*telemedicine
*social graphs
*GPS/GIS
Let me be clear: I am in no way suggesting these are all the same. Rather, as Erik Brynjolfsson and Andrew McAfee suggest in their book The Second Machine Age (just released), recombinations of a small number of core computational technologies can yield effectively infinite variations. Instagram, which they use as an example, combined social networking, broadband wireless, tagging, photo manipulation software, and smartphone cameras, none of which were new, into an innovative and successful service.
The same goes for the Internet of Things: sensors (in and outside of wireless devices) generate large volumes of data, stored in cloud architectures and processed using AI and possibly crowdsourcing, and the results in turn control remote robotic actuators. What does one _call_ such a complex system? I posit that one of the biggest obstacles to be overcome will be not the difficult matters of bandwidth, or privacy, or algorithmic elegance, but the cognitive limitations of existing labels and categories, whether for funding, regulation, or invention. One great thing about the name “smartphones”: designers, marketers, regulators, and customers let go of the preconceptions of the word “computer,” even though 95% of what most of us do with them is computational rather than telephonic.
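To make the shape of such a system concrete, here is a deliberately simplified Python sketch of the sense/store/analyze/actuate loop described above; every function and device name is a placeholder invented for illustration, not any particular vendor’s API.

    # Toy sketch of the loop described above: sense -> store -> analyze -> actuate.
    # All names are illustrative placeholders; no real platform's API is implied.

    def read_sensor(device_id: str) -> float:
        """Stand-in for polling a physical sensor (thermostat, vibration probe, etc.)."""
        return 27.5  # pretend temperature reading, in degrees C

    def store_in_cloud(device_id: str, value: float, log: list) -> None:
        """Stand-in for writing the reading to a cloud time-series store."""
        log.append((device_id, value))

    def analyze(log: list) -> bool:
        """Stand-in for the analytics/AI step: decide whether to intervene."""
        latest_value = log[-1][1]
        return latest_value > 25.0  # trivial threshold in place of a trained model

    def actuate(device_id: str, should_act: bool) -> None:
        """Stand-in for commanding a remote actuator (fan, valve, robot arm)."""
        print(f"{device_id}: {'cooling ON' if should_act else 'no action'}")

    cloud_log: list = []
    reading = read_sensor("thermostat-1")
    store_in_cloud("thermostat-1", reading, cloud_log)
    actuate("thermostat-1", analyze(cloud_log))

The naming difficulty is visible even in this toy: the loop crosses cloud computing, the Internet of Things, analytics, and robotics without belonging cleanly to any one of them.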
I read the other day that women now outnumber men in intro computer science courses at UC-Berkeley; developments like that give me confidence that we will meet these challenges with more diverse perspectives and approaches. The sooner we can free up imagination and stop being confined by rigid, archaic, or obtuse definitions of what these tools can do (and how they do it), the faster we can solve real problems rather than worry about which journal to publish in, how a technology should be taxed and regulated, or which market vertical a company should be ranked among. I think this is telling: among the early leaders in this space are a computer science-driven media company with essentially no competitors, a computer science-driven retailer that's not really a retailer and competes with seemingly everybody, and a "company" wrapped around a solitary mathematical genius. There's a lesson there somewhere about the kind of freedom from labels it will take to compete on this new frontier.
Wednesday, January 29, 2014
Early Indications January 2014: The Incumbent’s Challenge (with apologies to Clayton Christensen)
I.
For all the attention paid to the secrets-of-success business book genre (see last October’s newsletter), very few U.S. companies that win one round of competition go on to dominate a second. Whether or not they are built to last, big winners rarely repeat:
-Sports Illustrated did not found ESPN.
-Coke did not invent, or dominate, energy
drinks, or bottled water for that matter.
-IBM has yet to rule any of the many markets it competes in the way it dominated mainframes. For many years, the U.S. government considered breaking IBM into smaller businesses, so substantial was its market power. Yet in 1993 the company nearly failed, posting what was then the largest loss ($8 billion) in U.S. corporate history.
-Microsoft did not dominate search, or social software, or mobile computing in the decade after the U.S. Department of Justice won its antitrust case seeking Microsoft’s breakup. (The trial court’s breakup order was reversed on appeal, and President Bush’s Justice Department then settled the case.)
-The Pennsylvania Railroad became irrelevant shortly after reaching its peak passenger load during World War II: the 6th largest company in the nation ended in what was then the largest bankruptcy in U.S. history.
-Neither Macy’s nor Sears is faring very well against the Wal-Mart/Target axis.
-It’s hard to remember when Digital Equipment
Corporation employed 140,000 people and sold more minicomputers (mostly VAXes)
than anyone else. Innovation was not the problem: at the very end of its
commercial life DEC had built the fastest processor in its market (the Alpha),
an early and credible search engine (AltaVista), and one of the first webpages
in commercial history.
-After 45 years, Ford, GM, and Chrysler have
yet to make a small-to-medium car as successful as the Japanese. Some efforts —
notably the Pinto — are still laughingstocks.
2013 automobile estimated sales by model (excluding pickup trucks and
SUVs), rounded to nearest thousand (source:
www.motorintelligence.com via www.wsj.com)
Toyota Camry 408,000
Honda Accord 366,000
Honda Civic 336,000
Nissan Altima 321,000
Toyota Corolla 302,000
Ford Fusion 295,000
Chevrolet Cruze 248,000
Hyundai Elantra 248,000
Chevrolet Equinox 238,000
Ford Focus 234,000
Toyota Prius 234,000
II.
There are many reasons for this state of
affairs, some enumerated in The Innovator’s Dilemma: managers in charge of
current market-leading products get to direct R&D and ad spend, so funding
is generally not channeled in the direction of new products. In the tech sector
particularly, winning the next market often means switching business models:
think how differently Microsoft circa 1996, Google in 2005, Apple as of 2010,
and Facebook today generate revenues. Finally, many if not all of the lessons learned in winning one round of competition are not useful going forward, and often cloud the judgment of those trying to understand emerging regimes.
Unlike automobiles or soft drinks, moreover,
tech markets tend toward extreme oligopoly (SAP and Oracle; Dell and HP on the
desktop; iOS and Android) or monopoly (Microsoft, Intel, Qualcomm, Google,
Facebook). Thus the stakes are higher than in more competitive markets where
45% share can be a position of strength.
All of this setup brings us to a
straightforward question for January 2014: how will Google handle its new
acquisitions? The search giant has been acquiring an impressive stable of both
visible and stealth-mode companies in the fields of robotics (Boston Dynamics),
home automation (Nest), and artificial intelligence (DeepMind). When I saw the
Boston Dynamics news, I thought immediately of the scenario if Microsoft had
bought Google in 1998 rather than the companies it actually did target: WebTV
($425 million in 1997), Hotmail ($500 million in 1997), or Visio ($1.375
billion in 2000). That is, what if the leader in desktop computing had acquired
the “next Microsoft” in its infancy? Given corporate politics in general and
not any special Microsoft gift for killing good ideas, it’s impossible to
believe ad-powered search would have become its own industry.
Google’s acquisition track record outside advertising is not encouraging
(DoubleClick, squarely an advertising deal, being the bright exception):
GrandCentral became Google Voice and was orphaned. Dodgeball was orphaned, so
its founders started over and maintained the playground naming scheme with Foursquare.
Pyra (Blogger), Keyhole (Google Earth), and Picasa (photo sharing) all remain
visible, but none has busted out into prominence. YouTube is plenty prominent,
but doesn’t generate much apparent revenue.
Let’s assume for the moment that the Internet of Things and robotics
will be foundations in the next generation of computing. Let’s further assume
that Google has acquired sufficient talent in these domains to be a credible
competitor. There’s one question outstanding: does Google follow its ancient history
(in core search), or its post-IPO history?
That is, will great technologies be allowed to mature without viable
business models, as was the case with search before ad placement was hit upon
as the revenue generator? Or will the current owners of the revenue stream —
the ad folks — attempt to turn the Nest, the self-driving car, the Android
platform writ large, and people’s wide variety of Google-owned data streams
(including Waze, Google Wallet, and the coercive Google+) into ad-related
revenue through more precise targeting?
Just as Microsoft was immersed in the desktop metaphor at the time
it didn’t buy Google and thus could not have foreseen the rise of ad-supported
software, will Google now be blinded to new revenue models for social
computing, for wayfinding, for home automation, and for both humanoid and
vehicular robotics? Is Google, as one Twitter post wondered, now a “machine
learning” company? Is it, as a former student opined, a de facto ETF for
emerging technologies? Most critically, can Google win in a new market, one
removed from its glory days in desktop search? Google missed social networking,
as Orkut can testify, and CEO Larry Page sounds determined not to make that
mistake again. It will bear watching to see if this company, nearly alone in
business history, can capture lightning in a bottle more than once to the point
where it can dominate two distinct chapters in the history of information
technology.