Wednesday, April 29, 2015

Early Indications April 2015 Review Essay: Being Mortal by Atul Gawande

Let me begin by dispensing with any pretext of objectivity: I think Atul Gawande, a surgeon at a Harvard teaching hospital who writes for The New Yorker, is a national treasure. Complications may be the best first book of our generation; Better is brilliant. We have personal parallels: both of us grew up in the Midwest and each named a son for the greatest physician-novelist of the 20th century. He teaches and practices at the hospital where my twins were born back in my Boston days.

Being Mortal is a sobering book. I had to read it in small doses, in part to savor its richness but in larger measure to cope with the existential finality it addresses so beautifully and concretely. To the Amazon reviewers complaining that it’s based on anecdotes, let me say simply: they’re not anecdotes, they’re parables. There’s a difference. Those parables made me face my own life’s end in ways nothing else ever has.

Given that the scope of the book is broad and nuanced, I have nothing to gain by attempting to summarize it. Instead, I want to look closely at one piece of his wisdom, that regarding the Hard Conversations. Physicians aren’t trained, he states, to guide patients into death; dying is taken not as natural but as a failure. Given both a cultural reluctance to see death as part of life and the readily litigious context of modern U.S. medicine, doctors tend to reach deep into the armamentarium of ventilators, central lines, kilobuck antibiotics, dialysis, and other tools near the end of life. Thus the family often can say “the doctor did everything she could,” rather than “Dad went out peacefully, surrounded by his loved ones.”

Gawande gives a great example of the alternative by recounting the story of his father’s end of life passage. Based on a conversation with a bioethicist who had just watched her own father die, Gawande asks his father frank questions about tradeoffs, about limits, about fears. One person might want to get to a family milestone (a grandchild’s wedding, say) and will tolerate high levels of pain in that pursuit; another can bear roaring tinnitus or deafness but is terrified of the implications of an ostomy bag; a third wants to be remembered as cogent rather than as a narcotized, slurring shell of her former self.

The point here is an important one: medical technology has cured old ways of dying but has relocated more deaths to high-tech hospital settings. Hospitals employ doctors and technicians who are more expert in life-extending treatments than in guiding the hard conversations. Duration becomes the yardstick by default; quality, a different way to judge outcomes, takes time and skill to assess. In one case, Gawande pins down one of his patients’ oncologists, who admits that the best-case scenario after a brutal chemotherapy regimen is measured in months: the same prospect as with palliative care, and not the years the family and patient had been hopefully assuming. The path toward one’s demise is too often governed by what drugs and machines can do rather than by what the patient and the family want.

This paradox reminds me of another Boston conversation, this one originating at MIT rather than at Harvard. The psychologist Sherry Turkle’s most recent book, Alone Together, asserts that modern communications technologies have done their job too well: millennials, and many of their elders, have come to expect human gratification from a tweet, a like, a text, often more than from real people in real proximity. The absence of these digital stimuli — quiet — is painful and to be avoided, she finds; people have lost the ability to be alone with their thoughts. Further, Facebook profiles, Twitter feeds, Pinterest boards, Instagram portfolios, and the other billboards we erect are carefully curated, to use the modern term of art. Thus we can control the self the world sees and interacts with, making the comparatively naked conventional social self more vulnerable and less practiced in the “messy bits” of human interaction, as she calls them.

In both of these scenarios, modern technologies — ventilators and pharmaceuticals in the former case, smartphones in the latter — have become so powerful that they, rather than their users, shape the tenor and often the content of the debate: rather than ask “what do we want?” and use the technologies to get there, we take the limits of the technology as our boundaries and push up against those instead. In each case, the problem is that modern medicines, computing, and sensors exceed human scale: no human can last long on incredibly potent modern chemotherapy poisons, nor can a person be “friends” with 5,000 people 24 hours a day.

What then are the resources for the conversations we should be having? The professor in me wants to say, “the great intellectual traditions.” Indeed, Gawande cites Tolstoy on p. 1 and Plato much later. The problem is that in the U.S. and elsewhere, college as a time for introducing and possibly pondering the big questions is out of fashion right now. In public universities especially, other agendas are in play.

In Florida, Governor Rick Scott tried to make tuition for literature, history, and philosophy majors more expensive than for engineering or biotechnology, notwithstanding the cost differences in the respective professoriates and infrastructure. Florida is not alone: here at Penn State, a committee was charged with updating the general education curriculum (the courses meant to carry the essential ideas everyone should encounter, regardless of major), and the task is turning out to be more difficult than expected; the deadline has been extended since the idea was proposed five years ago. To assess whether a Penn State education prepares people to ask “what is a good society?” or “what is a good way to live one’s life?” you can see the committee’s report here. The principles guiding the effort have evolved and can be seen here.

Though I doubt he realizes it, Gov. Scott embodies the paradox. America’s society and economy value the contributions of engineers and programmers more than those of marketing assistants, retail managers, schoolteachers, or social service providers — the landing spots for humanities and social science undergrads. Those makers of machines and software have done amazing things, but the state of the academic humanities does little to inspire confidence that college courses in English or philosophy will teach young adults how to form healthy personal selves and relationships in a digital social context (can Aristotle help cure Facebook envy?), and to help their elders die well. Like it or not, Gawande’s Tolstoy is more than ever an intellectual luxury good rather than the staple of a balanced diet. Thus college and the wisdom of the past are less of a resource than some of us might hope.

What Gawande calls on us to do — beginning with doctors and patients, then patients and families — is the hardest thing: to listen, including to our deepest selves, and to talk honestly. What do we value? What makes us frightened? How do we reconcile ourselves to family differences and breakages in our final days? To watch mom or granddad die, and to help listen to what they really want, is both terrifically hard and a great gift. That Gawande has jump-started that conversation not for a handful but for thousands of people makes him the closest thing to a secular saint I have ever witnessed.

Tuesday, March 31, 2015

Early Indications March 2015

After a busy few months away, the newsletter returns with a collection of news and notes.

1) My long-ish blog post on Uber, Airbnb, and regulation as competitive barrier to entry was just posted today on The Conversation, a foundation-funded collection of various informed points of view.

2) I am delighted to announce that MIT Press has me under contract to deliver a book manuscript on robots and robotics for 2016 publication.

3) The reaction against Indiana governor Mike Pence's signing of the "religious freedom" legislation has been fascinating to watch, in part because I grew up in the state. One analysis suggested that the polarization of media has led to "echo chambers" on both left and right: if you listen only to the cheerleaders for your side, the reaction of what used to be called "the silent majority" can be a blindside smackdown. Pence's complete lack of articulate answers to the broader media (most visibly George Stephanopoulos) suggests he may have little idea of how non-social-conservatives outside Indiana see the world.

Lest this viewpoint appear partisan, Hillary Clinton's stonewalling of the archival process suggests a similar blind spot. One difference is that she is much more experienced in handling the media than Pence; another is that the integrity of the historical record matters less to most people than the prospect of Aunt Peg and her partner Allison getting turned away from a hotel. Gay and lesbian rights have become personalized in a way that records retention has not: an overwhelming majority of Americans know a gay or lesbian family member or colleague. Very few of us can even name an archivist or historian.

Once that personal association sensitizes people to an issue, social media provides a ready environment for expressed outrage. The power of the hashtag allows individuals to feel like they're part of what Lawrence Goodwyn referred to as "movement culture" when he discussed the civil rights protests of the 1960s. There, the options were to be on site or watch on TV; now, one can be physically remote from the protest yet feel active solidarity. The tidal wave of #boycottIndiana could not have happened in a TV-driven media environment, and I'm sure both parties' 2016 presidential nominees will remember the episode.

4) Amazon remains relentless in its pace of innovation. The drone delivery system, effectively grounded by current FAA rules, is being tested in Canada (a country where Target couldn't operate profitably). The Echo AI appliance is shipping and changes household behavior in ways I will examine in a forthcoming letter. Today, Amazon announced "impulse buy" devices called Dash buttons that you can affix to the storage spaces for household items like bleach or paper towels. The buttons are another small but discernible step in the march toward the "smart" house. Yesterday Amazon launched a listing of vetted home services providers ("from plumbers to herders" in the words of one headline), once again connecting the physical and virtual worlds unlike any other company.

At the same time, Amazon quietly stopped its mobile wallet efforts, which unlike the AT&T/Verizon effort had the benefit of not sharing a name with a Middle Eastern terror group. The rapid warehouse buildout appears to be continuing, as does the slow rollout of home grocery delivery. In short, the company that has consistently zigged while others zagged (or stood still) appears to be moving full speed ahead, launching new initiatives that challenge conventional wisdom in field after field.

Thursday, October 30, 2014

Early Indications October 2014: Who’s Watching?

I didn’t really go looking for this particular constellation of ideas, but several good pieces got me connecting the dots, and this month’s letter represents an effort to spell things out with regard to surveillance.

1) The Economist published one of its special reports on online advertising on September 13. Entitled “Little Brother,” the report argues that mobile devices combined with social networks are providing advertisers — and, more important, a complex ecosystem of trackers, brokers, and aggregators illustrated in Luma Partners’ now-famous eye-chart slides — with unprecedented targeting information. One prominently quoted survey asserts that marketers have seen more change in the past two years than in the previous 50. Among the biggest of these shifts: programmatic ad buying now works much like algorithmic trading on Wall Street, with automated ad bidding and fulfillment occurring in the 150 milliseconds between website arrival and page load on the consumer device.
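
For a concrete picture of what happens inside that 150-millisecond window, here is a minimal sketch of a programmatic auction in Python. Everything in it is hypothetical: the bidder names, the prices, and the 100 ms bid deadline are invented, and real exchanges run standardized bid-request protocols on dedicated infrastructure. The point is only the shape of the process: fan out a bid request, discard responses that miss the deadline, and clear the auction at the second-highest price.

import concurrent.futures
import random
import time

# Hypothetical demand-side bidders; in reality these are remote services
# reached over the network, each running its own targeting models.
def make_bidder(name, base_bid):
    def bid(request):
        time.sleep(random.uniform(0.01, 0.15))  # simulated network + model latency
        # Bid more aggressively when the user segment matches this buyer's target.
        price = base_bid * (2.0 if request["segment"] == "in-market-auto" else 1.0)
        return name, round(price, 2)
    return bid

def run_auction(request, bidders, deadline_s=0.10):
    """Fan out the bid request, keep only responses that beat the deadline,
    and charge the winner the second-highest price (a simplified clearing rule)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(bidders)) as pool:
        futures = [pool.submit(b, request) for b in bidders]
        done, _ = concurrent.futures.wait(futures, timeout=deadline_s)
        bids = [f.result() for f in done]
        # Note: the pool still waits for stragglers on exit; acceptable for a sketch.
    if not bids:
        return None
    bids.sort(key=lambda b: b[1], reverse=True)
    winner, top = bids[0]
    clearing_price = bids[1][1] if len(bids) > 1 else top
    return {"winner": winner, "pays": clearing_price, "late_bids": len(bidders) - len(bids)}

if __name__ == "__main__":
    bidders = [make_bidder("dsp_a", 1.10), make_bidder("dsp_b", 0.85), make_bidder("dsp_c", 1.40)]
    request = {"user_id": "abc123", "segment": "in-market-auto", "url": "example.com/article"}
    print(run_auction(request, bidders))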

[As I type this, Facebook announced that the firm made $3.2 billion in one quarter, mostly from ads, nearly $2 billion of it from mobile.]

Given that surveillance pays dividends in the form of more precise targeting — one broker sells a segment called “burdened by debt: small-town singles” — it is no surprise that literally hundreds of companies are harvesting user information to fuel the bidding process: online ad inventory is effectively infinite, so user information is the scarce, and therefore valuable, commodity. This marks a radical reversal from the days of broadcast media, when audience aggregators such as NBC or the New York Times sold ad availability that was constrained by time or space. Thus the scarcity has shifted from publishers to ad brokers who possess the targeting information gleaned from Facebook, GPS, Twitter, Google searches, etc. Oh, and anyone who does even rudimentary research on the supposedly “anonymous” nature of this data knows it isn’t, really: Ed Felten, a respected computer scientist at Princeton, and others have repeatedly shown how easy de-anonymization is. (Here’s one widely cited example.)
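
A toy illustration of why “anonymous” rarely means anonymous: the records below carry no names, yet a handful of quasi-identifiers (ZIP code, birth date, sex) is often enough to link them to a public roster, the classic linkage attack demonstrated on real datasets by researchers such as Latanya Sweeney. The data here is invented and the code is only a sketch of the idea.

# A toy linkage attack; all records below are invented.
anonymized_health_records = [
    {"zip": "16801", "birth_date": "1978-03-14", "sex": "F", "diagnosis": "asthma"},
    {"zip": "16802", "birth_date": "1990-11-02", "sex": "M", "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "Jane Roe",  "zip": "16801", "birth_date": "1978-03-14", "sex": "F"},
    {"name": "John Doe",  "zip": "16802", "birth_date": "1990-11-02", "sex": "M"},
    {"name": "Ann Smith", "zip": "16803", "birth_date": "1985-07-21", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(anon_rows, public_rows):
    """Join 'anonymous' rows to named rows on shared quasi-identifiers."""
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in public_rows}
    return [
        dict(name=index.get(tuple(a[k] for k in QUASI_IDENTIFIERS), "unmatched"), **a)
        for a in anon_rows
    ]

for row in reidentify(anonymized_health_records, public_voter_roll):
    print(row)  # each "anonymous" record now carries a name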

2) In another sign that surveillance is a very big deal, not only for advertising, the always-astute security guru Bruce Schneier announced that his next book Data and Goliath, due out in March, addresses this issue.

3) Robots, which for our purposes can be defined as sensor platforms, are getting better — fast — and Google has acquired expertise in several forms of the discipline:

-the self-driving car (which has severe real-world limitations)

-Internet of Things (Nest and Waze)

-autonomous military and rescue robots (Boston Dynamics and Schaft).

4) A September 28 post by Steve Cheney raised the prospect of Google moving some or all of the aforementioned robot platforms onto some version of Android. While he predicted that “everything around you will feel like an app,” I’m more concerned that every interaction with any computing-driven platform will be a form of surveillance. From garage-door openers and thermostats to watches to tablets to “robots” (like the one Lowe’s is prototyping for store assistance) to cars, the prospect of a Google-powered panopticon feels plausible. (I looked for any mention of robotics in the Google annual report, but all the major acquisitions were made in this fiscal year, so next year's 10-K will bear watching on this topic.)

5) Hence Apple’s recent positioning makes competitive sense. When Tim Cook said, “A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product,” he was ahead of the curve, I believe: according to the Economist report, only .00015% of people use those little triangle things to opt out of online ad tracking. In Cook’s and Apple’s narrative, premium prices implicitly become more reasonable to those who value privacy insofar as there is no “audience commodity” as at eBay, Amazon, Google, Twitter, or Facebook.

6) One other thing to consider here is how that information is being processed at unprecedented scale. When The Economist likens ad buying to algorithmic trading, we enter the world of artificial intelligence, something Google counts as a core competency, with 391 papers published, not to mention untold portions of secret sauce.

Some very smart people are urging caution here. Elon Musk was at MIT for a fascinating (if you’re a nerd) discussion of rockets, Tesla, the hyperloop, and space exploration. For someone serious about a Mars base to warn against opening an AI Pandora’s box was quite revealing:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

(The complete MIT talk is here)

Musk is not alone. The University of Oxford’s Nick Bostrom recently wrote Superintelligence (maybe best thought of as an alternative excursion into Kurzweil-land), a book that quite evidently is grappling with the unknown unknowns we are bumping up against. He knows of what he speaks, but the book is, by his own admission, a frustrating read: no generation of Earth’s population has ever had to ask these questions before. The book’s incompleteness and tentativeness, while making for a suboptimal read, are at the same time reassuring: someone both informed and working in a broad context is asking the questions many of us want on the table but lack the ability, vocabulary, and credibility to raise ourselves.
********

In a nutshell, there it is: mobile devices and social networks generate data points that supercomputing and sophisticated analytical tools turn into ad (or terrorist, or tax-cheat) profiling data. Computing liberated from desktop boxes and data centers moves in, and acts on, the physical world, extending surveillance further. Apple positions itself as a self-contained entity selling consumers stuff they pay for, not selling eyeballs/purchase histories/log-in fields/expressed and implied preferences to advertisers. In sharp contrast, Google has repeatedly shown — with Streetview, wi-fi access point mapping, Buzz, and Google+ — a desire to collect more information about individuals than many of those individuals would voluntarily reveal. With AI in the picture, the prospect of surveillance producing some very scary scenario — it may not accurately be called a breach, just as the flash crash wasn’t illegal — grows far more likely. Human safeguards didn’t work at the NSA; why should they work in less secure organizations? Like Bostrom, I have no ready answers other than to lead a relatively careful digital existence and hope that the wisdom of caution and respect for privacy will edge out the commercial pressures in the opposite direction.

Next month: unexpected consequences of a surveillance state.

Wednesday, October 01, 2014

Early Indications September 2014: Alternatives to Industry

In classic business school strategy formulation, a company’s industry is taken as the determining factor in cost structures, capital utilization, and other constraints to the pursuit of market success. Nowhere is this view more clearly visible than in Michael Porter’s seminal book Competitive Strategy, in which the word “industry” appears regularly.

I have long contended that Internet companies break this formulation. A series of blog posts (especially this one) in the past few weeks has crystallized this idea for me. The different paths pursued by Apple, Amazon, and Google — very different companies when viewed through the lens of industries — lead me to join those who contend that despite their different microeconomic categories, these three companies are in fact leading competitors in important ways, though not of the Coke/Pepsi variety.

Let us consider the traditional labels first. Amazon is nominally a retailer, selling people (and businesses) physical items that it distributes with great precision from its global network of warehouses. Its margins are thin, in part because of the company’s aggressive focus on delivering value to the customer, often at the cost of profitability at both Amazon itself and its suppliers.

Apple designs, supervises the manufacture of, and distributes digital hardware. Its profit margins are much higher than Amazon’s, in large part because its emphasis on design and usability allows it to command premium prices. Despite these margins and a powerful brand, investors value the company much less aggressively than they do Amazon.

Google, finally, collects vast sums of data and provides navigation in the digital age: search, maps, email. Algorithms manage everything from web search to data-center power management to geographic way-finding. In the core search business, profit margins are high because of the company’s high degree of automation (self-service ad sales) and the wide moats the company has built around its advertising delivery.

Thus in traditional terms, we have a mega-retailer, a computer hardware company, and a media concern.

When one digs beneath the surface, the picture morphs rather dramatically. Through a different lens, the three companies overlap to a remarkable degree — just not in ways that conform to industry definitions.

All three companies run enormous cloud data center networks, albeit monetized in slightly different ways. Apple and Amazon stream media; Google and Amazon sell enterprise cloud services; Apple and Google power mobile ecosystems with e-mail, maps, and related services. All three companies strive to deepen consumer connections through physical devices.

Apple runs an industry-leading retail operation built on prime real estate at the same time that Amazon is reinventing the supply-chain rule book for its fulfillment and now sortation centers. (For more on that, see this fascinating analysis by ChannelAdvisor of the Amazon network; in many cases, FCs are geographically clustered rather than spread more predictably across the landscape.) Both of these retail models are hurting traditional mall and big-box chains.

At the most abstract but common level, all three companies are spending billions of dollars to connect computing to the physical world, to make reality a peripheral of algorithms, if you will. Amazon’s purchase of Kiva and its FC strategy both express an insistent push to connect a web browser or mobile device to a purchase, fulfilled in shorter and shorter time lags with more and more math governing the process. In the case of Kindle and streaming media, that latency is effectively zero, and the publishing industry is still in a profoundly confused and reactive state about the death of the physical book and its business model. The Fire phone fits directly into this model of making the connection between an information company and its human purchasers of stuff ever more seamless, but its weak market traction is hardly a surprise, given the strength of the incumbents -- not coincidentally, the other two tech titans.

Apple connects people to the world via the computer in their pocket. Because we no longer have the Moore’s law/Intel scorecard to track computing capacity, it’s easy to lose sight of just how powerful a smartphone or tablet is: Apple's A8 chip in the new iPhone contains 2 Billion (with a B) transistors, equivalent to the PC state of the art in 2010. In addition, the complexity of the sensor suite in a smartphone — accelerometers, microphone, compasses, multiple cameras, multiple antennae — is a sharp departure from a desktop computer, no matter how powerful, that generally had little idea of where it was or what its operator was doing. And for all the emphasis on hardware, Nokia’s rapid fall illustrates the power of effective software in not just serving but involving the human in the experience of computing.

Google obviously has a deep capability in wi-fi and GPS geolocation, for purposes of deeper knowledge of user behavior. The company’s recent big-bet investments — the Nest thermostat, DARPA robots, Waze, and the self-driving car team — further underline the urgency of integrating the world of physical computing on the Android platform(s) as a conduit for ever deeper knowledge of user behavior, social networks, and probably sentiment, all preconditions to more precise ad targeting.

Because these overlaps fail to fit industry definitions, metrics such as market share or profit margin are of limited utility: the three companies recruit, make money, and innovate in profoundly different ways. Amazon consistently keeps operating information quiet (nobody outside the company knows how many Kindle devices have been sold, for example), so revenue from the cloud operation is a mystery; Google’s finances are also somewhat difficult to parse, and the economics of Android for the company were never really explicated, much less reported. Apple probably provides the most transparency of the three, but that’s not saying a lot, as the highly hypothetical discussion of the company’s massive cash position would suggest.

From a business school or investor perspective, the fact of quasi-competition despite the lack of industry similitude suggests that we are seeing a new phase of strategic analysis and execution, both enabled and complicated by our position with regard to Moore’s law, wireless bandwidth, consumer spending, and information economics. The fact that both Microsoft and Intel are largely irrelevant to this conversation (for the moment anyway) suggests several potential readings: that success is fleeting, that the PC paradigm limited both companies’ leaders from seeing a radically different set of business models, that fashion and habit matter more than licenses and seats, that software has changed from the days of the OSI layer cake.

In any event, the preconditions for an entirely new set of innovations — whether wearable, embedded/machine, algorithmic, entertainment, and/or health-related — are falling into place, suggesting that the next 5-10 years could be even more foreign to established managerial teaching and metrics. Add the external shocks — and shocks don’t get much more bizarre than Ebola and media-savvy beheadings — and it’s clear that the path forward will be completely fascinating and occasionally terrifying to traverse. More than inspiration or insight from our business leaders, we will likely need courage.

Tuesday, July 29, 2014

Early Indications July 2014: Betting the Business


I read with great interest the recent Fortune article on the new Ford F-150 pickup. This is the best-selling vehicle in the U.S. (but sold in few other markets), and contributes mightily to the company’s profitability: it’s straightforward to manufacture, long production runs drive purchasing economies and assembly line efficiency, and option packages contribute to healthy — 40% — gross margins. In short, the light truck franchise at Ford is the one essential business that management has to get right: small cars aren’t that popular or profitable, large SUVs are out of fashion, overall car demand is flat for demographic and other reasons, and financing isn’t the profit center it once was given low interest rates.

A new model of the truck comes out this fall. Ford is reinventing the pickup by making it mostly out of aluminum rather than steel. The weight savings (700 lb was the target) will help the automaker reach government-mandated fuel economy targets, but there are significant risks all across the landscape:

*Body shops need new, specialized equipment to address aluminum body damage. Ford had to create a nationwide network of authorized service centers, especially given how many trucks are sold in rural areas to miners, ranchers, and farmers. If owners have trouble getting repairs done, negative publicity will travel extra fast over social media.

*The aluminum supply as of 2010 was not sufficient for the need: 350,000 half-ton pickups in 2012 would be an awful lot of beer cans (see the rough estimate just after this list). Ford has to invent a whole new supply base and watch how one big buyer will move the entire commodity market. (I’ve heard this is why Subway doesn’t offer roasted red pepper strips: they’d need too many.)

*Manufacturing processes are being revised: aluminum can’t be welded the way steel can, so bonding and riveting require new engineering, new skills, new materials, and new assembly sequences.
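
To put the beer-can comparison in rough numbers, here is a back-of-envelope sketch in Python. The 700 lb figure is Ford’s stated weight-reduction target, used only as a conservative proxy for each truck’s aluminum content, and the ~15 g mass of an empty aluminum can is an approximation; the totals are illustrative, not precise.

# Back-of-envelope: how many beverage cans' worth of aluminum is one truck?
LB_TO_KG = 0.4536

aluminum_per_truck_kg = 700 * LB_TO_KG   # Ford's weight-savings target, a conservative proxy
can_mass_kg = 0.015                      # an empty aluminum can weighs roughly 15 g (approximate)
trucks = 350_000                         # the figure cited above

cans_per_truck = aluminum_per_truck_kg / can_mass_kg
print(f"~{cans_per_truck:,.0f} cans per truck")                                    # about 21,000
print(f"~{cans_per_truck * trucks / 1e9:.1f} billion cans for {trucks:,} trucks")  # about 7.4 billion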

In short, this is a really big gamble. Ford is messing with the formula that has generated decades of segment leadership, corporate profitability, and brand loyalty. Competitors are circling: Chevy would love to have the Silverado win the category sales crown this year, especially given GM’s horrific year of bad publicity, and Chrysler’s Ram division was renamed solely because of its pickups’ brand equity.

It’s rare that a company takes a position of market leadership and invests in a significantly different platform, risking competitive position, profitability, and customer loyalty. Between us, my former boss John Parkinson and I could only come up with a handful: these kinds of moments seem to happen only about once a decade (unless readers remind me of examples I missed).

Six US business decisions got our attention:

1) Boeing bet on passenger jet airplanes in the 1950s, launching the 707 in 1958. It was not the first such aircraft: the British De Havilland Comet won that honor, but suffered catastrophic structural failures caused by metal fatigue around its window openings. Jets delivered greater power for their size, had fewer moving parts, and burned cheaper fuel. Planes could carry more passengers, fly farther and faster, and required fewer maintenance visits.

2) IBM completely departed from industry practice by announcing the System/360 in 1964. It was a family of highly compatible mainframes, allowing customers to grow in capability without having to learn a new operating system or rebuild applications: previously, customers could buy a small computer that might soon box them in, or overspend on something too big for their needs in the hope of growing into it. Fred Brooks, who managed software development, learned from System/360 about the paradoxes of programming and later wrote the classic Mythical Man-Month, with its still-true insight: adding programmers to a late software project will make it later. Brooks’ team delivered, and S/360 helped IBM dominate the computer market for the next 15 years.
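
Brooks’ explanation for that insight is partly arithmetic: pairwise communication channels in a team grow as n(n-1)/2, so adding people to a late project multiplies coordination overhead faster than it adds productive capacity. A quick illustration (the team sizes are arbitrary):

def communication_channels(n):
    """Number of pairwise communication paths in a team of n people."""
    return n * (n - 1) // 2

for team_size in (5, 10, 20, 40):
    print(team_size, "people ->", communication_channels(team_size), "channels")
# Output: 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780. Doubling a team roughly
# quadruples the coordination burden, which is the heart of Brooks' point.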

3) Few people remember that Intel has not always been synonymous with microprocessors. Until the early 1980s, the company’s focus was on memory devices. Simultaneously undercut on price by Japanese competition and alert to the rapid growth of the PC segment, Intel under Andrew Grove switched its focus to the far more technically demanding microprocessor market in 1983, followed by the famous “Intel Inside” branding campaign in 1991: it was unheard-of for a B2B supplier to build a consumer brand position. Intel stock in this period enjoyed an enviable run-up, but the success was not preordained.

4) It wasn’t a completely “bet-the-company” decision, but Walter Wriston at Citibank wagered heavily on automatic teller machines in the 1980s, which not only cost a significant amount to develop and install but also prompted criticism over the lack of a personal touch in client service. The decision of course paid off handsomely and revolutionized the finance industry.

5) It wasn’t a bet-the-company decision, as its failure makes clear, but Coke guessed wildly wrong on the flavor of “New Coke” in 1985 yet was able to recover.

6) Verizon made a significant investment in its residential fiber-optic buildout, but the rapid growth in the wireless business eventually eclipsed wireline in general, reducing both the risk and the impact of the original decision in 2005.

What am I missing? In what other situations have CEOs taken a flagship market offering and significantly revamped it, endangering market share, brand equity, and profitability to the extent Ford has, when the entire company’s future rides heavily on this product launch?

Friday, May 30, 2014

Early Indications May 2014: When computing leaves the box

Words can tell us a lot. In particular, when a new innovation emerges, the history of its naming shows how it goes from foreign entity to novelty to invisible ubiquity. A little more than 100 years ago, automobiles were called “horseless carriages,” defined by what they were not rather than what they were. “Cars” were originally parts of trains, as in boxcars or tank cars, but the automobile is now top of mind for most people. More recently, the U.S. military refers to drones as UAVs: unmanned aerial vehicles, continuing the trend of definition by negation. Tablets, the newest form of computing tool, originally were made of clay, yet the name feels appropriate.

The naming issues associated with several emerging areas suggest that we are in the early stages of a significant shift in the landscape. I see four major manifestations of a larger, as yet unnamed trend that, for lack of better words, I am calling “computing outside the box.” This phrase refers to digital processes — formerly limited to punchcards, magnetic media, keyboards/mice, and display screens — that are now evolving into three-dimensional artifacts that interact with the physical world, both sensing it and acting upon it as a result of those digital processes. My current framing of a book project addresses these technologies:

-robotics

-3D printing/additive manufacturing

-the emerging network of sensors and actuators known as the Internet of Things (another limited name that is due for some improvement)

-the aforementioned autonomous vehicles, airborne, wheeled, and otherwise.

Even the word “computer” is of interest here: the first meaning, dating to 1613 and in use for nearly 350 years, referred to a person who calculates numbers. After roughly 50 years of computers being big machines that gradually shrank in size, we now are in a stage where the networked digital computers carried by hundreds of millions of people are no longer called computers, or conceived of as such.

Most centrally, the word “robot” originated in the 1920s and referred at first to a kind of artificial slave; even now, robots are often characterized by their capabilities in performing dull, dirty, or dangerous tasks, sparing a human from these efforts. Today, the word has been shaped in public imagination more by science fiction literature and cinema than by wide familiarity with actual artificial creatures. (See my TEDx talk on the topic.) Because the science and engineering of the field continue to evolve rapidly — look no further than this week’s announcement of a Google prototype self-driving car — computer scientists cannot come to anything resembling consensus: some argue that any device that can 1) sense its surroundings, 2) perform logical reasoning with various inputs, and 3) act upon the physical environment qualifies. Others insist that a robot must move in physical space (thus disqualifying the Nest thermostat), while others say that true robots are autonomous (excluding factory assembly tools).
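
That three-part test (sense, reason, act) is easiest to see as a control loop. The sketch below is purely illustrative: a hypothetical thermostat-like device written out as the canonical sense-plan-act cycle. Whether such a device “counts” as a robot is exactly the definitional question at issue.

import random
import time

# A hypothetical sense-reason-act loop: the minimal structure behind most
# working definitions of a robot, here controlling an imaginary heater.

def sense():
    """Read the environment (a simulated temperature sensor)."""
    return {"temperature_c": random.uniform(15.0, 25.0)}

def reason(observation, setpoint_c=20.0, band_c=0.5):
    """Decide what to do, given the observation and a goal."""
    temp = observation["temperature_c"]
    if temp < setpoint_c - band_c:
        return "heater_on"
    if temp > setpoint_c + band_c:
        return "heater_off"
    return "hold"

def act(command):
    """Change the physical world (here, just print the actuation)."""
    print(f"actuator -> {command}")

if __name__ == "__main__":
    for _ in range(5):              # a real device would loop indefinitely
        obs = sense()
        print(f"sensed {obs['temperature_c']:.1f} C,", end=" ")
        act(reason(obs))
        time.sleep(0.1)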

I recently came across a sensible, nuanced discussion of this issue by Bernard Roth, a longtime professor of mechanical engineering who was associated with the Stanford Artificial Intelligence Lab (SAIL) from its inception.

“I do not think a definition [of what is or is not a robot] will ever be universally agreed upon. . . . My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines. If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the devices get downgraded from ‘robot’ to ‘machine.’” *

However robots are defined, how is computing outside the box different from what we came to know of digital computing from 1950 until 2005 or so? Several factors come into play here.

1) The number of computational devices increases substantially. There were dozens of computers in the 1950s, thousands in the 1960s, millions in the 1980s, and so on. Now, networked sensors will soon number in the tens of billions. This increase in the scale of both the challenges (network engineering, data science, and energy management are being reinvented) and the opportunities requires breakthroughs in creativity: are fitness monitors, which typically get discarded after a few months except by the hardest-core trainers, really the best we can do for improving health outcomes?

2) With cameras and sensors everywhere — on phone poles, on people’s faces, in people’s pockets, in the ground (on water mains), and in the sky (drone photography is generating a rapidly evolving body of legal judgment and contestation) — the boundaries of security, privacy, and risk are all being reset. When robots enter combat, how and when will they be hacked? Who will program a self-driving suicide bomb?

3) Computer science, information theory, statistics, and physics (in terms of magnetic media) are all being stress-tested by the huge data volumes generated by an increasingly instrumented planet. A GE jet engine is reported to take off, on average, every two seconds, worldwide. Each engine generates a terabyte of data per flight. 10:1 compression takes this figure down to a mere 100 gigabytes per engine per flight. Dealing with information problems at this scale, in domain after domain (here’s an Economist piece on agriculture), raises grand-challenge-scale hurdles all over the landscape.
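
Taking those reported figures at face value, the arithmetic is sobering. The sketch below is a rough estimate only: it treats each takeoff as a single engine-flight, a simplification, and the takeoff rate and per-flight volume are the numbers cited above, not independently verified.

# Rough scale estimate from the figures cited above.
SECONDS_PER_DAY = 24 * 60 * 60

takeoffs_per_day = SECONDS_PER_DAY / 2   # one GE-powered takeoff roughly every two seconds
raw_tb_per_flight = 1.0                  # ~1 terabyte per engine per flight, as reported
compression_ratio = 10                   # the 10:1 compression cited above

raw_pb_per_day = takeoffs_per_day * raw_tb_per_flight / 1000
stored_pb_per_day = raw_pb_per_day / compression_ratio

print(f"{takeoffs_per_day:,.0f} flights per day")             # 43,200
print(f"~{raw_pb_per_day:,.0f} petabytes per day, raw")       # ~43 PB
print(f"~{stored_pb_per_day:,.1f} petabytes per day stored")  # ~4.3 PB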

4) The half-life of technical knowledge appears to be shrinking. Machine learning, materials science (we really don’t understand precisely how 3D printing works at the droplet level, apparently), machine vision in robots, and so on will evolve rapidly, making employment issues and career evolution a big deal. Robots obviously displace more and more manual laborers, but engineers, programmers, and scientists will also be hard-pressed to keep up with the state of these fields.

5) What are the rules of engagement with computing moving about in the wild? A woman wearing a Google Glass headset was assaulted in a bar because she violated social norms; self-driving cars don’t yet have clear liability laws; 3D printing of guns and of patented or copyrighted material has yet to be sorted out; nobody yet knows what happens when complete strangers can invoke facial recognition on the sidewalk; Google could see consumer (or EU) blowback when Nest sensor data drives ad targeting.

6) How will these technologies augment and amplify human capability? Whether in exoskeletons, care robots, telepresence, or prostheses (a field perfectly suited to 3D printing), the human condition will change in its shape, reach, and scope in the next 100 years.

To anticipate the book version of this argument, computing outside the box introduces a new layer of complexity into the fields of artificial intelligence, big data, and ultimately, human identity and agency. Not only does the long history of human efforts to create artificial life see a new chapter, but we are also creating artificial life in vast networks that will behave differently from a single creature: Frankenstein’s creature is a forerunner of Google’s Atlas robot, but I don’t know if we have as visible a precedent/metaphor for self-tuning sensor-nets, bionic humans, or distributed fabrication of precision parts and products outside factories.

That piece of the argument remains to be worked out more completely, but for now, I’m finding validation for the concept every day in both the daily news feed and in the lack of words to talk about what is really happening.

* Bernard Roth, foreword to Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Berlin: Springer-Verlag, 2008), p. viii.

Monday, March 31, 2014

Early Indications March 2014: TED at 30

On the occasion of its 30th birthday, TED is the subject of a number of critiques and analyses. It’s a tempting target: the brand is both powerful and global, and the sheer numbers are just plain big. 1,700 talks are said to be online, viewed more than a billion times. More than 9,000 mini-TEDs (TEDx events) have been held all over the world. And TED’s successful formula is prone to the perils of any formula: sameness, self-parody, insularity.

But to go so far as to say, as sociologist Benjamin Bratton does, that TED is a recipe for “civilizational disaster” is attention-getting hyperbole. Does Bratton not watch TV, a much more likely candidate for his accusation? (Also: he made the charge in a TED talk, of all places.) Other critiques hit the mark. There can be heavy doses of techno-utopianism, especially in a certain strand of the talks, which is hardly surprising given a heavy Silicon Valley bias among the advisory crew. Politics is often either a) ignored or b) addressed as a quasi-technical problem to be fixed. The stagecraft, coaching, and earnestness of the talks can lend an evangelical cast to the enterprise. Humanity is fallen, in this trope, from a state of “better” that can be reclaimed by more education, more technology, more self-actualization.

At the same time, that narrative contains more than a grain of realism. Civic debate works less wastefully when citizens have richer fact bases from which to reason, and Hans Rosling’s series of talks on international health and economics is an important contribution to that debate. (The same can be said for Ken Robinson and Sal Khan on education.) Medicine and technology can make some of us “better than well,” to quote Carl Elliott, or replace human capacity with machines. The field of prosthetics (not only limbs, but also exoskeletons and tools for cognitive abilities and other functions) is in a state of extreme dynamism right now, and 99% of us will never see the labs and rehab clinics where the revolution is gaining momentum. Finally, education is being enhanced and disrupted by digital media at a time when skills imbalances, economic inequality, and political corruption are crucial topics for much of the globe. The TED agenda includes many worthy elements.

Rather than go with the evangelical line of comparison (as illustrated at The Economist), I tend to look at TED in terms of its reach. Much like the Book of the Month Club that brought middlebrow literature to audiences far from metropolitan booksellers, TED serves as an introduction to ideas one would not encounter otherwise. The conferences and videos illustrate the power of “curation,” a buzzword that fits here, vis-à-vis mass populations utilizing search, popular-scientific journals, mass media, or classroom lectures. This curation, coupled with the huge scale of the freely distributed videos and the social networks that propel them, helps explain the TED phenomenon. And if it's “middlebrow,” I'm not sure that's such a bad thing: this isn't Babbittry, after all.

In TED-meister Chris Anderson’s own talk, he makes a compelling case for online video as a Gutenberg-scale revolution. In the beginning, says Anderson (the son of missionaries), was the word, and words were spread by humans with gestures, intonation, eye contact, and physical responses of either acknowledgement or confusion. After the inventions of ink, type, paper, and so on, words could be manufactured beyond human scale, but the accompanying nuances were lost: print scaled in a way talking could not. Now, in a brief historical moment (YouTube is not yet 10 years old), we have global scale for words to reach masses of niche audiences, complete with body language, show-and-tell visual explanations, and other attributes that restore the richness of the spoken word.

Bratton’s solution — “More Copernicus, less Tony Robbins” — has much to commend it, yet realistically, how many Copernican giants can any era of human history produce? And of these few, how many could communicate on whiteboards, in articles, or to students, the true breadth and depth of their insights and discoveries? The self-help strain of TED-dom is worrisome to me and to many others, but equally unhelpful is science and technology unmoored from human context. If there had been TED talks in 1910, when Fritz Haber fixed atmospheric nitrogen in fertilizers that now feed a third of the world’s population, would anyone have known what he should have said? Or what if Robert Oppenheimer had a TED-like forum for his concerns regarding atomic weapons in the 1950s? Historically, humanity has frequently misunderstood the geniuses in its midst, so I’m unsure if TED could actually find, coach, and memorialize on video many of today’s Copernican equivalents. At the same time, I should be on record as wishing for both less of Tony Robbins and fewer media Kardashians of any variety.

For me, these are both academic questions and personal ones: I gave my first TEDx talk in early March, and crafting it was a stiff challenge. I saw pitfalls everywhere: sounding like everyone else, overshooting the audience’s patience and current knowledge, and not giving people anything to _do_ about this urgent issue at the end of the talk. Thus I will leave it to the audience to judge after the edited talk is posted next month, but I must admit I measure great TED talks with a new yardstick after having tried to give even a pedestrian one.