Saturday, January 30, 2016

Early Indications January 2016: Shocks

The past month has been marked by a series of extraordinary events that would have been completely unforeseen only a year ago, or even in mid-summer. (In June, West Texas Intermediate crude oil futures contracts were selling at $60 a barrel, roughly twice the current price.) While this may be an unusual month, the larger question remains: how can human institutions evolve to better address both sudden and glacial change, in both positive and negative directions? Put another way, if we see what keeps surprising us, maybe we can adapt our practices and assumptions to be surprised less often, less acutely, or both.

Oil is certainly big news. While the dynamics of a global market, controlled by a wide range of political and business players, remain fascinating, “common knowledge” in energy markets has shifted dramatically. Recall how recently talk of “peak oil” was common: according to Google Trends, searches for the phrase spiked in August 2005 and, at a slightly lower index, May 2008. After 2011, interest dwindled to baseline noise, and today we wrestle with the problems of sub-$2.00 gasoline. The precise events coming into play right now have complex origins: innovations in drilling technology, geopolitical forces (including bitter national and ethnic rivalries), and national budgets whose planning assumptions have been obliterated. Saudi Arabia, for example, can produce a barrel of oil for about $3 but needs $93 to break even for budget purposes given its economic monoculture; Venezuela, to take the most extreme example, needs $149 a barrel to break even. At $30, budgets in many places (including Alaska) are a mess.

Given that oil is such big business in so many parts of the world, considerable expertise is deployed in forecasting. Yet the industry’s record, with regard to both estimates of oil reserves and now prices, is consistently poor. Perhaps the lesson is that complex systems cannot be predicted well, so the best answer might be to shorten planning horizons — a tough call in light of the magnitude of investment and concomitant project lead time required.

The next “shock” is in some ways predictable: U.S. infrastructure investment has lagged for so long that calamities on bridges, railroads, and water supplies are unfortunately overdue. The particular politics of Flint, Michigan’s mismanagement are also not surprising given the nature of large, overlapping bureaucracies and the governor’s prioritization of municipal budget repair, performed by unelected “emergency managers.” The competing agendas are difficult to reconcile: if bondholders lose trust, future borrowing becomes prohibitively expensive. At the same time, the dismissal of known test results and risks, and the human consequences thereof, are criminal: GM stopped using Flint water because it was corroding auto parts while Flint’s citizens had to keep drinking it.

The pattern in Flint is not all that unusual, except in its impact: given the size of federal and state governments, it’s hard to imagine whom voters could hold accountable for substandard ports, roads, and airports. Many are in poor repair, but the constituencies are diffuse and/or politically marginal, and so can be ignored. Who can one complain to (or vote out) regarding connections inside Philadelphia’s airport, or Amtrak’s unreliability, or Detroit’s crumbling schools? Conversely, what good came to the Detroit mayor who supported that airport’s modernization? Who is the primary constituency that benefits from New Jersey’s extremely heavy spending on roads ($2 million per state-controlled mile) that are consistently graded as among the nation’s worst (at both the Interstate and local arterial levels)? Rather than planning horizons, the issue here appears to center on accountability. The interconnections of race, poverty, and party politics can also fuel tragedy: decisions were made in Flint that would be unthinkable in more affluent Detroit suburbs. (Another water issue, the one in California, could also amplify class conflicts in the event the El Niño snowpack melts to last summer’s levels in coming years.)

The third shock is a positive one. Google’s DeepMind unit (acquired for $400 million in 2014) announced that it had used machine learning to develop a computer capable of defeating the European champion at Go, the ancient Chinese game of strategy. AlphaGo, DeepMind’s program, will now play a higher-ranked champion in March. If the machine can win, another cognitive milestone will have been achieved with AI, about ten years faster than had been generally predicted. Interestingly, Facebook had previously announced that it had made significant progress at Go in a purely machine tournament, but the Google news swamped the magnitude of Facebook’s achievement.

To their credit, DeepMind’s team published the algorithmic architecture in Nature. Two distinct neural networks are trained: the “policy network” narrows each move to a small number of attractive options, while the “value network” evaluates the resulting positions, looking on the order of 20 moves ahead. It’s likely the technology will be tested outside abstract board games, potentially in climate forecasting, medical diagnostics, and other fields.
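The division of labor between the two networks can be sketched in a few lines of Python. The functions below are toy stand-ins (random scores where deep convolutional models trained on expert games and self-play would actually sit), but they show the shape of the pipeline: the policy network prunes the move list, and the value network rates what remains.

```python
import random

# Toy stand-ins for AlphaGo's two networks. Random scores merely mark
# where the trained models would plug in; only the control flow is real.

def policy_network(position, legal_moves, k=3):
    """Narrow all legal moves down to a few attractive candidates."""
    scored = [(move, random.random()) for move in legal_moves]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [move for move, _ in scored[:k]]

def value_network(position):
    """Estimate the chance the current player wins from this position."""
    return random.random()

def choose_move(position, legal_moves):
    """Prune with the policy network, then rate survivors with the value network."""
    candidates = policy_network(position, legal_moves)
    return max(candidates, key=lambda move: value_network(position + (move,)))

random.seed(0)
board = ()                    # abstract stand-in for a Go position
moves = list(range(19 * 19))  # the 361 points of an empty board
print(choose_move(board, moves))
```

The key design point survives even in the toy: without the policy network's pruning, the value network would face a branching factor of hundreds per move, which is what made Go intractable for brute-force search.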

In this case, the breakthrough is so unexpected that nobody, including the scientists involved, knows what it means. Even though Deep Blue won at championship chess and Watson won at Jeopardy, neither advancement has translated into wide commercial or humanitarian benefit even though the game wins were in 1997 and 2011 respectively. This is by no means a critique of IBM; rather, turning technology breakthroughs in a specific domain into a more general-purpose tool can in some cases be impossible when it is not merely hard.

Elsewhere, however, giant strides are possible: Velodyne’s lidar, the spinning sensor atop the first-generation Google car, has dropped from $75,000 per unit to a smaller model costing under $500, with further economies of mass production to come. Even more astoundingly, the cost of human genomic sequencing continues to plummet: the first human DNA sequence cost $2.7 billion for the entire research program. By 2002 the cost had fallen to about $100,000; today it is approaching $1,000, outpacing Moore’s law by a factor of thousands (depending on how one calculates) in a 15-year span.
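The arithmetic behind that comparison is worth making explicit. Using only the figures quoted above (and treating the $2.7 billion program cost as a rough starting price, so this is an upper-bound comparison):

```python
# Rough arithmetic behind "outpacing Moore's law by a factor of thousands."
start_cost = 2_700_000_000   # first human genome, entire research program
end_cost = 1_000             # approximate per-genome cost today
years = 15

actual_improvement = start_cost / end_cost     # ~2.7-million-fold cheaper
moore_improvement = 2 ** (years / 2)           # cost halving every two years
gap = actual_improvement / moore_improvement   # on the order of 10^4

print(round(moore_improvement), round(gap))
```

Moore's law alone would predict roughly a 180-fold improvement over 15 years; the observed drop exceeds that by a four-digit factor, which is the sense in which the claim holds.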

In each of these technological instances, people have yet to invent large markets, business models, or related apparatus (liability law, quality metrics, etc.) for these breakthroughs. As the IBM example showed in regard to AI, this is in some ways normal. At the same time, I believe we can create better scaffolding for technology commercialization: patent law reform comes immediately to mind. Erik Brynjolfsson and Andrew McAfee suggest some other ideas in their essential book, The Second Machine Age.

Education is of course a piece of the puzzle, and there’s a lot of discussion regarding STEM courses, including why more people should learn to code. I’ve seen several people make the case that code is already the basis of our loss of privacy, and there will be more deep questions emerging soon: Who owns my genomic information? Who controls my digital breadcrumbs? Should big-data collection be opt-in or opt-out? Yes, knowing _how to_ code can get you a job, but more and more, knowing _about_ code will be essential for making informed choices as a citizen. The widespread lack of understanding of what “net neutrality” actually entails serves as a cautionary tale: few people understand the mechanics of peering, CDNs, and now mobile ad tech, so much of the debate misses the core issue, which is lack of competition among Internet service providers. “Broadband industry consolidation” isn’t on anyone’s top-5 agenda in the U.S., yet even comedian John Oliver identified it as the major nut to crack with regard to information access.

In the end, humans will continue to see the future as looking much like the present, driven by psychological patterns we now understand better than ever. As shocks increase in magnitude, for many reasons including climatic ones, and impact, because so many aspects of life and commerce are interconnected, it may be time to rethink some of our approaches to planning for both the normal and the exceptional.

Monday, January 25, 2016

Early Indications November 2015: Broad thoughts on the Internet of Things

Current state

The notion of an Internet of Things is at once both old and new. From the earliest days of the World Wide Web, devices were connected so people could see the view out a window, traffic or ski conditions, a coffee pot at the University of Cambridge, or a Coke machine at Carnegie Mellon University. The more recent excitement dates to 2010 or thereabouts, and builds on a number of developments: many new Internet Protocol (IP) addresses have become available, the prices of sensors are dropping, new data and data-processing models are emerging to handle the scale of billions of device "chirps," and wireless bandwidth is getting more and more available. At a deeper level, however, the same criteria -- sense, think, act -- that define a robot for many working in the field also characterize large-scale Internet of Things systems: they are essentially meta-robots, if you will. The GE Industrial Internet model discussed below includes sensors on all manner of industrial infrastructure, a data analytics platform, and humans to make presumably better decisions based on the massive numbers from the first domain crunched by algorithms and computational resources in the second.

Building Blocks
The current sensor landscape can be understood more clearly by contrasting it with the old state of affairs. Most important, sensor networks mimicked analog communications media, which did one thing only: radios couldn't display still pictures, record players couldn't record video, newspapers could not facilitate two- or multi-way dialog in real time. For centuries, sensors of increasing precision and sophistication were invented to augment the human senses: thermometers, telescopes, microscopes, ear trumpets, hearing aids, etc. With 19th-century advances in electro-optics and electro-mechanical devices, new sensors could be developed to extend the human senses into different parts of the spectrum (e.g., infrared, radio frequencies, measurement of vibration, underwater acoustics, etc.).

Where they were available, electromechanical sensors and later sensor networks

*stood alone
*measured one and only one thing
*cost a lot to develop and implement
*had inflexible architectures: they did not adapt well to changing circumstances.

Sensors traditionally stood alone because networking them together was expensive and difficult. Given the lack of shared technical standards, to build a network of offshore data buoys, for example, the interconnection techniques and protocols would be uniquely engineered to a particular domain: in this case, salt water, heavy waves, known portions of the electromagnetic spectrum, and so on. An agency seeking to connect sensors of a different sort (such as surveillance cameras) would have to start from scratch, as would a third agency monitoring road traffic.

In part because of their mechanical componentry, sensors rarely measured across multiple yardsticks. Oven thermometers measured only oven temperature, and displayed the information locally, if at all (given that perhaps a majority of sensor traffic informs systems rather than persons, the oven temperature might only drive the thermostat rather than a human-readable display). Electric meters only counted watt-hours in aggregate. Fast forward to today: a consumer Global Positioning System (GPS) unit or smartphone will tell location, altitude, compass heading, and temperature, along with providing weather radio.

Electromechanical sensors were not usually mass-produced, with the exception of common items such as thermometers. Because supply was limited, particularly for specialized designs, the combination of monopoly supply and small order quantities kept prices high.

The rigid architecture was a function of mechanical devices’ specificity. A vibration sensor was different from a camera, which was different from a humidistat. Humidity data, in turn, was moved and managed in a particular analog domain (a range of zero to 100 percent), while image recognition in the camera’s information chain typically relied on human eyes rather than automated processing.

Changes in each of these facets combine to help create today’s emerging sensor networks, which are growing in scope and capability every year. The many examples of sensor capability accessible to (or surveilling) the everyday citizen illustrate the limits of the former regime: today there are more sensors recording more data to be accessed by more end points. Furthermore, the traffic increasingly originates and transits exclusively in the digital domain.

*Computers, which sense their own temperature, location, user patterns, number of printer pages generated, etc.
*Thermostats, which are networked within buildings and now remotely controlled and readable
*Telephones, the wireless variety of which can be understood as beacons, bar-code scanners, pattern-matchers (the Shazam application names songs from a brief audio sample), and network nodes
*Motor and other industrial controllers: many cars no longer have mechanical throttle linkages, so people step on a sensor every day without thinking as they drive by wire. Automated tire-pressure monitoring is also standard on many new cars. Airbags rely on a sophisticated system of accelerometers and high-speed actuators to deploy the proper reaction for a collision involving a small child versus a lamp strapped into the front passenger seat.
*Vehicles: the OBD II diagnostics module, the toll pass, satellite devices on heavy trucks, and theft recovery services such as Lojack, not to mention the inevitable mobile phone, make vehicle tracking both powerful and relatively painless
*Surveillance cameras (of which there are over 10,000 in Chicago alone, and more than 500,000 in London)
*Most hotel door handles and many minibars are instrumented and generate electronic records of people’s and vodka bottles’ comings and goings.
*Sensors, whether embedded in animals (RFID chips in both household pets and race horses) or gardens (the EasyBloom plant moisture sensor connects to a computer via USB and costs only $50), or affixed to pharmaceutical packaging.

Note the migration from heavily capital-intensive or national-security applications down-market. A company called Vitality has developed a pill-bottle monitoring system: if the cap is not removed when medicine is due, an audible alert is triggered, or a text message could be sent.

A relatively innovative industrial deployment of vibration sensors illustrates the state of the traditional field. In 2006, BP instrumented an oil tanker with "motes," which integrated a processor, solid-state memory, a radio, and an input/output board on a single 2" square chip. Each mote could receive vibration data from up to ten accelerometers, which were mounted on pumps and motors in the ship’s engine room. The goal was to determine if vibration data could predict mechanical failure, thus turning estimates—a motor teardown every 2,000 hours, to take a hypothetical example—into concrete evidence of an impending need for service.

The motes had a decided advantage over traditional sensor deployments in that they operated over wireless spectrum. While this introduced engineering challenges arising from the steel environment as well as the need for batteries and associated issues (such as lithium’s being a hazardous material), the motes and their associated sensors were much more flexible and cost-effective to implement than hard-wired solutions. The motes also communicated with each other in a mesh topology: each mote looked for nearby motes, which then served as repeaters en route to the data’s ultimate destination. Mesh networks are usually dynamic: if a mote fails, signal is routed to other nearby devices, making the system fault-tolerant in a harsh environment. Finally, the motes could perform signal processing on the chip, reducing the volume of data that had to be transmitted to the computer where analysis and predictive modeling were conducted. This blurring of the lines between sensing, processing, and networking elements is occurring in many other domains as well.
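That fault tolerance is easy to illustrate. The sketch below uses a hypothetical engine-room topology and plain breadth-first search rather than a real mesh protocol, but it shows how a relay route simply re-forms around a failed mote:

```python
from collections import deque

# Minimal sketch of mesh-style routing: each mote relays for its neighbors,
# and paths re-form around failures. Topology is invented for illustration.

def shortest_route(links, source, sink, failed=frozenset()):
    """Breadth-first search for a relay path that avoids failed motes."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # sink unreachable

links = {                       # engine-room motes and their radio neighbors
    "pump1": ["mote_a", "mote_b"],
    "mote_a": ["mote_b", "gateway"],
    "mote_b": ["mote_a", "gateway"],
}
print(shortest_route(links, "pump1", "gateway"))
print(shortest_route(links, "pump1", "gateway", failed={"mote_a"}))
```

With `mote_a` down, the vibration data from `pump1` still reaches the gateway through `mote_b`; no operator intervention or re-wiring is required, which is exactly the property that mattered in a working engine room.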

All told, there are dozens of billions of items that can connect and combine in new ways. The Internet has become a common ground for many of these devices, enabling multiple sensor feeds—traffic camera, temperature, weather map, social media reports, for example—to combine into more useful, and usable, applications. Hence the intuitive appeal of "the Internet of Things." As we saw earlier, network effects and positive feedback loops mean that considerable momentum can develop as more and more instances converge on shared standards. While we will not discuss them in detail here, it can be helpful to think of three categories of sensor interaction:

*Sensor to people: the thermostat at the ski house tells the occupants that the furnace is broken the day before they arrive, or a dashboard light alerts the driver that tire pressure is low
*Sensor to sensor: the rain sensor in the automobile windshield alerts the antilock brakes of wet road conditions and the need for different traction-control algorithms
*Sensor to computer/aggregator: dozens of cell phones on a freeway can serve as beacons for a traffic-notification site, at much lower cost than helicopters or "smart highways."

An "Internet of Things" is an attractive phrase that at once both conveys expansive possibility and glosses over substantial technical challenges. Given 20+ years of experience with the World Wide Web, people have long experience with hyperlinks, reliable inter-network connections, search engines to navigate documents, and wi-fi access everywhere from McDonalds to mid-Atlantic in flight. None of these essential pieces of scaffolding has an analog in the Internet of Things, however: garage-door openers and moisture sensors aren't able to read; naming, numbering, and navigation conventions do not yet exist; low-power networking standards are still unsettled; and radio-frequency issues remain problematic. In short, as we will see, "the Internet" may not be the best metaphor for the coming stage of device-to-device communications, whatever its potential utility.

Beyond the Web metaphor
Given that "the Internet" as most people experience it is global, searchable, and anchored by content or, increasingly, social connections, the "Internet of Things" will in many ways be precisely the opposite. Having smartphone access to my house's thermostat is a private transaction, highly localized and preferably NOT searchable by anyone else. While sensors will generate volumes of data that are impossible for most humans to comprehend, that data is not content of the sort that Google indexed as the foundation of its advertising-driven business. Thus while an "Internet of Things" may feel like a transition from a known world to a new one, the actual benefits of networked devices separate from people will probably be more foreign than saying "I can connect to my appliances remotely."

Consumer applications
The notion of networked sensors and actuators can usefully be subdivided into industrial, military/security, or business-to-business versus consumer categories. Let us consider the latter first. Using the smartphone or a web browser, it is already possible to remotely control and/or monitor a number of household items:

•    slow cooker
•    garage-door opener
•    blood-pressure cuff
•    exercise tracker (by mileage, heart rate, elevation gain, etc)
•    bathroom scale
•    thermostat
•    home security system
•    smoke detector
•    television
•    refrigerator.

These devices fall into some readily identifiable categories: personal health and fitness, household security and operations, entertainment. While the data logging of body weight, blood pressure, and caloric expenditures would seem to be highly relevant to overall physical wellness, few physicians, personal trainers, or health insurance companies have built business processes to manage the collection, security, or analysis of these measurements.  Privacy, liability, information overload, and, perhaps most centrally, outcome-predicting algorithms have yet to be developed or codified. If I send a signal to my physician indicating a physical abnormality, she could bear legal liability if her practice does not act on the signal and I subsequently suffer a medical event that could have been predicted or prevented.

People are gradually becoming more aware of the digital "bread crumbs" our devices leave behind. Progressive Insurance's Snapshot campaign has had good response to a sensor that tracks driving behavior as the basis for rate-setting: drivers who drive frequently, brake especially hard, or drive a lot at night could be judged worse risks and be charged higher rates. Daytime or infrequent drivers, those with a light pedal, or people who religiously buckle seat belts might get better rates. This example, however, illustrates one of the drawbacks of networked sensors: few sensors can account for all potentially causal factors. Snapshot doesn't know how many people are in the car (a major accident factor for teenage drivers), whether the radio is playing, whether the driver is texting, or when alcohol might be impairing the driver's judgment. Geographic factors are delicate: some intersections have high rates of fraudulent claims, but the history of racial redlining is still a sensitive topic, so data that might be sufficiently predictive (ZIP codes traversed) might not be used out of fear it could be abused.
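For illustration, a toy version of this kind of usage-based rating might look like the following. The features, weights, and surcharge cap are invented for the sketch and bear no relation to Progressive's actual model:

```python
# Illustrative-only telematics rating. Every weight below is made up;
# the point is only the shape of the mapping from behavior to price.

def usage_surcharge(miles_per_week, hard_brakes_per_100mi, night_share):
    """Map observed driving behavior to a rate multiplier around 1.0."""
    score = (
        0.002 * miles_per_week          # exposure: more driving, more risk
        + 0.03 * hard_brakes_per_100mi  # harsh braking events
        + 0.5 * night_share             # fraction of miles driven at night
    )
    return round(1.0 + min(score, 0.5), 3)  # cap the surcharge at 50%

print(usage_surcharge(60, 1, 0.05))   # light, daytime driver
print(usage_surcharge(250, 8, 0.40))  # heavy, hard-braking night driver
```

Even the toy makes the limitation above concrete: the function can only price the variables it receives, so unmeasured factors (passengers, texting, alcohol) are invisible to the rate no matter how predictive they are.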

The "smart car" applications excepted, most of the personal Internet of Things use cases are to date essentially remote controls or intuitively useful data collection plays. One notable exception lies in pattern-cognition engines that are grouped under the heading of "augmented reality." Whether on a smartphone/tablet or through special headsets such as Google Glass, a person can see both the physical world and an information overlay. This could be a real-time translation of a road sign in a foreign country, a direction-finding aid, or a tourist application: look through the device at the Eiffel Tower and see how tall it is, when it was built, how long the queue is to go to the top, or any other information that could be attached to the structure, attraction, or venue.

While there is value to the consumer in such innovations, these connected devices will not drive the data volumes, expenditures, or changes in everyday life that will emerge from industrial, military, civic, and business implementations.

The Internet(s) of [infrastructure] Things
Because so few of us see behind the scenes to understand how public water mains, jet engines, industrial gases, or even nuclear deterrence work, there is less intuitive ground to be captured by the people working on large-scale sensor networking. Yet these are the kinds of situations where networked instrumentation will find its broadest application, so it is important to dig into these domains.

In many cases, sensors are in place to make people (or automated systems) aware of exceptions: is the ranch gate open or closed? Is there a fire, or just an overheated wok? Is the pipeline leaking? Has anyone climbed the fence and entered a secure area? In many cases, a sensor could be in place for years and never note a condition that requires action. As the prices of sensors and their deployment drop, however, more and more of them can be deployed in this manner, if the risks to be detected are high enough. Thus one of the big questions in security -- as Bruce Schneier frames it, not "Does the security measure work?" but "Are the gains in security worth the costs?" -- gets difficult to answer: the costs of IP-based sensor networks are dropping rapidly, making cost-benefit-risk calculations a matter of moving targets.

In some ways, the Internet of Things business-to-business vision is a replay of the RFID wave of the mid-aughts. Late in 2003, Wal-Mart mandated that all suppliers would use radio-frequency tags on their incoming pallets (and sometimes cases) beginning with the top 100 suppliers, heavyweight consumer packaged goods companies like Unilever, Procter & Gamble, Gillette, Nabisco, and Johnson & Johnson. The payback to Wal-Mart was obvious: supply chain transparency. Rather than manually counting pallets in a warehouse or on a truck, radio-powered scanners could quickly determine inventory levels without workers having to get line-of-sight reads on every bar code. While the 2008 recession contributed to the scaled-back expectations, so too did two powerful forces: business logic, and physics.

To take the latter first, RFID turned out to be substantially easier in labs than in warehouses. RF coverage was rarely strong and uniform, particularly in retrofitted facilities. Noise -- in the form of everything from microwave ovens to portable phones to forklift-guidance systems -- made reader accuracy an issue. Warehouses involve lots of metal surfaces, some large and flat (bay doors and ramps), others heavy and in motion (forklifts and carts): all of these reflect radio signals, often problematically. Finally, the actual product being tagged changes radio performance: aluminum cans of soda, plastic bottles of water, and cases of tissue paper each introduce different performance effects. Given the speed of assembly lines and warehouse operations, any slowdowns or errors introduced by a new tracking system could be a showstopper.

The business logic issue played out away from the shop floor. Retail and CPG profit margins can be very thin, and the cost of the RFID tagging systems for manufacturers that had negotiated challenging pricing schedules with Wal-Mart was protested far and wide. The business case for total supply chain transparency was stronger for the end seller than for the suppliers, manufacturers, and truckers required to implement it for Wal-Mart's benefit. Given that the systems delivered little value to the companies implementing them, and given that the technology didn't work as advertised, the quiet recalibration of the project was inevitable.

RFID is still around. It is a great solution to fraud detection, and everything from sports memorabilia to dogs to ski lift tickets can be easily tested for authenticity. These are high-value items, some of them scanned no more than once or twice in a lifetime rather than thousands of times per hour, as on an assembly line. Database performance, industry-wide naming and sharing protocols, and multi-party security practices are much less of an issue. 

While it's useful to recall the wave of hype for RFID circa 2005, the Internet of Things will be many things. The sensors, to take only one example, will be incredibly varied, as a rapidly growing online repository makes clear. Laboratory instruments are shifting to shared networking protocols rather than proprietary ones. This means it's quicker to set up or reconfigure an experiment, not that the lab tech can see the viscometer or Geiger counter from her smart phone or that the lab will "put the device on the Internet" like a webcam.

Every one of the billions of smartphones on the planet is regularly charged by its human operator, carries a powerful suite of sensors -- accelerometer, temperature sensor, still and video cameras/bar-code readers, microphone, GPS receiver -- and operates on multiple radio frequencies: Bluetooth, several cellular bands, WiFi. There are ample possibilities for crowdsourcing news coverage, fugitive hunting, global climate research (already, amateur birders help show differences in species' habitat choices), and more using this one platform.

Going forward, we will see more instrumentation of infrastructure, whether bridges, the power grid, water mains, dams, railroad tracks, or even sidewalks. While states and other authorities will gain visibility into security threats, potential outages, maintenance requirements, or usage patterns, it's already becoming clear that there will be multiple paths by which to come to the same insight. The state of Oregon was trying to enhance the experience of bicyclists, particularly commuters. While traffic counters for cars are well established, bicycle data is harder to gather. Rather than instrumenting bike paths and roadways, or paying a third party to do so, Oregon bought aggregated user data from Strava, a fitness-tracking smartphone app. While not every rider (commuters especially) tracks mileage, enough do that the bike-lane planners could see cyclist speeds and traffic volumes by time of day, identify choke points, and map previously untracked behaviors.
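The planners' analysis amounts to a simple aggregation. The sketch below invents a few ride records (the field names are hypothetical, not Strava's actual schema) and groups them by road segment and hour; a segment with high volume and low average speed at rush hour is a candidate choke point:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical anonymized ride records; real exports would differ in shape.
rides = [
    {"segment": "bridge_approach", "hour": 8, "mph": 6.5},
    {"segment": "bridge_approach", "hour": 8, "mph": 7.0},
    {"segment": "river_path", "hour": 8, "mph": 14.0},
    {"segment": "bridge_approach", "hour": 17, "mph": 12.0},
]

# Group speeds by (segment, hour) to get volume and average pace.
by_segment_hour = defaultdict(list)
for ride in rides:
    by_segment_hour[(ride["segment"], ride["hour"])].append(ride["mph"])

# High volume plus low average speed at a given hour suggests a choke point.
for (segment, hour), speeds in sorted(by_segment_hour.items()):
    print(segment, hour, len(speeds), round(mean(speeds), 1))
```

Note that the individual riders never appear in the output; only the aggregate reaches the planners, which is what made the Strava arrangement palatable.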

Strava was careful to anonymize user data, and in this instance, cyclists were the beneficiaries. Furthermore, cyclists compete on Strava and have joined with the expectation that their accomplishments can show up on leader boards. In many other scenarios, however, the Internet of Things' ability to "map previously untracked behaviors" will be problematic, for reasons we will discuss later.

Industrial scenarios
GE announced its Industrial Internet initiative in 2013. The goal is to instrument more and more of the company's capital goods -- jet engines are old news, but also locomotives, turbines, undersea drilling rigs, MRI machines, and other products -- with the goal of improving power consumption and reliability for existing units and improving the design of future products. Given how big the company's footprint is in these industrial markets, 1% improvements turn out to yield multi-billion-dollar opportunities. Of course, instrumenting the devices, while not trivial, is only the beginning: operational data must be analyzed, often using completely new statistical techniques, and then people must make decisions and put them into effect.

This holistic vision is far-sighted on GE's part and transcends the frequent technology-centric marketing messages that often characterize Silicon Valley rhetoric. That is, GE's end-to-end insistence on sensors AND software AND algorithms AND people is considerably more nuanced and realistic than, for example, Qualcomm's vision:

“the Internet of Everything (IoE) is changing our world, but its effect on daily life will be most profound. We will move through our days and nights surrounded by connectivity that intelligently responds to what we need and want—what we call the Digital Sixth Sense. Dynamic and intuitive, this experience will feel like a natural extension of our own abilities. We will be able to discover, accomplish and enjoy more. Qualcomm is creating the fabric of IoE for everyone everywhere to enable this Digital Sixth Sense.”

Not surprisingly, Cisco portrays the Internet of Things in similar terms; what Qualcomm calls "fabric" Cisco names "connectivity," appropriately for a networking company:
“These objects contain embedded technology to interact with internal states or the external environment. In other words, when objects can sense and communicate, it changes how and where decisions are made, and who makes them.

The IoT is connecting new places–such as manufacturing floors, energy grids, healthcare facilities, and transportation systems–to the Internet. When an object can represent itself digitally, it can be controlled from anywhere. This connectivity means more data, gathered from more places, with more ways to increase efficiency and improve safety and security.”

The other striking advantage of the GE approach is financial focus: 1% savings in a variety of industrial process areas yields legitimately huge cost savings opportunities. This approach has the simultaneous merits of being tangible, bounded, and motivational. Just 1% savings in aviation fuel over 15 years would generate more than $30 billion, for example.
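The claim is easy to sanity-check: working backward from $30 billion over 15 years at a 1% savings rate tells us the size of the underlying fuel bill GE must be assuming.

```python
# What annual fuel spend does $30B of savings over 15 years at 1% imply?
# Simple arithmetic only: no discounting or traffic-growth assumptions.
target_savings = 30e9   # dollars saved over the period
years = 15
savings_rate = 0.01     # the "just 1%" improvement

implied_annual_fuel_spend = target_savings / years / savings_rate
print(round(implied_annual_fuel_spend / 1e9), "billion dollars per year")
```

The implied baseline, roughly $200 billion a year of worldwide aviation fuel spend, is the kind of figure that makes "just 1%" a bounded yet genuinely motivational target.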

But to get there, the GE vision is notably realistic about the many connected investments that must precede the harvesting of these benefits.

    1) The technology doesn't exist yet. Sensors, instrumentation, and user interfaces need to be made more physically robust, usable by a global workforce, and standardized to the appropriate degree.
    2) Information security has to protect assets that don't yet exist, containing value that has yet to be measured, from threats that have yet to materialize.
    3) Data literacy and related capabilities need to be cultivated in a global workforce that already has many skills shortfalls, language and cultural barriers, and competing educational agendas. Traditional engineering disciplines, computer science, and statistics will merge into new configurations.

Despite a lot of vague marketing rhetoric, the good news is that engineers, financial analysts, and others are recognizing the practical hurdles that have yet to be cleared. Among these are the following:

1) Power consumption

If all of those billions of sensors require either hard-wired power or batteries, the toxic waste impact alone could be daunting. Add to this requirement the growing pressure of the electric-car industry on the worldwide battery supply, and the need for new power management, storage, and disposal approaches becomes clear.

2) Network engineering

It's easy to point to all those sensors, each with its own IP address, and make comparisons to the original Internet. It's quite another matter, however, to make networks work when a sensor might "wake up" only once a day -- or once a month -- to report status. Other sensors, as we saw with jet engines, have the opposite profile: a firehose. Some kind of transitional device will likely emerge, either collecting infrequent heterogeneous "chirps" or consolidating, error-checking, compressing, and/or pre-processing heavy sensor volumes at the edge of a conventional network. Power management, security, and data integrity might also be in some of these devices' job descriptions.
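To make the idea concrete, here is a minimal sketch of such a transitional device: a gateway that buffers anything from monthly chirps to firehose streams, then forwards consolidated, integrity-checked batches upstream. The class and field names are invented for illustration and do not correspond to any real IoT product:

```python
import hashlib
import json
import time

class EdgeGateway:
    """Illustrative edge device: consolidates sensor readings into
    integrity-checked batches before they cross the conventional network."""

    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, sensor_id, value):
        # Accept anything from a once-a-month chirp to a firehose stream.
        self.buffer.append({"sensor": sensor_id, "value": value,
                            "ts": time.time()})
        if len(self.buffer) >= self.batch_size:
            return self.flush()
        return None

    def flush(self):
        # Consolidate and checksum the batch, then clear the buffer.
        payload = json.dumps(self.buffer).encode()
        digest = hashlib.sha256(payload).hexdigest()
        batch = {"count": len(self.buffer), "sha256": digest,
                 "payload": payload}
        self.buffer = []
        return batch

# Hypothetical usage: one chirp buffered, then a flush at the threshold.
gw = EdgeGateway(batch_size=2)
gw.ingest("turbine-7", 0.93)
batch = gw.ingest("engine-12", 411.0)
print(batch["count"])  # → 2
```

A real gateway would also compress the payload, hold it through network outages, and enforce access control; those concerns are omitted here to keep the sketch short.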

3) Security

As the Stuxnet virus illustrated, the Internet of Things will be attacked by both amateurs and highly trained people writing a wide variety of exploits. Given that Internet security is already something of a contradiction in terms, and given widespread suspicion that the NSA has engineered back doors into U.S. firms' technology products, market opportunities for EU and other IoT vendors might increase as a result. In any event, the challenge of making lightweight, distributed systems robustly secure without undue costs in selling price, operational overhead, interoperability, or performance has yet to be solved at a large scale. In 2014, the security firm Symantec reported that every activity-tracking device it tested was insecure in some way.

4) Data processing

The art and science of data fusion is far from standardized in fields that have been practicing it for decades. Context, for instance, is often essential for interpretation but difficult to guarantee during collection. Add to the mix humans as sensor platforms, intermittent and hybrid network connectivity, information security requirements outside a defense/intelligence cultural matrix, and unclear missions -- many organizations quite reasonably do not know why they are measuring what they are measuring until after they try to analyze the numbers -- and the path of readings off the sensors and into decision-making becomes complicated indeed.

5) Cost effectiveness

The RFID experiment foundered in part on the price of the sensors, which even when measured in dimes became an issue when the volumes of items to be tracked ranged into the millions. With past hardware investments in memory chips, for example, still stinging some investors, the path to profitability for ultra-low-power, ultra-low-cost devices will be considerably different from the high-complexity, high-margin world that Intel so successfully mastered in the PC era.

6) Protocols

The process by which the actual day-to-day workings of complex systems get negotiated makes for good business-school case studies but challenging investment decisions. The USB standard, for example, benefited from the substantial industry "convening power" exercised by Intel, and the benefits have been widely shared. For the IoT, it's less clear which companies will have a similar combination of engineering know-how, intellectual property (and a management mandate to form a royalty-free patent pool), industry fear and respect, and so on. As the VHS/Betamax, high-resolution audio, and high-definition DVD standards wars have taught many people, it's highly undesirable to be stranded on the wrong side of an industry protocol. Hence, many players may sit out pending identifiable winners in the various standards negotiations.

7) APIs and middleware

The process by which device chirps become management insights requires multiple handoffs between sensors and PCs or other user devices. Relatively high up the stack are a variety of means by which processed, analyzed data can be connected to and queried by human decision-makers, and so far enterprise software vendors have yet to make a serious commitment to integrating these new kinds of data streams (or trickles, or floods) into management applications.

8) System management

The IoT will need to generate usage logs, integrity checks, and all manner of tools for managing these new kinds of networks. Once again, data center and desktop PC systems management tools simply are not designed to handle tasks at this new level of granularity and scale. What will an audit of a network of "motes" look like? Who will conduct it? Who will require it?

As this note has hinted, the label "Internet of Things" could well steer thinking in unproductive directions. Looking at the World Wide Web as a prototype has many shortcomings: privacy, security, network engineering, human-in-the-loop advantages that may not carry over, and even the basic use case. At the same time, thinking of sensor networks in the proprietary, single-purpose terms that have dictated generations of designs is equally limiting.

Beyond the level of the device, data processing is being faced with new challenges -- in both scope and kind -- as agencies, companies, and NGOs (to name but three interested parties) try to figure out how to handle billions of cellphone chirps, remote-control clicks, or GPS traces. What information can and should be collected? By what entity? With what safeguards? For how long? At what level of aggregation, anonymization, and detail? With devices and people opting in or opting out? Who can see what data at what stage in the analysis life cycle?

Once information is collected, the statistical and computer science disciplines are challenged to find patterns that are not coincidence, predictions that can be validated, and insights available in no other way. Numbers rarely speak for themselves, and the context for Internet of Things data is often difficult to obtain or manage given the wide variety of data types in play. The more inclusive the model, however, the more noise is introduced and must be managed. And the scale of this information is nearly impossible to fathom: according to IBM Chief Scientist Jeff Jonas, mobile devices in the United States alone generated 600 billion geo-tagged transactions every day -- as of 2010.

In addition to the basic design criteria, the privacy issues cannot be ignored. Here, the history of Google Glass might be instructive: whatever the benefits that accrue to the user, the rights of those being scanned, identified, recorded, or searched matter in ways that Google has yet to acknowledge. Magnify Glass to the city or nation-state level (recall that England has an estimated 6 million video cameras, but nobody knows exactly how many), as the NSA revelations appear to do, and it's clear that technological capability has far outrun the formal and informal rules that govern social life in civil society.

Early Indications October 2015: Of colleges, jobs, and analytics

It's funny how careers unfold. As a result of being in a particular place at a particular time, I find myself teaching analytics, supply-chain management, and digital strategy, mostly at the masters level. Not only did I not study any of these subjects in graduate school; none of these disciplines even existed under its current name as recently as 20 years ago. What follows are some reflections on careers, skills, and patterns in education prompted by my latest adventures as well as some earlier ones.

1) What should I major in?

Across the globe, parents and students look at the cost of college, salary trends, layoffs, predilections, and aspirations, then take a deep breath and sign up for a major. I have seen this process unfold multiple times, and people sometimes miss some less obvious questions that are tough to address but even tougher to ignore.

The seemingly relevant question, "what am I good at?", is tough to answer with much certainty: we require students to declare a major before they've taken many (sometimes any) courses in it, and coursework and salaried work are, of course, two different things. While it's tempting to ask "who's hiring?", it's much harder to ask "where will there be good jobs in 20 years?" Very few Chief Information Officers in senior positions aspired to that title in college, mostly because it didn't exist. Now that CIOs are more common, it's unclear whether the title and skills will be as widely required once sensors, clouds, and algorithms improve over the next decade or two.

It's even more difficult to extrapolate what the new "hot" jobs will be. In the late 1990s, the U.S. Bureau of Labor Statistics encouraged students to go into desktop publishing, based on projected demand. In light of smartphones, social networks, and "green" thinking, the projected demand for paper media never materialized; then tablets, e-readers, and wearables cut into it still further. It's easy to say the Internet of Things or robotics will be more important in 20 years than they are today, but a) will near-term jobs have materialized when the student loan payments come due right after graduation, and b) are there enough relevant courses at a given institution? One cause of the nursing shortage that emerged about 15 years ago was a shortfall in the number of nursing professors: there were unfilled jobs, and eager students, but not enough capacity to train sufficient numbers of people to ease the hiring crunch.

2) English (or psychology, or fill in the blank) majors are toast

Many politicians are trying to encourage STEM career development in state universities and cite low earning potential for humanities graduates as a reason to cut funding to these fields. As Richard Thaler would say, it matters whether you make French majors pay a premium or give chemical engineers a discount: the behavioral economics of these things are fascinating. The University of Florida led the way here about three years ago, but it's hard to tell how the experiment panned out.

At the same time, the respected venture investor Bill Janeway wrote a pointed piece in Forbes this summer, arguing that overcoming the friction in the atoms-to-bits-to-atoms business model (Uber being a prime example) demands not just coding or financial modeling, but something else:

"Unfortunately for those who believe we have entered a libertarian golden age, freed by digital technology from traditional constraints on market behavior, firms successful in disrupting the old physical economy will need to have as a core competency the ability to manage the political and cultural elements of the eco-systems in which they operate, as well as the purely economic ones. . . .

In short, the longer term, sustainable value of those disrupters that succeed in closing the loop from atoms to bits and back to atoms will depend as much on successful application of lessons from the humanities (history, moral philosophy) and the social sciences (the political economy and sociology of markets) as to mastery of the STEM disciplines."

On the whole, as the need for such contrarian advice illustrates, we know little beyond the stereotypes of college majors. The half-life of technical skills is shrinking, so learning how to learn becomes important in building a career rather than merely landing an entry-level position. Evidence for the growing ability of computers and robots to replace humans is abundant: IBM bought the Weather Company's digital operations in part to feed the Watson AI engine, Uber wants robotic cars to replace its human drivers, and even skilled radiologists can be outperformed by algorithms. A paper by Carl Frey and Michael Osborne at Oxford convincingly rates most career fields by their propensity to be automated; the ranked list at the very end of the paper is as illuminating as it is scary.

To bet against one's own career, in effect short-selling an occupational field, requires insight, toughness, and luck. At the same time, the jobs that require human interaction, memory of historical precedent, and tactile skills will take longer to automate. Thus the liberal-arts orientation toward teaching people how to think, rather than how to be a teacher, accountant, or health-club trainer, will win out, I believe. This is a long-term bet, to be sure, and in the interim there will be unemployed Ivy Leaguers looking with some envy at their more vocationally focused state-school kin. Getting the timing right will be more luck than foresight.

3) What is analytics anyway?

As I've developed both a grad course and a workshop for a client in industry, I'm coming to understand this question differently. A long time ago, when I taught freshman composition, it took me a few semesters to understand that while effective writing uses punctuation correctly, an expository writing (as it was called) course was an attempt to teach students how to think: to assess sources, to take a position, and to buttress an argument with evidence. All too frequently, however, colleges see the labor-intensive nature of freshman writing seminars as a cost to be cut, whether through grad students, adjuncts, automation, or bigger sections. Each of these detracts from the close reading, personal attention, and rigorous exercises that neither scale well nor are done capably by many grad students or overworked adjuncts.

I'm seeing similar patterns in analytics. Once you get past the initial nomenclature, the two disciplines look remarkably similar: while the courses are nominally about different things (words and numbers), each seeks to teach the skills of assessing evidence, sustaining a point of view, and convincing a fair-minded audience with analysis and sourcing. To overstate, perhaps: analytics is no more a matter of statistics than writing is about grammar; each is a necessary but far from sufficient element of the larger enterprise. Numbers can be made to say pretty much whatever the author wants them to say, just as words can. In this context, the recent finding that only 39% of published research findings in psychology could be replicated stands as a cautionary tale. Unfortunately, American numeracy -- quantitative literacy -- is extremely low, rendering millions of people incapable of managing businesses, households, and retirement portfolios. Producing sound academic research, meanwhile, looks to be even rarer than we thought.

A paradox emerges: at the moment when computational capability is effectively free and infinite relative to an individual's needs, the skills required to deploy that power are highly unequally distributed, with little sign of improvement any time soon. How colleges teach, whom we teach, what we teach, and how it gets applied are all in tremendous upheaval: it's exciting, for sure, but the collateral damage is mounting (in the form of student loan defaults and low completion rates at for-profit colleges, to take just two examples). Are we nearing a perfect storm of online learning, rapidly escalating demand for new skills, sticker shock or even outright refusal to pay college tuition bills, abuses of student loan funding models, expensive and decaying physical infrastructure (much of it built in the higher-education boom of the 1960s), and demographics? Speaking of paradoxes, how soon will the insights of analytics -- discovering, interpreting, and manipulating data to inform better decisions -- take hold in this domain?

Wednesday, September 30, 2015

Early Indications September 2015: The MBA at 100 (or so)

It’s mostly coincidence, but the MBA degree is at something of a crossroads entering its second century. A short list of big questions might begin as follows:

-What is the place (pun intended) of a two-year resident MBA in a global, Internet era?

-What is the market need for general managers versus specialists in finance, supply chains, accounting, or HR, for example? How does market supply align with this need?

-What is the cost and revenue structure for an MBA program outside the elite tier? 

-How can business degrees prepare graduates for a highly dynamic, uncertain commercial environment?

-What do and should MBAs know about the regulatory environments in which their businesses are situated?

-What is and should be the relationship between managerial scholarship and commercial practice?

-What is the relationship of functional silos to modern business practice? Marketers need to know a fair bit of technology to do mobile ad targeting, for example, as do equities traders in the age of algorithmic bots. Navigating the aforementioned regulatory landscape, meanwhile, draws on an entirely different range of skills generally not covered in management or negotiation classes. Is the MBA/JD the degree of choice here?

-How can and should U.S. business schools teach ethics, which are highly culture-specific, to students from many home countries who will likely work in still another country/culture soon after graduation?

2015 is not happening in a vacuum, of course. The first graduate school of business, Tuck, offered the Master of Science in Commerce after its founding in 1900. (Recall that the functionally organized corporation was at the time a fairly recent phenomenon: railroads split ownership from management in part because of the huge capital requirements, and the vast distances involved meant that managers often lacked direct visibility of workers. Thus, in broad strokes, the late 19th century began the age of policies and procedures, and the idea of a middle manager.) Harvard launched its MBA program eight years after Dartmouth, with significant help from such scientific management exponents as Frederick Winslow Taylor. Enrollments surged: Harvard alone grew from 80 students to nearly 1100 in 22 years. Unsurprisingly, other universities began offering the degree: Northwestern in 1920, Michigan in 1924, Stanford in the late 1920s, Chicago in 1935, UNC in 1952. According to the office of the university archivist, Penn State began offering the MBA in 1959.

1959 was also the year two different reports, commissioned by industrialists’ foundations — Ford and Carnegie — reoriented American graduate business education. The more strongly worded of the two, by Robert Aaron Gordon and James Edwin Howell, systematically attacked the entire institution of the MBA as it then stood: students were weak, curricula were sloppily constructed, and faculty taught with little academic rigor at many schools.

The Gordon-Howell report quickly influenced accreditation and related discussions. New courses on many campuses covering strategy were at the forefront of a larger emphasis on quantitative methods and theory. What was not well addressed, according to many critics, was the practice of management itself. Balancing theory and practice has never been simple in business: unlike medicine, the field has no analogue of clinical trials for its methods and practices.

Entrepreneurship has proven particularly hard to teach: on any list of great business-launchers, few hold the MBA. None of the following hold the degree: Paul Allen, Jeff Bezos, Sergey Brin, Warren Buffett, Michael Dell, Larry Ellison, Bill Gates, Jim Goodnight (SAS Institute), Bill Hewlett and David Packard, Steve Jobs, Elon Musk, Ken Olsen (Digital Equipment), Pierre Omidyar (eBay), Larry Page, Sam Walton, and Mark Zuckerberg.

MBAs can of course do quite well for themselves, as Michael Bloomberg and Nike’s Phil Knight (a 4:10 miler at Oregon 50 years ago) prove. Still, there appears to be a negative correlation between academic achievement, particularly in the MBA, and entrepreneurial accomplishment. Ten of the top sixteen richest self-made people in the world did not finish college or dropped out of grad school: Gates, Ellison, Zuckerberg, Sheldon Adelson (casinos), Page and Brin, Carl Icahn, Steve Ballmer, Harold Hamm (oil and gas at Continental Resources), and Dell.

Apart from not being able to produce mega-entrepreneurs, what of the more real-world challenges to MBA programs noted above? In reverse order, a few notes on each:

-Ethics has never been an easy topic to include in a business curriculum, but as the world’s top schools continue to get more global, trying to say anything stringent encounters the reality of cultural diversity. Sanctions against bribery, greed, ostentation, money-lending (with interest), and constraints on the role of women and ethnic minorities are impossible to align; even the U.S., Canada, and England do some things very differently despite many similarities. The ethical lapses of the early 2000s — at Waste Management, Enron, Adelphia, and HealthSouth, among many others — put some focus on business schools (along with accounting firms) as agents of better behavior. In light of recent scandals at Toyota, Volkswagen, and GM, to name only the automakers, the challenge for MBA curricula does not appear to be any less daunting than in the crisis years of 2002 or thereabouts.

-Teaching students to work across functions and to deal with regulatory bounds and procedures continues to stymie MBA programs. We teach an integrative consulting-project exercise in the 4th semester; Harvard teaches something similar across the whole first year. Numerous programs have moved the project-based course back and forth, with equally compelling logic for early and late inclusion. Seeing how messy real problems are prepares students for the functional courses, while having some base of knowledge before being turned loose on a client also has merit. No one approach has emerged as a winner from the many variations being used.

-Managerial theory and practice remain difficult both to do and to convey more than a half-century after Gordon and Howell. Scholarship that gets published tends not to come from practitioners (Clayton Christensen is a notable exception, having run a company before earning his doctorate at Harvard), while managers and executives remain understandably wary of controlled experiments on live business units. Professors’ contributions to the semi-academic journals that practicing businesspeople might read — Harvard Business Review, Sloan Management Review, California Management Review, and the like — usually do not count heavily (if at all) toward tenure or promotion. For their part, many managers tell me they find little of value in the A-list journals held in academic esteem. Suffice it to say there remain many opportunities to improve the dialogue between the factory or office and the academy. 

-How can MBA programs teach resiliency, creativity, willingness to challenge convention, and the other traits required in a particularly turbulent business landscape? Marc Benioff, the CEO of Salesforce, is far from a disinterested observer, but it is difficult to disagree with his recent contention that essentially every industry is in the midst of or about to confront fundamental change. Whether from fracking, Uber, mobile platforms, Amazon, or demographics, every business (and governmental) unit I can see is hard-pressed to do things “the way we’ve always done it around here.”

An entrepreneur (whose masters was in arts management) told me a cautionary tale back in the dot-com boom. “We’re a startup,” he said. “Strategy for us isn’t chess, it’s poker: we have to bluff because we can’t compete with the big guys at scale, with equal playing pieces on a defined board with agreed-upon rules. We faked features in the demo we couldn’t deliver. We have had months where we couldn’t make payroll. We’ve reinvented our business model three times. That’s the reality. We hired a bunch of top-school MBAs to try to compete better, and had to let them all go. Why? These men and women all had good grades in high school. They cleared the next hurdle and got into a good college, then positioned themselves to deliver the right answers, earn As, and get into Ivy League b-schools. There it was more of the same: high achievers got the top internships at the I-banks and consulting firms. They’ve always been rewarded for getting the right answer. Now we have all this chaos and instability. None of them can handle it; they keep wanting to know the answer and there isn’t one.”

Fifteen years later, I can’t see that the incentive structure has changed all that much. Doing well in controlled environments seems to be counter-intuitive preparation for radical reinvention, new rules, unconventional insurgencies, and broken profit models.

-This atmosphere of disruption is affecting MBA programs themselves. Getting the costs, revenues, and rankings to acceptable levels has never been more challenging. Last year Wake Forest shut down its two-year resident MBA program, ranked in the top 50 in the US, as did Thunderbird, a pioneer in the internationally-oriented masters. In the past 5 years, however, 30 new schools earned AACSB accreditation in the U.S.; 96 others had joined the club in the preceding decade. Thus competition for students, faculty, and resources is intense, and the international nature of the MBA means that foreign competition is accelerating even faster than those 126 newly-accredited U.S. institutions would suggest: Poets & Quants states in a recent article that there are 50% more MBAs being earned today than ten years ago, so filling those classes is a challenging job. Marketing efforts to reach prospective MBA students are in something of an arms race, so many schools are cheered by a reported uptick in applications. Unfortunately, nobody can know whether more applications are the result of more applications per applicant or of more applicants jumping into the pool. Amid both increased competition and rising costs (health care continues to outpace other expenses), increasing tuition is a non-starter in most circumstances, so schools are confronting the need for creative alternatives if they are to avoid the approach taken at Wake Forest.

-An MBA is by definition something of a generalist, even with a curricular focus area in one or two functions. Meanwhile specialized business masters, in finance, accounting, marketing, or whatever, are on the rise. I have undergrads ask me about the relative merits, and each has its place. For many mid-career professionals, having an alternative to the generalist approach is attractive. Our supply-chain masters students, for example, never take courses in HR, real estate, finance, or general management: all the courses presuppose one business area rather than a variety. With years or decades already invested in that function, these students did the career calculus and concluded that the generalist approach did not make sense for them. They are far from alone, given the national trends.

-Thus we end where we began: what is the place of a two-year resident MBA? Each of those variables is getting interesting. Duration: INSEAD offers a 10-month program; one-year options are not uncommon. Locus: On-line MBAs are being offered all over the world, executive (weekend) MBAs allow students to keep their jobs and their lodging stable, and hybrids like the program at Carnegie Mellon combine multiple delivery methods. Content: As we have seen, different masters degrees in business are being offered in response to market needs, including the need for more depth of coverage: given the complexity of contemporary finance, or supply chains, or accounting, having only a handful of courses within a generalist curriculum may not provide adequate preparation for the job’s primary duties, while breadth of coverage has minimal compensatory value.

Numerous observers, including The Economist, predict major changes to the MBA market, particularly outside the top 20 or so schools. Junior faculty joining the ranks today will be in for a wild ride in the coming decades. As with so many other areas, as Ray Kurzweil argues, the rate of change is accelerating: the world is changing faster and faster, and business education will likely change more in its next 20 years than in its first century. Happy 100th birthday, indeed.

Tuesday, July 28, 2015

Early Indications July 2015: Crossover Points

I recently read an enjoyable study of the airport as cultural icon (Alastair Gordon’s Naked Airport; hat-tip to @fmbutt) and got to thinking about how fast new technologies displace older ones. Based on a small sample, it appears that truly transformative technologies achieve a kind of momentum with regard to adoption: big changes happen rapidly, across multiple domains. After looking at a few examples, we can speculate about what technologies might be the next to be surpassed.

Gordon makes uncited references to air travel: in 1955, more Americans were traveling by air than by rail, while in 1956, more Americans crossed the ocean by plane than by ship. (I tried to find the crossover point at which automobile inter-city passenger-miles overtook those of railroads, but can only infer that it happened sometime in the 1920s.) This transition from rail to air was exceptionally rapid, given that only 10 years before, rail was at its all-time peak and air travel was severely restricted by the war.

Moving into another domain, I was surprised to learn that in 1983, LP album sales were surpassed not by the CD but by . . . cassette tapes; CDs did not surpass cassettes for another 10 years. In the digital age, the album is no longer the main unit of measurement, nor is purchasing the only way to obtain songs. This shift in bundle size is also occurring in news media as we speak: someone asked me the other day what newspaper(s) I read, and the question struck me as odd; I can’t remember when I last had a physical paper land on my porch. That’s the other thing about these crossover points: they usually happen quietly and are not well remembered.

The smartphone is taking over multiple categories. Once again, we see a new unit of measurement: in the film-camera age, people developed rolls of film, then perhaps ordered reprints for sharing. (That quiet transition again: can you remember the last time you took film to the drugstore or camera shop?) Now the unit of measurement is the individual image. Interestingly, digital still cameras surpassed film cameras in 2004, but not until 2007 were there more prints made from digital than from film. Since 2007, digital prints have steadily declined. Furthermore, digital cameras themselves are being replaced by cameraphones: only 80 million point-and-shoot digital cameras shipped in 2013, that number is dropping to well under 50 million this year, and smartphone sales are on target for about 1.5 billion units this year.
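Crossover points like these can be found mechanically: given two annual series, the crossover is the first year the challenger's figure exceeds the incumbent's. A minimal sketch, with invented figures for illustration (not actual shipment data):

```python
def crossover_year(incumbent, challenger):
    """Return the first year the challenger overtakes the incumbent,
    or None if it never does. Both arguments map year -> units."""
    for year in sorted(incumbent.keys() & challenger.keys()):
        if challenger[year] > incumbent[year]:
            return year
    return None

# Invented figures for illustration only (millions of units).
film_cameras    = {2002: 60, 2003: 50, 2004: 40, 2005: 25}
digital_cameras = {2002: 30, 2003: 45, 2004: 55, 2005: 70}

print(crossover_year(film_cameras, digital_cameras))  # → 2004
```

The interesting analytical work, of course, is deciding what to count (albums vs. songs, rolls vs. images); the crossover itself falls out of the data.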

Standalone GPS units, MP3 players, and video camcorders (with GoPro being a notable exception, albeit in relatively tiny numbers) are other casualties of the smartphone boom. Landline-only houses were surpassed by cellular-only in 2009. Smartphones surpassed PC sales back in 2011.

The implications for employment are tremendous: Kodak employed 145,000 people in 1988; Facebook, a major player in personal image-sharing, has a headcount of about 9,000, most of them obviously not working on photos. Snapchat has 200 employees at a service that shares 8,800 images EVERY SECOND, a number Kodak could not have conceived of. When these technology shifts occur, jobs are lost at a greater rate than they are gained. Railroads employed more than 1.5 million Americans in 1947; the figure is now about a sixth of that. U.S. airlines, meanwhile, employed a peak of about 600,000 workers in the tech boom of 2000 -- well under half the railroads’ 1947 figure -- in a more populous country with more people traveling.

Let’s look at the smartphone. Given globalization, purely U.S. telecom numbers no longer line up for comparison. AT&T employed around a million people at its peak; right now AT&T plus Verizon (whose total includes cable TV and other operations) employ roughly 425,000 people. Apple’s 2015 headcount of 63,000 includes about 35,000 retail employees and about 3,000 temps or contractors. Samsung is a major player in world telco matters, but figuring out how many of its 275,000 employees should count in a comparison with AT&T is impossible. All told, more people have more phones than they did in 1985, but employment in the phone industry looks to be lower, and lower-paying, given how many retail employees now enter the equation.

Coming soon, we will see major changes to ad-supported industries. Already newspaper revenues are in serious decline. Digital ad revenue is already higher than newspaper, magazine, and billboard combined. “Cord cutting” is a very big deal, with clear demographic delineations: a 70-year-old is likely to read a paper newspaper and watch the big-4 network evening news; a 20-year-old is highly unlikely to do either. Comcast announced in May that it has more Internet-only subscribers than cable-TV subscribers, and the unbundling of cable networks into smartphone/tablet apps such as HBO Go will likely accelerate.

In personal transportation, there could be two major upheavals to the 125-year-old internal combustion regime: electric cars and self-driving vehicles. Tesla is obviously already in production on the former, but the smartphone example, along with such factors as Moore’s law, cloud computing, and an aging Western-world demographic, could fuel rapid growth in autonomous vehicles. In regard to cloud computing, for example, every Google car is as “smart” as the smartest one as of tonight’s software upgrade. Given the company’s demonstrated expertise in A/B testing, there’s every reason to expect that competing models, algorithms, and human tweaks will be tested in real-world competitions and pushed out to the fleet upon demonstrated proof of superior fitness.

There are many moving parts here: miniaturization, demographics, the rise of service industries relative to manufacturing (including cloud computing), growing returns to capital rather than labor, and so on. The history of technology substitutions and related innovations does have some clear lessons, however: predicting future job growth is perilous (in 1999, the US Bureau of Labor Statistics was bullish on . . . desktop publishers); infrastructure takes decades while some of these cycles (Android OS releases) run in months; and the opportunities in such areas as robotics, AI, and health care are enormous. The glass may be half-full rather than half-empty, but in more and more cases, people are looking at entirely different scenarios: Kodak vs Snapchat, as it were. Whoever the next US president turns out to be will, I believe, face the reality of this split, perhaps in dramatic fashion.

Sunday, May 31, 2015

Early Indications May 2015: Centaurs

“We expect a human-robot symbiosis in which it will be natural to see cooperation between robots and humans on both simple and complex tasks.”

-George Bekey, University of Southern California, 2005

“The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.”

-Erik Brynjolfsson and Andrew McAfee, MIT, 2014

Who performs better: a computer or a human?

The short answer is obvious: it depends on the task. Computers are now unquestionably better at chess than even a grandmaster-grade human player, and the highly visible triumph of IBM’s Watson over the best Jeopardy players shows how artificial intelligence can be applied to a linguistically rich trivia contest.

What might come next? Just this month, four of the top 10 poker players in the world played a marathon against a Carnegie Mellon computer. Given the complexities of no-limit Texas Hold’em, the result was not a Jeopardy-like rout, but the statistical tie elated the researchers. Each player played 20,000 hands; a cumulative $170 million in chips was bet over the two-week competition. In the end, the humans came out less than $1 million ahead — even though the computer did things like betting $19,000 to win a $700 pot.

Physical-world tasks are farther behind pure thought. In soccer competitions, robots are still decades away from impersonating, much less beating, human championship teams. Physical machine movement has yet to follow anything like Moore’s law, and team play is harder to model than individual contests. As these examples show, the race between people and computers/robots plays out differently, depending on the tasks being contested.

The long answer to the "who's better" question is emerging: a team of both. We will see the origin of the term “centaurs” presently, but I think this is going to be the most amazing domain, one in which each party does what it does best. We are seeing that teams of humans AND robots outperform either humans OR robots. Here are four domains in which progress is being made more rapidly than might be widely understood.

1) Audi has teamed with Stanford’s self-driving car lab to develop a TT that can beat a club-level human driver. There have yet to be head-to-head races, apparently, so human adrenaline hasn't played a role, nor have racing tactics come into play. The car simply follows a pre-programmed line and parameters around the course: it hasn't raced anyone and won yet. The centaurs are well developed here: stability control, anti-lock brakes, and sophisticated all-wheel-drive control systems all digitally amplify the skill of a human driver, so finding a "purely" human-driven car is less than straightforward.

Earlier this month Mercedes showed a driverless truck that can operate on public roads but still needs human drivers for navigating the start and finish of a trip as well as any diversions from clear, open highway such as snow covering lane lines, police officers directing traffic, or construction areas. It’s early, but eventually might the analogue of a “harbor pilot” carry over from sea to land?

2) The Internet is awash in images, some of them incredibly beautiful. Researchers at Yahoo Labs and the University of Barcelona have taught an algorithm to trawl through image databases and find beautiful but unpopular (underappreciated) images using the results of training sessions with human “votes.”

As The Economist recently noted, the process of machine learning is itself undergoing rapid improvement, in part through the process of “deep learning” as developed by the giant web businesses with both massive data and effectively unlimited computing resources. Google and Facebook are familiar names on their list; Baidu is a newer entrant into the field, having made some high-profile hires.

3) Chess has never been the same since Deep Blue defeated Garry Kasparov, in part because of a software bug that led the human to infer that the machine was substantially smarter than it was, rather than allowing for the possibility that it had simply made a dumb move. Since about 2013, teams of average players and good software have been able to defeat both grandmaster humans and computers. This type of match is where the "centaur" terminology first took hold.

4) Exoskeletons are common in Hollywood sci-fi, but robots that encase a human body and amplify its capabilities are coming into use in several scenarios:

-Rehabilitation for stroke patients, amputees, and paralytics, among other populations

-DARPA wants soldiers to be able to march or run longer, with less fatigue

-In military and other similar scenarios, able-bodied humans can be augmented to increase their lifting capacity, for example

-The Da Vinci surgical robot is a specialized exoskeleton of a sort, extending a doctor’s finger manipulations into more precise movements in the surgical field.

One big challenge for all of these efforts is in making the power source light enough to work at human scale. In warehouses, to take a very rough approximation, a forklift truck typically weighs 1.6 to 2 times the intended weight to be carried. If a human is intended to carry 200 additional pounds, that puts the exoskeleton in the 400-lb range, unloaded, so the whole package would be about 750 lb. Lowering the battery weight is the quickest way to shrink the total assembly, but physics is tough to cheat: a lot of battery power would be expended in carrying the battery, and carrying a frame sufficiently robust to support the battery.
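The arithmetic above can be put in a quick back-of-the-envelope sketch. This is only an illustration of the forklift rule of thumb from the preceding paragraph; the function name and the 150-lb wearer weight are my own assumptions (the wearer weight is inferred from the ~750-lb total cited above), not engineering data:

```python
def exo_weight_estimate(payload_lb, machine_ratio=2.0, wearer_lb=150):
    """Rough exoskeleton weight budget using the forklift rule of thumb:
    the machine weighs roughly 1.6-2x its intended payload."""
    exo_lb = payload_lb * machine_ratio        # unloaded exoskeleton frame + battery
    total_lb = exo_lb + payload_lb + wearer_lb # exoskeleton + load + human wearer
    return exo_lb, total_lb

exo, total = exo_weight_estimate(200)
print(exo, total)  # a 400-lb exoskeleton, ~750-lb total package
```

Running the 200-lb payload case reproduces the figures in the text, and the structure makes the battery problem obvious: the payload term is fixed, so every pound shaved from the frame and battery comes straight off the total.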

It will bear watching to see how roboticists and computer scientists design the cyber side of the centaur, optimizing around human strengths that might be expressed in unpredictable ways. Similarly, training a human to leave part of the task to a machine, and not to overthink the transaction, might be tricky in certain situations. In others (traction control on the car for example), people are already augmented and don’t even realize it.

At the same time, centaurs will have to deal with the infinite supply of human stupidity: what will self-driving cars do when a drunk driver is headed the wrong way on a divided highway? Wall Street is one big centaur, as the recent charges stemming from the 2010 flash crash reveal: a day trader in England apparently spoofed enough orders — manually rather than algorithmically — that programmatic trading bots reacted in unstable, unpredictable ways. The gambit seems to have worked: the day trader (who lived with his parents) made $40 million over four years. The point here is not what one Navinder Singh Sarao did or did not do, or when other actors in the flash crash might be identified, but simply that the interactions between clever (or less than clever) people and computerized entities will be a most complicated territory for the coming decades.

Once you start seeing the world this way, the possibilities expand far beyond websites, apps, or algorithms — there’s so much human work that can be done better. Consider travel: I would love to have a computer assistant work with me to book a trip. I have several free weekends, let’s say, and want to know the best, cheapest trip I should take. Right now it’s possible to spend hours looking at maps, air fares, hotel rates, weather predictions, and events calendars. A computer can’t tell me what I want — my preferences are dynamic, conditional, personal, and fickle — but right now the computer can’t really do a great job of letting me discover what I want either.

I have a suspicion some of these limitations are far from being solved. At the same time, whether it’s in the realm of computer-controlled tools (whether scalpels or lathes), transportation (personal drones are worth a book all by themselves), or human augmentation, the various tandems of human and computing capabilities will have far-reaching impact sooner than most anyone expects.

Wednesday, April 29, 2015

Early Indications April 2015 Review Essay: Being Mortal by Atul Gawande

Let me begin by dispensing with any pretext of objectivity: I think Atul Gawande, a surgeon at a Harvard teaching hospital who writes for The New Yorker, is a national treasure. Complications may be the best first book of our generation; Better is brilliant. We have personal parallels: both of us grew up in the Midwest and each named a son for the greatest physician-novelist of the 20th century. He teaches and practices at the hospital where my twins were born back in my Boston days.

Being Mortal is a sobering book. I had to read it in small doses in part to savor its richness but in larger measure to cope with the existential finality it addresses so beautifully and concretely. To the Amazon reviewers complaining that it’s based on anecdotes, let me say simply, they’re not anecdotes, they’re parables. There’s a difference. Those parables made me face my own life’s end in ways nothing else ever has.

Given that the scope of the book is broad and nuanced, I have nothing to gain by attempting to summarize it. Instead, I want to look closely at one piece of his wisdom, that regarding the Hard Conversations. Physicians aren’t trained, he states, to guide patients into death; dying is taken not as natural but as a failure. Given both a cultural reticence to see death as part of life and the readily litigious context of modern U.S. medicine, doctors tend to reach deep into the armamentarium of ventilators, central lines, kilobuck antibiotics, dialysis, and other tools near the end of life. Thus the family often can say “the doctor did everything she could,” rather than “Dad went out peacefully, surrounded by his loved ones.”

Gawande gives a great example of the alternative by recounting the story of his father’s end of life passage. Based on a conversation with a bioethicist who had just watched her own father die, Gawande asks his father frank questions about tradeoffs, about limits, about fears. One person might want to get to a family milestone (a grandchild’s wedding, say) and will tolerate high levels of pain in that pursuit; another can bear roaring tinnitus or deafness but is terrified of the implications of an ostomy bag; a third wants to be remembered as cogent rather than as a narcotized, slurring shell of her former self.

The point here is an important one: medical technology has cured old ways of dying but relocated more deaths to high-tech hospital settings. Hospitals employ doctors and technicians who are expert in life-extending treatments more than in guiding the hard conversations. Duration is taken as the relevant yardstick by default; assessing quality as a different way to judge outcomes takes time and skill. In one case, Gawande pins down one of his patients’ oncologists, who admits that the best-case scenario after a brutal chemotherapy regime is measured in months: the same prospect as with palliative care, and not the years the family and patient had hopefully been assuming. The path toward one’s demise is too often governed by what drugs and machines can do rather than what the patient and the family want.

This paradox reminds me of another Boston conversation, this one originating at MIT rather than at Harvard. The psychologist Sherry Turkle’s most recent book, Alone Together, asserts that modern communications technologies have done their job too well: millennials and also many older than they have come to expect human gratification from a tweet, a like, a text, often more than from real people in real proximity. The absence of these digital stimuli — quiet — is painful and to be avoided, she finds; people have lost the ability to be alone with their thoughts. Further, Facebook profiles, Twitter feeds, Pinterest boards, Instagram portfolios, and the other billboards we erect are carefully curated, to use the modern term of art. Thus we can control the self the world sees and interacts with, making the comparatively naked conventional social self more vulnerable and less practiced in the “messy bits” of human interaction, as she calls them.

In both of these scenarios, modern technologies — ventilators and pharmaceuticals in the former case, smartphones in the latter — have become so powerful that they rather than their users shape the tenor and often content of the debate: rather than ask “what do we want?” and use the technologies to get there, we take the limits of the technology as our boundaries and push up against that instead. In both of these instances, the problem is that modern medicines, computing, and sensors exceed human scale: no human can last long on incredibly potent modern chemotherapy poisons, nor can a person be “friends” with 5,000 people 24 hours a day.

What then are the resources for the conversations we should be having? The professor in me wants to say, “the great intellectual traditions.” Indeed, Gawande cites Tolstoy on p. 1 and Plato much later. The problem is that in the U.S. and elsewhere, college as a time for introducing and possibly pondering the big questions is out of fashion right now. In public universities especially, other agendas are in play.

In Florida, governor Rick Scott tried to make tuition for literature, history, and philosophy majors more expensive than engineering or biotechnology, notwithstanding the cost differences in the respective professoriates and infrastructure. Florida is not alone: here at Penn State, a committee was charged with updating the general education curriculum (that includes the essential ideas everyone should encounter, regardless of major) and the task is turning out to be more difficult than expected: the deadline has been extended since the idea was proposed five years ago. To assess whether a Penn State education prepares people to ask “what is a good society?” or “what is a good way to live one’s life?” you can see the committee’s report here. The principles guiding the effort have evolved and can be seen here.

Though I doubt he realizes it, Gov. Scott embodies the paradox. America’s society and economy value the contributions of engineers and programmers more than marketing assistants, retail managers, school teachers, or social service providers — the landing spots for humanities and social science undergrads. Those makers of machines and software have done amazing things, but the state of the academic humanities does little to inspire confidence that college courses in English or philosophy will teach young adults how to form healthy personal selves and relationships in a digital social context (can Aristotle help cure Facebook envy?), and to help their elders die well. Like it or not, Gawande’s Tolstoy is more than ever an intellectual luxury good rather than the staple of a balanced diet. Thus college and the wisdom of the past are less of a resource than some of us might hope.

What Gawande calls on us to do — beginning with doctors and patients, then patients and families — is the hardest thing: to listen, including to our deepest selves, and to talk honestly. What do we value? What makes us frightened? How do we reconcile ourselves to family differences and breakages in our final days? To watch mom or granddad die, and to help listen to what they really want, is both terrifically hard and a great gift. That Gawande has jump-started that conversation not for a handful but for thousands of people makes him the closest thing to a secular saint I have ever witnessed.

Tuesday, March 31, 2015

Early Indications March 2015

After a busy few months away, the newsletter returns with a collection of news and notes.

1) My long-ish blog post on Uber, Airbnb, and regulation as competitive barrier to entry was just posted today on The Conversation, a foundation-funded collection of various informed points of view.

2) I am delighted to announce that MIT Press has me under contract to deliver a book manuscript on robots and robotics for 2016 publication.

3) The reaction against Indiana governor Mike Pence's signing of the "religious freedom" legislation has been fascinating to watch, in part because I grew up in the state. One analysis suggested that the polarization of media has led to "echo chambers" on both left and right: if you listen only to the cheerleaders for your side, the reaction of what used to be called "the silent majority" can be a blindside smackdown. Pence's complete lack of articulate answers to the broader media (most visibly George Stephanopoulos) suggests he may have little idea of how non-social-conservatives outside Indiana see the world.

Lest this viewpoint appear partisan, Hillary Clinton's stonewalling of the archival process suggests a similar blind spot. One difference is that she is much more experienced in handling the media than Pence, but also the integrity of the historical record matters less to most people than the prospect of Aunt Peg and her partner Allison getting turned away from a hotel. Gay and lesbian rights have become personalized in a way that records retention has not: an overwhelming majority of Americans knows a gay or lesbian family member or colleague. Very few of us can even name an archivist or historian.

Once that personal association sensitizes people to an issue, social media provides a ready environment for expressed outrage. The power of the hashtag allows individuals to feel like they're part of what Lawrence Goodwyn referred to as "movement culture" when he discussed the civil rights protests of the 1960s. There, the options were to be on site or watch on TV; now, one can be physically remote from the protest yet feel active solidarity. The tidal wave of #boycottIndiana could not have happened in a TV-driven media environment, and I'm sure both parties' 2016 presidential nominees will remember the episode.

4) Amazon remains relentless in its pace of innovation. The drone delivery system, effectively grounded by current FAA rules, is being tested in Canada (a country where Target couldn't operate profitably). The Echo AI appliance is shipping and changes household behavior in ways I will examine in a forthcoming letter. Today, Amazon announced "impulse buy" devices called Dash buttons you can affix to the storage spaces for household items like bleach or paper towels. These are another small but discernible step in the march toward the "smart" house. Yesterday Amazon launched a listing of vetted home services providers ("from plumbers to herders" in the words of one headline), once again connecting the physical and virtual worlds unlike any other company.

At the same time, Amazon quietly stopped its mobile wallet efforts, which unlike the AT&T/Verizon effort had the benefit of not sharing a name with a Middle Eastern terror group. The rapid warehouse buildout appears to be continuing, as does the slow rollout of home grocery delivery. In short, the company that has consistently zigged while others zagged (or stood still) appears to be moving full speed ahead to continue launching new initiatives that challenge conventional wisdom in field after field.