Friday, April 29, 2016

Early Indications April 2016: Tesla Thoughts

In the absence of tech IPOs, must-have new apps, or killer demos, we are in a period of waiting:

  • will Uber continue to scale, win its legal battles, and develop a self-driving ride-share car?
  • will the rapid decline of so many “unicorn” company valuations chill the funding side of the cycle as Theranos et al become cautionary tales?
  • will Apple rebound from the quarter where it failed to set a revenue record, whether on iPad adoption, the watch, or growth in emerging economies?
  • will Facebook ever hit a wall past which privacy concerns, a saturated user base, and generational turnover slow its growth of ad revenue?

Amid all of this wait-and-see, one big shock has hit the tech world, and it’s more in the realm of bits (and electrons) than atoms: Tesla took $1,000 deposits for roughly 400,000 Model 3 sedans — in under a month. For scale, BMW sold 100,000 3-series cars in the U.S. in 2015, a 6% drop from 2014. The name of Tesla’s car is no accident: BMW is the standard for the mid-size sport sedan, and Tesla likely wants to do to that benchmark what the Model S did to Mercedes S-class, BMW 7-series, and Audi A8 sales: torpedo them. All of a sudden, Tesla is shaking up the automotive world, and every time I investigate, another interesting tidbit comes up.

First, the Model 3 sales might not be the biggest news. Solar power is about to get cost-competitive in some climates (without subsidies), given new advances in sun-tracking technology for the arrays. One obvious drawback is nighttime, so battery storage is one key way for solar to make sense. Tesla’s energy business unit is on track to sell 168.5 megawatt-hours of energy storage to SolarCity (the solar installer also chaired by Elon Musk), according to GTM Research. That number is six times what Tesla sold SolarCity just last year, and a 60% increase on the entire industry’s output for 2015. In addition, the 85 kWh battery in the Model S is massive — just how big, I didn’t realize until I read that it can power the average household for 3 1/2 days. What does that do to electric company projections, to household disaster recovery, to our thinking about what charges what in the family garage? Tesla is remaking the auto industry, but power generation could be affected pretty radically as well.
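
A quick back-of-the-envelope check of that claim (the ~24 kWh/day household consumption figure is my assumption, roughly the U.S. average; the pack size is from the article):

    pack_kwh = 85                    # Model S battery capacity
    household_kwh_per_day = 24       # assumed average U.S. household consumption
    print(round(pack_kwh / household_kwh_per_day, 1))   # ~3.5 days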

Second, Tesla is learning the realities of manufacturing quality control, vendor management, and other “boring” supply-chain tasks that are tripping up the company. Some examples: Reuters reports that Tesla spent more than $1,000 per car on repairs, and set aside about $2,000 more per car for future repairs, on cars sold in 2015. Daimler (a more apt comparison than GM or Ford, given the average selling price) spent $970 per vehicle but set aside only $1,294. Tesla’s larger reserve appears well justified: the company has missed ship dates, suffered from repeated quality issues, and is trying to rewrite the industry rule book with over-the-air software updates.

The Model X SUV (the one with the funky gull-wing doors) is getting blasted by online forum reviewers, at the Wall Street Journal, and from Consumer Reports (which loved the Model S). Even CEO Elon Musk said earlier this year, "I'm not sure anyone should have built or designed this car, because it's so difficult to make." Doors won't open (or open correctly), the heater is chilly, and the touch-screen freezes, among other issues. Some of this is a reflection of making something as complex as an automobile, now with more software than ever before. Musk tried to point out a bright side in one presentation, noting that only 6 out of 8,000 parts for one car were in short supply — but a single missing part can stop production entirely.

Third, Tesla is taking a bold tack on self-driving. Its cars on the road are minimally instrumented (in that they lack lidar), but they are recording driving data at a prodigious clip: one estimate claims that Tesla “learns” (in AI terms) as much in one or two days as Google has from all of its cumulative driving experience. Thus if Google sees one deer strike per hundred thousand miles, let’s guesstimate, then Google has a base of 12 or 15 deer strikes whereas Tesla has hundreds or even thousands. Every Tesla has a cellular data connection for the software updates, but that link also harvests driving data from owners who do not opt out of being guinea pigs. Thus the Model 3 could offer stronger autopilot capability than anyone else in the market when it appears.
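
A rough sketch of the arithmetic behind that guesstimate; the strike rate and both fleet-mileage figures below are illustrative assumptions, not reported data:

    strikes_per_mile = 1 / 100_000           # guesstimated deer-strike rate
    google_fleet_miles = 1_500_000           # assumed cumulative Google autonomous miles
    tesla_fleet_miles = 100_000_000          # assumed cumulative Tesla customer-driven miles
    print(google_fleet_miles * strikes_per_mile)   # ~15 observed events
    print(tesla_fleet_miles * strikes_per_mile)    # ~1,000 observed events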

Fourth, the Model 3 buyers could face a nasty surprise if they are late in the queue. Specifically, the $7,500 U.S. federal tax credit for electric cars begins to phase out once a manufacturer has sold 200,000 qualifying vehicles, and Tesla’s Model S and X sales count toward that cap. If U.S. sales of the Model 3 are 50% of the total, using round numbers, the subsidy (which can be augmented with state incentives in a given locality) will run out early in that 400,000 run: sometime in 2018. Thus buyers who came late to the party might pay the sticker $35,000 base price rather than $27,500 (or less in some states). In reality, Musk reported, the typical option package for the first weekend brought the total average selling price up to $42,000 or so.
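
A simplified sketch of the phase-out arithmetic; the prior Model S/X sales figure is an assumption, the 50% U.S. share is the round number used above, and the real credit tapers over several quarters rather than vanishing at the 200,000th sale:

    cap = 200_000
    prior_us_sales = 100_000        # assumed Tesla U.S. sales before Model 3 deliveries begin
    us_share = 0.5                  # assumed U.S. share of reservations

    model3_units_with_credit = cap - prior_us_sales        # ~100,000 U.S. cars
    queue_covered = model3_units_with_credit / us_share    # ~200,000th reservation worldwide
    print(queue_covered)            # roughly the first half of the 400,000-car queue
    print(35_000 - 7_500)           # $27,500 effective base price while the credit lasts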

Fifth, the big question is, of course: can Tesla meet demand? The Model S began as a 15-20 units/week exercise in 2012 before hitting 1,000/week in 2015. Even assuming a faster ramp this time around, 400,000 is a big leap, from roughly 50,000 cars a year to three or four times that, in less time than the Model S took to reach scale volumes. The good news is that the Model X complexity costs were expensive lessons, presumably well learned, and the Model 3 has the potential to be the “best of all worlds” assuming a) battery production at the Tesla/Panasonic factory in Nevada comes on line as predicted, b) the same engineering that delivers “stunningly graceful” ride quality in the Models X and S can be scaled down to meet the price point (in part from a steel rather than aluminum body, most likely), and c) the factory processes, vendor management, and warranty issues can be contained.

For those who ask, no, I did not reserve a Model 3: range anxiety in the middle of nowhere is real (the nearest Supercharger is more than an hour away, and there are none in the places I tend to drive for vacation).

Finally, the wonders of Quora continue to amaze me. There, I learned about the Model S as a “green” car: most electricity is not carbon-free, obviously, but how much does power source matter? If we use a Toyota Prius as a benchmark (19 metric tons of CO2 per year), the Model S wins only if it’s on a clean-running grid, such as the California mix of fuels/methods (11 metric tons) or if one can connect to wind (at which point emissions fall below 1 metric ton). A coal-fired diet for the Tesla’s electricity results in a 34-metric-ton CO2 toll, nearly twice that of the Prius.
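
Restating the Quora comparison with those annual-CO2 figures (metric tons per year, with the sub-1-ton wind number rounded to 0.9 for the example) makes the ratios explicit:

    prius = 19
    model_s = {"coal": 34, "california grid": 11, "wind": 0.9}
    for source, tons in model_s.items():
        print(source, round(tons / prius, 2))   # coal ~1.8x a Prius; wind ~0.05x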

Given Apple’s share price slide and the apparent saturation of its main markets, along with the heavy cross-pollination of engineers who have worked at both companies, should Apple buy Tesla? Apple’s supply-chain and marketing expertise and its capacity for major capital expenditures make it a seemingly attractive suitor. In addition, Tesla CEO Elon Musk may be too busy with Mars mission plans in his SpaceX capacity to get deeply engaged in the much less interesting earth-bound business issues such as those at Tesla: quality control, procurement analysis, lobbying for company-owned dealerships, etc.

I’m partial to another scenario, however: Apple could team up (somehow) with BMW, a company with a viable electric compact, the i3, already on the market at a $44,000 MSRP. The two brands share customer bases, design aesthetics, and profit margins. Apple CEO Tim Cook is reported to have visited the i3 assembly line, which is itself a pretty amazing operation (as is Tesla’s, to be sure). Tesla has a nice injection of working capital from those deposits, but also a tall order in the need to execute a step-function increase in production, engineering, and after-sales service on an entirely new scale. Cook’s expertise is in supply chains, and he likely understands better than most the risks facing Tesla at this juncture.

However it plays out, cars are suddenly “cool” again, for entirely new reasons, in part because the global smartphone market appears to be saturating. Wherever one sits in the tech industry (except at Amazon Web Services, apparently), there seems to be concern and caution rather than the unbounded-horizon talk to which we’ve grown accustomed (Intel just laid off more than 10,000 employees, to take only one example). Seeing who emerges from the recalibration will be fascinating indeed.

Thursday, March 31, 2016

Early Indications March 2016: Robotics Business Models

For all the engineering successes of robots in the past few years, it’s unclear how the various sub-fields will make money. Past business models appear to be of limited use, so after a recap of the current conundrum, I will speculate on some options.

First, the successes. The Boston Dynamics military robots can run incredibly fast, traverse uneven terrain, and maintain their balance in many circumstances. Last fall Amazon renamed Kiva, its acquired robotics company, as Amazon Robotics; the unit is hiring aggressively and serves reference customers in the supply chains of such firms as The Gap, Walgreens, and Crate & Barrel. Self-driving cars are becoming real, and improving, far faster than anyone could have predicted: Tesla made the Autopilot feature (an enhanced cruise control, essentially) a software download in 2015; no hardware retrofits were needed. Drones comprise an essential, if controversial, facet of U.S. foreign policy.

Those engineering successes, however, have not yet translated to revenue. Amazon appears to be in investment mode, with LinkedIn postings mentioning a “new robotics platform” that could involve machine vision. Google/Alphabet is reportedly selling Boston Dynamics, but the future of the other companies acquired at about the same time is less clear within the Alphabet/X division of labor. Google’s commitment to self-driving cars looks to be extensive, but the revenue model — ads? licensing? OEM? a platform play? — remains undisclosed, or undiscovered.

Who might buy Boston Dynamics? Its founder, former MIT professor Marc Raibert, is one of the world’s leading authorities on “legged-ness” (balance and locomotion). The firm has earned multiple DARPA contract wins. But the company’s robots (especially the towering humanoid Atlas) can be frightening, the notion of robotic warriors scares some people, and the economics of potentially being a defense contractor, with long procurement and decades-long product support cycles, don’t work for most start-ups. Amazon has the deep pockets, and potentially the culture (and Amazon Robotics is already located outside Boston); it has been frequently mentioned as a logical landing spot, but the fit of humanoid robots in Amazon’s supply chain isn’t obvious. Toyota announced a massive robotics initiative led by Gill Pratt (formerly of the DARPA Robotics Challenge, which featured the Atlas as a shared platform) and James Kuffner (formerly the robotics head at Google). Both men know Boston Dynamics well. Another prediction comes from Rich Smith of The Motley Fool: General Dynamics builds land-based weapon systems (tanks) already, knows the procurement process, and could scale production of military robots relatively easily.

None of these three companies would surprise me. Some less likely suitors: the military side of iRobot (which is spinning out of its now-consumer-oriented parent, also based in Boston); GE (making big bets on 3D printing, the Internet of Things, and advanced manufacturing, and with a military clientele); Boeing/Lockheed Martin/Northrop Grumman. I don’t know that they could write the necessary check, but the nonprofit SRI might be appropriate for a pre-revenue technology shop with potential government/defense clients.

Apart from Google, other companies exhibit a lack of certainty. Daimler’s CEO said last year that his firm did not want to play Foxconn to anyone’s Apple. Uber then reportedly placed an order for 100,000 S-class Mercedes sedans, some of which could be self-driving. Terms were not announced, but for those doing the math at home, that’s $10 billion at current sticker prices, not counting any autonomous add-ons. (Daimler’s Freightliner unit is leading the way in self-driving semi trucks, so the autonomous speculation is not far-fetched.) As far as business models for robotics go, clearly one play is selling pickaxes to miners.
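
For the record, the math at home, where the ~$100,000 average S-class sticker price is my assumption:

    cars = 100_000
    avg_sticker = 100_000               # assumed average S-class price, before autonomy options
    print(f"${cars * avg_sticker:,}")   # $10,000,000,000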

What might be some other robotics business models?

-Financing
Just as GE Capital, GMAC, and other firms carved out niches providing financing for capital expenditures, there will be a role for companies that can finance robotics purchases, package the loans for Wall Street, and service the accounts on all sides of the transaction. There may be sub-specialties: hospital robots, amusement-park animatronics, and robotic materials-inspectors (for airplanes for example) might each require different sorts of business expertise in the financial realm.

-Platforms
IBM dominated mainframe computing, Unix had its day in the midrange, Windows enjoyed near-monopoly status in the 1990s, and now Android and iOS control much of the world smartphone installed base. Will there be a similar software system for robotics? Possibly, but the heterogeneity of uses and contexts may mean that no one market is big enough to spawn a predominant operating system. At the same time, certain vendors could control crucial IP, much like Qualcomm did and does on the mobile phone. Maybe one company cracks machine vision particularly elegantly and becomes the Qualcomm or Intel Inside of that one subsystem. Traction in mobile robots, batteries, and privacy audits could each end up branded by a vendor that supplies the final machine-assembler.

-Security
Who could become the trusted third party, much as some websites announce (TRUSTe et al), that monitors data collection and handling, payments, identity management, and other functions that will become necessary? Google had a major trust issue with Glass in its consumer incarnation; is there some player that could restore confidence as facial recognition, to give only one example, runs the risk of becoming abused, then discarded, and eventually rejected in the market?

-Training
Numerous universities, starting with UC-Berkeley and joined recently by MIT, have added targeted training in various facets of data and analytics. The same will hold true here. From trade schools to PhD programs, there will be demand for robot assemblers, installers/systems integrators, repairers, programmers, designers, and more. “Original” education as well as retraining will be required.

-Component suppliers
Unlike computers that sit on a desktop or even reside in a pocket or purse, computers that move about in the physical world require more sensors, actuators, robustness/ruggedizing, and batteries.  Obviously some components will come from the usual suspects: batteries from Panasonic; cameras from Sony, Omnivision, or Toshiba; design by Huge, Frog, or IDEO; etc. Other specialized components such as hydraulics, treads/wheels, and casing could present new opportunities for smaller, more nimble entrants.

-Services
As care-bots improve, a visiting nurse service might fold robotic care into an existing contract: a given individual might receive x hours of skilled nursing care per month, y hours of semi-skilled nursing, and z hours of robotic assistance, whether in moving, play, or monitoring. Similar robotic components could become part of security services such as Pinkerton or Securitas, safety inspections in mines or oil drilling, or repairs in places that humans can’t easily reach.

-Rentals
Robots are particularly suited for dull, dangerous, or dirty jobs. Temp-agency firms such as Kelly hire out people to do some of these, so it would be a natural extension of the current business to rent out robots to do nasty things like clean chemical tanks, sort packages for UPS at peak times, or clean out student apartments before move-in in August. (There’s already a germ-killing robot used in hospital bathrooms — why not deploy it in dorms?)

-Hobbyists and gadget-lovers
The personal computer took hold among a subculture that merged political idealism with what’s now known as “maker” culture. Certainly there are plenty of people creating 3D printed artifacts, building robots, experimenting with drones, and even programming self-driving cars (Google “GeoHot car”). The Jibo social robot wants to be a core part of the family interactions (albeit of early adopters) rather than a science-fair project. Absent an obvious home-entertainment adoption pathway as of now (and robotic toys like the Aibo could be the breakthrough device), in what room(s) will a robot reside in a human dwelling? If it’s the workshop, that’s a market, but nothing like the demand for PCs or smartphones. (The entire U.S. power tool market is roughly $10 billion.) That may be the initial play: recall that one of the proposed use cases for a personal computer in the 1980s was as a recipe repository, but that hardly turned out to be the PC’s killer app.

-Testing and certification
As robots and self-driving cars move more and more freely among human populations, the potential for injury will require inspections and licenses closer to those for private aircraft than for cars or boats. Given the stakes in liability cases, the emergence of a certifying authority will be a major step toward market adoption.

-Telepresence
The long history of telephones as a less immersive communications technology suggests that people will readily pay for the ability to “be” in more than one place at a time. Suitable Technologies’ Beam robots (made famous by Edward Snowden at TED) could be used for remote visits to a project team, supplier factory, or art museum. Paying by the visit/meeting, or the minute, may be the win; selling the robots may become a secondary consideration if demand pulls said teams, factories, or museums to offer remote visitation.

That’s a brief list; I trust readers will spot many opportunities that I missed. More tellingly, companies (some of which are household names) will soon be reporting results from robotic businesses, and market forces will put pressure on those business models from both customer and capital perspectives. In other words, Alphabet’s sorting-out of its investments portends a much broader testing of the new regime’s various configurations; picking winners will be slightly easier once we get a sense of what the contenders look like.

Monday, February 29, 2016

Early Indications February 2016: B2B e-Commerce update

While there has been substantial attention paid to consumer e-commerce (Black Friday numbers, for example), the state of both practice and understanding for online business-to-business commerce is less developed. There are some good reasons for this: B2B prices are often more complex than merely buttons in a shopping cart application, sales representatives still play a role in education and facilitation, and hybrids of call center, online, in-person, and even fax orders are hard to disentangle. For all of these obstacles, however, fundamental questions remain regarding B2B e-commerce.

A current study attempts to answer some of these questions. Beginning with 100 sites and expanding to 250 later this year, I am collecting basic statistics for web/mobile presences. Preliminary results are in, with the caveat that the short list does not allow industry-level analysis given insufficient sample sizes. Sixteen industries are included, which means there are some apples (Deloitte) to oranges (Bobcat) to grapes (Dolby) comparisons implicit in the results.

Just glancing at the front pages of these sites, it’s apparent that the customer is not always the primary audience: some sites clearly addressed investors most prominently, while in other cases, recruiting appeared to be a higher priority. My colleague Ralph Oliva asked how often a customer value proposition is evident, and on this admittedly subjective yardstick, only 40% of sites successfully articulated why someone should buy from the company. Finally, commerce was rarely an option outside the firewall: site logins were common, and I obviously could not see order, tracking, or content registration functionality behind the curtain. The one exception to the lack of e-commerce on B2B sites was branded merchandise: hats, sweatshirts, and jackets, along with die-cast model tractors and such, were widely available.

Social media activity was common: 75 sites linked to Twitter, and 93% of those feeds were current. Oracle displayed a staggering 70 distinct Twitter presences; I did not attempt to analyze the content on each of these relative to the others. Facebook and Twitter often featured identical content, the differences in audience notwithstanding. LinkedIn was also commonly employed; whether for sales or recruiting, I did not analyze.

Especially as audiences use mobile devices for more of their access, many sites appear to be outdated. One company had a page copyrighted 1999, with the “Download Internet Explorer” logo still live. PDF product catalogs (sometimes separated into small page groupings, but often a massive single download) were common; web-native and mobile-native catalogs were the decided exception. Data sheets for chemical exposure and other risks were frequently available for download; this seems to be particularly low-hanging fruit to pick. Only 65 of the 100 sites were mobile-friendly, while only 12 offered smartphone apps, some of which were extremely well executed.

In sum, innovation was rare, basic execution (such as site loading time) was often uneven, and navigation often confused rather than enlightened. The good news is that there is so much upside, at so little cost. The bad news is that it is hard to know where to start. When asked to summarize the top areas of opportunity, I can offer three Cs.

*Content
Many B2B purchases are complex, such as semiconductors, medical devices, or industrial adhesives for special purposes. In such instances of considered purchases, companies that better inform the customer will be at an advantage. I observed wide variation in the richness and depth of documentation; “contact your representative” was unfortunately the default solution on a large number of sites. The often-absent customer value proposition and/or branding can be considered as another content area.

Opportunities to improve this state of affairs abound. Only 21% of sites sampled offered a corporate blog, for example, a channel that interfaces nicely with social media, with trade shows, and with formal content such as white papers or customer case studies. An even richer opportunity lies with the use of online video. While 85% of sites offered some form of video, gauzy corporate overviews were often the first option. In contrast, the really effective uses of video were rare: points of view, such as Corning’s “A Day Made of Glass” (with 25 million YouTube views); precise training and instruction (look at Yaskawa); and head-to-head product comparison (Bobcat stands out here), to name only three. Timken got almost 500,000 views for an instructional video about automobile wheel bearings. Haas Automation has a fine video library in support of their machine tools and associated processes. These are the exceptional few; most companies have substantial room for improvement, at low cost and with essentially free distribution (compared to the days of pressing DVDs). Social media, cheap in direct expense, does require dedicated headcount, but most companies in the sample had room for improvement in richness, relevance, and engagement.

*Configurators
While in some cases it makes sense to talk to a live salesperson or technician, there are still many opportunities to provide detailed configuration information and perhaps pricing. Such tools were used effectively at Texas Instruments, MRC Global (in an app), Kennametal, and NXP. There’s no reason they couldn’t be used at more businesses. Some configurators are deployed as lead generation tools rather than as customer information repositories: after doing all the work to select and option a Bobcat tractor, for instance, I had to contact a dealer for the actual price.

*Customer contact
The final C provided many examples of good, bad, and ugly options. A simple example lies with e-mail. According to the Direct Marketing Association, commercial opt-in e-mail generated $36.70 of sales per $1 of investment in 2014; it was the ROI champ, far outpacing Internet search (about $22.00), direct mail catalogs ($7.27), and internet display ads ($19.21). How many sites asked for my e-mail address to send me newsletters, product updates, point of view pieces, or other messages? Only 40 of 100. (Similarly, only 40 of 100 connected trade show information to the online presence.)

Contact information was often presented from the inside out: here’s how we are organized (by geographies, by industries, by dealerships, etc) and it’s up to you the customer to figure us out. A smaller number of sites organized contact by customer tasks: “How can we help you?” is a user-friendly way of organizing different product lines, industry solutions, or support functions from the outside in. These rubrics were, unfortunately, in the minority. Many sites offered multiple navigational paths, often on the same page (which can be good practice): Oracle’s pull-down menus were quite complicated and incredibly information-dense, but seemed to get the job done; Oracle’s direct competitor SAP opted for a very different, leaner user experience model. Simplest of all in this industry was Salesforce.

The other number that jumped out with regard to contact concerned live chat. With millennials often eschewing the telephone as a voice tool, “talk to a rep” often sounds unappealing, and “e-mail us for a quote” may take too long. Given these demographic trends, along with the reality that B2B customer support often occurs on customer premises or on noisy shop floors where voice communication is distracting or impossible, it was surprising to see only 13 of 100 sites offer a chat function.

There were many other surprises (such as how often basic execution failed: broken links, outdated posts, and improper server configurations were not uncommon), and the larger data set will deliver further insights. Some of my potential research questions concern proximity to B2B/C sites, especially Amazon and eBay: are companies that overlap these channels more likely to adopt similar site functionality, or should industrial distributors seek to look as little like Amazon as possible? (I saw both approaches in the sample.) Further work also needs to be done to compare like companies or divisions: how can the B2B universe best be segmented so that insights can cross domains at the same time that differences (in purchase frequency, in service/product hybridization, or in end use of the product) are recognized? Finally, getting insight into what’s behind the firewall would be revealing if it is feasible. Until then, I hope these preliminary results offer food for thought and I will be happy to share the entire presentation of findings upon request.

Saturday, January 30, 2016

Early Indications January 2016: Shocks

The past month has been marked by a series of extraordinary events that would have been completely unforeseen only a year ago, or even in mid-summer. (In June, West Texas Intermediate crude oil futures contracts were selling at $60 a barrel, roughly twice the current price.) While this may be an unusual month, the larger question remains: how can human institutions evolve to better address both sudden and glacial change, in both positive and negative directions? Put another way, if we see what keeps surprising us, maybe we can adapt our practices and assumptions to be surprised less often, less acutely, or both.

Oil is certainly big news. While the dynamics of a global market, controlled by a wide range of political and business players, remain fascinating, “common knowledge” in energy markets has shifted dramatically. Recall how recently talk of “peak oil” was common: according to Google Trends, searches for the phrase spiked in August 2005 and, at a slightly lower index, May 2008. After 2011, interest dwindled to baseline noise, and today we wrestle with the problems of sub-$2.00 gasoline. The precise events coming into play right now have complex origins: innovations in drilling technology, geopolitical forces (including bitter national and ethnic rivalries), and national budgets whose planning assumptions have been obliterated. Saudi Arabia, for example, can produce a barrel of oil for about $3 but needs $93 to break even for budget purposes given its economic monoculture; Venezuela, to take the most extreme example, needs $149 a barrel to break even. At $30, budgets in many places (including Alaska) are a mess.

Given that oil is such big business in so many parts of the world, considerable expertise is deployed in forecasting. Yet the industry’s record, with regard to both estimates of oil reserves and now prices, is consistently poor. Perhaps the lesson is that complex systems cannot be predicted well, so the best answer might be to shorten planning horizons — a tough call in light of the magnitude of investment and concomitant project lead time required.

The next “shock” is in some ways predictable: U.S. infrastructure investment has lagged for so long that calamities on bridges, railroads, and water supplies are unfortunately overdue. The particular politics of Flint, Michigan’s mismanagement are also not surprising given the nature of both large, overlapping bureaucracies and the governor’s high priority on municipal budget repair to be performed by unelected “emergency managers.” The competing agendas are difficult: if bondholders lose trust, investment becomes prohibitive. At the same time, the dismissal of known test results and risks, and the human consequences thereof, are criminal: GM stopped using Flint water because it was destroying auto parts while Flint’s citizens had to keep drinking it.

The pattern in Flint is not all that unusual, except in its impact: given the size of federal and state governments, it’s hard to imagine whom voters could hold accountable for substandard ports, roads, and airports. Many are in poor repair, but the constituencies are diffuse and/or politically marginal, and so can be ignored. Who can one complain to (or vote out) regarding connections inside Philadelphia’s airport, or Amtrak’s unreliability, or Detroit’s crumbling schools? Conversely, what good came to the Detroit mayor who supported that airport’s modernization? Who is the primary constituency that benefits from New Jersey’s extremely heavy spending on roads ($2 million per state-controlled mile) that are consistently graded as among the nation’s worst (at both the Interstate and local arterial levels)? Rather than planning horizons, the issue here appears to center on accountability. The interconnections of race, poverty, and party politics can also fuel tragedy: decisions were made in Flint that would be unthinkable in more affluent Detroit suburbs. (Another water issue, the one in California, could also amplify class conflicts in the event the El Niño snowpack melts to last summer’s levels in coming years.)

The third shock is a positive one. Google’s DeepMind unit (acquired for $400 million in 2014) announced that it had used machine learning to develop a computer capable of defeating the European champion at Go, the ancient Chinese game of strategy. AlphaGo, DeepMind’s program, will now play a higher-ranked champion in March. If the machine can win, another cognitive milestone will have been achieved with AI, about ten years faster than had been generally predicted. Interestingly, Facebook had previously announced that it had made significant progress at Go in a purely machine tournament, but the Google news swamped the magnitude of Facebook’s achievement.

To their credit, DeepMind’s team published the algorithmic architecture in Nature. Two distinct neural networks are built: the “policy network” narrows each turn to a small number of promising moves, while the “value network” evaluates the positions that result, estimating the likely winner without playing every line out to the end. It’s likely the technology will be tested outside abstract board games, potentially in climate forecasting, medical diagnostics, and other fields.
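
A toy sketch (emphatically not DeepMind’s code) of how that division of labor works; both “networks” below are random stand-ins, and the real system also couples them to Monte Carlo tree search and trains them on expert games and self-play:

    import random

    def policy_network(position, legal_moves):
        """Stand-in: assign each legal move a 'promise' score in [0, 1]."""
        return {m: random.random() for m in legal_moves}

    def value_network(position):
        """Stand-in: estimate the eventual outcome of a position in [-1, 1]."""
        return random.uniform(-1.0, 1.0)

    def choose_move(position, legal_moves, apply_move, top_k=3):
        priors = policy_network(position, legal_moves)
        # 1) The policy network prunes: keep only the top_k most promising moves.
        candidates = sorted(legal_moves, key=priors.get, reverse=True)[:top_k]
        # 2) The value network scores each resulting position directly,
        #    in place of exhaustive lookahead.
        return max(candidates, key=lambda m: value_network(apply_move(position, m)))

    # Tiny demo on an abstract game where a position is just the list of moves so far.
    position = []
    legal_moves = list(range(19 * 19))            # a Go board has 361 points
    apply_move = lambda pos, m: pos + [m]
    print("chosen move:", choose_move(position, legal_moves, apply_move))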

In this case, the breakthrough is so unexpected that nobody, including the scientists involved, knows what it means. Even though Deep Blue won at championship chess and Watson won at Jeopardy, neither advancement has translated into wide commercial or humanitarian benefit even though the game wins were in 1997 and 2011 respectively. This is by no means a critique of IBM; rather, turning technology breakthroughs in a specific domain into a more general-purpose tool can in some cases be impossible when it is not merely hard.

Elsewhere, however, giant strides are possible: Velodyne lidar, the spinning sensor atop the first-generation Google car, has dropped from $75,000 per unit to a smaller unit costing under $500, with further economies of mass production to come. Even more astoundingly, the cost of human genomic sequencing continues to plummet: the first human DNA sequence cost $2.7 billion, for the entire research program. The per-genome cost was still roughly $100 million as of 2002; today it’s approaching $1,000, outpacing Moore’s law by a factor of thousands (depending on how one calculates) in a 15-year span.

In each of these technological instances, people have yet to invent large markets, business models, or related apparatus (liability law, quality metrics, etc) for these breakthroughs. As the IBM example showed in regard to AI, this is in some ways normal. At the same time, I believe we can create better scaffolding for technology commercialization: patent law reform comes immediately to mind. Erik Brynjolfsson and Andrew McAfee suggest some other ideas in their essential book, The Second Machine Age.

Education is of course a piece of the puzzle, and there’s a lot of discussion regarding STEM courses, including why more people should learn to code. I’ve seen several people make the case that code is already the basis of our loss of privacy, and there will be more deep questions emerging soon: who owns my genomic information? who controls my digital breadcrumbs? should big-data collection be opt-in or opt-out? Yes, knowing _how to_ code can get you a job, but more and more, knowing _about_ code will be essential for making informed choices as a citizen. The widespread lack of understanding of what “net neutrality” actually entails serves as a cautionary tale: few people understand the mechanics of peering, CDNs, and now mobile ad tech, so much of the debate misses the core issue, which is lack of competition among Internet service providers. “Broadband industry consolidation” isn’t on anyone’s top-5 agenda in the U.S., yet even comedian John Oliver identified it as the major nut to crack with regard to information access.

In the end, humans will continue to see the future as looking much like the present, driven by psychological patterns we now understand better than ever. As shocks increase in both magnitude (for many reasons, including climatic ones) and impact (because so many aspects of life and commerce are interconnected), it may be time to rethink some of our approaches to planning for both the normal and the exceptional.

Monday, January 25, 2016

Early Indications November 2015: Broad thoughts on the Internet of Things



Current state

The notion of an Internet of Things is at once both old and new. From the earliest days of the World Wide Web, devices were connected so people could see the view out a window, traffic or ski conditions, a coffee pot at the University of Cambridge, or a Coke machine at Carnegie Mellon University. The more recent excitement dates to 2010 or thereabouts, and builds on a number of developments: many new Internet Protocol (IP) addresses have become available, the prices of sensors are dropping, new data and data-processing models are emerging to handle the scale of billions of device "chirps," and wireless bandwidth is getting more and more available. At a deeper level, however, the same criteria -- sense, think, act -- that define a robot for many working in the field also characterize large-scale Internet of Things systems: they are essentially meta-robots, if you will. The GE Industrial Internet model discussed below includes sensors on all manner of industrial infrastructure, a data analytics platform, and humans to make presumably better decisions based on the massive numbers from the first domain crunched by algorithms and computational resources in the second.

Building Blocks
The current sensor landscape can be understood more clearly by contrasting it to the old state of affairs. Most important, sensor networks mimicked analog communications: radios couldn't display still pictures (or broadcast them), record players couldn't record video, newspapers could not facilitate two- or multi-way dialog in real time. For centuries, sensors in increasing precision and sophistication were invented to augment human senses: thermometers, telescopes, microscopes, ear trumpets, hearing aids, etc. With the 19th century advances in electro-optics and electro-mechanical devices, new sensors could be developed to extend the human senses into different parts of the spectrum (e.g., infrared, radio frequencies, measurement of vibration, underwater acoustics, etc.).

Where they were available, electromechanical sensors and later sensor networks

*stood alone
*measured one and only one thing
*cost a lot to develop and implement
*had inflexible architectures: they did not adapt well to changing circumstances.

Sensors traditionally stood alone because networking them together was expensive and difficult. Given the lack of shared technical standards, to build a network of offshore data buoys for example, the interconnection techniques and protocols would be uniquely engineered to a particular domain, in this case, salt water, heavy waves, known portions of the electromagnetic spectrum, and so on. An agency seeking to connect sensors of a different sort (such as surveillance cameras) would have to start from scratch, as would a third agency monitoring road traffic.

In part because of their mechanical componentry, sensors rarely measured across multiple yardsticks. Oven thermometers measured only oven temperature, and displayed the information locally, if at all (given that perhaps a majority of sensor traffic informs systems rather than persons, the oven temperature might only drive the thermostat rather than a human-readable display). Electric meters only counted watt-hours in aggregate. Fast forward to today: a consumer Global Positioning System (GPS) unit or smartphone will tell location, altitude, compass heading, and temperature, along with providing weather radio.

Electromechanical sensors were not usually mass-produced, with the exception of common items such as thermometers. Because supply was limited, particularly for specialized designs, the combination of monopoly supply and small order quantities kept prices high.

The rigid architecture was a function of mechanical devices’ specificity. A vibration sensor was different from a camera was different from a humidistat. Humidity data, in turn, was designed to be moved and managed in a particular analog domain (a range of zero to 100 per cent), while image recognition in the camera’s information chain typically featured extensive use of human eyes rather than automated processing.

Ubiquity
Changes in each of these facets combine to help create today’s emerging sensor networks, which are growing in scope and capability every year. The many examples of sensor capability accessible to (or surveilling) the everyday citizen illustrate how far we have moved past the former regime: today there are more sensors recording more data to be accessed by more end points. Furthermore, the traffic increasingly originates and transits exclusively in the digital domain.

*Computers, which sense their own temperature, location, user patterns, number of printer pages generated, etc.
*Thermostats, which are networked within buildings and now remotely controlled and readable
*Telephones, the wireless variety of which can be understood as beacons, bar-code scanners, pattern-matchers (the Shazam application names songs from a brief audio sample), and network nodes
*Motor and other industrial controllers: many cars no longer have mechanical throttle linkages, so people step on a sensor every day without thinking as they drive by wire. Automated tire-pressure monitoring is also standard on many new cars. Airbags rely on a sophisticated system of accelerometers and high-speed actuators to deploy the proper reaction for collision involving a small child versus a lamp strapped into the front passenger seat.
*Vehicles: the OBD II diagnostics module, the toll pass, satellite devices on heavy trucks, and theft recovery services such as Lojack, not to mention the inevitable mobile phone, make vehicle tracking both powerful and relatively painless
*Surveillance cameras (of which there are over 10,000 in Chicago alone, and more than 500,000 in London)
*Most hotel door handles and many minibars are instrumented and generate electronic records of people’s and vodka bottles’ comings and goings.
*Sensors, whether embedded in animals (RFID chips in both household pets and race horses) or gardens (the EasyBloom plant moisture sensor connects to a computer via USB and costs only $50), or affixed to pharmaceutical packaging.

Note the migration from heavily capital-intensive or national-security applications down-market. A company called Vitality has developed a pill-bottle monitoring system: if the cap is not removed when medicine is due, an audible alert is triggered, or a text message could be sent.

A relatively innovative industrial deployment of vibration sensors illustrates the state of the traditional field. In 2006, BP instrumented an oil tanker with "motes," which integrated a processor, solid-state memory, a radio, and an input/output board on a single 2" square chip. Each mote could receive vibration data from up to ten accelerometers, which were mounted on pumps and motors in the ship’s engine room. The goal was to determine if vibration data could predict mechanical failure, thus turning estimates—a motor teardown every 2,000 hours, to take a hypothetical example—into concrete evidence of an impending need for service.

The motes had a decided advantage over traditional sensor deployments in that they operated over wireless spectrum. While this introduced engineering challenges arising from the steel environment as well as the need for batteries and associated issues (such as lithium’s being a hazardous material), the motes and their associated sensors were much more flexible and cost-effective to implement compared to hard-wired solutions. The motes also communicated with each other in a mesh topology: each mote looked for nearby motes, which then served as repeaters en route to the data’s ultimate destination. Mesh networks are usually dynamic: if a mote fails, signal is routed to other nearby devices, making the system fault-tolerant in a harsh environment. Finally, the motes could perform signal processing on the chip, reducing the volume of data that had to be transmitted to the computer where analysis and predictive modeling were conducted. This blurring of the lines between sensing, processing, and networking elements is occurring in many other domains as well.
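
A hypothetical illustration of that on-mote processing: instead of streaming every raw vibration sample off the ship, a mote collapses a window of accelerometer readings into a few summary statistics and forwards only those toward the analysis computer via a neighboring mote (the names and the alert threshold below are invented for the example):

    import math

    def summarize_window(samples):
        """Collapse one window of raw vibration samples into a small summary."""
        mean = sum(samples) / len(samples)
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        peak = max(abs(s) for s in samples)
        return {"mean": mean, "rms": rms, "peak": peak}

    def mote_report(samples, neighbor_send, alert_rms=5.0):
        summary = summarize_window(samples)
        summary["alert"] = summary["rms"] > alert_rms   # flag a pump that may need service
        neighbor_send(summary)                          # hop toward the destination via the mesh

    # Example: 1,000 raw readings shrink to one four-field message.
    mote_report([math.sin(i / 10) * 3 for i in range(1000)], neighbor_send=print)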

All told, there are dozens of billions of items that can connect and combine in new ways. The Internet has become a common ground for many of these devices, enabling multiple sensor feeds—traffic camera, temperature, weather map, social media reports, for example—to combine into more useful, and usable, applications. Hence the intuitive appeal of "the Internet of Things." As we saw earlier, network effects and positive feedback loops mean that considerable momentum can develop as more and more instances converge on shared standards. While we will not discuss them in detail here, it can be helpful to think of three categories of sensor interaction:

*Sensor to people: the thermostat at the ski house tells the occupants that the furnace is broken the day before they arrive, or a dashboard light alerting the driver that the tire pressure on their car is low
*Sensor to sensor: the rain sensor in the automobile windshield alerts the antilock brakes of wet road conditions and the need for different traction-control algorithms
*Sensor to computer/aggregator: dozens of cell phones on a freeway can serve as beacons for a traffic-notification site, at much lower cost than helicopters or "smart highways."

An "Internet of Things" is an attractive phrase that at once both conveys expansive possibility and glosses over substantial technical challenges. Given 20+ years of experience with the World Wide Web, people take for granted hyperlinks, reliable inter-network connections, search engines to navigate documents, and wi-fi access everywhere from McDonalds to mid-Atlantic in flight. None of these essential pieces of scaffolding has an analog in the Internet of Things, however: garage-door openers and moisture sensors can't follow links or read documents; naming, numbering, and navigation conventions do not yet exist; low-power networking standards are still unsettled; and radio-frequency issues remain problematic. In short, as we will see, "the Internet" may not be the best metaphor for the coming stage of device-to-device communications, whatever its potential utility.

Beyond the Web metaphor
Given that "the Internet" as most people experience it is global, searchable, and anchored by content or, increasingly, social connections, the "Internet of Things" will in many ways be precisely the opposite. Having smartphone access to my house's thermostat is a private transaction, highly localized and preferably NOT searchable by anyone else. While sensors will generate volumes of data that are impossible for most humans to comprehend, that data is not content of the sort that Google indexed as the foundation of its advertising-driven business. Thus while an "Internet of Things" may feel like a transition from a known world to a new one, the actual benefits of networked devices separate from people will probably be more foreign than saying "I can connect to my appliances remotely."

Consumer applications
The notion of networked sensors and actuators can usefully be subdivided into industrial, military/security, or business-to-business versus consumer categories. Let us consider the latter first. Using the smartphone or a web browser, it is already possible to remotely control and/or monitor a number of household items:

•    slow cooker
•    garage-door opener
•    blood-pressure cuff
•    exercise tracker (by mileage, heart rate, elevation gain, etc)
•    bathroom scale
•    thermostat
•    home security system
•    smoke detector
•    television
•    refrigerator.

These devices fall into some readily identifiable categories: personal health and fitness, household security and operations, entertainment. While the data logging of body weight, blood pressure, and caloric expenditures would seem to be highly relevant to overall physical wellness, few physicians, personal trainers, or health insurance companies have built business processes to manage the collection, security, or analysis of these measurements.  Privacy, liability, information overload, and, perhaps most centrally, outcome-predicting algorithms have yet to be developed or codified. If I send a signal to my physician indicating a physical abnormality, she could bear legal liability if her practice does not act on the signal and I subsequently suffer a medical event that could have been predicted or prevented.

People are gradually becoming more aware of the digital "bread crumbs" our devices leave behind. Progressive Insurance's Snapshot campaign has had good response to a sensor that tracks driving behavior as the basis for rate-setting: drivers who drive frequently, or brake especially hard, or drive a lot at night, or whatever could be judged worse risks and be charged higher rates. Daytime or infrequent drivers, those with a light pedal, or people who religiously buckle seat belts might get better rates. This example, however, illustrates some of the drawbacks of networked sensors: few sensors can account for all potentially causal factors. Snapshot doesn't know how many people are in the car (a major accident factor for teenage drivers), if the radio is playing, if the driver is texting, or when alcohol might be impairing the driver's judgment. Geographic factors are delicate: some intersections have high rates of fraudulent claims, but the history of racial redlining is also still a sensitive topic, so data that might be sufficiently predictive (ZIP codes traversed) might not be used out of fear it could be abused.

The "smart car" applications excepted, most of the personal Internet of Things use cases are to date essentially remote controls or intuitively useful data collection plays. One notable exception lies in pattern-cognition engines that are grouped under the heading of "augmented reality." Whether on a smartphone/tablet or through special headsets such as Google Glass, a person can see both the physical world and an information overlay. This could be a real-time translation of a road sign in a foreign country, a direction-finding aid, or a tourist application: look through the device at the Eiffel Tower and see how tall it is, when it was built, how long the queue is to go to the top, or any other information that could be attached to the structure, attraction, or venue.

While there is value to the consumer in such innovations, these connected devices will not drive the data volumes, expenditures, or changes in everyday life that will emerge from industrial, military, civic, and business implementations.

The Internet(s) of [infrastructure] Things
Because so few of us see behind the scenes to understand how public water mains, jet engines, industrial gases, or even nuclear deterrence work, there is less intuitive ground to be captured by the people working on large-scale sensor networking. Yet these are the kinds of situations where networked instrumentation will find its broadest application, so it is important to dig into these domains.

In many cases, sensors are in place to make people (or automated systems) aware of exceptions: is the ranch gate open or closed? Is there a fire, or just an overheated wok? Is the pipeline leaking? Has anyone climbed the fence and entered a secure area? In many cases, a sensor could be in place for years and never note a condition that requires action. As the prices of sensors and their deployment drop, however, more and more of them can be deployed in this manner, if the risks to be detected are high enough. Thus one of the big questions in security -- in Bruce Schneier's insight, not "Does the security measure work?" but "Are the gains in security worth the costs?" -- gets difficult to answer: the costs of IP-based sensor networks are dropping rapidly, making cost-benefit-risk calculations a matter of moving targets.

In some ways, the Internet of Things business-to-business vision is a replay of the RFID wave of the mid-aughts. Late in 2003, Wal-Mart mandated that all suppliers would use radio-frequency tags on their incoming pallets (and sometimes cases) beginning with the top 100 suppliers, heavyweight consumer packaged goods companies like Unilever, Procter & Gamble, Gillette, Nabisco, and Johnson & Johnson. The payback to Wal-Mart was obvious: supply chain transparency. Rather than manually counting pallets in a warehouse or on a truck, radio-powered scanners could quickly determine inventory levels without workers having to get line-of-sight reads on every bar code. While the 2008 recession contributed to the scaled-back expectations, so too did two powerful forces: business logic, and physics.

To take the latter first, RFID turned out to be substantially easier in labs than in warehouses. RF coverage was rarely strong and uniform, particularly in retrofitted facilities. Noise -- in the form of everything from microwave ovens to portable phones to forklift-guidance systems -- made reader accuracy an issue. Warehouses involve lots of metal surfaces, some large and flat (bay doors and ramps), others heavy and in motion (forklifts and carts): all of these reflect radio signals, often problematically. Finally, the actual product being tagged changes radio performance: aluminum cans of soda, plastic bottles of water, and cases of tissue paper each introduce different performance effects. Given the speed of assembly lines and warehouse operations, any slowdowns or errors introduced by a new tracking system could be a showstopper.

The business logic issue played out away from the shop floor. Retail and CPG profit margins can be very thin, and the cost of the RFID tagging systems for manufacturers that had negotiated challenging pricing schedules with Wal-Mart was protested far and wide. The business case for total supply chain transparency was stronger for the end seller than for the suppliers, manufacturers, and truckers required to implement it for Wal-Mart's benefit. Given that the systems delivered little value to the companies implementing them, and given that the technology didn't work as advertised, the quiet recalibration of the project was inevitable.

RFID is still around. It is a great solution for fraud detection, and everything from sports memorabilia to dogs to ski lift tickets can be easily tested for authenticity. These are high-value items, some of them scanned no more than once or twice in a lifetime rather than thousands of times per hour, as on an assembly line. Database performance, industry-wide naming and sharing protocols, and multi-party security practices are much less of an issue.

While it's useful to recall the wave of hype for RFID circa 2005, the Internet of Things will be many things. The sensors, to take only one example, will be incredibly varied, as a rapidly growing online repository makes clear. Laboratory instruments are shifting to shared networking protocols rather than proprietary ones. This means it's quicker to set up or reconfigure an experiment, not that the lab tech can see the viscometer or Geiger counter from her smart phone or that the lab will "put the device on the Internet" like a webcam.

Every one of the billions of smartphones on the planet is regularly charged by its human operator, carries a powerful suite of sensors -- accelerometer, temperature sensor, still and video cameras/bar-code readers, microphone, GPS receiver -- and operates on multiple radio frequencies: Bluetooth, several cellular, WiFi. There are ample possibilities for crowdsourcing news coverage, fugitive hunting, global climate research (already, amateur birders help show differences in species' habitat choices), and more using this one platform.

Going forward, we will see more instrumentation of infrastructure, whether bridges, the power grid, water mains, dams, railroad tracks, or even sidewalks. While states and other authorities will gain visibility into security threats, potential outages, maintenance requirements, or usage patterns, it's already becoming clear that there will be multiple paths by which to come to the same insight. The state of Oregon was trying to enhance the experience of bicyclists, particularly commuters. While traffic counters for cars are well established, bicycle data is harder to gather. Rather than instrumenting bike paths and roadways, or paying a third party to do so, Oregon bought aggregated user data from Strava, a fitness-tracking smartphone app. While not every rider, particularly commuters, tracks his mileage, enough do that the bike-lane planners could see cyclist speeds and traffic volumes by time of day, identify choke points, and map previously untracked behaviors.

Strava was careful to anonymize user data, and in this instance, cyclists were the beneficiaries. Furthermore, cyclists compete on Strava and have joined with the expectation that their accomplishments can show up on leader boards. In many other scenarios, however, the Internet of Things' ability to "map previously untracked behaviors" will be problematic, for reasons we will discuss later.

Industrial scenarios
GE announced its Industrial Internet initiative in 2013. The goal is to instrument more and more of the company's capital goods -- jet engines are old news, but also locomotives, turbines, undersea drilling rigs, MRI machines, and other products -- both to improve power consumption and reliability for existing units and to improve the design of future products. Given how big the company's footprint is in these industrial markets, 1% improvements turn out to yield multi-billion-dollar opportunities. Of course, instrumenting the devices, while not trivial, is only the beginning: operational data must be analyzed, often using completely new statistical techniques, and then people must make decisions and put them into effect.

This holistic vision is far-sighted on GE's part and transcends the technology-centric marketing messages that often characterize Silicon Valley rhetoric. That is, GE's end-to-end insistence on sensors AND software AND algorithms AND people is considerably more nuanced and realistic than, for example, Qualcomm's vision:

“the Internet of Everything (IoE) is changing our world, but its effect on daily life will be most profound. We will move through our days and nights surrounded by connectivity that intelligently responds to what we need and want—what we call the Digital Sixth Sense. Dynamic and intuitive, this experience will feel like a natural extension of our own abilities. We will be able to discover, accomplish and enjoy more. Qualcomm is creating the fabric of IoE for everyone everywhere to enable this Digital Sixth Sense.”

Not surprisingly, Cisco portrays the Internet of Things in similar terms; what Qualcomm calls "fabric" Cisco names "connectivity," appropriately for a networking company:
“These objects contain embedded technology to interact with internal states or the external environment. In other words, when objects can sense and communicate, it changes how and where decisions are made, and who makes them.

The IoT is connecting new places–such as manufacturing floors, energy grids, healthcare facilities, and transportation systems–to the Internet. When an object can represent itself digitally, it can be controlled from anywhere. This connectivity means more data, gathered from more places, with more ways to increase efficiency and improve safety and security.”

The other striking advantage of the GE approach is financial focus: 1% savings in a variety of industrial process areas yields legitimately huge cost savings opportunities. This approach has the simultaneous merits of being tangible, bounded, and motivational. Just 1% savings in aviation fuel over 15 years would generate more than $30 billion, for example.
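
As a rough sanity check on that figure, a back-of-the-envelope calculation lands in the same neighborhood. The annual fuel-spend number below is an assumption chosen for illustration (roughly the order of magnitude of global commercial aviation fuel spending at the time), not a figure taken from GE:

    # Back-of-the-envelope check of the "1% of aviation fuel over 15 years" claim.
    # The annual spend is an illustrative assumption, not a GE number.
    annual_fuel_spend = 200e9   # assumed ~$200 billion/year on commercial aviation fuel
    savings_rate = 0.01         # the 1% improvement in question
    years = 15

    cumulative_savings = annual_fuel_spend * savings_rate * years
    print(f"Roughly ${cumulative_savings / 1e9:.0f} billion over {years} years")
    # -> Roughly $30 billion over 15 years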

But to get there, the GE vision is notably realistic about the many connected investments that must precede the harvesting of these benefits.

    1) The technology doesn't exist yet. Sensors, instrumentation, and user interfaces need to be made more physically robust, usable by a global work force, and standardized to the appropriate degree.
    2) Information security has to protect assets that don't yet exist, containing value that has yet to be measured, from threats that have yet to materialize.
    3) Data literacy and related capabilities need to be cultivated in a global workforce that already has many skills shortfalls, language and cultural barriers, and competing educational agendas. Traditional engineering disciplines, computer science, and statistics will merge into new configurations.

Despite a lot of vague marketing rhetoric, the good news is that engineers, financial analysts, and others are recognizing the practical hurdles that have yet to be cleared. Among these are the following:

1) Power consumption

If all of those billions of sensors require either hard-wired power or batteries, the toxic waste impact alone could be daunting. Add to this requirement the growing pressure of the electric-car industry on the worldwide battery supply, and the need for new power management, storage, and disposal approaches becomes clear.

2) Network engineering

It's easy to point to all those sensors, each with its own IP address, and make comparisons to the original Internet. It's quite another matter, however, to make networks work when the sensor might "wake up" only once a day -- or once a month -- to report status. Other sensors, as we saw with jet engines, have the opposite effect, that of a firehose. Some kind of transitional device will likely emerge, either collecting infrequent heterogeneous "chirps" or consolidating, error-checking, compressing, and/or pre-processing heavy sensor volumes at the edge of a conventional network. Power management, security, and data integrity might also be in some of these devices' job description.
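
As a purely illustrative sketch of such a transitional device -- the field names, batch size, and behavior below are assumptions, not any standard or vendor protocol -- a minimal Python version might buffer infrequent "chirps," drop readings that fail a basic integrity check, and forward compressed batches upstream:

    import json
    import time
    import zlib

    class EdgeGateway:
        """Hypothetical edge aggregator: buffer sensor 'chirps', validate them,
        and forward compressed batches upstream. All field names are assumed."""

        def __init__(self, batch_size=100):
            self.batch_size = batch_size
            self.buffer = []

        def ingest(self, chirp):
            # Basic integrity check: each chirp needs an id, timestamp, and value.
            if not all(key in chirp for key in ("sensor_id", "ts", "value")):
                return  # drop malformed readings (a real system would log them)
            self.buffer.append(chirp)
            if len(self.buffer) >= self.batch_size:
                self.flush()

        def flush(self):
            if not self.buffer:
                return
            payload = zlib.compress(json.dumps(self.buffer).encode("utf-8"))
            self.send_upstream(payload)  # stand-in for an actual network call
            self.buffer.clear()

        def send_upstream(self, payload):
            print(f"forwarding {len(payload)} compressed bytes upstream")

    gateway = EdgeGateway(batch_size=3)
    for i in range(3):
        gateway.ingest({"sensor_id": "valve-7", "ts": time.time(), "value": 21.4 + i})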

3) Security

As the Stuxnet virus illustrated, the Internet of Things will be attacked by both amateurs and highly trained professionals writing a wide variety of exploits. Given that Internet security is already something of a contradiction in terms, and given widespread suspicion that the NSA has engineered back doors into U.S. firms' technology products, market opportunities for EU and other IoT vendors might increase as a result. In any event, the challenge of making lightweight, distributed systems robustly secure without undue costs in selling price, operational overhead, interoperability, or performance has yet to be solved at a large scale. In 2014, the security firm Symantec reported that every exercise monitor it tested was insecure.

4) Data processing
The art and science of data fusion is far from standardized in fields that have been practicing it for decades. Context, for instance, is often essential for interpretation but difficult to guarantee during collection. Add to the mix humans as sensor platforms, intermittent and hybrid network connectivity, information security requirements outside a defense/intelligence cultural matrix, and unclear missions -- many organizations quite reasonably do not know why they are measuring what they are measuring until after they try to analyze the numbers -- and the path of readings off the sensors and into decision-making becomes complicated indeed.

5) Cost effectiveness

The RFID experiment foundered in part on the price of the sensors, which even when measured in dimes became an issue when the volumes of items to be tracked ranged into the millions. With past hardware investments in memory, for example, still stinging some investors, the path to profitability for ultra-low-power, ultra-low-cost devices will be considerably different from the high-complexity, high-margin world that Intel so successfully mastered in the PC era.

6) Protocols

The process by which the actual day-to-day workings of complex systems get negotiated makes for good business-school case studies, but it makes investment and decision-making challenging. The USB standard, for example, benefited from the substantial industry "convening power" exercised by Intel, and the benefits have been widely shared. For the IoT, it's less clear which companies will have a similar combination of engineering know-how, intellectual property (and a management mandate to form a profitless patent pool), industry fear and respect, and so on. As the VHS/Betamax, high-resolution audio disc, and high-definition DVD standards wars have taught many people, it's highly undesirable to be stranded on the wrong side of an industry protocol. Hence, many players may sit out pending identifiable winners in the various standards negotiations.

7) APIs and middleware
The process by which device chirps become management insights requires multiple handoffs between sensors and PCs or other user devices. Relatively high up the stack are a variety of means by which processed, analyzed data can be connected to and queried by human decision makers, and so far, enterprise software vendors have yet to make a serious commitment to integrating these new kinds of data streams (or trickles, or floods) into management applications.
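
To make that handoff concrete, here is a hypothetical Python sketch of the kind of query layer that might sit between aggregated device readings and a management dashboard; the metric names and record structure are invented for illustration and do not reflect any particular vendor's API:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical processed readings as they might arrive from edge gateways.
    readings = [
        {"sensor_id": "valve-7", "metric": "temperature_c", "value": 21.4},
        {"sensor_id": "valve-7", "metric": "temperature_c", "value": 22.1},
        {"sensor_id": "pump-2", "metric": "vibration_mm_s", "value": 4.8},
    ]

    def summarize(readings, metric):
        """Roll device-level readings up to something a dashboard could display."""
        by_sensor = defaultdict(list)
        for reading in readings:
            if reading["metric"] == metric:
                by_sensor[reading["sensor_id"]].append(reading["value"])
        return {sensor: round(mean(values), 2) for sensor, values in by_sensor.items()}

    print(summarize(readings, "temperature_c"))  # {'valve-7': 21.75}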

8) System management

The IoT will need to generate usage logs, integrity checks, and all manner of tools for managing these new kinds of networks. Once again, data center and desktop PC systems management tools simply are not designed to handle tasks at this new level of granularity and scale. What will an audit of a network of "motes" look like? Who will conduct it? Who will require it?

Conclusion
As this note has hinted, the label "Internet of Things" could well steer thinking in unproductive directions. Looking at the World Wide Web as a prototype has many shortcomings: privacy, security, network engineering, human-in-the-loop advantages that may not carry over, and even the basic use case. At the same time, thinking of sensor networks in the same proprietary, single-purpose terms that have dictated generations of designs is also overdue for retirement.

Beyond the level of the device, data processing is being faced with new challenges -- in both scope and kind -- as agencies, companies, and NGOs (to name but three interested parties) try to figure out how to handle billions of cellphone chirps, remote-control clicks, or GPS traces. What information can and should be collected? By what entity? With what safeguards? For how long? At what level of aggregation, anonymization, and detail? With devices and people opting in or opting out? Who can see what data at what stage in the analysis life cycle?

Once information is collected, the statistical and computer science disciplines are challenged to find patterns that are not coincidence, predictions that can be validated, and insights available in no other way. Numbers rarely speak for themselves, and the context for Internet of Things data is often difficult to obtain or manage given the wide variety of data types in play. The more inclusive the model, however, the more noise is introduced and must be managed. And the scale of this information is nearly impossible to fathom: according to IBM Chief Scientist Jeff Jonas, mobile devices in the United States alone generated 600 billion geo-tagged transactions every day -- as of 2010.
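
As a rough way to fathom that scale, dividing Jonas's figure by an assumed round number for the 2010 U.S. population gives a per-person rate:

    # The population figure is an assumed round number, used only for illustration.
    transactions_per_day = 600e9
    us_population_2010 = 310e6
    print(round(transactions_per_day / us_population_2010))
    # -> 1935, i.e., nearly 2,000 geo-tagged transactions per person, per day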

In addition to the basic design criteria, the privacy issues cannot be ignored. Here, the history of Google Glass might be instructive: whatever the benefits that accrue to the user, the rights of those being scanned, identified, recorded, or searched matter in ways that Google has yet to acknowledge. Magnify Glass to the city or nation-state level (recall that England has an estimated 6 million video cameras, but nobody knows exactly how many), as the NSA revelations appear to do, and it's clear that technological capability has far outrun the formal and informal rules that govern social life in civil society.

Early Indications October 2015: Of colleges, jobs, and analytics

It's funny how careers unfold. As a result of being in a particular place in a particular time, I find myself teaching analytics, supply-chain management, and digital strategy, mostly at the masters level. Not only did I not study any of these subjects in graduate school, none of these disciplines existed under their current name as recently as 20 years ago or so. What follows are some reflections on careers, skills, and patterns in education prompted by my latest adventures as well as some earlier ones.

1) What should I major in?
Across the globe, parents and students look at the cost of college, salary trends, layoffs, predilections, and aspirations, then take a deep breath and sign up for a major. I have seen this process unfold multiple times, and people sometimes miss some less obvious questions that are tough to address but even tougher to ignore.

The seemingly relevant question, "what am I good at," is tough to answer with much certainty: we require students to declare a major before they've taken many (sometimes any) courses in it, and coursework and salaried work are of course two different things as well. While it's tempting to ask, "who's hiring," it's much harder to ask "where will there be good jobs in 20 years?" Very few Chief Information Officers in senior positions aspired to that title in college, mostly because it didn't exist. Now that CIOs are more common, it's unclear whether the title and skills will be as widely required once sensors, clouds, and algorithms improve over the next decade or two.

It's even more difficult to extrapolate what the new "hot" jobs will be. In the late 1990s, the U.S. Bureau of Labor Statistics encouraged students to go into desktop publishing, based on projected demand. In light of smartphones, social networks, and "green" thinking, the demand for paper media never materialized, and then tablets, e-readers, and wearables cut into demand still further. It's easy to say the Internet of Things or robotics will be more important in 20 years than they are today, but a) will near-term jobs have materialized when the student loan payments come due right after graduation, or b) are there enough relevant courses at a given institution? One cause of a nursing shortage that emerged about 15 years ago was a shortfall in the number of nursing professors: there were unfilled jobs, and eager students, but not enough capacity to train sufficient numbers of people to ease the hiring crunch.

2) English (or psychology, or fill in the blank) majors are toast

Many politicians are trying to encourage STEM career development in state universities and cite low earning potential for humanities graduates as a reason to cut funding to these fields. As Richard Thaler would say, it matters whether you make French majors pay a premium, or give chemical engineers a discount: the behavioral economics of these things are fascinating. The University of Florida led the way here about three years ago, but it's hard to tell how the experiment panned out.

At the same time, the respected venture investor Bill Janeway wrote a pointed piece in Forbes this summer, arguing that overcoming the friction in the atoms-to-bits-to-atoms business model (Uber being a prime example) demands not just coding or financial modeling, but something else:


"Unfortunately for those who believe we have entered a libertarian golden age, freed by digital technology from traditional constraints on market behavior, firms successful in disrupting the old physical economy will need to have as a core competency the ability to manage the political and cultural elements of the eco-systems in which they operate, as well as the purely economic ones. . . .

In short, the longer term, sustainable value of those disrupters that succeed in closing the loop from atoms to bits and back to atoms will depend as much on successful application of lessons from the humanities (history, moral philosophy) and the social sciences (the political economy and sociology of markets) as to mastery of the STEM disciplines."

http://www.forbes.com/sites/valleyvoices/2015/07/30/from-atoms-to-bits-to-atoms-friction-on-the-path-to-the-digital-future/

On the whole, as the need for such contrarian advice illustrates, we know little beyond the stereotypes of college majors. The half-life of technical skills is shrinking, so learning how to learn becomes important in building a career rather than merely landing an entry-level position. Evidence for the growing ability of computers and robots to replace humans is abundant: IBM bought The Weather Company's digital assets in part to feed the Watson AI engine, Uber wants robotic cars to replace its human drivers, and even skilled radiologists can be outperformed by algorithms. A paper by Carl Frey and Michael Osborne at Oxford convincingly rates most career fields by their propensity to be automated. It's a very illuminating, scary list (skip to the very end):

http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

To bet against one's own career, in effect short-selling an occupational field, requires insight, toughness, and luck. At the same time, the jobs that require human interaction, memory of historical precedent, and tactile skills will take longer to automate. Thus the liberal arts orientation toward teaching people how to think rather than how to be a teacher, accountant, or health-club trainer will win out, I believe. This is a long-term bet, to be sure, and in the interim, there will be unemployed Ivy Leaguers looking with some envy at their more vocationally focused state-school kin. Getting the timing right will be more luck than foresight.

3) What is analytics anyway?
As I've developed both a grad course and a workshop for a client in industry, I'm coming to understand this question differently. A long time ago, when I taught freshman composition, it took a few semesters to understand that while effective writing uses punctuation correctly, an expository writing (as it was called) course was an attempt to teach students how to think: to assess sources, to take a position, and to buttress an argument with evidence. All too frequently, however, colleges see the labor-intensive nature of freshman writing seminars as a cost to be cut, whether through using grad students, adjuncts, automation, or bigger section sizes. Each of these detracts from the close reading, personal attention, and rigorous exercises that neither scale well nor are done capably by many grad students or overworked adjuncts.

I'm seeing similar patterns in analytics. Once you get past the initial nomenclature, the two disciplines look remarkably similar: while courses are nominally about different things (words and numbers), each seeks to teach the skills related to assessing evidence, sustaining a point of view, and convincing a fair-minded audience with analysis and sourcing. To overstate, perhaps, analytics is no more a matter of statistics than writing is about grammar: each is a necessary but far from sufficient element of the larger enterprise. Numbers can be made to say pretty much whatever the author wants them to say, just as words can. In this context, the recent finding that only 39% of published research findings in psychology could be replicated stands as a cautionary tale. (https://www.washingtonpost.com/news/speaking-of-science/wp/2015/08/27/trouble-in-science-massive-effort-to-reproduce-100-experimental-results-succeeds-only-36-times/) Unfortunately, American numeracy -- quantitative literacy -- is extremely low, rendering millions of people incapable of managing businesses, households, and retirement portfolios. Being able to produce sound academic research, meanwhile, looks to be even rarer than we thought.

A paradox emerges: at the moment when computational capability is effectively free and infinite relative to an individual's needs, the skills required to deploy that power are highly unequally distributed, with little sign of improvement any time soon. How colleges teach, whom we teach, what we teach, and how it gets applied are all in tremendous upheaval: it's exciting, for sure, but the collateral damage is mounting (in the form of student loan defaults and low completion rates at for-profit colleges, to take just one example). Are we nearing a perfect storm of online learning, rapidly escalating demand for new skills, sticker shock or even buyer refusal to pay college tuition bills, abuses of student loan funding models, expensive and decaying physical infrastructure (much of it built in the higher-education boom of the 1960s), and demographics? Speaking of paradoxes, how soon will the insights of analytics -- discovering, interpreting, and manipulating data to inform better decisions -- take hold in this domain?

Wednesday, September 30, 2015

Early Indications September 2015: The MBA at 100 (or so)


It’s mostly coincidence, but the MBA degree is at something of a crossroads entering its second century. A short list of big questions might begin as follows:

-What is the place (pun intended) of a two-year resident MBA in a global, Internet era?

-What is the market need for general managers versus specialists in finance, supply chains, accounting, or HR, for example? How does market supply align with this need?

-What is the cost and revenue structure for an MBA program outside the elite tier? 

-How can business degrees prepare graduates for a highly dynamic, uncertain commercial environment?

-What do and should MBAs know about the regulatory environments in which their businesses are situated?

-What is and should be the relationship between managerial scholarship and commercial practice?

-What is the relationship of functional silos to modern business practice? Marketers need to know a fair bit of technology to do mobile ad targeting, for example, as do equities traders in the age of algorithmic bots. Navigating the aforementioned regulatory landscape, meanwhile, draws on an entirely different range of skills generally not covered in management or negotiation classes. Is the MBA/JD the degree of choice here?

-How can and should U.S. business schools teach ethics, which are highly culture-specific, to students from many home countries who will likely work in still another country/culture soon after graduation?

2015 is not happening in a vacuum, of course. The first graduate school of business, Tuck, offered the Master of Science in Commerce after its founding in 1900. (Recall that the functionally organized corporation was at the time a fairly recent phenomenon: railroads split ownership from management in part because of the huge capital requirements, and the vast distances involved meant that managers often lacked direct visibility of workers. Thus, in broad strokes, the late 19th century began the age of policies and procedures, and the idea of a middle manager.) Harvard launched its MBA program eight years after Dartmouth, with significant help from such scientific management exponents as Frederick Winslow Taylor. Enrollments surged: Harvard alone grew from 80 students to nearly 1100 in 22 years. Unsurprisingly, other universities began offering the degree: Northwestern in 1920, Michigan in 1924, Stanford in the late 1920s, Chicago in 1935, UNC in 1952. According to the office of the university archivist, Penn State began offering the MBA in 1959.

1959 was also the year two different reports, commissioned by industrialists’ foundations — Ford and Carnegie — reoriented American graduate business education. The more strongly worded of the two, by Robert Aaron Gordon and James Edwin Howell, systematically attacked the entire institution of the MBA as it then stood: students were weak, curricula were sloppily constructed, and faculty taught with little academic rigor at many schools.

The Gordon-Howell report quickly influenced accreditation and related discussions. New courses on many campuses covering strategy were at the forefront of a larger emphasis on quantitative methods and theory. What was not well addressed, according to many critics, was the practice of management itself. Balancing theory and practice has never been simple in business — as compared to medicine, companies do not conduct clinical trials parallel to those of drugs or procedures. 

Entrepreneurship has proven particularly hard to teach: on any list of great business-launchers, few hold the MBA. None of the following hold the degree: Paul Allen, Jeff Bezos, Sergey Brin, Warren Buffett, Michael Dell, Larry Ellison, Bill Gates, Jim Goodnight (SAS Institute), Bill Hewlett and David Packard, Steve Jobs, Elon Musk, Ken Olsen (Digital Equipment), Pierre Omidyar (eBay), Larry Page, Sam Walton, and Mark Zuckerberg.

MBAs can of course do quite well for themselves, as Michael Bloomberg and Nike’s Phil Knight (a 4:10 miler at Oregon 50 years ago) prove. Still, there appears to be a negative correlation between academic achievement, particularly in the MBA, and entrepreneurial accomplishment. Ten of the sixteen richest self-made people in the world either did not finish college or dropped out of grad school: Gates, Ellison, Zuckerberg, Sheldon Adelson (casinos), Page and Brin, Carl Icahn, Steve Ballmer, Harold Hamm (oil and gas at Continental Resources), and Dell.

Apart from not being able to produce mega-entrepreneurs, what of the more real-world challenges to MBA programs noted above? In reverse order, a few notes on each:

-Ethics has never been an easy topic to include in a business curriculum, but as the world’s top schools continue to get more global, trying to say anything stringent encounters the reality of cultural diversity. Sanctions against bribery, greed, ostentation, money-lending (with interest), and constraints on the role of women and ethnic minorities are impossible to align; even the U.S., Canada, and England do some things very differently despite many similarities. The ethical lapses of the early 2000s — at Waste Management, Enron, Adelphia, and HealthSouth, among many others — put some focus on business schools (along with accounting firms) as agents of better behavior. In light of recent scandals at Toyota, Volkswagen, and GM, to name only the automakers, the challenge for MBA curricula does not appear to be any less daunting than in the crisis years of 2002 or thereabouts.

-Teaching students to work across functions and to deal with regulatory bounds and procedures continues to stymie MBA programs. We teach an integrative consulting-project exercise in the 4th semester; Harvard teaches something similar across the whole first year. Numerous programs have moved the project-based course back and forth, with equally compelling logic for early and late inclusion. Seeing how messy real problems are prepares students for the functional courses, while having some base of knowledge before being turned loose on a client also has merit. No one approach has emerged as a winner from the many variations being used.

-Managerial theory and practice remain difficult both to do and to convey more than a half-century after Gordon and Howell. Scholarship that gets published tends not to come from practitioners (Clayton Christensen is a notable exception, having run a company before earning his doctorate at Harvard), while managers and executives remain understandably wary of controlled experiments on live business units. Professors’ contributions to the semi-academic journals that practicing businesspeople might read — Harvard Business Review, Sloan Management Review, California Management Review, and the like — usually do not count heavily (if at all) toward tenure or promotion. For their part, many managers tell me they find little of value in the A-list journals held in academic esteem. Suffice it to say there remain many opportunities to improve the dialogue between the factory or office and the academy. 

-How can MBA programs teach resiliency, creativity, willingness to challenge convention, and the other traits required in a particularly turbulent business landscape? Marc Benioff, the CEO of Salesforce.com, is far from a disinterested observer, but it is difficult to disagree with his recent contention that essentially every industry is in the midst of or about to confront fundamental change. Whether from fracking, Uber, mobile platforms, Amazon, or demographics, every business (and governmental) unit I can see is hard-pressed to do things “the way we’ve always done it around here.”

An entrepreneur (whose masters was in arts management) told me a cautionary tale back in the dot-com boom. “We’re a startup,” he said. “Strategy for us isn’t chess, it’s poker: we have to bluff because we can’t compete with the big guys at scale, with equal playing pieces on a defined board with agreed-upon rules. We faked features in the demo we couldn’t deliver. We have had months where we couldn’t make payroll. We’ve reinvented our business model three times. That’s the reality. We hired a bunch of top-school MBAs to try to compete better, and had to let them all go. Why? These men and women all had good grades in high school. They cleared the next hurdle and got into a good college, then positioned themselves to deliver the right answers, earn As, and get into Ivy League b-schools. There it was more of the same: high achievers got the top internships at the I-banks and consulting firms. They’ve always been rewarded for getting the right answer. Now we have all this chaos and instability. None of them can handle it; they keep wanting to know the answer and there isn’t one.”

Fifteen years later, I can’t see that the incentive structure has changed all that much. Doing well in controlled environments seems to be poor preparation for radical reinvention, new rules, unconventional insurgencies, and broken profit models.

-This atmosphere of disruption is affecting MBA programs themselves. Getting the costs, revenues, and rankings to acceptable levels has never been more challenging. Last year Wake Forest shut down its two-year resident MBA program, ranked in the top 50 in the US, as did Thunderbird, a pioneer in the internationally oriented masters. In the past 5 years, however, 30 new schools earned AACSB accreditation in the U.S.; 96 others had joined the club in the preceding decade. Thus competition for students, faculty, and resources is intense, and the international nature of the MBA means that foreign competition is accelerating even faster than those 126 newly accredited U.S. institutions would suggest: Poets & Quants states in a recent article that 50% more MBAs are being earned today than ten years ago, so filling those classes is a challenging job. Marketing efforts to reach prospective MBA students are in something of an arms race, so many schools are cheered by a reported uptick in applications. Unfortunately, nobody can know whether the increase reflects more applications per student or more students jumping into the pool. Amid both increased competition and rising costs (health care continues to outpace other expenses), increasing tuition is a non-starter in most circumstances, so schools are confronting the need for creative alternatives if they are to avoid the approach taken at Wake Forest.

-An MBA is by definition something of a generalist, even with a curricular focus area in one or two functions. Meanwhile specialized business masters, in finance, accounting, marketing, or whatever, are on the rise. I have undergrads ask me about the relative merits, and each has its place. For many mid-career professionals, having an alternative to the generalist approach is attractive. Our supply-chain masters students, for example, never take courses in HR, real estate, finance, or general management: all the courses presuppose one business area rather than a variety. With years or decades already invested in that function, these students did the career calculus and concluded that the generalist approach did not make sense for them. They are far from alone, given the national trends.

-Thus we end where we began: what is the place of a 2-year, resident MBA? Each of those variables is getting interesting. Duration: INSEAD offers a 10-month program; one-year options are not uncommon. Locus: On-line MBAs are being offered all over the world, executive (weekend) MBAs allow students to keep their jobs and their lodging stable, and hybrids like the program at Carnegie Mellon combine multiple delivery methods. Content: As we have seen, different masters degrees in business are being offered in response to market needs, including for more depth of coverage: if one considers the complexity of contemporary finance, or supply chains, or accounting, having only a handful of courses within a generalist curriculum may not provide adequate preparation for the job’s primary duties, while the breadth of coverage has minimal compensatory value.

Numerous observers, including The Economist, predict major changes to the MBA market, particularly outside the top 20 or so schools. Today’s junior faculty joining the ranks will be in for a wild ride in the coming decades. As with so many other areas, as Ray Kurzweil argues, the rate of change is accelerating: the world is changing faster and faster, and business education will likely change more in the next 20 years than in its first century. Happy 100th birthday, indeed.

Tuesday, July 28, 2015

Early Indications July 2015: Crossover Points

I recently read an enjoyable study of the airport as cultural icon (Alastair Gordon’s Naked Airport; hat-tip to @fmbutt) and got to thinking about how fast new technologies displace older ones. Based on a small sample, it appears that truly transformative technologies achieve a kind of momentum with regard to adoption: big changes happen rapidly, across multiple domains. After looking at a few examples, we can speculate about what technologies might be the next to be surpassed.

Gordon makes uncited references to air travel: in 1955, more Americans were traveling by air than by rail, while in 1956, more Americans crossed the ocean by plane than by ship. (I tried to find the crossover point for automobile inter-city passenger-miles overtaking those of railroads, but can only infer that it happened some time in the 1920s.) This transition from rail to air was exceptionally rapid, given that only 10 years before, rail was at its all-time peak and air travel was severely restricted by the war.

Moving into another domain, I was surprised to learn that in 1983, LP album sales were surpassed not by the CD but by . . . cassette tapes; CDs did not surpass cassettes for another 10 years. In the digital age, the album is no longer the main unit of measurement, nor is purchasing the only way to obtain songs. This shift in bundle size is also occurring in news media as we speak: someone asked me the other day what newspaper(s) I read, and it struck me that I can’t remember when I last had a physical paper land on my porch. That’s the other thing about these crossover points: they usually happen quietly and are not well remembered.

The smartphone is taking over multiple categories. Once again, we see a new unit of measurement: in the film camera age, people developed rolls of film, then perhaps ordered reprints for sharing. (That quiet transition again: can you remember the last time you took film to the drugstore or camera shop?) Now the unit of measurement is the individual image. Interestingly, digital still cameras surpassed film cameras in 2004, but not until 2007 were there more prints made from digital than from film. Since 2007, digital prints have steadily declined. Furthermore, digital cameras themselves have been replaced by cameraphones: only 80 million point-and-shoot digital cameras shipped in 2013, and that number is dropping to well under 50 million this year, while smartphone sales are on target for about 1.5 billion units this year.

Standalone GPS units, MP3 players, and video camcorders (with GoPro being a notable exception, albeit in relatively tiny numbers) are other casualties of the smartphone boom. Landline-only houses were surpassed by cellular-only in 2009. Smartphones surpassed PC sales back in 2011.

The implications for employment are tremendous: Kodak employed 145,000 people in 1988; Facebook, a major player in personal image-sharing, has a headcount of about 9,000, most obviously not working on photos. Snapchat has 200 employees at a service that shares 8800 images EVERY SECOND, a number Kodak could not have conceived of. When these technology shifts occur, jobs are lost at a greater rate than they are gained. Railroads employed more than 1.5 million Americans in 1947; it’s now about a sixth of that. U.S. airlines, meanwhile, employed a peak of about 600,000 workers in the tech boom of 2000, well less than half that of the railroads, in a more populous country with more people traveling.

Let’s look at the smartphone. Given globalization, what used to be U.S. telecom numbers no longer compare cleanly. AT&T employed around a million people at its peak; right now AT&T plus Verizon (which counts cable TV and other operations) employ roughly 425,000 people. Apple’s 2015 headcount of 63,000 includes about 35,000 retail employees and about 3,000 temps or contractors. Samsung is a major player in world telco matters, but figuring out how many of its 275,000 employees can count toward a comparison with AT&T is impossible. All told, more people have more phones than they did in 1985, but employment in the phone industry looks to be lower, and lower-paying, given how many retail employees now enter the equation.

Coming soon, we will see major changes to ad-supported industries. Already newspaper revenues are in serious decline. Digital ad revenue is already higher than newspaper, magazine, and billboard revenue combined. “Cord cutting” is a very big deal, with clear demographic delineations: a 70-year-old is likely to read a paper newspaper and watch the big-4 network evening news; a 20-year-old is highly unlikely to do either. Comcast announced in May that it has more Internet-only subscribers than cable-TV subscribers, and the unbundling of cable networks into smartphone/tablet apps such as HBO Go will likely accelerate.

In personal transportation, there could be two major upheavals to the 125-year-old internal combustion regime: electric cars and self-driving vehicles. Obviously Tesla is already in production with regard to the former, but the smartphone example, along with such factors as Moore’s law, cloud computing, and an aging Western-world demographic, could fuel rapid growth in autonomous vehicles. In regard to cloud computing, for example, every Google car is as “smart” as the smartest one as of tonight’s software upgrade. Given the company’s demonstrated expertise in A/B testing, there’s no reason not to expect that competing models, algorithms, and human tweaks will be tested in real-world competitions and pushed out to the fleet upon demonstrated proof of superior fitness.

There are many moving parts here: miniaturization, demographics, the rise of service industries relative to manufacturing (including cloud computing), growing returns to capital rather than labor, and so on. The history of technology substitutions and related innovations does have some clear lessons, however: predicting future job growth is perilous (in 1999, the US Bureau of Labor Statistics was bullish on . . . desktop publishers); infrastructure takes decades while some of these cycles (Android OS releases) run in months; and the opportunities in such areas as robotics, AI, and health care are enormous. The glass may be half-full rather than half-empty, but in more and more cases, people are looking at entirely different scenarios: Kodak vs Snapchat, as it were. Whoever the next US president turns out to be will, I believe, face the reality of this split, perhaps in dramatic fashion.