Much attention is paid to the necessity for innovation, typically at the level of a technology: a molecule, a machine, a circuit. While these types of innovations matter, they cannot succeed without institutional “scaffolding”: the organizational infrastructure that gets the molecule made into pills and dispensed through pharmacies, or gets the electric car assembled, distributed, recharged, and maintained over its entire product life. We posit that this organizational infrastructure will be a critical factor in the adoption of a powerful new technology in the coming decades.
We are at the cusp of a period of broad adoption of machine learning across more and more technical, professional, and commercial fields. While the shortage of data scientists and employees with related skills is well documented, less appreciated is the need for the institutional scaffolding mentioned above. Three factors have contributed to the boom in machine learning since roughly 2012: larger training data sets, improved algorithms, and broader availability of powerful computing capabilities, often through cloud providers such as Google and Amazon. The need now is for business models, institutional processes, and deep functional knowledge to be coupled with these technical advances.
This has long been the pattern of technological innovation. Powerful steam engines drove the need for bigger, stronger ocean vessels; retrofitting schooners and other wooden sail-powered craft with the new engines was insufficient. The rise of the automobile would have stalled without consumer credit, adequate paved roads, and a network of gas stations. AT&T’s Picturephone never achieved market acceptance as a landline device, but video calling apps such as Skype and FaceTime are extremely popular on smartphones. Technological innovation without organizational infrastructure rarely succeeds.
Within an organization, this cultural and organizational scaffolding is often invisible, making it difficult to change. A classic example is provided by the transition from steam and water power to electric motors in textile mills and similar facilities in the late 19th/early 20th century. Historically, water wheels and then steam engines powered overhead drive shafts that ran the length of the facility. Individual machines were thus located in relation to this overhead power source. Electric motors were initially used to drive the same shafts, slowing adoption: the case for new investment was weak. It took roughly 30 years for “group drive” to be replaced by “unit drive” in which individual machines each had their own electric motor(s) on board. Cost savings accrued as unused machines no longer were driven by the centralized power source, energy efficiency was improved with the removal of slippage in the belt-driven transmission system, and variable-speed motors allowed for each machine to run at its optimal RPM. Productivity improved dramatically as factory engineers repositioned machines, now freed from the proximity to the drive shaft, to facilitate workflow. In addition, removing the drive belts and overhead shafts allowed for the installation of overhead cranes, improving productivity further.
As machine learning is brought into mature organizations (as opposed to algorithm-centric companies such as Google or Facebook), managers will likely be tempted to overlay the new technology onto existing business practices. In the US, cellular phones were initially treated as mobile versions of stationary voice devices. In countries without a well-developed voice-device network (most of the world), practices such as texting took off much faster: the capabilities of the new technology were unconstrained by previous preconceptions and habits. More recently, 3D printing is often compared to the mass-production molding and milling processes with which managers are familiar, and not surprisingly, 3D printing is found to make little business sense for long runs of simple geometries.
For machine learning to be applied to problems and processes where it can add distinctive and unprecedented value, it will need to be understood in the context of a particular market, a particular business model, and a particular business process. This level of understanding requires a rare combination of deep functional expertise, an open mind, adequate historic data for training the algorithms, and technical acumen. (Many efforts we have seen focus only on the latter aspect.) Weather forecasting — where IBM can claim deep expertise — has been reshaped by new algorithms and models (including ensembles) over the past 20 years, but medical reasoning has proven much more difficult to automate. Ad placement has been transformed by machine learning; production planning remains variable, volatile, and largely manual.
What kinds of scenarios are we seeing as machine learning is adopted in legacy business processes?
1) Unquestioned assumptions can be challenged by a naive algorithm
Because many more variables can be correlated across much larger samples than humans can cognitively manage, seemingly random inputs can be found to drive outcomes. At one hotel booking site, photos of guest rooms with the window shades open drove higher sales, but only in specific circumstances. In other scenarios, images with the shades closed performed better. One size rarely fits all, and bigger data sets can help identify which sizes fit which customers under which constraints.
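The hotel-photo example above amounts to computing conversion rates per customer segment rather than overall. A minimal sketch of that idea, using entirely invented booking records (the segment names, variant labels, and numbers are illustrative, not data from any real site):

```python
from collections import defaultdict

# Hypothetical booking records: (customer_segment, photo_variant, booked)
records = [
    ("leisure", "shades_open", True),
    ("leisure", "shades_open", True),
    ("leisure", "shades_closed", False),
    ("business", "shades_open", False),
    ("business", "shades_closed", True),
    ("business", "shades_closed", True),
]

# Aggregate conversions per (segment, variant) pair rather than overall,
# so a photo variant that wins in one segment can lose in another.
counts = defaultdict(lambda: [0, 0])  # [bookings, impressions]
for segment, variant, booked in records:
    counts[(segment, variant)][0] += int(booked)
    counts[(segment, variant)][1] += 1

rates = {key: bookings / shown for key, (bookings, shown) in counts.items()}
```

With enough data, the same segmentation extends to many more variables than a human analyst would think to cross-tabulate, which is how "seemingly random inputs" surface as drivers of outcomes.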
2) Machine learning can cheat, often creatively
In one image-recognition application, the algorithm was supposed to identify sheep but was in fact using grass as its decision criterion, so a photo of a green field with rocks was scored as containing sheep. A race-car simulator found that the application’s physics engine did not penalize crashing, so the car would careen around the track, bouncing off the barrier wall at the midpoint of each straightaway. An algorithm designed to create locomotion simply built a tall, unstable tower that fell forward; a refinement allowed the tower to somersault for greater speed.
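The falling-tower example shows the general mechanism: the optimizer maximizes the reward it is given, not the behavior the designer intended. A toy sketch of that gap, with an invented reward function (the parameters and numbers are illustrative only):

```python
import random

random.seed(0)

def naive_reward(height, steps, stride=0.1):
    """Intended reward: distance walked. But the simulator also credits
    the forward displacement of a body that simply falls over, which is
    roughly its own height -- and the reward does not distinguish them."""
    distance_walked = steps * stride
    distance_fallen = height  # falling forward "travels" one body length
    return max(distance_walked, distance_fallen)

# Random search over body designs: height in [0, 5], steps in [0, 20].
candidates = [(random.uniform(0, 5), random.randint(0, 20)) for _ in range(1000)]
best = max(candidates, key=lambda c: naive_reward(*c))
# The search converges on "be as tall as possible"; walking is irrelevant,
# because the maximum walkable distance (20 * 0.1 = 2.0) is beaten by height.
```

The fix is not a smarter optimizer but a better-specified reward, which is exactly the kind of domain knowledge that the surrounding text argues must accompany the algorithms.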
3) Knowledge of ground truth matters more than ever
As algorithms are trusted to do more and more activities either unattended or with minimal human oversight, people will need to spot errors in data, logic, or translation into managerial levers. We have all been served online ads that are irrelevant, offensive, or otherwise a waste of the advertiser’s money. The “flash crash” of May 6, 2010, which briefly sent US stock markets into freefall, was caused by a confluence of mischievous human trades and runaway algorithms. Our heavily digitized automobiles illuminate “check engine” lights that are as often a sensor error as a mechanical fault.
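In practice, knowing ground truth often starts with simple guardrails: range checks and cross-checks against an independent signal before acting on a reading. A minimal sketch in the spirit of the “check engine” example, with hypothetical sensor names and thresholds:

```python
def validate_reading(sensor_temp_c, backup_temp_c, tolerance=5.0):
    """Flag a temperature reading as suspect rather than acting on it blindly.

    Returns a list of issue labels; an empty list means the reading is
    plausible. Thresholds here are illustrative, not real vehicle specs.
    """
    issues = []
    # Physically implausible values point to a sensor fault, not the engine.
    if not (-40.0 <= sensor_temp_c <= 150.0):
        issues.append("out_of_physical_range")
    # Disagreement with an independent sensor also suggests measurement error.
    if abs(sensor_temp_c - backup_temp_c) > tolerance:
        issues.append("disagrees_with_backup")
    return issues

validate_reading(90.0, 92.0)   # plausible: no issues flagged
validate_reading(300.0, 92.0)  # suspect: fails both checks
```

The point is organizational as much as technical: someone with domain knowledge has to decide what counts as plausible before the algorithm’s output is wired to a managerial lever.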
4) Ethics needs to catch up to the technology
Google, Facebook, and other machine learning leaders are realizing the potential for algorithmic harm and have funded both internships and full-time positions for ethicists. The costs are only beginning to be counted: algorithms operating at a speed and scope that cannot be understood at human scale; “cheats” that may not be immediately obvious (as with the sheep/grass image recognition glitch); and the migration of computing from beige boxes (from which electronic money could be stolen, for example) to physical devices in people’s homes that can both invade privacy and control door locks and other safety features.
These are in fact exciting times: machine learning has incredible potential beyond spam detection, credit card fraud monitoring, and energy efficiency gains. For these benefits to accrue, however, managers, investors, and institutions of training and education need to recognize the need for the organizational contexts, from budgets to risk mitigation, in which the technical gains will be situated. This path from recognition to action will take decades, be marked by many false starts, and unfold with an often frustrating sequence of relearning lessons that have already been taught elsewhere.