Friday, May 30, 2014

Early Indications May 2014: When computing leaves the box

Words can tell us a lot. In particular, when a new innovation emerges, the history of its naming shows how it goes from foreign entity to novelty to invisible ubiquity. A little more than 100 years ago, automobiles were called “horseless carriages,” defined by what they were not rather than what they were. “Cars” were originally parts of trains, as in boxcars or tank cars, but the automobile is now top of mind for most people. More recently, the U.S. military has referred to drones as UAVs, unmanned aerial vehicles, continuing the trend of definition by negation. Tablets, the newest form of computing tool, originally were made of clay, yet the name feels appropriate.

The naming issues associated with several emerging areas suggest that we are in the early stages of a significant shift in the landscape. I see four major manifestations of a larger, as-yet-unnamed trend that, for lack of better words, I am calling “computing outside the box.” This phrase refers to digital processes — formerly limited to punchcards, magnetic media, keyboards/mice, and display screens — that are now evolving into three-dimensional artifacts that interact with the physical world, both sensing it and acting upon it. My current framing of a book project addresses these technologies:

- robotics
- 3D printing/additive manufacturing
- the emerging network of sensors and actuators known as the Internet of Things (another limited name that is due for improvement)
- autonomous vehicles (airborne, wheeled, and otherwise), including the aforementioned UAVs.

Even the word “computer” is of interest here: its first recorded meaning, dating to 1613 and in use for nearly 350 years, referred to a person who performed calculations. After roughly 50 years of computers being big machines that gradually shrank in size, we are now in a stage where the networked digital computers carried by hundreds of millions of people are no longer called computers, or conceived of as such.

Most centrally, the word “robot” originated in the 1920s, in Karel Čapek’s play R.U.R., where it denoted a type of slave laborer; even now, robots are often characterized by their ability to perform dull, dirty, or dangerous tasks, sparing humans these efforts. Today the word has been shaped in the public imagination more by science fiction literature and cinema than by wide familiarity with actual artificial creatures. (See my TEDx talk on the topic.) Because the science and engineering of the field continue to evolve rapidly — look no further than this week’s announcement of a Google prototype self-driving car — computer scientists cannot come to anything resembling consensus. Some argue that any device that can 1) sense its surroundings, 2) perform logical reasoning on various inputs, and 3) act upon the physical environment qualifies. Others insist that a robot must move through physical space (thus disqualifying the Nest thermostat), and still others say that true robots are autonomous (excluding factory assembly tools).
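By the broadest of those definitions, even a thermostat qualifies. The three-part test — sense, reason, act — can be sketched in a few lines of toy code; every name below is illustrative, not any real robotics or Nest API:

```python
# A toy sense-reason-act loop illustrating the broadest definition of a
# robot discussed above. The "environment" is a simulated room, and the
# device behaves like a crude thermostat. All names are hypothetical.

def sense(environment):
    """Sense the surroundings: read the current temperature."""
    return environment["temperature"]

def reason(reading, setpoint=20.0):
    """Perform logical reasoning on the input: decide what to do."""
    if reading < setpoint - 1.0:
        return "heat_on"
    if reading > setpoint + 1.0:
        return "heat_off"
    return "idle"

def act(environment, command):
    """Act on the physical environment (here, nudge the simulation)."""
    if command == "heat_on":
        environment["temperature"] += 0.5
    elif command == "heat_off":
        environment["temperature"] -= 0.5

env = {"temperature": 17.0}
for _ in range(10):
    act(env, reason(sense(env)))
```

The point of the sketch is how low the bar is: a dozen lines meet all three criteria, which is exactly why some researchers add requirements such as movement or autonomy.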

I recently came across a sensible, nuanced discussion of this issue by Bernard Roth, a longtime professor of mechanical engineering who was associated with the Stanford Artificial Intelligence Lab (SAIL) from its inception.

“I do not think a definition [of what is or is not a robot] will ever be universally agreed upon. . . . My view is that the notion of a robot has to do with which activities are, at a given time, associated with people and which are associated with machines. If a machine suddenly becomes able to do what we normally associate with people, the machine can be upgraded in classification and classified as a robot. After a while, people get used to the activity being done by machines, and the devices get downgraded from ‘robot’ to ‘machine.’” *

However robots are defined, how is computing outside the box different from what we came to know of digital computing from 1950 until 2005 or so? Several factors come into play here.

1) The number of computational devices is increasing substantially. There were dozens of computers in the 1950s, thousands in the 1960s, millions in the 1980s, and so on; networked sensors will soon number in the tens of billions. This increase in the scale of both the challenges (network engineering, data science, and energy management are being reinvented) and the opportunities requires breakthroughs in creativity: are fitness monitors, which typically get discarded after a few months except by the hardest-core trainers, really the best we can do for improving health outcomes?

2) With cameras and sensors everywhere — on phone poles, on people’s faces, in people’s pockets, in the ground (on water mains), and in the sky (drone photography is generating a rapidly evolving body of legal judgment and contestation) — the boundaries of security, privacy, and risk are all being reset. When robots enter combat, how and when will they be hacked? Who will program a self-driving suicide bomb?

3) Computer science, information theory, statistics, and physics (in terms of magnetic media) are all being stress-tested by the huge data volumes generated by an increasingly instrumented planet. An aircraft powered by GE jet engines is reported to take off, on average, every two seconds worldwide, and each engine generates a terabyte of data per flight; 10:1 compression takes this figure down to a mere 100 gigabytes per engine per flight. Dealing with information problems at this scale, in domain after domain (here’s an Economist piece on agriculture), raises grand-challenge-scale hurdles all over the landscape.
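Taken at face value, those reported figures imply a staggering daily total. A quick back-of-the-envelope calculation, using decimal units and the simplifying assumption of one instrumented engine per takeoff, makes the scale concrete:

```python
# Back-of-the-envelope arithmetic for the jet-engine example above.
# Inputs are the figures reported in the text; the rest is derived.

SECONDS_PER_DAY = 24 * 60 * 60
takeoffs_per_day = SECONDS_PER_DAY // 2   # one takeoff every 2 seconds

raw_tb_per_flight = 1.0                   # 1 terabyte per engine per flight
compression_ratio = 10                    # 10:1 compression
compressed_gb_per_flight = raw_tb_per_flight * 1000 / compression_ratio

# Daily totals, in petabytes (decimal: 1 PB = 1000 TB = 1e6 GB).
daily_raw_pb = takeoffs_per_day * raw_tb_per_flight / 1000
daily_compressed_pb = takeoffs_per_day * compressed_gb_per_flight / 1e6

print(takeoffs_per_day)        # 43,200 takeoffs per day
print(daily_raw_pb)            # ~43 PB of raw data per day
print(daily_compressed_pb)     # still ~4.3 PB per day after compression
```

Even after 10:1 compression, one industrial product line generates petabytes per day — before anyone asks where to store it, how to move it, or what to learn from it.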

4) The half-life of technical knowledge appears to be shrinking. Machine learning, materials science (we apparently don’t yet understand precisely how 3D printing works at the droplet level), machine vision in robots, and related fields will evolve rapidly, making employment issues and career evolution a big deal. Robots obviously displace more and more manual laborers, but engineers, programmers, and scientists will also be hard-pressed to keep up with the state of these fields.

5) What are the rules of engagement with computing moving about in the wild? A woman wearing a Google Glass headset was assaulted in a bar for violating social norms; self-driving cars don’t yet have clear liability laws; 3D printing of guns and of patented or copyrighted material has yet to be sorted out; nobody yet knows what happens when complete strangers can invoke facial recognition on the sidewalk; Google could see consumer (or EU) blowback if Nest sensor data drives ad targeting.

6) How will these technologies augment and amplify human capability? Whether in exoskeletons, care robots, telepresence, or prostheses (a field perfectly suited to 3D printing), the human condition will change in its shape, reach, and scope in the next 100 years.

To anticipate the book version of this argument: computing outside the box introduces a new layer of complexity into the fields of artificial intelligence, big data, and, ultimately, human identity and agency. The long history of human efforts to create artificial life gains a new chapter, but we are also creating artificial life in vast networks that will behave differently from any single creature. Frankenstein’s creature is a forerunner of Google’s Atlas robot, but I don’t know that we have as visible a precedent or metaphor for self-tuning sensor nets, bionic humans, or the distributed fabrication of precision parts and products outside factories.

That piece of the argument remains to be worked out more completely, but for now I’m finding validation for the concept every day, both in the daily news feed and in the lack of words to describe what is really happening.

*Bernard Roth, foreword to Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Berlin: Springer-Verlag, 2008), p. viii.