Friday, July 29, 2011

Early Indications July 2011: Place, Space, and Time

For millennia, geography has defined human civilizations. As our communications capability increases, as measured by technical specifications if not necessarily emotional ones, the need to be physically located in a certain place to do a job, support a social movement, or complete a business transaction is becoming less of an absolute constraint. Mobile phones, cloud computing, and other tools (such as lightweight project management software or online social networks) allow people and resources to be organized without physical contact; this might be called the emerging domain of space, as in "cyber." People can put up virtual storefronts on eBay, let Amazon be their supply chain, rent computing from Google to run code written in India, and let PayPal be their treasury system. Salesforce.com keeps track of customers and prospects; ADP runs payroll once enough employees sign on. Thus, the actual "business" could physically be the size of a laptop PC.

As place becomes negotiable, so does time. Asynchronous television viewing, for example, is reshaping the cable TV landscape: Comcast bought NBC Universal, which in turn was part of the Hulu joint venture. Apart from sports, college students watch very little television at its scheduled time, or over its traditional channel for that matter. Shopping has also become time-shifted: one can easily walk into Sears, shop at a kiosk, and have the item delivered to a physical address, or else shop online and drive to the store for faster pickup than FedEx can manage. At the other end of the time spectrum, tools like Twitter are far faster than TV news, not to mention print newspapers. Voicemail, a time-shifting capability taken for granted, seems primitive now that it's roughly 30 years old.

Place and time increasingly interconnect. Real-time package tracking for a routine Amazon purchase contrasts dramatically with a common scene at an automobile dealership: a customer saw a vehicle on the website earlier in the week, and none of the salespeople know what happened to it. UPS can track more than a million packages per day, while a car dealer can lose a $15,000 two-ton vehicle, one of a few dozen, from a fenced concrete lot. Customer expectations are set by the former experience, and customers are growing increasingly intolerant of the latter.

The corollary of that place/time flexibility, however, is being tracked: everybody with digital assets is plugged into some kind of information grid, and those grids can be mapped. Sometimes it's voluntary: Foursquare, Shopkick, and Facebook Places turn one's announced location into games. More often Big Brother's watch is without consent: London's security cameras are controlled by the same police department accused of using official assets in the service of the Murdoch newspapers' snooping on innocent citizens. Unplugged from the Internet but still needing to distribute directives and communiqués, Osama Bin Laden relied on a USB-stick courier who proved to be his undoing. As we have seen elsewhere, the entire idea of digital privacy, its guarantees and redresses, for bad guys and for everyday folk, is still primitive.

Examples are everywhere: Google Streetview has proved especially controversial in Europe, and in Germany in particular. Local "open records" laws have yet to be rethought in the age of instant global access: it's one thing for the neighbors to stop by town hall to see how much the new family paid for their house, but something else entirely (we don't really know what) when tens of millions of such transactions are searchable -- especially when combined with overlays of Streetview, Bing's Bird's Eye aerial (as opposed to satellite) imagery, and other potentially intrusive mechanisms.

In 2009, a WIRED magazine reporter attempted to vanish using a combination of physical disguises and digital trickery: pre-paid cell phones, proxy servers for IP address masking, cash-purchased gift cards. He was found through a combination of old-fashioned detective work and sophisticated network analysis: he was signing on to Facebook with an alias, and the alias had few real friends. The Facebook group he was lurking in was made up of people trying to find him. The intersections of place and space are growing more curious every year.

Virtuality

From its origins as a network perimeter tunnel (the Virtual Private Network, which lets someone see computing resources inside the firewall while physically remote from the corporate facility), virtualization has become a major movement within enterprise computing. Rather than dedicating a piece of hardware to a particular piece of software, hardware becomes more fungible. In a perfect virtual world, people with applications they need to run can schedule the necessary resources (possibly priced by an auction mechanism), do their work, then retreat from the infrastructure until they next need computing. In this way, the theory goes, server utilization improves: the idle capacity associated with captive hardware can be reclaimed, freeing computing for the current work, whatever its size, shape, or origin.
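To make the pattern concrete, here is a minimal sketch in Python of the request-use-release cycle described above. The scheduler object and its request_capacity/release_capacity methods are purely illustrative names, not any particular vendor's API.

```python
from contextlib import contextmanager

@contextmanager
def leased_capacity(scheduler, cpus, memory_gb):
    """Hold virtual hardware only for as long as a job actually runs."""
    # Hypothetical call: ask the shared pool for a slice of capacity,
    # possibly priced by an auction mechanism as described above.
    lease = scheduler.request_capacity(cpus=cpus, memory_gb=memory_gb)
    try:
        yield lease                        # the caller runs its workload here
    finally:
        # The application "retreats from the infrastructure": capacity
        # returns to the pool for whatever work arrives next.
        scheduler.release_capacity(lease)

# Usage (with a hypothetical scheduler and job):
# with leased_capacity(my_scheduler, cpus=8, memory_gb=32) as lease:
#     run_quarterly_forecast(on=lease)
```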

Once again, physical presence (in this case, big computers in a temperature-controlled facility with expensive redundant network and power connections, physical security, and specialized technicians tending the machines) is disconnected from data and/or application logic. In many consumer scenarios, people act this way without thinking twice: looking at Google Maps instead of a Rand McNally atlas, using the online version of TurboTax, or even reading Facebook follows the same pattern. No software package resides on the user's machine, and the physical location of the actual computing is both invisible and irrelevant.

From the world of computing, it's a short hop to the world of work. People no longer need to come to the physical assets if they're not doing work on somebody else's drill presses and assembly lines: brain work, a large component of the services economy, is often independent of physical capital, and thus of scheduled shifts. "Working from home" is commonplace, and with the rise of the smartphone, work becomes an anytime/anywhere proposition for more and more people. What this seamlessness means for identity, for health, for family and relationships, and for business performance has yet to be either named or sorted out.

Another dimension of virtuality is personal. Whether in Linden Labs' Second Life, World of Warcraft, or any number of other venues, millions of people play roles and interact through a software persona. As processing power and connection quality increase, these avatars will get more capable, more interesting, and more common. One fascinating possibility relates to virtual permanence: even if the base-layer person dies or quits the environment, the virtual identity can age (or, like Bart Simpson, remain timeless), and can either grow and learn or remain blissfully unaware of change in its own life or the various outside worlds.

Practical applications of virtualization for everyday life seem to be emerging. In South Korea, busy commuters can shop for groceries at transparencies of store shelves identical to those at their nearby Tesco Homeplus store; the photos of the products bear 2D barcodes which, when scanned and purchased, generate orders that are bundled together for home delivery. Picking up ingredients for dinner on the way home from work is a time-honored ritual; here, the shopper chooses the items but never touches them until arriving at his or her residence.
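A rough sketch of that flow might look like the following; every name here (the Order class, the barcode payload format, the delivery window) is invented for illustration and makes no claim about Tesco's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    """One commuter's shopping trip, built entirely from scanned shelf photos."""
    customer_id: str
    items: list = field(default_factory=list)

    def add_scan(self, barcode_payload: str) -> None:
        # The 2D barcode on the shelf image might encode a SKU and a price.
        self.items.append(barcode_payload)

    def bundle_for_delivery(self, address: str, window: str) -> dict:
        # Orders are grouped and sent out for home delivery later in the day.
        return {"ship_to": address, "window": window, "items": self.items}

order = Order(customer_id="commuter-42")
order.add_scan("SKU:4711|PRICE:2.99")          # scanned from the shelf photo
print(order.bundle_for_delivery("address on file", "19:00-21:00"))
```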

Cisco is making a major play toward virtual collaboration in enterprise videoconferencing; their preferred term, "telepresence," hasn't caught on, but the idea has. Given the changes to air travel in the past ten years (longer check-in times, fewer empty seats, higher fares), compounded by oil price shocks, many people dislike flying more than they used to. Organizations on lean budgets also look to travel as an expense category ripe for cutting, so videoconferencing is coming into its own at some firms. Cisco reports that it has used the technology to save more than $800 million over five years; productivity gains add up, by their math, to another $350 million.

Videoconferencing is also popular with individuals, but it isn't called that: in July 2011 Skype's CEO said that users make about 300 million minutes of video calls per month, roughly the same as pure voice connections when measured as web traffic; since video consumes more bandwidth, the number of voice-to-voice calls might well be higher. The point for our purposes is that rich interaction can facilitate relationships and collaboration in the absence of physical proximity, at very low cost in hardware, software, and connection. As recently as 2005, a corporate videoconference facility could cost more than a half million US dollars to install; monthly connection charges were another $18,000, or $216,000 annually. In 2011, Apple iPads and many laptop PCs include cameras, and Skype downloads are free.

Organizations

Given that vertical integration has its limits in speed and the cost of capital investment (both in dollars and in opportunity costs), partnering has become a crucial capability. While few companies can emulate the lightweight, profit-free structure of a hacker's collective like Anonymous, of Wikipedia, or of Linux, neither can many firms assume that they control all necessary resources under their own roof. Thus the conventional bureaucracy model is challenged to open up, to connect data and other currencies to partners. Whether it involves sharing requirements documents, blueprints for review, production schedules, regulatory signoffs, or other routine but essential categories of information, few companies can quickly yet securely vet, map, and integrate a partner organization. Differences in nomenclature, in signing authority or span of control, time zones, language and/or currency, and any number of other characteristics complicate the interaction. So-called onboarding -- granting a partner appropriate data access -- can be a months-long process, particularly in secure (aerospace and defense) or regulated settings. Creating a selectively permeable membrane to let in the good guys, let out the proper information, turn off the faucet when it's not being used, and maintain trade secrets throughout has proven to be non-trivial.

Automata

What would happen if a person's avatar could behave independently? If an attractive bargain comes up at Woot, buy it for me. If someone posts something about me on a social network, notify me or, better yet, correct any inaccuracies. If the cat leaves the house through the pet door and doesn't return within two hours, call the petsitter. Who would bear responsibility for the avatar's actions? The person on whose behalf it is "working"? The software writer? The environment in which it operates?
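One way to picture such an avatar is as a small set of standing rules evaluated against incoming events. The toy sketch below invents all of its events, conditions, and actions purely for illustration; the point is only that the avatar acts without a human in the loop.

```python
from datetime import timedelta

# Each rule: (event type, condition on the event, action the avatar takes).
RULES = [
    ("woot_deal",   lambda e: e["discount"] >= 0.5,                       "buy_it"),
    ("social_post", lambda e: e["mentions_me"] and not e["accurate"],     "post_correction"),
    ("pet_door",    lambda e: e["direction"] == "out"
                              and e["time_outside"] > timedelta(hours=2), "call_petsitter"),
]

def avatar_react(event_type, event):
    """Return every action the avatar would take for one incoming event."""
    return [action for etype, condition, action in RULES
            if etype == event_type and condition(event)]

print(avatar_react("pet_door",
                   {"direction": "out", "time_outside": timedelta(hours=3)}))
# -> ['call_petsitter']
```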

Once all those avatars started interacting independently, unpredictable things might happen, the equivalent of two moose getting their antlers stuck together in the wild, or of a DVD refusing to play on some devices but not others because of a scratch on the disc. Avatars might step out of each other's way, or might trample each other in mobs. They might adapt to new circumstances or they might freeze up in the face of unexpected inputs. Some avatars might stop and wait for human guidance; others might create quite a bit of havoc given a particular set of circumstances.

It's one thing for a person's physical butler, nanny, or broker to act on his or her behalf, but something else quite new for software to be making such decisions. Far from being hypothetical thought experiments, the scenarios above are already real. Software "snipers" win eBay auctions with the lowest possible winning bid at the last possible moment. Google Alerts can watch for web postings that fit my criteria and forward them.
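The sniping idea reduces to a few lines of logic: wait until just before the auction closes, then bid one increment above the current high bid, up to a ceiling. The functions in this sketch are stand-ins, not eBay's actual interface.

```python
import time

def current_high_bid() -> float:
    """Stand-in for whatever feed reports the auction's current high bid."""
    return 12.50

def place_bid(amount: float) -> None:
    """Stand-in for submitting a bid through the auction interface."""
    print(f"bid placed: {amount:.2f}")

def snipe(auction_close_epoch, my_ceiling, increment=0.50, lead_seconds=5):
    # Do nothing until a few seconds before the auction ends.
    time.sleep(max(0, auction_close_epoch - time.time() - lead_seconds))
    high = current_high_bid()
    bid = min(high + increment, my_ceiling)   # lowest bid that can still win
    if bid > high:
        place_bid(bid)                        # too late for rivals to respond

# e.g. snipe(auction_close_epoch=time.time() + 10, my_ceiling=20.00)
```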

More significantly, the jacketed floor traders waving their hands furiously to generate Wall Street transactions are a dying breed. So-called algorithmic trading is a broad category that includes high-frequency trading, in which bids, asks, and order cancellations are computer-to-computer interactions that might last less than a second (and thus cannot involve human traders). By itself, HFT is estimated to generate more than 75% of equities trading volume; nearly half of commodity futures (including oil) trading volume is also estimated to be computer-generated in some capacity. The firms that specialize in such activity are often obscure, and most prefer not to release data that might expose sources of competitive advantage, so the actual numbers are not widely known.

What is known is that algorithms can go wrong, and when they go wrong at scale, consequences can be significant. The May 6, 2010 "Flash Crash" is still not entirely understood, but the source of the Dow Jones Industrial Average's biggest, fastest intraday loss (998 points) in history lies in large measure in the complex system of competing algorithms running trillions of dollars of investment. The long-time financial fundamentalist John Bogle -- founder of the Vanguard Group -- pulled no punches in his analysis: "The whole system failed. In an era of intense technology, bad things can happen so rapidly. Technology can accelerate things to the point that we lose control."

Artifacts of the algorithmic failure were just plain weird. Apple stock hit $100,000 a share for a moment; Accenture, a computer services provider, instantaneously dropped from $40 to a cent, only to bounce back a few seconds later. Circuit-breakers, or arrangements to halt trading once certain limits are exceeded, were tripping repeatedly. For example, if a share price moves more than 10% in a five-minute interval, trading can be halted for a five-minute break. A bigger question relates to the HFT firms that, in good times, provide liquidity, but that can withdraw from the market without notice and in doing so make trading more difficult. Technically, the exchanges' information systems were found to have shortcomings: some price quotes were more than two seconds delayed, which represents an extreme lag in a market where computer-generated actions measure in the millions (or more) per second.
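For readers who want the circuit-breaker rule in concrete form, here is a simplified sketch. The 10% threshold and five-minute windows follow the example above, but the bookkeeping is schematic rather than any exchange's actual rulebook.

```python
from collections import deque

WINDOW_SECONDS = 5 * 60    # look back five minutes
MOVE_LIMIT = 0.10          # a move of more than 10% trips the breaker
HALT_SECONDS = 5 * 60      # trading then pauses for five minutes

def should_halt(ticks, now):
    """ticks: deque of (timestamp, price) pairs; True if trading should pause."""
    # Discard prices that have aged out of the five-minute window.
    while ticks and now - ticks[0][0] > WINDOW_SECONDS:
        ticks.popleft()
    if not ticks:
        return False
    reference = ticks[0][1]                    # oldest price still in the window
    latest = ticks[-1][1]
    return abs(latest - reference) / reference > MOVE_LIMIT

# Accenture-style example: a $40 stock printing at one cent moments later
# has moved ~99.97%, far past the 10% limit, so trading would halt.
ticks = deque([(0.0, 40.00), (2.0, 0.01)])
print(should_halt(ticks, now=3.0))             # True
```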

Implications

What does it mean to be somewhere? As people sitting together in college cafeterias each text other people, what does it mean to be physically present? What does it mean to "be at work"? Conversely, what does it mean to be "on vacation"? If I am at my job, how is my output or lack thereof measured? Counting lines of code proved to be a bad way to measure software productivity; how many jobs measure performance by the quality of ideas generated, the quality of collaboration facilitated, or the quality of customer service? These are difficult to instrument, so industrial-age measures, including physical output, remain popular even as services (which lend themselves to extreme virtualization) grow in importance and impact.

What is a resource? Who creates it, gets access to it, bears responsibility for its use or misuse? Where do resources "live"? How are they protected? What is obsolescence? How are out-of-date resources retired from service? Enterprise application software, for example, often lives well past its useful life; ask any CIO how many zombie programs he or she has running. Invisible to the naked eye, software can take on a life of its own, and once another program connects to it (the output of a sales forecasting program might be used in HR scheduling, or in marketing planning), the life span likely increases: complexity makes pruning more difficult since turning off an application might have dire consequences at the next quarterly close, the next annual performance review, or the next audit. Better to err on the side of safety and leave things running.

What does it mean for information to be weightless, massless, and infinitely portable? Book collections are becoming a thing of the past for many readers, as Kindle figures and Google searches can attest: having a reference collection near the dining room used to be essential in some academic households, to settle dinnertime contests. Music used to weigh a lot, in the form of LP records. Compact discs were lighter, but the plastic jewel box proved to be a particularly poor storage solution. MP3 downloads eliminated the physical medium, but still needed bits to be stored on a personal hard drive. Now that too is changing, to the point where books, newspapers, music, and movies can all live in the cloud. The result is a dematerialization of many people's lives: book collections, record collections, sheet music -- artifacts that defined millions of people are now disappearing, for good ecological reasons, but with as yet undetermined ramifications for identity, not to mention decorating.

What does it mean to bear personal responsibility? If software operating in my name does something bad, did I do anything? If I am not physically present at my university, my workplace, or my political organization, how loosely or tightly am I connected to the institution, to its people, to its agenda? Harvard political scientist Robert Putnam worried about the implications of the decline in the number of American bowling leagues; are Facebook groups a substitute for, or an improvement on, physical manifestations of civic engagement? If so, which groups, and which forms of engagement? At the scale of 700-plus million users, saying anything definitive about Facebook is nearly impossible, given the number of caveats, exceptions, and innovations: Facebook today is not what it will be a year from now, whereas bowling leagues have been pretty stable for decades.

Thus the fluidity of (cyber) space, (physical) place, and time has far-reaching implications for getting work done, for entrepreneurial opportunity, and for personal identity. As with so many other innovations, the technologists who are capable of writing code and designing and building breakthrough devices have little sense of what those innovations will mean. The sailing ship meant, in part, that Britain could establish a global empire; the first century of the automobile meant wars for oil, environmental degradation, new shapes for cities, the post-war rise of Japan, and unprecedented personal mobility, to start a very long list. What barely a quarter century of personal computing, 20 years of the commercial Internet, and a few years of smartphones might mean is impossible to tell so far, but it looks like they could mean a lot.