Tuesday, September 28, 2010

Early Indications September 2010: The Power and Paradoxes of Usability

Usability is among the most difficult of topics to define and analyze.
At one level, it is much like Justice Potter Stewart's famous Supreme
Court remark about obscenity: "I know it when
I see it." At another level, the number of daily moments that
reinforce the presence of poor design can be overwhelming. Examples
are everywhere: building entrance doors with a grab handle you're
supposed to push but that you instinctively (and unsuccessfully) pull,
all manner of software (in Outlook, does hitting "cancel" stop the
transaction or clear a meeting from the calendar?), and pinched
fingers and scraped knuckles. Poor usability may be easy to spot, but
good usability is clearly very difficult to engineer in.

Systems

Why is this so? As Don Norman, one of the heroic figures in modern
usability studies, puts it in a recent ACM piece, complex products are
not merely things; they provide services: "although a camera is
thought of as a product, its real value is the service it offers to
its owner: Cameras provide memories. Similarly, music players provide
a service: the enjoyment of listening." In this light, the product
must be considered as part of a system that supports experience, and
systems thinking is hard, complicated, and difficult to accomplish in
functionally-siloed organizations.

The ubiquitous iPod makes his point perfectly:

"The iPod is a story of systems thinking, so let me repeat the essence
for emphasis. It is not about the iPod; it is about the system. Apple
was the first company to license music for downloading. It provides a
simple, easy to understand pricing scheme. It has a first-class
website that is not only easy to use but fun as well. The purchase,
downloading the song to the computer and thence to the iPod are all
handled well and effortlessly. And the iPod is indeed well designed,
well thought out, a pleasure to look at, to touch and hold, and to
use. Then there is the Digital Rights Management system, invisible to
the user, but that both satisfies legal issues and locks the customer
into lifelong servitude to Apple (this part of the system is
undergoing debate and change). There is also the huge number of
third-party add-ons that help increase the power and pleasure of the
unit while bringing a very large, high-margin income to Apple for
licensing and royalties. Finally, the 'Genius Bar' of experts offering
service advice freely to Apple customers who visit the Apple stores
transforms the usual unpleasant service experience into a pleasant
exploration and learning experience. There are other excellent music
players. No one seems to understand the systems thinking that has made
Apple so successful."

One of the designers of the iPod interface, Paul Mercer of Pixo,
affirms that systems thinking shaped the design process: "The iPod is
very simple-minded, in terms of at least what the device does. It's
very smooth in what it does, but the screen is low-resolution, and it
really doesn't do much other than let you navigate your music. That
tells you two things. It tells you first that the simplification that
went into the design was very well thought through, and second that
the capability to build it is not commoditized." Thus more complex
management and design vision are prerequisites for user
simplification. (Mercer quoted in Bill Moggridge, Designing
Interactions (Cambridge: MIT Press, 2007))

Because it requires systems thinking and complex organizational
behavior to achieve, usability is often last on the list of design
criteria, behind such considerations as manufacturability or modular
assembly, materials costs, packaging, skill levels of the factory
employees, and so on. The hall of shame for usability issues is far
longer than the list of successes. For every garage door opener, LEGO
brick, or Amazon Kindle, there are multiple BMW iDrives, Windows
ribbons, European faucets, or inconsistent anesthesia machines:
doctors on a machine from company A turned the upper right knob
clockwise to increase the flow rate, but had to go counter-clockwise
on company B's machine in the next operating room over. Fortunately,
the industry has standardized the control interface, with a resulting
decline in human endangerment. (See Atul Gawande, Complications: A
Surgeon's Notes on an Imperfect Science (New York: Macmillan, 2003))

Paradoxes

As Roland Rust and his colleagues have shown, usability presents
manufacturers of consumer electronics with a paradox. In purchase
mode, buyers overemphasize option value in their purchase
consideration: if multifunction device from company D does 13 things
and a competitor from company H performs 18 actions, the potential
utility is overemphasized even if the known need is only for, say, six
tasks. Watching the evolution of the Swiss Army knife testifies to
this phenomenon: very few of us, I suspect, have precisely the tools
we a) want or b) use on our knife.

Once they get that 18-way gadget home, however, option value recedes
and usability comes to the fore, and the extra controls, interfaces,
and other factors that drive complexity can make using the more
"capable" device frustrating at best and impossible at worst. At
consumer electronics retailers, most returned items function
perfectly, but are often returned because they are too hard to
integrate into everyday life. (They may also be returned because
consumers routinely seek better deals, get tired of a color or finish,
or use the purchase essentially as a free rental, performing a task
then returning the device.)

Hence the paradox: does the designer back off on features and
capabilities, and thus lose the head-to-head battle of shelf-side
calculus in order to win on usability, or do purchase rather than use
considerations win out? There are some ways out of this apparent
paradox: modular add-ons, better point-of-sale information, and
tutorials and other documentation (knowing that the vast majority of
people will never read a manual). The involvement of user groups is
growing, for both feedback on products in development and support
communities for stumped users. (Roland T. Rust, Debora Viana Thompson,
and Rebecca W. Hamilton, "Defeating Feature Fatigue," Harvard Business
Review, February 2006)

At its worst, overwhelming complexity and other forms of poor
usability can kill, as the anesthesia example makes clear. Nuclear
power plants, military hardware, and automobiles provide ready
examples. Especially with software-driven interfaces becoming the
norm (even for refrigerators and other devices with little status to
report and few user-driven options to adjust), the potential for
either bugs or unforeseen situations to escalate is becoming more
common.

Beyond Gadgets

This essay will not become a tribute to Apple or Southwest Airlines,
however, if only to escape the cliche. Instead, I'd like to discuss a
recent video by TED curator Chris Anderson. In it he looks at the
proliferation of online videos as tools for mass learning and
improvement. Starting with the example of self-taught street dancers
in Brazil, Japan, LA, and elsewhere, he argues that the broad
availability of video as shared show-and-tell mechanism spurs, first,
one-upmanship through imitation and then innovation. The level of TED
talks themselves, Anderson argues, provides home-grown evidence that
cheap, vivid multimedia can raise the bar for many kinds of tasks:
futurist presentations, basketball dunks, surgical techniques, and so
on.

Five things are important here.

1) The low barrier to entry for imitator/innovator #2 to post her
contribution to the discussion may inspire, inform, or infuriate
imitator/innovator #3. Mass media did some of these things (in
athletic moves, for example: watch a playground the week after the
Super Bowl or a halfpipe after the X games). The lack of a feedback
loop, however, limited the power of broadcast to propagate secondary
and tertiary contributions.

2) Web video moves incredibly fast. The speed of new ideas entering
the flow can be staggering once a video goes "viral," as its
epidemiological metaphor would suggest.

3) The incredible diversity of the online world is increasing every
year, so the sources of new ideas, fresh thinking, and knowledge of
existing solutions multiply as well. Credentials are self-generated
rather than externally conferred: my dance video gets views not
because I went to Juilliard but because people find it compelling, and
tell their friends, followers, or colleagues.

4) Web video is itself embedded in a host of other tools, both social
and technical, that are also incredibly easy to use. Do you want to
tell someone across the country about an article in today's paper
newspaper? Get out the scissors, find an envelope, dig up his current
address, figure out correct postage (pop quiz: how much is a
first-class stamp today?), get to a mailbox, and wait a few days.
Want to recommend a YouTube or other web video? There are literally
hundreds of tools for doing so, essentially all of which are free and
have short learning curves.

5) Feedback is immediate, in the form of both comments and views
counters. The reputational currency that attaches to a "Charlie bit
my finger" or "Evolution of dance" is often (but not always)
non-monetary, to be sure, but emotionally extremely affecting
nonetheless.

With such powerful motivators, low barriers to participation, vast and
diverse populations, rapidity of both generation and diffusion, and a
rich ancillary toolset relating to online video, Anderson makes a
compelling case for the medium as a vast untapped resource for
problem-solving on multiple fronts. In addition, because video engages
multiple senses, the odds that a given person will grasp my ideas
increase: the viewer can hear, watch, or read text relating to the
topic.

Thus the power of extreme usability transcends gadgets, frustration,
and false-failure returns. When done right, giving people easy access
to tools for creation, distribution, interpretation, and
classification/organization can help address problems and
opportunities far beyond the sphere of electromechanical devices.
Apart from reducing frustration, improving safety, or increasing
sales, lowering barriers to true engagement (as in the web browser,
for example) may in fact help change the world.

Wednesday, September 01, 2010

Early Indications August 2010: Rethinking Location and Identity

Even though they're sometimes overlooked in relation to spectacular
growth rates (50x increases in wireless data carriage), successful
consumer applications (half a billion Facebook users), and technical
achievement (at Google, Amazon, Apple, and elsewhere), location-based
technologies deserve more attention than they typically receive. The
many possible combinations of wired Internet, wireless data, vivid
displays, well-tuned algorithms running on powerful hardware, vast
quantities of data, and new monetization models, when combined with
location awareness, have yet to be well understood.

Digital location-based services arose roughly in chronological
parallel with the commercial Internet. In 1996, GM introduced the
OnStar navigation and assistance service in high-end automobiles.
Uses of Global Positioning System (GPS, which, like the Internet, was
a U.S. military invention) and related technologies have exploded in
the intervening years, in the automotive sector and, more recently, on
smartphones. The widespread use of Google Earth in television is
another indicator of the underlying trend.

Handheld GPS units continue to double in sales every year or two in
the North American market. As the technology is integrated into
mobile phones, the social networking market is expected to drive far
wider adoption. Foursquare, Gowalla, numerous other startups, and the
telecom carriers are expected to deliver more and more applications
linking "who," "where," and "when." Powerful indications of this
tendency came when Nokia bought Navteq (the "Intel inside" of many
online mapping applications) for $8.1 billion in 2007, when Facebook
integrated location services in 2010, and when the rapid adoption of
the iPhone and other smartphones amplified the market opportunity
dramatically. Location-based services (whether Skyhook geolocation,
Google Maps and Earth, GPS, or others) have evolved to become a
series of platforms on which specific applications can build, tapping
the market's creativity and vast quantities of data.

In the process, the evolution of location taps into significant questions:

-Who am I in relation to where I am? That is, what are the
implications of mapping for identity management?

-Who knows where I am, when I'm there, and where I've been? How much
do I control the "information exhaust" related to my movements? Who
is liable for any harm that may come to me based on the release of my
identity and location?

-Who are we relative to where we are? In other words, how do social
networks change as they migrate back and forth between virtual space
(Facebook) and real space (Mo's Bar)? What happens as the two worlds
converge?

Variations on a Theme

While location often seems to be synonymous with GPS, location-based
data services actually come in a variety of packages. Some examples
follow:

-Indoor Positioning Systems
For all of the utility of GPS, there are numerous scenarios where it
doesn't work: mobile x-ray machines or patient gurneys in hospitals,
people in burning buildings, work-in-process inventory, and
specialized measurement or other tools in a lab or factory all need to
be located in sometimes vast and often challenging landscapes,
sometimes within minutes. GPS signals may not penetrate the building,
and even if they can, the object of interest must "report back" to
those responsible for it. A variety of wired and wireless
technologies can be used to create what is in essence a scaled-down
version of the GPS environment.

-Optical
Such well-known firms as Leica and Nikon have professional products to
track minute movements in often massive structures or bodies: dams,
glaciers, bridges. Any discussion of location awareness that neglects
the powerful role of precision optics, beginning with the essential
surveyor's transit, would be incomplete.

-WiFi mapping
As we have seen, the worldwide rise of wi-fi networking is very much a
bottom-up phenomenon. Two consequences of that mode of installation
are, first, often lax network security and second, considerable
coverage overspill. Driving down any suburban or metropolitan street
with even a basic wireless device reveals dozens of residential or
commercial networks. Such firms as Google have systematically mapped
those networks, resulting in yet another overlay onto a growing number
of triangulation points. The privacy implications of such mapping
have yet to be resolved.

-Cellular
Wireless carriers can determine the position of an active (powered-up)
device through triangulation with the customer's nearby towers. Such
an approach lacks precision when compared to approaches (most notably
GPS) that reside on the handset rather than in the network. In either
case, the carrier can establish historical location for law
enforcement and potentially other purposes.
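The geometry behind this network-side approach can be sketched in a few
lines. The toy example below (hypothetical flat-plane coordinates and
simple 2-D trilateration, not any carrier's actual algorithm) estimates
a handset's position from measured ranges to three towers. Note that it
fails exactly when the towers sit on a straight line, which is the
rural roadside case discussed later in this essay.

```python
def trilaterate(towers, distances):
    """Estimate (x, y) from three tower positions and measured ranges.

    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Linearized equations: a*x + b*y = c for each tower pair
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("towers are collinear: position is ambiguous")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With towers at (0, 0), (10, 0), and (0, 10) and ranges measured from
the point (3, 4), the routine recovers that point; move all three
towers onto one road-like line and the determinant collapses to zero.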

-Skyhook
A startup based in Boston, Skyhook has built a database of 100 million
wi-fi physical coordinates then added both GPS and cellular
components, making Skyhook most precise (inside or near buildings)
where GPS is weakest. A software solution combines all available
information to create location-tracking for any wi-fi enabled device,
indoors or out. Skyhook powers location awareness for devices from
Apple, Dell, Samsung, and other companies, and is now generating
secondary data based on those devices.
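One simple way to see how a Wi-Fi position fix can work is a weighted
centroid: average the known coordinates of the access points seen in a
scan, weighting stronger signals more heavily. The sketch below is
purely illustrative (the BSSIDs, coordinates, and weighting scheme are
invented, and this is not Skyhook's proprietary method).

```python
def wifi_position(observations, ap_database):
    """Weighted-centroid position estimate from a Wi-Fi scan.

    observations: {bssid: rssi_dbm}; ap_database: {bssid: (lat, lon)}.
    Stronger signals (less negative RSSI) pull the estimate harder.
    """
    total_w = lat_acc = lon_acc = 0.0
    for bssid, rssi in observations.items():
        if bssid not in ap_database:
            continue  # AP not yet surveyed into the database
        w = 10 ** (rssi / 20.0)  # crude weight proportional to amplitude
        lat, lon = ap_database[bssid]
        lat_acc += w * lat
        lon_acc += w * lon
        total_w += w
    if total_w == 0:
        return None  # no known APs in range; fall back to GPS or cellular
    return lat_acc / total_w, lon_acc / total_w
```

Two access points heard at equal strength place the device at their
midpoint; a scan containing no surveyed networks returns nothing, which
is why hybrid systems blend Wi-Fi with GPS and cellular fixes.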

Landmarks

Noting a few historic transitions and innovations in the history of
location-based services reveals the scale, complexity, and wide
variety of applications that the core technologies are powering.

OnStar
With roughly 5.5 million subscribers in mid-2010, OnStar has become
the world's largest remote vehicle-assistance service. In addition to
receiving navigation and roadside assistance, subscribers can have
doors unlocked and gain access to certain diagnostic data related to
that particular vehicle. The service delivers important information
to emergency response personnel: when extricating occupants from a
damaged vehicle, knowing which airbags have deployed can assist in
keeping EMTs, police, and firefighters safe from the explosive force
of an undeployed device that might be inadvertently tripped. Knowing
the type and severity of the crash before arrival on the scene can
also help the teams prepare for the level of damage and injury they
are likely to encounter.

The service was launched as a joint venture. General Motors brought
the vehicle platform and associated engineering, Hughes Electronics
managed the satellite and communications aspects, and Electronic Data
Systems, itself being spun out from GM in OnStar's launch year,
performed systems integration and information management.

GPS
The history of GPS is even more compelling when considered alongside
its nearly contemporary stable mate, the Internet. GPS originated in
1973, ARPANET in 1969. Ronald Reagan allowed GPS to be used for
civilian purposes after a 1983 incident involving a Korean Air Lines
plane that strayed into Soviet airspace. The Internet was handed off
from the National Science Foundation to commercial use in 1995; Bill
Clinton ordered fully accurate GPS (20 meter resolution) to be made
available May 1, 2000. Previously, the military had access to the
most accurate signals while "Selective Availability" (roughly 100
meter resolution) was delivered to civilian applications.

Since 1990, GPS has spread to a wide variety of uses: recreational
hiking and boating, commercial marine navigation, cell phone
geolocation, certain aircraft systems, and of course vehicle
navigation. Heavy mining and farming equipment can be steered to less
than 1" tolerances. Vehicles (particularly fleets) and even animals
can be "geofenced," with instant notification if the transmitter
leaves a designated area. In addition to latitude and longitude, GPS
delivers highly precise time services as well as altitude.
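The geofencing idea above reduces to a distance test against each
position fix. A minimal sketch (hypothetical coordinates; commercial
products use polygons and hysteresis rather than a bare circle)
compares the great-circle distance, computed with the standard
haversine formula, against the fence radius.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if the reported fix lies within a circular geofence."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
```

A fleet-tracking service would run this check on each incoming GPS
report and raise the "instant notification" when the result flips from
inside to outside.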

Trimble
Founded by Charles Trimble and two colleagues from Hewlett-Packard in
1978 (the first year a GPS satellite was launched), Trimble Navigation
has become an essential part of geolocation history. From its base in
Silicon Valley, the company has amassed a portfolio of more than 800
patents and offers more than 500 products. Much like Cisco, Trimble
has made acquisition of smaller companies a core competency, with many
M&A moves in the past ten years in particular. A measure of Trimble's
respect in the industry can be seen in the quality of its
joint-venture partners: both Caterpillar and Nikon have gone to market
jointly with Trimble.

The company has a long history of "firsts": the first commercial
scientific-research and geodetic-survey products based on GPS for
oil-drilling teams on offshore platforms, the first GPS unit taken
aboard the space shuttle, the first circuit board combining GPS and
cellular communications. The reach of GPS can be seen in the variety
of Trimble's product offerings: agriculture, engineering and
construction, federal government, field and mobile worker (including
both public safety and utilities applications), and advanced devices,
the latter indicating a significant commitment to R&D.

Location, Mobility, and Identity

Issues of electronic identity and mobility have been playing out in
quiet but important ways. Each of several instances is a classic case
of social or economic problems being tangled up with a technology
challenge. To see only one side of the question is to create the
possibility of unintended consequences, allow hidden agendas into
play, and generally confuse the allocation of sometimes-scarce
resources.

-Social Networking Goes Local
Whether through Dodgeball (a New York startup that was bought by
Google in 2005 and then left unexploited), Foursquare, or Facebook Places,
the potential for the combination of virtual and real people in
virtual or real places is still being explored. Viewed in
retrospect, the course of the Dodgeball acquisition raises the revenue
questions familiar to watchers of Friendster et al: who will pay for
what, and who collects, by what mechanism? Who owns my location
information and what aspects of it do I control? Much like my medical
records, which are not mine but rather the doctor's or hospital's,
control appears to be defaulting to the collector rather than the
generator of digital bread crumbs.

-The Breakdown of 911
After a series of implementations beginning in 1968, Americans on
wireline voice connections could reliably dial the same three-digit
emergency number anywhere in the country. As the Bell System of the
twentieth century fades farther and farther from view, the presumption
of 911 reliability declines proportionately with the old business
model even as demand increases: the U.S. generates roughly 240 million
911 calls a year. The problem comes in two variants.

First, a number of Voice over IP customers with life-threatening --
and as it turned out, life-ending -- emergencies could only reach a
recording at Vonage saying to call 911 from another phone. The Texas
Attorney General is raising the question after a 911 call failed
during a home invasion in Houston. A baby's death in Florida was
blamed on a Vonage 911 failure. According to the Wall Street Journal,
"In a letter to Florida's Attorney General, [the mother] said the
Vonage customer-service representative laughed when she told her that
Julia had died. 'She laughed and stated that they were unable to
revive a baby'. . . ."

For its part, Vonage includes bold-print instructions for manual 911
mapping during the sign-up process, but it's been estimated that up to
a quarter of the U.S. population is functionally illiterate. One
feature of VoIP is its portability: plug the phone into an RJ45 jack
anywhere and receive calls at a virtual area code of the customer's
choice. Navigating firewalls, dynamic IP addresses, wireless
connections, and frequent network outages taxes anyone but the most
technically adept Internet user. Children are also a key 911
constituency. Taken collectively, these overlapping populations raise
dozens of tricky questions. At the infrastructure level, the FCC and
other agencies face the substantial challenge of determining the
fairest, safest set of technical interconnection requirements
incumbent on the Regional Bells and VoIP carriers.

From the Bell perspective, 911 obviously costs money to implement and
maintain, and declining wireline revenues translate to declining 911
funds. Connecting 911 to the Internet in a reliable, secure manner is
nontrivial -- network attacks have used modems to target the service
in the past -- and until contractual arrangements are finalized there
is reluctance to subsidize the same firms that present themselves as
full wireline replacements.

911 isn't just a VoIP problem either: cellular users represent nearly
75% of emergency callers, but math and economics conspire to make
finding them difficult or impossible. In rural areas, cell towers
often follow roads, so attempting to triangulate from three points in
a straight line can limit precision. States have raided 911 tax
revenues for budget relief.

-Cell phone tracking
The wireless carriers offer a variety of services that give a relative
(often a parent, or an adult child of a potentially confused elder)
location information generated by a phone. The service has also been
used by stalkers and abusive spouses to find victims in hiding.
Women's shelters routinely strip out the tracking component of cell
phones; according to the Wall Street Journal, a Justice Department
report in 2009 estimated that 25,000 adults in the U.S. were victims
of GPS stalking every year. In addition to the carriers, tracking
capability is being developed by sophisticated PC users who spoof the
behavior of a cell tower. Keystroke and location logging software is
also available; one package, called MobileSpy, costs under $100 per
year.

Conclusion

As the telephone system migrates from being dominated by fixed lines,
where identity resided in the phone, to mobile usage, where identity
typically relates to an individual, location is turning out to matter
a lot. Mobile number portability was an unexpectedly popular mandate
a few years ago, and the fastest technology adoption in history was a
phone feature: 55 million people signed up in a matter of months for a
service -- the Federal Do Not Call registry -- that didn't exist when
it was announced. (That's even faster than the previous champ,
Netscape Navigator's zooming to 38 million users in 18 months.) Given
the global nature of some of these questions, not to mention numerous
issues with ICANN and DNS, the discussions and solutions will only get
more complicated. As the examples illustrate, getting social
arrangements to keep pace with technology innovation is if anything
more difficult than the innovation itself.

Saturday, July 31, 2010

July 2010 Early Indications: Living with an iPad

I'm not typically a "gadget guy," one of those folks (Ed Baig at USA Today is one of the best) who regularly evaluate new devices. The iPad, however, stands as a milestone that redefines how people and technology interrelate. A colleague is asking me to evaluate it as an educational tool, so I'm probably a bit more self-conscious than usual in my uptake of this particular technology. Herewith are a few thoughts.

The iPad perfectly embodies wider confusion over the intermingling of work and life. I have yet to load the office apps, so most of my reaction concerns the device used in "life" mode. That being said, the iPad is too convenient to ignore "just a peek" at e-mail. The screen is so bright and actually pretty that it's an attention magnet. The widely discussed aluminum case has just the right heft in the hand, just the right curve in the palm, that people (not just technologists) want to pick it up. From there, assuming a good wi-fi signal, I found everyone got up and running with very little coaching, usually without invitation.

I expect this will become more of an issue with work-related applications, but the iPad's limited text entry will be interesting to assess. Right now you can sort of double-thumb, sort of touch type, sort of trace letters with fingers (in 3rd-party applications). For short to medium e-mails, I did not mind, but a Crackberry addict might find the slow pace frustrating. The Apple case that folds back on itself to form a triangular base can be helpful here, judging from other people I've watched.

Similarly, I expect that at some point kludges or formal fixes will address the lack of printer support. Along the same lines, the single-tasking mode of operation can get annoying: leave an app to check something else (it does remember multiple web pages, however) and you face a full restart upon returning to whatever non-browser activity you were just doing. An update to the operating system should fix this issue in September.

The iPad rapidly changed some of my long-standing habits. Reading, however, is not one of them. I have yet to get on board the e-reader bandwagon, and have left several texts I should read for work untouched: I literally forget they're loaded and waiting for me. In part this is because I read scholarly books idiosyncratically, never starting at page 1 and proceeding to 347. Rather, I'll start by looking at the plates if the book has them, checking out the pictures bound somewhere randomly in the middle. From there I might look through the endnotes, or the jacket blurbs. I'll often skip chapter 1, at least initially, preferring instead to start with what often turns out to be the first body chapter with real evidence and real argument rather than introductory matter which some people find very hard to write. The point is that e-readers do not support non-fiction reading as well as they do a good mystery, where there's only one way through the story. Pagination also presents a real issue when you need to footnote a source.

To stay with the question of reading, what was widely called "the Jesus tablet" in the publishing industry cannot yet serve as a replacement for a physical magazine -- particularly at the prices being suggested: $4.99 a week for Time or Sports Illustrated is not going to fly, I don't believe. Merely exporting static, dated dead-tree content to a new medium (which happens to be dynamic, real-time, and capable of multimedia) falls into a familiar trap. The Wright brothers did not succeed by mimicking a bird. Printed books did not find a market mass-producing hand-lettered scrolls. Television quickly stopped presenting radio shows with visible people. Businesses are continuing to learn that the Web is not "television except different."

To their credit, the team at Flipboard is trying to transcend the paper magazine by integrating social networking feeds: "hey, did you see the piece in [wherever]?" The half-page-oriented turning metaphor looks clever at first glance, and some of the content is strong. The problem is it's too strong, too predictable: thus far it's hard to find fresh stories in the pretty format. Too many taps stand between "hmm, let's look at that" and the actual story, most of which I'd already seen in my other grazings.

In addition, the Flipboard business model looks extremely shaky: adding one more intermediary between any potential consumer and the brand creates disincentives all around. I'd also wager that the Web 2.0 Tom Sawyer approach -- let the crowd do your work for you and pay them in reputational or other non-monetary compensation -- cannot run at the current pace forever. Sure, I recommend articles in my Twitter feed (38apples), but a) not at scale, b) not reliably, from an advertiser's vantage point, and c) not systematically, from a subscriber's standpoint. Dialing in the right balance between serendipity and editorial coherence (the current buzzword calls it "curated" content) is a new challenge. The New York Times, as good as it is at many things, has not yet found the key to this new medium, nor should anyone expect them to: it's simply too early. The same goes for AOL, for the BBC, for NBC, and for just about everyone else.

Because it is so relentlessly visual and was never trapped in a paper model, weather information can be arresting on the iPad. The Weather Channel app reminded me immediately of what I remember of Pointcast (which, as I pointed out on Twitter, would make a great iPad app: minimal text input, free-floating news and other topical links, ticker streaming, and other invitations to tap). Maps, graphs, videos, icons -- weather information works essentially perfectly on the iPad.

I did not find the same attractiveness true for Google maps. I believe this discomfort relates to the nature of wayfinding. If you're looking at a map, you're likely already doing something else: dialing a phone, looking out the window for a house number or street sign, holding a steering wheel, maybe grasping a slip of paper with an address. Given the iPad's two-handed operation, those other ancillary activities often make it the wrong tool for the job, particularly compared to a one-handed or voice-activated GPS.

I have yet to fly with the iPad but look forward to doing so: I never found the iPhone a desirable movie player, but expect my next long flight will pass faster with the iPad's vivid display of something I want rather than the typical choices on the airlines. One great feature of all operations: the iPad runs silently. The move to a world in which mobile devices rely far more heavily on broadband connections to "cloud" resources than they do on on-board storage will have many side effects, and the loss of noisiness is one of them. (I did not yet try any connection other than WiFi, but will attempt to assess how well 3G works once school starts.)

In my time with the iPad, the life-altering application has been Scrabble. It may actually be better than the physical board game. Let me count the ways:

1) You can't lose pieces.
2) You can't cheat by marking or memorizing tiles (as my late father-in-law was fond of doing).
3) The dictionary is hard-wired: no fights, though to be fair, in some circles the lexicographic litigation is part of the point, and that gets lost.
4) A partially completed game is trivial to save.
5) Lifetime statistics are kept automatically, including win-loss.
6) The touch screen allows automatic shuffling and very comfortable flicking of the letters in the tray, unlike the iPhone app Words with Friends, in which I sometimes must break out the physical set to parse a really tough rack.
7) You can play by yourself against the computer.
8) Virtual games against on-line strangers are also possible.
9) You can play in bed, on a train, on a plane, on a subway, unlike the original.

In sum, what do the various aspects tell us about the iPad? First, the device almost demands interaction, but limits its sphere. Highlighting and annotation, so far, have not worked well. The well-publicized exclusion of Flash from the device rules out many websites, such as those running Flash-based catalog apps. Typing remains problematic. Printing will have to be added soon.

Second, the rapid start (from sleep) and silent operation take the user away from the world of "computers" and into the domain of "appliances," which I say as a compliment. I will withhold analysis of the device's pricing for the moment, however.

Third, the particular combination of heft, touch-screen, and vivid display is so new to us as a user community that I do not think we have a large catalog of applications that exploit the new hardware to its fullest. While the iPad runs some games superbly well, it's not a PSP. Yes, you can read books, but the iPad is not really a proper reader; or if it is, it's a really expensive one. One can replicate laptop functionality, but the iPad is not conceptually a computer, unlike the Microsoft family of tablets from a few years ago.

Until we can say with subconscious certainty what this thing is (and does) and behave accordingly, just as we could identify a television and all that it embodied as little as five years ago, I believe the iPad's transformative potential remains only partially recognized.

(The best assessment I read while researching this piece is here.)

Thursday, June 17, 2010

June 2010 Early Indications II: Book Review of Clay Shirky, Cognitive Surplus: Creativity and Generosity in a Connected Age

To those of us who for a long time have tried to understand the many impacts of the Internet, Clay Shirky stands among a very small group of folks who Get It. Usually without hyperbole and with a sense of both historicity and humor, Shirky has been asking not the obvious questions but the right ones. Explaining first the import then the implications of these questions has led him to topics ranging from pro-anorexia support groups to the Library of Congress cataloging system and flame wars to programming etiquette.

This book continues that useful eclecticism. Examples are both fashionably fresh and honorably historical: Josh Groban and Johannes Gutenberg appear in telling vignettes. Rural India, 18th-century London, Korean boy-band fans, and empty California swimming pools are important for the lessons they can reinforce. The usual cliches -- Amazon, Zappos, Second Life, even Twitter -- are pretty much invisible. As Shirky has done elsewhere, two conventional narratives of various phenomena are both shown to miss the point: in this case, neither "Young people are immoral" nor "Young people are blissfully generous with their possessions" adequately explained the rise in music file sharing.

In a career of writing cogently about what radical changes in connectivity do to people, groups, and institutions, Cognitive Surplus is, I believe, Shirky's best work yet. Not content with explaining how we have come to our peculiar juncture of human attention, organizational possibility, and technological adaptation, in a final chapter Shirky challenges us to do something meaningful -- to civic institutions, for civil liberties, with truth and beauty on the agenda -- with social media, mobility, ubiquitous Internet access, and the rest of our underutilized toolkit. At the same time, he avoids technological utopianism, acknowledging that the tools are morally neutral and can be used as easily for cheating on exams as for the cleanup of Pakistani squalor.

A core premise of the book holds that the Internet allows many people to reallocate their time. Specifically, the amount of time people in many countries spend watching television is so vast that even a nudge in the media landscape opens up some significant possibilities. Wikipedia, for example, is truly encyclopedic in its coverage: comprising work in more than 240 languages, the effort has accumulated more than a billion edits, all by volunteers. At the time of his analysis, Shirky noted, the estimated human effort to create Wikipedia was roughly equivalent to the time consumed by the television ads running on one average weekend.

So ample available time exists to do something, as opposed to lying on a couch passively receiving TV messages. What might people do with this "cognitive surplus"? Read War and Peace. Volunteer at a soup kitchen. Join Bob Putnam's bowling league. Thus far, however, people haven't tended, in large numbers, to do these things, even though civic participation is apparently on the rise. Rather, people are connecting with other people on line: the shift from personal computing to social networking (Facebook alone hosts roughly half a billion accounts) is well underway but not yet well understood. Once we can communicate with people, anywhere, anytime, at close to zero economic cost, what do we do?

Here Shirky is inclusive: people help other people write and maintain operating systems, web servers, or browsers. They recaption silly cat pictures with sillier messages. They identify election irregularities, or ethnic discrimination, or needs for public safety and public welfare resources in both Haiti and the streets of London. The state of the technology landscape makes many things possible:

-Individuals do not need to be professionals to publish ideas; to disseminate pictures, music, or words; to have an opinion in the public discourse; or to analyze public data on crime or what have you.

-Based on an emerging subset of behavioral economics, we are discovering that markets are not the optimal organizing and motivational principle for every situation. For many kinds of social interaction, whether in regard to fishing grounds or blood donation, reputation- and community-based solutions work better than monetary ones. At the collective level, belonging to a group we believe in and having a chance to be generous are powerful motivators. For their part, individuals are motivated by autonomy (shaping and solving problems ourselves) and competence (over time, getting better at doing so). In addition, the introduction of money into an interaction may make it impossible for the group to perform as well as it did before, even after the financial rules are removed (think of certain Native American tribes as tragic examples here, but day-care parents who come late to pick-up hit closer to home).

-People in groups can organize to achieve some goal, whether it is the pursuit of tissue type registration for organ donation, a boycott of BP, or making car pools scale beyond office-mates.

In sum: amateurs can enter many fields of communication, performing at various levels of quality for free and displacing professionals with credentials who used to be paid more. Low overhead in both technical skill and capital infrastructure opens media businesses to new entrants. Finally, the combination of intrinsic motivation for cognitive work and low coordination costs means that informal organizations can outperform firms along several axes: Linux and Wikipedia stand as vivid, but not isolated, examples here.

This new order of things complicates matters for incumbents: record-label executives, newspaper reporters, and travel agents can all testify to being on the wrong side of a disruptive force. It also raises questions that can trouble some people:

-"Who will preserve cultural quality?"
Without proper editors guarding access to the publishing machinery, lots of bad ideas might see an audience. (The problem is not new: before movable type, every published book was a masterpiece, while afterward, we eventually got dime novels.)

-"What happens if that knowledge falls into the wrong hands?"
Previous mechanisms of cultural authority, such as those attached to a physician or politician, might be undermined.

-"Where do you find the time?"
Excessive exposure to electronic games, virtual communities, or the universally suspect "chat rooms" might crowd out normal behavior, most likely including American Idol, Oprah, or NCIS.

In sum, as Shirky crystallizes the objections, "Shared, unmanaged effort might be fine for picnics and bowling leagues, but serious work is done for money, by people who work in proper organizations, with managers directing their work." (p. 162)

These, then, are the stakes. Just as the limited liability joint stock corporation was a historically specific convenience that solved many problems relating to industrial finance, so too are new organizational models becoming viable to address today's problems and possibilities. At the same time, they challenge the cognitive infrastructure that coevolved with industrial capitalism.

That infrastructure, in broad outline, builds on the following:

-Individuals are not equipped to determine their own contributions to a larger group or entity.

-Money is a widely useful yardstick.

-Material consumption is good for psychic and economic reasons.

-Organizations are more powerful than disorganized individuals, and the larger the organization, the more powerful it is.

If each of those pillars is, if not demolished, at least shown to be wobbly, what comes next? In the book's final chapter, Shirky moves beyond analysis to prescription, arguing that with surplus time and massive low-cost infrastructure at our disposal, we owe it to each other and to our children to create something more challenging and beneficial than the best of what's out there: "Creating a participatory culture with wider benefits for society is harder than sharing amusing photos." (p. 185)

Patientslikeme.com, Ushahidi, and Responsible Citizens each represent a start rather than an acme. Digital society awaits, in short, its Gutenbergs, its Jeffersons, its Nightingales, its Gandhis. Shirky's concrete list of how-tos is likely to inform the blueprint used by the coming generation of innovators, reformers, and entrepreneurs. As a result, Cognitive Surplus is valuable for anyone needing to understand the potential ramifications of our historical moment.

Friday, June 11, 2010

Early Indications June 2010: World Cup special on sports brand equity

It's a familiar business school discussion.  "Let's talk about
powerful brands," begins the professor.  "Who comes to mind?"  Usual
suspects emerge: Coke, Visa, Kleenex. "OK," asks the prof, "what brand
is so influential that people tattoo it on their arms?"  The answer is
of course Harley-Davidson.

There is of course another category of what we might call "tattoo
brands," however: sports teams.  Measuring sporting allegiance as a
form of brand equity is both difficult and worth thinking about.

For a brief definition up front, Wikipedia's well-footnoted statement will do:

"Brand equity refers to the marketing effects or outcomes that accrue
to a product with its brand name compared with those that would accrue
if the same product did not have the brand name."

That is, people think more highly of one product than another because
of such factors as word of mouth, customer satisfaction, image
creation and management, track record, and a range of tangible and
intangible benefits of using or associating with the product.

The discussion is timely on two fronts.  First, the sporting world's
eyes are on the World Cup, and several European soccer clubs are
widely reckoned as power brands on the global level.  Domestically,
the pending shifts in college athletic conferences have everything to
do with brand equity: the University of Texas, a key prize, is one of
a handful of programs that make money, in part because of intense fan
devotion (one estimate puts football revenues alone at $88 million).

Our focus today will be limited to professional sports franchises, but
many of the arguments can be abstracted, in qualitative terms, to
collegiate athletics as well.  If we consider the revenue streams of a
professional sports franchise, three top the list:

-television revenues
-ticket sales and in-stadium advertising
-licensing for shirts, caps, and other memorabilia.

Of these, ticket sales are relatively finite: a team with a powerful
brand will presumably have more fans than can logistically or
financially attend games.  Prices can and do rise, but for a quality
franchise, the point is to build a fan network beyond the arena.
Television is traditionally the prime way to do this.  National and
now global TV contracts turn viewership into advertising revenue for
partners up and down the value chain from the leagues and clubs
themselves.  That Manchester United and the New York Yankees can have
fan bases in China, Japan, or Brazil testifies to the power of
television and, increasingly, various facets of the Internet in
brand-building.

Sports fandom exhibits peculiar economic characteristics.  Compared
to, say, house- or car-buying, fans do not research various
alternatives before making a presumably "rational" consumption
decision: team allegiance is not a "considered purchase."  If you are
a Boston Red Sox fan, your enthusiasm may or may not be relevant to
mine: network effects and peer pressure can come into play (as at a
sports bar), but are less pronounced than in telecom, for example. If
I am a Cleveland Cavaliers fan, I am probably not a New York Knicks
fan: a choice in one league generally precludes other teams in season.
 Geography matters, but not decisively: one can comfortably cheer for
San Antonio in basketball, Green Bay in football, and St. Louis in
baseball.   At the same time, choice is not completely independent of
place, particularly for ticket-buying (as compared to hat-buying).

Finally, switching costs are generally psychic and only mildly
economic (as in having to purchase additional cable TV tiers to see an
out-of-region team, for example).  Those psychic costs are not to be
underestimated: just because someone lives in London with access to
several soccer clubs, allegiances are not determined by the low-price
or high-quality provider on an annual basis.  Allegiance also does not
typically switch for reasons of performance: someone in Akron who has
cheered, in vain, for the Cleveland Browns is not likely to switch to
Pittsburgh even though the Steelers have a far superior championship
history.

Given the vast reach of today's various communications channels, it
would seem that successful sports brands could have a global brand
equity that exceeds the club's ability to monetize those feelings.  I
took five of the franchises ranked highest on the Forbes 2010 list of most valuable sports brands and calculated the ratio of the estimated brand equity to the club's revenues.  If the club were able to capture more fan allegiance than it could realize in cash inflows, that ratio should be greater than one. Given the approximations I used, that is not the case.
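The calculation itself is simple. A minimal sketch follows; note that the brand-equity and revenue figures below are hypothetical stand-ins for illustration, not the Forbes estimates used in the text.

```python
# Hypothetical figures in $ millions -- illustrative stand-ins only,
# not the actual Forbes brand values or club revenues.
clubs = {
    "Club A": {"brand_equity": 300, "revenue": 400},
    "Club B": {"brand_equity": 520, "revenue": 480},
}

def equity_to_revenue_ratio(club):
    """A ratio above 1 would mean the brand is valued at more than the
    club takes in each year, i.e. fan allegiance not yet monetized."""
    return club["brand_equity"] / club["revenue"]

for name, club in clubs.items():
    ratio = equity_to_revenue_ratio(club)
    flag = "unmonetized allegiance" if ratio > 1 else "largely monetized"
    print(f"{name}: {ratio:.2f} ({flag})")
```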

For a benchmark, I also consulted Interbrand's list of the top global
commercial brands and their value to see how often a company's image
was worth more than its annual sales.  I chose six companies from a
variety of consumer-facing sectors (so long, IBM, SAP, and Cisco), and
required that the company be roughly coextensive with the brand (the
Gillette brand, for example, belongs to P&G, so it was excluded).

Three points should be made before discussing the results.  First, any
calculation of brand equity is a rough estimate: no auditable figures
or scientific calculations can generate these lists (see here). Second, Forbes and Interbrand used
different methodologies.  We will see the consequences of these
differences shortly.  Finally, corporate revenues often accrued from
more brands than just the flagship: people buy Minute Maid apart from
the Coca Cola brand, but the juice revenues are counted in the
corporate ratio.  All told, this is NOT a scientific exercise but
rather a surprising thought-starter.



The stunning 8:1 ratio of brand equity to revenues at Louis Vuitton is
in part a consequence of Interbrand's methodology, which overweights
luxury items.  Even so, six conclusions and suggestions for further
investigation emerge:

1) The two scales do not align.  The New York Yankees, the most
valuable sports brand in the world, is worth 1/24 that of Amazon.  One
or both of those numbers is funny.

2) Innovation runs counter to brand power.  New Coke remains a
textbook failure, while Apple's brand is only worth about a third of
its revenue.  Harley-Davidson draws its cachet from its retrograde
features and styling, the antithesis of innovativeness.

3) Geography is not destiny for sports teams.  Apart from New York and
Madrid, the remaining cities -- Dallas, Manchester, and Boston (not
included here but with two teams in Forbes' top ten) -- are not global
megacities or media centers; London, Rome, and Los Angeles are all absent.

4) Soccer is the world's game, as measured by brand: five of the ten
most valuable names belong to European football teams.  The NFL has
two entries and Major League Baseball three to round out the top ten
list.  Despite the presence of more international stars than American
football, and their being from a wider range of countries than MLB's
feeders, basketball and hockey are absent from the Forbes top ten.

5) Assuming for the sake of argument that the Interbrand list is
overvalued and therefore that the Forbes list is more accurate, the
sports teams' relatively close ratio of brand equity to revenues would
suggest that teams are monetizing a large fraction of fan feeling.

6) Alternatively, if the Forbes list is undervalued, sports teams have
done an effective job of creating fan awareness and passion well
beyond the reach of the home stadium.  Going back to our original
assumption, if tattoos are a proxy for brand equity, this is more
likely the case.  The question then becomes, what happens next?

As more of the world comes on line, as media becomes more
participatory, and as the sums involved for salaries, transfer fees,
and broadcast rights at some point hit limits (as may be happening in
the NBA), the pie will continue to be reallocated.  The intersection
of fandom and economics, as we have seen, is anything but rational, so
expect some surprises in this most emotionally charged of markets.

Saturday, May 22, 2010

May 2010 Early Indications: Devising the cloud-aware organization

As various analysts and technology executives assess the pros and cons of cloud computing, two points of consensus appear to be emerging:

A) very large data centers benefit from extreme economies of scale

B) cloud success stories are generally found outside of the traditional IT shop.

Let us examine each of these in more detail, then probe some of the implications.

The advantages of scale

Whether run by a cloud provider or a well-managed enterprise IT group, very large data centers exhibit economies of scale not found in smaller server installations. First, the leverage of relatively expensive and skilled technologists is far higher when one person can manage between 1,000 and 2,000 highly automated servers, as at Microsoft, as opposed to one person being responsible for between five and 50 machines, which is common.
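The staffing arithmetic behind that leverage is stark. A quick sketch, in which the 50,000-server facility size is a hypothetical and the servers-per-admin figures are mid-range values from the ranges cited above:

```python
def admins_needed(servers, servers_per_admin):
    # Ceiling division: each administrator covers at most
    # servers_per_admin machines.
    return -(-servers // servers_per_admin)

# A hypothetical 50,000-server facility at ~1,500 servers per admin
# (mid-range of the Microsoft figure above) versus a typical shop at
# ~25 servers per admin (mid-range of the common 5-to-50 span):
print(admins_needed(50_000, 1_500))  # 34
print(admins_needed(50_000, 25))     # 2000
```

The same fleet needs a staff two orders of magnitude smaller when automation raises the per-person span of control.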

Second, the power consumption of a well-engineered data center can be more efficient than that of many traditional operations. Yahoo is building a new facility in upstate New York, for example, that utilizes atmospheric cooling to the point that only 1% of electricity consumption is for air conditioning and related cooling tasks. Having people with deep expertise in cooling, power consumption, recovery, and other niche skills on staff also helps make cloud providers more efficient than those running at smaller scales.

Finally, large data centers benefit from aggregation of demand. Assume facility A has 10,000 users of computing cycles spread over a variety of different cyclical patterns, while facility B has fewer users, all with similar load patterns: seasonal peaks for retail, quarterly closes for an accounting function, or monthly invoice runs. Facility A should be able to run more efficiently because it has a more "liquid" market for its capabilities, while facility B will likely have to build to its highest load (plus a safety margin) and then run less efficiently the majority of the time. What James Hamilton of Amazon calls "non-correlated peaks" can be difficult to generate within a single enterprise or function.
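Hamilton's point can be sketched numerically. The toy simulation below uses hypothetical load shapes (not data from any real facility): twelve tenants run identical daily workloads that differ only in when they spike, and the capacity a facility must provision is the peak of the aggregate load, not the sum of individual peaks.

```python
HOURS = 24 * 7  # one simulated week

def workload(peak_hour):
    """Hypothetical load curve: a flat baseline of 10 units plus a
    50-unit spike once a day at a tenant-specific hour."""
    return [10 + (50 if h % 24 == peak_hour else 0) for h in range(HOURS)]

# Facility A: twelve tenants whose daily peaks fall at different hours.
uncorrelated = [workload(peak_hour=t) for t in range(12)]
# Facility B: twelve tenants who all peak together (a quarterly close, say).
correlated = [workload(peak_hour=9) for _ in range(12)]

def capacity_needed(tenants):
    """Capacity to provision = the peak of the aggregate load."""
    aggregate = [sum(t[h] for t in tenants) for h in range(HOURS)]
    return max(aggregate)

print(capacity_needed(uncorrelated))  # 170: the spikes never coincide
print(capacity_needed(correlated))    # 720: every spike lands at once
```

Both facilities do exactly the same total work; only the correlation of their peaks differs, yet facility A can get by with roughly a quarter of the capacity.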

Who reaps the cloud's benefits?

For all of these benefits, external cloud successes have yet to accrue to traditional IT organizations. At Amazon Web Services, for example, of roughly 100 case studies, none are devoted to traditional enterprise processes such as order management, invoicing and payment processing, or HR.

There are many readily understandable reasons for this pattern; here is a sample. First, legal and regulatory constraints often require a physical audit of information handling practices to which virtual answers are unacceptable. Second, the laws of physics may make large volumes of database joins and other computing tasks difficult to execute off-premise. In general, high-volume transaction processing is not currently recommended as a cloud candidate.

Third, licenses from traditional enterprise providers such as Microsoft, Oracle, and SAP are still evolving, making it difficult to run their software in hybrid environments (in which some processes run locally while others run in a cloud). In addition, only a few enterprise applications of either the package or custom variety are designed to run as well on cloud infrastructure as they do on a conventional server or cluster. Fourth, accounting practices in IT may make it difficult to know the true baseline costs and benefits to which an outside provider must compare: some CIOs never see their electric bills, for example.

For these reasons, among others, the conclusion is usually drawn that cloud computing is a suboptimal fit for traditional enterprise IT. However, let's invert that logic to see how organizations have historically adapted to new technology capability. When electric motors replaced overhead drive shafts driven by waterwheels adjoining textile mills, the looms and other machines were often left in the same positions for decades before mill owners realized the facility could be organized independently of power supply. More recently, word-processing computers from the likes of Wang initially automated typing pools (one third of all U.S. women working in 1971 were secretaries); it was not until 10 to 20 years later that large numbers of managers began to service their own document-production needs, and thereby alter the shape of organizations.

The cloud will change how resources are organized

Enterprise IT architectures embed a wide range of operating assumptions regarding the nature of work, the location of business processes, clockspeed, and other factors. When a major shift occurs in the information or other infrastructure, it takes years for organizations to adapt. If we take as our premise that most organizations are not yet prepared to exploit cloud computing (rather than talk about clouds not being ready for "the enterprise"), what are some potential ramifications?

-Organizations are already being founded with very little capital investment. For a services- or knowledge-intensive business that does not make anything physical, free tools and low-cost computing cycles can mostly be expensed, changing the fund-raising and indeed organizational strategies significantly.

-The perennial question of "who owns the data?" enters a new phase. While today USB drives and desktop databases continue to make it possible to hoard data, in the future organizations built on cloud-friendly logic from their origins will deliver new wrinkles to information-handling practices. The issue will by no means disappear: Google's Gmail cloud storage is no doubt already home to a sizable quantity of enterprise data.

-Smartphones, tablets, and other devices built without mass storage can thrive in a cloud-centric environment, particularly if the organization is designed to be fluid and mobile. Coburn Ventures in New York, for example, is an investment firm composed of a small team of mobile knowledge workers who for the first five years had no corporate office whatsoever: the organization operated from wi-fi hotspots, with only occasional all-hands meetings.

-New systems of trust and precautions will need to take shape as core IT processing capacity migrates to a vendor. It's rarely consequential for a contracted video transcoding job or a weather simulation to be interrupted. More problematically, near-real-time processes such as customer service will likely need to be redesigned to operate successfully in a cloud, or cluster of clouds. Service-level agreements will need to reflect the true cost and impact of interruptions or other lapses. Third-party adjudicators may emerge to assess the responsibility of the cloud customer who introduced a hiccup into the environment relative to the vendor whose failover failed.

In short, as cloud computing reallocates the division of labor within the computing fabric, it will also change how managers and, especially, entrepreneurs organize resources into firms, partnerships, and other formal structures. Once these forms emerge, the nature of everything else will be subject to reinvention: work, risk, reward, collaboration, and indeed value itself.

Thursday, April 29, 2010

Early Indications April 2010: The Web of Opinion: Metadata as conversation

In the beginning, there was data, enumerating how many, what kind,
where. Data was kept in proprietary formats and physically located:
if the library was missing the Statistical Abstract for 1940, or some
other grad student had sequestered it, you had little chance to
determine corn production in Nebraska before World War II. Such
statistics were the exception: most data remained unpublished, in lab
notebooks and elsewhere.

Once data escaped from print into bits, it became potentially
ubiquitous, and once formats became less proprietary, more people
could gain access to more forms of data. The early history of the web
was built in part on a footing of public access to data: online
collections of maps, congressional votes, stock prices, phone numbers,
product catalogs, and other data proliferated.

Data has always required metadata: that table of corn production had a
title and probably a methodological footnote. Such metadata was
typically contributed by an expert in either the technical field or in
the practice of categorizing. Official taxonomies have continued the
tradition of creators and curators having cognitive authority in the
process of organizing. In addition, as Clay Shirky has pointed out in
"Ontology is Overrated," the heritage of physicality led to the need
for one answer being correct so that an asset could be found: a book
about Russian and American agricultural policy during the 1930s had to
live among books on Russian history, agricultural history, or U.S.
history: it was arguably about any or all of those things, but someone
(most likely at the Library of Congress) assigned it a catalog number
that finalized the discussion: the book in question was officially and
forever "about" this more than it was about that.

In the past decade, the so-called read-write web has allowed anyone to
become both a content creator and a metadata creator. Sometimes these
activities coincide, as when someone tags their own YouTube video for
example. More often, creations are submitted to a commons, and the
commoners (rather than a cognitive authority) determine what the
contribution "is" and what it is "about." Rather than editors or peer
reviewers judging an asset's quality before publication, in more and
more settings the default process is publication then collaborative
filtering for definition, quality, and meaning.

Imagine a particular propane torch for sale on Amazon.com. So-called
social metadata has been nurtured and collected for years on the site.
If I appreciate the way the torch works for its intended use of
brazing copper pipe, I can submit a review with both a star rating and
prose.  Amazon soon layered on more social metadata: you, the reader
of my review, can now rate my review, thus creating metadata about
metadata.

Here is where the discussion gets complicated and extremely
interesting. Suppose I say in my review that I use the Flamethrower
1000 for creme brulee even though the device is not rated (by whatever
safety or sanitation authority) for kitchen use. The comments about
my torch review can quickly become a foodie discussion thread: the
best creme brulee recipe, the best restaurants at which to order it,
regional variations in the naming or preparation of creme brulee, and
so forth. Amazon's moderators might truncate the discussion to the
extent it's not "about" the Flamethrower 1000 under review, but the
urge to digress has long been and will be demonstrated elsewhere.

Enter Facebook. The platform is in essence a gigantic metadata
generation and distribution system. ("I liked the concert." "The
person who liked the concert did not know what she was talking about."
"My friend was at the concert and said it was uneven." and so on)
Strip Facebook of attribute data and there is little left: it's
essentially a mass of descriptors (including "complicated"), created
by amateurs and never claimed as authoritative, linked by a
21st-century kinship network. Facebook's announcement on April 21st
of the Open Graph institutionalizes this collection of conversations
as one vast, logged, searchable metadata repository. If I "like"
something, my social network can be alerted, and the website object of
my affection will know as well.

Back in November, Bruce Schneier laid out five categories of social
networking data:

1. Service data. Service data is the data you need to give to a social
networking site in order to use it. It might include your legal name,
your age, and your credit card number.
2. Disclosed data. This is what you post on your own pages: blog
entries, photographs, messages, comments, and so on.
3. Entrusted data. This is what you post on other people's pages. It's
basically the same stuff as disclosed data, but the difference is that
you don't have control over the data -- someone else does.
4. Incidental data. Incidental data is data the other people post
about you. Again, it's basically the same stuff as disclosed data, but
the difference is that 1) you don't have control over it, and 2) you
didn't create it in the first place.
5. Behavioral data. This is data that the site collects about your
habits by recording what you do and who you do it with.

What does that list look like today?  A user's trail of "like" clicks
makes this list, or the Netflix reviews and star ratings that were
themselves the subject of privacy concerns, seem like merely the tip
of the iceberg.  As Dan Frankowski said in his Google Talk on data
mining, people have been defined by their preferences for millennia --
sometimes to the point of dying for them.

With anything so new and so massive in scale (50,000 sites adopted the
"like" software toolkit in the first week), the unexpected
consequences will take months and more likely years to accumulate.
What will it mean when every opinion we express on line, from the
passionate to the petty, gets logged in the Great Preference
Repository in the Sky, never to be erased and forever being able to be
correlated, associated, regressed, and otherwise algorithmically
parsed?

Several questions follow: who will have either direct or indirect
access to the metadata conversation? What are the opt-in, opt-out,
and monitoring/correction provisions? If I once mistakenly clicked a
Budweiser button but have since publicly declared myself a Molson man,
can I see my preference library as if it's a credit score and remedy
any errors or misrepresentations? What will be the rewards for brand
monogamy versus the penalties for promiscuous "liking" of every
product with a prize or a coupon attached?

While this technology appears to build barriers to competitive entry
for Facebook, what happens if I establish a preference profile when
I'm 14, then decide I no longer like zoos, American Idol, or Gatorade?
Will people seek a fresh start at some point in an undefined network,
with no prehistory? What is the mechanism for "unliking" something,
and how far retrospectively will it apply?

Precisely because Facebook is networked, we've come a very long way
from that Statistical Abstract on the library shelf. What
happens to my social metadata once it traverses my network? How much
or how little control do I have over what my network associates
("friends" in Facebook-speak) do with my behavioral and opinion data
that comes their way? As both the Burger King "Whopper Sacrifice"
(defriend ten people, get a hamburger coupon) and a more recent
Ikea-spoofing scam have revealed, Facebook users will sell out their
friends for rewards large and small, whether real or fraudulent.

Finally, to the extent that Facebook is both free to use and expensive
to operate, the Open Graph model opens a fascinating array of revenue
streams. If beggars can't be choosers, users of a free system have
limited say in how that system survives. At the same time, the global
reach of Facebook exposes it to a broad swath of regulators, not the
least formidable of whom come out of the European Union's strict
privacy rights milieu. As both the uses and inevitable abuses of the
infinite metadata repository unfold, the reaction is sure to be
newsworthy.

Wednesday, March 31, 2010

March 2010 Early Indications: Behaviorism, Online

One of the consequences of the ubiquity of our communications tools is a shift away from fascination with and the need for expertise in the tools themselves; PC Magazine, for example, ceased physical publication last year. Instead, relatively transparent use of the tools supports our need to do a job: schedule a plane trip, send relatives some photos, or coordinate a social engagement. As we perform more of our social interactions online, our behavior will adjust to the tool's constraints and capabilities. Those behavioral adjustments are starting to accumulate, and the patterns are fascinating indeed.

In her book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages, Carlota Perez studied five technology breakthroughs in western history. In every case, a financial bubble burst after early enthusiasm, but then the technology became embedded in multiple processes and relationships, transforming the host society. (Think of the impact of automobiles after the crash of 1929 and the economic recovery driven by World War II: interstate highways, suburbs, Holiday Inns, drive-in and drive-through fast food chains, the rise of manufacturing labor unions as core elements of the middle class, and on and on.)

The trends in financial markets suggest that we might be moving into what Perez calls "synergy" after the bursting of the Internet bubble in 2001 as computing and communications technologies become deeply embedded in everyday life. Apple, Google, Nokia, and Samsung are key players in any list of global companies to watch. Facebook's population is bigger than the third-largest nation on earth. Video traffic on the Internet is projected to double every eight months or so for the foreseeable future.

The rapid growth of online social networks, across many cultures, is one major development, but there are many others. Spurred in part by Jesse Schell's highly compelling talk at DICE earlier this year, I am seeing important behavioral changes in many domains.

-One reason for Apple's success with the iPhone relates to the powerful attractor the App Store provides for independent software developers. Changing the traditional compensation model offloads risk from Apple (which could never have imagined, much less built, 100,000 applications in less than two years) while attracting innovation. First-mover advantage is proving to be substantial as other software companies try to catch up.

-When DARPA wanted to celebrate the Internet's 40th birthday last fall, it conducted a fascinating experiment. Ten red weather balloons were tethered in plain view, and the first team to submit the correct coordinates of all 10 won $40,000. MIT's team won, in large measure because it devised a clever incentive model to grow the network of observers. The $4,000 per balloon was divided among the observer and the chain of referring parties, leading to increased participation. Other fascinating developments included the spoofing of competing teams with Photoshopped fake balloons.
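
MIT's winning strategy is often described as a recursive incentive: the finder of a balloon reportedly received $2,000, the person who invited the finder $1,000, that person's inviter $500, and so on down the chain. A minimal sketch of that geometric split (the figures are as reported; treat the code as illustrative):

```python
def balloon_payouts(prize=4000.0, chain_length=10):
    """Recursive-incentive split: the finder gets half the prize,
    and each level up the referral chain gets half the level below."""
    return [prize / 2 ** (level + 1) for level in range(chain_length)]

payouts = balloon_payouts()
# The finder gets $2,000, the inviter $1,000, the next $500, ...
assert payouts[0] == 2000.0
# ...and the total paid out can never exceed the $4,000 prize.
assert sum(payouts) < 4000.0
```

The elegance of the design is that recruiting is always individually rational: inviting more observers can only increase your expected payout, yet the geometric series bounds the team's total liability per balloon.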

-Online dating is clearly a huge business, but the rules of engagement are still being sorted out. The desire to attract appealing candidates with one's description leads to the temptation to lie. Researchers at Cornell and Michigan State looked at daters' actual height, weight, and age, then compared those to online representations. Unlike purely virtual environments, online dating ideally leads to a face-to-face meeting, so the ability to lie is tempered by the possibility of real-world confirmation. Men lied more than women about height and, infrequently, about age, while both sexes lied almost equally (and in the majority of cases) about weight. As the researchers concluded, "the pattern of the deceptions, frequent but slight, suggests that deception on online dating profiles is strategic. Participants balanced the tension between appearing as attractive as possible while also being perceived as honest."

-Behavioral economics has many insights to contribute to this domain. stickK.com was founded by three Yale professors who saw the application of behavioral economics to personal aspiration. The model is simple: people select a goal, set the stakes, get a referee, and build a network of friends for moral support. Whether for weight loss, smoking cessation, or dissertation writing, there is evidence to suggest small, symbolically powerful incentives matter much more than substantial financial rewards.

-Symbolic rewards, paradoxically, can get people to spend real money. Many online games' business models feature large inflows of revenue for upgraded game elements (swords, shields, real estate). Disney's Club Penguin gives away game points, but charges $6 per month for players to redeem the points.

-Foursquare was one of the hot companies at SXSW this year, following Twitter's breakout there a few years ago. In Foursquare, people "check in" to the real-world places they visit via mobile phone, announcing their presence to friends and proprietor alike. It turns out you can check in to a place you are not visiting: to make a point, one guy became mayor (one of Foursquare's honorary titles) of the North Pole. Foursquare's founder replied to that effort in a blog comment by asking "We often wonder why people 'cheat' when there’s really nothing to win – it’s not like we’re giving away trips to Hawaii or Ford Fiestas over here. But I guess the combo of mayorships, local recognition and, hey, maybe a free slice of pizza is a little too much for some people to live without :)"

-Facebook games like Farmville, with extremely limited graphics and plotlines, contrast vividly with Playstation 3 titles with massive visual horsepower but high barriers to entry. Female players especially appear to be gravitating to Facebook games, just as women helped drive the Wii to the top of console market share, so Microsoft and Sony are responding by mimicking Nintendo's simple but gesture-driven platform. The shift hit home hard at Electronic Arts, which bought a social network game company the same day that the firm laid off 1,500 console-supporting employees.

Several tendencies appear to be emerging here. First, the barrier between real life and play life can get fuzzy. In 2008 two Dutch youths were convicted of stealing virtual goods from an online gamer by beating him up at school and coercing him into transferring the goods. A Chinese gamer was murdered over the sale of an online sword artifact. The Wii bowler uses a real arm motion to hurl a virtual ball toward virtual pins. People's Farmville opponents are their real-world friends. In addition, people are powerfully motivated by symbols online, just as they are elsewhere, whether those artifacts are military service ribbons, flags, or luxury cars. Finally, as always, people work assiduously to game every system, whether of grades or Facebook friend counts or stickK weight-loss programs.

What's new here is both the degree of portability and the global scale: ten years ago, nobody could play Scrabble with hundreds of people while sitting on a bus. Now that we can, what comes next? With so many games now resident in the computational cloud, how will people remember or recreate them in the future? How will human relationships, whether intense or trivial, scale in these virtually physical or physically virtual settings? Finally, how will other systems, currently driven by other incentive programs, be transformed by the permeation of game and other group dynamics? Schell points to education as an obvious target, but corporate HR, aging, personal fitness, and retirement savings are just as likely. As a result, nearly every field of endeavor could be affected by the clever application of behavioral carrots and sticks via new electronic media. Social engineering, in short, appears to be supplanting technical engineering in the vanguard of innovation.

Saturday, February 27, 2010

Early Indications February 2010: Ticket Punching

As one surveys the landscape of industries whose business models have
been transformed by the Internet, airline ticketing and travel agents
invariably come in near the top of the list. Southwest was at the
forefront of air carriers that offloaded customer service from call
centers to web browsers, reinforcing their lead in lean operating
budgets. At-home check-in is routine at most U.S. airlines, reducing
both costs and wait times.

For all the change that the Net brought to the distribution of airline
tickets, however, its impact on ticket pricing is difficult to tease
out from other macro forces such as increased security screening, fuel
prices, labor agreements and disagreements, and the transparency
afforded by online travel sites such as Expedia or Travelocity. In
addition, while Priceline has its niche, the effect of name-your-price
on the larger sector has not been widely discussed.

Compared to event ticketing, however, airline tickets appear to be a
coherent, rational universe. Although the recent controversy
surrounding the Live Nation merger with Ticketmaster has focused some
attention on the industry, much remains gray area. (A notable
exception to the rule is John Seabrook's excellent New Yorker piece in
the August 10/17, 2009 issue, entitled "The Price of the Ticket.")

Artists whose product revenue stream has been decimated by online file
sharing are now in effect giving away recorded music to drive interest
in the tour, which can get very big very fast. John Mayer, for
example, has approved the posting of 80 live shows in the Internet
Archive; fans of New Orleans favorites The Radiators have uploaded
over 1,000 shows, while Boston's Guster, a staple of the college
circuit, has 331 shows up. The three most financially successful
rock tours of all time -- by the Rolling Stones, U2, and the Police --
have all grossed more than $350 million apiece. As thought-provoking
as the music industry is, that's all we can say about it for the
moment. Given that this is a newsletter and not a dissertation, I'm
going to narrow scope still further and explore sports ticket
pricing, a subset of event pricing with its own
peculiar dynamics.

Particular shows on music tours are relatively fungible: it was more
convenient for people to see Springsteen in State College last year
than two hours away in Hershey 11 nights later, but the two
experiences were reasonably equivalent. Sports tickets, however, are
far more time-sensitive, as there will likely be only one game per
season with a particular matchup's unique characteristics. If I can't
see, say, the Cleveland Indians host the Boston Red Sox on Saturday
June 10, flying a few hundred miles to see the Yankees play in Detroit
won't satisfy my demand curve, nor will Sunday's Indians-Sox afternoon
game be a functional replacement for Saturday night's experience.

Sports marketing is unique in the nature of its competitive framework,
from a business strategy perspective: in most cities, a franchise
holds a monopoly, competing for fans' dollars and emotional investment
with concerts, dinner out, or college sports. Even though the Chicago
Bears and Green Bay Packers compete on the field and in their
conference, the businesses really do not do so.

Given the unique challenges of sports marketing -- stars vs. teams,
championships vs. laser shows and dance squads, injuries and "off the
field issues" -- it's no surprise that ticket pricing occupies a place
of central importance in the industry. In this domain, rapid and
substantial changes have accumulated in the past 5 to 10 years. When
the economy was more robust, annual ticket price increases were a way
of life in many markets. When a new star was signed, or the venue was
improved, the team typically passed the revenue load onto the fan
base. (We won't touch the hairball of issues related to stadium
financing).

Even more important than revenue maximization, however, is the issue
of risk management. Baseball's season is long; a team can be out of
contention, and just plain stinking up the joint, by July. If club
ownership cannot sell a critical mass of season tickets in the winter,
the task of extracting revenue gets extremely difficult in the long
months of summer. It's also much easier to sell wholesale than
retail: in round numbers, assume a 25,000-seat field and 80 games.
That's 2 million seats if they're sold one at a time, versus 10,000
pairs of season tickets (not counting nosebleed and bleacher seats).
The bundle scenario is 200 times simpler; teams also group games into
batches of 5 or 10 that mix in visits from both losers and
front-runners: if you live in Kansas City, for example, and want to
see the Yankees or Red Sox visit, you almost certainly will have to
watch (or at least own a ticket to) the historically bumbling Orioles,
the Oakland club, or another also-ran.
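
The arithmetic behind "wholesale beats retail" is simple enough to check, using the round numbers above:

```python
seats, games = 25_000, 80
single_sales = seats * games   # 2,000,000 one-at-a-time transactions
season_pair_sales = 10_000     # one sale covers a pair for all 80 games

print(single_sales)                       # 2000000
print(single_sales // season_pair_sales)  # 200x fewer transactions
```

Every season-ticket sale replaces 200 single-seat transactions, which is why clubs push bundles so hard despite the revenue they leave on the table.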

This pricing strategy moves risk from the club to the fans, who
traditionally could only give or sell the tickets to private contacts;
going to the public market with secondary tickets was illegal. That
status changed in the past decade, as first eBay then StubHub (itself
now owned by eBay) and a number of other businesses matched buyers and
sellers in ways and at a scale that ticket scalpers (or touts, as
they're known in the UK) could not. The Internet also helps drive
both buyers and sellers to the market: the payoff for scale is
liquidity.

That risk management comes at a financial price: while bad teams are
pleased to offload future tickets to possibly worthless games onto
often long-suffering fan bases, good teams leave millions of dollars
to be claimed by secondary sellers, including "ticket brokers" such as
Ace Tickets in Boston. The Red Sox have sold out 550 games in a row,
so many of those season ticket-holders can sell single-game tickets at
a substantial profit -- a profit that could be going to the club, but
only if the club held inventory longer and thus rebalanced the risk.
Nobody knows in March what a September matchup (even with the hated
Yankees) might be worth: injuries, the economy, other teams' level of
play, trades, and other factors determine interest and demand in the
weeks before, not 6 months out.

Bill Simmons, an ESPN blogger, noted NBA owners' behavior in this
regard, particularly when good but expensive players are traded away
in mid-season, compounding an already losing record in the hopes of
earning a higher pick in the next season's draft of new players. As
he noted last week,

"Does [a terrible record and some bad luck] mean they're lowering
ticket prices for the rest of the year then? Nope. Over the past five
years, half the league's franchises crapped on their season-ticket
holders at least once with mismanagement, salary dumping and/or
tanking for lottery picks. Along with the Wizards, the following fan
bases have reached a breaking point with their respective teams:
Sixers, Pistons, Pacers, Nets, Knicks, Suns, Clippers, Warriors and
Timberwolves. Depending on how the summer of 2010 works out, we could
be adding Cavs, Heat, Raptors, Hawks and/or Grizzlies fans to that
list. And four other teams have tried to put out a quality product but
still hemorrhaged money this season: New Orleans, Milwaukee, Charlotte
and San Antonio. (Yes, I just mentioned 19 of the 30 NBA teams. You
counted correctly.)"

Enabling fans to buy single-game tickets to desirable games is not in
the clubs' interests, yet secondary markets make precisely that
practice possible. As Simmons noted, "Teams depend on season-ticket
revenue because it's guaranteed income. With the current setup, I
could skip getting season tickets, then use stubhub.com, ebay.com and
even team-endorsed ticket sites to cherry-pick choice seats for six or
seven big games per season. So if the NBA wants to keep me (or you, or
anyone) as a customer, it needs to prevent me from sampling instead of
buying. . . . . They don't want me for seven games. They want me for
all of them." But as ticket prices go inexorably up to support
sometimes ill-advised player contracts, the fans' incentive to buy
season tickets goes down, whether one is an individual, one of four
buddies who split a set, or a law firm writing off the tickets as
client entertainment.

The unique and time-sensitive nature of a sports ticket makes it
behave very much like a call option in a financial market (see Happel
and Jennings, "Creating a Futures Market for Major Event Tickets:
Problems and Prospects," Cato Journal 21 (Winter 2002), pp. 443-461).
The value of a ticket is highly contingent, as we have noted, which
means it is an ideal candidate for hedging behaviors. Teams already
do this by emphasizing season ticket sales, creating both technical
lock-in and what Simmons more euphemistically calls "the illusion of
regret" in which fans buy seats for yet another year because they
might miss out on something good. The utility of StubHub in this
regard has led to Major League Baseball striking a deal with the
reseller, giving fans a reputable outlet for both buying and selling
tickets: fraud is a common concern on both sides of the transaction.
In response, will sports follow the lead of airlines and go
ticketless? Not any time soon, I suspect.

Some clubs have experimented with dynamic ticket pricing. A startup
called Qcue helped the San Francisco Giants baseball team increase
attendance in about 2,000 seats inside the 42,000-seat stadium. For
an unappealing matchup, possibly made worse by bad weather, tickets
were as low as $5. When an eagerly awaited game, however, lined up
Randy Johnson (a future Hall of Famer) against the New York Mets and
Johan Santana (a potential HoF electee), the same seat was $33. In
such a scenario, the fan wins and the club gets its full share of
market value. Stadiums could be full every night, but bad teams will
no longer be able to charge the regret-inducing prices they currently
do. That outcome could potentially upset competitive balance: bad
teams might have less revenue from full houses than they currently do
from nearly empty ones. But they might also make more, and the
fairness issue would be more adequately addressed: the clubs would
collect a much closer approximation of what the experience was worth
to the buyer. For the franchises, would this arrangement be a bonanza
or a beatdown?
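
Qcue's actual algorithm is proprietary; purely as a toy illustration, dynamic pricing can be sketched as a base price scaled by a demand score and clamped between the $5 floor and roughly the $33 peak cited above. The multipliers here are invented for the sketch:

```python
def dynamic_price(base: float, demand_score: float) -> float:
    """Toy dynamic pricing: scale a base price by a demand score
    in [0, 1], clamped to an illustrative $5 floor and $33 ceiling."""
    raw = base * (0.25 + 2.0 * demand_score)
    return max(5.0, min(33.0, raw))

# A $15 base seat: rainy also-ran matchup vs. marquee pitching duel.
print(dynamic_price(15.0, 0.0))  # 5.0
print(dynamic_price(15.0, 1.0))  # 33.0
```

A real system would estimate the demand score from the factors the clubs already watch: pitching matchups, weather, standings, and day of week.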

The answer would likely depend on how well the clubs hedged their
position. To that end, it's not difficult to imagine a futures market
for sports tickets much like YooNew used to provide. Clubs could
also develop community relationships at various price tiers, filling
the park with $5 Boy Scouts or youth soccer leagues when the matchup
was for whatever reason unfavorable, as opposed to the business
entertainers, celebrities, and politicians who show up for must-see
games. StubHub, like eBay, has a rich opportunity for data mining to
tell teams how historic pricing patterns have emerged from given
preconditions: stars getting hot, stars getting hurt, day versus night
games, various playoff implications, regional economic and
unemployment trends, and the like.

As labor disputes loom in pro basketball and football, as mobile phone
television traffic becomes more pervasive, as demographics continue to
shift, as social media evolves, and as dissatisfaction with ticket
prices mounts, sports ticket pricing seems ripe for transformations
and potentially dramatic disruptions much like those that altered the
travel and music landscapes. In this Olympic and World Cup year,
we'll be watching for clues that might point the way forward.

Monday, January 25, 2010

Early Indications January 2010: Do You Remember?

Looking back over the 30 or so years of the personal computing era, I'm struck by how easily we discard the past, how often we miss a revolution when we're in the middle of it, and how few moments stop us in our tracks, giving us reason to demarcate a historical transition. Everybody is different, of course, but I'll wager you may have some similar reactions to the thought experiment I played out over the holidays. Overall, I was struck by how few times I appreciated the historical importance of an event in the moment.

Do you remember . . . where you were when the Berlin wall fell?

I do not. As I argue elsewhere, 1989 marks a convenient beginning to the "modern" era of globalization, mobile telecommunications, and the rise of the Internet. After the 1960s, with the Kennedy assassinations, moon landing, and a "living room war," followed by the Munich Olympic terror and fall of Saigon in 1975, perhaps "culture fatigue" set in. For whatever reason (perhaps it was because I was on the academic job market, with dismal results), November 1989 does not register.

Do you remember . . . your first mobile phone?

This is much clearer. It was a Nokia 101 (still for sale here), used only "for emergencies." I had had friends whose wealthy parents had car phones, which were big, expensive, and exotic. As I was none of those, my pattern of usage had to reflect my station; the device was not to be used trivially. But it was still fun to know I could order pizza on the way home rather than arriving at my destination, calling, and setting out again.

Do you remember . . . the "video game war" in Iraq?

The incredible night shots, the 15 minutes of [U.S.] fame for Canadian Arthur Kent (aka the Scud Stud), and the television-centric news coverage feel like a very long time ago. After the USS Cole, 9/11, and Richard Reid, "asymmetric" warfare features extremely few visual highlights for the U.S. forces; our side has yet to see a positive iconic image of 21st-century warfare.

Do you remember . . . your first e-mail address? What about the second?

Here as so often, I was a late adopter. Teaching at Harvard in the early 1990s, I followed the lead of neither my Ph.D. advisor nor my students, instead getting e-mail pretty late: 1994, when I entered the commercial work force on the Lotus Notes e-mail infrastructure that was typical of consulting firms at the time.

Do you remember . . . the first time you saw the Web?

This was a lightning bolt for me, as vivid as my first car. The CIO at my consulting firm showed me NCSA Mosaic (which looked like this), and all the stuff I'd been reading about WAIS, Archie, and Gopher faded as I heard from my Silicon Valley friends about this amazing startup called Netscape, which was going to be even bigger than 3DO, the supposedly "can't miss" video game outfit. About a year afterward, I saw Pointcast, the way-ahead-of-its-time streaming service whose graphic intensity, profligate use of resources, and viral growth combined to make it a deadly network-killer. I still would love to see it again, for nostalgia's sake if nothing else: old browsers and web pages (remember the original gray Amazon.com with blue text?) can still be found. Pointcast exists only in [human] memory, I gather. (If you want to remember Windows 3.11, here's a brilliant rendition that runs in a browser, complete with Minesweeper.)

Do you remember . . . your first Internet purchase?

I don't, precisely, but would wager it was an Amazon book. Amazon no doubt knows that. The firm's status as a "gateway drug" to Internet shopping cannot be overstated: because the navigation was good, because they delivered, and because the price/selection/convenience equation was so positive, Amazon initiated millions of consumers into behaviors they repeated in stock trading, travel booking, and medical care.

Do you remember . . . being misunderstood in an instant message or e-mail?

Both media are emotionally "flat," doing a generally poor job of conveying nuance. For someone with a deadpan mien and a frequent recourse to irony, they presented numerous opportunities for miscommunication and, when I was lucky, damage control. The development of new conventions with no real-life analog (haha, lol, emoticons) illustrates how human interaction adapts to the strengths and limits of the available media.

Do you remember . . . Windows 95?

Microsoft's ultimate launch was possibly the apex of the company's influence. Contrast the Rolling Stones' "Start Me Up" campaign of 1995 with the Jerry Seinfeld ads of 2008, for example. People camped out at CompUSA stores (speaking of memory lane) to get their hands on the OS that unlocked the Internet for millions of users. Vista and even Windows 7, the best product Microsoft has introduced for a long, long time, have received only passing public buzz, the amazing launch party video notwithstanding.

Do you remember . . . your first text message?

This will vary wildly by geography. It's not so long ago that plumbers and doctors carried pagers; then mobile phones were essentially repurposed as interactive short-message devices. We now have the phenomenon of telephones (literally "sound from far away") that carry no voices.

Do you remember first seeing Google? If so, what search engine did it displace?

The clean, sparse interface posed a sharp contrast to the portal wars of the late 1990s. I heard about it pretty early, and as a heavy searcher, I was using a combination of Northern Light and Alta Vista at the time. Other companies you may have used, then forgotten, include Lycos, Excite, Infoseek, and Inktomi.

Do you remember . . . when a "conservative" investor sold after a 30% appreciation?

In 1998, I knew numerous friends and colleagues who were planning their life on the basis of a Netscape-like IPO (at the "-ents," for example: Scient, Viant, Sapient), saying sagely that "retiring at 40 really makes the most sense so that I can travel for a few years then give back to society, possibly by teaching."

Do you remember . . . your first flat screen display?

More important, do you remember your last CRT display? Here's a major change that took place so gradually, yet inevitably, that the CRT's demise was like a sinking ship slipping beneath the waves. LCD panels, meanwhile, continue on a march toward bigger displays, at lower prices, every year as new fabs come on line. I write this while staring at a display that's bigger than my first color TV, from about 18 inches away. And I'm leaning toward it, as if to crawl inside, rather than reclining or retreating.

Do you remember . . . the first video you saw on the Internet?

Before YouTube, uploading and hosting video online was a headache. Creating and editing it, meanwhile, was non-trivial. As if overnight, cell phones and cheap HD video cameras are capturing decent to excellent image quality. Editing can be done on any number of platforms, while Cisco a) draws steep graphs of traffic growth and b) holds the enviable position as prime supplier to a perpetual network upgrade to accommodate all this multimedia.

Do you remember . . . your last landline phone bill?

Whether replaced by a cable company's triple play or mobile substitution, the fixed Bell company telephone is in rapid retreat. Whereas the first cell phone might be a landmark, few of us pay much attention to letting go of an outdated technology. After the USB stick became ubiquitous, overnight it seemed, I can't name the last time I saved a document to a 3.5 inch floppy. More relevantly, neither can I remember archiving each generation of storage to its replacement.

Do you remember . . . your last roll of photographic film?

Once again, the seismic transition is accomplished one defection at a time, and those moments happen when the cost-benefit equation no longer makes sense (in this case, the price of a roll of Kodak or Fuji film, the cost of developing and printing even the worthless photos, and the difficulty of sharing the good ones). In other instances, shrinking user bases make the economics of scale unattractive from the seller's perspective, meaning price increases, quality sacrifices, or both; again, the customer may be driven away by vendors who feel stuck between a rock and a hard place. The same dynamic seems to hold for newspaper subscriptions.

Do you remember . . . when GPS navigation was exotic?

Last week's announcement by Nokia that it will supply turn-by-turn directions to its smartphones obviously countered Google's foray into mobile hardware. Caught as collateral damage in this contest, meanwhile, are the standalone GPS makers like Garmin or TomTom, who not that long ago offered something clever and soon to be essential. In a matter of months, navigation on the mobile platform has become a commodity, table stakes in competitions for a global audience of hyperconnected nomads.

Do you remember . . . your first cross-generational "friend" on Facebook?

As the massive social network grows larger than the U.S. population, it has moved beyond its initial cadre of college students and recent graduates. Preteens join regularly, as do parents and relatives of teenagers hoping to a) stay relevant to or b) monitor their kin, as the case may be. Going forward, persona management for multiple publics will become second nature as the tool set increases in ease of use and flexibility.

Do you remember . . . when retail stores replaced CD racks with vinyl?

This has been fascinating. Whether for reasons of its resistance to Limewire redistribution, or its purported fidelity, or the richness of cover art, or contrarian retro-hipness, the phonograph record is one of the few analog revivals in the digital tsunami. It is not impossible to envision turntable sales outpacing CD players, if not Blu-ray machines, within five years. For more, see this story in the New York Times from December 2009.

What will be next? Domestic robots (not just anthropomorphized vacuum cleaners)? Battery-powered cars? Implanted communications devices? Heavier reliance on analog storage methods like paper, for fear of snooping, blackouts, or cloud computing bankruptcies? The vinyl situation suggests that in some instances, analog may not be completely supplanted by digital competitors, so the 2010s will likely see some more surprising instances of both/and.

It would appear that we cut our ties with the past without much thought or regret, while true breakthroughs do not always capture our imagination: in November 2001, Apple Computer was struggling, and to suggest that its expensive, idiosyncratic MP3 player would eventually sell more than 225 million units would have been delusional. Somewhere, some entrepreneurs and inventors are similarly disregarding conventional wisdom and against all odds will be the heroes of January 2020.