Last year, on the cusp of a decade, I looked ahead and in essence
asked 24 questions. In some ways, the world did not change enough to
answer these kinds of questions, while dramatic events in other areas
occurred almost on cue. For brevity, the questions are condensed and
rephrased. (The full newsletter is here.)
On a brief personal note, warmest holiday wishes to this virtual
community, which celebrated its 13th birthday in October.
A
Having brilliantly migrated from computers to MP3 players to mobile
data devices, what will Apple do for its next adjacent market?
Score: hit. The iPad sold a million units in less than a month. (As
Samsung moved another million soon thereafter, 2010 will go down in
the record books as the year of the tablet.)
B
What business models, specifically for social media tools, will emerge?
Score: hit. Location-based services Gowalla and Foursquare, not to
mention Facebook Places, all surged in popularity this year.
C
What will be the fate of the global middle class as countries like
Brazil grow rapidly even as the U.S. is in some ways "hollowing out"?
Score: Too soon to tell.
D
Is the time ripe for a design renaissance on par with streamlined
toasters, or the neo-Bauhaus movement that poured so much concrete in
the 1960s?
Score: Too soon to tell. Niche products like the Mini Cooper, unique
structures like the Bilbao Guggenheim or Burj Al Arab "sail" hotel,
and well-designed Apple products do not yet a trend make.
D2
How will the U.S. address its drug problem?
Score: Change is afoot. Although California voters defeated
Proposition 19, as of January 1, 2011, possession of less than one ounce
of marijuana will be treated as a civil infraction rather than a
criminal misdemeanor. California is also supposed to release 40,000
prisoners, and financially challenged states across the nation may use
budget crises as an impetus to revisit sentencing guidelines.
Internationally, Mexico obviously remains a hot spot in this regard,
with presidential elections scheduled in less than 18 months.
E
Whether in oil prices, coal emissions debates, or nuclear power
lobbying efforts, competition for energy will have geopolitical
consequences, potentially including armed conflict.
Score: hit, if you count lightly regulated deep-water drilling in the
Gulf of Mexico that goes really wrong as a consequence of competition
for energy. Closer to Pennsylvania, gas drilling using hydraulic
fracturing in the Marcellus shale merited a 60 Minutes segment; the
core technology is the subject of an HBO documentary that won an award
at Sundance.
F
Will texts, Tweets, and web-hosted highlight clips related to
football's World Cup be a global coming-out party for social media,
just as the 1958 NFL championship game or the JFK assassination were
for television?
Score: Hit. Twitter traffic reached 3,000 messages per second in the
aftermath of Spain's victory; the service's "fail whale" was busy
during the event as servers were overwhelmed. Multiple information
visualizations reinforced the point in clever ways. YouTube video of
the U.S. goal to beat Algeria traveled far and wide.
G
Can Google expand beyond its core search franchise?
Score: hit, at least in numbers if not revenues. Google recently
reported activating 300,000 Android devices per day.
H
What will happen with U.S. housing stock?
Score: hit, if "extreme bad news" is news. Existing home sales fell
to a 15-year low in the summer, even with historically low mortgage
interest rates. Housing starts nearly hit a record low in October.
Repossession numbers improved, largely in the wake of voluntary pauses
by several major lenders.
I
Regarding identity, as more people grow up breathing the oxygen of
online, all-the-time social broadcasting, what will be the unintended
consequences, the business opportunities, and the backlash?
Score: hit. Facebook's change to default privacy settings last spring
was a major event. This visualization made the point forcefully.
J
Where will jobs come from?
Score: hit. The economic recovery continues to feature high
unemployment, high underemployment, and high numbers of people who
give up trying to find work. The national unemployment rate of 9.8%
only begins to tell the story; part-timers who want full-time work and
other categories push the number of people un- or underemployed to
probably twice that.
K
How much does the Kindle matter?
Score: hit. While Amazon releases no unit numbers, e-book sales
remain strong, and the Kindle constitutes an important piece of the
tablet revolution discussed above.
L
When talking about long tails, we clearly have hits and clearly have
infinite markets for niche tastes on eBay, YouTube, and elsewhere.
The question is, can the middle market -- smaller audiences than Harry
Potter or American Idol, more expensive than kittens-on-a-treadmill
videos -- thrive?
Score: maybe. The Hulu experiment, with deepening coverage of back
catalogs, remains ongoing. ESPN's superb 30 for 30 documentaries
would seem to validate mid-market success.
M
How fast and how momentous is the shift to mobile data?
Score: Really big and really fast. U.S. smartphone market share, for
example, was 21% in Q4 2009; it could be nearly a third by early 2011.
The number of mobile websites increased 2,000% between 2008 and 2010,
from 150,000 to more than 3 million. On Black Friday, mobile traffic
to shopping websites was up 50 times over 2009, with much of it driven
by in-store price comparisons.
N
Will Google's Living Stories experiment with the NY Times and
Washington Post spawn still more innovation?
Score: miss on Living Stories, which died a quick death. News sources
are aggressively moving content onto tablets, however, often at
ridiculous prices. (See Illustrated, Sports.)
O
Open records, or open meetings, laws were never intended to broadcast
local, paper-based information to the entire planet. At the same time,
"sunshine is the best disinfectant," as Louis Brandeis so aptly put
it. How and where will different people and groups trade off voluntary
and involuntary exposure of private information for what perceived
benefits?
Score: hit. WikiLeaks repeatedly forced this issue, for example.
P
Will we see new platform wars?
Score: hit. Apple's app store is expanding to tablets and PCs.
Google is bringing out both Android and Chrome, which may prove
complementary or competitive with each other. Facebook marches
on, Salesforce is adding database as a service, and cloud vendors
jostle for primacy. Microsoft's Wii-killer (Kinect) sold a million
units in 10 days. So the answer is yes.
R
How does "real time" filter down to people?
Score: hit. Twitter and the location-based services continue to enjoy
rapid uptake. See here.
S
In software, who will be left behind? What further surprises still await?
Score: hit. SAP lost a $1.3 billion suit over its use of Oracle's
intellectual property in a support business it acquired. Microsoft
enjoyed considerable success with Windows 7 for the PC, moving 175
million copies in under a year. Microsoft's smartphone hopes,
however, appear to remain in the future, with new Windows phones
selling 2-for-the-price-of-1 soon after release.
T
Will the Internet of Things continue its low-hype, high-impact trajectory?
Score: hit. The use of smart electric meters in Bakersfield, CA
generated many unexpected consequences, a lawsuit against PG&E among
them. Smartphone-based sensor enablement is accelerating: Amazon and
eBay both released comparison-shopping barcode readers during the
holiday shopping season.
U
At both public and private universities, the next decade will force
tough decisions.
Score: hit. "Strategic reviews" of programs, majors, campuses, and
funding models are underway at many institutions. New buildings aren't
being built, or are being scaled back. International alliances are
being aggressively pursued, but even these can be problematic:
Michigan State was having trouble filling a class at its Dubai
operation and so offered half-price tuition. Intercollegiate
athletics could be a canary in the coal mine: the University of
California-Berkeley dropped five varsity sports for the 2011 school
year. Rutgers is currently paying off more than $100 million in debt
for football stadium renovations; the team finished 4-8 this year.
V
Regarding virtualization: just as Descartes split mind and body for
the individual, will some latter-day philosopher distinguish
physically co-located groups and digitally "present" assemblages?
Score: still waiting. Cisco continues to brand "telepresence."
People's identities in Facebook and in Farmville and in Twitter
streams continue to evolve. Most any computing service can be
accessed from a location remote to its origin. But still we lack
vocabulary and deep cognitive understanding of what it means for a
group of people to "be someplace" vs. "be anyplace."
W
What will we see relative to the need for wireless bandwidth?
Score: hit. A scandal relating to cellular spectrum auctions is
front-page news in India. The FCC is pushing hard to release
additional spectrum in the U.S., but one sticking point among many
relates to rights fees.
Sunday, November 14, 2010
Review Essay: Kevin Kelly, What Technology Wants
In 35 years of reading seriously and often professionally, I have never read a book like What Technology Wants. I dog-eared at least 30 pages and filled several margins with reactions. Over two long plane rides, I was by turns absorbed, consternated, and counter-punching. I think What Technology Wants gets the story wrong, but it lays out a bold, original, and challenging position with a complex array of evidence, analysis, and conviction. The core hypothesis is untestable, however, and enough counterexamples can be summoned that substantial uncertainty undermines Kelly's deterministic argument.
Make no mistake, optimism is the operative motif. As Kelly notes, when sages or prophets foretold the future in ages past, the outlook was usually bad. The very notion of progress, by contrast, is itself a relatively modern invention. As we will see, Kelly's book is best understood as part of a larger conversation, one that has found particularly fertile ground in America.
What exactly is the technology that "wants" things? From the outset, Kelly finesses a sweepingly broad definition:
"I've somewhat reluctantly coined a word to designate the greater, global, massively interconnected system of technology vibrating around us. I call it the _technium_. The technium extends beyond shiny hardware to include culture, art, social institutions, and intellectual creations of all types. . . . And most important, it includes the generative impulses of our inventions to encourage more tool making, more technology invention, and more self-enhancing connections." (11-12)
Several of the book's key themes become apparent early. Most centrally, technology is read as, if not alive ("vibrating" with "impulses"), then something very close to alive: connections between technology and biology, moving in both directions, are drawn throughout the book. For example, "if I can demonstrate that there is an internally generated direction within natural evolution, then my argument that the technium extends this direction is easier to see." (119)
The second, and more regrettable, tendency of the book is to argue along multiple slippery slopes. In the initial definition, for example, the technium includes everything from churches (both buildings and people) to cloned sheep to George Foreman grills to the Internet. If it includes so much, what is the technium _not_? I believe that understanding "social institutions and intellectual creations of all types" and their role in the technology artifacts that more commonly concern us -- things like end-of-life treatment protocols, ever-nastier methods of warfare, or high levels of carbon dioxide output -- requires a sharper knife.
The aforementioned slippery slope argumentative technique may have been most brilliantly parodied in the student court trial scene in Animal House:
***
But you can't hold a whole fraternity responsible for the behavior of a few sick, perverted individuals. If you do, shouldn't we blame the whole fraternity system?
And if the whole fraternity system is guilty, then isn't this an indictment of our educational institutions in general?
I put it to you, Greg. Isn't this an indictment of our entire American society?
Well, you can do what you want to us, but we won't sit here, and listen to you badmouth the United States of America!
***
Several sections of What Technology Wants raised red flags that suggest similarly deft rhetoric may be in play elsewhere in the book. In an argument structurally very similar to the Animal House logic, for example, the technium is given almost literally biological properties: "Because the technium is an outgrowth of the human mind, it is also an outgrowth of life, and by extension it is also an outgrowth of the physical and chemical self-organization that first led to life." (15) If, like me, one does not grant him this chain of logic linking single-celled life forms to Ferraris or credit default swaps, Kelly's argument loses some of its momentum: for him, the quasi-sentient life force that is the sum of humanity's efforts to create is ultimately life-enhancing rather than destructive or even indifferent.
Nowhere is this faith more clearly stated than in the book's conclusion. "[The technium] contains more goodness than anything else we know," Kelly asserts. Given that the technium is everything that people have ever made or written down, what is the alternative that could be "more good"? Pure nature? But the technium is awfully close to nature too: "the technium's wants are those of life." In fact, like Soylent Green, the technium is (at least partially) people: "It will take the whole technium, and that includes us, to discover the tools that are needed to surprise the world." (359)
But the fact of the matter is that much of the technium is built to kill, not to want life: the role of warfare in the advancement of technology dates back millennia. From swords and plowshares, to Eli Whitney's concept of interchangeable parts in musket-making, to nuclear weapons, people and governments have long used technical innovation to subdue each other. Even Kelly's (and my) beloved Internet can trace its origins directly to the game theoretics of John von Neumann and mutual assured destruction. Statecraft shapes technology, sometimes decisively, yet this influence is buried in Kelly's avalanche of technological determinism.
To be sure, some of Kelly's optimism has convincing grounding; it's his teleology I question. In What Technology Wants, the strongest sections combined clever data-gathering and analysis to express the power of compounding innovation: particularly where they can get smaller, things rapidly become cheaper and more powerful at a rate never before witnessed. Microprocessors and DNA tools (both sequencing and synthesis) are essential technologies for the 21st century, with Moore's law-like trajectories of cost and performance. In addition, because software allows human creativity to express and replicate itself, the computer age can advance very rapidly indeed. The key question, however, relates less to technological progress than to our relation to that progress.
In my discussions with Kelly back when we were affiliated with the same think tank in the 1990s, he had already identified the Amish as a powerful resource for thinking about the adoption of technology. Chapter 11, on Amish hackers, raises the issues of selective rejection to a level of depth and nuance that I have seen nowhere else. Four principles govern the Amish, who are often surprising in their technology choices, as anyone who has seen their skilled and productive carpenters (with their pneumatic nail guns carried in the back of pickup trucks) can attest.
1) They are selective, ignoring far more than they adopt.
2) They evaluate new things by experience, in controlled trial scenarios.
3) Their criteria for evaluation are clear: enhance family and community while maintaining distance from the non-Amish world.
4) The choices are not individual but communal. (225-6)
Remarkably, Amish populations are growing (fast), unlike the Shakers of New England who attempted similar removal from the world but could not sustain their existence either individually or collectively. Instead, the Amish often become expert in the use of a technology while eschewing its ownership. They are clever hackers, admirable for their ability to fix things that many non-Amish would simply throw away. At the same time, there are no Amish doctors, and girls have precisely one career trajectory: motherhood or a close equivalent thereof. As Kelly notes, the people who staff and supply grocery stores or doctor's offices, participate in a cash economy, and pay taxes for roads and other infrastructure enable their retreat. In the end, the Amish stance cannot scale to the rest of us, in part because of their radical withdrawal from the world of television, cell phones, and automobiles, and because of the sect's cohesive religious ethos.
Speaking of governments and economies, the role of money and markets is also remarkably limited for Kelly. Technologies evolve through invention and innovation. Those processes occur within a lattice of investors, marketers, sales reps, and other businesspeople who have different motivations for getting technologies into people's hands or lives. Not all of these motives support the wants of life, as Bhopal, cigarette marketing, and Love Canal would attest.
The capitalist underpinnings beneath so much western technology are ignored, as in this summary passage: "Like personality, technology is shaped by a triad of forces. The primary driver is preordained development -- what technology wants. The second driver is the influence of technology history, the gravity of the past . . . . The third force is society's collective free will in shaping the technium, or our choices." (181)
Profit motives, lock-in/lock-out, and the psychology of wants and needs (along with business's attempts to engage it) are all on the sideline. Furthermore, a "collective free will" feels problematic: what exactly does that mean? Market forces? I don't think that reading is in play here. Rather than economics, Kelly seems most closely aligned with biology, to an extreme degree at some points: "The most helpful metaphor for understanding technology may be to consider humans as the parents of our technological children." (257)
But understanding ourselves as "parents" doesn't help solve real technological problems: how do we address billions of discarded plastic beverage bottles (many fouling the oceans), or the real costs of long-term adoption of the internal combustion engine, or the systems of food and crop subsidies and regulations that shape diet in an age of simultaneous starvation and obesity? How does the technium want goodness in any of those scenarios? Maybe the polity and the increasingly vibrant non-profit sector are part of the technology superstructure, seeing as they are human inventions, but if that's the case, Kelly's definition is so broad as to lose usefulness: the book gives little idea of what lies outside the technium. If money and markets (and kings and congresses, as well as missiles and machine guns) are coequal with cathedrals and computers, getting leverage on questions of how humans use, and are used by, our technologies becomes more difficult.
With all of its strengths and shortcomings, Kelly has written a book at once unique and rooted in a deep tradition: for well over a century Americans in particular have simultaneously worried and effused over their machines. The distinguished historian of technology Thomas P. Hughes noted in 1989 that the 1960s had given many technologies a bad name, so that cheerleaders had become scarce even as technology was infusing itself into the conceptual and indeed existential ground water: "Today technological enthusiasm, although much muted as compared with the 1920s, survives among engineers, managers, system builders, and others with vested interests in technological systems. The systems spawned by that enthusiasm, however, have acquired a momentum -- almost a life -- of their own." (American Genesis, 12) The technology-is-alive meme is a familiar one, and a whole other study could position Kelly in that tradition as well.
For our purposes, it is sufficient to note that Kelly stands as a descendant of such enthusiasts as Edison, Ford, Frederick W. Taylor, Vannevar Bush, and, perhaps most directly, Lewis Mumford, now most famous as an urban theorist. Like Kelly, Mumford simultaneously delighted in the wonders of his age while also seeing causes for concern. Note how closely his 1934 book Technics and Civilization anticipates Kelly, excepting the fact that Mumford predated the computer:
"When I use the word machines I shall refer to specific objects like the printing press or the power loom. When I use the term 'the machine' I shall employ it as a shorthand reference to the entire technological complex. This will embrace the knowledge and skills and arts derived from industry or implicated in the new technics, and will include various forms of tool, instrument, apparatus and utility as well as machines proper." (12)
One man's technium is another man's machine. For all their similarity of definition, however, Mumford kept human agency at the center of his ethos, compared to Kelly's talk of inevitability and other semi-biological tendencies of the technium super-system: "No matter how completely technics relies upon the objective procedures of the sciences, it does not form an independent system, like the universe: it exists as an element in human culture and it promises well or ill as the social groups that exploit it promise well or ill." (6) Mumford focuses on the tool-builder; Kelly gives primacy to the cumulative (and, he asserts, mostly beneficent) sum of their tool-building. In the end, however, that technium is a mass of human devices, institutions, and creations so sprawling that it loses conceptual usefulness since no human artifacts are excluded.
The critical difference between the two perspectives becomes clear as Mumford resists the same determinism in which Kelly revels: "In order to reconquer the machine and subdue it to human purposes, one must first understand it and assimilate it. So far, we have embraced the machine without fully understanding it, or, like the weaker romantics, we have rejected the machine without first seeing how much of it we could intelligently assimilate." (6) Mumford's goal -- consciously understanding and assimilating technologies within a cultivated human culture -- sounds remarkably like the Amish notion of selective rejection that Kelly admires yet ultimately rejects as impractical at scale.
It is a tribute to Kevin Kelly that he forced me to think so hard about these issues. What Technology Wants deserves to be widely read and discussed, albeit with red pencils close at hand; it is a book to savor, to consider, to challenge, and to debate. The book is not linear by any stretch of the imagination, and strong chapters (such as on deep progress and on the Amish) sit alongside weaker discussions of technology-as-biology and an arbitrary grocery list of the technium's attributes that feels like it could have been handled less randomly.
Those shortcomings help define the book: by tackling a hard, messy topic, Kelly was bound to have tough patches of tentative prose, partially unsatisfying logic, and conclusions that will not be universally accepted. For having the intellectual courage to do so, I tip my hat. Meanwhile I look for a latter-day Lewis Mumford to restore human agency to the center of the argument while at the same time recognizing that governments, markets, and above all people interact with our technologies in a contingent, dynamic interplay that is anything but deterministic.
Tuesday, November 02, 2010
Early Indications October 2010: The Analytics Moment: Getting numbers to tell stories
Thanks in part to vigorous efforts by vendors (led by IBM) to bring
the idea to a wider public, analytics is coming closer to the
mainstream. Whether in ESPN ads for fantasy football, or
election-night slicing and dicing of vote and poll data, or the
ever-broadening influence of quantitative models for stock trading and
portfolio development, numbers-driven decisions are no longer the
exclusive province of people with hard-core quantitative skills.
Not surprisingly, the term resists tidy definition. At the
simple end of the spectrum, one Australian firm asserts that
"Analytics is basically using existing business data or statistics to
make informed decisions." At the other end of a broad continuum,
TechTarget distinguishes, not completely convincingly, between data
mining and data analytics:
"Data analytics (DA) is the science of examining raw data with the
purpose of drawing conclusions about that information. Data analytics
is used in many industries to allow companies and organization to make
better business decisions and in the sciences to verify or disprove
existing models or theories. Data analytics is distinguished from data
mining by the scope, purpose and focus of the analysis. Data miners
sort through huge data sets using sophisticated software to identify
undiscovered patterns and establish hidden relationships."
To avoid a terminological quagmire, let us merely assert that
analytics uses statistical and other methods of processing to tease
out business insights and decision cues from masses of data.
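To make that working definition concrete, here is a minimal Python sketch (using the pandas library); the file name, column names, and two-standard-deviation threshold are illustrative assumptions rather than a reference to any particular system. It turns a pile of raw transaction records into one simple decision cue: which regions had an unusual week.

import pandas as pd

# Illustrative only: 'transactions.csv' and its columns are hypothetical.
df = pd.read_csv("transactions.csv", parse_dates=["date"])

# Summarize raw records into weekly revenue per region.
weekly = (df.set_index("date")
            .groupby("region")["revenue"]
            .resample("W").sum()
            .reset_index())

# Flag weeks that sit more than two standard deviations away from each
# region's own history -- a simple "decision cue" teased out of the data.
stats = weekly.groupby("region")["revenue"].agg(["mean", "std"])
weekly = weekly.join(stats, on="region")
weekly["flag"] = (weekly["revenue"] - weekly["mean"]).abs() > 2 * weekly["std"]

print(weekly[weekly["flag"]])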
In order to see the reach of these concepts and methods, consider a
few examples drawn at random:
-The "flash crash" of May 2010 focused attention on the many forms and
roles of algorithmic trading of equities. While firm numbers on the
practice are difficult to find, it is telling that the regulated New
York Stock Exchange has fallen from executing 80% of trades in its
listed stocks to only 26% in 2010, according to Bloomberg. The
majority occur in other trading venues, many of them essentially
"lights-out" data centers; high-frequency trading firms, employing a
tiny percentage of the people associated with the stock markets,
generate 60% of daily U.S. trading volume of roughly 10 billion
shares.
-In part because of the broad influence of Michael Lewis's bestselling
book Moneyball, quantitative analysis has moved from its formerly
geeky niche at the periphery to become a central facet of many sports.
MIT holds an annual conference on sports analytics that draws both
sell-out crowds and A-list speakers. Statistics-driven fantasy sports
continue to rise in popularity all over the world as soccer, cricket,
and rugby join the more familiar U.S. staples of football and
baseball.
-Social network analysis, a lightly practiced subspecialty of
sociology only two decades ago, has surged in popularity within the
intelligence, marketing, and technology industries. Physics, biology,
economics, and other disciplines all are contributing to the rapid
growth of knowledge in this domain. Facebook, Al Qaeda, and countless
startups all require new ways of understanding cell phone, GPS, and
friend/kin-related traffic.
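To give a flavor of what that analysis looks like in practice, here is a minimal Python sketch using the open-source NetworkX library; the call records are invented, and real intelligence or marketing work obviously operates at vastly larger scale and with far richer data.

import networkx as nx

# Invented call records: who phoned whom. Real data sets are vastly larger.
calls = [("ann", "bob"), ("ann", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "ed"), ("ed", "frank")]

G = nx.Graph()
G.add_edges_from(calls)

# Betweenness centrality highlights "brokers" who sit on many shortest
# paths -- a standard question in both marketing and intelligence analysis.
for person, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{person:6s} {score:.2f}")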
Why now?
Perhaps as interesting as the range of its application are the many
converging reasons for the rise of interest in analytics. Here are
ten, out of perhaps many more.
1) Total quality management and six-sigma programs trained a
generation of production managers to value rigorous application of
data. That six-sigma has been misapplied and misinterpreted there can
be little doubt, but the successes derived from a data-driven approach
to decisions are, I believe, informing today's wider interest in
statistically sophisticated forms of analysis within the enterprise.
2) Quantitative finance applied ideas from operations research,
physics, biology, supply chain management, and elsewhere to problems
of money and markets. In a bit of turnabout, many data-intensive
techniques, such as portfolio theory, are now migrating out of formal
finance into day-to-day management.
3) As Eric Schmidt said in August, we now create in two days as much
information as humanity did from the beginning of recorded history
until 2003. That's measuring in bits, obviously, and as such Google's
estimate is skewed by the rise of high-resolution video, but the
overall point is valid: people and organizations can create data far
faster than any human being or process can assemble, digest, or act on
it. Cell phones, seen as both sensor and communications platforms,
are a major contributor, as are enterprise systems and image
generation. More of the world is instrumented, in increasingly
standardized ways, than ever before: Facebook status updates, GPS,
ZigBee and other "Internet of things" efforts, and barcodes and RFID
on more and more items merely begin a list.
4) Even as we as a species generate more data points than ever before,
Moore's law and its corollaries (such as Kryder's law of hard disks)
are creating a computational fabric which enables that data to be
processed more cost-effectively than ever before. That processing, of
course, creates still more data, compounding the glut.
5) After the reengineering/ERP push, the Internet boom, and the
largely failed effort to make services-oriented architectures a
business development theme, vendors are putting major weight behind
analytics. It sells services, hardware, and software; it can be used
in every vertical segment; it applies to every size of business; and
it connects to other macro-level phenomena: smart grids, carbon
footprints, healthcare cost containment, e-government, marketing
efficiency, lean manufacturing, and so on. In short, many vendors
have good reasons to emphasize analytics in their go-to-market
efforts. Investments reinforce the commitment: SAP's purchase of
Business Objects was its biggest acquisition ever, while IBM, Oracle,
Microsoft, and Google have also spent billions buying capability in
this area.
6) Despite all the money spent on ERP, on data warehousing, and on
"real-time" systems, most managers still can not fully trust their
data. Multiple spreadsheets document the same phenomena through
different organizational lenses, data quality in enterprise systems
rarely inspires confidence, and timeliness of results can vary widely,
particularly in multinationals. I speak to executives across
industries who have the same lament: for all of our systems and
numbers, we often don't have a firm sense of what's going on in our
company and our markets.
7) Related to this lack of confidence in enterprise data, risk
awareness is on the rise in many sectors. Whether in product
provenance (Mattel), recall management (Toyota, Safeway, or CVS),
exposure to natural disasters (Allstate, Chubb), credit and default
risk (anyone), malpractice (any hospital), counterparty risk (Goldman
Sachs), disaster management, or fraud (Enron, Satyam, Societe
Generale), events of the past decade have sensitized executives and
managers to the need for rigorous, data-driven monitoring of complex
situations.
8) Data from across domains can be correlated through such ready
identifiers as GPS location, credit reporting, cell phone number, or
even Facebook identity. The "like" button, by itself, serves as a
massive spur to inter-organizational data analysis of consumer
behavior at a scale never before available to sampling-driven
marketing analytics. What happens when a "sample" population includes
100 million individuals? (A toy sketch of such an identifier-based
join appears after this list.)
9) Visualization is improving. While the spreadsheet is ubiquitous in
every organization and will remain so, the quality of information
visualization has improved over the past decade. This may result
primarily from the law of large numbers (1% of a boatload is bigger
than 1% of a handful), or it may reflect the growing influence of a
generation of skilled information designers, or it may be that such
tools as Mathematica and Adobe Flex are empowering better number
pictures, but in any event, the increasing quality of both the tools
and the outputs of information visualization reinforce the larger
trend toward sophisticated quantitative analysis.
10) Software as a service puts analytics into the hands of people who
lack the data sets, the computational processing power, and the rich
technical training formerly required for hard-core number-crunching.
Some examples follow.
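Before turning to those examples, a toy illustration of the identifier-based joining described in point 8 above may be useful. The Python sketch below uses pandas, and the file and column names are invented; the point is simply that once two data sets share a key such as a phone number, correlating behavior across them is a single merge.

import pandas as pd

# Hypothetical inputs: purchase records and location check-ins that both
# carry a phone number. File and column names are invented for illustration.
purchases = pd.read_csv("purchases.csv")   # columns: phone, item, amount
checkins = pd.read_csv("checkins.csv")     # columns: phone, venue, timestamp

# Once the two data sets share an identifier, correlating them is one merge.
joined = purchases.merge(checkins, on="phone", how="inner")

# For example: average spend grouped by the venues customers frequent.
print(joined.groupby("venue")["amount"].mean().sort_values(ascending=False))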
Successes, many available as SaaS
-Financial charting and modeling continue to migrate down-market:
retail investors can now use Monte Carlo simulations and other tools
well beyond the reach of individuals at the dawn of online investing
in 1995 or thereabouts.
-Airline ticket prices at Microsoft's Bing search engine are rated
against a historical database, so purchasers of a particular route and
date are told whether to buy now or wait.
-Wolfram Alpha is taking a search-engine approach to calculated
results: a stock's price/earnings ratio is readily presented on a
historical chart, for example. Scientific calculations are currently
handled more readily than natural-language queries, but the tool's
potential is enormous.
-Google Analytics brings marketing tools formerly unavailable anywhere
to the owner of the smallest business: anyone can slice and dice ad-
and revenue-related data from dozens of angles, as long as it relates
to the search engine in some way.
-Fraud detection through automated, quantitative tools holds great
appeal because of both labor savings and rapid payback. Health and
auto insurers, telecom carriers, and financial institutions are
investing heavily in these technologies.
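One illustrative screen of the kind such tools automate is a Benford's-law test on reported amounts; the Python sketch below uses invented numbers and represents only one of the many signals a real detection system would combine.

import math
from collections import Counter

# Invented claim amounts; a production screen would run over millions of rows.
amounts = [1200, 1750, 130, 990, 2400, 310, 1480, 870, 1995, 640, 115, 2750]

# Compare the observed frequency of leading digits with Benford's expectation.
leading = Counter(int(str(a)[0]) for a in amounts)
for d in range(1, 10):
    expected = math.log10(1 + 1 / d)
    observed = leading[d] / len(amounts)
    print(f"digit {d}: expected {expected:.2f}, observed {observed:.2f}")

# Large, systematic gaps are a cue for human review, not proof of fraud.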
Practical considerations: Why analytics is still hard
For all the tools, all the data, and all the computing power, getting
numbers to tell stories is still difficult. There are a variety of
reasons for the current state of affairs.
First, organizational realities mean that different entities collect
the data for their own purposes, label and format it in often
non-standard ways, and hold it locally, usually in Excel but also in
e-mails, or pdfs, or production systems. Data synchronization efforts
can be among the most difficult of a CIO's tasks, with uncertain
payback. Managers in separate but related silos may ask the same
question using different terminology, or see a cross-functional issue
through only one lens.
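A trivial Python sketch of what that reconciliation involves (the silo names, columns, and unit conventions below are invented): before any cross-silo analysis can start, someone must map each group's labels and units onto a common scheme.

import pandas as pd

# Invented silos: two groups describe the same orders with different column
# names and units (one records revenue in dollars, the other in thousands).
sales_na = pd.DataFrame({"cust_id": [101, 102], "rev_usd": [12000, 8000]})
sales_eu = pd.DataFrame({"CustomerID": [103, 104], "revenue_k": [7.5, 9.2]})

# Map both onto one shared vocabulary before combining.
sales_na = sales_na.rename(columns={"cust_id": "customer", "rev_usd": "revenue"})
sales_eu = (sales_eu.rename(columns={"CustomerID": "customer"})
                    .assign(revenue=lambda d: d["revenue_k"] * 1000)
                    .drop(columns="revenue_k"))

combined = pd.concat([sales_na, sales_eu], ignore_index=True)
print(combined)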
Second, skills are not yet adequately distributed. Database
analysts can type SQL queries but usually don't have the managerial
instincts or experience to probe the root cause of a business
phenomenon. Statistical numeracy, often at a high level, remains a
requirement for many analytics efforts; knowing the right tool for a
given data type, or business event, or time scale, takes experience,
even assuming a clean data set. For example, correlation does not
imply causation, as every first-year statistics student knows, yet
temptations to let it do so abound, especially as scenarios outrun
human understanding of ground truths.
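A small synthetic illustration of the trap, in Python: two quantities that merely share an upward trend will show a high correlation coefficient even though neither has anything to do with the other. Both series below are invented.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2011)

# Two synthetic, unrelated quantities that both happen to trend upward.
ice_cream_sales = 50 + 3.0 * (years - 2000) + rng.normal(0, 2, len(years))
broadband_users = 10 + 8.0 * (years - 2000) + rng.normal(0, 5, len(years))

# The correlation coefficient is high purely because of the shared trend.
r = np.corrcoef(ice_cream_sales, broadband_users)[0, 1]
print(f"correlation: {r:.2f} (high, though neither causes the other)")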
Third, odd as it sounds in an age of assumed infoglut, getting the
right data can be a challenge. Especially in extended enterprises but
also in extra-functional processes, measures are rarely sufficiently
consistent, sufficiently rich, or sufficiently current to support
robust analytics. Importing data to explain outside factors adds
layers of cost, complexity, and uncertainty: weather, credit, customer
behavior, and other exogenous factors can be critically important to
either long-term success or day-to-day operations, yet representing
these phenomena in a data-driven model can pose substantial
challenges. Finally, many forms of data do not readily plug into the
available processing tools: unstructured data is growing at a rapid
rate, adding to the complexity of analysis.
In short, getting numbers to tell stories requires the ability to ask
the right question of the data, assuming the data is clean and
trustworthy in the first place. This unique skill requires a blend of
process knowledge, statistical numeracy, time, narrative facility, and
both rigor and creativity in proper proportion. Not surprisingly,
such managers are not technicians, and are difficult to find in many
workplaces. For the promise of analytics to match what it actually
delivers, the biggest breakthroughs will likely come in education and
training rather than algorithms or database technology.
the idea to a wider public, analytics is coming closer to the
mainstream. Whether in ESPN ads for fantasy football, or
election-night slicing and dicing of vote and poll data, or the
ever-broadening influence of quantitative models for stock trading and
portfolio development, numbers-driven decisions are no longer the
exclusive province of people with hard-core quantitative skills.
Not surprisingly, the definition is completely problematic. At the
simple end of the spectrum, one Australian firm asserts that
"Analytics is basically using existing business data or statistics to
make informed decisions." At the other end of a broad continuum,
TechTarget distinguishes, not completely convincingly, between data
mining and data analytics:
"Data analytics (DA) is the science of examining raw data with the
purpose of drawing conclusions about that information. Data analytics
is used in many industries to allow companies and organization to make
better business decisions and in the sciences to verify or disprove
existing models or theories. Data analytics is distinguished from data
mining by the scope, purpose and focus of the analysis. Data miners
sort through huge data sets using sophisticated software to identify
undiscovered patterns and establish hidden relationships."
To avoid a terminological quagmire, let us merely assert that
analytics uses statistical and other methods of processing to tease
out business insights and decision cues from masses of data.
In order to see the reach of these concepts and methods, consider a
few examples drawn at random:
-The "flash crash" of May 2010 focused attention on the many forms and
roles of algorithmic trading of equities. While firm numbers on the
practice are difficult to find, it is telling that the regulated New
York Stock Exchange has fallen from executing 80% of trades in its
listed stocks to only 26% in 2010, according to Bloomberg. The
majority occur in other trading venues, many of them essentially
"lights-out" data centers; high-frequency trading firms, employing a
tiny percentage of the people associated with the stock markets,
generate 60% of daily U.S. trading volume of roughly 10 billion
shares.
-In part because of the broad influence of Michael Lewis's bestselling
book Moneyball, quantitative analysis has moved from its formerly
geeky niche at the periphery to become a central facet of many sports.
MIT holds an annual conference on sports analytics that draws both
sell-out crowds and A-list speakers. Statistics-driven fantasy sports
continue to rise in popularity all over the world as soccer, cricket,
and rugby join the more familiar U.S. staples of football and
baseball.
-Social network analysis, a lightly practiced subspecialty of
sociology only two decades ago, has surged in popularity within the
intelligence, marketing, and technology industries. Physics, biology,
economics, and other disciplines all are contributing to the rapid
growth of knowledge in this domain. Facebook, Al Qaeda, and countless
startups all require new ways of understanding cell phone, GPS, and
friend/kin-related traffic.
Why now?
Perhaps as interesting as the range of its application are the many
converging reasons for the rise of interest in analytics. Here are
ten, from perhaps a multitude of others.
1) Total quality management and six-sigma programs trained a
generation of production managers to value rigorous application of
data. That six-sigma has been misapplied and misinterpreted there can
be little doubt, but the successes derived from a data-driven approach
to decisions are, I believe, informing today's wider interest in
statistically sophisticated forms of analysis within the enterprise.
2) Quantitative finance applied ideas from operations research,
physics, biology, supply chain management, and elsewhere to problems
of money and markets. In a bit of turnabout, many data-intensive
techniques, such as portfolio theory, are now migrating out of formal
finance into day-to-day management.
3) As Eric Schmidt said in August, we now create in two days as much
information as humanity did from the beginning of recorded history
until 2003. That's measuring in bits, obviously, and as such Google's
estimate is skewed by the rise of high-resolution video, but the
overall point is valid: people and organizations can create data far
faster than any human being or process can assemble, digest, or act on
it. Cell phones, seen as both sensor and communications platforms,
are a major contributor, as are enterprise systems and image
generation. More of the world is instrumented, in increasingly
standardized ways, than ever before: Facebook status updates, GPS,
ZigBee and other "Internet of things" efforts, and barcodes and RFID
on more and more items merely begin a list.
4) Even as we as a species generate more data points than ever before,
Moore's law and its corollaries (such as Kryder's law of hard disks)
are creating a computational fabric which enables that data to be
processed more cost-effectively than ever before. That processing, of
course, creates still more data, compounding the glut.
5) After the reengineering/ERP push, the Internet boom, and the
largely failed effort to make services-oriented architectures a
business development theme, vendors are putting major weight behind
analytics. It sells services, hardware, and software; it can be used
in every vertical segment; it applies to every size of business; and
it connects to other macro-level phenomena: smart grids, carbon
footprints, healthcare cost containment, e-government, marketing
efficiency, lean manufacturing, and so on. In short, many vendors
have good reasons to emphasize analytics in their go-to-market
efforts. Investments reinforce the commitment: SAP's purchase of
Business Objects was its biggest acquisition ever, while IBM, Oracle,
Microsoft, and Google have also spent billions buying capability in
this area.
6) Despite all the money spent on ERP, on data warehousing, and on
"real-time" systems, most managers still cannot fully trust their
data. Multiple spreadsheets document the same phenomena through
different organizational lenses, data quality in enterprise systems
rarely inspires confidence, and timeliness of results can vary widely,
particularly in multinationals. I speak to executives across
industries who have the same lament: for all of our systems and
numbers, we often don't have a firm sense of what's going on in our
company and our markets.
7) Related to this lack of confidence in enterprise data, risk
awareness is on the rise in many sectors. Whether in product
provenance (Mattel), recall management (Toyota, Safeway, or CVS),
exposure to natural disasters (Allstate, Chubb), credit and default
risk (anyone), malpractice (any hospital), counterparty risk (Goldman
Sachs), disaster management, or fraud (Enron, Satyam, Société
Générale), events of the past decade have sensitized executives and
managers to the need for rigorous, data-driven monitoring of complex
situations.
8) Data from across domains can be correlated through such ready
identifiers as GPS location, credit reporting, cell phone number, or
even Facebook identity. The "like" button, by itself, serves as a
massive spur to inter-organizational data analysis of consumer
behavior at a scale never before available to sampling-driven
marketing analytics. What happens when a "sample" population includes
100 million individuals?
9) Visualization is improving. While the spreadsheet is ubiquitous in
every organization and will remain so, the quality of information
visualization has improved over the past decade. This may result
primarily from the simple arithmetic of scale (1% of a boatload is bigger
than 1% of a handful), or it may reflect the growing influence of a
generation of skilled information designers, or it may be that such
tools as Mathematica and Adobe Flex are empowering better number
pictures, but in any event, the increasing quality of both the tools
and the outputs of information visualization reinforce the larger
trend toward sophisticated quantitative analysis.
10) Software as a service puts analytics into the hands of people who
lack the data sets, the computational processing power, and the rich
technical training formerly required for hard-core number-crunching.
Some examples follow.
Successes, many available as SaaS
-Financial charting and modeling continue to migrate down-market:
retail investors can now use Monte Carlo simulations and other tools
well beyond the reach of individuals at the dawn of online investing
in 1995 or thereabouts.
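A minimal sketch of what such a retail-grade Monte Carlo tool does
under the hood (Python; the return and volatility figures here are
assumptions chosen purely for illustration, not advice):

    import random

    def simulate_endings(years=30, start=100_000, contribution=10_000,
                         mean_return=0.07, stdev=0.15, trials=10_000):
        """Simulate ending balances under normally distributed annual returns."""
        endings = []
        for _ in range(trials):
            balance = start
            for _ in range(years):
                balance = (balance + contribution) * (1 + random.gauss(mean_return, stdev))
            endings.append(balance)
        return sorted(endings)

    endings = simulate_endings()
    print(f"Median outcome:  ${endings[len(endings) // 2]:,.0f}")
    print(f"10th percentile: ${endings[len(endings) // 10]:,.0f}")

A few dozen lines and a laptop now deliver the kind of scenario
analysis that once required an institutional risk desk.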
-Airline ticket prices at Microsoft's Bing search engine are rated
against a historical database, so purchasers of a particular route and
date are told whether to buy now or wait.
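Bing's actual model is proprietary, but the basic idea can be
sketched as a comparison of today's fare against the historical
distribution for the same route and travel window (invented numbers):

    def buy_or_wait(current_fare, historical_fares, buy_threshold=0.4):
        """Recommend 'buy' if the current fare sits in the cheaper portion of
        the historical fare distribution for this route, otherwise 'wait'."""
        percentile = sum(f <= current_fare for f in historical_fares) / len(historical_fares)
        return ("buy" if percentile <= buy_threshold else "wait"), percentile

    history = [310, 295, 340, 365, 280, 450, 390, 320, 305, 410]   # past fares, USD
    decision, pct = buy_or_wait(current_fare=300, historical_fares=history)
    print(decision, f"(cheaper than {1 - pct:.0%} of past fares)")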
-Wolfram Alpha is taking a search-engine approach to calculated
results: a stock's price/earnings ratio is readily presented on a
historical chart, for example. Scientific calculations are currently
handled more readily than natural-language queries, but the tool's
potential is substantial.
-Google Analytics brings marketing tools formerly out of reach for
the owner of the smallest business: anyone can slice and dice ad-
and revenue-related data from dozens of angles, as long as it relates
to the search engine in some way.
-Fraud detection through automated, quantitative tools holds great
appeal because of both labor savings and rapid payback. Health and
auto insurers, telecom carriers, and financial institutions are
investing heavily in these technologies.
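At its simplest, such a tool is little more than an outlier screen on
an account's own history. A hedged sketch follows (Python, invented
data; production systems use far richer features and models):

    from statistics import median

    def flag_outliers(amounts, cutoff=3.5):
        """Flag amounts far from the account's median, measured in units of the
        median absolute deviation -- a crude, robust first-pass fraud screen."""
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts)
        if mad == 0:
            return []
        return [a for a in amounts if abs(a - med) / mad > cutoff]

    # Hypothetical card activity: routine spending plus one anomalous charge
    history = [42.10, 18.75, 55.00, 23.40, 61.25, 37.80, 2999.00, 44.60]
    print(flag_outliers(history))   # [2999.0]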
Practical considerations: Why analytics is still hard
For all the tools, all the data, and all the computing power, getting
numbers to tell stories is still difficult. There are a variety of
reasons for the current state of affairs.
First, organizational realities mean that different entities collect
the data for their own purposes, label and format it in often
non-standard ways, and hold it locally, usually in Excel but also in
e-mails, PDFs, or production systems. Data synchronization efforts
can be among the most difficult of a CIO's tasks, with uncertain
payback. Managers in separate but related silos may ask the same
question using different terminology, or see a cross-functional issue
through only one lens.
Second, skills are not yet adequately distributed. Database
analysts can type SQL queries but usually don't have the managerial
instincts or experience to probe the root cause of a business
phenomenon. Statistical numeracy, often at a high level, remains a
requirement for many analytics efforts; knowing the right tool for a
given data type, or business event, or time scale, takes experience,
even assuming a clean data set. For example, correlation does not
imply causation, as every first-year statistics student knows, yet
temptations to let it do so abound, especially as scenarios outrun
human understanding of ground truths.
Third, odd as it sounds in an age of assumed infoglut, getting the
right data can be a challenge. Especially in extended enterprises but
also in extra-functional processes, measures are rarely sufficiently
consistent, sufficiently rich, or sufficiently current to support
robust analytics. Importing data to explain outside factors adds
layers of cost, complexity, and uncertainty: weather, credit, customer
behavior, and other exogenous factors can be critically important to
either long-term success or day-to-day operations, yet representing
these phenomena in a data-driven model can pose substantial
challenges. Finally, many forms of data do not readily plug into the
available processing tools: unstructured data is growing at a rapid
rate, adding to the complexity of analysis.
In short, getting numbers to tell stories requires the ability to ask
the right question of the data, assuming the data is clean and
trustworthy in the first place. This unique skill requires a blend of
process knowledge, statistical numeracy, time, narrative facility, and
both rigor and creativity in proper proportion. Not surprisingly,
managers with this combination of skills are more than technicians,
and they are difficult to find in many workplaces. For what analytics
actually delivers to match its promise, the biggest breakthroughs will
likely come in education and training rather than algorithms or
database technology.
Tuesday, September 28, 2010
Early Indications September 2010: The Power and Paradoxes of Usability
Usability is among the most difficult of topics to define and analyze.
At one level, it is much like the famous Supreme Court justice who
said of obscenity, "I know it when I see it." At another level, the
number of daily moments that
reinforce the presence of poor design can be overwhelming. Examples
are everywhere: building entrance doors with a grab handle you're
supposed to push but that you instinctively (and unsuccessfully) pull,
all manner of software (in Outlook, does hitting "cancel" stop the
transaction or clear a meeting from the calendar?), and pinched
fingers and scraped knuckles. Usability may be easy to spot, but it
is clearly very difficult to engineer in.
Systems
Why is this so? As Don Norman, one of the heroic figures in modern
usability studies, puts it in a recent ACM piece, complex products are
not merely things; they provide services: "although a camera is
thought of as a product, its real value is the service it offers to
its owner: Cameras provide memories. Similarly, music players provide
a service: the enjoyment of listening." In this light, the product
must be considered as part of a system that supports experience, and
systems thinking is hard, complicated, and difficult to accomplish in
functionally-siloed organizations.
The ubiquitous iPod makes his point perfectly:
"The iPod is a story of systems thinking, so let me repeat the essence
for emphasis. It is not about the iPod; it is about the system. Apple
was the first company to license music for downloading. It provides a
simple, easy to understand pricing scheme. It has a first-class
website that is not only easy to use but fun as well. The purchase,
downloading the song to the computer and thence to the iPod are all
handled well and effortlessly. And the iPod is indeed well designed,
well thought out, a pleasure to look at, to touch and hold, and to
use. Then there is the Digital Rights Management system, invisible to
the user, but that both satisfies legal issues and locks the customer
into lifelong servitude to Apple (this part of the system is
undergoing debate and change). There is also the huge number of
third-party add-ons that help increase the power and pleasure of the
unit while bringing a very large, high-margin income to Apple for
licensing and royalties. Finally, the 'Genius Bar' of experts offering
service advice freely to Apple customers who visit the Apple stores
transforms the usual unpleasant service experience into a pleasant
exploration and learning experience. There are other excellent music
players. No one seems to understand the systems thinking that has made
Apple so successful."
One of the designers of the iPod interface, Paul Mercer of Pixo,
affirms that systems thinking shaped the design process: "The iPod is
very simple-minded, in terms of at least what the device does. It's
very smooth in what it does, but the screen is low-resolution, and it
really doesn't do much other than let you navigate your music. That
tells you two things. It tells you first that the simplification that
went into the design was very well thought through, and second that
the capability to build it is not commoditized." Thus greater
complexity in management and design vision is a prerequisite for
simplicity at the user level. (Mercer quoted in Bill Moggridge,
Designing
Interactions (Cambridge: MIT Press, 2007))
Because it requires systems thinking and complex organizational
behavior to achieve, usability is often last on the list of design
criteria, behind such considerations as manufacturability or modular
assembly, materials costs, packaging, skill levels of the factory
employees, and so on. The hall of shame for usability issues is far
longer than the list of successes. For every garage door opener, LEGO
brick, or Amazon Kindle, there are multiple BMW iDrives, Windows
ribbons, European faucets, or inconsistent anesthesia machines:
doctors on a machine from company A turned the upper right knob
clockwise to increase the flow rate, but had to go counter-clockwise
on company B's machine in the next operating room over. Fortunately,
the industry has standardized the control interface, with a resulting
decline in human endangerment. (See Atul Gawande, Complications: A
Surgeon's Notes on an Imperfect Science (New York: Macmillan, 2003))
Paradoxes
As Roland Rust and his colleagues have shown, usability presents
manufacturers of consumer electronics with a paradox. In purchase
mode, buyers overweight option value: if a multifunction device from
company D does 13 things and a competitor from company H performs 18,
the extra potential utility sways the decision even if the known need
is only for, say, six tasks. The evolution of the Swiss Army knife
testifies to this phenomenon: very few of us, I suspect, have
precisely the tools we a) want or b) use on our knife.
Once they get that 18-way gadget home, however, option value recedes
and usability comes to the fore, and the extra controls, interfaces,
and other factors that drive complexity can make using the more
"capable" device frustrating at best and impossible at worst. At
consumer electronics retailers, most returned items function
perfectly; they come back because they are too hard to
integrate into everyday life. (They may also be returned because
consumers routinely seek better deals, get tired of a color or finish,
or use the purchase essentially as a free rental, performing a task
then returning the device.)
Hence the paradox: does the designer back off on features and
capabilities, and thus lose the head-to-head battle of shelf-side
calculus in order to win on usability, or do purchase rather than use
considerations win out? There are some ways out of this apparent
paradox: modular add-ons, better point-of-sale information, and
tutorials and other documentation (knowing that the vast majority of
people will never read a manual). The involvement of user groups is
growing, for both feedback on products in development and support
communities for stumped users. (Roland T. Rust, Debora Viana Thompson,
and Rebecca W. Hamilton, "Defeating Feature Fatigue," Harvard Business
Review, February 2006)
At its worst, overwhelming complexity and other forms of poor
usability can kill, as the anesthesia example makes clear. Nuclear
power plants, military hardware, and automobiles provide ready
examples. Especially with software-driven interfaces becoming the
norm (even for refrigerators and other devices with little status to
report and few user-driven options to adjust), the potential for
either bugs or unforeseen situations to escalate keeps growing.
Beyond Gadgets
This essay will not become a tribute to Apple or Southwest Airlines,
however, if only to escape the cliche. Instead, I'd like to discuss a
recent video by TED curator Chris Anderson. In it he looks at the
proliferation of online videos as tools for mass learning and
improvement. Starting with the example of self-taught street dancers
in Brazil, Japan, LA, and elsewhere, he argues that the broad
availability of video as a shared show-and-tell mechanism spurs, first,
one-upmanship through imitation and then innovation. The level of TED
talks themselves, Anderson argues, provides home-grown evidence that
cheap, vivid multimedia can raise the bar for many kinds of tasks:
futurist presentations, basketball dunks, surgical techniques, and so
on.
Five things are important here.
1) The low barrier to entry for imitator/innovator #2 to post her
contribution to the discussion may inspire, inform, or infuriate
imitator/innovator #3. Mass media did some of these things (in
athletic moves, for example: watch a playground the week after the
Super Bowl or a halfpipe after the X Games). The lack of a feedback
loop, however, limited the power of broadcast to propagate secondary
and tertiary contributions.
2) Web video moves incredibly fast. The speed of new ideas entering
the flow can be staggering once a video goes "viral," as its
epidemiological metaphor would suggest.
3) The incredible diversity of the online world is increasing every
year, so the sources of new ideas, fresh thinking, and knowledge of
existing solutions multiply as well. Credentials are self-generated
rather than externally conferred: my dance video gets views not
because I went to Juilliard but because people find it compelling, and
tell their friends, followers, or colleagues.
4) Web video is itself embedded in a host of other tools, both social
and technical, that are also incredibly easy to use. Do you want to
tell someone across the country about an article in today's paper
newspaper? Get out the scissors, find an envelope, dig up his current
address, figure out correct postage (pop quiz: how much is a
first-class stamp today?), get to a mailbox, and wait a few days.
Want to recommend a YouTube or other web video? There are literally
hundreds of tools for doing so, essentially all of which are free and
have short learning curves.
5) Feedback is immediate, in the form of both comments and view
counters. The reputational currency that attaches to a "Charlie bit
my finger" or "Evolution of dance" is often (but not always)
non-monetary, to be sure, but emotionally extremely affecting
nonetheless.
With such powerful motivators, low barriers to participation, vast and
diverse populations, rapidity of both generation and diffusion, and a
rich ancillary toolset relating to online video, Anderson makes a
compelling case for the medium as a vast untapped resource for
problem-solving on multiple fronts. In addition, because video
engages multiple senses, the odds that a given person will grasp my
ideas increase when the viewer can hear, watch, and read text relating
to the topic.
Thus the power of extreme usability transcends gadgets, frustration,
and false-failure returns. When done right, giving people easy access
to tools for creation, distribution, interpretation, and
classification/organization can help address problems and
opportunities far beyond the sphere of electromechanical devices.
Apart from reducing frustration, improving safety, or increasing
sales, lowering barriers to true engagement (as in the web browser,
for example) may in fact help change the world.
Wednesday, September 01, 2010
Early Indications August 2010: Rethinking Location and Identity
Even though they're sometimes overlooked in relation to spectacular
growth rates (50x increases in wireless data carriage), successful
consumer applications (half a billion Facebook users), and technical
achievement (at Google, Amazon, Apple, and elsewhere), location-based
technologies deserve more attention than they typically receive. The
many possible combinations of wired Internet, wireless data, vivid
displays, well-tuned algorithms running on powerful hardware, vast
quantities of data, and new monetization models, when combined with
location awareness, have yet to be well understood.
Digital location-based services arose roughly in chronological
parallel with the commercial Internet. In 1996, GM introduced the
OnStar navigation and assistance service in high-end automobiles.
Uses of Global Positioning System (GPS, which, like the Internet, was
a U.S. military invention) and related technologies have exploded in
the intervening years, in the automotive sector and, more recently, on
smartphones. The widespread use of Google Earth in television is
another indicator of the underlying trend.
Handheld GPS units continue to double in sales every year or two in
the North American market. As the technology is integrated into
mobile phones, the social networking market is expected to drive far
wider adoption. Foursquare, Gowalla, numerous other startups, and the
telecom carriers are expected to deliver more and more applications
linking "who," "where," and "when." Powerful indications of this
tendency came when Nokia bought Navteq (the "Intel inside" of many
online mapping applications) for $8.1 billion in 2007, when Facebook
integrated location services in 2010, and when the rapid adoption of
the iPhone and other smartphones amplified the market opportunity
dramatically. Location-based services (whether Skyhook geolocation,
Google Maps and Earth, GPS, or others) have evolved to become a
series of platforms on which specific applications can build, tapping
the market's creativity and vast quantities of data.
In the process, the evolution of location taps into significant questions:
-Who am I in relation to where I am? That is, what are the
implications of mapping for identity management?
-Who knows where I am, when I'm there, and where I've been? How much
do I control the "information exhaust" related to my movements? Who
is liable for any harm that may come to me based on the release of my
identity and location?
-Who are we relative to where we are? In other words, how do social
networks change as they migrate back and forth between virtual space
(Facebook) and real space (Mo's Bar)? What happens as the two worlds
converge?
Variations on a Theme
While location often seems to be synonymous with GPS, location-based
data services actually come in a variety of packages. Some examples
follow:
-Indoor Positioning Systems
For all of the utility of GPS, there are numerous scenarios where it
doesn't work: mobile x-ray machines or patient gurneys in hospitals,
people in burning buildings, work-in-process inventory, and
specialized measurement or other tools in a lab or factory all need to
be located in sometimes vast and often challenging landscapes,
sometimes within minutes. GPS signals may not penetrate the building,
and even if they can, the object of interest must "report back" to
those responsible for it. A variety of wired and wireless
technologies can be used to create what is in essence a scaled-down
version of the GPS environment.
-Optical
Such well known firms as Leica and Nikon have professional products to
track minute movements in often massive structures or bodies: dams,
glaciers, bridges. Any discussion of location awareness that neglects
the powerful role of precision optics, beginning with the essential
surveyor's transit, would be incomplete.
-WiFi mapping
As we have seen, the worldwide rise of wi-fi networking is very much a
bottom-up phenomenon. Two consequences of that mode of installation
are, first, often lax network security and second, considerable
coverage overspill. Driving down any suburban or metropolitan street
with even a basic wireless device reveals dozens of residential or
commercial networks. Such firms as Google have systematically mapped
those networks, resulting in yet another overlay onto a growing number
of triangulation points. The privacy implications of such mapping
have yet to be resolved.
-Cellular
Wireless carriers can determine the position of an active (powered-up)
device by triangulating against nearby towers. Such
an approach lacks precision when compared to approaches (most notably
GPS) that reside on the handset rather than in the network. In either
case, the carrier can establish historical location for law
enforcement and potentially other purposes.
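The geometry is easy to sketch (Python, invented coordinates): given
estimated ranges to three towers, intersecting circles narrow the
handset's position, and any error in those range estimates passes
straight through to the fix.

    import numpy as np

    def trilaterate(towers, distances):
        """Estimate a 2-D position from ranges to three known towers by
        linearizing the circle equations and solving by least squares."""
        (x1, y1), (x2, y2), (x3, y3) = towers
        d1, d2, d3 = distances
        A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                      [2 * (x3 - x1), 2 * (y3 - y1)]])
        b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
        position, *_ = np.linalg.lstsq(A, b, rcond=None)
        return position

    towers = [(0.0, 0.0), (5.0, 0.0), (2.5, 4.0)]   # tower coordinates, km
    ranges = [2.9, 2.6, 2.2]                        # estimated ranges, km
    print(trilaterate(towers, ranges))              # approximate handset position

When the towers lie nearly in a straight line, the system above
becomes ill-conditioned, which is the geometric reason rural tower
placement along a road (noted in the 911 discussion later in this
issue) yields such poor fixes.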
-Skyhook
A startup based in Boston, Skyhook has built a database of the
physical coordinates of 100 million wi-fi access points, then added
both GPS and cellular components, making Skyhook most precise (inside
or near buildings)
where GPS is weakest. A software solution combines all available
information to create location-tracking for any wi-fi enabled device,
indoors or out. Skyhook powers location awareness for devices from
Apple, Dell, Samsung, and other companies, and is now generating
secondary data based on those devices.
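Skyhook's actual algorithm is proprietary, but a generic way to
combine position estimates of differing quality is to weight each by
its confidence; a minimal sketch (Python, hypothetical readings):

    def fuse_positions(estimates):
        """Combine (lat, lon, uncertainty_m) estimates from different sources
        with inverse-variance weighting -- a generic fusion scheme, not
        Skyhook's actual algorithm."""
        weights = [1.0 / (u ** 2) for _, _, u in estimates]
        total = sum(weights)
        lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
        lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
        return lat, lon

    estimates = [
        (42.3601, -71.0589, 50.0),    # GPS fix, weak indoors (hypothetical)
        (42.3605, -71.0593, 20.0),    # wi-fi database lookup
        (42.3630, -71.0560, 400.0),   # cell-tower triangulation
    ]
    print(fuse_positions(estimates))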
Landmarks
A few transitions and innovations in the history of location-based
services reveal the scale, complexity, and wide variety of
applications that the core technologies are powering.
OnStar
With roughly 5.5 million subscribers in mid-2010, OnStar has become
the world's largest remote vehicle-assistance service. In addition to
receiving navigation and roadside assistance, subscribers can have
doors unlocked and gain access to certain diagnostic data related to
that particular vehicle. The service delivers important information
to emergency response personnel: when extricating occupants from a
damaged vehicle, knowing which airbags have deployed can assist in
keeping EMTs, police, and firefighters safe from the explosive force
of an undeployed device that might be inadvertently tripped. Knowing
the type and severity of the crash before arrival on the scene can
also help the teams prepare for the level of damage and injury they
are likely to encounter.
The service was launched as a joint venture. General Motors brought
the vehicle platform and associated engineering, Hughes Electronics
managed the satellite and communications aspects, and Electronic Data
Systems, itself being spun out from GM in OnStar's launch year,
performed systems integration and information management.
GPS
The history of GPS is even more compelling when considered alongside
its nearly contemporary stable mate, the Internet. GPS originated in
1973, ARPANET in 1969. Ronald Reagan allowed GPS to be used for
civilian purposes after a 1983 incident involving a Korean Air Lines
plane that strayed into Soviet airspace. The Internet was handed off
from the National Science Foundation to commercial use in 1995; Bill
Clinton ordered fully accurate GPS (20 meter resolution) to be made
available May 1, 2000. Previously, the military had access to the
most accurate signals while "Selective Availability" (300 meter
resolution) was delivered to civilian applications.
Since 1990, GPS has spread to a wide variety of uses: recreational
hiking and boating, commercial marine navigation, cell phone
geolocation, certain aircraft systems, and of course vehicle
navigation. Heavy mining and farming equipment can be steered to less
than 1" tolerances. Vehicles (particularly fleets) and even animals
can be "geofenced," with instant notification if the transmitter
leaves a designated area. In addition to latitude and longitude, GPS
delivers highly precise time services as well as altitude.
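Geofencing itself reduces to a distance check against a stored
boundary. A minimal sketch (Python, great-circle distance, invented
coordinates):

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two latitude/longitude points, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
        return 2 * 6371.0 * asin(sqrt(a))

    def outside_fence(reading, center, radius_km):
        """Return True (raise an alert) if a GPS fix falls outside the fence."""
        return haversine_km(*reading, *center) > radius_km

    depot = (40.7934, -77.8600)      # hypothetical depot coordinates
    fix = (40.8561, -77.8361)        # latest GPS fix from a fleet vehicle
    if outside_fence(fix, depot, radius_km=5.0):
        print("Alert: vehicle has left its designated area")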
Trimble
Founded by Charles Trimble and two colleagues from Hewlett-Packard in
1978 (the first year a GPS satellite was launched), Trimble Navigation
has become an essential part of geolocation history. From its base in
Silicon Valley, the company has amassed a portfolio of more than 800
patents and offers more than 500 products. Much like Cisco, Trimble
has made acquisition of smaller companies a core competency, with many
M&A moves in the past ten years in particular. A measure of Trimble's
respect in the industry can be seen in the quality of its
joint-venture partners: both Caterpillar and Nikon have gone to market
jointly with Trimble.
The company has a long history of "firsts": the first commercial
scientific-research and geodetic-survey products based on GPS for
oil-drilling teams on offshore platforms, the first GPS unit taken
aboard the space shuttle, the first circuit board combining GPS and
cellular communications. The reach of GPS can be seen in the variety
of Trimble's product offerings: agriculture, engineering and
construction, federal government, field and mobile worker (including
both public safety and utilities applications), and advanced devices,
the latter indicating a significant commitment to R&D.
Location, Mobility, and Identity
Issues of electronic identity and mobility have been playing out in
quiet but important ways. Each of several instances is a classic case
of social or economic problems being tangled up with a technology
challenge. To see only one side of the question is to create the
possibility of unintended consequences, allow hidden agendas into
play, and generally confuse the allocation of sometimes-scarce
resources.
-Social Networking Goes Local
Whether through Dodgeball (a New York startup that was bought by
Google in 2005 then left unexploited), Foursquare, or Facebook Places,
the potential for the combination of virtual and real people in
virtual or real places is still being explored. Viewed in
retrospect, the course of the Dodgeball acquisition raises the revenue
questions familiar to watchers of Friendster et al: who will pay for
what, and who collects, by what mechanism? Who owns my location
information and what aspects of it do I control? Much like my medical
records, which are not mine but rather the doctor's or hospital's,
control appears to be defaulting to the collector rather than the
generator of digital bread crumbs.
-The Breakdown of 911
After a series of implementations beginning in 1968, Americans on
wireline voice connections could reliably dial the same three-digit
emergency number anywhere in the country. As the Bell System of the
twentieth century fades farther and farther from view, the presumption
of 911 reliability declines proportionately with the old business
model even as demand increases: the U.S. generates about 12 million
calls a day to 911. The problem comes in two variants.
First, a number of Voice over IP customers with life-threatening --
and as it turned out, life-ending -- emergencies could only reach a
recording at Vonage saying to call 911 from another phone. The Texas
Attorney General is raising the question after a 911 call failed
during a home invasion in Houston. A baby's death in Florida was
blamed on a Vonage 911 failure. According to the Wall Street Journal,
"In a letter to Florida's Attorney General, [the mother] said the
Vonage customer-service representative laughed when she told her that
Julia had died. 'She laughed and stated that they were unable to
revive a baby'. . . ."
For its part, Vonage includes bold-print instructions for manual 911
mapping during the sign-up process, but it's been estimated that up to
a quarter of the U.S. population is functionally illiterate. One
feature of VoIP is its portability: plug the phone into an RJ45 jack
anywhere and receive calls at a virtual area code of the customer's
choice. Navigating firewalls, dynamic IP addresses, wireless
connections, and frequent network outages taxes all but the most
technically adept Internet users. Children are also a key 911
constituency. Taken collectively, these overlapping populations raise
dozens of tricky questions. At the infrastructure level, the FCC and
other agencies face the substantial challenge of determining the
fairest, safest set of technical interconnection requirements
incumbent on the Regional Bells and VoIP carriers.
From the Bell perspective, 911 obviously costs money to implement and
maintain, and declining wireline revenues translate to declining 911
funds. Connecting 911 to the Internet in a reliable, secure manner is
nontrivial -- network attacks have used modems to target the service
in the past -- and until contractual arrangements are finalized there
is reluctance to subsidize the same firms that present themselves as
full wireline replacements.
911 isn't just a VoIP problem either: cellular users represent nearly
75% of emergency callers, but math and economics conspire to make
finding them difficult or impossible. In rural areas, cell towers
often follow roads, so attempting to triangulate from three points in
a straight line can limit precision. States have raided 911 tax
revenues for budget relief.
-Cell phone tracking
The wireless carriers offer a variety of services that give a relative
(often a parent, or an adult child of a potentially confused elder)
location information generated by a phone. The service has also been
used to help stalkers and abusive spouses find their wives in hiding.
Women's shelters routinely strip out the tracking component of cell
phones; according to the Wall Street Journal, a Justice Department
report in 2009 estimated that 25,000 adults in the U.S. were victims
of GPS stalking every year. In addition to the carriers, tracking
capability is being developed by sophisticated PC users who spoof the
behavior of a cell tower. Keystroke and location logging software is
also available; one package, called MobileSpy, costs under $100 per
year.
Conclusion
As the telephone system migrates from being dominated by fixed lines,
where identity resided in the phone, to mobile usage, where identity
typically relates to an individual, location is turning out to matter
a lot. Mobile number portability was an unexpectedly popular mandate
a few years ago, and the fastest technology adoption in history was a
phone feature: 55 million people signed up in a matter of months for a
service -- the Federal Do Not Call registry -- that didn't exist when
it was announced. (That's even faster than the previous champ,
Netscape Navigator's zooming to 38 million users in 18 months.) Given
the global nature of some of these questions, not to mention numerous
issues with ICANN and DNS, the discussions and solutions will only get
more complicated. As the examples illustrate, getting social
arrangements to keep pace with technology innovation is if anything
more difficult than the innovation itself.
Saturday, July 31, 2010
July 2010 Early Indications: Living with an iPad
I'm not typically a "gadget guy," one of those folks (Ed Baig at USA Today is one of the best) who regularly evaluate new devices. The iPad, however, stands as a milestone that redefines how people and technology inter-relate. A colleague is asking me to evaluate it as an educational tool, so I'm probably a bit more self-conscious than usual in my uptake of this particular technology. Herewith are a few thoughts.
The iPad perfectly embodies wider confusion over the intermingling of work and life. I have yet to load the office apps, so most of my reaction concerns the device used in "life" mode. That being said, the iPad is too convenient to resist "just a peek" at e-mail. The screen is so bright, and actually pretty, that it's an attention magnet. The widely discussed aluminum case has just the right heft in the hand, just the right curve in the palm, so that people (not just technologists) want to pick it up. From there, assuming a good wi-fi signal, I found everyone got up and running with very little coaching, usually without invitation.
I expect this will become more of an issue with work-related applications, but the iPad's limited text entry will be interesting to assess. Right now you can sort of double-thumb, sort of touch type, sort of trace letters with fingers (in 3rd-party applications). For short to medium e-mails, I did not mind, but a Crackberry addict might find the slow pace frustrating. Judging from other people I've watched, the Apple case that folds back on itself to form a triangular stand can be helpful here.
Similarly, I expect that at some point kludges or formal fixes will address the lack of printer support. Along the same lines, the single-threaded mode of operation can get annoying: leave an app to check something else (it does remember multiple web pages, however) and you face a full restart upon returning to whatever non-browser activity you were just doing. An update to the operating system should fix this issue in September.
The iPad rapidly changed some of my long-standing habits. Reading, however, is not one of them. I have yet to get on board the e-reader bandwagon, and have left several texts I should read for work untouched: I literally forget they're loaded and waiting for me. In part this is because I read scholarly books idiosyncratically, never starting at page 1 and proceeding to 347. Rather, I'll start by looking at the plates if the book has them, checking out the pictures bound somewhere randomly in the middle. From there I might look through the endnotes, or the jacket blurbs. I'll often skip chapter 1, at least initially, preferring instead to start with what often turns out to be the first body chapter with real evidence and real argument rather than introductory matter which some people find very hard to write. The point is that e-readers do not support non-fiction reading as well as they do a good mystery, where there's only one way through the story. Pagination also presents a real issue when you need to footnote a source.
To stay with the question of reading, what was widely called "the Jesus tablet" in the publishing industry cannot yet serve as a replacement for a physical magazine -- particularly at the prices being suggested: $4.99 a week for Time or Sports Illustrated is not going to fly, I believe. Merely exporting static, dated dead-tree content to a new medium (which happens to be dynamic, real-time, and capable of multimedia) falls into a familiar trap. The Wright brothers did not succeed by mimicking a bird. Printed books did not find a market mass-producing hand-lettered scrolls. Television quickly stopped presenting radio shows with visible people. Businesses are continuing to learn that the Web is not "television except different."
To their credit, the team at Flipboard is trying to transcend the paper magazine by integrating social networking feeds: "hey, did you see the piece in [wherever]?" The half-page-oriented turning metaphor looks clever at first glance, and some of the content is strong. The problem is it's too strong, too predictable: thus far it's hard to find fresh stories in the pretty format. Too many taps stand between "hmm, let's look at that" and the actual story, most of which I'd already seen in my other grazings.
In addition, the Flipboard business model looks extremely shaky: adding one more intermediary between any potential consumer and the brand creates disincentives all around. I'd also wager that the Web 2.0 Tom Sawyer approach -- let the crowd do your work for you and pay them in reputational or other non-monetary compensation -- cannot run at the current pace forever. Sure, I recommend articles in my Twitter feed (38apples), but a) not at scale, b) not reliably, from an advertiser's vantage point, and c) not systematically, from a subscriber's standpoint. Dialing in the right balance between serendipity and editorial coherence (the current buzzword calls it "curated" content) is a new challenge. The New York Times, as good as it is at many things, has not yet found the key to this new medium, nor should anyone expect it to: it's simply too early. The same goes for AOL, for the BBC, for NBC, and for just about everyone else.
Because it is so relentlessly visual and was never trapped in a paper model, weather information can be arresting on the iPad. The Weather Channel app reminded me immediately of what I remember of Pointcast (which, as I pointed out on Twitter, would make a great iPad app: minimal text input, free-floating news and other topical links, ticker streaming, and other invitations to tap). Maps, graphs, videos, icons -- weather information works essentially perfectly on the iPad.
I did not find the same appeal in Google Maps. I believe this discomfort relates to the nature of wayfinding. If you're looking at a map, you're likely already doing something else: dialing a phone, looking out the window for a house number or street sign, holding a steering wheel, maybe grasping a slip of paper with an address. Given the iPad's two-handed operation, those other ancillary activities often make it the wrong tool for the job, particularly compared to a one-handed or voice-activated GPS.
I have yet to fly with the iPad but look forward to doing so: I never found the iPhone a desirable movie player, but expect my next long flight will pass faster with the iPad's vivid display of something I want rather than the typical choices on the airlines. One great feature across all operations: the iPad runs silently. The move to a world in which mobile devices rely far more heavily on broadband connections to "cloud" resources than they do on on-board storage will have many side effects, and the loss of noise is one of them. (I did not yet try any connection other than WiFi, but will attempt to assess how well 3G works once school starts.)
In my time with the iPad, the life-altering application has been Scrabble. It may actually be better than the physical board game. Let me count the ways:
1) You can't lose pieces.
2) You can't cheat by marking or memorizing tiles (as my late father-in-law was fond of doing).
3) The dictionary is hard-wired: no fights, though to be fair, in some circles the lexicographic litigation is part of the point, and that gets lost.
4) A partially completed game is trivial to save.
5) Lifetime statistics are kept automatically, including win-loss.
6) The touch screen allows automatic shuffling and very comfortable flicking of the letters in the tray, unlike the iPhone app Words with Friends, in which I sometimes must break out the physical set to parse a really tough rack.
7) You can play by yourself against the computer.
8) Virtual games against on-line strangers are also possible.
9) You can play in bed, on a train, on a plane, on a subway, unlike the original.
In sum, what do the various aspects tell us about the iPad? First, the device almost demands interaction, but limits its sphere. Highlighting and annotation, so far, have not worked well. The well-publicized exclusion of Flash from the device rules out many websites, such as those running Flash-based catalog apps. Typing remains problematic. Printing will have to be added soon.
Second, the rapid start (from sleep) and silent operation take the user away from the world of "computers" and into the domain of "appliances," which I say as a compliment. I will withhold analysis of the device's pricing for the moment, however.
Third, the particular combination of heft, touch-screen, and vivid display is so new to us as a user community that I do not think we have a large catalog of applications that exploit the new hardware to its fullest. While the iPad runs some games superbly well, it's not a PSP. Yes, you can read books, but the iPad is not really a proper reader, or if it is, it's a really expensive one. One can replicate laptop functionality, but the iPad is not conceptually a computer, unlike the Microsoft family of tablets from a few years ago.
Until we can say with subconscious certainty what this thing is (and does) and behave accordingly, just as we could identify a television and all that it embodied as little as five years ago, I believe the iPad's transformative potential remains only partially recognized.
(The best assessment I read while researching this piece is here)
Thursday, June 17, 2010
June 2010 Early Indications II: Book Review of Clay Shirky, Cognitive Surplus: Creativity and Generosity in a Connected Age
To those of us who for a long time have tried to understand the many impacts of the Internet, Clay Shirky stands among a very small group of folks who Get It. Usually without hyperbole and with a sense of both historicity and humor, Shirky has been asking not the obvious questions but the right ones. Explaining first the import and then the implications of these questions has led him to topics ranging from pro-anorexia support groups to the Library of Congress cataloging system, and from flame wars to programming etiquette.
This book continues that useful eclecticism. Examples are both fashionably fresh and honorably historical: Josh Groban and Johannes Gutenberg appear in telling vignettes. Rural India, 18th-century London, Korean boy-band fans, and empty California swimming pools are important for the lessons they can reinforce. The usual cliches -- Amazon, Zappos, Second Life, even Twitter -- are pretty much invisible. As Shirky has done elsewhere, two conventional narratives of various phenomena are both shown to miss the point: in this case, neither "Young people are immoral" nor "Young people are blissfully generous with their possessions" adequately explained the rise in music file sharing.
In a career of writing cogently about what radical changes in connectivity do to people, groups, and institutions, Cognitive Surplus is, I believe, Shirky's best work yet. Not content with explaining how we have come to our peculiar juncture of human attention, organizational possibility, and technological adaptation, in a final chapter Shirky challenges us to do something meaningful -- to civic institutions, for civil liberties, with truth and beauty on the agenda -- with social media, mobility, ubiquitous Internet access, and the rest of our underutilized toolkit. At the same time, he avoids technological utopianism, acknowledging that the tools are morally neutral and can be used as easily for cheating on exams as for the cleanup of Pakistani squalor.
A core premise of the book holds that the Internet allows many people to reallocate their time. Specifically, the amount of time people in many countries spend watching television is so vast that even a nudge in the media landscape opens up some significant possibilities. Wikipedia, for example, is truly encyclopedic in its coverage: comprising work in more than 240 languages, the effort has accumulated more than a billion edits, all by volunteers. At the time of his analysis, Shirky noted, the estimated human effort to create Wikipedia was roughly equivalent to the time consumed by the television ads running on one average weekend.
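The comparison is, at bottom, simple arithmetic. A back-of-envelope sketch, using round illustrative figures of the kind Shirky has used in talks rather than exact numbers from the book, looks like this:

# Round, illustrative figures -- assumptions for the sketch, not data
# taken from the book.
wikipedia_hours = 100_000_000            # rough total human effort in Wikipedia
us_tv_hours_per_year = 200_000_000_000   # rough annual U.S. television viewing
ad_share = 0.25                          # assume about a quarter of airtime is ads

ad_hours_per_weekend = us_tv_hours_per_year * ad_share * (2 / 365)
print(ad_hours_per_weekend / wikipedia_hours)
# With these assumptions, a single weekend of U.S. ad-watching is on the
# same order as the entire human effort invested in Wikipedia.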
So ample available time exists to do something, as opposed to lying on a couch passively receiving TV messages. What might people do with this "cognitive surplus"? Read War and Peace. Volunteer at a soup kitchen. Join Bob Putnam's bowling league. Thus far, however, people haven't tended, in large numbers, to do these things, even though civic participation is apparently on the rise. Rather, people are connecting with other people on line: the shift from personal computing to social networking (Facebook alone hosts roughly half a billion accounts) is well underway but not yet well understood. Once we can communicate with people, anywhere, anytime, at close to zero economic cost, what do we do?
Here Shirky is inclusive: people help other people write and maintain operating systems, web servers, or browsers. They recaption silly cat pictures with sillier messages. They identify election irregularities, or ethnic discrimination, or needs for public safety and public welfare resources in both Haiti and the streets of London. The state of the technology landscape makes many things possible:
-Individuals do not need to be professionals to publish ideas; to disseminate pictures, music, or words; to have an opinion in the public discourse; or to analyze public data on crime or what have you.
-Based on an emerging subset of behavioral economics, we are discovering that markets are not the optimal organizing and motivational principle for every situation. For many kinds of social interaction, whether in regard to fishing grounds or blood donation, reputation- and community-based solutions work better than a monetary one. At the collective level, belonging to a group we believe in and having a chance to be generous are powerful motivators. For their part, individuals are motivated by autonomy (shaping and solving problems ourselves) and competence (over time, getting better at doing so). In addition, the introduction of money into an interaction may make it impossible for the group to perform as well as before money, even after the financial rules are removed (think of certain Native American tribes as tragic examples here, but day-care parents who come late to pick-up hit closer to home).
-People in groups can organize to achieve some goal, whether it is the pursuit of tissue type registration for organ donation, a boycott of BP, or making car pools scale beyond office-mates.
In sum: amateurs can enter many fields of communication, performing at various levels of quality for free and displacing professionals with credentials who used to be paid more. Low overhead in both technical skill and capital infrastructure opens media businesses to new entrants. Finally, the combination of intrinsic motivation for cognitive work and low coordination costs means that informal organizations can outperform firms along several axes: Linux and Wikipedia stand as vivid, but not isolated, examples here.
This new order of things complicates matters for incumbents: record-label executives, newspaper reporters, and travel agents can all testify to being on the wrong side of a disruptive force. It also raises questions that can trouble some people:
-"Who will preserve cultural quality?"
Without proper editors guarding access to the publishing machinery, lots of bad ideas might see an audience. (The problem is not new: before movable type, every published book was a masterpiece, while afterward, we eventually got dime novels.)
-"What happens if that knowledge falls into the wrong hands?"
Previous mechanisms of cultural authority, such as those attached to a physician or politician, might be undermined.
-"Where do you find the time?"
Excessive exposure to electronic games, virtual communities, or the universally suspect "chat rooms" might crowd out normal behavior, most likely including American Idol, Oprah, or NCIS.
In sum, as Shirky crystallizes the objections, "Shared, unmanaged effort might be fine for picnics and bowling leagues, but serious work is done for money, by people who work in proper organizations, with managers directing their work." (p. 162)
These, then, are the stakes. Just as the limited liability joint stock corporation was a historically specific convenience that solved many problems relating to industrial finance, so too are new organizational models becoming viable to address today's problems and possibilities. At the same time, they challenge the cognitive infrastructure that coevolved with industrial capitalism.
That infrastructure, in broad outline, builds on the following:
-Individuals are not equipped to determine their own contributions to a larger group or entity.
-Money is a widely useful yardstick.
-Material consumption is good for psychic and economic reasons.
-Organizations are more powerful than disorganized individuals, and the larger the organization, the more powerful it is.
If each of those pillars is, if not demolished, at least shown to be wobbly, what comes next? In the book's final chapter, Shirky moves beyond analysis to prescription, arguing that with surplus time and massive low-cost infrastructure at our disposal, we owe it to each other and to our children to create something more challenging and beneficial than the best of what's out there: "Creating a participatory culture with wider benefits for society is harder than sharing amusing photos." (p. 185)
Patientslikeme.com, Ushahidi, and Responsible Citizens each represent a start rather than an acme. Digital society awaits, in short, its Gutenbergs, its Jeffersons, its Nightingales, its Gandhis. Shirky's concrete list of how-tos is likely to inform the blueprint used by this upcoming generation of innovators, reformers, and entrepreneurs. As a result, Cognitive Surplus is valuable for anyone needing to understand the potential ramifications of our historical moment.
Friday, June 11, 2010
Early Indications June 2010: World Cup special on sports brand equity
It's a familiar business school discussion. "Let's talk about
powerful brands," begins the professor. "Who comes to mind?" Usual
suspects emerge: Coke, Visa, Kleenex. "OK," asks the prof, "what brand
is so influential that people tattoo it on their arms?" The answer is
of course Harley-Davidson.
There is of course another category of what we might call "tattoo
brands," however: sports teams. Measuring sporting allegiance as a
form of brand equity is both difficult and worth thinking about.
For a brief definition up front, Wikipedia's well-footnoted statement will do:
"Brand equity refers to the marketing effects or outcomes that accrue
to a product with its brand name compared with those that would accrue
if the same product did not have the brand name."
That is, people think more highly of one product than another because
of such factors as word of mouth, customer satisfaction, image
creation and management, track record, and a range of tangible and
intangible benefits of using or associating with the product.
The discussion is timely on two fronts. First, the sporting world's
eyes are on the World Cup, and several European soccer clubs are
widely reckoned as power brands on the global level. Domestically,
the pending shifts in college athletic conferences have everything to
do with brand equity: the University of Texas, a key prize, is one of
a handful of programs that make money, in part because of intense fan
devotion (one estimate puts football revenues alone at $88 million).
Our focus today will be limited to professional sports franchises, but
many of the arguments can be abstracted, in qualitative terms, to
collegiate athletics as well. If we consider the revenue streams of a
professional sports franchise, three top the list:
-television revenues
-ticket sales and in-stadium advertising
-licensing for shirts, caps, and other memorabilia.
Of these, ticket sales are relatively finite: a team with a powerful
brand will presumably have more fans than can logistically or
financially attend games. Prices can and do rise, but for a quality
franchise, the point is to build a fan network beyond the arena.
Television is traditionally the prime way to do this. National and
now global TV contracts turn viewership into advertising revenue for
partners up and down the value chain from the leagues and clubs
themselves. That Manchester United and the New York Yankees can have
fan bases in China, Japan, or Brazil testifies to the power of
television and, increasingly, various facets of the Internet in
brand-building.
Sports fandom exhibits peculiar economic characteristics. Compared
to, say, house- or car-buying, fans do not research various
alternatives before making a presumably "rational" consumption
decision: team allegiance is not a "considered purchase." If you are
a Boston Red Sox fan, your enthusiasm may or may not be relevant to
mine: network effects and peer pressure can come into play (as at a
sports bar), but are less pronounced than in telecom, for example. If
I am a Cleveland Cavaliers fan, I am probably not a New York Knicks
fan: a choice in one league generally precludes that league's other teams for the season.
Geography matters, but not decisively: one can comfortably cheer for
San Antonio in basketball, Green Bay in football, and St. Louis in
baseball. At the same time, choice is not completely independent of
place, particularly for ticket-buying (as compared to hat-buying).
Finally, switching costs are generally psychic and only mildly
economic (as in having to purchase additional cable TV tiers to see an
out-of-region team, for example). Those psychic costs are not to be
underestimated: someone living in London with access to several soccer
clubs does not choose an allegiance each year on the basis of the
lowest price or the highest quality. Allegiance also does not
typically switch for reasons of performance: someone in Akron who has
cheered, in vain, for the Cleveland Browns is not likely to switch to
Pittsburgh even though the Steelers have a far superior championship
history.
Given the vast reach of today's various communications channels, it
would seem that successful sports brands could have a global brand
equity that exceeds the club's ability to monetize those feelings. I
took five of the franchises ranked highest on the Forbes 2010 list of most valuable sports brands and calculated the ratio of the estimated brand equity to the club's revenues. If the club were able to capture more fan allegiance than it could realize in cash inflows, that ratio should be greater than one. Given the approximations I used, that is not the case.
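The mechanics of that comparison are simple; a minimal sketch, with hypothetical placeholder figures standing in for the Forbes estimates, runs as follows:

# Hypothetical (brand value, annual revenue) pairs in $ millions --
# placeholders for illustration, not the actual Forbes 2010 figures.
clubs = {
    "Club A": (340, 450),
    "Club B": (300, 420),
    "Club C": (260, 400),
}

for name, (brand_value, revenue) in clubs.items():
    ratio = brand_value / revenue
    print(f"{name}: brand/revenue = {ratio:.2f}")
# A ratio above 1 would mean the club's brand is estimated to be worth
# more than a full year of the revenue it currently captures.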
For a benchmark, I also consulted Interbrand's list of the top global
commercial brands and their value to see how often a company's image
was worth more than its annual sales. I chose six companies from a
variety of consumer-facing sectors (so long IBM, SAP, and Cisco), and
the company had to be roughly the same as the brand (the Gillette
brand is not the parent company of P&G).
Three points should be made before discussing the results. First, any
calculation of brand equity is a rough estimate: no auditable figures
or scientific calculations can generate these lists (see here). Second, Forbes and Interbrand used
different methodologies. We will see the consequences of these
differences shortly. Finally, corporate revenues often accrued from
more brands than just the flagship: people buy Minute Maid apart from
the Coca Cola brand, but the juice revenues are counted in the
corporate ratio. All told, this is NOT a scientific exercise but
rather a surprising thought-starter.
The stunning 8:1 ratio of brand equity to revenues at Louis Vuitton is
in part a consequence of Interbrand's methodology, which overweights
luxury items. Even so, six conclusions and suggestions for further
investigation emerge:
1) The two scales do not align. The New York Yankees, the most
valuable sports brand in the world, is worth 1/24 as much as the
Amazon brand. One or both of those numbers is funny.
2) Innovation runs counter to brand power. New Coke remains a
textbook failure, while Apple's brand is only worth about a third of
its revenue. Harley-Davidson draws its cachet from its retrograde
features and styling, the antithesis of innovativeness.
3) Geography is not destiny for sports teams. Apart from New York and
Madrid, Dallas, Manchester, and Boston (not included here but with two
teams in Forbes' top ten) are not global megacities or media centers;
London, Rome, and Los Angeles are all absent.
4) Soccer is the world's game, as measured by brand: five of the ten
most valuable names belong to European football teams. The NFL has
two entries and Major League Baseball three to round out the top ten
list. Despite the presence of more international stars than American
football, and their being from a wider range of countries than MLB's
feeders, basketball and hockey are absent from the Forbes top ten.
5) Assuming for the sake of argument that the Interbrand list is
overvalued and therefore that the Forbes list is more accurate, the
sports teams' relatively close ratio of brand equity to revenues would
suggest that teams are monetizing a large fraction of fan feeling.
6) Alternatively, if the Forbes list is undervalued, sports teams have
done an effective job of creating fan awareness and passion well
beyond the reach of the home stadium. Going back to our original
assumption, if tattoos are a proxy for brand equity, this is more
likely the case. The question then becomes, what happens next?
As more of the world comes on line, as media becomes more
participatory, and as the sums involved for salaries, transfer fees,
and broadcast rights at some point hit limits (as may be happening in
the NBA), the pie will continue to be reallocated. The intersection
of fandom and economics, as we have seen, is anything but rational, so
expect some surprises in this most emotionally charged of markets.
Saturday, May 22, 2010
May 2010 Early Indications: Devising the cloud-aware organization
As various analysts and technology executives assess the pros and cons of cloud computing, two points of consensus appear to be emerging:
A) very large data centers benefit from extreme economies of scale
B) cloud success stories are generally found outside of the traditional IT shop.
Let us examine each of these in more detail, then probe some of the implications.
The advantages of scale
Whether run by a cloud provider or a well-managed enterprise IT group, very large data centers exhibit economies of scale not found in smaller server installations. First, the leverage of relatively expensive and skilled technologists is far higher when one person can manage between 1,000 and 2,000 highly automated servers, as at Microsoft, as opposed to one person being responsible for between five and 50 machines, which is common.
Second, the power consumption of a well-engineered data center can be more efficient than that of many traditional operations. Yahoo is building a new facility in upstate New York, for example, that utilizes atmospheric cooling to the point that only 1% of electricity consumption is for air conditioning and related cooling tasks. Having people with deep expertise in cooling, power consumption, recovery, and other niche skills on staff also helps make cloud providers more efficient than those running at smaller scales.
Finally, large data centers benefit from aggregation of demand. Assume facility A has 10,000 users of computing cycles spread across a variety of different cyclical patterns, while facility B has fewer users whose demand all follows similar rhythms: retail seasonality, quarterly closes for an accounting function, or monthly invoicing. Facility A should be able to run more efficiently because it has a more "liquid" market for its capabilities, while facility B will likely have to build to its highest load (plus a safety margin) and then run less efficiently the majority of the time. What James Hamilton of Amazon calls
"non-correlated peaks" can be difficult to generate within a single enterprise or function.
Who reaps the cloud's benefits?
For all of these benefits, external cloud successes have yet to accrue to traditional IT organizations. At Amazon Web Services, for example, of roughly 100 case studies, none are devoted to traditional enterprise processes such as order management, invoicing and payment processing, or HR.
There are many readily understandable reasons for this pattern; here is a sample. First, legal and regulatory constraints often require a physical audit of information handling practices to which virtual answers are unacceptable. Second, the laws of physics may make large volumes of database joins and other computing tasks difficult to
execute off-premise. In general, high-volume transaction processing is not currently recommended as a cloud candidate.
Third, licenses from traditional enterprise providers such as Microsoft, Oracle, and SAP are still evolving, making it difficult to run their software in hybrid environments (in which some processes run locally while others run in a cloud). In addition, only a few enterprise applications of either the package or custom variety are designed to run as well on cloud infrastructure as they do on a conventional server or cluster. Fourth, accounting practices in IT may make it difficult to know the true baseline costs and benefits to which an outside provider must compare: some CIOs never see their electric bills, for example.
For these reasons, among others, the conclusion is usually drawn that cloud computing is a suboptimal fit for traditional enterprise IT. However, let's invert that logic to see how organizations have historically adapted to new technology capability. When electric motors replaced overhead drive shafts driven by waterwheels adjoining textile mills, the looms and other machines were often left in the same positions for decades before mill owners realized the facility could be organized independently of power supply. More recently, word-processing computers from the likes of Wang initially automated typing pools (one third of all U.S. women working in 1971 were secretaries); it was not until 10 to 20 years later that large numbers of managers began to service their own document-production needs, and thereby alter the shape of organizations.
The cloud will change how resources are organized
Enterprise IT architectures embed a wide range of operating assumptions regarding the nature of work, the location of business processes, clockspeed, and other factors. When a major shift occurs in the information or other infrastructure, it takes years for organizations to adapt. If we take as our premise that most organizations are not yet prepared to exploit cloud computing (rather than talk about clouds not being ready for "the enterprise"), what are some potential ramifications?
-Organizations are already being founded with very little capital investment. For a services- or knowledge-intensive business that does not make anything physical, free tools and low-cost computing cycles can mostly be expensed, changing the fund-raising and indeed organizational strategies significantly.
-The perennial question of "who owns the data?" enters a new phase. While today USB drives and desktop databases continue to make it possible to hoard data, in the future organizations built on cloud-friendly logic from their origins will deliver new wrinkles to information-handling practices. The issue will by no means disappear:
Google's Gmail cloud storage is no doubt already home to a sizable quantity of enterprise data.
-Smartphones, tablets, and other devices built without mass storage can thrive in a cloud-centric environment, particularly if the organization is designed to be fluid and mobile. Coburn Ventures in New York, for example, is an investment firm composed of a small team of mobile knowledge workers who for the first five years had no
corporate office whatsoever: the organization operated from wi-fi hotspots, with only occasional all-hands meetings.
-New systems of trust and precautions will need to take shape as the core IT processing capacity migrates to a vendor. It's rarely consequential to contract for a video transcoding or a weather simulation and have it be interrupted. More problematically, near-real-time processes such as customer service will likely need to
be redesigned to operate successfully in a cloud, or cluster of clouds. Service-level agreements will need to reflect the true cost and impact of interruptions or other lapses. Third-party adjudicators may emerge to assess the responsibility of the cloud customer who introduced a hiccup into the environment relative to the vendor whose
failover failed.
In short, as cloud computing reallocates the division of labor within the computing fabric, it will also change how managers and, especially, entrepreneurs organize resources into firms, partnerships, and other formal structures. Once these forms emerge, the nature of everything else will be subject to reinvention: work, risk, reward, collaboration, and indeed value itself.
A) very large data centers benefit from extreme economies of scale, and
B) cloud success stories are generally found outside the traditional IT shop.
Let us examine each of these in more detail, then probe some of the implications.
The advantages of scale
Whether run by a cloud provider or a well-managed enterprise IT group, very large data centers exhibit economies of scale not found in smaller server installations. First, the leverage of relatively expensive and skilled technologists is far higher when one person can manage between 1,000 and 2,000 highly automated servers, as at Microsoft, rather than the five to 50 machines for which a single administrator is commonly responsible in a conventional shop.
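To put that leverage in rough dollar terms, here is a back-of-the-envelope sketch; the loaded salary figure is a hypothetical placeholder, and the servers-per-administrator counts simply restate the ranges above.

```python
# Back-of-the-envelope: administrative labor cost per server per year.
# The $100,000 loaded salary is a hypothetical placeholder; the
# servers-per-administrator figures restate the ranges cited above.
ADMIN_COST_PER_YEAR = 100_000

for label, servers_per_admin in [("highly automated cloud data center", 1_500),
                                 ("typical enterprise server room", 30)]:
    cost_per_server = ADMIN_COST_PER_YEAR / servers_per_admin
    print(f"{label}: about ${cost_per_server:,.0f} of admin labor per server per year")
```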
Second, a well-engineered data center can use power far more efficiently than many traditional operations. Yahoo is building a new facility in upstate New York, for example, that uses atmospheric cooling to the point that only 1% of electricity consumption goes to air conditioning and related cooling tasks. Having people with deep expertise in cooling, power consumption, recovery, and other niche skills on staff also helps make cloud providers more efficient than those running at smaller scales.
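To see what that 1% figure implies, a small sketch follows: if cooling takes a given share of total electricity, the total draw needed to deliver one megawatt to the servers follows directly. The 35% share assumed for a conventional facility is an illustrative placeholder, not a figure from the sources above.

```python
# Sketch: total electricity needed to deliver 1 MW to the IT load when cooling
# accounts for a given share of total consumption (other overheads ignored).
# The 1% share reflects the facility described above; the 35% share for a
# conventional data center is an illustrative assumption.
def total_draw_mw(it_load_mw, cooling_share_of_total):
    return it_load_mw / (1.0 - cooling_share_of_total)

for label, share in [("atmospherically cooled facility", 0.01),
                     ("conventional facility (assumed)", 0.35)]:
    print(f"{label}: {total_draw_mw(1.0, share):.2f} MW total per 1 MW of IT load")
```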
Finally, large data centers benefit from aggregation of demand. Assume facility A has 10,000 users of computing cycles spread across a variety of cyclical patterns, while facility B has fewer users whose demand moves together: seasonal peaks for retail, quarterly closes for an accounting function, or monthly invoicing runs. Facility A should be able to run more efficiently because it has a more "liquid" market for its capacity, while facility B will likely have to build to its highest load (plus a safety margin) and then run below capacity most of the time. What James Hamilton of Amazon calls "non-correlated peaks" can be difficult to generate within a single enterprise or function.
Who reaps the cloud's benefits?
For all of these benefits, external cloud successes have yet to accrue to traditional IT organizations. At Amazon Web Services, for example, of roughly 100 case studies, none are devoted to traditional enterprise processes such as order management, invoicing and payment processing, or HR.
There are many readily understandable reasons for this pattern; here is a sample. First, legal and regulatory constraints often require a physical audit of information-handling practices, an audit to which virtual answers are unacceptable. Second, the laws of physics may make large volumes of database joins and other computing tasks difficult to
execute off-premise. In general, high-volume transaction processing is not currently recommended as a cloud candidate.
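One way to feel those physics is a quick transfer-time calculation; the dataset size and link speed below are illustrative assumptions rather than figures from any particular enterprise.

```python
# Illustrative: how long it takes just to move a large dataset to an
# off-premise provider over a corporate WAN link, ignoring protocol overhead.
# Both numbers are assumptions chosen for illustration.
dataset_terabytes = 10           # data behind the joins and batch jobs
link_megabits_per_second = 100   # dedicated capacity to the provider

bits_to_move = dataset_terabytes * 8 * 10**12
seconds = bits_to_move / (link_megabits_per_second * 10**6)
print(f"About {seconds / 86_400:.1f} days of continuous transfer")  # roughly nine days
```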
Third, licenses from traditional enterprise providers such as Microsoft, Oracle, and SAP are still evolving, making it difficult to run their software in hybrid environments (in which some processes run locally while others run in a cloud). In addition, only a few enterprise applications of either the packaged or custom variety are designed to run as well on cloud infrastructure as they do on a conventional server or cluster. Fourth, accounting practices in IT may make it difficult to know the true baseline costs and benefits against which an outside provider must be compared: some CIOs never see their electric bills, for example.
For these reasons, among others, the conclusion is usually drawn that cloud computing is a suboptimal fit for traditional enterprise IT. However, let's invert that logic and look at how organizations have historically adapted to new technological capabilities. When electric motors replaced the overhead drive shafts driven by waterwheels adjoining textile mills, the looms and other machines were often left in the same positions for decades before mill owners realized the facility could be organized independently of the power supply. More recently, word-processing computers from the likes of Wang initially automated typing pools (one third of all U.S. women working in 1971 were secretaries); it was not until 10 to 20 years later that large numbers of managers began to service their own document-production needs, and thereby alter the shape of organizations.
The cloud will change how resources are organized
Enterprise IT architectures embed a wide range of operating assumptions regarding the nature of work, the location of business processes, clockspeed, and other factors. When a major shift occurs in the information or other infrastructure, it takes years for organizations to adapt. If we take as our premise that most organizations are not yet prepared to exploit cloud computing (rather than talk about clouds not being ready for "the enterprise"), what are some potential ramifications?
-Organizations are already being founded with very little capital investment. For a services- or knowledge-intensive business that does not make anything physical, free tools and low-cost computing cycles can mostly be expensed, significantly changing fund-raising and, indeed, organizational strategies.
-The perennial question of "who owns the data?" enters a new phase. While USB drives and desktop databases still make it possible to hoard data today, organizations built on cloud-friendly logic from the outset will introduce new wrinkles into information-handling practices. The issue will by no means disappear:
Google's Gmail cloud storage is no doubt already home to a sizable quantity of enterprise data.
-Smartphones, tablets, and other devices built without mass storage can thrive in a cloud-centric environment, particularly if the organization is designed to be fluid and mobile. Coburn Ventures in New York, for example, is an investment firm composed of a small team of mobile knowledge workers who for the first five years had no
corporate office whatsoever: the organization operated from wi-fi hotspots, with only occasional all-hands meetings.
-New systems of trust and precautions will need to take shape as core IT processing capacity migrates to a vendor. It is rarely consequential when a contracted video-transcoding job or weather simulation is interrupted; more problematically, near-real-time processes such as customer service will likely need to be redesigned to operate successfully in a cloud, or cluster of clouds. Service-level agreements will need to reflect the true cost and impact of interruptions or other lapses, as the sketch below illustrates. Third-party adjudicators may emerge to assess the responsibility of the cloud customer who introduced a hiccup into the environment relative to the vendor whose failover failed.
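To make the "true cost" point concrete, here is a hedged sketch contrasting a typical credit-based remedy with the business impact of the same outage; every figure is an invented placeholder.

```python
# Illustrative only: contrast a credit-based SLA remedy with the business
# impact of the same outage. All figures are invented placeholders.
monthly_cloud_bill = 20_000        # what the customer pays the provider
outage_hours = 4                   # a single incident during the month
cost_per_idle_hour = 15_000        # an interrupted customer-service process

sla_credit = 0.10 * monthly_cloud_bill     # an assumed 10%-of-bill service credit
business_cost = outage_hours * cost_per_idle_hour

print(f"SLA credit received:          ${sla_credit:,.0f}")
print(f"Business cost of the outage:  ${business_cost:,.0f}")
```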
In short, as cloud computing reallocates the division of labor within the computing fabric, it will also change how managers and, especially, entrepreneurs organize resources into firms, partnerships, and other formal structures. Once these forms emerge, the nature of everything else will be subject to reinvention: work, risk, reward, collaboration, and indeed value itself.