In our time of economic stagnation, attention on the part of many
political figures is turning to the question of immigration.
Presidential candidate Michele Bachmann called for phased deportation
of 11 million illegal immigrants. The stated rationale for such a
stance includes a desire to "curb the unfair strain on our country's
job markets." Such dramatic proposals notwithstanding, with accurate
information by definition hard to collect at such a large scale, the
connection of immigration to employment is impossible to establish.
Thus, many questions exist: How often are native-born American
citizens pushed to the sidelines by "cheap" immigrant labor? When do
immigrants do the dirty work that employers need done in foodservice,
agriculture, and landscaping/construction? What is the ratio of
immigrants who put a load on municipal services such as schools and
emergency rooms, compared to people who may lack a passport but pay
taxes and spend money where they earn it? There is simply no way to
tell for sure.
In the pursuit of tight borders, however, current immigration policy
seems to be categorically turning away potential contributors to
economic strength: as the rules stand, tens of thousands of
international students who attend U.S. universities cannot compete for
jobs here. Robert Guest, an editor at the Economist, wrote a piece in
the Wall Street Journal last week (12/21/11) in which he compared
sending away international graduates to "Saudi Arabia setting fire to
its oil wells." New York Mayor Michael Bloomberg (who won election as
a Republican and knows something about entrepreneurship) calls the
practice "national suicide."
Speaking of fire, the flame war related to the Guest article burns hot
on the WSJ site: software programmers who lost their jobs to foreign
workers have their story to tell, as do employers who can't fill
technical jobs, or who, once they find a productive contributor, must
do battle with an extremely difficult bureaucratic process to get a
visa. Once again, the situation is so complicated and vast that any
single person's perspective is by definition limited.
I have no way of knowing how often H-1B visa holders displace
native-born candidates, nor how often they fill a gap for which
qualified applicants are scarce. My belief (following the argument of
my one-time co-author Erik Brynjolfsson at MIT) is that we are in the
midst of a structural shift in the global economy, one that produces
both labor shortages in technical jobs and high unemployment in
old-school manufacturing and other sectors. In conversations with both
job-seeking students and candidate-seeking employers, I often hear
that there is a skills shortage: part of the unemployment problem may
be that employers are less willing to hire generalists (read, "liberal
arts graduates") in the era of ERP, social media, algorithmic
decision-making, and global supply chains. The structural-shift
argument would also explain the rapid obsolescence of many 40- and
50-somethings' resumes.
My purpose here is not to try to adjudicate message boards or
presidential campaigns, but to argue from history: immigrant
entrepreneurs have helped make America great, and have created
literally millions of jobs. This is not a recent phenomenon, but the
changing makeup of the entrepreneurs reflects the relative openness or
closure of U.S. borders over the centuries, as well as changing patterns
of migration: the relationship between who wants in and how welcoming
the U.S. border is has proven to be emotional and complicated.
Here is the beginning of a list of relevant examples, building on a
post from three years ago. Time after time, immigrant entrepreneurs
have altered the course of business history:
-Andrew Carnegie was famously Scottish. His fortune, about $300
billion in 2007 dollars, funded a wide range of philanthropies that
exerted substantial cultural influence long after his death.
-Alexander Graham Bell, also born in Scotland, invented or seriously
experimented in the fields of telecommunications, aviation, magnetic
storage, desalination of seawater, and even solar cells. He was a true
giant in human history.
-Henry Ford was the son of an Irish immigrant. His one-time business
partner Thomas Edison, who founded 14 companies including GE, was the
son of a Canadian immigrant.
-Ray Kroc, who grew McDonalds into a global giant, was the son of
Czech immigrants. He was enterprising from an early age, to the point
of talking his way into driving ambulances in World War I at 15 years
of age.
-Walt Disney's father left Canada to try to find gold in California;
Walt was born in Chicago.
-Steven Udvar-Hazy fled Hungary after the Soviets occupied his home
country. He founded one of the world's two biggest lessors of
commercial aircraft, International Lease Finance Corporation.
-An Wang, founder of the computer company of that name, was born in
Shanghai. At one point his firm employed 30,000 people.
-Amar Bose was born in Philadelphia to an immigrant fleeing pressure
from the British for his political activities on behalf of Bengali
liberation. The loudspeaker company was founded in 1964 as a sideline
to his professorship at MIT and now ranks among the three most trusted
U.S. electronics brands, according to Forrester Research. Bose employs
more than 9,000 people.
-Sidney Harman, born in Quebec, teamed with his boss Bernard Kardon to
make the first integrated hi-fi receiver; their eponymous company was
founded in 1953. Nearly 60 years later, Harman International employed
about 10,000 people. Harman himself was a fascinating person, devoted
to the arts and learning, and he bought Newsweek for $1 in 2010 from
the Washington Post Company.
-Vinod Khosla emigrated from India after university and co-founded
Sun Microsystems at the age of 27. After leaving Sun relatively
quickly, he has spent most of the last three decades as a venture
investor.
-Jeff Bezos founded Amazon after working at a hedge fund following his
undergraduate studies in electrical engineering at Princeton. The
immigrant connection? His adoptive father left Cuba.
-Pierre Omidyar was born in Paris to Iranian immigrants; his mother
holds a PhD from the Sorbonne and his father was a surgeon. After
growing up in Washington, DC and attending Tufts University, Omidyar
wrote the base code for eBay over a long weekend.
-Yahoo! co-founder Jerry Yang was born in Taipei and grew up in San
Jose, raised by a single mother after his father died.
-Google co-founder Sergey Brin's family emigrated from Russia after he
was born. His father is a math professor and his mother is, literally,
a rocket scientist at NASA.
-Elon Musk left his native South Africa at 17, wanting to head to the
U.S. because "It is where great things are possible." By the age of 40
Musk had co-founded PayPal, SpaceX, and Tesla Motors.
-Tony Hsieh was born to Taiwanese immigrants living in California. He
sold his first company to Microsoft for $265 million when he was 24
before co-founding a venture firm that backed Zappos, where he later
became CEO.
-Steve Jobs' biological father was Syrian, though he was adopted by
the Jobs family at birth. The company he co-founded currently employs
60,000 people, who generate a staggering $420,000 of profit (not
revenue) apiece.
These people are mostly household names, but the pattern also holds
in almost any community: immigrants are frequently the risk-takers who
launch restaurants, convenience stores, dry cleaners, lawn care
operations, and other ventures. Immigration is central to the story
of the United States, and figuring out how to do it right in the 21st
century is both critically important and politically loaded to the
point where rational debate is impossible: nobody knew for sure
whether candidate Herman Cain's proposal for an electrified fence on
the Mexican border was a joke or not.
Open borders are not an option for any sovereign nation of significant
size. At the same time, periods of intense nativism in U.S. history
have spurred extremely restrictive policies. As history should show
conclusively, however, the cost of barring potential entrepreneurs is
high. In a global economy that runs on ideas and talent, former
Citibank CEO Walter Wriston's comment about money -- "Capital goes
where it's wanted, and stays where it's well treated" -- holds true of
brainpower as well.
As the U.S. becomes less hospitable, formally and perhaps informally,
these bright, motivated individuals are being courted by New Zealand,
Israel, and Canada, not to mention their native countries.
Atlassian Software (it specializes in developer tools) is
in Sydney; Weta Workshop generates world-class special effects for New
Zealand native Peter Jackson's movies at its facility in Wellington.
Skype's software was written in Estonia. Spotify's R&D operations are
in Stockholm. Vancouver is home to dozens of tech and software
companies, including the motion-capture studio for Electronic Arts. In
the global race for the next generation of startups, the U.S. has a
privileged position, but that status as a preferred destination is in
danger of being diminished as collateral damage in a debate with other
protagonists, motivations, and constituencies. We can only hope that
some good sense comes into play alongside the fear, stereotypes, and
lack of solid data that currently characterize the issue.
Wednesday, November 30, 2011
November 2011 Early Indications: The State of Online Music
The past 20 years have been tumultuous, to say the least, for the recording industry. From a time when old formats (the LP and cassette) were declared dead and the compact disc was essentially the only way to buy music, through Napster, then Rhapsody, iTunes, and MySpace Music, listeners now confront a confusing array of options that raise some basic questions:
*What does it mean to "buy" a song? Or to "own" it without paying?
*How are artists discovered by publishers?
*How do listeners discover new music?
*Who pays, for what, and who makes money?
*Most fundamentally, is music a standalone industry or will it become a feature of a larger "content" business?
While much remains uncertain, six facts about the state of online music can be asserted.
1) The state of online music is puzzling.
Any venture capitalist will ask a few basic questions of a new venture, one of them being the size of the addressable market. According to the recording industry's trade association, which values CDs at retail rather than at wholesale or purchase price, the total value of the U.S. music industry was only $6.8 billion in 2010, a 10.9% decline from the previous year. Just this year, Citigroup essentially repossessed EMI, the #4 label, from the private equity firm to which it lent money for its purchase, and recently sold part to a Sony-led consortium and part to Vivendi's Universal label; Citi took a loss of more than $1 billion on the deal, as best I can tell. Why are so many players fighting for a piece of a shrinking industry that is unprofitable for many?
2) The state of online music is crowded.
According to Wikipedia, there are 10 download sites with more than 10 million songs; nobody has more than Amazon's 18 million (iTunes is at 14):
*7digital
*Amazon MP3
*artistxsite
*eMusic
*Fairsharemusic
*Apple iTunes
*Napster
*Rhapsody
*Spotify
*Microsoft Zune
At the same time, there are 14 sites with more than 5 million tracks available for streaming, with Grooveshark the leader at 22 million:
*8tracks
*Deezer
*Grooveshark
*last.fm
*Mflow
*MOG
*Qriocity
*Rdio
*Rhapsody
*Simfy
*Spotify
*we7
*WiMP
*Zune
Also worth mentioning as a digital music resource is YouTube, for which music numbers are not readily broken out. The most-watched videos of all time, however, tilt heavily to music: how Justin Bieber could have 666 million views (the published number, not hyperbole from your cynical narrator) without click fraud is an open question. Finally, Internet radio is alive even after being hit with extremely high license fees. (One estimate I saw suggested that a station with 2,000 listeners playing 15 songs an hour would face $185,000 a year in fees.) Live365 coordinates more than 5,000 stations from more than 150 countries.
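As a rough sanity check on that license-fee estimate, here is a back-of-envelope sketch in Python. The per-performance royalty rate below is an assumption chosen to make the arithmetic concrete, not a quote of the actual statutory webcasting rate:

    # Back-of-envelope webcasting royalty estimate (illustrative only).
    listeners = 2_000               # average simultaneous listeners
    songs_per_hour = 15             # songs streamed per listener-hour
    hours_per_year = 24 * 365
    rate_per_performance = 0.0007   # assumed royalty per song per listener, in dollars

    performances = listeners * songs_per_hour * hours_per_year
    annual_fees = performances * rate_per_performance
    print(f"{performances:,} performances -> ${annual_fees:,.0f} per year")
    # About 262,800,000 performances, or roughly $184,000 a year --
    # in the neighborhood of the $185,000 figure cited above.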
3) The state of online music is increasingly social.
If one takes a long view, music has traversed a complicated arc. Originally performed for massed audiences by musicians sometimes backed by considerable cultural infrastructure (symphony orchestras and opera companies, not to mention large church bodies for organists and choirs), or performed by amateur town bands and similar groups, music shrank in the 20th century: singer-songwriters flourished, and consumer electronics and recording technologies made audiences of one common. The apex of solitary listening was probably iTunes on the iPod, with the iconic white earbuds. In this vein, with expensive celebrity models like Beats now fashionable, U.S. headphone sales nearly doubled in the past year, to $2 billion.
After the demise of MySpace, a great tool for music sharing and distribution if not much else, other social tools are emerging. Blackberry, Google, Grooveshark, Last.fm (owned by CBS), and other services connect people to other fans, to concerts, and to artists. Shared listening services such as turntable.fm reconvene groups of people to play music for, and listen to, each other. Other startups of note include Flowd, a social network for music much like Apple's Ping. Thus music has gone from being physically communal, to private, to something both solitary and electronically (sometimes asynchronously) communal.
4) The state of online music is multi-platform.
In the "old" days of Napster or iTunes, one could store ripped CDs or MP3 files on a computer hard drive. iTunes and Zune could sync the files to a portable storage device (iPod or Microsoft player). Now, the complexity of the landscape is begetting both new solutions and consumer confusion. Some streaming services like Spotify have mobile apps that allow offline and/or streamed listening away from the PC, and Spotify just announced an app store platform -- a great idea. Given the complexity of the smartphone world, however, this means building and updating a series of apps for Windows, Apple, RIM, and Android, along with potentially Palm's webOS (if HP sells it off) and Samsung's Bada.
At the same time, PCs and smartphones are being joined by hard-drive-based home entertainment solutions for music and video. Some TVs, meanwhile, are USB and wi-fi compatible. Finally, depending on the model, Blu-ray players can play multiple types of discs, or support games, or stream networked content. Thus the home entertainment and mobile entertainment worlds are recombining, with several billion-dollar markets up for grabs. Google tried an Internet-augmented TV; Apple is rumored to have one in the works. Confusingly, Apple TV isn't a TV at all, but a universal media streamer without storage. Western Digital, for its part, offers a hard-disk solution for music, videos, and movies that also supports streaming. The Olive and Sonos systems offer multi-room music playback, controlled by an iOS or Android device.
While MP3 is widely adopted, more and more higher-than-CD bitrate files are becoming available to enthusiasts. At the very expensive end of the market, high-resolution music files from the likes of UK firms Linn and Bowers & Wilkins, or Reference Recordings and HDtracks in the U.S., can be played through megabuck hard-drive (as opposed to silver-disc) systems from the likes of Linn, Meridian, Ayre, Berkeley Audio, and custom PC-based platforms.
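To put "higher-than-CD bitrate" in concrete terms, here is a small sketch of the arithmetic for uncompressed stereo PCM audio; the 24-bit formats shown are common high-resolution examples, not a claim about any particular label's catalog:

    # Uncompressed PCM bitrate = sample rate x bit depth x channels.
    def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
        return sample_rate_hz * bit_depth * channels / 1000

    print(pcm_kbps(44_100, 16))    # CD audio: about 1,411 kbps
    print(pcm_kbps(96_000, 24))    # typical "high-res" download: 4,608 kbps
    print(pcm_kbps(192_000, 24))   # studio-master-grade files: 9,216 kbps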
Thus the "dematerialization" process continues: from owning physical polycarbonate, to storing files on one's hard drive, now to cloud-resident lockers, one's entertainment collection is getting more and more ethereal. Those cloud-based bits are also getting more and more agnostic about their playback/display device.
5) The state of online music is geographically determined.
A quick look at the streaming services reveals considerable geographic specificity. While some services are worldwide, others are available only in one or two countries. Combine this rights-related limitation with the question of mobile platforms, and some degree of fragmentation becomes inevitable: uptake of different phone brands and types, recording artists, and payment systems varies widely. Unlike the compact disc, which was the universal medium for music (as product) distribution, online music might turn out to exhibit some degree of speciation.
6) The state of online music is potentially irrelevant.
This could be misinterpreted: I would be the last person to say music doesn't matter. But music as a separate category may not stand alone much longer. Digital content types continue to converge (Amazon wants to hold my book bits, and my music bits, and my video bits on the server-in-the-sky; YouTube bridges music, TV, instructional videos, and other genres). Digital playback hardware diverges: in my house alone, I could play music on more than a dozen devices, many of which do lots of other things too.
Thus the proliferation of music file types, playback modes and devices, and available artists (a classic long-tail story) means that online music is diverging into more and more sub-markets: not only do music genres proliferate, but so do listening patterns. A Pandora user at a desktop PC in Canada will have relatively little in common with a YouTube viewer in Brazil or a Spotify mobile user in Sweden or a high-resolution Society of Sound member in the UK. All are listening to digital music files via the Internet, but the social layers, business models, hardware requirements, and connection mechanics are considerably different.
For the present, confusion reigns as niche players and mega-players compete to get big fast. One prediction: while free content will continue to be an option, the forces pushing paid content are not going away, and I would wager millions of people will at some point buy music they already own in yet another format: iTunes will not be forever, I don't think. A second prediction: the limits of ad-supported business models are becoming apparent. Advertising works for TV, for sure, but not for movies, not to mention books (though it's fine for magazines). Where music will fall on the new continuum is anybody's guess as experiments like Nokia's now-dead Comes with Music (cell phone subscriptions that included pre-paid rights payments) and Radiohead's name-your-price release continue.
Who will win? It's hard to bet against the Goliaths: rights fees are expensive, and music as a content type can increasingly be viewed as a feature of a larger bit-management regime, whether iTunes or Amazon's cloud or the Google/Android/YouTube mediaplex. As much as people love Spotify, or Pandora, or legal Napster, or the Internet Archive's amazing live-music treasure chest, it's hard to see any of these ever expanding beyond niche status alongside Disney, Sony, or other statutorily privileged content conglomerates. Outside the reach of said statutes, it's a pretty unappealing landscape for content owners, which means there might be innovation -- or merely more unpaid downloads. The wild card is in the social layer: music and friends will always travel well together, so the Facebook music partnerships may well change the landscape. In any case, it's unlikely to be either the hardware or carriage companies that set the tone: Apple is now a content company, essentially using high-margin hardware as "smart pipes" to carry bits of personal meaning, and along with Google and Amazon, likely to set the music agenda for the next 5-10 years.
Monday, October 31, 2011
October 2011 Early Indications: Heroes
Watching the public reaction to the death of Steve Jobs has been a
fascinating exercise. The flowers, poems, tributes, and now biography
sales are certainly unprecedented in the infotech sector, and in
popular culture more generally. How many deaths have moved people this
way? Michael Jackson? Kurt Cobain? Elvis? Each of these people was in
some way reaping the harvest of celebrity, and all were entertainers.
John Lennon might be the closest precedent to Jobs. Martin Luther
King was the last American martyr, but this phenomenon isn't in that
territory.
Jobs meant more to people than mere celebrity. Here is a random
tribute I pulled from an Irish blog:
"I never met Steve, but he meant a lot to anyone and everyone in the
technology community, and he was an idol of mine. The Apple chairman
and former CEO who made personal computers, smartphones, tablets, and
digital animation mass-market products passed away today. We're going
to miss him, deeply."
What led people to this kind of personal identification with "Steve"?
Perhaps it was the beauty of Apple products? But Jobs did not design
anything: he was instead a frequently tyrannical boss who extracted
the best from his underlings, albeit at a cost. Great -- really great
-- architects and industrial designers have come and gone, and even
today only Apple fanboys know of Jony Ive, who is our latter-day
Teague or Saarinen.
Does the prematurity of his death explain the extremity of the
reaction? Maybe partially.
Was it his philanthropy? No. Like many in Silicon Valley, Jobs was
not public about whatever charities he might have supported. There
was no Gates-like combination of focus and scale, no "think
differently" about his wealth's potential for the kind of social
change he admired as a young man, at least that we know of.
I think part of the fascination with Steve Jobs derived from his
extremes: throughout his adult life, Jobs had an extraordinary ability
to embody contradictions. The famous "reality distortion field"
surrounding the man from an early age meant that people suspended
disbelief in his presence, sometimes unwillingly. To take one example,
Apple never got hit with the same kind of "sweatshop" rhetoric that
was directed at Nike even though life in the Chinese assembly plants
was nasty, brutish, and often short: in 2010, 14 workers died in
suicides there and four more survived jumps. In response, a broad
system of nets was installed and the management of the dorms
improved, along with other changes. It's worth noting that the
factory suicide rate, while alarming, is lower than in either urban
or rural China more generally: the outsourced manufacturer Foxconn
employs a million workers, nearly half of whom work in one facility
in Shenzhen where Apple products (along with others) are assembled.
There were other contradictions: think differently while telling
customers to conform to Apple's dictates of what constitutes fashion.
Build luxury goods while citing Buddhism. The image of artful
rebellion coexisting with a rigidly locked-down computing environment,
particularly for paid content.
None of these factors is a sufficient predictor of the Princess
Diana-like personal identification. Instead, the outpouring of
feeling speaks to several things, I believe.
1) Blogs, social media, Facebook pages, and YouTube make
self-expression to potential multitudes easy and accessible. These
tools, descended from the scruffy, anarchic origins of the public web
rather than the clean appliance-like aesthetic of Apple products,
allow for mass outpourings we would never have seen on public-access
cable or letters to the editor.
2) Jobs personified salvation from the stupidity millions of people
felt in the face of DOS prompts, device drivers, dongles, funny-shaped
plugs that never matched, and other arcana of computing. Used to
feeling inferior before a beige or gray box that would not do what we
wanted, people liked feeling more in control of products that were
cute and friendly and that now, just after Jobs' death, talk to us
intelligently.
3) More broadly, perhaps, we have a shortage of heroes. Much like
Lennon, Jobs dared to imagine. As one commentator noted, envisioning
the future loosely connected Jobs to the saints and other religious
figures who truly had Visions. That connection does something to
explain the near-martyrdom that seemed to be shaping up in some
quarters. But even in purely secular terms, where are today's heroes?
Athletes? In part through the Internet and social media in
particular, we see these champions as sexual abusers, prima donnas, or
mercenaries whose multi-million dollar salaries remove them from the
pantheon. Super-sprinter Usain Bolt seems otherworldly, the product
of incredible gifts rather than perseverance and hard work to which
kids can relate. Pat Tillman, whose complex, troubling story will
probably never be fully told, is rapidly being relegated to footnote
status. Baseball is confronting the steroid era one Hall of Fame
candidate at a time, often awkwardly. Even worse than seeing athletes
as removed, perhaps, we also see them as human: Michael Phelps did
what millions of 20-somethings do after his successful transit of the
pressure-packed quest for gold in Beijing. Ignorant Tweets regularly
issue from the keyboards of the fast and gifted, pushing heroism
farther out of the picture.
Politicians? John McCain's story contains the elements of true
heroism, but he has the misfortune of living in a time of media-driven
political polarization, and his flavor of bipartisan citizenship is
out of fashion. Barack Obama did not inspire baby names in his honor
in 2009 the way John F. Kennedy seemed to, and he does not run for
re-election on a platform of traditionally Democratic accomplishments:
civil liberties, success in environmental protection, and a better
life for working people aren't looking promising. Instead, he can
claim foreign policy wins, usually Republican turf, particularly the
weakening of Al Qaeda. Among the Republicans, meanwhile, Mitt Romney
feels like a capable COO or maybe CEO, but falls far short of heroic
status, even within his party's faithful. More generally, public
approval of Congress is at historic lows.
What about business leaders? Only deep industry insiders know anything
about Ginni Rometty, recently named to run IBM. Meg Whitman joins
Rometty in an exclusive club of tech giant CEOs, but her California
political campaign (not to mention the titanic loss on eBay's Skype
deal) left her well short of heroic status. Outside of tech,
how many of these CEO names are familiar, or even close to iconic:
Frazier, Rosenfeld, Blankfein, Tillerson, Mulally, Iger, Pandit,
Oberhelman, or Roberts? (Their companies are, in order, Merck, Kraft,
Goldman Sachs, Exxon Mobil, Ford, Disney, Citi, Caterpillar, and
Comcast.) Warren Buffett may be the closest we get, but an object of
envy is not necessarily a hero.
Back to tech, the closest analogue I can think of to Steve Jobs in the
American imagination was Henry Ford, the person who made a liberating
new technology accessible to the masses. The great information
technology pioneers -- Shannon, Turing, von Neumann, Baran, Hopper --
are hardly widely known. Even our villains are scaled down: next to
a Theodore Vail at AT&T or a William Randolph Hearst, Larry Ellison
hardly registers. As for Page, Brin, and Zuckerberg, they're mildly
famous for being rich, for certain, but so little is known about the
innards of Google and Facebook that the person on the street can't
really say for sure who they are or what they did -- or how their
companies make money. Facebook has become social oxygen, privacy
settings or no privacy settings, joining Google as a monopoly utility
(our analogues of AT&T) of the moment.
In the end, then, Steve Jobs was an elite, often inegalitarian figure
whose ability to bring forth usable, even likable technology inspired
the false familiarity of celebrity. His products -- running Photoshop
as well as they did, contributing to the ubiquity of wi-fi, or putting
multimedia in people's pockets -- fittingly contributed to the many
layers of paradox surrounding an extraordinary leader, manager, and,
yes, visionary. Apart from education reformer Sal Khan or perhaps
Ratan Tata (whose runway to global fame is getting short), it's hard
to see anyone on the horizon ready to occupy similar real estate in
the public imagination.
Friday, September 30, 2011
September 2011 Early Indications: The Innovation Moment?
What follows is a selection from the opening to a book I'm completing. Recent tech-related news -- Apple's dominance, Amazon's alternative axis of competition, tablet and other woes at HP and Research in Motion, social media and social change all over the world, Anonymous, and sustained global un- and underemployment -- seems to reinforce the hypothesis that the rules of the game are in transition.
Thoughts and reactions are welcome, though they might not make it into the final product.
**********
Given the changes of the past 40 years—the personal computer, the Internet, GPS, cell phones, and smartphones—it’s not hyperbole to refer to a technological revolution. This book explores the consequences of this revolution, particularly but not exclusively for business. The overriding argument is straightforward:
1) Computing and communications technologies change how people view and understand the world, and how they relate to each other.
2) Not only the Internet but also such technologies as search, GPS, MP3 file compression, and general-purpose computing create substantial value for their users, often at low or zero cost. Online price comparison engines are an obvious example.
3) Even though they create enormous value for their users, however, those technologies do not create large numbers of jobs in western economies. At a time when manufacturing is receding in importance, information industries are not yet filling the gap in employment as economic theory would predict.
4) Reconciling these three traits will require major innovations going forward. New kinds of warfare and crime will require changes to law and behavior, the entire notion of privacy is in need of reinvention, and getting computers to generate millions of jobs may be the most pressing task of all. The tool kit of current technologies is an extremely rich resource.
Cognition
Let’s take a step back. Every major technological innovation of the past 300-plus years has augmented humanity’s domination over the physical world. Steam, electricity, internal combustion engines, and jet propulsion provided power. Industrial chemistry provided new fertilizers, dyes, and medicines. Steel, plastics, and other materials could be formed into skyscrapers, household and industrial items, and clothing. Mass production, line and staff organization, the limited liability corporation, and self service were among many managerial innovations that enhanced companies’ ability to organize resources and bring offerings to market.
The current revolution is different. Computing and communications augment not muscles but our brain and our sociability: rather than expanding control over the physical world, the Internet and the smartphone can combine to make people more informed and cognitively enhanced, if not wiser. Text messaging, Twitter, LinkedIn, and Facebook allow us to maintain both "strong" and "weak" social ties—each of which matters, albeit in different ways—in new ways and at new scales. Like every technology, the tools are value-neutral and also have a dark side. They can be used to exercise forms of control such as bullying, stalking, surveillance, and behavioral tracking. After about 30 years—the IBM PC launched in 1981—this revolution is still too new to reflect on very well, and of a different sort from its predecessors, making comparisons only minimally useful.
For a brief moment let us consider the "information" piece of "information technology," the trigger to that cognitive enhancement. Claude Shannon, the little-known patron saint of the information age, conceived of information mathematically; his fundamental insights gave rise to developments ranging from digital circuit design to the blackjack method popularized in the movie 21. Shannon made key discoveries, of obvious importance to cryptography but also to telephone engineering, concerning the mathematical relationships between signals and noise. He also disconnected information as it would be understood in the computer age from human uses of it: meaning was "irrelevant to the engineering problem." This tension between information as engineers see it and information that people generate and absorb is one of the defining dynamics of the era. It is expressed in the Facebook privacy debate, Google’s treatment of copyrighted texts, and even hedge funds that mine Twitter data and invest accordingly. Equally important, however, these technologies allow groups to form that can collectively create meaning; the editorial backstory behind every Wikipedia entry, collected with as much rigor as the entry itself, stands as an unprecedented history of meaning-making.
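To give a concrete sense of what "conceiving of information mathematically" means, consider Shannon's entropy measure: a source that emits symbols with probabilities p_1, ..., p_n carries H = -(p_1 log2 p_1 + ... + p_n log2 p_n) bits of information per symbol. A fair coin flip carries exactly one bit; a heavily biased coin carries far less. Nothing in the formula refers to what the symbols mean, which is precisely the disconnection described above.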
The information revolution has several important side effects. First, it stresses a nation’s education system: unlike 20th-century factories, many information-driven jobs require higher skills than many members of the work force can demonstrate. Finland’s leadership positions in education and high technology are related. Second, the benefits of information flow disproportionately to people who are in a position to understand information. As the economist Tyler Cowen points out, "a lot of the internet’s biggest benefits are distributed in proportion to our cognitive abilities to exploit them." This observation is true at the individual and collective level. Hence India, with a strong technical university system, has been able to capitalize on the past 20 years in ways that its neighbor Pakistan has not.
Innovation
Much more tangibly, this revolution is different in another regard: it has yet to generate very many jobs, particularly in first-world markets. In a way, it may be becoming clear that there is no free lunch. The Internet has created substantial value for consumers: free music, both illegal and now legal. Free news and other information such as weather. Free search engines. Price transparency. Self-service travel reservations and check-in, stock trades, and driver’s license renewals. But the massive consumer surplus created by the Internet comes at some cost: of jobs, shareholder dividends, and tax revenues formerly paid by winners in less efficient markets.
In contrast to a broad economic ecosystem created by the automobile—repair shops, drive-in and drive-through restaurants, road-builders, parking lots, dealerships, parts suppliers, and final assembly plants—the headcount at the core of the information industry is strikingly small and doesn’t extend out very far. Apple, the most valuable company by market capitalization in the world in 2011, employs roughly 50,000 people, more than half of whom work in the retail operation. Compare Apple’s 25,000 non-retail workers to the industrial era, when headcounts at IBM, General Motors, and General Electric all topped 400,000 at one time or another. In addition, the jobs that are created are in a very narrow window of technical and managerial skill. Contrast the hiring at Microsoft or Facebook to the automobile, which in addition to the best and the brightest could also give jobs to semi-skilled laborers, tollbooth collectors, used-car salesmen, and low-level managers. That reality of small workforces (along with outsourcing deals and offshore contract manufacturing), high skill requirements, and the frequent need for extensive education may become another legacy of the information age.
Thoughts and reactions are welcome, though they might not make it into the final product.
**********
Given the changes of the past 40 years—the personal computer, the Internet, GPS, cell phones, and smartphones—it’s not hyperbole to refer to a technological revolution. This book explores the consequences of this revolution, particularly but not exclusively for business. The overriding argument is straightforward:
1) Computing and communications technologies change how people view and understand the world, and how they relate to each other.
2) Not only the Internet but also such technologies as search, GPS, MP3 file compression, and general-purpose computing create substantial value for their users, often at low or zero cost. Online price comparison engines are an obvious example.
3) Even though they create enormous value for their users, however, those technologies do not create large numbers of jobs in western economies. At a time when manufacturing is receding in importance, information industries are not yet filling the gap in employment as economic theory would predict.
4) Reconciling these three traits will require major innovations going forward. New kinds of warfare and crime will require changes to law and behavior, the entire notion of privacy is in need of reinvention, and getting computers to generate millions of jobs may be the most pressing task of all. The tool kit of current technologies is an extremely rich resource.
Cognition
Let’s take a step back. The technological innovations of the past 300-plus years augmented humanity’s domination of the physical world. Steam, electricity, internal combustion engines, and jet propulsion provided power. Industrial chemistry provided new fertilizers, dyes, and medicines. Steel, plastics, and other materials could be formed into skyscrapers, household and industrial items, and clothing. Mass production, line and staff organization, the limited liability corporation, and self-service were among many managerial innovations that enhanced companies’ ability to organize resources and bring offerings to market.
The current revolution is different. Computing and communications augment not muscles but our brains and our sociability: rather than expanding control over the physical world, the Internet and the smartphone combine to make people more informed and cognitively enhanced, if not wiser. Text messaging, Twitter, LinkedIn, and Facebook allow us to maintain both "strong" and "weak" social ties—each of which matters, albeit differently—at new scales and in new forms. Like every technology, these tools are value-neutral, which means they also have a dark side: they can be used to exercise forms of control such as bullying, stalking, surveillance, and behavioral tracking. After only about 30 years—the IBM PC launched in 1981—this revolution is still too new to assess with much perspective, and it differs enough from its predecessors that historical comparisons are of limited use.
For a brief moment let us consider the "information" piece of "information technology," the trigger to that cognitive enhancement. Claude Shannon, the little-known patron saint of the information age, conceived of information mathematically; his fundamental insights gave rise to developments ranging from digital circuit design to the blackjack method popularized in the movie 21. Shannon made key discoveries, of obvious importance to cryptography but also to telephone engineering, concerning the mathematical relationships between signals and noise. He also disconnected information as it would be understood in the computer age from human uses of it: meaning was "irrelevant to the engineering problem." This tension between information as engineers see it and information that people generate and absorb is one of the defining dynamics of the era. It is expressed in the Facebook privacy debate, Google’s treatment of copyrighted texts, and even hedge funds that mine Twitter data and invest accordingly. Equally important, however, these technologies allow groups to form that can collectively create meaning; the editorial backstory behind every Wikipedia entry, collected with as much rigor as the entry itself, stands as an unprecedented history of meaning-making.
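To make Shannon's separation of information from meaning concrete, here is a minimal sketch (mine, not from the text above) using the standard entropy formula: the measure depends only on how often symbols occur, never on what they say, so a sentence and its reversal carry exactly the same number of bits.

```python
# A minimal sketch of Shannon's point that meaning is "irrelevant to the
# engineering problem": entropy depends only on symbol frequencies.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(text):
    """Average bits per character under the standard Shannon formula."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

meaningful = "the quick brown fox jumps over the lazy dog"
scrambled = meaningful[::-1]  # same characters, meaning destroyed

print(entropy_bits_per_symbol(meaningful))  # identical values: the
print(entropy_bits_per_symbol(scrambled))   # engineering measure is unmoved
```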
The information revolution has several important side effects. First, it stresses a nation’s education system: unlike 20th-century factories, many information-driven jobs require higher skills than many members of the work force can demonstrate. Finland’s leadership positions in education and high technology are related. Second, the benefits of information flow disproportionately to people who are in a position to understand information. As the economist Tyler Cowen points out, "a lot of the internet’s biggest benefits are distributed in proportion to our cognitive abilities to exploit them." This observation is true at the individual and collective level. Hence India, with a strong technical university system, has been able to capitalize on the past 20 years in ways that its neighbor Pakistan has not.
Innovation
More tangibly, this revolution differs in another regard: it has yet to generate many jobs, particularly in first-world markets. There may be, in other words, no free lunch. The Internet has created substantial value for consumers: free music, both illegal and now legal. Free news and other information such as weather. Free search engines. Price transparency. Self-service travel reservations and check-in, stock trades, and driver’s license renewals. But the massive consumer surplus created by the Internet comes at some cost: in jobs, shareholder dividends, and tax revenues formerly paid by winners in less efficient markets.
In contrast to a broad economic ecosystem created by the automobile—repair shops, drive-in and drive-through restaurants, road-builders, parking lots, dealerships, parts suppliers, and final assembly plants—the headcount at the core of the information industry is strikingly small and doesn’t extend out very far. Apple, the most valuable company by market capitalization in the world in 2011, employs roughly 50,000 people, more than half of whom work in the retail operation. Compare Apple’s 25,000 non-retail workers to the industrial era, when headcounts at IBM, General Motors, and General Electric all topped 400,000 at one time or another. In addition, the jobs that are created are in a very narrow window of technical and managerial skill. Contrast the hiring at Microsoft or Facebook to the automobile, which in addition to the best and the brightest could also give jobs to semi-skilled laborers, tollbooth collectors, used-car salesmen, and low-level managers. That reality of small workforces (along with outsourcing deals and offshore contract manufacturing), high skill requirements, and the frequent need for extensive education may become another legacy of the information age.
In the past 50 years, computers have become ubiquitous in American businesses, and in many global ones. They have contributed to increases in efficiency and productivity through a wide variety of mechanisms, whether self-service websites, ATMs, or gas pumps; improved decision-making supported by data analysis and planning software; or robotics on assembly lines. The challenge now is to move beyond optimization of known processes. In order to generate new jobs—most of the old ones aren’t coming back—the economy needs to use computing and communications resources to do new things: cure suffering and disease with new approaches, teach with new pedagogy, and create new forms of value. Rather than optimization, in short, the technology revolution demands breakthroughs in innovation, which as we will see is concerned with more than just patents.
There are of course winners in the business arena. But in the long run, the companies that can operate at a sufficiently high level of innovation and efficiency to win in brutally transparent and/or low-margin markets are a minority: Amazon, Apple, Caterpillar, eBay, Facebook, and Google are familiar names on a reasonably short list. Even Dell, HP, Microsoft, and Yahoo, leaders just a few years ago, are struggling to regain competitive swagger. Others of yesterday’s leaders have tumbled from the top rank: Merrill Lynch was bought; GM and Chrysler each declared bankruptcy. Arthur Andersen, Lehman Brothers, and Nortel are gone completely. How could decline happen so quickly?
Consider Dell, which achieved industry leadership in the 1990s through optimization of inventory control, demand creation, and the matching of the two. The 2000s have treated the company less well. Apple, which like Dell boasts extremely high levels of supply chain performance, has separated itself from the PC industry through its relentless innovation. Seeing Apple pull away with the stunning success of the iPhone, Google in turn mobilized the Android smartphone platform through a different, but similarly effective, series of technical and organizational innovations. In contrast to Apple and Google, optimizers like Dell are suffering, and unsuccessful innovators including Nokia are making desperate attempts to compete. Successful innovation is no longer a better mousetrap, however: the biggest winners are the companies that can innovate at the level of systems, or platforms. Amazon's repeated innovations, many of which came as stunning surprises, illustrate a profound understanding of this truth. At the same time, Microsoft's efforts to shift from the PC platform onto something new have met with mixed success: the Xbox has done well in a small market, while the results of the Nokia mobile bet will obviously be a top story for the coming year.
Given our place in the history of technology, it appears that structural changes to work and economics are occurring. To set some context, consider how mechanization changed American agriculture after 1900. Fewer people were needed to till the land, leading to increased farm size and migration of spare laborers to cities. Manufacturing replaced agriculture at the core of the economy. Beginning in 1960, computers helped optimize manufacturing. Coincident with the rise of enterprise and then personal computing, services replaced manufacturing as the main employer and value generator in the U.S. economy. In short, innovation could be to information what mechanization was to agriculture, the agent of its marginalization and the gateway to a new economic era.
How information technology relates to this shift from manufacturing to services and, potentially, a new wave of innovation is still not well understood; to take one example, as Michael Mandel argued in Bloomberg Businessweek, a shortfall of innovation helps explain the misplaced optimism that contributed to the financial crises of recent years. But rather than merely incant that "innovation is good," I believe that the structure of economic history has certain limits, and computers’ propensity for optimization may be encountering one such limit. It takes people to innovate, however, and identifying both the need and the capabilities and resources necessary for them to do so may be a partial path out of the structural economic stagnation in which we find ourselves.
Tuesday, August 23, 2011
Early Indications August 2011: Paradoxical Productivity
There are some structural issues with our economy, where a lot of
businesses have learned to become much more efficient with a lot fewer
workers. You see it when you go to a bank and you use an ATM; you
don't go to a bank teller.
-President Barack Obama, NBC News, June 14 (?) 2011
The debate over the relationship between automating technologies and
unemployment is not new; Adam Smith's famous example of pin-making
goes back to 1776. Things get messier when we try to understand
services productivity: the ATM does not simply replicate the pin
factory or any other industrial scenario. Harder still is quantifying
the specific contribution of information technology to productivity,
and thus to the current unemployment picture. Nevertheless, the
question is worth considering closely insofar as multiple shifts are
coinciding, making job-seeking, managing, investing, and policy
formulation difficult, at best, in these challenging times.
I. Classic productivity definitions
At the most basic level, a nation's economic output is divided by the
number of workers or the number of hours worked. This model is
obviously rough, and has two major implications. First, investment
(whether in better machinery or elsewhere) does not necessarily map to
hours worked. Second, unemployment should drive this measure of
productivity up, all other things being equal, merely as a matter of
shrinking the denominator in the fraction: fewer workers producing the
same level of output are intuitively more productive. Unemployment,
however, is not free.
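A toy calculation, with made-up numbers, shows that denominator effect: if output holds steady while hours worked shrink, measured labor productivity rises even though nothing was produced more cleverly.

```python
def labor_productivity(output, hours_worked):
    """The crude measure described above: output divided by hours worked."""
    return output / hours_worked

# Hypothetical economy: identical output before and after layoffs.
before = labor_productivity(output=1_000_000, hours_worked=50_000)
after = labor_productivity(output=1_000_000, hours_worked=45_000)

print(f"before layoffs: {before:.1f} units per hour")  # 20.0
print(f"after layoffs:  {after:.1f} units per hour")   # 22.2 -- "productivity" rose
```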
A more sophisticated metric is called total factor productivity, or
TFP. This indicator attempts to track how efficiently both labor and
capital are being utilized. It is calculated as a residual, by
looking at hours worked (or some variant thereof) and capital stock
(summarizing a nation's balance sheet, as it were, to tally up the
things that can produce other things of value for sale). Any rise in
economic output not explained by labor or capital inputs is counted as
improved productivity. The problem here is that measuring productive
capital at any level of scale is extremely difficult.
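Since TFP is calculated as a residual, a short growth-accounting sketch makes the mechanics visible. It assumes the standard textbook Cobb-Douglas treatment (not spelled out above), in which whatever output growth is left over after weighting capital and labor growth gets attributed to productivity.

```python
# A growth-accounting sketch under the usual Cobb-Douglas assumption; the
# capital share (alpha) is assumed rather than measured, which is part of
# why TFP is hard to pin down.

def tfp_growth(output_growth, capital_growth, labor_growth, capital_share=0.3):
    """Solow-style residual: gY - alpha*gK - (1 - alpha)*gL."""
    return (output_growth
            - capital_share * capital_growth
            - (1 - capital_share) * labor_growth)

# Hypothetical year: 3% output growth, 4% capital growth, 1% labor growth.
print(f"TFP growth: {tfp_growth(0.03, 0.04, 0.01):.1%}")  # about 1.1%
```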
TFP, while being hard to pin down, does have advantages. One strength
of TFP is in its emphasis on innovation. In theory, if inventors and
innovators are granted monopolies (through patents), their investment
in new technologies can be recouped as competitors are prevented from
copying the innovation. Skilled labor is an important ingredient in
this process: commercialization is much more difficult if the
workforce cannot perform the necessary functions to bring new ideas
and products to market.
II. Services productivity
As The Economist points out, quoting Fast Company from 2004, ATMs did
not displace bank tellers. Instead, the rise of self-service
coincided with a broad expansion of bank functions and an aging (and
growing) American population: baby boomers started needing lots of car
loans, and home mortgages, and tuition loans, starting in about 1970
when the first boomers turned 25. All those financial services
required people to deliver:
1985: 60,000 ATMs; 485,000 bank tellers
2002: 352,000 ATMs; 527,000 bank tellers
That a technology advance coincided with a shift in the banking market
tells us little about productivity. Did ATMs, or word processors, or
Blackberries increase output per unit of input? Nobody knows: the
output of a bank teller, or nurse, or college professor, is
notoriously hard to measure. Even at the aggregate level, the
measurement problem is significant. Is a bank's output merely the sum
of its teller transactions? Maybe. Some economists argue instead that
a bank should be measured by its balances of loans and deposits.
A key concept in macroeconomics concerns intermediate goods: raw
material purchases, work in process inventory, and the like. Very few
services were included in these calculations, airplane tickets and
telephone bills being exceptions. As of the early 1990s, ad agencies
and computer programming were not included. Thus the problem is
two-fold: services inputs to certain facets of the economy were not
counted, and the output of systems integrators like Accenture or Tata
Consulting Services or advertising firms such as Publicis or WPP is
intuitively very difficult to count in any consistent or meaningful
manner.
III. Services productivity and information technology
In the mid-1990s a number of prominent economists pointed to roughly
three decades of investment in computers along with the related
software and services, and asked for statistical evidence of IT's
improvement of productivity, particularly in the period between 1974
and 1994, when overall productivity stagnated.
Those years coincided with the steep decline in manufacturing's
contribution to the U.S. economy, and measuring the productivity of an
individual office worker is difficult (as in a performance review),
not to mention millions of such workers in the aggregate. Services
are especially sensitive to labor inputs: low student-faculty ratios
are usually thought to represent quality teaching, not inefficiency.
As the economist William Baumol noted, a string quartet must still be
played by four musicians; there has been zero increase in productivity
in the roughly 250 years since the form emerged.
The late 1990s were marked by the Internet stock market bubble, heavy
investment by large firms in enterprise software packages, and
business process "reengineering." Alongside these developments,
productivity spiked: manufacturing sectors improved an average of 2.3%
annually, but services did even better, at 2.6%. In hotels, however,
the effect was less pronounced, possibly reflecting Baumol's "disease"
in which high-quality service is associated with high labor content.
Unfortunately, health care is another component in the services sector
marked by low productivity growth, and, until recently, relatively low
innovation in the use of information technologies. Measuring the
productivity of such a vast, inefficiently organized sector with
intangible outputs is inherently difficult, so it will be hard to
assess the impact of self-care, for example: people who research
their back spasms on the Internet, try some exercises or heating
pads, and avoid a trip to a physician. Such behavior should improve
the productivity of the doctor's office, but it can be counted only
in theory.
In a systematic review of the IT productivity paradox in the
mid-1990s, MIT economist Erik Brynjolfsson and his colleagues
investigated four explanations for the apparent contradiction.
Subsequent history suggests all four have merit:
1) Mismeasurement of outputs and inputs
Services industries (led by the financial sector) are among the
heaviest users of IT, and services outputs are hard to measure. As we
saw, productivity statistics in general are complex and not terribly
robust.
2) Lags due to learning and adjustment
This explanation has grown in influence in the past 15 years. To take
one common example, the organizational adjustment to a $50-100 million
enterprise software deployment takes years, by which time many other
factors will influence productivity: currency fluctuations, mergers or
acquisitions, broad economic recessions, and so on.
3) Redistribution and dissipation of profits
If a leading firm in a sector uses information effectively, it may
steal market share from less effective competitors. The sector at
large thus may not appear to gain in productivity. In addition,
a firm might use its technology investment for more effective
forecasting, say, rather than for using less labor in order
fulfillment. Only the latter registers as a productivity gain. A
laggard could thus show rising measured productivity while producing
the wrong items, even as the leader that sensed demand more
accurately captures the profits: the productivity statistics and the
business benefits part ways.
4) Mismanagement of information and technology
In the early years of computing, paper processes were automated but
the basic business design was left unchanged. In the 1990s, however,
such companies as Wal-Mart, Dell, Amazon, and Google invented entirely
new business processes and in some cases business models building on
IT. The revenue per employee at Amazon ($960,000) or Google ($1.2
million) is far higher than at Harley Davidson or Clorox (both are
leanness leaders in their respective categories at about $650,000).
"Mismanagement" sounds negative, but it is easy to see, as with every
past technology shift, that managers take decades to internalize the
capability of a new way of doing work before they can reinvent
commerce to exploit the new tools.
IV. IT and unemployment
Are we in a situation that parallels farming, when tractors reduced
the number of men and horses needed to work a given acreage? One way
to look at the question involves job losses by industry. Using Bureau
of Labor Statistics numbers from 2009, I compared the number of
layoffs and business closings to the total employment in the sector.
Not surprisingly, construction and manufacturing both lost in excess
of 10% of total jobs. It's hard to point to information technology as
a prime factor in either case: the credit crisis and China,
respectively, are much more likely explanations. Professional and
business services, an extremely broad category that includes
consultants among many other titles, shrank by 8% in one year.
Another analysis can come from looking at jobs that never
materialized. Measured against the BLS 10-year projections of job
growth from 2000, the computer and information industry moved in a
very different direction than economists predicted. Applications
programmers, for example, projected as a growth category, actually
grew only modestly. Desktop publishers shrank in numbers, possibly
because of
the rise of blogs and other web- rather than paper-based formats. The
population of customer service reps was projected to grow 32% in 10
years; the actual growth was about 10%, possibly reflecting a
combination of offshoring and self-service, both phone- and web-based.
The need for retail salespeople was thought to grow by 12%, but the
number stayed flat, and here is another example where IT, in the form
of the web and self-service, might play a role.
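A small sketch of that "jobs that never materialized" comparison, using only the approximate figures cited above (the underlying BLS tables are not reproduced here):

```python
# Projected vs. actual ten-year growth, approximate figures as cited above.
cited_growth = {
    "customer service reps": {"projected": 0.32, "actual": 0.10},
    "retail salespeople":    {"projected": 0.12, "actual": 0.00},
}

for job, g in cited_growth.items():
    shortfall = g["projected"] - g["actual"]
    print(f"{job}: projected {g['projected']:.0%}, "
          f"actual {g['actual']:.0%}, shortfall {shortfall:.0%}")
```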
Neither sales clerks nor customer service reps would constitute
anything like a backbone of a vibrant middle class: average annual
wages for retail are about $25,000 and customer service reps do
somewhat better, at nearly $33,000. The decline of middle-class jobs
is a complex phenomenon: information technology definitely automated
away the payroll clerk, formerly a reliably middle-class position in
many firms, to take one example. Auto industry union employment is
shrinking, on the other hand, in large part because of foreign
competition, not robotic armies displacing humans. That same
competitive pressure has taken a toll on the thick management layer in
Detroit as well, as the real estate market in the suburbs there can
testify. Those brand managers were not made obsolete by computers.
President Obama gets the (almost) last word. In a town hall meeting in
Illinois in mid-August, he returned to the ATM theme:
"One of the challenges in terms of rebuilding our economy is
businesses have gotten so efficient that—when was the last time
somebody went to a bank teller instead of using the ATM, or used a
travel agent instead of just going online? A lot of jobs that used to
be out there requiring people now have become automated."
Have they really? The impact on the unemployment rate of information
technology and its concomitant automation is not at all clear. The
effect is highly variable across different countries, for example.
Looking domestically, travel agents were never a major job category:
even if such jobs were automated away as the number of agencies
dropped by about 2/3 in the decade-plus after 1998, such numbers pale
alongside construction, manufacturing, and, I would wager, computer
programmers whose positions were offshored.
The unfortunate thing in the entire discussion, apart from people
without jobs obviously, is the lack of political and popular
understanding of both the sources of the unemployment and the
necessary solutions. Merely saying "education" or "job retraining"
defers rather than settles the debate about what actually is to be
done in the face of the structural transformation we are living
through. On that point the President is correct: the language of
structural change fits, but structural changes need to be addressed
with fundamental rethinking of rules and behaviors rather than with
sound bites and band-aids.
Friday, July 29, 2011
Early Indications July 2011: Place, Space, and Time
For millennia, geography has defined human civilizations. As our communications capability increases, as measured by technical specifications if not necessarily emotional ones, the need to be physically located in a certain place to do a job, support a social movement, or complete a business transaction is becoming less of an absolute constraint. Mobile phones, cloud computing, and other tools (such as lightweight project management software or online social networks) allow people and resources to be organized without physical contact; this might be called the emerging domain of space, as in "cyber." People can put up virtual storefronts on eBay, let Amazon be their supply chain, rent computing from Google to run code written in India, and let PayPal be their treasury system. Salesforce.com keeps track of customers and prospects; ADP runs payroll once enough employees sign on. Thus, the actual "business" could physically be the size of a laptop PC.
As place becomes negotiable, so does time. Asynchronous television viewing, for example, is reshaping the cable TV landscape. Comcast bought NBC Universal, which in turn was part of the Hulu joint venture. Apart from sports, college students watch very little television at its scheduled time, or over its traditional channel for that matter. Shopping has also become time-shifted: one can easily walk into Sears, shop at a kiosk, and have the item delivered to a physical address, or else shop online and drive to the store for faster pickup than FedEx can manage. At the other end of the time spectrum, tools like Twitter are far faster than TV news, not to mention print newspapers. Voicemail, a time-shifting capability now taken for granted, seems primitive at roughly 30 years old.
Place and time increasingly interconnect. Real-time package tracking for a routine Amazon purchase contrasts dramatically with a common scene at an automobile dealership: a customer saw a vehicle on the website earlier in the week and none of the sales people know what happened to it. UPS can track more than a million packages per day while a car dealer can lose a $15,000 two-ton vehicle, one of a few dozen, from a fenced concrete lot. Customer expectations are set by the former experience and are growing increasingly intolerant of the latter.
The corollary of that place/time flexibility, however, is being tracked: everybody with digital assets is plugged into some kind of information grid, and those grids can be mapped. Sometimes it's voluntary: Foursquare, Shopkick, and Facebook Places turn one's announced location into games. More often Big Brother's watch is without consent: London's security cameras are controlled by the same police department accused of using official assets in the service of the Murdoch newspapers' snooping on innocent citizens. Unplugged from the Internet but still needing to distribute directives and communiqués, Osama Bin Laden relied on a USB-stick courier who proved to be his undoing. As we have seen elsewhere, the entire idea of digital privacy, its guarantees and redresses, for bad guys and for everyday folk, is still primitive.
Examples are everywhere: Google Streetview has proved especially controversial in Europe, and in Germany in particular. Local "open records" laws have yet to be rethought in the age of instant global access: it's one thing for the neighbors to stop by town hall to see how much the new family paid for their house, but something else entirely (we don't really know what) when tens of millions of such transactions are searchable -- especially when combined with overlays of Streetview, Bing's Bird's Eye aerial (as opposed to satellite) imagery, and other potentially intrusive mechanisms.
In 2009, a WIRED magazine reporter attempted to vanish using a combination of physical disguises and digital trickery: pre-paid cell phones, proxy servers for IP address masking, cash-purchased gift cards. He was found through a combination of old-fashioned detective work and sophisticated network analysis: he was signing on to Facebook with an alias, and the alias had few real friends. The Facebook group he was lurking in was made up of people trying to find him. The intersections of place and space are growing more curious every year.
Virtuality
Familiar to many business users from the network perimeter tunnel (the Virtual Private Network, which lets someone see computing resources inside the firewall while physically remote from the corporate facility), the "virtual" idea has become a major movement within enterprise computing. Rather than dedicating a piece of hardware to a particular piece of software, hardware becomes fungible. In a perfect virtual world, people with applications to run can schedule the necessary resources (possibly priced by an auction mechanism), do their work, then retreat from the infrastructure until they next need computing. In this way, the theory goes, server utilization improves: the idle capacity associated with captive hardware can be reclaimed and applied to whatever work is current, regardless of its size, shape, or origin.
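A back-of-the-envelope sketch of that utilization argument, with illustrative numbers only: the same five lightly loaded applications waste most of five dedicated servers but fit comfortably on one shared machine.

```python
import math

# Illustrative average CPU demand for five applications, as a fraction of
# one server's capacity; made-up numbers, not measurements.
workloads = [0.05, 0.10, 0.20, 0.15, 0.08]
server_capacity = 1.0

# Dedicated model: one captive server per application.
dedicated_util = sum(workloads) / (len(workloads) * server_capacity)

# Virtualized model: consolidate onto the fewest servers that hold the load.
servers_needed = math.ceil(sum(workloads) / server_capacity)
virtualized_util = sum(workloads) / (servers_needed * server_capacity)

print(f"dedicated servers: {dedicated_util:.0%} average utilization")   # ~12%
print(f"virtualized pool:  {virtualized_util:.0%} average utilization") # ~58%
```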
Once again, physical presence (in this case, big computers in a temperature-controlled facility with expensive redundant network and power connections, physical security, and specialized technicians tending the machines) is disconnected from data and/or application logic. In many consumer scenarios, people act this way without thinking twice: looking at Google Maps instead of Rand McNally, using the online version of Turbotax, or even reading Facebook is similar: no software package resides on the user's machine, and the physical location of the actual computing is both invisible and irrelevant.
From the world of computing, it's a short hop to the world of work. People no longer need to come to the physical assets if they're not doing work on somebody else's drill presses and assembly lines: brain work, a large component of the services economy, is often independent of physical capital, and thus of scheduled shifts. "Working from home" is commonplace, and with the rise of the smartphone, work becomes an anytime/anywhere proposition for more and more people. What this seamlessness means for identity, for health, for family and relationships, and for business performance has yet to be either named or sorted out.
Another dimension of virtuality is personal. Whether in Linden Labs' Second Life, World of Warcraft, or any number of other venues, millions of people play roles and interact through a software persona. As processing power and connection quality increase, these avatars will get more capable, more interesting, and more common. One fascinating possibility relates to virtual permanence: even if the base-layer person dies or quits the environment, the virtual identity can age (or, like Bart Simpson, remain timeless), and can either grow and learn or remain blissfully unaware of change in its own life or the various outside worlds.
Practical applications of virtualization for everyday life seem to be emerging. In South Korea, busy commuters can shop for groceries at transparencies of store shelves identical to those at their nearby Tesco Homeplus store; the photos of the products bear 2D barcodes which, when scanned and purchased, generate orders that are bundled together for home delivery. Picking up ingredients for dinner on the way home from work is a time-honored ritual; here, the shopper chooses the items but never touches them until arriving at his or her residence.
Cisco is making a major play toward virtual collaboration in enterprise videoconferencing; their preferred term, "telepresence," hasn't caught on, but the idea has. Given the changes to air travel in the past ten years (longer check-in times, fewer empty seats, higher fares), compounded by oil price shocks, many people dislike flying more than they used to. Organizations on lean budgets also look to travel as an expense category ripe for cutting, so videoconferencing is coming into its own at some firms. Cisco reports that it has used the technology to save more than $800 million over five years; productivity gains add up, by their math, to another $350 million.
Videoconferencing is also popular with individuals, but it isn't called that: in July 2011 Skype's CEO said that users make about 300 million minutes of video calls per month, which is about the same as pure voice connections when measured as web traffic; since video consumes more bandwidth, the number of voice-to-voice calls might well be higher. The point for our purposes is that rich interaction can facilitate relationships and collaboration in the absence of physical proximity, at very low cost in hardware, software, and connection. As recently as 2005, a corporate videoconference facility could cost more than a half million US dollars to install; monthly connection charges were another $18,000, or $216,000 annually. In 2011, Apple iPads and many laptop PCs include cameras, and Skype downloads are free.
Organizations
Given that vertical integration has its limits in speed and the cost of capital investment (both in dollars and in opportunity costs), partnering has become a crucial capability. While few companies can emulate the lightweight, profit-free structure of a hacker's collective like Anonymous, of Wikipedia, or of Linux, neither can many firms assume that they control all necessary resources under their own roof. Thus the conventional bureaucracy model is challenged to open up, to connect data and other currencies to partners. Whether it involves sharing requirements documents, blueprints for review, production schedules, regulatory signoffs, or other routine but essential categories of information, few companies can quickly yet securely vet, map, and integrate a partner organization. Differences in nomenclature, in signing authority or span of control, time zones, language and/or currency, and any number of other characteristics complicate the interaction. So-called onboarding -- granting a partner appropriate data access -- can be a months-long process, particularly in secure (aerospace and defense) or regulated settings. Creating a selectively permeable membrane to let in the good guys, let out the proper information, turn off the faucet when it's not being used, and maintain trade secrets throughout has proven to be non-trivial.
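A toy sketch of that "selectively permeable membrane": a partner sees only named categories of data, and only while the engagement is active. The partner name, categories, and dates below are invented for illustration; real onboarding adds identity federation, audits, and legal review that no snippet captures.

```python
from datetime import date

# Hypothetical grants: which data categories a partner may read, and until when.
PARTNER_GRANTS = {
    "acme_supplier": {
        "categories": {"production schedules", "blueprints"},
        "expires": date(2012, 6, 30),
    },
}

def partner_can_read(partner, category, today):
    """True only if the partner holds an unexpired grant for that category."""
    grant = PARTNER_GRANTS.get(partner)
    if grant is None:
        return False
    return category in grant["categories"] and today <= grant["expires"]

print(partner_can_read("acme_supplier", "blueprints", date(2012, 1, 15)))     # True
print(partner_can_read("acme_supplier", "trade secrets", date(2012, 1, 15)))  # False
print(partner_can_read("acme_supplier", "blueprints", date(2013, 1, 15)))     # False (expired)
```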
Automata
What would happen if a person's avatar could behave independently? If an attractive bargain comes up at Woot, buy it for me. If someone posts something about me on a social network, notify me or, better yet, correct any inaccuracies. If the cat leaves the house through the pet door and doesn't return within two hours, call the petsitter. Who would bear responsibility for the avatar's actions? The person on whose behalf it is "working"? The software writer? The environment in which it operates?
Once all those avatars started interacting independently, unpredictable things might happen, the equivalent of two moose getting their antlers stuck together in the wild, or of a DVD refusing to play on some devices but not others because of a scratch on the disc. Avatars might step out of each other's way, or might trample each other in mobs. They might adapt to new circumstances or they might freeze up in the face of unexpected inputs. Some avatars might stop and wait for human guidance; others might create quite a bit of havoc given a particular set of circumstances.
It's one thing for a person's physical butler, nanny, or broker to act on his or her behalf, but something else quite new for software to be making such decisions. Rather than hypothetical thought experiments, the scenarios above are already real. Software "snipers" win eBay auctions with the lowest possible winning bid at the last possible moment. Google Alerts can watch for web postings that fit my criteria and forward them.
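A minimal sketch of the kind of standing instruction just described ("if an attractive bargain comes up, buy it for me"); the deal feed and the purchase action are stand-ins, not real services.

```python
def watch_for_bargains(deals, max_price, buy):
    """Act on the owner's behalf: buy anything at or under max_price.
    'deals' is an iterable of (item, price) pairs from a hypothetical feed."""
    for item, price in deals:
        if price <= max_price:
            buy(item, price)

# Stand-in data and a stand-in purchase action.
todays_deals = [("headphones", 89.99), ("usb drive", 7.50), ("monitor", 249.00)]
watch_for_bargains(todays_deals, max_price=10.00,
                   buy=lambda item, price: print(f"bought {item} at ${price}"))
```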
More significantly, the jacketed floor traders waving their hands furiously to generate Wall Street transactions are a dying breed. So-called algorithmic trading is a broad category that includes high-frequency trading, in which bids, asks, and order cancellations are computer-to-computer interactions that might last less than a second (and thus cannot involve human traders). By itself, HFT is estimated to generate more than 75% of equities trading volume; nearly half of commodity futures (including oil) trading volume is also estimated to be computer-generated in some capacity. The firms that specialize in such activity are often not well known, and most prefer not to release data that might expose sources of competitive advantage, so the actual numbers are uncertain.
What is known is that algorithms can go wrong, and when they go wrong at scale, consequences can be significant. The May 6, 2010 "Flash Crash" is still not entirely understood, but the source of the Dow Jones Industrial Average's biggest, fastest intraday drop (998 points) in its history lies in large measure in the complex system of competing algorithms running trillions of dollars of investment. The long-time financial fundamentalist John Bogle -- founder of the Vanguard Group -- pulled no punches in his analysis: "The whole system failed. In an era of intense technology, bad things can happen so rapidly. Technology can accelerate things to the point that we lose control."
Artifacts of the algorithmic failure were just plain weird. Apple stock hit $100,000 a share for a moment; Accenture, a computer services provider, instantaneously dropped from $40 to a cent only to bounce back a few seconds later. Circuit-breakers, or arrangements to halt trading once certain limits are exceeded, were tripping repeatedly. For example, if a share price moves more than 10% in a five minute interval, trading can be halted for a five-minute break. A bigger question relates to the HFT firms that, in good times, provide liquidity, but that can withdraw from the market without notice and in doing so make trading more difficult. Technically, the exchanges' information systems were found to have shortcomings: some price quotes were more than two seconds delayed, which represents an extreme lag in a market where computer-generated actions measure in the millions (or more) per second.
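The single-stock rule just mentioned (a move of more than 10% within five minutes triggers a five-minute pause) is simple enough to sketch; the prices and timestamps below are illustrative, echoing the Accenture anomaly.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 0.10  # a move of more than 10% within the window trips the breaker

def breaker_tripped(ticks):
    """ticks: list of (timestamp, price) pairs in time order."""
    for i, (t0, p0) in enumerate(ticks):
        for t1, p1 in ticks[i + 1:]:
            if t1 - t0 > WINDOW:
                break
            if abs(p1 - p0) / p0 > THRESHOLD:
                return True
    return False

start = datetime(2010, 5, 6, 14, 40)
ticks = [(start, 40.00),
         (start + timedelta(minutes=2), 38.50),
         (start + timedelta(minutes=4), 0.01)]  # an Accenture-style collapse
print(breaker_tripped(ticks))  # True -- halt trading for five minutes
```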
Implications
What does it mean to be somewhere? When people sitting together in a college cafeteria are each texting someone else, what does it mean to be physically present? What does it mean to "be at work"? Conversely, what does it mean to be "on vacation"? If I am at my job, how is my output or lack thereof measured? Counting lines of code proved to be a bad way to measure software productivity; how many jobs measure performance by the quality of ideas generated, the quality of collaboration facilitated, the quality of customer service? These are difficult to instrument, so industrial-age measures, including physical output, remain popular even as services (which lend themselves to extreme virtualization) grow in importance and impact.
What is a resource? Who creates it, gets access to it, bears responsibility for its use or misuse? Where do resources "live"? How are they protected? What is obsolescence? How are out-of-date resources retired from service? Enterprise application software, for example, often lives well past its useful life; ask any CIO how many zombie programs he or she has running. Invisible to the naked eye, software can take on a life of its own, and once another program connects to it (the output of a sales forecasting program might be used in HR scheduling, or in marketing planning), the life span likely increases: complexity makes pruning more difficult since turning off an application might have dire consequences at the next quarterly close, the next annual performance review, or the next audit. Better to err on the side of safety and leave things running.
What does it mean for information to be weightless, massless, and infinitely portable? Book collections are becoming a thing of the past for many readers, as Kindle figures and Google searches can attest: having a reference collection near the dining room used to be essential in some academic households, to settle dinnertime contests. Music used to weigh a lot, in the form of LP records. Compact discs were lighter, but the plastic jewel box proved to be a particularly poor storage solution. MP3 downloads eliminated the physical medium, but the bits still needed to be stored on a personal hard drive. Now that's changing, to the point where physical books, newspapers, music, and movies all share a cloud-based solution. The result is a dematerialization of many people's lives: book collections, record collections, sheet music -- artifacts that defined millions of people are now disappearing, for good ecological reasons, but with as yet undetermined ramifications for identity, not to mention decorating.
What does it mean to bear personal responsibility? If software operating in my name does something bad, did I do anything? If I am not physically present at my university, my workplace, or my political organization, how loosely or tightly am I connected to the institution, to its people, to its agenda? Harvard political scientist Robert Putnam worried about the implications of the decline in the number of American bowling leagues; are Facebook groups a substitute for, or an improvement on, physical manifestations of civic engagement? If so, which groups, and which forms of engagement? In other words, at the scale of 700-plus million users, saying anything about Facebook is impossible, given the number of caveats, exceptions, and innovations: Facebook today is not what it will be a year from now, whereas bowling leagues have been pretty stable for decades.
Thus the fluidity of (cyber) space, (physical) place, and time has far-reaching implications for getting work done, for entrepreneurial opportunity, and for personal identity. As with so many other innovations, the technologists who are capable of writing code and designing and building breakthrough devices have little sense of what those innovations will mean. The sailing ship meant, in part, that Britain could establish a global empire; the first century of the automobile meant wars for oil, environmental degradation, new shapes for cities, the post-war rise of Japan, and unprecedented personal mobility, to start a very long list. What barely a quarter century of personal computing, 20 years of the commercial Internet, and a few years of smartphones might mean is impossible to tell so far, but it looks like they could mean a lot.
As place becomes negotiable, so does time. Asynchronous television viewing, for example, is reshaping the cable TV landscape. Comcast bought NBC Universal, which in turn was part of the Hulu joint venture. Apart from sports, college students watch very little television at its scheduled time, or over its traditional channel for that matter. Shopping has also become time-shifted: one can easily walk into Sears, shop at a kiosk, and have the item delivered to physical address, or else shop on line and drive to the store for faster pickup than FedEx can manage. At the other end of the time spectrum, tools like Twitter are far faster than TV news, not to mention print newspapers. Voicemail seems primitive now that it's roughly 30 years old, a time-shifting capability now taken for granted.
Place and time increasingly interconnect. Real-time package tracking for a routine Amazon purchase contrasts dramatically with a common scene at an automobile dealership: a customer saw a vehicle on the website earlier in the week and none of the sales people know what happened to it. UPS can track more than a million packages per day while a car dealer can lose a $15,000 two-ton vehicle, one of a few dozen, from a fenced concrete lot. Customer expectations are set by the former experience and are growing increasingly intolerant of the latter.
The corollary of that place/time flexibility, however, is being tracked: everybody with digital assets is plugged into some kind of information grid, and those grids can be mapped. Sometimes it's voluntary: Foursquare, Shopkick, and Facebook Places turn one's announced location into games. More often Big Brother's watch is without consent: London's security cameras are controlled by the same police department accused of using official assets in the service of the Murdoch newspapers' snooping on innocent citizens. Unplugged from the Internet but still needing to distribute directives and communiqués, Osama Bin Laden relied on a USB-stick courier who proved to be his undoing. As we have seen elsewhere, the entire idea of digital privacy, its guarantees and redresses, for bad guys and for everyday folk, is still primitive.
Examples are everywhere: Google Streetview has proved particularly controversial in Europe, and in Germany above all. Local "open records" laws have yet to be rethought in the age of instant global access: it's one thing for the neighbors to stop by town hall to see how much the new family paid for their house, but something else entirely (we don't really know what) when tens of millions of such transactions are searchable -- especially with overlays of Streetview, Bing's Bird's Eye aerial (as opposed to satellite) imagery, and other potentially intrusive mechanisms.
In 2009, a WIRED magazine reporter attempted to vanish using a combination of physical disguises and digital trickery: pre-paid cell phones, proxy servers for IP address masking, cash-purchased gift cards. He was found through a combination of old-fashioned detective work and sophisticated network analysis: he was signing on to Facebook with an alias, and the alias had few real friends. The Facebook group he was lurking in was made up of people trying to find him. The intersections of place and space are growing more curious every year.
Virtuality
From its origins as a network perimeter tunnel (the Virtual Private Network, the ability to see computing resources inside the firewall while being physically remote from the corporate facility), virtualization has become a major movement within enterprise computing. Rather than dedicating a piece of hardware to a particular piece of software, hardware becomes more fungible. In a perfect virtual world, people with applications they need to run can schedule the necessary resources (possibly priced by an auction mechanism), do their work, then retreat from the infrastructure until they next need computing. In this way, the theory goes, server utilization is improved: the idle capacity tied up in captive hardware is reclaimed, freeing computing for the current work, whatever its size, shape, or origin.
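To make the utilization argument concrete, here is a minimal sketch in Python, with invented demand numbers, comparing servers dedicated to each application's peak against a shared pool sized for the combined peak; the gap between the two utilization figures is, in caricature, the case for virtualization.

# A toy comparison of server utilization: dedicated hardware sized for each
# application's peak versus a shared, virtualized pool sized for the combined
# peak. All demand numbers are invented for illustration only.

import random

random.seed(42)

HOURS = 24 * 7  # one simulated week

def demand(app_peak_hour, peak_load):
    """Hypothetical hourly CPU demand (in 'server units') peaking at a given hour."""
    profile = []
    for h in range(HOURS):
        hour_of_day = h % 24
        distance = min(abs(hour_of_day - app_peak_hour),
                       24 - abs(hour_of_day - app_peak_hour))
        base = max(0.0, peak_load * (1 - distance / 12))
        profile.append(base + random.uniform(0, 0.1 * peak_load))
    return profile

apps = {
    "sales_forecasting": demand(app_peak_hour=9, peak_load=4.0),
    "hr_scheduling": demand(app_peak_hour=14, peak_load=3.0),
    "marketing_planning": demand(app_peak_hour=20, peak_load=5.0),
}

# Dedicated model: each application gets hardware sized for its own peak.
dedicated_capacity = sum(max(profile) for profile in apps.values())

# Virtualized model: one pool sized for the peak of the *combined* load.
combined = [sum(profile[h] for profile in apps.values()) for h in range(HOURS)]
pooled_capacity = max(combined)

average_load = sum(combined) / HOURS

print(f"Dedicated capacity needed : {dedicated_capacity:.1f} server units")
print(f"Pooled capacity needed    : {pooled_capacity:.1f} server units")
print(f"Average utilization, dedicated: {average_load / dedicated_capacity:.0%}")
print(f"Average utilization, pooled   : {average_load / pooled_capacity:.0%}")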
Once again, physical presence (in this case, big computers in a temperature-controlled facility with expensive redundant network and power connections, physical security, and specialized technicians tending the machines) is disconnected from data and/or application logic. In many consumer scenarios, people act this way without thinking twice: looking at Google Maps instead of Rand McNally, using the online version of TurboTax, or even reading Facebook works the same way. No software package resides on the user's machine, and the physical location of the actual computing is both invisible and irrelevant.
From the world of computing, it's a short hop to the world of work. People no longer need to come to the physical assets if they're not doing work on somebody else's drill presses and assembly lines: brain work, a large component of the services economy, is often independent of physical capital, and thus of scheduled shifts. "Working from home" is commonplace, and with the rise of the smartphone, work becomes an anytime/anywhere proposition for more and more people. What this seamlessness means for identity, for health, for family and relationships, and for business performance has yet to be either named or sorted out.
Another dimension of virtuality is personal. Whether in Linden Labs' Second Life, World of Warcraft, or any number of other venues, millions of people play roles and interact through a software persona. As processing power and connection quality increase, these avatars will get more capable, more interesting, and more common. One fascinating possibility relates to virtual permanence: even if the base-layer person dies or quits the environment, the virtual identity can age (or, like Bart Simpson, remain timeless), and can either grow and learn or remain blissfully unaware of change in its own life or the various outside worlds.
Practical applications of virtualization for everyday life seem to be emerging. In South Korea, busy commuters can shop for groceries from life-size images of store shelves identical to those at their nearby Tesco Homeplus store; the photos of the products bear 2D barcodes which, when scanned and purchased, generate orders that are bundled together for home delivery. Picking up ingredients for dinner on the way home from work is a time-honored ritual; here, the shopper chooses the items but never touches them until arriving at his or her residence.
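A toy sketch of that flow, with made-up product codes and prices: scanned barcodes resolve to catalog items, and the resulting cart is bundled into one delivery order.

# Illustrative only: scanned 2D barcodes resolve to products, accumulate in a
# cart, and are bundled into a single home-delivery order. Catalog entries,
# codes, and prices are invented for the example.

CATALOG = {
    "8801234000011": ("milk, 1L", 2.50),
    "8801234000028": ("eggs, dozen", 3.20),
    "8801234000035": ("kimchi, 500g", 4.10),
}

def scan(cart, barcode):
    """Add the product behind a scanned barcode to the shopper's cart."""
    name, price = CATALOG[barcode]
    cart.append((name, price))

def bundle_order(cart, address):
    """Bundle the cart into one delivery order, as the Homeplus model does."""
    total = sum(price for _, price in cart)
    return {"deliver_to": address,
            "items": [name for name, _ in cart],
            "total": round(total, 2)}

cart = []
for code in ["8801234000011", "8801234000028", "8801234000035"]:
    scan(cart, code)

print(bundle_order(cart, "123 Example St, Seoul"))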
Cisco is making a major play toward virtual collaboration in enterprise videoconferencing; its preferred term, "telepresence," hasn't caught on, but the idea has. Given the changes to air travel in the past ten years (longer check-in times, fewer empty seats, higher fares), compounded by oil price shocks, many people dislike flying more than they used to. Organizations on lean budgets also look to travel as an expense category ripe for cutting, so videoconferencing is coming into its own at some firms. Cisco reports that it has used the technology to save more than $800 million over five years; productivity gains add up, by its math, to another $350 million.
Videoconferencing is also popular with individuals, but it isn't called that: in July 2011 Skype's CEO said that users make about 300 million minutes of video calls per month, roughly the same as pure voice connections when measured as web traffic; since video consumes more bandwidth, the number of voice-to-voice calls might well be higher. The point for our purposes is that rich interaction can facilitate relationships and collaboration in the absence of physical proximity, at very low cost in hardware, software, and connection. As recently as 2005, a corporate videoconference facility could cost more than a half million US dollars to install; monthly connection charges were another $18,000, or $216,000 annually. In 2011, Apple iPads and many laptop PCs include cameras, and Skype downloads are free.
Organizations
Given that vertical integration has its limits in speed and the cost of capital investment (both in dollars and in opportunity costs), partnering has become a crucial capability. While few companies can emulate the lightweight, profit-free structure of a hacker's collective like Anonymous, of Wikipedia, or of Linux, neither can many firms assume that they control all necessary resources under their own roof. Thus the conventional bureaucracy model is challenged to open up, to connect data and other currencies to partners. Whether it involves sharing requirements documents, blueprints for review, production schedules, regulatory signoffs, or other routine but essential categories of information, few companies can quickly yet securely vet, map, and integrate a partner organization. Differences in nomenclature, in signing authority or span of control, time zones, language and/or currency, and any number of other characteristics complicate the interaction. So-called onboarding -- granting a partner appropriate data access -- can be a months-long process, particularly in secure (aerospace and defense) or regulated settings. Creating a selectively permeable membrane to let in the good guys, let out the proper information, turn off the faucet when it's not being used, and maintain trade secrets throughout has proven to be non-trivial.
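One way to picture that selectively permeable membrane is as scoped, expiring access grants per partner. The sketch below is illustrative only; the partner names and document categories are hypothetical.

# A minimal sketch of partner data access as scoped, expiring grants: let the
# right outsiders in, expose only the agreed categories, and shut access off
# when it is no longer needed. All names and categories are hypothetical.

from datetime import datetime, timedelta

class PartnerAccess:
    def __init__(self):
        self._grants = {}  # partner -> (allowed categories, expiry)

    def onboard(self, partner, categories, days_valid):
        """Grant a partner access to specific document categories for a limited time."""
        expiry = datetime.utcnow() + timedelta(days=days_valid)
        self._grants[partner] = (set(categories), expiry)

    def offboard(self, partner):
        """Turn off the faucet when the relationship or project ends."""
        self._grants.pop(partner, None)

    def may_read(self, partner, category):
        """Check both the scope and the expiry of a partner's grant."""
        grant = self._grants.get(partner)
        if grant is None:
            return False
        categories, expiry = grant
        return category in categories and datetime.utcnow() < expiry

acl = PartnerAccess()
acl.onboard("contract_manufacturer", ["blueprints", "production_schedules"], days_valid=90)
print(acl.may_read("contract_manufacturer", "blueprints"))           # True
print(acl.may_read("contract_manufacturer", "regulatory_signoffs"))  # False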
Automata
What would happen if a person's avatar could behave independently? If an attractive bargain comes up at Woot, buy it for me. If someone posts something about me on a social network, notify me or, better yet, correct any inaccuracies. If the cat leaves the house through the pet door and doesn't return within two hours, call the petsitter. Who would bear responsibility for the avatar's actions? The person on whose behalf it is "working"? The software writer? The environment in which it operates?
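Those wishes amount to trigger-and-action rules evaluated by software acting in a person's name. A bare-bones sketch, with invented events, thresholds, and actions, might look like this:

# A bare-bones rule engine for a personal "avatar": each rule pairs a trigger
# with an action taken on the owner's behalf. Events and thresholds are
# invented for illustration.

rules = []

def rule(trigger):
    """Register an action to run whenever the trigger predicate matches an event."""
    def register(action):
        rules.append((trigger, action))
        return action
    return register

@rule(lambda e: e["type"] == "deal" and e["discount"] >= 0.5)
def buy_it(event):
    print(f"Buying {event['item']} at {event['discount']:.0%} off")

@rule(lambda e: e["type"] == "mention" and not e["accurate"])
def correct_record(event):
    print(f"Flagging inaccurate post by {event['author']} for correction")

@rule(lambda e: e["type"] == "cat_out" and e["hours_gone"] >= 2)
def call_petsitter(event):
    print("Calling the petsitter")

def handle(event):
    for trigger, action in rules:
        if trigger(event):
            action(event)

handle({"type": "deal", "item": "headphones", "discount": 0.6})
handle({"type": "mention", "author": "someone", "accurate": False})
handle({"type": "cat_out", "hours_gone": 3})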
Once all those avatars started interacting independently, unpredictable things might happen, the equivalent of two moose getting their antlers stuck together in the wild, or of a DVD refusing to play on some devices but not others because of a scratch on the disc. Avatars might step out of each other's way, or might trample each other in mobs. They might adapt to new circumstances or they might freeze up in the face of unexpected inputs. Some avatars might stop and wait for human guidance; others might create quite a bit of havoc given a particular set of circumstances.
It's one thing for a person's physical butler, nanny, or broker to act on his or her behalf, but something else quite new for software to be making such decisions. Rather than hypothetical thought experiments, the above scenarios are already real. Software "snipers" win eBay auctions with the lowest possible winning bid at the last possible moment. Google Alerts can watch for web postings that fit my criteria and forward them.
More significantly, the jacketed floor traders waving their hands furiously are a dying breed on Wall Street. So-called algorithmic trading is a broad category that includes high-frequency trading, in which bids, asks, and order cancellations are computer-to-computer interactions that might last less than a second (and thus cannot involve human traders). By itself, HFT is estimated to generate more than 75% of equities trading volume; nearly half of commodity futures (including oil) trading volume is also estimated to be computer-generated in some capacity. The firms that specialize in such activity are often not well known, and most prefer not to release data that might expose sources of competitive advantage, so the actual numbers remain murky.
What is known is that algorithms can go wrong, and when they go wrong at scale, consequences can be significant. The May 6, 2010 "Flash Crash" is still not entirely understood, but the source of the Dow Jones Industrial Average's biggest, fastest intraday drop to that date (998 points) lies in large measure in the complex system of competing algorithms running trillions of dollars of investment. The long-time financial fundamentalist John Bogle -- founder of the Vanguard Group -- pulled no punches in his analysis: "The whole system failed. In an era of intense technology, bad things can happen so rapidly. Technology can accelerate things to the point that we lose control."
Artifacts of the algorithmic failure were just plain weird. Apple stock hit $100,000 a share for a moment; Accenture, a computer services provider, instantaneously dropped from $40 to a cent only to bounce back a few seconds later. Circuit-breakers, or arrangements to halt trading once certain limits are exceeded, were tripping repeatedly. For example, if a share price moves more than 10% in a five minute interval, trading can be halted for a five-minute break. A bigger question relates to the HFT firms that, in good times, provide liquidity, but that can withdraw from the market without notice and in doing so make trading more difficult. Technically, the exchanges' information systems were found to have shortcomings: some price quotes were more than two seconds delayed, which represents an extreme lag in a market where computer-generated actions measure in the millions (or more) per second.
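The single-stock rule just described is simple to state, even if real exchange rules carry many more conditions. A schematic version, with made-up price ticks, follows:

# A simplified single-stock circuit breaker: if the price moves more than 10%
# relative to any price seen in the trailing five minutes, halt trading for
# five minutes. Real exchange rules have many more conditions; the ticks here
# are invented.

from collections import deque

WINDOW_SECONDS = 5 * 60
MOVE_LIMIT = 0.10
HALT_SECONDS = 5 * 60

def run_breaker(ticks):
    """ticks: iterable of (timestamp_seconds, price) pairs. Yields halt start times."""
    window = deque()          # (timestamp, price) pairs within the trailing window
    halted_until = None
    for ts, price in ticks:
        if halted_until is not None and ts < halted_until:
            continue          # trading is paused; ignore quotes during the halt
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        for _, past_price in window:
            if abs(price - past_price) / past_price > MOVE_LIMIT:
                halted_until = ts + HALT_SECONDS
                yield ts
                window.clear()
                break
        else:
            window.append((ts, price))

ticks = [(0, 41.00), (60, 40.50), (120, 39.90), (150, 33.00), (200, 0.01)]
for halt_time in run_breaker(ticks):
    print(f"Trading halted at t={halt_time}s")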
Implications
What does it mean to be somewhere? As people sitting together in college cafeterias text other people, what does it mean to be physically present? What does it mean to "be at work"? Conversely, what does it mean to be "on vacation"? If I am at my job, how is my output or lack thereof measured? Counting lines of code proved to be a bad way to measure software productivity; how many jobs measure performance by the quality of ideas generated, the quality of collaboration facilitated, the quality of customer service? These are difficult to instrument, so industrial-age measures, including physical output, remain popular even as services (which lend themselves to extreme virtualization) grow in importance and impact.
What is a resource? Who creates it, gets access to it, bears responsibility for its use or misuse? Where do resources "live"? How are they protected? What is obsolescence? How are out-of-date resources retired from service? Enterprise application software, for example, often lives well past its useful life; ask any CIO how many zombie programs he or she has running. Invisible to the naked eye, software can take on a life of its own, and once another program connects to it (the output of a sales forecasting program might be used in HR scheduling, or in marketing planning), the life span likely increases: complexity makes pruning more difficult since turning off an application might have dire consequences at the next quarterly close, the next annual performance review, or the next audit. Better to err on the side of safety and leave things running.
What does it mean for information to be weightless, massless, and infinitely portable? Book collections are becoming a thing of the past for many readers, as Kindle figures and Google searches can attest: having a reference collection near the dining room used to be essential in some academic households, to settle dinnertime contests. Music used to weigh a lot, in the form of LP records. Compact discs were lighter, but the plastic jewel box proved to be a particularly poor storage solution. MP3 downloads eliminated the physical medium, but still needed bits to be stored on a personal hard drive. Now that's changing, to the point where books, newspapers, music, and movies can all be delivered from the cloud. The result is a dematerialization of many people's lives: book collections, record collections, sheet music -- artifacts that defined millions of people are now disappearing, for good ecological reasons, but with as yet undetermined ramifications for identity, not to mention decorating.
What does it mean to bear personal responsibility? If software operating in my name does something bad, did I do anything? If I am not physically present at my university, my workplace, or my political organization, how loosely or tightly am I connected to the institution, to its people, to its agenda? Harvard sociologist Robert Putnam worried about the implications of the decline in the number of American bowling leagues; are Facebook groups a substitute for, or an improvement on, physical manifestations of civic engagement? If so, which groups, and which forms of engagement? In other words, at the scale of 700-plus million users, saying anything about Facebook is impossible, given the number of caveats, exceptions, and innovations: Facebook today is not what it will be a year from now, whereas bowling leagues have been pretty stable for decades.
Thus the fluidity of (cyber) space, (physical) place, and time has far-reaching implications for getting work done, for entrepreneurial opportunity, and for personal identity. As with so many other innovations, the technologists who are capable of writing code and designing and building breakthrough devices have little sense of what those innovations will mean. The sailing ship meant, in part, that Britain could establish a global empire; the first century of the automobile meant wars for oil, environmental degradation, new shapes for cities, the post-war rise of Japan, and unprecedented personal mobility, to start a very long list. What barely a quarter century of personal computing, 20 years of the commercial Internet, and a few months of smartphones might mean is impossible to tell so far, but it looks like they could mean a lot.
Thursday, June 30, 2011
Early Indications June 2011: Identity and privacy
With the soft launch of Google Plus, it's an opportune time to think
about digital privacy, insofar as Google is explicitly targeting
widespread user dissatisfaction with Facebook's treatment of their
personal information. The tagging feature that was used to build a
massive (hundreds of millions of users) facial recognition database,
for example, has important privacy implications. In standard
Facebook fashion, it's turned on by default, and opting out once may
not guarantee that a user is excluded from the next wave of changes.
If a government did that, controversy would likely be intense, but in
Facebook's case, people seem to be resigned to the behavior.
According to a 2010 poll developed at the University of Michigan and
administered by the American Customer Satisfaction Index, Facebook
scored in the bottom 5%, in the range of cable operators, airlines,
and the IRS. Even as Facebook is rumored to be holding off user-base
announcements for now-mundane 100-million intervals, users are
defecting. While the service is said to be closing in on 750 million
users globally, reports of 1% of that population in the U.S. and
Canada defecting in one month were not confirmed by the company, but
neither were they denied. A Google search on "Facebook fatigue"
returned 23 million hits. At the same time, Facebook delivers 31% of the 1.1 trillion ads served in the U.S. each quarter (Yahoo is a distant second at 10% share); those ads are expected to represent $4 billion in 2011 revenue.
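Taken at face value, those figures imply a strikingly low price per impression. A back-of-envelope calculation using only the numbers quoted above, and noting that the revenue figure is company-wide while the impression count is U.S.-only:

# Back-of-envelope arithmetic using only the figures quoted above; treat the
# result as a rough order of magnitude, not a reported number. Note that the
# revenue figure is company-wide while the ad count is U.S.-only, so the
# implied per-ad price is overstated.

us_ads_per_quarter = 1.1e12          # 1.1 trillion ads served in the U.S. per quarter
facebook_share = 0.31                # Facebook's share of those impressions
facebook_revenue_2011 = 4e9          # expected 2011 ad revenue, in dollars

facebook_ads_per_year = us_ads_per_quarter * facebook_share * 4
revenue_per_thousand = facebook_revenue_2011 / facebook_ads_per_year * 1000

print(f"Facebook impressions per year : {facebook_ads_per_year:.2e}")
print(f"Implied revenue per 1,000 ads : ${revenue_per_thousand:.2f}")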
With the Facebook IPO still impending, the questions about privacy
take on more urgency. What, really, is privacy? It's clearly a
fundamental concept, typically conceived of as a human or civil right.
According to the Oxford English Dictionary, privacy is "the state or
condition of being alone, undisturbed, or free from public attention,
as a matter of choice or right; seclusion; freedom from interference
or intrusion." It's an old word, dating to the 14th century, that is
constantly being reinvented as times change.
Being left alone in a digital world is a difficult concept, however.
Here, NYU's Helen Nissenbaum is helpful: "What people care most about
is not simply restricting the flow of information but ensuring that it
flows appropriately. . . ." Thus she does not wade further into the
definitional swamp, but spends a book's worth of analysis* on the issue
of how people interact with the structures that collect, parse, and
move their information. (*Privacy in Context: Technology, Privacy, and the Integrity of Social Life)
Through this lens, the following artifacts cannot be judged as public
or private, good or bad, acceptable or unacceptable, but
they can be discussed and considered in the context of people's
values, choices, and autonomy: when I use X, is my information handled
in a way that I consent to in some reasonably informed way? The
digital privacy landscape is vast, including some familiar tools, and
for all the privacy notices I have received, there is a lot I don't
know about the workings of most of these:
-loyalty card programs
-Google streetview
-toll-pass RFID tags
-surveillance cameras
-TSA no-fly lists
-Facebook data and actions
-credit-rating data
-Amazon browsing and purchase history
-Google search history
-Foursquare check-ins
-digital camera metadata
-expressed preferences such as star ratings, Facebook Likes, or eBay
seller feedback
-searchable digital public records such as court dates, house
purchases, or bankruptcy
-cell phone location and connection records
-medical records, electronic or paper
-Gmail correspondence
-TSA backscatter X-ray
Does such lack of knowledge mean that I have conceded privacy, or that
I am exposing aspects of my life I would rather not? Probably both.
In addition, the perfection of digital memory -- handled properly,
bits don't degrade with repeated copying -- means that what these
entities know, they know for a very long time. The combination,
therefore, of lack of popular understanding of the mechanics of
personal information and the permanence of that information makes
privacy doubly suspect.
Scale
In the climate of the past ten years, the events of 9/11 have
conditioned the privacy debate to an extraordinary degree.
The U.S. government was reorganized, search and seizure rules were
broadened, and rules of the game got more complicated: not only were
certain entities ordered to turn over information related to their
customers, they were obligated to deny that they had done so. More
centrally, the FBI's well-documented failure to "connect the dots"
spurred a reorganization of multiple information silos into a vast and
possibly suboptimally sprawling Department of Homeland Security.
Governments have always wanted more information than people typically
wanted to give them. Given the new legal climate along with
improvements in the technologies of databases, information retrieval,
and image processing, for example, more is known about U.S.
individuals than at any time heretofore. (Whether it is known by the
proper people and agencies is a separate question.) At the 2001 Super
Bowl in Tampa, for example, the entire crowd was scanned and matched
against an image database. Note the rhetoric employed even before the
terrorist attacks on the Twin Towers and the Pentagon:
"[Tampa detective Bill] Todd is excited about the biometric
crimestopper aid: The facial recognition technology is an extremely
fast, technologically advanced version of placing a cop on a corner,
giving him a face book of criminals and saying, Pick the criminals out
of the crowd and detain them. It's just very fast and accurate."
Note that the category of "criminals" can be conveniently defined: the
definition in Yemen, Libya, or Pakistan might be debatable, depending
on one's perspective. In Tampa, civil liberties were not explicitly
addressed, nor was there judicial oversight:
"Concerned first and foremost with public safety, the Tampa police
used its judgment in viewing the images brought up on the monitor. Although the cameras permitted the police to view crimes captured by the cameras and apprehend suspects for pick-pocketing and other petty
crimes, their real goal was to ensure crowd safety. The Tampa Police were involved in forming the database and determining by threat level who was added to the database." (emphasis added)
Letting a police force, which in any given locality may have
corruption issues as in large areas of Mexico, use digital records to
figuratively stand on a corner and pick "the criminals out of the
crowd" without probable cause is scary stuff. Also in this week's
news, a major story concerns an FBI agent who protected his informant
from murder charges. And police officers need not be corrupt to be
compromised: Mexican drug gangs are now said to threaten U.S. law
enforcement officers with harm. Once the information and the
technology exist, they will be abused: the issue is how to design
safeguards into the process.
Consider RFID toll passes. According to a transportation industry
trade journal,
"The first case of electronic toll record tracking may have been in
September 1997, when the New York City Police Department used E-Z Pass toll records to track the movements of a car owned by New Jersey millionaire Nelson G. Gross who had been abducted and murdered. The police did not use a subpoena to obtain these records but asked the Metropolitan Transportation Authority and they complied."
Again, the potential for privacy abuse emerged before protections did.
I could find no statistics for the number of E-ZPass and similar
tokens in current use, but it could well be in the tens of millions.
As only one in a number of highly revealing artifacts attached to a
person's digital identity, toll tokens join a growing number of
sensors of which few people are aware. The OBD system in a car,
expanded from a mechanic's engine diagnostic, has become a "black box"
like those recovered from airplane crashes. Progressive Insurance is
experimenting with data logging from the devices as a premium-setting
tool, which does not, significantly, include GPS information; the firm
discontinued a GPS-based experiment in 2000.
Invisibility
In its excellent "What They Know" investigative series in 2010, the
Wall Street Journal concluded that "they" know a lot. Numbers only
scratch the surface of the issues:
-Dictionary.com installed 234 tracking cookies in a single visit.
WSJ.com itself came in below average, at 60. Wikipedia.org was the
only site of 50 tested to install zero tracking software files.
-When Microsoft relaunched Internet Explorer in 2008, corporate
interests concerned about ad revenue vetoed a plan to make privacy
settings persistent. Thus users have to reset the privacy preferences
with every browser restart, and few people are aware of the settings
console in the first place.
-The Facebook Like button connects a behavior (an online vote, a
pursuit of a coupon, or an act of whim) to a flesh-and-blood person:
the Facebook profile's presumably real name, real age, real sex, and
real location; a simplified sketch of this mechanism follows the list
below. Again according to the Journal,
"For example, Facebook or Twitter know when one of their members reads
an article about filing for bankruptcy on MSNBC.com or goes to a blog
about depression called Fighting the Darkness, even if the user
doesn't click the "Like" or "Tweet" buttons on those sites.
For this to work, a person only needs to have logged into Facebook or
Twitter once in the past month. The sites will continue to collect
browsing data, even if the person closes their browser or turns off
their computers, until that person explicitly logs out of their
Facebook or Twitter accounts, the study found."
-Few people realize how technologies can be used to follow them from
one realm to another. The giant advertising firm WPP recently
launched Xaxis, which, according to the Wall Street Journal (in a
story separate from its "What They Know" series), "will manage what it describes as the 'world's largest' database of profiles of individuals that includes demographic, financial, purchase, geographic and other information collected from their Web activities and brick-and-mortar transactions. The database will be used to personalize ads consumers see on the Web, social-networking sites, mobile phones and ultimately, the TV set."
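Stripped to its essentials, the Like-button mechanism described above is a widget request that carries both the page being read (via the Referer header) and the social network's own login cookie. The sketch below reproduces no real company's code; it only illustrates how an embedded button can log identity-and-page pairs without a click.

# A deliberately simplified sketch of how an embedded social widget can link
# browsing to an identity: the browser's request for the widget carries the
# page it was embedded on (Referer) and the network's own login cookie. The
# cookie format and endpoint are invented for illustration.

from http.server import BaseHTTPRequestHandler, HTTPServer

browsing_log = []  # (user_id, page_visited) pairs, visible to the widget's owner

class WidgetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get("Cookie", "")           # e.g. "session_id=alice123"
        referer = self.headers.get("Referer", "unknown")  # the page embedding the widget
        user_id = cookie.split("session_id=")[-1] if "session_id=" in cookie else "anonymous"
        browsing_log.append((user_id, referer))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<button>Like</button>")

if __name__ == "__main__":
    # Any page that embeds http://localhost:8000/ causes the visitor's browser
    # to reveal (cookie, page) to this server, whether or not the visitor
    # clicks anything.
    HTTPServer(("localhost", 8000), WidgetHandler).serve_forever()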
In each of these examples, the companies involved ignored, or at
least discounted, Nissenbaum's notion of contextual integrity as it
relates to the individual. Given the lack of tangible consequences, it
makes economic sense for them to do so.
Identity
Given that digital privacy seems almost to be a quaint notion in the
U.S. (Europeans live and are legally protected differently), a deeper
question emerges: if that OED sense of freedom from intrusion is being
reshaped by our many digital identities, who are we and what do we
control? Ads, spam, nearly continuous interruption (if we let
ourselves listen), and an often creepy sense of "how did they know
that?" as LinkedIn, Amazon, Google, Facebook, and Netflix hone in our
most cherished idiosyncrasies -- all of these are embedded in the
contemporary connected culture. Many sites such as Lifehacker
recommend frequent pruning: e-mail offers, coupon sites, Twitter
feeds, and Facebook friends can multiply out of control, and saying no
often requires more deliberation than joining up.
Who am I? Not to get metaphysical, but the context for that question
is in flux. My fifth-grade teacher was fond of saying "tell me who
your friends are and I'll tell you who you are." What would he say to
today's fifth-grader, who may well text 8,000 times a month and have a
public Facebook page?
Does it matter that a person's political alignment, sexual
orientation, religious affiliation, and zip code (a reasonable proxy
for household income) are now a matter of public, searchable record?
Is her identity different now that so many facets of it are
transparent? Or is it a matter of Mark Zuckerberg's vision -- people
have one identity, and transparency is good for relationships -- being
implicitly shared more widely across the planet? Just today, a review
of Google Plus argued that people don't mind having one big list of
"friends," even as Facebook scored poorly in this year's customer
satisfaction index.
Indeed, one solution to the privacy dilemma is to overshare: if
nothing can possibly be held close, secrets lose their potency,
perhaps. (For an example, see the story of Hasan Elahi and his
Trackingtransience website in the May 2007 Wired and in Albert-László
Barabási's book Bursts.) The recent fascination with YouTube
pregnancy-test videos is telling: one of life's most meaningful,
trajectory-altering moments is increasingly an occasion to show the
world the heavy (water) drinking, the trips to the pharmacy and the
toilet, and the little colored indicator, followed by the requisite
reaction shots. (For more, see Marisa Meltzer's piece on Slate,
wonderfully titled "WombTube.")
The other extreme, opting out, is difficult. Living without a mobile
phone, without electronic books, without MP3 music files, without
e-mail, and of course without Facebook or Google is difficult for many
to comprehend. In fact, the decision to unplug frequently goes
hand-in-hand with a book project, so unheard-of is the notion.
At the same time, the primacy of the word represented by these massive
information flows leaves out at least 10% of the adult U.S.
population: functional illiteracy, by its very nature, is difficult to
measure. One shocking statistic, presented without attribution by the
Detroit Literacy Coalition, pegs the number in that metro area at a
stunning 47%. Given a core population of about 4 million in the
3-county area, that's well over 1 million adults who have few concerns
with Twitter feeds, Google searches, or allocating their 401(k)
portfolio.
In the middle, where most Americans now live, there's an abundance of
grey area. As "what they know," in the Journal's words, grows and
what they can do with it expands, perhaps the erosion of analog
notions of privacy will be steady but substantial. Another
possibility is some high-profile, disproportionately captivating event
that galvanizes reaction. The fastest adoption of a technology in
modern times was not GPS, or DVD, or even Facebook: it was the U.S.
government's Do Not Call registry. Engineering privacy into browsers,
cell phones, and very large data stores is unlikely; litigation is,
unfortunately, a more likely outcome. Just today a U.S. federal judge
refused to halt a class-action suit against Google's practice of
using its Streetview cars for wi-fi sniffing. The story of privacy,
while old, is entering a fascinating, and exasperating, new phase, and
much remains to be learned, tested, and accepted as normal.
Monday, May 30, 2011
Early Indications May 2011: Firms, Ecosystems, and Collaboratives
The Internet and mobility are changing how resources can be organized to do work. The limited liability joint stock corporation remains useful for assembling capital at scale, which helps build railroads, steel mills, and other industrial facilities. But with manufacturing growing less important in the U.S. economy in the past 50 years, and new tools facilitating coordination and collaboration at scale without the need for 20th-century firms, we are witnessing some fascinating new sizes, shapes, and types of organizations. As Erik Brynjolfsson noted in Sloan Management Review, we need to rethink the very nature of firms, beginning with Ronald Coase's famous theory: "The traditionally sharp distinction between markets and firms is giving way to a multiplicity of different kinds of organizational forms that don't necessarily have those sharp boundaries." Rather than try to construct a typology or theory of these non-firm entities, I will give a series of examples in which people can get things done outside traditional governmental and company settings, then try to draw some preliminary conclusions.
Kickstarter.com
How do art and creativity find funding? The answers have varied tremendously throughout human history: rich patrons, family members, credit card debt, and many forms of government funding. David Bowie issued an asset-backed security with the future revenue streams of the albums he recorded before 1990 as collateral. Given the decline in the audience for buying recorded music, Moody's downgraded the $55 million in debt to one step above junk bonds: Prudential, the buyer of the notes, looks to be the loser here while Bowie was either smart or lucky (but hasn't created much art of note since 1997 when the transaction occurred).
In 2009, a new model emerged: Kickstarter allows artists and other creators to post projects to which donors (not lenders) can commit. If I want to make an independent film, or catalog the works of a graffiti artist, or write a book, I can post the project, and any special rewards to funders, on the site. Donors might receive a signed copy of the finished work, or pdf updates while the work is in process, or tickets to the film's premiere, or other reciprocation.
Donors and artists alike are protected by a threshold requirement: if the required sum is not raised, the project never launches. Kickstarter takes 5% of the funds and Amazon Payments receives another 5% cut. Once completed, the works are permanently archived on the site. The site attracted some notice in 2010 when a user-controlled alternative to Facebook, called Diaspora, raised $200,000.
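As a quick illustration of the all-or-nothing threshold and the two fee layers, using the roughly-5% figures quoted above (actual payment-processing fees vary, and the pledges are invented):

# All-or-nothing funding with two fee layers, using the roughly 5% + 5%
# figures quoted above; actual payment-processing fees vary, and the pledges
# here are invented.

KICKSTARTER_FEE = 0.05
PAYMENT_FEE = 0.05

def settle(goal, pledges):
    """Return the creator's proceeds, or 0.0 if the funding goal is not met."""
    raised = sum(pledges)
    if raised < goal:
        return 0.0  # the project never launches and donors are not charged
    return raised * (1 - KICKSTARTER_FEE - PAYMENT_FEE)

pledges = [25, 50, 100, 250, 75]            # $500 in hypothetical pledges
print(settle(goal=400, pledges=pledges))    # 450.0 -- goal met, fees deducted
print(settle(goal=600, pledges=pledges))    # 0.0   -- threshold not reached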
While it's too early to judge the longevity or scope of the model, Time named it one of the 50 best inventions of 2010.
Software developer networks
Microsoft enjoyed a huge competitive advantage here in the 1990s. As of 2002, one estimate showed about 3.25 million developers in the Microsoft camp. None of these men and women were Microsoft employees, but many were trained, certified, and equipped with tool sets by the company. The developers, in turn, could sense market demand for applications large and small and build solutions in the Windows environment for customers conditioned to seek out the Windows branding in their service provider.
More recently, the App Store model has attracted developers who seek a more direct path to monetization. Apple has hundreds of thousands of applications for the iPhone and iPad; Google's Android platform has nearly as many, depending on counting methodology. Tools are still important, but rather than certification programs, the app store model relies on the market for validation of an application. Obviously dry cleaners and other small businesses still need accounting programs and the like, and Google can't compete with Microsoft for this slice of the business. Even so, enterprise software vendors such as Adobe, Autodesk, Oracle, and SAP must navigate new territory as the app store model, along with Software as a Service, makes such competitors as Salesforce.com and its Force.com developer program a new kind of market entrant.
The app store developers aren't really a network in any meaningful sense of the word: they don't meet, don't know each other, don't exist in a directory of members, affiliates, or prospects. There are developer conferences, of course, but not in the same form that Microsoft pioneered. The networks, particularly the app store developers, certainly aren't even remotely an extension of Apple's, Google's, or HTC's corporate organization: the market model is much more central than any org chart can be.
The market sifts winners from the mass of losers. According to Dutch app counters at Distimo, "We found that only two paid applications have been downloaded more than half a million times in the Google Android Market worldwide to date, while six paid applications in the Apple App Store for iPhone generate the same number of downloads within a two month timeframe in the United States alone." This model shifts risk away from the platform company, which gets a slice no matter which applications emerge as winners and invests nothing in losers.
Not all of these developer networks play inside the lines, as it were. Despite robust security technologies, Sony PlayStations and Apple iPhones have been "unlocked" by 3rd-party teams. The iPhone Dev Team, described by the Wall Street Journal as "a loose-knit but exclusive group of highly-skilled technologists who are considered to be the leaders among iPhone hackers," has contributed a steady stream of software kits for Apple customers to "jailbreak" their devices. The procedure is not illegal but can void certain warranty provisions. The benefit to the user is greater control over the device, access to software not necessarily approved by Apple, and sometimes features not supported by the official operating system.
Because they create value for the user base at the same time they have developed deep understanding of the technical architecture, the Dev Team and similar groups cannot be attacked too vigorously by the platform owners, as Sony is discovering in the PlayStation matter: the online group Anonymous explicitly connected the attacks (while denying that the group conducted them) to Sony's efforts to stop users from unlocking PS3s. Thus far Sony has stated that the attacks have cost $170 million.
The iPhone Dev Team, meanwhile, is so loosely organized that it has functioned quite effectively, solving truly difficult technical challenges in elegant ways, even though its members did not physically meet until they were invited to a German hackers' conference.
Kiva.org
Founded in 2005, Kiva.org is a non-profit microlending effort. The organization, headquartered in San Francisco, recruits both lenders and entrepreneurial organizations around the world. The Internet connects the individuals and groups who lend money to roughly 125 lending partners (intermediaries) in developing countries and in the United States, and the lending partners disburse and collect the loans. Kiva does not charge any interest, but the independent field partner for each loan can charge interest.
After six years, Kiva has loaned more than $200 million, with a repayment rate of 98.65%. More than 500,000 donations have come in, and nearly 300,000 loans have been initiated, at an average size of slightly under $400 US. While the recipients often are featured on the Kiva website, lenders can no longer choose the recipients of their
loan, as was formerly the case. Still, the transparency of seeing the effect of money for a farmer's seeds, or a fishing boat repair, or a village water pump is strong encouragement to the donors, so most money that people give to Kiva is reloaned multiple times.
Kiva and other microfinance organizations challenge the conventional wisdom of economic development, as embodied in large capital projects funded by the World Bank and similar groups. Instead of building massive dams, for example, Kiva works at the individual or small-group level, with high success rates that relate in part to the emotional
and economic investment of the people rather than a country's elites, the traditional point of contact for the large aid organizations. Make no mistake: the scale of the macro aid organizations is truly substantial, and Kiva has never billed itself as a replacement for traditional economic development.
At the current time, Kiva faces substantial challenges:
• the quality of the local lending partners
• currency risk
• balancing supply and demand for microcredit at a global scale
• transparency into lending partners' practices.
Still, the point for our purposes relates to $200 million in loans to the world's poorest, with low overhead and emotional linkages between donors and recipients. Fifteen years ago such a model would have been impossible even to conceive.
Internet Engineering Task Force (IETF)
More than a decade ago, the Boston Globe's economics editor (yes, daily newspapers once had economics editors) David Warsh contrasted Microsoft's pursuit of features to the Internet Engineering Task Force. In the article, the IETF was personified by Harvard University's Scott Bradner, a true uber-geek who embraces a minimalist, functionalist perspective. "Which system of development," Warsh asked, "[Bill] Gates's or Bradner's, has been more advantageous to consumers? . . . Which technology has benefited you more?" Bradner contends that, like the Oxford English Dictionary, the IETF serves admirably as a case study in open-source methodology, though the people making both models work didn't call it that at the time.
Companies in any realm of intellectual property, especially, should consider Warsh's conclusion:
"Simpler standards [in contrast to those coming from governmental or other bureaucratic entities or lowest-common-denominator consensus, and in contrast to many proprietary standards that emphasize features over function] mean greater success. And it was the elegant standards of the IETF, simply written and speedily established, that have made possible the dissemination of innovations such as the World Wide Web and its browsers. . . ."
The IETF's structure and mission are straightforward and refreshingly apolitical:
"The IETF's mission is 'to make the Internet work better,' but it is the Internet _Engineering_ Task Force, so this means: make the Internet work better from a engineering point of view. We try to avoid policy and business questions, as much as possible. If you're interested in these general aspects, consider joining the Internet Society."
A famous aspect of its mission statement commits the group to "Rough consensus and running code." That is, the IETF makes "standards based on the combined engineering judgment of our participants and our real-world experience in implementing and deploying our specifications." The IETF has meetings, to be sure, and a disciplined process for considering and implementing proposed changes, but it remains remarkable that such a powerful and dynamic global communications network is not "owned" by any corporation, government, or individual.
There are several conclusions to this line of thinking.
Infrastructure
The emergence of powerful information networks is shifting the load traditionally borne by public or other forms of infrastructure. The power grid, roads, schools, Internet service providers (ISPs) -- all will be utilized differently as the capital base further decentralizes. In addition, given contract manufacturing, offshore programming, cloud computing, and more and more examples of software as a service, the infrastructure requirements for starting a venture have plummeted: leadership, talent, and a few laptops and smartphones are often sufficient.
Rethinking size
The importance of scale can at times be diminished. For example, Jeff Vinik didn't need the resources of Fidelity Investments to run his hedge fund after he quit managing the giant Magellan mutual fund. In the 1950s, one reason a hotel investor would affiliate with Holiday Inn was for access to the brand and, later, the reservations network. Now small inns and other lodging providers can work word-of-mouth and other referral channels and be profitable standing alone.
Talent
As Linux and other developer networks grow in stature and viability, managing the people who remain in traditional organizations will likely become more difficult. What Dan Pink reasonably calls "the purpose motive" is a powerful spur to hard work: as grand challenges have shown, people will work for free on hard, worthy problems. Outside of those settings, bureaucracies are not known for proving either worthy challenges or worthy purposes.
One defining fact of many successful startups -- Netflix, Zappos, and Skype come to mind -- is their leaders' ability to put profitability in the context of doing something "insanely great," in the famous phrasing of Steve Jobs. Given alternatives to purpose-challenged cubicle-dwelling, an increasing number of attractive job candidates will opt out of traditional large organizations. Harvard Business School and other institutions are seeing strong growth of a cadre of students who resist traditional employment and more importantly, traditional motivation. Both non-profits and startups are challenging investment banking and consulting for ambitious, capable leaders of the next generation.
Revisiting Coase
In the end, the purpose of a firm, to be an alternative to market transactions, is being rescaled, rethought, and redefined. Firms will always be an option, to be sure, but as the examples have shown, no longer are they a default for delivering value. One major hint points to the magnitude of the shift that is well underway: contrasted to "firm," the English vocabulary lacks good words to describe Wikipedia, Linux, Skype, and other networked entities that can do much of what commercial firms might once have been formed to undertake.
Kickstarter.com
How do art and creativity find funding? The answers have varied tremendously throughout human history: rich patrons, family members, credit card debt, and many forms of government funding. David Bowie issued an asset-backed security collateralized by the future revenue streams of the albums he recorded before 1990. Given the decline in sales of recorded music, Moody's later downgraded the $55 million in debt to one step above junk: Prudential, the buyer of the notes, looks to be the loser here, while Bowie was either smart or lucky (though he has created little art of note since the transaction in 1997).
In 2009, a new model emerged: Kickstarter allows artists and other creators to post projects to which donors (not lenders) can commit. If I want to make an independent film, or catalog the works of a graffiti artist, or write a book, I can post the project, and any special rewards to funders, on the site. Donors might receive a signed copy of the finished work, or pdf updates while the work is in process, or tickets to the film's premiere, or other reciprocation.
Donors and artists alike are protected by a threshold requirement: if the required sum is not raised, the project never launches. Kickstarter takes 5% of the funds and Amazon Payments receives another 5% cut. Once completed, the works are permanently archived on the site. The site attracted some notice in 2010 when a user-controlled alternative to Facebook, called Diaspora, raised $200,000.
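To make the mechanics concrete, here is a minimal sketch in Python of the all-or-nothing threshold and the fee split described above. The 5% figures are the ones cited in the text; the project goal and pledge amounts are purely hypothetical.

    # Minimal sketch of Kickstarter-style all-or-nothing funding.
    # The 5% platform fee and 5% payment-processing fee are the figures
    # cited above; the goal and pledges below are purely hypothetical.

    PLATFORM_FEE = 0.05
    PAYMENT_FEE = 0.05

    def settle(goal, pledges):
        """Return (funded, payout_to_creator). Pledges are collected only
        if the goal is met; otherwise nobody is charged."""
        total = sum(pledges)
        if total < goal:
            return False, 0.0  # threshold not reached: the project never launches
        fees = total * (PLATFORM_FEE + PAYMENT_FEE)
        return True, total - fees

    # Hypothetical example: a $10,000 documentary project
    funded, payout = settle(10000, [25, 100, 50, 5000, 4900])
    print(funded, round(payout, 2))  # True 9067.5

The threshold is what protects both sides: a creator is never obliged to deliver an underfunded project, and backers are never charged for one.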
While it's too early to judge the longevity or scope of the model, Time named it one of the 50 best inventions of 2010.
Software developer networks
Microsoft enjoyed a huge competitive advantage here in the 1990s. As of 2002, one estimate put about 3.25 million developers in the Microsoft camp. None of these men and women were Microsoft employees, but many were trained, certified, and equipped with tool sets by the company. The developers, in turn, could sense market demand for applications large and small and build solutions in the Windows environment for customers conditioned to seek out the Windows brand in a service provider.
More recently, the App Store model has attracted developers who seek a more direct path to monetization. Apple has hundreds of thousands of applications for the iPhone and iPad; Google's Android platform has nearly as many, depending on counting methodology. Tools are still important, but rather than certification programs, the app store model relies on the market to validate an application. Dry cleaners and other small businesses still need accounting programs and similar line-of-business software, and Google cannot yet compete with Microsoft for that slice of the business. Even so, enterprise software vendors such as Adobe, Autodesk, Oracle, and SAP must navigate new territory as the app store model, along with Software as a Service, makes competitors such as Salesforce.com, with its Force.com developer program, a new kind of market entrant.
The app store developers aren't really a network in any meaningful sense of the word: they don't meet, don't know each other, and don't exist in a directory of members, affiliates, or prospects. There are developer conferences, of course, but not in the form that Microsoft pioneered. The networks, particularly the app store developers, certainly aren't even remotely an extension of Apple's, Google's, or HTC's corporate organization: the market model is much more central than any org chart can be.
The market sifts winners from the mass of losers. According to Dutch app counters at Distimo, "We found that only two paid applications have been downloaded more than half a million times in the Google Android Market worldwide to date, while six paid applications in the Apple App Store for iPhone generate the same number of downloads within a two month timeframe in the United States alone." This model shifts risk away from the platform company, which gets a slice no matter which applications emerge as winners and invests nothing in losers.
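A back-of-the-envelope model shows the risk shift. The sketch below assumes, purely for illustration, the widely reported 30% platform commission of the era and a tiny hypothetical catalog of one hit and two also-rans; the point is that the platform collects its slice of every sale while developers alone bear the development costs of the losers.

    # Back-of-the-envelope model of the app store risk shift: the platform
    # takes a commission on every sale and invests nothing in development.
    # The 30% commission and the catalog below are illustrative assumptions.

    COMMISSION = 0.30

    # (price in dollars, paid downloads, developer's development cost)
    catalog = [
        (0.99, 500000, 40000),  # a rare hit
        (1.99,   2000, 25000),  # a typical also-ran
        (2.99,    150, 60000),  # a flop
    ]

    platform_revenue = sum(price * units * COMMISSION
                           for price, units, _cost in catalog)
    developer_profit = [price * units * (1 - COMMISSION) - cost
                        for price, units, cost in catalog]

    print(round(platform_revenue, 2))               # the platform gains on every title
    print([round(p, 2) for p in developer_profit])  # the developers absorb the losses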
Not all of these developer networks play inside the lines, as it were. Despite robust security technologies, Sony PlayStations and Apple iPhones have been "unlocked" by third-party teams. The iPhone Dev Team, described by the Wall Street Journal as "a loose-knit but exclusive group of highly-skilled technologists who are considered to be the leaders among iPhone hackers," has contributed a steady stream of software kits that let Apple customers "jailbreak" their devices. The procedure is not illegal but can void certain warranty provisions. The benefits to the user are greater control over the device, access to software not necessarily approved by Apple, and sometimes features not supported by the official operating system.
Because they create value for the user base while developing a deep understanding of the technical architecture, the Dev Team and similar groups cannot be attacked too vigorously by the platform owners, as Sony is discovering in the PlayStation matter: the online group Anonymous explicitly connected the attacks on Sony (while denying that it conducted them) to Sony's efforts to stop users from unlocking PS3s. Sony has stated that the attacks have cost it $170 million thus far.
The iPhone Dev Team, meanwhile, is so loosely organized that its members did not physically meet until they were invited to a German hackers' conference; even so, the group has functioned quite effectively, solving truly difficult technical challenges in elegant ways.
Kiva.org
Founded in 2005, Kiva.org is a non-profit microlending effort. The organization, headquartered in San Francisco, recruits both lenders and entrepreneurial organizations around the world. The Internet connects the individuals and groups who lend money to roughly 125 field partners (local intermediaries) in developing countries and in the United States, and those partners disburse and collect the loans. Kiva itself charges no interest, but the independent field partner behind each loan can.
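The division of labor is easy to lose in the prose, so here is a minimal sketch, with entirely hypothetical numbers, of the flow just described: lenders fund the principal through Kiva, which adds no interest, while the local field partner that disburses and collects the loan may charge the borrower its own rate.

    # Minimal sketch of the Kiva flow described above. Kiva itself adds no
    # interest; any interest charged to the borrower stays with the local
    # field partner. The amounts and rate below are hypothetical.

    def kiva_loan(principal, lender_pledges, field_partner_rate):
        """Return (what the borrower repays, what lenders get back)."""
        assert abs(sum(lender_pledges) - principal) < 1e-9, "loan must be fully funded"
        borrower_repays = principal * (1 + field_partner_rate)  # interest kept by the field partner
        lenders_get_back = list(lender_pledges)                 # principal only, no interest to lenders
        return borrower_repays, lenders_get_back

    repaid, returned = kiva_loan(400.0, [25.0, 25.0, 100.0, 250.0], field_partner_rate=0.20)
    print(repaid, returned)  # 480.0 [25.0, 25.0, 100.0, 250.0]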
After six years, Kiva has loaned more than $200 million, with a repayment rate of 98.65%. More than 500,000 donations have come in, and nearly 300,000 loans have been initiated, at an average size of slightly under US$400. While recipients are often featured on the Kiva website, lenders can no longer choose the specific recipients of their loans, as was formerly the case. Still, the transparency of seeing what the money does, whether it pays for a farmer's seeds, a fishing boat repair, or a village water pump, is strong encouragement, and most money channeled through Kiva is re-lent multiple times.
Kiva and other microfinance organizations challenge the conventional wisdom of economic development, as embodied in large capital projects funded by the World Bank and similar institutions. Instead of building massive dams, for example, Kiva works at the individual or small-group level, with high success rates that relate in part to the emotional and economic investment of the borrowers themselves rather than of a country's elites, the traditional point of contact for the large aid organizations. Make no mistake: the scale of the macro aid organizations is truly substantial, and Kiva has never billed itself as a replacement for traditional economic development.
At the current time, Kiva faces substantial challenges:
• the quality of the local lending partners
• currency risk
• balancing supply and demand for microcredit at a global scale
• transparency into lending partners' practices.
Still, the point for our purposes is $200 million in loans to some of the world's poorest people, delivered with low overhead and with emotional linkages between donors and recipients. Fifteen years ago, such a model would have been impossible even to conceive.
Internet Engineering Task Force (IETF)
More than a decade ago, the Boston Globe's economics editor (yes, daily newspapers once had economics editors), David Warsh, contrasted Microsoft's feature-driven approach to software with the standards work of the Internet Engineering Task Force. In the article, the IETF was personified by Harvard University's Scott Bradner, a true uber-geek who embraces a minimalist, functionalist perspective. "Which system of development," Warsh asked, "[Bill] Gates's or Bradner's, has been more advantageous to consumers? . . . Which technology has benefited you more?" Bradner contends that, like the Oxford English Dictionary, the IETF serves admirably as a case study in open-source methodology, even though the people who made both models work didn't call it that at the time.
Companies in any realm of intellectual property, especially, should consider Warsh's conclusion:
"Simpler standards [in contrast to those coming from governmental or other bureaucratic entities or lowest-common-denominator consensus, and in contrast to many proprietary standards that emphasize features over function] mean greater success. And it was the elegant standards of the IETF, simply written and speedily established, that have made possible the dissemination of innovations such as the World Wide Web and its browsers. . . ."
The IETF's structure and mission are straightforward and refreshingly apolitical:
"The IETF's mission is 'to make the Internet work better,' but it is the Internet _Engineering_ Task Force, so this means: make the Internet work better from a engineering point of view. We try to avoid policy and business questions, as much as possible. If you're interested in these general aspects, consider joining the Internet Society."
A famous aspect of its mission statement commits the group to "Rough consensus and running code." That is, the IETF makes "standards based on the combined engineering judgment of our participants and our real-world experience in implementing and deploying our specifications." The IETF has meetings, to be sure, and a disciplined process for considering and implementing proposed changes, but it remains remarkable that such a powerful and dynamic global communications network is not "owned" by any corporation, government, or individual.
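"Running code" is easy to produce precisely because the standards are simply written. As one small illustration (not drawn from Warsh's article), the sketch below uses nothing but Python's standard socket library to speak HTTP/1.1, an IETF-specified protocol, to the IETF's own web server; a real application would of course use a proper HTTP client library.

    # A small taste of "running code" against a simply written IETF
    # standard: a bare-bones HTTP/1.1 GET over a TCP socket.
    # Illustrative only; real software should use an HTTP client library.
    import socket

    HOST = "www.ietf.org"  # any web server would do
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    # The status line alone shows the protocol at work, e.g. "HTTP/1.1 200 OK"
    # (or a redirect to HTTPS, depending on the server's configuration).
    print(response.split(b"\r\n", 1)[0].decode("ascii", "replace"))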
Several conclusions follow from this line of thinking.
Infrastructure
The emergence of powerful information networks is shifting the load traditionally borne by public or other forms of infrastructure. The power grid, roads, schools, Internet service providers (ISPs) -- all will be utilized differently as the capital base further decentralizes. In addition, given contract manufacturing, offshore programming, cloud computing, and more and more examples of software as a service, the infrastructure requirements for starting a venture have plummeted: leadership, talent, and a few laptops and smartphones are often sufficient.
Rethinking size
Scale matters less than it once did, at least in some settings. Jeff Vinik, for example, didn't need the resources of Fidelity Investments to run his hedge fund after he quit managing the giant Magellan mutual fund. In the 1950s, one reason a hotel investor would affiliate with Holiday Inn was access to the brand and, later, the reservations network. Now small inns and other lodging providers can work word-of-mouth and other referral channels and be profitable standing alone.
Talent
As Linux and other developer networks grow in stature and viability, managing the people who remain in traditional organizations will likely become more difficult. What Dan Pink reasonably calls "the purpose motive" is a powerful spur to hard work: as grand challenges have shown, people will work for free on hard, worthy problems. Outside of those settings, bureaucracies are not known for providing either worthy challenges or worthy purposes.
One defining trait of many successful startups -- Netflix, Zappos, and Skype come to mind -- is their leaders' ability to put profitability in the context of doing something "insanely great," in Steve Jobs's famous phrasing. Given alternatives to purpose-challenged cubicle-dwelling, an increasing number of attractive job candidates will opt out of traditional large organizations. Harvard Business School and other institutions are seeing strong growth in a cadre of students who resist traditional employment and, more importantly, traditional motivation. Both non-profits and startups are challenging investment banking and consulting for the ambitious, capable leaders of the next generation.
Revisiting Coase
In the end, the purpose of a firm as Coase described it, to serve as an alternative to market transactions, is being rescaled, rethought, and redefined. Firms will always be an option, to be sure, but as these examples have shown, they are no longer the default vehicle for delivering value. One strong hint at the magnitude of the shift already underway: in contrast to "firm," the English vocabulary lacks a good word for Wikipedia, Linux, Skype, and the other networked entities that can do much of what commercial firms might once have been formed to undertake.