Thursday, December 28, 2017

Early Indications December 2017: Unexpected Consequences

Things rarely unfold the way we expect them to, especially when people adopt a dramatically new technology. Henry Ford didn't foresee McDonald's or suburbs; the Wright brothers could not have anticipated jet engines' huge carbon footprint. Two items featuring prominently in this month's news both illustrate how expectations can fall so wide of eventual reality.

First, the hopeful premise of the World Wide Web seemed obvious: give more people access to more information and people could make better decisions, ask and answer more questions, and generally interact more easily with knowledge. Coming from my PhD fact-chasing background, it was this version of the Internet -- massive, instant, free research library -- with which I fell in love. The world of avatars, multiple identities, and cyberspace as a replacement for physical people never really interested me, but it was in many ways the latter version, as it evolved, that won out.

Fast forward to Brexit, to the Trump campaign, to politics in both democracies and other systems. The Internet’s access to more knowledge doesn't, in general, make people more inquisitive or better informed; it makes them more tribal. I'm struck by Clay Shirky's insight from more than ten years ago that extreme, antisocial online behavior isn't a bug, it's an essential feature of massive virtual social systems. Old forms of authority, whether teachers, Miss Manners, or political parties, no longer hold as much sway. Humanity's basest instincts too often win out over reasoned debate, and I have to give Donald Trump and/or his handlers credit for seeing the opening that shift provided.

Trolling, defined as "antagonizing others by deliberately posting inflammatory, irrelevant, or offensive content," drowned out reasoned debate, long assumed to be the democratic ideal. The Economist, founded in 1843, stated as its mission "to take part in a severe contest between intelligence, which presses forward, and a timid, unworthy ignorance obstructing our progress." Rather than seek out facts and reasoned opinion, many now instead define reality as what they like, and Facebook in particular has been assiduous in giving it to them, often to the exclusion of anything outside an algorithmic bubble.

Thing 2: Bitcoin began life in 2008 as a distributed payment system. In the words of the Bitcoin Foundation’s website, “Bitcoin uses peer-to-peer technology to operate with no central authority or banks; managing transactions and the issuing of bitcoins is carried out collectively by the network. Bitcoin is open-source; its design is public, nobody owns or controls Bitcoin and everyone can take part. Through many of its unique properties, Bitcoin allows exciting uses that could not be covered by any previous payment system.”

In terms of "everybody taking part," that feels true. There's an old saying attributed to J.P. Morgan, John D. Rockefeller, or even Warren Buffett. None of these attributions has been authenticated, but Joe Kennedy (John F.'s father) did apparently utter something close: "It is said that he knew it was time to get out of the market when he received stock tips from a shoe-shine boy." Today, waiters, cabbies, various people on airplanes -- everyone seems to have an opinion on (and apparently a position in) cryptocurrency. In a year the price soared from about $900 to $18,000, at which point there was a 20% drop, but the price is currently still in the $15,000s. That is not the behavior of a distributed payment system.

On the other hand, what _is_ the price a reflection of? Nobody really knows; Bitcoin is apparently decoupled from the financial system so is not correlated (positively or negatively) with any asset class, financial indicator, or other anchor. This statement from Angela Walch, who studies blockchain at St. Mary’s University School of Law, struck me as extremely astute. “I think it’s a bubble, but I also think it’s very hard to tell what the value of Bitcoin should be,” she said on the POLITICO Money podcast. “It could be zero dollars, it could be five dollars, or it could be a million dollars.” Showing how much I know, I called the Bitcoin peak back in the summer, so far be it from me to say it can't go up any higher, bubble or no bubble.

Thus we are living through extreme, unexpected circumstances: even six months ago, it was impossible to predict people taking out home equity loans to bet their houses on a cryptocurrency the likes of which nobody has ever seen. The lure of the easy win, whether from stocks in 1929, Internet startups in 2000, or residential real estate in 2008, appears to sing its siren song once again. In an unregulated market in uncharted territory, that people will put their life's savings or more (at 20% interest) at risk for the "sure thing" is another illustration of how even savvy, technically brilliant people operating with few preconceptions could still see their creation veer in completely unexpected directions.

Thursday, November 30, 2017

November 2017 Early Indications: 10 Predictions from 10 Years Ago

A quick note: last month’s 20th anniversary newsletter appears to have bounced off various organizations’ content filters in both long-form text and Word attachment formats. It’s here if you missed it.

10 years ago, after doing a retrospective in October, I looked ahead in November. What did I think would matter in 2017? The short answer: cloud computing, handsets/smartphones, health (including concerns about cellphone radiation), IoT, security issues, a dialectic between attention-gathering and privacy, new human-computer interfaces, education, crowds, and technology-mediated emotional connection. Going in order, what happened?

Cloud computing: Amazon Web Services became a profit engine for the otherwise low-margin Seattle mega-retailer. Microsoft chose a new CEO based in large measure on his ability to lead the company to a cloud-centric future. Adobe and other software companies went all-in on software as a subscription.
Score: Hit

Handsets/smartphones: Facebook on smartphones has become an essential utility for more than a billion people. Apple and Google deposed global smartphone giants Nokia and RIM in a matter of a few quarters. Television viewed on smart devices is reshaping entire industries: “There's little question as to demand, particularly after seeing adoption in Japan and Korea, but allocating the money may prove to be difficult.” That last bit is an understatement: ESPN, to name one content provider caught in the middle, has laid off hundreds of staffers (the most recent wave was this week) as TV economics continue to be rewritten.
Score: Hit

Health: Radiation concerns never really slowed adoption. Electronic medical records are by no means universal. South African sprinter Oscar Pistorius (whom I mentioned) made the news, but not for his famous carbon-fiber legs. Digital prosthetics, including exoskeletons, are progressing, but not yet widely adopted.
Score: Miss 

IoT: “All told, there are dozens of billions of items that can connect and combine in new ways.” Indeed there are.
Score: Hit

Security concerns: The TJMaxx security failure, which I noted had leaked 47 million credit card numbers, feels quaint alongside the massive breaches at the U.S. Office of Personnel Management (22 million records related to background checks), Yahoo (3 billion email logins), and Equifax (147 million credit ratings and related data). In addition, the IoT vulnerabilities that have emerged with everything from baby monitors to Jeep Cherokees constitute an entirely new front in the battle for secure computing.

Oh, and saying “one must assume viruses will attack . . . powerplant controls” anticipated the Stuxnet worm, first uncovered in 2010.
Score: Hit

A dialectic between attention-gathering and privacy
“Who owns my trail of digital breadcrumbs that everyone from Acxiom and Amazon to Vodafone and Yahoo is trying to use for commercial purposes?” The short answer has emerged: Google and Facebook. Everyone from iRobot (the new Internet-connected Roomba sends maps of your floor plan to the mothership for monetization) to cable and wireless providers is trying to capture their share of the market.
Score: Hit

New human-computer interfaces
No, it wasn’t haptics or electrodermal technologies, and OLED and e-ink aren’t really earth-shattering, but the Alexa/Siri/Cortana gold rush validates the prediction.
Score: Hit

Education
“How do schools prepare young people for jobs and organizational designs that have yet to be invented?” Answer: Not very well. The MOOC fad has yet to converge into a stable isotope, Khan Academy is important but still a small player, and Lynda is a feature within LinkedIn, which is owned by Microsoft. No Google- or Amazon-scale player has emerged.
Score: Miss

“Clay Shirky has suggested that flame wars are essentially inevitable outcomes, rather than side effects, of social software. . . . Given that more people will be in contact with more people in new ways, how will new rules of behavior take shape? Will the lack of interpersonal civility (exemplified in the golden age of the ad hominem attack, offline and on-) evolve? If so, in which direction?” Said behavior is shaping politics in the US, UK, Italy, and France, to name a few. It’s safe to say this is a big deal.
Score: Hit

Technology-mediated emotional connection
Forget dancing with earbuds rather than to speakers, people naming their Roomba, and even the reduction in teenage inhibition when texting rather than talking: Facebook has 1.4 billion people who check in _every day_. Another 700 million log in at least monthly.
Score: Hit

Here’s the original newsletter for reference, edited for space considerations:

As promised last month, here are ten information-technology-related areas to watch over the next ten years. Rather than attempting to be systematic, this list will merely suggest topic areas and point to some relevant data points; otherwise, a ten-item list would soon get unwieldy. 

1) The New Physical Layer

Although everything from power grids to bridges and ports to railways is being built or rebuilt, our focus here is on computing and networking. In particular, power and bandwidth will be transformed in the next decade.

Taking power first, cloud computing vendors are waging an arms race as they build data centers to power a range of offerings loosely called "web services." Because of the intensity of their power consumption, these often appear near cheap hydroelectric power sources (which themselves may be affected by global climate changes). It's estimated, for example, that Google's data center, housed in two adjacent buildings in Oregon, contains 1.3 million computing cores on 9,000 racks per structure, and photographs of the cooling towers are staggering.

Something else is going on: Caterpillar reported that its Q2 07 revenues from sales of backup generators, such as those used in data centers, were up 41% at a time when overall U.S. construction equipment sales are slumping. The growth of "cloud computing" feels as though it's related to the trend toward virtualization, where resources can be located, physically and/or logically, away from their locus of deployment.  

2) Enmeshed

The distinction between telephones and PCs is getting fuzzier every year, as we have noted, and the iPhone presents a clear case in point: running a Unix variant, it can be spoken at, but performs best moving and manipulating images and data. Mobile phones, ultra-mobile PCs (UMPCs), gaming devices including Nokia's N-Gage, handheld PCs, televisions, and other devices (such as standalone GPS trackers) will continue to converge. 

Television over mobile handsets is estimated to reach over 100 million users by 2009, and the number should soar further in conjunction with the 2010 World Cup. Expect to see spirited competition among content owners like News Corp, handset manufacturers, network equipment firms (including heavyweights Qualcomm, Nokia, and potentially Intel), and carriers such as Vodafone and T-Mobile. Finally, given that [lots of] advertising is involved, expect something unexpected from Google. There's little question as to demand, particularly after seeing adoption in Japan and Korea, but allocating the money may prove to be difficult.

3) Healthy, Wealthy, and Wired

Entire books need to be written on various facets of information, technology, and health. A few bullets suggest the reach of potential issues:

-Electronic medical records have the potential to improve care, save money, and enhance the patient's experience with his or her health care system. EMRs also could help transform the economics of health insurance, lead to data breaches of untold pain and economic impact, and alter the role of physicians relative to insurers, employers, and patients. Automating the current, broken U.S. system (I can't speak for other countries), feels unappealing, which means that implementing EMRs implies deeper transformation, parallel to but much bigger than the changes brought about by corporate ERP implementations.

-What does it mean to be human? Mechanical joints and prostheses are rapidly becoming more sophisticated and digitized. When does a disability become an unfair advantage? Oscar Pistorius is a South African sprinter whose 400 meter time is about a second shy of Olympic qualifying. He's also a double amputee whose carbon-fiber "legs" are challenging old ideas about fair competition. 

-What will be the long-term effects of nearfield electromagnetic emissions, particularly after they have been focused through the ear directly into people’s brains? Cell phone antennas are a potential hazard, but so are earbuds and Bluetooth radios, and nobody knows yet what might or could happen across broad populations with widely varying spectrum allocations, cultural patterns, and governmental regulations.

4) Connection Machines

As more kinds of things get connected to information networks, the potential for unexpected consequences gets ever more interesting to contemplate. Just listing the number of classes of devices that can or will soon interoperate gives a sense of scale:

-telephones, the wireless variety of which can be understood as beacons, bar-code scanners, and network nodes - potentially in a mesh configuration
-motor- and other industrial controllers
-surveillance cameras (of which there are over 2,000 in Chicago alone)
-sensors, whether embedded in animals, affixed to pharmaceutical packaging, or attached to engine components to predict mechanical failure.

All told, there are dozens of billions of items that can connect and combine in new ways.

5) Virtual Fences

It's extremely difficult to delimit this space. Risk, trust, identity, and security are all intertwined, and each has implications for the others. At base, the questions of "who are you," "can you prove it," and "who else knows your information" are all in play, all over the world.

Given the numbers of networked devices listed above, one must assume viruses will attack everything from powerplant controls to cellphone networks to several types of security systems.

The biggest data breach I'm aware of is the 47 million credit-card numbers lost by TJX (parent company to TJ Maxx, Marshalls, and HomeGoods) as a result of improperly configured in-store wireless networks. 

6) Of Memory and Forgetting

Many questions relating to monetization of data are relevant here. Who owns my trail of digital breadcrumbs that everyone from Acxiom and Amazon to Vodafone and Yahoo is trying to use for commercial purposes? In healthcare, who holds, owns, and controls my lifelong record of prescriptions (filled and unfilled), medical test results, over-the-counter and supplement purchases (helpfully recorded by loyalty cards), public health data, and even caloric intake and, at the health club, expenditure?

7) The Human Peripheral

Traditionally, people connected to the computer through punch tapes or cards, keyboards, and screens. That list is getting longer, quickly.

It's been five years already since Cambridge and MIT researchers shook hands across the Atlantic. Haptic (3-D touch-based) interfaces are entering the mass market, most visibly via the Nintendo Wii, which is outselling conventional game consoles from Sony and Microsoft.

Vyro has developed a Bluetooth device about the size of a gum eraser. It measures stress through sweat gland activity in the skin, so one application is a clever game in which two players race their cars on a Bluetooth phone, the winner being the one who's more relaxed.

-New screens
Organic Light Emitting Diode (OLED) technology is coming to market soon, in Sony televisions for instance. Compared to LCD, OLED is brighter, more power efficient, and thinner - but it reacts badly to water. E-ink and other flexible displays are making similar progress.

8) Education

How do schools prepare young people for jobs and organizational designs that have yet to be invented? To take two current examples, where did today's generation of sushi chefs and yoga teachers get their training? Where will robot mechanics, Internet addiction counselors, and Chinese lawyers get started? Getting computers (possibly through One Laptop Per Child or Project Inkwell) to the masses will start a process but by no means finish it.

As online course delivery ramps up, questions arise about architecture: what should a virtually-enabled classroom look like? Where should schools be built, particularly in developing environments? What should they look like? What is the role and function of a public library in a world in which the place of print is in major upheaval?

9) {Your Theme Here}

As blogging, social networking, and user-generated content proliferate, we're seeing one manifestation of a larger trend toward delegitimization of received cultural authority. Instead of trusting politicians, professional reviewers, or commercial spokespeople, many people across the world are putting trust in each other's opinions: Zagat is a great example of formal ratings systems being challenged by masses of uncredentialed, anonymous diners. Zagat also raises the issue of when crowds can be "wise," cannot possibly be "wise," or generally do not matter one way or the other.

Clay Shirky has suggested that flame wars are essentially inevitable outcomes, rather than side effects, of social software. Many blogs have comments turned off because of abuse that simply takes too long to monitor and manage. Given that more people will be in contact with more people in new ways, how will new rules of behavior take shape? Will the lack of interpersonal civility (exemplified in the golden age of the ad hominem attack, offline and on-) evolve? If so, in which direction?

10) Silicon Emotion

People are interacting with other people with multiple layers of computing and communications in between. The nature of emotional expression is changing as a result.

-Dancing alone
What does it mean when tens of millions of music lovers listen in isolation, through headphones, rather than in rooms, or concert halls?

-Inhibition deficiency
In addition to flaming, people will say things electronically they would be much more hesitant to articulate verbally. Watching teenagers IM each other fluently and unabashedly, then stand with each other awkwardly after school, is a fascinating exercise. In the Nordics, the second-most prevalent use of text messaging (after coordination) is "grooming" - flirting.

-Robot love
The Roomba has inspired tremendous affection in its brief lifetime. (See the fascinating paper by Ja-Young Sung, Lan Guo, Rebecca E. Grinter, and Henrik I. Christensen, all of Georgia Tech, entitled "'My Roomba is a Rambo': Intimate Home Appliances" for compelling evidence on this point.) 
One final word: ten years is probably too long a time horizon for some of these areas, but institutional change, in education for instance, is always the slow part that will balance out some of the blink-of-an-eye things we’re about to witness.

Monday, October 30, 2017

Early Indications October 2017: Special 20th anniversary issue

Many thanks to my two longest-running readers, Lawrence Baxter and John Parkinson, who have been along since the beginning.

Re-reading well over 300 of these newsletters (in the busy years, there were up to 3 newsletters a month), I’m struck by several things:

1) How many developments I got wrong, or missed. Napster (initially), the DVD format, MP3, Facebook, the iPod, and Bitcoin, for starters.

2) How little has happened since 2008 (when Tesla delivered its Roadster). Apart from Uber (and maybe Bitcoin), what else qualifies as a breakthrough? In contrast, the Linux project and the World Wide Web first appeared on Usenet only two months apart in 1991. Then between 1997 and 2007, we saw Google, Skype, Wi-Fi, the iPod and iPhone, web-based email, text messaging, the Kindle, GPS commercialization, digital photography, Napster, Amazon Web Services (first noted here in 2002, alongside Keyhole, now Google Earth), YouTube, and Facebook (of course). “Software” (both operating systems and applications) means entirely different things now vs. 20 years ago. So does connectivity: cable modems (especially in the US) have made a big difference. Many issues in the early years speculated on what might happen after 56 kb/s dial-up connections.

3) What’s the next bubble? As AI gets better at unsupervised learning, many computer programmers will become redundant. Our current rush to STEM as the answer to everything will need to be balanced with education in critical thinking, argument from evidence, new kinds of jobs, and new models for civic discourse.

4) When things actually happened. As I noted, AWS was already around in 2002. In 1998, I speculated about networked lightbulbs; Internet refrigerators were mentioned in a 1999 newsletter. Amazon registered the domain in 1998. Linux shook things up quite a bit: at the extreme, besides the VA Linux stock bubble, there was an open-source mutual fund that launched in 1999 (and folded not long thereafter). 3D printing was in the March 2001 newsletter. In December 2001, the iPod was “a device to watch.” McDonald’s installed its first Wi-Fi in 2003, just before Skype launched. In May 2004, the first DARPA robotic vehicle challenge produced no finishers. Thomas Piketty and Elizabeth Warren were both noting class differences in income growth in a 2006 newsletter, well before they each became famous for such. February 2008 and August 2013 both saw bicycle issues, a bit of foresight I’m proud of given the potential of bicycles (and e-bikes) to change urban landscapes in the next 20 years.

Enough of the front matter. Herewith are 20 years of greatest hits (and misses), edited for clarity and in some cases, updated:

January 2000: Calling the tech bubble

Reflections on the state of e-commerce
While the AOL-Time Warner deal has grabbed all kinds of attention, it's only one of several announcements that form a larger pattern.  The one-line summary of this pattern is that e-business has become just plain business, period.  That is to say, companies like Ford, Wal-Mart, AOL, and Royal Bank of Scotland are "getting it" and making their Internet activities an integral part of the business's identity.  A flip side to this is that the money people have so overwhelmed the tech and Internet sectors that it's getting harder and harder to find passionate geeks who deeply want to change the world by doing something "insanely great," as they used to say at Apple.  Think about it: how many people at Petopia or the dozens of startups like it are thrilled and excited about the prospect of turning the world upside down the way Netscape did with Navigator?  I was in a restaurant full of the money people who were alternately courting and browbeating tech entrepreneurs at power breakfasts earlier this week in San Francisco, and I really couldn't feel any passion in the room.  Rather, it was all Stanford MBA logic that I felt, running the numbers, vetting marketing plans, lining up this CTO with that CFO and above all counting the millions to be raked in at the "liquidity event," as they're called.

I have to believe there will be a change in climate.  First, all the suits from the Fortune 50 will not touch this sector, as they're beginning to, without leaving their mark. Another reason to think the worm will turn relates to the conditions in Silicon Valley.  People who want any kind of life need to plunk down far in excess of $400,000 for a basic suburban house with any kind of yard.  Even so, the horrible traffic on highway 101 means that 1 to 3-hour commutes are standard.  The working conditions are exhausting, and one wonders how long the market will sustain this, especially as Atlanta, Austin, the Research Triangle, Ann Arbor, Boston, and countless other places are developing Internet subcultures of their own.  Look at every other economic monoculture, whether centered on mills in Manchester or Lowell, autos in Detroit, tobacco in different parts of the American South, steel in Pittsburgh or Gary -- no matter what the level of prosperity, it never lasts forever.
January 2001: Getting MP3 way wrong

The audio writer Corey Greenberg wrote that “Depending on the music track, I heard varying degrees of the same three characteristics from every MP3-encoded file I compared to the original PCM track:
1. a noticeable roll-off in the highest octaves;
2. an edgy emphasis in the low treble, which added a tinny, unwelcome distortion to vocals, horns and guitars; and
3. less low-level detail, such as reverb tails and echoes - the "ambience" that gives the sense that the music is happening in a real room.”

Those three characteristics denote, in audiophile-speak, deficits in traits that lend music emotional tangibility.  New formats - super audio CD (SACD) and DVD audio - increase the believability of the artifact, but you have to listen for the improvement to hear it - and the difference won't come through on a car stereo or boombox.  To the extent that the mass market listens to music through headphones, on computers, or in cars, MP3 can be fully adequate. Given what's happening to graphics and compression technology, we'll be seeing rapid advances in digital video that will extend to home users - Playstation 2 and Xbox are relevant precursors here.  The power of the Napster incursion is partially indicated by the fact that old technologies die slowly: VCRs outsold DVD players over the 2000 holiday, according to the Centris research firm.

VCRs: 7.4 million units
DVD players: 3.5 million
MP3 players: 600,000

So the bottom line is that CD may be endangered, but my sense is that richer, not poorer, media will surpass it. 
September 2001: 9/11 happened in a different world, before Twitter and Facebook

Networked Commerce Update I: Six Degrees of Connection

Before I start, a vital link: if you are worried about someone's safety in the aftermath of the terrorist attacks, you can check one of a number of websites.  Here's the most visible one, with links to others:

This week's attacks and the subsequent deaths are being painfully felt.  Everybody, it seems, knows someone who knows someone who died or was injured.  Here in Boston, the odds of being connected are especially high, while our hearts go out to the New Yorkers living with so much uncertainty and dread.  At this time of tragedy, the phenomenon known as "six degrees of separation" feels different.  Each of us connects to the computer network administrators, or the daughters, or the fire fighters, or the air travelers for some personal reason: I've flown American 11 to LA probably a dozen times, for example, and everyone has some similar story that connects him or her to some facet of the tragedy.  In this moment, rather than talk about cell phone antennas or Internet slowdowns or American government digital snooping efforts to gather clues, I wanted to point to some fascinating research that shows some weird and wonderful aspects of networks that underlie this dynamic of personal involvement.

The story begins with a distinguished Hungarian mathematician, Paul Erdős, who died in 1996 leaving a body of 1,500 research papers.  For years after, however, unpublished work came out.  Erdős' eminence, combined with academics' love of the abstruse, spawned an ongoing conversation among math professors about their "Erdős number" - how many links connect any given professor with Erdős as a co-author.  The lower one's number (1 if a co-author with Erdős, 2 if a co-author with a 1, etc.), the presumably higher one's own reputation.  As of last year, the number of people 1 remove from Erdős was 507.  What's interesting, apart from the traction of the exercise, is Erdős' influence: 60 Nobel prize winners have relatively low Erdős numbers (Watson's and Crick's are 7 and 8, respectively, despite their field's distance from pure math), while 42 winners of the profession's Fields medal (the highest honor in mathematics) have low Erdős numbers - most four or fewer, with all under 6.

We then move outside mathematics, in 1967, to Stanley Milgram, the same psychologist responsible for the "we do as we're told" experiment in which subjects rather willingly administered fictional (but apparently real) electric shocks to other people under orders of an authority figure.  Along a different research direction he created an experiment to see how social networks actually behave.  He asked people in Kansas and Nebraska to get letters to people in Boston by sending the letters to people the Kansans and Nebraskans thought might know the recipients personally.  The recipients did the same thing, forwarding the notes and notifying Milgram of their participation.  The letters took from 2 to 10 hops to arrive, with the average being five.  Why Milgram termed the phenomenon six degrees of separation remains a mystery, but the small number of intermediaries poses a riddle. 

If, as sociologists estimate, each American knows about 300 people, and there are about 270 million Americans, simple math suggests that it would take on average about a million handshakes to connect any given person to any other one.  The problematic assumption here, however, is that the networks are evenly distributed.  If all the people I know only know each other, it's a closed community.  At the other extreme, according to Steve Strogatz of Cornell, if I know 100 people, and each of them knows 100 people, I'm two hops from 10,000 people, three from 1 million, and five degrees from the entire planet - but that assumes zero overlap, that each of 100 people has 100 more friends not already in the network.
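Strogatz's back-of-envelope numbers are easy to verify. The sketch below simply raises a friend count to the power of the hop count, which is precisely the zero-overlap assumption he flags:

```python
# Friend-of-friend reach under the (unrealistic) zero-overlap assumption:
# each person knows k others, none of whom share any acquaintances.
k = 100
for hops in range(1, 6):
    print(f"{hops} hop(s): {k ** hops:,} people reachable")
```

Two hops already covers 10,000 people and five hops covers ten billion, which is why the real puzzle is not raw reach but the heavy overlap among actual social circles.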

It is the middle ground between closed order and complete randomness that Strogatz and his graduate student Duncan Watts investigated.  This story made headlines in 1997 and 1998 when the six degrees of Kevin Bacon game was resonating with many people's experience with the Internet.  Whether in terms of web links or e-mail communities, many people found that they knew someone who knew someone who could answer their question or procure a desired item.  (In fact, Lada Adamic at Xerox PARC showed that within a large percentage of the Web, any webpage was an average of four hyperlinks from any other.)  Watts and Strogatz found that only a tiny - 1% - increase in randomness had order-of-magnitude implications for reducing the number of intermediaries between points A and B.  What's interesting is that rewiring those few connections doesn't change the clustering within the network - most of your friends still know each other.  The other factor here is that the connections are extremely unevenly distributed: everyone knows someone who's always sending along e-mail jokes or industry buzz. His or her e-mail friend network will have far more one-degree connections than average.  At the same time, there are info-hounds who are the recipients of many feeds but the spreaders of few.  Marketing experts like Seth Godin are looking at the network problem from this perspective.

Back to Watts and Strogatz, and this is when the story gets weird: the same network structure, ordered with a tuned amount of randomness, explains not only Kevin Bacon's movie career but also the topology of the western U.S. electric power grid and the neural network of a nematode worm.  The implications of this incredible finding are only beginning to be explored. Two business school professors found that small-world networks, as they're called, explain the ownership structure of over 500 large German companies: any one firm is connected to any other by only four intermediaries.  The fact that consciously engineered systems, natural systems, and cumulative patterns of behavior can all be represented by the same graphical model is truly stunning.
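
The Watts-Strogatz result is easy to reproduce in miniature. The sketch below is my own illustration, not the authors' code; the 200-node ring, four neighbors per side, and 5% rewiring probability are assumptions chosen for demonstration. It measures the two quantities discussed above: average path length and clustering.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: with probability p, move an edge's far
    endpoint to a uniformly random node (no self-loops or duplicate edges)."""
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                candidates = [t for t in range(n) if t != i and t not in adj[i]]
                if candidates:
                    t = rng.choice(candidates)
                    adj[i].remove(j); adj[j].remove(i)
                    adj[i].add(t);   adj[t].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable ordered pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Average fraction of a node's neighbor pairs that are themselves linked."""
    acc = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nbrs[b] in adj[nbrs[a]])
        acc += 2 * links / (k * (k - 1))
    return acc / len(adj)

rng = random.Random(42)
lattice = ring_lattice(200, 4)
L0, C0 = avg_path_length(lattice), clustering(lattice)
small_world = rewire(ring_lattice(200, 4), 0.05, rng)
L1, C1 = avg_path_length(small_world), clustering(small_world)

print(f"ordered lattice: L = {L0:.1f}  C = {C0:.2f}")
print(f"5% rewired:      L = {L1:.1f}  C = {C1:.2f}")
```

With these parameters, the ordered lattice's average path of about 13 hops collapses after only a handful of edges are rewired, while clustering stays close to the lattice value of 9/14 ≈ 0.64 - the small-world signature: a few random shortcuts shrink the world without disturbing the neighborhoods.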

So in the midst of our public and private grief, the feeling of connection to the victims relates to a powerful phenomenon - one still mysterious even to experts like Watts and Strogatz, who speak with a certain reverence about it.  What tragedies do is activate our sense of our networks: none of us goes around asking our coworkers if they have a friend or relative who works in, say, Miami or Milan unless there's some reason to wonder.  Once we identify end points - the lists of the dead or the stories of survival - then the network of human connection emerges, and from there we cry, or give thanks, or give blood, or do any of the other myriad of things people have done for millennia to heal each other in times of suffering.

Feb 2004: just before Facebook, a plethora of social networking sites

What is the point of social networking software?
-To display to my peers how many friends I have (or don’t have)?
-To make me an instrument for other people’s quests?
-To give me more email, more IM sessions, more RSS feeds?
-To make my life somehow richer on screen than it is in meatspace?

I may be proven wrong, but I believe this too shall pass. Push was supposed to be a huge market (remember Pointcast and Backweb?), portal sites were all the rage for a time (but there’s still only one Yahoo), and business-to-business exchanges would rule the landscape (has anyone else noticed the quiet demise and transfiguration of Covisint, the remainder of which was bought last week by Compuware?). Now search will be a huge battleground, and network mapping fits under the same umbrella, with Google, Yahoo, and Microsoft as the main contenders.

Among the constellation represented by Ecademy, EveryonesConnected, Evite, Friendster, FriendSurfer, Friendzy, LinkedIn, MySpace, Orkut, Plaxo, realContacts, Ringo, Ryze, Spoke, Squilby, Tickle, Tribe, WhizSpark, Yafro, and ZeroDegrees, how many will truly flourish? What abuses will turn public opinion away from one or more of the services? Until the cost/benefit equation turns more favorable to the users while still generating a solid revenue stream, most likely through a narrowing of the sites’ aspirations, it’s tough to get very excited about this particular corner of the market.

October 2006: who will build the IoT? Motorola? What was I thinking?

Getting Motorola, which had lost prestige and shed thousands of jobs, to accept risk-taking has been a core aspect of Zander’s mission. As a result of the company’s focus, patent holdings, and coherent product footprint and market positioning, it’s hard to see a head-on competitor. Cisco has more wireline clout and a bigger set-top box presence after acquiring Scientific Atlanta but no WiMax or cellular business, much less consumer design expertise of the sort embodied in the RAZR. Tag manufacturers including Texas Instruments, middleware companies such as BEA, or identity managers like Sun or maybe Microsoft may well play important roles as components in the cloud from sensors through computing to people, but it’s hard to see any of these companies taking a leadership position. Many, many piece-parts will be required for anything resembling the science fiction vision to come to fruition, but given that personal communications and computing platforms, a variety of broadband networks, and sensors in many shapes and sizes will be involved, in the near term Motorola appears to have rebounded and assumed a leadership position in a market it is helping to invent.

April 2007: Hedging my bets on the iPhone (but voicemail as the killer app?)

[following a parallel section on why the iPhone might fail] If the iPhone follows the iPod as a success, it will be because:

-Apple invents a new category of device, learning from previous failures. In the music player case, Apple integrated a far superior music management software application with the MP3 player, and implemented copy protection to satisfy the labels that their 99-cent songs would not be copied indefinitely. In the iPhone case, Apple took a variety of lessons from Motorola's ROKR music player+phone, which has had its iTunes license pulled. Rather than being a followup to the groundbreaking RAZR, the ROKR is essentially the answer to a trivia question. The iPhone is a bet of a different order entirely.

-The user interface transcends anything else in the category not with bells-and-whistles complexity but with intuitive simplicity. Compare the owner's manual of even a mid-price mobile handset to the iPod's to get a feel for what the iPhone user experience should be. The human finger replaces the stylus that doomed everything from Apple's Newton to Palm's Pilot, the device is aware of itself in space (much like the surprisingly successful Wii), and the display's colors and lighting are rich and vivid.

-Apple has once again used superior industrial design, elevated to the level of art, to create unsurpassed "cool" factor in a category. The microscopic attention to coherence and detail in the iPod, from marketing to packaging to peripherals to product endorsers, creates an emotional appeal found in few electronic devices.

-As venture capitalist John Doerr recently noted, Apple has a vast army of users trained to synch their device with a computer. It's an installed base of user behavior that could give the iPhone a jump-start in adoption.

-The iPhone captures momentum amidst industry disruption. As mobile broadband emerges from competing standards and platforms, the iPhone could dominate a multi-radio niche just at the moment that heterogeneous coverage becomes a reality. Going abroad? Working in a Starbucks hotspot? Surfing on a train in the northeast corridor? Experimenting with Clearwire or Sprint Wimax? Having a unified device to maintain connectivity across access technologies could become extremely valuable.

-The demographics align: the price points, through whatever mechanism, drive adoption in both the niche knowledge-worker and technology-as-jewelry segments along with 20-somethings who replaced landline phones with mobiles and may augment PCs with a tablet-phone-music player.

-The beautiful device is powered by a "killer application." In the case of the iPod it was clearly iTunes (not necessarily the music store), and with the iPhone, it could be visual voice mail, being able to browse and manage voice messages from the screen rather than having to listen to them serially.

However the iPhone plays out, I can't remember a product launch that generated so much attention. (Microsoft's Vista launch, by contrast, is a distant cousin of the fever generated by Windows 95's entry into the market.) Industry analysts, Mac fans, gadget fiends, and future-scanners are all watching closely to see if we're present at the creation of a new industry with new possibilities for communications and lifestyle, or if the audacity of the claims can't match the complexity and rigors of real-world supply chains, sales channels, and use scenarios. Apple is sticking with the June launch date, so we won't have to wait long to find out.

January 2009: A vivid economic contrast of manufacturing to services in the trough of the recession

As of Q1 2008, the biggest employer in Pennsylvania was the State of Pennsylvania. #2 was the U.S. government, even after the closing of the Philadelphia naval yard in 1995. As private sector employment and profits drop for the foreseeable future, how will public-sector employers maintain their payrolls? The state of California, ahead of the curve in some matters, may be the bellwether here, and the story looks grim as the shortfall through 2010 reaches $40 billion.

As a state, Pennsylvania is reasonably representative, with a relatively large economy (#6 of the 50 states) and per capita income almost perfectly at the median, ranking 26th of 50. Agriculture is important but not predominant, and 25 Fortune 500 companies are headquartered here.

To continue that list of Pennsylvania's top employers, note the paucity of private-sector job-generators:

3) Wal-Mart
4) City of Philadelphia
5) University of Pennsylvania (roughly 35,000 jobs, including a big medical center)
6) Philadelphia school district
7) Penn State University (not counting the affiliated medical center)
8) Giant Food Stores
9) UPS
10) University of Pittsburgh Medical Center
11) University of Pittsburgh
12) Weis Supermarkets
13) State System of Higher Education (public colleges and universities excluding Penn State, Pitt, and Temple)

All told, 20 of the top 50 employers in Pennsylvania are not businesses in the traditional sense of the word: that's 40% of the leader board, including six of the top seven. Half of the 30 largest private-sector employers, including eight of the top 12, are retailers, known for relatively low wages and, according to the BLS, the highest turnover among major sectors. As Circuit City demonstrated, they are also sensitive to economic downturns and could themselves be at further risk.

More significantly, Pennsylvania is officially a services economy: only one employer in the top 25 (Merck), and only four in the top 50, make something. Many kinds of services are represented, with health care in the lead, followed by education, grocery, retail, financial services, and fast food/convenience stores.

The contrast with the not-so-distant past is shocking. Courtesy of researchers at the Pennsylvania Department of Labor and Industry, here are the top 25 employers of 1965 (the earliest year for which they have available records) -- note how important supermarkets are, and were:

1) United States Steel
2) Bethlehem Steel
3) Westinghouse Electric
4) Bell Telephone of Pennsylvania
5) Jones & Laughlin Steel
6) General Electric
7) Sears, Roebuck
8) A&P
9) Acme Markets
10) Western Electric
11) Philco
12) Budd
13) Philadelphia Electric
14) Boeing
15) Crucible Steel
16) Pittsburgh Plate Glass
17) Allegheny Ludlum Steel
18) Sylvania Electric
19) Sun Oil
20) Pittsburgh Steel
21) Armco Steel
22) Aluminum Company of America (Alcoa)
23) RCA
24) Armstrong Cork
25) Rohm and Haas

May 2010: Cloud computing will force organizational change

In short, as cloud computing reallocates the division of labor within the computing fabric, it will also change how managers and, especially, entrepreneurs organize resources into firms, partnerships, and other formal structures. Once these forms emerge, the nature of everything else will be subject to reinvention: work, risk, reward, collaboration, and indeed value itself.

January 2010: Do You Remember?

Looking back over the 30 or so years of the personal computing era, I'm struck by how easily we discard the past, how often we miss a revolution when we're in the middle of it, and how few moments stop us in our tracks, giving us reason to demarcate a historical transition. Everybody is different, of course, but I'll wager you may have some similar reactions to the thought experiment I played out over the holidays. Overall, I was struck by how few times I appreciated the historical importance of an event in the moment.

Do you remember . . . where you were when the Berlin wall fell?

I do not. As I argue elsewhere, 1989 marks a convenient beginning to the "modern" era of globalization, mobile telecommunications, and the rise of the Internet. After the 1960s, with the Kennedy assassinations, moon landing, and a "living room war," followed by the Munich Olympic terror and fall of Saigon in 1975, perhaps "culture fatigue" set in. For whatever reason (perhaps it was because I was on the academic job market, with dismal results), November 1989 does not register.

Do you remember . . . your first mobile phone?

This is much clearer. It was a Nokia 101 (still for sale here), used only "for emergencies." I had had friends whose wealthy parents had car phones, which were big, expensive, and exotic. As I was none of those, my pattern of usage had to reflect my station; the device was not to be used trivially. But it was still fun to know I could order pizza on the way home rather than arriving at my destination, calling, and setting out again.

Do you remember . . . your first e-mail address? What about the second?

Here as so often, I was a late adopter. Teaching at Harvard in the early 1990s, I followed the lead of neither my Ph.D. advisor nor my students, instead getting e-mail pretty late: 1994, when I entered the commercial work force on the Lotus Notes e-mail infrastructure that was typical of consulting firms at the time.

Do you remember . . . the first time you saw the Web?

This was a lightning bolt for me, as vivid as my first car. The CIO at my consulting firm showed me NCSA Mosaic (which looked like this), and all the stuff I'd been reading about WAIS, Archie, and Gopher faded as I heard from my Silicon Valley friends about this amazing startup called Netscape which was going to be even bigger than 3DO, the supposedly "can't miss" video game outfit. About a year afterward, I saw Pointcast, the way-ahead-of-its-time streaming service whose graphic intensity, profligate use of resources, and viral growth combined to make it a deadly network-killer. I still would love to see it again, for nostalgia's sake if nothing else: old browsers and web pages (remember the original gray with blue text?) can still be found. Pointcast exists only in [human] memory, I gather. (If you want to remember Windows 3.11, here's a brilliant rendition that runs in a browser, complete with Minesweeper.)

Do you remember . . . your first Internet purchase?

I don't, precisely, but would wager it was an Amazon book. Amazon no doubt knows that. The firm's status as a "gateway drug" to Internet shopping cannot be overestimated: because the navigation was good, because they delivered, and because the price/selection/convenience equation was so positive, Amazon initiated millions of consumers into behaviors they repeated in stock trading, travel booking, and medical care.

Do you remember . . . being misunderstood in an instant message or e-mail?

Both media are emotionally "flat," doing a generally poor job of conveying nuance. For someone with a deadpan mien and a frequent recourse to irony, they presented numerous opportunities for miscommunication and, when I was lucky, damage control. The development of new conventions with no real-life analog (haha, lol, emoticons) illustrates how human interaction adapts to the strengths and limits of the available media.

Do you remember first seeing Google? If so, what search engine did it displace?

The clean, sparse interface posed a sharp contrast to the portal wars of the late 1990s. I heard about it pretty early, and as a heavy searcher, I was using a combination of Northern Light and Alta Vista at the time. Other search companies you may have used, then forgotten, include Lycos, Excite, Infoseek, and Inktomi.

Do you remember . . . when a "conservative" investor sold after a 30% appreciation?

In 1998, I knew numerous friends and colleagues who were planning their life on the basis of a Netscape-like IPO (at the "-ents," for example: Scient, Viant, Sapient), saying sagely that "retiring at 40 really makes the most sense so that I can travel for a few years then give back to society, possibly by teaching."

Do you remember . . . your first flat screen display?

More important, do you remember your last CRT display? Here's a major change that took place so gradually, yet inevitably, that the CRT's demise was like a sinking ship slipping beneath the waves. LCD panels, meanwhile, continue on a march toward bigger displays, at lower prices, every year as new fabs come on line. I write this while staring at a display that's bigger than my first color TV, from about 18 inches away. And I'm leaning toward it, as if to crawl inside, rather than reclining or retreating.

Do you remember . . . the first video you saw on the Internet?

Before YouTube, uploading and hosting video online was a headache. Creating and editing it, meanwhile, was non-trivial. This video was worth the effort: George H. Gobel lit a barbecue with liquid oxygen in a tiny-frame, low-resolution video from 1995 that took forever to download. Here it is on YouTube. (He won an Ig Nobel prize, in large measure because of this Dave Barry column.)

Do you remember . . . your last roll of photographic film?

Once again, the seismic transition is accomplished one defection at a time, and those moments happen when the cost-benefit equation no longer makes sense (in this case, the price of a roll of Kodak or Fuji film, the cost of developing and printing even the worthless photos, and the difficulty of sharing the good ones). In other instances, shrinking user bases make the economics of scale unattractive from the seller's perspective, meaning price increases, quality sacrifices, or both -- and again, the customer may be driven away by vendors stuck between a rock and a hard place. The same dynamic seems to hold for newspaper subscriptions.

Do you remember . . . when GPS navigation was exotic?

Last week's announcement by Nokia that it will supply turn-by-turn directions to its smartphones obviously countered Google's foray into mobile hardware. Caught as collateral damage in this contest, meanwhile, are the standalone GPS makers like Garmin or TomTom, who not that long ago offered something clever and soon to be essential. In a matter of months, navigation on the mobile platform has become a commodity, table stakes in competitions for a global audience of hyperconnected nomads.

Do you remember . . . your first cross-generational "friend" on Facebook?

As the massive social network grows larger than the U.S. population, it has moved beyond its initial cadre of college students and recent graduates. Preteens join regularly, as do parents and relatives of teenagers hoping to a) stay relevant to or b) monitor their kin, as the case may be. Going forward, persona management for multiple publics will become second nature as the tool set increases in ease of use and flexibility.

Do you remember . . . when retail stores replaced CD racks with vinyl?

This has been fascinating. Whether for reasons of its resistance to Limewire redistribution, or its purported fidelity, or the richness of cover art, or contrarian retro-hipness, the phonograph record is one of the few analog revivals in the digital tsunami. It is not impossible to envision turntable sales outpacing CD players, if not Blu-ray machines, within five years.

What will be next? Domestic robots (not just anthropomorphized vacuum cleaners)? Battery-powered cars? Implanted communications devices? Heavier reliance on analog storage methods like paper, for fear of snooping, blackouts, or cloud computing bankruptcies? The vinyl situation suggests that in some instances, analog may not be completely supplanted by digital competitors, so the 2010s will likely see some more surprising instances of both/and.

It would appear that we cut our ties with the past without much thought or regret, while true breakthroughs do not always capture our imagination: in November 2001, Apple Computer was struggling, and to suggest that its expensive, idiosyncratic MP3 player would eventually sell more than 225 million units would have been delusional. Somewhere, some entrepreneurs and inventors are similarly disregarding conventional wisdom and against all odds will be the heroes of January 2020.

March 2011: Worrying about retail before the great bleed-out started (and when Groupon was important)

Where is retail heading? Three overall trends appear to be mutually reinforcing:

1) Physical and virtual shopping are becoming indistinguishable. Shoppers can touch and compare physical items at the same moment they're accessing extensive price comparisons, researching detailed descriptions of features and benefits, and weighing word of mouth (either archived on review sites or real-time via Twitter).

2) Retailing, particularly for discretionary purchases, must transcend price, selection, and service. Involvement, whether through game elements (including the in-store promotions made possible with smartphone bar-code readers), user-generated content (ski videos at Backcountry, for example), clever ad copy, or other features, is becoming more important in some categories.

3) Price and performance pressure will not relent. Groupon and LivingSocial are conditioning bargain-hunters to expect 50% off as the baseline. Amazon's volume purchasing, supply-chain excellence, and tax-advantaged status make them difficult to beat. At the same time, their sites load fast, their mobile apps are appealing, and surveys rank them at the top of on- or off-line customer service polls.
Regardless of prime real estate, customer goodwill, or previous isolation from competition, local retailers cannot avoid confronting the long reach of the Seattle superstore. In addition, Amazon never stands still, constantly innovating, acquiring, and refining, making them a moving target for anyone else to benchmark, much less emulate.

February 2011: An Earthquake Every Year (but not so much any more)

It's become a commonplace to state that we live in extraordinary times. One needn't merely assert this, however: it doesn't take much digging to find the data. In nearly every year of the past 15, a new industry has been jump-started, an old one crippled, or a new way of looking at the world propagated. Consider a quick timetable that ignores such developments as PayPal, Wikipedia, Twitter, Craigslist, AOL, online mapping, and the iPod, and let me know what you think:

The Netscape browser goes from 0 to 38 million users in 18 months, the world's fastest technology adoption to date.

Windows 95 sells 1 million copies in its first 4 days on the market, and later serves as a launch pad to the Net for millions of users via Internet networking support, CD-ROM, and native modem drivers.

Dell focuses on supply-chain and related innovations as opposed to lab-based R&D, the norm at IBM or HP. As the world's businesses and households strive to join the online revolution, the build-to-order model surges in popularity for desktop configuration. IBM soon exits the business, while manufacturers such as Digital, Compaq, and Gateway either fade or get absorbed in consolidations. From an also-ran position in 1996, Dell more than doubled its global market share in 5 years, becoming the #1 producer.

Linux and Apache explode in market share for server operating systems and web server software respectively. Linux shipments tripled, not counting free downloads; Apache powered the majority of websites as sampled by the Netcraft measurement firm, particularly as compared to Microsoft's competing Internet Information Server. The fact that neither product emerged from a traditional development process, from a corporation, or from a monetary transaction stymied many industry observers who contended that the open-source model simply could not work.

DVD player sales quadruple from 1 million to 4 million, an astonishing rate of adoption for a physical product (as opposed to virtual Netscape software downloads).

Shortly after its launch in June 1999, Napster redefined the music landscape. Rather than attempt to use the tool for promotion in the manner of radio, the music industry wanted to shut down all peer-to-peer file sharing. Because it employed a centralized directory structure, Napster was vulnerable to legal action in ways later distributed models were not; much of the enterprise's brief history was spent in or around courtrooms. Some 25 million users, many of them college students enjoying broadband speeds that few other populations could access, flocked to the service, which shut down in 2001. In a fascinating secondary outcome of the ascendancy of MP3 music, manufacturers including Bose, Yamaha, and Harman International witnessed a 93% drop in sales of standalone audio components over the following four years -- an entire industry unrelated to the much-maligned record companies essentially vaporized overnight.

After indexing a billion Web documents and contracting with Yahoo to power the latter's search bar in 2000, Google rapidly becomes essential; the American Dialect Society called the verb its "word of the year" for 2002 and the term entered both Merriam-Webster and the Oxford English Dictionary in 2006. Counting partnerships, Google handled about 85% of all web searches as of early 2004 before Yahoo pulled out of the agreement and built its own capability. A staggering succession of acquisitions -- including Pyra (Blogger), Keyhole (Google Earth), YouTube, DoubleClick, and Hans Rosling's Gapminder -- followed.

According to In-Stat, wireless Local Area Network shipments rose 65% from 2001 to 2002. Business shipments of 11.6 million units led the way, and with home shipments adding 6.8 million units, total market revenue reached $2.2 billion. Given that the more familiar term for this technology -- WiFi -- entered the Merriam-Webster dictionary in 2005, it's no surprise that it became a multi-billion dollar industry only three years after launch. Even more significantly, wireless networking entered all those homes and businesses one at a time: there was no "Sputnik moment," no tax credit, no policy mandate, no Big Blue or Ma Bell. Instead, particularly on the consumer side, the rapid adoption represented millions of trips to Best Buy or the equivalent. Combined with wide deployment of cable modems and DSL connections in this same period, the U.S. weaned itself off the acoustic modem in a surprisingly short period of time, without anyone making much of a fuss.

In yet another quiet transition that was barely remarked upon, cell phones surpassed landline connections in the U.S., replicating the norm in essentially every other country in the world. At about the same moment, digital cameras overtook their analog equivalents (Kodak stopped making film cameras entirely in 2004); soon the standalone device would itself be usurped by cellphone cameras. In one brief transition, two stable, ubiquitous technologies dating to the late 19th century were surpassed by digital counterparts.

No technology can compare to the wireline phone for reach, particularly in the U.S., where "universal service" is literally the law of the land. After 100 years, more than 97% of households had phone service; the average household had 1.3 lines. The 1-2 punch of Voice over Internet Protocol (the phone service offered by Vonage, Skype, and by cable operators' triple plays) and mobile changed that in a hurry: wireline penetration is heading south of 40% less than 15 years after peaking. Equities markets took notice of the VoIP takeoff and began depressing telecom valuations accordingly, their cellular growth notwithstanding. Skype, meanwhile, has grown enormous: as of March 2010, up to 23 million concurrent users are logged in. The total installed base was roughly the same size as Facebook, with 560 million users at the end of 2009, at which time the service accounted for 12% of all international calling minutes -- on the entire planet. From launch through 2009, users had completed 250 billion minutes of calls.

GPS is another technology that seeped into mainstream adoption without anyone making an editorial point of noticing a breakout year, yet its ubiquity cannot be ignored. In 2004, GPS on a mobile phone was successfully demonstrated; it rapidly became a key component of the mobile platform. The original $12 billion investment by the U.S. Department of Defense spawned a commercial market worth $13 billion in 2003 alone; recent estimates predict a $70 billion market by 2013, with location-based services comprising $10 billion by themselves.

Following its launch the previous April, YouTube soared from 50 million page views per day after barely six months live to hit 7 billion on several days in August 2006. At the time of the Google acquisition, 100 million videos had been uploaded. Every one of them had the capacity to reach a worldwide audience for zero distribution cost and minimal, if any, production expense.

While Amazon refuses to release unit sales figures for the e-reader launched in 2007, one statistic about electronic books merits mentioning: Kindle book sales in the first quarter of 2010 were 1.8 times those of hardbacks. In other words, a technology dating back nearly to Gutenberg was eclipsed in market share in about 30 months by one retailer.

According to Morgan Stanley analyst Mary Meeker's statistics, the iPhone (counted along with its wi-fi-only iPod Touch sibling) reached 50 million customers faster than any piece of hardware in human history and jump-started the entire smartphone market.

Facebook claimed an incredible 600 million users in roughly six years after launch. 2009 was the breakout year as membership surged from about 150 million to 350 million.

Apple sold three million iPads in less than one calendar quarter. This matches the sales rate of the DVD after five years in the market. Even more telling is the calculation by Deutsche Bank analyst Chris Whitmore that if the iPad counted as a PC, it completely rewrites the market share scoreboard, putting Apple on top by a comfortable margin. In an unrelated corner of the industry, meanwhile, the Groupon online coupon business went from revenues of $33 million to $760 million in one year, making it most likely the fastest growing business in history.

November 2012: How will TV be disrupted? (Hint: Amazon)

No sector has been more transformed by the internet than audience aggregators. These content companies, through various means, make money from assembling viewers, readers, and listeners for artifacts that in the past 20 years have increasingly become digital rather than analog. Whether it is Kindle with books, iTunes for music, Google news vs. newspapers, or Netflix for movies, Internet-based content has disrupted whole sectors, eliminating such brands as Borders, Newsweek, Tower Records, Blockbuster, and Virgin Megastores.

The pattern seems to be based on relative simultaneity: Columbia sold more than 100 million copies of Michael Jackson's Thriller LP/CD, but that audience doesn't all listen at the same time. The New York Times company moves many copies of its flagship newspaper, and while readers will pick it up to read their selected articles during the day it's distributed, again, the audience is asynchronous. Harry Potter books are even more distributed in time than printed news. Television, by contrast, assembles mass audiences at a particular time: more than 100 million US viewers saw the 2011 Super Bowl, the vast majority of them in real time. But: every content bundling model, from the record album to financial advising/stock purchasing to the newspaper, has come under attack in the Internet era. (University degrees might be next, a topic for another time.)

A number of signals suggest that TV's time of reckoning is coming. Tablet sales are soaring: 20% of US adults now own the devices only two years after launch. The intimacy of the tablet form also changes viewing habits: social interaction, for example, is facilitated in ways that traditional TV does not allow. Time shifting, whether through Hulu, BitTorrent, or DVR, is becoming predominant among viewers in their 20s. Games, online shopping, social networking, and other new tasks threaten to shrink the staggering five hours per day the average American spends watching TV, if Nielsen's numbers are to be believed. Bottom line: as the Wall Street Journal put it on November 30,

"Television viewership is declining across the board. Although CBS remains the most-watched network in prime time, its average overall audience of 11.5 million in that time period is down 10% in the fall season so far, compared with the same time the year before, according to Nielsen. Its audience among 18-to-49-year-olds, the demographic most prized by advertisers, has tumbled 20%."

Two questions thus emerge: what will Internet video look like in terms of markets, business models, and financial attractiveness, and how will today's incumbents respond?

I use the term "Internet video" deliberately: just as all sparrows are birds but not all birds are sparrows, television is just one of many categories of Internet video. Without the constraints of 30-minute multiples, we are seeing a proliferation of new content forms. TED talks, for example, have been viewed about a billion times, many of them presumably in front of room-size audiences. YouTube served eight million simultaneous streams of Felix Baumgartner's Red Bull space jump, something no earthly TV network could have done, given the global composition of the audience. Jerry Seinfeld's Web series, "Comedians in Cars Getting Coffee," exploits exactly this freedom: episodes are simply as long as they need to be: 7 minutes, 13 minutes, whatever. TV ads, particularly global ones pulled out of context to fascinating effect, have long been prime YouTube material.

Going forward, every aspect of the video experience may be contested. Apple TV, whatever it turns out to be, should improve on the remote by substituting a tablet or smartphone for the dumb infrared devices we all currently use. As for the viewing device, a big smartphone is only slightly smaller than a small tablet. As the fates of Sharp, Panasonic, and Sony illustrate, global consumers aren't as infatuated with 3D television and other innovations as they are with these smaller devices: Panasonic lost $9 billion -- $9 billion -- in its last quarter. Transport, meanwhile, is "obviously" over cable, but as mobile bandwidth gets faster and faster (is 4G as fast as we can go? There's no reason to believe so), that wired infrastructure may no longer be a monopoly. Finally, content creation used to be the province of a small number of studios, but barriers to entry have dropped so far that literally anyone can produce viewable material, which then costs essentially nothing to distribute.

In the current model for TV, big audiences equate to big ad revenues and big salaries: last season Ashton Kutcher earned $24 million for his role on "Two and a Half Men." Apart from sports, however, "big" is smaller than it used to be. Because of channel proliferation, audience fragmentation, and other factors, the top-rated non-sports shows currently running -- "60 Minutes" and "NCIS" -- earn only an 8.0 and a 10.5 rating respectively (depending on the week), meaning 8-11% of households are watching. Compare today's picture to that of past decades, when "I Love Lucy" pulled numbers in the 60s; the farewell episode of M*A*S*H in 1983 also drew 60.2% of households.
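For readers outside the media business, the arithmetic behind rating points is worth making explicit. A minimal sketch, assuming one Nielsen household rating point equals 1% of a TV-household universe of roughly 115 million US homes (an approximate circa-2012 figure; the 1983 universe was smaller, so cross-era comparisons need each period's own base):

```python
# Rough sketch: converting Nielsen household rating points into audiences.
# Assumes one rating point = 1% of the TV-household universe, and a
# universe of ~115 million US TV households (approximate, circa 2012).
TV_HOUSEHOLDS = 115_000_000

def households(rating_points, universe=TV_HOUSEHOLDS):
    """One rating point represents 1% of the TV-household universe."""
    return round(universe * rating_points / 100)

for show, rating in [("NCIS", 10.5), ("60 Minutes", 8.0)]:
    print(f"{show}: ~{households(rating) / 1e6:.1f} million households")
```

By this arithmetic a 10.5 rating is roughly 12 million households, consistent with the CBS figures in the Journal quote below.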

On one hand, the fragmentation of cable that spawns such critical successes as "Mad Men," "The Sopranos," "Breaking Bad," and "Homeland" is extending into web video. That is, quality work with niche audiences might be better addressed via YouTube or other distribution models, particularly because interactive advertising can be measured so much more precisely than broadcast or even cable numbers. At the same time, the networks' cost structure has already been reset, beginning about 15 years ago: reality and other non-scripted programming proved to be a convenient way around the Hollywood writers' strike in 2007 and also kept talent costs to a minimum. Even today, such shows as American Idol, X Factor, Survivor, and Dancing with the Stars score well while being relatively cheap to produce.

The energy drink/adventure sports sector has discovered this fact of online behavior and exploited it aggressively. Ken Block's driving stunts, the aforementioned Red Bull oeuvre (including non-ad programming such as "Jackson's Hole"), and Monster (with more than 34 million views) offer programming that would be ill-suited for even niche cable channels but perfectly situated for viral distribution, repeated viewing, and powerful branding opportunities.

Because of the sheer volume of material -- 72 hours of YouTube video are uploaded every MINUTE -- viewers confront a classic long-tail scenario: every niche of interest, taste, culture, and video quality is addressed, including some that haven't been invented yet. Cable television solves this problem with brand identity: viewers who tune in to HGTV, Food Network, or Fox Soccer Channel have a pretty good idea of what they'll see. While YouTube, TED, and other outlets have adopted some variation of the "channel" philosophy, it doesn't scale. Neither does search work very well: if you didn't know what Gangnam Style was, how on earth could you phrase the search to find that video? Social networking is, to date, the best option for finding this material, raising the prospect of more formal varieties of trusted filters who watch thousands of hours of bad video to snare the treasures.

To give just one example of such valuable artifacts, there are a multitude of black-and-white clips of musical, stand-up comedy, and other performances from the 1950s and '60s. Seeing Thelonious Monk on Swedish television can be wonderful, surreal, and dazzling all at the same time. Again, television, even of the cable variety, is a poor distribution mechanism for such work, but the planet's artistic inheritance is far richer than it was just a decade ago: it's one thing to dust off a Wes Montgomery LP and play it for the budding musician and something else entirely for her to see the genius himself, often explaining the music between takes. There are doubtless thousands of other examples (university lectures are one: reading Richard Feynman and seeing him are two very different experiences).

How will the incumbents respond to these and the many other changes ahead? Hulu is trying to impose cable economics on web video, and it might be able to do so for a few years. As cable subscription prices rise relentlessly, however, cord-cutting will increase, I predict. The fact that cable operators control the majority of high-speed home connections complicates this switchover somewhat, but as mentioned above, wireless broadband and tablets could lead to a different economic model in the future. At what point could AT&T/Verizon supplant Comcast?

Sports, particularly football, appear to be immune to time-shifting, so much so that multi-billion dollar bets are being made on this assumption. ESPN paid $1.1 billion in 2010 for 18 games of Monday Night Football; the rate nearly doubles, to $1.9 billion, only three years later. The Big Ten cable network (half owned by Fox) generated $7 million per school in 2009; Maryland joined the conference last month in part because the school was reportedly promised $43 million (along with all other members) in 2017.

Big Ten commissioner Jim Delany is reported to be a student of demographics, and the athletic conference's core states in the Rust Belt are indeed losing population to the American West and South. But Delany is, I believe, falling victim to linear projections of cable access fees paid by households that, in the main, are not seeing big jumps in income. As the NFL, college football, the Olympics, the World Cup, and other sports bodies continue to rely on heavier funding from their television partners, this technological sea change threatens to undermine that rising tide of non-advertising revenue. When might the goose stop laying so many golden eggs? Delany's math is reminiscent of that of the financial industry in 2005-06, when it was believed that housing prices could never go down.

What might happen next? Pay-as-you-go for cable channels may become a reality at some point, the global audiences for web video could become a factor in ways broadcast cannot reach, and new advertising technologies could alter the landscape. Might brand loyalists get more, better ads, and subsidized cable subscriptions, for example? Will overlays, quizzes, games, and even biological sensors augment the 30-second spot?

Who should we be watching for signs of business model change? Clearly the current rights holders are not standing still, and ventures such as Hulu will evolve. Given Apple's past relationships with Hollywood content companies, iTunes could turn into a new kind of cable TV experience.  Startups such as GetGlue, Showyou, Vimeo, and Yahoo's IntoNow provide a variety of social layers for video production, distribution, and consumption. Amazon has lots of digital assets, a hardware platform in the Kindle Fire, and a history of surprise moves.

Whatever its future, broadcast TV had a good run, and will of course deliver value in certain circumstances in the future. But the monopoly that television has had over video audience aggregation for the past 70 years is being broken. As a result, the possibilities for the future are both exciting (for content producers and consumers) and potentially expensive for incumbents. In any case, creativity should flourish as it has in every other technology revolution that empowered artists, whether printing, paint chemistry, or photography.
November 2013: What makes a great business book?
I recently read Brad Stone's book on Jeff Bezos and Amazon, The Everything Store. It's a solid piece of reporting on a topic of broad interest, given Amazon's unique history and powerful position. I learned a lot of facts about the key people, filled in some gaps in my understanding of the overall timeline, and heard personal impressions from some of the principals involved. I couldn't point to any particular page and say that I could have done it better.

And yet I wanted more. Stone does a fine job on the "what" questions, he has interviewed perhaps more of the principals than anyone else, and the writing is clear throughout. Why, then, does this not feel like a great business book? That's where my thinking turned next, and what I will discuss this month: after analyzing some commonalities among books that have changed my thinking, I divide my pantheon into two camps, then try to identify some common aspects of "greatness."

OK, professor know-it-all, what are some great business books? Let's start there, because the alphabetical list shows what has stuck with me over the years: hard-won narrative lessons and deep conceptual muscle.

Peter Bernstein, Against the Gods
Frederick Brooks, The Mythical Man-month
Alfred Chandler, The Visible Hand
Yvon Chouinard, Let My People Go Surfing
Clayton Christensen, The Innovator's Dilemma
Annabelle Gawer and Michael Cusumano, Platform Leadership
Tracy Kidder, The Soul of a New Machine
Marc Levinson, The Box
Michael Lewis, Moneyball and Liar's Poker
Carlota Perez, Technological Revolutions and Financial Capital

Honorable mention (the authors probably wouldn't call these business books):

Atul Gawande, Better
Bruce Schneier, Secrets and Lies
Nassim Taleb, The Black Swan

What's missing here? I don't know the Drucker corpus well enough to pick a single volume. I've never had much Velcro for self-help/personal effectiveness books, so that wipes out a lot of people's favorites, including Stephen Covey. I've admired Ron Chernow (Rockefeller) from a distance, so one day that might go on the list. Isaacson's Steve Jobs biography was rushed to market and thus too long. I never read Andy Grove's Only the Paranoid Survive back in the day, but maybe it bears a look now.

There's another category of exclusion, the "we found the pattern of success" books that do no such thing. The exemplars here are Good to Great and In Search of Excellence, the Tom Peters/Robert Waterman sensation from 1982 that helped bring McKinsey to the front page of the business press. Taking the more recent book first: of Jim Collins' 11 "great" companies, only 2 (Nucor and Philip Morris) seriously outperformed the S&P over the following decade. Most of the other nine reverted to the mean or, in the case of Gillette, got bought. Most telling, just as Gary Hamel bet his reputation on Enron in his book Leading the Revolution, Collins got stuck with some outright clunkers, most notably Circuit City and Fannie Mae, while Pitney Bowes lost half its market cap. Did these companies somehow board the wrong people on the bus all of a sudden? I strongly doubt it. As for Peters and Waterman, Business Week published a cover story only TWO YEARS AFTER PUBLICATION showing that many of the 62 "excellent" companies were nothing of the sort. Retrospective pattern discovery at the company level, rife with cherry-picking, has yet to reliably predict future performance (in book form at any rate: I can't speak to what happens inside Berkshire Hathaway).

Ah, but what about the classic strategy tomes? Porter's Competitive Strategy, Hamel and Prahalad's Competing for the Future, and maybe Blue Ocean Strategy have their place, of course, but they all felt like exercises in hindsight bias rather than scientific discovery: the subtitle of Porter's book is, justifiably I think, "Techniques for Analyzing Industries and Competitors": analysis, not action. I have yet to see a strategic move in the real world that felt deeply linked to any of these efforts (that doesn't mean there are none, just that I don't see any). There's an old joke that sums up this orientation:

How do you spot the strategy professor at the racetrack?

He's the one who can tell you why the winning horse won.

In contrast, my personal list of the best business books veers away from such methodologies in one of two ways. First, a skilled writer, a self-aware founder/principal, or a combination of the two tells a story rich with personal experience in a highly particular situation. Second, a deep thinker creates a powerful conceptual apparatus that endures over time. (Moore's Crossing the Chasm was a near-miss here.)

Many of these books are striking in the modesty of their origins: Christensen started by knowing the disk drive industry inside and out, while Gawer was able to understand platforms after getting great access at Intel to see the company's handling of the USB standard during her Ph.D. research. Tracy Kidder -- one of our era's great storytellers -- compellingly documents the creation of a computer that never made it to market.

The best first-person tales were not unabated triumphs: Chouinard nearly lost Patagonia after some serious missteps, while Fred Brooks learned about software development the hard way, shipping an IBM operating system late. At the same time, some of the big brains attempted syntheses of stunning breadth: the whole idea of risk, in Bernstein's case, or the modern managerial organization, for Chandler. Both books, I suspect, were decades in the making.

On to the fundamentals: what makes a great business book? I would submit that it must have some mix of four qualities:

1) Honesty

It's easy to paper over the messy bits; "authorized biographies" can be so hagiographic that all the sugarcoating makes one's teeth hurt. In contrast, the humility of a Fred Brooks or Yvon Chouinard is refreshing, frankly acknowledging the role of luck in any success that has come their way.

2) Human insight

Business, taken only on its own terms, can be pretty boring. But as part of "life's rich pageant," as Inspector Clouseau put it, business can become part of themes more enduring than inventory turns or new market entry. The best books connect commercial success to aspects of human drama.

3) Continued applicability

The retrospective nature of book publishing can be a curse, in the digital era particularly, but it also means that great research and storytelling stand up over time. A model should continue to help organize reality for years after publication, and the likes of Bernstein, Chandler, Christensen, Levinson, and Perez have earned their stature by delivering not just an investigation but a way of seeing the world.

4) Subtlety of insight

All too often, business books worship at the altar of the obvious. Acknowledging the facts of the situation and then deriving deeper principles, either by astute observation (hello Michael Lewis) or by rigorous scholarship, is a gift.

In the end, this collection of great business books illustrates a conundrum: just as with business itself, knowing the principles of business-book greatness makes it no more likely that a given author will achieve it. Luck still has something to do with it, and I suspect that few artists who set out to create a masterpiece ("the Great American Novel," for instance) actually did.

Yet for all the dashed hopes of finding a science of success, and for all the ego trips and bad faith, there are times when stories from the arena of commerce transcend the genre and deliver gifts of insight far more meaningful than simply how to make more money.
March 2013: Digital Heirlooms
I've been thinking a lot lately about the invisible consequences of our smartphone/mobile/digital world. Somewhere down the road, the dematerialization of cultural artifacts will be viewed, I believe, as a major shift. Looking back from today, books are our oldest mass cultural form; then, between 1880 and 2000, music, movies, and television each arrived in widely available portable formats. Eventually, and rapidly, all of these became digital, and fungible: 15 years ago the radio couldn't play back voicemail, nor could a VCR host video games.

The business competition between Amazon, which won the first leg of the e-book/e-reader race, Netflix (ditto for movies), and Apple (music) is for extremely high stakes, but not our concern today. As the barrier to cultural creation drops, artifacts get easier to make. Compare the process of creating photographs in 1913, 1963, and today. Humanity has never made -- or shared -- so many images, but how will these increasingly ephemeral artifacts get passed down? Finding one or two photos of my grandfather when he was a young boy was lucky and important; in 100 years, what will my grandkids have to show for their infancy, adolescence, and young adulthood?

Google's recent decision to drop the Reader product is instructive here. At what point do changing cloud computing business models endanger and/or support preservation? Is there any conceivable way Facebook can keep adding billions and billions of photo uploads in perpetuity? Given that some kind of limits will be reached, where do our cloud-identities go when businesses fail? As more and more variations emerge, what will be the fate of digital personae after we die? We may well confront a paradox: we make more images than ever before, yet in the future, we could have less of a visual inheritance.

A whole other branch of issues revolves around platform compatibility. Some of my written masterpieces from the 1990s are stuck on 3 1/2" floppies for which I no longer own a working drive. That's a hardware question. What about software compatibility? For how long will Adobe support the PDF standard? In the absence of such support, and the possibility that a given standard will not be open-sourced to a community that can maintain it, we will see further stranding of digital assets.

In such a world, what lasts? I was pondering this question while considering graduation gifts. An Apple device, no matter how sleek and easy to hold, will be obsolete in five years, maybe sooner. Music is hard to give: for how long can we assume most every household will be able to play a CD? The last two computers I bought, not to mention every tablet, lack the capability. One day I will wake up and realize, yet again, that there is another format of information I can't access, joining the floppies, VHS, Jaz, and Zip media boxed up, worthless, in the basement.

Books have played a huge role in my life. Leaving grad school, the moving company found that our books on the van outweighed the car that was also on the truck. Many books tell a story, independent of the printed page. Bookplates were a classy accoutrement of prior generations; inscriptions can still be precious.  But the fact remains that, apart from university press books, most paper rots, some startlingly quickly. Books weigh a lot and occupy substantial space. The stereotype of a book-lined academic household is giving way to cloud-ish realities: it's quicker to consult Google to hunt down a footnote than to drive to the campus library or plow through the boxes in the garage, given that my book collection currently surpasses my available wall space for shelving it. Much as I hate to admit it, books are losing their appeal for me as gifts, especially "special" ones. The good news is that books' operating system is now stable, and is likely to remain so.

To return to the question, what lasts in a digital world? Paper is a mixed blessing, but Moleskine has made a very profitable global luxury brand out of blank books (if you are a fan of the Italian-made gems, check out this fascinating article related to the company's upcoming IPO). Pens continue to satisfy; alongside the European classics, several Kickstarter businesses growing out of the cult following that has emerged around the 0.3 mm Pilot Hi-Tec C are fascinating to track. I don't watch people in their 20s and 30s closely enough to know whether pens are being replaced in the preparation of grocery lists, birthday cards, or journals, but sense they are not. (From "Dear Diary" to "Dear Evernote"?) Relating pens to a broader category, tools can be truly lasting gifts, the antithesis of digital ephemera. Specifically, bladed tools seem to hold some deep appeal: knives, kitchen or otherwise, and chisels/planes strike me as heirlooms more than, say, striking tools ("here son, a titanium framing hammer as your graduation present"), mechanics' tools, or even saws. In the grooming arena, shaving razors, and those lovely badger brushes, seem to continue the theme. Scissors, whether run with or standing still, don't hold the same appeal, but I don't write as a quilter or scrapbooker, for whom such tools might indeed be long-lived, essential, and personal.

Ah yes, say some women friends, you're so much of a guy, always missing the point: jewelry has struck a nerve for millennia. Gold, precious stones, and other articles of adornment appeal deeply to many women from many cultures. To this I say: true, but "little jewelry" is an oxymoron in my experience. Finding something well-made, lasting, and appealing for the same price as a Swiss Army knife or decent "graduation" pen has been difficult for me. There's also the strong sentimentality: giving jewelry to the babysitter graduating from high school feels a little too personal. Tools have a safety zone that rings do not. In both cases, however, the appeal relates to hands: things that people before us touched, treasured, and took care of mean so much more than something shiny and new -- unless we can imagine the new present enduring across generations.

The essential role of blades in our species' survival speaks to some deep parts of the psyche located, I suspect, far removed from the dopamine pumps so capably triggered by multitasking, texting, tweeting, and online grazing. To the question of "what lasts in a digital age?" the answer, I submit, is simple: tools that fit the hand of the user. Or gold.
July 2015: Crossover Points
I recently read an enjoyable study of the airport as cultural icon (Alastair Gordon’s Naked Airport; hat-tip to @fmbutt) and got to thinking about how fast new technologies displace older ones. Based on a small sample, it appears that truly transformative technologies achieve a kind of adoption momentum: big changes happen rapidly, across multiple domains. After looking at a few examples, we can speculate about which technologies might be the next to be surpassed.

Gordon makes uncited references to air travel: in 1955, more Americans were traveling by air than by rail, while in 1956, more Americans crossed the ocean by plane than by ship. (I tried to find the crossover point for automobile inter-city passenger-miles overtaking those of railroads, but can only infer that it happened some time in the 1920s.) This transition from rail to air was exceptionally rapid, given that only 10 years before, rail was at its all-time peak and air travel was severely restricted by the war.

Moving into another domain, I was surprised to learn that in 1983, LP album sales were surpassed not by the CD but by . . . cassette tapes; CDs did not surpass cassettes for another 10 years. In the digital age, the album is no longer the main unit of measurement, nor is purchasing the only way to obtain songs. This shift in bundle size is also occurring in news media as we speak: someone asked me the other day what newspaper(s) I read, and it struck me as odd: I can’t remember when I last had a physical paper land on my porch. That’s the other thing about these crossover points: they usually happen quietly and are not well remembered.
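These crossover points are simple to formalize: scan two volume series year by year and note when the challenger first passes the incumbent. A toy sketch, using invented illustrative unit figures (millions of units, not real sales data) shaped to match the LP-versus-cassette story:

```python
# Toy sketch: finding the year one technology's volume first passes another's.
# The unit figures below are invented for illustration, not real sales data.
def crossover_year(incumbent, challenger):
    """Return the first year the challenger's volume exceeds the incumbent's."""
    for year in sorted(incumbent):
        if challenger.get(year, 0) > incumbent[year]:
            return year
    return None  # no crossover within the data

lp_albums = {1980: 322, 1981: 295, 1982: 270, 1983: 240}   # millions, invented
cassettes = {1980: 110, 1981: 160, 1982: 235, 1983: 250}   # millions, invented

print(crossover_year(lp_albums, cassettes))
```

With these made-up numbers the function returns 1983, the year the text cites; the point of the sketch is just how quiet the event is -- one line in a table, nothing more.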

The smartphone is taking over multiple categories. Once again, we see a new unit of measurement: in the film camera age, people developed rolls of film, then perhaps ordered reprints for sharing. (That quiet transition again: can you remember the last time you took film to the drugstore or camera shop?) Now the unit of measurement is the individual image. Interestingly, digital still cameras surpassed film cameras in 2004, but not until 2007 were there more prints made from digital than from film. Since 2007, digital prints have steadily declined. Furthermore, digital cameras themselves are being replaced by cameraphones: only 80 million point-and-shoot digital cameras shipped in 2013, and that number is dropping to well under 50 million this year, while smartphone sales are on target for about 1.5 billion units.

Standalone GPS units, MP3 players, and video camcorders (with GoPro being a notable exception, albeit in relatively tiny numbers) are other casualties of the smartphone boom. Landline-only houses were surpassed by cellular-only in 2009. Smartphones surpassed PC sales back in 2011.

The implications for employment are tremendous: Kodak employed 145,000 people in 1988; Facebook, a major player in personal image-sharing, has a headcount of about 9,000, most obviously not working on photos. Snapchat has 200 employees at a service that shares 8800 images EVERY SECOND, a number Kodak could not have conceived of. When these technology shifts occur, jobs are lost at a greater rate than they are gained. Railroads employed more than 1.5 million Americans in 1947; it’s now about a sixth of that. U.S. airlines, meanwhile, employed a peak of about 600,000 workers in the tech boom of 2000, well less than half that of the railroads, in a more populous country with more people traveling.
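A back-of-envelope version of that employment arithmetic, using only the figures quoted above (the per-day and per-employee derivations are mine):

```python
# Back-of-envelope check on the employment figures quoted in the text.
SNAPCHAT_IMAGES_PER_SECOND = 8_800
SNAPCHAT_EMPLOYEES = 200
SECONDS_PER_DAY = 24 * 60 * 60

images_per_day = SNAPCHAT_IMAGES_PER_SECOND * SECONDS_PER_DAY
print(f"Snapchat: ~{images_per_day / 1e6:.0f} million images shared per day,")
print(f"or ~{images_per_day / SNAPCHAT_EMPLOYEES / 1e6:.1f} million per employee per day")

# Railroads: 1.5 million US employees in 1947, about a sixth of that now.
print(f"Rail employment today: roughly {1_500_000 // 6:,}")
```

Roughly 760 million images a day, or nearly four million per employee per day: throughput per worker that no analog-era firm could approach, which is exactly why the job losses outpace the gains.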

Let’s look at the smartphone. Given globalization, what used to be U.S. telecom numbers no longer compare cleanly. AT&T employed around a million people at its peak; right now AT&T plus Verizon (whose totals include cable TV and other operations) employ roughly 425,000 people. Apple’s 2015 headcount of 63,000 includes about 35,000 retail employees and about 3,000 temps or contractors. Samsung is a major player in world telecom matters, but figuring out how many of its 275,000 employees should count toward a comparison with AT&T is impossible. All told, more people have more phones than they did in 1985, but employment in the phone industry looks to be lower, and lower-paying, given how many retail employees now enter the equation.

Coming soon, we will see major changes to ad-supported industries. Newspaper revenues are already in serious decline, and digital ad revenue is already higher than newspaper, magazine, and billboard revenue combined. “Cord cutting” is a very big deal, with clear demographic delineations: a 70-year-old is likely to read a paper newspaper and watch the big-4 network evening news; a 20-year-old is highly unlikely to do either. Comcast announced in May that it has more Internet-only subscribers than cable-TV subscribers, and the unbundling of cable networks into smartphone/tablet apps such as HBO Go will likely accelerate the trend.

In personal transportation, there could be two major upheavals to the 125-year-old internal combustion regime: electric cars and self-driving vehicles. Tesla is obviously already in production with regard to the former, while the smartphone example, along with such factors as Moore’s law, cloud computing, and an aging Western-world demographic, could fuel rapid growth in autonomous vehicles. In regard to cloud computing, for example, every Google car is as “smart” as the smartest one as of tonight’s software upgrade. Given the company’s demonstrated expertise in A/B testing, there is every reason to expect that competing models, algorithms, and human tweaks will be tested in real-world competitions and pushed out to the fleet upon demonstrated proof of superior fitness.
May 2016: Technology and inevitability
Human nature drives us to look backwards and see a series of developments neatly explaining the current situation: we all exhibit hindsight bias in some form. It’s much harder to look back and recapture the indeterminacy in which life is lived in the present tense. Technological history is particularly prone to this kind of thought and rhetoric: the iPhone was famously (but not universally) mocked upon its introduction, to take but one example; looking for “the next Microsoft” or “the next Google” is another manifestation. The projected “singularity” of digital cognition surpassing the human kind builds on the same logic. Coming in June, longtime tech observer Kevin Kelly’s new book is called simply The Inevitable.

It’s important to remember, however, that merely inventing (or imagining) a technology is a far cry from getting it into garages, factories, living rooms, or otherwise achieving successful commercialization. The low success rate for university technology transfer offices bears this out: a great molecule, material, or method does not a successful product make, absent entrepreneurship, markets, and other non-technical factors. This month I’ll run through a few technologies, some well-known and visible, others largely forgotten, that failed to achieve market success. I do this less out of nostalgia and more in the interest of tempering some current projections with a reminder that luck, competition, timing, and individual drive and vision still matter.

1) Very Light Jets
Led by Eclipse but also joined by Honda, Embraer, and others, the late 1990s stand as the height of the promise of a small, cheap (under $1 million new) aircraft that could both lower the barrier to personal jet ownership and fuel the rise of short-hop air taxi services. Eclipse shipped far later than promised at more than twice the projected cost, and performance problems were numerous: tires needed frequent replacement, the windscreens cracked, fire extinguishers leaked corrosive chemicals into sensitive components, the computerized “glass cockpit” failed to perform, and so on. A few air taxi firms went live (such as DayJet), but failed in the 2008 financial crisis, as did Eclipse. Honda, meanwhile, is prone to showing HondaJets in company advertising, but as of last December, had delivered a total of one plane to a paying customer. Sale prices are in the $4 million and up range — more than a used Hawker or similar mainstream business jet.

2) Flying cars
However intuitive the appeal, flying cars remain a niche market occupied primarily by mad-scientist visionaries rather than established production teams and facilities. The latest attempt, the AeroMobil, is claimed to be ready for market in 2017. The video is pretty impressive. Much like VLJs, flying cars have failed as much for economic reasons as technical ones. Building such a complex vehicle is not cheap, and safety considerations raise the product’s cost in multiple ways: FAA certification, spare parts management, expensive short-run production, and insurance all factor into the actual operational expenses. Some of these expenses are out of the control of the aforementioned visionary (and in the Eclipse case, Burt Rutan has thoroughly impressive credentials), while other business challenges, including marketing, are common in tech-driven startups: who will buy this, and what problem does it solve for a critical mass of real people?

3) AT&T Picturephone

Here is AT&T’s website, verbatim:

"The first Picturephone test system, built in 1956, was crude - it transmitted an image only once every two seconds. But by 1964 a complete experimental system, the "Mod 1," had been developed. To test it, the public was invited to place calls between special exhibits at Disneyland and the New York World's Fair. In both locations, visitors were carefully interviewed afterward by a market research agency.

People, it turned out, didn't like Picturephone. The equipment was too bulky, the controls too unfriendly, and the picture too small. But the Bell System* was convinced that Picturephone was viable. Trials went on for six more years. In 1970, commercial Picturephone service debuted in downtown Pittsburgh and AT&T executives confidently predicted that a million Picturephone sets would be in use by 1980.

What happened? Despite its improvements, Picturephone was still big, expensive, and uncomfortably intrusive. It was only two decades later, with improvements in speed, resolution, miniaturization, and the incorporation of Picturephone into another piece of desktop equipment, the computer, that the promise of a personal video communication system was realized."

*I’m sure the story of exactly who in the Bell System drove this $500 million boondoggle is fascinating, if heavily revised.

4) Voice recognition software
Bill Gates is very smart, and has obviously connected some pretty important dots (as in the Internet pivot Microsoft executed in the late 1990s). On voice recognition, however, “just around the corner” has yet to come to pass. His predictions began in earnest with his 1995 book The Road Ahead, and in numerous speeches since then (well into the 2000s), he doubled down. Even now, in the age of Siri/Alexa/Cortana, natural-language processing is a very different beast from replacing a keyboard and mouse with talking. Compare two statements to see the difference: “What is the temperature?” vs. “highlight ‘voice recognition software’ and make it boldface.”

5) Nuclear civilian ships
President Dwight Eisenhower wanted both Americans and citizens of other nations to temper their fears of nuclear weapons by seeing peacetime uses encouraged (his “Atoms for Peace” speech was delivered in 1953). The NS Savannah, a nuclear cargo ship, was intended to be a proof of concept, and it remains a handsome vessel a half-century on. The ship toured many ports of call for publicity and drew good crowds. Reaction was mixed, however, and the fear of both nuclear accidents and waste leaking into the oceans proved prescient: the US vessel and, later, a Japanese civilian ship both experienced losses of radioactive water. Although operational costs are low, the high up-front investment and, more critically, unpredictable decommissioning and disposal costs presented unacceptable risks to funding agencies and banks. While some 700 nuclear-powered military vessels have become standard pieces of national arsenals, civilian nuclear craft have never caught on (with the exception of a few Russian icebreakers). A great BBC story on the Savannah (now moored in Baltimore) can be found here.

6) 3DO gaming console
After the 1980s, in which Sony’s Betamax format lost out to JVC’s VHS, consumers remained wary of adopting a technology in the midst of a format war. The lesson has been learned and relearned in the succeeding decades. In the early 1990s, Trip Hawkins (who founded Electronic Arts) helped found a new kind of console company, one based on licensing rather than manufacturing. The effort attracted considerable attention, but numerous problems doomed it. Sony and Nintendo can subsidize the cost of their hardware with software royalty streams; this is a basic element of platform economics, as seen in printer cartridges and other examples. The 3DO manufacturers lacked this financial capability, so a high selling price was one problem. In another basic of platform economics, software and hardware must be available in tandem, and there was only one game, Crash ’n Burn, ironically enough, available at the US product launch. In Japan, a later launch enabled a better reception, as six game titles were available, but within a year the platform had become known for its support for pornographic titles, so general adoption lagged. 3DO clearly had some technically attractive elements (some of which were never included in the Nintendo 64 and Sony PlayStation that followed), but the superior technology failed to compensate for market headwinds.
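The hardware-subsidy arithmetic that Sony and Nintendo could exploit, and that the licensing model could not, can be sketched with a toy calculation; the per-unit loss and royalty figures below are purely hypothetical, not 3DO's or any real platform's actual numbers.

```python
# Toy model of console platform economics: hardware is sold at a loss
# that the platform owner recoups through per-game software royalties.
# All figures are hypothetical, for illustration only.

def breakeven_attach_rate(hardware_loss_per_unit: float,
                          royalty_per_game: float) -> float:
    """Number of games each console buyer must purchase before the
    platform owner recovers the hardware subsidy."""
    return hardware_loss_per_unit / royalty_per_game

# Suppose a console is sold $100 below cost and the platform owner
# earns a $10 royalty on every game sold for it.
print(breakeven_attach_rate(100, 10))  # 10 games per console to break even
```

A licensor that collects no hardware margin has no such subsidy to offer, so its manufacturers must price the console at or above full cost, which is one way to read 3DO's pricing problem.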

7) Elcaset
Unless you’re an audio enthusiast of a certain age, this is deep trivia. Sony introduced this magnetic tape format in 1977, and it was clearly technically superior to the audio cassette that had become entrenched by that time. The tape was twice as wide and moved twice as fast, improving the signal-to-noise ratio and allowing more information to be recorded, thus increasing fidelity. As in a VHS deck, tape handling was done outside the plastic shell, improving performance further. Unfortunately, the added performance came at a cost, and few consumers saw any reason to embrace the odd new format, which was supported by TEAC, Technics (Matsushita), and JVC as well. Also, no pre-recorded titles were available: this was the era of “home taping is killing music” (the 1980s UK anti-cassette campaign was later dusted off for the Internet age by the Norwegian recording industry association), and label execs were of two minds with regard to cassettes. In a curious twist I only recently learned about, Sony sent the remaining inventory of players and tapes to Finland after a distributor there won the wholesale auction, and many of the machines, well-made as they were, continued to work well for decades afterward.

This somewhat random collection of technologies supports very few generalizations. Having high-ranking executive sponsorship (up to and including the President of the United States) failed to compensate for deep fears and uncertain cost projections. Some failures came from corporate labs, others from entrepreneurs. Platform economics prove critical, whether for hardware and software, spare parts and airfields, or communications technologies. In the end, the only true generalization might be that markets are fickle, and very little technology is truly inevitable in its adoption.

December 2016: The Future of Aging

We grow too soon old and too late smart.
      -Proverb variously attributed to Swedes, Germans, and Dutch

While it is common to note that the U.S. worships youth and beauty in contrast to other cultures that revere age and wisdom, the demographic tidal wave that is the baby boomer generation will change aging just as it changed higher education (the explosion of college attendance), childrearing norms (compare baby strollers and birthday parties in 1960 and 1980), family structure (marriage rates have plummeted since World War II), the workplace (cube farms), and the built landscape (McMansions). Speaking only of the U.S., and not of those wise Dutch, German, Swedish, or Chinese elders, what will we see in the next 25 years? The short answer: lots of big changes.

-Medical breakthroughs
When life expectancy was shorter, body and mind wore out at more or less the same rate. As life expectancy increases, dementia on the one hand and crippling orthopedic and spinal conditions on the other both become more likely: mind can fail before body, and body before mind, so heartbreaking scenarios of both asymmetries are on the upswing. (See this.) Exoskeletons, 3D-printed artificial joints, and other mobility solutions can help with the latter class of conditions, while new Alzheimer’s and other dementia drugs are getting more attention, given the growing market need (see this). Just as fertility treatment advanced markedly in the baby boomers’ childbearing years, expect new medical miracles to address the aging process.

Whether or not the aged will be able to pay for these new medicines and devices remains an open question. Social Security is both underfunded and insufficient for a moderate lifestyle, pensions are less available and often underfunded for the public-sector employees lucky enough to get them, and the tab for the everyone-his-own-investment-analyst experiment known as the 401(k) defined contribution approach will soon be coming due. As of 2013, only 53% of U.S. families had a retirement plan, and of those aged 56-61, the median account was valued at only $17,000. The mean account value for that age cadre — $163,000 — is clearly boosted by a very few families with extensive or even adequate resources: in round numbers, a 65-year-old couple needs about $850,000 to generate $50,000 a year (the “average” U.S. income) for 20 years, assuming 1% inflation and 3% investment returns, not counting Social Security. Fidelity Investments estimates that same couple will pay $260,000 in out-of-pocket health care expenses, not counting nursing homes or related costs. (More here.) Most American families will not be able to afford to retire under the current rules.
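That $850,000 figure can be sanity-checked as the present value of twenty inflation-adjusted withdrawals. The sketch below assumes each $50,000 withdrawal is taken at the start of the year; other timing conventions shift the answer by a few percent but land in the same round-number neighborhood.

```python
# Present value of `years` annual withdrawals that start at
# `annual_income` and grow with inflation, discounted at the
# assumed investment return.
def nest_egg(annual_income: float, years: int,
             inflation: float, ret: float) -> float:
    total = 0.0
    for t in range(years):  # withdrawal taken at the start of year t
        total += annual_income * (1 + inflation) ** t / (1 + ret) ** t
    return total

# $50,000 a year for 20 years, 1% inflation, 3% investment returns
needed = nest_egg(50_000, 20, 0.01, 0.03)
print(round(needed))  # roughly $835,000, consistent with "about $850,000"
```

Note how sensitive the answer is to the return assumption: rerunning with 5% returns instead of 3% drops the required nest egg by well over $100,000, which is why small differences in projections matter so much to retirees.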

The question then becomes: what happens? After rising for more than 50 years, U.S. life expectancy might be dropping: earlier this month, the National Center for Health Statistics announced that death rates for 8 of the top 10 causes of death increased in 2015. Life expectancy at birth dropped about a month for women and 2+ months for men. One year does not a trend make, but it’s possible we are seeing an effect of the growth in income/wealth disparity that has characterized so much of American life in the past 30 years: one hypothesized cause of the increase in death rates for white middle-aged people is the rise of so-called diseases of despair: alcoholism, overdoses, and suicide. Whether through despair, diet and lifestyle, or limited access to care, financial stress and low income most likely reduce life expectancy.

-Safety nets
So if life is getting harder, life expectancy lasts 20+ years beyond age-65 retirement, and savings are minuscule, what will government do? One colleague of mine suggests that Medicare could expand to include food stamps or some other nutrition component. Perhaps there will be wider calls for a public option for health care coverage. Social Security was originally instituted in the Great Depression, and the nation is a very different place 80 years later: might government old-age insurance be redefined in the coming decade, especially given the coming bust in 401(k) assets relative to need? That bailout will dwarf both Wall Street’s and Detroit’s proppings-up after 2008. Will the retirement age be raised from 65 to reflect modern longevity? (In 1935, when Social Security was introduced, U.S. life expectancy was 61; it is now about 79.) I can’t see the current collection of national safety nets (VA, Social Security, Medicare/Medicaid, disability, SNAP) being able to withstand another 10 years without being reworked.

-Living arrangements
The prevalence of 2 or 3 adult generations living in close proximity fell dramatically after World War II: the growth of suburbs filled with single-family detached houses, with limited walkability, along with the rising number of nursing and retirement homes, meant that grandparents less commonly lived with their grandchildren. The numbers are difficult to track, given the changing makeup of care resources: adult day care, in-home service providers, nursing homes, and adult care communities can all overlap both in structure (a community agency can offer both adult day care and hospice, say) and by person: transitions from one type of care to another are common as health needs change, offspring move in or out of town, spouses die, and finances change.

Multigenerational families are on the upswing in the U.S., and elders are part of the picture, but those aged 25-34 are moving in with their parents in stunning numbers. According to figures from the Pew Research Center, 11% of adults 25-34 lived in a multi-generational household in 1980. That proportion had more than doubled as of 2012, and by 2014, more young adults (18-34) lived with parents than with a spouse or significant other. As those unmarried millennials age, how will they change our assumptions about, and institutions related to, aging? Or will they marry in traditional numbers, only later? As the costs of aging rise, how will families, and real estate, adapt? How will Uber and, later, autonomous vehicles change where elders live and what they do with their days?

-Religious practice
The U.S. church landscape is changing profoundly. These changes matter for aging, insofar as churches are often providers of both formal and informal support networks, but aging matters for some churches, especially “mainstream” Protestantism. Consider that U.S. population grew 65% in the half-century between 1965 and 2015. In that same period, the Episcopal church lost 49% of its adherents, Presbyterians (PCUSA) 47%, and United Methodists 33%. Even among Roman Catholics, where membership pretty closely tracked population growth, parishes are closing: those 70 million self-described Catholics don’t attend Mass very often, statistically speaking. Catholics also have a supply-side issue getting men to join the priesthood; staffing parishes is a problem for many denominations, especially those without access to women clergy. Another side effect of the drop in mainstream church membership and attendance relates to the growth of towns and cities: those 19th-century church buildings are often located in prime real estate at the same time that maintenance and heating costs for big, old buildings with creaking structure and infrastructure (wiring, plumbing, HVAC) are non-trivial. The continuing shift in American church affiliation will affect both the look of our cities and the delivery of social services to many, including the aged.

It should be clear that demographics, medical science, social institutions, government programs, religious faith, and technological change are wound together into a yarn ball: telecommuting and Uber will let people who can’t drive work at jobs they currently can’t get to. A stock-market decline could wipe out even more people than were decimated by 2008, given how many more baby boomers are out of the work force since then, making sharing living quarters a necessity. Older people feel more vulnerable, and scams of various sorts, including political ones, prey on many of them, at the same time that elders are outliving their churches. Social networks facilitate the spread of fake news, rumors, and other misinformation, both frightening people further and making real solutions harder to design and implement. Jobs and work tasks are changing incredibly fast, making older expensive workers expendable, yet intellectual capital is leaving U.S. companies via retirement (particularly in process manufacturing) without clear backup plans in place. With so many strands to the issue, no single initiative can be considered in isolation; the side effects of policy (think of the home mortgage interest exemption as just one example) are often nearly as important as the primary objectives. Whether it’s robotics, social networks, autonomous vehicles, or telepresence for work or family ties, the new elderly will be key factors in many technological waves of the coming decades.

January 2017: Where’s the innovation?

As I was discussing the pace of change with my class recently, I struggled to name a hot young startup. It turns out there was a reason for that. Looking at the Fortune list of the biggest private companies with billion-dollar valuations, filtered for US head offices only, you get these companies, all with valuations over $5 billion:

Uber
Airbnb
Palantir
SpaceX
WeWork
Pinterest
Snapchat
Stripe
Theranos
Intarcia Therapeutics

Of these, Uber and Airbnb are of course interesting and valuable, but it’s hard to call them tech startups, based as they are on the so-called sharing economy model. (Lyft is valued at about a ninth of Uber.) Palantir is an intelligence/defense contractor, so 99% of people won’t knowingly interact with it or recognize it. SpaceX is a literal moonshot, again not really a typical tech startup. Pinterest feels like it could have been big, but given that its private valuation is higher than what a projected IPO would bring, it feels like an underwater mortgage. WeWork is more like a REIT than an Apple or Google, Theranos is discredited and won’t likely be on the 2017 list, and the biotech firm Intarcia is more than two decades old, focused on a diabetes drug. Stripe builds payment infrastructure, a classic B2B play. That leaves a single unicorn in the Netscape/Google/Facebook mold: Snapchat. An IPO there could draw some attention, but I can’t see it being a seismic event on par with Google or Facebook.

It also bears noting that none of these companies is at all young. Intarcia is 22 years old, while the youngest companies date from 2010-11. Put another way, here’s a list of important tech IPOs:

Apple 1980
Compaq 1983
Lotus 1983
EMC 1986
Microsoft 1986
Oracle 1986
Sun Microsystems 1986
Dell 1988
Electronic Arts 1989
Cisco 1990
AOL 1992
Netscape 1995
Yahoo 1996
Amazon 1997
Netflix 2002
Google 2004
Facebook 2012

Note the slowdown after 2000 in “blockbuster” IPOs of companies that return value over a relatively long span; such companies as Etsy, Fitbit, GoPro, Twitter and Zynga all have flopped after hitting the public markets. Tesla stock has performed well thus far despite never turning a profit, but that can’t last forever. Netflix required a lot of patience: if you bought at the IPO, it took 8 years for the stock to stop flat-lining, but since 2010, it’s risen from $8 a share to more than $140. All in all, we seem to be in a lull as far as fast-growing tech startups are concerned (with the caveat that Uber and Airbnb are both game-changers precisely because their asset model breaks traditional assumptions).

Several forces are at work, I’m hypothesizing:

1) The App Store platform model has lowered the barrier to entry for software developers. It’s hard to find a major pure-play software company of any magnitude in the past 10 years. The enterprise market has some counter-examples, to be sure: Workday, VMware, Palantir, and Tableau each have market niches, but none dominate an entire industry or have broad public visibility.

2) The gap (in the wrong direction) between private-market valuations and public-market outcomes is making many companies hesitate before launching an IPO. Dropbox has (or has had) a paper valuation of $10 billion. Its publicly traded competitor Box has a market cap of $2.2 billion on revenues of $300 million, so Dropbox would need to be pulling in roughly 4-5 times that (in the neighborhood of $1.5 billion) to justify such a lofty pre-IPO price. Staying private prevents that gap from becoming public, but it also delays the funding entities’ exits.
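The arithmetic behind that revenue estimate is a simple price-to-revenue multiple, using only the Box and Dropbox figures cited above:

```python
# What revenue would Dropbox need at a $10B valuation to trade at the
# same price-to-revenue multiple as its public competitor Box?
box_market_cap = 2.2e9     # Box market cap, as cited above
box_revenue = 300e6        # Box annual revenue, as cited above
dropbox_valuation = 10e9   # Dropbox's private paper valuation

multiple = box_market_cap / box_revenue          # Box trades at ~7.3x revenue
required_revenue = dropbox_valuation / multiple  # implied Dropbox revenue

print(round(multiple, 1))                # 7.3
print(round(required_revenue / 1e9, 2))  # 1.36, i.e. roughly $1.4 billion
```

The gap between that implied $1.36 billion and Box's actual $300 million is exactly the embarrassment that staying private keeps out of public view.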

3) The next frontiers in computing — big data, AI, robotics, autonomous vehicles, Internets of Things — will often be capital-intensive ventures. It’s hard to see a startup outmaneuvering GE or Caterpillar on locomotive instrumentation, or disrupting Rolls Royce or Pratt & Whitney with revolutionary jet engine monitoring systems and software. Cloud plays like Rackspace, Cloudera, and Dropbox will be similarly asset-heavy, making Facebook or Google-like multiples difficult now that the industry is both mature (in its tight margins) and operating at huge scale. Meanwhile, most IoT or autonomous vehicle operations will need deep pockets: Uber bought Otto, Google’s car business (Waymo) finally has a name to go with its budget, and the incumbents (Volkswagen/Audi, Toyota, GM, Ford, Delphi, Continental, Bosch) are busy as well. Factors like product liability can quickly discourage garage-scale operations, as they did George Hotz’s effort last October. Robotics isn’t a big factor in the unicorn list; maybe that will come later.

4) Starting with Netscape, web-based software businesses have had a difficult time getting money from retail customers, compared with Lotus, for example. Netflix is the rare content play that turns a profit from direct payments; AOL, news media, standalone music services, and even investment advice sites are struggling. Some have tried the enterprise route, but the big successes have been ad-funded. Given the enormous power (speaking here of the U.S. market) of Google and Facebook, it’s hard to see how Snapchat, Vine, Twitter, Tumblr, or some new startup can break into that select club. Even Quora, with its vast knowledge base, seems content to run low-key ads that likely don’t pay the rent. On the demand side meanwhile, people are accustomed to getting good stuff for free (“consumer surplus,” in economists’ terms), making the ad model viable and in many sectors essential. There are only so many hours of human attention in a day, though, and getting new share means dislodging some well-entrenched incumbents.

5) Maybe, in line with what the economist Tyler Cowen has argued, a broader innovation slowdown is hitting the tech sector. When you look at our grandparents or great-grandparents, some of whom lived through both the Wright Brothers’ first flight and the 1969 moon landing, mass electrification and the atom bomb, penicillin’s introduction and the MRI, open-heart surgery and test-tube babies, the Internet’s origination and the first cell phone, 1900-1980 was a period of innovations that reshaped everyday life. Since 1980, what else is in that league besides, obviously, the smartphone and the World Wide Web? Fracking transformed oil and gas, the mini-mill reinvented the steel industry, and minimally invasive surgery is the norm for many procedures (as are stents rather than open-heart surgery). Cloud computing is reshaping the server and now storage markets, Skype was revolutionary (but non-revenue-producing) before Microsoft tamed it, and Google search solved a very hard technical problem. GPS reinvents our sense of space and location. But will we really look back on Facebook, YouTube, and LinkedIn as being on the same plane as the automobile, television, or the transistor?

6) Speaking of the smartphone, the final factor affecting our perception of innovation is the globalization of tech. Whether it’s the Japanese messaging app Line raising $1 billion last summer, Alibaba’s record-setting $25 billion offering in 2014, or privately held Xiaomi’s status as a pre-IPO hardware company worth $45 billion, the biggest stories are all global plays (Uber and Airbnb among them). More and more are headquartered closer to the fastest-growing markets and/or talent bases outside Silicon Valley: India’s Flipkart (an online retailer), Sweden’s Spotify, South Korea’s Coupang (an e-commerce business), or London-based Global Fashion Group (an e-business focused on apparel, serving 24 countries). Innovators can be literally anywhere, building apps and businesses North Americans never see or even hear of.

So is innovation slowing down? I think it makes sense to set a baseline: a huge percentage of humanity's codified knowledge is now online, often for free. Many, soon most, adults on the planet have a networked supercomputer close at hand, often in a pocket or purse. Everyone with these devices knows exactly where he or she is at any time; can reach millions or billions with a tweet, a post, or a blog; and can capture and watch high-definition video and still images. That’s our baseline of “interesting,” which has to count for something. At the same time, we still burn coal and petroleum for most of our mobility and much of our illumination, train service is pathetic in much of the world, and the extension of human life expectancy may be slowing or even reversing. In the narrow realm of content, e-book sales are slowing, vinyl LP sales are expanding rapidly off a very small base, and even cassette tapes are in favor among some hip populations.

Do we measure innovation by the magnitude of problems we have solved, or by the frontiers left relatively unconquered? Is the IPO success of an online merchant important for quality of life, relative to the possibilities for telemedicine or Kenya’s mobile banking success? As usual, “it depends” sounds like a copout, and maybe it is, but I do long for the days when hardware, software, and services for the “average” North American were new, exciting, and a bit rough around the edges as compared to the tech landscape of the media and entertainment period we currently inhabit.