Early Indications is published twice monthly by the eBusiness Research Center at Penn State University. The author holds no direct financial stake in any of the companies mentioned.
If one ventures into the enterprise software world, the sheer volume of verbiage devoted to variations on the words "service" and "services" is bewildering, especially because a server, which confusingly can be either hardware or software, is unrelated to services. Web services are related to but not synonymous with service-oriented architectures (SOAs), some of which can be implemented using an enterprise service bus (ESB). Public examples of SOA-like behavior can be found in the much better-named category of mashups, examples of which appear below.
At the macroeconomic level, meanwhile, the services sector (which is really several sectors, as UCLA's Uday Karmarkar has noted) is crowding out product companies as the dominant force in gross domestic product. Somewhere between code and Alan Greenspan, marketers worry about satisfaction ratings for customer service (including self-service) in the transaction process, while aftermarket repair and maintenance is yet another kind of service. Finally, there are transactions in which activities are performed in conjunction with a product purchase: software implementation is one such service.
According to the Oxford English Dictionary, "service" has at least five different meanings that could apply to the current confusion:
-the action or process of performing duties for
-an act of assistance
-the process of attending to a customer in a shop
-a system supplying a public need such as transport, or utilities such as electricity and water
-a periodic routine inspection and maintenance of a vehicle or other machine
(The root word, servus, means "slave.")
IBM has launched a research initiative into what is now being called Services Science, Management, and Engineering (SSME). According to the initiative's website, "A service is a provider/client interaction that creates and captures value." Elsewhere, an IBM Software page answers the question "What is an SOA?" this way:
"SOA is the blueprint for IT infrastructure of the future. SOA extends the Web services value proposition by providing guidance on how enterprise IT infrastructure should be architected using services."
But it's clear that the software folks do not intend that such architectures should use "provider/client interactions that create and capture value." Given this wide semantic variation, it will not come as a surprise that measuring services is problematic. At the economic level, what is the productivity of a teacher or programmer? The input-output metric used for mechanical efficiency breaks down, but no model readily presents itself as an alternative. There is substantial promise in the area of supply chain research, however, insofar as networks of people in various roles perform tasks to accomplish a process. At a sufficient level of abstraction, treating some kinds of medical patients, building software, and delivering fresh strawberries do share metrics, constructs, and success factors.
Measurement of services is important in many areas, particularly where money is concerned. Thus a network provider might have a service level agreement (SLA) with a customer stipulating that throughput will never fall below a given threshold, that downtime will not exceed 30 minutes a month, that outages will never occur between 8 am and 6 pm, and so on. Confusingly, because a service-oriented architecture runs across a network or set of networks, it requires a service level agreement from the provider of the network. The "S" in SOA has nothing to do with the one in SLA.
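The arithmetic behind such an agreement is simple. As a rough illustration -- assuming a 30-day month and borrowing the 30-minutes-of-downtime figure from the example above -- a monthly downtime allowance converts to an availability percentage like this (again in TypeScript):

// Sketch: convert a monthly downtime allowance into an availability percentage.
// The 30-minute figure comes from the SLA example above; the 30-day month is an
// assumption made for simplicity.
function availabilityPercent(downtimeMinutesPerMonth: number): number {
  const minutesPerMonth = 30 * 24 * 60; // 43,200 minutes in a 30-day month
  const uptime = minutesPerMonth - downtimeMinutesPerMonth;
  return (uptime / minutesPerMonth) * 100;
}

// 30 minutes of downtime a month works out to roughly 99.93% availability.
console.log(availabilityPercent(30).toFixed(2) + "%");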
One further thought. Does customer satisfaction measure what was delivered by the provider, or what was experienced by the customer? For this discussion, "service" can be understood as retail or hospitality, professional services like law or consulting, and possibly more. Major corporate effort is expended on increasing customer satisfaction, usually through operational improvements. That may not be the best place to apply effort, however. If satisfaction is understood as the congruence between expectation and experience, some of that spending on operations could be redirected into negotiation, getting customers to reset expectations rather than trying to meet unrealistic ones.
A cab ride is a service. Going from midtown Manhattan to LaGuardia at 1 pm on a weekday can take 25-45 minutes, while doing so at 5:15 is a very different proposition. Weather makes matters worse. If, however, a driver takes an hour in the middle of a sunny day because of inexperience or other driver-related factors, the rider is right to be irritated. The point here is that much effort is expended in trying to define service levels for a wide range of contingencies. Alternatively, it may make more sense to educate customers about the factors, both in and out of the provider's control, that influence service performance.
The broader issue of confusion over service, service levels, and service economics will get worse before things improve. That the language is insufficient probably relates to the relative newness of the swings from products to services, and from applications that run on processors to services that run over networks. I can only hope that just as "horseless carriage," a lame extension of a soon-to-be outdated name, gave rise to a rich vocabulary of automobile language, so too can we get more words like "mashup" and fewer bureaucratic three-letter acronyms that usually define reality neither vividly nor accurately.
Mashup examples:
http://www.mashmap.com/
http://www.internetbargaincenter.com/
Early Indications is the weblog version of a newsletter I've been publishing since 1997. It focuses on emerging technologies and their social implications.
Monday, November 21, 2005
October 2005 Early Indications newsletter I: New technologies mean new business choices
Early Indications is published twice monthly by the eBusiness Research
Center at Penn State University. The author holds no direct financial
stake in any of the companies mentioned.
"In the Web 1.0 era, when a company raised $10 million, they spent $2
million on servers from Sun, another $2 million on software from BEA,
another $2 million on Oracle software, and then they'd have only $4
million left to actually build the thing."
-David Hornik, venture capitalist at August Capital, speaking
at the Web 2.0 meeting, quoted in the Boston Globe, October 10
"Here's what we're going to do. We're going to go out, and we're
either going to buy Oracle financials or SAP. That's a $5 million
plus or minus purchase. Plus consulting [fees]. . . . I said, 'Look -
I have been through so many general ledger conversions in my life . .
. I'm not going through another conversion [off of Great Plains] when
we get to be a $100 million company.'"
- Jim Barksdale, speaking in 1997 of his early days at Netscape,
in Michael Cusumano and David Yoffie, Competing on Internet Time
As Jared Diamond posited in his seminal Guns, Germs, and Steel,
there's often a tight connection between resources and destiny. Less
important than the quantity of a resource, however, is its fit with
human or market need. Much like the Maginot line, corporate barriers
to entry (assemblages of resources) can turn out to be competitive
liabilities as speed, focus, and agility frequently trump mass. Right
now Ford and GM are jettisoning as much excess baggage as possible,
for example, perhaps noting that neither Honda nor Toyota runs car
rental companies, own satellite factories, or build military armored
vehicles. By contrast, Netscape built infrastructure in anticipation
of rapid growth, correctly as it turned out.
Businesses can be built of many resources: material, human, financial.
More recently, economists including Paul Romer have contended that
innovativeness itself is a resource: the world might run out of oil,
in this line of argument, but it can't run out of creative people
motivated to solve energy problems. There's also the matter of
intangible assets like patent portfolios, branding, and capabilities
in employee selection and training. The factors of production have
expanded beyond the original land, labor, and capital.
Technology is of course a core business resource. In the developer
and IT management communities, the acronyms are running particularly
hot and heavy right now. Between AJAX (1), LAMP (2), two flavors of
OSS, RSS, and CSS, people who write code have a wide range of
lightweight, network-centric tools at their disposal. Many of these
standards are supported by free and/or open-source software, and many
expand the repertoire of the web browser to behave more like a thin
client of a "real" computer. (For examples, see Google Maps or Gmail,
Web Boggle (http://weboggle.shackworks.com/4x4/), backpackit.com,
Microsoft Outlook web access, Flickr, or even enterprise-grade
applications like NetSuite.)
(1) Asynchronous Javascript and XML
(2) Linux, Apache, MySQL, Perl/Python/PHP
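(For readers who have not seen AJAX up close, here is a minimal, hypothetical sketch of the technique these tools enable, written in TypeScript: the browser requests data in the background and updates part of the page without a full reload. The endpoint and element names are invented for illustration.)

// Minimal AJAX sketch: fetch data asynchronously and update the page in place.
// "/api/headlines" and "headline-box" are hypothetical names; any server
// endpoint and page element would do.
function refreshHeadline(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/headlines", true); // asynchronous request
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById("headline-box");
      if (target) {
        target.textContent = xhr.responseText; // update without reloading the page
      }
    }
  };
  xhr.send();
}

refreshHeadline();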
These new kinds of software tools and materials are in the process of
changing not only the technology world, but the business and social
environment. Think about the history of building: when structural
steel and fast elevators became available, Louis Sullivan and his
successors built buildings that would have been inconceivable only
years earlier. What would major modern cities look and feel like if
buildings were only 10 to 15 stories tall? When powerful air
conditioning became sufficiently cheap and reliable, the American
South and Southwest underwent a development boom that persists to the
current time. Resources shape destiny, again and again.
The relationship between the nature of technology and business
potentialities reminded me of my colleague Dave Robertson's very pithy
explanation of the delicate business of getting information technology
and business decisions to reinforce each other. I'm paraphrasing:
"Let's compare a business to a vehicle. You could choose to be a dump
truck, or a hybrid, or a sports car. None of these are inherently
better than the others until we know whether the context is family
driving with $4 gas, or building highways, or racing. Once a business
decides what kind of business it needs to become, a gravel hauler or a
dragster, you look under the hood: IT is the vehicle's engine. A
cogenerating electric motor probably won't power the dump truck, while
dropping a Hemi into a hybrid would just be wrong. Each engine is
right for a certain kind of vehicle, but again, it all depends on
context."
Dave, who's a professor at IMD, goes on to say that this metaphor
provides a way into his research, soon to be published, into the
intricate but essential matter of getting business architectures and
technology architectures to reinforce each other. (His co-authors are
Jeanne Ross and Peter Weill of MIT's Center for Information Systems
Research.) If a retailer is positioned to deliver high-quality men's
wear at a premium price, for example, late shipments, sloppy customer
records, and slow network connections will undermine that strategy.
On the other hand,
plenty of successful service businesses, including banks and
universities, still run green-screen mainframe or minicomputer
applications, complete with batch processing and poor access to
analytical data streams.
Both business architectures and technology architectures are creations
of the organizations they inhabit: formal methodologies and
prescriptions, while they have a place, will not in and of themselves
build either a sturdy chassis or an engine that will fit both that
chassis and its real-world requirements. It's one thing to decree
data quality standards and quite another to understand how and where
conflicting or erroneous entries are introduced. If a business
intends to expand internationally, is it hiring employees who are
multilingual and can operate across cultures? Are computer systems
ready for multiple currencies, multiple time zones, and multiple
process maps? Similarly, it's one thing to say "our customer always
comes first" and something quite different to give employees the
training, managerial cover, and career incentive to act on the
rhetoric.
By noting the wider acceptance of Linux, Python, or
software-as-service applications, do I suggest that every IT shop
throw out its Rational methodologies, Microsoft developer suite, or
Oracle databases? By no means. Being aware of available resources
informs choices, including the decision to stand pat. A building
architect needs to be informed of the state of available materials so
he or she can choose to incorporate glass-and-steel curtain walls (as
at the United Nations building), curvilinear reinforced concrete (the
Guggenheim), or self-weathering Cor-Ten steel (the Chicago Civic
Center). But just as not every building is a landmark, most
technology environments will not rival WalMart's or Google's. Even
so, good IT architecture is no less important, whether the
environment's job is to run unobtrusively but reliably, or whether IT
_is_ the business, as at Amazon or Morgan Stanley.
Who's responsible for constraining and enabling the architects'
technology choices, of being the client as it were? In a recent
article in Harvard Business Review, longtime IT authorities Richard
Nolan and Warren McFarlan contend that IT governance begins with the
board. Following that logic, it's both appropriate and necessary that
business and technology executives learn what extreme programming
looks and feels like, or where PHP can work better than Java, or what
Linux actually costs and delivers relative to proprietary Microsoft and Unix.
Just as hybrids aren't inherently better than diesels and dump trucks
aren't superior to Priuses, so too for information technology: it's
not a matter of choosing the "best" technology, but the one that fits
(and perhaps reshapes) its context. In the quest for better, and
better-fitting, business and technology architectures, a working
knowledge of the rapidly evolving set of alternatives is too valuable
to be left to technologists alone.
Tuesday, September 27, 2005
September 2005 Early Indications II: Vendor Tectonics
The following headlines were selected from the News.com website on Monday September 26:
-Cingular to launch music download service in 2006
-Verizon switches on TV service
-Intel launches WiMax trials in Asia
-T-Mobile to invest in 3G in U.S.
-Google confirms it's testing wireless service
-Verizon Wireless teams with notebook makers
Couple these tidbits with eBay's purchase of Skype, Sony's massive layoffs and attempted reorganization, and Apple's continuing dominance of the handheld entertainment market, and a raft of questions emerges.
1) Will any company be able to duplicate Microsoft's powerful position in desktop computing as new platforms emerge?
2) Which current industry leaders will be acquirers and which will be acquired in the coming wave of consolidation?
3) What will be the new leverage points that allow hardware, software, or connection vendors to develop tighter customer relationships and presumably higher profitability?
4) What external forces will help shape this contest?
The list of headlines is at once tantalizing and frustrating: familiar vendors are assuming slippery identities. Cingular is a wireless carrier, but now it sells Motorola phones with Apple iTunes software. Will its music service be a competitor or complement to Apple's? Verizon used to be a phone company, then it became a phone plus half of a mobile phone company, and now it's delivering television. But wait: Google is streaming UPN video and Yahoo is delivering both network shows and original video news reports from ex-CNN reporter Kevin Sites.
On the hardware side, meanwhile, Intel used to live in the computing market, but the company remains determined to make an impact in the Lucent-Ericsson neighborhood as well. Verizon is trying to increase adoption of its wide-area wireless midband service by signing deals with hardware manufacturers. Finally, China's emergence as a hardware factory will have major repercussions.
Let's look at the various players in this new "digital home" environment grouped into three buckets: decliners, question marks, and ascenders.
Decliners:
Despite my respect for Howard Stringer, Sony looks to be in a bad way. The PlayStation franchise could remain a bright spot, but the company's high prices and slow time to market may be endemic to the culture and thus not fixable, particularly by an outsider. Sony Ericsson has not shaken up the mobile handset market, and the company's proprietary standards (for memory, among other things) swim against the prevailing tide of open standards.
Slow-moving telecoms, as The Economist suggests, will be undone by VoIP. Surprisingly, the company the magazine judged second most vulnerable, given its heavy reliance on voice revenues, was Vodafone; other nominees include British Telecom, SBC, and Telecom Italia.
Other decliners, such as AT&T and IBM's PC unit, have already been sold. In the future, players like Paul Allen's Charter Communications, Ericsson, Time Warner, Philips' consumer electronics business, and others could be similarly vulnerable to takeover.
Question marks:
Motorola seems to have been defibrillated by new CEO Ed Zander. The Razr phone has become a must-have, and the Rokr iTunes phone will bear watching. In the carrier market, Moto's Canopy system is much farther along than Intel's WiMax. Whether the company can compete within a footprint that remains broad (even after the semiconductor unit was spun out) is the big question: can the same company profitably sell home networking, military radios, carrier gear, and smart phones?
Microsoft certainly counts as front-page news these days, with lead stories in both the Wall Street Journal and Business Week. The culture is clearly in transition as Vista has required new ways of writing code and the executive turnover continues to mount. Microsoft also has prime real estate in the current platform, and substantial cash with which to buy a competitor for instant access to new markets. The centrality of the PC to the firm's worldview may be a limiting factor, given how quickly Google has innovated and how prominently smartphones figure in the global market.
Apple has soared on the success of the iPod's excellent combination of hardware, software, and content. But what happens to the computer piece of the franchise? And can the successor to the iPod do the same thing for video? Getting permissions will be harder (and indeed some music rightsholders may successfully renegotiate rates), the network connections will need to be faster, and video viewing -- unlike music listening -- is not a background activity. Getting the interface to be as intuitive and smooth may also be harder.
Nokia has ridden a roller coaster over the past few years as its various phones have touched or failed to touch the nerve of a fashion-conscious public. Revenues have been declining while profit has been highly variable. The company's future depends in large measure on carriers over which it has
limited control, and on usage habits which are similarly fickle. Finally, content providers like Yahoo, Disney, and News Corp may have a large say in the company's fate as phones become TV substitutes.
Aggressive telecom and cable providers have connections to the home or customer that are fast and getting faster. They also have limited control over programming costs, and face competition both from each other (Comcast offers voice even as Verizon offers TV) and from satellite. The list of broadband pioneers also includes Orange, Korea's KT, and Yahoo BB in Japan. Balancing the value brought by a fast connection with content-driven revenues will remain the challenge for these companies.
Ascenders:
Google is clearly frightening the industry. The firm's deep pockets, inventiveness, and sheer technical prowess mean that new product and service announcements can come from any sector of the technology map. (Speaking of talent, Vint Cerf, Rob Pike, Adam Bosworth, and more than 100 former Microsoft developers all work there.) The company has a strong and growing presence on the PC desktop, unsurpassed Linux experience and expertise, deep knowledge of mapping and image searches, testbeds in wireless and cell phone markets, and the attention of smart people all over the world who want to work there. Google could expand its voice chat into full-fledged voice over IP (and become a phone company), or sell a super-cheap network-centric PC running a non-Microsoft OS, or make any of a dozen other bold plays that would truly disrupt existing industries.
Like Google, Yahoo has lots of cash and has been hiring superstar talent. It has more media savvy and focus among its leadership team, and may well morph into more of a Viacom/News Corp competitor than a technology company. As navigating the home page makes clear, however, managing the extreme breadth of services (from driving directions to dating, finance to fantasy football to photos) might become unwieldy.
Samsung is on a roll. The company now has the most powerful Asian brand in the world, surpassing Sony this year, according to Interbrand. The company's displays, memory, and cell phones all hold leadership positions in their markets, and the patent portfolio is strong. Unlike Microsoft or Nokia, Samsung is probably equally comfortable in a wired or unwired universe. Also unlike most American companies, Samsung is well positioned for growth in the developing world including India and China given its geographic presence and price points.
After completely reinventing the economics of the PC industry, Dell has begun moving into adjoining markets: its flat-panel TV prices undercut most name brands by hundreds of dollars, for example. Its MP3 players will never challenge the iPod for design quality, but like the Axim handhelds they continue to improve while maintaining a low price point. To a certain extent Dell stands to gain as a result of Microsoft's heavy marketing in support of Vista next year, but the company is now sufficiently diversified, both product-wise and geographically, that its fate is no longer tied to Microsoft's.
Who's missing:
It's hard to know where to put the companies that will make the digital home possible. EMS providers like Jabil Circuit, logistics companies like Fedex, and component manufacturers including Intel and Synaptics (which makes scroll wheels) all could profit regardless of which of the branded companies win in the consumer market. Infrastructure and business services providers including Cisco, IBM, HP, Oracle, and SAP could similarly benefit, depending on their presence in a given vertical.
Finally, retailers including Best Buy, Wal-Mart, and Dixons stand to benefit if they can master the merchandising and logistics required by rapidly changing, complex bundles of products and services: it will no longer suffice merely to move boxes. The retailers' challenge will soon include such elements as liability for recycling toxic waste like that found in PCs and cell phones, reverse logistics for returns, and serving as a systems integrator for connected systems that to date require considerable expertise to install and manage.
Wild cards:
Government regulation will play an important role in sorting out winners from losers. Rules for broadband competition, copyright duration and extent, and protection of national "champions" (such as telecoms like Telstra or France Telecom with government ownership interest) only begin the list of extra-market forces.
Finally, and most crucially, revenue models are in the midst of a dramatic reinvention. In telecom alone, Skype threatens minute- and distance-based pricing with obsolescence, competing broadband technologies break any natural monopoly that might have existed, and new forms of seemingly peripheral content like games and ringtones play a disproportionate role in determining profitability. Elsewhere, expensive investments in global news organizations (think of CNN) or movie studios (think Viacom) could become boat anchors as their relevance declines in the face of bottom-up alternatives like news blogs or digital moviemaking and distribution.
The ultimate signal of the market's volatility is the reluctance of both consumers and manufacturers to commit to new standards: high-definition audio, high-capacity DVD, and high-bandwidth wireless are only three examples of multi-billion dollar hesitation and disagreement. Until obsolescence is no longer at the top of buyers' concerns, demand will remain inhibited. Paradoxically, the current state of messy competitiveness could be Microsoft's legacy: in the absence of a dominant vendor, with all the players seeking to prevent a leader from emerging, customers lack assurance of interoperability and backward compatibility, and remain -- intelligently -- tentative.
Friday, September 09, 2005
September 2005 Early Indications I: Open Source Beyond Software
In the September 5 issue of the New Yorker, Malcolm Gladwell explores the efforts by Mattson, a food R&D firm, to design a new cookie. The problem had tight constraints on such factors as fat, shelf stability, and calories, and three different teams competed with alternative proposals. One was a classic top-down, managed group led by a Mattson EVP. Another team consisted of two strong hands-on associates. Finally, a so-called dream team was drawn from across the industry: Mars, Kraft, Keebler, Nestle, and Kellogg's were represented, among others. You'll have to read the article to find out who wins, but the project raises several important issues.
Mattson's head man, Steve Gundrum, works in Silicon Valley and carefully tracks the tech industry. For the bakeoff, he wanted to test his hypothesis that software engineering can provide lessons to other industries. The two-man team of peers was based on Kent Beck's notion of extreme programming, or XP, in which programmers attack projects in small increments with pairs of programmers taking turns at the keyboard. The dream team was an attempt to use the open-source model to generate great ideas based on the wealth of expertise represented by the participants.
As Gladwell points out, re-designing Unix is a fundamentally different exercise compared to inventing a tasty, nutritious treat: fixing ("many eyes make bugs shallow") is not imagining. The fifteen expert bakers all held strong opinions about their own contributions and couldn't unite behind a consensus idea, but the project manager was told to let the group find its own "natural rhythm" and so let the chaos play out. In the end the team's friction prevented its potential expertise from being plumbed; in contrast, one Mattson person was able to draw on previous experiences, including an insight that topical (surface-applied rather than baked-in) seasoning makes tortilla chips more compelling, and devise a marginally more popular entry. Gladwell argues that if the dream team had been smaller it would have functioned better, but that speculation evades the question of whether Linux was the right model in the first place.
Many other unanswered questions arise. Open source has many parallels to classic scientific research: open publication, peer review, and incremental progress. In both cases the primary incentives relate to reputation rather than commerce. Because the two communities and code bases share many similarities, it may follow that applying open source techniques to biology will amplify traditional pathways to progress. But biology can be methodically incremental in ways that new product design cannot.
Even in software, it's hard to point to examples in which an open-source community model generated something new and ready for a broad user base; Linux, Apache, MySQL, and the scripting languages (Python et al.) cannot remotely be called mass-market software. Linux is also built for use: I'll update a storage-attachment routine because I have to do that task in my job. Compare the cookie bakers, who were designing for a market. Without commercial distributions like Red Hat and SuSE, Linux would have very few user interface refinements, include much different documentation, and lack things like liability protection and warranties that a market demands.
A new generation of for-profit companies is attempting to use open-source methods to build applications rather than infrastructure. So far it's too soon to tell whether SugarCRM can dent SAP, Siebel, or Salesforce, or whether Mitch Kapor's Open Source Applications Foundation can bring Chandler up to the level of Kapor's last Personal Information Manager, Lotus Agenda. Even though the teams can once again follow established patterns of an existing package, it still remains to be seen how well the open source model applies to more "productized" offerings. There's also money to be made in the integration of free and/or open source software with both commercial software and in-house applications, and VCs are backing several startups in this sector.
The question of money points to a connected nuance: open source relies on more than attracting mobs of people to attack a problem. The cookie dream team, for example, didn't share a goal or a reward mechanism. Through a variety of means, by contrast, the Linux community knows who's contributed what. Because open source is not run for profit, some observers fall into the altruism trap. Experience suggests there is a third way here: in no way can the model be described as a charity, which means that managing in or near an open source environment raises unique challenges.*
A major and often overlooked cornerstone of the open source model is transparency: beta code is released early and often precisely because it will be imperfect. The wide variety of public responses to the Hurricane Katrina disaster illustrates how these habits are becoming ingrained: many people have offered to help individual families or groups by opening their home or trying to direct financial assistance. Not only are FEMA and the Red Cross incapable of organizing relief in this way, but also the implications of such widespread personalized benevolence take us into new political, ethical, and even public-safety territory. Such an impulse challenges the traditional Jewish notion, which has many echoes in policy and practice, that both the donor and recipient of charity should be anonymous.
By one participant's own admission, the open source cookie model couldn't beat existing offerings from Pepperidge Farm. Metamarket's Open Fund mutual fund transparently published its holdings in real time and relied on a similar dream team of business and technology gurus, but shut down after 24 months of operation in August 2001. For all of open source's impact, which is difficult to overstate in its home terrain, we may have to wait some time until we see new drugs, fashions, or buildings built on parallel communities.
Perhaps the most potent discovery of the open source model's power occurred when people weren't expressly looking for it. In Howard Dean's 2003-4 campaign, word of mouth led to unprecedented numbers of small donations. The campaign's workings were visible to the community in ways most political organizations are not. Semi-tangible reward and recognition systems sprang up to motivate more and more volunteers to contribute energy, ideas, and time. The fact that the grass-roots movement in some ways overwhelmed the formal infrastructure was both a blessing and a curse to the campaign, which in fairness cannot be faulted for not being able to find a fulcrum for the unanticipated groundswell. (To be clear, Dean and his handlers can and should be faulted for plenty of other things.)
The overarching lesson, whether from code, campaigns, or cookies, is clear: new communications tools are facilitating new kinds of political, social, and economic interactions, the implications of which we're only beginning to comprehend.
_____
*This analysis from an economics paper on a non-software topic seems to fit perfectly: "We suggest that . . . the individual motivations supporting community governance are not captured by either the conventional self-interested preferences of 'Homo economicus' or by unconditional altruism towards one’s fellow community members"
Samuel Bowles and Herbert Gintis, "Social Capital and Community," Santa Fe Institute working paper 01-01-003.
Mattson's head man, Steve Gundrum, works in Silicon Valley and carefully tracks the tech industry. For the bakeoff, he wanted to test his hypothesis that software engineering can provide lessons to other industries. The two-man team of peers was based on Kent Beck's notion of extreme programming, or XP, in which programmers attack projects in small increments with pairs of programmers taking turns at the keyboard. The dream team was an attempt to use the open-source model to generate great ideas based on the wealth of expertise represented by the participants.
As Gladwell points out, re-designing Unix is a fundamentally different exercise compared to inventing a tasty, nutritious treat: fixing ("many eyes make bugs shallow") is not imagining. The fifteen expert bakers all held strong opinions about their own contributions and couldn't unite behind a consensus idea, but the project manager was told to let the group find its own "natural rhythm" and so let the chaos play out. In the end the team's friction prevented its potential expertise from being plumbed; in contrast, one Mattson person was able to draw on previous experiences, including an insight that topical (surface-applied rather than baked-in) seasoning makes tortilla chips more compelling, and devise a marginally more popular entry. Gladwell argues that if the dream team had been smaller it would have functioned better, but that speculation evades the question of whether Linux was the right model in the first place.
Many other unanswered questions arise. Open source has many parallels to classic scientific research: open publication, peer review, and incremental progress. In both cases the primary incentives relate to reputation rather than commerce. Because the two communities and code bases share many similarities, it may follow that applying open source techniques to biology will amplify traditional pathways to progress. But biology can be methodically incremental in ways that new product design cannot.
Even in software, it's hard to point to examples in which an open-source community model generated something new and ready for a broad user base; Linux, Apache, MySQL, and the scripting languages (Python et al) cannot remotely be called mass-market software. Linux is also built for use: I'll update a storage-attachment routine because I have to do that task in my job. Compare the cookie bakers, who were designing for a market. Without the commercial distributions like Red Hat and SuSE, Linux would have very few user interface refinements, include much different documentation, and lack things like liability protection and warrantees that a market demands.
A new generation of for-profit companies is attempting to use open-source methods to build applications rather than infrastructure. So far it's too soon to tell whether SugarCRM can dent SAP, Siebel, or Salesforce, or whether Mitch Kapor's Open Source Applications Foundation can bring Chandler up to the level of Kapor's last Personal Information Manager, Lotus Agenda. Even though the teams can once again follow established patterns of an existing package, it still remains to be seen how well the open source model applies to more "productized" offerings. There's also money to be made in the integration of free and/or open source software with both commercial software and in-house applications, and VCs are backing several startups in this sector.
The question of money points to a connected nuance: open source relies on more than attracting mobs of people to attack a problem. The cookie dream team, for example, didn't share a goal or a reward mechanism. Through a variety of means, by contrast, the Linux community knows who's contributed what. Just because open source is not for profit, some observers fall into the altruism trap. Experience suggests there is a third way here: in no way can the model be described as a charity, which means that managing in or near an open source environment raises unique challenges.*
A major and often overlooked cornerstone of the open source model is transparency: beta code is released early and often precisely because it will be imperfect. The wide variety of public responses to the Hurricane Katrina disaster illustrates how these habits are becoming ingrained: many people have offered to help individual families or groups by opening their home or trying to direct financial assistance. Not only are FEMA and the Red Cross incapable of organizing relief in this way, but also the implications of such widespread personalized benevolence take us into new political, ethical, and even public-safety territory. Such an impulse challenges the traditional Jewish notion, which has many echoes in policy and practice, that both the donor and recipient of charity should be anonymous.
By one participant's own admission, the open source cookie model couldn't beat existing offerings from Pepperidge Farm. Metamarket's Open Fund mutual fund transparently published its holdings in real time and relied on a similar dream team of business and technology gurus, but shut down after 24 months of operation in August 2001. For all of open source's impact, which is difficult to overstate in its home terrain, we may have to wait some time until we see new drugs, fashions, or buildings built on parallel communities.
Perhaps the most potent discovery of the open source model's power occurred when people weren't expressly looking for it. In Howard Dean's 2003-4 campaign, word of mouth led to unprecedented numbers of small donations. The campaign's workings were visible to the community in ways most political organizations are not. Semi-tangible reward and recognition systems sprang up to motivate more and more volunteers to contribute energy, ideas, and time. The fact that the grass-roots movement in some ways overwhelmed the formal infrastructure was both a blessing and a curse to the campaign, which in fairness cannot be faulted for not being able to find a fulcrum for the unanticipated groundswell. (To be clear, Dean and his handlers can and should be faulted for plenty of other things.)
The overarching lesson, whether from code, campaigns, or cookies, is clear: new communications tools are facilitating new kinds of political, social, and economic interactions, the implications of which we're only beginning to comprehend.
_____
*This analysis from an economics paper on a non-software topic seems to fit perfectly: "We suggest that . . . the individual motivations supporting community governance are not captured by either the conventional self-interested preferences of 'Homo economicus' or by unconditional altruism towards one's fellow community members."
Samuel Bowles and Herbert Gintis, "Social Capital and Community," Santa Fe Institute working paper 01-01-003.
Wednesday, August 10, 2005
August 2005 Early Indications I: Remembering Windows 95
It's a slow time in the technology industry. Breakthrough innovations
are few and far between: the iPod is almost four years old, and it's
hard to point to anything very interesting since then. Because
revenue growth has slowed, mergers and acquisitions have become the
main order of business at such companies as Oracle and, for a time,
HP. Venture capital is increasingly migrating to biotech, physical
security, and other sectors only tangentially related to computing.
The industry could use an injection of energy, activity, and not least
important, revenue.
Given this state of things, everyone is watching Microsoft, which is
preparing to launch a new operating system next year. Last month
merely changing the name from code (Longhorn) to product (Vista)
devoured a lot of attention, and more recently a stripped-down version
of the product shipped to beta testers. The product has been a long
time in coming, and the scope has been managed downward in several
respects. Nevertheless, both Microsoft and the industry more
generally see Vista as a potential jump-start very much in the same
category as Windows 95 ten years ago. Because Vista represents the
first opportunity in over ten years to begin with a "clean sheet of
paper," unlike Windows 3.1, 98, ME, and 2000/XP, Bill Gates has
repeatedly linked the two products in public.
Before looking at whether that association is warranted, it's worth
remembering just what Windows 95 brought to market. In 1994, loading
a browser onto Windows could be complicated by the operating system's
lack of Internet Protocol support. DOS prompts were very much a
day-to-day reality. File names were limited to eight letters, and
CD-ROM support was spotty. E-mail was used only by fringe populations
rather than being nearly universal. Adding hardware was more
difficult than it needed to be, multitasking was nearly impossible for
both processing and user interface reasons, and multimedia computing
was, again, the province of only a small subset of users.
Windows 95 changed all of that. Even before Gates' famous "Pearl
Harbor" speech helped turn Microsoft into an Internet-aware company,
Windows 95 made Internet connection, through both browser and e-mail,
a mass phenomenon. Multimedia, too, became an everyday event with
better hardware support (including CD-ROM drivers). Overall
usability, despite the initial confusion at using a "Start" button to
shut down a machine, was enhanced by deeper camouflaging of the
command-line layer, longer file names, and plug-and-play peripheral
support. Finally, the operating system kept pace with Intel's chip
performance and supported more realistic instances of multitasking.
The public responded. In the quarters immediately following the
launch, retail sales of Windows 95 software soared, augmenting a
strong increase in OEM sales of pre-loaded operating systems.
Responding positively to improved networking support and promises of
enhanced manageability, corporate IT organizations spent at record
levels: Microsoft's operating systems revenues jumped from $1.5
billion in fiscal 1994 to $4.1 billion only two years later.
What might we deduce about the prospects for Vista based on the
Windows 95 experience? First, it's hard to see a parallel burst of
initial interest, with or without a Rolling Stones commercial.
According to Microsoft, the benefits of Vista fall under five general
headings:
-Reliability
-Security
-Deployment (for organizations managing large rollouts)
-Performance (including better power management and faster boot up)
-Management of distributed users' machines
These categories of improvements are clearly aimed at corporate buyers
more than individuals. Most of the things a consumer-grade user will
see - including better desktop graphics and RSS support within
Internet Explorer - already come standard in Mac OS X. Backward
compatibility will be substantial, to the point that many Vista
improvements (including the IE browser) will be available as retrofits
to Windows XP. These retrofits will also slow Vista adoption.
Here's another way of thinking about the comparison. In 1995,
Microsoft turned the telephone network into an extension of the
computer, or vice versa: between them AOL and Windows 95 made the
Internet a household utility. In 2006, no parallel leap into an
adjoining domain - think of home entertainment, specifically the
television - will be supported. Bill Gates' longstanding prediction
about widespread adoption of a voice and speech interface to the PC
will be addressed with Vista support, but even given a powerful
standard processor configuration at its disposal, Vista still won't
make masses of people retire their keyboards.
In short, Windows Vista looks like a solid product for corporate
purchasers, but the lack of "gee-whiz" and "I've always wanted to be
able to do that" desirability will prevent end-user excitement from
reappearing the way it did ten years ago. An industry in search of
the next big thing will probably have to keep looking.
Monday, August 01, 2005
July 2005 Early Indications II: Virtual Trust
[Business notice: I have officially formed a company, Still River Research, to deliver consulting and analysis services. The website (www.stillriverresearch.com) is now out of beta after generous suggestions from several newsletter readers. I am now booking projects for the fall; please notify me if I can be of service.]
Security stories currently dominate much of the news. Between London, the Patriot Act, data thefts and losses, and renewed efforts to mandate identity cards for immigrants, it's hard not to feel that the world is a scary, dangerous place. My focus here, however, is on a near neighbor to security: trust, and how it can be both reinforced and undermined in new ways via digital networks.
It doesn't take long to see how various online efforts attempt to prove their trustworthiness:
-eBay relies on collated word of mouth to label bad apples and reassure good citizens. The company's institutionalized reputational currency ("view my 100% positive feedback!") is not patentable yet constitutes an enormous barrier to competitive entry.
-Some social network and dating sites use acquaintances as proxies: "you don't know me, but you know Mike, and Mike knows me, so I'm probably OK." As the Spokes and Friendsters of the world have discovered, trying to scale friend-of-a-friend trust is neither obvious nor cheap. When was the last time you used one of these services and could honestly say it was overwhelmingly positive?
-Other dating sites rely on the objective authority of social science. At eHarmony, potential daters are greeted by "relationship expert Dr. Neil Clark Warren," who has built a detailed questionnaire that "measures the intricate facets of a person, including the 29 dimensions that are most important in relationship success." Not only that, an American Psychological Association conference included a paper suggesting that eHarmony marriages are happier than marriages built on other matchmaking techniques.
-Some entities have had a difficult time recreating in new media the trust they built offline. According to Lawrence Baxter, chief e-commerce officer at Wachovia, quoted in the July 21 Boston Globe, the bank can no longer use e-mail to communicate with customers because phishing attacks so skillfully recreated the look and feel of official correspondence that customers routinely delete real messages. The cost structures used in the online bank's business case, meanwhile, are almost certainly rendered obsolete by the need to revert to physical mail.
-Amidst all of the 10-year celebrations of e-commerce sites (eBay, Amazon, CNet), some longtime readers may recall our discussion of Encyclopedia Britannica, which was nearly wiped off the map after over 225 years of operation. The company still exists, still publishes multi-volume hard-copy products, and recently announced it had re-formed and upgraded its panel of experts. That body, once home primarily to white males, now includes four Nobel laureates, two Pulitzer Prize winners, and a much more representative cultural makeup. Significantly, the last meeting of the board was ten years ago.
Several conclusions emerge:
1) Trust pays: eHarmony says it signs up 10,000-15,000 new members a day, each of whom spends between $50 and $250 (a quick back-of-the-envelope calculation follows this list).
2) Trust is expensive to build. As I searched for a new cell phone, Staples referred me to an outside vendor, but the vendor's site retains a Staples logo at the top, with the reminder that Staples will stand behind any transactions. The vendor's own site, with no such guarantee, sells the exact same service plan and phone for $50 less. Given the failure rate and overall dissatisfaction with U.S. wireless carriers, that $50 insurance looks very appealing.
3) There's a fallacy that identification can routinize trust: TSA screenings assume that someone with a driver's license that matches her face won't try to do anything bad to the aircraft. Conversely, someone who doesn't provide ID is kept off the plane: former Sun Microsystems employee John Gilmore is in federal court challenging the unwritten and/or secret law (nobody has yet produced it) that states that an "internal passport," as he calls it, is a condition for public transportation. (Here's the Gilmore site.)
4) The Britannica case, in its contrast with Wikipedia, highlights a particular dynamic on the Net, that of open vs. closed credibility, or trust if you will. Much as "many eyes make bugs shallow," as Eric Raymond argued in The Cathedral and the Bazaar in reference to open-source software, Wikipedia establishes trust in the volume of researcher-reader-editors who will spot and fix errors. Unlike the Staples model, money is less effective than reputational currency - the same stock of "funds" that makes eBay work.
Britannica, on the other hand, seeks the credibility of the few: the Encyclopedia's editor stated that "At a time when vast quantities of questionable information are available on the Internet and elsewhere, rigorous and reliable reference works are more important than ever." They are, but Britannica has a lot to answer for: the BBC reported that a 12-year-old boy in London found five errors in two entries. Add to the errors the cost of fixing paper editions and the lag between error detection and correction - how many readers will propagate errors in the interval?
5) In the physical world, institutions can convey cues that reassure patrons of their solidity and presumably good intentions: marble pillars on a bank, brightly lit colorful plastic in a strip mall, even flight attendants' and pilots' uniforms. Online, Wells Fargo, Target, or Delta can't convey the same kind of authority in pixels, so the task becomes twofold: connecting to the existing credibility through branding, and capturing various kinds of word of mouth.
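To put point 1 in perspective, here is the back-of-the-envelope arithmetic as a minimal Python sketch. The member counts and per-member spending come straight from the eHarmony claim cited above; the daily totals are simple arithmetic implications, not reported results:

# Implied daily revenue from the eHarmony figures cited in point 1.
# Member counts and spending ranges are from the claim above; the totals
# are arithmetic implications, not reported data.
new_members_per_day = (10_000, 15_000)
spend_per_member_usd = (50, 250)

low = new_members_per_day[0] * spend_per_member_usd[0]
high = new_members_per_day[1] * spend_per_member_usd[1]
print(f"Implied daily revenue: ${low:,} to ${high:,}")
# prints: Implied daily revenue: $500,000 to $3,750,000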
In the coming months, several trust stories will bear watching:
-Pharmaceutical companies, particularly in the COX-2 (Vioxx) neighborhood, have suffered major reputational damage, much of it related to online behavior, and the legal proceedings will be only one element of a fight to regain public trust.
-The 2008 presidential race will begin heating up, particularly the early-stage fundraising. Watch for the lessons various candidates learned from the Howard Dean experience.
-After the "golden age" of the CEO as hero, the past few years have reversed the public reception of business leaders. Huge severance packages following poor shareholder results, lawsuits, criminal guilty verdicts, and general tarnish on the aura make for a tough time to be a leader. Will Mark Hurd fare better at HP than did Carly Fiorina? Can Ford and GM rise to the challenge of viability and profitability? Will Boeing build a lead on Airbus? In each case, much will hinge on how much trust the leader can generate in his or her own company, the market, and the financial community. So far, by the way, it appears that Hurd understands the power of e-mail better than Harry Stonecipher at Boeing, who apparently let it become his undoing.
Friday, July 22, 2005
Signals and Noise on Broadband
(distributed July 11)
Patterns are emerging from some seemingly unrelated recent developments:
Item: After the mass transit bombings in London, what used to be called "man on the street" perspectives provided some of the most vivid news sources. The BBC has long solicited cameraphone images and personal accounts, and this week's events proved the value of this approach as one element in comprehensive newsgathering.
Item: Wasting no time integrating the Keyhole technology, Google launched a free beta of Google Earth, an even more addictive variation on the satellite imagery embedded in Google Maps. In the Wall Street Journal, Walt Mossberg questioned the utility but not the fun and wonderment fueled by the technology. For example, Google Siteseeing, a weblog unaffiliated with the company, gathers readers' harvests of interesting (for whatever reason) images from the air: shadows of airplanes about to land, smoke plumes, college campuses with giant initials on nearby hillsides, Bill Gates' house. The site has proved so popular it's had to rehost onto commercial-grade infrastructure. (For lots more on this theme, see the coverage of O'Reilly's Where 2.0 conference: http://www.oreillynet.com/where2005/.)
Item: Earlier this month, a carrier in a major European nation announced 3G cellular service over which it will broadcast 42 channels of television to mobile devices. People would be accustomed to this kind of activity in South Korea, but in France? France Telecom is involved along with Orange, giving rise to speculation that national policymakers have decided to emphasize broadband as an economic growth engine.
Item: Apple's new OS, nicknamed Tiger, includes an RSS feedreader within the Safari browser. It wasn't that long ago that people who wanted aggregated feeds needed to install and understand scripting languages; a minimal sketch of that kind of do-it-yourself aggregation appears after these items. Bloglines changed that, but Tiger appears to be the cleanest implementation to date: I've heard of people upgrading for this one feature alone.
Item: After MTV's North American feed of the Live 8 performances was interrupted by ads - midsong - music fans were delighted to see AOL open up a nearly complete video archive of six venues' performances. Apple's QuickTime format is supported, but the Firefox browser is not; in Internet Explorer, the viewer is treated to annoying Microsoft ads.
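As promised in the Tiger item, here is a minimal sketch of the do-it-yourself feed aggregation that built-in readers have since made unnecessary. It uses Python with the third-party feedparser library; the feed URLs are placeholders for illustration, not recommendations:

# A bare-bones aggregator of the kind people hand-rolled before Bloglines
# and Safari built feed reading in. Requires the feedparser library.
import feedparser

# Placeholder feed URLs; substitute any RSS or Atom feeds of interest.
feeds = [
    "http://example.com/news.rss",
    "http://example.com/weblog.atom",
]

for url in feeds:
    parsed = feedparser.parse(url)
    print(parsed.feed.get("title", url))
    for entry in parsed.entries[:5]:   # show the five newest items per feed
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))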
Conclusion 1: Diversity is good. Even without worries of terrorist attacks, viruses, and tightly coupled grids that may or may not have adequate bulkheads to prevent cascading failure (as in the US-Canada power failure of two years ago), convenience and basic prudence suggest that heterogeneous communications channels make a lot of sense. Anyone who switches to sole reliance on voice over IP, the cable company, cellular, instant messaging, or anything else risks total lights-out, as London residents discovered this week. (According to the Wall Street Journal, officials decided against shutting down the cellular networks; the lack of service was apparently caused by heavy traffic.) The success of AOL undoubtedly spurred MTV's decision to re-broadcast Live 8 without interruption.
Conclusion 2: Innovation is happening from both the top down (as in Google Labs) and the bottom up. Historically the Web has made it easy to find big news sources, but with RSS, it's similarly simple to find small ones. I'm finding it educational to watch media outlets attempt to include and/or co-opt blogs: the Wall Street Journal regularly includes Glenn (Instapundit) Reynolds in the print paper, while publications all over the map are including various blog voices. It's unclear what editorial oversight news-organ bloggers enjoy, how they're paid, and what precedence the "day job" medium has with regard to liability, scoops, retractions, and the like. There's still much to be sorted out here.
Conclusion 3: Broadband makes things happen. Whether it's telemedicine, gaming, or secure transmission of private data, low broadband penetration means that untapped opportunities abound in the United States. As of March the OECD rated the U.S. 17th out of 30 nations in terms of broadband service cost, but this is deceiving as the U.S. ranks only 6th in average broadband speed: Japan's standard service is merely 12 times as fast (26 Mbit/s versus 2, with a theoretical limit of 51; fiber to the home is expanding rapidly and delivers 100 Mbit/s). In short, U.S. customers pay a lot for service that's only charitably defined as mid-band.
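One way to see why a cost ranking by itself can deceive is to compute price per unit of bandwidth. In the minimal Python sketch below, the speeds follow the figures just cited, but the monthly prices are hypothetical placeholders chosen only for illustration, not OECD or carrier data:

# Price per Mbit/s as a rough comparison metric. Speeds follow the figures
# cited above; the monthly prices are hypothetical placeholders, not tariffs.
offers = {
    "U.S. DSL (typical)":      {"mbps": 2,   "price_usd": 40},  # price assumed
    "Japan DSL (typical)":     {"mbps": 26,  "price_usd": 35},  # price assumed
    "Japan fiber to the home": {"mbps": 100, "price_usd": 50},  # price assumed
}

for name, offer in offers.items():
    print(f"{name}: ${offer['price_usd'] / offer['mbps']:.2f} per Mbit/s")
# Even when the absolute monthly bills look similar, the per-megabit cost
# can differ by an order of magnitude or more.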
The implications can be found in multiple domains. For example, the recent data thefts increasingly involve not hackers but boxes of backup tapes falling off the backs of trucks. It's an open question whether an employee or customer would rather have her sensitive information carried on MCI fiber or in a FedEx van.
Remote work is an even more pressing example: between 1970 and 2002, vehicle miles traveled in U.S. urban areas tripled. Road mileage has in no way kept pace, and the next thirty years will be worse for congestion: few states can afford to maintain the roads and bridges they already have, much less build more. Broadband promises to help create alternative ways of organizing resources, with JetBlue's virtual call center (consisting of work-at-home customer service reps) serving as one real-life progenitor.
Other promising signs keep cropping up: on the connection front, services like Sprint's EVDO and Verizon's FiOS support reasonably symmetric speeds up and down in part because the business case for customer uploading (digital photos and the like) is getting harder to ignore. Furthermore, increasingly multi-modal communications media like the Weather Channel on cell phones and Google's purchase of Dodgeball support heterogeneous redundancy. Finally, at the FCC and elsewhere people who can make a difference seem to be raising communication policy to a slightly higher level of import.
One great thing about the current cornucopia of technologies is that some can be deployed very rapidly, to the point where Japan, for example, was able to leapfrog much of the world in a matter of a few years. Who will be the next Korea, the next Japan, the next Sweden? It's no exaggeration to say that the whole world is indeed watching - and, increasingly, contributing to content creation and distribution.
Friday, June 17, 2005
June Early Indications: Can IT Fix Health Care?
"The solution seems obvious: to get all the information about patients out of paper files and into electronic databases that -- and this is the crucial point -- can connect to one another so that any doctor can access all the information that he needs to help any given patient at any time in any place. In other words, the solution is not merely to use computers, but to link the systems of doctors, hospitals, laboratories, pharmacies and insurers, thus making them, in the jargon, 'interoperable'."
-"Special report: IT in the health-care industry," The Economist, April 30, 2005, p. 65
There's no question that North American medicine is approaching a crisis. According to the Washington Post, 45 million Americans carry no health insurance. Between 44,000 and 98,000 people are estimated to die every year from preventable medical errors such as drug interactions; the fact that the statistics are so vague testifies to the problem. The U.S. leads the world in health care spending per capita by a large margin ($4500 vs. $2500 for the runners-up: Germany, Luxembourg, and Switzerland), but its life expectancy ranks 27th, near that of Cuba, which is reported to spend about 1/25th as much per capita. Information technology has made industries such as package delivery, retail, and mutual funds more efficient: can health care benefit from similar gains?
The farther one looks into this issue, the more tangled the questions get. Let me assert at the outset that I believe electronic medical records are a good idea. But for reasons outlined below, IT by itself falls far short of meeting the challenge of rethinking health and health care. Any industry with the emotional freight, economic impact, and cultural significance of medicine can't be analyzed closely in a few paragraphs, but perhaps these ideas might begin discussion in other venues.
1) Definitions
What does the health care system purport to deliver? If longevity is the answer, clearly much less money could be spent to bring U.S. life expectancy closer to Australia's, where people live an average of three years longer. But health means more than years: the phrase "quality of life" hints at the notion that we seek something non-quantifiable from doctors, therapists, nutritionists, and others. At a macro level, no one can assess how well a health care system works because the metrics lack explanatory power: we know, roughly, how much money goes into a hospital, an HMO, or even an economic sector, but we don't know much about the outputs.
For example, should health care make us even "better than well"? As the bioethicist Carl Elliott compellingly argues in his book of that name, a substantial part of our investment in medicine, nutrition, and surgery is enhancement beyond what's naturally possible. Erectile dysfunction pills, steroids, implants, and blood doping are no longer the province of celebrities and world-class athletes. Not only can we not define health on its lower baseline, it's getting more and more difficult to know where it stops on the top bound as well.
Finally, Americans at large don't seem to view death as natural, even though it's one of the very few things that happens to absolutely everyone. Within many outposts of the health care system, death is regarded as a failure of technology, to the point where central lines, respirators, and other interventions are applied to people who are naturally coming to the end of life. This approach of course incurs astronomical costs, but it is a predictable outcome of a heavily technology-driven approach to care.
2) Health care as car repair for people?
Speaking in gross generalizations, U.S. hospitals are not run to deliver health; they're better described as sickness-remediation facilities. The ambiguous position of women who deliver babies demonstrates the primary orientation. Many of the institutional interventions and signals (calling the woman a "patient," for example) are shared with the sickness-remediation side of the house even though birth is not morbid under most circumstances. Some hospitals are turning this contradiction into a marketing opportunity: plushly appointed "birthing centers" have the stated aim of making the new mom a satisfied customer. "I had such a good experience having Max and Ashley at XYZ Medical Center," the intended logic goes, "that I want them taking care of Dad's heart problems."
Understanding health care as sickness-remediation has several corollaries. Doctors are deeply protective of their hard-won cultural authority, which they guard with language, apparel, and other mechanisms, but the parallels between a hospital and a car-repair garage run deep. After Descartes split the mind from the body, medicine followed the ontology of science to divide fields of inquiry -- and presumably repair -- into discrete units.
At teaching hospitals especially, patients frequently report feeling less like a person and more like a sum of sub-systems. Rashes are for dermatology, heart blockages set off a tug-of-war between surgeons and cardiologists, joint pain is orthopedics or maybe endocrinology. Root-cause analysis frequently falls to secondary priority as the patient is reduced to his or her compartmentalized complaints and metrics. Pain is no service's specialty but many patients' primary concern. Systems integration between the sub-specialties often falls to floor nurses and the patient's advocate if he or she can find one. The situation might be different if one is fortunate enough to have access to a hospitalist: a new specialty that addresses the state of being hospitalized, which the numbers show to be more deadly than car crashes. (To restate: something on the order of 100,000 people die in the U.S. every year from preventable medical accidents.)
The division of the patient into sub-systems that map to professional fields has many consequences. Attention focuses on the disease state, rather than the path that led to that juncture: preventive care lags far behind crisis management in glamour, funding, and attention. Diabetes provides a current example. Drug companies have focused large sums of money on insulin therapies, a treatment program that can change millions of peoples' lives. But when public-health authorities try to warn against obesity as a preventive attack on diabetes, soft-drink and other lobbies immediately spring into action.
Finally, western medicine's claim to be evidence-based is undercut by the lack of definitive evidence about ultimate outcomes. The practice of performing autopsies in cases of death where the cause is unclear has dropped steadily and steeply, to the point where doctors and families typically do not know what killed a sizable population of patients. A study at the University of Michigan estimated that almost 50% of hospital patients died of a condition for which they were not receiving treatment. It's potentially the same situation as storeowner John Wanamaker bemoaning that half of his advertising budget was being wasted, but not knowing which half.
3) Following the money
Health care costs money, involves scarcities and surplus, and employs millions of people. As such, it constitutes a market - but one that fails to run under conventional market mechanisms. (For example, excess inventory, in the form of unbooked surgical times, let's say, is neither auctioned to the highest bidder nor put on sale to clear the market.) The parties that pay are rarely the parties whose health is being treated; the parties that deliver care lack detailed cost data and therefore price services only in the loosest sense; and the alignment of patient preference with greater good through the lens of for-profit insurers has many repercussions.
Consider a few market-driven sub-optimizations:
-Chief executives at HMOs are rewarded for cost-cutting, which often translates to cuts in hospital reimbursement. Hospitals, meanwhile, are frequently not-for-profit institutions, many of which have been forced to close their doors in the past decade.
-Arrangements to pay for certain kinds of care for the uninsured introduce further costs, and further kinds of costs, into an already complex set of financial flows.
-As Richard Titmuss showed over 30 years ago in The Gift Relationship, markets don't make sense for certain kinds of social goods. In his study, paying for blood donation lowered the amount and quality of blood available for transfusion; more recently, similar paradoxes and ethical issues have arisen regarding tissue and organ donation.
-Insurers prefer to pay for tangible rather than intangible services. Hospitals respond by building labs and imaging centers as opposed to mental health facilities, where services like psychiatric nursing are rarely covered.
-Once they build labs, hospitals want them utilized, so there's further pressure (in addition to litigation-induced defensiveness) for technological evidence-gathering rather than time-consuming medical art such as history-taking and palpation, for which doctors are not reimbursed.
-As a result, conditions with clear diagnoses (like fractures) are treated more favorably in economic terms, and therefore in interventional terms, than conditions such as allergies or neck pain that lack "hard" diagnostics. Once again, the vast number of people with mental health issues are grossly underserved.
-Medical schools can no longer afford for their professors to do unreimbursable things like teach or serve on national standards bodies. The doctors need to bring in grant money to fund research and insurance money for their clinical time. Teaching can be highly uneconomical for all concerned. One reason for a shortage of nurses, meanwhile, is a shortage of nursing professors.
4) Where can IT help?
Information technology has made significant improvements possible in business settings with well-defined, repeatable processes like originating a loan or filling an order. Medicine involves some processes that fit this description, but it also involves a lot of impossible-to-predict scheduling, healing as art rather than science, and institutionalized barriers to communication.
IT is currently used in four broad medical areas: billing and finance, supply chain and logistics, imaging and instrumentation, and patient care. Patient registration is an obvious example of the first; linens and foodservice the second; MRIs, blood tests, and bedside monitoring the third; and physician order entry, patient care notes, and prescription writing the fourth. Each type of automation introduces changes in work habits, incentives, and costs for the various parties in the equation.
Information regarding health and information regarding money often follow parallel paths: if I get stitched up after falling on my chin, the insurance company is billed for an emergency department visit and a suture kit at the same time that the hospital logs my visit -- and hopefully flags any known antibiotic allergies. Meanwhile the interests and incentives are frequently anything but parallel: I might want a plastic surgeon to suture my face; the insurer prefers a physician's assistant. From the patient's perspective, having systems that more seamlessly interoperate with the HMO may not be positive if that results in fewer choices or a perceived reduction in the quality of care. On the provider side, the hospital and the plastic surgeon will send separate bills, each hoping for payment but neither coordinating with the other. Bills frequently appear in a matter of days, with the issuer hoping to get paid first, before the patient realizes any potential errors in calculating co-pay or deductible. The amount of time and money spent on administering the current dysfunctional multi-payer system is impossible to conceive.
Privacy issues are non-trivial. Given that large-scale breaches of personal information are almost daily news, what assurance will patients have that a complex medical system will do a better job shielding privacy than Citigroup or LexisNexis? With genomic predictors of health -- and potential cost for insurance coverage -- around the corner, how will patients' and insurers' claims on that information be reconciled?
A number of services currently let individuals combine personal control and portability of their records. It's easy to see how such an approach may not scale: something as trivial as password resets in corporate computing environments already involves sizeable costs -- now think about managing the sum of past and present patients and employees as a user base with access to the most sensitive information imaginable. With portable devices proliferating, the potential paths of entry expand both the security perimeter and the cost of securing it: think of teenage hackers trying to find their way to Paris Hilton's medical record rather than her Sidekick.
Hospitals already tend to treat privacy as an inconvenience -- witness the universal use of the ridiculous johnnies, which do more to demean the patient than to improve quality of care. The medical record doesn't even belong to the person whose condition it documents. American data privacy standards, even after HIPAA, lag behind those in the European Union. From such a primitive baseline, getting to a new state of shared accountability, access, and privacy will take far more diplomacy than systems development.
Spending on diagnostic technology currently outpaces patient care IT. Hospitals routinely advertise less confining MRI machines, digital mammography, and 3D echocardiography; it's less easy to impress constituencies with effective metadata for patient care notes, for example. (Some computerized record systems merely capture images of handwritten notes with only minimal indexing.) After these usually expensive machines produce their intended results, the process by which diagnosticians and ultimately caregivers use those results is often haphazard: many tests are never consulted, or compared to previous results -- particularly if they were generated somewhere else. NIH doesn't just stand for National Institutes of Health; Not Invented Here is also alive and well in hospitals.
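To make the metadata point concrete, here is a minimal Python sketch of the difference between a scanned note that can only be leafed through and one carrying a few searchable fields. The field names and sample records are invented for illustration and do not describe any actual hospital system:

# A scanned image with no indexing can only be browsed; a handful of
# structured fields turns the same note into something retrievable.
# All field names and records below are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CareNote:
    patient_id: str
    author: str
    department: str
    visit_date: date
    tags: list = field(default_factory=list)  # e.g., ["allergy", "penicillin"]
    image_file: str = ""                       # where the scanned handwriting lives

notes = [
    CareNote("MRN-001", "Dr. A", "cardiology", date(2005, 3, 2), ["chest pain"], "scan_0001.tif"),
    CareNote("MRN-001", "Dr. B", "emergency", date(2005, 6, 17), ["laceration", "allergy"], "scan_0442.tif"),
]

def find_notes(all_notes, patient_id, tag=None):
    """Return one patient's notes, optionally narrowed by tag."""
    mine = [n for n in all_notes if n.patient_id == patient_id]
    return [n for n in mine if tag is None or tag in n.tags]

for note in find_notes(notes, "MRN-001", tag="allergy"):
    print(note.visit_date, note.department, note.image_file)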
Back in the early days of reengineering, when technology and process change were envisioned as a potent one-two punch in the gut of inefficiency, the phrase "don't pave the cowpaths" was frequently used as shorthand. Given that medicine can only be routinized to a certain degree, and given that many structural elements contribute to the current state of affairs, it's useful to recall the old mantra. Without new ways of organizing the vastness of a longitudinal medical record, for example, physicians could easily find themselves buried in a haystack of records, searching for a needle without a magnet. Merely automating a bad process rarely solves any problems, and usually creates big new ones.
Change comes slowly to medicine, and the application of technology depends, here as always, on the incentives for different parties to adopt new ways of doing things. Computerized approaches to caregiving include expert knowledge bases, automated lockouts much like those in commercial aviation, and medical simulators for training students and experienced practitioners alike. Each of these has proven benefits, but only limited deployment. Further benefits could come from well care and preventive medicine, but these areas have proven less amenable to the current style of IT intensification. Until reform efforts such as Leapfrog can address the culture, process, and incentive issues in patient care, increases in clinical IT investment will do little to drive breakthrough change in the length and quality of Americans' lives.
-"Special report: IT in the health-care industry," The Economist, April 30, 2005, p. 65
There's no question that North American medicine is approaching a crisis. According to the Washington Post, 45 million Americans carry no health insurance. Between 44,000 and 98,000 people are estimated to die every year from preventable medical errors such as drug interactions; the fact that the statistics are so vague testifies to the problem. The U.S. leads the world in health care spending per capita by a large margin ($4500 vs. $2500 for the runners-up: Germany, Luxembourg, and Switzerland), but the life expectancy ranks 27th, near that of Cuba, which is reported to spend about 1/25th as much per capita. Information technology has made industries such as package delivery, retail, and mutual funds more efficient: can health care benefit from similar gains?
The farther one looks into this issue, the more tangled the questions get. Let me assert at the outset that I believe electronic medical records are a good idea. But for reasons outlined below, IT by itself falls far short of meeting the challenge of rethinking health and health care. Any industry with the emotional freight, economic impact, and cultural significance of medicine can't be analyzed closely in a few paragraphs, but perhaps these ideas might begin discussion in other venues.
1) Definitions
What does the health care system purport to deliver? If longevity is the answer, clearly much less money could be spent to bring U.S. life expectancy closer to Australia, where people live an average of three years longer. But health means more than years: the phrase "quality of life" hints at the notion that we seek something non-quantifiable from doctors, therapists, nutritionists, and others. At a macro level, no one can assess how well a health care system works because the metrics lack explanatory power: we know, roughly, how much money goes in to a hospital, HMO, or even economic sector, but we don't know much about the outputs.
For example, should health care make us even "better than well"? As the bioethicist Carl Elliott compellingly argues in his book of that name, a substantial part of our investment in medicine, nutrition, and surgery is enhancement beyond what's naturally possible. Erectile dysfunction pills, steroids, implants, and blood doping are no longer the province of celebrities and world-class athletes. Not only can we not define health on its lower baseline, it's getting more and more difficult to know where it stops on the top bound as well.
Finally, Americans at large don't seem to view death as natural, even though it's one of the very few things that happens to absolutely everyone. Within many outposts of the health care system, death is regarded as a failure of technology, to the point where central lines, respirators, and other interventions are applied to people who are naturally coming to the end of life. This approach of course incurs astronomical costs, but it is a predictable outcome of a heavily technology-driven approach to care.
2) Health care as car repair for people?
Speaking in gross generalizations, U.S. hospitals are not run to deliver health; they're better described as sickness-remediation facilities. The ambiguous position of women who deliver babies demonstrates the primary orientation. Many of the institutional interventions and signals (calling the woman a "patient," for example) are shared with the sickness-remediation side of the house even though birth is not morbid under most circumstances. Some hospitals are turning this contradiction into a marketing opportunity: plushly appointed "birthing centers" have the stated aim of making the new mom a satisfied customer. "I had such a good experience having Max and Ashley at XYZ Medical Center," the intended logic goes, "that I want them taking care of Dad's heart problems."
Understanding health care as sickness-remediation has several corollaries. Doctors are deeply protective of their hard-won cultural authority, which they guard with language, apparel, and other mechanisms, but the parallels between a hospital and a car-repair garage run deep. After Descartes split the mind from the body, medicine followed the ontology of science to divide fields of inquiry -- and presumably repair -- into discrete units.
At teaching hospitals especially, patients frequently report feeling less like a person and more like a sum of sub-systems. Rashes are for dermatology, heart blockages set off a tug-of-war between surgeons and cardiologists, joint pain is orthopedics or maybe endocrinology. Root-cause analysis frequently falls to secondary priority as the patient is reduced to his or her compartmentalized complaints and metrics. Pain is no service's specialty but many patients' primary concern. Systems integration between the sub-specialties often falls to floor nurses and the patient's advocate if he or she can find one. The situation might be different if one is fortunate enough to have access to a hospitalist: a new specialty that addresses the state of being hospitalized, which the numbers show to be more deadly than car crashes. (To restate: something on the order of 100,000 people die in the U.S. every year from preventable medical accidents.)
The division of the patient into sub-systems that map to professional fields has many consequences. Attention focuses on the disease state, rather than the path that led to that juncture: preventive care lags far behind crisis management in glamour, funding, and attention. Diabetes provides a current example. Drug companies have poured large sums of money into insulin therapies, a treatment program that can change millions of people's lives. But when public-health authorities try to warn against obesity as a preventive attack on diabetes, soft-drink and other lobbies immediately spring into action.
Finally, western medicine's claim to be evidence-based sits uneasily with the lack of definitive evidence about ultimate outcomes. The practice of performing autopsies in cases of death where the cause is unclear has dropped steadily and steeply, to the point where doctors and families typically do not know what killed a sizable population of patients. A study at the University of Michigan estimated that almost 50% of hospital patients died of a condition for which they were not receiving treatment. It's potentially the same situation as storeowner John Wanamaker's: he bemoaned that half of his advertising budget was being wasted, but never knew which half.
3) Following the money
Health care costs money, involves scarcities and surplus, and employs millions of people. As such, it constitutes a market - but one that fails to run under conventional market mechanisms. (For example, excess inventory, in the form of unbooked surgical times, let's say, is neither auctioned to the highest bidder nor put on sale to clear the market.) The parties that pay are rarely the parties whose health is being treated; the parties that deliver care lack detailed cost data and therefore price services only in the loosest sense; and the alignment of patient preference with greater good through the lens of for-profit insurers has many repercussions.
Consider a few market-driven sub-optimizations:
-Chief executives at HMOs are rewarded for cost-cutting, which often translates to cuts in hospital reimbursement. Hospitals, meanwhile, are frequently not-for-profit institutions, many of which have been forced to close their doors in the past decade.
-Arrangements to pay for certain kinds of care for the uninsured introduce further costs, and further kinds of costs, into an already complex set of financial flows.
-As Richard Titmuss showed over 30 years ago in The Gift Relationship, markets don't make sense for certain kinds of social goods. In his study, paying for blood donation lowered the amount and quality of blood available for transfusion; more recently, similar paradoxes and ethical issues have arisen regarding tissue and organ donation.
-Insurers prefer to pay for tangible rather than intangible services. Hospitals respond by building labs and imaging centers as opposed to mental health facilities, where services like psychiatric nursing are rarely covered.
-Once they build labs, hospitals want them utilized, so there's further pressure (in addition to litigation-induced defensiveness) for technological evidence-gathering rather than time-consuming medical art such as history-taking and palpation, for which doctors are not reimbursed.
-As a result, conditions with clear diagnoses (like fractures) are treated more favorably in economic terms, and therefore in interventional terms, than conditions such as allergies or neck pain that lack "hard" diagnostics. Once again, the vast number of people with mental health issues are grossly underserved.
-Medical schools can no longer afford for their professors to do unreimbursable things like teach or serve on national standards bodies. The doctors need to bring in grant money to fund research and insurance money for their clinical time. Teaching can be highly uneconomical for all concerned. One reason for a shortage of nurses, meanwhile, is a shortage of nursing professors.
4) Where can IT help?
Information technology has made significant improvements possible in business settings with well-defined, repeatable processes like originating a loan or filling an order. Medicine involves some processes that fit this description, but it also involves a lot of impossible-to-predict scheduling, healing as art rather than science, and institutionalized barriers to communication.
IT is currently used in four broad medical areas: billing and finance, supply chain and logistics, imaging and instrumentation, and patient care. Patient registration is an obvious example of the first; lines and foodservice the second; MRIs, blood tests, and bedside monitoring the third; and physician order entry, patient care notes, and prescription writing the fourth. Each type of automation introduces changes in work habits, incentives, and costs to various parties in the equation.
Information regarding health and information regarding money often follow parallel paths: if I get stitched up after falling on my chin, the insurance company is billed for an emergency department visit and a suture kit at the same time that the hospital logs my visit -- and hopefully flags any known antibiotic allergies. Meanwhile the interests and incentives are frequently anything but parallel: I might want a plastic surgeon to suture my face; the insurer prefers a physician's assistant. From the patient's perspective, having systems that more seamlessly interoperate with the HMO may not be positive if that results in fewer choices or a perceived reduction in the quality of care. On the provider side, the hospital and the plastic surgeon will send separate bills, each hoping for payment but neither coordinating with the other. Bills frequently appear in a matter of days, with the issuer hoping to get paid first, before the patient realizes any potential errors in calculating co-pay or deductible. The amount of time and money spent on administering the current dysfunctional multi-payer system is impossible to conceive.
Privacy issues are non-trivial. Given that large-scale breaches of personal information are almost daily news, what assurance will patients have that a complex medical system will do a better job shielding privacy than Citigroup or LexisNexis? With genomic predictors of health -- and potential cost for insurance coverage -- around the corner, how will patients' and insurers' claims on that information be reconciled?
A number of services currently let individuals combine personal control and portability of their records. It's easy to see how such an approach may not scale: something as trivial as password resets in corporate computing environments already involves sizable costs -- now think about managing the sum of past and present patients and employees as a user base with access to the most sensitive information imaginable. With portable devices proliferating, each new path of entry expands both the security perimeter and the cost of securing it: think of teenage hackers trying to find their way to Paris Hilton's medical record rather than her Sidekick.
Hospitals already tend to treat privacy as an inconvenience -- witness the universal use of the ridiculous johnnies, which do more to demean the patient than to improve quality of care. The medical record doesn't even belong to the person whose condition it documents. American data privacy standards, even after HIPAA, lag behind those in the European Union. From such a primitive baseline, getting to a new state of shared accountability, access, and privacy will take far more diplomacy than systems development.
Spending on diagnostic technology currently outpaces patient care IT. Hospitals routinely advertise less confining MRI machines, digital mammography, and 3D echocardiography; it's less easy to impress constituencies with effective metadata for patient care notes, for example. (Some computerized record systems merely capture images of handwritten notes with only minimal indexing.) After these usually expensive machines produce their intended results, the process by which diagnosticians and ultimately caregivers use those results is often haphazard: many tests are never consulted, or compared to previous results -- particularly if they were generated somewhere else. NIH doesn't just stand for National Institutes of Health; Not Invented Here is also alive and well in hospitals.
Back in the early days of reengineering, when technology and process change were envisioned as a potent one-two punch in the gut of inefficiency, the phrase "don't pave the cowpaths" was frequently used as shorthand. Given that medicine can only be routinized to a certain degree, and given that many structural elements contribute to the current state of affairs, it's useful to recall the old mantra. Without new ways of organizing the vastness of a longitudinal medical record, for example, physicians could easily find themselves buried in a haystack of records, searching for a needle without a magnet. Merely automating a bad process rarely solves any problems, and usually creates big new ones.
Change comes slowly to medicine, and the application of technology depends, here as always, on the incentives for different parties to adopt new ways of doing things. Computerized approaches to caregiving include expert knowledge bases, automated lockouts much like those in commercial aviation, and medical simulators for training students and experienced practitioners alike. Each of these has proven benefits, but only limited deployment. Further benefits could come from well care and preventive medicine, but these areas have proven less amenable to the current style of IT intensification. Until reform efforts such as Leapfrog can address the culture, process, and incentive issues in patient care, the increase in clinical IT investment will do little to drive breakthrough change in the length and quality of Americans' lives.
Tuesday, June 07, 2005
May 2005 Early Indications II: Power laws for fun and profit
(shipped May 26, posted at www.guidewiregroup.com, and archived here)
Five years ago, the Internet sector was in the middle of a momentous
slide in market capitalization. Priceline went from nearly $500 a
share to single digits in three quarters. CDnow fell from $23 to
$3.40 in about 9 months ending in March 2000. Corvis, Music Maker,
Dr. Koop - 2000 was a meltdown the likes of which few recreational
investors had ever seen or imagined. Science was invoked to explain
this new world of Internet business.
Bernardo Huberman, then at Xerox PARC, and others found that the
concentration of traffic among websites departed far from the 80/20
rule of thumb: as of December 1, 1997, the top 1% of the
website population accounted for over 55% of all traffic. This kind
of distribution was not new, as it turned out. A Harvard linguist
with the splendid name of George Zipf counted words, and found that a
tiny percentage of English words account for a disproportionate share
of usage. A Zipf distribution, plotted on a log-log scale, is a
straight line from upper left to lower right. In linear scale, it
plunges from the top left and then goes flat for the characteristic
long tail of the distribution: twosies and then onesies occupy most of
the x-axis.
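A minimal sketch in Python (assuming a textbook 1/rank Zipf
distribution over 10,000 items, not Huberman's actual data) makes the
shape concrete: on linear axes the curve plunges and then flattens
into the long tail, while on log-log axes it traces a straight line.
Under that assumption the top 1% of items capture roughly half of the
total, in the neighborhood of the Huberman figure.

import numpy as np
import matplotlib.pyplot as plt

# Zipf-like rank-frequency data: the item at rank r gets traffic ~ 1/r
ranks = np.arange(1, 10001)
shares = 1.0 / ranks
shares /= shares.sum()            # normalize to shares of total traffic

# How much of the total goes to the top 1% (100 items out of 10,000)?
print(f"Top 1% of items: {shares[:100].sum():.0%} of all traffic")

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(ranks, shares)
ax_lin.set_title("Linear axes: steep head, long flat tail")
ax_log.loglog(ranks, shares)
ax_log.set_title("Log-log axes: a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("rank")
    ax.set_ylabel("share of traffic")
plt.tight_layout()
plt.show()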
Given such "scientific" logic, investors began to argue that the
Internet was a new kind of market, with high barriers to entry that
made incumbents' positions extremely secure. Michael Mauboussin, then
at CS First Boston and now at Legg Mason, wrote a paper in late 1999
called "Absolute Power." In it he asserted that "power laws . . .
strongly support the view that on-line markets are winner-take-all."
Since that time, Google has challenged Yahoo, weblogs have markedly
eroded online news sites' traffic, and the distinction between
"on-line markets" and plain old markets is getting harder to maintain.
Is the Zipf distribution somehow changing? Were power laws wrongly
applied or somehow misunderstood?
Chris Anderson, editor of Wired, has a different reading of the graph
and focuses instead on the long tail. In an article last fall that's
being turned into a book, Anderson explains how a variety of web
businesses have prospered by successfully addressing the very large
number of niches in any given market. Jeff Bezos, for instance,
estimates that 30% of the books Amazon sells aren't in physical
retailers. Unlike Excite, which couldn't make money on the mostly
unique queries that came into the site, Google uses AdWords to sell
almost anything to the very few people who search for something
related to it. As of March, every iTunes song in inventory (that's
over 1 million) had been purchased at least once. Netflix carries far
more inventory than a neighborhood retailer can, and can thus satisfy
any film nut's most esoteric request.
At the same time, producers of distinctive small-market goods (like
weblogs, garage demo CDs, and self-published books) can through a
variety of mechanisms reach a paying public. These mechanisms include
word of mouth, search-driven technologies, and public performance
tie-ins; digital distribution can also change a market's economics.
Thus the news is good for both makers and users, buyers and sellers;
in fact, libertarian commentator Virginia Postrel has written for the
last several years on the virtues of the choice and variety we
currently enjoy.
There's currently a "long tail" fixation in Silicon Valley. Venture
capitalists report seeing a requisite power law slide in nearly any
pitch deck. CEO Eric Schmidt showed a long tail slide at the Google
shareholder meeting. Joe Kraus, formerly of Excite and now at
JotSpot, tries to argue for a long tail in software development upon
which his product of course capitalizes. The term has made USA Today
and The Economist. In some ways this feels like the bubble again, for
better and for worse.
At one level, the Internet industry seems to need intense bursts of
buzzword mania: you no longer hear anyone talking about push,
incubators, portals, exchanges, or on-line communities even though
each of these was a projected multi-billion dollar market. The visual
appeal of a Zipf distribution may also lend Anderson's long tail a
quasi-scientism that simple terms like "blog," "handheld," or
"broadband" lack. Netflix, Amazon, and Google lacked power law
graphs, I'm pretty certain, in their startup documents and have
managed to thrive regardless. Anderson's own evidence illustrates
what a long way it is from explanation to prediction: showing how some
firms can profitably address niches doesn't prove that a startup will
similarly prosper in an adjacent market. To his credit, he focuses
primarily on entertainment, where digitization is most prevalent.
The recourse to supposed mathematical precision to buttress something
as unscientific as a business plan is not new. Sociologists
investigating networks of people have been overshadowed by physicists
who bring higher math horsepower to the same sets of problems, yet
it's still difficult to understand Friendster's revenue model.
Complex adaptive systems research was very hot in the 90s, following
in the path of the now barely visible "artificial intelligence."
The problem extends beyond calculus to spreadsheets: much of what
passes for quantitative market research is barely legitimate data. For
a simple 5-point questionnaire response to reduce to a single
semi-reliable number, the answer choices should vary at regular
intervals, yet words rarely behave this way. Is "most of the time" 8
times out of ten or 95 times out of 100? Who remembers to count
before someone asks? Purchase intent rarely translates to purchase.
Yet executives make decisions every day based on customer satisfaction
scores, opinion surveys, and focus groups, all of which reduce noisy
variation to apparently clinical precision.
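A hypothetical back-of-the-envelope example (the numbers below are
invented for illustration, not drawn from any real survey) shows how
much rides on the arbitrary mapping of words to values: score "most
of the time" as 0.80 and the average comes out noticeably different
than if it is scored as 0.95, yet either figure gets reported to two
decimal places.

# Hypothetical illustration: the same five verbal responses scored under
# two plausible numeric mappings of a frequency scale. The "precise"
# average moves simply because of how one phrase is turned into a number.
responses = ["always", "most of the time", "most of the time",
             "sometimes", "rarely"]

scoring_a = {"always": 1.00, "most of the time": 0.80,
             "sometimes": 0.50, "rarely": 0.20, "never": 0.00}
scoring_b = {"always": 1.00, "most of the time": 0.95,
             "sometimes": 0.50, "rarely": 0.20, "never": 0.00}

for label, scoring in (("scoring A", scoring_a), ("scoring B", scoring_b)):
    average = sum(scoring[r] for r in responses) / len(responses)
    print(f"{label}: average 'satisfaction' = {average:.2f}")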
Make no mistake: Chris Anderson has identified something important and
widespread when he groups superficially dissimilar businesses to spot
their shared reliance on the medium's powerful capability for matching
big, sparse populations to things they want and will pay for.
Returning to our opening question with regard to what's changed since
2000, the necessary preconditions of successful long tail models
include large populations and strong search, a relatively new
capability. What will disrupt today's incumbents by 2010? New kinds
of batteries? Flexible displays? Enforced shutdown of the
peer-to-peer networks, possibly by a massive worm/virus of unknown
origin?
It's also important to see the both/and: just because quirky tastes
can constitute a profitable audience in new ways does not preclude
hits like the Da Vinci Code, let's say, from being major news. And
power laws still apply to traffic (and presumably revenue): Google and
Amazon profitably handle massive volumes of site visits whereas Real's
download service, about which Anderson rhapsodizes, still loses money.
At the end of the day, no algorithm in the world can negate the most
powerful "law" of business, that of cash flow.
Friday, May 27, 2005
May 2005 Early Indications I: O Brother, Where Art Thou?
(shipped 5/16)
Issues of electronic identity and mobility have recently been playing out in quiet but important ways. Each of three instances from last week's news is a classic case of social or economic problems being tangled up with a technology challenge. To see only one side of the question is to create the possibility of unintended consequences, allow hidden agendas into play, and generally confuse the allocation of sometimes-scarce resources. The emergence of location-sensitive services has occurred in sometimes unpredictable ways: rather than driving by the mall and getting an ad on my cell phone or satellite radio telling me about the sale at Penny's (to quote Airplane!), I and many others have much more pressing concerns about being identified and found - on our terms rather than someone else's.
-Google Buys Dodgeball
Dodgeball is a social networking service built - literally - by two guys, possibly in a garage. It combines mapping and location-awareness with connections to friends and friends-of-friends. Right now it has achieved its greatest traction in New York City, where people can find each other and meet up with help from the software that runs on cell phones. Last week Google announced that it had purchased the company for an undisclosed sum.
The connections to Google's expanding portfolio are fascinating to contemplate. The more I use Google Maps, toggling between a map location and a satellite photo or seamlessly dragging a map into a viewable window, the more impressed I am. It's a useful exercise to compare Mapquest and Google Maps, given that the underlying map data comes from the same source (NAVTEQ): the implementations have significant differences. Dodgeball also could connect with the Orkut social networking work, the Hello and Picasa photo sharing tools, and of course Google Local.
Viewed in isolation, the Dodgeball service raises the revenue questions familiar to watchers of Friendster et al: who will pay for what, and who collects, by what mechanism? But in the context of the Google suite, Dodgeball becomes a feature rather than a product, and those revenue concerns vaporize, particularly in light of the massive runup in online ad spending that's enriching both Yahoo and Google.
-Real ID Becomes Law
Buried in a Senate appropriations bill devoted to funding the war in Afghanistan, the Real ID provision mandates that states adhere to a federal standard for issuing drivers' licenses, effectively creating a national ID card. According to F. James Sensenbrenner, the Republican Chair of the House Judiciary Committee and the bill's author, "American citizens have the right to know who is in their country, that people are who they say they are, and that the name on the driver's license is the real holder's name, not some alias." Opponents of the bill - some 600 groups in all - included states, pro-immigration groups, and privacy watchdogs. Encapsulating the Real ID language deep in the appropriations bill prevented consideration of the proposal on its own merits, a fact that both Republican and Democratic Senators bemoaned.
The Smart Card Alliance announced its support for the measure, which will cost an estimated $100 million to $750 million to implement. Expect other technology lobbies to follow; Oracle, for one, has been active in the debate for some time. That cost will be paid by the states, which are now in the position of having to make their licenses "machine readable" without any further guidance. The Departments of Homeland Security and Transportation will set standards for deployment; states that opt out will have their citizens kept off airplanes and out of Federal buildings.
More chillingly, as Bruce Schneier points out, nobody can use post office boxes on their license - even judges and undercover police officers. Illegal aliens, or "undocumented workers," will be forbidden from holding driver's licenses, which in purely statistical terms will result in more uninsured motorists and higher insurance premiums. States will be required to retain the documentation (such as a birth certificate) on the basis of which they issue a given license, in sharable digital form, for years afterward, increasing the risk of identity theft still further. As Schneier points out, a national ID requires a national database, and the abuses of the database, not the card, are the main cause for concern. Substantial breaches have already occurred at current leaders in identity management: credit card companies, state DMVs, health authorities. With more agencies connected into a de facto national identity database, the scale of risk magnifies. But Congressional leadership decided that a debate over the costs and benefits of such a system was not as important as preventing Latinos from driving to work. Expect those 600 groups of opponents to fight this battle further.
-The Breakdown of 911
After a series of implementations beginning in 1968, Americans on wireline voice connections could reliably dial the same three-digit emergency number anywhere in the country. As the Bell System of the twentieth century fades farther and farther from view, the presumption of 911 reliability declines proportionately with the old business model even as demand increases: New York handles about 30,000 calls a day to 911. The problem comes in two variants.
First, a number of Voice over IP customers with life-threatening - and as it turned out, life-ending - emergencies could only reach a recording at Vonage saying to call 911 from another phone. The Texas Attorney General is raising the question after a 911 call failed during a home invasion in Houston. A baby's death in Florida is being blamed on a Vonage 911 failure. According to the Wall Street Journal, "In a letter to Florida's Attorney General, [the mother] said the Vonage customer-service representative laughed when she told her that Julia had died. 'She laughed and stated that they were unable to revive a baby'. . . ."
For its part, Vonage includes bold-print instructions for manual 911 mapping during the sign-up process, but it's been estimated that up to a quarter of the U.S. population is functionally illiterate. One feature of VoIP is its portability: plug the phone into an RJ45 jack anywhere and receive calls at a virtual area code of the customer's choice. Navigating firewalls, dynamic IP addresses, wireless connections, and frequent network outages taxes all but the most technically adept Internet users. Children are also a key 911 constituency. Taken collectively, these overlapping populations raise dozens of tricky questions. At the infrastructure level, the FCC and other agencies face the substantial challenge of determining the fairest, safest set of technical interconnection requirements incumbent on the Regional Bells and VoIP carriers.
From the Bell perspective, 911 obviously costs money to implement and maintain, and declining wireline revenues translate to declining 911 funds. Connecting 911 to the Internet in a reliable, secure manner is nontrivial - network attacks have used modems to target the service in the past - and until contractual arrangements are finalized there is reluctance to subsidize the same firms that present themselves as full wireline replacements.
911 isn't just a VoIP problem either: cellular users represent about a third of emergency callers, but math and economics conspire to make finding them difficult or impossible. In rural areas, cell towers often follow roads, so attempting to triangulate from three points in a straight line can limit precision. States have raided 911 tax revenues for budget relief. According to the National Emergency Number Association, quoted in the Wall Street Journal, "sixteen states, including New Jersey, Arizona and Ohio, have upgraded less than 10% of their counties. Six of those states haven't finished a single county."
It's turning out that the phone is a major platform in the identity debate. Number portability was an unexpectedly popular mandate a few years ago, and the fastest technology adoption in history was a phone feature: 55 million people signed up in a matter of months for a service - the Federal Do Not Call registry - that didn't exist when it was announced. That's even faster than the previous champ, Netscape Navigator's zooming to 38 million users in 18 months. Given the global nature of some of these questions, not to mention numerous issues with ICANN and DNS, the discussions and solutions will only get more complicated. As the examples illustrate, getting social arrangements to keep pace with technology innovation is if anything more difficult than the innovation itself.
Tuesday, May 03, 2005
April Early Indications II: My Way
Until further notice, I'll be posting at the Guidewire Group site (I'm on their advisory board). The posts will be archived here, however.
April 2005 Early Indications II: My Way
One of the great but difficult thinkers of the twentieth century, the economist and satirist Thorstein Veblen, wrestled with people's interconnected relationships both to what Marx named the means of production and to the consumption of mass produced goods. Veblen attributed a nobility to work that he called the "instinct of workmanship": man the maker "has a sense of the merit of serviceability or efficiency and of the demerit of futility, waste, or incapacity." By contrast, what he memorably named "conspicuous consumption" was "ceremonial" in that it sorted people by reputation, the basis of an ultimately unwinnable competition.
By mentioning Veblen I raise an unanswerable question. The people who buy mass-produced stuff, often called "consumers," want in some deep-rooted way to shape their environment beyond just piling up purchased goods. How much people want to stand out as unique, and how much they want to create something tangible, is of course impossible to differentiate or quantify, and of course sometimes an artifact embodies both consumption (or conspicuousness) and workmanship. But the current business landscape provides too many examples for this to be a fad: there's something very potent afoot in the rise of personalization and customization.
-Nike's ID (www.nikeid.com) mass-customization site has been up for a long time now. Back in 2000, an MIT grad student wanted his custom label to read "sweatshop," and Nike's refusal turned into a PR bonanza for both the grad student and the site itself. I don't know when it was most recently upgraded, but the current range of product and color options is truly dazzling: the Flash visualization tool is a lot of fun, to the point where designing your own stuff can get addictive. The price points have come down too: you can get a customized bike messenger bag for $50 or a watch for $85, while shoes seem to sell for $10 over retail.
There's a lot of cleverness embedded in the site. The color palettes, for example, are constructed such that it's truly hard to get two adjoining colors to clash. In some shoes, you can order two different sizes - 8 right and 8 1/2 left, for example - at no extra cost. The number of potential configurations is huge, but the volume is sufficiently small to justify the cost with customer "delight" - it's clearly an opportunity to surpass expectations.
Nike in turn plays directly to Veblen: university sports teams and elite athletes frequently get shoes in custom color schemes. It used to be a mark of distinction, or perhaps what Thorstein would call "invidious [unfair] comparison," to have shoes that aren't available in stores. Now, anyone can just do it - and not only with youth-oriented brands either: for the same $10 premium, you can design your own L.L. Bean canvas tote.
-One of the many reasons for Japanese automakers' success in the U.S. relates to their conscious courtship not of the NASCAR audience (which with few exceptions has been offered unmemorable racing-related products to buy by the American Big Three) but of the street-racing subculture central to "The Fast and the Furious." Honda, Mitsubishi, and Subaru have sold inexpensive cars that have serious upgrade and customization potential, and while some offerings such as the Dodge Neon SRT have come from U.S. badges, they're clearly late to the game. The difference between car as finished artifact and car as a platform for creativity was grasped first by some astute observers of Asian-American teenagers.
-Rupert Murdoch recently told the U.S. newspaper industry convention that paper news is in danger of missing the key young adult demographic: circulation numbers are off, so badly that many big papers have been caught padding them to maintain ad revenue. TV news is also in decline: viewership is off by a third since 1993. In contrast, the most comparable online sites are either highly opinionated and interactive, or else highly selectable: Yahoo and Google News are growing far faster than any mainstream online news operation.
-Keeping with the TV theme, TiVo may have made it out of the woods with the Comcast deal. The future of TV ad revenues is being reshaped not by the remote control, which was frequently used to skip advertising, but by an automated device that does the same thing. Broadcasters are accustomed to control over the viewing experience: even mighty Microsoft has its logo disappear at bootup, as do cellular carriers and handset makers, so people can display any pictures or colors they choose on their computer and mobile phone screens. The networks, meanwhile, are fighting time-shifting and time-compression even as they find new ways to insert logos, promotional announcements, and other reminders of whose interface we're watching. Viewer annoyance with Jim Nantz during the NCAA basketball tournament reached new highs, for example. By contrast, the cellular industry has supported customization and is profiting: U.S. ringtone sales were $850 million last year, double the 2003 total.
-One of the most powerful trends in electronic gaming is the creation and maintenance of virtual worlds. SimCity originally went nowhere among distributors because there was no winning and losing. Now, there is a cottage industry of Sims furnishers, not to mention players: in a hot industry, the Sims 2 was 2004's top-selling PC title. In racing games, players often configure their own cars. The examples go on from there.
-Burger King has had over 30 ad slogans, and the chain recently returned to its second, and most memorable, one: "Have it your way." Returning to that message was part of a turnaround that took average annual franchise sales from $970,000 to $1.3 million.
-At the top of the digital camera mass market are "prosumer" models that mimic the functionality of the top 35mm film cameras. Canon's Digital Rebel was the first to market at the $1,000 price point, and it has sold 1.2 million units; Nikon's D70, released slightly later, sold over a million units between March 2003 and February 2005. These single-lens reflex cameras allow the photographer to compose the picture based on what the lens "sees," and, significantly for our discussion, to interchange lenses. These cameras also support file formats that better facilitate complex manipulation software like Adobe Photoshop. The range of possibilities is a long way from taking film to the neighborhood drugstore for processing.
-The music industry has yet to adjust to a model in which listeners choose what they play and in what order, as opposed to embracing the album or CD as the unit of consumption. Playlists are a textbook example of user customization, and unlike the old Uplister business, which only published playlists, Apple has integrated playlist publishing into the iTunes operation.
-Under the radar, Creative Memories has become a substantial business. Never heard of it? The Minnesota-based company began in 1987 as a spinoff of a photo album-manufacturing company and now uses 90,000 "consultants" to help people build personalized scrapbooks. 2004 sales were roughly $425 million; for comparison in the direct-sales industry, the Longaberger basket company made $1 billion in 2000, at which time it employed 8,000 people - a figure cut in half since then as demand fell with the economic slowdown.
-Not-so-random facts: Do-it-yourself (DIY) has helped drive Home Depot and Lowe's to strong, sustained growth. As we discussed in February, O'Reilly Media has tapped a nerve with MAKE magazine, aimed at people "who don't mind voiding the warranty." Home entertainment digitization is being led by hobbyists building PCs to serve as music and video servers. One of the most popular campus and young adult activities is reported to be knitting; celebrity knitters include Julia Roberts and Cameron Diaz.
What does this batch of wide-ranging examples suggest? Companies that fail to understand and market to the customization mentality will face an uphill climb: General Motors and Ford, the paragons of mass production, are in deep financial trouble. Toyota and Honda serve more niches more profitably and are thriving. Subway, where each sandwich is made to order, has severely dented McDonald's industrial model. Dell's dominance of the PC market begins with customization. Financial services companies that rely on brokers and brokers' commission structures like Merrill Lynch have had to respond to the Schwabs of the world - but now that firm is fighting a two-front battle against lower-cost low-service brokerages and established firms like Fidelity that provide a range of interaction options.
As I noted at the outset, it's difficult but important to separate and identify a number of forces. Daniel Bell identified "The Coming of Post-Industrial Society" thirty years ago, but it took the Internet for us to feel what it's like to transcend factories the way factories had trumped farming roughly a century before Bell. As information about stuff becomes more valuable than stuff, the activities of creation and individualization take on a new shape in both tangible and intangible realms. First, in an economy largely devoted to non-essentials there exists some [essential?] desire to make meaningful stuff, not just ideas and decisions. Secondly, we can see a broad-based quest to differentiate oneself by differentiating one's stuff. Finally, there's a sense of entitlement, related to the "affordable luxury" trend embodied by Starbucks, itself a primo customizer: I want the best (of something) made for me because I'm worth it.
The digital world both contributes to the desire for and enables the fulfillment of customization, but, returning to the theme of last month's "Being Analog," we still have a lot to learn about the boundaries, permeable and otherwise, between the worlds of ether and of earth and its offspring.
April 2005 Early Indications II: My Way
One of the great but difficult thinkers of the twentieth century, the economist and satirist Thorstein Veblen, wrestled with people's interconnected relationships both to what Marx named the means of production and to the consumption of mass produced goods. Veblen attributed a nobility to work that he called the "instinct of workmanship": man the maker "has a sense of the merit of serviceability or efficiency and of the demerit of futility, waste, or incapacity." By contrast, what he memorably named "conspicuous consumption" was "ceremonial" in that it sorted people by reputation, the basis of an ultimately unwinnable competition.
By mentioning Veblen I raise an unanswerable question. The people who buy mass-produced stuff, often called "consumers," want in some deep-rooted way to shape their environment beyond just piling up purchased goods. How much people want to stand out as unique, and how much they want to create something tangible, is of course impossible to differentiate or quantify, and of course sometimes an artifact embodies both consumption (or conspicuousness) and workmanship. But the current business landscape provides too many examples for this to be a fad: there's something very potent afoot in the rise of personalization and customization.
-Nike's ID (www.nikeid.com) mass-customization site has been up for a long time now. Back in 2000, an MIT grad student wanted his custom label to read "sweatshop," and Nike's refusal turned into a PR bonanza for both the grad student and the site itself. I don't know when it was most recently upgraded, but the current range of product and color options is truly dazzling: the Flash visualization tool is a lot of fun, to the point where designing your own stuff can get addictive. The price points have come down too: you can get a customized bike messenger bag for $50 or a watch for $85, while shoes seem to sell for $10 over retail.
There's a lot of cleverness embedded in the site. The color palettes, for example, are constructed such that it's truly hard to get two adjoining colors to clash. In some shoes, you can order two different sizes - 8 right and 8 1/2 left, for example - at no extra cost. The number of potential configurations is huge, but the volume is sufficiently small to justify the cost with customer "delight" - it's clearly an opportunity to surpass expectations.
Nike in turn plays directly to Veblen: university sports teams and elite athletes frequently get shoes in custom color schemes. It used to be a mark of distinction, or perhaps what Thorstein would call "invidious [unfair] comparison," to have shoes that aren't available in stores. Now, anyone can just do it - and not only with youth-oriented brands either: for the same $10 premium, you can design your own L.L. Bean canvas tote.
-One of the many reasons for Japanese automakers' success in the U.S. relates to their conscious courtship not of the NASCAR audience (which with few exceptions has been offered unmemorable racing-related products to buy by the American Big Three) but of the street-racing subculture central to "The Fast and the Furious." Honda, Mitsubishi, and Subaru have sold inexpensive cars that have serious upgrade and customization potential, and while some offerings such as the Dodge Neon SRT have come from U.S. badges, they're clearly late to the game. The difference between car as finished artifact and car as a platform for creativity was grasped first by some astute observers of Asian-American teenagers.
-Rupert Murdoch recently told the U.S. newspaper industry convention that paper news is in danger missing the key young adult demographic: circulation numbers are off, so badly that many big papers have been caught padding them to maintain ad revenue. TV news is also in decline: viewership is off by a third since 1993. In contrast, the most comparable online sites are either highly opinionated and interactive, or else highly selectable: Yahoo and Google News are growing far faster than any mainstream online news operation.
-Keeping with the TV theme, TiVo may have made it out of the woods with the Comcast deal. The future of TV ad revenues is being reshaped not by the remote control, which was frequently used to skip advertising, but by an automated device that does the same thing. Broadcasters are accustomed to control over the viewing experience: even mighty Microsoft has its logo disappear at bootup, as do cellular carriers and handset makers, so people can display any pictures or colors they choose on their computer and mobile phone screens. The networks, meanwhile, are fighting time-shifting and time-compression even as they find new ways to insert logos, promotional announcements, and other reminders of whose interface we're watching. Viewer annoyance with Jim Nance during the NCAA basketball tournament reached new highs, for example. By contrast, the cellular industry has supported customization and is profiting: U.S. ringtone sales were $850 million last year, double the 2003 total.
-One of the most powerful trends in electronic gaming is the creation and maintenance of virtual worlds. SimCity originally went nowhere among distributors because there was no winning and losing. Now, there is a cottage industry of Sims furnishers, not to mention players: in a hot industry, the Sims 2 was 2004's top-selling PC title. In racing games, players often configure their own cars. The examples go on from there.
-Burger King has had over 30 ad slogans, and the chain recently returned to its second and most memorable one: "Have it your way." That return was part of a turnaround that took average annual franchise sales from $970,000 to $1.3 million.
-At the top of the digital camera mass market are "prosumer" models that mimic the functionality of the top 35mm film cameras. Canon's Digital Rebel was the first to market at the $1,000 price point, and it has sold 1.2 million units; Nikon's D70, released slightly later, sold over a million units between March 2004 and February 2005. These single-lens reflex cameras allow the photographer to compose the picture based on what the lens "sees," and, significantly for our discussion, to interchange lenses. These cameras also support file formats that better facilitate complex manipulation software like Adobe Photoshop. The range of possibilities is a long way from taking film to the neighborhood drugstore for processing.
-The music industry has yet to adjust to a model in which listeners choose what they play and in what order, rather than accepting the album or CD as the unit of consumption. Playlists are a textbook example of user customization, and unlike the old Uplister business, which only published playlists, Apple has integrated playlist publishing into the iTunes operation.
-Under the radar, Creative Memories has become a substantial business. Never heard of it? The Minnesota-based company began in 1987 as a spinoff of a photo album manufacturer and now uses 90,000 "consultants" to help people build personalized scrapbooks. 2004 sales were roughly $425 million; for comparison within the direct-sales industry, the Longaberger basket company had sales of $1 billion in 2000, at which time it employed 8,000 people - a figure cut in half since then as demand fell with the economic slowdown.
-Not-so-random facts: Do-it-yourself (DIY) has helped drive Home Depot and Lowe's to strong, sustained growth. As we discussed in February, O'Reilly Media has tapped a nerve with MAKE magazine, aimed at people "who don't mind voiding the warranty." Home entertainment digitization is being led by hobbyists building PCs to serve as music and video servers. One of the most popular campus and young-adult activities is reported to be knitting; celebrity knitters include Julia Roberts and Cameron Diaz.
What does this batch of wide-ranging examples suggest? Companies that fail to understand and market to the customization mentality will face an uphill climb: General Motors and Ford, the paragons of mass production, are in deep financial trouble, while Toyota and Honda serve more niches more profitably and are thriving. Subway, where each sandwich is made to order, has severely dented McDonald's industrial model. Dell's dominance of the PC market begins with customization. Financial services companies like Merrill Lynch that rely on brokers and broker commission structures have had to respond to the Schwabs of the world - but now that firm is fighting a two-front battle against lower-cost, low-service brokerages and established firms like Fidelity that provide a range of interaction options.
As I noted at the outset, it's difficult but important to separate and identify a number of forces. Daniel Bell identified "The Coming of Post-Industrial Society" thirty years ago, but it took the Internet for us to feel what it's like to transcend factories the way factories had trumped farming roughly a century before Bell. As information about stuff becomes more valuable than stuff, the activities of creation and individualization take on a new shape in both tangible and intangible realms. First, in an economy largely devoted to non-essentials there exists some [essential?] desire to make meaningful stuff, not just ideas and decisions. Second, we can see a broad-based quest to differentiate oneself by differentiating one's stuff. Finally, there's a sense of entitlement, related to the "affordable luxury" trend embodied by Starbucks, itself a primo customizer: I want the best (of something) made for me because I'm worth it.
The digital world both contributes to the desire for and enables the fulfillment of customization, but, returning to the theme of last month's "Being Analog," we still have a lot to learn about the boundaries, permeable and otherwise, between the worlds of ether and of earth and its offspring.
Friday, April 22, 2005
April Early Indications I: Convergence Management
For ten or twelve years now, observers at the bleeding edge of technology have been predicting the coming of something called convergence: as long as "bits are bits," television and computing, voice and data, and gaming and the Internet can somehow combine. Convergence was implied to be an end state, the logical extreme of which would be a master device capable of informing, educating, communicating with, and entertaining the owner at his or her whim.
The verdict on convergence is mixed. NTSC television, a standard over 50 years old, remains America's dominant medium. The most successful consumer electronics product of the last five years, the iPod, performs a single function elegantly. Fear - and, some would say, ignorance - on the part of content owners like movie studios and record labels has led to a lot of money and energy being devoted to inhibiting convergence through legal, technical, and other means.
On the positive side of the convergence ledger, the tendency toward content and device independence continues to increase. The daily newspaper can be read in a number of forms. Music is available by streaming or download, and over radio delivered by analog broadcast, digital broadcast, satellite, or the Internet. Voice can be sent through a wide variety of technologies.
What does this tendency hold in store? Verizon and SBC have been aggressive in challenging the cable companies with fiber near (SBC) or to (Verizon) the premises. In both cases, Microsoft has announced it will supply the operating system for the set-top box, which can include TiVo-like digital video recording. Verizon has announced content partnerships with HBO, Starz, and other mainstream programmers, meaning that the same company might be collecting for a household's voice, Internet, mobile, and video connectivity.
A mere 15 years ago, a typical American household had as few as four monthly "utility" bills: electricity, telephone, water/sewage, and fuel. It's now possible to have as many as nine: Internet, mobile, local voice, long distance voice, electric transport, electric generation, water/sewage, cable or satellite, and fuel. How people choose to manage the communications components of this portfolio will shape the financial future of some of America's largest companies, including AOL/Time Warner, GE, Viacom, Disney, Motorola, and Comcast, as well as the aforementioned Microsoft, SBC, and Verizon.
One factor in that decision will be the choice of platform: how will people manage convergence? Will the model tilt predominantly toward a single master device, possibly a Microsoft/Dell PC+TV? (Check out what Dell is doing to TV monitor pricing, by the way: its efficient supply chain is radically undercutting the likes of Sony, and on the global market, flat-panel manufacturers like Samsung and especially Philips are suffering from oversupply, selling at prices below the cost of production.)
Or will the U.S. follow other countries into increasing reliance on a mobile platform? As I pointed out on Bloomberg radio last week, Apple feels justifiably proud of moving about 5 million iPods per quarter. This year, though, we'll start to see phones with hard drives: even if 3% of global units impinge on the Apple domain, that's about 20 million units right there. Steve Jobs is skating along the edge between running a tech stock and running a consumer electronics company. Once he moves the company into the latter markets, volumes can get frighteningly big.
Apart from consumer preference, the other factor to watch will be the fight over who "owns" the customer. Is it really immaterial to Disney whether a viewer watches ESPN over satellite, cable, telco fiber, or a cell phone? AOL was far more vertically integrated before the cable properties were sold off: will it regret having to buy its way into people's houses on someone else's wires? And what of Microsoft: once it builds the OS for the set-top box and maybe the smartphone, will the company be content to run "under" SBC's Cingular and Lightspeed logos? Everyone remembers the outcome when IBM let Microsoft do a similar thing on PCs 25 years ago.
Regardless of who ends up on which tier of the pecking order from a vendor perspective, the potential combinations will be fascinating to watch. Some people may opt for a single supplier (Google? Microsoft? Yahoo? SBC?) to eliminate the confusion and tedium of managing multiple logins or information repositories, for example. Others may pledge allegiance to a device (like the Danger Hiptop) regardless of which network it runs on. Finally, a customer's primary task may lead to a particular platform: convergence may mean little to someone who places high value on mobile text messaging.
Convergence implies tradeoffs. Just because you can watch TV on your computer, or read e-mail on your TV, doesn't mean the experience is completely fungible. Maybe the most popular convergence will be two-way rather than an n-way rollup into the "master" device. After all, you can open cans, uncork wine bottles, peel carrots, and cut meat with a Swiss army knife, but very few kitchens subsist on that one tool. Being able to carry voice and e-mail has made the Blackberry quite popular: it's unclear whether playing music would make it more so. Cameraphones don't seem to be displacing standalone cameras yet, but that could change. Looking forward, will the mobile phone emerge as a serious gaming, wayfinding (GPS), image-capture, voice, messaging, data-display, and music platform? Just from a user interface standpoint, it's hard to imagine how one would "naturally" use such a device for such different functions.
So there will be a lot to watch in the next few years. Verizon wants to connect 3 million homes with fiber by the end of this year. Samsung will be shipping 3 GB hard drives on cell phones this year, based on what they showed at CeBIT. Intel and Fujitsu will both sell WiMax chipsets in the near future, making fixed broadband wireless a further element in the connection mix. Handset manufacturers are relentlessly improving, and sometimes innovating. The next-generation DVD will arrive soon once the standard is finalized; HDTV is already here. In short, the technology landscape, particularly in the consumer markets, remains highly volatile, and the stakes look to be higher than ever.
If a company can turn convergence into economic consolidation, the payoff looks to be handsome -- which explains the ambition of the plays being made by most of the companies already noted: Motorola has Canopy, Verizon has FiOS, lots of folks have huge investments in search, and the list goes on. These are bet-the-business investments in most cases, so punishment for the also-rans will be harsh. Fortunately for most of us, it's plenty rewarding watching the story unfold.