Wednesday, November 22, 2006

November 2006 Early Indications: Devices We Love (and the people behind them)

Review Essay

Steven Levy, The Perfect Thing: How the iPod Shuffles Commerce, Culture, and Coolness (New York: Simon and Schuster, 2006)

Bill Moggridge, Designing Interactions (Cambridge: MIT Press, 2007)

Coincident with the iPod's fifth birthday, Newsweek technology reporter Steven Levy produced a book that helps situate the importance of the iPod on several landscapes. The book continues Levy's longstanding interest in Apple Computer and is enhanced by the author's relationship with Apple CEO Steve Jobs. At times a history but more often a love letter to the ultimate gadget, The Perfect Thing raises some useful questions about the defining personal artifact of the 21st century.

-How did it come to be?

Levy is at his best in reporting mode: interviews with principals, clear chronologies, and an eye for the telling detail make these chapters some of the book's strongest. He details the processes whereby a collection of vendors and subcontractors, under strict secrecy, converged on a notably seamless device. Within Apple, the passion for music among employees on the project infused the team with extra motivation: "the chance to make a product they would kill to have in their own pockets," in Levy's words. PortalPlayer, a startup also located in San Jose, solved many hardware issues, including power management of the miniature hard drive. For software, Apple enlisted Pixo, another Silicon Valley company that had to meet Jobs' exacting standards for usability, stability, and audio quality.

One hard-fought decision made by the interface team illustrates a deeper theme in both the iPod project and Levy's book. Jobs, long known for holding strong design ideas, finally approved the typeface used in Pixo's interface software. For those who think it looks somehow familiar, it is: the font is Chicago, which had been created by Susan Kare for the original Macintosh menus. As Levy points out, Jobs has now launched four technology revolutions: the Apple II, the Mac, the computer-animated feature film at Pixar, and the iPod. The last of these was far from preordained: the company Jobs inherited upon his return in 1997 lacked the focus, passion, and distinction of what he had left 12 years before. In many ways, the Mac and the iPod are tied by Jobs' characteristic combination of vision, unreasonable expectations, and skills in building and driving a team to bring true breakthrough products to market.

-Why can't anyone build a viable competitor?

Comparing the iPod to its competitors illustrates a great deal about the current computing and entertainment market. Recall first that upon Jobs' return to Apple, Michael Dell advised him publicly to "shut [the company] down and give the money back to the shareholders." Fast forward to January 13, 2006, when Apple's market capitalization exceeded that of Dell. While Dell and other tech vendors brought out iPod imitators, none has succeeded, and based on historical precedent the odds for Microsoft's Zune do not look good. Dell's DJ player illustrates the pattern: the DJ had longer-lasting batteries, cost less, and used the industry-standard MP3 file format. Yet not only did the DJ fail to dent iPod sales, it became that rarity, a Dell failure, and was discontinued in 2006. At launch, a Dell spokeswoman nicely summarized the gulf between Apple and its competitors: "Style is nice," she said, "but function and value are what ultimately matter to consumers." DJ sales proved otherwise: great design can make a product or even a market.

In addition to its design, the iPod also benefits from a tightly integrated platform of content licenses from record labels, an unusually clean software interface for both Apple and Microsoft operating systems, and a truly idiot-proof device that solves many of the usability issues that typically characterize computer peripherals. For all the elegance of Jonathan Ive's iPod case and the status markers of the white earbuds, the power of the iPod lies in how well everything works together. Finding content from a formidably large catalog of legal downloads is easy, and so are importing CDs, buying downloads, and playing music or spoken voice recordings.

Contrast the Zune, which uses a complicated point scheme for buying Microsoft-licensed music, with iTunes, which simply charges 99 cents per song. Zune buyers must purchase points in $5 increments of 400 points, then pay 79 points per song. Simple math indicates that the balance will almost never come out even, leaving the user a choice between stranding money and adding more points. Not surprisingly, Microsoft's scheme has the effect of locking in the customer. Paradoxically, Apple's lock-in is ultimately stronger, which has prompted government efforts to open the platform, but it is infinitely more graceful and compelling.
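The stranded-balance effect is easy to verify with a little arithmetic: because 79 and 400 share no common factor, the remainder never reaches zero until a buyer has purchased 79 increments ($395). A quick sketch, using the point values above (the function name is mine, not Microsoft's):

```python
# Zune point arithmetic: points are bought in 400-point ($5) increments,
# and songs cost 79 points each. Compute the balance stranded after
# spending as much of a purchase as possible on songs.

def leftover_points(increments_bought: int,
                    points_per_increment: int = 400,
                    points_per_song: int = 79) -> int:
    """Points left over after buying as many songs as the balance allows."""
    balance = increments_bought * points_per_increment
    return balance % points_per_song

# One $5 purchase strands 400 % 79 = 5 points; each further increment
# shifts the remainder, and it hits zero only at 79 increments, since
# 79 is prime and does not divide 400.
for n in (1, 2, 3, 79):
    print(n, leftover_points(n))
```

The loop prints a nonzero remainder for every purchase size short of 79 increments, which is the whole point of the scheme.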

-Why do people love the iPod so much?

Here Levy attempts both the personal-essay style of an Andy Rooney or Nick Hornby and the mode of the cultural anthropologist. Neither really works. Beginning with the book's title (what does it mean to "shuffle commerce, culture, and coolness"?) and continuing with its conceit of printing the book with the chapters in varying, shuffled orders (which is why I can't cite page numbers), Levy is wrestling with concepts and vocabularies outside his comfort zone. The section on Identity, in particular, fails to connect Levy's personal experiences to any larger phenomenon.

That said, in reportage mode the book does decode some of the iPod's uniqueness and appeal. The particular whiteness of the original case, for example, comes from a "double-shot" polycarbonate: an opaque white layer underlies a transparent one, giving the device a feel that no Dell product has ever possessed. The iPod's heft is no accident, and it feels pleasing in the hand. Similarly, the lack of mechanical artifacts on the case - hinges, screws, or other fasteners - contributes to a runic quality: the device is at once signifier, totem, and tool. Finally, and Levy gives this topic less attention than he might, the iPod is highly personal on several levels. Not only is every device with more than ten audio files going to be unique, but the act of inserting earbuds and wearing the player on one's person introduces an intimacy not found in most electronic or mechanical devices.

For a better understanding of how people can and do relate to their digital technologies, turn to the interviews collected in Designing Interactions, which are accessible, compelling, and difficult to put down: IDEO founder Bill Moggridge has produced, with considerable help, a 766-page beast of a book that is as addictive as popcorn. The people behind so many defining artifacts of our world - the Acela, the Sims, the Palm Pilot and Treo, Google, and of course the WIMP (windows, icons, mouse, pointer) interface - tell fascinating stories about the design process. A few representative insights should give some flavor of the richness of the book:

-Will Wright, on the origin of Sim City:
"When I was designing ['a stupid shoot-up'] game, part of it involved me creating this landscape that you would fly over and bomb. It was a landscape of islands with roads and factories and things, and I created an editor for doing that, where I could scroll around and put down roads and things. I found I was having more fun designing and building these islands than I was bombing them in the game, so I took that editor and kept working on it [by incorporating systems dynamics theories from MIT professor Jay Forrester]."

-Alan Kay, the father of the laptop concept, on the effect of Marshall McLuhan and Jerome Bruner:
"The computer is a medium! I had always thought of the computer as a tool, perhaps a vehicle - a much weaker conception. . . . Now, if we agree with the evidence that the human cognitive facilities are made up of a doing mentality, an image mentality, and a symbolic mentality, then any user interface that we construct should at least cater to the mechanisms that seem to be there."

-Google co-founder Sergey Brin on resisting the urge to universalize your own experience:
"We have always wondered about how many search results we should display. I have my default, I always show fifty search results, so I thought, 'Why would you want to have just ten?' It turned out in testing that people really wanted just ten. Sometimes your personal bias really colors your way of thinking. I don't know that we've fine-tuned it between nine, ten, or eleven, but once you're in that range, ten is a number that people deal with pretty well."

-Pixo founder Paul Mercer, on the puzzle of the iPod's lack of a credible challenger:
"The iPod is very simple-minded, in terms of at least what the device does. It's very smooth in what it does, but the screen is low-resolution, and it really doesn't do much other than let you navigate your music. That tells you two things. It tells you first that the simplification that went into the design was very well thought through, and second that the capability to build it is not commoditized."

Many industry pioneers tell stories (some with DVD videos) of the details behind landmark designs, providing a sense of just how hard it is to get things right. It was both great fun and dramatically illuminating to hear about some of the industry's defining moments:

-GRiD founder John Ellenby's "Gzunda" computer, a Xerox Alto that "goes unda" the desk because it was too big to fit on top
-Mouse inventor Doug Engelbart's daring demo of an interactive graphical user interface in 1968
-Apple graphics genius Bill Atkinson's designing the pull-down menu bar for the Mac - overnight
-Pen computing veteran and sometime student of neuroscience Jeff Hawkins' surprise at people's rejection of Graffiti and adoption of the thumb-size QWERTY keyboard.

The icing on the cake, to mix food metaphors, is Moggridge's closing summation of the design process, which focuses heavily on people and thus on the prototyping process. This section alone will be worth the purchase price for practitioners in the field, informed as it is with decades of personal experience along with the collected insights of his peers.

In the end, the two books drive home the importance of personality in great design: the old stereotype about something looking like it was "designed by a committee" embeds much truth about the differences between great products (or services, or interactions) and the vast majority of experiences that lack the coherence, utility, and appeal of those few efforts that get it right. Whether it's Tom West's Eagle in Tracy Kidder's Soul of a New Machine, or Doug Engelbart's mouse, or Mari Matsunaga's i-mode medium, many great designs embody the personal capabilities of a designer or team to a greater degree than most people realize. Getting a glimpse behind the curtain in these two books was both enlightening and humbling: enlightening because of the sheer number of design decisions that can be involved, and humbling because of the degree of genius from which we as customers are privileged to benefit.

Thursday, October 12, 2006

Early Indications October 2006: Who will build the “Internet of Things”?

(NB: As usual, the author holds no direct financial position in any of the companies mentioned.)

“The network is the computer”
Sun Microsystems, circa 1983

Ed Zander was already a seasoned tech executive when he arrived at Sun Microsystems in 1987. During his tenure of over 15 years there, Zander saw Sun enjoy great success in the Internet infrastructure boom, building in part on its semi-retired but still-prescient slogan. He also saw Sun struggle after 2000 as the company turned in a variety of directions, in part toward the market’s emerging need to manage systems of sensors, associated information, and identity. Servers had, however, become a commodity item, and Sun’s current position is substantially weaker than that of a decade ago.

Zander retired from Sun, then was hired a short while later as CEO of Motorola. The company he inherited was not noted as a tech powerhouse as of 2003. Decisions were heavily second-guessed, in part because the most recent in a line of leaders from the founding family was not delivering results. Moto had missed two big shifts. First, it overestimated the desire for satellite telephony with its Iridium bet, which lost $2 billion and remains the subject of a creditors' lawsuit that could cost the company an additional $1-4 billion. Second, it underestimated the shift in mobile voice from analog to digital. Its StarTAC cell phone was a worldwide leader in the mid-1990s, but after that Motorola gave up handset market share to Nokia, LG, Samsung, and others. Many business units were fragmented and/or redundant, and speed suffered amidst indecision, bureaucracy, and the impact of major job cuts.

The turnaround has been sudden. Since January 5, 2004, when Zander came aboard, Motorola stock has risen from about $15 a share to a solid $25. The company’s instantly recognizable RAZR phone is hugely popular, with over 50 million units sold. The follow-up products have generally done well, and both market share and handset margins are rising by meaningful increments.

But this is more than a consumer device popularity story. Zander has streamlined decision-making, reoriented the hiring process to include an emphasis on interpersonal skills alongside technical ones, and crafted a very Sun-like overarching vision. Whether in the public safety, handset, telecom equipment, or enterprise business, everything is connected to what Motorola is calling “seamless mobility.” According to the 2005 annual report, “Seamless Mobility means people have easy, uninterrupted access to information, entertainment, communication, monitoring and control.” While key technologies -- such as sensors, metro-area and mesh wireless broadband, and voice and video over Internet Protocols -- are certainly maturing, one clue to the power of this vision lies in Zander’s putting people at the center of the Internet of Things.

The branding and expertise that come from leadership in handsets and set-top boxes no doubt fuel some of this positioning. Out of view of consumer markets, however, Motorola is creating a compelling portfolio of both wired and especially wireless broadband plays: it is the only vendor to be part of the two major WiMax trials in the U.S., which are Clearwire (Craig McCaw’s newest startup) and Sprint, which announced a $3 billion investment in the technology earlier this summer. Motorola also has significant market presence in wide-area wi-fi, various cellular standards, and wireline broadband to the home. In short, if there’s a broadband connection, whether in the air or over a wire, or connecting a person or a device, Motorola probably has a solution in the market.

The company augmented an already robust patent portfolio with the September acquisition of Symbol, which makes a variety of sensors, software, and network equipment for enterprise mobility of the stock-clerk and forklift variety. Symbol also will augment Motorola’s enterprise sales channel and expand its foothold in vertical industries including retail, travel, and transportation.

While there are good reasons to be bullish on a combo platter of sensors, consumer and government handsets, and broadband, risks are embedded in nearly all of the company’s major plays:

-Qualcomm owns a formidable portfolio of wireless patents and could build on its strength in cellular to attack other facets of mobile broadband.

-One company - Sprint - accounted for 12% of Motorola's total sales in 2005. Sprint has not executed the Nextel merger particularly effectively and is losing market share. The COO was asked to leave in August and the executive chairman announced his departure earlier this week.

-The WiMax “family” of technologies (fixed vs. mobile wireless broadband) is not yet a product suite, and 4G is not yet a standard. Intel’s place in the wireless ecosystem is not yet clear either.

-The investment made by Motorola’s large customers -- telephone carriers and cable companies -- in both wireless and wired broadband, particularly in laying fiber closer to the end customer, could be affected by technology change, regulation, or availability of capital.

-Supply-chain products like bar codes and RFID may remain niche offerings rather than parts of a conceptually unified "Internet of Things."

Getting Motorola, which had lost prestige and shed thousands of jobs, to accept risk-taking has been a core aspect of Zander’s mission. As a result of the company’s focus, patent holdings, and coherent product footprint and market positioning, it’s hard to see a head-on competitor. Cisco has more wireline clout and a bigger set-top box presence after acquiring Scientific Atlanta but no WiMax or cellular business, much less consumer design expertise of the sort embodied in the RAZR. Tag manufacturers including Texas Instruments, middleware companies such as BEA, or identity managers like Sun or maybe Microsoft may well play important roles as components in the cloud from sensors through computing to people, but it’s hard to see any of these companies taking a leadership position. Many, many piece-parts will be required for anything resembling the science fiction vision to come to fruition, but given that personal communications and computing platforms, a variety of broadband networks, and sensors in many shapes and sizes will be involved, in the near term Motorola appears to have rebounded and assumed a leadership position in a market it is helping to invent.

Wednesday, September 20, 2006

September 2006 Early Indications: Thinking about Transparency

Starting in the mid-1990s, a growing number of investors, academics,
and analysts have been calling for greater transparency in business
and government. Transparency was somehow different from
accountability and visibility, implying that various constituencies
could see the inner workings of an organization. The scandals at
Adelphia, Enron, Worldcom, and elsewhere intensified the clamor, which
found one prominent expression of its logic in Don Tapscott's book The
Naked Corporation.

Making achievements and shortcomings more visible in timely fashion
sounds like an obviously Good Thing. But it's a long way from
conceptual aspiration to working out the many details of who should
and can see what, who should and can act on what, and when which
people can see what, all of which involves multiple layers of costs and
benefits. Instead of merely asking "is transparency desirable?" there
seems to be movement toward more sophisticated understandings of
transparency's many contexts.

1) Transparency costs money

As Sarbanes-Oxley has demonstrated, regulatory requirements impose
externalities (costs not accounted for by the market) and unexpected
consequences. Someone compared the burden of reporting, particularly
for small and medium enterprises, to spending $1000 on a safe to
protect $100 of jewelry; it also costs taxpayers money to enforce
regulations that may or may not be improving the efficiency of
markets. Establishing transparency thus becomes a question of how to
require useful information to be made available in accessible form
rather than merely increasing the amount of reporting in complex,
redundant, and often arcane processes: less truly can be more.

2) Transparency can impose competitive disadvantage

Some of the most revealing knowledge about a firm comes from what Stan
Davis once called "information exhaust." Business-to-business
exchanges, RFID-based tracking networks, credit card records, and the
like all can reveal fundamental knowledge about a supplier or
customer's economic situation. As Scott McNealy of Sun said of some
B2B exchange proposals back while he was still CEO, "We don't want
demand for our products to be known and understood. Why would you
outsource your purchasing department? Isn't your purchasing strategic?
People are going to find it is a really dumb idea to outsource to a
competitor so they can see your demand curve."

3) Transparency can be deliberate or inadvertent

Note that the transparency of demand curves as revealed in the course
of doing business is inadvertent: no investor safety or regulatory
assurance is gained or sought in such a situation. Yet by letting
patterns be discerned from large bodies of transactional data, a firm
puts itself at risk of unfair pricing, strategic shortages, and other
disruptions. (A classic example is large mutual funds, which hold
such substantial positions that they can signal buying or selling
intention to the wider market in their everyday movement of large
orders.) Help-wanted ads, real estate purchases, travel programs, and
other everyday interactions can tip off analysts or competitors as to
new initiatives, acquisitions and divestitures, and the like.
Accordingly, the use of third parties, shell corporations, and other
devices is common: executive search delivers obvious benefits as it
reduces transparency.

4) The world has grown less transparent in the past five years

As we observe the fifth anniversary of the September 11 terrorist
attacks, the transparency debate is being reframed. Much could be
gained by the sharing of avian flu preparedness plans, for example,
but I have seen little willingness to do so: because such plans are
near neighbors to disaster preparedness documents, it is often
irresponsible to circulate either type of document widely.
E-government is another area in transition: just as localities,
states, and many national agencies were rushing to make documents and
processes available on line, security considerations led to the
rethinking of such services as maps, agency directories, and the like.
Just the other day I was using Google Earth to view Pennsylvania GIS
data pertaining to a trout stream. At one point I zoomed too close or
otherwise triggered something, at which time a huge red "X" covered
the screen without further explanation - it truly felt as though the
system had discovered me as an intruder a la "Mission Impossible" even
though there are no obviously secure locations for miles from the
creek in question.

5) Transparency is part of risk: something to be managed

The overall scope of what has come to be called "enterprise risk
management" is truly staggering. At the same time that regulators
require both numbers and the processes for generating those numbers to
be certified, clever people must be scouring the business and its
interactions with the wider world to see where useful information is
being unwittingly compromised. It's an apparently trivial example,
but how many URLs have you seen which display the logo of the web
server vendor as a favicon, indicating that at least one default has
not been overridden during setup? (Here's a gallery of proper icons:
http://mppierce66.home.comcast.net/web/fi/)

Technology frequently both solves and creates problems in this domain.
Databases are an excellent example: sometimes it's the details that
are sensitive, while elsewhere the rollups need to be locked down.
Health care records containing identifiable patient data are supposed
to be safeguarded while aggregate statistics for public health and
similar purposes can be circulated. In the military supply chain,
meanwhile, low-level personnel or contractors can see line items in
order to load trucks or ships, but the aggregate list of what's going
on the ship, and its destination, are classified.

The latter example gets more complicated in a coalition scenario: for
a hypothetical exercise let's say the US Navy runs SAP as its
enterprise backbone while a partner navy from Canada or the U.K. runs
Oracle. At the same time that coalition partners manage classified
data vertically within their own force, they must also manage data
flows both horizontally across forces and then vertically, up and down
a different culture and organizational model. The elements of a
ship's inventory, for example, might consist of rollups that are
masked even within the ship: those same Xs over the first twelve
digits of your credit card number on a receipt have other uses. At
the same time, commanders at an appropriate level need to see
aggregate numbers derived from all participating forces, which means
that close agreement on translation of definitions, rank, and job
descriptions must precede any technical granting of access. As
difficult as it is from a technical perspective, getting SAP to
interoperate with Oracle in a truly mission-critical situation is
secondary to getting the relationships of the various parties
clarified, codified, and enforced outside of software.
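The receipt-style masking mentioned above, Xs over the first twelve digits of a card number, is straightforward to express in code. A minimal sketch, with illustrative defaults (the function name and field widths are my assumptions, not any standard):

```python
def mask_all_but_last(value: str, visible: int = 4,
                      mask_char: str = "X") -> str:
    """Replace all but the last `visible` digits with mask_char,
    leaving non-digit separators (spaces, dashes) intact."""
    digits_total = sum(c.isdigit() for c in value)
    to_mask = digits_total - visible  # how many leading digits to hide
    out = []
    for c in value:
        if c.isdigit() and to_mask > 0:
            out.append(mask_char)
            to_mask -= 1
        else:
            out.append(c)
    return "".join(out)

print(mask_all_but_last("4111 1111 1111 1234"))
# XXXX XXXX XXXX 1234
```

The same one-way idea scales up to the ship's-inventory case: expose only the field a given role needs, and keep the rollup recoverable nowhere downstream.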

This necessity in turn raises a related question. Even as web
services and XML rely on multi-party standards for information and
application sharing, 1:1 mapping still looks like it will be with us
for a long time. RFID provides another example: the current
specification for a multiparty Object Naming Service (ONS) will not be
sufficient to handle the sheer number of potential any-to-any data
relationships, so trading partners will need to conduct some degree of
A-to-B clarification. For cost, competitive, and complexity reasons,
supply chain players will not make their reader and status information
routinely or widely available. If a particular application at a
particular trading partner makes sense from two or more parties'
perspective, then systems will be connected to enable that. Once
again, enabling transparency becomes a matter of managing
externalities, unintentional risk exposure, costs, and benefits, and
negotiating these types of conditions is problematic for groups larger
than two or three parties.

While the future belongs to networks, the reality of transparency
highlights the importance of trust, which is far more easily
negotiated and enforced in two-way relationships than in n-way
situations. The mixed success of HIPAA, Sarbanes-Oxley, and other
efforts to legislate trustworthiness testifies to the inherent
difficulties in managing networks of interested parties, each of which
collects and moves information for its own reasons from the inside
out.

Saturday, August 12, 2006

August 2006 Early Indications: Of Copiers, Counterfeiters, and Pirates

Enterprise computing is almost exactly 50 years old: the first purchase of a commercial Univac occurred in 1954. As the Economist pointed out recently, the personal computer is 25. This historical symmetry neatly sets the context for a problem that has been with us only a short time: software copying.

This is truly a problem without precedent. While the Xerox machine did not duplicate the printing (and particularly the binding) process, photocopying did have a major economic impact on at least two sub-sectors: textbook publishing and sheet music. Even so, these markets are small relative to software, music, movies, and proprietary research. Thus the business model changes, court cases, and other signs of market adaptation to photocopying do not compare to the issues we face today.

I could find surprisingly little literature on the historical arc of this issue. Nevertheless, given the prominence of Napster and its successors, Microsoft's interactions with the Chinese government and market, and the rise to economic power of the gaming sector, copying and its control have become central issues in a digital economy.

Duplicating a physical artifact means, at the least, having access to both raw materials and skills. Whether that's steel and metalworking, food and cooking, or wood and carpentry, the issue of copying has been nonexistent in some markets. Elsewhere, particularly branded consumer products such as watches and purses, copying -- as in imitation rather than replication -- can be a significant concern. Physicality also implies a moderate barrier to movement: successful counterfeiters still face the problem of getting merchandise to customers.

The Internet removes the barrier of getting raw materials because code is malleable, easily transported, and closer to idea than infrastructure. The skill involved in copying a file, whether of executable code or data, is miniscule in contrast to what was needed to create either the software artifact or a physical original. Unlike physical counterfeits, which typically lack material quality and/or craftsmanship compared to the original, or analog copies in which successive generations of cassette tapes, photocopies, or faxes rapidly degrade, digital copies are nearly perfect. Furthermore, the means of production (a PC) is inexpensive and ubiquitous, which makes tracing the origin of copies harder than locating activities with heavier infrastructure, such as radio broadcasts and LP record pressing. Finally, the digital distribution channel is not only faster than a physical counterpart, it is instantaneously global.

Owners of digital content have relied on three tactics to combat copying. First, there has been a series of attempts to make computer disks hard or impossible to copy, by hiding files, using proprietary formats (such as game cartridges), or doing something called nibblizing that rearranged the bit sequence of a copy. (Similarly, Macrovision enforced copy protection in analog VHS recorders.) As software distribution moves increasingly online, such measures still have their place, but they have not slowed the spread of copying by a significant margin. An exception is the video DVD, which can be copied by knowledgeable users but not casual ones: proprietary protection of the digital bitstream in a DVD player or PC is enforced in hardware.

Second, software publishers can make it hard or impossible to use a copy. Some companies required users to consult a paper manual ("what is the last word on page 67?") to generate crude authentication. One manual used symbols and was printed in a color scheme that was impossible to photocopy. Still others relied on a hardware device called a dongle, which activated a program in conjunction with the software and a generic PC (and quickly raised the problem of getting multiple dongles to interact gracefully on the same machine). More recently, a program can "phone home" via the Internet to see if software with a given serial number is in use on multiple machines. This approach can be made relatively robust for application software, and a variant called FairPlay prevents unauthorized copying of Apple's iTunes music files. Adobe is including auditing and monitoring of print materials in its LiveCycle Policy Server: if a user forwards an e-mail or file, or prints it, or otherwise interacts with it, the originator of the document can be informed. How this extensive reach will affect task design and business process remains to be determined.
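The "phone home" check described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not any vendor's actual scheme; the class name, activation limit, and machine-ID convention are all assumptions:

```python
# A toy server-side activation check: track which machine IDs have
# activated each serial number and reject serials seen on too many
# machines. Real schemes add cryptographic signing and revocation.

from collections import defaultdict

MAX_ACTIVATIONS = 3  # assumed per-license machine limit

class ActivationServer:
    def __init__(self) -> None:
        # serial number -> set of machine IDs that have activated it
        self._machines: dict[str, set[str]] = defaultdict(set)

    def activate(self, serial: str, machine_id: str) -> bool:
        """Return True if this machine may run the serial number."""
        seen = self._machines[serial]
        if machine_id in seen:
            return True   # re-activation on a known machine is fine
        if len(seen) >= MAX_ACTIVATIONS:
            return False  # serial already in use on too many machines
        seen.add(machine_id)
        return True
```

A fourth activation attempt on a new machine is refused, which is exactly the multiple-machine detection the paragraph describes.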

Finally, software owners can lobby legislatures to change laws relating to copyright. The doctrine of fair use has been dramatically altered by both the duplication technologies of the past 100 years and the lobbying of content industries. There have been many unintended consequences: copying application software off a 5 ¼" floppy onto a USB stick would generally be illegal, but with rapidly outmoded storage technologies, what is the owner of the application to do if she owns a PC without the appropriate outmoded drive? At the enterprise and government level, archiving digital assets often turns into an exercise in curatorship of a technology museum: successive generations of outmoded hardware and software need to be maintained in the event that a given file or storage format needs to be read.

The whole question of software copying has many layers of complexity. The economics of digital goods means that the first copy is extremely expensive, representing as it does all the capital investment and years of R&D. Afterward, copies are effectively free to produce, which can lead to very high profit margins. The lack of effective channels for certain digital goods (single-song music downloads in the Napster era, for example) means that some markets might reject the arbitrary bundling or other pricing offered by copyright owners. After a copy is made, with whatever motivation, different parties might be financially liable for an individual's action, depending on how the law is written.

The content industry currently tends to reject copying as backup: if I buy an iTunes song (or 500 of them) and my host PC's hard drive dies, I'm generally out of luck even though everything was purchased and used under the terms of the license. Another example is DVDs: if I have two places of residence and want to watch a movie wherever I am, why must I buy a second copy of the same title rather than make a single copy for personal use? Once again, copyright law tends to prohibit any copying under blanket provisions. Such blanket prohibition blurs the distinction between copying and counterfeiting, which are overlapping but not identical concepts. As processor speeds, graphics capability, and bandwidth all improve, content owners have lobbied to engineer copy protection deeper into the computing platform.

This degree of restriction would be unprecedented. If I want to weld a Ferrari nose onto the front end of a dump truck, Ferrari (or Caterpillar) can't control what is done either with the purchased asset or, more important, an oxyacetylene torch. Governments have engineered protection into color copiers, for example, and it's hard to argue against some degree of action in the public interest to protect the integrity of the money supply. But being able to use small clips of published text in scholarly works, for example, is standard practice -- and essential to the expansion of knowledge in law or literary criticism. The parallel action of copying any portion of a movie for personal or scholarly use, however, might be illegal, depending on jurisdiction. Similarly, the study of cryptography is highly regulated: scholars who decode copy protection algorithms run the risk of prosecution if they publish their findings.

Herein lies the conundrum. The digital asset copying problem is unprecedented, so new kinds and degrees of measures will be required. At the same time, the legitimacy of certain forms of copying -- for preservation, backup, or fair use -- means that broad prohibitions, enforced in a general-purpose computing platform, come at an extremely high price to the purchasers and users of software and other digital media. No single answer will apply in every market to every application, but there have been some noteworthy efforts:

-Use copying to build an installed base.
Software makers with sufficiently strong cash reserves and long planning horizons can consider letting copies go relatively unpunished to build up a user base. Once a large body of people is trained on the software and file extensions and other conventions are well established, there are high enough switching costs that there may be reason to buy later versions of the product, particularly if the registration process is tightened, the pricing is attractive, and/or competitors have been weakened.

-Use copying of entertainment to sell other entertainment.
The Grateful Dead's support of tape-swappers who were allowed to record concerts is a widely cited example. Other artists have used music downloads as an alternative path around the gatekeepers of radio playlists to build live audiences for concerts -- where the t-shirt concession is tightly protected against counterfeiters.

-Reconsider analog.
Several music labels, faced with plummeting CD sales, have turned to high-quality vinyl releases of both new and back catalog. Some high-end financial newsletters never left paper distribution. There are still many places where one can't conveniently read an electronic newspaper.

-Utilize advertising-supported distribution.
Archives are a perfect example: while a few newspapers have succeeded in charging subscriptions, most are failing to monetize their back issues with clumsy subscription or registration models which often don't support permanent linking from blogs or other sources of traffic. As paper newspapers continue to decline in circulation, the economic models of hybrid (digital + physical) production and distribution are ripe for reinvention. As a former big-city newspaper editor recently told me, this talk about the sky falling on newspaper ad revenues has happened before: in the late 1960s, political advertising moved overwhelmingly to television almost overnight, and the newspaper industry survived.

-Think of King Gillette and sell blades after giving away razors.
Giving away a multi-player game title for free, or allowing users to copy it without restriction, provides software publishers with a powerful distribution channel (it used to be called "viral" marketing). Recovering the cost can be more effectively achieved by making the proprietary online gaming environment a tightly controlled, for-profit affair with monthly or annual renewals: players will pay for access to other players, not for the plastic disc. Several online gaming environments (including Second Life) have spun off real economies based on cash flowing to merchants of virtual assets.

The list is not exhaustive, but it should suggest that there are enough viable responses to digital copying that broad prohibition of all software copying will impose social costs that may outweigh proprietary benefits. It's important that there be open public debate to weigh the costs, benefits, and risks of the various courses of action. Copying and piracy, meanwhile, are not one and the same, but the rhetorical landscape makes this distinction harder and harder to draw. True piracy -- illicit DVD pressing plants, for example -- should be considered and addressed separately rather than being conceptually lumped in with the many gray areas of fair use.

Friday, July 28, 2006

July 2006 Early Indications: Web Video Update

As we predicted in January, video over the Internet is making a major impact.

"The relentless reinvention of business markets by the Internet and digitization will continue. . . Who might be next? Television is my best guess, given the presence of Apple (iPod video purchases), Microsoft, Google, Yahoo, Cisco (with its newly-purchased set-top box business), and AOL/Time Warner along with the RBOCs: that's a lot of intellectual and financial capital being focused on a mature industry that is becoming more digital every day."

But who is the main player in this emerging category? None of the giants listed above, though all continue to be active. Surprisingly, a startup called YouTube controls the majority of web video market share. The company does so despite owning no production facilities, archives, or other "media assets." Instead, YouTube is supplied with content by individuals who submit video clips of themselves on skateboards or hanging around, or old TV clips, or any number of other materials. Some of the videos are illegal in certain jurisdictions; others are problematic for a variety of reasons, primarily copyright infringement. One of many fascinating sidenotes to the site's growth is soldiers' use of YouTube to post graphic, unedited video from Iraq, a practice the military is trying to curtail.

But the numbers are soaring. According to Nielsen/Netratings, YouTube's traffic quadrupled in five months. Page views have increased even faster, from 117 million a month in January to 724 million in June: more people are visiting, and those who visit stay longer.
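A quick back-of-the-envelope check of those Nielsen figures shows why "those who visit stay longer": page views grew faster than traffic, so views per visitor rose. The arithmetic below uses only the numbers cited above.

```python
# Back-of-the-envelope check of the Nielsen/Netratings figures cited
# above: page views grew about 6.2x while unique traffic roughly
# quadrupled, so views per visitor rose by about half again.

jan_page_views = 117_000_000
jun_page_views = 724_000_000
traffic_multiple = 4.0  # "quadrupled in five months"

page_view_multiple = jun_page_views / jan_page_views          # ~6.19
views_per_visitor_growth = page_view_multiple / traffic_multiple  # ~1.55
```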

The site's traffic got a huge boost -- 75% -- in a single week. Why? Bill Simmons, a former comedy writer who has a column on ESPN's site, listed his "YouTube Hall of Fame" in mid-June. It featured great athletic plays, unintentional goofiness, on-air interview meltdowns, and emotional moments, particularly the moment in 2004 when Simmons' beloved Boston Red Sox finally won the World Series. The net result is that a video website with only a small amount of licensed content is beating AOL, Google, and Yahoo, in part on the recommendation of an uncredentialed sportswriter, one who eschews locker rooms and press boxes, posting on an incumbent sports network's site.

Several elements of the YouTube model merit attention. First, while short clips have long been a unit of video production, programming has always worked in 30-minute multiples. Now consumption can work in the same short bursts that characterize everything from news correspondent stand-ups to highlight reels to comedy sketches.

Second, although there's speculation over who will buy YouTube and how much it's worth, recent history would seem to suggest that buyers may inherit major liability. Recall that Napster was a parallel to YouTube: a site hosting unlicensed digital content. It was shut down by lawyers with copyright concerns; then KaZaa and other sites moved music and other content to a peer-to-peer architecture with no central storage facilities, only pointers to users willing to share content. These too have drawn legal attention: just this week KaZaa was ordered to pay substantial damages to recording artists. The vast amount of copyrighted material being distributed by YouTube -- even though it both distributes licensed video for NBC and others and has removed copyrighted materials at rights holders' requests -- makes litigation and/or criminal charges seem inevitable.

Whatever the shape of the emerging business model for this kind of material, I'd like to look at the question more broadly. How might the economics, viewing practices, and other elements of Internet video evolve?

1) Time Scales
Currently, there is a great deal of time shifting for television viewing, due mostly to the use of VCRs and then personal video recorders such as the TiVo. Rarely will these technologies allow a home viewer to collect an archive's worth of content, nor will the content be indexed and managed. NBC pulled down Saturday Night Live clips from YouTube, so there's currently no easy way to see favorite routines and musical performances from the show's 25+ years, although a select few are on DVD.

Assuming new and appropriately powerful search tools, Web video promises to bridge the gap between TiVo's allowing a viewer to watch a show a day or two after broadcast and the tiny percentage of broadcast material (excluding as it does local news, advertising, and other material never included on commercial DVDs) available on permanent media. The questions of who manages the archives, what conditions are attached to their use, and who pays whom for what are mostly open and promise to be controversial.

2) Business Models
Unlike cinema and music, television has operated on an advertising-supported model from the beginning. As software companies, music labels, and publishers work to adapt to online distribution, television rights holders may be in a prime position to capitalize on the new distribution channel. At the same time, seeing the Internet only as distribution and not as creation and collaboration will lead to major missteps, as the roughly 65,000 videos uploaded to YouTube alone each day would seem to prove. The migration from broadcast "push" to download "pull" also makes a major difference.

This blurring of production and consumption is new, and not yet fully formed. The notion of fair use, under U.S. copyright law, will be further reinterpreted by both lawyers and users: the rise of "remix culture" in digital media extends familiar artistic methods of reuse (Jasper Johns' "Flag" is a prime example) to music and video. Establishing principles and practices of ownership will continue to challenge societies and businesses across the globe. However these questions get resolved, web video will emerge under a different model from the so-called "audience commodity" that broadcasters used to sell to over-the-air advertisers.

3) Segmentation
Video serves many kinds of people in many different ways. A corporate training video is fundamentally different from sports replays; long-form mini-series can't be confused with weather reports. Spanish-language game shows vary fundamentally from their Japanese or American cousins. It will be fascinating to watch how cultural and time-zone barriers are affected by the Internet video channels. The World Cup head-butt (a huge YouTube download) is as close to a globally universal image sequence as we currently have.

At the same time, the microscopic scale of the audiences for political satire, or any one of thousands of rock bands, or self-produced documentaries extends the tendency of network television toward fragmentation. The move from three networks in the U.S. to hundreds of cable channels is potentially expanding by orders of magnitude, and "viewers" are neither passive nor bound to broadcasters' schedules.

4) Tools
As video migrates both to big-screen HDTV sets and small-screen cameraphones, it's clear that one size will not fit all for either production or consumption. It's a parallel situation to blogging: citizen video from mobile phones can be immediate and dramatic (as after the London subway bombings last year), but just as an army of bloggers can't produce and manage a big-city newspaper, so too a dozen or even a hundred amateur videographers can't deliver the World Cup video feed. Lightweight, on-the-spot image capture will continue to emerge, and better tools are available to small-timers every day, but there will always be an audience for highly-paid stars shot in high resolution formats and edited by professionals.

All that said, it's worth noting as a data point that an HD video camera can be bought on Amazon for well under $2000, and an HD digital video editing environment (DV Rack with HDV upgrade) costs only $500 for software. With that kind of access, and with Moore's law making electronics (if not optics or petrochemically-intensive storage media) cheaper every year, the ingredients are in place for major upheaval. Predicting who will capitalize -- Sony, Samsung, Disney, News Corp, Fujitsu (which makes mass storage for this kind of thing), video search startups, Google -- is premature, but it's clear that incumbents and upstarts alike are moving fast. The market, however, is moving even faster.


Bill Simmons YouTube column: http://sports.espn.go.com/espn/page2/story?page=simmons/060626

Background on Simmons:
http://sportsillustrated.cnn.com/2006/writers/chris_ballard/03/22/qa.simmons/index.html
and
Bryan Curtis, "Adrift on the Sea of ESPN.com," New York Times, June 4, 2006

Nielsen/Netratings figures: http://www.nielsen-netratings.com/pr/pr_060721_2.pdf

YouTube potential litigation: http://www.linuxinsider.com/story/must-read/51832.html

Amazon DV video camera:
http://www.amazon.com/gp/product/B00028SR0A/ref=pd_cp_p_title/103-1555740-4483015?%5Fencoding=UTF8&v=glance&n=502394

Thursday, June 29, 2006

June 2006 Early Indications: Inversions

1) "I'll be at 362-9296 for a while; then I'll be at 648-0024 for
about fifteen minutes; then I'll be at 752-0420; and then I'll be
home, at 621-4598."
-Dick Christie in Woody Allen's "Play It Again Sam"

In the early 1970s, the character Dick Christie was a hard-charging,
super-connected deal-maker. Seeing the movie now, his mapping of
location to telephone numbers feels quaint. Not so
long ago, you could tell, within a few blocks, where somebody was
based on their area code and three digits of the "exchange," or
central office. Now, with cellular number portability, voice over IP,
and mobile telephony more generally, a phone number has gone from
being an indication of location to an indication of identity as the number
follows the person.

2) Wal-Mart announced a new program late last year called "Remix," in
which one objective is to separate fast-moving inventory from slower
sellers in the supply chain. The long-term rollout won't be completed
until 2007, and involves other facets including store re-design. The
inversion plays out as follows: typical inventory organization is
performed on the basis of what something is, of what properties it
inherently possesses. The new model organizes inventory by how fast
people purchase it, which is a characteristic external to the item.

3) Mechanical diagnosis traditionally resided in the fingers, eyes,
ears, and brain of a mechanic or technician. Attempts to organize
craft knowledge, whether from detectives or repair personnel, into
knowledge bases have typically been disappointing. From small
beginnings in large turbines, automotive, and elsewhere, however,
there's a trend toward putting diagnosis into the machinery itself.
Military technologists, for example, have laid out a future-state
vision of "swaptronics," which would mean components could identify
themselves when the odds of failure exceed a given parameter. In an
increasing number of instances, including computer hardware, the locus
of diagnosis is in the midst of a migration from the external observer
to the device itself.

Taken together, these three transitions point to some emerging issues
of digital identity. Cell phone numbers, e-mail addresses, and
instant messaging names all follow the person, in contrast to landline
numbers, physical addresses, and database information. Changing how
who I am relates to where I am, and to the parties that know where I am -- in addition to being an extension of a landline phone, the cell phone is
also a beacon -- will in turn change notions of identity.

Wal-Mart's use of velocity as an organizing principle follows other
examples of "what it is" being subsumed by "what it does," based on
easily analyzed historic data: frequent flier and buyer databases are
other examples. In contrast, actuarial data is based less on behavior
(smoking being the major exception) and more on macro-level patterns:
ethnicity, parental cause of death. As risk factors are identified
and quantified, it would seem likely that insurers and other risk
markets will more frequently charge based on what their clients do.
In states that do not require motorcycle riders to wear helmets, for
example, the cost of increased numbers of injuries is borne broadly.
If, by contrast, the risk premium is calibrated to an individual's
helmet decision, a rider who goes helmetless pays the realistic cost
of that choice, and economic incentives may motivate safer behavior.
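A toy expected-cost model makes the point concrete. Every number below is assumed purely for illustration; real actuarial pricing is far more involved.

```python
# Toy expected-cost model (all numbers assumed for illustration)
# showing how a premium calibrated to the individual helmet decision
# shifts the cost of extra injury risk onto the rider who creates it,
# rather than spreading it across the whole pool.

INJURY_COST = 200_000.0      # hypothetical cost of a serious injury
P_INJURY_HELMET = 0.002      # assumed annual injury probability, helmeted
P_INJURY_NO_HELMET = 0.006   # assumed annual probability, helmetless

def actuarial_premium(p_injury, loading=1.2):
    """Expected annual payout plus a 20% administrative loading."""
    return p_injury * INJURY_COST * loading

helmeted = actuarial_premium(P_INJURY_HELMET)       # ~480 per year
helmetless = actuarial_premium(P_INJURY_NO_HELMET)  # ~1440 per year
surcharge = helmetless - helmeted                   # ~960 per year
```

Under these assumed figures, the helmetless rider faces a surcharge on the order of a thousand dollars a year: the behavioral signal the essay describes.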

The migration in the locus of diagnosis parallels a much larger
transition as data evolves from paper to bits. As information moves,
it allows different things to be done with it. Standardized ways of
naming things, for example, begin to give life to the notion of
information as a utility. Maps are an excellent example. The mere
task of obtaining and storing paper maps for any given area formerly
required special facilities and expertise. Visiting the map room of a
great research library is a rare treat, but prohibitively costly at
scale. With maps being easily standardized and quantified through the
efforts of Navteq and other companies, anyone with a browser can see
down to the block or house level for many locations around the world.
The same technology taking diagnosis from the mechanic into the car is
moving navigation from maps to vehicles.

This build-up of shared resources in turn has implications for
identity. When information was scarce -- think of pre-Gutenberg for
example -- books and learning (the ability to use the books) conferred
great power. As recently as the mid-20th century, there were still
debates in England, the U.S., and possibly elsewhere over what body of
knowledge an educated man "obviously" would command. In just a few
decades women outnumbered men in American college enrollment, the
specialization and mass of scholarship exploded, and information
became not just plentiful but overly plentiful. Compared to those
Reformation-era monks, today's smart (learned?) person may have
internalized relatively little but have access to many pieces and
kinds of information: how much one knows and how much one can find or
manipulate are related but distinct questions.

These related questions of "who," "where," and "what is happening" are
being both asked and answered differently than they were only a few
years ago. Based on the uses of mobile broadband in places like
Korea, the answers in the U.S. ten years from now will be different
yet again. Many questions raised by this series of inversions are
related to education, but also concern data rights and
responsibilities.

U.S.-based collectors of personal information -- whether banks,
hospitals, governments, or communications companies -- currently can
and usually do treat information about a person as the institution's
property. AT&T recently changed its privacy policy to say that it can
track what a customer watches on its broadband and television
services, and use that information as it sees fit. Recall that there
was a leak of Robert Bork's video rentals to the press at the time of
his confirmation hearings, and subsequent legislation made such
disclosures illegal. Legislation is again lagging behavior, as the
often alarming disclosures of data privacy mistakes accumulate.
Before the laptop was located, the White House requested a
supplemental appropriation of $160 million to fund the Veterans
Administration's response to the theft of private data from a single
PC: that covers a year of free credit monitoring for 26.5 million
veterans. A call center set up in response to the error was costing
$200,000 a day to operate as of mid-June.

That single example points to a myriad of other privacy and security
questions that haven't been widely raised yet, given that such a crime
would have been impossible 20 years ago and extremely difficult even a
few years ago. Merely having to report such breaches has only
recently been required, and then only in certain states. At base, we are in
the midst of redefining such ancient notions as privacy, property
rights, and risk, all of which relate intimately to identity. For all
of the possibilities of the digital era -- getting lost less often,
fixing machines before they break, being reachable at any time -- the
nightmare scenarios also proliferate.

At the minimum, we need to develop more widely shared technical,
managerial, social, and legal languages for discussing some basic
questions:

-Where is the data collected and stored relative to the person or
thing to which it relates? How are disputes over accuracy identified
and resolved?

-What are the analytical possibilities of that data?

-How are the benefits of those analytics apportioned to the people
whose behavior might be under analysis, or otherwise have a claim to
ownership (however that term is defined) as well as to the parties
doing and/or purchasing the analysis?

-What are legitimate uses of personal data? How, if at all, is
consent granted for certain of these? What controls ensure that
proper safeguards are in place and followed? What constitutes abuse
of personal data? What recourse exists for various affected parties?

Recent Congressional debates have brought some of these issues to the
forefront, but insofar as the intricate details of any proposed
legislation will have major implications, much remains to be seen as
to whether the U.S. has actually made headway on the problem. As hard
as that task will be, the global nature of data networks implies that
even broader questions of jurisdiction will also need to be addressed.

Friday, May 26, 2006

May 2006 Early Indications: Of Spaces and Places

Hurricane season is starting on the east and south coasts of the U.S., with residents hoping for a respite from the extensive damage of recent years. On the other end of the country, California is prone to mudslides, brush and forest fires, and earthquakes. A vaguely defined "tornado alley" runs south from the Dakotas to Texas, and as far east as Indiana. Waterfront areas in New England flooded earlier this month, and Katrina damage in the Gulf coast region last year was only partially hurricane-related.

As the world and the North American economy become more virtual, businesses are encountering new layers of the paradox of place. An Internet connection can link two people by voice, text chat, or video almost anywhere in the developed world, with many developing nations catching up fast. As Alan Blinder points out in an important article in Foreign Affairs, many kinds of work can be done in the lowest-cost environment and rapidly moved to less expensive surroundings; switching costs are dropping as work seeks out low-cost workers and infrastructure. For product work, that migration implies moving factories. More recently, services from radiology to call centers to coding have begun to be outsourced and/or offshored. One shorthand prediction calls China the emerging factory to the world, with India its back office. But costs are only one aspect of the tension between place and space.

The paradox of place also shapes how companies support their presence in the etherworld. I spoke with two major companies in the past month that have located a desirable geography for backup operations centers -- away from tornadoes, hurricanes, earthquakes, and floods, and near skilled workforces -- in Minnesota. Omaha's favorable positioning relative to major telecom connections has helped fuel the growth of call centers near what used to be a railroad interconnection center; the fiber optic cables were often laid in railroad right-of-way, linking the 19th century to the 21st.

The dynamics of place affect many business choices. Locating a factory or distribution center near a prime customer, as Dell's suppliers have near Austin, tightens tolerances on deliveries and can support higher levels of customer service. Moving R&D operations near major university centers, as Novartis and other companies have around MIT and Harvard, can impose extremely high housing prices, and thus wage scales, onto employers. For employers outside those sectors that require such specialized (and localized) expertise, Massachusetts is undesirable as a new business destination, and high housing prices are noted as a major deterrent to job growth there.

Richard Florida's influential book The Rise of the Creative Class argued that rather than lobbying with tax breaks and other inducements for large Toyota or Mercedes factories, states and localities in search of jobs should instead seek to attract creative individuals. These people tend to migrate to places with good music and culture, good restaurants and diverse populations, and strong educational institutions. After arriving, they put their skills and networks together and make jobs for themselves and others. His examples -- San Francisco, Minneapolis, and Pittsburgh, among many others -- appear to support his thesis. But another characteristic joins these places: essential but non-creative people like plumbers, police officers, teachers, and support personnel get priced out of cities that Florida lists as exemplars.

Commuting distances for the working people who make creative centers function keep increasing, and these jobs matter for quality of life. Places like Marin County and Greenwich, Connecticut are undeniably appealing in many ways. But what happens when auto repair shops and dry cleaners can't survive? Many skilled jobs can be performed remotely, to be sure, but how can affluent, attractive locales keep nurses, delivery truck drivers, and other people whose skills are in short supply right now? Societies at all stages of economic development are experiencing the effects of selective job mobility in the aftermath of the Internet and cellular telephony revolutions.

There's another recent phenomenon of skills and place: workers in skilled jobs (such as information technology) often are trained at academic centers far from an employer base. Kathy Brittain White served as CIO at Cardinal Health before founding Rural Sourcing, an American company that seeks to provide the cost savings of displacing work to a lower-cost, lower-wage environment. Her twist to the offshore model is locating programming and support centers in such places as Greenville, North Carolina - home to East Carolina University, which now enrolls over 20,000 students.

Rural Sourcing uses networks to take relative isolation and turn it into comparative advantage. In a parallel move, Google is opening major facilities in New York and Pittsburgh, the latter because of Carnegie Mellon's powerful computer science presence. In the 19th century, proximity to water power made New England mill towns economic engines for the shoe and textile industries that were centered there. Detroit built on access to freighter ports that delivered the bulk materials for the auto industry (and on the venture capital provided by timber barons enriched by the demand for mass-produced wooden furniture and building supplies).

Today, universities are vying to attract knowledge-intensive industries, but what are the other sources of advantage for the next 25 years? If the air taxi model driving Eclipse Aviation and other companies takes hold, the need for an airport with commercial carriers may drop in priority, for example. If home schooling continues its strong rise in popularity, more people might move to places without demonstrably good school systems. Telemedicine could reduce the urge to live near major medical centers. Many such wild cards remain to be played.

Far from the fields that White is cultivating, the place of cities remains contested and important. The Economist recently ran an obituary for Jane Jacobs, a powerful force in 20th century American urbanism. Jacobs lacked academic credentials but argued for the organic aspects of cities. She opposed zoning, for example, reasoning that people should be able to live near their work. Her energy and ideas helped defeat some of the more sweeping "urban renewal" efforts of the 1950s and '60s as citizen movements began to oppose the bulldozing of neighborhoods that happened to lie in the path of expressways. Criticized for advocating gentrification, she herself eventually left Greenwich Village and found Toronto more hospitable to her thinking (and financial means) than her adopted New York, which she tended to idealize. Jacobs' crusade served as a reminder that the cost of the suburban model can only partially be measured in fuel consumption or rising commute times.

Thomas Friedman famously asserted that "the world is flat": anyone anywhere can participate in the global economy via various connections. Florida replied last fall that rather than being flat, "the world is spiky" in that concentrations of talent and resources matter more than the ubiquitous access Friedman chronicles. Rather than forcing these two arguments into false opposition, it is useful to use the insights of both to examine how connection is changing work, culture, and economics.


Blinder article on offshoring (long version):
http://www.princeton.edu/blinder/papers/05offshoringWP.pdf

New York Times article on air taxis (1 March 2006):
http://www.nytimes.com/2006/03/01/business/01flight.html?ei=5088&en=8c4b5c2eb378642d&ex=1298869200&adxnnl=1&partner=rssnyt&emc=rss&adxnnlx=1148475788-8bOlM7Fw9IRoMu3c+oD95A

Florida response to Friedman:
http://www.creativeclass.org/acrobat/TheWorldIsSpiky.pdf

Thursday, April 13, 2006

April 2006 Early Indications: Four and a Half Companies to Watch

I don't have any explanation of why there seems to be such a burst of compelling stories all of a sudden, but here are some startups that look promising.

Metaweb
Danny Hillis is clearly one of the giants of the current computational landscape. Thinking Machines, which he founded at the age of 26, delivered breakthrough massively parallel computers that were as visually stunning as they were powerful; Maya Lin was responsible for some of the hardware designs (one of which ended up featured in the movie Jurassic Park), and the NSA bought Connection Machines for cryptology. After a stint at Disney as an Imagineer, Hillis teamed up with Bran Ferren to found Applied Minds, a design studio of exceptionally smart designers, engineers, and tinkerers whose most famous product may be the Maximog, a Mercedes Unimog on steroids. Much of their work is out of public view but is said to similarly blend hardware, software, and engineering: robotics is one area of focus.

More recently, and just as quietly, Hillis co-founded Metaweb in 2005. The only public bit of information appears in a press release dated March 14 from Millennium, one of the company's funding VCs: "The 'metaweb' is a system designed to provide Internet users the ability to more efficiently locate and use information. The idea is to help people make better use of the vast information sources online." The company's Jobs page offers a further clue: "Features of our system include database design, collaborative filtering, data visualization, recommendation systems and semantic networks."

I can offer three further hints as to what might consume the announced $15 million: 1) As John Battelle has pointed out, the Connection Machine architecture is the great-grandfather of Google's computing platform, so computationally searching and organizing big spaces is probably involved. 2) Hillis has been working on the problem of how to abstract the processes of the brain for over two decades; his two companies contained the words "thinking" and "minds," after all. Some sort of association, beyond indexing, feels probable. 3) I would wager that the "meta" in the name is neither accidental nor marketing-driven.
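To make hint 2 concrete, here is the basic difference between an index (term-to-document lookup) and a semantic network (typed links between entities). This is purely illustrative; nothing is known about Metaweb's actual design, and every entity and link type below is invented.

```python
# A toy semantic network: association beyond indexing. Edges carry a
# relation type, so a query returns typed facts, not just keyword hits.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # edges[subject] -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def link(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def associations(self, entity):
        """Return typed associations for an entity, in insertion order."""
        return self.edges.get(entity, [])

net = SemanticNetwork()
net.link("Danny Hillis", "founded", "Thinking Machines")
net.link("Danny Hillis", "co-founded", "Applied Minds")
net.link("Thinking Machines", "built", "Connection Machine")

print(net.associations("Danny Hillis"))
```

A full-text index could tell you which pages mention Hillis; a structure like this can tell you how he relates to what the pages describe.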

Vivid Sky
I first saw these guys at Demo, where their story caught the imagination of a wider audience via stories in USA Today and elsewhere. Most important, the firm has raised the necessary funding since the exposure. As I noted in my trip report, "Vivid Sky's story begins with a UPS-grade ruggedized handheld that you rent at the baseball stadium. From it you can watch video highlights throughout the game, order concessions, participate in online contests and surveys, order tickets, and check statistics, scores, etc. Pilots will be deployed this summer, and word is there will be football action later this year."

According to the company's website, the pilots will be literally big-league: Boston, Chicago, and Los Angeles franchises have expressed interest. Frank Gehry's firm is investigating designing the technology into the new Brooklyn Nets arena. Miami hosts the Super Bowl next year and hopes to pilot the platform with that event in mind. I'm hoping to see a game deployment this summer.

ThingMagic
ThingMagic builds tag-agnostic RFID readers that are more like routers than radios. As the company's VP for development puts it, "The new RFID readers are designed to provide the functionality of a gateway for large networks. RF interfaces to the tags reside on one side of a reader, with a database server and a TCP/IP network interface on the other side, fully equipped to be part of a networked-distributed data aggregation and analysis system." Network-friendly functionality, including load-balancing, quality of service, and security, is now provided by the readers. These can handle active and passive tags, including those in any geography and written to any standard. The company, an MIT spinout, has raised $21 million, including $6 million from Cisco and Media Lab founder Nicholas Negroponte in February. It has been profitable for all of its five-plus years.
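The "reader as gateway" idea can be sketched in a few lines: raw RF observations come in on one side, and aggregated, network-ready events go out the other. This is not ThingMagic's API; the class, field names, and tag IDs below are all invented for illustration.

```python
# Sketch of an RFID reader acting as a network gateway: buffer raw tag
# reads from the air interface, then emit one JSON payload that a
# TCP/IP collector or database server could consume.
import json
import time

class GatewayReader:
    def __init__(self, reader_id):
        self.reader_id = reader_id
        self.buffer = []

    def on_tag_read(self, tag_id, protocol):
        # RF side: a raw tag observation arrives.
        self.buffer.append({"tag": tag_id, "protocol": protocol,
                            "reader": self.reader_id, "ts": time.time()})

    def flush(self):
        # Network side: aggregate buffered reads into one payload.
        payload = json.dumps(self.buffer)
        self.buffer = []
        return payload

r = GatewayReader("dock-door-3")
r.on_tag_read("E200-1234", "EPC Gen2")
r.on_tag_read("A1B2-0001", "ISO 18000-6B")
payload = r.flush()
print(payload)
```

The point of the architecture is that downstream systems see clean, timestamped events rather than raw radio chatter, which is what makes load-balancing and analysis practical.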

While RFID adoption has widely been reported to be slower than expected, a new generation of tags and readers, including ThingMagic's, is expected to accelerate the pace of adoption: Bear Stearns reported this week that Wal-Mart had purchased 15,000 fixed readers from ThingMagic's competitors Alien and Impinj while Albertson's bought 5,000 fixed readers from Symbol.

Ekahau
This company is by no means new, but I just got wind of it lately. Like ThingMagic, Ekahau was founded in 2000 from academic roots: the team comes from the University of Helsinki's Complex Systems Computation Group. (I have no idea what the name means or connotes in whatever language, but if you want to work there, being bilingual in Finnish and English is a high priority.) The technology is quite clever, using open-standards-based wireless Ethernet to do positioning inside a wi-fi hot spot. The company also links RFID tags to 802.11 networks, allowing faster deployments because of the large installed base of wireless networks.
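Ekahau's actual method is proprietary, but wi-fi positioning is commonly explained as fingerprinting: record the signal strengths seen at calibrated reference points, then match live observations against that map. A minimal sketch, with invented locations and RSSI values:

```python
# Wi-fi fingerprint positioning sketch: pick the calibrated reference
# point whose recorded signal strengths are closest (in Euclidean
# distance) to what the device currently observes.
import math

# Calibration map: location -> RSSI in dBm per access point (invented).
fingerprints = {
    "ER hallway":   {"ap1": -40, "ap2": -70, "ap3": -60},
    "radiology":    {"ap1": -75, "ap2": -45, "ap3": -55},
    "loading dock": {"ap1": -65, "ap2": -60, "ap3": -35},
}

def locate(observed):
    def distance(ref):
        return math.sqrt(sum((observed[ap] - rssi) ** 2
                             for ap, rssi in ref.items()))
    return min(fingerprints, key=lambda loc: distance(fingerprints[loc]))

print(locate({"ap1": -42, "ap2": -68, "ap3": -61}))  # ER hallway
```

Because the reference map rides on ordinary 802.11 access points, no dedicated positioning hardware is needed, which is exactly why the installed base of wireless networks speeds deployment.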

The combination of tag tracking and location has many applications, particularly in health and safety: where in the medical center complex is the needed asset and/or skilled person? Where is the 911 call coming from among the wireless VoIP handsets? After an industrial or mining accident, where are the workers?

WikiCalc
This is the half-company. Like Danny Hillis, Dan Bricklin has longstanding credentials, having co-invented the spreadsheet in 1979. Beginning with Wall Street (Bricklin has a Harvard MBA), the spreadsheet transformed finance by allowing traders to calculate complex relationships on their desktop computers. Along with word processors, the application helped fuel the PC boom of the 1980s.

Now, 27 years after VisiCalc, Bricklin is working on WikiCalc, a distributed spreadsheet. Just as Google bought Writely as a probable cornerstone of a network-centric productivity suite, WikiCalc looks like an obvious Excel competitor that is built from the ground up for distributed use. John Sviokla, formerly a professor (he co-wrote the seminal Virtual Value Chain article in HBR with Jeff Rayport) and now heading up innovation at DiamondCluster, immediately saw the importance for organizational behavior:

"Spreadsheets are the key, interdependent control system used by large organizations. GE, BP, American Express, and all large organizations have thousands and thousands of financial models in Excel, which are the basis for budgeting and day-to-day management of the company. One of the big challenges of this situation is that there is no 'compare' function for spreadsheets, the way that there is for word documents. ... These financial models are very important to the running of large organizations, as they serve a role to both articulate what the given function or division will do (e.g. what sales are you projecting at what margins), but they are also often the control system by which senior management reviews progress, or lack of progress, and uses it to guide the business on a day-to-day basis."
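The missing "compare" function Sviokla describes is easy to sketch at the cell level; the budget models and figures below are invented for illustration, and a real tool would of course also have to diff formulas, ranges, and structure.

```python
# Cell-level diff of two spreadsheet models, each represented as a
# dict of cell address -> value or formula string.

def diff_sheets(a, b):
    """Return {cell: (old, new)} for every cell that differs."""
    changes = {}
    for cell in set(a) | set(b):
        if a.get(cell) != b.get(cell):
            changes[cell] = (a.get(cell), b.get(cell))
    return changes

budget_v1 = {"B2": 1000000, "B3": 0.12, "B4": "=B2*B3"}
budget_v2 = {"B2": 1000000, "B3": 0.15, "B4": "=B2*B3"}

print(diff_sheets(budget_v1, budget_v2))  # {'B3': (0.12, 0.15)}
```

Even this crude version shows why the gap matters: a one-cell change to a margin assumption can silently alter a whole budget, and today nothing flags it the way tracked changes flag edits to a memo.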

Like the Open Source Application Foundation's Chandler personal information manager (now in version 0.6), WikiCalc bears watching for what it could do to work, and workplaces, if its model takes hold.

A few strands connect these five efforts. Vivid Sky and Ekahau both exploit pervasive wireless. WikiCalc and Metaweb both presume a network of connected people and devices; neither is a ported desktop application. ThingMagic, Ekahau, and WikiCalc are all on the "open" end of the standards spectrum, while the SkyBox is more proprietary and Metaweb isn't saying. Metaweb and WikiCalc have legendary bloodlines, Ekahau and ThingMagic sport Ph.D.s en masse, and Vivid Sky is a scrappy startup with a particular passion and proven perfectionism honed over four years of nights and weekends as the team kept their day jobs. Between rooting for some underdogs (Bricklin is a one-man show), wondering what Hillis has in store, and possibly seeing the beginnings of an RFID surge, there's a lot to watch in these five situations.

Thursday, March 30, 2006

March 2006 Early Indications I: What is the digital era?

It's commonplace to refer to our situation as a digital economy, an information age, or a post-industrial society. Because we have almost exactly 50 years of experience with commercial computers (GE purchased the first commercial Univac system for its Louisville appliance plant in 1954), it's worth trying to analyze how the U.S. has changed in that time. Isolating how information technology has shaped the economy and society turns out to be much less direct than looking for the impact of, say, the automobile.

To begin with, the nation has steadily produced more and more economic value. In 1900, U.S. GDP per capita, in current dollars, was $268. By 1950, that figure had multiplied seven times to about $2,000. Between 1950 and 2000, the multiple was eighteen. To put that per capita figure in perspective, total GDP, also in current dollars, rose from $294 billion to $9,817 billion: a 33-fold increase. (The gap between the 18x per capita multiple and the 33x total multiple suggests that population nearly doubled, which it did, from 151 million to 281 million. I don't know how much the apparently sudden growth could be methodological, but I was quite surprised to see a counter on the Census site indicating that the U.S. is just about to cross the 300 million mark. It probably has, depending on how illegal aliens are counted.)
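The arithmetic in that parenthetical is worth checking explicitly: the ratio of the total GDP multiple to the per capita multiple should equal the population multiple.

```python
# Sanity check on the 1950-2000 figures cited above (current dollars,
# Census population counts).
gdp_1950, gdp_2000 = 294e9, 9817e9
pop_1950, pop_2000 = 151e6, 281e6

total_multiple = gdp_2000 / gdp_1950                                  # ~33.4
per_capita_multiple = (gdp_2000 / pop_2000) / (gdp_1950 / pop_1950)   # ~17.9
pop_multiple = pop_2000 / pop_1950                                    # ~1.86

print(round(total_multiple, 1),
      round(per_capita_multiple, 1),
      round(pop_multiple, 2))
# 33.4 / 17.9 is about 1.86: the population multiple, as expected.
```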

The role of agriculture has changed in surprising ways. The number of farms in the U.S. in 1950 was almost the same as in 1900, a little over five and a half million after having peaked in the mid-1930s. By 2000, the number of farms had dropped to 2.2 million, but the average size, possibly reflecting the rise of organic farms, was actually dropping from its high in 1994. The amount of total acreage in farms reached its peak in 1953: for all the talk of urbanization in the late 19th century, it turns out that in 1950 the U.S. was still robustly rural, in land use terms anyway.

The surprising amount of farm acreage belied a strong population shift, however. In 1900, 41 percent of the U.S. workforce was employed in agriculture. The figure fell to 16 percent by 1945, and by 2000 fewer than 2 percent of workers were employed in agriculture, many of them part-time. Agriculture accounted for less than 1 percent of GDP by that time, a ninth of its 1945 share.

Where did people go when they left the farms, which were mostly located in the Midwest? South and west, of course: the 13 states that constitute the Census Bureau's West region (every state west of Texas) combined to grow from 13 percent of U.S. population in 1950 to 22.5 percent fifty years later. I was surprised that the South only increased from 31 to 36 percent. Before talking about macroeconomic effects of computing, one has to appreciate the impact of air conditioning on where people live: such fast-growing states as Virginia, Georgia, Texas, and Arizona can be uncomfortable and unhealthy without climate control. Air conditioning in turn raises the importance of electric power, which was still a novelty in the rural South in 1950.

Along with internal migration, the second half of the twentieth century was marked by broad social change: political and economic involvement by women grew broader, in terms of numbers, and deeper, in terms of impact. Women doubled their participation in the workforce, from 30 to 60 percent, and now often, but not routinely, hold positions as CEOs, senators, Supreme Court justices, and astronauts. Women also constitute well over half of the college population only a generation after having gained admittance to the leading private universities. The U.S. is also a much older nation than in 1950. Life expectancy at birth has risen from 68 to 77. Both of these trends relate closely to changes in medical technology and, for women, birth control. It will require further study to determine how much the increase in life expectancy relates to computing: trends in immunization, smoking, nutrition, and cardiac interventions are, I suspect, far more important.

In demographic terms, the automobile stands out among technologies with major impact. The pervasiveness of its influence tracks closely with the invention and habitation of the suburb - a term that did not exist at the time of the 1900 census. Between 1910 and 1950, when the percentage of population living in suburbs more than tripled, to 23 percent, the rise of the automobile literally reshaped the landscape. In the fifty years that constitute our focus here, the suburban share more than doubled again: fully half the U.S. population now lives in suburbs, striking testimony to the geographic transition caused by the automobile.

Shifting from residence to occupation, manufacturing surged at agriculture's expense as the air conditioning- and automobile-related figures would suggest. But while manufacturing employment peaked in 1979 at over 19 million jobs, it had been declining since 1953 as a percentage of total employment: the drop, from one job in three to one in ten, constitutes another defining characteristic of the past half-century.

The picture of how and why this shift occurred is complex, politically loaded, and still poorly understood, but one salient factor ties agriculture and manufacturing: massive productivity growth. Just as less than one percent of the workforce can go a long way toward feeding the U.S. population while exporting over $50 billion worth of food a year, so too has manufacturing been able to produce more and more with fewer inputs of labor. The role of computerization in productivity growth has yet to be fully analyzed, but it clearly contributes.

Given that the 1950 to 2000 half-century marked a precipitous loss of employment, as a percentage of the workforce, in both the agricultural and manufacturing arenas, we know that services have become the dominant economic sector. According to the CIA's World Factbook, "industry" constitutes 21 percent of the U.S. economy, while services add up to 78 percent. How could this sector grow so big so fast, and what are some of the implications?

For all the rhetoric about "becoming a nation of burger-flippers," government has become a much bigger economic entity that gets bundled into services: prison guards, for example, constitute one of the fastest-growing occupational categories. As of 2002, there were more government employees -- about 19 million at all levels, not counting military personnel -- than in any other standard industry group. Health care was second, and retail was third.

All three of these industries, writ large, have failed to harvest the productivity increases that manufacturing or, say, financial services have. How much and how well these three industries adopt information technology will go a long way toward dictating the fate of the tech sector. A further difficulty in assessing IT's impact comes in the measurement of services productivity: x units of input create y units of manufactured (or harvested) output, but if the input is time spent working, measuring the output of a teacher, software engineer, or bank teller is much less straightforward.

Because they frequently don't store or travel well, services are harder to export than manufactures or food. Once again using NAICS definitions from 2002, of the top five service categories by revenues, retail and health care are highly localized, finance and insurance face cross-border regulatory issues, and information and professional/scientific/technical services, while valuable and often portable, face price-based competition from offshore. If trends in offshoring are any indication, it may be easier for the U.S. to import services (such as remote radiographic analysis, call centers, and computer programming) than it is to export such activities as investment banking, neurosurgery, or engineering education.

The uncomfortable juxtaposition of globalization and locality is not a new phenomenon -- just look at England in the twilight of empire. If people only earn money from local sources but spend it on goods and, increasingly, services from "away," eventually money needs to come back into the locality: just as a multi-crop family farm is no longer a viable option, neither is a self-sustaining local economy. Somehow, money needs to come in as well as leave, and the current trade imbalance and federal debt levels both ratchet up that imperative.

Every new technology has at least three uses. You can use it to:

-do things you've been doing, but faster and easier (in shorthand, this is automation)
-make things to do things (capitalization)
-do new things that were previously impossible (innovation).

In looking over the government's service categories, it's hard to see the capitalization and innovation. Let me be clear: the automation effects are often substantial, and there are elements of technology-driven innovation in service industries. A word processor not only automates a typewriter but allows new freedom to revise and to reuse. Google more than automates a reference library. In terms of capitalization, eBay allows anyone to become a retailer with just a PC for infrastructure.

But the main service innovations appear to be in recreational and peripheral (no pun intended) activities rather than at the heart of the economy. Cellular telephony is a large industry, but a) it's cannibalizing an existing sector and b) it's no automobile in its impact. Digital gaming is an innovation, but hardly a core activity, and it has yet to disrupt adjacent industries like movies or education. The automation effects of ERP replacing a general ledger or CAD replacing drafting tables are significant, without a doubt, but business computing innovates far less often and powerfully in services. As a macroeconomic replacement for manufacturing, the information sector falls short - today.

Like many others, I persist in believing that the transformative power of computing lies ahead of us. Whether it's in genome-aware therapeutics, or rich media self-publishing, or low-cost avionics that make small jets feasible as air taxis, the majority of digital innovations that will remake the economy are as yet uncommercialized. Compared to such landmarks as the invention of the steam engine or the factory system, the fifty years of computing represent enormous change in a short time. The daunting fact is that the change to come looms even bigger.

In the interim, the demands of a digital economy contribute to complex and difficult demographic issues: skill- and education-based bifurcation, along with a changing racial composition. While I need to research the matter further, my hunch is that factory work paid better than farm work and was widely accessible at the low end. People could leave farms, enter manufacturing with no or few skills and little education, and stay afloat. A further correlate here is decentralization: factory work collects resources in one place, while services industries (and powerful communications networks) disperse them. What are the consequences of the south and west's growth without a heavy reliance on industrial centers like Milwaukee, Pittsburgh, or Detroit?

Prior to and during World War II, the internal migration of black Americans from the rural south to the industrial midwest led to such varied changes as a rebirth of popular music, a power base for the Democratic party, and the rise of a black middle class. Only one or two cultural hops separate Henry Ford from Diana Ross, Lyndon Johnson, and the Cosby Show. Look a little closer and you see the Rolling Stones, Magic Johnson's NBA, and Oprah Winfrey, who was born in Mississippi but made her name in Chicago.

Now the opposite dynamic is at work as manufacturing automation and globalization release workers to take jobs in lower-paying categories such as hospital food service or big-box retail; in raw numbers, the biggest job-creators of the past few years have been Home Depot and Lowe's, and of course Wal-Mart's net role in employment is being hotly disputed. Retail and other services often teach their workers how to use automated systems but rarely prepare them to enter a better-paying sector. How the shift to services interrelates with America's racial picture, including of course the emerging Hispanic majority, will be critically important to track.

As the CIA's World Factbook puts the issue,

"The onrush of technology largely explains the gradual development of a 'two-tier labor market' in which those at the bottom lack the education and the professional/technical skills of those at the top and, more and more, fail to get comparable pay raises, health insurance coverage, and other benefits. Since 1975, practically all the gains in household income have gone to the top 20% of households."

The consequences of such a bifurcated populace touch sociology, politics, economics, and even ethics, so I won't even attempt a summary comment. Perhaps this trend is the result of moving farther and farther from a subsistence economy. One of the things we'll be tracking as this research progresses is the changing composition of the economy away from food, clothing, and shelter to transportation, entertainment, and other luxuries. The interplay of rapid population growth, rapid increase in the amount of liveable and available real estate, wider education, suburbanization, and the shift to a services economy all contribute to making the task of assessing information's role highly problematic.

But that won't stop us from trying.

Sources:

Unless otherwise noted, all figures come from the U.S. Census Bureau.

CIA World Factbook
(http://www.cia.gov/cia/publications/factbook/geos/us.html)

Carolyn Dimitri, Anne Effland, and Neilson Conklin, "The 20th Century Transformation of U.S. Agriculture and Farm Policy," USDA Economic Research Service Electronic Information Bulletin 3, June 2005.

Frank Hobbs and Nicole Stoops, "Demographic Trends in the 20th Century," U.S. Census Bureau, 2002.

Louis D. Johnston and Samuel H. Williamson, "The Annual Real and Nominal GDP for the United States, 1790-Present." Economic History Services, October 2005 (http://www.eh.net/hmit/gdp)

Tuesday, February 28, 2006

February 2006 Early Indications II: Demo trip report

Earlier this month I spent several days at the 16th annual Demo conference, where 68 emerging companies and technologies launched and/or presented. Both bloggers and mass media outlets have fuller coverage (see below), but I wanted to point to a few important companies and play out some of their implications. If you want to see a company's pitch for yourself, Demo has videos of all the participants.

1) Distributed infrastructure
One of the trends that's having a significant impact on the global economy is decentralization. Apart from relatively few activities - mining, manufacturing, and medicine - many aspects of productive infrastructure can migrate to one's laptop or thereabouts: newspaper printing presses, recording studios and CD-pressing factories, video production suites, travel agency ticket printers, call centers (as with JetBlue), banks, phone companies, and photo labs are just a few examples. In about 10 years, each of these very expensive ventures has been replicated and in some cases undermined by a digital equivalent. Demo had a few more companies in this vein:

*Blurb allows customers to design books with photos and (unlike Ofoto and iPhoto) words. The custom-printed volumes are reasonably priced and handsome to look at. The company is finding a large market in charity cookbooks, a category in which some 400,000 titles appeared in 2004; some Junior League "Taste of" cookbooks sell 25,000 copies, which would dent a best-seller list.

*The big media draw of the conference was Moobella, a make-to-order ice cream factory the size of a large soft-drink vending machine. In 4 minutes a customized cup of ice cream appears, and by all accounts tastes great. It's not clear how the economics will work (vs. a vendor cart for example), but the team solved some tough technology problems and prompted a lot of thought.

*iGuitar is a nice-looking electric guitar that's a USB peripheral. It obviates the need for lots of MIDI and outboard gear on the guitar-to-computer front, but also feeds a whole orchestra of synthesized music, to the extent that guitar-only pros are scoring TV shows formerly serviced by keyboard players.

*Locamoda connects cell phones, IVR systems, the Web, and physical storefronts in a powerful way. One application is in real estate: you're walking down the street and see a condo listed at an agency. You can interact with the 42-inch plasma screen via your cell ("press 9 to see an exterior shot"), send yourself a hyperlink, or leave voicemail for the real estate agent. The other play for this company is social computing in physical space. Imagine going to a bar and texting a message to a physical screen where everyone can see it. thetruth.com, the antismoking outfit, is testing these at skateboard parks to capture "wifiti" (pun intended).

*EQO (say "echo") connects mobile phones to Skype through a very clean interface. The economic implications of this kind of service are staggering if you're a voice-centric international carrier: VoIP is a big enough issue when the user is chained to a PC or home network, but turning the connectivity loose affects still more incumbents.

2) Search
As others have noted, living in a networked world where information is shared via standardized technologies changes how we find information. There's definitely a whiff of "me-too" in the industry right now: every startup dreams of a Googlesque IPO as their exit strategy. (Check out this videoconferencing company's homepage if you doubt my read on the zeitgeist.) At the same time, the presence of search as a pillar of both Microsoft and Apple operating systems, as a key to both on- and off-line shopping, and as a first-impulse information gathering strategy means that there is ample room for innovation, the human capital arms race in the search industry notwithstanding.

*Krugle is a search engine for code - it preserves source code formatting rather than flattening everything into text strings. It also tracks an entire session that you can keep as a trail of breadcrumbs to forward or post. Reuse has been a cherished ideal of software engineers for decades, but this approach makes reuse far more practical than any formal tool I've seen.

*Kosmix was co-founded by Anand Rajaraman and Venky Harinarayan, who came from Amazon after founding Junglee, which we wrote about in 1998. They're building vertical search in consumer areas (starting in health, travel, and politics). It looks like a company that will do well - their target markets are already generating deals.

*Transparensee does structured search: the demo was of real estate databases. Instead of having to requery after retrieving a set of findings, you use slider bars to re-weight categories to refine or expand results. If you want a $300,000 4-BR house in zip code X, let's imagine nothing comes up. The tool allows you to increase the price, decrease the BR count, or expand zip codes. The smooth interface and intuitive refining of the search were pretty appealing.
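The slider idea can be sketched simply: instead of hard filters that can return zero matches, score every listing by weighted closeness to the target and re-rank as the user moves the weights. This is a generic sketch, not Transparensee's actual algorithm; the listings and weights are invented.

```python
# Weighted-closeness re-ranking: raising the price weight penalizes
# price gaps more heavily, so near-misses on price sink in the ranking
# instead of vanishing entirely.

listings = [
    {"id": "A", "price": 310000, "bedrooms": 4},
    {"id": "B", "price": 295000, "bedrooms": 3},
    {"id": "C", "price": 430000, "bedrooms": 4},
]

def score(listing, target, weights):
    price_gap = abs(listing["price"] - target["price"]) / target["price"]
    bed_gap = abs(listing["bedrooms"] - target["bedrooms"])
    return weights["price"] * price_gap + weights["bedrooms"] * bed_gap

target = {"price": 300000, "bedrooms": 4}
weights = {"price": 5.0, "bedrooms": 1.0}

# Lower score = closer match; A is slightly over budget but has the
# right bedroom count, so it ranks first with these weights.
ranked = sorted(listings, key=lambda l: score(l, target, weights))
print([l["id"] for l in ranked])  # ['A', 'B', 'C']
```

Dragging a slider just changes the weights dict and re-sorts, which is why the refinement feels smooth rather than query-resubmit-wait.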

*Nexidia is building video search (which addresses a screaming-hot market: 20 billion streams were delivered in 2005, and 2006 looks to quadruple that, at least). The company uses phonetic indexing, which runs at 60x real time, and phonetic querying, so it's somewhat language-independent. The speed and accuracy of the search results got very high marks.
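Nexidia's indexing is proprietary, but the general idea of phonetic coding - matching how words sound rather than how they're spelled - is easy to illustrate with the classic Soundex algorithm, which collapses similar-sounding consonants to the same digit:

```python
# Soundex: a simple phonetic code. Words that sound alike map to the
# same four-character key even when spelled differently. (Nexidia's
# actual phonetic indexing is far more sophisticated than this.)

def soundex(word):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    result = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            result += code
        if ch not in "hw":  # h and w do not break a run of like codes
            prev = code
    return (result + "000")[:4]

print(soundex("Smith"), soundex("Smyth"))  # S530 S530
```

An index keyed on codes like these matches a query against audio regardless of spelling, which is also why the approach travels across languages better than text search.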

*Yet another collaborative filter for music comes from garageband.com, which focuses on "if you like this, try that" for independent bands and genres; it also pushes new music to your playlist as the user community discovers new favorites. Garageband also provides hosting and generally does a wider job of supporting the indie ecosystem than merely generating playlists. (Music Genome Project (pandora.com) is better and farther along on the recommendation front; musicplasma is quicker and dirtier but lighter weight.)
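The "if you like this, try that" mechanic, at its most bare-bones, is co-occurrence counting across listeners' libraries. A toy sketch with invented bands and listeners (garageband.com's real system is surely more elaborate):

```python
# Item-based collaborative filtering at its simplest: count how often
# two bands appear in the same library, then recommend the strongest
# co-occurrences with a given band.
from collections import Counter
from itertools import combinations

libraries = [
    {"The Shins", "Spoon", "Wilco"},
    {"The Shins", "Spoon"},
    {"Spoon", "Wilco"},
    {"The Shins", "Interpol"},
]

co_counts = Counter()
for lib in libraries:
    for a, b in combinations(sorted(lib), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(band, n=2):
    scores = Counter({other: c for (x, other), c in co_counts.items()
                      if x == band})
    return [other for other, _ in scores.most_common(n)]

print(recommend("The Shins"))
```

As the community's libraries change, the counts change, which is exactly how new favorites propagate to playlists without any editorial step.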

*Riya is a pretty hot company, apparently, and focuses on image search - very successfully, to the point where it can do facial recognition with scary-accurate results (it convinced me I want no photos of my kids online, names or no names). It also matches text that's captured in a photograph, even sideways in the background, like on a book spine.


3) Data
Finding ways to manage, secure, and exploit information overload at both the individual and corporate levels is generating some useful technologies. Questions like "who owns the data," "how do I know this information is correct, up-to-date, and/or calculated appropriately," and "compared to what" are getting considerable attention.

*Cnetchannel.com uses C|Net's huge database of technology attributes to do smart up-sell positioning on web pages. The demo showed a laptop on a catalog webpage with upsells of a) memory that wouldn't fit, b) a wireless card for a machine with built-in 802.11, and c) maybe a bluetooth mouse for a PC without that capability (I forget exactly). The "after" matched the peripherals and accessories much more closely and, the company would assert, profitably for the merchant. The company also wants to use the underlying matching and control capabilities (there's a control console for MBA types to use) in other domains.
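The underlying matching logic is attribute-driven filtering: only offer accessories the cataloged machine can actually use and doesn't already have. CNET Channel's real data model is not public; the product fields and accessory rules below are invented to show the idea.

```python
# Attribute-driven upsell filtering: each accessory declares what it
# requires of the base product and what capability it adds; offers
# that don't fit, or that duplicate a built-in capability, are dropped.

laptop = {"ram_type": "DDR2", "built_in_wifi": True, "bluetooth": False}

accessories = [
    {"name": "DDR2 memory module", "requires": {"ram_type": "DDR2"}, "adds": None},
    {"name": "DDR memory module",  "requires": {"ram_type": "DDR"},  "adds": None},
    {"name": "802.11 wireless card", "requires": {}, "adds": "built_in_wifi"},
    {"name": "Bluetooth mouse",      "requires": {}, "adds": "bluetooth"},
]

def smart_upsells(product, candidates):
    offers = []
    for item in candidates:
        fits = all(product.get(k) == v for k, v in item["requires"].items())
        redundant = item["adds"] is not None and product.get(item["adds"])
        if fits and not redundant:
            offers.append(item["name"])
    return offers

print(smart_upsells(laptop, accessories))
# ['DDR2 memory module', 'Bluetooth mouse']
```

This is the "before/after" difference in the demo: the incompatible memory and the redundant wireless card simply never appear on the page.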

*Zimini is an attempt to be a next-generation couponing engine, but it's not quite clear how coupon-users will respond to yet another helpful online service asking them to "tell us a little bit about yourself" to drive the customization.

*Kaboodle is a "collaborative shopping" service, where you dump items you find into sharable pages; it's also a handy way to organize web shopping where you're comparing price/features across sites or pages. It's easier to like than to describe.

*Panoratio began as a project within Siemens to manage and move enormous data stores from the sensors at power plants. They have what are called PDIs: Portable Data Images, which are compressed but readily navigable. It isn't highly graphical (it still looks a lot like Excel) but the navigation is still better than what you get from your everyday spreadsheet.

*Bones in Motion collects GPS and time information to create online exercise diaries, which can also be published as blogs. It correlates effort across cities (Denver's altitude vs. Charleston's humidity) so users can compare training programs and share training courses: I could run or cycle for X effort in multiple cities using shared and equalized routes. It's a great example of capturing information as byproduct and organizing it. Look for it on Sprint mobile.

*Vivid Sky's story begins with a UPS-grade ruggedized handheld that you rent at the baseball stadium. From it you can watch video highlights throughout the game, order concessions, participate in online contests and surveys, order tickets, and check statistics, scores, etc. Pilots will be deployed this summer, and word is there will be football action later this year.

*Several companies played the tagging angle, in which users or communities contribute metadata to organize some body of content (Flickr is a great example). Tagworld is a bit like MySpace, but there's a marketplace piece as well. The demonstrator said it was trivial for a 14-year-old to buy things online, some from other members' classifieds, but that raises the question of what a 14-year-old is doing with a credit card online in the first place. Draper Fisher Jurvetson just funded these guys.

*Sproutit uses tags in an email-based tool for small teams of about 10 people. The objective is to make individuals less critical for customer interaction: if I as a co-worker know your context for a given interaction, I can cover for you when you're out of the office. In a similar vein, eeminder gives mobile workers very low-latency access to corporate data.


It's fitting for conference producer and host Chris Shipley to have the last word. In her opening remarks she made two salient points about this year's crop of demonstrators. First, the differentiation between enterprise and personal technology gets fuzzier every year. Second, the industry desperately needs to address the complexity issues that prevent more people from using more products and services. As she said in her keynote,

"Who needs more buttons and features and options – on just about any product? Can you seriously say that you've used all the capabilities of any of the software or devices that you already own? Do you really want more?

"Unless, as an industry, we commit ourselves to a better user experience, clearer choices, and greater value, I am afraid that many people may just sit out the market. Needing no more new features, being unable to sift through any more search results, being overwhelmed by options, these individuals are going to stop, or at least slow down, the acquisition of new technology."

Food for thought, even tastier than the ice cream.

Other coverage:
Information Week

PC Magazine

CNN