In the September 5 issue of the New Yorker, Malcolm Gladwell explores the efforts of Mattson, a food R&D firm, to design a new cookie. The problem carried tight constraints on such factors as fat, shelf stability, and calories, and three different teams competed with alternative proposals. One was a classic top-down, managed group led by a Mattson EVP. Another consisted of two strong hands-on associates. Finally, a so-called dream team was drawn from across the industry: Mars, Kraft, Keebler, Nestle, and Kellogg's were represented, among others. You'll have to read the article to find out who wins, but the project raises several important issues.
Mattson's head man, Steve Gundrum, works in Silicon Valley and carefully tracks the tech industry. For the bakeoff, he wanted to test his hypothesis that software engineering can provide lessons to other industries. The two-man team of peers was based on Kent Beck's notion of extreme programming, or XP, in which programmers attack projects in small increments with pairs of programmers taking turns at the keyboard. The dream team was an attempt to use the open-source model to generate great ideas based on the wealth of expertise represented by the participants.
As Gladwell points out, re-designing Unix is a fundamentally different exercise from inventing a tasty, nutritious treat: fixing ("many eyes make bugs shallow") is not imagining. The fifteen expert bakers all held strong opinions about their own contributions and couldn't unite behind a consensus idea, and the project manager, instructed to let the group find its own "natural rhythm," let the chaos play out. In the end the team's friction prevented its potential expertise from being plumbed; in contrast, one Mattson person was able to draw on previous experience, including an insight that topical (surface-applied rather than baked-in) seasoning makes tortilla chips more compelling, and devise a marginally more popular entry. Gladwell argues that if the dream team had been smaller it would have functioned better, but that speculation evades the question of whether Linux was the right model in the first place.
Many other unanswered questions arise. Open source has many parallels to classic scientific research: open publication, peer review, and incremental progress. In both cases the primary incentives relate to reputation rather than commerce. Because the two communities and code bases share many similarities, it may follow that applying open source techniques to biology will amplify traditional pathways to progress. But biology can be methodically incremental in ways that new product design cannot.
Even in software, it's hard to point to examples in which an open-source community model generated something new and ready for a broad user base; Linux, Apache, MySQL, and the scripting languages (Python et al.) cannot remotely be called mass-market software. Linux is also built for use: I'll update a storage-attachment routine because I have to do that task in my job. Compare the cookie bakers, who were designing for a market. Without the commercial distributions like Red Hat and SuSE, Linux would have very few user interface refinements, include much different documentation, and lack things like liability protection and warranties that a market demands.
A new generation of for-profit companies is attempting to use open-source methods to build applications rather than infrastructure. So far it's too soon to tell whether SugarCRM can dent SAP, Siebel, or Salesforce, or whether Mitch Kapor's Open Source Applications Foundation can bring Chandler up to the level of Kapor's last Personal Information Manager, Lotus Agenda. Even though these teams can once again follow the established patterns of an existing package, it remains to be seen how well the open source model applies to more "productized" offerings. There's also money to be made in integrating free and/or open source software with both commercial software and in-house applications, and VCs are backing several startups in this sector.
The question of money points to a connected nuance: open source relies on more than attracting mobs of people to attack a problem. The cookie dream team, for example, shared neither a goal nor a reward mechanism. The Linux community, by contrast, knows through a variety of means who has contributed what. Because open source is not run for profit, some observers fall into the altruism trap. Experience suggests there is a third way here: the model can in no way be described as a charity, which means that managing in or near an open source environment raises unique challenges.*
A major and often overlooked cornerstone of the open source model is transparency: beta code is released early and often precisely because it will be imperfect. The wide variety of public responses to the Hurricane Katrina disaster illustrates how these habits are becoming ingrained: many people have offered to help individual families or groups by opening their homes or trying to direct financial assistance. Not only are FEMA and the Red Cross incapable of organizing relief in this way, but the implications of such widespread personalized benevolence also take us into new political, ethical, and even public-safety territory. Such an impulse challenges the traditional Jewish notion, which has many echoes in policy and practice, that both the donor and recipient of charity should be anonymous.
By one participant's own admission, the open source cookie model couldn't beat existing offerings from Pepperidge Farm. Metamarket's Open Fund mutual fund transparently published its holdings in real time and relied on a similar dream team of business and technology gurus, but shut down after 24 months of operation in August 2001. For all of open source's impact, which is difficult to overstate in its home terrain, we may have to wait some time until we see new drugs, fashions, or buildings built on parallel communities.
Perhaps the most potent discovery of the open source model's power occurred when people weren't expressly looking for it. In Howard Dean's 2003-4 campaign, word of mouth led to unprecedented numbers of small donations. The campaign's workings were visible to the community in ways most political organizations are not. Semi-tangible reward and recognition systems sprang up to motivate more and more volunteers to contribute energy, ideas, and time. The fact that the grass-roots movement in some ways overwhelmed the formal infrastructure was both a blessing and a curse to the campaign, which in fairness cannot be faulted for not being able to find a fulcrum for the unanticipated groundswell. (To be clear, Dean and his handlers can and should be faulted for plenty of other things.)
The overarching lesson, whether from code, campaigns, or cookies, is clear: new communications tools are facilitating new kinds of political, social, and economic interactions, the implications of which we're only beginning to comprehend.
*This analysis from an economics paper on a non-software topic seems to fit perfectly: "We suggest that . . . the individual motivations supporting community governance are not captured by either the conventional self-interested preferences of 'Homo economicus' or by unconditional altruism towards one’s fellow community members"
Samuel Bowles and Herbert Gintis, "Social Capital and Community," Santa Fe Institute working paper 01-01-003.