As technologies, cultural attitudes, demographics, and economics change, people have both the opportunity and the need to reinvent organizational models. When industrialization drew farmers into cities and factories, the military provided a convenient reference: the army of labor was directed by captains of industry. Symphony orchestras provided another authoritarian model. As the corporation matured, it invented its own characteristics. Henry Ford fathered process-centric division of labor with his refinements to the assembly line, while Alfred Sloan pioneered many organizational and financial practices, such as divisions (another military offshoot?) and ROI, that made the corporation the model for other entities, such as schools, foundations, and some sports teams.
Today's business environment presents new challenges to old models. A long list of factors combines to reshape work and organization:
-prosperity (Maslow's hierarchy of needs)
-the shift from manufacturing to services
-the rise of intangible forms of value such as brand and intellectual property
-global markets for risk
-urban congestion and telecommuting
-safety and security considerations
-China's resource hunger
-the unique nature of software as an invisible asset
-increased monetization of data
-the Internet and its associated technologies such as e-mail
-mobility, particularly the impact of cellular and other wireless data networks
-global enterprise software packages
-work-family issues that followed mass entry of women into universities and the workforce
-problem-solving vs. assembly-line routinization
-shorter product life- and use-cycles
-offshoring and outsourcing
-widespread cultural resistance to positional authority
-intensity of task and knowledge specialization
-mass air travel
and many more. As Erik Brynjolfsson recently noted in Sloan Management Review (Spring 2007, p. 55), we need to rethink the very nature of firms, beginning with Ronald Coase's famous theory: "The traditionally sharp distinction between markets and firms is giving way to a multiplicity of different kinds of organizational forms that don't necessarily have those sharp boundaries."
Given the uncertainty and rapid change implied by this list, it's no surprise that academics and other management thinkers have focused on improvisation. Rather than looking at the everyday sense of the word having to do with makeshift or ad hoc solutions, however, these theorists see considerable structure in musical and dramatic improvisation. One researcher went so far as to live with Chicago's Second City comedy troupe to investigate these structures, but our focus here will be on jazz. (A particularly valuable resource can be found in the September-October 1998 issue of Organization Science devoted to jazz and many of its organizational implications and parallels.)
According to Kathleen Eisenhardt of Stanford (in "Strategic Decisions and All That Jazz," Business Strategy Review 8 (3), 1–3), improvisation involves both intense real-time communication among players and a small number of well-understood rules within which the improvising takes place. The practice is not a matter of the soloist "making it up as he goes along," but something much richer and more collectively created. Paul Berliner, whose 1994 book Thinking in Jazz is a milestone, goes even further:
[T]he popular definitions of improvisation that emphasize only its spontaneous, intuitive nature -- characterizing it as the 'making of something out of nothing' -- are astonishingly incomplete. This simplistic understanding of improvisation belies the discipline and experience on which improvisers depend, and it obscures the actual practices and processes that engage them. (p. 492, quoted in Weick, "Improvisation as a Mindset," in the Organization Science volume noted above, p. 544)
To give some indication of just how complex the practice of improvisation can be, the organizational scholar Karl Weick explains that it in fact exists on a continuum, with the progression of techniques implying "increased demands on imagination and concentration." The continuum runs from interpretation, the simplest form, through embellishment and variation, all the way to improvisation proper, which implies time pressure and a lack of precomposition. Thinking about the organizational equivalents of these techniques is a compelling but highly imprecise exercise. (Weick pp. 544-545)
Perhaps because it evolved in parallel with the information age, jazz appears to be well suited to collaborative work by impermanent teams of skilled workers. It is also more applicable to performance than to decision-making: few great quartets or quintets have been democratic, and many leaders of bands large and small have been solitary, poor, nasty, brutish, or short, to borrow from Thomas Hobbes. Improvisation found little place in the classic big bands of Goodman or Ellington. More recently, until his death James Brown fined band members, many of whom were truly A-list musicians, in mid-performance for breaking his rules.
So improvisation in and of itself does not solve the organizational dilemma of managing real-time knowledge work. Michael Gold, who lectures on the intersection of jazz and business after having been both a bassist and a banker, posits an acronym - APRIL - to denote the five traits that carry over:
The members of a jazz ensemble possess and practice a set of shared behaviors that we call the Five Dynamics of Jazz.
* Autonomy -- self-governing, self-regulating, adaptable and independent - yet in support of (and interdependent with) the larger organism.
* Passion -- the quality of emotional vibrancy, zest, commitment, and energy to pursue excellence and the course one believes to be true.
* Risk -- the ability to take chances and explore new territory and methods in pursuit of shared goals, and the ability to support others in their explorations.
* Innovation -- the skill to invent, recombine, and create new solutions to problems using either old or new forms, methods, and/or resources.
* Listening -- the ability to truly hear and feel the communication of passion, meaning and rhythms of others. (http://www.jazz-impact.com/about.shtml)
Gold's Five Dynamics are useful but not sufficient, and raise operational questions presumably addressed in his lectures: how do good managers channel both passion and the need to show up on time? Innovation is of course vital, but how do the other members of his quartet know what to do when the improvised bass solo is over?
Another jazz player/business speaker (and a classmate of Gold's) has combined his education and work as a drummer with lessons from jobs in consulting and startups to present a potentially more rigorous view. Operating from his home bases in Norway and Boston, Carl Stormer has been addressing banks, consulting firms, telecom companies, and CPG firms on the topic of "Cracking the Jazzcode." The presentation itself, which I have not yet seen, is innovative in both structure and message.
Stormer begins with a brief welcome, then proceeds to play drums in a band of three or four players who have never before performed as an ensemble (every performance is different). These are high-grade professionals: Cameron Brown has played bass for Archie Shepp, Art Blakey, Joe Lovano, and Dewey Redman. Saxophonist Rob Scheps has recorded with John Scofield, Carla Bley, and Steve Swallow. Guitarists Jon Herington and Georg Wadenius have both toured with Steely Dan.
So the musicianship is top-shelf. What can managers learn? Stormer has developed a rich set of insights. First among these is the notion of instruments: improvisation is key to jazz, but does not in and of itself define the genre. What functions does each instrument perform, and when? In other words, why don't we hear trios of drummers or quartets of saxophones? What are the rules for passing a solo? What are the responsibilities of the horn player during the guitar solo? Instruments have different roles in an ensemble, roles that ensure that players don't have to fight for the same functions. (Conversely, when functions overlap, as with a guitar and piano, players must work out who leaves room for whom.) In addition, the ownership of instruments ensures that players match their skills with their task.
While improvisation may look individual, jazz is inherently made by groups. What are the elements that define an ensemble? Why are sextets more than twice as difficult to manage and play in than trios? How do groups communicate? Why don't quartets have teambuilding exercises? Why can the Jazzcode band of the moment work effectively without rehearsal?
The Jazzcode lecture also includes important ideas about shared cultural references: if my tenor solo quotes from "Round Midnight," the drummer will do a better job faster if he can pick up on the source of the riff. If the band gets a request not everyone knows, what happens? What is the score from which a group plays? What are the differences between notes on paper and music in performance, and what do they tell us about business processes?
Many other thought-starters emerge in Stormer's conversation. What are the benefits of increasing your competence on an instrument vs. cross-training on other instruments, most notably piano? For all the emphasis on improvisation and traded soloing, why is it that arrangers play such an important role in certain ensembles? What are the payoffs of increased competence on my instrument? Do I get more solos, will better musicians want to play with me, will I make more money? To that end, how do I practice: improving on my weak points or developing deeper insights into my favorite techniques and songs?
I don't want to give away Stormer's trade secrets, but jazz -- as a music and not just as a vague concept thought to involve chaos and unscripted soloing -- is rich with business implications. In short, I believe there may well be a Jazzcode for business and that if there is, Carl Stormer is uniquely positioned to discern and explain it. Furthermore, the emerging business and technology climate will only amplify the wisdom of his approach.
http://www.carlstormer.com/jazz/
Early Indications is the weblog version of a newsletter I've been publishing since 1997. It focuses on emerging technologies and their social implications.
May 2007 Early Indications
The following is based on the opening talk presented at the Center for
Digital Transformation's spring 2007 research forum.
I.
Roughly 20 years ago, Citibank CEO Walter Wriston said that
"information about money has become almost as important as money
itself." Since that time, complex secondary and tertiary risk markets
have grown into a massive global financial information-processing
mechanism. Stocks and bonds, traded on primary markets, are hedged by
futures, options, and derivatives, as well as a variety of arcane (to
the public) devices such as Enron's famous special purpose entities.
These instruments are nothing more than information about money, and
their growth helps prove the truth and wisdom of Wriston's comment.
Data, what Stan Davis once called "information exhaust" or the
byproduct of traditional business transactions, has become a means of
exchange and a store of value in its own right. Hundreds or even
thousands of business plans are circulating, each promising to
"monetize data." While Google is an obvious poster child for this
trend, there are many other, often less obvious, business models
premised on Wriston's core insight, that information about stuff is
often more valuable and/or profitable than the stuff.
Internet businesses are the first that come to mind. Both Linux and
eBay have captured reputational currency and developed communities
premised on members' skills, trustworthiness, and other attributes.
These attributes are, in the case of eBay, highly codified and make
the business much more than a glorified classified ad section.
Information about retail goods is used by 7-Eleven Japan to drive
new-product hypotheses in much the same way as analytical credit
card operations such as Capital One develop offers in silico. An
astounding 70% of SKUs in a 7-Eleven are new in a given year, and such
innovation in a seemingly constrained market is only possible because
of effective use of data.
Amazon's use of purchase and browsing data remains unsurpassed. I
recently compared a generic public page -- "welcome guest!" -- to my
home page, and at least eighteen different elements were customized
for me. These were both "more of the same," continuing a trend begun with a previous author or recording-artist purchase, and "we thought you might like" recommendations based on the behavior of other customers deemed similar to me. Of the eighteen elements of that home page,
each had a valid reason for inclusion and was a plausible purchase.
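The mechanics behind a "customers like you also bought" element can be sketched with simple item co-occurrence counting. Amazon's actual data and algorithms are proprietary, so the purchase histories and item names below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories (one set of items per customer).
histories = [
    {"jazz_history", "miles_bio", "coltrane_cd"},
    {"jazz_history", "coltrane_cd", "cookbook"},
    {"miles_bio", "coltrane_cd"},
    {"cookbook", "gardening"},
]

# Count how often each pair of items appears in the same customer's history.
co_counts = defaultdict(int)
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Items most often bought alongside `item`, best first."""
    scored = [(n, other) for (i, other), n in co_counts.items() if i == item]
    return [other for n, other in sorted(scored, reverse=True)[:k]]

print(recommend("jazz_history"))  # -> ['coltrane_cd', 'miles_bio']
```

Real recommenders normalize for item popularity and draw on far richer signals, but even this toy version shows how purchase and browsing data turn directly into merchandising.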
Another less visible example of this trend is the Pantone system.
Information about color is almost certainly more profitable than paint
or ink. Pantone has a monopoly on the precise definitions for colors
used in commerce, whether in advertising or branding - Barbie pink and
Gap blue are omnipresent - or in production processes: every brownie
baked for use in Ben & Jerry's ice cream is compared to two Pantone
browns to ensure consistency. Pantone is also global: Gap blue is the
same in Japan as in New Jersey, and on shopping bags, neon signs, and
printed materials. The private company does not disclose revenues,
but it is now branching out into prediction businesses, selling briefings that tell fashion, furniture, and other companies whether olive green will be popular next year.
II.
A second trend crossing business, science, and other fields can
colloquially be called "big data." We are seeing the growth of truly
enormous data stores, which can facilitate both business decisions and
analytic insights for other purposes.
Some examples:
-The Netflix Prize invites members of the machine learning community to improve the algorithms behind "if you liked X you might like Y" recommendations. While it is not clear that the performance
benchmark needed to win the $1 million top prize can be reached
incrementally, one major attractor for computer scientists is the size
and richness of Netflix's test data set, the likes of which are scarce
in the public domain: it consists of more than 100 million ratings
from over 480 thousand randomly-chosen, anonymous customers on nearly
18 thousand movie titles.
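The contest is scored by root-mean-squared error (RMSE): beat the baseline system by ten percent and the prize is yours. The metric itself is simple to state; the toy ratings below are invented to show how a modest per-rating improvement moves the score:

```python
import math

def rmse(predicted, actual):
    """Root-mean-squared error, the Netflix Prize's scoring metric."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Toy ratings on a 1-5 scale (the real test set holds millions of pairs).
actual   = [4, 3, 5, 2, 4]
baseline = [3, 3, 4, 3, 4]   # e.g., predictions hugging the overall mean
improved = [4, 3, 4, 2, 4]   # a model that gets more ratings right

print(rmse(baseline, actual), rmse(improved, actual))  # ~0.775 vs ~0.447
```

A fraction of a rating point per prediction, averaged over 100 million ratings, is exactly the kind of gain that only a large, rich data set lets researchers detect reliably.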
-Earlier this month a new effort, the Encyclopedia of Life, was launched to provide an online catalog of every species on earth. Over the past several years, meanwhile, geneticist Craig Venter sailed around
the world on a boat equipped with gene sequencing gear. The wealth of
the results is staggering: at least six million new genes were
discovered.
-The data available on a Bloomberg terminal allows complex inquiries
across asset classes, financial markets, and time to be completed
instantaneously. Before this tool, imagine answering a simple
question using spreadsheets, paper records, multiple currencies, and
optimization: "What basket of six currencies - three short and three
long - delivered the best performance over the past two years?"
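With the data in machine-readable form, that question reduces to a small search problem. The sketch below brute-forces every three-long/three-short split of six candidate currencies; the return figures are invented, and a real answer would also account for carry, volatility, and transaction costs:

```python
from itertools import combinations

# Hypothetical two-year returns versus the dollar for six candidate currencies.
returns = {"EUR": 0.08, "JPY": -0.03, "GBP": 0.05,
           "CHF": 0.02, "CAD": 0.06, "SEK": -0.01}

def best_basket(returns):
    """Try every 3-long/3-short split and keep the best performer."""
    best = None
    for longs in combinations(sorted(returns), 3):
        shorts = [c for c in returns if c not in longs]
        # A long position earns the currency's return; a short earns its negative.
        perf = sum(returns[c] for c in longs) - sum(returns[c] for c in shorts)
        if best is None or perf > best[0]:
            best = (perf, set(longs), set(shorts))
    return best

perf, longs, shorts = best_basket(returns)
print(longs, shorts)  # long the three best performers, short the three worst
```

Twenty combinations, evaluated in microseconds; the Bloomberg terminal's contribution is not the optimization but the clean, unified data that makes it trivial.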
-The Church of Jesus Christ of Latter-day Saints has gathered genealogical records
into an online repository. The International Genealogical Index
database contains approximately 600 million names of deceased
individuals, while the addendum to the International Genealogical
Index contains an additional 125 million names. Access is free to the
public.
In the presence of such significant data sets, various academic disciplines are debating how their fields should progress. Quantitative vs.
qualitative methods continue to stir spirited discussion in fields
ranging from sociology to computer science. The continuing relevance
of such essays as C.P. Snow's The Two Cultures and David Hollinger's
"The Knower and the Artificer" testify to the divide between competing
visions of inquiry and indeed truth.
A fascinating question, courtesy of my colleague Steve Sawyer,
concerns the nature of errors in data-rich versus data-poor
disciplines. Some contend that data-rich disciplines tend to be wary
of type I errors (false positives) and thus miss many opportunities by
committing false negatives (type II) that are less visible. Data-poor
communities, meanwhile, may be unduly wedded to theories given that
evidence is sparse and relatively static: in contrast to Venter's
marine discoveries, historians are unlikely to get much new evidence
of either Roman politics or Thomas Jefferson's.
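The tradeoff can be made concrete with a toy screening example: raising the evidentiary threshold suppresses type I errors at the cost of committing more type II errors. The scores and threshold values below are invented for illustration:

```python
# Hypothetical screening scores: higher means "more likely a real discovery".
positives = [0.9, 0.8, 0.6, 0.4]   # truly real signals
negatives = [0.7, 0.3, 0.2, 0.1]   # truly spurious ones

def errors(threshold):
    """Count (type I, type II) errors at a given acceptance threshold."""
    false_pos = sum(1 for s in negatives if s >= threshold)  # type I
    false_neg = sum(1 for s in positives if s < threshold)   # type II
    return false_pos, false_neg

print(errors(0.5))   # lenient: (1, 1)
print(errors(0.75))  # strict: (0, 2) - no false alarms, but more missed finds
```

A data-rich field can afford the strict threshold because the next hundred million observations will surface the missed finds eventually; a data-poor field never gets that second chance.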
III.
Given that data is clearly valuable, bad guys are finding ways to get
and use it. Privacy is becoming a concern that is both widely shared
and variously defined. Indeed, our commentator Lawrence Baxter, who
used to be a law professor at Duke, noted that defining what privacy
is has proven to be effectively impossible. What can be defined are
the violations, which leads to a problematic state of affairs for both
law and policy.
Data breaches are growing both in number and in size: in the past year
and a half, there have been roughly 50 episodes that involved loss of
more than 100,000 records. The mechanisms for loss range from lost
backup tapes (that were not encrypted) to human error (government
officials opening or publishing databases containing personally
identifiable information) to unauthorized network access. In the
latter category, retailer TJX lost over 45 million credit- and
debit-card numbers, with the thieves, thought to be connected to
Russian organized crime, gaining access through an improperly
configured wireless network at a Marshalls store in Minnesota. Bad
policies, architecture, and procedures compounded the network problem,
to the point where TJX cannot decrypt the files created by the hackers
inside the TJX headquarters transactional system.
Part of data's attractiveness is its scale. If an intruder wanted to
steal paper records of 26 million names, as were lost by the Veterans
Administration last year after a single laptop was stolen, he or she
would need time, energy, and a big truck: counting filing cabinets,
the records would weigh an estimated 11,000 pounds. A USB drive
holding 120 gigabytes of data, meanwhile, can be as small as a 3" x 5"
card and a half-inch thick.
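The arithmetic behind that contrast is worth a back-of-envelope check, using the figures above plus one rough assumption of my own, roughly a kilobyte per record (name, address, and a few identifiers):

```python
# Back-of-envelope comparison of paper versus digital theft.
records = 26_000_000
paper_weight_lb = 11_000        # the estimate above, filing cabinets included
usb_capacity_gb = 120

bytes_needed = records * 1_000  # assumed ~1 KB per record
drives_needed = bytes_needed / (usb_capacity_gb * 1e9)

print(f"Paper: {paper_weight_lb:,} lb; digital: {drives_needed:.0%} of one drive")
```

On these assumptions the entire database occupies roughly a fifth of a single pocket-sized drive, which is the point: theft now scales with bytes, not pounds.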
Redefining risk management in a data economy is proving to be
difficult, in part because IT workers have been slow to lead the way
in both seeing the value of data and treating it accordingly. To take
one notable example, the Boston Globe printed green-bar records
containing personal data relating to 240,000 subscribers, then
recycled the office paper by using it to wrap Sunday Globes for
distribution. Not surprisingly, an arms race is emerging between bad
guys, with tools such as phishing generators and network sniffers, and
the good guys, who all too often secure the barn after the horse has
run away.
IV.
Who will be the winners? That is, what companies, agencies, or
associations will use data most effectively? Acxiom, Amazon, American
Express, and your college alumni office might come to mind, but it is
so early in the game that a lot can happen. Some criteria for a
potential winner, and there will of course be many, might include the
following:
-Who is trusted?
-Who has the best algorithms?
-Who has, or can create, the cleanest data?
-Who stands closest to real transactions?
-Who controls the chain of custody?
-Who can scale?
-Who has the clearest value proposition?
-Who understands the noise in a given system?
-Who can exploit network externalities?
Whoever emerges at the front of the pack, the next few years are sure
to be a wild ride.