"The solution seems obvious: to get all the information about patients out of paper files and into electronic databases that -- and this is the crucial point -- can connect to one another so that any doctor can access all the information that he needs to help any given patient at any time in any place. In other words, the solution is not merely to use computers, but to link the systems of doctors, hospitals, laboratories, pharmacies and insurers, thus making them, in the jargon, 'interoperable'."
-"Special report: IT in the health-care industry," The Economist, April 30, 2005, p. 65
There's no question that North American medicine is approaching a crisis. According to the Washington Post, 45 million Americans carry no health insurance. Between 44,000 and 98,000 people are estimated to die every year from preventable medical errors such as drug interactions; the fact that the statistics are so vague testifies to the problem. The U.S. leads the world in health care spending per capita by a large margin ($4500 vs. $2500 for the runners-up: Germany, Luxembourg, and Switzerland), but its life expectancy ranks 27th, near that of Cuba, which is reported to spend about 1/25th as much per capita. Information technology has made industries such as package delivery, retail, and mutual funds more efficient: can health care benefit from similar gains?
The farther one looks into this issue, the more tangled the questions get. Let me assert at the outset that I believe electronic medical records are a good idea. But for reasons outlined below, IT by itself falls far short of meeting the challenge of rethinking health and health care. Any industry with the emotional freight, economic impact, and cultural significance of medicine can't be analyzed closely in a few paragraphs, but perhaps these ideas might begin discussion in other venues.
1) Definitions
What does the health care system purport to deliver? If longevity is the answer, clearly much less money could be spent to bring U.S. life expectancy closer to Australia's, where people live an average of three years longer. But health means more than years: the phrase "quality of life" hints at the notion that we seek something non-quantifiable from doctors, therapists, nutritionists, and others. At a macro level, no one can assess how well a health care system works because the metrics lack explanatory power: we know, roughly, how much money goes into a hospital, HMO, or even an economic sector, but we don't know much about the outputs.
For example, should health care make us even "better than well"? As the bioethicist Carl Elliott compellingly argues in his book of that name, a substantial part of our investment in medicine, nutrition, and surgery buys enhancement beyond what's naturally possible. Erectile dysfunction pills, steroids, implants, and blood doping are no longer the province of celebrities and world-class athletes. Not only can we not define health at its lower baseline; it's getting harder and harder to say where it stops at the upper bound as well.
Finally, Americans at large don't seem to view death as natural, even though it's one of the very few things that happens to absolutely everyone. Within many outposts of the health care system, death is regarded as a failure of technology, to the point where central lines, respirators, and other interventions are applied to people who are naturally coming to the end of life. This approach of course incurs astronomical costs, but it is a predictable outcome of a heavily technology-driven approach to care.
2) Health care as car repair for people?
Speaking in gross generalizations, U.S. hospitals are not run to deliver health; they're better described as sickness-remediation facilities. The ambiguous position of women who deliver babies demonstrates the primary orientation. Many of the institutional interventions and signals (calling the woman a "patient," for example) are shared with the sickness-remediation side of the house even though birth is not morbid under most circumstances. Some hospitals are turning this contradiction into a marketing opportunity: plushly appointed "birthing centers" have the stated aim of making the new mom a satisfied customer. "I had such a good experience having Max and Ashley at XYZ Medical Center," the intended logic goes, "that I want them taking care of Dad's heart problems."
Understanding health care as sickness-remediation has several corollaries. Doctors are deeply protective of their hard-won cultural authority, which they guard with language, apparel, and other mechanisms, but the parallels between a hospital and a car-repair garage run deep. After Descartes split the mind from the body, medicine followed the ontology of science to divide fields of inquiry -- and presumably repair -- into discrete units.
At teaching hospitals especially, patients frequently report feeling less like a person and more like a sum of sub-systems. Rashes are for dermatology, heart blockages set off a tug-of-war between surgeons and cardiologists, joint pain is orthopedics or maybe endocrinology. Root-cause analysis frequently falls to secondary priority as the patient is reduced to his or her compartmentalized complaints and metrics. Pain is no service's specialty but many patients' primary concern. Systems integration between the sub-specialties often falls to floor nurses and the patient's advocate if he or she can find one. The situation might be different if one is fortunate enough to have access to a hospitalist: a new specialty that addresses the state of being hospitalized, which the numbers show to be more deadly than car crashes. (To restate: something on the order of 100,000 people die in the U.S. every year from preventable medical accidents.)
The division of the patient into sub-systems that map to professional fields has many consequences. Attention focuses on the disease state rather than the path that led to that juncture: preventive care lags far behind crisis management in glamour, funding, and attention. Diabetes provides a current example. Drug companies have poured large sums of money into insulin therapies, a treatment program that can change millions of people's lives. But when public-health authorities try to warn against obesity as a preventive attack on diabetes, soft-drink and other lobbies immediately spring into action.
Finally, western medicine's claim to be evidence-based sits uneasily with the lack of definitive evidence about ultimate outcomes. The practice of performing autopsies in cases where the cause of death is unclear has declined steadily and steeply, to the point where doctors and families typically do not know what killed a sizable population of patients. A University of Michigan study estimated that in almost 50% of autopsied hospital deaths, the patient died of a condition for which he or she was not receiving treatment. It's much the same situation as storeowner John Wanamaker bemoaning that half of his advertising budget was being wasted, but not knowing which half.
3) Following the money
Health care costs money, involves scarcity and surplus, and employs millions of people. As such, it constitutes a market - but one that fails to run under conventional market mechanisms. (For example, excess inventory, in the form of unbooked surgical times, let's say, is neither auctioned to the highest bidder nor put on sale to clear the market.) The parties that pay are rarely the parties whose health is being treated; the parties that deliver care lack detailed cost data and therefore price services only in the loosest sense; and filtering patient preference and the greater good through the lens of for-profit insurers has many repercussions.
Consider a few market-driven sub-optimizations:
-Chief executives at HMOs are rewarded for cost-cutting, which often translates to cuts in hospital reimbursement. Hospitals, meanwhile, are frequently not-for-profit institutions, many of which have been forced to close their doors in the past decade.
-Arrangements to pay for certain kinds of care for the uninsured introduce further costs, and further kinds of costs, into an already complex set of financial flows.
-As Richard Titmuss showed over 30 years ago in The Gift Relationship, markets don't make sense for certain kinds of social goods. In his study, paying for blood donation lowered the amount and quality of blood available for transfusion; more recently, similar paradoxes and ethical issues have arisen regarding tissue and organ donation.
-Insurers prefer to pay for tangible rather than intangible services. Hospitals respond by building labs and imaging centers as opposed to mental health facilities, where services like psychiatric nursing are rarely covered.
-Once they build labs, hospitals want them utilized, so there's further pressure (in addition to litigation-induced defensiveness) for technological evidence-gathering rather than time-consuming medical art such as history-taking and palpation, for which doctors are not reimbursed.
-As a result, conditions with clear diagnoses (like fractures) are treated more favorably in economic terms, and therefore in interventional terms, than conditions such as allergies or neck pain that lack "hard" diagnostics. Once again, the vast number of people with mental health issues are grossly underserved.
-Medical schools can no longer afford for their professors to do unreimbursable things like teach or serve on national standards bodies. The doctors need to bring in grant money to fund research and insurance money for their clinical time. Teaching can be highly uneconomical for all concerned. One reason for a shortage of nurses, meanwhile, is a shortage of nursing professors.
4) Where can IT help?
Information technology has made significant improvements possible in business settings with well-defined, repeatable processes like originating a loan or filling an order. Medicine involves some processes that fit this description, but it also involves a lot of impossible-to-predict scheduling, healing as art rather than science, and institutionalized barriers to communication.
IT is currently used in four broad medical areas: billing and finance, supply chain and logistics, imaging and instrumentation, and patient care. Patient registration is an obvious example of the first; lines and foodservice the second; MRIs, blood tests, and bedside monitoring the third; and physician order entry, patient care notes, and prescription writing the fourth. Each type of automation introduces changes in work habits, incentives, and costs to various parties in the equation.
Information regarding health and information regarding money often follow parallel paths: if I get stitched up after falling on my chin, the insurance company is billed for an emergency department visit and a suture kit at the same time that the hospital logs my visit -- and hopefully flags any known antibiotic allergies. Meanwhile the interests and incentives are frequently anything but parallel: I might want a plastic surgeon to suture my face; the insurer prefers a physician's assistant. From the patient's perspective, systems that interoperate more seamlessly with the HMO may not be a good thing if the result is fewer choices or a perceived drop in the quality of care. On the provider side, the hospital and the plastic surgeon will send separate bills, each hoping for payment but neither coordinating with the other. Bills frequently appear within days, each issuer hoping to get paid first, before the patient notices any errors in the calculation of co-pays or deductibles. The amount of time and money spent administering the current dysfunctional multi-payer system is difficult even to estimate.
Privacy issues are non-trivial. Given that large-scale breaches of personal information are almost daily news, what assurance will patients have that a complex medical system will do a better job shielding privacy than Citigroup or LexisNexis? With genomic predictors of health -- and potential cost for insurance coverage -- around the corner, how will patients' and insurers' claims on that information be reconciled?
A number of services currently let individuals combine personal control and portability of their records. It's easy to see how such an approach may not scale: something as trivial as password resets in corporate computing environments already carries sizeable costs -- now think about managing the sum of past and present patients and employees as a user base with access to the most sensitive information imaginable. With portable devices proliferating, each new path of entry expands both the security perimeter and the cost of defending it: think of teenage hackers trying to find their way to Paris Hilton's medical record rather than her Sidekick.
Hospitals already tend to treat privacy as an inconvenience -- witness the universal use of the ridiculous johnnies, which do more to demean the patient than to improve quality of care. The medical record doesn't even belong to the person whose condition it documents. American data privacy standards, even after HIPAA, lag behind those in the European Union. From such a primitive baseline, getting to a new state of shared accountability, access, and privacy will take far more diplomacy than systems development.
Spending on diagnostic technology currently outpaces patient care IT. Hospitals routinely advertise less confining MRI machines, digital mammography, and 3D echocardiography; it's harder to impress constituencies with, say, effective metadata for patient care notes. (Some computerized record systems merely capture images of handwritten notes with only minimal indexing.) After these usually expensive machines produce their intended results, the process by which diagnosticians and ultimately caregivers use those results is often haphazard: many tests are never consulted or compared to previous results -- particularly if they were generated somewhere else. NIH doesn't just stand for National Institutes of Health; Not Invented Here is also alive and well in hospitals.
Back in the early days of reengineering, when technology and process change were envisioned as a potent one-two punch in the gut of inefficiency, the phrase "don't pave the cowpaths" was frequently used as shorthand. Given that medicine can only be routinized to a certain degree, and given that many structural elements contribute to the current state of affairs, it's useful to recall the old mantra. Without new ways of organizing the vastness of a longitudinal medical record, for example, physicians could easily find themselves buried in a haystack of records, searching for a needle without a magnet. Merely automating a bad process rarely solves any problems, and usually creates big new ones.
Change comes slowly to medicine, and the application of technology depends, here as always, on the incentives for different parties to adopt new ways of doing things. Computerized approaches to caregiving include expert knowledge bases, automated lockouts much like those in commercial aviation, and medical simulators for training students and experienced practitioners alike. Each of these has proven benefits, but only limited deployment. Further benefits could come from well care and preventive medicine, but these areas have proven less amenable to the current style of IT intensification. Until reform efforts such as Leapfrog can address the culture, process, and incentive issues in patient care, increased clinical IT investment will do little to drive breakthrough change in the length and quality of Americans' lives.
Early Indications is the weblog version of a newsletter I've been publishing since 1997. It focuses on emerging technologies and their social implications.
Friday, June 17, 2005
Tuesday, June 07, 2005
May 2005 Early Indications II: Power laws for fun and profit
(shipped May 26, posted at www.guidewiregroup.com, and archived here)
Five years ago, the Internet sector was in the middle of a momentous
slide in market capitalization. Priceline went from nearly $500 a
share to single digits in three quarters. CDnow fell from $23 to
$3.40 in about 9 months ending in March 2000. Corvis, Music Maker,
Dr. Koop - 2000 was a meltdown the likes of which few recreational
investors had ever seen or imagined. Science was invoked to explain
this new world of Internet business.
Bernardo Huberman, then at Xerox PARC, and others found that the
concentration of web traffic departed far from the 80/20 rule of
thumb: as of December 1, 1997, the top 1% of websites accounted for
over 55% of all traffic. This kind of distribution was not new, as it
turned out. A Harvard linguist with the splendid name of George Zipf
counted words and found that a tiny percentage of English words
accounts for a disproportionate share of usage. A Zipf distribution,
plotted on a log-log scale, is a straight line from upper left to
lower right. On a linear scale, it plunges from the top left and then
goes flat for the characteristic long tail of the distribution:
twosies and then onesies occupy most of the x-axis.
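Huberman's finding and Zipf's law can be made concrete in a few lines of code. The sketch below uses synthetic data, not Huberman's 1997 measurements: it assigns the k-th most popular site traffic proportional to 1/k (the classic Zipf exponent) and shows both the extreme concentration at the head and the straight-line signature on a log-log scale.

```python
import numpy as np

# Illustrative Zipf distribution: the k-th ranked website gets traffic
# proportional to 1/k. Normalize so the values are shares of all traffic.
n_sites = 10_000
ranks = np.arange(1, n_sites + 1)
traffic = 1.0 / ranks
traffic /= traffic.sum()

# Share of all traffic captured by the top 1% of sites.
top_1pct = n_sites // 100
share = traffic[:top_1pct].sum()
print(f"top 1% of sites carry {share:.0%} of traffic")  # ~53%

# On a log-log plot the distribution is a straight line:
# log(share) = -1 * log(rank) + const, so a linear fit of the logs
# recovers a slope of about -1.
slope, _ = np.polyfit(np.log(ranks), np.log(traffic), 1)
print(f"log-log slope: {slope:.2f}")  # ~ -1.00
```

Even with the mildest exponent, the top 1% of 10,000 sites carries roughly half of all traffic; the measured web was more skewed still, consistent with the 55%-plus figure above.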
Given such "scientific" logic, investors began to argue that the
Internet was a new kind of market, with high barriers to entry that
made incumbents' positions extremely secure. Michael Mauboussin, then
at CS First Boston and now at Legg Mason, wrote a paper in late 1999
called "Absolute Power." In it he asserted that "power laws . . .
strongly support the view that on-line markets are winner-take-all."
Since that time, Google has challenged Yahoo, weblogs have markedly
eroded online news sites' traffic, and the distinction between
"on-line markets" and plain old markets has become harder to maintain.
Is the Zipf distribution somehow changing? Were power laws wrongly
applied or somehow misunderstood?
Chris Anderson, editor of Wired, has a different reading of the graph
and focuses instead on the long tail. In an article last fall that's
being turned into a book, Anderson explains how a variety of web
businesses have prospered by successfully addressing the very large
number of niches in any given market. Jeff Bezos, for instance,
estimates that 30% of the books Amazon sells aren't in physical
retailers. Unlike Excite, which couldn't make money on the mostly
unique queries that came into the site, Google uses AdWords to sell
almost anything to the very few people who search for something
related to it. As of March, every iTunes song in inventory (that's
over 1 million) had been purchased at least once. Netflix carries far
more inventory than a neighborhood retailer can, and can thus satisfy
any film nut's most esoteric request.
At the same time, producers of distinctive small-market goods (like
weblogs, garage demo CDs, and self-published books) can through a
variety of mechanisms reach a paying public. These mechanisms include
word of mouth, search-driven technologies, and public performance
tie-ins; digital distribution can also change a market's economics.
Thus the news is good for both makers and users, buyers and sellers;
in fact, libertarian commentator Virginia Postrel has written for the
last several years on the virtues of the choice and variety we
currently enjoy.
There's currently a "long tail" fixation in Silicon Valley. Venture
capitalists report seeing a requisite power law slide in nearly any
pitch deck. CEO Eric Schmidt showed a long tail slide at the Google
shareholder meeting. Joe Kraus, formerly of Excite and now at
JotSpot, argues for a long tail in software development, on which
his product of course capitalizes. The term has made it into USA
Today and The Economist. In some ways this feels like the bubble again, for
better and for worse.
At one level, the Internet industry seems to need intense bursts of
buzzword mania: you no longer hear anyone talking about push,
incubators, portals, exchanges, or on-line communities even though
each of these was a projected multi-billion dollar market. The visual
appeal of a Zipf distribution may also lend Anderson's long tail a
quasi-scientific aura that simple terms like "blog," "handheld," or
"broadband" lack. Netflix, Amazon, and Google lacked power law
graphs, I'm pretty certain, in their startup documents and have
managed to thrive regardless. Anderson's own evidence illustrates
what a long way it is from explanation to prediction: showing how some
firms can profitably address niches doesn't prove that a startup will
similarly prosper in an adjacent market. To his credit, he focuses
primarily on entertainment, where digitization is most prevalent.
The recourse to supposed mathematical precision to buttress something
as unscientific as a business plan is not new. Sociologists
investigating networks of people have been overshadowed by physicists
who bring higher math horsepower to the same sets of problems, yet
it's still difficult to understand Friendster's revenue model.
Complex adaptive systems research was very hot in the 90s, following
the path of the now barely visible field of "artificial intelligence."
The problem extends beyond calculus to spreadsheets: much of what
passes for quantitative market research is barely legitimate data. To
be reduced to a single semi-reliable number, the answers on a simple
5-point questionnaire would have to vary at regular intervals, yet
words rarely behave this way. Is "most of the time" eight times out
of ten or 95 times out of 100? Who remembers to count before someone
asks? Purchase intent rarely translates into purchase.
Yet executives make decisions every day based on customer satisfaction
scores, opinion surveys, and focus groups, all of which reduce noisy
variation to apparently clinical precision.
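The equal-interval problem can be seen in miniature. In this hypothetical sketch (invented response data and mappings, not from any real survey), the same two sets of answers rank in opposite order depending on whether the words are scored as equal numeric steps or as the rough frequencies they might actually denote.

```python
# Hypothetical data: two products rated on the same 5-point verbal scale.
product_a = ["always", "always", "always", "sometimes"]
product_b = ["most of the time"] * 4

# Mapping 1: conventional equal-interval scoring, 1 through 5.
equal_interval = {"never": 1, "rarely": 2, "sometimes": 3,
                  "most of the time": 4, "always": 5}

# Mapping 2: score the words as the frequencies they might plausibly
# denote ("most of the time" as 95 times out of 100, and so on).
frequency_based = {"never": 0.0, "rarely": 0.1, "sometimes": 0.5,
                   "most of the time": 0.95, "always": 1.0}

def mean_score(answers, mapping):
    return sum(mapping[a] for a in answers) / len(answers)

a_eq = mean_score(product_a, equal_interval)
b_eq = mean_score(product_b, equal_interval)
a_fr = mean_score(product_a, frequency_based)
b_fr = mean_score(product_b, frequency_based)

print(f"equal-interval: A={a_eq:.2f}  B={b_eq:.2f}")  # A=4.50  B=4.00 -> A wins
print(f"frequency:      A={a_fr:.2f}  B={b_fr:.2f}")  # A=0.88  B=0.95 -> B wins
```

Neither mapping is more defensible than the other, yet they reverse the ranking; averaging the words into a single score simply hides the choice.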
Make no mistake: Chris Anderson has identified something important and
widespread when he groups superficially dissimilar businesses to spot
their shared reliance on the medium's powerful capability for matching
big, sparse populations to things they want and will pay for.
Returning to our opening question with regard to what's changed since
2000, the necessary preconditions of successful long tail models
include large populations and strong search, a relatively new
capability. What will disrupt today's incumbents by 2010? New kinds
of batteries? Flexible displays? Enforced shutdown of the
peer-to-peer networks, possibly by a massive worm/virus of unknown
origin?
It's also important to see the both/and: just because quirky tastes
can constitute a profitable audience in new ways does not preclude
hits like The Da Vinci Code, let's say, from being major news. And
power laws still apply to traffic (and presumably revenue): Google and
Amazon profitably handle massive volumes of site visits whereas Real's
download service, about which Anderson rhapsodizes, still loses money.
At the end of the day, no algorithm in the world can negate the most
powerful "law" of business, that of cash flow.