Wednesday, September 30, 2015

Early Indications September 2015: The MBA at 100 (or so)

It’s mostly coincidence, but the MBA degree is at something of a crossroads entering its second century. A short list of big questions might begin as follows:

-What is the place (pun intended) of a two-year resident MBA in a global, Internet era?

-What is the market need for general managers versus specialists in finance, supply chains, accounting, or HR, for example? How does market supply align with this need?

-What is the cost and revenue structure for an MBA program outside the elite tier? 

-How can business degrees prepare graduates for a highly dynamic, uncertain commercial environment?

-What do and should MBAs know about the regulatory environments in which their businesses are situated?

-What is and should be the relationship between managerial scholarship and commercial practice?

-What is the relationship of functional silos to modern business practice? Marketers need to know a fair bit of technology to do mobile ad targeting, for example, as do equities traders in the age of algorithmic bots. Navigating the aforementioned regulatory landscape, meanwhile, draws on an entirely different range of skills generally not covered in management or negotiation classes. Is the MBA/JD the degree of choice here?

-How can and should U.S. business schools teach ethics, which are highly culture-specific, to students from many home countries who will likely work in still another country/culture soon after graduation?

2015 is not happening in a vacuum, of course. The first graduate school of business, Tuck, offered the Master of Science in Commerce after its founding in 1900. (Recall that the functionally organized corporation was at the time a fairly recent phenomenon: railroads split ownership from management in part because of the huge capital requirements, and the vast distances involved meant that managers often lacked direct visibility of workers. Thus, in broad strokes, the late 19th century began the age of policies and procedures, and the idea of a middle manager.) Harvard launched its MBA program eight years after Dartmouth, with significant help from such scientific management exponents as Frederick Winslow Taylor. Enrollments surged: Harvard alone grew from 80 students to nearly 1100 in 22 years. Unsurprisingly, other universities began offering the degree: Northwestern in 1920, Michigan in 1924, Stanford in the late 1920s, Chicago in 1935, UNC in 1952. According to the office of the university archivist, Penn State began offering the MBA in 1959.

1959 was also the year two different reports, commissioned by industrialists’ foundations — Ford and Carnegie — reoriented American graduate business education. The more strongly worded of the two, by Robert Aaron Gordon and James Edwin Howell, systematically attacked the entire institution of the MBA as it then stood: students were weak, curricula were sloppily constructed, and faculty taught with little academic rigor at many schools.

The Gordon-Howell report quickly influenced accreditation and related discussions. New courses on many campuses covering strategy were at the forefront of a larger emphasis on quantitative methods and theory. What was not well addressed, according to many critics, was the practice of management itself. Balancing theory and practice has never been simple in business — as compared to medicine, companies do not conduct clinical trials parallel to those of drugs or procedures. 

Entrepreneurship has proven particularly hard to teach: on any list of great business-launchers, few hold the MBA. None of the following hold the degree: Paul Allen, Jeff Bezos, Sergey Brin, Warren Buffett, Michael Dell, Larry Ellison, Bill Gates, Jim Goodnight (SAS Institute), Bill Hewlett and David Packard, Steve Jobs, Elon Musk, Ken Olsen (Digital Equipment), Pierre Omidyar (eBay), Larry Page, Sam Walton, and Mark Zuckerberg.

MBAs can of course do quite well for themselves, as Michael Bloomberg and Nike’s Phil Knight (a 4:10 miler at Oregon 50 years ago) prove. Still, there appears to be a negative correlation between academic achievement, particularly in the MBA, and entrepreneurial accomplishment. Ten of the sixteen richest self-made people in the world did not finish college or dropped out of grad school: Gates, Ellison, Zuckerberg, Sheldon Adelson (casinos), Page and Brin, Carl Icahn, Steve Ballmer, Harold Hamm (oil and gas at Continental Resources), and Dell.

Apart from not being able to produce mega-entrepreneurs, what of the more real-world challenges to MBA programs noted above? In reverse order, a few notes on each:

-Ethics has never been an easy topic to include in a business curriculum, but as the world’s top schools continue to get more global, trying to say anything stringent encounters the reality of cultural diversity. Sanctions against bribery, greed, ostentation, and money-lending (with interest), along with constraints on the role of women and ethnic minorities, are impossible to align; even the U.S., Canada, and England do some things very differently despite many similarities. The ethical lapses of the early 2000s — at Waste Management, Enron, Adelphia, and HealthSouth, among many others — put some focus on business schools (along with accounting firms) as agents of better behavior. In light of recent scandals at Toyota, Volkswagen, and GM, to name only the automakers, the challenge for MBA curricula does not appear to be any less daunting than in the crisis years of 2002 or thereabouts.

-Teaching students to work across functions and to deal with regulatory bounds and procedures continues to stymie MBA programs. We teach an integrative consulting-project exercise in the 4th semester; Harvard teaches something similar across the whole first year. Numerous programs have moved the project-based course back and forth, with equally compelling logic for early and late inclusion. Seeing how messy real problems are prepares students for the functional courses, while having some base of knowledge before being turned loose on a client also has merit. No one approach has emerged as a winner from the many variations being used.

-Managerial theory and practice remain difficult both to do and to convey more than a half-century after Gordon and Howell. Scholarship that gets published tends not to come from practitioners (Clayton Christensen is a notable exception, having run a company before earning his doctorate at Harvard), while managers and executives remain understandably wary of controlled experiments on live business units. Professors’ contributions to the semi-academic journals that practicing businesspeople might read — Harvard Business Review, Sloan Management Review, California Management Review, and the like — usually do not count heavily (if at all) toward tenure or promotion. For their part, many managers tell me they find little of value in the A-list journals held in academic esteem. Suffice it to say there remain many opportunities to improve the dialogue between the factory or office and the academy. 

-How can MBA programs teach resiliency, creativity, willingness to challenge convention, and the other traits required in a particularly turbulent business landscape? Marc Benioff, the CEO of Salesforce, is far from a disinterested observer, but it is difficult to disagree with his recent contention that essentially every industry is in the midst of or about to confront fundamental change. Whether from fracking, Uber, mobile platforms, Amazon, or demographics, every business (and governmental) unit I can see is hard-pressed to do things “the way we’ve always done it around here.”

An entrepreneur (whose masters was in arts management) told me a cautionary tale back in the dot-com boom. “We’re a startup,” he said. “Strategy for us isn’t chess, it’s poker: we have to bluff because we can’t compete with the big guys at scale, with equal playing pieces on a defined board with agreed-upon rules. We faked features in the demo we couldn’t deliver. We have had months where we couldn’t make payroll. We’ve reinvented our business model three times. That’s the reality. We hired a bunch of top-school MBAs to try to compete better, and had to let them all go. Why? These men and women all had good grades in high school. They cleared the next hurdle and got into a good college, then positioned themselves to deliver the right answers, earn As, and get into Ivy League b-schools. There it was more of the same: high achievers got the top internships at the I-banks and consulting firms. They’ve always been rewarded for getting the right answer. Now we have all this chaos and instability. None of them can handle it; they keep wanting to know the answer and there isn’t one.”

Fifteen years later, I can’t see that the incentive structure has changed all that much. Doing well in controlled environments seems to be poor preparation for radical reinvention, new rules, unconventional insurgencies, and broken profit models.

-This atmosphere of disruption is affecting MBA programs themselves. Getting the costs, revenues, and rankings to acceptable levels has never been more challenging. Last year Wake Forest shut down its two-year resident MBA program, ranked in the top 50 in the US, as did Thunderbird, a pioneer in the internationally-oriented masters. In the past 5 years, however, 30 new schools earned AACSB accreditation in the U.S.; 96 others had joined the club in the preceding decade. Thus competition for students, faculty, and resources is intense, and the international nature of the MBA means that foreign competition is accelerating even faster than those 126 newly-accredited U.S. institutions would suggest: Poets & Quants states in a recent article that there are 50% more MBAs being earned today than ten years ago, so filling those classes is a challenging job. Marketing efforts to reach prospective MBA students have become something of an arms race, and many schools are cheered by a reported uptick in applications. Unfortunately, nobody can tell whether the rise reflects more applications per student or more students jumping into the pool. Amid both increased competition and rising costs (health care continues to outpace other expenses), increasing tuition is a non-starter in most circumstances, so schools are confronting the need for creative alternatives if they are to avoid the approach taken at Wake Forest.

-An MBA is by definition something of a generalist, even with a curricular focus area in one or two functions. Meanwhile specialized business masters, in finance, accounting, marketing, or whatever, are on the rise. I have undergrads ask me about the relative merits, and each has its place. For many mid-career professionals, having an alternative to the generalist approach is attractive. Our supply-chain masters students, for example, never take courses in HR, real estate, finance, or general management: all the courses presuppose one business area rather than a variety. With years or decades already invested in that function, these students did the career calculus and concluded that the generalist approach did not make sense for them. They are far from alone, given the national trends.

-Thus we end where we began: what is the place of a 2-year, resident MBA? Each of those variables is getting interesting. Duration: INSEAD offers a 10-month program; one-year options are not uncommon. Locus: On-line MBAs are being offered all over the world, executive (weekend) MBAs allow students to keep their jobs and their lodging stable, and hybrids like the program at Carnegie Mellon combine multiple delivery methods. Content: As we have seen, different masters degrees in business are being offered in response to market needs, including for more depth of coverage: if one considers the complexity of contemporary finance, or supply chains, or accounting, having only a handful of courses within a generalist curriculum may not provide adequate preparation for the job’s primary duties, while the breadth of coverage has minimal compensatory value.

Numerous observers, including The Economist, predict major changes to the MBA market, particularly outside the top 20 or so schools. Junior faculty joining the ranks today will be in for a wild ride in the coming decades. As with so many other areas, as Ray Kurzweil argues, the rate of change is accelerating: the world is changing faster and faster, and business education will likely change more in the next 20 years than in its first century. Happy 100th birthday, indeed.

Tuesday, July 28, 2015

Early Indications July 2015: Crossover Points

I recently read an enjoyable study of the airport as cultural icon (Alastair Gordon’s Naked Airport; hat-tip to @fmbutt) and got to thinking about how fast new technologies displace older ones. Based on a small sample, it appears that truly transformative technologies achieve a kind of momentum with regard to adoption: big changes happen rapidly, across multiple domains. After looking at a few examples, we can speculate about what technologies might be the next to be surpassed.

Gordon makes uncited references to air travel: in 1955, more Americans were traveling by air than by rail, while in 1956, more Americans crossed the ocean by plane than by ship. (I tried to find the crossover point for automobile inter-city passenger-miles overtaking those of railroads, but can only infer that it happened some time in the 1920s.) This transition from rail to air was exceptionally rapid, given that only 10 years before, rail was at its all-time peak and air travel was severely restricted by the war.

Moving into another domain, I was surprised to learn that in 1983, LP album sales were surpassed not by the CD but by . . . cassette tapes; CDs did not surpass cassettes for another 10 years. In the digital age, the album is no longer the main unit of measurement, nor is purchasing the only way to obtain songs. This shift in bundle size is also occurring in news media as we speak: someone asked me the other day what newspaper(s) I read, and it struck me as odd: I can’t remember when I last had a physical paper land on my porch. That’s the other thing about these crossover points: they usually happen quietly and are not well remembered.

The smartphone is taking over multiple categories. Once again, we see a new unit of measurement: in the film camera age, people developed rolls of film, then perhaps ordered reprints for sharing. (That quiet transition again: can you remember the last time you took film to the drugstore or camera shop?) Now the unit of measurement is the individual image. Interestingly, digital still cameras surpassed film cameras in 2004, but not until 2007 were there more prints made from digital than from film. Since 2007, digital prints have steadily declined. Furthermore, digital cameras themselves have been replaced by cameraphones: only 80 million point-and-shoot digital cameras shipped in 2013, and that number is dropping to well under 50 million this year, while smartphone sales are on target for about 1.5 billion units this year.

Standalone GPS units, MP3 players, and video camcorders (with GoPro being a notable exception, albeit in relatively tiny numbers) are other casualties of the smartphone boom. Landline-only houses were surpassed by cellular-only in 2009. Smartphones surpassed PC sales back in 2011.

The implications for employment are tremendous: Kodak employed 145,000 people in 1988; Facebook, a major player in personal image-sharing, has a headcount of about 9,000, most obviously not working on photos. Snapchat has 200 employees at a service that shares 8800 images EVERY SECOND, a number Kodak could not have conceived of. When these technology shifts occur, jobs are lost at a greater rate than they are gained. Railroads employed more than 1.5 million Americans in 1947; it’s now about a sixth of that. U.S. airlines, meanwhile, employed a peak of about 600,000 workers in the tech boom of 2000, well less than half that of the railroads, in a more populous country with more people traveling.

Let’s look at the smartphone. Given globalization, what used to be U.S.-centric telecom numbers no longer make for clean comparisons. AT&T employed around a million people at its peak; right now AT&T plus Verizon (whose total includes cable TV and other operations) employ roughly 425,000 people. Apple’s 2015 headcount of 63,000 includes about 35,000 retail employees and about 3,000 temps or contractors. Samsung is a major player in world telco matters, but figuring out how many of its 275,000 employees can count toward a comparison vs AT&T is impossible. All told, more people have more phones than they did in 1985, but employment in the phone industry looks to be lower, and lower-paying, given how many retail employees now enter the equation.

Coming soon, we will see major changes to ad-supported industries. Already newspaper revenues are in serious decline. Digital ad revenue is already higher than newspaper, magazine, and billboard ad revenue combined. “Cord cutting” is a very big deal, with clear demographic delineations: a 70-year-old is likely to read a paper newspaper and watch the big-4 network evening news; a 20-year-old is highly unlikely to do either. Comcast announced in May that it has more Internet-only subscribers than cable-TV subscribers, and the unbundling of cable networks into smartphone/tablet apps such as HBO Go will likely accelerate.

In personal transportation, there could be two major upheavals to the 125-year-old internal combustion regime: electric cars and self-driving vehicles. Obviously Tesla is already in production in regard to the former transition, but the smartphone example, along with such factors as Moore’s law, cloud computing, and an aging Western-world demographic could fuel rapid growth in autonomous vehicles. In regard to cloud computing, for example, every Google car is as “smart” as the smartest one as of tonight’s software upgrade. Given the company’s demonstrated expertise in A/B testing, there’s no reason not to expect that competing models, algorithms, and human tweaks will be tested in real-world competitions and pushed out to the fleet upon demonstrated proof of superior fitness. 
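For the curious, here is a minimal, purely illustrative sketch (in Python) of what that kind of fleet-wide promotion might look like; the variant names, the fitness metric, and the promotion rule are my own assumptions, not a description of Google's actual process.

import random

# Toy sketch: measure two driving-software variants in the field and push
# the better one to every car. All names and numbers are illustrative.

def measure_fitness(variant, n_trials=1000):
    """Return a toy fitness score (higher is better) for a software variant."""
    # Stand-in for real telemetry such as disengagements per 1,000 miles.
    base = {"model_A": 0.92, "model_B": 0.95}[variant]
    return sum(random.random() < base for _ in range(n_trials)) / n_trials

def promote_winner(fleet, variants):
    scores = {v: measure_fitness(v) for v in variants}
    winner = max(scores, key=scores.get)
    for car in fleet:
        car["software"] = winner  # every car runs tonight's best build
    return winner, scores

fleet = [{"id": i, "software": "model_A"} for i in range(100)]
winner, scores = promote_winner(fleet, ["model_A", "model_B"])
print(f"Promoted {winner} to {len(fleet)} cars; scores: {scores}")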

There are many moving parts here: miniaturization, demographics, the rise of service industries relative to manufacturing (including cloud computing), growing returns to capital rather than labor, and so on. The history of technology substitutions and related innovations does have some clear lessons, however: predicting future job growth is perilous (in 1999, the US Bureau of Labor Statistics was bullish on . . . desktop publishers); infrastructure takes decades while some of these cycles (Android OS releases) run in months; and the opportunities in such areas as robotics, AI, and health care are enormous. The glass may be half-full rather than half-empty, but in more and more cases, people are looking at entirely different scenarios: Kodak vs Snapchat, as it were. Whoever the next US president turns out to be will, I believe, face the reality of this split, perhaps in dramatic fashion.

Sunday, May 31, 2015

Early Indications May 2015: Centaurs

“We expect a human-robot symbiosis in which it will be natural to see cooperation between robots and humans on both simple and complex tasks.”

-George Bekey, University of Southern California, 2005

“The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.”

-Erik Brynjolfsson and Andrew McAfee, MIT, 2014

Who performs better: a computer or a human?

The short answer is obvious: it depends on the task. Computers are now unquestionably better at chess than even a grandmaster-grade human player, and the highly visible triumph of IBM’s Watson over the best Jeopardy players shows how artificial intelligence can be applied to a linguistically rich trivia contest.

What might come next? Just this month, four of the top 10 poker players in the world played a marathon against a Carnegie Mellon computer. Given the complexities of no-limit Texas Hold’em, the result was not a Jeopardy-like rout, but the statistical tie elated the researchers. Each player played 20,000 hands; a cumulative $170 million in chips was bet over the two-week competition. In the end, the humans came out less than $1 million ahead — even though the computer did things like betting $19,000 to win a $700 pot.

Physical-world tasks are farther behind pure thought. In soccer competitions, robots are still decades away from impersonating, much less beating, human championship teams. Physical machine movement has yet to follow anything like Moore’s law, and team play is harder to model than individual contests. As these examples show, the race between people and computers/robots plays out differently, depending on the tasks being contested.

The long answer to the "who's better" question is emerging: a team of both. We will see the origin of the term “centaurs” presently, but I think this is going to be the most amazing domain, one in which each party does what it does best. We are seeing that teams of humans AND robots outperform either humans OR robots. Here are four domains in which progress is being made more rapidly than might be widely understood.

1) Audi has teamed with Stanford’s self-driving car lab to develop a TT that can beat a club-level human driver. There have apparently been no head-to-head races yet, so human adrenaline has not played a role, nor have racing tactics come into play; the car simply follows a pre-programmed line and parameters around the course. The centaurs are well developed here: stability control, anti-lock brakes, and sophisticated all-wheel-drive control systems all digitally amplify the skill of a human driver, so finding a "purely" human-driven car is less than straightforward.

Earlier this month Mercedes showed a driverless truck that can operate on public roads but still needs human drivers for navigating the start and finish of a trip as well as any diversions from clear, open highway such as snow covering lane lines, police officers directing traffic, or construction areas. It’s early, but eventually might the analogue of a “harbor pilot” carry over from sea to land?

2) The Internet is awash in images, some of them incredibly beautiful. Researchers at Yahoo Labs and the University of Barcelona have taught an algorithm to trawl through image databases and find beautiful but unpopular (underappreciated) images, using the results of training sessions with human “votes.” (A toy version of this filtering step is sketched below.)

As The Economist recently noted, the process of machine learning is itself undergoing rapid improvement, in part through the process of “deep learning” as developed by the giant web businesses with both massive data and effectively unlimited computing resources. Google and Facebook are familiar names on their list; Baidu is a newer entrant into the field, having made some high-profile hires.
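To make the image example concrete, here is a toy Python sketch of that final filtering step: take beauty scores from a model trained on human votes, then surface the images that score high but have drawn little attention. The field names, thresholds, and sample data are invented for illustration and do not reproduce the Yahoo Labs / University of Barcelona method.

# Hypothetical data: beauty scores come from a model trained on human votes.
images = [
    {"id": "a", "predicted_beauty": 0.91, "views": 120},
    {"id": "b", "predicted_beauty": 0.88, "views": 250000},
    {"id": "c", "predicted_beauty": 0.95, "views": 340},
    {"id": "d", "predicted_beauty": 0.40, "views": 90},
]

def underappreciated(images, beauty_floor=0.85, view_ceiling=1000):
    """Beautiful (per the model) yet unpopular (few views)."""
    hits = [img for img in images
            if img["predicted_beauty"] >= beauty_floor
            and img["views"] <= view_ceiling]
    return sorted(hits, key=lambda img: img["predicted_beauty"], reverse=True)

for img in underappreciated(images):
    print(img["id"], img["predicted_beauty"], img["views"])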

3) Chess has never been the same since Deep Blue defeated Garry Kasparov, in part because of a software bug that led the human to infer that the machine was substantially smarter than he was, rather than allowing for the possibility that it was a dumb move. Since about 2013, teams of average players and good software have been able to defeat both grandmaster humans and computers. This type of match is where the "centaur" terminology first took hold.

4) Exoskeletons are common in Hollywood sci-fi, but robots that encase a human body and amplify its capabilities are coming into use in several scenarios:

-Rehabilitation for stroke patients, amputees, and paralytics, among other populations

-DARPA wants soldiers to be able to march or run longer, with less fatigue

-In military and other similar scenarios, able-bodied humans can be augmented to increase their lifting capacity, for example

-The Da Vinci surgical robot is a specialized exoskeleton of a sort, extending a doctor’s finger manipulations into more precise movements in the surgical field.

One big challenge for all of these efforts is in making the power source light enough to work at human scale. In warehouses, to take a very rough approximation, a forklift truck typically weighs 1.6 to 2 times the load it is rated to carry. If a human is intended to carry 200 additional pounds, that puts the exoskeleton in the 400-lb range, unloaded, so the whole package would be about 750 lb. Lowering the battery weight is the quickest way to shrink the total assembly, but physics is tough to cheat: a lot of battery power would be expended in carrying the battery, and carrying a frame sufficiently robust to support the battery.
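For what it is worth, the back-of-the-envelope arithmetic runs as follows (the forklift ratio is the rough approximation above; the wearer's body weight is my own added assumption):

# Rough arithmetic only; the ratio and body weight are approximations.
payload_lb = 200        # extra load the human is meant to carry
structure_ratio = 2.0   # forklift rule of thumb: 1.6 to 2x the rated load
human_lb = 150          # assumed wearer body weight (an added assumption)

exoskeleton_lb = payload_lb * structure_ratio       # roughly 400 lb, unloaded
total_lb = exoskeleton_lb + payload_lb + human_lb   # roughly 750 lb in motion

print(f"Exoskeleton: ~{exoskeleton_lb:.0f} lb; whole package: ~{total_lb:.0f} lb")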

It will bear watching to see how roboticists and computer scientists design the cyber side of the centaur, optimizing around human strengths that might be expressed in unpredictable ways. Similarly, training a human to leave part of the task to a machine, and not to overthink the transaction, might be tricky in certain situations. In others (traction control on the car for example), people are already augmented and don’t even realize it.

At the same time, centaurs will have to deal with the infinite supply of human stupidity: what will self-driving cars do when a drunk driver is headed the wrong way on a divided highway? Wall Street is one big centaur, as the recent charges stemming from the 2010 flash crash reveal: a day trader in England apparently spoofed enough orders — manually rather than algorithmically — that programmatic trading bots reacted in unstable, unpredictable ways. The gambit seems to have worked: the day trader (who lived with his parents) made $40 million over four years. The point here is not what Navinder Singh Sarao did or did not do, or when other actors in the flash crash might be identified, but simply that the interactions between clever (or less than clever) people and computerized entities will be a most complicated territory for the coming decades.

Once you start seeing the world this way, the possibilities expand far beyond websites, apps, or algorithms — there’s so much human work that can be done better. Consider travel: I would love to have a computer assistant work with me to book a trip. I have several free weekends, let’s say, and want to know the best, cheapest trip I should take. Right now it’s possible to spend hours looking at maps, air fares, hotel rates, weather predictions, and events calendars. A computer can’t tell me what I want — my preferences are dynamic, conditional, personal, and fickle — but right now the computer can’t really do a great job of letting me discover what I want either.

I have a suspicion some of these limitations are far from being solved. At the same time, whether it’s in the realm of computer-controlled tools (scalpels or lathes), transportation (personal drones are worth a book all by themselves), or human augmentation, the various tandems of human and computing capabilities will have far-reaching impact sooner than most anyone expects.

Wednesday, April 29, 2015

Early Indications April 2015 Review Essay: Being Mortal by Atul Gawande

Let me begin by dispensing with any pretext of objectivity: I think Atul Gawande, a surgeon at a Harvard teaching hospital who writes for The New Yorker, is a national treasure. Complications may be the best first book of our generation; Better is brilliant. We have personal parallels: both of us grew up in the Midwest and each named a son for the greatest physician-novelist of the 20th century. He teaches and practices at the hospital where my twins were born back in my Boston days.

Being Mortal is a sobering book. I had to read it in small doses in part to savor its richness but in larger measure to cope with the existential finality it addresses so beautifully and concretely. To the Amazon reviewers complaining that it’s based on anecdotes, let me say simply, they’re not anecdotes, they’re parables. There’s a difference. Those parables made me face my own life’s end in ways nothing else ever has.

Given that the scope of the book is broad and nuanced, I have nothing to gain by attempting to summarize it. Instead, I want to look closely at one piece of his wisdom, that regarding the Hard Conversations. Physicians aren’t trained, he states, to guide patients into death; dying is taken not as natural but as a failure. Given both a cultural reticence to see death as part of life and the readily litigious context of modern U.S. medicine, doctors tend to reach deep into the armamentarium of ventilators, central lines, kilobuck antibiotics, dialysis, and other tools near the end of life. Thus the family often can say “the doctor did everything she could,” rather than “Dad went out peacefully, surrounded by his loved ones.”

Gawande gives a great example of the alternative by recounting the story of his father’s end of life passage. Based on a conversation with a bioethicist who had just watched her own father die, Gawande asks his father frank questions about tradeoffs, about limits, about fears. One person might want to get to a family milestone (a grandchild’s wedding, say) and will tolerate high levels of pain in that pursuit; another can bear roaring tinnitus or deafness but is terrified of the implications of an ostomy bag; a third wants to be remembered as cogent rather than as a narcotized, slurring shell of her former self.

The point here is an important one: medical technology has cured old ways of dying but relocated many deaths to high-tech hospital settings. Hospitals employ doctors and technicians who are expert in life-extending treatments more than in guiding the hard conversations. Duration is taken as the relevant yardstick by default; assessing quality, a different way to judge outcomes, takes time and skill. In one case, Gawande pins down one of his patients’ oncologists, who admits that the best-case scenario after a brutal chemotherapy regime is measured in months: the same prospect as with palliative care, and not the years the family and patient had been hopefully assuming. The path toward one’s demise is too often governed by what drugs and machines can do rather than what the patient and the family want.

This paradox reminds me of another Boston conversation, this one originating at MIT rather than at Harvard. The psychologist Sherry Turkle’s most recent book, Alone Together, asserts that modern communications technologies have done their job too well: millennials, and many people older than they are, have come to expect human gratification from a tweet, a like, a text, often more than from real people in real proximity. The absence of these digital stimuli — quiet — is painful and to be avoided, she finds; people have lost the ability to be alone with their thoughts. Further, Facebook profiles, Twitter feeds, Pinterest boards, Instagram portfolios, and the other billboards we erect are carefully curated, to use the modern term of art. Thus we can control the self the world sees and interacts with, making the comparatively naked conventional social self more vulnerable and less practiced in the “messy bits” of human interaction, as she calls them.

In both of these scenarios, modern technologies — ventilators and pharmaceuticals in the former case, smartphones in the latter — have become so powerful that they rather than their users shape the tenor and often content of the debate: rather than ask “what do we want?” and use the technologies to get there, we take the limits of the technology as our boundaries and push up against that instead. In both of these instances, the problem is that modern medicines, computing, and sensors exceed human scale: no human can last long on incredibly potent modern chemotherapy poisons, nor can a person be “friends” with 5,000 people 24 hours a day.

What then are the resources for the conversations we should be having? The professor in me wants to say, “the great intellectual traditions.” Indeed, Gawande cites Tolstoy on p. 1 and Plato much later. The problem is that in the U.S. and elsewhere, college as a time for introducing and possibly pondering the big questions is out of fashion right now. In public universities especially, other agendas are in play.

In Florida, governor Rick Scott tried to make tuition for literature, history, and philosophy majors more expensive than engineering or biotechnology, notwithstanding the cost differences in the respective professoriates and infrastructure. Florida is not alone: here at Penn State, a committee was charged with updating the general education curriculum (that includes the essential ideas everyone should encounter, regardless of major) and the task is turning out to be more difficult than expected: the deadline has been extended since the idea was proposed five years ago. To assess whether a Penn State education prepares people to ask “what is a good society?” or “what is a good way to live one’s life?” you can see the committee’s report here. The principles guiding the effort have evolved and can be seen here.

Though I doubt he realizes it, Gov. Scott embodies the paradox. America’s society and economy value the contributions of engineers and programmers more than those of marketing assistants, retail managers, school teachers, or social service providers — the landing spots for humanities and social science undergrads. Those makers of machines and software have done amazing things, but the state of the academic humanities does little to inspire confidence that college courses in English or philosophy will teach young adults how to form healthy personal selves and relationships in a digital social context (can Aristotle help cure Facebook envy?), and to help their elders die well. Like it or not, Gawande’s Tolstoy is more than ever an intellectual luxury good rather than the staple of a balanced diet. Thus college and the wisdom of the past are less of a resource than some of us might hope.

What Gawande calls on us to do — beginning with doctors and patients, then patients and families — is the hardest thing: to listen, including to our deepest selves, and to talk honestly. What do we value? What makes us frightened? How do we reconcile ourselves to family differences and breakages in our final days? To watch mom or granddad die, and to help listen to what they really want, is both terrifically hard and a great gift. That Gawande has jump-started that conversation not for a handful but for thousands of people makes him the closest thing to a secular saint I have ever witnessed.

Tuesday, March 31, 2015

Early Indications March 2015

After a busy few months away, the newsletter returns with a collection of news and notes.

1) My long-ish blog post on Uber, Airbnb, and regulation as competitive barrier to entry was just posted today on The Conversation, a foundation-funded collection of various informed points of view.

2) I am delighted to announce that MIT Press has me under contract to deliver a book manuscript on robots and robotics for 2016 publication.

3) The reaction against Indiana governor Mike Pence's signing of the "religious freedom" legislation has been fascinating to watch, in part because I grew up in the state. One analysis suggested that the polarization of media has led to "echo chambers" on both left and right: if you listen only to the cheerleaders for your side, the reaction of what used to be called "the silent majority" can be a blindside smackdown. Pence's complete lack of articulate answers to the broader media (most visibly George Stephanopoulos) suggests he may have little idea of how non-social-conservatives outside Indiana see the world.

Lest this viewpoint appear partisan, Hillary Clinton's stonewalling of the archival process suggests a similar blind spot. One difference is that she is much more experienced in handling the media than Pence; another is that the integrity of the historical record matters less to most people than the prospect of Aunt Peg and her partner Allison getting turned away from a hotel. Gay and lesbian rights have become personalized in a way that records retention has not: an overwhelming majority of Americans knows a gay or lesbian family member or colleague. Very few of us can even name an archivist or historian.

Once that personal association sensitizes people to an issue, social media provides a ready environment for expressed outrage. The power of the hashtag allows individuals to feel like they're part of what Lawrence Goodwyn referred to as "movement culture" when he discussed the civil rights protests of the 1960s. There, the options were to be on site or watch on TV; now, one can be physically remote from the protest yet feel active solidarity. The tidal wave of #boycottIndiana could not have happened in a TV-driven media environment, and I'm sure both parties' 2016 presidential nominees will remember the episode.

4) Amazon remains relentless in its pace of innovation. The drone delivery system, effectively grounded by current FAA rules, is being tested in Canada (a country where Target couldn't operate profitably). The Echo AI appliance is shipping and changes household behavior in ways I will examine in a forthcoming letter. Today, Amazon announced "impulse buy" devices called Dash buttons you can affix to the storage spaces for household items like bleach or paper towels. These are another small but discernible step in the march toward the "smart" house. Yesterday Amazon launched a listing of vetted home services providers ("from plumbers to herders" in the words of one headline), once again connecting the physical and virtual worlds unlike any other company.

At the same time, Amazon quietly stopped its mobile wallet efforts, which unlike the AT&T/Verizon effort had the benefit of not sharing a name with a Middle Eastern terror group. The rapid warehouse buildout appears to be continuing, as does the slow rollout of home grocery delivery. In short, the company that has consistently zigged while others zagged (or stood still) appears to be moving full speed ahead to continue launching new initiatives that challenge conventional wisdom in field after field.

Thursday, October 30, 2014

Early Indications October 2014: Who’s Watching?

I didn’t go looking for this particular constellation of ideas, but several good pieces got me connecting the dots, and this month’s letter is an effort to spell out some thoughts on surveillance.

1) The Economist published one of its special reports on September 13 on online advertising. Entitled “Little Brother,” the report argues that mobile devices combined with social networks are providing advertisers (and, more importantly, a complex ecosystem of trackers, brokers, and aggregators illustrated in Luma Partners’ now-famous eye-chart slides) with unprecedented targeting information. One prominently quoted survey asserts that marketers have seen more change in the past two years than in the previous 50. Among the biggest of these shifts: programmatic ad buying now works much like algorithmic trading on Wall Street, with automated ad bidding and fulfillment occurring in the 150 milliseconds between website arrival and page load on the consumer device.

[As I type this, Facebook announced that the firm made $3.2 billion in one quarter, mostly from ads, nearly $2 billion of it from mobile.]

Given that surveillance pays dividends in the form of more precise targeting — one broker sells a segment called “burdened by debt: small-town singles” — it is no surprise that literally hundreds of companies are harvesting user information to fuel the bidding process: online ad inventory is effectively infinite, so user information is the scarce commodity and thus valued. This marks a radical reversal from the days of broadcast media, when audience aggregators such as NBC or the New York Times sold ad availability that was constrained by time or space. Thus the scarcity has shifted from publishers to ad brokers who possess the targeting information gleaned from Facebook, GPS, Twitter, Google searches, etc. Oh, and anyone who does even rudimentary research on the supposedly “anonymous” nature of this data knows it isn’t, really: Ed Felten, a respected computer scientist at Princeton, and others have repeatedly shown how easy de-anonymization is. (Here’s one widely cited example.)
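To make the mechanics a bit more concrete, here is a toy Python sketch of how a programmatic auction might price a single impression from audience segments like the one just mentioned; the segment names, bids, and second-price rule are illustrative assumptions, not any exchange's actual protocol.

# Toy second-price auction for one ad impression. Everything here is invented
# for illustration; real exchanges involve many more parties and constraints.

user_segments = {"burdened_by_debt", "small_town", "single"}

bidders = {
    "debt_consolidator": {"wants": {"burdened_by_debt"}, "base_bid": 4.50},
    "luxury_travel":     {"wants": {"high_income"},      "base_bid": 6.00},
    "discount_retailer": {"wants": {"small_town"},       "base_bid": 2.25},
}

def run_auction(segments, bidders):
    """Return (winner, clearing_price) for one impression."""
    bids = {name: b["base_bid"]
            for name, b in bidders.items()
            if b["wants"] & segments}  # bid only when the user matches
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]  # second price
    return winner, price

winner, price = run_auction(user_segments, bidders)
print(f"Winner: {winner}, clearing price: ${price:.2f}")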

2) In another sign that surveillance is a very big deal, not only for advertising, the always-astute security guru Bruce Schneier announced that his next book Data and Goliath, due out in March, addresses this issue.

3) Robots, which for our purposes can be defined as sensor platforms, are getting better — fast — and Google has acquired expertise in several forms of the discipline:

-the self-driving car (that has severe real-world limitations)

-Internet of Things (Nest and Waze)

-autonomous military and rescue robots (Boston Dynamics and Schaft).

4) A September 28 post by Steve Cheney raised the prospect of Google moving some or all of the aforementioned robot platforms onto some version of Android. While he predicted that “everything around you will feel like an app,” I’m more concerned that every interaction with any computing-driven platform will be a form of surveillance. From garage-door openers and thermostats to watches to tablets to “robots” (like the one Lowe’s is prototyping for store assistance) to cars, the prospect of a Google-powered panopticon feels plausible. (I looked for any mention of robotics in the Google annual report but all the major acquisitions were made in this fiscal year, so next year's 10-K will bear watching on this topic.)

5) Hence Apple’s recent positioning makes competitive sense. When Tim Cook said, “A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product,” he was ahead of the curve, I believe: according to the Economist report, only 0.00015% of people use those little triangle things to opt out of online ad tracking. In Cook’s and Apple’s narrative, premium prices implicitly become more reasonable to those who value privacy insofar as there is no “audience commodity” as at eBay, Amazon, Google, Twitter, or Facebook.

6) One other thing to consider here is how that information is being processed at unprecedented scale. When The Economist likens ad-buying to algorithmic trading, we have entered the world of artificial intelligence, something Google counts as a core competency, with 391 papers published, not to mention untold portions of secret sauce.

Some very smart people are urging caution here. Elon Musk was at MIT for a fascinating (if you’re a nerd) discussion of rockets, Tesla, the hyperloop, and space exploration. Thus for someone serious about a Mars space base to warn against opening an AI Pandora’s box was quite revealing:

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

(The complete MIT talk is here)

Musk is not alone. The University of Oxford’s Nick Bostrom recently wrote Superintelligence (maybe best thought of as an alternative excursion into Kurzweil-land), a book that quite evidently is grappling with the unknown unknowns we are bumping up against. He knows of what he speaks, but the book is, by his own admission, a frustrating read: no generation of earth’s population has ever had to ask these questions before. The book’s incompleteness and tentativeness, while making for a suboptimal read, are at the same time reassuring: someone, both informed and set in a broad context, is asking the questions many of us want on the table but lack the ability, vocabulary, and credibility to raise ourselves.

In a nutshell, there it is: mobile devices and social networks generate data points that supercomputing and sophisticated analytical tools turn into ad (or terrorist, or tax-cheat) profiling data. Computing liberated from desktop boxes and data centers moves in, and acts on, the physical world, extending surveillance further. Apple positions itself as a self-contained entity selling consumers stuff they pay for, not selling eyeballs/purchase histories/log-in fields/expressed and implied preferences to advertisers. In sharp contrast, Google has repeatedly shown — with Streetview, wi-fi access point mapping, Buzz, and Google+ — a desire to collect more information about individuals than many of those individuals would voluntarily reveal. With AI in the picture, the prospect of surveillance producing some very scary scenario — it may not accurately be called a breach, just as the flash crash wasn’t illegal — grows far more likely. Human safeguards didn’t work at the NSA; why should they work in less secure organizations? Like Bostrom, I have no ready answers other than to lead a relatively careful digital existence and hope that the wisdom of caution and respect for privacy will edge out the commercial pressures in the opposite direction.

Next month: unexpected consequences of a surveillance state.

Wednesday, October 01, 2014

Early Indications September 2014: Alternatives to Industry

In classic business school strategy formulation, a company’s industry is taken as the determining factor in cost structures, capital utilization, and other constraints to the pursuit of market success. Nowhere is this view more clearly visible than in Michael Porter’s seminal book Competitive Strategy, in which the word “industry” appears regularly.

I have long contended that Internet companies break this formulation. A series of blog posts (especially this one) in the past few weeks has crystallized this idea for me. The different paths pursued by Apple, Amazon, and Google — very different companies when viewed through the lens of industries — lead me to join those who contend that despite their different microeconomic categories, these three companies are in fact competing in important ways, though not of the Coke/Pepsi variety.

Let us consider the traditional labels first. Amazon is nominally a retailer, selling people (and businesses) physical items that it distributes with great precision from its global network of warehouses. Its margins are thin, in part because of the company’s aggressive focus on delivering value to the customer, often at the cost of profitability at both Amazon itself and its suppliers.

Apple designs, supervises the manufacture of, and distributes digital hardware. Its profit margins are much higher than Amazon’s, in large part because its emphasis on design and usability allows it to command premium prices. Despite these margins and a powerful brand, investors value the company much less aggressively than they do Amazon.

Google, finally, collects vast amounts of data and provides navigation in the digital age: search, maps, email. Algorithms manage everything from web search to data-center power management to geographic way-finding. In the core search business, profit margins are high because of the company’s high degree of automation (self-service ad sales) and the wide moats the company has built around its advertising delivery.

Thus in traditional terms, we have a mega-retailer, a computer hardware company, and a media concern.

When one digs beneath the surface, the picture morphs rather dramatically. Through a different lens, the three companies overlap to a remarkable degree — just not in ways that conform to industry definitions.

All three companies run enormous cloud data center networks, albeit monetized in slightly different ways. Apple and Amazon stream media; Google and Amazon sell enterprise cloud services; Apple and Google power mobile ecosystems with e-mail, maps, and related services. All three companies strive to deepen consumer connections through physical devices.

Apple runs an industry-leading retail operation built on prime real estate at the same time that Amazon is reinventing the supply-chain rule book for its fulfillment centers (FCs) and now sortation centers. (For more on that, see this fascinating analysis by ChannelAdvisor of the Amazon network. In many cases, FCs are geographically clustered rather than spread more predictably across the landscape.) Both of these retail models are hurting traditional mall and big-box chains.

At the most abstract but common level, all three companies are spending billions of dollars to connect computing to the physical world, to make reality a peripheral of algorithms, if you will. Amazon’s purchase of Kiva and its FC strategy both express an insistent drive to connect a web browser or mobile device to a purchase, fulfilled in shorter and shorter time lags with more and more math governing the process. In the case of Kindle and streaming media, that latency is effectively zero, and the publishing industry is still in a profoundly confused and reactive state about the death of the physical book and its business model. The Fire phone fits directly into this model of making the connection between an information company and its human purchasers of stuff ever more seamless, but its weak market traction is hardly a surprise, given the strength of the incumbents: not coincidentally, the other two tech titans.

Apple connects people to the world via the computer in their pocket. Because we no longer have the Moore’s law/Intel scorecard to track computing capacity, it’s easy to lose sight of just how powerful a smartphone or tablet is: Apple's A8 chip in the new iPhone contains 2 Billion (with a B) transistors, equivalent to the PC state of the art in 2010. In addition, the complexity of the sensor suite in a smartphone — accelerometers, microphone, compasses, multiple cameras, multiple antennae — is a sharp departure from a desktop computer, no matter how powerful, that generally had little idea of where it was or what its operator was doing. And for all the emphasis on hardware, Nokia’s rapid fall illustrates the power of effective software in not just serving but involving the human in the experience of computing.

Google obviously has a deep capability in wi-fi and GPS geolocation, for purposes of deeper knowledge of user behavior. The company’s recent big-bet investments — the Nest thermostat, DARPA robots, Waze, and the self-driving car team — further underline the urgency of integrating the world of physical computing on the Android platform(s) as a conduit for ever deeper knowledge of user behavior, social networks, and probably sentiment, all preconditions to more precise ad targeting.

Because these overlaps fail to fit industry definitions, metrics such as market share or profit margin are of limited utility: the three companies recruit, make money, and innovate in profoundly different ways. Amazon consistently keeps operating information quiet (nobody outside the company knows how many Kindle devices have been sold, for example), so revenue from the cloud operation is a mystery; Google’s finances are also somewhat difficult to parse, and the economics of Android for the company were never really explicated, much less reported. Apple is probably the most transparent of the three, but that’s not saying a lot, as the highly hypothetical discussion of the company’s massive cash position would suggest.

From a business school or investor perspective, the fact of quasi-competition despite the lack of industry similitude suggests that we are seeing a new phase of strategic analysis and execution, both enabled and complicated by our position with regard to Moore’s law, wireless bandwidth, consumer spending, and information economics. The fact that both Microsoft and Intel are largely irrelevant to this conversation (for the moment anyway) suggests several potential readings: that success is fleeting, that the PC paradigm limited both companies’ leaders from seeing a radically different set of business models, that fashion and habit matter more than licenses and seats, that software has changed from the days of the OSI layer cake.

In any event, the preconditions for an entirely new set of innovations — whether wearable, embedded/machine, algorithmic, entertainment, and/or health-related — are falling into place, suggesting that the next 5-10 years could be even more foreign to established managerial teaching and metrics. Add the external shocks — and shocks don’t get much more bizarre than Ebola and media-savvy beheadings — and it’s clear that the path forward will be completely fascinating and occasionally terrifying to traverse. More than inspiration or insight from our business leaders, we will likely need courage.