Saturday, December 31, 2016

Early Indications December 2016: The Future of Aging


We grow too soon old and too late smart.
      -Proverb variously attributed to Swedes, Germans, and Dutch

While it is common to note that the U.S. worships youth and beauty in contrast to other cultures that revere age and wisdom, the demographic tidal wave that is the baby boomer generation will change aging just as it changed higher education (the explosion of college attendance), childrearing norms (compare baby strollers and birthday parties in 1960 and 1980), family structure (marriage rates have plummeted since World War II), the workplace (cube farms), and the built landscape (McMansions). Speaking only of the U.S., and not of those wise Dutch, German, Swedish, or Chinese elders, what will we see in the next 25 years? The short answer: lots of big changes.

-Medical breakthroughs
When life expectancy was shorter, body and mind wore out at more or less the same rate. As life expectancy increases, dementia on the one hand and crippling orthopedic and spinal conditions on the other both become more likely: mind can fail before body, and body before mind, so heartbreaking scenarios of both asymmetries are on the upswing. (See this.) Exoskeletons, 3D-printed artificial joints, and other mobility solutions can help with the latter class of conditions, while new Alzheimer’s and other dementia drugs are getting more attention, given the growing market need (see this). Just as fertility treatment advanced markedly in the baby boomers’ childbearing years, expect new medical miracles to address the aging process.

-Income
Whether or not the aged will be able to pay for these new medicines and devices remains an open question. Social Security is both underfunded and insufficient for a moderate lifestyle, pensions are less available and often underfunded for the public-sector employees lucky enough to get them, and the tab for the everyone-his-own-investment-analyst experiment known as the 401(k) defined contribution approach will soon be coming due. As of 2013, only 53% of U.S. families had a retirement plan, and of those aged 56-61, the median account was valued at only $17,000. The mean account value for that age cadre — $163,000 — is clearly boosted by a very few families with extensive or even adequate resources: in round numbers, a 65-year-old couple needs about $850,000 to generate $50,000 a year (the “average” U.S. income) for 20 years, assuming 1% inflation and 3% investment returns, not counting Social Security. Fidelity Investments estimates that same couple will pay $260,000 in out-of-pocket health care expenses, not counting nursing homes or related costs. (More here.) Most American families will not be able to afford to retire under the current rules.
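That arithmetic is easy to check. Here is a minimal sketch (Python), under my own assumptions that the couple withdraws at the start of each year and that each withdrawal grows with inflation:

def required_nest_egg(annual_draw=50_000, years=20, inflation=0.01, returns=0.03):
    # Discount each inflation-adjusted withdrawal back to today to find the
    # starting balance needed; withdrawals assumed at the start of each year.
    need = 0.0
    for year in range(years):
        draw = annual_draw * (1 + inflation) ** year   # withdrawal grows with inflation
        need += draw / (1 + returns) ** year           # discounted at the investment return
    return need

print(f"${required_nest_egg():,.0f}")   # roughly $835,000, in the ballpark of the $850,000 above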

The question then becomes, what happens? After rising for more than 50 years running, U.S. life expectancy might be dropping: earlier this month, the National Center for Health Statistics announced that death rates for 8 of the top 10 causes of death increased in 2015. Life expectancy at birth dropped about a month for women and 2+ months for men. One year does not a trend make, but it’s possible we could be seeing an effect of the growth in income/wealth disparity that has characterized so much of American life in the past 30 years: one hypothesized cause of an increase in death rates for white middle-age people is the increase in so-called diseases of despair: alcoholism, overdoses, and suicide. Whether through despair, diet/lifestyle, or limited access to care, financial stress and low income most likely reduce life expectancy. 

-Safety nets
So if life is getting harder, life expectancy extends 20+ years beyond age-65 retirement, and savings are minuscule, what will government do? One colleague of mine suggests that Medicare could expand to include food stamps or some other nutrition component. Perhaps there will be wider calls for a public option for health care coverage. Social Security was originally instituted in the Great Depression, and the nation is a very different place 80 years later: might government old-age insurance be redefined in the coming decade, especially given the coming bust in 401(k) assets relative to need? That bailout will dwarf both Wall Street’s and Detroit’s proppings-up after 2008. Will the retirement age be raised from 65 to reflect modern longevity? (In 1935, when Social Security was introduced, U.S. life expectancy was 61; it is now about 79.) I can’t see the current collection of national safety nets — VA, Social Security, Medicare/Medicaid, disability, SNAP — being able to withstand another 10 years without being reworked.

-Living arrangements
The prevalence of 2 or 3 adult generations living in close proximity fell dramatically after World War II: the growth of suburbs filled with single-family detached houses, with limited walkability, along with the rising number of nursing and retirement homes, meant that grandparents less commonly lived with their grandchildren. The numbers are difficult to track, given the changing makeup of care resources: adult day care, in-home service providers, nursing homes, and adult care communities can all overlap both in structure (a community agency can offer both adult day care and hospice, let’s say) and by person: transitions from one type of care to another are common as health needs change, offspring move in or out of town, spouses die, and finances change.

Multigenerational families are on the upswing in the U.S., and elders are part of the picture, but those aged 25-34 are moving in with their parents in stunning numbers. According to figures from the Pew Research Center, 11% of adults 25-34 lived in a multigenerational household in 1980. That proportion had more than doubled as of 2012, and by 2014, more young adults (18-34) lived with parents than with a spouse or significant other. As those unmarried millennials age, how will they change our assumptions about, and institutions related to, aging? Or will they marry in traditional numbers, only later? As the costs of aging rise, how will families, and real estate, adapt? How will Uber and, later, autonomous vehicles change where elders live and what they do with their days?

-Religious practice
The U.S. church landscape is changing profoundly. These changes matter for aging, insofar as churches are often providers of both formal and informal support networks, but aging matters for some churches, especially “mainstream” Protestantism. Consider that U.S. population grew 65% in the half-century between 1965 and 2015. In that same period, the Episcopal church lost 49% of its adherents, Presbyterians (PCUSA) 47%, and United Methodists 33%. Even among Roman Catholics, where membership pretty closely tracked population growth, parishes are closing: those 70 million self-described Catholics don’t attend Mass very often, statistically speaking. Catholics also have a supply-side issue getting men to join the priesthood; staffing parishes is a problem for many denominations, especially those without access to women clergy. Another side effect of the drop in mainstream church membership and attendance relates to the growth of towns and cities: those 19th-century church buildings are often located in prime real estate at the same time that maintenance and heating costs for big, old buildings with creaking structure and infrastructure (wiring, plumbing, HVAC) are non-trivial. The continuing shift in American church affiliation will affect both the look of our cities and the delivery of social services to many, including the aged.
*****
It should be clear that demographics, medical science, social institutions, government programs, religious faith, and technological change are wound together into a yarn ball: telecommuting and Uber will let people who can’t drive work at jobs they currently can’t get to. A stock-market decline could wipe out even more people than were decimated by 2008, given how many more baby boomers are out of the work force since then, making sharing living quarters a necessity. Older people feel more vulnerable, and scams of various sorts, political ones included, prey on many of them, at the same time that elders are outliving their churches. Social networks facilitate the spread of fake news, rumors, and other misinformation, both frightening people further and making real solutions harder to design and implement. Jobs and work tasks are changing incredibly fast, making older, expensive workers expendable, yet intellectual capital is leaving U.S. companies via retirement (particularly in process manufacturing) without clear backup plans in place. With so many strands to the issue, no single initiative can be considered in isolation; the side effects of policy (think of the home mortgage interest deduction as just one example) are often nearly as important as the primary objectives. Whether it’s robotics, social networks, autonomous vehicles, or telepresence for work or family ties, the new elderly will be key factors in many technological waves of the coming decades.

Wednesday, November 30, 2016

Early Indications November 2016: Beyond Party Realignment?

At the risk of getting confused for a political scientist or Beltway blogger, I want to look at the recent US and UK elections through the lens of technology and media rather than interest groups, policy, or even candidates. We are in the midst of a fundamental transformation at the global level, and the way information and opinions move among people (I’m not sure the word “audience” is solid any more) is changing a wide variety of institutions. Political parties are among them.

Let’s start with the basics. According to a standard US government textbook, a political party has several functions:

-To select candidates, formally through primaries and informally through social and organizational networks at local, state, and national levels
-To gather support for the party through media, organizing, and mobilization efforts
-To organize representatives in legislatures, in part by ideological and policy stands
-To synthesize ideas into party platforms and other points of view that can function like a brand to lower information costs.

In the UK and US, founding documents made no provision for parties; the Labour party only dates to 1900, while in the US, political parties weren’t major factors for the first 50 years after the American revolution. After World War II, US political scientists had about a century of experience to analyze, and led by Harvard’s V.O. Key, some argued for a notion of party “realignment.” Key saw the beginnings of Franklin Roosevelt’s era of Democratic rule in the 1928 election, in which the Catholic Democrat Al Smith lost to Herbert Hoover while performing well in the urban Northeast where Republicans had historically been strong.

As John Judis notes in The Washington Post, in 1967 the MIT political scientist Walter Dean Burnham built on Key’s concept and predicted that party realignments would happen every 30 or 40 years; it’s what the US has instead of revolutions, he posited. Richard Nixon’s victory in 1968 was another turning point as Republicans captured many Southern voters dissatisfied with racial integration and other aspects of the Kennedy/Johnson years. Did Bill Clinton usher in a new era in 1994? Or is instability now the dominant motif, given the disconnect between the large number of Republican governors and the Obama presidency, the large numbers of women and Latino voters who sided with Donald Trump, and the surprising success of the two party outsiders — Trump and Bernie Sanders — who reshaped the 2016 election from outside party orthodoxy and even affiliation? (As of 2008 Trump was registered as a Democrat; Sanders is officially an independent and describes himself as a democratic socialist.)

Judis applies conventional political logic to weigh whether 2016 is a realignment or what others have called a “recalibration,” but I am taking a different tack: the power of social media, the broken economics of the news industry, and the loss of critical thinking amid the identity politics that follows in the wake of those two developments point to another reading. Given both Brexit and the Trump victory, as well as various European struggles to reconcile national identity with global economics (burqa bans, austerity debates, free speech battles involving cartoonists), I’m suggesting that the political party itself is facing an existential challenge: realignment marks a change in what parties stand for; I’m suggesting parties must redefine what they do.

Let’s go back to those functions of a party:

In terms of selecting a candidate, the Republican party establishment did not “choose” President-elect Trump. In terms of governing, the Republican party platform is not the guidebook for the Trump presidency, particularly on foreign affairs. And in terms of branding, the Trump campaign was run in many ways as a repudiation of the Republican party, instead building on the candidate’s media experience and personal brand. The free airtime granted by cable news and print media amounted to far more exposure than ads could have generated, and Trump’s wide and confrontational use of social media made it a factor in ways it never has been before. How the president-elect manages traditional media (will there be more YouTube announcements?) and a so-far not-very-presidential Twitter feed will be of critical importance. The traditionally symbiotic relationship between news media and elected officials, stalwarts of their parties, is eroding from both sides. The many contradictions embodied in the White House Correspondents’ Dinner may soon fracture that institution, for example.

Going forward, Brexit and the US election both suggest that lowered barriers to media access have unintended consequences. The founding mission statement of The Economist, dating from 1843, asserts that the periodical was intended to participate “in a severe contest between intelligence, which presses forward, and an unworthy, timid ignorance obstructing our progress.” This same thinking, phrased less ornately, can be said to have motivated the early World Wide Web: if knowledge can be more widely disseminated, humanity will benefit. As uncontroversial as that might seem, Internet history fails to bear out the optimism. Cat videos were massively popular, cute, and pretty harmless; “fake news” was more popular on Facebook than fact and deeply dangerous; ethnic and racial harassment via social media, sometimes by software bots, is another threat to civil society.

Echo chambers are a major factor in modern Western political discourse, and what echoes is often patently false. As Jack Burden learned while watching Willie Stark (the barely disguised Huey Long) numb audiences with his tax plan in Robert Penn Warren’s brilliant novel All the King’s Men, policy rarely incites a crowd. The Trump campaign was light and often inconsistent on policy details, but with the pre-existing television persona of a decisive “boss” who fires people with delight on one side, and long-smoldering dissatisfaction among the white working class, mobilized at rage-fueled rallies, on the other, policy wasn’t the main attraction. Dispensing with party orthodoxy was seen not as a flaw but as a feature of the outsider campaign: making America great “again” allowed sympathetic voters to fill in the blank with a nostalgic evocation of whatever period they liked, concrete details be damned. That news media repeated the slogan literally millions of times without pinning down the candidate is one legacy of this peculiar election.

In both the US and UK, the question of “now what?” looms large. Decrying orthodoxy, winning elections with outright lies and racial antagonism, and rejecting the norms of globalization all carry unintended consequences yet to be discovered. In such a landscape, what is the role and function of a political party? Where is the “farm system” of candidates for 2018, 2020, and beyond — for both major US parties? Will 2016 mark the end of the Clinton-Bush era of semi-dynastic candidacies? If so, who will step up and — much more important — how will they do so? By adopting the Trump (and Huey Long) playbook of us-against-them? Or by simultaneously innovating and drawing on the deep history of US optimism, national pride, and civic decency?

From the party perspective: Is grass-roots organizing no longer worth the investment, especially as industrial labor unions continue to decline? Will facts continue to be so widely and enthusiastically disregarded in favor of appealing social media/cable TV nuggets? Can good people be persuaded to enter the fray of personal attacks, physical intimidation (especially of females), and lack of compromise, whether for city council, state auditor, or Congress? Will the Republican vision of minority outreach, articulated in the 2012 postmortem, gain traction as white voters continue to decrease as a percentage of the electorate? Can Democrats articulate a compelling alternative to Trumpism rather than only reacting vigorously (crying wolf?) to everything that emanates from this presidency — and unite behind a candidate who embodies that alternative? While Republican presidential hopefuls of all sorts and ages emerged in the 2015-16 cycle, Democrats need to rejuvenate: Hillary Clinton is 69 years old, Bernie Sanders is 75, and Elizabeth Warren is 67. 

From the media perspective: Can Facebook and Twitter police the misuse of their services, or will monetization of clicks continue to drive profitable lying, regardless of its costs to democracy? How will news and semi-news organizations (cable channels especially don’t typically win Pulitzers) adapt to the unprecedented behavior of President Trump? What, in 2017, is “the public interest”? Who can be trusted to monitor and nurture it?

Most centrally, what will people be talking about four years from now, in the aftermath of the 2020 election? Will parties reinvent their mission, or will social media amplify single-issue politics, leveraging the highly salient yet divisive “solutions” we saw in 2016? Can politics regain some of its luster as a call to civic service, in part through revitalized parties, or are we being pulled into ever more cynical directions by the sound bites of fear-mongering opportunists who don’t have and don’t need a party organization to help them succeed? Can a new kind of news media business model emerge to pay for the kinds of reporting an informed electorate requires? None of these questions have simple answers, but the costs of not addressing them could not be any higher.

Monday, October 31, 2016

Early Indications October 2016: What's Ahead for Higher Education?

First things first: my Robots book was published by MIT Press a couple of weeks ago. I worked with a wonderful team there; among other things, the cover art is far better than that of any other book I’ve done since the millennium. I hope we can team up on another project in the future.
*************

I last wrote about higher education in September 2009, and upon revisiting the piece, it has held up pretty well.

That said, the landscape has shifted dramatically in the intervening years, so in this newsletter I will address the new issues, with some reiteration of past themes. In a nutshell, colleges face a potentially crippling combination of being locked into existing infrastructure-heavy business models in the face of alternative delivery practices, unsustainable cost increases, and extreme mission creep. Trying to be so many things to so many constituencies, using an expensive/inefficient physical plant and headcount under massively bureaucratic management, will fail as online education gets better and better at the same time that student loan debt hits critical mass.

Hypothesis 1: College is unique
College is a unique element of most societies. Using the US model, which is not fully representative but widely emulated, consider:

1) For what other service are there both private and state alternatives from which to choose? Not police forces, drug certification, roads, Cabinet departments, or militaries.

2) In what other transaction does the buyer (and buyer's parents) lay bare their finances before being told how much the service will cost?

3) In what other transaction, especially one costing so much, are yardsticks a) not agreed on and b) difficult to obtain? I know roughly how much my house cost the previous buyer, how its property taxes compare to those of neighboring properties, and even how its electric bill compares to peer properties. Buying a car, I can see some facsimile of dealer invoice price, EPA fuel economy performance, if a used car has been in a collision, and even what parts tend to break at what mileages. Buyers of higher education, by contrast, know surprisingly little about such an important investment.

Hypothesis 2: The many missions of a college/university can expand and conflict

College is idiosyncratic in many regards, nowhere more so than in its many and often competing definitions of success. Does a successful college education

- further upward economic mobility?

- teach a graduate a marketable skill for a first job?

- turn a child into an adult?

- prepare a person to ask the enduring questions of the world, its institutions, and oneself?

- teach a graduate how to learn and adapt his/her skills to a changing job market?

- teach a specialized body of knowledge so civil engineers, accountants, and English teachers can join their respective guilds?

- teach general skills that one should possess regardless of occupation, such as financial literacy, critical thinking and writing, and civic/historical awareness?

- endow the graduate with social experiences and friendships that will endure over time?

- teach awareness of and respect for people and traditions different from one's own?

That list, for all its complexity and internal competition, addresses but one university constituency: undergraduate students. There are more players: alumni, corporate research sponsors, state economic development authorities, employees, graduate and professional-school students, farmers and other consumers of agricultural extension expertise, patients at the medical center, fans of the school's football or basketball teams, municipalities paid something similar to property taxes (but not quite), and the makers and buyers of things that universities can help certify, whether meat and dairy products, nurses, or STEM curricula. What is the priority of these many groups? Who sets the pecking order?

Hypothesis 3: Undergraduate education is getting lost in the shuffle
So colleges have seen their scope of activity explode. Rather than try to sort through all the constituencies, let’s return to the erstwhile purpose of college, the undergraduate experience. Richard DeMillo sees higher education from multiple perspectives and currently works at Georgia Tech at a research center for the future of higher education. His recent book, Revolution in Higher Education (MIT Press), is well worth reading. In it, he raises five questions:

1) Does an institution serve the people it is supposed to serve?
2) Are there among a university’s graduates a sufficiently large number of successful and influential alumni to warrant a second look at what is being done to achieve those results?
3) Besides the visible success stories, what happens to most graduates once they get their degrees?
4) What, exactly, do students learn?
5) How important is an institution to the city or region?

Since that last newsletter in 2009, U.S. student loan debt has exploded, in part because of fraudulent or dishonest practices by for-profit colleges that have high rates of loan acceptance, degree non-completion, and loan defaults. Overall, U.S. student loan debt has more than quintupled since 2000: from $250 billion to more than a trillion today. For-profit colleges are heavily over-represented on the list of “leaders”: the University of Phoenix has seen its loans increase 17-fold in that same period.

What is a “typical” college experience today? A mid-range state university, a Western Michigan or Kansas State? Part-time and/or online, whether Phoenix or Southern New Hampshire? The public research powerhouses — Cal Berkeley, Michigan, Texas, et al — teach a lot of students but a) represent a small percentage of total enrollment and b) have distinctive strengths and weaknesses. One thing is certain: the private liberal arts colleges where many members of the media went to school (a Syracuse, a Williams, or an Ivy, let’s say) are neither representative nor “average.” A FiveThirtyEight post — “Shut Up About Harvard” — from March is required reading on this topic: many people, but especially those in the media, focus on elite or very good schools because that’s what they saw. But Harvard’s tiny entering class isn’t representative of the larger experience, with its high 4-year graduation rate, lack of athletic scholarships, lack of loan debt (all aid is grants at several Ivies), no part-time students, small numbers of military veterans, etc. The other big change is the rise of new models for online learning: Udacity, Coursera, edX, Khan Academy, and others. DeMillo names the perfect storm:

A) More students are starting college than ever before.
B) Fewer students (on a percentage basis) are completing degrees than ever before.
C) College costs are cursed by “Baumol’s cost disease”: in labor-intensive service industries, productivity barely improves, yet wages must roughly keep pace with sectors where it does, so unit costs keep climbing. In U.S. universities, labor-related costs (including health care and overall headcount) have risen faster than inflation even though most university salaries have not, and a school’s tuition usually reflects this imbalance. (A stylized illustration follows this list.)
D) Output measures are hard to collect, hard to interpret, and hard for the public to find. Something as simple as “what did a student learn?” is not well understood, especially across heterogeneous populations, and not widely collected. Debt loads, starting salaries, and subsequent education (such as law or business school) are tracked loosely at best, and not at all in many cases, and then not prominently reported to prospective students at many institutions. Do English majors at Florida State do better, employment-wise, than marketing majors at LSU? Few people know, though many have opinions and/or anecdotes.
E) Technological change reshapes entire occupational categories faster than colleges can react. I just saw last week that someone advocated cutting off radiology training in medical schools “because in five years deep learning will have better performance.”
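
Item C above is easier to see with a toy illustration (Python). The numbers are stylized assumptions of mine, not university data: one sector gets more productive every year, teaching does not, yet wages in both must track the economy.

years = 30
productivity_growth = 0.02           # annual gain in the "progressive" sector
wage = 1.0                           # wage index, identical in both sectors
widget_output_per_worker = 1.0
for _ in range(years):
    wage *= 1 + productivity_growth                  # wages track economy-wide productivity
    widget_output_per_worker *= 1 + productivity_growth
print(wage / widget_output_per_worker)   # widget unit cost: still 1.0, gains offset raises
print(wage / 1.0)                        # teaching unit cost: ~1.81, up 81% with no productivity gain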

Hypothesis 4: The magnitude and complexity of the challenge dwarf the caliber of post-secondary leadership and innovation

Thus we have multiple dilemmas facing U.S. universities. Costs keep rising, in part driven by the pursuit of an elusive notion of “prestige,” while tuition increases cannot outrun inflation for very much longer. By taking on so many missions beyond undergraduate education, colleges build bureaucratic fortresses that duplicate effort and impede both efficiency and collaboration. Online learning promises a cure to Baumol’s disease via simultaneous scaling and personalization, but organizational models (including factors such as accreditation) make implementation within traditional brick-and-mortar incentive models problematic. (A case in point: what is an online credit hour if there is no classroom where, in a 3-credit course, people convene TuTh from 9:00-10:30?) University investments in the physical experience double down on dorms, student unions, gyms, and sports programs, ignoring or wishing away the oncoming online locomotive. An emphasis on STEM or even STEAM (science, technology, engineering, arts, and math) doesn’t prepare students for what startup consultants Burning Glass call hybrid jobs: engineers who need to write proposals, nurses who supervise people, statisticians who need to analyze cultural differences. Political rhetoric notwithstanding, core liberal-arts training does not become irrelevant even as technical and quantitative skills gain in importance.

What’s next? Three questions:

1) Where will university leadership develop the sophistication, foresight, and boldness to reinvent the basic model of research, tenure, teaching, and testing that dates primarily from the 1880s, imported to the U.S. from Germany? Twenty years ago, I heard a great parable that makes the point:

Leonardo da Vinci walks into an airplane hangar and sees both a Boeing 747 and an SR-71 Blackbird and cannot believe what happened to his conception of powered flight. Gutenberg sees Adobe Creative Suite, with Photoshop, Illustrator, and the rest of the software, and cannot believe what happened to printing presses. Ben Franklin walks into a modern school building, sees desks in rows, chalk, and blackboards, and says, “hey that’s a classroom.”

Another parable: when asked the purpose of the modern university, University of California president Clark Kerr said "to provide sex for the students, parking for the faculty, and football for the alumni." He said that 50 years ago, and his advice is still pretty widely followed. Who is our generation's Clark Kerr?

2) What alternatives can emerge to the ubiquitous 4-year undergraduate degree, lowering costs, increasing access, and improving performance of the system, including delivering lifelong learning in whatever field a person might work? (Why has 4 years become some sort of magic number, given how little we know of learning mechanisms and outcomes?)

3) Where can we have a serious conversation about the role and purpose of different types of college experience, ranging across education and training (they are two different things) in critical thinking, financial literacy, basic citizenship, Great Western and other cultural traditions, math and science fundamentals, preparation for entrepreneurship, and learning both to think abstractly and to solve concrete problems? No education can do all of these well; some schools should make each of them a point of distinction, without ignoring all the rest.

For all my passion in this domain, I’m glad I’m not king for a day, charged with slicing the Gordian knot: these are worthy challenges that deserve hard choices, broad participation, and above all, moral courage. Where will we find the people who can lead such a quest?

Sunday, October 02, 2016

Early Indications September 2016: Welcome to Dystopia?

If you’re an author with a knack for conjuring up nightmare scenarios, these are in fact the best of times: George R. R. Martin (Game of Thrones) and Suzanne Collins (The Hunger Games) are absolutely rolling in money and fame. There’s something going on when bleak futures capture national mindshare in books, TV shows, and movies. Look at 1963: with far fewer options, mass audiences converged on distraction. Shows such as Petticoat Junction, Candid Camera, and My Favorite Martian made the top ten, all lagging The Beverly Hillbillies and Bonanza. In 1968, with riots in the streets and the assassinations of Robert Kennedy and Martin Luther King calling the American ideal into question, Bonanza hung in at #3, behind Gomer Pyle U.S.M.C. and Rowan and Martin’s Laugh-in. That Stanley Kubrick was able to confront the madness of nuclear war with the brilliant black comedy of Dr. Strangelove (1964) helps prove the point: dystopias have historically been uncommon cultural touchstones; now they’re everywhere.

Could it be that these cultural artifacts capture our zeitgeist? Whether in the Dark Knight Batman films, The Wire, Breaking Bad, or The Sopranos, our most popular entertainments of the past 15 years present a pretty bleak vision, diametrically opposed to new deals, new frontiers, or “Don’t Stop Thinking About Tomorrow” campaign songs. Along the dystopian line of thinking, it’s easy to find evidence that the world is heading in a very bad direction. A gloomy sample:

*Ocean levels are rising faster than predicted, but the local effects in New York, Miami, the Netherlands, and Bangladesh will all vary considerably. Millions of people will be displaced; where will they go? Norfolk Naval Base will lose acres of land; how fast, nobody knows.

*What appears to be the single largest distributed denial of service (DDoS) cyber-attack was mounted earlier this month using at least 150,000 compromised cameras and other poorly secured Internet of Things (IoT) devices. It’s quite possible our cars, garage-door openers, thermostats, and personal devices can be turned against us.

*Guns kill a lot of people in the U.S. Exactly how many is hard to determine, in part because the gun lobby discourages public health officials from calculating statistics. But whether it is suicides (20,000 a year, or 2/3 of all gun deaths), mass shootings, police violence against citizens, or the average of 82 shootings per week in Chicago alone, the numbers are depressing but apparently acceptable, given the lack of action. One statistic provides food for thought: a Harvard study released earlier this month estimated (a key word, given the wide error range) that 7.7 million people (3% of U.S. adults) own half the country’s guns. These “super-owners” own between 8 and 140 firearms apiece.

*Globally, millions of people are being lifted out of poverty, but in the U.S., tens of millions of middle-class people find their fortunes stagnant or, increasingly, declining. Whether from plant closures, downsizing, inadequate skills, offshoring, or automation’s various effects, people can’t get ahead the way previous generations did. For many complex reasons, class conflict is showing itself in various ways: racial tensions, protests in places like San Francisco where homelessness and extreme wealth collide, and anti-trade sentiment. Immigration and refugees are super-sensitive issues from Turkey to British Columbia.

*At the same time that Colorado and Washington state are finding benefits of legal marijuana, recreational drugs are killing people. In addition to the violence in Chicago noted above, some of which is drug-related, the toll of opioid drugs is shocking. Especially when heroin is cut with fentanyl, overdoses are swamping local EMS and other responders. Columbus saw 27 in 24 hours, while Cincinnati had to cope with 174 heroin overdoses in 6 days. Huntington, WV had calls for 27 overdoses in under four hours last month. Prescription oxycontin was likely a tragic gateway drug in many of these cases. “Just say no” and a “war” on drugs clearly didn’t work; what’s next?

*On the ethical drug front, meanwhile, we live in scary times. Antibiotic-resistant “superbugs” are making hospitalization in any country a frightening proposition. In 2013, 58,000 babies died of antibiotic-resistant infections in India alone, and in a global age of travel, those bacterial strains are moving elsewhere. An estimated 23,000 people died in the US last year from antibiotic-resistant infections, and just this past May, the CDC reported that a Pennsylvania woman who had not recently traveled out of the country tested positive for E. coli carrying the mcr-1 gene. That gene confers resistance to colistin, widely regarded as the “last resort” antibiotic, though the woman in question _did_ respond to other treatments. Still, the CDC’s language is sobering: “The CDC and federal partners have been hunting for this gene in the U.S. ever since its emergence in China in 2015. . . . The presence of the mcr-1 gene, however, and its ability to share its colistin resistance with other bacteria such as CRE raise the risk that pan-resistant bacteria could develop.”

None of these problems have easy answers; some don’t even have hugely difficult answers. Zeroing in on the technology-related world (thus leaving aside climate change, gun violence, and drug issues for the moment), I see four nasty paradoxes that, taken together, might explain some of how we arrived at a juncture where dystopian fantasies might resonate.

1) Automation brings leisure and productivity; robotics threatens to make many job categories obsolete. From radiologists to truck drivers to equity analysts, jobs in every sector are threatened with extinction. The task of making sure technologies of artificial muscle and cognition have widely rather than narrowly shared benefits runs counter to many management truisms regarding shareholder value, return on investment, and optimization.

2) We live in a time of unprecedented connection as most adults on the planet have access to a phone and will soon get smartphones; interpersonal dynamics are often characterized by savagery (at worst) or distractedness. (Google “Palmer Luckey” for a case in point.) Inside families and similar relationships, meanwhile, the psychologist Sherry Turkle argues persuasively that we are failing each other, and especially our kids, when we interact too much with screens and too little with flesh-and-blood humans.

3) The World Wide Web brought vast stores of the world’s cultural and scientific knowledge to masses of people; a frightening amount of public debate is now “post-factual,” with conspiracy theories and plain old ignorance gaining large audiences. Climate science, GMO crops, and vaccinations are but three examples. The assumptions behind the Web have too often failed: access to knowledge by itself cannot counter fads (hello Justin Bieber), longstanding ignorance, or intolerance. Compare the traffic to YouTube or Facebook with that to the Library of Congress, Internet Archive, or even Wikipedia. At some level, maybe people don’t like eating their intellectual vegetables; junk food is too hard to resist.

4) Billions of sensors, smartphones, and cloud computing virtual machines enable an increasingly real-time world, where information flows faster and wider every year; historical context is lacking for many public assertions and private opinions. In September, a Republican party official claimed there was no racism before 2008. For years, only a minority of people have been able to identify in which century the American revolution or Civil War occurred. Nuanced views of Reconstruction or the Gilded Age, hugely formative of and relevant to today, are difficult to find.

Together, these paradoxes add up to a truly dystopian vision at odds with what seemed inevitable just a few years ago. It’s difficult to be optimistic, but to close I’ll suggest some reasons why solutions are so difficult.

*The digital world doesn’t respect traditional organizational boundaries. Examples abound: Russia is said to be meddling in the US election cycle. Certainly the superpowers have influenced local elections in the past, but the thought of major media outlets and voting machines being compromised by a global adversary calls the whole notion of sovereignty into question. Whether it’s in regard to spam, child porn, copyright, compromised hardware at the chipset level, digital privacy, or the handling of video and music streaming, the global, borderless nature of the mobile/digital platforms calls basic facts of jurisdiction, evidence, and recourse into question.

*At the same time that “where” needs to be redefined, so too must we confront what work is. Who does what, how much they are paid or otherwise valued, how they learn the job, what happens when jobs or entire labor markets disappear — none of our current answers can be assumed to hold stable 10 years from now. Education, unemployment and disability benefits, collective bargaining, workplace health and safety (does sitting really “kill” you?), pensions, internships, retirement, job-hunting, and corporate education and training will all assume new shapes. Some of this will be messy; I can’t see anyone getting it all right the first time.

*Technologies of communication and transportation have usually been a double-edged sword. Trade brings benefits to many parties, but smallpox, influenza, and the AIDS virus all crossed oceans on new modes of transport. Given the essentially free, multimedia, borderless nature of digital communications, what equivalent maladies will be given broad distribution, and what will be their consequences?

*In a pluralistic world, what can serve as a moral compass for an individual, a group, a nation, a continent? The teachings of Muhammad, Jesus, Yahweh, Confucius, and the Buddha all have served to guide people over the centuries, but so too have they justified crimes against humanity. We live in a connected world where religious conflict becomes more likely than in eras with less physical mobility. Given global communications and mobility, how is coexistence possible, given increases in both fundamentalism and secularism in many places, and the ongoing tendency for the major religions to splinter internally, often violently? In a post-factual world, people stake out their own beliefs as truth, but without sufficiently binding notions of a common identity, purpose, or ideology, we are left less with states of free-thinkers than with new sources of conflict — and fewer resources for building group identity.

To be sure, there are many hopeful signals, and plenty of today’s entertainment is mindless diversion not unlike the television hits of the 1960s. That dystopias can find audiences may be more a function of the multitude of distribution options than of national mood. In any event, I do believe the challenges we confront will test moral resolve, institutional flexibility, and intellectual creativity unlike ever before. It may be that meta-questions are in order: rather than asking how we solve internet security or rising ocean levels, we (a tricky word all on its own) need to ask: what are the political forms, grounds of legitimacy, and resources of the institutions we will design to address these new challenges?

Wednesday, August 31, 2016

Early Indications August 2016: The Next Car

About 125 years ago, when the internal combustion engine supplanted equine power for personal mobility, there was much talk regarding “horseless carriages,” defining the future in terms of the past. We are at much the same juncture today: as electric autonomous vehicles come closer and closer to mass-market availability, much of the conversation starts with what we know human drivers do: “How will self-driving cars avoid bicyclists? How will self-driving cars merge in construction zones? How will self-driving cars make left turns across oncoming traffic with solar glare?” All of these questions must be answered, of course, but I believe it’s not too early to ask what we want of the _next_ car, the one(s) with a largely new set of constraints and capabilities. That is, given a clean-sheet design, what are some questions we might ask? Here are three among many.

1) How do we balance autonomy with “mesh transportation”?

By definition, a driver in a car is largely autonomous and disconnected from the cognition of the drivers around them; “what was he thinking?” is a common complaint while observing other drivers. The person at the wheel can follow or ignore traffic laws, brake suddenly or gradually, act with awareness of other vehicles or possess limited situational awareness. There are many consequences of this autonomy: cars have long been associated with personal “freedom,” traffic flows in an annoying and predictable accordion pattern in congested stretches of highway, and of course accidents happen when driver A somehow surprises driver B.

Once driverless vehicles constitute some critical mass of traffic, however, that assumption of autonomy can be challenged. My current frame of reference is a mesh wireless network, a potentially peer-to-peer ad hoc configuration of cars both interacting with vehicles close to them and serving as repeaters for less proximate “nodes.” A simple scenario started my thinking: what if a truck stuck in traffic wanted to see the sensor feed from the car at the front of the pack? Already, Samsung has shown a heavy truck with an LCD display on the rear showing the view out the windshield. Once my vehicle can “see” the sensors n cars ahead, what else can happen?

Although sensor-driven autonomous vehicles are getting substantial attention from Uber, Google, and Tesla, the notions of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication are also gaining mindshare. Emergency vehicles can already trigger stoplights at some intersections (green for the ambulance, red for the cross-streets), so it doesn’t require an enormous leap of imagination to get to cars addressing and listening to the signaling infrastructure: why train a self-driving car to understand (or memorize) speed-limit signs when the information could be available via wireless beacons? The appeal of V2V is obvious, as per the “what was he thinking?” scenario above: if vehicle A can signal intent with more lead time, more consistently, to more of the relevant peers and bystanders, matters should become that much safer.

For years, European companies have been trialing road trains: a lead heavy truck invites close followers (who hand control off to a wi-fi network) to ride in its wake. The tighter following distances improve fuel economy and free drivers to attend to less mundane chores. Now what happens when self-driving vehicles can self-assemble? If 8 or 10 cars and trucks all enter I-95 from around Princeton going northbound, what if they form a road train until the first of the vehicles comes to its exit? And what if there’s a mechanical connection, like those magnets on wooden train sets? Road utilization improves, fuel economy improves, and the lack of human drivers means nobody is bothered by the unappealing view of the vehicle ahead.
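
To make the self-assembly idea concrete, here is a toy sketch (Python). The message fields and the grouping rule are my own illustrative assumptions, not any actual V2V standard:

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class VehicleAnnounce:
    # Hypothetical V2V broadcast; field names are illustrative only.
    vehicle_id: str
    entry_ramp: str          # e.g. "I-95 N near Princeton"
    exit_number: int
    can_platoon: bool

def form_road_trains(announcements, min_size=2):
    # Group platoon-capable vehicles that entered at the same ramp; order each
    # train so the vehicle exiting soonest rides at the rear and can peel off
    # without splitting the rest of the train.
    by_ramp = defaultdict(list)
    for a in announcements:
        if a.can_platoon:
            by_ramp[a.entry_ramp].append(a)
    return [sorted(group, key=lambda a: a.exit_number, reverse=True)
            for group in by_ramp.values() if len(group) >= min_size]

Whether that grouping happens peer-to-peer or through roadside infrastructure is exactly the V2V-versus-V2I question raised above.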

Given the currently lousy security of Internet of Things devices in general, and of wireless car systems in particular, for this mesh-vehicle future to be safe, there will need to be massive strides in, and probably complete rethinking of, security practices. That means clearly understanding the trade-offs being requested and granted. The security domain thus echoes the larger debate that will emerge over what vehicles can and cannot, should and should not, tell and learn in interaction with other vehicles and the world.

2) What happens when cars don’t project personal identity?

I know something of my readership. You’ve driven stealth M3s, E-types, Z06 Vettes, and of course 911s. This is not mass-market car-dom, but the extreme proves my point. In many cases, vehicles are designed as much to project the owner’s psyche as to perform on the road. Compare a Suburban to a minivan, or a new pickup truck to a 30-year-old version. Towing capacity aside, the functional performance may be similar, but the features, price (I didn’t say cost: GM builds massive profit into that Suburban), and enhancement of one’s personal brand are very different.

Auto executives have already begun publicly worrying about vehicles designed and sold only as appliances: if people buy transportation as a service rather than as a product, the design remit changes. As a rider, I certainly prefer the S-class Mercedes taxis common in Europe to the cramped Priuses (Prii?) I get in US cities. Do I insist on a certain type of car to haul me around, especially when nobody will see it in my driveway either way? No way. Once autonomous vehicles are optimized for whatever we decide to optimize (please make the London taxi one of the blueprints . . .), conspicuous consumption will fall far down the list, particularly for mass-market cars and trucks. (Read more here and here.)

3) What happens if safety is reset to a higher priority?

There’s a famous assignment given to engineering students: design a protective enclosure such that the egg inside it can survive a fall from a specified height. I thought of it immediately when I saw this effort by an MIT team to understand safety trade-offs to be encoded in autonomous vehicle algorithms.

Why the egg drop? All of the Moral Machine scenarios embed numerous assumptions; I wanted to challenge those, not take them as given. The passenger cocoon could be one such: when we design, license, and support different vehicle designs, what do we want optimized?  Should pedestrian safety be a higher priority than what happens to a car’s passengers? Why or why not? To give an example, Google has a patent on a flypaper-like technology to snag people on car hoods after they’ve been struck. How will we set the weights of protection for passengers, cargo, pedestrians, bicyclists, driveway shrubbery, and other features of the driving environment? How much will this be done by markets, how much self-policing will we see, and how much government regulation will be imposed?

If safety is a higher priority than it is today, why have windows at all? Boeing has suggested replacing airplane windows with display screens, so why not equip cars the same way? At the same time, if passengers are cosseted in metal tank-like vehicles, what unintended consequences could there be for driving algorithms, bystanders, and even fast-food restaurants?

Just as interstate highways, McDonald's, and suburban sprawl could not have been foreseen by Daimler, Ford, and Durant, we will see parallel discontinuities in the coming century. Railroad ownership, to take one example, went from being the source of massive wealth and prestige (as with Stanford or Vanderbilt) to a joke (Conrail) in a relatively short time. Entirely new ancillary services and industries emerged, and may now recede: will anyone become a billionaire owning parking lots in the next 30 years?

A key question relates, as it always does, to the speed of the transition. From an engineering standpoint, designing a world of only autonomous vehicles would be relatively easy; similarly, we know how to build roads, vehicles, and venues for cars with people driving them. The mixed zone, to borrow a term from the Olympics, is what we currently face, and it’s one hard problem on top of another:

What are the business models? (Google is struggling with this as I write, having replaced a respected-but-departed roboticist with an Airbnb executive to run the car project.) Who bears liability? Who pays to upgrade the infrastructure, whether to fill potholes or install beacons in road signs? What signaling conventions can carry over, and what new ones need to be designed? Can the camera of a self-driving truck reliably see a turn signal, or is a radio message more appropriate? Where will trusted suppliers come from (Panasonic, a partner in Tesla’s battery factory, is emerging as a big player in the global automotive ecosystem, for example)? What about car loan companies, service bays, and other businesses whose mission will be redefined? How will gas stations as real estate, and oil companies as businesses, be forced to evolve? (Charging a battery takes a lot longer than filling a tank: how many 20-pump gas stations will become 20-plug chargers? Can local electric grids handle such concentration?) Which brands will win and lose? Why? How will human-driven cars fade into antiquity? Will valets, for example, be kept on for ceremonial value at high-end destinations? Which internal-combustion vehicles will have the longest careers in the new era, and which will be most prized by collectors? What happens to the freed-up parking and other infrastructure? Does congestion pricing a) disappear or b) enter a new phase of complexity? What happens to municipal revenues dependent on traffic tickets? How will domestic architecture evolve without the need for the same size and type of garages? Where will housing be located relative to work? The list goes on, but suffice it to say, Detroit and its extended network are in for the shock of a lifetime.

Postscript: As I was wrapping up this newsletter, tech analyst Benedict Evans of Andreessen Horowitz posted a series of tweets asking many of these same questions.

Saturday, July 30, 2016

Early Indications July 2016: The Beer Issue


While a focus on beer in mid-summer is timely, the economics of the industry are fascinating year-round (hopefully even to non-drinkers). The big growth is happening in the craft segment, where sales are up about 15% in an overall declining market. Big brewers have responded predictably, buying such brands as Blue Point, Shock Top, and Goose Island. And, oh yes, the world’s two biggest brewers look like they will merge, minus some divestitures. I’ll leave the details of AB InBev/SABMiller to the bankers and financial reporters; it’s a pretty complex deal. Apart from that, what is the current state of play in the U.S., why are craft brews thriving, and what might the future hold?

The current state of play in the U.S. beer market can be summarized as big and light. Seven of the top 10 sellers are light beers, led by Bud Light with sales of $2 billion annually. The second 10, smaller in revenue by more than a factor of 10, is much more interesting, led by Blue Moon (always owned by MillerCoors but branded like a craft beer) and including Pennsylvania’s Yuengling, Samuel Adams, Sierra Nevada, and Leinenkugel out of Wisconsin. Considering that the light beers in the top 20 add up to $5.5 billion in sales, $140 million for Yuengling or $80 million for Sierra Nevada might seem like a drop in the bucket.

The relevant number for a Goose Island or a New Belgium Fat Tire, however, is not necessarily sales, but growth. The number of U.S. craft breweries surged from 284 in 1990 to more than 1,500 in 2000, then past 2,000 companies in 2011 and nearly 3,500 in 2015. Imports are fairly stable, running about 35 million barrels from 2004 to 2014, but the 24 million barrels of craft beer sold in 2015 represented a 13% annual increase in quantity and a 16% increase in dollar volume over 2014. There's a loose craft-industry effort to hit a collective 20% market share by 2020; it's not an impossible target.

Just about any locale can now have some microbrewery presence; there are four (plus one in the process of being reopened) in our county alone. Skill is necessary, as is capital: those 100-barrel stainless steel tanks you might see in your local brew pub list for about $35,000 apiece. The U.S. craft phenomenon is entering middle age: Sierra Nevada was founded in 1979, when Jimmy Carter deregulated the industry; Boston Beer Company (Samuel Adams) followed five years later. Thus some founders are finding ways to cash out and/or generate capital for expansion. One path is an employee stock ownership plan, where employees buy out some percentage of the business. Another is private equity, while still other firms sell to an acquiring brewer, often from outside the U.S.: recent deals include major acquisitions by a Belgian company and a Spanish brewer.


Indeed, the global nature of the beer industry is a topic unto itself. Given the high shipping costs and perishability of the product, local production can be a competitive advantage (especially if the primary facility might be threatened by drought). Licensing agreements are common: Australia’s Foster’s Lager is owned by SABMiller (headquartered in London) and brewed for the North American market in Toronto at a Molson facility. Sierra Nevada added capacity to its California operation with a vast facility in western North Carolina, as did Colorado brewers Oskar Blues and New Belgium on a smaller scale. California-based Lagunitas opened a Chicago taproom in a former steel mill. Now that the two global giants — AB InBev (headquartered in Belgium and owning Anheuser-Busch) and SABMiller — are set to merge, there will be more waves of change that could affect most any country on earth at some point.


Returning to the local impact of craft brewing, I haven't seen anything connecting brewing to the alleged U.S. manufacturing revival, but maybe it should be included. After traversing the continent in his small plane for three years of research, The Atlantic’s James Fallows posited a list of factors that predict a town will be in good shape:


“#11: Craft Breweries. A city on the way back will have one or more craft breweries, and probably some small distilleries too. . . . A town that has craft breweries also has a certain kind of entrepreneur, and a critical mass of mainly young (except for me) customers. You may think I’m joking, but just try to find an exception.”

Jeff Alworth, in All About Beer magazine, expands Fallows’ observation from correlation to causation: he argues that craft brewing drives economic development, and his logic is compelling:


“Breweries are industrial operations, and they’re expensive. Beer is a mass beverage, and even making it on a brewpub scale means you have to have quite a bit of space for the brewhouse, fermentation, and storage. All that equipment costs a lot, and real estate does, too. When you’re spending a quarter- or half-million dollars on equipment, you can’t afford expensive commercial space. So breweries end up on the fringes, in bad parts of town where the rent is cheap. That alone is the first step of revitalization.


But breweries aren’t like the average industrial plant. They are people magnets, bringing folks in who are curious to try a pint of locally made IPA. In fairly short order, breweries can create little pockets of prosperity in cities that can (and often do) radiate out into the neighborhood. Pretty soon, other businesses see the bustle and consider moving in, too. It doesn’t hurt that breweries often find run-down parts of towns that have great buildings. Once a brewery moves in and refurbishes an old building, it reveals the innate promise of adjacent buildings to prospective renters.
. . .
But the effect may even be stronger in smaller communities. Little towns are often underserved with regard to cool places to hang out. When they open up shop, they provide much-needed social hubs. That the rent is cheaper there than in big cities gives these breweries a competitive boost, to boot—and we have seen many small towns (like Petaluma, California; Kalamazoo, Michigan; and Milton, Delaware) spawn outsized breweries. And whether they’re in small towns or cities, breweries serve an important community-building function. They’re not only a nice place to spend an evening, but serve as venues for events like meetings, weddings, and even children’s birthday parties.” 
*****

The consumer appeal of craft beers has many facets, but a few of these include the following:

-Craft brewers eschew light beers, the nearly-clear light lagers that dominate the U.S. sales charts. Instead, nearly every craft brewer needs an India Pale Ale to prove its mettle. IPAs can have up to twice the alcohol content of an American “lite,” their taste is rated in bitterness units derived from the hop content, and they cost more to brew, driven by lower volumes and more expensive ingredients (bought in smaller quantities). Thus the craft movement represents a strong case for government deregulation: consumer welfare has improved with increased choice and availability. If only airline deregulation had produced so many positive outcomes.


-Craft brewers often have a strong experience component, whether in tours, tastings, or just ambience. The Guinness tour in Dublin is highly orchestrated (thanks to the deep pockets of Diageo, the parent company) but still pretty engaging. I’ve heard good things about Troegs in Harrisburg, Yuengling in Pottsville, PA, Rogue in Portland, and Harpoon in Boston. Sierra Nevada offers a range of tours, including deep-dive sessions for true beer geeks.

-That experience component can be enhanced and/or amplified by social media. Dogfish Head is a leader in this regard. Advertising spend has never tightly correlated to improved sales, as Schlitz proved (to its detriment) in the 1970s, so social media's low cost, responsiveness, and intimacy make it a great tool for the job.


-The craft movement represents a cyclical return to regionalism. In 1900, and even in 1950, there were no dominant national brands. Schlitz, Falstaff, Ballantine, Schaefer, Hamm’s, Olympia, and Stroh’s all thrived until AB began its surge in the 1960s, Coors expanded beyond being a mountain-region favorite, and then Miller enjoyed first-mover advantage in the light beer category it invented in the 1970s. Yuengling is a special case (as the oldest U.S. brewery, it’s not really a craft), and still doesn’t have wide distribution. Regionalism aside, there is worldwide interest in craft brewing, so growth prospects remain strong: San Diego's Stone Brewing just opened a facility in Munich, and there's no reason to think other brewers won't follow it overseas.

-Experimentation is rampant, and rewarded by the market segment. Seasonal brews are a common offering from crafts but not the majors, further emphasizing uniqueness rather than standardization. In the last week alone, I've seen beers featuring habanero peppers, passionfruit, grapefruit, rye, wheat, and coffee in my routine travels. Oskar Blues has produced at least 10 variations of its flagship Dale's Pale Ale, according to Beer Advocate; the 17 versions of its Scotch Ale include one flavored with chocolate and marshmallow.


-Beer is being treated more and more like wine, with more “varietals,” ratings, competitions, food pairing guides, and so on. Label art can be low-budget, exquisite, or weirdly idiosyncratic. Different glasses are recommended for different brews, the same as wine.

-The home brewing movement connects the hobbyist and the professional in ways that the majors cannot; I’ve never heard a home brewer try to recreate Natural Light in his or her basement. As of 2014, sales of home-brew starter kits had been growing at about 20% year/year. That’s positive for microbreweries in many ways.

While researching this piece I found a report from the Federal Trade Commission from 1978 (on the eve of deregulation, I imagine not coincidentally). The author -- one Charles Keithahn -- was incredibly far-sighted, predicting that San Francisco’s then-tiny Steam Beer brewery would be the start of something: that very brewery was crucial in the birth of Sierra Nevada, now the archetypal craft brewer.

“And a number of the smaller companies will probably be able to survive for one or more of the following reasons: local loyalty, exceptional knowledge and responsiveness to local tastes and conditions, low transport costs and low advertising costs associated with serving a small market, excise tax breaks, . . .  or finding a special niche in the market. A few examples might include Latrobe, Pickett of Dubuque, Iowa, Spoetzl (Shiner) of Texas and, at least at last report, the Nation's smallest brewer, Steam Beer of San Francisco.”

Is there any other example where industry consolidation has spawned a counter-movement of variety, experimentation, and market enthusiasm? Medical devices, automobiles, airplanes, retail, energy, and tech don't really suggest any comparable examples -- especially if you look at the community-building aspect. All told, beer is good in more ways than the obvious ones. Happy summer.

Wednesday, June 22, 2016

Early Indications June 2016: What do we make of Artificial Intelligence?

For context, an old joke:

Q: What’s harder than solving the problem of artificial intelligence?

A: Fixing real stupidity.

In many current publications, the technical possibilities, business opportunities, and human implications of artificial intelligence are major news. Here’s just a sample:

-a computer beat one of the world’s top Go professionals about 10 years before many predicted such an outcome would be possible.

-Google is repositioning itself as an AI company, with serious credibility. IBM is advertising “cognitive computing,” somewhat less convincingly, Watson notwithstanding.

-Venture capital is chasing AI-powered startups in every domain from ad serving to games to medical devices.

-Established players are hiring top talent from each other and from academia: Toyota, Amazon, Uber, and Facebook have made noise, but Google remains the leader in AI brainpower.

-Corporate acquisitions are proceeding apace: just this week Twitter bought a London-based company called Magic Pony, which does image enhancement, for a reported $150 million. Those kinds of numbers (shared, in this case, among a team of only 11 PhDs) will continue to attract talent to AI startups all over the world.

Despite so much activity, basic answers are hard to come by. What is, and is not, AI? By which definition? What is, and is not possible, for both good and ill? The more one reads, the less clarity emerges: there are many viable typologies, based on problems to be solved, computational approaches, or philosophical grounding. I can recommend the following resources, with the caveat that no consensus emerges on anything important: the whole concept is still being sorted out in both theory and practice.

The Wikipedia entry is worth a look.

Here's a pretty good explainer from The Verge.

The Economist reports on the sizable shift of top research talent away from universities into corporate AI research.

Here’s a New Yorker profile of the Oxford philosopher Nick Bostrom.

Oren Etzioni has a piece on “deep learning” (neural networks at very large scale, best I can make out) in a recent issue of Wired.

Elon Musk called AI humanity’s “biggest existential threat” in 2014.

Frank Chen at Andreessen Horowitz has a very good introductory podcast explaining the recent boom in both activity and progress.

Apple is trying to use AI without intruding into people’s identifiable information using something called differential privacy.

Google’s AI efforts, by contrast to Apple’s, build on the vast amount of information the company’s tools know about people’s habits, web browsing, searches, social network, and more.

Amazon has multiple horses in the AI race, and recently made a high-profile hire.


Despite the substantial ambiguity related to the macro-level abstraction that is AI, several generalizations can be made:

1) Defining AI with any precision is problematic. Vendors including Google (“deep learning”) and IBM (“cognitive computing”) are well served by a certain degree of mystery, while the actual mechanics of algorithm tuning are deeply technical and often tedious. There are live questions over whether, or to what extent, using a given algorithm (a Kalman filter, used in both econometrics and missile guidance, or simulated annealing, an optimization method used in supply chains and elsewhere, to take two examples) counts as “doing” AI; a sketch of the latter appears after this list.

2) AI can work spectacularly well in highly defined domains: ad placement, cerebral games, maps and directions, search term anticipation, and more and more, natural-language processing as in Siri/Cortana/Alexa. Leave the domain, however, and the machine and its learning are lost: don’t ask Google Maps to pick a stock portfolio, or Siri to diagnose prostate cancer. The challenge of “general AI” remains a far-off goal: people are more than the sum of their map-reading, pun-making, and logical generalization abilities.

3) Hardware is a key piece of the recent advances. Computer graphics processors feature a parallel architecture that lends itself to certain kinds of AI problems, and the growth of gaming and other image-intensive applications is fueling better performance on the computing frontiers of machine learning. Google also recently announced a dedicated hardware component, the Tensor Processing Unit, built specifically to handle machine learning problems.

4) “Big data” and AI are not synonymous, but they’re cousins. Part of the success of new machine learning solutions is the vast increase in the scale of the training data. This is how Google Translate can “learn” a language: from billions of examples rather than a grammar, a dictionary, or an ear.

5) It’s early days, but one of the most exciting prospects is that humans can learn from AI. Lee Sedol, the Go player, says he is now playing better than before his loss to the Google computer. Whether with recipes (for tire rubber or salad dressing), delivery routes, investment strategies, or even painting, getting inspiration from an algorithm can potentially spur people to do great new things. Shiv Integer is a bot on the 3D-printing site Thingiverse, and the random shapes it generates are fanciful, part of an art project. It’s not hard to envision a more targeted effort along the same lines, whether for aircraft parts or toys. I would also bet drug discovery could benefit from a similar AI approach.

6) The AI abstraction is far more culturally potent than the concrete instances. The New Yorker can ask “Will artificial intelligence bring us utopia or destruction?” (in the Bostrom article) but if you insert actual products, the question sounds silly: “Will Google typeahead bring us utopia or destruction?” “Will Anki Overdrive (an AI-enhanced race-car toy) bring us utopia or destruction?” Even when the actual applications are spooky, invasive, and cause for concern, the headline still doesn’t work: ”Will the FBI’s broad expansion of facial recognition technology bring us utopia or destruction?” The term "AI" is vague, sometimes ominous, but the actual instantiations, while sometimes genuinely amazing (“How did a computer figure that out?”), help demystify the potential menace while raising finite questions.

7) Who will train the future generations of researchers and teachers in this vital area? The rapid migration of top robotics/AI professors to Uber, Google, and the like is completely understandable, and not only because of money. Alex Smola just left Carnegie Mellon for Amazon. In his blog post (originally intended for his students and university colleagues), he summarized the appeal: less bureaucracy, more data, more computing power.

          “Our goal as machine learning researchers is to solve deep problems (not just in deep learning) and to ensure that this leads to algorithms that are actually used. At scale. At sophistication. In applications. The number of people I could possibly influence personally through papers and teaching might be 10,000. In Amazon we have 1 million developers using AWS. Likewise, the NSF thinks that a project of 3 engineers is a big grant (and it is very choosy in awarding these grants). At Amazon we will be investing an order of magnitude more resources towards this problem. With data and computers to match this. This is significant leverage. Hence the change.”

It’s hard to see universities offering anything remotely competitive (across all 3 dimensions) except in rare cases. Stanford, MIT, University of Washington, NYU, and Carnegie Mellon (which lost most of an entire lab to Uber) are the schools I know about from afar with major defections; those 4 (absent NYU) are among the top 5 AI programs in the country according to US News, and I wouldn’t feel too comfortable as the department chair at UC-Berkeley (#4).
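To make the definitional question in point 1 concrete, here is a minimal simulated-annealing sketch (a toy of my own construction, not any vendor’s code). It is a handful of lines of ordinary optimization logic; whether running it counts as “doing AI” is exactly the kind of live question noted above.

```python
import math
import random

def anneal(cost, start, neighbor, temp=10.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: accept worse moves with a probability
    that shrinks as the temperature cools, to escape local minima."""
    current, best = start, start
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with prob. exp(-delta/temp).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        temp *= cooling
    return best

# Toy usage: minimize a bumpy one-dimensional function.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
print(anneal(f, start=0.0, neighbor=lambda x: x + random.uniform(-0.5, 0.5)))
```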


As in so many other domains (the implications of cheap DNA sequencing; materials science including 3D printing; solar energy efficiency), we are seeing unprecedentedly rapid change, and any linear extrapolations to predict 2025 or even 2020 would be foolish. Perhaps the only sound generalization regarding AI is that it is giving us strong reinforcement to become accustomed to a world of extreme, and often troubling, volatility. Far from the domain of machine learning, for example, a combination of regulations, cheap fracking gas, and better renewable options led the top US coal companies to lose 99% — 99%! — of their market capitalization in only 5 years. Yet other incumbents (including traditional universities) can still look at our world and say, “I’m immune. That can’t happen here.” Helping expand perspectives and teach us flexibility may be one of AI’s greatest contributions, unless human stupidity is too stubborn and wins the day.

Friday, June 10, 2016

Early Indications May 2016: Technology and inevitability

Human nature drives us to look backwards and see a series of developments neatly explaining the current situation: we all exhibit hindsight bias in some form. It’s much harder to look back and recapture the indeterminacy in which life is lived in the present tense. Technological history is particularly prone to this kind of thought and rhetoric: the iPhone was famously (but not universally) mocked upon its introduction, to take but one example; looking for “the next Microsoft” or “the next Google” is another manifestation. The projected “singularity” of digital cognition surpassing the human kind builds on the same logic. Coming in June, longtime tech observer Kevin Kelly’s new book is called simply The Inevitable.

It’s important to remember, however, that merely inventing (or imagining) a technology is a far cry from getting it into garages, factories, living rooms, or otherwise achieving successful commercialization. The low success rate for university technology transfer offices bears this out: a great molecule, material, or method does not a successful product make, absent entrepreneurship, markets, and other non-technical factors. This month I’ll run through a few technologies, some well-known and visible, others largely forgotten, that failed to achieve market success. I do this less out of nostalgia and more in the interest of tempering some current projections with a reminder that luck, competition, timing, and individual drive and vision still matter.

1) Very Light Jets
With Eclipse in the lead and Honda, Embraer, and others following, the late 1990s marked the height of the promise of a small, cheap (under $1 million new) aircraft that could both lower the barrier to personal jet ownership and fuel the rise of short-hop air taxi services. Eclipse shipped far later than promised at more than twice the projected cost, and performance problems were numerous: tires needed frequent replacement, the windscreens cracked, fire extinguishers leaked corrosive chemicals into sensitive components, the computerized “glass cockpit” failed to perform, and so on. A few air taxi firms went live (such as DayJet), but failed in the 2008 financial crisis, as did Eclipse. Honda, meanwhile, is prone to showing HondaJets in company advertising, but as of last December had delivered a total of one plane to a paying customer. Sale prices are in the $4 million and up range, more than a used Hawker or similar mainstream business jet.

2) Flying cars
However intuitive the appeal, flying cars remain a niche market occupied primarily by mad-scientist visionaries rather than established production teams and facilities. The latest attempt, the Aeromobil, is claimed to be ready for market in 2017. The video is pretty impressive. Much like VLJs, flying cars have failed as much for economic reasons as technical ones. Building such a complex vehicle is not cheap, and safety considerations raise the product’s cost in multiple ways: FAA certification, spare parts management, expensive short-run production, and insurance all factor into the actual operational expenses. Some of these expenses are out of the control of the aforementioned visionary (and in the Eclipse case, Burt Rutan has thoroughly impressive credentials), while other business challenges, including marketing, are common in tech-driven startups: who will buy this and what problem does it solve for a critical mass of real people?

3) AT&T Picturephone

Here is AT&T’s website, verbatim:

"The first Picturephone test system, built in 1956, was crude - it transmitted an image only once every two seconds. But by 1964 a complete experimental system, the "Mod 1," had been developed. To test it, the public was invited to place calls between special exhibits at Disneyland and the New York World's Fair. In both locations, visitors were carefully interviewed afterward by a market research agency.

People, it turned out, didn't like Picturephone. The equipment was too bulky, the controls too unfriendly, and the picture too small. But the Bell System* was convinced that Picturephone was viable. Trials went on for six more years. In 1970, commercial Picturephone service debuted in downtown Pittsburgh and AT&T executives confidently predicted that a million Picturephone sets would be in use by 1980.

What happened? Despite its improvements, Picturephone was still big, expensive, and uncomfortably intrusive. It was only two decades later, with improvements in speed, resolution, miniaturization, and the incorporation of Picturephone into another piece of desktop equipment, the computer, that the promise of a personal video communication system was realized."

*I’m sure the story of exactly _who_ in the Bell System drove this $500 million boondoggle is fascinating, if heavily revised.

4) Voice recognition software
Bill Gates is very smart, and obviously has connected some pretty important dots (as in the Internet pivot Microsoft executed in the late 1990s). On voice recognition, however, “just around the corner” has yet to come to pass. His predictions began in earnest with his 1995 book The Road Ahead, and in numerous speeches since then (well into the 2000s), he doubled down. Even now, in the age of Siri/Alexa/Cortana, however, natural-language processing is a very different beast compared to replacing a keyboard and mouse with talking. Compare two statements to see the difference: “What is the temperature?” vs “highlight ‘voice recognition software’ and make it bold face.”

5) Nuclear civilian ships
President Dwight Eisenhower sought to temper both Americans’ and other nations’ fears of atomic and nuclear weapons by encouraging peacetime uses (his “Atoms for Peace” speech was delivered in 1953). The NS Savannah, a nuclear cargo ship, was intended to be a proof of concept, and it remains a handsome vessel a half-century on. The ship toured many ports of call for publicity and drew good crowds. Reaction was mixed, and the fear of both nuclear accidents and waste leaking into the oceans proved prescient as the US vessel and, later, a Japanese civilian ship both experienced losses of radioactive water. Although operational costs are low, the high up-front investment and, more critically, unpredictable decommissioning and disposal costs presented unacceptable risks to funding agencies and banks. Despite some 700 nuclear-powered military vessels having become standard pieces of national arsenals, nuclear civilian craft have never caught on (with the exception of a few Russian icebreakers). A great BBC story on the Savannah (now moored in Baltimore) can be found here.


6) 3DO gaming console
After the 1980s, in which Sony’s Betamax format lost out to Matsushita’s VHS, consumers remained wary of adopting a technology in the midst of a format war. The lesson has been learned and relearned in the succeeding decades. In the early 1990s, Trip Hawkins (who founded Electronic Arts) helped found a new kind of console company, one based on licensing rather than manufacturing. The effort attracted considerable attention, but numerous problems doomed it. Sony and Nintendo can subsidize the cost of their hardware with software royalty streams; this is a basic element of platform economics, as seen in printer cartridges and other examples. The 3DO manufacturers lacked this financial capability, so a high selling price was one problem. In another basic of platform economics, software and hardware must be available in tandem, and there was only one game (Crash ’n Burn, ironically enough) available at the US product launch. In Japan, a later launch enabled a better reception, as six game titles were available, but within a year the platform had become known for its support of pornographic titles, so general adoption lagged. 3DO clearly had some technically attractive elements (some of which were never included in the Nintendo 64 and Sony PlayStation that followed), but the superior technology failed to compensate for market headwinds.

7) Elcaset
Unless you’re an audio enthusiast of a certain age, this is deep trivia. Sony introduced this magnetic tape format in 1977, and it was clearly technically superior to the audio cassette that had become entrenched by that time. The tape was twice as wide and moved twice as fast, improving the signal-to-noise ratio and allowing more information to be recorded, thus increasing fidelity. Like a VHS deck, the player handled the tape outside the plastic shell, improving performance further. Unfortunately, the added performance came at a cost, and few consumers saw any reason to embrace the odd new format, which was also supported by TEAC, Technics (Matsushita), and JVC. Also, no pre-recorded titles were available: this was the time when “home taping is killing music” (the 1980s UK anti-cassette campaign was later dusted off for the Internet age by the Norwegian recording industry association), and label execs were of two minds with regard to cassettes. In a curious twist I only recently learned about, Sony sold the remaining inventory of players and tapes at a wholesale auction to a distributor in Finland, where many of the machines, well-made as they were, continued to work for decades afterward.

This somewhat random collection of technologies yields very few generalizations. Having high-ranking executive sponsorship, up to and including the President of the United States, failed to compensate for deep fears and uncertain cost projections. Some failures came from corporate labs, others from entrepreneurs. Platform economics proved critical, whether for hardware and software, spare parts and airfields, or communications technologies. In the end, the only true generalization might be that markets are fickle, and very little technology is truly inevitable in its adoption.

Friday, April 29, 2016

Early Indications April 2016: Tesla Thoughts

In the absence of tech IPOs, must-have new apps, or killer demos, we are in a period of waiting:

  • will Uber continue to scale, win its legal battles, and develop a self-driving ride-share car?
  • will the rapid decline of so many “unicorn” company valuations chill the funding side of the cycle as Theranos et al become cautionary tales?
  • will Apple rebound from the quarter where it failed to set a revenue record, whether on iPad adoption, the watch, or growth in emerging economies?
  • will Facebook ever hit a wall past which privacy concerns, a saturated user base, and generationality slow its growth of ad revenue?

Amid all of this wait-and-see, one big shock has hit the tech world, and it’s more in the realm of bits (and electrons) than atoms: Tesla took $1,000 deposits for roughly 400,000 Model 3 sedans in under a month. For scale, BMW sold 100,000 3-series cars in the U.S. in 2015, a 6% drop from 2014. The name of Tesla’s car is no accident: the BMW 3-series is the standard for the mid-size sport sedan, and Tesla likely wants to do to that benchmark what the Model S did to Mercedes S-class, BMW 7-series, and Audi A8 sales: torpedo them. All of a sudden, Tesla is shaking up the automotive world, and every time I investigate, another interesting tidbit comes up.

First, the Model 3 reservations might not be the biggest news. Solar power is about to become cost-competitive in some climates (without subsidies), given new advances in sun-tracking technology for the arrays. One big drawback, obviously, is nighttime, so battery storage is one key way for solar to make sense. Tesla’s energy business unit is on track to sell 168.5 megawatt-hours of energy storage to SolarCity (a separate company also chaired by Elon Musk), according to GTM Research. That number is six times what Tesla sold SolarCity just last year, and a 60% increase on the entire industry’s output for 2015. In addition, the 85 kWh battery in the Model S is massive; just how big, I didn’t realize until I read that it can power the average household for 3 1/2 days. What does that do to electric company projections, to household disaster recovery, to our thinking about what charges what in the family garage? Tesla is remaking the auto industry, but power generation could be affected pretty radically as well.
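The household arithmetic is easy to check. A minimal sketch, with the daily-consumption figures below being my own rough assumptions rather than numbers from the article’s sources:

```python
# How long could an 85 kWh pack run a house? (Consumption levels are illustrative.)
battery_kwh = 85
for kwh_per_day in (20, 24, 30):   # frugal, moderate, and roughly average U.S. households
    print(f"{kwh_per_day} kWh/day -> {battery_kwh / kwh_per_day:.1f} days of backup")
# A household drawing about 24 kWh/day gets ~3.5 days, consistent with the claim above.
```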

Second, Tesla is learning the realities of manufacturing quality control, vendor management, and other “boring” supply-chain tasks that are tripping up the company. Some examples: Reuters reports that Tesla spent more than $1,000 per car on repairs, and set aside about $2,000 more per car for future repairs, on cars sold in 2015. Daimler (a more apt comparison than GM or Ford, given the average selling price) spent $970 per vehicle but set aside only $1,294. Tesla’s approach appears to be well justified: the company has missed ship dates, suffered from repeated quality issues, and is trying to rewrite the industry rule book with over-the-air software updates.

The Model X SUV (the one with the funky gull-wing doors) is getting blasted by online forum reviewers, at the Wall Street Journal, and from Consumer Reports (which loved the Model S). Even CEO Elon Musk said earlier this year, "I'm not sure anyone should have built or designed this car, because it's so difficult to make." Doors won't open (or open correctly), the heater is chilly, and the touch-screen freezes, among other issues. Some of this is a reflection of making something as complex as an automobile, now with more software than ever before. Musk tried to point out a bright side in one presentation, noting that only 6 out of 8,000 parts for one car were in short supply — but most of the time, a single part shortage can stop production entirely.

Third, Tesla is taking a bold tack on self-driving. Its cars on the road are minimally instrumented (in that they lack lidar), but they are recording driving data at a prodigious clip: one estimate claims that Tesla “learns” (in AI terms) as much in one or two days as Google has from all of its cumulative driving experience. Thus if Google sees one deer strike per hundred thousand miles, let’s guesstimate, then Google has a base of 12 or 15 deer strikes whereas Tesla has hundreds or even thousands. Every Tesla has a cellular data connection for the software updates, but that link also harvests driving data from owners who do not opt out of being guinea pigs. Thus the Model 3 could offer stronger autopilot capability than anyone else in the market when it appears. (See this for more.)
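Worked through with round numbers (the fleet-mileage figures are my own rough assumptions for circa 2016, not Tesla’s or Google’s disclosures), the rare-event math looks like this:

```python
# How many rare events does each fleet get to learn from, at the same hypothetical rate?
deer_strikes_per_mile = 1 / 100_000          # the guesstimated rate from the text
google_miles = 1_500_000                     # assumed cumulative autonomous test miles
tesla_miles = 100_000_000                    # assumed Autopilot-hardware fleet miles
print(google_miles * deer_strikes_per_mile)  # ~15 examples
print(tesla_miles * deer_strikes_per_mile)   # ~1,000 examples of the same edge case
```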

Fourth, the Model 3 buyers could face a nasty surprise if they are late in the queue. Specifically, the U.S. federal subsidy of $7,500 for electric cars phases out after a manufacturer has sold 200,000 qualifying vehicles. If U.S. sales of the Model 3 are 50% of the total, using round numbers, the subsidy (which can be augmented with state incentives in a given locality) will run out early in that 400,000 run: sometime in 2018. Thus buyers who came late to the party might pay the sticker $35,000 base price rather than $27,500 (or less in some states). In reality, Musk reported, the typical option package for the first weekend brought the total average selling price up to $42,000 or so.
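The queue arithmetic, using the round numbers above (the 50% U.S. share is the assumption from that paragraph, and this sketch ignores the fact that Model S and X sales also count toward the manufacturer cap):

```python
# Who still gets the $7,500 federal credit? (Round numbers from the text; illustrative only.)
reservations = 400_000
us_share = 0.5                    # assumption from the paragraph above
federal_cap = 200_000             # qualifying vehicles per manufacturer before phase-out
base_price, credit = 35_000, 7_500

us_reservations = int(reservations * us_share)
print(us_reservations, "U.S. reservations vs a", federal_cap, "vehicle cap")
print("early buyer pays:", base_price - credit)   # 27,500 before any state incentives
print("late buyer pays:", base_price)             # 35,000 once the credit is gone
```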

Fifth, the big question is, of course: can Tesla meet demand? The Model S began as a 15-20 units/week exercise in 2012 before hitting 1,000/week in 2015. Assuming early growing pains but a faster ramp, 400,000 is a big leap, from 50,000 a year to something close to 3 or 4 times that, in less time than the Model S took to reach scale volumes. The good news is that the Model X complexity costs were lessons well learned, and the Model 3 has the potential to be the “best of all worlds,” assuming a) battery production at the Tesla/Panasonic factory in Nevada comes on line as predicted, b) the same engineering that delivers “stunningly graceful” ride quality in the Models X and S can be scaled down to meet the price point (in part with a steel rather than aluminum body, most likely), and c) the factory processes, vendor management, and warranty issues can be contained.
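A sketch of what those ramp rates imply for clearing the reservation backlog (the weekly rates beyond 1,000 are hypothetical, and the backlog is treated as fixed):

```python
# Years needed to clear 400,000 reservations at various sustained build rates.
backlog = 400_000
for per_week in (1_000, 2_000, 5_000):        # 2015 Model S pace, then hypothetical ramps
    per_year = per_week * 50                  # assume ~50 production weeks per year
    print(f"{per_week}/week is {per_year:,}/year -> {backlog / per_year:.1f} years to clear")
```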

For those who ask, no, I did not reserve a Model 3: range anxiety in the middle of nowhere is real (the nearest Supercharger is more than an hour away, and there are none in the places I tend to drive for vacation).

Finally, the wonders of Quora continue to amaze me. There, I learned about the Model S as a “green” car: most electricity is not carbon-free, obviously, but how much does power source matter? If we use a Toyota Prius as a benchmark (19 metric tons of CO2 per year), the Model S wins only if it’s on a clean-running grid, such as the California mix of fuels/methods (11 metric tons) or if one can connect to wind (at which point emissions fall below 1 metric ton). A coal-fired diet for the Tesla’s electricity results in a 34-metric-ton CO2 toll, nearly twice that of the Prius.
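The structure of that comparison is easy to sketch. The numbers below (annual mileage, a ~50 mpg hybrid baseline, ~0.35 kWh per mile for the Model S, and rough grid carbon intensities) are my own illustrative assumptions and count driving emissions only, so the absolute tonnage differs from the Quora figures cited above; the ranking, and the roughly two-to-one penalty on a coal-heavy grid, comes out the same:

```python
# Driving-only CO2 comparison: hybrid baseline vs. an EV on grids of varying carbon intensity.
miles_per_year = 15_000
prius_tons = miles_per_year / 50 * 8.9 / 1000      # gallons burned * ~8.9 kg CO2 per gallon
ev_kwh_per_mile = 0.35
grids = {"coal-heavy": 1.0, "California mix": 0.25, "wind": 0.02}   # kg CO2 per kWh, assumed

for grid, kg_per_kwh in grids.items():
    ev_tons = miles_per_year * ev_kwh_per_mile * kg_per_kwh / 1000
    print(f"{grid}: EV {ev_tons:.1f} t/yr vs hybrid {prius_tons:.1f} t/yr")
```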

Given Apple’s share price slide and the apparent saturation of its main markets, along with the heavy cross-pollination of engineers who have worked at both companies, should Apple buy Tesla? Apple’s supply-chain and marketing expertise and its capacity for major capital expenditures make it a seemingly attractive suitor. In addition, Tesla CEO Elon Musk may be too busy with Mars mission plans in his SpaceX capacity to get deeply engaged in the much less interesting earth-bound business issues such as those at Tesla: quality control, procurement analysis, lobbying for company-owned dealerships, etc.

I’m partial to another scenario, however: Apple could team up (somehow) with BMW, a company with a viable electric compact already in the market in the i3 at $44,000 MSRP. The two brands share customer bases, design aesthetics, and profit margins. Apple CEO Tim Cook is reported to have visited the i3 assembly line, which in itself is pretty amazing (see here), as is Tesla’s, to be sure. Tesla has a nice injection of working capital from those deposits, but also a tall order in the need to execute a step-function increase in production, engineering, and after-sales service on an entirely new scale. Cook’s expertise is in supply chains, and he likely understands better than most the risks facing Tesla at this juncture.

However it plays out, cars are suddenly “cool” again, for entirely new reasons, in part because the global smartphone market appears to be saturating. Wherever one sits in the tech industry (except at Amazon Web Services, apparently), there seems to be concern and caution rather than the unbounded-horizon talk to which we’ve grown accustomed (Intel just laid off more than 10,000 employees, to take only one example). Seeing who emerges from the recalibration will be fascinating indeed.