Saturday, December 29, 2018

Early Indications December 2018: 5 Weak Signals

2018 was a challenging year for much of the tech world, and not only because of the depressed stock market. A number of high-profile failures taught several lessons:
  • maintaining momentum (and revenue growth) at vast scale is truly difficult and ultimately impossible: nothing can grow forever.
  • finding hard but solvable problems is tougher and tougher, in part because so much has to go right to win in billion-person-sized markets.
  • whether in Moore’s law, pharmaceutical drug development, or social software, after you solve the initial hard problem, the road only goes uphill and gets rockier. Apple delivered the iPod, then the iPhone/iPad, and that may be the extent of its core-market wins. (Cars? Healthcare? I’m not holding my breath.) In hardware, Intel entirely missed the mobile and smartphone segments after dominating the desktop, and Nvidia is seeing a major slowdown in sales as bitcoin miners and self-driving cars fail to provide the updraft the company expected. 
Here are five weak signals I'll be watching in 2019 and 2020.

1) Hardware robotics lacks key knowledge of how people understand machines

Over the course of 2018, three high-profile robot companies shuttered operations. Kuri, a social robot built on a platform Bosch hoped to commercialize, was priced at $700. Jibo, founded by MIT affective-robotics pioneer Cynthia Breazeal, had raised upwards of $60 million and was designed to sell for about $1,000. Finally, the biggest fish in the pond of dying robot companies was Heartland, later renamed Rethink Robotics. It was founded by MIT AI/robotics superstar Rodney Brooks and had raised $150 million to make user-friendly (no safety cage required) light industrial robots that could be simply programmed and work amidst people. Its so-called “cobots” (collaborative robots), named Baxter and Sawyer, were aimed at small manufacturing and shipping facilities that needed an occasional helping hand: flexibility and adaptability were the bots’ calling cards rather than power, durability, or accuracy.

In the case of the household bots (Kuri and Jibo), the functionality was largely achieved by Amazon Echo and Google Home smart speakers at 5-15% of the price. Jibo employed a complex and expensive 3-axis motor system to give the table-top device an emotionally appealing “body language.” It was billed as a companion but couldn’t keep a shopping list, told jokes but couldn’t shoot decent photos, and joined other prominent crowdfunding projects that over-promised and under-shipped. Jibo started out global and was retrenched to North America only. And so on. (For those who are interested, here’s an excellent post-mortem on the Jibo.)

In all three cases, figuring out what people value in a robot, how they value it, and what they will give up in exchange still appears to be beyond the state of the field: even deep-pocketed giants including Sony and Honda have found consumer robotics to be an impossible mass market to crack. In part, engineers tend to design robots that cannot be produced cheaply enough to hit viable price points. Further, it’s unclear how many people want to bond with an electronic device: even the Roomba, a relative success, does not remotely approach Apple/Android sales volumes. In short, it’s not at all clear what widely shared problem Jibo solves that Alexa can’t.

2) Social media giants quite simply do not value their users’ identity and privacy (in the US especially)

Every few weeks, Facebook is found to have done something that demonstrates a complete disregard of users as people. Twitter continues to condone personal abuse, often by troll bots, while presenting “innovations” unrelated to the service’s core mission. Google+ was a privacy disaster (here’s a good article), starting with a “real names” enforcement that ignored the core insight of Google’s own Circles product, namely that people relate to different people through different identities: Adam in a classroom, Adam the book author, Adam the scout troop leader, Adam the poker player, Adam the church deacon, Adam the AA member. Internationally, Alibaba’s Sesame Credit was for a time connected to the larger Chinese governmental social credit initiative, and it’s possible that Alibaba’s cloud computing subsidiary is involved with running the latter, given that it frequently serves as a contractor to governmental agencies. 

If a US citizen travels to Europe and logs on, meanwhile, Google helpfully presents all manner of privacy tools that are either hidden or unavailable stateside. (I have no idea what Facebook logins look like in either the US or EU.) Google leads US corporations in lobbying spend, and it’s easy to see why, given the larger dynamics at play in the ad/media landscape. It’s staggering how little people comprehend about what’s happening: students regularly assert that “Facebook sells our data” when a) that’s not literally the case and b) the truth is that Facebook largely constructs, then monitors and markets, these students’ digital identities, which is something else entirely. 

There is no analogy in economic history (“data is the new oil” is wholly insufficient) that adequately explains or even suggests what is happening, and Google is right to be worried about regulation that could be written ham-fistedly and have many unintended consequences. At the same time, who outside of maybe EFF is lobbying on behalf of the users? Someone on Twitter made a great point about poor people who use Facebook as a utility: white elites who step off (Walt Mossberg being the latest) are in a very different situation compared to many millions of Facebook users, given their many sources of social capital. I’m not at all sure what _should_ happen here, but am watching all the same.

3) How long can Netflix abandon the long tail to play neo-Disney hit-maker?

Long ago and far away, Netflix rented every possible DVD, satisfying customers’ need for choice and curation (“people who liked X also liked Y”). Beginning about ten years ago, as streaming supplanted the mailed discs, Netflix began investing vast sums of largely borrowed money in original content. All those old movies people used to rent went into mothballs, replaced by Netflix originals and a few select chestnuts to be trickled out then pulled back. If you or your children want Disney titles, meanwhile, those will all likely be gone in a year as Disney shifts its content to its own streaming platform. 

Netflix is no longer an online Blockbuster Video; it’s competing head to head with Disney and Viacom not only for viewership but increasingly for creative talent. Longtime Hollywood writers and producers are getting contracts from Netflix in nosebleed territory: Shonda Rhimes (responsible for hits including Grey’s Anatomy and Scandal) landed a production deal reported in the $100 million range. 21st Century Fox lost Ryan Murphy (Glee, 9-1-1, The People v. O.J. Simpson); Netflix gave Murphy a $300 million deal, according to whisper numbers. Finally, Netflix has spent still more money on production infrastructure in both Los Angeles and New Mexico. It’s a simple question: how long can Netflix borrow the money to pay more people to build more content to satisfy its viewers who live in more and more (and more diverse) countries every year?

4) Who will pay for 5G?

As 5G wireless rolls out in the coming years, the confusion will be considerable. AT&T is calling its 4G improvements “5G Evolution,” which is not true 5G. 5G is not backward compatible with 4G, so users will need new equipment, and what that equipment looks like will be interesting to watch. True 5G features higher transmission speeds and lower latency; gamers should be very excited. It also requires more antennas because the cells are smaller. Thus stationary broadband (home and small business) looks to be an opportunity for wireless providers led by Verizon and AT&T to steal market share from wired broadband providers including Comcast and . . . AT&T. 

Besides watching Netflix and playing Fortnite, what else is 5G good for? True enterprise-grade broadband is frequently mentioned as a prerequisite for “smart cities” in which everything from water mains to traffic lights to drones and other surveillance cameras can be instrumented and eventually programmed to perform better. Traffic gridlock alone is often mentioned as a substantial opportunity.

There are many opportunities, some sinister (China’s social credit scoring, or enhanced police and government surveillance here at home, possibly without adequate oversight). But the more relevant issue was illustrated this summer during the California wildfires. Verizon was throttling a firefighting team that had exceeded its monthly wireless data limit, the fire chief linked the behavior to net neutrality, and the PR hit to Verizon was significant. The more important point for our purposes is that state and municipal governments are cash-starved: property-tax receipts drop every time Sears closes a store, health-care costs rise every year, and longer life expectancies mean longer pension payouts to retirees. While enhanced gunshot location, faster infrastructure repairs, and other sensor-driven municipal scenarios sound appealing, a real question lurks: who exactly will pay for the bandwidth and related equipment to make them happen?

5) Last year it was Bitcoin, this year it’s supply chain blockchain

Back when I worked in consulting, there was a saying: “in mystery there is margin.” Conversely, once enterprise IT buyers figured something out for themselves, its price dropped rapidly into commodity territory. Thus we see consulting firms spending lots of money, time, and effort on “thought leadership” built for the express purpose of being able to charge for something the client doesn’t fully understand.

In 2019, when more people realize that “enterprise blockchain” is a modern implementation of a distributed database, some of the irrational exuberance may die down. Yes, faster invoice reconciliation, more granular product recalls, and less fraud are benefits, but those benefits do not accrue because blockchains are built from some kind of magical fairy dust. Note that IBM’s definition of blockchain never once mentions the new kid’s relationship to its grandfather, the distributed database.
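To make the “distributed database” point concrete, here is a minimal sketch (illustrative only, not any vendor’s actual implementation) of the mechanism underneath most enterprise blockchain products: an append-only log in which each record carries a hash of its predecessor, so tampering with history is detectable. Everything else is classic database technology.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """An append-only log where each record commits to its predecessor --
    the tamper-evidence trick at the heart of 'enterprise blockchain'."""
    def __init__(self):
        self.blocks = [{"index": 0, "data": "genesis", "prev": ""}]

    def append(self, data):
        # Each new block stores the hash of the block before it.
        prev = block_hash(self.blocks[-1])
        self.blocks.append({"index": len(self.blocks), "data": data, "prev": prev})

    def verify(self) -> bool:
        """Recompute every back-link; any edit to an earlier block breaks the chain."""
        return all(
            self.blocks[i]["prev"] == block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )

ledger = Ledger()
ledger.append({"invoice": "A-1001", "amount": 500})
ledger.append({"invoice": "A-1002", "amount": 250})
assert ledger.verify()

ledger.blocks[1]["data"]["amount"] = 5000  # tamper with an earlier record
assert not ledger.verify()                 # the chain no longer checks out
```

The invoice-reconciliation benefit the consultants sell is visible in the last two lines: no party sharing this log can quietly rewrite a settled record. Distribution, consensus, and permissions are additions on top, not magic in the data structure itself.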


*****
There you have it. 2019 will be an important year for many tech companies to stabilize revenue growth, deliver meaningful innovations, and get their operational houses in order. Of course we will see surprises (Uber and Lyft’s IPO valuations?) but there don’t seem to be any hot new startups to track right now: the old order seems to be relatively stable, with Dell, IBM, and HP (among others, obviously) needing to reinvent themselves for the age of mobile device supremacy and eventual commercialization of AI. I don’t see a Netscape, a Salesforce, a Google, or a Facebook on the horizon, which probably spells trouble in the long term — where are the new innovations going to come from? — but makes for a relatively calm landscape in the short run.

Speaking on a personal note, I hope the new year brings health and happiness to each of you, along with your loved ones.

Wednesday, October 31, 2018

Early Indications October 2018: More


This month’s news cycle featured a story that a single ticket matched the numbers necessary to win a $1.5 billion prize. South Carolina, where the ticket was purchased, allows winners to remain anonymous, but as they say, good luck with that. I’m struck by the famous study by Brickman, Coates, and Janoff-Bulman, which found that lottery winners, shortly after the event, were barely happier than people who had recently become paraplegic. While it’s impossible to do a controlled study, the list of lottery winners who have made a complete mess of their lives is long.

My point here has little to do with lottery excess per se. Rather, I’d like to connect the fascination with “more” to the digital age. I can’t claim either uniqueness to the US or global applicability, but it’s easy to see examples of mismeasurement: specifically, we use numbers that are easy to derive to compare things that are much more subtle. Clayton Christensen addressed this tendency in How Will You Measure Your Life? At the end of my life, how good a parent was I? How careful a scholar? How effective a citizen? How inspirational and responsible a leader? Numbers don’t measure any of those particularly well.

Distributing computing broadly among the human population, then networking lots of it together, feeds this tendency. Back in its early years, Google posted the vast number of webpages it had indexed. Facebook friend counts distort many aspects of true human connection. Amazon boasts “earth’s biggest selection.” Having our fingers on the screen for hours every day is changing us and our kinship in insidious ways. Infinite availability of gossip, shopping, sports talk, or anything else is probably not a long-term win, but humans are easily hacked and we absorb this stuff under the guise of self-determination even as we are being powerfully and constantly gamed by any number of bots and psychological techniques.

Robin Dunbar famously hypothesized that a typical person can maintain about 150 meaningful relationships, Facebook’s counter notwithstanding. Our attention spans, memories, and dexterity are finite, yet screens, game controllers, and other information flows are overwhelming us. The NYU academic Clay Shirky proposed that we don’t have data overload; rather, the technologies of information filtering are insufficient. That may be so, but in any event, our technologies are altering us, often in the name of “more” rather than “better.”

Where does this lead? There are relatively tiny efforts, parallel to the Slow Food movement, to resist the “more” of data: fewer numbers, better chosen, reflected on over time can be incredibly powerful. By analogy, firehoses can be great for extinguishing burning buildings but they’re worthless as nourishment. Vast selection, whether of mates or of toothpaste, ultimately doesn’t enrich us: Jim Gilmore (author, with Joe Pine, of The Experience Economy, now 20 years old) memorably told me that people don’t want infinite selection; they want what they want. Restaurants with thick menus rarely inspire confidence among foodies: it’s impossible to master crepes, lasagna, and stir-fry under one roof.

The food analogy begins to lead us to the main point: software may be eating the world, as Marc Andreessen asserted in the post of that title, but curation wins over sheer volume. Netflix delivered a vast selection of DVDs in its first decade, and for all the star ratings and recommendation engines, its share price remained flat. Now that Netflix delivers far fewer titles, and more of them are optimized for its viewers based on the combination of algorithmic analysis and in-house production, its investors are exuberant, the company’s debt habit notwithstanding.

In their book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee of MIT posit that humans and robots, teamed together, can perform better than either computers or people alone. Curation seems to be a task that largely awaits this approach taking hold more broadly: digital platforms love the scale of automated curation but stumble on the execution. Facebook is efficiently algorithmic but prone to being manipulated, as the 2016 election proved; Mark Zuckerberg and his senior management appear to be struggling with the human side of the content selection issue. Google image search had to stop returning results for some queries after machine learning labeled an image of people of color as gorillas. Amazon is replacing some of its category management (demand forecasting, price negotiation, and ordering inventory) with algorithms, freeing retail white-collar workers for new tasks. If the company performs well this holiday season, it may validate the experiment.

Where else might we look for joint computational-human curation as an antidote to the overwhelming flood of information coming at us? I would hope the education segment takes a leadership role but see few efforts in that direction. Booksellers are making a comeback for many reasons, recommendations prominently among them. Spotify hasn’t mimicked Netflix in production of original content; I can’t speak to the role of curation in its success to date, but it doesn’t seem to outperform a good DJ. Pandora’s origins in a “music genome project” (automated curation, begun in 1999) did not translate to market dominance. Choosing a vacation destination, a retirement portfolio, or a job candidate is still not well enabled by software + people; lots of error and wasted effort persist in these systems. 

Part of the reason for this slow progress is a skills deficit: the number of people who can understand a domain of knowledge (art history, auto mechanics, arthritis) and then can (and want to) translate that understanding into software-friendly form is extremely small. Jeff Bezos, a Princeton electrical engineer by training, is rare in this regard and unique in his entrepreneurial translation of that skill: Amazon Web Services, the Kindle, Alexa/Echo, and the company’s supply-chain practices all derive, I would argue, from similar roots in being able to bridge the gap between physical practice and software instantiation thereof. 

Going forward, perhaps we will see software tools, possibly variants of IBM’s Watson suite, that ease and accelerate human + computational information management. Perhaps a new computer language, cognitive model, or interface will enable the aforementioned domain experts to amplify their expertise to a larger audience. Or perhaps we will continue to tread water, barely staying afloat in an inexorably rising tide of noise.

Sunday, September 30, 2018

Early Indications September 2018: Why can’t your organization innovate like DARPA?



Following up on last month’s newsletter, which asked who was going to generate the big-scale innovations required for a growing world, I recently read Sharon Weinberger’s 2017 study of DARPA entitled The Imagineers of War. The effort was well worthwhile, if only for the nuanced explanation of the origins of the Internet. Paul Baran’s famous mesh diagram of a survivable communications architecture for command and control of the US nuclear forces is only part of the story. J.C.R. Licklider, a key figure in the history of human-computer interaction, was also involved: he saw far earlier than most of his contemporaries how having access to computing can change how we think, so his connection of computing to psychology was significant.

More generally, psychology was a hot area of defense and intelligence research in the late 1950s and 1960s, in part because of the interest in and fear of “brainwashing,” that is, some form of mind control. The bestselling book The Manchurian Candidate was but one manifestation of this fascination; Stanley Kubrick’s still-brilliant Dr. Strangelove also captures many aspects of the era with searing insight: mine-shaft gaps, purity of essence, and of course Peter Sellers’ brilliant one-sided phone conversations with his Soviet counterpart on the hotline.

Thus we have in some ways come full circle as the Internet was successfully used for psychological manipulation by Russian entities and surrogates in the 2016 election. History in this case did more than rhyme. For all the historical interest I had in the book, there are some concrete lessons: for all the attention DARPA gets for its successes — Stealth aviation, GPS, drones — the organization is likely to spawn few imitators. I posit that there are at least seven reasons for this.

1) Your organization isn’t motivated by defense of American interests
Being charged with preventing future surprises like Sputnik means that very little subject matter is out of bounds. Budgets are much bigger than private industry typically can mobilize. Patriotism can motivate devotion and behavior that cannot be simply hired.

2) Your organization doesn’t pursue enough whackadoo ideas
When it was first proposed, stealth aviation did not sound much more plausible than ESP between mother rabbits and their bunnies (the former were thought to know when harm befell the latter, even at a far geographic remove), telepathic spoon bending (accomplished by none other than Uri Geller), or antennas mounted on elephants to aid in radio communications through Vietnamese jungle foliage.

3) Your organization has short time horizons
Notwithstanding the common critique of corporate focus on quarterly numbers, even a 5-year plan is sometimes too short. GPS took 20 years between theoretical proposal and first satellite launch. ARPAnet was more than 5 years in the making when one of four connected computers sent the message “lo” to a second unit (it was supposed to be “login” but the system crashed), and the World Wide Web launched roughly 20 years after that.

4) Your organization can’t bury failures as secretly
The author of the DARPA book notes in the endnotes that classification has hidden both successes and failures from public view. A box of documents related to a James Bond-like jetpack remains classified many decades later. DARPA hasn’t undertaken an institutional history since 1975, when it was about 18 years old.

5) Your organization has more moral prohibitions
ARPA was deeply connected to the US war in Vietnam, attempting to use everything from fortune-tellers and soothsayers (who were paid to predict a Communist defeat) to Agent Orange (one of an entire family of air-sprayed chemicals designed both to cut off the food supply and deprive the guerrilla forces of jungle cover). GE proposed a mass galvanic polygraph to be used on entire villages. John Poindexter’s Total Information Awareness project, mass surveillance with little human or institutional oversight (AI was supposed to protect privacy), ran on DARPA money. Somewhere in the DARPA robotics budget there most likely exists a human-out-of-the-loop autonomous robot with lethal capability.

6) Your organization has more conventional hiring processes
Related to 2), DARPA has a long history of hiring rogue, unconventional, or outlandish individuals, then giving them long leashes. In the age of post-Sputnik fear, one Greek physicist proposed creating a defensive shield of high-energy electrons trapped above the earth’s atmosphere in the magnetic field. Multiple nuclear explosions were detonated high above the earth’s surface to try to validate the concept, which of course did not work in practice. One former DARPA director held the biannual agency conference at Disneyland not once but on multiple occasions. Program managers in the parapsychology field held the beliefs and credentials you might imagine for such a post.

7) Your organization has some nominal and procedural objectives
DARPA rarely gets too specific on what its mission and objectives are. Over the institution’s history, they have ranged from investigating counter-insurgency to counter-terror to space-based weapons to lots of secret stuff the public can’t see. Regimes have been supported with cash, expertise, and other assets, scientists (and psychics) have been funded, and numerous contractors have been generously enriched. Most of this activity is only loosely connected to an overall remit. 

As Weinberger notes in her conclusion, “the dilemma for DARPA is finding a new mission worthy of its past accomplishments and cognizant of its darker failures.” (p. 371) After failing massively in trying to win the hearts and minds of Vietnam’s people with as few US ground troops as possible, and after 30 years of a bipolar (US-USSR) world order, the age of Al Qaeda and related entities has proven more difficult to fight with technology. After tens or hundreds of millions of dollars of investment, for example, the best weapon for fighting against improvised roadside bombs remained . . . dogs’ noses. Very few of our civilian or even DoD organizations would be allowed to spend so much and come up with so little.

Sunday, September 16, 2018

Early Indications August 2018: The Next Big Innovation?

Few books have stuck with me the way Geoffrey West’s Scale (reviewed here last summer) did. I don’t fully buy the book’s argument for the applicability of natural scale laws to human structures such as cities (here’s a much smarter review than mine), but he did put the planet’s projected population in sharp perspective for me: worldwide, 1.5 million people will be moving to cities every _week_ for the next (now) 34 years. West argues, plausibly in my view, that we will need step-function innovations on the order of the Internet to feed, employ, cure, and transport all those people.
 
There’s a quasi-debate running between several economists and management scholars. Erik Brynjolfsson and Andrew McAfee at MIT argue that human organizational structures have lagged behind technological development, as they historically do. Robotics per se doesn’t put people out of work; rather, corporate, taxation, labor-law, education, and other structures don’t yet create a place for these new machines and humans in a larger, functioning economy. On the other side stands Tyler Cowen of George Mason, who says that we have harvested all the “low-hanging fruit” (his words) and that, compared to the 20th century, our era’s record of groundbreaking innovation is thin.
 
All three views hang together in my mind: we are due for another massively important innovation — including in the “rules of the game,” as it were. Since the iPhone launched the age of mass smartphone use 11 years ago, it’s hard to find truly important ideas: Uber and Airbnb are both about 10 years old, as is blockchain (in which China now leads the world in patent applications), which has yet to solve a truly important problem. Autonomous vehicles, meanwhile, are looking like less of a near-term bet (as recent news from Waymo illustrates). What am I missing?
 
Before looking ahead, let’s look back and see where the last few world-changing innovations came from:
 
-The Internet began at DARPA (in 1969, ARPA) but key components including the World Wide Web came from elsewhere (Europe’s CERN, in the case of the WWW). AT&T famously passed on the contract to build the Internet because its substantial expertise in the existing circuit-switched regime convinced it the technology would never work.
 
-Malcolm McLean owned a North Carolina trucking company and died worth about $350 million. His innovation? Containerized shipping: in 1956, when he piloted the idea, hand-loading a ship cost $5.86 a ton. Containerization dropped that to 16 cents per ton. “Globalization” and all that implies, including increased standards of living in many locales, rely heavily on his invention.
 
-Norman Borlaug earned a PhD at the University of Minnesota, then spent most of his professional life in Mexico, cross-breeding crops. He has been credited with saving a billion people from starvation and won the Nobel Peace Prize. His so-called (by others) “green revolution” was critiqued from several angles: input-intensive agriculture made seed, fertilizer, and tractor companies rich and farmers indebted. Large-scale farms (including road-building and other infrastructure) destroyed cultural practices and institutions associated with subsistence farming. Pesticides and monoculture have negative long-term environmental effects. All of that is true, but feeding a billion people who most likely would otherwise have starved deserves a healthy dose of credit.
 
Thus we see an entrepreneur, an individual humanitarian, and large-scale government agencies all making decisive contributions. Absent are corporations: yes, the Toyota Prius is 20 years old, but it has not (yet?) shifted the global auto industry off of fossil fuels. Even pure electric vehicles rely on a power grid that most likely begins with the burning of gas, oil, or coal. The great innovators of the past — GE, HP, IBM, AT&T, Xerox — no longer pack the research punch they once did. Innovations at Facebook, Google, Netflix, and Amazon are heavily tilted to the realm of consumer behavior, in which advanced algorithms are used on the relatively easy task of manipulating purchase and viewing patterns, one person at a time. 
 
What about big Pharma? In an age when economic rationality means $500 EpiPens and 5,000% price increases on off-patent drugs (see Shkreli, Martin), it’s hard to see the sector as currently constituted solving a capital-B Big human challenge. Meanwhile, as antibiotic-resistant bacteria get tougher to combat with every passing month, it’s not impossible to imagine that penicillin and its offspring may not matter (or matter much) 100 years after the drug’s discovery in 1928. As science cracks the code of the biome, particularly regarding the gut, entirely new modes of treatment may become feasible. If (very broadly speaking) the 19th century was the dawn of surgery, and the 20th belonged to the birth of entirely new categories of pills, perhaps we will see the potential of genetics and related science realized for the remaining 80 or so years of our century. There’s no guarantee the Pfizers and Mercks of the world will be the relevant parties for these to-be-built treatment modalities. Recall that Sports Illustrated did not launch ESPN, nor did Sony introduce the iPod.
 
Zooming back out to the larger issue of the innovations required for the planet we are rapidly populating, two key questions will have to be answered: 
 
-what kind of organizational structures will help envision and develop ways to feed, move, educate, and/or employ large numbers of people?
 
-in what domain will the truly big innovations reside?
 
It’s easy to perceive the trajectory of history as moving upward: higher standards of living, as measured by money. Longer life expectancies. Farther reaches of sea and space explored. I was reminded today, though, that part of a 9-billion-person planet will be doing with less: less animal protein, fewer square feet of housing per person, less social mobility in a given country even as the broader population does better on the whole. 
 
Thus the new innovation might be in the arrangement of social order: both the limited-liability joint stock corporation and republican democracy are human inventions (as are human slavery, dictatorship, and monarchies). The next big thing might be “social technology,” designed to organize large numbers of people, along with their wants and needs, just as we saw with the pre-Reformation Catholic church in Europe, or Pax Britannica from about 1815 to 1914. (Technology matters a lot for these social arrangements, as witness the printing press’s role in the decline of the former or the place of steam power in the latter.) Alternatively, there might emerge some innovation as we more traditionally define it: technologies to move people in cities, process and distribute nutritional protein, or teach people how to earn a living. In either case, time is getting short: according to United Nations projections, global population will hit the 8 billion mark in about 6 years.

Sunday, July 22, 2018

Early Indications June 2018: The Future of Home Entertainment

In the grand scheme of things, “home entertainment” of the electronic variety is a very young phenomenon. Radio is only about 100 years old, as is recorded music. For such a young technology, though, the industry does not stay static for very long. Consider how much instability (read: the need to buy new software and/or hardware) we have seen just since World War II:

1948           LP phonograph records
1963           Audio cassette
1966           Regular color television programming
~1969         Transistor audio amplifiers replace vacuum tubes
1976           VHS Videocassette
1982           Compact disc
1995           Sony launches PlayStation in the US
1996           DVD
2000           Streaming audio
2001           MP3s/iPod players (and the beginning of the earbud/headphone surge)
2005           Scalable streaming video
2006           Affordable flat-panel television

While it can be shocking to see the changes of the past 20 to 30 years, this industry has never held fixed standards for more than a couple of decades. What I find noteworthy now is the degree of instability we confront at every layer — hardware, software (content), distribution, and business models. Let us examine each of these in turn.

Hardware
Beginning in 1970, driven in part by US soldiers with access to Japanese electronics while stationed or on leave in the Far East, a stereo was defined as a turntable, receiver, and loudspeakers, possibly augmented by a tape deck. TVs and stereos might have been in the same room, but there was minimal interoperation until the advent of the videocassette, and even then, loudspeaker magnets interfered with cathode-ray TV picture tube performance at close range. Wires connected everything together. After 1980, the compact disc player might take the place of the cassette recorder and/or record player, but the basic architecture still held. 

After the DVD and the near-simultaneous transition to flat-panel TVs, hi-fi systems became multimedia affairs: a center front channel under the display and between the left and right traditional stereo speakers carried dialog, and if one got serious, a subwoofer replicated some of the special-effects magic of the big screen. In addition, computer gaming was often installed in the same system, since high-end consoles could drive high-definition displays.

At this time, copy protection entered the equation. Content owners saw how Napster began the catastrophic drop in sales of physical media, so movies and games in particular came with encryption to prevent copying. DVDs were soon cracked, but Blu-ray discs remain a challenge to copy. The copy protection takes a toll on usability, however: getting players, processors, and displays to sync can be frustrating, the audio signal is delayed by copy-protection switching lag, and the cost of both hardware and software went up. Many respected audio companies have refused to license the latest generation of high-resolution copy protection included in HDMI connections (HDCP 2.2), leaving only a small number of vendors in the market for what used to be called home theater receivers or processors. In part, this decision relates to business model issues we will discuss presently.

For whatever reason — usability expert Don Norman decried how complicated it was for an MIT PhD engineer (him) to set up a home theater — many home theaters are getting simpler. (Dolby Atmos, meanwhile, bucks the trend with an option for 7 surround speakers at more or less ear level, a subwoofer, and 4 speakers in the ceiling.) Soundbars are a 1- or 2-cable solution to multichannel audio, and they save domestic real estate for other things. Smart TVs can include sophisticated audio processing and cleverly compact, decent-sounding loudspeakers. Headphones are incredibly popular, including among hard-core gamers. Without a need for playback of physical media, the total box count might be as low as two: a smart TV and a soundbar, maybe augmented by an Apple, Google, or Amazon Fire streaming device. Music might come from YouTube, Spotify, Tidal, or Apple, either through a wonky music-streamer box or a spare laptop PC. Finally, given the popularity of Bluetooth speakers, many hi-fi vendors are making wireless loudspeakers, including soundbars.

Another simplification play is to use “smart assistants” (Echo, Google Home, Apple HomePod) to play music. Apple has applied the same model it used on the iPhone X camera: machine learning performs tricks with sound or images, in this case creating the illusion of stereo out of a single point. Expect more of this in the future, intersecting with steady interest in multi-room systems. Multi-room audio used to require extensive wiring and switch boxes, but now can be achieved wirelessly by Sonos and others. How smart homes, digital assistants, and entertainment hardware evolve will bear watching. Yes, you can start the movie, lower the shades, and dim the lights with a single voice command, even now, but how many people will devote the necessary resources and energy to first build and then maintain that capability through multiple hardware, firmware, and software upgrades? 

Software
The phonograph record has proven to be a durable medium, despite the delicate physical process required to get sound out of microscopic grooves cut into plastic. The role of the large-format cover art (complete with liner notes that might be readable), the role of quasi-steampunk affection for turntables (sometimes along with typewriters, single-speed bicycles, and so on), and the role of analog sound quality all can be debated. In any event, LP and turntable sales are robust — but still must be counted as a niche market.

Physical tape, usually enclosed in a plastic case of some sort, does not age well, and fidelity is well below digital levels. Few people would claim affection for VHS tapes that regularly broke, jammed, or lost alignment with playback heads. Optical discs last longer, but in the case of compact discs, the choice of a cheap plastic “jewelbox” case format was, in retrospect, a liability. Hinges, spindles, and lids regularly crack, the liner notes are printed in tiny fonts, and opening the shrink-wrap is among the least pleasant unboxing exercises I know of. DVDs last pretty well and usually come in sensible packaging, but they too are in disfavor, in part because of space considerations: today’s 20-somethings, I’m noticing, often own very little physical media of any kind, whether books, newspapers, photo albums, movies, or music. Several engineering wizards I know have built private content clouds for DVD, CD, even photo and video content. One such tool is the Plex media server, which can be deployed atop a storage system. 

Distribution
Obviously record stores, video rental outlets, and mega-sized media stores are scarce in the US. (I have seen the latter in other countries, however, including France.) Amazon, Apple, and Google move a huge percentage of US culture, joined by Spotify, Netflix, and the cable operators. Given the retreat from physical media, distribution gets much simpler in a streaming world, as witness Netflix’s savings on two-way USPS mailing of DVDs.

Distribution affects hardware too. Many towns had local vendors of televisions and hi-fi (including repair), but those stores are rapidly disappearing. For those who want to sample a TV’s color performance, Best Buy is one of a few options: such regional electronics retailers as Lechmere in Boston, Myer Emco in Washington, DC, and Pacific Hi-Fi in California are long gone, as are stores with larger footprints including Circuit City, Radio Shack, and Tweeter Etc. Some national mail-order firms made the successful transition to web retail: Audio Advisor in Michigan and Crutchfield in Virginia.

The latter got its start in car audio in 1974, answering the question: will model X tape deck fit my car? Yes, and here’s a kit with all the pieces and fittings to make it work. Crutchfield eventually built a vast database of part-fit descriptions matching after-market car audio to almost every kind of car. Another bit of audio business trivia relates to cars, in this case burglar alarms: Marantz and Denon, now both Japanese companies, are owned by Sound United, which also owns Polk Audio and other brands. Sound United is owned by Charlesbank Capital Partners, a Boston private equity firm. Sound United began life in 1982 as Directed Electronics, which launched the Viper brand of car alarm and remote-start gear. The founder of Directed Electronics, now worth more than $400 million (and probably the richest member of Congress)? Darrell Issa, who recently announced he will not run for reelection in his Southern California district.

With the decline in the number of audio-centric retailers, other options for hardware distribution are emerging. Some companies sell direct to consumer: examples include Emotiva and Outlaw Audio in electronics, and Zu Audio for loudspeakers, cables, and phono cartridges. Both electronics firms build at least some products in the US, which is a selling point for some consumers. Zu gained (and continues to gain) strong market awareness by constant eBay auctions of their cables; the speakers are in high demand and sell direct from the Utah factory, often with custom paint jobs. Obviously Amazon moves a huge number of TVs, headphones, and soundbars (some of which conveniently include Alexa integration).

Business Models
Just as home entertainment technology has never really stabilized in its 100 years of existence, neither have business models settled into a long-term stasis. At the same time that advertising is driving some segments of the Web and mobile economies, print and television are having to experiment with alternatives: Netflix’s absence of commercials is a tangible selling point, NFL Network’s popular Red Zone bypasses all ads, and HBO has always enjoyed the cachet of being subscription-driven and thus able to deliver shows like The Sopranos that network TV couldn’t touch. While some recording artists must tour to make a living, other web video personalities are making a killing off YouTube revenues. Streaming music has yet to demonstrate that Spotify et al. can pay artists sufficient royalties, and the technicalities of the payment process are incredibly complex. For example, radio airplay royalties pay only songwriters, not performers. Songwriters must register to collect royalties, and this might mean multiple transactions. 

There’s also record-keeping: a Norwegian research team has examined Tidal internal hard drives that appear to have been manipulated to inflate play counts for albums by Beyoncé and Kanye West — the wife of Tidal’s founder and the service’s co-founder, respectively. The numbers do smell funny: Tidal claims 3 million global subscribers, but the researchers believe that number could be overstated by a factor of three. For the play count for West’s The Life of Pablo to be accurate, every Tidal subscriber would have had to play the album eight times a day for the first ten days of the record’s release. One entry in the database was a user who allegedly played Life of Pablo tracks 96 times in 24 hours; users also “listened” to multiple songs at the same time. 
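A quick back-of-the-envelope calculation, using only the figures cited above (the 3-million-subscriber claim, the eight-plays-a-day implication, and the researchers’ factor-of-three suspicion), shows why the counts strain credulity. This is purely illustrative arithmetic, not the Norwegian team’s actual methodology:

```python
# Sanity-check of the Tidal play-count claims, using only the
# figures cited above (all values illustrative).
claimed_subscribers = 3_000_000   # Tidal's public claim
plays_per_day = 8                 # plays per subscriber implied by the counts
days = 10                         # first ten days of the album's release

# Total streams implied if every subscriber really played the
# album eight times a day for ten days:
implied_streams = claimed_subscribers * plays_per_day * days
print(f"{implied_streams:,}")     # 240,000,000

# If the subscriber base is overstated by a factor of three,
# the per-subscriber burden triples:
actual_subscribers = claimed_subscribers // 3
plays_needed = implied_streams / (actual_subscribers * days)
print(plays_needed)               # 24.0 plays per subscriber per day
```

Twenty-four full album plays per subscriber per day, every day for ten days, is the kind of number that makes database manipulation the simpler explanation.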

Why would Tidal lie? Sony paid the service $2 million for The Life of Pablo’s plays. Other artists, meanwhile, are not being paid royalties, and industry reports suggest Tidal is perilously low on cash (as a company with only a million subscribers and an enormous catalog to license would be). Prince’s estate is suing Tidal over false reporting. The service’s high-resolution Master recordings (encoded with something called MQA, accessible only via the desktop app) are demonstrably better than CD audio, and the catalog is impressively broad, but long-term survival seems like a shaky proposition. Anyone remember Rdio, Grooveshark, Liquid Audio, Broadcast.com, or Ruckus?

It is safe to say the streaming music business model has not yet been validated, and Netflix is facing scrutiny for its solution to the video content problem: rather than pay royalties, make your own franchises. The stock price reflects the risk/reward proposition of this strategy. A cornerstone of the Netflix strategy is stand-up comedy specials, for example, of which there will eventually be too many. What then? Disney, meanwhile, is buying Fox. Time Warner and AT&T are merging. Comcast bought NBC to gain content access, effectively paying some royalties to itself rather than only to content providers like Scripps, Discovery, and Disney.

Cars are a final piece of the home entertainment business model puzzle. Velodyne was making home theater subwoofers in the early 2000s when the firm began making lidar for use in DARPA autonomous vehicle challenges; it has since spun the lidar unit out as a separate business. Such firms as Bose, Harman, Panasonic, and Denmark’s Dynaudio make consumer electronics equipment but also harvest a steady cash flow from the manufacture of OEM car audio systems. (Volkswagen orders millions of speaker drivers at a shot, many from Dynaudio, which also supplies Volvo.) Bose also developed a large-scale version of the technology used in noise-cancelling headphones to improve automobile ride quality, though it was never successfully marketed; that business, too, has been spun out of the consumer electronics parent company.

Where does all of this lead?
Anecdotally, US single-family houses seem to be getting smaller, retreating from the McMansion era (as of 2016, Census numbers had not yet shown a drop, however). Increased urbanization means more concentrated housing in smaller spaces. Fewer malls might be one reason why there are 26% fewer movie theaters in the US than in 1995, but DVDs might also help explain the drop. Streaming media frees up room in those smaller domiciles where LPs, CDs, or DVDs might once have been shelved. Home theater seems to be a stronger category than it was a few years ago, though “smart home” numbers might be responsible. At the same time that TV screens are getting bigger, more and more viewing happens on handheld mobile devices (and the smartphone is _not_ the portable television that futurists long promised).

Thus we might look for a “donut hole” in the market for mid-sized displays. Streaming will continue to evolve, depending in part on the future of net neutrality (however it is defined) in different countries. Physical media consumption will ebb and flow, probably as a consequence of demographics (meaning nostalgia in some cases) and also as a response to shrinking movie rental availability: Netflix’s famous “long tail” of movie selection is no more. In domestic architecture, will home theaters continue to be a common feature?

Despite the fact that home entertainment is non-essential and not a terribly large contributor to any nation’s GDP, its implications are disproportionate. How people learn, laugh, cry, or root for favored teams has a lot to do with the available technology. How people organize and use their physical space — the primacy of the television is fascinating to observe when touring houses for sale — relates to their choices in this market. Finally, how people engage public spaces, whether football stadiums or comedy clubs, also relates to the options waiting in the living room. In the end, as home entertainment continues to evolve, the implications will reach far outside the home.

Early Indications July 2018: Tech and the US national parks


Over the past two summers I’ve had the good fortune to hike in eight of the western US national parks. This is a “bucket list” item for many people around the world, for good reason: the scale, the beauty, and the unspoiled state of nature are all hard to find anywhere else, never mind that so many parks lie within a day’s drive of a major US airport. The trips proved to be thought-provoking in many ways, some of them related to the information technology umbrella under which this newsletter operates.

First, some facts. The US national parks, strictly speaking, number 60. The most recent addition is the St Louis Gateway Arch area, designated earlier this year (at 192 acres, it is also by far the smallest). Seven of the nine largest parks (by acreage) lie within Alaska. The largest park in the lower 48 states is Death Valley, followed by Yellowstone. The national parks are seeing massive attendance numbers: 330 million recreation visits (to all sites administered by the National Park Service) in both 2016 and 2017. Grand Canyon National Park saw a record 6 million visitors last year. More than 11 million people came to Great Smoky Mountains National Park. 

An “America the Beautiful” pass is accepted at 2,000 federal recreation sites. The bureaucratic allocation of those sites is of course complex:

-The national parks belong to the Department of the Interior

-560 national wildlife refuges fall under the Fish and Wildlife Service, also part of the Department of the Interior

-10 national seashores (including Cape Hatteras and Point Reyes) and 4 national lakeshores on lakes Superior and Michigan are administered by the National Park Service

-National Historic Sites (like Edgar Allan Poe’s birthplace and many presidential homes and/or other structures) are usually managed by the National Park Service. 43 important areas are designated National Historical Parks, such as the one surrounding the Liberty Bell in Philadelphia.

-US Army Corps of Engineers recreation areas (almost 2,000 of them) fall under the Department of Defense

-154 National Forests are an arm of the Department of Agriculture, managed for multiple uses

-The Bureau of Land Management is a different piece of the Department of the Interior, with another multi-use mandate covering 245 million acres plus subsurface rights (including almost the entirety of the state of Nevada).

All of these areas must manage visitors, and the numbers of people who want to get outside (whether to hike, birdwatch, fish, hunt, or drive off-road vehicles) are growing. BLM visits were climbing as of 2012 (the last figure I could locate) while the record-setting national park attendance is stressing budgets, infrastructure, staffing, and nearby local governments. The politics of government land ownership and management are heated, especially in the past few years, and I will only note the tensions here: the multi-use mandate is of course understood differently depending on where you stand in relation to the land in question.

Secretary of the Interior Ryan Zinke has stated that the parks are being “loved to death.” As long ago as 2001 when I was in Acadia National Park in Maine, visitors were warned not to take rocks off the shore: the souvenirs were denuding the fragile ecosystem. Cars are effectively banned from a large segment of Zion National Park, replaced by mandatory shuttle bus service. It seems other parks will have to follow suit. Passes to popular hikes are rationed and even guest space in lodges (managed for the parks by outside contractors) fills up months or years ahead of time. 

Even as visitorship increases, budgets have dropped, so park management is getting creative: many helpers in ranger-like uniforms are in fact volunteers, and the role of gift shop profits in helping run the parks is good-naturedly highlighted. The popularity of the parks is a mixed blessing, of course, but I did notice visitors were generally patient, respectful to rangers (who sometimes had to deliver unpopular news such as trail closures), and aggressive in their bookstore purchases. One large population struck me (though it’s obvious in retrospect): international travelers are a significant percentage of the visitor base. Comparing how other countries manage visits to their natural wonders illustrates why: seeing the Great Wall of China, the Matterhorn in Switzerland, or a Norwegian fjord involves very different logistics and fees than the US model. It’s no surprise millions of people come from literally every continent to see glaciers, geysers, ancient trees, and the rest of the wonders on offer.

If you’ve seen the standard national parks graphic identity expressed on brochures and other materials, you may note the use of the same Helvetica font used on New York subway signs. This is no coincidence: Massimo Vignelli, the great Italian-American designer, did both projects, along with the iconic Helvetica American Airlines branding of the 1960s and many other familiar identities. The print model has not been seamlessly ported to web and mobile environments, but neither is every park sporting an idiosyncratic look and feel. I also found it noteworthy that the parks dispense a LOT of paper, which is both costly and ecologically questionable. At the same time, getting maps into as many hands as possible is a good idea when cell signal is often nonexistent and people are in places where they can and do get lost.

Mt Rainier National Park is wrestling with the cell coverage question right now. The benefit would be to visitors; rangers would continue to use their radios for search, rescue, and related work. (The memorial at the Mt Rainier visitor center was sobering: many people have died helping make the national parks work, including six in that one park alone.) On the plus side, better coordination among city-dwelling visitors would prevent some misfortunes and maybe even a tragedy or two. At the same time, the place of wilderness as unspoiled and quiet is already under siege by armies of tourists bearing selfie sticks who often stand in truly stupid and/or dangerous spots in search of the perfect shot. The downsides (including false confidence) are real, but in the face of such sustained growth in visitorship, how long can the parks remain signal-free? Already, climbers working their routes on Yosemite’s Half Dome are jarred by mobile phone conversations.

Another new source of contention relates to the use of drones. Whether flown for photography, stalking wildlife, or simple recreation, these small (and sometimes not so small) flyers are frequently banned on paper, but enforcement is another matter. There’s also the irony of drones being banned at the Wright Brothers memorial at Kitty Hawk, dedicated as it is to innovation in flight.

Both smartphones and drones illustrate the range of issues in play. Both devices can improve safety, including rescues, just as both can impede it. Both devices can be used sensitively and creatively in the spirit of the place, while both are also fully capable of ruining other people’s experience of wilderness. Finally, both could be used as revenue generators just as both are a drain on scarce park service resources. When you have 330 million people visiting NPS and related recreation areas every year, one size does have to fit all, or almost all, so building a patchwork of regulations is probably a bad idea.

Another technology shift that will affect the parks is the rise of electric cars. With these kinds of visitor numbers, plus the fact that parks are often located far from infrastructure, the question arises as to how all those batteries on wheels will get charged. Many of the parks are vast, and range anxiety could enter a new phase: without a jerry can of gasoline in the back of the NPS pickup, how will dead batteries be revived? How will motels and other nearby accommodations deliver sufficient juice for armies of visitors’ cars?

That shift lies in the future. While it’s easy to rant against selfie sticks and their ilk, technology has coexisted with the national parks for well more than a century in complex, cross-cutting ways. Well before the establishment of the NPS in 1916, Abraham Lincoln set aside land in what is now Yosemite for public use. Yellowstone was established as a park in 1872, long before the states it straddles — Idaho, Wyoming, Montana — entered the Union. Photographs were almost certainly the primary form of persuasion in both instances.

After 1916, which coincides with the rise of US automobile use, park visitorship climbed dramatically, particularly during the Great Depression. I was puzzled at the numbers, but as the US federal government expanded its presence in so many ways, the number of NPS sites (“reporting units”) climbed from 44 in 1929 to 141 in 1941. (Visitor numbers climbed far faster in the same period, from 3 million to more than 20 million.) Whenever we talk about preserving the parks, it bears mentioning that roads and parking lots were not there 100 years ago. 6 million people visiting the Grand Canyon in a year turn it from wilderness into something else: not civilization exactly, but maybe some kind of naturalistic theme park?

After World War II, photography was used as a powerful force in the debate over conservation and preservation. The great novelist Wallace Stegner edited This Is Dinosaur, a document of words, maps, and photos explaining and celebrating Dinosaur National Monument which was at the time the site of a proposed dam that was successfully defeated. Ansel Adams was only one of many documentarians who shaped how millions of people understand the wild, including the parks. 

Today, millions of people watch bear cams, monitor snowfall, map glacier melting, or track eagles via cam and other data feeds. The binary question — how do we keep technology from spoiling the national parks? — ceased to be relevant a long time (and hundreds of millions of visits) ago. Just as it always has been, the more pressing question is the more difficult one: how do we preserve wild spaces that by definition become less wild as roads, people, and mobile phones invade them? How do we increase access for people who historically aren’t big national parks users, whether for reasons of culture, disability, or whatever, at the same time that we preserve often-fragile ecosystems that aren’t meant to coexist with millions of footfalls, bathroom visits, or water bottles? As with the impact of technology in so many other domains, the simple questions turn out to be multi-layered, and the tradeoffs ripple far outward from whatever point they begin.