This month’s news cycle featured a single ticket matching the numbers necessary to win a $1.5 billion prize. South Carolina, where the ticket was purchased, allows winners to remain anonymous, but as they say, good luck with that. I’m struck by the famous study by Brickman, Coates, and Janoff-Bulman, which found that lottery winners, shortly after the event, were no happier than a control group and took less pleasure in everyday activities. While it’s impossible to do a controlled study, the list of lottery winners who have made a complete mess of their lives is long.
My point here has little to do with lottery excess per se. Rather, I’d like to connect the fascination with “more” to the digital age. I can’t claim this tendency is unique to the US, nor that it applies globally, but examples of mismeasurement are easy to find: we use numbers that are easy to derive to compare things that are far more subtle. Clayton Christensen addressed this tendency in How Will You Measure Your Life? At the end of my life, how good a parent was I? How careful a scholar? How effective a citizen? How inspirational and responsible a leader? Numbers don’t measure any of those particularly well.
Distributing computing broadly among the human population, then networking much of it together, feeds this tendency. In its early years, Google posted the vast number of webpages it had indexed. Facebook friend counts distort many aspects of true human connection. Amazon boasts “earth’s biggest selection.” Having our fingers on the screen for hours every day is changing us and our kinship in insidious ways. Infinite availability of gossip, shopping, sports talk, or anything else is probably not a long-term win. But humans are easily hacked: we absorb this stuff under the guise of self-determination even as we are powerfully and constantly gamed by any number of bots and psychological techniques.
Robin Dunbar famously hypothesized that a typical person can maintain about 150 meaningful relationships, Facebook’s counter notwithstanding. Our attention spans, memories, and dexterity are finite, yet screens, game controllers, and other information flows are overwhelming us. The NYU academic Clay Shirky proposed that the problem is not information overload but filter failure: the technologies of information filtering are insufficient. That may be so, but in any event our technologies are altering us, often in the name of “more” rather than “better.”
Where does this lead? There are relatively tiny efforts, parallel to the Slow Food movement, to resist the “more” of data: fewer numbers, better chosen and reflected on over time, can be incredibly powerful. By analogy, firehoses are great for extinguishing burning buildings but worthless as a source of drinking water. Vast selection, whether of mates or of toothpaste, ultimately doesn’t enrich us. Jim Gilmore (co-author, with Joe Pine, of The Experience Economy, now 20 years old) memorably told me that people don’t want infinite selection; they want what they want. Restaurants with thick menus rarely inspire confidence among foodies: it’s impossible to master crepes, lasagna, and stir-fry under one roof.
The food analogy leads us to the main point: software may be eating the world, as Marc Andreessen asserted in his essay of that title, but curation wins over sheer volume. Netflix delivered a vast selection of DVDs in its first decade, and for all the star ratings and recommendation engines, its share price remained flat. Now that Netflix delivers far fewer titles, more of them optimized for its viewers through a combination of algorithmic analysis and in-house production, its investors are exuberant, the company’s debt habit notwithstanding.
In their book The Second Machine Age, Erik Brynjolfsson and Andrew McAfee of MIT posit that humans and machines, teamed together, can perform better than either alone. Curation is a task where this approach has largely yet to take hold: digital platforms love the scale of automated curation but stumble on the execution. Facebook is efficiently algorithmic but prone to manipulation, as the 2016 election proved; Mark Zuckerberg and his senior management appear to be struggling with the human side of content selection. Google had to stop returning results for certain image queries after its machine-learning systems labeled photos of Black people as gorillas. Amazon is replacing some of its category management (demand forecasting, price negotiation, and inventory ordering) with algorithms, freeing white-collar retail workers for new tasks. If the company performs well this holiday season, it may validate the experiment.
Where else might we look for joint computational-human curation as an antidote to the flood of information coming at us? I would hope the education sector takes a leadership role, but I see few efforts in that direction. Booksellers are making a comeback for many reasons, recommendations prominent among them. Spotify hasn’t mimicked Netflix’s move into original content; I can’t speak to the role of curation in its success to date, but it doesn’t seem to outperform a good DJ. Pandora’s origins in a “music genome project” (algorithmic matching over expert-tagged song attributes, begun in 1999) did not translate into market dominance. Choosing a vacation destination, a retirement portfolio, or a job candidate is still not well supported by software plus people; lots of error and wasted effort persist in these domains.
Part of the reason for this slow progress is a skills deficit: the number of people who understand a domain of knowledge (art history, auto mechanics, arthritis) and then can (and want to) translate that understanding into software-friendly form is extremely small. Jeff Bezos, a Princeton electrical engineer by training, is rare in this regard and unique in his entrepreneurial translation of that skill: Amazon Web Services, the Kindle, Alexa/Echo, and the company’s supply-chain practices all derive, I would argue, from the same ability to bridge the gap between physical practice and its software instantiation.
Going forward, perhaps we will see software tools, possibly variants of IBM’s Watson suite, that ease and accelerate human + computational information management. Perhaps a new computer language, cognitive model, or interface will enable the aforementioned domain experts to amplify their expertise to a larger audience. Or perhaps we will continue to tread water, barely staying afloat in an inexorably rising tide of noise.