Wednesday, June 22, 2016

Early Indications June 2016: What do we make of Artificial Intelligence?

For context, an old joke:

Q: What’s harder than solving the problem of artificial intelligence?

A: Fixing real stupidity.

In many current publications, the technical possibilities, business opportunities, and human implications of artificial intelligence are major news. Here’s just a sample:

-A computer (Google DeepMind's AlphaGo) beat Lee Sedol, one of the world's top Go professionals, about 10 years before many predicted such an outcome would be possible.

-Google is repositioning itself as an AI company, with serious credibility. IBM is advertising “cognitive computing,” somewhat less convincingly, Watson notwithstanding.

-Venture capital is chasing AI-powered startups in every domain from ad serving to games to medical devices.

-Established players are hiring top talent from each other and from academia: Toyota, Amazon, Uber, and Facebook have made noise, but Google remains the leader in AI brainpower.

-Corporate acquisitions are proceeding apace: just this week Twitter bought a London-based company called Magic Pony, which does image enhancement, for a reported $150 million. Those kinds of numbers (shared, in this case, among a team of only 11 PhDs) will continue to attract talent to AI startups all over the world.

Despite so much activity, basic answers are hard to come by. What is, and is not, AI? By which definition? What is, and is not, possible, for both good and ill? The more one reads, the less clarity emerges: there are many viable typologies, based on the problems to be solved, on computational approaches, or on philosophical grounding. I can recommend the following resources, with the caveat that no consensus emerges on anything important: the whole concept is still being sorted out in both theory and practice.

The Wikipedia entry is worth a look.

Here's a pretty good explainer from The Verge.

The Economist reports on the sizable shift of top research talent away from universities into corporate AI research.

Here’s a New Yorker profile of the Oxford philosopher Nick Bostrom.

Oren Etzioni has a piece on “deep learning” (neural networks at very large scale, best I can make out) in a recent issue of Wired.

Elon Musk called AI humanity’s “biggest existential threat” in 2014.

Frank Chen at Andreessen Horowitz has a very good introductory podcast explaining the recent boom in both activity and progress.

Apple is trying to use AI without intruding on people’s identifiable information, via a technique called differential privacy (sketched below).
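
Apple has said little publicly about its implementation, but the flavor of differential privacy is easy to convey with the classic randomized-response technique: each individual report is deliberately noisy, so no single answer can be trusted, yet the aggregate statistic is still recoverable. A toy Python sketch (my own illustration; the parameters and setup are invented, not Apple's):

    import random

    def randomized_response(truth, p_truth=0.75):
        """Report the truth with probability p_truth; otherwise flip a
        fair coin. Any single report is plausibly deniable."""
        if random.random() < p_truth:
            return truth
        return random.random() < 0.5

    def estimate_rate(reports, p_truth=0.75):
        """Invert the known noise: observed = p*true + (1-p)*0.5."""
        observed = sum(reports) / len(reports)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    # Simulate 100,000 users, 30% of whom have some sensitive attribute.
    truths = [random.random() < 0.30 for _ in range(100_000)]
    reports = [randomized_response(t) for t in truths]
    print(round(estimate_rate(reports), 3))  # ~0.3, with no individual exposed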

Google’s AI efforts, by contrast to Apple’s, build on the vast amount of information the company’s tools know about people’s habits, web browsing, searches, social network, and more.

Amazon has multiple horses in the AI race, and recently made a high-profile hire.


Despite the substantial ambiguity surrounding AI as a macro-level abstraction, several generalizations can be made:

1) Defining AI with any precision is problematic. Vendors including Google (“deep learning”) and IBM (“cognitive computing”) are well served by a certain degree of mystery, while the actual mechanics of algorithm tuning are deeply technical and often tedious. There are live questions over whether the use of a given algorithm (a Kalman filter, used in both econometrics and missile guidance, or simulated annealing, an optimization method used in supply chains and elsewhere, to take two examples) counts as “doing” AI; a toy sketch of the latter follows.
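
To make the definitional point concrete, here is a minimal simulated-annealing sketch (a toy one-dimensional minimization written for illustration, not anyone's production code). Mechanically it is nothing but iterative random search with a cooling schedule; whether running it counts as “doing” AI is exactly the open question.

    import math
    import random

    def anneal(f, x0, temp=10.0, cooling=0.995, steps=5000):
        """Minimize f by random local moves, accepting some uphill steps
        with a probability that shrinks as the temperature cools."""
        x, fx = x0, f(x0)
        best, f_best = x, fx
        for _ in range(steps):
            x_new = x + random.gauss(0, 1)
            f_new = f(x_new)
            # Always accept improvements; occasionally accept worse moves,
            # which lets the search escape local minima early on.
            if f_new < fx or random.random() < math.exp((fx - f_new) / temp):
                x, fx = x_new, f_new
                if fx < f_best:
                    best, f_best = x, fx
            temp *= cooling
        return best, f_best

    # A bumpy curve with many local minima; the global minimum is near x = -0.5.
    bumpy = lambda x: x * x + 10 * math.sin(3 * x)
    print(anneal(bumpy, x0=8.0))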

2) AI can work spectacularly well in highly defined domains: ad placement, cerebral games, maps and directions, search-term anticipation, and, increasingly, natural-language processing as in Siri/Cortana/Alexa. Leave the domain, however, and the machine and its learning are lost: don’t ask Google Maps to pick a stock portfolio, or Siri to diagnose prostate cancer. “General AI” remains a far-off goal: people are more than the sum of their map-reading, pun-making, and logical-generalization abilities.

3) Hardware is a key piece of the recent advances. Graphics processing units feature a parallel architecture that lends itself to certain kinds of AI problems, and the growth of gaming and other image-intensive applications is fueling better performance on the computing frontiers of machine learning (a rough illustration of why follows). Google also recently announced a dedicated chip, the Tensor Processing Unit, built specifically to handle machine learning workloads.
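
The fit between graphics hardware and machine learning comes down to structure: a neural-network layer is essentially one large matrix multiplication, millions of independent multiply-adds that can all proceed in parallel. A rough NumPy illustration of that structure (CPU-side and purely illustrative; the dimensions are invented, and GPUs and the TPU exploit the same data parallelism at far greater scale):

    import time
    import numpy as np

    # One neural-network layer is, at heart, a single big matrix multiply:
    # 512 x 1024 inputs against 1024 x 256 weights is ~134 million
    # multiply-adds, every one of them independent of the others.
    inputs = np.random.rand(512, 1024)   # a batch of 512 example vectors
    weights = np.random.rand(1024, 256)  # one layer's weight matrix

    start = time.perf_counter()
    activations = np.maximum(inputs @ weights, 0)  # matmul + ReLU
    print(activations.shape, f"{time.perf_counter() - start:.4f}s")

    # A plain Python triple loop over the same arrays would do those ~134M
    # operations one at a time; GPUs and the TPU win by performing the
    # independent operations simultaneously rather than sequentially.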

4) “Big data” and AI are not synonymous, but they’re cousins. Part of the success of new machine learning solutions is the vast increase in the scale of the training data. This is how Google Translate can “learn” a language: from billions of examples, not from a grammar, a dictionary, or an ear (a toy sketch of the idea follows).
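
Here is a toy sketch of the statistical idea (my own simplification; the production systems are vastly more sophisticated and the sentence pairs here are invented): given aligned sentence pairs, word correspondences emerge from co-occurrence counts alone, with no dictionary or grammar in sight.

    from collections import Counter

    # Three toy sentence pairs stand in for the billions Google uses.
    parallel = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("the cat eats",   "le chat mange"),
    ]

    co = Counter()      # (english word, french word) co-occurrences
    totals = Counter()  # overall frequency of each french word
    for en, fr in parallel:
        for f in fr.split():
            totals[f] += 1
            for e in en.split():
                co[(e, f)] += 1

    def translate(word):
        # Favor words that co-occur often relative to their overall
        # frequency, so ubiquitous words like 'le' don't win by default
        # (the +1 is crude smoothing).
        scores = {f: co[(word, f)] / (totals[f] + 1) for f in totals}
        return max(scores, key=scores.get)

    print(translate("cat"), translate("dog"))  # -> chat chien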

5) It’s early days, but one of the most exciting prospects is that humans can learn from AI. Lee Sedol, the Go player, says he is now playing better than before his loss to the Google computer. Whether with recipes (for tire rubber or salad dressing), delivery routes, investment strategies, or even painting, getting inspiration from an algorithm can potentially spur people to do great new things. Shiv Integer is a bot on the 3D-printing site Thingiverse, and the random shapes it generates are fanciful, part of an art project. It’s not hard to envision a more targeted effort along the same lines, whether for aircraft parts or toys (see the sketch below), and I would bet drug discovery could benefit from a similar AI approach.
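
A hedged sketch of what such a “more targeted effort” might look like (the primitives, sizes, and constraint are all invented for illustration): generate random combinations, then keep only the designs that pass an engineering check.

    import random

    # Invented primitives and a stand-in constraint, purely for illustration.
    PRIMITIVES = ["cube", "sphere", "cylinder", "torus"]

    def random_design(n_parts=4):
        """Propose a random assembly: each part is a primitive plus a size."""
        return [(random.choice(PRIMITIVES), round(random.uniform(1, 10), 1))
                for _ in range(n_parts)]

    def passes_check(design, max_total_size=20.0):
        """Stand-in for a real engineering filter (fit, strength,
        printability); here, just a crude size budget."""
        return sum(size for _, size in design) <= max_total_size

    keepers = [d for d in (random_design() for _ in range(10_000))
               if passes_check(d)][:3]
    for design in keepers:
        print(design)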

6) The AI abstraction is far more culturally potent than its concrete instances. The New Yorker can ask “Will artificial intelligence bring us utopia or destruction?” (in the Bostrom article), but insert actual products and the question sounds silly: “Will Google typeahead bring us utopia or destruction?” “Will Anki Overdrive (an AI-enhanced race-car toy) bring us utopia or destruction?” Even when the actual applications are spooky, invasive, and cause for concern, the headline still doesn’t work: “Will the FBI’s broad expansion of facial recognition technology bring us utopia or destruction?” The term “AI” is vague, sometimes ominous, but the actual instantiations, while sometimes genuinely amazing (“How did a computer figure that out?”), help demystify the potential menace while raising finite questions.

7) Who will train the future generations of researchers and teachers in this vital area? The rapid migration of top robotics/AI professors to Uber, Google, and the like is completely understandable, and not only because of money. Alex Smola just left Carnegie Mellon for Amazon. In his blog post (originally intended for his students and university colleagues), he summarized the appeal: less bureaucracy, more data, more computing power.

          “Our goal as machine learning researchers is to solve deep problems (not just in deep learning) and to ensure that this leads to algorithms that are actually used. At scale. At sophistication. In applications. The number of people I could possibly influence personally through papers and teaching might be 10,000. In Amazon we have 1 million developers using AWS. Likewise, the NSF thinks that a project of 3 engineers is a big grant (and it is very choosy in awarding these grants). At Amazon we will be investing an order of magnitude more resources towards this problem. With data and computers to match this. This is significant leverage. Hence the change.”

It’s hard to see universities offering anything remotely competitive across all three dimensions except in rare cases. Stanford, MIT, the University of Washington, NYU, and Carnegie Mellon (which lost most of an entire lab to Uber) are the schools I know of from afar with major defections; four of those five (all but NYU) are among the top five AI programs in the country according to US News, and I wouldn’t feel too comfortable as the department chair at UC-Berkeley (#4) either.


As in so many other domains (the implications of cheap DNA sequencing; materials science including 3D printing; solar energy efficiency), we are seeing unprecedentedly rapid change, and any linear extrapolations to predict 2025 or even 2020 would be foolish. Perhaps the only sound generalization regarding AI is that it is giving us strong reinforcement to become accustomed to a world of extreme, and often troubling, volatility. Far from the domain of machine learning, for example, a combination of regulations, cheap fracking gas, and better renewable options led the top US coal companies to lose 99% — 99%! — of their market capitalization in only 5 years. Yet other incumbents (including traditional universities) can still look at our world and say, “I’m immune. That can’t happen here.” Helping expand perspectives and teach us flexibility may be one of AI’s greatest contributions, unless human stupidity is too stubborn and wins the day.