Monday, September 30, 2019

Early Indications September 2019: Will designers become robots?


Earlier this month I had the honor of delivering the named Toyota lecture at the College for Creative Studies in Detroit. CCS is an art and design school that has graduated many stars of the automotive, apparel/footwear, and advertising industries. It was truly inspiring to see the student work on display, and the model of hands-on instruction by working artists and creatives (as opposed to publishing academics) gave me much to ponder. For context on the pedagogy/professoriate question, please see the article I helped write on an integrated model of university-industry collaboration.

On to the talk. My argument was that the designed environment in which North Americans live is increasingly shaped not by conscious human decisions but by algorithmic routines. The students and faculty engaged in an excellent discussion afterward, concluding that while designers might not need to be adept at running these algorithmic systems, there will be an increasing number of scenarios in which humans and machines team up, each to do what that party is best at. People can provide aesthetic guidance, potentially, while machines can do complex mathematical calculations and optimizations beyond any human’s capacity.

I found six examples of algorithms doing the work that creatives and designers used to claim as their responsibility. They are as follows:

1) Facebook has adopted a three-letter acronym for its ad-optimization process: test, learn, adapt (TLA). Described as “A/B testing on steroids,” this approach tests a slight variation in text color, wording (“donate” vs. “contribute”), photo choice, or another design element across a carefully defined population. Performance is measured and analyzed, the layout is iterated, and the process repeats. The success of this approach can be seen in the Trump campaign’s use of Facebook advertising in the 2016 election. While the Clinton campaign spent somewhat less ($28 million vs. $44 million), the Trump campaign tested a staggering 6.9 million variations of ad copy and layout, compared to only 66,000 for the Clinton campaign. According to Buzzfeed, Facebook’s Mark Zuckerberg called Trump after the election to congratulate him on his team’s use of the Facebook toolset on a heretofore unprecedented scale.
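At its core, the test/learn/adapt loop is a bandit problem: show many variants, measure clicks, and shift impressions toward whatever is working. Here is a minimal epsilon-greedy sketch in Python; the variant names and click probabilities are my own invention for illustration, not anything from Facebook’s actual system:

```python
import random

# Hypothetical ad variants with (invented) true click-through rates.
VARIANTS = {
    "donate_red": 0.030,
    "donate_blue": 0.025,
    "contribute_red": 0.040,
    "contribute_blue": 0.020,
}

def run_campaign(n_impressions=50_000, epsilon=0.1, seed=42):
    """Epsilon-greedy test/learn/adapt loop: usually show the variant with
    the best observed CTR so far, occasionally explore the others."""
    rng = random.Random(seed)
    shows = dict.fromkeys(VARIANTS, 0)
    clicks = dict.fromkeys(VARIANTS, 0)

    def observed_ctr(v):
        return clicks[v] / shows[v] if shows[v] else 0.0

    for _ in range(n_impressions):
        if rng.random() < epsilon:
            variant = rng.choice(list(VARIANTS))       # test: explore a variant
        else:
            variant = max(VARIANTS, key=observed_ctr)  # adapt: exploit the leader
        shows[variant] += 1
        if rng.random() < VARIANTS[variant]:           # simulate a viewer's click
            clicks[variant] += 1                       # learn: update the stats
    return max(VARIANTS, key=observed_ctr), shows, clicks
```

A real campaign would run this over millions of combinations of copy, color, and imagery at once, but the loop’s shape is the same: measure, iterate, repeat.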

2) YouTube has a problem: unlike Google search, where the properties of the indexed web document matter significantly, videos are not (currently) converted to text. Searches can match only cursory metadata, along with a great deal of behavioral data: how long viewers watch, how often they click on thumbnails, and so on. We thus have a scenario in which video search results are heavily conditioned by how other people have reacted to a video rather than by what is actually in it. Video creators watch these metrics extremely closely and adjust underperforming content on the fly: click-through rate can update as often as every three minutes, with titles and thumbnails tuned accordingly. The result, as YouTuber Derek Muller explained it, is a reliance on click-bait headlines, calculated thumbnails, and cliff-hangers: the same elements of yellow journalism before subscriber models decreased the reliance on newsstand sales. The big difference is that Google/YouTube shows me what its algorithms think I want, not necessarily what I have subscribed to. Twitter can behave much the same way.
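To make the point concrete, here is a toy ranking that orders videos purely by behavioral signals, with no reference to content at all. The field names, numbers, and weights are invented for illustration, not YouTube’s actual formula:

```python
# Toy catalog: each entry carries only behavioral data, nothing about content.
videos = [
    {"title": "Calm lecture",         "avg_watch_min": 8.0, "ctr": 0.02},
    {"title": "You WON'T believe...", "avg_watch_min": 3.0, "ctr": 0.12},
    {"title": "Plain tutorial",       "avg_watch_min": 5.0, "ctr": 0.04},
]

def engagement_score(v, w_watch=1.0, w_ctr=60.0):
    """Blend average watch time and click-through rate into one score."""
    return w_watch * v["avg_watch_min"] + w_ctr * v["ctr"]

# Rank purely on how viewers behaved, not on what the video contains.
ranked = sorted(videos, key=engagement_score, reverse=True)
```

With these (invented) weights, the click-bait title outranks the longer-watched lecture: the optimization target rewards the thumbnail-and-headline game the paragraph above describes.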

3) It’s not only on social media that algorithms create our content. Persado is an e-mail marketing company whose algorithms can outperform human-written text by 68% on click-through rate and 76% on conversion. It’s no wonder that the likes of Amazon (Audible), Chase, and American Express have engaged the firm, which has deconstructed a message’s appeal into its “writing DNA”: narrative, emotion, descriptions, calls to action, formatting, and word positioning. In the decades I’ve taught writing and written, I’ve never formulated anything on that basis. Shame on me. The firm’s toolbox consists of a million tagged and scored words in 25 languages, once again outperforming any - any - human team of writers and editors.
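The idea of a tagged-and-scored lexicon can be sketched in a few lines. Everything here — the tags, the weights, the sample subject lines — is invented to show the mechanism, not Persado’s actual data:

```python
# Invented lexicon: word -> (tag, weight). A production system would hold
# roughly a million such entries across 25 languages.
LEXICON = {
    "exclusive": ("emotion", 0.9),
    "free":      ("emotion", 0.8),
    "hurry":     ("call-to-action", 0.7),
    "now":       ("call-to-action", 0.6),
    "update":    ("description", 0.1),
}

def score_copy(text):
    """Sum the lexicon weights of the recognized words in a candidate line."""
    return sum(LEXICON[w][1] for w in text.lower().split() if w in LEXICON)

# Pick the candidate subject line the lexicon scores highest.
best = max(["Account update available",
            "Exclusive offer: act now"], key=score_copy)
```

The real system layers testing on top of scoring, but the core move is the same: treat copy as a bag of tagged, weighted components rather than as prose.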

On a related note, thanks to Kevin Slavin, I discovered Epagogix a few years back. This startup runs Hollywood scripts through a scoring algorithm, predicting box-office performance and helping studios hedge risks. As Slavin noted in his prescient TED talk, when these systems have bugs - as every software system does - how will we know?

4) In March, McDonald’s bought Dynamic Yield, an Israeli AI startup, for $300 million. By April, more than 700 restaurants had installed new drive-through menu boards (piloted before the acquisition) designed to up-sell customers. Eventually the program will encompass the mobile app and in-store kiosks as well as the drive-through displays, which are being expanded to 8,000 stores, per a company statement earlier this month. Equity analysts are praising the move, raising their price targets and earnings estimates: the tech acquisitions led by Dynamic Yield appear to be moving the revenue needle. How can the algorithm customize your drive-through purchase? Some of it is time of day, weather, and simple promotion by suggestion (“wouldn’t a cold McFlurry taste great with that burger?”). Other personalization is based on your license plate and/or the presence of your smartphone.
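A stripped-down version of that contextual suggestion logic might look like the following. The menu items, thresholds, and rules are all hypothetical; the real system presumably learns these from data rather than hard-coding them:

```python
from datetime import time

def suggest_upsell(order, now, temp_f):
    """Rule-based sketch of contextual drive-through suggestions,
    keyed on time of day, weather, and the current order."""
    suggestions = []
    if temp_f >= 75 and "McFlurry" not in order:
        suggestions.append("McFlurry")   # hot day -> suggest a cold dessert
    if time(6, 0) <= now < time(10, 30):
        suggestions.append("Coffee")     # breakfast hours -> suggest coffee
    if "Burger" in order and "Fries" not in order:
        suggestions.append("Fries")      # classic pairing
    return suggestions
```

For example, a burger ordered at noon on an 82-degree day triggers the McFlurry and fries suggestions; license-plate or smartphone recognition would simply add per-customer history as another input to rules like these.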

5) Autodesk has long sold CAD software to help designers electronically draw objects to be made through molding, milling, and assembly. More recently, it has applied algorithms to the creative process in what is called generative design. The computer operator specifies the parameters that must be met: sizes, weights, strengths, features, and so on. The algorithm takes over from there, suggesting dozens or hundreds of shapes that meet the criteria; the designer goes from creator to chooser. Furthermore, Autodesk shows an automobile chassis that was built conventionally, instrumented with sensors, and driven hard to generate maximum loads. The algorithm redesigns the chassis, using asymmetric, much more “biological”-looking shapes that would be impossible to weld. Instead, the design feeds 3D printers to make a car that no human could ever conceive or build.
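The generate-and-filter loop at the heart of generative design can be sketched very simply: propose many candidate designs, simulate each one, and keep only those that satisfy the specified constraints. The one-parameter “design” and the weight/strength formulas below are toy stand-ins for real geometry and finite-element simulation:

```python
import random

def generate_candidates(n, max_weight_kg, min_strength_kn, seed=7):
    """Generative-design sketch: propose n random designs, keep those
    meeting the weight and strength constraints the operator specified."""
    rng = random.Random(seed)
    keepers = []
    for i in range(n):
        thickness = rng.uniform(1.0, 10.0)   # mm; the sole design variable here
        weight = thickness * 0.8             # toy mass model
        strength = thickness ** 1.5          # toy strength model
        if weight <= max_weight_kg and strength >= min_strength_kn:
            keepers.append({"id": i,
                            "thickness_mm": round(thickness, 2),
                            "weight_kg": round(weight, 2)})
    return keepers

# The designer then chooses among the surviving candidates.
candidates = generate_candidates(500, max_weight_kg=6.0, min_strength_kn=10.0)
```

The division of labor matches the paragraph above: the human states the constraints and picks a winner; the machine explores a design space far larger than any person could enumerate.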

Given these developments across such a wide variety of industries and skills, what should we be teaching both the algorithm-writers and the designers about their respective domains? How do we teach students, as MIT economist Erik Brynjolfsson puts it, to _team_ with machines? (I’m an outsider and only speculating here, but the limited success of IBM’s Watson for health might in part relate to how hard this human-machine interaction problem really is: it’s far more than a conventional UX issue.)


I believe we will see these kinds of questions emerge in more and more scenarios: finance is old news, fast-food menus are quite recent, and generative design will move from wood and steel to cells and tissues. Everything from labor economics to workplace safety to information privacy to legal liability will be reshaped as a result. Given everything else happening politically, it’s not surprising, but still concerning, how few discussions are underway about how we frame, invest in, and regulate these new kinds of human-machine interactions.