"So we will be producing about 1026 to 1029 cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence ... This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars' worth of computation will be equal to 1026 cps, so the intelligence created per year (at a total cost of about $1012) will be about one billion times more powerful than all human intelligence today. That will indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045."
-Ray Kurzweil, The Singularity is Near (New York: Penguin Group, 2005), pp. 135–136.
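As an aside, the arithmetic behind the "one billion times" figure is easy to reproduce from the numbers in the quote itself. Here's a quick back-of-the-envelope check; the inputs below are Kurzweil's own figures, not independent estimates of mine:

```python
# Back-of-the-envelope check; all inputs come from the quoted passage above.
cps_per_1000_dollars = 1e26      # mid-2040s claim: $1,000 of hardware buys ~10^26 cps
annual_spend_dollars = 1e12      # "a total cost of about $10^12" per year

units_per_year = annual_spend_dollars / 1000                 # ~10^9 thousand-dollar units
total_cps_per_year = units_per_year * cps_per_1000_dollars   # ~10^35 cps of new computation

all_human_intelligence_cps = 1e26  # Kurzweil's estimate for all biological human intelligence
print(f"{total_cps_per_year / all_human_intelligence_cps:.0e}")  # 1e+09 -- "one billion times"
```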
In the manner of all true technology revolutions, this one has crept up on us. I speak of the vast array of ways that computing is augmenting human life. Rather than think of people and computers, or people and robots, I believe it makes more sense to think of a continuum, with a naked newborn on one end -- pure human, zero augmentation -- and 2001's HAL or an Asimov fictional robot on the other: pure cyborg, with ample human characteristics. Everywhere in between these two poles, we can see combinations of human traits and computo-mechanical assistance. For now, humans call the shots in most scenarios (but not all; see below) and our devices can assist in any of thousands of ways.
I've never really bought Kurzweil's Singularity hypothesis: that machine capability, whether in the solo or collective mode, will eclipse human capability with profound consequences. The simplistic equation of CPU capacity with "living biological human intelligence" has never been argued with any serious evidence. The fact that Kurzweil has recently become a senior exec at Google, meanwhile, raises some pretty interesting questions.
At the same time, there's a tendency to "privilege" (sorry - it's an academic phrasing) humanity. Every robot that gets called convincing, for example, is implicitly measured against the notion that people are somehow Special -- a notion that seems to shrink a little every year. Meanwhile, people impart human characteristics to machines, naming cars and Roombas, for example, but not mobile phones, as far as I can tell. There's a whole lot to be researched and written about how humans anthropomorphize non-humans (animals and now machines/devices), but that's out of scope for the moment.
I think these two viewpoints -- the Singularity school and human exceptionalism -- both carry a substantial load of unacknowledged baggage. Kurzweil et al. adopt a simplistic understanding of humanity as merely the sum of our circuits. As for those who worry about machines overtaking humans, being outdone by our own creations is nothing new. Humans are far from the strongest creatures, or even the strongest mammals, and nobody I know seems to feel the lesser for it. In the realm of mental capacity, computers have resoundingly beaten our cerebral gladiators at both chess and trivia. In the realm of the everyday, pocket calculators from 40 years ago outperform everybody at mathematical figuring. Engines, hydraulics, and now computers are clearly better than humans at some tasks.
Here's one example. Inspired by the Economist's cover story of April 20, I conclude that machines are in fact better than humans at driving cars under many circumstances. Consider that calculating the speed of an oncoming car is guesswork for a human: people misjudge their window of opportunity (to make a left turn across oncoming traffic, for example) literally hundreds of times every day. LIDAR plus processing power makes that calculation trivial for a computer-driven car. Knowing the limits of a car's handling by feel is most likely beyond the experience of 99% of drivers: with automatic transmissions, traction control (a computer assist), automatic pitch/yaw compensation, and other safety features, it's very difficult to get a car sideways deliberately in order to learn how and when to react. For every "driving enthusiast" who squawks in a car magazine about "having all the decisions taken away from us by Big Brother," there will be many people who would LOVE to be chauffeured to their destination, especially when it's the Interstate grind crawling to work at 7:00 rather than the stick-shift byways of rural Virginia on a springtime Sunday morning. Even apart from the Google car, computers are doing more and more driving every year.
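To make the left-turn point concrete, here's a deliberately toy sketch of that judgment as a computation. A real autonomous car fuses far more sensor data than this, and the distances, timings, and the four-second threshold below are mine, purely for illustration:

```python
# Toy illustration: turn two successive LIDAR range readings into the "do I have
# time to turn left?" judgment that humans make by eye. All values are invented.
def seconds_until_arrival(range_t0_m: float, range_t1_m: float, dt_s: float) -> float:
    """Estimate when an oncoming car reaches us, from two range samples dt_s apart."""
    closing_speed = (range_t0_m - range_t1_m) / dt_s   # meters per second
    if closing_speed <= 0:
        return float("inf")                            # not closing on us; no conflict
    return range_t1_m / closing_speed

TURN_TIME_NEEDED_S = 4.0   # assumed time to clear a left turn across traffic
gap_s = seconds_until_arrival(range_t0_m=120.0, range_t1_m=105.0, dt_s=0.5)
print(f"gap: {gap_s:.1f} s ->", "turn now" if gap_s > TURN_TIME_NEEDED_S else "wait")
```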
Prefaced with the caveat that I am not, nor can I aspire to be, a cognitive scientist, there is a question embedded here: what are humans good at, what are computers good at, and how will the person/machine partnership change shape over the coming years? There has got to be research on this, but I couldn't find a clear, definitive list in plain English, so this fool will rush in, etc.
Let's start with machines: machines can count, multiply, and divide far faster than any person. Time and distance, easily quantified, are readily calculable. Short- and long-term memory (storage) can be pretty much permanent, especially in a well-engineered system, not to mention effectively infinite at Google scale. If-this/then-that logic, in long, long chains, is a computer specialty. IBM's Watson, after winning Jeopardy! with a really big rules engine, is now being applied to medical diagnosis, where it should do well at both individual scenarios and the public-health bigger picture. Matching numbers, data patterns, and text strings is straightforward.
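As a trivial sketch of that if-this/then-that chaining and pattern matching -- and to be clear, this is not how Watson actually works, just the general shape of a rules engine:

```python
# Minimal rules-engine sketch: explicit conditions chained to explicit actions.
# Purely illustrative -- not a model of Watson or of any real diagnostic system.
RULES = [
    (lambda findings: {"fever", "rash"} <= findings,  "consider measles workup"),
    (lambda findings: {"fever", "cough"} <= findings, "consider flu test"),
    (lambda findings: "chest pain" in findings,       "rule out cardiac causes first"),
]

def fire(findings: set) -> list:
    """Return the action for every rule whose condition matches the findings."""
    return [action for condition, action in RULES if condition(findings)]

print(fire({"fever", "cough", "fatigue"}))   # ['consider flu test']
```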
What about people? People can feel empathy. People can create art. People can see visual/logical nuances better than machines: a 5-year-old knows that a spoon is "like" a fork but not like a pencil, while a computer must be taught that in explicit terms. Similarly, machine filters meant to "know it when they see it," in the Potter Stewart sense of hard-core material, have been spectacularly unreliable. People can read body language better than computers can. People can integrate new experience. People can infer better than machines. People can invent: recipe-creation software can't duplicate even a middling chef, for example. Computers can be taught to recognize puns and, more recently, "that's what she said" double entendres; only humans can create good ones.
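Here is what "taught in explicit terms" looks like in practice: a machine only "knows" a spoon is like a fork once someone hand-codes the features and a similarity measure. Both the feature list and the scoring below are arbitrary choices on my part, which is exactly the point:

```python
# The spoon/fork/pencil comparison, spelled out explicitly for a machine.
# Feature names and values are hand-picked for the example.
FEATURES = {
    "spoon":  {"eating utensil": 1, "found in kitchen": 1, "used for writing": 0},
    "fork":   {"eating utensil": 1, "found in kitchen": 1, "used for writing": 0},
    "pencil": {"eating utensil": 0, "found in kitchen": 0, "used for writing": 1},
}

def similarity(a: str, b: str) -> float:
    """Fraction of hand-coded features on which two objects agree."""
    fa, fb = FEATURES[a], FEATURES[b]
    return sum(fa[k] == fb[k] for k in fa) / len(fa)

print(similarity("spoon", "fork"))    # 1.0 -- "alike," but only because we said so
print(similarity("spoon", "pencil"))  # 0.0
```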
Antonio Damasio's brilliant book Descartes' Error should be required reading for the Kurzweilians. Rather than accept the Cartesian split of mind from body -- embodied in the epigram "I think, therefore I am" -- Damasio insists, with evidence, that it is emotion, the blurry juncture of mind and body, that enabled human survival and continues to define the species. All the talk about calculations equaling and surpassing human intelligence ignores this basic reality. Until computers can laugh, cry, sing, and otherwise integrate mind and body, they cannot "surpass" what makes people people.
Here's a nice summary, from Inc. magazine of all places, in 2002:
*********
Yet is thinking outside the box all it takes to be innovative? Are reasoning and imagination -- the twin faculties that most of us associate with innovation -- enough for Ray Kurzweil to know which of the formulas that he's dreamed up based on past technological trends will lead to the best mathematical models for predicting future trends?
No, says Antonio Damasio, head of the neurology department at the University of Iowa College of Medicine. The innovator has to be able to feel outside the box, too -- that is, to make value judgments about the images and ideas that he or she has produced in such abundance. "Invention," as the French mathematician Henri Poincaré said, "is discernment, choice." And choice, notes Damasio, is based on human emotion -- sensations that originate in the brain but loop down into the body and back up again. "What you're really doing in the process of creating is choosing one thing over another, not necessarily because it is factually more positive but because it attracts you more," says Damasio. "Emotion is literally the alarm that permits the detection."
Kurzweil, for his part, calls that alarm "intuitive judgment." But he disagrees that it -- or reasoning or imagination, for that matter -- is exclusively human. He sees a day in the not-too-distant future when we will merge mechanical processes with biological ones in order to amplify what our brains alone do today. "Ultimately, we'll be able to develop machines that are based on the principles of operation of the human brain and that have the complexity of human intelligence," he says. "As we get to the 2030s and 2040s, the nonbiological component of our civilization's thinking power will dominate."
**********
As I suggested earlier, the human/machine distinction is far from binary. Even assuming a continuum, however, perhaps the most important category has been little discussed: computer systems that possess emergent properties no human can understand. Wall Street is in this category, given algorithmic trades occurring in millionths of a second, in a system where the interactions of proprietary code are occasionally catastrophic yet beyond human comprehension (both in real time and after the fact), not to mention beyond regulation. When algorithmic trades in synthetic instruments inadvertently wipe out underlying assets, who's left holding the bag? Sometimes it's the algorithm's "owner": Knight Capital essentially had to sell itself to a rescue party last summer after its bad code (apparently test scripts found their way into the live NYSE and nobody noticed) lost $440 million in less than an hour; the program was buying, in automatic mode, $2.6 million of equities _per second._ Just because people can write algorithms and code doesn't mean they can foresee all potential interactions of that code -- assuming it was even written as designed in the first place. (For more see here and here)
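Just to put those Knight Capital figures in perspective -- this is nothing more than arithmetic on the numbers quoted above, with a full hour used as an upper bound on the "less than an hour" window:

```python
# Scale check on the figures above: $2.6 million per second, $440 million lost
# in "less than an hour." A full hour is used here only as an upper bound.
buy_rate = 2.6e6                # dollars of equities bought per second
one_hour = 60 * 60              # seconds
print(f"~${buy_rate * one_hour / 1e9:.1f} billion bought per hour of runaway code")   # ~$9.4 billion
print(f"~${440e6 / 60 / 1e6:.1f} million lost per minute, averaged over a full hour") # ~$7.3 million
```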
I'm not going to get nostalgic, or apocalyptic, or utopian here. Humanity has always built tools, and the tools always have unintended consequences. Those consequences have been substantial before: the rise of cities, the extension of human life spans, atomic bombs, Tang. This time around, however, when the unintended consequences cut so close to our identity, some self-awareness -- something computers can't do -- is probably in order. On that front, I'm not entirely hopeful: during the recent Boston bomb drama, when people were shown at both their worst and their finest, news feeds were dominated by updates on a Kardashian divorce development. I don't know if we're "amusing ourselves to death," as Neil Postman put it a long time ago, but there's a chance that some people will dumb themselves down to the level of computers rather than wait for the machines to catch up.