We Are the Borg

Is the convergence of human and machine really upon us?

Martin Heigen/Flickr

The Singularity Is Nearer: When We Merge with AI by Ray Kurzweil; Viking, 432 pp., $35

In the fall of 2014, an MIT cognitive scientist named Tomaso Poggio predicted that humankind was at least 20 years away from building computers that could interpret images on their own. Doing so, declared Poggio, “would be one of the most intellectually challenging things … for a machine to do.” One month later, Google released an AI program that did exactly what he’d predicted was decades off.

It’s easy to chuckle at Poggio, but the pace of technological change nowadays leaves many of us dizzy—and according to Ray Kurzweil’s The Singularity Is Nearer, we ain’t seen nothing yet. This book builds on Kurzweil’s 2005 title, The Singularity Is Near, which argued that accelerating advances in computer science and other domains, coupled with positive feedback loops, would eventually produce a superhuman intelligence far beyond anything we can conceive—an event he dubbed the Singularity. (Kurzweil always capitalizes the term, like the Rapture.) He wrote the new book—a curious mix of sober historical analysis and wild, futuristic leaps—to show how high the stakes are: “If we can meet the scientific, ethical, social, and political challenges posed by these advances,” he declares, “by 2045 we will transform life on earth profoundly for the better.” But “if we fail, our very survival is in question.”

One core aspect of Kurzweil’s vision involves linking human brains to computers to exploit the latter’s superior processing power. In support of this idea, he notes that impressive neural implants already exist and can even partially replace structures like the hippocampus, a vital node in the brain for forming and storing memories. Currently, installing such devices requires a lengthy, invasive surgery, so Kurzweil suggests implanting them through “noninvasive” (and yet-to-be-invented) means—like injecting billions of nanobots into our bloodstreams, bots that will worm their way into our brains and start implanting electrodes there. “Even our blood supply may be replaced by nanobots,” he adds. I conclude from this that Kurzweil and I have very different definitions of “noninvasive.”

Even if people willingly underwent such procedures, the very notion of meshing our minds with supercomputers seems dubious, as the book’s own anecdotes show. In 2011, IBM’s Watson computer crushed two human champions on Jeopardy! During the television broadcast, viewers got to see not only Watson’s top response to clues but also its next two best guesses—guesses that, Kurzweil notes, “were often laughably wrong.” In the “European Union” category, one clue read, “Elected every 5 years, it has 736 members from 7 parties.” Watson answered correctly (“What is the European Parliament?”), but its third guess was “universal suffrage,” which makes no sense. Kurzweil concludes “that even though Watson’s gameplay seemed human, if you dig down just beneath the surface … the ‘cognition’ Watson was doing was quite alien to our own.” Similarly, ChatGPT regularly invents false responses to user queries and even denies doing so, a glitch euphemistically called “hallucinating.” In sum, Kurzweil says, AIs employ “arcane mathematical and statistical processes very different from what we would recognize as our own thought processes.”

Fascinating stuff. But if these programs process information so differently, can we really just hook our brains up to them and start churning? Wouldn’t the result be an incoherent jumble? (Or would we just fry our brains, like plugging an American appliance into a European electrical outlet?) We could maybe overcome this problem with hardware and software designed to mimic our brains. Except, as Kurzweil admits, we have only a vague idea how human cognition works. How much parallel processing occurs in our brains? And to simulate human thought, would replicating the network connections between neurons be enough (doable), or do we need all the nitty-gritty of biomolecular interactions (much harder)? Despite such gaping holes, Kurzweil predicts that “by the late 2030s”—little more than a dozen years from now—we’ll overcome all our ignorance and all our squeamishness, and we’ll be so gung-ho about brain-machine interfacing that “our thinking itself will be largely nonbiological.”

In the Singulartopia, Kurzweil also envisions transforming old, thing-based industries—food production, manufacturing, health care—into information technology. To be sure, exciting changes are afoot in these fields: 3D printing, vertical farming, and genetic engineering really could revolutionize how we build homes, manufacture goods, and grow food. And he’s so jazzed about using AI to discover new drugs that he declares, “by the end of the 2030s, we will largely be able to overcome diseases and the aging process.” I’m not convinced. There’s no Moore’s Law for tomatoes: no matter how quickly farmers shuttle information around, it still takes a certain amount of time to grow food. Similarly, however slick a drug looks in silico, it needs to be tested on flesh-and-blood creatures. Such bottlenecks exist in many fields because material things don’t always reduce to bits and bytes. That fact will act as a drag on the upward zoom toward the Singularity. Physical reality is stubborn.

And humans are stubbornly attached to it. At one point, Kurzweil discusses using virtual reality to provide a fully immersive Mount Everest experience, allowing us nonclimbers to see the roof of the world and hear the wind whipping about up there. It sounds amazing. But then Kurzweil muses about whether anyone will still bother climbing Mount Everest when it is possible to “go” there virtually. “People will have to wrestle with whether it’s worth doing the real thing,” he says, “or whether the danger was part of the attraction all along.” I hate to break it to Kurzweil, but the danger was always at least half the attraction, with the other half being the sheer challenge of overcoming obstacles, enduring pain, and achieving what seemed impossible. No Edmund Hillary or Tenzing Norgay will trade crampons and ropes for some VR goggles. They’ll still climb it because it’s still there.

Perhaps I’m being hard on Kurzweil. The book contains several delightful philosophical digressions as well as an impressive rundown of all the many (many) advances that humankind has made lately in curbing poverty and disease. It’s stirring. And many of Kurzweil’s predictions could indeed happen—someday. Kurzweil certainly has a fertile mind, and who knows? Maybe this review will look silly when I’m 10 years younger in the late 2030s. But his unrelenting insistence that the Singularity is nigh undermines an otherwise sharp and provocative book.


Sam Kean is the author of six science books, including The Disappearing Spoon and The Icepick Surgeon.

