Would somebody please explain to me how the dutiful 0s and 1s of computer processes at some point unbind themselves and undertake independent thought? Could anyone describe the actual instant of transition? I’ve been asking these questions around the office for the last little while. Might we find a writer to explain, in terms that even I could understand, this moment when the warm breath of life turns a box of metal into Frankenstein’s monster? One smart sciency friend of mine told me (rather dismissively, I thought) that it was all about algorithms. I’m sorry, but that doesn’t take me very far.
Then several months ago there arrived unbidden an article that reinforced my doubts about computer autonomy in terms that I could not have expressed but could follow. The piece was written by a fellow who had, in these very pages almost two decades ago, made a similarly iconoclastic argument about language. “There is only one thing everyone knows about language,” that article began, “—that it’s a living growing thing—so it seems particularly unfortunate that the notion should be false.” The writer, Mark Halpern, was described at the time as having written a book about computer programming, which lent some comfort to me in my decision to make his new piece on the limitations of computers the cover story in this issue.
Halpern’s argument is that the computer is and always will be a tool, like a hammer, its intrinsic simplicity disguised by the increasingly astonishing things it can do at speeds well beyond our comprehension. But speed and complexity are not thought, and certainly not independent thought. When computers escape our literal desktops and our smartphones and become robots or drones, they have not escaped their definitional boundaries. Halpern attributes our notion that robots can or soon will be able to act on their own to the Isaac Asimov factor, to science fiction. This matters, he points out, because evidence exists that people making serious decisions in government and the military have bought into the notion that computers are about to become autonomous.
A counter to Halpern’s argument asserts that computer programming has become so complicated, so deeply multilayered, that even if computers can do only what we tell them to do, we have lost control of them because we cannot follow all the implications of our instructions. And so the bonds of our control have been broken. Which perhaps reduces this to a semantic argument. But I think not. As Halpern wrote in that earlier piece, “greater clarity, coherence, and honesty” are “always the result when a process swathed in mystery is brought into the light, and we begin to understand and take responsibility for our own actions.”