Responses to Our Spring 2020 Issue

The Conscious Robot

Reading “No Ghost in the Machine” in the Spring issue of the SCHOLAR, I waited in vain for Mark Halpern to establish the salient features of human thought, of which he declares robots incapable.

Underlying his reasoning is the idea of our self-awareness, our consciousness—of all that we do, including our thought processes. This self-awareness leads to the notions of volition and free will. So, the question posed on the magazine’s cover—Will robots ever think?—should be: Will robots ever achieve consciousness?

That question may be unanswerable. Do we know where the sense of self comes from? Coleridge’s definition of the “primary imagination” suggests an answer: “the living Power and prime Agent of all human Perception, and … a repetition in the finite mind of the eternal act of creation in the infinite I AM.” Some research seems to have demonstrated that our exercise of volition is an illusion, that our awareness of making a decision comes after our brain has already made that decision.

The critical factor with regard to robots’ thinking is whether they are given the freedom to make decisions. We, out of our imaginative self-awareness, claim that freedom. Absent a demonstration of robotic consciousness, we would have to give it to robots. And we already do, as in the case of autonomous vehicles. Interestingly, robots—not influenced by self-awareness and not tied to a biological organism—can “think” more clearly than we can. Robots may never reach the stage of development when, having achieved consciousness, they claim freedom as we do. But we will have to decide how much freedom we will give them to do the thinking for us.

THOMAS A. MAKIN
Bethlehem, Pennsylvania


If we could examine the workings of a human brain at the ultimate level of detail, we would see the same kind of processes that we see in a microchip. Nothing supernatural is going on. At the moment, however, the finest neural network is further from a mouse than a mouse is from Plato. It is possible that this will not always be the case.

DAVID J. WILSON
from our website


Technically, a computer cannot add numbers. Addition is something humans impute. The Analytical Engine, had it been built, would have manipulated the positions of rods and gears. Modern computers manipulate voltages and charges. In every case, a mechanism ensures that some combination of inputs yields an output, the state of which can be construed as the sum of the inputs.

Having worked in artificial intelligence, I think this article is a powerful and useful refutation of a lot of the hype and mysticism surrounding the subject. It is important for us to remember that what comes out of an AI system is generally what we humans bake into it. It is not always what we intended. Worse, sometimes it is.

“KALEBERG”
from our website


Most well-trained scientists learn to steer clear of pronouncements that include the words “never” or “not ever.” There is a very fine line between the abilities of a neural net computer, which uses associations and methods poorly understood by humans to learn to play a superior game of chess, and those of a human brain learning to do the same thing. At some point, one has to expect that the complexity of neural network computers will approach that of the brain. When that happens, how does one distinguish the capabilities of the machine from those of its human counterpart? Is there really a difference between human and machine learning? Evolution is the ultimate architect and experience the ultimate programmer. At some point, it is likely that the only difference between human and nonhuman intelligence will be the construction of the container that holds the machine.

JOSEPH KENNEDY
Los Altos Hills, California


There is a huge gulf between saying that “computers can’t think now” and that “computers can’t think in principle.” Biology has had the benefit of billions of years of evolution, whereas computers have had less than a century of intelligent design. When computers finally do think, it will be because they will have many subsystems that process external inputs, many subsystems that use those inputs to decide what to do, and many subsystems to modify their stored information based on the inputs and the processing. Those subsystems are going to be subject to a sensitive dependence on initial conditions, so that while in theory one could try to predict their behavior, there will be so many branching futures that it will be impossible to do so, even if every atom in the universe were a simulation trying to do so.

“HYMANROSEN”
from our website


I am not sure that the practical question is: Do computers think or make decisions? Perhaps the better question is: Should we act on the decisions that computers spit out, given how the machines are programmed? In which cases do we let computers dictate what we do, and to what degree? The worry is not that computers might “take over,” but that we might program human intervention out of the very cases where it is absolutely necessary.

“BIGBG”
from our website


A Hairy Situation

I thoroughly enjoyed David Owen’s “My Hairy Past.” It was a trip down memory lane back to the sweetly naïve idealism of my own adolescence. Some baby boomers never let go of the past. At a high school reunion 20 years after graduation, a classmate who still had long hair and granny glasses derisively told me, when we met in the men’s room, “For some of us, long hair wasn’t just a style.” What could I say in response? Like, wow, man.
For most of us, it was a passing phase. My older brother, who was the radical leader of a strike against the school’s dress code, grew up to become a Republican operative and major Trump supporter. I went a different direction. Both of my millennial sons grew ponytails in high school and cut them off in college. One is a captain in the National Guard, the other an activist leftist. The divide continues.

JEFF RASLEY
Indianapolis, Indiana


All Crossed Up

The article “The Uncertainty Principle,” by Cristopher Moore and John Kaag, is challenging and thought-provoking. I was puzzled, however, by Figure 2 on page 35, which is not drawn correctly. The text states that “any two great circles that cross the equator also cross each other twice, once in the Northern Hemisphere and once in the Southern.” The figure does show two great circles crossing the equator and crossing each other twice—but both times in the Northern Hemisphere.

JOSEPH SNIDER
Southwest Harbor, Maine

Joseph Snider is correct. The error was an editorial one.


CORRECTION: On page 65 of Pamela D. Toler’s “Peggy’s War,” the caption misidentifies the three men in the photograph as Army officers. At least one of them, and possibly all three, were officers in the U.S. Navy. Thanks to the several readers who pointed out the error.

Permission required for reprinting, reproducing, or other uses.

Our Readers may send letters to The American Scholar, 1606 New Hampshire Avenue, N.W., Washington, D.C. 20009; or e-mail them to scholar@pbk.org. Please include a daytime telephone number. Letters may be edited for length or clarity.
