To the Editor:
David Gelernter extols Judaism for its emphasis on this earthly life and condemns the Kurzweil Cult of roboticism [“The Closing of the Scientific Mind,” January]. Both views are myopic. Judaism is also an eschatological religion. Two of the 13 principles of the Jewish faith, according to Maimonides, are belief in the resurrection and in the coming of the messiah. An Orthodox Jew recites three times a day the Amidah prayer in which he blesses God for the resurrection. An Orthodox rabbi delivering a Torah lecture always tries to conclude with the hope that the messiah will speedily come. And in the Mishna (Avot 4:21): “Rabbi Yaakov says: This world is like a corridor before the world to come, prepare yourself in the corridor so that you may enter the banquet hall.”
What Judaism is opposed to is intense speculation on what the afterlife holds. Otherwise, people would recklessly forfeit their lives, as young Muslim jihadists do in their conviction that limitless physical pleasures await them in paradise.
Many admire Ray Kurzweil not because of his prediction that humans will be enhanced with radical new capabilities (whether totally feasible or not), but because he seems to stand in line with futurists who came before him, ones grounded in reality, such as Herman Kahn, Arthur C. Clarke, and, in the 19th century, Jules Verne, who provided spectacular visions for humanity stuck in a rut.
Jacob Mendlovic
Toronto, Ontario, Canada
David Gelernter responds:
Jacob Mendlovic misunderstands Judaism and the process by which we understand Judaism. Rabbi Yehoshua cites in the Talmud (B. Bava Metzia) the verse from Deuteronomy (“It—the Torah—is not in the heavens”) to make clear that we are required to keep our minds turned on and reach our own interpretations of scripture. Judaism has several views of eschatology; eilu v’eilu divrei elokim hayyim, all these views are words of the living God. On matters of Jewish practice, my obligation is to defer to my rabbi and poseik. But as a darshan, as an interpreter of Jewish thought (not a judge of legal questions), my obligation is to choose, and indeed to reinforce, the rabbinic view I believe is correct. I’ve discussed the important question of Jewish eschatology in Judaism, A Way of Being (Yale, 2009). I agree with Mr. Mendlovic that many people admire Ray Kurzweil. As far as I can tell, Mr. Kurzweil is not only a brilliant but a thoroughly decent man. But I disagree, as I explained in my piece, that Mr. Kurzweil provides “spectacular visions.” I find his predictions (as I explained) not only sickening but dangerous.
To the Editor:
The interesting article by David Gelernter attacks the computationalist or roboticist view of the human mind. He has a formidable ally in Roger Penrose. In his book Shadows of the Mind: A Search for the Missing Science of Consciousness (which lists Gelernter’s The Muse in the Machine in its bibliography), Penrose contends that by the use of our consciousness, we are able to perform actions that lie beyond the reach of any kind of computational activity. Penrose adds that his book provides a powerful and rigorous case for this assertion, and it seems to me that he succeeds.
Part of Penrose’s exposition in support of his view concerns Gödel’s incompleteness theorem, and it is strange that Mr. Gelernter makes no mention of Gödel or of the support that Gödel’s theorem provides for Mr. Gelernter’s view. (Imagine an essay on the history of English literature that skipped over Shakespeare.) Gödel’s theorem constitutes a thorough dismissal of the computationalist or roboticist view, which can accordingly be removed from serious consideration as a full portrayal of the human mind. The theorem states that no consistent formal system of mathematical rules rich enough to express arithmetic can be complete: There must exist mathematical statements that are true but whose truth cannot be proven using only the rules of the system. Human insight, impossible to formulate in mathematical terms for inclusion in the system, is necessary in addition to the rules. Penrose gives a simple example of a Gödel-type theorem, and it is clear at the end of the example that a conclusion has been reached that is beyond the powers of any computational process but is nevertheless completely convincing. Penrose considers, moreover, no fewer than 20 objections to the implications of Gödel’s theorem and refutes them all, again convincingly in my opinion.
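For readers who want the claim in symbols, one standard modern statement of the first incompleteness theorem runs roughly as follows (a schematic sketch only, on the usual assumptions that F is consistent, effectively axiomatized, and strong enough for elementary arithmetic):

```latex
% Gödel's first incompleteness theorem, schematically. F is any consistent,
% effectively axiomatized formal system strong enough for elementary arithmetic.
\[
  \exists\, G_F \quad\text{such that}\quad
  F \nvdash G_F
  \quad\text{and}\quad
  F \nvdash \neg G_F ,
\]
% yet G_F is true in the standard model of arithmetic; hence F is incomplete.
```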
How, therefore, can Mr. Gelernter assert “The Closing of the Scientific Mind”? The closed minds, in the context of his article, are surely those that have not acquainted themselves with the discoveries of Gödel or with the work of his peer and successor Alan Turing, and know little of such mysteries as the reduction of quantum states by the act of measurement (another of Penrose’s themes). But this is simply ignorance rather than a sinister-toned “closing of minds.” The ignorance could be at least partly mitigated if Mr. Gelernter considered (if he has not already) writing semi-popular articles, or giving online lectures, on Gödel-type reasoning and its implications for the humanist or subjectivist view of the human mind.
Robert Frenkel
Roseville, New South Wales, Australia
David Gelernter responds:
Robert Frenkel: Forget Gödel! Gödel (a very great logician) proved a theorem about formal logistic systems. Human beings are not formal logistic systems. And we don’t need Gödel to tell us that we are capable of feats that formal logistic systems are not. A formal system can establish the truth of a proposition only by producing a complete, logical proof. But of course—obviously—we have many ways to find the truth. We can notice that a proposition looks true, or reminds us of another proposition we know is true, or we can sketch a proof that’s missing two-thirds of its formal steps but that nonetheless seems true and so we accept it, or an oracular authority can tell us it’s true, or we can remember having seen it last year in Commentary on a list of true propositions. And so on. Of course—obviously—the human mind can do things that formal logistic systems can’t. We don’t need Gödel to tell us that. Mr. Frenkel’s proposed analogy between Gödel and Shakespeare is silly.
The actual problem is this: Despite being formal logistic systems, computers are able to simulate other kinds of systems: An elephant tap-dancing is not a formal logistic system, but a computer can simulate the tap-dancing elephant, make correct predictions about its behavior, and show us on-screen what it looks like. The validity of computationalism has nothing to do with Gödel and everything to do with the relationship between software simulations and reality. A software simulation of a chess player can beat a human chess player. A proof discovered by a computer is just as valid as a man-made proof. In some departments, simulated intelligence is (in fact) intelligence. But a simulated feeling is not the same as a human feeling. Simulated understanding is not the same as human understanding. These are the areas in which computationalism shows its failings. Gödel has nothing to do with it.
To the Editor:
I read with pleasure the long lucubration of David Gelernter on the problem of consciousness, yet I do not find that it represents a progress toward its solution, or that it is an impartial judgment between two opposite points of view—the subjectivist and the objectivist. Mr. Gelernter says the commonsensical thing: “Both worlds, inside and outside, are real.” Yet he is tormented by the problem of how to put them together.
The fact is that the problem will never be solved until one accepts some counterintuitive propositions. First, both worlds are not only real, they are the same. Also, this common world is a relativistic world, in the sense that it is real both to you and to me—or, for that matter, both to man and to animal, to us and to a frog. An absolute world does not exist, just as an absolute time does not exist. All the paradoxes that intrigue scientists, Mr. Gelernter noted, can be compared with the paradoxes that intrigued them about time, when they thought that it could be used as an absolute measure, i.e., before Einstein demonstrated that it existed only as a relativistic measure. Once you accept that relativism is intrinsic to the world, the paradoxes of consciousness and reality, brain and matter, body and mind, even, perhaps, life and death disappear.
In his revulsion for a totally objective world, Mr. Gelernter falls into the opposite error of tending toward a totally subjective one. In this he is clearly propelled by a religious bent (in the same way that the supporters of the absolute objective world tend toward atheism). His contradictory position is revealed when he says that “science today is the Catholic Church around the 16th century”—in its efforts to drag man, who had been removed by Galileo and others from the center of the universe, back to it “by excommunication, not argument.” Yet his entire reasoning tends to show that science today wants to remove man from the center of the universe even more by “roboticism” and by comparing the universe to machines; whereas it is he, Mr. Gelernter, who tries to drag him back to the center, and not by argument, but by religious considerations—the sanctity of life, as he calls it, etc.
Mauro Lucentini
New York City
David Gelernter responds:
Mauro Lucentini writes that I tend to believe in a “totally subjective world,” but—as I made clear in my piece—I don’t. The analogy I proposed between science today and the 16th-century Catholic Church has to do with powerful institutions versus the public at large, nothing to do with man’s position in the universe.
To the Editor:
Thank you for this fascinating article. It’s interesting to see that Dan Dennett is still insisting that the mind is a von Neumann–like machine. He was wrong thirtysome years ago when I had him as a professor, and he is still wrong today. Dennett must still believe that we process bits of information one piece at a time in a logical progression. Of course, with the advent of generally available multiprocessor computers, machines can truly multitask now, but at its most fundamental level, each CPU is still a computer engine processing one bit of information at a time, sequentially.
But given the complexity of code and systems these days, do we truly know the state of a computer and its software at any particular point? One of the fascinating things about IBM’s Deep Blue was that when it was playing chess, it would give different responses to the same scenario. And Watson’s programmers could not tell anyone why Watson was responding as it did or predict how it would respond the next time.
Does that mean Deep Blue was thinking? Making inferences and using intuition?
Aaron Frank
West Hartford, Connecticut
David Gelernter responds:
Aaron Frank asks an important question. Was the world’s chess champion, IBM’s Deep Blue, making inferences, using intuition, thinking? I think yes, but only in a special sense. The human mind needs emotion or feelings to think, especially to think intuitively. Deep Blue’s operations yielded results identical to the results of human thought, intuition included. But the method it used was unlike human thought and won’t generalize to human-like mental performance in arbitrary fields. (To imitate the mind’s method was not part of the project’s goals; Deep Blue was an extraordinary technological achievement.) If you discovered one afternoon that your best friend of 30 years had always been unconscious, was a zombie, what would you do? You might intend to change your mind about the guy (or thing)—but might find that you couldn’t.
To the Editor:
David Gelernter has singled out the wrong culprit. Since it has never been unusual for scientists to objectify, quantify, and deconstruct, he has hardly demonstrated that the “scientific mind” is any more or less closed than it ever was, or even that science is lately drifting further from humanism.
Sure, scientists occasionally become smugly dismissive toward alternative approaches, but that is hardly a new phenomenon. The difference is that in the last few decades the humanities have become so much less effective a counterforce. As Gelernter notes: “We need science and scholarship and art and spiritual life to be fully human.” He also acknowledges that “the last three are withering.”
That same point has been made many times—including in this magazine—but Victor Davis Hanson put it very well in a recent piece in the Hoover Institution’s Defining Ideas: “Literature, history, art, music, and philosophy classes…became shells of their selves, now focusing on race, class, and gender indictments of the ancient and modern Western worlds.”
The minority of philosophers and English professors who are simply intent on staying out of the culture wars all too often try to sound like “ideologically neutral” scientists by increasingly focusing on minutiae or devising impenetrable jargon. For better or worse, this imitation-as-sincere-flattery has unintentionally pushed science even more firmly toward center stage. But to suggest that science has “closed its mind” in the face of its expanded role is to overlook what has been occurring in the real world. For example, the Edge Organization’s 2014 “annual question” is: What Scientific Idea Is Ready for Retirement? Edge received 177 responses, including many from computer scientists, psychologists, and neuroscientists who engaged in a lively debate of issues involving consciousness, awareness, and free will that Mr. Gelernter worries are being ignored.
Part of the evidence Mr. Gelernter adduces of science’s anti-humanistic bent is the ongoing tension between science and religion. But isn’t there at least an equal degree of tension between religion and humanism? It may be fine for Mr. Gelernter to point to Reform Judaism as able to reconcile the two. (Reconstructionist Judaism would perhaps be still closer to that mark.) But, since Western religious tradition is so much broader, that hardly tells the whole story.
It is at least as plausible to argue that conflict between science and religion is evidence for scientific humanism as much as it is evidence against it. One of Mr. Gelernter’s most conspicuous bêtes noires is Ray Kurzweil, who pursues the concept of computer-assisted immortality. I do not know Kurzweil’s religious beliefs, but it would not surprise me if he were somewhat estranged from a religious tradition that treated eating of the tree of knowledge of good and evil as the original sin, and then raised the danger of eating of the tree of eternal life.
Steve Stein
Larkspur, California
David Gelernter responds:
Steve Stein suggests I have missed the boat regarding modern science because (for example) I have not taken account of such things as Edge’s annual questions and responses. That’s unlikely, because I’ve been one of the invited Edge responders since those annual questions began. Check your Internet, Mr. Stein! I don’t in the least worry that “issues involving consciousness, awareness, and free will” will be ignored; I only worry that they are too often answered on the basis of insufficiently examined assumptions about mind, software, and computation that are ripe for the trash. I don’t point to Reform Judaism as an example of anything but tragedy. “Original sin” is a Christian idea with no place in Judaism.
To the Editor:
This article was a pleasure to read. I wholeheartedly agree with David Gelernter on the need for civility in scientific disputes, which arise, in my experience, when what is in dispute are not facts but ideology. The very fact that we do not have a definitive theory about the emergence of consciousness leads to competition and, in the absence of proper arguments, to conflict.
Having said that, I have to disagree with some points. The subjective I develops from an organism’s need to scan the environment for food, potential mates, adversaries, or danger. The possible outcomes are: eat, mate, fight, flee, or do nothing. In order to make a proper assessment, the organism needs to have a representation of its own capabilities. This “operational self-image” is the principle on which elements in the environment are classified. In other words, a rabbit knows that it is a rabbit. If a rabbit scans the environment based on the principle that it is a bear and neglects the fox nearby, it will be eaten. The accuracy of self-representation assures survival.
E.O. Wilson has proposed that eusociality is one of the deciding factors that gave an evolutionary advantage to the human race. Living in small groups led to further development of self-representation, based on essential dichotomies: Me/Not Me, We/They, and Human/Not Human. The ability to make these classifications evolved gradually with the development of new brain structures needed to process them. In turn, these new structures are part of the “theory of mind” neural network that emerges when we need to know what others feel or think. This ability also offers obvious evolutionary advantages, since for a long time in their evolution humans were their own worst predators. It also helps regulate the behavior of in-group members. The above is a summary sketch of an evolutionary theory of the emergence of consciousness that fits the facts and does not require a deus ex machina.
As to Phineas Gage, the capacities for planning and self-control that were affected are processed by a network and are a distributed ability. Despite the changes in his personality, Gage never became anti-social, a fact confirmed by contemporary case studies of adults who had similar injuries. The case of children with the same injury is different: They develop psychopathic personalities because their brains cannot receive the early input from significant others necessary for the learning of core moral values. I believe these facts constitute an argument for Wilson’s theory on the importance of eusociality.
Dr. Peter Dan
New York City
David Gelernter responds:
The ideas attributed to E.O. Wilson by Peter Dan tell us nothing about the origins of consciousness. None assumes or relies on consciousness. None excludes the possibility of zombies going through exactly the same sequence of developments. None excludes the possibility that Professor Wilson is a zombie.
To the Editor:
David Gelernter is correct in his essay—and if he studies some modern biology, he will appreciate just how correct he is. The primordial germ cells (for an embryo’s children) are set aside in the yolk sac by day 15 of embryologic life and migrate into (and are sequestered in) developing gonads by the eighth week, long before the brain develops. This is our genetic inheritance. The brain, like the rest of the organic self, develops later by the timely expression and repression of selected genes from our genome, in the right sequence at the right time in the right anatomic location. The term for this is epigenesis.
We encode memories by the genetic mechanisms of synaptic plasticity, which obey the rules of physics and chemistry. Memories are encoded in synapses and form the basis for learning and thinking. The meaning of our memories for each of us exists in culture and in our encultured brains. This is our cultural inheritance. Meaning (subjectivism) is personal to each of us. There are corollaries; the meaning of memories does not obey the rules of physics and chemistry and can be anything at all. Consciousness, being awake, is the opposite of being asleep. The brain centers for this genetically endowed phenomenon, related to circadian rhythms, are quite well understood. There is even a rare genetic disorder (fatal familial insomnia) presumably related to the genetics of consciousness.
We have two components to our personhood: our organic soma and the meaning of the content of our minds. Our organic brains are our minds, but each brain encodes and processes (this is called thinking) our unique memories obtained from our cultural experiences. In much the same way, one can buy a computer system including Microsoft Word but not buy (unless you are cheating at school) the content of a Word document.
Ronald J. Carroll
New Harbor, Maine
David Gelernter responds:
Ronald J. Carroll raises an important point: The whole social and cultural environment, not just the body, is an extension of the brain. We link our memories to points all over the landscape, all our lives.
To the Editor:
This article is so full of twisted and ideological confirmation biases that it is difficult to know where to begin. So, let us start with a quotation: “Your subjective, conscious experience is just as real as the tree outside your window.” If this is the case, then this experience is just as amenable and open to investigation as is the tree outside my window. This makes that experience objective, and only “subjective” in that it is produced and perceived by only one person (at the moment—we will soon have the ability to share, and thus the privately “subjective” will no longer be private).
There is very good reason why we in the fields of cognitive science and biology reject philosophical musings about why biology is insufficient to explain consciousness: Such claims do not fit the evidence we have. These claims are based nearly universally in ideology, and their foundations are nearly universally built by religions that have been retreating before the advance of science.
I would be tempted to apologize to the religiously inclined that their soft and comfortable beliefs are yielding before what is to them a harsh reality. But I am far more comforted by the ability to explain these things by evidence obtained in the exploration of the physical universe, and to produce theories that stand up to robust and rigorous inquiry and experimentation.
But comfort is beside the point. Reality does not care if it comforts us, and it is the job of science to determine “what is” and “what is not” regardless of how much that may frighten some.
Matthew Bailey
Los Angeles, California
David Gelernter responds:
Matthew Bailey can’t mean, I suppose, that two things being equally real makes them equally accessible to investigation. The tree outside my window and the core of Saturn are equally real. If he means that two such things may equally be the target of investigation—well, obviously. Inspecting and trying to learn our own and other people’s subjective thoughts has always been one of our most important tasks.
That religion offers “soft and comfortable beliefs” is the sort of ludicrous falsehood that intellectuals, too often, love to believe but not think about. Only an intellectual, after all, would argue a position that any 10-year-old could demolish while chewing gum and chatting on his iPhone. Religion ordains belief in duty and sacrifice. Science ordains nothing of the kind. Science offers no standard of good and evil, just and unjust; no vision of the sanctity of human life, or anything else. No doctrine of family, of love, of kindness. “Man: He has shown you what is good and what the Lord requires of you; only to do justice, love mercy and walk humbly with your God.” Only soft and squishy religion makes such demands and lays down such doctrines. Back to Sunday School, Mr. Bailey!
To the Editor:
David Gelernter doesn’t understand computers or the mind. A sufficiently complicated computer program is indeterministic. See Kurt Gödel. And the mind? It has many programs running at once. See The Society of Mind, by Marvin Minsky. And I haven’t even touched on the chaos problem—the sensitivity to initial conditions. Or the fact that digital computers are not like minds, which are analog computers “designed” more for pattern recognition than for abstract thinking. Minds do not do logarithms well. Computers breeze through them. On the other hand, minds are better at face recognition. But only just, at this time.
I do agree that the universe tends toward awareness. It has survival value. But it is way out on the tail of the curve, so the probability might not be very high. Which may explain why we haven’t seen signs of it.
M. Simon
Address withheld
David Gelernter responds:
M. Simon explains that I don’t “understand computers or the mind,” because “a sufficiently complicated program is indeterministic.” No, it isn’t. No computer can tell whether an arbitrary program will converge on arbitrary inputs; in other words, the halting problem is insoluble (technically “uncomputable”), an important fact but irrelevant. (And this fact has nothing to do with the “complexity” of the program. It holds for every program, simple or complicated.) But a digital computer is a finite and deterministic machine, in the sense that, if I know the machine’s current state, I can predict its next state with perfect certainty. Minds are not analog computers any more than they are digital computers. True, minds don’t do logarithms well! M. Simon and I agree on that point.
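An aside on the halting problem just mentioned: the classic diagonalization behind it fits in a few lines. Here is a minimal sketch in Python, assuming a hypothetical oracle called halts; the sketch’s very point is that no such total, correct function can exist:

```python
# A minimal sketch of the diagonalization behind the halting problem.
# `halts` is a hypothetical oracle; the argument shows it cannot exist.

def halts(program, data):
    """Hypothetical: return True iff program(data) would eventually halt."""
    raise NotImplementedError("uncomputable; stated here only for the argument")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for `program`
    # when run on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    # otherwise halt immediately

# Would diagonal(diagonal) halt? Either answer contradicts the oracle,
# so no correct, always-terminating `halts` can be written.
```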
To the Editor:
In the section entitled “Flaws,” David Gelernter’s arguments do not work to disrupt the “master analogy,” for several reasons. He writes: “You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.” This seems to be for a technical reason, and not for any fundamental differences between minds and software. The reason we can move a program from one computer to another is that programs were engineered to be shared. No such engineering occurred with biological computers through evolution. Remember, minds were not constructed by human beings for the use of human beings; computers were.
Mr. Gelernter writes: “You can run an endless series of different programs on any one computer, but only one ‘program’ runs, or ever can run, on any one human brain.” But when you are playing basketball or composing music, are you really running the same program? This is implausible.
Then there is this: “Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.” Software is transparent because it is engineered by human beings that way, for the use of human beings. Minds aren’t engineered by human beings for the use of other human beings, so it would make no sense that minds should be transparent.
“Computers can be erased,” Mr. Gelernter writes, “minds cannot.” This is plainly false. Minds sometimes lose certain abilities; an example is prosopagnosia, the inability to recognize faces due to cortical damage.
Finally, this: “Computers can be made to operate precisely as we choose; minds cannot.” That’s because we engineered computers precisely for that purpose.
In another place, Mr. Gelernter stated that “minds might be wholly quiet”; this is also plainly false. Our brains are never wholly quiet, and by extension, neither are our minds. This claim assumes that the mind is only what’s consciously represented to us. A mind that is “wholly quiet” is simply doing something outside consciousness, such as consolidation of memory.
Last, Mr. Gelernter states, “Computationalists cannot account for emotion.” In fact, they do. There is a field called affective neuroscience that studies emotions from this perspective, and it’s pretty successful. Big names in the field include Antonio Damasio and Joseph LeDoux.
Samer Nour Eddine
Address withheld
David Gelernter responds:
Samer Nour Eddine thinks that the “reason we can move a program from one computer to another is that programs were engineered to be shared.” This is a deep and fundamental error. Software can be moved among digital computers (regardless of how it is engineered) because all digital computers have exactly the same computing power: Each can compute every computable function—“computable” as defined by Turing and other great logicians (Church, Post, Kleene) of the same mathematical era. In the rest of his letter, Mr. Eddine continues to confuse logical questions with engineering ones. He writes that “our brains are never wholly quiet, and by extension, neither are our minds.” Our brains are gray and mushy, and what about our minds? “By extension”? Mr. Eddine boldly insists that well-known people have worked on the problem of emotion. I agree.
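The equivalence can be made concrete with a toy interpreter. The sketch below, offered only as an illustration with a made-up three-instruction machine, is an ordinary Python program stepping exactly through another machine’s program; any digital computer can play this game for any other, which is why software moves between architectures at all:

```python
# A toy register machine emulated in Python, illustrating that one digital
# computer can exactly simulate another. The instruction set is invented
# purely for this example.

def run(program, registers):
    pc = 0                                  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":                     # inc r    : registers[r] += 1
            registers[args[0]] += 1
        elif op == "dec":                   # dec r    : registers[r] -= 1
            registers[args[0]] -= 1
        elif op == "jnz":                   # jnz r, t : jump to t if registers[r] != 0
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Compute 3 + 4 by draining register 1 into register 0 (register 2 holds a
# constant 1, used for unconditional jumps).
print(run([("jnz", 1, 2), ("jnz", 2, 5), ("dec", 1), ("inc", 0), ("jnz", 2, 0)],
          {0: 3, 1: 4, 2: 1}))             # -> {0: 7, 1: 0, 2: 1}
```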
To the Editor:
Attempts to pinpoint the “essence” that makes humans human have been going on for centuries, particularly since paradigms of reality that defied biblical (Jewish/Christian cults per Mr. Gelernter) descriptions of reality became widely accepted. Certainly, humans have a distinction that sets them apart from all other creatures; in attempting to describe this human distinction, Mr. Gelernter called it “consciousness” or “mentality” or “mind” or “subjectivity” (among other terms), but the one important aspect of human ability that he overlooked was the ability to “handle” reality symbolically. We can talk about any aspect of this shared reality so long as we share a common language. The ability of human beings to create symbols is missing from Mr. Gelernter’s description of human interiority (a term I don’t believe he used to describe this distinction). Humans have been defined as the only creatures who know that they are going to die; well, at least, we are the only creatures who can talk about it.
Jerry Blaz
Tarzana, California
David Gelernter responds:
Jerry Blaz points out that human beings have been defined many ways over the years. Language-using creatures is one way. But it’s not a useful definition for my purposes in this piece.
To the Editor:
What an insightful essay by David Gelernter: He does an excellent job of showing the coldness of the scientistic worldview where man is finally put in his place. The article accurately describes the modern empirical hostility to subjectivity: In order to maintain its place as sole arbiter of knowledge, science must deny the existence of all the irrational elements of personhood that always elude its grasp.
I recognize that the political uses of science were only a secondary concern for Mr. Gelernter, but it is important to note the collaboration of scientific thought and radical subjectivism in leftist politics. In postmodern liberalism, science and subjectivism constitute a one-two punch that serves as a major barrier to honest democratic deliberation.
On issues such as global climate change and education policy, the left insists on science as the only source of true knowledge (because of its purported “objectivity”). But on other issues, the left furtively allows subjectivity to slip in the back door: When refuting the foundational moral or ethical claims of conservatives, the left is quick to remind us that all values are relative, that experience is subjective, and that any attempt to generalize human experience or motivation is evidence of totalitarian tendencies. Why is it that, for all of its love of objectivity, Big Science is so cozy with the radical subjectivism on the left? Follow the money.
Adam Ellwanger
University of Houston
To the Editor:
David Gelernter gives short shrift to the contributions that philosophers and scientists are making to the study of the mind, and he exaggerates the danger so-called roboticists pose to Western civilization. Most philosophical schools of thought today are materialist, but few are dominated by the influence of computer science in the way Mr. Gelernter describes. Nevertheless, he groups all these materialist approaches together as if they were a monolithic orthodoxy with roboticists at the vanguard.
But as Mr. Gelernter points out, the leading proponent of roboticism is Ray Kurzweil, a self-promoting futurist working for Google—hardly the leader of philosophical or scientific thinking on the mind. Mainstream academics treat Kurzweil as an interesting and thought-provoking curiosity, so Mr. Gelernter’s focus on him seems out of place to me. Furthermore, Mr. Gelernter claims that most philosophers of mind are completely dismissive of phenomenal experience. This is far from the case. The problems Mr. Gelernter points to about explaining the uniqueness of mental properties are widely debated by philosophers, and have been for centuries. In heaping scorn and ridicule on the work philosophers and scientists are doing on the mind—while relying primarily on descriptions of mental life from poets and novelists to explain his own subjective humanist approach—Mr. Gelernter seems to be trivializing one of the most traditional, and challenging, problems of philosophy. I would hope to see a conservative intellectual treat philosophy in a less dismissive manner.
More important, Mr. Gelernter seems to be conflating critiques of subjectivism with an attack on humanism. But the ability to appreciate history and culture, or to enjoy Mozart and Wordsworth, or to propose wise life-affirming social policies fortunately does not depend on subscribing to the correct theory of consciousness. As for Kurzweil’s transhumanism, that too seems to be a less-than-existential threat to humans, given that he is almost certainly wrong in what he predicts machines will be capable of in the foreseeable future. Even if he is not, saner minds would surely prevail before Kurzweil launched the cyborg revolution that would extinguish humanity, as Gelernter himself points out. While I certainly agree that there is much to criticize in what Kurzweil is claiming, I would have preferred a more measured or even constructive critique. In the end, Mr. Gelernter has not convinced me that Western civilization is so fragile as to be threatened either by philosophers with counterintuitive theories about the nature of mind or by futurists with unconventional visions of where humanity is going. In fact, it seems to me that those things are both perfectly natural products of Western civilization.
But my biggest concern about this piece is Mr. Gelernter’s suggested response to this perceived affront to human decency—a cry from the heart. Thanks to Fox News and talk radio, conservatives are already holding their own quite well among those who are motivated by emotion. What we conservatives need is to broaden our appeal to intellectuals. I fear this piece will just serve as another data point for those who wish to portray conservatism as inherently antagonistic to scientific reasoning and reflexively hostile to bold new ideas. Commentary is one of the few places where one can find well-reasoned arguments in defense of conservative ideas. It would be a shame if this magazine devolved into a forum for attacking ideas based on sentiment, not reason.
David A. Patten
Washington, D.C.
David Gelernter responds:
David A. Patten believes that I have been attacking a straw man, a non-existent “monolithic orthodoxy.” Of course there is no monolithic orthodoxy, and I never said there was. But unfortunately we are a generalization-hating culture—and without generalizing or abstracting, we cannot think. Meeting every attempt to see the field from anywhere but one inch off the ground with the “Hey, this ain’t no monolithic orthodoxy!” play is just another way to postpone thinking as long as possible. As David Chalmers has so convincingly pointed out, the field has long suffered from a clear bias against the phenomenological and in favor of what he calls the psychological view of mind, meaning the view that minimizes or ignores consciousness as experience versus consciousness as awareness. This is a valuable subjective judgment with which I find it impossible to disagree.
That Kurzweil is not a serious philosopher, that his appeal is to the public and not to academics, is exactly why he is so important and dangerous. Public opinion is even more important than academic philosophy in setting the cultural tone—hard as that might be for certain parties to accept. “What we conservatives need is to broaden our appeal to intellectuals.” I sympathize with the instinct, but it’s hopelessly naïve. We don’t need George Orwell or Paul Johnson to tell us that the modern intelligentsia has stopped worshipping God in order to be God. No group is less interested in ideas than intellectuals—as reflected in the fact that no environments are less intellectually diverse and less tolerant of free speech than modern universities. We take these facts for granted and shrug them off. Shame on us. The only proper, responsible reaction is to discard our present universities and build new ones, with the Internet’s help, now. How many more graduating classes will we sacrifice?
To the Editor:
I believe that David Gelernter is confused. I presume that his religious beliefs are causing him cognitive dissonance. Consider the following excerpted line from his article: “You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.” Yes, you can, at least partially, simply by talking to someone.
He also writes, “You can run an endless series of different programs on any one computer, but only one ‘program’ runs, or ever can run, on any one human brain.” The “writing” of the human “code” is, by its nature, a one-off event. So what?
“Software is transparent,” Mr. Gelernter writes. “I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.” The format of human “code” simply doesn’t permit that with our current level of technology.
Then there’s the contention that “Computers can be erased; minds cannot.” Yes, they can, to some extent. (My mind does this too often—it’s called forgetting.) We can only do this rather crudely and incompletely, due to our current level of technology.
Mr. Gelernter writes: “Computers can be made to operate precisely as we choose; minds cannot.” Advertisers spend a fortune in the belief that they can do this, profitably. Religious indoctrinators try the same trick.
As for philosophic zombies, they are impossible. Our consciousness is an evolved mental ability that allows us to operate upon our environment by understanding what is going on. (The understanding might be flawed, but it is usually “good enough.”) It is apparent to me that my cat has some consciousness, too. Hers just doesn’t function as well as mine, from my point of view. She might have a different opinion, except that her consciousness doesn’t run to that refined level. A zombie would have even less understanding, or none at all—it would be robotic or at the level of insect intelligence.
Mind is clearly a function of brain, an emergent property that has evolved by natural selection. There’s no more mystery about our perception of, say, the color blue (a particular frequency of electromagnetic radiation) than there is about our perception of size. The mystery is how mind, being non-physical, might affect matter. But if matter can affect mind, which it clearly can, then I see no reason why the reverse process should not operate. It is all just brain activity. No spirits are required.
Richard Harris
Kanata, Ontario, Canada
David Gelernter responds:
Richard Harris responds to my assertion that “you can’t transfer a mind, ever, from one brain to another” by writing “Yes, you can, at least partially.” But I didn’t assert that you can’t transfer a mind partially. Obviously, Mr. Harris, I can give you a piece of my mind—and I will. “Advertisers spend a fortune in the belief that they can” make minds “operate precisely as [they] choose.” Yes, Mr. Harris; but this is a false belief. Or do you disagree? “A zombie would have even less understanding, or none at all.” Yes, a zombie has no understanding at all. That’s just the point. The question is whether one can act as if one understands, although one doesn’t. The question isn’t whether mind affects matter but how it does.
To the Editor:
I read with pleasure David Gelernter’s article “The Closing of the Scientific Mind,” and I eagerly await more from him on subjectivism. I would like to propose that computationalism cannot and should not be so hastily rejected, however. “Cannot” because of FPGA (field-programmable gate array) hardware and unconventional rewriting software. “Should not” because the computationalist analogy is extremely powerful for countering materialist nonsense.
Consider that the brain-mind relationship might be more like the relationship between an FPGA and its software than like the relationship between a conventional computer and its software. The software of an FPGA can continuously restructure the hardware, so that after just a few steps, the two become a matched pair, neither capable of running without the other. Though the software can, in principle, be extracted, encoded, copied like any other, it cannot run on any hardware other than the one it configured itself (absent an external, step-by-step recorder that tracks every action, present usually only for debugging). The actual configuration of the machine is not externally discernible or reproducible without running the original software again on a fresh FPGA instance, being lost among a combinatorial number of possibilities.
Further, the software can continuously rewrite itself. This is not conventionally allowed in ordinary computers because it opens security hazards, but it is commonplace in applications like equation solvers that rewrite equations into non-obvious forms while maintaining correctness. Perhaps rewriting is central to subjectivism, allowing our minds to examine and optimize themselves along with the brain hardware they run on.
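A minimal sketch of the kind of rewriting an equation solver does, with toy correctness-preserving rules invented purely for illustration:

```python
# A toy term-rewriting system. Expressions are nested tuples such as
# ("add", 0, ("mul", 1, "y")); each rule preserves the expression's value.

def rewrite(expr):
    if not isinstance(expr, tuple):
        return expr                  # a constant or a variable name
    op, a, b = expr
    a, b = rewrite(a), rewrite(b)    # rewrite the subterms first
    if op == "add" and a == 0:
        return b                     # 0 + b  ->  b
    if op == "mul" and a == 1:
        return b                     # 1 * b  ->  b
    if op == "mul" and a == 0:
        return 0                     # 0 * b  ->  0
    return (op, a, b)

print(rewrite(("add", 0, ("mul", 1, "y"))))   # -> 'y'
```

Each pass changes the form of the expression while preserving its value, which is all that rewriting need mean here.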
Perhaps our minds are like rewriting systems and our brains like FPGAs? The computationalist analogy still obtains and forcefully makes the point that the software is a nonmaterial but totally real entity, something our ancestors would have unhesitatingly and correctly called “spiritual,” and a notion that causes supernova-like implosions of closed scientific minds. Something as pedestrian as a printer driver is a nonmaterial entity that can interact with the physical world and might have—pretty accurately—been called an angel or demon.
Brian Beckman
Address withheld
David Gelernter responds:
I am grateful to Brian Beckman for his comments, and the spirit of actual discussion in which he writes. I can’t follow him with respect to FPGAs; for any FPGA at any state in its evolution, conventional software on a conventional machine can exactly duplicate its behavior. Two digital computers must be logically equivalent; they can differ from each other only in their performance or efficiency. Performance is important to mind (if my thinking were 10 times faster or slower, the difference would matter), but I can’t draw any philosophical or logical conclusions based on computer performance because hardware and algorithms are both moving targets and change constantly.
To the Editor:
There seems to be an inherent paradox in David Gelernter’s article. A computationalist has utterly no basis for his opinion except for a subjective judgment on his part. Any intellectual rigor requires that a computationalist reject his own subjectivity. Of course, if he does so, he can’t use it as the basis for what is claimed as an observable, verifiable truth.
An endeavor where this problem plays out is in psychotherapy (as distinguished from other “mental health” treatments such as cognitive behavioral therapy or medication use). Every psychotherapist knows of remarkable results from the process, results acknowledged often by the person involved, that person’s spouse, family, and friends. Yet, in this country at least, psychotherapy has become suspect, and all the excitement and most of the resources have been directed toward pharmacological management of problems. Of course, there is no problem at all if the subjective doesn’t exist.
Richard Mize
Crescent City, California
David Gelernter responds:
Richard Mize’s observation is fascinating. The collapse of psychotherapy owes something to smugness and excessive claims; but things have swung way too far in the opposite direction. The opinions of intellectuals combine the behavior of pendulums and wrecking balls.
To the Editor:
I must point out to David Gelernter that an argument against these reductionists is quite simple: They say what they are saying only because they are programmed to say it. They think it is true only because they are programmed to think it is true. They think they are being objective only because they are programmed to think they are objective. Without the human ability to make judgments, they are merely recordings playing what has been recorded. Can two recordings have an argument? Can either recording actually persuade the other?
C.J. Stone
Address withheld
David Gelernter responds:
A fair number of computationalists would accept C.J. Stone’s arguments. They not only say but also believe that they are mere organic digital computers.
To the Editor:
I could not agree more with David Gelernter about many academics being closed-minded bullies. I take major issue, however, with some of his substantive points on artificial intelligence.
My main objection is to the points raised in the “Brain as Computer” section. The first quibble is philosophical. Mr. Gelernter tries to argue that a computational theory of the mind somehow belittles humanity, consciousness, and the soul. I find that unconvincing. Just as accepting Darwinian evolution as a plausible biological model detracts nothing from my belief in creation by the Almighty, probing the mind’s computational properties in no way hinders my appreciation of the mystery of consciousness. A question for Mr. Gelernter: If the brain is not a computer, then what is it? A magic box? To banish the computational paradigm is to usher in nonscientific thinking. The latter is perfectly suitable in the theological and moralistic realms—and indeed, I don’t see how even the most accurate computational model of the mind will have any bearing on morality.
And yet Mr. Gelernter isn’t attacking that straw man; he’s got his sights on the big one—the computational model of the brain. His objections (“The Flaws”) are rather puzzling. Says Mr. Gelernter: “But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts.”
And these are: “1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another. 2. You can run an endless series of different programs on any one computer, but only one ‘program’ runs, or ever can run, on any one human brain.”
These are “hardware” objections, which would apply equally well to any machine with read-only memory. Just because we lack the technology to rewrite the neural code does not mean that there is no neural code.
Mr. Gelernter goes on: “3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.”
If Mr. Gelernter were able, by inspecting the code being executed, always to give a meaningful, interpretable description—that would be quite a feat. Even more so if, say, the program being run were a neural net. Suppose I know that neuron 51 is firing at a rate of 726 to neuron 27—how does one interpret that?
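A toy network, with weights made up for illustration, makes the point: its complete state is open to inspection, yet the raw numbers interpret nothing.

```python
# A tiny feed-forward net whose entire "precise state" is visible, though
# the numbers themselves carry no human-readable meaning. Weights are arbitrary.
import math

weights = [[0.7, -1.3], [2.1, 0.4]]    # hidden-layer weights (made up)
out = [0.9, -0.6]                      # output weights (made up)

def forward(x):
    hidden = [1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1]))) for w in weights]
    return out[0] * hidden[0] + out[1] * hidden[1]

print(forward([1.0, 0.0]))   # every intermediate value can be read off...
print(weights)               # ...but "0.7" and "-1.3" explain nothing by themselves
```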
Continues Mr. Gelernter: “4. Computers can be erased; minds cannot.”
This is the same hardware objection found in his first two points, except it appears to be wrong on its own terms, as minds can surely be erased by drugs, injury, or death. If one wants to quibble over the semantics of “erasing,” then the “hardware” response of the first two points suffices.
Finally: “5. Computers can be made to operate precisely as we choose; minds cannot.”
This is essentially point three, but I’ll add that machine-learning theorists are regularly asked to provide “simulations” or “experimental” results to demonstrate the performance of their algorithms. The theorists’ frequent complaint—to which Mr. Gelernter appears to be sympathetic—is that the experiments are superfluous once one has given mathematical guarantees on an algorithm’s performance (and hence “understands” the algorithm). Unfortunately, no one can know with certainty how the program will handle real data, hence the need for empirical trials. Now some programs are indeed quite deterministic and transparent—but then again, so are some people (especially after the right amount of cult programming).
This does not exhaust my objections, but on the epistemological points (the zombie argument, mind-body problem), I’ll defer to the respective experts.
Aryeh Kontorovich
Beer Sheva, Israel
David Gelernter responds:
Aryeh Kontorovich writes that “probing the mind’s computational properties in no way hinders my appreciation of the mystery of consciousness.” But computationalists often believe that there is nothing fundamental left to explain with regard to consciousness. “If the brain is not a computer, then what is it?” he asks. “A magic box?” Are these the only two alternatives? (If a rabbit is not a computer, what is it?) A brain is a collection of cells with particular chemical and physical structures that matter to its operation—as the physical details of a logic switch do not matter to a computer. The brain in short is part of the human body and cannot be reduced to or equated with a computing machine or a formal logistic system.
Searle writes somewhere that his biggest surprise subsequent to the widespread hostile reaction to his Chinese Room thought experiment was that so many computer scientists simply could not believe that the physical, chemical structure of the brain actually matters. Mr. Kontorovich writes as if he were suffering from exactly this problem.
To the Editor:
David Gelernter’s analysis can be taken a step further. Not only are our brains nestled in bodies, but our bodies are nestled in a world. In this world, we create “memory places,” objective repositories of our subjective experience and feeling. These memory places take the shape of “homes” in the first instance, where we remember our ancestors and teach our young, but rapidly branch out into a full panoply of monuments, edifices, and literatures of culture. Any theory of consciousness must take the mind’s nestling in culture into account.
The mathematical classic Flatland makes clear the herculean effort that is required if we are to raise our awareness to recognize the presence of higher dimensions, and our existential relation to these dimensions. And we would do well to remember that modern mathematics suggests that the nestled hierarchy of ever higher (as well as ever more minute) dimensions is infinite.
Stefan Saal
Lancaster, New Hampshire
David Gelernter responds:
I think Stefan Saal is absolutely right; we must recognize not only the body but also the world at large as extensions of the mind.
To the Editor:
David Gelernter’s essay is very human—beautiful, inspirational, and informative all at the same time. It weaves together so many important threads. It exposes a looming threat to humanity and strikes a blow against that threat. Thank you for publishing it.
He makes one statement, however, that concerns me because it unnecessarily concedes ground to the roboticists that they have not fairly won. He writes: “The mind has its own structure and laws: It has desires, emotions, imagination; it is conscious. But no mind can exist apart from the brain that ‘embodies’ it.” One of the basic concepts of most religions is that the mind or soul can indeed exist apart from the brain that embodies it. This concept is of course ridiculed or ignored by scientists, but it has never been disproved. If religion is correct that consciousness can continue independent of the body, then Nagel is even more right than he imagines in saying that major scientific advances and the creation of new concepts are needed to understand how consciousness works. Roboticism becomes a ridiculous quest if the software runs independently of the machine.
Lyle Radke
Plano, Texas
David Gelernter responds:
Lyle Radke has a fair point respecting my assertion that “no mind can exist apart from the brain that ‘embodies’ it.” After all, maybe it can. But in the world of science alone—not the only world, certainly, but the place where I was arguing this piece—I don’t see how.
To the Editor:
“The Closing of the Scientific Mind,” by Yale scholar David Gelernter, takes its title from The Closing of the American Mind, a 1987 book by Allan Bloom that forecast the nation’s tumbling into heck and damnation if (among other modernist crimes) the scientific mind-set were allowed to ruin the subjective-conservative-humanism of impressionable youth. Of course it is all yawningly familiar. Jeremiads against modernity and science surge with rhythmic regularity and eerie similarity from centers of scholastic nostalgia on both the far left and far right. They call to mind C.P. Snow’s famous “Two Cultures” essay that rocked academia 50 years ago, in which Snow portrayed simmering resentment on the scholastic-humanities side of university campuses toward what the dons viewed as upstart arrogance—scientists usurping their authority over matters of “truth.”
What Mr. Gelernter ultimately conveys is the Zero-Sum Game—the dismal belief that if a person has superior powers in one realm, that plus must be paid for with a minus of inferiority in some other aspect of human life. Zero-sum thinking tugs naturally at us all; it was the common human reflex that dominated almost every human culture, though not our own.
Our civilization is the only one ever to have been based firmly on the positive-sum game—the notion that we can be many. That each success does not require a compensating failure. That each winner does not have to stand upon a smoldering loser. We live in a world filled with spectacular results, in which most children no longer grow up steeped in tragedy, but with some likelihood that they might turn inborn talents into positive success that uplifts not only themselves, but others as well.
That easily supported and statistically proved assertion is not a call for Pollyannaish complacency. Rather, all this tentative and early progress constitutes a clarion summons to complete the partly fulfilled Enlightenment promise. Indeed, we judge ourselves and our society harshly in proportion to how far short of that ideal we still fall, proving how embedded the ideal has become in our hearts.
And scientists lead the way. For every bad thing science engenders (and most often scientists issue the alerts), there are a hundred genuine advances. But it is in the realm of humanism, art, and the soul that Mr. Gelernter makes his central accusation, and reveals insipidity.
Anyone who has spent time around top-level scientists knows that they tend (with some exceptions) to be profoundly broad in their interests. Most are well read and thoughtful far beyond the so-called objective realms. Every great scientist I’ve met has had a strong side-avocation in the arts or humanities, often at a professional level. I watched Einstein perform with his violin when I was four. I discussed patterns of history and humanism with Murray Gell-Mann, before we shifted to discussing insights offered by James Joyce’s Ulysses and Finnegans Wake. Richard Feynman was among the world’s greatest bongo players; he also painted brilliantly and wrote passionately about humanity’s need to combine bold exploration with humility before a stunning cosmos. The anthropological insights of Sarah Blaffer Hrdy have challenged smug dogmas of both left and right, showing how we are simultaneously rooted in our ancestral past and profoundly launched far beyond it all.
Almost no modern scientist declares the non-existence of subjectivity or its irrelevance to human life. That straw man is a dodo from the 1960s era of logical positivism and Skinnerism, an obsolete calumny that is raised only by postmodernists and fools who seem bent on screeching at caricatures of science.
Likewise, take another Gelernter aspersion. He condemns a purported scientific fixation, that our human minds are mere software—a position taken literally by only a few intelligence researchers. Most apply the comparison only as a metaphor, useful for generating experiments and models, contingently, with exactly the tentativeness and humility that Mr. Gelernter claims technical people lack. And many AI scientists reject the mind-as-software model entirely.
Roger Penrose’s fabulous speculations about the specialness of human consciousness run diametrically opposite to Mr. Gelernter’s stereotype, yet they are backed by one of the most brilliant—and cantankerously contrarian—physicists of our age (a personality trait that is prevalent among the greatest scientific minds).
Mr. Gelernter writes: “Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out.” Shades of Allan Bloom! But in fact, across the vast and tragic history of our species, no human field has ever taught those skills as sincerely and relentlessly as science.
Civilization has recently advanced against myriad ancient crimes like racism, sexism, and environmental neglect—crimes which for millennia were excused by incantations and rationalizations cast by old-time scholastics, priests, and “humanist” scholars. We have progressed precisely because science taught us how to refute and cast down comfortable prejudices that all of our ancestors took for granted. It wasn’t preaching that made the crucial difference in ending those horrid errors of subjectivity, but relentless, scientific disproof of stereotypes about women, minorities, and so on, that finally and decisively overcame the dismal, filthy habit of blanketing entire groups with slander…the way David Gelernter attempts (laughably) to blanket libelous slurs across the one field of human life that keeps insatiably asking questions. Alas, his slanders tell us vastly more about him than about scientists.
David Brin
Encinitas, California
David Gelernter responds:
According to David Brin, “top-level scientists” tend to be “well read and thoughtful far beyond the so-called objective realms.” Unfortunately, he’s wrong. He is stuck in the world of my childhood. In the 1960s, this was true of many scientists at every level. It’s not true today. The world changes, Mr. Brin—a fact that intellectuals rarely seem to notice.
He writes, “Almost no modern scientist declares the non-existence of subjectivity or its irrelevance to human life.” Is Mr. Brin saying that computationalists or functionalists in the fields of computing, cognitive science, and philosophy are not scientists, or are “unscientific”? He might be right, but it’s an odd way of making the claim. Mr. Brin needs to study the distinction between a flat-out assertion (“Because I am a scientist, I am smarter than you”) and the evidence by which we reach conclusions about other people’s beliefs and desires. Functionalists clearly believe (whether or not they say so plainly) in the unimportance of the subjective to human life. That’s why they have invented a scheme for explaining that mental states (wistfulness, euphoria, being cold) do exist but don’t feel like anything—or if they do, it doesn’t matter. Back to your textbooks, Mr. Brin! Though if I were you, I’d fire my philosophy coach.
Mr. Brin writes of me: “He condemns a purported scientific fixation, that our human minds are mere software—a position taken literally by only a few intelligence researchers. Most apply the comparison only as a metaphor.” Here Mr. Brin is writing nonsense. I criticized what I called an analogy: that minds are analogous to software, and brains to digital computers. Most “intelligence researchers” certainly do believe this analogy. None takes it literally. It’s not even clear what it would mean to take it literally.
Mr. Brin: “‘Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out.’ Shades of Allan Bloom!” Is Mr. Brin saying that Allan Bloom was wrong? If you believe that, you’ll believe anything.
He also writes: “Civilization has recently advanced against myriad ancient crimes like racism, sexism, and environmental neglect…We have progressed precisely because science taught us how to refute and cast down comfortable prejudices.” Nonsense. Was Lincoln citing science when he said that our fathers had brought forth “a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal”? Was he citing the latest scientific results when he said, “With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in”? Did the men of the Union fight and die for science? Was Martin Luther King Jr. quoting scientists when he said “Let my people go”? Was the Civil Rights movement of the 1950s and ’60s inspired by science? Read your Bible, Mr. Brin. Did scientists lead the fights against anti-Semitism and racism and prejudice against women? Or did pastors, like the Rev. King, and priests and rabbis? Learn some history, Mr. Brin!
To the Editor:
If the demarcation between science and non-science had been absorbed along the lines proposed by Sir Karl Popper, 21st-century philosophy of science would be in a healthier state. Unfortunately, as David Gelernter laments, Popper’s stress on the conjectural or imaginative nature of what we claim to know seems to have been submerged by the retreat to justificationism.
In contrast, science from Popper’s perspective is that branch of the arts that presents its explanatory claims in a form testable against the world. The demarcation between science and non-science has nothing to do with methods or technologies of discovery but with the perpetual and normative requirement that scientific statements be logically amenable to being proven wrong, even if in practice this is a matter of decision. Furthermore, if one asks what the aim of science is, the answer is to explain whatever one deems in need of explanation. Science begins with problems, not with data. How can minds or their technological surrogates even perceive data if they do not possess modifiable expectations or propensities to solve problems?
Furthermore, building tools is not the same as practicing science. Scientific theories may be instruments, but they are not only instruments. They explain. A shovel or an electron microscope is not true or false; a scientific theory can be. Mind you, any claim to truth is only tentative—otherwise it is not science! “Can science explain everything?” is a ridiculous question. Our conjectures come first, as in all the arts. Let us praise imagination but remain humbled by our propensity for error. It is the humility of science that gives us hope that we will not be consumed by technological hubris.
Scientism is the enemy, not science.
Bruce Caithness
South Turramurra, New South Wales, Australia
To the Editor:
I am a fan of David Gelernter’s and enjoyed his article. I was surprised, however, that he did not cite the work of Kahneman, Tversky, Sunstein, and other cognitive scientists, who over the past 30 years have provided revolutionary insights into how the mind works. They have surely bolstered the importance of individual subjectivity and experience in decision-making with concrete, repeatable experiments. For example, Prospect Theory, initially developed by Kahneman and Tversky, has superseded Expected Utility Theory. The former properly accounts for individual subjectivity, while the latter is the mechanistic/computational approach that held sway for more than 250 years and has been proven wrong.
Bill Frank
Westborough, Massachusetts
To the Editor:
David Gelernter covers an incredibly important topic that pays rich rewards to thoughtful inquiry. Just four points. First, Chalmers’s zombie arguments were meant as arguments against materialism, the view that matter is all there is. Second, Chomsky, the modern originator of our current cognitive revolution, stands foursquare against the current artificial-intelligence view of what mind, intelligence, and language use amount to. Third, there are enormous problems with functionalism, and many philosophers of mind hold these objections to be quite significant. It is doubtful that functionalism is the dominant view, as it is doubtful there is any such thing as a dominant view in the philosophy of mind. Fourth, and this might be the most profound dynamic at play here, religious discourse often improves itself by letting secular and material forces speak to its religious errors. For example, Dennett’s criticism of the concept of atonement will likely help religious folks better understand it.
Timothy Holmes
Minneapolis, Minnesota
David Gelernter responds:
Timothy Holmes is certainly correct about Chalmers’s intentions in his zombie argument. But I can’t accept Chalmers’s conclusion that consciousness can be created by computation. That conclusion seems to me to rest, ultimately, on one of the weakest of all the artillery pieces ever fired against Searle’s Chinese Room thought experiment way back in 1980—the idea that if one were to replace, incrementally, every neuron in a human brain with silicon, the quality of the unfortunate subject’s consciousness could be assumed, without discussion, to remain unchanged. Bizarre. Nonetheless, the zombie argument is good and thought-provoking as far as it goes.
Mr. Holmes is certainly right that there are enormous problems with functionalism, and I agree that many if not most philosophers in the field have real difficulties with it. But anyone who attempts to describe the field from the outside can hardly avoid the conclusion that functionalism of one kind or another remains, today, the most important and influential position in the philosophy of mind. Certainly religious discourse has often been improved by secular arguments. The theme of modern Orthodox Judaism, Torah u’mada, religion and science (or “secular knowledge”), puts it exactly right.
To the Editor:
David Gelernter addresses questions of profound importance at a high level of sophistication. But when an individual of his capabilities undertakes such a project, there will be, alongside many original and powerful insights, a few errors. The zombie argument is a red herring, since it is impossible to prove or disprove. It should be classed together with solipsism and old Twilight Zone scenarios—where everything the protagonist “knows” about his life is a deception foisted on him by clever aliens—as an unproductive waste of time.
“The Flaws” are either plain wrong or else the result of technological limitations that we can imagine might be overcome one day. Much spadework on the ethical problems raised by artificial intelligence has been done within the literary genre of science fiction. See, for example, Robert Heinlein’s classic, The Moon Is a Harsh Mistress. While one can dismiss this as mere fantasy, recent proposals to build a cell-by-cell software model of an entire human brain should give us pause. If this software model began exhibiting signs of self-awareness and a desire to communicate, would rebooting the hardware it runs on amount to murder? Or suppose that genetic engineering and prostheses were used to endow a dog with the same intelligence and capacity for speech as a human child. Would it not demand the same degree of respect and autonomy?
The philosophy that dismisses humanity as a random ripple in a meaningless cosmos has been around since ancient times. Computationalism simply pours the same sour vinegar into a flashy new bottle.
David Hoffman
Jerusalem, Israel
David Gelernter responds:
I thank David Hoffman for his useful comments. Conclusions based on thought experiments are nearly always impossible to prove or disprove; that’s why we need thought experiments in the first place. A thought experiment is like a legal argument to the jury. It’s up to the scientist or philosopher to make his best case; then it goes to the community for a decision. A decision is required even where there’s no proof or disproof. (But of course the lack of decisive evidence, of any way of proving or disproving a conclusion, doesn’t mean that there is no absolute truth in the matter. Nor does it excuse us from trying our best to find it.)
An accurate software simulation of the brain poses no ethical problems of the sort Mr. Hoffman has in mind; there’s no more reason to believe that such a simulator will have intentional or mental states, be conscious, or be alive than there is to think that a software simulation of a thunderstorm will get you wet. (This point has been made many times before.) I agree with Mr. Hoffman: Computationalism is fairly new, but the impulses behind it are old.
To the Editor:
David Gelernter claims that “the intelligentsia was so furious [against Nagel] that it formed a lynch mob.” How, precisely, did this happen? Did Daniel Dennett, perhaps, knock on Nagel’s door in the middle of the night, assisted by Richard Dawkins holding a flaming torch, and then proceed to string up Nagel? Somehow I failed to read about this in the newspapers.
Here’s what actually happened: Nagel’s claims were criticized in print (good heavens!) by those better informed. These criticisms were largely rather mild in tone (see, for example, H. Allen Orr’s gentle dismantling in the New York Review of Books, February 7, 2013), but a few were somewhat harsher, pointing out that Nagel had some fundamental misunderstandings about science and biology. Nobody threatened Nagel or called for him to be fired, or for his book to be boycotted, or for his grants to be revoked. (By contrast, when climate scientists have received actual threats at international conferences, I don’t recall Mr. Gelernter or Nagel springing to their defense.)
Yes, there are certainly closed minds in academia. People with genuinely open minds don’t refer to legitimate challenges to their beliefs as “smashing the sacred tablets.” Based on their writings, I’d have to say Mr. Gelernter and Nagel qualify as good examples of the closed minds they decry.
Jeffrey Shallit
University of Waterloo, Waterloo, Ontario, Canada
David Gelernter responds:
Little needs saying to Jeffrey Shallit, except that I was not using “lynch mob” to suggest that Nagel’s opponents wanted him to be hanged or even gently murdered. “Nagel had some fundamental misunderstandings about science and biology” is a statement some people might possibly disagree with, especially those who care about what a person knows versus what degrees he holds. As Emerson would have said: Ph.D.’s, Mr. Shallit, are the hobgoblin of little minds.
To the Editor:
David Gelernter’s very fine critique of computationalist theories of mind and consciousness nevertheless surprised me by neglecting two important problems with such theories. One was explained by Roger Penrose in The Emperor’s New Mind (1989). Penrose argued that the brain couldn’t be compared to a digital computer because the very nature of a computer’s algorithmic architecture means that it cannot derive much of the mathematical knowledge that we know we possess. Obviously, the brain, however it operates, has done this. While Penrose, as a great mathematician, can be expected to know more about mathematics than others, he nevertheless explained the issue in a way intelligible to a normally educated person. If there were something wrong with his argument, I have yet to hear it; and I am surprised that Mr. Gelernter didn’t at least mention the point.
The other problem concerns the obvious reductive materialism of the “computationalist” and atheistic camp. Such people, as Mr. Gelernter says, would hold that “nothing stops us from imagining a universe exactly like ours in every respect except that consciousness does not exist.” Unfortunately, this simply has not been true in physics for some time. Have these people not heard of Niels Bohr? Wasn’t he the physicist who said that “nothing exists” until it has been observed? Did not Albert Einstein react with alarm to the suggestion that the presence of consciousness, an observer, compromised the realism of quantum mechanics? Even as physicists have tried ever since to get rid of that observer, a properly subjective being in Mr. Gelernter’s terms, Bohr’s paradox stands pretty much unshaken as I write; and the realism of quantum mechanics is still compromised by a subjective presence.
Mr. Gelernter’s case is not complete without Penrose and Bohr. With them, I think that the critique is decisive.
Kelley L. Ross
Kingston, New Jersey
David Gelernter responds:
Kelley L. Ross makes a mistake that seems hard to avoid—assuming that an author’s entire view of a topic appears in everything he publishes. I discussed Penrose’s ideas in my book The Muse in the Machine. I admire Penrose but can’t agree that his theories of creativity are anywhere within walking distance of reality. They fail Occam’s test: They’re too complicated. Simpler ideas, more consistent with what we already know, are available for the taking. See the Muse book (and my forthcoming Subjectivism). Bohr’s views of the philosophical implications of his physics are fascinating, but I can’t usher quantum mechanics into the discussion until I’m convinced it’s needed. It’s a dangerous guest. The qualitative conclusions of modern physics are too easily abused by laymen who don’t understand the physics and by physicists who don’t understand the philosophical questions. Nothing prevents laymen from mastering physics or physicists from mastering philosophy; it’s just that, in both cases, they rarely bother.
David Gelernter concludes:
Because so many letter writers don’t see where I’m coming from or where I’m headed, I’m bound to underline that my own interest is not in tearing down computationalism or Kurzweil—although I try to do my part. But naturally I’m interested mainly in my own theory, not theirs. I do not believe that consciousness has no function, although the literature is miles away from seeing what its function is. I believe that function centers on the role of emotion in remembering. There are no unconscious emotions (no unfelt feelings), although one can of course store in memory an unrealized description of a long-ago emotion—a latent or potential emotion. And when the memory becomes conscious (i.e., is recalled), the description is sometimes realized—is felt once more. Now, feelings play a major role in allowing us to recall things we never could otherwise, when we retain no active or accessible sliver of recollection with which to bait the memory hook. But the emotion we felt on that long-ago occasion might still be part of our active repertoire. It remains only to ask whether a felt emotion—a conscious emotion—can be a more effective memory cue than the mere description of an emotion. I believe the answer is yes.
It follows that we would ultimately catch a zombie out, because its ability to recollect would be weak at the edges compared with ours. And those particular edges can be hugely important.
My main interest is the cognitive spectrum, which I first wrote about in 1994 and have since described from several angles in Commentary. I continue to believe that this spectrum is the basis of mind—a basis we miss (although any child can see it) for the same reason we usually miss obvious philosophic points: The facts are too close to our noses. Our thinking, our emotions, and the texture of consciousness all change together over the course of every day, as our mental focus and energy go from a wide-awake maximum to a fast-asleep minimum. This spectrum’s implications are wide and deep, for ordinary thought, for some aspects of childhood development, and for other mental transitions. Science and philosophy of mind still see the mind as Descartes did: as basically static. But constant (albeit gradual, creeping) change—from high focus to low focus, from focused to diffuse attention, from largely rational to largely emotional mindfulness—is the mind’s foundation.
For these reasons it seems to me that, in our long-distance, super-endurance race to understand the human mind, we haven’t even reached the starting line.