About eleven years ago I happened to sit down for breakfast with Marvin Minsky and his wife, the pediatrician Gloria Rudisch. Having just begun to develop an interest in cognitive science, I knew Minsky by name as one of the fathers of artificial intelligence (or, as it is often abbreviated, AI). Thanks in part to the camaraderie between AI researchers and science-fiction writers, the phrase “artificial intelligence” still conjures up images of scientists building superhuman robots and computers that will render mere mortal thinkers obsolete. The reality, for the moment, is quite otherwise: AI researchers spend much of their time trying to figure out how human thought works, in order to write computer programs that can simulate it.

Though I knew little about his work, I was eager to hear whether Minsky believed computers could ever be programmed to translate one language into another. This hypothetical feat, called machine translation, is one aspect of a broader area of research called natural-language processing, which in essence means getting computers to use language the same way that humans do. Natural-language processing, in turn, has been one of the central problems in artificial intelligence since at least 1950, when, as an alternative to the empirically unverifiable question “Can machines think?,” the British mathematician Alan Turing proposed testing a machine’s ability to hold a conversation.

Minsky seemed happy enough to entertain my curiosity, explaining his view that before we can program machines to use language, we first need a theory about how to represent the commonsense meaning of words. At some point in the conversation, I asked him whether he thought that Noam Chomsky’s ideas about the structure of language, and how we acquire language, might have any application in AI research.

If only I had known better. My question provoked a fifteen-minute tirade, focused mostly on Minsky’s opinion that Chomsky had wrecked the field of psychology and set back the study of artificial intelligence by decades. Unwittingly, I had stumbled into a longstanding antagonism between the two giants of cognitive science, who have been colleagues at MIT since the 1960’s. To this day, it is said that MIT graduate students in linguistics refuse to speak with their counterparts in artificial intelligence, and vice versa. After that breakfast, I could understand why. Sensing my distress as Minsky continued to rant and gesticulate, Rudisch pulled me aside and whispered, “You pushed the Chomsky button.”

Although I have by now forgotten the greater part of Minsky’s diatribe, the main bone of contention seemed to be a philosophical disagreement over how to understand the human mind. Minsky accused Chomsky of “physics envy”: the desire to describe psychology in a logically elegant system of rules, like physical equations. Chomsky’s formal analysis of grammar, which had become more rarefied as it evolved, epitomized this physics-like approach. The trouble with this line of inquiry, as Minsky characterized it, was that mental processes that did not seem amenable to rational inquiry, like the ones generating meaning and consciousness, were regarded in it as scientifically intractable mysteries—and any attempt to emulate those processes, as in artificial intelligence, was therefore doomed from the start.

For Minsky, treating psychology as if it were akin to physics was, and is, wrongheaded. Instead, Minsky envisions the brain as an enormously complicated gadget, like a computer running a suite of different programs, written and updated at different times in the evolutionary timescale and hence riddled with bugs and inefficiencies, each competing with the others for resources from the central processing unit. To understand such an unwieldy system, we need not one description but many—and none of them is likely to be elegant. Nevertheless, if we could manage to cobble together enough of these bits and pieces, we might just end up with a working model of an intelligent machine.



Minsky’s new book, The Emotion Machine, is his latest contribution to a theory of “commonsense thinking,” a topic he sees as pivotal to the effort to understand human cognition but one that, in his view, has largely been ignored by psychologists for the last 30 years.* Indeed, the questions he poses are not easy ones. How do we make decisions—whether to cross the street in the face of oncoming traffic, whether to take a trip by train or by plane, whether to pursue one career or another? How do we set and prioritize our goals? How do we recognize problems and retrieve the knowledge necessary to solve them? And how do we learn how to do these things in the first place?

The book’s title refers to the idea that our emotional states are themselves examples of what Minsky dubs “ways to think”—general methods of problem-solving that our brains use to tackle the tasks of everyday life. This is contrary to the popular view of emotions as mere overtones to thought, or as irrational and ineffable shades of experience. Rather than being impediments to reasoning, Minsky argues, emotions can actually help us to focus our attention in ways that are relevant to our immediate goals. They do this by changing the “resources,” or processes, that our brains use at any given moment. A state of fear, for example, might switch on resources that heighten our speed and awareness while switching off resources devoted to reflection and the pursuit of entertainment.

Such a mechanism would surely have come in handy for our evolutionary predecessors, who needed to respond quickly to the dangers and opportunities they encountered in the wild. And to the extent that we face similar challenges today, our emotions still serve us well. Whether confronted by a beast or by a bully, it makes little sense to waste time thinking about where to find nice flowers, or what to buy for dinner. By contrast, such deliberations might become important when trying to woo a mate—a situation in which haste and hypervigilance are usually unproductive.

But emotions like fear and infatuation (and other “ways to think”) can also serve goals that seem distant from such biological imperatives as avoiding danger. The fear of missing a deadline or failing a test, for instance, might motivate an otherwise lazy student to pull an all-nighter. In Minsky’s model, these different sorts of goals are represented by different components of the mind: on the one hand, we have innate urges that are similar to those of other animals; on the other hand, we have values, ideals, and taboos that are represented by more recently evolved cognitive systems.

To accommodate all the mental procedures necessary to deal with these various goals, Minsky divides the mind into six levels. These range from basic instinctive and learned reactions (corresponding to the id in Freud’s three-part structural theory of human psychology) all the way up to self-conscious thinking and reflection (corresponding to the superego). In the middle are the processes of deliberative and reflective thinking (the ego), which mediate between the conflicting goals of the top and bottom levels.

Although our primal emotions may originally have been intended to operate at the level of instinctive reactions, they can also influence procedures at other levels. This is why, according to Minsky, we speak of the “pain” resulting from an affront or a professional failure, as if such things were physically hurtful. Sometimes this can seem maladaptive, as when we anguish obsessively over perceived mistakes or moral shortcomings. At other times, however, emotion-like shifts of resources might be crucial for coming to grips with difficult problems. Thus, we might solve some intellectual puzzles by alternating between a kind of subconscious elation, in which we turn off our focusing and scrutinizing resources in order to let many ideas flourish, and a similar kind of subliminal depression, when resources for ruminating and detecting flaws are switched back on.



Minsky imagines that each of his six levels of mental procedures is populated by special resources called “critics,” which recognize various kinds of situations. One kind of critic, for example, might identify a threat (a physical threat at one level, an intellectual threat at another). The critic, in turn, activates “selectors,” which in turn activate resources corresponding to ways to think. Facing a threat, we might activate some resources used in anger, and others used in fear. Some critics (“encouragers”) might respond when a particular strategy is working well (e.g., when a display of anger causes an opponent to back down), others when a strategy is not working at all (as when, instead of backing down, the opponent becomes more belligerent), and still others when two strategies appear to be in conflict (as when one has to decide whether to attack or retreat).
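For the programming-minded, Minsky’s chain of critics and selectors can be rendered as a toy sketch. The class names and the threat scenario below are my own invention, not Minsky’s; the point is only to show the shape of the mechanism:

```python
# Illustrative sketch of Minsky's critic -> selector -> resource chain.
# All names here are invented for illustration; the book contains no code.

class Selector:
    """Switches a set of mental 'resources' on or off."""
    def __init__(self, turn_on, turn_off):
        self.turn_on, self.turn_off = turn_on, turn_off

    def apply(self, active):
        return (active | self.turn_on) - self.turn_off

class Critic:
    """Recognizes a kind of situation and fires a selector."""
    def __init__(self, recognizes, selector):
        self.recognizes, self.selector = recognizes, selector

# A critic for physical threats: switch on fear-related resources,
# switch off reflective ones, yielding one of Minsky's "ways to think."
threat_critic = Critic(
    recognizes="physical threat",
    selector=Selector(turn_on={"vigilance", "fast-reflexes"},
                      turn_off={"reflection", "entertainment"}),
)

active = {"reflection", "entertainment"}
situation = "physical threat"
if situation == threat_critic.recognizes:    # the critic fires
    active = threat_critic.selector.apply(active)

print(sorted(active))  # vigilance and fast-reflexes are now active
```

An “encourager,” in this scheme, would simply be another critic, one whose recognized situation is the success or failure of a strategy already in play.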

Naturally, our thought processes are not limited to emotions. Minsky proposes, indeed, that the construction of each higher level of the mind during childhood and adolescence is accompanied by the development and application of new critics and selectors. These utilize strategic resources like breaking down complex problems into parts, making comparisons, retrieving memories, and relying on progressively more intricate mental models of the world and of our own minds. In addition to emotional thinking, then, we learn to apply strategies like imagination, logic, and reasoning by means of analogy.

In The Emotion Machine, Minsky devotes considerable space to describing such non-emotional strategies, with occasional diversions into the processes that support them. For example, in order to remember what worked in the past, we need mechanisms to form and retrieve memories; in order to imagine what might happen in the future, we need to be able to make predictions based on our commonsense knowledge. The power of the human mind, for Minsky, lies in its vast repertoire of different ways to think, and in its ability to switch among them in order to pursue its goals.

But what are our goals? Where do they come from, and how do we decide which to pursue? Minsky implies that one of the most fundamental (though not, in his opinion, insurmountable) differences between humans and computers is that humans determine their own goals and find the means to pursue them, while computers must be told what to do—and are equipped in advance with the information to do it.

Minsky acknowledges that it is useful for us to conceive of our goals as originating from our “selves”—mental constructs that embody our personal identities. But, in his view of the mind, no single set of resources can be identified as the self. Instead, our minds contain different models of ourselves at different levels: some models represent our basic needs, others our aspirations, and still others various aspects of our personalities. In reality, says Minsky, there is no central executive in charge of our minds. Rather, there is a collection of diverse processes, with critics that constantly interrupt each other and compete for control. The “self” is merely a convenient fiction that enables us to get on with our lives without worrying about our various goals all at once.

If there is no managerial self, how then do we manage to pursue so many different aims, and to set priorities among them so effectively? Minsky proposes that at least some of our goals are influenced by “imprimers,” a term derived from the ethological notion of imprinting that was made famous by Konrad Lorenz (as in the case of a gosling that learns how to behave by spontaneously following or “imprinting” on its mother goose). External imprimers—like our parents—might scold us for pursuing bad goals (playing in the mud), or praise us for pursuing better ones (finishing our vegetables), determining what courses of action we are more likely to follow in the future.

But we also develop our own mental models of our imprimers, and these act like critics to recognize beforehand which goals are worthy of pursuit (my mother would not want me to play in the mud). Eventually, our models of imprimers are transmuted into representations that we call our “conscience” or “moral code,” and that presumably help to determine the hierarchy of goals we ought to desire.



Like Minsky’s other popular books, The Emotion Machine has its own peculiar style of exposition, relying heavily on neologisms (like “imprimer”), diagrams, epigraphs (sometimes repeated), and imagined interlocutors (identified by labels like “Student,” “Psychologist,” “Romanticist,” “Determinist,” and so forth). Despite its straightforward and relatively unadorned prose style, it can be difficult to follow—in part because the ideas it contains are not presented or organized in any particularly sensible way. (Not the best advertisement for critics, selectors, and encouragers in action.) Its pages are peppered with references to Minsky’s own prior writings and what amount to literary rain checks: promises that an issue will be discussed at some later point.

Moreover, while Minsky’s intellectual creativity is vast—and the knowledge he brings to bear on the problem of cognition is formidable—one gets the impression that much has been left out. Some of this, to be sure, is deliberate. For example, Minsky explicitly avoids reference to current hypotheses in neuroscience, most of which are likely to become outdated in short order. This is fair enough, and—considering the inanity of most contemporary efforts to construct neuroscientific explanations of higher-level mental processes like morality and consciousness—a prudent move on his part. Besides, he does cite examples from neuroscience where they are pertinent, as when he observes that brain injury resulting in an inability to experience emotions can also severely impair the capacity for decision-making.

On the other hand, there are points at which Minsky seems to ignore entire fields of inquiry that bear upon his theorizing—particularly fields that have developed since the 1970’s. An example is research on cognitive impairments in illnesses like depression and schizophrenia, which could be construed as providing evidence for his thesis that some human mental disorders arise from “bugs” in the processes that regulate switching among ways to think.

There are more striking lacunae as well. While Minsky speculates about the development of children’s thought, his main point of reference on this subject seems to be the work of the Swiss psychologist Jean Piaget, who wrote in the 1920’s. Needless to say, there has been some progress in the area of cognitive development since then. Likewise, in a section on analogical reasoning, Minsky proposes that “the architecture of our brains has evolved to have structures that make it easy for us to link . . . the same things seen from different points of view,” and then goes on to assert: “I don’t know of any experiments to see if structures like these can be found in our brains.” But scores of such experiments have been conducted over the last few decades, as any neuropsychologist who studies disorders of object recognition can attest.

Occasionally, Minsky’s “new” ideas seem to amount to little more than reinventing the wheel. His discussion of imprimers covers ground trod more than a half-century ago by the school of object-relations psychologists (“object” being the psychoanalytic equivalent of “imprimer”)—a fact that is perhaps not surprising, since both Minsky and the object-relations theorists were heavily influenced by Freud.



The really relevant question, of course, is whether the ideas presented in The Emotion Machine—whether innovative or simply innovatively repackaged—represent a substantial advance in our understanding of the human mind.

Minsky is undoubtedly right when he says that thinking is a topic that is both extraordinarily important and poorly understood. Thinking about thinking is, simply, hard to do. His approach to the issue is useful insofar as it delineates the kinds of problems that thinking must solve: determining the most useful ways to pursue our goals, splitting goals into subgoals, reframing difficult problems, and so forth. A point that Minsky makes repeatedly, and with much justification, is that in attempting to understand the methods we use to think, it would be a mistake to apply Occam’s razor—the principle that, other things being equal, the simplest explanation is to be preferred. In the clamor of our brain processes, commonsense reasoning might well be arrived at by way of complex strategies.

Yet as Minsky himself points out, all this conjecture will get us nowhere unless we understand the domains of knowledge over which such strategies are applied. In other words, studying how to solve problems in general is nearly impossible without a way of describing what the problems are about—or, for that matter, without an understanding of what kinds of problems are important to us and how we approach them. Here The Emotion Machine is much less useful. Minsky recognizes the problem—the problem, that is, of representing knowledge—as the barrier that has stymied AI research for the last two decades; but, like the majority of his colleagues, he cannot seem to figure out a practical way around it.

During the 1960’s and early 1970’s, the heyday of research on general problem-solving, cognitive scientists studied the ways in which people approached “model” tasks like the Tower of Hanoi. (This is a puzzle in which disks in a stack have to be moved from one peg to another according to certain rules.) Such tasks had the advantage of being simultaneously challenging for human beings yet relatively amenable to description in mathematical terms. Cognitive scientists could compare the strategies used by human subjects in tackling the tasks, and AI researchers could model those strategies computationally.
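For readers curious to see why such tasks appealed to AI researchers, the Tower of Hanoi’s solution can be stated in a few lines. This is the standard textbook recursion, not any particular researcher’s model:

```python
def hanoi(n, source, target, spare):
    """Move n disks from source to target, obeying the rule that
    a larger disk may never rest on a smaller one."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk,
    # then restack the n-1 disks on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 2**3 - 1 = 7 moves, the provable minimum
```

The task is hard enough that people solve it haltingly, yet simple enough that every human strategy can be compared against this mathematically exact description—exactly the combination the era’s researchers wanted.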

Indeed, it can fairly be said that AI devices have been extremely successful at this sort of thing, where the problem is circumscribed and the options at each step are relatively limited. Chess-playing computers are now as good as, or better than, the best human players. AI-controlled vehicles can navigate obstacle courses and drive on empty streets. Some “expert” systems, given the right information, can even make accurate medical diagnoses—albeit only when the cases are relatively clear-cut, or when the range of possibilities is restricted.

Then, around 1980, scientists began to tackle so-called “semantically rich” problems—in other words, realistic ones. As Minsky notes, it was not long before work ground to a halt: most researchers recognized that the number of bits of knowledge that needed to be encoded simply to frame a given problem was impossibly enormous. The only way forward, it appeared, was to figure out a way to program a computer to acquire the knowledge it needed by itself. Minsky likens this approach to building a “baby-machine,” and notes quite correctly that it has led to unimpressive results. Programmers have no idea how to equip such machines with the ability to learn anything but the simplest kinds of relations.

In some respects, it is true, the baby-machine approach has proved tremendously productive. Some devices (so-called connectionist networks) have learning algorithms that make them quite good at recognizing patterns and statistical regularities. This is the principle used by e-mail filters to sort real correspondence from spam, or, more critically, by military-intelligence programs to detect terrorist chatter among millions of intercepted telephone calls. Unfortunately, such systems work well only when the patterns in question are well defined and relatively static, and the knowledge the systems can acquire is limited to information that can be reduced to numerical values. A connectionist network cannot, for example, learn by itself a system of rules and exceptions like those governing the past tense in English. For a past-tense generator to work at all reasonably, some rules must be built in—a human “fix” that defeats the purpose of a machine that can learn everything it needs to know by itself.
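To make the contrast concrete, here is a minimal learner in the connectionist spirit: a single perceptron, trained on data I have invented for illustration. It succeeds precisely because its patterns are well defined and numerical:

```python
# A minimal "connectionist" learner: a perceptron trained to separate
# two well-defined numerical patterns. Purely illustrative; real spam
# filters use far larger statistical models, but the principle is the
# same, and so is the limitation: the regularity must be numerical.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:          # label is +1 or -1
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
            if pred != label:             # adjust weights only on mistakes
                w = [w[0] + lr*label*x[0], w[1] + lr*label*x[1]]
                b += lr * label
    return w, b

# Invented features: count of suspicious words, count of links.
data = [((3, 4), 1), ((4, 3), 1), ((0, 1), -1), ((1, 0), -1)]
w, b = train_perceptron(data)
classify = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
print(classify((5, 5)), classify((0, 0)))  # 1 -1
```

Nothing in this machinery could represent a rule with exceptions—“add -ed, unless the verb is irregular”—which is just the kind of knowledge the English past tense requires.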

Some have suggested that the limitations of statistical-learning devices can be overcome if we can just gain access to enough information by means of powerful enough computers. This is Google’s approach to the problem of machine translation. By searching databases to compare vast tracts of text in two languages, the theory goes, a computer algorithm can sort through enough contextual subtleties to come up with the appropriate translation for any new phrase. But the theory is flawed, because there is no guarantee that a given phrase has ever occurred before, in any context. Without a way of representing the rules that generate phrases—not to mention the various aspects of word meaning—Google’s translator will always be prone to mistakes.
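The flaw is easy to demonstrate in miniature. The toy phrase table below is my own invention, and vastly simpler than anything Google actually uses, but it captures the failure mode:

```python
# A toy phrase-table "translator" in the statistical spirit described
# above. The entries are invented; real systems hold billions of
# phrase pairs, but the failure mode is identical: a phrase never
# seen in the training data simply has no entry.

phrase_table = {
    "good morning": "buenos días",
    "thank you": "gracias",
    "the cat": "el gato",
}

def translate(phrase):
    # Returns None when the exact phrase never occurred in the data.
    return phrase_table.get(phrase)

print(translate("good morning"))         # buenos días
print(translate("the cat sat quietly"))  # None: novel phrase, no entry
```

A system that represented the rules generating phrases could compose a translation for the novel sentence; a system that only looks things up cannot.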

In recent years programmers have tinkered with other approaches to AI—so-called “evolutionary” algorithms, for example, which generate multiple solutions to a given problem, and select the best ones for further refinement. All of these methods, however, are severely limited in the kinds of information they can work with. In The Emotion Machine, Minsky suggests that information-accommodating machines must be equipped with “higher reflective levels” in order to organize the knowledge they acquire. Human children, he imagines, make extensive use of such “structures” to represent new knowledge and processes. But what does this mean? Are different structures used for different kinds of knowledge? If so, is it reasonable to think that there is some set of all-purpose mental representations that can capture the rules of language, the plot of a story, and the layout of a room?
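The evolutionary recipe itself (generate candidate solutions, keep the best, refine them by mutation) fits in a few lines. The toy objective function and parameters here are illustrative, not drawn from any actual system:

```python
import random

# A toy evolutionary algorithm: generate candidates, select the
# fittest, refine the survivors by random mutation. Illustrative only.

def evolve(fitness, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Select: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refine: each survivor spawns a slightly mutated offspring.
        children = [x + rng.gauss(0, 0.5) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Maximize a simple peaked function whose optimum is at x = 3.
best = evolve(lambda x: -(x - 3) ** 2)
print(round(best, 1))  # converges close to 3.0
```

Note what the sketch presupposes: the candidates are bare numbers, and fitness is a number too. The method says nothing about how to represent the rules of a language or the plot of a story—which is just the limitation at issue.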



Enter Minsky’s nemesis, Noam Chomsky. One of Chomsky’s best-known ideas is that a system as complex as language cannot be learned from scratch. Instead, the human mind must be pre-programmed with the ability to acquire and utilize human language, including the capacity to represent sounds, words, and the rules of grammar. Chomsky believes that by studying how the process of language acquisition unfolds, we can gain some insight into how the human mind operates.

Chomsky focused on language because it is governed by a system of rules (that is, syntax) that can be thought of as self-contained: at least in theory, syntax is independent of the other contents of our knowledge, and unlike, say, vision, it does not have to conform to any external reality. But language is not the only complex system that needs to be learned quickly and from relatively incomplete data. Over the last few decades, many cognitive psychologists have come to believe that similar innate learning systems exist in other domains—that we have, for example, a kind of innate theory of physics, which provides us with certain assumptions about the behavior of objects that would otherwise have to be learned the hard way. (Minsky postulates such a system in passing, but seems to regard it as a novel idea.) Likewise, we might have an innate theory of biology, which accounts for our intuitions about living things. It is possible that each of these systems organizes knowledge in its own way, meaning that “commonsense” knowledge is different in different domains.

To the extent that inquiry in contemporary psychology is directed along such lines, it actually marks a triumph for Minsky’s “gadget” view of the mind: the work of the brain is not a single, rational, Chomskyan process but an assortment of functions and intuitive theories that evolved at different times to serve different purposes. Moreover, most psychologists would agree with Minsky’s postulate that our basic intuitive theories in many domains can be revised and revamped as we acquire more knowledge and develop new modes of thought.

But where Minsky is probably wrong is in his assumption that we can simply catalog the general processes of thought without investigating the specific structures inherent in each domain of thought. To reason about objects, we need to understand the ways in which we represent objects; to reason about living things, we need to understand how we think about living things. It might be reasonable to move a life-sized statue of an elephant to a new location by disassembling and reassembling it; it would be much less reasonable to move an elephant that way.

By the same token, there is no reason to believe that the development of “higher-level” thought processes applies in all domains of thought at the same time, or in the same way (as was hypothesized, notably, by Piaget). A child might have a relatively sophisticated understanding of human behavior, but a poor understanding of how complex machines work—or vice versa, as some researchers believe to be the case in certain forms of autism. A physicist’s understanding of mechanics is vastly different from that of the average person, but this says nothing about his ability to make challenging moral decisions.

It is hard to see how psychologists can make much progress in figuring out the operations that underpin these different kinds of reasoning without at least a little “physics envy,” including a rigorous approach to experimentation. Minsky acknowledges that we have very little introspective insight into the thought processes we were born with, and infants, for their part, are notoriously bad at explaining what they think (or whether they are hungry or have just soiled their diapers). Even those of us capable of speaking possess relatively little knowledge of the mechanisms that produce our thoughts—otherwise, artificial intelligence would not present such a difficult problem.

For these reasons, it is not likely that we will get very far in understanding human cognition—or in simulating it—by cataloguing our emotions or making lists of ways to think, as Minsky suggests. A better way forward might be to figure out, by experimental observation, the kinds of information structures that are used to represent our knowledge of things like the words of language and objects in the world. Indeed, the AI devices that are most successful at emulating aspects of human cognition are those designed to include rules similar to those encoded in the human brain, like the model past-tense networks that are wired to recognize a distinction between regular (walked) and irregular (ran) verb forms. Of course, this is not enough to enable us to build a natural-language processor, but it is a necessary step.

As for the larger mysteries—like the nature of meaning and the origins of consciousness—it is not clear that Minsky has much to say beyond identifying the solutions he thinks are bad. True, at the moment no one else has anything more constructive to say, either. But if we cannot explain how we manage our own goals, how can we build computers that will?

Minsky’s faith in the potential of artificial intelligence would thus seem to be undercut. But can it then be argued, following Chomsky’s logic, that questions like meaning and consciousness do not belong to the realm of science at all, but to the realm of philosophy? This too seems deeply unsatisfying. After all, if we can explain some of the operations of the brain, we should in principle be able to explain all of them. In that regard, Minsky’s empirical approach may ultimately succeed: if we can reverse-engineer and reconstruct enough of the basic modules of human thought, we might be able to link them together in a way that produces human-like thought.

Unfortunately, you cannot put together a machine with parts you do not have. And, by any measure, we are still an exceedingly long way from having them.





* The book’s subtitle is “Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind.” Simon & Schuster, 400 pp., $26.00.
