Every once in a while a deranged man appears across the street from my office, shouting loudly but incoherently about a conspiracy threatening him. After several hours he disappears; presumably someone comes to take him away. But then he reappears in a few weeks, or a few months, and the scene is repeated. I once mentioned him to a colleague, also a psychotherapist, who said I ought to count my blessings since he shows up only every so often. My colleague’s office is located not too far from a halfway house, and on the streets near his building there are so many strange and disorderly people that some of his patients are fearful and want him to move. He does not blame them; he is sometimes frightened himself. He shakes his head sadly and asks: “How in the world did we ever get into this mess?”
The short, flip answer to his question is that we took a large but manageable problem—caring for the long-term mentally ill—put our Best Minds to work on it, and produced a vast and unmanageable problem, the psychotic homeless. The somewhat longer answer runs along these lines: the traditional mental hospital was disliked by almost everyone. It was costly, it was ineffective, much of the time it served only as a warehouse, and some of the time it was the snakepit so vividly depicted in film, fiction, and exposé journalism. It survived only because there seemed no other option. Then, unexpectedly, an alternative appeared, one produced by apparent progress on two fronts: the emergence of anti-psychotic medication and the development of new ideas about psychosis itself, to the effect that mental illness was rooted in and sustained by malevolent environments, one of which was the mental hospital. By emptying such institutions, not only would the state save untold sums of money, but the patients locked up inside would be liberated, would lead far better lives outside, and might even be cured of what were possibly pseudo-illnesses.
The hospitals were emptied, and disaster followed—it was perhaps the greatest social-policy fiasco of an era which specialized in them. Nothing worked as it was supposed to do. The medications were helpful, but they were not the panaceas advertised. They were not always effective, they sometimes produced serious side effects, many patients refused or could not remember to take them. The substitutes for the hospitals—community centers of various kinds—were not in fact established, or were given over to activities interesting mostly to the mental-health profession, such as providing outpatient psychotherapy for the not-so-disturbed. Then, when one counted the indirect costs of alternate care, or of no care at all, it became evident that there were no financial savings to be had. By the time all these problems were recognized, there were tens of thousands of insane people on the streets, most of them helpless and vulnerable, some dangerous to themselves and others.
The conventional assessment of what went wrong agrees that serious mistakes were made, that we were overly optimistic about what we could accomplish. But as this school of thought sees it, our errors grew out of enthusiasm, out of that activist ardor, characteristically American, which is ever ready to innovate and try out the untried. In this case the experiment failed, but we learned some valuable lessons and will do better next time. Back to the drawing board.
The account seems plausible but is in fact seriously misleading. The decision to empty the mental hospitals was not simply a mistaken judgment. It was an ideological decision deriving from strong convictions about both the nature of psychosis and the function of the hospital. These in turn reflected powerful though often unvoiced assumptions about human nature and the social order. If we had not made those assumptions we would not have made the errors we did. There is, in short, a connection leading from some of the books in my office—books on political philosophy and academic psychology—to that tortured man across the street, wandering up and down the sidewalk and shouting at the heavens.
_____________
To trace that connection we could begin from any number of starting points, but a good one might be American academic psychology in the 1950’s, then all but dominated by behaviorist learning theory. The true psychological scientist, according to the behaviorists, had to concentrate on what was visible and external and measurable, and to eschew speculating about such ineffables as “the mind.” The mind was merely a hypothesis, and even if there were such an entity it could be reached only by the slow, steady building of models based on stimulus and response. All else was wool-gathering.
These ideas, as Gordon Allport of Harvard, a (quiet) opponent of behaviorism, wrote in the mid-1950’s, had their origins in the empiricist tradition deriving from John Locke. As Allport characterized that position, since “mind is by nature a tabula rasa, it is not the organism but what happens to the organism from outside that is important.” And indeed in its earliest, purest version, behaviorism was the heir not only of Locke but of the Enlightenment more generally. In the work of the American psychologist J.B. Watson (1878-1958), behaviorism revived the Enlightenment formula for human perfectibility by rejecting the importance of innate differences and by promoting (in the words of John Passmore) “the conviction that men can be improved to an unlimited degree by controlling the formation of their habits.” These were not merely hypothetical possibilities for Watson. One of his major works, Behaviorism, was a remarkably self-confident statement of the nearly infinite capacity of behavioral techniques to produce what we want in human conduct.
One might imagine that the Lockean tradition has by now been routed. Who still believes in an organism empty, passive, entirely pliable, lacking in will or purpose and without tendency? Even at the time Allport wrote, about thirty-five years ago, behaviorism was already losing its luster, and was being replaced by the cognitive approaches it once scornfully dismissed. Thus, the dominant voice in developmental psychology in those days was Jean Piaget, who posited complex intrinsic patterns of thinking which unfolded as the child grew. In addition, psychoanalytic psychologies were for the first time becoming respectable in the graduate departments, and that extraordinarily “full” system of internal tendencies was taken very seriously indeed. Even the most formidable contemporary behaviorist, B.F. Skinner, was beginning to seem quaint, his contribution now seen as merely providing a bag of tricks with which behavioral therapists discouraged bad habits.
Yet the Lockean tradition did not wither away. As with other strong ideas—true and false alike—it managed to survive and prosper, doing so by assuming a new identity. Like a revenant, the Lockean spirit departed one host and attached itself to a more receptive one, in this case moving from the learning laboratory to social theories of human conduct, now become aggressively political.
The strategy was to devalue the concept of personality, by denying, first, that it is consistent over time and, second, that it is an important determinant of behavior. Personality was said to be something of a chimera—there is less there than meets the eye. We imagine a degree of consistency we cannot really demonstrate. People will be selfish one moment, altruistic the next. Children who will not cheat in one situation will do so in another—indeed, there are strong findings to this effect, or so it was said. Why should we believe in the existence of traits like honesty or altruism, given how hard it is to nail down their presence?
If such qualities are illusory or ephemeral, and do not govern conduct, then what does? The answer was “the situation,” the influence of the immediate milieu, its expectations and constraints: I am “honest” not because of a deeply entrenched character trait, let alone the promptings of the superego, but because I fear the consequences of being caught. If I see others cheating or stealing, and if it is clear I will not be found out, I will not be “honest.” Autres temps, autres moeurs, even over the short run. People are neither good nor evil; they are reflections of the milieu and, much of the time, of their immediate surroundings.
_____________
So said, and says, situationism, which I have put here in its purest form, absent the usual qualifications and escape hatches. Like behaviorism, it too is by no means a new idea. It can be traced back to the sociologist W.I. Thomas, in an important essay appearing in the 1920’s. Gardner Murphy’s influential textbook on personality, published in 1947, gave a full chapter to situationism. But in Murphy’s version the situation is only one of a large number of variables influencing conduct, most of them internal factors such as traits, needs, defenses, and emotions. In the 1960’s and 70’s a new and tougher version set itself to expunge those variables, to demonstrate the hegemony of the situation per se. In the most famous situational experiments, the studies of obedience conducted by Stanley Milgram, an unsuspecting person is persuaded by the “experimenter” to administer ostensibly dangerous electric shocks to ostensible research subjects. In another well-known example, ordinary undergraduates are induced to play the role of prison guards and soon enough lose themselves in beastliness. These demonstrations are quite compelling, seeming to prove that ordinary people can be coerced by “situation” so as to behave in absurd or aberrant or even abhorrent ways.
Yet if one looks at the studies closely, some troubling questions arise. Do they in fact simulate the “reality” they set out to duplicate, and do the “laws” discovered travel beyond the demonstration? Even if they do, they tell us only about short-term events, as though life were composed only of vignettes. What would a longer perspective tell us about the human career? Above all, the demonstrations prove compelling only at first blush; once variations are introduced on the original experiments, so many exceptions and ambiguities emerge as to make us wonder whether we have discovered anything that says much or means much.
These logical and empirical problems, serious enough on their own, are overshadowed by an avalanche of findings which contradict the major tenets of situationism (in its strong version). We now know with some certainty that personality is remarkably consistent over time. A personality test taken in one’s teens will yield essentially the same results when taken again in one’s fifties. Even ratings of a person made by others early in life correlate highly with such ratings made in adulthood. The most impressive findings of all are those demonstrating the long-term effects of personality: children rated as undercontrolled in childhood (ages eight to ten) are more likely to be downwardly mobile, while those rated as shy lag behind their peers in such matters as getting married. Even more startling is the evidence from various studies showing a relationship between subtle measures of personal style in early adulthood and early mortality.
The wonder is that situationism was taken seriously in the first place. Anyone who has raised children will attest both to the consistency of character and to its effect on how lives are lived. The reclusive, pensive child will likely remain that way, and will likely lead a different life from that of his gregarious brother. Anyone who has lived long enough will observe the same consistency in his friends. A boy I knew in high school is now an economic analyst who appears frequently on television, and watching him I am transfixed by how little he has changed: the same body language, facial expressions, intonations of voice, above all the same mocking wit, once directed at the fools who taught us, now deployed against the fools who make our economic policy. Of course these are merely anecdotes and not systematic findings, let alone scientific proof. But there has always been enough empirical information to rebut the assumptions of situationism, as in the revealing studies by the Gluecks and by Lee Robins showing close connections between certain forms of childhood disorder and criminality later in life.
_____________
Something, however, was in the air when situationism was introduced—something political—and we get a glimpse of it if we return to Gardner Murphy’s treatment of 1947. Well before the civil-rights movement or the onset of feminism, Murphy wanted to show that traits imputed to blacks or Jews or Italians or women are responses to the situations they find themselves in. Psychology, he wrote, demonstrates that “there are no large and socially important differences . . . based on racial stock,” and the traits said to characterize a member of a given group—desirable and undesirable alike—are best understood “as reflections of the situation in which he is placed and of the roles which he must enact.”
Yet even so, Murphy did not go so far as to assume that circumstance and role were the whole explanation of behavior; cultural and other differences could be real and perdurable. It was this latter conclusion that was swept away when an egalitarian zeal overtook the social sciences. All differences were now suspect, and many were anathema. Everyone, it was now asserted, is equal: if some score differently on tests, something is wrong with the measure, or with how the measure is taken, or the difference is the result of how we have been raised, or of the expectations we have internalized, or of the signals given off by those teaching us or testing us. The differences that are undeniably there have been produced by upbringing alone, and are neither biological nor constitutional nor hereditary in origin.
Aaron Wildavsky of Berkeley has termed this state of mind “radical egalitarianism,” and in a series of penetrating essays has demonstrated its spread throughout the elite culture. According to its tenets, all differences reflecting “hierarchy” are inherently suspect and reprehensible—men over women, white over black, rich over poor. The new egalitarian passion reaches out, in fact, to all relations marked by unequal status—Third World vs. First World, mature vs. young (children’s rights), humans vs. other species (animal rights). Perhaps its most unexpected extension has been to behavior once considered not merely different but socially or morally deviant—homosexuality is the most striking example here, but also drug abuse and to a lesser extent criminality.
We can see how comfortably psychosis fits into the Lockean formula and its egalitarian extension. The notion that personality is insubstantial, transient, illusory, had its counterpart in a new understanding of psychosis—that it too is not a fixed condition but is fluid and changes with the moment; indeed, that it might not be present at all, but only an illusion. In this understanding, “craziness” is a term we use to shut away people who are merely different, independent, or eccentric. They listen to a different drummer. What we term insanity is often no more than a heightened sensitivity to the craziness of the world, or to the truly mad behavior of the so-called normals. Some psychotics are visionaries, seeing truths the rest of us can reach only through mind-altering drugs. And since psychosis is not biological, not fixed, and not intractable, it should not be “treated” unless the person affected so wishes.
The attack on the idea of psychosis might have remained largely academic—that is, without practical effect—had it not been coupled with an equally vigorous attack on the mental hospital. The underlying purpose of the hospital, it was now said, is to imprison those we find deviant, hence offensive. It is an instrument of social control, a way the state has developed to remove certain types of troublemakers, especially those who are not clearly “criminal,” but merely burdensome or troubling. In any case, the hospital cannot cure its occupants, since they do not suffer from disease. To the contrary, the hospital intensifies the condition by confirming the label of “crazy” which the community has already affixed. In this way the hospital helps to create the very condition it is supposed to treat: the hospital “situation” induces “insanity” much as Milgram’s experiments produced “cruelty.”
_____________
The problem, in short, is not in the person but in the milieu: over the years, this view was set forth by several writers, most brilliantly by the sociologist Erving Goffman, whose métier was a crystalline depiction of environments. He surpassed himself in the book Asylums, written from the inside (as a pretended staff member) and capturing the claustral surrealism of the mental hospital. But however compelling it was, Asylums was still reportage. What was needed was something “empirical,” hence scientifically respectable. Some years later that and more was to be provided when the prestigious journal Science published an article by Stanford’s David Rosenhan that was to have a profound effect on both the idea of psychosis and our view of the hospital.
Rosenhan inserted normal people into mental hospitals to see if they would be detected. The pseudo-patients were instructed to report schizophrenic symptoms in order to gain admission but to behave normally thereafter. They were not found out. The staff regarded them as genuine psychotics. The demonstration was taken as proof that sane and insane are not sharply demarcated categories, and inferentially that the hospitals are filled with mildly disturbed or even normal people who are locked up for no good reason. Rosenhan’s article was a smashing success, becoming one of the most widely cited and reprinted articles in psychology and the crowning empirical achievement of the movement against psychiatry.
In the fullness of time we are able to see that the Rosenhan demonstration is itself illusory. It demonstrates only that hospitals, like other bureaucratic institutions, tend to take the given for granted and to follow routine. It also shows, what did not need to be shown, that it is easy to carry out an imposture, especially when there is no reason to suspect one. The Rosenhan study now seems badly outdated, and the same may be said for anti-psychiatry as a whole, which lingers on only among the hopelessly ideologized. The question to ask, as with situationism, is why it was believed in the first place. For it too violates ordinary experience.
To be sure, not all psychotics are psychotic all of the time. There are ups and downs, there are long periods of remission, there are many cases where insanity is a once-in-a-lifetime event. But that does not disprove the fact that psychosis exists or that those suffering from it are unable to care for themselves. The first such person I ever saw was a fifteen-year-old boy who carried on a combative conversation with an invisible person located behind me. The most recent was a hospitalized woman who believed that her six-year-old daughter was in league with the devil and had to be put to death. These are by no means unusual examples. How can we imagine them to be nothing more than eccentricities or modest variations of normality? How can we expect such persons to assume full responsibility for themselves? And how in the world could we so blithely ignore powerful evidence—not nearly as overwhelming as it later became, but of sufficient weight to have given us pause—that biological factors play a significant role in the genesis of psychosis? Why did anti-psychiatry mesmerize, and for so long a time?
_____________
Many of the answers can be found in a dazzling new book on mental illness, Madness in the Streets, by Rael Jean Isaac and Virginia Armat.1 One of several full-scale treatments of the problem to appear in the last few years, it is to my mind the best, an intellectual as well as a social and political history. Isaac and Armat spell out the ways in which the doctrines of anti-psychiatry weakened our sense of obligation to a group many of whose members are genuinely helpless, by claiming that they were not psychotic at all but only the victims of poverty and racism, or prisoners of a callous government. The complex realities of diagnosis, cure, and treatment were thereby reduced to the level of Ken Kesey’s famous novel One Flew Over the Cuckoo’s Nest, with its melodramatic portrait of innocents wrongly accused, helpless in the hands of cruel pseudo-healers.
Isaac and Armat’s presentation is not either/or; it recognizes at every moment that methods of diagnosis and treatment have themselves been abused or used recklessly and that they do not always provide cures, let alone miraculous ones. But the politicizing of mental illness, in the belief that it is one more example of the powerful oppressing the weak, has led to a bizarre state of affairs in which those who would certainly be helped, and might indeed be cured, are kept from appropriate therapies. Anyone still believing that, in the disciplines of psychiatry and psychology, science easily overcomes dogma should read the section of this book entitled “The War Against Treatment,” in which the authors show how weak, irrelevant, or nonexistent findings were allowed to trash good research in forums responsible for making public policy.
Reading these accounts one is reminded again of the remarkable power of the judiciary in setting policy, often to disastrous effect. The “right to refuse treatment,” noble as it may sound in principle, has had nothing but wretched consequences for psychotics, for their families, and for the community at large. A practitioner in the field hears about such cases constantly—the “Billie Boggs” case in New York is merely the most notorious—in which a person clearly out of control but refusing care is bounced from the police to the emergency room to the hospital to the family and ultimately to the streets. One might excuse the judges, on the grounds that they know not what they do. It is hard to be quite so forgiving of my own profession, which helped establish these policies and continues to aid and abet them even after their bankruptcy has become fully evident.
Isaac and Armat discuss an important California case, involving the right to refuse medication, in which amicus briefs were filed by both the American Orthopsychiatric and American Psychological Associations, the former using “wildly exaggerated figures, not backed up by research,” the latter offering arguments that Isaac and Armat correctly term “extraordinary,” among them the idea that delusion and hallucination are rights protected by the First Amendment. On the question of whether medication may be helpful in calming violently agitated patients, the psychologists’ brief recommended instead such techniques as seclusion, strait-jackets, and cold wet-packs—in short, all of the methods used in the snakepit asylum which led to the attack on the mental hospital in the first place.
_____________
Anti-psychiatry will sooner or later vanish, its demise aided by this devastating book. But the longer, deeper tradition that produced and sustains it simply carries on, telling us that nothing of real importance can be found within the skin, that it is all out there. The most recent national convention of the American Psychological Association featured a two-day mini-convention on homelessness which paraded all the standard errors. Speaker after speaker—psychologists and psychiatrists all—argued against too much attention being given to the psychological. One of them inveighed against funding research into substance use and mental illness as contributors to the plight of the homeless. What then is responsible? Why, our old friends poverty and racism. And what is to be done? Why, call in some other old friends, “empowerment” and “action research” and “innovative interventions,” along with such new ones as “ecological resource perspectives.”
Empty organism, empty diagnosis, empty solutions. And in the meantime, that deranged man across the street from my office goes on shaking the heavens—and everyone around him—with his tormented cries.
1 The Free Press, 450 pp., $24.95.