Iraq
To the Editor:
Norman Podhoretz’s “The Panic Over Iraq” [January] strikes me as a curious piece. He calls for patience in the face of daily reports of more and more American and Iraqi casualties—three years after President George W. Bush declared that “major combat operations in Iraq have ended”—but ultimately says little in defense of the administration’s policy.
Mr. Podhoretz devotes most of his article to attacking his enemies, including such unlikely targets as Zbigniew Brzezinski, who supported Ronald Reagan’s foreign policy, Brent Scowcroft, who served as President George H.W. Bush’s national security adviser, Eliot A. Cohen, a neoconservative academic, and Congressman John Murtha, a hero of the Vietnam war. These policymakers he lumps together with the usual Podhoretz suspects: “op-ed writers” and other members of the “mainstream press,” the “radical Left,” “think tanks,” familiar punching bags like Howard Dean and Congresswoman Cynthia McKinney, and just about everybody else in America who does not believe the war in Iraq is a nifty thing.
In an effort to excuse the administration’s foul-ups in Iraq, Mr. Podhoretz mentions that “mistakes” were also made by Franklin Delano Roosevelt and Winston Churchill during World War II. Never mind that neither of those leaders elected to go to war; both responded to one that was thrust upon them. President Bush, by contrast, initiated the war in Iraq, and his Pentagon began planning for it a year before he gave the order to strike. And although Mr. Podhoretz never details exactly what mistakes Roosevelt and Churchill made, or when, it is safe to assume that they did not make them three years after declaring the end of major combat operations.
The administration has dealt its apologists an awfully weak hand to play in defending its Iraq policy. Thus, to counter the charge that “no thought was given to what would happen once we got to Baghdad,” Mr. Podhoretz can only write that much of what the administration did plan for did not occur. He does not answer—or even ask—the relevant question, which is why no one planned for the predictable collapse of governmental authority, widespread looting, the breakdown of infrastructure, the random killing of civilians, tribal conflicts, and power-grabbing by local militias.
Three years after the American invasion, the lights are still out in Baghdad, people line up to buy gasoline, and the trial of Saddam Hussein has been postponed because of the assassination of prosecutors and judges. More than 2,000 American troops and over 23,000 Iraqi civilians have been killed, and many more have been wounded. If all this were not bad enough, a draft government report on the administration’s much-ballyhooed reconstruction effort describes it as having been hobbled from the outset by gross understaffing, lack of technical expertise, bureaucratic infighting, and constantly rising security costs.
Undaunted, Mr. Podhoretz claims that per-capita income in Iraq is now 30 percent higher than it was before the war. He fails to mention that the cost of living has gone up, too. Moreover, most of the money that has flowed through the Iraqi economy is simply a byproduct of the billions of dollars the U.S. has poured into the country—most of it going on no-bid contracts to American contractors such as Halliburton and other companies with close ties to the administration. When Mr. Podhoretz tells us what has been built in Iraq since the war, he fails to mention all that was destroyed during the war and has yet to be replaced. In any case, increasing the per-capita income of Iraqis who dare to work for Americans and replacing infrastructure destroyed by the war are not why we went to Iraq.
Once it was determined that Saddam Hussein’s regime did not have chemical and biological weapons, had not sought to purchase significant quantities of uranium from Africa, and had not cooperated with al-Qaeda terrorists, the administration came up with democracy-building as its post-hoc justification for the war. Who can be against democracy? But Mr. Podhoretz’s claim that the Bush administration is making the Middle East safe for America by “making it safe for democracy” is sadly delusional.
He points to what he terms the “first seriously contested elections in Egypt” as an example of progress in this regard. But what really happened? The only effective opposition to President Hosni Mubarak’s National Democratic Party was the radical-Islamist Muslim Brotherhood, an illegal organization barred from fielding a presidential candidate. The candidate of the secular-liberal Ghad party took 8 percent of the vote, only to be met with what American officials have described as false criminal charges and a sentence of five years at forced labor. Meanwhile, Mubarak has begun his fifth six-year term as president.
It goes downhill from there. The Cedar Revolution in Lebanon, which Mr. Podhoretz also trumpets, has little to do with Western-style democracy; it was a reaction to the Syrian-planned assassination of Lebanon’s former prime minister, Rafik Hariri. If democracy were truly on the march in the Middle East, who would be the likely election winners: the liberal/democratic/secular parties, or the likes of Hamas, Hizballah, and the Muslim Brotherhood? The recent victory of Hamas in the Palestinian elections is instructive on this point.
Thanks to the Bush administration’s war, Iraq has become a magnet for terrorists and would-be terrorists. Many of them are Sunni Arabs from other countries who have flocked to Iraq to fight the “infidels.” President Bush’s response, “better there than here,” misses the point. The number of terrorists is not fixed; our actions in Iraq have materially swelled the ranks of those willing to die for jihad. Yet Mr. Podhoretz tells us we are winning the war politically. Like most Americans, I wish this were true, but the reality is that (to quote Donald Rumsfeld) “the harder we work, the behinder we get.”
Mr. Podhoretz rests his assertion on two achievements: the adoption of the Iraqi constitution and three rounds of elections. I wish I could be as sanguine. The Iraqi constitution is just a piece of paper, one that incidentally adopts shari’a as the supreme law in the land. Prime Minister Ibrahim al-Jaafari has ties to the radical cleric Muqtada al-Sadr, whose militia thugs battled U.S. Marines in 2004 for control of Najaf. Grand Ayatollah Ali al-Sistani pulls the political strings in the Shiite-led government, a fact that does not inspire confidence in a democratic future for Iraq—throughout history, the Shiites have been intolerant of others, including fellow Muslims who are Sunni—and makes the prospect of a rapprochement between Iraq and Iran no longer remote.
The question is not just who is winning the war, as Mr. Podhoretz poses it, but whether we are safer, more secure, and better able to defend ourselves today than before we invaded Iraq. Most Americans say no, and for good reason. Faced with such burdensome challenges in Iraq, the administration has abandoned the field elsewhere in the world, leaving the growing nuclear threat in Iran to be solved by our European allies, turning over North Korea’s nuclear threat for China to work out, and ignoring Vladimir Putin’s anti-democratic power grab in Russia. Meanwhile, the administration’s policies in Iraq have strained relations with America’s traditional allies.
As for the larger war on terror, Osama bin Laden is alive and taunting the American public on al-Jazeera, despite Bush’s vow to capture him, “dead or alive.” In filling homeland-security jobs, the administration has valued political loyalty over loyalty to country. The administration has trampled on the civil rights of the American people by illegally eavesdropping on the conversations of ordinary citizens without any reasonable suspicion that they could be related to terrorism. We have been learning about detention centers operated by the CIA in foreign countries, about atrocities in Abu Ghraib—carried out, it now seems, on orders from on high—and about the use of military commissions, not courts, to determine the guilt or innocence of detained suspects.
Human rights were once high on the list of priorities for neoconservatives, when the Soviet Union was Enemy Number One. But now Mr. Podhoretz tells us that it is okay to ignore the human rights of those we oppose in Iraq (and at home) and in the larger war on terror. There is no other explanation for his suggestion that our interrogation of suspected terrorists has been limited to “accepted methods.” Torture is outside the limits set by the Geneva Conventions, to which the U.S. is a party. Human rights apply across the board—even when we deal with the enemy. They are what we fight for when we fight for American values.
Following President Bush and Vice President Cheney, Mr. Podhoretz suggests that people are “either with us or against us,” patriots or traitors. This rhetoric may be effective politically in the short term, as it was in the 2004 presidential election, but in the long run Iraq is likely to be the death knell for the ideologues bent on using American military power to bring down regimes that do not pose a serious security threat to our country.
Ambassador Alfred H. Moses
Washington, D.C.
To the Editor:
I do not know why Norman Podhoretz lumps me in with liberal intellectuals who, having once supported the war in Iraq, are now running for the exits. Liberal intellectual I doubtless am, “sunshine soldier” I am most emphatically not. To the contrary, I opposed the Iraq war before it began (“a war too far” was what I called it in a piece by George Packer profiling liberals who had been hawks over Bosnia), just as I oppose it now.
David Rieff
New York City
To the Editor:
In a series of articles in Commentary, Norman Podhoretz has defended the Bush Doctrine and the President’s actions in Iraq. In his latest installment, Mr. Podhoretz presents areas of alleged progress in Iraq and asks: “Why is there so little public awareness of these things?” He does not consider that the American public might have concluded, on its own, that it no longer accepts any of the numerous rationales for the invasion of Iraq, including the need to bring democracy to the region. Instead, Mr. Podhoretz creates the scapegoat of the “mainstream media” (MSM), with its enormous negative influence in shaping public attitudes.
Specifically, Mr. Podhoretz cites the MSM’s penchant for negative views of the situation on the ground in Iraq. Such behavior, which fuels the “frenzied calls for the withdrawal of our forces,” is motivated, he says, by a desire to pull off “the proverbial feat of snatching an American defeat from the jaws of victory.” But Mr. Podhoretz seems to be attributing more influence to the MSM than can be justified. I find it difficult to believe that its influence equals or exceeds that of the many conservative media outlets across the country.
Despite the efforts of Mr. Podhoretz and others, President Bush has one of the lowest approval ratings of any President in recent years—primarily due to the situation in Iraq. The “panic,” if there is one, must be among his supporters as well as his detractors.
Sheldon F. Gottlieb
Boynton Beach, Florida
To the Editor:
In mapping the domestic forces of the “Vietnam syndrome” that have undermined American leadership in the global war against Islamofascism, Norman Podhoretz recalls for me the thought of the great military theorist Carl von Clausewitz. Clausewitz characterized the essence of war as a protracted contest of wills, from which it follows that whatever weakens one’s own will strengthens the foe’s. That President Bush and his senior military leaders have absorbed Clausewitz’s axiom is evident in their repeated assertions that, given the strength of our military and the soundness of our political strategy, a failure of national will is the only factor that could lead to an American defeat in Iraq.
Mr. Podhoretz’s comparison with the Vietnam experience is indeed apposite, for among the many factors that enabled a determined Hanoi to prevail against the powerful American military, arguably the most decisive was the stunning success of the radical Left in dividing the American house against itself. Today, despite the many successes in Iraq that Mr. Podhoretz enumerates, America’s noxious self-hating weed has clearly reemerged.
The Vietnam syndrome will likely die hard among policy elites vested in the flawed paradigms and assumptions in which they were schooled, even following a victory in Iraq. Israel will always be the “core problem” of the Middle East for a realist like Brent Scowcroft or an internationalist like Zbigniew Brzezinski, as Mr. Podhoretz observes. More generally, realism, conceived as “politics among nation-states,” cannot see why a trans-state Islamist ideology poses a mortal threat to the free world. Nor can liberal internationalism comprehend that a security policy required to pass John Kerry’s “global test” of unanimous approval is hopelessly paralyzed in a world of radically disparate interests.
Iraq is not the last of the Clausewitzian challenges American leadership will need to meet. Iran is looming.
Michael Balch
Iowa City, Iowa
To the Editor:
Thank you for Norman Podhoretz’s excellent article. I happened to read it the day after 11 million Iraqi voters, including a great many Sunnis, showed up at the polls to vote in parliamentary elections. It was typical that my edition of the New York Times gave this story no attention on the front page and only a slightly positive appreciation in the international section. What is wrong with this picture?
J.M. Batteau
The Hague, The Netherlands
To the Editor:
I came to Australia from Poland when I was seventeen, two years before the Berlin Wall came down. When I started getting seriously interested in politics, Norman Podhoretz’s writings helped convince me that neoconservatism came closest to my personal sentiments. Thus, as a very humble intellectual disciple, I was honored to be mentioned in his article for my blog tracking the good news from Iraq as well as the distortions perpetrated by mainstream media coverage of events in that country. It is good to know that good people are continuing the fight for victory in Iraq, and I am glad that some of my past work has provided them with useful ammunition.
Arthur Chrenkoff
Canberra, Australia
Norman Podhoretz writes:
Let me begin by correcting a few of Alfred H. Moses’ egregious misrepresentations of what I actually wrote in “The Panic Over Iraq.”
First of all, while I most certainly did attack Zbigniew Brzezinski, Brent Scowcroft, and John Murtha, I just as certainly did not attack Eliot A. Cohen. All I did was to take respectful issue with him on a couple of points. Ambassador Moses’ inability to perceive—or perhaps his unwillingness to acknowledge—this distinction is of a piece with his generally sloppy account of my article. If, for example, he had bothered to read it a little more carefully, he could not possibly have called Brzezinski, Scowcroft, and Murtha “unlikely targets” of an article whose purpose was to take on the whole spectrum of opposition to the battle of Iraq. The same carelessness is evident in his allegation that I never spelled out the mistakes made by Roosevelt and Churchill in World War II, when in fact I cited a long list of them. And he is guilty not only of carelessness but also of something worse when—on the basis of my incontrovertible statement that there is a campaign to “define torture down to the point where it would become illegal to subject even a captured terrorist to generally accepted methods of interrogation”—he accuses me of believing that “it is okay to ignore the human rights of those we oppose.”
For the rest, I could hardly have hoped for a better illustration than Ambassador Moses’ letter of the kind of syndrome I was describing. Like his fellow opponents of what we are trying to do in Iraq and in the broader Middle East, he paints an insistently negative picture, disregarding or dismissing or denigrating any and all of the signs of progress to which I pointed and which have become evident even to a formerly anti-American Arab radical like Walid Jumblatt (“It’s strange for me to say it, but this process of change has started because of the American invasion of Iraq”). Like virtually all of his fellow Democrats, Ambassador Moses places the Bush Doctrine in the blackest possible light, accusing its author of every imaginable sin both of conception and of execution, and sparing not a single talking point in the endlessly reiterated and wildly exaggerated Democratic litany of evils both real and imagined (Abu Ghraib, Guantanamo, Halliburton, “illegal eavesdropping,” etc., etc., etc.). Like most other Democrats, too, he will not grasp how Iraq fits into the larger war against Islamofascism, or trouble himself to consider the role it plays in the strategy to make the Middle East safe for America by making it safe for democracy.
I am writing this in late February, at a moment when things look dismal for that strategy. Iraq is said to be (though I doubt that it is) on the brink of civil war, and the electoral success of radical Islamist parties in Egypt and the Palestinian Authority has demonstrated that the Middle East is very far indeed from being made safe for democracy.
Not surprisingly, these developments have swollen the ranks of the summer soldiers and sunshine patriots, even among neoconservatives who at first strongly supported the invasion of Iraq. The prime example is Francis Fukuyama, but a few others also seem to be on the brink of giving up. As for the more traditional conservatives like William F. Buckley, Jr. and George Will, who had serious doubts from the beginning but were, out of a sense of solidarity, relatively reluctant to express them too forcefully, they have now cast off all reticence and unambiguously declared that we have lost in Iraq and that the Bush Doctrine is a failure.
Although such conservatives are understandably doing so with a heavy heart, it is a good bet that Ambassador Moses and his political friends, for all of their unctuous protestations to the contrary, welcome these same developments and feel vindicated by them—just as isolationists and pacifists and appeasers did in the disastrous first phases of World War II and at critical junctures of the cold war.
Who, Ambassador Moses writes, “can be against democracy?” Well, since he asks me, I will answer him: he himself is, at least where the Middle East is concerned, and so are most (all?) opponents of the Bush Doctrine, including those on the Right. For (if I may borrow one of his own phrases) “there is no other explanation” for their attacks on it than that, all things considered, and now that (as most of them desperately hope and fondly imagine) the count appears to be in, they think we would be better off with the Taliban and Saddam Hussein still in power. Or, with these two leading faces of Islamofascism gone from the scene, the alternative favored by Bush’s opponents—though they rarely have the political courage or the intellectual honesty to come right out and say so—is a friendly dictator like Hosni Mubarak (“a sonofabitch,” as FDR once said of Nicaragua’s Anastasio Somoza, “but our sonofabitch”).
In other words, we should return to our pre-9/11 policy of supporting similar strongmen. Yet resurrecting this policy would also restore the swamps of Islamofascism to the undisturbed condition they were in before 9/11, when we discovered to our horror that they had been breeding a mortal threat to us and to everything for which we stand. Moreover, it would choke off the voices throughout the Middle East that are now calling for political and religious reform, consigning them once again to the dead silence in which they existed before George W. Bush spoke the words and took the actions he did in response to 9/11. According to the rich trove of testimony that can most conveniently be found on the MEMRI website (www.memri.org/reform.html), it is these very words and deeds, so brutally and off-handedly derided by Ambassador Moses and his political friends, that have emboldened the reformers and have put democratization, along with religious opposition to Islamist radicalism, on the agenda throughout the entire Middle East.
Given all this, I find it curious that Ambassador Moses finds it “curious” that I should “call for patience.” After all, whether he admits it or not, the plain fact is that we are only in the third scene of the first act of a five-act drama, which is to say the very early stages of a long-range effort to replace the tyrannies and despotisms of the Middle East with democratic regimes. The seeds having been planted, it will now take, yes, patience, as well as perseverance, for them to be nourished into full flower. The same virtues will be required to protect these tender plants against the inevitable bouts of bad weather and the spread of malignant infestations by which they are at this very moment being attacked and to which, if we were to follow the cynical and/or defeatist counsels of Bush’s opponents, they would soon fall prey. Thus would a bold and brave and noble American enterprise be doomed to ignominious failure. And thus too—by acting once more like the “weak horse” that Osama bin Laden thought he could attack with impunity—would we invite even more lethal terrorist assaults upon our country than we suffered on 9/11.
Which is why Michael Balch is exactly right to bring up Clausewitz and to stress the crucial role of national will. In this connection, though, I cannot for the life of me understand how Sheldon F. Gottlieb can conclude that the American public has “on its own” turned against the battle for Iraq in particular and against “the need to bring democracy to the region” in general. It is true that conservative radio and the Internet have broken the monopoly once enjoyed by the mainstream media, but unfortunately not yet to the extent of having been able to affect the tremendous imbalance in the ratio of negative to positive stories about Iraq documented definitively by Arthur Chrenkoff (and another striking example of which is provided here by J.M. Batteau). How could public opinion not have been swayed by this torrent of determinedly bad news?
Speaking of Arthur Chrenkoff, I was touched to learn that he has been influenced by my writings, and I thank him for paying me so great a compliment. And speaking of compliments, I apologize for having mistakenly paid one to David Rieff.
Old Age
To the Editor:
Eric Cohen and Leon R. Kass inveigh against two proposed solutions to the growing societal problem of debility in old age; we might call them the technical fix and the autonomy fix [“Cast Me Not Off in Old Age,” January]. With the technical fix, medicine promises cures for common scourges of old age like dementia, heart disease, and cancer. With the autonomy fix, patients use living wills to specify in advance what treatments they do not want administered to them in the future, thus hoping to avoid the indignities of old age.
I agree that we should not delude ourselves into thinking that medical science (and, I would add, exercise and diet) will guarantee us a good old age. And I agree that an individualistic worldview that encourages people to act in isolation to try to control their destiny is a flawed and impoverished one as applied to the elderly. My main disagreement with the authors is over the implications of their profound observations.
Messrs. Cohen and Kass put the responsibility for caregiving largely on the shoulders of families. But while most care for elders is and will continue to be provided by families, and while viewing the elderly as part of families is an advance over seeing them as isolated entities, focusing on families is too narrow. As I argue in my recent book, The Denial of Aging: Perpetual Youth, Eternal Life, and Other Dangerous Fantasies, it is not sufficient to emphasize (as Messrs. Cohen and Kass do) “giv[ing] care and comfort to those we cannot cure”; rather, we should strive to enable the elderly to engage with society to the extent possible within their limitations. This requires a broad societal response, something that the authors only mention as an afterthought.
This would require us truly to modernize Medicare so that chronic diseases, which make up most of what ails the elderly, can be attended to with the same degree of sophistication as are acute illnesses—not in the vain hope that chronic disease will be cured but in order to maximize the functioning (vision, hearing, mobility, etc.) of the elderly.
A broad approach for the physically frail might include linking them through the Internet to others who might benefit from their experience and counsel while offering aid in return. For the cognitively frail who cannot live independently, we need to develop assisted-living facilities and nursing homes that do more than just guard the health and safety of their residents. Allowing individuals with severe cognitive impairment to lead meaningful lives requires nurturing whatever residual spark of their fuller selves they retain.
Messrs. Cohen and Kass argue that the current best interests of patients with dementia should be given priority when it comes to decisions about their treatment. I generally agree with this position, although I think that there should be some acknowledgment of previously stated wishes. (The argument that living wills cannot protect against geriatric debility is a bit of a straw man, since their intent is typically to avoid burdensome interventions near the end of life, not to promote withholding all treatment at the first sign of frailty.) But Messrs. Cohen and Kass fail to recognize that determining a patient’s current best interests requires that we establish objective standards.
We need not resign ourselves to leaving families “not wishing to condemn the worth of people’s lives, yet not wanting to bind them to the rack of their growing misery . . . with no simple formulas for finding the best course of action.” The medical profession could develop guidelines about what kind of treatment is too much and what is too little. By setting the upper and lower bounds for reasonable treatment—leaving ample room for the individualized weighing of benefits and burdens and the exercise of choice within the extremes—we can spare families the “most poignant dilemmas” that Messrs. Cohen and Kass seem to feel they must inevitably face.
Muriel R. Gillick, M.D.
Harvard Medical School
Boston, Massachusetts
To the Editor:
Eric Cohen and Leon R. Kass’s analysis of issues related to care for the elderly is very timely. They are to be commended for taking a large view of a contemporary problem and addressing it in the context of history and tradition. But I fear that they may be guilty of a failure to hear what patients and families are really saying when they discuss things like living wills, health proxies, and a “right to die.”
Messrs. Cohen and Kass worry that these instruments are not widespread enough to be clinically feasible, and that they undermine the individual medical guidance and emotional support that are required to care thoughtfully for aged parents and relatives. But the increasing prevalence of patients’ dying in the hospital rather than at home means that families are forced to learn the jargon of the medical institution and use it in place of the familiar, plain language of home and family. Most people who use living wills are not articulating an attack on tradition or the nature of man.
Caregivers are well served by trying to determine the real-life issues involved in requests for a living will or health proxy. People’s needs can thus be identified, translated into communicable terms, and managed in real time as necessary. Engaging people on their own terms in this way may actually facilitate the responsible confrontation with the inevitable facts of aging that Messrs. Cohen and Kass advocate.
I am a pediatric nephrologist, and do not have a great deal of experience with debilitated patients. But I have witnessed the frustrations that come out of families’ encounters with the legal and administrative machinery of the modern hospital. Often their only recourse is in living wills, proxies, and a patient’s bill of rights, imperfect as they are. As we work to find better ways to cope with the care of the elderly and infirm, these instruments can serve a noble purpose.
Howard Trachtman, M.D.
New Hyde Park, New York
To the Editor:
Eric Cohen and Leon R. Kass lucidly confront the difficult issues that our society is likely to face as the parents of my generation of recent college graduates age. By placing the issue of caregiving in the broader context of the degeneration of familial responsibilities, they illuminate the inherent instability of broken families and the false promises of the welfare state.
It is imperative that we confront these issues within a sound moral and philosophical framework before we are confronted with the harsh practical consequences. I applaud Commentary for publishing a vital essay that stands against the bioethical demagoguery that has come to define our age.
Isaac Gruber
Fairfield, Connecticut
Eric Cohen and Leon R. Kass write:
We thank our respondents for their thoughtful letters, which we take, for the most part, as friendly amendments to our argument.
We agree with Muriel R. Gillick that the primary goal of medicine for the elderly is maximizing function in the face of worsening debilities, not seeking to conquer debility itself. In our war against various diseases, we sometimes forget the importance of small improvements, including those that depend more on attentive care than on medical miracles. Dr. Gillick’s eloquent formulation—“nurturing whatever residual spark of their fuller selves [the elderly] retain”—nicely describes the kind of care we ought to provide.
We also agree that society has a central role to play in providing such care—both society writ large in social programs like Medicare and Medicaid and society writ small in the form of local communities, families, and the institutional settings in which many elderly persons spend their last days. And we agree that modernizing Medicare requires placing greater emphasis on the management of chronic disease, including better in-house nursing care and greater attention to treatments aiming to prevent multiple acute episodes.
But stating a goal and knowing how to get there from here are two different matters. Already, we face a future in which entitlements for the elderly will use up an ever-increasing amount of our resources, even as the wealth-producing, tax-paying, and caregiving populations decline in proportion to the dependent elders they must support. Our challenge is figuring out how to improve care while restraining, as much as possible, its costs.
Here we face an ambivalence in our own argument. Seeking the best care possible is a moral imperative, lest we define the debilitated elderly as mere burdens, unworthy of the resources we expend to make their remaining days as comfortable and meaningful as possible. Yet providing the best care imaginable would erode our capacity to meet other civic obligations. To emphasize the devotion of family caregivers, as we do, is not an effort to downplay the role of society in providing crucial resources and support to patients and families. It is simply to recognize that society must limit the kind of care it can provide, precisely because the demand for better care is insatiable.
This leads us to the question of developing “objective standards” of care. Dr. Gillick seems to misread or misunderstand our article when she claims that we “fail to recognize that determining a patient’s current best interests requires that we establish objective standards” for interventions near the end of life. To the contrary, we put forward several crucial objective standards: the moral obligation never to take active measures to end someone’s life; to avoid excessively burdensome treatments, even when they might extend someone’s life; to benefit always the life the patient still has rather than seeing life itself as a burden to be eliminated. But such standards need to be applied in particular cases, demanding loving prudence from doctors and family on the scene. This includes giving proper attention to the prior wishes of the patient, but without seeing the wishes of a past self as the only guide for caring well for the present one.
Perhaps the most difficult challenge will be defining standards of care in the policy arena, including what kinds of intervention Medicare will and will not cover. Dr. Gillick thinks it will be easy for “the medical profession to develop guidelines about what . . . is too much and what is too little.” We are not so sanguine. Are there really morally objective criteria for determining what degree of dementia disqualifies my mother from which sorts of medical intervention? Should our society accept across-the-board age-based or capacity-based grounds for denying care to the elderly?
Even if we could define such “objective” standards of care—and did our best to be guided by them—we will never be able to spare families “the most poignant dilemmas,” as Dr. Gillick seems to hope. No policy, no guideline, no protocol can liberate us from the anguish of letting a loved one die or seeing a loved one suffer under our watch. Only a heartless society can make its end-of-life dilemmas go away.
Howard Trachtman accuses us of not listening to patients in offering our critique of living wills, health proxies, and a “right to die.” But he seems to be hunting for more disagreement than may exist. For one thing, we explicitly endorse the wisdom of health proxies, understood as a way to empower those who will eventually or inevitably speak on one’s behalf. And we do not claim that most people who use living wills are “articulating an attack on tradition or the nature of man.” We simply argue, based on the best social-science data, that living wills are ineffective at meeting their own goals; that prior wishes are not the only or best guide to making future caregiving decisions; and that those who promote living wills, as opposed to those who use them, often do have a distorted understanding of human nature by emphasizing self-determination at the expense of human finitude and interdependence.
If living wills promote a deeper understanding of what it means to age well and care well, then we are all for them. If they help preserve even a dose of loving humanity in the face of the “machinery of the modern hospital,” then we endorse them. But the evidence suggests that living wills have largely failed to meet these noble ends, and that no legal instrument can liberate us from the human dilemmas of learning how to put ourselves in the hands of caregivers, and how to care for those who put their trust in us.
Who Is a Jew?
To the Editor:
I appreciate Meir Soloveichik’s sensitive discussion of my memoir, Girl Meets God, but I am slightly puzzled by his professed puzzlement about whether I—as a convert to Judaism who was subsequently baptized—remain halakhically Jewish [“How Not to Become a Jew,” January]. So unambiguous is my halakhic status that I can only conclude that Rabbi Soloveichik’s perplexity is a mere literary conceit; I myself have never for a moment doubted that, according to Jewish law, I am both Jewish and an apostate.
If I concur with Rabbi Soloveichik about my halakhic status, I take issue with his claim that I “regard rabbinic Judaism as a stepping stone to the higher truth of Christianity.” I am not sure what this means exactly, but insofar as it implies a certain supersessionism—that Judaism is the static backdrop to Christianity—I would dissent from it.
To be sure, life in an Orthodox Jewish community shaped by the rhythms of rabbinic Judaism is part of my spiritual autobiography, and my own understanding of Jesus and the New Testament has been unavoidably shaped by my study of Torah in the years prior to my baptism. But I certainly do not view rabbinic Judaism as a “stepping stone” to anything other than (perhaps) a faithful halakhic life. Indeed, insofar as rabbinic Judaism developed over the same centuries as did the early Church, it would be logically incoherent to regard the former as a stepping stone. Rabbinic Judaism is one response by some of God’s children to life in a covenantal relationship with Him.
I also wish to comment on Rabbi Soloveichik’s discussion of the meaning of private baptism. The community of the Trinity ensures that nothing in Christianity is truly private—even the hermit alone in the desert participates in the community of Triune life. While baptisms can be performed without witnesses, most churches today frown on the practice. Instead, by having baptisms performed in the presence of the local church body, they seek to underscore the communal nature of the baptismal covenant.
This is something I touch on in a chapter of Girl Meets God that reflects upon the symbolism of infant baptism: a baby cannot possibly hope to live out the promises being made on his behalf in the baptismal ceremony, and so infant baptisms are, for many adults who witness them, a profound reminder that Christianity is communal; that it is next to impossible for any of us to live a life of faithful Christian discipleship without the support, blessing, and admonishment of our brothers and sisters in Christ; that, as William Willimon once wrote, faith commitments “that are not reinforced and reformed by the community tend to be short-lived.”
Rabbi Soloveichik is right to point out the different understandings of nationhood that inhere in Christianity and Judaism. Unlike the Jewish gerut ritual, baptism does not mark one’s joining a nation. But it does graft the newly baptized person into a people—the people of God.
The moral of Rabbi Soloveichik’s article seems to be that Jewish communities should be careful about whom they convert. That he feels such a warning is necessary demonstrates another difference between Judaism and Christianity.
Lauren F. Winner
Durham, North Carolina
To the Editor:
Meir Soloveichik compares the conversion rituals of Judaism and Christianity in order to illustrate points of difference between the two faiths. As he sees it, “conversion to Judaism is as much a public, legal proceeding as a sacramental one.” It is “at once spiritual and civil—or, indeed, political”; it involves “not only taking on a new faith but also a new nationality.” Baptism into Christianity, by contrast, is a “private,” sacramental matter, in which “nationality is irrelevant.”
A pitfall of writing about Christianity in such general terms is that different types of Christians approach baptism in radically different ways. Generally speaking, evangelicals have no real belief in the efficacy of baptism or any other sacrament, while Catholics believe that the sacraments are avenues by which God’s grace is communicated to mankind. In between these poles are a dizzying variety of views.
Contrary to Rabbi Soloveichik’s sketch, baptism in the Catholic Church—the tradition I know best—is never a private matter between a convert and his priest. The situation he cites in which a non-baptized person can perform the sacrament is limited to cases in which a convert or (most commonly) an infant is on the point of death. This is an exception born of grave necessity, and is not the normative option. Under circumstances in which death is not imminent, the entire process from evangelization to sacrament is the work of the community, both clergy and laity. While such private matters as regeneration and forgiveness of sin are held to be key effects of baptism, the far greater stress, especially in the baptism of infants, is on the individual’s incorporation into the community of believers.
Can one lose membership in this community, as Rabbi Soloveichik claims, contrasting it with immutable Jewishness? On a functional level I suppose so, but beyond that I am not so sure. The early Church struggled over what to do with those who had denounced the faith in order to avoid martyrdom at the hands of the Romans but who later repented. It was decided to allow them to be readmitted to communion without the need for a second baptism. This would suggest that although a person might denounce his or her Catholicism, the indelible mark left by the sacraments of initiation endures.
Reverend Thomas M. Provenzano
Chicago, Illinois
To the Editor:
In his wonderful article, Meir Soloveichik misses two small points about Christian conversion. One runs somewhat counter to his thesis, while the other supports it.
He asserts that a baptized Christian who does not believe in Christian dogma is no longer a Christian. But, at least for the Catholic Church, this is not the case. As the new catechism states, “Baptism seals the Christian with the indelible spiritual mark of his belonging to Christ. No sin can erase this mark, even if sin prevents baptism from bearing the fruits of salvation.” Ideally, the attitude of the Church toward the apostate Christian is very similar to the attitude of the halakha toward the apostate Jew that Rabbi Soloveichik describes. For Catholics, baptism is the circumcision of the soul, binding the Christian to God and Christ “once for all.”
Second, to emphasize the private, spiritual nature of baptism, Rabbi Soloveichik notes that in the absence of a priest, even a non-baptized person can perform a baptism. In fact, a baptizer need not even be present. The catechism states that one can be “baptized by blood” by dying for the sake of the faith, and “baptized by desire” by wishing to be baptized when prevented from doing so. In this light, baptism may be seen even more to concern the relationship between the individual, God, and the world to come.
Craig Bruney
Washington, D.C.
To the Editor:
Meir Soloveichik is certainly correct to maintain that, despite the fact that Lauren Winner came to Judaism by way of conversion, her status as an apostate Jew is no different from that of any born Jew who has forsaken Judaism. But contrary to what he suggests, it is not so clear that a “betrayal” like apostasy does not affect one’s status as a member of the Jewish people.
A few decades ago, the Israeli Supreme Court rejected the claim of the monk Brother Daniel, a born Jew who had committed apostasy to Christianity and then demanded that the state of Israel continue to recognize him as a Jew. Writing about the case, Rabbi Aharon Lichtenstein (a different article by whom is cited approvingly by Rabbi Soloveichik) demonstrated that rabbinic law, no less than Israeli civil law, “recognize[s] the fatal fallacy of the notion that, ad aeternitatem, the crown of Jewry can never fall off, no matter how ill it is worn.”
Regardless of whether Winner herself has crossed over the line marking the boundaries of Jewish identity, we should be aware of Rabbi Lichtenstein’s conclusion: “the halakhic principle [is] that an apostate can become a Gentile, and that Jewishness is not an absolutely irrevocable status.”
Joel B. Wolowelsky
Yeshivah of Flatbush
Brooklyn, New York
To the Editor:
Meir Soloveichik asserts that as a convert to Judaism, Lauren Winner remains a Jew despite her later conversion to Christianity. At the same time, he writes that “women serve as the foundation of the Jewish family by instinct,” and that “one whose mother is Jewish is considered a member of the Jewish family by birth.” This raises a troubling question: will Winner’s daughters and their own daughters, raised entirely as Christians, still be considered Jewish?
Two concepts of Jewish identity may be relevant here. The concept of Judaism as a family, which Rabbi Soloveichik and others before him have espoused, may be traced back to Abraham’s covenant with God, which was to be transmitted through his seed (zera)—that is, through Isaac, Jacob, and their descendants. Zera denotes a family bond, a tradition passed down from generation to generation that combines the physical and the concrete with the divine. Accordingly, as Rabbi Soloveichik notes, a convert somehow leaps the physical barrier and becomes a son or daughter of Abraham.
But zera alone is not enough. The Jewish family is different from other groups with mere hereditary ties. No one would say that every group of descendants from a single common ancestor forms a family or a people in any meaningful sense. They have not sustained the memory of their shared past, nor do they have any kind of shared customs or sense of purpose. Only “seed” together with a compelling imaginative tradition (zera u’berit) makes Judaism a family—and something more than a family.
Whether Lauren Winner is a Jew or not, her descendants—including the matrilineal ones—will be lost from Judaism as the passage of time washes away the memory of their once famously Jewish grandmother.
Kevin Jon Williams
Wynnewood, Pennsylvania
Meir Soloveichik writes:
If Lauren Winner is puzzled by my essay, I am doubly puzzled by her response. From it, one would conclude that she does not believe Christianity offers its adherents a “higher truth” than that proclaimed by the Judaism that she has left behind. Yet in her book she makes clear that believing in Jesus is irreconcilable with living a halakhic life. The Jewish holiday of Sukkot, she tells her readers, “is one of the things I gave up because of Jesus. I gave up [the holiday of] Purim, which I love, and [observance of] kashrut, which I love. . . . All because I was courted by a very determined carpenter from Nazareth.” Lauren Winner also speaks of her obligation, as an evangelical Christian, to bear witness to the truth of Christianity in public. On an Ash Wednesday at Columbia University, she found herself hoping that a young woman’s willingness to “proclaim, at least one day a year, that she is a Christian, will lodge somewhere in some student’s heart.”
The fact is that to be an evangelical Christian is to hope that everyone will embrace the “carpenter from Nazareth”—which, in the case of Jews, means that they will give up practicing rabbinic Judaism. If Winner does not believe this, then she is not an evangelical Christian; and if she truly believes that Jews have no need of becoming Christian, then she should share this conviction not with the readers of Commentary but with her many evangelical readers who regard her as an ideal type of the Jew who has seen the light.
Contrary to Winner’s historical claim, rabbinic Judaism was already in existence when Christianity began; many of the rabbis whose opinions are recorded in the Mishnah were teaching Jewish law at the time of the Church’s birth. Rabbi Gamaliel makes an appearance in the New Testament book of Acts; his forebear Hillel was one of the most influential rabbinic sages. Paul claims to have been raised a rabbinic Jew, but insists that what his fellow Pharisees missed was that the Law was no longer in effect, and that the Torah was nothing more than a stepping stone to the higher truth of the Gospel. “Before faith came,” Paul writes to the Galatians, “we were imprisoned under the law.” Halakha, he informs his readers, “was our disciplinarian until Christ came, so that we might be justified by faith; but now that faith has come, we are no longer subject to a disciplinarian.”
No faithful Christian can see rabbinic Judaism as merely a path leading to what Winner calls a “faithful halakhic life.” To be a religious Christian is to insist that there is something about Jewish dogma that is profoundly wrong, just as to be an Orthodox Jew is to respond that it is the Christians who are in error. On this, Jews and Christians agree: one of them has embraced a religion that proclaims a “higher truth” than the other.
Lauren Winner has given every indication elsewhere that she knows this to be the case. Thus, in a book review in the New York Times, she has chastised the religion writer Winifred Gallagher for an “uncritical embrace of ecumenism” and for believing that “the world’s religions are different paths to the same truth.” Gallagher, she writes, fails to make room for “the unfashionable belief, shared by the orthodox of most of the world’s faiths, that there is something exclusive about the nature of religious truth claims,” and she suggests that, in her next book, Gallagher “invite pilgrims on a journey that doesn’t require them to surrender particularistic claims in favor of a spiritual smorgasbord.”
Surrendering her own particularistic claims is exactly what Lauren Winner does in the first part of her letter. Then, however, comes her conclusion, where she states that, while Judaism is wary of accepting converts, Christianity is not—a fact that may demonstrate “another difference between Judaism and Christianity.” Here the clear implication seems to be that the universality of the Church’s mission is a distinct virtue, and one that is sadly lacking in Judaism.
It is true that Jews have never focused on converting the world; but then, in stark contrast to Christianity, neither have they ever insisted that joining their faith is necessary for salvation. At a time when Augustine was declaring that most of the world’s people, including unbaptized infants, were destined for eternal damnation, the rabbis were ruling that “the righteous of the nations of the world have a portion in the world to come.” Need one add that Christians’ insistence on their responsibility to save the world led historically to unpleasant consequences known all too well to most Jews, including, one would assume, some of Lauren Winner’s father’s ancestors, or that the Christian penchant for mission has in the past forced considerable numbers of Jews to choose between conversion and death?
A number of my Christian correspondents point out that baptism can be communal in nature, and that the Church embodies a community. But I explicitly acknowledged this, though I also noted that the Church sees itself as a spiritual fellowship, not a political one, whereas the Jewish people sees itself also as a national entity. The difference is evident in the fact that baptism in private is considered valid while conversion to Judaism without a “citizenship court” of three is never valid.
As for whether baptism is “indelible” or, on the contrary, irrelevant to one’s status as a Christian, a joint statement by the group “Evangelicals and Catholics Together,” signed by many of America’s most prominent Catholic and evangelical theologians, says the following:
The communio sanctorum [“community of the saints”] embraces all Christians, including those whose lives are not notably marked by holiness. In the New Testament, the term “saints” generally refers to all who are baptized and confess Christ as Lord. [emphasis added]
Catholics may believe baptism is an indelible mark, and some evangelicals may not see baptism in the same way; but one who does not believe in Jesus is not a Christian, and baptism, for a very wide swath of the Christian world, plays a special role in defining one as a Christian.
In his letter, Joel B. Wolowelsky cites an article about the Brother Daniel case by Rabbi Aharon Lichtenstein. But his brief quotations from a lengthy and complex essay are misleading. Rabbi Lichtenstein’s thesis is built on a discussion in the Talmud of the ten lost tribes of Israel, who were exiled by the king of Assyria before the destruction of the northern kingdom in the 8th century b.c.e. So assimilated did those tribes become, the rabbis postulated, that a marriage between a descendant of one of them and a Jew could not be considered valid. So, too, Rabbi Lichtenstein suggests, a born Jew who experiences “total alienation from the Jewish people” can lose his “Jewish sanctity.” As such, his marriage to a Jewess would be invalid.
At the same time, however, Rabbi Lichtenstein notes that such an individual should not be regarded in the same light as a non-Jew. By way of analogy, he cites the halakhic abrogation, during the time of the Babylonian exile, of the agricultural tithing laws that applied to Jews in the Holy Land. “What,” he asks, “was the status of the land during the Babylonian exile? Was it simply identical with Iceland’s or Manchuria’s?” His answer: “we recoil from these possibilities instinctively, and our instincts are right.” So, too, with an apostate. On the one hand, having abandoned any sense of allegiance to the Jewish people, he is “a Jew without Jewishness,” and lacks the “sacredness of the Jewish personality.” On the other hand, if “we ask, in purely descriptive terms, whether anyone born of Jewish parents is a Jew, the answer must be yes.” Moreover, should such apostates “return to the fold, they would represent reformed prodigal children, rather than fresh converts.”
One may interpret the rabbinic injunction suspending marriage with members of the ten lost tribes as Rabbi Lichtenstein does, or one may follow other commentators who confine themselves to the talmudic point at hand, suggesting no further implications. But it is agreed that, in some sense, anyone born a Jew is a Jew—and that no matter how far a Jew has strayed from his people, Judaism always welcomes his return, just as it has dreamed of the return of the ten lost tribes of Israel from the four corners of the earth. Such is the strength of the blood bonds invoked by the theologian Franz Rosenzweig, whom I quoted in my article.
This brings us to the question of Lauren Winner’s descendants. Kevin Jon Williams writes that they will be “lost from Judaism with the passage of time.” He may be right; the statistics these days are on his side. But I, for one, cannot rule out their return. As I noted in my essay, the writer Stephen Dubner found his way back to Judaism after being raised by two Catholic-Jewish parents with no sense of his Jewishness. A child of Abraham remains a child of Abraham, and, no matter how lost, might just find his way back to the covenant of Abraham. However high the odds against this happening, surely they are no higher than the odds against the children of one man from Mesopotamia surviving, as a family and as a nation, the ravages of the centuries, and transforming the world in the process.
Among the YPSL’s
To the Editor:
Having been a member of Social Democrats USA in the late 1980’s, and having also counted Penn Kemble as a friend, I was moved by Joshua Muravchik’s memorial to him [“Comrades,” January]. But I would point out a small error, and two larger matters on which I think he misstates important points.
Mr. Muravchik refers to Penn’s “own Marxist mentor, Alex Garber,” as having “grown up as a member of the Olerites, an eponymous Communist splinter group whose ranks at its height may barely have reached double digits.” A couple of sentences later he refers to Garber as a “Communist schismatic.” The group was actually called the “Oehlerites,” after their leader Hugo Oehler, and they were not schismatics from official Communism but one of many split-offs from the Trotskyist movement.
The positions of the Oehlerites were extremely puristic, to the point that they attacked other Trotskyists for having sold out to the bourgeoisie—and this in the 1930’s! The group’s only surviving document of consequence is Hugo Oehler’s pamphlet Barricades in Barcelona, which covers the same events described in George Orwell’s Homage to Catalonia but in a much more exaggerated and sectarian style. If the group had any virtue at all, it was that its members positively loathed Stalinism.
More significantly, Mr. Muravchik suggests that Max Shachtman, the former Trotsky associate, had ceased being a “Trotskyite” many years before he recruited some of Mr. Muravchik’s “comrades” to his brand of socialism in the late 1950’s. This is arguable. It is true that Shachtman broke with Trotsky in 1940, and had given up his revolutionary political stand by 1962. But Shachtman supported the Trotskyist concept of a “Fourth International” for a decade after his rupture with the founder. It is also true—and something of an irony, considering how many of Mr. Muravchik’s comrades have supported U.S. military intervention overseas in recent years—that Shachtman’s group strongly opposed American involvement in World War II. Shachtman’s Independent Socialist League (ISL) was included in the U.S. Attorney General’s list of subversive organizations until the late 1950’s.
Shachtman still met politically with an “official” Trotskyist, Murry Weiss of the Socialist Workers’ party, in 1957. Although Shachtman’s group produced the array of social democrats described and memorialized by Mr. Muravchik, its younger veterans also included some of the most extreme radicals of the 1960’s and after. Examples include Shane Mage, leader of the tiny Workers’ League, and Jim Robertson, founder of the even tinier Spartacist League, which took a sharp turn toward Stalinism.
In sum, the issue of when Shachtman ceased to be a Trotskyist, and what the very term “Trotskyism” means in a broad historical sense (as opposed to a sectarian or self-interested one), remains open and neglected. It is part of the peculiar legacy of the 1960’s that the experience of non-Stalinist radicals in the U.S. and globally has almost never been significantly examined by American historians.
Mr. Muravchik contributes to this problem when he elides a great deal of the complicated if perhaps ultimately irrelevant history of the Young People’s Socialist League (YPSL), of which he was national chairman from 1968 to 1973. He describes YPSL as “the youth section of the Socialist party of America that had been founded in the early 20th century by Eugene Debs and was led for many decades by Norman Thomas.” This is a bit like the common descriptions of the Democratic party of Senators Edward Kennedy and Joseph Lieberman as the party of Jefferson and Jackson, with no acknowledgement of intervening controversies. Such condensation is a lesser matter when it involves major American institutions, about which many citizens understand the historical basics, than when it touches on small radical groups whose history is not well known.
Stephen Schwartz
Washington, D.C.
To the Editor:
Reading Joshua Muravchik’s “Comrades,” I was reminded of the old slogan, “Don’t trust anyone over thirty”—or, don’t acknowledge that your own ideas may owe something to theirs. In his article, Mr. Muravchik seems to have fallen prey to this latter piece of 60’s hubris.
Aside from acknowledging Alex Garber’s influence on Penn Kemble, Mr. Muravchik gives the impression that Kemble, Tom Kahn, and Paul Feldman sprang fully grown from the head of Zeus and invented the idea of social democracy in America. He acknowledges that the latter two had been “devotees of a former Trotskyist named Max Shachtman,” but gives short shrift to Shachtman’s central role in the formation of their ideas and his continued influence on them until his death.
He also dismisses the centrality of Shachtman’s ideas to the Socialist party, the Social Democratic Federation, and the Social Democrats USA (often referred to as the “Right Shachtmanites” by their Old and New Left detractors). As for Mr. Muravchik’s own Young People’s Socialist League (YPSL), he writes that “although Shachtman was one of the elder statesmen who occasionally made stirring speeches to us, no YPSL of my generation was a Shachtmanite. What is more, our mentors, Paul and Tom, had come under Shachtman’s sway years after he himself had ceased to be a Trotskyite.”
In fact, Shachtman was much more than a maker of stirring speeches. Yes, the YPSL’s were not Trotskyists, largely because, as Mr. Muravchik notes, Shachtman himself had long since ceased to be one. He broke with Trotsky and formed the Workers’ party in 1940 over the issue of the nature of the Soviet Union—the central issue on the American Left and the one upon which the splits and factional schisms among socialist groups were largely based.
Shachtman’s insistence that the Soviet Union was not socialist but “bureaucratic collectivist”—having a new ruling class with a totalitarian and imperialist nature—contributed greatly to the principled anti-Communism of the various groups of which he and his followers were a part, a stance to which the YPSL’s became heir. The group around Shachtman came to believe that the Stalinist nations were far worse than Western capitalism, that democratic socialists should side with the U.S. in international conflicts against the Soviet Union or its proxies, and that they should be (to use Herman Kahn’s phrase) “aggressive democrats,” seeking not only to counter totalitarianism but actively to spread democracy.
This included an obligation to counter tyranny early on, and not repeat the mistakes that individuals and governments of their generation had made by appeasing Nazism before World War II and Communism during the cold war. Hence their support of the Vietnam war, in which they were joined by their allies within organized labor, principally the leadership of the AFL-CIO.
By the early 60’s, Shachtman’s group had also made a major practical shift, abandoning its third-party election runs in favor of working within the Democratic party to influence its policies. One of the fruits of this shift was Shachtman’s encouragement of Kahn, Kemble, Carl Gershman, and others to engage in politics and the civil-rights movement, which, as Mr. Muravchik details, they did effectively. I would take nothing away from these individuals in their brilliant application of the principles they inherited, but it did not occur in a vacuum.
Ellen Heyman
Georgetown University
Washington, D.C.
To the Editor:
I very much appreciated Joshua Muravchik’s tribute to the late Penn Kemble, especially his evocation of the neat and elegant young socialist in the company of his rather more conventionally radical comrades (almost all of whom by now, of course, are neither radical, nor young, nor—despite what one or two of them might continue to claim—even faintly socialist). Since for a brief part of this story I, too, was among Mr. Muravchik’s comrades, I hope I may be forgiven for offering a correction to one part of his account. I think it is of some interest in itself, and it throws a revealing light on the relation of Penn and his YPSL-Social Democratic comrades to the real world of American politics.
Contrary to Mr. Muravchik’s account, Penn was not the one who dreamed up the organization called the Coalition for a Democratic Majority (CDM). The credit for this, if in the end credit it be, belongs to Ben Wattenberg. In 1972, before the presidential election of that year, it was easy to see that George McGovern and his leftist supporters were headed for a major defeat. Thus, reasoned Ben, after the election would be an ideal time to bring together the two leading groups of centrist Democrats, the supporters of Hubert Humphrey and the supporters of Henry Jackson, and retake the party. Actually, it would not be unfair to say that CDM was first organized in my apartment, where Ben, my husband Norman Podhoretz, and I drew up a founding statement whose slogan was “Come Home, Democrats.”
After making a bit of a splash with a full-page ad in the New York Times, we settled down to the work of figuring out exactly how we would function. Ben had asked Penn to be CDM’s executive director, and, if memory serves, we spent our first several meetings on the question of how, or whether, to write our by-laws. Since most, or perhaps all—I no longer remember—of our financing came from the AFL-CIO, our first officially agreed-upon act in 1973 was to issue a statement expressing our disapproval of President Richard Nixon’s budget. After this, we had some annual banquets at which we bestowed honors upon some of our more worthy members. At some point, the directorship of CDM passed to Mr. Muravchik.
By the time Jimmy Carter won the presidency in 1976, it would in all honesty have to be said that CDM had utterly failed in its mission to become a counterweight to the Left in the Democratic party. In the meantime, however, we had become a kind of community, not very large but not entirely without a voice. And by 1980, most of us, though not all, marched ourselves cheerfully into the camp of Ronald Reagan.
After CDM, as before it, Penn had a variety of jobs and projects, all undertaken with his unswerving passion to increase the spread of democratic rights throughout the world. It seems fair to say, in deepest sadness, that we will never look upon his like again.
Midge Decter
New York City
To the Editor:
Joshua Muravchik’s eulogy for Penn Kemble is a fitting tribute. Though I found it for the most part a wonderful stroll down memory lane with people I consider old friends and coalition partners, I was disappointed that he chose to lump me and my fellow “followers of [Reverend] Sun Myung Moon” together with other political “immoderates” like the Jewish Defense League and the disciples of Lyndon LaRouche.
Mr. Muravchik knows better. He and his YPSL colleagues kept real extremists like those at arm’s length. But from the late 60’s through the late 70’s, the YPSL’s made common cause with us in the Unification Church many times, and precisely because they understood that, despite our quirky religion, we were not political extremists. Causes that I participated in with YPSL ranged from supporting Henry Jackson for the Democratic presidential nomination to opposing the Stalinists on the war in Vietnam, blocking the leftist crowd from taking over the United States Youth Council, educating Americans about Soviet oppression of intellectual dissidents, supporting Israel, and working to elect moderate Democrats to the Washington, D.C. city council. Later, we Unificationists, just like the YPSL neoconservatives, tended to close ranks with the Republicans.
Those shoulder-to-shoulder battles with the YPSL’s are precious memories for me, and it saddens me to see an old comrade forget so easily.
Dan Fefferman
International Coalition for Religious Freedom
Falls Church, Virginia
To the Editor:
It has long been nearly impossible to keep track of all the peregrinations and permutations of the American political Left in the 20th century. As the glory days of the Lovestoneites, Shachtmanites, Trotskyites, and the rest fade further from view with the passing of time, it becomes harder still. Many thanks, then, to Joshua Muravchik for providing a clear and moving portrait of the core group at the center of one of the most important of those factions, the Young People’s Socialist League (YPSL), and, in particular, of the late, great, admired, and much missed Penn Kemble.
Contrary to what Mr. Muravchik suggests, Penn did not quite abandon his New York City activities after relocating to Washington. I had the good fortune to work closely with him on a number of projects for Mayor Ed Koch, who in the 1980’s seemed to be the last, lonely voice of anti-Communism within the Democratic party. By the middle of the decade, the battle between Left and Right was being waged over the spread of Soviet-backed, Castro-style juntas in Central America. Penn spearheaded an effort to provide moral and political support to the Nicaraguan contras, Costa Rican President Óscar Arias Sánchez, and other proponents of democracy in the region. Spurred on by Penn, Mayor Koch took a considerable political risk by leading a large delegation to the region to back the “small d” democrats and raise their profile with the American public.
The dream held by Penn and others of restoring to the party of Harry Truman, Scoop Jackson, and Ed Koch the courage to fight totalitarians and support democracy movements is even more remote today—incredibly, after the end of the cold war—than it was 20 years ago. But the fading of that dream should in no way diminish respect for its chief advocate and avatar. May Penn Kemble rest in peace.
Jonathan R. Cohen
Norwalk, Connecticut
Joshua Muravchik writes:
I thank Stephen Schwartz for correcting my misspelling of Oehler and for offering other details about the Oehlerites well beyond my knowledge. However, his point that Oehler’s group was not a Communist split-off but a Trotskyist split-off strikes me as, well, splitting hairs. The Trotskyists of that era were themselves a Communist splinter. Oehler had begun as a member of the Communist party, then split from it along with other Trotskyists, and then, together with a few others, split from the Trotskyists to form his own group.
Mr. Schwartz’s encyclopedic knowledge of this stuff is admirable, but it seems to get in the way of linear thinking when he argues about the relationships among Max Shachtman, Trotskyism, and my own “comrades.” That Shachtman had once been a Trotskyist is hardly at issue. My point was that by the time Tom Kahn and Paul Feldman became his disciples, he was no longer one, and was soon to disband his party and urge its members to follow him in joining the Socialist party. He had, he explained, reconsidered the entire Communist experience back to its roots in 1917, and had concluded that the creation of the Communist movement by Lenin as a split-off from social democracy had been a “historical mistake.” In this view of his, no trace of Trotskyism remained. (By engaging in this argument I realize that I, too, become vulnerable to the charge of splitting hairs, but since the label “Trotskyist” has gained new currency as a slur on neoconservatives, demonstrating its falsehood seems to me more than a scholastic exercise.)
Mr. Schwartz also taxes me with “elid[ing] a great deal of the complicated if perhaps ultimately irrelevant history of the . . . YPSL.” For this I feel confident I have the reader’s gratitude.
Ellen Heyman accuses me of suggesting that my comrades and I “sprang fully grown from the head of Zeus.” If I gave any such impression, I wish to withdraw it. But the real problem here, I think, is that Ellen Heyman’s view of our paternity is shaped by her own distinct history. She has told me that she grew up in a Shachtmanite home. Like her, I too was very influenced by the ideas of my parents; but unlike hers, mine were not Shachtmanites but rather devotees of the Socialist party of Norman Thomas. The same was true of Penn Kemble, and indeed of more of my comrades than those who claimed a Shachtmanite lineage. So if I have failed to acknowledge a paternal debt, it may not be to the pater she has in mind.
I did record, however, that Kahn and Feldman, who were a few years older than my cohort, had been Shachtmanites themselves, and Ellen Heyman would be right to say that Shachtman’s influence on them was strong and enduring. She is not right, however, when she makes it sound as if Shachtman were the inventor of Left anti-Communism. There was a long, albeit uneven, anti-Communist tradition among social democrats in many countries, including the U.S., dating all the way back to the time when Shachtman himself was still a Communist. Although Shachtman’s speeches brilliantly evoked the horrors of Communism, his signature theory of “bureaucratic collectivism” never meant much to me or, as far as I know, to my fellows.
The distortion in Ellen Heyman’s lens becomes especially evident when she suggests that Shachtman’s influence is what led us to the civil-rights movement. I was active in that movement years before I ever heard of Shachtman, and again the same was true for Kemble as it was for Carl Gershman. As with others of our generation, it was the civil-rights struggle that led us into activism, where we eventually encountered Shachtman—not the other way around. We all regarded him as a wonderful orator and a great thinker, but his influence on us was not as momentous as it may have been on Ellen Heyman or those close to her.
I have taken much instruction from Midge Decter over the years, and am willing to be corrected on the subject of the genesis of the Coalition for a Democratic Majority. But I think I was not as far from the mark as she suggests when I wrote of Penn Kemble that CDM was one of the organizations “largely of his . . . invention.”
It is said that defeat is an orphan; but CDM, although thoroughly defeated within the Democratic party as Midge Decter aptly points out, had many fathers. The prime mover was indeed Ben Wattenberg. But the idea for it, as he notes in a forthcoming memoir, had been suggested to him by his sometime co-author, Richard Scammon. To judge by Wattenberg’s account, moreover, Midge Decter has understated her own hand in CDM’s founding statement, of which he says she was the principal author.
Still, before it was decided even to draft such a statement, a small group had been meeting quietly in Washington to concoct the new organization. Wattenberg was the convener, and Kemble was one of the handful of participants. There are no minutes of those meetings, but since Penn had already created more organizations than anyone else involved, my guess is that his was an influential voice in the deliberations. It was also Penn more than anyone else who had the long, close ties to organized labor that brought in the financial backing that Midge Decter mentions. He was chosen early on to be the staff person for the nascent enterprise, and from the time CDM was formally launched, with Penn as its executive director, he largely gave it its direction. The maiden action (which did not enthuse everyone) of assailing Nixon’s budget bore the earmarks of Penn’s tactical style—zigging Left to zag Right—just as clearly as CDM’s founding manifesto bore those of Midge Decter’s distinctive prose.
I am reminded by Dan Fefferman’s charming letter of why he was my favorite among the followers of Reverend Moon with whom we YPSL’s did from time to time make common cause. I recall with particular relish a “rally against North Vietnamese imperialism” staged jointly by the YPSL and the Freedom Leadership Foundation (FLF), as these “Unificationists” called themselves, on the campus of the University of California at Berkeley. That took moxie, which is precisely what led us to ally with them in such ventures. Their virtue, as I saw it, was not, as Mr. Fefferman puts it, that they were “not extremists,” but rather that they shared our militant anti-Communism. I did not think of them or of us as “moderates.”
Perhaps the reason Mr. Fefferman is so sensitive on the question of who is a “moderate” and who an “extremist” is that Reverend Moon has, in recent years, found allies far more outré than us young socialists: notably, Louis Farrakhan. In 1998, the Unification Church hosted a visit to South Korea by a delegation from the Nation of Islam during which Farrakhan proclaimed Moon “immortal.” In 2000 the two groups joined forces to sponsor a “Million Family March,” a sequel to Farrakhan’s 1995 “Million Man March.” Mr. Fefferman bristles at the imputation that he and his co-religionists were or are anything but centrists; methinks he doth protest too much.
I thank Jonathan R. Cohen for his generous words. But if I may quibble, the collaboration between Penn and Ed Koch was a matter not so much of Penn’s continuing his “New York activities” as of Koch’s continuing his national and international interests even after leaving the U.S. Congress and becoming mayor of New York—which is much to his credit.
In private correspondence, Rachelle Horowitz, an old comrade who was at Brooklyn College with Paul Feldman and was an eyewitness to the event in question, has informed me that I erred in writing that Paul had once told a date to “put out or get out” atop a Ferris wheel. The venue, rather, was the apex of Coney Island’s terrifying Cyclone roller coaster. My version understated his tactical genius.
Depression
To the Editor:
I am honored that my work has been given an extended critical essay by Algis Valiunas, but I want to correct one point and respond to another [“Sadness, Gladness—and Serotonin,” January].
Contrary to what Mr. Valiunas writes, I am not the “trademark proponent” of Prozac and its fellow selective serotonin reuptake inhibitors (SSRI’s). In my book, Listening to Prozac (1993), I wrote that these medications are less effective than prior antidepressants in the treatment of severe depression. For minor depression, I suggested, psychotherapy remains the most important intervention. As for the SSRI’s negative effects, although the research literature was equivocal, I sided with those who believed that new-onset suicidal thoughts, late-appearing neurological symptoms, and the development of tolerance for the drug were real risks.
My main interest, however, was not in Prozac’s effects on health, for good or ill, but in its effects on the modern sense of self. I had observed, in some of my patients, dramatic responses to Prozac that seemed to go beyond the amelioration of illness to the alteration of personality traits. Because there were good theoretical grounds to believe that the medication was directly responsible, I used my observations as the basis for a consideration and criticism of what I called “cosmetic psychopharmacology,” whereby medication might be used not to treat disease but to alter normal functions in desired or socially rewarded directions. I think it is fair to say that Listening to Prozac energized the study of “enhancement,” a now burgeoning field within medical ethics.
As a reviewer noted when it appeared, the book might well have been titled Worrying about Prozac. Its central concern was cultural: did society favor frenetic assertiveness as a personality style, and would medications’ ability to tweak temperament push us ever further in that direction? I favored a more neutral stance, in which the whole range of temperaments would be respected. In subsequent years, I began to hear the concerns I had voiced with respect to enhancement applied to the mainstream treatment of major depression—as if ameliorating mood disorder were also morally fraught. I wrote Against Depression in part to address that overgeneralization. I believe that depression has earned its status as a disease and that the ethical issues that attend its treatment, whether with medication or psychotherapy, parallel those that attend the treatment of other diseases.
Mr. Valiunas wants to contrast my position with that of Kay Jamison in her recent book Exuberance, but Jamison and I are largely in agreement on the main issue, namely, that sunny temperaments and emotional resilience have been underrated, particularly by intellectuals. I devote whole sections of Against Depression to this issue, asking why sanguine artists and scientists have been dismissed as lightweights. I speculate that the historical prevalence and intractability of depression have helped shape the cultural preference for “heroic melancholy,” and I discuss the potential effects on our tastes and values of (future) more effective treatments.
Jamison’s Exuberance is not at all “an answer to Kramer from within psychiatry.” It is rather a sort of naturalist reply to Leon Kass and others who seem to argue that fulfillment can arise only from anguished striving. Both Jamison and I say that people to whom achievement comes easy, or at least without agonizing self-doubt, can also lead full moral lives.
Mr. Valiunas’s praise for Jamison (who fills her book with case examples of people who look as if they inherited the traits I worried about as the goals of cosmesis) shows how much the conservative critique of psychiatry relies on faith in the natural as the good. Mr. Valiunas admires the “resilience, pluck, daring, and sparkle” of Teddy Roosevelt and others like that “riproaring statesman” in Jamison’s gallery; presumably these robust, joyous types are moral because they arrive at their temperament honestly, through “cosmic accident.” By this account, lucking into extreme resilience is admirable, while the restoration of quite modest resilience, through the medical treatment of depression, constitutes a suspect interference with what Mr. Valiunas calls “divine will.”
I want to underscore a point that, to his credit, Mr. Valiunas (mostly) makes on my behalf. I appreciate the centrality, to the aware and self-aware life, of social and existential alienation. But I do not see the disease of major depression as necessary to that perspective—just as I do not see exuberance as incompatible with it. How strange it is to have to make such a disclaimer, though; it should go without saying that one can oppose depression vigorously while understanding what is at stake in the modern world.
When Mr. Valiunas reaches back to Jamison’s account of her own manic depression, he comes closer to locating a point of disagreement. I do find that memoirs of mood disorder, hers included, tend to romanticize disease states. Mr. Valiunas quotes approvingly from her report that she learned from and found odd beauty in episodes of mania that involved psychosis and delusions. But as a society, we are in reasonable agreement that psychotic manic states are symptoms of disease. We do not refer to episodes of psychotic mania as justified—that is, as apt responses to the human condition or the state of society—as we might with depression. My new book will have achieved much of its purpose if it convinces readers that depressive episodes deserve the same status as episodes of mania. We may make creative use of them, as we make use of any extreme experience; all the same, they are bouts of illness, worthy of the efforts at prevention and treatment that would be applied toward any disease.
Peter D. Kramer, M.D.
Brown University
Providence, Rhode Island
Algis Valiunas writes:
By calling Peter D. Kramer the trademark proponent of SSRI’s, I by no means wished to suggest that he is a shill for the latest products of the pharmaceutical industry. Like any other good doctor, he enumerates the dangerous side effects that drugs such as Prozac sometimes have, and he notes the therapeutic limitations of the SSRI’s. Despite these reservations, however, Dr. Kramer makes clear that Prozac’s effectiveness in ameliorating ordinary unhappiness has changed not only psychiatry but the very conception of the moral life. As he writes in Listening to Prozac,
Is Prozac a good thing? By now, asking about the virtue of Prozac—and I am referring here not to its use in severely depressed patients but, rather, to its availability to alter personality—may seem like asking whether it was a good thing for Freud to have discovered the unconscious. . . . In time, I suspect we will come to discover that modern psychopharmacology has become, like Freud in his day, a whole climate of opinion under which we conduct our different lives.
Surely one can make no larger claim for the significance of a drug than this.
The practical triumphs of psychopharmacology that Dr. Kramer describes have all but assured the theoretical ascendancy of neurobiology as the defining science of human nature. Certainly in the popular mind, and to an alarming degree in the scientific conception, neural wiring has acquired the cachet of destiny, which Prozac is designed to overcome. The questions that I raise in my essay, and that Dr. Kramer scants in his books, concern the consequences for philosophy and religion that the ongoing conquest of nature, including human nature, will inevitably have.
Like Dr. Kramer and Kay Jamison, I believe that exuberance has been underrated, and that to take joy in being alive is the hallmark of a life well lived. I admit that a good deal of luck is involved in who gets to be joyous and who does not, while Dr. Kramer considers such luck an injustice that widespread neurochemical intervention can make right.
Sometimes that intervention is absolutely the correct thing; there is nothing inherently heroic in depressive illness, which I would be happy to see wiped out. Yet, fond as I am of my own all too rare flights of exuberance, if a pill were available to make them more lasting or more intense, I do not believe I would take it—although I would not question for a moment the wisdom of taking other pills to quell mania or relieve depression.
To manufacture a neurochemically improved or even perfected self seems a violation of my own nature in a way that correcting an obvious pathology does not. Coming to terms with certain flaws in my nature that seem essential to who I am is indispensable to such happiness as I am capable of; trying to overcome those flaws by moral effort seems natural in a way that smoothing them out by cosmetic psychopharmacology never will.
The conservative critique of psychiatry does not make nature pure and simple out to be the good, for nature can be unmistakably pernicious. But prudence distinguishes between nature’s grievous malignities and its ordinary imperfections. The former cry out for immediate relief by any means at hand; you learn to live, however uneasily, with the latter.
What I especially praised in Kay Jamison’s character was not her embrace of manic extremity. It was her making a remarkably healthy life for herself despite the occasional horrors of mania and depression. I did point out that she would have chosen mania only so long as she had lithium to control it. The triumph of psychopharmacology is to allow her such a choice; the triumph of character is to wrestle with a terrifying affliction and come out on top as often as she has.