Since the terrorist attacks of September 11, 2001, there has been an understandable preoccupation with how to reform the nation's intelligence system in order to prevent the recurrence of such an event. But the issue has been considered in a historical vacuum. The history of surprise attacks reveals a pattern to which the 9/11 attacks conformed. Once this pattern is grasped, it becomes apparent that reforming the intelligence system has serious limitations as a means of defending against international terrorism. Overselling intelligence can cause us to slight other, potentially more effective avenues of defense, such as making our borders less porous.
Before 9/11, the biggest surprise attack against the United States had been the Japanese raid on Pearl Harbor in December 1941. The question of why we were surprised is the subject of a large scholarly literature dominated by Roberta Wohlstetter's Pearl Harbor: Warning and Decision. Since this book's publication more than 40 years ago, additional documents have been declassified or have otherwise become available, and a stubborn revisionist school continues to argue that Roosevelt or Churchill or both knew the attack was coming but kept mum in order to lure America into the war. Nevertheless, Wohlstetter's study is generally and rightly considered authoritative.1
In 1941, as Wohlstetter explains, we had plenty of warning signs of an impending attack, in part because we had cracked the code (“MAGIC”) that the Japanese government used to communicate with its embassies around the world, including its embassy in Washington. War with Japan was expected to break out at any time, and the direction of a Japanese offensive was thought to be southward toward the Philippines, Malaya, and the Dutch East Indies (now Indonesia). Because such an offensive would expose the Japanese flank to U.S. ships in the Pacific, a preemptive Japanese attack on Pearl Harbor was hardly out of the question. Pearl Harbor was also known to be within range of the Japanese fleet, which was rich in aircraft carriers.
Why, then, did we not take measures to protect Pearl Harbor from air attack? The answer has several parts. For one thing, as Wohlstetter explains, to have a reasonable chance of detecting a surprise attack would have required instituting dense air patrols, which would have stripped aircraft from other fronts that seemed in greater danger of attack, such as the Philippines—where in fact the Japanese attacked within hours of Pearl Harbor—and would have interfered with pilot training. When a satisfactory response to a threat is difficult to devise, the tendency is, ostrich-like, to deny the threat.
Similarly, because our fleet was already heavily engaged in combat with German submarines in the Atlantic (though we were not yet formally at war with Germany), it was difficult for policy planners to see how we could take on Japan at the same time. In Wohlstetter's words, “the assumption was . . . ‘If we lose in the Atlantic, we lose everywhere.’ This meant that the Far East simply had to stay quiet.” Besides, although some signs pointed to Pearl Harbor as a target, more pointed to other possible objectives, including the Soviet Union and Thailand, as well as the British, Dutch, and U.S. possessions in the Far East. As Wohlstetter writes:
We failed to anticipate Pearl Harbor not for want of the relevant materials, but because of a plethora of irrelevant ones. . . . The signals that the local commanders later argued were muffled and fraught with uncertainty are the ones they viewed before the event. The signals that seem to stand out and scream of the impending catastrophe are the ones learned about only after the event, when they appear stripped of other possible meanings.
Because these warning signals were not effectively pooled, moreover, no one had the full picture of the danger. One reason the signals weren't pooled was security. The best source, the decrypted MAGIC code, was highly sensitive: if the Japanese discovered that their code had been broken, they would change it. So access to MAGIC was limited to a handful of high officials—who had neither the time nor the background to make sense of what they were reading.
There were additional constraints on our ability to gain a true picture of the danger. Politically, our leaders had difficulty understanding why the Japanese would want to attack Pearl Harbor. It is true that we were squeezing Japan hard economically, in order to deter it from attacking the Soviet Union and to convince it to abandon its ambition to dominate Asia. But we ourselves were extremely unlikely to attack Japan first. Had the Japanese confined their aggression to nations they could actually defeat, the United States—given its strong isolationist streak and the Roosevelt administration's preoccupation with the greater menace of Germany—might well have hesitated to declare war on Japan. In brief, it was reasonable to think that Japan's optimal strategy, especially given its military weakness vis-à-vis the U.S., would be to avoid precipitating a war.
To understand why Japan might nevertheless decide to attack Pearl Harbor would have required us to understand a culture alien to our own, and specifically to understand how, even with the knowledge that they might be defeated, the Japanese would think it more honorable to fight than to abandon their announced policy of dominating Asia. We would also have needed to understand how Japan's rulers saw the likely American response to such an attack. In the event, just as we underestimated their devotion to national honor, they underestimated ours.
Within the intelligence community itself, a further obstacle to a correct estimate of Japanese intentions was departmental reluctance to challenge consensus views. People in general hesitate to admit to having been mistaken or to being surprised by new data; intelligence officers are no exception. A reputation for being unsteady or unreliable can damage careers. “One index to sound judgment,” as Wohlstetter remarks, “is agreement with the hypotheses on which current department policy is based”; in the case of Pearl Harbor, the consensus held that the danger was sabotage by agents recruited from Hawaiians of Japanese ethnicity, not an air attack.
Still another impediment to sound judgment was that on several occasions before the attack, there had been credible warnings that war with Japan was about to break out. When it did not, intelligence officers were loath to repeat such warnings lest they be regarded as alarmists. Besides, people cannot keep themselves at peak alert all the time; they get tired. The more false alarms there are, the less alert they become. This may well have been true of the army and navy commanders in Hawaii, who felt they had already wasted time and resources on alarms issued by Washington that turned out to be false.
Finally, there was no overall commander of U.S. forces in Hawaii. Nor was there a joint Army-Navy staff. (The office of chairman of the Joint Chiefs of Staff did not yet exist.) In Washington, there were War and Navy departments, but there was no Department of Defense. The Army and Navy had their own separate intelligence services; little information was shared between them. There was no counterpart to the CIA or the Defense Intelligence Agency (DIA), and thus no “center for evaluating a mass of conflicting signals from specialized or partisan sources” (Wohlstetter). Missing, too, was any procedure for synthesizing confidential sources of information with what was publicly known about enemy intentions and capabilities.
The organizational problems were eventually corrected by the creation of the Defense Department, the CIA, and the DIA. No one has devised satisfactory correctives for the other problems that Wohlstetter identified.
A second historical instance occurred during the Vietnam war. At the end of January 1968, during the Tet holiday period, the United States was stunned by an offensive mounted by Vietcong and North Vietnamese forces against the cities of South Vietnam.
As with Pearl Harbor, signs of the impending attack abounded. Indeed, through a communications mixup, some Vietcong jumped the gun and attacked the day before the beginning of the holiday period. Although that should have been recognized as a definitive warning, few defensive measures were taken; from the business-as-usual behavior of the U.S. and South Vietnamese military commanders, it is apparent that the scope and intensity of the Tet assault came as a complete surprise.2
Why? As in the case of Pearl Harbor, our commanders were distracted by what looked to them like a greater menace—the siege of our base at Khe Sanh, in the northern part of South Vietnam, where it seemed that the North's General Giap was trying to repeat his victory over the French at Dienbienphu in the first Vietnam war. This led the U.S. command to interpret warning signs of an attack on South Vietnam's cities as a diversionary tactic, designed to trick us into shifting troops away from Khe Sanh. For just as the Japanese had been too weak in 1941 to take on the United States in a war, so the enemy in Vietnam seemed too weak to achieve the more ambitious goal of taking over South Vietnam's cities.3 At worst, we thought in our complacency, a Tet attack would be a last fling, like Hitler's 1944 assault on U.S. forces in the Ardennes (the Battle of the Bulge). Because we won that battle decisively, the German attack turned out to shorten the war. On the same reasoning, we could look forward to a Tet attack with equanimity.
We were handicapped in grasping the enemy's intentions not only by a mistaken analogy but also by cultural ignorance. We thought the Communists would not dare outrage South Vietnamese public opinion by violating the traditional Tet truce. An older tradition would have cast doubt on that expectation: in 1789, when the country was ruled by the hated Chinese, the Vietnamese rose in a surprise attack during Tet and won independence. There was also a mundane reason to schedule a major offensive during the Tet holiday: almost a third of the South Vietnamese army was expected to be on holiday leave.
In Vietnam, we used what we thought we knew in order to create a theory of what might happen. The theory was plausible, but erroneous.
For a third instance of this historical pattern, we may turn to the assault on Israel by Egypt and Syria in October 1973, which initiated the Yom Kippur war.
Because the bulk of the Israeli army consists of reserve formations that require a minimum of 48 hours to mobilize and reach the front, Israel is highly vulnerable to a surprise attack. Having launched a preemptive attack of its own in June 1967, Israel knew that the Arab states itched to return the compliment. There was no doubt who its enemies were, or what their options were: an attack from the south would come across the Suez Canal into Sinai, from the north across the cease-fire line in the Golan Heights.
Warnings in 1973 were abundant, but they were waved aside: Egypt and Syria achieved complete surprise. After the war, the reasons for this success were investigated at length by Israel's Agranat commission, an ad-hoc committee of notables not unlike our 9/11 commission. From its report we learn that the intelligence branch of Israel's armed forces, the only intelligence agency responsible for assessing threats of invasion, was convinced that Egypt and Syria knew they were too weak to prevail in a war with Israel. Like the United States at the time of Pearl Harbor, the prospective victim thought that its enemy needed more time to prepare a successful attack.
Although Israel's earlier, one-sided victory in the 1967 war had bred a certain contempt for its enemies—illustrating Nietzsche's adage that a great victory is a great danger—Israeli intelligence was correct that Egypt and Syria were still too weak to win. But again like America in 1941, Israel misunderstood its enemies' goals: Egypt and Syria could “win” just by fighting Israel to a draw, just as Japan thought it could preserve its honor by fighting the United States and losing.
In addition, and once again like Pearl Harbor, there had been a number of false alarms. Only a few months earlier, Israel had mobilized at great expense in the face of a warning that it was about to be attacked. A dissenting voice at the time was that of the chief of military intelligence, and his vindication in that earlier episode helped to persuade the Israeli cabinet to ignore later warning signs.
Reposing confidence in the accuracy of its intelligence, Israel had no plan for fighting off a full-scale attack before its reserves could be mobilized. And when the truth finally dawned, the few hours that remained were squandered; full mobilization was delayed, and units already on the front lines were not adequately alerted to what was about to happen.
The Agranat commission placed much of the blame for the fiasco on the incompetence of particular individuals, including the chief of military intelligence, his principal assistant, and the chief of the general staff. But it also criticized the structure of Israeli intelligence, which it found to be excessively centralized. Hence it offered a number of recommendations to insure “pluralism.” In particular, it urged the appointment of a civilian intelligence adviser to the prime minister—ironically, a step toward the same type of structure that, according to the 9/11 commission, let us down in September 2001 because it was too decentralized. But then, just like Israel's centralized system, our previous, more “pluralistic” intelligence system also failed to foresee the Yom Kippur attack.
Pearl Harbor, Tet, Yom Kippur, 9/11: the sample is too small to prove definitively that surprise attacks have certain features in common. But it is nevertheless suggestive, and the examples I have given could easily be augmented.
In each case, the attacker is too weak to have much hope of prevailing, at least in conventional military terms; the victim's perception of this weakness contributes to the failure to anticipate attack; the victim thinks, often with reason, that the principal danger lies elsewhere, or in the future, or both; the victim lacks a deep understanding of the attacker's intentions and capabilities, and so bases his expectations on what he, the victim, would do in the attacker's place; the victim interprets warning signs to fit this preconception; the victim is lulled by false alarms or deliberate deception; the victim is in a state of denial concerning the forms of attack that are the hardest to defend against; intelligence officers are reluctant to challenge their superiors' opinions; and warnings to local commanders lack clarity and credibility.
Senator Susan Collins, the sponsor of the bill that eventually emerged as the Intelligence Reform and Terrorism Prevention Act of 2004, has been quoted as saying that “just as the National Security Act of 1947 [which established the CIA] was passed to prevent another Pearl Harbor, the Intelligence Reform Act will help us prevent another 9/11.” What she overlooked is the fact that 9/11 was another Pearl Harbor. Although deficiencies in the organization of intelligence may have contributed to the success of both attacks—just as, in Israel, they contributed to the success of the Yom Kippur surprise attack—it is far from clear that the structure of the victim's intelligence service was a salient factor. This is grounds for concern in light of the heavy emphasis placed on reorganizing our intelligence system by both the 9/11 commission and the Intelligence Reform Act.
The causes of a nation's vulnerability to surprise attack seem intractable. This is especially true for a country like the United States, which faces a multitude of potential enemies, including those, like al Qaeda, with a much larger range of potential targets to choose from than Japan had in 1941. It is impossible to be strong everywhere, or to respond to every alarm with costly defensive measures (like grounding all civil aviation, as in the wake of 9/11), or to eavesdrop on every plotter.
Well in advance of 9/11, it was known that terrorists might use hijacked planes as missiles. But the possibility seemed remote, while taking effective measures against it would have been very costly.
A subtle but significant cost of such defensive measures, especially those that stress heightened alertness and frequent warnings, is the lulling (“boy who cried wolf”) effect of false alarms; the more there are, the likelier it is that true alarms will be ignored. The Israelis disregarded the signs of an imminent attack by the Egyptians and Syrians in October 1973 not only because they thought the probability low but also out of a false sense of security induced by their previous, costly mobilization. Before 9/11, too, the probability of attacks on the continental United States was thought to be low, and the cost of defensive measures too high. Indeed, one of the special factors that continue to make it difficult to defend the United States against a surprise attack is an individualistic mentality that places a very high value on privacy, autonomy, and convenience, and as a result greatly resents the restrictions entailed by effective security measures until the danger becomes palpable.
If false alarms create a lulling effect, an attack, when it comes, can create a “hyperalert effect”—increasing the risk of another surprise attack by overconcentrating on preventing a repetition of the previous one. We may be expending too many resources on screening airline passengers while downplaying serious potential threats to other parts of our transportation system.4 A hyperalert state may also precipitate a flood of warnings that turn out to be false alarms, as has certainly been our experience since 9/11; this adds still more lulling costs.
Even without such distractions, intelligence agencies cannot collect and analyze data across the entire spectrum of possible surprise attacks. Intelligence officers determine where the greatest dangers lie and, having made that determination, inevitably give greater weight to information that bears on those dangers than to information concerning more remote threats. This gives rise to the paradox that a surprise attack is more likely to succeed if it has a low probability of success and if the attacker is weak.
On both counts, the prospective victim is likely to discount the danger. In addition, since the number of possible low-probability attacks is much greater than the number of high-probability ones, a potential victim will marshal his defensive resources to protect the high-probability targets of greatest value, leaving underprotected the immense number of lower-valued, low-probability ones. Realizing, too, that an enemy who wants to achieve strategic surprise will pick one of those inferior targets, and is therefore unlikely to obtain a decisive victory, the potential victim will rationally decline to invest a great deal in defensive measures, especially since the cost of defending against the entire spectrum of low-probability attacks by weak adversaries (who may, moreover, be numerous) is prohibitive.
For the weak, surprise attacks are a favorite tactic because they are a force multiplier. They tend to be wild gambles. Yet the attacker, weak as he may be, gets to pick the time, place, and means of attack; unless the victim is exceptionally lucky, the plan cannot be discovered in advance. Surprise gives the attacker a built-in advantage that assures a reasonable probability of at least a local or short-term success. The Pearl Harbor, Tet, Yom Kippur, and 9/11 attacks achieved such successes. When an attacker is willing to settle for that, there is little the victim can do to prevent the attack from happening.
A final consideration is that the more sensitive a warning system, the greater the risk of the victim's refraining from a response altogether or responding mistakenly by means of a preemptive strike on the supposed attacker. As Thomas Schelling put it in The Strategy of Conflict (1980), a warning system “may cause us to identify an attacking plane as a seagull, and do nothing, or it may cause us to identify a seagull as an attacking plane, and provoke our inadvertent attack on the enemy.” This is another reason to doubt the wisdom of seeking a hair-trigger defense against surprise attack.
Still, although surprise attacks cannot reliably be prevented across the board, some can be. Others can be deterred, and the worst consequences of those that do occur can be mitigated, for example by stocking vaccines in anticipation of a possible bioterrorist attack that may not be preventable. But reorganizing the intelligence system along the lines proposed by the 9/11 commission, and adopted to a considerable extent by Congress in the Intelligence Reform Act, is an implausible response to the problem of preventing or defending against surprise attacks.
The act's central thrust is to centralize the control of intelligence in the person of the Director of National Intelligence, who will be too busy, and too remote from the operational level of intelligence collection and analysis, to establish priorities among threatened attacks. Worse, if Americans believe that the Intelligence Reform Act has fixed the system—if, indeed, they believe that any reforms can insure against intelligence failures—they may be less willing to support the strengthening of other elements of an integrated system of national defense. These elements include deterrence, border defense, the guarding and hardening of potential targets, and mitigative measures should an attack occur. Such a relaxation of effort would be an invitation to disaster.
1 Other noteworthy works on surprise attacks include Richard K. Betts, Surprise Attack: Lessons for Defense Planning (1982), and Cynthia M. Grabo, Anticipating Surprise: Analysis for Strategic Warning (2002).
2 See James J. Wirtz, The Tet Offensive: Intelligence Failure in War (1991).
3 In fact the enemy was too weak to achieve its objectives; in the Tet offensive it suffered horrendous losses and was driven out of all the cities it attacked. The North Vietnamese and the Vietcong seem to have thought the offensive would end the war by causing the South Vietnamese regime to collapse; in this they were mistaken. As is well known, however, Tet did have a powerful effect on American public opinion, and one deeply injurious to the war effort.
4 On this point, see James Fallows, “Success without Victory,” Atlantic Monthly, January-February 2005.