In the early-morning hours of October 7, the Shin Bet, Israel's internal security service, began detecting hints of activity across the border with Gaza. According to a report in Haaretz, it was nothing concrete, just "an accumulation of signs or fragments of information [that] aroused certain concern." Telephone conferences were held, possible scenarios discussed. It was just an exercise, some argued; others suspected an isolated abduction attempt. Officers alerted IDF Chief of Staff Herzi Halevi, as well as Shin Bet director Ronen Bar. The Shin Bet dispatched a small squad to hunt for any possible abduction incursion, but that was it. No general alert was issued. They would wait until morning for more information.
I am not an expert in Israeli military affairs. But I have spent years studying man-made disasters—and almost every case I've examined includes a scenario like the one above. People working in a hazardous environment notice small signs of trouble, pass those warnings along to their superiors, and then…nothing. The Titanic, for example, received multiple telegrams about icebergs in its path. In 2010, the crew of the Deepwater Horizon drilling rig performed a test that suggested high-pressure methane might surge up the drill string—but after a second test produced a less worrisome result, managers decided work should proceed as normal. And, in an eerie parallel with October 7, the night before the 1986 launch of the space shuttle Challenger, anxious engineers requested a telephone conference with NASA brass. The rubber joints on the shuttle's booster rockets had a history of leaking small jets of flame. The engineers worried that the very cold weather that night might make those rubber seals inflexible and thereby allow bigger, more dangerous leaks. After a long discussion, managers decided the launch should proceed because there weren't enough data to justify scrubbing the mission.
Humanity has always faced disasters. Some, like hurricanes, earthquakes, or tsunamis, are unpredictable caprices of nature. Others, especially since the Industrial Revolution, result from breakdowns in technological systems, mistakes on the part of humans operating those systems, or, most often, some devilish combination of the two. These include train wrecks, industrial accidents, plane crashes, and the like. The epic catastrophes of the 1970s and ’80s—Three Mile Island, Challenger, Chernobyl, Exxon Valdez—helped launch a new field of inquiry dedicated to understanding these types of “socio-technical” failures. The field was pioneered not by engineers, but by sociologists and psychologists, experts in how humans behave in organized groups. This relatively new science of disaster reveals that major accidents are rarely the result of a single mechanical breakdown or isolated human error. Instead, organizations gradually “drift into failure,” in the words of disaster theorist Sidney Dekker. During long, accident-free periods, businesses and other institutions tend to reduce manpower and trim safety margins, all in the name of efficiency and focusing on their “core mission.”
“Drifting into failure is a slow, incremental process,” Dekker writes. “An organization, using all its resources in pursuit of its mandate (providing safe air-travel, delivering electricity reliably, taking care of your savings), gradually borrows more and more from the margins that once buffered it from the assumed boundaries of failure.”
Organizational culture also plays a role. Especially in hazardous fields, workers and managers grow accustomed to certain risks and increasingly confident in their ability to manage them. Situations that might appear scary to outsiders feel routine to them. NASA convinced itself that small leaks from shuttle booster rockets were a predictable annoyance, not a looming crisis. The crew members of the Deepwater Horizon were so accustomed to pushing the boundaries of undersea drilling that, in the words of one accident postmortem, “they forgot to be afraid.”
University of Michigan sociologist Karl Weick has extensively studied the famous Mann Gulch disaster, a 1949 forest fire that claimed the lives of 13 firefighters, most of them smokejumpers. He concludes that those men died due to a failure of what he calls "sensemaking." Having fought many similar fires, the crew shared a deep assumption that this one would behave like those they'd extinguished before. As a result, the crew missed subtle clues that their mental model was flawed. When the fire suddenly turned on them, they had no escape route. Weick argues that successful organizations need to be wary of too much optimism. True safety requires cultivating a bit of paranoia: being alert for the "weak signals" that might portend future trouble. "Positive illusions can kill people," he warns.
Military history is full of disasters, of course. And the lessons of military history dovetail with the findings of disaster science. But wartime catastrophes contain yet another variable: an enemy eager to exploit any hint of complacency or distraction on the part of its adversary. Looking at October 7 through the lens of disaster science reveals how Israel’s military and political leadership developed an overly optimistic mental model of the threats that surrounded them. They forgot to be afraid. Worse, Hamas’s attack planners deeply understood Israel’s weak points and blind spots.
Israel's drift into failure began long before October 7. Prime Minister Benjamin Netanyahu and military leaders believed the threat from Hamas was mostly contained. With the Iron Dome missile-defense system in place, they came to see occasional rocket barrages from Gaza much the way NASA viewed booster leaks: as a manageable problem rather than an existential threat. A high-tech fence and automated listening posts had mostly eliminated incursions across the border. With Hamas seemingly quiescent, the IDF was able to reassign most troops from the Gaza frontier to other missions.
Amos Harel, a defense reporter for Haaretz, visited the border region six weeks before the attack. The IDF’s Coordination and Liaison Office at Erez “looked quite drowsy,” Harel later recalled. He saw “high fences, outposts, observation means, plenty of technological systems—but not many forces on the ground … the level of preparedness and vigilance also didn’t appear to be high.” On a visit to a base on Israel’s northern frontier, “I was taken aback by the amount of garbage in the corridors,” Harel wrote. Trash on a military base is like rust on a ship or burned-out light bulbs at a nuclear plant—the kind of weak signal that suggests slack discipline. If soldiers aren’t noticing litter underfoot, what else are they overlooking?
After the attack, stories trickled out that some observers had noticed worrisome activities in Gaza. An unnamed Egyptian intelligence official told the Associated Press that “we have warned them an explosion of the situation is coming.” Hamas was planning “something big.” (Netanyahu called the claim “fake news.”) U.S. intelligence services also circulated reports indicating a rising threat of rocket attacks and other “unusual activity by Hamas,” according to CNN. But U.S. officials had seen such reports before, one source told the network: “I think what happened is everyone saw these reports and were like, ‘Yeah of course. But we know what this will look like.’” It is unclear whether the U.S. passed any warnings back to Israel. But apparently, U.S. experts were equally confident they could predict Hamas’s future tactics based on its past behavior.
Maya Desiatnik, an observation officer at the Nahal Oz outpost near the Gaza border, told Israeli public radio that she and other observers informed IDF superiors about multiple instances of suspicious activity. They saw people consulting maps along the fence line, heavy equipment, and groups of armed men engaged in what appeared to be training exercises. “It was clear that something would happen,” she said. Apparently, those warnings failed to dent the IDF’s mental models about what risks might emanate from Gaza. If they had, perhaps military leaders would have been more alarmed by those reports that had filtered up to various headquarters in the hours before the attack. Instead, as Haaretz writes, “at the Shin Bet, the signs were deemed ‘weak signals’ from which it wasn’t possible to derive sufficient insight on activity in the near future.”
The Hamas leaders who planned the invasion—along with their likely Iranian advisers—knew exactly what they were doing. The attack began with a massive rocket barrage. The Iron Dome began launching its interceptor missiles while IDF forces stayed safely in their bunkers, monitoring the frontier remotely on video screens. The cacophony overhead masked the sound of gunfire and bulldozers breaking down fences. Snipers took aim at the cameras overlooking the border, while remotely operated drones dropped small bombs on automated watchtowers and cellular transponders. The timing was so precise that “all our screens turned off in almost the exact same second,” one soldier in a command center told investigators. Within minutes, the most powerful military in the Middle East was blind.
Israel had invested heavily in its high-tech border defenses, which included remote-controlled machine guns in addition to surveillance capabilities. In theory, the system should have kept the border region safe. But, as disaster researchers have documented, safety technology can be a double-edged sword. A new gadget can make an existing system safer, but it might also open the door to less frequent, yet more severe accidents. For example, the Deepwater Horizon platform relied on an elaborate seafloor rig known as a blowout preventer as a last resort to keep oil and gas from erupting out of the borehole. These devices have prevented many accidents, but they also allow oil companies to drill deeper and take more risks. When the Deepwater Horizon preventer failed, it enabled the largest accidental oil spill in history.
With its high-tech barrier in place, the IDF became comfortable leaving only token forces along much of the border. Many of the troops who remained were concentrated in a single base. And instead of patrolling the border on foot, they watched it mostly via video links. In other words, Israel’s border defense evolved over time from a loosely connected string of observation posts to a single, well-integrated network. Disaster researchers would call this a “tightly coupled system.” The modern world is full of such interconnected systems. As a rule, these make our communications, commerce, infrastructure, and so on more efficient and convenient. But they have the potential to turn small, isolated problems into sprawling disasters.
Our power grid, for example, connects utilities across multiple states, allowing them to shuttle power to where it is needed, helping to prevent local brownouts and blackouts. But by linking local grids together, it exposes each of them to the threat of a grand, cascading failure. In 2003, a minor short circuit on an Ohio transmission line propagated across the grid until much of the Northeast and parts of Canada were plunged into a blackout. Almost 100 people died. Israel’s border-defense system was similarly vulnerable. Once terrorists knocked out the cameras and communications nodes, the whole system went dark.
But even before communications fully collapsed, the information pouring in was too chaotic for IDF soldiers and officers to process. One new recruit described the scene to a military website: "We started receiving messages that there was a raid on every reporting line. There were swarms of terrorists, something psychotic." Higher up the chain of command, IDF officers were preoccupied with the rocket attack. At the main military headquarters in Tel Aviv, panicked reports from the front were met with "a lot of question marks," one former military adviser told the Washington Post. No one grasped the massive scale of the attack; the very idea was outside anyone's mental model. Faced with its worst crisis since the Yom Kippur War, Israel's military suffered a complete collapse of sensemaking. An hour and a half passed before the IDF reported a "combined attack." It took hours more to call in air support, even though the closest bases were only minutes away.
Such failure to grasp the scope of an unfolding catastrophe is surprisingly common. In fact, it is the norm. “Failure is not as much the accident,” Sidney Dekker once wrote, “but failure to identify the accident early in its birth.” In 2012, after the cruise ship Costa Concordia struck a rock and began to sink off the Italian coast, befuddled crew members told passengers, “We have solved the problem and invite everyone to return to their cabins.” It is easy, in retrospect, to condemn such obtuseness, but this “positive optimistic bias” is deeply ingrained in our nature. What the human mind can’t comprehend, it seeks to minimize.
Civilians near the Gaza border were equally overwhelmed. Attackers arrived at the small Kissufim kibbutz just after 6:30 A.M. Shai Asher, a member of the kibbutz security team, got a WhatsApp message: “‘This is a real action, real action, a real situation,’” he recalled to the Washington Post. He asked his colleagues whether anyone knew how to contact the IDF. But efforts to communicate were in vain. The attackers had detailed information about where to find—and soon disable—the village’s communications equipment. Asher described the moment he realized the kibbutzniks were on their own: “The phone network doesn’t work, WhatsApp doesn’t work, everything is broken down,” he said. Similar scenarios played out all along the border.
The IDF would have been little help in any event. In many cases, military outposts and kibbutzim alike were just hundreds of meters from the border fence. Everyone was living under an imaginary umbrella of security. And the military had abandoned the longstanding concept of “defense in depth.” Hamas terrorists overran the thinly staffed bases before many soldiers even realized they were under attack. Some were shot in their beds. At the Nahal Oz post, Maya Desiatnik—the officer who reported suspicious activity—was one of only two soldiers who weren’t killed or abducted. “That’s what happens when you suffer a catastrophic systemic failure,” a former Israeli intelligence officer told the Post. “That’s what happens when you forget that all defense lines can eventually be breached and have been historically. That’s what happens when you underestimate your enemy.”
In a 2017 paper for the Brookings Institution, former CIA analyst Bruce Riedel wrote that Israel’s lack of preparedness for the 1973 Yom Kippur War was “a classic example of how intelligence fails when the policy and intelligence communities build a feedback loop that reinforces their prejudices and blinds them to changes in the threat environment.” Riedel’s paper is just one of hundreds of analyses and investigations that try to make sense of that war’s initial, nearly fatal, missteps. The tragedy of October 7 will no doubt prompt similar inquiries.
“Rituals of risk and error are the aftermath of every disaster,” writes Diane Vaughan, a Columbia University sociologist who wrote the definitive analysis of the Challenger disaster. Committees are convened, hearings held, fingers pointed. Invariably the investigations focus on exposing the culprits responsible for the lapses that few saw in advance, but that seem so glaringly obvious in retrospect. As Dekker puts it, disaster inquiries always seek to identify “bad people, bad decisions, broken parts.” After a tragedy it is natural to want someone to blame. And Israel’s current political and military leadership certainly offers plenty of candidates. But singling out negligent or even corrupt leaders isn’t enough to prevent catastrophes from recurring.
The Challenger disaster led to the famous Rogers Commission, an inquiry committee stuffed with notables including Neil Armstrong, Richard Feynman, Sally Ride, and Chuck Yeager. It produced a scorching report, which focused on leadership lapses. NASA initiated serious reforms. Nonetheless, 17 years after the Challenger was lost, the shuttle Columbia disintegrated on reentry. It turned out that NASA had made the same mistakes all over again: It had decided to tolerate seemingly small risks (in this case, chunks of detached insulating foam striking the orbiter's delicate wings during takeoff) and to tune out weak signals. Vaughan concludes that NASA's problems ran deeper than a few bad managers at the top. "Mistake, mishap, and disaster are socially organized and systematically produced by social structures," she writes.
Inquiries into the Yom Kippur War failures led to major changes in Israel’s military structure and political leadership—including the end of Prime Minister Golda Meir’s storied career. And yet here we are, 50 years later, grappling with another catastrophic failure of sensemaking on the part of Israel’s political, intelligence, and military elites. And this could be the hardest lesson of the post–October 7 reckoning: Identifying and even punishing these failed leaders might be necessary, and indeed, cathartic. But it won’t be sufficient. The problems lie deeper than any group of individuals. “Locating blame in individuals perpetuates the problem,” Vaughan writes. The people thought to be at fault can be fired or even jailed, “but unless the organizational causes of the problems are fixed, the next person to occupy the same position will experience the same pressures and the harmful outcomes will repeat.”
It's natural to be outraged at the leaders who failed to anticipate this horrific assault. But, unlike in an accidental disaster, we should reserve our deepest anger for the people who ordered and carried out this exercise in primitive barbarity. Emily Harding, an analyst with the Center for Strategic and International Studies, writes that intelligence collapses like 9/11 and October 7 "are often failures of imagination." They occur when leaders and analysts "neglect to think as big and as ruthlessly as their enemy." Maybe we shouldn't be shocked that Israel's military and intelligence leaders failed to imagine the depths of Hamas's depravity. Perhaps—and I know this is asking a lot—we should try to summon a bit of empathy for officials whose notions of military threats didn't include mass rape and babies in ovens.
No doubt all these questions will be hashed out in the coming years of inquiry and attempts at reform. But that will have to wait. As Israeli forces were still engaging the last Hamas terrorists, a reporter asked IDF spokesman Rear Admiral Daniel Hagari about the status of the investigation into military and intelligence failures. His response: "First, we fight, then we investigate."