Accidents and disasters are often caused by simple, random events or by a change in a normal sequence of actions, any one of which could affect the outcome. Had the path of the Air France Concorde been slightly different, had the piece of titanium not fallen off a DC-10, had the plane left a tad earlier or later, had a sealant been used in the fuel tanks, or had any one of a number of other seemingly unimportant events gone differently, the plane's tire would not have struck the titanium, a piece of tire would not have opened a substantial leak in the plane's fuel tank, and the passengers and crew would still be alive today.
Another related book worth reading is Normal Accidents by Charles Perrow. Perrow studied several major accidents and concluded that some forms of technology are more open to chains of failure, and that adding more safety systems can actually increase the likelihood of an accident because of the added complexity. The systems become so "tightly coupled" that a failure in any part of the system almost inevitably leads to a chain of unmanageable and uncontrollable events.
Chiles goes Perrow one better, making recommendations on how training and people can prevent accidents by breaking one of the links in the chain. This requires that individuals throughout the organization be empowered to call decisions into question or to halt actions they believe to be of concern. He observed several industries, such as air traffic control centers and aircraft carriers (not to mention helicopter repair of high-tension lines!), that have impressive safety records despite a high degree of coupling and danger.
It's a fascinating book that examines why disasters happened and what lessons can be gleaned from those tragedies. For example, the explosion of the steamboat Sultana killed hundreds in 1865, at a time when Americans were seemingly inured to disasters of all kinds ("between 1816 and 1848, 233 explosions on American steamboats had killed more than two thousand people"). Steamboats were constantly being destroyed by boiler explosions, and, despite industry objections, the federal government had imposed all sorts of controls and inspections. In the case of the Sultana, the captain was in a hurry and wanted to pack as many prisoners (released from Andersonville prison) on board as possible, being paid [$] per soldier and [$] per officer. The ship was badly overloaded, which contributed to the boiler explosion: when the ship turned, its top-heaviness caused the water level in the boiler to shift beyond safe limits. In addition, rather than have a crack in one boiler properly fixed, the captain had insisted on a patch that normally would have been fine, except that it was slightly thinner than the boilerplate on the rest of the boiler. Even that would have been acceptable, except that no one thought to change the setting on the emergency blowout valve to reflect the thinner metal of the repair. So a sequence of decisions, each individually unimportant, combined to kill far more people, on a percentage basis, than the 9/11 attacks.
It is possible to conduct accident-free operations, but Chiles says that doing so means changing the normal operational culture and mindset. For example, challenging authority becomes crucial in preventing aircraft crashes and in other jobs where people have to work as a team. The airlines have recognized this: there is no longer a "pilot in command"; the term now is "pilot flying," and each pilot is required to question the judgment of the other if he or she thinks the pilot flying has made an unsafe move or decision.
I learned about the extraordinary safety record of companies that use helicopters to make repairs on high-tension electrical lines while the current is still on. That would certainly loosen my sphincter. The pilot hovers the craft within feet of the conductive lines while the electrician leans out on a platform, hooks a device to the line that makes the craft and everyone on it conduct up to 200,000 volts (they even have to wear conductive clothing), and makes repairs to the line. They have never had an accident in twenty-five years of doing this. Safety is paramount, they anticipate the unexpected, and everyone is an equal partner in the team, expected to point out conditions that might be unsafe. "A good system, and operators with good `crew resource management' skills, can tolerate mistakes and malfunctions amazingly well. Some call it luck, but it's really a matter of resilience and redundancy." Failing to have this resilience can have tragic consequences. On December 29, 1972, an L-1011 crashed on approach to Miami because a light bulb indicating whether the landing gear was down had burned out and the entire four-man crew became absorbed in changing the bulb. They did not notice that someone had bumped the control column, disengaging the autopilot that was supposed to hold them at two thousand feet, and the air traffic controller who noticed the deviation in altitude did not tell them to pull up, not wanting to annoy the crew, but simply asked if everything was coming along. The plane crashed, killing most of those on board.
Another key element is that people must be clear in speaking and writing, "even if doing so necessitates asking people to repeat what you told them. . . We know that people will try to avoid making trouble, particularly any trouble visible to outsiders, even though they are convinced that catastrophe is near." Chiles cites numerous instances where committed individuals went outside normal channels to get additional perspectives or assistance and prevented catastrophe. Those individuals always knew the leadership would back up their independent decisions, even if they turned out to be wrong.
I have just scratched the surface. This book should be recommended reading for everyone.