Course Summary
DURATION: 2 days
Examination of the history of engineering mishaps reveals that their root causes remain largely unchanged over time and across many diverse programs. Lessons learned from these root causes can help designers of aerospace and other technical systems to:
Analyze the detailed case histories of previous mishaps to identify system-specific lessons, and
Translate these lessons into concrete strategies that will minimize or eliminate root causes.
Topics range from lessons in general management (or mismanagement) approaches to very detailed case histories of actual mishaps. The instructors share their combined 80-plus years of experience in aerospace and civil engineering system development, along with personal involvement in many of the mishaps presented, to provide unique insights from the perspective of those who were there and have the “scars” to prove it. They identify all sources of information, such as websites, failure reports, interviews, and consultation with other subject matter experts. Finally, they share their views on why the lessons are frequently not learned, and what to do about it.
COURSE SAMPLE: APOLLO 13
What really happened and why?
You may know that an oxygen tank over-pressurized during the ill-fated Apollo 13 mission. But do you know why? We will dig into the back story of the multiple mistakes made during pre-flight analysis and testing. We’ll also explain how the crew made it home safely, thanks largely to engineers who had predicted a worst case and worked out how to survive it. Apollo 13 is one of over 60 cases we present to draw out lessons learned you won’t find in a failure report.
COURSE SAMPLE 2: EVA #23
An astronaut’s helmet filled with water during an EVA. Why did it happen?
The answer might surprise you. The root cause was not on orbit; it was a bad management decision.
WHO SHOULD ATTEND:
Engineering staff, technical managers, and program/project managers engaged in the development of aerospace and other high technology systems, or those responsible for oversight in these areas.
WHAT YOU WILL LEARN:
This 2-day course is designed to provide insights, lessons learned, and mitigation strategies to address the root causes of complex aerospace and civil infrastructure system failures.
The course reviews dozens of extraordinary engineering failures and near misses in detail, exploring the engineering, quality, systems, and management aspects that led to disaster. In addition to uncovering the root causes, specific lessons for avoiding these types of mishaps are outlined. For example, the details of the Space Shuttle Columbia accident are presented along with an introduction to the concept of “Normalization of Deviance,” in which system behaviors deviating from an established requirement are rationalized and accepted, with catastrophic results. Other notable cases include the perils of “Faster, Better, Cheaper” programs and the special challenges associated with mission software development, including Mars missions and the Ariane 501 case. “Rules of Practice” and other concrete strategies are shared that participants can apply to their current work to reduce risk and maximize success.
TYPICAL COURSE OUTLINE*
(Content will vary slightly depending on the tailoring and as updated material is included)
Introduction
Overview of Failure Types
Lessons from Past Missions
Screening Out Design Errors
Screening Out Procedural Errors
Impact of Weak Testing Practices
Systems Engineering Lapses
Software Mishaps
Information Flow Breakdown
Flawed Processes
Experienced Teams Make Mistakes
Normalizing Deviance
Missed Advance Warnings
Perils of Heritage Systems
Management Issues
Chain of Errors Concept
Near Misses
Mishap Summaries
The Human Element
Applying the Lessons: “Rules of Practice”
Working Group Discussions (end of each day)
Conclusions
*Traditionally a two-day course, but options ranging from individual lectures to two full days are available. Content will vary.
GOALS
This 2-day course has the following goals for developers of technical systems. Participants will:
· Develop early recognition of potential failure risks through awareness of the twelve most common themes found in the extensive database of cases.
· Achieve a broader awareness and gain new perspectives drawn from a wide variety of engineering failures and near misses.
· Anticipate, recognize, and reduce the risks of human error in the system, since the root causes of most complex engineering failures are human in origin.
· Recognize and take action to avoid common pitfalls through the life cycle of projects, from design through operations.
· Adopt applicable “Rules of Practice” for their own projects, which are especially useful in creating a culture of engineering process, risk management, and decision making that minimizes or eliminates the root causes of failures.
“Honestly, every program at NASA should have this training.”
— Kennedy Space Center
Book a Course
AR NexGen translates lessons into concrete strategies that will minimize or eliminate root causes of engineering system mishaps and near misses.