If AI had run the Apollo 13 rescue mission, the crew would have died
Don't take it from me, hear it straight from the AI.
The story of the Apollo 13 rescue mission made me fall in love with engineering. It’s a perfect blend of harsh and beautiful reality. The situation was dire and resources were limited, but deep expertise and adaptive creativity saved lives.
Today the blog features a guest. But the guest isn’t human; it’s an AI. I’m the interviewer, and the AI is in the hot seat.
Enjoy.
Q: Why is Apollo 13 such a big deal for engineers?
Apollo 13 wasn’t a mission success, but it was a masterclass in handling systems failure. A literal explosion in space forced the crew and engineers to drop the original plan and improvise every step home.
For engineers, it’s the ultimate case study in solving real-world problems under pressure, with limited data, broken tools, and no second chances.
Q: What exactly went wrong, and how did they fix it?
An oxygen tank exploded mid-flight, knocking out power, oxygen, and propulsion in the command module. The crew had to retreat to the lunar module, which wasn’t built to support three people or to serve as a lifeboat for the trip home.
NASA and the astronauts improvised:
CO₂ Scrubber Hack: The command module used square filters; the lunar module used round ones. The crew built an adapter out of plastic bags, cardboard, and duct tape, guided step by step over the radio by engineers on the ground.
Manual Navigation: With the main computer powered down, they aligned the spacecraft using Earth’s position in the window and timed engine burns with a wristwatch.
Extreme Power Cuts: They powered down everything non-essential. Cabin temperatures dropped to near freezing. No lights and barely any comms, but they kept life support running.
This wasn’t textbook problem-solving. It was mental simulation, constraint reasoning, and trust in teammates.
Q: What traits made those engineers successful?
Clear systems thinking
Creativity under extreme limits
Mental simulation of complex environments
Precision in communication
No ego in collaboration
Deep failure-mode literacy
Ruthless prioritization
Zero reliance on “business as usual”
They weren’t smarter than everyone else. They were trained, prepared, and able to adapt in real time.
Q: Which of these traits does current AI possess?
AI shows partial constraint-driven creativity and failure-mode literacy, strong precision in communication, and high cognitive stamina without fatigue. But its mental simulation and rapid abstraction are limited, and its systems thinking is weak.
AI inherently lacks prioritization, judgment, intent, and adaptive autonomy, operating strictly within predefined rules and data.
Q: Could current AI handle this kind of problem-solving?
Not even close.
AI can simulate scenarios, analyze data, and suggest options, but only within pre-trained bounds. It can’t leap context, reimagine tools, or repurpose duct tape for a life support system. It has no sense of urgency, no ability to bend systems creatively, and no understanding of what’s actually at stake.
Give AI the Apollo 13 problem as primary decision-maker? Astronauts die.
Q: So what should today’s engineers take from all this?
Invest in:
Thinking clearly under extreme constraints
Systems literacy across software, hardware, and operations
Communication when everything’s going wrong
Knowing how to build with what you have, not what you want
Practicing mental models, not just tools
Avoid:
Blind trust in automation
Assuming data is always complete
Defaulting to procedures without questioning them
Losing hands-on skill in favor of abstraction
When the mission breaks catastrophically, you can’t outsource the fix.
You are part of the system. Your emotions, creativity, previous experiences, and deep expertise all contribute to the final outcome. Don’t underestimate the value of being a human.
We’re inherently flawed. But just like jazz, it’s those flaws that make us valuable. Imperfection is at the core of innovation. On a low level such as code, our flaws generate problems. On a high level, they generate solutions.
AI is a threat to developers
I believe AI is a threat to developers, but not to engineers. I might be wrong, but here is my rationale.