AI Pilots: How Artificial Intelligence Is Taking Over Spacecraft Control
From neural networks fine-tuning satellite pointing to reinforcement learning agents guiding landers to the Moon, AI is becoming a serious copilot in space
In 1999, NASA’s Deep Space 1 spacecraft made history with the Remote Agent Experiment (RAX) — the first time an AI autonomously controlled a spacecraft’s high-level operations in deep space. Back then, RAX did not improve or adapt itself beyond predefined parameters. More than two decades later, ESA’s OPS-SAT pushed this frontier further, using a machine-learning-based system to control a spacecraft’s attitude in real time from camera images alone. That shift, toward learning from data and improving performance autonomously, goes beyond anything RAX employed.
This isn’t science fiction. It’s a sign of where spaceflight is headed.
HAL 9000, Revisited
Long before these real-world AI pilots, science fiction had already imagined them. In 2001: A Space Odyssey, the HAL 9000 computer managed nearly every function aboard the Discovery One spacecraft: life support, navigation, mission planning, anomaly detection, even conversation. HAL wasn’t just a chatbot — it was a unified intelligence system capable of perceiving its environment, understanding natural language, reasoning about mission objectives, and making autonomous decisions. It could recognize faces, read lips, interpret human emotion, and act independently, even under uncertain conditions. HAL remained a speculative benchmark for decades, far beyond what real space systems could achieve.

Today, we’re inching closer to fragments of HAL’s capabilities — not in one unified brain, but in specialized modules spread across missions. Neural networks now steer spacecraft in real time, planning systems can autonomously manage science tasks, onboard image classifiers assist in safe landing, and conversational AI has even flown aboard the ISS. But we’re still far from integrating these pieces into a single, learning, interacting spacecraft mind. The technical challenge isn’t just about power or algorithms — it’s about trust, verifiability, and the safe convergence of reasoning, perception, and control. HAL was a cautionary tale. Ironically, its fictional overreach has inspired real engineers to design something just as smart, but safer.
In traditional space missions, human flight controllers monitor telemetry, assess system health, authorize maneuvers, and troubleshoot anomalies — all with strict procedural oversight. They operate as the spacecraft’s brain-by-proxy, interpreting data and issuing commands often minutes or hours after the fact. This human-in-the-loop model is effective for low-Earth orbit, but it doesn’t scale. As missions venture farther from Earth, with longer light-speed communication delays and tighter timelines, human oversight becomes a bottleneck. AI doesn’t just promise cheaper operations; it enables something categorically new — spacecraft that can adapt instantly, act independently, and continue exploring even when Earth is silent. That, more than any single breakthrough, may redefine what space missions can attempt.
From Autopilot to AI Pilot
Spacecraft have flown with digital autopilots for decades — carefully engineered, rule-based systems that rely on fixed models and human supervision. But modern missions are more complex. They demand spacecraft that can handle uncertainty, adapt mid-flight, and make decisions without waiting for Earth’s help. Enter machine learning (ML).
A 2021 review in Acta Astronautica analyzed dozens of ML-based techniques applied across a wide range of spacecraft control problems: from optimizing interplanetary trajectories and synthesizing controllers that stabilize orbital or angular motion, to formation flying and autonomous landing. These methods fall into two broad families, supervised learning and reinforcement learning, each with its own subtypes and specialties.
Supervised learning: Neural networks trained on examples, like optimal trajectories or human commands. These are often used to imitate expert solutions or assist classical control algorithms. Some methods are stochastic, relying on randomized optimization and model selection to tune performance. Others are deterministic, using techniques like Lyapunov theory to ensure system stability during learning.
Reinforcement learning (RL): The AI learns by trial and error, interacting with a simulated environment and optimizing rewards. It has been applied to landing, docking, and autonomous guidance. These RL methods split into direct approaches, which map states to actions outright, and value-based approaches, which evaluate long-term rewards to guide decisions. The structure mirrors traditional control theory’s divide between direct and indirect optimization. A toy contrast between the two families is sketched below.
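To make that split concrete, here is a deliberately tiny, self-contained sketch in Python (the 1-D system, gains, and numbers are all invented for illustration and have nothing to do with any flight code): the supervised branch fits a policy to expert state-action pairs, while the RL branch improves the same policy from a cost signal alone, using a crude random search as a stand-in for trial-and-error learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(gain, steps=50, dt=0.1):
    """Toy 1-D pointing-error system; the policy is a single gain, u = -gain * x."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -gain * x
        x = x + u * dt
        cost += x**2 + 0.1 * u**2      # quadratic penalty on error and control effort
    return cost

# Supervised learning: imitate an "expert" controller from labelled (state, action) pairs.
expert_gain = 2.0                                   # stands in for a tuned classical law
states = rng.uniform(-1.0, 1.0, size=200)
actions = -expert_gain * states                     # expert labels
imitated_gain = -np.polyfit(states, actions, 1)[0]  # least-squares fit recovers the expert

# Reinforcement learning (crudely): trial-and-error search over the same policy class.
rl_gain = 0.5
for _ in range(200):
    candidate = rl_gain + rng.normal(0.0, 0.2)      # perturb the policy
    if rollout_cost(candidate) < rollout_cost(rl_gain):
        rl_gain = candidate                          # keep it only if the rollout improves

print(f"imitated gain ~ {imitated_gain:.2f}, RL-discovered gain ~ {rl_gain:.2f}")
```

In flight software both branches would of course use neural networks and high-fidelity dynamics; the point here is only the difference in training signal, labels versus rewards.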
Since that review, many of these techniques have moved from theory and simulation to hardware and orbit. What was once speculative — neural networks computing thrust or torque onboard in real time — is now appearing in mission flight software.
Learning to Dock, Land, and Maneuver
Autonomous Docking
Docking two spacecraft is an exceptionally precise task — even small errors can cause catastrophic failure. While automated docking is possible in Earth orbit, truly autonomous spacecraft that can plan their own maneuvers remain a goal for the future.
To bridge that gap, researchers have proposed handing over docking control to a transformer-based AI system — the same architecture behind tools like ChatGPT. They call it the Autonomous Rendezvous Transformer (ART). The idea is for spacecraft to run ART onboard and independently compute docking strategies. Though still in early development, ART has shown promising results in simulation. The next step is to test it in a realistic space-like environment, with the long-term goal of deploying it in orbit.
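ART’s internals aren’t public in enough detail to reproduce, but the shape of the idea can be sketched. Below is a hypothetical, untrained stand-in (the 6-element relative state, model dimensions, and the delta-v output head are all assumptions, not ART’s actual design): a transformer encoder reads a short history of chaser-target relative states and proposes the next translational burn.

```python
import torch
import torch.nn as nn

class DockingPolicy(nn.Module):
    """Hypothetical sequence model: recent relative states in, next burn out."""
    def __init__(self, state_dim=6, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)            # lift states into model space
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 3)                     # proposed delta-v (x, y, z)

    def forward(self, history):                               # history: (batch, time, state_dim)
        h = self.encoder(self.embed(history))
        return self.head(h[:, -1])                            # act on the latest encoded state

policy = DockingPolicy()
history = torch.randn(1, 20, 6)     # placeholder: last 20 relative position/velocity samples
print(policy(history))              # -> a (1, 3) tensor, the proposed burn
```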
Planetary Landing (and Ascent)
In August 2023, machine learning quietly crossed a historic threshold. India’s Chandrayaan-3 mission achieved a soft landing in the Moon’s south polar region using a lander equipped with the Chandrayaan-3 Terrain Avoidance System (CATS), a terrain-relative navigation and onboard hazard detection and avoidance system powered by stochastic supervised machine learning (SL). During descent, its camera streams were analyzed in real time to identify boulders and slopes, with a Convolutional Neural Network (CNN) guiding the lander’s decision to shift its touchdown point to a safer zone. The logic was trained on simulated lunar terrain and integrated into the lander’s final descent controller, enabling adaptive behavior at the most critical moment, with no human in the loop.
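ISRO hasn’t published the onboard network at this level of detail, but the flavor of that descent logic can be sketched with an untrained, illustrative model (the architecture, image size, and retargeting rule below are assumptions): a small CNN turns a descent-camera frame into a coarse grid of hazard scores, and the lander retargets toward the least hazardous cell.

```python
import torch
import torch.nn as nn

# Toy CNN: one grayscale descent-camera frame in, a coarse hazard map out.
hazard_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),   # one score per map cell
    nn.Sigmoid(),                      # interpret scores as hazard probabilities
)

frame = torch.rand(1, 1, 128, 128)     # placeholder camera frame (batch, channel, H, W)
hazard_map = hazard_net(frame)[0, 0]   # (32, 32) grid covering the landing area

# Retargeting rule: steer toward the cell with the lowest predicted hazard.
idx = torch.argmin(hazard_map)
row, col = divmod(idx.item(), hazard_map.shape[1])
print(f"safest cell: row {row}, col {col}, hazard score {hazard_map[row, col].item():.2f}")
```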

Just days earlier, Russia’s Luna-25 attempted a similar landing but ended in failure. A guidance software fault triggered an extended engine burn, pushing the lander into an unintended descent path and ultimately causing it to crash into the lunar surface. The system lacked both onboard hazard detection and autonomous correction logic. It was later suggested that an adaptive AI layer — capable of detecting anomalies in engine performance or trajectory drift — could have intervened. The events of August 2023 offered a striking contrast: one lander used machine learning to think on its feet and succeeded, while another relied solely on traditional automation and failed.
RL shines in scenarios with complex, uncertain dynamics, like planetary descent. In simulations, neural-network-guided landers have achieved meter-level touchdown precision on Mars, adapting to sensor noise and engine failures.
NASA's Mars Ascent Vehicle (MAV) — part of the upcoming Mars Sample Return mission — is shaping up to be one of the most ambitious testbeds for AI control. While it hasn’t flown yet, NASA researchers have developed and simulated an online reinforcement learning controller designed to adapt during ascent. As the rocket burns fuel and its mass shifts, the controller adjusts thrust vectoring in real time, outperforming traditional PID systems under off-nominal conditions. The results show promise for robust, self-correcting launch behavior, especially when exposed to uncertain dynamics. Still, this remains in the simulation phase — no RL-controlled ascent has yet occurred in flight.
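NASA’s simulated controller is a reinforcement learner, and its details aren’t reproduced here. As a rough numerical illustration of why adaptation matters during ascent, the toy loop below uses a classical online parameter estimator instead (not NASA’s method; the masses, gains, and rates are invented): the vehicle’s inertia shrinks as propellant burns, and the estimator keeps a simple PD attitude law correctly scaled.

```python
import numpy as np

dt, steps = 0.02, 1500            # 30 s of toy ascent at 50 Hz
theta, omega = 0.2, 0.0           # pitch error [rad] and pitch rate [rad/s]
inertia_est = 5000.0              # onboard estimate of pitch inertia [kg m^2]
lr = 0.2                          # estimator blending factor (invented)

for k in range(steps):
    inertia_true = 5000.0 - 100.0 * (k * dt)     # hypothetical propellant burn-off

    alpha_cmd = -2.0 * theta - 3.0 * omega       # PD law expressed as desired acceleration
    torque = inertia_est * alpha_cmd             # scale by the current inertia estimate

    alpha_meas = torque / inertia_true           # plant responds with the true, shrinking inertia
    omega += alpha_meas * dt
    theta += omega * dt

    # Online adaptation: blend the estimate toward the inertia implied by the measured response.
    if abs(alpha_meas) > 1e-6:
        inertia_est += lr * (torque / alpha_meas - inertia_est)

print(f"final pitch error {theta:+.4f} rad, inertia estimate {inertia_est:.0f} kg m^2")
```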
Attitude and Orbit Control
OPS-SAT (launched December 2019) is an ESA nanosatellite mission designed as an open platform for testing and validating new operational technologies in orbit. It has achieved several AI-related firsts:
First in-orbit training of machine learning models: OPS-SAT successfully trained ML models onboard using real-time sensor data, demonstrating the feasibility of in-situ learning in space.
Deployment of neural networks for anomaly detection: The satellite implemented AI models to detect and respond to anomalies autonomously, enhancing its fault detection, isolation, and recovery (FDIR) capabilities.
Use of generative AI to enhance remote sensing capabilities: OPS-SAT explored the application of generative adversarial networks (GANs) for image enhancement tasks, such as denoising images affected by radiation.
OPS-SAT’s 2023 Deep Active Tracking experiment went a step further, becoming the first publicly validated demonstration of learning-based satellite orientation control in orbit using vision and neural networks. The system processed Earth images onboard and issued torque commands to the reaction wheels in real time.
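The loop structure is simple to sketch, even if the real experiment used a trained neural network where the toy estimator below sits (the brightness-centroid estimator, gains, and torque limit are all invented stand-ins): camera frame in, pointing offset out, clamped reaction-wheel torque command out.

```python
import numpy as np

def estimate_offset(image):
    """Stand-in for the neural network: offset of the bright region from the
    image centre, normalised to [-1, 1] in each axis."""
    h, w = image.shape
    ys, xs = np.indices(image.shape)
    total = image.sum() + 1e-9
    cy, cx = (ys * image).sum() / total, (xs * image).sum() / total
    return np.array([(cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)])

def wheel_torque(offset, rate, kp=0.02, kd=0.1, limit=0.005):
    """PD law on the pointing error, clamped to a small reaction-wheel torque [N m]."""
    torque = -kp * offset - kd * rate
    return np.clip(torque, -limit, limit)

frame = np.zeros((64, 64))             # placeholder Earth image
frame[10:20, 40:50] = 1.0              # a bright patch off-centre
offset = estimate_offset(frame)
print(wheel_torque(offset, rate=np.zeros(2)))
```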

Researchers have also tested hybrid attitude controllers — combining classical PID loops with neural networks that compensate for unmodeled disturbances or fuel slosh. These hybrid designs offer stability with added adaptability.
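The structure of such a hybrid controller is easy to sketch (random placeholder weights stand in for a trained network; the gains and sizes are invented, and the integral term is omitted for brevity): a trusted classical term supplies the baseline torque, and a small neural term adds a learned correction for whatever the model misses.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.1, (8, 2)), np.zeros(8)   # tiny MLP, stands in for a trained net
W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

def nn_correction(state):
    """Learned residual torque; here the weights are untrained placeholders."""
    return (W2 @ np.tanh(W1 @ state + b1) + b2)[0]

def hybrid_torque(angle_err, rate, kp=0.8, kd=1.5):
    baseline = -kp * angle_err - kd * rate                       # trusted classical term
    return baseline + nn_correction(np.array([angle_err, rate])) # plus learned correction

print(hybrid_torque(angle_err=0.1, rate=-0.02))
```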
For large constellations like Starlink, autonomy is already in play, but the role of AI remains unclear. In 2019, a near-miss with ESA’s Aeolus satellite forced a manual maneuver when a Starlink satellite didn’t respond to coordination attempts. SpaceX later noted that Starlink satellites rely on onboard systems to avoid collisions autonomously, drawing on data from the U.S. military’s tracking network. While these systems are often described as “AI-powered,” SpaceX hasn’t confirmed whether machine learning or neural networks are involved.
In 2023, NASA launched “Starling”, a four-CubeSat mission to test autonomous swarm coordination and collision avoidance in orbit, with a follow-on experiment involving SpaceX’s Starlink constellation to trial automated conjunction screening, pointing toward future AI-guided constellations. Until more is disclosed, Starlink’s system should be considered autonomous but not proven to use modern ML in its control loop. The direction, however, is clear: toward constellations that assess, decide, and act with minimal human intervention.
Next-Gen AI Pilots
Today, neural networks are trained on Earth and uploaded pre-flight. But for deep-space missions, where communication delays stretch into minutes or hours, spacecraft will need to learn on their own. NASA’s upcoming Mars Ascent Vehicle may offer the first real demonstration of reinforcement learning guiding a spacecraft in real time.
Meanwhile, neuromorphic computing — inspired by the architecture of the human brain — promises to revolutionize space AI. These chips aim to drastically reduce power consumption and improve onboard learning efficiency, making true autonomy feasible even for small spacecraft.
At the same time, safety remains the core challenge. Techniques like Control Barrier Functions, Lyapunov-based learning, and backup controllers are being developed to ensure AI acts within certifiable bounds. In the near term, most missions will rely on hybrid systems: a trusted classical controller guarantees baseline safety, while an AI module boosts performance or handles complex, uncertain situations. This model is already being tested in attitude control and is expected to become the norm.
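That pattern is straightforward to sketch under assumed numbers (the box check below is a simplified stand-in for a control barrier function; the rate limit, one-step model, and gains are invented): the AI’s proposed torque is accepted only if a one-step prediction stays inside the safe set, and the trusted backup controller takes over otherwise.

```python
RATE_LIMIT = 0.05        # [rad/s] maximum allowed body rate (assumed)
DT = 0.1                 # [s] prediction step
INERTIA = 10.0           # [kg m^2] simple one-axis model used by the filter

def predict_rate(rate, torque):
    return rate + (torque / INERTIA) * DT          # one-step prediction of the body rate

def backup_controller(rate):
    return -2.0 * rate                              # simple, certifiable rate damper

def safety_filter(rate, ai_torque):
    if abs(predict_rate(rate, ai_torque)) <= RATE_LIMIT:
        return ai_torque, "AI"                      # performance path
    return backup_controller(rate), "backup"        # safety path

for proposed in (0.1, 5.0):                         # a gentle and an aggressive AI command
    torque, source = safety_filter(rate=0.04, ai_torque=proposed)
    print(f"AI proposed {proposed:+.2f} N m -> using {source} command {torque:+.2f} N m")
```

The same filtering idea extends from this toy rate limit to richer safe sets, which is part of why hybrid designs look attractive from a certification standpoint.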
Companies like SpaceX and Blue Origin are openly investing in AI-driven autonomy. But they’re not alone — a growing wave of startups is laser-focused on AI for flight control. Curious who’s leading this transformation? We’ll explore the emerging ecosystem in our upcoming article.

Final Notes
From neural networks fine-tuning satellite pointing to RL-based controllers guiding landers to the Moon, AI is becoming a serious copilot in space. And thanks to growing confidence in safe AI, it’s poised to take on more.
With each new experiment — like OPS-SAT’s AI vision tracker or NASA’s self-learning ascent vehicle — the line between autopilot and AI onboard pilot blurs further. We’re entering an era where spacecraft don’t just execute plans. They make them.
Welcome to the era of machine-learning spacecraft. HAL 9000, but helpful.