Introduction
Satellite systems constitute a foundational layer of modern global infrastructure, supporting communications, navigation, financial systems, environmental monitoring, and national security operations. As dependence on these systems intensifies, the complexity of managing them within an increasingly congested and strategically contested orbital environment also grows. In response, satellite operators are incorporating artificial intelligence into operational architectures, marking a transition from conventional control paradigms to data-driven and increasingly autonomous systems. AI is no longer a theoretical construct in space systems; it is actively deployed for onboard data processing, system monitoring, and real-time decision-making in contexts where human response latency is prohibitive.¹⁶ While these developments provide clear operational benefits, they also introduce fundamental challenges, particularly as decision-making authority shifts from human operators to autonomous systems, raising questions regarding control, accountability, and systemic risk.
Current Applications of AI in Satellite Operations
Artificial intelligence is increasingly embedded within satellite systems, most prominently in onboard data processing, where Earth observation platforms generate substantial volumes of data that frequently exceed available downlink capacity. AI models are therefore used to filter, classify, and prioritise data before transmission, improving both efficiency and responsiveness while enabling more targeted use of limited communication resources.¹ This capability has been demonstrated in operational missions such as ESA’s OPS-SAT and KP Labs’ Intuition-1, which have successfully deployed machine learning models in orbit, including neural networks capable of processing hyperspectral imagery in real time.¹³ ²¹
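The prioritisation step described above can be sketched in simplified form. The following is an illustrative sketch only, not a description of any mission's actual software: the scoring values, image sizes, and downlink budget are invented, and a real system would derive scores from an onboard model such as a cloud-cover or feature classifier.

```python
def prioritise_for_downlink(observations, budget_mb):
    """Keep the highest-scoring observations that fit within the downlink budget."""
    ranked = sorted(observations, key=lambda o: o["score"], reverse=True)
    selected, used = [], 0.0
    for obs in ranked:
        if used + obs["size_mb"] <= budget_mb:
            selected.append(obs)
            used += obs["size_mb"]
    return selected

# Hypothetical captures scored by an onboard classifier (figures invented).
captures = [
    {"id": "img-001", "score": 0.92, "size_mb": 40.0},  # e.g. cloud-free scene
    {"id": "img-002", "score": 0.15, "size_mb": 40.0},  # e.g. mostly cloud
    {"id": "img-003", "score": 0.78, "size_mb": 40.0},
]
keep = prioritise_for_downlink(captures, budget_mb=100.0)
```

In this sketch, the low-scoring capture is dropped so that the two most valuable scenes fit within the 100 MB budget, reflecting the filtering-before-transmission pattern the missions above employ.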
In parallel, AI is being applied to satellite health monitoring, where machine learning systems enable anomaly detection, predictive maintenance, and, in certain configurations, the autonomous initiation of corrective actions without requiring immediate ground intervention.² ⁹ These capabilities are particularly valuable in environments characterised by communication latency or intermittent connectivity, where delays can significantly affect operational outcomes. At the same time, the rapid expansion of satellite constellations, particularly in low Earth orbit, has increased the probability of conjunction events, making collision avoidance a central operational concern. AI systems are capable of processing large-scale orbital datasets and generating avoidance manoeuvres with a speed and precision that exceed human capabilities.³ ¹⁶
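A minimal sketch of the anomaly-detection idea, using a simple statistical baseline rather than a trained model: each new telemetry reading is compared against a trailing window, and readings that deviate strongly are flagged. The telemetry values, window size, and threshold here are invented for illustration; operational systems use far more sophisticated learned models.

```python
import statistics

def detect_anomalies(readings, window=10, z_threshold=3.0):
    """Flag readings that deviate strongly from a trailing window's statistics."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical temperature telemetry with a single spike at index 11.
temps = [25.0, 25.1, 24.9, 25.0, 25.2, 24.8, 25.1,
         25.0, 24.9, 25.1, 25.0, 60.0, 25.1]
alerts = detect_anomalies(temps)
```

The spike stands out against the stable baseline and is flagged onboard, which is the precondition for any autonomous corrective action of the kind described above.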
More advanced developments extend beyond individual satellite functionality to coordinated autonomy across constellations, where systems operate as distributed networks rather than isolated assets. Research initiatives such as ConstellAI demonstrate how reinforcement learning can optimise routing, scheduling, and resource allocation across multiple satellites, enabling them to exchange information and adapt behaviour collectively while reducing reliance on centralised control.¹⁶ This shift represents a broader transformation in which satellites are no longer passive infrastructure but active, decision-capable systems embedded within dynamic operational ecosystems.
Operational Advantages and Associated Trade-offs
The primary advantage of integrating artificial intelligence into satellite operations lies in its capacity for rapid data processing and decision execution, allowing autonomous systems to identify patterns and respond to environmental changes on timescales that are inaccessible to human operators. This capability is particularly valuable in time-sensitive scenarios, where delays in decision-making could lead to degraded performance or increased operational risk. However, the same characteristics that enable speed also introduce significant trade-offs, as increasing autonomy reduces the transparency of decision-making processes and limits the ability of human operators to fully interpret, supervise, and intervene in system behaviour.
This reduction in transparency is especially consequential in shared orbital environments, where the actions of one system may have cascading effects on others, amplifying the potential impact of incorrect or poorly understood decisions. Unlike deterministic rule-based systems, AI models operate probabilistically, deriving outputs from statistical inference rather than explicit reasoning, which introduces the possibility of unpredictable behaviour when systems encounter novel or adversarial conditions.¹⁰ As a result, the central issue is not whether AI can make decisions in orbit, but how much decision-making authority can be safely delegated without compromising reliability, predictability, and broader system stability.
Misinterpretation and Attribution Challenges
Attribution remains a persistent challenge in both cyber and space domains, referring to the ability to determine the origin and intent of a given action. In orbital systems, this challenge is compounded by reliance on signals that are inherently vulnerable to disruption, spoofing, and environmental distortion, creating conditions in which data may be incomplete, ambiguous, or misleading. AI systems tasked with anomaly detection must therefore interpret this imperfect information, introducing a risk of misclassification in which benign anomalies are interpreted as malicious interference or genuine threats are overlooked.
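The misclassification risk described above is, at root, a threshold-setting problem. The following sketch, with invented scores and labels, illustrates the trade-off: a lenient detection threshold flags benign anomalies as interference (false alarms), while a strict one overlooks genuine threats (misses).

```python
def classify(scores, threshold):
    """Label each anomaly score as suspected interference if it meets the threshold."""
    return [s >= threshold for s in scores]

def error_counts(predictions, labels):
    """Count false alarms (benign flagged) and misses (genuine threats overlooked)."""
    false_alarms = sum(p and not l for p, l in zip(predictions, labels))
    misses = sum(l and not p for p, l in zip(predictions, labels))
    return false_alarms, misses

# Hypothetical anomaly scores with ground truth (True = genuine interference).
scores = [0.2, 0.4, 0.55, 0.6, 0.9]
labels = [False, False, True, False, True]

lenient = error_counts(classify(scores, 0.5), labels)  # flags more: 1 false alarm
strict = error_counts(classify(scores, 0.7), labels)   # flags less: 1 miss
```

Neither threshold eliminates error; the choice only shifts it between false alarms and misses, which is why downstream actions taken on such classifications carry the geopolitical sensitivity discussed above.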
Emerging missions, including NASA’s autonomous Earth observation experiments and dynamic targeting systems, demonstrate how satellites can independently prioritise observations based on onboard analysis, reducing reliance on ground-based decision-making and enabling more responsive operations.¹⁴ ²² While these capabilities enhance efficiency, they also increase the risk that incorrect interpretations may lead to unintended actions, particularly in scenarios where systems are authorised to act without human validation. In a shared orbital environment, such actions may be externally observed and interpreted as deliberate or hostile, regardless of their origin in algorithmic processes, highlighting the intersection of technical uncertainty and geopolitical sensitivity.
Security Implications and System Vulnerabilities
The integration of AI into satellite systems expands the operational attack surface, as machine learning models themselves become potential targets for adversarial manipulation. Techniques such as data poisoning and adversarial inputs can subtly influence how models interpret information, leading to degraded performance or altered behaviour that may not be immediately detectable.⁴ Research into AI-enabled satellite architectures indicates that compromised models may not only generate incorrect outputs but may also act upon them autonomously, amplifying the potential consequences of such vulnerabilities.⁸ ¹⁸
This challenge is further compounded by the constraints of the space environment, where updating or retraining models is significantly more complex than in terrestrial systems, particularly at scale. Hardware limitations, including reliance on specialised accelerators and radiation-tolerant processors, impose additional constraints on model flexibility and performance.¹¹ ¹⁷ As a result, resilience, validation, and secure system design must be prioritised during development, ensuring that AI systems maintain integrity under both nominal and adversarial conditions and that potential failure modes are anticipated and mitigated.
Human Oversight and Responsibility
Despite increasing autonomy, human oversight remains a critical component of satellite system governance, although its role is evolving in response to technological change. Operational models range from human-in-the-loop systems, where decisions require explicit approval, to human-on-the-loop systems, where autonomous actions are monitored and can be overridden, and ultimately to fully autonomous systems that remove real-time human input altogether. Industry analyses indicate a growing shift towards AI-enabled mission operations, in which satellites independently manage tasking, data processing, and system optimisation.¹⁹
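The three oversight models can be expressed as a simple authorisation gate. This is a conceptual sketch of the distinction drawn above, not any operator's actual control logic; the function names and parameters are invented for illustration.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "in"    # action waits for explicit approval
    HUMAN_ON_THE_LOOP = "on"    # action proceeds unless vetoed in time
    FULLY_AUTONOMOUS = "auto"   # action executes without real-time input

def authorise(mode, approved=False, vetoed=False):
    """Decide whether a proposed autonomous action may execute under a given mode."""
    if mode is Oversight.HUMAN_IN_THE_LOOP:
        return approved              # nothing happens without sign-off
    if mode is Oversight.HUMAN_ON_THE_LOOP:
        return not vetoed            # proceeds unless an operator intervenes
    return True                      # fully autonomous: no real-time gate

# A proposed manoeuvre under each model:
in_loop = authorise(Oversight.HUMAN_IN_THE_LOOP)            # blocked by default
on_loop = authorise(Oversight.HUMAN_ON_THE_LOOP)            # proceeds by default
autonomous = authorise(Oversight.FULLY_AUTONOMOUS)          # always proceeds
```

The sketch makes the governance point concrete: as the mode shifts rightward, the default flips from "blocked unless approved" to "executed unless stopped" to "executed regardless", which is precisely where the accountability questions discussed below arise.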
However, this shift complicates the allocation of responsibility, particularly in cases where autonomous systems produce harmful or unintended outcomes. Determining accountability in such scenarios remains unresolved, as responsibility may be distributed across developers, operators, and deploying organisations. Existing legal frameworks are not yet sufficiently developed to address these complexities, particularly in international and multi-operator contexts where jurisdictional boundaries and governance structures are unclear.
Governance and Regulatory Developments
Regulatory frameworks are beginning to address the implications of artificial intelligence in high-impact systems, with the European Union's AI regulation emphasising principles of transparency, accountability, and risk management.⁵ At the same time, institutional efforts by organisations such as the European Space Agency and the Organisation for Economic Co-operation and Development increasingly recognise the role of AI in space sustainability and orbital traffic management, highlighting the need for coordinated governance approaches.³ ¹² However, governance remains fragmented, and no comprehensive international framework currently exists for AI-driven satellite operations, creating a structural gap between technological capability and regulatory oversight that must be addressed through both policy development and technical design.
Bridging this gap will require more than formal regulation alone. It will also require technical architectures that prioritise explainability, traceability, auditability, and controllability from the design stage. For satellite operators, this means developing systems that allow autonomous decisions to be reconstructed and assessed after execution, particularly where those decisions affect manoeuvring, signal behaviour, data prioritisation, or defensive responses. For regulators and international bodies, it means recognising that AI-enabled autonomy is not simply a software governance issue, but part of the wider challenge of maintaining safety, security, and stability in the orbital environment.
Concluding Observations
Artificial intelligence is fundamentally reshaping satellite operations by enabling faster decision-making, optimised data utilisation, and increased autonomy in environments where human intervention is constrained, while simultaneously introducing new forms of risk related to misinterpretation, system vulnerability, and reduced transparency. The integration of AI into orbital systems is no longer optional but an ongoing transformation that is redefining how space infrastructure functions at both technical and strategic levels.
The central challenge lies in ensuring that this transformation is managed responsibly, balancing the operational advantages of autonomy with the need for control, accountability, and stability in an increasingly complex orbital environment. As satellite networks continue to expand, the decisions executed in orbit, whether by humans or autonomous systems, will carry growing significance, making it essential that these decisions are reliable, interpretable, and supported by governance frameworks capable of keeping pace with technological change.
Footnotes
1. National Aeronautics and Space Administration, Onboard Artificial Intelligence for Earth Observation Missions (2022).
2. European Space Agency, Artificial Intelligence for Spacecraft Operations (2023).
3. Organisation for Economic Co-operation and Development, Space Sustainability and Orbital Traffic Management (2022).
4. Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, ‘Making Machine Learning Robust Against Adversarial Inputs’ (2018) Communications of the ACM.
5. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence.
6. Marcus Gardill and others, ‘Towards Space Edge Computing and Onboard Artificial Intelligence for Real-Time Teleoperations’ (IEEE LEO SatS Workshop, 2023).
7. Luis M Garcés-Socarrás and others, ‘Artificial Intelligence Satellite Telecommunication Testbed using Commercial Off-The-Shelf Chipsets’ (arXiv, 2024).
8. Michael J Veyette and others, ‘AI/ML for Mission Processing Onboard Satellites’ (AIAA, Lockheed Martin, 2022).
9. Eduardo Gizzi and others, ‘The Onboard Artificial Intelligence Research Platform’ (NASA, 2024).
10. ‘Artificial Intelligence for Trusted Autonomous Satellite Operations’ (2023) Acta Astronautica.
11. ‘Onboard Processing in Satellite Communications Using AI Accelerators’ (2023) Aerospace.
12. European Space Agency, Artificial Intelligence in Space.
13. European Space Agency Φ-lab, OPS-SAT mission documentation.
14. NASA Jet Propulsion Laboratory, Autonomous Sciencecraft and EO-1 mission reports.
15. NASA Goddard Space Flight Center, OnAIR Platform Documentation (2024).
16. G F Stock and others, ‘On the Role of AI in Managing Satellite Constellations: Insights from the ConstellAI Project’ (arXiv, 2025).
17. V Leon and others, ‘Towards Employing FPGA and ASIP Acceleration to Enable Onboard AI/ML in Space Applications’ (arXiv, 2025).
18. ‘Toward Trusted Onboard AI: Advancing Small Satellite Operations using Reinforcement Learning’ (arXiv, 2025).
19. David Hodes, ‘AI-Enabled Autonomous Mission Operations: Redefining How Satellites Work in Orbit’ (Kratos, 2026).
20. Organisation for Economic Co-operation and Development, additional policy materials on orbital automation and sustainability.
21. KP Labs, Intuition-1 mission documentation.
22. NASA, CogniSAT-6 and autonomous targeting experiments (2024).

