1. Introduction: Beyond the “Ungoverned Space”
For too long, the global narrative regarding Artificial Intelligence (AI) governance has treated Africa as a passive observer: a “data mine” for foreign corporations or a dumping ground for untested technologies. This perspective is not only outdated but demonstrably false. As we move through early 2026, the continent is actively constructing a distinct jurisprudential architecture. We are witnessing the birth of the African “Algorithmic State”: a governance model defined by a sharp tension between the hunger for developmental “leapfrogging” and the urgent, almost existential necessity to erect sovereign defences against digital extractivism.
This report, the second in our series on emerging technologies, moves beyond high-level rhetoric to examine the gritty reality of black-letter law, policy drafts and institutional manoeuvres. It posits that African policymakers are no longer waiting for permission or consensus from the Global North. Instead, they are engaging in a sophisticated game of regulatory catch-up, attempting to legislate for technologies that are moving faster than their own parliaments.
The landscape that emerges is one of a “multi-speed” Africa. Whilst the African Union (AU) strives to harmonise the continent’s fragmented digital markets through the Continental Artificial Intelligence Strategy and the newly operational Africa AI Council, the real power remains stubbornly national. A vanguard of nations – Egypt, Nigeria, Kenya, South Africa and Rwanda – has moved from soft policy papers to hard legislative instruments. These nations are grappling with questions that Western regulators are also facing, but they are doing so under vastly different material conditions. How do you regulate high-risk algorithms when you lack a surplus of data scientists? How do you demand data sovereignty when your government’s data sits on servers in Ireland or Virginia?
2. The Continental Architecture: Normative Ambitions vs. Reality
The governance of AI in Africa operates within a complex hierarchy of norms, stretching from continental aspirations down to regional directives and national enforcement. At the apex stands the African Union, which has sought to transform the continent from a consumer of imported intelligence into a unified geopolitical bloc.
2.1 The African Union: From Strategy to Institutionalisation
The endorsement of the Continental Artificial Intelligence Strategy in July 2024 was a watershed moment, shifting the AU’s stance from reactive adaptation to proactive agenda-setting. By November 2025, this strategy moved from paper to practice with the operationalisation of the Africa AI Council.
This Council is not merely a bureaucratic addition; it represents a strategic attempt to solve the “collective action problem”. Individually, African markets are too small to dictate terms to global tech giants like Google or OpenAI. Collectively, however, a market of 1.4 billion people has leverage. The Strategy is predicated on a “Development-First” paradigm, distinguishing it from the risk-centric approach of the European Union. It identifies the primary risk of AI in Africa not as “rogue superintelligence”, but as missed opportunity – the risk that Africa will remain a digital colony, exporting raw data and importing finished algorithmic products.
Critically, the Strategy is backed by the Africa Declaration on Artificial Intelligence, which solidified the commitment of member states to “Data Sovereignty”. It proposes a $60 billion “Africa AI Fund” to finance the infrastructure gap, acknowledging that you cannot regulate what you do not own.
2.2 Regional Economic Communities (RECs) as Laboratories
Below the AU, the Regional Economic Communities (RECs) are acting as regulatory laboratories.
ECOWAS (West Africa): The Economic Community of West African States continues to be the legislative pacesetter. Following a high-level dialogue in Lagos in January 2026, the bloc is finalising a “Regional AI Governance Framework”. This builds on its existing binding instrument, the Supplementary Act on Personal Data Protection, and aims to create a harmonised legal environment that prevents “regulatory arbitrage”, whereby companies set up shop in the country with the weakest laws.
SADC (Southern Africa): In the south, SADC is finalising its regional digital transformation strategy. Unlike the binding nature of ECOWAS instruments, SADC has historically relied on “Model Laws” which member states can voluntarily adopt. The new “SADC AI Strategy” focuses heavily on industrialisation and infrastructure sharing, viewing AI primarily as a tool for economic modernisation rather than a subject of rights-based regulation.
3. National Deep Dives: The “Big Four” and The Challengers
The operational reality of AI governance is defined not by continental summits, but by the legislative battles occurring in national capitals. The period between 2025 and early 2026 has been defined by a dramatic divergence in approaches between East and West Africa.
3.1 Kenya: The Strategic Pivot to State-Led Regulation
Kenya, often dubbed the “Silicon Savannah”, offers the most fascinating case study of regulatory evolution. Throughout 2023 and 2024, the conversation was dominated by the Kenya Robotics and Artificial Intelligence Society Bill, a private member’s bill that sought to establish a professional guild to self-regulate the industry.
However, on 3 December 2025, the narrative shifted abruptly. The Public Petitions Committee of the National Assembly rejected this Bill. The rejection was not merely procedural; it was ideological. The Committee, backed by the Ministry of Information, Communications and the Digital Economy (MoICDE), argued that vesting regulatory power in a private society created an inherent conflict of interest. How can a body designed to promote the interests of AI developers simultaneously police them for public safety? Furthermore, the proposed licensing regime was viewed as a bottleneck that would stifle the grassroots innovation that defines the Kenyan tech ecosystem.
The New Government-Led Roadmap (2026): Following this rejection, the Kenyan government initiated a “Strategic Roadmap” to develop comprehensive, state-led regulations by June 2026. This new framework represents a shift to a “risk-based” model, mirroring the EU AI Act but adapted for the local context.
The Council for Emerging Technologies: Instead of a private guild, a state body will now advise the Cabinet Secretary. This Council is tasked with overseeing regulatory sandboxes – controlled environments where startups can test products without the full burden of compliance.
Risk-Based Classification: The upcoming Kenya Artificial Intelligence and Robotics Regulations, 2026 will categorise AI systems into four tiers: Prohibited (e.g., social scoring), High-Risk (e.g., credit scoring, judicial AI), Limited Risk (e.g., chatbots), and Minimal Risk. A brief illustrative encoding of this tiering appears at the end of this subsection.
Institutional Triad: Governance will be split between three bodies: the Council (technical standards), the Office of the Data Protection Commissioner (ODPC) (privacy), and the new National Cyber Space Agency (NCSA) (security). This “triad” approach aims to cover the multifaceted nature of AI, ensuring that a security flaw in a banking algorithm is treated differently from a privacy breach in a health app.
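To make the proposed tiering concrete, the sketch below shows, in Python, one way a compliance team might encode the four tiers. It is a minimal illustration only: the tier names follow the draft roadmap as described above, but the specific use-case mapping, default behaviour and function names are hypothetical assumptions, not the text of the forthcoming Regulations.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g., social scoring
    HIGH_RISK = "high-risk"        # e.g., credit scoring, judicial AI
    LIMITED_RISK = "limited-risk"  # e.g., chatbots (transparency duties)
    MINIMAL_RISK = "minimal-risk"  # everything else


# Hypothetical, illustrative mapping of use cases to tiers; the actual
# classification will be set out in the 2026 Regulations themselves.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH_RISK,
    "judicial_decision_support": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.LIMITED_RISK,
    "spam_filter": RiskTier.MINIMAL_RISK,
}


def classify(use_case: str) -> RiskTier:
    """Return the tier for a use case.

    Defaulting unlisted cases to minimal risk is an illustrative shortcut;
    in practice the Regulations, not a default rule, would govern them.
    """
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL_RISK)


if __name__ == "__main__":
    for case in ("credit_scoring", "customer_service_chatbot"):
        print(case, "->", classify(case).value)
```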
3.2 Nigeria: The “Digital Leviathan” Awakens
If Kenya is opting for a “sandbox” approach, Nigeria is building a fortress. As of February 2026, the National Digital Economy and E-Governance Bill is in its final legislative stages, poised for presidential assent.
Nigeria’s approach is characterised by its punitive weight. The Bill proposes fines of up to ₦10 million or 2 per cent of annual turnover for non-compliance. This is not symbolic; it is designed to bite. The legislation mandates that Data Controllers and Processors of Major Importance (DCPMIs) – a category that captures almost every significant AI deployer – must register with the Nigeria Data Protection Commission (NDPC).
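A quick worked example shows why the percentage prong is the one that bites for large deployers. The sketch below assumes, following the penalty pattern in the Nigeria Data Protection Act, that the applicable cap is the greater of the fixed sum and 2 per cent of annual turnover; both the “greater of” reading and the figures are illustrative assumptions, not the final text of the Bill.

```python
NAIRA_FLOOR = 10_000_000  # fixed cap of ₦10 million (assumed, for illustration)


def max_fine(annual_turnover_naira: float) -> float:
    """Illustrative cap: the greater of the fixed sum and 2% of turnover."""
    return max(NAIRA_FLOOR, 0.02 * annual_turnover_naira)


# For any turnover above roughly ₦500 million, the 2% prong dominates.
for turnover in (100e6, 500e6, 5e9):
    print(f"turnover ₦{turnover:,.0f} -> maximum fine ₦{max_fine(turnover):,.0f}")
```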
The NDPC has proven it is not a paper tiger. In 2025, it levied significant fines on major entities for data breaches, signalling a willingness to enforce the law aggressively. The new Bill reinforces this by empowering regulators to suspend AI systems deemed unsafe, effectively treating high-risk algorithms as controlled substances. This “command and control” model reflects a state determined to assert its sovereignty over the digital realm, ensuring that multinational tech companies cannot operate with impunity.
3.3 South Africa: The “Year of Execution”
South Africa has taken a pragmatic, perhaps more cautious, path. Explicitly opting against a standalone AI Act for the time being, the government has adopted a sector-specific approach utilizing existing legal instruments.
POPIA as the Shield: The Protection of Personal Information Act (POPIA) remains the primary mechanism for governance. The Information Regulator has asserted that Section 71, which governs automated decision-making, fully applies to modern AI systems.
The “Year of Execution”: The business sector has dubbed 2026 the “Year of Execution”. Rather than waiting for new parliamentary legislation, regulators are using soft law and guidance notes to manage AI risks. In its 2025/2026 Annual Performance Plan, the Information Regulator announced a “Security Compromise Reporting Portal” to track algorithmic data breaches, moving from theoretical oversight to active monitoring.
3.4 Egypt: Infrastructural Sovereignty
Egypt occupies a unique position, bridging African and Arab policy spheres. Its focus is relentlessly on “Infrastructural Sovereignty”. The second edition of its National AI Strategy (2025–2030) emphasizes the Cloud First Policy, which mandates that government data must be hosted locally. Egypt understands that in the age of AI, compute power is geopolitical power. Consequently, it is investing in a “National Foundational Model” to ensure Arabic language capability, viewing reliance on Western Large Language Models (LLMs) as a strategic vulnerability that threatens cultural erasure.
4. The Nexus of Data Protection and AI Governance
In the absence of dedicated AI statutes in most nations, Data Protection Laws (DPLs) function as the “Law of the Horse” for AI. This reliance creates a unique legal landscape where the regulation of the algorithm is achieved almost exclusively through the regulation of its fuel (data).
4.1 The “Black Box” vs. Due Process
The primary legal mechanism for challenging AI decisions remains the restriction on Automated Decision Making (ADM).
South Africa: Section 71 of POPIA prohibits decisions based solely on automated processing that have legal consequences. However, legal scholars have long warned of the “rubber stamp” loophole – if a human analyst simply clicks “approve” on an AI’s recommendation without understanding it, is the decision still “solely” automated?
Kenya: Section 35 of the Data Protection Act grants a right not to be subject to ADM. The ODPC has actively used this to regulate digital credit providers who use AI for loan issuance, effectively becoming the country’s de facto algorithmic auditor.
A critical tension exists here between legal rights and technical reality. Laws in Rwanda and South Africa require data controllers to provide sufficient information about the underlying logic of an automated decision. Yet modern Deep Learning models are inherently opaque “black boxes”. A neural network cannot articulate in plain English why it denied a loan. This creates a looming litigation frontier: can a right to an explanation ever be satisfied if the technology itself is unexplainable?
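The point can be made concrete with a deliberately simple, fully interpretable scoring model. The sketch below (all feature names, weights and the approval threshold are hypothetical) shows the kind of per-feature account of “the underlying logic” that an interpretable model can supply on denial; a deep neural network dropped in as a replacement would produce the same approve/deny output, but no comparably faithful decomposition without additional post-hoc tooling.

```python
# A deliberately simple, interpretable credit-scoring model: a weighted sum
# of applicant features. Names, weights and threshold are hypothetical.
WEIGHTS = {
    "monthly_income_kes": 0.004,
    "existing_loans": -1.5,
    "missed_repayments": -2.0,
    "years_with_employer": 0.8,
}
THRESHOLD = 5.0  # score required for approval


def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant.get(f, 0.0) for f in WEIGHTS)


def explain_denial(applicant: dict) -> list[str]:
    """Attribute the outcome to individual features - the kind of account of
    the decision's logic a data subject could meaningfully contest."""
    contributions = {f: WEIGHTS[f] * applicant.get(f, 0.0) for f in WEIGHTS}
    worst = sorted(contributions.items(), key=lambda kv: kv[1])  # most negative first
    return [f"{feature} contributed {value:+.2f} to the score"
            for feature, value in worst]


applicant = {"monthly_income_kes": 800, "existing_loans": 2,
             "missed_repayments": 1, "years_with_employer": 1}

if score(applicant) < THRESHOLD:
    for line in explain_denial(applicant):
        print(line)
# A deep model swapped in for score() would yield the same approve/deny
# output but no equally faithful per-feature account of *why* - hence the
# tension with statutory rights to information about the underlying logic.
```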
5. Normative Themes: Sovereignty, Ethics and the Colonial Shadow
Beyond the statutes, the African discourse is shaped by deep normative themes that do not always appear in Western policy papers.
5.1 Digital Colonialism and Data Sovereignty
The critique of Digital Colonialism is not academic in Africa; it is visceral. There is a consensus amongst policymakers that the current global AI ecosystem replicates colonial extraction patterns: African data is harvested as raw material, processed in Western data centres, and sold back to the continent as finished intelligence products.
Policy Response: This anxiety drives the push for Data Sovereignty. It explains why Nigeria and Egypt are obsessed with data localization and why Kenya is establishing a National AI Centre of Excellence. The goal is to transition Africa from a data producer to an intelligence proprietor.
5.2 Responsible Innovation vs. Precautionary Regulation
A clear divergence is visible between the “Precautionary Approach” (EU style) and “Permissionless Innovation” (US style).
Kenya’s Pivot: By rejecting the restrictive Society Bill in favour of Regulatory Sandboxes, Kenya is attempting to strike a middle ground: allowing innovation under supervision. It acknowledges that burdensome licensing could kill the very startups it hopes will save the economy.
Nigeria’s Hard Line: Conversely, Nigeria’s high fines and mandatory registration suggest a lean towards a stricter, enforcement-heavy model. This reflects a state that views unregulated tech as a potential threat to stability as much as an economic engine.
6. Conclusion: The Road to 2030
The review of AI policies across the African continent depicts a region in rapid, somewhat chaotic, transition. The narrative of “ungoverned spaces” is false; Africa is constructing a complex web of AI governance, anchored in Data Protection laws and driven by a strong, almost defiant, desire for sovereign control.
Key Observations for the Future:
1. The Death of Self-Regulation: The failure of the Kenya Robotics Society Bill signals that African governments are not interested in letting the industry mark its own homework. The state is reasserting itself as the primary arbiter of algorithmic safety.
2. Institutions over Paper: We are moving from the era of “National Strategies” (PDFs on websites) to the era of “Agencies” (Councils, Commissions and Directorates) with statutory teeth and budget lines.
3. The Infrastructure Hard Wall: The political will for African AI is high, but the physics of the internet remains a barrier. Without massive investment in local data centres and reliable energy, regulatory sovereignty will remain constrained. You cannot effectively regulate a cloud that floats entirely outside your borders.
As we move towards the 2030 horizon of the AU Strategy, the success of the African “Algorithmic State” will not depend on the elegance of its legal drafting, but on its ability to build the physical and human infrastructure necessary to enforce its will.
Bibliography
Primary Sources: Legislation and Policy Documents
1. African Union, Continental Artificial Intelligence Strategy (AU 2024).
2. Data Protection Act 2019 (Kenya).
3. Department of Communications and Digital Technologies (South Africa), Artificial Intelligence Policy Framework (DCDT 2024).
4. Information Regulator (South Africa), Annual Performance Plan 2025/2026 (Information Regulator 2025).
5. Kenya Robotics and Artificial Intelligence Society Bill 2023 (Kenya Gazette Supplement No 214).
6. Law No 058/2021 relating to the Protection of Personal Data and Privacy (Rwanda).
7. Ministry of Communications and Information Technology (Egypt), National Artificial Intelligence Strategy: Second Edition 2025-2030 (MCIT 2025).
8. Ministry of Information, Communications and the Digital Economy (Kenya), Cabinet Brief on the Review of the Proposed Kenya Robotics and Artificial Intelligence Society Bill and the Strategic Roadmap for Comprehensive AI Regulations (MICDE, January 2026).
9. Ministry of Information, Communications and the Digital Economy (Kenya), National Artificial Intelligence Strategy 2025-2030 (MICDE 2025).
10. National Digital Economy and E-Governance Bill 2025 (Nigeria).
11. Nigeria Data Protection Act 2023 (Nigeria).
12. Protection of Personal Information Act 4 of 2013 (South Africa).
13. Republic of Kenya, Report on Public Petition No. 75 of 2023 regarding the enactment of the Kenya Robotics and Artificial Intelligence Society Bill (Public Petitions Committee, 3 December 2025).
14. SADC, Regional Indicative Strategic Development Plan (RISDP) 2020-2030 (SADC Secretariat 2020).
15. Supplementary Act A/SA.1/01/10 on Personal Data Protection within ECOWAS (ECOWAS 2010).
Secondary Sources: Books, Articles and Online Reports
16. African Union, ‘African Union Champions AI for Africa’s Socioeconomic Transformation’ (Press Release, 4 April 2025).
17. AI Impact, ‘2026: The Year of AI Execution for South African Businesses’ (January 2026) https://ai-impact.co.za/2026-the-year-of-ai-execution-for-south-african-businesses/ accessed 8 February 2026.
18. Birhane A, ‘The Algorithmic Colonization of Africa’ (2020) 17 SCRIPTed 389.
19. ECOWAS, ‘ECOWAS Adopts Digital Roadmap’ (24 January 2026) https://prnigeria.com/2026/01/24/ecowas-adopts-digital-roadmap/ accessed 8 February 2026.
20. Michalsons, ‘The South African AI Act: Latest Developments’ (February 2026) https://www.michalsons.com/blog/the-south-african-ai-act-latest-developments/80547 accessed 8 February 2026.
21. National Assembly of Nigeria, ‘Digital Economy Bill Poised for Presidential Assent’ (10 November 2025) https://naltf.gov.ng/digital-economy-bill-poised-for-presidential-assent/ accessed 8 February 2026.
22. Nigeria Data Protection Commission, ‘NDPC Investigations into Data Breaches’ (Press Release, August 2025).
23. Oxford Insights, ‘Building Egypt’s AI Future: Capacity-Building, Compute Infrastructure, and Domestic LLMs’ (2025).
24. Smart Africa, ‘The Smart Africa Board Unveils the Inaugural Africa AI Council’ (17 November 2025) https://smartafrica.org/the-smart-africas-board-unveils-the-inaugural-africa-ai-council-to-lead-the-continents-ai-transformation/ accessed 8 February 2026.
25. TechPoint Africa, ‘Nigeria Advances Landmark Digital Economy and e-Governance Bill’ (14 November 2025).

