2024 was supposed to be the year AI broke democratic politics. Over four billion people voted across more than sixty countries. Security agencies, researchers, and commentators had spent two years warning of deepfakes at scale, synthetic influence operations, and AI-generated content that would render meaningful electoral deliberation impossible. None of it materialised at anything close to the forecast level.
By the end of 2025, a second cycle of elections had concluded across Germany, Canada, Australia, and elsewhere. The picture shifted, but only partially. The dramatic AI-enabled democratic catastrophe remained absent. The conclusion drawn by many was that the threat had been overstated. That conclusion is wrong, and the reasoning that leads to it is precisely the kind of misreading that leaves democracies unprepared.
What the evidence actually shows
The Alan Turing Institute's Centre for Emerging Technology and Security studied over one hundred elections held between 2023 and 2024 and found measurable AI interference in only nineteen, with no evidence of meaningful change to outcomes in any of them (CETaS, 2024). Meta reported that AI-generated influence operations had only modest impact on its platforms, characterising the productivity gains available to malicious actors through generative AI as incremental rather than transformative (Al Jazeera, 2024a). The R Street Institute found that AI deepfakes played a minor role in the US cycle, and that public confidence in election administration rebounded strongly, reaching 88 percent by the end of 2024 compared to 59 percent in 2020 (R Street Institute, 2025).
Three explanations account for this. First, decades of research on mass persuasion were simply correct: high-stakes electoral contests involve too many competing variables for any single information intervention to be decisive (MIT Technology Review, 2024). Second, the AI Elections Accord, signed by twenty-seven major technology companies at the Munich Security Conference in 2024, created a co-ordinated detection and response infrastructure that raised the cost of misuse meaningfully during that cycle. Third, and most instructively, traditional methods proved more effective than AI-enhanced ones. Much of the most damaging electoral misinformation in 2024 was not AI-generated at all. It was conventional manipulation: cheaply edited authentic content, false statements from figures with large organic followings, and platform-amplified polarisation that required no technical sophistication whatsoever.
2025 offered a partial correction. The German federal election saw a co-ordinated influence operation, later attributed to Russian-aligned actors, deploy deepfakes alongside a secondary amplification network of over six thousand accounts, with engagement so precisely synchronised that it left no analytical ambiguity about its artificial origin (Institute for Strategic Dialogue, 2025). In Romania, the 2024 presidential election result had already been annulled after evidence emerged of AI-powered foreign interference through manipulated video content. In Canada, AI was deployed to impersonate media outlets for financial fraud ahead of the spring 2025 federal election. These were not near-misses that almost broke democracy. But they represented a direction of travel rather than a steady state, and the conditions that limited their impact in those cycles have since been deliberately removed.
What has changed since
The AI Elections Accord was not renewed after 2024. The infrastructure of voluntary co-ordination that provided partial protection during that cycle no longer exists (CDT, 2025). Meta dismantled its third-party fact-checking programme in January 2025, replacing it with community notes and reinstating political content on previously restricted topics. The House of Lords Communications and Digital Committee had already identified platform incentive structures as a core governance failure, finding that commercial incentives do not reliably align with epistemic safety (Parliament, House of Lords, 2024). The removal of moderation infrastructure makes deploying AI-enabled influence operations both cheaper and less risky.
In the United States, President Biden's 2023 Executive Order on AI safety was revoked in January 2025. The State Department's counter-disinformation office was closed. The White House released a national AI framework in March 2026 that urges Congress to preempt state laws, avoid creating new regulatory bodies, and shield AI developers from liability (Tech Policy Press, 2026). The cumulative effect is to dismantle the protective architecture built over the preceding three years just as the threat grows more, not less, sophisticated.
The European Union's AI Act, the most substantive attempt to create binding obligations for AI systems operating in democratic contexts, entered into force in August 2024 but has since been weakened under sustained pressure from US technology companies and the Trump administration. The Code of Practice for general-purpose AI models initially classified large-scale disinformation as a systemic risk requiring mandatory assessment. By its third draft, those protections had been reclassified as matters companies could choose to address or not (Fortune, 2025). European legislators warned explicitly that this created conditions in which AI providers with more extreme political positions could influence elections without adequate accountability (Fortune, 2025). The Commissioner confirmed in June 2025 that safeguards could be diluted further before implementation in 2026 (EU AI Act Newsletter, 2025). The protective conditions of 2024 were imperfect. And now they no longer exist.
The problem that did not exist in 2024
The risks framed before 2024 were almost entirely about generative AI as a content production tool: deepfakes, synthetic personas, personalised messaging at scale. The emergence of autonomous AI agents represents something different in kind. Agents do not wait to be told what to produce. They execute tasks, adapt to feedback, pursue objectives, and co-ordinate with other agents without continuous human oversight (Bengio et al., 2023).
Menczer and Yang (2026) describe what this means in practice: organisations with malicious intent can now deploy large numbers of autonomous, adaptive, co-ordinated agents across multiple social media platforms simultaneously. Unlike scripted bots, whose co-ordination patterns were eventually detectable, AI agent swarms generate varied, credible content that resists identification by both machine classifiers and human reviewers. The Carnegie Endowment for International Peace characterised the arrival of autonomous agents in September 2025 as occurring at a moment of acute vulnerability for liberal democratic orders, noting that governance frameworks had not been designed with this capability in mind (Carnegie Endowment, 2025).
The commercial infrastructure supporting this is already being built. Reporting in 2025 documented the emergence of tools explicitly designed to orchestrate actions across thousands of social accounts, marketed as communications platforms. Documents from the Vanderbilt Institute of National Security described a system built around data harvesting, psychological profiling, and AI personas designed for large-scale influence operations (Menczer and Yang, 2026). The architecture for manufacturing synthetic consensus at scale is being productised.
There is a further dimension that did not feature in pre-2024 analysis at all. A Washington Post investigation in 2025 identified a Russian-linked network of content farms that produced more than 3.6 million disinformation pieces in 2024, designed specifically to contaminate the training data of large language models, a technique described as LLM grooming. A subsequent NewsGuard study found that ten major AI chatbots reproduced falsehoods from these sources approximately one third of the time (Policy Options, 2025). As citizens increasingly consult AI systems as sources of political information, contamination at the training-data layer constitutes a form of democratic manipulation that operates before the conversation begins, not during it. That is a structurally different problem from anything that election security frameworks were designed to address.
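To see why this layer is so hard to defend, it helps to look at the crudest available countermeasure. The sketch below is a minimal illustration, in Python, of provenance-based filtering of a training corpus against a blocklist of flagged source domains; the domain names, document structure, and data are hypothetical assumptions for exposition, not any lab's actual pipeline.

```python
# Minimal sketch of provenance-based filtering for a training corpus.
# Domain names, document structure, and data are illustrative
# assumptions, not a description of any real training pipeline.
from urllib.parse import urlparse

# Hypothetical blocklist of domains attributed to a content-farm network.
BLOCKED_DOMAINS = {"disinfo-farm.example.net", "mirror.example.org"}

def source_domain(url: str) -> str:
    """Return the hostname of a URL, stripping any leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def filter_corpus(documents):
    """Yield only documents whose source URL is not on the blocklist."""
    for doc in documents:
        if source_domain(doc.get("url", "")) not in BLOCKED_DOMAINS:
            yield doc

corpus = [
    {"url": "https://www.disinfo-farm.example.net/story-1", "text": "..."},
    {"url": "https://reputable-outlet.example.com/report", "text": "..."},
]
print(len(list(filter_corpus(corpus))))  # 1: the flagged source is dropped
```

The weakness is structural: a blocklist only catches sources already known, and an operation producing millions of pieces can rotate domains faster than any list is updated, which is why contamination at this layer demands content-level defences rather than provenance checks alone.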
What 2026 requires
The 2026 United States midterm elections will be the first major electoral test conducted in this environment. The AI Elections Accord has expired. Platform moderation has been weakened. Federal counter-disinformation capacity has been dismantled. Autonomous agent deployment in electoral contexts is no longer theoretical. The CDT assessed at the end of 2025 that AI will likely be more prevalent and impactful in the 2026 cycle than in 2024, and that the risk is significantly compounded by the complacency that the relatively uneventful 2024 cycle has generated (CDT, 2025).
Three things are required that are not currently in place.
- Governance designed for aggregate harm, not episodic incidents. The existing toolkit of content moderation, disclosure requirements, and detection tools addresses individual pieces of AI-generated material. It does not address the manufacture of synthetic consensus through agent swarms, or the contamination of AI training data by state-sponsored disinformation networks. These operate through cumulative processes with no identifiable single event; the research consistently confirms that this is precisely how AI's most consequential democratic effects operate (Lorenz-Spreen et al., 2022; Anderson and Rainie, 2020). Regulatory design must be calibrated to that reality, including continuous monitoring for anomalous co-ordination patterns (a minimal sketch follows this list) and mandatory provenance standards for AI-generated political content.
- Addressing the economics of manipulation directly. Synthetic consensus is commercially viable because engagement is rewarded irrespective of its authenticity. The voluntary mechanisms that raised the cost of misuse in 2024 have now lapsed. Restricting the monetisation of co-ordinated inauthentic behaviour, requiring audited bot-traffic metrics, and enforcing no-revenue policies for identified manipulation campaigns would alter the economic logic that makes large-scale influence operations sustainable (Menczer and Yang, 2026). These are not technically novel proposals. They require political will that has not yet been assembled.
- Sustained investment in civic infrastructure. CETaS found that digital literacy, a robust public broadcasting ecosystem, and lower political polarisation are the factors that most demonstrably increase public resistance to disinformation (CETaS, 2024). These are not technology problems. They are institutional and political ones. The capacity to navigate AI-mediated information environments critically is not something citizens develop without deliberate support. It requires the same category of investment as the other infrastructure on which democratic governance depends.
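On the first of these proposals, the sketch below illustrates the simplest version of monitoring for anomalous co-ordination: flagging account pairs whose posting timestamps fall into the same time windows far more consistently than organic behaviour would produce, the class of signal that exposed the German amplification network described earlier. It is written in Python; the account names, thresholds, and toy data are hypothetical assumptions, not a description of any platform's detection system.

```python
# Minimal sketch of co-ordination monitoring: flag account pairs whose
# posting activity is near-synchronised. Account names, thresholds, and
# data are hypothetical; production systems combine timing signals with
# content and network features.
from itertools import combinations

BIN_SECONDS = 60        # group posts into one-minute windows
SYNC_THRESHOLD = 0.8    # Jaccard overlap at or above this is flagged

def activity_bins(timestamps, bin_seconds=BIN_SECONDS):
    """Map epoch timestamps to the set of time bins an account posted in."""
    return {int(t // bin_seconds) for t in timestamps}

def jaccard(a, b):
    """Jaccard similarity of two bin sets; 1.0 means identical timing."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_synchronised(accounts):
    """Return (account, account, score) tuples for suspicious pairs."""
    bins = {name: activity_bins(ts) for name, ts in accounts.items()}
    return [
        (a, b, round(jaccard(bins[a], bins[b]), 2))
        for a, b in combinations(bins, 2)
        if jaccard(bins[a], bins[b]) >= SYNC_THRESHOLD
    ]

# Toy data: two accounts posting in lockstep, one with unrelated timing.
accounts = {
    "acct_a": [0, 61, 120, 305],
    "acct_b": [2, 63, 125, 301],      # same one-minute bins as acct_a
    "acct_c": [45, 400, 900, 1500],
}
print(flag_synchronised(accounts))    # [('acct_a', 'acct_b', 1.0)]
```

Menczer and Yang's warning stands: the content an agent swarm produces is hard to classify. But the synchronisation that makes a swarm efficient is also what makes it statistically legible, which is why continuous monitoring of behavioural patterns belongs in the toolkit alongside provenance standards rather than as a substitute for them.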
The deeper point
There is a structural asymmetry underlying all of this that governance has not yet adequately acknowledged. Open societies are constitutionally constrained in their ability to control information flows. Russia and China are not. They can mandate domestically aligned AI systems, restrict their own information environments, and operate offensively in Western democratic ones simultaneously (Foreign Policy, 2025). AI agents amplify that asymmetry materially. The current US policy response frames this as an argument for domestic deregulation and technological acceleration. That position may be sound for economic competition while remaining insufficient for democratic resilience.
The evidence from 2024 and 2025 established clearly that the threat to democracy from AI is real, diffuse, cumulative, and operating below the threshold of the dramatic single events that governance frameworks are best equipped to handle. The evidence from early 2026 establishes that the conditions constraining that threat have weakened and the capabilities advancing it have not. The question before democratic governance now is not whether to respond. It is whether the response can be assembled before the next cycle demonstrates, conclusively, what the absence of one looks like.
References
Al Jazeera (2024a) 'Meta says AI had only "modest" impact on global elections in 2024', Al Jazeera, 3 December. Available at: https://www.aljazeera.com/news/2024/12/3/meta-says-ai-had-only-modest-impact-on-global-elections-in-2024 (Accessed: 20 March 2026).
Al Jazeera (2024b) 'Did artificial intelligence shape the 2024 US election?', Al Jazeera, 25 December. Available at: https://www.aljazeera.com/news/2024/12/25/did-artificial-intelligence-shape-the-2024-us-election (Accessed: 20 March 2026).
Anderson, J. and Rainie, L. (2020) Concerns about Democracy in the Digital Age. Washington, DC: Pew Research Center. Available at: https://www.pewresearch.org/internet/2020/02/21/concerns-about-democracy-in-the-digital-age/ (Accessed: 21 February 2024).
Bai, H., Voelkel, J., Eichstaedt, J. and Willer, R. (2023) 'Artificial Intelligence Can Persuade Humans on Political Issues', Nature Portfolio. doi: 10.21203/rs.3.rs-3238396/v1.
Bengio, Y. et al. (2023) 'Managing AI Risks in an Era of Rapid Progress', arXiv. doi: 10.48550/arXiv.2310.17688.
Carnegie Endowment for International Peace (2025) AI Agents and Democratic Resilience, September. Available at: https://carnegieendowment.org/research/2025/09/ai-agents-and-democratic-resilience (Accessed: 20 March 2026).
Center for Democracy & Technology (CDT) (2025) Countdown to the Midterms: The Changing AI Threat Landscape for Elections. Washington, DC: CDT. Available at: https://cdt.org/insights/countdown-to-the-midterms-the-changing-ai-threat-landscape-for-elections/ (Accessed: 20 March 2026).
Centre for Emerging Technology and Security (CETaS) (2024) AI-Enabled Influence Operations: Safeguarding Future Elections. London: Alan Turing Institute. Available at: https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections (Accessed: 20 March 2026).
Centre for Emerging Technology and Security (CETaS) (2025) From Deepfake Scams to Poisoned Chatbots: AI and Election Security in 2025. London: Alan Turing Institute. Available at: https://cetas.turing.ac.uk/publications/deepfake-scams-poisoned-chatbots (Accessed: 20 March 2026).
EU AI Act Newsletter (2025) 'The EU AI Act Newsletter #84: Trump vs Global Regulation', August. Available at: https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-84-trump (Accessed: 20 March 2026).
Foreign Policy (2025) 'The Window for Combating AI Propaganda Is Closing', Foreign Policy, 18 September. Available at: https://foreignpolicy.com/2025/09/18/ai-propaganda-information-warfare-systems/ (Accessed: 20 March 2026).
Fortune (2025) 'Lawmakers warn European Commission against watering down landmark EU AI Act to appease Trump and US tech companies', Fortune, 26 March. Available at: https://fortune.com/2025/03/26/eu-ai-act-code-of-practice-disinformation-election (Accessed: 20 March 2026).
Institute for Strategic Dialogue (2025) Foreign Information Manipulation in the 2025 German Federal Election. London: ISD. Available at: https://www.newgeopolitics.org/2025/07/10/foreign-information-manipulation-in-the-2025-german-federal-election/ (Accessed: 20 March 2026).
Lorenz-Spreen, P., Oswald, L., Lewandowsky, S. and Hertwig, R. (2022) 'A systematic review of worldwide causal and correlational evidence on digital media and democracy', Nature Human Behaviour, 7(1), pp. 1–28. doi: 10.1038/s41562-022-01460-1.
Menczer, F. and Yang, K-C. (2026) 'Swarms of AI bots can sway people's beliefs — threatening democracy', The Conversation, 12 February. Available at: https://theconversation.com/swarms-of-ai-bots-can-sway-peoples-beliefs-threatening-democracy-274778 (Accessed: 20 March 2026).
MIT Technology Review (2024) 'AI's impact on elections is being overblown', MIT Technology Review, 3 September. Available at: https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/ (Accessed: 20 March 2026).
Parliament. House of Lords (2024) Large Language Models and Generative AI (1st Report of Session 2023–24, HL Paper 54). London: The Stationery Office. Available at: https://committees.parliament.uk/work/7827/large-language-models/publications/ (Accessed: 20 March 2026).
Policy Options (2025) 'Countering the threat of AI manipulation by authoritarian states', Policy Options, 11 November. Available at: https://policyoptions.irpp.org/2025/11/state-ai-maniuplation (Accessed: 20 March 2026).
R Street Institute (2025) AI and the 2024 Election Part III: Many Uses and Minor Impacts. Washington, DC: R Street Institute. Available at: https://www.rstreet.org/commentary/ai-and-the-2024-election-part-iii-many-uses-and-minor-impacts/ (Accessed: 20 March 2026).
Sanders, N.E. and Schneier, B. (2025) Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Cambridge, MA: MIT Press.
Tech Policy Press (2026) 'America's AI Governance Crisis Is a Democracy Crisis', Tech Policy Press, 24 March. Available at: https://www.techpolicy.press/americas-ai-governance-crisis-is-a-democracy-crisis/ (Accessed: 25 March 2026).
The Economist (2023) 'The world is heading into a fresh election super-cycle', The Economist, 13 November.
White House (2025) Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence, 23 January. Available at: https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/ (Accessed: 20 March 2026).
Part of the Blueprint AI series, this post reflects my personal views and is independent of my professional role.