A democratic foundation for electoral AI #1
International IDEA's 'AI for Electoral Actors' project logo.
Going into the electoral super-cycle year of 2024, one of the hottest topics of discussion was undoubtedly how AI might have an unprecedented impact on elections. Although predictions varied, the most drastic projections envisioned major disruptions to elections, amounting to a “tech-enabled Armageddon” driven by generative AI. As the year draws to a close and the results from major elections come in, there is still no indisputable evidence linking major AI-generated campaigns to electoral results. Rather, the hyperfocus on AI-generated disinformation may instead have diverted attention from what is in reality a diverse set of AI possibilities and risks.
AI is poised to transform elections, that much is certain. Electoral Management Bodies (EMBs) are already incorporating AI technology to enhance some of their functions, and it’s reasonable to expect that the use of AI in electoral administration will steadily expand, covering everything from better management of voter lists to easier identification of potential counting errors. On the other side of the playing field, civil society organizations could use AI as a tool to monitor elections more effectively, while political campaigns and candidates have already started turning to the technology to drive their strategies, produce content, and broaden their outreach.
However, these applications of AI will push elections into largely uncharted territory, with significant implications for all involved. The use of AI in elections could erode trust in democratic processes, amplify existing societal biases and structural discrimination, undermine the autonomy of EMBs, and further degrade the integrity of the information environment.
In the face of these mounting challenges, International IDEA is applying a holistic and foundational approach to AI in elections through an initiative aimed at electoral actors. The AI for Electoral Actors project, implemented by International IDEA in 2024-2025 with the support of Microsoft and OpenAI, launched with the objective of raising AI literacy and resilience among electoral management bodies and civil society through a series of executive trainings. To ensure that the potential of AI is harnessed while its negative impact is contained and mitigated, the trainings convey a sound foundation of principles addressing the technology’s democratic, technical, legal and ethical implications.
This ‘democratic AI foundation’ consists of five main pillars:
- AI literacy: As AI becomes more widely used across industries and sectors, it is crucial that everyone involved in the technology’s lifecycle fully understands the capabilities, limitations and possible implications of the AI systems that are part of their work. This requirement extends to EMBs, which are the stewards of democratic and trustworthy AI use in elections. Enhancing AI literacy at the EMB level secures their capacity to oversee the use of AI systems in ways that are functional, fair and equitable, strengthens their ability to mitigate the negative effects of AI, and allows them to make informed policy decisions. This understanding should translate into transparency and communication with constituents, as the recipients of AI products and decisions should have the right to meaningful information and explanation about the systems in use.
- AI ethics and human rights: When misused, AI has proven to seriously undermine ethical principles and human rights, including the right to privacy, right to data protection, right to a fair trial, right to participate in public affairs, right to equality and non-discrimination, and the right to live free from violence. When AI is implemented in elections, safeguards need to be put in place to mitigate and minimize harm to these rights. Risks are particularly high for marginalized groups, who are often left out of the development stages of the AI lifecycle. If developed with non-inclusive or skewed datasets, AI may carry inherent biases that reproduce existing discrimination. Such biases can be mitigated by adopting an inclusive perspective from the inception of AI development, as well as by governing systems through policy and regulation for AI in electoral contexts.
- AI content curation and moderation: Social media has become a key vehicle for political information during electoral cycles. For the average voter, this information feed is often polluted with an abundance of AI-generated mis- and disinformation, making it difficult to discern what’s credible. Ultimately, this threatens the reliability of political information and thereby undermines electoral integrity. For EMBs, it’s important to be aware of where and how disinformation spreads in local communities and to make factual, trustworthy information easily accessible. To make sure this information reaches voters, EMBs must also understand how social media ranking and recommender algorithms often prioritize engaging and sensationalist content.
- Regulation and legislation: While global frameworks are crucial for governing AI, regulation needs to reflect the context in which it is enforced if it is to effectively address how AI is used in national elections. AI regulation should be designed to ensure that the technology actively strengthens and promotes democratic conditions for all of society, and that no actor can uniquely benefit from or abuse the technology to manipulate or interfere with electoral processes.
- AI to improve electoral management: Because discussions often focus on AI’s disruption of electoral processes, the ways the technology can be used to improve electoral management are often overlooked. When all the other pillars are in place, AI can contribute to fairer, more transparent and more timely elections. AI technology is already employed in national elections for purposes such as voter identification, as well as in political campaigns. In the future, AI applications may cover more in-depth tasks, such as voter management, conducting pre-electoral estimations and post-electoral analysis, and identifying potential issues like fraud or irregularities. This pillar, however, more than any other, depends on the existence of democratic safeguards and ethical frameworks.
Reinforcing these core pillars opens the potential for AI to make elections more secure, fair and transparent. However, if even one pillar is left too weak, the entire structure risks collapse, turning the risks and negative externalities of AI into reality.
In recent years, significant efforts have been made to promote these five pillars through regional and international policy initiatives. Notable examples include the OECD AI Principles, the UNESCO Recommendation on the Ethics of AI and the Brazilian AI Strategy. While these instruments are key to establishing shared ethical principles for AI, they rarely address the specific impact AI may have, and already is having, on electoral processes. Democracies have a great deal to gain by directly tackling the intersection of AI and elections at the electoral management level, turning a potentially harmful technology into a force for good.
The AI for Electoral Actors project seeks to strengthen each pillar of the democratic AI foundation by raising awareness and literacy among EMBs and CSOs. Over the course of five regional events, experts from the fields of computer science, democracy assistance, and digital and human rights law will work collaboratively with participants to identify good practices and paths to cooperation that secure AI’s potential advantages for elections. To learn more about the project, visit the AI for Electoral Actors project page.
As part of the project, we are launching an article series that will delve into AI and elections in the regional context of each event. The articles will explore local legislation and ethics, connecting them to a global understanding of the different challenges and possibilities of AI. Each article will also link one pillar of the democratic AI foundation to regional strategies for building a steadfast and qualified home for AI in elections.
About the authors
Cecilia Hammar
- Programme Assistant, Digitalization and Democracy
Cecilia is a Programme Assistant with the Digitalization and Democracy team, Global Programmes, in Stockholm. She joined International IDEA in January 2024 and contributes to the Digitalization…