
The ethical conundrum of electoral AI #3


In 2024, investigative reports uncovered Telegram networks across the Balkans where hundreds of thousands of users circulated images of women manipulated by AI software designed to “undress” them. These digitally altered photos became tools for blackmail, coercion, and public shaming, effectively silencing targets out of public life. While generative AI has become a fixture in many social media feeds, with more than half of all content online now produced by AI, mainstream discussions fail to address a stark reality: an estimated 96 per cent of all deepfakes depict non-consensual intimate imagery of women. Such technology-facilitated gender-based violence is especially alarming in the context of elections, where personal attacks not only reinforce harmful gender stereotypes but also drive women out of public and political spaces, or deter them from entering those spaces in the first place.

The Telegram case exposes a troubling dimension: without sufficient safeguards against abuse of AI systems, severe harm can be inflicted on women and other marginalized communities. Gender-based violence online existed long before AI became a widely accessible tool, but AI technologies can accelerate and amplify the spread of hate speech and non-consensual sexual imagery, posing serious threats to fundamental human rights. The risks posed by AI are not only about deliberate misuse; they are also embedded in the very fabric of how these systems are built and deployed. Beyond cases like Telegram, where AI is weaponized to target marginalized groups, the unintentional bias and harm that AI can cause must also be recognized.

AI literacy must, therefore, extend to a deeper awareness of how systemic biases, baked into the datasets used to train AI, can perpetuate discrimination and exclusion, often without users even realizing it. These biases can replicate existing gender norms, racial stereotypes, and other prejudices, and can stem from decisions about system design, data collection and labeling, or how models are developed, deployed, and used. Such shortcomings magnify the potential for harm, including discrimination and exclusion, even without the user’s intention or awareness. Whether through intentional abuse or unintentional flaws, the ethics of AI demands urgent attention and clear mechanisms for accountability at every level.

The large datasets used to train AI models also require extensive collection and storage of highly sensitive voter information, including data used for biometric identification, election surveillance and facial recognition. Protecting such data is critical to safeguarding the privacy of voters and upholding public trust in electoral administration, a key feature of a healthy democracy. At the same time, such protections are often costly and demand high capacity from electoral management bodies (EMBs). EMBs must also secure adequate cybersecurity infrastructure to protect sensitive data and provide voters with transparency about what data is processed and how.

Ethical considerations are integral to every phase of the AI lifecycle, from initial design through deployment and ongoing monitoring and evaluation. Decisions made during development have lasting impacts, as oversights or insufficient attention to democratic values may result in mistreatment of overlooked communities and violations of fundamental civil and political rights. In electoral contexts, the environment in which AI is deployed significantly influences its ethical impact. Electoral AI must account for highly specific local cultural, political, and social dynamics to avoid exacerbating existing inequalities. To mitigate these risks, AI systems must be transparent, with clear accountability mechanisms available to voters to ensure their fairness. Continuous monitoring is also essential to catch and address ethical breaches, which in turn relies on robust digital literacy and capacity among EMBs.

As AI can act as an amplifier of existing systemic issues, the brunt of these harms often falls on those already marginalized, including ethnic and religious minorities, women and individuals of lower socioeconomic status. Without essential human oversight, such harms often remain unintended and unnoticed, yet their threat to human rights is undeniable. When AI is used in any aspect of elections, including electoral management, attention to these risks is fundamental.

International IDEA’s AI for Electoral Actors project recognizes that ethical and human rights considerations must be the foundation for any AI-related work in elections. In line with this principle, one key pillar of the workshop—and the second pillar to be presented in this article series—is dedicated to AI ethics and human rights. (Learn about the five pillars that make up the foundation of principles addressing AI’s democratic, technical, legal and ethical implications.)

This article draws on discussions from the second workshop, in Tirana, Albania, which brought together regional representatives from EMBs and civil society organizations across the Balkans and Eastern Europe to jointly envisage the ethical foundation necessary to align AI usage in elections with democratic values.  

These conversations have helped untangle the wicked problem of how to protect marginalized communities from persecution and disenfranchisement by introducing ethical safeguards for electoral AI.

Pillar #2: AI Ethics and Human Rights

EMBs are considering AI tools to improve election integrity for tasks such as voter list maintenance and deduplication, polling booth location optimization, and voter authentication and fraud detection; at the same time, these tools are being used by other actors in the broader electoral context, such as political campaigns. Each of these applications carries significant ethical and human rights implications, including for privacy, security, and transparency, as well as sustainability concerns, given that AI can consume vast amounts of energy. Moreover, regions lacking equal access to digital public infrastructure or sufficient digital literacy face additional challenges in ensuring fair and equitable AI deployment. Strengthening AI literacy, especially among electoral actors, has thus become a critical priority.

In the Tirana-based workshop, EMB representatives generally rated their knowledge of AI as low, raising concerns about their capacity to mitigate potential human rights breaches. Despite this, some representatives shared that they are already using AI to monitor social media and to provide information about election administration through AI-powered chatbots. 

AI chatbots are prone to generating inaccurate information, known as hallucinations, because their response generation involves a degree of randomness. Hallucinations already struck the 2024 EU elections, when the world’s most widely used chatbots spread incorrect information about election dates, how to cast a ballot and who was eligible to vote. Without built-in system limitations, regulation, and human oversight, AI systems risk not only eroding trust in electoral institutions but also disenfranchising voters.

Hallucinations showcase a larger ethical issue relevant to AI use in public affairs: a lack of transparency. The decisions made by algorithms are often unexplainable, even by the developers behind the systems, a phenomenon known as the black box problem. With little insight into how an AI system produces its outputs, it can be difficult to explain why and how the system reaches certain conclusions. An AI chatbot may, for example, invent polling locations for regions that lack relevant data, creating systematic obstacles to voting for users living in those regions. Such issues are especially likely to affect already marginalized communities, where data gaps are far more common, thereby entrenching existing societal injustices. Inaccuracies or misrepresentations in AI output are hard to predict and risk infringing on the right to information, which is paramount to making free and informed voting decisions.

A lack of trust in AI has been singled out as a major obstacle to people’s willingness to adopt AI at work across industries. This general distrust closes off the potential benefits AI could bring to the efficiency and fairness of elections. To build a strong democratic foundation for AI, ethical awareness and accountability must be integrated at every level, among developers, deployers, and end users alike. This is especially important in electoral management, where human rights concerns and ethical standards need to remain front and center. Electoral officials must therefore be vigilant when approached by AI vendors, ensuring meaningful human oversight at every stage of AI deployment and taking proactive steps to identify and address potential harms.

AI tools have so far played a largely peripheral role in the Western Balkans and Eastern Europe, and there are consequently no well-documented instances of AI-driven bias in the region’s electoral processes. Nevertheless, the region’s history of ethnic tensions, political instability, and media manipulation highlights the risks of introducing AI without careful management, including, for example, amplifying existing ethnic, racial, geographic or media biases if the technology is not thoughtfully integrated.

A related concern arose in Bosnia and Herzegovina, where the case of Pilav v. Bosnia and Herzegovina (2016) demonstrated that the country’s electoral system discriminates based on ethnic origin and place of residence. By restricting key political offices to three ‘constituent peoples’—Bosniaks, Croats and Serbs—the system excludes individuals who do not belong to these groups or who reside in certain regions. Such geographic and ethnic biases highlight how seemingly neutral rules can have biased outcomes in practice. 

Age bias has also been reported in Albania’s voting process. Our experts and participants noted that elderly voters struggled to use electronic voting systems, often requiring assistance that compromised voter privacy. This example illustrates how AI-enabled or technology-based electoral solutions can inadvertently create barriers to participation if not designed and implemented inclusively.

Taken together, these examples—by no means unique to this region—highlight how bias in AI can exacerbate existing societal schisms and issues that disrupt the fairness of elections. Such examples underscore the need for a cautious, well-considered approach to incorporating AI into electoral processes. Even where AI offers clear advantages, such as streamlining administration or reducing human error, robust safeguards and transparent oversight are critical. AI developers and electoral authorities must prioritize ethical standards, consult with diverse stakeholders, and rigorously monitor implementations to ensure that AI strengthens rather than undermines electoral fairness and democracy. 

The third installment of the article series will explore how AI has come to shape online information spaces and how content curation and moderation, the third pillar of the democratic AI foundation, plays a vital part in curbing the often-unfettered impact of AI on electoral information integrity. The discussion will draw on insights from our third AI for Electoral Actors workshop in Johannesburg, taking place during the first week of April 2025.

Catch up on the article series so far by reading the first article, which presents the full picture of a democratic AI foundation, and the second article, on AI Literacy and the first workshop in Kuala Lumpur.

About the authors

Cecilia Hammar
Programme Assistant, Digitalization and Democracy
Juliane Müller
Associate Programme Officer