Khan's use of AI not only highlights the transformative power and new possibilities of technology in political campaigns and elections more broadly, but also raises critical questions about its impact on electoral integrity. His ability to engage and mobilize supporters despite legal and physical constraints showcases both the immense opportunities and the significant challenges that AI brings to the democratic process. It is therefore essential for electoral actors to understand and navigate the complexities of AI in elections. This is precisely why our first executive workshop on AI Literacy for Electoral Actors in the Asia-Pacific region dove deep into building a democratic foundation for AI in electoral processes. Over three days in Kuala Lumpur, Malaysia, the workshop convened representatives of electoral management bodies (EMBs) and civil society organizations from 19 countries in the Asia-Pacific region. It explored the five pillars necessary for building a democratic foundation for the use of AI in electoral processes: the curriculum addressed AI literacy, delved into AI ethics and human rights, examined AI content curation and moderation, discussed regulation and legislation, and considered how AI can enhance electoral management.
Five pillars, five regions: these workshops are set to unfold across different parts of the world in the coming months. This inaugural workshop therefore serves as the perfect starting point for the first pillar of our article series on this global comparative project. Each workshop will be followed by an article, each shedding light on one of these critical pillars.
Pillar #1: AI Literacy
To build what we call a democratic AI foundation, it is crucial to understand the basic technical details of modern AI systems, where AI is being used, and the key issues associated with them: the very first learning objective of our curriculum. When EMB officials were asked what words come to mind when they think of AI, terms like "automatic," "intelligent," and "futuristic" were mentioned, but also associations such as "complex," "dangerous," "fake," and "scary." Such responses paint a clear picture of the mixed feelings many hold: they see AI as an opportunity that can protect and streamline electoral processes, but also harbor worries about what AI means for the future of upholding electoral integrity. To grasp the potential risks and challenges as well as the opportunities that come with AI, electoral actors must understand how AI works, where and why it might not work, and where it might be useful or harmful.
AI is an umbrella term that covers a variety of related technologies. The OECD defines AI as ‘a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (OECD 2019). Yet when most people think about AI, they picture a ChatGPT response or a deepfake, not other types of applications. Indeed, AI technologies are very broad: chatbots and generative image models have very little to do with software that, for example, delineates voting districts. This shows how important it is to understand key terms such as the difference between generative AI, a subset of machine learning capable of generating content like text, images, or other media, and discriminative AI, where models are used to classify, analyze, or separate data.
Although public consciousness around AI grew with the release of ChatGPT in November 2022 (which also led many Electoral Management Bodies (EMBs) to consider and design chatbots to answer questions and provide information about elections), research on the technology dates back to the mid-20th century, with substantial advances in the 1990s and early 2000s, particularly in image recognition models, natural language processing, and ensemble methods. Today, AI systems are found everywhere, from autocomplete on keyboards, spam and phishing filters in emails, voice assistants, AI summaries, and bots on social media platforms to biometric systems, voter list management, and predictive analytics in electoral management.
During the first event in Kuala Lumpur, many EMBs shared that they have already been discussing the use of AI in elections. However, before taking any next steps, they emphasized that it is crucial for them to first build capacity and increase AI literacy within their institutions. AI literacy is a prerequisite for informed decisions on where the deployment of AI might be useful and improve a process, and where AI might complicate or harm an already well-functioning process.
This need extends beyond AI literacy within electoral bodies. Many participants pointed out that broader AI literacy reaching voters, along with increased resources for civic education, is key. Attendees suggested that this could be an area where EMBs and civil society work together to fill the AI literacy gap. In particular, participants noted that the current lack of civil society oversight of how electoral stakeholders use AI needs to be addressed.
This emphasis on collective effort echoes one of the key takeaways of the workshop: harnessing the benefits of AI, as well as addressing AI-related challenges to elections, such as AI-generated disinformation and ethical concerns, requires a holistic approach that engages all actors.
Despite each country's unique electoral context, many challenges and opportunities are common to all. There was a broad consensus on the importance of convening, sharing expertise, and continuing this collaboration to tackle any AI-associated risks in elections.
Our workshop in Kuala Lumpur was a pivotal step towards further understanding the complexities of AI in elections. By understanding how AI works, where it can be beneficial, and where it may pose risks, electoral actors can make informed decisions that uphold the integrity of elections.
As we continue this series of workshops across different regions, our aim remains steadfast: to build a democratic foundation for AI in electoral processes. Enhancing AI literacy is not just about keeping pace with technological advancements; it is about ensuring that democracy thrives in the digital age. We look forward to ongoing collaboration and shared learning that will empower electoral actors worldwide to harness AI responsibly and effectively.
Looking ahead, our next workshop for the Western Balkans and Eastern Europe region will take place in Tirana, Albania during the first week of December. The upcoming article will discuss region-specific insights and delve into the second pillar of a ‘democratic AI foundation’: AI Ethics and Human Rights.
Please note that this is the second article in a series; read the first one: A democratic foundation for electoral AI #1