2024 is the year in which as many as two billion people will cast their votes as elections are held across the world, from Bangladesh in January to Ghana in December. With Generative AI (Gen AI) reshaping every aspect of human behaviour and industry, including dating, education, creativity, customer service, and social media, elections, no matter where they are held, were never going to be untouched by it.
Gen AI is reshaping the way campaigns are run, information is disseminated, and voters are influenced. For instance, in December, Pakistan’s incarcerated former prime minister, Imran Khan, whose party was banned from holding public rallies, used an audio clip generated by AI to address a virtual rally. The speech was generated from a written version Khan had approved from prison.
The technology has also given a voice to those who face oppression. In Belarus, where opposition leaders often end up jailed, exiled, or dead, the opposition put up an AI bot, Yas Gaspadar, a virtual candidate built using OpenAI’s ChatGPT that can speak freely without fear of repercussion.
However, not everything about applying this technology, from crafting persuasive messages to targeting specific demographics, turns out well. The result is a mix of excitement and apprehension among political analysts and policymakers worldwide.
According to the 2024 CrowdStrike Global Threat Report, the exploitation of Generative AI looms on the 2024 horizon. The report predicts that with more than 40 democratic elections scheduled in 2024, nation-state and eCrime adversaries will have numerous opportunities to disrupt the electoral process or sway voter opinion. Nation-state actors from China, Russia, and Iran are highly likely to conduct mis- or disinformation operations to sow disruption against the backdrop of geopolitical conflicts and global elections.
Personalized Campaigning
One of the most significant impacts of generative AI on elections is its ability to facilitate personalized campaigning at an unprecedented scale. By analyzing vast datasets of voter information, AI algorithms can generate highly personalized messages, advertisements, and outreach strategies. With this approach, political campaigns can micro-target individuals based on their demographics, interests, and past behaviour, getting the most out of their outreach efforts.
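To illustrate the mechanics rather than any real campaign’s tooling, here is a minimal Python sketch that segments a handful of hypothetical voters by their top issue and fills in a canned message template per segment. An actual Gen AI pipeline would draw on far richer data and generate the copy itself, but the micro-targeting logic is the same in spirit; all names, fields, and templates below are invented for illustration.

```python
# A toy illustration of demographic micro-targeting (hypothetical data and fields).
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    age: int
    top_issue: str  # e.g. "jobs", "healthcare", "climate"

# Hypothetical voter records; real campaigns draw on far larger datasets.
voters = [
    Voter("A. Perez", 23, "jobs"),
    Voter("B. Rahman", 67, "healthcare"),
    Voter("C. Okafor", 41, "climate"),
]

# Message templates keyed by issue; a generative model would produce these dynamically.
TEMPLATES = {
    "jobs": "Hi {name}, our plan creates local apprenticeships for young workers.",
    "healthcare": "Hi {name}, we are committed to cutting prescription costs for seniors.",
    "climate": "Hi {name}, here is how our clean-energy plan affects your community.",
}

def personalise(voter: Voter) -> str:
    """Pick the template matching the voter's segment and fill it in."""
    template = TEMPLATES.get(voter.top_issue, "Hi {name}, learn about our platform.")
    return template.format(name=voter.name)

for v in voters:
    print(personalise(v))
```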
A political Starbucks cup did the rounds on social media in March, when Mexican civic organization Sociedad Civil de México posted an AI-generated image on X showing a colourful Starbucks cup sporting the inscription #Xochitl2024, along with the hashtag #StarbucksQueremosTazaXG (#StarbucksWeWantACupXG). Mexico’s opposition presidential candidate Xóchitl Gálvez asked her X followers to order a “café sin miedo” (coffee without fear), users began sharing the AI-generated image, and the trend caught on. Starbucks later stated that the design did not originate from the coffee brand.
In India, things have become official, reports Al Jazeera. In February, an Instagram video by the Indian National Congress (INC) parodying Prime Minister Narendra Modi amassed more than 1.5 million views. The very same day, the BJP’s official handle uploaded a video that used AI to clone the voice of the late patriotic singer Mahendra Kapoor to sing about the PM’s achievements.
“This is at the inflection point of an entirely new way of conducting visual politics and arguably one that will foundationally change the way we consume multimedia artefacts during political campaigns,” said Joyojeet Pal, an associate professor at the University of Michigan.
Disinformation & Misinformation
However, the proliferation of generative AI also raises concerns about the spread of disinformation and misinformation. With the ability to generate convincingly human-like text, malicious actors can exploit AI to create and disseminate false narratives, sow discord, and manipulate public opinion. AI-generated misinformation poses a significant threat to the integrity of democratic processes worldwide, and combating it remains a formidable challenge.
In Indonesia, a fabricated video on Facebook showed the vice president-elect insulting the beneficiaries of government benefits. In the US, voters received AI-generated robocalls mimicking President Joe Biden’s voice, asking them to stay home.
In February, the Institute for Strategic Dialogue (ISD), a UK-based think tank, uncovered what is being called a Chinese government-run ‘spamouflage’ campaign, which shares AI-generated images in a bid to spread misinformation ahead of the US elections this year.
Deepfake videos have become rampant in this election-packed year. Several countries, including Pakistan, Taiwan, and Bangladesh, have seen such videos doing the rounds and spreading misinformation among voters.
Ethical & Regulatory Challenges
The rise of generative AI in elections has prompted debates surrounding its ethical implications and the need for regulatory oversight. Questions regarding transparency, accountability, and the safeguarding of democratic norms have become central to discussions among policymakers and technologists.
As Nvidia CEO Jensen Huang has said, every country needs to have its own AI infrastructure to take advantage of the economic potential while protecting its own culture. Striking a balance between harnessing the potential of AI for political campaigning and mitigating its negative consequences remains a key challenge for stakeholders.
The European Union (EU) took the first official step by passing the AI Act in March. Under the act, use cases that pose a high risk to people’s fundamental rights are restricted, and AI that poses an “unacceptable risk” faces an outright ban. This includes use cases where AI systems deploy “subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making,” or exploit vulnerable people.
In India, the Union minister of state for electronics and IT, Rajeev Chandrashekhar, has stated that an AI regulatory framework will be discussed and debated in June-July this year.
Tech companies, too, are putting in an effort to curb the damage done by AI-generated content.
An open-source internet protocol called C2PA uses cryptography to encode details about the origins of a piece of content. This means information such as where a piece of content came from and who or what created it will be available to viewers. Tech giants like Google, Microsoft, and Meta have already announced that they have joined the protocol.
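The core idea can be illustrated, very loosely, as a signed provenance manifest: hash the content, attach metadata about its origin, and sign both so any tampering becomes detectable. The sketch below uses the general-purpose `cryptography` package and an invented manifest layout; it is not the actual C2PA manifest format or its tooling.

```python
# Conceptual sketch of cryptographic content provenance (NOT the real C2PA format).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"<image or video bytes>"

# Provenance metadata: who or what created the content, and with which tool (hypothetical values).
manifest = {
    "creator": "Example News Desk",
    "tool": "camera-firmware-1.2",
    "content_sha256": hashlib.sha256(content).hexdigest(),
}

# The publisher signs the manifest; anyone holding the public key can verify it.
private_key = Ed25519PrivateKey.generate()
payload = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(payload)

# Verification: recompute the content hash, check it matches, then check the signature.
public_key = private_key.public_key()
assert hashlib.sha256(content).hexdigest() == manifest["content_sha256"]
public_key.verify(signature, payload)  # raises InvalidSignature if the manifest was altered
print("Provenance manifest verified")
```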
As the use of Gen AI in electoral politics grows, its impact will only intensify in the years to come. Advancements in AI technology and the increasing sophistication of algorithms will continue to change the dynamics of political campaigning and voter engagement. To navigate this rapidly changing landscape, policymakers, electoral authorities, and civil society must collaborate to establish robust regulatory frameworks and ethical guidelines that safeguard the integrity of democratic processes.
After all, how much do we know about AI?
Gen AI, particularly language models like GPT (Generative Pre-trained Transformer), is undoubtedly shaping political narratives. These models can churn out vast amounts of text, mimicking human-like language and generating content ranging from speeches to social media posts. By crafting messages tailored to resonate with specific voter segments and effectively influencing public opinion on key issues, this technology has put great power into the hands of campaigners.
However, it is important to ask whether people understand what they see or simply believe it blindly. Prime Minister Narendra Modi himself expressed concern over deepfakes and called on the media to educate the public about the risks of Gen AI.
According to Ipsos polling data, there is a lag in the way individuals understand AI, and the observation holds globally. In a survey across 31 countries, on average around 70% of respondents think they have a good understanding of AI, but only about 50% can name products and services that use AI; in India, that figure is 64%. Interestingly, populations in emerging markets show higher trust in AI technology and tend to see its pros rather than its cons.
The rise of generative AI represents a profound shift in the way elections are contested and won. While offering unprecedented opportunities for personalized campaigning and voter engagement, AI also poses significant challenges in terms of disinformation, privacy, and democratic accountability. As societies adapt to this new reality, the responsible deployment of AI in electoral processes will be crucial in preserving the integrity and legitimacy of democratic governance worldwide.