AI is a threat to the 2024 elections, but not in the way you might think, writes Irene Solaiman, international policy director at Hugging Face.
Already months into the world’s most crucial election year, with billions of people across at least 64 countries and the European Union heading to the polls, public concern is centered on a relatively new and rapidly evolving threat: generative AI.
There are many reasons to be afraid.
Already, we’ve seen AI-generated fake images of former President Donald Trump allegedly being arrested by the New York Police Department, and an AI voice clone of President Joe Biden telling New Hampshire voters not to vote. People are taking notice. A global survey conducted across 29 countries in spring 2023 found that 60% of voters are concerned that AI could make disinformation worse. A July 2023 poll found that 73% of US adults are concerned that AI will create deceptive political ads. And the solutions promised to us aren’t working: AI text detectors not only fail to work, they produce disastrous false positives, and image and video detection isn’t much better.
All of this sounds like a recipe for disaster. But having led AI policy at OpenAI and now at Hugging Face, and having spent several years working on US election security, I believe the tools to address these issues already exist, if only we can dispel five key misconceptions that keep us from diagnosing the problems correctly.
Myth 1: AI brings entirely new risks.
We have been warning about eerily realistic deepfakes for years, and we have been plagued by disinformation for far longer. The low barrier to entry for AI, which allows anyone with a stable internet connection to create believable fake content, is worrying on multiple fronts. But the threat level varies with the type of content, such as fake images and audio, and the impact varies with distribution and audience reach. AI-generated text may not even be cheaper than the alternative: people can already make up facts in text cheaply, and for fun, without AI. Image manipulation has long been possible with tools such as Photoshop. The most immediate new threat from AI content with few safeguards is realistic audio and video generation.
Reality: AI builds on existing risks.
The main risks that AI poses to the integrity of democracy are disruption of voting procedures and infrastructure, and loss of information and trust in democratic institutions. The former covers infrastructure such as polling stations and voter databases, and risks such as cyber attacks, misinformation about how and where to vote, and interference with official records. The latter covers disinformation about candidates, processes, and outcomes.
That said, the impact of AI on voter opinions varies, with many voters making up their minds well before Election Day. Variability in the performance of AI systems, which are largely trained on high-resource languages such as English, could skew content quality in less-represented regions. But AI content can still interfere with infrastructure, and the very idea that AI is influencing elections can itself erode trust.
Myth 2: Watermarks are the solution.
While AI researchers are working hard on provenance techniques such as AI content labeling, technical tools such as language model content filters and image watermarking have yet to reach full-scale deployment. The performance of these tools and techniques varies across content types, and making watermarks tamper-proof remains an ongoing challenge.
Reality: Mitigation is multifaceted.
Technical protection measures are not a panacea, so model licenses and terms of use must also be enforced, with concrete legal consequences for misuse. Mitigation measures must be built in from the ground up. And even if labeling and watermarking worked perfectly in the short term, they would not deter voters committed to believing conspiracy theories. Given that in 2016 some US voters saw similarities between candidates and statues of the devil, watermarking a viral image like Shrimp Jesus would do little to weaken its influence. Seeds of belief planted by false content become reality once a narrative takes root and shapes views of a candidate and their policies.
Myth 3: Open source is more dangerous.
In the post-ChatGPT era, there is a lively debate about whether AI should be “open source” (colloquially, models anyone can download and run) or “closed” (where companies host the models and provide access to approved developers). There is no conclusive evidence that more open models significantly increase election risk. The infamous robocalls cloning President Biden’s voice were not the result of an open model.
Reality: The barrier to harmful election content depends primarily on access.
Access is related to, but not dependent on, the openness of a system. Generative systems capable of producing high-quality, reliable content are likely to be expensive and require more computing infrastructure than the average person has access to. Existing systems, including open source models, may not be optimal for influence operations, and government actors with sufficient resources may build their own customized systems specifically for election interference.
Myth 4: All attackers need is AI.
Anyone looking to influence voters, from well-funded government actors to teenagers with a quirky sense of humor, may be unable to reach their audience without the key ingredient: strong distribution.
Reality: Distribution is key.
The harm caused by the fake audio of President Biden’s voice arises first from the fact that it was produced without consent, but it is compounded by distribution and by the targeting of vulnerable groups of voters with robocalls. Such targeted campaigns require a plan that goes beyond content generation: in the case of the robocalls, the perpetrators needed access to the phone numbers of voters in specific areas.
Myth 5: It’s too late to act.
We are a few months into 2024 and elections are already underway around the world. Experts are right that there is no time left for sweeping legislative change, but effective measures can be simple and targeted.
Reality: We can all make a big impact.
We are at a critical juncture, and everyone has a responsibility. The key players needed to protect election security in an AI environment are election agencies and administrators, election campaigns, content distribution platforms and social media, journalists and news media, AI companies, developers, adopters, and voters. That probably includes you.
Following the lead of the National Association of Secretaries of State’s (NASS) #TrustedInfo2024 initiative, there is still time for official election institutions, from administrators to campaigns, to work together to guide voters to trustworthy information. Social media and news media teams and channels need to be properly equipped to handle global elections, not just a few in well-resourced regions. Media and AI organizations have invested and should continue to invest in the capacity to verify content.
The onus should not fall too heavily on voters, but they should build their own media and AI literacy and verify as much information as possible before acting. Voters should also be aware that the AI “hallucination” problem extends to election information, and should not treat AI systems as a trusted source of voter information.
We all have a role to play in making our democracy more resilient against threats, regardless of how they arise.
We would like to thank Brigitte Tousignant and Justin Hendrix for their thoughtful comments and advice.