NEW YORK — The release of a doctored video imitating Vice President Kamala Harris’ voice and making her say things she did not is raising concerns about the power of artificial intelligence to mislead people with Election Day just three months away.
The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday night without noting that it had originally been published as parody.
The video uses much of the same footage as the real ad that Harris, a leading Democratic presidential candidate, released last week as she launched her campaign, but the original narration has been replaced with a voice that convincingly mimics Harris’.
“I, Kamala Harris, am the Democratic candidate for president because Joe Biden finally showed his age in the debate,” a voice says in the video. The voice claims Harris is a “diversity hire” because she’s a woman and a person of color, and says she “doesn’t even know the basics about running a country.” The video keeps the “Harris for President” branding intact, and adds some authentic archival footage of Harris.
“We believe what the American people want is the real freedom, opportunity and security that Vice President Harris is providing, not the false, manipulated lies of Elon Musk and Donald Trump,” Mia Ellenberg, a spokeswoman for the Harris campaign, said in an email to The Associated Press.
The widely shared video is one example of how lifelike AI-generated images, videos and audio clips have been used to ridicule politicians and mislead voters as the U.S. presidential election approaches. It also highlights how, even as high-quality AI tools become far more accessible, there has been little notable federal action to regulate their use, leaving the rules guiding AI in politics largely to states and social media platforms.
The video also raises questions about how to best handle content where the lines of appropriate use of AI are blurred, particularly content that falls into the category of satire.
The original user who posted the video, a YouTuber known as Mr. Reagan, clarified on both YouTube and X that the doctored video was a parody. But Musk’s post, which has been viewed more than 123 million times according to the platform, simply includes the caption “This is awesome” and a laughing emoji.
X users familiar with the platform may know that they can click on Musk’s post to be taken to the original user’s post and view the disclosure, though there is no instruction to do so in Musk’s caption.
Some participants in X’s “Community Notes” feature, which adds context to posts, suggested Musk’s post be labeled, but no such label had been added as of Sunday afternoon. Some online users questioned whether Musk’s post violated X’s policies, which state that users “may not share composite, manipulated, or out-of-context media that may deceive, confuse, or harm people.”
The policy makes an exception for memes and satire, as long as they don’t cause “significant confusion about the veracity of the media.”
Earlier this month, Musk endorsed Republican candidate and former President Donald Trump. Neither Reagan nor Musk immediately responded to emailed requests for comment Sunday.
Two experts who specialize in AI-generated media reviewed the fake ad’s audio and confirmed that much of it was generated using AI technology.
One of them, Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video shows the power of generative AI and deepfakes.
“The AI-generated audio is so good,” he said in an email. “Most people would not believe it is VP Harris’ voice, but having the words in her voice makes the video that much more powerful.”
He said generative AI companies that offer voice cloning tools and other AI tools to the public should do more to ensure their services are not used in ways that harm people and democracy.
Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying many people would be fooled by the video.
“I don’t think this is obviously a joke,” Weissman said in an interview. “I don’t think most people who see this would think it was a joke. The quality isn’t great, but it’s good enough. And precisely because it plays into existing themes surrounding her, most people will believe it’s real.”
Weissman, whose group has lobbied Congress, federal agencies and state governments to regulate generative AI, said the video was “the kind of thing we’ve been warning about.”
Other AI deepfakes, in the United States and elsewhere, have sought to influence voters with misinformation, humor or both. In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig the election and raise beer prices days before the vote. In Louisiana in 2022, a satirical political action committee ad superimposed a mayoral candidate’s face onto that of an actor portraying a struggling high school student.
Congress has yet to pass any laws regarding AI in politics, federal agencies have taken only limited action, and most of the existing regulations in the U.S. are left to the states. More than a third of states have enacted their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
Besides X, other social media companies have also created policies around synthetic and manipulated media shared on their platforms. For example, users of the video platform YouTube must disclose whether they used generative artificial intelligence to create their videos or risk having their accounts suspended.
___
The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. Learn more about the AP Democracy Initiative. The Associated Press is solely responsible for all content.