More than half of social media users don’t trust the information related to the 2024 U.S. elections that appears in their feeds, and a significant number believe that many of the political and election-related images and videos they see are AI-generated, according to a new Tech Policy Press/YouGov poll.
The online poll, conducted June 12-14 among more than 1,000 U.S. adults, asked respondents to consider both the reliability of the election-related information about political issues and candidates they found on social media and the reliability of the platforms’ election information tools.
Responses to the question, “How trustworthy do you find the information about political issues and candidates you see on the social media platforms you use?” broken down by political party self-identification.
Just over half of survey respondents feel they cannot trust information about political issues or candidates on social media, with notable differences across party lines: Republicans are more likely to distrust election-related information, with 27% saying they are “very distrustful” of it, 10 percentage points higher than the share of Democrats who say the same, though members of both parties broadly distrust what they see on social media.
In contrast, only 39% of respondents said they had some trust in the election-related information they found on social media. Democrats were significantly more likely than Republicans to say they had “some trust” in this information, by a margin of 15 percentage points.
Across party affiliations, only 6% of respondents felt they could trust election-related information on social media “very much,” but men were three times more likely than women to report a high level of trust in the political content in their social feeds.
Responses to the question, “How trustworthy do you find the information about political issues and candidates you see on the social media platforms you use?” broken down by respondent gender.
Of those who said they had difficulty ranking their trust in election-related information, one-third said social media helps them find reliable news sources that inform how they vote, while 40 percent said that although they use social networking sites, they get their election information elsewhere.
Respondents believe they are seeing AI-generated content about the 2024 election in their feeds.
Nearly half of respondents believe they have seen AI-generated visual content, such as images or videos, related to the 2024 U.S. election, and nearly a quarter believe they encounter it regularly. It is unclear how accurately respondents estimate the amount of AI-generated content they encounter online, as there is not yet solid empirical evidence on its actual prevalence on social media. To date, no major social media platform has implemented tools that can reliably detect AI-generated content. Ahead of the November election, Meta recently announced that it will label AI-generated images on Facebook, Instagram, and Threads that were created with its own tools or carry an identifiable marker.
Responses to the question, “How often have you seen AI-generated content (images or videos) related to politics and the 2024 U.S. presidential election on your social media feed?” broken down by political party.
Part of the problem may stem from the increasing difficulty researchers face in accessing social media data, according to Yunkang Yang, an assistant professor at Texas A&M University. Yang is one of the authors of a study that measured the prevalence of visual misinformation on Facebook during the 2020 election cycle. “Elon Musk has made it almost too expensive for most academics to access X (formerly Twitter) data,” he told Tech Policy Press. Another example he cited is Meta, which will shut down CrowdTangle, a tool many researchers use to study Facebook and Instagram, in August. “To answer questions like how prevalent AI-generated content is on social media, social media companies need to make their data easily accessible to researchers,” he said.
In general, people, and even experts, have a hard time distinguishing real from fake online content. “One way to distinguish realistic AI-generated visual content is plausibility,” Yang says. For example, he explains, an image of a young Marlon Brando, who died in 2004, standing next to President Joe Biden at a campaign event would strike most Americans as implausible. “Not only that, but it’s very hard for the average media consumer to tell the difference between a real photo and an AI-generated one,” Yang says. “This is really worrying.”
The poll results could raise concerns even if AI-generated media isn’t as prevalent as people believe. “If people believe they’re seeing a lot of AI-generated content, they may be more skeptical of real content or believe that real content may be AI-generated,” said Kaylin Jackson Schiff, an assistant professor of political science at Purdue University and co-director of the Governance and Responsible AI Lab (GRAIL). “That’s the idea behind the concept of the ‘liar’s dividend,’ coined by two legal scholars and explored in our study. It suggests that awareness of real deepfakes makes false claims that genuine content is fake more believable.”
Her colleague, Daniel S. Schiff, an assistant professor of technology policy at Purdue University and co-director of GRAIL, says that awareness of the risks of AI-generated content might lead people to treat politicians’ cries of “deepfake” as more credible. “We seem to be in a bit of a dilemma,” he told Tech Policy Press. “Like media literacy, digital literacy is generally thought to be an important strategy for mitigating misinformation. But our recent research suggests that people who are more aware of deepfakes are more susceptible to politicians’ false claims of misinformation.” The two co-authored a study of how politicians use false claims about fake news and deepfakes to evade accountability, exploiting the “liar’s dividend.”
Many social media platforms offer users tools or features to fact-check information that appears in their feeds. However, more than half of survey respondents said they have never used these tools for information related to the 2024 election. A quarter of respondents said they did not know which platforms offer such tools or how to access them, and nearly a third said they avoid a platform’s fact-checking features because they distrust them.
Responses to questions about the use of fact-checking features and tools offered by social media platforms, broken down by political party.
The most used platforms among respondents were Facebook and YouTube, with 75% and 59%, respectively, saying they had recently visited them. Nearly half of respondents also said they had recently used Instagram, while only a third had recently visited X (formerly Twitter).
Responses to questions about respondents’ engagement in online political discussions during the 2024 U.S. election compared to 2020, broken down by gender.
Compared to the 2020 election, women appear to be less politically engaged on social media this election cycle than men. About one-third of respondents are participating in online political discussions to roughly the same extent as they did in the last election. Another third have stopped participating online altogether, with female respondents significantly more likely than men to have disengaged. Among those who have increased their online political activity, men outnumber women two to one.
The full dataset can be downloaded here.
Correction: A previous version of this article stated that more than half of respondents believe they have seen AI-generated visual content, such as images or videos, related to the 2024 U.S. election. The correct figure is “nearly half,” not “more than half.” We regret the error.