China, Iran, Russia Using AI to Influence US Elections, Intelligence Community Warns

Foreign adversaries are escalating efforts as the 2024 election approaches, says a report from the national intelligence chief.

Foreign powers are increasingly using artificial intelligence (AI) to influence how Americans vote, according to a new intelligence report released just 45 days ahead of the 2024 presidential election.

The Office of the Director of National Intelligence (ODNI) released a security update on Sept. 23 warning that China, Iran, and Russia are ramping up influence efforts that use AI tools to shape public opinion in the United States. The findings are consistent with a warning the intelligence community issued earlier in September that foreign actors are seeking to exacerbate societal divisions and sway voters.

“These actors most likely judge that amplifying controversial issues and divisive rhetoric can serve their interests by making the United States and its democratic system appear weak and by keeping the U.S. government distracted with internal issues instead of pushing back on their hostile behavior in other parts of the world,” the Sept. 23 update reads.

Some of the foreign powers’ efforts include laundering deceptive content through prominent figures or releasing fabricated “leaks” intended to appear controversial. While AI has helped accelerate certain aspects of foreign influence operations targeting the United States, the intelligence community notes that AI has yet to revolutionize these tactics.

While China's activity has been more general, seeking to shape global views of China and amplify divisive U.S. political issues, Iran and Russia have used AI to create content more directly related to American elections.

Russia has generated the most AI content related to the U.S. election, the intelligence community report states. Moscow's efforts span text, images, audio, and video. The Kremlin's activities involve spreading conspiratorial narratives and AI-generated content depicting prominent U.S. figures, aimed at deepening divides on issues such as immigration.

Iran, on the other hand, has used AI to generate social media posts and inauthentic news articles for websites posing as legitimate news sources. Such content, appearing in both English and Spanish, has targeted American voters across the political spectrum, particularly focusing on divisive issues such as the Israel-Gaza conflict and the U.S. presidential candidates.

China’s use of AI in its influence operations is primarily focused on shaping perceptions of China rather than directly influencing the U.S. election, per the report. Chinese actors have also employed AI-generated content, such as fake news anchors and social media profiles, to amplify domestic U.S. political issues like drug use, immigration, and abortion, without explicitly backing any candidate.

In its Sept. 6 election security update, the intelligence community stated that, at the time, China was mostly focused on influencing down-ballot races and was not yet attempting to influence the presidential race.

The foreign actors remain focused more on influencing perceptions than on directly interfering in the election process itself, the latest report states.

The report warns that adversaries will likely continue ramping up AI-driven disinformation campaigns as election day nears, posing a risk to U.S. democratic processes.

In March, The Epoch Times reported on the rising influence of political memes on election discourse. At the time, Pamela Rutledge, director of the Media Psychology Research Center, told The Epoch Times that deepfakes, realistic images, videos, and audio typically created by generative AI software, can and do effectively fool people.

Even if the content is obviously fake or of low quality, Rutledge said that the messages can still be persuasive if they confirm people’s political biases.

A survey published in May by the Imagining the Digital Future Center at Elon University found that 78 percent of Americans believe the presidential election will be influenced by “abuses” related to AI-generated content that spreads on social media.

“Many aren’t sure they can sort through the garbage they know will be polluting campaign-related content,” Lee Rainie, director of the Digital Future Center, said in a statement.

In a testament to the potential impact of AI-driven content on elections, 69 percent of survey respondents told the center they are not confident most voters can differentiate between fabricated and authentic photos, with similar percentages expressing concern about others’ ability to detect fake audio and video.

Austin Alonzo contributed to this report.
