In this election year, concerns have long been raised about the threats artificial intelligence (AI) poses to the integrity of the electoral process. According to a recent Pew poll, more than half of US adults are "extremely or very concerned" about the negative influence of AI on the elections.
The 2024 Nobel Peace Prize laureate Maria Ressa highlighted the dangers posed by the rise of GenAI, which she argued is fueling a “tech-enabled Armageddon”. She predicted that GenAI would significantly impact the election.
Hillary Clinton, the former secretary of state and 2016 presidential candidate, said AI could be used by foreign actors to disrupt the electoral process in the US. “Anybody who’s not worried is not paying attention,” Clinton said during a panel discussion at an event focused on AI's impact on the 2024 global elections.
While such claims are concerning, AI’s impact on the elections may be overblown.
There is no denying that with the rise of deepfakes, GenAI has had a clear impact on global politics. Earlier this year, residents of New Hampshire received robocalls featuring an AI-generated voice impersonating President Joe Biden, discouraging them from voting. In response, the FCC banned robocalls that use AI-generated voices.
In England, an audio deepfake falsely depicted London Mayor Sadiq Khan making provocative statements before a pro-Palestinian march. This event came close to causing a “serious disorder” in the city, according to Khan.
A few months ago, former President Donald Trump drew attention to deepfake technology by sharing AI-generated images that depicted Taylor Swift endorsing him, without her consent. While the singer did not publicly respond to the incident, she has been vocal in the past about her political views and her opposition to Trump.
Researchers from Purdue University studied the spread and impact of AI deepfakes, analyzing more than 500 incidents of political deepfakes. One of the key findings of the study is that deepfakes are often designed to reinforce the beliefs of individuals who are already inclined to accept their messaging. Other studies also indicate that most types of political persuasion have minimal impact.
While the Purdue study suggests that deepfakes may not be as impactful as initially feared, concerns about the broader use of AI in elections remain. The use of GenAI as a tool for foreign election interference is a pressing concern. The Russian interference in the 2016 election underscored the threat of foreign meddling in US politics. However, in its 2024 Adversarial Threat Report, Meta claimed limited use of GenAI in foreign election interference efforts.
According to a report from Meta, “We continue to monitor and assess the risks associated with evolving new technologies like AI. Our findings so far suggest that GenAI-powered tactics provide only incremental productivity and content-generation gains to the threat actors, and have not impeded our ability to disrupt their influence operations”.
Another key concern about AI is its use for microtargeting campaigns, where voter data is analyzed to create precise and tailored messaging. This raises issues around privacy, manipulation, and voter polarization. However, how impactful has this method been?
An MIT study challenges the prevailing assumptions about the effectiveness of microtargeting. “Our research found no evidence that complex microtargeting works better than simple demographic targeting,” says David Rand, MIT professor of management science and co-author of the study. “Single-attribute targeting proved just as effective as more sophisticated approaches.”
Admittedly, the full impact of GenAI on this election cycle may not yet be fully understood. While the evidence so far suggests AI's impact on this campaign has been minimal, experts caution that as the technology continues to improve, it could play a bigger role in future elections. GenAI could help streamline the electoral process, for example by automating signature verification and election monitoring, but it could also produce ever more convincing deepfakes, accelerating the spread of disinformation and undermining trust in political messaging.