China ‘Very Likely’ to Exploit AI to Influence Canada’s General Election: Intelligence Agency

Voters head to cast their ballots at the Fairbanks Interpretation Centre in Dartmouth, N.S., during Canada’s federal election on Oct. 21, 2019. The Canadian Press/Andrew Vaughan

By Carolina Avendano

China, Russia, and Iran are “very likely” to use artificial intelligence tools to attempt to interfere in Canada’s general election this year, with Beijing being the most likely to generate fake content and launch targeted propaganda campaigns aimed at spreading disinformation among Canadian voters, says one of Canada’s key security and intelligence agencies.


The “malicious” use of AI poses a growing threat to Canadian elections, as foreign actors increasingly exploit these technologies to interfere in global elections by targeting voters, spreading disinformation, and harassing politicians, says the Communications Security Establishment Canada (CSE) in the 2025 update of its Cyber Threats to Canada’s Democratic Process report.


Canada is particularly vulnerable, as the majority of its citizens receive their news and information from the internet or social media, thus “increasing their exposure to AI-enabled malign influence campaigns,” the CSE said. Meanwhile, data from Canadians, as well as public and political organizations, can be mined from online sources, enabling foreign actors to create fake content and craft tailored propaganda campaigns.


“We assess that the PRC, Russia, and Iran will very likely use AI-enabled tools to attempt to interfere with Canada’s democratic process before and during the 2025 election,” reads the report.


“When targeting Canadian elections, threat actors are most likely to use generative AI as a means of creating and spreading disinformation, designed to sow division among Canadians and push narratives conducive to the interests of foreign states.”


In one of its key findings, the report says the use of generative AI by hostile actors to interfere in elections worldwide, including in Europe, Asia, and the Americas, has increased over the past two years, with 102 reported cases of these tools being used to interfere with or influence 41 elections held between 2023 and 2024. It notes that generative AI tools are capable of producing new content based on given data, including new “text, images, audio, video, or software code.”


China’s Threat to Canadian Elections


The People’s Republic of China (PRC) is the most likely foreign actor to target Canadian elections, says the agency, while Russia and Iran “almost certainly” view Canadian elections as lower-priority targets compared with elections in the United States and the UK. If Russia or Iran do target Canada, “they are more likely to use low-effort cyber or influence operations.”


The agency cites as an example the 2021 Canadian general election, in which actors likely or known to be affiliated with the Chinese regime spread non-AI-enabled disinformation about politicians running for office whom they deemed to be “anti-PRC.”


Two years later, a propaganda campaign called “Spamouflage Dragon,” likely linked to China, spread disinformation targeting dozens of MPs, including Prime Minister Justin Trudeau, Conservative Leader Pierre Poilievre, and several cabinet members, the report notes, adding that the network has previously used generative AI to target Mandarin-speaking figures in Canada.


In a more recent foreign interference case, Liberal leadership candidate Chrystia Freeland was targeted by a “coordinated and malicious” information campaign, including news articles disparaging her, which was traced to a popular news account on Chinese social media and messaging application WeChat, said Global Affairs Canada in a Feb. 7 statement. The news account is an anonymous blog that has been previously linked to the Chinese regime.


Another key finding of the report is that some foreign nation states, particularly the PRC, are undertaking “massive data collection campaigns” targeting “democratic politicians, public figures, and citizens around the world.”


The risk grows when advances in predictive AI allow those states to “quickly query and analyze these data,” the report says, as this capability enables them to improve their understanding of political environments in democratic countries. Predictive AI tools, rather than producing new content, are designed to analyze data by recognizing patterns within it.


“By possessing detailed profiles of key targets, social networks, and voter psychographics, threat actors are almost certainly enhancing their capabilities to conduct targeted influence and espionage campaigns,” the report reads.


The report notes that it is “likely” the Chinese regime has used social media platform TikTok to promote pro-PRC narratives in democratic countries and to censor narratives it identifies as anti-PRC. It adds that operations aimed at impacting “user beliefs and behaviours on a massive scale” are “likely” to have targeted voters ahead of an election on at least one occasion. TikTok is owned by PRC-based company ByteDance.


“We assess it very likely that PRC-affiliated actors will continue to specifically target Chinese-diaspora communities in Canada, pushing narratives favourable to PRC interests on social media platforms,” the report says.


Election Integrity


While it’s “very unlikely” that disinformation or AI-enabled cyber activity would undermine the integrity of Canada’s upcoming general election, ongoing AI advancement and the growing proficiency of cyber adversaries in using these technologies mean “the threat against future Canadian general elections is likely to increase,” the CSE said in its report.


Although Canada conducts its general elections by paper ballot, much of the electoral infrastructure is digitized, including voter registration systems, election websites, and communications within election management bodies, making those systems vulnerable to malicious cyber activity.


“Cyber actors can use generative AI to quickly create targeted and convincing phishing emails, potentially allowing them illicit entry to this infrastructure, where they can install malware or exfiltrate and expose sensitive information,” says the agency.


The report cites a case from last July, in which Chinese regime-affiliated hackers gained access to UK electoral registers containing the names and addresses of everyone registered to vote between 2014 and 2021, according to the UK government. “AI-enabled cyber actors can use data such as this to develop propaganda campaigns tailored to specific audiences,” the CSE report says.


The agency says it doubts foreign actors will conduct a “destructive cyber attack against election infrastructure, such as attempting to paralyze telecommunications systems on election day.” Meanwhile, it notes that Canadian politicians and political parties are likely to be targets of “hack-and-leak operations,” which involve stealing victims’ information and publicly releasing sensitive data to seek financial gain, damage reputations, or cripple organizations.


From theepochtimes.com
