China’s Increasingly Aggressive Tactics for Foreign Disinformation Campaigns

On August 29, Meta reported that it had recently taken down thousands of accounts and Facebook pages that “were part of the largest known cross-platform covert operation in the world,” run by “geographically dispersed operators across China.” The announcement and its detailed analysis made headlines around the world, drawing mainstream attention to the kind of information that is usually of interest mainly to cybersecurity firms and digital policy wonks.

But such revelations are just the tip of the iceberg when it comes to Beijing’s evolving campaign to feed targeted disinformation – demonstrably false or misleading content, often through the use of fake accounts – to social media users around the world.

A review of numerous forensic investigations, think tank reports, platform transparency reports, and media coverage published since June points to a disconcerting if unsurprising trend: Beijing-linked actors are continually engaging in covert disinformation or other online influence operations. And they are experimenting with tactics that are more sophisticated, harder to detect, and potentially more effective than in previous years, while also tackling issues that cut to the heart of public debate in democracies.

This reality reaffirms the findings of Freedom House’s “Beijing’s Global Media Influence” report, published last year, and demonstrates that democracies must invest more resources in the detection and mitigation of the Chinese regime’s disinformation efforts.

As they develop an appropriate response, policymakers, major technology companies, civil society researchers, and ordinary users should bear in mind the following features of Beijing’s latest disinformation practices.

Expansion to New Platforms and Audiences

The first documented Beijing-backed global disinformation campaigns date back to 2017 and typically targeted English and Chinese speakers on large platforms like Twitter (now X), Facebook, and YouTube. But recent reports show that the Chinese Communist Party (CCP) regime’s manipulation efforts are spreading across many more platforms, languages, and geographic audiences.

The network identified in last month’s Meta takedown – a persistent revival of a previously exposed and thwarted network known as Spamouflage – notably extended beyond Facebook and Instagram. Links were found to some 50 other applications, including TikTok, Reddit, Pinterest, and Medium, as well as local online forums in Asia and Africa. Meta suggested that the pivot to smaller platforms may have been a deliberate response to larger firms’ increased monitoring, detection, and removals.

A separate report published by Microsoft on September 7 uncovered a range of influence efforts, from networks of fake accounts to a corps of Chinese state-linked influencers who masquerade as independent commentators. The company counted at least 230 such state media employees or affiliates across multiple platforms, posting in 40 languages to a combined audience of 103 million people. The report described an expansion over the past year to new languages – like Indonesian, Croatian, and Turkish – and new platforms – including Vimeo, Tumblr, and Quora – by both human influencers and automated accounts.

More Sophisticated Tactics for Increasing Engagement and Avoiding Detection

While some networks, like the one exposed by Meta, have apparently struggled to gain genuine engagement from social media users, other recent initiatives have had more success. The Microsoft report found an emerging use of images created with generative artificial intelligence (AI) tools and shared as memes by accounts mimicking U.S. voters from across the political spectrum. Despite recognizable AI flaws, such images have reportedly been recirculated by real users. Indeed, video and other visual media are a recurring feature of the content now being shared, according to the report.

Other effective tactics include exploiting popular hashtags related to current events, as has occurred in campaigns on Australian political issues, or programming fake accounts to post comments in the first person.

Another tactic uses unattributed images to avoid easy detection of a link to Chinese state media. An investigation by the cybersecurity firm Nisos found that a network of accounts in Spanish and Portuguese, which had not been labeled as Chinese state media under Twitter’s former policy, posted screenshots of state media articles or used images and videos from the China News Service without attribution. In another case, the Australian Strategic Policy Institute (ASPI) found that Beijing-backed disinformation networks were replenishing their ranks after account takedowns by purchasing fake personas from transnational criminal organizations in Southeast Asia, and using them to post false or divisive content.

Elaborate Schemes to Launder Content and Narratives

Among the most striking discoveries from the recent set of investigations are the various ways in which proxy entities or accounts on multiple platforms are used to “launder” content, increasing its credibility and obscuring its origins to the point that even some people involved in producing it are unaware of its source.

One example unveiled in the Meta investigation centered on an error-laden, 66-page “research report” claiming that the U.S. government was hiding the origin of COVID-19. The document was published on Zenodo.org, then promoted by fake accounts via two distinct videos on YouTube and Vimeo; an article based on those items was then posted on LiveJournal, Medium, and Tumblr, and finally, accounts on Facebook, X, Reddit, and other platforms amplified these links.

In another notable set of incidents revealed by the cybersecurity firm Mandiant in July 2023, a Chinese public relations firm known to have ties to the government piggybacked on U.S. freelancer recruitment websites and newswire services. This enabled the firm to enlist unwitting Americans to create content that aligned with CCP narratives or criticized U.S. policies, and then to publish the resulting material on legitimate news website domains via the newswire services.

In one case from mid-2022, the public relations firm successfully recruited a musician and actor to organize small real-world protests in Washington, D.C., images of which were then circulated as part of an influence campaign to discredit that year’s International Religious Freedom Summit and U.S. lawmakers’ efforts to ban the importation of products made by Uyghur forced labor.

Use of Smears and Incitement to Discredit Factual Reporting and Disrupt Democratic Societies

In terms of topical focus, these disinformation campaigns have apparently doubled down on a long-term strategy of moving beyond simple pro-CCP messaging to actively amplifying discord on key political and social issues and damaging the reputations of activists, journalists, policymakers, and democratic governments.

The network that was active on Meta platforms sought to harass or discredit journalists in the United States (such as Jiayang Fan), political commentators and dissidents (such as Chen Pokong), and occasionally elected officials (including Republican Representative Jim Banks and Democratic Representative Nancy Pelosi). In an incident from May that was exposed in August by the Canadian government, a network on Tencent’s WeChat platform engaged in a coordinated campaign to smear the reputation of member of Parliament Michael Chong, whose father is from Hong Kong and who has been a vocal critic of the increasing repression there and in China.

The disinformation networks have also taken aim at think tanks and other nongovernmental organizations whose investigations of the CCP’s transnational repression and disinformation campaigns have been especially effective at spurring public awareness and policy responses. These include Safeguard Defenders, based in Madrid, and ASPI, both of which have been subjected to aggressive and wide-ranging campaigns of harassment, threats, and impersonation. ASPI found that 70 percent of the top 50 Chinese-language search results for the organization’s name on YouTube had been “posted by CCP-linked inauthentic accounts.”

With respect to divisive topics, the AI-generated memes discovered by Microsoft revolved around issues like gun violence and the Black Lives Matter movement in the United States. ASPI’s research is replete with examples of China-linked fake accounts trying to influence public discourse on domestic social issues such as gender, sexual assault, and Indigenous people’s rights. The accounts have also tried to amplify public frustration over cost-of-living pressures and to spread fabricated scandals targeting Australian institutions like political parties, Parliament, and the banking system.

New Vulnerability to Exposure and Pushback

CCP propagandists have good reason to put greater energy into hiding their disinformation efforts. Thanks in part to the accumulating results of investigations into the tactics now associated with China-linked campaigns, as well as a recent set of U.S. federal indictments that clarified links between networks of fake accounts and China’s Ministry of Public Security, it is becoming easier for observers to trace and attribute specific campaigns to Beijing.

Meta and Microsoft, for example, were able to make relatively definitive attributions, relying on shared patterns of posting, the locations of account operators, the use of common proxy or server infrastructure, or information available on the Chinese internet regarding the government ties of public relations companies, cybersecurity firms, and fake news websites. The Canadian government found it “highly probable” that the campaign against Chong was linked to Beijing, while ASPI said the behavior it documented was similar to that of previously exposed CCP-linked covert networks.
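
To make the “shared patterns of posting” signal concrete, below is a minimal, purely illustrative Python sketch of how an analyst might flag accounts with near-identical posting schedules. The account names, timestamps, and the 0.9 similarity threshold are all invented for this example; real attribution work, as the reports describe, combines many additional signals such as shared infrastructure and operator location, and none of this reflects Meta’s or Microsoft’s actual tooling.

```python
# Illustrative sketch only: a toy "coordinated posting" detector.
# All account names, timestamps, and the 0.9 threshold are hypothetical.
from itertools import combinations
from math import sqrt
from datetime import datetime

def hour_histogram(timestamps):
    """Bucket posting times into a 24-slot hour-of-day histogram."""
    hist = [0] * 24
    for ts in timestamps:
        hist[ts.hour] += 1
    return hist

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def flag_coordination(accounts, threshold=0.9):
    """Return pairs of accounts whose posting schedules are near-identical."""
    hists = {name: hour_histogram(ts) for name, ts in accounts.items()}
    return [
        (a, b, round(cosine(hists[a], hists[b]), 3))
        for a, b in combinations(hists, 2)
        if cosine(hists[a], hists[b]) >= threshold
    ]

# Hypothetical data: acct_b mirrors acct_a's schedule exactly; acct_c does not.
accounts = {
    "acct_a": [datetime(2023, 8, 1, h) for h in (1, 2, 3, 9, 10)],
    "acct_b": [datetime(2023, 8, 1, h) for h in (1, 2, 3, 9, 10)],
    "acct_c": [datetime(2023, 8, 1, h) for h in (14, 18, 22)],
}
print(flag_coordination(accounts))  # [('acct_a', 'acct_b', 1.0)]
```

In practice, investigators treat such a schedule match as only one weak signal among many, since legitimate accounts in the same time zone can also post on similar rhythms.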

Despite the exposure, however, there is no indication that the Chinese regime plans to rein in its manipulation. In fact, it is almost certainly gearing up for more aggressive activity centered on the 2024 presidential elections in the United States and Taiwan.

The recent assessments noted above highlight some of the strengths in current democratic responses that help safeguard the integrity of online communications and political processes, including tech firms’ transparency reports, government monitoring, and investigations by cybersecurity firms. But they also spotlight vulnerabilities, such as the inconsistency of monitoring and takedowns across platforms, particularly newer and more niche services, and the extent to which CCP-linked networks take full advantage of these gaps.

Under its new leadership, X has dismantled many of the policies and teams that had increased transparency and thwarted inauthentic behavior on Twitter. Meanwhile, TikTok, owned by the China-based ByteDance, acknowledged removing hundreds of accounts linked to the Meta-exposed network, but only after being queried by reporters. And Tencent has yet to share information about campaigns that others have detected on its WeChat platform.

In this context, it is increasingly important for the public, civil society, U.S. policymakers, and their democratic peers to apply pressure and create incentive structures that compel all technology companies to treat the threat of disinformation – including from Beijing – with the seriousness it deserves.
