How a fake network pushes pro-China propaganda
By Flora Carmichael
BBC News

Published 5 August
Graphic of anonymous people at computers with a Chinese character on all their screens

A sprawling network of more than 350 fake social media profiles is pushing pro-China narratives and attempting to discredit those seen as opponents of China's government, according to a new study.
The aim is to delegitimise the West and boost China's influence and image overseas, the report by the Centre for Information Resilience (CIR) suggests.
The study, shared with the BBC, found that the network of fake profiles circulated garish cartoons depicting, among others, exiled Chinese tycoon Guo Wengui, an outspoken critic of China.
Other controversial figures featured in the cartoons included "whistleblower" scientist Li-Meng Yan, and Steve Bannon, former political strategist for Donald Trump.
Each of these individuals has themselves been accused of spreading disinformation, including false information about Covid-19.
Cartoon depicting Steve Bannon as a demon, Li-Meng Yan with a forked tongue, and Guo Wengui with a tail and holding an American flag
Image caption: The cartoons shared seek to ridicule (left-right) Bannon, Li-Meng Yan and Guo Wengui
Some of the accounts - spread across Twitter, Facebook, Instagram and YouTube - use fake AI-generated profile pictures, while others appear to have been hijacked after previously posting in other languages.
There is no concrete evidence that the network is linked to the Chinese government, but according to the CIR, a non-profit group which works to counter disinformation, it resembles pro-China networks previously taken down by Twitter and Facebook.
These networks amplified pro-China narratives similar to those promoted by Chinese state representatives and state media.
Much of the content shared by the network focuses on the US, and in particular on divisive issues like gun laws and race politics.
One of the narratives pushed by the network paints the US as having a poor human rights record. Posts from the fake accounts cite the murder of George Floyd among examples, as well as discrimination against Asians.
Tweet saying: "Nearly 20,000 people die from gun violence in the United States in 2020"
Image caption: This account has since been suspended by Twitter for violating its rules
Some accounts repeatedly deny human rights abuses in the Xinjiang region, where experts say China has detained at least a million Muslims against their will, calling the allegations "lies fabricated by the United States and the West".
"The aim of the network appears to be to delegitimise the West by amplifying pro-Chinese narratives," said Benjamin Strick, the author of the CIR report.
There are strong similarities between this network and the so-called "Spamouflage Dragon" propaganda network identified by social analytics firm Graphika.
Commenting on the new study, Ira Hubert, a senior investigative analyst at Graphika, said: "The report shows that on US platforms, there was no 'honeymoon' in the first months of the Biden administration.
"The network put out a steady mix of anti-US content, for example cheering US 'defeat' ahead of its withdrawal from Afghanistan and painting the US as a poor ally whose aid to India was inadequate during some of its worst months battling Covid."
How was the network uncovered?
The CIR mapped hashtags favoured by previously identified networks, unearthing more accounts that showed signs of being part of an influence operation.
Tell-tale signs included high levels of activity pushing propaganda narratives and repeated use of the same hashtags. Newly created accounts, accounts with usernames that appeared to be randomly generated, and accounts with very few followers also raised red flags.
Some profiles were created to post original content, while others only shared, liked and commented on those original posts, to help them reach a wider audience.
This kind of activity is often referred to as "astroturfing" because it is designed to create the appearance of a grass-roots campaign.
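To make the red flags listed above more concrete, here is a minimal, hypothetical sketch of how such signals could be scored in Python. It is not the CIR's actual methodology; the field names (created_at, username, followers_count, posts) and thresholds are illustrative assumptions only.

```python
# Hypothetical sketch: scoring an account against the tell-tale signs described
# above (newly created account, random-looking username, very few followers,
# heavy reuse of a campaign's hashtags). Field names and thresholds are
# illustrative assumptions, not the CIR's real criteria.
import re
from datetime import datetime, timezone

def suspicion_score(account: dict, campaign_hashtags: set) -> int:
    """Count how many red flags an account exhibits (0-4)."""
    score = 0

    # Newly created account (here: younger than 90 days).
    # Assumes created_at is ISO 8601 with a timezone, e.g. "2021-03-15T08:00:00+00:00".
    created = datetime.fromisoformat(account["created_at"])
    if (datetime.now(timezone.utc) - created).days < 90:
        score += 1

    # Username that looks randomly generated, e.g. "user38291047".
    if re.fullmatch(r"[A-Za-z]{0,8}\d{6,}", account["username"]):
        score += 1

    # Very few followers.
    if account["followers_count"] < 10:
        score += 1

    # Heavy, repeated use of the campaign's hashtags across the account's posts.
    used = [h for post in account["posts"] for h in post["hashtags"]]
    if used and sum(h in campaign_hashtags for h in used) / len(used) > 0.5:
        score += 1

    return score
```

Accounts scoring high on several of these signals at once would then be reviewed manually, since any single signal on its own is weak evidence.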
A graph showing different nodes in a network, with different colours representing connections; a central cluster shows many red lines gathered around the middle
Image source: Benjamin Strick / CIR
Image caption: The study visualises how different accounts amplify each other - each small node represents a Twitter account
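A graph like the one pictured can be reproduced with standard open-source tooling. The sketch below is purely illustrative (it is not the CIR's own pipeline) and assumes a hypothetical edge list of amplifier-to-original-poster pairs.

```python
# Illustrative sketch: building an amplification graph where each node is an
# account and each edge means one account shared or retweeted another's post.
# The edge list below is made up for demonstration purposes.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("acct_a", "seed_1"), ("acct_b", "seed_1"),
    ("acct_c", "seed_1"), ("acct_c", "seed_2"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Accounts that are amplified the most end up at the centre of the cluster,
# so scale node size by in-degree (number of accounts boosting them).
in_degree = dict(G.in_degree())
sizes = [100 + 300 * in_degree[n] for n in G.nodes()]

pos = nx.spring_layout(G, seed=42)  # force-directed layout, as in the figure
nx.draw(G, pos, node_size=sizes, with_labels=True, font_size=8)
plt.show()
```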
Fake people
Many of the fake profiles used AI-generated photos - a relatively new phenomenon in which computers create realistic-looking images of people who don't exist. Unlike stolen profile images of real people, the AI-generated images, which are created by a type of generative adversarial network called StyleGAN, cannot be traced using a reverse image search.
The use of fake profile pictures in disinformation campaigns is becoming more common as users and platforms become more wary of suspicious accounts.
The CIR used various techniques to identify fake profile pictures in the network. The synthetic images always put the eyes in the same location, so lining them all up can help identify a collection of fake profile pictures.
Normally, a random collection of profile pictures would display much more variety in the cropping and the alignment of the eyes.
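The eye-alignment check can be approximated in code. The sketch below is an assumption-laden illustration, not the CIR's tooling: it assumes the open-source face_recognition library for landmark detection, that all profile pictures share the same dimensions, and an arbitrary pixel tolerance.

```python
# Rough sketch of the eye-alignment check described above. StyleGAN outputs
# place the eyes at almost identical pixel coordinates, so very low variance
# in eye position across a batch of same-sized profile pictures is a red flag.
# The face_recognition library and the 3-pixel tolerance are assumptions.
from typing import Optional

import numpy as np
import face_recognition

def eye_centre(image_path: str) -> Optional[np.ndarray]:
    """Return the (x, y) midpoint between the eyes, or None if no face is found."""
    image = face_recognition.load_image_file(image_path)
    landmarks = face_recognition.face_landmarks(image)
    if not landmarks:
        return None
    eyes = landmarks[0]["left_eye"] + landmarks[0]["right_eye"]
    return np.mean(eyes, axis=0)

def looks_like_stylegan_batch(image_paths, tolerance=3.0) -> bool:
    """True if eye positions are nearly identical across the supplied images."""
    centres = [c for c in (eye_centre(p) for p in image_paths) if c is not None]
    if len(centres) < 2:
        return False
    spread = np.std(np.vstack(centres), axis=0)  # pixel spread in x and y
    return bool(np.all(spread < tolerance))
```

A genuine, randomly assembled set of profile photos would fail this test, because real users crop and frame their pictures in many different ways.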
A collection of images of six people with a line joining all of their eyes, showing how they are all perfectly in line
Image source: Benjamin Strick / CIR
Image caption: The network uses images of people who do not exist
Other signs include blurred edges around the hair, teeth at strange angles, and blurred objects around the face.
Many of the Facebook accounts believed to be part of the network appeared to have Turkish names. These accounts may once have belonged to real people but were later hijacked or sold and given new profile pictures.
Hijacked accounts also spread the network's pro-China narratives on YouTube. Accounts that had previously posted in English or German and then lain dormant for years suddenly started posting Chinese language content from official Chinese state broadcasters.
Image showing multiple Twitter accounts tweeting the same cartoon, which shows Li-Meng Yan being crushed by a boot
Image source: Benjamin Strick / CIR
Image caption: The report found spam tweets using the same text, tags and images, all uploaded on the same day
The CIR shared its research with the social media platforms involved.
Facebook has removed the accounts on its platform highlighted in the study.
A Facebook spokesman said: "In September 2019, we removed a network of spam activity that posted lifestyle and political clickbait, primarily in Chinese. This network had almost no engagement on our platform, and we continue to work with researchers and our industry peers to detect and block their attempts to come back, like those accounts mentioned in this report."
YouTube also terminated accounts in the network for violating its community guidelines.
Twitter said it had also now removed almost all of the accounts identified by the CIR, as well as a number of others engaged in similar behaviour, and said its investigations were still ongoing.
"When we identify information operation campaigns that we can reliably attribute to state-linked activity - either domestic or foreign-led - we disclose them to our public archive," the company added.

Analysis - Kerry Allen, BBC Monitoring China media analyst
Over the past decade, billions of dollars have gone into funding the growth of China's presence on international platforms.
But with Facebook, Twitter and YouTube blocked in the mainland, and only accessible via a VPN, the country has struggled to get its own platforms recognised as viable competitors to these Western juggernauts. It has needed not only Chinese voices, but foreign voices, to show that the country has "arrived".
"Wolf warrior diplomacy" has emerged, with officials using Twitter accounts to fly the flag for Communist Party rhetoric. China wants to portray itself as a friend to the world - and not a repressive, authoritarian state, as it perceives Western nations make it out to be.
With more than one billion internet users, China certainly has the capability to orchestrate large-scale social media campaigns, and target what it sees as anti-China voices with a wealth of opposing opinions.
But with English-language skills limited in China, there are often clumsy tell-tale signs that a Chinese troll is behind such accounts. Many rely on automatic translation software to turn Chinese messages into English, meaning such messages are riddled with typos, or contain clumsy grammatical structures.
And with many Western outlets inaccessible to them within China, users generally have very little knowledge of who they are meant to be targeting, so they simply piggyback off the replies of others from within the same network.
Graphics by Simon Martin
There are similar CCP trolls here as well.
 
.
China bad

...according to the pro-U.S., propaganda-pushing, U.S. puppet-regime-funded fake news agency called the BBC.

Citing claims and accusations from a pro-U.S., propaganda-pushing fake intelligence group called the Centre for Information Resilience (CIR), whose director is a "former" BBC employee, and which is merely a cover name for a collection of active U.S. puppet regime employees, "former" U.S. puppet regime employees, subjects of U.S. regime-run propaganda training programmes, and active and "former" employees of other U.S. propaganda-pushing fake news agencies.

Who, if they aren't busy spreading their clumsy lies and disinformation about China, also like to deflect British war crimes and corruption in the Middle East and Africa onto their victims, or to instigate wars in Burma and Sudan in the interest of the U.S. regime.

Or, like every other fake pro-U.S. propaganda-pushing agency, they like to astroturf and flex the partnerships within the huge network of fake pro-U.S. mouthpieces they belong to: their partner the U.S. regime-funded, pro-U.S. propaganda-pushing Institute for War & Peace Reporting; their partner the Africa Resilience Network, which is just another cover for the U.S. regime-funded Institute for War & Peace Reporting; the Africa Uncensored project, which is just another cover for the same institute; and so on and so on - constantly caressing and echoing each other without scrutiny, while being carried by an armada of U.S. shills and bot accounts and billions of dollars of U.S. regime funding for disinformation networks like this one.

Sounds legit and definitely worth considering. 🙄 Definitely not dubious or hypocritical at all, especially when spammed by U.S. bot armies and paid shills. 🙄 Cue the rambling shills trying to distract from these facts.
 
.

Unheard Voice: Evaluating five years of pro-Western covert influence operations

In July and August 2022, Twitter and Meta removed two overlapping sets of accounts for violating their platforms’ terms of service. Twitter said the accounts fell foul of its policies on “platform manipulation and spam,” while Meta said the assets on its platforms engaged in “coordinated inauthentic behavior.” After taking down the assets, both platforms provided portions of the activity to Graphika and the Stanford Internet Observatory for further analysis.

Our joint investigation found an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia. The platforms’ datasets appear to cover a series of covert campaigns over a period of almost five years rather than one homogeneous operation.

https://purl.stanford.edu/nj914nx9540
 
.
So guys, every single person here who is pro-China is fake.

The only ones who are right and tell the truth are the mainstream media.


This video is fake; it's high-quality, well-made CGI rather than an actual city.


Unlike all the evil fake posters here, how good and kind the mainstream media is.
 