CHINA HAS CREATED A RACIST A.I. TO TRACK MUSLIMS

Foggy_Bottom

The Chinese government is using facial-recognition software to “track and control” a predominantly Muslim minority group, according to a disturbing new report from The New York Times. The Chinese government has reportedly integrated artificial intelligence into its security cameras to identify the Uighurs and appears to be using the information to monitor the persecuted group. The report, based on the accounts of whistleblowers familiar with the systems and a review of databases used by the government and law enforcement, suggests the authoritarian country has opened up a new frontier in the use of A.I. for racist social control—and raises the discomfiting possibility that other governments could adopt similar practices.

Two people familiar with the matter told the Times that police in the Chinese city of Sanmenxia screened whether residents were Uighurs 500,000 times in a single month. Documents provided to the paper reportedly show demand for the technology is ballooning: more than 20 departments in 16 provinces sought access to the camera system, in one case writing that it “should support facial recognition to identify Uighur/non-Uighur attributes.” This, experts say, is more than enough to raise red flags. “I don’t think it’s overblown to treat this as an existential threat to democracy,” Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology, told the Times. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

Racial profiling has long been a concern in the use of artificial intelligence. But in the United States and other Western countries, much of that scrutiny has centered on biases built into A.I. systems. Computer scientist Aylin Caliskan put the matter succinctly in a 2017 interview with Vox: “Many people think machines are not biased. But machines are trained on human data. And humans are biased.” Already, studies have shown this to be the case. A 2016 investigation by ProPublica found that machine-learning software rated black people at higher risk of committing another crime after an initial arrest. (The software’s conclusions were based on current incarceration rates.) American lawmakers have highlighted these concerns as the A.I. race heats up. “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” Rep. Alexandria Ocasio-Cortez said at an event in January. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”



In China, however, the government appears to be exploiting that bias, purposely using the technology to keep tabs on a subset of its population. China has faced increasing criticism from human-rights groups for its treatment of the country’s 11 million Uighurs, about a million of whom are believed to be detained in Chinese indoctrination camps, which the government characterized to the Times as “vocational training centers that curb extremism.” But the secret use of artificial intelligence as part of its crackdown on the Muslim minority is likely to exacerbate fears about the government’s increasingly aggressive approach to the group, and about the abilities of governments worldwide to use technology for nefarious purposes. China appears to be the first country to use the systems explicitly for racial profiling, but experts are concerned that others will follow. “Take the most risky application of this technology, and chances are good someone is going to try it,” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, told the Times. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”

https://www.vanityfair.com/news/201...cist-artificial-intelligence-to-track-muslims
 
Is she a Muslim??

Chinese police hunt down murderer on the run after she was spotted by AI cameras 20 years later
By TRACY YOU FOR MAILONLINE
PUBLISHED: 07:52 EST, 3 December 2019


China's 'Bonnie and Clyde' caught: Notorious murderer on the run is spotted by facial-recognition cameras before being arrested - 20 years after her partner and lover was executed by a firing squad
  • Fa Ziying and Lao Rongzhi have been billed as the Chinese 'Bonnie and Clyde'
  • They were accused of robbing and killing seven people between 1996 and 1999
  • Mr Fa was arrested during an intense raid in 1999 and then executed by shooting
  • Ms Lao, who remained at large, was caught after police used AI technology
21756004-7750273-image-a-92_1575376210686.jpg

A notorious suspected murderer who was on the run for 20 years has been arrested by Chinese police after officers used artificial intelligence to hunt her down.

Lao Rongzhi, 45, and her late boyfriend have been billed as the Chinese 'Bonnie and Clyde'. The couple were accused of robbing and killing seven people in various cities in the 1990s.

Her ex-partner, Fa Ziying, was caught by police during a dramatic raid in 1999 and then executed by a firing squad; while Ms Lao remained in hiding until she was caught last week owing to facial-recognition monitors.

Ms Lao and Mr Fa, from the southern province of Jiangxi, carried out the crimes between 1996 and 1999, according to the police.

It is alleged that Ms Lao, a former primary school teacher, was responsible for finding wealthy-looking men and luring them to a rented flat she shared with Mr Fa.

Once the men returned with Ms Lao, the couple allegedly threatened and robbed the victims before cruelly killing them.

Mr Fa was caught when he was trying to extort 10,000 yuan (£1,090) from the family of his last victim in July 1999 in the city of Hefei.

He was shot in the leg while trying to get the money at the victim's home after heavily armed police surrounded the apartment.

Mr Fa was executed in December of the same year.

However, the whereabouts of Ms Lao had been a mystery to police for more than two decades, until this month.

Officers in the city of Xiamen in south-eastern China tracked down a woman who resembled Ms Lao after analysing relevant information using big-data technology, the police said in a statement.

The 'big data' technology, part of China's 13th five-year plan, is backed by a national surveillance system featuring hundreds of millions of AI-powered street cameras.

Xiamen police said the suspect was seen near a shopping centre in the city's Mingsi District last Wednesday.

Officers immediately set up a team and successfully apprehended the suspect inside the mall the next day. The suspect was said to be selling watches at the time.

The suspect initially denied that she was Lao Rongzhi. She claimed that her surname was Hong and she was from the city of Nanjing.

Police then carried out a DNA test and confirmed that the woman was indeed the fugitive they had been seeking for 20 years.

Officers said Ms Lao had lived under fake identities in many different cities over the years, making a living by working in pubs, clubs and taking odd jobs.

After moving to Xiamen, she started to sell watches at a shopping mall for a friend until she was found.

Police are carrying out further investigations into Ms Lao.

China is currently building the world's largest surveillance system, which aims to recognise any of its 1.4 billion citizens within three seconds.

The security network, set to be finished next year, will be equipped with 626 million street monitors, or one camera for nearly every two people, according to a study.

The network consists of the 'Sky Net Project' and the 'Sharp Eye Project' and is part of a state-led campaign.

AI-powered cameras have appeared in nearly all public places in major cities, from thoroughfares and tourist attractions to subway stations and shops.

https://www.dailymail.co.uk/news/ar...er-run-spotted-AI-cameras-20-years-later.html
 
What about the Hui Muslims? They are greater in number and cannot be racially profiled.
 

So no denial about the article, only an attempt to distract about its other capabilities.

When we say that the AI is built to track Uighur/non-Uighur attributes, it does not mean other aspects of the AI don't exist. It is not a one-and-done defining statement. Let me try to further my explanation with this analogy: when someone says the cops have created a dragnet to catch a thief, it does not mean all the police do is find robbers while letting murderers, pimps, rapists, rioters, etc. go ignored.

So, congrats on using it for catching a murderer, but that does not absolve you from developing AI to watch Uighur Muslim attributes too.
 
.
No worse than Google's facial recognition, which identified an African American as a gorilla.
 
The new technology is available to everyone, and China is the leading country in this field. Hans, Uighurs, Huis... we are all Chinese. Western media pin it on China that it uses this AI technology only on Uighurs; that's so absurd.
 
Media never said they ONLY use it for this purpose; that's your talking point to distract to another story. However, once again, you agree they have developed the AI to specifically track Muslims too.
 


As if Muslims are not tracked in US and A.
Btw, it's funny how any Chinese development is portrayed as anti-Muslim. The AI is also used to track all Chinese citizens when registering for a SIM.
 
CHINA HAS CREATED A RACIST A.I. TO TRACK MUSLIMS
That's your title, a pathetic lie.
 