Foggy_Bottom
The Chinese government is using facial-recognition software to “track and control” a predominantly Muslim minority group, according to a disturbing new report from The New York Times. The Chinese government has reportedly integrated artificial intelligence into its security cameras to identify the Uighurs and appears to be using the information to monitor the persecuted group. The report, based on the accounts of whistleblowers familiar with the systems and a review of databases used by the government and law enforcement, suggests the authoritarian country has opened up a new frontier in the use of A.I. for racist social control—and raises the discomfiting possibility that other governments could adopt similar practices.
Two people familiar with the matter told the Times that police in the Chinese city of Sanmenxia screened whether residents were Uighurs 500,000 times in a single month. Documents provided to the paper reportedly show demand for the technology is ballooning: more than 20 departments in 16 provinces sought access to the camera system, in one case writing that it “should support facial recognition to identify Uighur/non-Uighur attributes.” This, experts say, is more than enough to raise red flags. “I don’t think it’s overblown to treat this as an existential threat to democracy,” Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology, told the Times. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”
Racial profiling has long been a concern in the use of artificial intelligence. But in the United States and other Western countries, much of that scrutiny has centered on biases built into A.I. systems. Computer scientist Aylin Caliskan put the matter succinctly in a 2017 interview with Vox: “Many people think machines are not biased. But machines are trained on human data. And humans are biased.” Already, studies have shown this to be the case. A 2016 investigation by ProPublica found that machine-learning software rated black people at higher risk of committing another crime after an initial arrest. (The software’s conclusions were based on current incarceration rates.) American lawmakers have highlighted these concerns as the A.I. race heats up. “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” Rep. Alexandria Ocasio-Cortez said at an event in January. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
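The dynamic Ocasio-Cortez describes, biased records in and biased predictions out, is easy to reproduce. The sketch below is a toy illustration, not the proprietary system ProPublica audited: the data, base rates, and model choice are all assumptions invented for the example. Two groups are given identical true risk, but one group's behavior is recorded more often, and a standard classifier trained on those records faithfully reproduces the disparity.

```python
# Minimal synthetic sketch of "automating the bias". All numbers here
# (group split, 20% base rate, 0.9 vs. 0.5 recording rates) are assumptions
# chosen for illustration, not figures from any real study or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# True underlying risk is identical for both groups by construction.
group_b = rng.random(n) < 0.5            # True = member of group B
true_risk = rng.random(n) < 0.2          # same 20% base rate everywhere

# Biased historical labels: group B is watched more heavily, so its
# members are *recorded* as reoffending more often for the same behavior.
recorded = true_risk & (rng.random(n) < np.where(group_b, 0.9, 0.5))

# Train an off-the-shelf classifier on the biased records.
X = group_b.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

# The model reproduces the disparity in the records (~10% vs. ~18%),
# even though the true risk was equal in both groups.
print("predicted risk, group A:", model.predict_proba([[0.0]])[0, 1])
print("predicted risk, group B:", model.predict_proba([[1.0]])[0, 1])
```

Nothing in the model is malfunctioning here; it is accurately fitting the data it was given, which is exactly the problem the quoted researchers are pointing to.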
In China, however, the government appears to be exploiting that bias, purposely using the technology to keep tabs on a subset of its population. China has faced increasing criticism from human-rights groups for its treatment of the country’s 11 million Uighurs, about a million of whom are believed to be detained in Chinese indoctrination camps, which the government characterized to the Times as “vocational training centers that curb extremism.” But the secret use of artificial intelligence as part of its crackdown on the Muslim minority is likely to exacerbate fears about the government’s increasingly aggressive approach to the group, and about the abilities of governments worldwide to use technology for nefarious purposes. China appears to be the first country to use the systems explicitly for racial profiling, but experts are concerned that others will follow. “Take the most risky application of this technology, and chances are good someone is going to try it,” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, told the Times. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”
https://www.vanityfair.com/news/201...cist-artificial-intelligence-to-track-muslims