UN rights chief calls for moratorium on AI systems that threaten human rights
There needs to be greater transparency about the development of artificial intelligence systems and a moratorium on those that pose a risk to human rights, UN rights chief Michelle Bachelet said on Wednesday.
A multitude of human rights, including privacy, health and freedom of expression, are under threat from certain artificial intelligence systems, a UN report presented to the Human Rights Council underscored today.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” said UN rights chief Michelle Bachelet. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”
The report called for a moratorium on artificial intelligence systems that pose a danger to human rights until due diligence is conducted on such technologies and appropriate safeguards are developed. Biometric analysis, automated decision-making and other machine-learning technologies are the greatest cause for concern.
Systems with risks that safeguards cannot sufficiently mitigate and that violate international human rights law should be banned, the report said. It also called for greater transparency on the development and use of artificial intelligence, as well as for more rigorous data requirements.
“The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be,” said Bachelet.
The report will inform the UN Human Rights Council’s informal consultations on Thursday afternoon on a draft resolution, penned by Germany, Austria, Brazil, Liechtenstein and Mexico, concerning the right to privacy in the digital age.
Learning from recent mistakes

Predictive and biometric artificial intelligence systems have come under increasing scrutiny in recent years. For example, in 2020, schools across England used predictive technology to project grades for that year’s A-level examinations after the Covid-19 pandemic resulted in school closures and cancelled exams.
However, the data fed into the algorithms was simplistic, based on a student’s grade history and their school’s performance.
The results were widely criticised. The grade allocation system frequently assigned lower grades to students from disadvantaged backgrounds at generally lower-performing schools than to their more privileged peers, particularly those in private education.
These grade predictions highlight the perils of using historical data sets that often contain racial, ethnic and socioeconomic bias and are too simplistic to approximate real-world conditions closely enough.
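The mechanics behind that failure are easy to illustrate. What follows is a minimal, hypothetical sketch in Python, not the actual model used in England: the blending weight and grade figures are assumptions chosen purely for illustration. It shows how a predictor that leans on a school’s historical average pulls down a strong student at a historically lower-performing school, while leaving an identical student at a high-performing school largely untouched.

```python
# Hypothetical toy model, NOT the actual A-level algorithm: predicts a grade
# by blending a student's own record with their school's historical average.
from statistics import mean

def predict_grade(prior_grades: list[float], school_history: list[float],
                  school_weight: float = 0.5) -> float:
    """Blend a student's average with the school's historical average.

    school_weight is an illustrative assumption: the more the model leans
    on school history, the more a strong student at a historically weak
    school is pulled down.
    """
    student_avg = mean(prior_grades)
    school_avg = mean(school_history)
    return (1 - school_weight) * student_avg + school_weight * school_avg

# Two students with identical records...
strong_record = [85.0, 88.0, 90.0]

# ...at schools with different historical results.
low_performing_school = [55.0, 58.0, 60.0]
high_performing_school = [82.0, 85.0, 88.0]

print(predict_grade(strong_record, low_performing_school))   # ≈ 72.7: pulled down by school history
print(predict_grade(strong_record, high_performing_school))  # ≈ 86.3: barely affected
```

In this toy model, the weight given to school history, not anything about the individual student, drives the gap between the two predictions.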
Such data sets will do little to change the future, the report emphasised. The problem is compounded when they are run through complex, nontransparent algorithms.
Biometric systems pose similar problems. They are extensively used in the public and private sectors and often discriminate based on skin tone.
Black women between the ages of 18 and 30 are most frequently misidentified by biometric software, a 2012 study by the FBI and Michigan State University revealed.
Such algorithms can have serious consequences, for example when authorities use them to make arrests.
It can be difficult even for a programmer to determine all the reasons behind an algorithm’s results, particularly when deep learning is used. This lack of transparency makes it hard to assess the reliability of an algorithm’s output.
“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report says.
The privacy implications of insufficiently regulated biometric software could be grave. Existing infrastructure enables authorities to collect near-constant data on a person in public spaces, the report noted. It called for due diligence on the processing and use of such data, and for a moratorium in the meantime.
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet said.
This is a question that contact tracing and other data collection efforts during the Covid-19 pandemic have made more urgent.
“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” she said.