Human rights: behind the AI hype, an existential threat lurks

Work by Luísa Tormenta, a master's student at ÉCAL, the Cantonal Art School in Lausanne, entitled ‘Can I hear from you?’, exhibited at the Centre de la Photographie Genève, 23 May 2023. (Geneva Solutions/Paula Dupraz-Dobias)

An exhibit in Geneva raises the alarm over the accelerating risks that digital technologies pose to human rights.

One by one, the pioneers of artificial intelligence (AI) have come out in recent weeks to warn about technologies that can autonomously generate everything from complex texts to fake images of real people, or of invented people who appear real. Disinformation, and the snowballing damage its spread could do to society, was on most of their lips.

Among them, Geoffrey Hinton, who recently left Google and whose early research helped pave the way for developments including ChatGPT and Google’s Bard application, said that new human behaviours triggered by the increasing blurring of the line between truth and falsehood might threaten humanity. He and others have called for more regulation.

Meanwhile, concerns over insufficient regulation of data privacy, which can be abused by companies and governments to track and control individuals and undermine human rights, have also made headlines. In Switzerland, a plan to install cameras with facial recognition in train stations to analyse people’s shopping habits received pushback from data protection rights groups before being rolled back.

A photo exhibit at the Centre de la Photographie Genève (CPG), in the Quartier des Bains gallery neighbourhood, running until 28 May, aims to alert people to some of those risks.

The show features the work of this year’s laureates of the AOYF Human Rights Photography prize, funded by Act On Your Future, a Geneva-based foundation that gets young people involved in philanthropy.

Keyvan Khavami, AOYF’s executive director, told Geneva Solutions that the theme of this year’s competition reflected the interest in, and importance of, these technologies for young people. “The idea was to provide a means of identifying certain issues and showcase ways to learn about potential consequences for everyday citizens and users to be more responsible,” he said. “It is important that we are not overcome by the speed of developments.”

The work on view by French photographer Thaddé Comar shows how protesters in Hong Kong in 2019 defied state authorities by masking up and hiding key facial features to avoid being identified by facial recognition technology. Another artist, Rodrigo Alcocer de Garay from Mexico, used artificial intelligence to assemble images of the indoor work settings of porn-streaming workers into series, following the AI’s own logic.

In past years, AOYF photo competitions have focused on themes such as migration, freedom of expression and the climate crisis. AOYF partners with the International Human Rights Film Festival (FIFDH), the CPG, Human Rights Watch and others.

Part of a winning photography project by Thaddé Comar on anti-government protests in Hong Kong in 2019, exhibited at the Centre de la Photographie Genève, 23 May 2023. (Geneva Solutions/Paula Dupraz-Dobias)

Rights defenders raising red flags

Developments in tech are of particular concern to human rights defenders. Volker Türk, the United Nations high commissioner for human rights, told Geneva Solutions on Wednesday that warnings from the developers of generative AI applications are proof that “there is a sense of urgency to actually ensure that regulations occur rather quickly”.

Speaking at a press conference at the Palais des Nations, he stressed that governments are often poorly equipped to stand up to the challenges. “How do we match the knowledge about some of these very complex areas, and how do we make sure that it is infused with the human rights and regulatory content that we need?” he added.

Other experts are more blunt. “We need to be careful right now that we have a future in which rights will still exist. The situation is that existential,” Frederike Kaltheuner, director of technology and human rights at Human Rights Watch (HRW), told Geneva Solutions, before speaking at a panel organised on the sidelines of the photo exhibit.

She listed various issues arising from technology’s penetration into society. Governments are digitising the state wholesale, offloading responsibilities to machines and tech companies. Meanwhile, as the digital public sphere increasingly shapes how we access information, organise ourselves and mobilise, the companies running digital platforms are failing to respect their duties toward personal data protection.

The use of algorithms to make determinations about people was also worrying, she said, including through the use of cameras in public spaces to predict dangerous activity, at times reinforcing racial prejudices. Kaltheuner pointed to the Netherlands, where an algorithm used to detect fraud in child benefits falsely flagged innocent people, ruining thousands of lives.

Labour abuses within the tech supply chain, affecting people on the margins of society who work in an unregulated gig economy, posed yet another concern, she said.

Part of a photography project by Thaddé Comar, on anti-government protests in Hong Kong in 2019, exhibited at the Centre de la Photographie Genève, 23 May 2023. (Geneva Solutions/Paula Dupraz-Dobias)

Resources and safeguards

Kaltheuner pointed out the difficulties in monitoring and regulating the human rights impacts of emerging technologies. “There is a real lack of truly independent experts who do not have a vested interest (in the sector),” she said. “The challenges are so big, and there is so much money to be made. There is a real need for independent expertise.”

She said that while it was necessary to expose the harm being done by these technologies, human rights advocates are stretched thin. Civil society groups often have to compete with well-funded lobbyists for lawmakers’ attention, she added. The five biggest global tech companies – Google, Meta (Facebook’s parent company), Amazon, Apple and Microsoft – are estimated to have spent $69 million lobbying the United States government in 2022. Recent layoffs at Meta have sparked fears over the company’s ability to respond to viral misinformation.

The UN rights chief said that while technological tools, such as satellite imagery, are undeniably useful for rights groups and scientists monitoring human rights violations and climate change, more needs to be done to deepen understanding of the technologies and prepare for their risks.

“We need to be much more conversant and literate when it comes to digital technologies,” he said.


Speaking of artificial intelligence, specifically generative AI, Türk said: “Of course, the opportunities are immense, but so are the risks. Human rights need to be baked into AI during its entire life cycle, and both governments and companies need to do more to ensure that guardrails are in place.”

The European Union has been seeking to introduce transparency measures that would require companies to disclose when content is AI-generated. On Monday, Ireland’s data protection authority imposed a record €1.2 billion fine on Meta for failing to comply with the bloc’s privacy rules.

Kaltheuner said that such steps are a “good start”, but noted that while government oversight does not in itself lead to abuses, flawed regulation can. In the Middle East, governments have cracked down on LGBTQI people through the use of gay dating apps. “It is especially tricky when governments are the perpetrators. We definitely need rules, but they are not always the saviours.”

*The exhibition runs until 28 May. For more information, visit the CPG’s website.
