
Regulating ‘killer robots’ divides talks at the UN

Killer robots do not yet exist, but precursors including the Turkish STM Kargu kamikaze drone, which can attack targets using facial recognition technology, show the trend of increasing autonomy. (Credit: Armyinform.com.ua, CC BY 4.0 via Wikimedia Commons)

Representatives from around 50 countries gathered at the United Nations in Geneva last week for their first official meeting on lethal autonomous weapons systems (LAWS), or “killer robots”, in nearly a year. An international regulatory framework, however, remains elusive. 

At first, it looks like a flock of birds, but no birds of that size hum. A swarm of drones cuts through the air, scanning the terrain below. Their sensors latch on to weapons. Programmed to target enemy combatants, they shoot.

While a swarm of lethal drones may sound like a futuristic scenario, it is a plausible one, and the technology to make it happen is already here. Many armies around the world are already developing and testing AI-powered weapons – including guns, planes, tanks and underwater vehicles – with increasing levels of autonomy in how they select and attack targets.

A UN report published in March of this year documented the first known use of a lethal autonomous weapon: the Kargu-2, designed by the Turkish company STM and used by the former interim Libyan government against the rebel Haftar Affiliated Forces last year.

The report does not say whether the Kargu-2 targeted humans, but its use marks the first known deployment of a drone with lethal capabilities that can operate without human control to identify and strike a target.

Such lethal autonomous weapons may need a human to deploy them, but they are programmed to identify targets autonomously based on the data sets they were trained on, and to shoot without a human decision maker to judge the nuances of the immediate situation.

UN discussions

A Group of Governmental Experts (GGE) from member states, joined by NGO observers, met at the UN under the Convention on Certain Conventional Weapons (CCW) over the past two weeks to discuss how to regulate this emerging weapons area.

“We want international limits to be set while these technologies are being developed, rather than letting development drive what should be accepted on the battlefield,” said Maya Brehm, legal advisor at the International Committee of the Red Cross (ICRC), who has been involved in the latest UN discussions.

The ICRC’s position is that nothing can be left to chance. A weapon that can select and shoot a target independently of human control “brings a risk of harm for those affected by armed conflict, specifically civilians and combatants who are wounded or surrendering,” she said.

Building on last year’s GGE, the latest session, which closed on Friday, is a step in the two-year mandate to develop “consensus recommendations on a normative and operational framework for autonomous weapon systems,” Giacomo Persi Paoli, who leads the security and technology programme at the UN Institute for Disarmament Research (UNIDIR), which also sent representatives to the GGE, said in an interview.

“This means breaking down the problem into: how do you characterise LAWS, what is the role of the human, what is the role of international law, what are some of the more operational sides that need to be taken into account,” he said.

This latest session was an opportunity for NGOs and member states to table their views and concerns about an international regulatory framework. The chair will consider the inputs, identify areas of growing consensus, and recommend the agenda for the next session, which will run from 27 September to 1 October.

The next session’s aim is to produce consensus recommendations for the Review Conference of the CCW at the end of the year.

‘Meaningful human control’

No state wants uncontrollable, unpredictable weapons, said Persi Paoli.  “Everyone is recognising the fact that whether you call it human control, human involvement, human-machine interaction – there isn’t necessarily agreement on the label – that humans have to be part of the equation,” he added.

There is a spectrum of human control, however, and member states are far from reaching a consensus on where along that spectrum international law should limit the development of autonomous weapons, Ousman Noor, the government outreach manager for the Campaign to Stop Killer Robots, who attended the sessions at the Palais des Nations, told Geneva Solutions.

Some states have confidence in AI capabilities and the accuracy of image recognition, and argue that AI can be programmed to operate in line with existing international law with minimal human control, he said.

The ICRC and the Campaign to Stop Killer Robots, an international coalition of 180 NGOs, instead argue that there needs to be a complete international ban on lethal autonomous weapons that can operate without “meaningful human control”. They call for close oversight and frequent legal reviews of autonomous weapons.

Some 31 countries have joined calls to ban autonomous weapons systems, according to Human Rights Watch, co-founder of the Campaign to Stop Killer Robots.

Some states prefer the terms “appropriate control” or “sufficient control”. However, Noor says, “we use the term ‘meaningful human control’ because the word ‘meaningful’ is meaningful.”

Weapons with machine learning capabilities that do not operate predictably fall outside meaningful control, he explained. If humans cannot detect when a weapon is turned on, cannot turn it off, and cannot determine its space of operations, “that’s inherently a lack of control”.

Data issues

One element of this lack of control is that AI can make mistakes. The data sets a weapon is trained on are less messy than what it will encounter in real-world environments. A recent UNIDIR report explores the data challenges of preparing autonomous devices for deployment. According to the report, “harsh conditions” as well as “adversarial actions”, “complexity and variability”, and “data drift” – changes in the environment – may cause a drone to misidentify a target.

Another issue is that collecting data about humans is problematic, said Noor. “The idea of encoding what it means to be a human if you're designing a weapon – that is bad for human beings. You can only look for things like our shape, our heat signal, our temperature. The process of encoding is undignified, it's dehumanising, it makes human beings just USB sticks.”

Horizons

Another major conundrum is how to regulate something that does not yet exist. Lethal autonomous weapons can only execute narrow tasks at the moment, but could in future be self-learning machines that develop the capabilities to execute a range of tasks, lethal included. Experts warn that developments in machine learning could allow such a weapon to perform actions that the programmer had no way of foreseeing.

The first thing to do to regulate a futuristic scenario such as that of fully autonomous weapons is to “limit the problem space”, said Persi Paoli. At the moment, “everything is on the table, from fully autonomous weapon systems – and those mean very different things to very different people – to systems that are partially autonomous. So let's first agree on what we want to prevent.”

It is important not to be reductionist, Persi Paoli said. Particularly before the recent session, “there has been kind of a binary approach: it's either prohibited or it's allowed. The debate should become more and more nuanced.”

The good news is that “we are very far” from an AI arms race akin to the Cold War, Persi Paoli told Geneva Solutions. Although the amount of money that countries with large militaries are pouring into AI may at first glance seem large, only a fraction of their defence budgets is being spent on the area. However, the competition is clearly there, he said.

An international framework?

The CCW’s review conference in December is seen as a crucial deadline for adopting a mandate to negotiate a legally binding instrument to regulate lethal autonomous weapons. However, reaching consensus among states parties is still proving elusive.

Countries are showing a “general willingness to go beyond what has been agreed so far, but to do so in a way that doesn't undermine the potential benefits that AI and autonomy could bring,” said Persi Paoli.

The countries with the world’s biggest militaries include China, Israel, Russia, South Korea, the US, and the UK – all of which have, to varying degrees, indicated a desire to regulate lethal autonomous weapons at the national level. However, many, including the US and Russia, have rejected an outright ban under international law.

While these countries are willing to discuss the issue in an international forum and contemplate basic international principles, the possibility of an international regulatory agreement within the UN framework still hangs in the balance.

Many in industry are waiting for, and even advocating, an international framework. “This would provide them with much-needed guidance and ensure that the many applications of the technologies they develop are not negatively impacted by the concerns surrounding autonomous weapons,” said Brehm.

The Campaign to Stop Killer Robots is sceptical that member states will be able to agree on an international framework, or even on the principle of an international framework, by the Review Conference in December.

“All it takes is for one country to say no and then nothing happens,” Noor said. “The consequence of that is that you end up with an agreement that is based on the lowest common denominator.”

The Campaign to Stop Killer Robots consequently looks favourably on an agreement outside the UN framework, through which countries that want stronger regulations could set a standard. Such an external agreement would give industry a frame of reference, even for companies operating in a country that has not joined the agreement, said Noor.