
Why peace talks need ‘digital ceasefires’

Tunisian President Kais Saied speaks at the opening ceremony of Libyan peace talks in Tunis, November 2020. A network of disinformation circulated by fake social media accounts attempted to undermine the talks. (AP Photo / Slim Abid)

Social media is being used as a tool to undermine peace processes, stoking division and making an already difficult task much harder. With the stakes so high, mediators are calling on big tech companies to take responsibility for what is happening on their platforms and to work with them towards “digital ceasefires”.

While delegates from Libya’s rival factions met behind closed doors in Tunisia late last year for talks to bring stability to the country after nearly a decade of turmoil, a network of fake social media accounts erupted online, peddling damaging disinformation aimed at undermining the peace process.

Members of the Libyan Political Dialogue Forum (LPDF) were accused of corruption and misconduct, while UN mediators received a barrage of online harassment and threats. A fake agreement emerged and was shared countless times before being removed days later.

The online storm created by these accounts, which largely originated from outside Libya, risked discrediting the fragile peace process just as the country’s opposing sides began to make real progress towards ending years of war. But unfortunately, what happened online during the talks was not new.

In recent years, social media has been used more and more frequently to undermine peace processes around the world. From the long-running peace negotiations in Mindanao in the Philippines to talks between the government and FARC rebels in Colombia, social media has become a tool to stoke division between adversaries and threaten those who are working to bring them together. Peacekeepers and mediators on the ground have witnessed this dangerous trend firsthand.

Social media’s threat to peace

“There are a number of different ways that social media is impacting the places that we're working in,” says Maude Morrison, social media and conflict mediation adviser at the Centre for Humanitarian Dialogue (HD) in Geneva, which has recently launched a programme to respond to the growing threat the online world poses to their mediation work. “At first we started noticing the more obvious things, the surface level impact of social media – things like the prevalence of hate speech, specific calls to violence, or misinformation that is promoting conflict or causing harm.”

There are countless examples of online violence inflicting real-world harm in recent years – from the storming of the US Capitol by an angry mob in January 2021 to Myanmar in 2017, where the spread of disinformation on Facebook was directly implicated in the Rohingya genocide.

But increasingly, HD’s mediators have also witnessed sophisticated networks of disinformation and misinformation specifically targeting the peace processes in the countries where they were working. “[This includes] the increasing use of social media as a tool, both to influence people's own constituencies and also to target opposing constituencies, whether that be political actors or conflict actors,” says Morrison. “That's prevalent I'd say in almost all of the countries where HD is working now.”

Because peace processes by nature take place in countries going through periods of extreme volatility and division, it became glaringly obvious how easily social media could be used to undermine their legitimacy.

“We are talking about contexts that are usually already fragile and often fragile information environments, so often you have phenomena like low trust in official media or highly politicised media,” says Morrison. “Sometimes, but not always, you can have indicators like low media literacy or relatively recent digital access, [so] there's already a fragility to the information environment and therefore any disinformation, misinformation or any uncertainty can be exploited, and it can be really dangerous.”

In Libya and elsewhere, it has become increasingly common for networks of accounts to emerge online when peace talks are taking place, with adversaries accused of misconduct and mediators subjected to threats of violence, on top of a constant stream of fake news.

A growing problem

“Our colleagues in the field were frequently bemoaning the impact that social media was having on their work and generally the negative side of social media,” says Adam Cooper, a senior programme manager at HD, which has operations all over the world, including Libya, Mindanao and Colombia. “[This included] the way it undermines the confidentiality of their work at times, and the way it was used to polarise societies and spread disinformation around the peace talks that they were engaged in.”

In response, HD’s programme seeks to understand the danger social media poses to peace processes in the countries where they operate, working directly with conflict parties, mediators and social media companies to counter this threat in the hope of reaching “digital ceasefires”. A key focus is working with adversaries on the restraint they should exercise on social media, even going so far as to incorporate clauses on online behaviour into ceasefires and political settlements. While there are some examples of this already, these clauses are generally vague and are not supported by the monitoring mechanisms essential to make sure conflict parties comply.

“Mediators aren't used to talking to conflict parties about their online behaviour, and the conflict parties aren't used to thinking about how they would restrain themselves in that area – they generally operate without limits in that space,” says Morrison. “But for me, peace agreements that don't address online behaviour of parties will not fully resolve the conflicts that they're trying to address.”

Working with mediators to help them understand what is happening online and how to respond to it properly is also essential. Not only are mediators now frequently the prime target of these accounts – threats against UN envoys have been commonplace throughout Libya’s LPDF meetings – but they and everyone else involved are also bound by strict confidentiality. Because peace processes are inherently private, publicly disproving false narratives risks jeopardising their success, leaving misinformation to thrive.

“It's a really tricky balance and it’s often what scares mediators when we talk about our work, because there is a real desire to protect the process from any external criticism, which is a very understandable desire given the nature of the work,” says Morrison. “I think the counter to that is that there are already those kinds of things happening – in Libya, the fake agreement was being circulated during the LPDF, and it took quite a long time for that information to be debunked and corrected. I think that there needs to be a bit more of an acceptance that there will be leaks and that there has to be some degree of information out there in order to protect the process, but it doesn't have to go into detail about what is being discussed.”

The responsibility of big tech?

Another aspect of HD’s programme is to work with the social media platforms themselves on how they can better protect peace processes. In recent years, major companies like Facebook have taken significant strides to address violence and misinformation on their platforms, prompted by events such as the genocide in Myanmar. However, much of their focus is on dealing with the problem once it is already out there. Morrison explains that social media companies need to invest in local expertise and monitoring mechanisms in the fragile states where they operate, so they can limit the harm their platforms can do before the damage is done.

“We're much more interested in what it would look like to prevent some of this activity and behaviour getting onto the platforms in the first place, which is in the interest of the platforms but it's also in the interest of society at large,” says Morrison.

Because these warning signs are often hard to spot, it is essential for companies to work with both mediators and disinformation experts who can guide and warn them ahead of events, such as peace talks or elections, that could prompt such activity on their platforms. Since the US elections in 2016, it has become routine for companies to put special measures in place around elections to protect the democratic process, although Morrison says it is important to acknowledge that this development has been uneven from country to country: places where these companies have large markets, or where there is strong public pressure, have far more stringent policies in place.

But this conversation has not yet been applied to peace talks, which also define the future of governments, countries and entire populations, and where the stakes are often higher. This is why HD is calling for companies to adopt a similar policy for peace processes, ranging from simply being aware of when peace talks are taking place and flagging disinformation on their sites in local languages, to protecting the people involved in much the same way as election candidates are protected.

A policy for peace talks

“What we need is a dialogue about what kinds of risks arise from social media in peace processes and then, building from that, what are the specific measures that we would want,” says Morrison. “But I think we can draw some inspiration and get some ideas from the election policies.”

For this, close collaboration between companies and mediators is essential. “A big part of what we're trying to do is helping these platforms to be aware of peace processes in the first place, and that sounds like a really obvious thing but it's not something which they systematically track at the moment,” says Cooper. “A process should be set up whereby we have a common understanding of what a peace process is, what criteria would we apply, and then what are the special measures that the platform would take in those situations.”

“They're quite used to doing this around elections,” he adds. “They have a criteria which they use to decide which elections are credible and worthy of protecting and then they have a set of things which they can institute as special measures to limit the harm to what they call 'civic processes', and we're trying to create the peace process equivalent of that policy which they currently do for elections. ”

To help forge these policies, HD has convened an expert group of disinformation specialists, experts in digital conflict and figures from within the peacebuilding sphere to ensure companies have the tools they need to protect peace processes.

Reaching “digital ceasefires” may sound like an impossible task, but the progress these companies have made in recent years should be reassuring. In Myanmar, policies introduced by Facebook in the wake of the 2017 genocide are now being employed to counter the spread of disinformation surrounding the 2021 coup, and the company’s policy around the 2020 US election was far more stringent following the events of 2016.

With months to go until elections are held in Libya in December, when the country will enter a particularly delicate stage of its peace process and social media could once again jeopardise a peaceful future, companies could learn from the events of last year and adapt their policies to prevent a repeat of the past.

“We've already come a long way - the fact that HD has this programme is progress as it's recognising that this is an important issue that mediators need to take seriously,” says Morrison.

“The fact that platforms have been making efforts to combat misinformation, to combat hate speech, to combat inauthentic behaviour around elections sets a precedent, even if that precedent hasn't yet been deployed in peace process contexts. It shows that there are measures that can be taken, and it also reminds us that there's lots more to do.”
