
Information manipulation and democracy: a dangerous game

President Donald Trump drives past supporters gathered outside Walter Reed National Military Medical Center in Bethesda, Md., Sunday, Oct. 4, 2020. Source: KEYSTONE.

Last Friday, President Donald Trump tested positive for Covid-19. Or did he? Following the announcement, social media lit up with a recurring question: why should we believe him? The question seemed legitimate, as a study published by researchers at Cornell University revealed earlier that week that “media mentions of US President Donald Trump within the context of Covid-19 misinformation made up by far the largest share of the infodemic. Trump mentions comprised 37.9 per cent of the overall misinformation conversation”, the New York Times reports. Now, Trump is back to work. Has he recovered?

With a little less than one month to go before the election, one wonders: how are fake news and disinformation affecting relations within a democracy, and, more importantly, what threat do they pose to democracy itself? It is precisely this topic that the Center for Digital Trust (C4DT), housed at EPFL, brought to the table yesterday during an event organized with the Geneva-based Graduate Institute of International and Development Studies and the CyberPeace Institute. If cyberspace becomes fertile ground to sow distrust and create discord and antagonism, it is necessary to understand the problem and act accordingly.

Why this matters. An article by The Guardian this week reveals ties between a Trump-linked consultant and Facebook pages promoting the idea of a civil war after the election. Meanwhile, the FBI and the Cybersecurity and Infrastructure Security Agency have warned the public that foreign actors and cybercriminals might spread disinformation about the results, according to CNN. The words are strong. And the possible consequences, if this information were to prove true, dramatic. The question is: how do we know whether it is true, and how serious the problem is? Difficult to say, explains Graduate Institute professor and panelist Michael Kende:

“When people are good at it, no one knows that it’s been done. When it happened in the 2016 election, it was really after the election, when the Cambridge Analytica report came out, that people started to realize that there had been a lot of disinformation. It wasn’t obvious. Moreover, the algorithms can be so well targeted that sometimes we simply don’t see the information.”

Social media and responsibility. At the heart of this story is Section 230 of the US Communications Decency Act, passed in 1996. Websites, in this case social media platforms, can’t be treated as the publisher or speaker of third-party content, which protects them from lawsuits if a user posts something illegal. That leaves it to the platforms to set their own terms and conditions on what to take down or not. In other words, they can decide to remove or block content while being shielded from liability for what remains, with only the obligation to take down what is illegal.

“One thing we have to realize is the volume of traffic these companies have to deal with. It would take about 70,000 people to watch every video going into YouTube, without any time for discussion or debate,” Kende says.

What makes the situation even more complicated in the United States is the First Amendment, which protects freedom of speech. A law forcing tech companies to moderate content could be considered unconstitutional.

The key really resides in the private companies themselves. Facebook and Twitter have defined their own regulations, banning hate speech for example. Facebook even has a new independent advisory board to face these new challenges. But is it enough?

Internet governance. Self-regulation is a response to the role social media companies are starting to play and the negative press associated with them. Kende continues:

“The next step in the governance is to help create a better way of conveying the terms and conditions so that anyone can understand them, and to start looking into whether they are really being adhered to on both sides. They have to be very apolitical, coming up with rating systems to see which sources are credible and which sources are not.”

Understanding means more awareness and accountability. Knowing who is responsible and what impact disinformation has can help guide companies to do more and do better. In Geneva, the CyberPeace Institute focuses on holding all actors accountable for their role and responsibilities in cyberspace, including state-actors. This is essential to ensure human security, dignity and equity in digital ecosystems. Within that framework, companies and governments can start to negotiate, and pave the way for a better tomorrow.

C4DT for democracy. In this context, information, awareness and cooperation are key. C4DT plays an essential role in building trust in the digital world. It brings together businesses, the research community, civil society, and policy actors to collaborate, share insights, and gain early access to trust-building technologies. Addressing the challenges raised by mis- and disinformation and the potential threat to democracy commits the Center and its partners to exploring the human, political, and economic implications of digitalization and proposing adequate solutions, enabling companies, institutions and citizens worldwide to become digitally trustworthy and to create new opportunities.

This discussion is essential and can make a difference. As C4DT executive director Olivier Crochat explains:

“Democracy is clearly one of the verticals where trust in the digital tools that could be deployed is of prime importance. And democracy is one of the pillars of our societies, that we need to preserve and protect.”

As Crochat suggests, the key now is to focus on finding our way back to the sources of information in a constructive way. And in that sense, social media platforms are not the only ones with a role to play. Journalists and the media can make a difference too.
