Nicolas Capt: Critical thinking is key to understanding WhatsApp privacy policy backlash

Nicolas Capt. Credits: Jay Louvion, 2020.

WhatsApp, Telegram, Signal, Threema… which is the most secure to use? The question has shaken the digital community these past weeks after an outcry over WhatsApp’s plans to update its privacy settings.

Millions of users decided to leave the instant messaging app for seemingly more neutral systems over fears as to how their data would be shared with Facebook. The confusion has since prompted WhatsApp to delay the rollout of the new terms, aimed at facilitating e-commerce, until mid-May, saying it will “do a lot more to clear up the misinformation around how privacy and security works” on its app.

The situation is extraordinary, as WhatsApp is one of the most popular applications, with over two billion users as of February 2020. The announced changes, however, were confusing, and their consequences for users were far from clear.

At the heart of it is the understanding of the risks related to what is being shared. Metadata is not the content, but information around the actual message. For the first time, “the container has an impact on the content”, says Nicolas Capt, a Geneva-based attorney-at-law, and specialist in media and new technology law. He helps us to understand the issues at stake and argues that media education needs to be reinforced to encourage critical thinking around the use of new technologies.

What changed in the WhatsApp regulations and what are the consequences on users?

According to WhatsApp, users’ so-called metadata can now be shared with Facebook. People reacted strongly and loudly, especially by threatening to switch to Signal, Telegram, or Threema instead. But these applications have never had the same penetration rate as WhatsApp. Until now, WhatsApp has been the Google of instant messaging: utterly hegemonic.

The content of the exchanged messages is not accessible, since the messages are end-to-end encrypted (E2E). The only way to access a WhatsApp conversation is to access one of the unlocked devices. The messages themselves remain private. This is why these reactions are rather surprising: what is considered private is primarily the content, not the contextual information that surrounds the message.

One wonders whether there isn’t a misunderstanding on the users’ side about what exactly is going to be shared. Of course, users are accustomed to Facebook being data-voracious and dragging its feet to comply with the various regulations. One can therefore understand the mistrust of internet users when the name Facebook is mentioned in relation to data. But the fact is that we face other developments, which are quite alarming, as people continue to post private content on social networks. This privacy paradox goes on and on.

That being said, one can imagine situations in which metadata alone makes it possible to draw up a fairly accurate profile of a person. This can have consequences, especially if this information is, for example, included in criminal proceedings. Therefore, one cannot go so far as to say that metadata has no impact on privacy.

What is metadata?

It includes all the data surrounding the message, such as the date and time of the connection or the phone numbers involved in the exchange. Metadata thus relates to the transmission of the content but does not contain the content itself. The notion also exists in telecom law, in the case of surveillance conducted during a police investigation. The police can use two types of surveillance. First, there is live surveillance, which allows them to record the conversations and/or text messages that are sent and received. Second, there is retroactive surveillance, which allows the police to access antenna readings to find out who was where at what time. The information between two connections is purely technical. In the first case, we speak of content; in the second, of metadata.
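The distinction can be sketched in code. The following is an illustrative model only — the field names are hypothetical and do not reflect WhatsApp’s actual schema — showing how a communication profile can be built from metadata alone, without ever decrypting the message content.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical message record (illustrative field names, not a real schema).
@dataclass
class Message:
    sender: str          # metadata: phone number of the sender
    recipient: str       # metadata: phone number of the recipient
    timestamp: datetime  # metadata: when the message was sent
    ciphertext: bytes    # content: end-to-end encrypted, unreadable in transit

def metadata_profile(messages):
    """Summarise who talked to whom and how often, using metadata only.
    The encrypted content (ciphertext) is never touched."""
    profile = {}
    for m in messages:
        key = (m.sender, m.recipient)
        profile[key] = profile.get(key, 0) + 1
    return profile

msgs = [
    Message("+41790000001", "+41790000002", datetime(2021, 1, 15, 8, 30), b"..."),
    Message("+41790000001", "+41790000002", datetime(2021, 1, 15, 23, 10), b"..."),
    Message("+41790000001", "+41790000003", datetime(2021, 1, 16, 12, 5), b"..."),
]
# Pairs of correspondents and contact frequency emerge from metadata alone.
print(metadata_profile(msgs))
```

Even this toy example shows why metadata matters: who contacts whom, how often, and at what hours is enough to sketch a social graph, which is exactly the privacy concern raised above.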

How are posts on Facebook and Instagram more problematic in terms of data?

There is a double dimension to posts on social networks. First, there is the fact that we are feeding the monster with data. It is thanks to the production of content that these GAFAM companies expand and eventually become hegemonic, causing the disappearance of competing companies. The second dimension is the risk related to the content itself. A classic example is someone posting a picture on the beach with the comment "Having fun in Miami". This kind of post is golden information for people with bad intentions, burglars for instance. And of course, we also face the problem of sexual predation on minors.

Consequently, a post can sometimes be a threat not only for our privacy, but also for our security or even our physical well-being. Therefore, the issue should not only be examined in the context of the re-use of data for advertising purposes.

Can we still access our data once an account is shut down?

As far as WhatsApp content is concerned, the answer is no, since there is no database in which the messages are stored once they have been delivered. The question is whether it will be possible to access the actual metadata. Today, there are important developments on the issue of access to content. Facebook used to consistently deny access to the data it holds. Things have changed, and we can now retrieve the data we posted, at least our own. In accordance with the principle of the right of access that exists in Swiss and European law, we should have access to all data labelled as personal. However, if the data is used in an aggregated or statistical manner, it may lose the status of personal data and be exempted from the right of access.

Is there any difference with other messaging apps such as Telegram, Signal or Threema?

There is no major difference between Signal and Threema. The level of security of both apps is more than sufficient. Threema, which can also be used anonymously, has the advantage of being based in Switzerland. Telegram is a cloud-based instant messaging app launched by two Russian brothers best known for creating the social network VK. However, it is not as secure as is often assumed: for instance, E2E encryption is limited to its Secret Chat feature. All these apps are competitors, and it is the critical mass of users that counts. In the end, WhatsApp has never been criticised for its security (the E2E encryption on all messages is rather good), but for its use of metadata and the change to its general conditions of use.

Which data protection laws are applicable in Switzerland?

Swiss data protection law is quite complicated. There is a Data Protection Act (DPA) in force in Switzerland, which is completely obsolete as it dates back to 1992. Besides, it only applies to private entities and federal bodies. A completely revised version of the law, which takes into account the digital landscape and most of the General Data Protection Regulation (GDPR) standards, is expected to come into force next year. Swisscom, for example, is subject to the DPA, as is the federal administration. On the other hand, if the HUG (University Hospital of Geneva) processes your data, it is the Geneva cantonal law, the LIPAD, that applies. The law is different in every Canton and is not standardised for public entities.

Other regulations, such as the GDPR, may also apply on Swiss territory, but often only in a limited way for Swiss companies. First, the GDPR applies to the offering of goods or services to data subjects in the Union, irrespective of whether payment is required from the data subject. Second, it applies when processing activities are related to the monitoring of data subjects’ behaviour, as far as that behaviour takes place within the Union.

The difficulty lies in knowing which law applies to foreign companies. And the matter is complicated, as the principle of territoriality rules in Switzerland. There are no real precedents, apart from one: the famous Google Street View case, where cars were driving around Switzerland collecting data. The case was brought before the Federal Court, which ruled that Google was partly subject to Swiss law because the data collection was taking place on Swiss territory. In that case the collection was physical, and thus the link to the territory more obvious. The situation is far more complicated when you subscribe to a service that tells you the only applicable law is American law.

What does this crisis reveal?

The inconsistency in user behaviour, almost as a gesture of political distrust. Facebook has become the big bad wolf whereas before it was an inspiring start-up, the same image that Google also had at its very beginnings. In the end, few people are going to give up Facebook, apart from young people who no longer show interest in using the platform anyway.

People are their own worst enemy. We have a very ambivalent relationship with the issue. However, there are far more alarming digital problems, such as facial recognition, which is a much more intrusive technique.

What would be the next steps to tackle the different issues?

For a start, we should strongly reinforce media education, the understanding of fake news, filter bubbles and censorship phenomena, as well as their impact on the ecosystem. We also need to remind people of the need for critical thinking. For the very first time, the container actually has an impact on the content. Otherwise, we will create generations of people in tunnels who will feel that everything they find on the internet is correct, people who will not realise that because of the configuration of the content, they are only given what they want to see, and thus completely close their horizons. This is a more essential question to me than knowing if WhatsApp is bad or not.

Talking about critical thinking, what is your position on the Trump ban on Twitter and Facebook?

In my opinion, it can only be understood as a power struggle between journalists, internet platforms and the presidential administration throughout Trump’s term in office. I think the platforms lost their heads a bit in the process of deleting accounts. They have even labelled content as unverified facts. This is absurd. Does that mean all the other facts are verified? They can't decide to label Trump's content and not the rest. That is an illusion. So is tracking fake news. It will never work. Just look at the French law against the manipulation of information: it is a resounding failure, as we knew it would be from the very beginning. Declaring war on fake news is dangerous, because behind everything we call facts, there is always an opinion hidden. Hence, this way of reasoning is fallacious. The strategy should focus much more on fighting the phenomenon by awakening critical minds, or possibly on trying to avoid mass manipulations similar to those we have seen in the past.

Meanwhile, hate speech and all other content that violates obvious norms is another problem. In the European Union, the Digital Services Act (DSA), currently being discussed, tackles this type of violation. It is important to understand the difference between the host and the publisher. Traditionally, social media platforms are hosts, providing space for people to post user-generated content. Their role is passive and their responsibility almost non-existent. They happen to be at risk today and are anything but eager to become publishers: if they did, they would lose the immunity they enjoy in the United States. Deleting accounts and adding warnings is therefore a purely editorial choice. The DSA wants to force these platforms to cooperate, whilst guaranteeing them the status of a host.

We are facing the dreadful Ministry of Truth; we are going back to the terrifying times of information control. These are still fairly recent issues when seen on a long time scale, so it is relatively normal that we have not yet found the philosopher's stone for regulating these networks. We have to succeed both in maintaining freedom of expression and in regulating misconduct. These injunctions are somewhat contradictory, but we have to deal with that. Personally, I am wary of ready-made solutions.

These platforms have become so hegemonic and are used by so many people that they are almost perceived as a public good. We easily – and too often – forget that they are private companies which have completely monopolised speech. We could go as far as to say that speech has been confiscated by private networks. Hence the need to supervise them.