We have digital trust issues. The Swiss population rejected a proposed digital ID earlier this year in a referendum, in large part because of privacy concerns resulting from the involvement of private companies in providing the ID. This follows a notable reluctance to adopt the Swiss Covid contact tracing app, also on privacy grounds.
This trend is not limited to Switzerland; contact tracers worldwide have been met with distrust by people wary both of health authorities and of the technologies being deployed. This lack of trust is not always evident day to day, but these rejections show how it can limit our digital future if not addressed.
A general 'techlash' is taking place, which started with the understanding that free services come in exchange for personal data, and that there is a flip side relating to the privacy and security of that data. As our use of the Internet increased in response to Covid-19 pandemic restrictions, along with a rise in cyberattacks, these fears have generally grown. They are then expressed when we are confronted with a discrete choice about our data, such as with the digital ID.
The outcome of the digital ID vote was interesting, because we do not typically get to measure the level – or absence – of digital trust. That is, in part, because there seems to be a difference with respect to existing versus new services. Specifically, while digital trust has been falling, there is little evidence of people using existing digital services less. That may be because of familiarity with the apps they already use and comfort with the perceived risks; unwillingness to give up the benefits of popular services; or a general herd mentality to use what everyone else is using.
On the other hand, there is no familiarity with new services, and nothing to give up if they are not used. However, the long-run benefits from new services are not realised. Perhaps nothing illustrates this better than the experiences with Covid-19 contact tracing apps.
Contact tracing is a traditional way of cutting down transmission by isolating those with whom a sick person had contact while infectious. While the spread of Covid quickly overwhelmed manual contact tracing, the prevalence of smartphones provided a new tool to automate contact tracing. An early study by the University of Oxford claimed that if 60 per cent of citizens used a contact tracing app, the pandemic would be stopped – and initial surveys showed a high willingness to use the apps.
Needless to say, it did not work out that way. The uptake of the apps was well below 60 per cent in all countries – around 20 per cent in Switzerland – and the key reason given was worry about privacy. However, many of the apps were based on a privacy-protecting exposure notification feature developed by Apple and Google for their phones, which kept track of users by random ID, did not track location, and did not provide any information to any central authority. While studies showed that existing use of contact tracing apps reduced transmission, voluntary adoption was clearly not enough to stop the pandemic.
One thread that runs through the digital ID vote and contact tracing app adoption is a lack of trust in the companies providing the services, and in whether people's data will be secure. But every day we trust companies with our valuables and our lives. We put our money into banks, where we cannot see it, but we are confident it will be there when we need it; we put food and pharmaceuticals into our bodies that we cannot test but we trust; and we entrust ourselves and loved ones to cars, elevators, and airplanes generally without a second thought. It was not always so – just think how car safety has evolved over the past 50 years.
Each of these industries built trust based on continual oversight. Bank deposits are guaranteed to protect depositors, and banks must comply with regulations. Cars have safety standards: they are tested, they are rated, and manufacturers face liability if they fall short. The same is true, of course, for pharmaceuticals – as we became aware while desperately following vaccines through testing and certification. While government has the ultimate regulatory and enforcement role, other organisations may help develop standards, certifications, and ratings, and companies' self-regulation can also play an important role.
Many new services are being introduced, involving artificial intelligence, internet of things devices, and automation. Their acceptance and adoption will depend in large part on trust. The demonstrated lack of adoption of new digital services should be a wake-up call for all stakeholders – governments, companies, researchers, and civil society – otherwise lulled by continued use of existing digital services in spite of the techlash. We need to learn how to build the trust that will be needed for our digital future.
Michael Kende is a lecturer at the Graduate Institute of Geneva, teaching a course on internet economics and governance. To address the causes of these issues and propose solutions, the Graduate Institute's Centre for Trade and Economic Integration (CTEI) and EPFL's Centre for Digital Trust (C4DT) will be hosting the Digital Trust 2025 conference on 15 October. Registration for in-person or online attendance is available here.