Digital trust, governance, corporate social responsibility, sustainability. From buzzwords to keywords, these concepts are steadily shaping the artificial intelligence (AI) of tomorrow as its use in everyday life continues to expand.
They are also the focus of a virtual event taking place in Geneva next week, the 2030 Digital Fasttrack Studios (DFS), convened by UNESCO’s Geneva liaison office together with Microsoft Corporation’s United Nations affairs team in Geneva, and The Graduate Institute, Geneva’s Centre for Trade and Economic Integration.
As AI gains momentum and takes root in our private sphere, concerns about its ethical use have ignited intense public debate. Gender and racial discrimination by biased algorithms is one example of the limits and shadows of AI and, unfortunately, it is not the only one.
The recent acceleration in the adoption of AI technologies, driven by the pandemic and the digitalisation of our social and professional interactions, is having a tangible impact on our daily lives. The flourishing of surveillance systems to measure and track employee productivity, for example, raises questions over where to draw the line between what AI should and shouldn’t be allowed to do. AI governance is critical and can’t be delayed.
The DFS event, taking place on 7 December, will seek to shed light on the interaction between AI, ethics, and human rights in helping to promote a peaceful society - one of the 17 Sustainable Development Goals (SDGs). Geneva Solutions spoke to Jean-Yves Art, senior director of strategic partnerships at Microsoft and moderator of the session, on the role of tech governance, trust, and Geneva-based organisations in shaping the debate.
Why is it important to mobilise society and the different stakeholders in Geneva around technology and the SDGs?
We are at the beginning of what is called the Fourth Industrial Revolution and, in this moment of profound change, we need to raise awareness of the importance of governance, which is essential to build sustainable trust in technology. If we want a society able and willing to take full advantage of digital technology, we need to take a step forward in this direction. We have already made significant progress with instruments such as the General Data Protection Regulation (GDPR), but it’s not enough. Governance is a crucial enabler and driver of trust, and Geneva is the right setting for these discussions thanks to its international dimension and the number of UN specialised agencies and highly reputed organisations headquartered in the city, such as the Office of the High Commissioner for Human Rights (OHCHR) or the International Telecommunication Union (ITU).
Do you think we need specific governance for AI?
At Microsoft, we believe that AI is going to drive the Fourth Industrial Revolution just as the invention of the internal combustion engine drove the 19th century. We have made an initial step on privacy protection, but we still have to find answers to other major questions related to AI, such as the risk of bias and the need for transparency and accountability. AI is already influencing numerous aspects of our lives, and it’s crucial to ensure it evolves in a way that benefits our society. Governance is a critical part of this process.
Therefore, governance is a must, but not the finish line. What other innovative approaches are needed?
At this early stage of the Fourth Industrial Revolution, we still have significant opportunities to set the course of the technological (r)evolution. To achieve positive change, all the stakeholders contributing to this debate should come together and find a common way forward. Business players need to be involved as they know what technology can achieve and where it is heading, but it’s not up to tech companies to decide whether and how technology should be used. Their role is to inform governments and international organisations. We also need civil society at the table, as users must have a say on the direction technology should take. With the Digital Fasttrack series, we hope to contribute to raising awareness among all stakeholders and help them find a common pathway toward sustainable technological progress, not just to balance the different interests, but to optimise them.
However, the finish line goes beyond creating governance and rules. The discussion is not only about norms and principles but encompasses much more than that. If we want to bring about real change, we need, for instance, to go into education at schools and universities. Engineers need to realise that what they do has an impact on ethics and human rights. Effective governance requires a holistic approach.
In what sense does AI challenge existing ethical standards and human rights? What do AI technologies, human rights and ethics have to do with each other?
AI is a technology that looks at massive amounts of data to make predictions and solve specific issues. It can be used for good or bad purposes. Take, for instance, facial recognition. It can be exploited to identify political opponents in authoritarian regimes, but also to reunite separated families in case of wars, conflicts and climate disasters. That’s where human rights, ethical principles and values need to step in to determine how we as individuals decide to live together and what values we want technology to serve. It’s here that AI, human rights and ethics come together.
We have the Universal Declaration of Human Rights, but alongside those norms there is a set of values that pursues the same goal: protecting and defending human dignity. These rules and ethical principles are the guardrails for the use of AI, which will probably be one of the most important technologies for our society over the next five to ten years and beyond.