Translating AI ethics principles into action through digital cooperation
by Prateek Sibal, Specialist Consultant on AI at UNESCO and Lecturer in Digital Innovation and Transformation at Sciences Po, Paris
As a general-purpose technology, Artificial Intelligence (AI) affects all spheres of our lives. A recent study on the role of AI in achieving the Sustainable Development Goals shows that AI could enable the accomplishment of 134 of the 169 targets under the 17 Sustainable Development Goals (SDGs), which cut across society, the economy and the environment. For instance, AI can help deliver food and water security and make education and health services more efficient.
At the same time, AI has the potential to hinder progress on 59 targets concerning the environment and society. On the environment front, some projections suggest that by 2030, information and communication technologies (ICTs) may account for up to 20 per cent of global electricity consumption, compared to about 1 per cent at present. Unless more environmentally friendly ICTs are developed, this growth would adversely affect the achievement of the climate-related SDGs.
On the societal front, the development and use of AI raise concerns about privacy, freedom of expression, discrimination, transparency and unequal access to technology. These factors have the potential to exacerbate existing social and economic inequalities between and within countries.
How are governments responding to AI?
Governments across the world are scrambling to leverage the large economic potential of AI, estimated at USD 13 trillion in additional economic activity by 2030. At the same time, they are attempting to maximize the gains and minimize the harms of AI for democracy, labour and the environment. As of 2019, governments, inter-governmental organizations, businesses, NGOs and academia had articulated at least 84 sets of AI guidelines or ethical principles.
These ethical guidelines, originating mostly in developed countries, cluster around the principles of transparency, justice and fairness, non-maleficence, responsibility and privacy. The principles have been adopted by countries at inter-governmental platforms like the OECD, the European Union, the G20 and the G7; discussed by technical experts and academics at international conferences; developed by civil society organizations through participatory processes involving citizens; and drawn up by expert groups constituted by private sector companies and alliances.
Why are ethical guidelines not enough?
While the many ethical guidelines emerging at the national and regional levels and from different stakeholder groups help crystallize a broad consensus on the values that should be respected in the development and use of AI globally, they leave the question of translating principles into action insufficiently addressed.
Ethical guidelines for AI have been criticized for being too high-level, for conflicting with one another, or even for being meaningless as guides to the concrete actions that governments, engineers, businesses or justice systems are expected to take.
Moving from principles to the real questions that AI designers and regulators face, researchers at the University of Cambridge highlight some of the tensions between different principles that remain unresolved. For instance:
— Should data be used to improve the quality and efficiency of services while compromising the privacy and autonomy of individuals?
— Should AI developers write algorithms that make decisions and predictions more accurate but rank lower on fairness and equal treatment? (See the sketch after this list for a toy version of this trade-off.)
— How to balance the benefits of increased personalization in the digital sphere with enhancing solidarity and citizenship?
— How to balance the use of automation to make people’s lives more convenient with empowering people by promoting self-actualization and dignity?
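The accuracy-fairness tension above can be made concrete with a toy example. The sketch below is a hypothetical illustration on synthetic data, not a method taken from any of the guidelines discussed here: it trains a classifier and reports both accuracy and the gap in positive-prediction rates between two groups, one simple reading of equal treatment.

```python
# Hypothetical illustration of the accuracy-fairness tension on synthetic data.
# "group" is an illustrative protected attribute, correlated with one feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
group = (X[:, 0] > 0).astype(int)  # synthetic group membership

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

accuracy = (pred == y_te).mean()
# Demographic-parity gap: difference in positive-prediction rates across groups.
parity_gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())

print(f"accuracy: {accuracy:.3f}, parity gap: {parity_gap:.3f}")
```

A model that squeezes out more accuracy by exploiting the group-correlated feature will typically widen the parity gap; deciding how much of one to trade for the other is precisely the question the principles leave open.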
As an illustration, it may be useful to consider the principle of transparency, which features in many AI ethics guidelines. The meaning of transparency vis-à-vis AI systems changes with the level of abstraction or the context in which it is applied.
In the development of AI systems, transparency may be facilitated through transparent procedures that record the decisions taken by developers and businesses, concerning, among other things, the objective function of the algorithms and the data used for training and for arriving at algorithmic decisions.
When it comes to the use of AI systems, however, the norms of transparency may concern disclosures of how the user’s information will be processed, user consent, and the maintenance of decision-making logs for ex-post analysis of decisions for bias and discrimination.
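As a minimal sketch of what such a decision-making log might look like in practice, the record below keeps the inputs, the output, the model version and a consent flag so that a decision can be re-examined later for bias. All field names and values here are illustrative assumptions, not an established standard.

```python
# Hypothetical sketch of a decision-log entry for ex-post auditing.
# Field names and values are illustrative assumptions, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionLogEntry:
    timestamp: str        # when the automated decision was made
    model_version: str    # which model produced it
    inputs: dict          # features used (or a privacy-preserving digest)
    decision: str         # the output returned to the user
    user_consented: bool  # whether disclosure and consent requirements were met
    reviewer: Optional[str]  # human reviewer, if the decision was escalated

entry = DecisionLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="loan-scorer-v1.3",  # hypothetical model name
    inputs={"income_band": "B", "region": "EU"},
    decision="declined",
    user_consented=True,
    reviewer=None,
)

# Append-only JSON lines make later audits for bias straightforward.
print(json.dumps(asdict(entry)))
```

Keeping such logs append-only and queryable is what turns the abstract norm of transparency into something an auditor can actually inspect.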
Finally, the transparency principle may also apply to the internal technological processes of AI systems, pertaining to the black-box problem of AI, where the decisions taken by neural networks or support vector machines are not understandable by humans. In this case, measures to ensure transparency may involve technical solutions such as explainable AI.
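As one concrete illustration of such technical solutions, the sketch below uses permutation feature importance, a simple model-agnostic technique, rather than any specific method mandated by the guidelines: it asks a black-box model which inputs its decisions depend on by shuffling each feature and measuring the resulting drop in accuracy (synthetic data throughout).

```python
# Hypothetical sketch: a model-agnostic explanation for a black-box model,
# using permutation feature importance on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in test accuracy:
# large drops flag the features the model relies on most.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Such post-hoc explanations do not open the black box itself, but they give regulators and affected users a tractable account of what drove a decision.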
How to translate AI ethics principles into action?
Several governments, international organizations and standards bodies are alive to these concerns and are working to translate high-level commitments into actionable requirements and norms. For instance, the OECD, through its Network of Experts and AI Observatory, is translating its AI Principles into concrete advice for policymakers. The standards body IEEE is developing standards and certifications for AI systems as part of its Ethics of Autonomous and Intelligent Systems programme.
Some inspiration is also drawn from the field of medicine, where “professional societies and boards, ethics review committees, accreditation and licensing schemes, peer self-governance, codes of conduct, and other mechanisms supported by strong institutions help determine the ethical acceptability of day-to-day practice by assessing difficult cases, identifying negligent behaviour and sanctioning bad actors”.
However, the general-purpose nature of AI and the difficulty of creating professional bodies and institutional mechanisms for oversight make ethical guidelines for AI difficult to implement.
Therefore, solutions may lie in translating high-level principles into “mid-level norms and low-level requirements”. Furthermore, these “norms and requirements cannot be deduced directly from mid-level principles without accounting for specific elements of the technology, application, context of use or relevant local norm”.
Articulating a hierarchical framework that translates principles into mid-level norms and low-level requirements according to technology, application and context of use remains a challenge because of the evolving nature of technology and the difficulty of anticipating relevant contexts. Further challenges are posed by gaps in human and institutional capacities for technology governance at the national and local levels.
Therefore, translating AI ethics guidelines at the international level requires enhanced international cooperation and a robust mechanism for the provision, monitoring and accountability of digital public goods. Implementing these guidelines would also call for programmes to raise awareness and to strengthen capacities for monitoring and evaluation at all levels.
The UN Secretary-General’s Roadmap for Digital Cooperation, developed through a two-year consultative process, provides a good starting point for addressing some of the governance challenges related to AI. The roadmap aims to facilitate digital cooperation among transnational actors through its working groups on, among other topics, digital public goods and human-centred AI. Furthermore, it proposes strengthening policy advice and extending capacity-building support to governments through digital help desks at the national level.
Views expressed above belong to the author(s).