What does Transparent AI mean?
by Fenna Woudstra, affiliated with Filosofie in actie (an organization based in Arnhem, the Netherlands, that advises technology companies and projects on the ethical use of data)
Over the last couple of years, many companies, institutions, governments, and ethicists have proposed their own principles for developing Artificial Intelligence (AI) in an ethical way. Most of these guidelines include principles about fairness, accountability, and transparency, but there is a wide range of interpretations and explanations of what these principles entail. This article delves into the transparency of AI.
What is transparency?
Because of the growing use of AI in many different fields, problems like discrimination caused by automated decision-making algorithms are becoming more pressing. The underlying problem is that biases can exist undetected, due to the opacity of these systems.[1]
The logical response to the problems caused by opacity is a greater demand for transparency. Transparency is essentially an information exchange between a subject and an object, in which the subject receives information about the performance of a system or organization that the object is responsible for.[2]
However, putting transparency into practice is easier said than done. Firstly, not everything can simply be made publicly available; other important values like privacy, national security, and corporate secrecy have to be taken into account.[3] Secondly, even after weighing these values, it remains unclear how transparency should be realized: what information about which aspects of an AI system should be disclosed for the system to be deemed transparent?
Results of comparing 18 Ethical AI guidelines
To find this out, eighteen guidelines for ethical AI, all retrieved from the AI Ethics Guidelines Global Inventory maintained by AlgorithmWatch, were analyzed for their principles and practical implementations of transparency.
As a result, a new framework has been created that consists of nine main topics: environmental costs, employment, business model, user’s rights, data, algorithms, auditability, trustworthiness, and openness (see Table 1 below). Each topic is explained by multiple practical specifications that were extracted from the existing guidelines. The topics cover the whole process of developing, using, and reviewing AI systems.
Table 1. Framework for Transparent AI
Topic | Be Transparent By Providing |
---|---|
Environmental Costs | – how much energy was needed to train and test the system<br>– what kind of energy the servers use (renewable or fossil)<br>– what is done to compensate for the CO2 emissions or other environmental harm |
Employment | – the use of people in low-cost countries for content moderation and content creation, or of clickworkers to develop and maintain AI systems (if this was the case)<br>– what is done to guarantee fair wages and healthy employment conditions for all contributors to the system |
Business Model | – the purpose of the system, what it can and cannot be used for<br>– the reasons for deploying the system<br>– the design choices of the system<br>– the degree to which the system influences and shapes the organizational decision-making process<br>– who is responsible for the development and control of the system |
User’s Rights | – all information presented through a clear user interface and in non-technical language<br>– that people will be informed when they interact with an AI system<br>– that people will be informed when decisions about them are being made by an AI system<br>– if/that/how people can opt out, reject the application of certain technologies, or ask for human interaction instead<br>– if/that/how people can check and/or change their personal data that is being used in the system<br>– if/that/how people can request an explanation for a given outcome or decision, free of charge<br>– if/that/how people can challenge a given outcome or decision<br>– if/that/how people can give feedback to the system |
Data | – what kind of data is being used in the system<br>– what variables are being used and why<br>– how/when/where the training and test data was collected<br>– what was changed/corrected/manipulated in the data, and why this had to be done<br>– how the required data is gathered from the user<br>– if/that people explicitly need to agree to the use of their data<br>– in which country the data is stored and where the provider is headquartered |
Algorithms | – what kind of machine learning algorithm is used<br>– how the algorithm works<br>– how the algorithm was trained and tested<br>– what the most important variables are that influence the outcome<br>– what the possible outcomes are, what groups people can be placed in<br>– what notion of fairness is used in the algorithm<br>– why this algorithm is the best option to use<br>– whether it is possible to avoid highly complex, black-box algorithms |
Auditability | – the degree to which it is possible to reconstruct how and why the system came up with a given outcome (for example, by logging what the system does)<br>– whether it is possible to give counterfactual explanations, i.e. what variables can be changed for a different outcome (see the sketch after this table)<br>– whether the system or a human can give a justification for why the given outcome is right or ethically permissible<br>– what was done to enable the auditing of the whole system (for example, by providing detailed documentation of all aspects)<br>– whether internal or independent external parties audited the system<br>– whether an audit by the public community or researchers is possible |
Trustworthiness | – what has been done to identify, prevent, and mitigate discrimination or other malfunctions in the system<br>– what the capabilities and limitations of the system are<br>– what risks and possible biases have been identified<br>– what possible impacts the system can have on people<br>– what the accuracy of the system is<br>– how trustworthy the system is in comparison to other systems |
Openness | – what open source software was used (if this was the case)<br>– what samples of the training data were used; explain reasons for non-disclosure<br>– what the source code of the system is; explain reasons for non-disclosure<br>– what the auditing results are; explain reasons for non-disclosure |
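To make two of the auditability specifications concrete, the sketch below shows what decision logging and a counterfactual explanation could look like in practice. It is a minimal illustration in Python; the credit-scoring model, its weights, and the threshold are invented for the example and are not taken from any of the analyzed guidelines.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("decision-audit")

# Hypothetical linear scoring model: approve an application if score >= 0.5.
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 0.5

def predict(applicant: dict) -> bool:
    score = sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)
    return score >= THRESHOLD

def predict_and_log(applicant: dict) -> bool:
    """Decide, and write an audit record so the outcome can be reconstructed later."""
    outcome = predict(applicant)
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "weights": WEIGHTS,  # the variables that influence the outcome
        "outcome": "approved" if outcome else "rejected",
    }))
    return outcome

def counterfactual(applicant: dict, variable: str, step: float = 0.1, max_steps: int = 100):
    """Search for the smallest change to one variable that flips the decision."""
    original = predict(applicant)
    changed = dict(applicant)
    # Move the variable in whichever direction pushes the score toward the
    # opposite decision (e.g. raise income, which has a positive weight,
    # for an applicant who was rejected).
    delta = step if (WEIGHTS[variable] > 0) != original else -step
    for _ in range(max_steps):
        changed[variable] += delta
        if predict(changed) != original:
            return changed[variable]
    return None  # no counterfactual found within the search range

applicant = {"income": 0.4, "debt": 0.5, "years_employed": 0.2}
predict_and_log(applicant)
print("income needed to flip the decision:", counterfactual(applicant, "income"))
```

A rejected applicant could then be told, for instance, that a higher income would have led to approval, and an auditor could replay the logged records to check how and why each decision was made.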
Some critical notes
Unexpectedly, the principles of transparency do not only address algorithmic transparency or the explainability of the AI system, but also the transparency of the company itself. This is most notable in the first two topics, environmental costs and employment: they show that how a system is made, at what cost, and under which circumstances are also part of the development of a system.
Being transparent about these aspects may well result in greater trust and acceptance. Virginia Dignum, Professor of Social and Ethical Artificial Intelligence at Umeå University in Sweden, has already noted that “trust in the system will improve if we can ensure openness of affairs in all that is related to the system. As such, transparency is also about being explicit and open about choices and decisions concerning data sources and development processes”.[4]
Furthermore, it should be noted that not all existing guidelines have been analyzed, nor can we be sure that the existing guidelines already include all the important principles, given the novelty of this subject. Additions to this framework can therefore be made as and when important principles are found to be missing.
That is why this framework is not meant to provide strict rules; rather, it aims to give an overview of the possible topics to be transparent about. Organizations can decide which aspects are relevant to their system and organization.
This framework also tries to show that being transparent does not necessarily mean that everything must be made publicly available; explaining why a principle cannot be fulfilled is also a form of transparency. Not publishing the data used can be perfectly ethical if it is done to protect people’s privacy or national security, as the American economist Joseph Stiglitz has noted.[5]
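As an illustration of this “disclose or explain” idea, one could imagine a machine-readable transparency record organized along the framework’s topics, in which every field either discloses information or gives the reason for withholding it. The sketch below is hypothetical; all names, values, and the URL are invented for the example.

```python
import json

# A hypothetical machine-readable transparency record organized along the
# framework's topics. Every entry either discloses information or states a
# reason for non-disclosure, so withholding is itself made transparent.
transparency_record = {
    "business_model": {
        "purpose": "rank loan applications for manual review",
        "responsible": "model-governance team of the deploying organization",
    },
    "data": {
        "variables": ["income", "debt", "years_employed"],
        "training_data_sample": {
            "disclosed": False,
            "reason": "records contain personal financial data (privacy)",
        },
    },
    "algorithms": {
        "model_type": "logistic regression",
        "fairness_notion": "equal opportunity across age groups",
    },
    "openness": {
        "source_code": {
            "disclosed": False,
            "reason": "corporate secrecy; an independent audit report is published instead",
        },
        "audit_results": {"disclosed": True, "url": "https://example.org/audit-report"},
    },
}

print(json.dumps(transparency_record, indent=2))
```

Publishing such a record alongside a system would let outsiders see at a glance what is disclosed, what is withheld, and why.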
Finally, this framework shows that even when an algorithm’s workings are highly complex, there are still many aspects to be transparent about and many ways to achieve transparency that can help reveal or prevent biases in AI systems. There is much more to an AI system than just a black box.
REFERENCES
[1] O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. See also: Lepri, B., Oliver, N., Letouzé, E., Pentland, A. and Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.
[2] Meijer, A. (2013). Understanding the complex dynamics of transparency. Public Administration Review, 73(3), 429-439.
[3] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.
[4] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.
[5] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.
Views expressed above belong to the author(s).
Note: Fenna Woudstra did research on the transparency of AI during her internship at Filosofie in actie, based in Arnhem, the Netherlands. This article is a summary of her findings. You can read her full research paper here.