9 ways to put AI ethics into practice

This document presents a collation of expert insights on creating and mandating ethical AI devices & applications — contributed by members of the AI Policy Exchange Editorial Network under the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? — for public appraisal.

Edited by Raj Shekhar and Antara Vats

Foreword

The many formulations of artificial intelligence (AI) ethics coming from private and public enterprises, regional and international organizations, and civil society groups globally have been criticized for their impracticable propositions for creating and mandating ethical AI, among other things. These propositions, broadly hinging on the principles of fairness, accountability, transparency, and environmental sustainability, have been deemed impracticable because they are loaded with elusive, abstract concepts and paralyzed by internal conflicts. For instance, the need for high-quality data to enable AI systems to produce fairer or more efficient outcomes comes into conflict with the objective of securing individual data privacy. Moreover, individual privacy as a value is likely to be emphasized more in individualist cultures than in collectivist cultures.

However, in all fairness, even these impracticable propositions for creating and mandating ethical AI have laid some groundwork for creating actionable requirements for ethical AI and framing rules and regulations that would eventually mandate these requirements by prioritizing human safety and well-being. Indeed, international, regional, and national efforts to come up with practicable propositions for creating and mandating ethical AI have already begun —  the Institute of Electrical and Electronics Engineers (IEEE) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems to develop standards and certifications for AI systems; the Organization for Economic Cooperation and Development (OECD) Policy Observatory published a set of concrete policy recommendations for national governments to develop and deploy AI technologies ethically within their jurisdictions; Singapore released its Model AI Governance Framework providing practical guidance on deploying ethical AI to private companies.

Yet, even the most practicable propositions for creating and mandating ethical AI would remain unethical so long as they remain elusive enough to discourage their public appraisal.

Hence, in June 2020, we launched the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? as our small step towards facilitating public appraisal of various practicable propositions for creating and mandating ethical AI, earnestly put forth by several members of the AI Policy Exchange Editorial Network in the form of simple, insightful vlogs. These propositions ended up representing a diverse range of voices from amongst industry professionals, public policy researchers, and leaders of civil society groups within the AI community. In this document, we have attempted to compare and analyze these propositions to generate wholesome insights on how to create and mandate ethical AI for public appraisal.


Our Expert Contributors

Abhishek Gupta, Founder, Montreal AI Ethics Institute; Machine Learning Engineer, Microsoft; Faculty Associate, Frankfurt Big Data Lab, Goethe University
Ana Chubinidze, Former Research Fellow, Regional Academy on the United Nations; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Anand Tamboli, Chief Executive Officer, Anand Tamboli & Co.; Principal Advisor, AI Australia; Author, Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks (Apress)
Angela Kim, Head of Analytics and AI at Teachers Health Fund, Australia; Women in AI Education Ambassador for Australia; Founder, est.ai; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Charlie Craine, Chief Technology Officer and Chief Data Officer, Meister Media Worldwide
Chhavi Chauhan, Ethics Advisor, Alliance for Artificial Intelligence in Healthcare, Washington, D.C.
Deepak Paramanand, Ex-Microsoft; AI Product Innovator, Hitachi
Diogo Duarte, Legal Consultant and Researcher, Data Protection
Felipe Castro Quiles, Co-founder & CEO, Emerging Rule and GENIA Latinoamérica; Fellow, Singularity University, Silicon Valley; Member, NVIDIA AI Inception; Member, Forbes AI Executive Advisory Board
Ivana Bartoletti, Technical Director, Privacy, Deloitte; Co-founder, Women Leading in AI Network; Author, An Artificial Revolution: On Power, Politics and AI (Mood Indigo)
James Brusseau, Director, AI Ethics Site, Pace University, New York
Merve Hickok, Founder, aiEthicist.org; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Nigel Willson, Former European CTO, Professional Services, Microsoft; Founder, awakenAI and Co-Founder, We and AI
Oriana Medlicott, Subject Matter Expert & Consultant, AI Ethics, CertNexus; Member, Advisory Board, The AI Ethics Journal (AIEJ), The Artificial Intelligence Robotics Ethics Society (AIRES); Former Head of Operations, Ethical Intelligence Associates, Limited
Renée Cummings, Founder & CEO, Urban AI, LLC; East Coast Lead, Women in AI Ethics (WAIE); Community Scholar, Columbia University
Stephen Scott Watson, Managing Director, Watson Hepple Consulting Ltd.; Founding Editorial Board – AI and Ethics Journal, Springer Nature
Tannya Jajal, Technology Contributor, Forbes Middle East; Co-Leader, Dubai WomeninTech Lean In Circle
Tom Moule, Executive Lead, The Institute for Ethical AI in Education, University of Buckingham
Tony Rhem, CEO / Principal Consultant, A.J. Rhem & Associates, Inc.; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Virginie Martins de Nobrega, AI Ethics & Public Policy Expert, Creative Resolution


9 Ways to Create and Mandate Ethical AI 

Let’s contextualize

AI devices & applications should be deemed ethical if they tangibly respond to the human needs and aspirations specific to the social, economic, political, and cultural contexts in which they are developed and deployed.

Explanation
In multilingual countries like India, an ethical AI use case could enable translation of classroom lectures delivered or recorded in one language (say, English) into several vernacular languages for cost-effective, equitable dissemination of educational content amongst learners residing in multiple regions of the country.
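As a purely illustrative sketch of this use case, the snippet below translates a few sentences of an English lecture transcript into Hindi using the open-source Hugging Face transformers library and a publicly available machine translation checkpoint; the specific library, model, and sentences are our assumptions, not part of the contributors' proposal.

```python
# Illustrative only: lecture translation with an assumed open-source pipeline.
from transformers import pipeline

# Assumed English-to-Hindi checkpoint from the Hugging Face Hub.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

lecture_transcript = [
    "Photosynthesis converts light energy into chemical energy.",
    "Plants absorb carbon dioxide from the air through their leaves.",
]

for sentence in lecture_transcript:
    hindi = translator(sentence)[0]["translation_text"]
    print(sentence, "->", hindi)
```

The same pattern could be repeated for other language pairs to reach learners across regions, though production use would also need speech recognition for recorded lectures and human review of the translated output.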


Let’s teach

Ethical AI is significantly predicated on creating responsible users of AI devices & applications, for which systematic and concerted measures need to be undertaken to improve AI literacy amongst the general population. Also, developers of AI systems should be systematically trained to appreciate the ethical risks associated with the use of AI devices & applications they develop and empowered with tools to reasonably foresee and adequately circumvent those risks.

Explanation
AI and ethical AI as disciplines should enter the school course curricula to create the next generation of responsible users of AI devices & applications. Also, AI companies should provide mandatory training to their developers that is focused on inculcating an interdisciplinary understanding of the purpose, use, and impact of AI technologies they develop and deploy.


Let’s concretize

Different use cases of AI come with different capabilities posing different risks and promising different benefits to individuals and society. Hence, technical safety standards as well as rules and regulations mandating these standards for different use cases of AI would be different.

Explanation
Both targeted advertising online and self-driving cars are use cases of AI. However, unethical development and deployment of targeted advertising online could routinize individual privacy invasions, whereas that of self-driving cars could lead to road accidents, jeopardizing human life and limb. Hence, technical safety standards as well as rules and regulations mandating these standards for targeted advertising online should instate safeguards that protect individual privacy, whereas that for self-driving cars should instate accountability measures for tackling negligent compliance by manufacturers of self-driving cars.


Let’s update

Existing ethical and regulatory frameworks should be adapted to accommodate the emerging social, economic, and political concerns around the adoption and use of AI devices & applications.

Explanation
Electoral laws must be suitably amended to delineate rules and regulations addressing the growing political concerns around the use of unethical microtargeted political advertising in election campaigns that compromises voter informational autonomy by gathering and using voter data without informed voter consent, and corrupts voter judgment by deploying fake news.


Let’s incentivize

Mandates to develop and deploy ethical AI devices & applications should enable AI companies to not only save costs of sanctions resulting from non-compliance but also gain occasional or routine financial rewards for compliance, creating market incentives for AI companies to proactively comply.

Explanation
Financial regulatory institutions could conceptualize and mandate an investment grading scale for rating AI devices & applications based on measurable criteria determining their ethical value. This investment grading scale would enable investors to credibly assess the risks associated with their financial investments in AI devices & applications, and in turn, would create a market incentive for AI companies to attract financial investments by proactively developing and deploying AI devices & applications that rank high on the scale.


Let’s diversify

To counter the encoding of undesirable human biases in AI devices & applications, the use of which inevitably risks unfair and adverse treatment of certain individuals and groups, AI companies should be mandated to not only incorporate social diversity in their AI training datasets, but also to employ a socially diverse, multidisciplinary human resource pool that is tasked with developing and deploying their AI devices & applications.

Explanation
AI-powered recruitment tools could be corrected for results reflecting gender discrimination, in which better qualified female applicants are rejected in favour of less qualified male applicants for the same job, by programming the underlying software to ignore factors like gender. This is likely to become an ethical imperative once AI companies technically mandate such requirements via AI ethics checklists, configured through a collaborative, consultative engagement between developers and ethicists with satisfactory representation of women.
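A minimal sketch of the idea of "programming the underlying software to ignore factors like gender" is shown below; the tiny synthetic dataset and the scikit-learn model are our assumptions for illustration only, not the contributors' method.

```python
# Illustrative only: train a screening model without a protected attribute.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Tiny synthetic dataset (not real hiring data).
df = pd.DataFrame({
    "years_experience": [1, 5, 3, 8, 2, 7, 4, 6],
    "test_score":       [60, 85, 70, 90, 65, 88, 75, 80],
    "gender":           ["F", "M", "F", "M", "F", "M", "F", "M"],
    "hired":            [0, 1, 0, 1, 0, 1, 1, 1],
})

PROTECTED = ["gender"]                      # attributes the model must not see
X = df.drop(columns=PROTECTED + ["hired"])  # keep only job-relevant features
y = df["hired"]

model = LogisticRegression().fit(X, y)
print(model.predict(X))
```

Note that dropping the column alone does not remove proxies for gender (for example, gaps in employment history), which is precisely why the passage also calls for checklists and diverse review teams.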


Let’s account

Ethical AI principles and guidelines formulated by AI companies as bye-laws to self-regulate are laudable, but these should never discourage external regulation of AI companies and their activities via binding legislative instruments; so that when something goes wrong, penalizing the wrongdoer and compensating the victim are backed by legal certainty.

Explanation
Commitments made by social media companies to securing and maintaining the data privacy of their platform users should become actionable, i.e., the data privacy rights of these users should become the subject of a legislative enactment enabling a user whose data privacy rights have been infringed to take legal action with an appreciable degree of certainty of grievance redressal.


Let’s predict

The rapidly emerging use cases of AI are transformative in a way few past technologies have been, giving regulators little or no time to formulate responses that adequately tackle the risks they pose to society. If these regulatory responses are not formulated well and in time, AI as an industry could prove to be more harmful than beneficial for humanity. Hence, governments should consult and collaborate with AI companies, public policy researchers, and the public to systematically anticipate the risks and opportunities posed by potential AI use cases and initiate work towards the regulatory reforms that the development and deployment of these use cases would demand, with a view to minimizing their harmful impact on society.

Explanation
Governments could build internal capabilities — technical and human — for forecasting AI futures by drawing on the best methods and techniques of technological forecasting currently in place, such as regulatory sandboxing, in which live experiments are conducted in a controlled environment under a regulator’s supervision to safely test the efficacy and/or the impact of innovative software products or business models.


Let’s engage

Good public policy making is almost always led by effective engagement with all interested stakeholders. Hence, ethical AI devices & applications must almost always be a product of stakeholder engagement, whose exact modalities should be determined, for the engagement to be effective, by the particular AI use case that is sought to be developed and deployed.

Explanation
Before introducing ethical AI devices & applications to improve learning outcomes for pupils in schools, their need and demand should be properly assessed by effectively engaging with pupils, teachers, and school administrators, amongst others, via survey tools and methods like questionnaires and focus group discussions.


Readers are encouraged to send us their feedback at editor@aipolicyexchange.org, and share the document as widely as possible. Here’s to creating an AI-literate society, together.

Your brain on algorithms

Article by Andrew Sears, Founder, Technovirtuism and Advisor, All Tech is Human

A quote attributed to the linguist Benjamin Lee Whorf claims that “language shapes the way we think and determines what we can think about.” Whorf, along with his mentor Edward Sapir, argued that a language’s structure influences its speakers’ perception and categorization of experience. Summarizing what he called the “linguistic relativity principle,” Whorf wrote that “language is not simply a reporting device for experience but a defining framework for it.”[1]

Over the past several decades, a strange new language has increasingly exerted its power over our experience of the world: the language of algorithms. Today, millions of computer programmers, data scientists, and other professionals spend the majority of their time writing and thinking in this language. As algorithms have become inseparable from most people’s day-to-day lives, they have begun to shape our experience of the world in profound ways. Algorithms are in our pockets and on our wrists. Algorithms are in our hearts and in our minds. Algorithms are subtly reshaping us after their own image, putting artificial boundaries on our ideas and imaginations.   

Our Languages, Our Selves

To better understand how language effects our experience of the world, we can consider the example depicted in the 2016 science fiction movie Arrival. In the film, giant 7-legged aliens arrive on Earth to give humanity a gift – a “technology,” as they call it. This technology turns out to be their language, which is unlike any human language in that it is nonlinear: sentences are represented by circles of characters with no beginning or end. Word order is not relevant; each sentence is experienced as one simultaneous collection of meanings.

As the human protagonist becomes fluent in this language, her experience of the world begins to adapt to its patterns. She begins to experience time nonlinearly. Premonitions and memories become indistinguishable as her past, present, and future collapse into one another. The qualities of her language become the qualities of her thought, which come to define her experience.

A similar, if less extreme, transformation is occurring today as our modes of thought are increasingly shaped by the language of algorithms. Technologists spend most of their waking hours writing algorithms to organize and analyze data, manipulate human behavior, and solve all manner of problems. To keep up with rapidly-changing technologies, they exist in a state of perpetual language learning. Many are evaluated based on how many total lines of code they write.

Even those of us who do not speak the language of algorithms are constantly spoken to in this language by the devices we carry with us, as these devices organize and mediate our day-to-day experience of the world. To understand how this immersion in the language of algorithms shapes us, we need to name some of the peculiar qualities of this language.

Traditional human languages employ a wide range of linguistic tools to express the world in its complexity, such as narrative, metaphor, hyperbole, dialogue, or humour. Algorithms, by contrast, express the world through formulas. They break up complex problems into small, well-defined components and articulate a series of linear steps by which the problem can be solved. Algorithms – or, more accurately, the utility of algorithms – depend on the assumption that problems and their solutions can be articulated mathematically without losing any meaningful qualities in translation.

They assume that all significant variables can be accounted for, that the world is both quantifiable and predictable. They assume that efficiency should be preferred to friction, that elegance is more desirable than complexity, and that the optimized solution is always the best solution. For many speakers of this language, these assumptions have become axiomatic. Their understanding of the world has been re-shaped by the conventions and thought-patterns of a new lingua franca.

Algorithms and Technological Solutionism

Immersion in the language of algorithms inclines us to look for technological solutions to every kind of problem. This phenomenon was observed in 2013 by Evgeny Morozov, who described the folly of “technological solutionism” in his book, To Save Everything, Click Here:

“Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized–if only the right algorithms are in place!–this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.”[2]

An algorithmic approach to problem solving depends on the assumption that the effects of the algorithm – the outputs of the formula – can be predicted and controlled. This assumption becomes less plausible as the complexity of the problem increases. When technologists seek to write an algorithmic solution to a nuanced human problem – like crime prevention or biased hiring, for example – they are compelled to contort the problem into an oversimplified form in order to express it in the language of algorithms. This results, more often than not, in the “unintended consequences” that we’ve all heard so much about. Perhaps Wordsworth said it best, 200 years before Morozov:

“Our meddling intellect

Mis-shapes the beauteous forms of things:—

We murder to dissect.”[3]

A helpful example of all this is furnished by Amazon, which in 2014 began building an algorithmic tool to automate the vetting of resumes. The tool was designed to rate job candidates on a scale from one to five in order to help managers quickly identify their top prospects. Within about a year, the tool was found to discriminate against female candidates and was eventually shelved.

This parable contains all the components of the technosolutionist narrative: a problem rife with human complexity is assigned an algorithmic solution in the pursuit of efficiency; the problem is oversimplified in a hubristic attempt to express it numerically; the algorithm fails to achieve its objective while creating some nasty externalities along the way.

Immersion in the language of algorithms doesn’t just shape our understanding of the world; it shapes our understanding of the self. Members of the “quantified self” community use a combination of wearable devices, internet-of-things sensors, and – in extreme cases – implants to collect and analyze quantitative data on their physical, mental, and emotional performance. Adherents of this lifestyle pursue “self-knowledge through numbers,” in dramatic contrast to their pre-digital forebears who may have pursued self-knowledge through philosophy, religion, art, or other reflective disciplines.

The influence of algorithms on our view of the self finds more mainstream expressions as well. Psychologists and other experts have widely adopted computational metaphors to describe the functions of the human brain.[4] Such conventions have made their way into our common vernacular; we speak of needing to “process” information, of feeling “at-capacity” and needing to “recharge,” of being “programmed” by education and “re-wired” by novel experiences.     

To understand the extent to which algorithms have shaped our world, consider that even Silicon Valley’s rogues and reformers struggle to move beyond algorithmic thought patterns. This is evidenced by the fact that tech reformers almost uniformly adopt a utilitarian view of technological ethics, evaluating the ethical quality of a technology based on its perceived consequences.

The algorithmic fixation with quantification, measurement, outputs, and outcomes informs their ethical thinking. Other ethics frameworks – such as deontology or virtue ethics – are seldom applied to technological questions except by the occasional academic who has been trained in these disciplines. 

A Language Fit for Our World

When it comes to complex social problems, algorithmic solutions tend to function at best as band-aids, covering up human shortcomings but doing nothing to address the problem’s true causes. Unfortunately, technological solutionism’s disastrous track record has done little to dampen its influence on the tech industry and within society as a whole.

Contact tracing apps look to be an important part of the solution to the COVID-19 pandemic; but will they be able to compensate for the lack of civic trust and responsibility that has crippled America’s virus containment efforts? Predictive and surveillance policing, powered by artificial intelligence tools, promises to create a safer society; can these atone for the complex social and economic inequalities that drive crime and dictate who gets called a criminal? As November 2020 approaches, social media companies assure us that they’re ready to contain the spread of fake news this time around; are we as citizens ready to engage in meaningful civil discourse?

Benjamin Lee Whorf, mirroring the language of Wordsworth’s poem, wrote that “we dissect nature along lines laid down by our native language.”[5] “We cut nature up, organize it into concepts, and ascribe significances,” as our language dictates and allows. It is therefore critical that the languages we adopt are capable of appreciating the world in all of its messy, tragic, beautiful complexity. Algorithms are well-suited to many kinds of problems, but they have important limits. Rather than shrinking our world to fit within these limits, we must broaden our minds to comprehend the intricacies of the world. We must stop thinking like algorithms and start thinking like humans.

References

[1] Whorf B.L. (1956). Language, Thought, and Reality. Cambridge, MA: MIT Press.

[2] Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

[3] Wordsworth, William (2004). The Tables Turned. In Selected Poems, London: Penguin Books.

[4] For an excellent discussion and refutation of this trend, see Nicholas Carr’s book, The Shallows.

[5] Whorf, Benjamin (1956), Carroll, John B. (ed.), Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. MIT Press: 212 – 214.

Translating AI ethics principles into action through digital cooperation

Article by Prateek Sibal, Specialist Consultant on AI at UNESCO and Lecturer in Digital Innovation and Transformation at Sciences Po, Paris

As a general-purpose technology, Artificial Intelligence (AI) impacts all spheres of our lives. A recent study on the role of AI in achieving the Sustainable Development Goals (SDGs) shows that AI could enable the accomplishment of 134 of the 169 targets under the 17 SDGs, which cut across society, the economy and the environment. For instance, AI can help deliver food and water security, education and health services efficiently.

At the same time, AI has the potential to hinder progress on 59 targets concerning the environment and society. On the environment front, some projections suggest that by 2030, information and communication technologies (ICTs) may account for up to 20 percent of global electricity consumption, as compared to 1 percent at present. If more environmentally friendly ICTs are not developed, this would adversely impact the achievement of the SDGs on climate change.

On the societal aspects, the development and use of AI is raising concerns about privacy, freedom of expression, discrimination, transparency and unequal access to technology. These factors have the potential to exacerbate existing social and economic inequalities between and within countries. 

How are governments responding to AI?

Governments across the world are scrambling to leverage the large economic potential of AI, which includes an estimated USD 13 trillion in additional economic activity by 2030. At the same time, they are attempting to maximize the gains and minimize the harms when dealing with the implications of AI for democracy, labour and the environment. As of 2019, governments, inter-governmental organizations, businesses, NGOs and academia had articulated over 84 sets of AI guidelines or ethical principles.

These ethical guidelines, originating mostly in developed countries, cluster around the principles of transparency, justice and fairness, non-maleficence, responsibility and privacy. The principles have been adopted by countries at inter-governmental platforms like the OECD, the European Union, the G20 and the G7; discussed by technical experts and academics at international conferences; developed by civil society organizations through participatory processes involving citizens; and formulated by expert groups constituted by private sector companies and alliances.

Why are ethical guidelines not enough?

While the large number of ethical guidelines emerging at the national and regional levels and from different stakeholder groups helps crystallize a broad consensus on the values that should be respected in the development and use of AI globally, it leaves the question of translating principles into action insufficiently addressed.

Ethical guidelines for AI have been criticized for being too high-level, for conflicting with each other, or even for being meaningless in informing the concrete actions that governments, engineers, businesses or justice systems are expected to take.

Moving from principles to the real questions that AI designers and regulators face, researchers at the University of Cambridge highlight some of the tensions between different principles that remain unresolved. For instance:

— Should data be used to improve the quality and efficiency of services while compromising the privacy and autonomy of individuals?

— Should AI developers write algorithms that make decisions and predictions more accurate but may rank low on ensuring fairness and equal treatment?

— How to balance the benefits of increased personalization in the digital sphere with enhancing solidarity and citizenship?

— How to balance the use of automation to make people’s lives more convenient with empowering people by promoting self-actualization and dignity?

As an illustration, it may be useful to consider the principle of transparency that features in many AI ethics guidelines. The meaning of transparency vis-à-vis AI systems changes with the level of abstraction or the context in which it is applied.

In the development of AI systems, transparency may be facilitated through procedures that record the decisions taken by developers and businesses concerning the objective function of the algorithms and the data used for training and decision-making, among other things.

However, when it comes to the use of AI systems, the norms of transparency may concern disclosures of how the user’s information will be processed, user consent, and the maintenance of decision-making logs for ex-post analysis of decisions for bias and discrimination.
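As a minimal sketch of what such a decision-making log could look like in practice, the wrapper below appends one JSON record per automated decision to an append-only file for later review; the stand-in model, field names, and file format are our assumptions, not a standard prescribed by any guideline.

```python
# Illustrative only: log every automated decision for ex-post auditing.
import json, time, uuid
from sklearn.dummy import DummyClassifier

# Stand-in model; in practice this would be the deployed decision system.
model = DummyClassifier(strategy="most_frequent").fit([[0], [1]], [0, 1])

def log_decision(model, features, log_path="decision_log.jsonl"):
    """Record the inputs and output of one automated decision."""
    decision = model.predict([list(features.values())])[0]
    record = {
        "id": str(uuid.uuid4()),        # unique reference for complaints or appeals
        "timestamp": time.time(),
        "inputs": features,             # what the system saw
        "decision": int(decision),      # what it decided
    }
    with open(log_path, "a") as f:      # append-only, one JSON record per line
        f.write(json.dumps(record) + "\n")
    return decision

log_decision(model, {"loan_amount": 1})
```

A log of this kind is what makes the ex-post analysis mentioned above possible: an auditor can replay the recorded inputs and check the recorded decisions for patterns of bias.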

Finally, the transparency principle may also apply to the internal technological processes of AI systems, pertaining to the black-box problem of AI, where the decisions taken by neural networks or support vector machines are not understandable by humans. In this case, measures to ensure transparency may involve technical solutions in the form of explainable AI, among others.

How to translate AI ethics principles to action?

Several governments, international organizations and standards bodies are alive to these concerns and are working towards translating high-level commitments into actionable requirements and norms. For instance, the OECD, through its Network of Experts and its AI Policy Observatory, is translating its AI Principles into concrete advice for policymakers. The standards body IEEE is developing standards and certifications for AI systems as part of its Ethics of Autonomous and Intelligent Systems programme.

Some inspiration is also sought from the field of medicine, where, in the words of one study, “professional societies and boards, ethics review committees, accreditation and licensing schemes, peer self-governance, codes of conduct, and other mechanisms supported by strong institutions help determine the ethical acceptability of day-to-day practice by assessing difficult cases, identifying negligent behaviour and sanctioning bad actors”.

However, the general-purpose nature of AI and the difficulty of creating professional bodies and institutional mechanisms for oversight make such ethical guidelines hard to implement.

Therefore, solutions may lie in translating high-level principles into “mid-level norms and low-level requirements”. Furthermore, these “norms and requirements cannot be deduced directly from mid-level principles without accounting for specific elements of the technology, application, context of use or relevant local norm”. 

Articulation of a hierarchical framework that translates principles to mid-level norms and low-level requirements as per technology, application and context of use remains a challenge because of the evolving nature of technology and the difficulty in anticipating relevant contexts. Further challenges are posed by gaps in human and institutional capacities for technology governance at the national and local levels.

Therefore, the work of translating AI ethics guidelines at the international level requires enhanced international cooperation and a robust mechanism for provision, monitoring and accountability for digital public goods. Implementation of these guidelines would require programmes for raising awareness, and strengthening capacities for monitoring and evaluation at all levels.

The UN Secretary General’s Roadmap on Digital Cooperation, developed over a two-year long consultative process, provides a good starting point to address some of the governance challenges related to AI. The roadmap aims to facilitate digital cooperation amongst transnational actors through its different working groups on digital public goods and human-centered AI, among others. Furthermore, it proposes strengthening policy advice and extending capacity building support to governments through digital help desks at the national level. 


Note: The views expressed in this article are those of the author and not of any affiliated organization.

Is no privacy the best AI privacy policy?

Article by James Brusseau, Director, Data Ethics Site, Pace University, New York

“You have zero privacy anyway,” Scott McNealy, CEO of Sun Microsystems, scolded a concerned journalist in 1999. “Get over it.” He was wrong then. And if he said it again today, he would still be wrong. Tomorrow, though, is not so clear.

Most AI policy discussions about privacy concern strategies for its preservation, and carve-outs for the relaxation of safeguards. So, we have policies banning facial recognition, and contact tracers that strip privacy in the name of fighting a pandemic.

No doubt these are the most important and currently pressing privacy issues, but they also hide the deeper and decisive question as technology increasingly pries into our lives: Should we be willing to accept full exposure, life without privacy?

No answer will be proposed here, but to ask the question seriously, we first need to get a sense of what the human experience would be if privacy were gone. That is the subject of this article, which begins about a decade before Apple computers went mainstream, in the 1970s with a philosophical thought experiment authored by Robert Nozick.

What he imagined was a floating tank of warm water, a sensory deprivation chamber where electrodes wrapped around our heads to feed our synapses a tantalizing experience indistinguishable from lived reality. You could choose your experiences beforehand – like choosing a book from the library – and then go into the tank.

Presumably, you could choose as many experiences as you would like and a single narrative could go on for years, decades. Nozick called this an Experience Machine, and the reason he formulated it was to ask a difficult hypothetical question: Would you permanently trade your outside life for an existence floating inside the tank?

If the answer is yes, every moment would be as thrilling or heroic or opulent as you could possibly wish. But, it would all only happen in your mind. It is a hard call. What is at stake, though, is easy to see: mental gratification versus personal freedom. You get prefabricated episodes guaranteed to feel good, exciting, or luxurious, at least in your mind. However, you lose the ability to create experiences and form a unique identity for yourself out in the unpredictable world.

A post-privacy reality offers an analogous choice: pleasure for freedom. It promises satisfactions, but requires relinquishing control over our own destinies. First, what does post-privacy mean? Private companies can purchase not only information about our age, location, recent credit card purchases and webpage visits, but also access all the text messages, telephone calls and pixels exchanged with every friend and romance.

There would be full access to medical records and work histories, and not just the topline statistics, but the history of every moment of every day as captured by video and audio surveillance. More could be added – all this is just the beginning of full exposure – but it is enough to begin sketching the pleasure or freedom dilemma.

As an initial and crude example, there are Netflix movies. By combining personal information about the viewer with predictive analytics, a film or episode is selected to begin rolling even before the previous one ends. The transition is almost seamless. Now, if Netflix knows everything about you – if there is no privacy – then it is probably going to be a good choice, one that holds your attention and wraps you in contentment as you keep watching.

It is also true, though, that your autonomy is getting limited. It is because you get what you want before making any choices: on one level, you don’t choose another movie from a list and then, above that, you don’t even choose whether to watch a movie at all because it’s already going. You are trapped in front of the screen. It’s a good trap because the confinement is your own data and entertainment, but it remains confinement.

One series that has been selected by the mechanisms of surveillance capitalism for many of the people reading this sentence is Black Mirror. There is an episode depicting a couple in a restaurant getting served their dishes just before asking to see the menu, and in a big data future of embraced transparency, that anticipatory provisioning should not be disconcerting but expected. Stronger, it is a central reason for relinquishing privacy and exposing ourselves to the uses and satisfactions of AI. It is good to get what you want before you want it.

Generalizing, it will not only be movie selections and the dinner choices. All our wants will be answered so immediately that we won’t even have time to understand why they are right, when we started wanting them, or to ask what it is that we wanted in the first place.

If the big data experience machine is functioning as it should – if it is instantly transforming complete personal information and predictive analytics into total consumer and user satisfactions – then we are not choosing anymore and, far more significantly, we’re not choosing to not choose. When we always already have what we want, no question about making a selection can even arise.

For that reason, one of the twisted curiosities about life after privacy is that the way we realize something is what we want is: we already have it. And that’s the only way we know that we want something. More, if we do feel the urge for a slice of pizza, or to binge Seinfeld, or to incite a romantic fling, what that really means is: we don’t actually want it. We can’t, since the core idea of the AI service economy is that it knows us transparently and so responds to our urges so perfectly that they’re answered without even a moment of suffering an unfulfilled craving.

It would be interesting to know whether something can be truly pleasurable if we get it before realizing a hunger for it, but no matter the answer, the drowning of personal desire in convenience and comfort is a significant temptation. On the other hand, there is no avoiding the cost that we no longer have personal freedom to make anything of ourselves.

Active self-determination is gone since there is no room for experimenting with new possibilities, or for struggling with what’s worth pursuing and having. There are only the tranquil satisfactions that initially and powerfully recommended that we expose ourselves to big data reality and predictive analytics. Total exposure – reality without privacy – means a life so perfectly satisfying that we cannot do anything with it.

The unconditional surrender of our personal information is a big data Stockholm syndrome. Viewed from outside, users must seem enamored of the information sets and algorithms that control their experiences: those whose personal freedom has been arrested are actually grateful to their captors for the pleasures their minds experience.

But, from within the experience the very idea of being captive does not make sense since it is impossible to encounter – or even conceptualize – any kind of restraint. If we always get everything we want – movies, dinners, jobs, lovers, everything – before we even know we want them, how could we feel anything but total liberation?

What does Transparent AI mean?

Article by Fenna Woudstra, former intern at Filosofie in actie (an organization based in Arnhem, the Netherlands advising technology companies and projects on the ethical use of data)

Over the last couple of years many companies, institutions, governments, and ethicists have proposed their own principles for the development of Artificial Intelligence (or AI) in an ethical way. Most of the guidelines include principles about fairness, accountability, and transparency, but there is a wide range of interpretations and explanations of what these principles entail. This article delves into the transparency of AI.

What is transparency?

Because of the growing use of Artificial Intelligence (AI) in many different fields, problems like discrimination caused by automated decision-making algorithms are becoming more pressing. The underlying problem is the unknown existence of biases, due to the opacity of the systems.[1]

The logical response to reduce the problems caused by opacity is a greater demand for transparency. Transparency is all about an information exchange between a subject and an object, where the subject receives information about the performance of a system or organization the object is responsible for.[2]

However, putting transparency into practice is easier said than done. Firstly, not everything can simply be made publicly available. Other important values like privacy, national security, and corporate secrecy have to be taken into account.[3] Secondly, even after considering these points it remains unclear how transparency should be realized; what information about which aspects of the AI system should be disclosed for the system to be deemed transparent?

Results of comparing 18 Ethical AI guidelines

To find this out, eighteen guidelines for Ethical AI, all retrieved from the AI Ethics Guidelines Global Inventory maintained by AlgorithmWatch, were analyzed for their principles and practical implementations of transparency.

As a result, a new framework has been created that consists of nine main topics: environmental costs, employment, business model, user’s rights, data, algorithms, auditability, trustworthiness, and openness (see Table 1 below). Each topic is explained by multiple practical specifications that were extracted from the existing guidelines. The topics cover the whole process of developing, using, and reviewing AI systems.

Table 1. Framework for Transparent AI

Topic: Be Transparent By Providing

Environmental Costs
– how much energy was needed to train and test the system
– what kind of energy the servers use (renewable or fossil)
– what is done to compensate for the CO2 emissions or other environmental harm

Employment
– whether people in low-cost countries were needed for content moderation and content creation, or clickworkers to develop and maintain the AI system (if this was the case)
– what is done to guarantee fair wages and healthy employment conditions for all contributors to the system

Business Model
– the purpose of the system, what it can and cannot be used for
– the reasons for deploying the system
– the design choices of the system
– the degree to which the system influences and shapes the organizational decision-making process
– who is responsible for the development and control of the system

User’s Rights
– all information presented through a clear user interface and in non-technical language
– that people will be informed when they interact with an AI system
– that people will be informed when decisions about them are being made by an AI system
– if/that/how people can opt out, reject the application of certain technologies, or ask for human interaction instead
– if/that/how people can check and/or change their personal data that is being used in the system
– if/that/how people can request an explanation for the given outcome or decision; the explanation should be free of charge
– if/that/how people can challenge a given outcome or decision
– if/that/how people can give feedback to the system

Data
– what kind of data is being used in the system
– what variables are being used and why
– how/when/where the training and test data was collected
– what was changed/corrected/manipulated in the data, and why this had to be done
– how the required data is gathered from the user
– if/that people explicitly need to agree to the use of their data
– in which country the data is stored and where the provider is headquartered

Algorithms
– what kind of machine learning algorithm is used
– how the algorithm works
– how the algorithm was trained and tested
– what the most important variables are that influence the outcome
– what the possible outcomes are, what groups people can be placed in
– what notion of fairness is used in the algorithm
– why this algorithm is the best option to use
– whether it is possible to avoid highly complex, black-box algorithms

Auditability
– the degree to which it is possible to reconstruct how and why the system came up with an outcome (for example, by logging what the system does)
– whether it is possible to give counterfactual explanations, i.e., what variables can be changed for a different outcome (see the sketch after this table)
– whether the system or a human can give a justification for why the given outcome is right or ethically permissible
– what was done to enable the auditing of the whole system (for example, by providing detailed documentation of all aspects)
– whether internal or independent external people audited the system
– whether an audit by the public community or researchers is possible

Trustworthiness
– what has been done to identify, prevent, and mitigate discrimination or other malfunctions in the system
– what the capabilities and limitations of the system are
– what risks and possible biases have been identified
– what possible impacts the system can have on people
– what the accuracy of the system is
– how trustworthy the system is in comparison to other systems

Openness
– what open source software was used (if this was the case)
– what samples of the training data were used; explain reasons for non-disclosure
– what the source code of the system is; explain reasons for non-disclosure
– what the auditing results are; explain reasons for non-disclosure
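The counterfactual explanations mentioned under Auditability can be illustrated with a brute-force sketch: starting from a rejected input, nudge one feature until the model's decision flips. The toy one-feature credit model below is our assumption, chosen only to keep the search readable; real counterfactual methods handle many features and constraints.

```python
# Illustrative only: a brute-force counterfactual search on a toy model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit model: approve (1) when monthly income (in thousands) is high enough.
X = np.array([[20.0], [30.0], [60.0], [80.0]])
y = np.array([0, 0, 1, 1])                       # 0 = rejected, 1 = approved
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, step=1.0, max_steps=200):
    """Raise the feature until the predicted class flips, if it ever does."""
    original = model.predict([x])[0]
    candidate = np.array(x, dtype=float)
    for _ in range(max_steps):
        candidate[0] += step
        if model.predict([candidate])[0] != original:
            return candidate            # smallest tried change that flips the outcome
    return None                         # no counterfactual found in the search range

applicant = [35.0]
print("current decision:", model.predict([applicant])[0])
print("income at which the decision would change:", counterfactual(model, applicant))
```

The point is not the search itself but the disclosure: telling an affected person which variable, changed by how much, would have produced a different outcome.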

Some critical notes

Unexpectedly, the principles of transparency address not only algorithmic transparency or the explainability of the AI system, but also the transparency of the company itself. This is most notable in the first two topics, namely, environmental costs and employment. They show that how a system is made, at what cost, and under which circumstances are also part of the development of a system.

Being transparent about these aspects could well result in greater trust and acceptance; Virginia Dignum, Professor of Social and Ethical Artificial Intelligence at Umeå University in Sweden, has already noted that “trust in the system will improve if we can ensure openness of affairs in all that is related to the system. As such, transparency is also about being explicit and open about choices and decisions concerning data sources and development processes”.[4]

Furthermore, it should be mentioned that not all existing guidelines have been analyzed, nor can we be sure that the existing guidelines already cover all the important principles, given the novelty of this subject. So clearly, additions to this framework can be made as and when important principles are found to be missing.

That is why this framework is not meant to provide strict rules; rather, it aims to provide an overview of the possible topics to be transparent about. Organizations can decide which aspects are relevant for their system and organization.

This framework also tries to show that being transparent does not necessarily mean that everything must be made publicly available; explaining why a principle cannot be fulfilled is also a form of transparency. Not publishing the data used can be perfectly ethical if it is done to protect people’s privacy or national security, as the American economist Joseph Stiglitz has noted.[5]

Finally, this framework shows that even when an algorithm’s workings are highly complex, there are still many aspects to be transparent about and many ways to achieve transparency that would help reveal or prevent biases in AI systems. There is much more to an AI system than only a black box.


Fenna Woudstra did research on the Transparency of AI during her internship at Filosofie in actie (the Netherlands). This article is a summary of her findings. You can read her full research paper here.

References

[1] O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. See also: Lepri, B., Oliver, N., Letouzé, E., Pentland, A. and Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.

[2] Meijer, A. (2013). Understanding the complex dynamics of transparency. Public Administration Review, 73(3), 429-439.

[3] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.

[4] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.

[5] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.

What’s Up With AI Ethics?

(last updated on 20.02.20)

Curated, Edited, and Prepared By Raj Shekhar (Managing Editor, AI Policy Exchange)

Note — This document is solely intended for popular reading. It is a product of curation. All claims produced in the document are those of the individual authors whose works have been used to prepare this document, and therefore, should not be construed to be those of AI Policy Exchange, or that of the curator and editor of this document.

Contents

Why This Document

Nature And Scope Of This Document

How To Read This Document

1. Defining AI
2. Adopting AI-Induced Business Automation
3. Global Collective Approach To AI Adoption
4. Regional Domestic Approach To AI Adoption
5. Developing And Deploying AI Ethically
6. Assigning Responsibility And Liability For AI-Induced Mishaps
7. Tackling Bias In AI
8. Making Explicable AI
9. Developing And Deploying AI-Based Weapons
10. Recognizing Rights Related To Artificial Creations

Why This Document

The study of ethics goes back to the earliest philosophical foundations of human knowledge and beliefs about right and wrong — while observing and analyzing the ever evolving human thinking and the impact of its expressions (say, in the form of human action) on individuals and society.

The study of artificial intelligence (or AI) is about making machines more intelligent and human-like with the expectation that they could augment human activity — especially economic activity to maximize economic efficiency and productivity by unprecedented measures — so as to ultimately improve human life and well-being.

Even as AI and ethics are two distinct disciplines, they are linked in an inextricable way. While ethics talks about how humans think and act, and how they ought to think and act, AI attempts to replicate human thinking and action within ethically defined boundaries. 

Hence, defining what ethics should govern human society becomes pivotal to answering how AI technologies should be developed, deployed, and used, i.e., how AI should be governed. In other words, the generally accepted ethical norms and principles governing human society will inevitably influence the type and mode of AI governance that humans ultimately adopt.

Arguably, therefore, the emergence of artificially intelligent technologies has lent humanity a chance to reflect on its deepest instincts and wildest aspirations like never before.

However, for all practical purposes of AI governance, the subject of ethics needs to be closely applied to the current and actual development, deployment, and use of AI technologies.

When the subject of ethics is applied to any other discipline — like law to form legal ethics, medicine to form medical ethics, and business to form business ethics — it serves the specialized and useful purpose of moulding generally accepted ethical norms and principles into precise and practicable rules to aptly govern activities exclusive to that discipline.

Hence, the desperate need of the moment is a wholesome body of authoritative literature that integrates the disciplines of AI and ethics to answer the most pressing questions about AI governance — we can call it “AI ethics”.

The need for AI ethics is fast becoming obvious with the rapid emergence of AI technologies whose development, deployment, and use appear to be at loggerheads with humanity’s knowledge and beliefs about itself, the world and beyond — and most importantly, about what’s right and wrong.

Naturally, there is no dearth of ethical concerns that are surfacing in contemporary governance of AI technologies. Scholars around the world have already begun to make strides in addressing these concerns by systematically documenting them, analyzing them, and recommending policy approaches to tackle them.

After all, academia is where all the big ideas emerge. But, academia is notorious for often communicating these ideas in intimidating, jargon-loaded academic research articles that appear to condescendingly test the erudition of its readers. The case of the growing body of scholarly research on AI ethics is no different.

The purpose of this document is to break down this growing body of scholarly research on AI ethics into jargon-free, easy-to-understand ethical precepts that are being advocated by scholars around the world to put AI governance on an ethical and responsible trajectory.

Since this literature is constantly growing, we plan on updating this document as frequently as we can, whenever we find something new and pertinent to add. Hence, we hope to maintain it as an evergreen document.

The idea of developing this document was fundamentally motivated by AI Policy Exchange’s core mission of making discourses on AI policy and governance accessible for the everyday citizen. You can read our complete mission statement here.

This document is a small attempt to bridge the gap between academia and the public by enabling general readers to have a clue about what ethical considerations are at play as academics globally set out to predict and influence the fate of the brave new world of AI.


Nature And Scope Of This Document

A Product of Curation 

This document is a product of curation. It was prepared after sifting through several academic research articles that explore the emerging ethical quandaries around the adoption and use of AI technologies.

We have been highly selective in picking articles for use in this document based on their authoritativeness, currency, and comprehensiveness in adequately representing the diverse and distinct standpoints that one could possibly come across in debates and discussions around AI ethics.

An Evergreen Document 

The body of scholarly research on AI ethics is constantly growing. Moreover, producing a document that holistically covers all of it cannot be a singular, one-time effort. 

Hence, as we concede that this document is an incomplete one, we invite other individual and institutional researchers to join hands with us and share with us their own unique insights on tackling the ethical quandaries around the adoption and use of AI technologies based on their research.

We would be happy to update our document with their insights and make our forthcoming editions even more illuminating and useful for our readers.


How To Read This Document

This document is divided into ten broad chapters — each intended to familiarize the reader with latest academic insights on a somewhat unique ethical quandary around the adoption and use of AI technologies.

You may wish to limit your reading of this document to the specific topic(s) of AI ethics that you are exploring and that we have covered in this document under individual chapters. In other words, you may not need to read the document from cover to cover. Moreover, points under each chapter are discrete but may be read as a whole.

While discussing research articles that we have picked for use under each chapter, we have deliberately limited ourselves to acquainting the reader only with the crux of our findings from what we have read. This is with the view to enabling the reader to broadly grasp the diverse and distinct approaches of academia to tackling the ethical quandaries around the adoption and use of AI technologies.

In other words, we do not want the reader to get lost in academic verbosity and lose sight of the big picture.

In turn, we have also been able to preserve the brevity of this document.

Even so, we do not intend to discourage curious minds, who are free to read the full-length articles by visiting the web links to these articles that we have embedded in this document.


1. Defining AI

1.1 AI is a constellation of highly advanced technologies that are designed to mimic human brain processes and action. That said, we do not have a universally accepted definition of “AI”, even though John McCarthy, the great American computer scientist, coined the term back in 1956.

1.2 Nonetheless, AI researchers and scholars have come up with their own definitions for AI. For example, Stuart Russell and Peter Norvig in Artificial Intelligence: A Modern Approach (2013) define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.”

Nils J. Nilsson in The Quest for Artificial Intelligence: A History of Ideas and Achievements (2010) defines AI as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”

1.3 Jonas Schuett in A Legal Definition of AI (2019) criticizes the use of the blanket term “AI” when we talk about regulating artificially intelligent technologies.

To regulate AI technologies effectively, Schuett recommends defining AI technologies in terms of their —

(a) Designs (i.e., how they are made — using reinforcement learning, or supervised learning, or unsupervised learning, or artificial neural networks);

(b) Use Cases (i.e., how they are used — as self-driving cars or automated facial detectors or robots);

(c) Capabilities (i.e., what they can do — through physical interaction with their environment, or automated decision-making, or other actions entailing some form of legal responsibility).
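For illustration only, Schuett's three dimensions can be thought of as fields in a simple data structure that a regulator might use to profile an AI system. The Python sketch below is our own hypothetical rendering of that idea — the class names, categories, and example entries are ours, not Schuett's.

```python
# Hypothetical sketch: profiling an AI system along the three dimensions above.
from dataclasses import dataclass
from enum import Enum
from typing import List


class Design(Enum):
    # How the system is made
    REINFORCEMENT_LEARNING = "reinforcement learning"
    SUPERVISED_LEARNING = "supervised learning"
    UNSUPERVISED_LEARNING = "unsupervised learning"
    ARTIFICIAL_NEURAL_NETWORK = "artificial neural network"


@dataclass
class AISystemProfile:
    """A regulator's working profile of one AI system (illustrative only)."""
    name: str
    designs: List[Design]      # how it is made
    use_case: str              # how it is used
    capabilities: List[str]    # what it can do


# Example: a hypothetical self-driving car profiled for regulatory purposes.
profile = AISystemProfile(
    name="ExampleAV",
    designs=[Design.SUPERVISED_LEARNING, Design.ARTIFICIAL_NEURAL_NETWORK],
    use_case="self-driving car",
    capabilities=["physical interaction with its environment",
                  "automated decision-making"],
)
print(profile)
```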

Schuett argues that AI definitions must not be —

(a) Over-Inclusive (i.e., if protection of individual privacy is the regulatory goal, it should be secured through regulation or banning of automated facial recognition systems — and not of robotics);

(b) Imprecise (i.e., AI use cases should be expressed in binary terms so as to be clearly distinguishable from each other — for the purposes of regulation. In other words, self-driving cars are clearly distinct from automated facial recognition systems even as both could be understood as AI technologies in broad terms);

(c) Incomprehensible (i.e., the description of an AI technology that is sought to be regulated should be easily comprehensible — in jargon-free terms — to those developing, deploying, or using it);

(d) Impracticable (i.e., the AI technology that is sought to be regulated should be clearly identifiable by government agencies, courts, and lawyers — with the technical knowledge typically available to them);  

(e) Temporal (i.e., the definition of an AI technology that is sought to be regulated should be almost permanent — to ensure legal certainty for regulators and those being regulated).


2. Adopting AI-Induced Business Automation

2.1 Scott Wright and Ainslie E. Schultz in The rising tide of AI and business automation: Developing an ethical framework (2018) argue that policymakers lack a structured approach to assessing the big ethical questions around business automation. So, they recommend one involving the use of stakeholder theory and social contracts theory.

(a) Stakeholder Theory posits that the growth and longevity of a business enterprise depend not only upon the value of returns on investments made by its shareholders but also upon the satisfaction of the enterprise's other stakeholders, a group that effectively includes anyone affected by its activities.

Stakeholders of a business enterprise would then include labourers and employees, governments, consumers, and even other business enterprises operating in the domestic and international markets.

(b) Social Contracts Theory posits that a business enterprise and all its stakeholders are in a form of social contract through which they exchange something of value. For example, when a labourer provides his labour to a business enterprise, he gets his wage in return from the owners of that enterprise.

Wright and Schultz argue that our ethical approach to tackling the big questions around business automation should be directed at minimizing the social contract violations between stakeholders of a business enterprise.

For example, if business automation is likely to take away the jobs of workers in a business enterprise, our ethical precepts should oblige such an enterprise to invest in re-skilling these workers so that they can be retained by the enterprise instead.


3. Global Collective Approach To AI Adoption

3.1 Thilo Hagendorff in The Ethics of AI Ethics — An Evaluation of Guidelines (2019) notes that nation-states are currently approaching the development, deployment, and use of AI with a mindset of global competition, instead of global cooperation. 

Hagendorff’s claim gives strength to Wright and Schultz’s prophetic argument — in their 2018 article — that business automation is likely to increase, and not decrease, the economic gap between developed (capital-intensive) countries — that can afford to develop and deploy AI technologies — and developing (labour-intensive) countries — that cannot.

To temper this adverse effect of business automation on international economic relations, Wright and Schultz recommend that national governments should collaborate with each other to develop regulations to reduce unhealthy global competitiveness and instability that would be caused by AI-induced business automation.


4. Regional Domestic Approach To AI Adoption

4.1 Joanna Bryson and Miles Brundage in Smart Policies for AI (2016) made some important recommendations for building an ecosystem in the United States to develop and deploy AI technologies most effectively, prioritizing the public interest. The following recommendations could be pertinent to most national jurisdictions across the globe —

(a) Bryson and Brundage argue for the constitution of a centralized agency to govern AI technologies in the United States. They note that AI expertise in the United States is currently dispersed across different agencies. A certain level of dispersion is desirable, but not without a centralized agency. Historically, centralized agencies have been found to play a constructive role in the governance of emerging technologies. 

(b) Bryson and Brundage argue that government funding of AI research and development is important to (i) ensure the equitable distribution of benefits arising from the development of AI, and (ii) avoid ignoring core social sectors that are not lucrative for market players — AI is a general purpose technology, and there is a possibility that, like electricity, many would not have access to it in the absence of government support.

(c) Bryson and Brundage propose an AI extension service whose functions would include (i) sharing expertise with those who wish to focus on work that is more socially oriented and less market oriented, and (ii) influencing ethical and safety standards to ensure transparency of AI systems and public engagement in developing these systems.

The AI extension service, therefore, would provide a very broad range of exciting career paths for AI researchers and AI policy specialists.

(d) Bryson and Brundage also propose a coordinated, government-wide initiative to investigate possible AI futures, drawing on the best methods in technological forecasting and scenario planning. Such an initiative should also investigate how agencies can leverage AI technologies for the public good, how critical mission areas could be affected by AI, and how they should adjust based on forecasted AI futures.


5. Developing And Deploying AI Ethically

5.1 Jack Stilgoe in Developing a framework for responsible innovation (2013) argues that conventional governance of science and technology has focused exclusively on the regulation of finished technological products as they enter the market.

Stilgoe suggests that it is about time we adopted a new form of scientific governance that also emphasizes the purposes and processes leading to technological innovations. He calls this “responsible innovation”, which would enable our scientific governance to become embedded in our social norms. Stilgoe proposes that responsible innovation should include —

(a) Anticipation: We must be able to systematically and explicitly anticipate (not merely predict in vague terms) risks and opportunities resulting from the technological innovations.

(b) Reflexivity: We must reflect on the value systems that motivate our technological innovations.

(c) Inclusion: We must include diverse voices in the governance of our technological innovations.

(d) Responsiveness: We must be responsive to the changing societal needs and new knowledge so as to modify our technological innovations from time to time.

5.2 Fabio Massimo Zanzotto in Human-in-the-loop Artificial Intelligence (2019) argues that AI learns from the data of unwary working people and considers this the biggest modern-day knowledge theft. Hence, he argues that AI developers (individuals and companies) should financially compensate these people for the use of their data.

5.3 As Hagendorff notes in his 2019 article, several international organizations and professional associations — like the Institute of Electrical and Electronics Engineers, the Partnership on AI to Benefit People and Society, and OpenAI — have formulated and released guidelines in the public domain to guide the ethical development and deployment of AI technologies.

However, Hagendorff also notes that awareness of AI ethics guidelines has a negligible effect on technical practice. He observes that the current ethics guidelines are too abstract for AI developers to deduce technological implementations from them, and he cites the disconnect between ethicists and AI developers as the reason for this state of affairs.

Hence, Hagendorff recommends that ethicists become more technical and develop micro-ethical frameworks for specific technological implementations. In other words, ethics guidelines should be accompanied by clear-cut technical instructions. At the same time, Hagendorff suggests that AI developers need to be sensitized to AI ethics through university training and similar avenues.

Hagendorff also recommends stringent legal enforcement, independent auditing of technologies, and institutional grievance redressal for better realization of ethical practices in AI development and deployment.


6. Assigning Responsibility And Liability For AI-Induced Mishaps

6.1 Caroline Cauffman in Robo-liability: The European Union in search of the best way to deal with liability for damage caused by AI (2018) argues that most generally accepted principles and rules of liability should apply in the case of damage caused by robots, although some tweaking of these principles and rules may be desirable.

For example, the bonus pater familias standard (the standard of reasonable care) would apply to the person manufacturing the AI-based product. Cauffman thinks that the EU Product Liability Directive is a good starting point for determining robot liability.

The EU Product Liability Directive states that if damage is caused by a defect in the product, the manufacturer of that product would be liable for damages. 

Cauffman notes that the defect contemplated under the EU Product Liability Directive is a safety defect, which obliges the manufacturer to sell products that are safe for use by buyers. For AI-based products, “use” includes unreasonable use as well as normal use.

Cauffman recommends that manufacturers ensure that their products do not respond to unreasonable instructions from users that would result in harmful or illegal use of those products; if a manufacturer fails to do so, the manufacturer (and not the user) should be presumed liable for damage caused by the resulting misuse of the product.

Cauffman notes some limitations of the EU Product Liability Directive when applied to AI products —

(a) Non-movables (like software) are probably excluded from the purview of the EU Product Liability Directive.  

(b) The EU Product Liability Directive allows manufacturers of innovative products to escape liability if they can prove that discovering the defect was impossible given the existing state of scientific and technical knowledge. This provision (called the development risk defence) might enable certain AI manufacturers to unreasonably limit their liability for damage caused by their AI-based products.

(c) The EU Product Liability Directive covers only damage in the form of death or personal injury.

6.2 Jack Stilgoe in Machine learning, social learning and the governance of self-driving cars (2018) makes the following critical observations on the regulation of self-driving cars:

(a) Unethical Competitive Advantage: The opacity in machine learning processes underlying self-driving cars could lead to “codified irresponsibility” that companies can use to shirk responsibility and avoid liability. Hence, interpretability in machine learning is most desirable for the effective regulation of self-driving cars.

However, even if companies could clearly understand why their self-driving cars decided on a particular course of action, they would likely choose not to disclose it in order to maintain a competitive advantage over their competitors in the market.

(b) Novelty Trap: Companies must juggle selling their self-driving cars to prospective buyers as ground-breaking technology with persuading regulators to treat the same technology as only an incremental improvement beyond existing approaches.

(c) Handoff Problem: It is likely that, with self-driving cars, the driving skills of humans would tend to atrophy from disuse. The risk of accidents therefore becomes imminent, as self-driving cars remain far from completely autonomous or reliable in dynamic real-world contexts.

(d) Fallacious Autonomy: Self-driving cars are not truly autonomous as they are not self-contained, or self-sufficient, or self-taught. The extent and significance of human involvement in the creation and governance of these so-called autonomous vehicles should not be underestimated or ignored.

(e) Need to Democratize Learning: Self-driving cars produced by a certain car-manufacturing company operate as a fleet belonging to that company, i.e., on-road learning of a Tesla self-driving car becomes the training data for all the other Tesla self-driving cars by default.

Hence, cooperative sharing of training data amongst different self-driving car manufacturers becomes most desirable to expedite the combined learning of all the self-driving cars on the road, so as to ultimately enhance road safety and security.
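As a loose, purely hypothetical sketch of what such cooperative sharing could look like at the data level, the snippet below pools made-up driving logs from several fictional manufacturers into one shared training set; the manufacturers, numbers, and model are all assumptions for illustration, and the real obstacles are commercial and legal rather than technical.

```python
# Hypothetical sketch: pooling per-manufacturer driving logs into one shared training set.
from sklearn.linear_model import LogisticRegression

fleet_logs = {
    # manufacturer -> list of (sensor_features, braked) observations (made-up values)
    "maker_a": [([0.2, 0.9], 1), ([0.8, 0.1], 0)],
    "maker_b": [([0.3, 0.8], 1), ([0.9, 0.2], 0)],
    "maker_c": [([0.1, 0.7], 1), ([0.7, 0.3], 0)],
}

# Cooperative sharing: every fleet's observations contribute to one training set.
X = [features for logs in fleet_logs.values() for features, _ in logs]
y = [label for logs in fleet_logs.values() for _, label in logs]

shared_model = LogisticRegression().fit(X, y)  # one model learning from all fleets
print(shared_model.predict([[0.25, 0.85]]))    # e.g., should the car brake here?
```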


7. Tackling Bias In AI

7.1 Jake Silberg and James Manyika in Notes from the AI frontier: Tackling bias in AI (and in humans) (2019) argue that reducing bias in AI systems is important for us to trust these systems so that we can use them for wider social and economic benefits. AI can reduce bias, but it can also bake in and scale bias.

Before we tackle the problem of bias in human or AI decision-making, we must be able to define “bias”. It turns out that experts have neither agreed on a common definition of bias nor articulated an exhaustive one.

Nevertheless, technical researchers are developing statistical techniques to reduce bias in data and algorithms to produce fair outcomes.
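As a purely illustrative example of the kind of statistical check such techniques often start from, the sketch below compares an algorithm's positive-outcome rates across demographic groups; the groups, decisions, and numbers are made up, and real fairness audits use richer data and multiple metrics.

```python
# Illustrative only: a demographic parity check on hypothetical loan-approval decisions.
from collections import defaultdict

decisions = [
    # (applicant_group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

rates = {group: approvals[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())  # demographic parity difference

print(rates)               # approval rate per group
print(f"gap = {gap:.2f}")  # a large gap flags a possible bias worth investigating
```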

Varying social contexts in which AI systems are developed and deployed necessitate the use of human judgment to ensure that AI supported decision making is fair.

Moreover, if an AI algorithm reflects bias in human behaviour, one way to reduce such bias in the long-run would be to alter that human behaviour.


8. Making Explicable AI

8.1 Scott Robbins in A Misdirected Principle with a Catch: Explicability for AI (2019) argues that explicability in AI is important for two reasons: 

(a) Epistemic Value: A doctor using an AI-based tool to diagnose a disease may find an explanation offered by the tool in support of its results useful to better diagnose the disease.

(b) Ethical Value: An AI-generated decision accompanied by explanations offers AI developers and users the opportunity to reasonably accept or reject that decision to thereby (i) prevent avoidable harm resulting from the decision, or (ii) take moral responsibility for that decision when something goes wrong.

The focus should be not on how, but rather on why an AI system arrived at a certain decision. Hence, subject-centric explanations (focusing on the input used to develop an AI system) should be used as opposed to model-centric explanations (focusing on due diligence undertaken while developing an AI system).
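As a loose, toy illustration of an input-focused explanation for one particular decision, the sketch below reports how much each input pushed a hypothetical risk score; the scoring function, weights, and applicant are entirely made up, and this is not Robbins's own method.

```python
# Toy sketch: explain one decision by asking how much each input contributed to it.

def risk_score(features):
    # Stand-in for an opaque model's scoring function (weights are made up).
    weights = {"income": -0.4, "missed_payments": 0.8, "age": -0.1}
    return sum(weights[name] * value for name, value in features.items())

def input_contributions(features, baseline=0.0):
    """For this one decision, report how much each input pushed the score."""
    contributions = {}
    for name in features:
        reduced = dict(features, **{name: baseline})  # neutralise one input
        contributions[name] = risk_score(features) - risk_score(reduced)
    return contributions

applicant = {"income": 3.0, "missed_payments": 2.0, "age": 4.0}
print(risk_score(applicant))           # the decision (a risk score)
print(input_contributions(applicant))  # why: which inputs drove it, and by how much
```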

However, Robbins also argues that not all AI decisions should demand explanations, like AI used for credit card fraud detection. On the other hand, AI used for medical diagnosis, judicial sentencing, and predictive policing must be explicable as these constitute morally sensitive contexts and we must know if the considerations used by the AI to arrive at a certain decision were appropriate. Luciano Floridi calls these morally sensitive contexts the arena for “socially significant decisions”.


9. Developing And Deploying AI-Based Weapons

9.1 Setu Bandh Upadhyay in Tackling Lethal Autonomous Weapon Systems (LAWS) — through regulation, not bans (2019) notes that blanket bans on deploying AI-based weapons have not worked in the past. Hence, global regulation seems more viable to control and monitor the use of AI-based weapons in inter-state conflicts so as to protect humanity from the catastrophic effects of their irresponsible use.

Upadhyay suggests that such global regulation of AI-based weapons should derive its legitimacy from international consensus and agreements fostered through dialogues and discussions between national governments, technical experts, jurists, and policy practitioners under the umbrella of international organizations.

Upadhyay, among other things, also suggests that such global regulation of AI-based weapons should restrict the use of autonomous weapons to self-defense by nation-states while obliging them to ensure a strong human factor in the deployment and use of these weapons, and to maintain algorithmic transparency in the development of these weapons.


11. Granting Intellectual Property Rights Over Artificial Creations

11.1 Intellectual property rights — including a combination of attribution, ownership, and monopoly rights — are granted over intellectual creations — like artworks and inventions emanating from the human mind and realized by human action — to incentivize such creations.

Claims are emerging that intellectual creations could be generated by certain AI technologies that could be called “artificial creators” — and by extension, their intellectual creations could be called “artificial creations”.

For instance, Stephen Thaler, an eminent AI researcher and developer, has claimed that intellectual creations can be generated almost autonomously by artificial creators he has developed over his career — namely, The Creativity Machine, which produced 11,000 songs over a weekend as well as the patented design of the Oral-B cross-bristled toothbrush, and DABUS (the “Device for the Autonomous Bootstrapping of Unified Sentience”), which produced a specially designed container for robotic gripping and an emergency flashlight system.

Hence, according to Thaler, intellectual creations have ceased to be the exclusive domain of human activity with the emergence of artificial creators. But the question remains whether and how artificial creations should be accommodated within the long-established jurisprudential framework of intellectual property rights.

11.2 Michael McLaughlin in Computer-Generated Inventions (2018) emphasizes adherence to the original purpose of intellectual property rights, which was to reward and incentivize human, not artificial, creativity. He maintains that artificial creations do not deserve intellectual property protection, which would automatically bring them into the public domain.

11.3 Ryan Abbott in I Think, Therefore I Invent: Creative Computers and the Future of Patent Law (2016) argues that attributing artificial creations to any entity other than the artificial creator would be ethically problematic — for instance, it could seriously risk undervaluing human creations.

However, even as Abbott makes the case for attributing artificial creations exclusively to artificial creators, he discounts artificial ownership (i.e., ownership by AI systems) of artificial creations and endorses state-granted monopolies over artificial creations, with ownership reserved for the human and corporate owners of the artificial creators, on two counts —

(a) AI technologies do not, and for the foreseeable future cannot, possess the legal personhood necessary to exercise ownership and assume related rights and liabilities;

(b) Humans and corporations (unlike artificial creators) do create intellectual property at the prospect of a state-granted monopoly; therefore, human or corporate ownership over artificial creations will tend to reward and incentivize the creation of more artificial creators — thereby boosting research and development activity.

11.4 Shlomit Yanisky-Ravid and Xiaoqiong (Jackie) Liu in When Artificial Intelligence Systems Produce Inventions: An Alternative Model for Patent Law at the 3A Era (2019) argue that intellectual property rights law and jurisprudence are not fit to accommodate artificial creations for the following reasons —

(a) Artificial creation is a multiplayer activity — involving software developers, data suppliers, system owners and operators, the public, and the government — making claims of attributing an artificial creation to a single entity redundant;

(b) Ownership and related rights and liabilities over artificial creations can be brokered, via contractual arrangements, between the entities — individuals, firms, and governments — that invest their resources and efforts in developing and deploying artificial creators;

(c) Licenses and first-mover advantage techniques would better reward and incentivize the creation of artificial creators than state-granted monopolies.


Thank you for reading. We hope that you found the document illuminating and useful. Please send us your feedback at editor@aipolicyexchange.org.