What’s Up With AI Ethics?

(1st edition) (last updated on 20.02.20)
Curated, Edited, and Prepared By Raj Shekhar (Managing Editor, AI Policy Exchange)

Note — This document is solely intended for popular reading. It is a product of curation. All claims made in the document are those of the individual authors whose works have been used to prepare it, and therefore, should not be construed to be those of AI Policy Exchange, or of the curator and editor of this document.


Contents

Why This Document
Nature And Scope Of This Document
How To Read This Document
1. Defining AI
2. Adopting AI-Induced Business Automation
3. Global Collective Approach To AI Adoption
4. Regional Domestic Approach To AI Adoption
5. Developing And Deploying AI Ethically
6. Assigning Responsibility And Liability For AI-Induced Mishaps
7. Tackling Bias In AI
8. Making Explicable AI
9. Developing And Deploying AI-Based Weapons
10. Recognizing Rights Related To Artificial Creations

Why This Document

The study of ethics goes back to the earliest philosophical foundations of human knowledge and beliefs about right and wrong. It observes and analyzes ever-evolving human thinking and the impact of its expressions (say, in the form of human action) on individuals and society.

The study of artificial intelligence (or AI) is about making machines more intelligent and human-like, with the expectation that they could augment human activity, especially economic activity, raising economic efficiency and productivity to unprecedented levels so as to ultimately improve human life and well-being.

Even though AI and ethics are two distinct disciplines, they are inextricably linked. While ethics talks about how humans think and act, and how they ought to think and act, AI attempts to replicate human thinking and action within ethically defined boundaries.

Hence, defining what ethics should govern human society becomes pivotal to answering how AI technologies should be developed, deployed, and used, i.e., how AI should be governed. In other words, the generally accepted ethical norms and principles governing human society will inevitably influence the type and mode of AI governance that humans ultimately adopt.

Arguably, therefore, the emergence of artificially intelligent technologies has lent humanity a chance to reflect on its deepest instincts and wildest aspirations like never before.

However, for all practical purposes of AI governance, the subject of ethics needs to be closely applied to the current and actual development, deployment, and use of AI technologies.

When the subject of ethics is applied to any other discipline — like law to form legal ethics, medicine to form medical ethics, and business to form business ethics — it serves the specialized and useful purpose of moulding generally accepted ethical norms and principles into precise and practicable rules to aptly govern activities exclusive to that discipline.

Hence, the pressing need of the moment is a comprehensive body of authoritative literature that integrates the disciplines of AI and ethics to answer the most urgent questions about AI governance; we can call it "AI ethics".

The need for AI ethics is fast becoming obvious with the rapid emergence of AI technologies whose development, deployment, and use appear to be at loggerheads with humanity’s knowledge and beliefs about itself, the world and beyond — and most importantly, about what’s right and wrong.

Naturally, there is no dearth of ethical concerns that are surfacing in contemporary governance of AI technologies. Scholars around the world have already begun to make strides in addressing these concerns by systematically documenting them, analyzing them, and recommending policy approaches to tackle them.

After all, academia is where all the big ideas emerge. But academia is notorious for often communicating these ideas in intimidating, jargon-loaded research articles that appear to condescendingly test the erudition of their readers. The case of the growing body of scholarly research on AI ethics is no different.

The purpose of this document is to break down this growing body of scholarly research on AI ethics into jargon-free, easy-to-understand ethical precepts that are being advocated by scholars around the world to put AI governance on an ethical and responsible trajectory.

Since this literature is constantly growing, we plan on updating this document as frequently as we can, whenever we find something new and pertinent to add. Hence, we hope to maintain it as an evergreen document.

The idea of developing this document was fundamentally motivated by AI Policy Exchange’s core mission of making discourses on AI policy and governance accessible for the everyday citizen. You can read our complete mission statement here.

This document is a small attempt to bridge the gap between academia and the public by enabling general readers to have a clue about what ethical considerations are at play as academics globally set out to predict and influence the fate of the brave new world of AI.


Nature And Scope Of This Document

A Product of Curation 

This document is a product of curation. It was prepared after sifting through several academic research articles that explore the emerging ethical quandaries around the adoption and use of AI technologies.

We have been highly selective in picking articles for use in this document based on their authoritativeness, currency, and comprehensiveness in adequately representing the diverse and distinct standpoints that one could possibly come across in debates and discussions around AI ethics.

An Evergreen Document 

The body of scholarly research on AI ethics is constantly growing. Hence, producing a document that covers all of it holistically cannot be a one-time effort.

Hence, while we concede that this document is incomplete, we invite other individual and institutional researchers to join hands with us and share their own unique insights, based on their research, on tackling the ethical quandaries around the adoption and use of AI technologies.

We would be happy to update our document with their insights and make our forthcoming editions even more illuminating and useful for our readers.


How To Read This Document

This document is divided into ten broad chapters, each intended to familiarize the reader with the latest academic insights on a somewhat unique ethical quandary around the adoption and use of AI technologies.

You may wish to limit your reading of this document to the specific topic(s) of AI ethics that you are exploring and that we have covered in this document under individual chapters. In other words, you may not need to read the document from cover to cover. Moreover, points under each chapter are discrete but may be read as a whole.

While discussing the research articles that we have picked for use under each chapter, we have deliberately limited ourselves to acquainting the reader only with the crux of our findings from what we have read. This is with a view to enabling the reader to broadly grasp the diverse and distinct approaches of academia to tackling the ethical quandaries around the adoption and use of AI technologies.

In other words, we do not want the reader to get lost in academic verbosity and lose sight of the big picture.

In turn, we have also been able to preserve the brevity of this document.

Even so, we do not intend to discourage curious minds, who are free to read the full-length articles by visiting the web links to these articles that we have embedded in this document.


1. Defining AI

1.1 AI is a constellation of highly advanced technologies that are designed to mimic human brain processes and action. That said, we do not have a universally accepted definition for "AI", even though John McCarthy, the great American computer scientist, coined the term as far back as 1956.

1.2 Nonetheless, AI researchers and scholars have come up with their own definitions for AI. For example, Stuart Russell and Peter Norvig in Artificial Intelligence: A Modern Approach (2013) define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.”

Nils J. Nilsson in The Quest for Artificial Intelligence: A History of Ideas and Achievements (2010) defines AI as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”

1.3 Jonas Schuett in A Legal Definition of AI (2019) has criticized the use of the term "AI" when we talk about regulating artificially intelligent technologies.

To regulate AI technologies effectively, Schuett recommends defining AI technologies in terms of their —

(a) Designs (i.e., how they are made — using reinforcement learning, or supervised learning, or unsupervised learning, or artificial neural networks);

(b) Use Cases (i.e., how they are used — as self-driving cars or automated facial detectors or robots);

(c) Capabilities (i.e., what they can do — through physical interaction with their environment, or automated decision-making, or other actions entailing some form of legal responsibility).

Schuett argues that AI definitions must not be —

(a) Over-Inclusive (i.e., if protection of individual privacy is the regulatory goal, it should be secured through regulation or banning of automated facial recognition systems — and not of robotics);

(b) Imprecise (i.e., AI use cases should be expressed in binary terms so as to be clearly distinguishable from each other — for the purposes of regulation. In other words, self-driving cars are clearly distinct from automated facial recognition systems even as both could be understood as AI technologies in broad terms);

(c) Incomprehensible (i.e., the description of an AI technology that is sought to be regulated should be easily comprehensible — in jargon-free terms — to those developing, deploying, or using it);

(d) Impracticable (i.e., the AI technology that is sought to be regulated should be clearly identifiable by government agencies, courts, and lawyers — with the technical knowledge typically available to them);  

(e) Temporal (i.e., the definition of an AI technology that is sought to be regulated should be almost permanent — to ensure legal certainty for regulators and those being regulated).


2. Adopting AI-Induced Business Automation

2.1 Scott Wright and Ainslie E. Schultz in The rising tide of AI and business automation: Developing an ethical framework (2018) argue that policymakers lack a structured approach to assessing the big ethical questions around business automation. So, they recommend one involving the use of stakeholder theory and social contracts theory.

(a) Stakeholder Theory posits that the growth and longevity of a business enterprise depend not only upon the returns on investments made by its shareholders but also upon the satisfaction of its other stakeholders, which include virtually anyone who is affected by the activity of that enterprise.

Stakeholders of a business enterprise would then include labourers and employees, governments, consumers, and even other business enterprises operating in the domestic and international markets.

(b) Social Contracts Theory posits that a business enterprise and all its stakeholders are in a form of social contract through which they exchange something of value. For example, when a labourer provides his labour to a business enterprise, he gets his wage in return from the owners of that enterprise.

Wright and Schultz argue that our ethical approach to tackling the big questions around business automation should be directed at minimizing the social contract violations between stakeholders of a business enterprise.

For example, if business automation is likely to take away jobs of workers in a business enterprise, our ethical precepts should oblige such an enterprise to invest in the re-skilling of these workers so that they could be retained in that enterprise instead.


3. Global Collective Approach To AI Adoption

3.1 Thilo Hagendorff in The Ethics of AI Ethics — An Evaluation of Guidelines (2019) notes that nation-states are currently approaching the development, deployment, and use of AI with a mindset of global competition, instead of global cooperation. 

Hagendorff's claim lends strength to Wright and Schultz's prophetic argument, in their 2018 article, that business automation is likely to increase, and not decrease, the economic gap between developed (capital-intensive) countries, which can afford to develop and deploy AI technologies, and developing (labour-intensive) countries, which cannot.

To temper this adverse effect of business automation on international economic relations, Wright and Schultz recommend that national governments should collaborate with each other to develop regulations to reduce unhealthy global competitiveness and instability that would be caused by AI-induced business automation.


4. Regional Domestic Approach To AI Adoption

4.1 Joanna Bryson and Miles Brundage in Smart Policies for AI (2016) made some important recommendations for building an ecosystem in the United States to develop and deploy AI technologies most effectively while prioritizing public interest. The following recommendations of theirs could hold pertinent for most national jurisdictions across the globe —

(a) Bryson and Brundage argue for the constitution of a centralized agency to govern AI technologies in the United States. They note that AI expertise in the United States is currently dispersed across different agencies. A certain level of dispersion is desirable, but not without a centralized agency. Historically, centralized agencies have been found to play a constructive role in the governance of emerging technologies. 

(b) Bryson and Brundage argue that government funding in AI development and research is important to (i) ensure equitable distribution of the benefits arising from the development of AI, and (ii) avoid ignoring core social sectors that are not lucrative for market players. AI is a general purpose technology, and, much like electricity, many would likely not have access to it in the absence of government support.

(c) Bryson and Brundage propose an AI extension service whose functions would include (i) sharing expertise with those who wish to focus on work that is more socially oriented and less market oriented, and (ii) influencing ethical and safety standards to ensure transparency of AI systems and public engagement in developing these systems.

The AI extension service, therefore, would provide a very broad range of exciting career paths for AI researchers and AI policy specialists.

(d) Bryson and Brundage also propose that we coordinate a government-wide initiative to investigate possible AI futures, drawing on the best methods in technological forecasting and scenario planning. We should also investigate both how agencies can leverage AI technologies for the public good, and how critical mission areas could be affected by AI and should adjust based on forecasted AI futures.


5. Developing And Deploying AI Ethically

5.1 Jack Stilgoe in Developing a framework for responsible innovation (2013) argues that conventional governance of science and technology has focused exclusively on the regulation of finished technological products as they entered the market. 

Stilgoe suggests that it is about time we adopted a new form of scientific governance that also emphasizes the purposes and processes leading to technological innovations. He calls this "responsible innovation", which would enable our scientific governance to become embedded in our social norms. Stilgoe proposes that responsible innovation should include —

(a) Anticipation: We must be able to systematically and explicitly anticipate (not merely predict in vague terms) risks and opportunities resulting from the technological innovations.

(b) Reflexivity: We must reflect on the value systems that motivate our technological innovations.

(c) Inclusion: We must include diverse voices in the governance of our technological innovations.

(d) Responsiveness: We must be responsive to the changing societal needs and new knowledge so as to modify our technological innovations from time to time.

5.2 Fabio Massimo Zanzotto in Human-in-the-loop Artificial Intelligence (2019) argues that AI learns from the data of unwary working people and considers this the biggest modern-day knowledge theft. Hence, he argues that AI developers (individuals and companies) should financially compensate these people for using their data.

5.3 As Hagendorff notes in his 2019 article, several international organizations and professional associations (like the Institute of Electrical and Electronics Engineers, the Partnership on AI to Benefit People and Society, OpenAI, etcetera) have formulated and released their guidelines in the public domain to guide the ethical development and deployment of AI technologies.

However, Hagendorff also notes that awareness of AI ethics guidelines has a negligible effect on technical practice. He observes that the current ethics guidelines are too abstract for AI developers to deduce technological implementations from them. Hagendorff cites the disconnect between ethicists and AI developers as the reason for this state of affairs.

Hence, Hagendorff recommends that ethicists become more technical and develop micro-ethical frameworks for specific technological implementations. In other words, ethics guidelines should be accompanied by clear-cut technical instructions. At the same time, Hagendorff suggests that AI developers need to be sensitized to AI ethics through university training, etcetera.

Hagendorff also recommends stringent legal enforcement, independent auditing of technologies, and institutional grievance redressal for better realization of ethical practices in AI development and deployment.


6. Assigning Responsibility And Liability For AI-Induced Mishaps

6.1 Caroline Cauffman in Robo-liability: The European Union in search of the best way to deal with liability for damage caused by AI (2018) argues that most generally accepted principles and rules of liability should apply in the case of damage caused by robots, although some tweaking of these principles and rules may be desirable.

For example, bonus pater familias (the standard of reasonable care) would apply to the person manufacturing the AI-based product. Cauffman thinks that the EU Product Liability Directive is a good starting point for determining robot-liability.

The EU Product Liability Directive states that if damage is caused by a defect in the product, the manufacturer of that product would be liable for damages. 

Cauffman notes that the defect covered under the EU Product Liability Directive is a safety defect, which obliges the manufacturer to sell products that are safe for use by buyers. For AI-based products, "use" includes normal use as well as unreasonable use.

Cauffman recommends that manufacturers ensure that their products do not respond to unreasonable instructions from their users resulting in harmful or illegal use of those products; if a manufacturer fails to do so, the manufacturer (and not the user) should be presumed liable for damage caused by the resulting misuse of the product.

Cauffman notes some limitations of the EU Product Liability Directive when applied to AI products —

(a) Non-movables (like software) are probably excluded from the purview of the EU Product Liability Directive.  

(b) The EU Product Liability Directive allows manufacturers of innovative products to escape liability if they can prove that discovering the defect was impossible given the existing state of scientific and technical knowledge. This provision (called the development risk defence) might enable certain AI manufacturers to unreasonably limit their liability for damage caused by their AI-based products.

(c) The EU Product Liability Directive covers damage caused by death or personal injury only.

6.2 Jack Stilgoe in Machine learning, social learning and the governance of self-driving cars (2018) makes the following critical observations on the regulation of self-driving cars:

(a) Unethical Competitive Advantage: The opacity in machine learning processes underlying self-driving cars could lead to “codified irresponsibility” that companies can use to shirk responsibility and avoid liability. Hence, interpretability in machine learning is most desirable for the effective regulation of self-driving cars.

However, even if companies could clearly understand why their self-driving cars decided on a particular course of action, they would choose not to disclose it, in order to maintain a competitive advantage over their competitors in the market.

(b) Novelty Trap: Companies must juggle selling their self-driving cars to prospective buyers as ground-breaking technology with persuading regulators to treat the same technology as only an incremental improvement beyond existing approaches.

(c) Handoff Problem: With self-driving cars, the driving skills of humans would likely atrophy from disuse. The risk of accidents therefore looms large, as self-driving cars remain far from becoming completely autonomous, or reliable in dynamic real-world contexts.

(d) Fallacious Autonomy: Self-driving cars are not truly autonomous as they are not self-contained, or self-sufficient, or self-taught. The extent and significance of human involvement in the creation and governance of these so-called autonomous vehicles should not be underestimated or ignored.

(e) Need to Democratize Learning: Self-driving cars produced by a certain car-manufacturing company operate as a fleet belonging to that company; for example, the on-road learning of a Tesla self-driving car becomes training data for all the other Tesla self-driving cars by default.

Hence, cooperative sharing of training data amongst different self-driving car manufacturers becomes most desirable to expedite the combined learning of all the self-driving cars on the road, so as to ultimately enhance road safety and security.


7. Tackling Bias In AI

7.1 Jake Silberg and James Manyika in Notes from the AI frontier: Tackling bias in AI (and in humans) (2019) argue that reducing bias in AI systems is important for us to trust these systems so that we can use them for wider social and economic benefits. AI can reduce bias, but it can also bake in and scale bias.

Before we tackle the problem of bias in human decision-making or AI decision-making, we must be able to define "bias". It turns out that experts have neither agreed on a common definition of bias nor articulated an exhaustive one.

Nevertheless, technical researchers are developing statistical techniques to reduce bias in data and algorithms to produce fair outcomes.

Varying social contexts in which AI systems are developed and deployed necessitate the use of human judgment to ensure that AI supported decision making is fair.

Moreover, if an AI algorithm reflects bias in human behaviour, one way to reduce such bias in the long-run would be to alter that human behaviour.


8. Making Explicable AI

8.1 Scott Robbins in A Misdirected Principle with a Catch: Explicability for AI (2019) argues that explicability in AI is important for two reasons: 

(a) Epistemic Value: A doctor using an AI-based tool to diagnose a disease may find an explanation offered by the tool in support of its results useful to better diagnose the disease.

(b) Ethical Value: An AI-generated decision accompanied by explanations offers AI developers and users the opportunity to reasonably accept or reject that decision to thereby (i) prevent avoidable harm resulting from the decision, or (ii) take moral responsibility for that decision when something goes wrong.

The focus should be not on how, but rather on why an AI system arrived at a certain decision. Hence, subject-centric explanations (focusing on the input used to develop an AI system) should be used as opposed to model-centric explanations (focusing on due diligence undertaken while developing an AI system).

However, Robbins also argues that not all AI decisions should demand explanations, like AI used for credit card fraud detection. On the other hand, AI used for medical diagnosis, judicial sentencing, and predictive policing must be explicable as these constitute morally sensitive contexts and we must know if the considerations used by the AI to arrive at a certain decision were appropriate. Luciano Floridi calls these morally sensitive contexts the arena for “socially significant decisions”.


9. Developing And Deploying AI-Based Weapons

9.1 Setu Bandh Upadhyay in Tackling Lethal Autonomous Weapon Systems (LAWS) — through regulation, not bans (2019) notes that blanket bans on deploying AI-based weapons have not worked in the past. Hence, global regulation seems more viable to control and monitor the use of AI-based weapons in inter-state conflicts so as to protect humanity from the catastrophic effects of their irresponsible use.

Upadhyay suggests that such global regulation of AI-based weapons should derive its legitimacy from international consensus and agreements fostered through dialogues and discussions between national governments, technical experts, jurists, and policy practitioners under the umbrella of international organizations.

Upadhyay, among other things, also suggests that such global regulation of AI-based weapons should restrict the use of autonomous weapons to self-defense by nation-states while obliging them to ensure a strong human factor in the deployment and use of these weapons, and to maintain algorithmic transparency in the development of these weapons.


10. Recognizing Rights Related To Artificial Creations

10.1 Intellectual property rights, including a combination of attribution, ownership, and monopoly rights, are granted over intellectual creations (like artworks and inventions emanating from the human mind and realized by human action) in order to incentivize such creations.

Claims are emerging that intellectual creations could be generated by certain AI technologies that could be called “artificial creators” — and by extension, their intellectual creations could be called “artificial creations”.

For instance, Stephen Thaler (an eminent AI researcher and developer) has been claiming that intellectual creations can be generated almost autonomously by artificial creators that he has developed during his career as an AI developer: The Creativity Machine, which produced 11,000 songs over a weekend as well as the patented design of the Oral-B cross-bristled toothbrush, and DABUS ("Device for the Autonomous Bootstrapping of Unified Sentience"), which produced a specially designed container for robotic gripping and an emergency flashlight system.

Hence, according to Thaler, intellectual creations have ceased to be the exclusive domain of human activity with the emergence of artificial creators. But the question remains: whether, and how, artificial creations should be accommodated within the long-established jurisprudential framework of intellectual property rights.

10.2 Michael McLaughlin in Computer-Generated Inventions (2018) emphasizes adherence to the original purpose of intellectual property rights, which was to reward and incentivize human, not artificial, creativity. He maintains that artificial creations do not deserve intellectual property protection, which would automatically bring them into the public domain.

10.3 Ryan Abbott in I Think, Therefore I Invent: Creative Computers and the Future of Patent Law (2016) argues that attributing artificial creations to any entity other than the artificial creator would be ethically problematic; for instance, it could seriously risk undervaluing human creations.

However, even as Abbott makes the case for attributing artificial creations exclusively to artificial creators, he discounts artificial ownership (i.e., ownership by AI systems) of artificial creations and endorses state-granted monopolies on artificial creations, with their ownership reserved for the human and corporate owners of the artificial creators, on two counts —

(a) AI technologies do not, and in the foreseeable future cannot, possess the legal personhood that is necessary to exercise ownership and assume related rights and liabilities;

(b) Humans and corporations (unlike artificial creators) do create intellectual property at the prospect of a state-granted monopoly, and therefore, human or corporate ownership over artificial creations will tend to reward and incentivize the creation of more artificial creators, thereby boosting research and development activity.

10.4 Shlomit Yanisky-Ravid and Xiaoqiong (Jackie) Liu in When Artificial Intelligence Systems Produce Inventions: An Alternative Model for Patent Law at the 3A Era (2019) argue that intellectual property rights law and jurisprudence are not fit to accommodate artificial creations for the following reasons —

(a) Artificial creation is a multiplayer activity — involving software developers, data suppliers, system owners and operators, the public, and the government — making claims of attributing an artificial creation to a single entity redundant;

(b) Ownership and related rights and liabilities over artificial creations can be brokered between entities — individuals, firms, and governments — investing their resources and efforts to develop and deploy artificial creators — via contractual arrangements;

(c) Licenses and first-mover advantage techniques would better reward and incentivize the creation of artificial creators than state-granted monopolies.


Thank you for reading! We hope that you found the document illuminating and useful. If not, please send us your comments at editor@aipolicyexchange.org. Your feedback holds immense value for us. It will help us in improving our forthcoming editions.