This document presents a collation of expert insights on creating and mandating ethical AI devices & applications — contributed by members of the AI Policy Exchange Editorial Network under the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? — for public appraisal.
Edited by Raj Shekhar and Antara Vats
The many formulations of artificial intelligence (AI) ethics coming from private and public enterprises, regional and international organizations, and civil society groups worldwide have been criticized, among other things, for their impracticable propositions for creating and mandating ethical AI. These propositions, broadly hinging on the principles of fairness, accountability, transparency, and environmental sustainability, have been deemed impracticable because they are loaded with elusive, abstract concepts and paralyzed by internal conflicts. For instance, the need for high-quality data to enable AI systems to produce fairer or more efficient outcomes conflicts with the objective of securing individual data privacy. Moreover, individual privacy as a value is likely to be emphasized more in individualist cultures than in collectivist ones.
However, in all fairness, even these impracticable propositions for creating and mandating ethical AI have laid some groundwork for creating actionable requirements for ethical AI and framing rules and regulations that would eventually mandate these requirements by prioritizing human safety and well-being. Indeed, international, regional, and national efforts to come up with practicable propositions for creating and mandating ethical AI have already begun — the Institute of Electrical and Electronics Engineers (IEEE) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems to develop standards and certifications for AI systems; the Organization for Economic Cooperation and Development (OECD) Policy Observatory published a set of concrete policy recommendations for national governments to develop and deploy AI technologies ethically within their jurisdictions; Singapore released its Model AI Governance Framework providing practical guidance on deploying ethical AI to private companies.
Yet, even the most practicable propositions for creating and mandating ethical AI would remain unethical so long as they stay too elusive to invite public appraisal.
Hence, in June 2020, we launched the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? as a small step towards facilitating public appraisal of the various practicable propositions for creating and mandating ethical AI that members of the AI Policy Exchange Editorial Network earnestly put forth in the form of simple, insightful vlogs. These propositions ended up representing a diverse range of voices from amongst industry professionals, public policy researchers, and leaders of civil society groups within the AI community. In this document, we have attempted to compare and analyze these propositions to generate holistic insights on how to create and mandate ethical AI, for public appraisal.
Our Expert Contributors
Abhishek Gupta, Founder, Montreal AI Ethics Institute; Machine Learning Engineer, Microsoft; Faculty Associate, Frankfurt Big Data Lab, Goethe University
Ana Chubinidze, Former Research Fellow, Regional Academy on the United Nations; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Anand Tamboli, Chief Executive Officer, Anand Tamboli & Co.; Principal Advisor, AI Australia; Author, Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks (Apress)
Angela Kim, Head of Analytics and AI at Teachers Health Fund, Australia; Women in AI Education Ambassador for Australia; Founder, est.ai; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Charlie Craine, Chief Technology Officer and Chief Data Officer, Meister Media Worldwide
Chhavi Chauhan, Ethics Advisor, Alliance for Artificial Intelligence in Healthcare, Washington, D.C.
Deepak Paramanand, Ex-Microsoft; AI Product Innovator, Hitachi
Diogo Duarte, Legal Consultant and Researcher, Data Protection
Felipe Castro Quiles, Co-founder & CEO, Emerging Rule and GENIA Latinoamérica; Fellow, Singularity University, Silicon Valley; Member, NVIDIA AI Inception; Member, Forbes AI Executive Advisory Board
Ivana Bartoletti, Technical Director, Privacy, Deloitte; Co-founder, Women Leading in AI Network; Author, An Artificial Revolution: On Power, Politics and AI (Mood Indigo)
James Brusseau, Director, AI Ethics Site, Pace University, New York
Merve Hickok, Founder, aiEthicist.org; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Nigel Willson, Former European CTO, Professional Services, Microsoft; Founder, awakenAI and Co-Founder, We and AI
Oriana Medlicott, Subject Matter Expert & Consultant, AI Ethics, CertNexus; Member, Advisory Board, The AI Ethics Journal (AIEJ), The Artificial Intelligence Robotics Ethics Society (AIRES); Former Head of Operations, Ethical Intelligence Associates, Limited
Renée Cummings, Founder & CEO, Urban AI, LLC; East Coast Lead, Women in AI Ethics (WAIE); Community Scholar, Columbia University
Stephen Scott Watson, Managing Director, Watson Hepple Consulting Ltd.; Founding Editorial Board – AI and Ethics Journal, Springer Nature
Tannya Jajal, Technology Contributor, Forbes Middle East; Co-Leader, Dubai WomeninTech Lean In Circle
Tom Moule, Executive Lead, The Institute for Ethical AI in Education, University of Buckingham
Tony Rhem, CEO / Principal Consultant, A.J. Rhem & Associates, Inc.; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Virginie Martins de Nobrega, AI Ethics & Public Policy Expert, Creative Resolution
9 Ways to Create and Mandate Ethical AI
AI devices & applications should be deemed ethical if they tangibly respond to the human needs and aspirations specific to the social, economic, political, and cultural contexts in which they are developed and deployed.
In multilingual countries like India, an ethical AI use case could enable translation of classroom lectures delivered or recorded in one language (say, English) into several vernacular languages for cost-effective, equitable dissemination of educational content amongst learners residing in multiple regions of the country.
Ethical AI is significantly predicated on creating responsible users of AI devices & applications, for which systematic and concerted measures need to be undertaken to improve AI literacy amongst the general population. Also, developers of AI systems should be systematically trained to appreciate the ethical risks associated with the use of AI devices & applications they develop and empowered with tools to reasonably foresee and adequately circumvent those risks.
AI and AI ethics as disciplines should enter school curricula to create the next generation of responsible users of AI devices & applications. Also, AI companies should provide their developers with mandatory training focused on inculcating an interdisciplinary understanding of the purpose, use, and impact of the AI technologies they develop and deploy.
Different use cases of AI come with different capabilities posing different risks and promising different benefits to individuals and society. Hence, technical safety standards as well as rules and regulations mandating these standards for different use cases of AI would be different.
Both targeted advertising online and self-driving cars are use cases of AI. However, unethical development and deployment of online targeted advertising could routinize invasions of individual privacy, whereas that of self-driving cars could lead to road accidents, jeopardizing human life and limb. Hence, the technical safety standards, and the rules and regulations mandating them, for targeted advertising should instate safeguards that protect individual privacy, whereas those for self-driving cars should instate accountability measures for tackling negligent compliance by their manufacturers.
Existing ethical and regulatory frameworks should be adapted to accommodate the emerging social, economic, and political concerns around the adoption and use of AI devices & applications.
Electoral laws must be suitably amended to delineate rules and regulations addressing the growing political concerns around unethical microtargeted political advertising in election campaigns, which compromises voter informational autonomy by gathering and using voter data without informed consent, and corrupts voter judgment by deploying fake news.
Mandates to develop and deploy ethical AI devices & applications should enable AI companies not only to avoid the costs of sanctions for non-compliance but also to gain occasional or routine financial rewards for compliance, creating market incentives for AI companies to comply proactively.
Financial regulatory institutions could conceptualize and mandate an investment grading scale for rating AI devices & applications against measurable criteria determining their ethical value. Such a grading scale would enable investors to credibly assess the risks associated with their financial investments in AI devices & applications and, in turn, create a market incentive for AI companies to attract investment by proactively developing and deploying AI devices & applications that rank high on the scale.
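To make the idea of grading against measurable criteria concrete, the sketch below aggregates per-criterion scores into a letter grade. The criteria names, weights, and grade thresholds are purely illustrative assumptions, not part of any existing regulatory scale.

```python
# Hypothetical ethical-investment grading sketch. Criteria, weights, and
# thresholds are illustrative assumptions, not an established standard.

CRITERIA_WEIGHTS = {
    "fairness_audit_score": 0.30,   # 0-1, from an independent bias audit
    "transparency_score": 0.25,     # 0-1, quality of public documentation
    "privacy_score": 0.25,          # 0-1, data-protection compliance
    "sustainability_score": 0.20,   # 0-1, environmental footprint
}

def ethical_grade(scores: dict) -> str:
    """Aggregate per-criterion scores (each in [0, 1]) into a letter grade."""
    total = sum(CRITERIA_WEIGHTS[name] * scores[name] for name in CRITERIA_WEIGHTS)
    if total >= 0.8:
        return "A"
    if total >= 0.6:
        return "B"
    if total >= 0.4:
        return "C"
    return "D"

print(ethical_grade({
    "fairness_audit_score": 0.9,
    "transparency_score": 0.8,
    "privacy_score": 0.7,
    "sustainability_score": 0.6,
}))  # weighted total 0.765, so this system grades "B"
```

A real scale would of course need independently auditable inputs; the point of the sketch is only that once criteria are measurable, grading and comparison become mechanical.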
To counter the encoding of undesirable human biases in AI devices & applications, the use of which inevitably risks unfair and adverse treatment of certain individuals and groups, AI companies should be mandated to not only incorporate social diversity in their AI training datasets, but also to employ a socially diverse, multidisciplinary human resource pool that is tasked with developing and deploying their AI devices & applications.
AI-powered recruitment tools could be corrected for results reflecting gender discrimination, i.e., the rejection of more qualified female applicants in favour of less qualified male applicants for the same job, by programming the underlying software to ignore factors like gender. Such corrections are likely to become an ethical imperative once AI companies technically mandate them via AI ethics checklists configured through collaborative, consultative engagement between developers and ethicists, with satisfactory representation of women.
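As a minimal illustration of "programming the underlying software to ignore factors like gender", the sketch below strips protected attributes from applicant records before any scoring model sees them. The field names and the set of protected attributes are illustrative assumptions, not any vendor's actual schema.

```python
# Illustrative sketch: remove protected attributes from applicant records
# before they reach a ranking model. Field names are hypothetical.

PROTECTED_ATTRIBUTES = {"gender", "marital_status"}

def strip_protected(record: dict) -> dict:
    """Return a copy of an applicant record with protected attributes removed,
    so a downstream scoring model cannot condition on them directly."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

applicant = {"years_experience": 7, "degree": "MSc", "gender": "female"}
print(strip_protected(applicant))  # {'years_experience': 7, 'degree': 'MSc'}
```

Note that dropping an attribute does not by itself remove bias, since other features can act as proxies for it; this is one reason the checklist-based review by developers and ethicists described above remains necessary.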
Ethical AI principles and guidelines formulated by AI companies as by-laws to self-regulate are laudable, but these should never discourage external regulation of AI companies and their activities via binding legislative instruments, so that when something goes wrong, penalizing the wrongdoer and compensating the victim are backed by legal certainty.
Commitments made by social media companies to securing and maintaining the data privacy of their platform users should become actionable, i.e., the data privacy rights of these users should become the subject of a legislative enactment enabling a user whose data privacy rights have been infringed to take legal action with an appreciable degree of certainty of grievance redressal.
The rapidly emerging use cases of AI are so transformative in nature (unlike most technologies of the past) that they leave little or no time to formulate regulatory responses that adequately tackle the risks they pose to society. If these regulatory responses are not formulated well and in time, AI as an industry could prove more harmful than beneficial for humanity. Hence, governments should consult and collaborate with AI companies, public policy researchers, and the public to systematically anticipate the risks and opportunities posed by potential AI use cases, and initiate work towards the regulatory reforms that the development and deployment of these use cases would demand, with a view to minimizing their harmful impact on society.
Governments could build internal capabilities, both technical and human, for forecasting AI futures by drawing on the best methods and techniques of technological forecasting currently in place, such as regulatory sandboxing, in which live experiments are conducted in a controlled environment under a regulator's supervision to safely test the efficacy and/or impact of innovative software products or business models.
Good public policy making is almost always led by effective engagement with all interested stakeholders. Hence, ethical AI devices & applications must almost always be a product of stakeholder engagement, the exact modalities of which should be determined by the particular ethical AI use case sought to be developed and deployed.
Before introducing ethical AI devices & applications to improve learning outcomes for pupils in schools, their need and demand should be properly assessed by effectively engaging with pupils, teachers, and school administrators, amongst others, via survey tools and methods like questionnaires and focus group discussions.