Article by Felipe Castro Quiles, Co-Founder and CEO, Emerging Rule and GENIA Latinoamérica
By now, most humans have been impacted by artificial intelligence. Elshawi, Maher, and Sakr (2019) observe, “Due to the increasing success of machine learning techniques in several application domains, they have been attracting a lot of attention from the research and business communities.” Indeed, many of today’s financial applications, social media recommendations, marketing campaigns, web searches, and communication systems are driven or assisted by machine learning (ML) techniques and algorithms. Moreover, two-thirds of a group of nearly 1,000 experts agreed that artificial intelligence will have made humanity better off by the end of this decade.
However, despite the intense publicity surrounding the power of everyday ML, its adoption by businesses, governments, and other socioeconomic organizations still depends on the actions (or inaction) of those leading these institutions. The set of elements at the core of this technology lacks the public scrutiny that would guide its fair deployment at scale. The matrix of our new way of life still sits unattended.
But why is such a technological paradigm on its own course in mid-2020, more than half a century after the creation of the first computer program that could learn as it ran?
This is certainly a hard question to answer, because at first it would seem evident that such a transcendental event (the creation of autonomous systems) would sit at the core of every nation’s governance agenda.
Nevertheless, the Government Artificial Intelligence Readiness Index, compiled by Oxford Insights and the International Development Research Centre in 2019, offered “…a timely reminder of the ongoing inequality around access to AI.” It warned, “Considering the disparities highlighted in this report, policymakers should act to ensure that global inequalities are not further entrenched or exacerbated by AI.”
In fact, even though the race for supremacy in this technology is poised to take a heavy toll on how humans will be ruled and governed in the near future, governments and most leading sectors haven’t quite done their part. Machine learning has been treated as an application that concerns only the computer science and engineering professions. Society has allowed a minuscule portion of its population to define a biased route to a common destiny.
As a result, we are experiencing the most challenging global crisis since World War II – unprepared for a necessary transformation that is led by expert systems.
In effect, if overlooked, these systems risk leading humankind into a sharply accentuated social divide. Agreeing on principles aimed at spreading benefits across society could therefore help global leaders overcome the current situation and prepare their populations to achieve a human-environment-machine balance, by participating in the creation of new rules and ethics that could secure it.
As a matter of fact, many researchers and academics have proposed roles and limits for AI ethics principles, seeking to balance the fast pace of technological advancement with the state of social behaviour. For example, Whittlestone, Nyrup, Alexandrova, and Cave (2019) suggest that the next step for making AI ethics practical should involve several different sectors and types of institutions, including the following:
— Companies aiming to develop their own ethical guidelines (e.g. Google’s “AI Ethics principles”);
— Professional bodies, whose codes of ethics are aimed at guiding practitioners of various professions;
— Standards-setting bodies that aim to set general standards for fields of research or industries (e.g., the Institute of Electrical and Electronics Engineers (IEEE) or the British Standards Institution);
— Government bodies and legislators that aim to develop policy and regulation (e.g. the UK’s new Centre for Data Ethics and Innovation, which advises the UK Government on regulation of data-driven technologies required across sectors);
— Researchers across disciplines whose work aims to inform how AI ethics is put into practice: exploring the technical, philosophical, or legal aspects of using algorithms in ethical ways (e.g., Selbst and Powles 2018) or synthesizing and translating such research.
But ultimately, informing the public about the critical aspects of digitization, digitalization, and artificial decision-making is essential. Developing the supporting structures that ensure inclusion and equity in the creation and use of AI systems requires your attention and cooperation. The world is no longer the same; machine learning is here to stay and disrupt – so you must be at the center of its policy-making, ethics, and responsible development.