Expert Vlogging Series 1.0: How to put AI ethics into practice? | Nigel Willson, Founding Partner, awakenAI and Co-Founder, We and AI

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers suggesting that the AI ethics principles and guidelines proposed by various public bodies and private groups, in their current form, likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our thirteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Nigel Willson below.

Nigel joined AI Policy Exchange as a technical/policy expert in June this year. Nigel is a global speaker, top influencer, and advisor on artificial intelligence, innovation, and technology. He worked at Microsoft for 20 years in many different roles, most recently as the European Chief Technology Officer for Professional Services. He is now an independent voice on artificial intelligence and machine learning through awakenAI, his own independent AI advisory company, and We and AI, a non-profit organisation he co-founded to promote the ethical use of AI for the benefit of everyone. He sits on several advisory boards, where he hopes to help ensure that AI is used in a fair, ethical, and transparent way, to promote the use of AI to level inequalities in the world, and to champion AI for good and AI for all.

Don’t forget to send us your feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Virginie Martins de Nobrega, AI Ethics and Public Policy Expert, Creative Resolution

Watch our twelfth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Virginie Martins de Nobrega below.

Virginie joined AI Policy Exchange as a technical/policy expert earlier last month. Virginie is a seasoned professional in international development who is passionate about tackling global challenges of a social, economic, legal, and political character to achieve progress for all. Back in 2008, during her Executive MBA, Virginie conducted research on the AI and innovation ecosystem within the United Nations to see how technologies could improve the efficiency and impact of the public sector’s actions on international development, from a programme management perspective as well as from a public policy perspective. Ever since, she has been advocating for responsible and ethical AI with strong human rights benchmarks to optimize the social impact of AI applications by tackling issues such as systemic inequalities and bias.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Ivana Bartoletti, Co-Founder, Women Leading in AI Network

Watch our eleventh vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Ivana Bartoletti below.

Ivana joined AI Policy Exchange as a technical/policy expert in May this year. Ivana is Technical Director, Privacy at Deloitte and Co-founder of the Women Leading in AI Network, a thriving international group of scientists, industry leaders, and policy experts advocating for responsible AI. In her day job, Ivana helps businesses with their privacy-by-design programmes, especially in relation to AI and blockchain technology. Ivana was awarded Woman of the Year (2019) in the Cyber Security Awards in recognition of her growing reputation as an advocate of equality, privacy, and ethics at the heart of tech and AI. She is a sought-after commentator for the BBC, Sky, and other major broadcasters and news outlets such as The Guardian and The Telegraph on headline stories where the tech economy intersects with privacy, data law, and politics. Ivana’s first book, An Artificial Revolution: On Power, Politics and AI, focuses on how technology is reshaping the world and the global geopolitical forces underpinning this process.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Stephen Scott Watson, Managing Director, Watson Hepple Consulting Ltd.

Watch our tenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Stephen Scott Watson below.

Stephen joined AI Policy Exchange as a technical/policy expert earlier this month. Stephen is Managing Director of Watson Hepple Consulting Ltd., a digital consulting company headquartered in Newcastle upon Tyne, England. He also sits on the founding editorial board of the journal AI and Ethics at Springer Nature.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | James Brusseau, Director, AI Ethics Site, Pace University, New York

Watch our ninth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by James Brusseau below.

James joined AI Policy Exchange as a technical/policy expert earlier last month. James (PhD, Philosophy) is the author of several books, articles, and digital media works in the history of philosophy and ethics. He has taught in Europe and Mexico and is currently at Pace University near his home in New York City. As Director of the AI Ethics Site Research Institute, currently incubating at Pace University, he explores the human experience of artificial intelligence, especially in the areas of personal freedom and authenticity. He also works in AI ethics as applied to finance, economics, and healthcare.

Artificial intelligence and the future of global governance

Article by Jean Garcia Periche, Co-founder and Chief Government Officer, GENIA Latinoamérica; Founder, Global Neo; Fellow, Singularity University, Silicon Valley

The biggest lesson we can learn from the current crisis is that our international system is ill-equipped to deal with a global emergency. The pandemic has forced us to acknowledge the structural limitations of the multilateral system. Moreover, the impending disruptions coming from artificial intelligence (AI), together with the rise of digital surveillance and massive economic inequality, are fundamental threats to the stability of the world order. This realization calls for a new global agenda able to manage the increasing complexity of our globalized techno-landscape. For that reason, furthering the multilateral system will require an approach that integrates cognitive technologies with global governance.

The challenges posed by AI are global in nature. In times of rising nationalism and populism, it is essential for the international community to design new frameworks for global governance that enable us to build stronger relationships in the age of AI. Unless we strengthen multilateral approaches, these imminent disruptions will come at a higher price. The constant state of flux and the accelerating rate of change are eroding public trust in core global institutions across the world. Moreover, the coming AI-driven rearrangement of the world order will tend to undermine, rather than strengthen, multilateral mechanisms. As nation-states reassert their powers and weaken the role of international institutions, the need to steer regions toward greater global cooperation becomes more urgent than ever.

In light of the exponential rise of AI, it is crucial for the international community to mitigate the impact of intelligent technologies. Most likely, the increase in socioeconomic inequality that results from artificial intelligence will further nationalist fervor across regions. At minimum, the wave of technological disruptions in the following years will create deep structural changes in the labour market, in a capital-intensive economy where automation could potentially displace millions of jobs. At maximum, the ubiquity of AI algorithms poses a major existential risk to our species.

Global cooperation is necessary to establish safety standards that reduce the catastrophic risks posed by the proliferation of autonomous machines. Furthermore, multilateral mechanisms are decisive in building a new social contract to ensure a more equitable distribution of resources in the AI economy. For that reason, the structural reconfiguration of the international system should be towards new and improved mechanisms that deepen international cooperation and integrate new technologies as a governing channel.

The rise of “deepfakes” and the erosion of trust with the normalization of fake news will have detrimental implications for the role global institutions have in peacekeeping and international security. Digital and precision surveillance that utilize biometric data to manipulate entire populations with the ability to control individuals, together with the refinement of lethal autonomous weapons (LAWs), are unprecedented threats to universal human rights. These novel methods of social hacking and warfare require the development of new global governance frameworks that can anticipate this ever-changing field.

Our current global governance framework was established decades before the emergence of digital technologies, and it is poorly placed to respond to the complex challenges of such massive disruptions. Without new collaborative mechanisms that can govern our global commons in the face of AI-driven changes, there is a real risk of damaging the stability of the world order in ways that are irreparable. However, an effective global governance system is not limited to tackling humanity’s shared challenges. A truly reformed multilateral approach ought to take advantage of the major opportunities that arise from the efficacy of emerging technologies.

In that sense, this crisis offers the human race an opportunity to transition to a new model of governance that can upgrade global institutions and reaffirm the importance of international unity. Furthermore, the post-crisis geopolitical arrangement calls for strengthening and rebuilding international cooperation as the most effective way to move forward. There is no doubt that national responses are indispensable, but relying on them alone is futile. Our best weapon against global threats is reinforcing multilateral mechanisms that support nations in working together to solve these challenges, for global problems require global solutions.

Previous global crises were followed by the formation of new multilateral initiatives. After World War I, the international community formed the League of Nations, which morphed into the United Nations following the Second World War. Moreover, the OECD emerged from the Marshall Plan to rebuild Europe, an effort that converged with the push for a European common market and later materialized into the European Union.

The opportunity: a new model of global governance

This time, to rebuild our global governance architecture, it is crucial to equip multilateral organizations with tools that ensure a safe path towards economic development and geopolitical stability. The potential gains from AI represent a significant opportunity to contribute to global prosperity. By 2030, AI is projected to add more than $15.7 trillion to the world economy, about 16 percent higher cumulative GDP compared with today. This impact can only be compared to that of other historic general purpose technologies (GPTs), like electricity during the Second Industrial Revolution. Moreover, the interface between artificial intelligence and the Sustainable Development Goals (SDGs) holds notable promise as numerous regional initiatives to use AI in the achievement of these goals proliferate across the world.

The power to process big data and make autonomous decisions means AI could play a major role in multilateral dynamics. By harnessing a more responsible use of data and machine learning, international institutions can achieve better regional and global coordination with national systems. Leveraging AI as a strategic tool for global governance enables multilateral organizations to manage the complexities of our interconnected world with greater precision. Accordingly, the increasing digitalization of society only furthers the need to apply these technologies in cooperation efforts.

Instead of a common political unity in which global institutions superintend local and national systems, as some have proposed in a model resembling a “World Government”, a truly effective and practical global governance framework channels constant collaboration between local and global institutions based on the sharing of data and information. In this manner, AI can be the key for the international system to reconcile the nationalist and globalist struggles of the coming years. By sharing crucial data with international organizations, national systems can reassert their authority while still empowering multilateral institutions to manage global problems that can only be addressed through collective effort. In doing so, the international community can build stronger relationships and establish effective coordinating bodies that deliver better responses to our common challenges.

Moving forward, there is no better way to transition to this model of collaborative global governance than to equip international health organizations with the proper tools to deal with health crises. Global institutions can process flows of data between national health systems to coordinate actions that increase agile governance at the national and international levels. Launching regional data repositories to solve global health challenges and partnering with companies and private labs to equip governments with critical infrastructure would serve as the optimal starting point for a new way forward for globalization.

Devising a more effective approach to global public health that integrates emerging technologies with key international agencies to detect, prevent, and respond to diseases and pandemics can help avert future failures. In fact, modernizing infrastructure with greater connectivity for data collection is a strategic opportunity for developing nations to leapfrog to a new stage of development. Robust information systems and governance structures that learn from data and optimize with machine learning provide a solid path to a new foundation for multilateral relations. This transition is also the best opportunity for governments to start deploying artificial intelligence as critical infrastructure, powering the new industrial revolution and leveraging the benefits of AI as a general purpose technology.

The post-crisis configuration of the international system will come with many risks that threaten the stability of the world order. The rise of nationalism and the disruptions from AI will undermine the capacities of the multilateral system. For that reason, developing an effective framework for international cooperation should be the priority of the global agenda. By integrating cognitive technologies into multilateral relations, there is a real opportunity to consolidate a new model of global governance equipped to deal with the challenges of our interconnected society in times of extreme volatility. Data-driven governance and machine learning optimization are strategic tools that can be leveraged to deploy critical infrastructure that facilitates agility at the national and international levels. In this world, we are all stakeholders and deserve to have our voices heard. The promise of a better tomorrow will only be realized if we take action to ensure global cooperation among all nations.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Anand Tamboli, Tech Futurist and Principal Advisor, AI Australia

Watch our eighth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Anand Tamboli below.

Anand joined AI Policy Exchange as a technical/policy expert late last month. Anand is an entrepreneur, award-winning author, global speaker, futurist, and highly sought-after thought leader. He works with organizations that want to transform into sustainable brands with creative and innovative employees. Anand specializes in areas where technology and people intersect. He is a creativity, innovation, entrepreneurship, and transformation specialist, well-known for bringing ideas and strategies to life. As a polymath, he can often shed new light on a topic that you might think has been “done to death.” Having worked with several Fortune 500 multinationals over the past two decades, Anand draws upon his cross-industry and multicultural experience.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Montreal AI Ethics Institute

Watch below our seventh vlog in the AI Policy Exchange Expert Vlogging Series 1.0, made by Abhishek Gupta and his wonderful team at the Montreal AI Ethics Institute, an international nonprofit think tank helping people build and navigate AI responsibly.

Abhishek joined AI Policy Exchange as a technical/policy expert earlier this month. Abhishek is the founder of the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. His research focuses on applied technical and policy methods for addressing ethical, safety, and inclusivity concerns in the use of AI across different domains. He has built the largest community-driven public consultation group on AI ethics in the world. His work on public competence building in AI ethics has been recognized by governments across North America, Europe, Asia, and Oceania. He is a Faculty Associate at the Frankfurt Big Data Lab, Goethe University; a guest lecturer at the McGill University School of Continuing Studies for the Data Science in Business Decisions course; and a Visiting AI Ethics Researcher on the Future of Work for the US State Department, representing Canada.

Your brain on algorithms

Article by Andrew Sears, Founder, Technovirtuism and Advisor, All Tech is Human

A quote attributed to the linguist Benjamin Lee Whorf claims that “language shapes the way we think and determines what we can think about.” Whorf, along with his mentor Edward Sapir, argued that a language’s structure influences its speakers’ perception and categorization of experience. Summarizing what he called the “linguistic relativity principle,” Whorf wrote that “language is not simply a reporting device for experience but a defining framework for it.”[1]

Over the past several decades, a strange new language has increasingly exerted its power over our experience of the world: the language of algorithms. Today, millions of computer programmers, data scientists, and other professionals spend the majority of their time writing and thinking in this language. As algorithms have become inseparable from most people’s day-to-day lives, they have begun to shape our experience of the world in profound ways. Algorithms are in our pockets and on our wrists. Algorithms are in our hearts and in our minds. Algorithms are subtly reshaping us in their own image, putting artificial boundaries on our ideas and imaginations.

Our Languages, Our Selves

To better understand how language affects our experience of the world, we can consider the example depicted in the 2016 science fiction movie Arrival. In the film, giant seven-limbed aliens arrive on Earth to give humanity a gift – a “technology,” as they call it. This technology turns out to be their language, which is unlike any human language in that it is nonlinear: sentences are represented by circles of characters with no beginning or end. Word order is not relevant; each sentence is experienced as one simultaneous collection of meanings.

As the human protagonist becomes fluent in this language, her experience of the world begins to adapt to its patterns. She begins to experience time nonlinearly. Premonitions and memories become indistinguishable as her past, present, and future collapse into one another. The qualities of her language become the qualities of her thought, which come to define her experience.

A similar, if less extreme, transformation is occurring today as our modes of thought are increasingly shaped by the language of algorithms. Technologists spend most of their waking hours writing algorithms to organize and analyze data, manipulate human behavior, and solve all manner of problems. To keep up with rapidly changing technologies, they exist in a state of perpetual language learning. Many are evaluated based on how many total lines of code they write.

Even those of us who do not speak the language of algorithms are constantly spoken to in this language by the devices we carry with us, as these devices organize and mediate our day-to-day experience of the world. To understand how this immersion in the language of algorithms shapes us, we need to name some of the peculiar qualities of this language.

Traditional human languages employ a wide range of linguistic tools to express the world in its complexity, such as narrative, metaphor, hyperbole, dialogue, or humour. Algorithms, by contrast, express the world through formulas. They break up complex problems into small, well-defined components and articulate a series of linear steps by which the problem can be solved. Algorithms – or, more accurately, the utility of algorithms – depend on the assumption that problems and their solutions can be articulated mathematically without losing any meaningful qualities in translation.

They assume that all significant variables can be accounted for, that the world is both quantifiable and predictable. They assume that efficiency should be preferred to friction, that elegance is more desirable than complexity, and that the optimized solution is always the best solution. For many speakers of this language, these assumptions have become axiomatic. Their understanding of the world has been re-shaped by the conventions and thought-patterns of a new lingua franca.

Algorithms and Technological Solutionism

Immersion in the language of algorithms inclines us to look for technological solutions to every kind of problem. This phenomenon was observed in 2013 by Evgeny Morozov, who described the folly of “technological solutionism” in his book, To Save Everything, Click Here:

“Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized–if only the right algorithms are in place!–this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.”[2]

An algorithmic approach to problem solving depends on the assumption that the effects of the algorithm – the outputs of the formula – can be predicted and controlled. This assumption becomes less plausible as the complexity of the problem increases. When technologists seek to write an algorithmic solution to a nuanced human problem – like crime prevention or biased hiring, for example – they are compelled to contort the problem into an oversimplified form in order to express it in the language of algorithms. This results, more often than not, in the “unintended consequences” that we’ve all heard so much about. Perhaps Wordsworth said it best, 200 years before Morozov:

“Our meddling intellect

Mis-shapes the beauteous forms of things:—

We murder to dissect.”[3]

A helpful example of all this is furnished by Amazon, which in 2014 implemented an algorithmic tool to automate the vetting of resumes. The tool was designed to rate job candidates on a scale from one to five in order to help managers quickly identify their top prospects. After one year of use, the tool was shelved for discriminating against female candidates.

This parable contains all the components of the technosolutionist narrative: a problem rife with human complexity is assigned an algorithmic solution in the pursuit of efficiency; the problem is oversimplified in a hubristic attempt to express it numerically; the algorithm fails to achieve its objective while creating some nasty externalities along the way.
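The mechanism at the heart of this parable, a model that learns to reproduce past decisions, can be sketched in a few lines of Python. The data and keywords below are invented purely for illustration; they have nothing to do with Amazon’s actual system.

```python
# Toy sketch: a naive screening model trained on biased historical
# decisions simply learns to reproduce that bias as a "score."

from collections import defaultdict

# Hypothetical historical resumes: (keyword, hired). Past managers hired
# "chess club" candidates often and "women's chess club" candidates rarely.
history = (
    [("chess club", True)] * 8 + [("chess club", False)] * 2
    + [("women's chess club", True)] * 2 + [("women's chess club", False)] * 8
)

def train(records):
    """Score each keyword by its historical hire rate."""
    hires, totals = defaultdict(int), defaultdict(int)
    for keyword, hired in records:
        totals[keyword] += 1
        hires[keyword] += hired  # True counts as 1, False as 0
    return {k: hires[k] / totals[k] for k in totals}

model = train(history)
print(model["chess club"])          # prints 0.8
print(model["women's chess club"])  # prints 0.2
```

Nothing in the formula is unfair on its own; the discrimination enters entirely through the training data, which is why such failures are so hard to anticipate by inspecting the algorithm alone.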

Immersion in the language of algorithms doesn’t just shape our understanding of the world; it shapes our understanding of the self. Members of the “quantified self” community use a combination of wearable devices, internet-of-things sensors, and – in extreme cases – implants to collect and analyze quantitative data on their physical, mental, and emotional performance. Adherents of this lifestyle pursue “self-knowledge through numbers,” in dramatic contrast to their pre-digital forebears who may have pursued self-knowledge through philosophy, religion, art, or other reflective disciplines.

The influence of algorithms on our view of the self finds more mainstream expressions as well. Psychologists and other experts have widely adopted computational metaphors to describe the functions of the human brain.[4] Such conventions have made their way into our common vernacular; we speak of needing to “process” information, of feeling “at capacity” and needing to “recharge,” of being “programmed” by education and “re-wired” by novel experiences.

To understand the extent to which algorithms have shaped our world, consider that even Silicon Valley’s rogues and reformers struggle to move beyond algorithmic thought patterns. This is evidenced by the fact that tech reformers almost uniformly adopt a utilitarian view of technological ethics, evaluating the ethical quality of a technology based on its perceived consequences.

The algorithmic fixation on quantification, measurement, outputs, and outcomes informs their ethical thinking. Other ethics frameworks – such as deontology or virtue ethics – are seldom applied to technological questions, except by the occasional academic trained in these disciplines.

A Language Fit for Our World

When it comes to complex social problems, algorithmic solutions tend to function at best as band-aids, covering up human shortcomings but doing nothing to address the problem’s true causes. Unfortunately, technological solutionism’s disastrous track record has done little to dampen its influence on the tech industry and within society as a whole.

Contact tracing apps look to be an important part of the solution to the COVID-19 pandemic; but will they be able to compensate for the lack of civic trust and responsibility that has crippled America’s virus containment efforts? Predictive and surveillance policing, powered by artificial intelligence tools, promise to create a safer society; but can they atone for the complex social and economic inequalities that drive crime and dictate who gets called a criminal? As November 2020 approaches, social media companies assure us that they’re ready to contain the spread of fake news this time around; but are we as citizens ready to engage in meaningful civil discourse?

Benjamin Lee Whorf, mirroring the language of Wordsworth’s poem, wrote that “we dissect nature along lines laid down by our native language.”[5] “We cut nature up, organize it into concepts, and ascribe significances,” as our language dictates and allows. It is therefore critical that the languages we adopt are capable of appreciating the world in all of its messy, tragic, beautiful complexity. Algorithms are well-suited to many kinds of problems, but they have important limits. Rather than shrinking our world to fit within these limits, we must broaden our minds to comprehend the intricacies of the world. We must stop thinking like algorithms and start thinking like humans.

References

[1] Whorf, Benjamin Lee (1956). Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press.

[2] Morozov, Evgeny (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.

[3] Wordsworth, William (2004). “The Tables Turned.” In Selected Poems. London: Penguin Books.

[4] For an excellent discussion and refutation of this trend, see Nicholas Carr’s book, The Shallows: What the Internet Is Doing to Our Brains.

[5] Whorf, Benjamin Lee (1956). In Carroll, John B. (ed.), Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press, pp. 212–214.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Ana Chubinidze, AI & Data Governance Professional

Watch our sixth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0, made by Ana Chubinidze, below.

Ana joined AI Policy Exchange as a technical/policy expert late last month. Ana made her foray into AI policy during her research fellowship with the Regional Academy on the United Nations (RAUN), where she worked on a policy paper, in cooperation with UNDP, discussing anticipatory regulations for harnessing AI for the greater good. Following this, for her master’s thesis, Ana researched the influences of content personalization on personal autonomy. A short summary of her thesis has been published by the Royal Society for the encouragement of Arts, Manufactures and Commerce (London). Ana recently joined the founding editorial board of Springer Nature’s new AI and Ethics journal. She constantly strives to expand her understanding of AI through various online courses in data science, neuroscience, and quantum machine learning. Ana believes that concerted efforts are needed to make AI policy-making more inclusive and pragmatic.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.