Expert Vlogging Series 1.0: How to put AI ethics into practice? | Anthony Rhem, CEO & Principal Consultant, A.J. Rhem & Associates, Inc.

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our fourteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Anthony Rhem below.

Anthony joined AI Policy Exchange as a technical/policy expert early last month. Anthony is an Information Systems professional with more than 30 years of experience. He is a published author and educator who presents on the theory and application of Software Engineering Methodologies, Knowledge Management, and Artificial Intelligence. Anthony also sits on the Founding Editorial Board of Springer Nature’s AI and Ethics Journal.

Viewers are welcome to send us their feedback at editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Nigel Willson, Founding Partner, awakenAI and Co-Founder, We and AI

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our thirteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Nigel Willson below.

Nigel joined AI Policy Exchange as a technical/policy expert in late June this year. Nigel is a global speaker, top influencer, and advisor on artificial intelligence, innovation & technology. He worked at Microsoft for 20 years in many different roles; his last role was as the European Chief Technology Officer for Professional Services. He is now an independent voice on artificial intelligence and machine learning with awakenAI, his own independent AI advisory company, and We and AI, a non-profit organisation he co-founded to promote the ethical use of AI for the benefit of everyone. He sits on several advisory boards, where he hopes to help ensure that AI is used in a fair, ethical and transparent way, to promote the use of AI to level inequalities in the world, and to promote AI for good and AI for all.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Virginie Martins de Nobrega, AI Ethics and Public Policy Expert, Creative Resolution

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our twelfth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Virginie Martins de Nobrega below.

Virginie joined AI Policy Exchange as a technical/policy expert early last month. Virginie is a seasoned professional in international development who is passionate about tackling global challenges of a social, economic, legal and political character to achieve progress for all. Back in 2008, during her Executive MBA, Virginie conducted research on the AI and innovation ecosystem within the United Nations to see how technologies could improve the efficiency and impact of public-sector action on international development, from a programme management perspective as well as from a public policy perspective. Ever since, she has been advocating for responsible and ethical AI with strong human rights benchmarks, to optimize the social impact of AI applications by tackling issues such as systemic inequalities and bias.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Ivana Bartoletti, Co-Founder, Women Leading in AI Network

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our eleventh vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Ivana Bartoletti below.

Ivana joined AI Policy Exchange as a technical/policy expert in early May this year. Ivana is Technical Director, Privacy at Deloitte and Co-founder of the Women Leading in AI Network, a thriving international group of scientists, industry leaders and policy experts advocating for responsible AI. In her day job, Ivana helps businesses with their privacy-by-design programmes, especially in relation to AI and blockchain technology. Ivana was awarded Woman of the Year (2019) at the Cyber Security Awards in recognition of her growing reputation as an advocate of equality, privacy and ethics at the heart of tech and AI. She is a sought-after commentator for the BBC, Sky and other major broadcasters, and for news outlets like The Guardian and The Telegraph, on headline stories where the tech economy intersects with privacy, data law and politics. Ivana’s first book, An Artificial Revolution: On Power, Politics and AI, focuses on how technology is reshaping the world, and on the global geopolitical forces underpinning this process.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Stephen Scott Watson, Managing Director, Watson Hepple Consulting Ltd.

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our tenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Stephen Scott Watson below.

Stephen joined AI Policy Exchange as a technical/policy expert earlier this month. Stephen is Managing Director of Watson Hepple Consulting Ltd., a digital consulting company headquartered in Newcastle upon Tyne, England. He also sits on the Founding Editorial Board of Springer Nature’s AI and Ethics Journal.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | James Brusseau, Director, AI Ethics Site, Pace University, New York

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our ninth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by James Brusseau below.

James joined AI Policy Exchange as a technical/policy expert early last month. James (PhD, Philosophy) is the author of several books, articles, and digital media works in the history of philosophy and ethics. He has taught in Europe and Mexico and is currently at Pace University near his home in New York City. As Director of the AI Ethics Site Research Institute, currently incubating at Pace University, he explores the human experience of artificial intelligence, especially in the areas of personal freedom and authenticity. He also works in AI ethics as applied to finance, economics, and healthcare.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.

Artificial intelligence and the future of global governance

Article by Jean Garcia Periche, Co-founder and Chief Government Officer, GENIA Latinoamérica; Founder, Global Neo; Fellow, Singularity University, Silicon Valley

The biggest lesson we can learn from the current crisis is that our international system is ill-equipped to deal with a global emergency. This pandemic has forced us to acknowledge the structural limitations of the multilateral system. Moreover, the impending disruptions coming from Artificial Intelligence (AI), together with the rise of digital surveillance and massive economic inequality, are fundamental threats to the stability of the world order. This realization calls for a new global agenda that can manage the increasing complexity of our globalized techno-landscape. For that reason, furthering the multilateral system will require an approach that integrates cognitive technologies with global governance.

The challenges posed by AI are global in nature. In times of heightened nationalism and populism, it is essential for the international community to design new frameworks for global governance that enable us to build stronger relationships in the age of AI. Unless we further multilateral approaches, these imminent disruptions will come at a higher price. The constant state of flux and the accelerating rate of change are eroding populations’ trust in core global institutions across the world. Moreover, the coming AI-driven rearrangement of the world order will tend to undermine, rather than strengthen, multilateral mechanisms. As nation-states reassert their powers and weaken the role of international institutions, the need to steer regions toward greater global cooperation becomes more urgent than ever.

In light of the exponential rise of AI, it is crucial for the international community to mitigate the impact of intelligent technologies. Most likely, the increase in socioeconomic inequality that results from artificial intelligence will fuel nationalist fervor across regions. At a minimum, the wave of technological disruptions in the coming years will create deep structural changes in the labour market, in a capital-intensive economy where automation could displace millions of jobs. At the extreme, the ubiquity of AI algorithms poses a major existential risk to our species.

Global cooperation is necessary to establish safety standards that reduce the catastrophic risks posed by the proliferation of autonomous machines. Furthermore, multilateral mechanisms are decisive in building a new social contract to ensure a more equitable distribution of resources in the AI economy. For that reason, the structural reconfiguration of the international system should be oriented towards new and improved mechanisms that deepen international cooperation and integrate new technologies as a governing channel.

The rise of “deepfakes” and the erosion of trust that comes with the normalization of fake news will have detrimental implications for the role global institutions play in peacekeeping and international security. Digital and precision surveillance that uses biometric data to manipulate entire populations and control individuals, together with the refinement of lethal autonomous weapons (LAWs), poses an unprecedented threat to universal human rights. These novel methods of social hacking and warfare require the development of new global governance frameworks that can anticipate this ever-changing field.

Our current global governance framework was established decades before the emergence of digital technologies, and it is poorly placed to respond to the complex challenges of such massive disruptions. Without new collaborative mechanisms that can govern our global commons in the face of AI-driven changes, there is a real risk of damaging the stability of the world order in ways that are irreparable. However, an effective global governance system is not limited to tackling humanity’s shared challenges. A truly reformed multilateral approach ought to take advantage of the major opportunities that arise from the efficacy of emerging technologies.

In that sense, this crisis offers the human race an opportunity to transition to a new model of governance that can upgrade global institutions and reaffirm the importance of international unity. Furthermore, the post-crisis geopolitical arrangement calls for strengthening and rebuilding international cooperation as the most effective way to move forward. There is no doubt that national responses are indispensable, but relying on them alone is futile. Our best weapon against global threats is reinforcing multilateral mechanisms that support nations in working together to solve these challenges, as global problems require global solutions.

Previous global crises were followed by the formation of new multilateral initiatives. After World War I, the international community formed the League of Nations, which morphed into the United Nations following the Second World War. Moreover, the OECD emerged from the Marshall Plan to rebuild Europe, an effort that converged with the push for a European common market and later materialized in the European Union.

The opportunity: a new model of global governance

This time, to rebuild our global governance architecture, it is crucial to equip multilateral organizations with the tools that ensure a safe path towards economic development and geopolitical stability. The potential gains from AI represent a significant opportunity to contribute to global prosperity. By 2030, AI is projected to add more than $15.7 trillion to the world economy, making cumulative GDP about 16 percent higher than today. This impact can only be compared to other historic general purpose technologies (GPTs), like electricity during the Second Industrial Revolution. Moreover, the interface between artificial intelligence and the Sustainable Development Goals (SDGs) holds notable promise, as numerous regional initiatives to use AI in the achievement of these goals proliferate across the world.

The power to process Big Data and make autonomous decisions means that AI could play a major role in multilateral dynamics. By harnessing a more responsible use of data and machine learning, international institutions can coordinate better with national systems at the regional and global levels. Leveraging AI as a strategic tool for global governance enables multilateral organizations to manage the complexities of our interconnected world with greater precision. Accordingly, the increasing digitalization of society only strengthens the case for applying these technologies to cooperation efforts.

Instead of a common political unity in which global institutions superintend local and national systems, as some have proposed in a model resembling a “World Government”, a truly effective and practical global governance framework channels constant collaboration between local and global institutions based on the sharing of data and information. In this manner, AI can be the key for the international system to harmonize the nationalist and globalist struggles of the coming years. By sharing crucial data with international organizations, national systems can still reassert their authority while empowering multilateral institutions to manage global problems that can only be addressed through collective effort. By doing so, the international community can build stronger relationships and establish effective steering bodies that coordinate better responses to our common challenges.

Moving forward, there is no better way to transition to this model of collaborative global governance than to equip international health organizations with the proper tools to deal with health crises. Global institutions can process a flow of data between national health systems to coordinate actions that increase agile governance at the national and international levels. Launching regional data repositories to solve global health challenges, and partnering with companies and private labs to equip governments with critical infrastructure, is the optimal starting point for a new way forward for globalization.

Devising a more effective approach to global public health, one that integrates emerging technologies with key international agencies to detect, prevent, and respond to diseases and pandemics, can help avert future failures. In fact, modernizing infrastructure with greater connectivity for data collection is a strategic opportunity for developing nations to leapfrog to a new stage of development. Robust information systems and governance structures that learn from data and optimize with machine learning provide a solid path toward a new foundation for multilateral relations. This transition is also the best opportunity for governments to start deploying artificial intelligence as critical infrastructure, building the new industrial revolution and leveraging AI’s benefits as a general purpose technology across society.

The post-crisis configuration of the international system will come with many risks that threaten the stability of the world order. The rise of nationalism and the disruptions from AI will undermine the capacities of the multilateral system. For that reason, developing an effective framework for international cooperation should be the priority of the global agenda. By integrating cognitive technologies into multilateral relations, there is a real opportunity to consolidate a new model of global governance that is equipped to deal with the challenges of our interconnected society in times of extreme volatility. Data-driven governance and machine learning optimization are strategic tools that can be leveraged to deploy critical infrastructure that facilitates agility at the national and international levels. In this world, we are all stakeholders and deserve to have our voices heard. The promise of a better tomorrow will only be realized if we take action to ensure global cooperation between all nations.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Anand Tamboli, Tech Futurist and Principal Advisor, AI Australia

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our eighth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Anand Tamboli below.

Anand joined AI Policy Exchange as a technical/policy expert late last month. Anand is an entrepreneur, award-winning author, global speaker, futurist, and highly sought-after thought leader. He works with organizations that want to transform into sustainable brands with creative and innovative employees. Anand specializes in areas where technology and people intersect. He is a creativity, innovation, entrepreneurship, and transformation specialist, and is well known for bringing ideas & strategies to life. Being a polymath, he can often shed new light on a topic that you think is “done to death.” Having worked with several Fortune 500 multinationals over the past two decades, Anand draws upon his cross-industry and multicultural experience.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.


The five AI myths to be aware of

Article by Anand Tamboli, tech futurist and award-winning author of “Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks” (Apress)

Technology is often over-hyped but underestimated. Artificial intelligence is no different.

There are plenty of stories that don’t give you the full picture. And then there are many anti-stories, which create more confusion. As I often say, the truth can be uncovered only with deep questioning and a relentless pursuit of answers. It is simple, but not easy!

So, here are five AI myths that keep circulating without ever being resolved.

Myth #1: AI will create more jobs than it will eliminate

Whenever there is a question about job losses, most people play it safe and avoid speaking plainly. Sometimes, this is merely due to an inability to see the bigger picture.

AI will eliminate routine jobs and change many others in the coming few years, mainly because AI is currently used mostly for incremental and efficiency innovations in businesses.

As Clayton Christensen explained in one of his lectures, efficiency innovation doesn’t create jobs; it instead eliminates jobs and releases capital. That’s where enterprises and many businesses see more value, i.e., fewer overheads and more capital to play with.

AI can help in creating new jobs only if it is democratized significantly, which is not the case today. In the future, if business priorities shift and more resources are spent on democratization, we will see a change in the job market.

AI can help in creating new jobs only if it is democratized significantly!

However, remember that these jobs will not be like-for-like. That means most people will have to reset and relearn a lot! For those who live hand to mouth (on meager salaries or wages), this is going to be challenging. Significant time will be spent learning new skills whose application may be short-lived. So the loop of “relearn – apply for jobs – use those skills for a short term” can soon become exhausting for a significant number of people.

Everyone must understand this reality and adjust; merely believing tales that there will be more jobs (or fewer) will not help.

Myth #2: AI is smarter than people and will outpace human intelligence

The fact is, AI can only be as smart as you program it to be. By comparison, however, it may seem more intelligent.

What I mean is, if more people lose their wits and stop learning and augmenting themselves mentally and experientially, they will eventually become less and less intelligent in comparison with technology, and the balance will tilt. Mindless use of technology and consumerism are some of the most significant contributors to this effect.

Mindless use of technology and consumerism are some of the most significant contributors to creating intellectual disparity.

One thing we must not forget: there are many dimensions of intelligence, and AI covers only a few of them. As long as humans learn to master the other dimensions of intelligence, things will remain under control.

However, if we choose to regress over time, then…godspeed!

Myth #3: AI will solve all humanity’s problems

To solve any problem, you must define the problem effectively and correctly. If humans can’t do that job properly, nothing much will change.

AI may assist in identifying solutions, but humans will have to execute. Most importantly, we will have to define our high-priority problems first!

Take the climate change issue as an example. Scientists have been warning about it for decades; activists have been campaigning for decades; nothing has changed. Moreover, the environment has given us enough warning signs in the form of several calamities, yet nothing has changed. Even if people come up with an extraordinary solution using AI, what will change, and why?

We have somehow learned to discard, undermine, or ignore all the solutions we do not like. Much like with a bitter pill, there seems to be an inherent resistance to taking it.

The same goes for AI solutions: if we don’t like the answers, we may not accept and execute them. And even if, say, we like them for once, we humans will still have to execute flawlessly, go through change management, transformation, the whole nine yards.

And after all, what are humanity’s problems? Customer complaints, for which AI chatbots are developed? Identifying people, for which facial recognition is being perfected? Or understanding what people most prefer to buy, for which marketing systems are being fine-tuned? What about humanity’s real problems: inequality, hunger, climate change, discrimination, mental health, and so on?

Coming up with the solution is only 5% of the whole endeavor; we still have to handle the other 95%.

Myth #4: Cognitive AI can solve problems in the same way as humans

There is a common fairy-tale belief that cognitive AI will help solve problems in the same way humans do.

But I hate to break it to you – it won’t!

AI, or any other technology, cannot solve problems it wasn’t designed to address. After all, technology is a tool, not the mind. The belief that it can be otherwise is a compelling story, but a false one.

However, think again! With all this hoopla, how much has AI helped in coming up with solutions to the Covid-19 situation? Almost nil!

AI cannot solve problems it wasn’t designed to address.

Myth #5: AI systems are biased

AI mindlessly amplifies bias that already exists in its designers.

By definition, bias means learning from patterns and experiences. Not all biases are bad, wrong, or evil. A weapon is not biased in itself, and neither is AI. However, just as the use of a firearm depends on its user (soldier or terrorist), the use of AI will depend on who is developing and using it.

If researchers, developers, sponsors, and early adopters fail to notice or even acknowledge the lacunae of AI systems, those systems will only perpetuate existing societal bias.
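To make that amplification concrete, here is a minimal, purely illustrative Python sketch. The hiring data and the toy “majority vote” model are invented for this example, not drawn from any real system: a model fitted to skewed historical decisions does not merely reflect the skew, it hardens it into an absolute rule.

```python
# Toy sketch with made-up numbers: a "model" trained on skewed historical
# decisions turns that skew into a hard automated rule.
from collections import Counter

# Hypothetical past hiring records as (group, hired) pairs: group "A" was
# favoured 80% of the time, group "B" only 30% of the time.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def fit_majority(records):
    """Learn the majority outcome seen for each group -- a minimal stand-in
    for any model that reproduces the patterns in its training data."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit_majority(history)
print(model)  # {'A': True, 'B': False}: an 80/20 vs 30/70 skew becomes an absolute rule
```

Swap in any real learning algorithm and the mechanism is the same: whatever pattern sits in the training data, the system will faithfully, and mindlessly, reproduce it at scale.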

Although the bigger question remains, “Who decides what is right or wrong?”

A better way to approach the bias issue is to refer to the societal and legal fabric and treat it as the baseline. If you do not like something or think it is biased, changing policy and law will be more effective. Calling out such issues is still necessary and helps to make some noise, but it will not be sufficient to yield fruitful outcomes.

AI mindlessly amplifies bias that already exists in its designers.

The point is

Technology is often over-hyped but underestimated. Remember that people use your ignorance to sell you something! Selling the AI revolution is one such attempt. You don’t have to be ignorant and believe one-sided stories.

Everyone needs to learn to find anti-stories for each story and cross-check. Eventually, you may still believe in the main story, but you will be more confident as a result of cross-checking.

Technology is often over-hyped but underestimated. Remember, people use your ignorance to sell you something.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Montreal AI Ethics Institute

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch below our seventh vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0, made by Abhishek Gupta and his wonderful team at the Montreal AI Ethics Institute, an international nonprofit think tank helping people build & navigate AI responsibly.

Abhishek joined AI Policy Exchange as a technical/policy expert earlier this month. Abhishek is the founder of the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. His research focuses on applied technical and policy methods to address ethical, safety, and inclusivity concerns in using AI in different domains. He has built the largest community-driven public consultation group on AI ethics in the world. His work on public competence building in AI ethics has been recognized by governments from North America, Europe, Asia, and Oceania. He is a Faculty Associate at the Frankfurt Big Data Lab, Goethe University, a guest lecturer at the McGill University School of Continuing Studies for the Data Science in Business Decisions course, and a Visiting AI Ethics Researcher on the Future of Work for the US State Department, representing Canada.

Don’t forget to send us your feedback at editor@aipolicyexchange.org.