Why AI applications get stalled in the public sector

Article by Thomas Balbach, Senior Business Analyst, Public Sector, Sopra Steria, and Master of Science in Public Sector Innovation and eGovernance Candidate at KU Leuven, WWU Münster and TalTech.

This article is published as part of a joint blog series on AI in Government by AI Policy Exchange and N3GZ Young Professionals Network on Digital Government, the official, self-organized youth network of the German National E-Governance Competence Center (NEGZ). The series is aimed at featuring emerging policy perspectives from all across the globe on the use of AI applications in the public sector.

This article explains why many applications of artificial intelligence (AI) in public administration are ultimately shelved. To that end, it uses a real-world example of a discontinued AI application in the regional government administrations in Belgium.

More than a year ago, the author participated in a hackathon for applied AI in public administration with a cross-functional team of programmers and business analysts. The team created a proof of concept (POC) for a natural language processing (NLP) application that estimated which parts of a document contained which information types. The concept was compelling, yet it was never commissioned: cost structures specific to the public sector ruled out a business case for the POC, notwithstanding a convincing use case.

Generally, to understand why AI applications in public administration often face an early death, one must shift perspective from that of a technology provider to that of a public administrator.

Technology provider view: the use case

The status quo: Whenever Belgian municipalities elect new mayors and city councillors, the election results are recorded in electronic documents. Municipal governments then send these documents to the regional administrations, which aggregate and publish them. This data is stored in the designated format (i.e. Open Standards for Linked Organisations – OSLO) in the regional database. This database can be queried: Did the same person win electoral offices in several municipalities? What other patterns might one observe?
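To make the kind of query concrete, here is a minimal sketch using an in-memory SQLite database from Python. The schema and sample data are invented for illustration and are far simpler than the actual OSLO model.

```python
# Hypothetical sketch: once election results sit in a simple
# (person, office, municipality) table, the cross-municipality question
# becomes a one-line aggregation. Table and column names are assumptions
# for illustration, not the real OSLO data model.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mandates (person TEXT, office TEXT, municipality TEXT)")
con.executemany(
    "INSERT INTO mandates VALUES (?, ?, ?)",
    [
        ("Jan Jansen", "Mayor", "Ghent"),
        ("Jan Jansen", "Councillor", "Aalst"),
        ("An Peeters", "Councillor", "Leuven"),
    ],
)

# Did the same person win electoral offices in several municipalities?
rows = con.execute(
    """
    SELECT person, COUNT(DISTINCT municipality) AS n
    FROM mandates
    GROUP BY person
    HAVING n > 1
    """
).fetchall()
print(rows)  # [('Jan Jansen', 2)]
```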

As a technology provider, we saw a clear use case for AI facilitating the publication of election results in Belgium. On the face of it, it promised significant improvement to the workflow with comparatively little effort. Currently, public administrators in the regional government copy and paste the names of elected mayors and councillors from each municipal document manually, a truly cumbersome process. There are 581 municipalities in Belgium and three regions. Consequently, reading all those reports adds up to a few weeks’ workload for one or even several full-time employees. An AI application could speed up this process by excerpting the names, e.g. “Jan Jansen”, and labelling them with the right information type, e.g. “Mayor”. The AI application speeds up the data entry, a clear benefit.
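To give a feel for what such an extraction step could look like, here is a minimal sketch assuming a spaCy-style NER pipeline with a Dutch model. The keyword mapping and the context-window heuristic are illustrative assumptions, not the hackathon team’s actual implementation.

```python
# Minimal sketch: find person names in a municipal report and guess their
# information type from nearby role keywords. Assumes the spaCy Dutch model
# "nl_core_news_sm" is installed; keywords and window size are hypothetical.
import spacy

nlp = spacy.load("nl_core_news_sm")  # Dutch pipeline; persons are tagged "PER"

ROLE_KEYWORDS = {  # hypothetical mapping: Dutch role keyword -> information type
    "burgemeester": "Mayor",
    "gemeenteraadslid": "Councillor",
}

def extract_officials(text: str) -> list[tuple[str, str]]:
    """Return (name, information type) pairs guessed from the document."""
    doc = nlp(text)
    results = []
    for ent in doc.ents:
        if ent.label_ != "PER":
            continue
        # Look in a small window of tokens before the name for a role keyword.
        for token in doc[max(ent.start - 5, 0):ent.start]:
            role = ROLE_KEYWORDS.get(token.lower_)
            if role:
                results.append((ent.text, role))
                break
    return results

print(extract_officials("Tot burgemeester werd Jan Jansen verkozen."))
# e.g. [('Jan Jansen', 'Mayor')]
```

A real system would of course need to handle tables, scans, and many more role types, but the core idea is the same: named-entity recognition plus a labelling step.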

Efficiency lies in the eye of the beholder

Does it make sense to automate the extraction of names from the municipal assembly notes? For the department of the regional administration which aggregates the election results, automating the task saves many person-months. Automating it means less tedious work for the regional government’s employees and frees up resources for more pressing matters. Thus, from the perspective of the responsible regional government authority, the business case seems convincing.

However, from the perspective of the overall region, the business case is a lot cloudier. Looking beyond the viewpoint of a single authority or department, the AI-powered data entry appears inefficient. Both the existing solution and the proposed AI solution are operationally suboptimal.

In this situation, the highest efficiency gains can be achieved by reworking the administrative procedures so that municipal employees can enter the election results directly into a single digital interface. This would reduce the number of interfaces and transmission errors, and it would produce the same result with a technologically less demanding process.

However, here the costs of change come into play. The regional government would have to pass a new law, coordinate with other regions for similar regulations, and convince municipal governments of the new approach. Thus, the regional government would have to step into the conflict-prone territory of inter-agency, inter-region, multi-level governance, spending time and political capital.

Many ways to skin a cat: the costs of public sector innovation

So what alternatives are there for public managers in the regional government? Their options are to:

(A) Do nothing and retain the copy-and-paste process;

(B) Use AI to excerpt the names from the municipal documents to populate a regional database;

(C) Reform the administrative processes and ask municipal governments to enter election results in an online form to populate a regional database.

The preceding section showed that option C delivers the highest efficiency gains. Therefore, we conclude that option C achieves the highest cost reduction (savings) while option A achieves the lowest cost reduction. This means that doing nothing (option A) incurs the highest opportunity costs, i.e. potential savings that are not realized.

Put like that, it seems irrational to forego the AI solution or the procedural reform. But that is what the public managers ultimately did. They chose option A. The following paragraphs will explain why it was rational to do nothing and keep the status quo.

Our assumptions about the potential cost reductions for each option were misguided: we did not realize that the costs of change are generally much higher in the public sector than in the private sector, as experience suggests.[1] Factoring in change-related costs, the potential savings of options B and C are much lower than the efficiency gains alone suggest. The costs for each option are:

(A) The cost of person-days in the regional government;

(B) The costs of developing and implementing the AI solution, plus the costs of changing processes in the regional government;

(C) The costs of changing processes in regional and municipal administrations, the costs of legislative reform at the federal and/or regional level, the costs of developing and implementing the single digital interface, plus additional person-days at the municipal level to enter election results into the interface.

Weighing the options and their respective costs of change, the regional government rationally chooses option A, i.e. doing nothing and keeping the status quo.
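The point can be made with a stylized back-of-the-envelope calculation. All figures below are hypothetical assumptions, not estimates from the Belgian case; they merely show how change costs can flip the ranking that efficiency gains alone would suggest.

```python
# Stylized comparison under purely hypothetical numbers (EUR).
# "efficiency_gain" is the yearly person-cost saved; "cost_of_change" bundles
# development, process change, and (for C) legislative reform.
options = {
    "A (status quo)":       {"efficiency_gain": 0,      "cost_of_change": 0},
    "B (AI extraction)":    {"efficiency_gain": 60_000, "cost_of_change": 250_000},
    "C (single interface)": {"efficiency_gain": 90_000, "cost_of_change": 700_000},
}

horizon_years = 3  # gains accrue slowly, since elections are infrequent

for name, o in options.items():
    net = o["efficiency_gain"] * horizon_years - o["cost_of_change"]
    print(f"{name}: net benefit over {horizon_years} years = {net:+,} EUR")
# Under these assumptions, A (0) beats B (-70,000) and C (-430,000):
# doing nothing is the rational choice.
```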

There’s more to it: risks particular to the public sector

Beyond costs, there are risks unique to the public sector. Invisible to the untrained eye, they weigh heavily on public managers’ decisions when selecting among the three options above. With regard to the proposed AI solution, there are three noteworthy risks:

(R1) Larger scale of errors

For options B and C, a central solution would apply to all municipalities. Therefore, errors in the system would result in problems for all municipalities of the region. In contrast, the manual processes of option A are more robust: an error affects only a single municipality’s records.

(R2) No redundancies

The regional government is the sole provider of consolidated municipal electoral results in the region. If the process of aggregating election results breaks down for some reason, there is no alternative source. This risk cannot be mitigated.

(R3) No room for experimentation

If a factory applies AI to its production process, it can experiment. If it must discard 5% of its output due to experiments gone awry, it can still turn a profit on the optimised 95%. This does not hold for public administrations in general, nor for our election results in particular. If the proposed AI solution in option B excerpts the wrong names from the municipal files with an error rate of 5%, the public, news agencies, and municipalities will reject the entire solution and blame the regional government for it.

Again, considering the risks as well as the costs of change, public managers will rationally choose option A.

Bogus rationalizations for the public sector retaining the status quo

It is tempting to explain this preference for doing nothing and keeping the status quo with common perceptions about workers in the public sector: they are risk-averse, unwilling to take responsibility, and lazy. It’s as simple as that. Many TV series have proven the point (and very entertainingly so): Yes Minister, Parks and Recreation, and Der Reale Irrsinn.

Some rationalize the preference by referring to the budget-maximizing model of bureaucratic behaviour, which holds that public managers simply have an inherent drive for more budget, staff, and power. From this perspective, the reasoning runs like this:

By use of an AI application, some person-months could be removed from a task. To private sector managers, this is attractive as it lowers costs while retaining the same output, resulting in higher profits. To public sector managers, however, it is unattractive, because they want to have as many subordinates as possible, preferably ones unfamiliar with new ways of working, as that makes them easier to control.

Instead of pointlessly indulging in such bogus rationalizations for the proclivity of public managers to retain the status quo, we need to understand the absence of change as a rational decision. The sections above explained that it is not the managers’ motivations, but the risks and cost structures that cause them to keep the status quo. Seasoned public managers are aware of the change-related costs and risks that are particular to the public sector. Often, technology providers and startups are not. Thus, what is a rational decision for one seems like the product of bad character to the other.

UCNOBC: use case, no business case

Option A, doing nothing, is a case of the common Use Case, No Business Case (UCNOBC) phenomenon: there is a use case for an AI application, but no business case for it. Generally, the lack of a solid business case is among the most common reasons for AI projects to fail.[2]

In the public sector, the business case is determined through cost-benefit analyses. Typical benefits are more effective policy provision or more efficient service delivery.

The UCNOBC phenomenon is frequent in the public sector with regard to AI applications. In Table 1 below, the first and second columns list general types of use cases and their associated cost categories; the third column gives examples where a business case exists because the cost of change is lower than the cost of no-change; the fourth column gives examples of the UCNOBC phenomenon, where the cost of change exceeds the cost of no-change.

Type of use case | Relevant cost categories | Example where cost of change < cost of no-change | Example where cost of change > cost of no-change
Process automation | IT solution development, IT change management, administrative process change management, policy-specific legislative reform, inter-agency collaboration, intra-agency or intra-administration collaboration | chat bots | facial recognition system at Berlin Südkreuz train station
Process optimization | increase of workload, IT solution development, IT change management, administrative process change management, intra-agency or intra-administration collaboration | predictive policing in North Rhine-Westphalia; predictive form filling; document completeness checks | AI assessment of grant applications
Targeting | increase of workload, IT solution development, IT change management, administrative process change management, intra-agency or intra-administration collaboration | AMS (unemployment agency) algorithm in Austria | sentiment analysis per public service used

Table 1

This perspective helps to explain why some of the projects in the examples above were delayed or terminated. For example, the facial recognition pilot project at Berlin Südkreuz train station, run by the German Federal Police, was technologically inexpensive but did not achieve a viable accuracy rate: erroneously frisking that many persons was not feasible for the authorities and intolerable to the citizens.

Pilot flagship projects and hackathon initiatives tend to foster disappointment among their participants and the public because they result in many UCNOBC cases. Here, public managers might want to estimate costs of change and no-change beforehand. At hackathons, these estimates and business cases could be made visible to participants, so that their use cases at least have a fighting chance.

Advice for AI enthusiasts

Much heartbreak can be avoided by scrutinising the costs of change related to AI applications in public administrations. As Table 1 shows, there are AI applications with convincing business cases. We should keep the UCNOBC phenomenon in mind when the next hot new AI solution crosses our path. It takes deeper knowledge of the public sector to separate the wheat from the chaff.

Endnotes and References

[1] There is academic literature on the relatively high costs of change in the public sector compared to the private sector. The literature is inconclusive and contested. For example, The Myth of the Entrepreneurial State by McCloskey & Mingardi (2020) argues for the high costs (pp. 119-120), while The Entrepreneurial State by Mazzucato (2018) argues against them (p. 195).

[2] Common causes of AI project failure are further elaborated in a blog post by Thomas Dinsmore.

Ankhi Das quitting Facebook is, ironically, a symptom of an impaired democracy

Article by Raj Shekhar, Founder & Executive Director, AI Policy Exchange

Recently, Ankhi Das of Facebook India resigned from the company after nine long years of service. Earlier in August this year, The Wall Street Journal had reported her refusal to take down communally provocative statements against Muslims made by Bharatiya Janata Party’s T. Raja Singh from the social media platform. This was despite the fact that these statements had been flagged internally for running afoul of the platform’s rules on hateful speech. Reportedly, Das had justified her decision by citing her interest in saving “the company’s business prospects in the country” by not “punishing violations by politicians from Mr. Modi’s party”. Later in September, this had led several activist groups from across the world to demand that Facebook remove Das from her role in the company “should the audit or investigation reinforce the details of The Wall Street Journal”. While Das seems to have succumbed to this demand without much resistance, apparently admitting guilt for what she did, the grounds that compelled her exit from the company (her refusal to take down “hateful speech” as defined and mandated by Facebook in its Community Standards) are not that innocent either.

Let me explain.

Hate is one of the most primal human instincts. When expressed freely, it can be seditious (endorsing rebellion against the authority of the state), defamatory (tarnishing the reputation of an individual, group, or company), or socially disharmonizing (criticizing group identities and, in extreme cases, leading to mass violence). Hence, most jurisdictions have enacted legislation proscribing the various forms of hateful speech domestically. However, these proscriptions have not always been the most reasonable, which has made them vulnerable to abuse by malicious actors and allowed free speech and democracy to suffer. Proscriptions against seditious speech have ended up penalizing essential criticisms of bad government policies. Proscriptions against defamatory speech have enabled large corporations to clamp down on press reporting of their unfair business practices. Proscriptions against socially disharmonizing speech have bred a perversion of identity politics, often used as a justification for violence.

When hateful speech moved to social media, network effects amplified its harmful potential to an unprecedented degree, as each one of us on these platforms was freely lent a megaphone, without prejudice to our identities or motives. As fate would have it, many of us became the vicious carriers and hapless victims of hateful speech online. Traditional proscriptions against hateful speech offline began to seem inadequate to tackle hateful speech online. Panic followed, with proposals for stricter regulation of hateful speech online. If only it were that simple. Hateful speech is not a species of crime like murder, you see. It lacks a precise, practicable definition, which makes any attempt to proscribe it, online or offline, inevitably risk jeopardizing free speech and democracy. Hence, neither governments with their legislative ingenuity nor technology companies with their current technosolutionism can regulate it online without unreasonably restricting free speech and undermining democracy.

Nevertheless, mandates to regulate hateful speech—if at all—should come from the legislatures—as they would at least carry the popular will. The push from various quarters for social media companies to instate and implement privately conceived rules and technical measures to tackle hateful speech on their platforms sadly discounts the immense significance of this popular will—making unelected officials within social media companies the arbiters of free speech who simply cannot be held to the same standards of democratic or constitutional accountability as governments. This could seriously risk subversion of free speech on social media platforms as and when companies running these platforms deem it expedient to save their bottom line—pushing democracy to unapologetically succumb to market forces.

This already became evident earlier this year when, shortly after major American corporations joined the Stop Hate for Profit campaign and declared their decision to temporarily pull their advertising from Facebook, the CEO of the social media giant, Mark Zuckerberg, announced the company’s new policies to address corporate concerns over Facebook’s inability to tackle hateful speech and divisiveness in a politically polarized climate ahead of the United States presidential elections. Now, what if these new policies of Facebook, as imagined by its corporate bosses to effectively tackle hateful speech and divisiveness on its platform, actually end up subverting free speech and democracy? Who should be held accountable for that? Undoubtedly, it has to be us. We allowed Facebook to dictate our speech on its platform. We forgot that regardless of how pious Facebook’s intentions could be, they were always going to be driven primarily by profit motives, not a constitutional commitment to upholding democratic values; we, the people, elect our parliaments to do that.

While Facebook India’s former Public Policy Director could rightly have been reprimanded for both her unscrupulous political partisanship and her profit motives, the demands from activist groups seeking her removal from the company, even though fair in principle, ironically ended up penalizing her failure to meet an obligation that was set by Facebook’s corporate discretion, not by Indian parliamentary deliberation and vote.


Note: The views expressed in this article are those of the author and not of AI Policy Exchange or its members.

Digitalization and market power

Article by Thieß Petersen, Senior Advisor, Global Economic Dynamics Project, Bertelsmann Stiftung, Germany and Lecturer, European University Viadrina

The use of digital technologies increases market transparency for consumers and enlarges the relevant market. This strengthens their negotiating position vis-à-vis companies. However, there is also a risk that large technology companies will develop into global monopolies that exploit their market power at the expense of consumers.

Digitalization reduces the market power of local companies

On the one hand, digitalization can reduce the market power of individual providers. If only one supplier of electrical appliances operates a store within a thirty-mile radius in a rural region, she can charge mark-ups for her products and thus increase her profits. However, if there are internet-based search engines and online platforms, interested customers can search for offers of equal quality and take advantage of price differences.

Digital technologies also reduce the costs associated with delivering a product purchased online and handling payment. Consumers can therefore significantly expand the market that is relevant to them. For example, a local electronics retailer loses market power and has to adjust her price to the national or even worldwide market price.

A further weakening of the market power of local providers results from the possibility that private individuals offer goods and services on digital platforms. If they appear on the market as additional suppliers, this has an impact on the established commercial providers. For them, the additional supply represents competition that forces them to reduce their prices.

Digitalization can result in global monopolies

On the other hand, the characteristics of digital goods can have the effect that only one provider dominates a market in the long term, thus creating a monopoly. There are two central reasons for this.

(1) The development of digital infrastructures or new computer programs involves high fixed costs. Once a computer program exists, duplicating and distributing further copies involves very low additional costs. With this cost structure, increasing the production volume lowers the cost per unit (see the illustration after this list). This means that the supplier who produces the largest quantity has the lowest costs per unit and can therefore demand the lowest price. Therefore, only one supplier survives on the market.

(2) Digital platforms often rely on network effects. This means: the benefit for the consumer depends on the size of the network. The more participants there are in a social network or an online platform such as Airbnb, the more attractive it is for people to join the network. In the end, the company that has the most participants wins and displaces all other providers.
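Both mechanisms can be illustrated with simple, assumed numbers that are not taken from the article. With a fixed development cost $F$ and a small constant cost $c$ per additional copy, the average cost per unit is

\[ AC(q) = \frac{F}{q} + c, \qquad \text{e.g. } F = 10^6,\ c = 0.1: \quad AC(10^3) \approx 1000.1, \quad AC(10^6) = 1.1, \]

so the largest producer always has the lowest unit cost. For network effects, one common, admittedly crude formalization is Metcalfe’s law, which values a network by its number of possible pairwise connections:

\[ V(n) \propto \frac{n(n-1)}{2}, \qquad V(10) \propto 45, \quad V(100) \propto 4950, \]

i.e. a tenfold increase in participants yields roughly a hundredfold increase in value, which is why the largest network tends to pull further ahead.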

Disadvantages of monopolies for consumers

Monopolies have a number of economic disadvantages for consumers, other companies, and employees.

First, monopolists demand higher prices because they have no competition. For consumers, this means a loss of purchasing power, which reduces the opportunities for consumption.

Secondly, a monopolist also has market power as a buyer, with which it can lower the prices of inputs and wages. There are indications that the emergence of so-called superstar companies such as Google, Apple, Amazon, Facebook, and Uber is putting pressure on wages or on wage increases.

Thirdly, for a monopolist without competition, there is no need to improve the quality of its products or to lower their prices through technological progress. The central advantage of a market economy for consumers – an improved product offering at lower prices – is thus not realized.

Fourthly, monopoly profits imply high financial resources. This financial power can be used to buy up potential competitors at an early stage and thus restrict competition. The financial resources can also be used to acquire companies and enter entirely new markets that are not part of the actual business area.

Finally, economic power can become political power. Monopolies are important economic actors as employers and taxpayers. This increases the probability that political decision-makers will listen to these companies and their partial interests. This can result in political decisions that come at the expense of consumers, employees, and small businesses.

Conclusion and outlook

The increased use of digital technologies offers a number of economic advantages. These include, above all, an improved supply of goods and services. As digitalization progresses, people are being offered a greater variety of products, can consume a larger quantity of products, and have to pay lower prices. Greater market transparency strengthens the position of consumers vis-à-vis companies and reduces prices further.

The economic risks of digitalization include monopolization tendencies, which can come at the expense of consumers and employees. A key economic policy challenge is therefore to use competition policy instruments to prevent the exploitation of market power so that the price-reducing and welfare-enhancing effects of increasing digitalization can be realized.


Note: This is a shortened and revised version of the article “Digitalization of the Global Economy: Monopolies, Personalized Prices and Fake Valuations“, which was published on the project page of “Global Economic Dynamics” on October 30, 2020.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Oriana Medlicott, AI Ethics Expert & Consultant, CertNexus

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our twentieth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Oriana Medlicott below.

Oriana joined AI Policy Exchange as a technical/policy expert earlier this month. Oriana is Subject Matter Expert & Consultant, AI Ethics at CertNexus, where she helped develop an emerging tech certification. She is also Member, Advisory Board of The AI Ethics Journal (AIEJ) published by The AI Robotics Ethics Society. Oriana has previously worked as Head of Operations at Ethical Intelligence Associates, Limited.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Deepak Paramanand, Ex-Microsoft; AI Product Innovator, Hitachi

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our nineteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Deepak Paramanand below.

Deepak joined AI Policy Exchange as a technical/policy expert in late June this year. Deepak is a widely travelled technologist who has gone from BI to AI and everything in between. Across three continents, he has built AI products in B2B and B2C spaces, dealing with Natural Language Processing and real-time Computer Vision technology. He specializes in building ethical AI products. Currently, at Hitachi, Deepak is looking to put AI in trAIns. Previously at Microsoft, he launched puppets, Microsoft’s first foray into consumer-facing computer vision technology delivered on the Android ecosystem. The product won global accolades and customer appreciation across its 2M+ user base. Deepak’s work on evaluating fairness and bias in AI was called “pioneering” by Microsoft’s Ethics Committee and is being used as an example of doing AI right. A consummate public speaker, Deepak loves talking AI to professionals, graduates, and school children as he tries to decode AI for everyone. Deepak is looking to demystify and communicate AI to the world so that the world is included in the conversation on building and using AI.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Charlie Craine, Chief Technology Officer and Chief Data Officer, Meister Media Worldwide

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our eighteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Charlie Craine below.

Charlie joined AI Policy Exchange as a technical/policy expert late last month. Charlie is Chief Technology Officer and Chief Data Officer at Meister Media Worldwide.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.


Expert Vlogging Series 1.0: How to put AI ethics into practice? | Angela Kim, Head of Analytics & AI, Teachers Health Fund, Australia

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our seventeenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Angela Kim below.

Angela joined AI Policy Exchange as a technical/policy expert in late June this year. Angela is a data science and AI professional with over 15 years of experience in the banking and insurance industry. She is currently Head of Analytics and AI at Teachers Health Fund and Women in AI Education Ambassador for Australia. Angela has been counted amongst the Top 10 Analytics Leaders 2020 by IAPA – Institute of Analytics Professionals of Australia. She has been working with Insurance Australia Group, Macquarie Group, Microsoft and Salesforce to provide AI outreach programs for high school students and technology literacy programs for USYD, UNSW, UTS and Macquarie University female business students via 100 Girls, 100 Futures WaiMASTERCLASS. Angela is also the founder of est.ai; its first project is focused on building a product to detect bullying, drug usage, and mental health & wellbeing issues on social media. Angela also sits on the founding editorial board of Springer Nature’s AI and Ethics journal.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Tannya Jajal, Co-Leader, Lean In Circle – Dubai Women in Tech

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our sixteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Tannya Jajal below.

Tannya joined AI Policy Exchange as a technical/policy expert earlier this month. Tannya is a technology evangelist, optimist, and futurist. She is passionate about carving out an exciting and connected future based on science, data and technology. Her hope is to encourage current and future leaders to have intelligent discussions around the ethical and practical implications of exponential technologies on business, life and society at large. Tannya currently works as the Resource Manager at VMware, a leader in cloud infrastructure. She is a technology contributor at Forbes Middle East, co-founder of SheMadeIT, and co-leader of the Lean In Circle – Dubai Women in Tech.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Tom Moule, The Institute for Ethical AI in Education, University of Buckingham

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our fifteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Tom Moule below.

Tom joined AI Policy Exchange as a technical/policy expert earlier last month. Tom is Executive Lead at The Institute for Ethical AI in Education, which is based at The University of Buckingham. In this role, Tom leads the Institute’s operations as it develops a framework for the ethical use of AI in education. Tom has previously worked as Strategic Projects Lead, and Curriculum Lead for CENTURY Tech; as a programme manager for Team up for Social Mobility; and as a secondary science teacher, for which he trained via Teach First. Education is Tom’s passion. He is excited and motivated by the transformative benefits that AI could achieve for learners, and is keen to support the development of the structures and guidance needed to allow such promises to be fulfilled.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Anthony Rhem, CEO & Principal Consultant, A.J. Rhem & Associates, Inc.

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our fourteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Anthony Rhem below.

Anthony joined AI Policy Exchange as a technical/policy expert earlier last month. Anthony is an Information Systems professional with more than 30 years of experience. He is a published author and educator, presenting the application and theory of Software Engineering Methodologies, Knowledge Management, and Artificial Intelligence. Anthony also sits on the Founding Editorial Board of Springer Nature’s AI and Ethics Journal.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.