Why AI applications get stalled in the public sector

Article by Thomas Balbach, Senior Business Analyst, Public Sector, Sopra Steria, and Master of Science in Public Sector Innovation and eGovernance Candidate at KU Leuven, WWU Münster and TalTech.

This article is published as part of a joint blog series on AI in Government by AI Policy Exchange and N3GZ Young Professionals Network on Digital Government, the official, self-organized youth network of the German National E-Governance Competence Center (NEGZ). The series is aimed at featuring emerging policy perspectives from all across the globe on the use of AI applications in the public sector.

This article explains why many applications of artificial intelligence (AI) in public administration are ultimately shelved. To that end, it uses a real-world example of a discontinued AI application in the regional government administrations in Belgium.

More than a year ago, the author participated in a hackathon for applied AI in public administration with a cross-functional team of programmers and business analysts. The team created a proof of concept (POC) for a natural language processing (NLP) application that estimated which parts of a document contained which information types. The concept was brilliant, yet it was never commissioned: public sector-specific cost structures denied the POC a business case, notwithstanding a convincing use case.

Generally, to understand why AI applications in public administration often face an early death, one must shift from the perspective of a technology provider to that of a public administrator.

Technology provider view: the use case

The status quo: Whenever Belgian municipalities elect new mayors and city councillors, the election results are recorded in electronic documents. Municipal governments then send these documents to the regional administrations, which aggregate and publish them. This data is stored in the designated format (i.e. Open Standards for Linked Organisations – OSLO) in the regional database. This database can be queried: Did the same person win electoral offices in several municipalities? What other patterns might one observe?
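To make this kind of query concrete, here is a minimal sketch against a hypothetical relational mirror of such a database. The table and column names are invented for illustration; the real OSLO registry is a linked-data store, where one would more likely use SPARQL.

```python
import sqlite3

# Hypothetical, simplified relational mirror of the OSLO data;
# table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mandates (person TEXT, municipality TEXT, office TEXT)")
conn.executemany(
    "INSERT INTO mandates VALUES (?, ?, ?)",
    [
        ("Jan Jansen", "Gent", "Mayor"),
        ("Jan Jansen", "Aalst", "Councillor"),  # same person, second municipality
        ("An Peeters", "Leuven", "Councillor"),
    ],
)

# Did the same person win electoral offices in several municipalities?
rows = conn.execute(
    """
    SELECT person, COUNT(DISTINCT municipality) AS n_municipalities
    FROM mandates
    GROUP BY person
    HAVING n_municipalities > 1
    """
).fetchall()
print(rows)  # [('Jan Jansen', 2)]
```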

As a technology provider, we saw a clear use case for AI facilitating the publication of election results in Belgium. On the face of it, it promised a significant improvement to the workflow with comparatively little effort. Currently, public administrators in the regional government manually copy and paste the names of elected mayors and councillors from each municipal document, a truly cumbersome process. There are 581 municipalities in Belgium and three regions. Consequently, reading all those reports adds up to a few weeks’ workload for one or even several full-time employees. An AI application could speed up this process by excerpting the names, e.g. “Jan Jansen”, and labelling them with the right information type, e.g. “Mayor”. The AI application speeds up the data entry, a clear benefit.
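The POC itself is not public, so the following is only a rough sketch of the general approach, assuming spaCy’s off-the-shelf Dutch pipeline (nl_core_news_sm) and an invented keyword-to-office mapping; a production system would need proper training data and French-language support as well.

```python
import spacy

# Rough sketch only, not the team's actual POC. Assumes the Dutch pipeline:
#   python -m spacy download nl_core_news_sm
nlp = spacy.load("nl_core_news_sm")

# Invented keyword-to-information-type mapping for illustration.
OFFICE_KEYWORDS = {"burgemeester": "Mayor", "gemeenteraadslid": "Councillor"}

def extract_mandates(text):
    """Return (name, information type) pairs found in a document."""
    results = []
    for ent in nlp(text).ents:
        if ent.label_ != "PER":  # person entities in spaCy's Dutch models
            continue
        # Naive heuristic: label the name with any office keyword
        # that appears in the same sentence.
        sentence = ent.sent.text.lower()
        for keyword, office in OFFICE_KEYWORDS.items():
            if keyword in sentence:
                results.append((ent.text, office))
    return results

print(extract_mandates("Jan Jansen werd verkozen tot burgemeester."))
# Expected output: [('Jan Jansen', 'Mayor')]
```

Even this naive heuristic illustrates the two steps the article describes: excerpting the names and labelling them with an information type.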

Efficiency lies in the eye of the beholder

Does it make sense to automate the extraction of names from the municipal assembly notes? For the department of the regional administration which aggregates the election results, automating the task saves many person-months. Automating it means less tedious work for the regional government’s employees and freeing up resources for more pressing matters. Thus, from the perspective of the responsible regional government authority, the business case seems convincing.

However, from the perspective of the overall region, the business case is a lot cloudier. Looking beyond the viewpoint of a single authority or department, the AI-powered data entry appears inefficient. Both the existing solution and the proposed AI solution are operationally suboptimal.

In this situation, the highest efficiency gains can be achieved by reworking the administrative procedures so that municipal employees enter the election results directly into a single digital interface. This would reduce the number of interfaces and transmission errors, and produce the same result with a technologically less demanding process.

However, here the costs of change come into play. The regional government would have to pass a new law, coordinate with other regions for similar regulations, and convince municipal governments of the new approach. Thus, the regional government would have to step into the conflict-prone territory of inter-agency, inter-region, multi-level governance, spending time and political capital.

Many ways to skin a cat: the costs of public sector innovation

So what alternatives are there for public managers in the regional government? Their options are to:

(A) Do nothing and retain the copy-and-paste process;

(B) Use AI to excerpt the names from the municipal documents to populate a regional database;

(C) Reform the administrative processes and ask municipal governments to enter election results in an online form to populate a regional database.

The preceding section showed that option C delivers the highest efficiency gains. Therefore, we conclude that option C achieves the highest cost reduction (savings) while option A achieves the lowest cost reduction. This means that doing nothing (option A) incurs the highest opportunity costs, i.e. potential savings that are not realized.

Put like that, it seems irrational to forego the AI solution or the procedural reform. But that is what the public managers ultimately did. They chose option A. The following paragraphs will explain why it was rational to do nothing and keep the status quo.

Our assumptions about the potential cost reductions for each option were misguided: we did not realize that the costs of change are generally much higher in the public sector than in the private sector, as experience shows.[1] Factoring in change-related costs, the potential savings of options B and C are much lower than when looking at the efficiency gains alone. The costs for each option are:

(A) Costs of person-days in the regional government;

(B) Costs of developing and implementing the AI solution technically, plus costs of changing processes in the regional government;

(C) Costs of changing processes in regional and municipal administrations, costs of legislative reform at the federal and/or regional level, costs of developing and implementing the single digital interface, plus additional person-days at the municipal level to enter election results into the interface.

Weighing the options and their respective costs of change, the regional government rationally decides to choose option A, i.e. to do nothing and keep the status quo.
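To make the weighing concrete, here is a minimal sketch with purely hypothetical figures; only the structure of the comparison is meant to carry over.

```python
# Purely hypothetical figures; only the structure of the comparison matters.
options = {
    # option: (annual efficiency gain in EUR, one-off cost of change in EUR)
    "A: keep copy-and-paste": (0, 0),
    "B: AI extraction": (40_000, 150_000),       # IT build plus internal change
    "C: procedural reform": (120_000, 600_000),  # legislation, coordination, interface
}

horizon_years = 3  # assumed planning horizon
for name, (gain, change_cost) in options.items():
    net = gain * horizon_years - change_cost
    print(f"{name}: net benefit over {horizon_years} years = {net:+,} EUR")

# With these numbers, A nets 0, B nets -30,000 and C nets -240,000.
# Once change costs this high (and the risks below, not yet priced in)
# enter the calculation, doing nothing can come out ahead.
```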

There’s more to it: risks particular to the public sector

Beyond costs, there are risks unique to the public sector. Invisible to the untrained eye, they weigh heavily on the public managers’ decisions to select among the above three options of implementation. With regard to the proposed AI solution, there are three noteworthy risks:

(R1) Larger scale of errors

For options B and C, a central solution would apply to all municipalities. Therefore, errors in the system would result in problems for all municipalities of the region. In contrast, the manual processes of option A are more robust.

(R2) No redundancies

The regional government is the sole provider of consolidated municipal electoral results in the region. If the process of aggregating election results breaks down for some reason, there is no alternative source. This risk cannot be mitigated.

(R3) No room for experimentation

If a factory applies AI to its production process, it can experiment. If it must discard 5% of its output due to experiments gone awry, it can still turn a profit on the optimised 95%. This does not hold for public administrations in general, nor for our election results in particular. If the proposed AI solution in option B excerpts the wrong names from the municipal files at an error rate of 5%, that means wrongly published results for roughly 29 of the 581 municipalities; the public, news agencies, and municipalities would reject the entire solution and blame the regional government for it.

Again, considering the risks as well as the costs of change, public managers will rationally choose option A.

Bogus rationalizations for the public sector retaining the status quo

It is tempting to explain this preference for doing nothing and keeping the status quo with common perceptions about workers in the public sector: they are risk-averse, unwilling to take responsibility, and lazy. It’s as simple as that. Many TV series have proven the point (and very entertainingly so): Yes Minister, Parks and Recreation, and Der Reale Irrsinn.

Some rationalize the preference by referring to the budget-maximizing model of behaviour for public agencies, which holds that public managers simply have an inherent drive for more budget, staff, and power. From this perspective, the reasoning runs like this:

Through an AI application, some person-months could be removed from a task. To private sector managers, this is attractive, as it lowers costs while retaining the same output, resulting in higher profits. To public sector managers, however, it is unattractive: they want as many subordinates as possible, and subordinates kept busy with familiar routines rather than new ways of working are easier to control.

Instead of pointlessly indulging in such bogus rationalizations for public managers’ proclivity to retain the status quo, we need to understand the absence of change as a rational decision. The sections above explained that it is not the managers’ motivations but the risks and cost structures that cause them to keep the status quo. Seasoned public managers are aware of the change-related costs and risks that are particular to the public sector; technology providers and startups often are not. Thus, what is a rational decision for one seems like the product of bad character to the other.

UCNOBC: use case, no business case

Option A (doing nothing) is a case of the common Use Case No Business Case (UCNOBC) phenomenon, which simply means that there is a use case for an AI application, but no business case for it. Generally, the lack of a solid business case is among the most common reasons for AI projects to fail.[2]

In the public sector, the business case is determined through cost-benefit analyses. Typical benefits are more effective policy provision or more efficient service delivery.

The UCNOBC phenomenon is frequent in the public sector with regard to AI applications. In Table 1 below, the first column lists general types of use cases, the second their associated cost categories, and the third examples with a viable business case; the fourth column contains examples of the UCNOBC phenomenon.

| Type of use case | Relevant cost categories | Example where cost of change < cost of no-change | Example where cost of change > cost of no-change |
| --- | --- | --- | --- |
| Process automation | IT solution development, IT change management, administrative process change management, policy-specific legislative reform, inter-agency collaboration, intra-agency or intra-administration collaboration | chatbots | facial recognition system at Berlin Südkreuz train station |
| Process optimization | Increase of workload, IT solution development, IT change management, administrative process change management, intra-agency or intra-administration collaboration | predictive policing in North Rhine-Westphalia; predictive form filling; document completeness checks | AI assessment of grant applications |
| Targeting | Increase of workload, IT solution development, IT change management, administrative process change management, intra-agency or intra-administration collaboration | AMS (unemployment agency) algorithm in Austria | sentiment analysis per public service used |

Table 1

This perspective helps to explain why some of the projects in the examples above were delayed or terminated. For example, the facial recognition pilot project at Berlin Südkreuz train station, run by the German Federal Police, was technologically inexpensive but did not achieve a viable accuracy rate: erroneously frisking that many persons was not feasible for the authorities and intolerable to citizens.
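Some hypothetical base-rate arithmetic (the figures are assumed, not the pilot’s actual numbers) shows why even a seemingly small false positive rate is infeasible at the scale of a busy station:

```python
# Assumed figures, not the pilot's actual numbers.
passengers_per_day = 90_000    # hypothetical daily traffic at a large station
false_positive_rate = 0.001    # hypothetical: 0.1% of passers-by wrongly flagged

wrongly_flagged = passengers_per_day * false_positive_rate
print(f"~{wrongly_flagged:.0f} innocent passengers flagged per day")  # ~90
```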

Pilot flagship projects and hackathon initiatives tend to foster disappointment among their participants and the public because they result in many UCNOBC cases. Here, public managers might want to estimate costs of change and no-change beforehand. At hackathons, these estimates and business cases could be made visible to participants, so that their use cases at least have a fighting chance.

Advice for AI enthusiasts

Much heartbreak can be avoided by scrutinising the costs of change related to AI applications in public administrations. As Table 1 shows, there are AI applications with convincing business cases. We should keep the UCNOBC phenomenon in mind when the next hot new AI solution crosses our path. It takes deeper knowledge of the public sector to separate the wheat from the chaff.

Endnotes and References

[1] There is academic literature on the relatively high costs of change in the public sector compared to the private sector; the literature is inconclusive and contested. For example, The Myth of the Entrepreneurial State by McCloskey & Mingardi (2020) argues for the high costs (pp. 119-120), while The Entrepreneurial State by Mazzucato (2018) argues against them (p. 195).

[2] Common causes of AI project failure are further elaborated in a blog post by Thomas Dinsmore.

Ankhi Das quitting Facebook ironically symptomizes an impaired democracy

Article by Raj Shekhar, Founder & Executive Director, AI Policy Exchange

Recently, Ankhi Das of Facebook India resigned from the company after nine long years of service. Earlier in August this year, The Wall Street Journal had reported her refusal to take down communally provocative statements against Muslims made by the Bharatiya Janata Party’s T. Raja Singh from the social media platform. This was despite the fact that these statements had been flagged internally for running afoul of the platform’s rules on hateful speech. Reportedly, for her decision, Das had cited her interest in saving “the company’s business prospects in the country” by not “punishing violations by politicians from Mr. Modi’s party”. Later in September, this led several activist groups from across the world to demand that Facebook remove Das from her role in the company “should the audit or investigation reinforce the details of The Wall Street Journal”. While Das seemed to have succumbed to this demand without much resistance, apparently admitting guilt for what she did, the grounds that compelled her exit from the company, i.e., her refusal to take down “hateful speech” as defined and mandated by Facebook in its Community Standards, are not that innocent either.

Let me explain.

Hate is one of the most primal human instincts. When expressed freely, it can be seditious (endorsing rebellion against the authority of the state), defamatory (tarnishing the reputation of an individual, group, or company), or socially disharmonizing (criticizing group identities and, in extreme cases, leading to mass violence). Hence, most jurisdictions have enacted legislation proscribing the various forms of hateful speech domestically. However, these proscriptions have not always been the most reasonable, making them vulnerable to abuse by malicious actors and allowing free speech and democracy to suffer. Proscriptions against seditious speech have ended up penalizing essential criticism of bad government policies. Proscriptions against defamatory speech have enabled large corporations to clamp down on press reporting of their unfair business practices. Proscriptions against socially disharmonizing speech have bred the ground for the perversion of identity politics, which is often invoked as a justification for violence.

When hateful speech moved to social media, network effects amplified its harmful potential to an unprecedented order: each one of us on these platforms was freely lent a megaphone, without prejudice to our identities or motives. As fate would have it, many of us became the vicious carriers and hapless victims of hateful speech online. Traditional proscriptions against hateful speech offline began to seem inadequate for tackling hateful speech online. Panic followed, with proposals for stricter regulation of hateful speech online. If only it were that simple. Hateful speech is not a species of crime like murder, you see. It lacks a precise, practicable definition, which makes any attempt to proscribe it, online or offline, inevitably risk jeopardizing free speech and democracy. Hence, neither governments with their legislative ingenuity nor technology companies with their current technosolutionism can regulate it online without unreasonably restricting free speech and undermining democracy.

Nevertheless, mandates to regulate hateful speech, if they are to exist at all, should come from legislatures, as they would at least carry the popular will. The push from various quarters for social media companies to instate and implement privately conceived rules and technical measures to tackle hateful speech on their platforms sadly discounts the immense significance of this popular will, making unelected officials within social media companies the arbiters of free speech, even though they simply cannot be held to the same standards of democratic or constitutional accountability as governments. This could seriously risk the subversion of free speech on social media platforms whenever the companies running them deem it expedient to save their bottom line, pushing democracy to succumb unapologetically to market forces.

This already became evident earlier this year: shortly after major American corporations joined the Stop Hate for Profit campaign and declared their decision to temporarily pull their advertising from Facebook, the company’s CEO, Mark Zuckerberg, announced new policies to address corporate concerns over Facebook’s inability to tackle hateful speech and divisiveness in a politically polarized climate ahead of the United States presidential elections. Now, what if these new policies, as imagined by Facebook’s corporate bosses to effectively tackle hateful speech and divisiveness on its platform, actually end up subverting free speech and democracy? Who should be held accountable for that? Undoubtedly, it has to be us. We allowed Facebook to dictate our speech on its platform. We forgot that however pious Facebook’s intentions might be, they would always be driven primarily by profit motives, not by a constitutional commitment to upholding democratic values; we, the people, elect our parliaments to do that.

While Facebook India’s former Public Policy Director could rightly have been reprimanded both for her unscrupulous political partisanship and for her profit motives, the demands from activist groups seeking her removal from the company, even though fair in principle, ironically ended up penalizing her failure to meet an obligation set by Facebook’s corporate discretion, not by Indian parliamentary deliberation and vote.


Note: The views expressed in this article are those of the author and not of AI Policy Exchange or its members.

9 ways to put AI ethics into practice

This document presents a collation of expert insights on creating and mandating ethical AI devices & applications — contributed by members of the AI Policy Exchange Editorial Network under the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? — for public appraisal.

Edited by Raj Shekhar and Antara Vats

Foreword

The many iterations of artificial intelligence (AI) ethics coming from private and public enterprises, regional and international organizations, and civil society groups, globally, have been criticized for their impracticable propositions for creating and mandating ethical AI, among other things. These propositions, broadly hinging on fairness, accountability, transparency, and environmental sustainability principles, have been deemed impracticable as they are found to be loaded with elusive, abstract concepts and paralyzed by internal conflicts. For instance, the need for high-quality data to enable AI systems to produce fairer or more efficient outcomes comes into conflict with the object of securing individual data privacy. Moreover, individual privacy as a value is likely to get emphasized more in individualist cultures than in collectivist cultures.

However, in all fairness, even these impracticable propositions for creating and mandating ethical AI have laid some groundwork for creating actionable requirements for ethical AI and framing rules and regulations that would eventually mandate these requirements by prioritizing human safety and well-being. Indeed, international, regional, and national efforts to come up with practicable propositions for creating and mandating ethical AI have already begun: the Institute of Electrical and Electronics Engineers (IEEE) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems to develop standards and certifications for AI systems; the Organization for Economic Cooperation and Development (OECD) Policy Observatory published a set of concrete policy recommendations for national governments to develop and deploy AI technologies ethically within their jurisdictions; and Singapore released its Model AI Governance Framework, providing private companies with practical guidance on deploying ethical AI.

Yet, even the most practicable propositions for creating and mandating ethical AI would remain unethical so long as they remain elusive enough to discourage their public appraisal.

Hence, earlier in June, 2020, we launched the AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? as our small step towards facilitating public appraisal of various practicable propositions for creating and mandating ethical AI as they were earnestly put forth by several members of the AI Policy Exchange Editorial Network in the form of simple, insightful vlogs. These propositions for creating and mandating ethical AI ended up representing a diverse range of voices from amongst industry professionals, public policy researchers, and leaders of civil society groups within the AI community. In this document, we have attempted to compare and analyze these propositions to generate wholesome insights on how to create and mandate ethical AI for public appraisal.


Our Expert Contributors

Abhishek Gupta, Founder, Montreal AI Ethics Institute; Machine Learning Engineer, Microsoft; Faculty Associate, Frankfurt Big Data Lab, Goethe University
Ana Chubinidze, Former Research Fellow, Regional Academy on the United Nations; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Anand Tamboli, Chief Executive Officer, Anand Tamboli & Co.; Principal Advisor, AI Australia; Author, Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating, and Quantifying Risks (Apress)
Angela Kim, Head of Analytics and AI at Teachers Health Fund, Australia; Women in AI Education Ambassador for Australia; Founder, est.ai; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Charlie Craine, Chief Technology Officer and Chief Data Officer, Meister Media Worldwide
Chhavi Chauhan, Ethics Advisor, Alliance for Artificial Intelligence in Healthcare, Washington, D.C.
Deepak Paramanand, Ex-Microsoft; AI Product Innovator, Hitachi
Diogo Duarte, Legal Consultant and Researcher, Data Protection
Felipe Castro Quiles, Co-founder & CEO, Emerging Rule and GENIA Latinoamérica; Fellow, Singularity University, Silicon Valley; Member, NVIDIA AI Inception; Member, Forbes AI Executive Advisory Board
Ivana Bartoletti, Technical Director, Privacy, Deloitte; Co-founder, Women Leading in AI Network; Author, An Artificial Revolution: On Power, Politics and AI (Mood Indigo)
James Brusseau, Director, AI Ethics Site, Pace University, New York
Merve Hickok, Founder, aiEthicist.org; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Nigel Willson, Former European CTO, Professional Services, Microsoft; Founder, awakenAI and Co-Founder, We and AI
Oriana Medlicott, Subject Matter Expert & Consultant, AI Ethics, CertNexus; Member, Advisory Board, The AI Ethics Journal (AIEJ), The Artificial Intelligence Robotics Ethics Society (AIRES); Former Head of Operations, Ethical Intelligence Associates, Limited
Renée Cummings, Founder & CEO, Urban AI, LLC; East Coast Lead, Women in AI Ethics (WAIE); Community Scholar, Columbia University
Stephen Scott Watson, Managing Director, Watson Hepple Consulting Ltd.; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Tannya Jajal, Technology Contributor, Forbes Middle East; Co-Leader, Dubai WomeninTech Lean In Circle
Tom Moule, Executive Lead, The Institute for Ethical AI in Education, University of Buckingham
Tony Rhem, CEO / Principal Consultant, A.J. Rhem & Associates, Inc.; Member, Founding Editorial Board – AI and Ethics Journal, Springer Nature
Virginie Martins de Nobrega, AI Ethics & Public Policy Expert, Creative Resolution


9 Ways to Create and Mandate Ethical AI 

Let’s contextualize

AI devices & applications should be deemed ethical if they tangibly respond to the human needs and aspirations specific to the social, economic, political, and cultural contexts in which they are developed and deployed.

Explanation
In multilingual countries like India, an ethical AI use case could enable translation of classroom lectures delivered or recorded in one language (say, English) into several vernacular languages for cost-effective, equitable dissemination of educational content amongst learners residing in multiple regions of the country.


Let’s teach

Ethical AI is significantly predicated on creating responsible users of AI devices & applications, for which systematic and concerted measures need to be undertaken to improve AI literacy amongst the general population. Also, developers of AI systems should be systematically trained to appreciate the ethical risks associated with the use of AI devices & applications they develop and empowered with tools to reasonably foresee and adequately circumvent those risks.

Explanation
AI and ethical AI as disciplines should enter the school course curricula to create the next generation of responsible users of AI devices & applications. Also, AI companies should provide mandatory training to their developers that is focused on inculcating an interdisciplinary understanding of the purpose, use, and impact of AI technologies they develop and deploy.


Let’s concretize

Different use cases of AI come with different capabilities posing different risks and promising different benefits to individuals and society. Hence, technical safety standards as well as rules and regulations mandating these standards for different use cases of AI would be different.

Explanation
Both targeted advertising online and self-driving cars are use cases of AI. However, unethical development and deployment of targeted advertising online could routinize invasions of individual privacy, whereas that of self-driving cars could lead to road accidents, jeopardizing human life and limb. Hence, the technical safety standards for targeted advertising online, and the rules and regulations mandating them, should instate safeguards that protect individual privacy, whereas those for self-driving cars should instate accountability measures to tackle negligence by their manufacturers.


Let’s update

Existing ethical and regulatory frameworks should be adapted to accommodate the emerging social, economic, and political concerns around the adoption and use of AI devices & applications.

Explanation
Electoral laws must be suitably amended to delineate rules and regulations addressing the growing political concerns around unethical microtargeted political advertising in election campaigns, which compromises voters’ informational autonomy by gathering and using voter data without informed consent and corrupts voter judgment by deploying fake news.


Let’s incentivize

Mandates to develop and deploy ethical AI devices & applications should enable AI companies to not only save costs of sanctions resulting from non-compliance but also gain occasional or routine financial rewards for compliance, creating market incentives for AI companies to proactively comply.

Explanation
Financial regulatory institutions could conceptualize and mandate an investment grading scale for rating AI devices & applications based on certain measurable criteria determining their ethical value. This grading scale would enable investors to credibly assess the risks associated with their financial investments in AI devices & applications and, in turn, create a market incentive for AI companies to seek financial investments by proactively developing and deploying AI devices & applications that rank high on the scale.


Let’s diversify

To counter the encoding of undesirable human biases in AI devices & applications, the use of which inevitably risks unfair and adverse treatment of certain individuals and groups, AI companies should be mandated to not only incorporate social diversity in their AI training datasets, but also to employ a socially diverse, multidisciplinary human resource pool that is tasked with developing and deploying their AI devices & applications.

Explanation
AI-powered recruitment tools could be corrected for results reflecting gender discrimination, which lead to the rejection of more qualified female applicants in favour of less qualified male applicants for the same job, by programming the underlying software to ignore factors like gender. Such requirements are likely to become an ethical imperative in AI companies, technically mandated via AI ethics checklists configured through collaborative, consultative engagement between developers and ethicists, with satisfactory representation of women.
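As a minimal sketch of what such a checklist item could look like in code (the dataset and column names are invented for illustration), the protected attribute is simply excluded from the model’s inputs; note that in practice, dropping a column does not by itself remove proxy variables that correlate with gender, which is one reason the checklist needs ethicists as well as developers.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented toy data; column names are illustrative only.
df = pd.DataFrame({
    "years_experience": [1, 4, 7, 2, 9, 5],
    "test_score":       [55, 70, 88, 60, 92, 75],
    "gender":           ["f", "m", "f", "m", "f", "m"],
    "hired":            [0, 1, 1, 0, 1, 1],
})

# The checklist item in code: exclude the protected attribute (and the
# target) from the model's inputs before training.
features = df.drop(columns=["gender", "hired"])
model = LogisticRegression().fit(features, df["hired"])
```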


Let’s account

Ethical AI principles and guidelines formulated by AI companies as bye-laws to self-regulate are laudable, but these should never discourage external regulation of AI companies and their activities via binding legislative instruments; so that when something goes wrong, penalizing the wrongdoer and compensating the victim are backed by legal certainty.

Explanation
Commitments made by social media companies to securing and maintaining the data privacy of their platform users should become actionable, i.e., the data privacy rights of these users should become the subject of a legislative enactment enabling a user whose data privacy rights have been infringed to take legal action with an appreciable degree of certainty of grievance redressal.


Let’s predict

The rapidly emerging use cases of AI are so transformative in their nature (like few other technologies from the past) that they leave little or no time to formulate regulatory responses that adequately tackle the risks they pose to society. If these regulatory responses are not formulated well and in time, AI as an industry could prove to be more harmful than beneficial for humanity. Hence, governments should consult and collaborate with AI companies, public policy researchers, and the public to systematically anticipate the risks and opportunities posed by potential AI use cases and initiate work towards the regulatory reforms that the development and deployment of these use cases would demand, with a view to minimizing their harmful impact on society.

Explanation
Governments could build internal capabilities, technical and human, for forecasting AI futures by drawing on the best methods and techniques of technological forecasting currently in place, such as regulatory sandboxing, in which live experiments are conducted in a controlled environment under a regulator’s supervision to safely test the efficacy and/or impact of innovative software products or business models.


Let’s engage

Good public policy making is almost always led by effective engagement with all interested stakeholders. Hence, ethical AI devices & applications must almost always be a product of stakeholder engagement, and for that engagement to be effective, its exact modalities should be decided by the ethical AI use case that is sought to be developed and deployed.

Explanation
Before introducing ethical AI devices & applications to improve learning outcomes for pupils in schools, their need and demand should be properly assessed by effectively engaging with pupils, teachers, and school administrators, amongst others, via survey tools and methods like questionnaires and focus group discussions.


Readers are encouraged to send us their feedback at editor@aipolicyexchange.org, and share the document as widely as possible. Here’s to creating an AI-literate society, together.

Digitalization and market power

Article by Thieß Petersen, Senior Advisor, Global Economic Dynamics Project, Bertelsmann Stiftung, Germany and Lecturer, European University Viadrina

The use of digital technologies increases market transparency for consumers and enlarges the relevant market. This strengthens consumers’ negotiating position vis-à-vis companies. However, there is also the risk that large technology companies will develop into global monopolies that exploit their market power at the expense of consumers.

Digitalization reduces the market power of local companies

On the one hand, digitalization can reduce the market power of individual providers. If only one supplier of electrical appliances operates a store within a thirty-mile radius in a rural region, she can charge mark-ups on her products and thus increase her profits. However, if there are internet-based search engines and online platforms, interested customers can search for offers of equal quality and take advantage of price differences.

Digital technologies also reduce the costs associated with delivering a product purchased online and handling payment. Consumers can therefore significantly expand the market that is relevant to them. For example, a local electronics retailer loses market power and has to adjust her price to the national or even worldwide market price.

A further weakening of the market power of local providers results from the possibility that private individuals offer goods and services on digital platforms. If they appear on the market as additional suppliers, this has an impact on the established commercial providers. For them, the additional supply represents competition that forces them to reduce their prices.

Digitalization can result in global monopolies

On the other hand, the characteristics of digital goods can have the effect that only one provider dominates a market in the long-term, thus creating a monopoly. There are two central reasons for this.

(1) The development of digital infrastructures or new computer programs involves high fixed costs. Once the computer program exists, duplicating and distributing further copies involves very low additional costs. With this cost structure, an increase in production volume results in decreasing costs per unit. This means that the supplier who produces the largest quantity has the lowest costs per unit and can therefore demand the lowest price, as the numerical sketch below illustrates. Therefore, only one supplier survives on the market.

(2) Digital platforms often rely on network effects, meaning that the benefit to the consumer depends on the size of the network. The more participants there are in a social network or an online platform such as Airbnb, the more attractive it is for people to join the network. In the end, the company that has the most participants wins and displaces all other providers.
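Both mechanisms can be made concrete with a short numerical sketch; all figures are assumed purely for illustration.

```python
# Assumed figures throughout; illustration only.

# (1) Declining unit costs: fixed development cost F, near-zero marginal cost c.
F, c = 1_000_000, 0.10  # hypothetical software product
for q in (10_000, 100_000, 1_000_000):
    print(f"{q:>9} copies -> average cost per copy: {F / q + c:.2f}")
# 10,000 copies cost 100.10 each; 1,000,000 copies cost just 1.10 each,
# so the largest producer can undercut every rival.

# (2) Network effects: under a Metcalfe-style assumption, possible pairwise
# connections (a rough proxy for a network's value) grow roughly with n^2.
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users -> {n * (n - 1) // 2:,} possible connections")
# A network ten times larger offers roughly a hundred times the connections,
# so users gravitate to the largest platform and smaller rivals are displaced.
```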

Disadvantages of monopolies for consumers

Monopolies have a number of economic disadvantages for consumers, other companies, and employees.

First, monopolists demand higher prices because they have no competition. For consumers, this means a loss of purchasing power, which reduces the opportunities for consumption.

Secondly, a monopolist also has market power as a buyer, with which it can lower the prices of inputs and wages. There are indications that the emergence of so-called superstar companies such as Google, Apple, Amazon, Facebook, and Uber is putting pressure on wages or on wage increases.

Thirdly, for a monopolist without competition, there is no need to improve the quality of its products and lower the prices of its products through technological progress. The central advantage of a market economy for consumers – an improved product offering at lower prices – is thus not realized.

Fourthly, monopoly profits imply high financial resources. This financial power can be used to buy up potential competitors at an early stage and thus restrict competition. The financial resources can also be used to acquire companies and enter entirely new markets that are not part of the actual business area.

Finally, economic power can become political power. Monopolies are important economic actors as employers and taxpayers. This increases the probability that political decision-makers will listen to these companies and their partial interests, which can result in political decisions that come at the expense of consumers, employees, and small businesses.

Conclusion and outlook

The increased use of digital technologies offers a number of economic advantages. These include, above all, an improved supply of goods and services. As digitalization progresses, people are being offered a greater variety of products, can consume a larger quantity of products, and have to pay lower prices. Greater market transparency strengthens the position of consumers vis-à-vis companies and reduces prices further.

The economic risks of digitalization include monopolization tendencies, which can come at the expense of consumers and employees. A key economic policy challenge is therefore to use competition policy instruments to prevent the exploitation of market power, so that the price-reducing and welfare-enhancing effects of increasing digitalization can be realized.


Note: This is a shortened and revised version of the article “Digitalization of the Global Economy: Monopolies, Personalized Prices and Fake Valuations”, which was published on the project page of “Global Economic Dynamics” on October 30, 2020.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Oriana Medlicott, AI Ethics Expert & Consultant, CertNexus

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our twentieth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Oriana Medlicott below.

Oriana joined AI Policy Exchange as a technical/policy expert earlier this month. Oriana is Subject Matter Expert & Consultant, AI Ethics at CertNexus, where she helped develop an emerging tech certification. She is also Member, Advisory Board of The AI Ethics Journal (AIEJ) published by The AI Robotics Ethics Society. Oriana has previously worked as Head of Operations at Ethical Intelligence Associates, Limited.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Deepak Paramanand, Ex-Microsoft; AI Product Innovator, Hitachi

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our nineteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Deepak Paramanand below.

Deepak joined AI Policy Exchange as a technical/policy expert later in June this year. Deepak is a widely travelled technologist who has gone from BI to AI and everything in between. Across 3 continents, he has built AI products in B2B and B2C spaces, dealing with Natural Language Processing and real-time Computer Vision technology. He specializes in building ethical AI products. Currently, at Hitachi, Deepak is looking to put AI in trAIns. Previously, at Microsoft, he launched puppets, Microsoft’s first foray into consumer-facing computer vision technology delivered on the Android ecosystem. The product won global accolades and customer appreciation across its 2M+ user base. Deepak’s work on evaluating fairness and bias in AI was called “pioneering” by Microsoft’s Ethics Committee and is being used as an example of doing AI right. A consummate public speaker, Deepak loves talking AI to professionals, graduates, and school children as he tries to decode AI for everyone. Deepak is looking to demystify and communicate AI to the world so that the world is included in the conversation about building and using AI.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Charlie Craine, Chief Technology Officer and Chief Data Officer, Meister Media Worldwide

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our eighteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Charlie Craine below.

Charlie joined AI Policy Exchange as a technical/policy expert later last month. Charlie is Chief Technology Officer and Chief Data Officer at Meister Media Worldwide.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.


Expert Vlogging Series 1.0: How to put AI ethics into practice? | Angela Kim, Head of Analytics & AI, Teachers Health Fund, Australia

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our seventeenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Angela Kim below.

Angela joined AI Policy Exchange as a technical/policy expert later in June this year. Angela is a data science and AI professional with over 15 years of experience in the banking and insurance industry. She is currently Head of Analytics and AI at Teachers Health Fund and Women in AI Education Ambassador for Australia. Angela has been counted amongst the Top 10 Analytics Leaders 2020 from IAPA – Institute of Analytics Professionals of Australia. She has been working with Insurance Australia Group, Macquarie Group, Microsoft and Salesforce to provide AI outreach programs for high school students and technology literacy programs for USYD, UNSW, UTS and Macquarie University female business students via 100 Girls, 100 Futures WaiMASTERCLASS. Angela is also the founder of est.ai; its first project is focused on building a product to detect bullying, drug usage, and mental health & wellbeing on social media. Angela also sits on the founding editorial board of Springer Nature’s AI and Ethics journal.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Tannya Jajal, Co-Leader, Lean In Circle – Dubai Women in Tech

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our sixteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Tannya Jajal below.

Tannya joined AI Policy Exchange as a technical/policy expert earlier this month. Tannya is a technology evangelist, optimist, and futurist. She is passionate about carving out an exciting and connected future based on science, data and technology. Her hope is to encourage current and future leaders to have intelligent discussions around the ethical and practical implications of exponential technologies on business, life and society at large. Tannya currently works as the Resource Manager at VMware, a leader in cloud infrastructure. She is a technology contributor at Forbes Middle East, co-founder of SheMadeIT, and co-leader of the Lean In Circle – Dubai Women in Tech.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.

Expert Vlogging Series 1.0: How to put AI ethics into practice? | Tom Moule, The Institute for Ethical AI in Education, University of Buckingham

This is the AI Policy Exchange Expert Vlogging Series featuring vlogs from our technical/policy experts addressing some of the hardest questions confronting the governance of artificially intelligent technologies.

In each AI Policy Exchange Expert Vlogging Series, we ask all our technical/policy experts a common question. Experts respond to the question through short, illuminating vlogs sent to us. We then upload these vlogs to our YouTube channel and also post them on our website.

We posed the following question to our experts as part of our AI Policy Exchange Expert Vlogging Series 1.0: How to put AI ethics into practice? 

This question arose from findings by public policy and ethics researchers claiming that AI ethics principles & guidelines proposed by various public bodies and private groups in their current form likely have a negligible effect on the safe development, deployment, and use of artificially intelligent technologies.

Watch our fifteenth vlog as part of the AI Policy Exchange Expert Vlogging Series 1.0 made by Tom Moule below.

Tom joined AI Policy Exchange as a technical/policy expert earlier last month. Tom is Executive Lead at The Institute for Ethical AI in Education, which is based at The University of Buckingham. In this role, Tom leads the Institute’s operations as it develops a framework for the ethical use of AI in education. Tom has previously worked as Strategic Projects Lead, and Curriculum Lead for CENTURY Tech; as a programme manager for Team up for Social Mobility; and as a secondary science teacher, for which he trained via Teach First. Education is Tom’s passion. He is excited and motivated by the transformative benefits that AI could achieve for learners, and is keen to support the development of the structures and guidance needed to allow such promises to be fulfilled.

Viewers are welcome to send us their feedback on editor@aipolicyexchange.org.