A quote attributed to the linguist Benjamin Lee Whorf claims that “language shapes the way we think and determines what we can think about.” Whorf, along with his mentor Edward Sapir, argued that a language’s structure influences its speakers’ perception and categorization of experience. Summarizing what he called the “linguistic relativity principle,” Whorf wrote that “language is not simply a reporting device for experience but a defining framework for it.”
Over the past several decades, a strange new language has increasingly exerted its power over our experience of the world: the language of algorithms. Today, millions of computer programmers, data scientists, and other professionals spend the majority of their time writing and thinking in this language. As algorithms have become inseparable from most people’s day-to-day lives, they have begun to shape our experience of the world in profound ways. Algorithms are in our pockets and on our wrists. Algorithms are in our hearts and in our minds. Algorithms are subtly reshaping us after their own image, putting artificial boundaries on our ideas and imaginations.
Our Languages, Our Selves
To better understand how language affects our experience of the world, we can consider the example depicted in the 2016 science fiction movie Arrival. In the film, giant seven-legged aliens arrive on Earth to give humanity a gift – a “technology,” as they call it. This technology turns out to be their language, which is unlike any human language in that it is nonlinear: sentences are represented by circles of characters with no beginning or end. Word order is not relevant; each sentence is experienced as one simultaneous collection of meanings.
As the human protagonist becomes fluent in this language, her experience of the world begins to adapt to its patterns. She begins to experience time nonlinearly. Premonitions and memories become indistinguishable as her past, present, and future collapse into one another. The qualities of her language become the qualities of her thought, which come to define her experience.
A similar, if less extreme, transformation is occurring today as our modes of thought are increasingly shaped by the language of algorithms. Technologists spend most of their waking hours writing algorithms to organize and analyze data, manipulate human behavior, and solve all manner of problems. To keep up with rapidly changing technologies, they exist in a state of perpetual language learning. Many are evaluated based on how many total lines of code they write.
Even those of us who do not speak the language of algorithms are constantly spoken to in this language by the devices we carry with us, as these devices organize and mediate our day-to-day experience of the world. To understand how this immersion in the language of algorithms shapes us, we need to name some of the peculiar qualities of this language.
Traditional human languages employ a wide range of linguistic tools to express the world in its complexity, such as narrative, metaphor, hyperbole, dialogue, and humour. Algorithms, by contrast, express the world through formulas. They break up complex problems into small, well-defined components and articulate a series of linear steps by which the problem can be solved. Algorithms – or, more accurately, the utility of algorithms – depends on the assumption that problems and their solutions can be articulated mathematically without losing any meaningful qualities in translation.
They assume that all significant variables can be accounted for, that the world is both quantifiable and predictable. They assume that efficiency should be preferred to friction, that elegance is more desirable than complexity, and that the optimized solution is always the best solution. For many speakers of this language, these assumptions have become axiomatic. Their understanding of the world has been re-shaped by the conventions and thought-patterns of a new lingua franca.
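The decomposition described above can be seen in even the simplest program: a messy human question (“which option is best?”) is recast as a formula over a handful of quantified variables, and the highest-scoring answer is declared the winner. A minimal sketch, with entirely invented features and weights:

```python
# Toy illustration of algorithmic decomposition: a complex question is
# reduced to measurable variables, a formula, and an optimization step.
# The features ("speed", "accuracy") and weights are invented for illustration.

def score(option):
    # The "formula": assumes everything that matters is measurable
    # and can be combined linearly into a single number.
    return 0.6 * option["speed"] + 0.4 * option["accuracy"]

options = [
    {"name": "A", "speed": 0.9, "accuracy": 0.5},
    {"name": "B", "speed": 0.4, "accuracy": 0.95},
]

# The optimization step: the highest score is assumed to be "best".
best = max(options, key=score)
print(best["name"])  # → A
```

Everything the formula leaves out – every quality that resisted quantification – simply ceases to exist as far as the algorithm is concerned.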
Algorithms and Technological Solutionism
Immersion in the language of algorithms inclines us to look for technological solutions to every kind of problem. This phenomenon was observed in 2013 by Evgeny Morozov, who described the folly of “technological solutionism” in his book, To Save Everything, Click Here:
“Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized–if only the right algorithms are in place!–this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address.”
An algorithmic approach to problem solving depends on the assumption that the effects of the algorithm – the outputs of the formula – can be predicted and controlled. This assumption becomes less plausible as the complexity of the problem increases. When technologists seek to write an algorithmic solution to a nuanced human problem – like crime prevention or biased hiring, for example – they are compelled to contort the problem into an oversimplified form in order to express it in the language of algorithms. This results, more often than not, in the “unintended consequences” that we’ve all heard so much about. Perhaps Wordsworth said it best, 200 years before Morozov:
“Our meddling intellect
Mis-shapes the beauteous forms of things:—
We murder to dissect.”
A helpful example of all this is furnished by Amazon, which in 2014 implemented an algorithmic tool to automate the vetting of resumes. The tool was designed to rate job candidates on a scale from one to five in order to help managers quickly identify their top prospects. After one year of use, the tool was shelved after it was found to discriminate against female candidates.
This parable contains all the components of the technosolutionist narrative: a problem rife with human complexity is assigned an algorithmic solution in the pursuit of efficiency; the problem is oversimplified in a hubristic attempt to express it numerically; the algorithm fails to achieve its objective while creating some nasty externalities along the way.
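The mechanism by which such a tool inherits bias is straightforward: a model trained on historical hiring decisions learns to reproduce whatever patterns those decisions contained. The sketch below is hypothetical – it is not Amazon’s actual system – and uses invented training data, but it shows how a word-level scorer can acquire a learned penalty for an irrelevant attribute:

```python
# Hypothetical sketch (not Amazon's actual system) of how a resume scorer
# trained on biased historical decisions reproduces that bias.
from collections import Counter

# Invented training data: past resume snippets and whether the candidate was hired.
history = [
    ("captained chess team", True),
    ("led robotics club", True),
    ("captained women's chess team", False),
    ("women's coding society president", False),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume):
    # Each word votes: +1 per past appearance among hires,
    # -1 per past appearance among rejections.
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# The word "women's" now carries a learned penalty, so two otherwise
# identical resumes receive different scores.
print(score("captained chess team"))          # higher score
print(score("captained women's chess team"))  # lower score
```

Nothing in the formula mentions gender; the bias arrives entirely through the historical data the formula was asked to imitate.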
Immersion in the language of algorithms doesn’t just shape our understanding of the world; it shapes our understanding of the self. Members of the “quantified self” community use a combination of wearable devices, internet-of-things sensors, and – in extreme cases – implants to collect and analyze quantitative data on their physical, mental, and emotional performance. Adherents of this lifestyle pursue “self-knowledge through numbers,” in dramatic contrast to their pre-digital forebears, who may have pursued self-knowledge through philosophy, religion, art, or other reflective disciplines.
The influence of algorithms on our view of the self finds more mainstream expressions as well. Psychologists and other experts have widely adopted computational metaphors to describe the functions of the human brain. Such conventions have made their way into our common vernacular; we speak of needing to “process” information, of feeling “at capacity” and needing to “recharge,” of being “programmed” by education and “re-wired” by novel experiences.
To understand the extent to which algorithms have shaped our world, consider that even Silicon Valley’s rogues and reformers struggle to move beyond algorithmic thought patterns. This is evidenced by the fact that tech reformers almost uniformly adopt a utilitarian view of technological ethics, evaluating the ethical quality of a technology based on its perceived consequences.
The algorithmic fixation with quantification, measurement, outputs, and outcomes informs their ethical thinking. Other ethics frameworks – such as deontology or virtue ethics – are seldom applied to technological questions except by the occasional academic who has been trained in these disciplines.
A Language Fit for Our World
When it comes to complex social problems, algorithmic solutions tend to function at best as band-aids, covering up human shortcomings but doing nothing to address the problem’s true causes. Unfortunately, technological solutionism’s disastrous track record has done little to dampen its influence on the tech industry and within society as a whole.
Contact tracing apps look to be an important part of the solution to the COVID-19 pandemic; but will they be able to compensate for the lack of civic trust and responsibility that has crippled America’s virus containment efforts? Predictive and surveillance policing, powered by artificial intelligence tools, promises to create a safer society; but can these tools counteract the complex social and economic inequalities that drive crime and dictate who gets called a criminal? As November 2020 approaches, social media companies assure us that they’re ready to contain the spread of fake news this time around; are we as citizens ready to engage in meaningful civil discourse?
Benjamin Lee Whorf, mirroring the language of Wordsworth’s poem, wrote that “we dissect nature along lines laid down by our native language.” “We cut nature up, organize it into concepts, and ascribe significances,” as our language dictates and allows. It is therefore critical that the languages we adopt are capable of appreciating the world in all of its messy, tragic, beautiful complexity. Algorithms are well-suited to many kinds of problems, but they have important limits. Rather than shrinking our world to fit within these limits, we must broaden our minds to comprehend the intricacies of the world. We must stop thinking like algorithms and start thinking like humans.
 Whorf, Benjamin Lee (1956). Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Carroll, John B. (ed.). Cambridge, MA: MIT Press.
 Morozov, Evgeny (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
 Wordsworth, William (2004). “The Tables Turned.” In Selected Poems. London: Penguin Books.
 For an excellent discussion and refutation of this trend, see Nicholas Carr’s book, The Shallows.
 Whorf, Benjamin Lee (1956). Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Carroll, John B. (ed.). Cambridge, MA: MIT Press, pp. 212–214.
Views expressed above belong to the author(s).