Our Digital Mirror

Abstract: The digital world has a strong tendency to let everything in its realm appear as a resource. This includes digital public discourse and its main creators, humans. In the digital realm, humans constitute the economic end and at the same time provide the means to fulfil that end. Online public discourse is a good example: it exemplifies a range of challenges, from user abuse to the amassment of power, difficulties in regulation, and algorithmic decision-making. At their root lies the untamed perception of humans as economic and information resources. In this way, digital technology provides us with a mirror that shows a side of what we are as humans. It also provides a starting point for discussing who we would like to be – including digitally – which purposes we should pursue, and how we can live the digital good life.

For Antoine de Saint-Exupéry (1939), airplanes can become a tool for knowledge and for human self-knowledge. The same is true of digital technologies. We can use the digital as a mirror that reflects an image of what we are as humans. And when we look closely, it can become an opportunity to shape who we are in ways that make us more attractive.

It is perhaps surprising to talk about attraction in the context of digital humanism. Much of its discourse to date concerns the downsides of digital technologies: human estrangement, monopolistic power, unprecedented surveillance, security attacks, and other forms of abuse. However, the discourse of digital humanism is not entirely negative. Its proponents believe in progress, including by technological means. It is a constructive endeavour and acknowledges that many good things have come from digital technologies. Still, digital humanism calls for better technology and an improved way of shaping our future with the help of digital tools. It demands technologies shaped in accordance with human values and needs instead of allowing technologies to shape humans in often unpredictable and undesirable ways.

For digital humanism, however, a programme of purely technical advancement would fall short of its real ambition. Although technologically driven and proceeding with the intention of improving technology, digital humanism more often problematizes science-technology relationships, human attitudes, and fundamental concepts that we have come to take for granted. In digital humanism, the digital becomes the mirror that confronts us with a picture of who we are and lets us realize what we should change – about ourselves, about technology, and about the lives that we live.

The pervasiveness and ubiquity of information technology means that just about everything that is subject to digitization is also subject to its transformational powers. Central concepts of our lives – both private and professional – have been transformed even where they were seemingly only digitized. Collaboration is now something that happens in shared directories, meeting new people is mediated by apps and preference settings, news has become pop-up windows, and museums curate data. Intelligence is something that machines exhibit, and chess has been turned into a formal calculation task rather than a board game since around the 1970s. It is as if we had to reinvent the whole world again in the digital and examine its values and implications for our lives. A good example is the case of public discourse, certainly also a feat of the new digital sphere.

The example of online discourse

The advent of the internet significantly impacted the way we speak in public. From early newsgroups to today's online social networks, this development deserves its own, more thorough investigation. Today, public discourse without the digital has become quite unthinkable. At the same time, the phenomenon of online discourse is now a major concern of policy makers as well as of critical thinkers, researchers, and many citizens. Its shortcomings, from fake news to echo chambers, from foreign political influence to the spread of illegal content, are blamed on its digital form and hence on technology. Some key challenges include the following:

  • Platforms exploit discourse to drive user behaviour. They can prioritize emotional content over facts, nudge users into staying online, and have become viable means of influencing user behaviour, including political decisions.
  • Algorithms supervise and police user-contributed online content with the aim of detecting illegal material, spotting infringements of intellectual property (cf. chapter by Mendis), removing what may be considered harmful, etc.
  • There is a massive shift of power over discourse control from the traditional rulers of public discourse, such as media, politicians, and thinkers, to digital platforms.
  • Discourse on online platforms has proven enormously difficult for any single country to regulate. The only exceptions come through massive investments in surveillance, censorship, and severe limitations of Freedom of Expression, e.g. in China.
  • User-generated discourse provides platforms with large amounts of data to learn from, build models, predict behaviour, and generate profit in various ways, including targeted advertising based on behavioural prediction.

These challenges are by no means unique to online public discourse. We find massive shifts of power towards platforms throughout the world of internet business; platforms have generally proven difficult to regulate – not just regarding public discourse; algorithmic decision-making affecting humans happens across a broad range of applications; harvesting data from all sorts of electronic devices lies at the root of surveillance capitalism; and luring users to stay online through emotional targeting happens across a range of online media today. Online public discourse is really but one example, albeit one that is pervasive throughout societies all over the world.

Beyond the listed concerns, digital online discourse seems to affect members of societies in their sense of belonging. The individualised nature of person-targeted discourse, its character of entertainment, and the self-fulfilling quality of opinionated content sharing and creation have severely undermined shared views, collective narratives, and communal perception. It has been suggested that digital discourse only involves a ‘simulated’ public. Earlier analyses of digital culture focused more on the creation of new communities as well as on referentiality and algorithmicity (Stalder, 2016) as ways of creating a shared understanding. Today however, discourse moderation algorithms reinforce an individualized monologue in which references serve to propagate an individual’s opinions and lead to the often-diagnosed fragmentation of society. Discourse in the digital world thus runs the risk of endangering the common good not necessarily because of attacking any good specifically, but because of undermining the concept of the commons. It limits what is shared among people and thus what contributes to forming a societal collective. This is yet another case of digital technologies not only changing human behaviour but changing the very essence of key concepts in often unexpected and unpredictable ways.

A recurring topic in digital humanism is that of the primacy of agency, or who shapes whom: is technology shaping humans, or should technologies be designed in accordance with human needs and values? Unfortunately, matters in digital technologies are never so simple, and the two spheres influence each other mutually. In digital humanism this phenomenon has been called co-evolution (cf. chapters by Nowotny and by Lee). When co-evolution affects basic concepts, such as discourse, it seems futile to repair these fundamental concept drifts and the challenges they create only by mending technologies. Beyond co-evolution there is a deeper, more philosophical question to ask. It concerns a matter of choice and decision: how do we want to be as humans? It concerns ethical choices about the digital good life as much as it concerns the design of our technosphere. To stay with the example of online discourse, the digital then poses the question of what type of discourse we want or, perhaps more ontologically, what discourse should be.

Some legislators and platform owners, for example, suggest using algorithms to improve online discourse. The idea is that artificial intelligence removes illegal content, and many seem to suggest that any content that is potentially harmful should be removed as well. While the former is usually defined in legal texts and practice, the latter is typically ill-defined and lies at the networks' discretion. The ensuing discussions of democratic parliaments and non-government think-tanks then concern Freedom of Expression as a basic or human right, censorship, regulation, etc. (Cowls et al., 2020). While these are important and difficult discussions, a more essential line of thinking is required, namely the question of what the essential qualities of online discourse should be. It is another typical characteristic of digital technologies that we can rarely do away with them once they have been rolled out. We therefore need to have productive, forward-looking discussions. This can include a debate about how much 'harm' a discourse may need to include in order to be productive, to stimulate, or to provoke. We need to discuss not only the formal qualities of discourse but also what its purpose should be, who should partake, and whom it should serve.

Scaffolding discourse

The challenges of digital technologies are not rooted exclusively in the fact that they are digital as opposed to analogue, nor do they lie in their ubiquitous nature and the ease with which digital technologies handle large numbers. They are rooted in how these technologies affect our basic conceptions of the world. Although the technical characteristics are important, the digital currently facilitates commercial gains of a specific character on an unprecedented scale. We mentioned how online discourse provides a basis for targeted advertising, for data harvesting, and for the construction of predictive behavioural models. This exploitation of online discourse lets discussions appear as a resource in the digital sphere. The digital (platform) perspective thus regards human language from the standpoint of observability and predictability. The resulting digital online sphere consists of (mostly) humans, who provide linguistic resources and their online presence, and of businesses, which require that humans be predictable and targetable by advertising. In this discourse, humans become a resource in the digital realm.

Such a resource-focused perspective is not unique to digital technology. As early as 1954, Heidegger suggested that this specific way of letting everything appear as a resource lies in the very nature of modern technology. In his terminology, technology lets everything become part of an enframing ('Ge-stell') as resource ('Bestand'). Heidegger uses the example of the river that appears as a power source once we start building electricity stations. Digitization not only provides such an enframing for various objects and phenomena in our environment; much more than previous, older technologies, it additionally enframes us as humans. It is perplexing that in the digital realm the human is both the source and the sink. Humans constitute the economic end and at the same time provide the means to fulfil that end. Humans stand in reserve to the extent that they are simply either data or money generators. From an economic viewpoint, Zuboff (2019) identified a similar concept drift underlying surveillance capitalism. It is a strong and impactful understanding of humans, driven by the commercial online world, and it is commercially attractive and promising with its double take on the human as a resource.

Engineering may always imply 'a certain conception of the human' (Doridot, 2008), but we have choices. For example, not all public discourse needs to take such an instrumentalist turn. As with other human activities, we can choose the purpose of our speaking. Some forms of public online discussion are designed to facilitate dialogues among groups of highly engaged speakers of a local community (e.g. MIT's Local Voices Network – https://lvn.org). Others are solution- and goal-oriented and follow a practice of focused contributions (e.g. the business- and employment-oriented network LinkedIn). Such examples suggest that there are ways to facilitate online discourse that are less prone to filter bubbles, echo chambers, fake news, etc., perhaps even in business settings. They also show how purposes can be designed in line with human needs; in fact, purposes are entirely human-made.

We may still have to focus on technology, to occasionally retreat from social media, to reform its way of working, and to exert restraint, as Deibert (2020) suggests. However, realising that some types of online discourse emerge from the instrumentalization of users, turning them into targets and exploiting them as resources, means understanding not just technology but ourselves. Technology then is the mirror that presents us with an image of ourselves that we may not find entirely attractive. We can also find possible relief in Heidegger, who quotes Hölderlin: 'But where danger is, grows the saving power also.' This suggests that the enframing also reveals a truth upon which we can build if we want to overcome the present danger. Lifted back to the level of digital humanism, the questions then become who we would like to be – including digitally – which purposes we pursue, and how we can live the good digital life. Finally, we will also have to find good answers to the question of who 'we' is.


Cowls J., Darius P., Golunova V., Mendis S., Prem E., Santistevan D., Wang W. (2020). Freedom of Expression in the Digital Public Sphere [Policy Brief]. Research Sprint on AI and Content Moderation. Retrieved from https://graphite.page/policy-brief-values

Deibert R.J. (2020) Reset. Toronto: House of Anansi Press.

Doridot F. (2008) Towards an ‘engineered epistemology’? Interdisciplinary Science Reviews, 33:3, 254-262, DOI: 10.1179/174327908X366941.

Heidegger M. (1954) Die Frage nach der Technik (The question concerning technology). In: Vorträge und Aufsätze. Pfullingen: Neske, 1990.

De Saint-Exupéry A. (1939) Wind, Sand und Sterne (Terre des hommes). Düsseldorf: Karl Rauch.

Stalder F. (2016) Kultur der Digitalisierung. (Culture of digitization.) Frankfurt/Main: Suhrkamp.

Zuboff S. (2019) Surveillance capitalism. London: Profile Books.