It Is Simple, It Is Complicated

Abstract: History is not a strictly linear process; our progress as a society is full of contradictions. We have to bear this in mind when trying to find answers to pressing challenges related to, and even caused by, the digital transformation. In this article, we reflect on contradictory aspects of Digital Humanism, an approach to foster the control and the design of digital infrastructure in accordance with human values and needs. Seemingly simple solutions turn out to be highly complex when looked at more closely. Focusing on some key aspects as (non-exhaustive) examples of the simple/complicated dilemma, we argue that, in the end, political answers are required.

History is a dialectic process. This is also true for the ongoing digital transformation (probably better named informatization, as it informatizes nearly everything). Much of this development has already happened in the past, unobserved by mass media and by most decision makers in politics and industry. Nowadays, the transformation has surfaced, leaving many with the impression of an automatism, of a process without human control, guided by some “external” forces. This is why the notion of Digital Humanism is so important. As an approach to counteract negative effects of the digital transformation, it aims to foster the control and the design of digital infrastructure in accordance with human values and needs. While numerous people support (or at least sympathize with) the general aims and goals of Digital Humanism, subtle questions arise when looking behind the scenes, and these need to be discussed and resolved. We should resist the temptation to offer trivial solutions that will not survive serious discussion.

Our progress as a society is full of contradictions, and sometimes it even points backwards. Bearing in mind such contradictions, and that historical processes do not move forward in a straight line, we approach some of these contradictory aspects of Digital Humanism through the intertwined pair “simple” and “complicated”. What seems simple is complicated, and vice versa. This paper also reflects a discussion among us, the authors. Our sometimes controversial debate may serve as a blueprint for a future “dialectic” process of finding agreement on complicated matters, also in public debate. The following, by no means exhaustive, list represents some of the “simple/complicated” issues Digital Humanism has to address.

  • Interdisciplinarity. The impact of digitalization on our lives is obvious, and everybody senses its power (both positive and negative). Negative phenomena to be addressed include the monopolization of the Web, issues related to the automation of work, problems with respect to AI and decision making, the emergence of filter bubbles, the spread of fake news, the loss of privacy, and the prevalence of digital surveillance. While these are all aspects of the same disruptive process, they manifest very differently. Consequently, a broad spectrum of challenges needs to be addressed. The obvious and simple conclusion is: interdisciplinarity is needed to tackle them, to understand the complicated present and to shape the digital future.

    But is it really that simple? Interdisciplinarity brings its own challenges. It is very hard, for instance, to come up with a common language in which all researchers involved use the same terminology with the same meanings. Moreover, the way the research landscape is organized still hinders interdisciplinarity. Interdisciplinary (especially young) researchers often do not obtain funding: they touch different communities but are not specialized enough to be at their centers, which often leads to negative reviews. So how can we foster interdisciplinarity for Digital Humanism on a content, a method and an institutional level? Matters are complicated further because Informatics – as a key discipline – often comes with the attitude of “solving problems” while not always seeing the side effects and long-term effects of its work. Computer scientists cannot, or even should not, be the sole driving force. But in that case, the role of Informatics and its methods needs some clarification. For a solid foundation of Digital Humanism, exchange across various disciplines is needed throughout the entire process, i.e., when doing analyses, when developing new technologies and when adopting them in practice. Looking back in history, one sees that artefacts created by computer scientists have an impact similar to (if not greater than, given their more pervasive nature) that of the steam engine in the industrial revolution. But it was not the engineers who organized the workers and envisioned social welfare measures; it was a much broader effort including intellectual leaders with diverse backgrounds, together with the workers and their unions.
  • Humans decide. As the manifesto states, “Decisions with consequences that have the potential to affect individual or collective human rights must continue to be made by humans” (Werthner et al., 2019). In a world becoming ever more complicated and diverse, it seems obvious that far-reaching and fundamental decisions should be made by humans. It is about us and our society; thus it is up to us, and we are responsible for ourselves. This seems to be a simple principle.

    However, it may be a little more complicated. Empirical research in psychology and human decision-making (see Kahneman, 2011; or Meehl, 1986) shows that rather simple statistical algorithms (e.g., multiple regression models) seem to outperform humans, even experts in the respective field, in particular when long-term decisions are to be taken. We humans tend to build causal models of the world as a “simplification” that lets us understand it. This happens even, or especially, in complicated cases where randomness plays an important role. In addition, unobserved or unconsidered parameters can influence human decisions.1 So the issue does not seem to be an either/or, but rather how and when to combine humans and machines – complicated! (A first illustrative sketch of this point follows the list.)
  • IT for the good. Digital Humanism not only attempts to eliminate the “downsides of information and communication technologies, but to encourage human-centered innovation” (Werthner et al., 2019); its focus is also on making the world a better one to live in, on contributing to a better society. At first glance this is simple: take, for example, the Corona crisis, where informatization has shown its positive potential (both in research and in keeping our society functioning). And when one looks at the United Nations Sustainable Development Goals (SDG – https://sdgs.un.org/goals), one can see that they can only be achieved through proper IT research and its application.

    But reality is again a little more complicated. Since the 1980s – the period of ongoing digitalization – income inequality has risen in practically all major advanced economies.2 But not only has the gap within society widened;3 following the “rules” of the networked platform economy with its winner-takes-all principle, there is also a growing market gap between companies. For example, today the seven most highly valued companies on the stock markets are IT platform companies; back in 2013 only two of them were in the top ten. (A second sketch after this list simulates such winner-takes-all dynamics.) In addition, productivity growth has slowed down where one would have expected substantial growth, as forecast by several public relations companies in the IT field. Thus, the cake of wealth to be distributed has not grown enough to “appease” the majority of the population.4 It is difficult to isolate the causes of this socio-economic development, but technology certainly plays a crucial role. So it is complicated, both in the analysis and in finding the proper technological and political answers.
  • Ethical technology. Being aware of the impact of our artefacts, we recognize the need to develop new technology along ethical guidelines. Informatics departments worldwide have included ethics in their curricula, either as stand-alone courses or embedded in specific technical subjects. Industry has come along as well: some companies even offer specific tools, and associations such as IEEE provide guidelines for the ethical design of systems.5 So, if we follow such guidelines, offer courses and behave ethically, then it will work, at least in the long run. That’s simple.

    But reality may again be a little complicated. Most of the research in AI, especially with respect to machine learning, is done by the big IT platform companies: they have the data and, with ample financial resources, also outstanding expertise. These companies try to preempt “too much” regulation and argue for self-regulation. However, is this research really independent, or is it merely “ethics washing”, as observed by Wagner (2018)? Cases such as Google firing Timnit Gebru and Margaret Mitchell set the alarm bells ringing.6 But it is not only about the independence of research; it is also about the reproducibility of results, the transparency of funding, and the governance structure of research (see Ebell et al., 2021). And there are other subtle problems. For example, we argue for fairness in recommendation or search results. But how do we define fairness: with respect to the providers of information or products, with respect to readers or consumers (and which sub-group), or with respect to some general societal criteria? (The third sketch after this list shows how such definitions can conflict.) One step further: let us assume these issues are solved and we all behave according to Kant’s categorical imperative – can we then guarantee overall ethical behavior or a good outcome? Assuming the concept of human-technology co-evolution, we have an evolutionary “optimization” process, which may lead to a local but not to a global optimum (e.g., one that does not automatically prevent monopolistic structures). Even more, this evolution “does not evolve on its own” but is – as in our context – governed by existing unequal societal and economic power relationships. So ethics alone may not be enough.
  • It is about the economy.7 The digital transformation, as a socio-economic-technical process, has to be put into a historical context. One could apply contemporary economic theory to understand what is going on (the “invisible hand” according to Adam Smith, i.e., by following their self-interest, consumers and firms create an efficient allocation of resources for the whole of society). However, the economic world has been substantially changed by the digital transformation. The value of labor is in the process of being reduced by increasing automation and a possible unconditional basic income. These are simple observations, but what are the implications? What is, ultimately, the role of humans in the production process? Or even, what is the value of a company? Can this still be captured and understood by traditional theories?

    This is again complicated, as personal data seems to become the most distinctive asset people can contribute (at least on the Web) in this new world. Apparently, it is less and less the surplus (“Mehrwert”) generated by humans in the labor process that is relevant, but rather the value added by a never-ending stream of data, i.e., people’s behavioral traces on the Web. This data is used to develop, train and optimize AI-driven services. Thus, users are permanently doing unpaid work. A user is, therefore, all three at the same time: a customer, a product and a resource (Butollo and Nuss, 2019). Furthermore, “instead of having a transparent market in which posted prices lead to value discovery, we have an opaque market in which consumers support Internet companies via, essentially, an invisible tax” (Vardi, 2018). All this is related to the central role of online platforms. However, investigating their technological dominance and the resulting imbalances of power may require a network-analytical perspective, integrating informatics, statistics and political science. Such novel approaches to understanding the new rules of the economic game and the mechanisms driving the data-driven digital revolution are complicated. As far as we know, there is no accepted method to measure the value of the data economy, or of data itself. Data is at the core of this development; several observers even call it the “gold nugget” of today. Whereas external valuation is difficult, the large online platforms are aware of the situation and are investing heavily. We need creative people with perspectives from different disciplines in order to come up with further enlightening insights, both methodological and practical. Remember the industrial revolution once more: understanding the steam engine does not immediately result in Marx’s Critique of Political Economy.
  • And about politics. It is already everyday knowledge that Informatics will continue to bring about profound changes. All this seems to be an automatism, even like a force of nature. However, we think neither that there is a higher being that is responsible nor, in a similar mindset, that developments strictly follow a “historical determinism”. If we, the people, are to be the driving force, the simple approach would be that all people participate in the decisions shaping their own future, be it via democratic elections or via participatory initiatives.

    However, in practice experiences are contradictory and, thus, complicated. An example: although it was social media that claimed to lay the basis for participative processes,8 recent years have shown that their effect often goes in the opposite direction and fuels the loss of trust in policy makers (and thus, in the long run, in democracy). In addition, policy makers seem to be, at least sometimes, powerless against market automatisms, which in turn leads people to vote for “strong men”. Today it is the platforms themselves that make inherently political decisions when, for instance, banning individuals or entire opinion groups. In conclusion, starting from the simplistic vision that the Internet would foster participation, we have ended up in a complex situation that has shifted power from the people to global players taking opaque actions. And these quasi-monopolists not only run the “visible” global platforms but have, in the background, built up a critical infrastructure for the functioning of the entire economy (cloud services, machine learning services, etc.). Something needs to be done – that’s the simple request. But the available options are all difficult: building independent (e.g., European) infrastructures, regulation, or even the nationalization of these companies. Each of these alternatives raises complicated questions. Should it take place on a global, national or regional scale? Regarding regulation, what should be addressed – the entire technology stack or just the upper software layer? Regarding nationalization, which is a controversial act in its own right, who would even have the power to do this? And who should operate and control such infrastructures? There won’t be simple answers ahead, but long and cumbersome discussions on a global scale, which are often devastatingly ineffective. As a cautionary example, consider the discussion on a Tobin tax, one lesson supposedly learned from the 2008 crisis, which has not materialized in any form after 13 years of debate.
  • We as academics. Coming back to our role as academics, at first glance it appears simple: specifically in the technical disciplines, we try to solve open questions, test new hypotheses, find novel models to describe the world, etc. Our papers are reviewed by colleagues, and we present our results to the community in journals and at conferences. And in some rare cases, these results find their way into the real world, hopefully making it a better place.

    Clearly, life is not as simple as that. Are we always aware of all effects of our contributions? Have we thought about our responsibility, as stated by Popper (1971)? Scientific and technical solutions are not always for the better; sometimes such solutions and artefacts even worsen the situation. In addition, such solutions depend on the context and may reflect current societal structures and relationships. When normative claims come into play, things immediately become more complicated, and norms can be seen to conflict with the freedom of research. Especially in technology, scientists might be reluctant, as at least some perceive their research activities as detached from social values and norms. Moreover, the COVID crisis has once more put the ambiguous relation between politics, science and research on the agenda, demonstrating that we need to be aware of pitfalls and consequences when determining this relation (Habermas, 1968). We as researchers have to communicate that there are two sides of science, a “ready-made science” and a “science-in-the-making” (see Denning and Johnson, 2021; or Latour, 1987). Science is also a process, starting from struggling for what we need to know and moving towards an agreement on theories and models, i.e., settled science. Both sides are needed, as are – sometimes loud – debates between researchers. Thus, in certain situations we cannot provide 100% certain answers, and this must be communicated to the public and to politicians. A related aspect is the rapid development of technology, constantly (re-)shaping our world, which frightens people and increasingly leads to criticism of technology; sometimes it is hard to trust technological progress. For example, as the digital transformation leads to the automation and reorganization of work, certain jobs will disappear, which in turn raises an intrinsic reservation about modern technology (nonetheless, it has to be emphasized that it might also be a blessing for society as a whole if certain jobs disappear). The complicated issue is how to manage this progress: from a societal perspective, or from a market perspective only?
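To make the point about simple statistical algorithms (see “Humans decide” above) concrete, here is a first, minimal sketch in Python. Everything in it is simulated for illustration – the cues, the outcome and the “expert” are assumptions, not empirical data. The regression model applies the same weights to every case; the stylized expert knows roughly the right weights but applies them inconsistently from case to case, and that inconsistency alone is enough for the model to win on average, in line with Meehl (1986) and Kahneman (2011).

    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated "cases": three observable cues and a noisy outcome.
    n_train, n_test = 800, 200
    cues = rng.normal(size=(n_train + n_test, 3))
    true_w = np.array([0.5, 0.3, 0.2])        # assumed cue weights
    outcome = cues @ true_w + rng.normal(scale=1.0, size=n_train + n_test)

    # Statistical algorithm: ordinary least squares on the training cases.
    X = np.column_stack([np.ones(n_train), cues[:n_train]])
    coef, *_ = np.linalg.lstsq(X, outcome[:n_train], rcond=None)
    X_test = np.column_stack([np.ones(n_test), cues[n_train:]])
    model_pred = X_test @ coef

    # Stylized "expert": roughly correct weights, applied inconsistently
    # from case to case (noise in human judgment).
    expert_w = true_w + rng.normal(scale=0.5, size=(n_test, 3))
    expert_pred = np.sum(cues[n_train:] * expert_w, axis=1)

    def rmse(pred):
        return np.sqrt(np.mean((pred - outcome[n_train:]) ** 2))

    print(f"regression error: {rmse(model_pred):.2f}")
    print(f"expert error:     {rmse(expert_pred):.2f}")
    # The consistent model typically beats the inconsistent expert.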
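Second, the winner-takes-all dynamics mentioned under “IT for the good” can be illustrated with a toy preferential-attachment simulation. This is again an assumption-laden sketch, not a model of any real market: each new user joins a platform with probability that grows faster than linearly with its current size, a crude stand-in for network effects.

    import random

    random.seed(1)

    # Ten identical platforms, one founding user each.
    platforms = [1] * 10

    # Each new user joins a platform with probability proportional to
    # size**1.5: superlinear network effects ("rich get richer").
    for _ in range(100_000):
        weights = [p ** 1.5 for p in platforms]
        i = random.choices(range(10), weights=weights)[0]
        platforms[i] += 1

    total = sum(platforms)
    shares = sorted((p / total for p in platforms), reverse=True)
    print(" ".join(f"{s:.1%}" for s in shares))
    # Typically one platform ends up with most of the market: small
    # early leads compound, although all ten started out identical.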
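Third, the difficulty of defining fairness (see “Ethical technology” above) can be shown on a small hypothetical ranking; the items, relevance scores and position-discount formula are all assumptions chosen for illustration. Fairness towards providers and quality for consumers are measured differently and can pull in opposite directions.

    import math

    # Hypothetical search results, ranked: (provider, relevance to user).
    ranking = [("A", 0.9), ("A", 0.8), ("A", 0.7), ("B", 0.6), ("B", 0.5)]

    def exposure(rank):
        # Common position discount: top positions get more attention.
        return 1.0 / math.log2(rank + 2)

    # Provider-side fairness: how is attention split between A and B?
    total = sum(exposure(i) for i in range(len(ranking)))
    for p in ("A", "B"):
        share = sum(exposure(i) for i, (q, _) in enumerate(ranking) if q == p)
        print(f"provider {p}: {share / total:.0%} of exposure")

    # Consumer-side quality: mean relevance of the top three results.
    print(f"top-3 relevance: {sum(r for _, r in ranking[:3]) / 3:.2f}")

    # Reordering to equalize provider exposure would push less relevant
    # items up and lower consumer-side quality: the definitions conflict.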

Like other socio-economic or socio-technical-economic processes, the digital transformation is a dialectic endeavor, full of contradictions, raising simple and complicated issues with often no easy solutions. One needs to understand technology to build a technical infrastructure for humans and society, but at the same time this goal is only achievable with broad support from the people. Scientific insights need to be communicated, and participation is required (“Citizen Science”). People have to understand the power they (still) have in this new society whose fundamental rules are changing, but basic knowledge of these issues is often severely lacking. As an example, it was not least the COVID crisis that showed how inconsistently privacy issues are raised: we have seen huge reservations against contact-tracing apps, while the same people mark their vaccination appointments in their Facebook stories.

Digital Humanism is not only about fundamental and applied research in which different disciplines have to cooperate; different types of activities also have to be integrated, from research to innovation, education, policy briefing and communication with the public. As simple or complicated as it may be, it needs democratic ways forward and democratic solutions. Or to put it simply: in the end, techno-societal issues need political answers.


1. Danziger et al. (2011) show that “judicial rulings can be swayed by extraneous variables that should have no bearing on legal decisions”. When examining parole boards’ decisions, the authors found that the likelihood of a favorable ruling is greater at the beginning of the workday or right after a food break than later; so whether a case is heard before or after a break matters.

2. See, e.g., http://www.bbvaopenmind.com/en/articles/inequality-in-the-digital-era/; or https://knowledge.insead.edu/responsibility/how-the-digital-economy-has-exacerbated-inequality-9726; (thanks to George Metakides for these references).

3. There is even a new source of inequality that stems from bias in data, an issue which we won’t discuss here further.

4. For the consequences, look at the respective elections and the success of the populist right.

5. IEEE P7000 – IEEE Draft Model Process for Addressing Ethical Concerns During System Design.

6. https://www.bbc.com/news/technology-56135817

7. We omit “stupid” so as not to offend the reader.

8. In fact, social media contributed to broad political movements such as the Arab Spring.


References

Butollo, F., and Nuss, S. (2019) Marx und die Roboter. Vernetzte Produktion, künstliche Intelligenz und lebendige Arbeit [Marx and the Robots: Networked Production, Artificial Intelligence and Living Labor]. Dietz, Berlin (in German).

Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011) Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.

Denning, P., and Johnson, J. (2021) Science Is Not Another Opinion. Communications of the ACM, 64(3).

Ebell, C., Baeza-Yates, R., Benjamins, R. et al. (2021) Towards intellectual freedom in an AI Ethics Global Community. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00052-5.

Habermas, J. (1970) “Technology and Science as ‘Ideology’.” In Toward a Rational Society: Student Protest, Science, and Politics, trans. Jeremy J. Shapiro. Boston: Beacon Press. (Original article: Habermas, J. (1968) Technik und Wissenschaft als “Ideologie”. Man and World 1(4), 483-523.)

Kahneman, D. (2011) Thinking, fast and slow. London: Penguin Books.

Latour, B. (1987) Science in Action: How to Follow Scientists and Engineers through Society. Harvard University Press.

Meehl, P. E. (1986) Causes and effects of my disturbing little book. Journal of personality assessment, 50(3), 370-375.

Popper, K. R., (1971) The moral responsibility of the scientist. Bulletin of Peace Proposals, 2(3), 279-283.

Vardi, M. (2018) How the Hippies Destroyed the Internet. Communications of the ACM, 61(7).

Wagner, B. (2018) Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping? In: Bayamlioglu, E., Baraliuc, I., Janssens, L., and Hildebrandt, M. (eds.): Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen’. Amsterdam University Press, Amsterdam, pp. 84-88.

Werthner, H. et al. (2019) The Vienna Manifesto on Digital Humanism.