Upcoming Events:

April 14, 2023
11:00 AM – 12:00 PM
(11:00) CEST
Europa im Diskurs
“Ist künstliche Intelligenz bald klüger als wir?” (“Will Artificial Intelligence Soon Be Smarter Than Us?”)

They answer emails, write homework assignments, support scientific research, and, on request, even try their hand at poetry and drama. The latest chatbots, text-based dialogue systems such as ChatGPT, already handle certain tasks better and faster than humans can. Is an age-old fear of humanity becoming reality? Are these programs on the verge of replacing us? Will they end up being more intelligent than the smartest among us? We discuss these questions with experts and artists at EUROPA IM DISKURS.

Panelists: Helga Nowotny (Former President of the ERC), Peter Knees (TU Wien), Lisz Hirn, Maya Pindeu, Jörg Piringer
Moderator: Petra Stuiber

Past Events:

March 28, 2023
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Statement of the Digital Humanism Initiative on ChatGPT and a possible new online world”


The statement of the Digital Humanism Initiative of March 2023 will be presented and discussed.
The release of ChatGPT has stirred worldwide enthusiasm as well as anxieties. It has triggered popular awareness of the far-reaching potential impact of the latest generative AI, ranging from numerous beneficial uses to worrisome concerns for our open democratic societies and the lives of citizens.


Speakers: Helga Nowotny (Former President of the ERC) George Metakides (President of Digital Enlightenment Forum, Visiting Professor, University of Southampton)
Moderator: Hannes Werthner (TU Wien)
Slides
YouTube
March 9, 2023
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“On ChatGPT”


Gary Marcus is a prominent public voice on the development of AI; see his recent NY Times interview or his comments on his blog. In this DigHum Lecture he will present his latest take on ChatGPT.


Speaker: Gary Marcus (garymarcus.com)
Moderator: Helga Nowotny (Former President of the ERC)
YouTube
March 7, 2023
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Seize the Means of Computation”


The tech giants claim that they have to lock down their devices to defend you, but they keep failing to do so. The most science fictional question isn’t what a technology *does*, it’s who it does it *for* and *to*. A free and fair future starts with technology that is under its users’ control. We need a new Luddite movement, one that seizes control of the machines that try to seize control of *us*.


Speaker: Cory Doctorow (craphound.com)
Moderator: Allison Stanger (Middlebury College, USA)
YouTube
February 21, 2023
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Abstracted power and responsibility”


Technology often serves as a lever to extend and amplify our power. It is increasingly possible in the 21st century to make decisions that have profound impacts on people thousands of miles away from us. As it extends our reach geographically and culturally, this long arm of technology can also distance us perceptually from impacts of our decisions and make the consequences easier to ignore. We call the outcome of this phenomenon ‘abstracted power,’ and identify it as an important obstacle to overcome as we promote social responsibility in engineering education. Our work is with computer science students, but abstracted power could apply to any engineering field and indeed may find traction beyond those disciplinary boundaries. In this interactive talk I will introduce the concept of abstracted power and the highlights of a paper my colleagues Rodrigo Ferreira and Moshe Vardi and I recently published on the subject. Next I will describe how I have used this concept in my computer science ethics classes and share some students’ observations of this phenomenon in the ‘real world.’ I will then invite members of the audience to share their thoughts on this concept in the context of their own disciplines. Finally I will talk about ways to un-abstract power and make computer scientists and others feel a greater sense of personal responsibility for the consequences of their actions.


Speaker: Tina Peterson (University of Texas, Austin)
Moderator: Carlo Ghezzi (Politecnico di Milano, Italy)
Slides and presented paper
YouTube
January 24, 2023
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“‘AI For Good’ Isn’t Good Enough: A Call for Human-Centered AI”


AI for Good initiatives recognize the potential impacts of AI systems on humans and societies. However, simply recognizing these impacts is not enough. To be truly Human-Centered, AI development must be user-centered, community-centered, and societally-centered. User-centered design integrates techniques that consider the needs and abilities of end users, while also improving designs through iterative user testing. Community-centered design engages communities in the early stages of design through participatory techniques. Societally-centered design forecasts and mediates potential impacts on a societal level throughout a project. Successful Human-Centered AI requires the early engagement of multidisciplinary teams beyond technologists, including experts in design, the social sciences and humanities, and domains of interest such as medicine or law, as well as community members. In this talk I will elaborate on my argument for an authentic Human-Centered AI.


Speaker: James A. Landay (Stanford University)
Moderator: Moshe Y. Vardi (Rice University, USA)
Slides
YouTube
December 19, 2022
5:00 – 6:00 PM
(17:00) CET
Online Viennese Digital Humanism Runde
“Responsible AI in China”




Speaker: Pingjing Yang (University of Illinois)
Slides

December 6, 2022
2:00 – 3:00 PM
(14:00) CET
Lecture Series
“The Uselessness of AI Ethics”


In this discussion, Luke Munn and Erich Prem consider the apparent uselessness of AI ethics. As awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines has been released in both the public and private sectors in the last several years. Systematic reviews of these frameworks reveal that most of them are instances of principlism. Unfortunately, such principles are often meaningless and vague, they lack “teeth” or enforcement, and they are situated in an industry that often ignores ethics. The question then is how to leverage ethical principles, to move from what to how. There have been numerous proposals for tools, techniques, and algorithms to create ethical AI systems. Will this be the solution, or will we need completely new pathways to building and operating AI systems that align with societal values?


Speakers: Luke Munn (University of Queensland) and Erich Prem (Universität Wien & eutema)
Discussed paper
YouTube
November 22, 2022
18:00 CET
Open Societies and Democratic Sustainability in the Shadow of Big Tech
Lecture of the IWM Digital Humanism Fellowship Program.

 

November 23, 2022
9:30 – 12:00 CET
In the Shadow of Big Tech
IWM Workshop with Allison Stanger, Paul Timmers, George Metakides and Guests

 

November 25, 2022
13:00 – 15:00 CET
Who Elected Big Tech?
IWM Digital Humanism Fellow Allison Stanger speaks about technological innovation and power shifts.

 

November 15, 2022
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Is AI good or bad for the climate? It’s complicated”


With the increasing deployment of artificial intelligence (AI) tools across society, it is important to understand in which ways AI may accelerate or impede climate progress, and how various stakeholders can guide those developments. On the one hand, AI can facilitate climate change mitigation and adaptation strategies within a variety of sectors, such as energy, manufacturing, agriculture, forestry, and disaster management. On the other hand, AI can also contribute to rising greenhouse gas emissions through applications that benefit high-emitting sectors or drive increases in consumer demand, as well as via energy use associated with AI itself. In this talk, we will explore AI’s multi-faceted relationship with climate change.


Speaker: David Rolnick (McGill University, Canada)
Moderator: Peter Knees (TU Wien, Austria)
Slides
YouTube
October 18, 2022
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Rocks, Flesh, and Rockets: A Political Ecology of AI”


In this talk, Professor Crawford will share the findings from her book Atlas of AI, which maps the global impacts of large-scale computation on the environment, personal data, and human labor. She will share insights from her field work for the book, including visiting lithium mines, Amazon warehouses, and Blue Origin’s rocket base. This work gives insight into the deeper politics and planetary costs of artificial intelligence and its infrastructures, which are generally hidden from public view.


Speaker: Kate Crawford (USC Annenberg)
Moderator: Edward A. Lee (UC Berkeley, USA)
YouTube
June 28, 2022
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“AI Advances, Responsibilities, and Governance”


Surprising advances in machine learning over the last decade have provided breakthrough capabilities across a spectrum of long-term AI aspirations—and more breakthroughs are coming our way. Possibilities for achieving new forms of automation and human augmentation have captured the imagination of people and organizations across the world.  However, exuberance with AI technologies is tempered by concerns about potential costs, rough edges, and downsides of applications of AI, including challenges with reliability and safety, equity, economics, civil liberties, democracy, and human agency. I will share reflections on AI responsibilities and governance across multiple sectors with a focus on corporate responsibilities.

Speaker: Eric Horvitz (Microsoft)
Moderator: Moshe Y. Vardi (Rice University, USA)
Slides
YouTube
May 24, 2022
5:30 – 7:00 PM
(17:30) CEST
Lecture Series
“Limits of Machines, Limits of Humans”

“Rationality” in Simon’s “bounded rationality” is the principle that humans make decisions based on step-by-step (algorithmic) reasoning using systematic rules of logic to maximize utility. “Bounded rationality” is the observation that the ability of a human brain to handle algorithmic complexity and data is limited. Bounded rationality, in other words, treats a decision-maker as a machine carrying out computations with limited resources. In this talk, I will argue that the recent breakthroughs in AI demonstrate that much of what we consider “intelligence” is not based on algorithmic symbol manipulation, and that what the machines are doing more closely resembles intuitive thinking than rational decision making. Under this model, the goal of “explainable AI” is unachievable in any useful form.

Speaker: Edward A. Lee (UC Berkeley, USA)
Moderator: Stefan Woltran (TU Vienna, Austria)
Slides
YouTube

 

May 17, 2022
7:30 – 8:00 PM
(19:30) CEST

Lecture Series
“The Facebook Files”

Speaker: Frances Haugen
Moderator: Allison Stanger (Middlebury College, USA)
YouTube
May 05, 2022
5:30 – 7:00 PM
(17:30) CEST
Panel Discussion
“Algorithms. Data. Surveillance – Is There a Way Out?”


With every message, every purchase, and every click, we generate data. Often this happens quite unconsciously and even without any immediate action on our part; the boundaries between online and offline are becoming increasingly blurred. Data, its processing, and interconnectedness through artificial intelligence thus influence fundamental developments in our society. Whether digital technologies make our worst nightmares or boldest utopias come true depends on how we develop and regulate them based on democratic values. How do we, as a society, deal with data security, surveillance, and privacy? What role do politics and jurisdiction play in technology development? And how can we in Europe take advantage of the opportunities offered by digitalization? Marc Rotenberg, president and founder of the Center for AI and Digital Policy, privacy activist Maximilian Schrems from noyb – European Center for Digital Rights, and Christiane Wendehorst, law and digitalization expert and professor for civil law at the University of Vienna, discuss democracy and digitalization, legal differences between the US and Europe, and ways to take action.

Speakers: Marc Rotenberg, Christiane Wendehorst, Hannes Werthner
Moderator: Josef Broukal
YouTube
April 26, 2022
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Responsible Artificial Intelligence”


The main challenge that artificial intelligence research faces nowadays is how to guarantee the development of responsible technology, and, in particular, how to guarantee that autonomy is responsible. Social fears about the actions taken by AI can only be appeased by providing ethical certification and transparency of systems. However, this is certainly not an easy task. As we know very well in the multiagent systems field, the prediction accuracy of system outcomes has limits, as multiagent systems are actually examples of complex systems. And AI will be social: there will be thousands of AI systems interacting among themselves and with a multitude of humans; AI will necessarily be multiagent. The area of multiagent systems has developed a number of theoretical and practical tools that, properly combined, can provide a path to developing such responsible systems. In this talk, I will discuss several ideas and tools to achieve this purpose.

Speaker: Carles Sierra (Artificial Intelligence Research Institute, President EurAI)
Moderator: Oliviero Stock (FBK-IRST Trento, Italia)
Slides
YouTube
April 5, 2022
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“AI Ethics as Translational Ethics”


There is now widespread recognition that advances in AI and related technologies have deep ethical and societal implications. At the same time, there is much less consensus about what we should expect from AI ethics. In this talk, I will first argue that ethical analyses cannot be treated as a secondary or optional aspect of technology creation. No AIs are outside of the scope of ethics, though the ethical content of an AI is often different than people think. I will then argue that AI ethics should be a translational ethics: a robust, multi-disciplinary effort that starts with the practices of AI design, development, and deployment, and then develops practical guidance to produce more ethical AI. Throughout the talk, I will provide concrete examples of AI ethics as translational ethics.


Speaker: David Danks (Professor of Data Science & Philosophy, UCSD)
Moderator: Hannes Werthner (TU Vienna, Austria)
Slides
YouTube
March 29, 2022
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“How Artificial Intelligence May One Day Threaten the Political Capacity of Human Intelligence”


There is no agreement as to what intelligence is, whether human or artificial (AI). But we can hardly wait for consensus in light of rapid developments in science and technology that generate urgent normative questions about how political communities might best respond to those developments. I attempt to identify what I take to be the political capabilities of human intelligence (HI). For example, as members of a political community, individuals need to be able, mutually, to attribute responsibility for actions. But some forms of AI may eventually threaten this capacity of HI. For example, AI might tempt citizens to outsource, to technology, forms of social integration that otherwise require the mutual attribution of responsibility among citizens. To be sure, AI in some cases can contribute positively to the tasks of social integration. And if there are political dangers, they will derive not from AI as such but rather from how humans deploy it.


Speaker: Benjamin Gregg (University of Texas at Austin, USA)
Moderator: Stefan Woltran (CAIML, TU Wien, Austria)
Slides
YouTube
March 3-4, 2022

Workshop
“Towards a Research and Innovation Roadmap”


4th Workshop on Digital Humanism, March 3-4, 2022

Digital Humanism aims to ensure that the development of digital technologies focuses on human needs and interests. Following the Vienna Manifesto on Digital Humanism, this workshop will take the initiative one step further towards realizing the vision. While many of the challenges of our digital age have become evident, the solutions for overcoming them are not. Digital technologies are still disrupting our society and calling into question decades-long achievements of democracy, humanism, and the age of Enlightenment. The workshop will address how to advance digital technologies towards realizing inclusion and democratic societies.


YouTube
February 22, 2022
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Why AI is Harder Than We Think”

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.

Speaker Bio: Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).


Speaker: Melanie Mitchell (Santa Fe Institute)
Moderator: Allison Stanger (Middlebury College, USA)
Slides
YouTube
February 8, 2022
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Job Scenarios 2030: How the World of Work has Changed Around the Globe”

We are in the year 2030. What will the world of work look like? Based on today’s megatrends, such as artificial intelligence (AI), climate change, and population aging, the author develops scenarios for possible Futures of Work in different regions of the world: Robots have not replaced humans, but AI and smart machines have become indispensable parts of our working lives. Efforts to mitigate climate change may fail but still trigger an ecological transformation that leads us into a more sustainable future. Mass production might enter its last decades, and people may instead work in small shops and in more agile organizations. Can we work without jobs? In all these transformations, AI and digitalization are likely to play crucial roles. Whatever the future holds in 2030, thinking through such scenarios helps us perceive opportunities and risks today and shape the Future of Work that we want.


Speaker: Daniel Samaan (International Labour Organization, Geneva)
Moderator: George Metakides (President of the Digital Enlightenment Forum)
Slides
YouTube
January 25, 2022
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“What is a 'Truly Ethical' Artificial Intelligence? An end-to-end approach to responsible and humane technological systems”

Since 2015, an increasing number of ethical AI guidelines have been published. Despite their common goal, they leverage largely divergent values and categories. Moreover, ethical AI guidelines display a strong alignment with corporate interests and emphasize technology’s deployment. In order to develop an end-to-end ethical approach to AI, we must take into account the conditions of production of the data used to create and market artifacts. After analyzing the “mineralogical layer” of intelligent technologies (i.e., the use of rare minerals embedded in devices) and the environmental costs of producing increasingly large models, we focus on the geography of the platform labor necessary to train, verify, and sometimes impersonate AI systems. The global distribution of AI companies (based in higher-income countries) and that of data labor (located in lower-income countries) reproduces legacy global inequalities in terms of wealth, power, and economic influence.


Speaker: Antonio Casilli (School of telecommunications engineering | Polytechnic Institute of Paris)
Moderator: Enrico Nardelli (University of Roma ‘Tor Vergata’, Italy)
Slides
YouTube
November 16, 2021
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Transition From Labor-intensive to Tech-intensive Surveillance in China”

In the post-Tiananmen period the Chinese government has devoted enormous resources to the construction of a surveillance state. The initial efforts were directed at building organizational networks and coordination mechanisms to monitor a large number of individuals deemed threats to the rule of the Communist Party and public safety. Due to the lack of technological resources, the surveillance state in the 1990s was labor-intensive but highly effective. The Chinese government developed sophisticated surveillance tactics to monitor individuals and public venues even without hi-tech equipment. The transition to tech-intensive surveillance began at the end of the 1990s and accelerated in the 2010s due to the availability of new technologies and generous funding from the state. Yet, despite the adoption of hi-tech tools, the Chinese surveillance state remains a labor-intensive and organization-intensive system. The unrivaled organizational capacity of the Communist Party, not fancy new technology, is the secret of the effectiveness of the Chinese surveillance state.


Speaker: Minxin Pei (Claremont McKenna College, USA)
Moderator: Susan J. Winter (University of Maryland, College of Information Studies, USA)

November 9, 2021
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Artificial Intelligence and Democratic Values: What is the Path Forward?”

Countries and international organizations are moving quickly to adopt AI strategies. More than fifty nations have signed on to the OECD AI Principles or the G20 AI Guidelines. The EU is developing a comprehensive regulation for AI, and UNESCO is about to adopt an AI Ethics Recommendation. Many of these policy frameworks share similar objectives – that AI should be “human-centric” and “trustworthy,” and that AI systems should ensure fairness, accountability, and transparency. Looming in the background of this policy debate is the deployment of AI techniques, such as facial surveillance and social scoring, that implicate human rights and democratic values. Therefore we should ask: how effectively do these policy frameworks address these new challenges? What is the difference between a country endorsing a policy framework and implementing one? Will countries be able to enforce actual prohibitions or “red lines” on certain deployments? And how do we assess a country’s national AI strategy against democratic values? These questions arise in the context of the CAIDP Report “Artificial Intelligence and Democratic Values,” a comprehensive review of AI policies and practices in 30 countries.


Speaker: Marc Rotenberg (Center for AI and Digital Policy, USA)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides
YouTube
October 12, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Human-Centered AI: A New Synthesis”

A new synthesis is emerging that integrates AI technologies with HCI approaches to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.

Educators, designers, software engineers, product managers, evaluators, and government agency staffers can build on AI-driven technologies to design products and services that make life better for the users. These human-centered products and services will enable people to better care for each other, build sustainable communities, and restore the environment. The passionate advocates of HCAI are devoted to furthering human values, rights, justice, and dignity, by building reliable, safe, and trustworthy systems.

The talk will include examples, references to further work, and discussion time for questions. These ideas are drawn from Ben Shneiderman’s forthcoming book (Oxford University Press, January 2022). Further information at: https://hcil.umd.edu/human-centered-ai


Speaker: Ben Shneiderman (University of Maryland, USA)
Moderator: Allison Stanger (Middlebury College, USA)
Slides
YouTube
June 29, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Should we preserve the world's software history, and can we?”

Cultural heritage is the legacy of physical artifacts and intangible attributes of a group or society that are inherited from past generations, maintained in the present and bestowed for the benefit of future generations. What role does software play in it? We claim that software source code is an important product of human creativity, and embodies a growing part of our scientific, organisational and technological knowledge: it is a part of our cultural heritage, and it is our collective responsibility to ensure that it is not lost. Preserving the history of software is also a key enabler for reproducibility of research, and as a means to foster better and more secure software for society.

This is the mission of Software Heritage, a non-profit organization dedicated to building the universal archive of software source code, catering to the needs of science, industry, and culture, for the benefit of society as a whole. In this presentation we will survey the principles and key technology used in the archive, which contains over 10 billion unique source code files from some 160 million projects worldwide.


Speaker: Roberto Di Cosmo (INRIA, France)
Moderator: Edward A. Lee (UC Berkeley, USA)

YouTube
June 15, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Digital Humanism and Democracy in Geopolitical Context”

Government and corporate surveillance in autocracies have very different ethical ramifications than the same actions do in liberal democracies. Open societies protect individual rights and distinguish between the public and private spheres. Neither condition pertains in China, an instantiation of what the philosopher Elizabeth Anderson calls private government. Ignoring the significance of such differences, which are only reinforced by differing business-government relationships in the United States, EU, and China, is likely to undercut both liberal democratic values and US-European national security.


Speaker: Allison Stanger (Middlebury College, USA)
Moderator & Respondent: Moshe Y. Vardi (Rice University, USA)
Slides
YouTube
June 8, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“The New EU Proposal for AI Regulation”

The European Commission has published its proposal for a new AI regulation. It takes a risk-based approach with different rules for low-risk applications of AI, stricter rules for riskier systems, and a complete ban for specified types of AI applications. Regulatory measures include data governance, transparency, information to users, and human oversight. The proposal also includes measures for supporting AI innovation through regulatory sandboxes. The proposed regulation was met with great interest and criticism. It started an intense debate about its appropriateness, omissions, specificity, and practicability. With this lecture and the following discussion, we aim to improve the understanding of the proposal’s main objectives, methods, and instruments and contribute to the public debate of how AI should be regulated in the future.


Speaker: Irina Orssich (European Commission)
Moderator & Respondent: Erich Prem (eutema & TU Wien, Austria)
Slides
YouTube
May 27, 2021
5:00 – 7:00 PM
(17:00) CEST
Vienna Gödel Lecture
“Technology is Driving the Future, But Who Is Steering?”

The benefits of computing are intuitive. Computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous. But computing is not a game—it is real—and it brings with it not only societal benefits but also significant societal costs, such as labor polarization, disinformation, and smartphone addiction.

The common reaction to this crisis is to label it as an “ethical crisis”, and the proposed response is to add courses in ethics to the academic computing curriculum. This talk will argue that the ethical lens is too narrow. The real issue is how to deal with technology’s impact on society. Technology is driving the future, but who is doing the steering? Moshe Vardi will show how these issues relate to the Vienna Circle and the recently declared Vienna Manifesto on Digital Humanism.


More details at https://informatics.tuwien.ac.at/news/2020
May 18, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“The Platform Economy and Europe: Between Regulation and Digital Geopolitics”

Platforms represent a new structure for organising economic and social activities and appear as a hybrid between a market and a hierarchy. They are matchmakers like traditional markets, but they are also companies that are heavy in assets and often quoted on the stock exchange. Platforms, therefore, are not simply technological corporations but a form of ‘quasi-infrastructure’. Indeed, in their coordination function, platforms are as much an institutional form as a means of innovation. Ever since the launch of the Digital Single Market in 2015, platforms have been on the EU’s radar for various matters such as consumer protection in general, competition, and lately data protection. In addition to genuine policy concerns, the debate on platforms and on other digital transformation phenomena has gained higher political status in the form of the debate on the digital and data sovereignty of Europe vis-à-vis the US and China. Platforms have also managed for some time to fend off regulation by adopting a lobbying strategy based on rhetorical framing. This was revived during the pandemic, when dominant platforms extolled the data they provided to governments as an important tool to track and contain the spread of the virus.

In this seminar, besides discussing these higher-level issues, I will also address more specific ones such as data protection, extraction of behavioral surplus, consumer protection, and competition issues. I conclude by comparing two different regulatory approaches: the application of the precautionary principle as opposed to a cost-benefit assessment of intervention.


Speaker: Cristiano Codagnone (Universitat Oberta de Catalunya | Università degli studi di Milano)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides
YouTube
May 4, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Ethics in AI: A Challenging Task”

In the first part we cover four current specific challenges: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., bio-metric based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); and (4) stupid models (e.g., Signal, minimal adversarial AI). These examples do have a personal bias but set the context for the second part, where we address four generic challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences (e.g., Christian vs. Muslim); (3) regulation (e.g., privacy, antitrust); and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future.

This might be of interest to you: Towards intellectual freedom in an AI Ethics Global Community


Speaker: Ricardo Baeza-Yates (Institute for Experiential AI, Northeastern University, USA)
Moderator: Carlo Ghezzi (Politecnico di Milano, Italy)
Slides
YouTube
Apr 20, 2021
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
“Vaccination Passports – a tool for liberation or the opposite?”

The European Commission and its member states are discussing “Green Passports” as a way of opening up after lockdown. They have been proposed as tools to verify Covid immunization status and thus help to accelerate the path to normality. However, similar to the contact tracing apps, there are numerous issues and concerns about what these apps should be and how to make them safe, reliable, and privacy-preserving. Director Ron Roozendaal from the Dutch Ministry of Health will talk about digital solutions, important design decisions, and the way forward. Prof. Nikolaus Forgo from the Department of Innovation and Digitalisation in Law will be the respondent.


Speaker: Ron Roozendaal (Ministry of Health, Welfare and Sports, The Netherlands)
Respondent: Nikolaus Forgo (University of Vienna, Austria)
Moderator: Walter Hötzendorfer (Research Institute – Digital Human Rights Center)
Slides
YouTube
Apr 13, 2021
5:00 – 6:30 PM
(17:00) CEST
Lecture Series
“(Gender) Diversity and Inclusion in Digital Humanism”

Digital Humanism is ‘focused on ensuring that technology development remain centered on human interests’. This panel focuses on ‘which humans’, and how different voices and interests can and should be included in the development and application of digital technologies.

Talks:

Sally Wyatt will examine how the inclusion of women in computer science and related fields has declined over the past 50 years. She will argue that including women is partly a matter of social justice, of providing women with access to interesting and well-paid jobs. Further, it is a matter of epistemic justice: including women’s perspectives and experiences could lead to better and more inclusive technologies.

Jeanne Lenders will explain how the Commission is stepping up efforts for gender equality in research and innovation, including on women’s participation in STEM and the integration of gender perspectives into research and innovation content. She will highlight the strengthened provisions for gender equality in Horizon Europe, the next Framework Programme for Research and Innovation, and showcase examples of the Commission’s ‘Gendered Innovations’ Expert Group.

Hinda Haned will discuss different definitions of bias, and bias mitigation through so-called “fairness algorithms”. Drawing from practical examples, she will argue that the most fundamental question we are facing as researchers and practitioners is not how to fix bias with new technical solutions, but whether we should be designing and deploying potentially harmful automated systems in the first place.

Judy Wajcman and Erin Young will discuss the gender job gap in AI. The fields of artificial intelligence and data science have exploded as the world is increasingly being built around smart machines and automated systems. Yet the people whose work underpins that vision are far from representative of the society those systems are meant to serve. Their report shows the extent of gender disparities in careers, education, jobs, seniority, status and skills in the AI and data science fields. They argue that fixing the gender job gap in AI is not only a fundamental issue of economic equality, but also about how the world is designed and for whom.


Panelists: Sally Wyatt (Maastricht University, The Netherlands), Jeanne Lenders (European Commission), Hinda Haned (University of Amsterdam, The Netherlands), Judy Wajcman (London School of Economics | The Alan Turing Institute, UK), Erin Young (The Alan Turing Institute, UK)
Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides: Hinda Haned; Jeanne Lenders; Judy Wajcman & Erin Young; Sally Wyatt
YouTube
Mar 23, 2021
5:00 – 6:30 PM
(17:00) CET
Lecture Series
“Digital Society, Social Justice and Academic Education”

ABSTRACT: An important open question of Digital Humanism is how ethical and social aspects of digital technologies and associated matters of human values and social justice can be properly handled in academic research and education. One approach, also discussed in preceding DigHum Lectures, is to create interdisciplinary courses on ethics and/or philosophy of technology such as “Tech Ethics”. This panel investigates approaches that have their roots in direct collaboration between academia and outside (underprivileged, marginalized) communities as an integral element of research and education. Case examples and experiences from three different continents are discussed, which also gives some perspective on the simultaneous universality and contextuality of issues of human values and social justice.

TALKS:

Knowledge for Service: Digital Technology Positives and Negatives in African Rural Societies
Saa Dittoh

Many decades ago (and possibly now in some areas) in rural Africa, communal methods of information sharing were not always face-to-face; some were virtual, through high-pitched voices and/or loud-sounding “talking drums” that gave “coded information”. No wonder that many African rural societies have no reservations about adopting appropriate modern digital technologies. The rapid advance in digital technology has been positive in many ways, but there are several negative and damaging aspects that threaten the values, cultures and even the very existence of some African rural societies. In this talk I discuss those threats and suggest ways to counter them. This talk further highlights how knowledge can be put to service and how university students can be engaged in this.

Digital Sociotechnical Innovation and Indigenous Knowledge
Narayanan Kulathuramaiyer

In this talk I will discuss how university research and education on digital technology can be a factor in empowering under-served communities. In particular, I describe the eBario program, a long-standing university-community partnership between the rural Kelabit community, one of Borneo’s ethnic minorities, and the University Malaysia Sarawak. This program to bridge the digital divide started in 1998, with the indigenous Kelabit community taking on the information and knowledge creation pathway as a way forward. Over the past two decades the program has evolved to become recognized as a living laboratory, influencing practice and policy, with for example a role in poverty reduction. As an ICT for Development model, eBario has been replicated across eight other sites in Peninsular Malaysia and the East Malaysian states of Sabah and Sarawak. The biggest achievement, however, resides in the development of community scholars and the community-led life-long learning initiatives that continue to this day.

Digital Divide, Inclusion and Community Service Learning
Anna Bon

Community service learning (CSL) is an educational approach that we have further developed, in collaboration with universities and stakeholders in the Global South, into a research and education model dubbed ICT4D 3.0. This model combines problem-solving and situational learning with meaningful service to communities and society. In computer science and artificial intelligence education – traditionally purely technologically oriented – ICT4D 3.0 integrates CSL’s societal and ethical principles with user-centered design and socio-technical problem-solving. Being exposed to complex, societal real-world problems, students learn by exploring, reflecting, and co-designing in close interaction with communities in a real-world environment. This type of education provides a rich learning environment for “Bildung”.


Panelists: Saa Dittoh (UDS, Ghana), Narayanan Kulathuramaiyer (UNIMAS, Malaysia), Anna Bon (VU Amsterdam, the Netherlands)
Moderator: Hans Akkermans (w4ra.org, the Netherlands)
Slides: Hans Akkermans; Saa Dittoh; Narayanan Kulathuramaiyer; Anna Bon
YouTube

Feb 23, 2021
5:00 – 6:30 PM
(17:00) CET
Lecture Series
“Preventing Data Colonialism without resorting to protectionism - The European strategy”

This panel builds on prior DigHum panels, including the one entitled “Digital Sovereignty”. The particular focus of this one is on data and the related threats and opportunities. The threat of “data colonialism” describes a possible situation in which there is unbridled access (extraction) and processing/exploitation of the data of European citizens and businesses by non-European companies and, potentially via these, foreign powers.

Simply as an illustration: today the data of virtually all European citizens and companies that use cloud services are accessible to non-European cloud service providers, to which their countries of origin in turn have potential access (the US via the recent Cloud Act and China via … fiat). The trap on the other side is to ring-fence such data within Member State or EU boundaries, thus severely limiting their value to anybody.

In her keynote address on Feb 4th at the Digital Masters 2021, EC President Ursula von der Leyen said: “In Europe we are sitting on a gold mine of machine generated data (value estimated at 1.5 trillion) which remains largely unexploited due primarily to the absence of clear rules for how a company can access, buy or sell such data across EU Member State borders.”

It is in this context that a number of European initiatives have been designed so as to constitute a coherent strategy. To itemize: The GAIA-X consortium aims, in the context of a broader effort, to lead to the creation of a European cloud characterized by portability, interoperability and security. Non-European cloud providers participate as well. The EC will launch the European Alliance for Industrial Data and Cloud aiming at a European Federated Cloud.

On the regulatory front, we have: the DSA, which aims to define the responsibilities of all digital players; the DMA, which aims to set rules for gatekeepers so that there is an online world accessible to all with a single, clear set of rules; and the European Data Governance Act, proposed last November with the aim of strengthening data sharing mechanisms across Europe. And last but certainly not least, the coming Data Act, which aims to arbitrate in a trusted fashion how the resulting benefits are shared. Together with the Horizon research instruments and the investment instruments foreseen for the creation of dedicated European data spaces (e.g. for health), all of the above comprise a complex arsenal which the panel has the knowledge and experience to help us understand better.

The key question, of course, is how all these initiatives can come together to form a coherent and effective strategy, and with what expected “timeline of impact”. Furthermore, as data underpins AI and practically all major coming digital technology advances, it is this impact and its timeline that will be crucial for achieving and maintaining European digital sovereignty in the emerging geopolitical context.


Panelists: Pilar del Castillo (European Parliament), Lokke Moerel (Tilburg University, The Netherlands), Yvo Volman (European Commission)
Moderator: George Metakides (President of Digital Enlightenment Forum)
Slides – Yvo Volman; Slides – Lokke Moerel
YouTube
Feb 2, 2021
5:00 – 6:00 PM
(17:00) CET
Lecture Series
“Freedom of Expression in the Digital Public Sphere”

A substantial portion of contemporary public discourse takes place over online social media platforms such as Facebook, YouTube and TikTok. Accordingly, these platforms form a core component of the digital public sphere and, although subject to private ownership, constitute a digital infrastructural resource that is open to members of the public.

The content moderation systems deployed by such platforms have the potential to influence and shape public discourse by mediating what members of the public are able to see, hear, and say online. Over time, these rules may have a norm-setting effect, shaping the conduct and expectations of users as to what constitutes “acceptable” discourse. Thus, the design and implementation of content moderation systems can have a powerful impact on the preservation of users’ freedom of expression. The emerging trend towards the deployment of algorithmic content moderation (ACM) systems gives rise to urgent concerns on the need to ensure that content moderation is regulated in a manner that safeguards and fosters robust public discourse.

This lecture develops upon the research carried out within the framework of the Research Sprint on AI and Platform Governance (2020) organized by the HIIG, Berlin (for more information on the research project and its key findings see, Freedom of Expression in the Digital Public Sphere (graphite.page)). It explores how the proliferation of ACM poses increased risks for safeguarding the freedom of expression in the digital public sphere and proposes legal and regulatory strategies for ensuring greater public oversight and accountability in the design and implementation of content moderation systems by social media platforms.


Speaker: Sunimal Mendis (Tilburg University, The Netherlands)
Respondent: Christiane Wendehorst (University of Vienna, Austria)
Moderator: Erich Prem (eutema & TU Wien, Austria)
YouTube
Jan 26, 2021
5:00 – 6:30 PM
(17:00) CET
Lecture Series
“Digital Superpowers and Geopolitics”

In cyberspace, the modern “colonial powers” are not nations but multinational companies, mostly American but with strong competition emerging in China. These companies control the digital platforms central to most people’s social networks, communications, entertainment, and commerce and, through them, have collected and continue to collect limitless information about our friends, colleagues, preferences, opinions, and secrets. With the knowledge obtained by processing this information/data, these companies have built some of the world’s most profitable businesses, turning little pieces of information, given to them by uninformed users in return for “free services”, into extremely valuable targeted advertising. These companies, moreover, endeavor to operate in the space between countries, with very limited responsibility/accountability to governments. At the same time, governments such as those of China and the US have laws requiring such companies to divulge data obtained from their customers anywhere in the world. Does this pose a threat to national or European sovereignty?
This panel will endeavor to appraise the current situation, assess the potential impact of actions already initiated as well as explore new ones.


Panelists: June Lowery-Kingston (European Commission), Jan-Hendrik Passoth (ENS / Viadrina, Germany), Michael Veale (University College London, UK)
Moderator: James Larus (EPFL, Switzerland)
YouTube
Dec 15, 2020
5:00 – 6:00 PM
(17:00) CET
Lecture Series
Julian Nida-Rümelin (LMU München, Germany)
“Philosophical Foundations of Digital Humanism”

In this talk Julian Nida-Rümelin will develop the main features of what he calls ‘digital humanism’ (Nida-Rümelin/Weidenfeld 2018), based on a general philosophical account of humanistic theory and practice (Nida-Rümelin 2016): (1) preconditions of human authorship (JNR 2019); (2) human authorship in times of digital change; (3) ethical implications.

J. Nida-Rümelin: Humanistische Reflexionen (Berlin: Suhrkamp 2016)
J. Nida-Rümelin/N. Weidenfeld: Digitaler Humanismus. Eine Ethik für das Zeitalter der Künstlichen Intelligenz (München: Piper 2018)
J. Nida-Rümelin: Structural Rationality and other essays on Practical Reason (Berlin / New York: Springer International 2019)


Moderator: Edward A. Lee (UC Berkeley, USA)
YouTube
Nov 19-20, 2020
4:00 – 6:45 PM
(16:00) CET
Workshop
“Strategies for a Humanistic Digital Future”
YouTube
Nov. 3, 2020
5:00 – 6:30 PM
(17:00) CET
Lecture Series
“Ethics and IT: How AI is Reshaping our World”

This panel debate will investigate the development of AI from a philosophical perspective. In particular, we will discuss the ethical implications of AI and the global challenges raised by the widespread adoption of socio-technical systems powered by AI tools. These challenges will be addressed by the three speakers from different cultural and geographical perspectives. We invite the audience to join the debate, to deepen together with the panel our understanding of how AI is reshaping the world and our awareness of the challenges we will face in the future.

Safety, Fairness, and Visual Integrity in an AI-shaped world
Deborah Johnson

AI algorithms are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although the use of algorithms to make decisions has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains in which safety and fairness are important. AI algorithms are also used to produce tools that enable individuals to do things they would not otherwise be able to do. In the case of synthetic media technologies, users are able to produce deepfakes that challenge the integrity of visual experience. In my presentation, I will discuss safety, fairness, and visual integrity as three ethical issues arising in an AI-shaped world.

Global Challenges for AI Ethics
Guglielmo Tamburrini

The Covid-19 pandemic is forcing us to address some global challenges concerning human well-being and fundamental rights protection. This panel presentation explores ethically ambivalent roles that AI plays in connection with two additional global challenges: (1) climate warming and (2) threats to international peace. 1. AI has a significant carbon footprint. Should one set quantitative limits on the energy consumption required for AI model training? And if so, how should one distribute AI carbon quotas among states, businesses, and research? Should one limit the collection of user data to feed into data-hungry AI systems? And who should be in charge of deciding which data to collect, preserve or get rid of for the sake of environmental protection? 2. An AI arms race is well under way, ranging from the development of autonomous weapons systems to the development of AI systems for discovering software vulnerabilities and waging cyberconflicts. Should the weaponization of AI be internationally regulated? And if so, how should one interpret and apply within this domain human rights, humanitarian principles and the UN’s fundamental goal of preserving world peace and stability? This panel presentation is rounded out by looking at EU efforts to cope with some of these global ethical issues.

Building Ethical AI for the Human-AI Symbiotic Society
Yi Zeng

In this talk, I will provide a global landscape of AI Ethical Principles and investigate how these efforts complement each other instead of competing with each other. I will then talk about concrete groundings of AI ethical principles and introduce technical and social efforts in different domains. Finally, I will extend the discussion to long-term A(G)I ethical challenges and a possible positive path.


Deborah G. Johnson (University of Virginia, USA), Guglielmo Tamburrini (University of Naples, Italy), Yi Zeng (Chinese Academy of Sciences, China)
Moderator: Viola Schiaffonati (Politecnico di Milano, Italy)
YouTube
Oct. 20, 2020
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
Elissa M. Redmiles (Microsoft Research)
“Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps”

A growing number of contact tracing apps are being developed to complement manual contact tracing. Yet, for these technological solutions to benefit public health, users must be willing to adopt these apps. While privacy was the main consideration of experts at the start of contact tracing app development, privacy is only one of many factors in users’ decisions to adopt these apps. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user — observing people’s preferences and inferring best practice from that behavior — instead of exclusively relying on experts’ normative decisions. This talk presents an empirically-validated framework of the inputs that factor into a user’s decision to adopt COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users’ likelihood to install COVID-19 apps based on quantifications of these factors, I show how high the bar is for these apps to achieve adoption and suggest user-driven directions for ethically encouraging adoption.


Moderator: James Larus (EPFL, Switzerland)
Slides
YouTube
Oct. 06, 2020
5:00 – 6:30 PM
(17:00) CEST
Lecture Series
Paul Timmers, Ciaran Martin, Margot Dor, and Georg Serentschy
“Digital Sovereignty – Navigating Between Scylla and Charybdis”

This panel debate will have a hard and critical look at the sense and nonsense of digital sovereignty.

We will debunk some of the terminology that is being thrown around in debates on digital sovereignty, analyse the good, the bad, and the ugly of the geopolitical technology battles between the USA and China, and provide specific insight into two harbingers of the emerging perceptions of sovereignty in cyberspace: global telecommunications and global standardization.

We invite the audience to join the debate, to deepen together with the panel our understanding of how Europe can best navigate the good, the bad and the ugly of geopolitics and the digital world.

Prof Paul Timmers will set the scene with a critical reflection on where we are in the debate on ‘digital sovereignty’ and the consequences for EU policy development. Paul Timmers is at the European University Cyprus, a Research Associate at Oxford University, Senior Advisor at EPC, a former Director at the European Commission, and a leading thinker on strategic autonomy and digital sovereignty.

Subsequently, we will engage in a panel and audience discussion where three leading cybersecurity personalities will put forward their response to the scene setter:

Prof Ciaran Martin, Oxford University, former head of the UK NCSC (National Cyber Security Centre), a world-leading figure in cybersecurity, recently interviewed by the Financial Times on the east-west split over the internet.

Dr Margot Dor, Strategy Director of ETSI, a European Standards Organization, and driver of the Carl Bildt Report on Strategic Standardisation for Europe in the Digital Era.

Dr Georg Serentschy, advisor on telecoms and IT, senior advisor at SquirePattonBoggs, member of the Board of Directors of the International Telecommunications Society, and former Head of BEREC (European Telecoms Regulators).


Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides – Paul Timmers; Slides – Margot Dor
YouTube
Sept. 22, 2020
5:00 – 6:00 PM
(17:00) CEST
Lecture Series
Barbara J. Grosz (Harvard, USA):
“An AI and Computer Science Dilemma: Could I? Should I?”

Computing technologies have become pervasive in daily life. Predominant uses of them involve communities rather than isolated individuals, and they operate across diverse cultures and populations. Systems designed to serve one purpose may have unintended harmful consequences. To create systems that are “society-compatible”, designers and developers of innovative technologies need to recognize and address the ethical considerations that should constrain their design. For students to learn to think not only about what technology they could create, but also whether they should create that technology, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard’s Embedded EthiCS program, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will describe the goals of Embedded EthiCS, the way the program works, lessons learned and challenges to sustainable implementations of such a program across different types of academic institutions. This approach was motivated by my experiences teaching the course “Intelligent Systems: Design and Ethical Challenges”, which I will describe briefly first.


Moderator: Erich Prem (eutema & TU Wien, Austria)
YouTube
September 8, 2020 Lecture Series
Stuart Russell (University of California, Berkeley, USA):
“How Not to Destroy the World with Artificial Intelligence!“

I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.


Moderator: Helga Nowotny (Chair of the ERA Council Forum Austria and Former President of the ERC)
Slides
YouTube
July 14, 2020 Lecture Series
“Corona Contact Tracing – the Role of Governments and Tech Giants”
Alfonso Fuggetta (Politecnico di Milano, Italy), James Larus (EPFL, Switzerland)
Moderator: Jeff Kramer (Imperial College London, UK)
YouTube
June 9, 2020 Lecture Series
Moshe Vardi (Rice University, USA):
Slides
YouTube

May 14, 2020
Workshop
“Digital Humanism: Informatics in Times of COVID-19”
YouTube
April 4, 2019 Workshop
“Vienna Workshop on Digital Humanism”
YouTube