Upcoming Events:

April 14, 2023, 11:00 AM – 12:00 PM (11:00) CEST
Europa im Diskurs
"Will artificial intelligence soon be smarter than we are?"
They answer emails, write homework assignments, support scientific research, and, on request, even try their hand at poetry and drama. The latest chatbots, text-based dialogue systems such as ChatGPT, already handle certain tasks better and faster than humans can. Is an age-old fear of humanity becoming reality? Are these programs on the verge of replacing us? Will they end up being more intelligent than the smartest among us? We discuss these questions with experts and artists at EUROPA IM DISKURS.
Panelists: Helga Nowotny (Former President of the ERC), Peter Knees (TU Wien), Lisz Hirn, Maya Pindeu, Jörg Piringer
Moderator: Petra Stuiber

Past Events:

March 28, 2023, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Statement of the Digital Humanism Initiative on ChatGPT and a possible new online world"
Speakers: Helga Nowotny (Former President of the ERC), George Metakides (President of Digital Enlightenment Forum, Visiting Professor, University of Southampton)
Moderator: Hannes Werthner (TU Wien)
Slides

March 9, 2023, 5:00 – 6:00 PM (17:00) CET
Lecture Series "On ChatGPT"
Speaker: Gary Marcus (garymarcus.com)
Moderator: Helga Nowotny (Former President of the ERC)

March 7, 2023, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Seize the Means of Computation"
Speaker: Cory Doctorow (craphound.com)
Moderator: Allison Stanger (Middlebury College, USA)

February 21, 2023, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Abstracted Power and Responsibility"
Speaker: Tina Peterson (University of Texas at Austin)
Moderator: Carlo Ghezzi (Politecnico di Milano, Italy)
Slides and presented paper

January 24, 2023, 5:00 – 6:00 PM (17:00) CET
Lecture Series "'AI For Good' Isn't Good Enough: A Call for Human-Centered AI"
Speaker: James A. Landay (Stanford University)
Moderator: Moshe Y. Vardi (Rice University, USA)
Slides

December 19, 2022, 5:00 – 6:00 PM (17:00) CET
Online Viennese Digital Humanism Runde
"Responsible AI in China"
Speaker: Pingjing Yang (University of Illinois)
Slides

December 6, 2022, 2:00 – 3:00 PM (14:00) CET
Lecture Series "The Uselessness of AI Ethics"
Speakers: Luke Munn (University of Queensland) and Erich Prem (Universität Wien & eutema)
Discussed paper

November 22, 2022, 18:00 CET
"Open Societies and Democratic Sustainability in the Shadow of Big Tech"
Lecture of the IWM Digital Humanism Fellowship Program.

November 23, 2022, 9:30 – 12:00 CET
"In the Shadow of Big Tech"
IWM Workshop with Allison Stanger, Paul Timmers, George Metakides and guests.

November 25, 2022, 13:00 – 15:00 CET
"Who Elected Big Tech?"
IWM Digital Humanism Fellow Allison Stanger speaks about technological innovation and power shifts.

November 15, 2022, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Is AI good or bad for the climate? It's complicated"
Speaker: David Rolnick (McGill University, Canada)
Moderator: Peter Knees (TU Wien, Austria)
Slides

October 18, 2022, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Rocks, Flesh, and Rockets: A Political Ecology of AI"
Speaker: Kate Crawford (USC Annenberg)
Moderator: Edward A. Lee (UC Berkeley, USA)

June 28, 2022, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "AI Advances, Responsibilities, and Governance"
Moderator: Moshe Y. Vardi (Rice University, USA)
Slides

May 24, 2022, 5:30 – 7:00 PM (17:30) CEST
Lecture Series "Limits of Machines, Limits of Humans"
"Rationality" in Simon's "bounded rationality" is the principle that humans make decisions based on step-by-step (algorithmic) reasoning, using systematic rules of logic to maximize utility. "Bounded rationality" is the observation that the ability of a human brain to handle algorithmic complexity and data is limited. Bounded rationality, in other words, treats a decision-maker as a machine carrying out computations with limited resources. In this talk, I will argue that the recent breakthroughs in AI demonstrate that much of what we consider "intelligence" is not based on algorithmic symbol manipulation, and that what the machines are doing more closely resembles intuitive thinking than rational decision making. Under this model, the goal of "explainable AI" is unachievable in any useful form.
Speaker: Edward A. Lee (UC Berkeley, USA)
Moderator: Stefan Woltran (TU Vienna, Austria)
Slides

Lecture Series "The Facebook Files"
Speaker: Frances Haugen
Moderator: Allison Stanger (Middlebury College, USA)

May 5, 2022, 5:30 – 7:00 PM (17:30) CEST
Panel Discussion "Algorithms. Data. Surveillance – Is There a Way Out?"

April 26, 2022, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Responsible Artificial Intelligence"
Moderator: Oliviero Stock (FBK-IRST Trento, Italy)
Slides

April 5, 2022, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "AI Ethics as Translational Ethics"
Speaker: David Danks (Professor of Data Science & Philosophy, UCSD)
Moderator: Hannes Werthner (TU Vienna, Austria)
Slides

March 29, 2022, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "How Artificial Intelligence May One Day Threaten the Political Capacity of Human Intelligence"
Speaker: Benjamin Gregg (University of Texas at Austin, USA)
Moderator: Stefan Woltran (CAIML, TU Wien, Austria)
Slides

March 3–4, 2022
Workshop "Towards a Research and Innovation Roadmap"

February 22, 2022, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Why AI is Harder Than We Think"
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI Spring") and periods of disappointment, loss of confidence, and reduced funding ("AI Winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable—in short, more intelligent.
Speaker Bio: Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).
Speaker: Melanie Mitchell (Santa Fe Institute)
Moderator: Allison Stanger (Middlebury College, USA)
Slides

February 8, 2022, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Job Scenarios 2030: How the World of Work Has Changed Around the Globe"
We are in the year 2030. What will the world of work look like in 2030? Based on today's megatrends – artificial intelligence (AI), climate change, population aging, and others – the author develops scenarios for possible Futures of Work in different regions of the world: Robots have not replaced humans, but AI and smart machines have become indispensable parts of our working lives. Efforts to mitigate climate change may fail but still trigger an ecological transformation that leads us into a more sustainable future. Mass production might enter its last decades, but people may instead work in small shops and in more agile organizations. Can we work without jobs? In all these transformations, AI and digitalization are likely to play crucial roles. Whatever the future will be in 2030, thinking through such scenarios helps us perceive opportunities and risks today and shape the Future of Work that we want.
Speaker: Daniel Samaan (International Labour Organization, Geneva)
Moderator: George Metakides (President of the Digital Enlightenment Forum)
Slides

January 25, 2022, 5:00 – 6:00 PM (17:00) CET
Lecture Series "What is a 'Truly Ethical' Artificial Intelligence? An end-to-end approach to responsible and humane technological systems"
Since 2015, an increasing number of ethical AI guidelines have been published. Despite their common goal, they leverage largely divergent values and categories. Moreover, ethical AI guidelines display a strong alignment with corporate interests and emphasize technology's deployment. In order to develop an end-to-end ethical approach to AI, we must take into account the conditions of production of the data used to create and market artifacts. After analyzing the "mineralogical layer" of intelligent technologies (i.e., the use of rare minerals embedded in devices) and the environmental costs of producing increasingly large models, we focus on the geography of the platform labor necessary to train, verify, and sometimes impersonate AI systems. The global distribution of AI companies (based in higher-income countries) and that of data labor (located in lower-income countries) reproduce legacy global inequalities in terms of wealth, power, and economic influence.
Speaker: Antonio Casilli (School of Telecommunications Engineering, Polytechnic Institute of Paris)
Moderator: Enrico Nardelli (University of Roma 'Tor Vergata', Italy)
Slides

November 16, 2021, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Transition From Labor-Intensive to Tech-Intensive Surveillance in China"
In the post-Tiananmen period, the Chinese government has devoted enormous resources to the construction of a surveillance state. The initial efforts were directed at building organizational networks and coordination mechanisms to monitor a large number of individuals deemed threats to the rule of the Communist Party and to public safety. Due to the lack of technological resources, the surveillance state in the 1990s was labor-intensive but highly effective. The Chinese government developed sophisticated surveillance tactics to monitor individuals and public venues even without high-tech equipment. The transition to tech-intensive surveillance began at the end of the 1990s and accelerated in the 2010s due to the availability of new technologies and generous funding from the state. Yet, despite the adoption of high-tech tools, the Chinese surveillance state remains a labor-intensive and organization-intensive system. The unrivaled organizational capacity of the Communist Party, not fancy new technology, is the secret of the effectiveness of the Chinese surveillance state.
Speaker: Minxin Pei (Claremont McKenna College, USA)
Moderator: Susan J. Winter (University of Maryland, College of Information Studies, USA)

November 9, 2021, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Artificial Intelligence and Democratic Values: What is the Path Forward?"
Countries and international organizations are moving quickly to adopt AI strategies. More than fifty nations have signed on to the OECD AI Principles or the G20 AI Guidelines. The EU is developing a comprehensive regulation for AI, and UNESCO is about to adopt an AI Ethics Recommendation. Many of these policy frameworks share similar objectives – that AI should be "human-centric" and "trustworthy," and that AI systems should ensure fairness, accountability, and transparency. Looming in the background of this policy debate is the deployment of AI techniques, such as facial surveillance and social scoring, that implicate human rights and democratic values. We should therefore ask: how effectively do these policy frameworks address these new challenges? What is the difference between a country endorsing a policy framework and implementing one? Will countries be able to enforce actual prohibitions or "red lines" on certain deployments? And how do we assess a country's national AI strategy against democratic values? These questions arise in the context of the CAIDP Report "Artificial Intelligence and Democratic Values," a comprehensive review of AI policies and practices in 30 countries.
Speaker: Marc Rotenberg (Center for AI and Digital Policy, USA)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides

October 12, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Human-Centered AI: A New Synthesis"
A new synthesis is emerging that integrates AI technologies with HCI approaches to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.
Speaker: Ben Shneiderman (University of Maryland, USA)
Moderator: Allison Stanger (Middlebury College, USA)
Slides

June 29, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Should we preserve the world's software history, and can we?"
Cultural heritage is the legacy of physical artifacts and intangible attributes of a group or society that are inherited from past generations, maintained in the present, and bestowed for the benefit of future generations. What role does software play in it?
We claim that software source code is an important product of human creativity and embodies a growing part of our scientific, organisational and technological knowledge: it is a part of our cultural heritage, and it is our collective responsibility to ensure that it is not lost.
Preserving the history of software is also a key enabler for the reproducibility of research and a means to foster better and more secure software for society.
Speaker: Roberto Di Cosmo (INRIA, France)
Moderator: Edward A. Lee (UC Berkeley, USA)

June 15, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Digital Humanism and Democracy in Geopolitical Context"
Government and corporate surveillance in autocracies have very different ethical ramifications than the same actions do in liberal democracies. Open societies protect individual rights and distinguish between the public and private spheres. Neither condition pertains in China, an instantiation of what the philosopher Elizabeth Anderson calls private government. Ignoring the significance of such differences, which are only reinforced by differing business-government relationships in the United States, EU, and China, is likely to undercut both liberal democratic values and US-European national security.
Speaker: Allison Stanger (Middlebury College, USA)
Moderator & Respondent: Moshe Y. Vardi (Rice University, USA)
Slides

June 8, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "The New EU Proposal for AI Regulation"
The European Commission has published its proposal for a new AI regulation. It takes a risk-based approach, with different rules for low-risk applications of AI, stricter rules for riskier systems, and a complete ban on specified types of AI applications. Regulatory measures include data governance, transparency, information to users, and human oversight. The proposal also includes measures for supporting AI innovation through regulatory sandboxes. The proposed regulation was met with great interest and criticism, and it started an intense debate about its appropriateness, omissions, specificity, and practicability. With this lecture and the following discussion, we aim to improve the understanding of the proposal's main objectives, methods, and instruments and to contribute to the public debate on how AI should be regulated in the future.
Speaker: Irina Orssich (European Commission)
Moderator & Respondent: Erich Prem (eutema & TU Wien, Austria)
Slides

May 27, 2021, 5:00 – 7:00 PM (17:00) CEST
Vienna Gödel Lecture "Technology is Driving the Future, But Who Is Steering?"
The benefits of computing are intuitive. Computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous. But computing is not a game—it is real—and it brings with it not only societal benefits but also significant societal costs, such as labor polarization, disinformation, and smartphone addiction.
More details at https://informatics.tuwien.ac.at/news/2020

May 18, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "The Platform Economy and Europe: Between Regulation and Digital Geopolitics"
Platforms represent a new structure for organising economic and social activities and appear as a hybrid between a market and a hierarchy. They are matchmakers like traditional markets, but they are also companies heavy in assets and often quoted on the stock exchange. Platforms, therefore, are not simply technological corporations but a form of 'quasi-infrastructure'. Indeed, in their coordination function, platforms are as much an institutional form as a means of innovation. Ever since the launch of the Digital Single Market in 2015, platforms have been on the EU's radar for different matters such as consumer protection in general, competition, and lately data protection. In addition to genuine policy concerns, the debate on platforms and on other digital transformation phenomena has gained higher political status in the form of the debate on the digital and data sovereignty of Europe vis-à-vis the US and China. Platforms have also managed for some time to escape regulation by adopting lobbying based on rhetorical framing. This was revived during the pandemic, when dominant platforms extolled the data they provided to governments as an important tool to track and contain the spread of the virus.
Speaker: Cristiano Codagnone (Universitat Oberta de Catalunya | Università degli studi di Milano)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides

May 4, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Ethics in AI: A Challenging Task"
In the first part we cover four current specific challenges: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); and (4) stupid models (e.g., Signal, minimal adversarial AI). These examples do have a personal bias but set the context for the second part, where we address four generic challenges: (1) too many principles (e.g., principles vs. techniques), (2) cultural differences (e.g., Christian vs. Muslim), (3) regulation (e.g., privacy, antitrust), and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future.
Speaker: Ricardo Baeza-Yates (Institute for Experiential AI, Northeastern University, USA)
Moderator: Carlo Ghezzi (Politecnico di Milano, Italy)
Slides

April 20, 2021, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Vaccination Passports – a Tool for Liberation or the Opposite?"
The European Commission and its member states are discussing "Green Passports" as a way of opening up after lockdown. They have been proposed as tools to verify Covid immunization status and thus help accelerate the path back to normality. However, as with the contact tracing apps, there are numerous issues and concerns about what these apps should be and how to make them safe, reliable, and privacy-preserving. Director Ron Roozendaal from the Dutch Ministry of Health will talk about digital solutions, important design decisions, and the way forward. Prof. Nikolaus Forgo from the Department of Innovation and Digitalisation in Law will be our respondent.
Speaker: Ron Roozendaal (Ministry of Health, Welfare and Sports, The Netherlands)
Respondent: Nikolaus Forgo (University of Vienna, Austria)
Moderator: Walter Hötzendorfer (Research Institute – Digital Human Rights Center)
Slides

April 13, 2021, 5:00 – 6:30 PM (17:00) CEST
Lecture Series "(Gender) Diversity and Inclusion in Digital Humanism"
Digital Humanism is 'focused on ensuring that technology development remain centered on human interests'. This panel focuses on 'which humans', and on how different voices and interests can and should be included in the development and application of digital technologies.
Panelists: Sally Wyatt (Maastricht University, The Netherlands), Jeanne Lenders (European Commission), Hinda Haned (University of Amsterdam, The Netherlands), Judy Wajcman (London School of Economics | The Alan Turing Institute, UK), Erin Young (The Alan Turing Institute, UK)
Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides: Hinda Haned; Jeanne Lenders; Judy Wajcman & Erin Young; Sally Wyatt

March 23, 2021, 5:00 – 6:30 PM (17:00) CET
Lecture Series "Digital Society, Social Justice and Academic Education"
An important open question of Digital Humanism is how the ethical and social aspects of digital technologies, and the associated matters of human values and social justice, can be properly handled in academic research and education. One approach, also discussed in preceding DigHum Lectures, is to create interdisciplinary courses on ethics and/or the philosophy of technology, such as "Tech Ethics". This panel investigates approaches rooted in direct collaboration between academia and outside (underprivileged, marginalized) communities as an integral element of research and education. Case examples and experiences from three different continents are discussed, which also gives some perspective on the simultaneous universality and contextuality of issues of human values and social justice.
Panelists: Saa Dittoh (UDS, Ghana), Narayanan Kulathuramaiyer (UNIMAS, Malaysia), Anna Bon (VU Amsterdam, The Netherlands)
Moderator: Hans Akkermans (w4ra.org, The Netherlands)
Slides: Hans Akkermans; Saa Dittoh; Narayanan Kulathuramaiyer; Anna Bon

February 23, 2021, 5:00 – 6:30 PM (17:00) CET
Lecture Series "Preventing Data Colonialism Without Resorting to Protectionism – The European Strategy"
This panel builds on prior DigHum panels, including the one entitled "Digital Sovereignty". The particular focus of this one is on data and the related threats and opportunities. The threat of "data colonialism" describes a possible situation in which there is unbridled access to (extraction of) and processing/exploitation of the data of European citizens and businesses by non-European companies and, potentially via these, foreign powers.
Panelists: Pilar del Castillo (European Parliament), Lokke Moerel (Tilburg University, The Netherlands), Yvo Volman (European Commission)
Moderator: George Metakides (President of Digital Enlightenment Forum)
Slides – Yvo Volman; Slides – Lokke Moerel

February 2, 2021, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Freedom of Expression in the Digital Public Sphere"
A substantial portion of contemporary public discourse takes place on online social media platforms such as Facebook, YouTube, and TikTok. Accordingly, these platforms form a core component of the digital public sphere, which, although subject to private ownership, constitute a digital infrastructural resource that is open to members of the public.
Speaker: Sunimal Mendis (Tilburg University, The Netherlands)
Respondent: Christiane Wendehorst (University of Vienna, Austria)
Moderator: Erich Prem (eutema & TU Wien, Austria)

January 26, 2021, 5:00 – 6:30 PM (17:00) CET
Lecture Series "Digital Superpowers and Geopolitics"
In cyberspace, the modern "colonial powers" are not nations but multinational companies, mostly American but with strong competition emerging in China. These companies control the digital platforms central to most people's social networks, communications, entertainment, and commerce and, through them, have collected and continue to collect limitless information about our friends, colleagues, preferences, opinions, and secrets. With the knowledge obtained by processing this information, these companies have built some of the world's most profitable businesses, turning the little pieces of information given to them by uninformed users in return for "free services" into extremely valuable targeted advertising. These companies, moreover, endeavor to operate in the space between countries, with very limited responsibility or accountability to governments. At the same time, governments such as those of China and the US have laws requiring such companies to divulge data obtained from their customers anywhere in the world. Does this pose a threat to national or European sovereignty?
Panelists: June Lowery-Kingston (European Commission), Jan-Hendrik Passoth (ENS / Viadrina, Germany), Michael Veale (University College London, UK)
Moderator: James Larus (EPFL, Switzerland)

December 15, 2020, 5:00 – 6:00 PM (17:00) CET
Lecture Series "Philosophical Foundations of Digital Humanism"
In this talk, Julian Nida-Rümelin will develop the main features of what he calls 'digital humanism' (Nida-Rümelin/Weidenfeld 2018), based on a general philosophical account of humanistic theory and practice (Nida-Rümelin 2016): (1) preconditions of human authorship (JNR 2019); (2) human authorship in times of digital change; (3) ethical implications.
Speaker: Julian Nida-Rümelin (LMU München, Germany)
Moderator: Edward A. Lee (UC Berkeley, USA)

November 19–20, 2020, 4:00 – 6:45 PM (16:00) CET
Workshop "Strategies for a Humanistic Digital Future"

November 3, 2020, 5:00 – 6:30 PM (17:00) CET
Lecture Series "Ethics and IT: How AI is Reshaping our World"
This panel debate will investigate the development of AI from a philosophical perspective. In particular, we will discuss the ethical implications of AI and the global challenges raised by the widespread adoption of socio-technical systems powered by AI tools. These challenges will be addressed by the three speakers from different cultural and geographical perspectives. We invite the audience to take part in the debate, to deepen our shared understanding of how AI is reshaping the world and our awareness of the challenges we will face in the future.
Panelists: Deborah G. Johnson (University of Virginia, USA), Guglielmo Tamburrini (University of Naples, Italy), Yi Zeng (Chinese Academy of Sciences, China)
Moderator: Viola Schiaffonati (Politecnico di Milano, Italy)

October 20, 2020, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps"
A growing number of contact tracing apps are being developed to complement manual contact tracing. Yet, for these technological solutions to benefit public health, users must be willing to adopt them. While privacy was the main consideration of experts at the start of contact tracing app development, privacy is only one of many factors in users' decisions to adopt these apps. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user — observing people's preferences and inferring best practice from that behavior — instead of exclusively relying on experts' normative decisions. This talk presents an empirically validated framework of the inputs that factor into a user's decision to adopt COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users' likelihood to install COVID apps based on quantifications of these factors, I show how high the bar is for these apps to achieve adoption and suggest user-driven directions for ethically encouraging adoption.
Speaker: Elissa M. Redmiles (Microsoft Research)
Moderator: James Larus (EPFL, Switzerland)
Slides

October 6, 2020, 5:00 – 6:30 PM (17:00) CEST
Lecture Series "Digital Sovereignty – Navigating Between Scylla and Charybdis"
This panel debate will take a hard and critical look at the sense and nonsense of digital sovereignty.
Panelists: Paul Timmers, Ciaran Martin, Margot Dor, and Georg Serentschy
Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides – Paul Timmers; Slides – Margot Dor

September 22, 2020, 5:00 – 6:00 PM (17:00) CEST
Lecture Series "An AI and Computer Science Dilemma: Could I? Should I?"
Computing technologies have become pervasive in daily life. Predominant uses of them involve communities rather than isolated individuals, and they operate across diverse cultures and populations. Systems designed to serve one purpose may have unintended harmful consequences. To create systems that are "society-compatible", designers and developers of innovative technologies need to recognize and address the ethical considerations that should constrain their design. For students to learn to think not only about what technology they could create, but also whether they should create that technology, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard's Embedded EthiCS program, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will describe the goals of Embedded EthiCS, the way the program works, lessons learned, and challenges to sustainable implementations of such a program across different types of academic institutions. This approach was motivated by my experiences teaching the course "Intelligent Systems: Design and Ethical Challenges", which I will describe briefly first.
Speaker: Barbara J. Grosz (Harvard, USA)
Moderator: Erich Prem (eutema & TU Wien, Austria)

September 8, 2020
Lecture Series "How Not to Destroy the World with Artificial Intelligence!"
I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.
Speaker: Stuart Russell (University of California, Berkeley, USA)
Moderator: Helga Nowotny (Chair of the ERA Council Forum Austria and Former President of the ERC)
Slides

July 14, 2020
Lecture Series "Corona Contact Tracing – the Role of Governments and Tech Giants"
Speakers: Alfonso Fuggetta (Politecnico di Milano, Italy), James Larus (EPFL, Switzerland)
Moderator: Jeff Kramer (Imperial College London, UK)

June 9, 2020
Lecture Series
Speaker: Moshe Vardi (Rice University, USA)
Slides

May 14, 2020
Workshop "Digital Humanism: Informatics in Times of COVID-19"

April 4, 2019
Workshop "Vienna Workshop on Digital Humanism"