Upcoming Events:

Past Events:

Feb 23, 2021
5:00 – 6:30 PM
(17:00) CET
“Preventing Data Colonialism without resorting to protectionism - The European strategy”

This panel builds on prior DigHum panels, including the one entitled “Digital Sovereignty”. Its particular focus is on data and the related threats and opportunities. The term “data colonialism” describes a possible situation of unbridled access to (extraction of) and processing/exploitation of the data of European citizens and businesses by non-European companies and, potentially through these, foreign powers.

Simply as an illustration: today, the data of virtually all European citizens and companies that use cloud services are accessible to non-European cloud service providers, to which their countries of origin have potential access (the US via the recent CLOUD Act and China via … fiat). The trap on the other side is to ring-fence such data within Member State or EU boundaries, thus severely limiting their value to anybody.

In her keynote address on Feb 4th at the Digital Masters 2021, EC President Ursula von der Leyen said: “In Europe we are sitting on a gold mine of machine generated data (value estimated at 1.5 trillion) which remains largely unexploited due primarily to the absence of clear rules for how a company can access, buy or sell such data across EU Member State borders.”

It is in this context that a number of European initiatives have been designed to constitute a coherent strategy. To itemize: the GAIA-X consortium aims, in the context of a broader effort, to create a European cloud characterized by portability, interoperability and security; non-European cloud providers participate as well. The EC will launch the European Alliance for Industrial Data and Cloud, aiming at a European Federated Cloud.

On the regulatory front, we have: the DSA, which aims to define the responsibilities of all digital players; the DMA, which aims to set rules for gatekeepers so that the online world is accessible to all under a single, clear set of rules; and the European Data Governance Act, proposed last November with the aim of strengthening data sharing mechanisms across Europe. And last but certainly not least, the coming Data Act, which aims to arbitrate, in trusted fashion, how the resulting benefits are shared. Together with the Horizon research instruments and the investment instruments foreseen for the creation of dedicated European data spaces (e.g. for health), all the above comprise a complex arsenal which the panel has the knowledge and experience to help us understand better.

The key question, of course, is how all these initiatives can come together to form a coherent and effective strategy, and with what expected “timeline of impact”. Furthermore, as data underpins AI and practically all major coming digital technology advances, it is this impact and its timeline that will be crucial for achieving and maintaining European Digital Sovereignty in the emerging geopolitical context.

Panelists: Pilar del Castillo (European Parliament), Lokke Moerel (Tilburg University, The Netherlands), Yvo Volman (European Commission)
Moderator: George Metakides (President of Digital Enlightenment Forum)
Feb 2, 2021
5:00 – 6:00 PM
(17:00) CET
“Freedom of Expression in the Digital Public Sphere”

A substantial portion of contemporary public discourse takes place on online social media platforms such as Facebook, YouTube and TikTok. Accordingly, these platforms form a core component of the digital public sphere and, although privately owned, constitute a digital infrastructural resource that is open to members of the public.

The content moderation systems deployed by such platforms have the potential to influence and shape public discourse by mediating what members of the public are able to see, hear, and say online. Over time, these rules may have a norm-setting effect, shaping the conduct and expectations of users as to what constitutes “acceptable” discourse. Thus, the design and implementation of content moderation systems can have a powerful impact on the preservation of users’ freedom of expression. The emerging trend towards the deployment of algorithmic content moderation (ACM) systems gives rise to urgent concerns on the need to ensure that content moderation is regulated in a manner that safeguards and fosters robust public discourse.

This lecture builds on research carried out within the framework of the Research Sprint on AI and Platform Governance (2020) organized by the HIIG, Berlin (for more information on the research project and its key findings, see Freedom of Expression in the Digital Public Sphere (graphite.page)). It explores how the proliferation of ACM poses increased risks to freedom of expression in the digital public sphere, and proposes legal and regulatory strategies for ensuring greater public oversight and accountability in the design and implementation of content moderation systems by social media platforms.

Speaker: Sunimal Mendis (Tilburg University, The Netherlands)
Respondent: Christiane Wendehorst (University of Vienna, Austria)
Moderator: Erich Prem (eutema & TU Wien, Austria)
Jan 26, 2021
5:00 – 6:30 PM
(17:00) CET
“Digital Superpowers and Geopolitics”

In cyberspace, the modern “colonial powers” are not nations but multinational companies, mostly American but with strong competition emerging in China. These companies control the digital platforms central to most people’s social networks, communications, entertainment, and commerce and, through them, have collected and continue to collect limitless information about our friends, colleagues, preferences, opinions, and secrets. With the knowledge obtained by processing this information, these companies have built some of the world’s most profitable businesses, turning little pieces of information, given to them by uninformed users in return for “free services”, into extremely valuable, targeted advertising. These companies, moreover, endeavor to operate in the space between countries, with very limited responsibility/accountability to governments. At the same time, governments such as those of China and the US have laws requiring such companies to divulge data obtained from their customers anywhere in the world. Does this pose a threat to national or European sovereignty?
This panel will endeavor to appraise the current situation, assess the potential impact of actions already initiated as well as explore new ones.

Panelists: June Lowery-Kingston (European Commission), Jan-Hendrik Passoth (ENS / Viadrina, Germany), Michael Veale (University College London, UK)
Moderator: James Larus (EPFL, Switzerland)
Dec. 15, 2020
5:00 – 6:00 PM
(17:00) CET
Julian Nida-Rümelin (LMU München, Germany)
“Philosophical Foundations of Digital Humanism”

In this talk, Julian Nida-Rümelin will develop the main features of what he calls ‘digital humanism’ (Nida-Rümelin/Weidenfeld 2018), based on a general philosophical account of humanistic theory and practice (Nida-Rümelin 2016): (1) preconditions of human authorship (JNR 2019); (2) human authorship in times of digital change; (3) ethical implications.

J. Nida-Rümelin: Humanistische Reflexionen (Berlin: Suhrkamp 2016)
J. Nida-Rümelin/N. Weidenfeld: Digitaler Humanismus. Eine Ethik für das Zeitalter der Künstlichen Intelligenz (München: Piper 2018)
J. Nida-Rümelin: Structural Rationality and other essays on Practical Reason (Berlin / New York: Springer International 2019)

Moderator: Edward A. Lee (UC Berkeley, USA)
Nov. 3, 2020
5:00 – 6:30 PM
(17:00) CET
“Ethics and IT: How AI is Reshaping our World”

This panel debate will investigate the development of AI from a philosophical perspective. In particular, we will discuss the ethical implications of AI and the global challenges raised by the widespread adoption of socio-technical systems powered by AI tools. The three speakers will address these challenges from different cultural and geographical perspectives. We invite the audience to join the debate and deepen, together with the panel, our understanding of how AI is reshaping the world and our awareness of the challenges we will face in the future.

Safety, Fairness, and Visual Integrity in an AI-shaped world
Deborah Johnson

AI algorithms are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although the use of algorithms to make decisions has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains in which safety and fairness are important. AI algorithms are also used to produce tools that enable individuals to do things they would not otherwise be able to do. In the case of synthetic media technologies, users are able to produce deepfakes that challenge the integrity of visual experience. In my presentation, I will discuss safety, fairness, and visual integrity as three ethical issues arising in an AI-shaped world.

Global Challenges for AI Ethics
Guglielmo Tamburrini

The Covid-19 pandemic is forcing us to address some global challenges concerning human well-being and the protection of fundamental rights. This panel presentation explores the ethically ambivalent roles that AI plays in connection with two additional global challenges: (1) global warming and (2) threats to international peace.

1. AI has a significant carbon footprint. Should one set quantitative limits on the energy consumption required for AI model training? And if so, how should AI carbon quotas be distributed among states, businesses, and research? Should one limit the collection of user data to feed into data-hungry AI systems? And who should be in charge of deciding which data to collect, preserve or discard for the sake of environmental protection?

2. An AI arms race is well under way, ranging from the development of autonomous weapons systems to the development of AI systems for discovering software vulnerabilities and waging cyberconflicts. Should the weaponization of AI be internationally regulated? And if so, how should human rights, humanitarian principles and the UN’s fundamental goal of preserving world peace and stability be interpreted and applied within this domain?

This panel presentation is rounded out by looking at EU efforts to cope with some of these global ethical issues.

Building Ethical AI for the Human-AI Symbiotic Society
Yi Zeng

In this talk, I will provide a global landscape of AI ethical principles and investigate how these efforts complete, rather than compete with, each other. I will then discuss concrete groundings of AI ethical principles and introduce technical and social efforts in different domains. Finally, I will extend the discussion to long-term A(G)I ethical challenges and a possible positive path forward.

Deborah G. Johnson (University of Virginia, USA), Guglielmo Tamburrini (University of Naples, Italy), Yi Zeng (Chinese Academy of Sciences, China)
Moderator: Viola Schiaffonati (Politecnico di Milano, Italy)
Oct. 20, 2020
5:00 – 6:00 PM
(17:00) CEST
Elissa M. Redmiles (Microsoft Research):
“Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps”

A growing number of contact tracing apps are being developed to complement manual contact tracing. Yet, for these technological solutions to benefit public health, users must be willing to adopt them. While privacy was the main consideration of experts at the start of contact tracing app development, privacy is only one of many factors in users’ decisions to adopt these apps. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user — observing people’s preferences and inferring best practice from that behavior — instead of exclusively relying on experts’ normative decisions. This talk presents an empirically validated framework of the inputs that factor into a user’s decision to adopt COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users’ likelihood to install COVID-19 apps based on quantifications of these factors, I show how high the bar is for these apps to achieve adoption and suggest user-driven directions for ethically encouraging adoption.

Moderator: James Larus (EPFL, Switzerland)
Oct. 06, 2020
5:00 – 6:30 PM
(17:00) CEST
Paul Timmers, Ciaran Martin, Margot Dor, and Georg Serentschy:
“Digital Sovereignty – Navigating Between Scylla and Charybdis”

This panel debate will have a hard and critical look at the sense and nonsense of digital sovereignty.

We will debunk some of the terminology thrown around in debates on digital sovereignty, analyse the good, the bad, and the ugly of the geopolitical technology battles between the USA and China, and provide specific insight into two harbingers of the emerging perceptions of sovereignty in cyberspace: global telecommunications and global standardization.

We invite the audience to join the debate and deepen, together with the panel, our understanding of how Europe can best navigate the good, the bad and the ugly of geopolitics and the digital world.

Prof Paul Timmers will set the scene with a critical reflection on where we are in the debate on ‘digital sovereignty’ and the consequences for EU policy development. Paul Timmers is at the European University Cyprus, a Research Associate at Oxford University, Senior Advisor at EPC, a former Director at the European Commission, and a leading thinker on strategic autonomy and digital sovereignty.

Subsequently, we will engage in a panel and audience discussion in which three leading cybersecurity personalities respond to the scene setter:
Prof Ciaran Martin, Oxford University, former head of the UK NCSC (National Cyber Security Centre), one of the world’s leading figures in cybersecurity, recently interviewed by the Financial Times on the east-west split over the internet.

Dr Margot Dor, Strategy Director of ETSI, a European Standards Organization, and driver of the Carl Bildt Report on Strategic Standardisation for Europe in the Digital Era.

Dr Georg Serentschy, advisor on telecoms and IT, senior advisor at Squire Patton Boggs, member of the Board of Directors of the International Telecommunications Society, and former Head of BEREC (European Telecoms Regulators).

Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides – Paul Timmers; Slides – Margot Dor
Sept. 22, 2020
5:00 – 6:00 PM
(17:00) CEST
Barbara J. Grosz (Harvard, USA):
“An AI and Computer Science Dilemma: Could I? Should I?”

Computing technologies have become pervasive in daily life. Predominant uses of them involve communities rather than isolated individuals, and they operate across diverse cultures and populations. Systems designed to serve one purpose may have unintended harmful consequences. To create systems that are “society-compatible”, designers and developers of innovative technologies need to recognize and address the ethical considerations that should constrain their design. For students to learn to think not only about what technology they could create, but also whether they should create that technology, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard’s Embedded EthiCS program, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will describe the goals of Embedded EthiCS, the way the program works, lessons learned and challenges to sustainable implementations of such a program across different types of academic institutions. This approach was motivated by my experiences teaching the course “Intelligent Systems: Design and Ethical Challenges”, which I will describe briefly first.

Moderator: Erich Prem (eutema & TU Wien, Austria)
Sept. 8, 2020
5:00 – 6:00 PM
(17:00) CEST
Stuart Russell (University of California, Berkeley, USA):
“How Not to Destroy the World with Artificial Intelligence!”

I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.

Moderator: Helga Nowotny (Chair of the ERA Council Forum Austria and Former President of the ERC)
July 14, 2020
5:00 – 6:00 PM
(17:00) CEST
“Corona Contact Tracing – the Role of Governments and Tech Giants“
Alfonso Fuggetta (Politecnico di Milano, Italy), James Larus (EPFL, Switzerland)
Moderator: Jeff Kramer (Imperial College London, UK)
June 9, 2020
5:00 – 6:00 PM
(17:00) CEST
Moshe Vardi (Rice University, USA):
“Lessons for Digital Humanism from Covid-19”