Upcoming Events:

November 16, 2021
5:00 – 6:00 PM
(17:00) CET
“Transition From Labor-intensive to Tech-intensive Surveillance in China”

In the post-Tiananmen period the Chinese government has devoted enormous resources to the construction of a surveillance state. The initial efforts were directed at building organizational networks and coordination mechanisms to monitor a large number of individuals deemed threats to Communist Party rule and public safety. Due to the lack of technological resources, the surveillance state in the 1990s was labor-intensive but highly effective. The Chinese government developed sophisticated surveillance tactics to monitor individuals and public venues even without hi-tech equipment. The transition to tech-intensive surveillance began at the end of the 1990s and accelerated in the 2010s due to the availability of new technologies and generous funding from the state. Yet, despite the adoption of hi-tech tools, the Chinese surveillance state remains a labor-intensive and organization-intensive system. The unrivaled organizational capacity of the Communist Party, not fancy new technology, is the secret of the effectiveness of the Chinese surveillance state.


Speaker: Minxin Pei (Claremont McKenna College, USA)
Moderator: Susan J. Winter (University of Maryland, College of Information Studies, USA)


Past Events:

November 9, 2021
5:00 – 6:00 PM
(17:00) CET
“Artificial Intelligence and Democratic Values: What is the Path Forward?”

Countries and international organizations are moving quickly to adopt AI strategies. More than fifty nations have signed on to the OECD AI Principles or the G20 AI Guidelines. The EU is developing a comprehensive regulation for AI, and UNESCO is about to adopt an AI Ethics Recommendation. Many of these policy frameworks share similar objectives – that AI should be “human-centric” and “trustworthy,” and that AI systems should ensure fairness, accountability, and transparency. Looming in the background of this policy debate is the deployment of AI techniques, such as facial surveillance and social scoring, that implicate human rights and democratic values. Therefore we should ask: how effectively do these policy frameworks address these new challenges? What is the difference between a country endorsing a policy framework and implementing one? Will countries be able to enforce actual prohibitions or “red lines” on certain deployments? And how do we assess a country’s national AI strategy against democratic values? These questions arise in the context of the CAIDP Report “Artificial Intelligence and Democratic Values,” a comprehensive review of AI policies and practices in 30 countries.


Speaker: Marc Rotenberg (Center for AI and Digital Policy, USA)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides
October 12, 2021
5:00 – 6:00 PM
(17:00) CEST
“Human-Centered AI: A New Synthesis”

A new synthesis is emerging that integrates AI technologies with HCI approaches to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.

Educators, designers, software engineers, product managers, evaluators, and government agency staffers can build on AI-driven technologies to design products and services that make life better for the users. These human-centered products and services will enable people to better care for each other, build sustainable communities, and restore the environment. The passionate advocates of HCAI are devoted to furthering human values, rights, justice, and dignity, by building reliable, safe, and trustworthy systems.

The talk will include examples, references to further work, and discussion time for questions. These ideas are drawn from Ben Shneiderman’s forthcoming book (Oxford University Press, January 2022). Further information at: https://hcil.umd.edu/human-centered-ai


Speaker: Ben Shneiderman (University of Maryland, USA)
Moderator: Allison Stanger (Middlebury College, USA)
Slides
June 29, 2021
5:00 – 6:00 PM
(17:00) CEST
“Should we preserve the world's software history, and can we?”

Cultural heritage is the legacy of physical artifacts and intangible attributes of a group or society that are inherited from past generations, maintained in the present and bestowed for the benefit of future generations. What role does software play in it? We claim that software source code is an important product of human creativity, and embodies a growing part of our scientific, organisational and technological knowledge: it is a part of our cultural heritage, and it is our collective responsibility to ensure that it is not lost. Preserving the history of software is also a key enabler for reproducibility of research, and as a means to foster better and more secure software for society.

This is the mission of Software Heritage, a non-profit organization dedicated to building the universal archive of software source code, catering to the needs of science, industry and culture, for the benefit of society as a whole. In this presentation we will survey the principles and key technology used in the archive, which contains over 10 billion unique source code files from some 160 million projects worldwide.
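As a small illustration (not part of the talk itself): a cornerstone of the archive’s technology is intrinsic identification, in which every archived object receives a SWHID derived from its own bytes rather than from any registry. For file contents this is the git-style “blob” hash; a minimal sketch in Python, assuming the standard `swh:1:cnt:` identifier form:

```python
import hashlib


def swhid_for_content(data: bytes) -> str:
    """Compute a SWHID for a file's contents.

    The identifier is intrinsic: it is derived from the bytes themselves
    (git-style blob hashing), so the same file always maps to the same
    archive entry regardless of where it was collected from.
    """
    # Git/SWH blob hashing: sha1 over a "blob <length>\0" header plus the bytes.
    header = b"blob %d\x00" % len(data)
    digest = hashlib.sha1(header + data).hexdigest()
    return f"swh:1:cnt:{digest}"


print(swhid_for_content(b"hello\n"))
# swh:1:cnt:ce013625030ba8dba906f756967f9e9ca394464a
```

Because identifiers are content-derived, two copies of the same file found in different projects deduplicate to a single archive entry, which is what makes an archive of this scale tractable.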


Speaker: Roberto Di Cosmo (INRIA, France)
Moderator: Edward A. Lee (UC Berkeley, USA)

June 15, 2021
5:00 – 6:00 PM
(17:00) CEST
“Digital Humanism and Democracy in Geopolitical Context”

Government and corporate surveillance in autocracies have very different ethical ramifications than the same actions do in liberal democracies. Open societies protect individual rights and distinguish between the public and private spheres. Neither condition pertains in China, an instantiation of what the philosopher Elizabeth Anderson calls private government. Ignoring the significance of such differences, which are only reinforced by differing business-government relationships in the United States, EU, and China, is likely to undercut both liberal democratic values and US-European national security.


Speaker: Allison Stanger (Middlebury College, USA)
Moderator & Respondent: Moshe Y. Vardi (Rice University, USA)
Slides
June 8, 2021
5:00 – 6:00 PM
(17:00) CEST
“The New EU Proposal for AI Regulation”

The European Commission has published its proposal for a new AI regulation. It takes a risk-based approach, with different rules for low-risk applications of AI, stricter rules for riskier systems, and a complete ban on specified types of AI applications. Regulatory measures include data governance, transparency, information to users, and human oversight. The proposal also includes measures for supporting AI innovation through regulatory sandboxes. The proposed regulation was met with great interest and criticism, and it started an intense debate about its appropriateness, omissions, specificity, and practicability. With this lecture and the following discussion, we aim to improve the understanding of the proposal’s main objectives, methods, and instruments and contribute to the public debate on how AI should be regulated in the future.


Speaker: Irina Orssich (European Commission)
Moderator & Respondent: Erich Prem (eutema & TU Wien, Austria)
Slides
May 18, 2021
5:00 – 6:00 PM
(17:00) CEST
“The Platform Economy and Europe: Between Regulation and Digital Geopolitics”

Platforms represent a new structure for organising economic and social activities and appear as a hybrid between a market and a hierarchy. They are matchmakers like traditional markets, but they are also companies heavy in assets and often quoted on the stock exchange. Platforms, therefore, are not simply technological corporations but a form of ‘quasi-infrastructure’. Indeed, in their coordination function, platforms are as much an institutional form as a means of innovation. Ever since the launch of the Digital Single Market in 2015, platforms have come under the scrutiny of the EU on different matters, such as consumer protection in general, competition, and lately data protection. In addition to genuine policy concerns, the debate on platforms and on other digital transformation phenomena has gained higher political status in the form of the debate on the digital and data sovereignty of Europe vis-à-vis the US and China. Platforms have also managed for some time to evade regulation through lobbying based on rhetorical framing. This was revived during the pandemic, when dominant platforms extolled the data they provided to governments as an important tool to track and contain the spread of the virus.

In this seminar, besides discussing these higher-level issues, I will also address more specific ones such as data protection, extraction of behavioral surplus, consumer protection, and competition. I conclude by comparing two different regulatory approaches: the application of the precautionary principle as opposed to a cost-benefit assessment of intervention.


Speaker: Cristiano Codagnone (Universitat Oberta de Catalunya | Università degli studi di Milano)
Moderator: Paul Timmers (Oxford University, UK | European University, Cyprus)
Slides
May 4, 2021
5:00 – 6:00 PM
(17:00) CEST
“Ethics in AI: A Challenging Task”

In the first part we cover four current specific challenges: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., bio-metric based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); and (4) stupid models (e.g., Signal, minimal adversarial AI). These examples do have a personal bias but set the context for the second part, where we address four generic challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences (e.g., Christian vs. Muslim); (3) regulation (e.g., privacy, antitrust); and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future.


Speaker: Ricardo Baeza-Yates (Institute for Experiential AI, Northeastern University, USA)
Moderator: Carlo Ghezzi (Politecnico di Milano, Italy)
Slides
Apr 20, 2021
5:00 – 6:00 PM
(17:00) CEST
“Vaccination Passports – a tool for liberation or the opposite?”

The European Commission and its member states are discussing “Green Passports” as a way of opening up after lockdown. They have been proposed as tools to verify Covid immunization status and thus help to accelerate the path to normality. However, as with the contact tracing apps, there are numerous issues and concerns about what these apps should be and how to make them safe, reliable, and privacy-preserving. Director Ron Roozendaal from the Dutch Ministry of Health will talk about digital solutions, important design decisions, and the way forward. Prof. Nikolaus Forgo from the Department of Innovation and Digitalisation in Law will be the respondent.


Speaker: Ron Roozendaal (Ministry of Health, Welfare and Sports, The Netherlands)
Respondent: Nikolaus Forgo (University of Vienna, Austria)
Moderator: Walter Hötzendorfer (Research Institute – Digital Human Rights Center)
Slides
Apr 13, 2021
5:00 – 6:30 PM
(17:00) CEST
“(Gender) Diversity and Inclusion in Digital Humanism”

Digital Humanism is ‘focused on ensuring that technology development remains centered on human interests’. This panel focuses on ‘which humans’, and on how different voices and interests can and should be included in the development and application of digital technologies.

Talks:

Sally Wyatt will examine how the inclusion of women in computer science and related fields has declined over the past 50 years. She will argue that including women is partly a matter of social justice, of providing women with access to interesting and well-paid jobs. Further, it is a matter of epistemic justice: including women’s perspectives and experiences could lead to better and more inclusive technologies.

Jeanne Lenders will explain how the Commission is stepping up efforts for gender equality in research and innovation, including on women’s participation in STEM and the integration of gender perspectives into research and innovation content. She will highlight the strengthened provisions for gender equality in Horizon Europe, the next Framework Programme for Research and Innovation, and showcase examples of the Commission’s ‘Gendered Innovations’ Expert Group.

Hinda Haned will discuss different definitions of bias, and bias mitigation through so-called “fairness algorithms”. Drawing on practical examples, she will argue that the most fundamental question we face as researchers and practitioners is not how to fix bias with new technical solutions, but whether we should be designing and deploying potentially harmful automated systems in the first place.

Judy Wajcman and Erin Young will discuss the gender job gap in AI. The fields of artificial intelligence and data science have exploded as the world is increasingly being built around smart machines and automated systems. Yet the people whose work underpins that vision are far from representative of the society those systems are meant to serve. Their report shows the extent of gender disparities in careers, education, jobs, seniority, status and skills in the AI and data science fields. They argue that fixing the gender job gap in AI is not only a fundamental issue of economic equality, but also about how the world is designed and for whom.


Panelists: Sally Wyatt (Maastricht University, The Netherlands), Jeanne Lenders (European Commission), Hinda Haned (Janssen Biologics, The Netherlands), Judy Wajcman (London School of Economics | The Alan Turing Institute, UK), Erin Young (The Alan Turing Institute, UK)
Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides: Hinda Haned; Jeanne Lenders; Judy Wajcman & Erin Young; Sally Wyatt
Mar 23, 2021
5:00 – 6:30 PM
(17:00) CET
“Digital Society, Social Justice and Academic Education”

ABSTRACT: An important open question of Digital Humanism is how ethical and social aspects of digital technologies, and the associated matters of human values and social justice, can be properly handled in academic research and education. One approach, also discussed in preceding DigHum Lectures, is to create interdisciplinary courses on ethics and/or philosophy of technology, such as “Tech Ethics”. This panel investigates approaches rooted in direct collaboration between academia and outside (underprivileged, marginalized) communities as an integral element of research and education. Case examples and experiences from three different continents are discussed, which also gives some perspective on the simultaneous universality and contextuality of issues of human values and social justice.

TALKS: Knowledge for Service: Digital Technology Positives and Negatives in African Rural Societies
Saa Dittoh

Many decades ago (and possibly now in some areas) in rural Africa, communal methods of information sharing were not always face-to-face; some were virtual, through high-pitched voices and/or loud-sounding “talking drums” that conveyed “coded information”. No wonder that many African rural societies have no reservations about adopting appropriate modern digital technologies. The rapid advance of digital technology has been positive in many ways, but there are several negative and damaging aspects that threaten the values, cultures and even the very existence of some African rural societies. In this talk I discuss those threats and suggest ways to counter them. The talk further highlights how knowledge can be put to service and how university students can be engaged in this.

Digital Sociotechnical Innovation and Indigenous Knowledge
Narayanan Kulathuramaiyer

In this talk I will discuss how university research and education on digital technology can be a factor in empowering under-served communities. In particular, I describe the eBario program, a long-standing university-community partnership between the rural Kelabit community, one of Borneo’s ethnic minorities, and Universiti Malaysia Sarawak. This program to bridge the digital divide started in 1998, with the indigenous Kelabit community taking on the information and knowledge creation pathway as a way forward. Over the past two decades the program has evolved to become recognized as a living laboratory, influencing practice and policy, with, for example, a role in poverty reduction. eBario as an ICT for Development model has been replicated at eight other sites across Peninsular Malaysia and the East Malaysian states of Sabah and Sarawak. The biggest achievement, however, resides in the development of community scholars and the community-led life-long-learning initiatives that continue to this day.

Digital Divide, Inclusion and Community Service Learning
Anna Bon

Community service learning (CSL) is an educational approach that we have further developed, in collaboration with universities and stakeholders in the Global South, into a research and education model dubbed ICT4D 3.0. This model combines problem-solving and situational learning with meaningful service to communities and society. In computer science and artificial intelligence education – traditionally purely technologically oriented – ICT4D 3.0 integrates CSL’s societal and ethical principles with user-centered design and socio-technical problem-solving. Exposed to complex societal real-world problems, students learn by exploring, reflecting, and co-designing in close interaction with communities in a real-world environment. This type of education provides a rich learning environment for “Bildung”.


Panelists: Saa Dittoh (UDS, Ghana), Narayanan Kulathuramaiyer (UNIMAS, Malaysia), Anna Bon (VU Amsterdam, the Netherlands)
Moderator: Hans Akkermans (w4ra.org, the Netherlands)
Slides: Hans Akkermans; Saa Dittoh; Narayanan Kulathuramaiyer; Anna Bon
Feb 23, 2021
5:00 – 6:30 PM
(17:00) CET
“Preventing Data Colonialism without resorting to protectionism - The European strategy”

This panel builds on prior DigHum panels, including the one entitled “Digital Sovereignty”. The particular focus of this one is on data and the related threats and opportunities. The threat of “data colonialism” describes a possible situation in which there is unbridled access to (extraction of) and processing/exploitation of the data of European citizens and businesses by non-European companies and, potentially via these, foreign powers.

Simply as an illustration, today the data of virtually all European citizens and companies that use cloud services are accessible to non-European cloud service providers, to which their countries of origin have potential access (the US via the recent Cloud Act and China via … fiat). The trap on the other side is to ring-fence such data within Member State or EU boundaries, thus severely limiting their value to anybody.

In her keynote address on Feb 4th at the Digital Masters 2021, EC President Ursula von der Leyen said: “In Europe we are sitting on a gold mine of machine generated data (value estimated at 1.5 trillion) which remains largely unexploited due primarily to the absence of clear rules for how a company can access, buy or sell such data across EU Member State borders.”

It is in this context that a number of European initiatives have been designed so as to constitute a coherent strategy. To itemize: The GAIA-X consortium aims, in the context of a broader effort, to lead to the creation of a European cloud characterized by portability, interoperability and security. Non-European cloud providers participate as well. The EC will launch the European Alliance for Industrial Data and Cloud aiming at a European Federated Cloud.

On the regulatory front, we have: the DSA, which aims to define the responsibilities of all digital players; the DMA, which aims to set rules for gatekeepers so that there is an online world accessible to all with a single, clear set of rules; and the European Data Governance Act, proposed last November with the aim of strengthening data sharing mechanisms across Europe. And last but certainly not least, the coming Data Act aims to arbitrate, in trusted fashion, how the resulting benefits are shared. Together with the Horizon research instruments and the investment instruments foreseen for the creation of dedicated European data spaces (e.g. for Health), all of the above comprise a complex arsenal which the panel has the knowledge and experience to help us understand better.

The key question, of course, is how all these initiatives can come together to form a coherent and effective strategy, and with what expected “timeline of impact”. Furthermore, as data underpins AI and practically all major coming digital technology advances, it is this impact and its timeline that will be crucial for the achievement and maintenance of European Digital Sovereignty in the emerging geopolitical context.


Panelists: Pilar del Castillo (European Parliament), Lokke Moerel (Tilburg University, The Netherlands), Yvo Volman (European Commission)
Moderator: George Metakides (President of Digital Enlightenment Forum)
Slides – Yvo Volman; Slides – Lokke Moerel
Feb 2, 2021
5:00 – 6:00 PM
(17:00) CET
“Freedom of Expression in the Digital Public Sphere”

A substantial portion of contemporary public discourse takes place on online social media platforms, such as Facebook, YouTube and TikTok. Accordingly, these platforms form a core component of the digital public sphere which, although subject to private ownership, constitutes a digital infrastructural resource open to members of the public.

The content moderation systems deployed by such platforms have the potential to influence and shape public discourse by mediating what members of the public are able to see, hear, and say online. Over time, these rules may have a norm-setting effect, shaping the conduct and expectations of users as to what constitutes “acceptable” discourse. Thus, the design and implementation of content moderation systems can have a powerful impact on the preservation of users’ freedom of expression. The emerging trend towards the deployment of algorithmic content moderation (ACM) systems gives rise to urgent concerns on the need to ensure that content moderation is regulated in a manner that safeguards and fosters robust public discourse.

This lecture builds upon the research carried out within the framework of the Research Sprint on AI and Platform Governance (2020) organized by the HIIG, Berlin (for more information on the research project and its key findings see Freedom of Expression in the Digital Public Sphere (graphite.page)). It explores how the proliferation of ACM poses increased risks for safeguarding freedom of expression in the digital public sphere, and proposes legal and regulatory strategies for ensuring greater public oversight and accountability in the design and implementation of content moderation systems by social media platforms.


Speaker: Sunimal Mendis (Tilburg University, The Netherlands)
Respondent: Christiane Wendehorst (University of Vienna, Austria)
Moderator: Erich Prem (eutema & TU Wien, Austria)
Jan 26, 2021
5:00 – 6:30 PM
(17:00) CET
“Digital Superpowers and Geopolitics”

In Cyberspace, the modern “colonial powers” are not nations but multinational companies, mostly American but with strong competition emerging in China. These companies control the digital platforms central to most people’s social networks, communications, entertainment, and commerce and, through them, have collected and continue to collect limitless information about our friends, colleagues, preferences, opinions, and secrets. With the knowledge obtained by processing this information, these companies have built some of the world’s most profitable businesses, turning little pieces of information, given to them by uninformed users in return for “free services”, into extremely valuable, targeted advertising. These companies, moreover, endeavor to operate in the space between countries, with very limited responsibility and accountability to governments. At the same time, governments such as those of China and the US have laws requiring such companies to divulge data obtained from their customers anywhere in the world. Does this pose a threat to national or European sovereignty?
This panel will endeavor to appraise the current situation, assess the potential impact of actions already initiated as well as explore new ones.


Panelists: June Lowery-Kingston (European Commission), Jan-Hendrik Passoth (ENS / Viadrina, Germany), Michael Veale (University College London, UK)
Moderator: James Larus (EPFL, Switzerland)
Dec. 15, 2020
5:00 – 6:00 PM
(17:00) CET
Julian Nida-Rümelin (LMU München, Germany)
“Philosophical Foundations of Digital Humanism”

In this talk Julian Nida-Rümelin will develop the main features of what he calls ‘digital humanism’ (Nida-Rümelin/Weidenfeld 2018), based on a general philosophical account of humanistic theory and practice (Nida-Rümelin 2016): (1) preconditions of human authorship (JNR 2019); (2) human authorship in times of digital change; (3) ethical implications.

J. Nida-Rümelin: Humanistische Reflexionen (Berlin: Suhrkamp 2016)
J. Nida-Rümelin/N. Weidenfeld: Digitaler Humanismus. Eine Ethik für das Zeitalter der Künstlichen Intelligenz (München: Piper 2018)
J. Nida-Rümelin: Structural Rationality and other essays on Practical Reason (Berlin / New York: Springer International 2019)


Moderator: Edward A. Lee (UC Berkeley, USA)
Nov. 3, 2020
5:00 – 6:30 PM
(17:00) CET
“Ethics and IT: How AI is Reshaping our World”

This panel debate will investigate the development of AI from a philosophical perspective. In particular, we will discuss the ethical implications of AI and the global challenges raised by the widespread adoption of socio-technical systems powered by AI tools. These challenges will be addressed by the three speakers from different cultural and geographical perspectives. We invite the audience to be part of the debate, to deepen together with the panel our understanding of how AI is reshaping the world and our awareness of the challenges we will face in the future.

Safety, Fairness, and Visual Integrity in an AI-shaped world
Deborah Johnson

AI algorithms are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although the use of algorithms to make decisions has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains in which safety and fairness are important. AI algorithms are also used to produce tools that enable individuals to do things they would not otherwise be able to do. In the case of synthetic media technologies, users are able to produce deepfakes that challenge the integrity of visual experience. In my presentation, I will discuss safety, fairness, and visual integrity as three ethical issues arising in an AI-shaped world.

Global Challenges for AI Ethics
Guglielmo Tamburrini

The Covid-19 pandemic is forcing us to address some global challenges concerning human well-being and the protection of fundamental rights. This panel presentation explores the ethically ambivalent roles that AI plays in connection with two additional global challenges: (1) climate warming and (2) threats to international peace. 1. AI has a significant carbon footprint. Should one set quantitative limits on the energy consumption required for AI model training? And if so, how should AI carbon quotas be distributed among States, businesses, and research? Should one limit the collection of user data to feed into data-hungry AI systems? And who should be in charge of deciding which data to collect, preserve or discard for the sake of environmental protection? 2. An AI arms race is well under way, ranging from the development of autonomous weapons systems to AI systems for discovering software vulnerabilities and waging cyberconflicts. Should the weaponization of AI be internationally regulated? And if so, how should human rights, humanitarian principles and the UN’s fundamental goal of preserving world peace and stability be interpreted and applied within this domain? This panel presentation is rounded out by looking at EU efforts to cope with some of these global ethical issues.

Building Ethical AI for the Human-AI Symbiotic Society
Yi Zeng

In this talk, I will provide a global landscape of AI Ethical Principles and investigate how these efforts can complement, rather than compete with, each other. I will then talk about concrete groundings of AI Ethical principles and introduce technical and social efforts in different domains. Finally, I will extend the discussion to long-term A(G)I ethical challenges and a possible positive path.


Panelists: Deborah G. Johnson (University of Virginia, USA), Guglielmo Tamburrini (University of Naples, Italy), Yi Zeng (Chinese Academy of Sciences, China)
Moderator: Viola Schiaffonati (Politecnico di Milano, Italy)
Oct. 20, 2020
5:00 – 6:00 PM
(17:00) CEST
Elissa M. Redmiles (Microsoft Research):
“Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps”

A growing number of contact tracing apps are being developed to complement manual contact tracing. Yet, for these technological solutions to benefit public health, users must be willing to adopt these apps. While privacy was the main consideration of experts at the start of contact tracing app development, privacy is only one of many factors in users’ decision to adopt these apps. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user — observing people’s preferences and inferring best practice from that behavior — instead of exclusively relying on experts’ normative decisions. This talk presents an empirically-validated framework of the inputs that factor into a user’s decision to adopt COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users’ likelihood to install COVID-19 apps based on quantifications of these factors, I show how high the bar is for these apps to achieve adoption and suggest user-driven directions for ethically encouraging adoption.


Moderator: James Larus (EPFL, Switzerland)
Slides
Oct. 06, 2020
5:00 – 6:30 PM
(17:00) CEST
Paul Timmers, Ciaran Martin, Margot Dor, and Georg Serentschy:
“Digital Sovereignty – Navigating Between Scylla and Charybdis”

This panel debate will have a hard and critical look at the sense and nonsense of digital sovereignty.

We will debunk some of the terminology that is being thrown around in debates on digital sovereignty, analyse the good, the bad, and the ugly of the geopolitical technology battles between the USA and China, and provide specific insight into two harbingers of the emerging perceptions of sovereignty in cyberspace: global telecommunications and global standardization.

We invite the audience to be part of the debate, to deepen together with the panel our understanding of how Europe can best navigate the good, the bad and the ugly of geopolitics and the digital world.

Prof Paul Timmers will set the scene with a critical reflection on where we are in the debate on ‘digital sovereignty’ and the consequences for EU policy development. Paul Timmers is at the European University Cyprus, Research Associate at Oxford University, Senior Advisor at EPC, former Director at the European Commission, and a leading thinker on strategic autonomy and digital sovereignty.

Subsequently, we will engage in a panel and audience discussion where three leading cybersecurity personalities will put forward their response to the scene setter:
Prof Ciaran Martin, Oxford University, former head of the UK NCSC (National Cyber Security Centre), one of the world’s leading figures in cybersecurity, recently interviewed by the Financial Times on the east-west split over the internet.

Dr Margot Dor, Strategy Director of ETSI, a European Standards Organization, and driver of the Carl Bildt Report on Strategic Standardisation for Europe in the Digital Era.

Dr Georg Serentschy, advisor on telecoms and IT, senior advisor at Squire Patton Boggs, member of the Board of Directors of the International Telecommunications Society, and former Head of BEREC (European Telecoms Regulators).


Moderator: Lynda Hardman (CWI – Centrum Wiskunde & Informatica, Amsterdam and Utrecht University)
Slides – Paul Timmers; Slides – Margot Dor
Sept. 22, 2020
5:00 – 6:00 PM
(17:00) CEST
Barbara J. Grosz (Harvard, USA):
“An AI and Computer Science Dilemma: Could I? Should I?“

Computing technologies have become pervasive in daily life. Predominant uses of them involve communities rather than isolated individuals, and they operate across diverse cultures and populations. Systems designed to serve one purpose may have unintended harmful consequences. To create systems that are “society-compatible”, designers and developers of innovative technologies need to recognize and address the ethical considerations that should constrain their design. For students to learn to think not only about what technology they could create, but also whether they should create that technology, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard’s Embedded EthiCS program, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will describe the goals of Embedded EthiCS, the way the program works, lessons learned and challenges to sustainable implementations of such a program across different types of academic institutions. This approach was motivated by my experiences teaching the course “Intelligent Systems: Design and Ethical Challenges”, which I will describe briefly first.


Moderator: Erich Prem (eutema & TU Wien, Austria)
Sept. 8, 2020
5:00 – 6:00 PM
(17:00) CEST
Stuart Russell (University of California, Berkeley, USA):
“How Not to Destroy the World with Artificial Intelligence!“

I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world-decision making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.


Moderator: Helga Nowotny (Chair of the ERA Council Forum Austria and Former President of the ERC)
Slides
July 14, 2020
5:00 – 6:00 PM
(17:00) CEST
“Corona Contact Tracing – the Role of Governments and Tech Giants“
Alfonso Fuggetta (Politecnico di Milano, Italy), James Larus (EPFL, Switzerland)
Moderator: Jeff Kramer (Imperial College London, UK)
Slides
June 9, 2020
5:00 – 6:00 PM
(17:00) CEST
Moshe Vardi (Rice University, USA):
“Lessons for Digital Humanism from Covid-19”
Slides