
Beyond Compliance – Digital Ethics in Research

Digital Ethics in Research

Forum on Digital Ethics in Research 2022

Institut Imagine, Paris and online

The most recent edition can be found here.

Researchers in digital sciences face tough ethical questions in their daily work, questions for which the research community has not yet reached consensus answers. The forum "Beyond Compliance" aims to advance the discussion of these issues. The target audience comprises researchers and Research Ethics Boards. The event consists of keynotes, presentations, tutorials and interactive sessions, with ample time for open discussion. Several outcomes are envisioned, including some directed towards policy makers.

The presentations are published on a YouTube channel and can also be viewed individually (see below).

Program

Day 1: Monday, October 17th

09:15-09:30

Welcome words

09:30-10:30
Keynote (chair: Claude Kirchner)

Raja Chatila (Sorbonne University, CNPEN)
Responsible development, use and governance of AI [slides] Youtube

 

10:45-12:30
Session 1 - Research ethics in a cross-disciplinary and global setting (chair: Gabriel David)

Philip Brey (University of Twente) [via zoom]
Research ethics guidelines for the computer and information sciences [slides] Youtube

Bernd Stahl (De Montfort University) [via zoom]
Realising ethics and responsible innovation in a large neuroinformatics project [slides] Youtube

Sally Wyatt (Maastricht University) [via zoom]
New digital research possibilities/Old ethical forms

 

14:15-16:00
Session 2 - Research ethics review (chair: Catherine Tessier)

Dirk Lanzerath (University of Bonn) [via zoom]
Ethics reviews in modern research: learning from medical RECs [slides] Youtube

Jeroen van der Ham (University of Twente) [via zoom]
Beyond “human subject”: the challenges of ethics oversight in digital science [slides] Youtube

Casey Fiesler (University of Colorado Boulder) [via zoom]
Data is people: research ethics and the limits of human subjects review Youtube

 

16:15-18:00
Session 3 - Research ethics in the era of big data and AI (chair: Andreas Rauber)

Rowena Rodrigues (Trilateral Research) [via zoom]
Looking back, moving forward: AI research ethics [slides]

Inioluwa Deborah Raji (Mozilla Foundation) [via zoom]
Research accountability in machine learning Youtube

Michael Bernstein (Stanford University) [via zoom]
Ethics and society review: ethics reflection as a precondition to research funding [slides] Youtube

 

Day 2: Tuesday, October 18th

09:00-10:30
Session 4 - Tutorial on research ethics for PhD students and young researchers (chair: Eric Germain)

Catherine Tessier (Onera, CNPEN)

Alice: Hi Ben! You know what? I enrolled on a new PhD program at Hereafter University. They offered generous grants for machine learning students to design chatbots replicating the speech of deceased individuals and even generating new phrases that the person has never uttered in their lifetime. How does your own PhD go? Still with bees?
Ben: Hi Alice! The Hereafter program looks amazing. I wish I could chat with my grandpa again. As for me, the micro-robots to support the well-being of honeybee queens and optimise the honey yield are ready to be implemented in beehives. Demo within two months! Our partner Bill Surrogates is looking forward to it.
Alice: That’s really great to help the bees, times are so hard for them.
Well, let us encourage Alice and Ben to reflect about their PhD works and give them some basic tools to do so. Will you join in?

[slides] Youtube 

 

10:45-12:30
Session 5 - Research ethics training (chair: Eric Germain)

Gordana Dodig-Crnkovic (Chalmers University) [via zoom]
Research-based perspective in teaching ethics to engineering students [slides] Youtube

Karën Fort (Sorbonne University) [via zoom]
Teaching ethics in NLP: DIY (do it yourself) [slides] Youtube

Catherine Tessier (Onera, CNPEN)
Research ethics training: debating is learning [slides] Youtube

 

14:15-16:00
Session 6 - New challenges and opportunities for research ethics (chair: Sylvain Petitjean)

Panagiotis Kavouras (National Technical University of Athens) [via zoom]
Open science: hopes, challenges and the intervention of ROSiE project [slides] Youtube

Sylvie Delacroix (University of Birmingham) [via zoom]
Data Trusts and the need for bottom-up data empowerment infrastructure [slides] Youtube

Yves-Alexandre de Montjoye (Imperial College London) [via zoom]
The search for anonymous data - From de-identification to privacy-preserving systems

 

16:00-16:15
Closing remarks

Location

Institut Imagine
24 Boulevard du Montparnasse
75015 Paris
France

To help you find accommodation during the forum, we list some hotels near the venue:

  • Hotel Korner Montparnasse
  • Best Western Hotel Le Montparnasse
  • Hôtel La Parizienne
  • Hotel Le Littré
  • Hôtel Louison Rive Gauche
  • Pullman Paris Montparnasse

If you are interested in participating or would like more information, please contact the organisers.

Organization committee: ERCIM Ethics Working Group

  • Claude Kirchner (Inria, CNPEN)
  • Sylvain Petitjean (Inria)
  • Andreas Rauber (SBA)
  • Christos Alexakos (ISI)
  • Fabian Eberle (SBA)
  • Gabriel David (INESC)
  • Guenter Koch (HCM)
  • Pablo Cesar (CWI)
  • Vera Sarkol (CWI)

ERCIM is a consortium of leading European research institutions committed to information technology and applied mathematics.

 

Sponsors

  • INRIA
  • ERCIM
  • CNPEN


Beyond Compliance 2023

Faculty of Engineering of the University of Porto, Porto and online

Researchers in digital sciences face tough ethical questions in their daily work, questions for which the research community has not yet reached consensus answers. The forum "Beyond Compliance" aims to advance the discussion of these issues. The target audience comprises researchers and Research Ethics Boards. The event consists of keynotes, presentations, tutorials and interactive sessions, with ample time for open discussion. Several outcomes are envisioned, including some directed towards policy makers.

[previous edition: 2022]

Program

Please note that all times are WEST (Western European Summer Time), i.e. one hour behind CEST.

Wednesday, October 18th, afternoon

13:15-13:30 WEST
Welcome words

13:30-14:30 WEST
Keynote (chair: Claude Kirchner)

Arlindo Oliveira (University of Lisbon)
Artificial Consciousness: unreachable dream or foreseeable future?

14:30-16:15 WEST
Session 1 - AI: From ethics to regulation (chair: Andreas Rauber)

Daniela Tafani (University of Pisa)
What’s wrong with AI ethics narratives [slides] Youtube

Clara Neppel (IEEE Europe)
AI Governance: the role of regulation, standardization, and certification

Daniel Leufer (Access Now)
The EU’s AI Act: (self-)regulation, risk and public accountability

16:30-18:15 WEST
Session 2 - Inspiring trust in science in a digital world (chair: Guenter Koch)

Marisa Ponti (University of Gothenburg)
In (Citizen) Science We Trust [slides] Youtube

Katie Shilton (University of Maryland)
Excavating awareness and power for trustworthy data science [slides] Youtube

Jason Pridmore (Erasmus University Rotterdam)
Resilience Amidst Complexity: Navigating Conflicts of Interest in the Digital Age

Thursday, October 19th, morning

09:00-10:45 WEST
Session 3 - Data protection in the age of AI and big data (chair: Gabriel David)

Katharine Jarmul (Thoughtworks)
Theory to Practice: My journey from responsible AI to privacy technologies Youtube

Brent Mittelstadt (University of Oxford)
A right to reasonable inferences in the age of AI

Rainer Mühlhoff (University of Osnabrück)
The risk of secondary use of trained ML models as a key issue of data ethics and data protection regarding AI

11:00-12:30 WEST
Session 4 - Tutorial on research ethics in digital sciences (chair: Fabian Eberle)

Catherine Tessier (Onera) [slides] Youtube

Thursday, October 19th, afternoon

14:15-16:00 WEST
Session 5 - Responsible use of generative AI in academia (chair: Sylvain Petitjean)

Tony Ross-Hellauer (TU Graz)
LLMs, reproducibility and trust in scholarly work [slides] Youtube

David Leslie (Queen Mary University of London)
Scientific Discovery and Research Integrity in the Age of Large Language Models

16:15-18:00 WEST
Session 6 - Environmental research ethics (chair: Claude Kirchner)

Sasha Luccioni (Hugging Face)
AI and Sustainability: Data, Models and (Broader) Impacts

Gabrielle Samuel (King’s College London)
Reimagining research ethics to include environmental sustainability [slides] Youtube

Romain Couillet (University Grenoble-Alps)
Why and how to dismantle the digital world?

19:30-21:30 WEST
Dinner

Friday, October 20th, morning

09:00-10:45 WEST
Session 7 - Ethical guidance and review for research beyond human subjects (chair: Christos Alexakos)

Alexei Grinbaum (CEA)
Guidelines for AI Ethics in current EU research projects [slides] Youtube

Mihalis Kritikos (European Commission)
Research Ethics in Digital Sciences: the case of the EU’s Ethics Appraisal process

Kirstie Whitaker (Turing Institute)
Operationalising the SAFE-D principles for safe, ethical and open source AI

11:00-12:00 WEST
Keynote (chair: Guenter Koch)

Julia Neidhardt (TU Wien)
Digital humanism

12:00-12:15 WEST
Closing remarks

Registration

Participation is free but registration is mandatory, both for online and on-site participation. Please fill in the Registration form before October 6th.

Location

FEUP - Faculty of Engineering of the University of Porto
Rua Dr. Roberto Frias
4200-465 Porto
Portugal

Accommodation

Portuguese hotels enjoy a very good reputation abroad, thanks to the high standard of service: professional teams combined with natural Portuguese warmth and friendliness will certainly make you feel welcome.

The hotels listed below are just examples (many others are available) and are organised by location, from the meeting venue towards the city centre. No special rates have been arranged.

Walking distance

IBIS HOTEL ★★

The ibis Porto São João hotel is located in Porto’s University district, close to São João hospital and on the top floor of Campus São João shopping centre. The hotel is strictly non-smoking, and has direct access to the city’s historic centre. Walking distance to FEUP: 10 min

HOTEL EUROSTARS OPORTO ★★★★

Eurostars Oporto is a modern design hotel located 100 m from Hospital São João Metro. All guest rooms at Eurostars Oporto have hardwood floors and flatscreen TVs. Each has a private balcony and a tiled en-suite bathroom. Guests can enjoy a glass of local Porto wine in the bar. Regional and international meals can be enjoyed in the privacy of the guest rooms. Eurostars Hotel is conveniently located 10 minutes’ Metro journey from Porto’s historic centre. It is 10 minutes’ drive from Francisco Sá Carneiro International Airport. Walking distance to FEUP: 15 min

AXIS PORTO BUSINESS & SPA HOTEL ★★★★

Axis Porto Hotel is located beside Porto University campus, 3 km from the Centre of Porto. The hotel features a spa centre and panoramic city views. All guest rooms at Axis Porto feature satellite TV, minibar and air conditioning. In addition, the hotel rooms all have balconies. The restaurant at Hotel Axis Porto offers a fusion of Italian and international cuisine. The hotel is located just 25 minutes’ drive from Porto International Airport. The nearest Metro station is 900 m away. Walking distance to FEUP: 15 min

Other downtown hotels

with metro access to the Meeting Venue

HOTEL TRYP PORTO CENTRO ★★★

Situated in downtown Porto, this hotel is a 6-minute walk from shopping on Rua de Santa Catarina. It offers air-conditioned guestrooms with satellite TV and also provides private parking, at an additional cost. Rooms at the Tryp Oporto Centro Hotel have parquet floors and modern wood furnishings. Free Wi-Fi is available throughout the property. The hotel is situated 550 m from the Marques Subway Station.

GRANDE HOTEL DO PORTO ★★★

Within Porto’s central pedestrian zone, this hotel is in a renovated 1880 building, at a 5-minute walk from the São Bento Train Station. The elegant rooms at the Grande Hotel do Porto include a flat-screen TV with satellite channels, a minibar and a private bathroom with a bath tub and shower. Grande Hotel do Porto is steps from Bolhão Bus and Metro Station and 10 minutes from the Serra do Pilar Monastery by foot. The Francisco Sá Carneiro Airport is 15 km from the hotel.

NH COLLECTION PORTO BATALHA ★★★★

Located in a recently renovated 18th-century palace, the NH Collection Porto Batalha hotel is located in the heart of the historic city centre. The hotel offers elegant rooms with double or twin beds and bathrooms with bath or shower. Each air-conditioned room is equipped with a flat-screen satellite TV, a seating area, safety deposit box, minibar, coffee machine and an electric kettle.

PORTOBAY HOTEL TEATRO ★★★★

Located in the heart of Porto, the 4-star Hotel Teatro – Design Hotels is within a 5-minute walk from some of the city’s main attractions. Hotel Teatro stands in the same spot where the old 1859 Baquet Theatre once stood. Completely renovated with a contemporary design plus a refined and bohemian environment, this revived theatre includes 74 rooms over six floors. The on-site restaurant offers an à la carte menu and in the evening, guests can relax at the hotel’s Plateia Bar. Free Wi-Fi is available throughout the hotel.

HOTEL PORTOBAY FLORES ★★★★★

This small boutique hotel has a great location on Rua das Flores (pedestrian access), within easy walking distance of the city’s main tourist attractions. Its 500 years of history fuse the classic and the contemporary across two different buildings: a 16th-century palace and a new wing built from scratch.


If you are interested in participating or would like more information, please contact the organisers.

Organization committee: ERCIM Ethics Working Group

  • Claude Kirchner (Inria, CNPEN)
  • Sylvain Petitjean (Inria)
  • Andreas Rauber (SBA)
  • Christos Alexakos (ISI)
  • Fabian Eberle (SBA)
  • Gabriel David (INESC)
  • Guenter Koch (HCM)
  • Vera Sarkol (CWI)


 

Sponsors

  • ERCIM
  • CNPEN
  • CWI
  • INESC
  • INRIA

Information

For any additional information, please feel free to contact Ana Isabel Oliveira.

Links:

  • Speakers of the 2023 edition
  • Abstract of the 2023 edition
  • Programme and presentations of the 2022 edition


Beyond Compliance 2024

14-15 October 2024 - HUN-REN SZTAKI - Institute for Computer Science and Control, Budapest, Hungary, and online
Older editions: 2023 | 2022

Monday, October 14th

09:00-09:15 - Welcome and introduction

After Paris in 2022 followed by Porto in 2023, the third edition of the ERCIM Forum 'Beyond Compliance' was held in Budapest on October 14-15, 2024, at the HUN-REN Institute for Computer Science and Control. This year’s event, which took place both in person and online, continued the discussion on the tough ethical issues faced by researchers in digital sciences. The scientific richness of these two days lay not only in the distinguished status of the speakers, but also in the wide range of cutting-edge topics covered. The diversity of contributions and the high caliber of Forum participants made it possible to explore digital issues from cultural, legal, (geo)political, historical, philosophical, and ethical perspectives.

09:15-10:15 – Opening keynote (Chair: Claude Kirchner)

The program of the first day was marked by two particularly brilliant keynotes, masterfully delivered by Julian Nida-Rümelin ("Beyond Compliance: Digital Humanism") and Milad Doueihi ("Beyond Intelligence: Imaginative Computing"). While the first speaker focused on tracing the philosophical origins of Digital Humanism and describing its challenges through animism and mechanistic reductionism, the second offered a historical and literary analysis of what we now refer to as thinking machines. These presentations revisited classic AI debates, drawing on the ideas of Turing, Gödel, Wittgenstein, and earlier thinkers such as Leibniz and Butler. Both speakers explored the intersection of humanity and digital technology, advocating for a human-centered approach to AI. The German philosopher emphasized the centrality of human authorship, while the American historian discussed the transformative effects of digital memory on culture and knowledge. Ethically, both thinkers stressed the importance of responsibility in the use of technology, emphasizing that education should guide digital transformation. They both called for critical reflection to safeguard cultural values and advocated for the preservation of human relationships, while reflecting on how digital culture reshapes knowledge transmission.

Julian Nida-Rümelin, LMU Munich and Humanistische Hochschule Berlin

Beyond Compliance: Digital Humanism Youtube


10:45-12:30 The making of regulations
(Chair: Guenter Koch and Anna Ujlaki)

The first session, dedicated to the making of regulations, featured three researchers. Firstly, Melodena Stephens discussed the complexities of AI regulation, emphasizing the difficulty of implementing effective, intergenerational policies in a rapidly evolving technological landscape, and the need for a global, flexible, and ethically sound approach to address issues like human autonomy, security, and the future of jobs. Next, Anna Ujlaki critically reviewed the political theory discourse on AI, focusing on its conceptual limitations, normative questions, and potential for addressing AI's integration into society, while highlighting the political risks and ethical dilemmas involved in AI regulation. Finally, Nikolaus Forgo discussed how, since the introduction of computers into public administration, lawmakers have repeatedly overestimated the short-term effects of new technologies while underestimating their long-term impacts, exemplified by the development of data protection laws and the recent AI Act.

Melodena Stephens, Mohammed Bin Rashid School of Government, online
Approaching the Regulatory Event Horizon: Opportunities and Challenges Youtube

Anna Ujlaki, HUN-REN/Eötvös Loránd University
Regulating Artificial Intelligence: A Political Theory Perspective [slides] Youtube

Nikolaus Forgo, Vienna University, online
Giving an historical and critical overview on European attempts to regulate digitalisation

Panel discussion

 

14:00-15:30 Emerging topics (Chair: Claude Kirchner)

The rest of the day featured two additional sessions dedicated to emerging topics and cultural influences. Anatole Lécuyer opened the emerging topics session by discussing the paradoxical effects of virtual reality and metaverse technologies, highlighting their history and their growing impact on the population, particularly children and young adults, and the emerging ethical questions surrounding them. He explored psychological effects such as the sense of embodiment, agency, and the Proteus effect, which leads users to behave according to the stereotypes of their avatars, while also examining the potential harms and benefits of VR, from therapeutic uses to the risk of altering identity. This fascinating discussion was extended by the following speakers, who were present in person: Michele Barbier and Ferran Argelaguet. They presented a project exploring the ethical challenges of social interactions in the metaverse, focusing on issues such as harassment, privacy, and the legal status of avatars, with the goal of fostering empathy, improving safety tools, and addressing social and cultural concerns around digital identities and regulation. Finally, and in a slightly unconventional style, Jean-Bernard Stefani discussed the concept of "conviviality" from Illich to highlight the moral dilemmas in the digital world, including its ecological impact, surveillance capitalism, algorithmic discrimination, and digital divides, while arguing that these issues require a critical approach and a shift towards more human-centered and de-automated technologies.

Anatole Lécuyer, Inria Rennes/IRISA, online
Paradoxical effects of virtual reality [slides] Youtube

Justyna Swidrak, Michele Barbier, Mel Slater, Maria Sanchez-Vives, Maria Roussou, Eleni Toli, Ferran Argelaguet, in person
Ethical Considerations of Social Interactions in the Metaverse [slides] Youtube

Jean-Bernard Stefani, Inria, online
Taking Conviviality Seriously

Panel discussion

 

16:00-18:00 Cultural influences (Chair: Emma Beauxis-Aussalet)

Keynote: Milad Doueihi
Beyond Intelligence: Imaginative Computing. A Minority Report. online Youtube

Finally, the last two remote speakers addressed the issue of cultural influences. Rockwell Clancy discussed the relationship between cultural responsiveness, psychological realism, and global AI ethics, highlighting the importance of understanding both the normative and empirical components of AI ethics, the challenges posed by cross-cultural contexts, and the need for culturally informed policy frameworks in AI development. Marianna Capasso presented a project on algorithmic discrimination, approaching it from a cross-cultural perspective. She highlighted how algorithmic discrimination should be understood in a nuanced way, using examples such as Amazon's CV screening system, which discriminated against women due to biased historical training data. She examined various forms of algorithmic discrimination, including indirect and statistical discrimination, and explored how culturally specific norms influence discriminatory behaviors.

Rockwell F. Clancy, Virginia Tech, online
Towards a culturally responsive, psychologically realist approach to global AI ethics Youtube

Marianna Capasso, Utrecht University, online
Algorithmic Discrimination in Hiring: A Cross-Cultural Perspective Youtube

Panel discussion

 

Tuesday, October 15th

09:00-10:30 Cooperative agents (Chair: Gabriel David)

The second day began with a session on cooperative agents. Elias Fernández Domingos discussed the importance of studying delegation to AI, explaining its issues and presenting a behavioral experiment where AI delegation improved coordination in a collective risk scenario, emphasizing the need for well-designed systems that maintain human agency while delegating tasks. Rebecca Stower explored ethical and psychological implications of human-robot interactions, focusing on errors in robot behavior, the impact on trust and risk-taking, and the challenges of balancing data privacy and user preferences in robot design. Finally, Michael Fisher discussed the importance of ensuring trustworthiness in autonomous systems, emphasizing the need for reliability, transparency, and ethical decision-making, while also addressing sustainability concerns related to both the environmental impact of AI and robotics, as well as the unnecessary deployment of technology.

Elias Fernández Domingos, VUB Brussels, online
Delegation to AI Agents Youtube

Rebecca Stower, KTH Royal Institute of Technology
Good Robots Don’t Do That: Making and Breaking Social Norms in Human-Robot Interaction

Michael Fisher, University of Manchester, online
Responsible Autonomy Youtube

Panel discussion

 

11:00-12:30 – Tutorial (Chair: Christos Alexakos)

At midday, the Forum participants had the opportunity to attend the Tutorial Training expertly delivered by Alexei Grinbaum. He emphasized the importance of operationalizing AI ethics and explained that ethics in AI should be viewed as a valuable framework rather than a constraint. The scientist addressed a range of ethical challenges, including security risks in robotics, and introduced tools to facilitate discussions between ethicists and engineers. He presented training courses featuring exercises on dilemmas and the evaluation of AI projects in sectors like healthcare. He also explored the issue of responsibility in personalized education, focusing on topics such as bias, fairness, and the role of teachers.

Alexei Grinbaum, CEA
Training in AI ethics: concepts, methods, exercises, problems Youtube

 

14:00-15:00 Unconference session (Chair: Emma Beauxis-Aussalet)

For the first time, the Forum left some space for an unconference session, which allowed participants to discuss, in a more informal way, Open Science and the idea of a Nobel Prize in Computer Science.

 

15:15-17:00 Democracy (Chair: Sylvain Petitjean)

Finally, the Forum concluded with a session dedicated to democracy that gave the floor to four speakers. Natali Helberger argued that AI is a powerful political tool that can either strengthen or undermine democracy, highlighting concerns about misinformation and the influence of big tech, while also recognizing AI's potential to enhance communication. Siddharth Peter de Souza discussed the creation of data governance norms, emphasizing the role of civil society and advocating for a pluralistic approach to regulation that includes marginalized voices. Attila Gyulai explored the impact of AI on democracy, questioning the assumption that democracy is solely about autonomy, and suggesting that a more realistic understanding of democracy, which accounts for representation, manipulation, and the constructed nature of preferences, is necessary to address the challenges AI poses. Finally, Bjorn Kleizen examined the level of trust citizens have in AI systems used by governments, exploring how transparency and public perceptions influence trust, and emphasizing the need for long-term strategies to maintain trust in AI applications.

Natali Helberger, University of Amsterdam, online
AI everywhere and anytime in the media. Will the AI Act save democracy?

Siddharth Peter de Souza, Tilburg University/Warwick University, online
Norm making around data governance: proposals for red lines Youtube

Attila Gyulai, HUN-REN
Misled by autonomy: AI and contemporary democratic challenges

Bjorn Kleizen, University of Antwerp, online
Do citizens trust trustworthy artificial intelligence? Examining the limitations of ethical AI measures in government

Panel discussion

17:00-17:30 Closing discussion (Chair: Claude Kirchner)

 

Organizing committee

  • Christos Alexakos (ISI/ATHENA RC, Greece)
  • Emma Beauxis-Aussalet (VU, Netherlands)
  • András Benczúr (HUN-REN SZTAKI, Hungary)
  • Gabriel David (INESC TEC, Portugal)
  • Claude Kirchner (CCNE and Inria, France)
  • Rebeka Kiss (HUN-REN Centre for Social Sciences, Hungary)
  • Guenter Koch (AARIT, Austria, and Humboldt Cosmos Multiversity, Spain)
  • Sylvain Petitjean (Inria, France)
  • Andreas Rauber (TU Wien, Austria)
  • Miklós Sebők (HUN-REN Centre for Social Sciences, Hungary)
  • Vera Sarkol (CWI, Netherlands)

Working Group on Digital Ethics


Previous editions: 2024 | 2023 | 2022

ERCIM Forum Beyond Compliance 2025

29–31 October 2025 – Rennes, France

> Summary of the talks

The 4th edition of the ERCIM Forum Beyond Compliance will take place from 29 to 31 October 2025 at the Inria Centre at Rennes University (IRISA) in Rennes, France. This international event focuses on research ethics in the digital age, offering a space for thought-provoking dialogue and collaborative exploration.

This year’s edition will address five pivotal themes:

  • Security in the digital society
  • Geopolitics of digital ethics, including infrastructure and digital sovereignty
  • Data altruism and the promotion of open academic resources
  • Generative AI in research, teaching, and publishing
  • AI’s impact on behavior & cognition

The forum is partially co-located with the European Informatics Leaders Summit (ECSS’25) of Informatics Europe, and will feature a joint session co-organised by the digital ethics working groups of both ERCIM and Informatics Europe. Participants are welcome to also attend ECSS'25 and its ethics workshop.

Information on travel and accommodation can be found here. Attendance is in person (no online broadcast, at least for this year). You can also revisit last year's talks and videos here. The event is free, thanks to our sponsors: ERCIM, INRIA, and CCNEN (the national digital ethics advisory committee of France).

Wednesday 29-10-2025

Addressing ethical challenges with security in the digital society

14:30–16:30 — Joint Round Table with the Ethics WG of Informatics Europe — Amphitheatre (Building G)
This session will address the responsible development and application of information systems, encompassing areas such as accountability, misinformation, ethical awareness, and best practices for digital security.
Co-chaired by Covadonga Rodrigo.

Speakers:

  • Saskia Bayerl (replacing Marco Gercke) — From Lab to Law: compliance journeys of high-risk AI development and AP4AI self-assessments
  • Tatjana Welzer — Ethics and Accountability
  • Kristina Lapin — Raising ethical awareness to combat dark patterns
  • Rafael Pastor — GenAI: deepfakes, misinformation and risks in the digital society
  • Mirela Riveni — Ethical issues in decision-making with AI

Keynote

16:30–17:30 — Catherine Tessier — “Artificial intelligence”, research and education: some (new) ethical issues — Amphitheatre (Building G)

Thursday 30-10-2025

Strategy meeting

09:00–9:30 — Open discussion of future plans for ERCIM Digital Ethics WG, and for supporting digital ethics in academia — Amphitheatre (Building G)

Tutorial

09:30–11:00 — Alexei Grinbaum — AI impact on human behavior and cognition — Amphitheatre (Building G)

11:00–11:30 — Coffee Break

Data altruism and open academic resources

11:30–12:30 — Talks & Discussion — Amphitheatre (Building G)
Academics have a duty to deliver digital resources to the public, to inform citizens and policy, and foster fair innovation. This session will discuss practical issues with establishing data commons that enable open science and innovation: e.g., issues with privacy, trust, and regulatory frameworks.

Speakers:

  • YouTube Roberto Di Cosmo — No science without source: collecting, preserving and sharing software in a risky world
  • YouTube Bertil Egger Beck — European Open Science Cloud (EOSC): an evolving European ecosystem for research with digital objects
  • YouTube Miriam Seoane Santos (winner of 2025 Cor Baayen Award) — Responsible by design: building trust in open and shared data

12:30–13:30 — Lunch Buffet

Keynote

13:30–14:30 YouTube Afonso Seixas-Nunes — A lawyer among engineers! Autonomous systems and ethical and legal questions which remain to be answered — Salle Markov (Building G)

14:30–15:00 — Coffee Break

Geopolitics of digital ethics in academia

15:00–17:00 — Talks & Discussion — Salle Markov (Building G)
Recent geopolitical changes have impacted academia’s funding policy, and the implementation of research on digital ethics or with digital ethics implications. In this changing geopolitical landscape, digital sovereignty is ever more important. Dependencies on infrastructure and funding may constrain how digital ethics is implemented in academic research and practices (e.g., to ensure privacy, inclusivity, transparency, sustainability). This session will discuss the (dis)alignment of political, economical, and academic interests that underlie:

  • Digital infrastructure in academia, and its dependency on the big tech industry
  • University policy on digital ethics
  • Funding strategies for research on digital ethics, or with digital ethics implications

Speakers:

  • Eric Germain — What is behind ‘apolitical ethics’, and how can academia remain sovereign?
  • Kavé Salamatian — Collaborative multidisciplinary research in cybersecurity in a changing geopolitical context: news from the front [YouTube]
  • Petru Dumitriu — The emergence of international norms on AI ethics [YouTube]
  • Domagoj Juricic — When ethics meets power: the geopolitics of digital research [YouTube]

Friday 31-10-2025

GenAI in research and teaching

09:30–11:30 — Talks — Salle Markov (Building G)
Practices in academia have to rapidly adapt to generative AI. As educators, academics must wonder how GenAI impacts human cognitive skills. In the long-term, the issue is about which non-essential skills can we delegate to GenAI, and which essential skills must we not. This session will explore strategies for developing students’ essential skills for writing, reading, coding, ideating, and critical thinking. As this needs more than establishing fraud policies, we will revisit the design of learning goals, assessment methods, and learning activities.

Speakers:

  • Dagmar Monett Díaz — Against the uncritical adoption of 'AI' technologies in academia [YouTube]
  • Michał Wieczorek — What is (un)ethical about educational AI? [YouTube]
  • Laurynas Adomaitis — An oracle or an intern? Using GenAI in research [YouTube]

11:30–11:45 — Break

Keynote

11:45–12:30 — Mihalis Kritikos — Digital ethics in EU-funded projects — Salle Markov (Building G)

Organising Committee

Christos Alexakos (ISI/ATHENA RC, Greece)
Emma Beauxis-Aussalet (VU, Netherlands)
Gabriel David (INESC TEC, Portugal)
Alexei Grinbaum (CNPEN and CEA, France)
Claude Kirchner (CNPEN and Inria, France)
Guenter Koch (AARIT, Austria & Humboldt Cosmos Multiversity, Spain)
Anaelle Martin (CNPEN, France)
Sylvain Petitjean (Inria, France)
Vera Sarkol (CWI, Netherlands)

Speakers & Abstracts

Saskia Bayerl & Marco Gercke
From Lab to Law: compliance journeys of high-risk AI development and AP4AI self-assessments

Abstract:
The EU AI Act has introduced a new compliance regime that is now firmly institutionalized, reshaping how artificial intelligence is conceived, developed, and deployed across Europe. For high-risk AI systems in particular, the regulation sets out rigorous requirements that directly influence the design and innovation processes within EU-funded projects and beyond. These obligations create both opportunities for more trustworthy AI and practical challenges for developers, regulators, and end-users alike. Drawing on concrete experiences from European high-risk AI initiatives, this keynote highlights the realities of operationalizing compliance. It explores how AP4AI self-assessments can serve as a practical tool to navigate complexity, align with the EU AI Act, and foster accountability in AI development.

Note: The Accountability Principles for AI (AP4AI) Project develops solutions to assess, review and safeguard the accountability of AI usage by internal security practitioners. https://www.ap4ai.eu/about

Bio:
Prof. Dr. Marco Gercke is an entrepreneur, scientist and consultant whose primary focus is cybersecurity. With more than 1000 speeches in over 100 countries and over 100 scientific publications, Prof. Gercke is one of the world’s leading experts in the field of cybersecurity and cybercrime. He is the founder and director of the Cybercrime Research Institute, an independent research institute and think tank based in Cologne. He advises governments, organizations and large enterprises around the world on strategic, political and legal issues in the field of cybersecurity. The main focus of his work is developing innovative approaches to tackling cybercrime, which has become a central problem for governments and businesses in recent years. Over the past 15 years, he has worked in over 100 countries across Europe, Asia, Africa, the Pacific and Latin America. As a respected and experienced speaker, Prof. Gercke offers valuable insider knowledge on cybersecurity, drawn from his many years of activity and his internal view of the field. His lectures are clearly structured, highly informative and include practical examples.

Dagmar Monett Díaz
Against the uncritical adoption of 'AI' technologies in academia

Abstract:
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st century with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Bio:
Dagmar Monett is a Computer Science Professor (Artificial Intelligence, Software Engineering) at the Berlin School of Economics and Law (HWR Berlin) and Director of the Computer Science Division at the Department of Cooperative Studies Business and Technology. She is also a Co-Founding Member of the Institute for Data-Driven Digital Transformation (d-cube, HWR Berlin), a Co-founder of the AGI Sentinel Initiative AGISI.org, a Board member of Hochschullehrerbund Berlin, and a professional member of the ACM. With over 37 years of research and teaching experience in different countries, her research fields include, but are not limited to, AI (both its siloed subfields and cross-subfield work), critical AI, AI ethics, software engineering, and computer science education.

Michał Wieczorek
What is (un)ethical about educational AI?

Abstract:
This paper builds on the results of our recent systematic literature review to discuss the ethical implications of using AI in primary and secondary education. Although recent advances in AI have led to increased interest in its use in education, discussions about the ethical implications of this new development are occurring in different disciplinary circles. As such, they reflect varied understandings of ethics that make it challenging to consolidate the debate.

I highlight the seventeen categories of ethical implications of educational AI identified in the review which were grouped into four kinds of opportunities and thirteen types of concerns. The former include, among others, the potential reduction of educational inequalities or the facilitation of teachers’ work, while the latter range from fairness and privacy issues to concerns about the influence of private companies or low accountability of the systems.

I then build on this discussion and on our interactions with readers, audiences and reviewers to highlight the conflicting understandings of ethics that can be observed in the current debate. Although all of the themes highlighted in the review have normative implications – i.e., they influence our values and the practices through which such values are enacted – they are not equally recognised as such by different communities seeking to engage with the ethics of educational AI. For example, we observed that more computationally-minded readers tend to focus mainly or exclusively on issues such as fairness, accuracy, transparency, explainability or privacy, while others are hesitant to consider, e.g., the impact on teaching practices or the role of teachers as ethical issues – preferring instead to discuss them under the label of social or pedagogical concerns.

Consequently, I argue that such narrow views of ethics are limiting and do not enable us to capture the wide variety of ethical impacts introduced by educational AI. I call for more research on the less obvious normative implications of the technology and sketch an agenda for such work.

Bio:
Dr. Michał Wieczorek is an Ad Astra Fellow – Assistant Professor in AI-Driven Educational Innovation in the School of Education, University College Dublin. As a philosopher, he studies how new technologies impact the values, goals and practices of education. He has expertise in applied ethics, philosophy of education, philosophy of technology and anticipatory research, and he specialises in the thought of John Dewey. Before joining UCD he was a Government of Ireland Postdoctoral Fellow at Dublin City University where he researched the ethical issues introduced by the use of AI in compulsory schooling. He did his PhD at DCU as part of the EU-funded PROTECT project. His research dealt with the influence of self-tracking technologies (e.g., Fitbits, Apple Watches) on users’ habits and self-knowledge.

Tatjana Welzer
Ethics and Accountability

Abstract:
Accountability in the digital age is not something we deal with only when something goes wrong, but rather a requirement that we must think about before, during, and after the selection of solutions and their implementation. Accountability does not mean blaming others but taking responsibility for making decisions and ensuring a safe and transparent online environment. Accountability is also a form of awareness: the only constant in a dynamic digital age, in which we must understand each other and develop contextual instruments, guidelines, and other policies. In doing so, we create awareness of responsibility in the global community with various stakeholders, impart knowledge about responsibility, and research and develop instruments for responsibility.

We will focus on accountability in connection with artificial intelligence, emphasizing ethics, including cultural awareness and professional codes of ethics. These principles govern the behavior of a person or group in a business environment.

Like values, professional ethics determine the rules of how a person should behave towards others and institutions in the professional environment. These rules are presented as Professional Codes of Ethics for individual fields and maintain the highest standards of professional conduct. Their common characteristics include avoiding conflicts of interest, breaches of confidentiality and privacy, and violations of the law; providing knowledge for advancing technology; using information prudently and maintaining the integrity of systems; and transferring fundamental ethical principles to computer professional activity. Of course, the rules are not an algorithm for solving ethical problems. They are only a basis for ethical decision-making and a demonstration of responsibility for supporting the public good.

Bio:
Tatjana Welzer Družovec is a researcher and a full professor at the University of Maribor, Faculty of Electrical Engineering and Computer Science, where she heads the Data Technology Laboratory. Her research interests include cybersecurity (including its ethical dimensions), cultural and human factors of IT and cybersecurity, and intercultural communication. She is the national delegate for IFIP TC 11 and a member of the executive board of the Slovenian Society Informatika. She has participated in numerous national and international research projects, most of the international ones funded by the EC through various Horizon 2020 and Erasmus+ programs. She was a coordinator of the European University Alliance ATHENA at the University of Maribor and is still involved in its activities.

Her bibliography contains over 800 bibliographic items published in various scientific journals, including top JCR IF publications. She has published chapters in several books and has participated in numerous international conferences. She has served, and continues to serve, on the committees and steering committees of many international conferences. With her team, she has organized and co-organized over 20 international conferences in Slovenia, as well as many invited events at conferences worldwide. For her work she received the title of Congress Ambassador of Slovenia in 2019.

Kristina Lapin
Raising ethical awareness to combat dark patterns

Abstract:
Ethical design ensures users’ well-being, privacy, and autonomy in making informed decisions. Usability and accessibility principles support ethical design because they require essential aspects to be visible, understandable, controllable, and recognizable. Dark patterns intentionally violate these principles, making it possible to manipulate consumers into taking actions that do not correspond to their preferences. Dark patterns aim to modify the underlying choice architecture: they alter the decision space or manipulate the information flow to benefit service providers rather than users. While these designs work in the short term, companies extract profits, harvest data, and limit customer choice before users face the consequences.

While maintaining professional ethics is the norm in other disciplines, UX design still requires more effort to raise awareness among users, designers, and stakeholders. The presentation will focus on categorizations of dark patterns that distinguish them according to their implementation methods and their consequences for users' well-being. Further, the factors raising the awareness of designers, stakeholders, and end-users will be reviewed, together with an overview of the legal regulations that are binding on stakeholders. A way to raise prospective designers’ awareness will be presented, using the example of how ethics topics are taught to Software Engineering students at Vilnius University. Finally, examples of tools for raising users’ awareness of ethics breaches will be discussed.

Bio:
Kristina Lapin is an associate professor at Vilnius University, Faculty of Mathematics and Informatics, Department of Computer Science. She chairs the Board of the Faculty of Mathematics and Informatics as well as the Software Engineering Bachelor's Study Program Committee. She teaches Human-Computer Interaction to bachelor's students and User Experience Engineering to master's students in Software Engineering and Computer Science. She is the author of a Human-Computer Interaction textbook for Lithuanian students. Her research interests include human-computer interaction, balancing usability and security, and design ethics. She has participated in national and international research projects in the educational, aeronautics, virtual-worlds, and cybersecurity thematic areas.

Rafael Pastor Vargas
Misinformation and risks in the digital society: Ethical use of AI and solutions

Abstract:
The use of generative artificial intelligence tools is becoming increasingly widespread in our society, with particular relevance in content generation and in their use by teenagers on social media. The latent dangers of misinformation have grown exponentially due to the massive use of social media, and it is in these spaces that the use of generative AI as a fundamental tool for misinformation has increased. This talk presents specific cases that demonstrate the application of this technology and its impact on the radicalization of opinions and extremism. In addition, these same tools are used illegally on these networks, leading to crimes of hate speech, bullying, harassment of young women, and even sexual blackmail. Some results from the project “Analysis of mobile applications from a data protection perspective: Cyber protection and cyber risks to citizens' information” will be presented, along with how AI can be used to detect these situations and take appropriate action.

Bio:
Rafael Pastor is a professor at UNED. He served as Director of Technological Innovation at UNED (responsible for developing the aLF learning platform and technological innovation processes) from 2004 to 2009, and as Director of the UNED Center for Innovation and Technological Development from 2009 to 2011, where he managed the UNED virtual campus and the development of the aLF learning platform. He is currently Director of the ETSI School of Computer Science. He has directed and participated in several teaching innovation projects, summer courses, and continuing education programs. Throughout his scientific career, he has participated in more than 20 R&D projects funded by public calls for proposals (regional, national, and international), some of which are particularly relevant to companies and/or administrations at the international level. He has also participated as a speaker and active member in nearly 60 international and national conferences indexed in rankings such as CORE (ERA), DBLP, and IEEE Xplore. His research experience is reflected in more than 70 publications in international journals, 60 of which have a JCR/SJR impact factor, with 45 indexed in the Journal Citation Reports (JCR). He is a member of several international scientific societies, including the IEEE (Education Society), where he holds the status of Senior Member. He is a collaborator/advisor to the AEPD (Spanish Data Protection Agency), through his participation in the advisory council “Espacio de Estudio sobre Inteligencia Artificial” (Study Space on Artificial Intelligence), and a member of the P2834 working group “Standard for Secure and Trusted Learning Systems”. He is one of eight Spanish researchers to hold an International Chair in Cybersecurity, funded by EU PTR funds and awarded through a competitive and public call by the National Cybersecurity Institute (INCIBE).

Afonso Seixas-Nunes
A Lawyer among Engineers! Autonomous Systems and ethical and legal questions which remain to be answered

Abstract:
The talk will focus on the moral dilemma of human control in autonomous systems and the moral responsibility of those who design them, as well as the legal implications.

Bio:
Afonso Seixas-Nunes, SJ, was born in Porto, Portugal, in 1973. He joined the Portuguese Province of the Society of Jesus (Jesuits) in 1998, after graduating in Law from the Portuguese Catholic University (Porto), and was ordained a priest in 2010. As a Jesuit, Afonso took his degree in Philosophy (Licence) at the Portuguese Catholic University (Braga), for which he was awarded the Prize Pe Vitorio de Sousa Alves, and holds a degree in Theology from the Pontificia Università Gregoriana (Italy). After his theological studies, Afonso went to London and obtained a Master’s in International Law and Human Rights from the London School of Economics and Political Science (LSE, UK). In early 2019, Afonso completed his doctoral thesis in International Humanitarian Law at the School of Law of the University of Essex (UK), entitled The Legitimacy and Accountability for the Deployment of Autonomous Weapon Systems under International Humanitarian Law, later published by CUP in 2022. In September 2018, Afonso became a post-doc research fellow of the Oxford Institute for Ethics, Law and Armed Conflict (ELAC, University of Oxford), directed by Professor Dapo Akande at the Blavatnik School of Government. In August 2021, Afonso joined Saint Louis University Law School as an Associate Professor of Public International Law and Laws of Armed Conflict. His research focuses on AI technologies and the laws of armed conflict, and on the intersection of Outer Space Law and private corporations.

Bertil Egger Beck
European Open Science Cloud (EOSC): an evolving European ecosystem for research with digital objects

Abstract:
The objective of the European Open Science Cloud (EOSC) is to provide researchers and innovators in Europe with an open and trusted multi-disciplinary environment where they can publish, find and reuse data, tools and services for research and innovation. Through this environment, EOSC aims to mobilise, align and scale resources across Europe to accelerate open science, raise productivity, and increase reproducibility and trust in research.

Miriam Seoane Santos
Responsible by design: building trust in open and shared data

Abstract:
Openness is essential to modern science, but openness alone does not guarantee responsibility, trust, or societal benefit. As we advance towards data altruism and open science infrastructures, we must also ensure that the data we share is responsible by design: understandable, contextualized, and aware of its limitations. This talk explores how intrinsic data characteristics and documentation practices influence the trustworthiness and reusability of open datasets. It highlights how human-centered tools can make openness more interpretable and actionable, and how emerging approaches like synthetic data generation can help overcome barriers of privacy and underrepresentation. Responsible data is not only open: it should be understandable, reliable, and designed to serve the public good.

Bio:
Miriam Seoane Santos is a professor at the Department of Computer Science of the University of Porto and a researcher at the Laboratory of Artificial Intelligence and Decision Support (LIAAD, INESC TEC). She received her PhD in Informatics Engineering in 2022, earning the Award for Best PhD Thesis in Artificial Intelligence from APPIA. Her research focuses on data complexity, intrinsic data characteristics, and their implications for Trustworthy and Responsible AI. She has authored over 20 peer-reviewed publications, including papers in top-tier Artificial Intelligence journals.
