This picture was generated using Copilot.
The European Council of Doctoral Candidates and Junior Researchers (Eurodoc) supports the introduction of the guidelines for responsible use of AI in Science; however, there is also room for improvement. In a recent hearing conducted by the European Commission, Eurodoc gave the following seven inputs on the guidelines for AI in Science.
- An emphasis on the importance of training researchers in the responsible use of AI. We align with the recommendation that researchers themselves are responsible for improving their knowledge and understanding of AI within their field of science. However, the guidelines fail to address the responsibility of research performing organisations (RPOs) in particular to provide quality training.
- Alignment with Open Science (OS) principles. All actors involved in the European Research Area (ERA) must align with the principle of “as open as possible, as closed as necessary” if responsibility and trustworthiness are to be at the core of the use of AI in the research sector.
- The use of generative AI in the assessment of research(ers). The current guidelines fail to explicitly address the use of AI in the assessment of research and researchers. The legal security of the individual must be at the centre of such use.
- Acknowledging that the use of generative AI in science transcends any single discipline. The research topics and methods affected, potentially disrupted or generated, by generative AI belong to no single field; the use of generative AI in science is inherently interdisciplinary.
- The importance of the social sciences and humanities (SSH). The SSH are fundamental to further our understanding of the individual, ethical, cultural, and societal impact of generative AI.
- Encouraging the development of publicly owned and openly accessible Large Language Models (LLMs). Europe needs publicly owned and openly accessible LLMs.
- Alignment with democratic values: The recent, more general guidelines “Ethics guidelines for trustworthy AI”, emphasise that trustworthy AI must align with the democratic values of the European Union, such as respect for human rights, ensuring fairness, counteracting discrimination and biases, and ensuring explainability in decision-making processes. This perspective is missing in the guidelines for AI in Science.
AI in science is likely to at least partly disrupt the current research ecosystem, and researchers, research performing organisations (RPOs), and research funding organisations (RFOs) all have distinct responsibilities for ensuring responsible use of AI within the sector. We see the guidelines proposed by the ERA Forum as a good starting point for future work and look forward to engaging further in this work. In particular, we wish to underline the importance of ensuring responsible use of AI in the assessment of research and researchers, and of alignment with the principles of open science and with a reform of research assessment.
You can find Eurodoc’s full statement on Zenodo: https://zenodo.org/records/15768274
This picture was generated using Copilot. The following prompts were used: Step 1: Generate a picture to go with: The European Council of Doctoral Candidates and Junior Researchers (Eurodoc) supports the introduction of the guidelines for responsible use of AI in Science. The guidelines proposed by the ERA Forum serve as a good starting point for future work. We are pleased to see that the guidelines are designed with the purpose of being a living document. We look forward to following this work. Step 2: Remove the ppl on the picture and make a reference to the humanities. Step 3: But keep the EU reference. Step 4: Remove the € sign.