
Topic: AI

Explore resources that provide information and guidance on AI in Biobanking and Biomedical Research

Overview

Artificial Intelligence (AI) is one of the most significant developments in the medical field and is transforming biobanking and biomedical research. The potential benefits of AI are only beginning to emerge: it can enhance specimen and data management, accelerate research processes, and facilitate new scientific findings. By efficiently analysing large datasets, AI can also identify patterns and correlations that might otherwise be overlooked, helping to predict disease risk, identify potential biomarkers, and suggest personalised treatment options. These insights can ultimately improve health outcomes for patients.

 

As AI and machine learning systems gain access to more data and computational power, their effectiveness and utility in biobanking will also grow. However, legitimate concerns exist. The application of AI in biobanking raises complex questions from legal, ethical and societal perspectives, with potential for far-reaching consequences. It is therefore essential to identify and address emerging ethical, legal and societal implications (ELSI) in order to manage any adverse impacts. This requires a proactive and interdisciplinary approach on the part of the biobanking community to ensure that AI technologies are integrated and managed in a responsible manner that ultimately protects both individual rights and public welfare.

 

Key Ethical, Legal and Societal Issues

The main ethical, legal, and societal considerations of AI in biobanking and biomedical research encompass a wide range of challenges that should be addressed to ensure the responsible use of AI technologies. Key issues include:

Informed Consent

AI may shift research away from traditional human-centred processes, yet informed consent remains a foundational ethical requirement in biomedical research and must still be obtained from participants. Furthermore, because AI can analyse and repurpose data in unforeseen ways, participants should be fully informed about how their data will be used. Transparent communication about data sharing, along with the scope of AI analyses, is essential.

Trust in AI and Trustworthy AI

Trustworthiness is a key requirement in the development and application of AI, as highlighted by the European Commission in its Ethics Guidelines for Trustworthy AI. In addition to defining criteria for trustworthy AI through guidelines or regulations, it is also essential to take a broader view. Trust extends beyond the technology itself: it is a complex web of relationships, which includes trust in AI, but also in institutions, and between individuals such as scientists, healthcare professionals, and patients. Adopting a multi-layered view of trust acknowledges that human and institutional components also play a crucial role in the acceptance and successful implementation of AI in healthcare settings.

Accountability

AI systems may be involved in decisions that were previously made by individuals. Assigning accountability therefore becomes a relevant issue, particularly given how responsibility is distributed across biobanking development and operations, where multiple stakeholders are involved. It can be very challenging to pinpoint where, and with whom, responsibility lies when errors are made.

Data Privacy and Security

AI systems in biobanking and biomedical research rely on processing vast amounts of sensitive data, including genetic information, medical histories, and personal identifiers. Protecting this data from breaches and unauthorised access is paramount. Researchers must ensure compliance with data protection regulations such as the GDPR. Robust encryption, secure storage, and strict access controls are essential measures to safeguard data privacy and security.
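As one small, concrete illustration of such safeguards, the following sketch (a hypothetical Python example, not an official BBMRI-ERIC tool or requirement) shows how a participant identifier might be pseudonymised with a keyed hash before a record is shared for research, so that raw identifiers do not travel with the research data. The key, field names, and values are invented for illustration.

```python
# Hypothetical sketch: pseudonymising a participant identifier with a keyed hash
# (HMAC-SHA256). The secret key would be held only by the data controller, so the
# pseudonym cannot be reversed or re-linked without it.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-held-only-by-the-data-controller"  # illustrative placeholder

def pseudonymise(participant_id: str) -> str:
    """Return a stable pseudonym for the given identifier."""
    return hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256).hexdigest()

record = {"participant_id": "BB-000123", "diagnosis_code": "C50.9"}  # invented example record
research_record = {**record, "participant_id": pseudonymise(record["participant_id"])}
print(research_record)
```

In practice, such pseudonymisation would sit alongside encryption at rest and in transit, strict access controls, and the organisational measures required under the GDPR.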

Bias and Unfairness

Data can be flawed and biased from the outset. AI algorithms are therefore susceptible to biases that can arise from the data they are trained on or from the design of the algorithms themselves. In biobanking and biomedical research, biased AI models can lead to unequal treatment, misdiagnosis, or disparities in healthcare outcomes. It is therefore vital to develop and validate AI systems with diverse and representative datasets, continuously monitor for biases, and implement fairness metrics to ensure equitable treatment across different populations.
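As a very small example of what such a fairness check can look like in practice, the sketch below (hypothetical Python code with invented data and group labels) computes one simple metric: the gap in positive prediction rates between subgroups, sometimes called the demographic parity difference. A large gap would flag the model for closer review rather than prove unfairness on its own.

```python
# Hypothetical sketch: comparing a model's positive prediction rates across groups.
# The group labels and predictions below are invented for illustration.
import numpy as np

groups = np.array(["A", "A", "B", "B", "B", "A"])   # e.g. a recorded demographic attribute
predictions = np.array([1, 0, 0, 0, 1, 1])          # the model's positive/negative calls

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
parity_gap = max(rates.values()) - min(rates.values())
print(rates, parity_gap)  # per-group rates and the gap between the largest and smallest
```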

Transparency

AI systems, particularly those based on deep learning, can often function as “black boxes” with decision-making processes that are not easily interpretable. In the context of biobanking and biomedical research, it is important to strive for transparency and explainability in AI models as far as possible, with the aim of being able to understand and explain how AI systems arrive at specific conclusions or recommendations. This helps to ensure accountability and foster trust among stakeholders.
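One widely used, model-agnostic inspection technique is permutation importance, which estimates how much each input feature contributes to a model’s predictions. The sketch below is a minimal, hypothetical illustration using scikit-learn and synthetic data; it ranks features by importance but does not, on its own, fully explain a deep model’s reasoning.

```python
# Hypothetical sketch: ranking feature influence with permutation importance.
# Synthetic data and a simple model stand in for a real biobank dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f}")
```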

Ownership and Control of Data

The question of who owns and controls the data in biobanking and biomedical research is complex. Participants, biobanks, researchers, and AI developers all have interests in the data. Clear policies and agreements regarding data ownership, usage rights, and the sharing of benefits arising from AI-driven discoveries are necessary to navigate these complexities and ensure fair practices.

Regulatory Compliance

AI applications in biobanking and biomedical research must comply with existing regulations and guidelines governing biomedical research and healthcare. This includes obtaining necessary approvals from ethical review boards, adhering to clinical trial regulations, and ensuring that AI-based tools meet regulatory standards for safety and efficacy. Staying abreast of evolving regulatory landscapes and adapting AI systems accordingly is crucial for compliance.

Social Implications and Public Perception

The use of AI in biobanking and biomedical research has broader social implications, including public perception and acceptance. Engaging with the public, addressing concerns about AI’s role in healthcare, and fostering a dialogue about the benefits and risks associated with AI technologies are essential for building public trust and ensuring the ethical deployment of AI in biomedical research.

 

Relevant EU Legislation

General Data Protection Regulation (GDPR)

The GDPR provides a regulatory framework for lawfully processing personal data. While not explicitly about AI, the GDPR applies wherever AI systems draw on and process sensitive personal data.

EU Artificial Intelligence Act (AI Act)

Approved in May 2024, the AI Act regulates the use of AI, adopting a tiered approach based on risk assessment: the greater the potential societal harm from an AI application, the stricter the regulations imposed. The AI Act applies to all sectors and industries, including the life sciences, imposing various obligations at every stage of the AI lifecycle.

Note: For further relevant EU legislation that may be applicable, please take a look here.

 

BBMRI Resources

The resources below have been developed by members of the BBMRI network:

BBMRI-ERIC ELSI Dialogues Webinar: Ethics of AI in Imaging – Ethical and Societal Implications

This webinar gives an overview of the types of ethical issues that are raised by AI in biomedical research, offering a comprehensive and systematic review of existing literature. The challenges raised by approaches such as ‘trustworthy AI’ and ‘explainable AI’, which shape the ethics discourse on AI, are discussed. The webinar concludes with a reflection on the topics identified that shape the understanding of ‘Ethics of AI’ and the gaps in the discourse.

 

External Resources

The resources below have been developed outside the BBMRI network:

AI Ethics and Governance in Practice

by the Alan Turing Institute

Developed by the Alan Turing Institute’s Public Policy Programme, this 8-module programme provides tools and guidance to ensure responsible use of AI technologies.

Principles for Augmented Intelligence Development, Deployment, and Use

by the American Medical Association (AMA)

These principles expand on existing AI policy, and emphasise the need for ethical, equitable, responsible, and transparent AI development and governance at the national level.

Ethics and Governance of Artificial Intelligence for Health: WHO Guidance

by the World Health Organization

This report from the WHO outlines ethical challenges and risks of using AI in healthcare, presents six consensus principles, and offers recommendations for effective governance of AI technologies whilst ensuring accountability of stakeholders involved.

Ethics Guidelines for Trustworthy AI

by the High-Level Expert Group on Artificial Intelligence set up by the European Commission

These Guidelines present a framework aimed at fostering Trustworthy AI, offering guidance to promote ethical and robust AI. In addition to providing a set of ethical principles, the Guidelines also provide information from an operational perspective.

 

Relevant Literature

Unlocking the potential of big data and AI in medicine: insights from biobanking

by BBMRI ELSI Team: Akyüz, K., Cano Abadía, M., Goisauf, M., et al.

Frontiers in Medicine (2024)

Ethical layering in AI-driven polygenic risk scores—New complexities, new challenges

by Fritzsche, M.C., Akyüz, K., Cano Abadía, M., et al.

Frontiers in Genetics (2023)

Ethics of AI in Radiology: A Review of Ethical and Societal Implications

by BBMRI ELSI Team: Goisauf, M. & Cano Abadía, M.

Frontiers in Big Data (2022)

You Can’t Have AI Both Ways: Balancing Health Data Privacy and Access Fairly

by Bak, M., Madai, V. I., Fritzsche, M.C., et al.

Frontiers in Genetics (2022)

 

 

Contact Person:
Dr Melanie Goisauf, Senior Scientist – Ethics of AI Lab Lead
Email: melanie.goisauf@bbmri-eric.eu

 

Acknowledgements
This entry was co-funded by EuCanImage, INTERVENE, and BIG PICTURE, projects that have received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No 952103 and No 101016775, and from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 945358. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and EFPIA.

 

Last Updated: September 2024
