
Review of ethical and societal aspects of AI in the EuCanImage project among stakeholders

Key Takeaway
Ethical concerns in radiological AI are relational and co-constructed, and addressing them requires ongoing interdisciplinary involvement.

This publication is a multi-method empirical review rather than a primary trial, focusing on the ethical and societal aspects of AI within the EuCanImage project. The study population included developers, clinicians, and other stakeholders, though the sample size was not reported; no comparator was used. The review synthesizes qualitative conclusions rather than quantitative effect sizes, as no numerical data were reported for the outcomes analyzed.

Key synthesized findings indicate that ethical concerns emerge in real-world settings and are shaped by institutional, clinical, and sociotechnical dynamics. The authors argue that ongoing interdisciplinary involvement is essential to address explainability, accountability, bias, and social impact in radiological AI. Furthermore, trustworthiness is described as relational and co-constructed through interactions among very diverse stakeholders.

The review notes that follow-up duration was not reported, and no specific primary or secondary outcomes were quantified with absolute numbers or confidence intervals. Safety data, including adverse events or tolerability, were not reported. The authors do not overstate certainty regarding these qualitative conclusions, acknowledging the limitations inherent in a review of this nature without specific trial-level detail.

Study Details

Study type: Systematic review
Evidence: Level 1
Published: Apr 2026

Original Abstract
Artificial intelligence (AI) in radiology and oncology promises improvements in diagnostic accuracy and efficiency yet introduces complex ethical and societal challenges. Governance efforts frequently rely on high-level principles such as trustworthiness and fairness, which risk becoming ineffective when not grounded in specific contexts. This study presents findings from our work on ethical and societal aspects of AI within the EuCanImage project. We conducted a multi-method empirical study involving literature reviews, interviews, and workshops with developers, clinicians, and other stakeholders. The study explored how ethical concerns emerge in real-world settings and how they are shaped by institutional, clinical, and sociotechnical dynamics. Findings indicate that ongoing interdisciplinary involvement is essential to address explainability, accountability, bias, and social impact in radiological AI. The literature review identified four guiding dimensions of trustworthy AI (i.e., explainability and interpretability, trust and trustworthiness, responsibility and accountability, and justice and fairness) which remain difficult to operationalize without concrete procedural guidance. Empirical findings highlight that ethical issues cannot be addressed solely as technical problems or abstract principles. Trustworthiness emerged as relational and co-constructed through interactions among very diverse stakeholders. We propose a structured, multi-stakeholder AI development pathway that advances from decontextualized, principle-driven ethics toward embedded, interdisciplinary approaches attentive to clinical realities, power relations, and socio-cultural conditions, by strengthening stakeholder engagement for trustworthy AI in cancer care.