Trustworthy AI in medical image analysis: A unified perspective built on robustness and layers of trust

Zuluaga, Maria A; Isgum, Ivana; Bach Cuadra, Meritxell
Current Opinion in Biomedical Engineering, 13 January 2026

Trustworthy AI is critical for effectively adopting AI systems in medical imaging and broader healthcare contexts. While the Trustworthy AI framework defines seven core principles, ranging from technical robustness to societal well-being, these are often addressed in isolation, without a coherent integration strategy. In this perspective paper, we propose a unified, layered framework that organizes these principles across three tiers of increasing trust: core operations, feedback, and explainability. Each layer aligns with the fundamental components of an AI system (input data, model, and outputs), integrating the different principles and offering a structured path toward increasing levels of trust. Central to our framework is technical robustness, positioned as a cross-cutting enabler that intertwines with the other trust principles across all layers. Through this lens, we review recent advances in trustworthy AI techniques in medical imaging and highlight persistent challenges and future research directions for building trustworthy AI systems in medical imaging.


DOI:
10.1016/j.cobme.2026.100649
Type:
Journal
Date:
2026-01-13
Department:
Data Science
Eurecom Ref:
8559
Copyright:
© Elsevier. Personal use of this material is permitted. The definitive version of this paper was published in Current Opinion in Biomedical Engineering, 13 January 2026 and is available at: https://doi.org/10.1016/j.cobme.2026.100649
PERMALINK : https://www.eurecom.fr/publication/8559