TY  -  JOUR
AU  -  Sacchi, Francesca Aurora
AU  -  Cascini, Fidelia
AU  -  Conditi, Noemi
AU  -  Ravizza, Alice
AU  -  Daverio, Margherita
AU  -  Causio, Francesco Andrea
AU  -  De Vita, Vittorio
AU  -  Pivetta, Alessio
AU  -  Maio, Pierpaolo
AU  -  De Angelis, Luigi
AU  -  Baglivo, Francesco
AU  -  Diedenhofen, Giacomo
AU  -  Di Pumpo, Marcello
AU  -  Belpiede, Alessandro
AU  -  Ferro, Diana
AU  -  Bolognini, Luca
T1  -  The path to trustworthy medical AI: the evolving role of explainability
PY  -  2025
Y1  -  2025-10-01
DO  -  10.1701/4573.45774
JO  -  Recenti Progressi in Medicina
JA  -  Recenti Prog Med
VL  -  116
IS  -  10
SP  -  546
EP  -  550
PB  -  Il Pensiero Scientifico Editore
SN  -  2038-1840
Y2  -  2026/03/16
UR  -  http://dx.doi.org/10.1701/4573.45774
N2  -  Summary. The integration of artificial intelligence (AI) in medicine spans several clinical domains, from disease prevention and diagnosis to treatment, long-term care, and remote care. However, many AI systems are inherently limited in explainability, meaning the processes behind their outputs cannot be clearly understood or communicated to humans, whether developers or end users. This viewpoint explores the importance of AI explainability in medicine by first tracing its evolution from a primarily ethical concern to a legal requirement. It then examines the connection between explainability and the trustworthiness of AI systems. Finally, it considers how explainability is approached from a technical standpoint and its inherent tension with achieving high accuracy.
ER  -
