Why Are Explainable AI Methods for Prostate Lesion Detection Rated Poorly by Radiologists?

dc.authoridGulum, Mehmet Akif/0000-0001-7065-8412
dc.authoridEsen, Enes/0000-0003-3035-2245
dc.contributor.authorGulum, Mehmet A.
dc.contributor.authorTrombley, Christopher M.
dc.contributor.authorOzen, Merve
dc.contributor.authorEsen, Enes
dc.contributor.authorAksamoglu, Melih
dc.contributor.authorKantardzic, Mehmed
dc.date.accessioned2025-01-27T21:19:47Z
dc.date.available2025-01-27T21:19:47Z
dc.date.issued2024
dc.departmentÇanakkale Onsekiz Mart Üniversitesi
dc.description.abstractDeep learning offers significant advancements in the accuracy of prostate lesion identification and classification, underscoring its potential for clinical integration. However, the opacity of deep learning models presents interpretability challenges that are critical to their acceptance and utility in medical diagnosis and detection. While explanation methods have been proposed to demystify these models and enhance their clinical viability, the efficacy and acceptance of these methods in medical tasks are not well documented. This pilot study investigates the effectiveness of deep learning explanation methods in clinical settings and identifies the attributes that radiologists consider crucial for explainability, aiming to direct future enhancements. The study reveals that while explanation methods can improve clinical task performance by up to 20%, their perceived usefulness varies, with some methods being rated poorly. Radiologists prefer explanation methods that are robust against noise, precise, and consistent. These preferences underscore the need to refine explanation methods to align with clinical expectations, emphasizing clarity, accuracy, and reliability. The findings highlight the importance of developing explanation methods that not only improve performance but are also tailored to meet the stringent requirements of clinical practice, thereby facilitating deeper trust and broader acceptance of deep learning in medical diagnostics.
dc.identifier.doi10.3390/app14114654
dc.identifier.issn2076-3417
dc.identifier.issue11
dc.identifier.scopus2-s2.0-85195932257
dc.identifier.scopusqualityQ1
dc.identifier.urihttps://doi.org/10.3390/app14114654
dc.identifier.urihttps://hdl.handle.net/20.500.12428/28736
dc.identifier.volume14
dc.identifier.wosWOS:001246832100001
dc.identifier.wosqualityN/A
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherMDPI
dc.relation.ispartofApplied Sciences-Basel
dc.relation.publicationcategoryinfo:eu-repo/semantics/openAccess
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WoS_20250125
dc.subjectdeep learning
dc.subjectexplainability
dc.subjectinterpretability
dc.subjectcancer detection
dc.subjectMRI
dc.subjectXAI
dc.subjectradiology
dc.titleWhy Are Explainable AI Methods for Prostate Lesion Detection Rated Poorly by Radiologists?
dc.typeArticle

Files