Yazar "Kantardzic, Mehmed" seçeneğine göre listele
Listeleniyor 1 - 2 / 2
Item: Evaluating Uncertainty-Based Deep Learning Explanations for Prostate Lesion Detection (JMLR - Journal of Machine Learning Research, 2022)
Trombley, Christopher M.; Gulum, Mehmet Akif; Ozen, Merve; Esen, Enes; Aksamoglu, Melih; Kantardzic, Mehmed
Deep learning has demonstrated impressive accuracy for prostate lesion identification and classification. Deep learning algorithms are considered black-box methods; therefore, they require explanation methods to gain insight into the model's classifications. For high-stakes tasks such as medical diagnosis, it is important that explanation methods are able to estimate explanation uncertainty. Recently, various methods have been proposed for providing uncertainty-based explanations. However, the clinical effectiveness of uncertainty-based explanation methods, and what radiologists deem explainable in this context, remain largely unknown. To that end, this pilot study investigates the effectiveness of uncertainty-based prostate lesion detection explanations. It also attempts to gain insight into what radiologists consider explainable. An experiment was conducted with a cohort of radiologists to determine whether uncertainty-based explanation methods improve prostate lesion detection. Additionally, a qualitative assessment of each method was conducted to gain insight into what characteristics make an explanation method suitable for radiology end use. It was found that uncertainty-based explanation methods increase lesion detection performance by up to 20%. It was also found that perceived explanation quality is related to actual explanation quality. This pilot study demonstrates the potential use of explanation methods for radiology end use and provides insight into what radiologists deem explainable.

Item: Why Are Explainable AI Methods for Prostate Lesion Detection Rated Poorly by Radiologists? (MDPI, 2024)
Gulum, Mehmet A.; Trombley, Christopher M.; Ozen, Merve; Esen, Enes; Aksamoglu, Melih; Kantardzic, Mehmed
Deep learning offers significant advancements in the accuracy of prostate identification and classification, underscoring its potential for clinical integration. However, the opacity of deep learning models presents interpretability challenges, critical for their acceptance and utility in medical diagnosis and detection. While explanation methods have been proposed to demystify these models and enhance their clinical viability, the efficacy and acceptance of these methods in medical tasks are not well documented. This pilot study investigates the effectiveness of deep learning explanation methods in clinical settings and identifies the attributes that radiologists consider crucial for explainability, aiming to direct future enhancements. The study reveals that while explanation methods can improve clinical task performance by up to 20%, their perceived usefulness varies, with some methods being rated poorly. Radiologists prefer explanation methods that are robust against noise, precise, and consistent. These preferences underscore the need to refine explanation methods to align with clinical expectations, emphasizing clarity, accuracy, and reliability. The findings highlight the importance of developing explanation methods that not only improve performance but are also tailored to meet the stringent requirements of clinical practice, thereby facilitating deeper trust and broader acceptance of deep learning in medical diagnostics.
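For readers unfamiliar with uncertainty-based explanations, the following is a minimal sketch, not taken from the listed papers, of one common way such an explanation can be produced: running Monte Carlo dropout passes over a classifier, computing a gradient saliency map for each stochastic pass, and reporting the mean map as the explanation with the per-pixel standard deviation as its uncertainty. The model, function names, and image sizes below are illustrative assumptions only.

```python
# Sketch of an uncertainty-aware saliency explanation via Monte Carlo dropout.
# TinyLesionNet is a hypothetical stand-in for a prostate lesion classifier.
import torch
import torch.nn as nn

class TinyLesionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.25),            # kept stochastic at inference for MC dropout
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 2)        # lesion vs. no lesion

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def mc_dropout_saliency(model, image, target_class, n_samples=20):
    """Return (mean saliency map, per-pixel std) over n_samples stochastic passes."""
    model.train()                          # keep dropout layers active
    maps = []
    for _ in range(n_samples):
        x = image.clone().requires_grad_(True)
        score = model(x)[0, target_class]
        score.backward()
        maps.append(x.grad.detach().abs().squeeze(0))
    stack = torch.stack(maps)
    return stack.mean(0), stack.std(0)

if __name__ == "__main__":
    model = TinyLesionNet()
    mri_slice = torch.randn(1, 1, 64, 64)  # dummy single-channel image
    explanation, uncertainty = mc_dropout_saliency(model, mri_slice, target_class=1)
    print(explanation.shape, uncertainty.shape)
```

In a viewer, the mean map would be overlaid on the image as the explanation, and the standard-deviation map would indicate regions where the explanation itself is unreliable; this is only one of several uncertainty-based explanation designs the studies above evaluate.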