Future Generation Computer Systems: The International Journal of eScience, vol. 129, pp. 152-169, 2022 (SCI-Expanded)
Glaucoma causes blindness when it remains untreated for a long time, so its early diagnosis is very important. Accordingly, many deep learning studies have aimed to diagnose glaucoma from color fundus images. However, it is important to understand how far humans can trust black-box deep learning models in their decision making. In this study, a hybrid solution combining image processing and deep learning was supported with Explainable Artificial Intelligence (XAI) to ensure trustworthy decision making for glaucoma diagnosis. In detail, image processing employing both histogram equalization (HE) and contrast-limited adaptive HE (CLAHE) was used to enhance the color fundus images. For the diagnosis, the enhanced images were fed to an explainable convolutional neural network (CNN). Explainability was achieved via Class Activation Mapping (CAM), which provides heat-map-based explanations of the image analysis performed by the CNN. The performance of the hybrid solution was evaluated on the Drishti-GS, ORIGA(-Light), and HRF retinal image datasets over a total of twenty classification runs. In this evaluation, the highest mean values were obtained on the ORIGA(-Light) dataset (accuracy: 93.5%, sensitivity/recall: 97.7%, specificity: 92.6%, precision: 93.8%, F1-score: 95.7%, AUC: 95.1%). Because the XAI contribution of this study also included an analysis by humans, the effect of the CAM-based XAI was evaluated by doctors. The CAM-based XAI showed an accuracy of 82.73%, which was acceptable among alternative XAI methods. Moreover, in the manual diagnosis test performed with the doctors, the detections by the CAM-based XAI were not below 97%, and the worst diagnosis detection rate was 90%. Overall, the results regarding the XAI effect were positive, indicating that the use of XAI with black-box deep learning improves doctors' trust. © 2021 Elsevier B.V. All rights reserved.
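For illustration, the enhancement step described in the abstract could look like the following minimal sketch, assuming OpenCV and applying HE/CLAHE to the luminance channel in LAB space so that colors are preserved. The function name enhance_fundus and the clip-limit/tile-grid defaults are illustrative assumptions; the abstract does not report the paper's exact parameters.

```python
import cv2


def enhance_fundus(path, clip_limit=2.0, tile_grid=(8, 8)):
    """Enhance a color fundus image with HE and CLAHE.

    Equalization is applied to the luminance (L) channel in LAB
    space to avoid distorting colors. clip_limit and tile_grid are
    illustrative defaults, not the paper's reported values.
    """
    bgr = cv2.imread(path)                        # OpenCV loads images as BGR
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    l_he = cv2.equalizeHist(l)                    # plain histogram equalization
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_clahe = clahe.apply(l)                      # contrast-limited adaptive HE

    he_img = cv2.cvtColor(cv2.merge((l_he, a, b)), cv2.COLOR_LAB2BGR)
    clahe_img = cv2.cvtColor(cv2.merge((l_clahe, a, b)), cv2.COLOR_LAB2BGR)
    return he_img, clahe_img
```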
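Likewise, a minimal sketch of the CAM heat-map computation is shown below, assuming a Keras/TensorFlow CNN that ends in global average pooling followed by a single dense softmax layer (the classic CAM setting of Zhou et al., 2016). The framework choice and the class_activation_map helper are assumptions, since the abstract does not specify the CNN architecture.

```python
import numpy as np
import tensorflow as tf


def class_activation_map(model, image, conv_layer_name, class_idx):
    """Compute a CAM heat map for one image.

    Assumes `model` ends with GlobalAveragePooling2D followed by a
    single Dense softmax layer, so the CAM is the weighted sum of
    the last convolutional feature maps using that class's dense
    weights. Architecture and framework are assumptions here.
    """
    conv_layer = model.get_layer(conv_layer_name)
    feature_model = tf.keras.Model(model.inputs, conv_layer.output)
    fmaps = feature_model(image[np.newaxis])[0].numpy()  # (H, W, C) feature maps

    # Dense-layer weights connecting each feature map to class_idx.
    w = model.layers[-1].get_weights()[0][:, class_idx]  # shape (C,)

    cam = fmaps @ w                                      # (H, W) weighted sum
    cam = np.maximum(cam, 0)                             # keep positive evidence only
    cam /= cam.max() + 1e-8                              # normalize to [0, 1]
    return cam  # upsample to input size and overlay on the fundus image for display
```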