Convolutional Neural Networks (CNNs) are a foundational component of Computer Vision (CV). To enhance the interpretability of CNN models, a critical requirement for clinical adoption, this study incorporates Explainable AI (XAI) methodologies. Applying CNNs with XAI to a dataset of 5285 brain MRI images yielded a classification accuracy of 86%. The LIME framework was employed to generate localized explanations, improving the model's transparency and offering insight into its decision-making process. This research explores the potential of integrating deep learning with XAI to develop more reliable and comprehensible medical image analysis systems. Such systems promise to improve diagnostic accuracy and clinical decision-making by giving healthcare professionals transparent, explainable insight into the model's predictions, ultimately supporting more informed and effective patient care.
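The core idea behind LIME, as used in the abstract above, is to explain a single prediction by sampling perturbations around the input, weighting them by proximity, and fitting a simple linear surrogate whose coefficients serve as local feature importances. The sketch below is a minimal numpy illustration of that idea, not the paper's implementation; `black_box` is a hypothetical stand-in for a trained CNN classifier, and the kernel width and sampling scale are illustrative choices.

```python
import numpy as np

def black_box(X):
    # Toy "model" whose output is driven mostly by feature 0
    # (hypothetical stand-in for a CNN's class probability)
    return 1 / (1 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

def lime_explain(f, x0, num_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME-style local explanation: sample perturbations around x0,
    weight them by proximity to x0, and fit a weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    X = x0 + rng.normal(scale=0.5, size=(num_samples, d))  # local perturbations
    y = f(X)                                               # black-box predictions
    dist = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)           # proximity kernel
    # Weighted least squares with an intercept column
    A = np.hstack([np.ones((num_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]                                        # per-feature weights

weights = lime_explain(black_box, np.array([0.2, -0.1]))
print(weights)  # feature 0 should dominate the local explanation
```

For images, the real LIME library applies the same scheme over superpixels rather than raw features, which is what produces the localized region-level explanations described above.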
@inproceedings{preotee2024approach,
  title={An Approach Towards Identifying Bangladeshi Leaf Diseases through Transfer Learning and XAI},
  author={Preotee, Faika Fairuj and Sarker, Shuvashis and Refat, Shamim Rahim and Muhammad, Tashreef and Islam, Shifat},
  booktitle={2024 27th International Conference on Computer and Information Technology (ICCIT)},
  pages={1744--1749},
  year={2024},
  organization={IEEE}
}
Research Type
Research
Status
Published
Publisher
IEEE
Publication Type
Conference
Conference Type
International Conference
Pages
1744--1749
Applied AI in Healthcare, Computer Vision, Human Centered AI (HCI), Trustworthy and Efficient AI