Explainable AI for Transparent and Trustworthy Tuberculosis Diagnosis: From Mere Pixels to Actionable Insights
Date
2024-10-08
Publisher
East African Nature and Science Organization
Abstract
Building transparent and trustworthy AI-powered systems for disease diagnosis has become more important than ever, given how poorly black-box models are understood. A lack of transparency and explainability in AI-driven models can propagate biases and erode the trust of patients and medical practitioners. To address this challenge, Explainable AI (XAI) is rapidly emerging as a practical approach to tackling ethical concerns in the health sector. The overarching purpose of this paper is to highlight advances in XAI for tuberculosis (TB) diagnosis and to identify the benefits and challenges associated with improved trust in AI-powered TB diagnosis. We explore the potential of XAI to improve TB diagnosis, outline a plan for promoting its adoption, and examine the significant problems associated with applying it in this setting. We argue that XAI is critical for reliable TB diagnosis because it improves the interpretability of AI decision-making processes and helps recognise possible biases and mistakes. We evaluate techniques and methods for XAI in TB diagnosis and examine their ethical and societal ramifications. By leveraging explainable AI, we can create a more reliable and trustworthy TB diagnostic framework, ultimately improving patient outcomes and global health. Finally, we provide thorough recommendations for developing and implementing XAI in TB diagnosis using X-ray imaging.
Citation
Tibakanya, J., Kenneth, M. H. & Nakasi, R. (2024). Explainable AI for Transparent and Trustworthy Tuberculosis Diagnosis: From Mere Pixels to Actionable Insights. East African Journal of Information Technology, 7(1), 341-354. https://doi.org/10.37284/eajit.7.1.2276