Explainable and Uncertainty Aware AI-based Ransomware Detection
Date
2025-06-12
Publisher
IEEE Access
Abstract
Ransomware poses a serious and evolving threat, demanding detection methods that can adapt to new attack
vectors while maintaining transparency and reliability. This study proposes a comprehensive framework that
integrates data augmentation, explainable artificial intelligence, and uncertainty quantification to address
key challenges in ransomware detection. By leveraging synthetic data generation techniques, the approach
mitigates class imbalance and captures varied ransomware behaviours. Simultaneously, explainable AI
methods shed light on model decisions, enhancing interpretability and building trust among cybersecurity
professionals. An uncertainty-aware component flags ambiguous predictions, allowing for targeted manual
reviews and minimising incorrect classifications. Experiments on multiple ransomware datasets show the
framework’s ability to maintain high detection rates, even under adversarial conditions. By combining
RanSAP and RDset datasets, the framework achieves marked performance improvements. When SMOTE
was applied, Random Forest reached an F1-score of 0.9963, while a CNN with Monte Carlo Dropout
attained 0.9906. Further incorporating CT-GAN boosted the CNN’s F1-score to 0.9978, underscoring the
robustness of our approach. The results suggest that combining robust data augmentation, interpretability,
and uncertainty handling offers a practical avenue for deploying reliable ransomware detection systems in
real-world environments.
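The uncertainty-aware component described in the abstract relies on Monte Carlo Dropout: dropout is kept active at inference time, and the spread of predictions across many stochastic forward passes serves as an uncertainty estimate for flagging ambiguous samples. A minimal sketch of this idea, using a toy logistic classifier in place of the paper's CNN (the feature values, weights, and review threshold below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, b, n_samples=100, p_drop=0.5):
    """Monte Carlo Dropout: keep dropout active at inference and
    aggregate predictions over many stochastic forward passes.
    W, b: weights of a toy logistic classifier (stand-in for a CNN)."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop          # random dropout mask
        logits = (x * mask / (1 - p_drop)) @ W + b   # inverted-dropout scaling
        preds.append(1 / (1 + np.exp(-logits)))      # sigmoid probability
    preds = np.stack(preds)
    # Mean probability is the prediction; std dev is the uncertainty signal.
    return preds.mean(axis=0), preds.std(axis=0)

# Hypothetical feature vectors for two samples (e.g. behavioural features)
x = np.array([[2.0, -1.0, 0.5], [0.1, 0.05, -0.1]])
W = np.array([1.5, -2.0, 1.0])
b = -0.2

mean_p, std_p = mc_dropout_predict(x, W, b)
for m, s in zip(mean_p, std_p):
    label = "ransomware" if m > 0.5 else "benign"
    flag = "  <- flag for manual review" if s > 0.15 else ""
    print(f"p={m:.3f}  sigma={s:.3f}  {label}{flag}")
```

Samples whose standard deviation exceeds a chosen threshold would be routed to a human analyst rather than auto-classified, which is how the framework minimises incorrect classifications on ambiguous inputs.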
Citation
Kabuye, H., Issac, B., Yumlembam, R., & Neera, J. (2025). Explainable and Uncertainty Aware AI-based Ransomware Detection. IEEE Access.