Explainable AI Models for Financial Transaction Fraud Detection
Eniola Bamise
Ladoke Akintola University of Technology
Abstract
In recent years, AI has proven highly effective at detecting fraud in financial transactions. However, many of these models operate as black boxes, which makes them difficult to adopt for institutions that must meet transparency and accountability requirements. This work examines the integration of explainable AI (XAI) techniques into fraud detection systems, with the goal of balancing model accuracy against interpretability. We train several machine learning classifiers and apply state-of-the-art XAI tools, including SHAP and LIME, to analyze their decision-making processes. Experiments on benchmark datasets show that adding explainability not only increases user trust in the system but also yields useful insights into the characteristics of fraudulent behavior. Our results highlight the value of explainable models as practical and trustworthy tools in combating financial fraud.
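As an illustration of the kind of analysis described above (a minimal sketch, not the paper's own pipeline), the listing below trains a gradient-boosted classifier on synthetic transaction features and uses SHAP's TreeExplainer to attribute predictions to individual features. The feature names, data-generating process, and model choice are all illustrative assumptions.

# Minimal sketch: SHAP attributions for a fraud classifier.
# All data and feature names below are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "amount": rng.lognormal(3.0, 1.0, n),      # transaction amount
    "hour_of_day": rng.integers(0, 24, n),     # local time of transaction
    "merchant_risk": rng.random(n),            # prior risk score of merchant
    "txn_count_24h": rng.poisson(3, n),        # recent activity volume
})
# Synthetic label: large amounts at risky merchants are more likely fraudulent.
logit = 0.002 * X["amount"] + 2.0 * X["merchant_risk"] - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer produces per-transaction feature attributions (SHAP values).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean absolute SHAP value per feature approximates importance.
importance = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:>15}: {val:.3f}")

In practice, the same SHAP values can also be inspected per transaction, so an analyst can see which features pushed an individual alert toward or away from the fraud class; LIME offers a comparable local, model-agnostic view.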
Keywords
Explainable AI (XAI), Fraud Detection, Financial Transactions, Machine Learning, SHAP, LIME, Interpretability, Transparency, Anomaly Detection, Financial Technology (FinTech).
References
[1] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
[2] Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765–4774).
[3] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144).
[4] Bagnall, A., Lines, J., Bostrom, A., Large, J., & Keogh, E. (2017). The great time series classification bake off: A review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 31(3), 606–660.
[5] Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785–794).
[6] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
[7] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
[8] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.
[9] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.