Interpretable Machine Learning under Evolving Fraud Regimes with Human-in-the-Loop Adaptation
Keywords:
Interpretable Machine Learning, Fraud Detection, Human-in-the-Loop, Concept Drift, Continual Learning, Explainable AI (XAI)

Abstract
Fraudulent financial behavior is growing in sophistication, requiring detection systems that are not only accurate but also adaptive and interpretable. While machine learning models have demonstrated strong performance in fraud detection, their static nature and lack of explainability pose critical limitations in real-world deployment. This paper presents an interpretable machine learning framework designed to operate under evolving fraud regimes, integrating human-in-the-loop (HITL) feedback for continuous adaptation. The proposed system combines explainable models, including Quantum Shapley and Q-LIME, with a continual learning engine that updates its parameters based on live feedback from human fraud analysts. Using real-world transactional datasets with temporally distributed fraud patterns, we evaluate the framework across multiple metrics: detection accuracy, responsiveness to concept drift, consistency of model explanations, and the impact of human intervention. Results show that the adaptive, interpretable system significantly outperforms static models in both detection accuracy and trustworthiness. Reciprocal human-machine learning, in which the analyst and the system improve iteratively, proves crucial for maintaining performance as adversarial behavior shifts. This research demonstrates the feasibility and necessity of deploying fraud detection systems that learn continuously, explain their decisions, and actively engage with human expertise in live operational environments.