Unveiling Student Learning Patterns through Explainable AI in Educational Analytics
Keywords:
Explainable AI, Educational Analytics, Student Learning Behaviors, XAI Models, Cognitive Skill Prediction, SHAP, LIME, Attention Networks.
Abstract
In the era of data-driven education, understanding how students learn has become essential for improving teaching strategies and personalizing learning experiences. Traditional learning analytics often rely on black-box machine learning models that produce accurate predictions but cannot explain why a given prediction was made or which learning behaviors drive it. This study explores the potential of Explainable Artificial Intelligence (XAI) to uncover and interpret student learning patterns in large-scale educational data. By combining post-hoc explanation techniques such as SHAP and LIME with intrinsically interpretable models such as decision trees within an educational analytics framework, the research identifies the features that most strongly influence student performance, engagement, and learning outcomes. The findings demonstrate that XAI techniques not only enhance transparency and trust in predictive models but also provide actionable insights that help educators design adaptive interventions and support data-informed decision-making. The proposed approach bridges the gap between algorithmic accuracy and pedagogical understanding, offering a robust pathway toward human-centered, interpretable, and ethical educational analytics.
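To make the idea concrete, the sketch below trains an intrinsically interpretable decision tree on synthetic student-activity data and reads off per-feature importances. The feature names (`time_on_task`, `forum_posts`, `quiz_attempts`, `video_views`) and the data-generating rule are purely illustrative assumptions, not features from the study's dataset; the same workflow would apply to real learning-analytics features, and post-hoc methods such as SHAP or LIME could be layered on top of a less transparent model.

```python
# Illustrative sketch only: interpretable decision tree over hypothetical
# student-activity features (names and data are invented for this example).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
feature_names = ["time_on_task", "forum_posts", "quiz_attempts", "video_views"]

# Synthetic cohort of 200 students; pass/fail is driven mostly by
# time_on_task, with a smaller contribution from quiz_attempts.
X = rng.random((200, 4))
y = (X[:, 0] + 0.3 * X[:, 2] > 0.6).astype(int)

# A shallow tree stays human-readable while capturing the main pattern.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global interpretability: impurity-based importance of each feature.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

In this toy setup the tree assigns the highest importance to `time_on_task`, mirroring the generative rule, which is the kind of feature-level evidence an educator could act on.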
