Tallinn University of Technology

31 October 2025 at 2:00 PM
Rajesh Kalakoti, "Explainable Artificial Intelligence-Based Intrusion Detection Systems"

Supervisor: Tenured Associate Professor Dr. Sven Nõmm, Department of Software Science, School of Information Technology, Tallinn University of Technology, Tallinn, Estonia

Co-supervisor: Research Professor Dr. Hayretdin Bahşi, Department of Software Science, School of Information Technology, Tallinn University of Technology, Tallinn, Estonia

Opponents:

  • Professor Dr. Abdulhamit Subasi, Department of Information Sciences and Technology, University at Albany, New York, United States
  • Professor Dr. Jianhua Zhang, Department of Computer Science, Oslo Metropolitan University, Oslo, Norway

Join the public defence via Zoom

Meeting ID 884 983 3907
Passcode 584691

Summary
Rapid advances in network technologies and growing data volumes are adding to the complexity of cyber threats, as sophisticated cyberattacks increasingly target Internet of Things (IoT) and Internet of Medical Things (IoMT) networks. Although machine learning (ML) models offer promising solutions for detecting malicious activities in such networks, their lack of interpretability and transparency often limits their effectiveness and trustworthiness. This Ph.D. dissertation explores the development of effective, interpretable, transparent, and privacy-preserving ML-based intrusion detection systems (IDS) in both centralised and decentralised (federated learning) settings.
Although IDS based on ML and deep learning (DL) have achieved high classification accuracy, their reliance on centralised data storage raises privacy and security concerns. Federated Learning (FL) addresses these challenges by enabling decentralised, privacy-preserving model training, in which data remain local and only model parameters are shared with the central server. However, explaining a model trained in this setting is difficult because of the distributed nature of FL, particularly with post-hoc explainable AI (XAI) methods: traditional post-hoc XAI methods require access to input data, which violates privacy when explaining the server model. This dissertation addresses the challenge of explainability in FL-based IDS, where privacy constraints limit access to data for post-hoc explanations. To overcome this, a Federated Explainable AI (FedXAI) framework is proposed that incorporates SHAP in a privacy-preserving manner by securely aggregating client-side SHAP values to approximate server model explanations for both binary and multiclass IoT attack detection tasks.
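A minimal sketch of the aggregation idea is given below, assuming each client has already computed SHAP values for its local model and data. The feature names, client sample counts, and the plain weighted average standing in for a secure aggregation protocol are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

# Illustrative sketch: each client reduces its local SHAP values to per-feature
# importances and shares only those aggregates with the server, never raw data.
# A real FedXAI deployment would combine them via secure aggregation; a plain
# weighted average is used here for brevity.

FEATURES = ["pkt_rate", "duration", "bytes_out", "dst_port_entropy"]  # hypothetical

def client_summary(local_shap_values: np.ndarray) -> tuple[np.ndarray, int]:
    """Reduce a (n_samples, n_features) SHAP matrix to mean |SHAP| per feature."""
    return np.abs(local_shap_values).mean(axis=0), local_shap_values.shape[0]

def server_aggregate(summaries: list[tuple[np.ndarray, int]]) -> np.ndarray:
    """Weight each client's summary by its sample count to approximate
    the global (server-model) feature importances."""
    importances = np.stack([s for s, _ in summaries])
    weights = np.array([n for _, n in summaries], dtype=float)
    weights /= weights.sum()
    return weights @ importances

# Simulated SHAP matrices standing in for each client's local explanations.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(n, len(FEATURES))) for n in (120, 80, 200)]

global_importance = server_aggregate([client_summary(c) for c in clients])
for name, value in zip(FEATURES, global_importance):
    print(f"{name}: {value:.3f}")
```

Because only per-feature aggregates leave each client, the server can form an approximate global explanation without ever seeing raw traffic records.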