The interpretability of deep learning models remains a significant challenge, particularly in convolutional neural networks (CNNs), where understanding the contributions of individual filters is crucial for explainability. In this work, we propose a biologically inspired filter significance assessment method based on Steady-State Visually Evoked Potentials (SSVEPs), a well-established phenomenon in visual neuroscience. Our approach leverages frequency tagging to quantify the importance of convolutional filters by analyzing their frequency-locked responses to periodic contrast modulations of input images. By integrating SSVEP-based filter selection into Class Activation Mapping (CAM) frameworks such as Grad-CAM, Grad-CAM++, EigenCAM, and LayerCAM, we enhance model interpretability while reducing attribution noise. Experimental evaluations on ImageNet using VGG-16, ResNet-50, and ResNeXt-50 demonstrate that SSVEP-enhanced CAM methods improve the spatial focus of visual explanations, yielding higher energy concentration while maintaining competitive localization accuracy. These findings suggest that our biologically inspired approach offers a robust mechanism for identifying key filters in CNNs, paving the way for more interpretable and transparent deep learning models.
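To make the frequency-tagging idea concrete, the sketch below scores the filters of one convolutional layer by sinusoidally modulating the input's contrast at a tagging frequency, recording each filter's mean activation across the resulting frames, and measuring how much of its activation spectrum is locked to that frequency. This is a minimal illustration under assumed parameters (a 7.5 Hz tag sampled at 60 frames per second, a late VGG-16 layer); the function and variable names (`ssvep_filter_scores`, `tag_hz`, etc.) are illustrative and not the paper's reference implementation.

```python
# Hedged sketch: SSVEP-style frequency tagging for CNN filter scoring.
# Assumptions (not from the paper): 7.5 Hz tag, 60 fps, 240 frames,
# scoring the last conv layer of VGG-16 via a forward hook.
import torch
from torchvision import models

def ssvep_filter_scores(model, layer, image, tag_hz=7.5, fps=60.0, n_frames=240):
    """Score each filter in `layer` by the share of its activation
    spectrum locked to a sinusoidal contrast modulation at `tag_hz`."""
    acts = []
    hook = layer.register_forward_hook(
        lambda _m, _i, out: acts.append(out.mean(dim=(0, 2, 3)).detach())
    )
    mean = image.mean()
    with torch.no_grad():
        for ti in torch.arange(n_frames) / fps:
            # Periodic contrast modulation around the image mean.
            contrast = 0.5 * (1.0 + torch.sin(2 * torch.pi * tag_hz * ti))
            model(mean + contrast * (image - mean))
    hook.remove()
    x = torch.stack(acts)                         # (n_frames, n_filters)
    spectrum = torch.fft.rfft(x - x.mean(0), dim=0).abs()
    freqs = torch.fft.rfftfreq(n_frames, d=1.0 / fps)
    k = torch.argmin((freqs - tag_hz).abs())      # bin nearest the tag frequency
    # Frequency-locked power relative to total power = significance score.
    return spectrum[k] / (spectrum.sum(dim=0) + 1e-8)

model = models.vgg16(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224)                # stand-in for a preprocessed input
scores = ssvep_filter_scores(model, model.features[28], image)
top_filters = scores.argsort(descending=True)[:32]
```

In a CAM pipeline, such scores could then act as a gate: filters below a significance threshold are dropped or down-weighted before the weighted combination of activation maps in Grad-CAM and its variants, which is the mechanism the abstract credits with reduced attribution noise and higher energy concentration.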

Böge, E., Gunindi, Y., Ertan, B., Aptoula, E., Alp, N., & Ozkan, H.
doi.org/10.1007/978-3-032-08324-1_19
Explainable Artificial Intelligence, Third World Conference, xAI 2025

Böge, E., Gunindi, Y., Ertan, B., Aptoula, E., Alp, N., & Ozkan, H. (2025). A biologically inspired filter significance assessment method for model explanation. In Explainable Artificial Intelligence, Third World Conference, xAI 2025 (Proceedings, Part II) (pp. 422–435). doi:10.1007/978-3-032-08324-1_19