Abstract (EN):
Artificial neural networks (ANNs) have been used for classification tasks involving functional magnetic resonance imaging (fMRI), though such analyses have typically covered only a fraction of the brain. Recent work combined shallow neural networks (SNNs) with explainable artificial intelligence (xAI) techniques to extract insights into brain processes. While earlier studies validated this approach on motor-task fMRI data, the present study applies it to Theory of Mind (ToM) cognitive tasks, using data from the Human Connectome Project (HCP) Young Adult database. Cognitive tasks are more challenging to decode because the underlying brain processes are non-linear. The HCP multimodal parcellation atlas segments the brain, guiding the training, pruning, and retraining of an SNN. Shapley additive explanations (SHAP) then explain the retrained network, and the results are compared with a General Linear Model (GLM) analysis for validation. The initial network achieved 88.2% accuracy, dropped to 80.0% after pruning, and recovered to 84.7% after retraining. The SHAP explanations aligned with the GLM findings and with brain regions known to be involved in ToM. This fMRI analysis successfully addressed a cognitively complex paradigm, demonstrating the potential of explainability techniques for understanding non-linear brain processes. The findings suggest that xAI, and knowledge extraction in particular, is valuable for advancing mental health research and brain state decoding.
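The attribution step summarized in the abstract rests on Shapley values, which credit each input feature with its average marginal contribution to the model output over all feature coalitions. The following is a minimal, self-contained sketch of exact Shapley computation on a toy linear stand-in for the classifier; the weights, feature values, and baseline are purely illustrative and are not taken from the study (real SHAP tooling approximates this sum, since exact enumeration is exponential in the number of features):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for feature attribution.

    For each feature i, average the marginal contribution of revealing
    x[i] over all coalitions S of the remaining features; features
    outside the coalition take their baseline (reference) value.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy stand-in for a trained model: a linear score over 3 "parcel" features
# (weights chosen arbitrarily for illustration).
w = [0.5, -1.0, 2.0]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))

x = [1.0, 2.0, 3.0]          # observed feature values
baseline = [0.0, 0.0, 0.0]   # reference input (e.g., mean activity)
phi = shapley_values(f, x, baseline)
# For a linear model with this baseline, phi[i] == w[i] * (x[i] - baseline[i]),
# and the attributions sum to f(x) - f(baseline).
```

The closed-form check in the final comment is what makes a linear model a useful sanity test before applying the same attribution machinery to a non-linear network.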
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
20