Decoding Mental States in Social Cognition: Insights from Explainable Artificial Intelligence on HCP fMRI Data

Title
Decoding Mental States in Social Cognition: Insights from Explainable Artificial Intelligence on HCP fMRI Data
Type
Article in International Scientific Journal
Year
2025
Authors
dos Santos, JDM
(Author)
Other. The person does not belong to the institution. Without AUTHENTICUS. Without ORCID.
dos Santos, JPM
(Author)
Other. The person does not belong to the institution. Without AUTHENTICUS. Without ORCID.
Journal
Vol. 7
Final page: 17
ISSN: 2504-4990
Indexing
Publication in ISI Web of Knowledge - 0 Citations
Publication in Scopus - 0 Citations
Other information
Authenticus ID: P-018-6N2
Abstract (EN): Artificial neural networks (ANNs) have been used for classification tasks involving functional magnetic resonance imaging (fMRI), though such analyses typically consider only a fraction of the brain. Recent work combined shallow neural networks (SNNs) with explainable artificial intelligence (xAI) techniques to extract insights into brain processes. While earlier studies validated this approach on motor-task fMRI data, the present study applies it to Theory of Mind (ToM) cognitive tasks, using data from the Human Connectome Project's (HCP) Young Adult database. Cognitive tasks are more challenging because the underlying brain processes are non-linear. The HCP multimodal parcellation brain atlas segments the brain, and the resulting parcels guide the training, pruning, and retraining of an SNN. Shapley values then explain the retrained network, and the results are compared with a General Linear Model (GLM) analysis for validation. The initial network achieved 88.2% accuracy, dropped to 80.0% after pruning, and recovered to 84.7% after retraining. The SHAP explanations aligned with the GLM findings and with known ToM-related brain regions. This fMRI analysis successfully addressed a cognitively complex paradigm, demonstrating the potential of explainability techniques for understanding non-linear brain processes. The findings suggest that xAI, and knowledge extraction in particular, is valuable for advancing mental health research and brain-state decoding.
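
The abstract outlines a concrete pipeline: parcel-level fMRI features feed a shallow network that is trained, pruned, and retrained, and the retrained model is then explained with Shapley values and checked against GLM results. The following is a minimal sketch of that workflow, not the authors' code; the synthetic data, feature shapes, pruning criterion, and all variable names are assumptions, and it uses scikit-learn's MLPClassifier together with the shap package purely for illustration.

```python
# Minimal sketch (not the authors' implementation) of the pipeline described in the
# abstract: parcel-averaged fMRI features -> shallow neural network -> pruning and
# retraining -> SHAP explanation. All data and thresholds here are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import shap

# Hypothetical dataset: one row per fMRI sample, one column per HCP-MMP parcel
# (the real atlas defines 360 cortical parcels); labels distinguish ToM vs. control.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 360))
y = rng.integers(0, 2, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow network: a single small hidden layer.
snn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
snn.fit(X_train, y_train)

# Crude pruning stand-in: zero out the weakest input weights, then retrain briefly
# with warm_start. A faithful pruning step would keep a mask so the zeroed weights
# stay zero during retraining; this sketch does not enforce that.
W = snn.coefs_[0]
threshold = np.quantile(np.abs(W), 0.5)  # drop the weakest 50% of weights (assumed)
snn.coefs_[0] = np.where(np.abs(W) < threshold, 0.0, W)
snn.set_params(warm_start=True, max_iter=100)
snn.fit(X_train, y_train)

# Shapley-value explanation of the retrained network; parcels with the largest mean
# |SHAP| value would then be compared against GLM activation maps for validation.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(snn.predict_proba, background)
shap_values = explainer.shap_values(X_test[:20])
```

In practice one would rank parcels by mean absolute SHAP value and compare that ranking with GLM contrast maps for the ToM condition, which is the validation step the abstract describes.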
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 20
Documents
We could not find any documents associated with this publication.
Related Publications

Of the same journal

Enhancing Hierarchical Sales Forecasting with Promotional Data: A Comparative Study Using ARIMA and Deep Neural Networks (2024)
Article in International Scientific Journal
Teixeira, M; José Manuel Oliveira; Patrícia Ramos