
Deep Learning Approach for Human Action Recognition Using a Time Saliency Map Based on Motion Features Considering Camera Movement and Shot in Video Image Sequences

Title
Deep Learning Approach for Human Action Recognition Using a Time Saliency Map Based on Motion Features Considering Camera Movement and Shot in Video Image Sequences
Type
Article in International Scientific Journal
Year
2023
Authors
Abdorreza Alavigharahbagh
(Author)
Other
Vahid Hajihashemi
(Author)
Other
José J. M. Machado
(Author)
FEUP
João Manuel R. S. Tavares
(Author)
FEUP
Journal
Vol. 14
Pages: 616
ISSN: 2078-2489
Publisher: MDPI
Indexing
ISI Web of Knowledge - 0 citations
ISI Web of Science
Scopus - 0 citations
Clarivate Analytics
Scientific Classification
CORDIS: Technological Sciences
FOS: Engineering and technology sciences
Other Information
ID Authenticus: P-00Z-FBE
Abstract (EN): In this article, a hierarchical method for action recognition based on temporal and spatial features is proposed. In current HAR methods, camera movement, sensor movement, sudden scene changes, and scene movement can increase motion feature errors and decrease accuracy. Another important aspect to take into account in a HAR method is the required computational cost. The proposed method provides a preprocessing step to address these challenges. As a preprocessing step, the method uses optical flow to detect camera movements and shots in input video image sequences. In the temporal processing block, the optical flow technique is combined with the absolute value of frame differences to obtain a time saliency map. The detection of shots, cancellation of camera movement, and the building of a time saliency map minimise movement detection errors. The time saliency map is then passed to the spatial processing block to segment the moving persons and/or objects in the scene. Because the search region for spatial processing is limited based on the temporal processing results, the computations in the spatial domain are drastically reduced. In the spatial processing block, the scene foreground is extracted in three steps: silhouette extraction, active contour segmentation, and colour segmentation. Key points are selected at the borders of the segmented foreground. The final features used are the intensity and angle of the optical flow at the detected key points. Using key point features for action detection reduces the computational cost of the classification step and the required training time. Finally, the features are submitted to a Recurrent Neural Network (RNN) to recognise the involved action. The proposed method was tested using four well-known action datasets: KTH, Weizmann, HMDB51, and UCF101, and its efficiency was evaluated.
Since the proposed approach segments salient objects based on motion, edges, and colour features, it can be added as a preprocessing step to most current HAR systems to improve performance.
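The temporal processing block described above combines optical-flow magnitude with absolute frame differences to build a time saliency map. A minimal NumPy sketch of that combination is shown below; the equal-weight blending parameter `alpha`, the min-max normalisation, and the assumption that a dense flow field has already been computed (e.g., with an external optical-flow routine) are illustrative choices, not details taken from the paper.

```python
import numpy as np

def time_saliency_map(prev_frame, curr_frame, flow, alpha=0.5):
    """Blend optical-flow magnitude with the absolute frame difference
    into a normalised time saliency map (a sketch; `alpha` and the
    min-max normalisation are assumptions, not the authors' exact scheme).

    prev_frame, curr_frame: H x W grayscale images
    flow: H x W x 2 dense optical-flow field (dx, dy per pixel)
    """
    # absolute frame difference highlights pixels that changed intensity
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    # optical-flow magnitude highlights pixels that moved
    mag = np.linalg.norm(flow.astype(np.float32), axis=-1)

    def _norm(x):
        # scale to [0, 1]; a constant map contributes nothing
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * _norm(mag) + (1.0 - alpha) * _norm(diff)
```

Thresholding such a map would then restrict the spatial-processing search region to the salient pixels, which is how the abstract explains the reduction in spatial-domain computation.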
Language: English
Type (Teaching Evaluation): Scientific
No. of pages: 27
Documents
File Name / Description / Size
information-14-00616-v2 Article 3219.21 KB
Related Publications

By the same authors

Novel sound event and sound activity detection framework based on intrinsic mode functions and deep learning (2025)
Article in International Scientific Journal
Vahid Hajihashemi; Abdorreza Alavigharahbagh; J. J. M. Machado; João Manuel R. S. Tavares
Land Cover Classification Model Using Multispectral Satellite Images Based on a Deep Learning Synergistic Semantic Segmentation Network (2025)
Article in International Scientific Journal
Abdorreza Alavi Gharahbagh; Vahid Hajihashemi; José J. M. Machado; João Manuel R. S. Tavares
Hybrid time-spatial video saliency detection method to enhance human action recognition systems (2024)
Article in International Scientific Journal
Abdorreza Alavi Gharahbagh; Vahid Hajihashemi; Marta Campos Ferreira; J.J.M. Machado; João Manuel R. S. Tavares
Feature Extraction Based on Local Histogram with Unequal Bins and a Recurrent Neural Network for the Diagnosis of Kidney Diseases from CT Images (2024)
Article in International Scientific Journal
Abdorreza Alavi Gharahbagh; Vahid Hajihashemi; José J.M. Machado; João Manuel R. S. Tavares
Enhancing Efficiency in Hybrid Solar-Wind-Battery Systems Using an Adaptive MPPT Controller Based on Shadow Motion Prediction (2024)
Article in International Scientific Journal
Abdorreza Alavi Gharahbagh; Vahid Hajihashemi; Nasrin Salehi; Mahyar Moradi; José J. M. Machado; João Manuel R. S. Tavares

See all (8)

From the same journal

Teaching Software Engineering Topics Through Pedagogical Game Design Patterns: An Empirical Study (2020)
Article in International Scientific Journal
Nuno Flores; Paiva, ACR; Cruz, N
Screening System for Cardiac Problems through Non-Invasive Identification of Blood Pressure Waveform (2020)
Article in International Scientific Journal
Paulo Abreu; Fernando Carneiro; Maria Teresa Restivo
Robust Complaint Processing in Portuguese (2021)
Article in International Scientific Journal
Henrique Lopes Cardoso; Osorio, TF; Barbosa, LV; Rocha, G; Reis, LP; Machado, JP; Oliveira, AM
Recognizing textual entailment: Challenges in the Portuguese language (2018)
Article in International Scientific Journal
Gil Rocha; Henrique Lopes Cardoso
Prototype to Increase Crosswalk Safety by Integrating Computer Vision with ITS-G5 Technologies (2020)
Article in International Scientific Journal
Gaspar, F; Guerreiro, V; Loureiro, P; Costa, P; Mendes, S; Rabadao, C

See all (17)
