Publication

A customized residual neural network and bi-directional gated recurrent unit-based automatic speech recognition model

Type
Article in International Scientific Journal
Year
2022-04
Authors
Selim Reza
(Author)
Other
Marta Campos Ferreira
(Author)
FEUP
J.J.M. Machado
(Author)
FEUP
João Manuel R. S. Tavares
(Author)
FEUP
Journal
The Journal is awaiting validation by the Administrative Services.
Vol. 215
Pages: 1-10
ISSN: 0957-4174
Indexing
Indexed in ISI Web of Knowledge (0 citations)
Indexed in ISI Web of Science (Clarivate Analytics)
Scientific classification
CORDIS: Technological sciences
FOS: Engineering and technology
Other information
Authenticus ID: P-00X-FZ7
Abstract (EN): Speech recognition aims to convert human speech into text and has applications in security, healthcare, commerce, automobiles, and technology, to name a few. Inserting residual neural networks before recurrent neural network cells improves accuracy and cuts training time by a good margin. Furthermore, layer normalization is more effective than batch normalization for model training and performance enhancement. The size of the datasets also has a tremendous influence on achieving the best performance. Leveraging these techniques, this article proposes an automatic speech recognition model with five stacked layers of a customized Residual Convolutional Neural Network and seven layers of Bi-Directional Gated Recurrent Units, including a logarithmic softmax for the model output. Each of them incorporates a layer normalization technique with learnable per-element affine parameters. The training and testing of the new model were conducted on the LibriSpeech corpus and the LJ Speech dataset. The experimental results demonstrate character error rates (CER) of 4.7% and 3.61% on the two datasets, respectively, with only 33 million parameters and without the requirement of any external language model.
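Two ingredients of the abstract are easy to make concrete: layer normalization with a learnable per-element affine transform, and the character error rate (CER) used to report the results. The sketch below is a minimal pure-Python illustration of both ideas; the function names and signatures are illustrative assumptions, not taken from the paper's code, and a real model would use a deep-learning framework's layer-norm implementation instead.

```python
import math

def layer_norm(x, gamma, beta, eps=1e-5):
    """Layer normalization of one feature vector, followed by a learnable
    per-element affine transform (gamma[i] * x_hat[i] + beta[i]), the
    variant the abstract says each ResCNN/BiGRU layer incorporates."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [g * (v - mean) / math.sqrt(var + eps) + b
            for v, g, b in zip(x, gamma, beta)]

def cer(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between the
    hypothesis and the reference transcript, divided by the reference
    length (the metric behind the reported 4.7% / 3.61% figures)."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))          # edit distances for the empty prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m
```

With `gamma` all ones and `beta` all zeros the affine transform is the identity, so `layer_norm` reduces to plain zero-mean, unit-variance normalization; training then moves `gamma` and `beta` away from that starting point per element.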
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 10
Documents
File name Description Size
1-s2.0-S0957417422023119 Paper 2250.21 KB
paper 1st Page 183.26 KB
Copyright 1996-2024 © Faculdade de Arquitectura da Universidade do Porto
Page created on: 2024-11-08 at 14:33:12