Deep Learning Approaches Assessment for Underwater Scene Understanding and Egomotion Estimation

Title
Deep Learning Approaches Assessment for Underwater Scene Understanding and Egomotion Estimation
Type
Article in International Conference Proceedings Book
Year
2019
Authors
Bernardo Teixeira
(Author)
Other
The person does not belong to the institution. Without ORCID.
Hugo Silva
(Author)
Other
The person does not belong to the institution. Without ORCID.
Aníbal Matos
(Author)
FEUP
Conference proceedings (international)
Pages: 1-9
2019 OCEANS MTS/IEEE Seattle, OCEANS 2019
27-31 October 2019
Other information
Authenticus ID: P-00R-P5M
Abstract (EN): This paper addresses the use of deep learning approaches for visual-based navigation in confined underwater environments. State-of-the-art algorithms have shown the tremendous potential that deep learning architectures hold for visual navigation, though they are still mostly outperformed by classical feature-based techniques. In this work, we apply current state-of-the-art deep learning methods for visual-based robot navigation to the more challenging underwater environment, providing both an underwater visual dataset acquired in real operational mission scenarios and an assessment of state-of-the-art algorithms in the underwater context. We extend current work by proposing a novel pose optimization architecture for correcting visual odometry estimate drift using a Visual-Inertial fusion network, consisting of a neural network architecture anchored on an inertial supervision learning scheme. Our Visual-Inertial Fusion Network was shown to improve trajectory estimates by an average of 50%, while also producing more visually consistent trajectories in both of our underwater application scenarios.
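The drift-correction idea described in the abstract, fusing a visual odometry pose estimate with inertial cues under an inertial supervision signal, can be illustrated with a minimal PyTorch sketch. This is not the authors' published architecture: the class name VisualInertialFusionNet, the 6-DoF pose representation, the layer sizes, and the IMU window encoding are assumptions made only for the example.

# Minimal sketch of a visual-inertial fusion network for pose-drift correction.
# Assumptions (not from the paper): 6-DoF pose vectors, a GRU encoder over raw
# IMU samples, an MLP fusion head, and an "inertial supervision" loss against
# IMU-integrated pose increments.
import torch
import torch.nn as nn

class VisualInertialFusionNet(nn.Module):
    def __init__(self, imu_feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Encode a window of raw IMU readings (T x 6: gyro + accel) into one feature vector.
        self.imu_encoder = nn.GRU(input_size=6, hidden_size=imu_feat_dim, batch_first=True)
        # Fuse the visual odometry pose estimate (6-DoF) with the inertial feature
        # and regress a corrective pose residual.
        self.fusion = nn.Sequential(
            nn.Linear(6 + imu_feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, vo_pose: torch.Tensor, imu_window: torch.Tensor) -> torch.Tensor:
        # vo_pose: (B, 6) relative pose from the visual odometry front-end
        # imu_window: (B, T, 6) IMU samples between the two frames
        _, h = self.imu_encoder(imu_window)   # h: (1, B, imu_feat_dim)
        imu_feat = h.squeeze(0)
        correction = self.fusion(torch.cat([vo_pose, imu_feat], dim=1))
        return vo_pose + correction           # drift-corrected pose estimate

# Inertial supervision: penalize disagreement with a pose increment integrated from the IMU.
def inertial_supervision_loss(pred_pose, imu_integrated_pose):
    return nn.functional.mse_loss(pred_pose, imu_integrated_pose)

if __name__ == "__main__":
    net = VisualInertialFusionNet()
    vo = torch.randn(4, 6)        # hypothetical VO estimates
    imu = torch.randn(4, 20, 6)   # hypothetical 20-sample IMU windows
    target = torch.randn(4, 6)    # hypothetical IMU-integrated pose increments
    loss = inertial_supervision_loss(net(vo, imu), target)
    loss.backward()

In this sketch the network only learns a residual correction on top of the visual odometry output, which is one common way to frame drift correction; the paper's actual fusion and supervision details should be taken from the publication itself.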
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 9
Documents
We could not find any documents associated with this publication.
Related Publications

By the same authors

Deep Learning for Underwater Visual Odometry Estimation (2020)
Article in International Scientific Journal
Bernardo Teixeira; Hugo Silva; Aníbal Matos; Eduardo Pereira da Silva