Publication
OCT Image Synthesis through Deep Generative Models

Title
OCT Image Synthesis through Deep Generative Models
Type
Article in International Conference Proceedings Book
Year
2023
Authors
Melo, T
(Author)
Other
Jaime S Cardoso
(Author)
FEUP
Ângela Carneiro
(Author)
FMUP
Aurélio Campilho
(Author)
FEUP
Ana Maria Mendonça
(Author)
FEUP
Conference proceedings (International)
Pages: 561-566
36th IEEE International Symposium on Computer-Based Medical Systems, CBMS 2023
L'Aquila, 22 June 2023 through 24 June 2023
Indexing
Other information
Authenticus ID: P-00Y-V72
Abstract (EN): The development of accurate methods for OCT image analysis is highly dependent on the availability of large annotated datasets. As such datasets are usually expensive and hard to obtain, novel approaches based on deep generative models have been proposed for data augmentation. In this work, a flow-based network (SRFlow) and a generative adversarial network (ESRGAN) are used for synthesizing high-resolution OCT B-scans from low-resolution versions of real OCT images. The quality of the images generated by the two models is assessed using two standard fidelity-oriented metrics and a learned perceptual quality metric. The performance of two classification models trained on real and synthetic images is also evaluated. The obtained results show that the images generated by SRFlow preserve higher fidelity to the ground truth, while the outputs of ESRGAN present, on average, better perceptual quality. Independently of the architecture of the network chosen to classify the OCT B-scans, the model's performance always improves when images generated by SRFlow are included in the training set.
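The evaluation described in the abstract combines fidelity-oriented metrics with a learned perceptual quality metric. As an illustration only (not taken from the paper), the Python sketch below shows how a super-resolved OCT B-scan could be compared against its ground-truth counterpart using PSNR and SSIM as fidelity metrics and LPIPS as the learned perceptual metric; the file names, the LPIPS backbone, and the grayscale handling are assumptions, not details from the publication.

# Minimal sketch: compare a super-resolved B-scan with its ground truth using
# PSNR/SSIM (fidelity) and LPIPS (learned perceptual quality). Assumed setup.
import numpy as np
import torch
import lpips                                   # pip install lpips
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def to_lpips_tensor(img: np.ndarray) -> torch.Tensor:
    """Convert a grayscale uint8 image to a 3-channel tensor in [-1, 1] for LPIPS."""
    x = torch.from_numpy(img).float() / 255.0          # (H, W) in [0, 1]
    x = x.unsqueeze(0).repeat(3, 1, 1)                  # (3, H, W), replicate channel
    return (x * 2.0 - 1.0).unsqueeze(0)                 # (1, 3, H, W) in [-1, 1]

def evaluate_pair(sr_path: str, gt_path: str) -> dict:
    sr = imread(sr_path, as_gray=True)
    gt = imread(gt_path, as_gray=True)
    sr8 = (sr * 255).astype(np.uint8) if sr.max() <= 1.0 else sr.astype(np.uint8)
    gt8 = (gt * 255).astype(np.uint8) if gt.max() <= 1.0 else gt.astype(np.uint8)

    psnr = peak_signal_noise_ratio(gt8, sr8, data_range=255)   # higher is better
    ssim = structural_similarity(gt8, sr8, data_range=255)     # higher is better

    loss_fn = lpips.LPIPS(net='alex')                   # AlexNet backbone (assumed choice)
    with torch.no_grad():
        d = loss_fn(to_lpips_tensor(sr8), to_lpips_tensor(gt8)).item()

    return {"psnr_db": psnr, "ssim": ssim, "lpips": d}  # lower LPIPS = better perceptual quality

# Hypothetical usage:
# print(evaluate_pair("bscan_srflow.png", "bscan_ground_truth.png"))

Under this kind of evaluation, a higher-fidelity output (as reported for SRFlow) would show higher PSNR/SSIM, while a perceptually sharper output (as reported for ESRGAN) would tend toward a lower LPIPS distance.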
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 6
Documents
No documents are associated with this publication.