Abstract (EN):
This study describes a novel dataset with retinal image quality annotations, defined by three different retinal experts, and presents an inter-observer analysis of quality assessment that can be used as a gold standard for future studies. A state-of-the-art algorithm for retinal image quality assessment is also analysed and compared against the specialists' performance. Results show that, for 71% of the images in the dataset, the three experts agree on the assigned image quality label. When comparing one expert against another, accuracy, specificity and sensitivity fell in the ranges [83.0 - 85.2]%, [72.7 - 92.9]% and [80.0 - 94.7]%, respectively. The evaluated automatic quality assessment method, despite not being trained on the novel dataset, achieves a performance that lies within the inter-observer variability.
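The pairwise expert comparison reduces to standard confusion-matrix metrics. Below is a minimal sketch of that computation, assuming binary quality labels (1 = gradable/good quality, 0 = poor quality) and treating one expert as the reference; the function and variable names are illustrative and not taken from the study.

```python
# Sketch (not from the paper): pairwise agreement metrics between two annotators,
# with annotator A taken as the reference and annotator B as the rater under test.

def pairwise_metrics(labels_a, labels_b):
    """Return (accuracy, sensitivity, specificity) for binary quality labels."""
    tp = sum(1 for a, b in zip(labels_a, labels_b) if a == 1 and b == 1)
    tn = sum(1 for a, b in zip(labels_a, labels_b) if a == 0 and b == 0)
    fp = sum(1 for a, b in zip(labels_a, labels_b) if a == 0 and b == 1)
    fn = sum(1 for a, b in zip(labels_a, labels_b) if a == 1 and b == 0)
    accuracy = (tp + tn) / len(labels_a)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")  # agreement on good-quality images
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")  # agreement on poor-quality images
    return accuracy, sensitivity, specificity

if __name__ == "__main__":
    # Hypothetical labels for eight images from two experts.
    expert_1 = [1, 1, 0, 1, 0, 1, 1, 0]
    expert_2 = [1, 0, 0, 1, 0, 1, 1, 1]
    acc, sens, spec = pairwise_metrics(expert_1, expert_2)
    print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

The same metrics, computed with the expert consensus as reference, would be the natural way to place the automatic method's performance against the reported inter-observer ranges.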
Language:
English
Type (Professor's evaluation):
Dissemination
No. of pages:
4