Abstract (EN):
Chest radiography is one of the most common imaging exams, but its interpretation is often challenging and time-consuming, which has motivated the development of automated tools for pathology/abnormality detection. Deep learning models trained on large-scale chest X-ray datasets have shown promising results but are highly dependent on data quality. However, these datasets often contain incorrect metadata and non-compliant or corrupted images. These inconsistencies are ultimately incorporated into the training process, impairing the validity of the results. In this study, a novel approach to detect non-compliant images is proposed, based on deep features extracted from a patient position classification model and from a pre-trained VGG16 model. The method is applied to CheXpert, a widely used public dataset. From a pool of 100 images, it is shown that the deep feature-based methods built on the patient position classification model retrieve a larger number of non-compliant images (up to 81%) than the same methods built on a pre-trained VGG16 (up to 73%) and the state-of-the-art uncertainty-based method (50%).
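The abstract does not specify how the deep features are scored, so the following is only a minimal sketch of the general idea: embed each image with a feature extractor (e.g. the patient position classifier or a pre-trained VGG16), then flag images whose feature vectors lie far from the distribution of known-compliant images. The function name, the z-score distance, and the `k`-sigma threshold are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def flag_non_compliant(features, reference, k=2.0):
    """Flag images whose deep features are outliers w.r.t. a reference set.

    features  : (n_images, n_dims) array of deep features to screen
                (e.g. penultimate-layer activations of a classifier).
    reference : (n_ref, n_dims) array of features from compliant images.
    k         : number of standard deviations above the mean reference
                distance used as the outlier threshold (assumed heuristic).
    Returns a boolean mask, True where an image is flagged as non-compliant.
    """
    mu = reference.mean(axis=0)
    sigma = reference.std(axis=0) + 1e-8  # avoid division by zero
    # Per-image distance from the reference mean, in z-scored feature space.
    ref_dist = np.linalg.norm((reference - mu) / sigma, axis=1)
    threshold = ref_dist.mean() + k * ref_dist.std()
    dist = np.linalg.norm((features - mu) / sigma, axis=1)
    return dist > threshold
```

In a ranked-retrieval setting, as in the study's "pool of 100 images", one would sort images by `dist` in descending order and inspect the top of the list rather than apply a hard threshold.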
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
4