Abstract (EN):
Recognizing a place at a glance is one of the first capacities humans use to understand where they are. Giving robots this capacity would increase the redundancy of their localization systems and improve semantic localization. Achieving it, however, requires building a robust visual signature that a classifier can use. This paper presents a new approach for extracting a global descriptor from an image that serves as a visual signature in indoor scenarios. The descriptor was tested on videos acquired by three robots in three different indoor scenarios, and showed good accuracy and computational performance compared to other local and global descriptors.
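The abstract does not specify how the global descriptor is computed, so as a minimal illustrative sketch only, the pipeline it describes (global descriptor per image, fed to a classifier for place recognition) could look like the following, assuming a simple color-histogram descriptor and a nearest-neighbour classifier; both are placeholder choices, not the paper's method:

```python
import numpy as np

def global_descriptor(image, bins=8):
    """Compute a global descriptor for an image.

    Placeholder: a joint RGB color histogram, NOT the descriptor
    proposed in the paper (which the abstract does not detail).
    image is an H x W x 3 uint8 array.
    """
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256),) * 3,
    )
    vec = hist.ravel().astype(np.float64)
    return vec / vec.sum()  # L1-normalize so image size does not matter

def classify_place(query_sig, signatures, labels):
    """Nearest-neighbour place classifier over stored visual signatures."""
    dists = [np.linalg.norm(query_sig - s) for s in signatures]
    return labels[int(np.argmin(dists))]
```

In use, one signature (or several) would be stored per known place, and each new camera frame would be reduced to its descriptor and matched against the stored set.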
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
10