Abstract (EN):
Estimating a 3D sensor constellation that maximizes the observable surface area of a given set of target objects is a challenging, combinatorially explosive problem with a wide range of applications in perception tasks that require gathering sensor data from multiple views due to environment occlusions. To tackle this problem, the Gazebo simulator was configured to accurately model 8 types of depth cameras with different hardware characteristics, such as image resolution, field of view, measurement range, and acquisition rate. Then, several populations of depth sensors were deployed within 4 testing environments targeting object recognition and bin picking applications with increasing levels of occlusion and geometric complexity. The sensor populations were inserted, either uniformly or randomly, into a set of regions of interest in which useful sensor data could be retrieved and in which the real sensors could be installed or moved by a robotic arm. The proposed approach fuses 3D point clouds from multiple sensors using color segmentation and voxel grid merging for fast surface area coverage computation, coupled with a random sample consensus algorithm for best-view estimation. It quickly estimated useful sensor constellations that maximize the observable surface area of a set of target objects, making it suitable both for deciding the type and spatial disposition of sensors and for guiding movable 3D cameras to avoid environment occlusions.
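The coverage computation and sampling-based constellation search described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the voxel size, and the use of a plain random-subset search in place of the full RANSAC-style best-view estimation are all assumptions for the sake of the example.

```python
import random

def voxelize(points, voxel_size=0.01):
    """Quantize 3D points into a set of occupied voxel indices (voxel grid merging)."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def coverage(target_voxels, sensor_clouds, voxel_size=0.01):
    """Fraction of target surface voxels observed by at least one sensor's point cloud."""
    observed = set()
    for cloud in sensor_clouds:
        observed |= voxelize(cloud, voxel_size)
    return len(target_voxels & observed) / len(target_voxels)

def best_constellation(target_points, candidate_clouds, k,
                       iters=200, voxel_size=0.01, seed=0):
    """Randomly sample k-sensor subsets and keep the one with the highest coverage.

    A simplified stand-in for the random-sample-consensus view selection
    described in the abstract.
    """
    rng = random.Random(seed)
    target_voxels = voxelize(target_points, voxel_size)
    best_idx, best_cov = None, -1.0
    for _ in range(iters):
        idx = rng.sample(range(len(candidate_clouds)), k)
        cov = coverage(target_voxels,
                       [candidate_clouds[i] for i in idx], voxel_size)
        if cov > best_cov:
            best_idx, best_cov = idx, cov
    return best_idx, best_cov
```

Representing each fused cloud as a set of voxel indices makes coverage a cheap set intersection, which is what allows many candidate constellations to be scored quickly.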
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
8