Abstract (EN):
Effective navigation in mobile robotics relies on precise environmental mapping, including the detection of complex objects as geometric primitives. This work introduces a deep learning pipeline that determines the pose, type, and dimensions of 2D primitives using a mobile robot equipped with a noisy LiDAR sensor. Simulated experiments were conducted in Webots with randomly placed primitives; the robot captured point clouds that were used to progressively build a map of the environment. Two mapping techniques, deterministic and probabilistic (Bayesian) mapping, were compared under different levels of LiDAR noise. The resulting maps were fed to a YOLOv5 network that detected the position and type of each primitive, and a cropped image of each detection was then passed to a Convolutional Neural Network (CNN) that estimated its dimensions and orientation. Primitive classification achieved 95% accuracy under low noise, dropping to 85% under higher noise, while dimension prediction errors ranged from 5% to 12% as the noise increased. Probabilistic mapping improved accuracy by 10-15% over the deterministic approach and remained robust at noise levels up to 0.1. These findings highlight the effectiveness of probabilistic mapping in improving detection accuracy for mobile robot perception in noisy environments.
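The abstract does not specify the map update rule; a minimal sketch of a standard Bayesian (log-odds) occupancy-grid update of the kind contrasted with deterministic mapping is shown below. The class name and the p_hit / p_miss inverse-sensor-model probabilities are illustrative assumptions, not values taken from the work.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal 2D occupancy grid with a Bayesian (log-odds) update."""

    def __init__(self, width, height, p_hit=0.7, p_miss=0.4):
        # Zero log-odds corresponds to an uninformative prior of p = 0.5.
        self.log_odds = np.zeros((height, width))
        self.l_hit = logit(p_hit)    # positive: evidence for occupancy
        self.l_miss = logit(p_miss)  # negative: evidence for free space

    def update(self, hit_cells, free_cells):
        """Accumulate evidence from one noisy LiDAR scan.

        hit_cells  -- (row, col) indices where a beam ended (occupied evidence)
        free_cells -- (row, col) indices the beam traversed (free evidence)
        """
        for r, c in hit_cells:
            self.log_odds[r, c] += self.l_hit
        for r, c in free_cells:
            self.log_odds[r, c] += self.l_miss

    def probabilities(self):
        """Recover per-cell occupancy probabilities from accumulated log-odds."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))
```

Because evidence accumulates additively across scans, isolated spurious returns from a noisy sensor are gradually outweighed, which is consistent with the robustness to noise reported for the probabilistic approach.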
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
6