Abstract (EN):
Accurate estimation of hand shape and position is an important task in applications such as human-computer interaction, human-robot interaction, and virtual and augmented reality. This paper proposes a method to estimate hand keypoints from single color images using the pre-trained deep convolutional neural networks VGG-16 and VGG-19. The method is evaluated on the FreiHAND dataset, and the performance of the two networks is compared. The best results were achieved by VGG-19, with average estimation errors of 7.40 pixels and 11.36 millimeters in the best cases of two-dimensional and three-dimensional hand keypoint estimation, respectively.
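The abstract reports results as average estimation errors in pixels (2D) and millimeters (3D). A minimal sketch of such a metric, assuming it is the mean per-keypoint Euclidean distance over the 21 hand keypoints used by FreiHAND (the exact error definition is not stated in the abstract):

```python
import numpy as np

def mean_keypoint_error(pred, gt):
    """Average Euclidean distance between predicted and ground-truth
    keypoints. Works for 2D arrays (error in pixels) or 3D arrays
    (error in millimeters) of shape (num_keypoints, dims)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: 21 hand keypoints (FreiHAND convention), 2D case,
# with every prediction offset by a 3-4-5 right-triangle displacement.
gt = np.zeros((21, 2))
pred = gt + np.array([3.0, 4.0])
print(mean_keypoint_error(pred, gt))  # → 5.0
```

The same function covers both evaluation settings by passing (21, 2) pixel coordinates or (21, 3) metric coordinates.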
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
6