Self-adaptive Cobots in Cyber-Physical Production Systems

Title
Self-adaptive Cobots in Cyber-Physical Production Systems
Type
Article in International Conference Proceedings Book
Year
2019
Authors
Roberto Nogueira
(Author)
Other
The person does not belong to the institution. Without AUTHENTICUS ID; without ORCID.
João Reis
(Author)
Other
The person does not belong to the institution. Without AUTHENTICUS ID; without ORCID.
Conference proceedings (International)
Pages: 521-528
24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
Zaragoza, SPAIN, SEP 10-13, 2019
Other information
Authenticus ID: P-00R-861
Abstract (EN): Absolute automation in certain industries, such as the automotive industry, has proven to be disadvantageous. Robots are fairly capable at tasks that are repetitive and demand precision. However, a hybrid solution that combines the adaptability and resourcefulness of humans, cooperating in the same task with the precision and efficiency of machines, is the next step for automation. Manipulators, though, lack self-adaptability and true collaborative behaviour. Through the integration of vision systems, manipulators can perceive their environment and also understand complex interactions. In this paper, a vision-based collaborative proof-of-concept framework is proposed using the Kinect v2, a UR5 robotic manipulator and MATLAB. This framework implements three behavioural modes: 1) a Self-Adaptive mode for obstacle detection and avoidance, 2) a Collaborative mode for physical human-robot interaction, and 3) a standby Safe mode. These modes are activated via gestures, using the body tracking and gesture recognition algorithms of the Kinect v2. Additionally, to allow self-recognition of the robot, Region Growing segmentation is combined with the UR5's Forward Kinematics for precise, near real-time segmentation. Furthermore, self-adaptive reactive behaviour is implemented by applying an artificial repulsive action to the manipulator's end-effector. Reaction times were tested for all three modes: the Collaborative and Safe modes took up to 5 seconds to complete a movement, while the Self-Adaptive mode could take up to 10 seconds between reactions.
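The "artificial repulsive action" mentioned in the abstract can be illustrated with the classic potential-field formulation (Khatib-style): the end-effector receives a velocity offset pointing away from a nearby obstacle, growing as the obstacle gets closer and vanishing outside an influence radius. This is a minimal sketch of that general idea only; the function name, gain, and radius are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def repulsive_action(ee_pos, obstacle_pos, influence_radius=0.3, gain=1.0):
    """Potential-field repulsive velocity for an end-effector.

    Parameters are illustrative (assumed, not from the paper):
    positions in metres, gain dimensionless.
    Returns a 3-vector pushing the end-effector away from the obstacle,
    zero when the obstacle is outside the influence radius.
    """
    diff = np.asarray(ee_pos, dtype=float) - np.asarray(obstacle_pos, dtype=float)
    d = float(np.linalg.norm(diff))
    if d >= influence_radius or d == 0.0:
        return np.zeros(3)
    # Magnitude grows steeply as distance shrinks (classic 1/d potential gradient).
    magnitude = gain * (1.0 / d - 1.0 / influence_radius) / d**2
    return magnitude * (diff / d)
```

In a reactive loop, this offset would be added to the commanded end-effector velocity each control cycle, so the manipulator deflects around the detected obstacle and resumes its task once the obstacle leaves the influence region.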
Language: English
Type (Professor's evaluation): Scientific
No. of pages: 8
Documents
We could not find any documents associated with this publication.