Despinoy, Fabien [UCL]
Vitrani, Marie-Aude [Université Pierre et Marie Curie - CNRS]
Herman, Benoît [UCL]
Despite the continuous improvement of master interfaces, distant robot teleoperation remains a challenging task. In many applications (e.g. spacecraft, underwater or flying drones, robotic arms operating in hazardous conditions in factories), the operator has only an indirect view of the remote environment, provided by a video camera usually mounted on the robot end-effector itself and displayed on a 2D monitor. Whereas any controller is capable of planning a path that follows a prescribed 3D trajectory, driving such an eye-in-hand robot in real time is known to be difficult. Even a skilled user will not make the most of a 6-axis master interface that could normally induce smooth and continuous motion. Instead, he/she will likely generate a succession of independent translations and rotations punctuated by frequent stops. The main reason is that the target (e.g. the object to grasp) must be kept inside the camera field of view during robot motion. This requires combining translations and rotations of the end-effector, with a ratio between linear and angular velocities that depends on the distance to the target, which is usually unknown.
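As a minimal illustration of this coupling (not part of the paper itself), consider the planar case of a camera orbiting a point target: to keep the target on the optical axis while translating with linear speed v perpendicular to that axis, the camera must yaw at exactly omega = v / d, where d is the distance to the target. The sketch below simulates this with simple Euler integration; all names and parameter values are illustrative assumptions.

```python
import math

def simulate_orbit(d=1.0, v=0.5, dt=1e-3, steps=1000):
    """Planar camera orbiting a point target at the origin.

    The camera translates at speed v perpendicular to its optical axis
    and yaws at omega = v / d, the ratio discussed in the abstract.
    Returns the largest angle (rad) between the optical axis and the
    target direction over the run.
    """
    x, y = d, 0.0        # camera position, distance d from the target
    yaw = math.pi        # optical axis initially points at the target
    omega = v / d        # angular velocity matched to the linear one
    max_err = 0.0
    for _ in range(steps):
        # translate perpendicular to the optical axis (camera's right,
        # consistent with a counter-clockwise orbit)
        x += v * dt * math.cos(yaw - math.pi / 2)
        y += v * dt * math.sin(yaw - math.pi / 2)
        yaw += omega * dt
        bearing = math.atan2(-y, -x)  # direction from camera to target
        # wrapped angular difference between bearing and optical axis
        err = abs(math.atan2(math.sin(bearing - yaw),
                             math.cos(bearing - yaw)))
        max_err = max(max_err, err)
    return max_err

print(simulate_orbit())  # target stays close to the optical axis
```

With the matched ratio omega = v / d, the pointing error remains a small fraction of a degree (limited only by the integration step). If d is misjudged, the same translation drives the target toward the edge of the field of view, which is exactly why the unknown distance makes manual eye-in-hand control hard.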
Bibliographic reference: Despinoy, Fabien ; Vitrani, Marie-Aude ; Herman, Benoît. A step toward robot teleoperation with eyes and hand. 5th International Workshop on Human-Friendly Robotics (HFR2012), Brussels, Belgium, 18-19 October 2012.
Permanent URL: http://hdl.handle.net/2078.1/120098