Robotic Vision

The Robotic Vision Group works with both vision sensors and robotics, with additional research activity in medical applications. Our research focuses on the use of visual information for autonomous robot navigation. Visual sensors enable non-contact measurement of the environment and improve robot performance when included in the control loop. We have developed new algorithms for visual servoing in completely unstructured environments that process visual information alone. This research has applications in many areas, including autonomous navigation in hazardous or remote environments.

[ People ]

Research topics

Human-robot Formation Control via Visual and Vibrotactile Haptic Feedback
We explore a new formation control setup consisting of a human leader and multiple follower robots. The mobile robots are equipped only with RGB-D cameras and must maintain a desired distance and orientation to the leader. Vibrotactile feedback, provided by haptic bracelets, guides the human along trajectories that are feasible for the leader-follower formation.
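A minimal sketch of the follower side, assuming the RGB-D camera yields the leader's distance and bearing and that a simple proportional law on a unicycle-like robot suffices (the gains and function name are illustrative, not the group's actual controller):

```python
import math

def follower_cmd(d, theta, d_des, theta_des, k_v=0.8, k_w=1.5):
    """Proportional follower controller keeping a desired distance and
    bearing to the leader.

    d, theta : leader distance (m) and bearing (rad) measured by RGB-D
    Returns (v, w): forward and angular velocity commands.
    """
    v = k_v * (d - d_des)          # drive the distance error to zero
    w = k_w * (theta - theta_des)  # drive the bearing error to zero
    return v, w
```

With the leader 1 m farther than desired and dead ahead, the follower speeds up without turning.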
Uncalibrated Visual Compass from Omnidirectional Line Images with Application to MAV Attitude Estimation
We present a new algorithm, based on previous results of the authors, for the yaw-angle estimation of an omnidirectional camera/robot undergoing a 6-DoF rigid motion. Our real-time algorithm is uncalibrated, robust to noisy data, and relies only on the projection of 3-D parallel lines as image features. Numerical and real-world experiments conducted with an eye-in-hand robot manipulator, used to simulate the motion of a Micro Aerial Vehicle (MAV), show the accuracy and reliability of our estimation algorithm.
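The core idea can be illustrated with a toy sketch: parallel 3-D lines share a common vanishing direction, and the change of that direction between two frames constrains the yaw angle. The snippet below assumes the vanishing directions have already been extracted as unit vectors in camera coordinates, which glosses over the omnidirectional line-image machinery described above:

```python
import math

def yaw_from_vanishing_dirs(v1, v2):
    """Yaw change between two frames from the vanishing direction of a
    bundle of parallel 3-D lines (unit 3-vectors, camera coordinates).
    Only the horizontal (x, y) components matter for yaw."""
    a1 = math.atan2(v1[1], v1[0])
    a2 = math.atan2(v2[1], v2[0])
    dyaw = a2 - a1
    # wrap the result to (-pi, pi]
    return math.atan2(math.sin(dyaw), math.cos(dyaw))
```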
Kuka Control Toolbox (KCT)
The KUKA Control Toolbox (KCT) is a collection of MATLAB functions developed at the University of Siena for motion control of KUKA robot manipulators. The toolbox, which is compatible with all 6-DoF small and low-payload KUKA robots that use the Eth.RSIXML interface, runs on a remote computer connected to the KUKA controller via TCP/IP.

Planar Catadioptric Stereo Vision
Planar Catadioptric Stereo vision sensors (PCS) combine a pinhole camera with two or more planar mirrors. We propose new multi-view properties for PCS and address the image-based camera localization, mirror calibration and 3-D scene reconstruction problems.
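The basic geometry can be sketched as follows: reflecting the real camera center across a mirror plane yields the center of the corresponding virtual camera, so a single pinhole camera with two mirrors behaves like a three-camera stereo rig. A minimal example, assuming the mirror plane is given as n·x = d with unit normal n (a simplification of the full multi-view properties proposed above):

```python
import numpy as np

def virtual_camera_center(c, n, d):
    """Reflect a pinhole camera center c across the mirror plane n.x = d.
    In planar catadioptric stereo, each planar mirror induces a virtual
    camera whose center is the mirror image of the real one."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    c = np.asarray(c, float)
    return c - 2.0 * (c @ n - d) * n
```

For a camera at the origin looking at a mirror on the plane z = 1, the virtual camera sits at (0, 0, 2), as expected for a mirror image.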
Robust Uncalibrated Visual Compass
Due to their wide field of view, omnidirectional cameras are becoming ubiquitous in mobile robotic applications. A challenging problem is to use these sensors as on-board visual compasses (VCs). We present a novel VC algorithm based on new multiple-view geometric constraints valid for pairs of omnidirectional images of 3-D lines taken by a paracatadioptric camera. Our VC algorithm improves over existing approaches since it provides a closed-form estimate of the camera-robot rotation, and it requires knowledge of neither the intrinsic camera-calibration parameters nor the geometry of the 3-D scene.
Vision-based Localization and Control of Multi-Robot Formations*
We present a multi-robot (leader-follower) framework in which each robot is equipped only with a panoramic camera, addressing both the vision-based observability (and localization) problem and the formation control problem.

* In cooperation with Kostas Daniilidis and George J. Pappas (GRASP Lab, UPENN, USA)

The Epipolar Geometry Toolbox (EGT)
The Epipolar Geometry Toolbox (EGT) is a toolbox for MATLAB. Built on the interactive MATLAB environment and its advanced graphical functions, EGT provides a wide set of functions for multi-view computer vision problems with both pinhole and central catadioptric cameras. EGT also supports the design of visual servoing simulations and their use in real experiments.

Image-based Visual Servoing for Mobile Robot Using Epipolar Geometry*
We present an image-based visual servoing strategy for asymptotically driving a mobile robot equipped with a partially uncalibrated pinhole camera toward a target position. Our approach uses the epipolar geometry and does not need any knowledge of the 3-D scene geometry.
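As an illustration of the epipolar machinery involved (not the servoing law itself), the epipoles used by such a strategy are the null vectors of the fundamental matrix between the current and target views, recoverable by SVD:

```python
import numpy as np

def epipoles(F):
    """Epipoles of a fundamental matrix F (rank 2): F e = 0 gives the
    epipole in the first image, F^T e' = 0 the epipole in the second.
    Both are returned in homogeneous coordinates as the last right
    singular vectors of F and F^T."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]             # right null vector: epipole, first image
    _, _, Vt2 = np.linalg.svd(F.T)
    e2 = Vt2[-1]           # left null vector: epipole, second image
    return e, e2
```

For a pure translation along the x-axis, F is the skew-symmetric matrix of (1, 0, 0) and both epipoles lie at infinity along x.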

* In cooperation with Giuseppe Oriolo (“La Sapienza”, Rome, ITALY)

Image-based Visual Servoing for Central Catadioptric Cameras
We present an image-based visual servoing strategy for a holonomic mobile robot equipped with a central catadioptric camera. This kind of vision sensor combines lenses and mirrors to enlarge the field of view. The proposed visual servoing is mainly based on the auto-epipolar condition.