Image-Based Visual Servoing for Mobile Robots

with Central Catadioptric Camera

by G.L. Mariottini and D. Prattichizzo


Keywords:  Image-based visual servoing, Mobile robots, Epipolar geometry, Central catadioptric camera


           We present an image-based visual servoing strategy for holonomic mobile robots equipped with a central catadioptric camera. This kind of vision sensor combines lenses and mirrors (Fig. 1) to enlarge the field of view (Fig. 2).

           The goal of the servoing is to autonomously drive the robot from an initial pose to a target pose (Fig. 3(a)), specifying the control inputs exclusively in the image domain.
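The imaging geometry of a central catadioptric sensor such as the one in Fig. 1 is commonly described by the unified (sphere) projection model, in which a parabolic mirror corresponds to the parameter xi = 1. The following is a minimal illustrative sketch of that model (the function name and structure are our own, not the authors' code):

```python
import math

def catadioptric_project(X, xi=1.0):
    """Unified central catadioptric projection (Geyer-Daniilidis model).

    xi = 1.0 corresponds to a parabolic mirror with orthographic camera,
    as in the sensor of Fig. 1. Illustrative sketch only.
    """
    x, y, z = X
    r = math.sqrt(x * x + y * y + z * z)
    # Step 1: project the scene point onto the unit sphere
    # centred at the single effective viewpoint.
    xs, ys, zs = x / r, y / r, z / r
    # Step 2: perspective projection from a point at distance xi
    # above the sphere centre onto the image plane.
    denom = zs + xi
    return (xs / denom, ys / denom)
```

For example, a point on the optical axis projects to the image centre, while off-axis points are mapped with the strong radial distortion typical of omnidirectional images.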



Fig. 1 - In a (folded) central catadioptric camera, all scene points are first reflected by the parabolic mirror (PAR), then by the spherical mirror (SPH), and finally orthographically projected onto the CCD.

Fig. 2 - The image acquired with a central catadioptric camera has a very wide field of view.

The proposed visual servoing is based on the auto-epipolar property, a special configuration that occurs when the desired and the current views are related by a pure translation.

The occurrence of the auto-epipolar property can be detected by observing when a set of so-called bi-conics (Fig. 3(b)) has a common intersection (Fig. 3(d)). This observation is used to design a rotation control law that aligns the current robot orientation with the desired one (Fig. 3(c)). A feature-based translational motion then moves the robot to the target, until the current and target features match (Fig. 3(e)-(f)).
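For an ordinary perspective camera the bi-conics degenerate to straight lines, and the auto-epipolar test amounts to checking whether the lines joining each current feature to its desired counterpart meet in a single point. The sketch below implements that simplified perspective analogue (it is illustrative only, not the authors' catadioptric implementation): it fits a common intersection point in least squares and returns the residual, which is close to zero exactly when the two views differ by a pure translation.

```python
import math

def _cross(p, q):
    """Homogeneous line through two image points (cross product)."""
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def auto_epipolar_residual(cur_pts, des_pts):
    """Least-squares intersection of the lines joining current and
    desired features (perspective analogue of the bi-conic test).

    Returns ((x, y), rms); rms ~ 0 indicates the auto-epipolar
    configuration, i.e. a pure translation between the two views.
    Assumes the common intersection point (epipole) is finite.
    """
    lines = [_cross((u, v, 1.0), (ud, vd, 1.0))
             for (u, v), (ud, vd) in zip(cur_pts, des_pts)]
    # Normalize so that the algebraic residuals are comparable.
    lines = [(a / math.hypot(a, b), b / math.hypot(a, b), c / math.hypot(a, b))
             for a, b, c in lines]
    # Normal equations for minimizing sum_i (a_i x + b_i y + c_i)^2.
    Saa = sum(a * a for a, b, c in lines)
    Sab = sum(a * b for a, b, c in lines)
    Sbb = sum(b * b for a, b, c in lines)
    Sac = sum(a * c for a, b, c in lines)
    Sbc = sum(b * c for a, b, c in lines)
    det = Saa * Sbb - Sab * Sab
    x = (-Sac * Sbb + Sbc * Sab) / det
    y = (-Sbc * Saa + Sac * Sab) / det
    rms = math.sqrt(sum((a * x + b * y + c) ** 2 for a, b, c in lines)
                    / len(lines))
    return (x, y), rms
```

When the residual is small, all lines (bi-conics, in the catadioptric case) pass through a common region, which is the condition exploited by the rotation control law.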


Fig. 3(a) - START: The IBVS strategy based on epipolar geometry drives the robot from the initial to the desired position (at which the corresponding desired view is known).

Fig. 3(b) - START: Current (.) and desired (+) features are used to compute the bi-conics. Due to the initial rotational displacement, the bi-conics do not share a common intersection region.


Fig. 3(c) - FIRST STEP: The common bi-conic intersection is used to compensate the rotational displacement between the current and the desired robot pose.

Fig. 3(d) - FIRST STEP: At the end of the first step, all bi-conics intersect in a common region (the rotation is fully compensated).


Fig. 3(e) - SECOND STEP: The feature-point distance is used to finally translate the robot to the target.

Fig. 3(f) - SECOND STEP: The current feature points (.) correctly move to the desired ones (+).
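The two-step strategy illustrated in Fig. 3 can be sketched as a simple switching controller: rotate in place until the auto-epipolar (bi-conic intersection) error vanishes, then translate until the feature error vanishes. The gains, threshold, and error definitions below are illustrative assumptions, not the authors' actual control law:

```python
def servo_step(rot_error, feat_error, k_rot=0.5, k_trans=0.4, eps=1e-3):
    """One iteration of a hypothetical two-step IBVS controller.

    rot_error:  signed measure of the bi-conic misalignment (Fig. 3(b)-(d));
    feat_error: signed distance between current and desired features
                (Fig. 3(e)-(f)).
    Returns (v, omega): linear and angular velocity commands.
    Gains k_rot, k_trans and threshold eps are illustrative.
    """
    if abs(rot_error) > eps:
        # FIRST STEP: pure rotation to reach the auto-epipolar configuration.
        return 0.0, -k_rot * rot_error
    # SECOND STEP: pure translation until the features match.
    return -k_trans * feat_error, 0.0
```

The switching structure mirrors the figure sequence: the translational command stays at zero until the rotation is fully compensated.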




[1] G.L. Mariottini, E. Alunno, J. Piazzi, D. Prattichizzo, "Visual Servoing for Central Catadioptric Cameras", in Current Trends in Nonlinear Systems and Control, Birkhäuser, Boston, 2005, pp. 309-325. [pdf] [bib]

[2] G.L. Mariottini, D. Prattichizzo, A. Cerbella, "Image-Based Visual Servoing for Mobile Robots with Catadioptric Camera", accepted at the 2006 European Robotics Symposium (EUROS 2006), Palermo, Italy. [pdf] [bib]

[3] G.L. Mariottini, E. Alunno, J. Piazzi, D. Prattichizzo, "Epipole-Based Visual Servoing for Central Catadioptric Cameras", in Proc. 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 2005. [pdf] [bib]




We present here results from the experimental session:

    The experimental setup ([mp4] - ~2 MB)

    Robot motion ([mp4] - ~3 MB)

    Bi-conic image motion (no iteration mechanism) ([mp4] - ~1.66 MB)

    Bi-conic image motion (with iteration mechanism) ([mp4] - ~1.8 MB)

    Image feature tracker (no iteration mechanism) ([mp4] - ~2.13 MB)



       We wish to thank Andrea Cerbella for his invaluable help in the realization of the experimental results.




 SIRS Lab - Via Tommaso Pendola, SIENA 53045 ITALY  tel: +39 0577 233568