Localization of Multiple Robot Systems

Perception and estimation of the state of a robotic platform and of the surrounding world are the first, foundational building blocks of its autonomy.
In particular, mutual localization among the different robots, i.e., the estimation of the time-varying transformations between the frames attached to the main body of each robot, is a fundamental prerequisite for any complex cooperative motion-control strategy.
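The transformation in question can be illustrated with a minimal sketch. Assuming planar SE(2) poses `(x, y, theta)` in a common world frame (the function name and interface are illustrative, not from the work described here), the pose of robot j in the body frame of robot i is:

```python
import math

def relative_pose(pose_i, pose_j):
    """Pose of robot j expressed in the body frame of robot i.

    Each pose is (x, y, theta) in a shared world frame; the returned
    tuple is the relative transformation that mutual localization
    estimates, here computed from known world poses for illustration.
    """
    xi, yi, thi = pose_i
    xj, yj, thj = pose_j
    # Rotate the world-frame displacement into robot i's body frame
    # (multiplication by the transpose of i's rotation matrix).
    c, s = math.cos(thi), math.sin(thi)
    dx, dy = xj - xi, yj - yi
    return (c * dx + s * dy, -s * dx + c * dy, thj - thi)

# Robot i at the origin facing +x; robot j one metre ahead of it.
print(relative_pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```

In a real system the world poses are unknown; the estimator must recover this relative transformation directly from onboard measurements.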

Among the main challenges of mutual localization are:

maintaining observability (i.e., the solvability of the estimation problem) despite the presence of heterogeneous sensors, e.g., partly distance-based, partly vision-based, plus altitude, velocity, and acceleration measurements;

coping with the limited range and field of view of onboard sensors;

coping with the presence of outliers;

ensuring an algorithmic complexity that allows fast online processing and in-the-loop control.

Mutual Localization with Anonymous Relative Measurements

Four snapshots of an experiment with eight quadrotor UAVs, where the IMU and the relative bearings are used as sensory information.

We have formulated and investigated a novel problem called mutual localization with anonymous relative measurements. This is an extension of mutual localization with relative measurements, with the additional assumption that the identities of the measured robots are not known. For certain configurations of the multi-robot system, the anonymity assumption causes a combinatorial ambiguity in the inversion of the measurement equation, resulting in the existence of multiple solutions. We have developed a two-phase filter for solving the localization problem:
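The combinatorial ambiguity can be made concrete with a small sketch (this is illustrative code, not the two-phase filter itself, and all names are assumptions): when bearing measurements carry no identities, every permutation assigning measurements to robots must be checked against a hypothesized configuration, and symmetric configurations admit more than one consistent assignment.

```python
import math
from itertools import permutations

def consistent_assignments(observer, anon_bearings, candidates, tol=1e-6):
    """List the identity assignments that explain anonymous bearings.

    observer      -- (x, y) position of the measuring robot
    anon_bearings -- unlabeled bearing angles it measured (radians)
    candidates    -- hypothesized (x, y) positions of the other robots

    Without identities, each permutation pairing bearings with
    candidates is a possible inversion of the measurement equation.
    """
    ox, oy = observer

    def bearing_to(p):
        return math.atan2(p[1] - oy, p[0] - ox)

    matches = []
    for perm in permutations(candidates):
        # Compare angles modulo 2*pi to handle wrap-around.
        if all(abs(math.remainder(bearing_to(p) - b, math.tau)) < tol
               for b, p in zip(anon_bearings, perm)):
            matches.append(perm)
    return matches

# Two robots on the same ray from the observer: both permutations fit,
# so the anonymous measurements alone cannot disambiguate identities.
print(len(consistent_assignments((0, 0), [0.0, 0.0], [(1, 0), (2, 0)])))
```

With distinct bearings only one assignment survives, but degenerate geometries like the one above yield multiple solutions, which is why a filtering stage over the candidate hypotheses is needed.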