Fusion of Distributed Perception in Collaborative Visual-SLAM Approaches on Mini-UAVs


Miniaturization of flying platforms limits their ability to carry large sensing units, which are often necessary for a wider field of reconstruction and an extended range of operation. Instead of relying on a single platform to explore an environment, the perception of multiple agents of a swarm can be combined. This allows for a better spatial resolution of the sensing, e.g., by combining the views of cameras mounted on different agents, and for a better coverage of the space through simultaneous exploration of different regions, whose partial models must be fused into a consistent one.

In my talk, I will present our current scalable system architecture for Mini-UAVs. I will discuss the design challenges and solutions for problems arising in distributed perception, including sensor synchronization between the agents of a swarm, i.e., estimating the acquisition times of data captured on different platforms. I will introduce our extension of stereo processing that enables 3D reconstruction from distributed, asynchronous cameras in dynamic environments. The presented fusion architecture handles data arriving with varying latency from different agents. Finally, I will present a method for registering the agents to each other to obtain their extrinsic calibration, and show our hybrid representation of the world consisting of static and dynamic scene elements.
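To make the synchronization and latency problems concrete, the following is a minimal sketch, not the architecture from the talk: it estimates a per-agent clock offset with an NTP-style two-way timestamp exchange and uses it to reorder incoming camera frames by their corrected acquisition time, despite varying transmission latency. All names (`Frame`, `DistributedFrameBuffer`, `max_latency`) are hypothetical illustrations.

<pre>
import heapq
from dataclasses import dataclass

@dataclass
class Frame:
    agent_id: str
    local_stamp: float   # acquisition time in the sender's clock [s]
    payload: bytes = b""

class DistributedFrameBuffer:
    """Orders frames from multiple agents by corrected acquisition time.

    Clock offsets are estimated with an NTP-style two-way exchange:
    the receiver records the send/receive times (t0, t3) of a ping,
    and the remote agent reports its receive/send times (t1, t2).
    """

    def __init__(self):
        self.offsets = {}   # agent_id -> estimated clock offset [s]
        self._heap = []     # min-heap keyed by corrected timestamp
        self._seq = 0       # tie-breaker for equal timestamps

    def update_offset(self, agent_id, t0, t1, t2, t3):
        # Standard NTP offset estimate (remote clock minus local clock),
        # assuming symmetric network delay in both directions.
        self.offsets[agent_id] = ((t1 - t0) + (t2 - t3)) / 2.0

    def push(self, frame: Frame):
        # Map the sender's local acquisition time into the receiver's
        # clock; frames may arrive out of order due to varying latency.
        corrected = frame.local_stamp - self.offsets.get(frame.agent_id, 0.0)
        heapq.heappush(self._heap, (corrected, self._seq, frame))
        self._seq += 1

    def pop_ready(self, now, max_latency=0.2):
        """Release frames older than `now - max_latency`, in time order.

        Waiting out the worst expected latency lets late frames slot
        into place before the fusion stage consumes the stream.
        """
        ready = []
        while self._heap and self._heap[0][0] <= now - max_latency:
            stamp, _, frame = heapq.heappop(self._heap)
            ready.append((stamp, frame))
        return ready
</pre>

In a real system the offsets would be refined continuously and clock drift compensated; the fixed `max_latency` window here merely stands in for a proper latency model of the fusion architecture.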