
Micro-Aerial Vehicles (MAVs) for Search and Rescue Applications


In this video we demonstrate recent work in visual navigation and mapping for Micro Aerial Vehicles in the context of search and rescue. This summer, our research team had access to a search and rescue training area to test our robots. The area, located at Wangen an der Aare in Switzerland, features many simulated earthquake-damaged buildings, as well as indoor and outdoor areas.

We use a custom micro-aerial vehicle to demonstrate visual navigation algorithms. The robot has a pair of cameras and an inertial measurement unit, a depth camera, and a thermal camera to detect potential victims for search and rescue.

The first stage in our pipeline is to create an initial map of the environment from an inspection flight. Here we show a reconstruction resulting from a single flight through both indoor and outdoor spaces, built in real time. In this segment the system's underlying representation of the world is shown: the world is represented as a collection of smaller submaps. The positions and orientations of these submaps are adjusted as the system acquires more information about its environment.
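The video doesn't name a specific library, but the submap idea can be sketched in a few lines: each submap stores geometry in its own frame, and global consistency is restored by re-posing whole submaps rather than rebuilding the map. Everything below (class names, the point-cloud representation) is illustrative, not the actual system.

```python
import numpy as np

class Submap:
    """A small local map; points are stored in the submap's own frame."""
    def __init__(self, pose, points):
        self.pose = pose      # 4x4 world-from-submap transform
        self.points = points  # Nx3 array of points in the submap frame

    def points_in_world(self):
        # Apply the current pose estimate to the local geometry.
        R, t = self.pose[:3, :3], self.pose[:3, 3]
        return self.points @ R.T + t

class SubmapCollection:
    """World model as a set of submaps; only poses change on correction."""
    def __init__(self):
        self.submaps = []

    def add(self, submap):
        self.submaps.append(submap)

    def apply_pose_corrections(self, deltas):
        # After new information arrives (e.g. a loop closure), shift each
        # submap instead of rebuilding the global map from scratch.
        for submap, delta in zip(self.submaps, deltas):
            submap.pose = delta @ submap.pose

    def global_cloud(self):
        return np.vstack([s.points_in_world() for s in self.submaps])
```

This is the main payoff of submaps: a correction touches a handful of 4x4 poses instead of every point in the reconstruction.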
To facilitate detection of injured people, we use the MAV's thermal camera to color the reconstruction by each area's heat signature. Closer inspection of the generally cool basement shows an anomalous heat source, prompting further investigation. Thermal imagery reveals this heat source to be a simulated victim.
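The video doesn't detail how the coloring works; a standard approach is to project the reconstruction's points into the thermal image using the calibrated camera model and attach the temperature at each pixel. The sketch below assumes a pinhole thermal camera with known intrinsics and extrinsics, and ignores occlusion for brevity.

```python
import numpy as np

def colorize_by_heat(points_world, T_cam_world, K, thermal_image):
    """Attach a thermal reading to each 3D point visible in one frame.

    points_world:  Nx3 reconstruction points.
    T_cam_world:   4x4 world-to-thermal-camera transform (from calibration).
    K:             3x3 thermal camera intrinsics.
    thermal_image: HxW array of per-pixel temperature values.
    """
    R, t = T_cam_world[:3, :3], T_cam_world[:3, 3]
    p_cam = points_world @ R.T + t
    in_front = p_cam[:, 2] > 0.1            # drop points behind the camera

    uv = p_cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    h, w = thermal_image.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    heat = np.full(len(points_world), np.nan)   # NaN = never observed
    heat[visible] = thermal_image[v[visible], u[visible]]
    return heat
```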
For path planning, we build a second map using the stereo camera pair, which has a much larger field of view. From this map, we calculate traversability information: marked in red are all the points where the robot has enough space to pass.
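The transcript doesn't say how "enough space" is computed; one common formulation is a clearance test, where a point counts as traversable if the nearest obstacle is farther away than the robot's bounding radius. A brute-force sketch follows (a real system would precompute a Euclidean distance field instead, and the 0.5 m radius here is made up):

```python
import numpy as np

def traversable_mask(candidates, obstacles, robot_radius=0.5):
    """Mark candidate points where the robot has room to pass.

    candidates:   Nx3 points to classify (shown in red when True).
    obstacles:    Mx3 occupied points from the map.
    robot_radius: hypothetical bounding radius of the MAV, in meters.
    """
    mask = np.empty(len(candidates), dtype=bool)
    for i, p in enumerate(candidates):
        # Distance from this point to the closest obstacle.
        clearance = np.min(np.linalg.norm(obstacles - p, axis=1))
        mask[i] = clearance > robot_radius
    return mask
```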
To navigate in previously mapped environments, the MAV needs to be able to locate itself within these reconstructions. We localize by creating a sparse map, which contains only distinctive 3D landmarks. While the robot flies, it recognizes these landmarks and corrects its estimate of its position in the environment. Here you can see the robot's estimated pose, and the output of its depth sensor, overlaid on the previous 3D reconstruction. Its pose is periodically corrected by the localization procedure.
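The video doesn't specify the solver, but a typical way to turn landmark recognitions into a pose correction is RANSAC PnP: given 2D detections matched to 3D map landmarks, solve for the camera pose. A sketch using OpenCV, assuming descriptor matching has already produced the correspondences:

```python
import numpy as np
import cv2

def localize_against_sparse_map(landmarks_3d, keypoints_2d, K):
    """Estimate the camera pose from 2D-3D landmark matches.

    landmarks_3d: Nx3 sparse-map landmarks matched in the current image.
    keypoints_2d: Nx2 pixel locations of those matches.
    K:            3x3 camera intrinsics.
    Returns a 4x4 world-to-camera transform, or None if localization fails.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        landmarks_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        K.astype(np.float64), distCoeffs=None)
    if not ok or inliers is None or len(inliers) < 10:  # threshold is arbitrary
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation vector -> matrix
    T[:3, 3] = tvec.ravel()
    return T
```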
Finally, we combine all the components we've introduced into a single autonomous system capable of planning, localizing, and navigating in previously seen environments. Here we perform an autonomous mission to return to the area where a person was previously seen. By localizing against the previously built map and path-planning within traversable areas, the MAV navigates autonomously in this narrow space. At the end of the mission, the person is clearly visible in the thermal camera.
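At a high level, that integration is a localize-plan-act loop. The skeleton below is purely illustrative: every callback stands in for one of the subsystems described above, and none of the names come from the actual system.

```python
def run_mission(at_goal, localize, apply_correction, plan_path, fly):
    """Illustrative autonomous-mission loop; all callbacks are hypothetical.

    at_goal:          () -> bool, True once the target area is reached.
    localize:         () -> pose correction or None (e.g. PnP vs. sparse map).
    apply_correction: folds a correction into the onboard state estimate.
    plan_path:        () -> next path segment through traversable space.
    fly:              executes a path segment.
    """
    while not at_goal():
        correction = localize()
        if correction is not None:
            apply_correction(correction)  # periodic drift correction
        fly(plan_path())                  # re-plan, then fly the next segment
```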
As part of a public event, we also got to demonstrate our system to the general public through live demos throughout the day.

Glenn Chapman

2 Comments

  1. Outstanding. Also, thanks for making most of your work open-source.

  2. Why does the robot split the maps up into multiple sections? Is it a memory thing?
