IEEE TRANSACTIONS ON ROBOTICS, VOL. 28, NO. 1, FEBRUARY 2012
the altitude of the vehicle and the elevation of the underlying surface. We furthermore provide quantitative results of our simultaneous localization and mapping (SLAM) approach and discuss the effect of different modes of the incremental scan-matching on the pose stability of the robot. We also describe our algorithms for path planning and obstacle avoidance and provide additional details and experiments.

Our system is the result of an integrated hardware/software design that meets several of the challenging constraints imposed by small-sized flying vehicles while preserving a large degree of flexibility. It can furthermore be operated at different levels of autonomy. It can be used to assist a pilot by preventing collisions with obstacles and by keeping the position of the vehicle when no commands are given. It can construct a map on-line while flying in an unknown environment, or it can be instructed to autonomously reach given locations in a known map. We evaluated our system on an open source quadrotor, the so-called Mikrokopter [3]. Fig. 1 visualizes our quadrotor system and its internal state while autonomously flying within a highly cluttered office room.

I. RELATED WORK

In the last decade, flying platforms have received increasing attention from the research community. Many authors have focused on the modeling and control of these vehicles [8], [11], [25], [29], with a particular emphasis on small or micro helicopters [10]. Hoffmann et al. [19] presented a model-based algorithm for autonomous flying with their STARMAC quadrotor. Their system flies outdoors and utilizes GPS and inertial measurement unit (IMU) measurements. Ng and colleagues [14] developed algorithms for learning controllers for autonomous helicopter navigation. Their approach allows helicopters to perform impressive maneuvers in outdoor environments. Scherer et al. [28] describe algorithms for flying fast among obstacles at low altitude using a laser scanner. Templeton et al. [30] demonstrate how to use vision for outdoor terrain mapping and autonomous landing. Tournier et al. [33] and Bourquardez et al. [12] used vision to estimate and stabilize the current pose of a quadrotor. Thrun et al. [32] used a remotely controlled helicopter to learn large-scale outdoor 3-D models.

There has also been some work that addressed the navigation of flying vehicles in indoor environments in the absence of GPS signals. Several authors used vision to control or assist the control of an indoor quadrotor [7], [20], [21]. Roberts et al. [26] used ultrasound sensors for controlling a flying vehicle in a structured testing environment, while He et al. [18] presented a system for navigating a small-sized quadrotor without GPS. Here, the pose of the vehicle is estimated by an unscented Kalman filter. Whenever the robot has to reach a given location, a path that ensures a good observation density is computed from a predefined map. These highly dense observations minimize the risk of localization failures. In parallel to our work, Achtelika et al. [6] developed an indoor autonomous quadrotor equipped with a laser range scanner and cameras enabling autonomous hovering in a constrained indoor environment. Recently, Celik et al. [13] presented their system for indoor SLAM using a monocular camera and ultrasound. Our paper is orthogonal to a recent work of Bachrach et al. [9], in which the authors present a system for performing autonomous exploration and map acquisition in indoor environments. They extend the 2-D robot navigation toolkit CARMEN [27] by adding a Rao-Blackwellized particle filter for SLAM and an algorithm for frontier-based autonomous exploration. However, they do not provide localization, map optimization, obstacle avoidance, or multilevel SLAM. Furthermore, we utilize a more robust graph-based SLAM algorithm in our system that allows for map optimization, and we present our algorithm for estimating the altitude of the surface underlying the robot. This enables a quadrotor equipped with our system to fly over surfaces whose height is piecewise constant.

II. INDOOR NAVIGATION OF AN AUTONOMOUS FLYING QUADROTOR

To autonomously reach a desired location, a mobile robot has to be able to determine a collision-free path connecting the starting and goal locations. This task is known as path planning and requires a map of the environment to be known. Usually, this map has to be acquired by the robot itself by processing the sensor measurements obtained during an exploration mission. This task of map generation is known as simultaneous localization and mapping (SLAM). For most applications, it is sufficient to perform SLAM off-line on a recorded sequence of measurements. To follow the path with sufficient accuracy, the robot needs to be aware of its position in the environment at any point in time. This task is known as localization. A further fundamental component of a navigation system is the control module, which aims to move the vehicle along the trajectory, given the pose estimated by the localization.

Because of the increased risk of damaging the flying platform during testing, the user should have the possibility of taking over control of the platform at any point in time. Finally, the more complex dynamics of a flying platform pose substantially higher requirements on the accuracy of the state estimation process than for typical ground-based vehicles. Although in outdoor scenarios positioning errors of up to 1 m might be acceptable, they are not indoors, as the free space around the robot is substantially more confined.

III. HARDWARE ARCHITECTURE

Fig. 2 shows a Mikrokopter [3] open source quadrotor equipped with sensors and computational devices. The Mikrokopter comes with a low-level controller for roll, pitch, and yaw. Our quadrotor is similar to the one proposed by He et al. [18] and consists of the following components: (1) a Hokuyo URG miniature laser sensor for SLAM and obstacle avoidance, (2) an XSens MTi-G microelectromechanical systems (MEMS) IMU for estimating the attitude of the vehicle, (3) a Linux-based Gumstix embedded PC with USB interfaces and a WiFi network card that communicates with the microcontroller on the quadrotor via an RS-232 interface, and (4) a mirror that is used to deflect some of the laser beams along the z-direction to measure the distance to the ground.
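The mirror of component (4) effectively turns part of each scan into an altimeter: the deflected beams return the distance to the underlying surface rather than to surrounding obstacles. A minimal sketch of how such a scan might be split is given below; the beam indices, the median-based altitude estimate, and the function name are illustrative assumptions, not the actual driver interface.

```python
def split_scan(ranges, mirrored_idx):
    """Separate a raw laser scan into horizontal beams and mirrored beams.

    ranges       -- list of range readings from the laser scanner (meters)
    mirrored_idx -- set of beam indices deflected toward the ground (assumed)
    """
    horizontal = [r for i, r in enumerate(ranges) if i not in mirrored_idx]
    # The deflected beams measure the distance to the underlying surface;
    # a robust statistic (the median) reduces the effect of spurious returns.
    down = sorted(r for i, r in enumerate(ranges) if i in mirrored_idx)
    altitude = down[len(down) // 2] if down else None
    return horizontal, altitude
```

The horizontal beams feed scan-matching and obstacle avoidance, while the altitude estimate feeds the height controller.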
Fig. 3. Example of level joining during the estimation of the altitude of the vehicle and of the elevation of the underlying surfaces. Each level is represented as a set of contiguous cells in the 2-D grid that share the same elevation. (a) The robot starts exploring an office environment. Initially, it recognizes two levels (Level 1 and Level 2), corresponding to a chair and a table. (b) Subsequently, it flies away from the table, turns back, and flies over a different region of the same table. This results in the creation of the new Level 3. (c) The robot keeps hovering over the table until it approaches the extent of Level 2, which has the same elevation as Level 3, since both originate from the same table. (d) Finally, the robot enters Level 2 from Level 3. Our system recognizes these two levels to have the same elevation. Accordingly, it merges them and updates the common elevation estimate.
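The level bookkeeping illustrated in Fig. 3 can be sketched as follows: a level is a set of grid cells with a common elevation estimate, and two levels whose elevations agree within a threshold are merged when the robot crosses between them. The class layout, the threshold value, and the weighted-average update rule below are illustrative assumptions, not the actual implementation.

```python
class Level:
    def __init__(self, cells, elevation, weight=1.0):
        self.cells = set(cells)     # contiguous cells of the 2-D grid
        self.elevation = elevation  # estimated elevation of this level (m)
        self.weight = weight        # number of supporting measurements

def maybe_merge(a, b, threshold=0.05):
    """Merge two levels if their elevation estimates agree within threshold.

    Returns the merged level, or None if the elevations differ too much.
    """
    if abs(a.elevation - b.elevation) > threshold:
        return None
    w = a.weight + b.weight
    # Weighted average of the two elevation estimates (an assumed update rule).
    elev = (a.elevation * a.weight + b.elevation * b.weight) / w
    return Level(a.cells | b.cells, elev, w)
```

In the Fig. 3 scenario, Level 2 and Level 3 would be merged as soon as the robot transitions between them, since both estimates stem from the same table surface.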
we estimate the current height of the robot given the known levels in the multilevel map. In a second step, we update the map given the estimated altitude of the robot. Here, a level is constantly re-estimated whenever the vehicle enters or leaves this specific level, and the data association is resolved by the known (x, y, θ) position of the vehicle. Finally, measurements not explained by any level present in the map are assumed to be generated by new levels, which are then included in the map.

D. High-Level Control for Pose and Altitude

The high-level control algorithm is used to keep the vehicle in the current position. The outputs of the control algorithm are variations in roll, pitch, yaw, and thrust, which are denoted, respectively, as u_φ, u_θ, u_ψ, and u_z. The inputs are the position and velocity estimates coming from incremental scan-matching. A variation of the roll translates into a motion along the y-axis, a variation of the pitch results in a motion along the x-axis, and a variation of the thrust results in a change in the vertical velocity. We separately control the individual variables via proportional-integral-derivative (PID) or proportional-derivative (PD) controllers. Since in our case all control commands depend on the current pose estimate, our high-level control module runs at 10 Hz, since the laser scanner provides measurements at exactly this rate.

Note that the Mikrokopter (and most commercially available platforms) comes with low-level controllers for roll, pitch, and yaw; thus, we do not have to take care of the control of the individual motors, but only of the commands resulting in a desired angle. In our particular case, the low-level controller of the Mikrokopter quadrotor runs at 500 Hz. Since yaw commands on common platforms determine how fast the quadrotor should turn and not how far, this parameter reflects the user's desired aggressiveness with respect to the yaw rotation. In contrast, commands for roll and pitch result in a desired angle, for which independent mapping functions must be learned. To learn the mapping for our quadrotor, we fixed one axis of the vehicle to an external frame, allowing the vehicle to rotate only about the other axis. We learned the mapping function by monitoring the current angle measured by the IMU and comparing it with the sent command. Our test bench for learning this mapping is shown in Fig. 4.

The calculated commands are sent directly to the microcontroller via RS-232, which is in charge of the low-level control (roll,
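A minimal discrete PID loop of the kind used for the individual control variables might look as follows; the gains, the saturation limit, and the 10 Hz period are illustrative assumptions, not the tuned values used on the vehicle. A PD controller is simply the special case ki = 0.

```python
class PID:
    def __init__(self, kp, ki, kd, dt, limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt            # control period, e.g., 0.1 s at 10 Hz
        self.limit = limit      # saturation of the output command
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        """One control update: returns the command u for the current error."""
        self.integral += error * self.dt
        # No derivative contribution on the very first update.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Clamp the command to the range the low-level controller accepts.
        return max(-self.limit, min(self.limit, u))
```

One instance per variable would track the y error for u_φ, the x error for u_θ, and the altitude error for u_z, each fed by the scan-matching estimates at 10 Hz.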
Fig. 6. Map of an office building built with our approach using the quadrotor. The labels 1–4 reflect the locations of individual markers that are used to evaluate the accuracy of our mapping approach. Red triangles indicate the poses of the corresponding camera images. The clutter at the bottom of the map originates from the seating, which contains horizontal slots (see bottom-right image).
which the particles were sampled uniformly over the free space.
After approximately 1 m of flight (middle image), the particles
start to focus around the true pose of the vehicle. After ap-
proximately 5 m of flight, the quadrotor was globally localized
(bottom image). The blue circle indicates the current estimate
of the filter.
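The global localization behavior shown here follows the standard Monte Carlo localization scheme in the spirit of [15]. A toy one-dimensional sketch of one propagate-weight-resample cycle is given below; the corridor model, the noise parameters, and the Gaussian weighting are illustrative assumptions.

```python
import math
import random

def mcl_step(particles, motion, measurement, expected, noise=0.2):
    """One Monte Carlo localization cycle on a 1-D corridor (toy example).

    particles   -- list of hypothesized robot positions (m)
    motion      -- odometry displacement since the last update (m)
    measurement -- observed range to a known wall (m)
    expected    -- function mapping a position to the predicted range
    """
    # 1) Propagate each particle by the odometry, adding sampling noise.
    moved = [p + motion + random.gauss(0.0, 0.05) for p in particles]
    # 2) Weight each particle by how well it explains the range measurement.
    weights = [math.exp(-((measurement - expected(p)) ** 2) / (2 * noise ** 2))
               for p in moved]
    # 3) Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting from particles sampled uniformly over the free space, repeated cycles concentrate the set around the true pose, mirroring the convergence after a few meters of flight reported above.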
TABLE II
ESTIMATED AND MANUALLY MEASURED LOCATIONS OF THE MARKERS FOR THE FLIGHT CONTAINING FOUR LOOPS IN TOTAL

TABLE III
COMPARISON OF SINGLE LOOPS FOR DIFFERENT GRID RESOLUTIONS
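The path-planning and obstacle-avoidance behavior evaluated in these experiments can be imitated with a small grid planner: obstacles are inflated by a safety margin, and whenever no valid plan to the goal exists, the vehicle simply hovers at the last valid waypoint. The breadth-first search and all parameters below are illustrative stand-ins for the planner actually used on the robot.

```python
from collections import deque

def plan(grid, start, goal, margin=1):
    """Breadth-first grid planner with obstacle inflation (illustrative).

    grid[y][x] == 1 marks an obstacle; cells within `margin` of an obstacle
    are treated as blocked (the safety margin). Returns a list of cells from
    start to goal, or None if no valid plan exists.
    """
    h, w = len(grid), len(grid[0])
    def blocked(x, y):
        for dy in range(-margin, margin + 1):
            for dx in range(-margin, margin + 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx]:
                    return True
        return False
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in parent \
                    and not blocked(nx, ny):
                parent[(nx, ny)] = cur
                frontier.append((nx, ny))
    return None  # no valid plan: hover at the last valid waypoint
```

Note how the inflation reproduces the situation described in the experiment: a person in a narrow corridor leaves no passage wider than the margin, so the planner returns None even though the raw free space looks passable.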
buffer. A typical run including autonomous pose stabilization is shown in Fig. 8(b). Here, the quadrotor was set to keep the initial pose of (0, 0), and once in a while, the user used the remote control to move the quadrotor around 1 m backwards. The quadrotor then autonomously moved back to the desired position. Depending on the latency in the system, the pose oscillations are typically around 20 cm.

E. Path Planning and Obstacle Avoidance

In this section, we present an experiment that demonstrates our algorithms for path planning and dynamic obstacle avoidance. The quadrotor was given a goal point approximately 5 m in front of it. A person standing on the left (see the shaded area in Fig. 9) entered the corridor while the quadrotor was moving toward its desired goal. The second image shows the situation when the person is completely blocking the robot's path. In this case, the quadrotor hovered around the last valid waypoint, since there was no longer a valid plan to the goal. When the person moved to the left again, the quadrotor was able to follow a detour, as shown in the right image of Fig. 9. Note that the snapshots show the endpoints of the laser only. Although it looks like the quadrotor might have had the space to fly around the person in the second image, there was no valid plan because of the safety margins around the walls.

Fig. 9. Experiment for path planning and dynamic obstacle avoidance. The quadrotor is given a goal point 5 m in front of it. The planned path is shown in the left image. A person enters the corridor (shaded area) and blocks the robot's path, which results in an invalid plan. The quadrotor therefore hovers around the last valid waypoint (second image). In the third image, the person has moved back, leaving the quadrotor enough space for a detour.

REFERENCES

[1] CARMEN. (Oct. 8, 2011). [Online]. Available: http://carmen.sourceforge.net/
[2] ROS: Robot Open Source. (Oct. 8, 2011). [Online]. Available: http://www.ros.org
[3] Mikrokopter. (Oct. 8, 2011). [Online]. Available: http://www.mikrokopter.de/
[4] OpenSLAM: Open Source Navigation Software Repository. (Oct. 8, 2011). [Online]. Available: http://www.openslam.org
[5] (Oct. 8, 2011). [Online]. Available: http://ais.informatik.uni-freiburg.de/projects/quadrotor/
[6] M. Achtelika, A. Bachrach, R. He, S. Prentice, and N. Roy, "Autonomous navigation and exploration of a quadrotor helicopter in GPS-denied indoor environments," in Proc. Robot.: Sci. Syst., 2008.
[7] S. Ahrens, D. Levine, G. Andrews, and J. P. How, "Vision-based guidance and control of a hovering vehicle in unknown, GPS-denied environments," in Proc. IEEE Int. Conf. Robot. Autom., 2009, pp. 2643–2648.
[8] E. Altug, J. P. Ostrowski, and R. Mahony, "Control of a quadrotor helicopter using visual feedback," in Proc. IEEE Int. Conf. Robot. Autom., 2002.
[9] A. Bachrach, R. He, and N. Roy, "Autonomous flight in unknown indoor environments," Int. J. Micro Air Veh., vol. 1, no. 4, 2009.
[10] S. Bouabdallah, C. Bermes, S. Grzonka, C. Gimkiewicz, A. Brenzikofer, R. Hahn, D. Schafroth, G. Grisetti, W. Burgard, and R. Siegwart, "Towards palm-size autonomous helicopters," presented at the Int. Conf. Exhib. Unmanned Aerial Veh., Dubai, UAE, 2010.
[11] S. Bouabdallah and R. Siegwart, "Full control of a quadrotor," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2007.
[12] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, and L. Eck, "Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle," IEEE Trans. Robot., vol. 25, no. 3, pp. 743–749, Jun. 2009.
[13] K. Celik, S. J. Chung, M. Clausman, and A. K. Somani, "Monocular vision SLAM for indoor aerial vehicles," in Proc. IEEE Intell. Robots Syst., 2009, pp. 1566–1573.
[14] A. Coates, P. Abbeel, and A. Y. Ng, "Learning for control from multiple demonstrations," in Proc. Int. Conf. Mach. Learning, 2008.
[15] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo localization for mobile robots," presented at the IEEE Int. Conf. Robot. Autom., Leuven, Belgium, 1998.
[16] G. Grisetti, C. Stachniss, and W. Burgard, "Non-linear constraint network optimization for efficient map learning," IEEE Trans. Intell. Transp. Syst., vol. 10, pp. 428–439, Sep. 2009.
[17] S. Grzonka, G. Grisetti, and W. Burgard, "Towards a navigation system for autonomous indoor flying," in Proc. IEEE Int. Conf. Robot. Autom., Kobe, Japan, 2009.
[18] R. He, S. Prentice, and N. Roy, "Planning in information space for a quadrotor helicopter in a GPS-denied environment," in Proc. IEEE Int. Conf. Robot. Autom., 2008.
[19] G. Hoffmann, D. G. Rajnarayan, S. L. Waslander, D. Dostal, J. S. Jang, and C. J. Tomlin, "The Stanford testbed of autonomous rotorcraft for multiagent control," in Proc. 23rd Digital Avion. Syst. Conf., 2004, p. 2.
[20] N. G. Johnson, "Vision-assisted control of a hovering air vehicle in an indoor setting," Master's thesis, Dept. Mech. Eng., Brigham Young Univ., Provo, UT, 2008.
[21] C. Kemp, "Visual control of a miniature quad-rotor helicopter," Ph.D. dissertation, Churchill College, Univ. Cambridge, Cambridge, U.K., 2005.
[22] S. Koenig and M. Likhachev, "Fast replanning for navigation in unknown terrain," IEEE Trans. Robot., vol. 21, no. 3, pp. 354–363, Jun. 2005.
[23] J.-C. Latombe, Robot Motion Planning. Norwell, MA: Kluwer, 1991, pp. 295–356.
[24] E. Olson, "Real-time correlative scan matching," in Proc. IEEE Int. Conf. Robot. Autom., Kobe, Japan, Jun. 2009, pp. 4387–4393.
[25] P. Pounds, R. Mahony, and P. Corke, "Modelling and control of a quad-rotor robot," in Proc. Australian Conf. Robot. Autom., 2006.
[26] J. F. Roberts, T. Stirling, J. C. Zufferey, and D. Floreano, "Quadrotor using minimal sensing for autonomous indoor flight," in Proc. Eur. Micro Air Veh. Conf. Flight Competition, 2007.
[27] N. Roy, M. Montemerlo, and S. Thrun, "Perspectives on standardization in mobile robot programming," presented at the IEEE/RSJ Int. Conf. Intell. Robots Syst., Las Vegas, NV, 2003.
[28] S. Scherer, S. Singh, L. Chamberlain, and M. Elgersma, "Flying fast and low among obstacles: Methodology and experiments," Int. J. Robot. Res., vol. 27, no. 5, p. 549, 2008.
[29] A. Tayebi and S. McGilvray, "Attitude stabilization of a VTOL quadrotor aircraft," IEEE Trans. Control Syst. Technol., vol. 14, no. 3, pp. 562–571, May 2006.
[30] T. Templeton, D. H. Shim, C. Geyer, and S. S. Sastry, "Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft," in Proc. IEEE Int. Conf. Robot. Autom., 2007, pp. 1349–1356.
[31] S. Thrun, W. Burgard, and D. Fox, "Robot perception," in Probabilistic Robotics. Cambridge, MA: MIT Press, 2005, pp. 171–172.
[32] S. Thrun, M. Diel, and D. Hähnel, "Scan alignment and 3-D surface modeling with a helicopter platform," in Field Serv. Robot. (Springer Tracts Adv. Robot.), vol. 24, 2006, pp. 287–297.
[33] G. P. Tournier, M. Valenti, J. P. How, and E. Feron, "Estimation and control of a quadrotor vehicle using monocular vision and moiré patterns," in Proc. AIAA Guid., Navigat. Control Conf. Exhibit., 2006, pp. 21–24.

Slawomir Grzonka received the Master's degree in computer science from the University of Freiburg, Freiburg, Germany, in June 2006. He is currently working toward the Ph.D. degree at the Autonomous Intelligent Systems Lab, University of Freiburg.
His research interests include unmanned aerial vehicles, simultaneous localization and mapping, and human activity recognition.
Mr. Grzonka received best paper awards for his work on autonomous quadrotor systems from the International Conference on Robotics and Automation in 2009 and from the International Conference and Exhibition on Unmanned Aerial Vehicles in 2010.

Giorgio Grisetti received the Ph.D. degree from the University of Rome "La Sapienza," Rome, Italy, in April 2006.
He is an Assistant Professor with the Department of Systems and Computer Science, University of Rome, and a member of the Autonomous Intelligent Systems Lab, University of Freiburg, Freiburg, Germany. From 2006 to 2010, he was a Postdoctoral Researcher with the Autonomous Intelligent Systems Lab, University of Freiburg. His research interests include mobile robot navigation, simultaneous localization and mapping, 3-D mapping, and path planning.

Wolfram Burgard received the Ph.D. degree in computer science from the University of Bonn, Bonn, Germany, in 1991.
He is a Full Professor of computer science with the University of Freiburg, Freiburg, Germany, where he heads the Laboratory for Autonomous Intelligent Systems. His research interests include artificial intelligence and mobile robots. In the past, with his group, he developed several innovative probabilistic techniques for robot navigation and control. They cover different aspects such as localization, map-building, path-planning, and exploration.
Dr. Burgard has received several best paper awards from outstanding national and international conferences. In 2009, he received the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. Recently, he received an Advanced Grant from the European Research Council.