
Proceedings of 2016 IEEE Chinese Guidance, Navigation and Control Conference August 12-14, 2016 Nanjing, China

Landing system for AR.Drone 2.0 using onboard camera and ROS
Tianqu Zhao, Hong Jiang

Abstract—As the application of unmanned aerial vehicles (UAVs) becomes more widespread, the demand for precise autonomous landing is rising quickly. The AR.Drone 2.0 is a high-tech, low-cost flying toy that can take on the role of a UAV. In this paper, we present a system that enables the drone to land autonomously. The image flow from the drone's camera is obtained and rectified using camera calibration. The image frames are then processed by a marker recognition package to detect the AR marker that is used as a landing pad, and to calculate the position and orientation relation between the drone and the landing pad. Next, a PID controller is used to control the drone's velocity so that it hovers stably over the center of the landing pad. A laptop computer running Ubuntu and ROS is used to communicate with the drone via Wi-Fi throughout the landing process. Several experiments were carried out to verify the feasibility of the whole system.

Keywords: AR.Drone; Robot Operating System (ROS); landing system

*Research supported by aviation fund grant 2014ZC51042. Tianqu Zhao is with the School of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China (Email: zhaotianqu@buaa.edu.cn).

978-1-4673-8318-9/16/$31.00 ©2016 IEEE

I. INTRODUCTION The drone has many sensors onboard, including two


In recent years, Unmanned Aerial Vehicles (UAVs), and in particular multi-rotors, have been capturing the interest of the research community and are becoming important in military and civilian practice. Autonomous landing is an important problem for a UAV to achieve autonomous flight. In order to achieve autonomous landing, precise knowledge of the quadrotor's position, orientation and kinematic data is needed. Generally, these data can be obtained from global positioning systems (GPS) and inertial navigation systems (INS). In indoor environments, however, the GPS signal is unavailable, and vision sensors such as cameras and laser scanners can be used instead to obtain the needed information. Vision sensors are mainly used for estimating the relative position between the camera and some special target, such as a landing pad or a moving ground target.

In [1], Venugopalan et al. developed a control algorithm to autonomously land a quadrotor on an autonomous marine vehicle. They designed a search routine, a landing routine and a state-machine approach to accomplish the task. In [2], Kim et al. used an omnidirectional camera to implement the landing system. They used a fish-eye lens calibration model to mitigate the shrinking field-of-view problem while descending above the visual pad. In [3], Lange et al. designed a special target pattern and explained their landing pad detection algorithm using OpenCV. In [4], Santana et al. proposed a system to implement trajectory tracking and positioning with an AR.Drone in indoor environments. They use a Kalman Filter to track 3D positions and a nonlinear controller for guidance.

This paper is structured as follows. In Section 2, we introduce the hardware and software used in our project. In Section 3, the main steps of our work are described. Section 4 shows the experiment results, and in Section 5 we summarize our work and discuss future improvements.

II. HARDWARE AND SOFTWARE

The AR.Drone 2.0 quadrotor (hereafter called the drone) is used as our UAV platform; it has been used by several research groups from various universities. It is an off-the-shelf product, low-cost and easy to use, released by Parrot SA in 2012. It comes with two body covers (an indoor hull and an outdoor hull); the indoor hull covers the propellers, so the drone can be used safely indoors. Every part of the drone can be replaced individually, which makes it easy to repair.

The drone has many sensors onboard, including two cameras (one facing horizontally forward, the other vertically downwards), a 3-axis gyroscope, a 3-axis accelerometer, a 3-axis magnetometer, a pressure sensor and an ultrasound altimeter. The onboard computer consists of a 1 GHz ARM Cortex-A8 processor running Linux, 1 GB of RAM and a Wi-Fi module.

Figure 1. AR.Drone with indoor hull and coordinate system

When the drone is switched on, a Wi-Fi ad-hoc network appears and any Wi-Fi enabled device such as a smartphone, tablet or laptop computer can connect to it. After the connection has been established, the user may send control commands to the drone and receive the camera video and even the sensor data from the drone.

The drone's firmware and onboard hardware are closed source. However, Parrot released a Software Development Kit (SDK) for the drone which gives easy access to the sensor data and the control software onboard. The latest version of the SDK is 2.0.1, which can be downloaded from the Parrot developer website [5]. The original version of the drone's SDK is hard to use, and many developers encountered errors when trying to compile it. Therefore, we chose to use a substitute package based on ROS.
ROS, the Robot Operating System, is a flexible framework and toolbox for the development of robot applications. ROS provides several functions similar to a traditional operating system, such as hardware abstraction, low-level device control, message passing between processes and package management. In addition, it provides a large number of utilities and libraries. In our project, the most commonly used ROS tools and packages are RViz, ardrone_autonomy, ar_track_alvar and tum_ardrone.

RViz stands for ROS visualization. It is a general-purpose 3D visualization environment for robots, sensors and algorithms. RViz can plot a variety of data types streaming through a typical ROS system, and it can be used with any robot and rapidly configured for a particular application. The default, unconfigured RViz window is shown in Fig. 2.

Figure 2. RViz initial interface

ardrone_autonomy: This is a ROS driver for the Parrot AR.Drone 2.0, based on the official AR.Drone SDK version 2.0.1. The ardrone_autonomy package is developed by the Autonomy Lab [6]; it is an enhanced version of the ardrone_brown driver. The driver's executable node, ardrone_driver, offers two main features:

it converts all raw sensor readings, debug values and status reports sent from the drone into standard ROS messages, and

it allows control commands to be sent to the drone for taking off, landing, hovering and specifying the desired linear and angular velocities.
The sensor data received from the drone are published to the ROS topic ardrone/navdata; we can write a subscriber node for this topic to obtain the drone's state, such as velocity, altitude and accelerations. The drone's motion can be changed and controlled by two kinds of commands. The first kind is to send a std_msgs/Empty ROS message to the topics ardrone/takeoff, ardrone/land or ardrone/reset, respectively, to command the drone to take off, land or reset. The second kind is to send, while the drone is flying, geometry_msgs/Twist messages to the ROS topic cmd_vel; the drone can then be commanded to move forward/backward, up/down or left/right as desired. The drone driver translates these messages into the corresponding changes of roll, pitch and yaw angles.
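As a concrete illustration of these two kinds of commands, the following minimal sketch (our illustration, not code from the paper) publishes std_msgs/Empty messages on the takeoff/land topics and geometry_msgs/Twist messages on cmd_vel, and subscribes to ardrone/navdata; it assumes the ardrone_autonomy driver is already running with its default topic names:

import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist
from ardrone_autonomy.msg import Navdata

def on_navdata(msg):
    # Navdata carries the drone state, e.g. the altitude in millimetres (altd).
    rospy.loginfo_throttle(1.0, "altitude: %d mm", msg.altd)

rospy.init_node('ardrone_simple_control')
takeoff = rospy.Publisher('ardrone/takeoff', Empty, queue_size=1)
land = rospy.Publisher('ardrone/land', Empty, queue_size=1)
cmd_vel = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rospy.Subscriber('ardrone/navdata', Navdata, on_navdata)

rospy.sleep(1.0)              # give the connections time to establish
takeoff.publish(Empty())      # take off
rospy.sleep(5.0)
vel = Twist()
vel.linear.x = 0.05           # drift slowly forward
cmd_vel.publish(vel)
rospy.sleep(2.0)
cmd_vel.publish(Twist())      # zero velocity: hover
land.publish(Empty())         # land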
tum_ardrone: This is a ROS package developed by Engel et al. [7]. It enables the low-cost AR.Drone, coupled with a laptop computer, to navigate in previously unknown and GPS-denied indoor environments. They also conducted many experiments showing that the system remains robust even under significant communication delays and temporary loss of visual tracking. The package contains three nodes; Fig. 3 shows the running interface of tum_ardrone.

drone_stateestimation: This node provides the drone with SLAM capabilities by implementing an algorithm based on Parallel Tracking and Mapping (PTAM). In addition, it implements an EKF for state estimation and for compensation of the time delays in the system arising from the Wi-Fi communication.

drone_autopilot: This node implements a PID controller for pose stabilization and navigation. It also includes basic waypoint navigation and automatic initialization of the drone.

drone_gui: This node implements a simple GUI to display the drone's status and to control its movement. You can use a joystick or simply a keyboard to drive the drone.

Figure 3. Running interface of tum_ardrone

ar_track_alvar: This is a ROS package, a simple wrapper for ALVAR, an open-source AR marker tracking and detection library [8]. ALVAR is a software library for creating virtual and augmented reality (AR) applications. ALVAR marker tags are two-dimensional binary images, such as those in Fig. 4.

Figure 4. AR marker tag examples

The encoding of these tags is carefully designed to reduce reading errors and to permit accurate calculation of the orientation and distance of the tag relative to a camera that images it. ALVAR marker tags work surprisingly well in a variety of application environments.

In our project, we use the ar_track_alvar package to calculate the coordinate transformation relation between the ALVAR tag and the drone.
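As an illustration of reading out this relation at run time, the sketch below (not the paper's code) queries tf, on the assumption that ar_track_alvar is configured to broadcast the marker transform and that the frame names match those mentioned later in the text (ardrone_base_frontcam, ar_marker_4):

import rospy
import tf

rospy.init_node('tag_drone_transform')
listener = tf.TransformListener()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    try:
        # Transform from the camera frame to the detected marker frame.
        trans, rot = listener.lookupTransform('ardrone_base_frontcam',
                                              'ar_marker_4', rospy.Time(0))
        rospy.loginfo("marker at x=%.2f y=%.2f z=%.2f m", *trans)
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        pass                   # marker not visible yet
    rate.sleep()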
We get a video stream at a resolution of 720p from the drone's front-facing camera, and 320x240 pixels from the downward-facing camera. Due to the limited computing power of the drone's onboard computer, all the computation runs on a laptop computer, a Lenovo Y430p (CPU: Intel Core i7-4710MQ 2.5 GHz, GPU: NVIDIA GeForce GTX 850M, 8 GB RAM) running Ubuntu 14.04 LTS.

III. EXPERIMENT FRAMEWORK

A. Camera Calibration
As is well known, every camera lens has some distortion. In order to get accurate pose estimation results, we should first calibrate the drone's camera.

Figure 5. Camera pin-hole model

The camera of the drone is modeled as a simple perspective pin-hole camera, which maps 3D points onto a 2D image plane. The projection point of any 3D point P(x, y, z) on the image plane is the intersection of the image plane with the line connecting point P and the camera optic center Oc. According to the projection relation, we have

$X = f\,x / z, \qquad Y = f\,y / z$

where (X, Y) is the coordinate of the projected point p on the image plane, (x, y, z) is the 3D point P's coordinate in the camera coordinate system, and f is the distance between the optic center and the image plane, known as the focal length.

In ROS, we have a useful utility called camera_calibration, which allows easy calibration of monocular or stereo cameras using a checkerboard calibration target. Any camera that satisfies the standard ROS camera interface can be calibrated by camera_calibration. Fortunately, the ardrone_autonomy driver meets this requirement. Therefore, we first run the ardrone_autonomy driver in one terminal, and then run camera_calibration in another terminal like this:

rosrun camera_calibration cameracalibrator.py --size 9x6 --square 0.044 image:=/ardrone/front/image_raw camera:=/ardrone/front

This will open up the calibration window, which will highlight the checkerboard.

Figure 6. Camera calibration window

As you move the checkerboard around, you will see three bars on the calibration sidebar increase in length. When the CALIBRATE button lights up, you have enough data for calibration.

After the calibration is complete, you will see the calibration results in the terminal and the rectified image in the calibration window, and a file named ardrone_front.yaml is saved automatically on your computer. From then on, whenever you run the drone's driver on the same laptop, the calibration result is loaded automatically by the system. The calibration result is also published to the corresponding camera_info ROS topic for later use. Fig. 7 shows the result of the camera calibration. It is clear that the left edge of the door in (a), before rectification, is almost an arc, whereas after calibration it becomes straight.

(a) Image before rectification   (b) Image after rectification

Figure 7. Camera calibration result
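In practice the rectification can be left to the stock image_proc nodes, but the same step can also be done manually. The sketch below (our illustration, with an assumed output topic name) reads the intrinsics from the camera_info topic filled by the calibration file and undistorts the raw bottom-camera frames with OpenCV:

import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CameraInfo

bridge = CvBridge()
K = None
D = None

def on_info(msg):
    global K, D
    K = np.array(msg.K).reshape(3, 3)   # intrinsic matrix from the calibration
    D = np.array(msg.D)                 # distortion coefficients

def on_image(msg):
    if K is None:
        return
    raw = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rect = cv2.undistort(raw, K, D)     # rectified frame for marker detection
    rect_pub.publish(bridge.cv2_to_imgmsg(rect, encoding='bgr8'))

rospy.init_node('manual_rectify')
rect_pub = rospy.Publisher('ardrone/bottom/image_rect_manual', Image,
                           queue_size=1)   # assumed topic name
rospy.Subscriber('ardrone/bottom/camera_info', CameraInfo, on_info)
rospy.Subscriber('ardrone/bottom/image_raw', Image, on_image)
rospy.spin()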
We did some experiments to test the accuracy of the pose computed by ar_track_alvar. The drone is placed in front of the AR marker at a distance of one edge length of the square ceramic floor tiles, i.e. 60 cm. Next we run the ar_track_alvar node with the relevant parameters set properly. Then we can see the pose of the marker with respect to the ardrone_base_frontcam frame in RViz. Fig. 8 is a screenshot of RViz while running this program. On the left side, the position of ar_marker_4 is represented by three constantly changing values; this is caused by sensor noise. The first two digits after the decimal point are stable, so we can tell that the accuracy of the computed pose is acceptable.

Figure 8. Screenshot of running ar_track_alvar and RViz
B. Marker Recognition
Before the landing phase, we need to determine the positional relationship between the drone and the landing pad. In [9], Merz et al. use specially designed landing pad patterns. In our project, we use the AR marker in Fig. 4 as our landing pad pattern.

The AR markers are augmented reality tags formed of different patterns of black and white squares arranged in a grid area. Such markers are well suited for pattern recognition approaches and can easily be detected with known software. ALVAR is a software library for creating virtual and augmented reality (AR) applications, developed by the VTT Technical Research Centre [10].

In our experiment, we plan to use the images from the down-looking camera of the drone, so we write a subscriber to the ROS topic ardrone/bottom/image_raw and use the result of the camera calibration phase to rectify the output image flow. The image flow is transmitted to our laptop via Wi-Fi. Based on these image frames, the relevant nodes do the image processing, searching for and locating the landing target (in our project, the AR marker). Fig. 9 shows an image from the bottom camera during actual operation.

Figure 9. Actual image from bottom camera

The ar_track_alvar package publishes a ROS topic named ar_pose_marker, which contains a list of the poses of all the observed AR tags, with respect to the camera frame or another specified coordinate frame.
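A minimal subscriber to this topic might look as follows (our sketch, not the paper's code; the AlvarMarkers message and its import path follow the ar_track_alvar message definitions and may differ between package versions):

import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

def on_markers(msg):
    # msg.markers is the list of currently observed tags; each entry has an
    # id and a geometry_msgs/PoseStamped pose in the chosen coordinate frame.
    for marker in msg.markers:
        p = marker.pose.pose.position
        rospy.loginfo("marker %d at x=%.2f y=%.2f z=%.2f",
                      marker.id, p.x, p.y, p.z)

rospy.init_node('marker_listener')
rospy.Subscriber('ar_pose_marker', AlvarMarkers, on_markers)
rospy.spin()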
Beneath the surface, the pose estimation problem is solved through the following process. First, the color image is converted to grayscale. Then this image is converted into a binary (only two colors: black and white) image using an adaptive threshold. In this image, edges are searched for, producing a number of lines. Then sets of four intersecting lines (i.e. quadrangles) are searched for; these are potential markers. Next it is verified that the outside of the quadrangle is white and the inside is black (i.e. that we are indeed seeing the border stripe of a marker). Finally, the inside of the marker border is interpreted as bits, and it is checked that the bit pattern is a valid marker. From the four corner points of the detected marker the program can compute the pose (i.e. location and orientation) of the marker in camera coordinates.
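For illustration only, the fragment below sketches the first stages of such a pipeline (grayscale conversion, adaptive thresholding and the search for quadrangle candidates) with OpenCV; it is not ALVAR's implementation, and the threshold and approximation parameters are arbitrary example values:

import cv2

def find_quad_candidates(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    # [-2] selects the contour list in both the OpenCV 3 and OpenCV 4 APIs.
    contours = cv2.findContours(binary, cv2.RETR_LIST,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2))   # four corner points
    return quads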

C. AR.Drone velocity control

Control theory deals with the problem of controlling the behavior of a dynamic system. The general goal is to calculate system input values u(t) such that the system reaches and holds a desired state. In other words, the measured error e(t) between a given setpoint w(t) and the measured output of the system y(t) is to be minimized over time. In our situation, the setpoint is the image center, and the measured error is the distance between the AR marker's center and the image center. We plan to use the most common proportional-integral-derivative (PID) controller in our velocity control process:

$U_t = K_P\,(X_{(0,0)} - X_{t-1}) + K_D\,(\dot{X}_{(0,0)} - \dot{X}_{t-1}) + K_I \int_0^t (X_{(0,0)} - X_{t-1})\,dt$

where U_t is the output of the PID controller, X_{t-1} stands for the drone position at time t-1, and X_{(0,0)} is the setpoint, which is the image center. The dotted terms denote the corresponding velocities. K_P, K_I and K_D are the proportional, integral and derivative gains, respectively; these parameters are determined in actual operation. In our landing system, the drone is designed to hover just above the landing pad. Before the drone decreases its height, we only need to apply the PID controller along the X and Y axes to control the linear.x and linear.y velocities.
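A minimal sketch of such a controller acting on the horizontal offset between the marker center and the image center is given below (our illustration; the gains are placeholders and the error is assumed to be expressed in the drone's horizontal plane):

from geometry_msgs.msg import Twist

class AxisPID(object):
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid_x = AxisPID(0.4, 0.0, 0.1)    # placeholder gains, tuned in flight
pid_y = AxisPID(0.4, 0.0, 0.1)

def velocity_command(error_x, error_y, dt):
    # error_x/error_y: offset of the marker center from the image center.
    cmd = Twist()
    cmd.linear.x = pid_x.update(error_x, dt)
    cmd.linear.y = pid_y.update(error_y, dt)
    return cmd                    # publish on cmd_vel at a fixed rate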

IV. SYSTEM OVERVIEW AND EXPERIMENT RESULTS

The overview of our landing system is as follows:

1) At first, after the AR.Drone takes off, it begins to search for and detect the landing pad on the ground in every image frame from the downward-facing camera.

2) Once the drone finds the AR marker, it moves slowly to a position above the target and follows it tightly and stably, with the help of the ROS package ar_track_alvar and our PID velocity control node.

3) When the drone has locked onto the target, it keeps its horizontal position and is commanded to move down slowly.

4) The drone keeps the target marker in the center of the image and descends to an appropriate height; then the drone cuts off power and lands on the pad.

The whole system's flow chart is shown in Fig. 10; an illustrative code sketch of this loop follows the figure.

Figure 10. Flow chart of the landing process
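The following sketch ties these steps together (our illustration, not the authors' released code; topic names, thresholds, rates, gains and sign conventions are assumptions):

import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist
from ar_track_alvar_msgs.msg import AlvarMarkers

marker_offset = None              # (x, y, z) of the marker in the camera frame

def on_markers(msg):
    global marker_offset
    if msg.markers:
        p = msg.markers[0].pose.pose.position
        marker_offset = (p.x, p.y, p.z)
    else:
        marker_offset = None

rospy.init_node('landing_loop')
rospy.Subscriber('ar_pose_marker', AlvarMarkers, on_markers)
cmd_vel = rospy.Publisher('cmd_vel', Twist, queue_size=1)
land = rospy.Publisher('ardrone/land', Empty, queue_size=1)

DIST_THRESHOLD = 0.10             # m: close enough to the center to descend
LAND_HEIGHT = 0.50                # m: below this height, cut power and land
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    cmd = Twist()
    if marker_offset is not None:
        x, y, z = marker_offset
        cmd.linear.x = -0.4 * x   # proportional correction toward the marker
        cmd.linear.y = -0.4 * y
        if abs(x) < DIST_THRESHOLD and abs(y) < DIST_THRESHOLD:
            if z < LAND_HEIGHT:
                land.publish(Empty())
                break
            cmd.linear.z = -0.2   # descend slowly while centered
    cmd_vel.publish(cmd)          # a zero Twist means hover and keep searching
    rate.sleep()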
We have tested our system several times. The drone landed successfully on the pad in the majority of the tests, with acceptable bias. The failed attempts occurred when the drone could not find the AR marker at all.

V. CONCLUSION
In this paper, we presented an implementation of a landing system for the low-cost AR.Drone 2.0 quadrotor. With the help of many useful ROS packages, we completed tasks such as camera calibration, marker recognition and drone control. We used a PID controller to control the drone's velocity, so that the drone can hover stably and land on the pad. In future studies, we will work to improve the control strategy and the accuracy of the pose estimation algorithm.

REFERENCES
[1] Venugopalan, T. K., Taher, T., & Barbastathis, G. (2012, October). Autonomous landing of an unmanned aerial vehicle on an autonomous marine vehicle. In Oceans, 2012 (pp. 1-9). IEEE.
[2] Kim, J., Jung, Y., Lee, D., & Shim, D. H. (2014, May). Outdoor autonomous landing on a moving platform for quadrotors using an omnidirectional camera. In Unmanned Aircraft Systems (ICUAS), 2014 International Conference on (pp. 1243-1252). IEEE.
[3] Lange, S., Sünderhauf, N., & Protzel, P. (2008, November). Autonomous landing for a multirotor UAV using vision. In International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2008) (pp. 482-491).
[4] Vago Santana, L., Brandao, A. S., Sarcinelli-Filho, M., & Carelli, R. (2014, May). A trajectory tracking and 3D positioning controller for the AR.Drone quadrotor. In Unmanned Aircraft Systems (ICUAS), 2014 International Conference on (pp. 756-767). IEEE.
[5] Parrot for Developers. http://developer.parrot.com/ar-drone.html
[6] ROS wiki: ardrone_autonomy. http://wiki.ros.org/ardrone_autonomy
[7] ROS wiki: tum_ardrone. http://wiki.ros.org/tum_ardrone
[8] ROS wiki: ar_track_alvar. http://wiki.ros.org/ar_track_alvar
[9] Merz, T., Duranti, S., & Conte, G. (2006). Autonomous landing of an unmanned helicopter based on vision and inertial sensing. In Experimental Robotics IX (pp. 343-352). Springer Berlin Heidelberg.
[10] ALVAR. http://virtual.vtt.fi/virtual/proj2/multimedia/alvar/desktop/index.html

