
Autonomous Search and Conquer for MAVs
Yucong Lin, Jesus Pestana Puerta, Srikanth Saripalli
Autonomous System Technologies Research & Integration Laboratory (ASTRIL)
School of Earth & Space Exploration, Arizona State University
The Vehicle
We aimed at fully autonomous operation and used an off-the-shelf ARDrone [1]. It has the following specifications:
1 Powered by brushless three-phase motors, current-controlled by a micro-controller, and flying on a LiPo battery.
2 Sensors: an IMU for automatic stabilization; an ultrasound telemeter providing altitude measurements; a 3-axis magnetometer and a pressure sensor to allow altitude measurements at any height.
3 A camera aiming downwards, with a field of view of 47.5° × 36.5° and a resolution of 176 × 144. The other camera aims forward, with a field of view of 73.5° × 58.5° and a resolution of 320 × 240. Images from only one of the cameras can be transmitted to the laptop at a time.
4 We created a ROS [2] package that guides the ARDrone to execute the mission with full autonomy. The code runs on a laptop; commands of (v_x, v_y, v_z, yaw) are generated and sent to the ARDrone via Wi-Fi (see the sketch below). The package is based on the ardrone_autonomy [3] package. The commanded (v_x, v_y, v_z, yaw) differs from the actual values by an unknown scale factor.
Figure: the ARDrone (front and bottom views)
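As an illustration of this command interface, here is a minimal sketch of how such commands could be published through ardrone_autonomy. The topic names follow that package's documented interface (take-off/land as std_msgs/Empty on /ardrone/takeoff and /ardrone/land, normalized velocity commands as a geometry_msgs/Twist on /cmd_vel); the ARDroneCommander helper itself is a hypothetical example, not our actual mission code.

#!/usr/bin/env python
# Hypothetical sketch of the ARDrone command interface; the topic names
# follow the ardrone_autonomy package, the helper class is illustrative.
import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist

class ARDroneCommander(object):
    def __init__(self):
        # Take-off and landing are triggered by publishing an Empty message.
        self.takeoff_pub = rospy.Publisher('/ardrone/takeoff', Empty, queue_size=1)
        self.land_pub = rospy.Publisher('/ardrone/land', Empty, queue_size=1)
        # Velocity commands are a Twist; components are normalized to [-1, 1]
        # and map to (v_x, v_y, v_z, yaw rate) on the drone.
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

    def takeoff(self):
        self.takeoff_pub.publish(Empty())

    def land(self):
        self.land_pub.publish(Empty())

    def send_velocity(self, vx, vy, vz, yaw_rate):
        cmd = Twist()
        cmd.linear.x, cmd.linear.y, cmd.linear.z = vx, vy, vz
        cmd.angular.z = yaw_rate
        self.cmd_pub.publish(cmd)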
References
[1] http://ardrone2.parrot.com/
[2] http://www.ros.org/wiki/
[3] https://github.com/AutonomyLab/ardrone_autonomy
[4] Chang, F., Chen, C. J., & Lu, C. J. (2004). A linear-time component-labeling algorithm using contour tracing technique. Computer Vision and Image Understanding, 93(2), 206-220.
[5] Yuen, H. K., Princen, J., Illingworth, J., & Kittler, J. (1990). Comparative study of Hough transform methods for circle finding. Image and Vision Computing, 8(1), 71-77.
Acknowledgements
We thank the School of Earth and Space Exploration (SESE) of ASU for providing ISTB4's basement for testing the UAV. We also appreciate Susan Selkirk's effort in printing the red target and this poster.
Step 1: taking off and flying to the target zone
Take-off is performed by sending a take-off command. We next aim at flying to the 20 ft × 20 ft target region. We know the region's position from the official documents, but we don't have direct speed measurements. We therefore had to determine by experiment how far the drone travels under a fixed forward velocity command in a fixed time interval (a sketch of this timed flight follows).
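A minimal sketch of this timed flight, reusing the hypothetical ARDroneCommander above; the command value, the experimentally calibrated speed, and the distance to the zone are placeholder numbers, not the values we actually used.

import rospy

FORWARD_CMD = 0.2       # fixed normalized v_x command (placeholder)
SPEED_AT_CMD = 1.0      # measured meters/second at FORWARD_CMD (placeholder)
ZONE_DISTANCE = 15.0    # meters to the target region (placeholder)

def fly_to_target_zone(commander):
    """Take off, then fly forward for the experimentally calibrated time."""
    commander.takeoff()
    rospy.sleep(5.0)  # allow take-off and stabilization
    duration = ZONE_DISTANCE / SPEED_AT_CMD
    end_time = rospy.Time.now() + rospy.Duration(duration)
    rate = rospy.Rate(10)  # re-send the command at 10 Hz
    while rospy.Time.now() < end_time and not rospy.is_shutdown():
        commander.send_velocity(FORWARD_CMD, 0.0, 0.0, 0.0)
        rate.sleep()
    commander.send_velocity(0.0, 0.0, 0.0, 0.0)  # stop and hover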
Step 2: searching for the target and heading towards it
After reaching the target zone, the drone starts to search for the target with its front camera, based on the target's color. We go through the following steps (a code sketch follows the figure):
1 transform the image into Hue, Saturation, Value (HSV) color space (Saturation shown in (b));
2 threshold the S channel (c);
3 perform a blob detection [4] (blobs marked in blue in (d));
4 select the red blob; its center indicates the target's position (circled in (e)).
Figure: (a) input image, (b) Saturation channel, (c) thresholded S channel, (d) detected blobs, (e) selected red blob
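A minimal OpenCV sketch of this pipeline. The saturation threshold, minimum blob area, and red hue range are placeholder values that would need tuning to the target and lighting, and connected-component labeling stands in here for the contour-tracing blob detector of [4].

import cv2

def detect_target_center(bgr_image):
    """Return the (x, y) pixel center of the red target, or None."""
    # 1. Transform into HSV color space.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # 2. Threshold the S channel (placeholder threshold).
    _, mask = cv2.threshold(s, 120, 255, cv2.THRESH_BINARY)
    # 3. Blob detection: connected components stand in for [4].
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    target = None
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 50:
            continue  # reject small noise blobs
        cx, cy = centroids[i]
        # 4. Keep blobs whose hue is red (red wraps around 0/180 in OpenCV).
        hue = h[int(cy), int(cx)]
        if hue < 10 or hue > 170:
            target = (int(cx), int(cy))
    return target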
Once the drone detects the target, it adjusts its yaw repeatedly so that the target always appears at the image center. This way the drone heads towards the target; a sketch of this correction follows.
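The yaw adjustment can be expressed as a simple proportional controller on the horizontal pixel error; the gain and the forward command below are placeholder values, a sketch rather than our tuned controller.

IMAGE_WIDTH = 320    # width of the front-camera image in pixels
YAW_GAIN = 0.002     # proportional gain (placeholder)
FORWARD_CMD = 0.1    # forward command while tracking (placeholder)

def head_towards_target(commander, target_center):
    """Turn so the target stays at the image center while flying forward."""
    error_x = target_center[0] - IMAGE_WIDTH / 2.0
    # Target right of center -> negative yaw rate (turn right, ROS convention).
    yaw_rate = -YAW_GAIN * error_x
    commander.send_velocity(FORWARD_CMD, 0.0, 0.0, yaw_rate)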
Step 3: reaching the target and hovering above it
We judge that the drone is close to the target when the target appears at the bottom of the image. We then switch to the bottom camera and try to detect the target's center: we perform a Hough circle transform [5] to detect circles and use the center of the smallest one (see the sketch below). If no circle is detected, we use methods similar to those of Step 2 to detect blobs and approximate the target's center with the smallest blob's center. The drone is commanded to decrease the distance between the image center and the target's center; if the distance is below a preset threshold, the drone just hovers. It lands after 2 minutes or when the battery power drops below 20%. Below are images of the target from the bottom camera, with the target's center marked by a small blue circle.
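A minimal OpenCV sketch of the Hough-circle step; the detector parameters are placeholders rather than our tuned values.

import cv2

def detect_circle_center(bgr_image):
    """Return the center of the smallest detected circle, or None."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # smooth to reduce false circles
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30,
                               minRadius=5, maxRadius=100)
    if circles is None:
        return None  # caller falls back on the Step 2 blob detection
    # circles[0] holds (x, y, radius) rows; take the smallest circle.
    x, y, r = min(circles[0], key=lambda c: c[2])
    return (int(x), int(y))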