
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 59, NO. 2, FEBRUARY 2012

A Robust Real-Time Embedded Vision System on an


Unmanned Rotorcraft for Ground Target Following
Feng Lin, Student Member, IEEE, Xiangxu Dong, Ben M. Chen, Fellow, IEEE,
Kai-Yew Lum, Member, IEEE, and Tong H. Lee, Member, IEEE

Abstract—In this paper, we present the systematic design and implementation of a robust real-time embedded vision system for an unmanned rotorcraft for ground target following. The hardware construction of the vision system is presented, and an onboard software system is developed based on a multithread technique capable of coordinating multiple tasks. To realize the autonomous ground target following, a sophisticated feature-based vision algorithm is proposed by using an onboard color camera and navigation sensors. The vision feedback is integrated with the flight control system to guide the unmanned rotorcraft to follow a ground target in flight. The overall vision system has been tested in actual flight missions, and the results obtained show that the overall system is very robust and efficient.

Index Terms—Image processing, real-time systems, target detection and following, unmanned aerial vehicles (UAVs), vision systems.

Manuscript received November 8, 2010; revised March 10, 2011; accepted May 19, 2011. Date of publication July 5, 2011; date of current version October 18, 2011.
F. Lin, X. Dong, B. M. Chen, and T. H. Lee are with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (e-mail: linfeng@nus.edu.sg; dong07@nus.edu.sg; bmchen@nus.edu.sg; eleleeth@nus.edu.sg).
K.-Y. Lum is with Temasek Laboratories, National University of Singapore, Singapore 119260 (e-mail: tsllumky@nus.edu.sg).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIE.2011.2161248

I. INTRODUCTION

UNMANNED AERIAL VEHICLES (UAVs) have recently aroused much interest in the civil and industrial markets, ranging from industrial surveillance, agriculture, and academic research to wildlife conservation [6], [8], [14], [15], [26], [32], [34]. In particular, owing to its vertical takeoff-and-landing, hovering, and maneuvering capabilities, the unmanned rotorcraft has received much attention in the defense and security community [1]. More specifically, an unmanned rotorcraft equipped with a vision payload can perform a wide range of tasks, such as search and rescue, surveillance, target detection and tracking, etc., as vision provides a natural sensing modality—in terms of human comprehension—for feature detection and tracking [28], [29]. Instead of vision being merely a payload, many research efforts have also been devoted to vision-aided flight control [2], [17], [22], tracking [25], [28], terrain mapping [27], and navigation [18], [23].

We note that most of the works reported in the literature, however, focus on only a certain part of vision systems for UAVs, such as hardware construction or vision algorithms. Many of them are adopted from those designed for ground robots, which are not very suitable for applications on UAVs. To the best of our knowledge, there is hardly any systematic documentation in the open literature dealing with the complete design and implementation of vision systems for unmanned rotorcraft, including architectural and algorithmic designs of real-time vision systems. In addition, although target tracking in video sequences has already been studied in a number of applications, there has been very little research related to the implementation of vision-based target following for UAVs.

In this paper, we present the design and implementation of a comprehensive real-time embedded vision system for an unmanned rotorcraft, which includes an onboard embedded hardware system, a real-time software system, and mission-based vision algorithms. More specifically, the onboard hardware system is designed to fulfill the image processing requirements by using commercial off-the-shelf products, such as PC104 embedded modules. Real-time vision software is developed, which runs on the real-time operating system QNX. An advanced and efficient vision algorithm is then proposed and implemented to realize ground target tracking, which is suited for UAVs. The proposed vision scheme is integrated with the onboard navigation sensors to estimate the relative distance between the target and the UAV. Finally, using the vision feedback, a two-layer target tracking control framework is utilized to control a pan/tilt servomechanism to keep the target in the center of the image and guide the UAV to follow the motion of the target.

The remainder of this paper is organized as follows: Sections II and III present the development of the hardware and software of the embedded vision system for a UAV, respectively, whereas the coordinate systems adopted in the UAV vision systems are described in Section IV. Section V details the vision-based ground target detection and tracking algorithms, as well as the target-following scheme based on vision signal feedback. The experimental results of the vision system obtained through actual flight tests are presented in Section VI. Finally, we draw some concluding remarks in Section VII.

II. HARDWARE CONFIGURATION OF THE VISION SYSTEM

The hardware configuration of the proposed onboard vision system for the UAV, as shown in Fig. 1, consists of the following five main parts: a visual sensor, an image acquisition module, a vision processing module, a pan/tilt servomechanism, and video and data links.


Fig. 1. Configuration of the overall vision system.

A. Visual Sensor: Video Camera

A visual sensor is employed on board to obtain in-flight visual information of the surrounding environment of the UAV. The visual information of interest is composed of static and dynamic features, such as the color and shape of landmarks and the motions of vehicles. A color video camera is selected as the onboard visual sensor in our system, which has a compact size and a weight of less than 30 g, as well as 380 TV line resolution and a 40° field of view.

B. Image Acquisition Module: Frame Grabber

The primary function of a frame grabber is to perform the A/D conversion of the analog video signals and then output the digitalized data to a host computer for further processing. Our selection is a PC/104(-plus)-standard frame grabber, a Colory 104, which has the following features: 1) high resolution—Colory 104 is capable of providing a resolution of up to 720 × 576 (pixels), which is sufficient for online processing; 2) multiple video inputs—it is able to collect data from multiple cameras; 3) sufficient processing rate—the highest A/D conversion rate is 30 frames per second (FPS), which is higher than the onboard vision processing rate (10 FPS); and 4) featured processing method—two tasks are used alternately to convert the digital video signal into specified formats.

C. Vision Processing Module: Vision Computer

As shown in Fig. 1, the digitalized visual signals provided by the frame grabber are transferred to the onboard vision computer, which is the key unit of the vision system. The vision computer coordinates the overall vision system, such as image processing, target tracking, and communicating with the flight control computer, which is to be described in detail later in Section V. In this paper, the configuration of using two separated embedded computers in the onboard system for UAVs is proposed: one for flight control and another one for machine vision algorithms. We choose such a configuration for the onboard system because of the following reasons: 1) the computational loads of the flight control task and the vision program are both very heavy and can hardly be carried out together in a single embedded computer; 2) the sampling rate of the flight control computer is faster than that of the vision computer, since the faster sampling rate is required to stabilize the unmanned rotorcraft; and 3) the decoupled structure reduces the negative effect of data blocking caused by the vision program and the flight control system and thus makes the overall system more reliable.

In the proposed vision system, a separated onboard PC104 embedded computer, a Cool RoadRunner III, is employed to process the digitalized video signal and execute the vision algorithms. The core of the board is an Intel LV Pentium-III processor running at 933 MHz. A compact flash memory card is used to save the captured images.

D. Pan/Tilt Servomechanism

In the application of ground target following, it is required to keep the target objects in the field of view of the camera to increase the flexibility of vision-based tracking. As such, we decide to mount the camera on a pan/tilt servomechanism that can rotate in the horizontal and vertical directions.

E. Wireless Data Link and Video Link

In order to provide ground operators with clear visualization to monitor the work that the onboard vision system is performing during flight tests, the video captured by the onboard camera is transmitted to and displayed in a ground control station. An airborne 2.4-GHz wireless video link is used to transmit the live video captured to the ground control station.

III. CONFIGURATION OF THE VISION SOFTWARE SYSTEM

Based on the proposed hardware system, the configuration of the onboard vision software system is presented. The purpose of the vision software system is to coordinate the work of the onboard devices and implement the vision algorithms. Since the vision software system targets real-time applications and runs on an embedded PC104 computer, QNX Neutrino, a real-time embedded operating system, is employed as the development platform.
QNX Neutrino has a microkernel that requires fewer system resources and performs more reliably and efficiently for embedded systems during runtime compared to a traditional monolithic kernel.
The vision software program coordinates tasks such as capturing video, controlling the pan/tilt servomechanism, and performing the vision detection and tracking algorithms. To make the vision software system easy to design and robust to perform, the entire vision software system is divided into several main blocks. Each block is assigned a special task as follows (a minimal illustrative sketch of this multithreaded structure is given after the list).

1) CAM: Reading RGB24 images from the buffers assigned to the frame grabber. The reading rate is set to 10 FPS. In order to reduce the risk of damaging the image data, two buffers are used alternately to store the images captured by the frame grabber.
2) IMG: Processing the captured images and carrying out the vision algorithms, such as the automatic tracking and camera control, which will be explained in Section V.
3) SVO: Controlling the rotation of the pan/tilt servomechanism to keep the ground target in a certain location of the image.
4) SAV: Saving the captured and processed images to a high-speed compact flash card.
5) COM: Communicating with the flight control computer. The flight control computer sends the states of the UAV and commands from the ground station to the vision computer, and the vision computer sends the estimated relative distance between the UAV and the ground target to the flight control computer to guide the flight of the UAV.
6) USER: Providing a means for users to control the vision program, such as running and stopping the tracking, as well as changing the parameters of the vision algorithms.
7) MAIN: Managing and scheduling the work of the entire vision software system.
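The block structure above can be pictured as a small set of cooperating threads around two queues. The following Python sketch is purely illustrative (the onboard software runs on QNX and is not reproduced here); the function names, rates, and queue sizes are assumptions.

```python
# Illustrative sketch of the CAM / IMG / COM blocks as cooperating threads.
# Names, rates, and queue sizes are hypothetical; the onboard code runs on QNX.
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=2)    # two image buffers, as in the CAM block
result_queue = queue.Queue()

def cam_block(grab_frame, fps=10):
    """CAM: read frames from the grabber at a fixed rate."""
    period = 1.0 / fps
    while True:
        frame = grab_frame()
        try:
            frame_queue.put(frame, timeout=period)   # drop the frame if IMG is busy
        except queue.Full:
            pass
        time.sleep(period)

def img_block(detect_and_track):
    """IMG: run the detection and tracking algorithms on each frame."""
    while True:
        frame = frame_queue.get()
        result_queue.put(detect_and_track(frame))

def com_block(send_to_flight_computer):
    """COM: forward the estimated target states to the flight control computer."""
    while True:
        send_to_flight_computer(result_queue.get())

# The MAIN block would start the threads, e.g.:
# threading.Thread(target=cam_block, args=(grab,), daemon=True).start()
```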
IV. COORDINATE FRAMES USED IN VISION SYSTEMS

Fig. 2. Coordinate frames used in unmanned vision systems.

Shown in Fig. 2 are the coordinate systems adopted in the UAV vision systems. More specifically, we have the following.

1) Local north–east–down (NED) coordinate system (labeled with subscript "n") is an orthogonal frame on the surface of the Earth, whose origin is the launching point of the aircraft on the surface of the Earth.
2) Body coordinate system (labeled with subscript "b") is aligned with the shape of the fuselage of the aircraft.
3) Servo base coordinate system (labeled with subscript "s") is attached to the base of the pan/tilt servomechanism, which is aligned with the body coordinate system of the UAV.
4) Spherical coordinate system (labeled with subscript "sp") is also attached to the base of the pan/tilt servomechanism. It is used to define the orientation of the camera and the target with respect to the UAV. Given a generic point p_s = (x_s, y_s, z_s)^T in the servo base coordinate system, its position can be defined in the spherical coordinate system by three numbers: radius r_sp, azimuth angle θ_sp, and elevation angle φ_sp, which is given by

   p_{sp} = \begin{pmatrix} r_{sp} \\ \theta_{sp} \\ \phi_{sp} \end{pmatrix} = \begin{pmatrix} \sqrt{x_s^2 + y_s^2 + z_s^2} \\ \tan^{-1}(x_s/z_s) \\ \sin^{-1}(y_s/r_{sp}) \end{pmatrix}.   (1)

5) Camera coordinate system (labeled with subscript "c"), whose origin is the optical center of the camera. The Z_c-axis is aligned with the optical axis of the camera and points from the optical center C toward the image plane.
6) Image frame (or the principal image coordinate system) (appended with subscript "i") has its origin at the principal point. The coordinate axes X_i and Y_i are aligned with the camera coordinate axes X_c and Y_c, respectively.

V. VISION-BASED GROUND TARGET FOLLOWING

To realize vision-based ground target detection, many vision approaches have been proposed worldwide, such as template matching [7], [28], background subtraction [19], [35], optical flow [3], [18], stereo-vision-based technologies [11], and feature-based approaches [20], [31], [36], [39].

In this paper, a sophisticated vision-based target detection and tracking scheme is proposed, as shown in Fig. 3, which employs robust feature descriptors and efficient image-tracking techniques. Based on the vision sensing data and navigation sensors, the relative distance to the target is estimated. Such estimation is integrated with the flight control system to guide the UAV to follow the ground target in flight.

Fig. 3. Flow chart of the ground target detection, tracking, and following.

A. Target Detection

The purpose of the target detection is to identify the target of interest from the image automatically based on a database of preselected targets. A toy car is chosen as the ground target. A classical pattern recognition procedure is used to identify the target automatically, which includes three main steps, i.e., segmentation, feature extraction, and pattern recognition.

1) Segmentation: The segmentation step aims to separate the objects of interest from the background. To simplify the further processing, some assumptions are made. First, the target
and environments exhibit Lambertian reflectance; in other words, their brightness is unchanged regardless of viewing directions. Second, the target has a distinct color distribution compared to the surrounding environments.

Step 1) Threshold in color space. To make the surface color of the target constant and stable under varying lighting conditions, the color image is represented in HSV space, which stands for hue (hue), saturation (sat), and value (val), introduced originally by Smith [33]. Precalculated threshold ranges are applied to the hue, sat, and val channels

   \mathrm{hue}_r = [h_1, h_2], \quad \mathrm{sat}_r = [s_1, s_2], \quad \mathrm{val}_r = [v_1, v_2].   (2)

Only the pixel values falling in these color ranges are described as foreground points, and pixels of the image that fall out of the specified color ranges are removed. The procedure of the image preprocessing is shown in Fig. 4.

Step 2) Morphological operation. As shown in Fig. 4, normally, the segmented image is not smooth and has many noise points. Morphological operations are then employed to filter out noise, fuse narrow breaks and gulfs, eliminate small holes, and fill gaps in the contours. Next, a contour detection approach is used to obtain the complete boundary of the objects in the image, which will be used in the feature extraction.
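For illustration, the two segmentation steps can be sketched with OpenCV as follows; the HSV threshold values and the structuring-element size are placeholders, not the precalculated ranges in (2).

```python
# Sketch of Step 1) HSV thresholding and Step 2) morphological filtering plus
# contour detection. Threshold values and kernel size are illustrative only.
import cv2
import numpy as np

def segment(bgr, lo=(20, 80, 80), hi=(35, 255, 255)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove noise points
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes/gaps
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # object boundaries
    return mask, contours
```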
2) Feature Extraction: Generally, multiple objects will be found in the segmented images, including the true target and false objects. The geometric and color features are used as descriptors to identify the true target.

Geometry Feature Extraction: To describe the geometric features of the objects, the four lowest moment invariants proposed in [25] are employed, since they are independent of position, size, and orientation in the visual field. The four lowest moment invariants, defined in the segmented image I(x, y), are given by

   \phi_1 = \eta_{20}^{m} + \eta_{02}^{m}   (3)
   \phi_2 = (\eta_{20}^{m} - \eta_{02}^{m})^2 + 4(\eta_{11}^{m})^2   (4)
   \phi_3 = (\eta_{30}^{m} - 3\eta_{12}^{m})^2 + (\eta_{03}^{m} - 3\eta_{21}^{m})^2   (5)
   \phi_4 = (\eta_{30}^{m} + \eta_{12}^{m})^2 + (\eta_{03}^{m} + \eta_{21}^{m})^2   (6)

where \eta_{pq}^{m}, for p + q = 2, 3, \ldots, is the improved normalized central moment defined as

   \eta_{pq}^{m} = \frac{\mu_{pq}^{c}}{A^{(p+q+1)/2}}   (7)

where A is the interior area of the shape and \mu_{pq}^{c} is the central moment defined as

   \mu_{pq}^{c} = \oint_{C} (x - \bar{x})^{p} (y - \bar{y})^{q}\, ds, \qquad p, q = 0, 1, \ldots.   (8)

Note that, in (8), C is the boundary curve of the shape, \oint_{C} is a line integral along C, ds = \sqrt{(dx)^2 + (dy)^2}, and [\bar{x}, \bar{y}] is the coordinate of the centroid of the shape in the image plane.

In addition, compactness is another useful feature descriptor for recognition. Compactness of a shape is measured by the ratio of the square root of the area to the perimeter, which is given by

   \text{Compactness:}\quad \beta_c = \frac{\sqrt{A}}{C}.   (9)

It can be easily proven that compactness is invariant with respect to translation, scaling, and rotation.

Color Feature Extraction: To make the target detection and tracking more robust, we also employ a color histogram to represent the color distribution of the image area of the target, which is not only independent of the target orientation, position, and size but also robust to partial occlusion of the target and easy to implement. Due to their stability in outdoor environments, only the hue and val channels are employed to construct the color histogram for object recognition, which is defined as

   H = \{\mathrm{hist}(i, j)\}, \qquad i = 1, \ldots, N_{hue};\; j = 1, \ldots, N_{val}   (10)

where

   \mathrm{hist}(i, j) = \sum_{(x, y) \in \Omega} \delta\!\left(i, \left[\frac{\mathrm{hue}(x, y)}{N_{hue}}\right]\right) \delta\!\left(j, \left[\frac{\mathrm{val}(x, y)}{N_{val}}\right]\right)

and where N_{hue} and N_{val} are the partition numbers of the hue and val channels, respectively, \Omega is the region of the target, [\cdot] is the nearest integer operator, and \delta(a, b) is the Kronecker delta function.
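As a rough illustration, the compactness descriptor (9) and the hue/value histogram (10) could be computed as below; using OpenCV's contour area and arc length in place of the boundary integral in (8), as well as the bin counts, are simplifying assumptions.

```python
# Sketch of two of the descriptors above: compactness (9) and the hue/val histogram (10).
import cv2
import numpy as np

def compactness(contour):
    """beta_c = sqrt(area) / perimeter, cf. (9)."""
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    return np.sqrt(area) / perimeter if perimeter > 0 else 0.0

def color_histogram(hsv, mask, n_hue=16, n_val=16):
    """Normalized 2-D hue/value histogram over the masked target region, cf. (10)."""
    hue = hsv[..., 0][mask > 0].astype(float)
    val = hsv[..., 2][mask > 0].astype(float)
    hist, _, _ = np.histogram2d(hue, val, bins=[n_hue, n_val],
                                range=[[0, 180], [0, 256]])
    return hist / max(hist.sum(), 1.0)
```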
Fig. 4. Illustration of segmentation. (a) Input image. (b) Image in HSV color space. (c) Image after thresholding. (d) Image after morphological operations.
(e) Image after contour detection. (f) Regions of interest.

Dynamic Features: Aside from the static features extracted from the foreground objects, we further calculate their dynamic motion using the Kalman filtering technique. The distance between the location of each object z_i and the predicted location of the target \hat{z} is employed as a dynamic feature. The detailed procedure for predicting the location of the target in the image is to be discussed in Section V-B1. Both the static and dynamic features are then employed in the pattern recognition.

The extracted features of an object need to be arranged in a compact and identifiable form [30]. A straightforward way is to stack these features in a high-dimensional vector. For example, the feature vector of the ith object is given by

   \alpha_i = [\beta_{c,i}, \phi_{1,i}, \phi_{2,i}, \phi_{3,i}, \phi_{4,i}, H_i, z_i] = \{\alpha_{k,i}\}, \qquad k = 1, \ldots, d   (11)

where d is the dimension of the feature vector.

3) Pattern Recognition: The purpose of the pattern recognition is to identify the target from the extracted foreground objects in terms of the extracted features in (11). A straightforward classifier is the nearest neighbor rule: it calculates a metric or "distance" between an object and a template in a feature space and assigns the object to the class with the highest score. However, to take advantage of a priori knowledge of the feature distribution, the classification problem is formulated under the model-based framework and solved by using a probabilistic classifier. A discriminant function, derived from Bayes' theorem, is employed to identify the target. This function is computed based on the measured feature values of each object and the known distribution of features obtained from training data.

Step 1) Prefilter: Before classifying the objects, a prefilter is carried out to remove the objects whose feature values are outside certain regions determined by a priori knowledge. This step aims to improve the robustness of the pattern recognition and speed up the calculation.

Step 2) Discriminant function: We use the discriminant function, derived from Bayes' theorem, to determine the target based on the measured feature values of each object and the known distribution of features of the target obtained from the training data. We assume that these features are independent and follow normal distributions. Thus, we can define the simplified discriminant function with weightings as

   f_j(\alpha_i) = \sum_{k=1}^{5} w_k \left(\frac{\alpha_{k,i} - \mu_{k,j}}{\sigma_{k,j}}\right)^2 + w_6 \left(\frac{d_c(H_i, G_j) - \mu_{6,j}}{\sigma_{6,j}}\right)^2

where

   d_c(H_i, G_j) = \frac{\sum_{p=1}^{N_{hue}} \sum_{q=1}^{N_{val}} \min\left(H_i(p, q), G_j(p, q)\right)}{\min\left(|H_i|, |G_j|\right)}   (12)

and \alpha_{k,i} is the kth element of the feature vector of object i, \mu_{k,j} and \sigma_{k,j} are the mean and standard deviation of the distribution of the corresponding feature, and G_j is the color histogram template of a predefined target. In fact, the location information is not used in the detection mode. The target i with the minimum value is considered as the candidate target. w_1 to w_6 are the weighting scalars of the corresponding features. In terms of the likelihood values of the objects, a decision rule is defined as

   D = \begin{cases} \text{target} = \arg\min_i f_j(\alpha_i), \text{ which belongs to class } j, & \text{if } \min f_j(\alpha_i) \le \Gamma_j \\ \text{no target in the image}, & \text{if } \min f_j(\alpha_i) > \Gamma_j \end{cases}
where \Gamma_j is a threshold value chosen based on the training data. This decision rule chooses the object,
for example, i, with the smallest value of the sim-
plified discriminant function as the candidate target.
If fj (αi ) < Γj , then the scheme decides that object
i is the target. Otherwise, the scheme indicates that
there is no target in the current image.
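A small numerical sketch of this classification step is given below; the feature statistics, weights, and threshold are placeholders that would come from training data in practice.

```python
# Sketch of the weighted discriminant function, the histogram intersection (12),
# and the threshold decision rule. All statistics and weights are placeholders.
import numpy as np

def hist_intersection(h_i, g_j):
    """d_c(H_i, G_j): normalized intersection of two color histograms, cf. (12)."""
    return np.minimum(h_i, g_j).sum() / min(h_i.sum(), g_j.sum())

def discriminant(features, hist, mu, sigma, mu6, sigma6, g_tmpl, w):
    """f_j(alpha_i): five weighted scalar-feature terms plus one histogram term."""
    terms = ((features - mu) / sigma) ** 2          # features, mu, sigma: length 5
    d_c = hist_intersection(hist, g_tmpl)
    return float(np.dot(w[:5], terms) + w[5] * ((d_c - mu6) / sigma6) ** 2)

def decide(scores, gamma):
    """Pick the candidate with the smallest score; declare 'no target' above gamma."""
    i = int(np.argmin(scores))
    return i if scores[i] <= gamma else None
```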

B. Image Tracking
As shown in Fig. 3, after initialization, the image-tracking techniques are employed. The purpose of image tracking is to find the region or point corresponding to the given target. Unlike detection, a search over the entire image is not required; thus, the processing speed of image tracking is faster than that of detection. The image-tracking problem can be solved by using two main approaches: 1) filtering and data association and 2) target representation and localization [13].

Filtering and Data Association: The filtering and data association approach can be considered as a top–down process. The purpose of the filtering is to estimate the states of the target, such as static appearance and location. Typically, the state estimation is achieved by using filtering technologies [38], [40]. It is known (see, for example, [24]) that most of the tracking algorithms are model based, because a good model-based tracking algorithm will greatly outperform any model-free tracking algorithm if the underlying model is found to be a good one. If the measurement noise satisfies a Gaussian distribution, the optimal solution can be achieved by the Kalman filtering technique [4]. In some more general cases, particle filters are more suitable and robust [21]; however, the computational cost increases, and sample degeneracy is also a problem. When multiple targets are tracked in the image sequence, the validation and association of the measurements become a critical issue. Association techniques, such as the probabilistic data association filter (PDAF) and the joint PDAF, are widely used [37].

Target Representation and Localization: Aside from using the motion prediction to find the corresponding region or point, the target representation and localization approach is considered as another efficient way, which is referred to as a bottom–up approach. Among the searching methods, the mean-shift approach using the density gradient is commonly used [5], which tries to search for the peak value of the object probability density. However, its efficiency will be limited when the spatial movement of the target becomes significant.

To take advantage of the aforementioned approaches, using multiple trackers is widely adopted in applications of image tracking. In [37], a tracking scheme integrating motion, color, and geometric features was proposed to realize robust image tracking. In conclusion, combining motion filtering and advanced searching algorithms will definitely make the tracking process more robust, but the computational load is heavier.

In our approach, instead of using multiple trackers simultaneously, a hierarchical tracking scheme is proposed to balance the computational cost and performance, which is shown in Fig. 5. In the model-based image tracking, the Kalman filtering technique is employed to provide accurate estimation and prediction of the position and velocity of a single target, referred to as dynamic information. If the model-based tracker fails to find the target, a mean-shift-based image-tracking method will be activated to retrieve the target back in the image.

Fig. 5. Flow chart of image tracking.

1) Model-Based Image Tracking: Model-based image tracking predicts the possible location of the target in the subsequent frames and then performs the data association based on an updated likelihood function. The advantage of the model-based image tracking is to combine dynamic features with geometric features of the target in the image tracking under noise and occlusion conditions. In addition, several methods are employed to make the tracking more robust and efficient, which are given by the following:

1) narrow the search window in terms of the prediction of the Kalman filter;
2) integrate the spatial information with appearance and set different weightings for the discriminant function.

The motion of the centroid of the target x = [\bar{x}, \dot{\bar{x}}, \bar{y}, \dot{\bar{y}}]^T in the 2-D image coordinates is tracked using a fourth-order Kalman filter, which predicts the possible location of the target in the successive frames. The discrete-time model of the target motion can be expressed as

   x(k|k-1) = \Phi x(k-1) + \Lambda w(k-1)
   z(k) = H x(k) + v(k)   (13)

where w and v denote the input and measurement zero-mean Gaussian noises, and

   \Phi = \begin{bmatrix} 1 & T_s & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T_s \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad
   \Lambda = \begin{bmatrix} \frac{T_s^2}{2} & 0 \\ T_s & 0 \\ 0 & \frac{T_s^2}{2} \\ 0 & T_s \end{bmatrix}, \qquad
   H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}

where T_s is the sampling period of the vision-based tracking system.
A Kalman filter can then be designed based on the aforementioned motion model to estimate the states of the target in the image plane. The filter consists of the following stages.

1) Predicted state

   \hat{x}(k|k-1) = \Phi\hat{x}(k-1).

2) Updated state estimate

   \hat{x}(k) = \hat{x}(k|k-1) + K(k)\left(z(k) - H\hat{x}(k|k-1)\right)

where K(k) is the optimal Kalman gain.

The distance between the location of each object z_i and the predicted location of the target \hat{z} is employed as the dynamic feature defined by

   \tilde{z}_i = z_i(k) - \hat{z}(k) = z_i(k) - H\hat{x}(k|k-1).

Thus, the updated discriminant function, which includes the appearance and spatial information, is given as follows:

   f_j(\alpha_i) = \sum_{k=1}^{5} w_k\left(\frac{\alpha_i(k) - \mu_j(k)}{\sigma_j(k)}\right)^2 + w_6\left(\frac{d_c(H_i, G_j) - \mu_j(6)}{\sigma_j(6)}\right)^2 + w_7\left(\frac{\tilde{z}_i - \mu_j(7)}{\sigma_j(7)}\right)^2.   (14)

Most of the time, the model-based tracker can lock onto the target in the image sequence, but sometimes it may fail due to noise or disturbances, such as partial occlusion. Thus, a scheme is required to check whether the target is still in the image and then activate other trackers.

2) Switching Mechanism: The purpose of the switching mechanism is to check whether the target is still in the image when the target is lost by the model-based tracker. If yes, the mean-shift tracker will be activated. The loss of the target can be attributed to a poor match of features due to noise, distortion, or occlusion in the image. An alternative reason may be the maneuvering motion of the target, such that the target moves out of the image. Therefore, in order to identify the reason and take the appropriate way to find the target again, it is necessary to formulate the decision making as the following hypothesis testing problem:

   H_0: The target is still in the image;
   H_1: The target is not in the image due to maneuvers.

The estimation error is considered as a random variable, which is defined by

   \varepsilon = (H\hat{x}_{k-1} - z_{k-1})^T \Sigma^{-1} (H\hat{x}_{k-1} - z_{k-1})

where H\hat{x}_{k-1} - z_{k-1} is assumed to be N(0, \Sigma) distributed. Under H_0, \varepsilon is Chi-square distributed with two degrees of freedom (the x and y directions):

   \varepsilon < \lambda = \chi_2^2(\alpha), if H_0 is true
   \varepsilon \ge \lambda = \chi_2^2(\alpha), if H_1 is true

where 1 - \alpha is the level of confidence, which should be sufficiently high (for our system, 1 - \alpha = 99%). If H_0 is true, the Chi-square-testing-based switching declares that the target is still in the image and enables the mean-shift-based tracker.

3) Mean-Shift-Based Image Tracking: If the target is still in the image, the continuously adaptive mean-shift (CAMSHIFT) algorithm [5] is employed, which is shown in Fig. 5. This algorithm uses the mean-shift searching method to efficiently obtain the optimal location of the target in the search window. The principal idea is to search for the dominant peak in the feature space based on the previous information and certain assumptions. The detected target is verified by comparing it with an adaptive target template. The CAMSHIFT algorithm consists of three main steps: back projection, mean-shift searching, and search window adaptation.

Step 1) Back projection: In order to search for the target in the image, the probability distribution image needs to be constructed based on the color distribution of the target. The color distribution of the target defined in the hue channel is given by

   \mathrm{hist}_{tg}(i) = \sum_{(x,y)\in\Omega} \delta\!\left(i, \left[\frac{\mathrm{hue}_{tg}(x, y)}{N_{hue}}\right]\right), \qquad i = 1, \ldots, N_{hue}.

Based on the color model of the target, the back projection algorithm is employed to convert the color image to the color probability distribution image. The probability of each pixel I_p(x, y) in the region of interest \Omega_r is calculated based on the model of the target, which is used to map the histogram results and is given by

   I_p(x, y) = \mathrm{hist}_{tg}\!\left(\left[\frac{I_{hue}(x, y)}{N_{hue}}\right]\right)   (15)

where I_{hue} is the pixel value of the image in the hue channel.

Step 2) Mean-shift algorithm: Based on the obtained color density image, a robust nonparametric method, the mean-shift algorithm, is used to search for the dominant peak in the feature space. The mean-shift algorithm is an elegant way of identifying these locations without estimating the underlying probability density function [12].

Recalling the discrete 2-D image probability distributions in (15), the mean location (the centroid) of the search window is computed by

   x_c(k) = \frac{M_{10}}{M_{00}}, \qquad y_c(k) = \frac{M_{01}}{M_{00}}

where k is the number of iterations and

   M_{00} = \sum_{(x,y)\in\Omega_w} I_p(x, y), \qquad M_{10} = \sum_{(x,y)\in\Omega_w} I_p(x, y)\,x, \qquad M_{01} = \sum_{(x,y)\in\Omega_w} I_p(x, y)\,y
where \Omega_w is the region of the search window, M_{00} is the zeroth moment, and M_{10} and M_{01} are the first moments for x and y, respectively. The search window is centered at the mean location c(k) = (x_c(k), y_c(k)). Step 2) is to be repeated until \|c(k) - c(k-1)\| < \varepsilon.

Step 3) Search window adaptation: The region of interest is calculated dynamically using the motion filtering given in Section V-B1. To improve the performance of the CAMSHIFT algorithm, multiple search windows in the region of interest are employed. The initial locations and sizes of the search windows are adopted from the centers and boundaries of the foreground objects, respectively. These foreground objects are obtained using the color segmentation in the region of interest. In the CAMSHIFT algorithm, the size of the search window is dynamically updated according to the moments of the region inside the search window [5]. Generally, more than one target candidate will be detected due to the multiple search windows adopted. To identify the true target, the similarity between the target model and each detected target candidate is measured using the intersection comparison (12). This verification can effectively reduce the risk of detecting the false target.
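A rough sketch of the back projection (15) and of one mean-shift iteration of the search-window centroid is shown below; the hue-to-bin mapping and the window handling are simplifications, and Step 3) is omitted (OpenCV's calcBackProject and CamShift provide equivalent functionality).

```python
# Sketch of Step 1) back projection via (15) and Step 2) mean-shift iteration
# of the search-window centroid. Window adaptation (Step 3) is not shown.
import numpy as np

def back_project(hue_img, hist_tg, n_hue=16):
    """Map each hue value to its target-model probability, cf. (15)."""
    bins = np.clip((hue_img.astype(int) * n_hue) // 180, 0, n_hue - 1)
    return hist_tg[bins]

def mean_shift(prob, window, max_iter=20, tol=1.0):
    """Shift a (x, y, w, h) window to the centroid of the probability mass inside it."""
    x, y, w, h = window                     # initial window assumed inside the image
    img_h, img_w = prob.shape
    for _ in range(max_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()                     # zeroth moment M00
        if m00 <= 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        xc = (roi * xs).sum() / m00         # M10 / M00
        yc = (roi * ys).sum() / m00         # M01 / M00
        nx = int(round(x + xc - w / 2))
        ny = int(round(y + yc - h / 2))
        nx = min(max(nx, 0), img_w - w)
        ny = min(max(ny, 0), img_h - h)
        if abs(nx - x) < tol and abs(ny - y) < tol:
            break
        x, y = nx, ny
    return x, y, w, h
```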
C. Target-Following Control

We proceed to design a comprehensive target-following system in this section. It consists of two main layers: the pan/tilt servomechanism control and the UAV following control. The overall structure of the target-following control is shown in Fig. 6. As mentioned in Section II, a pan/tilt servomechanism is employed in the first layer to control the orientation of the camera to keep the target in an optimal location in the image plane, namely, eye-in-hand visual servoing [9], [10], which makes target tracking in the video sequence more robust and efficient. The parameters associated with the pan/tilt servo control in Fig. 6 are to be introduced in detail later. In the second layer, the UAV is controlled to maintain a constant relative distance between the moving target and the UAV in flight.

Fig. 6. Block diagram of the tracking control scheme.

1) Control of the Pan/Tilt Servomechanism: As shown in Fig. 6, given a generic point P, p_i and p_i^* are the measured and desired locations of the projected point P in the image plane, respectively; e = [e_\phi, e_\theta]^T is the tracking error; u = [u_\phi, u_\theta]^T is the output of the tracking controller; and v = [v_\phi, v_\theta]^T is the output of the pan/tilt servomechanism. M is the camera model, which maps points in the 3-D space to the projected points in the 2-D image frame. N is a function that calculates the orientation of an image point p_i with respect to the UAV under the current v. As mentioned in the definitions of the coordinate systems, the orientation of P with respect to the UAV can be defined using azimuth and elevation angles in the spherical coordinate system, which is described by the two rotation angles p_e = [p_\phi, p_\theta]^T.

In image processing, the distortion of the lens is compensated, and the origin of the image plane is set at the principal point. Thus, we can obtain a simplified pinhole projection model as

   \begin{pmatrix} p_i \\ 1 \end{pmatrix} = \frac{1}{\lambda} \begin{bmatrix} f_x & 0 & 0 \\ 0 & f_y & 0 \\ 0 & 0 & 1 \end{bmatrix} p_c   (16)

with

   p_c = R_{c/n}\, p_n + t_{c/n}   (17)

where \lambda = z_c is the depth of the point P in the camera coordinate system; f_x and f_y are the vertical and horizontal focal lengths in pixels, respectively; and R_{c/n} and t_{c/n} are the rotation matrix and the translation vector, respectively, which define the rigid-body transformation from the NED frame to the camera frame. Thus, we can define M as

   p_i = M(p_n, v) = \frac{1}{\lambda} \begin{bmatrix} f_x & 0 & 0 \\ 0 & f_y & 0 \end{bmatrix} R_{c/n}\,(p_n - t_{c/n}).

Next, to derive the function N, we write the transformation between the camera coordinate system and the servo base coordinate system as

   p_s = R_{s/c}(v)\, p_c   (18)

where p_s is the coordinate of the point P relative to the servo base coordinate system and R_{s/c} describes the rotation from the servo base frame to the camera frame. We can then combine (18) with (16) and define the coordinate of the target in the spherical coordinate system as

   p_e = \begin{pmatrix} p_\phi \\ p_\theta \end{pmatrix} = N(p_i, v) = \begin{pmatrix} \sin^{-1}\left(\bar{y}_s / \bar{r}_{sp}\right) \\ \tan^{-1}\left(\bar{x}_s / \bar{z}_s\right) \end{pmatrix}   (19)

where

   \begin{pmatrix} \bar{x}_s \\ \bar{y}_s \\ \bar{z}_s \end{pmatrix} = R_{s/c}(v) \begin{bmatrix} f_x^{-1} & 0 & 0 \\ 0 & f_y^{-1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix}, \qquad \bar{r}_{sp} = \sqrt{\bar{x}_s^2 + \bar{y}_s^2 + \bar{z}_s^2}.   (20)
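For illustration only, the function N in (19) and (20) can be coded as below; the particular pan/tilt rotation parameterization used for R_{s/c}(v) and the focal length values are assumptions (in practice they come from the servo geometry and camera calibration).

```python
# Sketch of the orientation function N in (19)-(20): image point -> azimuth/elevation.
# The pan/tilt rotation convention and fx, fy are illustrative assumptions.
import numpy as np

def rot_sc(pan, tilt):
    """Illustrative camera-to-servo-base rotation: pan about Y, then tilt about X."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    return Ry @ Rx

def orientation_N(xi, yi, pan, tilt, fx=600.0, fy=600.0):
    """Return (p_phi, p_theta) of an image point, cf. (19)-(20)."""
    ray_cam = np.array([xi / fx, yi / fy, 1.0])
    xs, ys, zs = rot_sc(pan, tilt) @ ray_cam
    r = np.sqrt(xs**2 + ys**2 + zs**2)
    p_phi = np.arcsin(ys / r)        # elevation
    p_theta = np.arctan2(xs, zs)     # azimuth
    return p_phi, p_theta
```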
The pan/tilt servomechanism can be approximately considered as two decoupled servomotors, which regulate the visual sensor for horizontal and vertical rotations, respectively. The dynamic model of each servomotor can be described by a standard second-order system. Before proceeding to design the control law for the pan/tilt servomechanism, we define the tracking error function as

   e(k) = p_e - p_e^* = N(p_i(k), v(k)) - N(p_i^*, v(k))   (21)

where p_e^* denotes the desired orientation of the camera. The control inputs will be sent to the pan/tilt servos after the vision-based target detection algorithm, which generally costs about one sampling period. To track the moving target efficiently, we calculate the pan/tilt servo control inputs using the predicted location of the target in the subsequent frame, which is derived from (13) and given by

   \hat{p}_i(k+1) = \hat{z}(k+1|k) = H\hat{x}(k+1|k).   (22)

In implementation, it is not easy to measure the output of the pan/tilt servo v in (21). We assume that the bandwidth of the pan/tilt servomechanism is much faster than that of the control system. We can then ignore the transient of the pan/tilt servos and consider them as scaling factors with a one-step delay. The estimate of v is defined as

   \hat{v}(k) = K_d\, u(k-1).   (23)

Replacing v and p_i with \hat{v} and \hat{p}_i in (21), we can then obtain the modified error function as

   e(k) = N(\hat{p}_i(k+1), \hat{v}(k)) - N(p_i^*, \hat{v}(k)).   (24)

The purpose of the design of the tracking control law is to minimize the tracking error function given in (24) by choosing a suitable control input u(k). Since the dynamic model of the pan/tilt servos is relatively simple, we employ a discrete-time proportional-integral (PI) controller (see, for example, [16]), which is structurally simple but fairly robust and is very suitable for our real-time application. The incremental implementation of the PI controller is given by

   \Delta u(k) = K_p\left[e(k) - e(k-1)\right] + \frac{K_p T_s}{T_i}\, e(k)

where the proportional gain and the integral time are chosen as K_p = 0.65 and T_i = 0.8, respectively. We note that two identical controllers are used for the pan and tilt servos, respectively, since the dynamics of these two servos are very close.
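The incremental PI law can be sketched as follows; the gains match the values quoted above, while the sampling period is an assumption.

```python
# Sketch of the incremental discrete-time PI law used for each pan/tilt channel.
class IncrementalPI:
    def __init__(self, kp=0.65, ti=0.8, ts=0.1):
        self.kp, self.ti, self.ts = kp, ti, ts
        self.prev_error = 0.0
        self.output = 0.0

    def step(self, error):
        # delta_u(k) = Kp [e(k) - e(k-1)] + (Kp Ts / Ti) e(k)
        du = self.kp * (error - self.prev_error) + self.kp * self.ts / self.ti * error
        self.output += du            # u(k) = u(k-1) + delta_u(k)
        self.prev_error = error
        return self.output

# One controller per channel, e.g. pan = IncrementalPI(); tilt = IncrementalPI()
```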
2) Following Control of the UAV: As shown in Fig. 6, to estimate the relative distance between the target and the UAV, we combine the camera model (16) with the transformation in (17) and generate the overall geometric model from an ideal image to the NED frame

   p_n = \lambda R_{n/c} \begin{bmatrix} f_x^{-1} & 0 & 0 \\ 0 & f_y^{-1} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} + t_{n/c}.   (25)

We assume that the ground is flat and that the height of the UAV above the ground h is known. We have

   R_{n/c} = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}, \qquad t_{n/c} = \begin{pmatrix} x_{n/c} \\ y_{n/c} \\ z_{n/c} \end{pmatrix}   (26)

which can be calculated by using the measurements of the onboard navigation sensors. Based on the assumption that the target is on the ground, z_n is equal to zero. We can then derive \lambda as

   \lambda = \frac{-z_{n/c}}{r_7 \frac{x_i}{f_x} + r_8 \frac{y_i}{f_y} + r_9}

which, together with (25), yields

   \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix} = \begin{pmatrix} \lambda\left(r_1 \frac{x_i}{f_x} + r_2 \frac{y_i}{f_y} + r_3\right) + x_{n/c} \\ \lambda\left(r_4 \frac{x_i}{f_x} + r_5 \frac{y_i}{f_y} + r_6\right) + y_{n/c} \\ 0 \end{pmatrix}.   (27)

As shown in Fig. 6, the relative distance between the target and the UAV is estimated, which is employed as the reference signal to guide the UAV to follow the motion of the target. The tracking reference for the UAV is defined as

   \begin{pmatrix} x_{uav} \\ y_{uav} \\ z_{uav} \\ \psi_{uav} \end{pmatrix}_{ref} = \begin{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix} - \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} R_{n/b} \begin{pmatrix} c_x \\ c_y \\ 0 \end{pmatrix} \\ h_0 \\ \psi_0 \end{pmatrix}

where c_x and c_y are the desired relative distances between the target and the UAV in the X_b- and Y_b-axes, respectively, h_0 is the predefined height of the UAV above the ground, \psi_0 is the predefined heading angle of the UAV, and R_{n/b} is the rotation matrix from the body frame to the local NED frame, which can be calculated in terms of the output of the onboard navigation sensors.
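A compact sketch of this flat-ground geolocation, (25) through (27), is given below; the focal lengths are placeholders, and R_{n/c} and t_{n/c} are assumed to be supplied by the navigation solution and camera calibration.

```python
# Sketch of the flat-ground target geolocation (25)-(27): image point -> NED position.
import numpy as np

def target_ned(xi, yi, R_nc, t_nc, fx=600.0, fy=600.0):
    """R_nc, t_nc: camera-to-NED rotation and translation from navigation data."""
    ray = np.array([xi / fx, yi / fy, 1.0])   # [xi/fx, yi/fy, 1]^T
    r3 = R_nc[2, :]                           # third row: r7, r8, r9
    lam = -t_nc[2] / (r3 @ ray)               # depth from the zero-height constraint
    p_n = lam * (R_nc @ ray) + t_nc           # (25) with lambda from the flat ground
    return np.array([p_n[0], p_n[1], 0.0])    # target lies on the ground plane, cf. (27)
```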
VI. EXPERIMENTAL RESULTS

To verify the proposed vision system, multiple tests of the complete system were conducted. During these tests, the vision-based unmanned helicopter SheLion was hovering autonomously at a certain position. If the moving target entered the view of the onboard camera, the target would be identified and tracked in the video sequence by the vision system automatically. Based on the vision information, the pan/tilt servomechanism was controlled to keep the target in a certain position in the image, as described in Section V-C1. Then, the operator could command the UAV to enter the following mode, in which the UAV followed the motion of the target autonomously based on the estimated relative distance, using the algorithm proposed in Section V-C2.

The experimental results of the vision-based target detection and tracking in flight are shown in Table I, which indicate that the proposed vision algorithm could effectively identify and track the target in the video sequence in the presence of the disturbance of unknown motion between the UAV and the target.

TABLE I. EXPERIMENTAL RESULTS OF TARGET DETECTION AND TRACKING IN FLIGHT.

One example of the pan/tilt servo tracking control in flight is also shown in Fig. 7. The solid line in Fig. 7 indicates the expected position of the target in the image, and the dashed line indicates the actual location of the target in the image during the flight test. From Fig. 7, we can observe that, in spite of the unknown motion between the UAV and the target, the pan/tilt servomechanism can effectively keep the target in a boxlike neighborhood of the center point of the image by employing the vision-based pan/tilt servo control.

Fig. 7. Test result of the vision-based servo following.

In the flight tests, the relative distance between the target and the UAV was estimated using the approach presented earlier, which is shown in Fig. 8. The relative distance was also measured using the GPS receiver. The experimental results in Fig. 8 indicate that the vision sensor can provide acceptable relative distance estimates between the UAV and the target based on the altitude information of the UAV and the location of the target in the image.

Fig. 8. Test result of the relative distance estimation.

One example of the ground target following is shown in Fig. 9. In the experiment, the target was manually controlled to move randomly on the flat ground, and the UAV followed the motion of the target automatically based on the scheme proposed in the previous sections. From Fig. 9, we observe that the UAV can follow the trajectory of the target and keep a constant relative distance between the UAV and the target. The results for the moving ground target following of the UAV indicate the efficiency and robustness of the proposed vision-based following scheme. The videos of the vision-based target following tests are available at http://uav.ece.nus.edu.sg/video.html.

Fig. 9. Test result of the vision-based target following.

VII. CONCLUSION

In this paper, we have presented the comprehensive design and implementation of a vision system for a UAV, including the hardware construction, the software development, and an advanced ground target seeking and following scheme. Multiple real flight tests were conducted to verify the presented vision system. The experimental results show that this vision system is not only able to automatically detect and track the predefined ground target in the video sequence but also able to guide the UAV to follow the motion of the target in flight, demonstrating the robustness and efficiency of the developed vision system for UAVs. Our future research focus is to utilize the system for implementing vision-based automatic landing of the UAV on a moving platform in an environment without GPS signals.

REFERENCES

[1] M. J. Adriaans, R. C. Cribbs, B. T. Ingram, and B. P. Gupta, "A160 Hummingbird unmanned air vehicle development program," in Proc. AHS Int. Specialists' Meeting—Unmanned Rotorcraft: Des., Control Test., Chandler, AZ, 2007.
[2] O. Amidi, T. Kanade, and R. Miller, "Vision-based autonomous helicopter research at Carnegie Mellon Robotics Institute 1991–1997," in Proc. Amer. Helicopter Soc. Int. Conf., Gifu, Japan, 1998, pp. 1–12.
[3] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," Int. J. Comput. Vis., vol. 12, no. 1, pp. 43–77, Feb. 1994.
[4] Y. Boykov and D. P. Huttenlocher, "Adaptive Bayesian recognition in tracking rigid objects," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Hilton Head, SC, 2000, pp. 697–704.
[5] G. R. Bradski and S. Clara, "Computer vision face tracking for use in a perceptual user interface," Intel Technology Journal Q2 '98, pp. 1–15, 1998.
[6] G. W. Cai, B. M. Chen, K. M. Peng, M. B. Dong, and T. H. Lee, "Modeling and control of the yaw channel of a UAV helicopter," IEEE Trans. Ind. Electron., vol. 55, no. 9, pp. 3426–3434, Sep. 2008.
[7] R. Canals, A. Roussel, J. L. Famechon, and S. Treuillet, "A biprocessor-oriented vision-based target tracking system," IEEE Trans. Ind. Electron., vol. 49, no. 2, pp. 500–506, Apr. 2002.
[8] M. E. Campbell and W. W. Whitacre, "Cooperative tracking using vision measurements on seascan UAVs," IEEE Trans. Control Syst. Technol., vol. 15, no. 4, pp. 613–626, Jul. 2007.
[9] F. Chaumette and S. Hutchinson, "Visual servo control Part I: Basic approaches," IEEE Robot. Autom. Mag., vol. 13, no. 4, pp. 82–90, Dec. 2006.
[10] F. Chaumette and S. Hutchinson, "Visual servo control Part II: Advanced approaches," IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 109–118, Mar. 2007.
[11] C. Chen and Y. F. Zheng, "Passive and active stereo vision for smooth surface detection of deformed plates," IEEE Trans. Ind. Electron., vol. 42, no. 3, pp. 300–306, Jun. 1995.
[12] Y. Cheng, "Mean shift, mode seeking, and clustering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 8, pp. 790–799, Aug. 1995.
[13] D. Comaniciu, V. Ramesh, and P. Meer, "Kernel-based object tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 5, pp. 564–577, May 2003.
[14] B. Enderle, "Commercial applications of UAVs in Japanese agriculture," presented at the AIAA 1st Unmanned Aerospace Vehicles, Systems, Technologies, and Operations (UAV) Conf., Portsmouth, VA, 2002, AIAA-2002-3400.
[15] J. Ferruz, V. M. Vega, A. Ollero, and V. Blanco, "Reconfigurable control architecture for distributed systems in the HERO autonomous helicopter," IEEE Trans. Ind. Electron., vol. 58, no. 12, pp. 5311–5318, Dec. 2011.
[16] G. Franklin, J. D. Powell, and A. E. Naeini, Feedback Control of Dynamic Systems, 4th ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
[17] N. Guenard, T. Hamel, and R. Mahony, "A practical visual servo control for an unmanned aerial vehicle," IEEE Trans. Robot., vol. 24, no. 2, pp. 331–340, Apr. 2008.
[18] S. Hrabar, G. S. Sukhatme, P. Corke, K. Usher, and J. Roberts, "Combined optic-flow and stereo-based navigation of urban canyons for a UAV," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2005, pp. 3309–3316.
[19] W. M. Hu, T. N. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 34, no. 3, pp. 334–352, Aug. 2004.
[20] Y. Hu, W. Zhao, and L. Wang, "Vision-based target tracking and collision avoidance for two autonomous robotic fish," IEEE Trans. Ind. Electron., vol. 56, no. 5, pp. 1401–1410, May 2009.
[21] M. Isard and A. Blake, "Condensation conditional density propagation for visual tracking," Int. J. Comput. Vis., vol. 29, no. 1, pp. 5–28, 1998.
[22] E. N. Johnson, A. J. Calise, Y. Watanabe, J. Ha, and J. C. Neidhoefer, "Real-time vision-based relative aircraft navigation," J. Aerosp. Comput., Inf. Commun., vol. 4, no. 4, pp. 707–738, 2007.
[23] J. Kim and S. Sukkarieh, "SLAM aided GPS/INS navigation in GPS denied and unknown environments," in Proc. Int. Symp. GNSS/GPS, Sydney, Australia, 2004.
[24] X. R. Li and V. P. Jilkov, "Survey of maneuvering target tracking, Part I: Dynamic models," IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 4, pp. 1333–1364, Oct. 2003.
[25] F. Lin, K. Y. Lum, B. M. Chen, and T. H. Lee, "Development of a vision-based ground target detection and tracking system for a small unmanned helicopter," Sci. China—Series F: Inf. Sci., vol. 52, no. 11, pp. 2201–2215, Nov. 2009.
[26] B. Ludington, E. Johnson, and G. Vachtsevanos, "Augmenting UAV autonomy," IEEE Robot. Autom. Mag., vol. 13, no. 3, pp. 63–71, Sep. 2006.
[27] M. Meingast, C. Geyer, and S. Sastry, "Vision based terrain recovery for landing unmanned aerial vehicles," in Proc. IEEE Conf. Decision Control, Atlantis, Bahamas, 2004, pp. 1670–1675.
[28] L. Mejias, S. Saripalli, P. Cervera, and G. S. Sukhatme, "Visual servoing of an autonomous helicopter in urban areas using feature tracking," J. Field Robot., vol. 23, no. 3/4, pp. 185–199, Apr. 2006.
[29] R. Nelson and J. R. Corby, "Machine vision for robotics," IEEE Trans. Ind. Electron., vol. IE-30, no. 3, pp. 282–291, Aug. 1983.
[30] F. Porikli, "Achieving real-time object detection and tracking under extreme conditions," J. Real-Time Image Process., vol. 1, no. 1, pp. 33–40, Oct. 2006.
[31] F. Sadjadi, "Theory of invariant algebra and its use in automatic target recognition," in Physics of Automatic Target Recognition, vol. 3. New York: Springer-Verlag, 2007, pp. 23–40.
[32] Z. Sarris and S. Atlas, "Survey of UAV applications in civil markets," in Proc. 9th Mediterranean Conf. Control Autom., Dubrovnik, Croatia, 2001, vol. WA2-A, pp. 1–11.
[33] A. R. Smith, "Color gamut transform pairs," in Proc. 5th Annu. Conf. Comput. Graph. Interactive Tech., New York, 1978, pp. 12–19.
[34] O. Spinka, O. Holub, and Z. Hanzalek, "Low-cost reconfigurable control system for small UAVs," IEEE Trans. Ind. Electron., vol. 58, no. 3, pp. 880–889, Mar. 2011.
[35] T. Y. Tian, C. Tomasi, and D. J. Heeger, "Comparison of approaches to egomotion computation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., San Francisco, CA, 1996, pp. 315–320.
[36] S. Tsugawa, "Vision-based vehicles in Japan: Machine vision systems and driving control systems," IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 398–405, Aug. 1994.
[37] H. Veeraraghavan, P. Schrater, and N. Papanikolopoulos, "Robust target detection and tracking through integration of motion, color and geometry," Comput. Vis. Image Understand., vol. 103, no. 2, pp. 121–138, Aug. 2006.
[38] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: Real-time tracking of the human body," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 780–785, Jul. 1997.
[39] D. Xu, L. Han, M. Tan, and Y. F. Li, "Ceiling-based visual positioning for an indoor mobile robot with monocular vision," IEEE Trans. Ind. Electron., vol. 56, no. 5, pp. 1617–1628, May 2009.
[40] Q. M. Zhou and J. K. Aggarwal, "Object tracking in an outdoor environment using fusion of features and cameras," Image Vis. Comput., vol. 24, no. 11, pp. 1244–1255, Nov. 2006.

Feng Lin (S'10) received the B.Eng. degree in computer science and control and the M.Eng. degree in system engineering from Beihang University, Beijing, China, in 2000 and 2003, respectively, and the Ph.D. degree in computer and electrical engineering from the National University of Singapore, Singapore, in 2011. He is currently a Research Associate with the Department of Electrical and Computer Engineering, National University of Singapore. His main research interests are unmanned aerial vehicles, vision-based control and navigation, target tracking, robot vision, and embedded vision systems.

Xiangxu Dong received the B.S. degree from the Department of Communications Engineering, Xiamen University, Xiamen, China, in 2006. Since 2007, he has been working toward the Ph.D. degree at the National University of Singapore, Singapore. His research interests include real-time software, cooperative coordination, formation control, and unmanned aerial vehicles.
Ben M. Chen (S'89–M'92–SM'00–F'07) received the B.S. degree in mathematics and computer science from Xiamen University, Xiamen, China, in 1983, the M.S. degree in electrical engineering from Gonzaga University, Spokane, WA, in 1988, and the Ph.D. degree in electrical and computer engineering from Washington State University, Pullman, in 1991. From 1983 to 1986, he was a Software Engineer with South-China Computer Corporation, Guangzhou, China, and from 1992 to 1993, he was an Assistant Professor with the Department of Electrical Engineering, The State University of New York, Stony Brook. Since August 1993, he has been with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore, where he is currently a Professor. He is the author/coauthor of eight research monographs, including H2 Optimal Control (Prentice Hall, 1995), H-infinity Control and Its Applications (Springer, 1998; Chinese edition published by Science Press, 2010), Robust and H-infinity Control (Springer, 2000), Linear Systems Theory (Birkhauser, 2004; Chinese translation published by Tsinghua University Press, 2008), Hard Disk Drive Servo Systems (Springer, 2002 and 2006), and Unmanned Rotorcraft Systems (Springer, 2011). He served/serves on the editorial boards of a number of international journals, including IEEE TRANSACTIONS ON AUTOMATIC CONTROL, Automatica, Systems and Control Letters, and Journal of Control Theory and Applications. His current research interests include robust control, systems theory, unmanned systems, and financial market modeling. Dr. Chen was the recipient of the Best Poster Paper Award, 2nd Asian Control Conference, Seoul, Korea (in 1997); University Researcher Award, National University of Singapore (in 2000); Prestigious Engineering Achievement Award, Institution of Engineers, Singapore (in 2001); Temasek Young Investigator Award, Defence Science and Technology Agency, Singapore (in 2003); Best Industrial Control Application Prize, 5th Asian Control Conference, Melbourne, Australia (in 2004); Best Application Paper Award, 7th Asian Control Conference, Hong Kong (in 2009); and Best Application Paper Award, 8th World Congress on Intelligent Control and Automation, Jinan, China (in 2010).

Kai-Yew Lum (M'97) received the Diplôme d'Ingénieur from the Ecole Nationale Supérieure d'Ingénieurs Electriciens de Grenoble, Grenoble, France, in 1988 and the M.Sc. and Ph.D. degrees from the Department of Aerospace Engineering, University of Michigan, Ann Arbor, in 1995 and 1997, respectively. He was with DSO National Laboratories, Singapore, as a Junior Engineer from 1990 to 1993 and then as a Senior Engineer from 1998 to 2001, specializing in control and guidance technologies. In 2001, he joined Temasek Laboratories, National University of Singapore, Singapore, where he is currently the Principal Investigator of research programs in control, guidance, and multiagent systems. His research interests include nonlinear dynamics and control, estimation and optimization, and reduced-order modeling and control in aerodynamics. Mr. Lum is a member of the IEEE Control Systems Society and the American Institute of Aeronautics and Astronautics.

Tong H. Lee (M'90) received the B.A. degree with first class honors in the Engineering Tripos from Cambridge University, Cambridge, U.K., in 1980 and the Ph.D. degree from Yale University, New Haven, CT, in 1987. He is currently a Professor with the Department of Electrical and Computer Engineering and the National University of Singapore (NUS) Graduate School for Integrative Sciences and Engineering, NUS, Singapore. He was a Past Vice-President (Research) of NUS. He currently holds Associate Editor appointments in Control Engineering Practice [an International Federation of Automatic Control (IFAC) journal] and the International Journal of Systems Science (Taylor and Francis, London). In addition, he is the Deputy Editor-in-Chief of the IFAC Mechatronics journal. He has coauthored five research monographs (books) and is the holder of four patents (two of which are in the technology area of adaptive systems, and the other two are in the area of intelligent mechatronics). He has published more than 300 international journal papers. His research interests are in the areas of adaptive systems, knowledge-based control, intelligent mechatronics, and computational intelligence. Dr. Lee currently holds Associate Editor appointments in the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS and the IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS. He was a recipient of the Cambridge University Charles Baker Prize in Engineering, the 2004 Asian Control Conference (ASCC) (Melbourne) Best Industrial Control Application Paper Prize, the 2009 IEEE International Conference on Mechatronics and Automation Best Paper in Automation Prize, and the 2009 ASCC Best Application Paper Prize. He was an Invited Panelist at the 2000 World Automation Congress, Maui, HI; an Invited Keynote Speaker for the 2003 IEEE International Symposium on Intelligent Control, Houston, TX; an Invited Keynote Speaker for the 2007 International Conference on Life System Modeling and Simulation (LSMS), Shanghai, China; an Invited Expert Panelist for the 2009 IEEE International Conference on Advanced Intelligent Mechatronics; an Invited Plenary Speaker for the 2009 International Association of Science and Technology for Development (IASTED) International Conference on Robotics, Telematics and Applications, Beijing, China; an Invited Keynote Speaker for LSMS 2010, Shanghai; and an Invited Keynote Speaker for the 2010 IASTED International Conference on Control and Applications, Banff, AB, Canada.
