
2016 IEEE International Conference on Consumer Electronics-China (ICCE-China)

Device-Free Human Localization Using Panoramic Camera and Indoor Map

Yongliang Sun, Member, IEEE, Kanglian Zhao, Jian Wang, Wenfeng Li, Guangwei Bai, Naitong Zhang
School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
School of Electronic Science and Engineering, Nanjing University, Nanjing, China
Communication Research Center, Harbin Institute of Technology, Harbin, China

syl peter@163.com

Abstract—The widespread application of camera-based surveillance systems has inspired extensive investigation into human localization. In this paper, we propose a device-free localization method using a panoramic camera and an indoor map. The proposed method provides precise location information for a human object without any terminal device. After preprocessing the images observed with a panoramic camera, we detect the human object as foreground using the widely used background subtraction method. Then we search all the foreground pixels and find the pixel whose location best represents the object's location. The pixel location on the image is mapped to a corresponding location on an indoor map. Finally, localization coordinates are obtained in the coordinate system we construct. Experimental results show that our localization method needs no terminal device and achieves a mean error of 0.37m, which is far less than those of the popular fingerprinting and trilateration localization methods.

Index Terms—indoor localization, human object detection, background subtraction, panoramic camera

I. INTRODUCTION

With the development of mobile computing and the popularity of smart terminal devices, location-based services (LBS) have played an important role in people's daily lives, offering accurate location information for services such as object localization, emergency rescue, and social networking [1]. Owing to signal attenuation and complex radio propagation, the performance of satellite and cellular-based localization systems is very limited in indoor environments. Therefore, various indoor localization systems have been developed to offer indoor LBS using techniques such as ultra-wideband (UWB), ultrasonic, radio frequency identification (RFID), wireless local area network (WLAN), and vision processing [2-4].

Among these existing indoor localization systems, WLAN-based indoor localization has been favored because of the widely deployed WLAN access points (APs) and commonly available terminal devices. Unfortunately, WLAN-based localization still requires terminal devices, with which received signal strength (RSS) is measured for calculating localization coordinates [5]. This is inconvenient for special application scenarios such as localizing children and elderly people, because users may not always be able to carry a WLAN terminal device. The rapid proliferation of cameras for surveillance and security creates tremendous opportunities for human localization in which people no longer need to carry any terminal device.

Currently, many camera-based localization methods have been developed. Camera-based localization and trajectory reconstruction were formulated as an optimization problem in [6], and this problem was solved by singular value decomposition (SVD). L. Liu et al. investigated the localization-oriented coverage problem of camera networks and formulated the localization problem based on Bayesian estimation to obtain the needed camera density [7]. By incorporating camera parameters and some known location information, [8] proposed a location-constrained maximum a posteriori algorithm for camera-based localization. Y. S. Lin et al. proposed a series of image transforms based on the vanishing point of vertical lines [9]; the transforms improved probabilistic occupancy map (POM)-based human localization in both outdoor and indoor environments.

Since monitoring an area with just one camera is the most common setup, research on human localization has concentrated on single-camera methods. Furthermore, a single-camera localization method is a fundamental and essential component of multi-camera systems. Therefore, we investigate single-camera human localization in this paper and propose a device-free human localization method using a panoramic camera and an indoor map. The proposed method offers accurate location information for users without any terminal device. In our experiment, we utilize a stationary panoramic camera hung from the ceiling of an office room. We first recognize and extract the moving human object observed in the camera view through the background subtraction method. The pixel location of the human object is searched, and the found location is mapped to a corresponding location on an indoor map. The mapped location on the map is finally transformed into localization coordinates.

The remainder of this paper is organized as follows: Section II introduces the related works of our proposed localization method, namely the principles of background subtraction and Gaussian lowpass filtering applied in this paper. In Section III, the proposed camera-based localization method is described in detail. The experimental setup, results, and analyses are presented in Section IV. Finally, Section V concludes the paper.

978-1-5090-6193-8/16/$31.00 © 2016 IEEE



II. RELATED WORKS

A. Background Modeling and Subtraction

Moving object detection is one of the most important steps in camera-based localization, and it can be achieved by separating the moving foreground object from a modeled background. Many methods have been proposed for object detection, such as background subtraction, temporal difference, and optical flow [10]. Because we utilize a stationary panoramic camera in our experiment and there is no greatly varying illumination in the scene, it is easy to obtain a background image. Furthermore, background subtraction has a fast processing speed. We therefore detect the human object with the background subtraction method in this paper.

Background subtraction first models a background approximation that is similar to the static scene without the foreground object. The method then detects the foreground by differentiating the modeled background from the image observed in the camera view. Although many methods have been proposed for modeling the background, such as mean filtering, median filtering, Kalman filtering, and hidden Markov models, in this paper we apply the widely used mean filtering method for background modeling.

First, assume an observed image sequence is denoted as {F_1, F_2, ..., F_t}. With the mean filtering method, the background pixel value B_t(x, y) at (x, y) on the image can be computed by

B_t(x, y) = (1/L) \sum_{i=0}^{L-1} F_{t-i}(x, y)    (1)

where F_{t-i}(x, y) is the pixel value at (x, y) of image F_{t-i}, and L is the number of images used for computing the pixel mean value B_t(x, y). When all the background pixel values are computed, we obtain the background image B_t.

After modeling the background, let F_t(x, y) be the pixel value at (x, y) of the current image; we then test

|B_t(x, y) - F_t(x, y)| > T    (2)

where T is a threshold that determines whether the pixel at (x, y) of the current image belongs to the foreground object. If (2) is satisfied, the pixel is defined as a foreground pixel; otherwise, it is considered a background pixel.

B. Gaussian Lowpass Filtering

In order to decrease the influence of noise and to separate the foreground object from the background image precisely, we apply a Gaussian lowpass filter after image gray processing in the image preprocessing phase [11]. A Gaussian lowpass filter is a filter whose impulse response is a Gaussian function, and it is able to remove Gaussian noise from an image. The Gaussian lowpass filter is given by

h_G(n_1, n_2) = e^{-(n_1^2 + n_2^2)/(2\sigma^2)}
h(n_1, n_2) = h_G(n_1, n_2) / \sum_{n_1} \sum_{n_2} h_G(n_1, n_2)    (3)

where n_1 and n_2 are the template sizes along the horizontal and vertical axes, respectively, and \sigma is the standard deviation of the filter.

Fig. 1. Flow diagram of the localization method (initialization; image input; preprocessing: resizing and rotation, gray processing, Gaussian filtering; background modeling; difference calculation; threshold test; pixel location searching; map input and preprocessing; mapping and transformation; localization coordinates).

III. PROPOSED LOCALIZATION METHOD

Generally, our proposed camera-based localization method can be divided into three phases: image preprocessing, foreground detection, and localization coordinate calculation. The flow diagram of the method is shown in Fig. 1.

In the image preprocessing phase, we derive an image sequence from a surveillance video recorded with a panoramic camera. An input image is first resized and rotated in order to obtain a more accurate localization pixel. Then the image is transformed from a true-color image to a gray image to reduce calculation complexity and to extract morphological characteristics. After reverse color processing, the image is denoised with a Gaussian lowpass filter.

In the second phase, theoretically speaking, when a great number of images are used to compute the background image B_t with (1), the background image B_t can closely approach the real scene. However, this results in a high computational cost. Therefore, we update the background pixel B_t(x, y) with

B_t(x, y) = B_{t-1}(x, y) + (1/L)(F_t(x, y) - F_{t-L}(x, y)).    (4)

According to (4), the new background pixel value can be computed from the previous and current image pixel values as well as the last background pixel value at (x, y), which greatly reduces the computational cost.
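As a concrete illustration of the steps above, the mean-filter background model of (1), the threshold test of (2), and the running update of (4) can be sketched in Python as follows. The function names and the list-of-lists grayscale image representation are our own illustrative choices, not the paper's implementation:

```python
def model_background(frames):
    """Eq. (1): mean-filter background averaged over the last L frames."""
    L = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / L for c in range(cols)]
            for r in range(rows)]

def update_background(bg, newest, oldest, L):
    """Eq. (4): running update -- add the newest frame, drop the oldest,
    avoiding a full re-average of the window."""
    return [[bg[r][c] + (newest[r][c] - oldest[r][c]) / L
             for c in range(len(bg[0]))] for r in range(len(bg))]

def foreground_mask(bg, frame, T):
    """Eq. (2): a pixel is foreground (1) when |B - F| exceeds T."""
    return [[1 if abs(bg[r][c] - frame[r][c]) > T else 0
             for c in range(len(bg[0]))] for r in range(len(bg))]
```

With T = 32, the threshold later used in Section IV-B, only pixels differing from the modeled background by more than 32 gray levels are kept as foreground.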

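The normalized Gaussian lowpass template of (3), used in the preprocessing phase, can likewise be built directly from its definition. This is a sketch; centring the window at zero assumes odd template sizes:

```python
import math

def gaussian_kernel(n1, n2, sigma):
    """Eq. (3): Gaussian template of size n1 x n2 (odd sizes assumed),
    normalized so that its coefficients sum to one."""
    h1, h2 = n1 // 2, n2 // 2
    g = [[math.exp(-(i * i + j * j) / (2.0 * sigma * sigma))
          for j in range(-h2, h2 + 1)] for i in range(-h1, h1 + 1)]
    s = sum(sum(row) for row in g)  # normalization term of eq. (3)
    return [[v / s for v in row] for row in g]
```

With n_1 = n_2 = 5 and \sigma = 1.5, the settings of Section IV-B, the centre coefficient is the largest and all 25 coefficients sum to one.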



Fig. 2. Panoramic camera on the ceiling.

Fig. 3. Experimental area plan.

Then a difference image between the background and the current image is calculated, and the foreground object is detected using (2).

In the last phase, assume the foreground object obtained in the foreground detection phase consists of M pixels and that the camera pixel is at (x', y') on the image. Because we utilize a panoramic camera near the central area of the room ceiling, the foreground pixel location nearest to (x', y') represents the foreground object's location best. Therefore, we search for the nearest pixel location using the Euclidean distance. The Euclidean distance between every foreground pixel location and the camera location (x', y') is calculated, and the location R_r = (x_r, y_r) with the minimum Euclidean distance D_r is taken as the location R_im = (x_im, y_im) of the foreground object on the image:

D_i = \sqrt{(x_i - x')^2 + (y_i - y')^2}, i \in {1, 2, ..., M}
D_r = \min_i (D_i)
R_im = R_r    (5)

The room map we use first needs resizing and binarization. Then a ratio matrix between the observed image and the binary map is computed:

a = [ a_col  0
      0      a_row ]    (6)

where a_col and a_row are the column and row ratios between the observed image and the binary map, respectively. The observed image is divided into four areas by the two lines x = x' and y = y'. The correction factor of each area, \Delta R_i = (\Delta x_i, \Delta y_i), i \in {1, 2, 3, 4}, is estimated by averaging the errors of several selected experimental samples from each area. The pixel location R_im = (x_im, y_im) on the observed image can then be mapped to a corresponding pixel location R_map = (x_map, y_map) on the map with

R_map^T = a R_im^T + { \Delta R_1^T,  x_im <= x', y_im <= y'
                       \Delta R_2^T,  x_im <= x', y_im >  y'
                       \Delta R_3^T,  x_im >  x', y_im >  y'
                       \Delta R_4^T,  x_im >  x', y_im <= y' }    (7)

where (.)^T denotes the matrix transpose. Similarly, the pixel location on the map can also be transformed into real localization coordinates in our constructed coordinate system using linear equations.

IV. EXPERIMENTAL RESULTS AND ANALYSES

A. Experimental Setup

The experimental area of this paper is a real office room with approximate dimensions of 5.1m × 8.5m × 2.7m. One 28mm CMOS panoramic camera, shown in Fig. 2, was deployed on the ceiling of Room 620. A video recorded with the camera was divided into 530 images, and these images were used as our testing data. The images recorded a person's trajectory from entering the room to standing at the end of the trajectory. Meanwhile, we also drew an indoor map according to the layout of Room 620.

For performance comparison, WLAN-based indoor localization using the fingerprinting and trilateration methods was also performed. As shown in Fig. 3, we deployed 7 TP-LINK TL-WR845N APs on the same floor, which has dimensions of 51.6m × 20.4m × 2.7m. RSS samples were collected with a Meizu smart phone. The phone ran software developed by ourselves for RSS collection, with a sampling rate of 1 RSS sample per second, and was laid on a tripod with a height of 1.2m. For WLAN-based fingerprinting localization, 16 reference points (RPs) were selected in the room, and we collected RSS samples at each RP for 120 seconds in order to construct a radio map. Meanwhile, a total of 13 testing points (TPs) along the experimental trajectory were selected with gaps of 0.6m, and we collected RSS testing samples at each TP for 60 seconds.

B. Localization Results of the Proposed Localization Method

We take one image as an example, and Fig. 4 gives the results of image processing and camera-based localization. The observed image of the real scene after resizing and rotation is shown in the top-left figure. Then the image goes through gray processing and Gaussian lowpass filtering. We set the template sizes of the filter, n_1 and n_2, both to 5 and the standard deviation \sigma to 1.5. After the image preprocessing phase, we can obtain the background image using (1) or (4), which is shown in the top-right figure. Then the difference image between the observed and background images is computed. We set the threshold T to 32 to determine the foreground object, and the detected result is shown in the bottom-left figure.
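The location search of (5) and the image-to-map transform of (6) and (7), described in Section III, can be sketched as follows. The quadrant ordering of the correction terms follows (7); the function names and the (x, y) tuple layout are illustrative assumptions:

```python
import math

def nearest_foreground_pixel(pixels, cam):
    """Eq. (5): among the M foreground pixels, return the one closest
    to the camera pixel (x', y') in Euclidean distance."""
    cx, cy = cam
    return min(pixels, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))

def map_to_floor_plan(pix, cam, a_col, a_row, corrections):
    """Eqs. (6)-(7): scale by the column/row ratios, then add the
    correction factor of the quadrant cut out by x = x' and y = y'.
    `corrections` holds (dx, dy) for areas 1..4 in the order of (7)."""
    x_im, y_im = pix
    cx, cy = cam
    if x_im <= cx:
        dx, dy = corrections[0] if y_im <= cy else corrections[1]
    else:
        dx, dy = corrections[2] if y_im > cy else corrections[3]
    return (a_col * x_im + dx, a_row * y_im + dy)
```

A final linear transform, as mentioned at the end of Section III, would then convert the map pixel into metric coordinates.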




The pixel location with the minimum Euclidean distance is then searched using (5) and mapped using (6) and (7). Finally, as shown in the bottom-right figure, we get the localization result on the map, which is marked with a blue point.

Fig. 4. Results of image processing and camera-based localization: resized and rotated image (top-left), background image (top-right), detected foreground object (bottom-left), and localization result on the map (bottom-right).

Fig. 5. Consecutive localization results on the indoor map (real trajectory versus localization results).

The consecutive localization results are shown in Fig. 5. In the beginning, several localization results deviate relatively greatly from the real trajectory. This is because the door area of the image and the person's clothes are both deep in color, so they have similar gray scales after gray processing, which makes it more difficult to detect the person precisely near the door area. As the person walks away from the door, more accurate localization results are calculated.

The experimental results of the popular fingerprinting and trilateration localization methods, as well as of our proposed camera-based localization method, are listed in Table I. The mean errors of the fingerprinting algorithms K-nearest neighbors (KNN), weighted KNN (WKNN), and artificial neural network (ANN) are 1.70m, 1.65m, and 1.96m, respectively. With the data collected at the RPs, the radio propagation model for trilateration localization is optimized with a nonlinear optimization method, the quasi-Newton method [12]; the mean error of trilateration localization is thereby reduced to 2.17m. In contrast, the mean error of our proposed camera-based localization method is 0.37m, which is far less than those of the popular localization methods. Furthermore, the cumulative probability curves of these methods shown in Fig. 6 also confirm that the proposed camera-based localization method outperforms the other localization methods.

Fig. 6. Cumulative probability of localization methods (localization error in m on the horizontal axis; curves for KNN, WKNN, ANN, trilateration, and the camera-based method).

TABLE I
PERFORMANCE COMPARISON OF VARIOUS LOCALIZATION METHODS

Method         Mean Error (m)   Within 1m error (%)   Within 2m error (%)
KNN            1.70             33.6                  63.3
WKNN           1.65             38.0                  62.6
ANN            1.96             24.9                  57.1
Trilateration  2.17             24.6                  40.3
Camera-based   0.37             100                   100

V. CONCLUSION

In this paper, we propose a camera-based localization method that is able to localize a human object without a terminal device. We first detect the human object as foreground using the background subtraction method, and then search for the foreground pixel location that best represents the location of the human object.




The found pixel location is mapped to a corresponding location on an indoor map. Finally, localization coordinates in our constructed coordinate system are calculated. All the experimental data of this paper were collected in a real experimental environment, and the mean error of our proposed camera-based localization method reaches 0.37m. The experimental results not only show that our proposed camera-based localization method outperforms the popular fingerprinting and trilateration methods but also verify the effectiveness of the proposed method.
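The two evaluation figures used throughout Section IV, the mean error and the within-threshold cumulative probability of Table I and Fig. 6, amount to the following helpers. This is a sketch over a hypothetical list of per-sample localization errors in metres:

```python
def mean_error(errors):
    """Average localization error over all testing samples."""
    return sum(errors) / len(errors)

def within(errors, threshold_m):
    """Cumulative probability: fraction of samples whose error does
    not exceed threshold_m metres (the 1m / 2m columns of Table I)."""
    return sum(1 for e in errors if e <= threshold_m) / len(errors)
```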
ACKNOWLEDGMENT
The authors gratefully thank the referees for the constructive
and insightful comments. This work was supported by the
Natural Science Foundation of the Jiangsu Higher Education
Institutions of China under Grant No. 16KJB510014, and
partially by the National Natural Science Foundation of China
under Grant No. 61401194 and the Key R&D Program of
Jiangsu Province Industry Prospect and Common Key Tech-
nology under Grant No. BE2015022.
REFERENCES
[1] K. Zhai, B. Jiang, and W. K. Chan, Prioritizing test cases for regression
testing of location-based services: metrics, techniques, and case study,
IEEE Transactions on Services Computing, vol. 7, no. 6, pp. 54-67, Jan.
2014.
[2] Y. Y. Gu, A. Lo, and I. Niemegeers, A survey of indoor positioning
systems for wireless personal networks, IEEE Communications Surveys
and Tutorials, vol. 11, no. 1, pp. 13-32, First Quarter 2009.
[3] S. N. He and S. H. Chan, Wi-Fi fingerprint-based indoor positioning:
recent advances and comparisons, IEEE Communications Surveys and
Tutorials, vol. 18, no. 1, pp. 466-490, First Quarter 2016.
[4] T. V. Nguyen, Y. M. Jeong, D. P. Trinh, and H. Shin, Location-aware
visual radios, IEEE Wireless Communications, vol. 21, no. 4, pp. 28-36,
Aug. 2014.
[5] Y. L. Sun and Y. B. Xu, Error estimation method for matrix correlation-
based Wi-Fi indoor localization, KSII Transactions on Internet and
Information Systems, vol. 7, no. 11, pp. 2657-2675, Nov. 2013.
[6] R. Pflugfelder and H. Bischof, Localization and trajectory reconstruction
in surveillance cameras with nonoverlapping views, IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 709-721,
April 2010.
[7] L. Liu, X. Zhang, and H. D. Ma, Localization-oriented coverage in
wireless camera sensor networks, IEEE Transactions on Wireless Com-
munications, vol. 10, no. 2, pp. 484-494, Feb. 2011.
[8] Y. Liu, Q. Wang, J. B. Liu, J. Chen, and T. Wark, An efficient and
effective localization method for networked disjoint top-view cameras,
IEEE Transactions on Instrumentation and Measurement, vol. 62, no. 9,
pp. 2526-2537, Sep. 2013.
[9] Y. S. Lin, K. H. Lo, H. T. Chen, and J. H. Chuang, Vanishing point-based
image transforms for enhancement of probabilistic occupancy map-based
people localization, IEEE Transactions on Image Processing, vol. 23,
no. 12, pp. 5586-5598, Dec. 2014.
[10] S. Y. Chen, J. H. Zhang, Y. F. Li, and J. W. Zhang, A Hierarchical
Model Incorporating Segmented Regions and Pixel Descriptors for Video
Background Subtraction, IEEE Transactions on Industrial Informatics,
vol. 8, no. 1, pp. 118-127, Feb. 2012.
[11] M. Nixon, Feature extraction and image processing for computer vision,
3rd ed., New York NY: Academic Press, 2012.
[12] F. M. Aderibigbe, K. J. Adebayo, and A. O. Dele-Rotimi, On quasi-
Newton method for solving unconstrained optimization problems, Amer-
ican Journal of Applied Mathematics, vol. 3, no. 2, pp. 47-50, 2015.

