
A Human Movement Profile Classifier Using Self-Organizing
Maps in the 4W1H Architecture


*Leon PALAFOX (Tokyo University, IIS), Hideki HASHIMOTO (Tokyo University, IIS)
Abstract − When detecting human activities in a controlled room such as the Hashimoto Laboratory Intelligent
Space, a number of algorithms are required in order to effectively detect and classify human activity within a
specified part of the room. In this paper we apply a self-organizing map structure to data obtained from a
sensor network in order to classify human activity in the space.

Key Words: Self-Organizing Maps, 4W1H, Classification, Machine Learning

1. Introduction

Advances in network computing, sensor technology and robotics allow us to create more convenient environments for humans. In that context the Intelligent Space (iSpace) concept was proposed. iSpace is a space that has ubiquitous distributed sensory intelligence and actuators for manipulating the space and providing useful services [1]. It can be regarded as a system that is able to support humans, i.e. users of the space, in various ways. Actuators provide services, both physical and informative, to humans in the space, whereas sensors are used for observing the space and gathering information [2].

The iSpace is composed of a set of sensors and actuators, such as robots, that are aimed at tracking and interacting with the elements involved (Figure 1).

The iSpace consists of three functions: "observing", "understanding", and "acting". The "observing" function is the most important one, because it delivers the information needed to know what kind of service is required. Conventionally, the observation focus has been limited to humans and robots [3][4], but there are many objects in our living environment. In order to offer appropriate services using the objects, not only the physical information of an object is needed, but also the information that arises from its interaction with humans. Such information cannot be written beforehand and can be provided only by observation. In other words, the relations among humans and objects are important. Thus, 4W1H is used to determine this information.

We therefore propose in this paper to use a Self-Organizing Map (SOM) [6] to classify and recognize human movement profile information, in order to reduce computing time as well as to address the data storage problem. Previous work has been reported on classifying human movement using accelerometers and SOMs [7][8][9], yet in this work we focus on using more than accelerometers to create our classifying map. We use linear acceleration sensors as well as angular velocity sensors to obtain precise information on the human movement.

This paper is organized as follows: first we give a brief introduction to Self-Organizing Maps as well as a quick overview of the 4W1H architecture. Afterwards we briefly describe the way the sensor interprets the data, as well as how the data is used. Finally we describe some of the experiments we realized and present the results for the different tested maps.

Fig. 1. Diagram of the iSpace and the different kinds of sensors and actuators that may interact within it.

2. Theoretical Background

2.1. Self-Organizing Maps

The term self-organizing map was first coined by Teuvo Kohonen, and it has been used since then as a very efficient classification technique and, with some limitations, a very good online classifier.

The basic principle of the SOM is creating an array of N×M perceptrons, each of them with X inputs, where X is the number of variables in our sampling space.

One of the characteristics that makes the SOM a very good self-organizing tool is that it preserves to some extent the original topology of the space, depending on which metric is used, so it is very flexible with spaces measured in the l1 norm, which will be a very strong point in our future work.
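The N×M grid of perceptrons and the search for a sample's closest neuron (its Best Matching Unit) can be sketched as follows. This is a minimal illustration in Python/NumPy, not the Matlab implementation used in this work; the grid size and input count are arbitrary example values.

```python
import numpy as np

# An N x M grid of perceptrons, each holding a weight vector of
# X components (one per input variable in the sampling space).
N, M, X = 6, 7, 6  # e.g. 3 linear-acceleration + 3 angular-velocity inputs
rng = np.random.default_rng(0)
weights = rng.random((N, M, X))  # weights initialised in [0, 1]

def best_matching_unit(weights, sample):
    """Return the grid coordinates of the neuron closest to `sample`."""
    # Euclidean distance from every neuron's weight vector to the sample;
    # an l1 (cityblock) metric could be substituted here.
    dists = np.linalg.norm(weights - sample, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

sample = rng.random(X)
bmu = best_matching_unit(weights, sample)
```

Because neighboring grid cells end up with similar weight vectors after training, nearby inputs map to nearby BMUs, which is the topology-preservation property mentioned above.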
Each of the weights in the perceptron map is updated with the following rule:

Wv(t + 1) = Wv(t) + Θ(v, t) α(t) (D(t) − Wv(t)),

where Θ(v, t) is the neighborhood function, which is usually defined as one if the pattern is close enough to its Best Matching Unit; sometimes a Gaussian form is used as well, and depending on the problem at hand different neighborhood functions may be used. α(t) is the learning function, a variable that defines the actual impact the distance will have in the learning algorithm.

2.2. 4W1H

It is thought that the relation between the human and the object can be described by observing the use history of the object. The use history of the object is observed by focusing on the object's movement, which is caused when it is used by a human. The name, the size, the color, the shape, etc. of the object are information given beforehand. On the other hand, there is information which occurs only after a person uses the thing, such as the use history or the movement history. Such information is vast, and considering the cost it is not realistic to describe the use history information for the many objects that exist in the space. Therefore, it is necessary that the object's information be written automatically, without human intervention, when the object is used by a human [5].

We try to describe human-object relations based on the following use history of the object (4W1H):
• Where: the position of the object
• Who: the user of the object
• What: the ID of the object
• When: the time when the object was used
• How: the way the object was used

3. Hardware Description

To perform the experiments we used an MTx sensor from the company Xsens, which is a small and accurate 3DOF inertial Orientation Tracker. It provides drift-free 3D orientation as well as kinematic data: 3D acceleration, 3D rate of turn (rate gyro) and 3D earth-magnetic field [10]. The system contains nine sensors which can be interlinked with each other in order to obtain a more complex set of data out of one specific object, as well as to provide a good architecture for setting referenced kinematic systems.

The data is retrieved using the Matlab toolbox that comes with the product, which allows us to retrieve in real time all the needed data from the sensors. The computer used had an Intel Core 2 Duo processor running Windows XP Professional Edition and Matlab 7.0.

4. Data Sampling and Testing

4.1. Data Processing

Since the Self-Organizing Map has as a condition that the values of the weights in the perceptrons must lie between 0 and 1, a preprocessing of the data is required, in which we normalize the input data as well as select which data to use in the training examples.

4.2. Map Parameters and Experiment Design

In designing the experiment we focused heavily on determining the optimum size of the map, as well as which update function α(t) was more suitable for the kind of data we were analyzing. For the map size, the program initially chooses the best-fit size according to the number of labels or classes. We also defined different sizes and tested the reliability of classification on the training set as well as on an untrained set of sampled signals.

The signals we used were basic arm movements: left, right, up and down, and the sensors were placed in the disposition shown in figure 2; in that way we could get precise data from 2 sensors. It is worth noting that the sensors were not performing any joint sensing, and the data from each of them were not correlated in the processing and sampling step.

From the sensors we train with different sets of input data: first we train the system with linear acceleration and angular velocity, then we train the map with linear acceleration and orientation data (Euler angles), and finally we train the map with all sets of data. Linear acceleration is thought to be a fundamental component when describing a movement profile, thus it is always included in the training, yet we want to measure the effect of sampling the angular dynamics against an angular position in the system.
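Putting together the update rule from Sec. 2.1 and the [0, 1] normalization from Sec. 4.1, one training pass can be sketched as below. This is an illustrative Python/NumPy sketch, not the Matlab SOM toolbox used in the actual experiments; the Gaussian neighborhood, the exponential decay schedules and all parameter values are assumptions for the example.

```python
import numpy as np

def normalize(data):
    """Min-max normalization of each input variable into [0, 1] (Sec. 4.1)."""
    mins, maxs = data.min(axis=0), data.max(axis=0)
    return (data - mins) / (maxs - mins)

def som_step(weights, sample, t, alpha0=0.5, sigma0=2.0, tau=1000.0):
    """One update W_v(t+1) = W_v(t) + Theta(v, t) * alpha(t) * (D(t) - W_v(t))."""
    n, m, _ = weights.shape
    # Best Matching Unit of the current sample D(t)
    dists = np.linalg.norm(weights - sample, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighborhood Theta(v, t), shrinking over time
    sigma = sigma0 * np.exp(-t / tau)
    rows, cols = np.indices((n, m))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    theta = np.exp(-grid_dist2 / (2 * sigma ** 2))
    # Decaying learning rate alpha(t)
    alpha = alpha0 * np.exp(-t / tau)
    return weights + theta[..., None] * alpha * (sample - weights)

# Toy usage: train a 6 x 7 map on normalized random data
rng = np.random.default_rng(1)
data = normalize(rng.normal(size=(200, 6)))
weights = rng.random((6, 7, 6))
for t, x in enumerate(data):
    weights = som_step(weights, x, t)
```

Since both the initial weights and the normalized samples lie in [0, 1] and each update is a convex step toward the sample, the weights remain in [0, 1] throughout training, satisfying the condition stated in Sec. 4.1.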
Fig. 2. Setting of the inertial sensor MTx on the arm.

5. Results

5.1. Map Parameters

When testing the system, after some trials we arrived at the conclusion that a medium-sized map, on the order of 6 × 7 neurons, was the best choice: a bigger map would return a higher error percentage, while a smaller map may overlap categories, creating misfires in the classification.

5.2. Map Results

The results obtained using only linear acceleration and angular velocity are shown in figure 3. We note a clear mixing in some of the data, but overall there is some clusterization, and in some cases vertical movement data and horizontal movement data are well separated. We also have a 10% error in the detection of the patterns.

Fig. 3. Map of Acceleration and Velocity.

Next we present the map (figure 4) obtained when training on orientation (yaw, tilt and rotation) data together with linear acceleration.

Fig. 4. Map of Acceleration and Orientation for the medium-sized map.

We see data mixing again, yet it is less noticeable than when using the angular velocity, mainly because the orientation data stay fixed over time; the separation between vertical and horizontal data is again better defined than in the previous test, and we have a class detection error of 9%. An interesting case arises when we double the size of the map: we get the result in figure 5, in which a better classification is achieved, with a 2% detection error. This specific case occurs only when working with orientations and linear accelerations, and is more demanding of computer power than the smaller versions of the map.
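The class-detection error percentages quoted above can be computed by labeling each neuron with the majority class of the training samples it wins, then counting mismatches. The sketch below is a hedged Python/NumPy illustration of that scoring scheme; the `detection_error` helper, the two-class toy data and the untrained `weights` grid are all assumptions for the example, not the paper's implementation.

```python
import numpy as np

def bmu_index(weights, sample):
    """Grid coordinates of the Best Matching Unit for `sample`."""
    dists = np.linalg.norm(weights - sample, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

def detection_error(weights, samples, labels):
    """Label each neuron by majority vote of the samples it wins,
    then report the fraction of samples whose BMU label disagrees."""
    hits = [bmu_index(weights, s) for s in samples]
    votes = {}
    for h, lab in zip(hits, labels):
        votes.setdefault(h, []).append(lab)
    neuron_label = {h: max(set(v), key=v.count) for h, v in votes.items()}
    wrong = sum(neuron_label[h] != lab for h, lab in zip(hits, labels))
    return wrong / len(labels)

# Toy usage with two well-separated movement classes ("up" vs "down")
rng = np.random.default_rng(2)
ups = rng.normal(0.2, 0.05, size=(50, 6)).clip(0, 1)
downs = rng.normal(0.8, 0.05, size=(50, 6)).clip(0, 1)
samples = np.vstack([ups, downs])
labels = ["up"] * 50 + ["down"] * 50
weights = rng.random((6, 7, 6))
err = detection_error(weights, samples, labels)
```

A 10% error in this scheme means one sample in ten lands on a neuron whose majority label belongs to a different movement class.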
Fig. 5. Map of Acceleration and Orientation for the large-sized map.

Finally, if we use the entire set of data we obtain the following map:

Fig. 6. Map using all the variables.

This gives us a very good classification using less than half the perceptrons used in the previous test's bigger map, with an error percentage of 4%, which itself is bigger than in the previous test, yet it is processed in a faster time.

6. Conclusions

We have presented a study on how movement classification can be done using a Self-Organizing Map over the data extracted from an orientation and acceleration sensor. Results have shown that, at a loss in recognition performance, sampling the entire set of variables is the better solution for the present case.

Future work will involve a more tuned detection of movement, such as movement of specific objects and not only human arm movements, but also upper torso profile detection. Additionally, an algorithm for processing the data for a faster and more effective recognition based on compressive sensing will be implemented.

REFERENCES

[1] J.-H. Lee and H. Hashimoto, "Intelligent space - concept and contents," Advanced Robotics, vol. 16, no. 3, pp. 265-280, 2002.
[2] D. Brscic and H. Hashimoto, "Tracking of Humans Inside Intelligent Space using Static and Mobile Sensors," IECON 2007, 33rd Annual Conference of the IEEE Industrial Electronics Society, pp. 10-15, Nov. 2007.
[3] T. Kohonen, "The self-organizing map," Neurocomputing, vol. 21, no. 1-3, pp. 1-6, 1998.
[4] Xsens, MTx, http://www.xsens.com/
[5] K. Kawaji, K. Yokoi, M. Niitsuma and H. Hashimoto, "Observation System of Human-Object Relations in Intelligent Space," 6th IEEE Int. Conf. on Industrial Informatics, pp. 1475-1480, 2008.
[6] T. Kohonen, "The self-organizing map," Neurocomputing, vol. 21, no. 1-3, pp. 1-6, 1998.
[7] R. Huang, L. Xi, X. Li, C. R. Liu, H. Qiu and J. Lee, "Residual life predictions for ball bearings based on self-organizing map and back propagation neural network methods," Mechanical Systems and Signal Processing, vol. 21, no. 1, pp. 193-207, 2007.
[8] K. Van Laerhoven, K. A. Aidoo and S. Lowette, "Real-time analysis of data from many sensors with neural networks," Proc. Fifth International Symposium on Wearable Computers, pp. 115-122, 2001.
[9] K. van Laerhoven, "Combining the self-organizing map and k-means clustering for on-line classification of sensor data," in Proc. Int. Conf. Artificial Neural Networks (ICANN), Vienna, Austria, 2001, pp. 464-469.
[10] Xsens, MTx, http://www.xsens.com/
