
Recognizing User Context Using Mobile Handsets with Acceleration Sensors

Yoshihiro Kawahara, Member, IEEE, Hisashi Kurasawa, and Hiroyuki Morikawa, Member, IEEE
Abstract: User context recognition is one of the key technologies for realizing context-aware services. The conventional multi-sensor-based approach has the advantage that it can generate a variety of contexts with few computational resources by using many different sensors. However, such systems tend to be complex and cumbersome and thus do not fit well into mobile environments. In this sense, a single-sensor-based approach is preferable. In this paper, we show a context inference scheme that realizes user posture inference with only one acceleration sensor embedded in a mobile handset. Our system automatically detects the sensor position on the user's body and dynamically selects the most relevant inference method. Our experimental results show that the system can infer a user's posture (sitting, standing, walking, and running) with an accuracy of more than 96%.

Index Terms: Context Recognition, Ubiquitous Computing, Context Awareness

I. INTRODUCTION

User context awareness is one of the important concepts for application services in the ubiquitous computing environment. In general, user context means the user's posture, movement, situation, emotion, preference, and so on. We are exploring the feasibility of a context-aware service that monitors users' context with a set of off-the-shelf worn sensors and delivers the most appropriate contents based on the users' objectives, preferences, location, and context [1]. When developing context-aware systems, a practical and reliable context inference method is indispensable. Some researchers have explored user activity inference methods with acceleration sensors. They assume that the sensor position on the body is fixed (e.g., in the hand or in a pocket) or ask users to wear multiple sensors. However, this assumption is not suitable for application scenarios in mobile environments. A mobile phone is an attractive device for context inference because it is equipped with computation and communication capabilities and is always carried by its user. Therefore, we focus on developing a context inference technique based on a mobile handset. A recent survey on mobile phone usage shows that 77.6% of respondents always put their mobile phones in a bag, pants pocket, or chest pocket, and that 13.2% of people change the mobile phone position from day to day [2] (Figure 1). Thus, a context inference method needs to adapt to at least those three positions and to frequent changes of position. In this paper we present a user posture inference scheme that supports different sensor positions on the user's body. Our inference method requires only one 3-axis acceleration sensor embedded in a mobile handset. The system automatically recognizes the sensor position on the user's body and dynamically selects the most relevant inference method.
This work is supported by the Ministry of Internal Affairs and Communications. Yoshihiro Kawahara, Hisashi Kurasawa, and Hiroyuki Morikawa are with the University of Tokyo, JAPAN (phone & fax: +81-3-5841-6710; e-mail: kawahara[at]akg.t.u-tokyo.ac.jp).

Figure 1. Survey on mobile phone usage

II. RELATED WORK

In general, gesture recognition techniques can be classified into two approaches. The first is an infrastructure-based approach that captures the user's motion using video cameras embedded in the environment. The other is a user-centric approach that attaches acceleration sensors to the user's body. The infrastructure-based approach is mainly used for motion capture of CG movie actors because the user does not need to wear any electronic device and his physical motion is not disturbed. However, such an image-based approach usually requires a complicated calibration procedure and does not fit the mobile environment. Thus, we concentrate on the user-centric approach. There are several related works on posture recognition based on acceleration sensors attached to the user's body. In this section we classify the related works and introduce their system architectures, supported postures, and target applications.

A. Multi-sensor based approach

In our previous work, we captured the physical motion of the user with five wireless acceleration sensors attached to the user's wrists, waist, and ankles [1]. The amplitude of the acceleration readings was used as a key to distinguish walking and running motions, and the change of the gravity vector at the waist sensor was used to distinguish sitting and standing motions. The accuracy rate of the inference was more than 90% in the demonstration experiment.


Additionally, a sensor array comprising a photodiode, a UV sensor, a temperature sensor, a humidity sensor, an alcohol sensor, and a motion sensor was used to obtain environmental information. The data generated by the sensors were transferred to a small handheld PC carried by the user. The target application of the system was context-aware navigation in a city area: the system recommends the most relevant shops and restaurants based on the user context, the environmental situation, and the user's preferences.

Kern et al. developed a wearable platform that records and processes data from a microphone, 12 body-worn 3D acceleration sensors, and a location estimator in order to infer the interruptibility of the user. Their system can infer six activities (sit, stand, walk, up-stairs, down-stairs, and run) by applying Bayes' rule to the mean and variance of the acceleration sensor data. The overall accuracy rate was 86.5% in their evaluation [3, 4]. Intille et al. achieved classification accuracy rates between 80% and 95% for walking, running, climbing stairs, standing still, sitting, lying down, working on a computer, bicycling, and vacuuming by attaching five data collection boards, each with a 2-axis accelerometer, to five different points on the body. Mean, energy, entropy, and correlation features were extracted from the acceleration data, and activity recognition on these features was performed using machine-learning algorithms such as the C4.5 decision tree classifier [5].

B. A single-sensor based approach

The SenSay project developed a context-aware mobile phone that adapts to dynamically changing environmental and physiological states. Sensors including accelerometers, light sensors, and microphones are mounted at various points on the body to provide data about the user's context. In addition to manipulating ringer volume, vibration, and phone alerts, SenSay lets remote callers communicate the urgency of their calls, makes call suggestions to users when they are idle, and provides the caller with feedback on the current status of the SenSay user [6]. Iso et al. developed a mobile handset with various sensors including acceleration sensors, gyro sensors, microphones, infrared sensors, illumination sensors, temperature sensors, pressure sensors, and skin resistance sensors [7]. They collected various kinds of information and extracted features of users' daily lives. A Japanese startup company named Microstone has developed a wristwatch with an acceleration sensor and a gyroscope that recognizes 10 different intensities of the user's physical motion, such as walking, running, and hard sports [8]. The movement of the user's arm is the clue used to infer the physical motion.

III. APPROACH

The multi-sensor-based approach has the advantage that it can generate a variety of contexts with few computational resources by using many different sensors. However, such systems tend to be complex and cumbersome and thus do not fit well into mobile environments.

In this sense, a single-sensor-based approach is preferable for mobile environments. The related work described in Section II.B assumes that the context inference sensor is attached to a fixed position on the user's body. However, we found that this assumption is not always acceptable for average users when we observed the daily usage of mobile phones. Our survey results on mobile phone usage show that 77.6% of people carry their mobile phones in a bag, a chest pocket, or a pants pocket. Moreover, 13.2% of people answered that they frequently change the position of their mobile phones. Frequent change of the sensor position leads to degradation of the inference accuracy. Therefore, we need to develop a context inference scheme that accommodates changes in the sensor position. In consideration of the requirements mentioned above, we decided to detect the sensor position on the user's body automatically in order to switch to the relevant posture inference algorithm and improve the inference accuracy. As for the contexts to infer, sit, stand, walk, and run are the common and essential postures in the related works. These postures are not only important for healthcare applications but are also used to understand higher-level activities of the users by integrating other information such as location and environmental sensor data. In this paper, we try to infer such fundamental user postures at higher accuracy with a simpler device.

IV. SYSTEM ARCHITECTURE

A. System Architecture

As a prototype system, we attach a sensor node (Pavenet module [9]) with a 3-axis acceleration sensor (LIS3L02DQ) to a mobile phone handset. The acceleration sensor data are transmitted to a mobile PC over Bluetooth, and the actual analysis of the acceleration data is performed on the PC. The sampling rate of the acceleration sensor is 20 Hz. Figure 2 summarizes our inference system architecture. An acceleration sensor itself is small enough to be embedded in a mobile phone handset; some of the mobile phones commercialized in Japan are already equipped with such acceleration sensors to count steps for healthcare applications.

Figure 2. System Architecture

B. Inference Method

We divide our inference method into three steps. The first step is pre-processing: feature values are extracted from the acceleration data. The second step is sensor position inference: the system infers where the mobile phone is placed in order to select the right posture inference scheme. The last step is user posture inference: the system recognizes the user's posture using the algorithm matched to the sensor position.

1) Pre-Processing

The system calculates the variance over the last 12 samples, the average of each axis over the last 4 samples, the maximum value of the FFT power spectrum over 64 consecutive samples, and the change of the angle of the sensor device. The sensor angle is calculated by detecting the gravity vector.

2) Sensor Position Inference

We infer the sensor position from the following features. When a user does not wear the sensor device, the variance is nearly 0. Given that the sensor values are well calibrated, the direction of gravity can be calculated when the user is standing or moving gently; the sensor can then determine the vertical and horizontal directions. As Figure 3 indicates, when the sensor device is in the pants pocket, the sensor angle fluctuates greatly while the user is walking. As Figure 4 indicates, when the sensor device is in the chest pocket, the sensor data show a characteristic change while the user stoops forward in a chair.
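To make the pre-processing step concrete, the sketch below computes the four feature values from a sliding window of 20 Hz samples. This is a minimal sketch rather than the exact implementation: the NumPy-based structure, the function name, the DC removal before the FFT, and the gravity-reference handling are assumptions; only the window sizes (12, 4, and 64 samples) come from the text.

```python
import numpy as np

def extract_features(acc, reference_gravity):
    """Sketch of the pre-processing step (names and details are assumptions).

    acc: (N, 3) array of 3-axis acceleration samples in m/s^2, newest last.
    reference_gravity: (3,) unit vector captured while the user was still.
    """
    magnitude = np.linalg.norm(acc, axis=1)

    # Variance of the acceleration magnitude over the last 12 samples.
    variance = float(np.var(magnitude[-12:]))

    # Per-axis average over the last 4 samples; this approximates the
    # gravity vector when the user is standing or moving gently.
    axis_mean = acc[-4:].mean(axis=0)

    # Maximum of the FFT power spectrum over the last 64 samples
    # (DC component excluded so a constant posture offset does not dominate).
    window = magnitude[-64:] - magnitude[-64:].mean()
    fft_max = float((np.abs(np.fft.rfft(window)) ** 2)[1:].max())

    # Sensor angle: angle in degrees between the smoothed gravity estimate
    # and the reference gravity direction.
    g_est = axis_mean / (np.linalg.norm(axis_mean) + 1e-9)
    cos_angle = np.clip(np.dot(g_est, reference_gravity), -1.0, 1.0)
    angle = float(np.degrees(np.arccos(cos_angle)))

    return variance, axis_mean, fft_max, angle
```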

Figure 3. Change of the angle of the handset when the user is walking

Figure 4. The angle of the cell phone shell changes when a user sits with the sensor in his chest pocket

Figure 5 shows the entire process of the context inference: the pre-processing features (variance, average, FFT power spectrum maximum, and angle) first drive the sensor position inference, including an "off the body" state, and the inferred position then selects the posture inference rules.

Figure 5. Process of context inference
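As a rough illustration of these position cues, the following sketch maps the pre-processed features to a device position. The rule order follows the text, but the threshold values, the pre-computed inputs walking_angle_fluctuation and stoop_detected, and the fallback to the bag position are assumptions introduced for illustration.

```python
def infer_position(variance, walking_angle_fluctuation, stoop_detected):
    """Sketch of the sensor position inference (thresholds are assumed)."""
    VAR_OFF_BODY = 0.01       # near-zero variance: device is not worn (assumed value)
    ANGLE_FLUCTUATION = 30.0  # degrees of swing per step while walking (assumed value)

    if variance < VAR_OFF_BODY:
        return "off_body"
    if walking_angle_fluctuation > ANGLE_FLUCTUATION:
        return "pants"   # thigh rotation swings the sensor at each step (Figure 3)
    if stoop_detected:
        return "chest"   # characteristic tilt change when stooping in a chair (Figure 4)
    return "bag"         # assumed fallback when neither signature is observed
```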

3) Context Inference

Once the orientation and position of the sensor have been determined by the previous steps, the system selects the relevant algorithm to infer the user's posture. First, two general rules are used regardless of the device position: the variance value determines whether the user is moving or not, and the maximum value of the FFT power spectrum distinguishes running from walking and determines the pace of running. Second, two specific rules depend on the sensor device position: when the sensor device is in the pants pocket, a change of the sensor angle can be used to detect the sitting motion, and when the device is in the chest pocket, the sensor angle is helpful for estimating forward-leaning, backward-leaning, or side-leaning.
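The following sketch puts the general and position-specific rules together. The overall structure follows the text, but the threshold values are placeholders: the evaluation found that a single threshold set worked across subjects, yet the actual values are not listed here, so the numbers below are assumptions.

```python
def infer_posture(position, variance, fft_max, angle_change):
    """Sketch of the position-dependent posture inference (thresholds assumed)."""
    VAR_STILL = 0.05  # below this the user is considered not moving (assumed value)
    FFT_RUN = 50.0    # spectrum peak separating running from walking (assumed value)
    ANGLE_SIT = 45.0  # thigh rotation in degrees when sitting down (assumed value)

    if position == "off_body":
        return "not worn"

    # General rules, applied regardless of the device position. The same
    # spectrum peak that separates running from walking also gives the pace.
    if variance >= VAR_STILL:
        return "running" if fft_max >= FFT_RUN else "walking"

    # Position-specific rule for the stationary case: in the pants pocket,
    # the sensor rotates with the thigh when the user sits down.
    if position == "pants":
        return "sitting" if angle_change >= ANGLE_SIT else "standing"

    # In the chest pocket or bag, sitting and standing cannot be separated.
    return "sitting/standing"
```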

Figure 6. Feature values and user posture

V. PERFORMANCE ANALYSIS

A. Evaluation of inference parameters

First, we performed a preliminary experiment to determine the parameters for context inference. We asked three subjects to put wireless sensors into their pockets and bags, and then to sit on a chair, stand still, walk, and run several times. The features of the acceleration sensor data turned out to be almost consistent from person to person, and the differences in the optimal thresholds were small enough for everyone to use the same threshold values.

We then performed an evaluation experiment with four subjects. The subjects were asked to place the sensor device in three positions (pants pocket, chest pocket, and bag), and then to repeat four postures (sitting, standing, walking, and running) for 20 seconds each, for a total of about 10 minutes.


In the experiment, each subject wore his or her usual clothes. The bags used included a handbag, a tote bag, and a backpack, all brought by the subjects. The experiment was performed in a spacious area so that the subjects could act as naturally as possible. The results of the posture and device position inference are shown in Table 1.

Table 1. Performance evaluation
                   Pants               Bag                 Chest
sitting            100%  (675/675)     99.7% (718/720)*    99.7% (718/720)*
standing           100%  (675/675)     *                   *
walking            97.8% (660/675)     98.8% (711/720)     99.9% (719/720)
running            96.7% (653/675)     98.2% (707/720)     98.9% (712/720)
Device position    98.3% (2654/2700)   98.7% (2131/2160)   97.4% (2103/2160)

* For the bag and chest positions, sitting and standing are inferred as a single combined state (see text).

Figure 7. E-Coaching Concept

When the sensor is put in the bag or the chest pocket, the system cannot tell whether the user is sitting or standing. Therefore, in Table 1, the inference accuracy of sitting/standing means "sitting or standing." The results show that we can infer the user's posture very accurately regardless of the sensor position on the user's body.

VI. APPLICATION EXAMPLE

We believe that the development of applications is as important as the performance improvement of core technologies. As a demonstration of our context inference scheme, we and the NPO WIN have developed the E-Coaching wearable computer system [10].

A. E-Coaching Concept

Computer-aided health care services have become more and more popular. This expansion has provided new means to monitor individuals' health and to provide efficient and accurate feedback. However, it still remains difficult for people to follow fitness and/or weight-loss programs because of a lack of discipline or determination; indeed, most of these programs enforce a workout that is often too hard for an individual's physical and psychological capacities. As an application example of the context inference, we applied our technology to the "e-coaching" wearable computing outfit, which aims to provide interactive coaching services for efficient body regulation. The e-coaching outfit is equipped with multiple sensors and a small PC. The coaching service processes multiple sensor readings about the user's body and its environment, and then generates coaching messages to guide the user's training in real time. The essential contexts include the exercise intensity level, i.e., the heart rate (from ECG sensor readings), the energy consumption, the speed, and the exercise mode, namely walking, running, or resting. The environmental context, including the temperature, the humidity, and the UV level, is also monitored. A mobile handset including an acceleration sensor is used to infer the user's exercise mode, i.e., walking, running, or resting. The user's exercise mode and running pace are very important pieces of information when generating a coaching message.

B. Coaching strategy

The energy consumption throughout an exercise session is given by the following equation:

Econsumed = F(Age, Duration, Weight, Exercise Mode, Speed)  (1)

The heart rate is obtained by inverting the "R-R interval" of the ECG waveform, which constitutes one beat period. Using the proposed context interface, a coaching message is generated by comparing the current heart rate with the target heart rate, which is given by equations (2) and (3):

Htarget = Hrest + (Hmax - Hrest) * Cex  (2)
Hmax = 220 - Age  (3)

Here,
Htarget: target heart rate
Hrest: resting heart rate
Hmax: maximum heart rate
Cex: exercise intensity coefficient

If the current heart rate is below Htarget, the system encourages the user to exercise harder so that the user can get good results. If the current heart rate exceeds Htarget, the system tells the user to slow down in order to prevent over-exercising. Detection of fluctuations in the cardiac rhythm is also helpful in detecting abnormal physical conditions. Besides heart rate information, the system generates messages that help users exercise safely and effectively; for instance, the system recommends that users drink some water when the outside temperature is too high. The system's implementation confirmed that the proposed context interface is effective for integrating multiple sensor readings in a resource-limited environment.
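As a worked example of equations (2) and (3), the sketch below computes the target heart rate for hypothetical parameter values; the age, resting heart rate, and intensity coefficient chosen here are illustrative, not measurements from the system.

```python
def target_heart_rate(age, h_rest, c_ex):
    """Target heart rate from equations (2) and (3)."""
    h_max = 220 - age                        # equation (3)
    return h_rest + (h_max - h_rest) * c_ex  # equation (2)

# Assumed example: a 30-year-old user, resting heart rate 60 bpm,
# exercise intensity coefficient 0.6.
h_target = target_heart_rate(age=30, h_rest=60, c_ex=0.6)
print(h_target)  # 138.0 bpm: below this the coach urges harder exercise,
                 # above it the coach tells the user to slow down
```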

VII. CONCLUSION AND FUTURE DIRECTIONS

In this paper, we presented a user posture inference method using a single 3-axis acceleration sensor. The system automatically detects the sensor position on the user's body and dynamically selects the most relevant inference method. The performance evaluation shows that our context recognition scheme is accurate regardless of the sensor position on the user's body. Though our prototype system is composed of a wireless sensor and a computer, our recognition scheme will be implementable on a mobile handset in the near future. Even though the calculation of the FFT power spectrum consumes considerable computational resources, it can easily be computed with a DSP (Digital Signal Processor), which is commonly used in mobile phones. We also showed an application example named e-coaching that generates coaching messages based on the physical conditions and the objectives of the exercise. Beyond the e-coaching prototype, posture information will come to play an important role in context-aware applications. We are now exploring higher-level user contexts that are generated by incorporating other information sources into our current posture inference infrastructure.

REFERENCES
[1] Y. Kawahara, T. Hayashi, H. Tamura, H. Morikawa, and T. Aoyama, "A Context-Aware Content Delivery Service Using Off-the-shelf Sensors," in Proceedings of the Second International Conference on Mobile Systems, Applications, and Services (MobiSys 2004), poster presentation, Boston, USA, June 2004.
[2] iSHARE Inc., "Survey on mobile phone usage," http://blog.ishare1.com/press/archives/2005/11/post_19.html, November 2005.
[3] N. Kern, S. Antifakos, B. Schiele, and A. Schwaninger, "A Model for Human Interruptability: Experimental Evaluation and Automatic Estimation from Wearable Sensors," in Proceedings of the 8th IEEE International Symposium on Wearable Computers (ISWC'04), pp. 158-165, Arlington, USA, October 2004.
[4] N. Kern, B. Schiele, and A. Schmidt, "Recognizing Context for Annotating a Live Life Recording," Personal and Ubiquitous Computing, 2005.
[5] S. S. Intille, L. Bao, E. Munguia Tapia, and J. Rondoni, "Acquiring In Situ Training Data for Context-Aware Ubiquitous Computing Applications," in Proceedings of CHI 2004: Conference on Human Factors in Computing Systems, pp. 1-9, New York, USA: ACM Press, 2004.
[6] D. Siewiorek, A. Smailagic, J. Furukawa, N. Moraveji, K. Reiger, and J. Shaffer, "SenSay: A Context-Aware Mobile Phone," poster at the 7th IEEE International Symposium on Wearable Computers (ISWC'03), October 2003.
[7] T. Iso, N. Kawasaki, and S. Kurakake, "Personal Context Extractor with Multiple Sensors on a Cell Phone," in The 7th IFIP International Conference on Mobile and Wireless Communications Networks, Morocco, September 2005.
[8] Microstone K.K., http://www.microstone.co.jp/
[9] S. Saruwatari, T. Kashima, M. Minami, H. Morikawa, and T. Aoyama, "PAVENET: Hardware and Software Framework for Wireless Sensor Networks," Transactions of the Society of Instrument and Control Engineers, vol. E-3, 2005.
[10] Y. Kawahara, C. Sugimoto, S. Arimitsu, A. Morandini, T. Itao, H. Morikawa, and T. Aoyama, "Context Inference Techniques for a Wearable Exercise Support System," in Proceedings of ACM SIGGRAPH 2005 (poster presentation), Los Angeles, USA, July 2005.
