
Advances in Intelligent and Soft Computing 126

Editor-in-Chief: J. Kacprzyk
Advances in Intelligent and Soft Computing
Editor-in-Chief
Prof. Janusz Kacprzyk
Systems Research Institute
Polish Academy of Sciences
ul. Newelska 6
01-447 Warsaw
Poland
E-mail: kacprzyk@ibspan.waw.pl
Further volumes of this series can be found on our homepage: springer.com
Vol. 112. L. Jiang (Ed.)
Proceedings of the 2011 International Conference on Informatics, Cybernetics, and Computer Engineering (ICCE 2011) November 19-20, 2011, Melbourne, Australia, 2011
ISBN 978-3-642-25193-1

Vol. 113. J. Altmann, U. Baumöl, and B.J. Krämer (Eds.)
Advances in Collective Intelligence 2011, 2011
ISBN 978-3-642-25320-1

Vol. 114. Y. Wu (Ed.)
Software Engineering and Knowledge Engineering: Theory and Practice, 2011
ISBN 978-3-642-03717-7

Vol. 115. Y. Wu (Ed.)
Software Engineering and Knowledge Engineering: Theory and Practice, 2011
ISBN 978-3-642-03717-7

Vol. 116. Yanwen Wu (Ed.)
Advanced Technology in Teaching - Proceedings of the 2009 3rd International Conference on Teaching and Computational Science (WTCS 2009), 2012
ISBN 978-3-642-11275-1

Vol. 117. Yanwen Wu (Ed.)
Advanced Technology in Teaching - Proceedings of the 2009 3rd International Conference on Teaching and Computational Science (WTCS 2009), 2012
ISBN 978-3-642-25436-9

Vol. 118. A. Kapczynski, E. Tkacz, and M. Rostanski (Eds.)
Internet - Technical Developments and Applications 2, 2011
ISBN 978-3-642-25354-6

Vol. 119. Tianbiao Zhang (Ed.)
Future Computer, Communication, Control and Automation, 2011
ISBN 978-3-642-25537-3

Vol. 120. Nicolas Loménie, Daniel Racoceanu, and Alexandre Gouaillard (Eds.)
Advances in Bio-Imaging: From Physics to Signal Understanding Issues, 2011
ISBN 978-3-642-25546-5

Vol. 121. Tomasz Traczyk and Mariusz Kaleta (Eds.)
Modeling Multi-commodity Trade: Information Exchange Methods, 2011
ISBN 978-3-642-25648-6

Vol. 122. Yinglin Wang and Tianrui Li (Eds.)
Foundations of Intelligent Systems, 2011
ISBN 978-3-642-25663-9

Vol. 123. Yinglin Wang and Tianrui Li (Eds.)
Knowledge Engineering and Management, 2011
ISBN 978-3-642-25660-8

Vol. 124. Yinglin Wang and Tianrui Li (Eds.)
Practical Applications of Intelligent Systems, 2011
ISBN 978-3-642-25657-8

Vol. 125. Tianbiao Zhang (Ed.)
Mechanical Engineering and Technology, 2011
ISBN 978-3-642-27328-5

Vol. 126. Khine Soe Thaung (Ed.)
Advanced Information Technology in Education, 2012
ISBN 978-3-642-25907-4
Khine Soe Thaung (Ed.)

Advanced Information
Technology in Education

Editor
Khine Soe Thaung
Society on Social Implications of Technology and Engineering
Malé
Maldives

ISSN 1867-5662 e-ISSN 1867-5670


ISBN 978-3-642-25907-4 e-ISBN 978-3-642-25908-1
DOI 10.1007/978-3-642-25908-1
Springer Heidelberg New York Dordrecht London
Library of Congress Control Number: 2011943800

© Springer-Verlag Berlin Heidelberg 2012


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage
and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known
or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews
or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a
computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts
thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its
current version, and permission for use must always be obtained from Springer. Permissions for use may be
obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under
the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material
contained herein.

Printed on acid-free paper


Springer is part of Springer Science+Business Media (www.springer.com)
Preface

It is our pleasure to welcome you to the 2011 SSITE International Conference on
Computers and Advanced Technology in Education (ICCATE 2011), which was
held in Beijing, China, November 3-4, 2011.
Beijing, capital of the People's Republic of China, is the nation's political,
economic, cultural, educational and international trade and communication center.
Located in northern China, close to the port city of Tianjin and partially surrounded
by Hebei Province, Beijing also serves as the most important transportation hub and
port of entry in China.
Beijing, one of the six ancient cities in China, has been the heart and soul of
politics and society throughout its long history and consequently there is an
unparalleled wealth of discovery to delight and intrigue travelers as they explore
Beijing's ancient past and exciting modern development. Now it has become one of
the most popular tourist destinations in the world, with about 140 million Chinese
tourists and 4.4 million international visitors each year.
ICCATE 2011 was the first conference dedicated to issues related to computers
and advanced technology in education. The conference aimed to provide a high-level
international forum for researchers to present and discuss the recent advances in
related issues, covering various research areas including computers, advanced
technology and its applications in education.
The conference was both stimulating and informative with an interesting array of
keynote and invited speakers from all over the world. Delegates had a wide range of
sessions to choose from. The program consisted of invited sessions, technical
workshops and discussions with eminent speakers covering a wide range of topics in
computers, advanced technology and its applications in education. This rich program
provided all attendees with the opportunity to meet and interact with one another. The
conference was sponsored by the Society on Social Implications of Technology and
Engineering.
We would like to thank the organization staff, the members of the Program
Committees and the reviewers for their hard work.
We hope the attendees of ICCATE 2011 had an enjoyable scientific gathering in
Beijing, China. We look forward to seeing all of you at ICCATE 2012.

November 3, 2011
Beijing, China

Khine Soe Thaung
General Chair
ICCATE 2011 Organization

Honor Chair and Speakers


Chin-Chen Chang Feng Chia University, Taiwan
David Wang IEEE Nanotechnology Council Cambodia Chapter
Past Chair, Cambodia

Organizing Chairs
Khine Soe Thaung Society on Social Implications of Technology and
Engineering, Maldives
Bin Vokkarane Society on Social Implications of Technology and
Engineering, Maldives

Program Chairs
Tianharry Chang University Brunei Darussalam, Brunei Darussalam
Wei Li Wuhan University, China

Local Chair
Liu Niu Beijing Sport University, China

Publication Chair
Khine Soe Thaung Society on Social Implications of Technology and
Engineering, Maldives

Program Committees
Tianharry Chang University Brunei Darussalam, Brunei Darussalam
Kiyoshi Asai National University of Laos, Laos
Haenakon Kim ACM Jeju Chapter, Korea
Yang Xiang Guizhou Normal University, China
Minli Dai Suzhou University, China
Jianwei Zhang Suzhou University, China
Zhenghong Wu East China Normal University, China
Tatsuya Adue ACM NUS Singapore Chapter, Singapore
Aijun An National University of Singapore, Singapore
Yuanzhi Wang Anqing Teachers' University, China
Yiyi Zhouzhou Azerbaijan State Oil Academy, Azerbaijan
Contents

Integrating Current Technologies into Graduate Computer Science
Curricula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Lixin Tao, Constantine Coutras, Narayan Murthy, Richard Kline
Effective Web and Java Security Education with the SWEET Course
Modules/Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Lixin Tao, Li-Chiou Chen
Thinking of the College Students' Humanistic Quality Cultivation in
Forestry and Agricultural Colleges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Gui-jun Zheng, De-sheng Deng, Wei Zhou
A Heuristic Approach of Code Assignment to Obtain an Optimal FSM
Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
M. Altaf Mukati
Development of LEON3-FT Processor Emulator for Flight Software
Development and Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Jong-Wook Choi, Hyun-Kyu Shin, Jae-Seung Lee, Yee-Jin Cheon
Experiments with Embedded System Design at UMinho and AIT . . . . . . . . . 41
Adriano Tavares, Mongkol Ekpanyapong, Jorge Cabral, Paulo Cardoso,
Jose Mendes, Joao Monteiro
The Study of H. 264 Standard Key Technology and Analysis of Prospect . . . 49
Huali Yao, Yubo Tan
Syllabus Design across Different Cultures between America and China . . . . 55
Fengying Guo, Ping Wang, Sue Fitzgerald
Using Eye-Tracking Technology to Investigate the Impact of Different
Types of Advance Organizers on Viewers' Reading of Web-Based
Content: A Pilot Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Han-Chin Liu, Chao-Jung Chen, Hsueh-Hua Chuang, Chi-Jen Huang

The Development and Implementation of Learning Theory-Based
English as a Foreign Language (EFL) Online E-Tutoring Platform . . . . . . . . 71
Hsueh-Hua Chuang, Chi-Jen Huang, Han-Chin Liu
Analysis of the Appliance of Behavior-Oriented Teaching Method in the
Education of Computer Science Professional Degree Masters . . . . . . . . . . . . . 77
Xiugang Gong, Jin Qiu, Shaoquan Zhang, Wen Yang, Yongxin Jia
Automatic Defensive Security System for WEB Information . . . . . . . . . . . . . . 83
Jiuyuan Huo, Hong Qu
Design and Implementation of Digital Campus Project in University . . . . . . 89
Hong Qu, Jiuyuan Huo
Detecting Terrorism Incidence Type from News Summary . . . . . . . . . . . . . . . 95
Sarwat Nizamani, Nasrullah Memon
Integration of Design and Simulation Softwares for Computer Science
and Education Applied to the Modeling of Ferrites for Power Electronic
Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Rosa Ana Salas, Jorge Pleite
Metalingua: A Language to Mediate Communication with Semantic Web
in Natural Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Ioachim Drugus
An Integrated Case Study of the Concepts and Applications of SAP ERP
HCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Mark Lehmann, Burkhardt Funk, Peter Niemeyer, Stefan Weidner
IT Applied to Ludic Rehabilitation Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Victor Hugo Zarate Silva
A Channel Assignment Algorithm Based on Link Traffic in Wireless
Mesh Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Liu Chunxiao, Chang Guiran, Jia Jie, Sun Lina
An Analysis of YouTube Videos for Teaching Information Literacy Skills . . . 143
Shaheen Majid, Win Kay Kay Khine, Ma Zar Chi Oo, Zin Mar Lwin

Hybrid Learning of Physical Education Adopting Lightweight
Communication Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Ya-jun Pang
Experiments on an E-Learning System for Keeping the Motivation . . . . . . . . 161
Kazutoshi Shimada, Kenichi Takahashi, Hiroaki Ueda

Object Robust Tracking Based on an Improved Adaptive Mean-Shift
Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Pengfei Zhao, Zhenghua Liu, Weiping Cheng

A Novel Backstepping Controller Based Acceleration Feedback with
Friction Observer for Flight Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Yan Ren, ZhengHua Liu, Weiping Cheng, Rui Zhou
The Optimization Space Design on Natural Ventilation in Hunan Rural
Houses Based on CFD Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Mingjing Xie, Lei Shi, Runjiao Liu, Ying Zhang
Optimal Simulation Analysis of Daylighting Design in New Guangzhou
Railway Station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Lei Shi, Mingjing Xie, Nan Shi, Runjiao Liu
Research on Passive Low Carbon Design Strategy of Highway Station in
Hunan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Lei Shi, Mingjing Xie, Zhang Ying, Luobao Ge
A Hybrid Approach to Empirically Test Process Monitoring, Diagnosis
and Control Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Luis G. Bergh
Reconstructing Assessment in Architecture Design Studios with Gender
Based Analysis: A Case Study of 2nd Year Design Studio of National
University of Malaysia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Nangkula Utaberta, Badiossadat Hassanpour, Azami Zaharim,
Nurhananie Spalie
Re-assessing Criteria-Based Assessment in Architecture Design Studio . . . . . 231
Nangkula Utaberta, Badiossadat Hassanpour, Azami Zaharim,
Nurhananie Spalie
Layout Study on Rural Houses in Northern Hunan Based on Climate
Adaptability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Xi Jin, Shouyun Shen, Ying Shi
Determination of Software Reliability Demonstration Testing Effort
Based on Importance Sampling and Prior Information . . . . . . . . . . . . . . . . . . 247
Qiuying Li, Jian Wang
The Stopping Criteria for Software Reliability Testing Based on Test
Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Qiuying Li, Jian Wang
The CATS Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Licia Sbattella, Roberto Tedesco, Alberto Quattrini Li, Elisabetta Genovese,
Matteo Corradini, Giacomo Guaraldi, Roberta Garbo, Andrea Mangiatordi,
Silvia Negri
Application of Symbolic Computation in Non-isospectral KdV Equation . . . 273
Yuanyuan Zhang

Modeling Knowledge and Innovation Driven Strategies for Effective
Monitoring and Controlling of Key Urban Health Indicators . . . . . . . . . . . . . 279
Marjan Khobreh, Fazel Ansari-Ch., Madjid Fathi
Team-Based Software/System Development in the Vertically-Integrated
Projects (VIP) Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Randal Abler, Edward Coyle, Rich DeMillo, Michael Hunter, Emily Ivey
Frameworks for Effective Screen-Centred Interfaces . . . . . . . . . . . . . . . . . . . . 295
Luigi Benedicenti, Sheila Petty, Christian Riegel, Katherine Robinson
Analytical Classification and Evaluation of Various Approaches in
Temporal Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Mohammad Reza Keyvanpour, Atekeh Etaati
A Novel Classification of Load Balancing Algorithms in Distributed
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Mohammad Reza Keyvanpour, Hadi Mansourifar, Behzad Bagherzade
Data Mining Tasks in a Student-Oriented DSS . . . . . . . . . . . . . . . . . . . . . . . . . 321
Vasile Paul Bresfelean, Mihaela Bresfelean, Ramona Lacurezeanu
Teaching Automation Engineering: A Hybrid Approach for Combined
Virtual and Real Training Using a 3-D Simulation System . . . . . . . . . . . . . . . 329
Juergen Rossmann, Oliver Stern, Roland Wischnewski, Thorsten Koch
The Strategy of Implementing e-Portfolio in Training Elementary
Teachers within the Constructive Learning Paradigm . . . . . . . . . . . . . . . . . . . 339
Olga Smolyaninova, Vladimir Ovchinnikov
Speech Recognition Based Pronunciation Evaluation Using Pronunciation
Variations and Anti-models for Non-native Language Learners . . . . . . . . . . . 345
Yoo Rhee Oh, Jeon Gue Park, Yun Keun Lee
Computer Applications in Teaching and Learning: Implementation and
Obstacles among Science Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Abdalla M.I. Khataybeh, Kholoud Al Sheik

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361


Integrating Current Technologies into Graduate
Computer Science Curricula

Lixin Tao, Constantine Coutras, Narayan Murthy, and Richard Kline

Pace University, New York, USA


{ltao,ccoutras,nmurthy,rkline}@pace.edu

Abstract. Master in Computer Science programs (MS-CS) are critically
important in producing competitive IT professionals and preparing students for
doctorate research. A major challenge is how to integrate the latest computing
technologies into MS-CS programs without compromising the computer
science foundation education. This paper shares Pace University's study and
experience in renovating its MS-CS program to address this challenge. The
study started with the identification of the most important progress in
computing over the past decade and its relationship with the fundamental
computer science concepts and theory, and how to replace the traditional
waterfall teaching model with the iterative one to shorten the prerequisite chains
and support more flexible programs. In particular Internet and web
technologies, cloud computing, mobile computing, and Internet/web security
are analyzed. Based on this theoretical analysis, Pace University's MS-CS
program was revised into a 30-credit program with a 12-credit program core
for comprehensive theoretical foundation, 12-credit concentrations for in-depth
study in selected technology areas, and two 6-credit capstone options for
knowledge integration and application as well as life-long learning.

Keywords: Master in Computer Science, Computing curriculum renovation,


Program concentrations, Iterative teaching model, Technology integration.

1 Introduction
Master in Computer Science programs (MS-CS) are critically important in producing
competitive IT professionals and preparing students for doctorate research. A major
challenge is how to integrate the latest computing technologies into MS-CS programs
without compromising the computer science foundation education. This paper shares
Pace University's study and experience in renovating its MS-CS program to address
this challenge.
The study started with the identification of the most important progress in
computing over the past decade and its relationship with the fundamental computer
science concepts and theory. In particular Internet and web technologies, cloud
computing, mobile computing, and Internet/web security are analyzed. It was
concluded that they are all based on recursive application of the fundamental
computer science concepts; XML is the new fundamental subject, supporting
everything from data integration and transformation to the implementation of web services and cloud
computing; abstraction and divide-and-conquer are the theory underlying the layered
web architecture, distributed system integration, component-based software
engineering, and server-based thin-client computing.
Another major challenge is how to integrate the current technologies into the MS-
CS curriculum. The traditional computing curricula are based on the waterfall model
with long prerequisite requirement chains, and students cannot have a global
subject/technology overview until the end of the program. As a result, students are not
motivated in the early courses, and hands-on projects cannot be easily implemented to
enhance the courses. We decided to adopt the mature iterative education model, and
divide the MS-CS program into three iterations. The first iteration is the program core
containing the most fundamental computer science concepts and skills in computing
theory, hardware/software systems, Internet computing and data engineering. It
enables the students to have a global perspective of the study program and IT
technologies, the necessary skills for hands-on projects in the follow-up courses, and
the ability for life-long study. In the second iteration the students conduct focused in-
depth study in a chosen concentration to understand how the computing theories and
methodologies are applied in solving real work challenges. The third iteration is
the capstone options, in which students conduct thesis research or a major project to
develop problem-solving skills on a larger scale under faculty guidance.
Based on the above theoretical analysis, Pace University's MS-CS program was
revised into a 30-credit program with a 12-credit program core, 12-credit
concentrations or elective courses, and two 6-credit capstone options. Each course
carries 3 credits. To ensure that all graduates have a solid education in computer
science fundamentals and a balanced perspective on computing, the program core
includes Algorithms and Computing Theory, Introduction to Parallel and
Distributed Computing, Concepts and Structures in Internet Computing, and
Database Management Systems, covering fundamentals of computing theory,
hardware/system, software/web, and data management and XML respectively. This
program core factors out the shared computing fundamentals so students could freely
take any of the following seven concentrations with minimal prerequisite dependency
and redundancy: (1) Classical Computer Science, (2) Artificial Intelligence, (3)
Mobile Computing, (4) Game Programming, (5) Internet Computing, (6) Network
Security, and (7) Web Security. The two main 6-credit capstone options are master thesis research and
master major report, supporting in-depth original research and guided study of a new
technology and applying it in a major project respectively.
The result of this study also provides theoretical foundation to renovate computer
science undergraduate programs.

2 Two Problems in Computer Science Education


The enrollment of USA computer science programs has dropped significantly in
recent years. Apart from the burst of the dot-com bubble and IT out-sourcing,
there are two major reasons that have contributed to the decline of computer science
enrollment: (1) the lag between the knowledge scope of our current computer science
curricula and the expectations of the IT industry; (2) the current waterfall teaching
model.

2.1 Knowledge Lag Problem

Since the early 2000s, the IT industry has adopted the service-oriented computing model.
As a generalization of the web and distributed computing technologies, Internet
business services [4] (for which web service is one of the particular implementation
techniques) are provided on servers through the Internet for heterogeneous client
systems to consume. An Internet business service abstracts specific business logics
and their implementations, the server IT infrastructure, and the expertise for
maintaining the server resources. The clients of such services are typically software
systems that can consume the services with remote system integration. Credit card
processing is a typical Internet business service provided by major financial
institutions. New Internet business services are typically implemented by integrating
existing services, and the XML technologies are the foundation of data integration
across heterogeneous IT systems. Internet business services promote specialized and
collaborated computing as well as support competitive global economy. Web service
is a particular implementation technology of Internet business services, and service-
oriented architecture (SOA) specifies the software architecture based on service
integration. The service-oriented computing is based on networking, the client-server
and thin-client architectures, and the web architecture which is still the foundation of
the fast-growing e-commerce and e-society.
As the top-level abstraction, each Internet business service is implemented with
server-side software component technologies like EJB [5] and .NET [6]. A software
component is a software module that has well-defined interfaces and can be
individually deployed. A software component typically implements specific business
logics with multiple objects, and the common server infrastructure functions, like
component life cycle management, thread pooling and synchronization, data caching,
load balancing and distributed transactions, are factored out into a component
container which is basically a software framework interacting with the components
through pre-declared hook and slot methods. Since the early 1990s, software
component based software engineering has been the mainstream IT industry
practice. In 1995 the Department of Defense mandated that all its projects must be
based on software components.
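As a minimal illustrative sketch in Java (our example, not code from any particular product), the following shows the hook idea: the component only implements pre-declared life cycle methods, and the container, not the component, decides when to call them.

  // Hypothetical names for illustration; a real container such as an EJB
  // container adds thread pooling, caching and transactions around these calls.
  interface ManagedComponent {
      void init();      // hook: called by the container at deployment
      void service();   // hook: called by the container for each request
      void destroy();   // hook: called by the container at undeployment
  }

  class OrderComponent implements ManagedComponent {
      public void init()    { System.out.println("acquire resources"); }
      public void service() { System.out.println("run business logic"); }
      public void destroy() { System.out.println("release resources"); }
  }

  public class Container {
      // Inversion of control: the component never invokes its own hooks.
      public static void main(String[] args) {
          ManagedComponent c = new OrderComponent();
          c.init();
          c.service();
          c.destroy();
      }
  }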
Based on the above discussion, we can see that over the last two decades the
concepts of abstraction and divide-and-conquer have been recursively applied to
higher-level software modules/systems, from objects to software components and to
Internet business services; the knowledge base for server-based computing is a
superset of that for client-side computing and introduces many new challenges not
properly covered by the current curricula; and the dominant server-based computing
IT technologies are based on sound and recurring concepts and methodologies that
must be integrated in computer science curricula to prepare students for the current
and future IT challenges.
But many of our computer science programs are still struggling with effective
teaching of objects and have weak coverage of server-side computing. Most of the
concepts and methodologies mentioned above are either only covered in elective senior
courses, or weakly covered, or totally missing in the current curricula. Our students
need early introduction of the fundamental modern computing concepts so they can
have a clear roadmap and motivation for their programs and be well-prepared for the
competitive global job market. ACM Computing Curricula 2001 correctly introduced
the net-centric knowledge area to address the above knowledge gap, but most
computer science programs have not properly integrated it into the curricula due to
limitations of faculty expertise and resources.

2.2 Waterfall Teaching Problem

Most of the computer science curricula today are still based on ACM Computing
Curricula 1991 that reflected the IT technologies at that age, with limited coverage on
server-based computing. The topics are covered in their waterfall order specified by
the existing prerequisite chains. Even though the fundamental concepts in these
curricula are still the foundation of today's technologies, many important concepts
and skills are scattered in many senior courses which cannot be taken earlier due to
the strict course prerequisite requirements. For example, a typical engaging
programming project today involves graphical user interfaces, databases and networking.
To make the user interface responsive, multithreading is needed. But most of the current
curricula introduce network programming as an advanced topic, and introduce
multithreading briefly in an operating system course. As a result, the instructors are
limited in what kind of projects they can use to engage the students, and the students
have limited opportunities in practicing the important skills. To resolve this problem
we need to switch away from the current waterfall teaching model and greatly shorten
the current deep course prerequisite chains.
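To illustrate the multithreading skill mentioned above, here is a short Java sketch (ours, not from the paper) that keeps a Swing interface responsive by running slow network-style work on a background thread:

  import javax.swing.*;

  public class ResponsiveUI {
      public static void main(String[] args) {
          JButton button = new JButton("Fetch");
          button.addActionListener(e -> {
              button.setEnabled(false);
              new Thread(() -> {                      // background thread
                  String result = slowNetworkCall();  // slow I/O off the UI thread
                  SwingUtilities.invokeLater(() -> {  // update UI on the event thread
                      button.setText(result);
                      button.setEnabled(true);
                  });
              }).start();
          });
          JFrame frame = new JFrame("Responsive UI");
          frame.add(button);
          frame.setSize(240, 120);
          frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
          frame.setVisible(true);
      }

      private static String slowNetworkCall() {
          // Placeholder for a real database or network call.
          try { Thread.sleep(2000); } catch (InterruptedException ex) { }
          return "done";
      }
  }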

3 Current Technology Analysis


While the computing industry constantly declares new technologies, faculty usually
treat them as fads or buzzwords and thus not part of pure computer science. As the
first step, we conducted an in-depth study of the major new computing technologies
including Internet and web technologies, cloud computing, mobile computing, and
Internet/web security, and identify the design principles and patterns. We reached the
following conclusions:

1. The computing industry today is characterized by server-based computing,
while most computing curricula, including ACM CS curriculum 2001/2008,
still focus on client-side computing. There are many new topics, including
server clustering and scalability, server security, and integration of
heterogeneous systems, representing today's computer science research
challenges. Web technologies should not be treated just as an application of
network programming or distributed computing, but as a new computing
service delivery platform. While Java has become the dominant introductory
programming language in most computer science programs, few curricula
have taken advantage of it in teaching parallel/distributed and event-driven
computing paradigms which are the core of server-based computing.
2. The most important challenges in the computing industry over the past decade
are heterogeneous system integration and data integration. No programming
language, platform or software framework can dominate all application
domains, and most businesses are conducted through collaboration among
multiple independent information systems. Web services and cloud
computing are part of the solutions to system integration. XML based data
integration is the foundation of system integration (platform-neutral service
specification) and application data integration, and has deeper and more
comprehensive impact on many computer science knowledge areas than the
traditional compilers.
3. All the reviewed technologies are based on recursive application of the
fundamental computer science great ideas, including abstraction,
divide-and-conquer, and referential locality. The tiered and layered web architecture and
plug-and-play software components are new incarnations of similar ideas in
function and data abstraction in procedural and object-oriented languages,
and all remote method invocation mechanisms, including web services, are
based on the familiar proxy design pattern (see the sketch below). Therefore the current
technologies are not just significant in their applications but also great
devices for illustrating how a small set of computing ideas is applied
creatively in problem-solving.
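The following sketch, our illustration using the standard java.lang.reflect.Proxy API, makes the proxy-pattern point concrete: the client programs against an ordinary interface, and an interposed handler sits exactly where a remote-invocation stub would marshal the call over the network.

  import java.lang.reflect.InvocationHandler;
  import java.lang.reflect.Proxy;

  interface QuoteService {
      double price(String symbol);
  }

  public class ProxyDemo {
      public static void main(String[] args) {
          QuoteService real = symbol -> 42.0;  // stands in for the remote service
          InvocationHandler handler = (proxy, method, methodArgs) -> {
              // A real RMI or web-service stub would marshal the call here.
              System.out.println("proxy intercepted: " + method.getName());
              return method.invoke(real, methodArgs);
          };
          QuoteService stub = (QuoteService) Proxy.newProxyInstance(
                  QuoteService.class.getClassLoader(),
                  new Class<?>[] { QuoteService.class }, handler);
          System.out.println(stub.price("IBM"));  // client sees only the interface
      }
  }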

4 Revised Pace University MS-CS Program


Rationale: A strengthened, lower-credit MS-CS program could attract students
through quality, reduced tuition, and shorter program completion time; and a small
program core would leave a three-course slot for developing meaningful and
competitive concentrations to better promote the program and research.
Note: All courses carry 3 credits.
Bridge Courses: [for students from other areas]
CS502 Fundamental Computer Science I using Java
CS504 Fundamental Computer Science II using Java
CS506 Computer Systems and Concepts
Core Courses: (12 credits)
CS608 Algorithms and Computing Theory
CS610 Introduction to Parallel and Distributed Computing
CS612 Concepts and Structures in Internet Computing
CS623 Database Management Systems
Concentration Options or Free Electives: (12 credits)
Each concentration contains three courses providing focused in-depth study in a
specific area. The diploma will carry the concentration name. Students typically
choose one concentration and one free elective. Students can also choose any four of
the computer science graduate elective courses.
Capstone Course Options: (6 credits)
Option 1: CS691/CS692 Computer Science Project I & II (individual
supervision, major report defense)
Option 2: CS693/CS694 Thesis I & II (individual supervision, thesis defense)

The initial MS-CS concentrations are:

1. Classical Computer Science
a. CS611 Principles of Programming Languages
b. CS605 Compiler Construction
c. CS613 Logic and Formal Verification
2. Artificial Intelligence
a. CS627 Artificial Intelligence
b. CS630 Intelligent Agents
c. CS626 Pattern Recognition
3. Mobile Computing
a. CS639 Mobile Application Development
b. CS641 Mobile Web Content and Development
c. CS643 Mobile Innovations for Global Challenges
4. Game Programming
a. CS645 Game Level Design
b. CS647 Game Model Design and Animation
c. CS649 Advanced Video Game Programming
5. Internet Computing
a. CS644 Web Computing
b. CS646 Service-Oriented Computing
c. CS648 Enterprise Computing
6. Network Security
a. CS634 Computer Networking and the Internet
b. CS653 Cryptography and Computer Security
c. CS654 Security in Computer Networking
7. Web Security
a. CS634 Computer Networking and the Internet
b. CS651 Secure Distributed System Development
c. CS652 Secure Web Application Development
This new program will start to run in fall 2011.

References
1. Tao, L.: Integrating Component and Server Based Computing Technologies into Computing
Curricula. In: NSF Northeast Workshop on Integrative Computing Education and Research
(ICER), Boston, MA, November 3-4 (2005),
http://gaia.cs.umass.edu/nsf_icer_ne
2. Kurose, J., Ryder, B., et al.: Report of NSF Workshop on Integrative Computing Education
and Research. In: Northeast Workshop ICER (2005),
http://gaia.cs.umass.edu/nsf_icer_ne
3. Tao, L., Qian, K., Fu, X., Liu, J.: Curriculum and Lab Renovations for Teaching Server-
Based Computing. In: ACM SIGCSE 2007 (2007)
4. Microsoft, Internet business service,
http://msdn2.microsoft.com/en-us/architecture/aa948857.aspx
5. Oracle, The Java EE 6 Tutorial,
http://download.oracle.com/javaee/6/tutorial/doc/
javaeetutorial6.pdf
6. Microsoft, Microsoft .NET,
http://msdn2.microsoft.com/en-us/netframework/default.aspx
Effective Web and Java Security Education with the
SWEET Course Modules/Resources

Lixin Tao and Li-Chiou Chen

Pace University, New York, USA


{ltao,lchen}@pace.edu

Abstract. We have developed a complete set of open-source tutorials and
hands-on lab exercises, called Secure WEb dEvelopment Teaching (SWEET),
to introduce security concepts and practices for web and Java application
development. SWEET provides introductory tutorials, teaching modules
utilizing virtualized hands-on exercises, and project ideas in web and Java
application security. In addition, SWEET provides pre-configured virtual
computers for laboratory exercises. This paper describes the SWEET design and
resources in general and its Java security module in particular. SWEET has
been integrated into computing courses at multiple universities and it has
supported innovative student projects like a secure web-based online trader
simulator.

Keywords: Virtualization, web security, Java security, and software assurance.

1 Introduction
Over the last two decades web technologies have become the foundation of (1) e-
commerce, (2) interactive (multi-media) information sharing, (3) e-governance and
business management, (4) distributed heterogeneous enterprise information system
integration, and (5) delivering services over the Internet. It is of high priority that the
web and web security technologies be integrated into computing curricula so
computer science students know how to develop innovative and secure web
applications, information system students know how to use web technologies to
address business challenges, and information technology students know how to
securely deploy web technologies to deliver good system scalability and robustness.
The main challenges for integrating secure web technologies into computing curricula
include (a) web technologies depend on a cluster of multiple types of servers (web
servers, application servers and database servers) and university labs normally cannot
support such complex lab environment; (b) there is a big knowledge gap between the
current computing curricula and the latest web technologies, and the faculty need help
to develop courseware so the web technologies could fit in the existing
course/curriculum design with sufficient hands-on experience and a robust evaluation
system. This integration of web technologies into computing curricula has not been
successful up to now as reflected in the recent ACM computing curricula
recommendations.

SWEET (Secure WEb dEvelopment Teaching) is a two-year research project for
developing secure web courseware conducted by faculty researchers from Pace
University and CUNY City College of Technology and supported by National
Science Foundation and Department of Defense grants. The resulting SWEET
courseware is a comprehensive open-source package including an extensive list of
portable virtual computers and labs, self-contained tutorials with integrated multi-
level evaluations, detailed lab designs and lab manuals, instructor solution manuals
(available to instructors upon private request), and project modules for further
exploration of the web security topics. SWEET is suitable for integration into either
undergraduate or graduate computing courses/curricula, and students only need to
have completed the introductory programming courses to use the SWEET course
modules. SWEET has been distributed through three national/international
workshops, nine research paper publications including papers in international
conference proceedings, and it has been adopted into courses/curricula at over eight
universities with very positive feedback.
As an example of the application of the SWEET courseware, we are supervising
several Pace University students to learn web computing/security based on the
SWEET course modules and design and implement a trader simulation web
application based on what they learn from the SWEET courseware. The project is
based on open-source technologies including Apache web server, MySQL database
management system, and PHP web scripting. The project is used by business school
students to learn bond trading by playing realistic online trading games. Many
web security technologies covered in the SWEET courseware, including SQL
injection and web server security testing and threat assessment, are used to secure the
trader web application.

2 Literature Review
Many computer security educators have designed courseware with hands-on
laboratory exercises for computer security courses but none of them focus specifically
on secure web development. Whitman and Mattord [1] have compiled a set of hands-
on exercises for introductory computer security classes. The SEED (Developing
Instructional Laboratories for Computer SEcurity Education) project [2] has provided
a comprehensive list of computer security exercises including system security,
network security and web security, to a lesser degree at this point.
Web security textbooks suitable for undergraduate courses are also very limited.
Most textbooks in computer security published in recent years only have a chapter or
a section on web security with a limited overview of Secure Socket Layer (SSL) and
certificate authority. While there are many books in web application vulnerabilities
[3-9] and secure programming [10, 11], they are designed for practitioners, not for
undergraduate students.
Web security professional organizations have provided abundant learning materials
in secure web development, which are good information sources for our project. The
Open Web Application Security Project (OWASP) is an international group of experts
and practitioners who are dedicated to enabling organizations to develop, purchase,
and maintain secure applications. The Web Application Security Consortium
(WASC) is an international group of experts and industry practitioners who produce
open source and widely accepted security standards for the web. WASC has
constantly posted current information in securing web applications, such as security
exploits and its incident database.

3 SWEET Lab Virtualization


SWEET utilizes the virtualization technology to configure a computing environment
needed for the hands-on laboratory exercises. The virtualization of a computer means
to run emulator software on a computer (host computer or physical computer) to
emulate another desired computer (virtual computer). The host computer and the
virtual computer can run the same or different operating systems. For users, a virtual
computer looks just like an additional window on their computer desktop and
functions like another physical computer. Figure 1 illustrates a Linux virtual computer
operated on top of a Windows host computer. Users can switch back and forth
between the virtual computer and the host computer. The host computer and the
virtual computer can share both data and Internet access. Users can also conduct the
same computing tasks, such as installing new software, on the virtual computer as
they would on the host.

Fig. 1. An illustration of a virtual computer (a virtual computer window running on a host computer desktop)

Virtualization has been widely used, ranging from commercial systems to
educational demonstrations. Various virtualization emulators have been developed,
such as VMware [12], Microsoft Virtual PC [13], Virtual Box [14], and Citrix
XenApp [15]. We developed SWEET virtual computers using VMware but the virtual
computers can be imported to other emulators if needed. In our project, a virtual
computer is implemented by a folder of 2-8 GB files and is based on Ubuntu Linux but
can be run on top of MacOS, Windows or Linux.
A virtual computer can run on either a remote server (server-side virtualization) or
on the user computer (client-side virtualization). We developed the SWEET virtual
computers to run locally on the user computers. The client-side virtualization offers us
several advantages over the server-side virtualization. First, the client-side virtual
computers do not require Internet connections, which makes it possible to isolate web
security exercises to the local network and prevent the spilling effect of the exercise
results on the Internet. Second, the virtual computers greatly reduce the pressure on
the servers and network bandwidth. As a result, the laboratory exercises will not be
hindered by network performance. Third, the virtual computers are portable. Since
there are virtualization emulators on all operating systems and a virtual computer is
implemented as a folder of files, the students could hold the folder on a portable disk
and use, pause, and resume work on the same virtual computer on different host
computers at university labs or at home. Since a virtual computer is simply a folder of
files or a self-extracting file after compressing, it can be distributed through web
downloading, USB flash disks, or DVD disks. In addition, the virtual computers are
flexible, which can be run on computers in a general purpose computer laboratory,
students' laptops or home computers, with only emulators installed. Moreover, the
virtual computers are easy to maintain since any software changes will be done on the
virtual computers, which can be easily copied, modified and distributed. Last but not
least, the virtual computers are cost effective. Both students and faculty do not
have to purchase additional hardware or software except for the emulator, which is
mostly free for educational purposes.

4 SWEET Teaching Modules


We have incorporated the software assurance paradigm [12] in SWEET. Software
assurance ensures that web applications behave as designed by examining each
stage in the life cycle of the web application development. In particular, security
maturity models provide a template for integrating security practices into the business
functions and goals of software systems. Although these models are reference models
rather than technical standards, they offer practitioners perspective on how to
incorporate security practices in the software development process. Three such models
have been proposed lately, including OWASP's Software Assurance Maturity Model
(OpenSAMM) [18], Build Security In Maturity Model (BSIMM2) [19] and
Microsoft's Security Development Lifecycle [20]. These models map security
practices into the stages of software development life cycle. The goal is to incorporate
security practices in software during its developmental stages instead of just testing for
security vulnerabilities after the software is completed. While considering web
application security, software developers could utilize the security maturity models to
determine what security practices they should consider and when the security practices
can be adopted.
SWEET contains the following eight course modules each with integrated labs and
evaluations.

1. Introduction to Web Technologies: The module covers HTML form and its
various supported GUI components; URL structure and URL rewrite; HTTP
basic requests; the four-tiered web architecture and web server architecture and
configuration; session management with cookies, hidden fields, and server
session objects; and Java servlet/JSP web applications. Laboratory exercises
guide students to set up a web server, observe HTTP traffic via a web proxy, and
develop a servlet web application and a JSP web application.
2. Introduction to Cryptography: This module covers basic concepts of private key
encryption, public key encryption, hash function, digital signature and digital
certificates. Laboratory exercises guide students to perform private key and
public key encryption using GPG on an Ubuntu Linux virtual machine.
3. Secure Web Transactions: The module covers Secure Socket Layer (SSL)
protocols; certificate authority and X.509; certification validation and revocation;
online certification status protocol; OpenSSL utilities. Laboratory exercises guide
students to configure SSL on a web server and to create and sign server
certificates.
4. Web Application Threat Assessment: The lecture covers attacks exploiting
vulnerabilities introduced during the construction of web applications, such as SQL
injection, cross site scripting (XSS), and poor authentication. Laboratory exercises
guide students to understand various vulnerabilities and countermeasures via a
preconfigured vulnerable web server utilizing OWASP WebGoat.
5. Web Server Security Testing: The lecture covers application penetration testing;
web server load balancing; and distributed denial of service attacks. Laboratory
exercises guide students to conduct penetration testing to an intentionally
vulnerable web server on a local virtual machine, BadStore.com.
6. Vulnerability Management: The lecture covers basic concepts of software
vulnerability databases and vulnerability discovery. The countermeasures to two
web-specific vulnerabilities, SQL injection and XSS, are discussed (see the JDBC
sketch after this list). Laboratory exercises guide students to investigate and modify
the Perl CGI script of a web server that has both the SQL injection and XSS
vulnerabilities.
7. Introduction to Web Services: The lecture covers service-oriented computing and
architecture; web service for integrating heterogeneous information systems
across the networks; service interface description with XML dialect WSDL; and
method invocation description with XML dialect SOAP. Laboratory exercises
guide students to develop, configure and secure a simple web service, and
develop a client application to consume the service.
8. Java Security: This lecture introduces the concepts and tools supporting the Java
security framework and key management. The laboratory exercises guide
students to review Java security framework, secure file exchange using Java
security API and keys, and protect their computers from insecure Java
applications by specifying Java security policies.
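The JDBC fragment below is our illustration of the SQL injection countermeasure these modules discuss; the table and column names are hypothetical, not taken from the SWEET labs.

  import java.sql.*;

  public class LoginDao {
      // VULNERABLE: concatenating user input lets an input such as
      //   ' OR '1'='1
      // rewrite the query's logic.
      static ResultSet unsafeLookup(Connection con, String user) throws SQLException {
          String q = "SELECT id FROM users WHERE name = '" + user + "'";
          return con.createStatement().executeQuery(q);
      }

      // SAFE: a PreparedStatement sends the input as data, never as SQL.
      static ResultSet safeLookup(Connection con, String user) throws SQLException {
          PreparedStatement ps = con.prepareStatement(
                  "SELECT id FROM users WHERE name = ?");
          ps.setString(1, user);
          return ps.executeQuery();
      }
  }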

5 A Sample Module on Java Security


This module's tutorial introduces the students to both Java security policies and the Java
security manager. Topics include how the Java security policies could be defined to
implement the web sandbox for applets so that they could not access private user
resources, how Java security policies could be defined to allow applets installed in
specific file system locations or signed by specific digital certificates to have access to
specific resources, how a Java security manager could limit a Java program to the
resources it can access, and how digital certificate chains are implemented to establish
trust within the web architecture.
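As a concrete, deliberately minimal illustration of these mechanisms, the following hypothetical policy file (the file name, paths and class name are our assumptions, not SWEET's) grants read/write access to one directory only to code loaded from a trusted location:

  // sweet.policy (hypothetical): code from the trusted directory may
  // read and write /tmp/exchange; all other code keeps sandbox defaults.
  grant codeBase "file:/home/student/trusted/" {
      permission java.io.FilePermission "/tmp/exchange/*", "read,write";
  };

An application is then run under the security manager with this policy, for example: java -Djava.security.manager -Djava.security.policy=sweet.policy MyApp (MyApp is a placeholder class name). Any access the policy does not grant causes the security manager to throw a SecurityException.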
The exercises guide students to work on the SWEET virtual computer to (1) create
public/private keys and digital certificates, (2) protect data with cryptography, (3)
secure file exchange with Java security utilities, (4) grant special rights to applets
based on code base, (5) grant special rights to applets based on code signing, (6)
create a certificate chain to implement a trust chain, (7) protect a computer from
insecure Java applications, and (8) secure file exchange with Java security API and
newly created keys or keys in files or a keystore.
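For exercise (8), a self-contained sketch using only the standard java.security API is shown below (our illustration; the module's actual lab steps may differ):

  import java.nio.charset.StandardCharsets;
  import java.security.*;

  public class SignDemo {
      public static void main(String[] args) throws GeneralSecurityException {
          // Generate a fresh RSA key pair (the labs may instead load keys
          // from a file or a keystore).
          KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
          kpg.initialize(2048);
          KeyPair keys = kpg.generateKeyPair();

          byte[] data = "file contents to exchange".getBytes(StandardCharsets.UTF_8);

          // Sign with the private key.
          Signature signer = Signature.getInstance("SHA256withRSA");
          signer.initSign(keys.getPrivate());
          signer.update(data);
          byte[] signature = signer.sign();

          // Verify with the matching public key.
          Signature verifier = Signature.getInstance("SHA256withRSA");
          verifier.initVerify(keys.getPublic());
          verifier.update(data);
          System.out.println("signature valid: " + verifier.verify(signature));
      }
  }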
Each section includes review questions to enhance students' understanding of the
materials. Sample questions are listed below:

What is identity authentication?
What is data validation?
What is the most important task in computer security based on cryptography?
What is the difference between a fingerprint and a signature of a document?
What is the difference between a public key and its digital certificate?

Review questions, as listed below, are also provided at the end of the module to
connect various concepts taught throughout the module.

Why is Java security important?
What are the most vulnerable folders for Java security?
Why do applets always run under the Java security manager?
List three resources that programs running under the Java security manager cannot
access by default.
Why should you bother to run some applications under the Java security
manager?
How do you selectively grant access rights to applets or applications?

6 Course Adoption Experience


Each SWEET teaching module is self-contained. The modules can be adopted either
separately in various courses or together in one course. We have currently
incorporated some of the SWEET modules in several courses at Pace University,
including Overview of Computer Security, an introductory computer security course
for the undergraduate students, Internet and Network Security, an advanced level
undergraduate class, and Web Security, a graduate elective course.
We collected the students' feedback on the SWEET modules adopted in the last
three semesters. Our results show that the students had invested a significant amount
of time (2-4 hours per week on average) in completing their hands-on exercises.
However, they generally agreed that the course materials were well planned, the
exercises had drawn their interests, and the exercises had helped them in learning the
course materials.
The SWEET teaching modules have also been adopted by New York City College of
Technology, a minority-serving institution. Parts of the SWEET project have been
incorporated in two undergraduate courses at New York City College of Technology:
Web Design and Information Security. This collaboration has broadened the
participation of underrepresented students. Furthermore, the SWEET teaching
modules have been posted on a project web site (http://csis.pace.edu/~lchen/sweet/) to help other institutions to adopt or
incorporate it into their Web/Security courses and to train more qualified IT
professionals to meet the demand of the workforce.
The SWEET modules could also be integrated into several relevant computer
science courses since web computing highlights the application of the latest
computing concepts, theory and practices. For example, in a few lab hours, the
"Service Oriented Architecture" module could be integrated into Computer
Networking or Net-Centered Computing courses to provide the students with hands-
on exposure to the latest concepts and technologies in integrating heterogeneous
computing technologies over the Internet; and the "Threat Assessment" module could
be adopted by a database course for students to understand how SQL injection could
be used by hackers to attack server systems.

7 Conclusions
Secure web development is an important topic in assuring the confidentiality,
integrity and availability of web-based systems. It is necessary for computing
professionals to understand web security issues and incorporate security practices
during the life cycle of developing a web-based system. Our secure web development
teaching modules (SWEET) provide flexible teaching materials for educators to
incorporate this topic in their courses using hands-on exercises and examples.

Acknowledgment. The authors acknowledge the support of the U.S. National Science
Foundation under Grant No. 0837549 and the Verizon Foundation in partnership with
Pace University's Provost Office through its Thinkfinity Initiative. Any opinions,
findings, and conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National Science
Foundation or the Verizon Foundation.

References
1. Lawton, G.: Web 2.0 Creates Security Challenges. IEEE Computer (October 2007)
2. Andrews, M., Whittaker, J.A.: How to Break Web Software: Functional and Security
Testing of Web Applications and Web Services. Addison-Wesley (2006)
3. Fisher, M.: Developer's Guide to Web Application Security. Syngress (July 2006)
4. Garfinkel, S.: Web Security, Privacy and Commerce, 2nd edn. O'Reilly (2002)
5. Shah, S.: Web 2.0 Security: Defending Ajax, RIA, and SOA. Charles River (December
2007)
6. Stuttard, D., Pinto, M.: The Web Application Hacker's Handbook: Discovering and
Exploiting Security Flaws. Wiley (2007)
7. Graff, M.G., van Wyk, K.R.: Secure Coding: Principles & Practices. O'Reilly (2003)
8. Grembi, J.: Secure Software Development: A Security Programmer's Guide. Delmar
Cengage Learning (2008)

9. Whitman, M.E., Mattord, H.J.: Hands-on Information Security Lab Manual. Thomson
Course Technology, Boston (2005)
10. Du, W., Wang, R.: SEED: A Suite of Instructional Laboratories for Computer Security
Education. ACM Journal on Educational Resources in Computing 8(1) (2008); The SEED
project is also accessible at, http://www.cis.syr.edu/~wedu/seed/
11. Komaroff, M., Baldwin, K.: DoD Software Assurance Initiative (September 13, 2005)
12. The Open Web Application Security Project (OWASP), Software Assurance Maturity Model,
Version 1.0, http://www.opensamm.org/ (released March 25, 2009)
13. McGraw, G., Chess, B.: Building Security In Maturity Model version 2, BSIMM2 (May
2010), http://bsimm2.com/
14. McGraw, G.: Software Security: Building Security In. Addison-Wesley (2006)
15. Howard, M., Lipner, S.: The Security Development Lifecycle. Microsoft Press (2006)
16. Chen, L.-C., Lin, C.: Combining Theory with Practice in Information Security Education.
In: Proceedings of the 11th Colloquium for Information Systems Security Education,
Boston, June 4-7 (2007)
Thinking of the College Students' Humanistic Quality
Cultivation in Forestry and Agricultural Colleges

Gui-jun Zheng, De-sheng Deng, and Wei Zhou

Business School of Central South University of Forestry & Technology


Shaoshan Road 498, Changsha, Hunan, China
zgj0419@163.com

Abstract. College students' quality education is a system referring to aspects of
scientific knowledge, practical ability, entrepreneurship, employability,
physical and psychological quality, etc. Growing social needs call for higher-
quality college graduates, especially in humanistic quality, which is an important
part of quality education. Colleges of forestry and agriculture have their
own characteristics and should cultivate a distinctive humanistic spirit and
humanities education. Through the analysis of the existing problems from three aspects,
colleges and universities ought to actively guide students to face the new
social background, and take necessary measures and recommendations to
cultivate college students' humanistic quality.

Keywords: Colleges of Forestry Agriculture, Quality Education, Quality


System, Humanistic Quality.

1 Introduction
There are great differences in humanistic quality education between agricultural and
forestry colleges and universities and other institutions. Generally speaking,
agriculture and forestry colleges have long histories, rich cultures, and disciplinary
advantages in agriculture and forestry. Humanistic quality mainly refers to the
spiritual state of the human subject; it is the integration of the qualities directly
linked with that subjective spiritual state, such as cultural quality, political
thought, psychology, business quality and physical quality. With social progress and
scientific and technological development, humanistic quality has become an important
part of college quality education. Agricultural and forestry colleges and universities
should therefore pay more attention to developing students' humanistic quality,
considering the current education system and the practical demands of society.
Firstly, humanistic quality cultivation caters to the needs of social practice. In a
period of economic development, it is necessary to foster high-quality talents with
high moral cultivation, a strong scientific and cultural level, and a sense of law,
commitment and dedication. Secondly, it meets the demand of cultivating humanistic
spirit. A person's growth and his contribution to society originate from his spiritual
power. Humanistic spirit, centered on the ideals of truth, virtue and beauty,
emphasizes conscience, responsibility and values in pursuing and applying knowledge
[1]. Humanistic quality education internalizes outstanding culture into a relatively
stable internal quality and cultivates students' rational knowledge of the world,
society and individuals, which promotes national cohesion and solidarity. Thirdly,
humanistic quality education is part of education reform and is needed to cultivate
creative talents. Innovative education cultivates the spirit, ability and personality
of innovation, focusing on college students' curiosity, thirst for knowledge and
inquisitive minds. Traditional education runs counter to quality education, so
agriculture and forestry colleges should cultivate humanistic quality in a way that
combines their own features.

2 Raising the Questions
Given the importance of humanistic quality, agriculture and forestry colleges must
pay attention to humanistic education. However, under the long-term influence of
family, society and school, college students' humanistic quality is generally low,
reflected in narrow humanistic knowledge, irrational knowledge structures and poor
psychological quality that do not meet the requirements of actual work [2]. This
phenomenon is connected to a large extent with current higher education. The main
reasons are as follows:

2.1 Neglect of Humanistic Education

For some years, the education sector, influenced by the ideas of pragmatism, has
tended to weaken or abolish humanistic education. Many universities ignore humanistic
education and pursue subject education alone in the process of cultivating students;
coupled with fewer student activities, a good cultural atmosphere cannot be formed
across the campus, and it is replaced by marginal culture and "back street culture".
On the other hand, many students are unconcerned about traditional culture and the
masterpieces, while they are very enthusiastic about practical English and computer
grade examinations, and this makes universities even more indifferent to humanistic
education.

2.2 Lack of a Humanistic Education Guarantee Mechanism


Colleges and universities basically focus on scientific knowledge and know-how in
their development programs for students; they generally plan only basic courses, major
compulsory courses and major elective courses, with no systematic scheme for
cultivating humanistic quality. Where a series of courses is set up to make up for the
lack of traditional education, colleges and universities simply assume that a round of
political education or a few activities can enhance students' humanistic quality. In
fact, lacking a systematic cultivation scheme, such efforts have little effect on
humanistic quality.

2.3 The Loss of College Students' Value Orientation

Entering the 21st century, people are faced with diverse, multi-dimensional and
multi-level value choices. On the one hand, this implies that Chinese society is full
of uplifting energy during its rapid development; on the other hand, it also tells us
that some social members' value orientation has become confused and lost to some
extent during the transformation period. Especially in colleges and universities, a
considerable number
of students feel empty facing the pressure of job searching and increasingly
competitive social life; they are short of ideals and fighting spirit, seek quick
success and instant benefits, bite off more than they can chew, are selfish and lack
responsibility, and are fragile in mind with poor tolerance of frustration. All of
this needs to be guided through the right education, so humanistic education is
urgently needed.

3 Analysis of the Problems

Many reasons lead to low humanistic quality: some lie in family education, some in
the influence of social morality [3]. On the whole, colleges and universities play the
leading role in fostering students' humanistic quality, as discussed below:

3.1 Utilitarianism in Higher Education

In the era of global economic integration, China's economy has witnessed rapid
development and universities have become closely connected to the market. Universities,
especially those which lack state funding, regard education as an industry in the
process of operation. They merely fancy training students into future technicians and
professionals and overemphasize the instrumental value of human resources. They
consider economic benefit the only criterion for everything, ignoring the rules and
laws of education: they open many so-called practical courses, declare "hot" majors,
join the certification rush, randomly cut or cancel humanities courses, and falsely
guide students to regard learning skills and getting certifications as the goal of
study, consequently ignoring the cultivation of humanistic quality.

3.2 The Obsolete Teaching Model

Many universities have set up a series of humanistic courses in recent years, but
some teachers do not update their teaching and examination models; they still
emphasize only the imparting of knowledge, adopt traditional assessment, and are
careless about expanding students' thinking and nurturing their sentiment, which
reduces humanistic education to a technical operation to some degree. It is thus
difficult for college students to experience the humanities or achieve a sublimation
of spirit. Teachers should therefore change teaching models that slight practice and
mind expansion, and assessment should become diversified, open and flexible.

3.3 The Lack of Traditional Culture Education

The loss of traditional culture is an important manifestation of the lack of
humanistic quality. From the view of the social atmosphere, western culture is
flooding in: many students worship western culture and pursue the western way of life,
moral values are diluted, and social customs and relationships tend toward vulgarity
and utility, which leaves traditional Chinese culture missing. Lacking the influence
of traditional Chinese culture, many students have no sense of truth, virtue and
beauty; their cultural literacy is shallow and their individual quality is low; they
have no standard of evaluation and lack life goals. Considering that current college
students were born in the eighties and nineties, mostly as only children, many of
them lack traditional culture
education and have grown up amid material wealth and an excellent environment. It is
therefore necessary for colleges and universities to avoid caring more about the
natural sciences while ignoring the social sciences in discipline construction, and
to value moral and ethical education [4].

4 The Solutions to the Problems

According to the analysis of the problems above, forestry and agriculture schools
could carry out the following tactics to improve college students' humanistic quality:

4.1 Enlarging the Ratio of Autonomous Recruitment Based on the Schools'
Characteristics

The traditional way of recruitment is mainly based on the scores of the college
entrance examination, but studies have found many shortcomings in this way of
recruiting: students may show high scores but low abilities or low humanistic quality,
and many problems appear in their thought and mentality. At the same time, colleges
and universities passively recruit students by score, without the option of
independent recruitment according to their own characteristics, under which they could
evaluate and admit, for a special purpose, students of high quality but lower scores.
Under the current college entrance examination system, colleges and universities could
therefore recruit students according to their own characteristics, evaluating
students' performance in certain aspects such as knowledge of history and culture,
individual character and basic knowledge, so as to improve the overall quality of the
student body.

4.2 Taking the University as the Base to Develop Humanistic Spirit Education
Vigorously
College students are a group with great creative energy and creative passion. The key
to humanistic guidance is whether we can stimulate their enthusiasm to the fullest and
bring their initiative and innovation into play [5]. To cultivate students' abilities
in innovative spirit and practice effectively, colleges and universities should reform
traditional teaching methods, enrich the channels of education, and pay more attention
to the quality of basic education and humanistic education by introducing humanities
knowledge, such as philosophy, history, society, ethics, management and logic, to
students. What is more, humanistic education courses need to be planned into the
program of cultivating students so that their humanistic quality improves through
systematic education.

4.3 Taking Traditional Culture Education as the Key to Cultivating Humanistic
Quality
There are many ways for colleges and universities to strengthen humanistic education
and cultivate humanistic quality, such as building a harmonious campus culture,
combining scientific education with humanistic education, promoting the integration
of Chinese and western culture, focusing on self-cultivation and practice, and
organizing students to study humanities knowledge. Affected by the industrialization
of education and the practical trend, it is time for colleges and universities to
play the leading role in education, promote the essence of traditional culture, and
give correct guidance to students' value direction based on traditional culture
education. As the profound traditional culture is the crystallization of our
5000-year-old Chinese civilization, it can cultivate character, sublimate the spirit,
inspire wisdom and improve literacy, and it plays a basic role in students' quality
education.

4.4 Promoting Humanistic Education under the Guarantee System

Under the influence of the traditional education system and ideology, higher education
has not actually rid itself of the shackles of scripted education. Most education is
delivered by indoctrination, and this phenomenon is certainly connected with the
traditional training system. Therefore, higher education reform must change the old
mode, establish a new training system, and set up a system of quality education for
students through corresponding education and training programs. On the other hand, a
new training system for college teachers should be established. A teacher should have
high morality and integrity. The teacher's knowledge, ability and rich professional
qualities, as well as the teacher's behavior, have a great influence on the formation
of students' healthy personalities and also directly affect the formation of their
correct concepts.

5 Conclusion
College students' quality education is a wide-ranging and systematic problem. This
paper analyzes the main factors from the four aspects of independent recruitment,
humanistic spirit, humanistic education and the humanities education system, and it
puts forward measures centered on enlarging the ratio of autonomous recruitment in
line with the schools' characteristics, developing humanistic spirit education,
educating in traditional culture and constructing a humanistic education guarantee
system. Many other factors, such as political consciousness and political
accomplishment, are not covered here, and further work should be done through
quantitative analysis.

Acknowledgments. This article is supported by the Hunan degree and postgraduate
education teaching reform project and a school teaching reform project, whose
kindness we gratefully acknowledge.

References
1. Li, W.: The theory and practice of College Students' Quality Education. China
Journal of Radio and TV University 1, 82–85 (2006)
2. Yang, J.: Humanistic Education Thinking and Practice for College Students. Explore
Reform (2) (2007) (in Chinese)
3. Sorensen, C.W., Furst-Bowe, J.A., Moen, D.M. (eds.): Quality and Performance
Excellence in Higher Education. Anker Press (2005)
4. Ren, Y.: Traditional Culture and Humanistic Education. Journal of Vocational and
Technical College of Yiyang 3, 67–68 (2009) (in Chinese)
5. Hong, B.: The Analysis of Cultivating Humanistic Spirit for College Students.
Culture and Education Material 9, 192–193 (2009)
A Heuristic Approach of Code Assignment to Obtain
an Optimal FSM Design

M. Altaf Mukati

Professor and Dean (Engineering Sciences), Bahria University, Pakistan


altafmukati@gmail.com

Abstract. Circuit minimization is always helpful in obtaining efficient and compact
circuits, besides being cost effective. Minimization of an FSM (Finite State Machine)
can be done in two steps, i.e., state minimization and state (code) assignment. The
state assignment problem has remained the subject of extensive theoretical research.
The state assignment problem in FSMs is NP-complete, and therefore requires extensive
computation if approached with any exact algorithm. A number of heuristics have
consequently been developed to obtain good state assignments. State assignment is
targeted either at area minimization or at low power. In this paper, a heuristic
approach is presented for reducing the combinational logic in an FSM, i.e., area
minimization. The paper also exposes how improper assignment of codes results in a
large circuit.

Keywords: Finite State Machine, State Assignment, Code Assignment.

1 Introduction
The concept of the FSM first emerged in 1961. An FSM can be formally defined as a
quintuple M = (I, S, O, δ, λ), where I is a finite set of inputs, S is a finite,
nonempty set of states, O is a finite set of outputs, δ: I × S → S is the next-state
function, and λ: I × S → O (or λ: S → O) is the output function for a sequential
circuit [1]. It is a device that allows simple and accurate design of sequential logic
and control functions. Any large sequential circuit can be represented as an FSM for
easier analysis; for example, the control units of various microprocessor chips can be
modeled as FSMs [2]. Moreover, an FSM can be modeled by discrete Markov chains, and
the static probabilities (the probabilities that the FSM is in a given state),
obtained from the Chapman-Kolmogorov equations [3], are useful for performing
synthesis and analysis. FSM concepts are also applied in areas such as pattern
recognition and artificial intelligence [4]. FSMs are widely used to reduce logic
complexity and hence cost; however, in the asynchronous type, minimization of the
combinational logic has to be dealt with carefully to avoid races and hazards, which
means a minimized circuit may not be the desired one if it carries the threat of races
and hazards. Hence a classic design problem of asynchronous sequential machines is to
find the optimum state code assignment for critical race-free operation [5].
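To make the quintuple concrete, the sketch below encodes a small hypothetical
two-state Mealy machine (our own illustration, not an example from this paper) as
plain Python dictionaries and steps it over an input sequence.

# A minimal sketch of the quintuple M = (I, S, O, delta, lambda) in Python.
# The machine is a hypothetical detector of two consecutive 1s.
I = {0, 1}                                # finite input set
S = {"A", "B"}                            # finite, nonempty state set
O = {0, 1}                                # finite output set

delta = {("A", 0): "A", ("A", 1): "B",    # next-state function delta: I x S -> S
         ("B", 0): "A", ("B", 1): "B"}
lam   = {("A", 0): 0, ("A", 1): 0,        # output function lambda: I x S -> O
         ("B", 0): 0, ("B", 1): 1}        # outputs 1 on the second consecutive 1

def run(state, inputs):
    """Step the machine, collecting the output produced for each input bit."""
    outputs = []
    for x in inputs:
        outputs.append(lam[(state, x)])
        state = delta[(state, x)]
    return outputs

print(run("A", [1, 1, 0, 1, 1, 1]))       # -> [0, 1, 0, 0, 1, 1]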
An FSM can be optimized for area, performance, power consumption, or testability.
The design of an FSM can be simplified in different steps, such as state minimization,
state assignment, logic synthesis and optimization of the sequential circuit [4]. The
first step, state minimization, is related to reducing the number of states, which in
turn reduces the number of flip-flops. In the early days this step did not get much
attention, in view of the inherent complexity of the process. It was shown that the
reduction of completely specified finite automata can be achieved in O(n log n) steps
[6], whereas the minimization of incompletely specified finite automata is an
NP-complete problem [7]. In view of the growing requirement for FSM circuits in
digital systems, designers were forced to find appropriate methods to reduce a state
table; the implication chart method is one such method. The second step is carried out
by assigning proper codes to the remaining states to obtain minimal combinational
logic, but no definite method is available to guarantee a minimal circuit. The
synthesis of FSMs can be divided into functional design, logic design and physical
design. Logic design maps the functional description of an FSM into a logic
representation using logic variables [8]. Its optimization can considerably affect
various performance metrics such as power, area and delay [8]. The state assignment
problem is concerned with minimizing the combinational gates when binary values are
assigned to the remaining states contained in the reduced state table.

2 Literature Survey
Previous approaches to state assignment targeted both area and performance, for
two-level and multi-level logic circuits [9][10]. In [11], the JEDI algorithm performs
state assignment for a multi-level logic implementation in two stages, a weight
assignment stage and an encoding stage. In [12], state assignment algorithms are
described that target low-power circuits, which is shown to be achieved by assigning
codes to the states in such a way as to reduce switching activity on the input and
output state variables. Several state assignment algorithms and heuristics have been
developed. In [3], an algorithm known as the Sequential Algorithm is presented, which
assigns codes to the states depending on the states assigned earlier. It needs to
define the set K^R of all the state codes that can be assigned, where R is the code
width, which can be any value in the range [⌈log2 M⌉, M], where M is the number of
states in the reduced state table. Most state assignment algorithms have focused on
the minimum state code length [13]. However, an assignment with the minimum state code
length does not always mean the minimum circuit size [13][14] when realized on a
silicon chip. Hartmanis, Stearns, Karp and Kohavi presented algebraic methods based on
partition theory [15]. Their methods were based on a reduced dependence criterion that
resulted in good assignments of codes, but did not guarantee the most optimal circuit;
moreover, no systematic procedure was provided to deal with the assignment of codes in
large FSMs. Armstrong presented a method based on the interpretation of a graph of the
problem [15]. Although it was able to deal with a large FSM, his method could not make
much impact, due to its limitation in transforming the state assignment problem into a
graph embedding problem that only partially represented the codes [15]. Armstrong's
technique was improved in [16], and NOVA [17] is also based on a graph embedding
algorithm. However, there is still no state assignment procedure that guarantees a
minimal combinational circuit [18].

The state assignment problem in FSMs, especially in larger FSMs, may not be optimally
solvable because it is NP-complete, i.e., it can be formulated as an optimization
problem that is NP-complete. Algorithms that try to solve this problem exactly are
computationally intensive [4]; therefore several researchers have worked on heuristic
solutions rather than exact algorithms, to obtain good state assignments. State
assignment thus remains one of the challenging problems of switching theory [18].

3 Problem Description
Each state of an FSM corresponds to one of the 2^n possible combinations of the n
state variables. As an illustration, consider a reduced state table of a certain
problem containing 5 states, i.e., r = 5, which requires 3 bits to represent each
state, i.e., n = 3. One possible assignment of codes to the states is:

A = 001   B = 011   C = 100   D = 110   E = 111

Clearly each state can be assigned any of the 8 possible combinations of bits, i.e.,
from 000 to 111. The variables n and r are related as:

2^(n-1) < r ≤ 2^n                                      (1)

In general, the total number of permutations for a 3-bit code would be 8! = 40320.
For values of r less than 2^n the number of permutations would be smaller, but it
would still be large. Out of these possible permutations, very few represent distinct
assignments, as proved by McCluskey [19]. He showed that the number of distinct row
assignments for a table with r rows using n state variables is:

N_D = (2^n − 1)! / ((2^n − r)! · n!)                   (2)

where N_D is the number of distinct assignments.
Equation (2) suggests that for a state table containing 4 states, i.e., r = 4,
requiring 2 bits to represent each state, i.e., n = 2, the number of distinct
assignments is 3. These distinct assignments are best understood through Figure 1:
although 24 possible combinations exist, only 3 are distinct. Any other assignment
would be just a rotation or reversal of one of the three assignments, and would thus
correspond either to reversing the order of the variables or to complementing one or
both variables. Such changes do not change the form of any Boolean function [19][20].

Fig. 1. Allocation of distinct assignments



As in the above case, (00-01-11-10) and (11-10-00-01) will still result in the same
circuit, as the latter can be obtained from the former by inverting the variables. As
is evident from equations (1) and (2), the number of distinct code assignments
increases very sharply as the value of r increases, as shown in Table 1 [20]:

Table 1. Number of distinct allocations w.r.t. r

No. of States (r)   No. of Variables (n)   No. of distinct allocations
2                   1                      1
3                   2                      3
4                   2                      3
5                   3                      140
6                   3                      420
7                   3                      840
8                   3                      840
9                   4                      10,810,800
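Equation (2) is straightforward to evaluate mechanically; the following minimal
Python sketch (the function name is our own) derives n from relation (1) and
reproduces the N_D column of Table 1.

# Reproducing Table 1: distinct row assignments N_D per equation (2),
# with n state variables as implied by relation (1).
from math import ceil, log2, factorial

def distinct_assignments(r):
    n = max(1, ceil(log2(r)))          # smallest n with 2^(n-1) < r <= 2^n
    nd = factorial(2**n - 1) // (factorial(2**n - r) * factorial(n))
    return n, nd

for r in range(2, 10):
    n, nd = distinct_assignments(r)
    print(f"r = {r}: n = {n}, N_D = {nd:,}")   # r = 9 gives 10,810,800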

When r is high, the distinct assignment that guarantees minimal combinational logic is
extremely difficult to find, as these distinct assignments produce circuits of varying
complexity. Working through all possible distinct code assignments would require
intensive computation that could take days even on a high-speed computer. As an
alternative, a heuristic approach is presented in this paper that can produce
well-reduced combinational logic, if not the most minimal one.

3.1 Two Rules

In a reduced state table:

1. Assign adjacent codes to the present states which lead to the identical next
state for a given input.
2. Assign adjacent codes to the next states which correspond to the same present
state.

Rule 1 has precedence over rule 2. If both rules are applied to a given reduced state
table, they are likely to produce one of a probable set of simplified design
equations; a sketch of the pairing step follows.
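The pairing step behind these rules can be mechanized directly. The following sketch
is a minimal illustration under our own conventions: the state table is assumed to be
a dictionary mapping (present state, input) to next state, and the helper name is
hypothetical. It only enumerates the state pairs nominated for adjacent
(Hamming-distance-1) codes; embedding those pairs onto the code map is a separate
step not shown here.

def adjacency_pairs(next_state, states, inputs):
    rule1, rule2 = set(), set()
    # Rule 1: present states leading to the identical next state for an input.
    for x in inputs:
        for s in states:
            for t in states:
                if s < t and next_state[(s, x)] == next_state[(t, x)]:
                    rule1.add((s, t))
    # Rule 2: next states reached from the same present state.
    for s in states:
        succ = sorted({next_state[(s, x)] for x in inputs})
        rule2.update((a, b) for i, a in enumerate(succ) for b in succ[i + 1:])
    return rule1, rule2   # rule 1 takes precedence over rule 2

Applied to the BCD-detector table of Section 3.2, this yields the rule-1 pairs
(S2, S7), (S3, S6), (S5, S7) and the rule-2 pairs (S1, S4), (S3, S6), (S5, S7),
exactly the six pairings enumerated in the example.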

3.2 Example

The state diagram in Figure 2 represents an FSM to detect BCD code appearing at its
primary input X. Clearly 0000 to 1001 are valid codes. With every clock cycle, a
single bit of the code (with MSB first) enters into the circuit. On the detection of an
invalid code, the output Z is raised to high. The State Table is given in Table 2[20].

Due to the nature of the problem, no state can be eliminated, i.e., a reduced state
table is not required in this example. With r = 8, code assignment can clearly be done
in 840 distinct ways (refer to Table 1). To demonstrate the working of the two rules
described in Section 3.1, we first evaluate how many gates are required after
assigning three distinct random codes, as in Table 3. In the next step, we assign the
codes after applying the given rules, and we then compare all the reduced circuits to
draw conclusions. Using J-K flip-flops, the three sets of design equations obtained
are summarized in Table 4. The total number of logic gates required in each case is
summarized in Table 5. NOT gates are not required for the internal variables in such
circuits, and all gates are counted as 2-input gates in the calculations.

Fig. 2. State diagram of BCD Detector

Table 2. State table of BCD Detector

Present     Next State        Output (z)
State       X=0     X=1       X=0    X=1
S0          S1      S4        0      0
S1          S2      S2        0      0
S2          S3      S3        0      0
S3          S0      S0        0      0
S4          S7      S5        0      0
S5          S6      S6        0      0
S6          S0      S0        1      1
S7          S3      S6        0      0
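Table 2 can be cross-checked by exhaustive simulation: feeding each of the sixteen
4-bit codes into the machine MSB-first from S0 should raise Z exactly for the six
invalid codes 1010 to 1111. A minimal sketch, using our own encoding of the table:

# Exhaustive check of the BCD-detector state table (Table 2).
T = {  # state: (next state if X=0, next state if X=1, z if X=0, z if X=1)
    "S0": ("S1", "S4", 0, 0), "S1": ("S2", "S2", 0, 0),
    "S2": ("S3", "S3", 0, 0), "S3": ("S0", "S0", 0, 0),
    "S4": ("S7", "S5", 0, 0), "S5": ("S6", "S6", 0, 0),
    "S6": ("S0", "S0", 1, 1), "S7": ("S3", "S6", 0, 0),
}

for code in range(16):
    state, z = "S0", 0
    for bit in ((code >> i) & 1 for i in (3, 2, 1, 0)):   # MSB first
        n0, n1, z0, z1 = T[state]
        z = max(z, z1 if bit else z0)        # latch a high output
        state = n1 if bit else n0
    assert (z == 1) == (code > 9), code      # invalid iff code > 1001
print("Z is raised exactly for the six invalid codes 1010-1111")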

Table 3. Three randomly chosen code assignments

            Assignment-1    Assignment-2    Assignment-3
State       C  B  A         C  B  A         C  B  A
S0          0  0  0         0  0  0         0  0  0
S1          0  0  1         0  0  1         0  0  1
S2          0  1  1         0  1  0         0  1  1
S3          0  1  0         0  1  1         0  1  0
S4          1  1  0         1  0  0         1  0  0
S5          1  1  1         1  0  1         1  0  1
S6          1  0  1         1  1  0         1  1  1
S7          1  0  0         1  1  1         1  1  0

Table 4. Three sets of design equations

Assignment No. 1:   JA = A + B C + B C + B X;   KA = B C + B C;
                    JB = C X + A C + A C X;     KB = C X + A C + A B C;
                    JC = A B X;                 KC = B X + A B;   Z = A B C

Assignment No. 2:   JA = C X + B C;             KA = B + C + X;
                    JB = A + C X;               KB = A C + A C;
                    JC = A B X;                 KC = A B + B X;   Z = A B C

Assignment No. 3:   JA = C X + B C X;           KA = B;
                    JB = A + C X;               KB = A C + A C;
                    JC = A B X;                 KC = A B + B X;   Z = A B C

Table 5. Total number of logic gates required in each assignment

Logic Gates    Assignment No.1    Assignment No.2    Assignment No.3
OR Gates       7                  8                  4
AND Gates      19                 12                 12
Total Gates    26                 20                 16

According to rule No. 1:

(1) S3 and S6 are assigned adjacent codes, as their next states are exactly the same;
(2) S2 and S7 are assigned adjacent codes, as they have the same next state under X = 0;
(3) S5 and S7 are assigned adjacent codes, as they have the same next state under X = 1.

According to rule No. 2:

(4) S1 and S4 are assigned adjacent codes, as they are both next states of the same
present state (S0);
(5) S5 and S7 are assigned adjacent codes, as they are both next states of the same
present state (S4);
(6) S3 and S6 are assigned adjacent codes, as they are both next states of the same
present state (S7).

One possible assignment of codes is presented in Figure 3. Deriving the equations from
this assignment is likely to produce a minimal circuit. Table 6 summarizes the whole
situation.

C \ BA   00   01   11   10
0        S3   S6   S2   S7
1        S1   S4   S0   S5

Fig. 3. One possible code assignment

Table 6. Assignment of specific codes as per the described rules, along with the
corresponding design equations and the logic gate requirements

State codes (C B A):   S0 = 111, S1 = 100, S2 = 011, S3 = 000,
                       S4 = 101, S5 = 110, S6 = 001, S7 = 010

Design equations:      JA = B + C + X;   KA = B C + B C + B X;
                       JB = 1;           KB = 1;
                       JC = B;           KC = A + B X;
                       Z = A B C

Logic gate requirements: 5 OR gates, 6 AND gates, 11 gates in total

4 Conclusion
Assigning codes to the states of a reduced state table at random produces larger FSM
circuits, whereas applying the heuristics presented in this paper yields simplified
combinational logic.

By working through a specific problem, we obtained circuits comprising 26, 20 and 16
2-input basic logic gates with three random assignments (without heuristics). After
applying the heuristics described in this paper, we obtained a circuit comprising just
11 2-input basic gates. This may not be the most minimal circuit, as other assignments
are still possible under the same heuristics, but the approach guarantees a less
complex circuit than one obtained without heuristics.

References
1. Avedillo, M.J., Quintana, J.M., Huertas, J.L.: Efficient state reduction methods
for PLA-based sequential circuits. IEEE Proceedings-E 139(6) (November 1992)
2. Bader, D.A., Madduri, K.: A Parallel State Assignment Algorithm for Finite State
Machines, http://cs.unm.edu/~treport/tr/03-12/parjedi-bader.pdf
3. Salauyou, V., Grzes, T.: FSM State Assignment Methods for Low-Power Design. In: 6th
International Conference on Computer Information Systems and Industrial Management
Applications (CISIM 2007), pp. 345–350 (June 2007)
4. Bader, D.A., Madduri, K.: A Parallel State Assignment Algorithm for Finite State
Machines. In: Bougé, L., Prasanna, V.K. (eds.) HiPC 2004. LNCS, vol. 3296,
pp. 297–308. Springer, Heidelberg (2004)
5. Unger, S.H.: Asynchronous Sequential Switching Circuits. John Wiley & Sons (1969)
6. Hopcroft, J.: An n log n algorithm for minimizing states in a finite automaton. In:
Kohavi, Z. (ed.) Theory of Machines and Computations, pp. 189–196. Academic Press (1971)
7. Pfleeger, C.: State reduction of incompletely specified finite state machines. IEEE
Trans. C-26, 1099–1102 (1973)
8. Shiue, W.-T.: Novel state minimization and state assignment in finite state machine
design for low-power portable devices. Integration, the VLSI Journal 38, 549–570 (2005)
9. Eschermann, B.: State assignment for hardwired control units. ACM Computing
Surveys 25(4), 415–436 (1993)
10. De Micheli, G.: Synthesis and optimization of digital circuits. McGraw-Hill (1994)
11. Lin, B., Newton, A.R.: Synthesis of multiple level logic from symbolic high-level
description languages. In: Proc. of International Conference on VLSI, pp. 187–196
(August 1989)
12. Benini, L., De Micheli, G.: State assignment for low power dissipation. In: IEEE
Custom Integrated Circuits Conference (1994)
13. Cho, K.-R., Asada, K.: VLSI Oriented Design Method of Asynchronous Sequential
Circuits Based on One-hot State Code and Two-transistor AND Logic. Electronics and
Communications in Japan (Part III: Fundamental Electronic Science) 75(4) (February 22,
2007)
14. Tan, C.J.: State assignments for asynchronous sequential machines. IEEE Trans.
Comput. C-20, 382–391 (1971)
15. De Micheli, G., et al.: Optimal state assignment for finite state machines. IEEE
Transactions on Computer-Aided Design CAD-4(3) (1985)
16. De Micheli, G., Sangiovanni-Vincentelli, A., Villa, T.: Computer-aided synthesis
of PLA-based finite state machines. In: Int. Conf. on Comp. Aid. Des., Santa Clara,
CA, pp. 154–157 (September 1983)
17. Villa, T., Sangiovanni-Vincentelli, A.: NOVA: state assignment for optimal
two-level logic implementation. IEEE Trans. Computer-Aided Design 9(9), 905–924 (1990)
18. Mano, M.M.: Digital Logic and Computer Design, ch. 6, rev. ed. Prentice Hall,
Inc., Englewood Cliffs (2001)
19. McCluskey, E.J., Unger, S.H.: A Note on the Number of Internal Assignments for
Sequential Circuits. IRE Trans. on Electronic Computers EC-8(4), 439–440 (1959)
20. Mukati, A., Memon, A.R., Ahmed, J.: Finite State Machine: Techniques to obtain
Minimal Equations for Combinational Part. Pub. Research Journal 23(2) (April 2004)
Development of LEON3-FT Processor Emulator for
Flight Software Development and Test

Jong-Wook Choi, Hyun-Kyu Shin, Jae-Seung Lee, and Yee-Jin Cheon

Satellite Flight Software Department (SWT), Korea Aerospace Research Institute,
115 Gwahanno, Yuseong, Daejeon, Korea
{jwchoi,hkshin,jslee,yjcheon}@kari.re.kr

Abstract. During the development of flight software, a processor emulator and a
satellite simulator are essential tools for software development and verification.
SWT/KARI has developed a software-based spacecraft simulator based on the TSIM-LEON3
processor emulator from Aeroflex Gaisler. However, when developing flight software
with TSIM-LEON3, there are many limitations in emulating the real LEON3-FT processor,
and it is difficult to change or modify the emulator core to integrate the FSW
development platform and the satellite simulator. To resolve these problems, this
paper presents the development of a new GUI-based and cycle-true LEON3-FT processor
emulator, LAYSIM-leon3, and describes the software development and debugging method
on the VxWorks/RTEMS RTOS.

Keywords: LEON3, LAYSIM-leon3, emulator, ISS, cycle-true, GUI-based.

1 Introduction
The microprocessor in an on-board computer (OBC) is responsible for executing the
flight software (FSW), which controls the satellite and accomplishes its missions; it
is specially designed to operate in the space environment. Satellites currently being
developed by KARI (Korea Aerospace Research Institute) use the ERC32 processor, and
the LEON3-FT processor will be embedded in the OBC of next-generation satellites; both
processors were developed by ESA (European Space Agency)/ESTEC (European Space
Research and Technology Centre).
The processor emulator is an essential tool for developing FSW and is the core of
building the satellite simulator, but there is a very limited selection of LEON3
processor emulators. Only TSIM-LEON3 from Aeroflex Gaisler is available commercially,
so it has been inevitable to keep purchasing TSIM-LEON3 for FSW development and for
constructing the satellite simulator. However, TSIM-LEON3 does not support the full
features of the LEON3-FT model, and it is difficult to change or modify the emulator
core to integrate the FSW development platform and the satellite simulator.
In order to resolve these problems, a new LEON3-FT processor emulator, LAYSIM-leon3,
has been developed. LAYSIM-leon3 is a cycle-true instruction set simulator (ISS) for
the LEON3-FT processor and it includes an
embedded source-level debugger. LAYSIM-leon3 can also support a full system simulator
for the SCU-DM (Spacecraft Computer Unit Development Model), which is based on the
LEON3-FT/GRLIB and various ASIC/FPGA cores.
This paper presents the architecture and design of LAYSIM-leon3 and the results of
FSW development and test under LAYSIM-leon3. In Section 2, we introduce emulation
methods and the status of emulators for LEON3. The detailed simulation of
LAYSIM-leon3 is discussed in Section 3. Section 4 describes the software development
environment under LAYSIM-leon3 with the VxWorks/RTEMS RTOS. Finally, we draw
conclusions in Section 5.

2 Emulation Method and Emulator Status


The methods of emulating a processor can be categorized into two major approaches:
interpretation and dynamic translation. Interpretation is the widely used method for
cross-platform program execution: it fetches an instruction from the target executable
code, decodes it for the host platform, such as an x86 machine, and then executes it.
It therefore incurs a large overhead for every converted instruction, and it is very
hard to meet real-time performance when the target system runs at a high system clock.
However, this method is relatively easy to implement and provides cycle-true emulation
of the target platform. Dynamic translation, as used by QEMU, takes a different
approach: blocks of target instructions are compiled to host instructions just-in-time
(JIT) as they are encountered and are stored in memory. When the same block is
encountered again, the precompiled block is retrieved from memory and executed. This
yields a remarkable performance improvement of around 5 to 10 times over an
interpreted emulator. However, this method cannot provide cycle-true emulation and
leads to issues with the target processor clock and I/O timing [1], so it is difficult
to verify flight software modules that have time-constrained attributes.
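As a toy illustration of the interpretation method only (the three-field instruction
format and cycle costs below are invented for the example and are not LAYSIM-leon3 or
TSIM code), the fetch-decode-execute loop with per-instruction cycle bookkeeping looks
like this:

# A minimal sketch of interpretation: fetch, decode, execute on the host,
# and account cycles per instruction (the basis of cycle-true emulation).
CYCLES = {"li": 1, "add": 1, "ld": 3, "halt": 1}   # assumed cycle costs

def interpret(program, mem):
    regs, pc, clock = [0] * 4, 0, 0
    while True:
        op, a, b = program[pc]            # fetch + decode
        pc += 1
        clock += CYCLES[op]               # cycle-true bookkeeping
        if op == "li":
            regs[a] = b                   # execute on the host machine
        elif op == "add":
            regs[a] += regs[b]
        elif op == "ld":
            regs[a] = mem[b]
        elif op == "halt":
            return regs, clock

prog = [("li", 0, 5), ("ld", 1, 0), ("add", 0, 1), ("halt", 0, 0)]
print(interpret(prog, {0: 37}))           # -> ([42, 37, 0, 0], 6)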
The first seven processor emulators supporting ERC32 and LEON2/3 shown in Table 1 were
developed by ESA-related companies; the last two emulators, for ERC32, were developed
by the Satellite Flight Software Department (SWT) in KARI. LAYSIM-leon3 has been
developed based on LAYSIM-erc32, with the specific features of the LEON3-FT processor
applied. Both LAYSIM-erc32 and LAYSIM-leon3 use the interpretation method, whereas
QEMU laysim-erc32 uses the dynamic translation method based on the QEMU core.

Table 1. Processor Emulator Support Status for ERC32 & LEON2/3

Emulator           Type             Processor        Supplier        Remark
TSIM               Interpretation   ERC32, LEON2/3   Aeroflex-GR     Cycle true / commercial; used for most ESA projects; KOMPSAT-3/5 satellite simulator in KARI
Leon-SVE           Interpretation   LEON2            Spacebel        Full representative of LEON2-FT
SimERC32/SimLEON   Interpretation   ERC32, LEON2/3   Astrium/CNES    Astrium internal (SIMERC32 emulator in SIMIX); used for Gaia Real-Time Simulator
Sim-SCOC3          Dynamic transl.  LEON3            Astrium         Spacecraft Controller On-a-Chip with LEON3-FT
Sim-MDPA           Interpretation   LEON2            Astrium         Multi-DSP/Micro-Processor Architecture with LEON2-FT
ESOC Simulator     Interpretation   ERC32            ESOC/VEGA       Used for most ESOC/ESA ground systems
QERx               Dynamic transl.  ERC32, LEON2     SciSys/FFQTECH  Based on QEMU 0.9.1; used for Galileo Constellation Operation Simulator
QEMU laysim-erc32  Dynamic transl.  ERC32            SWT/KARI        Based on QEMU 0.11.1; S/W development in VxWorks/RTEMS RTOS
LAYSIM-erc32       Interpretation   ERC32            SWT/KARI        Windows & Linux platform; source-level debugging and cycle true; KOMPSAT-3/5 Ground Operation Simulator in KARI

3 Architecture and Design of LAYSIM-leon3


The LEON3-FT from Aeroflex Gaisler is a fault-tolerant version of the standard
LEON3 SPARC V8 processor, it is designed for operation in the harsh space
environment and includes functionality to detect and correct errors in all on-chip
memories. It is a synthesizable VHDL model that can be implemented on FPGA
board or AISC, and it is just one of GRLIB which is a library of reusable IP cores for
SoC development from Aeroflex Gaisler [2]. The LEON3FT-RTAX processor is a
SoC design based on LEON3-FT, implemented in the RTAX2000S radiation-tolerant
FPGA with various application-specific IP cores [3]. The SCU-DM developed by
KARI is based on LEON3FT-RTAX and various ASIC/FPGA cores. Fig. 1 shows the
internal architecture of the SCU-DM.

Fig. 1. The SCU-DM internal architecture

3.1 Architecture of LAYSIM-leon3

LAYSIM-leon3 has been developed using the GNU compiler and the GTK library for its
GUI, so it can be executed on Windows and Linux platforms without any modification.
LAYSIM-leon3 can be divided broadly into seven parts. First, the file loader module is
responsible for loading a LEON3 program into memory; it analyzes and stores the symbol
and debugging information according to the file format (a.out, elf, or binary). The
source/disassembler module displays a mixed format of source code and disassembled
code in the GUI source viewer. The IU (Integer Unit) execution module is the core of
LAYSIM-leon3 and executes the SPARC v8 instructions. The FPU execution module takes
responsibility for FPU operations. All GRLIB operations are controlled and executed by
the GRLIB execution module. Traps and interrupts are treated by the trap/interrupt
handling module. Finally, the GUI control module takes care of watch/breakpoint
operations, real-time register updates, and user control of the GUI environment.

Fig. 2. LAYSIM-leon3 Emulator Architecture

3.2 File Loader Module

LEON3 programs that can be loaded into LAYSIM-leon3 are in the a.out file format
produced by VxWorks 5.4 or in the elf file format produced by VxWorks 6.5, RCC (RTEMS
LEON/ERC32 Cross-Compiler) and BCC (Bare-C Cross-Compiler System for LEON). A binary
file can also be loaded into LAYSIM-leon3 with an address option. While loading a
LEON3 program, the appropriate loader is executed after analysis of the file format;
it extracts symbol and debugging information and copies the text/data segments to
memory. If a RAM-based LEON3 program is selected, the stack/frame pointers of the IU
are automatically set for its execution in RAM.

3.3 Source/Disassembler Module

If the matching C source code of a LEON3 program loaded through the file loader module
is available, the source/disassembler module displays the mixed format in the GUI
source viewer; otherwise it displays assembler code only. For disassembly, the
Suggested Assembly Language Syntax [4] from SPARC is adopted for the convenience of
software engineers. The LEON3-FT, a SPARC v8 core, supports 5 types of instructions:
load/store, arithmetic/logical/shift, control transfer, read/write control register,
and FP/CP instructions.
To trace code execution, LAYSIM-leon3 has a code coverage function. In the GUI source
viewer, executed code lines are highlighted in blue, untouched code is colored black,
and the currently executing code line is marked in red. After execution, it can report
the code coverage of the LEON3 program with the source code.

3.4 IU Execution Module

The IU execution module, which executes the SPARC v8 instructions, operates as a
single thread and can be controlled by run, stop, step, etc., from the GUI control
toolbar or the console. It performs the 7-stage instruction pipeline of the LEON3-FT:
FE (instruction fetch), DE (decode), RA (register access), EX (execute), ME (memory),
XC (exception) and WR (write).
All operations of the IU execution module are shown in Figure 3. During the fetch
stage, it fetches two instructions according to PC/nPC from memory or the icache, and
it updates the icache according to the icache update rule. If it cannot access the
memory indicated by PC/nPC, an instruction access error trap occurs. After checking
the currently pending interrupts and the trap conditions (traps are enabled in the PSR
and the interrupt level is higher than the pil field of the PSR), it updates the trap
base register (TBR) and services the highest pending interrupt. In the instruction
decode stage, it analyzes the SPARC v8 instruction to be executed and calls the
corresponding emulation function. The execute/memory step performs the called
function, reads the required registers/memory, and stores the result back into the
registers/memory. If the decoded instruction is a floating-point instruction, it is
treated by the FPU execution module.
During the execution of each instruction, this module checks the privilege, alignment
and trap conditions of the instruction. If an exception occurs, it sets the trap
environment and services the trap according to the LEON3 trap handling rule. If the
trap cannot be recovered from, the LEON3 transitions to error mode and execution
stops. In the non-critical case, the module calculates the cycle time of the
instruction and updates the system clock and timer registers through the GRLIB
execution module, which also services the timed events for the various GRLIB
operations and user FPGAs/ASICs. Lastly, the IU execution module updates the GUI
environment for timers, UARTs, etc.

Fig. 3. LAYSIM-leon3 IU Execution Module Flow



3.5 FPU Execution Module

Because the FPU of the LEON3-FT, the GRFPU-lite, follows the IEEE-754 standard,
LAYSIM-leon3 uses the resources of the x86 host machine to perform FPU instructions,
and the results are reflected in the FPU registers. If an FPU exception occurs during
an FPU operation, the exception is first processed accurately on the host x86 machine
and then the exception information is applied to the FSR/FPU of LAYSIM-leon3.
While the GRFPU-lite can perform only a single FP instruction at a time, if FP
instructions are issued in succession, the first FP instruction is held in the FP
queue until the end of its execution and qne.FSR is set to 1 (not empty). IU execution
is also blocked until the FP queue becomes empty, which marks the end of execution of
the FP instruction. Calculating the cycle time of an FPU instruction is more
complicated than in the IU case; moreover, if the register holding the result of a
previously executed instruction is used as a source operand in the current
instruction, a hardware interlock adds one or more delay cycles. This H/W interlock
mechanism is implemented in LAYSIM-leon3 as in the actual LEON3-FT.
The FPU operates in one of three modes: execution, exception, and pending exception.
During execution mode, if an exception such as divide-by-zero or overflow/underflow
occurs, the FPU transitions to the pending exception mode, but the IU is not
immediately aware of the error condition of the FPU. The IU finally discovers the FPU
exception when executing another FP instruction; the FPU mode is then changed to the
exception mode, and the FPU exception trap is invoked by the IU (a deferred trap). If
software handles the FPU exception properly, the FP queue becomes empty and the FPU
mode changes back to execution mode, which can operate FP instructions; otherwise the
LEON3-FT enters error mode, which halts any further operation.
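The deferred-trap behavior can be summarized with a small model: an error during an
FP operation only parks the FPU in the pending exception mode, and the trap is taken
when the IU issues the next FP instruction. The following sketch is a simplified
illustration of that state machine; the class and method names are our own, and a
host-side exception stands in for GRFPU-lite error detection.

# A minimal sketch of the deferred FPU trap: the error is only "pending"
# until the IU executes another FP instruction.
EXECUTION, PENDING, EXCEPTION = "execution", "pending", "exception"

class FPUModel:
    def __init__(self):
        self.mode = EXECUTION

    def execute_fp(self, op):
        """Returns 'trap' when the deferred FPU exception trap is invoked."""
        if self.mode == PENDING:          # IU notices the earlier error now
            self.mode = EXCEPTION
            return "trap"                 # deferred fp_exception trap
        try:
            op()                          # perform on the host machine
        except ArithmeticError:
            self.mode = PENDING           # IU not yet aware of the error
        return "ok"

    def handle_trap(self):
        self.mode = EXECUTION             # software handled it; queue empty

fpu = FPUModel()
print(fpu.execute_fp(lambda: 1.0 / 0.0))  # ok (error is only pending)
print(fpu.execute_fp(lambda: 2.0 * 2.0))  # trap (deferred exception)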

3.6 GRLIB Execution Module

The GRLIB execution module in LAYSIM-leon3 implements various IP cores such as the
memory controller, APBUART, GPTIMER, IRQMP, GRGPIO, GRFIFO, SpaceWire (SpW), etc. They
consist of registers, memory, and controllers that software can access as if they were
real hardware.
In the case of the memory controller, it sets the sizes of RAM/ROM and the waitstates.
If software accesses an unimplemented area, a trap arises, and the waitstates consume
additional cycles on memory read/write operations. The IRQMP controls the 15
internal/external interrupts for the CPU, which are treated by the trap/interrupt
handling module. The GRGPIO and GRFIFO are supported in LAYSIM-leon3 for external
interfacing and DMA operation. The APBUART is implemented as a GUI console or can be
redirected to an external interface. The 3 GPTimers are also implemented with the real
hardware operation mechanism: the scaler and the count of each timer are decremented
according to the cycle time of IU/FPU instruction execution, and when a timer expires,
the corresponding interrupt is invoked and treated by the IU execution module together
with the trap/interrupt handling module. The SpW module can send/receive data via a
virtual SpW channel to/from external SpW test equipment, which is also a
software-based simulator. All registers of the GRLIB devices are mapped to AMBA
APB/AHB addresses and are controlled by event functions and register operations.
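The timer mechanism just described can be pictured with a small model: the scaler is
decremented by the cycles consumed by each emulated instruction, each scaler underflow
decrements the timer count, and a timer underflow raises the interrupt and reloads the
count. The sketch below is a simplified illustration under those assumptions, not the
actual GRLIB GPTIMER register model.

# A minimal sketch of a prescaled down-counting timer driven by
# per-instruction cycle costs, as described in the text above.
class GPTimerModel:
    def __init__(self, scaler_reload, timer_reload, raise_irq):
        self.scaler_reload = scaler_reload
        self.timer_reload = timer_reload
        self.scaler = scaler_reload
        self.timer = timer_reload
        self.raise_irq = raise_irq        # callback into the IRQ controller

    def tick(self, cycles):
        """Advance the timer by the cycle cost of one emulated instruction."""
        self.scaler -= cycles
        while self.scaler < 0:            # one timer tick per scaler underflow
            self.scaler += self.scaler_reload + 1
            self.timer -= 1
            if self.timer < 0:            # timer expired: interrupt + reload
                self.timer = self.timer_reload
                self.raise_irq()

fired = []
t = GPTimerModel(scaler_reload=3, timer_reload=2,
                 raise_irq=lambda: fired.append(1))
for _ in range(30):
    t.tick(1)                             # e.g. 30 single-cycle instructions
print(len(fired))                         # -> 2 interrupts after 30 cycles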

3.7 Trap/Interrupt Handling Module

The LEON3-FT has 3 operation modes: reset, run, and error. It supports three types of
traps: synchronous, floating-point, and asynchronous. Synchronous traps are caused by
hardware responding to a particular instruction or by the Ticc instruction, and they
occur during the instruction that caused them. Floating-point traps, caused by FP
instructions, occur before the instruction completes. An asynchronous trap (interrupt)
occurs when an external event, such as the timers, the UART, or the various
controllers, interrupts the processor.
The software handlers for the window overflow/underflow traps among the synchronous
traps are provided by the RTOS or the compiler, so these traps can be handled
correctly by software. Other traps whose handlers are not properly installed by
software will lead the LEON3-FT to error mode. Interrupts can be processed by the IU
when no synchronous trap is pending. All trap operations are handled by the
trap/interrupt handling module exactly as in the real LEON3-FT.

4 Software Development/Test on LAYSIM-leon3


Flight software based on VxWorks 5.4/6.5 or RTEMS can be loaded and executed on
LAYSIM-leon3 without any modification, as in the real hardware environment. For
software development on the SCU-DM, LAYSIM-leon3 supports a full system simulator for
the SCU-DM, which has Ethernet (LAN91C), VME, IPN, 1553B, RTC and IMSL controllers.
All devices are integrated into the memory-mapped I/O area in LAYSIM-leon3 and
controlled by event functions and register operations with the same operation
mechanism as the GRLIB devices.
Figure 4 shows the software development environment using BCC; the embedded debugger
of LAYSIM-leon3 can debug at the C source code level and trace variables and memory.

Fig. 4. S/W Development Environment on LAYSIM-leon3



Figure 5 shows the case of VxWorks/Tornado on Windows. The Tornado IDE is connected
with LAYSIM-leon3 through a virtual network, which enables FSW team members to
develop, monitor and debug the FSW with the Tornado IDE. LAYSIM-leon3 is also
connected with the 1553B Monitor/Simulator, which sends/receives 1553B commands/data
to/from LAYSIM-leon3.


Fig. 5. S/W Development Environment with VxWorks 5.4/Tornado 2.0 on LAYSIM-leon3

5 Conclusion
In this paper we introduced the development of the LEON3-FT emulator LAYSIM-leon3, a
GUI-based and cycle-true emulator that can support a full system simulator for the
SCU-DM, and we described software development and test on LAYSIM-leon3. LAYSIM-leon3
shows slightly lower performance than TSIM-LEON3 due to the overhead of GUI
processing, but it provides a significantly better environment for software
developers. Currently the instruction-level verification test has been completed and
the operation-level test is under way. LAYSIM-leon3 will be the main core of the
flight software simulator and the operation simulator of SWT/KARI.

References
1. Pidgeon, A., Robison, P., McClellan, S.: QERx: A High Performance Emulator for
Software Validation and Simulations. In: Proceedings of DASIA 2009, Istanbul,
Turkey (2009)
2. Aeroflex Gaisler: GRLIB IP Core User's Manual. Version 1.1.0-B4104 (2010),
http://www.gaisler.com
3. Aeroflex Gaisler: LEON3FT-RTAX Data Sheet and User's Manual. Version 1.1.0.9
(2010), http://www.gaisler.com
4. SPARC International, Inc.: The SPARC Architecture Manual, Version 8 (1992),
http://www.sparc.org
Experiments with Embedded System Design
at UMinho and AIT

Adriano Tavares, Mongkol Ekpanyapong, Jorge Cabral, Paulo Cardoso,
José Mendes, and João Monteiro

Centre Algoritmi, University of Minho, Portugal
http://esrg.dei.uminho.pt

Abstract. Nowadays, embedded systems are central to modern life, mainly due to the
scientific and technological advances of the last decades that started a new reality
in which the embedded systems market has been growing steadily, along with a monthly
or even weekly emergence of new products with different applications across several
domains. This ubiquity of embedded systems was the drive for the question "Why should
we focus on embedded systems design?", which was answered in [1, 2] with the following
points: (1) high and fast penetration in products and services due to the integration
of networking, operating system and database capabilities, (2) a very strategic field
economically, and (3) a new and relatively undefined subject in the academic
environment. Other adjacent questions have been raised, such as "Why is the design of
embedded systems special?". The answer to this last question rests mainly on several
problems raised by the new technologies, such as the need for more human resources in
specialized areas and the high learning curve for system designers. As pointed out in
[1], these problems can prevent many companies from adopting these new technologies or
force them not to respond in time to master these technological and market challenges.
In this paper, we describe how the staff at ESRG-UMinho1 and ISE-AIT2 faced the
embedded systems challenges at several levels. The paper starts by describing the
development of the educational context for the new technologies and shows how our
Integrated Master Curriculum in Industrial Electronics and Computer Engineering has
been adapted to satisfy the needs of the university's major customer, industry.

1 Introduction
Embedded systems are vital to our own existence, as proven by their widespread use in
automotive applications, home appliances, comfort and security systems, factory
control systems, defense systems, and so on. This view is nowadays shared
indiscriminately by everybody, mainly those who live in developed countries, as well
as by those in charge of developing such systems. Mr. Jerry Fiddler, Wind River
Chairman and Co-Founder [3], said: "We live in a world today in which software plays a
critical part. The most critical software is not running on large systems and PCs.
Rather it runs inside the infrastructure and in the devices that we use every day. Our
transportation, communications and energy systems won't work if the embedded software
contained in our cars, phones, routers and power plants crashes." However, this wide
diversity, along with the increasing complexity due to the multi-disciplinary nature
of products and services and the heterogeneity of the applied technologies, demands
changes in industrial practices and consequently asks for changes in the educational
system. At ESRG-UMinho, the embedded systems subject was first taught as a two-hour
credit course mainly to promote education in robotics, automation and control. It was
therefore viewed as an overview course (i.e., a backpack [4]) in which students were
first introduced to the main concepts of embedded systems design, which would later be
combined to provide the big picture of embedded systems design. The course was
theoretical, but due to the growing importance of embedded systems it was promoted to
two three-hour credit courses, allowing the introduction of lab sessions for hands-on
activities. Three years ago, our diploma program was revised and the subject was
promoted to four three-hour courses under a new teaching and research specialization
field denominated embedded systems. The embedded systems research group, ESRG-UMinho,
was created, and a discussion was held within the group to figure out how to attract
and keep students in elective courses in the embedded systems field. The general
objectives should be: (1) exposing students to the industrial project environment of
embedded systems design, (2) developing the capacity for teamwork, and (3)
highlighting the need to be self-taught. The teaching approach followed was based on
the ideas presented in [1, 2] and was later revised to overcome some issues faced in
the first year. In the remainder of this paper, several questions will be answered,
with special focus on (1) why the skill mismatch phenomenon exists and how to cope
with it and (2) how to drive the whole group in sync by keeping undergraduate and
graduate students, teachers and researchers in flow and committed to the ESRG-UMinho
vision and outcomes.

1 Embedded Systems Research Group at University of Minho, Guimarães, Portugal.
2 Industrial Systems Engineering at Asian Institute of Technology, Bangkok, Thailand.

2 Embedded System Innovation and Trends

Basically, innovation is a continual process that leads to the introduction of
something new, and it is a key goal of industry. Apart from software and electronic
(digital and analog) components, embedded systems also contain dedicated sensors and
actuators, mechanical components, etc., constrained by the design challenges of space,
power consumption, cost and weight. Essentially, they differ from classical IT systems
in several characteristics [5]: autonomy, flexible networking, fault tolerance,
restrictions on user interfaces, real-time operation, reactivity, restricted
resources, etc. Industry must support the emerging trends in embedded systems in order
to stay competitive; among the major trends, the following were observed [5]:
1. new generations of applications and products with increasing complexity and high
demands for functional safety and security, as well as improved autonomy, usability,
and interoperability in networks;
2. increasing computational power of embedded processors combined with reduced
silicon area and energy, as well as improvements in operating platforms and
middleware support for the efficient development of reliable systems;
3. new embedded system design methodologies to fill the HW-SW productivity gap, in
order to match software productivity to the speed of HW innovation;

4. merging of application sectors like electronics and IT, or telecommunications and
consumer electronics, to provide multifunctional and more attractive products;
5. embedded systems designers are facing tight time-to-market constraints due to the
cognitive complexity of current embedded systems, and so there is a need to
balance time-to-market with the quality of the designed product.
To keep up with these trends, industry faces new issues and challenges in
designing current embedded systems, such as lifecycle mismatches, skill shortages,
low reuse of components, quality concerns, and increased warranty costs [6].

3 Embedded System Challenges


Among the three driving forces of technological advance shown in Fig. 1,
knowledge-push is, along with market-pull, the most important. In market-pull, the
market and user need for a product drives the development of the technology that
fulfills that need, while in technology-push the development of new technology drives
the creation of a business need. Between technology-push and market-pull sits
knowledge-push, defined in [5] as the continuous application of new technologies to
accelerate further technical innovations. However, as also pointed out in [5], such
continuous application of technologies will be valuable only if the market can create
knowledge, share it among all market participants and transfer it into new products,
processes and services.
The previously mentioned emerging trends in embedded systems led to several
specific challenges for R&D and education in the domain of embedded systems [5-10]
that can be grouped into the following three broad categories, as pointed out in [5]:
1. Softwareization, to cope with increasing product functionality and the HW-SW
productivity gap by shifting functionality from hardware to software. To meet
this challenge, software engineering elements such as programming languages
and compilers, modeling notations with well-understood semantics, testing
methods, and software design processes must be taken into account.

Fig. 1. Driving forces of technological progress

2. Multi-functionality and flexibility, to deal with the integration of different
knowledge fields, the integration of hardware and software, and knowledge
transfer inside a company and among industry sectors, universities and R&D centers.

Embedded systems engineers must simultaneously present a deep understanding
of some knowledge fields and basic know-how in the other related fields.
3. Change in innovation drivers, which requires knowledge exchange, collaboration,
standardization, as well as the incorporation of business aspects and soft skills
into the embedded systems discipline. To meet this challenge an embedded
engineer needs to master the ability to communicate, understand markets,
develop products, and pursue lifelong learning.

4 Embedded System Curriculum


Embedded systems were defined in [5] as invisible computers or programmable
electronic subsystems built into a larger heterogeneous system that help to increase
the ease and safety of our lives and make our work interesting.
Embedded systems are virtually everywhere, and such ubiquity, along with the
following evidence [2], legitimizes them as a discipline of their own:

1. It is the field with the highest and fastest growth rate in industry;
2. It is a very strategic field from an economic standpoint: (a) their market size is
about 100 times the desktop market, (b) nearly 98% of all 32-bit microprocessors
in use today around the world are incorporated in embedded systems, and (c) in
the near future nearly 90% of all software will be for embedded systems, most
computing systems will be embedded systems, and their importance will grow
rapidly;
3. Design of embedded systems is a new and not well-defined subject in academic
environments [1], and the "skill mismatch" phenomenon is visible, where the
maturity levels of graduates' skills leaving academia don't meet the levels
required by key industry sectors.

The "skill mismatch" does exist because embedded systems is a multidisciplinary


curricula split into several application domains, and the university education fails to
connect them. Coping with it requires completely new embedded systems education
where new methodologies for designing embedded systems must be created,
consolidated and the knowledge transferred effectively to future graduates. According
to the didactic analysis presented in [1] and reinforced in [2], the discipline of
embedded systems has a thematic identity and functional legitimacy and so, the
academia must:
1. educate engineers with functional skills instead of solely formal knowledge, and
find an effective balance between the two, as a high level of formal knowledge
might also facilitate the development of new functional skills;
2. provide adequately trained multi-cultural engineers that integrate essential
knowledge from computer science, electrical engineering, and computer
engineering curricula to facilitate the communication and sharing of experiences
inside a company and also avoid fragmented research.
Nowadays, several proposals for education models in the field of embedded systems
are found worldwide [11]:

1. Courses on real-time systems in System Engineering and Computer Science
undergraduate curricula;
2. Courses focusing on embedded systems hardware in Computer Engineering and
Electrical Engineering undergraduate curricula;
3. Courses on embedded system design in Computer Engineering, Computer
Science, System Engineering and Electrical Engineering graduate curricula;
4. Embedded systems design tracks in computer science and electrical engineering
departments;
5. Continuing education and training programs for industrial engineers;
6. Undergraduate curricula in embedded systems engineering.
The first three models are used in Europe, the first five models in the United States,
and the sixth model in Asian countries [12-14]. Some universities in the United
States [15], Canada [16] and Israel [17] have started using the sixth model. A more
extensive model for the embedded systems field was presented in [18], as it seeks to
induce higher interest in embedded systems concepts as early as middle school. In our
diploma program in Industrial Electronics and Computer Engineering, embedded
systems education appears as an elective track (among three other specialization
tracks) consisting of the courses represented by the gray-filled boxes in Fig. 2.
The courses represented by black-border boxes are taught by ESRG-UMinho
teachers/researchers, and we bridged them in a coordinated way in order to achieve
depth in embedded systems concepts. This year the diploma program was revised and
the course Advanced Processors Architecture was included to improve the breadth of
embedded systems concepts.
In terms of didactic analysis, a proposal for resolving the "skill mismatch" requires
answers to the following questions:

Fig. 2. Industrial Electronics and Computer Engineering course sequence



1. What about the embedded systems selection?


Our diploma program focused mainly on the electronics and electrical engineering
knowledge fields, resulting in many engineers with insufficient software background
for embedded systems design. Although embedded system designers need to handle
both software and hardware, the software portion of a system is growing larger than
the hardware portion, and so we must prepare our graduates to face the
softwareization challenge. As it is nearly impossible to cover all knowledge fields in
our course track in a way that provides students with in-depth understanding across
several application domains, our proposal for the breadth-versus-depth problem is to
move from teaching "something of everything" toward "everything of something"
[1], while at the same time providing a level of formal knowledge high enough to
facilitate the development of new functional skills and reduce training in embedded
development concepts, thus requiring only the specialization of other application
domains. Our chosen "something", or application domain, was home automation, the
core business of one of our major industrial partners. As we have industrial partners
with different core businesses, the lab sessions of courses like Embedded Systems
and Real-Time Systems don't focus on any specific application domain, aiming
instead to provide a broad education in embedded systems design. To promote depth
in the learning approach, we follow the idea of overlapping coverage using multiple
courses with forward and backward references among the courses [15], providing our
students with a deeper understanding of the concepts and better knowledge retention.
As pointed out in [15], this helps break down the stereotypes associated with
hardware versus software engineers. The significance of good programming and
system specification skills is emphasized again when students attend the Embedded
Systems and Real-Time Systems courses, which make backward references to the
Programming and Computer Technologies courses.
2. What about the embedded systems learning communication?
We promote an embedded systems education based on interactive communication,
with a strong focus on real-world examples and project-based work, to produce
skilled graduates capable of engineering embedded systems as required by the hiring
industry. Thus, embedded systems concepts are introduced in the course track and
the prerequisite courses through lectures, hands-on sessions based on small
examples, and project-based sessions. Lecture sessions: the prerequisite courses
provide basic knowledge from computer science and electrical engineering, and the
embedded systems course track provides the design and implementation principles
of embedded systems.
Small real-world example hands-on sessions: students gain practical experience
in programming embedded systems and designing hardware boards, using design
tools, development environments and target platforms. These sessions usually mix
teaching styles: demonstrator, to encourage student participation, and facilitator, to
allow students to explore new options and to encourage active and collaborative
learning.
Project-based sessions: students complete a project for a complex system in
groups of 2-3 students to gain a better understanding of projects through
collaborative effort. Several home automation and digital security applications are
proposed, and students may also choose to design and implement their own system.

The Embedded Systems course merges the ECE 125 course [15] and the Complex
Systems Design Methodology course [4]; it was drafted to provide students with a
broad overview of embedded systems design and also to show how the synergistic
combination of the broad set of knowledge fields is explored through backward and
forward references to other courses in the curriculum. The other three courses of the
embedded systems course track, Languages for Embedded Systems, Dedicated
System Design and Advanced Processors Architecture, focus on more advanced
embedded systems concepts like compiler, processor and System-on-Chip (SoC)
design. They are based on a mix of lectures, small real-world example hands-on
sessions and project-based sessions that end with the implementation of a SoC, a C
compiler and a Linux port to the newly developed platform. Unlike the undergraduate
microcontroller-based design course track, which strictly follows a bottom-up design
methodology, the graduate embedded systems course track focuses on high-level
abstraction and on top-down and bottom-up system-level design methodologies,
starting with knowledge about the system to be developed. All students are required
to follow the same information flow during system design, first transforming the
system knowledge into a specification model of the system.

5 Conclusions
The omnipresence of embedded systems, together with the "skill mismatch"
phenomenon, evinces the need and urgency for an embedded systems education that
produces skilled graduates capable of engineering embedded systems as required by
the hiring industry. At UMinho, an embedded systems design course track was
designed and several techniques were employed in order to fill the "skill mismatch"
gap and to align teaching and R&D activities. Among those techniques we emphasize
the promotion of: depth in the learning approach, by bridging all these courses
together; design-for-reuse principles and system-level concepts early in the
undergraduate microprocessor-based course track; embedded systems education
based on interactive communication, with a strong focus on real-world examples and
project-based work; breadth in the learning approach, by a vertical exemplification
teaching approach combined with a sufficiently high level of formal knowledge;
procrastination avoidance; and an integrated learning style strongly based on
kinesthetic learning. Furthermore, we found that creating a motivating environment
with a supportive, high-performance culture in course classes and R&D activities is
very important, as was visible during the three-month stay at AIT, where the twelve
students were, and still are, completely in flow and committed to the group's vision
and outcomes. The assessment of our embedded systems design course track was
very positive, as manifested by (1) our internal evaluation process, with questions to
drive further course track improvement, (2) the performance of students coaching lab
sessions at UMinho and AIT, (3) the willingness of students to buy their own
microprocessor and FPGA boards, (4) the way older students sell the ESRG brand,
and (5) the increasing number of students attending the elective embedded systems
design course track year after year.

References
1. Grimheden, M., Törngren, M.: What is Embedded Systems and How Should It Be Taught?
- Results from a Didactic Analysis. ACM Transactions on Embedded Computing
Systems 4(3) (August 2005)
2. Mesman, B., et al.: Embedded Systems Roadmap 2002. In: Eggermont, L.D.J. (ed.) (March 2002)
3. Li, Q., Yao, C.: Real-Time Concepts for Embedded Systems. CMP Books (July 2003)
4. Bertels, P., et al.: Teaching Skills and Concepts for Embedded Systems Design. ACM
SIGBED Review 6(1) (January 2009)
5. Helmerich, A., Braun, P., et al.: Study of Worldwide Trends and R&D Programmes in
Embedded Systems in View of Maximising the Impact of a Technology Platform in the
Area. Final Report, Information Society Technologies (November 18, 2005)
6. Blake, D.: Embedded Systems and Vehicle Innovation. In: Celebration of SAE's Centennial
in 2005, AEI (January 2005)
7. Kopetz, H.: The Complexity Challenge in Embedded System Design. In: ISORC 2008,
Proceedings of the 11th IEEE Symposium on Object Oriented Real-Time Distributed
Computing (2008)
8. Henzinger, T.A., Sifakis, J.: The Embedded Systems Design Challenge. In: Misra, J.,
Nipkow, T., Karakostas, G. (eds.) FM 2006. LNCS, vol. 4085, pp. 1-15. Springer,
Heidelberg (2006)
9. Opportunities and Challenges in Embedded Systems,
http://www.extra.research.philips.com/natlab/sysarch/EmbeddedSystemsOpportunitiesPaper.pdf
10. Emerging Trends in Embedded Systems and Applications,
http://www.eetimes.com/discussion/other/4204667/Emerging-trends-in-embedded-systems-and-applications
11. A Comparison of Embedded Systems Education in the United States, European, and Far
Eastern Countries,
http://progdata.umflint.edu/MAZUMDER/Globalization%20of%20Engg.%20Education/Review%20papers/Paper%204.pdf
12. Pan, Z., Fan, Y.: The Exploration and Practice of Embedded System Curriculum in the
Computer Science Field. In: ICYCS 2008, Proceedings of the 9th International
Conference for Young Computer Scientists. IEEE Computer Society, Washington, DC,
USA (2008)
13. Chen, T., et al.: Model Curriculum Construction of Embedded System in Zhejiang
University. In: CSSE 2008, Proceedings of the 2008 International Conference on Computer
Science and Software Engineering, vol. 05. IEEE Computer Society, Washington, DC, USA
(2008)
14. Pak, S., et al.: Demand-Driven Curriculum for Embedded System Software in Korea. ACM
SIGBED Review - Special Issue: The First Workshop on Embedded System Education
(WESE) 2(4) (October 2005)
15. Ricks, K.G., et al.: An Embedded Systems Curriculum Based on the IEEE/ACM Model
Curriculum. IEEE Transactions on Education 51(2) (May 2008)
16. Seviora, R.E.: A Curriculum for Embedded System Engineering. ACM Transactions on
Embedded Computing Systems 4(3) (August 2005)
17. Haberman, B., Trakhtenbrot, M.: An Undergraduate Program in Embedded Systems
Engineering. In: CSEET 2005, Proceedings of the 18th Conference on Software
Engineering Education & Training. IEEE Computer Society, Washington, DC, USA (2005)
18. Barrett, S.F., et al.: Embedded Systems Design: Responding to the Challenge. Computers in
Education Journal XVIIII(3) (July-September 2009)
19. IVV Automação Lda, http://www.ivv-aut.com/
The Study of H.264 Standard Key Technology
and Analysis of Prospect

Huali Yao and Yubo Tan

College of Information Science and Engineering, Henan University of Technology,
Zhengzhou, China
{Yaohuali8226,tanyubo}@163.com

Abstract. H.264 is the latest video coding standard. Because it uses a series of
advanced coding techniques, it has a great advantage over traditional standards
in coding efficiency, error resilience and network adaptability. This article
studies the key technologies of H.264, discusses current problems and their
solutions, and finally introduces some new developments and applications.

Keywords: H.264, video compression, forecast coding, transform coding.

1 Preface
Since the 1990s, with the rapid development of mobile communications and network
technology, the processing and transmission of multimedia and video information
over mobile networks has become a hot spot in China's information technology.
Video information has many advantages, such as being intuitive, precise and
efficient; however, because video carries abundant information, besides the problem
of video compression coding we must also solve the quality assurance issues that
arise after compression to ensure better application of the video. This is a
contradiction: what we want is a greater compression ratio while ensuring a certain
degree of video quality at the same time.
For this reason, since the first enactment of international video coding standards in
1984, people have made a great deal of effort; ITU-T and other international
standardization organizations have issued more than ten video coding standards one
after another, which has greatly promoted the development of video communication.
Still, the development of video communication has been less than satisfactory to
some degree, mainly because the conflict between video compression and video
quality has not been well resolved; in this context, the H.264 video compression
coding standard was published.

2 H.264 Standard Profiles


In March 2003, the H.264 video compression standard was formally published. It is
a high-performance video codec technology developed by the ITU-T and ISO/IEC,
known both as ITU-T H.264 and as ISO/IEC MPEG-4 Advanced Video Coding.


Specifically, H.264 was developed by the ITU-T Video Coding Experts Group
(VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). In this respect,
H.264 differs from past standards in that it is not only an industry standard but also
an international standard.
H.264 is the latest and most promising video compression technology; it improves
significantly on previous video compression technology in both compression
efficiency and network adaptability. Many new techniques were introduced in the
H.264 standard, such as multiple reference frame prediction, integer transform and
quantization, entropy coding and a new intraframe prediction coding, which are
designed to achieve higher coding efficiency, but these techniques come at the cost
of increased computational complexity. In order to obtain better image quality in the
lowest possible storage space and to transfer images quickly under limited
bandwidth, H.264 nearly doubles the compression ratio at the same image quality,
which helps resolve the contradiction between video compression efficiency and
real-time transmission. For this reason, H.264 is considered the most influential
video compression standard.

3 H.264 Technology Principle and Key Technology

3.1 Principles of H.264 Codec

The idea behind the H.264 algorithm is to eliminate spatial redundancy by using
intraframe prediction, to eliminate temporal redundancy by using interframe
prediction and motion compensation, and to remove frequency-domain redundancy
by using transform coding. There is no change from previous standards (such as
H.261, H.263, MPEG-1, MPEG-4) in the basic principle and function modules; the
idea is still the classic motion-compensated hybrid coding algorithm. In addition,
H.264 defines new SP and SI frames to achieve different data rates, fast switching
between streams of different image quality, and rapid recovery from information loss.
H.264 codec's basic functions are briefly described as follows:
Encoder. The encoder adopts a hybrid coding method of transform and prediction. If
intraframe prediction encoding is used, the encoder first selects the corresponding
intraframe prediction mode, subtracts the predicted value from the current actual
value, and then transforms, quantizes and entropy-codes the difference; meanwhile it
inverse-quantizes and inverse-transforms the encoded coefficients, reconstructs the
prediction residual image, obtains the reconstructed frame by adding the prediction,
and finally feeds the result into frame memory after it has been smoothed by the
deblocking filter.
If interframe prediction is used, the input image block first obtains a motion vector
in the reference frame by motion estimation; the residual image after motion
estimation is then integer-transformed, quantized and entropy-coded, and the result,
together with the motion vector, is sent into the channel. The other stream is
reconstructed in the same way and, after passing through the deblocking filter, is sent
to frame memory as the reference image for the next frame.

Decoder. In a word, the decoding process is the reverse of encoding. When the
compressed stream is sent into the decoder, the first step is to judge whether
intraframe or interframe prediction was used. If intraframe, the image is
reconstructed directly after inverse quantization and inverse transform; if interframe,
the result so far is only the reconstructed residual image, so motion compensation
based on the reference image in frame memory is required: the reference image and
the residual image are superposed, and the reconstructed frame is finally obtained.
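The loop described above can be summarized in a few lines of code. The sketch below is illustrative only: the stage functions (transform, quantize, deblock, and so on) are trivial stand-ins, not APIs from the standard. Its point is the control flow, namely that the encoder embeds a decoder so that both sides predict from identical reconstructed frames.

import numpy as np

# Placeholder stages; each name corresponds to a stage described in the text.
def transform(r):            return r.astype(float)   # stands in for the 4x4 integer transform
def inverse_transform(c):    return c
def quantize(c, step=8.0):   return np.round(c / step)
def dequantize(z, step=8.0): return z * step
def deblock(f):              return f                 # stands in for the adaptive deblocking filter

def encode_block(block, pred, frame_memory):
    # Prediction (intra or motion-compensated) -> residual -> transform/quantize.
    residual = block.astype(float) - pred
    z = quantize(transform(residual))                  # coefficients go to the entropy coder
    # Embedded decoder: reconstruct exactly as the receiver will.
    recon = pred + inverse_transform(dequantize(z))
    frame_memory.append(deblock(recon))                # reference for later frames
    return z

frame_memory = []
block = np.random.randint(0, 256, (4, 4))
pred = np.full((4, 4), 128.0)                          # a dummy prediction
coeffs = encode_block(block, pred, frame_memory)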

3.2 Key Technologies of H.264 Standards

H.264 is based on the techniques of H.263; it adopts a hybrid coding that combines
DPCM encoding and transform coding, and it is unique in many aspects, such as
multi-mode motion estimation, integer transform and uniform variable-length coding.
In addition, it introduces a series of advanced techniques: the 4×4 integer transform,
intraframe prediction in the spatial domain, interframe prediction with multiple
reference frames and 1/4-pixel-accuracy motion estimation. These techniques make
the image quality of compressed video far better than any previous coding standard
at the same bitrate; H.264 can save up to 50% of the bitrate. Details of the key
technologies of the H.264 standard are as follows:

The technology of intraframe prediction in the spatial domain
Intraframe prediction is used in video compression to remove spatial redundancy
within the current image; the intraframe prediction modes in H.264 are more accurate
and sophisticated than in earlier standards. To improve coding efficiency, intraframe
prediction in H.264 exploits the spatial correlation between a coding macroblock and
its adjacent blocks: the blocks above and to the left of the current block are used to
calculate its predicted value, and the difference between the current macroblock and
its predicted value is further coded and transferred to the decoder. H.264 provides
four types of intraframe prediction: 4×4 intraframe prediction of luminance blocks,
16×16 intraframe prediction of luminance blocks, 8×8 intraframe prediction of
chrominance blocks, and PCM prediction.
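As a concrete illustration, here is a minimal sketch of three of the 4×4 luma prediction modes (vertical, horizontal and DC); picking the mode by smallest SAD is a simplification of what a real encoder does, which also accounts for the bit cost of signalling the mode.

import numpy as np

def intra4_vertical(above):
    # Mode 0: each column repeats the pixel directly above the block.
    return np.tile(above, (4, 1))

def intra4_horizontal(left):
    # Mode 1: each row repeats the pixel directly to the left of the block.
    return np.tile(left.reshape(4, 1), (1, 4))

def intra4_dc(above, left):
    # Mode 2: every pixel is the rounded mean of the eight neighbours.
    return np.full((4, 4), (above.sum() + left.sum() + 4) // 8)

def choose_mode(block, above, left):
    preds = {'vertical': intra4_vertical(above),
             'horizontal': intra4_horizontal(left),
             'DC': intra4_dc(above, left)}
    # Pick the mode whose prediction leaves the smallest residual (SAD).
    return min(preds.items(), key=lambda kv: np.abs(block - kv[1]).sum())

above = np.array([100, 102, 104, 106])   # reconstructed row above the block
left = np.array([100, 101, 102, 103])    # reconstructed column to the left
mode, pred = choose_mode(np.random.randint(90, 110, (4, 4)), above, left)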

Interframe prediction with multiple reference frames
Interframe prediction coding uses the temporal redundancy between consecutive
frames for motion estimation and compensation. The differences between H.264
interframe prediction and previous standards are the wider range of prediction block
sizes, the use of sub-pixel motion vectors, and the use of multiple reference frames.
Generally, the coding efficiency of interframe prediction is higher than that of
intraframe prediction, and multiple reference frames can save 5% to 10% of the
transmission rate compared with a single reference frame.
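The sketch below illustrates the idea with an exhaustive integer-pixel block-matching search over a list of reference frames; a real H.264 encoder additionally refines the vector to 1/4-pixel accuracy by interpolation and weighs the rate cost of coding the vector and the reference index.

import numpy as np

def motion_search(block, refs, by, bx, search=8):
    """Return (ref_index, dy, dx, sad) of the best integer-pixel match,
    scanning every candidate in a +/-search window in each reference frame."""
    n = block.shape[0]
    best = (0, 0, 0, float('inf'))
    for r, ref in enumerate(refs):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    sad = np.abs(block.astype(int) - ref[y:y+n, x:x+n].astype(int)).sum()
                    if sad < best[3]:
                        best = (r, dy, dx, sad)
    return best

refs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
block = refs[1][20:36, 24:40]                 # a 16x16 block copied from frame 1
print(motion_search(block, refs, 20, 24))     # should find ref 1, (0, 0), SAD 0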

Deblocking filter
Usually, at low bit rates, block-based transform coding produces blocking artifacts
because a larger quantization step is used; moreover, the use of multiple reference
frames can strengthen the blocking effect in some cases. To solve this problem,
H.264 uses an adaptive deblocking filter on 4×4 block boundaries. The filter is
located inside the encoder's motion estimation / motion compensation loop, and the
reconstructed frame is stored in frame memory as the next coding reference frame
only after being filtered. Deblocking effectively removes the blocking artifacts
produced by prediction error, maintains the original edge information as much as
possible, and greatly improves the subjective quality of the image, but all of this
comes at the cost of increased system complexity.
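A much-simplified sketch of the idea follows. The alpha and beta thresholds (QP-dependent tables in the standard) distinguish a quantization artifact, a small step across the boundary, from a genuine image edge, which must not be smoothed; the delta formula mimics the shape of the standard's weakest filter mode but omits its clipping and boundary-strength logic.

def filter_boundary_pair(p1, p0, q0, q1, alpha, beta):
    """Filter the two pixels p0 | q0 that straddle a 4x4 block edge.
    p1, p0 lie on one side of the boundary, q0, q1 on the other."""
    # Only filter when the discontinuity is small enough to be a coding
    # artifact: a large jump is treated as a true edge and left alone.
    if abs(p0 - q0) < alpha and abs(p1 - p0) < beta and abs(q1 - q0) < beta:
        delta = ((q0 - p0) * 4 + (p1 - q1) + 4) >> 3   # simplified, no clipping
        return p0 + delta, q0 - delta
    return p0, q0

# A small step (18 vs 22) is smoothed; a sharp edge (18 vs 80) is preserved.
print(filter_boundary_pair(17, 18, 22, 23, alpha=10, beta=4))
print(filter_boundary_pair(17, 18, 80, 81, alpha=10, beta=4))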

Integer transform and quantization
To further save image transmission rate, H.264 adopts transform coding and
quantization. In principle, transform coding and quantization are two separate
processes in image coding; H.264 combines the multiplications of the two processes
and further uses an integer transform, reducing computation and improving the
real-time performance of image compression. For the transform, H.264 still uses
block-based transform coding of the input image, but the difference from the
previous 8×8 DCT is a new integer transform algorithm, based on 4×4 pixel blocks
and similar to the DCT. Quantization is achieved with 16-bit arithmetic, and QP % 6
is used to select the quantization step during quantization and inverse quantization,
which not only reduces the length of the quantization table but also maintains a better
linear relationship between the quantization parameter QP and PSNR.
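A small sketch of the forward path follows. The core matrix Cf is the standard's 4×4 integer approximation of the DCT; the quantization shown here uses explicit Qstep values (doubling for every increment of 6 in QP, with QP % 6 indexing a base table, values as commonly tabulated in the literature) rather than the integer multiply-and-shift formulation that real implementations fold the post-scaling into, so treat the numbers as illustrative.

import numpy as np

# 4x4 core transform matrix Cf of H.264 (integer approximation of the DCT).
CF = np.array([[1, 1, 1, 1],
               [2, 1, -1, -2],
               [1, -1, -1, 1],
               [1, -2, 2, -1]])

# Base quantization steps for QP = 0..5; Qstep doubles for every
# increment of 6 in QP, hence the QP % 6 table lookup.
BASE_QSTEP = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]

def core_transform(x):
    # W = Cf . X . Cf^T; the position-dependent post-scaling is folded
    # into the quantizer in real implementations.
    return CF @ x @ CF.T

def qstep(qp):
    return BASE_QSTEP[qp % 6] * (1 << (qp // 6))

def quantize(w, qp):
    return np.round(w / qstep(qp)).astype(int)

residual = np.random.randint(-16, 16, (4, 4))
z = quantize(core_transform(residual), qp=28)   # qstep(28) = 16.0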

Entropy coding
Entropy coding is a lossless compression technique based on the statistical
properties of a random process; the stream obtained by entropy coding can be
decoded to recover the original data without distortion.
H.264 adopts two new types of entropy coding: the first is variable-length coding,
which comprises universal variable-length coding (UVLC) and context-based
adaptive variable-length coding (CAVLC); the other is context-based adaptive binary
arithmetic coding (CABAC). The entropy coding of H.264 has the following
characteristics:
- Both techniques make good use of context information, so that the probability
statistics used for coding come close to the actual statistics of the video stream,
reducing coding redundancy.
- The entropy coding of H.264 adapts to the stream, offers good coding efficiency
over a large range of rates, and suits many different applications.
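The UVLC mentioned above is the exp-Golomb code: every syntax element maps to a non-negative integer n, and n+1 is written in binary, preceded by as many zero bits as the binary form has bits minus one. A minimal sketch:

def exp_golomb_ue(n):
    """Unsigned exp-Golomb codeword ue(v) for a non-negative integer n."""
    bits = bin(n + 1)[2:]              # binary representation of n + 1
    return '0' * (len(bits) - 1) + bits

# 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100': short codes for small,
# frequent values, and no codeword is a prefix of another.
for n in range(4):
    print(n, exp_golomb_ue(n))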

SP / SI frame technology
In order to accommodate the bandwidth adaptation and error resilience
requirements of video streams, H.264 proposes two new frame types: the SP frame
and the SI frame.
SP-frame coding is still motion-compensated predictive coding based on
interframe prediction, and its basic principle is similar to that of the P frame; the
difference is that an SP frame can use different reference frames to reconstruct the
same image frame. Taking advantage of this, the SP frame can replace the I frame
and is widely used for stream switching, splicing, random access, fast
forward/backward, error recovery and so on. The SI frame, by contrast, is based on
intraframe prediction; it is the slice most similar to the SP frame, used when
interframe prediction cannot be applied because of transmission errors. In some
sense, the network affinity of H.264 was greatly improved precisely because of the
use of SP/SI frames; in addition, H.264 also has strong error resilience, supporting
flexible streaming media application services.

4 H.264 Standards Development Trends and Application Prospects


As a new-generation video compression standard, H.264 is a video encoding and
transmission technology that caters to the Internet and wireless networks; it not only
improves coding efficiency but also enhances adaptability to the network. The H.264
standard raises moving-image compression technology to a higher level of
development; its highlight application is providing high-quality image transmission
at low bandwidth, and its high coding efficiency and obvious advantages allow it to
be applied in many new fields, which also provides broad application prospects for
the Internet. The technology market based on the H.264 standard will have very
strong vitality.
As a video coding standard aimed at future IP and wireless environments, H.264
adopts a new coding structure and proposes a network abstraction layer (NAL) for
the first time, so the bitstream structure adapts much better to networks. In addition,
the application of data partitioning, data masking and error recovery makes its
transmission capability over channels with high error rates and heavy packet loss
much stronger, so the robustness of H.264 video streams has been improved.
Multimedia video transmission is one significant application of 3G and later
mobile communication systems. Usually, a mobile video encoder needs to keep
complexity as low as possible because of the limited power and storage capacity of
mobile equipment, while at the same time maintaining efficiency and robustness.
Video with loose delay requirements can be achieved through retransmission, but
heavy data retransmission is impractical for real-time video session services; in
addition, the system capacity of a mobile cellular region is limited, and the amount of
data transferred changes all the time. Therefore, to adapt to the changing channel
environment, the video codec should be able to adjust its coding rate to the
environment within a limited time.
The new H.264 video coding standard is superior to MPEG-4 and H.263 in
compression performance, and it can be used in many image compression fields such
as video communication, Internet video transmission, digital cameras, digital video
recorders, DVD and television broadcasting. In the future field of mobile video
communications, H.264 has very broad application prospects. However, we should
also see that its performance improvements come at the cost of higher complexity:
the encoding computational complexity of H.264 is approximately 3 times that of
H.263, and the decoding complexity about 2 times, and this high complexity restricts
the application of H.264. Because of this, there is now an urgent need for hardware
implementation schemes for the key technologies; a few foreign companies have
developed H.264 hardware encoders capable of fast decoding for certain profiles and
levels, and these products will have a broad market in the areas of the Internet and
digital television.

5 Conclusion
As an important advance toward the next generation of video coding standards,
H.264 has obvious advantages over previous standards such as H.263 and MPEG-2;
it adds advanced techniques on the basis of previous ones, which allows it to be used
in many fields. It enhances coding efficiency and network adaptability at the same
time, and can therefore obtain higher video transmission quality at the same bit rate.
Although H.264 has many advantages over traditional standards, they come at the
cost of increased computational complexity of video encoding. How to reduce the
computational complexity as much as possible while ensuring high encoding
efficiency is thus an important issue to be resolved.


Acknowledgment: 2010 key project of the Henan Science and Technology Agency:
the study of video transmission control based on 3G mobile networks. Number:
102102210125; funds: 20,000 RMB.

Syllabus Design across Different Cultures
between America and China

Fengying Guo1, Ping Wang2, and Sue Fitzgerald2


1 Management College, Beijing Union University, Beijing, 100101 China
2 Metropolitan State University, St. Paul, MN 55106 USA

Abstract. This article compares the different approaches, goals, and educational
philosophies of syllabus design for higher education by exploring different
cultural and educational traditions.

Keywords: Higher Education, Education Background, Syllabus Design.

1 Introduction
Universities are places of higher education and scientific research. American
universities evolved from the long, classical European tradition: for example,
Harvard University, originally named Cambridge College, was established in 1636
and has over 300 years of history. These classical universities, adapting to societal
needs while adhering to their intellectual traditions, gradually developed into the
universities of today.
The history of Chinese universities is only about one hundred years; most were
modeled and built upon the Soviet Russian tradition.

2 Higher Education in America and China

2.1 Snapshot of Higher Education in America: Students and How Major Decisions Are Made

Higher education in the United States largely serves three types of students, and
the profile of each is discussed later. Adhering to its classical, European tradition, the
majority of American universities do not require students to declare a major in the
first year, although students are encouraged to indicate an area of interest before
admission. University curricula generally encourage freshman and sophomore
students to pursue self-exploration by requiring students to complete general
education courses. These general education courses provide broad exposure to the
disciplines in the liberal arts, helping students to explore philosophy, sociology or
literature, and helping build skills in mathematical and logical thinking, abilities in
effective writing and communication, as well as the knowledge base of an educated
citizen. Another goal of such general education requirements is to help students
decide what exactly an individual may be interested in learning in more depth and
gaining expertise in, within a specific discipline, or to help students decide on the
major to pursue by the end of this exploration. It's important to note that almost all
universities require certain math, laboratory science and technology-related courses
in their general education requirements. General education is sometimes called
liberal studies requirements or university-wide requirements: these are the
requirements every student must complete regardless of the individual's major. In
many institutions, students are not allowed to take any major courses until the
majority of these university requirements are met. As a consequence, major courses
are built upon the assumption that all students have a certain level of college-level
writing skills, have completed some self-exploration, and have decided on the major
upon reflection on personal aptitude, skill set, and values. Most students declare their
majors by the end of the sophomore year.
Here is an example. Metropolitan State University is one of the universities within
the Minnesota State Colleges and Universities (MnSCU) system. There are 32
colleges under MnSCU, and all bachelor's degree graduates must meet ten areas of
competencies called the Minnesota Transfer Curriculum: "The transfer curriculum
commits all public colleges and universities in the state of Minnesota to a broad
educational foundation that integrates a body of knowledge and skills with study of
contemporary concerns -- all essential to meeting individuals' social, personal, and
career challenges in the 1990s and beyond. The competencies people need to
participate successfully in this complex and changing world are identified. These
competencies emphasize our common membership in the human community;
personal responsibility for intellectual, lifelong learning; and an awareness that we
live in a diverse world. They include diverse ways of knowing -- that is, the factual
content, the theories and methods, and the creative modes of a broad spectrum of
disciplines and interdisciplinary fields -- as well as emphasis on the basic skills of
discovery, integration, application and communication. All competencies will be
achieved at an academic level appropriate to lower-division general education." At
Metropolitan State University, the General Education and Liberal Studies
requirements for graduation comprise 10 goal areas with a minimum of 48 semester
credits, and the goals are:

Goal 1: Communication:
At least two writing courses and one oral communication course
Goal 2: Critical Thinking
Goal 3: Natural Sciences
At least one lab based science course
Goal 4: Mathematical/Logical Reasoning
Goal 5: History and the Social & Behavioral Sciences
At least two courses from two disciplines
Goal 6: Humanities and the Fine Arts
At least two courses from two disciplines
Goal 7: Human Diversity
Goal 8: Global Perspective
Goal 9: Civic and Ethical Responsibility
Goal 10: People and the Environment

Students pursuing higher education in the United States are diverse:

Traditional students: traditional college students are between the ages of 18
and 25 and have graduated from high school. In most cases, they have taken the SAT
(formerly known as the Scholastic Aptitude Test) or the ACT (formerly American
College Testing) exams a year before their high school graduation. Most
American colleges use SAT or ACT scores as part of their admission
requirements. Both the SAT and ACT tests measure reading, math and
writing. Although test scores are important, most institutions look for other
indicators of a student's abilities. Many universities emphasize a student's high
school grades, demonstration of leadership, volunteering experience, or
extraordinary athletic activities as well as academic pursuits when making
admission decisions. Traditional students normally attend school full time, taking
12 to 15 credits each term.
Adult students: these are students who have full-time jobs and family
responsibilities. Many of them attend college classes at night or during the
weekends. Most of them need to support their own education by paying the
tuition out of their own pockets. The majority of adult students attend college
part-time, taking 8 credits or less per term.
Online students: over 3.4 million (over 19%) students are taking college courses
online. It's now possible for students to complete their undergraduate degrees
without ever showing up on campus and meeting other students face to face.
Students decide how many credits they want to take each term, and they are
able to complete school work at their own pace.

2.2 A Snapshot of Higher Education in China: Students and How Decisions of Majors Are Made

Degree programs in China are categorized as traditional degree-granting programs at
universities, adult education programs leading to an adult education diploma, and
self-study programs leading to a certificate of completion. The competition to get
into the most popular programs is intense; popular programs are normally those with
the highest placement rates for graduates. In general, an applicant needs to apply for
a specific major at a specific university when seeking admittance, and it is a common
practice for a student to apply for a less popular major in order to gain admittance to
a more prestigious school. Upon admission into a specific major at a specific
university as a freshman, the student is not given further opportunity to change major
or school.

Traditional students in China, ages 18-19, are required to pass the National
Entrance Exam before they are eligible to apply to any university. Generally,
all high school graduates want to attend university. It's estimated that there will
be 2,500,000 students studying in universities in 2010.
Adult students are enrolled in adult education. If a student fails to meet the
minimum threshold of the National Entrance Exam, he or she may either retake the
exam the following year, hoping for a higher score, or choose to take the National
Entrance Exam for Adults. A student who elects to take the National Entrance
Exam for Adults is eligible only for degree programs designed for adults, earning
an adult diploma. Students who pass the National Entrance Exam for Adults may
attend universities which provide adult education programs, or they may study via
distance learning. For instance, online programs are available in which students do
not attend face-to-face classes. At present, there are 68 distance learning education
programs which serve approximately 4.2 million adult and remote students.
Unlike traditional students, they attend school on a part-time basis, evenings and
weekends.
Self-learning students are administered by national and local governments and
private colleges. These students may have failed the National Entrance Exam or
they may have chosen not to take it. Self-learners may study individually, but
most attend a private college.

3 Syllabus Design of America and China

3.1 Syllabus Design in America

In America, syllabi are used by students. Faculty members are expected to give a
syllabus to students at the beginning of each term. In many institutions of higher
learning, the syllabus is attached to the course schedule so that students are able to
read the syllabus before registering for a course. The syllabus serves as a guide for
students to learn about the course: what's expected, how evaluations are done, and
how much work is involved. There is no universal format to follow, but generally it
includes an introduction of the faculty member, how to contact the faculty member,
office hours, the required textbook, prerequisites, a course description, learning goals
for the course, a competence statement, evaluation methods, and a schedule of
assignments, labs and tests. Policies relating to learning disabilities, complaints or
absences are listed as well. Methods of evaluation and assessment measurements are
clearly laid out: the scores needed to pass the course as well as the scores needed for
each grade level, and the scores for every assignment, lab, or quiz. Students generally
get an idea of how the course proceeds by looking at the schedule, such as which
chapter is covered in which week, or how many chapters or concepts are covered in
the course.
In short, the syllabus is designed to inform students what the course is about, how
it proceeds during the term, what the learning outcomes are, and what evaluation
methods are used to assess students' mastery of content knowledge.

3.2 Syllabus Design of China

Chinese syllabi are instruction files for teaching. They are designed in reference to
the overarching goals of the institution's curriculum. The standardized format is set
by the division of Academic Affairs, from font size to the type of bullet points to the
vocabulary used to describe the syllabus. Usually it includes an introduction of the
course, its goals, how important the course is in relation to other courses in the
major, prerequisites, learning components such as labs or lectures, as well as a
detailed schedule outlining test dates and times and specific requirements for each
assignment. The content of the syllabus is extremely detailed and long, describing
every aspect of learning and teaching, section by section, chapter by chapter, hour by
hour. For each section, the learning outcomes must be outlined and standard
language must be used in the syllabus. For instance, the main concepts of each
chapter and the teaching method for instructing those concepts are included. In
addition, standard language is used for learning each sub-concept. The standard
phrases used are: must know, must understand, must master.
Under such a system, faculty members in the same department teach the same
course with exactly the same syllabus. They all emphasize the same key concepts in
each chapter and each section; they all assess students by the same methods; and they
all teach at the same pace. In this system, the final exam normally counts heavily
toward the successful passing of the course.

3.3 Comparisons of America and China

3.3.1 Similarities
There are some similarities between the two designs: information such as
prerequisites, the course description, goals, assessment methods, the schedule,
textbook titles and lab descriptions is included in both.

3.3.2 Differences
The differences in syllabus design seem to highlight the different educational
traditions and cultures:
(1) Different Audiences
American syllabi are used by students; they are a guide for students. The syllabus
explains what a course is about and how students can pass the course. After reading
the syllabus, students know clearly how many assignments they must hand in, what
scores they will lose if they don't hand in certain assignments, how many quizzes to
expect, and how well they need to do on each.
Chinese syllabi are used by faculty members only. They are the credo of how
faculty members teach the course: they are files that faculty members must follow,
dictating the content covered during particular weeks of the term. Students don't care
about the contents of the syllabus.
(2) Different Formats
The format of the American syllabus is more individualized. It has neither a strict
format nor requirements for font sizes and bullet types, as long as the basic
information is introduced clearly. Faculty members may include additional
information they feel is important.
Chinese syllabi follow a strict design format, from font sizes to the type of bullet
points.
(3) Different Content
The content of American syllabi is not very detailed, containing only the title of
the class, the chapters the course covers, the lab and assignment schedule, etc. It
includes none of the other details found in Chinese syllabi.
The contents of Chinese syllabi are very rigid: standardized words and phrases
guide a faculty member's teaching. By following the syllabus accurately, a faculty
member accurately conveys the more important and less important topics.
(4) Different Schedule Details
The schedule in the American syllabus is not so detailed: it contains only dates,
topics and chapters covered.
The schedule in the Chinese syllabus is extremely detailed, from the chapters
covered in a certain hour during a certain week of the semester to the specific hours
in the lab.
(5) Different Details about Assignments
American syllabi define assignments clearly: what the students need to do, what
needs to be included, and when an assignment is due. Assignment information is not
very specific in Chinese syllabi, as the primary assessment of learning is the final
exam.
(6) Different Assessments
The assessment of learning in the American syllabus is laid out very clearly. Every
assignment, lab and quiz; the total score needed to pass the course; and which scores
correspond to which grade level are normally spelled out.
The assessment component of the Chinese syllabus usually includes a seat-time
score and a final exam score. The ratio of the two parts is 50% and 50%, 40% and
60%, or 30% and 70%. The final exam weighs heavily, yet the seat-time score is not
concisely spelled out and is very subjective.
(7) Other Areas
The American syllabus also includes information on how students can voice
complaints, how absences are treated, a discussion of issues such as academic
honesty and disciplinary consequences, as well as how learning disabilities are
accommodated. The Chinese syllabus does not include such information.

4 Conclusions and Implications


The comparison of the similarities and differences of American and Chinese syllabi
leads to some implications:

(1) Individual faculty members in America often have flexibility in the choice of
topics, the amount of time spent on each topic, and when and in what order to cover
topics. What the students learn depends, for the most part, on the faculty member's
interpretation of what is important or less important. When students follow a
sequence of courses, the different skills or key concepts covered by different faculty
in the prerequisite courses can be problematic for the students. However, such an
approach also encourages a faculty member to introduce areas of strong personal
skill, especially in areas of advanced knowledge that are still new. Chinese faculty
members have less flexibility; they teach students strictly according to the syllabus.
Students taught by different faculty members for a course with the same title need to
pass the same exam; therefore, students do not face a knowledge gap when
continuing on to a sequential course. Some compromise, incorporating aspects of the
American style into the Chinese format while assessing students with standard exams
in American courses, may be worth looking into.
(2) In American universities, from the syllabus the students know the learning
goals of each course, the requirements for passing, and the schedules for assignments
and quizzes. It is the student's decision to take a certain course with a certain faculty
member, and it is the student's responsibility to pass the course. In the Chinese
system, it's the faculty's responsibility to help students pass the final exam, an exam
that's not written by any specific faculty member but by the Academic Affairs
division. Doing well on the final exam remains the only path to success in most
courses. Before the final exam, students memorize their notes, and faculty members
help them do so. Such a method is not good at evaluating a student's true ability to
apply theoretical knowledge and can be damaging to those with more creative minds.
A serious examination of assessment methods is worthwhile.
(3) American students are encouraged to be proactive, independent and creative
learners. They choose their majors and courses themselves. Therefore, it's more
likely that they are actively involved during class time. They ask questions and they
want faculty input. They also know why they need to finish assignments and the
importance of getting them to the faculty on time. Chinese students are more
dependent on their professors for direction. Students take notes during class; before
the final exam they memorize their notes in order to pass the exam. Chinese students
are not encouraged to participate actively in their education and, as a consequence,
generally follow the same path, carrying on their professors' thoughts. A model that
gives students the responsibility to learn for themselves, by encouraging creative and
active participation in making course choices, may be worth looking into.
(4) It is easy for American students to be accepted into universities, but it is hard
for them to graduate. It's reported that only 60% of those who enter college ever
graduate. On the contrary, it is hard for Chinese students to be accepted into
universities, but it is easy for them to graduate. It's estimated that over 95% of them
graduate. It's the faculty's responsibility and the university's duty to help every
entering student graduate. If students' success in life is the ultimate goal of higher
education, it's important that the Chinese system take a closer look at helping
students become self-directed learners. Coursework in general education areas such
as communication, culture and technology may be beneficial to Chinese students so
that they can explore different areas before deciding on their majors and take more
personal responsibility for learning in their chosen majors. With such freedom,
students may be able to develop into independent thinkers, creative workers, and
lifelong learners.
As outlined above, syllabus design is based on different cultures and educational
traditions and philosophies. Though each has its own characteristics, the time for
rethinking each model and learning from the other seems to be here.

Using Eye-Tracking Technology to Investigate the Impact
of Different Types of Advance Organizers on Viewers'
Reading of Web-Based Content: A Pilot Study

Han-Chin Liu1, Chao-Jung Chen1, Hsueh-Hua Chuang2, and Chi-Jen Huang3


1 Department of E-Learning Design and Management, National Chiayi University,
85 Wenlong Village, Chiayi County 621, Taiwan
2 Center for Teacher Education, National Sun Yat-sen University, No. 70, Lienhai Rd.,
Kaohsiung 80424, Taiwan
3 Teacher Education Center, National Chiayi University, 85 Wenlong Village,
Chiayi County 621, Taiwan
hanchinliu@gmail.com

Abstract. This study utilized eye-tracking technology to investigate how question
and summary forms of advance organizers affected 9 college students' information
processing of web-based reading content. The results showed that students' eyes
fixated more on the question-form organizer than on the summary-form organizer.
However, viewers were found to spend more time reading the main reading content
when the summary-form organizer was utilized. Trying to answer advance questions
might have reinforced students' memory of the to-be-learned content and thus
supported effective retrieval of information from the web-based reading content.
Further studies with a larger sample size and measures of achievement and cognitive
load are needed to understand in depth how the type of advance organizer affects
viewers' information processing.

Keywords: advance organizers, eye-tracking, web-based learning.

1 Introduction
The concept of advance organizers was first introduced by Ausubel [1]. According to
Ausubel, an advance organizer is a cognitive strategy that allows learners to recall and
integrate their prior knowledge with the new information presented in the learning
environment. According to Mayer's [5] theory, advance organizers affect learning in
two ways: first, through conceptual anchoring, the concepts in the reading content are
integrated with prior knowledge, which promotes retention and transfer; second,
through obliterative subsumption, the technical information and insignificant aspects
of the reading content are diminished. Advance organizers have long been used to
present information before a lesson to make the content of the lesson more
meaningful to the learners and to help learners integrate their prior knowledge with
the reading content in meaning making [1][4]. Ausubel [2] defined two types of advance
organizers, the expository and the comparative organizers. An expository organizer
can be used to provide related adjoining subsumers on materials that are relatively
unfamiliar to the learners, while a comparative organizer can be used to help learners
relate unfamiliar to familiar or existing knowledge. Barnes and Clawson [3] argued
that when variables such as the type of organizer were taken into consideration, early
studies reported statistically non-significant positive or even significant negative
effects on student learning.
As text-based information, especially in webpage format, still serves as the main
information source for multimedia learning, advance organizers could still serve as
an effective strategy for facilitating learning. However, only a few studies have
investigated the impact of advance organizers on learning from a cognitive
perspective. By utilizing eye-tracking technology, this pilot study sought to examine
the effect of different types of advance organizers on learners' information processing
of to-be-learned content encoded in webpage format.

2 Related Literature
Advance organizers have long been used to present information prior to a lesson to
make the content of the lesson more meaningful and to help learners integrate their
own prior knowledge with lesson content in meaning determination [1]. Ausubel [2]
defined two types of advance organizers, expository and comparative. An expository
organizer can be used to provide related adjoining subsumers with respect to materials
that are relatively unfamiliar to the learners, while a comparative organizer can be
used to help learners relate unfamiliar knowledge to familiar or existing knowledge.
Different formats, such as verbal, visual, or a combination of the two, have also been
used as advance organizers to facilitate learning. As a result, a variety of media have
been utilized to generate different advance organizer formats. In addition to the use of
oral and textual advance organizers, simple illustrations and concept maps have been
used as graphic organizers [6][7][8]. Recently, dynamic graphics like video and
computer animations have been implemented as advance organizers in a hypermedia
format [9]. Early studies tested the effectiveness of the use of such advance
organizers on learning. Ausubel and colleagues conducted a series of experiments on
the impact of advance organizers on student learning [10][11][12][13]. In their
experiments, college and high school students using text-based advance organizers
were found to perform significantly better than the control group on immediate and
retention achievement tests. However, later studies have found conflicting results
on the effectiveness of the use of advance organizers on student learning.
Eye movements can serve as a blueprint presenting details as to just how
information in different visual media formats is retrieved and processed [14]. Human
eyes are believed to be stably positioned only for short time intervals, roughly 200 to
300 milliseconds long. These periods of stability of one's eyes are called fixations. During a
fixation, there is an area, corresponding in size to only about 2 degrees of visual angle,
over which the information is focused and clear. Saccades are fast and scattered
movements of one's eyes from one fixation point to another; it is believed that no
information is obtained during these movements. The distance between two
successive fixation points is defined as the saccade length. In the 1970s, non-intrusive
technology was invented to track participants' eye movements. With further
enhancement of the technology, the usability of eye tracking increased, and
eye-movement studies emerged in the late 1990s, with attention especially given to
human-computer interaction and human cognitive reactions [15]. Eye fixations have
been found to correspond to the information being encoded and processed by the
cognitive system. The correlation between fixation and mental processing of
information is referred to as the eye-mind assumption [16].
According to Jacob and Karn [15], the number of fixations can be correlated with the
efficiency of a viewer in searching for related information: the greater the number of
fixations, the less efficient the viewer's information search on the computer screen.
In addition, the duration of the viewer's gaze at a particular component of the visual
display can be used to identify the viewer's area of interest, and a longer
fixation duration indicates that the viewer has encountered a higher difficulty level in
performing the task. The viewer's area of interest can also be identified using eye-
movement variables: the frequency of a viewer's fixations on a specific element or area
of a visual display demonstrates the importance of that particular element or area.
Furthermore, the scan paths, or sequences of fixations from the eye movement data
denoting the changes in areas of interest over time, can be used to reconstruct the
viewer's mental imagery constructed from visual information [17]. With the
improvement of eye-tracking technology, eye-movement data collection today is
less intrusive than other physiological measures like heart rate or EEG. Eye movement
measurement has therefore become a promising alternative that can be utilized to gather
real-time information regarding a learner's cognitive process and mental state with
minimal disruption. Along with comprehension tests and self-reporting surveys, eye-
tracking technology can collect information that can be used to construct blueprints
illustrating in depth just how an individual processes information encountered when
different instructional strategies like advance organizers are implemented.
Today, a great deal of learning and book-based content is digitized and displayed
in multiple formats on the screens of personalized learning/reading devices like the
iPad and e-readers. However, instructional strategies still play an important role in
determining the effectiveness of e-learning material [18]. Among these instructional
strategies, earlier studies determined that advance organizers can be effective in
facilitating learning. In the digital age, the information provided by advance organizers
can also be presented in multiple formats. Studies on the effect of advance organizers
on learning have produced inconclusive results when additional variables like their
type and format were examined. Meanwhile, early studies examined the effect of
advance organizers only in terms of improved scores on achievement tests. Because the
concept of advance organizers was based on cognitive theories, in-depth observation
and investigation of learners' cognitive processes can be useful in providing
information to support a better understanding of how advance organizers may affect
an individual's information processing in the digital age.
This study utilized a qualitative research design using eye-tracking technology to
explore whether the type and format of advance organizers affect cognitive tasks such as
conceptual anchoring and obliterative subsumption, as proposed by Mayer [5] in his
assimilation encoding theory.

3 Methods

3.1 Participants and Design

Nine college students in their freshman or sophomore years were invited to
participate in this study. Eye-tracking technology was utilized to track the learners'
information processing patterns and preferences in order to generate an in-depth
analysis and discussion of the research questions. A repeated-measures design was
utilized to attain the research purposes.

3.2 Instructional Material, Instruments, and Equipment

Two introductions to different types of rocks served as the reading content. Five
test questions on the nature of metamorphic rock and a short paragraph summarizing the
characteristics of pluton were developed to serve as the different types of advance
organizers. The advance information was placed before the related detailed
introduction of each of the two types of rocks. All the reading content was
presented in web page format. Participants were asked to read the two forms of reading
content, with either question-based or summary advance organizers, in random order.
Participants' eye movements were recorded by a faceLAB 4 eye-tracking system
while they were reading the content on the computer screen.

3.3 Data Analysis


The keywords and sentences mentioned in the advance information were identified as
look zone 1, while the related sentences in the main reading content were
identified as look zone 2. While participants retrieved information from the computer
screen, eye movement data such as the number of fixations and fixation durations on
the different look zones were collected for statistical comparison. In addition, each
participant's number of fixations and fixation durations on look zone 1 were divided by
the number of fixations and fixation durations on look zone 2 to obtain proportions of
participant attention, in order to examine the effect of different types of advance
organizers on readers' information processing patterns. Figures 1 and 2 show examples
of the reading content with the different look zones.
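To make this analysis step concrete, the following minimal C++ sketch aggregates the number of fixations and the total fixation duration per look zone and forms the zone 1 to zone 2 proportions described above. The record layout (Fixation, lookZone, durationMs) is a hypothetical stand-in for the tracker's export format, which is not specified here.

#include <iostream>
#include <vector>

// One exported eye-movement record (hypothetical schema).
struct Fixation {
    int lookZone;       // 1 = advance organizer, 2 = related main content, 0 = elsewhere
    double durationMs;  // fixation duration in milliseconds
};

// Number of fixations (NF) and total fixation duration (FD) for one zone.
struct ZoneStats { int nf = 0; double fd = 0.0; };

ZoneStats aggregate(const std::vector<Fixation>& data, int zone) {
    ZoneStats s;
    for (const auto& f : data)
        if (f.lookZone == zone) { s.nf += 1; s.fd += f.durationMs; }
    return s;
}

int main() {
    // Toy data standing in for one participant's recording.
    std::vector<Fixation> data = {
        {1, 250}, {1, 310}, {2, 220}, {2, 280}, {2, 240}, {0, 200},
    };
    ZoneStats z1 = aggregate(data, 1);
    ZoneStats z2 = aggregate(data, 2);
    std::cout << "NF proportion (zone 1 / zone 2): "
              << static_cast<double>(z1.nf) / z2.nf << '\n'
              << "FD proportion (zone 1 / zone 2): " << z1.fd / z2.fd << '\n';
}

In the actual analysis, these two ratios would be computed per participant and per organizer type before the statistical comparisons reported in Section 4.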

Fig. 1. Look zones on the metamorphic web page



Fig. 2. Look zones on the pluton web page

4 Results and Discussion


The results show that viewers' proportion of fixation duration was greater on the
question-form (M=34.24, SD=7.58) than on the summary-form (M=22.49, SD=12.75)
advance organizer web page; however, the difference is marginal (p=.054).
Meanwhile, viewers showed a significantly greater number of fixations on the question
form (M=42.26, SD=8.20) than on the summary form (M=26.12, SD=13.16) of
advance organizer web page (p=.028). We then examined the number of fixations
(NF) and fixation durations (FD) on the different look zones. Viewers were found to
fixate significantly more frequently and to spend more time on the question form than
on the summary form advance organizer (pNF<.001, pFD<.001). Meanwhile, viewers
were found to fixate significantly more frequently and to spend more time on the
reading content with the summary organizer than on the reading content with the
question-form organizer (pNF=.028, pFD=.043).
Viewers seemed to spend less time reading the summary-form advance organizer
but then spend more time reading the related reading content. On the contrary, viewers
tended to spend more time reading the question-form advance organizer and then spend
less time reading the related reading content. The question-form advance organizer
might have forced viewers to spend more time figuring out the answers to the questions.
In addition, trying to answer the questions might have strengthened the memory of the
advance information and therefore increased the efficiency of information processing
when reading the main reading content. On the other hand, although the summary
advance organizer provided more detailed and direct information, viewers seemed to
show less efficiency in reading the main reading content. Further studies incorporating
a larger sample size and measures of achievement and cognitive load are needed to
determine whether using question-form advance organizers actually improves student
understanding.

5 Conclusions
Taking advantage of eye-tracking technology, this pilot study found that using
questions as advance organizers seemed to have improved students' reading efficiency
on web-based reading content. The small sample size might have weakened the
conclusions drawn from the results; however, the findings of the present study have
paved the way for our further studies. More studies using larger sample sizes and
utilizing tests of student achievement and cognitive load would be beneficial for
understanding in depth how advance organizer instructional strategies effectively
facilitate student learning.

References
1. Ausubel, D.P.: Educational Psychology: A Cognitive View. Holt, Rinehart & Winston, New
York (1968)
2. Ausubel, D.P.: The Acquisition and Retention of Knowledge: A Cognitive View. Kluwer
Academic Publishers, Boston (2000)
3. Barnes, B.R., Clawson, E.V.: Do advance organizers facilitate learning? Recommendations
for further research based on an analysis of 32 studies. Review of Educational
Research 45, 637–659 (1975)
4. Dembo, M.H.: Applying Educational Psychology in the Classroom, 4th edn. Longman,
New York (1991)
5. Mayer, R.E.: Twenty years of research on advance organizers: Assimilation theory is still
the best predictor of results. Instructional Science 8(2), 133–167 (1979)
6. Gil-Garcia, Villegas, J.: Engaging minds, enhancing comprehension and constructing
knowledge through visual representations. Paper presented at the Conference on World
Association for Case Method Research and Application, Bordeaux, France (2003)
7. Kang, O.-R.: A meta-analysis of graphic organizer interventions for students with learning
disabilities. Unpublished Ph.D. dissertation, University of Oregon, Oregon (2002)
8. Millet, P.: The effects of graphic organizers on reading comprehension achievement of
second grade students. Unpublished Ph.D. dissertation, University of New Orleans, New
Orleans (2000)
9. Tseng, C., Wang, W., Lin, Y., Hung, P.-H.: Effects of computerized advance organizers on
elementary school mathematics learning. Paper presented at the International
Conference on Computers in Education (2002)
10. Ausubel, D.P.: The use of advance organizers in the learning and retention of meaningful
verbal material. Journal of Educational Psychology 51(5), 267–272 (1960)
11. Ausubel, D.P., Fitzgerald, D.: Organizer, general background, and antecedent learning
variables in sequential verbal learning. Journal of Educational Psychology 53, 243–249
(1962)
12. Ausubel, D.P., Youssef, M.: The role of discriminability in meaningful parallel learning.
Journal of Educational Psychology 54, 331–336 (1963)
13. Fitzgerald, D., Ausubel, D.P.: Cognitive versus affective factors in the learning and
retention of controversial material. Journal of Educational Psychology 54, 73–84 (1963)
14. Unsworth, N., Heitz, R.P., Schrock, J.C., Engle, R.W.: An automated version of the
operation span task. Behavior Research Methods 37, 498–505 (2005)
15. Jacob, R.J.K., Karn, K.S.: Eye tracking in human-computer interaction and usability
research: Ready to deliver the promises. In: Hyönä, J., Radach, R., Deubel, H. (eds.) The
Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, pp. 573–605.
Elsevier, Amsterdam (2003)
16. Just, M.A., Carpenter, P.A.: Eye fixations and cognitive processes. Cognitive
Psychology 8, 441–480 (1976)
17. Huber, S., Krist, H.: When is the ball going to hit the ground? Duration estimates, eye
movements, and mental imagery of object motion. Journal of Experimental Psychology:
Human Perception and Performance 30, 431–444 (2004)
18. Clark, R.E.: Media will never influence learning. Educational Technology
Research and Development 42(2), 21–29 (1994)
The Development and Implementation of Learning
Theory-Based English as a Foreign Language (EFL)
Online E-Tutoring Platform

Hsueh-Hua Chuang1, Chi-Jen Huang2, and Han-Chin Liu3


1 Center for Teacher Education, National Sun Yat-sen University, No. 70, Lienhai Rd.,
Kaohsiung 80424, Taiwan
2 Teacher Education Center, National Chiayi University, 85 Wenlong Village,
Chiayi County 621, Taiwan
3 Department of E-Learning Design and Management, National Chiayi University,
85 Wenlong Village, Chiayi County 621, Taiwan

Abstract. The purpose of this study was to develop and implement a learning
theory-based EFL (English as a Foreign Language) e-tutoring platform to help
EFL learners develop English language skills at their own pace. The online
e-tutoring platform was designed based on the learning theories of constructivism,
situated learning theory, cooperative learning, and self-regulated learning. It is
intended that the online e-tutoring platform provide an opportunity for e-tutors
to help each individual EFL learner develop his or her language skills. A group
of 25 sixth graders participated in the 20-week e-tutoring program via the online
EFL e-tutoring platform. Analysis of the results from the participants' achievement
tests and feedback from the e-tutors helped to inform future improvements of
e-tutoring programs. Recommendations for platform improvements are also provided.

Keywords: e-tutoring, e-learning platform, learning theory, EFL.

1 Introduction
According to Taiwan Network Information Center's (TWNIC) latest Internet database
in 2010, about 14,660,000 people (60 percent of Taiwan's population) are using the
Internet [1]. The database also shows that the age of Internet users is decreasing each
year. Young generations are digital natives, fluent in navigating Internet tools.
Specifically for the younger generation, learning is no longer limited to the
classroom; e-Learning has emerged as an optimal option for them.
In a global knowledge economy era, learning will no longer be restricted by time and
place boundaries. E-learning in a digital age can be customized to meet each individual
learner's needs and starting points, which makes e-Learning a promising innovation
that will transform the way learning takes place.

2 Related Literature
2.1 e-Learning
There are numerous definitions of the word e-learning. These include Wikipedia's [16]:
e-learning is a type of education where the medium of instruction is computer
technology. Brown and Voltz [6] proposed that e-learning is the use of computers
in a systematic four-step process in which content is presented, practiced, assessed,
and reviewed. According to Clark and Mayer [2], e-learning is instruction delivered
on a computer that has the following characteristics:

• Content is relevant to the learning objective.
• Instructional methods, such as examples or practice exercises, are used to help
learning.
• A variety of media elements are used to deliver the content and methods.
• New knowledge and skills are built and linked to improved organizational
performance.
e-Learning employs Learning Management Systems (LMS) to teach, assign
homework, and assess learners [7]. It uses the Internet as the medium to achieve
learning objectives synchronously or asynchronously, so learning is no longer
restricted by time, place, or the number of students.
Su and Huang [8] pointed out that technology applied to learning could enhance
its scope, depth, and variety, and provide first-person experience to build up a
sense of participation and learning motivation. Heinich et al. [3] argued that e-Learning
could increase learners' motivation. Lai [4] believed that a learning system should
be equipped with different functions and characteristics depending on which
e-Learning model is utilized: (1) virtual, (2) distance, (3) interactive, (4) on-demand,
or (5) integrated models.
Recently, there has been great improvement in e-Learning platforms.
Su and Lin [12] pointed out that the Sharable Content Object Reference Model
(SCORM) is part of the Advanced Distributed Learning (ADL) plan. The model
allows teaching materials to be shared, which shortens the time of teaching material
development, lowers the development cost, and increases the mobility of teaching
materials. The SCORM model meets the requirements mentioned above.

2.2 Theory

This research project aimed to construct an EFL online e-tutoring platform based on
four learning theories: constructivism, situated learning theory, cooperative learning,
and self-regulated learning.

2.2.1 Constructivism Learning Theory


Constructivism emphasizes that knowledge is constructed from the actual experience
of the learner. Learners actively construct new concepts through their actions and
learning. The essence of constructivism is that the cognitive mind actively constructs
knowledge rather than merely accepting it [10][13]. Under constructivist theory,
knowledge changes over time through the interaction between the individual
and the surrounding environment. A teacher is a facilitator and promoter of meaning
construction rather than a knowledge provider in the learning process.
The practice of constructivism in teaching and learning can be divided into three
levels: (1) discussing constructions; (2) sharing constructions; (3) collaborating on
constructions [11]. Therefore, a good e-learning environment should try its best to
support these three levels of knowledge construction.

2.2.2 Situated Learning Theory


The viewpoint of situated learning theory is that learning is accomplished through
social activity; learning in daily life is the most natural and effective way. If an
e-tutor provides an effective learning situation, it can help learners gain and
construct their own knowledge [9].

2.2.3 Cooperative Learning


The Internet creates an open, integrated, and reciprocal environment that cannot be
achieved in the traditional classroom. E-tutors and e-learners use resources on the
Internet to learn, and they cooperate, discuss, and brainstorm to gather ideas and
solve problems according to their interests, abilities, and experience [5].

2.2.4 Self-regulated Learning


Zimmerman and Schunk [14] proposed that self-regulated learning is self-adjustment
in the learning process. The self-adjustment includes setting up goals, self-monitoring,
and using tactics. Zimmerman and Schunk [14] believe that there are four steps in the
cycle of self-regulated learning: self-monitoring and evaluating, setting up goals and
plans, practicing tactics, and monitoring the results.
Learners obtain learning autonomy to decide their own way to learn, adjusting
according to their achievement [15].

3 System

3.1 Construction of the System

When establishing the EFL online e-tutoring platform, four English content modules
were digitized as the basis of the system, with a manageable interface and tool aids.
We used Moodle to match up the related modules.
The advantages of the Moodle system are that it can be downloaded from the Internet
for free and is easy to use and install.

3.2 Development Kit

This research used Fedora Linux Server Version 10 with Apache 2 to construct the
system and server of the platform. The programming technologies are PHP5, MySQL,
CSS, JavaScript, and Flash. The platform has been pilot tested to examine its usability.

4 System Introduction
The following is an introduction to the learning theory-based English as a Foreign
Language (EFL) online e-tutoring platform.
The platform integrates four components around the online e-tutoring platform:

• Synchronous course content learning (constructivism): (1) one-on-one tutoring;
(2) hypertext learning in the four English learning modules; (3) non-linear learning;
(4) self-discovery, adaptation, and assimilation.
• Synchronous online meeting (situated learning): (1) Skype real-time video
conferencing; (2) instant messaging; (3) teaching content file transfer; (4) desktop
sharing.
• Internet learning community (cooperative learning): (1) introduction forum;
(2) e-tutor and e-learner communication forum; (3) Q&A session; (4) learning
reflections.
• Online learning passport (self-regulated learning): (1) learning quiz activity;
(2) report card of learning progress; (3) record of lessons reviewed; (4) rewards.

5 Results and Discussion

5.1 Achievement

The participants of the study were 25 sixth-grade elementary school students from
economically disadvantaged families.
The students without Internet access at home (about 50%) used school computer
labs to conduct one-on-one e-learning at school in the afternoon. Those with home
Internet access (about 50%) used their home computers.
Students were given pre- and post-English tests at the beginning and the end of the
18-week participation period. After analyzing the data, the percentage of improvement
is about 21% for school computer users and 33% for home users; at-home learning
was more effective. The reason for the difference might be that students who have a
computer at home could do more learning activities than students who could use a
computer only at school. The school allowed only one session of one hour of
computer lab time from Monday to Friday. What is worth noticing is that those who
did make significant progress in English proficiency spent only half the time, or even
less, on the platform compared with those who did not.

5.2 Analysis of the Feedback from Online E-tutors

Four of the e-tutors in the program were interviewed. They all believed that e-tutors
should possess computer literacy, knowledge of netiquette, and related pedagogical
ability and teaching tactics. E-tutors should be able to understand learners'
needs and to solve technical problems. Besides, they should have the ability to
communicate and interact with learners and other e-tutors.

5.3 Recommendations and Future Improvements

The home e-Learning group made more progress than the school group due
to the Internet access issue. In light of the digital divide, it would be helpful if the use
of the online platform could be combined with community e-learning digital centers
and school computer and Internet facilities to address the access issue.
Schools and society should provide more digital resources to bridge the digital
divide gap.

6 Conclusion
With the rapid development of Internet-mediated communication tools in the digital
era, teaching and learning are no longer limited to the traditional classroom. Therefore,
how to develop effective e-learning based on solid learning theories is an emerging issue.
We hope that, through recruiting and training volunteers to use the online platform,
it will help disadvantaged students improve their English learning,
especially in remote areas where there is a lack of qualified English teachers.

Acknowledgements. The authors would like to acknowledge the support of the National
Science Council in Taiwan. NSC Grant Number: NSC96-2413-H-110-005-MY3.

References
1. Taiwan Network Information Center: Basic Internet survey,
http://statistics.twnic.net.tw/item04.htm (retrieved May 15, 2010)
2. Clark, R., Mayer, R.: e-Learning and the Science of Instruction: Proven Guidelines for
Consumers and Designers of Multimedia Learning. Jossey-Bass/Pfeiffer, San Francisco (2003)
3. Heinich, R., Molenda, M., Russell, J., Smaldino, S.: Instructional Media and Technologies
for Learning, 7th edn. Merrill Prentice Hall, New Jersey (2002)
4. Lai, A.F.: Discussion of digitalized learning. Bimonthly Journal of Teachers
World 1236, 16–23 (2005)
5. Lin, Y.Y.: An action research of cooperative learning on health and physical education
learning area for high grade students of elementary school. Master's thesis, Education
Department of CCU, Chiayi, Taiwan (2004)
6. Brown, A.R., Voltz, B.D.: Elements of effective eLearning design. The International
Review of Research in Open and Distance Learning 6(1), 217–226 (March 2005);
retrieved May 13, 2008, from the Washington State University database
7. Govindasamy, T.: Successful implementation of e-learning: pedagogical considerations.
The Internet and Higher Education 4, 287–299 (2001)
8. Su, X.Y., Huang, M.L.: Content analysis of digital learning literacy in the digital age.
Bimonthly Journal of Educational Resources and Research 80, 147–172 (2008)
9. Brown, J.S., Collins, A., Duguid, P.: Situated cognition and the culture of learning.
Educational Researcher 18, 32–42 (1989)
10. Pei, X.N.: Collision of ideas between the East and West: the understanding of education in
the view of constructivism. Open Education Research 41, 12–14 (2003)
11. Resnick, L.B.: Shared cognition: Thinking as social practice. In: Perspectives on Socially
Shared Cognition, pp. 1–20. American Psychological Association, Washington, DC (1991)
12. Su, W.J., Lin, P.J.: Learning technology standards and SCORM. Journal of Library and
Information Science 29(1), 15–28 (2003)
13. Yager, R.E.: The constructivist learning model: Towards real reform in science education.
The Science Teacher 58(6), 52–57 (1991)
14. Zimmerman, B.J., Schunk, D.E.: Self-regulated Learning and Academic Achievement:
Theory, Research, and Practice. Springer, Heidelberg (1989)
15. Zimmerman, B.J.: Self-efficacy and educational development. In: Bandura, A. (ed.)
Self-efficacy in Changing Societies, pp. 202–231. Cambridge University Press, New York
(1995)
16. Wikipedia, The Free Encyclopedia: eLearning,
http://en.wikipedia.org/wiki/ELearning (retrieved May 13, 2008)
Analysis of the Application of the Behavior-Oriented
Teaching Method in the Education of Computer Science
Professional Degree Masters

Xiugang Gong1, Jin Qiu2, Shaoquan Zhang3, Wen Yang3, and Yongxin Jia1
1 College of Computer Science and Technology, Shandong University of Technology,
Zibo, China
gong_xg@sina.com
2 Information Engineering Department, Shandong Silk Textile Vocational College,
Zibo, China
xindi1998@tom.com
3 Graduate School, Shandong University of Technology, Zibo, China
pyk@sdut.edu.cn

Abstract. The objective of Computer Science professional degree master's
education is to cultivate professional, high-level, application-oriented
specialists who meet the needs of real work. Given that the period of
schooling is two years and that some Computer Science professional degree
master's students are weak in computer programming when entering school, this
paper takes "VC++ Programming" as an example to discuss the use of the
Behavior-Oriented Teaching Method to improve teaching effects. The paper
deals with behavior-oriented theory and practical teaching methods, the
application and significance of this method, and how to guide the teaching process
of "VC++ Programming" by applying this method.

Keywords: Behavior-Oriented Teaching Method, Computer Science professional
degree masters, VC++ Programming.

1 Introduction
Since China implemented professional degrees in 1991, through more than ten years
of effort, professional degree education has developed quickly and achieved prominent
results. Before 1999, the scale of master's education in China was small, and academic
specialists for education and scientific research were the major part of master's
students; therefore, professional degrees at that time were mainly for employees at
their posts to satisfy their need for self-improvement. In order to adapt to the change
in social needs for the structure of master's education, the Ministry of Education
decided to recruit professional degree masters from graduating students starting in
2009 [1]. Full-time education and a credit system were carried out in this degree.
The schooling period is two years [2].


A high-level ability in computer programming is a basic requirement for Computer
Science professional degree masters. However, their abilities are currently weak.
According to teachers' experience and inquiries, no more than 50% of students entering
the master's degree can independently select a programming language and produce
small application programs. The reasons are as follows.
(1) In the teaching of programming languages in undergraduate courses, teachers
focus on the knowledge system but neglect the participation and assessment of the
students. The practical ability of students is ignored to some extent. As a result, the
computer programming ability of students is generally weak.
(2) In many universities, the program design class starts at the same time as
NETEM (National Entrance Test of English for MA/MS Candidates), when a number
of students have to abandon this class to prepare for NETEM. Moreover, the
undergraduate graduation thesis starts at the same time as the NETEM reexamination.
A large number of students are engaged in the reexamination and miss a good
chance to improve their program design ability.
(3) Plenty of students have the desire to improve their program design ability
but are restricted by their economic circumstances and limited practical exercise time.
It is obvious that plenty of institutions want to recruit professional degree
masters. The professional degree, compared to the academic degree, is a degree
whose objective is to cultivate professional, high-level, application-oriented
specialists who meet the needs of real work. Aimed at academic research, the
academic degree stresses theories and research; its objective is to cultivate teachers in
universities and researchers in scientific establishments [1]. Aimed at professional
practice, the professional degree stresses practice and application; its objective is to
cultivate high-level specialists who have received formal, high-level training in their
professional techniques. The prominent feature of the professional degree is that it
links tightly with the profession. The schooling period of professional degree masters
is generally two years [2]. It is meaningful research work to study how to cultivate,
in a short time, the high-level, application-oriented specialists who meet the needs of
real work.

2 Meaning of Behavior-Oriented Theory


Behavior-Oriented teaching is a new stream of thought that appeared in the 1980s in
the world vocational education field; it is also called practice-oriented or
action-oriented teaching [3]. Owing to its important and significant role in cultivating
students' overall quality and comprehensive ability, Behavior-Oriented teaching has
been valued by experts from the vocational education and manufacturing fields of
different countries [4].
In Behavior-Oriented theory, people are active, improve gradually, and are
responsible for themselves; they can give themselves critical feedback while achieving
a set goal [5]. The purpose of Behavior-Oriented study is to expand and improve
the individual's modes of action and to gain professional ability, including the most
important key abilities. It combines the study process with professional action, and it
combines the student's individual action and study process with an "action space"
appropriate to social needs. Behavior-Oriented teaching expands the action space of
students and improves the "role ability" of individual action, which substantially
promotes students' inspiration and ability to solve problems.
Some teaching methods have developed under the guidance of Behavior-Oriented
Teaching thought: simulated teaching, the education case method, the project-based
approach, role play, etc. [6]. The patterns of teaching and learning can vary
according to the nature of the learning task. A number of teaching methods following
the Behavior-Oriented Teaching Method have now been transferred to elementary
education and regular higher education in our country, with good results [7-11].
The professional degree is oriented by professional practice; it stresses practice
and application. The Behavior-Oriented Teaching Method takes the practical action of
students as its subject: students are the active participants, while teachers are the
imparters of knowledge as well as consultants and instructors. Teachers change
information teaching into method teaching, which takes the activities of students as
the main part. As a result, students can learn not only the indirect experience
summarized by predecessors but also direct experience from their own practice. As an
example, "VC++ Programming" taught in our college can be used to explain the
application of the Behavior-Oriented Teaching Method in practical teaching.

3 Using the Behavior-Oriented Teaching Method to Guide
the Teaching of "VC++ Programming"
We designed the teaching of "VC++ Programming" according to Behavior-Oriented
Teaching thought. Course teaching is carried out as 8 tasks, ordered from the easy to
the difficult and complicated. The accomplishment of this set of tasks forms a clear
thread along which to practice, explain, discuss, and assess, with the textbook used as
a reference book.
According to the curriculum in the cultivation plan, there are 40 class hours to
teach. The distribution of class hours is as follows. The whole course is divided into
8 tasks, including 3 demonstration tasks, 2 teaching cases, and 3 teaching projects.
Every demonstration task takes 2 class hours. Every teaching case takes about
5 class hours, including 2 class hours of case explanation, 2 class hours of students'
practice on computers, and 1 class hour of instruction. Every teaching project takes
about 6 class hours, including 1 class hour for explaining the needed knowledge and
distributing tasks, 4 class hours for students to produce and debug programs, and
1 class hour of instruction. If students cannot finish producing programs in the
fixed time, they can continue programming and communicate with others after class.
Enough time should be left for students to learn by themselves before every task, and
teachers should plan the self-study content before the next task. The other 6 class
hours are summary hours and flexible hours. All the tasks are listed in Table 1. The
teaching process will be explained taking Task 1 and Task 4 as examples.
(1) Task 1: Find all the perfect numbers within 2 to 10000. A perfect number is a
number n for which the sum of its divisors s(n) = n; for example, 6 = 1 + 2 + 3 and
28 = 1 + 2 + 4 + 7 + 14, so 6 and 28 are perfect numbers.

Table 1. List of teaching tasks of "VC++ Programming"

Task 1 (demonstration). Content: find all the perfect numbers within 2 to 10000; a perfect number is a number n for which the sum of divisors s(n) = n. Needed knowledge: use of the Visual C++ IDE, the basic control structures of a program, basic data types of C++, C++ expression methods, etc.
Task 2 (demonstration). Content: use Newton's method on the equation 2x^3 - 4x^2 + 3x - 6 = 0 to find the root near 1.5. Needed knowledge: practice using function calls and simple algorithm application.
Task 3 (demonstration). Content: print out the Chinese triangle (required to print out 10 lines). Needed knowledge: master the output format of C++ and the use of loop structures.
Task 4 (case method of teaching). Content: design a small library management system; the main functions include registering the information of every book, registering library cards, borrow registration, returning registration, etc. [12]. Needed knowledge: understand the design patterns of object-oriented programs; master class and object concepts in C++; master class definition and instantiation methods through this task.
Task 5 (case method of teaching). Content: use VC++ programming to simulate the paint software in Windows accessories; the design is based on MFC and supports saving and bitmap reading. Needed knowledge: master the messaging mechanism of Windows and drawing with the graphics device interface.
Task 6 (project method). Content: design a digital image processing demonstration system; its functions include opening and saving bitmap images, histogram, translation, mirroring transformation, transposition, scaling, rotation, etc. Needed knowledge: master the messaging mechanism of Windows, the concept of dialog boxes and controls, basic knowledge of bitmaps and digital image processing, and drawing with GDI graphics.
Task 7 (project method). Content: write a class performance management system which can compute statistics such as the average grade, the number of failures, etc. Needed knowledge: the use of basic controls in VC, basic knowledge of databases, and ADO technology.
Task 8 (project method). Content: write a chat program based on UDP [13]. Needed knowledge: socket programming, ADO technology, the C/S mode, etc.

In teaching, teachers should demonstrate the running of the program first. Then they
explain in detail the main knowledge in combination with the program. The knowledge
includes the usage of the Visual C++ IDE, the debugging methods of the
programming environment, the basic control structures of a program, the basic data
types of C++, the usage of expressions in C++, the input format of C++, etc. Some
students are then asked to design similar programs on this basis.
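As a concrete illustration of Task 1, a small program of the kind students are asked to produce might look like the following C++ sketch (an illustrative solution, not the course's prescribed one):

#include <iostream>

// Sum of the proper divisors of n (divisors smaller than n itself).
int sumOfDivisors(int n) {
    int sum = 1;                      // 1 divides every n > 1
    for (int d = 2; d <= n / 2; ++d)
        if (n % d == 0) sum += d;
    return sum;
}

int main() {
    // Print every perfect number in [2, 10000]: each n with s(n) == n.
    for (int n = 2; n <= 10000; ++n)
        if (sumOfDivisors(n) == n)
            std::cout << n << '\n';   // prints 6, 28, 496, 8128
    return 0;
}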
(2) Task 4: Design a small library management system. The classes that need to be
defined for the system include a library book class, a library card class, and a record
class. Member functions of the system include book entry functions, library card
entry functions, borrow processing functions, return processing functions, etc. The
system is prepared using the MFC framework.
Teachers give the case to the students before class. After they receive the case,
students consult the various theories and knowledge they think necessary. Students
understand the knowledge better because they think carefully and propose solutions
themselves. When teaching, teachers cooperate with students to complete the task
and guide students to master the basic knowledge of databases, ADO technology,
object-oriented programming thought, and the concepts of class and object, which
include class definition, constructors, destructors, the declaration of objects,
references, etc.
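For instance, the class and object concepts listed above can be introduced with a small self-contained sketch like the one below. The class and member names are illustrative only; the actual system is built on the MFC framework rather than plain console C++.

#include <iostream>
#include <string>

// A minimal book class of the kind Task 4 asks students to define.
class Book {
public:
    // Constructor: runs when a Book object is instantiated.
    Book(const std::string& title, const std::string& id)
        : title_(title), id_(id), borrowed_(false) {}

    // Destructor: runs when the object goes out of scope.
    ~Book() { std::cout << "Book " << id_ << " destroyed\n"; }

    bool borrow() {                     // borrow registration
        if (borrowed_) return false;
        borrowed_ = true;
        return true;
    }
    void giveBack() { borrowed_ = false; }  // returning registration

private:
    std::string title_;
    std::string id_;
    bool borrowed_;
};

int main() {
    Book b("C++ Primer", "B001");       // declaration and instantiation
    Book& ref = b;                      // a reference to the same object
    if (ref.borrow()) std::cout << "borrowed\n";
    ref.giveBack();
    return 0;
}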
With Behavior-Oriented Teaching, students could quickly master the features and
basic application of Visual C++ after the instruction and accomplishment of these 8
tasks. Accomplishing the tasks frees students from the fear of VC programming
and gives them an initial understanding of the procedures and modes of VC
programming. Students' interest is easily aroused.

4 Conclusion
For the above-mentioned reasons, the authors reformed their teaching to use the
Behavior-Oriented Teaching Method and obtained good effects. An anonymous
investigation questionnaire shows that the teaching quality improved evidently, as
shown in Table 2. In this table, the test results for "C Programming" were obtained in
interviews.

Table 2. Anonymous investigation questionnaire

Traditional teaching methods (22 students in 2009): test result of "C Programming": high 4, medium 7, low 11; test result of "VC++ Programming": high 4, medium 6, low 6; students' assessment: 78.
Behavior-oriented teaching approach (25 students in 2010): test result of "C Programming": high 4, medium 9, low 12; test result of "VC++ Programming": high 3, medium 7, low 8; students' assessment: 87.

To sum up, in the traditional teaching method, teachers just give instruction and
students receive it; teaching effects do not show until the final tests, and students
learn fixed knowledge. The teaching mode of "VC++ Programming" based on the
thought of the Behavior-Oriented Teaching Method gives students definite, concrete
tasks and stimulates students' interest in and motivation for VC programming.
Students exercise their knowledge and techniques actively. They take a serious and
careful attitude to accomplish the 8 tasks, to produce and debug code in accordance
with practical requirements, and to show their work achievements. Their study
interest is inspired. Therefore, the teaching effects are superior, the teaching content
can be mastered in a comparatively short time, and the expected study goal is
achieved.

Acknowledgements. This paper is sponsored by a Shandong innovative project for
graduates (SDYC09020) and a Shandong University of Technology innovative
project for graduates (08019).
Zhu Rui-jin and Liu Shushu from Shandong University of Technology did a lot of
research for this project; they designed part of the network code for this software system.

References
1. Ministry of Education of PRC: Make greater efforts to adjust the educational structure of
graduate degrees: Yang Yuliang, Director of the Academic Degrees Committee Office of
the State Council, answers the reporter (2009)
2. Ministry of Education of PRC: Certain opinions on how to do well the job of training
masters with professional degrees (2009)
3. Xu, G.: The research on practice-oriented vocational education. Shanghai Education Press
(2006)
4. Liu, Y.: The application of "behavior-oriented" in the teaching of Excel. Information and
Computer (Theory Edition) 5, 195 (2010)
5. Ye, C.: Teaching and practice based on professional activities. Zhejiang Science and
Technology Press (2008)
6. Chen, Y.: Analysis of professional teaching methods. Professional Worlds 6, 89–90 (2007)
7. Zhou, W., Zhang, X., Li, C.: Applications of the behavior-oriented teaching approach
in vocational English courses. Education and Vocation 30, 133–134 (2006)
8. Zhao, H.: On behavior-oriented teaching mode in teaching English major interpersonal
function: practice and case study. Foreign Language and Literature 26, 134–136 (2010)
9. Tang, C.: Analysis on the integration of the behavior-oriented approach and the modular
approach. Education and Vocation 29, 147–149 (2006)
10. Zheng, L.: Discussion of how to implement the behavior-oriented teaching mode in
vocational colleges. Education and Vocation 23, 68–69 (2008)
11. Huang, B.: Implementation and exploration of the behavior-oriented teaching mode in
vocational colleges. Education and Vocation 26, 134–135 (2009)
12. Lv, J., Yang, Q., Luo, J., et al.: Visual C++ and object-oriented programming tutorial.
Higher Education Press (2003)
13. Sun, X.: VC++ in-depth explanation. Publishing House of Electronics Industry (2006)
Automatic Defensive Security System for WEB
Information

Jiuyuan Huo and Hong Qu

Information Center, Lanzhou Jiaotong University, Lanzhou 730070, China
huojy@mail.lzjtu.cn

Abstract. Tampering with the public information of Websites can result in serious
political events that affect national security and economic development, reduce the
government's credibility, and even affect social stability. To solve these security
problems of public information Websites, advanced file filtering technology and
event-driven technology have been adopted to build an automatic defensive
security system that protects public information services at multiple security levels.
A hardware security guard has also been designed and produced to proactively detect
all types of malicious attacks, shut down the server, and send alarms to
administrators. The system's research and development have been completed, and it
has passed the information security product certification of the Ministry of Public
Security.

Keywords: Website, Automatic Defensive, Web Information, Security.

1 Introduction
With the rapid popularization of the Internet, Websites have become an important way
for enterprises to publish and exchange information. Enterprises can publish
information to the public through a Web site and can also provide various Web
applications for customers and partners. Government portals have become an
important new form for all levels of government to use information technology to
fulfill their functions.
However, Internet Websites operate in a relatively open environment: it is convenient
to provide services to the public, but it is also easy to become a target for hackers.
Among all attack events, Web page tampering has occurred most frequently. Statistical
data released by the National Computer Network Emergency Response Technical
Team/Coordination Center of China (CNCERT/CC) showed that a total of 35,113 sites
were tampered with during the first half of 2008 in China, an increase of 23.7%
compared with the same period of the previous year [1].
Due to the complexity and diversity of modern operating systems, emerging
vulnerabilities, the many security breaches in applications, and other reasons, Web
pages can be altered by hackers. Illegal Web page tampering exploits vulnerabilities
at the operating system and application levels to carry out the attack; existing security
measures focus on the network layer and cannot provide effective monitoring and
protection against such attacks, so page tampering events cannot be avoided.
To protect Web site security and the credibility of Internet information, the Ministry
of Public Security issued the "Regulations of Technical Measures for Internet Security
Protection" on December 1, 2005. The regulations state clearly that "portals, news
sites, and e-commerce sites should prevent the Web sites and pages from being
tampered with, and should recover automatically from tampering" [2].
For the reasons discussed above, we have researched and developed a Web page
anti-tampering system, WebDefender, to protect the information security of Web sites
for enterprises and institutions. The product has been successfully developed and has
passed the information security product certification of the Ministry of Public Security
of China.

2 Related Work
After years of development, the technologies adopted by Web anti-tampering systems
have been constantly developed and updated [3-4]. Round-robin detection technology
is the first generation of Web anti-tampering technology: it inspects the pages of
protected sites in a round-robin way through a detection program running in the
background, reading out the monitored pages at regular intervals and comparing them
with backup pages to determine whether the content has been tampered with. Once a
tampering event is found, the tampered pages can be restored and alarms sent. Because
there is a certain time interval in this detection technology, pages can be hacked and
seen by users during this period. The method also takes up system resources such as
CPU and memory and is less efficient.
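A minimal sketch of this round-robin idea follows, assuming a plain byte-level comparison of each served page against a trusted backup tree. The directory names are hypothetical, and a real product would add hashing, alarm delivery, and error handling.

#include <chrono>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <sstream>
#include <thread>

namespace fs = std::filesystem;

// Read a whole file into a string (empty if unreadable).
std::string slurp(const fs::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

int main() {
    const fs::path live = "webroot";    // directory served to users
    const fs::path backup = "backup";   // trusted copy of the site

    for (;;) {  // round-robin loop: inspect pages at regular intervals
        for (const auto& entry : fs::recursive_directory_iterator(backup)) {
            if (!entry.is_regular_file()) continue;
            fs::path servedFile = live / fs::relative(entry.path(), backup);
            // Compare the served page with its trusted backup copy.
            if (slurp(servedFile) != slurp(entry.path())) {
                std::cerr << "Tampering detected: " << servedFile << '\n';
                fs::copy_file(entry.path(), servedFile,
                              fs::copy_options::overwrite_existing);  // restore
            }
        }
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}

The sleep interval at the bottom of the loop is exactly the window during which a tampered page can still be served to users, which is the weakness noted above.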
The second generation of Web anti-tampering technology is Web server embedded
technology. It modifies the existing Web server architecture and encrypts and stores
specific features, such as the size and creation time, of Web pages and other documents.
As users access the Web site, the same features of the requested pages are encrypted and
then compared with the stored encrypted values. If the two values are equal, the Web
server forwards the pages to the user; if not, the Web server refuses to show the page to
users. In this way, the possibility of transmitting altered information to the user is
eliminated. This technology greatly improves the security of the Web site, but it
consumes a lot of server resources during encryption calculation and comparison,
resulting in greater system load and lower efficiency.
The latest, third-generation technology combines advanced file filtering technology
with event-driven technology. File filtering technology utilizes the underlying file
driver of the operating system kernel: when a file's features change, the operating
system generates a corresponding message. The monitored directories and files of a
Web site are defined in advance; when the operating system produces such messages
for these directories and their files, an event-triggered synchronization mechanism
starts a page tampering recovery process that restores the backup file over the
tampered files and alarms the system administrator to take follow-up measures. With
the underlying file driver technology, the entire process of discovering a file
tampering attack and recovering the file takes only a few milliseconds; thus, the
tampered page will hardly be seen by users. At present, the operational performance
and real-time detection of this technology meet the highest standards, and it is a
simple, efficient, and safe anti-tampering technology.
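At user level, the event-driven idea can be illustrated with the documented Win32 ReadDirectoryChangesW API, which blocks until the operating system reports a change in a watched directory. This is only a sketch: the system described here works at the kernel file-driver level below this API, and the watched path is hypothetical.

#include <windows.h>
#include <iostream>
#include <string>

int main() {
    // Open the protected Web directory for change notification.
    HANDLE dir = CreateFileW(L"C:\\webroot", FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             nullptr, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (dir == INVALID_HANDLE_VALUE) return 1;

    alignas(DWORD) BYTE buffer[4096];
    DWORD bytes = 0;
    // Block until the OS reports a change, then walk the notification records.
    while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                 FILE_NOTIFY_CHANGE_LAST_WRITE |
                                 FILE_NOTIFY_CHANGE_FILE_NAME |
                                 FILE_NOTIFY_CHANGE_SIZE,
                                 &bytes, nullptr, nullptr)) {
        if (bytes == 0) continue;  // buffer overflowed; a full rescan is needed
        auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer);
        for (;;) {
            std::wstring name(info->FileName,
                              info->FileNameLength / sizeof(WCHAR));
            // Here the synchronization module would restore the file from
            // backup and raise an alarm; this sketch only reports the change.
            std::wcout << L"Change detected: " << name << L'\n';
            if (info->NextEntryOffset == 0) break;
            info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
        }
    }
    CloseHandle(dir);
    return 0;
}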
Currently, Web anti-tampering products of all kinds are based purely on software [5-10].
Once hackers gain administrator privileges on the operating system, such products
cannot prevent their destruction of and illegal tampering with the system. The security
of a public information service system involves many aspects and multiple levels of
the system; if any one part has a security vulnerability, it may cause fatal damage to the
entire system. There has been no solution that combines software and hardware to
solve the network security problems of public information service systems at multiple
levels of the system.

3 System Architecture
WebDefender is a Website tampering protection product that adopts the most
advanced third-generation Web anti-tampering technology. When Website files are
tampered with, a real-time blocking mechanism is immediately invoked to prevent the
erroneous information from being served, and the synchronization mechanism is started
to rapidly recover the tampered directories and files; the tampered files are also backed
up as future evidence. If the anti-tampering service is attacked and cannot operate
normally, or the operating system fails, a hardware security guard shuts down the
corresponding service or host based on the predefined security level or on SMS (Short
Messaging Service) instructions sent by the administrator. All exception messages can
be sent to the administrator through SMS or e-mail in real time. The system realizes
real-time detection, real-time recovery, and real-time alerts for Website tampering, and
effectively solves the Website's security problems. As shown in Fig. 1, the WebDefender
system consists of three components: the anti-tampering server-side, the anti-tampering
client-side, and the hardware security guard.

3.1 Anti-tampering Server-Side

The anti-tampering server-side is deployed on the Website server and is responsible for
real-time monitoring of the protected directories and files. When a tampering attack is
detected, the illegally tampered file is recovered in real time and alarms are sent
through the hardware security guard. The anti-tampering server-side includes a page
synchronization module, a page monitoring module, a system management module,
and an alarm management module.
The page synchronization module and page monitoring module are the core of the
anti-tampering system and reside in the operating system kernel of the Web server. The
page monitoring module utilizes the file driver filtering technology of the operating
system and enhanced real-time event triggering technology to detect file tampering on
the protected Website. When a tampering event occurs, the module immediately
notifies the page synchronization module to restore the damaged files, notifies the
system management module that a tampering event has occurred, and notifies the
alarm management module to inform the administrator through a variety of channels.
The page synchronization module is responsible for communication with the backup
server on which the anti-tampering client-side is deployed, carrying out normal updates
of Website files as well as file recovery tasks during a tampering attack. The system
management module communicates with the modules in the anti-tampering client-side
and the hardware security guard, records logs of system events and attacks, and
provides administrators with a Web-based management interface. The module can also
take actions under the appropriate strategy, according to the degree and frequency of
attacks, to avoid further damage. The alarm management module sends alarm
information to the administrator through the hardware security guard and
communicates regularly with the hardware security guard to ensure that the
anti-tampering core modules have not been maliciously shut down.

Fig. 1. Schematic diagram of system modules

3.2 Anti-tampering Client-Side

The anti-tampering client-side is deployed on the backup server and is responsible for publishing Web pages to the anti-tampering server-side and for restoring illegally tampered files when the server-side detects attacks. All legal changes to Web pages, including add, modify or delete operations, must be performed in the specified directory on the client's backup server. The anti-tampering client-side includes a page synchronization module and a system management module.

The page synchronization module resides on the backup server and runs in the kernel of the operating system. When the administrator legitimately updates files of the Website, the page synchronization module communicates with the server-side module and the updated files are synchronized to the Web server. When a tampering event occurs, the page synchronization module at the server-side immediately starts the synchronization recovery mechanism together with the client-side's page synchronization module to restore the tampered files. The system management module is responsible for sending system and attack log messages to the server for log recording.

3.3 Hardware Security Guard

The hardware security guard is an embedded hardware device independent of the Website host system. Its main functions include monitoring the various security processes and the running status of the system. When an exception occurs, it stops the corresponding service or shuts down the host according to the security policy or the administrator's commands. It is responsible for informing the administrator of anomalous operation of the system, and it receives the administrator's instructions to carry out the corresponding operations on the system.

The hardware security guard includes an alarm management module and a system management module. The system management module monitors the status of the corresponding modules running on the server and communicates with the core modules regularly to ensure the anti-tampering modules have not been maliciously shut down. When a tampering event occurs, the alarm management module immediately notifies the system administrator that a tampering attack has occurred. The hardware security guard receives the administrator's SMS and executes system commands to close the network function, shut down the power and perform other operations to reduce the extent of the damage.

4 Product Deployment
Currently, most Websites use a content management system (CMS) to manage the whole process of page production, including page editing, page auditing and page generation. The Website architecture after WebDefender's deployment is shown in Fig. 2. First, the WebDefender system should be adopted to protect the integrity of the dynamic files (Jsp, Asp, Php, etc.) of the Website from illegal tampering. Once the files of dynamic pages have been illegally tampered with, the WebDefender system can detect the behavior in real time, immediately restore the tampered files, and inform the administrator through the hardware security guard. Second, an anti-SQL injection module should be deployed on the WWW server to prevent SQL injection attacks initiated by hackers to undermine and tamper with the database service or to modify important information stored in the database. Through the joint work of these security products, the operational security of an enterprise's Website information system can be protected comprehensively and systematically.
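The paper does not detail the anti-SQL injection module, but the core defence it implies, keeping user input out of the SQL text, can be illustrated with parameterized queries (a generic sketch using Python's built-in sqlite3, not the product's actual mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: attacker-controlled text is spliced into the SQL statement itself.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # injection succeeds: [('admin',)]

# Safe: the driver passes user_input as a bound value, never as SQL text.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # injection fails: []
```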

Fig. 2. Typical network topology of deployment



5 Conclusion and Future Work


Due to vulnerabilities at the operating system and application levels, existing security measures are focused on the network layer and cannot form effective monitoring and protection against illegal tampering of Web pages. To solve these security problems of public information Websites, advanced file filtering technology and event-driven technology have been adopted to build an automatic defensive security system that protects public information services at multiple security levels. The system has been developed and has passed the information security product certification of the Ministry of Public Security.

Acknowledgments. This work is supported by Lanzhou Science and Technology Development Project, "Security Public Information Service System of Web" (Grant number: 2009-1-111) and Gansu Science and Technology Support Program, "A Security WEB Information Query System of Automatic Defense" (Grant number: 0804GKCA040). *Corresponding Author: Jiuyuan Huo (Email: huojy@mail.lzjtu.cn).

References
1. National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC), http://www.cert.org.cn/
2. Regulations of Technical Measures for Internet Security Protection, http://www.mps.gov.cn/n16/n1282/n3493/n3823/n442104/452223.html
3. Waldman, M., Rubin, A.D., Cranor, L.F.: The architecture of robust publishing systems. ACM Trans. Internet Technol. 1, 199–230 (2001)
4. Waldman, M., Rubin, A.D., Cranor, L.F.: Publius: a robust, tamper-evident, censorship-resistant web publishing system. In: Proceedings of the 9th Conference on USENIX Security Symposium, vol. 9, pp. 5–12. USENIX Association, Colorado (2000)
5. Lee, J.-W., Kim, H., Yoon, H.: Tamper Resistant Software by Integrity-Based Encryption. In: Liew, K.-M., Shen, H., See, S., Cai, W. (eds.) PDCAT 2004. LNCS, vol. 3320, pp. 608–612. Springer, Heidelberg (2004)
6. Jin, H., Lotspiech, J.: Proactive Software Tampering Detection. In: Boyd, C., Mao, W. (eds.) ISC 2003. LNCS, vol. 2851, pp. 352–365. Springer, Heidelberg (2003)
7. Jin, H., Myles, G., Lotspiech, J.: Towards Better Software Tamper Resistance. In: Zhou, J., López, J., Deng, R.H., Bao, F. (eds.) ISC 2005. LNCS, vol. 3650, pp. 417–430. Springer, Heidelberg (2005)
8. Blietz, B., Tyagi, A.: Software Tamper Resistance Through Dynamic Program Monitoring. In: Safavi-Naini, R., Yung, M. (eds.) DRMTICS 2005. LNCS, vol. 3919, pp. 146–163. Springer, Heidelberg (2006)
9. Horne, B., Matheson, L., Sheehan, C., Tarjan, R.E.: Dynamic Self-Checking Techniques for Improved Tamper Resistance. In: Sander, T. (ed.) DRM 2001. LNCS, vol. 2320, pp. 141–159. Springer, Heidelberg (2002)
10. Ghosh, S., Hiser, J.D., Davidson, J.W.: A Secure and Robust Approach to Software Tamper Resistance. In: Böhme, R., Fong, P.W.L., Safavi-Naini, R. (eds.) IH 2010. LNCS, vol. 6387, pp. 33–47. Springer, Heidelberg (2010)
Design and Implementation of Digital Campus Project
in University

Hong Qu and Jiuyuan Huo

Information Center, Lanzhou Jiaotong University,


Lanzhou 730070, China
huojy@mail.lzjtu.cn

Abstract. The Digital Campus project is an important part of university development and should meet the needs of information technology development in higher education. Currently, many issues affect Digital Campus development in universities: information technology development lacks unified planning and deployment, and information standards are not uniform. These issues mean that the Digital Campus cannot adapt to the demands of current information management, and they restrict the development of universities. Therefore, we proposed a design and plan for a Digital Campus and carried out the project. The project will create a reliable platform for the construction of information resources, information flows and information sharing in universities, and provide advanced tools and means for the teaching, research and management of higher education.

Keywords: Digital Campus, Data Center, Portal.

1 Introduction
In recent years, Digital Campus-based information technology in higher education has developed rapidly, and colleges and universities have made great progress in setting up application systems. A Digital Campus uses computer and network communication technology on a university's teaching, research, management, service and all other information resources to conduct a comprehensive, scientific and standardized integration of digitized information resources, to form unified user management, unified resource management and unified access control, to promote the university's innovation and management innovation, and eventually to realize educational informatization and scientific, standardized decision-making [1-3].
In the increasingly competitive environment of higher education, building up a Digital Campus, realizing informatization in education and strengthening information management is an urgent task for colleges and universities. Currently, many universities raise funds in various ways to start the construction of Digital Campus projects, and construction at some key universities has begun to take shape [4-5]. The campus network at our university has been planned and constructed since 1996. After over 10 years of construction and development, our university now operates WWW, BBS, mail, office automation, education management, graduate management, e-card and other information application systems. But in the process of Digital Campus construction, some problems and challenges must be resolved, such as a lack of effective


information sharing, a lack of effective integration of information systems, a lack of a unified user interface, and so on.

To address these problems and promote the Digital Campus project in our university so that it better serves teaching, research and daily life for staff and students, and taking into consideration our university's actual state of information development, we made a design and plan for the Digital Campus project. Currently, this project has made some progress and has promoted the development of the university's information technology.
This study focuses on the planning and construction of the Digital Campus project in our university. To this end, in Section 2 we introduce related work on the development of the Digital Campus and several outstanding problems in current construction. In Section 3, we present the system architecture of the Digital Campus project in our university in detail. Finally, in Section 4 we conclude the paper and describe future work.

2 Related Work
In 1990, Kenneth Green, a professor at Claremont University in America, initiated and sponsored a large-scale research project, "The Campus Computing Project", which embodies the earliest concept of a Digital Campus. The Campus Computing Project is the largest continuing study of the role of information technology in American higher education. The national studies of this project have collected a great deal of qualitative and quantitative data to help inform faculty, campus administrators, and others interested in the use of information technology in American colleges and universities [6].
On January 31, 1998, former U.S. Vice President Al Gore made a speech entitled "The Digital Earth: Understanding our planet in the 21st Century" at the Science Center of California. This was the first time the concept of the Digital Earth was put forward; the concept of a digital world became universally accepted, which led to "Digital City", "Digital Campus" and other concepts [7].
In China, Digital Campus construction at our university began in the middle of the 1990s. At present, we have established tens of information systems, such as the education management system, student management system, office automation system, financial management system, human resource management system, library management system, e-card management system and so on. These various information management systems were built at different times and are operated by different departments. They have played an important role in accumulating information resources, improving the teaching, working, study and living environment for students and faculty, and improving the efficiency of management, among other aspects. However, with the university's expansion, the management workload has greatly increased, and the current information management systems have shown many significant problems which cannot be surmounted.
1) Information technology development in universities lacks unified planning and deployment, and information resources are scattered. Information systems are constructed independently by each department, so they cannot be integrated or interoperate, which leads to information islands.

2) There is no public basic data platform, information standards are not uniform, and data and resource sharing is at a low level. Lacking a set of uniform standards for data and information, the data in the information systems are not unified and not standardized, and the largest problem is that they are not compatible with each other.
3) Automatic data transfer between information systems cannot be realized. Data exchange between systems in the university is usually based on manual or file-transfer methods, which not only lowers work efficiency but also cannot guarantee the accuracy and consistency of data.
In summary, because of the constraints of history and technology, and the drawbacks and internal deficiencies of the management systems, the current information systems cannot adapt to management and service requirements and restrict the development of the university. Therefore, adopting new ideas and new technical methods to solve the current problems in the information management systems is an urgent task.

3 System Architecture of Digital Campus


The Digital Campus project is a long-term and hard task, so we must develop a practicable design to achieve its objectives. The system architecture of the Digital Campus project in our university is shown in Fig. 1.
The system architecture of the Digital Campus project in our university is divided into the following five parts. Each part complements the others to achieve the integrity and unity of the whole system.

Fig. 1. System Architecture of Digital Campus



3.1 Infrastructure and Resources

The infrastructure provides the basic supporting environment for the entire Digital Campus. It includes the campus network, server hosts, storage devices, security products, the operating environment of application systems and other supporting hardware devices. Resources comprise the information and data resources gathered from database systems, application servers, directory servers, etc.

3.2 Basic Supporting Service Platform

The basic supporting services platform works as a support platform for the application layer of the Digital Campus. It adopts a modular, service-oriented design, providing application services and technical interfaces to all types of application systems to realize the reusability and integration of applications. It consists of the unified authentication and data exchange platforms.

3.2.1 Unified Authentication Platform


The mode of independent certification, independent authorization and independent account management in the existing systems of the university cannot meet the current and future requirements of information technology development on campus. Thus, building a unified, highly efficient and stable, centralized authentication and management platform is strongly required for information development in the university. The unified authentication platform solves the challenges of security and management in access control, identity management, unified authorization and security auditing in the university.
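The paper describes the platform only at the architecture level; as one hedged illustration of a centralized-authentication building block, the sketch below issues and verifies HMAC-signed tokens so that individual application systems need no local credential store (the key and token format are invented, using only Python's standard library):

```python
import hashlib
import hmac

SECRET_KEY = b"campus-auth-secret"  # held by the central authentication platform

def issue_token(user_id: str) -> str:
    """The central platform signs the user id after verifying credentials."""
    sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token: str) -> bool:
    """Any application system can check a token without its own user database."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("student042")
print(verify_token(token))        # True: token accepted by every system
print(verify_token(token + "x"))  # False: a tampered token is rejected
```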

3.2.2 Data Exchange Platform


The data exchange platform sets up a data exchange channel, through the construction of a data center, for information systems that belong to different administrative departments of the university. The platform achieves automatic or manual exchange of data that has the same meaning in different applications, and solves the challenges of integrating and managing heterogeneous data. It enables data sharing between application systems through a unified data exchange solution, establishing a unique and authoritative global database for the university that provides tactical and strategic decision-support services for the university's leaders.
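In practice, exchanging "data with the same meaning" largely reduces to mapping each system's field names onto the unified standard. The sketch below illustrates that idea only; all system and field names are invented and this is not the university's actual platform:

```python
# Each source system exposes records in its own schema; the exchange platform
# translates them into a single university-wide standard schema.
FIELD_MAPS = {
    "education_system": {"stu_no": "student_id", "stu_name": "name"},
    "library_system":   {"reader_id": "student_id", "reader": "name"},
}

def to_standard(system: str, record: dict) -> dict:
    """Rename a source system's fields to the unified information standard."""
    mapping = FIELD_MAPS[system]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

# Two systems describing the same student converge on one representation.
print(to_standard("education_system", {"stu_no": "20110342", "stu_name": "Li Wei"}))
print(to_standard("library_system", {"reader_id": "20110342", "reader": "Li Wei"}))
```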

3.3 Information Systems in University

Information systems in the university provide all kinds of services for teachers and students. Besides the human resources management system, education management system and other common business systems, and the office automation system, mail system and other public service systems, we also add the following public service platforms.

3.3.1 Data Collection System


The data collection system generates real-time, accurate, comprehensive university-level data sources that comply with the university's information standards and feeds them into the data center. Under these principles and standards, data collection completes the process of gathering the existing data distributed in the information management systems of

various departments and transforming them into data that complies with the information standards of the university. Data acquisition obtains data that is needed but not covered by the existing management systems. Using a workflow approach, data are acquired and audited through the web and converted into data that complies with the information standards of the university.

3.3.2 Comprehensive Reporting System


The comprehensive reporting system is based on the data center of the university. It forms various types of dynamic, real-time data reports from different perspectives to facilitate the query, analysis and statistics functions over existing data, and provides comprehensive, effective, credible data to support management decision-making.

3.4 Unified Information Portal of Campus

The unified information portal of the campus realizes the interaction between the Digital Campus platform and its users; it is the internal service window for teachers and students [8-10]. The portal platform solves the challenges of unified provision, unified presentation and unified aggregation of the university's information. It aggregates the distributed, heterogeneous resources of applications and information, achieves seamless access to applications and integrated systems through a unified access portal, and provides an integrated collaboration environment supporting information access, information transmission and information collaboration. Based on the characteristics, preferences and roles of each user, it can provide an individual application interface for different users to access the data related to them.

3.5 Supporting System of Digital Campus

Information Security System: protects the overall security of the Digital Campus project in its physical, network, system, information, management and other aspects. It is the supporting system that protects the safe and reliable operation of the campus information system.
Information Standard System: defining the information standards of the university is the foundation for building the Digital Campus, and it is also the premise for ensuring data consistency. Data sharing through exchange is based on it, and it is also responsible for building a stable, reasonable data structure.
Operation and Maintenance Supporting System: includes system monitoring, system management, maintenance services and so on. It is the important support system that protects the safe and reliable operation of the campus information system.

4 Conclusion and Future Work


The Digital Campus project can provide better services for teaching, research and daily life for the university's staff and students. Currently, the project has made some progress: it has created a reliable platform for the construction of the university's information resources, information flows and information sharing, and it has provided advanced research tools and means for the university's talent. But the construction of a Digital Campus project is a long-term and arduous task that requires strong measures to ensure overall planning and gradual implementation to realize the Digital Campus better.

Acknowledgments. This work is supported by Lanzhou Science and Technology Development Project, "Security Public Information Service System of Web" (Grant number: 2009-1-111) and Gansu Science and Technology Support Program, "A Security WEB Information Query System of Automatic Defense" (Grant number: 0804GKCA040). *Corresponding Author: Jiuyuan Huo (Email: huojy@mail.lzjtu.cn).

References
1. The status and thinking of Digital Campus in Peking University, http://metc.njnu.edu.cn/
2. Fernández Niello, J., Cipolla Ficarra, F.V., Greco, M., Fernández-Ziegler, R., Bernaten, S., Villarreal, M.: A Set of Rules and Strategies for UNSAM Virtual Campus. In: Jacko, J.A. (ed.) HCI International 2009. LNCS, vol. 5613, pp. 101–110. Springer, Heidelberg (2009)
3. Bers, M., Chau, C.: The virtual campus of the future: stimulating and simulating civic actions in a virtual world. Journal of Computing in Higher Education 22, 1–23 (2010)
4. Liu, N., Li, G.: Research on Digital Campus Based on Cloud Computing. In: Lin, S., Huang, X. (eds.) CESM 2011, Part II. CCIS, vol. 176, pp. 213–218. Springer, Heidelberg (2011)
5. Hunt, C., Smith, L., Chen, M.: Incorporating collaborative technologies into university curricula: lessons learned. Journal of Computing in Higher Education 22, 24–37 (2010)
6. The Campus Computing Project, http://www.campuscomputing.net/
7. The Digital Earth: Understanding our planet in the 21st Century, http://portal.opengeospatial.org/files/?artifact_id=6210
8. Eisler, D.: Campus portals: Supportive mechanisms for university communication, collaboration, and organizational change. Journal of Computing in Higher Education 13, 3–24 (2001)
9. Pan, W., Chen, Y., Zheng, Q., Xia, P., Xu, R.: Academic Digital Library Portal – A Personalized, Customized, Integrated Electronic Service in Shanghai Jiaotong University Library. In: Chen, Z., Chen, H., Miao, Q., Fu, Y., Fox, E., Lim, E.-p. (eds.) ICADL 2004. LNCS, vol. 3334, pp. 563–567. Springer, Heidelberg (2004)
10. Yu, S., Zhang, J., Fu, C.: Sharing University Resources Based on Grid Portlet. In: Zhang, W., Chen, Z., Douglas, C.C., Tong, W. (eds.) HPCA 2009. LNCS, vol. 5938, pp. 515–521. Springer, Heidelberg (2010)
Detecting Terrorism Incidence Type from News
Summary

Sarwat Nizamani1,2 and Nasrullah Memon1


1 Maersk McKinney Moller Institute, University of Southern Denmark
2 University of Sindh, Pakistan
{saniz,memon}@mmmi.sdu.dk

Abstract. This paper presents experiments to detect the terrorism incidence type from news summary data. We have applied classification techniques to news summary data to analyze incidences and detect their type. A number of experiments are conducted using various classification algorithms, and the results show that a simple decision tree classifier can learn the incidence type from news data with satisfactory results.

Keywords: GTD, Classification, Decision tree, Naïve Bayes, SVM.

1 Introduction
Since the unfortunate events of 9/11, research in the counterterrorism domain has increased on a large scale. This paper is an attempt in that direction. In the paper, we present text mining experiments to detect the terrorism incidence type from the news summaries in the Global Terrorism Database (GTD). The purpose of the research is to emphasize that we can extract useful information matching our query from free text using classification techniques. It is time consuming to go through lengthy text to extract a specific kind of information; classification techniques can be applied in different ways, according to one's requirements, to extract specific information from text. We have applied classification techniques to accomplish the desired task, and we experimentally show that we can extract this information from the free-text summaries in the database. Using training data from the GTD, we train the classifiers to learn the patterns of incidences and to classify a new incidence from the news data as a specific type of terrorism incidence. We applied text mining to the news summaries and trained the classifiers by providing training data. We performed experiments using three different classifiers, i.e. decision tree (J48, the WEKA implementation of C4.5), Naïve Bayes and Support Vector Machine (SVM), and we present an experimental analysis of the classifiers. The evaluation method used for the experimental analysis is tenfold cross validation. In the experiments we show an empirical analysis of all three classifiers on the GTD. For applying the text mining techniques we used the Waikato Environment for Knowledge Analysis (WEKA) [14].
We show experimentally that a simple decision tree classifier can identify the incidence type with adequate accuracy. The SVM classifier also achieved reasonable accuracy


but at the expense of a long running time, whereas the Naïve Bayes classifier runs faster but with lower accuracy. According to our findings, we can reliably apply classification techniques to tasks like detecting the terrorism incidence type from news summary data using a decision tree classifier. Below we present a brief description of the GTD.

1.1 Overview of Global Terrorism Database (GTD)

The Global Terrorism Database is an open source database that contains information on terrorism incidences that took place between 1970 and 2008 all over the world. The characteristics of the dataset are defined on the GTD website [1].
The following is a brief description of the dataset:

Total number of incidences:    over 87,000
Incidence types include:       38,000 bombings; 13,000 assassinations; 4,000 kidnappings
Minimum number of variables:   45
Maximum number of variables:   >120
Supervised by:                 12 terrorism research experts
Sources of information:        3,500,000 news articles from 2,500 news sources

In the next section we present related work. Section 3 describes the classification techniques, Section 4 discusses the preprocessing of the data, and Section 5 elaborates terrorism incidence type detection. We illustrate the experimental results in Section 6, and the conclusion and future work are presented in Section 7.

2 Related Work on Global Terrorism Database


The Global Terrorism Database is a large collection of data on terrorism incidences all over the world. It is a good source for counterterrorism and criminology research. A number of researchers have analyzed the dataset and presented useful findings in the literature; in this paper we discuss some of them. Dugan et al. [2] used the GTD to analyze hijacking incidences before 1986. The authors used continuous-time survival analysis to estimate the impact of counter-hijacking interventions on the hazard of differently motivated hijacking attempts, and logistic regression analysis to model the predictors of successful hijackings. They found that the policy interventions examined significantly decreased the likelihood of non-terrorist, but not of terrorist, hijackings.
Greenbaum et al. [3] used the GTD to analyze the impact of terrorism on Italian employment and business during 1985 to 1987, concluding that terrorist attacks reduced employment in the year following the attack. The authors of [4] used terrorist attack data from 1970 to 2004; in that article they set out the characteristics of global terrorism and include an analysis showing the link between terrorism and political affairs in a country.

The article [5] discusses the impact of governmental counter-terrorism policies on violence in the country, showing that they have positive as well as negative impacts. The authors of [6] studied the GTD for domestic terrorism in the United States. They used group-based trajectory analysis to examine the different developmental trajectories of U.S.-target and non-U.S.-target terrorist strikes, concluding that four trajectories best capture the attack patterns of both. The authors of [7] used spatial (country name, place name) and temporal (date, month, year) information from the GTD and found a number of useful patterns in the database, which they presented using visualization.
Thus a number of researchers [2,3,5,6,7] have studied the database along different dimensions and found useful results. It has been studied by analysts and criminologists as well as computer scientists, all of whom have presented interesting results from the dataset.
In this paper, we apply a text mining approach to the major variable of the dataset, the summary of the terrorism incidence. We try to extract information about the type of terrorism incidence from the summary, and we experimentally show that classification techniques can learn from news summaries to detect the incidence type. The next section presents the various classification algorithms used in the experiments.

3 Classification Algorithms
Classification [15] is a kind of supervised machine learning algorithm. It takes training examples as input along with their class labels. It can be defined by the following equations:

D = {t1, t2, ..., tn}. (1)

ti = {a1, a2, ..., am}. (2)

C = {c1, c2, ..., ck}. (3)

Here D is a dataset consisting of n training examples, each ti is a training example, each ai is an attribute, m is the total number of attributes, each ci is a class and k is the total number of classes. With respect to our terrorism incidence type detection task, D is a collection of 22235 terrorism incidences, each terrorism incidence ti comprises 5345 attributes ai, and C is the set of terrorism incidence types, with the total number of incidence types k being 9.
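In code, these definitions reduce to a list of labeled examples; the toy sketch below (with invented summaries and labels drawn from Table 2) mirrors equations (1)-(3):

```python
# D: the dataset, a list of n training examples (equation 1). Each example
# pairs a news summary, whose tokens become the attributes a_i of equation 2,
# with one of the k incidence-type classes of equation 3.
D = [
    ("A car bomb exploded near the market", "Bombing_Explosion"),
    ("Gunmen opened fire on a convoy",      "Armed_Assault"),
    ("A regional airliner was hijacked",    "Hijacking"),
]
C = sorted({label for _summary, label in D})  # the set of classes
print(f"n = {len(D)} training examples, k = {len(C)} classes: {C}")
```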

3.1 Decision Tree

A decision tree is a kind of divide and conquer algorithm. A decision tree consists of a finite number of nodes: internal and external nodes. Each internal node corresponds to an attribute, selected by some measure of the algorithm such as information gain or gain ratio, that divides the training examples into parts according to the values of that attribute. For example, if the attribute has three possible values, then there will be three branches going out from that node. The choice of attribute at a particular level of the hierarchy usually depends on the class-distinguishing ability of that attribute. External nodes

in the decision tree contain decisions, i.e. the class values. ID3 (Iterative Dichotomiser 3) is a decision tree algorithm by Quinlan [9]. The algorithm suffers from overfitting, can only work on nominal and discrete values, and does not deal with missing values. To overcome these issues of ID3, Quinlan [10] proposed the C4.5 algorithm. It uses pruning to overcome the overfitting problem, uses discretization at a certain threshold to handle continuous data, and ignores missing-value attributes while making decisions.

3.2 Naïve Bayes (NB)

Naïve Bayes [11] is a simple and efficient technique used by the data mining community for classification tasks. It uses Bayes' theorem to estimate probabilities for each class in order to decide the class of an instance. NB assigns the maximum-probability class label to a test instance.

3.3 Support Vector Machine (SVM)

SVM is considered a state-of-the-art classification algorithm. SVM is a supervised machine learning technique used for classification, based on Vapnik's statistical learning theory [13]. SVM has some unique features due to which it is considered state-of-the-art in classification, and it is considered well suited to the tasks of text classification and handwritten digit recognition. Its unique features for text categorization are [12]: (i) it works well with high-dimensional data; (ii) it can form a decision boundary using only a subset of the training examples, called support vectors; (iii) it can also work well on non-linearly separable data by transforming the original feature space into a new, linearly separable feature space using the kernel trick. Joachims [12] has defined some properties of text classification for which SVM is the ideal choice of solution. SVM's main limitation is that it suffers from long running times on large datasets.
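For illustration, the sketch below instantiates rough scikit-learn analogues of the three classifiers (the authors actually used WEKA's implementations; the toy word-count matrix is invented):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Tiny word-count matrix standing in for the vectorized GTD summaries.
# Columns: counts of "bomb", "gunmen", "hijack"; rows: four invented incidents.
X = np.array([[2, 0, 0], [0, 3, 0], [1, 0, 0], [0, 0, 2]])
y = ["Bombing", "Armed_Assault", "Bombing", "Hijacking"]

models = [
    ("DecisionTree", DecisionTreeClassifier(criterion="entropy")),  # gain-based splits, as in C4.5
    ("NaiveBayes", MultinomialNB()),  # Bayes' theorem over word counts
    ("SVM", LinearSVC()),             # linear decision boundary, suits sparse text
]

for name, clf in models:
    clf.fit(X, y)                          # train on the labeled examples
    print(name, clf.predict([[1, 0, 0]]))  # classify a new "bomb"-only summary
```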

4 Preprocessing Data
We used terrorism incidences from the GTD that took place between 2001 and 2008. Each incidence is treated as a record of an ARFF file; ARFF is the Attribute Relation File Format used by WEKA [14]. From the GTD we took only two fields of each incidence: the summary (a text field) that describes the incidence, and the type of incidence, which takes one of the incidence-type values. The summary field needs to be preprocessed because it contains free text. We applied further preprocessing using a WEKA utility (StringToWordVector), which performs preprocessing steps such as tokenization and stop-word removal.
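There is no single Python equivalent of WEKA's StringToWordVector, but the two steps named here, tokenization and stop-word removal, can be sketched with scikit-learn's CountVectorizer (an analogue for illustration, not the tool used in the paper):

```python
from sklearn.feature_extraction.text import CountVectorizer

summaries = ["The bomb exploded near the crowded market",
             "Gunmen attacked the convoy on the highway"]

# Tokenize and drop common English stop words, the two steps named above.
vectorizer = CountVectorizer(stop_words="english", lowercase=True)
X = vectorizer.fit_transform(summaries)

print(vectorizer.get_feature_names_out())  # surviving word features (scikit-learn >= 1.0)
print(X.toarray())                         # one bag-of-words row per summary
```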

5 Terrorism Incidence Type Detection


We performed the task of terrorism incidence type detection using classification algorithms, treating the task as a text classification problem. We provided the training

incidences as training data, comprising the summary of each incidence as well as its type, to a preprocessing module that puts the data into a suitable form for the various classifiers. Each classifier is trained on the training data and then applies its learning to predict the type of an incidence from its summary alone. The process is depicted in Figure 1.

Fig. 1. Work flow of terrorism incidence type detection

6 Experimental Results
In the experiments we take the terrorism incidence records from 2001 to 2008 in the GTD. The total number of incidences used is 22235. After preprocessing we have 5345 distinct features in total. We present a brief description of the dataset used for the experiments in Table 1. In Table 2, we show all the incidence types and the number of instances of each type. We performed experiments using three well-known classification algorithms: decision tree J48 (the WEKA implementation of C4.5), Naïve Bayes (NB) and Support Vector Machine (SVM). These algorithms are widely used by the research community [8]. In the subsequent sub-sections we describe the evaluation method and evaluation measures used in the experiments performed with these classifiers.

Table 1. General information about dataset used in experiments

Total number of incidences:                  22235
Total number of features:                    5345
Total number of classes / incidence types:   9
Incidence period:                            2001-2008

Table 2. Incidence type distribution in training data

Type of incidence                     Number of incidences
Armed_Assault                         6797
Assassination                         1167
Bombing_Explosion                     10731
Facility_Infrastructure_Attack        1820
Hijacking                             59
Hostage_Taking_Barricade_Incident     134
Hostage_Taking_Kidnapping             1111
UnArmed_Assault                       275
UnKnown                               141
Total                                 22235

6.1 Evaluation Method

The evaluation method we used is 10-fold cross validation. This method splits the dataset into 10 subsets and runs for 10 rounds; in each round, 9 subsets are used for training and the remaining one for testing, with a new subset chosen for testing in each round. After 10 rounds, the average accuracy over all rounds is measured.
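A minimal sketch of tenfold cross validation, using synthetic data in place of the vectorized GTD matrix (scikit-learn here is an illustrative stand-in for WEKA's evaluator):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the vectorized GTD data (the real matrix has
# 22235 incidences and 5345 word features).
X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)

# Tenfold cross validation: 10 rounds, each holding out a different tenth.
folds = StratifiedKFold(n_splits=10)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=folds)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```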

6.2 Evaluation Measures

The evaluation measures we used are accuracy, precision and recall. These measures are calculated as follows:

Accuracy = (Tp + Tn) / (Tp + Tn + Fp + Fn). (4)

Precision = Tp / (Tp + Fp). (5)

Recall = Tp / (Tp + Fn). (6)

Here Tp is the number of incidences correctly classified as a particular class, Fp is the number of incidences incorrectly classified as that class, Tn is the number of incidences correctly classified as another class, and Fn is the number of incidences incorrectly classified as another class.
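Equations (4)-(6) translate directly into code; the counts in the sketch below are hypothetical:

```python
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)  # equation (4)

def precision(tp, fp):
    return tp / (tp + fp)                   # equation (5)

def recall(tp, fn):
    return tp / (tp + fn)                   # equation (6)

# Hypothetical counts for one incidence type:
tp, tn, fp, fn = 90, 880, 10, 20
print(accuracy(tp, tn, fp, fn))  # 0.97
print(precision(tp, fp))         # 0.9
print(recall(tp, fn))            # ~0.818
```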

The experimental results (see Figure 2) clearly illustrate that the terrorism incidence type can be successfully detected from the news summary data, and that the classification algorithms can extract this information successfully. The figure clearly shows that the decision tree correctly detects 83% of incidences with a balance of precision and recall.

[Figure 2: bar chart comparing Naïve Bayes, Decision Tree and SVM by accuracy, precision and recall, with values ranging from 0% to 90%.]

Fig. 2. Experimental results

7 Conclusion and Future Work


In this paper we presented experimental results for detecting the terrorism incidence type from the news summary of an incidence. We applied text mining techniques for the desired task, using various classification algorithms. The experimental results show that the task can be accomplished with reasonable results. The experiments show that we achieve high accuracy, with a balance of precision and recall, using the decision tree algorithm. SVM also achieved high performance, but it suffers from a long running time on a large dataset; the accuracy of Naïve Bayes is lower, but it runs faster. In this paper we used words as features without any semantic knowledge. In the future we plan to incorporate semantic knowledge, which may positively affect the accuracy of the task. We also plan to use spatial and temporal features from the GTD to find associations, for example to extract the likelihood of an incidence type with respect to time and geographic location.

References
1. http://www.start.umd.edu/gtd/about/
2. Dugan, L., LaFree, G., Piquero, A.R.: Testing a Rational Choice Model of Airline Hijackings. Criminology 43, 1031–1065 (2005)
3. Greenbaum, R., Dugan, L., LaFree, G.: The Impact of Terrorism on Italian Employment and Business Activity. Urban Studies 44, 1093–1108 (2007)
4. LaFree, G., Dugan, L.: Tracking Global Terrorism, 1970-2004. In: Weisburd, D., Feucht, T., Hakimi, I., Mock, L., Perry, S. (eds.) To Protect and to Serve: Police and Policing in an Age of Terrorism. Springer, New York (2009)
5. LaFree, G., Dugan, L., Korte, R.: The Impact of British Counter Terrorist Strategies on Political Violence in Northern Ireland: Comparing Deterrence and Backlash Models. Criminology 47, 501–530 (2009)
6. LaFree, G., Yang, S.-M., Crenshaw, M.: Trajectories of Terrorism: Attack Patterns of Foreign Groups that have Targeted the United States, 1970 to 2004. Criminology and Public Policy 8, 445–473 (2009)
7. Guo, D., Liao, K., Morgan, M.: Visualizing patterns in a global terrorism incident database. Environment and Planning B: Planning and Design 34, 767–784 (2007)
8. Wu, X., Kumar, V., Quinlan, J.R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G.J., Ng, A., Liu, B., Yu, P.S., Zhou, Z.H., Steinbach, M., Hand, D.J., Steinberg, D.: Top 10 algorithms in data mining (survey paper). Springer, Heidelberg (2007)
9. Quinlan, J.R.: Induction of decision trees. Journal of Machine Learning 1, 81–106 (1986)
10. Quinlan, J.R.: C4.5: Programs for machine learning. Machine Learning 16, 235–240. Springer, Heidelberg (1993)
11. McCallum, A., Nigam, K.: A Comparison of Event Models for Naive Bayes Text Classification. Technical Report, Workshop on Learning for Text Categorization, pp. 41–48 (1998)
12. Joachims, T.: A statistical learning model of text classification for Support Vector Machines. In: International ACM SIGIR Conference on Research and Development in Information Retrieval (2001)
13. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (1995)
14. Hall, M., Frank, E., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA Data Mining Software: An Update. SIGKDD Explorations 11(1) (2009)
15. Sebastiani, F.: Machine learning in automated text categorization. ACM Computing Surveys 34(1), 1–47 (2002)
Integration of Design and Simulation Softwares
for Computer Science and Education Applied
to the Modeling of Ferrites for Power Electronic Circuits

Rosa Ana Salas and Jorge Pleite

Departamento de Tecnología Electrónica, Universidad Carlos III de Madrid
Avda. de la Universidad, 30, 28911, Leganés (Madrid), Spain
rsalas@ing.uc3m.es

Abstract. In this paper we present a procedure that uses different standard software packages and modeling techniques together. Through this technique a student can see how Finite Element Analysis, Computer Aided Design, Numerical Calculation and Circuit Simulation tools can be used together to solve an electronic component modeling and simulation problem. As an example of integrated computer science and education, an application to the modeling of inductors with ferrite cores for power electronic circuits is presented.

Keywords: Finite Element Analysis, Circuit Simulation, Ferrite Cores, Science Education, Educational Electronics.

1 Introduction

In the field of science we can find a great variety of commercial programs for calculation and modeling that can be used for educational and industrial applications. Modeling and computer simulations play an important role in the analysis, design, and education of university students in power electronic systems [1]. In this article we present a procedure that uses different programming and modeling techniques coupled together: a Computer Aided Design program (AutoCAD [2]), a Finite Element Analysis program (Maxwell [3]), two scientific calculus programs for the numerical solving of derivatives and integrals (Origin [4] and Matlab [5]), a numerical simulation program (Simulink [5]) combined with Matlab, and finally an electronic circuit simulation program (PSIM [6]). In this article we study the joint application of these tools through the example of the design and modeling of ferrites [7]-[10]. Ferrites are widely used in the electronics industry as the magnetic core of inductors and transformers in photovoltaic solar energy. An inductor consists of a winding, a ferrite core and sometimes a coil former. Ferrite materials show nonlinear magnetic properties such as hysteresis and saturation. They come in different sizes and geometrical shapes, some simpler (e.g. E type, Figure 1) and others more complex (e.g. RM type, Figure 3).


Fig. 1. Inductor with E type ferrite core Fig. 2. Cross-section of the inductor in Figure 1.

Fig. 3. Inductor with RM type ferrite core Fig. 4. 2D equivalent inductor of Figure 3.

2 Calculation Process Diagram


The objective of the example is to obtain the waveforms of voltage, current and power of an inductor with a ferrite core, corresponding to its series electric circuit as shown in Figure 5. It consists of two nonlinear parameters: the inductance L as a function of the excitation current I (L-I curve) and the resistance R as a function of the rms current and frequency f (R-Irms curve). The calculation process diagram is shown in Figure 6. The process starts with the construction of the inductor, which consists of a ferrite core and a coil former on which a copper wire is wound, and with the measurement of the magnetic parameters under direct current that characterize the ferrite: the permeability as a function of the magnetic field H (µ-H curve) and the magnetic field B as a function of H (B-H curve).

Fig. 5. Equivalent Electric Circuit of an inductor with a ferrite core



[Figure 6 shows the calculation flow: construction of the inductor and measurement of its magnetic properties; design of the 2D or 3D domains using AutoCAD; Finite Element Analysis with Maxwell (boundary conditions, adaptive meshing, excitation level), yielding the B-I and H-I tables and the R-Irms curve; numerical solving with Origin of Φ = ∫S B dS to obtain the Φ-I curve and L(I) = dΦ/dI to obtain the L-I curve; numerical solving of v(t) = L(i, f) di/dt + i R(i, f) with Simulink and Matlab, exchanging vn and in with the PSIM electronic circuit simulation program, whose outputs are v(t), i(t) and p(t).]

Fig. 6. Calculation process diagram

Next, either the 2D domain (the equivalent section of the ferrite plus its winding and coil former) or the 3D domain is designed using AutoCAD. Figures 2 and 4 show the 2D equivalent domains of the real inductors designed in AutoCAD. This design and the magnetic properties are introduced into the Finite Element Analysis program Maxwell. After this, the boundary conditions and excitation current levels are assigned. In order to generate the mesh, both in 2D and 3D, we chose to carry out an adaptive refinement of the mesh, which consists of making the mesh finer at the spatial points where a previously established error level is exceeded (corners, regions with irregular borders, etc.). In each iteration the program computes the magnetic fields, makes an error estimate and refines the mesh. This adaptive meshing reduces the computing time while meeting the convergence tolerance. The algorithm is implemented in the Maxwell program. In the adaptive procedure, the parameters corresponding to the stopping criteria and the percent refinement per pass are introduced. The former specify the maximum number of passes and the maximum percent error, also called the error tolerance. The percent refinement per pass specifies what percentage of finite elements (triangles in 2D or tetrahedra in 3D) should be refined during each iteration of the initial mesh. Figures 2 and 4 show the mesh

generated in the 2D simulations. As the simulation program's outputs, for each excitation current value we obtain the spatial distribution of the magnetic fields B and H (B-I and H-I tables) and the R-Irms-f curve for all working regions of the ferrite, from the linear region to saturation. Using the program Origin, the Φ-I curve is obtained through integration of the B field, and from it the L-I curve is derived by differentiating with respect to I.

Φ = ∫S B · dS    (1)

In Figure 7 we show an example of an L-I curve, and in Figure 8 an example of an R-Irms-f curve.

Fig. 7. Experimental (stars) and simulated by Finite Element Analysis (squares) L-I curves

Fig. 8. Experimental (stars) and simulated by Finite Element Analysis (squares) R-Irms curves
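The derivation of the L-I curve from the Φ-I curve is a plain numerical differentiation. The numpy sketch below assumes the FEA step has already produced matched arrays of current and flux; the values shown are invented for illustration:

```python
import numpy as np

# Flux linkage obtained by integrating B over the core cross-section (eq. 1)
# for a sweep of excitation currents; values are illustrative only.
I = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])                     # excitation current (A)
Phi = np.array([0.0, 1.9e-3, 3.6e-3, 4.6e-3, 5.0e-3, 5.2e-3])    # flux linkage (Wb)

# L(I) = dPhi/dI, differentiated numerically (central differences).
L = np.gradient(Phi, I)
for i, l in zip(I, L):
    print(f"I = {i:.1f} A -> L = {l * 1e3:.2f} mH")  # inductance falls as the core saturates
```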

In the last phase, three programs are used in a linked manner. The L-I and R-Irms curves are introduced into the Simulink program. With the help of the Matlab program, Equation 2, which represents the voltage of the inductor, is solved numerically. We draw the electrical circuit to be simulated in the circuit simulator PSIM and assign the voltage and/or current excitation level. At each instant in time, the Simulink program sends the excitation current value i that flows through the inductor to PSIM, and PSIM sends the voltage v across the inductor back to Simulink.

v(t) = L(i, f) di/dt + i R(i, f).    (2)
In Figure 9 we show an example of an electrical circuit including the nonlinear
inductor model excited by a sinusoidal voltage.

Fig. 9. An example of electrical circuit to be drawn in the PSIM program
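A rough stand-in for this Simulink-PSIM exchange is a simple explicit-Euler integration of Equation 2; the L-I and R-I curves and the step size below are invented placeholders for the measured data, not the authors' actual co-simulation:

```python
import numpy as np

# Illustrative nonlinear curves: inductance and resistance versus current.
I_pts = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
L_pts = np.array([3.8e-3, 3.4e-3, 2.0e-3, 0.8e-3, 0.4e-3, 0.3e-3])  # H
R_pts = np.array([0.5, 0.55, 0.7, 0.9, 1.1, 1.3])                   # ohm

dt, f, V0 = 1e-6, 1e3, 10.0          # time step, source frequency, amplitude
t = np.arange(0.0, 2e-3, dt)
i = np.zeros_like(t)

# Forward Euler on eq. (2): di/dt = (v - i*R(i)) / L(i).
for n in range(len(t) - 1):
    v = V0 * np.sin(2 * np.pi * f * t[n])   # sinusoidal excitation from the source
    L = np.interp(abs(i[n]), I_pts, L_pts)  # nonlinear L-I lookup
    R = np.interp(abs(i[n]), I_pts, R_pts)  # nonlinear R-I lookup
    i[n + 1] = i[n] + dt * (v - i[n] * R) / L

print(f"peak current over the run: {i.max():.2f} A")
```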

Finally, as output of the PSIM program we obtain the voltage and current waveforms (v(t), i(t)), and from these the power waveform p(t) is derived using Origin. In Figure 10 we show an example of these waveforms; panels (a), (b) and (c) correspond to the linear region and (d), (e) and (f) to the saturation region.

Fig. 10. Example of experimental (dotted line) and simulated (solid line) voltage, current and
power waveforms for the linear and saturation regions.

3 Conclusions
We have presented a procedure in which different standard software packages are used together so that university students can see how different commercial programs and modeling techniques can communicate to solve a specific task. Through our example it can be seen how the procedure can be applied to the task of modeling inductors with ferrite cores for use in circuit simulators.

References
1. Mohan, N., Undeland, T.M., Robbins, W.P.: Power Electronics: Converters, Applications and Design. John Wiley & Sons, Inc., New York (1995)
2. AutoCAD, http://usa.autodesk.com
3. Maxwell, http://www.ansoft.com
4. Origin, http://www.OriginLab.com
5. Matlab-Simulink, http://www.mathworks.com
6. PSIM, http://www.powersimtech.com
7. Salas, R.A., Pleite, J., Olías, E., Barrado, A.: Nonlinear saturation modeling of magnetic components with an RM-type core. IEEE Trans. Magn. 44, 1891–1893 (2008)
8. Salas, R.A., Pleite, J., Olías, E., Barrado, A.: Theoretical-experimental comparison of a modeling procedure for magnetic components using Finite Element Analysis and a circuit simulator. J. Magn. Magn. Mater. 320, e1024–e1028 (2008)
9. Salas, R.A., Pleite, J.: Modelling nonlinear inductors with a ferrite core. Przegląd Elektrotechniczny (Electrical Review) R 85, 84–88 (2009)
10. Salas, R.A., Pleite, J.: Accurate modeling of voltage and current waveforms with saturation and power losses in a ferrite core via two-dimensional finite elements and a circuit simulator. J. Appl. Phys. 107, 09A517 (2010)
Metalingua: A Language to Mediate Communication
with Semantic Web in Natural Languages

Ioachim Drugus

Institute of Mathematics and Informatics, Academy of Sciences of Moldova,


1 Stefan cel Mare bd., Chisinau, MD-2001, Republic of Moldova
ioachim.drugus@semanticsoft.net

Abstract. The main obstacle in the way of the Semantic Web becoming a democratic tool is the complexity of its standards. Therefore, a simple language, Notation3, is used as an alternative in many communities. But Notation3 does not comply with the compositionality principle, a characteristic of natural languages. Metalingua, described in this paper, is a counterpart of Notation3 which complies with this principle, can mediate the communication between humans speaking natural languages and the Semantic Web, and can be used for the formalization of natural languages, including languages considered difficult, like Chinese. Metalingua is currently used in the projects of EstComputer, Inc. (www.estcomputer.com) for education and natural language informatics.

Keywords: metalingua, semantic web, knowledge representation, compositionality, Notation3.

1 Introduction
The objective of the Semantic Web (SW) project is to build knowledge bases and a mechanism to deliver content from them to the wide public. But the SW standards for knowledge representation are addressed to IT professionals, and in order for this next generation of the web to become as democratic as the current web, it must be equipped with a natural language interface that allows each person to communicate with the web in a natural language. This is hard to achieve, since there are hundreds of natural languages and there is no adequate apparatus to formalize them; notice that formal languages were shown in [1] to be a poor apparatus for this purpose. In [2-6], I proposed a simple language, metalingua (ML), as a tool for the formalization of natural languages and for building such a natural language interface.
In 2004, Tim Berners-Lee, the founder of the Web, proposed a simple language, Notation3 (N3), for knowledge representation. With N3 a body of knowledge, or ontology, is represented as a set of sentences, i.e. triples <s, p, o>, where s, p and o are strings written in a certain format and are said to be the subject, predicate and object, respectively. Since N3 uses a 3-ary relationship and no operations to represent knowledge, it can be said to be a relational language. N3 is not appropriate for the formalization of natural languages, whose main feature is the compositionality principle, stating that the meaning of a compound expression is a function of the meanings of its components. Such a principle, obviously, can apply only to operational languages, the


expressions of which bear sense and are built out of atomic expressions by operations; it does not apply to relational languages.
The A3 approach to the Semantic Web and Brain Informatics [2, 6] is an operational approach, since it is preoccupied with the operations whereby mental entities are built. I contended that the brain uses exactly 3 operations to build a mental entity: association, to form a set-theoretic ordered pair (an operation ascribed to the left hemisphere of the brain); aggregation, to form a finite set (ascribed to the right hemisphere); and atomification, to encapsulate a structure built by the association and aggregation operations into an entity (ascribed to the bridge between the two hemispheres). The notation A3 used in the A3 approach is an operational language, since it uses operations for building expressions, and it complies with the compositionality principle. In [3], I added the equality sign = to denote synonymy, named the new language metalingua, and explained how to use it for the integration of knowledge from various domains.
The name metalingua is justified by three reasons: it is intended for the formalization of metalogic [7]; it can serve as one metalanguage for other languages, including natural languages; and it has the operator meta to formalize meta-discourse. This paper does not presuppose any knowledge of my previous publications.

2 Specification of Metalingua
Notation A3 is a sublanguage of ML, and it is more appropriate to specify A3 before specifying ML. The notation A3 proceeds from expressions said to be atomic expressions or atoms, the set of which is called the vocabulary of A3, and builds compound expressions out of them according to several rules. We allow for many variants of A3 (and ML), each variant with its own vocabulary, but in IT it is appropriate to use only one vocabulary: the set of all strings of Unicode characters, to which I refer as Unicode texts. Since such atoms can also contain characters of ML and thus create collisions with ML syntax, we demand that a Unicode text used as an atom be enclosed between the angular brackets < and >, which either do not occur within the Unicode text or are preceded by the % sign (as recommended in the URI standard). But we will allow alphanumeric strings without white spaces to be used without angular brackets. Since all the characters and symbols used in any natural language and in the sciences are represented in Unicode, A3 (and ML) based on such a vocabulary will also have practical use, because phrases in a natural language, and even whole Unicode texts, can serve as atoms within such a vocabulary. We also refer to the atoms in the vocabulary as names: common names and proper names.
The expressions of A3 over a vocabulary V are typed and are defined by the following recursion rules:

(0) If a is a name in V, then a is an expression of type atom, or atomic expression, of A3;
(1) If a and b are expressions, then (a : b) is an expression of type association, or association expression, of A3;
(2) If a1, ..., an are expressions, then {a1, ..., an} is an expression of type aggregation, or aggregation expression, of A3;
(3) If a is an expression, then [a] is an expression of type atom, or atomification expression, of A3.
ML over vocabulary V is the language of expressions obtained by the rules above
and also by the rule below:
(4) If a and b are expressions of A3, then (a = b) is an expression of ML.
This rule, obviously, cannot be applied recursively, and this is the reason why we had to define A3 first, and only then define ML. Another reason for separating A3 as a sublanguage of ML is the close correlation of A3 and N3 discussed later. Going forward, though, I will talk about ML and only infrequently make reference to A3. We will consider the three types of expressions defined by (1), (2), (3) as denoting the results of applying three operations with the same names as the expressions.
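To make the recursion rules above concrete, here is a minimal Python sketch that builds A3 and ML expressions as strings; the function names and the simplified atom-quoting convention are illustrative assumptions, not part of the paper's specification.

def atom(name: str) -> str:
    # Rule (0): a name in V is an atomic expression; per the convention
    # above, only alphanumeric names may omit the angular brackets.
    return name if name.isalnum() else f"<{name}>"

def association(a: str, b: str) -> str:
    # Rule (1): (a : b), an association expression.
    return f"({a} : {b})"

def aggregation(*parts: str) -> str:
    # Rule (2): {a1,..., an}, an aggregation expression.
    return "{" + ", ".join(parts) + "}"

def atomification(a: str) -> str:
    # Rule (3): [a], encapsulating a built structure as an atom.
    return f"[{a}]"

def equality(a: str, b: str) -> str:
    # Rule (4): (a = b), the ML-only synonymy expression; by the
    # remark above it is not applied recursively.
    return f"({a} = {b})"

# A compound expression mixing all three A3 operations:
print(equality(association("a", "b"),
               atomification(aggregation(atom("x"), atom("y")))))
# prints: ((a : b) = [{x, y}])

Since each builder returns an ordinary string, the compositionality principle is reflected directly: the denotation of a compound expression is computed from the denotations of its parts.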

3 Metalingua as a Language of Universal Structures
In some axiomatic set theories the concept of ordered pair is defined via the concept of set; in others, they are regarded as separate concepts. We can also regard the concept of ordered pair as fundamental and express the notion of set through it. We will consider these two concepts, as well as the concept of atom, conceptually orthogonal, even though we admit that one concept can be expressed via another for some purposes. In set theory, a concept is considered to be obtained by abstraction, an operation which is more complex than an algebraic operation, since in abstraction we proceed from the whole content of the mind, rather than from separate entities as with an algebraic operation. Thus, we regard three abstraction operations as fundamental in set theory, association to form an ordered pair, aggregation to form a set (we are interested only in finite sets), and atomification to form an atom, but replace them with unary algebraic operations, which put into correspondence to an ordered pair (set, or atom) an entity to be regarded as the identity of the ordered pair (of the set, or of the atom, respectively). Notice that, while the association and aggregation operations can be applied iteratively in set theory, atomification can be applied only once, which complies with the intuition that the sets within the universe of discourse are built out of atoms, but atoms are not built within this universe of discourse. Thus, the notion of atom is relative and depends on the universe. According to this intuition, we also understand what is usually called a structure as a structure constructed (notice that both words have the same root) out of atoms, while the atoms are not constructed in the process of building this structure.
All mathematical structures are defined in terms of set theory and, thus, can be constructed by applying the association and aggregation operations many times. ML borrowed two notations from set theory (that of the ordered pair and that of the set) and added a third notation for an atom. The operation of atomification, denoted by square brackets, is most interesting and productive of new kinds of entities when it is applied more than once, but this leads outside of set theory and towards a new theory of universal structures called universics [6]. Set theory defines what is usually said to be a structure, while universics defines what is said to be a universal structure, or a universe. A universal structure can be imagined as having atoms of various levels, so that it can be stratified into layers which are structures. Thus, ML is the language for
the formulation of universal structures, or the language of universics. I used the term formulation because, in this context, it sounds more correct than denotation or representation (say, denotation could imply that the whole structure is denoted by one name, while the expression representation of a structure in a language also sounds odd).
The notion of a structure as built from atoms, where the atoms themselves are not built, does not describe some phenomena which we also consider a structure. Indeed, a formula like a + x ≤ b is considered atomic in assertoric logic because, if we decompose it further, we obtain non-assertions. But the expression a + x also has structure. Thus, a + x ≤ b is an example of a universal structure, where the expression a + x is obtained by atomification.
Data model is a term used in IT to refer to an apparatus which describes a certain kind of data structure. Set theory can also be said to define a data model, an abstract one, because it describes a type of abstract data called sets and ordered pairs. But the term data model is not used in set theory because, so far, mathematicians were focused on the properties of pure sets, built out of the empty set. In set theory an atom is intuitively treated as a non-set and non-ordered-pair. An atom can be treated as any piece of data and, therefore, the notion of data model becomes useful in a set theory with atoms and, even more useful, in a theory with atoms of different levels such as universics. I refer to the data model of universal structures as the universal data model.
Notice that the notation (a : b) of ML is Peirce's notation for the ordered pair, which historically preceded the currently used notation (x, y). In order to keep both notations, I consider that (a : b) is the same as (b, a), and regard the first as a primitive expression of ML and the second as an expression defined in ML (obviously, I could have proceeded the other way around).

4 Formalization of Languages with Metalingua
The writing systems of various languages use linear or planar representations, and there are also graphic languages whose most readable representation is in 3D space, where intersections can be avoided. Aside from difficulties due to spatiality, some difficulties are due to the compoundedness of graphemes (minimal units of text), and exactly due to this, Chinese is regarded as one of the most difficult languages. Notice that, while it follows from the morphology of the word that a hieroglyph is composed of glyphs, the graphemes of European languages also consist of glyphs, even though of a small number.
Due to the difficulties described above, only with the most universal methods for the representation of structures can one be successful in the formalization of all natural languages, and one would look into set theory for such universal methods. But, as follows from the constituent structure approach of Chomsky [1], even set theory is a poor tool to formalize natural languages. For example, the expression "to kick the bucket" should not be interpreted word by word, but as a whole - "to die". In any formalization using set theory, this expression should be treated as an atom, and there is no means to denote an atom in set theory. But universal structures with atoms and atomifications, and ML to denote them, can deal with such difficulties. I proceed from the thesis that the universal data model is the tool for the formalization of all languages.
While it is usual to use the expression structure of a text, since I distinguish between structures defined in terms of set theory and universal structures defined in terms of universics, I prefer to use the expression organization of a text. The organization of a text is a universal structure, and only in some particular cases can it be a structure. As per the A3 approach, there are exactly three types of organization to which all other types reduce: order between two entities; non-order, which is specific to sets; and atomic constituency, as I refer to the organization of multi-level atomic structures.
In order to formulate a linguistic structure as an expression of ML, consider it a universal structure and proceed recursively to denote its constituents in this manner:

(1) If a constituent should be interpreted as a whole, consider it an atom and use a Unicode text as its denotation;
(2) If there is an order between two constituents a and b already denoted, consider that you have a constituent and denote it as (a : b);
(3) If there is no order between constituents a1,..., an and there is no other constituent a such that there is no order between constituents a1,..., an, a, then consider the set of constituents a1,..., an a constituent obtained by aggregation and denote it by {a1,..., an} (build the maximal aggregation);
(4) If an already denoted constituent should be regarded as a whole, but it is not an atom, consider it an atomification and include its denotation between square brackets.
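As a hypothetical illustration of rules (1)-(4), reusing the string builders from the sketch in section 2, one possible ML formulation of a clause containing the idiom discussed above might look as follows; the choice of constituents is mine, not prescribed by the paper.

# Rule (1): the idiom is interpreted as a whole, so it becomes an atom.
idiom = atom("to kick the bucket")     # -> <to kick the bucket>

# Rule (2): the order between subject and predicate yields an association.
clause = association("Jack", idiom)    # -> (Jack : <to kick the bucket>)

# Rule (4): the clause, regarded as a whole but not an atom, is atomified.
print(atomification(clause))           # -> [(Jack : <to kick the bucket>)]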

5 Denotational and Discursive Semantics of Metalingua
I call the treatment of ML notations described above denotational semantics, because the names used in expressions serve as proper names of the entities which they denote. Unlike proper names, the common names used in a discourse may have many referents (denotata), which make up a class said to be the name's extension. Below, I introduce the discursive semantics for common names informally, by explaining the sense of names in English rather than in terms of set theory:

(1) (a : b) reads b qualified by a (for intuition, imagine that in the ordered pair (a : b), the a is always a feature of b, so that we can read it as b featured by a or b qualified by feature a);
(2) {a1,..., an} reads a1 and ... and an, where the meaning of and is the one used in natural languages in expressions like Jack and John;
(3) [a] reads the a and has the same meaning as a reference to a formal string of characters, which is usually made by including the string between quotation marks.
Notice that we cannot read (a : b) as b is qualified by a, which is a statement, instead of the recommended reading b qualified by a, which is just the name b (even though a name qualified by a); qualification is not a predicate which makes a statement out of names, but an operation over names. The reading b qualified by a is the reading of a complex name, which can be used in compound expressions like ((a : b) : c), but the reading b is qualified by a cannot be used in compound expressions. On the other hand, if both a and b are statements, then (a : b) is also a statement: a if b or, more appropriately, b implies a. Without details which would digress from the topic of this paper, I will only mention that qualification is a generalization of implication and applies to arbitrary names, not only to the names of truth values, i.e. to statements. Thus, in discursive semantics, a term which reflects the semantics better than association is qualification. It is appropriate to give an example of qualification, which we will use later: the functional notation f(x) can be considered a qualification of x by f, denoted as (f : x); say, if Domain is a function which for a function x results in the domain of x, then (Domain : x) can serve as an alternative denotation for this function. The aggregation operation is a generalization of conjunction, since it applies to arbitrary names, and not only to statements. The term aggregation sounds appropriate both for denotational and discursive semantics.
In discourse, the atomification expression [a] is treated as obtained by an operator, called the operator meta, which switches between the discourse and the universe of discourse. To better understand this terminology, consider any expression between quotation marks used in an English text, like "Socrates is mortal". It is clear that the discourse is about this 18-character-long formal expression, i.e. the expression is considered to be within the universe of discourse and is not part of the discourse itself. By including such an expression between quotation marks we throw it out of the discourse into the universe of discourse. Quotation marks are a tool for meta-discourse and, in ML discourse, the square brackets play the same role.

6 Metalingua - An Operational Counterpart of N3

As explained in the introduction, N3 specifies a format for representing knowledge as triples <s, p, o>, where each triple denotes a simple statement, with the names s, p, o playing the roles of subject, predicate and object, and where the statement is read s has the property p with value o. In SW, the notion of property is treated as a binary relationship rather than a unary relationship as in logic, or as in programming, where it is called an attribute. Thus, a property in SW is more abstract and refers to many owners, while a property in logic or in IT is a property of this owner. I will denote a property p of SW relativized to an owner s as (s : (Domain : p)), which complies with the semantics of qualification. An ontology is a named set of N3 triples.
Here is how N3 can be mapped into ML, by putting into correspondence to an ontology named O an ML expression said to be the fold of ontology O and denoted by f(O):

Replace each triple e = <s, p, o> in O by ((s : (Domain : p)) : o), which in ML expresses the meaning attributed to it by N3;
Consider E = {e1,..., en}, where e1,..., en is the list of triples in O;
Consider the fold f(O) of the ontology named O to be the following ML expression: (O : E).
The fold mapping is a one-to-one mapping from N3 to ML, but ML also has expressions which are not images under this mapping; ML is richer, at least because it can also express sentences with only a predicate, or with a subject and a predicate. For each ML expression e which is an image under the fold mapping, there is an unfold, an N3 ontology denoted as u(e).
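A small Python sketch may clarify the three steps of the fold mapping; it reuses the association and aggregation builders from the section 2 sketch, and the triple and ontology representations are illustrative assumptions.

def fold(ontology_name: str, triples: list) -> str:
    # Step 1: each triple <s, p, o> becomes ((s : (Domain : p)) : o),
    # relativizing the SW property p to its owner s.
    folded = [association(association(s, association("Domain", p)), o)
              for (s, p, o) in triples]
    # Step 2: E = {e1,..., en}; step 3: the fold is (O : E).
    return association(ontology_name, aggregation(*folded))

# An invented one-triple ontology, not taken from the paper:
print(fold("O", [("Socrates", "kind", "human")]))
# prints: (O : {((Socrates : (Domain : kind)) : human)})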

7 Applications of Metalingua
ML is currently used in two projects of EstComputer, Inc. (www.estcomputer.com), with the participation of the State University of Moldova and the Academy of Economics of Moldova; these are described below:

1. UniVocabulary - a system for hosting and maintaining the vocabularies of different languages and aligning expressions with the same meaning. An immediate use of this system can be the reconciliation of translations made by various dictionaries. A more remote objective is for this system to play a role in coding words similar to the role of Unicode for characters. Currently, UniVocabulary contains vocabularies of English, Romanian and Russian.
2. KnowledgeSpace - a system for education and scientific collaboration, in which the description of the various items used in the system is planned to be formulated in metalingua in order to provide easy interoperability with SW.

References
1. Chomsky, N.: Syntactic Structures. Mouton, The Hague (1957)
2. Drugus, I.: A Whole Brain Approach to the Web. In: Proceedings of the Web Intelligence / Intelligent Agent Technology Conference, Silicon Valley, pp. 68-71 (2007)
3. Drugus, I.: Metalingua - a Formal Language for Integration of Disciplines via their Universes of Discourse. ETC, 17-23 (2009)
4. Drugus, I.: Universics - a Structural Framework for Knowledge Representation. In: Knowledge Engineering Principles and Techniques, Cluj-Napoca, Romania, pp. 115-118 (2009)
5. Drugus, I.: Universics: an Approach to Knowledge Based on Set Theory. In: Knowledge Engineering Principles and Techniques. Selected Extended Papers, Cluj-Napoca, Romania, pp. 193-200 (2009)
6. Drugus, I.: Universics: a Common Formalization Framework for Brain Informatics and Semantic Web. In: Web Intelligence and Intelligent Agents, pp. 55-78. InTech Publishers, Vukovar (2010)
7. Hunter, G.: Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press (1971)
An Integrated Case Study of the Concepts and
Applications of SAP ERP HCM

Mark Lehmann1, Burkhardt Funk1, Peter Niemeyer1, and Stefan Weidner2


1 Leuphana Universität Lüneburg, Institut für elektronische Geschäftsprozesse, Scharnhorststr. 1, 21335 Lüneburg, Germany
{mlehmann,funk,niemeyer}@leuphana.de
2 Otto von Guericke Universität Magdeburg, SAP University Competence Center, Universitätsplatz 12, 39104 Magdeburg, Germany
weidner@ucc.uni-magdeburg.de

Abstract. Organizations use ERP systems to support a wide range of corporate activities. Since students of IS and related disciplines are expected to be familiar with these systems, universities provide classes focusing on ERP systems, often adopting the case study approach. However, no case study has been published for teaching human resource management with an ERP orientation so far. In this paper we develop a case study that provides an overview of the relevant HR concepts and links them to their implementation with SAP ERP HCM. We used the HCM case study twice in classes with MA students of HR, who were asked both times to evaluate it. The use of the study allowed us to achieve our teaching goals of both imparting SAP and modeling knowledge. Interestingly, both courses were inhomogeneous, which was reflected in the learning outcomes: some students had a steep while others had a flat learning curve.

Keywords: SAP ERP, enterprise resource planning, human capital management, case study.

1 Introduction
Organizations use ERP (enterprise resource planning) systems in nearly all areas of corporate activity. Therefore companies require employees who are familiar with those systems. Universities should satisfy this demand and integrate courses on ERP systems into their curriculum. Various software vendors like SAP, Microsoft and Oracle have established programs to provide their systems to universities. In Germany, many universities use systems provided by the SAP University Alliances (UA). The SAP UA program's primary objective is to support education by supplying the newest SAP technology available [1]. The SAP UA program distributes SAP systems with the aid of University Competence Centers (UCC). These centers operate and maintain SAP systems for all participating universities and provide additional services like training courses and teaching materials.
To the best of our knowledge, no case study on HR was available in 2009. But the
increasing importance of IT systems for human resource management has underlined
the need for training in human resource oriented studies [2]. Hence a collaboration between the UCC Magdeburg and the Institut für elektronische Geschäftsprozesse (IEG) of the Leuphana University of Lüneburg was started to close that gap by
The second chapter discusses some fundamentals of case study didactics. Based on
these descriptions, chapter three develops a concept for the use of case studies and
presents the HCM case study. In chapter four we describe the application of the HCM
case study in our classes during the last two years. We conclude with a summary of
our findings.

2 Case Study Didactics

Case study work is a teaching technique of long standing. It was developed and first deployed at the Harvard Business School at the beginning of the 20th century [3]. Today, case study didactics is influenced by different academic disciplines, like decision theory, business studies, situation theory and emancipatory pedagogy [4].
This chapter investigates traditional case study didactics. We start with a description of the goals of case studies and compare them to traditional teaching methods. We then go on to discuss three design principles and how they affect case study design. Next we talk about the application of case studies. Though case studies can vary in their design, their application follows a generic sequence of activities. In section four we discuss four types of case studies that are typically found in the literature. Although they follow the case study didactics presented, they pursue quite different teaching goals. We conclude the chapter with a description of SAP case studies and show why they differ from the traditional types of case studies.

2.1 Goal of Case Studies

One of the main goals of all teaching is the transfer of knowledge. This transfer is often achieved with traditional teaching techniques. The application of knowledge, however, can only be realized through active teaching techniques, like business games or case studies [5]. When processing a case study, participants are forced to extend and/or use their theoretical knowledge to perform actions which, in the end, will hopefully solve the underlying problem. The primary goal of case study work is to build a link between theoretical knowledge and action. The development of the participants' action competence is closely linked to case study work. Young people have to learn how to make autonomous decisions as early as possible [6].
A secondary objective of case study work is knowledge transfer. We have already mentioned existing knowledge which is used during case study work. Besides the use of knowledge, participants should be encouraged to acquire the knowledge needed to solve the problem, which results in the expansion of their knowledge base [7].

2.2 Case Study Design

As there are no generally agreed standards for case studies, we are going to suggest
that they should conform to the three design principles of exemplarity, vividness and
action orientation.
Exemplarity: a case study is exemplary if the case provides a good example of someone or something [8]. Exemplarity can be scaled in four dimensions. Two dimensions are related to the learner and two are related to the topic treated. The dimensions influence one another and the optimum is not achievable [8]. Therefore a case study design must be found where all dimensions achieve a reasonable value or where the dimensions' values are in line with the teaching goal.
Vividness: vividness is used for the purpose of motivation. All facts should be presented in a vivid way to increase the participants' pleasure in solving the problem. A tool to achieve vividness is the case story, which introduces the case to the participants, describes the problem, and informs them where the situation takes place, who is involved and why the problem emerged.
Action orientation: to enhance action competence the case study must be action oriented. Action orientation includes the active involvement of all participants while they are working on the case study. An action is a sequence of tasks, starting with a planning task and ending with a control task. In order to obtain the best possible learning effect, an action should be performed completely, from planning to control, by one and the same person.

2.3 Application of Case Studies

Case studies can be applied as part of a decision or problem solving process. A typical example of the latter (see Kaiser [6]) combines six phases (see table 1). Each phase is related to a teaching goal, as shown in table 1.

Table 1. Sequence of activities for the application of case studies following [6]

Phase         | Goal
Confrontation | Find the problem, get an overview and describe the task
Information   | Validate existing information and acquire further information
Exploration   | Find alternative solutions
Resolution    | Compare and evaluate the various solution alternatives
Disputation   | Defend the chosen solution
Collation     | Compare your own solution with the real solution

The schedule in table 1 should not be understood as an unalterable sequence of steps to be taken. Rather, it offers a basic structure for the use of case studies: there is room for look-aheads or fall-backs, phases can be done several times and the time provided for the phases can vary.

2.4 Case Study Variants

Kaiser distinguishes between four basic case study variants using the parameters of problem detection, information gathering, problem solving and critique of solution. The four case study variants have different focus areas and are therefore distinguished by the way in which they are designed and applied.
The case-study-method's focus area is problem detection. Here, hidden problems have to be found and analyzed. The focus area of the case-problem-method is problem solving. Participants have to develop several solutions while the problems are named and the necessary information is given. In the case-incident-method, information gathering is the focus area. Within this kind of case study the problems are incompletely described in order to force the participants to acquire the necessary information. The stated-problem-method's focus area is the critique of solutions. Problems and information as well as a set of solutions are given, and the participants should evaluate the solutions.

2.5 Characteristics of SAP Case Studies

Current SAP case studies are distinct from traditional case studies in a few aspects of their design and application. First, they cannot be identified with one of the case study variants described but combine characteristics of the case-problem-method and the stated-problem-method [9], which both share the feature of given and described problems. Second, they differ sharply from traditional case studies in the areas of solution determination and criticism, and in their application in the exploration and resolution phases. Traditional case studies challenge participants to develop solutions to a given problem. The teaching aim of SAP case studies, by contrast, is not to develop solutions, but to present one solution which is applicable in SAP. This is necessary since participants can only interact with the software in a specific way [9]. Therefore some degree of adaptation has to be made to fit the case study didactics to SAP case studies.

3 HCM Case Study
In this chapter we develop a case study concept for the application of SAP case studies at universities. The HCM case study combines traditional elements of case study work with new approaches. Our aim is to develop a case study that is flexible and can be adjusted to nearly all kinds of university teaching.
The chapter is structured as follows. First, we discuss functional and didactic requirements. Section two deals with the study's implementation. The chapter ends with a presentation of the HCM case study and a description of a typical case study chapter.

3.1 Requirements

We use a list of requirements to design the HCM case study and to evaluate it later.
Requirements can be divided into functional and didactic requirements.
We define functional requirements as topic-related requirements that describe the
knowledge we want to transfer to the students. First, the HCM case study should give
a detailed overview of the functions provided by SAP ERP for the field of human
resource management. Following Jung, we established the following areas of
personnel management [10]:

human resource requirement
recruiting
personnel placement
human resource development
personnel layoff
human resource management
human resource accounting
human resource assessment
personnel administration.

Second, participants should understand what they are doing when they process the HCM case study. They should be given a brief introduction to the different modules of SAP ERP HCM and should be able to explain the key functions of each module. Finally, we expect the participants to understand the relationship between different SAP modules and the integration of the different HCM components.
The didactic point of view can be divided into design elements and application elements. These elements depend on the application area and the teaching goal. First, the HCM case study should be usable at universities by students from different departments and with different levels of knowledge. The primary teaching goal is the transfer of SAP knowledge, the secondary one the transfer of human capital management know-how. Second, the HCM case study should generally fit the conditions found at universities. The HCM case study should be scalable to fit different types of classes with different specializations. Finally, the HCM case study should inspire the participants to work independently on the respective topic.

3.2 Case Study Design

To ensure the applicability of the HCM case study at universities, it must stay within a certain timeframe. At the same time the HCM case study should be adjustable and applicable in different classes with different topics. We try to achieve this by splitting the HCM case study into different chapters, which can be taught in different orders.
The case story is used to visualize circumstances within the HCM case study. Each chapter has its own case story to guarantee the flexible application of the HCM case study. Every chapter deals with one topic concerning human resource management, treating a main task of human resource management and using the same enterprise as an example. The situations are easy to generalize in order to meet the concept of exemplarity.
The HCM case study should fulfill the goals of case studies as described in chapter 2. We expect participants to have some theoretical knowledge of human resource management. While they work through the HCM case study, participants should link their theoretical knowledge with actions in SAP ERP HCM. The exercises are the most important part of the HCM case study to ensure action orientation. Knowledge transfer as a secondary objective concentrates on the transfer of technical skills.

3.3 Case Study Description

The case story takes place in the IDES Corporation. SAP IDES is a configured SAP system containing a database with example transactions and master data [11]. It is used by SAP AG for demonstration and educational purposes, as well as by the SAP UA to provide universities with a complete operational SAP system. The IDES Corporation produces and distributes different products. In the context of the HCM case study, participants act in different sections of the corporation's human resources department. The HCM case study consists of nine chapters, each of which includes one process from the human resources department. The nine chapters are divided into two parts for application purposes. The first part is called the introductory course and contains only two chapters. The introductory course includes the following chapters:

organization management
human resource administration

The introductory course is the basis for the second part, the advanced course. The advanced course can be started after the participants have completed both chapters of the introductory course. The advanced course contains seven chapters. These chapters can be studied in any order:

personnel procurement
time management
travel management
payroll accounting
human resources development
performance management
human asset accounting

All nine chapters have a uniform structure, which enables a steady learning process while working through the HCM case study. Each chapter starts with an introduction, which presents the chapter's case story and the topic treated. The second section deals with the preparation for the later exercises (preparing thematic controversy). It consists of questions that test the students' knowledge. There is a key to these questions. The third section describes the realization of the chapter's topic in SAP. The integration of the component within the module SAP ERP HCM is shown graphically. Important concepts and SAP terms are explained.
The fourth section contains the exercise for SAP ERP. The case story's situation is described. It consists of descriptions of activities as well as a list of the actors involved and the participants' role within the exercise. The exercises can be done as group or individual work, the latter format being our recommendation: the learning effect can thus be maximized, as each participant has to perform actions rather than merely watch what other participants are doing.
Finally, the chapters come with a brief conclusion which sums up the actions performed.
In addition, we adjusted the sequence of activities presented in chapter two. The
schedule is the same for each chapter. First, the participants are introduced to the
chapter, its aim and its underlying problem. Second, the participants have to answer
questions. They have to use their theoretical knowledge as well as their first working
experiences to answer all questions. Third, the participants are made familiar with
SAP terms and concepts. Fourth, the exercises in SAP are done. The participants
work through the case study either in small groups or on their own. Finally, the
participants summarize their action, an activity which also makes them repeat
subconsciously the combination of theoretical knowledge with practical experiences.
4 Application
We applied the HCM case study approach over two years in a class for human resource students at master level. The course is called "IT supported human resource management" and has been developed to teach students basic IT knowledge. A central learning outcome of the class is for students to be able to define and formulate requirements for software in the context of human resource management.
The class is divided into two parts. The first part takes up the first five or six sessions and is a basic introduction to information technology, different software systems for corporate activities and the history of ERP systems, with examples from different software vendors. Next, students are made familiar with requirements engineering methods and tools. We then concentrate on event-driven process chains (EPC) and entity-relationship models (ER). The first part concludes with models developed by the students in short exercises.
The second part deals with our HCM case study, but we replaced the questions
with modeling exercises. We started each meeting with a short introduction. Then
students had to design EPC- or ER-models in small groups of up to three students.
Next, they presented their results to the whole group and we showed an example that
fitted the SAP implementation. This was followed by a discussion of concepts and
SAP terms using PowerPoint presentations. We finished each meeting with work on
the case study. The students had to work through the SAP system on their own.
We made an evaluation in both years. As we concentrated on the whole HCM case
study and its approach in the first year, we did the evaluation at the end of the term.
Most students enjoyed working with the HCM case study. What pleased students
particularly was the variation between knowledge transfer and action oriented work.
Also, the modeling tasks presented a challenge to students: most of them were able to
design EPC-models, but disliked modeling ER-models.
As for the evaluation, students were given the chance to make suggestions for
improvements. Some proposed concentrating on EPC-models while others wanted
more time for software installation and modeling basics before starting on the HCM
case study. We took up this request and changed our schedule so that we had more
time for basics and installations in the second year. Yet other students suggested using
operations in the procurement case study. These suggestions were also incorporated
into the relevant chapter.
In the second year we developed a standardized questionnaire and used it after each chapter of the HCM case study. The questionnaire was designed to give answers to three questions:

1. Can we find a development in the students' interest?
2. Can we find irrelevant topics?
3. Can we find an improvement in the quality of the students' ER- and EPC-models?

In our second course, half of the students had had practical experience with SAP during internships or job training. Most students enjoyed the HCM case study. 71% thought that the knowledge gained from the class would be helpful for future employment. We found that the students' interest in the case study decreased over time. We measured this with the help of the number of evaluations and the number of completed case studies. Furthermore, we asked the students to judge the case study chapters on a 5-point scale by answering the following questions in order to find irrelevant topics:

1. I was able to use knowledge from my studies within the case study.
2. I was able to identify topics from my studies within the case study.
3. I will benefit from the case study work in my future working life.

We used the answers to these sentences to measure the topics' relevance: the more the students agreed with each sentence, the more relevant the topic was considered. In response to the second question, most of those surveyed indicated that typical hype topics (like talent management and human resource development) are relevant and that the units on travel management and payroll accounting are irrelevant.
In order to establish whether the quality of the students' models had improved, we compared the first training models with models from the students' seminar papers. The models' quality is determined by the number of mistakes. We searched both sets of models for some typical mistakes. The presence of more mistakes in the first training models was considered strong evidence of an improvement in the students' work over time.
We also included questions in which students were asked to suggest improvements. Interestingly, different students made different suggestions. For example, some mentioned that concepts and SAP terms should be discussed after the case study, while others agreed with our approach. A small number of students found the case study too easy after the first meetings and requested more challenging exercises. Most students, however, were pleased with the level of difficulty of the case study. This supports our finding from the first year. Both courses were inhomogeneous and some students had a steep learning curve while others had a flat one.

5 Conclusion
Within the framework of this paper, a new case study concept was presented for the application of SAP case studies at universities. The HCM case study contains traditional elements of case study work as well as new approaches. Freely combinable chapters make it possible to adapt the case study to nearly all kinds of classes.
The HCM case study combines practical exercises with knowledge transfer and repetition. The approach combines elements from traditional classes with more practical elements. The sequence of activities developed for the HCM case study supports the whole learning process. The HCM case study can be applied with different teaching methods. Some parts of the case study can be done either as group or individual work. Other parts, such as the preparing thematic controversy, can be integrated into the class itself.
We used the HCM case study in a class for human resources students. Our teaching goal was the transfer of basic IT knowledge, for which purpose we replaced the questions to be prepared with our own modeling exercises. As our evaluation showed, most students enjoyed working with the HCM case study, although some of them had already worked with the SAP system. It also showed the success of the HCM case study in achieving our teaching goals. Interestingly, some students found the case study challenging while others were unchallenged.
We are currently migrating the HCM case study to the new GBI (Global Bike Inc.) dataset. The GBI dataset is being developed by the SAP UA and will replace the IDES dataset soon. Some chapters are already available for the GBI dataset and can be accessed via the UA website. Finally, we will include the suggested improvements in the new version of the HCM case study.

References
1. Rosemann, M., Maurizio, A.A.: SAP-related Education - Status Quo and Experiences. Journal of Information Systems Education 16, 437-453 (2005)
2. Bedell, M.D., Floyd, B.D., McGlashan Nicols, K., Ellis, R.: Enterprise Resource Planning Software in the Human Resource Classroom. Journal of Management Education 31, 43-63 (2007)
3. Kaiser, F.J., Kaminski, H.: Methodik des Ökonomieunterrichts: Grundlagen eines handlungsorientierten Lernkonzepts mit Beispielen. Klinkhardt, Bad Heilbrunn/Obb. (1997)
4. Brettschneider, V.: Entscheidungsprozesse in Gruppen: Theoretische und empirische Grundlagen der Fallstudienarbeit. Klinkhardt, Bad Heilbrunn/Obb. (2000)
5. Alewell, K.: Entscheidungsfälle aus der Unternehmenspraxis. Gabler, Wiesbaden (1971)
6. Kaiser, F.J.: Die Fallstudie: Theorie und Praxis der Fallstudiendidaktik. Klinkhardt, Bad Heilbrunn/Obb. (1983)
7. Kosiol, E.: Die Behandlung praktischer Fälle im betriebswirtschaftlichen Hochschulunterricht (Case Method). Duncker & Humblot, Berlin (1957)
8. Reetz, L., Beiler, J., Seyd, W.: Fallstudien Materialwirtschaft: Ein praxisorientiertes Wirtschaftslehre-Curriculum. Feldhaus, Hamburg (1993)
9. Funk, B., Lehmann, M., Niemeyer, P.: Entwicklung einer Fallstudie für die Lehre im IT-gestützten Personalmanagement. Final 20, 11-23 (2010)
10. Jung, H.: Personalwirtschaft. Oldenbourg, München (2006)
11. Vluggen, M., Bollen, L.: Teaching Enterprise Resource Planning in a Business Curriculum. International Journal of Information and Operations Management Education 1, 44-57 (2005)
IT Applied to Ludic Rehabilitation Devices

Victor Hugo Zárate Silva

Information Technology and Mechatronics Department
Tecnológico de Monterrey, Campus Cuernavaca
Cuernavaca, Morelos, Mexico
vzarate@itesm.mx

Abstract. In this paper we report some experiences in progress focused on the creation of rehabilitation technology, defined as the "set of artifacts created by people to facilitate human effort". These devices have been developed by students over several years and impact directly on a social community which has been the recipient and user of these systems, the CRIC (Center for Child Rehabilitation at Cuernavaca, Mexico). These experiences have been made possible by a service-learning pedagogy approach. The use of Information Technology (IT) in the rehabilitation artifacts improves their capabilities and lets us introduce ludic aspects into the rehabilitation process. The role of the CRIC has been active in all phases of the development process, and it helps us to achieve our goal: to help people increase their quality of life. Without this engagement these results would not have been possible.

Keywords: service-learning pedagogy, ludic rehabilitation, biomechatronic engineering system, information technology.

1 Introduction

In 2000 the Tec de Monterrey Campus Cuernavaca, part of the biggest private university in Mexico, along with Dr. Paul Bach-y-Rita, a neurologist who worked at the University of Wisconsin, proposed a joint effort to develop a motivational device [1]. One of our ideas is that if the patient is well motivated, he or she will do the rehabilitation exercises more willingly and therefore recover faster. At that time these ideas were very innovative; the closest was the creation of virtual reality-based rehabilitation by Professors Thalmann and Burdea in 2002 [2], defined as an unconventional therapy that allows entertainment and motivation. At the same time, we use IT intensively to improve the rehabilitation process. Although its application to rehabilitation is new, the aspects of fun and attractive systems are also associated with IT, as Professor Don Norman states: "Thinking to humanize everyday things design to be at the same time functional and attractive, but also funny and ludic" [3]. Even in empirical observations (as seen at http://www.thefuntheory.com/), it has been shown that fun is a good incentive for people to react and change their attitude or improve their perception of things. A technology widely used for this purpose is video games. For example, Zach Rosenthal of Duke University in the United States uses video games for drug patients [4]. Diane
Gromala from Simon Fraser University in Vancouver, Canada uses video games instead of medicine to treat chronic pain [5]. At the Chaim Sheba Rehabilitation Hospital in Tel Aviv they built a 650,000 USD system which is capable of simulating a total virtual reality for patients with disabilities [6]. They affirm that their system is fun, entertaining and addictive because, to some extent, it reflects in a virtual fashion what the patients can perform in real life, inducing the brain to be motivated during rehabilitation. They have seen incredible results with this therapy; however, the cost is very high. There are many other examples of video game therapy, but all have the high price in common [7, 8].
Under these assumptions, our focus is the development of innovative rehabilitation technologies that allow registration of the evolution of the rehabilitation process and motivate people to use them. Our goal is to create affordable systems for a non-profit social community such as the CRIC, to test them and to make improvements to assure better therapy.

2 Methodology
Our students have kindly been collaborating to develop rehabilitation devices based on the CRIC's needs in its daily work. These devices are validated by experts in rehabilitation through their use, thus providing maturity to the designs. On the other hand, we chose the CRIC as a social community partner because it is a nonprofit institution providing social support and service to people with low income. That is why we support our participation through the service-learning pedagogy approach. This methodology links the social community partner (in this case the CRIC) with the academic curricula to produce final products made by the students to serve our social partner.
Our work's main objective is to create IT artifacts based on computer resources to improve the efficiency of rehabilitation therapy. We chose two approaches: computer video games as a fun therapy, and virtual reality immersion as a relaxation therapy. Both approaches must suit the different rehabilitation devices created earlier. The video games are designed in conjunction with the CRIC's therapists and according to the guidelines of the learning objectives of the Tecnológico de Monterrey Campus Cuernavaca. We take care first of all of the functionality, without losing sight of the need to be adaptable and flexible, as seen in Figure 1.

Fig. 1. Integration of ludic computer-based IT aspects into biomechatronic devices

To develop these devices we adopt a five-step top-down methodology:

1) Define, together with the CRIC's therapists, the problem to solve.
This stage is carried out with the CRIC's experts (the therapists) to obtain their points of view and expertise based on their rehabilitation sessions. Here, the students, the therapists and the professor work together to define a list of useful requirements according to specific needs.

2) Design and implementation.
Special attention is paid to ergonomic aspects. It is important that the device be comfortable for the person in rehabilitation. In general, this factor will affect the final acceptance of the device. It is also important that the device be attractive and easy to handle for the specialists, mainly to provide assistance to patients. Here again the therapists have an active role in helping to get a better design.

3) Integration of ludic aspects.
IT is ideal for creating devices for rehabilitation, mainly through the addition of ludic aspects, which represent an extra motivation for the people in rehabilitation. In many cases psychological factors determine the success of the systems, as we mentioned; that is why we include these ludic elements associated with the movement of the patients through the rehabilitation device. Video games are one of these aspects. Another way to help people in rehabilitation is through virtual reality immersion, that is, introducing the patient into a virtual world, like a walk in a forest with soft music, to relax him or her and make the rehabilitation session more comfortable.

4) Using the system with real patients.
Once the system has been implemented, we measure its performance. The designers, the therapists and sometimes the patients themselves make recommendations about the systems in order to make the necessary adjustments and improve the final version. Success often depends on environmental conditions, personal motivation, and training in the system's use.

5) Improve the design based on the feedback of the patients and therapists.
Feedback aims to determine the quality of the fusion of technology with health care. This is accomplished by directly auditing the device or through the results of its intervention in rehabilitation. We can track the usage of each of the devices and monitor the patients through computational elements. The role of the therapist is essential at this stage because he or she knows the medical parameters needed to establish whether a patient's health improves or declines. Based on these assessments, we redefine, if necessary, the design and begin another cycle to improve the system.

3 Results
From all the rehabilitation devices created, we have three with integrated ludic aspects, as seen in Figure 2.
Fig. 2. Examples of the biomechatronic devices: robotic arm, ankle rehabilitation device, and stimulation bed

A. Robotic arm. This is a mechanism to stimulate and strengthen the range of motion of the elbow with a linear back-and-forth motion which provides resistance through a spring. It is linked to a computer with some video games, like ping-pong, that help increase motivation.

B. Ankle rehabilitation device. This is a simple mechanical device working like an accelerator (a throttle) with a variable weight resistance provided through pulleys and weights. It is used to support muscle strengthening and to regain lost range of motion. It is associated with a game (blowing up balloons) on a computer, activated when the patient pushes and pulls with his or her ankle.

C. Multiple stimulation bed. This system has two bicycle-like pedal sets; one fits the feet and the other the patient's hands. The system is programmable in time and speed of movement and helps improve the movements to regain range of motion while the patient is being retrained. We include a monitor where the patients observe, in a relaxed ambience, pictures or a virtual navigation of nature. This is one of the most tested systems. We have a forest virtual tour that can be attached to goggles to be more practical. The results seem very attractive, but we are also improving the mechatronic bed, so the assessment process is slow.

4 Conclusion
The use of new technologies such as IT gives us the opportunity to build sophisticated systems. For rehabilitation, good use of IT can substantially improve the quality of service and expand the number of people served.
In this work we showed some experiences in progress in applying academic work strategies to create rehabilitation systems. Mainly we use the service-learning approach to join the particular requirements of a social partner with the academic requirements, and this permits the students to create devices that supply these requirements. In our methodology, the social community (the therapists) is highly involved and its participation is essential to success. The systems shown here are in operation at the CRIC, and in particular the multiple stimulation bed is currently in the second stage of redesign and improvement.

References
1. Vargas, S.: Realiza el CRIC rehabilitación lúdica. Reportaje periodístico. Diario de Morelos, August 4, 2009 (in Spanish), http://www.diariodemorelos.com/index.php?option=com_content&task=view&id=45764&Itemid=68 (visited on April 30, 2010)
2. Burdea, G.: Keynote Address: Virtual Rehabilitation - Benefits and Challenges. In: 1st International Workshop on Virtual Reality Rehabilitation (Mental Health, Neurological, Physical, Vocational) VRMHR 2002, Lausanne, Switzerland, November 7-8, pp. 1-11 (2002)
3. Norman, D.A.: Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books, New York (2004)
4. Rivera, A.: Using Games for Rehabilitation (November 7, 2007), http://www.massively.com/2007/11/07/using-games-for-rehabilitation/ (visited on June 30, 2011)
5. Shayotovich, E.: Video Games Treat Chronic Pain Better Than Drugs (December 17, 2007), http://www.massively.com/2007/12/17/video-games-treat-chronic-pain-better-than-drugs-working-title/ (visited on June 30, 2011)
6. MSNBC News Tech & Science: Virtual Reality Boosts Rehab Efforts (December 18, 2006), http://www.msnbc.msn.com/id/16266245/ (visited on June 30, 2011)
7. GestureTek Health: IREX: The Best in Virtual Reality Physical Therapy (2009), http://www.gesturetekhealth.com/products-rehab-irex.php (visited on June 30, 2011)
8. Balasubramanian, S., et al.: RUPERT: An Exoskeleton Robot for Assisting Rehabilitation of Arm Functions. IEEE Digital Library, August 27, 2008 (video), http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4625154, http://www.youtube.com/watch?v=SZAp9ZXye8w (visited on June 30, 2011)
A Channel Assignment Algorithm Based on Link Traffic
in Wireless Mesh Network

Liu Chunxiao1, Chang Guiran2, Jia Jie1, and Sun Lina1


1 Information Science and Engineering, Northeastern University, Shenyang, China
2 Computing Center, Northeastern University, Shenyang, China
xiaoxiao198525@163.com, chang@neu.edu.cn, jiajie@ise.neu.edu.cn, sunyawen@sina.com.cn

Abstract. In a wireless mesh network, channel assignment must minimize the interference within the mesh network and improve the network connectivity. This paper proposes a heuristic algorithm which predicts the busy degree of a node by predicting the link traffic in the network, so the algorithm can respond flexibly to the distribution of data traffic. This paper also uses the protocol interference model to reduce the interference between the links in the wireless mesh network. Simulation results show that the algorithm can reduce the waiting time for transmitting a data frame, and that network throughput and stability can be improved significantly.

Keywords: link traffic, busy degree, channel assignment, interference weight, wireless mesh network.

1 Introduction
A wireless mesh network (WMN) [1] merges the advantages of WLAN (wireless local area network) and ad hoc networks; it is a high-capacity, high-speed and high-coverage wireless network. In order to increase network capacity, each node in the wireless mesh network is configured with multiple radio interfaces, and different radios are assigned different non-overlapping channels.
How to reduce the interference among channels during data transmission, which means maximizing the reuse ratio of the scarce radio spectrum, and how to allocate channels when the link traffic is unbalanced, have become the major challenges which a multi-channel wireless mesh network faces. Reference [2] proposed a polynomial-time greedy heuristic algorithm, called the connected low interference channel allocation algorithm. According to the connection graph and the conflict graph, this algorithm computes a priority for each mesh node and allocates a channel for each link. However, the algorithm does not consider the problem of flexibility and also cannot handle a variety of network traffic patterns in channel allocation. Reference [3] proposed an interference-aware channel allocation algorithm which takes the impact of link traffic fully into account, but it only considers the traffic of external wireless networks. For the above problems, this paper proposes a heuristic channel allocation algorithm based on the link traffic. In order to predict the
busy degree of each node in the wireless mesh network, the algorithm uses a Markov chain model [4] to predict the link traffic. The link connected to the node with the larger busy degree has priority in channel allocation. If there are several such links, then, according to the protocol interference model [5], the link with the greater interference degree has priority in channel allocation.

2 Network Model and the Channel Assignment Algorithm

2.1 Network Model

A wireless mesh network can be represented by an undirected graph G = (V, E, L), where V is the node set, E is the collection of links connecting pairs of nodes, K is the number of available channels in the graph G, and N = |V| indicates the total number of nodes in the wireless mesh network. Assume that the nodes of the wireless mesh network are distributed in a plane and that each mesh router is equipped with an omnidirectional antenna per radio. Also assume that each radio terminal has a coverage range R and the same interference range R'. If a receiving node is in the coverage of two other transmitting nodes, then the packet transmissions will produce interference. The range of coverage is less than the range of interference (R < R').
Based on the above assumptions, some related definitions are given as follows:
Definition 1. Matrix E = {e_ij}, i, j = 0, ..., N-1, an N*N matrix with e_ij = 0 or e_ij = 1.
If e_ij = 1, node i and node j are within each other's coverage and will produce link
interference during data transmission. If e_ij = 0, node i and node j are not within
each other's coverage, so node i and node j can be assigned the same channel.
Definition 2. Matrix L = {l_ik}, i = 0, ..., N-1, k = 0, ..., K-1, an N*K matrix. It represents
the channel availability of each node in graph G, that is, the channel availability state of
each node; l_ik = 0 or l_ik = 1. If l_ik = 1, channel k can be assigned to node i during
channel assignment. If l_ik = 0, channel k cannot be assigned to node i during channel
assignment.
Definition 3. Matrix A = {a_ij}, i, j = 0, ..., N-1. It records the channel assignment of
link e_ij. If channel k is assigned to link e_ij by the channel assignment algorithm,
then a_ij = k; otherwise a_ij = 0.
Definition 4. Interference degree I(e) of link e. According to the protocol interference
model, N(e) denotes the set of links that interfere with link e, so the interference
degree of link e is |N(e)|.
Definition 5. Set P. P(e_ij) is the set of nodes appearing in the link set N(e_ij),
excluding node i and node j. Each link e_ij corresponds to a node set P(e_ij), so the
number of sets P equals the number of links e in the network.
Definition 6. Queue B_i. The nodes are sorted in descending order of their calculated
busy-degree φ(i), and the corresponding values of i are placed into the queue.

Definition 7. Queue O_k. O_k records the frequency of utilization of channel k during
channel assignment, and the value of O_k is placed into the queue. The greater the
value of O_k, the more frequently channel k has been used. In order to keep the
channel traffic balanced, if there are several available channels, the channel with the
smaller value of O_k should be chosen first.
Definition 8. The total interference degree w of the network:

w = Σ_{e∈E} |I(e)|    (1)
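
To make Definitions 1-3 concrete, the following minimal Python sketch (illustrative only, not from the paper) represents the three matrices as nested lists:

# N nodes, K channels (example sizes)
N, K = 4, 3
E = [[0] * N for _ in range(N)]   # e_ij = 1 if nodes i and j are in each other's coverage
L = [[1] * K for _ in range(N)]   # l_ik = 1 if channel k is still assignable at node i
A = [[0] * N for _ in range(N)]   # a_ij = channel assigned to link e_ij (0 = unassigned)
E[0][1] = E[1][0] = 1             # example: nodes 0 and 1 are within range of each other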

2.2 Node Traffic Prediction Model

This paper uses a Markov chain model (the ON_OFF model) to predict the link traffic, as
shown in Figure 1. ON denotes the state in which data is being transmitted on the link,
and OFF denotes the state with no data transmission; a, b, c and d denote the transition
probabilities between the ON and OFF states.

Fig. 1. ON_OFF model

Assume that system time is divided into two parts: time with no data transmission and
time with data transmission. TN is the average length of a data transmission period and
TF is the average length of a no-transmission period (TN ≥ 1, TF ≥ 1). Therefore, for any
time slot in the ON state, the probability that the next time slot is still in the ON state is
(TN-1)/TN. Similarly, for any time slot in the OFF state, the probability that the next
slot is still in the OFF state is (TF-1)/TF.

a = (TN − 1)/TN = 1 − 1/TN,   b = 1 − a = 1/TN;
c = (TF − 1)/TF = 1 − 1/TF,   d = 1 − c = 1/TF;
π_on = TN/(TN + TF),   π_off = TF/(TN + TF)

where π_on is the probability that the link is in the ON state and π_off is the probability
that the link is in the OFF state. So in a single wireless network, the mathematical
expectation EQ of the traffic over the following m time slots is defined as follows:

EQ = (m·π_on + S·π_off)·B   (ON state)
EQ = (m·π_on − S·π_off)·B   (OFF state)    (2)

where S = Σ_{i=1}^{m} (1 − 1/TN − 1/TF)^i, and B is the bandwidth of the wireless network.
Q(i, j) denotes the data flow from node i to node j. Since links in the wireless
network are bidirectional, Q(i) = Q(i, j) + Q(j, i), where Q(i) is the data flow of node i.
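
As a concrete illustration, the following short Python sketch (not the authors' code; m, TN, TF and B are assumed measured inputs) evaluates Eq. (2) as reconstructed above:

def expected_traffic(m, TN, TF, B, state_on):
    """Expected traffic EQ over the next m time slots (Eq. 2)."""
    pi_on = TN / (TN + TF)            # stationary probability of the ON state
    pi_off = TF / (TN + TF)           # stationary probability of the OFF state
    r = 1.0 - 1.0 / TN - 1.0 / TF     # per-slot term inside the sum S
    S = sum(r ** i for i in range(1, m + 1))
    if state_on:
        return (m * pi_on + S * pi_off) * B
    return (m * pi_on - S * pi_off) * B

# Example: link currently ON, TN = 8 slots, TF = 4 slots, B = 54 Mbps
print(expected_traffic(m=10, TN=8.0, TF=4.0, B=54.0, state_on=True))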

2.3 Busy-Degree of Node

The busy-degree of a node indicates how busy the node is: the greater the value of the
busy-degree, the busier the node. The busy-degree is defined as follows:

φ(i) = α · Q(i)/(C·k) + β · |Neighbor(i)|/N    (3)
where C is the channel capacity of the network (a fixed value of C is used in this paper),
k is the number of available channels in the network, and |Neighbor(i)| is the number of
neighbors of node i in the network. The two parameters α and β, with α + β = 1, weight
the importance of the predicted traffic in the next stage and of the number of node
neighbors, respectively.
There are four kinds of node-load measurement mechanisms: CQI (channel quality
index), the occupancy of the MAC buffer, the number of neighbor nodes, and the packet
processing delay. This paper mainly considers the number of neighbor nodes, so the
calculation of the node busy-degree takes into account not only the traffic in the next
stage, but also the potential traffic of the node. Since 0 ≤ φ(i) ≤ 1, the greater the value
of φ(i), the busier node i is, so the links connected with node i should be assigned first.
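
As a small illustration, Formula (3) can be evaluated as follows (a minimal sketch; the weights alpha = beta = 0.5 are arbitrary example values, not values prescribed by the paper):

def busy_degree(Q_i, n_neighbors, C, k, N, alpha=0.5, beta=0.5):
    """Busy-degree of node i per Formula (3); alpha + beta must equal 1."""
    assert abs(alpha + beta - 1.0) < 1e-9
    return alpha * Q_i / (C * k) + beta * n_neighbors / N

# Example: predicted traffic 8 Mbit, 3 neighbors, C = 54, k = 12, N = 25 nodes
print(busy_degree(Q_i=8.0, n_neighbors=3, C=54.0, k=12, N=25))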

2.4 The Channel Assignment Algorithm

In the graph G(V, E, L), when channels are first assigned to the wireless mesh network,
the interference degree |I(e)| of each link is calculated according to the protocol
interference model, and channels are assigned to the links in descending order of |I(e)|.
For every subsequent channel assignment, we use the following algorithm to assign a
channel to the corresponding link:

Step 1. Calculate the busy-degree of each node in the wireless mesh network
according to Formula (3). If e_ij ∈ E and e_ij = 0, set a_ij = 0 directly.
Step 2. Assign channels to the links connected with node i in descending order of the
calculated busy-degree φ(i). The channel assignment process is as follows:
Each time a channel is to be assigned to link e_ij, check the value of a_ij. If a_ij ≠ 0,
link e_ij has already been assigned a channel, so skip the channel assignment for this
link. If a_ij = 0, link e_ij has not been assigned a channel yet, so perform the following
channel assignment.
For node i and each node j satisfying e_ij = 1, if L_i ∩ L_j ≠ ∅, then node i and node j
have common channels in their available channel tables. Calculate the interference
degree I(e_ij) of each such link and assign channels to the links in descending order of
the calculated interference degree.
If |L_i ∩ L_j| = 1, the single common channel k in the available channel tables of node i
and node j is assigned to the link directly. If |L_i ∩ L_j| > 1, the number of common
channels in the available channel tables is greater than 1; compare the corresponding
values of O_k and assign the channel k with the smaller value to link e_ij.
Calculate the new total interference degree w' of the network. If w' < w, the channel
assignment is successful: set w = w', a_ij = k, O_k = O_k + 1, update the corresponding
value in queue O_k, and modify matrix L by setting l_mk = 0 for every m ∈ P(e_ij).
Otherwise, cancel the channel assignment for this link.
Step 3. Repeat Step 2 until every link in the network has been assigned a channel.
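
The following Python sketch condenses Step 2 into code. It is not the authors' implementation: the availability sets and the total_interference helper are assumed inputs, and ordered_links is assumed to be already sorted by the busy-degree/interference-degree priority described above.

def assign_channels(ordered_links, avail, usage, total_interference, P):
    """avail[i]: set of channels usable at node i (matrix L);
    usage[k]: times channel k has been used (O_k);
    P[(i, j)]: the node set P(e_ij) of Definition 5."""
    assign = {}                        # a_ij: link -> assigned channel
    w = total_interference(assign)     # current total interference degree
    for (i, j) in ordered_links:
        if (i, j) in assign:           # a_ij != 0: already assigned, skip
            continue
        common = avail[i] & avail[j]   # common channels, L_i intersect L_j
        if not common:
            continue
        k = min(common, key=lambda c: usage[c])  # smallest O_k first
        trial = dict(assign)
        trial[(i, j)] = k
        w_new = total_interference(trial)
        if w_new < w:                  # accept only if w' < w
            assign[(i, j)] = k
            usage[k] += 1
            w = w_new
            for m in P[(i, j)]:        # l_mk = 0 for every m in P(e_ij)
                avail[m].discard(k)
    return assign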

3 Network Simulation

3.1 Mesh Topology

In this paper, the NS2 network simulation tool [6] is used to build a simulation
environment for the wireless mesh network. The transmission range and interference
range of each mesh node are 250 m and 350 m, respectively. 25 nodes are randomly
distributed in an 800 m × 800 m area, as shown in Figure 2.

Fig. 2. Topological structure of network simulation

This paper uses the MAC protocol of the IEEE 802.16 standard [7], and the
transmission rate of the wireless links is 54 Mbps. The main traffic flows are generated
at the peripheral nodes, using a constant bit rate (CBR) traffic pattern. The Ad hoc
On-demand Distance Vector (AODV) routing protocol [8] is used in the simulation.

3.2 Simulation Results


Three simulation experiments are used to test the performance of the proposed
algorithm. Experiment 1 tests the validity of the link traffic prediction model in the
network. Experiment 2 compares the throughput, in the same network state, of the
proposed algorithm, the CLICA algorithm of reference [2], and the interference-aware
algorithm of reference [3]. Experiment 3 compares the network delay of the three
algorithms in the same network state.

(1) Test of the link traffic prediction model
Experiment 1 uses the simulation environment described above. There are 25 nodes
in the simulated network and the channel assignment period is T = 100 s. Each node
has two interfaces, the number of available channels is k = 12, and the simulation
time is t = 500 s.

[Figure: link traffic (Mbit) versus simulation time (s), comparing the prediction value with the actual value]

Fig. 3. Comparison between the actual value and prediction value of link traffic

The experimental result shown in Figure 3 indicates that the Markov chain (ON_OFF)
model can effectively predict the link traffic in the network.
(2) Network throughput
The throughput in the same network state is compared among the algorithm proposed
in this paper, the CLICA algorithm of reference [2] and the interference-aware
algorithm of reference [3], from two aspects (higher traffic and lower traffic in the
network).
Figures 4 and 5 compare the network throughput of the different channel assignment
algorithms. The polynomial-time greedy heuristic algorithm (CLICA) computes a
priority for each mesh node based on the connection graph and the conflict graph, but
it does not consider flexibility during channel assignment and cannot handle varying
traffic patterns, so its throughput is relatively low. The interference-aware channel
assignment algorithm considers not only the impact of link traffic but also the link
interference degree, so its throughput is higher than that of the CLICA algorithm. The
algorithm proposed in this paper predicts the busy-degree of each node by using the
Markov chain model to predict the link traffic in the wireless mesh network. Links
connected with nodes of larger busy-degree are assigned channels first; if there are
several such links, the link with the greater interference degree under the protocol
interference model is assigned first. Consequently, the network throughput of the
proposed algorithm is higher than that of both the CLICA and the interference-aware
algorithms.

[Figure: network throughput (Mbps) versus simulation time (s) for the proposed algorithm, the interference-aware algorithm and the CLICA algorithm]

Fig. 4. Network throughput of different algorithms (lower traffic)

[Figure: network throughput (Mbps) versus simulation time (s) for the proposed algorithm, the interference-aware algorithm and the CLICA algorithm]

Fig. 5. Network throughput of different algorithms (higher traffic)

Experimental results show that, regardless of how much traffic is in the network, the
link-traffic-based algorithm proposed in this paper improves network throughput
effectively. According to the results shown in Figure 4, the network throughput of the
proposed algorithm is 1.4 times that of the CLICA algorithm and 1.2 times that of the
interference-aware algorithm. According to the results shown in Figure 5, it is 1.5
times that of the CLICA algorithm and 1.3 times that of the interference-aware
algorithm. As the traffic in the network increases, the proposed algorithm improves
network throughput and network performance even more.

(3) Network delay
Network delay refers to the time a data packet takes from the input port to the output
port. Reducing network latency plays an important role in improving network
performance. Figure 6 compares the average packet delay of the three algorithms
discussed above. The results shown in Figure 6 indicate that, regardless of how much
traffic is in the network, the average packet delay of the proposed algorithm is 67%
lower than that of the CLICA algorithm and 61% lower than that of the
interference-aware algorithm.

[Figure: average packet delay (s) under lower traffic and higher traffic for the CLICA algorithm, the interference-aware algorithm and the proposed algorithm]

Fig. 6. Average packet delay of different algorithms

4 Conclusion
This paper proposes a heuristic channel assignment algorithm based on link traffic. In
order to predict the busy-degree of each node in the wireless mesh network, the
algorithm uses a Markov chain model to predict the link traffic. Links connected with
nodes of larger busy-degree are assigned channels first; if there are several such links,
the link with the greater interference degree under the protocol interference model is
assigned first. Finally, experimental results show that the proposed algorithm
effectively improves network throughput and network transmission performance.

References
1. Avallone, S., Akyildiz, I.F.: A channel assignment algorithm for multi-radio wireless mesh
networks. In: 16th International Conference on ICCCN 2007, Honolulu, HI, pp. 1034–1039
(2007)
2. Marina, M.K., Das, S.R.: A topology control approach for utilizing multiple channels in
multi-radio wireless mesh networks. In: 2nd International Conference on BroadNets 2005,
California, pp. 381–390 (2005)

3. Ramachandran, K.N., Belding, E.M., Almeroth, K.C., Buddhikot, M.M.: Interference aware
channel assignment in multi-radio wireless mesh networks. In: 25th IEEE International
Conference on Computer Communications, Barcelona, Spain, pp. 1–12 (2006)
4. Li, L., Yang, G.W., Cheng, G.Y.: A Forecast Method of the Flow in Networks. Computer
Engineering, 15–16 (1998)
5. Naouel, B.S., Hubaux, J.P.: A fair scheduling for wireless mesh networks. In: The First
IEEE Workshop on Wireless Mesh Networks (WiMesh), Santa Clara (2005)
6. NS tutorial, http://www.isi.edu/nsnam/ns/tutorial/index.htm
7. IEEE Std. 802.16-2004, IEEE standard for Local and Metropolitan Area Networks (2004)
8. Perkins, C., Royer, E.: Ad-hoc On-Demand Distance Vector Routing. In: 2nd IEEE
Workshop on Mobile Computing Systems and Applications, Washington, DC, pp. 90–100
(1999)
An Analysis of YouTube Videos for Teaching
Information Literacy Skills

Shaheen Majid, Win Kay Kay Khine, Ma Zar Chi Oo, and Zin Mar Lwin

Wee Kim Wee School of Communication & Information


Nanyang Technological University, Singapore 637718

Abstract. Traditionally, librarians and educators have been using a variety of
methods such as lectures, discussions, demonstrations, and hands-on sessions for
imparting information literacy skills. An exciting addition to such initiatives is
the availability of Web 2.0 applications. Among the Web 2.0 tools, YouTube is
quickly becoming a new way of teaching information literacy skills in a more
interesting and engaging manner. The purpose of this study was to analyze
information literacy videos on YouTube using the Big6 information literacy
model. This paper also makes certain suggestions for using YouTube to impart
information literacy skills effectively.

Keywords: Information Literacy, Big6 Model, YouTube, Training, Video Content, Video Quality.

1 Introduction
These days, a large amount of information is readily available, and without the
necessary skills to search, locate, process, evaluate, and use information, people may
experience various information-related problems, such as information overload,
inability to find the needed information, and underutilization of information. It is,
therefore, desirable for people from different segments of society to become lifelong
learners and possess adequate levels of information-related competencies. The term
information literacy (IL), sometimes referred to as information competency, is generally
defined as the ability of an individual to access, evaluate, organize, and use information
from a variety of sources. Being information literate requires knowing how to
clearly define a subject or area of investigation; select the appropriate terminology
that expresses the concept or subject under investigation; formulate a search strategy
that takes into consideration different sources of information and the variable ways
that information is organized; analyze the data collected for value, relevancy, quality,
and suitability; and subsequently turn information into knowledge [1].
In the higher education arena, one of the objectives is to prepare information literate
citizens who can work effectively in an information- and knowledge-rich society.
Information literacy leads students to become independent learners rather than
over-depending on teachers to seek answers to questions or solve problems.
Preddie [2] listed several benefits of information literacy for students and the general
public. Information literacy requires active learning; thus students take more
control of and become responsible for their own learning. Information literate citizens
know how to analyze and use information to best apply it to their work and everyday
life. For workers, information literacy enables them to embrace change, quickly adapt
to a dynamic and constantly evolving work environment, and at the same time add
value to the organization they work for.
Appreciating the importance of information literacy, various standards and guidelines
have been developed and implemented. In the United States, the American Library
Association (ALA) and Association for Educational Communications and
Technology's landmark publication Information Power, and the Association of
College and Research Libraries' publication Information Literacy Competency
Standards for Higher Education, have both become de facto standards for IL
competencies from kindergarten to college. The UK Standing Committee for National
and University Libraries (SCONUL) proposed the Seven Pillars of Information Skills
in December 1998. The Council of Australian University Librarians (CAUL)
reviewed the US Information Literacy Standards for Higher Education by ACRL,
revised the Australian and New Zealand Information Literacy Framework (ANZIIL),
and provided four guiding principles and more comprehensive details for each of the
six core standards.
Different information literacy models have also been presented, emphasizing
different aspects of information literacy. Bruce [3] stated that information literacy is
generally influenced by five concepts, namely: information technology literacy,
computer literacy, library skills, information skills, and learning to learn. She stated
that the five concepts are simultaneously distinct and interconnected, and that each
concept is either differentiated from or integrated into current descriptions of
information literacy. This model is generally acknowledged, accepted and used.
Burdick [4] described information literacy as being made up of five components:
abilities, skills, knowledge, use and motivation. Eisenberg & Berkowitz [5] presented
the well-received Big6 information literacy model, describing how people solve a
typical information problem. Their model comprises six information-related
activities: 1) task definition, 2) information seeking strategies, 3) location and access
to information, 4) use of information, 5) synthesis, and 6) evaluation.
With advancements in information technology, libraries and other stakeholders are
experimenting with new content delivery techniques to make IL instruction more
effective and useful. A big advantage of using ICT is that IL instruction materials are
accessible to intended users on a 24/7 basis. An exciting addition to such initiatives is
the availability of Web 2.0 applications. Among the Web 2.0 tools, YouTube is quickly
becoming a new way of teaching IL skills in a more interesting and engaging manner.
Unlike certain online tutorials and quizzes, which are usually designed for and
accessible to only authorized user groups, YouTube videos are freely available to all
interested viewers. Burke and Snyder [6] pointed out that academics are using
YouTube in innovative ways to teach their courses. Gilroy [7] noted that previously
colleges and universities posted their videos on the site through their own channels;
however, the new YouTube EDU page now organizes the education-related videos in
one place. Primary Research Group [8] conducted a survey to explore how libraries use
Google, Yahoo, Wikipedia, Ebay, Amazon, Facebook, YouTube and other web tools
and websites. The study showed that 24.2% of the libraries had a YouTube account
and one-half of them had posted user education videos on YouTube.

The above literature review shows that libraries and other information handling
agencies increasingly use ICT tools, including YouTube, for reaching out to their
patrons. Libraries have already been taking advantage of YouTube for user education
and for developing the information literacy skills of their patrons. However, no study
could be located that analyzes the attributes of YouTube videos on information
literacy. The purpose of this study was to analyze the scope and coverage of
information literacy videos using the Big6 information literacy model. The areas
covered by this study included: the types of IL skills taught, the use of different
instructional approaches, the quality and duration of the videos, and the intended
viewers. It is expected that the analysis will help libraries that put their IL-related
videos on YouTube to identify the strengths and weaknesses of their videos and to
improve their production quality. In addition, this analysis will also help other
libraries to select appropriate, high-quality videos to recommend to their patrons.

2 Method
For the study, only those videos using the keyword "Information Literacy" to describe
their contents were selected. As discussed in previous sections, information literacy is a
comprehensive concept; therefore, videos on library collections, services, facilities,
rules and procedures, service hours, etc. were excluded from this study. In addition,
videos of book promotions, announcements of information literacy workshops,
students' projects, and videos of inaugural sessions and dinners of information literacy
conferences were dropped. Similarly, certain other related terms such as user
education, bibliographic instruction, library orientation, library skills, and library
awareness and promotion were avoided, as these terms do not adequately represent the
complete scope and coverage of the concept of information literacy. It was interesting
to note that some videos produced by the American Medical Association on the ill
effects of marijuana and other drugs also used information literacy in their video titles
(e.g. Marijuana Information Literacy - http://www.youtube.com/watch?v=Mqc7YBD7EqE).
Another video providing tips for the purchase of a new car used the title Final Project
for Information Literacy - Car Economics (http://www.youtube.com/watch?v=gDQhZE0oDBo).
All such videos were excluded from the analysis. Other criteria used to limit the search
results were: category "education" and upload date "any time". The data was collected
in the first week of March 2010, and a total of 912 videos matching the above criteria
were retrieved. Due to the YouTube access limit, only the first 800 videos could be
analyzed. These videos were viewed and manually filtered to remove irrelevant videos.
Similarly, videos less than two minutes long were also removed, as these were not
expected to communicate any meaningful knowledge related to information literacy. It
was interesting to note that many retrieved videos using the keyword information
literacy did not actually cover any distinct aspect of information literacy. Even some
videos on library jokes, advertisements and other types of literacy used the keyword
information literacy. In addition, videos appearing multiple times or with navigation
problems were removed. After manual filtering, 70 unique videos on information
literacy were selected for more in-depth analysis.

The shortlisted videos were examined to identify their content, extent of coverage
and other related attributes. Two approaches were used for the content analysis of
these videos, i.e. coverage of Big6 skills and use of different instruction styles. For
the Big6 analysis, the selected videos were examined to determine which skills were
covered and the depth of their treatment. The coverage given to each Big6 skill was
determined using a three-point scale (fair, good and excellent), based on the time
allocation, sub-topics covered, examples used and other factors. For instruction style,
the selected videos were analyzed for the teaching style used, such as lectures,
tutorials, discussions, PowerPoint slides, oral presentations, interviews or a
combination of styles.

3 Findings
The following sections provide an analysis of the 70 unique videos on information
literacy skills. The discussion is divided into two major sections, i.e. coverage of Big6
skills, and instructional approaches and other attributes.

3.1 Coverage of Big6 Skills

The selected videos were analyzed for the coverage given to the different Big6 skills,
comprising: task definition, information seeking strategies, information location and
access, use of information, synthesis, and evaluation. It was found that 11% of the
videos discussed task definition skills, which include problem definition and
identification of information needs (Figure 1). The highest percentage (26%) of the
videos covered different strategies that can be used for seeking the needed information.
The percentages of videos teaching information location and access skills, information
use skills and information synthesis were 23%, 20% and 13%, respectively. Only 7% of
the videos covered information evaluation related skills. It appeared that a majority of
the YouTube videos covered three IL skills, i.e. information seeking strategies,
information location and access, and information use skills. Comparatively fewer
videos taught the remaining three equally important information literacy skills.

[Pie chart: Task Definition (12%), Information Seeking (27%), Information Access (24%), Information Use (20%), Info. Synthesis (10%), Info. Evaluation (7%)]

Fig. 1. Big6 Information Literacy Skills Covered (N=70)



As many videos covered more than one information literacy skill, these videos
were further analyzed for the number of Big6 skills taught in each individual video. It
was noted that none of the videos discussed all six information literacy skills. One
skill was covered by 19 (27.2%) of the YouTube videos. The videos teaching two or
three information literacy skills numbered 22 (31.4%) and 17 (24.3%), respectively.
Four information literacy skills were covered by 11 (15.7%) of the videos. It
appeared that a majority of the videos covered two to four information literacy skills;
however, none of them covered all the Big6 skills.

3.2 Quality and Coverage of Videos

The selected videos were also analyzed to determine the depth of treatment given to
the different information literacy skills. For this purpose, the videos were categorized
into three groups, namely excellent, good, and fair. The criteria used for this
categorization were: coverage of topics, content (quality of script), duration,
presentation style, and the number and type of examples used. The following is a
brief description of each category:

Fair: Videos providing only an overview of a particular information literacy skill
without discussing its utility or application.
Good: These videos adequately describe a particular skill and how it can be applied.
They also spend adequate time explaining different aspects of a skill: how to apply it,
and how this skill can be acquired.
Excellent: These videos provide comprehensive information about a particular
information literacy skill, how it can be applied, and how it can help meet specific
information needs. These videos also provide several relevant examples.
It was satisfying to note that the coverage and content of the majority of the videos
were satisfactory. On the whole, the explanations given by 27 (38.6%) of the videos
were comprehensive, and adequate time was allocated for this purpose (Figure 2).
Some of these videos used good examples and presented the material in an engaging
manner. Similarly, 48.6% of the videos were of good quality and provided basic
information about the information literacy skill they covered.

[Chart: distribution of videos rated Excellent, Good and Fair]

Fig. 2. Quality and Coverage of Videos



3.3 Communication Style

Using an appropriate presentation style is important for imparting information literacy
skills. The following categories were used for grouping the YouTube videos:
Lecture: An instructor delivers a lecture -
http://www.youtube.com/watch?v=cnfmzIHzTds&feature=PlayList&p=70C6
Tutorial: Someone shows how to do certain activities -
http://www.youtube.com/watch?v=gbAcQcDTxdo
Discussion: Conversation between two or more people -
http://www.youtube.com/watch?v=WivEGp69p18&feature=related
Slide show: A slide show without any oral explanation -
http://www.youtube.com/watch?v=VrT0xFG9TB0
Presentation: A slide show with explanation -
http://www.youtube.com/watch?v=VaWv5S50Zww
Interview: Interviewing someone about the information literacy skill -
http://www.youtube.com/watch?v=fDwX7waeYkM

As many videos used a combination of communication techniques, a separate
percentage for each technique was calculated, based on the total of 70 videos. It was
found that 43 videos on information literacy took the form of presentations using
PowerPoint slides (Figure 3). Another popular technique, used by 37 videos, was the
tutorial, where viewers were shown how to undertake different information literacy
related tasks.

[Chart: Presentation (43), Tutorial (37), Slide Show (20), Lecture (17), Discussion (5), Interview (2)]

Fig. 3. Communication Techniques Used

Other commonly used communication techniques were slide shows without any
verbal explanation (20 videos) and lectures (17 videos). It appeared that YouTube
videos used a variety of communication techniques for exposing viewers to the
necessary information literacy skills, and a majority of these videos used a
combination of teaching techniques.

3.4 Duration of Videos

Figure 4 presents the duration of the YouTube videos on information literacy. Videos
of less than two minutes were excluded from the analysis, as these were less likely to
communicate any meaningful message. For example, one video of this kind,
categorized under information literacy, just mentioned the importance of libraries in
knowledge dissemination. It was found that 27% of the videos were two to four
minutes long. The duration of another 30% of the videos was between 4:01 and 6:00
minutes. On the whole, the duration of about one-half of the videos was from 4:01 to
8:00 minutes.

[Pie chart: 2:00-4:00 minutes (27%), 4:01-6:00 minutes (30%), 6:01-8:00 minutes (21%), 8:01-10:00 minutes (19%), >10 minutes (3%)]

Fig. 4. Duration of Videos (in minutes)

3.5 Quality of Videos

A big variation was observed in the quality of the YouTube videos on information
literacy. Many of the videos were not shot professionally and had poor production
and presentation quality (Table 1).

Table 1. Examples of Videos with Low Production and Presentation Quality

Video Attribute - URL of YouTube Videos
Poor image quality - http://www.youtube.com/watch?v=nl1QAg6oj-0
Poor camera angle - http://www.youtube.com/watch?v=Wj5ItyUc9eQ
Poor camera handling - http://www.youtube.com/watch?v=pMvAfVV41tQ&feature=related; http://www.youtube.com/watch?v=QzRJXkM6te0
Poor light quality - http://www.youtube.com/watch?v=KohVQYw2Y3A&feature=related
Poor sound quality - http://www.youtube.com/watch?v=h7TihkNJ3dU; http://www.youtube.com/watch?v=TKMNmIK3q_c
Distracting background - http://www.youtube.com/watch?v=h7TihkNJ3dU
Poor presentation skills - http://www.youtube.com/watch?v=KohVQYw2Y3A&feature=related
Reading from laptop - http://www.youtube.com/watch?v=Vkz2GygKE30

However, the quality of several videos was quite good. Most of these videos
were shot purposely to teach IL skills, predominantly by library directors
and other senior staff. For example, Dr. Bob Bakar's 12-video series
(http://www.youtube.com/watch?v=cnfmzIHzTds&feature=channel) was presented
with good picture and sound quality, good PowerPoint slides, a tutorial and
recommended readings. Similarly, another 11-video series by Nathan Pineplow from
the University of Colorado is rich in content, with an engaging presentation
(http://www.youtube.com/watch?v=1_Ksbwlaf88&feature=related).

4 Conclusion
A variety of techniques are being used for creating familiarity with and imparting
information literacy skills. YouTube is a very powerful medium which has the
potential to reach out to different segments of society on a 24/7 basis. Another
advantage is that it can deliver the intended message in a more interesting, effective,
and engaging manner. This analysis found that many libraries, particularly academic
libraries, were using YouTube videos for teaching different information literacy skills
to their users. However, it was a matter of concern that many videos were not of good
quality, probably shot by amateurs without adequate video production skills. Several
such videos had poor picture quality, inappropriate backgrounds, inadequate light and
poor sound recording. Although many other videos available on YouTube are also of
poor quality and produced by amateurs, it is desirable that libraries take extra care
when posting their videos, because these videos are likely to indirectly affect the
image of libraries and the perceived quality of the services they provide. It is,
therefore, desirable that libraries either get professional help in producing their
YouTube videos or have their staff trained for this purpose. Another aspect requiring
attention is the communication skills of presenters. Many presenters of information
literacy videos failed to demonstrate good communication skills. Library
professionals need to understand that, in order to take full advantage of the power of
audio-visual media, they need to make extra efforts to acquire the skills necessary for
effective communication through video. Library and information schools can also
consider teaching audio-video production skills to their students.

References
1. American Library Association: Presidential Committee on Information Literacy: Final
Report. Washington, D.C.,
http://www.ala.org/ala/mgrps/divs/acrl/publications/whitepapers/presidential.cfm
(retrieved July 21, 2011)
2. Preddie, M.I.: Canadian Public Library Users are Unaware of their Information Literacy
Deficiencies as Related to Internet Use and Public Libraries are Challenged to Address these
Needs. Evidence Based Library & Information Practice 4(4), 58–62 (2009)
3. Bruce, H.: Information Professionals as Agents for Information Literacy. Education for
Information 20(2), 81–106 (2002)

4. Burdick, T.: Pleasure in Information Seeking: Reducing Information literacy. Emergency
Librarian 25(3), 13–17 (1998)
5. Eisenberg, M., Berkowitz, B.: Information Problem-solving: The Big Six Skills Approach
to Library and Information Skills Instructions. Ablex Publishing Corp., Norwood (1990)
6. Burke, S.C., Snyder, S.L.: YouTube: An Innovative Learning Resource for College Health
Education Courses. International Electronic Journal of Health Education 11, 39–46 (2008)
7. Gilroy, M.: Higher Education Migrates to YouTube and Social Networks. The Hispanic
Outlook in Higher Education 19, 12–14 (2009)
8. Primary Research Group, Inc.: Libraries and the Mega-Internet Sites: A Survey of How
Libraries Use and Relate to Google, Yahoo, Wikipedia, Ebay, Amazon, Facebook, YouTube &
Other Mega Internet Sites. Primary Research Group, New York (2008)
Hybrid Learning of Physical Education Adopting
Lightweight Communication Tools

Ya-jun Pang

Luoyang Institute of Science and Technology, Henan Province, P.R. China, 471023
shizi7677@hotmail.com

Abstract. Communication is needed anywhere and anytime between students
and teachers in the hybrid learning of physical education (PE) practice. The
architecture of FPEHLP is presented to create such an environment in our
practice, where the hybrid learning of PE can be accomplished through mixed
lightweight communication tools. The core components of FPEHLP are the
Smart Deliverer and the Dynamic Learning Space. The Smart Deliverer
integrates the heterogeneous learning resources of different education
platforms. The Dynamic Learning Space is composed of a Video-editor and an
IM Adaptor to realize visible and flexible interaction in hybrid learning of PE
practice. With the IM Adaptor, students and teachers can communicate with
each other anytime and anywhere through lightweight communication tools
such as QQ, MSN and Fetion. The results of the experiment show that the
number of students in the experiment group who like physical exercise is 21.6%
higher than in the control group, and 87.5% of students in the experiment
group think the FPEHLP can support their physical exercise.

Keywords: Hybrid learning; Physical exercise habit; Higher education; Lightweight communication technology.

1 Introduction
Hybrid learning refers to the organic integration of online learning (e-Learning) and
traditional classroom learning (face-to-face) [1-3]. It not only preserves the teacher's
guiding, inspiring and leading roles in monitoring the teaching process, but also fully
embodies the initiative, enthusiasm and creativity of the students as the main body of
the learning process [5-7]. Influenced and inspired by foreign training methods,
domestic enterprises have gradually increased their use of hybrid learning, although
education is still at the initial stage of adopting blended learning. More and more
people are now concerned with hybrid learning, and it has been widely used at home
and abroad in English, computing and other teaching areas, achieving better teaching
results. The present study and practice of hybrid learning focuses on the principles,
definitions, strategies and models of hybrid learning; research on the learning effect
of hybrid learning, instructional design and related factors is relatively scarce. As far
as PE is concerned, there is little research on hybrid learning in sports; the literature
[4, 8] mainly focuses on the structural model of a physical education hybrid learning
platform (PEHLP) and on the use of video annotation and editing technology to improve
the teaching of sports hybrid learning. Due to the impact of network bandwidth,
problems such as the recording and transmission efficiency of learning materials
(e.g., uploading videos) are prominent, and its popularity is limited. In order to solve
these problems, this paper establishes a new hybrid learning model for physical
education, which uses lightweight interactive communication tools to blend
traditional F2F and e-Learning instruction.

2 Hybrid Learning of PE and Its Model


2.1 Current Trends in e-Learning

The evolution of e-Learning systems over the last two decades has been impressive. In
their first generation, e-Learning systems were developed for a specific learning
domain and had a monolithic architecture. Gradually, these systems evolved and
became domain-independent, featuring reusable tools that can be used effectively in
virtually any e-Learning course. Systems that reach this level of maturity usually
follow a component-oriented architecture in order to facilitate tool integration. The
LMS is an example of this type of system, integrating several types of tools for
delivering content and for recreating a learning context (e.g. Moodle, Sakai) [9].
The present generation focuses on the interchange of learning objects and learners'
information through the adoption of new standards that have brought content sharing
and interoperability to e-Learning. In this context, several organizations have
developed specifications and standards in recent years, defining standards for
e-Learning content and interoperability, among many others. These systems, based
around pluggable and interchangeable components, led to oversized systems that are
difficult to adapt to changing roles and new demands, such as the integration of
heterogeneous services based on semantic information, the automatic adaptation of
services to users (both learners and teachers), and the lack of a critical mass of
services to supply the demand of e-Learning projects [9]. These issues triggered a
new generation of e-Learning platforms that can be integrated into different scenarios
based on Service Oriented Architecture (SOA) technology. In the last few years there
have been initiatives to adapt SOA to e-Learning [10]. These initiatives (e-Learning
frameworks) share the same goal: to provide flexible learning environments for
learners worldwide. These e-Learning frameworks make intensive use of the
standards for e-Learning content sharing and interoperability developed in recent
years by several organizations (e.g. ADL, IMS GLC, and IEEE). Therefore, we
conclude that hybrid learning should be open and should be equipped with
interoperability and flexible learning environments.

2.2 Hybrid Learning Model of Physical Education

Generally speaking, a physical education course focuses on the teacher's standard
demonstration of actions and the students' self-practice or self-experience. A survey
suggests that 43% of students think the limited classroom demonstration of technical
actions cannot effectively help them form a correct overall impression, 56% of
students believe the interval between two physical education classes (once a week) is
too long, and 67% of students think the exchange and interaction between teacher
and students outside the PE classroom is poor [4].

Fig. 1. Hybrid Learning Model of Physical Education Platform

The hybrid learning model not only makes up for the shortcomings of traditional F2F
teaching (untimely feedback, a single instructional medium, the separation of learning
inside and outside the classroom, etc.), but also overcomes the poor monitoring of
teaching in the teacher-led online learning mode, among other shortcomings. Based
on existing material, the hybrid learning model of physical education uses modern
teaching media to extend the time, space and place of PE teaching without limit, so as
to achieve a "teaching and learning anytime, anywhere" effect. Given the particularity
of the environment and of physical education, PE can be divided into outside physical
education and classroom physical education, according to the place and the way
students learn. The basic model of hybrid learning organically combines two
instruction methods: face-to-face and e-Learning (refer to Figure 1). In Figure 1,
classroom physical education is carried out by guided navigation, so the face-to-face
instruction mode is the primary means and e-Learning is supplementary. On the
contrary, outside physical education is carried out by learner self-navigation, so
e-Learning is the primary means and face-to-face is supplementary. Furthermore,
e-Learning is divided into e-Learning by oneself and interactive e-Learning, and
interactive e-Learning is mainly adopted, according to the characteristics of physical
education. To realize interactive e-Learning, we put forward an education platform
named PEHLP in [4, 8]. As mentioned above, the main characteristic of PEHLP is
that it relies on video editing (video review) technology to smooth away
communication barriers. Although PEHLP can support physical education, it also has
some defects: firstly, teachers argue that video review costs them too much time;
secondly, students hope that PEHLP could supply instant communication tools, so
that their questions can be answered in a short time and they can communicate with
their classmates; and so on.

3 Flexible Frameworks for Hybrid Learning of PE


The flexible framework for hybrid learning of physical education (FPEHLP) is
equipped with lightweight communications. Compared with the model in [8], this
computer-based education platform for hybrid learning of physical education is more
open and flexible. The FPEHLP is composed of three subsystems, the same as
PEHLP: identity identification, the physical course deliverer and the dynamic
learning space. The Identity Identification Module (IIM) accomplishes user
registration and authorization for system safety. In this paper, the physical course
deliverer and the dynamic learning space are presented in detail.

3.1 Smart Deliverer

A module named Smart Deliverer (SD) is adopted by the physical education course
deliverer to reuse the physical education course materials of the national elaborate
course or other education platforms. SD is a set of services that accomplishes
information exchange between the FPEHLP and heterogeneous education platforms,
for example the education platform of the National Elaborate Physical Education
Course. The main components of the Smart Deliverer (refer to Figure 2) include the
Theory Material Learning Space, Courseware Warehouse, Multimedia Library and
Test Bank. More details are illustrated in [8]. To make learning interesting, the
Microsoft Agent toolkit and Speech SDK are adopted in FPEHLP. We put smart flags
in the learning material and courseware warehouse where the knowledge is important
and needs to be illustrated in more detail. When a student is studying a key point and
clicks it, the agent appears and speaks it. For example, in Figure 3, when the student
clicks the key point "Aerobic exercise", the Agent appears and explains what aerobic
exercise is, just like the teacher.
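
The paper realizes this with the Microsoft Agent toolkit and Speech SDK. As a rough, library-agnostic illustration of the same "smart flag" idea, the following Python sketch (all names and contents are hypothetical; pyttsx3 is used here only as a stand-in text-to-speech engine) maps flagged key points to spoken explanations:

import pyttsx3  # stand-in TTS engine, not the Microsoft Speech SDK used by FPEHLP

# smart flags: key point -> spoken explanation (contents are illustrative)
SMART_FLAGS = {
    "Aerobic exercise": "Aerobic exercise is sustained, oxygen-dependent activity "
                        "such as jogging, swimming or rope skipping.",
}

def on_key_point_clicked(key_point):
    """Called when the student clicks a flagged key point in the courseware."""
    text = SMART_FLAGS.get(key_point)
    if text:
        engine = pyttsx3.init()
        engine.say(text)       # the agent "speaks" the explanation
        engine.runAndWait()

on_key_point_clicked("Aerobic exercise")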

Fig. 2. Framework of Hybrid Learning of PE with Lightweight Communication Tools

3.2 Dynamic Learning Space

The learning process of e-Learning in the hybrid learning context can be described as
follows: first, the student studies the material using the learning resources of the
education platform; then he/she does exercises/tests using the test papers supplied by
the Test Bank of the education platform; finally, by checking the exercises/tests
against the answers, he/she can clearly know whether his/her answers are right or
wrong, since each answer is unique. As far as physical education is concerned, the
visualization of standard demonstration actions, the teacher's hand-in-hand teaching,
and the communication between teacher and student play very important roles in the
learning process. The student may not perform an action well even if he/she grasps
every detail of it. In the F2F context, the student can only find out mistakes with the
help of classmates or the teacher in the field; what's more, the mistakes need to be
illustrated with instructions and demonstrated hand-in-hand. To mimic the F2F
context, video data tools should be supplied. With the video data tools, the teacher
can make teaching materials easily and conveniently, for example by adding
commentary to a video at key points. Another function of the video data tools is
reviewing the videos of students' exercises/tests. To meet these requirements, the
Dynamic Learning Space (DLS) is proposed, which is composed of a Video Review
Module (VRM), Video Conference and an IM Adaptor (refer to Figure 2). The core
modules of the DLS are the VRM and the IM Adaptor. The function of the VRM is
video reviewing and annotation. With the VRM, the teacher can review the video or
images of a student's actions, pick out the wrong actions and add still annotations. By
watching the reviewed video or images, the student can find out his/her mistakes and
get instructions.

Fig. 3. Agent in Theory Learning Material

The FPEHLP is equipped with flexible and lightweight communication tools through
the IM Adaptor. In our project, the IM Adaptor currently supports Fetion (sending
messages to mobile phones), QQ and MSN Messenger (refer to Figure 2). Using
Fetion, a student can send a short text message to his/her teacher for help from a
mobile phone anywhere. Through a QQ group, students can discuss the study material
or action instructions with each other, or video chat, over the Internet at home or via
WAP in the field. Using the video conference function of the QQ group, the teacher
can share action videos with the students in the group.
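
As a rough illustration of the adaptor idea (all class and method names below are hypothetical, not the FPEHLP API), the following sketch shows a single notify() call fanning out to whichever lightweight messaging backends are registered:

class FetionBackend:
    def send(self, contact, text):
        print(f"[Fetion SMS to {contact}] {text}")   # placeholder for real delivery

class QQGroupBackend:
    def send(self, contact, text):
        print(f"[QQ group {contact}] {text}")        # placeholder for real delivery

class IMAdaptor:
    """Dispatches one notification through every registered backend."""
    def __init__(self):
        self.backends = []
    def register(self, backend):
        self.backends.append(backend)
    def notify(self, contact, text):
        for backend in self.backends:
            backend.send(contact, text)

adaptor = IMAdaptor()
adaptor.register(FetionBackend())
adaptor.register(QQGroupBackend())
adaptor.notify("grade-2008", "New action video posted; please review it before class.")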

4 Evaluation and Results

4.1 Purpose

The primary aim of this project is to devise a simple, flexible framework for hybrid
learning of physical education which can promote students' life-long physical
education and cultivate healthy physical training habits and hobbies. The following
issues were specifically focused on:

Would hybrid learning promote the effect of physical education, and in particular, can
hybrid learning promote students' exercise habits in the university culture?
Would the education platform for hybrid learning with lightweight communication
tools meet students' physical exercise requirements?

4.2 Method and Results

Since September 2007, we have investigated the students. In the initial stage, students
watched inspirational films and videos created by teachers in the classroom; then a
variety of teaching content and standard videos were placed on the FPEHLP for
students to study; then, in September 2008, QQ groups were set up, one per grade, to
provide mutual exchange between students and teachers and among students through
the IM Adaptor of FPEHLP.

Table 1. Learning Effect of Experiment (averaged marks)

No. | Group        | Long-distance Running | Standing Long-Jump | Rope Skipping | P/S
I   | Experimental | 42.3 | 48.2 | 66.5 | 1.0/0.01
I   | Control      | 42.1 | 48.6 | 66.2 |
II  | Experimental | 66.2 | 65.8 | 81.3 | 0.53/0.45
II  | Control      | 55.4 | 57.2 | 68.3 |

Notes: (1) No. I is the data before the experiment; No. II is the data after the
experiment. (2) P/S refers to the Pearson correlation coefficient and its significance.

All the students were divided into an experimental group and a control group. In the
experimental group, encouragement and exchanges between students and teachers,
together with timely feedback of teaching information, were provided, with
self-assessment after the end of each new teaching unit; each student exchanged
messages at least once a week and watched a movie or inspirational article at least
every two weeks. The control group used only the traditional physical education
learning model. During one school year of physical education, three examinations
were used as indicators of physical fitness testing: long-distance running (1,000
meters for boys and 800 meters for girls), standing long jump, and rope skipping
(one-minute count). Content analysis was used to explore the information collected
from participants. The results revealed that hybrid learning can significantly promote
the teaching/learning effect (refer to Table 1). From the experimental results
(Table 1), the post-experiment Pearson correlation coefficient and significance are
0.53 and 0.45; since 0.45 is greater than 0.05, there are differences between the
experimental group and the control group on the test indicators. We notice that the
long-distance running, standing long jump and rope skipping results of the
experimental group increased by 23.9%, 17.6% and 14.8%. In contrast, the indicators
of students in the control group increased by 13.3%, 8.6% and 2.1%. Therefore,
hybrid learning can promote the effect of physical education. To find out why hybrid
learning can promote the effect of physical education, we surveyed physical
education attitude, physical education aims, learning initiative and so on. The results
show that 97% of students in

the experimental group like physical exercise, 86.2% of them are eager to take part in
physical exercise, and 86.2% of them like hybrid learning (refer to Table 2). In
addition, 87.5% of the students think that the FPEHLP can meet their physical
exercise requirements.

Table 2. Results of Comprehensive Survey

Content | Indicator | I (%) | II (%)
How did you like physical exercise? | Crazy about it | 18.5 | 70.8
 | Prefer | 29.2 | 14.4
 | Like | 27.7 | 10.8
 | Dislike | 24.6 | 3.0
What is your attitude to physical exercise? | Positive | 27.7 | 86.2
 | Normal | 53.8 | 9.2
 | Negative | 18.5 | 4.6
Would you like hybrid learning? | Yes | 26.2 | 86.2
 | No | 73.8 | 13.8
Can the FPEHLP meet your physical exercise requirements? | Best | - | 27.2
 | Yes | - | 60.3
 | No | - | 12.5

Notes: 1) No. I is the data before the experiment; No. II is the data after the
experiment. 2) Because the students did not know the FPEHLP before the experiment,
the No. I results for the fourth question are empty.

5 Discussion and Conclusions


Based on the results of this research we can draw some conclusions:
First of all, the results of this research suggest that hybrid learning can promote
students' initiative in physical education learning, and that students would like to learn
more physical education academic knowledge under flexible conditions. Studies have
shown that once people have enough physical education academic knowledge, it is
beneficial for cultivating life-long physical training habits, establishing the right value
orientation, and adapting to society easily.
Secondly, the education platform for hybrid learning must be equipped with a video
review function and flexible communication tools. Physical education focuses on the
teacher's standard demonstration of actions and the students' self-practice or
self-experience, so the education platform should pay more attention to guidance. The
powerful lightweight communication tools of FPEHLP realize communication among
students, and between students and teacher, at any time and in any place.
To sum up, e-Learning is a growing trend in education. Students are increasingly
spending more time on the Internet or mobile phones. This has pushed instructors to
move some of the learning process to online mode or WAP. Hybrid learning is a new
educational approach that combines the advantages of classroom training and
e-Learning. However, designing an effective hybrid course is challenging for teachers,
especially for experience- and practice-focused courses.

Acknowledgements. This work has been partly supported by the 2010 Luoyang
Institute of Science and Technology Foundation (2010YR14) and the 2011 Henan
Province Social Science Federation Project (SKL-2011-2573).

References
1. Graham, C.R.: Blended Learning Systems: Definition, Current Trends, and Future Directions.
In: Handbook of Blended Learning: Global Perspectives, Local Designs, pp. 3–21. Pfeiffer,
San Francisco (2005)
2. Zhao, S.J.: Application of Network Education Technology in Physical Education of Higher
Education. Dissertation of South China Normal University (2007)
3. Kim, W.: Towards a Definition and Methodology for Blended Learning. In: International
Workshop on Blended Learning 2007 (WBL 2007), pp. 15–17. University of Edinburgh,
Scotland (2007)
4. Pang, Y.-J.: Hybrid Learning of Physical Education Using National Elaborate Course
Resources. In: Tsang, P., Cheung, S.K.S., Lee, V.S.K., Huang, R. (eds.) ICHL 2010.
LNCS, vol. 6248, pp. 270–281. Springer, Heidelberg (2010)
5. Tan, C., Liu, Y.: Hybrid Learning and Discussion on its Implementation Measures in
Distance Education. Modern Distance Education Research 81(3), 36–38 (2006)
6. Qi, Y.: Analysis on Application of Hybrid Teaching Mode in Higher Education. In: Hybrid
Learning: A New Frontier, pp. 151–160. City University of Hong Kong (2008)
7. Karen, V., Charles, D., et al.: Blended Learning Review of Research: An Annotative
Bibliography. In: The ALN Conference Workshop on Blended Learning & Higher
Education (2005)
8. Pang, Y.-J.: Techniques for Enhancing Hybrid Learning of Physical Education. In: Tsang, P.,
Cheung, S.K.S., Lee, V.S.K., Huang, R. (eds.) ICHL 2010. LNCS, vol. 6248, pp. 94–105.
Springer, Heidelberg (2010)
9. Dagger, D., O'Connor, A., Lawless, S., et al.: Service Oriented eLearning Platforms: From
Monolithic Systems to Flexible Services. Internet Computing 11(3), 28–35 (2007)
10. Schools Interoperability Framework, http://www.sifassociation.org
Experiments on an E-Learning System
for Keeping the Motivation

Kazutoshi Shimada, Kenichi Takahashi, and Hiroaki Ueda

Graduate School of Information Science, Hiroshima City University,


3-4-1, Ozukahigashi, Asaminami-Ku, Hiroshima, Japan
takahasi@hiroshima-cu.ac.jp

Abstract. E-learning systems that use computers and the Internet have become
popular. E-learning systems have many advantages. However, users often lose
their motivation for learning in the process of studying, and the frequency
with which they use e-learning systems sometimes decreases. In order to improve
their motivation, we add two functions in this paper: (1) a function by which
users are praised or scolded, and (2) a function limiting the answering time.
We also check the utility of these functions by experiments.

Keywords: E-learning, Motivation, Display of Image.

1 Introduction
E-learning systems, with which users study using a computer and the Internet, have
been widely used [1][2]. There are various good points in using e-learning systems;
for example, users can study at any place where Internet access is provided. On
the other hand, e-learning systems also have weak points. Some users easily get
bored, since they just read texts displayed on a computer screen and have no
chance to communicate with teachers; they do not feel joy or mental stress, such as
being praised or being scolded, in the process of learning. As a result, a user often
loses the will to keep learning, and the frequency of using the e-learning system
decreases. Recently, a new approach called entertainment learning has been studied for
keeping the learning will; entertainment learning incorporates "the fun" of games
into an education system [3].
The objective of this study is to construct an e-learning system that improves or
keeps a user's motivation for learning. In this research, we add a function of
praising and scolding and a time limit on the answering time to the web-based
e-learning system "Let's Study English" developed by our laboratory [4]. In addition,
with Ajax technology, we build a seamless environment by decreasing the frequency
of page transitions in order to improve the learning efficiency [5]-[7]. We check the
effectiveness of these functions by experiments.
This paper is organized as follows: Section 2 describes the functions for keeping
the learning will or motivation. Section 3 shows the experimental results performed
to evaluate the functions added to the system. Finally, Section 4 gives some
conclusions and future tasks.


2 Functions for Keeping the Learning Motivation

2.1 Displaying Images to Scold and Praise

In this study, we intend to improve or maintain the learning will of students by
displaying images that scold or praise the student according to his or her answering
situation, and thus by making the student feel the joy of solving the problem, or the
chagrin (or vexation) of failing to solve it, through those images.
We prepare four images, namely Evaluation A-D: two images (Evaluation A and
Evaluation B) for praising and two images (Evaluation C and Evaluation D) for
scolding. Each image includes a sentence to praise or scold the student. For example,
the image Evaluation A includes the sentence "You are splendid." The four images and
their sentences are listed as follows, in descending order of evaluation of the
student's answering situation.

Evaluation A: "You are splendid."
Evaluation B: "You are good."
Evaluation C: "You should do your best."
Evaluation D: "You should study again!"

Depending on how many times a student gives wrong answers before making the correct
answer to a task, the system determines which image to display; a small sketch of this
rule follows.
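As a minimal illustration of this rule, the following Python sketch maps the mistake
count to an evaluation image. The function name and the exact 0/1/2 thresholds for
Evaluations B and C are our reading of the description above, not code from the
actual system.

```python
def select_evaluation_image(mistakes: int) -> str:
    """Map the number of wrong answers given before the correct one
    to one of the four evaluation images (our assumed mapping)."""
    if mistakes == 0:
        return "Evaluation A"   # "You are splendid."
    if mistakes == 1:
        return "Evaluation B"   # "You are good."
    if mistakes == 2:
        return "Evaluation C"   # "You should do your best."
    return "Evaluation D"       # three or more mistakes: "You should study again!"
```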
An example of the flow of page transitions, with the displayed images to scold and to
praise, is shown in Figure 1. Fig. 1(a) is the page that presents a problem (a
question). If a student answers correctly without any mistake, the image Evaluation A,
whose evaluation is the highest, is displayed as shown in Fig. 1(b). If the student
answers incorrectly at Fig. 1(a), the page shown in Fig. 1(c) is displayed; if he then
answers correctly, the image Evaluation B, whose evaluation is the second highest, is
displayed as shown in Fig. 1(f). As the number of mistakes increases, the system
displays images with lower evaluations. If the number of mistakes is three or more,
the system displays the image with the lowest evaluation, as shown in Fig. 1(h).
When the screen of Fig. 1(a) is replaced by the screen of Fig. 1(c), no page
transition occurs, since the replacement is done by simply renewing part of the page
using Ajax technology; users can keep answering the problem without waiting for a
page transition.

2.2 Limiting the Answering Time

We add a function limiting the answering time, which is the interval from the
moment the system displays a problem on the screen to the deadline by which the user
can input an answer to the system. This function aims to give the user a feeling of
entertainment, and to keep or improve the user's learning will by letting the user
concentrate on solving the problem.
In this study, we set the time limit to 10 seconds for every problem of the
one-out-of-four selection type, in which a user chooses the one right answer among
four choices.

Fig. 1. An example of the flow of transition of pages with displayed images to scold and to
praise in the e-learning system.

When the time is up, the screen page presenting the problem is replaced with a screen
page that includes the teacher image and the sentence "Time is up. You should
concentrate on learning more."

3 Experiments

3.1 Overview of Experiments

In this section we examine whether the functions added to the system to improve or
keep the learning will are effective.
We gathered 14 students of our university as cooperators in the experiments; they
used the system to learn English words. After they take each test, we administer a
questionnaire about the system and their feelings.
For the learning experiments, we prepare two types of systems that differ in how
easily users are praised during learning: one system tends to praise users easily,
and the other tends to scold them. In addition, for comparison, we perform an
experiment with a system that has neither the function to scold and praise nor the
time-limit function.
The three systems which we use for the experiments are as follows:
SYS1: the system that allows a user to make mistakes twice and praises the user when
he answers correctly at the third trial; the user is scolded when he makes
three or more mistakes.
SYS2: the system that scolds a user as soon as he makes a mistake.
SYS3: the system that has neither the time-limit function nor the function to scold
and praise users.

The cooperators in the experiments are divided into two groups, based on the grade
points obtained in the technical English lecture of our university, so that the two
groups differ little in English ability. Let us call the groups Group 1 and Group 2.
Group 1 uses the system SYS1 for learning, and Group 2 learns with the system SYS2.
In addition, both groups use the system SYS3. We also prepare two tests, called
Test A and Test B; each test consists of 40 problems.
Test A: the test users take after learning with the system SYS3.
Test B: the test users take after learning with the system SYS1 or SYS2.
For example, Group 1 takes Test B after learning with SYS1 first, and then takes
Test A after learning with SYS3.
We consider what kind of change is observed in test points between the prior test
and the test taken after learning with each of the three systems, namely SYS1, SYS2
and SYS3. SYS1 is a generous system that tends to praise users. SYS2 scolds a user
as soon as he makes a mistake. SYS3 has neither the time-limit function nor the
function to scold and praise users.
Using a questionnaire, we also investigate what kinds of changes in feelings users
experience, and whether the learning will improves or not, according to the system
with which they study.

3.2 Experimental Results

We performed the experiments described in Section 3.1. In the evaluation, we employ
the relative increase rate of the points obtained in Test A and Test B. Let R denote
the relative increase rate, and let P1 and P2 denote the point obtained in the prior
test and the point obtained in the test after learning, respectively. We calculate
the relative increase rate as

\[ R = \frac{P_2 - P_1}{P_1}. \qquad (1) \]
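For concreteness, Eq. (1) can be computed as in the small sketch below; the sample
scores are hypothetical, not data from the experiments.

```python
def relative_increase_rate(p1: float, p2: float) -> float:
    """R = (P2 - P1) / P1, the relative increase rate of test points."""
    return (p2 - p1) / p1

# Hypothetical scores: 20 points in the prior test, 26 after learning.
print(relative_increase_rate(20, 26))  # 0.3
```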
Figure 2 plots the relative increase rates of the users who studied with SYS1 and
SYS2. We can see from Figure 2 that the average relative increase rate for users with
SYS1 and that for users with SYS2 are almost the same, but one user shows a great
change in the increase rate with SYS1, while no one with SYS2 shows a great increase.
Figure 3 shows the relative increase rates of users with SYS2 and SYS3. We can
see from Figure 3 that some students show low increase rates with SYS2 compared to
those with SYS3. The authors think that this is caused by the images that scold
students: some students feel too stressed by being scolded, and thus their increase
rates are low. After learning with SYS2, learning with SYS3 makes students relaxed.
We summarize the average points obtained from the questionnaire for each group in
Table 1. We can see from the results for items 4 to 7 in Table 1 that the users in
Group 1, who learned with the system that tends to praise, feel more joy and less
sadness or irritation than the users in Group 2, who learned with the system that
tends to scold. On the other hand, users in Group 2 feel more sadness and irritation
than the users in Group 1. From item 8 of the questionnaire, we see that users in
both Group 1 and Group 2 improve their learning will. We also see from item 9 that
limiting the answering time gives stress to most users. Further, we see from items 11
and 12 that the users in Group 1, who used the system that tends to "praise", feel
more familiar with the system than the users in Group 2, who used the system that
tends to "scold".

[Figure 2: bar chart of the relative increase rate R (vertical axis, 0.0 to 0.7) for students A-G with SYS1 and H-N with SYS2, each group followed by its average Av.]

Fig. 2. The comparison of the relative increase rates of students with SYS1 and SYS2

[Figure 3: bar chart of the relative increase rate R (vertical axis, 0.0 to 0.6) for students H-N under SYS2 and SYS3, with averages Av.]

Fig. 3. The comparison of the relative increase rates of students with SYS2 and SYS3

Table 1. Questions in the questionnaire

Questions                                                                      Group 1  Group 2  Av.
1. Can you easily understand how to use the system?                              4.9      4.6    4.7
2. Does the system have enough functions to study?                               4.0      3.9    3.9
3. Can you learn effectively with the system?                                    4.6      3.7    4.1
4. Do you use the system more easily than the previous system?                   4.9      4.4    4.6
5. Do you feel glad when the image for the correct answer is displayed?          3.9      3.0    3.4
6. Do you feel sad when the image for the incorrect answer is displayed?         1.7      2.4    2.1
7. Do you feel irritated when the image for the incorrect answer is displayed?   2.1      3.4    2.8
8. Did your learning will improve by displaying images?                          3.6      3.6    3.6
9. Did you feel chagrin because of the time limit?                               5.0      5.0    5.0
10. Did your learning will improve by the time limit?                            3.3      3.1    3.2
11. Do you want to use this system again?                                        4.6      3.9    4.2
12. How much is the degree of total satisfaction with the system?                4.4      3.9    4.1

From these experiments, the authors believe that the function to "scold" or "praise"
users and the time-limit function newly added to the system are effective in improving
or keeping the learning will.

4 Conclusions
In this study, we added functions to the e-learning system aimed at improving the
learning efficiency. The implemented functions are a function that sets a time limit
for every problem, and a function that displays images to "scold" or "praise" users
when the answer to a problem is wrong or correct, respectively. We employed Ajax
technology to enable users to continue answering without page transitions after a
wrong answer, and thus to shorten the waiting time.
In order to show the effectiveness of the functions, we performed experiments. In
the experiments, we prepared two systems, one that tends to scold and one that tends
to praise users, in order to examine which is more effective in keeping the learning
will. We also examined the effectiveness of the time-limit function. Further, we
administered a questionnaire to examine what kinds of changes appeared in the learning
will. In the experiments, no difference in learning will was seen on average between
the system that scolds and the system that praises, while for some users such systems
showed effectiveness in learning. We also showed that the function to "scold" or
"praise" helped improve the learning will more than the time-limit function did. As
for the time limit, we need more experiments, such as setting a more appropriate
limiting time.
As a future task, we consider the following: we can improve the system by adding an
entertainment function, such as games, so that users do not feel bored.

Acknowledgments. This work is partly supported by the Hiroshima City University
Grant for Special Academic Research (General Studies No. 216).

References
1. Okamoto, T., Mizoguchi, R.: Artificial Intelligence and Tutoring System. Ohm Inc., Tokyo
(1990) (in Japanese)
2. Saitoh, A., Nishida, T., Nakanishi, M., et al.: Classroom Support and System
Administration Support for Large Scale Educational Computer System. Trans. IEICE
Japan J84-D-I(6), 956–965 (2001) (in Japanese)
3. Takaoka, R., Watanabe, Y., Matsushima, W., Onitake, S., Horikawa, T., Okamoto, T.: A
Development of Game-based Learning Environment to Activate the Interaction among
Learners. IEICE SIG Technical Report, ET2007-96, pp. 69–72 (March 2008) (in Japanese)
4. Yamashita, Y., Takahashi, K., Ueda, H., Miyahara, T.: Construction and Analysis of Web-
based E-learning System for Exercises. In: Proc. Chugoku Branch Conference of Electrical
and Information Related Institutes (2004) (in Japanese)
5. Takahashi, T.: Beginner's Guide: Asynchronous JavaScript + XML. Softbank Creative Inc.,
Tokyo (2005) (in Japanese)
6. Urushio, T.: Introductory 10-day Class of Ajax. Syoueisya Inc., Tokyo (2007) (in Japanese)
7. Takeuchi, G., Takeuchi, M., Sano, H.: A Sentence Interface Program for Language
Learning by Using Ajax Technology. IPSJ SIG Technical Report, 2008-CE-93, pp. 147–154
(February 2008) (in Japanese)
8. Construction of WWW Servers,
http://cyberam.dip.jp/linux_server/www_server.html
Object Robust Tracking Based on an Improved Adaptive Mean-Shift Method

Pengfei Zhao1, Zhenghua Liu1, and Weiping Cheng2


1
School of Automation Science and Electrical Engineering,
Beihang University, Beijing, 100191, China
2
Chinese Helicopter Research and Development Institute, Jingdezhen, 333001, China
sweatytooth@126.com

Abstract. Mean-shift based tracking techniques have been used successfully in
target tracking. However, the classic mean-shift based tracking algorithm uses a
fixed kernel bandwidth, which limits performance when the target's orientation
and scale change. In this article, we first outline the basic concepts of the
Mean Shift algorithm and its application to target tracking in visual tracking.
Then an improved adaptive kernel-based object tracking method is proposed, which
extends the 2-dimensional mean shift to 4 dimensions and meanwhile combines
multiple-scale and orientation theory in the tracking algorithm. A multi-kernel
method is also brought forward to improve the tracking accuracy. Finally,
experimental results validate that the new algorithm can adapt to changes of the
orientation and scale of the target effectively.

Keywords: Mean-shift, target tracking, kernel function, orientation, scale.

1 Introduction
Visual target tracking is currently widely used in the military, video surveillance,
transportation, etc. Research on robustness and real-time performance is a hot issue in
visual target tracking. The Mean Shift algorithm is a versatile nonparametric probability
density estimation method. This method was first used successfully in visual tracking by
Comaniciu. The articles [1-2] discussed the mean-shift based target tracking method and
the selection of the kernel function bandwidth. For tracking purposes, the Mean Shift
method generally uses histograms to model the target region and the candidate region.
Similarity between the target model and the target candidates in the next frame is
measured using the metric derived from the Bhattacharyya coefficient.
The Mean Shift algorithm is an optimal estimation method that ascends the gradient
towards the maximum of the probability density. Using the Mean Shift algorithm, we do not
need an exhaustive search of the candidate region: we use the kernel probability density
to describe the target features, and find the real target position by following the Mean
Shift vector. However, the basic Mean Shift algorithm does not provide a solution for the
orientation and scale of the target. Many improved Mean Shift algorithms have been
proposed. Collins [3] put forward a scale-space Mean Shift algorithm to handle target
scale change by adding a scale kernel. Yilmaz [4] built a 4-dimensional kernel space,
including location information, rotation and scale information, in order to obtain
optimized tracking results. In paper [5], the selection of the kernel scale via linear
search was discussed. Elgammal et al. reformulated the tracking framework as a general
form of joint feature-spatial distributions [6-7].
We give an improved adaptive Mean Shift object tracking method which achieves
improvements in two aspects: orientation and scale. We also propose a cascade kernel
function to reduce the computational complexity.
The rest of the paper is organized as follows. Section 2 analyzes the standard
mean-shift object tracking algorithm and the shortcomings of the classic mean-shift
algorithm. Section 3 brings forward the improvements in adaptive orientation and adaptive
scale; in this section, the cascade kernel is also proposed to reduce the computational
complexity. Then, we give the algorithm of adaptive mean shift in Section 4. The
experimental results and discussion follow in Section 5.

2 Mean-Shift Analysis

2.1 Standard Mean-Shift Algorithm


The Mean Shift algorithm is an unsupervised segmentation method based on mean shift
clustering. The idea of mean shift is to shift a fixed-size window iteratively to the
average of the data points within it. It estimates the gradient of a density function
without assuming any distribution structure of the data. A kernel density estimator, for
an arbitrary set of n data points \(x_1, \ldots, x_n\) in the d-dimensional space
\(R^d\), is defined as

\[ \hat f(x) = \frac{1}{n} \sum_{i=1}^{n} K_H(x - x_i) \qquad (1) \]

where \(K(x)\) is a non-negative kernel function and h is the window radius. Generally,
kernels are radially symmetric functions:

\[ K_H(x) = |H|^{-1/2}\, K\!\left(|H|^{-1/2} x\right) \qquad (2) \]

where \(\|x\|\) denotes the norm of x, the function \(k(x)\) is called the kernel
profile, and \(c_{k,d}\) is a normalizing constant. Using this kernel, formula (1)
becomes

\[ \hat f_{h,K}(x) = \frac{c_{k,d}}{n h^d} \sum_{i=1}^{n} k\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right). \qquad (3) \]
Other kernels include the Epanechnikov kernel, the Gaussian kernel and the normal
kernel. We assume that the first derivative \(k'(x)\) of the kernel profile exists for
all \(x \in [0, \infty)\), and define a new profile

\[ g(x) = -k'(x) \qquad (4) \]

and the corresponding kernel \(G(x)\) as

\[ G(x) = c_{g,d}\, g\!\left(\|x\|^2\right). \qquad (5) \]

So the kernel \(K(x)\) is the shadow of \(G(x)\). Then the gradient of the distribution
can be estimated as

\[ \nabla \hat f_{h,K}(x) = \hat f_{h,G}(x)\, \frac{2 c_{k,d}}{h^2 c_{g,d}} \left[ \frac{\sum_{i=1}^{n} x_i\, g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)} - x \right]. \qquad (6) \]

In (6), the term in the brackets is the mean shift vector \(m_{h,G}(x)\):

\[ m_{h,G}(x) = \frac{\sum_{i=1}^{n} x_i\, g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} g\!\left(\left\|\frac{x - x_i}{h}\right\|^2\right)} - x. \qquad (7) \]

2.2 Steps for Using Mean Shift in Target Tracking

a) Initialize the target model.
The probability of the feature \(u = 1, \ldots, m\) in the target model is defined as

\[ \hat q = \{\hat q_u\}_{u=1,\ldots,m}, \qquad \hat q_u = C \sum_{i=1}^{n_q} k\!\left(\|x_i\|^2\right) \delta[b(x_i) - u]. \qquad (8) \]

b) The target candidate is defined as

\[ \hat p(y) = \{\hat p_u(y)\}_{u=1,\ldots,m}, \qquad \hat p_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta[b(x_i) - u]. \qquad (9) \]

c) Then the mean shift location update is

\[ \hat y_{j+1} = \frac{\sum_{i=1}^{n_h} x_i\, \omega_i\, g\!\left(\left\|\frac{\hat y_j - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} \omega_i\, g\!\left(\left\|\frac{\hat y_j - x_i}{h}\right\|^2\right)}, \qquad j = 0, 1, \ldots \qquad (10) \]

where

\[ \omega_i = \sum_{u=1}^{m} \sqrt{\frac{\hat q_u}{\hat p_u(\hat y_0)}}\; \delta[b(x_i) - u]. \qquad (11) \]

A compact numerical sketch of this update is given below.
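The following minimal NumPy sketch implements the weights of Eq. (11) and one location
update of Eq. (10). It assumes the Epanechnikov kernel, whose derivative profile g is
constant inside the window; the array layout (pixels as an (n, 2) array, `bins` holding
the bin index b(x_i) of each pixel) is our own choice, not notation from the paper.

```python
import numpy as np

def weights(bins, q_hat, p_hat):
    """Eq. (11): w_i = sqrt(q_u / p_u) for the bin u = b(x_i) of pixel i."""
    return np.sqrt(q_hat[bins] / np.maximum(p_hat[bins], 1e-12))

def mean_shift_step(y0, pixels, w, h):
    """One location update, Eq. (10). For the Epanechnikov profile, g is
    constant, so g(.) reduces to an indicator of ||(y0 - x_i)/h||^2 < 1."""
    d2 = np.sum(((pixels - y0) / h) ** 2, axis=1)
    g = (d2 < 1.0).astype(float)
    den = np.sum(w * g)
    return np.sum(pixels * (w * g)[:, None], axis=0) / den if den > 0 else y0
```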

3 Mean-Shift with Adaptive Orientation

3.1 The Cascade Kernel Used in Tracking

If only the color space is used as the feature space, disturbances will severely impact
the tracking result. A multi-feature description can reduce this bad effect, at the cost
of increased computational complexity, so we propose a cascade kernel function to solve
this problem.
Two Epanechnikov kernel functions are used here. The first is a 1-dimensional
function, used as a weighting factor of the feature histogram:

\[ K_1(x) = \begin{cases} \dfrac{3}{4h^3}\left(h^2 - x^2\right), & |x| < h \\ 0, & \text{otherwise.} \end{cases} \qquad (12) \]

The second, a 2-dimensional kernel function, is used as the description of the color
space:

\[ K_2(x) = \begin{cases} \dfrac{2}{\pi h^4}\left(h^2 - x^{\mathrm T} x\right), & x^{\mathrm T} x < h^2 \\ 0, & \text{otherwise.} \end{cases} \qquad (13) \]

With the above, the target model can be described as follows:

\[ \hat q_u = C \sum_{i=1}^{n} k_1\!\left(\left\|\frac{u_i - u}{h_1}\right\|^2\right) k_2\!\left(\left\|\frac{x_i - x_0}{h_2}\right\|^2\right) \qquad (14) \]

where \(u_i\) denotes the feature (histogram bin) value at pixel \(x_i\), \(h_1\) and
\(h_2\) are the bandwidths of the kernel functions \(K_1(x)\) and \(K_2(x)\),
\(k_1(x)\) and \(k_2(x)\) are the shadow (profile) functions of \(K_1(x)\) and
\(K_2(x)\), and C is a standardized constant coefficient:

\[ C = \frac{1}{\sum_{i=1}^{n} k_2\!\left(\left\|\frac{x_i - x_0}{h_2}\right\|^2\right)}. \]

Similarly, the target candidate can be described as

\[ \hat p_u(y) = C_h \sum_{i=1}^{n_h} k_1\!\left(\left\|\frac{u_i - u}{h_1}\right\|^2\right) k_2\!\left(\left\|\frac{x_i - y}{h_2}\right\|^2\right) \qquad (15) \]

where

\[ C_h = \frac{1}{\sum_{i=1}^{n_h} k_2\!\left(\left\|\frac{x_i - y}{h_2}\right\|^2\right)}. \]

3.2 4-Dimensional Mean Shift

The basic Mean Shift algorithm is usually 2-dimensional, and only the target
coordinates are needed. In the following discussion, we provide details of the proposed
scale and orientation selection method, which is appropriate for both asymmetric and
anisotropic kernels.
First, the scale dimension is considered.
Let the spatial object center be \(X = (x_0, y_0)\), where (x, y) are the image
coordinates. We define the scale and orientation dimensions as \(\sigma\) and
\(\theta\), respectively. The bandwidth at an angle \(\theta\) (\(\theta \in [0, 2\pi)\))
is denoted by \(r(\theta)\), and \(\sigma(x_i)\) is defined as the distance of a pixel
\(x_i = (x_i, y_i)\) from the center. After the above definitions, what we need to do is
add the scale dimension to the coordinate space. We define \(\sigma_i\) as the ratio of
\(\sigma(x_i)\) to \(r(\theta_i)\):

\[ \sigma_i = \frac{\sigma(x_i)}{r(\theta_i)} = \frac{\|X_i - X\|}{r(\theta_i)}. \qquad (16) \]

From the above formula, we know that the bigger \(\sigma\) is, the more sampled points
there are. The scale-mean \(\bar\sigma\) satisfies the equilibrium of the sum of the
pixel scales on both sides of the scale-mean:

\[ \int_0^{2\pi}\!\! \int_0^{\bar\sigma r(\theta)} \sigma\, d\sigma\, d\theta = \int_0^{2\pi}\!\! \int_{\bar\sigma r(\theta)}^{r(\theta)} \sigma\, d\sigma\, d\theta. \qquad (17) \]

Secondly, the orientation is incorporated into the mean shift algorithm. As above, we
define

\[ \theta_i = \arctan \frac{y_i - y_0}{x_i - x_0}. \qquad (18) \]

An important observation arising from equation (18) is that the orientation does not
depend directly on the object shape, which is expressed by the bandwidth \(r(\theta_i)\).
We find the indirect relation between the orientation \(\bar\theta\) and the shape
through the following formula:

\[ \int_0^{\bar\theta} r(\theta)\, d\theta = \frac{1}{2} \int_0^{2\pi} r(\theta)\, d\theta. \qquad (19) \]

Let \(\xi = (\sigma, \theta, x, y)\) denote a point in the joint space. The density
estimator is given by

\[ \hat f(\xi) = \frac{1}{n} \sum_{i=1}^{n} K(\xi - \xi_i). \qquad (20) \]

Since the orientation and scale are independent of the centroid, the 4-dimensional
kernel can be divided into a product of three different kernels:

\[ K(x, y, \sigma, \theta) = K(x, y)\, K_E(\sigma)\, K_E(\theta) \qquad (21) \]
where

\[ k_E(z) = \begin{cases} 1 - z, & z < 1 \\ 0, & \text{otherwise.} \end{cases} \]

So the 4-D mean shift vector is

\[ \Delta\xi = \frac{\sum_i K(\xi_i - \xi)\, \omega(x_i)\, (\xi_i - \xi)}{\sum_i K(\xi_i - \xi)\, \omega(x_i)} \qquad (22) \]

where the weights \(\omega(x_i)\) are derived from the target template. The information
of location, orientation and scale is included and updated in formula (22).
A similarity function is used to describe the similarity between the target model and
the target candidate. The Bhattacharyya coefficient is the most common one, and was
proved better than the alternatives by Comaniciu in [8]:

\[ \rho(\xi) = \rho\left[\hat p(\xi), \hat q\right] = \sum_{u=1}^{m} \sqrt{\hat p_u(\xi)\, \hat q_u}. \qquad (23) \]

A one-line computation of this coefficient is sketched below.
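As a small illustration of Eq. (23), the coefficient can be computed directly on two
histograms; this is a minimal sketch, assuming both histograms are normalized to sum to
one.

```python
import numpy as np

def bhattacharyya(p_hat, q_hat):
    """Eq. (23): rho = sum_u sqrt(p_u * q_u) for normalized histograms."""
    return float(np.sum(np.sqrt(p_hat * q_hat)))
```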

The Taylor series expansion of (23) is

\[ \rho(\xi) \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{\hat p_u(\xi_0)\, \hat q_u} + \frac{C_h}{2} \sum_{i=1}^{n_h} \omega_i\, k_2\!\left(\left\|\frac{x_i}{h_2}\right\|^2\right) \qquad (24) \]

where

\[ \omega_i = \sum_{u=1}^{m} \sqrt{\frac{\hat q_u}{\hat p_u(\xi_0)}}\; k_1\!\left(\left\|\frac{u_i - u}{h_1}\right\|^2\right). \qquad (25) \]

4 Realization of the Modified Mean Shift Algorithm

The previous two sections described the improvements to the object description and to
the dimension of the Mean Shift vector, respectively. Based on these two improvements,
we obtain the following modified Mean Shift algorithm (a sketch of this loop is given
after the list):

(1) Calculate the initial target model \(\hat q_u\) according to (14), and calculate
the target candidate \(\hat p_u(\xi_0)\) at the current frame according to (15).
(2) According to (24), calculate the similarity function \(\rho_0(\xi_0)\) of the
target model and the target candidate, where \(\omega_i\) is given by (25).
(3) According to (22), obtain the 4-dimensional vector value \(\xi_1\) of the new
position, including the position information \(x_1\), the scale factor \(\sigma\) and
the orientation factor \(\theta\).
(4) If \(\|x_1 - x_0\| < \varepsilon\), \(\|\sigma - \sigma_0\| < \varepsilon_\sigma\)
and \(\|\theta - \theta_0\| < \varepsilon_\theta\), the localization in this frame is
done; go to the next step. Otherwise, assign the value of \(x_1\) to \(x_0\), the value
of \(\sigma\) to \(\sigma_0\) and the value of \(\theta\) to \(\theta_0\), and jump
back to (1).
(5) Read the next frame and repeat the above localization process until the end of
tracking.
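The loop structure of steps (1)-(5) might look like the sketch below. The three
callables are hypothetical stand-ins for Eqs. (14), (15) and (22), and the tolerance
values are placeholders; this is purely illustrative, not code from the paper.

```python
def track_frame(frame, xi0, target_model, candidate_model, mean_shift_4d,
                eps_x=0.5, eps_s=0.05, eps_t=0.05):
    """xi = (x, y, sigma, theta); the callables stand in for (14), (15), (22)."""
    q_hat = target_model(frame, xi0)                   # step (1), Eq. (14)
    while True:
        p_hat = candidate_model(frame, xi0)            # step (1), Eq. (15)
        xi1 = mean_shift_4d(frame, xi0, q_hat, p_hat)  # steps (2)-(3), Eq. (22)
        dx = max(abs(xi1[0] - xi0[0]), abs(xi1[1] - xi0[1]))
        if dx < eps_x and abs(xi1[2] - xi0[2]) < eps_s \
                and abs(xi1[3] - xi0[3]) < eps_t:
            return xi1                                 # step (4): converged
        xi0 = xi1                                      # otherwise jump back to (1)
```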

5 Experimental Results and Discussion


We performed two comparative experiments using the standard mean-shift algorithm and
the improved adaptive Mean Shift tracking algorithm. In the first experiment, the
target rotated incessantly; in the second, the target scale changed. The two
experimental results show that the new algorithm presented in this paper can track the
target accurately without increasing the computational complexity.

Fig. 1. Tracking result comparison for the rotating object

Fig. 2. The iteration number for every rotation frame

Fig. 3. Tracking result comparison for the scaling object



Fig. 4. The iteration number for every scale frame

6 Conclusion
Since the basic Mean Shift method cannot solve the problem of target scale and
orientation, we propose, based on research into Mean Shift tracking theory, a method
that adds a scale factor and an orientation factor to the Mean Shift space,
transforming the original 2-dimensional Mean Shift space into a 4-dimensional one.
Meanwhile, a multi-kernel Mean Shift theory is brought forward to ensure tracking
accuracy, describing the Mean Shift model by cascading the two kernel functions.
Experimental results show that this algorithm achieves good adaptability of the target
window when the target zooms in, zooms out, or rotates. At the same time, the proposed
multi-kernel Mean Shift algorithm increases the accuracy and the real-time performance.

Acknowledgment. This paper is gratefully supported by the Aeronautics Fund of China
(2009ZA02001).

References
1. Comaniciu, D., Meer, P.: Kernel-based object tracking. IEEE Transactions on Pattern
Analysis and Machine Intelligence 25(3), 564–575 (2003)
2. Comaniciu, D., Ramesh, V., Meer, P.: The variable bandwidth Mean Shift and data-driven
scale selection. In: Proc. 8th Intl. Conf. on Computer Vision, Vancouver, Canada (2001)
3. Collins, R.T.: Mean Shift blob tracking through scale space. In: Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition, pp. 234–240. IEEE, Madison,
Wisconsin (2003)
4. Yilmaz, A.: Object tracking by asymmetric kernel mean shift with automatic scale and
orientation selection. In: Proceedings of IEEE Conference on Computer Vision and Pattern
Recognition, pp. 1–6. IEEE, Minneapolis (2007)
5. Collins, R.T.: Mean-shift blob tracking through scale space. In: Proc. IEEE Conference on
Computer Vision and Pattern Recognition, pp. 234–240. IEEE Press (2003)
6. Yang, C., Duraiswami, R., Davis, L.: Efficient mean-shift via a new similarity measure. In:
Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 176–183. IEEE
Press (2005)
7. Hager, G.D., Dewan, M., Stewart, C.V.: Multiple kernel tracking with SSD. In: Proc. IEEE
Conference on Computer Vision and Pattern Recognition, pp. 790–797. IEEE Press (2004)
8. Comaniciu, D., Ramesh, V., Meer, P.: Kernel based object tracking. IEEE Transactions on
Pattern Analysis and Machine Intelligence 25(2), 564–577 (2003)
9. Moeslund, T., Granum, E.: A Survey of Computer Vision-Based Human Motion Capture.
Computer Vision and Image Understanding 81(3), 231–268 (2001)
10. Cheng, Y.: Mean Shift mode seeking and clustering. IEEE Transactions on Pattern
Analysis and Machine Intelligence 17(8), 790–799 (1995)
A Novel Backstepping Controller Based Acceleration
Feedback with Friction Observer for Flight Simulator

Yan Ren1,2, ZhengHua Liu1, Weiping Cheng3, and Rui Zhou1


1
School of Automation Science and Electrical Engineering, Beihang University,
Beijing 100191, China
2
Information Engineering School, Inner Mongolia Science and Technology University,
Baotou 014010, China
3
Chinese Helicopter Research and Development Institute, Jingdezhen, 333001, China
renyan.ry@163.com

Abstract. Friction torque is the main factor that influences the dynamic response
performance of high-precision servo systems. To compensate for the friction
torque, a compound control strategy based on backstepping and acceleration
feedback with a friction observer is proposed. In this control strategy, a
backstepping controller with an integral element is used for the position loop,
and an acceleration feedback controller with a friction observer is introduced to
compensate for the friction torque. The simulation results show that dynamic
friction torque is inhibited more effectively, and that the robustness of the
system against exterior disturbance is improved simultaneously.

Keywords: flight simulator, backstepping, friction compensation, robustness,
friction observer.

1 Introduction

Research on nonlinear friction attracts much attention in the field of very low
velocity servo systems, because of its ubiquity in realistic applications such as
flight simulators. In the first place, nonlinearities and uncertainties existing in
the flight simulator, such as friction moment, motor moment fluctuation, lopsided
moment and system parameter change, often deteriorate the performance and robustness
of the system. Moreover, being highly nonlinear, the friction phenomenon causes
steady-state tracking errors, limit cycles, undesired stick-slip motion, low-speed
shaking and other types of poor performance [1,2]. Therefore, to achieve high system
performance, an appropriate control method should be designed. At present, two main
methods are usually employed: the approach based on friction model compensation [3,4],
and non-model compensation [5,6].
With the development of sensor technology and the successful application of
acceleration feedback controllers in some systems [7,8], acceleration feedback has
gradually attracted attention in the field of high-precision servo control.
Acceleration feedback is a robust control method based on state feedback, and improves
the stiffness of the control system without broadening the bandwidth of the position
or speed loop. Consequently, the ability to suppress all kinds of disturbances,
including friction, is strengthened [9]. Nevertheless, few scholars have used
acceleration feedback to compensate for friction yet. This paper, taking the flight
simulator as an example, applies acceleration feedback to achieve effective
compensation of the friction torque and to suppress the effect of low-speed shaking of
the system.
Generally, varied applications of backstepping control techniques demonstrate their
superiority over classical controllers, especially in servo system control problems.
Unlike traditional control methods, backstepping can guarantee stability and tracking
performance simultaneously [10,11]. Therefore this paper uses a backstepping
controller in the position feedback loop.
in position feedback loop.

2 Dynamic Mathematical Model of Flight Simulator


A three-axis flight simulator is a high-precision servo system with nonlinearities and
uncertainties. The differential equation of one axis of a certain three-axis flight
simulator is given by

\[ J\ddot\theta = -B\dot\theta + u - T_f + d \qquad (1) \]

where \(\theta\) is the angular position of the actual system, J is the inertia of the
system and B is the damping; u represents the control variable, \(T_f\) stands for the
friction torque and d is the external disturbance. Equation (1) can be written as the
state-space equations

\[ \dot\theta = \omega, \qquad \dot\omega = \frac{1}{J}\left(u - T_f - B\omega + d\right) \qquad (2) \]

where \(\omega\) is the angular velocity of the actual system.

Fig. 1. The control loop structure diagram of the servo flight simulator system

3 The Design of the Servo Control System


In this paper, the compound controller contains two parts: a backstepping controller
and an acceleration feedback controller. Fig. 1 shows the overall control scheme of
the servo control system; the acceleration feedback loop is shown in the dashed box.

3.1 Backstepping Controller Design

This paper adopts a backstepping controller with an integral element, designed by the
following steps.

Step 1: Define the position error of the system

\[ e_1 = \theta_d - \theta \qquad (3) \]

where \(\theta_d\) is the command signal. Then

\[ \dot e_1 = \dot\theta_d - \dot\theta = \dot\theta_d - \omega. \qquad (4) \]

Define the virtual control value as

\[ \omega_d = c_1 e_1 + \dot\theta_d + \lambda_1 \chi \qquad (5) \]

where \(c_1 > 0\), \(\lambda_1 > 0\), and \(\chi = \int_0^t e_1(\tau)\, d\tau\) is an
integral action of the position tracking error. It ensures that the tracking error
converges to zero when the system model and the load disturbance are uncertain.
There is an error \(e_2\) between the actual angular velocity \(\omega\) and the
reference signal \(\omega_d\), so the velocity error equation can be defined as

\[ e_2 = \omega_d - \omega. \qquad (6) \]

Then

\[ \dot e_1 = \dot\theta_d - \omega = \dot\theta_d - \omega_d + e_2 = -c_1 e_1 + e_2 - \lambda_1 \chi. \qquad (7) \]

A Lyapunov function is chosen:

\[ v_1 = \frac{1}{2} e_1^2 + \frac{1}{2} \lambda_1 \chi^2. \qquad (8) \]

Then

\[ \dot v_1 = e_1 \dot e_1 + \lambda_1 \chi \dot\chi = e_1 \left(e_2 - c_1 e_1 - \lambda_1 \chi\right) + \lambda_1 \chi e_1 = -c_1 e_1^2 + e_1 e_2. \qquad (9) \]

If \(e_2 = 0\) then \(\dot v_1 \le 0\); therefore, it is necessary to design the
following step.

Step 2: Define a Lyapunov function

\[ v_2 = v_1 + \frac{1}{2} e_2^2. \qquad (10) \]

The time derivative of (6) can be written as

\[ \dot e_2 = \dot\omega_d - \dot\omega = c_1 \dot e_1 + \ddot\theta_d + \lambda_1 e_1 - \frac{1}{J} u + \frac{B}{J} \omega = -\frac{1}{J} u + \frac{B}{J} \omega + c_1 e_2 + \left(\lambda_1 - c_1^2\right) e_1 - c_1 \lambda_1 \chi + \ddot\theta_d. \qquad (11) \]

To make \(\dot v_2 \le 0\), the backstepping control law is designed as

\[ u = B\omega + J \left[(c_1 + c_2) e_2 + \left(\lambda_1 - c_1^2 + 1\right) e_1 + \ddot\theta_d - c_1 \lambda_1 \chi\right] = \left[B - J(c_1 + c_2)\right] \omega + J (c_1 + c_2) \dot\theta_d + J \ddot\theta_d + J \left(1 + \lambda_1 + c_1 c_2\right) e_1 + J \lambda_1 c_2 \int_0^t e_1(\tau)\, d\tau \qquad (12) \]

where \(c_2 > 0\). Then

\[ \dot v_2 = -c_1 e_1^2 - c_2 e_2^2 \le 0. \qquad (13) \]

A small numerical sketch of this control law is given below.
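As a concreteness check of the control law (12), the following minimal simulation
sketch applies it to the plant (1), with the parameter values quoted in Section 4. The
Euler integration, the step size and the smooth sinusoidal reference (the paper itself
uses a triangular signal) are our own choices, and friction and disturbance are omitted
here as in the derivation.

```python
import numpy as np

B, J = 25 / 133, 1 / 133          # plant parameters from Section 4
c1, c2, lam1 = 350.0, 350.0, 4.0  # controller gains from Section 4

def backstepping_u(th, w, chi, thd, dthd, ddthd):
    """Control law (12), built from the virtual control (5)."""
    e1 = thd - th
    wd = c1 * e1 + dthd + lam1 * chi          # Eq. (5)
    e2 = wd - w                               # Eq. (6)
    return B * w + J * ((c1 + c2) * e2 + (lam1 - c1**2 + 1) * e1
                        + ddthd - c1 * lam1 * chi)

dt, th, w, chi = 1e-4, 0.0, 0.0, 0.0
for k in range(200_000):                      # 20 s of simulated time
    t = k * dt
    thd = 0.01 * np.sin(0.1 * t)              # smooth stand-in reference
    dthd, ddthd = 0.001 * np.cos(0.1 * t), -1e-4 * np.sin(0.1 * t)
    u = backstepping_u(th, w, chi, thd, dthd, ddthd)
    chi += (thd - th) * dt                    # integral state chi
    w += dt * (u - B * w) / J                 # plant (2) with Tf = d = 0
    th += dt * w
print(abs(0.01 * np.sin(0.1 * 20.0) - th))    # tracking error stays small
```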

3.2 Design of Acceleration Feedback

An acceleration feedback signal, unlike other traditional feedback signals, can
increase the dynamic stiffness of the whole system while the position or velocity loop
bandwidth stays unchanged [12]. Given the available velocity, the acceleration signal
can be obtained from

\[ \ddot\theta = \frac{u - T_f - B\dot\theta}{J}. \qquad (14) \]

3.2.1 Friction Observer Design


According to formula (14), the friction torque is the key quantity needed to obtain
the acceleration signal. In this paper, the friction torque is represented by the
Coulomb friction model, which can be written as

\[ T_f = T_c(t)\, \mathrm{sgn}(\dot\theta) \qquad (15) \]

where \(T_c(t)\) is the Coulomb friction parameter and \(\mathrm{sgn}(\cdot)\) is the
sign function. A novel nonlinear observer is used in this paper to estimate
\(T_c(t)\) [13].
The friction observer model is built as

\[ \hat T_c(t) = Z_f + K_f\, |\dot\theta(t)|, \qquad \dot Z_f = -\bar K_f \left(u - \hat T_f - B\dot\theta(t)\right) \mathrm{sgn}(\dot\theta(t)), \qquad \hat T_f = \hat T_c\, \mathrm{sgn}(\dot\theta) \qquad (16) \]

where \(\bar K_f = K_f / J\) is defined, \(\hat T_c(t)\) is the estimate of the Coulomb
friction parameter, \(Z_f\) is the friction state variable, and the gain \(K_f\) is the
design parameter of the observer.

Define the observation error

\[ e_c = T_c(t) - \hat T_c(t); \qquad (17) \]

then, using (14) and (16),

\[ \dot e_c = \dot T_c(t) - \dot{\hat T}_c(t) = \dot T_c(t) - \dot Z_f - K_f\, \mathrm{sgn}(\dot\theta)\, \ddot\theta = \dot T_c(t) + \bar K_f\, \mathrm{sgn}(\dot\theta) \left(T_f - \hat T_f\right) = \dot T_c(t) + \bar K_f\, e_c. \qquad (18) \]

According to equation (18), in order to ensure that the estimation error \(e_c\)
converges to zero asymptotically, the following conditions should be met:
\(\dot T_c(t)\) is bounded, and \(\bar K_f < 0\), which ensures \(e_c \dot e_c < 0\).
Finally, the friction torque estimate is obtained as
\(\hat T_f = \hat T_c\, \mathrm{sgn}(\dot\theta)\). Substituting \(\hat T_f\) into
equation (14), it follows that

\[ \ddot\theta = \frac{u - \hat T_f - B\dot\theta}{J}. \qquad (19) \]

To sum up, the control structure of the acceleration feedback is shown in the dashed
box of Fig. 1. A discretized sketch of the observer update follows.
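The sketch below is a minimal explicit-Euler discretization of the observer as
reconstructed in (16) above; the update form, variable names and the step-size caveat
are our own choices, and the sign of K_f follows the convergence condition of (18).

```python
import numpy as np

B, J, Kf = 25 / 133, 1 / 133, -28.0   # Kf < 0, as required by Eq. (18)
Kf_bar = Kf / J

def observer_step(Zf, u, w, dt):
    """One Euler update of Eq. (16); w is the measured angular velocity
    and u the applied control torque. dt must be kept well below
    1/|Kf_bar| for this explicit update to remain stable."""
    Tc_hat = Zf + Kf * abs(w)                 # Coulomb level estimate
    Tf_hat = Tc_hat * np.sign(w)              # friction torque estimate
    Zf_new = Zf - dt * Kf_bar * (u - Tf_hat - B * w) * np.sign(w)
    return Zf_new, Tf_hat
```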

3.2.2 The Design of the Acceleration Feedback Controller

Classical PID control is employed in the acceleration controller, as shown in Fig. 1;
the control law is defined as

\[ u = K_i \int_0^t \left(u_{BK} - \ddot\theta\right) dt + K_p \left(u_{BK} - \ddot\theta\right) + K_d \frac{d}{dt}\left(u_{BK} - \ddot\theta\right) \qquad (20) \]

where \(K_i\) is the integral factor of the controller, \(K_p\) is the proportional
factor, \(K_d\) is the differential factor, and all are positive real numbers.
According to the design principle of the acceleration feedback controller, the
acceleration feedback loop must be stable first. In order to verify the stability of
the acceleration feedback loop, let

\[ X_1 = \int_0^t \left(u_{BK} - \ddot\theta\right) dt, \qquad X_2 = \dot X_1 = u_{BK} - \ddot\theta. \]

Substituting equation (20) into (1) gives

\[ J\ddot\theta = u - T_f - B\dot\theta = K_i X_1 + K_p X_2 + K_d \dot X_2 - T_f - B\dot\theta. \]

Adding \(J u_{BK}\) to both sides and rearranging yields

\[ \dot X_2 = -\frac{K_i}{K_d} X_1 - \frac{J + K_p}{K_d} X_2 + \frac{1}{K_d}\left(T_f + J u_{BK} + B\dot\theta\right). \]

Let \(U = \left(T_f + J u_{BK} + B\dot\theta\right)/K_d\); then the new state-space
representation can be written as

\[ \begin{bmatrix} \dot X_1 \\ \dot X_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -K_i/K_d & -(J + K_p)/K_d \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} U. \qquad (21) \]

According to Lyapunov stability theory, the following result can be concluded: for any
given positive definite symmetric matrix Q, if there is a unique positive definite
symmetric matrix P satisfying equation (22), the system is asymptotically stable:

\[ A^{\mathrm T} P + P A = -Q. \qquad (22) \]

Let \(Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\); working out P for the matrix
A of (21) gives

\[ P = \begin{bmatrix} \dfrac{(K_p + J)^2 + K_i (K_i + K_d)}{2 K_i (K_p + J)} & \dfrac{K_d}{2 K_i} \\[2ex] \dfrac{K_d}{2 K_i} & \dfrac{K_d (K_i + K_d)}{2 K_i (K_p + J)} \end{bmatrix}. \]

To make the matrix P positive definite, \(K_i\), \(K_p\) and \(K_d\) must meet the
condition

\[ \left[(K_p + J)^2 + K_i (K_i + K_d)\right](K_i + K_d) > K_d (K_p + J)^2. \qquad (23) \]

Therefore, by the Lyapunov stability theory, the acceleration feedback loop is
asymptotically stable. Equation (23) implies that the acceleration feedback system
satisfies the stability condition when \(K_i\), \(K_p\) and \(K_d\) take appropriate
values. The short numerical check below illustrates this.
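As a minimal numerical check of (22)-(23) with the gains of Section 4, the Lyapunov
equation can be solved with SciPy; solve_continuous_lyapunov(a, q) returns X with
a X + X aᵀ = q, so passing Aᵀ and -Q yields Aᵀ P + P A = -Q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

J, Kp, Ki, Kd = 1 / 133, 66.0, 5.0, 0.5   # values from Section 4
A = np.array([[0.0, 1.0],
              [-Ki / Kd, -(J + Kp) / Kd]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))
print(np.linalg.eigvals(P))   # both eigenvalues positive: P is positive definite
```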

4 Simulation Results
Based on the above approach for the flight simulator, the parameters of the actual
plant, the friction model and the control system are set as follows: \(B = 25/133\),
\(J = 1/133\), \(T_c(t) = 1\), \(c_1 = c_2 = 350\), \(\lambda_1 = 4.0\),
\(K_f = -28\), \(K_p = 66\), \(K_i = 5\), \(K_d = 0.5\).
Furthermore, let the reference input signal be a triangular signal with an amplitude
of 0.01 degree and a frequency of 0.025 Hz. In order to verify the robustness of the
system to external disturbances, a sinusoidal interference signal with an amplitude of
0.02 degree and a frequency of 0.025 Hz is added to the system. Comparing the
traditional backstepping controller with the novel backstepping controller based on
acceleration feedback with the adaptive friction observer, the simulation results are
shown in Fig. 2 to Fig. 4.
From the simulation results, we can see that the tracking error of the system under
the novel controller is evidently smaller than that under the traditional backstepping
controller. By using the novel backstepping controller based on acceleration feedback
with the friction observer, the unstable phenomenon of the low-speed system is
suppressed, the tracking accuracy of the flight simulator is improved, and the dynamic
friction torque and perturbed torque are compensated effectively.

[Figure 2: two panels, (a) and (b), plotting the position-response error (deg) against t/s over 0-100 s; note the 10^-3 scale in (a) and the 10^-5 scale in (b)]

Fig. 2. Position tracking error for the flight simulator: (a) position tracking error with
the traditional backstepping controller; (b) position tracking error with the novel
backstepping controller

[Figure 3: two panels, (a) and (b), plotting the reference position and the real position (deg) against t/s over 0-100 s]

Fig. 3. Position tracking response for the flight simulator: (a) position output with the
traditional backstepping controller; (b) position output with the novel backstepping
controller

[Figure 4: two panels, (a) and (b), plotting the velocity-response error (deg) against t/s over 0-100 s, with zoomed insets around t = 30 s]

Fig. 4. Velocity tracking error for the flight simulator: (a) velocity tracking error with
the traditional backstepping controller; (b) velocity tracking error with the novel
backstepping controller

In order to reduce system mutation or trembling of the friction observer while the
velocity is around zero-crossing, the sign function \(\mathrm{sgn}(\cdot)\) in the
observer is replaced by the saturation function \(\mathrm{sat}(\cdot)\):

\[ \mathrm{sat}(\dot\theta) = \begin{cases} \mathrm{sgn}(\dot\theta), & |\dot\theta| > \Delta \\ k\dot\theta, & |\dot\theta| \le \Delta \end{cases} \qquad k = \frac{1}{\Delta} \]

where \(\Delta\) is the linear range. The simulation result of the observer output is
shown in Fig. 5.

[Figure 5: the reference friction and the estimated friction plotted over t = 0-100 s, with a zoomed inset around t = 30 s]

Fig. 5. Simulation result of the friction observer

Fig. 5 shows that an effective observer can be achieved as long as \(K_f\) is chosen
appropriately. The simulation results show that the novel backstepping controller based
on acceleration feedback with the friction observer is more effective in compensating
the dynamic friction torque, and the influence of low-speed shaking on the system is
inhibited, so that the performance of the flight simulator is improved remarkably.

5 Conclusions
The flight simulator is a kind of servo system with uncertainties and disturbances
(such as nonlinear friction factors) that worsen its performance, especially when a
low-frequency, small-gain signal is input to the system. To obtain high performance and
good robustness for the flight simulator, a novel backstepping controller based on
acceleration feedback with a friction observer has been presented. The adaptive
friction compensation based on the Coulomb model can overcome the effect of system
friction. Based on the Lyapunov stability theorem, the novel backstepping controller
keeps the system globally asymptotically stable. Simulation results indicate that the
compound controller is capable of giving excellent position tracking and velocity
tracking for the flight simulator. The effect of friction on the system is overcome
effectively.

Acknowledgment. This paper is gratefully supported by the Aeronautics Fund of China
(2009ZA02001) and the Innovation Fund of Inner Mongolia Science and Technology
University (2010NC031).

References
1. Lischinsky, P., Canudas de Wit, C., Morel, G.: Friction compensation for an industrial
hydraulic robot. IEEE Control Systems Magazine 19, 25–30 (1999)
2. Zhu, Y., Pagilla, P.R.: Static and dynamic friction compensation in trajectory tracking
control of robots. In: Proceedings of the 2002 IEEE International Conference on Robotics &
Automation, pp. 2644–2649. IEEE Press, Washington (2002)
3. Noorbakhash, S.M., Yazdizadeh, A.: A new approach for lyapunov based adaptive friction
compensation. In: IEEE Control Applications (CCA) & Intelligent Control (ISIC), pp. 66–70.
IEEE Press, Russia (2009)
4. Liu, G.: Decomposition-based friction compensation of mechanical systems.
Mechatronics 12, 755–769 (2002)
5. Morel, G., Iagnemma, K., Dubowsky, S.: The precise control of manipulators with high
joint-friction using base force/torque sensing. Automatica 36, 931–941 (2000)
6. Yuan, T., Zhang, R.: Design of guidance law for exoatmospheric interceptor during its
terminal course. Journal of Astronautics 30, 474–480 (2009)
7. Shen, D., Liu, Z., Liu, S.: Friction compensation based acceleration feedback control for
flight simulator. Advanced Materials Research 8, 1702–1707 (2010)
8. Nima Mahmoodi, S., Craft, M.J., Southward, S.C., Ahmadian, M.: Active vibration control
using optimized modified acceleration feedback with Adaptive Line Enhancer for frequency
tracking. Journal of Sound and Vibration 330, 1300–1311 (2011)
9. He, Y.Q., Han, J.D.: Acceleration feedback enhanced robust control of an unmanned
helicopter. Journal of Guidance, Control and Dynamics 33, 1236–1250 (2010)
10. Bousserhane, I.K., Hazzab, A., Rahli, M., Mazari, B., Kamli, M.: Mover position control of
linear induction motor drive using adaptive backstepping controller with integral action.
Tamkang Journal of Science and Engineering 12, 17–28 (2009)
11. Sanchez, E.N., Sanchez, E.N., Alanis, A.Y., Loukianov, A.G.: Real-time discrete
backstepping neural control for induction motors. IEEE Transactions on Control Systems
Technology 19, 359–366 (2011)
12. Wang, Z.: Friction compensation for high precision mechanical bearing turntable. PhD
thesis, Harbin Institute of Technology (2007)
13. Mentzelopoulou, S., Friedland, B.: Experimental evaluation of friction estimation and
compensation techniques. American Control Conference 29, 3132–3136 (1994)
The Optimization Space Design on Natural Ventilation in
Hunan Rural Houses Based on CFD Simulation

Mingjing Xie, Lei Shi, Runjiao Liu, and Ying Zhang

School of architecture and Art, Central South University, Changsha,


Hunan Province, China, 410083
Xmj8051@163.com

Abstract. Natural ventilation in summer is a common and effective passive
technology in rural houses, but it has been ignored in recent research. Based
on investigations of Hunan rural residential houses, the space design models
are summarized as CFD simulation models using the building dual graph method.
From the analysis of the results, and with land-saving taken into
consideration, Model 2, the extruded space model, is the best model and is
worth applying in Hunan rural houses.

Keywords: Natural Ventilation, CFD simulation, Space design, Rural houses.

1 Introduction
Recently, research on natural ventilation has focused on thermal comfort under
ventilation conditions [1-3] and on the optimization of residential communities. Studies
on residential communities concentrate on planning and on the interaction between
buildings, but research on rural houses, and especially on the effects of space design
on natural ventilation, is rare. In rural houses, air conditioning is an expensive
technology, and natural ventilation is an important passive technology for improving
indoor thermal comfort. Therefore, the optimization of natural ventilation in Hunan
rural houses is worth studying, and the results can guide rural house design in Hunan
in order to enhance natural ventilation and reduce energy consumption in summer.

2 Methodology

2.1 The Existing Space Design Models

From the investigation of rural houses in Yueyang, Huarong and Pingjiang counties of
Hunan, it was found that the space designs are similar and can be summarized into
several models. The building dual graph method [7], which emerged in the 1960s, is used
in this article. First, the different spaces are named: bedroom is 1; hall is 2; toilet
is 3; stair hall is 4; corridor is 5; the southern outside is S; the northern outside
is N; the western outside is W; and the eastern outside is E. Then, a solid line is
used to show a connection between indoor spaces, or between indoor and outdoor spaces,
through doors and windows, and a broken line is used to show a connection through
walls. In this way, the dual graphs of the different houses can be obtained; a small
sketch of this encoding is given below. Graphs with the same topological relation can
be combined. In the end, four dual graphs with different topological relations are
obtained, as shown in Fig. 1. Based on the suggestion of March and Steadman [7] and the
average dimensions of rural houses, the topological graphs can be restored to plans.
These plans are the summarized models of the existing space designs in rural houses,
shown in Fig. 2.
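The encoding just described can be captured as a labeled graph; the sketch below uses
the room codes of this section, with an edge attribute distinguishing solid lines
(door/window connections) from broken lines (wall adjacency). The specific edges shown
are hypothetical examples, not a plan from the survey.

```python
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["1", "2", "3", "4", "5", "S", "N", "W", "E"])
G.add_edge("2", "S", open=True)    # hall opens to the southern outside (solid line)
G.add_edge("1", "2", open=True)    # bedroom connects to the hall by a door
G.add_edge("1", "N", open=True)    # bedroom window to the northern outside
G.add_edge("3", "W", open=False)   # toilet adjoins the western wall (broken line)

# Two houses share a model when their dual graphs are isomorphic,
# taking the solid/broken edge labels into account.
same = nx.is_isomorphic(G, G.copy(),
                        edge_match=lambda a, b: a["open"] == b["open"])
```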

2.2 Simulation Models and Boundary Conditions

In discussing the effect of space design on natural ventilation, the main point is the
placement of the accessory spaces within the whole building, not the ventilation effect
inside those spaces. Because of their low frequency of use and their closed connecting
doors, the summarized models can be simplified for the simulation: by default, the
doors of the accessory spaces are closed and there is no infiltration. The resulting
simulation models, simplified from Fig. 2, are shown in Fig. 3. The dimensions of each
space are average values obtained from the investigation. The toilet and stair hall of
each model are deleted, except in Model 3, because one of the bedroom windows faces the
stair hall. To take the effect of the stairs on ventilation into consideration, the
stairs are simplified as a wall in the stair hall in Model 3.

Fig. 1. The summarized building dual graph of Hunan rural houses



Fig. 2. The summarized space design models of Hunan rural houses

Fig. 3. The simulation models

In Models 1 and 2, the building depth is the standard one; the simulation ranges in the
incoming-flow direction and in the after-flow direction are both 6 times the depth, and
the height range is five times the building height. The simulation range is large
enough for uniform inflow and for the full interaction between the flow and the
building. The simulation domain is thus 156 m × 60 m, using the 2D standard k-ε
turbulence model, and the orthogonal grid is 312 × 120.
In Model 3, the default settings are the same as in Models 1 and 2, except that the
simulation height is 4 times the building height; the simulation domain is then
195 m × 60 m, and the orthogonal grid is 390 × 120. In Model 4, the default settings
are the same as in Models 1 and 2, except that the ranges in the incoming-flow and
after-flow directions are both 5 times the building depth; the simulation domain is
then 198 m × 60 m, and the orthogonal grid is 396 × 120. These quoted extents are
consistent with the stated rules, as the short check below illustrates.
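The streamwise domain lengths above can be back-computed from the stated multipliers.
The building depths of 12 m, 15 m and 18 m used here are inferred from the quoted
totals, not stated explicitly in the text.

```python
def domain_length(depth_m: float, n_upstream: int, n_downstream: int) -> float:
    """Streamwise extent: the building depth plus the stated multiples
    of it upstream and downstream of the building."""
    return depth_m * (1 + n_upstream + n_downstream)

print(domain_length(12.0, 6, 6))   # 156.0 m, Models 1 and 2
print(domain_length(15.0, 6, 6))   # 195.0 m, Model 3
print(domain_length(18.0, 5, 5))   # 198.0 m, Model 4
```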

The pressure-velocity coupled SIMPLE algorithm is used in all the model simulations. A
second-order differencing scheme is adopted for the momentum, the turbulence kinetic
energy and the turbulent dissipation terms. The inflow boundary condition is a speed of
2.3 m/s at a height of 1.5 m.

3 Simulation Results and Discussion


Figs. 4 to 7 show the simulation results of Models 1 to 4, respectively. Because there
are no barriers, the natural ventilation effects of the rooms in Model 1 are all
satisfactory. The effects in the rooms of Model 2 are also satisfactory, but the speed
in the corridor is larger than in the other spaces because of the valley effect caused
by the narrow space form. In Model 3, the effect in the northeastern room is worse than
in Models 1 and 2, caused by the influence of the stair hall; furthermore, the effect
in the northwestern room is also worse than in Models 1 and 2, since it has no windows
facing the outside directly. In Model 4, the effect in the northwestern room is
likewise unsatisfactory, for the same reason as in Model 3. The acceleration effect in
the corridor is not as obvious as in Model 2, but the ventilation effects in the other
rooms are satisfactory.
From this discussion, it is known that the space designs in Models 1 and 2 are
better, with better ventilation effects in the main spaces. Model 4 is an acceptable
model, because only one room has a bad ventilation effect. Model 3 should be abandoned
because of the bad ventilation in the northern rooms. However, in Model 1 the accessory
spaces are attached to the main rooms, and the building width is larger than in
Model 2. In Model 2, the accessory spaces are extruded into the main spaces, and the
width is shorter than in the other models; the transportation space in Model 2 is
therefore smaller, and the space utilization rate is larger, than in the others. With
comprehensive consideration, especially of land saving, Model 2 is the best, and
Models 1 and 4 are acceptable.

Fig. 4. The simulation result of Model 1



Fig. 5. The simulation result of Model 2

Fig. 6. The simulation result of Model 3



Fig. 7. The simulation result of Model 4

4 Conclusions
From the analysis, the best space design can be chosen, which is Model 2. The best
space design model and the corresponding building dual graph can thus be summarized
for Hunan rural houses, as shown in Fig. 8. In later rural house design in Hunan, this
model can be popularized among farmers for a better natural ventilation effect and for
energy saving. The detailed dimensions of different houses can be adjusted based on the
real situation and the best building dual graph.

Fig. 8. The best model and the corresponding building dual graph

References
1. Zhang, G., Yang, L., Zhou, J., et al.: Development and Application of Natural Ventilation
Potential Evaluation System. Journal of Hunan University (Natural Sciences) 33(1), 25–28
(2006)
2. Yin, W., Zhang, G., Xu, F.: Preliminary Exploration of the Universal Design Process of
Natural Ventilation. Architectural Journal (5), 77–80 (2009)
3. Wang, Y., Liu, J., Xiao, Y.: Study on the effective hours of natural ventilation under the
regional climatic condition. Journal of Xi'an University of Architecture &
Technology 39(40), 541–546 (2007)
4. Nyuk, N.H., Wong, H., Huang, B.: Comparative study of the indoor air quality of naturally
ventilated and air-conditioned bedrooms of residential buildings in Singapore. Building and
Environment 39(9), 1115–1123 (2004)
5. Hummelgaard, J., Juhl, P., Sabjornsson, K.O., et al.: Indoor air quality and occupant
satisfaction in five mechanically and four naturally ventilated open-plan office buildings.
Building and Environment 42(12), 4051–4058 (2007)
6. Han, J.: Thermal Comfort Model of Natural Ventilation Environment and Its Application in
the Yangtze Climate Zone. In: Doctor thesis of Hunan University, pp. 1–103. Hunan
University, Hunan (2009)
7. Liu, X.: Theories of Modern Architecture, vol. 1, pp. 87–536. China Building Industry
Press, Beijing (1999)

Appendix
Project supported by NSFC (51108469) and Hunan Provincial Natural Science
Foundation of China (11JJ5032).
Optimal Simulation Analysis of Daylighting Design in
New Guangzhou Railway Station

Lei Shi, Mingjing Xie, Nan Shi, and Runjiao Liu

School of architecture and Art, Central South University, Changsha,


Hunan Province, China, 410083
330980452@qq.com

Abstract. This article uses the Ecotect software to simulate and analyze the
new Guangzhou railway station's interior illumination and lighting energy
saving rate under natural lighting. The results show that the daylighting
design of the new Guangzhou railway station is satisfactory, and that the
artificial lighting system should be divided into different regions according
to the illumination distribution for energy efficiency.

Keywords: Daylighting, Railway station, Indoor lighting environment,
Simulation.

1 Introduction
The total construction area of the new Guangzhou railway station is about 560537 m²,
including 247517 m² of aboveground construction area and 117466 m² of underground
construction area. The contour area of the awning that does not cover the platform is
about 195554 m², and the passenger information room takes up 212732 m². The depth of
the station is 398 m, not including the elevated driveways beside the station, and the
width is 335 m.
The first floor of the station is the outbound area; the second floor is the
platform; the third floor is the elevated waiting area; and the underground floor
includes the metro station, the equipment rooms, etc.
Because the structure is very large, the energy consumption of artificial lighting
is relatively high, so we need to perform the analysis at the beginning of the design
in order to optimize the natural lighting design. In that case, we can obtain a better
indoor light environment, and we can save energy as well.
In this article, we use the Ecotect software to simulate and analyze the new railway
station's interior illumination and lighting energy saving rate under natural lighting.

2 Methodology
2.1 Analysis Method
The main content of the natural lighting research on the new Guangzhou railway station includes the interior illumination situation of both the elevated waiting area and the outbound area, and the amount of power consumption that can be saved in a whole year.
The specific analysis method is as follows:
(1) Calculate the interior illumination at the dates of the summer solstice and the winter solstice, assuming the whole day is cloudy with no direct light. The cloudy-day illumination at the winter solstice corresponds to the worst lighting results of the whole year;
(2) Simulate the interior environment with direct sunlight at 2:00pm on the summer solstice, and calculate the illumination;
(3) Use the Ecotect software to calculate the whole year's lighting satisfaction rate in both the elevated waiting area and the outbound area (the proportion of the time when the interior illumination is above 75lx), and analyze how much lighting energy can be saved;
(4) The interior lighting watt density is set at 7 W/m² for the lighting energy saving research. The interior lighting time runs from 5:00am to 1:00am of the next day (about 20 hours), and the natural lighting time is calculated from 8:00am to 5:00pm (9 hours in total);
(5) The lighting satisfaction rate of the whole year in different parts of the waiting room is represented by DA (unit: percentage). DA is defined as the proportion of the accumulated hours during the natural lighting time (8:00am to 5:00pm) when the interior natural illumination is higher than the allowable value (75lx), the base of the proportion being the natural lighting hours of the whole year (9 × 365 = 3285 hours), as sketched below.

2.2 Simulation Models and Settings

The lighting model used in the simulation is presented in the following figures (Fig. 1 to Fig. 4). The model is constructed in accordance with the simplified architectural design, and its materials are chosen from the scope offered by the design. The calculated area includes the elevated waiting area and the outbound area.

When the station uses natural lighting, the interior illumination changes constantly with the exterior illumination, so the exterior illumination must be taken into consideration when determining the interior illumination under natural lighting. Normally, adopting the daylighting coefficient as the index is one of the main methods of evaluating natural lighting.

Fig. 1. The complete model of the architecture
Fig. 2. The model of the waiting area

Fig. 3. The model of the platform
Fig. 4. The model of the computational area

The design standard of lighting in architecture (GB/T 50033) is the current norm to follow in lighting design, and it has regulations on the interior lighting indices for natural lighting. To make sure the quality of the interior lighting is acceptable, this project ensures that both the waiting area and the outbound area meet the standard of general operating accuracy, that is, the interior illumination under natural lighting should reach 75lx.

The main target of the lighting simulation analysis is to find out what lower limit of the optical properties of the glass should be adopted in order to satisfy the need for interior lighting under the worst exterior conditions. Therefore, in the simulation research, the sky background illumination is set to the CIE overcast sky model, and the meteorological data used in the calculation are taken from the website of the Lawrence Berkeley National Laboratory.
According to the attributes of the transparent materials, the settings of each transparent component are given in Table 1.

Table 1. The visible light transmissivity of the transparent exterior envelope

Glass curtain wall: 75%
The ridge of the middle: 19%
The high side skylight (equivalent transmissivity after being covered by shading facilities): 30%
Normal flat skylight: 75%
Side skylight above the platform: 75%

3 Results

3.1 Simulation Analysis Results of the Elevated Waiting Area

The natural lighting of the elevated waiting area is achieved through the transparent materials in the ridge of the middle, the skylight on the roof, the side skylight and the glass curtain walls in four directions (as shown in Fig. 5). Due to the platform awning, there is hardly any direct light coming through the glass curtain walls in the north and south; as for the east- and west-side glass curtain walls, part of the direct light is also shielded by the overhanging roof.

Fig. 5. The transparent components in elevated waiting area

The illumination of the natural lighting in the elevated waiting area is shown in Table 2 and Fig. 6 to Fig. 11.

Table 2. The data of the natural lighting illumination in elevated waiting area

Time                                Average illumination (lx)
Summer solstice (Cloudy)   8:00     387
                           14:00    483
                           17:00    169
Winter solstice (Cloudy)   8:00     214
                           14:00    303
                           17:00    (no exterior light resources)
Summer solstice (Fine)     14:00    2205

Illustration:
(1) All four sides of the waiting area are composed of glass curtain walls, so the illumination in the perimeter zone is above normal. Even at the winter solstice the illumination is basically beyond 200lx (the allowable value for interior artificial lighting).
(2) The illumination in the central hall of the waiting area is lower than in the perimeter zone, but at 2:00pm at the winter solstice it is still basically beyond 75lx.
(3) On a fine day at the summer solstice, the illumination in most parts of the waiting area ranges from 800lx to 1000lx, except that the west-side corner and the area under the skylight fall outside this range.

Fig. 6. The interior illumination at the summer solstice at 8:00am (Cloudy Day)
Fig. 7. The interior illumination at the summer solstice at 2:00pm (Cloudy Day)
Fig. 8. The interior illumination at the summer solstice at 5:00pm (Cloudy Day)
Fig. 9. The interior illumination at the winter solstice at 8:00am (Cloudy Day)
Fig. 10. The interior illumination at the winter solstice at 2:00pm (Cloudy Day)
Fig. 11. The interior illumination at the winter solstice at 5:00pm (Cloudy Day)

According to the national standards for interior illumination and the Ecotect analysis results obtained under the condition that only natural light is used, we can calculate the energy saving rate in the elevated waiting area, as shown in Table 3.

Table 3. The energy saving data of the interior lighting in the elevated waiting area

Function of the area: Waiting area
Area (including the waiting room and the central hall): 57,200 m²
Standard of illumination: 75 lx
Watt density of lighting: 7 W/m²
Operating hours in a whole year: 20 h × 365 days = 7300 h
Hours of natural lighting: 9 h × 365 days = 3285 h (8:00-17:00)
Average rate of natural lighting satisfaction (DA): 79.2% (except for very few places in the hall with lower DA, all areas have high DA values)
Hours of energy saving: 3285 × 79.2% = 2602 h
Rate of energy saving: 2602/7300 = 35.6%
Amount of energy saving: 1,042,000 kWh
Uniformity of the interior illumination: the perimeter zone of the waiting hall has illumination over 2000lx for 60% of the time in a year, which may result in glare. But considering that this area is only used as a passageway, not for people to stay in for a long time, it won't create a serious impact.
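The energy figures in Table 3 (and in Table 5 below) follow from simple arithmetic on these quantities; a short sketch of that arithmetic, with the values copied from the table, is:

```python
# Reproducing the Table 3 arithmetic for the elevated waiting area
# (values copied from the table; the outbound area in Table 5 follows
# the same scheme with area = 80,000 m2 and DA = 49%).
area_m2         = 57200          # waiting room + central hall
watt_density    = 7              # W/m2
operating_hours = 20 * 365       # 7300 h of lighting operation per year
natural_hours   = 9 * 365        # 3285 h of natural lighting per year
da              = 0.792          # average natural lighting satisfaction

saved_hours = natural_hours * da                            # ~2602 h
saving_rate = saved_hours / operating_hours                 # ~35.6%
saved_kwh   = saved_hours * watt_density * area_m2 / 1000   # ~1,042,000 kWh
print(f"{saved_hours:.0f} h, {saving_rate:.1%}, {saved_kwh:,.0f} kWh")
```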

3.2 Simulation Analysis Results of Outbound Area

There are mainly three lighting components in the outbound area, as shown in Fig. 12:
Component 1: the glass curtain walls on both the east and west sides of the entrance hall;
Component 2: the oblong glass on the top of the outbound area, so that the areas close to the north and south ends of the glass can gain sunlight from the awning;
Component 3: the skylight on the top of the outbound area, which gains sunlight from the ridge of the middle through the holes of the waiting area.

Fig. 12. The lighting components of the outbound area

The illumination of the natural lighting is shown in Table 4 and Fig. 13 to Fig. 18.

Table 4. The data of the natural lighting illumination in outbound area

Time                                Average illumination in outbound area (lx)
Summer solstice (Cloudy)   8:00     109
                           14:00    135.6
                           17:00    49.4
Winter solstice (Cloudy)   8:00     61.4
                           14:00    112
                           17:00    (no exterior light resources)

Illustration:
(1) The three types of lighting components in the outbound area let the entrance halls on the east and west sides gain more sunlight, and they significantly improve the illumination close to the east- and west-side glass curtain walls.
(2) The areas close to the glass curtain walls and just below the oblong glass skylight have a better natural lighting effect.
(3) The four holes in the waiting area have a limited effect on the natural lighting of the outbound area.

Fig. 13. The interior illumination at 8:00am at the summer solstice (Cloudy)
Fig. 14. The interior illumination at 2:00pm at the summer solstice (Cloudy)
Fig. 15. The interior illumination at 5:00pm at the summer solstice (Cloudy)
Fig. 16. The interior illumination at 8:00am at the winter solstice (Cloudy)
Fig. 17. The interior illumination at 2:00pm at the winter solstice (Cloudy)
Fig. 18. The interior illumination at 5:00pm at the winter solstice (Cloudy)

According to the national standards for interior illumination and the Ecotect analysis results obtained under the condition that only natural light is used, we can calculate the energy saving rate in the outbound area, as shown in Table 5.

Table 5. The energy saving data of the interior lighting in the outbound area

Function of the area: Outbound area
Area (including the waiting room and the central hall): 80,000 m²
Standard of illumination: 75 lx
Watt density of lighting: 7 W/m²
Operating hours in a whole year: 20 h × 365 days = 7300 h
Hours of natural lighting: 9 h × 365 days = 3285 h (8:00-17:00)
Average rate of natural lighting satisfaction (DA): 49% (the central parts of the outbound area may not meet the lighting illumination standard throughout the year)
Hours of energy saving: 3285 × 49% = 1609.5 h
Rate of energy saving: 1609.5/7300 = 22%
Amount of energy saving: 899,000 kWh
Uniformity of the interior illumination: as shown in the figures, only the east and west entrance halls of the outbound area have illumination over 2000lx for 70% of the time in a year, which may result in glare. But considering that these places are in the corners of the outbound area, this won't affect the normal use of the hall.

4 Conclusions
According to the simulation analysis results of the lighting, we can draw the following conclusions:
(1) The waiting area obtains natural light mainly through the surrounding glass curtain walls, and the central hall obtains it through the roof (ETFE), so the waiting area gains much more natural light. Even at the winter solstice it can obtain a high interior illumination (over 200lx), and the illumination in the perimeter zone is higher than in the central hall.
(2) The outbound area obtains natural light mainly through the glass curtain walls of the entrance halls on the east and west sides, and through the oblong skylights near the glass curtain walls on the north and south sides. The holes in the waiting area contribute little to the lighting, and the illumination in the surrounding areas exceeds 75lx everywhere except the central hall.
(3) If the lighting fixtures can be switched on and off automatically according to the interior illumination, the waiting area can save 35.6% of its lighting energy consumption (1,042,000 kWh) under natural lighting, and the outbound area can save 22% of its lighting energy consumption (899,000 kWh).
(4) The illumination in the main activity regions of both the waiting area and the outbound area won't be too high, so serious glare problems won't occur.
(5) We suggest that the lighting system be divided into different regions according to their illumination so that the system can be controlled in an energy-saving mode. The specific zoning method is shown in Fig. 19 and Fig. 20. Each region should have an illumination sensor in order to control the on-off state of its lighting fixtures independently.

Fig. 19. The zone chart of the lighting control in the elevated waiting area
Fig. 20. The zone chart of the lighting control in the outbound area

References
1. Zain-Ahmed, A., Sopian, K., Othman, M.Y.H., Sayigh, A.A.M., Surendran, P.N.: Daylighting as a passive solar design strategy in tropical buildings: a case study of Malaysia, 43 (2007)
2. Hua, Y., Oswald, A., Yang, X.: Effectiveness of daylighting design and occupant visual satisfaction in a LEED Gold laboratory building, vol. 54. Crown pud, New York (2008)
3. Aghemo, C., Pellegrino, A., LoVerso, V.R.M.: The approach to daylighting by scale models and sun and sky simulators: a case study for different shading systems, vol. 20 (2008)
4. Altan, H., Ward, I., Mohelnikova, J., Vajkay, F.: An internal assessment of the thermal comfort and daylighting conditions of a naturally ventilated building with an active glazed facade in a temperate climate, vol. 9, p. 12. Oxford University, New York (2009)
5. Miguet, F., Groleau, D.: A daylight simulation tool for urban and architectural spaces - application to transmitted direct and diffuse light through glazing. Building and Environment, 833-843 (2002)
6. Xie, H.: The design strategy of skylight in public buildings. Guangdong University of Technology
7. Geng, J.: Using natural light fully: the important thoroughfare of energy-saving lighting. The Light and Lighting (1), 11 (2003)
8. Liu, J.: Building Physics, 3rd edn., vol. (5). China Building Industry Press, Beijing (2000)
9. Li, X.: The natural light and modern architecture design. Zhengzhou University (2006)
Research on Passive Low Carbon Design Strategy
of Highway Station in Hunan

Lei Shi, Mingjing Xie, Zhang Ying, and Luobao Ge

School of Architecture and Art, Central South University, Changsha, Hunan Province, China, 410083
330980452@qq.com

Abstract. This paper analyzes public building energy consumption and the climate characteristics of Hunan province. It is proposed that passive LC (low carbon) design of highway stations in Hunan province should focus on natural daylighting, shading and natural ventilation. Based on these design strategies, it is proposed that passive LC technologies, such as atria and skylights combined with thermal-pressure natural ventilation, should be used in order to reduce the emissions of highway stations in the Hunan area.

Keywords: highway station, Hunan province, passive design, daylighting, natural ventilation.

1 Introduction
In recent years, with the large-scale development and construction of high-grade highways and freeways, highway stations have developed rapidly and become one of the main transportation building types; their scale and construction speed are increasing accordingly.

Since 1995, more than ten highway stations matching the built freeways have been constructed across Hunan province, such as the East, South, West and North Stations in Changsha, the Huaxin and Shuangfeng Stations in Hengyang, the East Station in Yiyang, the Central Station in Huaihua, the North Station in Yueyang, the Central Station in Chenzhou, the East Station in Liuyang, etc. It can be seen that highway station design is making steady progress, becoming more complete in function and more humanized in service, and related design research keeps emerging. However, related LC building design, especially using passive technology aimed at energy conservation, is still weak, and research addressing the climate characteristics of the hot-summer and cold-winter area urgently needs to be strengthened.

According to statistics from the U.S. Energy Information Administration (EIA), building energy consumption can be reduced by 47 percent by using passive technology compared with a normal new building, and by 60 percent compared with a normal old building. Passive technology can be applied widely to most large-scale buildings and all small-scale buildings. Therefore, passive LC design of highway stations in Hunan deserves attention.


2 Passive Technology Selection


Passive technology does not depend on regular energy consumption; instead it relies on natural principles such as sunlight, wind power, temperature and humidity, using techniques of planning, architectural design and environment configuration to improve and create a comfortable living environment. The purpose of passive technology is to reduce as far as possible, or avoid using, cooling, heating and lighting equipment, as well as to create a higher-quality environment both indoors and outdoors. The design strategy of this concept emphasizes: designing on the basis of the local climate characteristics; following the basic principles of building environment control techniques; and considering the required architectural function and form. Therefore, to choose appropriate passive technologies, we should analyze the formation of energy consumption and the climate characteristics of Hunan.

2.1 The Formation of Energy Consumption in Public Building

For now, there are no statistics focused solely on the energy consumption of highway stations. Since highway stations belong to public buildings, the statistics of energy consumption in public buildings can provide a degree of reference for highway station design.

According to 2008 statistics, there are 5.3 billion square meters of public buildings, accounting for 36 percent of the total urban construction area; their energy consumption per unit area, excluding heating, is 90-200 kWh/(m²·year). According to the analysis by Ding and others in 2009, the energy consumption per unit area excluding heating is 95-96 kWh/(m²·year) in the central area of China including Hunan, and 82-114 kWh/(m²·year) in the hot-summer and cold-winter area, which has the same climate characteristics as Hunan. From these statistics, we can see that the energy consumption per unit area in public buildings is much higher than in residential buildings. Although the energy consumption of Hunan within the hot-summer and cold-winter area is almost at the minimum, we should take more LC design measures to curb the increasing trend of energy consumption.

From the analysis of its formation, the energy consumption of public buildings is made up of lighting, office appliances, air conditioning and others. Lighting energy consumption is about 5-25 kWh/(m²·year), office appliance energy consumption is about 5-25 kWh/(m²·year), and air conditioning energy consumption is about 5-25 kWh/(m²·year). Considering that there are fewer office appliances in highway stations than in office buildings and marketplaces, lighting and air conditioning are the predominant parts of a highway station's energy consumption.

Accordingly, LC design should reduce lighting and air conditioning energy consumption; the matching passive technical strategies include natural lighting, natural ventilation, heat insulation by the enclosure structure and so on. The more specific choice also needs to consider the climate characteristics of Hunan.

2.2 The Climate Characteristic of Hunan

Hunan lies in the western part of the East Asian monsoon region. Owing to its geographic characteristics, the area enjoys a subtropical humid monsoon climate with obvious features of a continental climate. Its monsoon climate feature lies primarily in the opposite wind directions between summer and winter: a south monsoon in summer and a north monsoon in winter. In terms of building climate, Hunan belongs to the hot-summer and cold-winter area. It is persistently humid, with an average temperature of 17°C, a highest temperature of 41°C and a lowest temperature of -3°C; the annual average humidity is 79%, the annual average rainfall is 1302 mm, the sunshine is 1722.1-1816.5 hours per year, and the solar irradiance is 458.4-462.1 kJ/cm² per year.

We can see from these climatic conditions that highway stations in Hunan need passive technology to cope with the adverse environment of hot summers, cold winters and humid weather. Undoubtedly, natural ventilation is the best choice, for it can reduce energy consumption and ameliorate the humidity problem simultaneously.

2.3 The Suitable Passive Design Strategies

Jiang Yi presented his view on LC building design in the south: solar radiation influences the power consumption for air conditioning most, so the key approaches for energy saving are external shading and outside surface ventilation. A study by Dean Heerwagen also holds that in the subtropical moist climate (including the hot-summer, cold-winter climate), natural ventilation, shading and a lightweight envelope should be considered first in an architectural passive design strategy, among which natural ventilation should take the first place.

Passive technologies have different characteristics depending on the architecture type. The power consumption for air conditioning depends mainly on the internal heat of the building. As far as the heat gain of a building is concerned, there are two sorts of models: "envelope dominant" and "internal heat-gain dominant". Domestic architecture and small public buildings belong to the former, where internal heat is mainly transmitted through the envelope, which receives solar radiation; large-scale public buildings belong to the latter, where heat is mainly given off by internal occupants and equipment. "Envelope dominant" buildings should focus on improving the thermal insulation performance of the envelope structure, while "internal heat-gain dominant" buildings focus on it less. Because people are the main heat source and it is a highly crowded building, a highway station belongs to the "internal heat-gain dominant" type; as a result, in the choice of passive technologies, the thermal insulation performance of the envelope structure is not in the optimal position.

On the above grounds, the passive LC design strategies of highway stations in Hunan should focus on daylighting, shading and natural ventilation.

3 Natural Daylighting and Shading

3.1 Natural Daylighting Design Strategies

Although natural daylighting design is much more complicated than electric lighting design, it brings more aesthetic feeling to the indoor and outdoor spaces of architecture. Compared with electric lighting, natural daylighting is significant as an important part of passive design: it reduces environmental impact by cutting power consumption, benefits health, improves work efficiency, etc.

Usually, in a multi-storey residential building, areas less than 5 meters from a window can be lit by natural light, areas between 5 and 10 meters can be partly lit, while areas more than 10 meters away cannot be lit. Fig. 1 shows three different plans with exactly the same area. In the square plan, 16% of the region has no natural lighting and a further 33% has only some; in the rectangular plan, no space is without natural lighting, but a substantial part has only some; while in the plan with a central courtyard, all regions can be adequately lit by natural light. Thus, the building depth should be reduced as much as possible; if it is more than 10 meters, a courtyard should be set. Considering the functional requirements of a highway station, whose depth is always large, daylighting for the central region can only be realized by a courtyard or a lighting atrium; at the same time, the side windows can be raised higher to extend the reach of natural light.

Of course, skylights can be used to improve the indoor lighting in single-storey buildings. Different kinds of skylights are illustrated in Fig. 2. However, every kind of skylight shares the same main problem, without exception: they face the sun much more in summer than in winter. So using skylights with a larger gradient, facing south or north, can make the light more evenly distributed and reduce the solar radiation through them.

Fig. 1. Influence on daylighting in different plans with same area

Fig. 2. Skylights of different kinds



3.2 Shading Design Strategy


Any kind of direct daylighting brings solar radiation indoors, so for the Hunan area, whose summer is burning hot, it will increase the indoor cooling burden. Consequently, pairing the daylighting strategy with a corresponding shading design will prove effective in reducing the indoor cooling load.

There are three forms of shading: greenery shading, outdoor shading and indoor shading. Considering the design requirements of the outdoor parking lot and accessory spaces, and the indoor daylighting, there is little possibility of adopting greenery shading or indoor shading. Therefore, the shading design should mainly adopt outdoor shading corresponding to the natural daylighting design.

The first thing to consider is the position of the shading device. As an important element of the daylight opening, the shading device should be light, artistic, economical and durable. There are four forms of shading device: horizontal, vertical, mixed and baffle. Each may be fixed or controllable.

Horizontal device: used to keep out sunshine with a higher solar altitude angle at noon, generally for south-facing windows. It is usually fixed. Its projection length is determined by the local shading angle. It was made of reinforced concrete or asbestos shingle in the past, while at present metals such as aluminum alloy are used.

Vertical device: used to keep out sunshine with a lower solar altitude angle in the morning or afternoon. Controllable devices composed of metal plates usually use stays, gear transmissions or pins to adjust the shading angle.

Mixed device: used to keep out slanting sunshine from the side elevation of the window, generally for windows facing southeast and southwest. Its form mainly involves tiered, louvered and plate-type mixed devices.

Baffle device: used to keep out direct sunshine with a lower solar altitude angle, generally for east-facing and west-facing windows. Its form mainly involves tiered, louvered and plate-type mixed devices.

Fig. 3. Different forms of shading devices

4 Natural Ventilation Design Strategy


Natural ventilation is a kind of passive ventilation driven by natural forces without any power. The application of natural ventilation technology matters in two respects: first, the passive cooling it brings can reduce energy consumption; second, it can provide fresh natural air by removing damp and dirty air. When the outside humidity is heavy, natural ventilation can help sweat evaporate from the skin surface and therefore improve thermal comfort.

Wind pressure and heat pressure constitute the essential driving forces of natural ventilation, with the basic forms of wind-pressure ventilation and heat-pressure ventilation. When airflow acts on the building surface, a pressure difference comes into being between the windward and leeward sides, and the air then flows through the building. But studies show that as the building depth increases, the effect of wind-pressure ventilation becomes weaker and weaker. Considering that the depth of a highway station is always large, this paper focuses on the design strategy of heat-pressure ventilation.
So-called heat-pressure ventilation is based on the thermal buoyancy principle (warm air rises and cool air falls), known as the chimney effect. According to the fluid balance equation, a classic heat-pressure flow formula is as follows:

q_s = C_D A \sqrt{2\,g\,h\,\frac{T_i - T_0}{T_0}}    (1)

In Eq. (1), q_s is the flow rate, C_D is the discharge coefficient related to the opening characteristics, A is the opening area, g is the acceleration due to gravity, h is the vertical height difference between the center lines of the two openings, T_i is the indoor air temperature, and T_0 is the outdoor air temperature.
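A minimal numerical sketch of Eq. (1) follows; the discharge coefficient, opening area, height and temperatures used below are illustrative values, not taken from the paper.

```python
# Numerical sketch of Eq. (1); all input values below are illustrative only.
import math

def stack_flow(c_d, area, height, t_in, t_out, g=9.81):
    """Heat-pressure (stack) flow rate q_s in m3/s from Eq. (1).

    c_d: discharge coefficient; area: opening area (m2);
    height: vertical distance between opening center lines (m);
    t_in, t_out: indoor/outdoor air temperature (K)."""
    return c_d * area * math.sqrt(2.0 * g * height * (t_in - t_out) / t_out)

# Example: a 20 m atrium with 4 m2 openings, indoor 300 K vs outdoor 290 K.
print(f"q_s = {stack_flow(0.6, 4.0, 20.0, 300.0, 290.0):.1f} m3/s")  # ~8.8
```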
From the formula we can see that the flow depends on the vertical height difference between inlet and outlet and on the temperature difference between indoors and outdoors: the greater they are, the more obvious the effect becomes. In architectural design, we can use the vertical cavities inside the building, such as staircases, atria and well holes, to meet the height requirements of inlet and outlet, and set controllable openings on the top of the building to exhaust the hot air of each floor, achieving natural ventilation. Compared with wind-pressure natural ventilation, heat-pressure natural ventilation can adapt to the ever-changing external wind environment.

Fig. 4. The heat-pressure ventilation of the atrium in the Frankfurt Commercial Bank
As mentioned, the atrium and the skylight are compatible with daylighting in highway stations and exactly meet the basic conditions of thermal natural ventilation, so they are a suitable passive technology for highway station buildings. In addition, the atrium connects originally isolated floors into a whole, which is helpful for the overall ventilation strategy. Thermal natural ventilation by atrium essentially increases the thermal pressure effect artificially and thus improves the ventilation rate. Fig. 4 shows the Frankfurt Commercial Bank designed by Norman Foster; it uses atrium-based thermal natural ventilation to enhance the natural ventilation of the building, with good effect.

5 Conclusions
With people's living standards rising and the traffic network becoming more complete day by day, the construction of highway stations will develop further, with increasingly complete functions. Simultaneously, passengers demand more indoor comfort in highway stations, which places higher requirements on passive LC design. A key point of future research will be which passive technologies, with low cost and low energy consumption, should be adopted to reduce the effect on the natural environment and improve satisfaction with the indoor environment. This article only puts forward corresponding design strategies based on limited data analysis, which need to be tested in practice; it is intended to attract further valuable opinions.

References
1. Ling, Z., Wenfang, C.: Discussion on Historical Development of the Design of Passenger Station. Chinese and Overseas Architecture (1), 34-35 (2005)
2. Tan, F., Zhu, T., Huo, J.: Layout Method of Modern Automobile Passenger Transportation Station. Huazhong Architecture 24(3), 84-87 (2006)
3. Xu, L., Li, B.: The design of east highway station in Guigang and the use of eco-technology. In: The Third Guangxi Youth Conference Proceedings (natural science article) (2004)
4. Huang, J., Dong, X.: Ecological Station: The Design of Haizhu Bus Station. New Architecture (1), 48-51 (2004)
5. Wang, G.-G., Zeng, K.-M., Zhu, X.-M.: Design for Jiangmen Long-distance Passenger-transport Bus Station. Journal of Guangdong University of Technology 22(4), 94-98 (2005)
6. Wang, C.: Green building techniques manual. China Architecture and Building Press, Beijing (1999); Public Technology Inc., US Green Building Council
7. Zhang, G., Xu, F., Zhou, J.: Sustainable building technologies. China Architecture and Building Press, Beijing (2009)
8. Tsinghua University Building Energy Research Center: China building energy saving annual development research report 2008. China Architecture and Building Press, Beijing (2008)
9. Ding, H., Liu, H., Wang, L.: Preliminary analysis of the energy consumption statistics in civil buildings. Heating Ventilating & Air Conditioning 39(10), 13 (2009)
10. Hunan meteorological net: The climate characteristics of Hunan, http://www.hnqx.gov.cn/qxbk/2/2009-5-2/HuNaQiHou-Zheng.htm (May 21, 2009)
11. Tsinghua University Building Energy Research Center: China building energy saving annual development research report 2007. China Architecture and Building Press, Beijing (2007)
12. Heerwagen, D.: Passive and active environmental controls. McGraw-Hill Companies, New York (2004)
13. Public Works and Government Services Canada: Daylighting Guide for Canadian Commercial Buildings, Canada (2002)
14. Yijian, S.: Industrial Ventilation. China Architecture and Building Press, Beijing (1994)
A Hybrid Approach to Empirically Test Process
Monitoring, Diagnosis and Control Strategies

Luis G. Bergh

CASIM, Automation and Supervision Centre for Mineral Industry, Santa Maria University, Valparaiso, Chile
luis.bergh@usm.cl

Abstract. Testing strategies for process monitoring, diagnosis and control is expensive, and usually requires either complex pilot plant facilities or dealing with the hard constraints posed by experimentation in real plants. The experience of developing hybrid systems in the chemical processing area is discussed and its application to flotation columns is presented. The main idea is that the essential phenomena underlying a process can be divided into two aspects: the hydrodynamics (mixing and separation) and the physicochemical changes. The first can be experimentally implemented in pilot plants, where the main streams are mixed and separated, at a low cost. However, following the physicochemical changes would require expensive instrumentation, storage facilities and chemical reagent consumption. The use of physical and chemical models, coupled with measured operating variables describing the hydrodynamics, is proposed as an economical and convenient substitute for experimentation in real processes.

Keywords: process control, automation, modeling, hybrid systems, flotation.

1 Introduction
In the last decade a number of techniques have been proposed to improve the operation of different processes. For example, the state of the art and the challenges in the mining, minerals and metal processing area were recently discussed in [1,2]. These works mainly focused on aspects such as process modeling, data reconciliation, soft sensors and pattern recognition, process monitoring, fault detection and isolation, control loop monitoring, control algorithms and supervisory control.

What is common to most of these areas is that the theoretical advantages of a novel method have to be confronted and tested in real plants. However, experimentation on real plants is of high cost and may present other difficulties. For example, the input variables can only be changed inside a narrow band, to avoid risky operating conditions or high losses in products. Some important disturbances cannot always be managed, interrupting and degrading the experiments and leading to confusing results. Moreover, it is sometimes extremely difficult to reproduce a given disturbance, for example a change in particle size distribution. Another important factor may be the quality of the collected data. For example, in the mineral processing industry, the measurement of important variables, such as particle size or a stream grade, demands

large efforts in instrumentation maintenance plans to obtain these measurements with the required quality. In summary, it is difficult and costly to conduct a designed experiment whose collected data can be properly analyzed to make the right decisions.
On the other extreme, simulating the application of novel methods on a plant model has some known advantages. Once a model of a plant is available, the new methodology can be tested under a number of different conditions, at low cost. However, a disadvantage of this approach may be the high cost of producing reliable models. Even when phenomenological models can be built, it will always be necessary to make some simplifying assumptions in order to reduce the mathematical complexity of finding a solution. The value of comparing different methods by simulation can be significantly degraded, and simulation often cannot replace experimentation in the real world.

Pilot plants can reproduce most of the advantages of experimenting in a real plant. The small plant scale relaxes some constraints, while providing the chance for wide experimentation. However, in some plants, the presence of solids in flotation, or of expensive organic solvents for copper extraction, demands installations with large storage facilities and expensive on-stream analyzers. If the pilot plant is located near a plant, the lack of instrumentation and computer platforms may represent a problem that is difficult to overcome. Environmental issues are strongly dependent on plant location and facilities. Therefore, the flexibility gained from working at a smaller scale is lost when the instrumentation investment and maintenance cannot be paid for by the generation of products at large scale and when environmental regulations impose hard constraints.

2 A Different Approach
Some processes can be analyzed by separating the phenomena into different levels. A first and basic level is how the different streams are mixed and separated in a process unit. In most cases, the physical properties of each phase, such as density or viscosity, are usually invariant or experience moderate changes as a function of temperature. A second level takes into account the change in properties such as the concentration of a solute. This can occur due to a chemical reaction, where some species partially disappear to form other species, or due to a selective migration of some species from one phase to another. In flotation, for example, some species are selectively attached to gas bubbles and form a froth phase, while other solid particles remain in the liquid phase forming the tailings. Some caveats are that changes in solute concentration may also change transport properties, and that the artificial decoupling may not hold exactly.

More generally speaking, when the process hydrodynamics are not significantly influenced by changes in solute concentration, the process behavior can be represented separately at these two levels. If this hypothesis is accepted, then the experimentation approach on pilot plants can be simplified in two senses:

(i) Experimentally, each fluid (liquid, solid and gas) may be substituted by low-cost and easily manageable fluids, such as water and air in flotation, water and an organic solvent in solvent extraction processes, or water in a liquid-phase reaction. The process hydrodynamics will be well represented by such fluid mixing and
separation in a process unit. Thus, experimental work can be carried out under safe and low-cost conditions.
(ii) The solute concentration changes remain a problem, because there will be no real change under the previously discussed conditions; recall also the difficulties found in real cases in using low-cost and reliable instrumentation. Alternatively, the solute concentration changes can be obtained from detailed models relating measured operating conditions, such as flow rates, temperatures, pressures and levels, to the initial concentration state of the feeds. This kind of model may also be difficult to obtain.
Therefore, if a model is available and the pilot plant is operated using these low-cost fluids, a hybrid system can be developed. The real plant behavior is simplified, but the main hydrodynamic characteristics are still well represented. On the other hand, by using the model, the variables representing the target of the process operation are predicted (not measured) over a wide operating region.

If this hybrid system is built, the distributed control of local objectives can be administered by supervisory control strategies based on the estimation of the crucial variables. Process monitoring, diagnosis and fault detection, isolation and remediation studies can also be developed at low cost with a reasonable approximation to the behavior of a real process.

Models of different kinds may be built and solved on-line to produce virtual output variables from virtual and real input variables, as illustrated in Figure 1.

Fig. 1. Hybrid system: real input variables drive the pilot plant, which produces real output variables, while the plant simulator produces virtual output variables from the real and virtual input variables
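One way to picture this hybrid structure is as a single record merging measured and simulated signals. The following is a minimal sketch only; the callables and variable names are hypothetical, not from the paper.

```python
# Minimal sketch of the hybrid structure in Fig. 1 (all names hypothetical):
# measured hydrodynamic variables from the pilot plant are merged with the
# virtual metallurgical variables predicted by the plant simulator.
def hybrid_state(read_sensors, simulate, virtual_inputs):
    real = read_sensors()                            # e.g. flows, froth depth
    virtual = simulate({**real, **virtual_inputs})   # e.g. grade, recovery
    return {**real, **virtual}                       # one record for monitoring
```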

When supervisory control decisions are to be taken rather frequently, dynamic models should be used. Building such detailed dynamic models usually ends up with a model too complex and difficult to solve on-line. Sometimes the cost of implementing this approach is unaffordable and in some cases unjustified. Usually, for control purposes, a simpler linear model may capture the essentials of the dynamics. The problem is that such models are valid only in a very narrow operating zone: the steady state gains, time constants and time delays become functions of operating variables such as feed flow rates. A lower-cost approach is then to use simpler models and update their parameters every time the main flow rates are changed, as sketched below.
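A hedged sketch of such a scheduled simple model follows; the scheduling laws (how the gain and time constant depend on the feed flow rate) are purely illustrative and not taken from the paper.

```python
# Hedged sketch of the parameter-update idea: a first-order model whose gain
# and time constant are rescheduled whenever the main flow rate changes.
# The scheduling laws below are purely illustrative, not from the paper.
class ScheduledFirstOrderModel:
    def __init__(self, feed_flow):
        self.y = 0.0
        self.update(feed_flow)

    def update(self, feed_flow):
        self.gain = 2.0 / feed_flow      # illustrative gain scheduling
        self.tau = 60.0 / feed_flow      # illustrative time constant (s)

    def step(self, u, dt=1.0):
        # Euler integration of tau * dy/dt = -y + gain * u
        self.y += dt * (-self.y + self.gain * u) / self.tau
        return self.y

model = ScheduledFirstOrderModel(feed_flow=1.5)
model.update(feed_flow=2.0)   # call whenever the main flow rate changes
```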

At the Process Automation and Supervision Laboratory of Santa Maria University, three pilot plants were built, instrumented and controlled: a flotation column, a solvent extraction circuit and a continuous stirred tank reactor.

3 Flotation Column Pilot Plant


Flotation columns [3] are now used worldwide as efficient cleaning stages in a large number of sulfide mineral concentrators. More degrees of freedom in their operating variables have led to large variations in metallurgical performance and therefore to much scope for improving their control [4].

The primary objectives, as indices of process productivity and product quality, are the column recovery and the concentrate grade. The on-line estimation of these indices usually requires a significant amount of work in the maintenance and calibration of on-stream analysers, in order to maintain good accuracy and high availability [5]. Therefore, it is common practice to control secondary objectives, such as feed pH, froth depth, gas flow rate and wash water flow rate [4].

Stable operation of flotation columns, and the consequent consistent metallurgical benefits, can only be obtained if basic distributed control systems are implemented. In general, at least the wash water and air flow rates and the froth depth are measured on line, and the tailings, air and wash water flow rates are manipulated. In some circuits, pH control and chemical reagent addition control are also included. A schematic of such a control system is shown in Figure 2.

The coordinated control of these secondary objectives takes the form of a hydrodynamics supervisory control. To avoid working with solids and collectors, the approach of substituting the feed with water represents the column hydrodynamics quite well. The phenomenological model used to predict output grades from operating variables is a version of that reported in [6] and is shown schematically in Figure 3.

4 Experimental Results
Once the hybrid system is built, with all measurements and model predictions available in the computer network, different monitoring, diagnosis and supervisory control systems can be tested. In this work, examples of building and applying models based on principal component analysis (PCA) are discussed.

The concept of a latent variable model is that the true dimension of a process is not defined by the number of measured variables, but by the underlying phenomena that drive the process. The latent variables themselves are modeled as mathematical combinations of the measured variables and describe directions of variation in the original data. A latent variable model can contain many fewer dimensions than the original data; it can provide a useful simplification of large data sets, and it can allow better interpretation of the measured data during analysis [7].

4.1 PCA Application to a Flotation Column

A PCA model was built from 1800 sets of data corresponding to a normal condition of 16 variables (froth depth, gas hold-up, low and high pressure, mA signals to the air and tailings control valves, bias, air, wash water and feed flow rates, feed particle size, grade and solid percentage, and the predicted concentrate grade and process recovery). A model with 6 latent variables was found to explain at least 92% of the variance in the centered and scaled pretreated data. For monitoring the process, the Hotelling T2 limit was found to be 12.6, while the Q residuals (prediction errors) limit was 3.81.
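As a hedged illustration of how such a monitoring model can be assembled, the following sketch uses scikit-learn; the random matrix merely stands in for the real 1800 x 16 data set, and the T2/Q formulas are the standard ones rather than necessarily the exact computation used in the paper.

```python
# Sketch of PCA-based T2/Q monitoring, assuming scikit-learn and NumPy;
# the data here are random stand-ins for the 1800 x 16 matrix of
# normal-operation measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(1800, 16))   # stand-in data

scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)                 # centered and scaled, as in the paper
pca = PCA(n_components=6).fit(Xs)        # 6 latent variables
scores = pca.transform(Xs)

# Hotelling T2: squared scores weighted by each component's variance.
t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)

# Q (squared prediction error): residual after reconstruction from 6 PCs.
q = np.sum((Xs - pca.inverse_transform(scores))**2, axis=1)

# A sample is flagged abnormal when either statistic exceeds its limit.
T2_LIMIT, Q_LIMIT = 12.6, 3.81           # limits reported in the paper
abnormal = (t2 > T2_LIMIT) | (q > Q_LIMIT)
```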

Fig. 2. P&ID of a pilot flotation column: wash water, feed, reagents, air, concentrate and tailings streams, with flow (FIC, FI), level (LIC) and analytical (AI) instrumentation coordinated by a supervisor

Fig. 3. Metallurgical model of a flotation column: design parameters (diameter, height, geometry) and feed characteristics (flow rate, species, density, solid percentage, particle size, grades, kinetic constants), together with the froth depth, air flow rate and wash water flow rate, enter the process metallurgical simulator, which predicts the concentrate grade and the recovery

Experiments were carried out to test when the process is out of control and an abnormal operating condition is met. Two results are presented: when the process is at steady state and during the transient period. One example is shown in Figure 4, where the T2 and Q tests have been followed over 600 samples, taken every 5 seconds.

Fig. 4. Operating condition test A: Q residuals, Hotelling T2, and the concentrate grade (%) and recovery (%) over 600 samples

One can see that most of the time the Q test is satisfied, while the T2 test fails in the intervals 130-200, 300-430 and 480-560. In these same periods, the concentrate grade is too low and the recovery is high, or the concentrate grade is too high and the recovery is low; thus an abnormal operation has been detected. To identify which variables are causing this, the individual contributions to the T2 residuals for sample 512 are shown in Figure 5. One can see that the main contributions were the froth depth and the high and low dp/cells. All variables consistently showed that the problem is due to a low froth depth, causing high recovery and low concentrate grade. Figure 6 shows the froth depth changes during the whole period. If the froth depth is changed from 50 to 100 cm, as shown at sample 600, the column operation is driven back to a normal condition, as can be seen in the previous figures. When only the Q residuals test fails, the device measuring the isolated variable must be recalibrated or replaced.
Several tests were carried out to find the sensitivity of the monitoring test to the extent of the failure, measured as a percentage of error. Errors of less than 5% on pressure to control valves, 7% on dp/cells, 15% on flow meters and 10% on virtual measurements of concentrate grade were detected. These error limits were found for a large number of different operating conditions. One example is shown in Figure 7 for the virtual measurement of copper concentrate grade.
The same PCA model was used to test abnormal operation caused either by failed sensors or by process variable deviations. The PCA model relies on the selected data. If the data collected represent a narrow band of operation around the targets, it may be expected that abnormal conditions resulting from a combination of process variable deviations will be easily detected, but a model built on such selected data will be less useful for identifying measurement problems. The model used in this work was based on data corresponding to a wide operating zone, favouring the detection of sensor failures. A better approach to be tested is the use of different PCA models, based on different data, for each purpose.

Fig. 5. Contributions (% residual Ti) of each variable (z, E, P, L, PH, PA, PT, Jb, Jf, Jg, Jt, Jw, R, CCG, D, FCG, S) to abnormal operation A


Fig. 6. Froth depth [cm] over the whole period

Fig. 7. Failure detection on the virtual concentrate grade: Q residuals vs. error %


5 Conclusions
This approach, combining on-line process measurements and model-predicted variables, permits low-cost, safe and wide-range experimentation in pilot plants. When the process hydrodynamics and the phenomena changing the concentration of some species can be decoupled, considerable simplification of experimentation can be achieved. By using low-cost materials, such as water, air or kerosene, real experimentation can be performed to describe the real process hydrodynamics. By adding the use of simpler models, reliable information on key unmeasured variables can be obtained.

The application of multivariate statistical methods, and particularly PCA, is a powerful tool for building linear models that contain the essentials of the process phenomena with a minimum number of latent variables. The application of PCA models to monitoring CSTRs and flotation columns has been demonstrated.

These PCA models can be effectively used as part of a supervisory control strategy, especially when control decisions are made infrequently. A novel approach for testing strategies for process monitoring, diagnosis and control has been proposed. In the near future, more tests of novel strategies for process monitoring, diagnosis, fault detection and isolation, and supervisory control can be performed, giving considerable insight into process performance under real experimentation.

Acknowledgments. The author would like to thank Santa Maria University (Project 271123) and Fondecyt (Project 1100854) for their financial support.

References
1. Hodouin, D., Jämsä-Jounela, S.-L., Carvalho, T., Bergh, L.G.: State of the Art and Challenges in Mineral Processing Control. Control Engineering Practice 9, 1007-1012 (2001)
2. Hodouin, D.: Methods for Automatic Control, Observation and Optimization in Mineral Processing Plants. Journal of Process Control 21, 211-225 (2011)
3. Finch, J.A., Dobby, G.S.: Column Flotation. Pergamon Press (1990)
4. Bergh, L.G., Yianatos, J.B.: Control Alternatives for Flotation Columns. Minerals Engineering 6(6), 631-642 (1993)
5. Bergh, L.G., Yianatos, J.B.: State of the Art: Automation on Flotation Columns. Control Engineering Practice 11(1), 67-72 (2003)
6. Bergh, L.G., Yianatos, J.B., Leiva, C.: Fuzzy Supervisory Control of Flotation Columns. Minerals Engineering 11(8), 739-748 (1998)
7. MacGregor, J.F., Kourti, T., Liu, J., Bradley, J., Dunn, K., Yu, H.: Multivariate Methods for the Analysis of Databases, Process Monitoring, and Control in the Material Processing Industries. In: Proceedings 12th IFAC Symposium MMM 2007, pp. 193-198 (2007)
Reconstructing Assessment in Architecture Design
Studios with Gender Based Analysis: A Case Study of 2nd
Year Design Studio of National University of Malaysia

Nangkula Utaberta1,2, Badiossadat Hassanpour1, Azami Zaharim2, and Nurhananie Spalie1
1 Architecture Department, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia {The National University of Malaysia}
2 Centre of Engineering Education Research, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia {The National University of Malaysia}
nangkula_arch@yahoo.com

Abstract. Education is a continuous and consecutive process; thereby, learning skills and knowledge in any context requires a strong and potent academic basis. Appraisal methods and grading systems in studio-based education systems such as architecture need attention and scrutiny more than in other majors and fields, because translating the degree of success in solving ill-defined problems in design studios into grading symbols is more difficult than for multiple-choice tests and even open-ended questions. Architecture education is a phenomenon that involves relationships associating the personal characteristics, needs, attitudes and intentions of the studio master, juries and student participants with the sociopolitical characteristics and educational philosophies of the school. Each student learns based on their perceptions of their social world and their aspirations. These factors shape students' unique experiences and their approach to the design problem. In this study we are interested in assessing the participation and interaction of the participants in the design jury process, that is, female and male students and juries. The findings presented here have been compiled from one portion of an ongoing comprehensive research investigation in the Faculty of Engineering and Built Environment at Universiti Kebangsaan Malaysia.

Keywords: Architecture Education, Gender, Students, Assessments.

1 Introduction
The design process in architectural studios is based on several small, well-defined projects during the semester and on a final project at the end, which is ill-defined and larger in scale. Students should finalize their project before the deadline and present it on submission day with proper documentation. On this day they have a chance to see other students' projects and to get comments from peers and experts, and finally they will get a mark. Experience shows that students worry about their grades to the point that they won't take part in discussions if they think their comments will affect grades, and with small negative comments or fault-finding in their project they get disappointed and lose the statements and suggestions that come after. Most of the students' complaints are about the unfairness and inequity of grades. This may be rooted in unawareness of the way they are evaluated and graded.
On the other hand, analysis shows that there is no common understanding of what the grading process in architecture is, and what occurs in faculties is just instructors' experience of what their own professors did. This has inhibited high-quality discourse, research and development of grading systems in architecture education. First of all, we have to investigate the past and currently implemented grading systems in architecture faculties to find the characteristics and attributes of idealistic grading systems. Since some terms related to the discussion are used differently in different countries, and even within a single country in different education sectors, finding an appropriate terminology to use in the analysis of assessment and grading is essential. For instance, "assessment" in some contexts in the USA refers to the evaluation of a wide range of characteristics and processes relating to higher education institutions, including entry levels, attrition rates, student services, physical learning environments and student achievements. In the UK, "assessment" can mean what students submit by way of project reports, written papers and the like, as distinct from what they produce under examination conditions. Similarly, a "grade" may refer to the classification of the level of a student's performance in an entire degree, the summary of achievement in a single degree component, or the quality of a single piece of work a student submits in response to a specified task.

Assessment in this article refers to the process of forming a judgment about the quality and extent of students' achievement or performance. Such judgments are mostly based on information obtained by requiring students to attempt specified tasks and submit their work to instructors or tutors for an appraisal of its quality.
Student learning issues are currently at the forefront of education, especially in art and architecture, where the challenge of identifying a problem, defining its limits and developing a creative approach to solve it aids the development of reasoned judgment, interpersonal skills, reflection in action and critical reflection on practice [1]. Hence design studios, as places where inherited traits and values are transmitted and social relationships between students, tutors and peers are cultivated [2], are a crucial part.
Sara (2002) asserts that the studio's nature as a confined space isolated from the social world itself prevents the movement toward liberation. Each architecture faculty promotes a different niche, and the different studio cultures and academic climates affect students' interest, performance, and sense of self-worth.
Empirical studies of architecture education are few, and studies of gender issues in architectural education are all the more rare. Educational research [3] reveals that male and female students are treated differently and that architecture pedagogy has historically been constructed with a masculine identity and still persists in this. Architecture students are usually presented with a history in which women do not appear and in which women's particular contributions are not recognized (Ahrentzen, Anthony 1993). Most women remain spectators in popular versions of both past and present. A look at architectural history textbooks reveals little mention of women and their contributions to the built landscape. We might reasonably assume that most syllabi of architectural history courses also neglect women [3].

For example, the awarding of the 1991 Pritzker Architectural Prize solely to architect Robert Venturi ignored the contributions of his partners, notably Denise Scott Brown. Venturi commented on this omission when he acknowledged the award: "It's a bit of a disappointment that the Prize didn't go to me and Denise Scott Brown, because we are married not only as individuals, but as designers and architects" [4].
Another example is Julia Morgan. Her capabilities as a designer and architectural professional were on a par with those of her male contemporaries. However, because she was not male, the commissions she received and the publication of her design work were not of the same caliber and prominence as those distinguishing her male architectural colleagues [5].
Favro reveals that Morgan's work at the Ecole was every bit as object-oriented and style-conscious as her peers'. What she lacked was opportunity. Armed with her diploma from the Ecole, Morgan sought professional validation, yet found herself, by gender, in the position of an outsider. She displayed obvious skill as a designer and engineer, yet was often given commissions because of preconceptions about female sensitivity.
Genderization is the attaching of our cultural constructs of masculinity to our concept of what constitutes a well-educated person or suitable educational methods [3].
Gender is not simply a biological difference and should not be construed as the property of individuals. Rather, gender reflects how social expectations and beliefs treat the biological characteristics of sex to form a system of domination and subordination, privilege and restraint. Domination does not necessarily have to be as overt as physical oppression; it can be as pervasively subtle as silencing an individual's voice in text, display, or class discussion [3]. It is important to recognize that our social constructions of masculine and feminine are fluid, from one culture to another, within any culture over time, over the course of one's life, and among different groups of men and women depending on class, race, ethnicity and sexual orientation. We must be constantly aware of how society treats gender and how we may inadvertently reinforce it [3].
In 2003 Phakiti [2] suggested that learning strategies, motivation and the role of context are intertwined with gendered identities, and that research is needed to understand why and in what contexts gender differences in learning occur. In this paper an example of a particular design studio is used to illustrate gendering in design studios. Based on the literature and the most cited research worldwide, we administered a questionnaire to second-year studio students in the Architecture Department at Universiti Kebangsaan Malaysia; drawing on the students' responses and one-on-one interviews, we trace the weak points of current models and offer some suggestions at the end.

2 Methods and Material


Crit sessions and design juries are the backbone of design studios and a fundamental component of architectural education. Studies by Ahrentzen and Anthony (1993) and Ayona Datta (2007) show that student-teacher interaction during desk crits and juries reveals gendered patterns of communication. Sotto (1994) suggested that it makes no sense to decide how one is going to teach before one has made some study of how people learn. To investigate students' perceptions and feelings, and to measure their satisfaction with the models in use, we chose second-year architecture students in the BS degree at Universiti Kebangsaan Malaysia as a case study. The questionnaire was administered at the end of the first semester. All 23 students of the studio filled in the questionnaire and attended a one-to-one interview. Of these, 14 were female and 9 were male; 15 were Malaysian and the rest were Chinese-Malaysian. The studio was run by 2 male lecturers and 3 female teaching assistants, of whom 3 were Malaysian, one Indonesian and one Iranian.
One-to-one tutorials were informal crits, and sometimes informal group discussions were held with one lecturer in each group. During formal assessment periods such as crits or reviews, students pinned up their sheets and explained their concepts and ideas to justify their design process to lecturers, peers and reviewers. Juries used evaluation sheets to assess students' work based on predefined criteria.
In the first part of the study, a questionnaire with 14 questions was distributed; except for the first 3 and the last 4 questions, which were open-ended, all items were Likert-type with five levels from 1 to 5, where 1 is the minimum and 5 the maximum.
When students were asked how they learn to design, 47% of students identified working in the studio as the primary mode of learning. 55% of the men also cited reading, while 42% of the women chose discussion with peers. 47% of all students, and 33% of the men, mentioned discussion with tutors as an effective means of learning in the studio. This highlights the collaborative nature of architectural learning and stresses the importance of the studio as the primary learning space in architecture, and the role of the teacher.
Students were asked whether they had been encouraged to participate in the jury/panel discussion while another student was presenting his or her project. As Figure 1 shows, the percentage of female students who chose "never" is 15 points higher than that of male students, and the overall average illustrates that students were asked to join the discussion only sometimes. This supports Frederickson's (1993) finding that women in small groups often do not receive a fair hearing. He also emphasized the importance of the role of tutors and leaders in encouraging students.

Fig. 1. Students' responses to the question of how often instructors encouraged them to join the discussion

Students were asked how they usually feel after crit sessions. Interestingly, 49 percent of the women cited feeling disappointed, uncertain or confused, while 88 percent of the men felt inspired and motivated.
Men also complained of feeling humiliated and demoralized after receiving
negative comments. But many male students view the session as just one more battle
to be won. By contrast, to many women students, this warrior mentality is truly
foreign, causing them to feel all the more self-conscious at the jury [3].
Like the studio, the design jury is a fundamental component of architectural education. At most schools the typical jury includes only men, or perhaps, on occasion, a token woman. Although we see a vast number of juries in which all jurors are male, we rarely if ever see juries in which all jurors are female. Mark Frederickson [6] reveals several important sources of gender and racial bias. Compared to male jurors, female jurors receive less than their fair share of total time to comment, they speak less often, and they are interrupted more often. Compared to juries for male students, juries for female students are shorter, and female students are interrupted more often. Jurors appear to have a condescending attitude and lower expectations, and demonstrate coddling behavior toward female students. Data obtained from students' responses to the question "who benefits from jury sessions?" are tabulated in Figure 2. The variance of the choices reveals that female students benefit more than male students.

Fig. 2. Students' responses to the question "who benefits from jury sessions?"

Students were asked how often they were interrupted by the juries or their tutor while presenting their concept or project. More than 55 percent of the women complained that they were interrupted all the time, and that afterwards they lost their words and became more nervous, while most of the men mentioned that they were interrupted by the teacher or juries to be asked questions, which helped them explain better what was needed.
In addition, research has shown that instructors give male students more detailed
instructions on how to complete assignments on their own, while they are more likely
to complete assignments for female students.

Surveys in past years by Ayona Datta (2007) show that instructors talk more to male students, ask them more challenging questions, listen more, give them more extended directions, allow them more time to talk, and criticize and reward them more frequently. Interestingly, however, in our research girls and boys reported the same feelings about being rewarded by their tutors. This may derive from the number of female teachers present in the studio. Changing the competitive atmosphere of design studios into a cooperative climate can help female students show their abilities; as Laura Tracy put it, "competing against the problem instead of against one another."
The results of this survey illustrate that gender differences exist in some studio contexts and that these differences are part of socialization into the culture. Students' learning can be influenced by gender differences. This means that educators must recognize that design and learning "differences" may reflect the different worlds in which boys and girls are socialized, as well as our socialized expectations of men and women.
So teachers need to be trained on this issue to be able to facilitate the learning process. Most tutors in the studio find themselves thrust into teaching without much training in gender-sensitive teaching skills. Hence, they pass down teaching models gleaned from their own education without critically evaluating the hegemonic ideologies that may be part of these models [7]. Adequate training of all tutors in the skills of listening, reflective questioning, and gender-sensitive attitudes and behaviors would create a more inclusive context for learning. In addition, removing the over-reliance on crits and increasing the range of assessment methods to cover self- and peer-assessment as well as verbal presentation skills would empower students and allow them more involvement in their learning.

3 Conclusions
The intention of this paper is to make architecture educators aware of gendered educational practices and their consequences, both for students and for the discipline itself. Since learning is always connected with concurrent experiences, there should be as much focus in the curricula on gender-sensitive design projects as there is on technology. We hope this paper can start a further discussion and discourse on this unpopular area.

References
1. Schon, D.: Educating the Reflective Practitioner: Towards a New Design for Teaching (1987)
2. Datta, A.: Gender and Learning in the Design Studio. Journal for Education in the Built Environment 2(2), 21–35 (2007)
3. Ahrentzen, S., Anthony, K.: Sex, stars, and studios: A look at gendered educational practices in architecture. Journal of Architectural Education 47(1), 11–28 (1993)
4. M.J.C.: Robert Venturi Awarded Pritzker Prize. Architecture 80(5), 21 (1991)
5. Favro, D.: Sincere and Good: The Architectural Practice of Julia Morgan. Journal of Architectural and Planning Research 9(2), 125 (1992)
6. Frederickson, M.P.: Gender and racial bias in design juries. Journal of Architectural Education 47(1), 38–47 (1993)
7. Glasser, D.E.: Reflections on architectural education. Journal of Architectural Education 53(4), 250–252 (2000)
Re-assessing Criteria-Based Assessment in Architecture Design Studio

Nangkula Utaberta (1,2), Badiossadat Hassanpour (1), Azami Zaharim (2), and Nurhananie Spalie (1)

(1) Architecture Department, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (The National University of Malaysia)
(2) Centre of Engineering Education Research, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (The National University of Malaysia)
nangkula_arch@yahoo.com

Abstract. The methods of education and assessment are transmitted from pioneering universities and faculties to other departments and universities without any consideration of the destination country. This may be common and unproblematic in science and related fields, but in art and architecture, where heritage, environment and the like influence the design process, it is questionable to follow the imported models. This paper's effort is to restudy previous research by the most cited scholars in the architecture education field so as to reach a better understanding of formative assessment as a transformable trait of assessment, and of its benefits.

Keywords: architecture education, criteria based assessment, grading.

1 Introduction
Architecture is involved with every aspect of the design process from concept to completion, and because of the nature of its education, the architect is ideally suited to exercise and maintain overall management of the project. A student, after taking liberal arts subjects and basic architectural graphics and communication, is given an associate degree in architectural technology. After another two years of architectural building subjects he may be given a diploma or bachelor's degree in architectural technology. After another year or two of graduate work in advanced architectural, structural design and professional subjects he may be given a master's degree. This path has a recurrent component, called assessment.
As Derek Rowntree [1] stated, if we wish to discover the truth about an educational system, we must first look to its assessment procedures. The locus of studies in this millennium is shifting towards skills acquisition, rather than knowledge accumulation, for autonomous, self-directed and lifelong learning. At the same time, once a technology is developed in a certain country, its know-how can be instantly spread all over the world, neglecting the cultural aspects of the countries to or from which it propagates. On the contrary, the spiritual and cultural aspects of human life, namely how to enrich people's day-to-day lives, cannot easily be communicated; the interchange of cultural aspects is not as easy as that of materialistic ones. In this paper, the assessment culture is discussed first, followed by the effect of using formative assessment in education and its benefits.

2 Assessment Culture in Architecture Education


The notion of culture represents a fundamental human concept that underlies historical developments and the creation of civilizations. Though there are numerous ways of defining culture, it is often perceived as referring to the shared ways of thinking and behaving, to the common attitudes and beliefs that a social community shares, and to the products the social community has created. The concept of culture is currently applied broadly to refer to, depict and characterize sets of shared beliefs and modes of practice in diverse areas, including the sphere of education (for instance, learning cultures and school culture) and educational assessment.
The role of assessment is shifting. Assessment is currently perceived as a means to promote learning rather than to monitor it; hence, assessment is for learning. Assessment for learning is the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go, and how best to get there. Assessment should not merely be seen as something separable from instruction, administered at the end of the learning process, but also as a powerful tool for promoting deep learning activities. Each kind of assessment is important: those that occur in daily classroom interactions among teachers and students, those set by teachers at the end of a particular phase of the work, and those developed and administered by external jurors.
Assessment culture refers to educational evaluation practices that are compatible with current ideologies, social expectations, attitudes and values, so the emergence of assessment cultures needs to be discussed with reference to current views on learning and education and to the social role of assessment. Birenbaum (1996) [2] made a distinction between cultures in the measurement of achievement and related them to developments in the learning society. In the traditional, so-called testing culture, instruction and testing are considered to be separate activities: instruction is the responsibility of the teacher, whereas testing is the responsibility of the psychometric expert, who can use elaborate procedures for test development and sophisticated psychometric models for the analysis of test responses. The changing learning society has generated the so-called assessment culture as an alternative to the testing culture. According to Birenbaum (1996) [2], the assessment culture is in accord with the constructivist approach to education. In this approach, learning is viewed as a process through which the learner creates meaning, and the teacher is not a person who transfers knowledge but a mentor who provides opportunities for learners to use the knowledge and skills they already possess to understand new topics. The teacher is expected to provide interesting and challenging tasks.
Teymur [3] reached the conclusion that, since the design process consists of regular experimentation, the architectural curriculum generally has few real variations across countries; the differences in results arise instead from differences in learning style, which originate from numerous cultures and a variety of backgrounds.

Research by David Gijbels [4] in 2006, with a sample of 108 first-year Bachelor's university students, shows that differences in assessment preferences are correlated with differences in approach to learning. Deep approaches to learning are associated with students' intentions to understand and construct the meaning of the learned content, whereas surface approaches to learning refer to students' intentions to learn by memorizing and reproducing the factual contents of the study materials. Students generally shift between surface and deep approaches to suit the assessment demands of their courses.

3 Criteria Based Grading Models

Since criteria are attributes or rules that are useful as levers for making judgments, it is useful to have a general definition of what a criterion is. There are many meanings for criterion (plural criteria), but most of them overlap. Here is a working, dictionary-style definition, verbatim from Sadler (1987), which is appropriate to this discussion and broadly consistent with ordinary usage [10]: "Criterion (n): A distinguishing property or characteristic of anything, by which its quality can be judged or estimated, or by which a decision or classification may be made. (Etymology: from Greek kriterion: a means for judging)." Grading models may be designed to apply to a whole course or, alternatively, to specific assessment tasks, and some are appropriate for both. For all the grading models explained below, the interpretation of criteria is the same as the general definition given above, and all of them make a clear connection between the achievement of course objectives and the grades given, without reference to other students' achievements.

3.1 Verbal Grade Description

In this model, grades are based on students' achievement of the course objectives. The given grades rest on interpretations which clarify the extent to which the course objectives have been attained (Fig. 1). This grading method takes a holistic attitude to evaluation.

3.2 Objective Achievements

In this form the course objectives are divided into major and minor, the achievement of each is determined by yes or no, and the achievements of the objectives are computed [11] (Fig. 2). Both of these objective-based models make a clear connection between the attainment of course objectives and the grades awarded, but students cannot easily see any close connection between the course objectives and the assessment items, and they are not in a strong position to judge how far they have reached the objectives.
Therefore these types of models have little prospective value for students. Also, there is no indication of whether the given grades are for attainment of the objectives of a particular task or of the objectives as a whole, nor of whether each objective is assessed on its own or in combination with other objectives. Most educational outcomes and attainments cannot be assessed as dichotomous states like yes/no or zero/one, because learning is a continuous process; in contrast with discrete scales, it can only be divided into segments of satisfactory and dissatisfactory [11]. A small sketch of how form (b) could be computed follows.

Fig. 1. Form (a)

Fig. 2. Form (b)
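To make the mechanics of form (b) concrete, here is a minimal sketch in Python; the objective names, the major/minor weighting, and the yes/no judgments are invented for illustration and are not taken from the paper:

# Hypothetical illustration of the "objective achievements" form (b):
# each major/minor course objective is judged yes/no and an overall
# attainment percentage is computed from the weighted results.
objectives = {
    # name: (achieved?, weight) -- major objectives weighted 2, minor 1
    "site analysis":        (True,  2),
    "concept development":  (True,  2),
    "structural logic":     (False, 2),
    "graphic presentation": (True,  1),
    "model making":         (False, 1),
}

total = sum(w for _, w in objectives.values())
achieved = sum(w for ok, w in objectives.values() if ok)
print(f"objective attainment: {achieved}/{total} = {achieved / total:.0%}")

As the surrounding text argues, the weakness of such a scheme is visible even in the sketch: each objective collapses to a yes/no, so the continuum of partial attainment is lost.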

4 Proposed Criteria Based Model in Architecture Assessments


On submission days, students prepare the required documentation, such as sheets including plans, elevations, sections, perspectives, etc., and 3D models, which may be prescribed by instructors or left to the student's discretion. But these are not the only things to be assessed by the jurors: the primary goals on which the problem-solving process was based are the most important part of the assessment, so the criteria used in assessment and grading are linked directly to the way the objectives are expressed.
This approach has some conceptual parallels with the behavioral objectives movement. According to Mager (1962) [5], a behavioral objective is not properly formulated unless it includes a statement of intent, a description of the final behavior desired, the conditions under which this behavior is to be demonstrated, and the minimum acceptable level of performance that signifies attainment of that objective. Defined architecture assignments, depending on their type, scale and duration, have different objectives and expectations, and different tasks are required to assess the students' submissions. These tasks are based on practical necessity and on personal standards aligned with the course objectives, and they create the policies that assessors intend to take into account in judgment. Examining the evaluation sheets used in a variety of studios for different projects leads to the conclusion that the rubric of tasks is as follows:

1. Critical Explanation
2. Logical Development
3. Proposal and recommendation
4. Oral and Graphic Presentation

The potential number of tasks relevant to the projects is large, but these are enough to be illustrated and discussed in this paper. For each rubric and task some criteria will be defined. Dividing the evaluation into more tasks increases students' opportunities to show their capabilities and sufficiency and gives them more chances to earn better marks. In contrast, the more objectives are expressed for each task, the more they operate in isolation and recede from the overall configuration that constitutes a unit of what the students are supposed to do. In addition, it restricts assessors within these defined borders and confines their authority and experience in recognizing and analyzing the students' hidden intentions in their designs. This is completely in opposition to the main purpose of inviting external jurors, which is to benefit from a diversity of expert ideas and critical attitudes. So the character of the objectives is more important than their number in defining flexible evaluation borders.
Since students perform along a continuous path, the result of their performance can only be revealed on a continuum that can be divided between satisfactory and dissatisfactory. A student's locus on this vector derives from the quality of their work in response to the defined criteria in each task. So some qualitative levels need to be defined and applied as a norm in the assessment. The descriptions should have the best overall fit with the characteristics of the submitted projects; the assessor does not need to make separate decisions on a number of discrete criteria, as is usual in list form. Typical levels are: little or no evidence, beginning, developing, accomplished, exemplary.
However helpful and effective these descriptions are in an appraisal system, in the end the qualitative assessment should be translatable into grades and marks, so this model needs to be coordinated with one of the common grading systems. As mentioned before, grading systems such as (1–100) or (A, B, ...) are not appropriate to import into a criteria-based assessment model, because after converting students' work into numerical grades the connection between course objectives and grades is completely broken: marks and grades do not in themselves have absolute meaning, in the sense that a single isolated result cannot stand alone as an achievement measure with a universal interpretation. Assessment and grading do not take place in a vacuum.

Fig. 4. Proposed Criteria Model Assessments

The quality of students' work, together with interpretations of such judgments, can be regarded as a comprehensive model of judgment. Alternatively, a simple verbal scale could be used for each criterion, such as Fail, Poor, Average, Good and Excellent; in this form a verbal grade description applies to the given assessment task, with a separate description for each grade level (as mentioned before), so each list of criteria can be elaborated into a marking grid.
Finally, the components of the grades are weighted before being added together, to reflect their relative importance in the assessment program. There are several forms in which to present the final grades. The simplest is a numerical rating scale for each criterion, in which case the ratings can be added to arrive at an overall mark or grade for the work; using numerical ranges gives an impression of precision, and the system is easy to make operational (see the sketch at the end of this section). The introduced model contains most of the strong points of the other criteria-based models and of the non-criteria-based models. The method does not depend on ranking or sorting students' projects; there is no explicit reference to other students' performance, unlike norm-referenced grading, in which final grades are assigned by determining where each student stands in relation to the others. Also, since this model is completely based on the course objectives and on the instructors' expectations and strategies in conducting the project, it creates opportunities for instructors to discuss and criticize the methods they use in teaching, in defining assignments, and in setting objectives. This may lead to an improvement in the level of education. Although judgments can be made analytically (that is, built up progressively using criteria), holistically (without using explicit criteria), or even comparatively, it is practically impossible to explain a particular judgment, once it has been made, without referring to criteria. So all evaluation and assessment methods need to be investigated, the criteria they use identified, and their potentials hybridized with current methods to upgrade the existing models.
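As a minimal sketch of how the proposed model could be made operational: the task names below follow the four-part rubric above, but the criteria counts, the verbal-to-numeric mapping, and the weights are illustrative assumptions, not the authors' instrument.

SCALE = {"Fail": 0, "Poor": 1, "Average": 2, "Good": 3, "Excellent": 4}

# (task, weight, per-criterion verbal ratings) -- ratings are hypothetical
assessment = [
    ("Critical Explanation",          0.20, ["Good", "Average"]),
    ("Logical Development",           0.30, ["Excellent", "Good"]),
    ("Proposal and recommendation",   0.30, ["Good", "Good", "Average"]),
    ("Oral and Graphic Presentation", 0.20, ["Average", "Good"]),
]

final = 0.0
for task, weight, ratings in assessment:
    # mean criterion score for the task, normalised to the 0..1 range
    score = sum(SCALE[r] for r in ratings) / (len(ratings) * max(SCALE.values()))
    final += weight * score
    print(f"{task}: {score:.2f}")
print(f"weighted final mark: {100 * final:.1f}/100")

Mapping the verbal scale to numbers only at this final step preserves the link between each criterion judgment and the course objectives for as long as possible, which is the point the section argues.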

5 Conclusion
Evaluation and grading systems in art and architecture, and especially in their studio-based courses, are more difficult than in other majors and fields. Since their teaching and learning processes are different from, and more complicated than, those of theory courses, this is understandable. But there is a common belief that there is no criterion or norm in their grading and assessing system; in other words, that the grading is holistic and subjective. This statement is not incoherent: there are no explicit criteria or norms shared among jurors and instructors in evaluating and grading students' projects, and if there are, they are not made known and explained to students. Students themselves should be inducted directly into the processes of making academic judgments, so as to help them make more sense of, and assume greater control over, their own learning and thereby become more self-monitoring.
In recent years, more and more universities have made explicit overtures towards criteria-based grading to make assessment less mysterious, more open and more explicit. But where there is no discussion and contribution there is no way to improve and develop this model, and many institutions may employ identical or related models without necessarily calling them criteria-based. A further framework can be self-referenced assessment and grading, in which the reference point for judging the achievement of a given student is that student's previous performance level or levels; what counts then is the amount of improvement each student makes.

References
1. Biggs, J.: Teaching for Quality Learning at University: What the Student Does. SRHE & Open University Press, Buckingham (1999)
2. Birenbaum, M., Dochy, F.: Alternatives in Assessment of Achievement, Learning Processes and Prior Knowledge. Kluwer, Norwell (2009)
3. Teymur, N.: Towards a Working Theory of Architectural Education (2005)
4. Gijbels, D., Dochy, F.: Students' assessment preferences and approaches to learning: can formative assessment make a difference? Educational Studies 32(4) (2006)
5. Prosser, M., Trigwell, K., Hazel, E., Gallagher, P.: Research and Development in Higher Education 16, 305–310 (1994)
6. Dochy, F.J.R.C., McDowell, L.: Introduction: assessment as a tool for learning. Studies in Educational Evaluation 23(4), 279–298 (1997)
7. Inbar-Lourie, O.: Language assessment culture. In: Shohamy, E., Hornberger, N.H. (eds.) Encyclopedia of Language and Education, 2nd edn. Language Testing and Assessment, vol. 7, pp. 285–299. Springer Science+Business Media LLC, Heidelberg (2008)
8. Black, P.J., Wiliam, D.: Assessment and Classroom Learning. Assessment in Education 5(1), 7–74 (1998)
9. Nitko, J.: Educational Assessment of Students. Prentice Hall, Upper Saddle River (2000)
10. Sadler, D.R.: Ah! ... so that's quality. In: Schwartz, P., Webb, G. (eds.) Assessment: Case Studies, Experience and Practice from Higher Education. Kogan Page, London (2002)
11. Sadler, D.R.: Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education 30(2), 175–193 (2005)
Layout Study on Rural Houses in Northern Hunan
Based on Climate Adaptability

Xi Jin, Shouyun Shen, and Ying Shi

School of Environmental Art Design, Central South University of Forestry and Technology, Changsha 410004, China
jinxi_alex@163.com

Abstract. Nowadays in China the relationship between building and climate is a hot issue in the green building field, especially in rural construction. It is vital to develop and systematize valuable passive strategies and skills adjusted to the local ecosystem, including climate, wind and geomorphology, so as to build more distinctive and adaptable rural houses. Firstly, based on on-the-spot investigation and detailed tests of rural houses in Huarong, which is located in the hot-summer and cold-winter zone of Northern Hunan, this paper analyzes overall planning and plane layouts quantitatively from the climate adaptability angle and summarizes existing functional features. Secondly, from analysis of the collected data, this paper demonstrates several original green building design strategies and the concrete measures by which local rural houses in Huarong adapt to the climate, which could also be applied in other places in Northern Hunan as effective experience.

Keywords: Rural house, Climate adaptability, Huarong, Passive technology.

1 Introduction

The 2007 Annual Report on China Building Energy Efficiency stated that in South China it is solar radiation that drives summer air-conditioning energy consumption, so shading and ventilation are vital for energy saving, and the application of passive technology is the key to the energy efficiency of residential buildings in South China.
Theories and practices concerning rural houses have mostly focused on traditional folk houses: in Asia, typical Japanese and Korean traditional folk houses have been thoroughly researched to summarize responsive sustainable design strategies, as has been done in several places in China including Shanxi, Anhui, Hunan and Yunnan.
For contemporary rural houses, by contrast, especially those located in the hot-summer and cold-winter zone, similar research on sustainable practice and climate adaptability is insufficient, which could hinder the implementation of energy efficiency policies in the countryside. It is therefore extremely significant, for reducing energy consumption and building ecological rural houses, to deeply research the layouts of rural houses in South China on the basis of climate adaptability, and to develop and systematize valuable passive strategies and techniques so as to optimize the function layout.

2 Research Objective and Purpose


Huarong, near Dongting Lake and belonging to the city of Yueyang in Hunan province, is chosen as the study object; it has the typical climate features of the hot-summer and cold-winter zone: stuffiness in summer, clamminess in winter, and a high calm-wind rate. The statistical data for a typical meteorological year in Yueyang show the following: the annual average temperature is about 17 °C; the daily extreme maximum is 41 °C, with an average monthly dry-bulb temperature of 30 °C throughout July, the hottest month; in January, the coldest month, the minimum temperature is -10 °C, with an average monthly dry-bulb temperature of 6 °C; and the annual average relative humidity is 79%–80%. These conditions push houses beyond the scope of thermal comfort.
In January and July 2010 we surveyed 60 rural houses in Huarong, among which 3 houses (2 townhouses, 1 single house) were investigated in detail. By summarizing these samples and questionnaires, data related to the overall layout of local houses, indoor thermal conditions, function layouts and energy consumption were obtained.
It is meaningful to research and analyze the overall layout, function and thermal conditions so as to sum up climate-responsive design elements; in addition, it is also necessary to optimize the internal layout from the climate adaptability angle by systematizing valuable passive technologies, which can contribute to new rural construction in Hunan.

3 Research Methods

Field measurements were conducted over 7 days. A portable weather station mounted at a height of 10 m was used to measure outdoor air temperature and wind speed. The indoor air temperature was measured by dataloggers covered with silver paper to reduce the influence of lighting at night, placed at a height of 2 m above the floor or the ground, and the data were recorded every 5 minutes. The locations of the measuring points are shown in Fig. 1. In particular, all instruments were precisely calibrated before measuring.
60 rural houses were surveyed through questionnaires. Among them, we tested 3 representative houses in detail, as they had similar testing conditions: they were adjacent to the road on the south; on the north they were near an inside patio; and the measuring point of each building was located in the southern bedroom on the second floor. A sketch of how such logger records can be summarized follows.
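As an illustration of how 5-minute logger records of this kind could be reduced to comparable indoor statistics, here is a minimal Python sketch; the file name and column names are assumptions made for the example, not the authors' data format:

import csv
from collections import defaultdict

readings = defaultdict(list)  # house id -> list of indoor temperatures (deg C)
with open("huarong_loggers.csv", newline="") as f:   # hypothetical file
    for row in csv.DictReader(f):                    # assumed columns: house, temp_c
        readings[row["house"]].append(float(row["temp_c"]))

for house, temps in sorted(readings.items()):
    swing = max(temps) - min(temps)   # smaller swing = more stable interior
    print(f"house {house}: mean {sum(temps)/len(temps):.1f} C, "
          f"min {min(temps):.1f}, max {max(temps):.1f}, swing {swing:.1f} K")

A summary of this kind is what underlies the comparison of temperature variation among the three tested houses in the next section.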

Fig. 1. Planes of measured houses and measuring points

4 Results and Analysis

4.1 Overall Layout

Most houses are compactly built along road sides with obvious linear features, which achieves optimal land utilization. In hot summer it is appropriate for rural houses to take a low-rise, high-density overall layout: firstly, it can form and promote "lane wind" to improve human thermal comfort; secondly, this allocation keeps most rooms far away from the outdoor ground, minimizing excessive thermal influence on the indoors.
Most dwellings are townhouses. From Figure 2 it is clear that under the same testing conditions the temperature difference among the three houses fluctuates within 2 °C. This muted contrast is due to the relatively fine weather during the testing days. Nonetheless, a clear tendency can be observed: the thermal conditions of house 1# (the middle townhouse) are relatively stable, with the minimum temperature variation and the best thermal performance of the three. The thermal performance of house 5# (the end townhouse) is a little poorer than that of house 1#, because of western solar radiation. House 3# (the single house) has the poorest thermal conditions of the three, owing to both eastern and western solar radiation. From these data the objective conclusion can be drawn that, as regards the energy efficiency of building types, a middle townhouse is superior to an end townhouse, which in turn is superior to a single house; this thermal contrast is undoubtedly more obvious under extreme climate. What's more, with a small building shape coefficient and a mutual shading system, townhouses can avoid the negative indoor effects of excessive solar radiation.
In summary, light, temperature, humidity, wind direction and velocity are fully considered in this layout to improve the microclimate and optimize climate adaptability. This spontaneous rural layout embodies original and plain ideas of Green Building.

Fig. 2. Temperature changes of tested houses in the winter and the summer

4.2 Plane Layout

It is discovered that the hall-patio pattern of traditional Hunan folk houses has been inherited in the plans of rural houses. Centered on the patio, rooms enclose one another to form a space sequence that closes toward the outside and opens to the inside (Fig. 3). The typical layout of eaves gallery - hall - patio - bedrooms inherits the unique architectural language of traditional folk houses and forms a spontaneous mutual shading and ventilation system that improves the microclimate and makes the houses adapted to the local climate.

Fig. 3. Typical space sequence and the unobstructed ventilation aisle from south to north

Eaves gallery: In Huarong there is not a real eaves gallery in every house. Through the vertical concave and convex articulation of the building envelope and the eaves projection of the pitched roof, a climate buffer zone similar to the traditional eaves gallery is formed, resulting in an original, passive, self-shading system as follows: a) For a single-storey house with three bays, the bedrooms and kitchen are located on both sides of the plan and bulge outward in contrast to the hall, which is recessed about 1.5 m inward; the concave and convex features of the building envelope, combined with the eaves projection of the pitched roof, achieve favorable shading effects. b) For a multi-storey house, on the second floor, just above the hall, a balcony or a bedroom always bulges 1.5 m outward, forming a side elevation with a recessed first floor and projecting second or third floor, which largely improves the shading of the hall. Whether intentional or not, this "eaves gallery" ameliorates thermal problems by providing the self-shading system, and it relieves somewhat the monotony of the elevations (Fig. 4).

Fig. 4. Demonstration of building self-shading

Hall: The hall is always the core of a rural house; 70% of home activities, such as entertaining, dining and recreation, happen in the hall. In all the surveyed houses the rooms are organized around the hall as a center. Most halls are 4 m wide, with a depth above 10 m. Two gates are installed separately on the northern and southern walls of the hall, facing the outdoors or the courtyard; this ensures that the hall is unobstructed in both the northern and southern directions. The southern gate, placed in the middle of the hall, is bigger than the northern gate, which is placed to one side. Because the gate directions accord with the prevailing wind direction in summer, this measure can give rise to cross ventilation, improving dehumidification and heat dissipation by natural ventilation in summer. Most surveyed houses adopt this pattern. For example, as Figure 2 shows, one hall is 3.9 m wide and its southern gate is almost 2.45 m wide and 3 m high, which ensures that the hall is unobstructed in both the northern and southern directions in favor of ventilation.

Fig. 5. Principle of hall as a responsive core

From a climate-responsive angle, the hall may be considered a responsive core: an "adytum" surrounded by rooms. In summer, by adjusting the interfaces between the adytum and the rooms or the outdoors, an unobstructed ventilating aisle can be created that promotes natural ventilation of the house by thermal and wind pressures, achieving the desired indoor thermal environment (Fig. 5).
Patio: During the investigation we discovered that it is advisable for townhouses with narrow width and long depth to build a courtyard to solve functional problems such as the daylighting of dark rooms, ventilation, and discharge. The patio can also use air convection currents to make the house cool in summer and warm in winter. Two patio types have been adopted in local rural houses, the central patio and the side patio, with sizes fluctuating between 2 m and 5 m. In the patio, the walls shelter the space from solar radiation to produce a cool zone; the cooled air flows toward the hot rooms, and thermal ventilation is thereby realized. At night, air in the courtyard rises as it is heated by the thermal radiation of the walls and combines with the cooler air above to cause convection, so cool air circulates gradually into each room.
Besides, unlike with the traditional patio, what residents care about is not greening. Some use the patio as a grain-drying ground or utility room; some integrate it with the kitchen and bathroom and put a biomass pool or septic tank in it so as to reuse energy more conveniently and efficiently. This special feature of the rural patio should be emphasized during function optimization.
Bedroom: The statistics show that bedroom areas are relatively standardized, reflecting regional living habits. Firstly, bedrooms all face north or south. Most southern bedrooms are above 20 m², larger than the northern bedrooms, which are typically 17 or 18 m². Secondly, in a single-storey house there are three bedrooms off the hall: two bigger southern rooms and a smaller northern room. Thirdly, in a multi-storey house three or four bedrooms are built; a southern bedroom on the first floor is usually reserved for the old, and the others are on the upper floors.

5 Conclusions
From the above analysis of the climate adaptability of rural houses in Huarong, we can summarize the application principles of passive technologies and scientific design strategies.
1) In Huarong, a low-rise, high-density overall layout is adopted for townhouses. By enlarging the building width, simplifying the building form, and concentrating the building volume, this layout can improve the effects of passive technologies in rural houses.
2) A self-shading system has formed spontaneously through the vertical concave and convex features of the building envelope and the eaves projection of the pitched roof.
3) The inner layout is concentrated on the hall as a responsive core. By adjusting its interfaces, the house forms an unobstructed aisle of eaves gallery - hall - courtyard - bedrooms, which is adapted to the summer prevailing wind.
4) The patio is appropriately designed to improve the indoor thermal environment through thermal and wind ventilation. It is integrated with the kitchen and bathroom to decontaminate, discharge, and reuse energy more conveniently and efficiently.
In conclusion, it is advisable to link regional custom with the scientific application of passive technologies and to adopt economical, effective passive skills so as to optimize the building layout and improve the indoor thermal environment. This is significant for the design of green rural houses in Huarong, and possibly also applicable to rural construction in northern Hunan.

References
1. Tsinghua University: 2007 Annual Report on China Building Energy Efficiency. China Architecture & Building Press, Beijing (2007)
2. Ooka, R.: Field study on sustainable indoor climate design of a Japanese traditional folk house in cold climate area. Building and Environment 37(3), 319–329 (2002)
3. Lee, K.-H., Han, D.-W., Lim, H.-J.: Passive design principles and techniques for folk houses in Cheju Island and Ullung Island of Korea. Energy and Buildings 23(3), 207–216 (1996)
4. Zhao, J., Liu, J., Li, G.: Research on summer thermal environment of dwellings with courtyard. Journal of Architecture and Civil Engineering 18(1), 8–11 (2001)
5. Yi, W., Qun, Z., Mei, H.: The study on indoor environment of old and new Yaodong dwellings. Journal of Xi'an University of Architecture & Technology (Natural Science Edition) 33(4), 309–312 (2001)
6. Lin, B., Tan, G., Wang, P.: Study on the thermal performance of the Chinese traditional vernacular dwellings in summer. Energy and Buildings 36(1), 73–79 (2004)
7. Xu, F., Zhang, G., Xie, M.: Influence of site selection on natural ventilation in Chinese traditional folk house. Journal of Southeast University (English Edition) 26(2), 28–31 (2010)
8. Wang, R., Cai, Z.: An ecological assessment of the vernacular architecture and of its embodied energy in Yunnan, China. Building and Environment 41(5), 687–697 (2006)
9. Zuo, X., Zou, Y., Tang, M.: Analysis on tests of sustainable buildings in hot summer and cold winter zone. New Building Materials 2, 1–3 (2003)
10. Xie, M.: The Study on the Natural Ventilation Design of Rural Residential Houses in the North of Hunan Province. Ph.D. Thesis, Hunan University, China (2009)
11. Lv, A.: Climate-Responsive Building. Tongji University Press, Shanghai (2003)
12. Xie, M., Zhang, G., Xu, F.: Influence of patio on indoor environment in a Chinese traditional folk house in summer. Journal of Southeast University (English Edition) 26(2), 25–27 (2010)
Determination of Software Reliability Demonstration
Testing Effort Based on Importance Sampling and Prior
Information

Qiuying Li and Jian Wang

School of Reliability and Systems Engineering, Beihang University, 100191 Beijing, China
li_qiuying@buaa.edu.cn

Abstract. When conventional statistical methods are used to calculate the software reliability demonstration testing (SRDT) effort for highly reliable software systems, a very large number of test cases and a long testing time are required, so that the conventional statistical methods cannot be put into actual use. In this paper, according to the characteristics of highly reliable software systems, an accelerated operational profile (OP) and an acceleration factor are put forward based on importance sampling theory. For highly reliable continuous-type software, a method of determining the SRDT effort from the accelerated OP and prior information is proposed using Bayesian inference; it can significantly reduce the number of test cases while ensuring the confidence in the software. Finally, the effectiveness of the proposed method is shown by a case study, and an estimation method for the hyper-parameters of the prior distribution is given.

Keywords: importance sampling, accelerated operational profile, software reliability demonstration testing, prior distribution, hyper-parameters.

1 Introduction

With the development of computer technology, a variety of highly complex software systems appear in safety-related areas such as the nuclear industry, the avionics sector, and the military [1]. In these areas the software systems are usually required to be highly reliable; otherwise the consequences of software failure would be catastrophic. Performing SRDT for highly reliable software systems by conventional statistical methods takes a long time or a huge number of test cases and cannot be put into actual use [2]. Therefore, how to verify highly reliable software systems has become a hot and difficult problem. Butler pessimistically pointed out that it is impossible to quantify the reliability of life-critical software [3]. While keeping the confidence level of the test results, some researchers have tried different statistical sampling methods or experimental design methods to reduce the number of test cases and accelerate the SRDT process [4][5][6]. References [7][8] proposed that software testability factors should be considered in SRDT, and established an SRDT method incorporating software testability that can reduce the testing effort.


References [9][10] introduced importance sampling theory into SRDT and proposed an accelerated OP and acceleration factors, which can significantly reduce the number of test cases while ensuring the confidence in the software. Reference [11] estimated the prior distribution of the software failure rate from the results of software reliability growth testing (SRGT) and established an SRDT method based on prior information. The test effort can be reduced and the SRDT process accelerated when prior information or different statistical sampling methods are reasonably introduced.
Therefore, this article first analyzes the characteristics of highly reliable software and proposes an accelerated OP, then estimates the acceleration factor according to importance sampling theory. For highly reliable continuous-type software, a method of determining the SRDT effort from the accelerated OP and prior information is proposed by the Bayesian method. Finally, a method of estimating the hyper-parameters of the prior distribution is given.

2 The Characteristics of Highly Reliable Software Systems


Highly reliable software systems generally have the following three characteristics: a) their functions are simple and their operations few; b) many software reliability analysis and design measures, such as fault-tolerant design, redundancy design and N-version programming, have been introduced into the development process; c) the prior software testing is adequate, so few failures appear in SRDT. Generally, two kinds of operations exist in software: critical operations and regular operations. The probabilities of occurrence of the critical operations are very small, so these operations are not tested adequately in software reliability testing and their faults are not fully triggered. The probabilities of occurrence of the regular operations, by contrast, are usually relatively high, so most of the residual faults in the regular operations of the software can be triggered and removed after a period of software reliability testing. Given these characteristics of highly reliable software systems, failures are very rare, and the faults of such software can be considered to reside in the critical operations, which have a very small probability of occurrence. The method of importance sampling offers a feasible way to test those critical operations adequately.

3 Accelerated OP Based on Importance Sampling


3.1 The Importance Sampling Theory

Let $X$ be a random variable, denoting the operation, whose probability density function is $f(x)$ and whose domain is the finite set of $m$ operations $\{o_1, o_2, \ldots, o_m\}$. $Y = h(x)$ is a function of the random variable $X$: if a failure occurs when a reliability test is executed on operation $x$, then $h(x) = 1$; if no failure occurs, then $h(x) = 0$. The mathematical expectation of $Y$, $\mu = \int_{-\infty}^{+\infty} h(x) f(x)\,dx$, denotes the software failure rate. Sampling $n$ test cases $\{x_1, x_2, \ldots, x_n\}$ according to $f(x)$, the values $y_i\ (i = 1, 2, \ldots, n)$ are obtained from $Y = h(x)$, and the sample mean is

$$\hat{\mu} = \bar{Y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \frac{1}{n}\sum_{i=1}^{n} h(x_i).$$
According to the analysis in Section 2, failures are considered not to exist in the regular operations of highly reliable software, which means that $h(x) = 0$ for all such test cases; they contribute nothing to the estimation of $\mu$. However, failures may occur on the critical operations, which are therefore very important for the estimation of $\mu$. Thus, by means of importance sampling theory, the probabilities of occurrence of the critical operations should be increased in order to reduce the number of test cases.
Suppose the probability of critical operation $o_i$ under a probability density function $g(x)$ is relatively high. Using $X^{*}$ to denote the random variable whose probability density function is $g(x)$, the mathematical expectation of $Y$ becomes

$$\mu = \int_{-\infty}^{+\infty} h(x) f(x)\,dx = \int_{-\infty}^{+\infty} h(x)\,\omega(x)\,g(x)\,dx,$$

where $\omega(x) = f(x)/g(x)$ is called the likelihood ratio. Let $Y^{*} = h(x)\,\omega(x)$; then the above formula becomes

$$\mu = \int_{-\infty}^{+\infty} y^{*} g(x)\,dx = E(Y^{*}).$$

That is to say, estimating the mathematical expectation of $Y$ by sampling from $f(x)$ becomes estimating the mathematical expectation of $Y^{*}$ by sampling from $g(x)$. In other words, using the probability density function $g(x)$ to generate samples $\{x_1, x_2, \ldots, x_n\}$ and the formula $Y^{*} = h(x)\,\omega(x)$ to obtain $y_i^{*}\ (i = 1, 2, \ldots, n)$, the sample mean is

$$\hat{\mu} = \bar{Y}^{*} = \frac{1}{n}\sum_{i=1}^{n} y_i^{*} = \frac{1}{n}\sum_{i=1}^{n} h(x_i)\,\omega(x_i).$$
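A small numerical sketch of this estimator may help; the operation set, the two profiles and the failure behavior h below are toy assumptions, not data from the paper:

import random

ops = ["oc1", "or1", "or2"]                    # one critical, two regular operations
f = {"oc1": 1e-4, "or1": 0.6, "or2": 0.3999}   # normal OP (hypothetical)
g = {"oc1": 1.0, "or1": 0.0, "or2": 0.0}       # accelerated OP: critical op only

def h(x):
    # 1 if a test case on operation x fails, else 0; only the critical
    # operation is assumed to hide a fault, triggered with probability 0.01
    return 1 if x == "oc1" and random.random() < 0.01 else 0

n = 100_000
draws = random.choices(ops, weights=[g[o] for o in ops], k=n)
mu_hat = sum(h(x) * f[x] / g[x] for x in draws) / n   # weight = likelihood ratio
print(f"importance-sampling estimate of failure rate: {mu_hat:.2e}")
# true value here is 1e-4 * 0.01 = 1e-6

Sampling directly from f(x) would almost never select oc1, so on the order of millions of test cases would be needed to exercise it; this inefficiency is exactly what the accelerated profile removes.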

3.2 The Construction of the Accelerated OP

The operations in highly reliable software can be divided into two parts: the set of critical operations and the set of regular operations. In this paper, operations with a probability of occurrence no greater than $10^{-4}$ are considered critical. Using $oc_1, oc_2, \ldots, oc_n$ to denote the critical operations, whose probabilities of occurrence are $pc_1, pc_2, \ldots, pc_n$, and $or_1, or_2, \ldots, or_m$ to denote the regular operations, whose probabilities of occurrence are $pr_1, pr_2, \ldots, pr_m$, these satisfy

$$\sum_{i=1}^{n} pc_i + \sum_{j=1}^{m} pr_j = P_C + P_R = 1.$$

Because PC << PR , the test cases generated by OP focus mainly on the regular
operations. As shown in section 3.1, the failures focus on the critical operations.
Therefore, the probability of critical operations should be increased to propose the
accelerated OP to significantly reduce the number of test cases.
The sum of all probabilities of the critical operations is $P_C = \sum_{i=1}^{n} pc_i$. The relative probabilities of the critical operations $oc_i\ (i = 1, 2, \ldots, n)$ within the set of critical operations are then

$$\widetilde{pc}_i = \frac{pc_i}{P_C}, \qquad \sum_{i=1}^{n} \widetilde{pc}_i = \sum_{i=1}^{n} \frac{pc_i}{P_C} = 1.$$

Let the probabilities of occurrence of the regular operations be 0, i.e., $\widetilde{pr}_i = 0$. The new probability density function $g(x)$ then contains only the critical operations $oc_i$, whose probabilities are increased to $\widetilde{pc}_i$; the accelerated OP is constructed from the critical operations $oc_i$ and their probabilities $\widetilde{pc}_i$.

By the definition of the likelihood ratio $\omega(x) = f(x)/g(x)$, we have $\omega(oc_i) = pc_i / \widetilde{pc}_i = P_C\ (i = 1, 2, \ldots, n)$, and the acceleration factor is defined as

$$\kappa = \frac{P(O_c)}{\widetilde{P}(O_c)} = \frac{pc_1 + pc_2 + \cdots + pc_n}{\widetilde{pc}_1 + \widetilde{pc}_2 + \cdots + \widetilde{pc}_n} = P_C,$$

which denotes the degree of acceleration.
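This construction amounts to a renormalization; a minimal sketch (operation names and probabilities are invented for illustration) builds the accelerated OP and returns the acceleration factor:

```python
def accelerated_op(critical_probs):
    """Build the accelerated OP of Section 3.2: renormalize the critical
    operations' probabilities pc_i to pc_i / PC; the likelihood ratio of
    each critical operation and the acceleration factor both equal PC."""
    PC = sum(critical_probs.values())
    accelerated = {op: p / PC for op, p in critical_probs.items()}
    return accelerated, PC

# Hypothetical critical operations, each with probability <= 1e-4.
pc = {"oc1": 4e-5, "oc2": 8e-6, "oc3": 2e-6}
accelerated, PC = accelerated_op(pc)
print(accelerated)   # relative probabilities, summing to 1
print(PC)            # acceleration factor = P_C = 5e-5
```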

4 Determination of SRDT Effort for Continuous-Type Software Based on Importance Sampling and Prior Information

The Bayesian method assumes that the continuous-type software failure rate $\lambda$ is a random variable. The probability that the number of software failures $x$ equals $k$ in the time interval $(0, t]$ is a conditional probability given $\lambda$; the number of software failures $x$ obeys a Poisson distribution with parameter $\lambda t$ [12]:

$$p(x = k \mid \lambda) = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}, \qquad k = 0, 1, 2, \ldots \qquad (1)$$

The conjugate distribution of the Poisson distribution is the Gamma distribution, so the prior distribution of $\lambda$ is supposed to obey a Gamma distribution:

$$\pi(\lambda) = \mathrm{Gamma}(a, b) = \frac{b^a}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda} \qquad (2)$$

where $a, b$ are called hyper-parameters and $\Gamma(a)$ is the Gamma function.


Execute SRDT according to the accelerated OP. Suppose the testing time is $t$ and $r$ failures occur. By the definition of the acceleration factor, a testing time of $t / P_C$ under the normal OP would be needed to produce the same $r$ failures. The posterior distribution of $\lambda$ is then as follows:

$$f(\lambda \mid r, t/P_C, a, b) = \mathrm{Gamma}(a + r,\, b + t/P_C) = \frac{(b + t/P_C)^{a+r}}{\Gamma(a + r)}\, \lambda^{a+r-1} e^{-(b + t/P_C)\lambda} \qquad (3)$$

For the given reliability target $(\lambda_0, C)$, where $C$ is the confidence, the required minimum testing time $T$ is the least $t$ which satisfies the following inequality:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid r, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^{a+r}}{\Gamma(a + r)}\, \lambda^{a+r-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C \qquad (4)$$

For highly reliable software, no failures are allowed, which means $r = 0$. Then (4) can be rewritten as:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^{a}}{\Gamma(a)}\, \lambda^{a-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C \qquad (5)$$

When there is no prior information, i.e., $a = 1, b = 0$, (5) becomes:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, 1, 0)\,d\lambda = \int_0^{\lambda_0} \frac{t}{P_C}\, e^{-\lambda t / P_C}\,d\lambda = 1 - e^{-\lambda_0 t / P_C} \ge C \qquad (6)$$

$$T = -\frac{P_C}{\lambda_0} \ln(1 - C) \qquad (7)$$
For the same reliability target as above, the required minimum testing time $T$ based on the normal OP is the least $t$ which satisfies the following inequality:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t)^{a}}{\Gamma(a)}\, \lambda^{a-1} e^{-(b + t)\lambda}\,d\lambda \ge C \qquad (8)$$

When there is no prior information, i.e., $a = 1, b = 0$, (8) becomes:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t, 1, 0)\,d\lambda = \int_0^{\lambda_0} t\, e^{-\lambda t}\,d\lambda = 1 - e^{-\lambda_0 t} \ge C \qquad (9)$$

$$T = -\frac{1}{\lambda_0} \ln(1 - C) \qquad (10)$$
It is evident from formulas (5), (7), (8) and (10) that the testing time needed under the accelerated OP is $P_C$ times the time needed under the normal OP. Because $P_C \ll 1$, the method of SRDT based on the accelerated OP can significantly reduce the testing time and accelerate the testing process.
For example, suppose there is prior information with hyper-parameters $a = 1, b = 50000$, and the reliability target is $(10^{-4},\ 1 - 10^{-4})$. Suppose the acceleration factor is $P_C = 0.05$. Then, according to inequality (5):

$$\int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, 1, b)\,d\lambda = \int_0^{\lambda_0} (b + t/P_C)\, e^{-(b + t/P_C)\lambda}\,d\lambda = 1 - e^{-(b + t/P_C)\lambda_0} \ge C$$

$$T = P_C \left[ -\frac{1}{\lambda_0} \ln(1 - C) - b \right] = 0.05 \times (10^4 \ln 10^4 - 50000) = 2105.2$$

Suppose instead there is no prior information, i.e., $a = 1, b = 0$. For the same reliability target and acceleration factor, formula (7) gives:

$$T = P_C \left[ -\frac{1}{\lambda_0} \ln(1 - C) \right] = 0.05 \times 10^4 \ln 10^4 = 4605.2$$
By means of inequality (8), the testing time based on the normal OP with prior information can be calculated from the following inequality:

$$\int_0^{\lambda_0} f(\lambda \mid 0, t, 1, b)\,d\lambda = \int_0^{\lambda_0} (b + t)\, e^{-(b + t)\lambda}\,d\lambda = 1 - e^{-(b + t)\lambda_0} \ge C$$

$$T = -\frac{1}{\lambda_0} \ln(1 - C) - b = 10^4 \ln 10^4 - 50000 = 42103.4$$
According to formula (10), the testing time needed under the normal OP with no prior information is:

$$T = -\frac{1}{\lambda_0} \ln(1 - C) = 10^4 \ln 10^4 = 92103.4$$
As shown above, SRDT based on prior information and importance sampling significantly reduces the testing effort.
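The four testing times above all follow from the single closed form $T = P_C\left[-\ln(1 - C)/\lambda_0 - b\right]$ with $a = 1$; the following sketch (the function name is ours) reproduces them:

```python
import math

def srdt_time(lambda0, C, PC=1.0, b=0.0):
    """Minimum no-failure SRDT time from formulas (5)-(10) with a = 1:
    T = PC * (-ln(1 - C) / lambda0 - b).  PC = 1 corresponds to the
    normal OP; b = 0 corresponds to no prior information."""
    return PC * (-math.log(1.0 - C) / lambda0 - b)

lam, C = 1e-4, 1 - 1e-4
print(srdt_time(lam, C, PC=0.05, b=50000))  # ~2105.2  accelerated OP, prior
print(srdt_time(lam, C, PC=0.05))           # ~4605.2  accelerated OP, no prior
print(srdt_time(lam, C, b=50000))           # ~42103.4 normal OP, prior
print(srdt_time(lam, C))                    # ~92103.4 normal OP, no prior
```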

5 Estimation of the Hyper-parameters of Prior Distribution

5.1 Basic Principle

Suppose the sample variables are $x_1, x_2, \ldots, x_n$, and that the conditional distribution of a variable $p$ with prior distribution $f(p)$ is $\varphi(x_1, x_2, \ldots, x_n \mid p)$. The density of the joint distribution of $x_1, x_2, \ldots, x_n$ and $p$ is $f(p)\,\varphi(x_1, x_2, \ldots, x_n \mid p)$, and the marginal probability density function of the sample is:

$$h(x_1, x_2, \ldots, x_n) = \int f(p)\,\varphi(x_1, x_2, \ldots, x_n \mid p)\,dp \qquad (11)$$

The sample data $x_1, x_2, \ldots, x_n$ are used to estimate $h(x_1, x_2, \ldots, x_n)$ by the methods of classical statistics, for instance via the first and second moments of $h(x_1, x_2, \ldots, x_n)$. The hyper-parameters of $f(p)$ can then be calculated from this information.

5.2 The Estimation of the Hyper-parameters of Prior Distribution

According to the analysis in Section 4, the software failure rate $\lambda$ obeys a $\mathrm{Gamma}(a, b)$ distribution. The estimates of $a, b$ can be determined from the test data

obtained before SRDT. For example, many test records from SRGT are left over before SRDT. For continuous-type software, after selecting the last $m$ groups of times between failures $T_1, T_2, \ldots, T_m$, which give the experience sample values, we can use the method of parameter synthesis to estimate the hyper-parameters.
The number of software failures $x$ obeys a Poisson distribution with parameter $\lambda t$, as shown in (1). Therefore, the marginal distribution of $x$ is:

$$m(x) = \int_0^{+\infty} \mathrm{Gamma}(a, b)\, p(x \mid \lambda)\,d\lambda = \int_0^{+\infty} \frac{b^a}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!}\, e^{-\lambda t}\,d\lambda = \frac{b^a t^x\, \Gamma(a + x)}{x!\, (b + t)^{a+x}\, \Gamma(a)} \qquad (12)$$

The first and second moments of $m(x)$ are given by:

$$E(x) = \sum_{x=0}^{+\infty} x\, m(x) = \sum_{x=0}^{+\infty} x \int_0^{+\infty} \frac{b^a}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!}\, e^{-\lambda t}\,d\lambda = \frac{a t}{b} \qquad (13)$$

$$E(x^2) = \sum_{x=0}^{+\infty} x^2\, m(x) = \sum_{x=0}^{+\infty} x^2 \int_0^{+\infty} \frac{b^a}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!}\, e^{-\lambda t}\,d\lambda = \frac{a t}{b} + \frac{(a + 1)\, a\, t^2}{b^2} \qquad (14)$$

Let $t$ be a relatively long time compared with $T_1, T_2, \ldots, T_m$. Then, during the time interval $(0, t]$, the experience sample values of the software failures are $t/T_1, t/T_2, \ldots, t/T_m$. Using the mean value and the mean square value of $t/T_1, t/T_2, \ldots, t/T_m$ to estimate the first and second moments,

$$\tilde{E}(x) = \frac{1}{m} \sum_{i=1}^{m} \frac{t}{T_i}, \qquad \tilde{E}(x^2) = \frac{1}{m} \sum_{i=1}^{m} \left( \frac{t}{T_i} \right)^2 \qquad (15)$$

Then the estimates of $a, b$ can be calculated from equations (13) and (14):

$$\tilde{a} = \frac{\tilde{E}(x)}{\tilde{E}(x^2)/\tilde{E}(x) - \tilde{E}(x) - 1}, \qquad \tilde{b} = \frac{t}{\tilde{E}(x^2)/\tilde{E}(x) - \tilde{E}(x) - 1} \qquad (16)$$

For example, Table 1 gives a group of times between software failures as the experience sample values of $T_i$. Let $t = 100000$ hours; the experience sample values of software failures $t/T_i$ are also shown in Table 1.

Table 1. Prior sample data (unit of time: hours)

time between failures $T_i$:  3503.2  3002.1  3912.9  2899.2  4012.1
failures $t/T_i$:             29      33      26      34      25
time between failures $T_i$:  3598.0  2991.1  3232.5  4502.1  3900.7
failures $t/T_i$:             28      33      31      22      26

According to formulas (15) and (16), we have $\tilde{a} = 64.2$, $\tilde{b} = 223778.0$.

It is then known from formula (5) that, for the given reliability target $(\lambda_0, C)$, the required minimum testing time $T$ of SRDT with no failures, based on importance sampling and the prior distribution, is the least $t$ which satisfies the following inequality:

$$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^a}{\Gamma(a)}\, \lambda^{a-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C$$

where a = 64.2, b = 223778.0 .
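A direct transcription of the moment estimator (15)-(16) is sketched below. Note that, as written, the estimator requires the failure-count sample to be overdispersed relative to a pure Poisson (a positive denominator in (16)); the data below are therefore hypothetical, not the Table 1 values:

```python
def estimate_hyperparams(failure_counts, t):
    """Moment estimates of the Gamma hyper-parameters a, b via (15)-(16).
    failure_counts are the experience sample values t/T_i of formula (15)."""
    m = len(failure_counts)
    Ex = sum(failure_counts) / m                     # first moment
    Ex2 = sum(v * v for v in failure_counts) / m     # second moment
    denom = Ex2 / Ex - Ex - 1                        # denominator in (16)
    if denom <= 0:
        raise ValueError("sample not overdispersed; moment estimator fails")
    return Ex / denom, t / denom                     # (a~, b~)

# Hypothetical overdispersed counts observed over t = 100000 hours.
counts = [22, 41, 18, 35, 27, 44, 16, 38, 24, 33]
print(estimate_hyperparams(counts, t=100_000))
```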

6 Conclusion
In this paper, we brought importance sampling theory and prior information into SRDT and proposed an accelerated SRDT method combining these two aspects. According to the characteristics of highly reliable software and the theory of importance sampling, an accelerated OP and an acceleration factor were given. The prior information on software reliability was also considered, which can significantly reduce the number of test cases. This provides theoretical and technical support for verifying highly reliable software. In future work, we will study the application of the method in actual projects to verify its feasibility.

References
1. Kuball, S., May, J.: Test-Adequacy and Statistical Testing: Combining Different Properties of a Test-Set. In: 15th ISSRE, pp. 161–172. IEEE Comp. Soc., Washington, DC (2004)
2. Fenton, N.E., Pfleeger, S.L.: Software Metrics: A Rigorous & Practical Approach, 2nd edn. Int'l Thomson Computer Press (1996)
3. Butler, R.W., Finelli, G.B.: The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software. IEEE Transactions on Software Engineering 19(1), 3–12 (1993)
4. Andy, P., Wassim, M., Yolanda, M.: Estimation of Software Reliability by Stratified Sampling. ACM Transactions on Software Engineering and Methodology 8(3), 263–283 (1999)
5. Hecht, M., Hecht, H.: Use of Importance Sampling and Related Techniques to Measure Very High Reliability Software. In: Aerospace Conference Proceedings, pp. 533–546. IEEE Aerospace and Electronics Systems Soc., Montana (2000)
6. Alarm, S., Chen, H., Ehrlich, W.K., et al.: Assessing Software Reliability Performance under Highly Critical but Infrequent Event Occurrences. In: 8th ISSRE, pp. 294–303. IEEE Comp. Soc., Los Alamitos (1997)
7. Zhao, L., Wang, J.-M., Sun, J.-G.: Study on the Relationship between Software Testability and Reliability. Chinese Journal of Computers 30(6), 986–991 (2007) (in Chinese)
8. Li, Q., Li, H., Wang, J.: Effects of Software Test Efficiency on Software Reliability Demonstration Testing Effort. Journal of Beijing University of Aeronautics and Astronautics 37(3), 325–330 (2011) (in Chinese)
9. Li, Q.-Y., Li, X., Wang, J., Luo, L.: Study on the Accelerated Software Reliability Demonstration Testing for High Reliability Software Based on Strengthened Operational Profile. In: Proceedings of ICCTD 2010, pp. 655–662 (2010)
10. Jiong, Y., Ji, W.: Software Statistical Test Acceleration Based on Importance Sampling. Computer Engineering and Science (3), 64–66 (2005) (in Chinese)
11. Qin, Z., Chen, H., Shi, Y.: Reliability Demonstration Testing Method for Safety-Critical Embedded Applications Software. In: Proceedings of the International Conference on Embedded Software and Systems, pp. 481–487. IEEE Comp. Soc., Washington, DC (2008)
12. Miller, K.W., Morell, L.J., Noonan, R.E.: Estimating the Probability of Failure When Testing Reveals No Failures. IEEE Transactions on Software Engineering 18(1), 33–43 (1992)
The Stopping Criteria for Software Reliability Testing
Based on Test Quality

Qiuying Li and Jian Wang

School of Reliability and Systems Engineering, Beihang University,


100191 Beijing, China
li_qiuying@buaa.edu.cn

Abstract. Software testing, which plays an increasingly important role in assuring software quality and reliability, has received more and more attention. However, when to stop testing is still one of the challenges in the software testing field. Firstly, the stopping criteria for software testing were studied from the theoretical point of view. Then stopping criteria for software testing based on test quality were put forward. According to the different purposes of testing, two kinds of stopping criteria were discussed, one for software correctness testing (SCT) and the other for software reliability testing (SRT). Drawing on the achievements of the stopping criteria for SCT and combining them with the characteristics of SRT, stopping criteria for SRT were put forward, which showed that when to stop testing can be explicitly decided by controlling the test procedure.

Keywords: test quality, software reliability testing, stopping criteria.

1 Introduction
Since Goodenough and Gerhart first posed the problem of when to stop software testing in 1975, while researching whether software testing can ensure the correctness of software, how to give a stopping criterion for software testing has become a hot and difficult problem in the software testing field [1]. Since the 1980s, such research has never stopped. References [2][3] established software reliability models based on reliability theory. Based on software reliability models, [4] quantitatively presented a conclusion on when to stop testing, and gave a reliability metric model and method. Under given budget constraints and test cost conditions, [5][6] proposed methods for determining the stopping time of testing. Reference [7] put forward an optimal software release tactic based on an optimal test-cost tactic containing three cost elements. References [8][9] researched the best software release time on the basis of comprehensive reliability requirements and cost constraints. Reference [10] researched the problem of when to stop testing in the case where much code changes during the testing process. Obviously, the most ideal and reasonable method is to quantify the testing process and use the quantitative measurement results to guide the various decision-making behaviors in testing. How to measure testing, and how to make the measurement results accurately reflect the testing process, are problems that researchers in the testing area are working hard to solve.


SRT, which plays an increasingly important role in assuring software quality and reliability, has received more and more attention. Like software testing in general, SRT also faces the problem of when to stop testing.

2 The Classification of Software Testing


According to the different purposes of testing, software testing can be divided into two types, SCT and SRT. The aim of the former is to find as many errors as possible; in other words, its purpose is to rule out as many residual defects in the software as possible. The aim of the latter is to ensure that the software meets its usage requirements; in other words, its purpose is not to find as many errors as possible, but to find as many as possible of the errors that influence the usage requirements of the software.

SCT and SRT have different purposes, which determine their different characteristics, and the points of view from which their stopping criteria are judged and weighed are also different.

2.1 The Stopping Criteria for SCT

2.1.1 Definition and Formal Analysis


The stopping criteria for SCT state that testing can be stopped when the correctness of the software's behavior on the test cases set can represent the correctness of the software's behavior on the whole input domain.

The purpose of the stopping criteria for SCT can be formally analyzed as follows [11]. Firstly, the following conceptions are introduced for clarity.

For an input point $d$ in the input domain $D$, $Correct(d)$ defines a predicate which expresses the acceptability of the result $F(d)$ when it equals the oracle $S(d)$ defined by the functional specification, written as

$$F(d) = S(d) \iff Correct(d) \qquad (1)$$

$Successful(T)$ defines a predicate which states that the test cases set $T$ is a subset of the input domain $D$ and that $Correct(t)$ is true for every element $t$ in $T$. In other words,

$$T \subseteq D,\ \forall t \in T,\ Correct(t) \iff Successful(T) \qquad (2)$$

Ideally, a test set with the capacity to realize the stopping criteria for SCT can be described as a test set $T$ with the following property:

$$T \subset D,\ Successful(T) \Rightarrow Successful(D) \qquad (3)$$

In other words, a successful SCT should include two aspects: first, the software's behavior is normal during testing with enough test cases; second, the success of the software's behavior on the finite test set can represent the success of the software's behavior on the whole input domain. However, Howden showed that no computable function exists that can prove that the correctness of the testing domain represents that of the input domain when the former is only a proper subset of the input domain [12].
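The predicates (1)-(3) translate directly into code; in the toy sketch below, the program $F$, the oracle $S$, and the test set are all invented for illustration:

```python
def F(d):
    """The program under test (hypothetical)."""
    return d * d

def S(d):
    """The oracle given by the functional specification (hypothetical)."""
    return d ** 2

def correct(d):
    """Correct(d) of formula (1): F(d) equals the oracle value S(d)."""
    return F(d) == S(d)

def successful(T):
    """Successful(T) of formula (2): Correct(t) holds for every t in T."""
    return all(correct(t) for t in T)

# A finite test set: Successful(T) holds, but by Howden's result this can
# never prove Successful(D) for the whole input domain D.
print(successful({0, 1, 2, 3, 5, 8}))   # True
```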

2.1.2 The Stopping Criteria for SCT Based on Test Quality


Affected by the nature of software correctness itself, software correctness must be treated from a transcendental viewpoint, just like software quality. That is to say, software correctness should be regarded as an ideal objective to strive for that cannot be fully realized. Therefore, from the viewpoint of methodology, researchers have proposed that the way to realize the stopping criteria of SCT lies not in the software itself, but in the control of the testing process.

Reference [13] gave a summary of the existing research. It is easy to see that the existing research on criteria for when to stop testing focuses on correctness testing, and that the SCT criteria are researched from the point of view of test quality.

2.2 The Stopping Criteria for SRT

2.2.1 Definition and Formal Analysis


The stopping criteria for SRT state that testing can be stopped when the reliability of a program, as acquired by running it on a subset of the whole input domain, can represent the reliability of the program when run in practice.

The purpose of the stopping criteria for SRT can be formally analyzed as follows [11]. Firstly, the following conceptions are introduced for clarity.

$Correct(d)$ is defined as in Section 2.1.1.

$Reliable(T)$ defines a predicate which expresses that the test set $T$, randomly generated according to the operational profile, is a subset of the input domain $D$, and that the evaluation value $R_{eval}(T)$ of the software's reliability obtained from the test results on $T$ satisfies

$$\lim_{\|T\| \to \infty} \left| R_{eval}(T) - R_{real} \right| = 0,$$

where $R_{real}$ is the true value of the software reliability.

Because $R_{real}$ exists only in a theoretical sense and cannot be obtained in practice, the above definition cannot be applied directly. In connection with this original definition of the stopping criteria for SRT, and starting from the different targets of reliability testing, approximations are obtained according to different available information.

2.2.2 The Stopping Criteria for SRT Defined from the Point of Test Quality [14]
Actual usage of the software is a subset of all possible usage. Each element in the set represents a possible running condition of the system, and the purpose of test quality measurement is to measure the system's ability to properly run a sample. It is impossible to test a system completely because the population is infinite, so valid inferences about the usage of the system must be made with statistical methods.

It is known that actual usage can be seen as a random process obeying a probability distribution. If the probability distribution used in the testing process is the same as the one arising in real usage, the testing process can be regarded as converging to the usage process. Therefore, SRT quality can be considered in terms of measuring the statistical characterization of the test set.
260 Q. Li and J. Wang

2.2.3 Introduction of the Concept of Profile Coverage


Firstly, some definitions are introduced [15].

Def. 1. The area of a probability function is defined as follows:
Let $P$ be a probability function on the input domain $ID$ that is followed by the program's input selection during operation, and let $C \subseteq ID$. The sum $\sum_{c \in C} P(c)$ is called the area of $P$ on $C$, denoted by $\sigma_C(P)$.

Fig. 1 illustrates the area of a probability function. $\sum_{c \in ID} P(c) = 1$, which implies $\sigma_{ID}(P) = 1$.

Fig. 1. Area of probability function

Def. 2. The profile coverage is defined as follows:
Let $P_O(c)$ be the probability function of the operational profile, and $P_T(c)$ the probability function of the testing profile. The profile coverage is defined as the area of the probability function $\min(P_O, P_T)$ over $ID$, denoted by $PC(P_O, P_T)$, i.e., $PC(P_O, P_T) = \sigma_{ID}(\min(P_O, P_T))$. For short, let $PC$ denote $PC(P_O, P_T)$.

As shown in Fig. 2, the profile coverage can be used to denote the degree of approximation between the testing profile and the operational profile.
1) According to the definitions of $\sigma_C(P)$ and $PC$, because $\sum_{c \in ID} P_O(c) = 1$ and $\sum_{c \in ID} P_T(c) = 1$, the profile coverage $PC = 1$ if and only if $P_O(c) = P_T(c)$ for all $c \in ID$.
2) $PC = 0$ if $P_O(c)$ and $P_T(c)$ are disjoint, i.e., when $P_O(c) > 0$, $P_T(c) = 0$, and when $P_T(c) > 0$, $P_O(c) = 0$.
3) In general, $0 < PC < 1$ denotes that $P_O(c)$ and $P_T(c)$ have a joint area, as shown in Fig. 2.

It is easy to see from the above analysis that the measurement $PC$ can be used to express the degree of approximation between the testing profile and the operational profile. The requirement $PC \ge PC_{\min}$ is used to ensure that the degree of approximation achieves at least a certain level.
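A minimal sketch of Def. 2 over a finite input domain (the profiles and class names are hypothetical):

```python
def profile_coverage(PO, PT):
    """PC(PO, PT) = area of min(PO, PT) over the input domain, per Def. 2.
    PO and PT map each operation class c to its probability."""
    domain = set(PO) | set(PT)
    return sum(min(PO.get(c, 0.0), PT.get(c, 0.0)) for c in domain)

# Hypothetical operational and testing profiles.
PO = {"c1": 0.5, "c2": 0.3, "c3": 0.2}
PT = {"c1": 0.4, "c2": 0.4, "c3": 0.2}
print(profile_coverage(PO, PT))   # 0.9: profiles overlap but do not coincide
```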
The following gives some approximation methods based on the idea of profile coverage. The stopping criteria based on these methods differ in their ability to express this notion and to guide testing; however, they are all called stopping criteria for SRT defined from the point of test quality.

Fig. 2. Profile coverage in different situations

2.2.4 Probability Distribution Criterion


As shown in Fig. 3, the higher the probability with which the test set $T$ appears in the input space $D$, the closer the test is to actual usage, and the closer the obtained reliability assessment value is to the reliability value in actual usage. That is to say, the reliability evaluation value is more credible.

Fig. 3. The relationship between test cases set T and input space D

Using a test set $t$ to test the program $p$, testing can be stopped if $P(t) \ge P_r$, where $P(t)$ is the probability of the appearance of $t$ in the input space $D$, and $P_r$ is the required limit value.

2.2.5 Chi-square Statistic Criterion


The feature of SRT is that the test cases are generated by simulating actual usage. Therefore, the relationship between testing and usage is just like the relationship between sample and population in statistics. If the methods used to demonstrate the relationship between sample and population in statistics can be brought into SRT, it is possible to verify the existence of the statistical relationship between testing (sample) and usage (population).

Suppose that the usage of a program $S$ can be abstracted as the distribution [16]:

$$\begin{pmatrix} D_1 & D_2 & \cdots & D_m \\ p_1 & p_2 & \cdots & p_m \end{pmatrix}$$

$D_1, D_2, \ldots, D_m$ are the divisions of the input space of program $S$ according to the user's actual usage, and $p_1, p_2, \ldots, p_m$ are the corresponding occurrence probabilities. Therefore, the reliability test set $T$ can be seen as a sample obtained by $N$ samplings from the above distribution.

It is known from the theory of the chi-square statistic that the reliability test set $T$ is sampled from the population satisfying the above distribution. Therefore, the sample must satisfy the following statistical properties.

Suppose the sampling numbers of the test cases in $D_1, D_2, \ldots, D_m$ are $n_1, n_2, \ldots, n_m$, which satisfy

$$\sum_{i=1}^{m} n_i = N, \qquad q^2 = \sum_{i=1}^{m} \frac{(n_i - N p_i)^2}{N p_i} \qquad (4)$$

The asymptotic distribution of the statistic $q^2(t)$ is a chi-square distribution with $m - 1$ degrees of freedom. Whether the test set $T$ satisfies the statistical properties of SRT can be judged by inspecting whether the distribution of $q^2(t)$ obeys a chi-square distribution under a given confidence requirement.

There is always some risk in using the methods of probability and statistics to solve practical problems. Let the value of risk which the user can accept be $\alpha$. When the statistic $q^2(t)$ satisfies the inequality

$$\sum_{i=1}^{m} \frac{(n_i - N p_i)^2}{N p_i} < \chi^2_{1-\alpha}(m - 1),$$

its distribution reflects the distribution properties of the population. In other words, the test set is an approximation of the usage, and testing can be stopped.
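A sketch of this check, assuming scipy for the chi-square quantile (partition probabilities and sampling counts are hypothetical):

```python
from scipy.stats import chi2

def chi_square_stop(n, p, alpha=0.05):
    """Chi-square stopping check of Section 2.2.5: n[i] test cases fell in
    partition D_i with usage probability p[i]; stop if q^2 of formula (4)
    is below the chi-square quantile with m-1 degrees of freedom."""
    N = sum(n)
    q2 = sum((ni - N * pi) ** 2 / (N * pi) for ni, pi in zip(n, p))
    return q2 < chi2.ppf(1 - alpha, df=len(p) - 1)

# Hypothetical partition probabilities and observed sampling counts.
p = [0.5, 0.3, 0.2]
n = [520, 290, 190]
print(chi_square_stop(n, p))   # True: the test set matches the usage profile
```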

2.2.6 Testing-to-Usage Chain Criterion


In actual testing, the input is often not a single input variable but an input sequence. The testing-to-usage chain is a method that uses a Markov process to measure the difference between the testing process and the actual usage, which can be used to control the testing process [17]. It is based on the statistical properties of the testing chain and the usage chain. The usage chain describes the real use of the software and is an ideal testing model of the software. The testing chain is a model constructed from a test history and describes the testing of the software. The level of similarity of the two models reflects the degree of approximation between testing and usage. Therefore, measuring the difference between the testing chain and the usage chain in actual testing realizes control of the testing process. When the testing chain is closer and closer to the usage chain and the difference is small enough, the testing practice can replace the actual usage. Inspired by the above ideas, the testing-to-usage criterion is established, based on the Kullback discriminant value [18].

When testing a program $p$, the testing can be stopped if the following formula is satisfied:

$$\lim_{n \to \infty} \frac{1}{n} \log_2 \frac{\Pr[x_0, x_1, \ldots, x_n \mid U]}{\Pr[x_0, x_1, \ldots, x_n \mid T]} < \varepsilon \qquad (5)$$

Here, the stochastic process $U$ denotes the usage chain, the stochastic process $T$ denotes the testing chain, $\Pr(x_0, x_1, \ldots, x_n \mid U)$ denotes the probability of the input sequence $(x_0, x_1, \ldots, x_n)$ being generated by the usage chain, and $\Pr(x_0, x_1, \ldots, x_n \mid T)$ denotes the probability of the input sequence being generated by the testing chain.
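A rough numerical reading of formula (5), assuming first-order Markov chains given as transition-probability matrices and ignoring the initial-state probability (the chains, the sequence, and the threshold are all invented for illustration):

```python
import math

def kullback_discriminant(seq, U, T):
    """Average per-step log-likelihood ratio of the observed input sequence
    under the usage chain U and the testing chain T (transition matrices as
    dicts of dicts); a finite-n approximation of formula (5) that ignores
    the initial-state term."""
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        total += math.log2(U[a][b]) - math.log2(T[a][b])
    return total / (len(seq) - 1)

U = {"s0": {"s0": 0.7, "s1": 0.3}, "s1": {"s0": 0.4, "s1": 0.6}}
T = {"s0": {"s0": 0.6, "s1": 0.4}, "s1": {"s0": 0.5, "s1": 0.5}}
seq = ["s0", "s0", "s1", "s1", "s0", "s1", "s1", "s1", "s0", "s0"]
epsilon = 0.1
print(kullback_discriminant(seq, U, T) < epsilon)  # True: stop testing
```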

2.2.7 Information Entropy Criterion


The information entropy of a single random event is $-\log_2 p$, and the information entropy of the population is the mathematical expectation of the random events' information entropies, defined as follows [19]:

$$H = -\sum_{i=1}^{m} p_i \log_2 p_i \qquad (6)$$

The following is the definition of the reliability testing entropy [20].

Def. 3. The reliability testing entropy, based on the entropies of the finite probability distribution $p_i\ (i = 1, 2, \ldots, m)$ in the operational profile, is:

$$H_{RT} = -\sum_{i=1}^{m} p_i \log_2 p_i \qquad (7)$$

As described in Section 2.2.5, suppose the sampling numbers of the test set in $D_1, D_2, \ldots, D_m$ are $n_1, n_2, \ldots, n_m$, which satisfy

$$\sum_{i=1}^{m} n_i = N \qquad (8)$$

The point estimate of $p_i$ is $\hat{p}_i = n_i / N$. Hence, the estimate of the reliability testing entropy of the reliability test set $t$ is:

$$\hat{H}_{RT}(t) = -\sum_{i=1}^{m} \hat{p}_i \log_2 \hat{p}_i \qquad (9)$$

Obviously, $\hat{H}_{RT}(t)$ is a function of $t$, and the testing can be stopped when $| H_{RT} - \hat{H}_{RT} | < \delta$ ($\delta$ is an acceptable error value).
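A small sketch of the entropy criterion (7)-(9); the profile probabilities, counts, and tolerance $\delta$ are hypothetical:

```python
import math

def usage_entropy(p):
    """Reliability testing entropy (7) of the operational profile."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def testing_entropy(counts):
    """Estimated reliability testing entropy (9), with p^_i = n_i / N."""
    N = sum(counts)
    return -sum((n / N) * math.log2(n / N) for n in counts if n > 0)

# Hypothetical profile and observed sampling counts.
p = [0.5, 0.3, 0.2]
n = [480, 330, 190]
delta = 0.01
print(abs(usage_entropy(p) - testing_entropy(n)) < delta)  # True: stop
```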

3 Conclusion and Future Work

The stopping criteria for software testing can quantitatively establish the requirements of software testing, measure the quality of software testing, and guide the selection of test cases and the observation and recording of the software's behavior during the testing process. They can also ensure the quality of software testing while avoiding unnecessary testing. In this paper, the existing results on SCT were summarized from the point of view of test quality, and four stopping criteria for SRT were proposed based on test quality. In future work, we will compare the four criteria and study their relationships. As the process of establishing these criteria shows, they are not easy to measure; that is to say, using the stopping criteria to guide practical testing still requires a transitional process. However, the research itself is indispensable.

References
1. Goodenough, J.B., Gerhart, S.L.: Toward a Theory of Test Data Selection. IEEE Transactions on Software Engineering SE-1(2), 156–173 (1975)
2. Musa, J.D., Frank Ackerman, A.: Quantifying Software Validation: When to Stop Testing? IEEE Software 6(3), 19–27 (1989)
3. Schneidewind, N.F.: Reliability Modeling for Safety-Critical Software. IEEE Transactions on Reliability 46(1), 88–98 (1997)
4. Garg, M., Lai, R., Jen Huang, S.: When to Stop Testing: A Study from the Perspective of Software Reliability Models. IET Softw. 5(3), 263–273 (2011)
5. Hou, R.-H., Kuo, S.-Y., Chang, Y.-P.: Optimal Release Times for Software Systems with Scheduled Delivery Time Based on the HGDM. IEEE Transactions on Computers 46(2), 216–221 (1997)
6. Yang, B., Hu, H., Zhou, J.: Optimal Software Release Time Determination with Risk Constraint. In: Proc. 54th Ann. Reliability and Maintainability Symp., pp. 393–398 (2008)
7. Huang, C.-Y., Kuo, S.-Y., Lyu, M.R.: Optimal Software Release Policy Based on Cost and Reliability with Testing Efficiency. In: IEEE Computer Society's International Computer Software and Applications Conference, pp. 468–473 (1999)
8. Ehrlich, W., Prasanna, B., Stampfel, J., Wu, J.: Determining the Cost of a Stop-Test Decision. IEEE Software 10(2), 33–42 (1993)
9. Xie, M.: On the Determination of Optimum Software Release Time. In: 1991 International Symposium on Software Reliability Engineering, pp. 218–224 (1991)
10. Dalal, S.R., McIntosh, A.A.: When to Stop Testing for Large Software Systems with Changing Code. IEEE Transactions on Software Engineering 20(4), 318–323 (1994)
11. Li, Q., Ruan, L., Liu, B.: Research on Software Reliability Testing Adequacy. Measurement & Control Technology (11), 49–52 (2003) (in Chinese)
12. Zhu, H., Jin, L.: Software Quality Assurance and Testing, pp. 70–215. Science Press, Beijing (1997) (in Chinese)
13. Li, Q., Lu, M., Ruan, L.: Theoretical Research on Software Reliability Testing Adequacy. Journal of Beijing University of Aeronautics and Astronautics 29(4), 312–316 (2003) (in Chinese)
14. Li, Q.: Theoretical Research on Software Reliability Testing Adequacy. Ph.D. Thesis, Beijing University of Aeronautics and Astronautics (2004) (in Chinese)
15. Chen, Y.: Modelling Software Operational Reliability via Input Domain-Based Reliability Growth Model. In: Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing, pp. 314–323 (1998)
16. Musa, J.D.: Operational Profiles in Software Reliability Engineering. IEEE Software 10(2), 14–32 (1993)
17. Whittaker, J.A.: A Markov Chain Model for Statistical Software Testing. IEEE Transactions on Software Engineering 20(10), 812–824 (1994)
18. Kullback, S.: Information Theory and Statistics. Wiley, New York (1958)
19. Zeng, G., et al.: Summary of Systems Theory, Information Theory, and Control Theory, pp. 149–151. Central South University of Technology Press, Hunan (1986) (in Chinese)
20. Bin, L.: Software Reliability Research. Postdoctoral Research Report, Beijing University of Aeronautics and Astronautics (2002) (in Chinese)
The CATS Project

Licia Sbattella¹, Roberto Tedesco¹, Alberto Quattrini Li¹,
Elisabetta Genovese², Matteo Corradini², Giacomo Guaraldi², Roberta Garbo³,
Andrea Mangiatordi³, and Silvia Negri³
¹ Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy
{licia.sbattella,roberto.tedesco}@polimi.it
² Università degli Studi di Modena e Reggio Emilia, Via Università 4, 41121 Modena, Italy
{elisabetta.genovese,matteo.corradini,giacomo.guaraldi}@unimore.it
³ Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milano, Italy
{roberta.garbo,andrea.mangiatordi,silvia.negri}@unimib.it

Abstract. University students with learning or sensorial disabilities often face huge difficulties in accessing campus facilities and, specifically, lectures. Many universities offer a wide range of support services to overcome such issues, but this is not always enough. This paper presents CATS, an ongoing research project involving three Italian universities, aiming to design and test technological solutions directed towards better support for accessible lectures. By providing students with a set of experimental, advanced tools, the aim of the project is also to foster inclusive practices. The solutions described here share the principle of being adaptable to the real needs of the students, which are measured using ICF*, an adapted version of the WHO ICF model.

Keywords: Accessibility, Assistive Technology, Human Factors, Inclusion, Campus Tools.

1 Introduction

The 'Campus Tools for Students' (CATS) project¹ aims at supporting students with hearing impairments or Learning Disabilities (LD) during classes, individual study, and the use of administrative, ICT-based services.

During lectures, hearing-impaired students lose a great amount of the information delivered by the teacher via his/her voice. Students with LD, on the other hand, face difficulties in taking notes and, once again, lose a great amount of information. It is thus clear that specific tools are necessary to solve these peculiar difficulties, maximizing the usefulness of lectures.

During individual study, it is particularly important for hearing-impaired students to review a recording of the lecture, augmented with automatic captioning; this approach is highly effective in reducing learning time and increasing students' ability to

¹ CATS is funded by the Italian Ministry of Education, University and Research (MIUR). See the project's web site at http://cats.unimore.it/.


take advantage of their notes. Students with LD can benefit from study materials whose linguistic complexity is reduced and tailored to their specific difficulties. Such adapted materials could increase the effectiveness of individual study.

Finally, the software services students interact with are often not designed with universal accessibility in mind, as their interfaces cannot be personalized.

Thus, the project's main goals are: analysis of the difficulties faced by students with LD or hearing impairment during lectures and individual study; design of methodologies to support students during lectures and individual study; and personalization of software services. Students and teachers will be involved in evaluating the aforementioned solutions by means of extended studies. The solutions will be proposed to other universities as free software (open source).

This paper is structured as follows. In Section 2, we analyze the state of the art of assistive tools. In Section 3, we describe the CATS project. In Section 4, we present the evaluation plan of the project. Finally, in Section 5, we sum up the CATS project and report future works.

2 State of the Art


The role of technology in the process of school inclusion for people with special educational needs has been addressed in the last two decades with different approaches, both in school and extra-school environments. As pointed out in [7], it is quite important to analyze these technical aids in terms of methodology and didactic use by lecturers and students alike.

Technology is used to improve the learning experience [4], or to improve productivity and human-computer interaction. As examples of the latter, Automatic Speech Recognition (ASR) software and applications for the semantic elaboration of text [5, 13] can facilitate the human-computer interaction experience of students with linguistic difficulties [6].

2.1 Personalization of Service Interfaces

Personalization of the user interface is an interesting aspect of human-computer interaction, and it is particularly important for users with disabilities. Several initiatives exist aiming at the standardization of guidelines for the creation of accessible user interfaces². For example WCAG, ATAG, and UAAG, proposed by the W3C WAI³, target the web, but can represent a good reference for the design of generic user interfaces. The W3C also proposed a standard for the definition of device capabilities⁴. Regarding user profiles, several ad-hoc proposals exist. The World Health Organization (WHO), in 2001, proposed the International Classification of Functioning, Disability and Health (ICF) standard⁵, a classification of health and health-related attributes.

² See http://hcibib.org/accessibility
³ See http://www.w3.org/WAI/
⁴ The CC/PP standard, see http://www.w3.org/Mobile/CCPP
⁵ See http://www.who.int/classifications/icf/en

2.2 Hearing Impairment

The introduction of Interactive White Boards (IWB) has been found, according to research carried out in a school environment [15], to be an effective optimization of resources, providing a useful environment for authoring learning materials and giving classes. The video recorded by the IWB software should be augmented with closed captioning. In [12, 14, 16] some tools able to carry out automatic captioning of videos are presented; these software applications, already present on the market, have become more and more sophisticated over the years, and nowadays these methods obtain satisfying results.

Several devices have been developed to help persons with auditory impairments in their daily activities. These assistive listening devices can be exploited in order to overcome the negative effects of distance, background noise or poor acoustics [11]. In particular: FM systems (a small radio station that operates on special frequencies; the speaker uses a microphone which transmits to a receiver connected directly to the auditory aid in use); infrared systems (the sound is transmitted along the waves of infrared light); induction loop systems (normally installed in a fixed manner in a dedicated area; they are connected to the microphone of the speaker and create a current that, in turn, produces a magnetic field in the room); text telephones (allow one to hold a telephone conversation using a keyboard); automatic speech recognition (allows the computer to convert a verbal message into text); closed captioning TV (allows one to see the transcription of a conversation); screen readers (read the content of computer interfaces); and warning systems (alert the person with a disability when a sound arrives, e.g. the ring of a doorbell or telephone, or alarms for dangers like fire).

2.3 Learning Disabilities


Learning Disabilities are a heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of reading, writing, and mathematical abilities. These disorders are intrinsic to the individual and do not depend on his/her efforts. LD are not the direct result of other disabling conditions or environmental influences. LD are chronic disorders; in some cases they can be "compensated", but residual difficulties remain lifelong. Research in the field of LD has collected a great deal of information regarding the identification and treatment of these disorders, and has typically focused on the study of academic problems and cognitive functioning. Most of the treatments described in the literature give simple suggestions, instead of proposing interventions based on the analysis of the cognitive processes. Moreover, only a few studies have so far developed specific training based on new technologies that could help individuals with LD in different ways.

Particularly interesting are the works on: summarization of texts, error correction, Text-To-Speech (TTS), and applications for note-taking.

Automatic generation of summaries is a complex task, and several approaches exist. In general, the key topics are: definition of an importance function, sentence compression, and identification of semantic similarities among sentences. The importance function represents a way to select the relevant sentences to include in the summary [2]. Sentence compression [10] is generally based on stochastic models that select the "most important" parts of sentences. Semantics is used in [8, 9] to calculate the "distance" among different parts of the text; clustering techniques are then used to aggregate similar sentences.

Error correction is based on two fundamental approaches. The first one catches non-word errors (spelling errors that result in a sequence of characters not belonging to the language vocabulary); classic algorithms are based on Kernighan's confusion matrices and on the so-called "edit distance". In order to catch real-word errors (spelling errors that result in a sequence of characters still belonging to the language vocabulary), a common approach is to leverage language models, as defined by means of n-grams.
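As a rough illustration of non-word correction (not the CATS implementation), the sketch below approximates edit-distance candidate generation with difflib's similarity ratio and ranks candidates by an assumed corpus frequency; a real system would add an n-gram language model to catch real-word errors:

```python
from difflib import get_close_matches

# Tiny illustrative vocabulary with assumed corpus frequencies; a real
# system would use a full lexicon plus an n-gram language model for
# context-sensitive (real-word) errors.
vocabulary = {"pine": 120, "pile": 95, "line": 400, "wine": 60}

def correct_non_word(word, cutoff=0.75):
    """Non-word correction sketch: propose in-vocabulary candidates that
    are close to the misspelling (difflib's similarity ratio stands in for
    a true edit-distance model) and rank them by corpus frequency."""
    candidates = get_close_matches(word, list(vocabulary), n=3, cutoff=cutoff)
    return sorted(candidates, key=lambda w: -vocabulary[w])

print(correct_non_word("pinee"))   # ['pine']
```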
TTS-based applications are today largely used, and provide quite accurate pronunciation. For example, Loquendo TTS and Festival TTS are used for high-quality text reading, while JAWS, NVDA and ORCA are screen readers (i.e., they are used for reading the system interface).

Finally, tools that can support students when they take notes during classes are: OneNote (a text editor able to record the audio and connect it to the text) and the Livescribe pen (which is used for handwriting and is able to record audio and connect it to the notes).

3 The Project
The following sections summarize the activities that are planned and the hardware/software solutions that are under development.

3.1 Personalization of Software Services

ICF and service adaptation. The project is based on the ICF* model [1], a subset of the original WHO ICF specification, extended with new, technology-oriented attributes. ICF* provides a simple yet expressive model for the description of user models and the personalization of software applications. In particular, the project will focus on using portable devices as terminals for accessing such personalized services.

Software virtualization. Students with disabilities often encounter difficulties related to the availability of the software they need on the computers of the university network. Whenever a student needs specific assistive software, such software must be installed on the student's computer. Making assistive software directly available on the university network gives more autonomy to the students, allowing them to access software resources whenever needed. The technology known as VMware ThinApp allows deploying virtualized software over the internet. This system guarantees high performance and effective management, allowing time and resources to be saved. In particular, this approach permits better management of software licenses: sharing licenses in 'time sharing' mode allows optimizing the total number of licenses acquired, based on the students' effective usage of the software.

3.2 Analysis of Difficulties Faced by Students with LD or Hearing Impairment

A web-based survey was developed, leveraging the ICF* model. Data related to limitations on didactic participation, with respect to the university environment, will be compared with information coming from the students database, compiled by the social-psychological-pedagogical team that met students at the beginning of their university career. This survey will allow a better definition of the most important areas of participation on which the tools developed by the project should focus. Moreover, as ICF* extends the ICF specification by adding technology-related attributes that specify the human-machine interaction skills of students, the survey will permit the definition of user profiles, supporting the personalization of software services in many ways.

3.3 Methodologies to Support Students during Lectures

PoliNotes. Taking notes during classes is a fundamental activity. Unfortunately, whenever teachers use electronic materials like slides, it is difficult to integrate such materials with students' own notes (and for people with particular difficulties, this activity is even more complicated). PoliNotes [17] is composed of two modules: the first one, installed as a PowerPoint plug-in on the teacher's notebook, is used to show slides, sending their components to the students' notebooks in real time; the second module, installed as a OneNote plug-in on the students' laptops, integrates the material received from the teacher with the student's notes. Slide contents are divided into parts (images, equations, code, etc.), which are independently editable and re-positionable (see Fig. 1).

PoliDisEdit. For dyslexic students, taking notes is a particularly hard activity. Traditional text editors do not provide a spell checker tailored to dyslexic users. As an example, the user interface of traditional spell checkers shows the user a list of suggested words; reading such a list is hard and time-consuming for dyslexic users. Moreover, spell checkers are typically not able to capture errors that result in an actual word (e.g., "pile" vs "pine"). Finally, spell checkers' error models are not tailored to typical dyslexic errors. The project aims to develop an advanced spell checker specific for dyslexia. The spell checker will exploit the context in order to capture "out of context" words. Moreover, the tool will provide a predictor, suggesting the most probable next word to insert, given the last word written by the user.

Fig. 1. PoliNotes (left) showing slide contents edited and annotated on the student's laptop.
PoliConcepts (right) summarizing an Italian text about history and drawing the related mental
map

3.4 Methodologies to Support Individual Study

PoliConcepts. A system, based on domain ontologies, able to extract the main concepts from texts belonging to a predefined domain. Such concepts will be used to produce a summary and a conceptual map of the text. Moreover, the tool will be able to infer new concepts, thus enhancing the domain ontology. Dyslexic students can use this tool to build conceptual maps in a semi-automatic way. A beta version has been developed (see Fig. 1) and testing is currently ongoing.

IWB-augmented classes. A component of the IWB software allows lecturers to activate audio/video recording of the screen. The system then, leveraging ASR software, generates subtitles for the audiovisual material. This audio/video material will be integrated into slides and virtually any other didactic material that the lecturer distributes to students using digital media [18]. The material will be accessed through an accessible web interface (see Fig. 2), allowing students to access the available material according to their needs: for example, adding or removing subtitles, changing font size and typeface, using audio only, changing the position on the page, and so forth. The web interface will allow students to personalize the material, to create their own indexes and classifications, and to link the material with comments, attachments, etc. The possibility to share contents with others will also be included.

Fig. 2. Screenshot of IWB (LIM, in Italian) web interface

4 Pre- and Post-evaluation


The CATS team is evaluating (and will continue to evaluate) every single step of the project, with the main purpose of collecting data about the present needs of the students and about the quality of the services/facilities available on campuses.

Since the earliest stage of the project, a set of procedures has been defined in order to carry out the evaluation process: 1) a semi-structured interview track was defined for tutors working with students with disabilities or LD, in order to collect qualitative data about the needs expressed by the students; 2) another interview track was defined, to be administered to students together with the ICF* web survey, in order to highlight the problems and needs arising in the lecture context; 3) the complete process going from a student's application to the final development of an Individualized Services Plan (ISP) was analyzed; 4) a grid for evaluating already existing inclusive practices in teaching was elaborated, which will be fine-tuned after the analysis of students' needs; and 5) the main perspective of the whole project was defined as the passage from a "dependency culture" model to a needs-driven approach.

The evaluation of the proposed solutions will be carried out on two levels: first, their impact on students' performance will be measured adopting well-organized, repeatable experimental settings; second, in-depth analysis of further qualitative data will be performed, with a particular focus on the social impact of the introduction of campus tools on the personal learning experiences of university students.

5 Conclusion
In this paper we presented the CATS project, which aims at providing different tools to support university students with hearing impairments or LD. The project, started in July 2010, is currently ongoing. During the next year, we will conclude the development of all the planned solutions, starting the evaluation phase that will involve teachers and students of the three participating universities.

References
1. Sbattella, L., Tedesco, R., Pegorari, G.: Personalizing and Making Accessible Innovative Academic Services Using ICF*, an Extended Version of the WHO ICF Model. In: INTED Conference, Valencia, Spain (2011)
2. Hovy, E., Lin, C.: Advances in Automated Text Summarization. MIT Press (1999)
3. Borrino, R., Furini, M., Roccetti, M.: Augmenting Social Media Accessibility. In: International Cross-Disciplinary Conference on Web Accessibility, W4A (2009)
4. Corni, F., Gilberti, E.: A Proposal of VnR-Based Dynamic Modelling Activities to Initiate Students to Model-Centred Learning. Physics Education 44 (2009)
5. Furini, M., Ghini, V.: An Audio-Video Summarization Scheme Based on Audio and Video Analysis. In: IEEE CCNC (2006)
6. Gonzalez, D.: Text-to-Speech Applications Used in EFL Contexts to Enhance Pronunciation. In: TESL-EJ (2007)
7. Cohen, V.: Learning Styles in a Technology-Rich Environment. Journal of Research on Computing in Education 29(4), 338–351 (1981)
8. Hatzivassiloglou, V., Klavans, J., Eskin, E.: Detecting Text Similarity over Short Passages: Exploring Linguistic Feature Combinations via Machine Learning. In: EMNLP (1999)
9. Hatzivassiloglou, V., Klavans, J., Holcombe, M., Barzilay, R., Kan, M., McKeown, K.: Simfinder: A Flexible Clustering Tool for Summarization. In: NAACL Workshop on Automatic Summarization, Pittsburgh, PA, United States (2001)
10. Knight, K., Daniel, M.: Summarization Beyond Sentence Extraction: A Probabilistic Approach to Sentence Compression. Artificial Intelligence 139 (2002)
11. Copley, J., Ziviani, J.: Barriers to the Use of Assistive Technology for Children with Multiple Disabilities. Occupational Therapy International 11, 229–243 (2004)
12. Higgins, S., Beauchamp, G., Miller, D.: Reviewing the Literature on Interactive Whiteboards. Learning, Media and Technology 32(3), 213–225 (2007)
13. Jurafsky, D., Martin, J.H.: Speech and Language Processing. MIT Press (2000)
14. Kennewell, S., Tanner, H., Jones, S., Beauchamp, G.: Analysing the Use of Interactive Technology to Implement Interactive Teaching. Journal of Computer Assisted Learning 24, 61–73 (2007)
15. Somekh, B.: Evaluation of the Primary Schools Whiteboard Expansion Project. Report to the Department for Children, Schools, and Families, Becta (2007)
16. Swann, J.I.: Promoting Independence and Activity in Older People. Quay Books (2007)
17. Marrandino, A., Sbattella, L., Tedesco, R.: Supporting Note-Taking in Multimedia Classes: PoliNotes. In: ITHET Conference, Kusadasi, Turkey (2011)
18. Bertarelli, F., Corradini, M., Guaraldi, G., Fonda, S., Genovese, E.: Advanced Learning and ICT: New Teaching Experiences in University Settings. International Journal of Technology Enhanced Learning 3(4), 377–388 (2011)
Application of Symbolic Computation in Non-isospectral
KdV Equation

Yuanyuan Zhang

College of Science, China Three Gorges University,


Yichang, Hubei, 443002, P.R. China
mathzhyy@yahoo.com.cn

Abstract. In this paper, new exact solutions of a non-isospectral and variable-coefficient KdV (vcKdV) equation are discussed. In the past, the Wronskian technique was used to solve the isospectral KdV equation. In this paper, by means of the symbolic computation system Maple, we generalize the Wronskian technique to the vcKdV equation. As a result, some new complexiton solutions of the vcKdV equation are obtained.

Keywords: Symbolic Computation, non-isospectral KdV equation, Wronskian Solutions.

1 Introduction
During the past several years, the study of coupled nonlinear evolution equations (NEEs) has played an important role in explaining many interesting phenomena in, for example, fluid dynamics and plasma physics. For understanding these nonlinear mechanisms, much work has been done on solitary wave solutions of NEEs [1-9]. In this paper, we would like to generalize the Wronskian technique to the following variable-coefficient KdV (vcKdV) equation

$$u_t + h_1 (u_{xxx} - 6uu_x) + 4h_2 u_x - h_3 (2u + x u_x) = 0, \qquad (1)$$

where $h_1 = h_1(t)$, $h_2 = h_2(t)$ and $h_3 = h_3(t)$ are arbitrary real functions of $t$.

The purpose of this paper is to show that (1) admits complexiton solutions in the Wronskian form, although it is an equation with non-isospectral properties. We obtain the solutions by generalizing the procedure to this non-isospectral equation.

2 Bilinear Form and Wronskian Technique


The bilinear form of Eq. (1) is given by

$$\left[ D_t D_x + h_1 D_x^4 + 4h_2 D_x^2 - x h_3 D_x^2 \right] f \cdot f - 2h_3 f_x f = 0 \qquad (2)$$

through the transformation

$$u = -2(\ln f)_{xx}, \qquad (3)$$

where $D$ is the well-known Hirota bilinear operator defined by

$$D_x^m D_t^n\, a \cdot b = (\partial_x - \partial_{x'})^m (\partial_t - \partial_{t'})^n\, a(x, t)\, b(x', t')\, \big|_{x' = x,\ t' = t}. \qquad (4)$$

To construct solutions to Eq. (1), we use the Wronskian determinant

$$f = W(\phi_1, \phi_2, \ldots, \phi_N) = \begin{vmatrix} \phi_1 & \phi_1^{(1)} & \cdots & \phi_1^{(N-1)} \\ \phi_2 & \phi_2^{(1)} & \cdots & \phi_2^{(N-1)} \\ \vdots & \vdots & & \vdots \\ \phi_N & \phi_N^{(1)} & \cdots & \phi_N^{(N-1)} \end{vmatrix}, \qquad N \ge 1, \qquad (5)$$

where $\phi_i^{(j)} = \partial^j \phi_i / \partial x^j$, $1 \le i \le N$, $j \ge 1$. Actually, the Wronskian technique only needs

$$\phi_{i,xx} = \lambda_i \phi_i, \qquad (6)$$

$$\phi_{i,t} = -4h_1 \phi_{i,xxx} - 4h_2 \phi_{i,x} + x h_3 \phi_{i,x}, \quad 1 \le i \le N, \qquad (7)$$

to guarantee that the Wronskian determinant (5) generates solutions

$$u = -2(\ln W(\phi_1, \phi_2, \ldots, \phi_N))_{xx} \qquad (8)$$

to Eq. (1). Conditions (6) and (7) mean that all the functions $\phi_i$ are eigenfunctions of the Lax representation of Eq. (1):

$$\phi_{xx} = (\lambda - u)\phi, \qquad (9)$$

$$\phi_t = h_1 u_x \phi - 4h_1 \phi_{xxx} - 4h_2 \phi_x + x h_3 \phi_x, \qquad (10)$$

$$\lambda_t = -2h_3 \lambda, \qquad (11)$$

under the zero potential $u = 0$.


In this paper, we consider the Wronskian (5) where each $\phi_i$ satisfies (7) and

$$\phi_{i,xx} = \left( A_i e^{-\int^t 2h_3 dt} + \sqrt{-1}\, B_i e^{-\int^t 2h_3 dt} \right) \phi_i, \qquad (12)$$

where $A_i$ and $B_i$ are real numbers. It is clear that each

$$\lambda_i = A_i e^{-\int^t 2h_3 dt} + \sqrt{-1}\, B_i e^{-\int^t 2h_3 dt}$$

still satisfies

$$\lambda_{i,t} = -2h_3 \lambda_i. \qquad (13)$$

Since the basic functions $\phi_i\ (1 \le i \le N)$ in the Wronskian determinant (5) are derived from eigenfunctions associated with complex eigenvalues of the Lax representation under zero potential, we call the resulting solutions of Eq. (1) complexiton solutions.

If we set

$$\lambda_i = A_i e^{-\int^t 2h_3 dt} + \sqrt{-1}\, B_i e^{-\int^t 2h_3 dt}, \qquad \phi_i = \phi_{i1} + \sqrt{-1}\, \phi_{i2}, \quad 1 \le i \le N, \qquad (14)$$

then the conditions

$$\phi_{i,xx} = \lambda_i \phi_i, \qquad (15)$$

$$\phi_{i,t} = -4h_1 \phi_{i,xxx} - 4h_2 \phi_{i,x} + x h_3 \phi_{i,x} \qquad (16)$$

mean that

$$\begin{pmatrix} \phi_{i1,xx} \\ \phi_{i2,xx} \end{pmatrix} = e^{-\int^t 2h_3 dt} \begin{pmatrix} A_i & -B_i \\ B_i & A_i \end{pmatrix} \begin{pmatrix} \phi_{i1} \\ \phi_{i2} \end{pmatrix}, \qquad (17)$$

$$\begin{pmatrix} \phi_{i1,t} \\ \phi_{i2,t} \end{pmatrix} = -4h_1 \begin{pmatrix} \phi_{i1,xxx} \\ \phi_{i2,xxx} \end{pmatrix} + (-4h_2 + x h_3) \begin{pmatrix} \phi_{i1,x} \\ \phi_{i2,x} \end{pmatrix}. \qquad (18)$$
It can be found that the Wronskian determinant relation

$$W(\phi_{11}, \phi_{12}, \ldots, \phi_{N1}, \phi_{N2}) = (2\sqrt{-1})^{-N}\, W(\phi_1^+, \phi_1^-, \ldots, \phi_N^+, \phi_N^-) \qquad (19)$$

holds for all functions $\phi_{ij}\ (1 \le i \le N,\ j = 1, 2)$, where $\phi_i^{\pm} = \phi_{i1} \pm \sqrt{-1}\, \phi_{i2}$, and thus

$$(\ln W(\phi_{11}, \phi_{12}, \ldots, \phi_{N1}, \phi_{N2}))_x = (\ln W(\phi_1^+, \phi_1^-, \ldots, \phi_N^+, \phi_N^-))_x. \qquad (20)$$

Therefore, the corresponding Wronskian determinant gives exact solutions of Eq. (1):

$$u = -2(\ln W(\phi_{11}, \phi_{12}, \ldots, \phi_{N1}, \phi_{N2}))_{xx} = -2(\ln W(\phi_1^+, \phi_1^-, \ldots, \phi_N^+, \phi_N^-))_{xx}. \qquad (21)$$

Our discussion leads to the following theorem.
276 Y. Zhang

Theorem 1. Let $A_i, B_i > 0$. If $\phi_{i1}$ and $\phi_{i2}\ (1 \le i \le N)$ are determined by (17) and (18), then

$$u = -2(\ln W(\phi_{11}, \phi_{12}, \ldots, \phi_{N1}, \phi_{N2}))_{xx} \qquad (22)$$

presents an exact solution of Eq. (1).

Further, we can obtain complexiton solutions of higher order for Eq. (1). The result is summed up in the following theorem.

Theorem 2. If a set of functions $\phi_{ij}\ (1 \le i \le N,\ j = 1, 2)$ satisfies the conditions in (17) and (18), then for any integers $m_i, n_i \ge 0$, the function

$$u = -2\Big(\ln W\big( \phi_{11}, \phi_{12}, \ldots, \phi_{N1}, \phi_{N2};\ \partial_{A_1}\phi_{11}, \partial_{A_1}\phi_{12}, \ldots, \partial_{A_1}^{m_1}\phi_{11}, \partial_{A_1}^{m_1}\phi_{12};\ \partial_{B_1}\phi_{11}, \partial_{B_1}\phi_{12}, \ldots, \partial_{B_1}^{n_1}\phi_{11}, \partial_{B_1}^{n_1}\phi_{12};\ \ldots;\ \partial_{A_N}\phi_{N1}, \ldots, \partial_{A_N}^{m_N}\phi_{N1}, \partial_{A_N}^{m_N}\phi_{N2};\ \partial_{B_N}\phi_{N1}, \partial_{B_N}\phi_{N2}, \ldots, \partial_{B_N}^{n_N}\phi_{N1}, \partial_{B_N}^{n_N}\phi_{N2} \big)\Big)_{xx} \qquad (23)$$

gives a more general class of exact solutions of Eq. (1).

It is not difficult to verify this result. In the next section, some concrete examples of complexiton solutions will be analyzed in some detail.
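Since the paper's computations were carried out in Maple, which we do not reproduce here, the following sympy sketch (an assumption-laden translation, with the time-dependent exponential factors frozen to constants) illustrates how the Wronskian determinant and the solution $u = -2(\ln W)_{xx}$ of formula (22) can be formed and spot-checked for $N = 1$:

```python
import sympy as sp

x = sp.symbols('x')
p1, q1, k1, l1 = sp.symbols('p1 q1 k1 l1', positive=True)

# Zero-order complexiton ingredients: the compact forms (30)-(31) with the
# time-dependent exponential factors frozen to 1 and k1, l1 treated as
# constants (a simplification; the full solution restores them).
phi11 = sp.cos(q1*x/2 + k1) * sp.cosh(p1*x/2 + l1)
phi12 = sp.sin(q1*x/2 + k1) * sp.sinh(p1*x/2 + l1)

# Wronskian W(phi11, phi12) and the solution u = -2 (ln W)_xx of (22).
W = phi11 * sp.diff(phi12, x) - phi12 * sp.diff(phi11, x)
u = -2 * sp.diff(sp.log(W), x, 2)

# Numerical spot-check of the identity
#   4 W = p1 sin(q1 x + 2 k1) + q1 sinh(p1 x + 2 l1),
# which yields the denominator structure of the solution (32).
vals = {x: 0.7, p1: 1.3, q1: 0.9, k1: 0.2, l1: 0.4}
target = p1*sp.sin(q1*x + 2*k1) + q1*sp.sinh(p1*x + 2*l1)
print((4*W - target).subs(vals).evalf())   # ~ 0
print(sp.simplify(u))                      # explicit zero-order complexiton
```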

3 New Complexiton Solutions of the vcKdV Equation


In this section we apply our method to obtain complexiton solutions of Eq. (1). Let us first solve the system (17) and (18) to present explicit exact solutions of Eq. (1). By means of the symbolic computation system Maple, we obtain the general solutions of (17) and (18):

$$\phi_{i1} = \frac{1}{2}\left[ C_{1i} \cos\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) + C_{2i} \sin\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) \right] e^{\frac{1}{2} p_i e^{-\int^t h_3 dt} x + l_i(t)} + \frac{1}{2}\left[ C_{3i} \cos\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) + C_{4i} \sin\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) \right] e^{-\frac{1}{2} p_i e^{-\int^t h_3 dt} x - l_i(t)}, \qquad (24)$$

$$\phi_{i2} = \frac{1}{2}\left[ C_{1i} \sin\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) - C_{2i} \cos\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) \right] e^{\frac{1}{2} p_i e^{-\int^t h_3 dt} x + l_i(t)} + \frac{1}{2}\left[ -C_{3i} \sin\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) + C_{4i} \cos\!\Big( \tfrac{1}{2} q_i e^{-\int^t h_3 dt} x + k_i(t) \Big) \right] e^{-\frac{1}{2} p_i e^{-\int^t h_3 dt} x - l_i(t)}, \qquad (25)$$

where $C_{1i}, C_{2i}, C_{3i}$ and $C_{4i}$ are arbitrary real constants,

$$p_i = \sqrt{2\sqrt{A_i^2 + B_i^2} + 2A_i}, \qquad q_i = \sqrt{2\sqrt{A_i^2 + B_i^2} - 2A_i}, \tag{26}$$

$$k_i(t) = \frac{1}{2}q_i^3\int^t h_1 e^{3\int^t h_3 dt}dt - \frac{3}{2}p_i^2 q_i\int^t h_1 e^{3\int^t h_3 dt}dt - 2q_i\int^t h_2 e^{\int^t h_3 dt}dt, \tag{27}$$

$$l_i(t) = \frac{3}{2}p_i q_i^2\int^t h_1 e^{3\int^t h_3 dt}dt - \frac{1}{2}p_i^3\int^t h_1 e^{3\int^t h_3 dt}dt - 2p_i\int^t h_2 e^{\int^t h_3 dt}dt. \tag{28}$$

If we choose

$$C_{1i} = \frac{1}{2}\cos(\delta_{1i})e^{\gamma_{1i}}, \quad C_{2i} = \frac{1}{2}\sin(\delta_{1i})e^{\gamma_{1i}}, \quad C_{3i} = \frac{1}{2}\cos(\delta_{2i})e^{\gamma_{2i}}, \quad C_{4i} = \frac{1}{2}\sin(\delta_{2i})e^{\gamma_{2i}}, \tag{29}$$

then we have a compact form for $\phi_{i1}$ and $\phi_{i2}$ (absorbing the constant phases and amplitudes into $k_i(t)$ and $l_i(t)$):

$$\phi_{i1} = \frac{1}{2}\cos\left(\tfrac{1}{2}q_i e^{\int^t h_3 dt}x + k_i(t)\right)\left(e^{\frac{1}{2}p_i e^{\int^t h_3 dt}x + l_i(t)} + e^{-\frac{1}{2}p_i e^{\int^t h_3 dt}x - l_i(t)}\right), \tag{30}$$

$$\phi_{i2} = \frac{1}{2}\sin\left(\tfrac{1}{2}q_i e^{\int^t h_3 dt}x + k_i(t)\right)\left(e^{\frac{1}{2}p_i e^{\int^t h_3 dt}x + l_i(t)} - e^{-\frac{1}{2}p_i e^{\int^t h_3 dt}x - l_i(t)}\right). \tag{31}$$
Let us first concentrate on the case of N = 1 . It is not difficult to obtain the zero-
order complexiton solution for Eq.(1):

Case 1

$$\begin{aligned} u_1 ={}& 2(\ln W(\phi_{11}, \phi_{12}))_{xx} \\ ={}& \frac{4p_1^2 q_1^2 e^{2\int^t h_3 dt}\left[1 + \cos\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right)\cosh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]}{\left[p_1\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right) + q_1\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]^2} \\ &+ \frac{2p_1 q_1 e^{2\int^t h_3 dt}\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right)\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)}{\left[p_1\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right) + q_1\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]^2}, \end{aligned} \tag{32}$$

where $p_1, q_1, k_1(t)$ and $l_1(t)$ are defined by (26)–(28).
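Although the paper uses Maple, the zero-order computation is reproducible with any computer algebra system. A minimal sketch in Python/SymPy, assuming for brevity that $h_3 \equiv 0$ (so the exponential scale factors reduce to 1) and freezing $k_1(t)$, $l_1(t)$ to constants $k$, $l$ at a fixed time, is:

import sympy as sp

x = sp.symbols('x', real=True)
p, q, k, l = sp.symbols('p q k l', positive=True)

# Building blocks of the compact forms (30)-(31) at a fixed time; the
# overall factor 1/2 is dropped since it cancels in (ln W)_xx.
theta = q*x/2 + k   # stands in for (1/2) q_1 x + k_1(t)
xi = p*x/2 + l      # stands in for (1/2) p_1 x + l_1(t)
phi11 = sp.cos(theta)*sp.cosh(xi)
phi12 = sp.sin(theta)*sp.sinh(xi)

# 2x2 Wronskian W(phi11, phi12) and the solution formula u = 2 (ln W)_xx
W = sp.Matrix([[phi11, sp.diff(phi11, x)],
               [phi12, sp.diff(phi12, x)]]).det()
u = sp.simplify(2*sp.diff(sp.log(W), x, 2))
print(u)

The printed expression can then be compared term by term with (32).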



Case 2

$$\begin{aligned} u_2 ={}& 2(\ln W(\phi_{11}, \phi_{12}))_{xx} \\ ={}& \frac{4p_1^2 q_1^2 e^{2\int^t h_3 dt}\left[1 - \cos\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right)\cosh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]}{\left[p_1\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right) + q_1\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]^2} \\ &+ \frac{(p_1^2 - q_1^2)e^{2\int^t h_3 dt}\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right)\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)}{\left[p_1\sin\left(q_1 e^{\int^t h_3 dt}x + 2k_1(t)\right) + q_1\sinh\left(p_1 e^{\int^t h_3 dt}x + 2l_1(t)\right)\right]^2}, \end{aligned} \tag{33}$$

where $p_1, q_1, k_1(t)$ and $l_1(t)$ are defined by (26)–(28).

4 Conclusion
The complexiton solutions of the variable-coefficient KdV equation are obtained
through the Wronskian technique. The method can also be used to solve other
variable-coefficient nonlinear partial differential equations.

Acknowledgments. This work has been supported by the Tianyuan Fund of the NNSF Project (11026196), the Science and Technology Research Project of HPDE (Q20091304) and the Scientific Innovation Team Project of HPDE (T200809).

Modeling Knowledge and Innovation Driven Strategies
for Effective Monitoring and Controlling of Key Urban
Health Indicators

Marjan Khobreh, Fazel Ansari-Ch., and Madjid Fathi

Institute of Knowledge Based Systems and Knowledge Management, Department of Electrical


Engineering & Computer Science, University of Siegen
Hölderlinstrasse 3, D-57068 Siegen, Germany
{marjan.khobreh,fazel.ansari}@uni-siegen.de,
fathi@informatik.uni-siegen.de

Abstract. Urban Health (UH) leading organizations are confronted with complex and multi-domain problems. The integration of knowledge and innovation strategies aims to optimize resource-, time- and cost-intensive processes. This paper discusses Knowledge Management (KM) and Innovation Management (IM) concepts for the exploration and exploitation of new ideas or existing knowledge in support of UH policy makers. Social and knowledge processes, as KM tools, feed the exploitation of existing knowledge and the exploration of new ideas on the basis of an IM strategy. These are the inputs of the Radical, Incremental and Radar (RIR) strategies, which are designed to effectively control or monitor the current situation. The RIR strategies are defined based on five identified situations (5S) that are determined through the indication of three-color codes in the Urban HEART Matrix. The proposed concept consequently applies an analytical method for indicating transitional and risky situations. These strategies support UH policy makers in identifying improvement potentials based on acquired evidence, and in selecting and applying the relevant strategy.

Keywords: RIR Strategies, Exploration, Exploitation, Knowledge Management,


Innovation Strategy, Urban Health.

1 Introduction
Today urban proliferation leads to sustainability challenges; in particular, sustaining Urban Health (UH) is highly correlated with the long-term control of UH services [1]. In other words, UH is one of the most significant issues of urbanization, creating social, political, environmental and managerial opportunities and threats for health contributors. UH crucially depends on the proper interaction and cooperation of various groups and sectors. A wide range of multi-domain stakeholder partnerships and a high volume of UH information and knowledge circulate in the relevant sections of urban health, e.g. health care services such as hospitals and clinics. Health care organizations therefore create and store information and knowledge regularly, e.g. patients' health records; how, then, can sustainable UH be achieved? In this regard, Knowledge and Innovation Management have the potential to promote sustainable UH strategies.


Innovation Management (IM) is for improving organizational processes. In this context, innovation is considered as new or modified processes or products that reach the marketplace or, when put into use, increase the performance or competitiveness of the organization [2], [3]. Innovation may include new designs, techniques, managerial tools, organizational approaches, patents, licenses, business models and paradigms [3]. Basically, innovation rests on an organizational capability rooted in people's creativity, which is recognized as the capability of people to invent novel and useful ideas [2], [3]. Creative ideas may not turn into innovations for the organization, but they are the source from which innovative solutions come [3]. IM is about the organized planning, realization, and control of ideas in organisations [2]. It is not about the development of new ideas, but rather focuses on their realization. Consequently, an IM strategy enables UH stakeholders to identify and extract individual and organizational experiences and/or good practices, and accordingly transfer them. Moreover, personal and documented knowledge are fundamental for the precise identification, extraction and transfer of innovation [2]. The importance of knowledge continues to grow due to the spread of global networks, accelerated product cycles and changing market conditions [4], [5]. For decades, the knowledge intensity of work processes has been increasing compared to manual work [4], [5]. Organizations need to know what they know and be able to leverage their knowledge base to gain competitive advantages [4], [5]. Thus a UH IM strategy should comprehensively address almost all alternatives (e.g. multi-domain strategies and future-oriented policies) involved in the creation and sharing of knowledge across the UH stakeholders. This crucially requires indication of social inclusion as well as establishing transnational partnerships. This contribution thus sustains the achievement of goals and the creation of value added by UH stakeholders [4]. A comprehensive knowledge base can support the IM process by providing easy access to existing, and also already applied, knowledge [4]. This can assist UH stakeholders in finding new ideas. On the other hand, good ideas should be preserved in the knowledge base so that they can be taken into consideration in future planning problems.
Khobreh, Ansari and Nasiri discussed the objectives for the integration of Knowledge Management (KM) in the Urban Health Equity Assessment and Response Tool (Urban HEART) [6], [7]. Urban HEART is a tool for monitoring inequities and forming the basis for determining future actions and interventions [8]. In this paper the principal focus is on the Urban HEART assessment component, where health equity is monitored by the creation of a monitoring support matrix.

2 Knowledge and Innovation Integration into the Urban HEART Assessment Component
The Urban HEART assessment component is an indicator guide designed to monitor and identify the situation of (pilot) cities against specific key criteria [8]. Thus Urban HEART inputs are determined as core indicators, which provide clear direction to local governments on key issues when tackling health inequities [8]. Five key criteria used for identifying core indicators are acknowledged in [8]: (1) Availability of data*, (2) Strength of indicator to measure inequalities*, (3) Coverage of a broad spectrum of issues, (4) Comparability and universality of indicator, and (5) Availability of indicator in other key urban and health tools [8]. The starred (*) items are derived from the experiences and recommendations of pilot-tested cities as well as international experts [8]. Local or national experts and stakeholders in each city should be identified, because they are urban knowledge-holders whose expertise can assure and guarantee the accomplishment of desired results. After gathering the data by means of the core indicators, the Urban HEART assessment component visualizes the acquired data as a structured matrix for monitoring. It produces a matrix representing (1) the comparative performance of cities or neighborhoods within cities, and (2) the comparative effectiveness of policies and future plans [8]. Figure 1 illustrates a typical matrix adopted from the Urban HEART report [8]. In this matrix each column represents the performance of cities or neighborhoods based on the different determinants [8]. In addition, the rows indicate the effectiveness of a particular policy or future intervention [8]. The color codes in each square stand for the level of accomplishment: GREEN (good performance), RED (poor performance), and YELLOW (performance below the intended goal but better than the lower benchmark) [8].

Fig. 1. An Example of Urban Health Equity Matrix adapted from Urban HEART [8]

2.1 Determination of the Five Situations (5S)

Once the performance has been monitored and evaluated, five situations/states (5S) can be determined from the three-color codes. Figure 2 shows the 5S. The situations correspond to the colored zones (1 to 5), where:

1. The desired performance.
2. The good performance, with potential to transition from GREEN to YELLOW.
3. The fair performance.
4. The critical performance, with potential to transition from YELLOW to RED.
5. The poor (extremely undesired) performance.

Fig. 2. 5S corresponding to the indicator states

Situations 1, 3 and 5 are defined similarly to the Urban HEART matrix; however, situations 2 and 4 (transitional zones) are additionally introduced in order to detect and identify potential risk points. For example, position 2 is close to the edge of position 3; if the status of this position is not properly and regularly monitored and the potential of transition from zone 2 to zone 3 is not identified, then the possibility of the relevant indicators changing state from GREEN to YELLOW undesirably increases. Likewise, a transition from situation 4 to situation 5 (the worst case) leads pertinent indicators to change from YELLOW to RED.
In consequence, consideration of the 5S promotes early-stage discovery of risky as well as transitional points for the indicators. In this research the 5S are considered especially based on two fundamental assumptions:

First: detection of the potential that can lead to a transition from the desired to the fair, from the fair to the poor, and then to the poorest situation is significantly important.
Second: the 5S provide a comprehensive view of the colored situations (indicator states), particularly because the three color situations are extended to five situations, which provide a spectrum of change.

Based on the declaration of the 5S, three strategies are modeled to advance analysis and, accordingly, inference based on evidence and reasons.

2.2 Radical, Incremental and Radar Strategies (RIR)

To manage the current/actual situation based on the indications, and to provide adequate interventions, three strategies are designed as follows:

Radical: direct and rapid transition from situation 5 (RED) to situation 1 (GREEN).
Incremental: gradual transition from the worse/worst situation to a better situation (RED to YELLOW, or YELLOW to GREEN).
Radar: remain in the current situation, monitor the transition speed and prevent transition to a worse/the worst situation (GREEN to YELLOW, or YELLOW to RED).

Figure 3 schematically shows these three strategies. While the Radical strategy shifts situation 5 back to situation 1 directly, the Incremental strategy improves the situation step by step. Finally, if an indicator state (result) falls in situation 1, 2 or 4, the Radar strategy is applied for the detection and prevention of a transition to a worse/the worst situation.
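The selection logic can be made concrete with a minimal sketch (in Python; the zone names, trend flag and function names are illustrative assumptions, not part of Urban HEART itself):

from enum import Enum

class Zone(Enum):
    DESIRED = 1          # GREEN, stable
    GREEN_AT_RISK = 2    # GREEN, drifting toward YELLOW
    FAIR = 3             # YELLOW, stable
    YELLOW_AT_RISK = 4   # YELLOW, drifting toward RED
    POOR = 5             # RED

def classify(color, drifting_worse):
    # Map a three-color indicator state plus an observed negative trend
    # onto the five situations (5S).
    if color == "GREEN":
        return Zone.GREEN_AT_RISK if drifting_worse else Zone.DESIRED
    if color == "YELLOW":
        return Zone.YELLOW_AT_RISK if drifting_worse else Zone.FAIR
    return Zone.POOR

def select_strategy(zone):
    # Radical: direct 5 -> 1; Incremental: stepwise improvement from 3
    # (policy makers may also choose it for 5); Radar: zones 1, 2 and 4.
    if zone is Zone.POOR:
        return "Radical"
    if zone is Zone.FAIR:
        return "Incremental"
    return "Radar"

print(select_strategy(classify("YELLOW", drifting_worse=True)))  # Radar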

Fig. 3. Schematic diagram for RIR strategies

Indeed, know-how on whether to transition from the current situation to a better one or to remain in the actual situation is essential. Know-how is structured from practical knowledge, skills and expertise, and is promoted by applying innovation and new ideas. Based on the RIR strategies and their required body of knowledge, two types of input are defined: a new idea (potential innovation) or existing knowledge.

2.3 Knowledge Exploration and Exploitation for RIR Strategies

To provide the body of knowledge for the RIR strategies, two methods are determined (see Figure 4):

Exploration: focusing on creating or obtaining a new idea or knowledge (especially for the Radical strategy).
Exploitation: focusing on using or reinforcing existing knowledge as a good practice (especially for the Incremental and Radar strategies).
Figure 4 reveals the relation of the RIR strategies to KM and IM strategy for the exploration and exploitation of existing and new knowledge through social and knowledge processes, and in connection with UH experts as well as UH-KM resources.
The Radical strategy is mostly supported by creating or obtaining new ideas and knowledge to provide a solution that assures rapid transition. The Incremental and Radar strategies, by contrast, are supplied with existing knowledge. Principally, new knowledge is a product of innovation. However, as stated by Kong Xiang-yu and Li Xiang-yang, knowledge itself does not ensure profits; the value of knowledge lies in its effect on the mainstream [9]. In this context, Exploration is to utilize and derive benefits from a new idea or knowledge based on KM social processes. Social processes are human-related assets and include structural, cultural and human KM resources [10]. Tacit knowledge is mainly identified and documented through social processes [4], [9]. Furthermore, Exploitation is to search, analyze and examine the existing knowledge based on KM knowledge processes. KM processes have been defined in terms of knowledge discovery, distribution, collaboration, and generation [10]. Explicit (encoded) knowledge is the main material of knowledge processes [4]. Nevertheless, knowledge sharing and learning mechanisms are used to sustain the organizational learning process through the creation, structuring and transfer of knowledge [4].

Fig. 4. Explore new knowledge and Exploit existing knowledge as inputs of RIR strategies

Besides, James March identified both exploitation and exploration as essential parts of the innovation process, which involves the acquisition, dissemination, and use of new knowledge [9]. However, as a highly uncertain activity, the creation of new, applicable knowledge is not an assured outcome. New knowledge often emerges from unique (re-)combinations, (re-)exploitation and reuse of existing knowledge [4].

3 Conclusion/Outlook
As explained earlier, the analysis of the Urban HEART Matrix, particularly its three-color codes, leads to the identification of five situations for the UH indicators. These five situations are used to select a Radical, Incremental and/or Radar (RIR) strategy. RIR inputs are considered either as new ideas or as existing knowledge. In addition, the RIR strategies require a body of knowledge, which optimally decreases the decision failure rate and assures the accomplishment of strategic and operational UH objectives. These strategies support UH decision and policy makers in identifying improvement potentials based on acquired evidence and, accordingly, in selecting and applying an adequate strategy.

In the future, research will progress towards the formation of a mathematical model for determining RIR strategies, with special consideration of uncertainty factors. In addition, providing the RIR inputs needs (re-)definition of the associated exploration and exploitation methods.

References
1. United Nations Human Settlements Programme: UN-HABITAT State of the World's Cities 2008/2009 – Harmonious Cities. London, UK (2008)
2. Howlett, R.J. (ed.): Innovation through Knowledge Transfer. Springer, Heidelberg (2010)
3. Leavitt, P.: Using Knowledge Management to Drive Innovation. American Productivity & Quality Center, APQC (2003) ISBN: 1928593798
4. Maier, R.: Knowledge Management Systems: Information and Communication Technologies for Knowledge Management. Springer, Heidelberg (2007)
5. Ansari-Ch., F., Holland, A., Fathi, M.: Advanced Knowledge Management Concept for Sustainable Environmental Integration. In: The 8th IEEE International Conference on Cybernetic Intelligent Systems, pp. 1–7. IEEE Press, Birmingham (2009)
6. Khobreh, M., Ansari-Ch., F., Nasiri, S.: Knowledge Management Approach for Enhancing of Urban Health Equity. In: The 11th European Conference on Knowledge Management, Famalicão, Portugal, pp. 554–564 (2010)
7. Khobreh, M., Ansari-Ch., F., Nasiri, S.: Necessity of Applying Knowledge Management towards Urban Health Equity. In: The IADIS Multi Conference on Computer Science and Information Systems, E-Democracy, Equity and Social Justice, Freiburg, Germany, pp. 3–10 (2010)
8. WHO Centre for Health Development: Urban HEART. World Health Organization, The WHO Centre for Health Development, Kobe, Japan (2010)
9. Kong, X.-Y., Li, X.-Y.: A Systems Thinking Model for Innovation Management: The Knowledge Management Perspective. In: The 14th International Conference on Management Science & Engineering, pp. 1499–1504. IEEE Press, Harbin (2007)
10. Chuang, S.H.: A Resource-Based Perspective on Knowledge Management Capability and Competitive Advantage: An Empirical Investigation. Expert Systems with Applications 27(3), 459–465 (2004)
Team-Based Software/System Development
in the Vertically-Integrated Projects (VIP) Program

Randal Abler, Edward Coyle, Rich DeMillo, Michael Hunter, and Emily Ivey

The Arbutus Center for the Integration of Research and Education


Georgia Tech, 777 Atlantic Dr NW, Atlanta GA 30332-0250, USA
{randal.abler,ejc,rad,eivey3}@gatech.edu, mhunter@lusars.net

Abstract. The Vertically-Integrated Projects (VIP) program is an undergraduate


education program that operates in a research context. Undergraduates who join
VIP teams earn academic credit for their participation in development tasks that
assist faculty with their research efforts. The teams are: multidisciplinary, drawing students from across campus; vertically-integrated, maintaining a mix of sophomores through PhD students each semester; and long-term, as each undergraduate student may participate in a project for up to three years. The
continuity, technical depth, and disciplinary breadth of these teams enable the
completion of projects of significant benefit to research efforts. This paper
provides: overviews of three VIP projects to show the range of computing
topics that are addressed; a summary of the resources and practices that enable
the teams to create sophisticated software and systems; and an explanation of
how student performance is evaluated.

Keywords: Project-based learning, Problem-based learning, Computer engineering


education, Computer science education, Undergraduate research.

1 Introduction

Sustaining and accelerating the pace of technological innovation will require a


continuous stream of new graduates who understand how the processes of research,
technology development, and product creation must be integrated to enable
innovation. Current approaches to the education of undergraduates and graduate
students are not up to this challenge: Undergraduates are generally not provided with
a deep exposure to any technology area; Masters students are often not involved in
research or the development of new technology; and PhD students rarely see their
research breakthroughs implemented and tested in applications. We have thus
developed a new curriculum that integrates education and research in engineering: the
Vertically-Integrated Projects (VIP) Program [1-4]. It creates and supports teams of
faculty, graduate students, and undergraduate students that work together on long-
term, large-scale projects. The focus of each project team is on challenges in research,
development, and applications that are of interest to federal funding agencies,
industry, and not-for-profit organizations.


The research focus and long-term, large-scale nature of VIP projects provide
several advantages, including:

Engaging faculty in the project at a very high level, because the activities of the team directly support the faculty member's research effort, including the generation of publications and prototypes.
Engaging graduate students in the mentoring of undergraduate students who assist them with their research efforts. This accelerates the graduate students' research and enables the undergraduates to learn directly about the goals and responsibilities of graduate students.
Providing the time and context necessary for students to learn and practice many
different professional skills, make substantial technical contributions to the
project, and experience many different roles on a large design team.
Creating new and unique opportunities for the integration of the research and
education enterprises within the university.
In this paper, we discuss a subset of VIP projects that focus on the development of
large-scale software applications. The scale of these projects and the depth of
knowledge the undergraduates must develop to participate in them have led to the
creation of both new approaches to training the new students that join these teams
each semester and an industry-like approach to evaluating their performance.
In Section 2, we provide overviews of several VIP teams and the software/systems
they are developing. In Section 3, we describe the techniques we have developed for
bringing new VIP students up to speed and creating and supporting a team-based
software development process. In Section 4, we describe how we evaluate the
students' performance on the projects.

2 VIP Teams with Significant Software/System Development Goals


There are currently 12 VIP teams at Georgia Tech, 15 at Purdue University, and one
at Morehouse College. A new VIP program will be starting at the University of
Strathclyde in 2012. In all cases, these programs establish a curriculum and support
infrastructure that enables the creation of long-term, large-scale design teams.
A VIP team is created when a faculty member requests one and completes a form
describing the team's name, research goals, the technologies involved, the disciplines from which students should be recruited, and the team's customer or project partner.
A new team typically consists of 8 to 10 undergraduates and then grows to 12 to 20
students. Each also typically includes 1 to 4 MS and PhD students that are working on
the research topics at the heart of the project.
Of the 12 teams currently in operation at Georgia Tech, at least 5 are pursuing
projects with significant computing/systems goals:

Collaborative Workforce Team: Design and test multimedia systems, web-based


applications, and human-computer interfaces to support the distributed design
and research teams that are the future of the global engineering workforce.

eDemocracy Team: Create systems, processes and policies for secure,


authenticated election procedures and citizen participation in government.
eStadium Team: Design, deploy and test wireless and sensor networks to gather
and deliver game information and other applications to fans in the stadium.
Intelligent Tutoring System Team: Design, test and use systems to enhance
student learning in courses via video/data mining and machine learning.
eCampus Team: Design, develop, and deploy mobile wireless applications for
the use of visitors, students, faculty, staff and administrators on campus.
eStadium was the first VIP team and has been in operation for 9 years. The
Collaborative Workforce and eDemocracy teams have been in operation for 4 years.
The sophisticated systems these teams have created are described next.

2.1 The eStadium VIP Team


The eStadium team pursues research, development, and deployment of: (i) Systems
that gather and process multimedia content during the game and make it available to
fans via cellular and WiFi networks [5]; (ii) Sensor networks that collect audio
information, structural vibration information, and images from around the stadium
and make this information available to fans and stadium personnel [6]; and (iii) WiFi
networks in the stadium to enable as many fans as possible to access eStadium [6].
The Web Applications subteam creates and maintains the website at which fans
find game information and applications: http://estadium.gatech.edu. These web
applications include video clips, generally 15–30 seconds in duration, of each play in the game. Each video clip is annotated with its official NCAA play-by-play description. This yields a searchable list of plays that grows throughout the game. Fans can search for plays by type (touchdown, run, pass, etc.) or by the names of the players. The play-by-play information is also parsed to produce a visual representation, the Drive Tracker, of any sequence of plays on the field. It enables fans to quickly identify plays of interest so they can view the associated videos.
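The parsing step can be illustrated with a small sketch (in Python; the line format and field names are hypothetical, loosely modeled on NCAA play-by-play summaries rather than on the team's actual parser):

import re
from typing import Optional

# Hypothetical play-by-play line format, e.g.
#   "Smith, John rush for 12 yards to the GT 45"
PLAY = re.compile(
    r"(?P<player>[A-Z][a-z]+, [A-Z][a-z]+) (?P<kind>rush|pass|punt) "
    r"for (?P<yards>-?\d+) yards?")

def parse_play(line: str) -> Optional[dict]:
    # Turn one play description into a searchable record.
    m = PLAY.search(line)
    return m.groupdict() if m else None

plays = [parse_play(line) for line in [
    "Smith, John rush for 12 yards to the GT 45",
    "Jones, Alex pass for 31 yards, TOUCHDOWN",
]]
# Fans can then filter the growing list by play type or player name.
runs = [p for p in plays if p and p["kind"] == "rush"]
print(runs)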
The Sensor Network subteam designs and deploys sensor networks in the stadium
for many different applications: monitoring crowd noise and activity; measuring the
behavior of the stadium's physical structure during games; collecting images that
show the length of queues for concession stands and restrooms; etc. Fig. 1 shows the
gateway node, a camera attached to it, and one of the many wireless sensor motes that
communicates with the gateway. The sensor net team also focuses on implementing
new data fusion algorithms developed by PhD students on the team; see e.g. [7].

Fig. 1. The sensor network deployed by the eStadium team at Purdue. A second network will
be deployed at Georgia Tech to gather and process audio, RF, vibration, and image data.

The Wireless Network subteam of eStadium has designed WiFi networks for the
stands on the north side of the stadium and the suites on the west side of the stadium.
They measured the propagation of RF signals in the stadium when no people were
present and when the stadium was full during a game. They also considered a number
of antenna designs and access point configurations to ensure adequate coverage in the
stadium. These networks should be installed in the stadium within the next year.

2.2 The eDemocracy VIP Team

The VIP eDemocracy team has developed an Android-based system to aid The Carter
Center's election observation missions [8]. Election observation is the process by
which countries invite organizations such as The Carter Center to observe their
elections to increase transparency and promote electoral validity. Election observation
occurs in several stages but our system focuses solely on election-day processes. In
the old observation process, paper-based forms with lists of questions were distributed
to observers who traveled to polling stations throughout the day and returned in the
evening after poll closing. Difficulties arose as forms were often lost, illegible or
returned late, making it difficult to make an accurate and timely analysis.
To solve these problems, the eDemocracy team developed an Android-based
mobile application [9]. It used the same questions as the paper-based form and sent responses via SMS to a back-end Command Center for analysis. Development of the mobile application was performed in Java using version 1.6 of the Android SDK. Use of Google's API allowed direct integration of the application with the phone's onboard hardware so that GPS tagging and SMS transmission take place transparently and without user intervention.
The command center's structure is simple. MySQL is used as the central database for storing all election observation data and SQLite is used to retrieve messages that are received by FrontlineSMS. PHP and HTML are used to present the received data to the administrator/moderator. The command center also consists of a map handler that displays the current locations of observers and the geographical origin of each SMS message. This system was beta tested during the Filipino presidential election in May 2010. Since that time, the system has been updated with improved features and interfaces. This new system will be deployed in an upcoming election in Liberia this Fall.

Fig. 2. Sample question for election monitors about chain of custody of ballots
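On the command-center side, the ingestion step can be sketched as follows (in Python with SQLite for brevity, whereas the deployed system uses PHP with MySQL and FrontlineSMS; the message format is a hypothetical illustration):

import sqlite3

# Hypothetical SMS payload: "obs=17;q=12;ans=YES;lat=14.5995;lon=120.9842"
def parse_report(raw):
    fields = dict(part.split("=", 1) for part in raw.strip().split(";"))
    return (int(fields["obs"]), int(fields["q"]), fields["ans"],
            float(fields["lat"]), float(fields["lon"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (observer INTEGER, question INTEGER,"
             " answer TEXT, lat REAL, lon REAL)")

def store_report(raw):
    # Each incoming message is GPS-tagged by the handset, so location
    # arrives with the answer and can feed the map handler.
    conn.execute("INSERT INTO reports VALUES (?, ?, ?, ?, ?)", parse_report(raw))
    conn.commit()

store_report("obs=17;q=12;ans=YES;lat=14.5995;lon=120.9842")
print(conn.execute("SELECT COUNT(*) FROM reports").fetchone()[0])  # 1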

2.3 The Collaborative Workforce VIP Team

The Collaborative Workforce team arose from a long-term effort in educational


technology and distance learning [10,11]. Manufacturers of videoconference systems
focus primarily on executive conference rooms, which frequently do not meet the
needs of educational use, which ranges from multiple interconnected classrooms and
highly dynamic interaction styles, to rapid search and review of archived content, to
project interaction between multiple students. The goal of the Collaborative


Workforce team is therefore to develop technology that enables all types of local and
distant interaction necessary for the modern global workforce. The team must thus
meld emerging technology with human work habits and workplace needs.
Currently the team is developing an appliance based on the Texas Instruments DaVinci multi-core processor, which will convert video, audio and control signals to Ethernet for transport within a room, as shown in Figure 3, with emphasis placed on minimal latency and assuming high-bandwidth gigabit networking within the room. This replaces complex audio-visual cabling systems and eliminates the need for an expensive video routing matrix. The work includes creating an embedded web-based control interface in the ARM processor portion of the DaVinci, and programming the appropriate encoding and decoding algorithms into the DSP portion of the DaVinci to support a wide range of audio and visual signals.

Fig. 3. TI DaVinci-based video board

Additional team interests include: 1) dynamically selecting an image subset or Region Of Interest (ROI) [12] for transmission to smartphones and tablets, 2) acoustic analysis and tuning mechanisms for optimizing the audio, and 3) capture and storage of transmitted network video signals as a recording mechanism [13].

2.4 Summary: The Learning Environment in VIP Teams

The 3 VIP projects above demonstrate the depth and breadth of the teams. This is
made possible by the long-term nature of VIP teams. In fact, each VIP team is best
thought of as a small design firm that conducts research and then develops and
deploys systems based on that research. The experiences that students have on VIP
teams are thus very close to what they will later encounter in industry.
New students on VIP teams function like new employees in industry: they are
developing skills and learning about the team's objectives. Students who have been
on a team for a year are performing significant technical tasks that contribute to the
development of prototypes. Students who are close to graduation have both project
management and technical responsibilities. Students who participate for 2 or 3 years
thus have a clear understanding of how industry-scale software projects function.

3 VIP Resources and Practices


On a mature VIP team, new students joining the team must quickly develop an
understanding of the internal workings of the code/system the team has developed so
far. This requires giving them access to a working system to experiment with and
develop their understanding. At the same time, it is unwise to allow new students to
have access to and experiment with production code or a deployed system because of
the damage that might result. It is also not efficient to have the experienced students
on a team spend a large amount of time teaching the new students the basics of the
system. Our solution to these problems is to provide: (1) an initial five- to six-week formal training period for new students; and (2) development servers on which the new students can safely install and experiment with the latest deployed system without damaging the actual production server.

3.1 Training Processes for New Students

All computing-oriented VIP teams share a need to build up new students' knowledge very quickly on such topics as C, MySQL, PHP, and Linux [11]. We have thus created a collection of course modules on these topics that are available to new VIP students at the beginning of each semester. The advisers for each team decide which students from their team should participate. The evaluation of each student's progress is provided to his/her adviser throughout the duration of the module, including a list of students who complete exercises each week and, if requested, a demonstration of a small application the student develops that is related to their team's effort.
The first time a course module is taught, the instructor is a faculty/staff member
who is an expert in the field. The instructor develops the reading list, lecture
materials, assignments, quizzes, and the grading process. During the second offering,
the lectures are taped and made available on the VIP Wiki [4] along with assignments.
In subsequent semesters, teaching assistants run the course modules. They inform the
students about the lecture viewing and assignment schedule, grade the assignments
and quizzes, and report the students' performance to the advisers.
New students participating in a course module must still participate in their VIP
team's weekly meetings so that they become familiar with the project and contribute by
performing tasks assigned to them. This participation also enables them to build up
technical and personal connections within the team. These connections are
particularly helpful for teaching new students:

Good development strategies; including coordination, documentation, revision


control, and resisting the urge to adopt every new programming language.
Where to find resources to learn, test, and develop code without interfering with
the team's operational goals and deployments.
How to create network/server infrastructure compatible with campus network
security policies.

3.2 Server and Network Infrastructure

Since many VIP applications do not conform to a simple web content model but
involve complex database and application programming support, staff administrators
are not able to support the needed servers for each team or application. Therefore the
VIP program at Georgia Tech has moved to a model that has successfully built,
maintained, and provided production-level services on web servers, in conjunction
with a development, test, and quality assurance plan for new software development.

Security policies in the department and campus generally require that production
systems with external visibility, such as a publicly accessible web server, not allow
students to have privileged access as this increases the security risk of other systems
on the same local area network IP subnet. To address this issue, a VIP subnet was
established with separately configured firewall policies. This required an initial effort
in the form of configuring a separate IP subnet, allocating a VLAN on the campus
network to support that subnet, propagating that subnet to the affected campus
network Ethernet switches, and creating a new policy configuration in the firewall
associated with that subnet.
Georgia Tech's VIP Cloud utilizes four physical servers to create virtual machines called guest machines. Each guest machine acts as an independent server, with a separate network identity, operating system installation, software installation, and configuration. The guest configurations include team-specific guest servers for 5 teams. To simplify creating new guests, a template guest configuration is maintained.
Each guest machine is allocated to a responsible administrator: a student, staff, or
faculty member. Team-specific guest servers are typically administered by a graduate
student, but ultimately that decision resides with the VIP team's faculty advisor. The designated administrator must sign a form [4] indicating that they are responsible for assuring proper use of the guest in compliance with all applicable policies. If the administrator is not a faculty member, the team's faculty advisor also signs.
To allow students to get experience with such a challenge, each student involved in
the web project is assigned a specific guest server. Each guest server was configured
with the operating system (RedHat Linux 5.6) preinstalled and a unique network
identity preconfigured. This methodology, developed for team-based guest servers,
applies equally well to individual students' guest servers.

4 Evaluation of Student Performance


Since VIP students function as a team with individual members working on different goals, the evaluation of student accomplishments is not a numerical assessment based on identical assignments. Students may be sophomores through graduate students, be enrolled for 1, 2 or 3 credits, and vary from new, first-semester members to well-established team members. It is thus critical that students understand what is expected of them and how they are assessed.
While each student's individual accomplishments are unique, each student is evaluated across these three equally weighted areas:

Documentation: Quality of Design Notebook; Level of Version Control Activity;


Quality and Quantity of Code Documentation, Wiki Contributions.
Technical Accomplishment: Production and Quality of Code/Systems;
Contributions to Papers and Presentations; Performance in Course Modules.
Teamwork: Level of Participation and Cooperation; Results of Peer Evaluations.

This provides clear objectives within the context of team-based projects. Established
team members generally have practices that align with these objectives, which helps
new team members adopt good techniques for succeeding on the project. All team
members are given mid-semester advisory assessment results. These results are
reviewed individually with new team members and with any student wishing to
discuss his/her progress. The general VIP syllabus, the peer evaluation form, and the
design notebook evaluation form are available on the Georgia Tech VIP wiki [4].
As part of the VIP program, the students are expected to maintain design
notebooks. In addition to meeting notes, these notebooks are expected to contain a
record of student efforts and accomplishments as well as task lists and current issues.
For primarily software-focused development efforts, the design notebook does not
provide a good mechanism for tracking code development. Subversion is used to track code changes; therefore the Subversion logs can be reviewed for student accomplishments. This provides incentives for the students to make proper and frequent use of the version control software.
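Such a review can be automated; a minimal sketch (in Python, using the standard output of svn log --xml; the repository URL is hypothetical) is:

import subprocess
import xml.etree.ElementTree as ET
from collections import Counter

def commit_counts(repo_url):
    # Summarize per-student commit activity from the Subversion history.
    xml_out = subprocess.run(["svn", "log", "--xml", repo_url],
                             capture_output=True, text=True, check=True).stdout
    authors = (entry.findtext("author", "unknown")
               for entry in ET.fromstring(xml_out).iter("logentry"))
    return Counter(authors)

# Hypothetical team repository URL
print(commit_counts("https://svn.example.edu/vip/estadium/trunk").most_common())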

Acknowledgments. The work reported in this paper was funded in part by the
National Science Foundation under grant DUE-0837225.

References
1. Coyle, E.J., Allebach, J.P., Garton Krueger, J.: The Vertically-Integrated Projects (VIP) Program in ECE at Purdue: Fully Integrating Undergraduate Education and Graduate Research. In: ASEE Annual Conf. and Exposition, Chicago, IL, June 18-21 (2006)
2. Abler, R., Krogmeier, J.V., Ault, A., Melkers, J., Clegg, T., Coyle, E.J.: Enabling and Evaluating Collaboration of Distributed Teams with High Definition Collaboration Systems. In: ASEE Annual Conference and Exposition, Louisville, KY, June 20-23 (2010)
3. Abler, R., Coyle, E.J., Kiopa, A., Melkers, J.: Team-based Software/System Development in a Vertically-Integrated Project-Based Course. In: Frontiers in Education, Rapid City, SD, October 12-15 (2011)
4. The Vertically-Integrated Projects Program, http://vip.gatech.edu
5. Ault, A.A., et al.: eStadium: The Mobile Wireless Football Experience. In: Conf. on Internet and Web Applications and Services, Athens, Greece, June 8-13 (2008)
6. Zhong, X., Coyle, E.J.: eStadium: A Wireless Living Lab for Safety and Infotainment Applications. In: Proc. of ChinaCom, Beijing, China, October 25-27 (2006)
7. Sun, X., Coyle, E.J.: Low-Complexity Algorithms for Event Detection in Wireless Sensor Networks. IEEE Journal on Selected Areas in Communications 28(7) (September 2010)
8. The Carter Center, http://www.cartercenter.org/peace/democracy/index.html
9. Osborn, D., et al.: eDemocs: Electronic Distributed Election Monitoring over Cellular Systems. In: Int'l Conf. on Internet and Web Applications and Services, Barcelona, Spain (2010)
10. Abler, R., Wells, G.: Supporting H.323 Video and Voice in an Enterprise Network. In: 1st Conference on Network Administration, May 23-30, pp. 9–15 (1999)
11. Abler, R., Jackson, J., Brennan, S.: High Definition Video Support for Natural Interaction through Distance Learning. In: Frontiers in Education, Saratoga Springs, NY (October 2008)
12. Mavlankar, A., et al.: An Interactive Region-of-Interest Video Streaming System for Online Lecture Viewing. In: Int'l Packet Video Workshop (PV), Hong Kong, China (December 2010)
13. Abler, R., Wells, I.G.: Work in Progress: Rapid and Inexpensive Archiving of Classroom Lectures. In: Frontiers in Education Conf., San Diego, CA (October 2006)
Frameworks for Effective Screen-Centred Interfaces

Luigi Benedicenti¹, Sheila Petty², Christian Riegel³, and Katherine Robinson⁴

¹ Faculty of Engineering and Applied Science, University of Regina, Regina SK, Canada
² Faculty of Fine Arts, University of Regina, Regina SK, Canada
³ Dept. of English, Campion College, University of Regina, Regina SK, Canada
⁴ Dept. of Psychology, Campion College, University of Regina, Regina SK, Canada
{Luigi.Benedicenti,Sheila.Petty,Christian.Riegel,
Katherine.Robinson}@uregina.ca

Abstract. At the union of the humanities and technology, computer interfaces are often studied technically and from a psychological point of view, but more rarely do such studies include a broader perspective connecting cultural theories and cognitive processes to the transformation of user interfaces as the screen real estate changes. This paper introduces a framework the authors have developed for repeatable, broadly scoped experiments aimed at identifying the relationship between screen-centered cultures and user interface semantics. A first experiment based on this framework is then illustrated. Although the experiment described has not yet come to an end, the aim of this paper is to propose the framework as a collaborative tool for researchers in the humanities, social sciences, sciences and engineering that allows an integrated approach identifying interdisciplinary contributions and discipline transitions, with the clearly positive gains that such an approach affords.

Keywords: interface, screens, aesthetics, narrative, encode, decode, framework.

1 Introduction
It has become a commonplace notion that computer-based technology and forms of expression transform human experience and that the screen is "the 21st century face of the image" [1]. There is, thus, clearly an urgent need to examine the ways in which screen-centred interfaces present images and encode and decode meaning, identity, and culture, borne out of an intuitive sense that "whoever controls the metaphor controls the mind" [2]. This is not a question of technology alone, for as Craig Harris has argued, "aesthetics and the technology for creating those aesthetics are tightly intertwined... Just as technology is influenced by its potential use, aesthetics or content is molded by what is possible" [3]. And Lev Manovich has argued that we are no longer interacting with a computer but with "a culture encoded in digital form" [4].
This paper presents the groundwork for an interdisciplinary project by four researchers at the University of Regina who are working to advance the state of knowledge in how aesthetically represented information, in language and in visual media, is understood, mediated, and processed. Our project builds on our work on
screen-centred interfaces in our respective disciplines of cognitive psychology (Dr.


Katherine Robinson), literary studies (Dr. Christian Riegel), media studies (Dr. Sheila
Petty) and software systems engineering (Dr. Luigi Benedicenti).
The fundamental goals of our collaborative project are to engage interdisciplinary
means and perspectives to systematically develop effective methodologies to measure
cognitive processes, aesthetic effects, and software and hardware efficacy of the new
and developing digital media. In this project/pilot study we intend to select a series of
media fragments that include poetic, visual, and language texts, as well as those that
combine these features, and present them on a variety of screen-centred interfaces to
explore their cognitive and aesthetic effects and features.
The fragments will have varied conceptual complexity and varied cultural
references. Using a variety of screens (e.g., a television screen, a conventional
computer screen, a tablet computer, a touch-screen phone, and a conventional mobile
phone with limited screen space for simple text messages), we will examine cognitive
and aesthetic features of how the fragments (e.g., an essay, a sonnet, or a net art
project) are experienced on each platform and whether the essence of their content is
altered or influenced.
Our study will address whether and how media content is influenced by the device
on which it is presented, from cognitive, cultural, and aesthetic perspectives. This
pilot study is meant to 1) define parameters to develop methodologies and to construct
an ontology to map the nexus between technology, aesthetics (including uses of time,
space, text, font size, screen resolution, window size, etc.) and user
impact/experience, and; 2) understand and measure the cognitive, cultural, and
aesthetic experiences of screen users.

2 Context and Significance


We start with the general premise that screens shape our world and identities in such
ubiquitous ways that their very presence and influence often go unproven, or at the
very least, unchallenged. According to Kate Mondloch, "From movie screens to television sets, from video walls to PDAs, screens literally and figuratively stand between us, separating bodies and filtering communication between subjects... present-day viewers are, quite literally, screen subjects" [5]. She further contends that the way in which we view or consume artworks made with screen interfaces has been underexplored as a system or method [5]. The challenge to create
coherent frameworks or methodologies to describe how screen media create meaning
has occupied a significant place in debates among new media scholars, game and
interface designers.
Until very recently, primacy has been placed on what happens behind the screen
with a focus on the technology and software used by computer programmers and
designers. And research in computer-based narrative has mainly focused on
theoretical issues around what narratives do and how they inscribe interactivity on
computer screens. It is time to redress the balance by bringing focus to bear on the
screen itself and examine how images/sensations evoked on the computer screen, and
this experience, create meaning with the user.

As early as the 1980s, C. Crawford advocated that real art through computer games is achievable, "but it will never be achieved so long as we have no path to understanding. We need to establish our principles of aesthetics, a framework for criticism, and a model for development" [6]. In his essay on whether computer games will ever be a legitimate art form, Ernest W. Adams disagrees with the need for a model of development, as he feels art should be intuitively produced, but he agrees with the necessity for a methodology of analysis [7].
Other theoretical positions have evolved to focus on either the technological construction of new media or their social impact. For example, in the quest to quantify effective human interface design, Brenda Laurel turns to theatre and Aristotle's Poetics by creating categories of action, character, thought, language, melody (sound) and enactment [8]. However, Sean Cubitt argues that "the possibilities for a contrapuntal organisation of image, sound and text [should be] explored, in pursuit of a mode of consciousness which is not anchored in the old hierarchies" [9]. Peter Lunenfeld takes a more radical stance by suggesting that once we distinguish a technoculture of the future/present from that which preceded it, we need to move beyond the usual tools of contemporary critical theory. His assertion of the need for a "hyperaesthetic" that encourages "a hybrid temporality," a real-time approach that cycles through the past, present and future "to think with and through the technocultures" [10], offers its own set of problematics: computer-based forms are neither a-historical, nor do they represent a leap in technology so distinct that they are unlinked to preceding forms.
Processing and experiencing text is embodied; linguistic meaning evokes all aspects of the experience of reading, physical and cognitive, and every aspect of language is implicated in embodiment [11], [12]. This notion of the embodied experience of language corresponds with McLuhan's evocation of the medium as an extension of the body in Understanding Media [13]. Ubiquitous computing embraces the embodied nature of language and literature in that it brings the media into closer contact with the human (for example, an individual becoming immersed in a virtual reality world). As Peter Stockwell argues, "The notion of embodiment affects every part of language. It means that all of our experiences, knowledge, beliefs and wishes are involved in and expressible only through patterns of language that have their roots in our material existence" [12].
Gibbs Jr. argues that "Understanding embodied experience is not simply a matter of physiology or kinesiology (i.e., the body as object), but demands recognition of how people dynamically move in the physical and cultural world (i.e., the body experienced from a first-person, phenomenological perspective)" [14]. We link this notion of the embodied experience with McLuhan's conception of the relationship of media to human experience and understanding, for McLuhan's formulation inherently recognizes that exposure to a new medium is not only an experience of a new form of technology but that it also changes the way we relate to and understand the world and our place in that world. For example, the mobile phone could be considered as an extension of the ear in that it changes the fundamental way with which the human body is situated within the world [15].

3 Importance, Originality and Anticipated Contribution to


Knowledge
Each of the above-mentioned scholars touches directly or indirectly on the notion that
there is something unique about screen-centred interfaces that defies inscription in
previous modes of analysis, and all seem to be grasping for a language of description
for the pervasive nature of ubiquitous information processing in the human
environment [16]. We aim to develop theoretical frameworks with which to develop
an understanding of the relation of conventional aesthetic textual forms to the newly
and rapidly developing media technology that shapes our lives. We wonder how the
new screens change and shift our relationship to text as well as our understanding and
processing of that text. How does the increased embodiment of new screen contexts
alter how we respond to a text (meaning various visual media) we read?
Deriving from our theoretical questions and issues is the need to develop methodological tools to harness the potential of ubiquitous computing in the humanities and social sciences. Researchers are forced to find new methodologies for convergence between analog theories and digital contexts, where the user's freedom to determine sequence can profoundly affect the user's response to the text and the meaning she/he derives from it. This pilot study will help us understand and develop methodological issues relating to how one studies digital media on the new screens that predominate in our time: the variety of methods we will use all need to be calibrated, adapted, and integrated to be of value to researchers in the future. There are no models at the moment to aid in this work; thus, we are proposing to develop one.

4 The Pilot Study for the Framework


The pilot study will be divided into two broad phases with two steps each:
Step 1: Preparation. We have already written the application to the Research
Ethics Board for approval to begin the study and approval was granted by the
University of Regina on July 26, 2011. We are in the process of finalizing the
appropriate measures of cognition, cultural relevance and aesthetic relevance that
form the basis of our analysis. Cognitive measures will include measures of retention,
recall, and reading/viewing speed. Measures of cultural and aesthetic relevance will
include questions relating to the experience of reading, viewing, or being immersed in
a digital media context.

Step 2: Selection of Media Fragments. Proposed examples include:


Digital Artworks
1. The net art project, Blackness for Sale: http://obadike.tripod.com/ebay.html
where new media artist Keith Townsend Obadike offered his blackness for sale on
eBay in 2001, creating an effective commentary on the relationship between black
identity and consumer culture. Because the project is primarily text-based, it raises the
interesting issue of how text functions as an image system in net/web art.

2. In the hypertext artwork, With Liberty and Justice for All by African
American artist Carmin Karasic, three browser windows weave a story. This work is
interactive as the viewer can click on the images and different photos appear:
http://www.carminka.net/wlajfa/pledge1.htm

Concrete Poetry
3. Concrete poetry created for a visual medium in which the moving visual image is
reflected in the text that emerges: http://www.vispo.com/guests/DanWaber/arms.htm
4. Concrete poetry created for a visual medium in which the animation illustrates the text:
http://dichtung-digital.mewi.unibas.ch/2003/parisconnection/concretepoetry.htm
In the first, the moving visual image is reflected in the text that emerges. In the
second, the animation illustrates the text. In both, metaphor operates at the lexical
level and at the level of image. Why might this be of interest? First, we can work at metaphor from multiple directions, including at the level of linguistics, which reflects more closely the experience of using a new media device; second, because the poem is fluid, it will lend itself well to the embodied nature of handheld and immersive worlds: a question might be, what happens when we move the text because we hold it in our hand as it too moves? Are there differences in cognitive processes and how they work compared to a static image? The second example provided above might be very useful for an experimental design because the animation is derived from a fixed text, so one would have access to both versions (e.g., paper/conventional and digital).
These texts become a useful tool for methodological experimentation: how does one
deal with digital aesthetic objects presented on digital media versus conventional
forms? How do we deal with aesthetic experiences when the mode of delivery has
changed so radically?

Step 3: Data collection. 80 participants from the U of Regina Psychology Research Participant Pool will each be assigned to one of four conditions (conventional computer screen, iPad, iPhone, mobile phone), with equal numbers per condition, and will be presented with the media fragments. Counterbalancing will ensure that each fragment is viewed on each device by 20 participants. For each fragment, processing time will be collected, recall of the fragment will be assessed (i.e., what do you remember about Fragments 1, 2, 3, and 4?), and a questionnaire about the fragments and the devices on which they were presented will be given. This should take approximately 45 minutes.

Step 4: Data analysis. The team will code the data to prepare a final data set: analyses of variance of the various cognitive measures (recall, reading speed, etc.), examining how these measures are affected by media platform, will be conducted. Correlational analyses will be performed between the cognitive measures, the questionnaires examining participants' aesthetic experiences, and the media platforms.
The correlational analyses will also be used to construct a decision support system
linking interface factors for all content with the parameter set as screens change. We
will use software engineering systems compression methods like Principal
Component Analysis and Clustering to extract a core set of measures that will
constitute the initial vector state of the decision support system. The correlational
analyses will provide the rules for linking these parameters and will be used to build
an active rule set (either as a look-up table or as a set of if-then rules) that will form
the knowledge base given to the system. The system, built in this way, essentially becomes a decision support system, or computer program, capable of forming a general prediction of the best type of content fragments to use in a certain defined screen size format. Linking changes in interface parameters (cognitive, cultural, and aesthetic) with different screens and their descriptions will allow us to infer how to automatically change a presentation from one interface to another and obtain a desired effect (cognitive, cultural, and aesthetic).
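As a rough illustration of the compression step described above, the sketch below standardizes a participants-by-measures matrix and keeps the principal components that explain most of the variance; these components would seed the initial state vector of the decision support system. This is a minimal sketch under assumed conditions: the variable names and the random data are placeholders, not values from the study.

```python
# Minimal sketch of the PCA compression step (assumed setup; the matrix
# `measures` is random placeholder data, not the study's measurements).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
measures = rng.normal(size=(80, 12))   # 80 participants x 12 raw measures

# Standardize so every measure contributes on the same scale, then keep
# the components that together explain 90% of the variance.
z = StandardScaler().fit_transform(measures)
pca = PCA(n_components=0.90)
core = pca.fit_transform(z)            # candidate initial state vectors
print(core.shape, pca.explained_variance_ratio_.round(3))
```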

5 Discussion and Future Work


This paper introduces an interdisciplinary approach and a framework for quantitative
research in screen-centered interfaces. The involvement of many disciplines lends
itself to a holistic approach that, however, is grounded solidly in quantitative methods
from the Social Sciences and Engineering. This approach also encourages mixed
mode approaches for research, including purely qualitative aspects of research (e.g.
grounded theory) to analyze cultural aspects of the screen interface, and to provide
results that can be of value in the humanities, thus complementing the wide spectrum
of research encouraged by the chosen approach.
As a result, the quality of the results may be kept high in all respects; but to achieve such quality, a number of researchers need to effectively represent the needs of each discipline, which at times may lead to the temptation to compromise. In our approach, we have avoided such temptation by allowing
quantitative and qualitative research to proceed on separate independent parts of the
study, and by limiting the number of factors detected in the study. This method is
effective, but it reduces the generality of the findings. For example, the choice of
specific hardware devices excludes very large screens and participatory experiences.
We expect that other research efforts will be able to confirm, refute and/or expand our
findings.
The quantitative analysis is the first step to construct a tool that supports the
translation of user interfaces into different screen real estates. This tool, which at
present is a rule-based expert system, should provide developers and interface
designers with a starting point to encode the same information for devices of different
characteristics (e.g., desktops, laptops, tablets, and smartphones). To date, such
translations can only be made manually and are consequently extremely costly. An
ideal deployment tool would decompose an interface in its semantic components and
express them in the most appropriate affordance for the type of screen selected.
From a qualitative point of view, we aim at exploring the cultural and semantic differences in screen-centered interfaces and possibly explaining the motivations for such changes from a more general point of view, such as the one provided by Action Research or Grounded Theory. Such results cannot be achieved with only one experiment; hence our desire to share our approach.

Acknowledgments. The researchers would like to acknowledge and thank the Social Sciences and Humanities Research Council of Canada for funding the
research (New Theories and Methods for Screen-Centred Interfaces: a Pilot Study) on
which this paper is based.
References
1. Ramsay, C.: Personal conversation (January 19, 2011)
2. Bey, H.: The Information War. In: Dixon, J.B., Cassidy, E.J. (eds.) Virtual Futures: Cyberotics, Technology and Post-Human Pragmatism. Routledge, London (1998)
3. Harris, C. (ed.): Art and Innovation: the Xerox PARC Artist-in-Residence Program. The
MIT Press, Cambridge (1999)
4. Manovich, L.: The Language of New Media. The MIT Press, Cambridge (2001)
5. Mondloch, K.: Screens: Viewing Media Installation Art. University of Minnesota Press,
Minneapolis (2010)
6. Crawford, C.: The Art of Computer Game Design. McGraw-Hill/Osbourne Media,
Berkeley, CA (1984)
7. Adams, E.W.: Will Computer Games Ever Be a Legitimate Art Form? In: Mitchell, G.,
Clarke, A. (eds.) Videogames and Art. Intellect Books (2007)
8. Laurel, B.: Computers as Theatre. Addison-Wesley (1991)
9. Cubitt, S.: The Failure and Success of Multimedia. Paper Presented at the Consciousness
Reframed II Conference at the University College of Wales, Newport (August 20, 1998)
10. Lunenfeld, P.: Snap to Grid: A User's Guide to Digital Arts, Media, and Cultures. The MIT
Press, Cambridge (2002)
11. Geeraerts, D.: Incorporated but not embodied? In: Brone, G., Vandaele, J. (eds.) Cognitive
Poetics: Goals, Gains and Gaps, pp. 445–450. Walter de Gruyter, New York (2009)
12. Stockwell, P.J.: Texture - A Cognitive Aesthetics of Reading. Edinburgh University Press,
Edinburgh (2009)
13. McLuhan, M.: Understanding Media: The Extensions of Man. The MIT Press, Cambridge
(1964)
14. Gibbs Jr., R.W.: Embodiment and Cognitive Science. Cambridge University Press,
Cambridge (2006)
15. Gordon, W.T., Hamaji, E., Albert, J.: Everyman's McLuhan. Mark Batty Publisher, New
York (2007)
16. Greenfield, A.: Everyware: The Dawning Age of Ubiquitous Computing. New Riders,
Berkeley (2006)
Analytical Classification and Evaluation of Various
Approaches in Temporal Data Mining

Mohammad Reza Keyvanpour and Atekeh Etaati

Islamic Azad University, Qazvin Branch, Qazvin, Iran


{Keyvanpour,A.Etaati}@QIAU.ac.ir

Abstract. Modern databases contain vast amounts of information, and their manual analysis for the purpose of knowledge discovery is almost impossible. Today the need for the automatic extraction of useful knowledge from large-capacity data is widely recognized, and automatic analysis and data discovery tools are progressing rapidly. Data mining is a discipline that analyzes large volumes of unstructured data and helps discover the connections required for a better understanding of fundamental concepts. Temporal data mining, in turn, concerns the same kind of analysis for sequential data streams with temporal dependence. Its purpose is the detection of hidden patterns in unexpected behaviours or other precise connections within the data. Various algorithms have been proposed for temporal data mining. The aim of the present study is to introduce, collect and evaluate these algorithms in order to create a global view of temporal data mining analyses. Given the significant importance of temporal data mining in diverse practical applications, our collection can be considerably beneficial in selecting an appropriate algorithm.

Keywords: Temporal data mining (TDM), TDM algorithms, Data set, Pattern.

1 Introduction
Analysis of sequential data streams for understanding the hidden rules within various applications (from the investment stage to the production process) is significantly important. Since computation is growing in many practical fields, large amounts of data are being collected rapidly, so various frameworks are required for the extraction of useful knowledge from these databases. With the emergence of data mining, new techniques have been introduced for this field. Because many of these fields deal particularly with temporal data, the time aspect must be considered for the correct interpretation of the collected data. This matter clarifies the significance of temporal data mining; in fact, TDM is equivalent to knowledge discovery from temporal databases. TDM is a fairly modern branch which can be considered as the common interface of various fields, namely statistics, temporal pattern recognition, temporal databases, optimization, visualization and high-level and parallel computations. In all TDM applications, the large amount of data is the first limitation; consequently, it is always required to employ efficient algorithms in this field. This study attempts to represent a comprehensive collection and evaluation of these algorithms.


The paper is organized as follows: Section 2 introduces the basic concepts of TDM and presents an architecture for TDM. TDM algorithms are classified in Section 3 based on the output type and the applied techniques. Evaluation of the TDM algorithms according to this classification is presented in Section 4.

2 Temporal Data Mining


Knowledge discovery in databases (KDD) is progressing rapidly and is essential for practical, social and economical fields [1]. The KDD expression is ordinarily described as the process of converting low-level data into high-level knowledge. In another definition, KDD is the process of identifying valid, novel, understandable and inherently useful patterns in data [2].
Data mining is a complicated process that extracts non-trivial information from a database [3]. Data mining helps reveal the inherent potential of undiscovered natural resources. It is also applicable in applications such as rapid warning systems for incidents, analysis of medical records of hospitals, analysis of client transactions, etc. [4]. In fact, data mining is relevant to the analysis of an extensive range of unstructured data and leads to the discovery of hidden connections, which allows a better understanding of fundamental concepts. TDM is related to the same analysis for the case of sequential data streams with temporal dependence. The main purpose of TDM is to discover the hidden patterns in unexpected behaviours and/or other precise connections within the temporal data. This goal is achieved by combining techniques from machine learning, statistics, databases, etc. TDM is concerned with the data mining of large collections of sequential data. Sequential data are a category of data that have been sorted by some index; for example, indexing records by time generates a class of sequential data known as time series.
TDM differs from traditional techniques for modelling data streams in the size and nature of the data set and in the method of data gathering. The reason for this difference is the incapability of traditional modelling techniques to handle large data sets, while the data set in TDM can be large without any limitation. Unlike applications that use statistical methods, in data mining there is almost no control over the procedure of data collection, and TDM is often confronted with data that were gathered for other purposes. Another difference between TDM and data stream analysis is related to the type of information estimated or extracted from the data: in TDM it cannot be specified in advance which variables should be examined to illustrate a connection. Furthermore, precise data are less taken into account in TDM [5]. In fact, data mining combines several techniques for the analysis of large data sets; the main problem in TDM, however, is the discovery of patterns from sequential data.
A TDM algorithm is a well-defined procedure that takes data as input and generates the output in the form of a pattern or model [7].

2.1 Architecture of TDM

Figure 1 presents the architecture for the extraction of temporal patterns in TDM. The architecture consists of the following components [6]:
Task Analysis: when a user issues a request, this part has the duty of analyzing the request both syntactically and semantically and of extracting the required data. In fact, this part provides the query for the appropriate data. It also extracts the information relevant to the user-expected patterns. During the analysis procedure, it calls the modules that support time; these modules build time expressions by processing the time-related components during the mining procedure. According to the obtained results, it invokes the data access, pattern search and pattern representation modules, respectively.
Data access: after the query request is provided, this component searches the database to find the proper data in a format suited to the mining algorithm. The temporal aspects must also be considered during the mining procedure. The data access modules use services generated by the time-support modules to interpret the time-dependent components.
Pattern Search: based on the mining demand, this component selects and runs an appropriate algorithm that passes through the chosen data to search for considerable patterns. The search demand specifies the type of knowledge required by the user and applies the user-defined thresholds. According to the type of demand and the selected data set, the pattern search module runs the algorithm and stores the extracted rules.
Pattern representation: based on the various demands for pattern representation, the extracted knowledge can be displayed in different formats; for example, the patterns may be represented as tables, graphs, etc.
Time support: this component is a crucial module in supporting TDM and is used by all other modules. For the purpose of identifying temporal aspects, each expression in a temporal query must be passed through the time support module. All other time-related modules employ the services of this component. The time support module stores and uses a calendar knowledge base, which contains the definitions of all relevant calendars.

3 Proposed Framework for Temporal Data Mining Classification


The proposed framework for TDM classification is presented in this section. According to the performed research on TDM, the algorithms can be classified into the following categories based on their output. The target framework is illustrated in Figure 2, and the results obtained with our classification are briefly summarized in Table 1.
In our view, TDM algorithms can be categorized based on their output into pattern-based or model-based. Models and patterns are structures that can be estimated from, or matched with, the data; these structures are employed to achieve the data mining goals [8].

3.1 Pattern-Based Algorithms

A pattern is a local structure that makes a particular statement about a few variables from the given data points; typically it can resemble a substring with a number of "don't care" characters [7,8]. Matching and discovery of patterns play a significant role in data mining [7].
Unlike search and retrieval applications, in pattern discovery there is no particular query to be searched for within the database; the purpose is the discovery of all considerable patterns [8].

Fig. 1. A prototype system architecture of TDM [6]

Fig. 2. Proposed framework for classification of TDM algorithms

3.1.1 Frequent Pattern-Based Algorithms


There are many methods that define how a pattern is constructed, but there is no general theory for discovering a particular pattern. It seems, however, that repetitive patterns are useful in data mining. A repetitive pattern is one that occurs many times in the data. Formulating useful pattern structures and developing efficient algorithms for discovering all repetitive patterns are two different aspects of data mining. The methods applied for finding the repetitive patterns are important, because they are used for discovering patterns and useful rules; these rules are employed for extracting interesting orders in the data [8].

3.1.2 Association Rule-Based Algorithms


A rule is a pair of Boolean conditions such that the posterior expression is true whenever the prior expression is true [8]. Suppose that I = {i1, i2, ..., im} is a set of items and D is a set of transactions, where each transaction t is an itemset with t ⊆ I. Each transaction has a unique identifier, called TID.
An association rule is an implication of the form X ⇒ Y, where X ⊆ I, Y ⊆ I and X ∩ Y = ∅. The rule X ⇒ Y holds in D with confidence c (0 ≤ c ≤ 1) if c is the fraction of the transactions in D containing X that also contain Y [9]. c is calculated by equation (1):

c = |{t ∈ D : X ∪ Y ⊆ t}| / |{t ∈ D : X ⊆ t}|    (1)

The rule X ⇒ Y has support s in D if s equals the fraction of the transactions in D that contain X ∪ Y. s is calculated by equation (2):

s = |{t ∈ D : X ∪ Y ⊆ t}| / |D|    (2)

Given a transaction set D, the rule extraction problem is to produce all association rules whose support is not less than a user-defined minimum support, called minsup, and whose confidence is not less than a user-defined minimum confidence (minconf) [10].
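A direct transcription of equations (1) and (2) may make the definitions concrete. In the sketch below, transactions are represented as Python sets; the transaction set D and the itemsets X and Y are illustrative only.

```python
# Support and confidence exactly as in equations (1) and (2);
# D, X and Y are small illustrative examples.
def support(D, itemset):
    """Fraction of transactions in D that contain the whole itemset."""
    return sum(itemset <= t for t in D) / len(D)

def confidence(D, X, Y):
    """Fraction of transactions containing X that also contain Y."""
    return support(D, X | Y) / support(D, X)

D = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
X, Y = {"a"}, {"c"}
print(support(D, X | Y))    # s in equation (2): 0.5
print(confidence(D, X, Y))  # c in equation (1): 0.666...
```

A rule X ⇒ Y would then be reported only when support(D, X ∪ Y) ≥ minsup and confidence(D, X, Y) ≥ minconf.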

3.2 Model-Based Algorithms

A model is a high-level, general representation of the data. Models are usually specified by a set of modelling parameters that are estimated from the data. Models are classified into predictive and descriptive ones: predictive models are used for prediction and classification applications, while descriptive models are useful for data abstraction [7].

3.2.1 Predictive Algorithms


The duty of this part is the prediction of future values of a data series based on previous patterns. The first requirement is the construction of a predictive model for the data, which can be created from an initial sample. In this method, it is assumed that the time series does not vary. Prediction is very useful in industrial and economical applications.
In the case of varying time series, the series is assumed to be semi-stationary or locally constant, i.e. each series can be broken into non-varying parts which can be learned [8]. Classification algorithms fall into this category. The process of model creation is illustrated in figure 4.
Fig. 3. Extraction of association rules [7]

3.2.2 Descriptive Algorithms


In this category of algorithms, training and test data sets are not explicitly available, and the data can only be grouped according to their degree of similarity; clustering algorithms fall into this category. Clustering of sequences or time series consists of grouping a collection of sequences or time series based on their similarity. The degree of similarity is generally defined by distance measures such as the Euclidean and Manhattan distances, calculated by equations (3) and (4) respectively, where objects i and j are each described by n attributes (dimensions):

d(i, j) = sqrt( (x_i1 − x_j1)² + (x_i2 − x_j2)² + ... + (x_in − x_jn)² )    (3)

d(i, j) = |x_i1 − x_j1| + |x_i2 − x_j2| + ... + |x_in − x_jn|    (4)
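In code, equations (3) and (4) amount to the following; each object is assumed to be a plain vector of n numeric attributes, and the sample vectors are illustrative only.

```python
# Euclidean (equation 3) and Manhattan (equation 4) distances between
# two objects given as equal-length numeric vectors.
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

x_i, x_j = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(euclidean(x_i, x_j))  # 5.0
print(manhattan(x_i, x_j))  # 7.0
```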

4 Evaluation of TDM Algorithms on the Proposed Framework


In this section, the efficiency of TDM algorithms is evaluated based on the proposed classification. The results of this evaluation are presented in Table 2. The considered criteria are as follows:
High-capacity data: if the volume of temporal data produced in a given domain is high, a TDM algorithm must still run over the data accurately and efficiently.

Fig. 4. Process of model creation [7]


Table 1. Summary of the obtained results based on the adopted classification

Pattern-based approaches - Frequent pattern-based algorithms
Application: when a model of the errors cannot be determined; in cases of sudden and large variations in the data; uniform distribution of transactions in different temporal regions.
Method attributes: processing of sequential data; compact insertion of the obtained data into a matrix.
Challenges: efficiency reduction for algorithms that produce a recursive model as output; speed reduction in algorithms with complex dependencies in the data.

Pattern-based approaches - Association rule-based algorithms
Application: normally modelled for large-scale event-stream processing; usable for data sets without noise and null values; usable for very abnormal data.
Method attributes: conversion of non-constant data series into a constant, stable sequence; capability of removing null records; reduction of data volume while maintaining valid data for large, noisy inputs with missing values.
Challenges: the method does not work appropriately if the collected data fall below a specified threshold.

Model-based approaches - Predictive algorithms
Application: convenient for time series with different dimensions and various time delays; usable for large data sets.
Method attributes: reduction of the size of large time series data sets; response-time improvement in the mining process; similarity optimization in a group of objects; data mining at different levels of detail.
Challenges: if the final patterns cannot be placed beside each other in the data-set reduction process, the cost of these methods becomes very high; these algorithms are limited and do not work properly when data abstraction is difficult.

Model-based approaches - Descriptive algorithms
Application: highly noisy and complex data; usable for high-capacity generated data.
Method attributes: recognition of data with errors, noise or missing values; high speed and efficiency of the algorithm; analyses useful for understanding the behaviour of the investigated sample; suitable for continuous training.
Challenges: do not cover particular situations; to represent relationships, they utilize a model that cannot show cause-and-effect relations.

Existence of data with noise or missing values: if the data are collected from different sources, the existence of noise and missing values in the temporal database is quite probable. An algorithm with appropriate efficiency is required to analyze such data properly.
Capability of model determination for errors: in addition to the capability of an algorithm to produce an accurate and suitable output, its capability to create a model for the errors should be considered.
Existence of complex and correlated data: the existence of complicated and correlated data decreases the efficiency of TDM algorithms, so for the analysis of this category of data an algorithm should be selected whose efficiency is not reduced.
Existence of sudden or large variations in data: sudden variation is a common feature of temporal data. Evaluating the efficiency of different algorithms is necessary to identify their behaviour when facing such sudden variations.

Table 2. Evaluation of TDM algorithm efficiency based on the proposed framework

Method                                            | High-capacity data | Data with noise or missing values | Model determination for errors | Complex and correlated data | Sudden and large variations in data
Descriptive algorithms (model-based)              | high               | medium                            | medium                         | low                         | high
Predictive algorithms (model-based)               | high               | medium                            | low                            | low                         | medium
Frequent pattern-based algorithms (pattern-based) | medium             | high                              | low                            | medium                      | low
Association rule-based algorithms (pattern-based) | high               | medium                            | low                            | high                        | medium

5 Conclusions
In this paper, TDM algorithms are investigated. These algorithms are categorized and evaluated based on the applied techniques and the obtained output. In order to provide an appropriate tool for selecting suitable algorithms, the results are represented in diagrams and the attributes of each group are investigated.
The results of this research assert that no algorithm can be declared optimal based on its structure alone; since each algorithm is used for a special aim, a blanket comparison of algorithms does not make sense. One of the most important problems in TDM is the elimination of these challenges and the improvement of the algorithms' efficiency, which is an important and active research field and requires more investigation.

References
1. Goebel, M., Gruenwald, L.: A Survey of Data Mining and Knowledge Discovery Software Tools (1999)
2. Piatetsky-Shapiro, G., Frawley, W.J.: Knowledge Discovery in Databases. AAAI/MIT Press (1991)
3. Feelders, A., Daniels, H., Holsheimer, M.: Methodological and Practical Aspects of Data Mining (2000)
4. Bellazzi, R., Larizza, C., Magni, P., Bellazzi, R.: Temporal Data Mining for the Quality Assessment of Hemodialysis Services. Artificial Intelligence in Medicine 34, 25–39 (2004)
5. Laxman, S., Sastry, S.: A Survey of Temporal Data Mining. Sadhana 31(2), 173–198 (2006)
6. Chen, X., Petrounias, I.: An Architecture for Temporal Data Mining. In: IEE Colloquium on Knowledge Discovery and Data Mining, vol. 310, pp. 8/1–8/4. IEEE (1998)
7. Hand, D., Mannila, H., Smyth, P.: Principles of Data Mining. MIT Press, Cambridge (2001)
8. Gopalan, N.P., Sivaselvan, B.: Data Mining: Techniques and Trends. A.K. Ghosh, New Delhi (2009)
9. Gharib, T.F., Nassar, H., Taha, M., Abraham, A.: An Efficient Algorithm for Incremental Mining of Temporal Association Rules. Journal of Data & Knowledge Engineering 69, 800–815 (2010)
10. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules. In: 20th International Conference on Very Large Data Bases (VLDB 1994), pp. 487–499 (1994)
A Novel Classification of Load Balancing Algorithms
in Distributed Systems

Mohammad Reza Keyvanpour, Hadi Mansourifar, and Behzad Bagherzade

Faculty of Electrical, Computer and IT, Islamic Azad University, Qazvin Branch, Qazvin
{keyvanpour,h_mansourifar}@qiau.ac.ir, b_bagherzade@yahoo.com

Abstract. Load-balancing algorithms play an important role in avoiding situations in which heavily loaded processors and idle or lightly loaded processors occur simultaneously in distributed systems. In this paper we present a new classification of load balancing algorithms which can clarify the future direction of load balancing research. The proposed classification indicates that all load balancing algorithms fall into two main categories: topology dependent and topology independent. We discuss the main advantages and weaknesses of each category relative to the other. In doing so, we try to reveal the hidden aspects of topology dependent algorithms in the research literature which can be exploited in future research.

Keywords: Load balancing, Distributed systems, Parallel systems.

1 Introduction
Load balancing mechanisms are one of the most essential issues in distributed
systems. The final goal of load balancing is obtained by fair distribution of load
across the processors such that, execution time should be decreased after load
balancing operation. The problem of load balancing emerges when a processor is
ready to execute the tasks but is going to idle state. The idle processors are sign of
overloaded processors when adequate tasks exist in the systems. Such conditions can
lead to a remarkable decrease of performance in distributed systems. Load balancing
algorithms have been categorized into static or dynamic, centralized or decentralized,
cooperative or non cooperative in the literature [2, 3, 5, 9, 11, 12]. In this paper we
categorize the load balancing algorithms into topology dependent or topology
independent algorithms. Topology dependent methods are algorithms which have
designed to execute in a specific topology in order to minimize the communication
overhead. However topology independent methods are not restricted to execution in
specific topology and instead of minimizing the overhead, try to minimize the
execution time. Although synchronization has an essential effect in order to decrease
the execution time, topology independent methods cannot guarantee the
synchronization. On the other hand, some topology independent methods can
guarantee the synchronization. Therefore they can be combined with some aspects of

K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 313320.
springerlink.com Springer-Verlag Berlin Heidelberg 2012
314 M.R. Keyvanpour, H. Mansourifar, and B. Bagherzade

topology independent algorithms to minimize the communication overhead and


execution time, simultaneously.
This paper is organized as follows. Section 2 presents our proposed classification of load balancing algorithms. Section 3 introduces the main aspects of topology dependent algorithms and categorizes them into synchronous and asynchronous. Section 4 describes the topology independent load balancing algorithms in detail. Section 5 presents the seven load balancing features used in the proposed classification. Finally, Section 6 concludes the paper and outlines the current and future direction of load balancing algorithms.

2 Classification of Load Balancing Algorithms


Most load balancing classifications in the research literature are based on the functionality of load balancing algorithms. In such classifications, load balancing algorithms have been categorized as static or dynamic, centralized or decentralized, cooperative or non-cooperative, etc. However, such classifications are very general and overlap considerably with each other; for instance, all cooperative algorithms are dynamic and decentralized. Therefore such classifications cannot demonstrate the main aspects of load balancing algorithms. To solve this problem we propose a new classification which can reveal the main characteristics of load balancing algorithms straightforwardly. In the proposed classification, load balancing algorithms are categorized into topology dependent and topology independent algorithms. Topology dependent algorithms are further categorized into synchronous and asynchronous, and topology independent algorithms are classified into primary and intelligent algorithms. Primary load balancing algorithms are categorized into load based or cost based algorithms, and intelligent load balancing algorithms are categorized into optimizer or advisor algorithms. Fig. 1 shows the proposed classification and some algorithms which belong to each subclass. Such a classification can distinguish a load balancing algorithm from others based on its individual characteristics.

Fig. 1. Classification of load balancing algorithms


In the next sections we demonstrate the main aspects of each subclass and its individual properties.

3 Topology Dependent Algorithms


Topology dependent algorithms are load balancing mechanisms which have been designed for execution on a specific topology. Such algorithms are called topology dependent because their logic is based on a proper topology, and executing their logic on another topology leads to a remarkable growth of the communication overhead in most conditions [1]. Topology dependent algorithms can be categorized into synchronous and asynchronous. In this section, we demonstrate the main characteristics of each category of topology dependent algorithms.

3.1 Synchronous Algorithms

This category of load balancing algorithms is suitable for highly parallel systems. Some synchronous topology dependent algorithms have a minimal amount of communication overhead, but others cannot guarantee a reasonable overhead. For instance, the Dimension Exchange Model (DEM) [1, 8] is a synchronous approach which tries to balance the system in an iterative manner. DEM was conceptually designed for execution on a hypercube topology, such that load is migrated between directly connected nodes; therefore DEM can guarantee minimal overhead. The main drawback of DEM is its dependency on log N iterations, where N denotes the number of nodes in the system: an overloaded node may be forced to wait until the last iteration before transferring its load to another node. Fig. 2 shows the process of load balancing in the DEM strategy. To solve this problem, the Direct Dimension Exchange (DDE) method was proposed [8]; DDE eliminates the unnecessary iterations by taking the load average in every dimension.
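To make the iterative scheme concrete, the sketch below simulates DEM on a hypercube of N = 2^k nodes: in iteration d, every node averages its load with the neighbour whose identifier differs only in bit d, so after log N iterations all loads reach the global average. The load values are invented for illustration.

```python
# Dimension Exchange Method on a hypercube of N = 2^k nodes.
def dem(loads):
    n = len(loads)                    # must be a power of two
    d = 0
    while (1 << d) < n:               # one iteration per hypercube dimension
        for i in range(n):
            j = i ^ (1 << d)          # neighbour across dimension d
            if i < j:                 # balance each pair exactly once
                avg = (loads[i] + loads[j]) / 2
                loads[i] = loads[j] = avg
        d += 1
    return loads

print(dem([8, 0, 4, 0, 0, 0, 2, 2]))  # -> all nodes at the average, 2.0
```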

Fig. 2. Required iterations in DEM

On the other hand, the Symmetric Broadcasting Network (SBN) [4] is a synchronous algorithm which cannot guarantee minimal overhead, because logically connected nodes in SBN are not necessarily physically connected; this manner of communication imposes a significant overhead on the system. SBN first sorts the nodes with respect to their loads in ascending and descending order. It then forms two different broadcasting trees according to the load in the system and uses the ascending or descending order depending on the situation. In fact SBN is an adaptive algorithm and can adapt itself during the life of the system. Fig. 3 shows the two broadcasting trees of the SBN algorithm.

Fig. 3. Broadcasting trees of SBN

3.2 Asynchronous Algorithms

Some load balancing algorithms are asynchronous, yet their local behaviour makes them suitable for highly parallel systems. Such algorithms act locally in each domain, and executing the algorithm simultaneously on various domains can approximate synchronization. For instance, the Hierarchical Balancing Model (HBM) [1] is an asynchronous algorithm which was conceptually designed for execution on a hypercube topology. HBM organizes the nodes in a binary tree, such that each parent node receives the triggers which indicate an imbalance among its children. Fig. 4 shows the binary tree of HBM. Other instances of asynchronous load balancing algorithms are the Gradient Model (GM) and the Extended Gradient Model, which are demand-driven algorithms and work by detecting the globally or locally nearest lightly loaded processors.

Fig. 4. Hierarchical organization of 8 processors in HBM

4 Topology Independent Algorithms


Topology independent methods have not been conceptually designed for execution on a specific topology; therefore their communication overhead is independent of the topology. We categorize topology independent algorithms into primary and intelligent algorithms.
4.1 Primary Algorithms

Primary load balancing algorithms are non-intelligent methods in which the processes of setting the thresholds and selecting the destination of migration are based on trial and error. Although the execution of these methods is very simple, most of them can be combined with artificial intelligence or optimization methods. For instance, the Central Load Manager [2] is a static load balancing algorithm in which, when a thread is created, a minimally loaded host is selected by the central load manager for executing the new thread. The integrated decision making yields a uniform allocation and consequently a minimum number of separated neighbour threads; however, a high degree of communication overhead is the main drawback of this algorithm. The Thresholds algorithm [2] is another static load balancing algorithm, in which the load manager is distributed among the processors. Each local load manager knows the load state of the whole system, and two thresholds with default values, Tunder and Tupper, represent the load state of the processors. In this algorithm, if the local state is not overloaded, or if no underloaded host exists, then the thread is allocated locally; otherwise, a remote underloaded host is selected. Compared to the Central Load Manager algorithm, distributing the load manager among all processors leads to low communication overhead. However, when all processors are overloaded, local load assignment can cause a significant load imbalance: a host can be much more overloaded than another host, which conflicts with the ultimate goal of load balancing algorithms. As illustrated, the process of setting and changing the thresholds in primary load balancing algorithms follows trial and error approaches; therefore they cannot guarantee the best decision.
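A rough sketch of the Thresholds allocation rule is given below, assuming each host holds a (possibly stale) copy of all load values. The threshold values and loads are illustrative defaults, not parameters taken from [2].

```python
# Thresholds algorithm: allocate locally unless the local host is
# overloaded and an underloaded host exists (illustrative values only).
T_UNDER, T_UPPER = 2, 8   # Tunder and Tupper, assumed defaults

def state(load):
    if load < T_UNDER:
        return "underloaded"
    if load > T_UPPER:
        return "overloaded"
    return "medium"

def place_thread(local_host, loads):
    """Return the index of the host that should run a new thread."""
    if state(loads[local_host]) != "overloaded":
        return local_host                     # allocate locally
    under = [h for h, l in enumerate(loads) if state(l) == "underloaded"]
    return min(under, key=loads.__getitem__) if under else local_host

loads = [9, 1, 5, 10]
print(place_thread(0, loads))   # host 0 is overloaded -> host 1 is chosen
```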
The selection phase in most primary load balancing algorithms is based on load-related thresholds. However, some primary load balancing algorithms utilize performance measures in order to select the destination of migration: for instance, Shortest Expected Delay (SED) selects the host with the best mean response time, and the Adaptive Separable Policy (ASP) selects the host with the best utilization during the past interval [14].

4.2 Intelligent Algorithms

Intelligent load balancing algorithms are methods in which the processes of setting and changing the thresholds and selecting the destination of migration are based on optimization or machine learning mechanisms. For instance, the Classifier Based Load Balancer (CBLB) [7] employs a simple classifier system on the central host in order to dynamically set the load balancing thresholds. To this end, the central host classifies the state of the system based on the following parameters:
Mean response time since the last update.
The mean utilization per node since the last update.
Inverse standard deviation of arrivals since the last update.
Based on these parameters, the central host forms three classes and assigns each class a specific action. The system parameters used in CBLB are the transfer queue threshold (Tq), the update period time (UP) and the CPU threshold (TCPU). The main advantage of the CBLB algorithm is that it can work as an independent central algorithm or can easily be combined with primary load balancing algorithms.
The application of genetic algorithms to dynamic load balancing has become more popular in recent years. Genetic algorithms utilize historical data on the behaviour of the system in order to achieve minimum total completion time and maximum processor utilization. For instance, the Genetic Task Assigner (GCTA) [7] is a central algorithm which periodically collects the state information of the other processors and tries to achieve the best available load distribution among the processors. First, GCTA forms a population which represents possible task assignments. Unsuitable transfers are ignored, and the selection process continues with the individuals of the next generation.
The main advantage of genetic algorithms is their ability to optimize the selection process. However, taking optimal or near-optimal decisions imposes a significant cost on the central host.
Utilizing artificial neural networks is another emerging solution for the dynamic load balancing problem, especially in large distributed systems. For instance, the KNN load balancer (KNNL) [13] is a central intelligent algorithm which works based on neural networks. KNNL collects resource information and saves it in a log file; it then reads the log file and extracts the features required for the learning process. This process is offline, but the generated model can be used to dynamically change the parameters with respect to system states. Although KNNL suffers from extensive overhead for small task redistributions, it can offer reasonable performance for large distributed systems.

5 Load Balancing Features


In this section, we introduce the seven load balancing features, listed below, which are used in the proposed classification of load balancing algorithms.
Log based: load balancing algorithms which collect historical information about the system and utilize it for the next decision. Such algorithms have dynamic behaviour and can act globally or locally.
Distributed static: distributed load balancing algorithms which assign the processors to the tasks at compile time. These algorithms act locally, and each host has a load manager with a copy of the load information of the system.
Central dynamic: intelligent load balancing algorithms which collect the information, generate a model and try to dynamically set the parameters.
Central static: primary load balancing algorithms which assign the processors to the tasks at compile time. Such algorithms act globally, and the thresholds are set non-intelligently.
Minimal overhead: topology-based load balancing algorithms which have been conceptually designed around a physically connected system; therefore such algorithms have minimal communication overhead.
Sort based: dynamic load balancing algorithms which sort the hosts with respect to their load. Such algorithms have an adaptive nature and are dynamic algorithms with extensive communication overhead.
Demand driven: dynamic load balancing algorithms whose function is based on probing the other hosts in order to find sender or receiver hosts. Most of such algorithms have a diffusive nature and act locally.
Table 1. Comparison of load balancing algorithms

Method          | Type                 | Communication overhead | Central static | Central dynamic | Distributed static | Description
SBN             | Topology dependent   | Extensive              | No             | No              | No                 | Sort based
HBM             | Topology dependent   | Minimal                | No             | No              | No                 | Asynchronous
Central Manager | Topology independent | Medium                 | Yes            | No              | No                 | Primary, load based
SED             | Topology independent | Medium                 | No             | Yes             | No                 | Primary, cost based
X-GM            | Topology dependent   | Medium                 | No             | No              | No                 | Asynchronous, demand driven
CBLB            | Topology independent | Extensive              | No             | Yes             | No                 | Intelligent, advisor
GCTA            | Topology independent | Extensive              | No             | Yes             | No                 | Intelligent, optimizer
DEM             | Topology dependent   | Minimal                | No             | No              | No                 | Synchronous
Gradient        | Topology dependent   | Extensive              | No             | No              | No                 | Asynchronous, demand driven
Thresholding    | Topology independent | Medium                 | No             | No              | Yes                | Primary, load based
ASP             | Topology independent | Medium                 | No             | Yes             | No                 | Primary, cost based
KNNL            | Topology independent | Extensive              | No             | No              | No                 | Intelligent, advisor
DDE             | Topology dependent   | Minimal                | No             | No              | No                 | Synchronous

6 Conclusion
In this paper we proposed a novel classification of load balancing algorithms based on a topological view and seven load balancing features. Such a classification can reveal interesting facts about load balancing algorithms, as follows.
All intelligent load balancing algorithms are central; to date there are no local intelligent load balancing algorithms in the research literature.
All intelligent load balancing algorithms which act based on machine learning mechanisms are topology independent. To the best of our knowledge there are no topology dependent, machine learning based load balancing algorithms in the research literature.
Minimal overhead versus optimized execution time or utilization is a trade-off in every load balancing algorithm. The majority of synchronous load balancing algorithms have minimal overhead. It seems that the combination of such algorithms with artificial intelligence techniques can form the future direction of load balancing algorithms.
References
1. Willebeek-LeMair, M.H., Reeves, A.P.: Strategies for Dynamic Load Balancing on Highly Parallel Computers. IEEE Transactions on Parallel and Distributed Systems 4(9) (1993)
2. Dubrovski, A., Friedman, R., Schuster, A.: Load Balancing in Distributed Shared Memory Systems. International Journal of Applied Software Technology 3, 167–202 (1998)
3. Zhou, S., Ferrari, D.: A Trace-Driven Simulation Study of Dynamic Load Balancing. IEEE Transactions on Software Engineering 14(9), 1327–1341 (1988)
4. Das, S.K., Harvey, D.J., Biswas, R.: Parallel Processing of Adaptive Meshes with Load Balancing. IEEE Trans. Parallel and Distributed Systems 12(12), 1269–1280 (2001)
5. Corradi, A., Leonardi, L., Zambonelli, F.: On the Effectiveness of Different Diffusive Load Balancing Policies in Dynamic Applications. In: Bubak, M., Hertzberger, B., Sloot, P.M.A. (eds.) HPCN-Europe 1998. LNCS, vol. 1401. Springer, Heidelberg (1998)
6. Corradi, A., Leonardi, L., Zambonelli, F.: Diffusive Load Balancing Policies for Dynamic Applications. IEEE Concurrency 7(1), 22–31 (1999)
7. Baumgartner, J., Cook, D.J., Shirazi, B.: Genetic Solutions to the Load Balancing Problem. In: Proc. of the International Conference on Parallel Processing, pp. 72–78 (1995)
8. Shu, W., Wu, M.Y.: The Direct Dimension Exchange Method for Load Balancing in k-ary n-cubes. In: Proceedings of the Eighth IEEE Symposium on Parallel and Distributed Processing, New Orleans, pp. 366–369 (1996)
9. Osman, A., Ammar, H.: Dynamic Load Balancing Strategies for Parallel Computers. In: International Symposium on Parallel and Distributed Computing, ISPDC (2002)
10. Luque, E., Ripoll, A., Cortes, A., Margalef, T.: A Distributed Diffusion Method for Dynamic Load Balancing on Parallel Computers. In: Proc. of EUROMICRO Workshop on Parallel and Distributed Processing. IEEE CS Press (1995)
11. Sharma, S., Singh, S., Sharma, M.: Performance Analysis of Load Balancing Algorithms. World Academy of Science, Engineering and Technology 38 (2008)
12. Xu, C.-Z., Lau, F.: Load Balancing in Parallel Computers: Theory and Practice. Kluwer Academic Publishers, Dordrecht (1997)
13. Salim, M., Manzoor, A., Rashid, K.: A Novel ANN-Based Load Balancing Technique for Heterogeneous Environment. Information Technology Journal 6(7), 1005–1012 (2007)
14. Ghanem, J.: Implementation of Load Balancing Policies in Distributed Systems. Master thesis (2004)
Data Mining Tasks in a Student-Oriented DSS

Vasile Paul Bresfelean, Mihaela Bresfelean, and Ramona Lacurezeanu

Babes-Bolyai University, 400084 Cluj-Napoca, Romania


{paul.bresfelean,ramona.lacurezeanu}@econ.ubbcluj.ro,
miha1580@yahoo.com

Abstract. In recent years of intense transformations in Internet and information technologies, higher education institutions seek to implement novel tools in an attempt to enhance their activities. Among these tools, decision support systems (DSS) play an important part in assisting all managerial and academic processes. While previous DSS research focused on enterprise-level decision making [15], we center on the individual and introduce a DSS architecture for supporting students in decision processes, which integrates some operative data mining tasks.

Keywords: DSS, data mining, clustering, classification learning, numeric


prediction, decision trees.

1 Introduction
Universities, as integral parts of the local community, have important tasks in education, training and research, and are also an important supplier of high-quality future staff for local and international companies. These institutions try to adopt innovative tools in an attempt to augment all their activities in an increasingly competitive and demanding environment. Such tools may well be decision support systems (DSS), whose purpose is to assist in all managerial and academic processes and in the retrospective analysis of economic and organizational data.
The success of any organization depends greatly on the quality of its decision-making processes, which demand assisting software tools such as decision support systems. The most recent tendency of current DSS is to facilitate cooperation between participants in collective decisions in all fields of activity. They denote complex applications that assist, rather than substitute, human decision making processes and rely on the effectiveness and accuracy of the resulting information.
The perception and purposes of DSS have been appreciably extended owing to the rapid development [15] of IT and web technologies. Marakas' definition underlines that a DSS is a system under the control of one or more decision makers that supports the activity of decision making by offering an organized set of tools designed to impose structure on portions of the decision-making situation and to improve the eventual efficiency of the decision result [10].
While most of the previous DSS research focused on enterprise-level decision
support [15], in our research we center on individual support with regard to
personalized preferences and expectations. In the present article we introduce a DSS

architecture for assisting students in decision processes, and also our view of integrating several data mining tasks into the system.

2 State of the Art in the Field


From the most recent literature, we have observed an increasing development of DSS in various fields of activity, including business, medicine, natural resources and transportation, but also some interesting approaches in education.
There are intelligent decision support systems developed to facilitate all phases of consumer decision processes in business-to-consumer e-services applications [15]. Some of their main functional modules comprise: consumer and personalized management, evaluation and selection, planning and design, community and collaboration management, auction, negotiation, transactions and payments, quality and feedback control [15], etc. Other research [8] considered a Web-based approach to DSS structure to assist natural resources management, which expanded the potential of finding professional solutions in complex situations, as well as applying expert knowledge in the field of water resources management and protection.
We can also find DSS frameworks in clinical decision support, highlighting the requirements for medical device plug-and-play standards and treatment management [13] in order to develop protocol guideline modules. An original approach can be found in a Clinical Decision Support System prototype focusing on the treatment of Traumatic Brain Injury [13], which is modular and easily extendible, allowing the insertion of new data sources, visualization methods, and protocols.
Another major issue addressed by DSS is the safety of transportation vehicles and passengers. The AIRNET project [12] contributed to the solution of airport emergency situations by developing a modular platform for safety management within the Ground System, focusing on innovative characteristics to offer surveillance, control and guidance tasks in an airport environment.
There are also some interesting approaches to DSS in educational environments. Frize & Frasson [7] presented several advances that can strengthen the learning processes in medical education: from decision-support tools, scoring systems, Bayesian models and neural networks, to cognitive models that reproduce how students gradually build their knowledge in memory and support pedagogic methods. Another idea [2] was to design a model for testing and measuring student capabilities (such as intelligence, understanding, comprehension of mathematical concepts, and past academic records) and to feed the module results into a rule-based decision support system so as to determine the compatibility of those capabilities with the available faculties/majors. The purpose was to help students opt for the most appropriate faculty/major when seeking admission to a certain university. Nguyen & Haddawy [11] presented an approach using Bayesian networks to forecast the graduating cumulative Grade Point Average based on the applicant's condition at the time of admission. They integrated a case-based component with a prediction model, in an attempt to define the similarity of cases (applicants) in a manner consistent with the prediction model.
Further works [9] offered a DSS simulation and assessment of different scenarios,
rooted in a methodology to evaluate educational competence and organize its
allocation and utilization. Their system used an autonomous data warehouse


comprising input data from significant sources and had a graphical front-end client for
adequate output presentation.

3 Our Proposed Student-DSS Architecture and Data Mining Tasks
As a result of our most recent studies in developing an academic DSS [4],[5] and of recent developments in data mining technologies [6], we propose a student-oriented decision support system. The educational institution's managers will also access and benefit from its results, data and statistics, but the system is largely addressed to the central point of education, namely the students. In our DSS we have designed three main modules (Fig. 1), each with its distinctive utility:
Academic performance - for academic results, scholarships, competitions;
Career orientation - employment opportunities for graduates, summer jobs, part-
time jobs;
Continuing Education & Life Long Learning - principally for college graduates,
and their appeal to new master and doctoral studies, double degrees etc.
The data and information needed to fuel the system come from different sources through:
A database server - which involves the university's data from: edu-network, legacy software, academic info, research & management, local community and partners, questionnaires, official documents, library, archive etc.
A web server - extracting data from Internet sources: university portals, e-learning platforms, job portals, employment agencies, ministry of education, classification of universities, alumni continuous feedback, companies etc.
A knowledge server - which comprises machine learning features and applies the data mining methods to the above data so as to provide significant knowledge to the DSS modules and to facilitate the decision-making processes. It features several built-in algorithms and open-source software (such as Weka).
The data mining methods are complementary to decision support [3], and their association has significant advantages in individual decision-making processes and data analysis. While data mining has the potential of resolving decision problems, decision support methods usually produce a decision model, attesting the authority of decisional factors [3]. The styles of learning in data mining [14] that we currently employ in the Student DSS are:
Clustering - we look for groups of students with similar characteristics;
Classification learning - we use a set of classified examples so as to learn a way of classifying unseen examples (e.g. job orientation and professional development after graduation - the correspondence between specializations and employment paths);
Numeric prediction - the predicted result is a numeric quantity (e.g. prediction of students' exam results, predictions of the employment/unemployment rates etc.).

Another data mining task to be included is association learning, where we seek
any association among features, not only one that predicts a definite class value (e.g.
relations between subjects, courses, labs and facilities that might attract new students,
or cause scholastic abandonment, transfers, or study interruptions etc.).

Fig. 1. Student-oriented DSS architecture, derived from our continuous research [4]

3.1 Data Clustering

The system applies clustering in situations where no information is available on the
assignment of the data to predefined classes; it determines groups based on data
similarity, while disparate groups contain dissimilar data. For example, based on the
feedback data received from our master degree students who are employed, the system
builds a profile of students with jobs in their graduated specialization. Applying the
K-means clustering algorithm (72.7273% correctly clustered instances), it divides the
students into two clusters (Table 1):
1. First cluster - students with a job in the graduated specialization,
2. Second cluster - students with a job in a different field.
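To make the workflow concrete, here is a minimal sketch of such a clustering run. It approximates the Weka-based K-means step with scikit-learn, and the survey attributes and answers are illustrative stand-ins for the real questionnaire data, not the authors' exact schema.

# Minimal sketch: clustering employed students into two groups by survey
# attributes, approximating the K-means run described above with scikit-learn.
# Attribute names and values are illustrative, not the authors' exact schema.
import pandas as pd
from sklearn.cluster import KMeans

survey = pd.DataFrame([
    ["Economic",         "Full time", "Satisfied", "3-6 months"],
    ["Natural sciences", "Full time", "So and so", "1-2 years"],
    ["Economic",         "Full time", "Satisfied", "3-6 months"],
], columns=["qualification_field", "type_of_job",
            "job_satisfaction", "time_to_hire"])

X = pd.get_dummies(survey)              # one-hot encode categorical answers
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                           # cluster id per student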

Table 1. Students with jobs in the same/different graduated specialization: cluster centroids

Attribute | Score (chi-squared statistic) | Cluster 1: Job in graduated specialization | Cluster 2: Job in different field
Graduated school type | 0.61214 | University | University
Year of last school graduation | 0.1111 | 2005 | 2009
Final grades | 2.55337 | 7.01-8 | 8.01-9
Qualification field | 39.41197 | Economic | Natural sciences
Age | 2.93085 | 26_35 | 36_45
Gender | 0.11791 | male | male
Type of job | 41.23092 | Full time | Full time
Headquarters | 30.31725 | Tg Mures, Mures | Cluj-Napoca
Time to hire | 1.82114 | 3 to 6 months | 1 to 2 years
Job satisfaction | 17.56531 | Satisfied | So and so
Number of job interviews participated in | 1.89431 | 2 to 5 | 2 to 5
Number of refused jobs | 2.06833 | 2 to 5 | 2 to 5
Type of experience requested by employer | 6.65839 | In the graduated field | In other fields
Years of requested experience | 0.56306 | 2 to 5 years | 1 year
Employer appreciation | 3.20531 | Very good | Good
Firm technical level | 4.42825 | Average | Higher
Level of self-qualification vs. required tasks | 0.89261 | Adequate | Higher
Firm's staff fluctuation | 3.5824 | Low | Average
Firm stimulates employees' training | 3.1144 | Yes | No
Firm stimulates innovation | 3.6831 | No | No
Own innovations at work | 0.47226 | Yes | No
Aware of the promotion criteria | 1.2838 | Yes | Yes
Time to fulfill the promotion | 7.00431 | Between 1-2 years | Now

3.2 Classification Learning

For the classification learning tasks, the system applies for example the C4.5
algorithm to predict the students' present level of qualification versus the tasks
required by their employer. Based on the training set, we obtained a 79.22% success rate
(correctly classified instances), and a 72.08% success rate in the 10-fold
cross-validation experiment. The system used the Laplace estimator, where leaf
counts are smoothed by starting the counts at 1 instead of zero to avoid zero-probability
estimates - a traditional procedure named after the 18th century mathematician Pierre Laplace.
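A minimal sketch of this classification setup follows. scikit-learn provides CART rather than Weka's C4.5 (J48), so the tree learner here is only a stand-in, and the coded features, labels, and leaf counts are illustrative.

# Minimal sketch: decision-tree classification with 10-fold cross-validation,
# plus the Laplace estimator applied to a leaf's class counts. CART stands in
# for C4.5; features, labels, and counts are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(155, 5))      # 155 coded survey answers
y = rng.integers(0, 3, size=155)           # lower / adequate / higher

clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=10) # 10-fold cross-validation
print(scores.mean())

# Laplace estimator at a leaf: start class counts at 1 instead of 0
counts = np.array([12, 0, 3])
laplace_probs = (counts + 1) / (counts.sum() + len(counts))
print(laplace_probs)                       # no class gets probability 0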

An illustration of a decision tree (Fig. 2), resulting from 155 instances (employed
undergraduate and master-degree students from different specializations at Babes-Bolyai
University in 2010), has as its root node the Firm_technicLevel attribute (opinion
on the employer's technical equipment level). At the second level, the ramification is
based on the Refused_jobs attribute (the number of job offers refused by the respondent
before the present one); at the third level, on the Year_last_school (the year of the
most recent school graduation) and Job_satisfaction (the student's satisfaction with
the current job) attributes; the last ramification occurs at the Training_stimulated
attribute (the extent to which the employer motivates staff training).

Fig. 2. Classification learning tree - students' qualification versus job-required tasks

Here are some suggestive examples of interpretation of the decision tree's branches:
- If the students believed their employer had average technical equipment, then they
would find their level of qualification to be just adequate to the required tasks.
- If the students believed their employer had a high level of technical equipment,
and they refused 0 jobs before taking the present one, and their most recent
school graduation was after 2008, then they would find their level of qualification
to be higher than the required tasks.
- If the students believed their employer had a high level of technical equipment,
and they refused 1 job before taking the present one, and their satisfaction with this
job is <<so and so>>, and they felt the employer was not motivating staff training,
then they would find their level of qualification to be lower than the required tasks.

3.3 Numeric Prediction

For the numeric prediction tasks, the system uses for instance the REPTree
method, which generates a decision tree (Fig. 3) based on information gain/variance
reduction and then trims it using reduced-error pruning [14]; for speed, it sorts the
values of numeric attributes only once, and it deals with missing values by splitting
instances into pieces. Based on several public statistics, the system tries to
numerically predict the national youth (ages 15-24) unemployment rate, namely YUR_15-24.
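The sketch below illustrates the numeric prediction task under similar assumptions: Weka's REPTree (variance reduction plus reduced-error pruning) has no direct scikit-learn counterpart, so a CART regressor with cost-complexity pruning stands in, and the indicator values are invented for illustration; only the attribute names follow the paper.

# Minimal sketch: regression tree predicting YUR_15-24 from macroeconomic
# indicators. CART with cost-complexity pruning approximates REPTree here;
# the numbers are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# columns: Unemployment_rate, LPFR_55-64, Net_product_taxes, LPFR_65+
X = np.array([[7.1, 41.2, 10.3, 17.5],
              [7.4, 42.0, 11.1, 18.0],
              [6.9, 40.5,  9.8, 17.8],
              [7.6, 43.1, 11.4, 18.2]])
y = np.array([20.1, 22.4, 19.0, 23.0])     # YUR_15-24 (%)

tree = DecisionTreeRegressor(ccp_alpha=0.01, random_state=0).fit(X, y)
print(tree.predict([[7.3, 41.8, 10.9, 17.9]]))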

Fig. 3. Generated REPTree for numeric prediction of the youth unemployment rate

It can be seen that the countrywide Unemployment rate (for all age categories)
plays a role in the future evolution of YUR_15-24, with a split threshold of around
7.25 percent. The next nodes of the decision tree reveal the importance of other factors,
namely LPFR_55-64 (labour force participation rate, ages 55-64) and Net_product_taxes.
The Net Product Taxes represent the difference between the taxes owed to the state
budget (VAT, excise and other taxes) and the subsidies on products paid from the
state budget. As a concluding point, the last split takes place on LPFR_65+
(labour force participation rate, age over 65), whose 17.82% value finally
influences the future evolution of YUR_15-24. A greater level of Net_product_taxes might
be an indication of a sounder economy, thus decreasing the youth unemployment rate.
A motivating direction for our system's upcoming assessment will be, for instance, to
unravel the factors that determine the influence of mature and senior citizens'
employment on the youth employment/unemployment rates.

4 Conclusions
In the present article we presented the first part of our research in developing a student-
oriented decision support system and its data mining tasks. We commenced by
overviewing recent DSS studies in several domains, then outlined some interesting
applications in the educational field. Afterward, we introduced our DSS architecture and
described the main modules and their roles, addressed to the central point of education,
namely the students.

The central point of our study was to integrate the data mining processes into the decision
support system. For that, we envisioned a knowledge server comprising built-in algorithms
and also open-source software, to provide the required tasks: data clustering, classification
learning, and numeric prediction. We presented several examples of how the system can
generate significant knowledge for the decision-maker: clusters based on students with
jobs in the same/different graduated specialization; a classification learning tree based on
the students' qualification versus the job's required tasks; a REPTree for numeric
prediction of the youth unemployment rate.
In our future research we plan to continue developing the system by integrating more of
the planned components, such as the association learning tasks.
Acknowledgements. This work was supported by the CNCSIS TE_316 Grant.

References
1. Abdelhakim, M.N.A., Shirmohammadi, S.: Improving Educational Multimedia selection
process using group decision support systems. International Journal of Advanced Media
and Communication 2(2), 174-190 (2008)
2. Aslam, M.Z., Nasimullah, Khan, A.R.: A Proposed Decision Support System/Expert
System for Guiding Fresh Students in Selecting a Faculty in Gomal University, Pakistan
(2011), http://arxiv.org/abs/1104.1678
3. Bohanec, M., Zupan, B.: Integrating decision support and data mining by hierarchical multi-
attribute decision models. In: IDDM-2001: ECML/PKDD-2001 Workshop, Freiburg (2001)
4. Bresfelean, V.P., Ghisoiu, N., Lacurezeanu, R., Vlad, M.P., Pop, M., Veres, O.: Designing
a DSS for Higher Education Management. In: Proceedings of CSEDU 2009 International
Conference on Computer Supported Education, March 23-26, vol. 2, pp. 335-340 (2009)
5. Bresfelean, V.P., Ghisoiu, N., Lacurezeanu, R., Sitar-Taut, D.A.: Towards the
Development of Decision Support in Academic Environments. In: ITI 2009, Croatia, June
22-25, pp. 343-348 (2009)
6. Bresfelean, V.P.: Implicatii ale tehnologiilor informatice asupra managementului
institutiilor universitare, Ed. Risoprint, Cluj-Napoca, 277 pages (2008)
7. Frize, M., Frasson, C.: Decision-support and intelligent tutoring systems in medical
education. Clin. Invest. Med. 23(4), 266-269 (2000)
8. Iliev, R., Kirilov, L., Bournaski, E.: Web-based decision support system in regional water
resources management. In: Proceedings of the CompSysTech 2010, pp. 323-328 (2010)
9. Mansmann, S., Scholl, M.H.: Decision Support System for Managing Educational
Capacity Utilization. IEEE Transactions on Education 50(2), 143-150 (2007)
10. Marakas, G.M.: Decision Support Systems: In the 21st Century, 2nd edn. Pearson
Education (2003)
11. Hien, N.T.N., Haddawy, P.: A Decision Support System for Evaluating International
Student Applications. In: 37th ASEE/IEEE Frontiers in Education Conference,
Milwaukee, WI, USA, October 10-13 (2007)
12. Pestana, G., da Silva, M.M., Casaca, A., Nunes, J.: An airport decision support system for
mobiles surveillance & alerting. In: Proceedings MobiDE 2005, pp. 33-40 (2005)
13. Williams, M., Wu, F., Kazanzides, P., Brady, K., Fackler, J.: A modular framework for
clinical decision support systems: medical device plug-and-play is critical. SIGBED
Rev. 6(2), Article 8, 11 pages (2009)
14. Witten, I.H., Frank, E., Hall, M.A.: Data mining: Practical machine learning tools and
techniques, 3rd edn. Morgan Kaufmann, Elsevier (2011)
15. Yu, C.-C.: A web-based consumer-oriented intelligent decision support system for
personalized e-services. In: ICEC 2004, pp. 429-437 (2004)
Teaching Automation Engineering: A Hybrid Approach
for Combined Virtual and Real Training
Using a 3-D Simulation System

Juergen Rossmann(1), Oliver Stern(2), Roland Wischnewski(2), and Thorsten Koch(2)

(1) Institute of Man-Machine Interaction (MMI), RWTH Aachen University,
Ahornstr. 55, 52074 Aachen, Germany
(2) RIF e.V., Department Robot Technology,
Joseph-von-Fraunhofer-Str. 20, 44227 Dortmund, Germany
rossmann@mmi.rwth-aachen.de,
{stern,wischnewski,koch}@rt.rif-ev.de

Abstract. The ever-growing complexity of automated manufacturing plants
requires new methods for the vocational training of engineers. We present a
hybrid training approach that is based on learning sessions at both small but
industrially relevant real plants and virtual models of the same plants. The 3-D
models are of close-to-reality visualization quality and allow for true interaction
while the simulation is running. Students first work on ready-made or individually
planned tasks at the virtual plant before they time-efficiently transfer their new
knowledge into practice at the real plant. This new approach has been
implemented and tested with an industrial manufacturer of training plants.

Keywords: automation, engineering, training, simulation, education.

1 Introduction
In contrast to the field of education, the use of simulation systems is already state-of-
the-art within the daily routine of big industrial enterprises. The prospects of such
software packages for a close-to-reality virtual handling of complex mechatronic
systems shall now also be used for a cost-effective and motivating introduction to
automation engineering.
The use of 3-D simulation in education not only offers excellent support for
learning processes but also familiarizes the students with a tool that by now, especially
in the automotive industry, has reached a mature degree and become a standard.
In the stage of preparing the programming and the commissioning of a
manufacturing plant, the real plant is very often not yet available due to delays in time
management. Because of this, a virtual model is programmed and the commissioning
of this virtual plant is executed with the help of simulation [1]. The propagation of the
results to the real plant can then be carried out in less time, so that this time saving
alone exceeds the additional costs for the virtual plant in many cases. Similar time and
cost savings will also be achieved in the field of education if methods of the virtual
production are established.


2 State of the Art


Practical exercises are an essential component of education and training in the
different branches of automation engineering. Many different, partly related
introduction courses and study courses are now well established, for example in the
fields of robotics, production engineering, and mechatronics (electrics, mechanics,
controller engineering). To cover this whole spectrum with learning systems, an
interdisciplinary approach [2] is required which adequately accounts for the big
differences in level, e.g. between vocational schools and universities.
A major part of the training in the mentioned disciplines is nowadays carried out
on training plants built up in hardware. This approach causes some significant
problems:
The high acquisition costs of such plants lead to very small numbers of
available plants at the educational institutions, so that each student receives only a
small amount of training time. Moreover, the plants have a long
operational lifetime, so they are often not up-to-date given the fast
advances in this domain.
In many exercise scenarios, specially designed low-cost education
components are used, which complicates the transfer of the gathered
knowledge into industrial practice.
The low complexity of such plants prevents a manifold use, especially with
respect to the interdisciplinary focus of the increasing number of modern
training courses and study courses.
Safety regulations for real hardware lead to a high introduction effort to
prevent harm to humans and machines. This overhead time cannot be used
for the practical exercises with the equipment.
Frequently, the maintenance effort is so high that failure scenarios are scarcely
practiced, as the required modifications of the plants for every student are not
possible in the available time.
A possible alternative approach is the use of simulation software, which by now
belongs to the state of the art in big enterprises, especially in the automotive
industry. With this approach, the following problems have to be considered:
The creation of suitable simulation models is very time-consuming and in most
cases can only be carried out by experts. This leads to a high entry threshold
in the field of education.
Generally, current simulation systems are not capable of virtually representing all
the details required for education. As a consequence, they cannot
completely substitute the practical exercise with real hardware.
However, an adequate synergetic combination of real and virtual exercise scenarios
should be able to solve these problems for the most part. Especially a system
following a modular design principle, in which hardware systems and ready-made
simulation models can be combined variably, offers a promising interdisciplinary
approach. Such a construction kit must be didactically supported by comprehensive
learning materials, and it must be open and extensible so that additional individual
learning scenarios can easily be added.

3 The Connection of Reality and Simulation


Everybody dealing with curricula of the branches of automation engineering will
quickly find out that the projects and tasks to work on should ideally be carried out
using mechatronic systems in the form of complex plants. Already some years ago,
the German company FESTO Didactic GmbH & Co. KG developed the approach of
modular production systems (MPS) and thus created an education concept that
already meets the demands concerning real hardware. It contains many modern
mechatronic sub systems, e.g. robots and PLCs, and replicates numerous functions
which can also be found in modern manufacturing, assembly, and packaging plants.
The hardware of this system consists of industrial components and thus lays a
foundation for the transfer of the gathered knowledge into practice.
To communicate the learning objectives, an instructor can nowadays choose from
numerous methods ranging from conservative ex-cathedra teaching to multimedia-
based e-learning. However, this theoretical knowledge can hardly be transferred to
practical application within the scope of the training. Learning by Doing is more than
just an option: every student should work with an industrial training system to really
get a grasp of the technology. On the other hand, it is not realistic to expect that every
student can work with suitable hardware for the necessary time, as the costs are far too
high for this. To bridge this didactic gap, we have extended the 3-D real-time
simulation system CIROS [3] to ensure close-to-reality work with all training-relevant
mechatronic systems using virtual models. Afterwards, we have created simulation
models for all available real training systems. The models match the real hardware
concerning all essential properties (electrics, mechanics, etc.) and the behavior.

Fig. 1. Real (left) and virtual (right) training scenario



Figure 1 shows a real working environment for a robotic example on the left side
and the corresponding virtual environment on the right side. The student practices
with the virtual model which only marginally differs in function and behavior from
the real plant. This way, knowledge gathered using simulation can directly be
transferred into practice. Here, it is essential that the correct mechanical, electrical,
and controller engineering details of the plant are simulated in real-time and that
students as well as teachers can interact with the plant while simulation is running.
Another important aspect of this concept, also when looking at costs, is the
possibility to replace selected parts of a real working environment by a virtual
counterpart (Hardware-in-the-Loop, HiL). Furthermore, this allows for the creation of
user-specific training scenarios for which a real working environment only exists in
parts or not at all. Scenarios prepared in this way can then also be used by students to
prepare learning contents outside of the teaching times.

4 Features of the Virtual Learning Environment


The concept we present in this paper is based on an industrial 3-D simulation system
for the Digital Factory. Therefore, we cannot only simulate robot systems, sensors and
technical processes, but also comprehensive complex manufacturing lines which are
normally not available in the field of education. Such a powerful foundation is
necessary to convey the learning content close-to-reality, i.e.
to understand devices (magazine feeders, rotary arms, turntables, conveyor
systems, handling systems, robots, etc.) and their function in reality,
to execute the wiring of actuators and sensors,
to move devices or execute single process steps, and
to create PLC or robot programs and to test their behavior within the whole
course of the process (e.g. correct functional behavior, correct interaction with
sensors, collision detection).
The objective of the virtual learning environment is not only to show an animation of
processes and procedures but to enable students to grasp the real hardware
environment in an almost literal sense. This requires very close-to-reality and detailed
3-D visualization and simulation to enable an efficient Learning by Doing for the
student. For this reason, the student can prepare himself individually so that the
transfer of his knowledge to the real hardware environment is nothing but a final test.
To meet these demands, we have integrated the following modules into the basic
simulation system:
Multi-robot simulation with original Mitsubishi MELFA-BASIC IV robot
programs
PLC simulation with original Siemens STEP7 controller programs
Transport simulation (e.g. work piece carrier transport systems, part
magazines, etc.)
Actuator and sensor simulation up to camera systems for object pose detection
Coupling of real controllers using different interfaces, e.g. OPC

Operation of the simulated plant with the original human-machine interface (HMI)
Adding these features resulted in a simulation system that is able to simulate all
relevant parts of the real MPS hardware to the necessary degree.

Fig. 2. User interface (left) and electric wiring (right)

Besides the close-to-reality simulation, a learning system must enable students to
interact with the virtual model by providing means that match the real plant as closely
as possible. To achieve this, our system provides the following means (compare
Fig. 2):
Original interaction devices like switches or calipers are virtually represented
within the 3-D model and can be operated while simulation is running.
The electric wiring can be changed using a virtual patch panel.
Electrical connections can be observed from one end to the other.
Additionally, special displays allow for monitoring and manually influencing
the I/O behavior.
Mechanical devices can be moved by hand with respect to kinematic
restrictions, velocities, and accelerations.
Single automation devices like sensors can be aligned and adjusted.

Malfunction Simulation

Maintaining and monitoring mechatronic systems necessarily requires practical error
diagnostics and error correction skills. For this purpose, the simulation software has
been extended to give the instructor comprehensive facilities to define error scenarios
and provide these to the students. Afterwards, the students' actions to identify errors
can be evaluated by the instructor, thus supporting an evaluation of the learning
progress. With selected examples, systematic error diagnostics and practical error
correction can then successfully be carried out using the hardware plant.

Fig. 3. Error definition (left) and error protocol (right)

On the left, figure 3 shows how a teacher can add malfunctions during plant
operation or commissioning, e.g. cable breaks, defective electric wirings,
or sensor breakdowns. These malfunctions are added to the learning scenarios
while the simulation is running, to be analyzed and compensated by the students.
These skills are often required in vocational practice and can hardly be trained in
classical learning environments, as a return to an error-free state of the plant is only
possible with a high effort and thus at high costs. On the right, figure 3 shows a
protocol of a student's actions. The protocol supports the teacher in evaluating the
learning progress.

5 Set-Up and Application of the Learning Scenarios


During the implementation of the concept described above, we have developed the
interdisciplinary construction kit CIROS Automation Suite with more than 60
learning scenarios, each consisting of one simulation model and one or more hardware
components. The scenarios are didactically built upon one another. Figure 4
shows how the different educational disciplines of the whole automation spectrum are
supported. This way, an instructor can choose and freely combine the topics which are
relevant for his target audience. Additionally, we have added modeling tools to enable
the instructor to create additional individual learning environments himself.
The Automation Suite is based on the concept of open learning environments, i.e. a
free learning approach influenced by constructivism. This means that different working
media like base knowledge, lexicon, simulations and real hardware are available and
can be arbitrarily combined and used. This open structure has also been applied to
the design of the knowledge base, which consists of an interactive, multimedia-based
knowledge information system.

Fig. 4. Coverage of learning contents

The content is separated into single information units which are linked by
hyperlinks and which consist of texts (concepts, explanations, directives, examples,
etc.), graphics, videos, and animations. Besides the cost-effectiveness, a major
advantage of the approach presented in this paper is the fact that students turn up at
the limited available hardware well prepared, which enables them to work with the
hardware more efficiently. They are familiar with the system not only in a theoretical

way but also practice-oriented. They can concentrate on the essential details which
separate the real and the virtual world, e.g.:
How can a sensor be adjusted with the corresponding hardware tools?
Which safety regulations have to be considered for manual wiring?
How can a robot be moved with the original manual control unit?
How can sub components be added to or removed from the system?
This way, the required amount of time for the first learning steps with the real
hardware is significantly reduced. As a consequence, it now becomes possible despite
all time restrictions to plan and organize courses so that every student understands all
the hardware details.
Essential elements of the CIROS Automation Suite are the ready-made work cell
libraries for the different fields of robotics, mechatronics, and production engineering.
The robotics library contains e.g. predefined robot-based work cells together with the
corresponding technical documentation. This library aims at two objectives:
For the presentation of application examples, every work cell contains an
example solution which the teacher can show and explain.
The work cells are the foundations for the students to solve the different
project tasks, i.e. to execute all project steps in the simulation. These steps
range from teaching positions, through creating robot programs, to the complete
commissioning as well as the final test of the application. Of course, at this
point in time, students have no access to the provided example programs yet.
To permit this flexible use of the libraries, we have implemented two access
modes. On the one hand, students can access the work cells only in a read-only mode
(presentation mode). On the other hand, the teacher can modify the work cells for the
students according to his requirements. The students can then open these modified
work cells in their personal work spaces and continue to work on them.

6 Conclusion
In this paper, we have shown how virtual learning can be used continuously for all
levels of education and training, covering all the branches of automation engineering
like production engineering, mechanical engineering, mechatronics, robotics, etc. An
essential aspect of the presented concept is the seamless integration of detailed 3-D
simulation with the corresponding real work environments. This way, virtually
gathered knowledge can directly be transferred into practice and be verified there.
The synergy effects of virtual and classical learning, which can be obtained with
our hybrid approach, are not restricted to an essential cost reduction for the initial
acquisition of the learning materials. Moreover, the consequent application of the
concept leads to a more efficient use of available hardware resources, as the required
introduction overhead is reduced and the training at the real plant can concentrate on
details which cannot be simulated.

The possibility to use the virtual learning contents outside of the teaching times
leads to a further improvement of the quality of teaching. This blended learning, a
methodological mix of e-learning and attendance learning, joins the advantages of
both learning methods. In contrast to common internet courses, the learning contents
are aimed exactly at the students and can furthermore be modified by the teacher. If
required, students can be independent of time and space and can control their learning
for themselves, both in depth and in breadth.

References
1. Rossmann, J., Stern, O., Wischnewski, R.: Eine Systematik mit einem darauf abgestimmten
Softwarewerkzeug zur durchgängigen Virtuellen Inbetriebnahme von Fertigungsanlagen.
atp 49(7), 52-56 (2007)
2. Rossmann, J., Karras, U., Stern, O.: Ein hybrider Ansatz für die interdisziplinäre Aus- und
Weiterbildung auf Basis eines 3-D Echtzeitsimulationssystems. In: Tagungsband zur 6.
Fachtagung Virtual Reality, pp. 291-300. Magdeburg, Germany (2009)
3. Rossmann, J., Wischnewski, R., Stern, O.: A Comprehensive 3-D Simulation System for the
Virtual Production. In: Proceedings of the 8th International Industrial Simulation
Conference (ISC), Budapest, Hungary, pp. 109-116 (2010)
The Strategy of Implementing e-Portfolio in Training
Elementary Teachers within the Constructive Learning
Paradigm

Olga Smolyaninova and Vladimir Ovchinnikov

Siberian Federal University,
pr. Svobodny 79, Krasnoyarsk, Russian Federation, 660041
smololga@mail.ru

Abstract. The system of training elementary school teachers for work in the
constructive learning paradigm at the Siberian Federal University has
significantly changed after the Applied Bachelor degree in Education was
introduced. The article presents strategies for implementing the e-Portfolio
technology in training first-year students: the e-Portfolio allows academic teachers
to carry out longitudinal research on the competencies being developed in accordance
with the federal Russian educational standards and encourages students'
reflexive work. The e-Portfolio is a learning tool supporting reflection and
individual progress assessment to develop pedagogical competencies within the
framework of the professional training of elementary school teachers for work in
the constructive learning model.

Keywords: e-Portfolio, constructive learning paradigm, reflexion, progress assessment, development.

1 Starting an Experiment in Using e-Portfolio in Training Teachers
The system of training elementary school teachers for work in the constructive learning
paradigm at the Siberian Federal University has significantly changed after the Applied
Bachelor degree in Education was introduced. The e-Portfolio technology was introduced
for training the first-year students. The e-Portfolio allows academic teachers to carry out
longitudinal research on the competencies being developed in accordance with the federal
Russian educational standards and encourages students' reflexive work.

1.1 The Beginning of the Experiment

In 2010 SibFU started an experiment on implementing a new curriculum for training
elementary school teachers in accordance with the educational system of Elkonin and
Davidov.
Basic principles of working out the curriculum:
Principle of integrity in studying disciplines
Logical completion of professional modules


Curriculum comprises 70% practical work and 30% academic study
Use of interactive educational forms: organizational activities, workshops,
projecting seminars, e-Portfolios
Students' practical work starts during their first year at the university and lasts for the
whole period of study (four years). Students' reflexive materials devoted to their
pedagogical practical work are included in their e-Portfolios.

1.2 The Problem of the Research


We have defined the problem of the research as working out a new model of the
Applied Bachelor degree in Education for training elementary school teachers. This is
caused by the necessity of increasing the level of pedagogical education within the
process of modernization taking place in the Russian educational system, and by the
transition to the federal educational standards of the third generation. L. Vygotsky [2]
pointed out that pedagogical science should focus on tomorrow's (not yesterday's)
level in children's development.

1.3 The Goal


The goal of our work within this project is developing the strategy of using the e-
Portfolio technology for training teachers for work in the constructive learning
paradigm. The e-Portfolio is the means for visualizing students' achievements,
reflecting students' pedagogical practical work, and the tool for assessing students'
individual progress.

2 Strategy of Using e-Portfolio in Training Teachers in the Constructive Learning Paradigm
First-year students of the Institute of Education, Psychology and Sociology of the
Siberian Federal University first encounter the e-Portfolio technology while mastering a
course in IT. They get their own accounts at the Institute web-site and start their work
on personal e-Portfolios. Subject teachers use the e-Portfolio method to assess
students' individual progress. The e-Portfolio has become part of the academic
program and a tool to control the process of education. Students enclose their essays
and presentations in their e-Portfolios, teachers assess these works, and the works receive
quantitative or qualitative evaluation. Moreover, students may continue their work on
the materials presented in their e-Portfolios. In this context the e-Portfolio is regarded as
a tool for developing reflexive skills and learning strategies.
Yi-Ping Huang [1] states that e-Portfolios require students to demonstrate their
understanding of course materials. A program portfolio, located within a discipline,
requires students to reflect and provide evidence of their competences across the
discipline. The reflective portfolio is also regarded in terms of three processes:
collection, selection, and reflection. These processes coincide with prevailing
cognitive theory and principles, such as an apprenticeship model of cognitive
development [6].

The first professional reflexive materials of the students' e-Portfolios appear as a result
of the professional orientation activity. This activity takes place at the beginning of
the second semester. The students include in their e-Portfolios reflexive materials
devoted to the formation of the basic notions of the Elkonin-Davidov theory:
development, theoretical thinking, educational activity, educational cooperation,
educational goal, modeling, etc. Among the e-Portfolio artifacts there are analyses of
the basic theories and practical experience in the realization of the constructive learning
paradigm in Russian secondary schools.
During the second semester and after the pedagogical practical training at school,
the students publish reflexive materials in their e-Portfolios in the form of a diary.
Students try to single out and describe the basic characteristics of the school educational
activity and its structure, and examine educational results, marks and effects of the
constructive learning paradigm.
The following years of study include using the e-Portfolio for reflecting different forms of
organizing the learning process within the Elkonin-Davidov system, and special
attention is given to the students' project work and to teachers' and students' assessment
activities.
The professional assessment of a graduate student consists of three components: the
integrated final examination, the student's e-Portfolio including reflexive materials, and
the student's graduation thesis.

2.1 e-Portfolio in Developing Competences (Bachelor Program in Education)


One of the significant reasons to use the e-Portfolio in training primary school
teachers (Bachelor program) is the opportunity to coordinate our graduates' e-
Portfolios with the requirements of prospective employers, i.e. to transform
the learning e-Portfolio into a career e-Portfolio. The career e-Portfolio means that
the reflexive materials are focused on demonstrating the student's competencies
important for the professional activity.
Professional experience of a teacher is an important psychological tool to master
new activities and activities transformation. Realization of the important role of this
knowledge in the professional activity stimulates teachers to find didactic tools of
developing professional competencies: cognitive instruction, including deliberate and
dedicated modeling of the activity presented by the teacher working in the
constructive learning paradigm. Olga Smolyaninova and Ruslan Glukhikh state that
this reflexive activity may be presented in the e-Portfolio [4].
Olga Smolyaninova [5], [3] underlines that an important place in organizing
educational process for the successful realization of the competency approach belongs
to the interactive forms, such as the Bachelors e-Portfolio.
We distinguish the following blocks of competencies necessary for a pedagogical
career at present; a student may start demonstrating these blocks of competencies by
means of the e-Portfolio within his first year at the university (Table 1). We described
the types of artifacts the first-year students are supposed to present in their e-
Portfolios for demonstrating general professional competencies (GPC) and special
psychological and pedagogical competencies (SPPC) according to the new federal
standards of higher education established in Russia in 2010.

Table 1. Competences and activities

Competency | Content | Types of artifacts in the student's e-Portfolio
GPC 1 | Able to take into consideration general and specific regularities and individual peculiar features of psychological development, and peculiar features of behavior regulation and activity of the learners at different age levels. | Reflexive e-Portfolio materials devoted to the results of the pedagogical practical work: essays, students' diaries.
GPC 2 | Ready to use qualitative and quantitative methods in psychological and pedagogical research work. | An essay on the methods of psychological and pedagogical research work.
GPC 3 | Ready to use different methods for diagnosing children's development, communication and activity at different age levels. | Report on the participation in the organizational game activity.
GPC 4 | Ready to use the knowledge of different theories of learning, theories of development and the basic educational programs for the younger school age. | An essay and a presentation on the constructive learning theories.
GPC 5 | Able to organize different types of activities: games, educational, productive, extra-curriculum activities. | Scenarios of lessons, educational and extra-curriculum activities.
GPC 9 | Able to carry out professional activity in the polycultural environment taking into consideration social and cultural peculiarities. | Essays on the peculiar features of the polycultural school environment (describing the school where the student carried out practical work).
GPC 10 | Able to take part in the interdisciplinary and interdepartmental interaction of specialists aimed at solving professional tasks. | Abstracts of conference publications, descriptions of personal experience in projecting workshops.
SPPC 1 | Able to organize collaborative and individual children's activity in accordance with their developmental age levels. | Scenarios of lessons worked out within the constructive learning paradigm containing descriptions of the learners' activities.
SPPC 2 | Able to implement the approved standard methods and technologies allowing to solve diagnostic and development tasks. | Final projects in the sphere of psychological diagnostics and remedial pedagogy.
SPPC 3 | Able to collect and preprocess data, results of psychological observations and diagnostics. | Electronic diaries devoted to the results of the pedagogical practical work.
SPPC 4 | Able to carry out reflexive analysis of the types and results of his professional activities. | Reflexive materials collected throughout the term.

Within the two semesters the students filled their e-Portfolios with materials
illustrating the formation and development of their professional competencies. The
teachers of pedagogical and psychological disciplines assigned tasks to
the students by means of the virtual educational environment, supported the students
through timely feedback, and left comments concerning the students' work in
the students' e-Portfolios.

3 Opportunities and Prospects of the e-Portfolio


Summarizing the results of using the e-Portfolio, we carried out a questionnaire among
the teachers taking part in the experiment. The questionnaire was devoted to the
opportunities and prospects of the e-Portfolio technology. 25 people took part in this
work, among them teachers of psychological, pedagogical and IT courses, and
supervisors of students' pedagogical practical work. The age of the respondents varied
from 28 to 56 years. The average length of teaching experience was 8 years. The results
of the research work are presented in Fig. 1. Figure 1 shows that teachers consider the e-
Portfolio a prospective technology for organizing students' independent study and for the
opportunity to support students by means of feedback (high rating 77%, average rating
17%). The second place is taken by the opportunity of the e-Portfolio to present the
student to a potential employer (high rating 33%, average rating 22%).

Fig. 1. Opportunities and Prospects of the e-Portfolio Technology

The third place is occupied by the opportunity to present the results of the
pedagogical practical work (high rating 39%, average rating 39%). During the
detailed interviews with the teachers taking part in the experiment, we found out that all
three opportunities mentioned above are closely connected with the formation of
professional competencies with the help of the e-Portfolio technology. The results of
the questionnaire allowed us to confirm that the e-Portfolio is considered a means to
form professional competencies: considering the e-Portfolio for developing professional
and ICT competencies, 39% of teachers chose a high rating and 39% an average rating.

4 Conclusion
The research we carried out with the support from the Krasnoyarsk Regional Scientific
Fund (KF-193) indicates that the e-Portfolio technology is a powerful resource for
students' professional development, by demonstrating students' individual progress to
teachers and prospective employers.
The e-Portfolio allows the visualization of the professional competencies of future
teachers, their level and their process of development. The e-Portfolio contributes to the
formation of an effective integrative learning strategy. The e-Portfolio also supports
feedback between students and teachers, assessment of the academic results of mastering
the curriculum, and analysis of pedagogical practical work; it enhances students'
educational technology learning, reflection and collaboration. L. Vygotsky [2] wrote
that one step in education may correspond to one hundred steps in development. In this
context, the e-Portfolio is a tool for visualizing the process of students' development.

Acknowledgments. This research was carried out with the support from Krasnoyarsk
Regional Scientific Fund within the Project KF-193 Increasing Quality and
Accessibility of Education at Krasnoyarsk Region: Formation of the Content
Structure for the eLibrary of the Siberian Federal University for Secondary Schools
(Profile: Natural Sciences).

References
1. Huang, Y.-P.: Sustaining ePortfolio: Progress, Challenges, and Dynamics in Teacher
Education. In: Handbook of Research on ePortfolio (2006)
2. Vygotsky, L.S.: Mind in Society. Harvard University Press, Cambridge (1978)
3. Smolyaninova, O.G.: University Teacher Professional Development and Assessment on the
Basis of ePortfolio Method in Training Bachelors of Education at the Siberian Federal
University. Newsletter of the Fulbright Program in Russia 9, 14-15 (2010)
4. Smolyaninova, O.G., Glukhikh, R.S.: E-Portfolio as the Technology for Developing the
Basic Competencies of a University Student. Journal of Siberian Federal University,
Humanities & Social Sciences 2, 601-610 (2009)
5. Smolyaninova, O.G.: ePortfolio as the Technology for Developing the Basic University
Students' Competencies. In: Proceedings of the XVIII International Conference and
Exhibition on Information Technology in Education, Moscow (2008)
6. Gardner, H.: Assessment in Context: the Alternative to Standardized Testing. In: Gifford, B.,
O'Connor, M. (eds.) Changing Assessments: Alternative Views of Aptitude, Achievement, and
Instruction, pp. 77-119. Kluwer, Boston (1992)
Speech Recognition Based
Pronunciation Evaluation Using
Pronunciation Variations and Anti-models
for Non-native Language Learners

Yoo Rhee Oh, Jeon Gue Park, and Yun Keun Lee

Spoken Language Processing Team, Software Research Laboratory
Electronics and Telecommunications Research Institute (ETRI)
138 Gajeongno, Yuseong-gu, Daejeon 305-700, Korea
{yroh,jgpark,yklee}@etri.re.kr

Abstract. This paper proposes a speech recognition based automatic
pronunciation evaluation method using pronunciation variations and
anti-models for non-native language learners. To this end, the proposed
pronunciation evaluation method consists of (a) a speech recognition step
and (b) a pronunciation analysis step. In the first step, a Viterbi decoding
algorithm is performed with a multiple pronunciation dictionary for non-native
language learners, which is generated by an indirect data-driven method. As a
result, the phoneme sequence, the log-likelihoods of the acoustic models and
anti-models, and the duration of each phoneme are obtained for an input speech.
In the second step, each recognized phoneme is evaluated using the speech
recognition results and the reference phoneme sequence. For the automatic
pronunciation evaluation experiments, we select English as a target language and
Korean speakers as non-native language learners. It is shown from the experiments
that the proposed method achieves an average of the false rejection rate (FRR)
and the false acceptance rate (FAR) of 32.4%, which outperforms an anti-model
based method or a pronunciation variant based method.

Keywords: Pronunciation variation, anti-model, automatic pronunciation evaluation, non-native language learner.

1 Introduction
With the improved performance of speech recognition, there are many attempts
to adopt speech recognition technology in a computer-assisted language learning
(CALL) [1]. As one of a speech recognition based CALL, we propose a speech
recognition based automatic pronunciation evaluation method for non-native
language learners. Especially, we utilize the multiple pronunciation dictionary
for non-native language learners and the anti-models. To this end, the proposed
method consists of two main steps: (a) speech recognition step with a given script


Fig. 1. Overall procedure of the proposed speech recognition based pronunciation evaluation method for non-native language learners

and the corresponding speech data, and (b) a pronunciation analysis step with the
recognition results for a pronunciation evaluation. The speech recognition step obtains
the phoneme sequence, the log-likelihoods of acoustic models and anti-models, and the
duration for each phoneme. The pronunciation analysis step then evaluates each
recognized phoneme using the results of the speech recognition step and the reference
phoneme sequence. For the experiments, we select English as a target language and
Korean speakers as non-native language learners.
The organization of the remainder of this paper is as follows. In Section 2, we
present the overall procedure of the proposed speech recognition based pronunciation
evaluation method for non-native language learners. Next, we describe the generation
of a multiple pronunciation dictionary for non-native language learners and the
pronunciation variants in Section 3, and present an anti-model based pronunciation
analysis method in Section 4. In Section 5, we show the performance of the proposed
pronunciation evaluation method. Finally, we conclude with our findings in Section 6.

2 Overall Procedure of the Proposed Pronunciation Evaluation Method for Non-native Language Learners
The proposed speech recognition based automatic pronunciation evaluation
method for non-native language learners consists of a speech recognition step
and a pronunciation analysis step, as shown in Fig. 1.
In the speech recognition step, a Viterbi decoding is performed using triphone-based
hidden Markov models (HMMs) as acoustic models (H0), a multiple pronunciation
dictionary, and a network generated from the given script when an input speech is
entered. In particular, the multiple pronunciation dictionary is generated by adding the
pronunciation variations that commonly occur among non-native language learners, as
explained in Section 3. As a result, we obtain the recognized phoneme sequence, the
log-likelihood of H0, and the start and end frame indexes for each phoneme. For each
phoneme, the log-likelihood of the corresponding anti-model (H1) is then obtained by
performing a Viterbi decoding using the H1 and the corresponding sub-speech data.
In the pronunciation analysis step, each pronunciation is evaluated by comparing
the recognized phoneme sequence with the reference phoneme sequence and by
calculating the phoneme-level log-likelihood ratio using the log-likelihoods of
H0 and H1 and the duration; this step is explained in Section 4.

3 Generation of a Multiple Pronunciation Dictionary for Non-native Language Learners
To reflect the effects of the mother tongue of non-native language learners, a
multiple pronunciation dictionary for non-native language learners is generated
using a speech database of non-native language learners with hand-labeled
phoneme-level transcriptions. To this end, we use a decision-tree based indirect
data-driven method that was presented in [2], and the multiple pronunciation
dictionary is generated in three steps: (a) acquisition of mispronunciation rule
patterns by non-native speakers, (b) derivation of the pronunciation variant rules
by a decision-tree based indirect data-driven method, and (c) generation of a
multiple pronunciation dictionary for non-native language learners.

3.1 Acquisition of Mispronunciation Rule Patterns for Non-native Language Learners
For each utterance in a speech database of non-native language learners, the
transcribed phoneme sequence is aligned with the reference phoneme sequence
using a dynamic programming algorithm, where the reference phoneme sequence
is obtained from a pronunciation dictionary for native speakers.
Next, the pronunciation rule patterns for each alignment result are obtained
in the form of

L2 + L1 + X + R1 + R2 → Y    (1)

which indicates that a reference phoneme /X/ with the left two phonemes /L1/
and /L2/ and the right two phonemes /R1/ and /R2/ is mapped into a recognized
phoneme /Y/.
After that, we keep only the mispronunciation rule patterns in which the reference
phoneme /X/ is different from the recognized phoneme /Y/.
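As a rough illustration of this step, the following sketch aligns a transcribed phoneme sequence against the reference and emits the context patterns of Eq. (1) for substituted phonemes; difflib's matcher stands in for the dynamic programming alignment, and the phoneme strings are illustrative.

# Minimal sketch: collect mispronunciation rule patterns
# (L2, L1, X, R1, R2) -> Y from an alignment of the reference and
# transcribed phoneme sequences. difflib approximates the DP alignment.
from difflib import SequenceMatcher

def mispronunciation_patterns(reference, transcribed, pad="#"):
    ref = [pad, pad] + reference + [pad, pad]   # pad for edge contexts
    patterns = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(
            a=reference, b=transcribed).get_opcodes():
        if tag != "replace":                    # keep only substitutions
            continue
        for k, (x, y) in enumerate(zip(reference[i1:i2], transcribed[j1:j2])):
            i = i1 + k + 2                      # index into padded reference
            l2, l1, r1, r2 = ref[i-2], ref[i-1], ref[i+1], ref[i+2]
            patterns.append(((l2, l1, x, r1, r2), y))
    return patterns

print(mispronunciation_patterns(["DH", "IH", "S"], ["D", "IH", "S"]))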

3.2 Derivation of the Pronunciation Variant Rules for Non-native Language Learners
The pronunciation variant rules for non-native language learners are derived
from the collected mispronunciation rule patterns by generating decision trees.

Fig. 2. The procedure of the pronunciation analysis based on the pronunciation variants for non-native language learners and the anti-models

In other words, a decision tree for each phoneme (X) is generated using the
mispronunciation rule patterns corresponding to X. The attributes
of the decision tree for X are the two left phonemes of X (L1 and L2) and
the two right phonemes of X (R1 and R2). The output class of the decision tree
for X is determined as a phoneme commonly produced by non-native language
learners. After that, each decision tree is converted into the
equivalent pronunciation variant rules for non-native language learners.
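A minimal sketch of this derivation, under the assumption that a standard CART learner approximates the decision-tree induction used by the authors: the tree for one reference phoneme is fit over the (L2, L1, R1, R2) context attributes, and its printed branches correspond to variant rules. The training patterns are illustrative.

# Minimal sketch: derive variant rules for one reference phoneme by fitting
# a decision tree over its context attributes; each branch of the printed
# tree corresponds to a pronunciation variant rule.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

patterns = [  # contexts of reference phoneme /DH/ and the realized phoneme
    ({"L2": "#", "L1": "#", "R1": "IH", "R2": "S"}, "D"),
    ({"L2": "N", "L1": "AH", "R1": "ER", "R2": "#"}, "D"),
    ({"L2": "#", "L1": "OW", "R1": "AH", "R2": "#"}, "DH"),
]
vec = DictVectorizer(sparse=False)
X = vec.fit_transform([context for context, _ in patterns])
y = [realized for _, realized in patterns]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))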

3.3 Generation of a Multiple Pronunciation Dictionary for Non-native Language Learners
The multiple pronunciation dictionary for non-native language learners is expanded
from a multiple pronunciation dictionary for native speakers by adding all the
pronunciation variants produced by non-native language learners. The pronunciation
variants for non-native speakers are generated by applying the derived pronunciation
variant rules for non-native speakers to the pronunciation sequences of each word in
the multiple pronunciation dictionary for native speakers.

4 Pronunciation Variants and Anti-model Based Pronunciation Analysis
The pronunciation analysis is performed in two steps: (a) comparison between
the recognized and the reference phoneme sequences and (b) comparison between
the phoneme-level log-likelihood ratio normalized by the duration and a
predefined threshold, as shown in Fig. 2.
The recognized phoneme sequence is first compared with all possible phoneme
sequences obtained from the given script and the multiple pronunciation dictionary

for native speakers using a dynamic programming algorithm. As a result, the
best matched reference phoneme sequence is obtained by selecting the phoneme
sequence having the smallest distance via the dynamic programming algorithm.
Next, the recognized phoneme sequence is aligned with the selected reference
phoneme sequence, again using a dynamic programming algorithm.
After that, each phoneme of the recognized phoneme sequence is examined
based on the aligned result. That is, a phoneme of the recognized phoneme
sequence is determined to be a wrong pronunciation if the phoneme is different
from that of the selected reference phoneme sequence. Otherwise, the normalized
phoneme-level log-likelihood ratio (PLLR) of the phoneme is calculated by the
following equation:
normalized PLLR = (log P(H0) - log P(H1)) / d    (2)
where log P(H0) and log P(H1) are the log-likelihood of the acoustic model and
that of the anti-model, respectively, and d indicates the duration between the
start frame index and the end frame index. If the normalized PLLR is smaller
than a predefined threshold, the phoneme is determined to be a wrong
pronunciation; otherwise, it is determined to be a correct pronunciation.
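The following sketch condenses this analysis into a single scoring function; the threshold value and the log-likelihoods are illustrative, since the paper does not state the operating point used.

# Minimal sketch of the phoneme-level scoring in Eq. (2): normalize the
# log-likelihood ratio between acoustic model H0 and anti-model H1 by the
# phoneme duration, then threshold it. Threshold and values are illustrative.
def evaluate_phoneme(recognized, reference, loglik_h0, loglik_h1,
                     start_frame, end_frame, threshold=-0.5):
    if recognized != reference:           # mismatch after DP alignment
        return "wrong"
    duration = end_frame - start_frame    # d, in frames
    pllr = (loglik_h0 - loglik_h1) / duration
    return "correct" if pllr >= threshold else "wrong"

print(evaluate_phoneme("R", "R", -420.0, -450.0, 100, 130))  # correct
print(evaluate_phoneme("L", "R", -420.0, -450.0, 100, 130))  # wrong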

5 Experiments
In order to evaluate the proposed speech recognition based pronunciation
evaluation method, we select English as the target language and Korean adult
speakers as non-native language learners. Section 5.1 describes the baseline
automatic speech recognition (ASR) system, and Section 5.2 shows the performance
of the proposed pronunciation evaluation method.

5.1 Baseline ASR System


As speech recognition features, we extract a 39-dimensional feature vector by
computing 12 mel-frequency cepstral coefficients (MFCCs) with logarithmic
energy for every 10 ms analysis frame and by concatenating their first and second
derivatives [3].
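A minimal front-end sketch, assuming librosa as the feature extractor (the authors' toolchain is not specified); coefficient 0 stands in for the log-energy term, which is a simplification of the 12-MFCC-plus-energy layout, and the input file name is hypothetical.

# Minimal sketch: 39-dimensional features per 10 ms frame, i.e. 13 static
# coefficients plus their first and second derivatives.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)        # hypothetical file
hop = int(0.010 * sr)                                  # 10 ms frame shift
static = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
delta = librosa.feature.delta(static)                  # first derivatives
delta2 = librosa.feature.delta(static, order=2)        # second derivatives
features = np.vstack([static, delta, delta2])          # shape: (39, frames)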
For the acoustic models (H0) of the baseline ASR system, we train cross-word
triphone-based HMMs with a 3-state left-to-right topology and 4-mixture Gaussian
distributions on English utterances spoken by native speakers. In other words, the
monophone-based HMMs are expanded into triphone-based HMMs, and then the
states of the triphone-based HMMs are tied by employing a decision tree [4]. For the
anti-models (H1), we train monophone-based HMMs with a 3-state left-to-right
topology and 4-mixture Gaussian distributions on English utterances spoken by
Korean adult speakers. For the pronunciation dictionary of the baseline ASR system,
each pronunciation of a word is taken from a multiple pronunciation dictionary for
native speakers.

5.2 Performance of the Proposed Pronunciation Evaluation Method

We first selected 10 confusable English phonemes for Korean speakers: /F/,
/V/, /TH/, /DH/, /SH/, /Z/, /CH/, /JH/, /R/, /ER/, /L/, and /W/.1 Next,
each transcribed phoneme sequence of an English speech database by Korean
adult speakers was aligned with the reference phoneme sequence using a dynamic
programming algorithm, and the mispronunciation rule patterns were ob-
tained as described in Section 3.1. Among the collected mispronunciation rule
patterns, we selected only the patterns corresponding to the 10 confusable En-
glish phonemes. After applying a decision tree algorithm with the selected pat-
terns, we obtained 211 English pronunciation variant rules by Korean adult
speakers and then generated a multiple pronunciation dictionary for Korean
adult speakers using the 211 variant rules. Using the baseline ASR system and
the multiple pronunciation dictionary for Korean adult speakers, we performed
Viterbi decoding for each utterance of a test set consisting of 8,770
English utterances spoken by Korean adult speakers. Each recognized phoneme
sequence was compared with the reference phoneme sequences in a multiple pro-
nunciation dictionary for native speakers. Next, each recognized phoneme was
evaluated as shown in Fig. 2.
To compare the performance of the pronunciation evaluation methods,
we measured a false rejection rate (FRR) and a false acceptance rate (FAR)
for each confusable phoneme and then averaged the FRRs and FARs over all
confusable phonemes, respectively. FRRp and FARp of a phoneme, p,
were calculated as

FRRp = Np,correct,wrong / Np,correct ,    (3)
FARp = Np,wrong,correct / Np,wrong ,      (4)

where Np,correct and Np,wrong are the number of phonemes that were correctly
uttered and the number of phonemes that were incorrectly uttered, respectively,
for p, and Np,correct,wrong and Np,wrong,correct are the number of phonemes
that were correctly uttered but evaluated as wrong and the number of phonemes
that were incorrectly uttered but evaluated as correct, respectively, for p.
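Eqs. (3) and (4) amount to simple counting; the sketch below assumes each evaluated phoneme is paired with two flags (actually correct? / judged correct?) and uses invented data:

    # Compute (FRR, FAR) for one phoneme from (actually_correct, judged_correct) pairs.
    def frr_far(results):
        n_c = sum(1 for actual, _ in results if actual)
        n_w = len(results) - n_c
        n_c_judged_w = sum(1 for actual, judged in results if actual and not judged)
        n_w_judged_c = sum(1 for actual, judged in results if not actual and judged)
        return n_c_judged_w / n_c, n_w_judged_c / n_w

    results = [(True, True), (True, False), (False, False), (False, True)]
    print(frr_far(results))  # (0.5, 0.5) for this toy set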
Table 1 shows a performance comparison of three pronunciation evaluation
methods employing either the multiple pronunciation dictionary for non-native
language learners or anti-models. The first row shows that the average
FRR and FAR were 52.6% and 20.1%, respectively, for an anti-
model based pronunciation evaluation method. In addition, the average FRR
and FAR were 17.1% and 59.3%, respectively, for a pronunciation
evaluation method employing the multiple pronunciation dictionary for non-
native language learners. Moreover, it can be seen from the third row that the
1 All pronunciation symbols in this paper are denoted in the form of the two-letter
uppercase ARPAbet [5].

Table 1. Performance comparisons of the average false rejection rate (FRR) and the
average false acceptance rate (FAR) for the pronunciation evaluation methods em-
ploying either a multiple pronunciation dictionary for non-native language learners or
anti-models

Multiple pronunciation dictionary   Anti-models   FRR (%)   FAR (%)   (FRR+FAR)/2 (%)
for non-native speakers
X                                   O             52.6      20.1      36.3
O                                   X             17.1      59.3      38.2
O                                   O             32.1      32.7      32.4

average FRR and FAR were 32.1% and 32.7%, respectively, for the proposed
method employing both anti-models and the multiple pronunciation dictionary
for non-native language learners.

6 Conclusion
This paper proposed an automatic pronunciation evaluation method based on
speech recognition using a multiple pronunciation dictionary for non-native
language learners and anti-models. In particular, the multiple pronunciation
dictionary for non-native language learners was automatically generated by an
indirect data-driven method, so the proposed method could cover the effects
of the mother tongue of non-native learners. The proposed pronunciation
evaluation method was performed in two steps: (a) speech recognition and (b)
pronunciation analysis. By performing speech recognition using anti-models and
the multiple pronunciation dictionary for non-native language learners, we ob-
tained the phoneme sequence, the log-likelihood of the acoustic models, that
of the anti-models, and the duration of each phoneme of the recognized sequence.
Using the speech recognition results, each phoneme was then evaluated by com-
paring the phoneme sequences and by thresholding the normalized phoneme
log-likelihood ratio. In the automatic English pronunciation evaluation experiments
with Korean adult speakers, the proposed pronunciation evaluation method achieved
an average FRR and FAR of 32.1% and 32.7%, respectively, outperforming both
the anti-model based method and the pronunciation variant based method.

Acknowledgements. This work was supported by the Industrial Strategic
Technology Development Program, 10035252, Development of dialog-based spon-
taneous speech interface technology on mobile platform, funded by the Ministry
of Knowledge Economy (MKE, Korea).

References
1. Eskenazi, M.: An overview of spoken language technology for education. Speech
Commun. 51, 832–844 (2009)
2. Kim, M., Oh, Y.R., Kim, H.K.: Non-native pronunciation variation modeling using
an indirect data driven method. In: ASRU, Kyoto, Japan, pp. 231–236 (2007)

3. Lee, S.J., Kang, B.O., Jung, H.-Y.: Statistical model-based noise reduction approach
for car interior applications to speech recognition. ETRI Journal 32, 801–809 (2010)
4. Young, S.J., Woodland, P.C.: Tree-based state tying for high accuracy acoustic
modeling. In: ARPA Human Language Technology Workshop, Plainsboro, NJ, pp.
307–312 (1994)
5. Deller, J.R., Hansen, J.H.L., Proakis, J.G.: Discrete-Time Processing of Speech
Signals. IEEE Press, New York (2000)
Computer Applications in Teaching and Learning:
Implementation and Obstacles among Science Teachers

Abdalla M.I. Khataybeh and Kholoud Al Sheik

drkhataybah@yahoo.com

Abstract. This study aimed at investigating the degree of use of computer
applications in teaching and learning among science teachers at the Irbid
Directorate of Education and the obstacles they face. A questionnaire consisting
of 30 items (five-point Likert scale) was constructed; validity was established
using a panel of judges and reliability using Cronbach's Alpha for internal
consistency, and both were found suitable for this questionnaire. Means,
standard deviations and ANOVA were used according to the variables of the study.
The results showed a lack of use of computer applications among science
teachers at the Irbid Directorate of Education, and the teachers reported
different sources of obstacles to using computer applications. The study
recommends equipping science teachers with knowledge and practice in dealing
with these programs, establishing new computer laboratories, and purchasing
more software.

1 Purpose and Background of the Study


Changes in the teaching and practice of education have been brought about by
technology in all educational institutions (Lont, MacGregor and Willett, 1998;
Nicholson and Williams, 1994; Green, 1999). Despite such curricular and
technological developments, educators appear to have lagged behind in adopting new
teaching and learning strategies (Becker and Watts, 1999; Adler and Milne, 1998;
Albrecht and Sack, 2000). Technology-supported innovation does provide fruitful
possibilities for improved outcomes for students. This study provides an approach for
evaluating possible choices for enhancing student learning in modern
technology-supported programs. The results will be of interest to educators interested
in promoting effective educational innovation, especially since Jordan started
implementing the Educational Reform for Knowledge Economy (ERFKE) in line with the
aims of the Ministry of Education (MoE). ERFKE means that learners are going to use,
implement, and create knowledge by using information and communication
technology (ICT), with the learner as the key person in this process. This study
examines the main motivations for adopting technological innovation in education. It
also outlines the main obstacles to value being added by educational innovation. The
perspectives of students and teachers are considered, and the implications
for educators are discussed. In adopting learning innovations it is crucial that changes match the
needs of learners. The final part concludes that the opportunities for innovation
should be grasped and that obstacles can be overcome. The quantitative aspects of
science are perceived as difficult and sometimes abstract. It is possible that these
problems can be solved to a great extent with the use of computers and some

innovations in the teaching process. Computer applications are used effectively at
Jordanian schools, based on experience in the corporate sector and
feedback from teachers. These include popular software such as Excel, software that
comes with books, paid electronic databases and free internet resources; some
students receive assignments and give feedback through email. The whole process
enhances the learning process for students. The interactive process may increase
science teachers' knowledge and help improve their teaching techniques.

2 Computer Applications Areas in Schools


Teaching in all disciplines has been subjected to unprecedented scrutiny and pressure
to change (Ramsden, 1998, 1992). Chong (1997) maintains that there are two major
goals of integrating technology into education: to prepare students for computer usage
in their prospective workplace and to enhance student learning. In their study of 104
nationally funded technology innovations in higher education in Australia, Alexander
and McKenzie (1998) identified two further popular motivations for integrating
technology: to enhance departmental or institutional reputation and to improve
productivity for students, academics and departments. Alexander and McKenzie
(1998) reported reputation as the only outcome (34% of cases) achieved in excess of
what was expected (32% of cases). Developing technology capabilities for students appears
quite positive (Lont, MacGregor and Willett, 1998; Brooksbank, Clark, Hamilton and
Pickernell, 1997; Goggin, Finkenberg and Morrow, 1997; Leidner and Jarvenpaa,
1995; Aiken and Hawley, 1995; Baker, Hale and Gifford, 1997; Scigliano, Levin and
Horne, 1996). In regard to productivity, Alexander and McKenzie (1998) report
serious underachievement. The most common motive claimed for technology-supported
innovation in education is to enhance student learning (Alexander and McKenzie, 1998;
McKenzie, 1977; Beard and Hartley, 1984; Russell, 1999; McQuillan, 1994; Mykytyn, 2007;
Saad and Kira, 2004; Bissell, McKerlie, Kinane and McHugh, 2003; Byrom, 2002;
Ghosal and DeThomas, 1991).

Aims of the Study


This study aims to determine the effectiveness of using computer applications in
the teaching and learning process among science teachers, the degree to which
these applications are implemented, and the obstacles teachers face while using
them.

Questions of the Study


This study tried to answer the following questions:
Q1: To what extent are science teachers implementing ICT in their teaching?
Q2: What are the obstacles facing science teachers in using computer applications?
Q3: Are there statistically significant differences at (α ≤ 0.05) in their implementation
with regard to their computer usage, specialization and experience?
Q4: Are there statistically significant differences at (α ≤ 0.05) in the obstacles facing
the teaching staff with regard to their computer usage, specialization and experience?
Q5: Is there a correlation between obstacles and implementation of computer
applications among science teachers?

3 Methodology
A questionnaire was developed using a nine-step process for Likert-type scales
according to Koballa (1984); the full questionnaire can be requested from the authors.
The sample consisted of 52 science teachers selected randomly.
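As a minimal sketch of the reported internal-consistency check, assuming the responses are arranged with one row per teacher and one column per item, Cronbach's Alpha could be computed as follows (the scores below are invented, not the study's data):

    import numpy as np

    # Cronbach's Alpha: (k / (k - 1)) * (1 - sum of item variances / total variance).
    def cronbach_alpha(scores):
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                              # number of items
        item_vars = scores.var(axis=0, ddof=1).sum()     # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)       # variance of respondents' totals
        return (k / (k - 1)) * (1 - item_vars / total_var)

    scores = [[4, 5, 3], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
    print(round(cronbach_alpha(scores), 3))              # about 0.953 for this toy data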

Results of the Study


Results related to question 1: To what extent are science teachers implementing
ICT in their teaching?

Table 1. Percentages of respondents, Means and Standard deviation for each statement

Table (1a). Percentages of respondents, Means and Standard deviation for each domain

Table (1) shows the mean and standard deviation for each statement. According to
the criteria, each statement with a mean of less than 3.00 is considered low performance;
23 statements were classified as low performance, and 6 statements with means
between 3.00 and 3.50 are considered satisfactory performance. The highest
performance was for statements (13, 19, 20, 21, 22, 23, and 25), and the lowest
performance was for statements (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 24, 26, 27,
28, 29, and 30). Table (1a) shows that only one domain had a mean above 3.00
while 4 domains had means below 3.00; the mean for the whole test was also below
3.00, which indicates low performance.

Results related to question 2


What are the obstacles facing science teachers in using computer applications? Table (2)
shows means and standard deviations for science teachers' responses to each statement
and for each domain.

Table 2. Means and Standard Deviation of the obstacles for each domain

Table (2) shows that the highest obstacle rating was in domain (5), Computer
Applications for Project Work, with a mean of 3.12 out of 5; PowerPoint and Hands-on
Learning had the second highest rating (3.104 out of 5). These two domains were
accepted because their mean ratios exceeded 3. The third was Excel Template
Sheets with 2.964 out of 5, the fourth Excel Spreadsheet Models with 2.84
out of 5, and the lowest obstacle rating was for Internet Resources with 2.59 out of 5.

Results related to question 3: Are there statistically significant differences at (α ≤ 0.05)
in implementation with regard to teachers' computer usage, specialization and
experience?

Table 3. Means and standard deviations for science teachers' implementation of computer
applications

Independent Variable   Levels of IV         Mean    Std. Dev.
Computer usage         Less than one year   2.804   0.79
                       More than one year   2.373   1.07
Experience             1-5 years            2.602   1.03
                       More than 5 years    2.548   0.92
Specialization         Chemistry            2.449   0.95
                       Others               2.718   0.98

This table shows the lack of science teachers' implementation of these applications.
It also shows that there is no difference between chemistry teachers and other
teachers, or between teachers with more and less than five years of experience.

Table 4. ANOVA analysis for science teachers' implementation of computer applications

Source           Sum of Squares   df   Mean Square   F       Sig.
Computer usage   1.844            1    1.844         1.982   0.170
Experience       0.595            1    0.595         0.640   0.431
Specialization   0.388            1    0.388         0.417   0.524
Error            26.053           28   0.930
Total            28.513           31
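A minimal sketch of the corresponding one-way ANOVA for the computer-usage factor, with invented group scores standing in for the study's data:

    from scipy import stats

    # Per-group lists of teachers' mean implementation scores (toy numbers).
    less_than_year = [2.9, 2.7, 3.1, 2.5]
    more_than_year = [2.2, 2.6, 2.3, 2.4]
    f_stat, p_value = stats.f_oneway(less_than_year, more_than_year)
    print(f_stat, p_value)   # no difference at alpha = 0.05 if p_value > 0.05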

Results related to question 4: Are there statistically significant differences at (α ≤ 0.05)
in the obstacles facing science teachers with regard to their computer usage,
specialization and experience?

Table 5. ANOVA Analysis for the obstacles of science teachers in using computer applications

Source           Sum of Squares   df   Mean Square   F       Sig.
Computer usage   0.256            1    0.256         0.227   0.638
Experience       0.797            1    0.797         0.708   0.407
Specialization   0.491            1    0.491         0.436   0.514
Error            31.532           28   1.126
Total            32.836           31
(α ≤ 0.05)

Results related to question 5: Is there a correlation between obstacles and
implementation of computer applications among science teachers?

Table 6. Correlation coefficients between implementation and obstacles in using computer
applications

Domain                                        Correlation
Using computers for statistical analysis      -0.37
Integrating computers in students' learning   -0.17
PowerPoint and hands-on learning              -0.01
Using internet resources                      -0.17
Computer application for project work         -0.32
All items                                     -0.14
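The correlations in Table 6 are plain Pearson coefficients; a minimal sketch with invented paired scores, not the study's data:

    from scipy import stats

    # Paired per-teacher scores: implementation vs. perceived obstacles (toy numbers).
    implementation = [2.1, 2.8, 3.0, 2.4, 2.6]
    obstacles = [3.4, 2.9, 2.7, 3.2, 3.0]
    r, p = stats.pearsonr(implementation, obstacles)
    print(round(r, 2))   # a negative r means more obstacles, less implementation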

4 Discussion and Recommendations


Table (1) showed that science teachers' implementation is very low; this weakness is
caused by the lack of teaching aids in the classrooms, the high number of students and
the lack of computer laboratories. Table (2) showed that obstacles were concentrated
in almost all computer application domains. In the open question of the questionnaire,
science teachers mentioned that obstacles stemmed from the lack of knowledge, the
lack of teaching aids in the classrooms, the high number of students and the lack of
computer laboratories. One of the chemistry teachers said: "Obstacles stemmed from
my lack of knowledge, not an institutional lack of equipment; therefore there is no
contradiction between obstacles and the use of the different applications. I do
recommend that most science teachers should attend training courses in computer
applications in teaching". Table (4) showed that there is no statistical difference
with regard to computer usage, specialization and teachers' experience, because the
implementation obstacles faced by the teaching staff as a whole are similar. Table (5)
showed that there are no significant differences for all variables, because the obstacles
are similar for all science teachers regardless of their experience and specialization. As
shown in table (6), there is a negative correlation between implementation and obstacles,
which could be due to the lack of knowledge and practice among science teachers and
the lack of equipment and software in the classrooms.

5 Recommendations

In light of the findings of the study, the following recommendations can be offered:
equipping the laboratories with enough software and hardware; training science
teachers in how to use the software, including sophisticated software such as
Excel spreadsheets, template sheet models, PowerPoint, SPSS and the Crocodile
program; and equipping classrooms with data-show projectors and PCs to allow
students to present their projects.

References

1. Aiken, M.W., Hawley, D.D.: Designing an electronic classroom for large college courses.
T.H.E. Journal 23(2), 76–78 (1995)
2. Albrecht, W.S., Sack, R.J.: Accounting Education: Charting the Course through a
Perilous Future. American Accounting Association (August 2000)
3. Alexander, S., McKenzie, J.: An Evaluation of Information Technology Projects
for University Learning. Australian Government Printing Service, Canberra (1998); Baker,
W., Hale, T., Gifford, B.R.: Technology in the Classroom: From Theory to Practice.
Educom Review 32(5), 42–50 (1997); Beard, R., Hartley, J.: Teaching and Learning in
Higher Education, 4th edn. Paul Chapman Publishing, London (1984)
4. Bissell, V., McKerlie, R.A., Kinane, D.F., McHugh, S.: Teaching periodontal
pocket charting to dental students: a comparison of computer assisted learning and
traditional tutorials. British Dental Journal 195(6), 333–336 (2003)
5. Byrom, E.: Evaluating the impact of technology (2002),
http://www.serve.org/_downloads/publications/Vol5.3.pdf
(retrieved May 12, 2011); Chong, V.K.: Student Performance and Computer Usage: A
Synthesis of Two Different Perspectives. Accounting Research Journal 10(1), 90–97 (2002)
6. Dunn, J.G., Kennedy, T., Bond, D.J.: What skills do science graduates need? Search 11,
239–242 (1980)
7. Freeman, M.A., Capper, J.M.: Obstacles and opportunities for technological innovation
in business teaching and learning (2007),
http://www.heacademy.ac.uk/assets/bmaf/documents/publication
s/IJME/Vol1no1/freeman_tech_innovation_in_Tandl.pdf
(retrieved April 17, 2011)
8. Ghosal, M., Arthur, D.: An Electronic Spreadsheet Solution to Simultaneous Equations in
Financial Models. Financial Practice and Education 1(2), 93–98 (1991)
9. Goggin, N.L., Finkenberg, M.E., Morrow Jr., J.R.: Instructional Technology in
Higher Education Teaching. Quest 49(3), 280–290 (1997)
10. Green, K.C.: Campus computing 1998: the ninth national survey of desktop computing and
information technology in higher education. The Campus Computing Project, California
(1999); Koballa, T.R.: Designing a Likert-type scale to assess attitudes towards energy
conservation. Journal of Research in Science Teaching 20, 709–723 (1984)
11. Leidner, D.E., Jarvenpaa, S.L.: The use of information technology to enhance
management school education: A theoretical view. MIS Quarterly 19(3), 265–291 (1995)
12. Lont, D., MacGregor, A., Willett, R.: Technology and the Accounting Profession.
Chartered Accountants Journal of New Zealand 77(1), 31–37 (1998)
13. McKenzie, J.: Computers in the teaching of undergraduate science. British Journal
of Educational Technology 8(3), 214–224 (1977)
14. McQuillan, P.: Computers and pedagogy: the invisible presence. Journal of
Curriculum Studies 26(6), 631–653 (1994)
15. Mykytyn, P.P.: Educating our Students in Computer Applications Concepts: A Case
for Problem-Based Learning. Journal of Organizational and End User Computing 19(1),
51–61 (2007)
16. Nicholson, A.H.S., Williams, B.C.: Computer use in accounting, finance and
management teaching amongst universities and colleges: a survey. Account 6(2), 19–27
(1994)
Author Index

Abler, Randal 287
Al Sheik, Kholoud 353
Ansari-Ch., Fazel 279
Bagherzade, Behzad 313
Benedicenti, Luigi 295
Bergh, Luis G. 215
Bresfelean, Mihaela 321
Bresfelean, Vasile Paul 321
Cabral, Jorge 41
Cardoso, Paulo 41
Chang, Guiran 133
Chen, Chao-Jung 63
Chen, Li-Chiou 9
Cheng, Weiping 169, 179
Cheon, Yee-Jin 33
Choi, Jong-Wook 33
Chuang, Hsueh-Hua 63, 71
Corradini, Matteo 265
Coutras, Constantine 1
Coyle, Edward 287
DeMillo, Rich 287
Deng, De-sheng 17
Drugus, Ioachim 109
Ekpanyapong, Mongkol 41
Etaati, Atekeh 303
Fathi, Madjid 279
Fitzgerald, Sue 55
Funk, Burkhardt 117
Garbo, Roberta 265
Ge, Luobao 207
Genovese, Elisabetta 265
Gong, Xiugang 77
Guaraldi, Giacomo 265
Guo, Fengying 55
Hassanpour, Badiossadat 223, 231
Huang, Chi-Jen 63, 71
Hunter, Michael 287
Huo, Jiuyuan 83, 89
Ivey, Emily 287
Jia, Jie 133
Jia, Yongxin 77
Jin, Xi 239
Keyvanpour, Mohammad Reza 303, 313
Khataybeh, Abdalla M.I. 353
Khine, Win Kay Kay 143
Khobreh, Marjan 279
Kline, Richard 1
Koch, Thorsten 329
Lacurezeanu, Ramona 321
Lee, Jae-Seung 33
Lee, Yun Keun 345
Lehmann, Mark 117
Li, Alberto Quattrini 265
Li, Qiuying 247, 257
Liu, Chunxiao 133
Liu, Han-Chin 63, 71
Liu, Runjiao 189, 197
Liu, ZhengHua 169, 179
Lwin, Zin Mar 143
Majid, Shaheen 143
Mangiatordi, Andrea 265
Mansourifar, Hadi 313
Memon, Nasrullah 95
Mendes, Jose 41
Monteiro, Joao 41
Mukati, M. Altaf 23
Murthy, Narayan 1
Negri, Silvia 265
Niemeyer, Peter 117
Nizamani, Sarwat 95
Oh, Yoo Rhee 345
Oo, Ma Zar Chi 143
Ovchinnikov, Vladimir 339
Pang, Ya-jun 153
Park, Jeon Gue 345
Petty, Sheila 295
Pleite, Jorge 103
Qiu, Jin 77
Qu, Hong 83, 89
Ren, Yan 179
Riegel, Christian 295
Robinson, Katherine 295
Rossmann, Juergen 329
Salas, Rosa Ana 103
Sbattella, Licia 265
Shen, Shouyun 239
Shi, Lei 189, 197, 207
Shi, Nan 197
Shi, Ying 239
Shimada, Kazutoshi 161
Shin, Hyun-Kyu 33
Smolyaninova, Olga 339
Spalie, Nurhananie 223, 231
Stern, Oliver 329
Sun, Lina 133
Takahashi, Kenichi 161
Tan, Yubo 49
Tao, Lixin 1, 9
Tavares, Adriano 41
Tedesco, Roberto 265
Ueda, Hiroaki 161
Utaberta, Nangkula 223, 231
Wang, Jian 247, 257
Wang, Ping 55
Weidner, Stefan 117
Wischnewski, Roland 329
Xie, Mingjing 189, 197, 207
Yang, Wen 77
Yao, Huali 49
Ying, Zhang 207
Zaharim, Azami 223, 231
Zarate Silva, Victor Hugo 127
Zhang, Shaoquan 77
Zhang, Ying 189
Zhang, Yuanyuan 273
Zhao, Pengfei 169
Zheng, Gui-jun 17
Zhou, Rui 179
Zhou, Wei 17
