Advanced Information Technology in Education

Editor
Khine Soe Thaung
Society on Social Implications of Technology and Engineering, Maldives
Organizing Chairs
Khine Soe Thaung, Society on Social Implications of Technology and Engineering, Maldives
Bin Vokkarane, Society on Social Implications of Technology and Engineering, Maldives

Program Chairs
Tianharry Chang, University Brunei Darussalam, Brunei Darussalam
Wei Li, Wuhan University, China

Local Chair
Liu Niu, Beijing Sport University, China

Publication Chair
Khine Soe Thaung, Society on Social Implications of Technology and Engineering, Maldives

Program Committees
Tianharry Chang, University Brunei Darussalam, Brunei Darussalam
Kiyoshi Asai, National University of Laos, Laos
Haenakon Kim, ACM Jeju ACM Chapter, Korea
Yang Xiang, Guizhou Normal University, China
Minli Dai, Suzhou University, China
Jianwei Zhang, Suzhou University, China
Zhenghong Wu, East China Normal University, China
Tatsuya Adue, ACM NUS Singapore Chapter, Singapore
Aijun An, National University of Singapore, Singapore
Yuanzhi Wang, Anqing Teachers' University, China
Yiyi Zhouzhou, Azerbaijan State Oil Academy, Azerbaijan
Integrating Current Technologies into Graduate Computer Science Curricula
L. Tao et al.
1 Introduction
Master of Science in Computer Science (MS-CS) programs are critically important in producing competitive IT professionals and preparing students for doctoral research. A major challenge is how to integrate the latest computing technologies into MS-CS programs without compromising the foundational computer science education. This paper shares Pace University's study and experience in renovating its MS-CS program to address this challenge.
The study started by identifying the most important advances in computing over the past decade and their relationship to the fundamental computer science concepts and theory. In particular, Internet and web technologies, cloud computing, mobile computing, and Internet/web security were analyzed. It was concluded that they are all based on recursive application of the fundamental computer science concepts; XML is the new fundamental subject supporting everything from data integration and transformation to the implementation of web services and cloud
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 1–7. © Springer-Verlag Berlin Heidelberg 2012
computing; abstraction and divide-and-conquer are the theory underlying the layered
web architecture, distributed system integration, component-based software
engineering, and server-based thin-client computing.
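The role the text assigns to XML, supporting data integration and transformation across heterogeneous systems, can be illustrated with a minimal sketch using Python's standard library. The order document and all field names here are hypothetical, invented for the example:

```python
import xml.etree.ElementTree as ET

# Hypothetical order record exported by one system in XML.
SOURCE = """
<order id="1001">
  <customer>Acme Corp</customer>
  <item sku="X-42" qty="3" price="19.99"/>
  <item sku="Y-07" qty="1" price="5.50"/>
</order>
"""

def transform(xml_text: str) -> dict:
    """Transform the XML document into the flat record another system expects."""
    root = ET.fromstring(xml_text)
    total = sum(int(i.get("qty")) * float(i.get("price"))
                for i in root.findall("item"))
    return {
        "order_id": root.get("id"),
        "customer": root.findtext("customer"),
        "total": round(total, 2),
    }

record = transform(SOURCE)
```

The same parse-and-reshape pattern underlies XSLT pipelines and web-service payload handling; only the target schema changes.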
Another major challenge is how to integrate the current technologies into the MS-CS curriculum. The traditional computing curricula are based on the waterfall model, with long prerequisite chains, and students cannot gain a global subject/technology overview until the end of the program. As a result, students are not motivated in the early courses, and hands-on projects cannot easily be used to enhance those courses. We decided to adopt the mature iterative education model and divide the MS-CS program into three iterations. The first iteration is the program core, containing the most fundamental computer science concepts and skills in computing theory, hardware/software systems, Internet computing, and data engineering. It gives students a global perspective on the study program and IT technologies, the skills needed for hands-on projects in the follow-up courses, and the ability for life-long study. In the second iteration the students conduct focused, in-depth study in a chosen concentration to understand how computing theories and methodologies are applied to real-world challenges. The third iteration is the capstone options, in which students conduct thesis research or a major project to exercise problem-solving skills at a larger scale under faculty guidance.
Based on the above theoretical analysis, Pace University's MS-CS program was revised into a 30-credit program with a 12-credit program core, 12 credits of concentration or elective courses, and a choice of two 6-credit capstone options. Each course carries 3 credits. To ensure that all graduates have a solid education in computer science fundamentals and a balanced perspective on computing, the program core includes Algorithms and Computing Theory, Introduction to Parallel and Distributed Computing, Concepts and Structures in Internet Computing, and Database Management Systems, covering the fundamentals of computing theory, hardware/systems, software/web, and data management and XML, respectively. This program core factors out the shared computing fundamentals so that students can freely take any of the following six concentrations with minimal prerequisite dependency and redundancy: (1) Classical Computer Science, (2) Artificial Intelligence, (3) Mobile Computing, (4) Game Programming, (5) Internet Computing, and (6) Web Security. The two 6-credit capstone options are master's thesis research and the master's major report, supporting in-depth original research and the guided study of a new technology applied in a major project, respectively.
The results of this study also provide a theoretical foundation for renovating undergraduate computer science programs.
Since the early 2000s the IT industry has adopted the service-oriented computing model. As a generalization of the web and distributed computing technologies, Internet business services [4] (for which web service is one particular implementation technique) are provided on servers through the Internet for heterogeneous client systems to consume. An Internet business service abstracts away specific business logic and its implementation, the server IT infrastructure, and the expertise for maintaining the server resources. The clients of such services are typically software systems that consume the services through remote system integration. Credit card processing is a typical Internet business service provided by major financial institutions. New Internet business services are typically implemented by integrating existing services, and the XML technologies are the foundation of data integration across heterogeneous IT systems. Internet business services promote specialized and collaborative computing and support a competitive global economy. Web service is a particular implementation technology for Internet business services, and service-oriented architecture (SOA) specifies software architecture based on service integration. Service-oriented computing is built on networking, the client-server and thin-client architectures, and the web architecture, which is still the foundation of the fast-growing e-commerce and e-society.
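The client-side integration described above can be sketched with a credit-card-style service consumed through an XML contract. This is only an illustration: the service is simulated by a local function, and the element names and authorization rule are invented; a real client would send the request over HTTP to a remote provider.

```python
import xml.etree.ElementTree as ET

# Hypothetical service endpoint, simulated locally. The provider hides its
# business logic and infrastructure; the client sees only the XML contract.
def charge_service(request_xml: str) -> str:
    req = ET.fromstring(request_xml)
    amount = float(req.findtext("amount"))
    approved = amount <= 500.00        # stand-in for the real authorization logic
    return "<response><approved>%s</approved></response>" % str(approved).lower()

def charge(card: str, amount: float) -> bool:
    """Client-side integration: serialize the request, parse the response."""
    req = "<charge><card>%s</card><amount>%.2f</amount></charge>" % (card, amount)
    resp = ET.fromstring(charge_service(req))
    return resp.findtext("approved") == "true"
```

The client never sees how the charge is authorized; it depends only on the request/response schema, which is the abstraction boundary the paragraph describes.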
As the top-level abstraction, each Internet business service is implemented with server-side software component technologies like EJB [5] and .NET [6]. A software component is a software module that has well-defined interfaces and can be individually deployed. A software component typically implements specific business logic with multiple objects, and the common server infrastructure functions, such as component life-cycle management, thread pooling and synchronization, data caching, load balancing, and distributed transactions, are factored out into a component container, which is basically a software framework interacting with the components through pre-declared hook and slot methods. Since the early 1990s, component-based software engineering has become mainstream IT industry practice. In 1995 the Department of Defense mandated that all its projects be based on software components.
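The container/hook relationship described above can be sketched in a few lines of Python. The class and method names (Container, on_activate, on_passivate) are illustrative only, not EJB's actual API:

```python
class Component:
    """Base class declaring the hook methods the container calls back into."""
    def on_activate(self):  pass   # hook: invoked by the container
    def on_passivate(self): pass   # hook: invoked by the container

class Container:
    """Factors out life-cycle management, as a component container does."""
    def __init__(self):
        self.pool = []
    def deploy(self, component_cls):
        comp = component_cls()
        comp.on_activate()          # the container, not the client, drives this
        self.pool.append(comp)
        return comp
    def shutdown(self):
        for comp in self.pool:
            comp.on_passivate()
        self.pool.clear()

class OrderComponent(Component):
    """A component holding business logic; it meets the container only via hooks."""
    def __init__(self):
        self.active = False
    def on_activate(self):  self.active = True
    def on_passivate(self): self.active = False

container = Container()
order = container.deploy(OrderComponent)
```

The inversion of control is the point: the component declares hooks, and the framework decides when they run, exactly the "container calls the component" relationship described above.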
From the above discussion we can see that over the last two decades the concepts of abstraction and divide-and-conquer have been recursively applied to ever higher-level software modules and systems, from objects to software components to Internet business services; that the knowledge base for server-based computing is a superset of that for client-side computing and introduces many new challenges not properly covered by current curricula; and that the dominant server-based computing technologies are based on sound, recurring concepts and methodologies that must be integrated into computer science curricula to prepare students for current and future IT challenges.
Yet many of our computer science programs are still struggling with the effective teaching of objects and have weak coverage of server-side computing. Most of the concepts and methodologies mentioned above are covered only in elective senior courses, covered weakly, or missing entirely from current curricula. Our students need an early introduction to the fundamental modern computing concepts so they can have a clear roadmap and motivation for their programs and be well prepared for the competitive global job market. ACM Computing Curricula 2001 correctly introduced the net-centric knowledge area to address this knowledge gap, but most computer science programs have not properly integrated it into their curricula due to limitations of faculty expertise and resources.
Most computer science curricula today are still based on ACM Computing Curricula 1991, which reflected the IT technologies of that era, with limited coverage of server-based computing. The topics are covered in the waterfall order specified by the existing prerequisite chains. Even though the fundamental concepts in these curricula are still the foundation of today's technologies, many important concepts and skills are scattered across senior courses that cannot be taken earlier due to strict prerequisite requirements. For example, a typical engaging programming project today involves graphical user interfaces, databases, and networking, and multithreading is needed to keep the user interface responsive. But most current curricula introduce network programming as an advanced topic and cover multithreading only briefly in an operating systems course. As a result, instructors are limited in the kinds of projects they can use to engage students, and students have limited opportunities to practice these important skills. To resolve this problem we need to move away from the waterfall teaching model and greatly shorten the deep course prerequisite chains.
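The point about multithreading and responsive interfaces can be shown with a minimal sketch: the blocking work runs on a worker thread and reports back through a queue, which a GUI event loop would poll instead of blocking. The URL and the fetch function are stand-ins, not a real network call:

```python
import queue
import threading

results = queue.Queue()

def fetch(url):
    # Stand-in for a blocking network call; a real fetch would take time,
    # which is why it must not run on the UI thread.
    results.put((url, "response for " + url))

worker = threading.Thread(target=fetch, args=("http://example.com/data",))
worker.start()
# ...the UI event loop keeps running here, polling the queue periodically...
worker.join()
url, body = results.get_nowait()
```

This producer/consumer handoff is the core of what students miss when multithreading is deferred to an operating systems course.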
References
1. Tao, L.: Integrating Component and Server Based Computing Technologies into Computing
Curricula. In: NSF Northeast Workshop on Integrative Computing Education and Research
(ICER), Boston, MA, November 3-4 (2005),
http://gaia.cs.umass.edu/nsf_icer_ne
2. Kurose, J., Ryder, B., et al.: Report of NSF Workshop on Integrative Computing Education
and Research. In: Northeast Workshop ICER (2005),
http://gaia.cs.umass.edu/nsf_icer_ne
3. Tao, L., Qian, K., Fu, X., Liu, J.: Curriculum and Lab Renovations for Teaching Server-
Based Computing. In: ACM SIGCSE 2007 (2007)
4. Microsoft, Internet business service,
http://msdn2.microsoft.com/en-us/architecture/aa948857.aspx
5. Oracle, The Java EE 6 Tutorial,
http://download.oracle.com/javaee/6/tutorial/doc/
javaeetutorial6.pdf
6. Microsoft, Microsoft .NET,
http://msdn2.microsoft.com/en-us/netframework/default.aspx
Effective Web and Java Security Education with the SWEET Course Modules/Resources
L. Tao and L.-C. Chen
1 Introduction
Over the last two decades web technologies have become the foundation of (1) e-commerce, (2) interactive (multimedia) information sharing, (3) e-governance and business management, (4) distributed heterogeneous enterprise information system integration, and (5) delivering services over the Internet. It is a high priority that web and web security technologies be integrated into computing curricula so that computer science students know how to develop innovative and secure web applications, information systems students know how to use web technologies to address business challenges, and information technology students know how to securely deploy web technologies to deliver good system scalability and robustness.
The main challenges in integrating secure web technologies into computing curricula are that (a) web technologies depend on a cluster of multiple types of servers (web servers, application servers, and database servers), and university labs normally cannot support such a complex lab environment; and (b) there is a big knowledge gap between the current computing curricula and the latest web technologies, and faculty need help developing courseware so that the web technologies fit into existing course/curriculum designs with sufficient hands-on experience and a robust evaluation system. This integration of web technologies into computing curricula has not been successful so far, as reflected in the recent ACM computing curricula recommendations.
2 Literature Review
Many computer security educators have designed courseware with hands-on laboratory exercises for computer security courses, but none of them focuses specifically on secure web development. Whitman and Mattord [1] have compiled a set of hands-on exercises for introductory computer security classes. The SEED (Developing Instructional Laboratories for Computer SEcurity Education) project [2] provides a comprehensive list of computer security exercises covering system security, network security, and, to a lesser degree at this point, web security.
Web security textbooks suitable for undergraduate courses are also very limited. Most computer security textbooks published in recent years have only a chapter or a section on web security, with a limited overview of the Secure Socket Layer (SSL) and certificate authorities. While there are many books on web application vulnerabilities [3-9] and secure programming [10, 11], they are designed for practitioners, not for undergraduate students.
Web security professional organizations have provided abundant learning materials on secure web development, which are good information sources for our project. The Open Web Application Security Project (OWASP) is an international group of experts and practitioners dedicated to enabling organizations to develop, purchase, and maintain secure applications. The Web Application Security Consortium (WASC) is an international group of experts and industry practitioners who produce open-source and widely accepted security standards for the web. WASC constantly posts current information on securing web applications, such as security exploits and its incident database.
[Figure: virtual computers running on a host computer]
The laboratory exercises run on virtual computers. First, the virtual computers confine the security exercises to the local network and prevent the exercise results from spilling onto the Internet. Second, the virtual computers greatly reduce the pressure on the servers and network bandwidth; as a result, the laboratory exercises are not hindered by network performance. Third, the virtual computers are portable. Since virtualization emulators exist for all operating systems and a virtual computer is implemented as a folder of files, students can keep the folder on a portable disk and use, pause, and resume work on the same virtual computer on different host computers at university labs or at home. Because a virtual computer is simply a folder of files, or a self-extracting file after compression, it can be distributed through web download, USB flash disks, or DVDs. In addition, the virtual computers are flexible: they can run on machines in a general-purpose computer laboratory, on students' laptops, or on home computers, with only an emulator installed. Moreover, the virtual computers are easy to maintain, since any software changes are made on the virtual computers, which can be easily copied, modified, and distributed. Last but not least, the virtual computers are cost effective: neither students nor faculty have to purchase additional hardware or software except for the emulator, which is mostly free for educational purposes.
1. Introduction to Web Technologies: The module covers HTML form and its
various supported GUI components; URL structure and URL rewrite; HTTP
basic requests; the four-tiered web architecture and web server architecture and
configuration; session management with cookies, hidden fields, and server
session objects; and Java servlet/JSP web applications. Laboratory exercises
guide students to set up a web server, observe HTTP traffic via a web proxy, and
develop a servlet web application and a JSP web application.
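The session-management topic in this module (cookies backed by server session objects) is language-independent and can be sketched briefly: the server issues an opaque cookie value and keeps the actual state in a server-side table, analogous to a servlet container's session object. The function names and the in-memory table are illustrative:

```python
import secrets

# Server-side session table: cookie value -> session object.
SESSIONS = {}

def login(username):
    """Issue an opaque session cookie for an authenticated user."""
    sid = secrets.token_hex(16)
    SESSIONS[sid] = {"user": username, "cart": []}
    return sid                 # sent to the browser, e.g. as JSESSIONID

def handle_request(cookie, item):
    """Recover the server session object from the cookie on a later request."""
    session = SESSIONS.get(cookie)
    if session is None:
        return "401 no valid session"
    session["cart"].append(item)
    return "200 added %s for %s" % (item, session["user"])

sid = login("alice")
status = handle_request(sid, "book")
```

Because the cookie carries no data, only a random key, tampering with it yields nothing; this is the design rationale behind server session objects as opposed to hidden fields.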
secure file exchange with Java security utilities, (4) grant special rights to applets
based on code base, (5) grant special rights to applets based on code signing, (6)
create a certificate chain to implement a trust chain, (7) protect a computer from
insecure Java applications, and (8) secure file exchange with Java security API and
newly created keys or keys in files or a keystore.
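The secure-file-exchange exercises above use the Java security utilities; the underlying integrity idea can be sketched with Python's standard-library HMAC support. The shared key stands in for a keystore entry, and the file contents are invented for the example:

```python
import hashlib
import hmac
import secrets

# Shared secret between sender and receiver (a keystore entry in the Java labs).
key = secrets.token_bytes(32)

def seal(data: bytes):
    """Sender: attach a keyed digest (MAC) to the file contents."""
    tag = hmac.new(key, data, hashlib.sha256).hexdigest()
    return data, tag

def verify(data: bytes, tag: str) -> bool:
    """Receiver: recompute the MAC to detect tampering in transit."""
    expected = hmac.new(key, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal(b"grades.csv contents")
```

The Java exercises go further (certificate chains, code signing), but this is the same verify-before-trust step at the core of each of them.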
Each section includes review questions to enhance students' understanding of the materials. Review questions are also provided at the end of the module to connect the various concepts taught throughout the module.
The SWEET course modules have been posted on a project web site1 to help other institutions adopt or incorporate them into their web/security courses and to train more qualified IT professionals to meet workforce demand.
The SWEET modules could also be integrated into several relevant computer
science courses since web computing highlights the application of the latest
computing concepts, theory and practices. For example, in a few lab hours, the
"Service Oriented Architecture" module could be integrated into Computer
Networking or Net-Centered Computing courses to provide the students with hands-
on exposure to the latest concepts and technologies in integrating heterogeneous
computing technologies over the Internet; and the "Threat Assessment" module could
be adopted by a database course for students to understand how SQL injection could
be used by hackers to attack server systems.
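The SQL injection scenario mentioned for the "Threat Assessment" module can be demonstrated in a few lines with an in-memory SQLite database. The table, account, and payload are invented for the sketch; the vulnerable version splices user input into the SQL string, while the safe version uses a parameterized query:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String splicing lets attacker-controlled text change the query structure.
    sql = ("SELECT COUNT(*) FROM users WHERE name = '%s' AND password = '%s'"
           % (name, password))
    return db.execute(sql).fetchone()[0] > 0

def login_safe(name, password):
    # Placeholders keep the input as data, never as SQL.
    sql = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return db.execute(sql, (name, password)).fetchone()[0] > 0

attack = "' OR '1'='1"   # classic injection payload supplied as the "password"
```

Running both logins with the payload makes the lesson concrete: the vulnerable query authenticates the attacker, the parameterized one does not.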
7 Conclusions
Secure web development is an important topic in assuring the confidentiality, integrity, and availability of web-based systems. Computing professionals must understand web security issues and incorporate security practices throughout the life cycle of developing a web-based system. Our secure web development teaching (SWEET) modules provide flexible teaching materials for educators to incorporate this topic into their courses using hands-on exercises and examples.
Acknowledgment. The authors acknowledge the support of the U.S. National Science Foundation under Grant No. 0837549 and the Verizon Foundation, in partnership with Pace University's Provost Office, through its Thinkfinity Initiative. Any opinions,
findings, and conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National Science
Foundation or the Verizon Foundation.
References
1. Lawton, G.: Web 2.0 Creates Security Challenges. IEEE Computer (October 2007)
2. Andrews, M., Whittaker, J.A.: How to Break Web Software: Functional and Security
Testing of Web Applications and Web Services. Addison-Wesley (2006)
3. Fisher, M.: Developer's Guide to Web Application Security. Syngress (July 2006)
4. Garfinkel, S.: Web Security, Privacy and Commerce, 2nd edn. O'Reilly (2002)
5. Shah, S.: Web 2.0 Security - Defending Ajax, Ria, and Soa. Charles River (December
2007)
6. Stuttard, D., Pinto, M.: The Web Application Hacker's Handbook: Discovering and
Exploiting Security Flaws. Wiley (2007)
7. Graff, M.G., van Wyk, K.R.: Secure Coding: Principles & Practices. O'Reilly (2003)
8. Grembi, J.: Secure Software Development: A Security Programmer's Guide. Delmar
Cengage Learning (2008)
1
http://csis.pace.edu/~lchen/sweet/
9. Whitman, M.E., Mattord, H.J.: Hands-on Information Security Lab Manual. Thomson
Course Technology, Boston (2005)
10. Du, W., Wang, R.: SEED: A Suite of Instructional Laboratories for Computer Security
Education. ACM Journal on Educational Resources in Computing 8(1) (2008); The SEED
project is also accessible at, http://www.cis.syr.edu/~wedu/seed/
11. Komaroff, M., Baldwin, K.: DoD Software Assurance Initiative (September 13, 2005)
12. The Open Web Application Project (OWASP), Software Assurance Maturity Model,
Version 1.0, http://www.opensamm.org/ (released March 25, 2009)
13. McGraw, G., Chess, B.: Building Security In Maturity Model version 2, BSIMM2 (May
2010), http://bsimm2.com/
14. McGraw, G.: Software Security: Building Security In. Addison-Wesley (2006)
15. Howard, M., Lipner, S.: The Security Development Lifecycle. Microsoft Press (2006)
16. Chen, L.-C., Lin, C.: Combining Theory with Practice in Information Security Education.
In: Proceedings of the 11th Colloquium for Information Systems Security Education,
Boston, June 4-7 (2007)
Thinking on College Students' Humanistic Quality Cultivation
in Forestry and Agricultural Colleges
G.-j. Zheng, D.-s. Deng, and W. Zhou
1 Introduction
Humanistic quality education at agricultural and forestry colleges and universities differs greatly from that at other institutions. Generally speaking, agriculture and forestry colleges have long histories, rich cultures, and particular strengths in agriculture and forestry. Humanistic quality mainly refers to the spiritual state of the human subject; it is the integration of qualities directly linked with that subjective spiritual state, such as cultural quality, political thought, psychology, business quality, and physical quality. With social progress and scientific and technological development, humanistic quality has become an important part of college quality education. Agricultural and forestry colleges and universities should therefore pay more attention to developing students' humanistic quality, considering the current education system and the practical demands of society. Firstly, humanistic quality cultivation meets the needs of social practice. In a period of economic development, it is necessary to foster high-quality talents with strong moral cultivation, a high scientific and cultural level, legal awareness, commitment, and dedication. Secondly, it meets the demand of cultivating the humanistic spirit. A person's growth and contribution to society originate from spiritual power. The humanistic spirit, centered on the ideals of truth, virtue, and beauty, emphasizes conscience, responsibility, and values in the pursuit and application of knowledge [1]. Humanistic quality education internalizes outstanding culture into a relatively stable inner quality and cultivates students' rational knowledge of the world,
society, and individuals, which promotes national cohesion and solidarity. Thirdly, humanistic quality education is part of education reform and is needed to cultivate creative talents. Innovative education cultivates the spirit, ability, and personality of innovation, and focuses on college students' curiosity, intellectual appetite, and inquisitive minds. Traditional education runs counter to quality education, so agriculture and forestry colleges should cultivate humanistic quality in ways that fit their own features.
2 Raising the Questions
Given the importance of humanistic quality, agriculture and forestry colleges must pay attention to humanistic education. However, influenced over a long period by factors of family, society, and school, college students' humanistic quality is generally low: narrow humanistic knowledge, an irrational knowledge structure, and poor psychological quality that do not meet the requirements of actual work [2]. This phenomenon is connected to a large extent with current higher education. The main reasons are as follows:
For some years the education sector, influenced by the ideas of pragmatism, has tended to weaken or abolish humanistic education. Many universities ignore humanistic education and pursue only subject education in the process of cultivating students; coupled with fewer student activities, a good cultural atmosphere cannot form across the campus, and it is replaced by marginal culture and "back street" culture. On the other hand, many students are indifferent to traditional culture and its masterpieces while very enthusiastic about practical English and computer grade examinations, which makes universities even more indifferent to humanistic education.
Entering the 21st century, people face diverse, multi-dimensional, and multi-level value choices. On the one hand, this implies that Chinese society is full of uplifting energy during rapid development; on the other hand, it also tells us that some social members' value orientations have become confused and lost to some extent during the transformation period. Especially in colleges and universities, a considerable number of students feel empty facing the pressure of job searching and increasingly competitive social life; they lack ideals and fighting spirit, seek quick success and instant benefits, bite off more than they can chew, are selfish and lack responsibility, and are fragile in mind with poor tolerance for frustration. All of this needs to be guided through the right education; no wonder humanistic education is urgent.
With the era of global economic integration, China's economy has developed rapidly and universities have become closely connected to the market. Universities, especially those lacking state funds, regard education as an industry. They fancy only training students into future technicians and professionals and overemphasize the instrumental value of human resources. They take economic benefit as the sole criterion for everything, ignore the rules and laws of education, open many so-called practical courses, declare "hot" majors, rush to certifications, randomly cut or cancel humanities courses, and falsely guide students to treat learning skills and getting certificates as the goal of study, consequently neglecting the cultivation of humanistic quality.
Many universities have set up a series of humanistic courses in recent years, but some teachers do not update their teaching and examination models; they still emphasize only the imbuing of knowledge, take traditional assessments, and care little about expanding students' thinking or influencing their sentiment, which reduces humanistic education to a technical operation to some degree. It is therefore difficult for college students to experience humanity or achieve a sublimation of spirit. Teachers should change teaching models that slight practice and mind expansion, and assessment should become diversified, open, and flexible.
According to the analysis of the problems above, forestry and agriculture schools could carry out the following tactics to improve college students' humanistic quality:
4.2 Taking the University as the Base, Vigorously Developing Humanistic Spirit Education
College students are a group with great creative energy and passion. The key to humanistic guidance is whether we can fully stimulate their enthusiasm and bring their initiative and innovation into play [5]. To cultivate students' innovative spirit and practical abilities effectively, colleges and universities should reform traditional teaching methods, enrich the channels of education, and pay more attention to basic and humanistic education by introducing humanities knowledge such as philosophy, history, sociology, ethics, management, and logic to students. What is more, humanistic education courses should be planned into the program of cultivating students so as to improve their humanistic quality through systematic education.
Universities should play the leading role in education, promote the essence of traditional culture, and give correct guidance to students' value orientation based on traditional culture education. As the profound traditional culture is the crystallization of our 5000-year-old Chinese civilization, it can cultivate character, sublimate the spirit, inspire wisdom, and improve literacy, and it plays a basic role in students' quality education.
5 Conclusion
College students' quality education is a wide-ranging and systematic problem. This thesis analyzes the main factors from four aspects: independent recruitment, the humanistic spirit, humanistic education, and the humanities education system. It puts forward measures such as enlarging the ratio of autonomously recruited students in line with each school's characteristics, developing humanistic spirit education, educating in traditional culture, and constructing a humanistic education guarantee system. Many other factors, such as political consciousness and political accomplishment, are not covered, and further work should be done through quantitative analysis.
References
1. Li, W.: The Theory and Practice of College Students' Quality Education. China Journal of Radio and TV University 1, 82–85 (2006)
2. Yang, J.: Humanistic Education Thinking and Practice For College Students. Explore
Reform (2) (2007) (in Chinese)
3. Sorensen, C.W., Furst-Bowe, J.A., Moen, D.M. (eds.): Quality and Performance Excellence
in Higher Education. Anker Press (2005)
4. Ren, Y.: Traditional Culture and Humanistic Education. Journal of Vocational and Technical College of Yiyang 3, 67–68 (2009) (in Chinese)
5. Hong, B.: The Analysis of Cultivating the Humanistic Spirit for College Students. Culture and Education Material 9, 192–193 (2009)
A Heuristic Approach of Code Assignment to Obtain
an Optimal FSM Design
M. Altaf Mukati
1 Introduction
The concept of the FSM first emerged in 1961. An FSM can be formally defined as a quintuple M = (I, S, O, δ, λ), where I is a finite set of inputs, S is a finite, nonempty set of states, O is a finite set of outputs, δ: I × S → S is the next-state function, and λ: I × S → O (λ: S → O for a Moore machine) is the output function of the sequential circuit [1]. An FSM allows simple and accurate design of sequential logic and control functions. Any large sequential circuit can be represented as an FSM for easier analysis; for example, the control units of various microprocessor chips can be modeled as FSMs [2]. Moreover, an FSM can be modeled by discrete Markov chains: the static probabilities (the probabilities that the FSM is in a given state) can be obtained from the Chapman-Kolmogorov equations [3], which are useful for synthesis and analysis. FSM concepts are also applied in areas such as pattern recognition and artificial intelligence [4].
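The quintuple definition can be rendered directly as an executable sketch. The following Python encodes a small Mealy machine, with states and alphabet chosen purely for illustration, whose output is 1 exactly when the two most recent inputs are both 1:

```python
# M = (I, S, O, delta, lam): a Mealy machine detecting two consecutive 1s.
I = {0, 1}                        # finite input set
S = {"saw0", "saw1"}              # finite, nonempty state set
O = {0, 1}                        # finite output set
delta = {("saw0", 0): "saw0", ("saw0", 1): "saw1",
         ("saw1", 0): "saw0", ("saw1", 1): "saw1"}   # next-state function delta(s, x)
lam   = {("saw0", 0): 0, ("saw0", 1): 0,
         ("saw1", 0): 0, ("saw1", 1): 1}             # output function lambda(s, x)

def run(machine_input, state="saw0"):
    """Drive the machine over an input sequence, collecting the outputs."""
    outputs = []
    for x in machine_input:
        outputs.append(lam[(state, x)])
        state = delta[(state, x)]
    return outputs

out = run([1, 1, 0, 1, 1, 1])
```

Because the output depends on both the state and the current input, this is the Mealy form λ: I × S → O; dropping the input argument from lam would give the Moore form λ: S → O.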
FSMs are widely used to reduce logic complexity and hence cost; however, in the asynchronous type, minimization of the combinational logic has to be handled carefully to avoid races and hazards, meaning that a minimized circuit may not be the desired one if it carries the threat of races and hazards. Hence a classic design problem for asynchronous sequential machines is to find the optimum state code assignment for critical race-free operation [5].
An FSM can be optimized for area, performance, power consumption, or testability.
The design of an FSM can be simplified in different steps, such as state minimization,
state assignment, logic synthesis and optimization of sequential circuits [4]. The first
step, state minimization, reduces the number of states and hence the number of flip-flops.
In the early days this step received little attention because of the inherent complexity of
the process. It was shown that the reduction of completely specified finite automata can
be achieved in O(n log n) steps [6], whereas the minimization of incompletely specified
finite automata is an NP-complete problem [7]. With the growing demand for FSM
circuits in digital systems, designers were forced to find suitable methods for reducing
a state table; the implication chart method is one such method. The second step assigns
proper codes to the remaining states to obtain minimal combinational logic, but no
definite method is available that guarantees a minimal circuit. The synthesis of FSMs can
be divided into functional design, logic design and physical design. Logic design maps
the functional description of an FSM into a logic representation using logic variables
[8]. Its optimization can considerably affect performance metrics such as power,
area and delay [8]. The state assignment problem is that of minimizing the
combinational gates when binary values are assigned to the states contained
in the reduced state tables.
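As a sketch of the Markov-chain view mentioned in the introduction, the static probabilities of an FSM can be obtained numerically by solving the stationarity conditions πP = π together with Σπᵢ = 1; the transition matrix below is illustrative:

```python
import numpy as np

def static_probabilities(P):
    """Stationary distribution pi of a discrete Markov chain with row-stochastic
    transition matrix P: solve pi @ P = pi together with sum(pi) == 1 as an
    overdetermined least-squares system (the Chapman-Kolmogorov fixed point)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Example: a 2-state machine that stays in state 0 with probability 0.9.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = static_probabilities(P)   # approximately [5/6, 1/6]
```

The least-squares formulation sidesteps the rank deficiency of (Pᵀ − I) by appending the normalization row.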
2 Literature Survey
Previous approaches to state assignment targeted both area and performance, for
two-level and for multi-level logic circuits [9][10]. In [11], the JEDI algorithm performs
state assignment for a multi-level logic implementation in two stages, a weight
assignment stage and an encoding stage. In [12], state assignment algorithms targeting
low-power circuits are described; low power dissipation is obtained by assigning codes
to the states so as to reduce switching activity on the input and output state variables.
Several state assignment algorithms and heuristics have been developed. In [3], an
algorithm known as the Sequential Algorithm assigns a code to each state depending on
the codes assigned earlier. It requires defining the set KR of all assignable state codes,
where the code width R can be any value in the range [⌈log2 M⌉, M] and M is the
number of states in the reduced state table. Most state assignment algorithms have
focused on the minimum state-code length [13]; however, an assignment with the
minimum code length does not always yield the minimum circuit size when realized
on a silicon chip [13][14]. Hartmanis, Stearns, Karp and Kohavi presented algebraic
methods based on partition theory [15]. Their methods, based on a reduced-dependence
criterion, produced good code assignments but did not guarantee the most optimal
circuit, and no systematic procedure was provided for assigning codes to large FSMs.
Armstrong presented a method based on a graph interpretation of the problem [15].
Although it could handle large FSMs, the method had little impact because of its
limitations in transforming the state assignment problem into a graph embedding
problem, which only partially represented the codes [15]. Armstrong's technique
was improved in [16], and NOVA [17] is also based on a graph embedding algorithm.
Still, no state assignment procedure exists that guarantees a minimal
combinational circuit [18].
The state assignment problem, especially for larger FSMs, may not be solved
optimally: formulated as an optimization problem it is NP-complete, and algorithms
that attempt to solve it exactly are computationally intensive [4]. Several authors have
therefore worked on heuristics, rather than exact algorithms, to obtain good state
assignments. State assignment thus remains one of the challenging problems of
switching theory [18].
3 Problem Description
Each state of an FSM corresponds to one of the 2^n possible combinations of the n
state variables. To illustrate, consider a reduced state table of a certain
problem containing 5 states, i.e. r = 5, which requires 3 bits to represent each state, i.e.
n = 3. One possible assignment of codes to the states is:
Clearly each state can be assigned any of the 8 possible combinations of bits, i.e. from 000
to 111. The variables n and r are related as:

2^(n−1) < r ≤ 2^n (1)
In general, the total number of permutations of a 3-bit code is 8! = 40320. For
values of r less than 2^n the number of permutations is smaller, but it still represents a
large value. Of these permutations very few are distinct, as proved by McCluskey [19],
who showed that the number of distinct row assignments for a table with r rows using
n state variables is

N_D = (2^n − 1)! / ((2^n − r)! · n!) (2)

where N_D is the number of distinct assignments. Equation (2) implies that for a state
table containing 4 states, i.e. r = 4, requiring 2 bits to represent each state, i.e. n = 2,
the number of distinct assignments is 3.
These distinct assignments are best understood through Figure 1. In this case,
although 24 possible assignments exist, only 3 are distinct; any other assignment is
merely a rotation or reversal of one of these three, corresponding either to reversing
the order of the variables or to complementing one or both variables. Such changes
do not change the form of any Boolean function [19][20]. For example, the
assignments (00-01-11-10) and (11-10-00-01) result in the same circuit, since the
latter is obtained from the former by inverting the variables. As equations (1) and (2)
make evident, the number of distinct code assignments increases very sharply as r
increases, as shown in Table 1 [20].
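Equations (1) and (2) are easy to check mechanically; the short sketch below computes the code width and McCluskey's count of distinct assignments:

```python
from math import ceil, factorial, log2

def code_width(r):
    """Smallest n satisfying 2**(n-1) < r <= 2**n, equation (1)."""
    return max(1, ceil(log2(r)))

def distinct_assignments(r, n):
    """Number of distinct row assignments N_D for r rows and n state
    variables, equation (2): (2**n - 1)! / ((2**n - r)! * n!)."""
    return factorial(2**n - 1) // (factorial(2**n - r) * factorial(n))

# r = 4 states need n = 2 bits and admit 3 distinct assignments;
# the 8-state example of Section 3.2 admits 840.
```

Evaluating distinct_assignments(8, 3) reproduces the 840 distinct ways cited for the example in Section 3.2.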
When r is large, the best distinct assignment, one that guarantees minimal
combinational logic, is extremely difficult to find, since the distinct assignments
produce circuits of widely varying complexity. Working through all possible distinct
assignments of codes would require intensive computation that could take days even
on a high-speed computer. Instead, this paper presents a heuristic approach that can
produce a well-reduced combinational logic, if not the most minimal one.
3.1 Code Assignment Rules

1. Assign adjacent codes to the present states which lead to the identical
   next state for a given input.
2. Assign adjacent codes to the next states which correspond to the same
   present state.

Rule 1 has precedence over rule 2. If both rules are applicable on a given
reduced state table, they are likely to produce one of a probable set of simplified
design equations.
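The two rules can be mechanized by scanning the state table for the state pairs they ask to place at adjacent codes (Hamming distance 1). A sketch, with the table represented as a (present state, input) → next state mapping (the representation is an assumption for illustration, not the paper's notation):

```python
from itertools import combinations

def adjacency_pairs(table):
    """Collect the state pairs that the two heuristic rules prefer to make
    code-adjacent.  `table` maps (present_state, input) -> next_state."""
    pairs = set()
    # Rule 1: present states leading to the identical next state for an input.
    groups = {}
    for (ps, x), ns in table.items():
        groups.setdefault((x, ns), []).append(ps)
    # Rule 2: next states that correspond to the same present state.
    successors = {}
    for (ps, _), ns in table.items():
        successors.setdefault(ps, set()).add(ns)
    for bucket in list(groups.values()) + [sorted(s) for s in successors.values()]:
        for a, b in combinations(sorted(bucket), 2):
            pairs.add((a, b))
    return pairs
```

For a table where states A and B both go to C under input 0, the function reports (A, B) among the desired adjacencies; a full assignment procedure would then satisfy as many pairs as possible, giving rule 1 pairs precedence.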
3.2 Example
The state diagram in Figure 2 represents an FSM that detects a BCD code appearing
at its primary input X; the valid codes are 0000 to 1001. With every clock cycle, a
single bit of the code (MSB first) enters the circuit. On detection of an invalid code,
the output Z is raised high. The state table is given in Table 2 [20].
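The intended behaviour of the detector can be expressed compactly as follows (a behavioural sketch only; the paper realizes it with the eight-state machine of Table 2):

```python
def bcd_invalid(bits):
    """Z after the 4th clock: 1 if the serially received code (MSB first)
    is not a valid BCD digit, i.e. its binary value exceeds 1001 (nine)."""
    assert len(bits) == 4
    value = 0
    for b in bits:            # shift one bit in per clock cycle
        value = (value << 1) | b
    return 1 if value > 9 else 0
```

For instance, bcd_invalid([1, 0, 0, 1]) is 0 (1001 is a valid digit), while bcd_invalid([1, 0, 1, 0]) is 1.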
Due to the nature of the problem no state can be eliminated, i.e. a reduced state table
is not required in this example. With r = 8, the codes can be assigned in 840 distinct
ways (refer to Table 1). To demonstrate the two rules described in Section 3.1, we
first evaluate how many gates are required after assigning three distinct random codes
as in Table 3. We then assign the codes by applying the given rules and compare all
of the reduced circuits to draw conclusions.
Using J-K flip-flops, the three sets of design equations obtained are summarized in
Table 4, and the total number of logic gates required in each case is summarized in
Table 5. NOT gates are not required for the internal variables in such circuits, and all
gates are counted as 2-input gates.
     00   01   11   10
0    S3   S6   S2   S7
1    S1   S4   S0   S5

Table 6. Assignment of specific codes along with the corresponding design equations and the
logic gates requirements
4 Conclusion
Assigning codes to the states of the reduced state table at random produced larger
FSM circuits, whereas applying the heuristics presented in this paper yielded a
simplified combinational logic.
References
1. Avedillo, M.J., Quintana, J.M., Huertas, J.L.: Efficient state reduction methods for PLA-
based sequential circuits. IEE Proceedings-E 139(6) (November 1992)
2. Bader, D.A., Madduri, K.: A Parallel State Assignment Algorithm for Finite State
Machines, http://cs.unm.edu/~treport/tr/03-12/parjedi-bader.pdf
3. Salauyou, V., Grzes, T.: FSM State Assignment Methods for Low-Power Design. In: 6th
International Conference on Computer Information Systems and Industrial Management
Applications (CISIM 2007), pp. 345–350 (June 2007)
4. Bader, D.A., Madduri, K.: A Parallel State Assignment Algorithm for Finite State
Machines. In: Bougé, L., Prasanna, V.K. (eds.) HiPC 2004. LNCS, vol. 3296, pp. 297–308.
Springer, Heidelberg (2004)
5. Unger, S.H.: Asynchronous Sequential Switching Circuits. John Wiley & Sons (1969)
6. Hopcroft, J.: An n log n algorithm for minimizing states in a finite automaton. In: Kohavi,
Z. (ed.) Theory of Machines and Computations, pp. 189–196. Academic Press (1971)
7. Pfleeger, C.: State reduction of incompletely specified finite state machines. IEEE
Trans. Comput. C-26, 1099–1102 (1973)
8. Shiue, W.-T.: Novel state minimization and state assignment in finite state machine design
for low-power portable devices. Integration, the VLSI Journal 38, 549–570 (2005)
9. Eschermann, B.: State assignment for hardwired control units. ACM Computing
Surveys 25(4), 415–436 (1993)
10. De Micheli, G.: Synthesis and Optimization of Digital Circuits. McGraw-Hill (1994)
11. Lin, B., Newton, A.R.: Synthesis of multiple level logic from symbolic high-level
description languages. In: Proc. of International Conference on VLSI, pp. 187–196
(August 1989)
12. Benini, L., De Micheli, G.: State assignment for low power dissipation. In: IEEE Custom
Integrated Circuits Conference (1994)
13. Cho, K.-R., Asada, K.: VLSI Oriented Design Method of Asynchronous Sequential
Circuits Based on One-hot State Code and Two-transistor AND Logic. Electronics and
Communications in Japan (Part III: Fundamental Electronic Science) 75(4) (February 22,
2007)
14. Tan, C.J.: State assignments for asynchronous sequential machines. IEEE Trans.
Comput. C-20, 382–391 (1971)
15. De Micheli, G., et al.: Optimal state assignment for finite state machines. IEEE
Transactions on Computer-Aided Design CAD-4(3) (1985)
16. De Micheli, G., Sangiovanni-Vincentelli, A., Villa, T.: Computer-aided synthesis of PLA-
based finite state machines. In: Int. Conf. on Computer-Aided Design, Santa Clara, CA,
pp. 154–157 (September 1983)
17. Villa, T., Sangiovanni-Vincentelli, A.: NOVA: state assignment for optimal two-level
logic implementation. IEEE Trans. Computer-Aided Design 9(9), 905–924 (1990)
18. Mano, M.M.: Digital Logic and Computer Design, ch. 6. Rev. Ed. Prentice Hall,
Englewood Cliffs (2001)
19. McCluskey, E.J., Unger, S.H.: A Note on the Number of Internal Assignments for
Sequential Circuits. IRE Trans. on Electronic Computers EC-8(4), 439–440 (1959)
20. Mukati, A., Memon, A.R., Ahmed, J.: Finite State Machine: Techniques to obtain Minimal
Equations for Combinational part. Pub. Research Journal 23(2) (April 2004)
Development of LEON3-FT Processor Emulator for
Flight Software Development and Test
1 Introduction
The microprocessor in an on-board computer (OBC) executes the flight software
(FSW) that controls the satellite and accomplishes its missions, and it is specially
designed to operate in the space environment. Satellites currently being developed by
KARI (Korea Aerospace Research Institute) use the ERC32 processor, and the
LEON3-FT processor will be embedded in the OBC of next-generation satellites;
both processors were developed by ESA (European Space Agency)/ESTEC
(European Space Research and Technology Centre).
A processor emulator is an essential tool for developing FSW and is the core of a
satellite simulator, but the selection of LEON3 processor emulators is very limited.
Only TSIM-LEON3 from Aeroflex Gaisler is available commercially, so TSIM-LEON3
must be purchased continually for FSW development and satellite simulator
construction. TSIM-LEON3, however, does not support the full feature set of the
LEON3-FT model, and it is difficult to change or modify its emulator core to
integrate the FSW development platform and the satellite simulator.
In order to resolve these problems successfully, a new LEON3-FT processor
emulator, LAYSIM-leon3, has been developed. LAYSIM-leon3 is a cycle-true
instruction set simulator (ISS) for the LEON3-FT processor and it includes the
34 J.-W. Choi et al.
embedded source-level debugger. LAYSIM-leon3 can also support the full system
simulator for the SCU-DM (Spacecraft Computer Unit Development Model), based on
the LEON3-FT/GRLIB and various ASIC/FPGA cores.
This paper presents the architecture and design of LAYSIM-leon3 and the results
of FSW development and test under LAYSIM-leon3. Section 2 introduces the
emulation method and the status of emulators for LEON3. The detailed simulation of
LAYSIM-leon3 is discussed in Section 3. Section 4 presents the software development
environment under LAYSIM-leon3 with the VxWorks/RTEMS RTOS. Finally, we
draw conclusions in Section 5.
LAYSIM-leon3 has been developed with the GNU compiler and the GTK library for
its GUI, so it runs on both Windows and Linux platforms without modification.
LAYSIM-leon3 is divided broadly into seven parts. First, the file loader module is
responsible for loading a LEON3 program into memory; it analyzes and stores the
symbol and debugging information according to the file format (a.out, elf, or binary).
The source/disassembler module displays a mixed view of source code and
disassembled code in the GUI source viewer. The IU (Integer Unit) execution module
is the core of LAYSIM-leon3, which executes
SPARC v8 instructions. The FPU execution module is responsible for FPU
operations. All GRLIB operations are controlled and executed by the GRLIB
execution module. Traps and interrupts are handled by the trap/interrupt handling
module. Finally, the GUI control module takes care of watch/breakpoint operation,
real-time register updates, and user control of the GUI environment.
LEON3 programs loadable by LAYSIM-leon3 are the a.out file format produced by
VxWorks 5.4 and the elf file format produced by VxWorks 6.5, RCC (RTEMS
LEON/ERC32 Cross-Compiler) and BCC (Bare-C Cross-Compiler System for
LEON); a raw binary file can also be loaded by giving an address option.
While loading a LEON3 program, the appropriate loader runs after the file format is
analyzed: it extracts symbol and debugging information and copies the text/data
segments to memory. If a RAM-based LEON3 program is selected, the stack/frame
pointers of the IU are automatically set for its execution in RAM.
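The loader's format analysis can be sketched as a check of the leading bytes. This is an illustration, not LAYSIM-leon3's actual code: only the ELF magic is tested here, while real a.out detection would inspect the exec-header magic numbers (octal 0407/0410/0413):

```python
def detect_format(header: bytes):
    """Crude loader front-end: classify a program image by its first bytes.
    ELF files begin with 0x7f 'E' 'L' 'F'; everything else is treated as
    a.out or raw binary for simplicity."""
    if header[:4] == b'\x7fELF':
        return 'elf'
    return 'a.out-or-binary'
```

A real loader would go on to parse the program headers to find the text/data segments to copy into memory.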
If the C source code matching a LEON3 program loaded through the file loader
module is available, the source/disassembler module displays the mixed format in
the GUI source viewer; otherwise it displays the assembler code only. For
disassembly, the Suggested Assembly Language Syntax [4] from SPARC is adopted
for the convenience of software engineers. The LEON3-FT, a SPARC v8 core,
supports five types of instructions: load/store, arithmetic/logical/shift, control
transfer, read/write control register, and FP/CP instructions.
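These instruction classes map onto the SPARC v8 instruction formats, which a disassembler first separates by the two op bits in positions 31:30; a minimal sketch:

```python
def sparc_op_class(word):
    """First decoding step of a SPARC v8 disassembler: the top two bits of
    the 32-bit word select the format group (branches/SETHI, CALL, or the
    two format-3 groups covering arithmetic/logical/shift and load/store)."""
    op = (word >> 30) & 0b11
    return {0: 'branch/sethi',
            1: 'call',
            2: 'arithmetic/logical/shift',
            3: 'load/store'}[op]

# Example: 0x01000000 encodes "sethi 0, %g0" (the SPARC NOP), so op == 0.
```

A full disassembler then dispatches on the op2/op3 subfields within each group to recover the mnemonic.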
To trace code execution, LAYSIM-leon3 provides a code-coverage function. In the
GUI source viewer, executed code lines are highlighted in blue, untouched code is
shown in black, and the currently executing line is marked in red. After execution, it
can report the code coverage of the LEON3 program with the source code.
The LEON3-FT has three operation modes: reset, run, and error mode. It supports three
types of traps: synchronous, floating-point, and asynchronous. Synchronous traps are
caused by hardware responding to a particular instruction, or by the Ticc instruction,
and occur during the instruction that caused them. Floating-point traps, caused by FP
instructions, occur before the instruction completes. Asynchronous traps (interrupts)
occur when an external event, such as a timer, the UART, or one of various controllers,
interrupts the processor.
The software handlers for the window overflow/underflow traps among the
synchronous traps are provided by the RTOS or compiler, so these are handled
correctly by software; other traps whose handlers are not properly installed by
software drive the LEON3-FT into error mode. Interrupts are processed by the IU
only when no synchronous trap is pending. All trap operations are handled by the
trap/interrupt handling module exactly as in the real LEON3-FT trap operation.
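The ordering rule, interrupts only when no synchronous trap is pending, can be sketched as a tiny dispatcher (illustrative only; treating the highest interrupt level as highest priority is an assumption consistent with SPARC interrupt levels, where level 15 is highest):

```python
def select_trap(pending_sync, pending_irqs):
    """Decide what the IU services next, following the rule above: any
    pending synchronous trap is taken first; otherwise the highest-level
    pending interrupt is taken; otherwise execution continues normally."""
    if pending_sync:
        return ('sync', pending_sync[0])
    if pending_irqs:
        return ('irq', max(pending_irqs))
    return None
```

For example, a pending window-overflow trap is serviced before any queued interrupt level.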
(Figure: the full-system simulation environment, with a 1553B monitor/simulator
attached over a virtual 1553B bus and a virtual LAN91C network interface.)
5 Conclusion
In this paper we introduced the development of the LEON3-FT emulator LAYSIM-
leon3, a GUI-based, cycle-true emulator that can support full-system simulation of
the SCU-DM, and we described software development and test on LAYSIM-leon3.
LAYSIM-leon3 shows slightly lower performance than TSIM-LEON3 because of the
overhead of GUI processing, but it provides a significantly better environment for
software developers. The instruction-level verification test has been completed and
the operation-level test is under way. LAYSIM-leon3 will be the main core of the
flight software simulator and the operation simulator of SWT/KARI.
References
1. Pidgeon, A., Robison, P., McCellan, S.: QERx: A High Performance Emulator for Software
Validation and Simulations. In: Proceedings of DASIA 2009, Istanbul, Turkey (2009)
2. Aeroflex Gaisler: GRLIB IP Core User's Manual. Version 1.1.0-B4104 (2010),
http://www.gaisler.com
3. Aeroflex Gaisler: LEON3FT-RTAX Data Sheet and User's Manual. Version 1.1.0.9 (2010),
http://www.gaisler.com
4. SPARC International Inc.: The SPARC Architecture Manual, Version 8 (1992),
http://www.sparc.org
Experiments with Embedded System Design
at UMinho and AIT
Abstract. Nowadays, embedded systems are central to modern life, mainly due
to the scientific and technological advances of the last decades, which started a new
reality in which the embedded systems market has been growing steadily, with
a monthly or even weekly emergence of new products with different applications
across several domains. This ubiquity of embedded systems was the drive behind
the question "Why should we focus on embedded systems design?", which was
answered in [1, 2] with the following points: (1) high and fast penetration into
products and services due to the integration of networking, operating system and
database capabilities, (2) a very strategic field economically, and (3) a new and
relatively undefined subject in the academic environment. Other adjacent questions
have been raised, such as "Why is the design of embedded systems special?". The
answer to this last question rests mainly on several problems raised by the new
technologies, such as the need for more human resources in specialized areas and a
steep learning curve for system designers. As pointed out in [1], these problems can
prevent many companies from adopting these new technologies, or keep them from
responding in time to master these technological and market challenges. This paper
describes how staff at ESRG-UMinho1 and ISE-AIT2 faced the embedded systems
challenges at several levels. It starts by describing the development of the educational
context for the new technologies and shows how our Integrated Master Curriculum in
Industrial Electronics and Computer Engineering has been adapted to satisfy the
needs of the major university customer, the industry.
1 Introduction
Embedded systems are vital to our very existence, as proven by their widespread use
in automotive applications, home appliances, comfort and security systems, factory
control systems, defense systems, and so on. This view is nowadays widely shared,
mainly by those who live in developed countries, as well as by those in charge of
developing such systems. Jerry Fiddler, Wind River Chairman and Co-Founder, said
[3]: "We live in a world today in which software plays a critical part. The most
critical software is not running on large systems and PCs. Rather, it runs inside the
infrastructure and in the devices that we use every day. Our transportation,
1 Embedded Systems Research Group at University of Minho, Guimarães, Portugal.
2 Industrial Systems Engineering at Asian Institute of Technology, Bangkok, Thailand.
42 A. Tavares et al.
communications and energy systems won't work if the embedded software contained in
our cars, phones, routers and power plants crashes." However, this wide diversity, along
with the increasing complexity due to the multi-disciplinary nature of products and
services and the heterogeneity of applied technologies, demands changes in industrial
practice and consequently asks for changes in the educational system. At
ESRG-UMinho, the embedded systems subject was first taught as a two-hour credit
course, mainly to promote education in robotics, automation and control. It was
therefore viewed as an overview course (a "backpack" [4]) in which students were first
introduced to the main concepts of embedded systems design that would later be
combined to provide the big picture of embedded systems design. The course was
theoretical, but owing to the growing importance of embedded systems it was promoted
to two three-hour credit courses, allowing the introduction of lab sessions for hands-on
activities. Three years ago, our diploma program was revised and the subject promoted
to four three-hour courses under a new teaching and research specialization field
called embedded systems. The embedded systems research group,
ESRG-UMinho, was created, and a discussion was held within the group on
how to attract and keep students in the elective courses of the embedded systems field.
The general objectives should be: (1) exposing students to the industrial project
environment of embedded systems design, (2) developing the capacity for teamwork,
and (3) highlighting the need to be self-taught. The teaching approach followed
was based on ideas presented in [1, 2] and was later revised to overcome issues
faced in the first year. In the remainder of this paper, several questions will be answered,
with special focus on (1) why the skill mismatch phenomenon exists and how to
cope with it and (2) how to drive the whole group in sync by keeping
undergraduate and graduate students, teachers and researchers in flow and committed
to the ESRG-UMinho vision and outcomes.
1. It is the field with the highest and fastest growth rate in industry;
2. It is a very strategic field from an economic standpoint: (a) its market size is about
   100 times the desktop market, (b) nearly 98% of all 32-bit microprocessors in
   use today around the world are incorporated in embedded systems, and (c) in the
   near future nearly 90% of all software will be for embedded systems, most
   computing systems will be embedded systems, and their importance will grow
   rapidly;
3. The design of embedded systems is a new and not well defined subject in academic
   environments [1], and the "skill mismatch" phenomenon is visible: the maturity
   levels of graduates' skills leaving the academies do not meet the levels required by
   key industry sectors.
The Embedded Systems course is a merger of the ECE 125 course [15] and the
Complex Systems Design Methodology course [4]; it was drafted to provide students with
a broad overview of embedded systems design and also to show how the synergistic
combination of a broad set of knowledge fields is explored through backward
and forward references to other courses in the curriculum. The other three courses of
the embedded systems course track, Languages for Embedded Systems, Dedicated
System Design and Advanced Processors Architecture, focus on more advanced
embedded systems concepts such as compiler, processor and System-on-Chip (SoC)
design. They are based on a mix of lectures, small hands-on real-world examples and
project-based sessions that end with the implementation of an SoC, a C compiler, and
a Linux port to the newly developed platform. Unlike the undergraduate
microcontroller-based design course track, which strictly follows a bottom-up design
methodology, the graduate embedded systems course track focuses on high-level
abstraction and on top-down and bottom-up system-level design methodologies, starting
from knowledge about the system to be developed. All students are required to follow
the same information flow during system design, first transforming the
system knowledge into a specification model of the system.
5 Conclusions
The omnipresence of embedded systems, together with the "skill mismatch" phenomenon,
evinces the need and urgency for an embedded systems education that produces skilled
graduates capable of engineering embedded systems as required by the hiring industry.
At UMinho, an embedded systems design course track was designed and several
techniques were employed to fill the "skill mismatch" gap and to align teaching
and R&D activities. Among those techniques we emphasize the promotion of: depth
in the learning approach by bridging all these courses together; design-for-reuse
principles and system-level concepts early in the undergraduate microprocessor-based
course track; embedded systems education based on interactive communication with a
strong focus on real-world examples and project-based work; breadth in the learning
approach through a vertical exemplification teaching approach combined with a
sufficiently high level of formal knowledge; procrastination avoidance; and an integrated
learning style strongly based on kinesthetic learning. Furthermore, we found that creating
a motivating environment with a supportive, high-performance culture in course
classes and R&D activities is very important, as was visible during the three-month
stay at AIT, where the twelve students were, and still are, completely in flow and
committed to the group's vision and outcomes. The assessment of our embedded
systems design course track was very positive, as manifested by (1) our internal
evaluation process, with questions to drive further course-track improvement, (2) the
performance of students coaching lab sessions at UMinho and AIT, (3) the willingness
of students to buy their own microprocessor and FPGA boards, (4) the way older
students sell the ESRG brand, and (5) the increasing number of students attending the
elective embedded systems design course track year after year.
References
1. Grimheden, M., Törngren, M.: What is Embedded Systems and How Should It Be Taught?
- Results from a Didactic Analysis. ACM Transactions on Embedded Computing
Systems 4(3) (August 2005)
2. Mesman, B., et al.: Embedded Systems Roadmap 2002. In: Eggermont, L.D.J. (ed.) (March
2002)
3. Li, Q., Yao, C.: Real-time Concepts for Embedded Systems. CMP Books (July 2003)
4. Bertels, P., et al.: Teaching Skills and Concepts for Embedded Systems Design. ACM
SIGBED Review 6(1) (January 2009)
5. Helmerich, A., Braun, P., et al.: Study of Worldwide Trends and R&D Programmes in
Embedded Systems in View of Maximising the Impact of a Technology Platform in the
Area. Final Report, Information Society Technologies (November 18, 2005)
6. Blake, D.: Embedded systems and vehicle innovation. In: Celebration of SAE's Centennial
in 2005, AEI (January 2005)
7. Kopetz, H.: The Complexity Challenge in Embedded System Design. In: ISORC 2008,
Proceedings of the 11th IEEE Symposium on Object-Oriented Real-Time Distributed
Computing (2008)
8. Henzinger, T.A., Sifakis, J.: The Embedded Systems Design Challenge. In: Misra, J.,
Nipkow, T., Karakostas, G. (eds.) FM 2006. LNCS, vol. 4085, pp. 1–15. Springer,
Heidelberg (2006)
9. Opportunities and challenges in Embedded Systems,
http://www.extra.research.philips.com/natlab/sysarch/EmbeddedSystemsOpportunitiesPaper.pdf
10. Emerging trends in embedded systems and applications,
http://www.eetimes.com/discussion/other/4204667/Emerging-trends-in-embedded-systems-and-applications
11. A Comparison of Embedded Systems Education in the United States, European, and Far
Eastern Countries,
http://progdata.umflint.edu/MAZUMDER/Globalization%20of%20Engg.%20Education/Review%20papers/Paper%204.pdf
12. Pan, Z., Fan, Y.: The Exploration and Practice of Embedded System Curriculum in
Computer Science Field. In: ICYCS 2008, Proceedings of the 9th International
Conference for Young Computer Scientists. IEEE Computer Society, Washington, DC,
USA (2008)
13. Chen, T., et al.: Model Curriculum Construction of Embedded System in Zhejiang
University. In: CSSE 2008, Proceedings of the 2008 International Conference on Computer
Science and Software Engineering, vol. 05. IEEE Computer Society, Washington, DC, USA
(2008)
14. Pak, S., et al.: Demand-driven curriculum for embedded system software in Korea. In: ACM
SIGBED Review - Special Issue: The First Workshop on Embedded System Education
(WESE), vol. 2(4) (October 2005)
15. Ricks, K.G., et al.: An Embedded Systems Curriculum Based on the IEEE/ACM Model
Curriculum. IEEE Transactions on Education 51(2) (May 2008)
16. Seviora, R.E.: A curriculum for embedded system engineering. ACM Transactions on
Embedded Computing Systems 4(3) (August 2005)
17. Haberman, B., Trakhtenbrot, M.: An Undergraduate Program in Embedded Systems
Engineering. In: CSEET 2005, Proceedings of the 18th Conference on Software
Engineering Education & Training. IEEE Computer Society, Washington, DC, USA (2005)
18. Barrett, S.F., et al.: Embedded Systems Design: Responding to the Challenge. Computers in
Education Journal XVIIII(3) (July-September 2009)
19. The IVV Automação Lda, http://www.ivv-aut.com/
The Study of H.264 Standard Key Technology
and Analysis of Prospect
Abstract. H.264 is the latest video coding standard. It uses a series of advanced
coding techniques and has a great advantage over traditional standards in coding
efficiency, error resilience, and network adaptability. This article mainly studies
the key technologies of H.264, puts forward the current problems and gives
solutions, and finally introduces some new developments and applications.
1 Preface
Since the 1990s, with the rapid development of mobile communications and network
technology, the processing and transmission of multimedia and video information
over mobile networks has become a hot spot in China's information technology.
Video information has many advantages, being intuitive, precise, efficient, and
extensible, but because video carries an abundance of information, beyond the
problem of video compression coding we must also solve the quality assurance issues
that follow compression to ensure better application of the video. This is a
contradiction: the aim is to achieve a greater compression ratio while ensuring a
certain degree of video quality at the same time.
For this reason, since the first international video coding standard was enacted in 1984,
a great deal of effort has been invested. The ITU-T and other international
standardization bodies have issued more than ten video coding standards one after
another, which has greatly promoted the development of video communication. Even
so, the development of video communication has been less than satisfactory in some
respects, mainly because the conflict between video compression and video quality was
not well resolved. It was in this setting that the H.264 video compression coding
standard was published.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 49–54.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
Specifically, H.264 was developed jointly by the ITU-T Video Coding Experts Group
(VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). In this sense, H.264
differs from past standards in that it is not only an industry standard but also an
international standard.
H.264 is the latest and most promising video compression technology; it improves
significantly on previous video compression technologies in both compression
efficiency and network adaptability. Many new techniques are used in the H.264
standard, such as multiple reference frame prediction, integer transform and
quantization, new entropy coding methods, and a new intra-frame prediction coding
scheme. These are designed to achieve higher coding efficiency, but at the cost of
increased computational complexity. To obtain better image quality in less storage
space and to transfer images quickly under limited bandwidth, H.264 nearly doubles the
compression ratio without sacrificing image quality, which helps resolve the
contradiction between video compression efficiency and real-time transmission. For
this reason, H.264 is considered the most influential video compression standard.
The idea behind the H.264 algorithm is to eliminate spatial redundancy using
intra-frame prediction, to eliminate temporal redundancy using inter-frame prediction
and motion compensation, and to remove frequency-domain redundancy using
transform coding. The basic principles and functional modules are unchanged from
previous standards (such as H.261, H.263, MPEG-1, MPEG-4): the idea is still the
classic motion-compensated hybrid coding algorithm. In addition, H.264 defines new
SP and SI frames to support different data rates, fast switching between streams of
different image quality, and rapid recovery from information loss.
The basic functions of the H.264 codec are briefly described as follows:
Encoder. The encoder adopts a hybrid method of transformation and prediction. When
intra-frame prediction coding is used, it first selects the appropriate intra prediction
mode, subtracts the predicted values from the current actual values, and then
transforms, quantizes, and entropy-codes the difference. Meanwhile, it applies inverse
quantization and inverse transformation to the coded data to reconstruct the prediction
residual image, obtains the reconstructed frame by adding the prediction back, and
finally feeds the result into the frame memory after it has been smoothed by the
deblocking filter.
When inter-frame prediction is used, the input image block first obtains a motion vector
by motion estimation in the reference frame; the residual image after motion
compensation is then integer-transformed, quantized, and entropy-coded, and the result
is sent into the channel together with the motion vector. In parallel, the frame is
reconstructed in the same manner, passed through the deblocking filter, and stored in
the frame memory as the reference image for the next frame.
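The prediction/transform loop described above can be sketched in a few lines. This is a minimal illustration, not H.264 itself: a flat mean predictor and a plain scalar quantizer stand in for the standard's intra prediction modes, integer transform, and entropy coding, but the encode/reconstruct symmetry (the decoder-side reconstruction living inside the encoder loop) is the same.

```python
import numpy as np

def encode_block(block, qstep):
    """Predict, form the residual, quantize it (the only lossy step here)."""
    prediction = np.full_like(block, block.mean())  # stand-in predictor
    residual = block - prediction
    levels = np.round(residual / qstep)             # quantization
    return prediction, levels

def reconstruct_block(prediction, levels, qstep):
    """Inverse-quantize and add the prediction back, as the decoder does."""
    return prediction + levels * qstep

block = np.array([[52, 55, 61, 66],
                  [70, 61, 64, 73],
                  [63, 59, 55, 90],
                  [67, 61, 68, 104]], dtype=float)
pred, levels = encode_block(block, qstep=4.0)
recon = reconstruct_block(pred, levels, qstep=4.0)
# The reconstruction error is bounded by half the quantization step.
assert np.max(np.abs(recon - block)) <= 2.0
```

Because the encoder reconstructs exactly what the decoder will, predictions on both sides stay in sync even though quantization discards information.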
Decoder. In short, decoding is the reverse of encoding. When the compressed stream
enters the decoder, the first task is to judge whether intra-frame or inter-frame
prediction was used. With intra-frame prediction, the block is reconstructed directly
after inverse quantization and inverse transformation. With inter-frame prediction, the
result so far is only the reconstructed residual image, so motion compensation is
performed using the reference image in the frame memory; the reference image and the
residual image are then superimposed to finally obtain the reconstructed frame.
H.264 is based on the techniques of H.263 and adopts a hybrid coding scheme that
combines DPCM coding and transform coding. H.264 is distinctive in many respects,
such as multi-mode motion estimation, the integer transform, and unified
variable-length coding. In addition, it introduces a series of advanced techniques: a
4 × 4 integer transform, spatial-domain intra-frame prediction, inter-frame prediction
with multiple reference frames, and motion estimation with 1/4-pixel accuracy. These
techniques make the image quality of compressed video far better than that of any
previous coding standard at the same bitrate; H.264 can save up to 50% of the bitrate.
Details of the key technologies of the H.264 standard are as follows:
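The 4 × 4 integer core transform is built from a matrix of small integers, so it can be computed exactly in integer arithmetic; in the standard, the remaining scaling factors are folded into quantization. A brief NumPy sketch (illustrative; the post-scaling step is omitted here) shows that the core transform itself is exactly invertible:

```python
import numpy as np

# The 4x4 forward core transform matrix of H.264, an integer
# approximation of the DCT.
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=float)

def forward(X):
    """Forward core transform: Y = Cf X Cf^T (integer-only in practice)."""
    return Cf @ X @ Cf.T

def inverse(Y):
    """Exact inverse via linear algebra; the standard instead uses a
    matching integer inverse matrix plus scaling inside quantization."""
    Ci = np.linalg.inv(Cf)
    return Ci @ Y @ Ci.T

X = np.arange(16, dtype=float).reshape(4, 4)
Y = forward(X)
assert np.allclose(inverse(Y), X)  # no rounding loss in the transform itself
```

Unlike the floating-point DCT of earlier standards, this transform introduces no encoder/decoder mismatch, since both sides compute identical integer results.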
Deblocking filter
Usually, at low bitrates, block-based transform coding produces blocking artifacts
because of the larger quantization step used; moreover, H.264's use of multiple
reference frames can strengthen the blocking effect in some cases. To solve this
problem, H.264 uses an adaptive deblocking filter applied at 4 × 4 block boundaries.
The filter is located inside the encoder's motion estimation/motion compensation loop,
and a reconstructed frame can be stored in the frame memory as the reference for the
next coded frame only after it has been filtered. Deblocking effectively removes the
blocking artifacts produced by prediction error, preserves the original edge information
as much as possible, and greatly improves the subjective quality of the image, but all of
this comes at the cost of increased system complexity.
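For illustration only, the following toy filter shows the idea of softening a block edge. The standard's filter is far more elaborate, choosing its strength adaptively from boundary conditions and quantization parameters; the weights below are invented for this sketch.

```python
import numpy as np

def smooth_edge(row, edge):
    """Soften the step at index `edge` in a 1-D row of pixels by
    low-pass filtering the two pixels adjacent to the block boundary."""
    p0, q0 = row[edge - 1], row[edge]
    row = row.copy()
    row[edge - 1] = (2 * p0 + q0 + row[edge - 2] + 2) // 4  # weighted average
    row[edge]     = (2 * q0 + p0 + row[edge + 1] + 2) // 4
    return row

row = np.array([80, 80, 80, 80, 120, 120, 120, 120])  # a sharp block edge
out = smooth_edge(row, edge=4)
# The step across the boundary shrinks; pixels far from it are untouched.
assert abs(int(out[4]) - int(out[3])) < abs(int(row[4]) - int(row[3]))
assert out[0] == 80 and out[7] == 120
```

The real filter must also decide when not to filter, so that genuine image edges (which look just like block edges) are left intact.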
Entropy coding
Entropy coding is a lossless compression technique based on the statistical properties of
a random process; the stream produced by entropy coding can be decoded back to the
original data without any distortion.
H.264 adopts two new types of entropy coding. The first is variable-length coding,
comprising universal variable-length coding (UVLC) and context-based adaptive
variable-length coding (CAVLC); the second is context-based adaptive binary
arithmetic coding (CABAC). The entropy coding of H.264 has the following
characteristics:
Both techniques make good use of context information, so that the probability
model used for coding approaches the actual statistics of the video stream,
reducing coding redundancy.
The entropy coding of H.264 adapts to the stream, achieves good coding
efficiency over a wide range of rates, and meets the needs of many different
applications.
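The UVLC scheme is built on Exp-Golomb codes: an unsigned value k is coded by writing k+1 in binary and prefixing it with one fewer zero bits than that binary string's length, so small (frequent) values receive short codewords. A short sketch:

```python
def ue_encode(k):
    """Exp-Golomb code for an unsigned value k (the ue(v) descriptor)."""
    b = bin(k + 1)[2:]                 # binary of k+1, without '0b'
    return "0" * (len(b) - 1) + b      # leading zeros, then the binary part

def ue_decode(bits):
    """Decode a single Exp-Golomb codeword back to its value."""
    zeros = len(bits) - len(bits.lstrip("0"))   # count the zero prefix
    return int(bits[zeros:2 * zeros + 1], 2) - 1

# The first few codewords: 0 -> "1", 1 -> "010", 2 -> "011", 3 -> "00100".
assert [ue_encode(k) for k in range(5)] == ["1", "010", "011", "00100", "00101"]
assert all(ue_decode(ue_encode(k)) == k for k in range(100))
```

The zero prefix tells the decoder how many bits follow, so codewords can be parsed from a bitstream without any separator, which is what makes the code usable in a serial video stream.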
SP / SI frame technology
To accommodate the requirements of bandwidth adaptation and error resilience in
video streams, H.264 proposes two new frame types: the SP frame and the SI frame.
SP-frame coding is still motion-compensated predictive coding; its basic principle is
similar to that of the P frame, the difference being that an SP frame can use different
reference frames to reconstruct the same image frame. Taking advantage of this, SP
frames can replace I frames and are widely used for stream switching, splicing, random
access, fast forward/backward, error recovery, and so on. The SI frame, in contrast, is
based on intra-frame prediction; it is the slice most similar to the SP frame and is used
when inter-frame prediction cannot be applied because of transmission errors. In a
sense, the network friendliness of H.264 is greatly improved precisely because of the
use of SP/SI frames; in addition, H.264 gains strong error resilience, supporting flexible
streaming-media application services.
5 Conclusion
As an important advance in next-generation video coding standards, H.264 has obvious
advantages over previous standards such as H.263 and MPEG-2. It adds several
advanced techniques on the basis of earlier standards, which has enabled its use in
many fields. It improves coding efficiency and network adaptability at the same time,
so it can achieve higher video transmission quality at the same bitrate. Although H.264
has many advantages over the traditional standards, they come at the cost of increased
computational complexity in video encoding. How to reduce computational complexity
as far as possible while ensuring high encoding efficiency therefore remains an
important issue to be resolved.
Acknowledgment. This work was supported by a 2010 key project of the Henan
Science and Technology Agency: the study of video transmission control based on 3G
mobile networks (No. 102102210125, funding: 20,000 RMB).
References
1. Feng, L.: Video Image Coding Technology and International Standards. Beijing University
of Posts and Telecommunications Press, Beijing (2005)
2. Huang, J., Liu, J.: Digital Image Processing and Compression Technology. University of
Electronic Science and Technology of China, Chengdu (2000)
3. Yu, Z.: Image Coding Standard H.264 Technology. Posts & Telecom Press, Beijing (2006)
4. Wu, L.: Data Compression. Publishing House of Electronics Industry, Beijing (2000)
5. Deng, Z.: H.264-Based Video Encoding / Decoding and Control Technology. Beijing
University of Posts and Telecommunications Press, Beijing (2000)
6. Wu, Q.: The Key Technical Analysis Based on H.264 Video Coding & Complexity
Research Testing. Modern Electronic Technology (20), 60–62 (2009)
7. Wang, Q., Guo, X.: The Progress of H.264/AVC Standard in Recent Years. World Radio
& Television (9), 78–82 (2010)
8. Chen, Q.: The Status and Development Trend of H.265 Standard. China Multimedia
Communication (10), 12–15 (2008)
9. Marpe, D., Schwarz, H., Wiegand, T.: Context-based adaptive binary arithmetic coding in
the H.264/AVC video compression standard. IEEE Trans. CSVT 13, 620–636 (2003)
10. Lee, S.W.: H.264/AVC Decoder Complexity Modeling and Its Application. Ph.D.
Dissertation, University of Southern California
11. Kalva, H.: Issues in H.264/MPEG-2 video transcoding. In: Proceedings of the IEEE
Consumer Communications and Networking Conference, pp. 657–659 (January 2004)
12. Karczewicz, M., Kurceren, R.: The SP- and SI-frames design for H.264/AVC. IEEE Trans.
CSVT 13, 637–644 (2003)
Syllabus Design across Different Cultures
between America and China
Abstract. This article compares the different approaches, goals, and educational
philosophies of syllabus design for higher education by exploring different
cultural and educational traditions.
1 Introduction
Universities are places of higher education and scientific research. American
universities evolved from a long, classical European model: for example, Harvard
University, originally named Cambridge College, was established in 1636 and has over
300 years of history. These classical universities, adapting to societal needs while
adhering to their intellectual traditions, gradually developed into the universities of
today.
The history of Chinese universities spans only about one hundred years; most were
modeled and built upon the Soviet Russian tradition.
Higher education in the United States largely serves three types of students, and the
profile of each is discussed later. Adhering to their classical, European tradition, the
majority of American universities do not require students to declare a major in the first
year, although students are encouraged to indicate an area of interest before admission.
University curricula generally encourage freshman and sophomore students to pursue
self-exploration by requiring them to complete general education courses. These
general education courses provide broad exposure to the disciplines of the liberal arts,
helping students explore philosophy, sociology, or literature, and helping build skills in
mathematical and logical thinking, abilities in effective writing and communication,
and the knowledge base of an educated citizen.
Another goal of the general education requirements is to help students decide what
exactly they are interested in learning about in more depth, gain expertise in a specific
discipline, and settle on a major by the end of this exploration. It's important to note
that almost all universities require certain math, laboratory science, and
technology-related courses in their general education requirements. General education
is sometimes called liberal studies requirements or university-wide requirements: these
are the requirements every student must complete regardless of the individual's major.
In many institutions, students are not allowed to take any major courses until the
majority of these university requirements are met. As a consequence, major courses are
built upon the assumption that all students have a certain level of college-level writing
skill, have completed some self-exploration, and have decided on a major upon
reflection on personal aptitude, skill set, and values. Most students declare their majors
by the end of sophomore year.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 55–61.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
Here is an example. Metropolitan State University is one of the universities within the
Minnesota State Colleges and Universities (MnSCU) system. There are 32 colleges
under MnSCU, and all bachelor's degree graduates must meet ten areas of competency
called the Minnesota Transfer Curriculum: "The transfer curriculum commits all public
colleges and universities in the state of Minnesota to a broad educational foundation
that integrates a body of knowledge and skills with study of contemporary concerns --
all essential to meeting individuals' social, personal, and career challenges in the 1990s
and beyond. The competencies people need to participate successfully in this complex
and changing world are identified. These competencies emphasize our common
membership in the human community; personal responsibility for intellectual, lifelong
learning; and an awareness that we live in a diverse world. They include diverse ways
of knowing -- that is, the factual content, the theories and methods, and the creative
modes of a broad spectrum of disciplines and interdisciplinary fields -- as well as
emphasis on the basic skills of discovery, integration, application and communication.
All competencies will be achieved at an academic level appropriate to lower-division
general education." At Metropolitan State University, the General Education and
Liberal Studies requirements for graduation comprise 10 goal areas with a minimum of
48 semester credits:
Goal 1: Communication:
At least two writing courses and one oral communication course
Goal 2: Critical Thinking
Goal 3: Natural Sciences
At least one lab based science course
Goal 4: Mathematical/Logical Reasoning
Goal 5: History and the Social & Behavioral Sciences
At least two courses from two disciplines
Goal 6: Humanities and the Fine Arts
At least two courses from two disciplines
Goal 7: Human Diversity
Goal 8: Global Perspective
Goal 9: Civic and Ethical Responsibility
Goal 10: People and the Environment
Syllabus Design across Different Cultures between America and China 57
Traditional students: traditional college students are between the ages of 18 and 25
and have graduated from high school. In most cases, they have taken the SAT
(formerly the Scholastic Aptitude Test) or the ACT (formerly American College
Testing) exam a year before their high school graduation. Most American colleges
use SAT or ACT scores as part of their admission requirements. Both the SAT and
ACT tests measure reading, math, and writing. Although test scores are important,
most institutions look for other indicators of a student's abilities. Many universities
weigh a student's high school grades, demonstrated leadership, volunteering
experience, or extraordinary athletic activities as well as academic pursuits when
making admission decisions. Traditional students normally attend school full-time,
taking 12 to 15 credits each term.
Adult students: these are students who have full-time jobs and family
responsibilities. Many of them attend college classes at night or during the
weekends. Most need to support their own education, paying tuition out of their
own pockets. The majority of adult students attend college part-time, taking 8
credits or less per term.
Online students: over 3.4 million students (over 19%) are taking college courses
online. It's now possible for students to complete their undergraduate degrees
without ever showing up on campus or meeting other students face to face.
Students decide how many credits they want to take each term, and they are able
to complete schoolwork at their own pace.
Traditional students in China, ages 18-19, are required to pass the National
Entrance Exam before they are eligible to apply to any university. Generally, all
high school graduates want to attend university. It's estimated that there will be
2,500,000 students studying in universities in 2010.
Adult students are enrolled in adult education. If a student fails to meet the
minimum threshold of the National Entrance Exam, he or she may either retake the
exam the following year, hoping for a higher score, or choose to take the National
Entrance Exam for Adults. A student who elects to take the National Entrance
Exam for Adults is eligible only for degree programs designed for adults, earning
an adult diploma. Students who pass the National Entrance Exam for Adults may
attend universities that provide adult education programs, or they may study via
distance learning. For instance, online programs are available in which students do
not attend face-to-face classes. At present, there are 68 distance learning education
programs, which serve approximately 4.2 million adult and remote students.
Unlike traditional students, they attend school on a part-time basis, evenings and
weekends.
Self-learning students are administered by national and local governments and
private colleges. These students may have failed the National Entrance Exam or
may have chosen not to take it. Self-learners may study individually, but most
attend a private college.
In America, syllabi are used by students. Faculty members are expected to give a
syllabus to students at the beginning of each term. In many institutions of higher
learning, the syllabus is attached to the course schedule so students are able to read the
syllabus before registering for a course. The syllabus serves as a guide for students to
learn about the course: what's expected, how evaluations are done, and how much work
is involved. There is no universal format to follow, but generally it includes an
introduction of the faculty member, how to contact the faculty member, office hours,
the required textbook, prerequisites, a course description, learning goals, a competence
statement, evaluation methods, and a schedule of assignments, labs, and tests. Policies
relating to learning disabilities, complaints, or absences are listed as well. Methods of
evaluation and assessment measurements are clearly laid out: the scores needed to pass
the course as well as the scores needed for each grade level, and the scores for every
assignment, lab, or quiz. Students generally get an idea of how the course proceeds by
looking at the schedule, such as which chapter is covered in which week, or how many
chapters or concepts are covered in the course.
In short, the syllabus is designed to inform students what the course is about, how it
proceeds during the term, what the learning outcomes are, and what evaluation methods
are used to assess students' mastery of content knowledge.
Chinese syllabi are instruction files for teaching. They are designed in reference to the
overarching goal of the institution's curriculum. The standardized format is set by the
division of Academic Affairs, from the font size to the style of bullet points to the
vocabulary used to describe the course. Usually a syllabus includes an introduction to
the course, its goals, how important the course is in relation to other courses in the
major, prerequisites, learning components such as labs and lectures, and a detailed
schedule outlining test dates and times and specific requirements for each assignment.
The content of the syllabus is extremely detailed and long, describing every aspect of
learning and teaching, section by section, chapter by chapter, hour by hour. For each
section, the learning outcomes must be outlined, and standard language must be used in
the syllabus. For instance, the main concepts of each chapter and the teaching method
for instructing those concepts are included. In addition, standard language is used for
each sub-concept. The standard phrases used are: must know, must understand, must
master.
According to such a system, faculty members in the same department teach the
same course with exactly the same syllabus. They all emphasize the same key con-
cepts in each chapter and each section. They all assess students according to the same
methods; and they all teach at the same pace. In this system, the final exam normally
counts heavily in the successful passing of the course.
3.3.1 Similarities
There are some similarities between the two designs: information such as prerequisites,
the course description, goals, assessment methods, the schedule, textbook titles, and lab
descriptions is included in both.
3.3.2 Differences
The differences in syllabus design seem to highlight the different educational tradi-
tions and cultures:
(1) Different Audience
American syllabi are used by students; they are the guide for students. The syllabus
explains what a course is about and how students can pass it. After reading the syllabus,
students know clearly how many assignments they must hand in, how many
assignments are required, what scores they will lose if they don't hand in certain
assignments, how many quizzes to expect, and how well they need to do on each.
Chinese syllabi are used by faculty members only. They are the credo of how faculty
members teach the course: they are the files that faculty members must follow,
dictating the content matter covered during particular weeks of the term. The students
do not concern themselves with the contents of the syllabus.
(2) Different Formats
The format of the American syllabus is more individualized. It has neither a strict
format nor requirements for font sizes and bullet styles, as long as the basic information
is introduced clearly. Faculty members may include additional information they feel is
important.
Chinese syllabi follow a strict design format, from font sizes to bullet styles.
(3) Different Content:
The content of American syllabi is not very detailed, containing only the title of the
class, the chapters the course covers, the lab and assignment schedule, and so on. It
includes none of the other details found in Chinese syllabi.
The contents of Chinese syllabi are very rigid: standardized words and phrases guide
a faculty member's teaching. By following the syllabus closely, a faculty member
accurately conveys the intended emphasis, from the most important to the less
important topics.
60 F. Guo, P. Wang, and S. Fitzgerald
(1) Individual faculty members in America often have flexibility in the choices of
topics, the amount of time spent on each topic and when and in what order to cover
topics. What the students learn depends, for the most part, on the interpretation of
what the faculty member deems as important or less important. When students are
following a sequence of courses, different skills or key concepts covered by different
faculty in the prerequisite courses can be problematic for the students. However, such
an approach also encourages a faculty member to introduce areas of strong personal
skills, especially in areas of advanced knowledge that are still new. Chinese faculty
members have less flexibility; they teach strictly according to the syllabus. Students
taught the same course by different faculty members must pass the same exam;
therefore, students do not face a knowledge gap when continuing on to a sequential
course. Some compromise that incorporates aspects of the American style into the
Chinese format, while assessing students with standard exams as in American courses,
may be worth looking into.
(2) In American universities, from the syllabus, students know the learning goals of
each course, the requirements for passing, and the schedules for assignments and
quizzes. It is the student's decision to take a certain course with a certain faculty
member, and it is the student's responsibility to pass the course. In the Chinese system,
it's the faculty's responsibility to help students pass the final exam, an exam that is not
written by any specific faculty member but by the Academic Affairs division. Doing
well on the final exam remains the only path to success in most courses. Before the
final exam, students memorize their notes in order, and faculty members help them do
so. Such a method is not good for evaluating a student's true ability to apply theoretical
knowledge and can be damaging to those with more creative minds. A serious
examination of assessment methods is worthwhile.
(3) American students are encouraged to be proactive, independent, and creative
learners. They choose their majors and courses themselves. Therefore, it's more likely
they are actively involved during class time. They ask questions and they want faculty
input. They also know why they need to finish assignments and the importance of
getting them to the faculty on time. Chinese students are more dependent on their
professors for direction. Students take notes during class. Before the final exam, they
memorize their notes in order to pass. Chinese students are not encouraged to
participate actively in their education and, as a consequence, generally follow the same
path, carrying on their professors' thoughts. A model that gives students the
responsibility to learn for themselves by encouraging creative and active participation
in making course choices may be worth looking into.
(4) It is easy for American students to be accepted to universities, but it is hard for
them to graduate; it's reported that only 60% of those who enter college ever graduate.
On the contrary, it is hard for Chinese students to be accepted into universities, but it is
easy for them to graduate; it's estimated that over 95% of them graduate. It's the
faculty's responsibility and the university's duty to help every entering student
graduate. If students' success in life is the ultimate goal of higher education, it's
important that the Chinese system take a closer look at helping students become
self-directed learners. Coursework in general education areas such as communication,
culture, and technology may be beneficial to Chinese students, allowing them to
explore different areas before deciding on their majors and to take more personal
responsibility for learning in their chosen majors. With such freedom, students may be
able to develop into independent thinkers, creative workers, and lifelong learners.
As outlined above, syllabus design is based on different cultures and educational
traditions and philosophies. Though each has its own characteristics, the time for
rethinking each model and learning from each other seems to be here.
References
1. http://www.360doc.com/showWeb/0/0/294799.aspx
2. http://research.microsoft.com/asia/asiaur/summit04/downloads/
china.pdf
3. http://www.edu.cn/20010827/208372.shtml
4. http://define.cnki.net/WebForms/WebDefines.aspx
5. http://www.mntransfer.org/pdfs/transfer/PDFs/MNTC.pdf
Using Eye-Tracking Technology to Investigate the Impact
of Different Types of Advance Organizers on Viewers'
Reading of Web-Based Content: A Pilot Study
1 Introduction
The concept of advance organizers was first introduced by Ausubel [1]. According to
Ausubel, an advance organizer is a cognitive strategy that allows learners to recall and
integrate their prior knowledge with the new information presented in the learning
environment. According to Mayer's [5] theory, advance organizers affect learning in
two ways: first, through conceptual anchoring, the concepts in the reading content are
integrated with prior knowledge, promoting retention and transfer; second, through
obliterative subsumption, the technical information and insignificant aspects of the
reading content are diminished. Advance organizers have long been used to present
information before a lesson to make the content of the lesson more meaningful to
learners and to help learners integrate their prior knowledge with the reading content in
meaning making [1][4]. Ausubel [2] defined two types of advance organizers: the
expository and the comparative organizer. An expository organizer can be used to
provide related adjoining subsumers for materials that are relatively unfamiliar to the
learners, while a comparative organizer can be used to help learners relate unfamiliar
knowledge to familiar or existing knowledge. Barnes and Clawson [3] argued
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 63–69.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
that when variables such as the type of organizer were taken into consideration, early
studies reported statistically non-significant positive, or even significant negative,
results for student learning.
As text-based information, especially in webpage format, still serves as the main
information source for multimedia learning, advance organizers could still serve as an
effective strategy for achieving learning. However, only a few studies have
investigated the impact of advance organizers on learning from a cognitive
perspective. By utilizing eye-tracking technology, this pilot study sought to determine
the effect of different types of advance organizers on learners' processing of
to-be-learned content encoded in a webpage format.
2 Related Literature
Advance organizers have long been used to present information prior to a lesson to
make the content of the lesson more meaningful and to help learners integrate their
own prior knowledge with lesson content in meaning determination [1]. Ausubel [2]
defined two types of advance organizers, expository and comparative. An expository
organizer can be used to provide related adjoining subsumers with respect to materials
that are relatively unfamiliar to the learners, while a comparative organizer can be
used to help learners relate unfamiliar knowledge to familiar or existing knowledge.
Different formats, such as verbal, visual, or a combination of the two, have also been
used as advance organizers to facilitate learning. As a result, a variety of media have
been utilized to generate different advance organizer formats. In addition to the use of
oral and textual advance organizers, simple illustrations and concept maps have been
used as graphic organizers [6][7][8]. Recently, dynamic graphics like video and
computer animations have been implemented as advance organizers in a hypermedia
format [9]. Early studies tested the effectiveness of such advance organizers on
learning. Ausubel and colleagues conducted a series of experiments on the impact of
advance organizers on student learning [10][11][12][13]. In their experiments,
college and high school students using text-based advance organizers were found to
perform significantly better than the control group on immediate and retention
achievement tests. However, later studies have reported conflicting results on the
effectiveness of advance organizers for student learning.
Eye movements can work as a blueprint for presenting details as to just how
information in different visual media formats is retrieved and processed [14]. Human
eyes are believed to be stably positioned for only short time intervals, roughly 200
to 300 milliseconds long. These periods of stability of one's eyes are called
fixations. During a fixation, there is an area corresponding to only about 2 degrees
of visual angle over which the information is focused and clear. Saccades refer to
fast and scattered movements of one's eyes from one fixation point to another; it is
believed that no information is obtained during these movements. The distance between
two successive fixation points is defined as the saccade length. In the 1970s,
non-intrusive technology was invented to track participants' eye movements. With
further enhancement of the technology, the usability of eye tracking increased, and
eye-movement studies emerged in the late 1990s, with attention especially given to
human-computer interaction and human cognitive reactions [15]. Eye fixations have
been found to correspond to the information being encoded and processed by the
reader.
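The fixation and saccade measures defined above can be made concrete with a short sketch. This is an illustrative example only; the `Fixation` struct and the function names are our own assumptions, not part of the faceLAB toolchain or this study's analysis code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A fixation: a stable gaze position (screen pixels) held for roughly
// 200-300 ms, per the definitions above.
struct Fixation {
    double x, y;       // screen coordinates in pixels
    int duration_ms;   // how long the eyes stayed here
};

// Saccade length: Euclidean distance between two successive fixation points.
double saccadeLength(const Fixation& a, const Fixation& b) {
    return std::hypot(b.x - a.x, b.y - a.y);
}

// Mean saccade length over a whole scanpath of fixations.
double meanSaccadeLength(const std::vector<Fixation>& path) {
    if (path.size() < 2) return 0.0;
    double total = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i)
        total += saccadeLength(path[i - 1], path[i]);
    return total / static_cast<double>(path.size() - 1);
}
```

Aggregated over a reading session, such measures (fixation count, total fixation duration, mean saccade length) are the kind of indices eye-tracking studies compare between conditions.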
Using Eye-Tracking Technology to Investigate the Impact of Different Types 65
3 Methods
Nine college students in their freshman or sophomore year participated in this study.
Eye-tracking technology was utilized to track the learners' eye movements.
Two introductions on different types of rocks served as the reading content. Five
test questions on the nature of metamorphic rock and a short paragraph summarizing
the characteristics of plutons were developed to serve as the different types of
advance organizers. The advance information was placed before the related detailed
introduction of each of the two types of rock. All the reading content was presented
in webpage format. Participants were asked to read the two forms of reading content,
with either question-based or summary advance organizers, in random order.
Participants' eye movements were recorded by a faceLAB 4 eye-tracking system
while they read the content on the computer screen.
5 Conclusions
Taking advantage of eye-tracking technology, this pilot study found that using
questions as advance organizers seemed to improve students' reading efficiency on
the web-based reading content. The small sample size might have weakened the
conclusions drawn from the results; however, the findings of the present study have
paved the way for our further studies. More studies using larger sample sizes and
utilizing tests of student achievement and cognitive load would be beneficial for
understanding in depth how advance organizer instructional strategies effectively
facilitate student learning.
References
1. Ausubel, D.P.: Educational psychology: A cognitive view. Holt, Rinehart & Winston, New
York (1968)
2. Ausubel, D.P.: The acquisition and retention of knowledge: A cognitive view. Kluwer
Academic Publishers, Boston (2000)
3. Barnes, B.R., Clawson, E.V.: Do advance organizers facilitate learning? Recommendations
for further research based on an analysis of 32 studies. Review of Educational
Research 45, 637–659 (1975)
4. Dembo, M.H.: Applying educational psychology in the classroom, 4th edn. Longman,
New York (1991)
5. Mayer, R.E.: Twenty years of research on advance organizers: Assimilation theory is still
the best predictor of results. Instructional Science 8(2), 133–167 (1979)
6. Gil-Garcia, Villegas, J.: Engaging minds, enhancing comprehension and constructing
knowledge through visual representations. Paper presented at the Conference on World
Association for Case Method Research and Application, Bordeaux, France (2003)
7. Kang, O.-R.: A meta-analysis of graphic organizer interventions for students with learning
disabilities. Unpublished Ph.D. dissertation, University of Oregon, Oregon (2002)
8. Millet, P.: The effects of graphic organizers on reading comprehension achievement of
second grade students. Unpublished Ph.D. dissertation, University of New Orleans, New
Orleans (2000)
9. Tseng, C., Wang, W., Lin, Y., Hung, P.-H.: Effects of computerized advance organizers on
elementary school mathematics Learning. In: Paper presented at the International
Conference on Computers in Education (2002)
10. Ausubel, D.P.: The use of advance organizers in the learning and retention of meaningful
verbal material. Journal of Educational Psychology 51(5), 267–272 (1960)
11. Ausubel, D.P., Fitzgerald, D.: Organizer, general background, and antecedent learning
variables in sequential verbal learning. Journal of Educational Psychology 53, 243–249
(1962)
12. Ausubel, D.P., Youssef, M.: The role of discriminability in meaningful parallel learning.
Journal of Educational Psychology 54, 331–336 (1963)
13. Fitzgerald, D., Ausubel, D.P.: Cognitive versus affective factors in the learning and
retention of controversial material. Journal of Educational Psychology 54, 73–84 (1963)
14. Unsworth, N., Heitz, R.P., Schrock, J.C., Engle, R.W.: An automated version of the
operation span task. Behavior Research Methods 37, 498–505 (2005)
15. Jacob, R.J.K., Karn, K.S.: Eye tracking in human-computer interaction and usability
research: Ready to deliver the promises. In: Hyönä, J., Radach, R., Deubel, H. (eds.) The
Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, pp. 573–605.
Elsevier, Amsterdam (2003)
16. Just, M.A., Carpenter, P.A.: Eye fixations and cognitive processes. Cognitive
Psychology 8, 441–480 (1976)
17. Huber, S., Krist, H.: When is the ball going to hit the ground? Duration estimates, eye
movements, and mental imagery of object motion. Journal of Experimental Psychology:
Human Perception and Performance 30, 431–444 (2004)
18. Clark, R.E.: Media Will Never Influence Learning. ETR&D: Educational Technology
Research and Development 42(2), 21–29 (1994)
The Development and Implementation of Learning
Theory-Based English as a Foreign Language (EFL)
Online E-Tutoring Platform
H.-H. Chuang, C.-J. Huang, and H.-C. Liu
Abstract. The purpose of this study was to develop and implement a learning
theory-based EFL (English as a Foreign Language) e-tutoring platform to help
EFL learners develop English language skills at their own pace. The online
e-tutoring platform was designed based on the learning theories of constructivism,
situated learning theory, cooperative learning, and self-regulated learning. It is
intended that the online e-tutoring platform provide an opportunity for e-tutors
to help each individual EFL learner develop his or her language skills. A group
of 25 sixth graders participated in the 20-week e-tutoring program via the online
EFL e-tutoring platform. Analysis of results from the participants' achievement
tests and feedback from the e-tutors helped to inform future improvements of
e-tutoring programs. Recommendations for platform improvements are also provided.
1 Introduction
According to Taiwan Network Information Center's (TWNIC) latest Internet database
in 2010, about 14,660,000 people (60 percent of Taiwan's population) are using the
Internet [1]. The database also shows that the age of Internet users is decreasing
each year. Younger generations are digital natives, fluent in navigating Internet
tools. For the younger generation especially, learning is no longer limited to the
classroom; e-Learning has emerged as an optimal option for them.
In a global knowledge-economy era, learning will no longer be restricted by
boundaries of time and place. E-Learning in a digital age can be customized to meet
each individual learner's needs and starting point, which makes e-Learning a
promising innovation that will transform the way learning takes place.
2 Related Literature
2.1 e-Learning
There are numerous definitions of the term e-Learning. One is that e-learning
is a type of education where the medium of instruction is computer technology, according to
Wikipedia [16]. Brown and Voltz [6] proposed that e-Learning is the use of computers
in a systematic four-step process: presented, practiced, assessed, and reviewed.
According to Clark and Mayer [2], e-Learning is instruction that is delivered on a
computer and has several defining characteristics.
2.2 Theory
This research project aimed to construct an EFL online e-tutoring platform based on
four learning theories: constructivism, situated learning theory, cooperative learning,
and self-regulated learning.
3 System
In establishing the EFL online e-tutoring platform, four English content modules
were digitalized as the basis of the system, with a manageable interface and tool
aids. We used Moodle to organize the related modules.
The advantages of the Moodle system are that it can be downloaded from the Internet
for free and is easy to use and install.
This research used Fedora Linux Server version 10 with Apache 2 to construct the
system and server of the platform. The programming technologies are PHP5, MySQL,
CSS, JavaScript, and Flash. The platform has been pilot tested to examine its usability.
4 System Introduction
The following is an introduction to the learning theory-based English as a Foreign
Language (EFL) online e-tutoring platform.
The online e-tutoring platform is organized into two areas:

Internet Learning Community (cooperative learning):
1. Introduction forum
2. E-tutors and e-learners communication forum
3. Q & A session
4. Learning reflections

Online Learning Passport (self-regulated learning):
1. Learning quiz activity
2. Report card of learning progress
3. Record of lessons reviewed
4. Rewards
5.1 Achievement
The participants of the study were 25 sixth-grade elementary school students from
economically disadvantaged families.
Students without Internet access at home (about 50%) used school computer labs to
conduct one-on-one e-learning at school in the afternoon; those with home Internet
access (about 50%) used their home computers.
Students were given pre- and post-tests of English at the beginning and the end of
the 18-week participation period. After analyzing the data, the improvement was about
21% for school computer users and 33% for home users, so the at-home learning was
more effective. The reason for the difference might be that students with a computer
at home could do more learning activities than students who could use a computer only
at school: the school allowed only one one-hour computer lab session from Monday to
Friday. What is worth noticing is that those who did not make significant progress in
English proficiency spent only half the time, or even less, on the platform compared
with those who made significant progress.
Four of the program's e-tutors were interviewed. They all believed that e-tutors
should possess computer literacy, knowledge of netiquette, and related pedagogical
ability and teaching tactics. E-tutors should be able to understand learners'
needs and to solve technical problems. They should also have the ability to
communicate and interact with learners and other e-tutors.
Given that the home e-Learning group made more progress than the school group due
to the Internet access issue, and in light of the digital divide, it would be helpful
if the use of the online platform could be coordinated with community e-learning
digital centers and school computer and Internet facilities to address the access
issue. Schools and society should provide more digital resources to bridge the
digital divide.
6 Conclusion
With the rapid development of Internet-mediated communication tools in the digital
era, teaching and learning are no longer limited to the traditional classroom.
Therefore, how to develop effective e-learning grounded in solid learning theories is
an emerging issue. We hope that, through recruiting and training volunteers to use
the online platform, disadvantaged students will be helped to improve their English
learning, especially in remote areas where there is a lack of qualified English teachers.
References
1. Taiwan Network Information Center: Basic Internet Investigate. Website,
http://statistics.twnic.net.tw/item04.htm (May 15, 2010) (January 31,
2010)
2. Clark, R., Mayer, R.: e-Learning and the science of instruction: Proven guidelines for con-
sumers and designers of multimedia learning. Jossey-Bass/Pfeiffer, San Francisco (2003)
3. Heinich, R., Molenda, M., Russell, J., Smaldino, S.: Instructional Media and Technologies
for Learning, 7th edn. Merrill Prentice Hall, New Jersey (2002)
4. Lai, A.F.: Discussion of Digitalized Learning. Bimonthly Journal of Teachers
World 1236, 16–23 (2005)
5. Lin, Y.Y.: An Action Research of Cooperative Learning on Health and Physical Education
Learning Area for High Grade Student of Elementary School (Master thesis, Education
Department of CCU, Chiayi, Taiwan) (2004)
6. Brown, A.R., Voltz, B.D.: Elements of Effective eLearning Design. The International
Review of Research in Open and Distance Learning 6(1), 217–226 (March 2005)
(retrieved May 13, 2008, from the Washington State University database)
7. Govindasamy, T.: Successful implementation of e-learning: pedagogical considerations.
The Internet and Higher Education 4, 287–299 (2001)
8. Su, X.Y., Huang, M.L.: Content Analysis of Digital Learning Literacy in Digital Age.
Bimonthly Journal of Educational Resources and Research 80, 147–172 (2008)
9. Brown, J.S., Collins, A., Duguid, P.: Situated cognition and the culture of learning.
Educational Researcher 18, 32–42 (1989)
10. Pei, X.N.: Collision of Ideas between the East and West: the Understanding of Education
in the View of Constructivism. Open Education Research 41, 12–14 (2003)
11. Resnick, L.B.: Shared Cognition: Thinking as Social Practice. In: Perspectives on Socially
Shared Cognition, pp. 1–20. American Psychological Association, Washington, DC (1991)
12. Su, W.J., Lin, P.J.: Learning Technology Standards and SCORM. Journal of Library and
Information Science 29(1), 15–28 (2003)
13. Yager, R.E.: The constructivist learning model: Towards real reform in science education.
The Science Teacher 58(6), 52–57 (1991)
14. Zimmerman, B.J., Schunk, D.H.: Self-regulated learning and academic achievement:
Theory, research, and practice. Springer, Heidelberg (1989)
15. Zimmerman, B.J.: Self-efficacy and educational development. In: Bandura, A. (ed.)
Self-efficacy in Changing Societies, pp. 202–231. Cambridge University Press, New York
(1995)
16. Wikipedia, The Free Encyclopedia (May 13, 2007)
http://en.wikipedia.org/wiki/ELearning (retrieved May 13, 2008)
Analysis of the Appliance of Behavior-Oriented
Teaching Method in the Education of Computer Science
Professional Degree Masters
Xiugang Gong1, Jin Qiu2, Shaoquan Zhang3, Wen Yang3, and Yongxin Jia1
1 College of Computer Science and Technology, Shandong University of Technology,
Zibo, China
gong_xg@sina.com
2 Information Engineering Department, Shandong Silk Textile Vocational College,
Zibo, China
xindi1998@tom.com
3 Graduate School, Shandong University of Technology, Zibo, China
pyk@sdut.edu.cn
1 Introduction
Since China implemented professional degrees in 1991 and has striven for over ten
years, professional degree education has developed fast, achieving prominent results.
Before 1999, the scale of master's education in China was small, and academic
specialists for education and scientific research made up the majority of master's
students; professional degrees at that time were mainly for employees in post who
wished to improve themselves. In order to adapt to the change in social needs for the
structure of master's education, the Ministry of Education decided to recruit
professional degree master's students from among graduating students starting in
2009 [1]. Full-time education and a credit system were carried out for this degree,
with a schooling period of two years [2].
and improves the "role ability" of individual action, which substantially promotes
students' inspiration and ability to solve problems. Owing to its important and
significant role in cultivating students' overall quality and comprehensive ability,
Behavior-Oriented teaching has been valued by experts from the vocational education
and manufacturing fields of different countries.
Several teaching methods have developed under the thinking of the Behavior-Oriented
Teaching Method: Simulated Teaching, the Education Case Method, the Project-Based
Approach, Role Play, etc. [6]. The patterns of teaching and learning can vary
according to the nature of the learning task. A number of teaching methods following
the Behavior-Oriented Teaching Method have now been transferred to elementary
education and regular higher education in our country, with good results [7-11].
The professional degree is oriented toward professional practice; it stresses
practice and application. The Behavior-Oriented Teaching Method makes the practical
action of students, who are the active participants, its subject. Teachers are
imparters of knowledge as well as consultants and instructors. Teachers change
information teaching into method teaching that takes the activities of students as
the main part. As a result, students learn not only the indirect experience
summarized by predecessors but also direct experience from their own practice. The
course "VC++ Programming" taught in our college can be used to explain the
application of the Behavior-Oriented Teaching Method in practical teaching.
The eight teaching tasks of the course, the knowledge and techniques they require,
and the teaching method used for each are listed below.

Task 1 (demonstration): Find out all the perfect numbers within 2 to 10000. A
perfect number is a number n for which the sum of divisors s(n) = n. Knowledge: use
of the Visual C++ IDE, the basic control structures of the program, basic data types
of C++, C++ expression methods, etc.

Task 2 (demonstration): Use Newton's method for the equation 2x^3 - 4x^2 + 3x - 6 = 0
to solve for the root near 1.5. Knowledge: practice using function calls and simple
algorithm application.

Task 3 (demonstration): Print out the Chinese triangle (required to print out 10
lines). Knowledge: master the output format of C++ and the use of loop structures.

Task 4 (case method of teaching): Design a small library management system; the main
functions include registering the information of every book, registering library
cards, borrow registration, and returning registration, etc. [12]. Knowledge:
understand the design pattern of object-oriented programs; master the concepts of
classes and objects in C++ and the definition and instantiation of classes through
this task.

Task 5 (case method of teaching): Use VC++ programming to simulate the paint software
in Windows accessories; the design is based on MFC and supports saving and bitmap
reading. Knowledge: master the messaging mechanism of Windows and drawing with the
graphics device interface.

Task 6 (project method): Design a digital image processing demonstration system whose
functions include opening and saving bitmap images, histogram, translation, mirroring
transformation, transposition, scaling, rotation, etc. Knowledge: master the
messaging mechanism of Windows, the concept of dialog boxes and controls, basic
knowledge of bitmaps and digital image processing, and drawing with GDI graphics.

Task 7 (project method): Write a class performance management system which can
compute statistics such as the average grade and the number of failures. Knowledge:
the use of basic controls in VC, basic knowledge of databases, and ADO technology.

Task 8 (project method): Write a chat program based on UDP [13]. Knowledge: socket
programming, ADO technology, C/S mode, etc.
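Task 2's Newton iteration can be sketched in a few lines of standard C++. This is an illustrative sketch under our own naming, not the course's official program:

```cpp
#include <cassert>
#include <cmath>

// Task 2: f(x) = 2x^3 - 4x^2 + 3x - 6 and its derivative f'(x).
double f(double x)  { return 2*x*x*x - 4*x*x + 3*x - 6; }
double df(double x) { return 6*x*x - 8*x + 3; }

// Newton's method: repeat x <- x - f(x)/f'(x) until f(x) is near zero.
double newton(double x0, double tol = 1e-10, int maxIter = 100) {
    double x = x0;
    for (int i = 0; i < maxIter && std::fabs(f(x)) > tol; ++i)
        x = x - f(x) / df(x);
    return x;
}
```

Starting from 1.5, the iteration converges to the exact root x = 2, since the polynomial factors as (x - 2)(2x^2 + 3).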
In teaching, teachers should first demonstrate the running of the program. Then they
explain in detail the main knowledge in combination with the program: the usage of
the Visual C++ IDE, the debugging method of the programming environment, the basic
control structures of a program, the basic data types of C++, the usage of
expressions in C++, the input format of C++, etc. Some students are then asked to
design similar programs on this basis.
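A program for Task 1, of the kind such a demonstration might walk through, could look like the following (our illustration, assuming the usual trial-division approach, not the course's demonstration code):

```cpp
#include <cassert>
#include <vector>

// A number n is perfect when the sum of its proper divisors equals n (Task 1).
bool isPerfect(int n) {
    int sum = 1;                          // 1 divides every n > 1
    for (int d = 2; d * d <= n; ++d) {
        if (n % d == 0) {
            sum += d;
            if (d != n / d) sum += n / d; // add the paired divisor once
        }
    }
    return n > 1 && sum == n;
}

std::vector<int> perfectNumbersUpTo(int limit) {
    std::vector<int> result;
    for (int n = 2; n <= limit; ++n)
        if (isPerfect(n)) result.push_back(n);
    return result;
}
```

The four perfect numbers in the required range are 6, 28, 496, and 8128.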
(2) Task 4: Design a small library management system. The classes that need to be
defined for the system include a library book class, a library card class, and a
record class. Member functions of the system include book entry functions, library
card entry functions, borrow processing functions, and return processing functions,
etc. The system is prepared using the MFC framework.
Teachers give the case to the students before class. After getting the case,
students consult the various theories and knowledge they think necessary. Students
understand the knowledge better because they think carefully and come up with a
solution. When teaching, teachers cooperate with students to complete the task and
instruct students to master the basic knowledge of databases, ADO technology,
object-oriented programming thought, and the concepts of class and object, including
class definition, constructors, destructors, and the declaration of objects and
references, etc.
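As a console-level illustration of the class concepts listed for Task 4 (class definition, constructor, object declaration), here is a minimal hypothetical book class with borrow and return processing; the names are our own, and the real coursework wraps such classes in the MFC framework:

```cpp
#include <cassert>
#include <string>

// Minimal sketch of one class Task 4 asks for: a book record that supports
// borrow registration and returning registration.
class Book {
public:
    Book(std::string title, int id) : title_(std::move(title)), id_(id) {}

    bool borrowBy(int cardId) {               // borrow registration
        if (borrowerCard_ != 0) return false; // already lent out
        borrowerCard_ = cardId;
        return true;
    }
    bool returnBook() {                       // returning registration
        if (borrowerCard_ == 0) return false; // was not lent out
        borrowerCard_ = 0;
        return true;
    }
    bool isBorrowed() const { return borrowerCard_ != 0; }
    const std::string& title() const { return title_; }
    int id() const { return id_; }

private:
    std::string title_;
    int id_;
    int borrowerCard_ = 0;                    // 0 = on the shelf
};
```

A full system would add the library card and record classes and persist them via ADO, as the paragraph above describes.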
With Behavior-Oriented Teaching in use, students could quickly master the features
and basic application of Visual C++ after the instruction and accomplishment of these
eight tasks. Accomplishing the tasks frees students from the fear of VC programming
and gives them an initial understanding of the procedures and modes of VC
programming. Students' interest is easily raised.
4 Conclusion
On account of the above-mentioned reasons, the author reformed the teaching to use
the Behavior-Oriented Teaching Method and obtained good effects. An anonymous
questionnaire shows that the teaching quality improved evidently, as shown in
Table 2 below. In this table, the test results for "C Programming" were obtained by
interview.
Table 2. Test results and students' assessment under the two teaching methods

Traditional teaching methods (22 students, 2009):
Test result of "C Programming": High 4, Medium 7, Low 11
Test result of "VC++ Programming": High 4, Medium 6, Low 6
Students' assessment: 78

Behavior-oriented teaching approach (25 students, 2010):
Test result of "C Programming": High 4, Medium 9, Low 12
Test result of "VC++ Programming": High 3, Medium 7, Low 8
Students' assessment: 87
To sum up, in the traditional teaching method, teachers just give instruction and
students receive it; teaching effects do not show until the final tests, and students
learn fixed knowledge. The teaching mode of "VC++ Programming" based on the thinking
of the Behavior-Oriented Teaching Method gives students definite, concrete tasks and
stimulates their interest in and motivation for VC programming. Students exercise
their knowledge and techniques actively. They take a serious and careful attitude to
accomplishing the eight tasks, writing and debugging code in accordance with
practical requirements, and showing their work achievements. Their interest in study
is inspired, and the teaching effects are therefore superior: the teaching content
can be mastered in a comparatively short time and the expected study goals achieved.
References
1. Ministry of Education of PRC: Make greater efforts to adjust the educational
structure of graduate degrees. Yang Yuliang, Director of the State Council and
Chinese Academy of Sciences, answers the reporter (2009)
2. Ministry of Education of PRC: Certain opinions on how to do well the job of training
masters with professional degree (2009)
3. Xu, G.: The research on practice-oriented vocational education. Shanghai Education Press
(2006)
4. Liu, Y.: The application of "behavior-oriented" in the teaching of Excel. Information and
Computer (Theory Edition) 5, 195 (2010)
5. Ye, C.: Teaching and practice based on professional activities. Zhejiang Science and
Technology Press (2008)
6. Chen, Y.: Analysis of professional teaching methods. Professional Worlds 6, 89–90 (2007)
7. Zhou, W., Zhang, X., Li, C.: Applications on the behavior-oriented teaching approach used
in vocational English courses. Education and Vocation 30, 133–134 (2006)
8. Zhao, H.: On Behavior-Oriented Teaching Mode in Teaching English Major Interpersonal
Function, Practice and Case Study. Foreign Language and Literature 26, 134–136 (2010)
9. Tang, C.: Analysis on the integration of behavior-oriented approach and modular
approach. Education and Vocation 29, 147–149 (2006)
10. Zheng, L.: Discuss how to implement behavior-oriented teaching mode in vocational
colleges. Education and Vocation 23, 68–69 (2008)
11. Huang, B.: Implementation and exploration on how to implement behavior-oriented
teaching mode in vocational colleges. Education and Vocation 26, 134–135 (2009)
12. Lv, J., Yang, Q., Luo, J., et al.: Visual C++ and object-oriented programming tutorial.
High Education Press (2003)
13. Sun, X.: VC++ detailed in depth. Publishing House of Electronics (2006)
Automatic Defensive Security System for WEB
Information
J. Huo and H. Qu
1 Introduction
With the rapid popularization of the Internet, Websites have become an important way
for enterprises to publish and exchange information. Enterprises publish information
to the public through their Web sites and also provide various Web applications for
customers and partners. Government portals have become a new and important form
through which all levels of government use information technology to fulfill their
functions.
But Internet Websites are in a relatively open environment: while this makes it
convenient to provide services to the public, it also makes them easy targets for
hackers. Among all attack events, Web page tampering occurs most frequently.
Statistical data released by the National Computer Network Emergency Response
Technical Team/Coordination Center of China (CNCERT/CC) showed that a total of 35,113
sites were tampered with during the first half of 2008 in China, an increase of 23.7%
compared with the same period the year before [1].
Due to the complexity and diversity of modern operating systems, emerging
vulnerabilities, many security breaches in applications, and other reasons, Web pages
can be altered by hackers. Illegal Web page tampering exploits vulnerabilities at the
operating-system and application levels to carry out the attack; existing security
measures are focused on the network layer and cannot form effective monitoring and
protection against such attacks, so page tampering events cannot be avoided.
To protect Web site security and the credibility of Internet information, the
Ministry of Public Security issued the "Regulations of Technical Measures for
Internet Security Protection" on December 1, 2005. The regulations state clearly that
"Portals, news sites, and e-commerce sites should prevent the Web sites and pages
from being tampered with, and should recover automatically from tampering." [2]
For the reasons discussed above, we have researched and developed a Web page
anti-tampering system, WebDefender, to protect the information security of Web sites
for enterprises and institutions. The product has been successfully developed and has
passed information security product certification by the Ministry of Public Security
of China.
2 Related Work
After years of development, the technologies adopted by Web anti-tampering systems
have been constantly developed and updated [3-4]. Round-robin detection can be
considered the first generation of Web anti-tampering technology: a detection program
running in the background inspects the pages of protected sites in a round-robin way,
reading the monitored pages at regular intervals and comparing them with backup pages
to determine whether content has been tampered with. Once a tampering event is found,
the tampered pages can be restored and alarms sent. There is a certain time interval
in this detection technology, so pages could be hacked and seen by users within that
period. The method also takes up system resources such as CPU and memory, and is less
efficient.
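The round-robin check can be sketched with in-memory "pages" standing in for files on disk; WebDefender's actual implementation is not public, and the type and function names here are our own. A real checker would hash file contents with a cryptographic hash rather than std::hash, and run the comparison on a timer:

```cpp
#include <cassert>
#include <functional>
#include <string>

// First-generation round-robin checking, simulated on in-memory pages:
// hash the served page, compare with the trusted backup, restore on mismatch.
struct Page {
    std::string servedContent;   // what the Web server currently serves
    std::string backupContent;   // trusted copy kept by the checker
};

// Returns true if tampering was detected (and the page restored).
bool checkAndRestore(Page& page) {
    std::hash<std::string> h;
    if (h(page.servedContent) == h(page.backupContent))
        return false;                             // page unchanged
    page.servedContent = page.backupContent;      // restore from backup
    return true;                                  // a real system alarms here
}
```

The weakness the text describes is visible in the design: between two calls to `checkAndRestore`, a defaced page is served unnoticed.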
The second generation of Web anti-tampering technology is Web server embedded
technology. It modifies the existing Web server's architecture, encrypting and
storing specific features of Web pages and other documents, such as size and creation
time. As users access the Web site, the same features of the requested pages are
encrypted and then compared with the stored encrypted values. If the two values are
equal, the Web server forwards the pages to the user; if not, the Web server refuses
to show the page. In this way, the possibility of transmitting altered information to
the user is eliminated. This technology greatly improves the security of the Web
site, but the encryption calculation and comparison take a lot of server resources,
resulting in greater system load and lower efficiency.
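The request-time check of the embedded approach reduces to comparing a stored fingerprint of page features against one recomputed for every request. A simplified sketch, with size plus a content hash standing in for the encrypted feature set (the names are ours, not any product's API):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Second-generation embedded check, sketched: a fingerprint of page features
// (here size and a content hash; real systems encrypt size, creation time, ...)
// is stored ahead of time and re-derived on every request.
struct Fingerprint {
    std::size_t size;
    std::size_t contentHash;
    bool operator==(const Fingerprint& o) const {
        return size == o.size && contentHash == o.contentHash;
    }
};

Fingerprint fingerprintOf(const std::string& page) {
    return {page.size(), std::hash<std::string>{}(page)};
}

// Serve the page only when its fingerprint still matches the stored one.
bool mayServe(const std::string& page, const Fingerprint& stored) {
    return fingerprintOf(page) == stored;
}
```

Because `fingerprintOf` runs inside every request, altered content is never served, which is exactly why the per-request computation loads the server.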
The latest, third-generation technology combines advanced file filtering technology
with event-driven technology. File filtering utilizes the underlying file driver of
the operating system kernel: as a file's features change, the operating system
generates an appropriate message. The monitored directories and files of a Web site
are defined; when the operating system produces change messages for these directories
and their files, an event-triggered synchronization technology starts a
page-tampering recovery mechanism that restores the backup file over the tampered
files and alerts the system administrator to take follow-up measures. With the
underlying file driver technology, the entire process of discovering a
file-tampering attack and recovering the file takes about several milliseconds; thus,
a tampered page will hardly ever be seen by users. At present, the operational
performance and real-time detection of this technology meet the highest standards,
and it is a simple, efficient, and safe anti-tampering technology.
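The event-driven flow can be sketched as a handler invoked by a simulated file-change notification; in a real system the handler is registered with the OS file filter driver rather than called directly, and the class and method names below are our own illustration:

```cpp
#include <cassert>
#include <map>
#include <string>

// Third-generation, event-driven sketch: instead of polling, a callback fires
// on a "file changed" event and restores the backup the moment it runs.
class TamperGuard {
public:
    void protect(const std::string& path, std::string backup) {
        backups_[path] = std::move(backup);
    }
    // Called by the (simulated) file-change notification.
    void onFileChanged(const std::string& path, std::string& liveContent) {
        auto it = backups_.find(path);
        if (it != backups_.end() && liveContent != it->second) {
            liveContent = it->second;    // immediate recovery
            ++alarms_;                   // notify the administrator
        }
    }
    int alarmsRaised() const { return alarms_; }
private:
    std::map<std::string, std::string> backups_;
    int alarms_ = 0;
};
```

There is no polling interval at all: recovery latency is just the time from the change event to the end of the handler, which is the millisecond-scale window the text describes.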
Currently, all kinds of Web anti-tampering products are based purely on software
[5-10]. Once hackers obtain administrator privileges on the operating system, such
products cannot prevent their destruction of and illegal tampering with the system.
The security of a public information service system involves many aspects and
multiple levels of the system; if any one part has a security vulnerability, it may
cause fatal damage to the entire system. There has been no solution that combines
software and hardware to solve the network security problems of public information
service systems at multiple levels of the system.
3 System Architecture
WebDefender is a Website tampering protection product that adopts the most advanced,
third-generation Web anti-tampering technology. When Website files are tampered with,
a real-time blocking mechanism is immediately invoked to prevent the erroneous
information from being served, and a synchronization mechanism is started to rapidly
recover the tampered directories and files; the tampered files are also backed up as
future evidence. If the anti-tampering service is attacked and cannot operate
normally, or the operating system fails, a hardware security guard shuts down the
corresponding service or host based on a predefined security level or on SMS (Short
Messaging Service) instructions sent by the administrator. All exception messages can
be sent to the administrator through SMS or e-mail in real time. The system realizes
real-time detection, real-time recovery, and real-time alerting of Website tampering,
and effectively solves the Website's security problems. As shown in Fig. 1, the
WebDefender system consists of three components: the anti-tampering server-side, the
anti-tampering client-side, and the hardware security guard.
The anti-tampering server side is deployed on the Website server and is responsible for real-time monitoring of the protected directories and files. When a tampering attack is detected, the illegally tampered file is recovered in real time and alarms are sent through the hardware security guard. The anti-tampering server side includes a page synchronization module, a page monitoring module, a system management module and an alarm management module.
The page synchronization module and the page monitoring module are the core of the anti-tampering system and reside in the operating system kernel of the Web server. The page monitoring module uses the operating system's file driver filtering technology and enhanced real-time event triggering to detect tampering with the files of the protected Website. When a tampering event occurs, the module immediately notifies the page synchronization module to restore the damaged files, notifies the system management module that a tampering event has occurred, and notifies the alarm management module to inform the administrator through a variety of channels. The page synchronization module is responsible for communication with the backup server on which the anti-tampering client side is deployed, carrying out both the normal updating of Website files and file recovery during a tampering attack. The system management module communicates with the modules on the anti-tampering client side and the hardware security guard, records logs of system events and attacks, and provides administrators with a Web-based management interface. The module can also take actions under an appropriate strategy, according to the degree and frequency of the attacks, to avoid further damage. The alarm management module sends alarm information to the administrator through the hardware security guard, and communicates regularly with the hardware security guard to ensure that the anti-tampering core modules have not been maliciously shut down.
86 J. Huo and H. Qu
The anti-tampering client side is deployed on the backup server and is responsible for publishing Web pages to the anti-tampering server side and for restoring illegally tampered files when the server side detects an attack. All legal changes to Web pages, including add, modify and delete operations, must be made in the specified directory on the client's backup server. The anti-tampering client side includes a page synchronization module and a system management module.
The page synchronization module resides on the backup server and runs in the operating system kernel. When the administrator legitimately updates files on the Website, the page synchronization module communicates with the server-side module and the updated files are synchronized to the Web server. When a tampering event occurs, the server-side page synchronization module immediately starts the synchronization recovery mechanism together with the client-side page synchronization module to restore the tampered file. The system management module is responsible for sending system and attack log messages to the server for recording.
The hardware security guard includes an alarm management module and a system management module. The system management module monitors the status of the corresponding modules running on the server and communicates with the core modules regularly to ensure that the anti-tampering modules have not been maliciously shut down. When a tampering event occurs, the alarm management module immediately notifies the system administrator that a tampering attack has taken place. The hardware security guard can also receive the administrator's SMS instructions and execute system commands to disable the network, shut down the power and perform other operations that reduce the extent of the damage.
4 Product Deployment
Currently, most Websites use a content management system (CMS) to manage the whole process of page production, including page editing, page auditing and page generation. The Website architecture after WebDefender deployment is shown in Fig. 2. First, the WebDefender system should be adopted to protect the integrity of the Website's dynamic files (JSP, ASP, PHP, etc.) against illegal tampering. Once a dynamic page file has been illegally tampered with, the WebDefender system detects the behavior in real time, immediately restores the tampered files and informs the administrator through the hardware security guard. Second, an anti-SQL-injection module should be deployed on the WWW server to prevent the SQL injection attacks that hackers launch to undermine and tamper with the database service and modify important information stored in the database. Through the joint work of these security products, the operational security of an enterprise's Website information system can be protected comprehensively and systematically.
References
1. National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC), http://www.cert.org.cn/
2. Regulations of Technical Measures for Internet Security Protection, http://www.mps.gov.cn/n16/n1282/n3493/n3823/n442104/452223.html
3. Waldman, M., Rubin, A.D., Cranor, L.F.: The architecture of robust publishing systems. ACM Trans. Internet Technol. 1, 199–230 (2001)
4. Waldman, M., Rubin, A.D., Cranor, L.F.: Publius: a robust, tamper-evident, censorship-resistant web publishing system. In: Proceedings of the 9th Conference on USENIX Security Symposium, vol. 9, pp. 5–12. USENIX Association, Colorado (2000)
5. Lee, J.-W., Kim, H., Yoon, H.: Tamper Resistant Software by Integrity-Based Encryption. In: Liew, K.-M., Shen, H., See, S., Cai, W. (eds.) PDCAT 2004. LNCS, vol. 3320, pp. 608–612. Springer, Heidelberg (2004)
6. Jin, H., Lotspiech, J.: Proactive Software Tampering Detection. In: Boyd, C., Mao, W. (eds.) ISC 2003. LNCS, vol. 2851, pp. 352–365. Springer, Heidelberg (2003)
7. Jin, H., Myles, G., Lotspiech, J.: Towards Better Software Tamper Resistance. In: Zhou, J., López, J., Deng, R.H., Bao, F. (eds.) ISC 2005. LNCS, vol. 3650, pp. 417–430. Springer, Heidelberg (2005)
8. Blietz, B., Tyagi, A.: Software Tamper Resistance Through Dynamic Program Monitoring. In: Safavi-Naini, R., Yung, M. (eds.) DRMTICS 2005. LNCS, vol. 3919, pp. 146–163. Springer, Heidelberg (2006)
9. Horne, B., Matheson, L., Sheehan, C., Tarjan, R.E.: Dynamic Self-Checking Techniques for Improved Tamper Resistance. In: Sander, T. (ed.) DRM 2001. LNCS, vol. 2320, pp. 141–159. Springer, Heidelberg (2002)
10. Ghosh, S., Hiser, J.D., Davidson, J.W.: A Secure and Robust Approach to Software Tamper Resistance. In: Böhme, R., Fong, P.W.L., Safavi-Naini, R. (eds.) IH 2010. LNCS, vol. 6387, pp. 33–47. Springer, Heidelberg (2010)
Design and Implementation of Digital Campus Project
in University
1 Introduction
In recent years, Digital Campus-based information technology in higher education has developed rapidly, and colleges and universities have made great progress in setting up application systems. A Digital Campus applies computer and network communication technology to a university's teaching, research, management, service and all other information resources, integrating them comprehensively and digitally in a scientific and standardized way; it forms unified user management, unified resource management and unified access control, promotes the university's innovation and management innovation, and eventually realizes educational informatization and scientific, standardized decision-making [1-3].
In the increasingly competitive environment of higher education, building a Digital Campus, realizing informatization in education, and strengthening information management is an urgent task for colleges and universities. Currently, many universities raise funds in various ways to start Digital Campus construction projects, and the construction in some key universities has begun to take shape [4-5]. The campus network of our university has been planned and constructed since 1996. After over ten years of construction and development, our university now runs WWW, BBS and mail systems, an office automation system, an education management system, a graduate management system, an e-card system and other information application systems. But in the process of Digital Campus construction, some problems and challenges must be resolved, such as the lack of effective
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 89–94. springerlink.com © Springer-Verlag Berlin Heidelberg 2012
90 H. Qu and J. Huo
2 Related Work
In 1990, Kenneth Green, a professor at Claremont University in America, initiated and sponsored a large-scale research project, "The Campus Computing Project", which embodies the earliest concept of the Digital Campus. The Campus Computing Project is the largest continuing study of the role of information technology in American higher education. The national studies of this project have collected a wealth of qualitative and quantitative data to help inform faculty, campus administrators, and others interested in the use of information technology in American colleges and universities [6].
On January 31, 1998, former U.S. Vice President Al Gore gave a speech entitled "The Digital Earth: Understanding our planet in the 21st Century" at the Science Center of California. This was the first time the concept of the Digital Earth was put forward; the concept of a digital world was accepted universally and led to "Digital City", "Digital Campus" and other concepts [7].
The Digital Campus construction of our university began in the middle of the 1990s. At present, we have established tens of information systems, such as an education management system, student management system, office automation system, financial management system, human resource management system, library management system, e-card management system and so on. These information management systems were built at different times and are operated by different departments. They have played an important role in accumulating information resources, in improving the teaching, working, study and living environment for students and faculty, and in improving management efficiency. However, with the university's expansion, the management workload has greatly increased, and the current information management systems show significant problems that cannot be surmounted:
1) Information technology development in universities lacks unified planning and deployment, and information resources are scattered. Each department builds its information systems independently; the systems cannot be integrated or interoperate, which leads to information islands.
2) There is no public base data platform and information standards are not uniform, so data and resource sharing remains at a low level. Without a set of uniform standards for data and information, the data in the various information systems are non-uniform and nonstandard, and the largest problem is that they are incompatible with each other.
3) Automatic data transfer between information systems cannot be realized. Data exchange between systems in the university is usually based on manual or file-transfer methods, which not only lowers work efficiency but also cannot guarantee the accuracy and consistency of the data.
In summary, owing to historical and technical constraints and to the drawbacks and internal deficiencies of the management systems, the current information systems cannot adapt to management and service requirements and restrict the development of the university. Therefore, adopting new ideas and new technical methods to solve the current problems of the information management systems is an urgent task.
Infrastructure provides the basic supporting environment for the entire Digital Campus. It includes the campus network, server hosts, storage devices, security products, the operating environment of the application systems and other supporting hardware. Resources comprise the information and data resources gathered from database systems, application servers, directory servers, etc.
Information systems in the university provide all kinds of services for teachers and students. Besides human resources management, education management and other common business systems, and office automation, mail and other public service systems, we also add the following public service platforms.
various departments, and transforming them into data that complies with the information standards of the university. Data acquisition obtains data that is needed but not covered by the existing management systems: using workflow, data are acquired and audited through the Web and then converted into data that complies with the information standards of the university.
The unified campus information portal realizes the interaction between the Digital Campus platform and its users; it is the internal service window for teachers and students [8-10]. The portal platform solves the challenges of unified provision, unified presentation and unified aggregation of university information. It aggregates distributed, heterogeneous application and information resources, achieves seamless access to applications and integrated systems through a unified access portal, and provides an integrated collaboration environment supporting information access, information transmission and information collaboration. Based on the characteristics, preferences and roles of each user, it can provide individual application interfaces through which different users access the data related to them.
Information Security System: protects the overall security of the Digital Campus project in its physical, network, system, information, management and other aspects. It is the supporting system that ensures the safe and reliable operation of the campus information system.
Information Standard System: defining the information standards of the university is the foundation for building the Digital Campus and the premise for ensuring data consistency. Data sharing through exchange is based on it, and it is also the basis for building a stable, reasonable data structure.
Operation and Maintenance Supporting System: includes system monitoring, system management, maintenance services and so on. It is an important support system for the safe and reliable operation of the campus information system.
References
1. The status and thinking of Digital Campus in Peking University, http://metc.njnu.edu.cn/
2. Fernández Niello, J., Cipolla Ficarra, F.V., Greco, M., Fernández-Ziegler, R., Bernaten, S., Villarreal, M.: A Set of Rules and Strategies for UNSAM Virtual Campus. In: Jacko, J.A. (ed.) HCI International 2009. LNCS, vol. 5613, pp. 101–110. Springer, Heidelberg (2009)
3. Bers, M., Chau, C.: The virtual campus of the future: stimulating and simulating civic actions in a virtual world. Journal of Computing in Higher Education 22, 1–23 (2010)
4. Liu, N., Li, G.: Research on Digital Campus Based on Cloud Computing. In: Lin, S., Huang, X. (eds.) CESM 2011, Part II. CCIS, vol. 176, pp. 213–218. Springer, Heidelberg (2011)
5. Hunt, C., Smith, L., Chen, M.: Incorporating collaborative technologies into university curricula: lessons learned. Journal of Computing in Higher Education 22, 24–37 (2010)
6. The Campus Computing Project, http://www.campuscomputing.net/
7. The Digital Earth: Understanding our planet in the 21st Century, http://portal.opengeospatial.org/files/?artifact_id=6210
8. Eisler, D.: Campus portals: Supportive mechanisms for university communication, collaboration, and organizational change. Journal of Computing in Higher Education 13, 3–24 (2001)
9. Pan, W., Chen, Y., Zheng, Q., Xia, P., Xu, R.: Academic Digital Library Portal: A Personalized, Customized, Integrated Electronic Service in Shanghai Jiaotong University Library. In: Chen, Z., Chen, H., Miao, Q., Fu, Y., Fox, E., Lim, E.-p. (eds.) ICADL 2004. LNCS, vol. 3334, pp. 563–567. Springer, Heidelberg (2004)
10. Yu, S., Zhang, J., Fu, C.: Sharing University Resources Based on Grid Portlet. In: Zhang, W., Chen, Z., Douglas, C.C., Tong, W. (eds.) HPCA 2009. LNCS, vol. 5938, pp. 515–521. Springer, Heidelberg (2010)
Detecting Terrorism Incidence Type from News
Summary
Abstract. This paper presents experiments to detect the type of a terrorism incident from news summary data. We apply classification techniques to news summary data to analyze incidents and detect the incident type. A number of experiments are conducted using various classification algorithms, and the results show that a simple decision tree classifier can learn the incident type from news data with satisfactory results.
1 Introduction
Since the unfortunate events of 9/11, research in the counterterrorism domain has increased on a large scale, and this paper is an attempt in that direction. We present text mining experiments to detect the type of a terrorism incident from the news summaries in the Global Terrorism Database (GTD). The purpose of the research is to show that useful information, matching a given query, can be extracted from free text using classification techniques. It is time consuming to go through lengthy text to extract a specific kind of information, and classification techniques can be applied in different ways, according to one's requirements, to extract such information from text. We show experimentally that this information can be extracted from the free-text summaries in the database. Using training data from the GTD, we train classifiers to learn the patterns of incidents and to classify a new incident from the news data as a specific type of terrorism incident. We performed experiments with three different classifiers: a decision tree (J48, the WEKA implementation of C4.5), Naïve Bayes, and a Support Vector Machine (SVM), and we present an experimental analysis of these classifiers. The evaluation method used throughout is tenfold cross validation. For the text mining experiments we used the Waikato Environment for Knowledge Analysis (WEKA) [14].
We show experimentally that a simple decision tree classifier can identify the incident type with adequate accuracy. The SVM classifier also achieved reasonable accuracy,
96 S. Nizamani and N. Memon
but at the expense of a long running time, whereas the Naïve Bayes classifier runs faster but with lower accuracy. According to our findings, classification with a decision tree can reliably be applied to tasks such as detecting the terrorism incident type from news summary data. Below we give a brief description of the GTD.
The Global Terrorism Database is an open-source database that contains information on terrorism incidents that took place all over the world between 1970 and 2008. The characteristics of the dataset, together with a brief description, are given on the GTD website [1].
In the next section we present related work. Section 3 describes the classification techniques, while Section 4 elaborates on the detection of the terrorism incident type. We discuss the preprocessing of the data in Section 5, illustrate the experimental results in Section 6, and present conclusions and future work in Section 7.
3 Classification Algorithms
Classification [15] is a kind of supervised machine learning. It takes training examples as input, along with their class labels, and learns a function that maps an example to one of the class labels.
A decision tree is a kind of divide-and-conquer algorithm. It consists of a finite number of nodes, internal and external. Each internal node corresponds to an attribute, selected by some measure such as information gain or gain ratio, that divides the training examples into parts according to the values of that attribute. For example, if the attribute has three possible values, three branches go out from that node. The choice of attribute at a particular level of the hierarchy usually depends on the class-distinguishing ability of that attribute. External nodes
in the decision tree contain the decisions, i.e. the class values. ID3 (Iterative Dichotomiser 3) is a decision tree algorithm by Quinlan [9]. The algorithm suffers from overfitting, works only on nominal and discrete values, and does not deal with missing values. To overcome these issues of ID3, Quinlan [10] proposed the C4.5 algorithm. It uses pruning to overcome the overfitting problem, uses discretization at a certain threshold to handle continuous data, and ignores missing-value attributes while making decisions.
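As a concrete illustration of the attribute-selection measure just mentioned, the following sketch computes information gain over a toy nominal dataset. The data and function names are invented for the example; this is not the WEKA J48 code used in the paper.

```python
# Information gain: the entropy reduction obtained by splitting a set of
# training examples on one nominal attribute, as used by ID3/C4.5.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr_index, labels):
    """Entropy of `labels` minus the weighted entropy after splitting
    `rows` on the attribute at `attr_index`."""
    n = len(rows)
    split = {}
    for row, label in zip(rows, labels):
        split.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / n * entropy(part) for part in split.values())
    return entropy(labels) - remainder
```

The attribute with the highest gain becomes the internal node; C4.5's gain ratio further normalizes this value by the split entropy.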
Naïve Bayes [11] is a simple and efficient technique used by the data mining community for classification tasks. It uses Bayes' theorem to estimate the probability of each class in order to decide the class of an instance: NB assigns the class label with the maximum probability to a test instance.
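That maximum-probability decision rule can be sketched in a few lines. The following is a minimal multinomial Naïve Bayes with Laplace smoothing over invented toy documents; it is illustrative only and not the WEKA implementation used in the paper.

```python
# Minimal multinomial Naive Bayes text classifier: the predicted class is
# the one maximizing log P(class) + sum of log P(word | class).
from collections import Counter, defaultdict
from math import log

def train_nb(docs, labels):
    """Return per-class (log-prior, word log-likelihoods, unseen-word term)."""
    class_words = defaultdict(list)
    for doc, lab in zip(docs, labels):
        class_words[lab].extend(doc.split())
    vocab = {w for words in class_words.values() for w in words}
    model = {}
    for lab, words in class_words.items():
        counts = Counter(words)
        total = len(words) + len(vocab)        # Laplace smoothing denominator
        model[lab] = (log(labels.count(lab) / len(labels)),
                      {w: log((counts[w] + 1) / total) for w in vocab},
                      log(1 / total))          # log-probability of an unseen word
    return model

def classify_nb(model, doc):
    """Assign the class with the maximum posterior log-probability."""
    def score(lab):
        prior, table, unseen = model[lab]
        return prior + sum(table.get(w, unseen) for w in doc.split())
    return max(model, key=score)
```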
4 Preprocessing Data
We used the terrorism incidents in the GTD that took place between 2001 and 2008. Each incident is treated as a record of an ARFF file; ARFF is the Attribute Relation File Format used by WEKA [14]. From the GTD we took only two fields of each incident: the summary (a text field) that describes the incident, and the incident type, which takes its value from one of the predefined incident types. The summary field needs to be preprocessed because it contains free text. We applied this preprocessing using the WEKA utility StringToWordVector, which performs steps such as tokenization and stop-word removal.
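A rough Python analogue of that preprocessing step might look as follows; the tokenizer and the tiny stop-word list are illustrative stand-ins for what StringToWordVector actually does, not a reproduction of its behavior.

```python
# Turn a free-text news summary into a stop-word-free bag-of-words vector:
# lowercase, tokenize on alphabetic runs, drop stop words, count the rest.
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "was", "were"}

def to_word_vector(summary: str) -> Counter:
    """Tokenize a news summary into a stop-word-free bag of words."""
    tokens = re.findall(r"[a-z]+", summary.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)
```

Applying this to every summary and taking the union of the resulting keys yields the feature set over which the classifiers are trained.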
6 Experimental Results
In the experiments we use the terrorism incident records from 2001 to 2008 in the GTD. The total number of incidents used is 22,235, and after preprocessing we obtain 5,345 distinct features. A brief description of the dataset used in the experiments is given in Table 1, and Table 2 lists all the incident types and the number of instances of each type. We performed experiments using three well-known classification algorithms: the decision tree J48 (the WEKA implementation of C4.5), Naïve Bayes (NB) and the Support Vector Machine (SVM). These algorithms are widely used by the research community [8]. In the following subsections we describe the evaluation method and the evaluation measures used in the experiments with these classifiers.
The evaluation method we used is 10-fold cross validation. This method splits the dataset into 10 subsets and runs for 10 rounds; in each round, 9 subsets are used for training and the remaining one for testing, with a new subset chosen for testing each round. After the 10 rounds, the average accuracy over all rounds is reported.
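The protocol can be sketched as follows; train_fn and eval_fn are hypothetical caller-supplied stand-ins for whichever classifier is being evaluated, and the names are invented for the illustration.

```python
# k-fold cross validation: partition the indices into k folds, train on
# k-1 folds, test on the held-out fold, rotate, and average the scores.
def k_fold_indices(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold CV."""
    indices = list(range(n_samples))
    fold_size, extra = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        size = fold_size + (1 if fold < extra else 0)
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

def cross_validate(train_fn, eval_fn, data, labels, k=10):
    """Average the per-fold score returned by `eval_fn`."""
    scores = []
    for train_idx, test_idx in k_fold_indices(len(data), k):
        model = train_fn([data[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        scores.append(eval_fn(model,
                              [data[i] for i in test_idx],
                              [labels[i] for i in test_idx]))
    return sum(scores) / k
```

In practice the data would also be shuffled (or stratified by incident type) before splitting, which WEKA's cross-validation does by default.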
The evaluation measures we used are accuracy, precision and recall. Accuracy is calculated as follows:

Accuracy = (Tp + Tn) / (Tp + Tn + Fp + Fn). (4)
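Equation (4) gives accuracy; the corresponding standard confusion-matrix definitions of precision and recall (textbook definitions, not recovered from the paper's own equations) can be sketched alongside it:

```python
# The three evaluation measures, computed from the confusion counts:
# true positives (tp), true negatives (tn), false positives (fp) and
# false negatives (fn).
def accuracy(tp, tn, fp, fn):
    """Fraction of all instances classified correctly, equation (4)."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that the classifier retrieves."""
    return tp / (tp + fn)
```

For the multi-class incident types, precision and recall are computed per class and then averaged, as WEKA reports them.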
The experimental results (see Fig. 2) clearly illustrate that the terrorism incident type can be successfully detected from the news summary data; the classification algorithms extract this information successfully. The figure shows that the decision tree correctly detects 83% of the incidents, with a balance of precision and recall.
[Fig. 2. Bar chart comparing the Accuracy, Precision and Recall of the Naïve Bayes, Decision Tree and SVM classifiers]
References
1. http://www.start.umd.edu/gtd/about/
2. Dugan, L., LaFree, G., Piquero, A.R.: Testing a Rational Choice Model of Airline Hijackings. Criminology 43, 1031–1065 (2005)
3. Greenbaum, R., Dugan, L., LaFree, G.: The Impact of Terrorism on Italian Employment and Business Activity. Urban Studies 44, 1093–1108 (2007)
4. LaFree, G., Dugan, L.: Tracking Global Terrorism, 1970-2004. In: Weisburd, D., Feucht, T., Hakimi, I., Mock, L., Perry, S. (eds.) To Protect and to Serve: Police and Policing in an Age of Terrorism. Springer, New York (2009)
5. LaFree, G., Dugan, L., Korte, R.: The Impact of British Counter Terrorist Strategies on Political Violence in Northern Ireland: Comparing Deterrence and Backlash Models. Criminology 47, 501–530 (2009)
6. LaFree, G., Yang, S.-M., Crenshaw, M.: Trajectories of Terrorism: Attack Patterns of Foreign Groups that have Targeted the United States, 1970 to 2004. Criminology and Public Policy 8, 445–473 (2009)
7. Guo, D., Liao, K., Morgan, M.: Visualizing patterns in a global terrorism incident database. Environment and Planning B: Planning and Design 34, 767–784 (2007)
8. Wu, X., Kumar, V., Quinlan, J.R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G.J., Ng, A., Liu, B., Yu, P.S., Zhou, Z.H., Steinbach, M., Hand, D.J., Steinberg, D.: Top 10 algorithms in data mining (survey paper). Springer, Heidelberg (2007)
9. Quinlan, J.R.: Induction of decision trees. Machine Learning 1, 81–106 (1986)
10. Quinlan, J.R.: C4.5: Programs for Machine Learning. Machine Learning, vol. 16, pp. 235–240. Springer, Heidelberg (1993)
11. McCallum, A., Nigam, K.: A comparison of event models for Naive Bayes text classification. Technical report, Workshop on Learning for Text Categorization, pp. 41–48 (1998)
12. Joachims, T.: A statistical learning model of text classification for Support Vector Machines. In: International ACM SIGIR Conference on Research and Development in Information Retrieval (2001)
13. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (1995)
14. Hall, M., Frank, E., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA Data Mining Software: An Update. SIGKDD Explorations 11(1) (2009)
15. Sebastiani, F.: Machine learning in automated text categorization. ACM Computing Surveys 34(1), 1–47 (2002)
Integration of Design and Simulation Softwares
for Computer Science and Education Applied
to the Modeling of Ferrites for Power Electronic Circuits
1 Introduction
In the field of science we can find a great variety of commercial programs for
calculation and modeling that can be used for educational and industrial applications.
Modeling and computer simulations play an important role in the analysis, design,
and education for university students of power electronic systems [1]. In this article
we present a procedure that uses different programming and modeling techniques
coupled together: a Computer Aided Design program (AutoCAD [2]), a Finite
Element Analysis program (Maxwell [3]), two scientific calculus programs for the
numerical solving of derivatives and integrals (Origin [4] and Matlab [5]), a
numerical simulation program (Simulink [5]) combined with Matlab and finally an
electronic circuit simulation program (PSIM [6]). In this article we study the joined
application of these tools through the example of the design and modeling of ferrites
[7]-[10]. Ferrites are widely used in the area of electronic industry as a magnetic core
of inductors and transformers in photovoltaic solar energy. An inductor consists of a
winding, a ferrite core and sometimes a coil former. Ferrite materials show nonlinear
magnetic properties such as hysteresis and saturation. They come in different sizes
and geometrical shapes, some simpler (e.g. E type, Figure 1) and others more
complex (e.g. RM type, Figure 3).
104 R.A. Salas and J. Pleite
Fig. 1. Inductor with E type ferrite core Fig. 2. Cross-section of the inductor in Figure 1.
Fig. 3. Inductor with RM type ferrite core Fig. 4. 2D equivalent inductor of Figure 3.
[Figure: flowchart of the modeling procedure. The 2D or 3D domain is designed with AutoCAD and passed to the Maxwell Finite Element Analysis program together with the boundary conditions, the adaptive meshing and the excitation level (voltage or current). The field solution is post-processed numerically with Origin to obtain the flux Φ = ∫_S B · dS, the Φ-I curve and, via L = dΦ/dI, the L-I curve. The equations v(t) = L(i, f) di/dt + R(i, f) i are solved numerically with Simulink and Matlab, and the voltage and current samples (v_n, i_n) are exchanged with the PSIM electronic circuit simulation program, in which the circuit is designed and the excitation level assigned.]
Next, either the 2D domain (the equivalent section of the ferrite plus its winding and coil former) or the 3D domain is designed with AutoCAD. Figures 2 and 4 show the 2D equivalent domains of the real inductors designed in AutoCAD. This design and the magnetic properties are introduced into the Finite Element Analysis program Maxwell, after which the boundary conditions and excitation current levels are assigned. To generate the mesh, both in 2D and in 3D, we chose an adaptive refinement, which makes the mesh finer at the spatial points where a previously established error level is exceeded (corners, regions with irregular borders, etc.). In each iteration the program computes the magnetic fields, makes an error estimate and refines the mesh. This adaptive meshing reduces the computing time for a given convergence tolerance, and the algorithm is implemented in the Maxwell program. In the adaptive procedure, the parameters corresponding to the stopping criteria and to the percent refinement per pass are introduced: the former specify the maximum number of passes and the maximum percent error, also called the error tolerance, while the percent refinement per pass specifies what percentage of finite elements (triangles in 2D or tetrahedra in 3D) should be refined during each iteration of the initial mesh. Figures 2 and 4 also show the mesh of each 2D domain.
Φ = ∫_S B · dS (1)
Fig. 7. Experimental (stars) and simulated by Finite Element Analysis (squares) L-I curves
Fig. 8. Experimental (stars) and simulated by Finite Element Analysis (squares) R-Irms curves
In the last phase, three programs are used in a linked fashion. The L-I and R-Irms curves are introduced into the Simulink program. With the help of Matlab, equation (2), which represents the voltage of the inductor, is solved numerically. We draw the electrical circuit to be simulated in the circuit simulator PSIM and assign the voltage and/or current excitation level. At each instant in time, Simulink sends the excitation current value i flowing through the inductor to PSIM, and PSIM sends the voltage v across the inductor back to Simulink.
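The per-time-step exchange described above can be sketched as a simple numerical loop. This is a hedged illustration only: a toy saturating inductance L_of_i and an explicit-Euler integration of di/dt = v / L(i) stand in for the actual Simulink/Matlab/PSIM coupling, and all names and parameter values are invented for the example.

```python
# Toy co-simulation loop: the "circuit side" supplies the inductor voltage
# v(t), and the "model side" updates the current with di/dt = v / L(i),
# where L(i) is a nonlinear (saturating) inductance curve.
def L_of_i(i, L0=1e-3, i_sat=2.0):
    """Toy nonlinear inductance: L drops as the core saturates."""
    return L0 / (1.0 + (i / i_sat) ** 2)

def simulate_inductor(v_of_t, t_end, dt=1e-6, i0=0.0):
    """Integrate di/dt = v(t) / L(i) with explicit Euler steps.

    Returns a list of (t, i) samples, mimicking the per-step exchange:
    the circuit sends v, the model returns the updated current i.
    """
    t, i, trace = 0.0, i0, []
    while t < t_end:
        i += dt * v_of_t(t) / L_of_i(i)   # one exchange step
        t += dt
        trace.append((t, i))
    return trace
```

With a constant 1 V excitation and the toy parameters above, the current ramps up almost linearly while the core is far from saturation, which is the behavior shown in the linear-region panels of Figure 10.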
Finally, as the output of the PSIM program we obtain the voltage and current waveforms (v(t), i(t)), and from these the power waveform p(t) is derived using Origin. Figure 10 shows an example of these waveforms: panels (a), (b) and (c) correspond to the linear region and panels (d), (e) and (f) to the saturation region.
Fig. 10. Example of experimental (dotted line) and simulated (solid line) voltage, current and
power waveforms for the linear and saturation regions.
3 Conclusions
We have presented a procedure in which different standard software packages are used together, so that university students can see how different commercial programs and modeling techniques can be combined to solve a specific task. Our example shows how the procedure can be applied to the modeling of inductors with ferrite cores for use in circuit simulators.
References
1. Mohan, N., Undeland, T.M., Robbins, W.P.: Power Electronics: Converters, Applications
and Design. John Wiley & Sons, Inc., New York (1995)
2. AutoCAD, http://usa.autodesk.com
3. Maxwell, http://www.ansoft.com
4. Origin, http://www.OriginLab.com
5. Matlab-Simulink, http://www.mathworks.com
6. PSIM, http://www.powersimtech.com
7. Salas, R.A., Pleite, J., Olías, E., Barrado, A.: Nonlinear saturation modeling of magnetic
components with an RM-type core. IEEE Trans. Magn. 44, 1891–1893 (2008)
8. Salas, R.A., Pleite, J., Olías, E., Barrado, A.: Theoretical-experimental comparison of a
modeling procedure for magnetic components using Finite Element Analysis and a circuit
simulator. J. Magn. Magn. Mater. 1028, e1024–e1028 (2008)
9. Salas, R.A., Pleite, J.: Modelling nonlinear inductors with a ferrite core. Przegląd
Elektrotechniczny (Electrical Review) R. 85, 84–88 (2009)
10. Salas, R.A., Pleite, J.: Accurate modeling of voltage and current waveforms with saturation
and power losses in a ferrite core via two-dimensional finite elements and a circuit
simulator. J. Appl. Phys. 107, 09A517 (2010)
Metalingua: A Language to Mediate Communication
with Semantic Web in Natural Languages
Ioachim Drugus
Abstract. The main obstacle in the way of Semantic Web becoming a democratic
tool is the complexity of its standards. Therefore, a simple language, Notation3, is
used as an alternative in many communities. But Notation3 does not comply with
the compositionality principle, a characteristic of natural languages. Metalingua,
described in this paper, is a counterpart of Notation3 which complies with this
principle, can mediate the communication between humans speaking natural
languages and Semantic Web, and can be used for the formalization of natural
languages, including languages considered difficult, like Chinese. Metalingua is currently
used in the projects of EstComputer, Inc. (www.estcomputer.com) for education
and natural language informatics.
1 Introduction
The objective of the Semantic Web (SW) project is to build knowledge bases and a
mechanism to deliver content from them to the wide public. But the standards of SW for
knowledge representation are addressed to IT professionals, and in order for this next
generation of the web to become as democratic as the current web, it must be equipped
with a natural language interface to allow each person to communicate with the web in
a natural language. This is hard to achieve, since there are hundreds of natural
languages, and there is no adequate apparatus to formalize them. Note that formal
languages were shown in [1] to be a poor apparatus to serve this purpose. In [2-6], I
proposed a simple language, metalingua (ML), as a tool for the formalization of natural
languages, and for building such a natural language interface.
In 2004, Tim Berners-Lee, the founder of the Web, proposed a simple language,
Notation3 (N3), for knowledge representation. With N3 a body of knowledge, or
ontology, is represented as a set of sentences, i.e. triples <s, p, o>, where s, p and o are
strings written in a certain format and said to be the subject, predicate and object,
respectively. Since N3 uses a ternary relation and no operations to represent knowledge, it
can be said to be a relational language. N3 is not appropriate for the formalization of
natural languages, whose main feature is the compositionality principle, which states
that the meaning of a compound expression is a function of the meanings of its
components. Such a principle, obviously, can apply only to operational languages, the
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 109–115.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
110 I. Drugus
expressions of which bear sense and are built out of atomic expressions by operations,
but it does not apply to relational languages.
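As an illustration of the relational character of N3 described above, an ontology can be pictured as a flat set of subject-predicate-object triples. The names and triples in this Python sketch are invented for illustration, not drawn from an actual ontology:

```python
# N3-style relational representation: an ontology as a set of
# <subject, predicate, object> triples (all names invented).
ontology = {
    ("Socrates", "is_a", "human"),
    ("human", "subclass_of", "mortal"),
}

def objects_of(subject, predicate, triples):
    """All objects o for which the triple (subject, predicate, o) holds."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Each triple stands alone; triples do not nest to form larger
# expressions, which is why the text calls N3 a relational language.
result = objects_of("Socrates", "is_a", ontology)
```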
The A3 approach to Semantic Web and Brain Informatics [2, 6] is an operational
approach, since it is preoccupied with the operations whereby mental entities are built. I
contended that the brain uses exactly 3 operations to build a mental entity: association to
form a set-theoretic ordered pair (an operation ascribed to the left hemisphere of the brain),
aggregation to form a finite set (ascribed to the right hemisphere), and atomification to
encapsulate a structure built by the association and aggregation operations into an entity
(ascribed to the bridge between the two hemispheres). The notation A3 used in the A3
approach is an operational language, since it uses operations for building expressions,
and it complies with the compositionality principle. In [3], I added the equality sign
= to denote synonymy, named the new language metalingua, and explained how to use
it for the integration of knowledge of various domains.
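A rough way to picture the three operations above is as data constructors. The following Python sketch is an illustrative assumption of mine, not an implementation from the A3 papers; the tagged-tuple encoding is invented:

```python
# Illustrative encoding of the three A3 operations as data constructors.

def association(a, b):
    """Ordered pair (a : b)."""
    return ("pair", a, b)

def aggregation(*items):
    """Finite set {a, b, ...}; frozenset keeps it hashable so sets can nest."""
    return ("set", frozenset(items))

def atomification(structure):
    """[structure]: encapsulate a built structure as a single atom-like entity."""
    return ("atom", structure)

# Compound expressions arise by composing the constructors, which is the
# compositionality the text attributes to operational languages.
expr = atomification(association("f", aggregation("x", "y")))
```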
The name metalingua is justified for three reasons: it is intended for the formalization
of metalogic [7], it can serve as one metalanguage for other languages, including
natural languages, and it has the operator meta to formalize meta-discourse. This
paper does not presuppose any knowledge from my previous publications.
2 Specification of Metalingua
Notation A3 is a sublanguage of ML, and it is more appropriate to specify A3 before
specifying ML. The notation A3 proceeds from expressions said to be atomic expressions,
or atoms, the set of which is called the vocabulary of A3, and builds compound
expressions out of them according to several rules. We allow for many variants of A3
(and ML), each variant with its own vocabulary, but in IT it is appropriate to use only one
vocabulary - the set of all strings of Unicode characters, to which I refer as Unicode
texts. Since such atoms can also contain characters of ML and thus create collisions
with ML syntax, we demand that a Unicode text used as an atom be enclosed between
the angular brackets < and >, which either do not occur within the Unicode text or are
preceded by the % sign (as recommended in the URI standard). But we will allow
alphanumeric strings without white spaces to be used without angular brackets. Since
all the characters and symbols used in any natural language and in the sciences are
represented in Unicode, A3 (and ML) based on such a vocabulary will also have a
practical use, because phrases in a natural language, and even whole Unicode texts,
can serve as atoms within such a vocabulary. We also refer to the atoms in the vocabulary
as names: common names and proper names.
The expressions of A3 over a vocabulary V are typed and are defined by the following
recursion rules:
the formulation of universal structures, or the language of universics. I used the term
formulation because, in this context, it sounds more correct than denotation or
representation (say, denotation could imply that the whole structure is denoted by
one name, while the expression representation of a structure in a language also
sounds weird).
The notion of a structure as built from atoms, where the atoms themselves are not built, does
not describe some phenomena which we also consider a structure. Indeed, a formula
like a + x b is considered atomic in assertoric logic because, if we decompose it
further, we obtain non-assertions. But the expression a + x also has structure.
Thus, a + x b is an example of a universal structure, where the expression a + x
is obtained by atomification.
Data model is a term used in IT to refer to an apparatus which describes a certain
kind of data structures. Set theory can also be said to define a data model, an abstract
one, because it describes a type of abstract data called sets and ordered pairs. But the
term data model is not used in set theory because, so far, mathematicians have
focused on the properties of pure sets, built out of the empty set. In set theory an atom
is intuitively treated as a non-set and non-ordered-pair. An atom can be treated as
any piece of data and, therefore, the notion of data model becomes useful in a set
theory with atoms and, even more so, in a theory with atoms of different levels, like
universics. I refer to the data model of universal structures as the universal data model.
Notice that the notation (a : b) of ML is Peirce's notation for the ordered pair,
which historically preceded the currently used notation (x, y). In order to keep both
notations, I consider that (a : b) is the same as (b, a), and regard the first as a primitive
expression in ML and the second as an expression defined in ML (obviously, I could have
proceeded the other way around).
While it is common to use the expression structure of a text, since I distinguish
between structures defined in terms of set theory and universal structures defined in
terms of universics, I prefer to use the expression organization of a text. The
organization of a text is a universal structure, and only in some particular cases can it be a
structure. As per the A3 approach, there are exactly three types of organization to which
all other types reduce: order between two entities, non-order, which is specific to
sets, and atomic constituency, as I refer to the organization of multi-level atomic
structures.
In order to formulate a linguistic structure as an expression of ML, consider it a
universal structure and proceed recursively to denote its constituents in this manner:
out of names, but is an operation over names. The reading b qualified by a is the
reading of a complex name, which can be used in compound expressions like ((a : b) :
c), but the reading b is qualified by a cannot be used in compound expressions. On
the other hand, if both a and b are statements, then (a : b) is also a statement: a if
b, or, more appropriately, b implies a. Without details which would digress from the
topic of this paper, I will only mention that qualification is a generalization of
implication and applies to arbitrary names, not only to names of truth values, i.e. to
statements. Thus, in discursive semantics, a term which reflects the semantics better
than association is qualification. It is appropriate to give an example of qualification,
which we will use later: the functional notation f(x) can be considered a
qualification of x by f, denoted as (f : x); say, if Domain is a function which for a function
x results in the domain of x, then (Domain : x) can serve as an alternative notation for
this function. The aggregation operation is a generalization of conjunction, since it applies
to arbitrary names, and not only to statements. The term aggregation sounds appropriate
both for denotational and discursive semantics.
In discourse, the atomification expression [a] is treated as obtained by an operator,
called the operator meta, which switches between the discourse and the universe of
discourse. To better understand this terminology, consider any expression between
quotation marks used in an English text, like "Socrates is mortal". It is clear that the
discourse is about this 18-character formal expression, i.e. the expression is considered
to be within the universe of discourse and is not part of the discourse itself. By
including such an expression between quotation marks we throw it out of the discourse
into the universe of discourse. Quotation marks are a tool for meta-discourse, and in
ML discourse, the square brackets play the same role.
The fold mapping is a one-to-one mapping from N3 to ML, but ML also has expressions
which are not images in this mapping; ML is richer, at least because it can also
express sentences with only the predicate, or with subject and predicate. For each
ML expression e which is an image in the fold mapping, there is an unfold as an N3
ontology, denoted as u(e).
7 Applications of Metalingua
ML is currently used in two projects of EstComputer, Inc. (www.estcomputer.com)
with participation of State University of Moldova and the Academy of Economics of
Moldova, which are described below:
References
1. Chomsky, N.: Syntactic Structures. Mouton, The Hague (1957)
2. Drugus, I.: A Whole brain approach to the Web. In: Proceedings of the Web Intelligence /
Intelligent Agent Technology Conference, pp. 68–71. Silicon Valley (2007)
3. Drugus, I.: Metalingua – a Formal Language for Integration of Disciplines via their
Universes of Discourse. ETC, 17–23 (2009)
4. Drugus, I.: Universics – a Structural Framework for Knowledge Representation. In:
Knowledge Engineering Principles and Techniques, Cluj-Napoca, Romania, pp. 115–118 (2009)
5. Drugus, I.: Universics: an Approach to Knowledge based on Set theory. In: Knowledge
Engineering Principles and Techniques. Selected Extended Papers, Cluj-Napoca, Romania,
pp. 193–200 (2009)
6. Drugus, I.: Universics: a Common Formalization Framework for Brain Informatics and
Semantic Web. In: Web Intelligence and Intelligent Agents, pp. 55–78. InTech Publishers,
Vukovar (2010)
7. Hunter, G.: Metalogic: An Introduction to the Metatheory of Standard First-Order Logic.
University of California Press (1971)
An Integrated Case Study of the Concepts and
Applications of SAP ERP HCM
1 Introduction
Organizations use ERP (enterprise resource planning) systems in nearly all areas of
corporate activities. Therefore companies require employees who are familiar with
those systems. Universities should satisfy this demand and integrate courses on ERP
systems into their curriculum. Various software vendors like SAP, Microsoft and
Oracle established programs to provide their systems for universities. In Germany,
many universities use systems provided by the SAP University Alliances (UA). The SAP
UA program's primary objective is to support education by supplying the newest
SAP technology available [1]. The SAP UA program distributes SAP systems with
the aid of University Competence Centers (UCC). These centers operate and maintain
SAP systems for all participating universities and provide additional services like
training courses and teaching materials.
To the best of our knowledge, no case study on HR was available in 2009. But the
increasing importance of IT systems for human resource management has underlined
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 117–125.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
118 M. Lehmann et al.
the need for training in human resource oriented studies [2]. Hence a collaboration
between the UCC Magdeburg and the Institut für elektronische Geschäftsprozesse
(IEG) of the Leuphana University of Lüneburg was started to close that gap by
developing a human resource oriented case study for application in SAP systems.
The second chapter discusses some fundamentals of case study didactics. Based on
these descriptions, chapter three develops a concept for the use of case studies and
presents the HCM case study. In chapter four we describe the application of the HCM
case study in our classes during the last two years. We conclude with a summary of
our findings.
One of the main goals of all teaching is the transfer of knowledge. This transfer is
often achieved with traditional teaching techniques. The application of knowledge,
however, can only be realized through active teaching techniques, like business games
or case studies [5]. When processing a case study, participants are forced to extend
and/or use their theoretical knowledge to perform actions which, in the end, will
hopefully solve the underlying problem. The primary goal of case study work is to
build a link between theoretical knowledge and action. The evolution of the
participants' action competence is closely linked to case study work. Young people
have to learn how to make autonomous decisions as early as possible [6].
A secondary objective of case study work is knowledge transfer. We have already
mentioned existing knowledge which is used during case study work. Besides the use
of knowledge, participants should be encouraged to acquire the knowledge needed to
solve the problem, which results in the expansion of their knowledge base [7].
As there are no generally agreed standards for case studies, we are going to suggest
that they should conform to the three design principles of exemplarity, vividness and
action orientation.
Case studies can be applied as part of a decision or problem solving process. A typical
example of the latter, following Kaiser, combines six phases (see Table 1). Each phase is
related to a teaching goal, as shown in Table 1.
Table 1. Sequence of activities for the application of case studies, following [6]
Phase Goal
Confrontation Find the problem, get an overview and describe the task
Information Validate existing information and acquire further information
Exploration Find alternative solutions
Resolution Compare and evaluate the various solution alternatives
Disputation Defend the chosen solution
Collation Compare your own solution with the real solution
Kaiser distinguishes between four basic case study variants using the parameters of
problem detection, information gathering, problem solving and critique of solution.
The four case study variants have different focus areas and are therefore distinguished
by the way in which they are designed and applied.
Current SAP case studies are distinct from traditional case studies in a few aspects of
their design and application. First, they cannot be identified with one of the case study
variants described but combine characteristics of the case-problem method and the
stated-problem method [9], which both share the feature of given and described
problems. Second, they differ sharply from traditional case studies in the areas of
solution determination and criticism, and their application in the exploration and
resolution phases. Traditional case studies challenge participants to develop solutions
to a given problem. The teaching aim of SAP case studies, by contrast, is not to
develop solutions, but to present one solution which is applicable to SAP. This is
necessary since participants can only interact with the software in a specific way [9].
Therefore some degree of adaptation has to be made to fit the case study didactics to
SAP case studies.
3.1 Requirements
We use a list of requirements to design the HCM case study and to evaluate it later.
Requirements can be divided into functional and didactic requirements.
We define functional requirements as topic-related requirements that describe the
knowledge we want to transfer to the students. First, the HCM case study should give
a detailed overview of the functions provided by SAP ERP for the field of human
resource management. Following Jung, we established the following areas of
personnel management [10]:
Second, participants should understand what they do when they process the HCM
case study. They should be given a brief introduction to the different modules of SAP
ERP HCM and should be able to explain the key functions of each module. Finally, we
expect the participants to understand the relationship between different SAP modules
and the integration of the different HCM components.
The didactic point of view can be divided into design elements and application
elements. These elements depend on the application area and the teaching goal. First,
the HCM case study should be usable at universities by students from different
departments and different levels of knowledge. The primary teaching goal is the
transfer of SAP knowledge, the secondary one the transfer of human capital
management know-how. Second, the HCM case study should generally fit the conditions
found at universities. The HCM case study should be scalable to fit different types
of classes with different specializations. Finally, the HCM case study should inspire
the participants to work independently on the respective topic.
To ensure the applicability of the HCM case study at universities, it must stay within a
certain timeframe. At the same time the HCM case study should be adjustable and be
applicable in different classes with different topics. We try to achieve that by splitting
the HCM case study into different chapters, which can be taught in different orders.
The case story was used to visualize circumstances within the HCM case study.
Each chapter had its own case story to guarantee the flexible application of the HCM
case study. Every chapter deals with one topic concerning human resource
management, treating a main task of human resource management and using the same
enterprise as an example. The situations are easy to generalize in order to meet the
concept of exemplarity.
The HCM case study should fulfill the goals of case studies as described in chapter 2.
We expect participants to have some theoretical knowledge in human resource
management. While they work through the HCM case study, participants should link
their theoretical knowledge with actions in SAP ERP HCM. The exercises are the most
important part within the HCM case study to ensure action orientation. Knowledge
transfer as secondary objective concentrates on the transfer of technical skills.
The case story takes place in the IDES Corporation. SAP IDES is a configured SAP
system containing a database with example transactions and master data [11]. It is
used by the SAP AG for demonstration and educational purposes, as well as by the
SAP UA to provide universities with a complete operational SAP system. The IDES
Corporation produces and distributes different products. In the context of the HCM
case study, participants act in different sections of the corporation's human resources
department. The HCM case study consists of nine chapters, each of which includes one
process from the human resources department. The nine chapters are divided into two
parts for application purposes. The first part is called introductory course and contains
only two chapters. The introductory course includes the following chapters:
organization management
human resource administration
The introductory course is the basis for the second part, the advanced course. The
advanced course can be started after the participants have completed both chapters of
the introductory course. The advanced course contains seven chapters. These chapters
can be studied in random order:
personnel procurement
time management
travel management
payroll accounting
human resources development
performance management
human asset accounting
All nine chapters have a uniform structure, which enables a steady learning process
while working through the HCM case study. Each chapter starts with an introduction,
which introduces the chapter's case story and the topic treated. The second section
deals with the preparation for the later exercises (preparing thematic controversy). It
consists of questions that test the students' knowledge. There is a key to these
questions. The third section describes the realization of the chapter's topic in SAP.
The integration of the component within the module SAP ERP HCM is shown
graphically. Important concepts and SAP terms are explained.
The fourth section contains the exercise for SAP ERP. The case story's situation is
described. It consists of descriptions of activities as well as a list of the actors
involved and the participant's role within the exercise. The exercises can be done in
groups or individually; we recommend the latter, since the learning effect
can thus be maximized as each participant has to perform actions rather than
merely watch what other participants are doing.
Finally, chapters come with a brief conclusion which sums up the actions
performed.
In addition, we adjusted the sequence of activities presented in chapter two. The
schedule is the same for each chapter. First, the participants are introduced to the
chapter, its aim and its underlying problem. Second, the participants have to answer
questions. They have to use their theoretical knowledge as well as their first working
experiences to answer all questions. Third, the participants are made familiar with
SAP terms and concepts. Fourth, the exercises in SAP are done. The participants
work through the case study either in small groups or on their own. Finally, the
participants summarize their action, an activity which also makes them repeat
subconsciously the combination of theoretical knowledge with practical experiences.
4 Application
We applied the HCM case study approach over two years in a class for human
resource students at master level. The course is called IT-supported human resource
management and has been developed to teach students basic IT knowledge. A
central learning outcome of the class is for students to be able to define and formulate
requirements for software in the context of human resource management.
The class is divided into two parts. The first part takes up the first five or six
sessions and is a basic introduction to information technology, different software
systems for corporate activities and the history of ERP systems with examples from
different software vendors. Next, students are made familiar with requirements
engineering methods and tools. We then concentrate on event-driven process chains
(EPC) and entity-relationship models (ER). The first part concludes with models
developed by the students in short exercises.
The second part deals with our HCM case study, but we replaced the questions
with modeling exercises. We started each meeting with a short introduction. Then
students had to design EPC- or ER-models in small groups of up to three students.
Next, they presented their results to the whole group and we showed an example that
fitted the SAP implementation. This was followed by a discussion of concepts and
SAP terms using PowerPoint presentations. We finished each meeting with work on
the case study. The students had to work through the SAP system on their own.
We made an evaluation in both years. As we concentrated on the whole HCM case
study and its approach in the first year, we did the evaluation at the end of the term.
Most students enjoyed working with the HCM case study. What pleased students
particularly was the variation between knowledge transfer and action oriented work.
Also, the modeling tasks presented a challenge to students: most of them were able to
design EPC-models, but disliked modeling ER-models.
As for the evaluation, students were given the chance to make suggestions for
improvements. Some proposed concentrating on EPC-models while others wanted
more time for software installation and modeling basics before starting on the HCM
case study. We took up this request and changed our schedule so that we had more
time for basics and installations in the second year. Yet other students suggested using
operations in the procurement case study. These suggestions were also incorporated
into the relevant chapter.
In the second year we developed a standardized questionnaire and used it after each
chapter of the HCM case study. The questionnaire was designed to give answers to
three questions:
In our second course, half of the students had practical experiences with SAP
during internships or job training. Most students enjoyed the HCM case study. 71%
thought that the knowledge gained from the class would be helpful for future
employment. We found that students' interest in the case study decreased over time.
We measured this with the help of the number of evaluations and the number of
completed case studies. Furthermore, we asked the students to judge the case study
chapters on a 5-point scale, answering the following questions, in order to identify
relevant and irrelevant topics:
1. I was able to use knowledge from my studies within the case study.
2. I was able to identify topics from my studies within the case study.
3. I will benefit from the case study work in my future working life.
We used the answers to these sentences to measure the topics' relevance: the more
the students agreed with each sentence, the more relevant the topic was considered. In
response to the second question, most of those surveyed indicated that typical hype
topics (like talent management and human resource development) are relevant and
that the units on travel management and payroll accounting are irrelevant.
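The relevance measure described above amounts to a mean agreement score per chapter on the 5-point scale. A minimal Python sketch with invented response data (not the actual survey results):

```python
# Mean Likert agreement per case-study chapter; higher means the topic
# was considered more relevant. All response values below are invented.

responses = {
    "talent management":  [5, 4, 5, 4],
    "travel management":  [2, 1, 2, 3],
    "payroll accounting": [2, 2, 1, 2],
}

def mean_agreement(scores):
    return sum(scores) / len(scores)

relevance = {topic: mean_agreement(s) for topic, s in responses.items()}
ranked = sorted(relevance, key=relevance.get, reverse=True)  # most relevant first
```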
In order to establish whether the quality of the students' models had improved, we
compared the first training models with models from the students' seminar papers. The
models' quality is determined by the number of mistakes. We searched both sets of models
for some typical mistakes. The presence of more mistakes in the first training models
was considered strong evidence of an improvement in the students' work over time.
We also included questions in which students were asked to suggest improvements.
Interestingly, different students made different suggestions. For example, some
mentioned that concepts and SAP terms should be discussed after the case study, while
others agreed with our approach. A small number of students found the case study too
easy after the first meetings and requested more challenging exercises. Most students,
however, were pleased with the level of difficulty of the case study. This supports our
finding from the first year. Both courses were inhomogeneous and some students had
a steep learning curve while others had a flat one.
5 Conclusion
Within the framework of this paper, a new case study concept was presented for the
application of SAP case studies at universities. The HCM case study contains
traditional elements of case study work as well as new approaches. Freely combinable
chapters make it possible to adapt the case study to nearly all kinds of classes.
The HCM case study combines practical exercises with knowledge transfer and
repetition. The approach combines elements from traditional classes with more
practical elements. The sequence of activities developed for the HCM case study
supports the whole learning process. The HCM case study can be applied with
different teaching methods. Some parts of the case study can be done either in groups
or individually. Other parts, such as the preparing thematic controversy section, can be
integrated into the class itself.
We used the HCM case study in a class for human resources students. Our teaching
goal was the transfer of basic IT knowledge, for which purpose we replaced the
questions to be prepared with our own exercises. As our evaluation
showed, most students enjoyed working with the HCM case study, although some of
them had already worked with the SAP system. It also showed the success of the
HCM case study in achieving our teaching goals. Interestingly, some students found
the case study challenging while others were unchallenged.
We are migrating the HCM case study to the new GBI (Global Bike Inc.)
dataset at the moment. The GBI dataset is developed by SAP UA and will replace the
IDES dataset soon. Some chapters are already available for the GBI dataset and can
be accessed via the UA website. Finally, we will include the suggested improvements
into the new version of the HCM case study.
References
1. Rosemann, M., Maurizio, A.A.: SAP-related Education - Status Quo and Experiences.
Journal of Information Systems Education 16, 437–453 (2005)
2. Bedell, M.D., Floyd, B.D., McGlashan Nicols, K., Ellis, R.: Enterprise Resource Planning
Software in the human resource classroom. Journal of Management Education 31, 43–63
(2007)
3. Kaiser, F.J., Kaminski, H.: Methodik des Ökonomieunterrichts: Grundlagen eines
handlungsorientierten Lernkonzepts mit Beispielen. Klinkhardt, Bad Heilbrunn/Obb.
(1997)
4. Brettschneider, V.: Entscheidungsprozesse in Gruppen: Theoretische und empirische
Grundlagen der Fallstudienarbeit. Klinkhardt, Bad Heilbrunn/Obb. (2000)
5. Alewell, K.: Entscheidungsfälle aus der Unternehmenspraxis. Gabler, Wiesbaden (1971)
6. Kaiser, F.J.: Die Fallstudie: Theorie und Praxis der Fallstudiendidaktik. Klinkhardt, Bad
Heilbrunn/Obb. (1983)
7. Kosiol, E.: Die Behandlung praktischer Fälle im betriebswirtschaftlichen
Hochschulunterricht (Case Method). Duncker & Humblot, Berlin (1957)
8. Reetz, L., Beiler, J., Seyd, W.: Fallstudien Materialwirtschaft: Ein praxisorientiertes
Wirtschaftslehre-Curriculum. Feldhaus, Hamburg (1993)
9. Funk, B., Lehmann, M., Niemeyer, P.: Entwicklung einer Fallstudie für die Lehre im IT-
gestützten Personalmanagement. Final 20, 11–23 (2010)
10. Jung, H.: Personalwirtschaft. Oldenbourg, München (2006)
11. Vluggen, M., Bollen, L.: Teaching enterprise resource planning in a business curriculum.
International Journal of Information and Operations Management Education 1, 44–57
(2005)
IT Applied to Ludic Rehabilitation Devices
1 Introduction
In 2000 the Tec de Monterrey Campus Cuernavaca, the biggest private university in
Mexico, together with Dr. Paul Bach-y-Rita, a neurologist then working at the
University of Wisconsin, proposed a joint effort to develop a motivational device [1].
The underlying idea is that if patients are well motivated, they perform the
rehabilitation exercises more willingly and therefore recover faster. At the time
these ideas were very innovative; the closest related work was the creation of virtual
reality-based rehabilitation by Professors Thalmann and Burdea in 2002 [2], defined as
an unconventional therapy that allows entertainment and motivation. At the same time
we make intensive use of IT to improve the rehabilitation process. Although its
application to rehabilitation is new, making systems fun and attractive is also
associated with IT, as Professor Don Norman states: "Thinking to humanize everyday
things design to be at the same time functional and attractive, but also funny and
ludic" [3]. Even empirical observations (as seen at http://www.thefuntheory.com/)
have shown that fun is a good incentive for people to react, change their attitude, or
improve their perception of things. A technology widely used for this purpose is the
video game. For example, Zach Rosenthal of Duke University in the United States uses
video games with patients in drug rehabilitation [4].
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 127-131.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
128 V.H. Zárate Silva
Diane Gromala of Simon Fraser University in Vancouver, Canada, uses video games
instead of medicine to treat chronic pain [5]. At the "Chaim Sheba Rehabilitation
Hospital" in Tel Aviv, a USD 650,000 system was built that is capable of simulating a
total virtual reality for patients with disabilities [6]. They state that their system
is fun, entertaining, and even addictive because, to some extent, it virtually
reflects what patients can perform in real life, inducing the brain to stay motivated
during rehabilitation. They have seen remarkable results with this therapy; however,
the cost is very high. There are many other examples of video game therapy, but all
have the high price in common [7, 8].
Under these assumptions, our focus is the development of innovative rehabilitation
technologies that allow registration of the evolution of the rehabilitation process
and motivate people to use them. Our goal is to create affordable systems for a
non-profit social community such as CRIC, to test them, and to make improvements that
assure better therapy.
2 Methodology
Our students have been collaborating to develop rehabilitation devices based on
CRIC's needs in its daily work. These devices are validated by rehabilitation experts
through their use, thus bringing maturity to the designs. We chose CRIC as a social
community partner because it is a non-profit institution providing social support and
services to people with low incomes. That is why we frame our participation through
the service-learning pedagogical approach. This methodology links the social
community partner (in this case CRIC) with the academic curricula so that final
products made by the students serve our social partner.
Our work's main objective is to create IT artifacts based on computer resources to
improve the efficiency of rehabilitation therapy. We chose two approaches: computer
video games as a fun therapy and virtual reality immersion as a relaxation therapy.
Both approaches must suit the different rehabilitation devices created earlier. The
video games are designed in conjunction with CRIC's therapists and according to the
learning-objective guidelines of the Tecnológico de Monterrey Campus Cuernavaca. We
attend first of all to functionality, without losing sight of adaptability and
flexibility, as seen in Figure 1.
Fig. 1. Biomechatronic device
3 Results
From all the rehabilitation devices created, we have three with integrated ludic aspect
as seen in Figure 2.
C. Multiple stimulation bed: This system has two bicycle-like pedal sets; one fits
the feet and the other the patient's hands. The system is programmed in time and
speed of movement and helps patients improve their movements and regain range of
motion while being retrained. We include a monitor where patients observe, in a
relaxing ambience, pictures or a virtual navigation through nature. This is one of
the most tested systems. We have a virtual forest tour that can be attached to
goggles to be more practical. The results seem very attractive, but we are also
improving the mechatronic bed, so the assessment process is slow.
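The programmable time and speed parameters and the registration of rehabilitation progress mentioned above can be pictured with a small sketch. Everything here (class name, fields, units) is our hypothetical illustration, not the actual controller software of the CRIC device.

```python
from dataclasses import dataclass, field

@dataclass
class PedalSession:
    """Hypothetical programmable session for the multiple stimulation bed.

    A session is prescribed for `duration_min` minutes at `speed_rpm`, and
    each completed session is logged so the therapist can track progress.
    """
    patient: str
    duration_min: int
    speed_rpm: int
    history: list = field(default_factory=list)

    def record(self, completed_min, notes=""):
        # Register the evolution of the rehabilitation process.
        self.history.append({"target": self.duration_min,
                             "done": completed_min,
                             "speed": self.speed_rpm,
                             "notes": notes})

    def progress(self):
        """Average fraction of the prescribed time actually completed."""
        if not self.history:
            return 0.0
        return sum(h["done"] / h["target"] for h in self.history) / len(self.history)

s = PedalSession("patient-01", duration_min=20, speed_rpm=30)
s.record(10)
s.record(20, notes="full session, forest tour enabled")
print(s.progress())  # 0.75
```

In a real device the speed and duration would drive the pedal motors; the point of the sketch is only that a programmed session plus a simple log is enough to quantify progress across visits.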
4 Conclusion
The use of new technologies such as IT gives us the opportunity to build
sophisticated systems. For rehabilitation, good use of IT can substantially improve
the quality of service and expand the number of people served.
In this work we show some ongoing experiences in applying academic work strategies
to create rehabilitation systems. We mainly use the service-learning approach to join
the particular requirements of a social partner with the academic requirements,
allowing students to create devices that meet these requirements. In our methodology,
the social community (the therapists) is highly involved, and its participation is
essential to success. The systems shown here are in operation at CRIC; in particular,
the multiple stimulation bed is currently in its second stage of redesign and
improvement.
References
1. Vargas, S.: Realiza el CRIC rehabilitación lúdica. Diario de Morelos, August 4, 2009
(in Spanish),
http://www.diariodemorelos.com/index.php?option=com_content&task=view&id=45764&Itemid=68
(visited on April 30, 2010)
2. Burdea, G.: Keynote Address: Virtual Rehabilitation - Benefits and Challenges. In: 1st
International Workshop on Virtual Reality Rehabilitation (Mental Health, Neurological,
Physical, Vocational) VRMHR 2002, Lausanne, Switzerland, November 7-8, pp. 1-11 (2002)
3. Norman, D.A.: Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books,
New York (2004)
4. Rivera, A.: Using Games for Rehabilitation (November 7, 2007),
http://www.massively.com/2007/11/07/using-games-for-rehabilitation/
(visited on June 30, 2011)
5. Shayotovich, E.: Video Games Treat Chronic Pain Better Than Drugs (December 17, 2007),
http://www.massively.com/2007/12/17/video-games-treat-chronic-pain-better-than-drugs-working-title/
(visited on June 30, 2011)
6. MSNBC News Tech & Science: Virtual Reality Boosts Rehab Efforts (December 18, 2006),
http://www.msnbc.msn.com/id/16266245/ (visited on June 30, 2011)
7. GestureTek Health: IREX - The Best in Virtual Reality Physical Therapy (2009),
http://www.gesturetekhealth.com/products-rehab-irex.php (visited on June 30, 2011)
8. Balasubramanian, S., et al.: RUPERT: An Exoskeleton Robot for Assisting Rehabilitation
of Arm Functions. Video, August 27, 2008,
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4625154,
http://www.youtube.com/watch?v=SZAp9ZXye8w (visited on June 30, 2011)
A Channel Assignment Algorithm Based on Link Traffic
in Wireless Mesh Network
1 Introduction
A wireless mesh network (WMN) [1] merges the advantages of WLANs (wireless local
area networks) and ad hoc networks; it is a high-capacity, high-speed wireless
network with wide coverage. In order to increase network capacity, each node in a
wireless mesh network is configured with multiple RF interfaces, and different
interfaces are assigned different non-overlapping channels.
How to reduce interference among channels during data transmission, which means
maximizing the reuse ratio of the scarce radio spectrum, and how to allocate channels
when link traffic is unbalanced have become the major challenges faced by a
multi-channel wireless mesh network. Reference [2] proposed a polynomial-time greedy
heuristic called the connected low-interference channel assignment (CLICA)
algorithm. Based on the connectivity graph and the conflict graph, this algorithm
computes a priority for each mesh node and allocates a channel to each link.
However, the algorithm does not consider flexibility and cannot handle the variety
of network traffic patterns in channel allocation. Reference [3] proposed an
interference-aware channel assignment algorithm that fully takes the impact of link
traffic into account, but it only considers the traffic of the external wireless
network. To address these problems, this paper proposes a heuristic channel
assignment algorithm based on link traffic.
134 C. Liu et al.
In order to predict the busy-degree of each node in the wireless mesh network, the
algorithm uses a Markov chain model [4] to predict the link traffic. The link
connected to the node with the larger busy-degree has priority in channel
assignment; if there are several such links, the link with the greater interference
degree under the protocol interference model [5] has priority.
This paper uses a Markov chain model (the ON_OFF model) to predict the link traffic,
as shown in Figure 1. ON indicates the state in which data is being transmitted on
the link; OFF indicates the state with no data transmission. a, b, c and d denote
the transition probabilities between the ON and OFF states.
Assume that system time is divided into two parts: data transmission time and
no-transmission time. TN is the average length of a data transmission period and TF
is the average length of a no-transmission period (TN ≥ 1, TF ≥ 1). Therefore, for
any time slot in the ON state, the probability that the next time slot is still in
the ON state is (TN-1)/TN. Likewise, for any time slot in the OFF state, the
probability that the next slot is still in the OFF state is (TF-1)/TF.
a = (TN - 1)/TN = 1 - 1/TN,  b = 1 - a = 1/TN;
c = (TF - 1)/TF = 1 - 1/TF,  d = 1 - c = 1/TF                                  (1)
π_on = TN/(TN + TF),  π_off = TF/(TN + TF)                                     (2)
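The transition probabilities and the stationary ON probability above can be checked with a short simulation. This is a sketch we added, not code from the paper; the values TN = 8 and TF = 2 are arbitrary examples.

```python
import random

def onoff_params(TN, TF):
    """Transition probabilities of the two-state ON/OFF chain:
    a = P(ON->ON), b = P(ON->OFF), c = P(OFF->OFF), d = P(OFF->ON)."""
    a = 1.0 - 1.0 / TN
    b = 1.0 / TN
    c = 1.0 - 1.0 / TF
    d = 1.0 / TF
    return a, b, c, d

def simulate_onoff(TN, TF, slots, seed=1):
    """Simulate the chain and return the fraction of ON slots."""
    rng = random.Random(seed)
    a, _, c, _ = onoff_params(TN, TF)
    state_on = True
    on_count = 0
    for _ in range(slots):
        on_count += state_on
        stay = a if state_on else c     # probability of staying in current state
        if rng.random() >= stay:
            state_on = not state_on
    return on_count / slots

TN, TF = 8.0, 2.0
pi_on = TN / (TN + TF)                  # stationary probability of ON, eq. (2)
print(pi_on)                            # 0.8
print(simulate_onoff(TN, TF, 200_000))  # close to 0.8
```

The simulated fraction of ON slots converges to π_on = TN/(TN + TF), which is what makes the stationary probabilities usable for traffic prediction.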
where π_on is the probability of the link being in the ON state and π_off the
probability of it being in the OFF state. So, in a single wireless network, the
mathematical expectation EQ of the traffic over the following m time slots is
defined as follows:
EQ = B · S, where S = Σ_{i=1}^{m} (1/TF)/(1/TN + 1/TF) = m · TN/(TN + TF)
and B is the bandwidth of the wireless network.
Q(i, j) denotes the data flow from node i to node j. As links in the wireless
network are two-way, Q(i) = Q(i, j) + Q(j, i), where Q(i) is the data flow of node i.
The busy-degree of a node indicates how busy the node is: the greater the value,
the busier the node. The busy-degree is defined as follows:
φ(i) = α · Q(i)/(C·k) + β · |Neighbor(i)|/N                                    (3)
where C is the channel capacity of the network (a fixed value is assigned in this
paper), k is the number of available channels in the network, and |Neighbor(i)| is
the number of neighbors of node i. The two parameters α and β, with α + β = 1,
denote the relative importance of the predicted traffic of the next stage and of
the number of node neighbors.
There are four kinds of node-load measurement mechanisms: CQI (channel quality
indicator), MAC buffer occupancy, the number of neighbor nodes, and the packet
processing delay; this paper mainly considers the number of neighbor nodes. Thus,
in calculating the node busy-degree, this paper considers not only the traffic of
the next stage but also the potential traffic of the node.
Since 0 ≤ φ(i) ≤ 1, the greater the value of φ(i), the busier node i, so the links
connected with node i should be assigned first.
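For concreteness, formula (3) can be coded as below. This is our sketch, not the authors' code; we read N as the total number of nodes in the network, and the traffic, capacity and weight values are arbitrary examples.

```python
def busy_degree(Q_i, n_neighbors, C, k, N, alpha=0.5, beta=0.5):
    """Busy-degree phi(i) = alpha*Q(i)/(C*k) + beta*|Neighbor(i)|/N, formula (3).

    Q_i:         predicted traffic of node i for the next stage
    n_neighbors: |Neighbor(i)|, number of neighbors of node i
    C, k:        channel capacity and number of available channels
    N:           total number of nodes (our reading of the normalizer)
    alpha, beta: importance weights with alpha + beta = 1
    """
    assert abs(alpha + beta - 1.0) < 1e-9
    return alpha * Q_i / (C * k) + beta * n_neighbors / N

# Hypothetical nodes: (id, predicted traffic, neighbor count)
nodes = [(1, 40.0, 3), (2, 10.0, 5), (3, 25.0, 2)]
C, k, N = 54.0, 3, 10
order = sorted(nodes, key=lambda n: busy_degree(n[1], n[2], C, k, N), reverse=True)
print([n[0] for n in order])  # [2, 1, 3]: node 2's many neighbors outweigh node 1's traffic
```

Sorting nodes by φ(i) in descending order is exactly the priority used by the assignment steps that follow.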
In the graph G(V, E, L), the interference degree |I(e)| of each link is calculated
according to the protocol interference model, and channels are assigned to links in
descending order of interference degree the first time channels are assigned in the
wireless mesh network. For subsequent channel assignments, we use the following
algorithm to assign a channel to each link:
Step 1. Calculate the busy-degree of each node in the wireless mesh network
according to formula (3). If e_ij ∉ E (i.e., e_ij = 0), set a_ij = 0 directly.
Step 2. Assign channels to the links connected with node i in descending order of
the calculated busy-degree φ(i). The channel assignment proceeds as follows:
Each time a channel is to be assigned to link e_ij, check the value of a_ij. If
a_ij ≠ 0, the link e_ij has already been assigned a channel, so skip it. If
a_ij = 0, the link has not yet been assigned a channel, so perform the following
assignment.
For node i and any node j satisfying e_ij = 1, if L_i ∩ L_j ≠ ∅, there are common
channels in the available-channel tables of nodes i and j. Calculate the
interference degree I(e_ij) of each such link, and assign channels to the links in
descending order of interference degree.
If |L_i ∩ L_j| = 1, the single common channel k in the available-channel tables of
nodes i and j is assigned to the link directly. If |L_i ∩ L_j| > 1, the number of
common channels in the available-channel tables is greater than 1; compare the
corresponding values of O_k and assign the channel k with the smaller value to link
e_ij.
Calculate the total interference degree w' in the network. If w' < w, the channel
assignment is successful: set w = w', a_ij = k, O_k = O_k + 1, modify the
corresponding value in the queue O_k, and update the matrix L by setting l_mk = 0
for every node m ∈ P(e_ij). Otherwise, cancel the channel assignment for the link.
Step 3. Repeat Step 2 until every link in the network has been assigned.
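As a rough illustration of Steps 1-3, the following sketch implements the greedy ordering by busy-degree and interference degree and the least-used common-channel choice. It is our simplified reading: the total-interference acceptance check (w' < w) and the update of matrix L are omitted, and all data structures and example values are hypothetical.

```python
def assign_channels(links, avail, interference, busy, k):
    """Greedy channel assignment sketch (simplified reading of Steps 1-3).

    links:        iterable of (i, j) node pairs with e_ij = 1
    avail:        dict node -> set of channels in its available-channel table L_i
    interference: dict (i, j) -> interference degree |I(e_ij)|
    busy:         dict node -> busy-degree phi(i) from formula (3)
    k:            number of non-overlapping channels
    Returns dict (i, j) -> assigned channel a_ij (0 = unassigned).
    """
    a = {e: 0 for e in links}
    usage = {ch: 0 for ch in range(1, k + 1)}   # O_k: how often channel k is used
    # Visit links by the busy-degree of their busier endpoint, then by
    # interference degree, both descending (busier/noisier links first).
    order = sorted(links,
                   key=lambda e: (max(busy[e[0]], busy[e[1]]), interference[e]),
                   reverse=True)
    for (i, j) in order:
        if a[(i, j)]:                  # already assigned earlier: skip
            continue
        common = avail[i] & avail[j]   # L_i ∩ L_j
        if not common:
            continue                   # no shared channel: leave unassigned
        # Prefer the least-used common channel (smallest O_k, lowest id on ties).
        ch = min(common, key=lambda c: (usage[c], c))
        a[(i, j)] = ch
        usage[ch] += 1
    return a

links = [(1, 2), (2, 3), (1, 3)]
avail = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3}}
interf = {(1, 2): 4, (2, 3): 2, (1, 3): 3}
busy = {1: 0.6, 2: 0.8, 3: 0.3}
print(assign_channels(links, avail, interf, busy, k=3))
# {(1, 2): 1, (2, 3): 2, (1, 3): 2}
```

The busiest, most interfered link (1, 2) is served first and takes the least-used channel; a real implementation would additionally accept an assignment only if it reduces the total interference degree w'.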
3 Network Simulation
This paper uses the MAC protocol of the IEEE 802.16 standard [7], and the
transmission rate of the wireless links is 54 Mbps. The main traffic is generated at
the peripheral nodes using a constant bit rate (CBR) traffic pattern. The ad hoc
on-demand distance vector (AODV) routing protocol [8] is used in the simulation.
Fig. 3. Comparison between the actual value and prediction value of link traffic
The experimental result shown in Figure 3 indicates that the Markov chain (ON_OFF)
model can effectively predict the link traffic in the network.
(2) Network throughput
The throughput of the algorithm proposed in this paper, the CLICA algorithm of
reference [2], and the interference-aware algorithm of reference [3] is compared in
the same network state from two aspects: higher traffic and lower traffic in the
network.
Figures 4 and 5 compare the network throughput of the different channel assignment
algorithms. The polynomial-time greedy heuristic (CLICA) calculates a priority for
each mesh node based on the connectivity graph and the conflict graph, but it does
not consider flexibility during channel assignment and cannot resolve the traffic
pattern problem, so its throughput is relatively low. The interference-aware channel
assignment algorithm considers both the impact of link traffic and the link
interference degree, so its throughput is higher than that of CLICA. The algorithm
proposed in this paper predicts the busy-degree of each node by using the Markov
chain model to predict link traffic in the wireless mesh network; links connected to
nodes with a larger busy-degree are assigned channels first, and among several such
links the one with the greater interference degree under the protocol interference
model has priority. As a result, the network throughput of the proposed algorithm is
higher than that of both the CLICA and the interference-aware algorithms.
Fig. 4. Network throughput (Mbps) over simulation time (s) under lower traffic
Fig. 5. Network throughput (Mbps) over simulation time (s) under higher traffic
The experimental results show that regardless of the amount of traffic in the
network, the link-traffic-based algorithm proposed in this paper improves network
throughput effectively. According to the results shown in Figure 4, the network
throughput of the proposed algorithm is 1.4 times that of the CLICA algorithm and
1.2 times that of the interference-aware algorithm. According to the results shown
in Figure 5, it is 1.5 times that of the CLICA algorithm and 1.3 times that of the
interference-aware algorithm. As network traffic increases, the proposed algorithm
improves network throughput and network performance even more.
Fig. 6. Average packet delay (s) of the CLICA algorithm, the interference-aware
algorithm, and the algorithm in this paper under lower and higher traffic
4 Conclusion
This paper proposes a heuristic channel assignment algorithm based on link traffic.
To predict the busy-degree of each node in the wireless mesh network, the algorithm
uses a Markov chain model to predict the link traffic. The link connected to the
node with the larger busy-degree has priority in channel assignment; among several
such links, the one with the greater interference degree under the protocol
interference model has priority. Experimental results show that the proposed
algorithm effectively improves network throughput and network transmission
performance.
References
1. Avallone, S., Akyildiz, I.F.: A Channel Assignment Algorithm for Multi-Radio Wireless
Mesh Networks. In: 16th International Conference on Computer Communications and Networks
(ICCCN 2007), Honolulu, HI, pp. 1034-1039 (2007)
2. Marina, M.K., Das, S.R.: A Topology Control Approach for Utilizing Multiple Channels in
Multi-Radio Wireless Mesh Networks. In: 2nd International Conference on Broadband
Networks (BroadNets 2005), California, pp. 381-390 (2005)
3. Ramachandran, K.N., Belding, E.M., Almeroth, K.C., Buddhikot, M.M.: Interference-Aware
Channel Assignment in Multi-Radio Wireless Mesh Networks. In: 25th IEEE International
Conference on Computer Communications, Barcelona, Spain, pp. 1-12 (2006)
4. Li, L., Yang, G.W., Cheng, G.Y.: A Forecast Method of the Flow in Networks. Computer
Engineering, 15-16 (1998)
5. Naouel, B.S., Hubaux, J.P.: A Fair Scheduling for Wireless Mesh Networks. In: 1st IEEE
Workshop on Wireless Mesh Networks (WiMesh), Santa Clara (2005)
6. NS tutorial, http://www.isi.edu/nsnam/ns/tutorial/index.htm
7. IEEE Std. 802.16-2004: IEEE Standard for Local and Metropolitan Area Networks (2004)
8. Perkins, C., Royer, E.: Ad-hoc On-Demand Distance Vector Routing. In: 2nd IEEE
Workshop on Mobile Computing Systems and Applications, Washington, DC, pp. 90-100
(1999)
An Analysis of YouTube Videos for Teaching
Information Literacy Skills
Shaheen Majid, Win Kay Kay Khine, Ma Zar Chi Oo, and Zin Mar Lwin
1 Introduction
These days, a large amount of information is readily available, and without the neces-
sary skills to search, locate, process, evaluate, and use information, people may ex-
perience various information-related problems, such as information overload, inability
to find the needed information, and underutilization of information. It is, therefore,
desirable for people from different segments of society to become lifelong learners
and to possess adequate levels of information-related competencies. The term informa-
tion literacy (IL), sometimes referred to as information competency, is generally
defined as the ability of an individual to access, evaluate, organize, and use informa-
tion from a variety of sources. Being information literate requires knowing how to
clearly define a subject or area of investigation; select the appropriate terminology
that expresses the concept or subject under investigation; formulate a search strategy
that takes into consideration different sources of information and the variable ways
that information is organized; analyze the data collected for value, relevancy, quality,
and suitability; and subsequently turn information into knowledge [1].
In the higher education arena, one of the objectives is to prepare information-literate
citizens who can work effectively in an information- and knowledge-rich society. In-
formation literacy leads students to become independent learners rather than over-
depending on teachers to seek answers to questions or solve problems.
Preddie [2] listed several benefits of information literacy for students and general
public. Information literacy requires active learning, thus students should take more
control of and be responsible for their own learning. Information literate citizens
know how to analyze and use information and how best to apply it to their work and
everyday life. For workers, information literacy enables them to embrace change,
quickly adapt to a dynamic and constantly evolving work environment, and at the same
time add value to the organization they work for.
Appreciating the importance of information literacy, various standards and guide-
lines have been developed and implemented. In the United States, the American Li-
brary Association (ALA) and Association for Educational Communications and
Technology's landmark publication Information Power and the Association of Col-
lege and Research Libraries' publication Information Literacy Competency Standards
for Higher Education have both become de facto standards for IL competencies from
kindergarten to college. The UK Standing Committee for National and University
Libraries (SCONUL) proposed the Seven Pillars of Information Skills in December
1998. The Council of Australian University Librarians (CAUL) reviewed the US
Information Literacy Standards for Higher Education by ACRL, revised the
Australian and New Zealand Information Literacy Framework (ANZIIL), and pro-
vided four guiding principles and more comprehensive details for each of the six core
standards.
Different information literacy models have also been presented, emphasizing dif-
ferent aspects of information literacy. Bruce [3] stated that information literacy is
generally influenced by five concepts, namely: information technology literacy, com-
puter literacy, library skills, information skills, and learning to learn. She stated that
the five concepts are simultaneously distinct and interconnected, and that each con-
cept is either differentiated from or integrated into current descriptions of information
literacy. This model is generally acknowledged, accepted and used. Burdick [4] de-
scribed information literacy as being made up of five components: abilities, skills,
knowledge, use and motivation. Eisenberg & Berkowitz [5] presented a well-received
Big6 information literacy model, describing how people solve a typical information
problem. Their model comprises six information-related activities: 1) task definition,
2) information seeking strategies, 3) location and access to information, 4) use of
information, 5) synthesis, and 6) evaluation.
With advancements in information technology, libraries and other stakeholders are
experimenting with new content delivery techniques to make IL instruction more ef-
fective and useful. A big advantage of using ICT is that IL instruction materials are
accessible to intended users on a 24/7 basis. An exciting addition to such initiatives is the
availability of Web 2.0 applications. Among the Web 2.0 tools, YouTube is quickly
becoming a new way of teaching IL skills in a more interesting and engaging manner.
Unlike certain online tutorials and quizzes, which are usually designed and accessible
to only authorized user groups, YouTube videos are freely available to all interested
viewers. Burke and Snyder [6] pointed out that academics are using YouTube in
innovative ways to teach their courses. Gilroy [7] noted that previously colleges
and universities were posting their videos on the site through their own channels.
However, now the new YouTube EDU page organizes the educational related videos at
one place. Primary Research Group [8] conducted a survey to explore how libraries use
Google, Yahoo, Wikipedia, eBay, Amazon, Facebook, YouTube and other web tools
and websites. The study showed that 24.2% of the libraries had a YouTube account
and one-half of them had posted user education videos on YouTube.
The above literature review shows that libraries and other information handling
agencies increasingly use ICT tools, including YouTube, for reaching out to their
patrons. Libraries have already been taking advantage of YouTube for user education
and developing information literacy skills of their patrons. However, no study could
be located analyzing the attributes of YouTube videos on information literacy. The
purpose of this study was to analyze the scope and coverage of information literacy
videos using the Big6 information literacy model. The areas covered by this study
included: the type of IL skills taught, use of different instructional approaches, quality
and duration of videos, and the intended viewers. It is expected that the analysis will
help libraries that put their IL-related videos on YouTube to identify strengths and
weaknesses of their videos and to improve their production quality. In addition,
this analysis will also help other libraries to select appropriate and high quality videos
for recommending to their patrons.
2 Method
For the study, only those videos using the keyword "information literacy" to describe
their contents were selected. As discussed in the previous sections, information
literacy is a comprehensive concept; therefore, videos on library collections,
services, facilities, rules and procedures, service hours, etc. were excluded from
this study. In addition, videos of book promotions, announcements of information
literacy workshops, students' projects, and videos of inaugural sessions and dinners
of information literacy conferences were dropped. Similarly, certain other related
terms such as user education, bibliographic instruction, library orientation, library
skills, and library awareness and promotion were avoided, as these terms do not
adequately represent the complete scope and coverage of the concept of information
literacy. It was interesting to note that some videos produced by the American
Medical Association on the ill effects of marijuana and other drugs also used
"information literacy" in their titles (e.g. Marijuana Information Literacy -
http://www.youtube.com/watch?v=Mqc7YBD7EqE). Another video, providing tips for the
purchase of a new car, used the title Final Project for Information Literacy - Car
Economics (http://www.youtube.com/watch?v=gDQhZE0oDBo). All such videos were
excluded from the analysis. Other criteria used to limit the search results were:
category "Education" and upload date "any time". The data was collected in the first
week of March 2010, and a total of 912 videos were retrieved matching the above
criteria. Due to YouTube's access limit, only the first 800 videos could be analyzed.
These videos were viewed and manually filtered to remove irrelevant videos.
Similarly, videos less than two minutes long were removed, as these were not expected
to communicate any meaningful knowledge related to information literacy. It was
interesting to note that many retrieved videos, despite using the keyword
"information literacy", did not actually cover any distinct aspect of information
literacy. Even some videos on library jokes, advertisements, and other types of
literacy used the keyword. In addition, videos appearing multiple times or with
navigation problems were removed. After manual filtering, 70 unique videos on
information literacy were selected for more in-depth analysis.
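The selection criteria above amount to a simple filter over video metadata. The study did this filtering manually; the sketch below (our illustration, with hypothetical metadata fields) only shows the logic:

```python
def select_videos(videos):
    """Keep videos tagged 'information literacy', at least two minutes long,
    in the Education category, dropping duplicates by video id."""
    seen, kept = set(), []
    for v in videos:
        if v["id"] in seen:
            continue                        # video appears multiple times
        seen.add(v["id"])
        if v["category"] != "Education":
            continue
        if "information literacy" not in v["keywords"]:
            continue
        if v["duration_sec"] < 120:         # shorter than two minutes
            continue
        kept.append(v)
    return kept

sample = [
    {"id": "a", "category": "Education", "keywords": ["information literacy"], "duration_sec": 300},
    {"id": "a", "category": "Education", "keywords": ["information literacy"], "duration_sec": 300},
    {"id": "b", "category": "Education", "keywords": ["library joke"], "duration_sec": 240},
    {"id": "c", "category": "Education", "keywords": ["information literacy"], "duration_sec": 90},
]
print([v["id"] for v in select_videos(sample)])  # ['a']
```

Manual review remains necessary for the semantic criteria (e.g. videos that carry the keyword but cover no distinct IL aspect), which no metadata filter can catch.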
The shortlisted videos were examined to identify their content, extent of coverage,
and other related attributes. Two approaches were used for the content analysis of
these videos: coverage of Big6 skills and use of different instruction styles. For
the Big6 analysis, the selected videos were examined to determine which skills were
covered and the depth of their treatment. The coverage given to each Big6 skill was
rated on a three-point scale (fair, good, excellent), based on time allocation,
sub-topics covered, examples used, and other factors. For instruction style, the
selected videos were analyzed for the teaching style they used, such as lectures,
tutorials, discussions, PowerPoint slides, oral presentations, interviews, or a
combination of styles.
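The Big6 content analysis boils down to tallying, for each video, which skills are covered and how many. A small sketch of such a tally (our illustration with made-up codings, not the study's data):

```python
from collections import Counter

BIG6 = ["task definition", "information seeking", "location and access",
        "use of information", "synthesis", "evaluation"]

def coverage_stats(coded_videos):
    """Tally Big6 coverage from manually coded videos.

    coded_videos: list of sets, each holding the Big6 skills one video covers.
    Returns (percentage of skill mentions per skill, counts of skills per video).
    """
    mentions = Counter()
    per_video = Counter()
    for skills in coded_videos:
        mentions.update(skills)             # each covered skill counts once
        per_video[len(skills)] += 1         # how many skills this video taught
    total = sum(mentions.values())
    pct = {s: round(100 * mentions[s] / total) for s in BIG6 if total}
    return pct, per_video

# Hypothetical coding of three videos
coded = [{"task definition", "information seeking"},
         {"information seeking"},
         {"information seeking", "location and access", "use of information"}]
pct, per_video = coverage_stats(coded)
print(pct["information seeking"])  # 50 (3 of 6 skill mentions)
print(per_video[1])                # 1 video covered exactly one skill
```

The two counters correspond directly to the two analyses reported in the findings: the share of skill mentions per Big6 skill, and the distribution of how many skills each video covers.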
3 Findings
The following sections provide an analysis of the 70 unique videos on information
literacy skills. The discussion is divided into two major sections: coverage of Big6
skills, and instructional approaches and other attributes.
The selected videos were analyzed for coverage given to different Big6 skills com-
prising: task definition, information seeking strategies, information location and
access, use of information, synthesis, and evaluation. It was found that 11% of the
videos discussed task definition skills which include problem definition and identifi-
cation of information needs (Figure 1). The highest percentage (26%) of the videos
covered different strategies that can be used for seeking the needed information. The
percentage of videos teaching information location and access skills, information use
skills and information synthesis were 23%, 20% and 13% respectively. Only 7% of
the videos covered information evaluation related skills. It appeared that a majority of
the YouTube videos covered three IL skills, i.e. information seeking strategies, infor-
mation location and access, and information use skills. Comparatively fewer videos
taught the remaining three equally important information literacy skills.
Fig. 1. Coverage of Big6 skills in the analyzed videos: information seeking (27%),
information access (24%), information use (20%), task definition (12%), information
synthesis (10%), information evaluation (7%)
As many videos covered more than one information literacy skill, these videos
were further analyzed for the number of Big6 skills taught in each individual video. It
was noted that none of the videos discussed all the six information literacy skills. One
skill was covered by 19 or 27.2% of the YouTube videos. The videos teaching two or
three information literacy skills were 22 (31.4%) and 17 (24.3%) respectively. Four
information literacy skills were covered by 11 (15.7%) of the videos. It appeared that
a majority of the videos covered two to four information literacy skills; however,
almost none of them covered all the Big6 skills.
The selected videos were also analyzed to determine the depth of treatment given to different information literacy skills. For this purpose, the videos were categorized into three groups, namely excellent, good, and fair. The criteria used for this categorization were: coverage of topics, content (quality of script), duration, presentation style, and the number and type of examples used. The following is a brief description of each category:
Fig. 2. Instruction styles used in the videos, including presentation (43), tutorial (37), slide show (20), lecture (17), discussion (5), and interview (2)
Other commonly used communication techniques were slide shows without any verbal explanation (20 videos) and lectures (17 videos). It appeared that YouTube videos used a variety of communication techniques for exposing viewers to necessary information literacy skills, and a majority of these videos used a combination of teaching techniques.
An Analysis of YouTube Videos for Teaching Information Literacy Skills 149
Fig. 3. Duration of the videos: 2:00-4:00 minutes (27%), 4:01-6:00 minutes (30%), 6:01-8:00 minutes (21%), 8:01-10:00 minutes (19%), and more than 10 minutes (3%)
A big variation was observed in the quality of YouTube videos on information literacy. Many of the videos were not shot professionally and had poor production and presentation quality (Table 1).
However, the quality of several videos was quite good. Most of these videos were shot purposely to teach IL skills, predominantly by library directors and other senior staff. For example, Dr. Bob Bakar's 12-video series (http://www.youtube.com/watch?v=cnfmzIHzTds&feature=channel) was presented with good picture and sound quality, good PowerPoint slides, a tutorial, and recommended readings. Similarly, an 11-video series by Nathan Pineplow from the University of Colorado is rich in content with an engaging presentation (http://www.youtube.com/watch?v=1_Ksbwlaf88&feature=related).
4 Conclusion
A variety of techniques are being used for creating familiarity with and for imparting information literacy skills. YouTube is a very powerful medium which has the potential to reach out to different segments of society on a 24/7 basis. Another advantage is that it can deliver the intended message in a more interesting, effective, and engaging manner. This analysis found that many libraries, particularly academic libraries, were using YouTube videos for teaching different information literacy skills to their users. However, it was a matter of concern that many videos were not of good quality, probably shot by amateurs without adequate video production skills. Several such videos had poor picture quality, inappropriate backgrounds, inadequate light and poor sound recording. Although many other videos available on YouTube are also of poor quality and produced by amateurs, it is desirable that libraries take extra care while posting their videos, because these videos are likely to indirectly affect the image of libraries and the perceived quality of the services they provide. It is, therefore, desirable that libraries either get professional help for producing their YouTube videos or get their staff trained for this purpose. Another aspect requiring attention is the communication skills of presenters. Many presenters of information literacy videos failed to demonstrate good communication skills. Library professionals need to understand that, in order to take full advantage of the power of audio-visual media, they need to make extra efforts to acquire the skills necessary for effective communication through videos. Library and information schools can also consider providing audio-video production training to their students.
Ya-jun Pang
Luoyang Institute of Science and Technology, Henan Province, P.R. China, 471023
shizi7677@hotmail.com
1 Introduction
Hybrid learning refers to the organic integration of online learning (e-Learning) and traditional classroom learning (face-to-face) [1-3]. It not only preserves the teacher's leading role in guiding, inspiring, and monitoring the teaching process, but also fully embodies students' initiative, enthusiasm, and creativity as the main body of the learning process [5-7]. Under foreign influence and inspiration, the use of hybrid learning in domestic enterprise training has gradually increased; although blended learning in education is still at an initial stage, more and more people are now concerned with hybrid learning, and it has been widely used at home and abroad in English, computing, and other teaching areas, achieving better teaching results. Present study and practice of hybrid learning focuses on its principles, definitions, strategies, and research models; research on the learning effect of hybrid learning and on instructional design factors is relatively scarce. As far as PE is concerned, research on hybrid learning in sports is scarce; the literature [4,8] mainly focuses on the structural model of a physical education hybrid learning platform (PEHLP) and the use of video annotation and editing technology to improve sports
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 153–160.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
hybrid learning teaching. Due to network bandwidth constraints, problems such as recording learning materials and the efficiency of uploading and transmitting video are prominent, and the platform's popularity is limited. To solve these problems, this paper establishes a new hybrid learning model for physical education, which blends traditional F2F and e-Learning instruction through lightweight interactive communication tools.
The evolution of e-Learning systems in the last two decades has been impressive. In their first generation, e-Learning systems were developed for a specific learning domain and had a monolithic architecture. Gradually, these systems evolved and became domain-independent, featuring reusable tools that can be used effectively in virtually any e-Learning course. Systems that reach this level of maturity usually follow a component-oriented architecture in order to facilitate tool integration. The LMS is an example of this type of system, integrating several types of tools for delivering content and for recreating a learning context (e.g. Moodle, Sakai) [9].
The present generation focuses on the interchange of learning objects and learner information through the adoption of new standards that brought content sharing and inter-operability to e-Learning. In this context, several organizations have developed specifications and standards in recent years, which define standards for e-Learning content and inter-operability, among many others. These systems, based around pluggable and interchangeable components, led to oversized systems that are difficult to reconvert to changing roles and new demands, such as the integration of heterogeneous services based on semantic information, the automatic adaptation of services to users (both learners and teachers), and the lack of a critical mass of services to supply the demand of e-Learning projects [9]. These issues triggered a new generation of e-Learning platforms that can be integrated in different scenarios based on Service-Oriented Architecture (SOA) technology. In the last few years there have been initiatives to adapt SOA to e-Learning [10]. These initiatives (e-Learning frameworks) had the same goal: to provide flexible learning environments for learners worldwide. These e-Learning frameworks make intensive use of the standards for e-Learning content sharing and inter-operability developed in recent years by several organizations (e.g. ADL, IMS GLC, and IEEE). Therefore, we conclude that hybrid learning should be open and equipped with inter-operability and flexible learning environments.
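The shift from monolithic to service-oriented platforms described above can be pictured as tools hidden behind small service interfaces. The following Python sketch is illustrative only; the interface and class names are our own invention, not part of any of the cited frameworks or standards:

```python
from abc import ABC, abstractmethod

class ContentService(ABC):
    """A pluggable e-Learning tool exposed as a service, so a platform can
    swap implementations without changing the rest of the system."""
    @abstractmethod
    def fetch(self, object_id: str) -> dict: ...

class InMemoryRepository(ContentService):
    """Example implementation backed by an in-memory store; a real one
    would speak a content-sharing standard such as those from ADL or IMS GLC."""
    def __init__(self):
        self._store = {"intro-101": {"title": "Introduction", "type": "lesson"}}

    def fetch(self, object_id: str) -> dict:
        return self._store[object_id]

def deliver(service: ContentService, object_id: str) -> str:
    # The platform depends only on the interface, not on the implementation.
    return service.fetch(object_id)["title"]

print(deliver(InMemoryRepository(), "intro-101"))  # Introduction
```

Because `deliver` only sees the `ContentService` interface, a repository backed by a remote service could replace the in-memory one without touching the calling code, which is the inter-operability property argued for above.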
week), and 67% of students think that teacher-student exchange and interaction outside the PE classroom is poor [4].
The hybrid learning model not only makes up for the shortcomings of traditional F2F teaching, such as untimely feedback, a single instructional medium, and the separation of in-class and out-of-class learning, but also overcomes the weak monitoring of teaching in teacher-led online learning. Built on existing materials and modern teaching media, the hybrid learning model for physical education extends the time and place of PE teaching almost without limit, achieving a "teaching and learning anytime, anywhere" effect. Given the particularity of its environment, physical education can be divided into classroom physical education and outside physical education, according to where and how students learn. The basic model of hybrid learning organically combines two instruction methods: Face-to-Face and e-Learning (refer to Figure 1). In Figure 1, classroom physical education proceeds by guided navigation, so Face-to-Face instruction is the primary means and e-Learning is supplementary. On the contrary, outside physical education proceeds by learner self-navigation, so e-Learning is the primary means and Face-to-Face is supplementary. What's more, e-Learning is divided into self-directed e-Learning and interactive e-Learning, and interactive e-Learning is mainly adopted given the characteristics of physical education. To realize interactive e-Learning, we proposed an education platform named PEHLP in [4,8]. As mentioned above, the main characteristic of the PEHLP is that it relies on video editing (video review) technology to remove communication barriers. Although the PEHLP can support physical education, it also has some defects: firstly, teachers argue that video review costs them too much time; secondly, students hope that the PEHLP can be supplied with instant communication tools, so that their questions can be answered in a short time and they can communicate with their classmates.
The FPEHLP comprises three parts: identity identification, the physical course deliverer, and the dynamic learning space. The Identity Identification Module (IIM) accomplishes user registration and authorization for system safety. In this paper, the physical course deliverer and the dynamic learning space are presented in detail.
A module named Smart Deliverer (SD) is adopted by the physical education course deliverer to reuse physical education course materials from the national elaborate courses or other education platforms. SD is a set of services that accomplish information exchange between the FPEHLP and heterogeneous education platforms, for example the education platform of the national elaborate physical education course. The main components of the Smart Deliverer (refer to Figure 2) include the Theory Material Learning Space, the Courseware Warehouse, the Multimedia Library, and the Test Bank. More details are illustrated in [8]. To make learning interesting, the Microsoft Agent toolkit and Speech SDK are adopted in the FPEHLP. We put smart flags in the learning material and courseware warehouse where the knowledge is important and needs to be illustrated in more detail. When a student is learning a key point and clicks it, the agent appears and speaks it. For example, in Figure 3, when the student clicks the key point "Aerobic exercise", the Agent appears and explains what aerobic exercise is, just as the teacher would.
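The smart-flag mechanism described above amounts to a mapping from flagged key points to the explanation the agent speaks. The sketch below is hypothetical (the table contents, names, and fallback message are ours; the real platform relies on the Microsoft Agent toolkit and Speech SDK):

```python
# Hypothetical smart-flag table: key points in the courseware mapped to the
# explanation the speaking agent should read when the student clicks them.
SMART_FLAGS = {
    "Aerobic exercise": "Exercise of moderate intensity sustained long "
                        "enough to be fueled mainly by oxygen metabolism.",
}

def on_key_point_clicked(key_point: str) -> str:
    """Return the text the agent would speak; in the real platform this
    string would be handed to the Speech SDK for voice output."""
    explanation = SMART_FLAGS.get(key_point)
    if explanation is None:
        return f"No smart flag is attached to '{key_point}'."
    return explanation

print(on_key_point_clicked("Aerobic exercise"))
```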
The learning process of e-Learning in the hybrid learning context can be described as follows: first, the student studies the material using the learning resources of the education platform; then he/she does exercises/tests using the test papers supplied by the platform's Test Bank; lastly, by checking the exercises/tests against the answers, he/she can clearly know whether the answers are right or wrong, since each answer is unique. As far as physical education is concerned, the visualization of standard demonstration actions, the teacher's hand-in-hand teaching, and communication between teacher and student play very important roles during the learning process.
Hybrid Learning of Physical Education Adopting Lightweight Communication Tools 157
The student may not perform the action well even if he/she grasps every detail of each action. In the F2F context, the student can only find out mistakes with the help of his/her classmates or the teacher in the field; what's more, the mistakes need to be illustrated with instructions and demonstrated hand-in-hand. To mimic the F2F context, video data tools should be supplied. With the video data tools, the teacher can make teaching materials easily and conveniently, such as adding commentary to the key points of a video. Another function of the video data tools is reviewing the videos of students' exercises/tests. In order to meet these requirements, the Dynamical Learning Space (DLS) is proposed, which is composed of the Video Review Module (VRM), a Video Conference module and an IM Adaptor (refer to Figure 2). The core modules of the DLS are the VRM and the IM Adaptor. The function of the VRM is video reviewing and annotation. With the VRM, the teacher can review videos or images of the students' actions, pick out the wrong actions and add still annotations. By watching the reviewed video or image, the student can find out his/her mistakes and get instructions.
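The VRM workflow of picking out wrong actions and attaching still annotations can be modeled with a small record type. The class and field names below are illustrative assumptions, not the platform's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A still annotation a teacher attaches to a moment in a student's
    exercise video (fields are illustrative)."""
    timestamp_s: float   # position in the video, in seconds
    wrong_action: str    # the mistake picked out by the teacher
    instruction: str     # how to correct it

@dataclass
class VideoReview:
    student: str
    video_file: str
    annotations: list = field(default_factory=list)

    def annotate(self, t: float, wrong: str, fix: str) -> None:
        self.annotations.append(Annotation(t, wrong, fix))

review = VideoReview("student-01", "long_jump.mp4")
review.annotate(12.5, "arms not swinging",
                "swing both arms backward before take-off")
print(len(review.annotations))  # 1
```

A student replaying the video would then see each annotation at its timestamp, which mirrors the "review, pick out, instruct" cycle described above.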
4.1 Purpose
The primary aim of this project is to devise a simple, flexible framework for hybrid learning of physical education which can promote students' life-long physical education and cultivate healthy physical training habits and hobbies. The following issues were specifically focused on:
Would hybrid learning improve the effect of physical education; in particular, would it promote students' exercise habits within the university culture?
Would an education platform for hybrid learning with lightweight communication tools meet students' physical exercise requirements?
Since September 2007, we have investigated the students. In the initial stage, students watched inspirational films and videos created by teachers in the classroom; then a variety of teaching content and standard videos were placed on the FPEHLP for students to study; then, in September 2008, QQ groups (one per grade) were set up through the IM Adaptor of the FPEHLP to provide mutual exchange between teachers and students and among students.
Table 1. Averaged marks of the experimental and control groups

No.   Group          Long-distance Running   Standing Long-Jump   Rope Skipping   P/S
I     Experimental   42.3                    48.2                 66.5            1.0/0.01
      Control        42.1                    48.6                 66.2
II    Experimental   66.2                    65.8                 81.3            0.53/0.45
      Control        55.4                    57.2                 68.3

Notes: (1) No. I is the data before the experiment; No. II is the data after the experiment; (2) P/S refers to the Pearson correlation coefficient and its significance.
All the students were divided into an experimental group and a control group. In the experimental group, hybrid learning was used: excitation and exchanges between students and teachers, timely feedback of teaching information, and self-assessment after the end of each new teaching unit; each student exchanged at least once a week and watched at least one inspirational movie or article every two weeks. In the control group, only the traditional physical education learning model was used. During one school year of physical education, three examinations, long-distance running (1,000 meters for boys and 800 meters for girls), standing long jump, and rope skipping (one-minute count), were used as indicators of physical fitness. Content analysis was used to explore the information collected from participants. The results reveal that hybrid learning can significantly improve the calisthenics teaching/learning effect (refer to Table 1). From the experimental results (Table 1), the Pearson correlation coefficient after the experiment is 0.53 with a significance of 0.45, which is greater than 0.05; that is to say, there are differences between the experimental group and the control group on the test indicators. We notice that the long-distance running, standing long jump and rope skipping marks of the experimental group increased by 23.9, 17.6 and 14.8 percentage points; in contrast, the indicators of students in the control group increased by 13.3, 8.6 and 2.1. Therefore, hybrid learning can improve the effect of physical education. To find out why, we surveyed physical education attitude, physical education aims, learning initiative and so on. The results show that 97% of students in the experimental group like physical exercise, 86.2% of them are eager to take part in physical exercise, and 86.2% of them like hybrid learning (refer to Table 2). Furthermore, 87.5% of students think that the FPEHLP can meet their physical exercise requirements.
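The percentage-point gains quoted above follow directly from the marks in Table 1; a quick arithmetic check (variable and function names are ours):

```python
# Averaged marks from Table 1 (before / after the experiment), in the order
# long-distance running, standing long jump, rope skipping.
experimental_before = [42.3, 48.2, 66.5]
experimental_after  = [66.2, 65.8, 81.3]
control_before      = [42.1, 48.6, 66.2]
control_after       = [55.4, 57.2, 68.3]

def gains(before, after):
    """Percentage-point increase for each indicator."""
    return [round(a - b, 1) for b, a in zip(before, after)]

print(gains(experimental_before, experimental_after))  # [23.9, 17.6, 14.8]
print(gains(control_before, control_after))            # [13.3, 8.6, 2.1]
```

Both lists match the increases reported for the experimental and control groups.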
Acknowledgements. The work has been partly supported by the 2010 Luoyang Institute of Science and Technology Foundation (2010YR14) and the 2011 Henan Province Social Science Federation Project (SKL-2011-2573).
References
1. Graham, C.R.: Blended Learning Systems: Definition, Current Trends, and Future Directions. In: Handbook of Blended Learning: Global Perspectives, Local Designs, pp. 3–21. Pfeiffer, San Francisco (2005)
2. Zhao, S.J.: Application of Network Education Technology in Physical Education of Higher Education. Dissertation, South China Normal University (2007)
3. Kim, W.: Towards a Definition and Methodology for Blended Learning. In: International Workshop on Blended Learning 2007 (WBL 2007), pp. 15–17. University of Edinburgh, Scotland (2007)
4. Pang, Y.-J.: Hybrid Learning of Physical Education Using National Elaborate Course Resources. In: Tsang, P., Cheung, S.K.S., Lee, V.S.K., Huang, R. (eds.) ICHL 2010. LNCS, vol. 6248, pp. 270–281. Springer, Heidelberg (2010)
5. Tan, C., Liu, Y.: Hybrid Learning and Discussion on its Implementation Measures in Distance Education. Modern Distance Education Research 81(3), 36–38 (2006)
6. Qi, Y.: Analysis on Application of Hybrid Teaching Mode in Higher Education. In: Hybrid Learning: A New Frontier, pp. 151–160. City University of Hong Kong (2008)
7. Karen, V., Charles, D., et al.: Blended Learning Review of Research: An Annotated Bibliography. In: The ALN Conference Workshop on Blended Learning & Higher Education (2005)
8. Pang, Y.-J.: Techniques for Enhancing Hybrid Learning of Physical Education. In: Tsang, P., Cheung, S.K.S., Lee, V.S.K., Huang, R. (eds.) ICHL 2010. LNCS, vol. 6248, pp. 94–105. Springer, Heidelberg (2010)
9. Dagger, D., O'Connor, A., Lawless, S., et al.: Service Oriented eLearning Platforms: From Monolithic Systems to Flexible Services. IEEE Internet Computing 11(3), 28–35 (2007)
10. Schools Interoperability Framework, http://www.sifassociation.org
Experiments on an E-Learning System
for Keeping the Motivation
Abstract. E-learning systems that use computers and the Internet have become popular. E-learning systems have many advantages. However, users often lose their motivation for learning in the process of studying, and the frequency with which they use e-learning systems sometimes decreases. In order to improve their motivation, we add two functions in this paper: (1) a function that praises or scolds users, and (2) a function that limits the answering time. We also check the utility of these functions by experiments.
1 Introduction
E-learning systems, with which users study using a computer and the Internet, have been widely used [1][2]. There are various good points in using e-learning systems; for example, users can study at any place where an Internet connection is provided. On the other hand, e-learning systems also have weak points. Some users easily get bored, since they just read texts displayed on a computer screen and have no chance to communicate with teachers; they don't feel joy or mental stress, such as being praised or being scolded, in the process of learning. As a result, a user often loses the will to keep learning, and the frequency of using an e-learning system decreases. Recently, a new approach called entertainment learning has been studied for keeping the learning will; entertainment learning incorporates "the fun" of games into an education system [3].
The objective of this study is to construct an e-learning system that improves or keeps a user's motivation for learning. In this research, we add a function of praising and scolding and a time-limit function on the answering time to the web-based e-learning system "Let's Study English" developed by our laboratory [4]. In addition, with Ajax technology, we build a seamless environment by decreasing the frequency of page transitions in order to improve learning efficiency [5]-[7]. We check the effectiveness of these functions by experiments.
This paper is organized as follows: In Section 2, we describe the functions to keep the learning will or motivation. Section 3 shows the experimental results for evaluating the functions added to the system. Finally, in Section 4 we give some conclusions and future tasks.
162 K. Shimada, K. Takahashi, and H. Ueda
We add a function of limiting the answering time, which is the interval from the moment that the system displays a problem on the screen to the time limit by which a user must input the answer to the system. This function aims to give the user a feeling of entertainment and to keep or improve the learning will of the user by letting the user concentrate on solving the problem.
In this study, we set the time limit to 10 seconds for every problem whose type is
one-out-of-four selection; a user chooses one right answer among four choices.
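Such a limit can be enforced with a simple timestamp comparison; the following is a minimal sketch (names and structure are ours, not the actual system code):

```python
import time

TIME_LIMIT_SECONDS = 10  # limit used for the one-out-of-four problems

def answer_accepted(shown_at: float, answered_at: float) -> bool:
    """True if the answer arrived within the limit, measured from the
    moment the problem was displayed on the screen."""
    return (answered_at - shown_at) <= TIME_LIMIT_SECONDS

shown = time.monotonic()
# ... the user thinks about the problem ...
print(answer_accepted(shown, shown + 7))   # True: within 10 seconds
print(answer_accepted(shown, shown + 12))  # False: time is up
```

When the check fails, the system would switch to the "Time is up" page described below.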
Fig. 1. An example of the flow of transition of pages with displayed images to scold and to
praise in the e-learning system.
When the time is up, the screen page giving the problem is replaced with a screen page which includes the teacher image with the sentence "Time is up. You should concentrate on learning more."
3 Experiments
In this chapter we examine whether the functions added to the system to improve or keep the learning will are effective.
We gathered 14 students of our university as cooperators in the experiments; they use this system to learn English words. After they take the test, we administer a questionnaire about the system and their feelings.
For the learning experiments, we prepare two types of systems that differ in how easily users are praised during learning: one system that tends to praise users easily and another that tends to scold users. In addition, for comparison, we perform an experiment with a system that has neither the function to scold and praise nor the time-limit function.
Three systems which we use for the experiments are as follows:
SYS1: the system that allows a user to make mistakes twice and praises a user when
he answers correctly at the third trial; the user is scolded when he makes
mistakes more than three times.
SYS2: the system that scolds a user as soon as he makes a mistake.
SYS3: the system that does not have the time-limit function nor the function to scold
and praise users.
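Under one plausible reading of the SYS1/SYS2 rules above, the choice between the praising and scolding images can be sketched as a small decision function (this is our interpretation, not the authors' implementation; names are ours):

```python
def feedback(system: str, mistakes: int, correct: bool) -> str:
    """Return the feedback to show after an answer.

    system:   "SYS1" (generous) or "SYS2" (strict).
    mistakes: number of wrong attempts already made on this problem.
    correct:  whether the current answer is correct.
    """
    if system == "SYS2":
        # SYS2 scolds as soon as a mistake is made.
        return "praise" if correct else "scold"
    # SYS1 allows two mistakes: a correct answer up to the third trial is
    # praised; from the third mistake onwards the user is scolded.
    if correct:
        return "praise" if mistakes <= 2 else "none"
    return "scold" if mistakes >= 2 else "none"
```

For example, under this reading a SYS1 user who answers correctly on the third trial (two prior mistakes) is still praised, while a SYS2 user is scolded on the very first mistake.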
The cooperators in the experiments are divided into two groups based on the grade points obtained in the technical English lecture of our university, so that the two groups have little difference in English ability. Let us call the groups Group 1 and Group 2. Group 1 uses the system SYS1 for learning, and Group 2 learns with the system SYS2. In addition, both groups use the system SYS3. We also prepare two tests, called Test A and Test B; each test consists of 40 problems.
Test A: The test users take after users learn with the system SYS3.
Test B: The test users take after users learn with the system SYS1 or SYS2.
For example, Group 1 takes Test B after learning with SYS1 first, and then takes Test
A after learning with SYS3.
We consider what kind of change is observed in test points between the prior test
and the test after learning with the three systems, namely SYS1, SYS2 and SYS3.
SYS1 is a generous system that tends to praise users. SYS2 scolds a user as soon as
he makes a mistake. SYS3 is the system that has neither the time-limit function nor
the function to scold and praise users.
By using a questionnaire, we also investigate what kind of change of feelings users
make and whether the learning will improves or not, according to the system with
which users study.
Let P1 and P2 denote the points obtained in the prior test and in the test after learning, respectively. We calculate the relative increase rate as follows:
R = (P2 - P1) / P1. (1)
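Equation (1) translates directly into code; a minimal sketch (the function name is ours):

```python
def relative_increase_rate(p1: float, p2: float) -> float:
    """Relative increase rate R = (P2 - P1) / P1 from Eq. (1).

    p1: points obtained in the prior test.
    p2: points obtained in the test after learning.
    """
    if p1 == 0:
        raise ValueError("P1 must be non-zero")
    return (p2 - p1) / p1

# Example: a student scoring 20 points before and 28 after gives R = 0.4.
print(relative_increase_rate(20, 28))
```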
Figure 2 plots the relative increase rates of users who studied with SYS1 and SYS2. We can see from Figure 2 that the average relative increase rates for users with SYS1 and with SYS2 are almost the same, but there is a user who shows a great change in the increase rate with SYS1, while no one with SYS2 shows a great increase.
Figure 3 shows the relative increase rates of users with SYS2 and SYS3. We can see from Figure 3 that some students show low increase rates with SYS2 compared to those with SYS3. The authors think that this is caused by the images used to scold students: some students feel too stressed by being scolded, and thus their increase rates are low. After learning with SYS2, learning with SYS3 makes students relaxed.
We summarize the average points obtained from the questionnaire for every group in Table 1. We can see from the results for items 4 to 7 in Table 1 that the users in Group 1, who learned with the system that praises, feel more joy and less sadness or irritation than the users in Group 2, who learned with the system that scolds. Conversely, users in Group 2 feel more sadness and irritation than the users in Group 1. From item 8 of the questionnaire, we see that users in both Group 1 and Group 2 improve their learning will. We also see from item 9 of the questionnaire that limiting the answering time gives stress to most users. Further, we see from items 11 and 12 that the users in Group 1, who use the system that tends to "praise", feel more familiar with the system than users in Group 2, who use the system that tends to "scold".
Fig. 2. The comparison of the relative increase rates of students with SYS1 and SYS2
Fig. 3. The comparison of the relative increase rates of students with SYS2 and SYS3
Table 1. Average points obtained from the questionnaire (Group 1 / Group 2 / Average)

1. Can you easily understand how to use the system?  4.9 / 4.6 / 4.7
2. Does the system have enough functions to study?  4.0 / 3.9 / 3.9
3. Can you learn effectively with the system?  4.6 / 3.7 / 4.1
4. Do you use the system more easily than the previous system?  4.9 / 4.4 / 4.6
5. Do you feel glad when the image for the correct answer is displayed?  3.9 / 3.0 / 3.4
6. Do you feel sad when the image for the wrong answer is displayed?  1.7 / 2.4 / 2.1
7. Do you feel irritated when the image for the wrong answer is displayed?  2.1 / 3.4 / 2.8
8. Did your learning will improve by displaying images?  3.6 / 3.6 / 3.6
10. Did your learning will improve by the time limit?  3.3 / 3.1 / 3.2
11. Do you want to use this system again?  4.6 / 3.9 / 4.2
12. How much is the degree of total satisfaction with the system?  4.4 / 3.9 / 4.1
From the experiments, the authors believe that the function to "scold" or "praise" users and the time-limit function that were newly added to the system are effective for improving or keeping the learning will.
6 Conclusions
In this study, we added functions to the e-learning system, aiming at improvement of learning efficiency. The implemented functions are a function that sets a time limit for every problem and a function that displays images to "scold" or "praise" users when the answer to a problem is wrong or correct, respectively. We employed Ajax technology to enable users to continue answering without page transitions at the time of a wrong answer and thus to shorten the waiting time.
In order to show the effectiveness of the functions, we performed experiments. In the experiments, we prepared two systems, one that tends to scold and one that tends to praise users, in order to examine which is more effective in keeping the learning will. Also, we examined the effectiveness of the time-limit function. Further, we administered a questionnaire to examine what kind of change appeared in the learning will. In the experiments, no difference in learning will was seen on average between the system that scolds and the system that praises, while for some users such systems showed effectiveness in learning. We also showed that the function to "scold" or "praise" helped improve the learning will more than the time-limit function. As for time limiting, we need more experiments, such as setting a more appropriate limiting time.
As a future task, we can improve the system by adding an entertainment function, such as games, so that users do not feel bored.
References
1. Okamoto, T., Mizoguchi, R.: Artificial Intelligence and Tutoring System. Ohm Inc., Tokyo (1990) (in Japanese)
2. Saitoh, A., Nishida, T., Nakanishi, M., et al.: Classroom Support and System Administration Support for Large Scale Educational Computer System. Trans. IEICE Japan J84-D-I(6), 956–965 (2001) (in Japanese)
3. Takaoka, R., Watanabe, Y., Matsushima, W., Onitake, S., Horikawa, T., Okamoto, T.: A Development of Game-based Learning Environment to Activate the Interaction among Learners. IEICE SIG Technical Report, ET2007-96, pp. 69–72 (March 2008) (in Japanese)
4. Yamashita, Y., Takahashi, K., Ueda, H., Miyahara, T.: Construction and Analysis of Web-based E-learning System for Exercises. In: Proc. Chugoku Branch Conference of Electrical and Information Related Institutes (2004) (in Japanese)
5. Takahashi, T.: Beginner's Guide: Asynchronous JavaScript + XML. Softbank Creative Inc., Tokyo (2005) (in Japanese)
6. Urushio, T.: Introductory 10-day Class of Ajax. Syoueisya Inc., Tokyo (2007) (in Japanese)
7. Takeuchi, G., Takeuchi, M., Sano, H.: A Sentence Interface Program for Language Learning by Using Ajax Technology. IPSJ SIG Technical Report, 2008-CE-93, pp. 147–154 (February 2008) (in Japanese)
8. Construction of WWW Servers, http://cyberam.dip.jp/linux_server/www_server.html
Object Robust Tracking Based on an Improved Adaptive Mean-Shift Method
1 Introduction
Visual target tracking is currently widely used in the military, video surveillance, transportation, etc. Research on robustness and real-time performance is a hot issue in visual target tracking. The Mean Shift algorithm is a versatile nonparametric probability density estimation method. This method was first used successfully in visual tracking by Comaniciu. The articles [1-2] discussed the Mean-Shift-based target tracking method and the selection of the kernel function bandwidth. For tracking purposes, the Mean Shift method generally uses histograms to model the target region and the candidate region. Similarity between the target model and the target candidates in the next frame is measured using the metric derived from the Bhattacharyya coefficient.
Mean Shift algorithm is an optimal estimation method with a rising gradient of the
maximum probability density. Using the Mean Shift algorithm, we dont need the
endless search in the candidate region. We use the kernel probablity density to describe
the target features, and find the target real position by the Mean Shift vector. However,
the basic Mean Shift algorithm does not provide the solution of the orientation and
scale target. Many improved Mean Shift algorithms are proposed. Collins[3] put
forward the scale-space on the Mean Shift algorithm to solve the target scale change by
adding a scale kernel. Yilmaz[4] built a 4-dimensional kernel space, including location
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 169177.
springerlink.com Springer-Verlag Berlin Heidelberg 2012
170 P. Zhao, Z. Liu, and W. Cheng
information, rotation, and scale information, in order to obtain optimized tracking
results. In paper [5], the selection of the kernel scale via linear search was discussed.
Elgammal et al. reformulated the tracking framework as a general form of joint
feature-spatial distributions [6-7].
We present an improved adaptive Mean Shift object tracker that achieves
improvements in two aspects: orientation and scale. We also propose a cascade kernel
function to reduce the computational complexity.
The rest of the paper is organized as follows. Section 2 analyzes the standard
Mean-Shift object tracking algorithm and the shortcomings of the classic Mean-Shift
algorithm. Section 3 brings forward the improvements in adaptive orientation and
adaptive scale; in this section, the cascade kernel is also proposed to reduce the
computational complexity. Section 4 then gives the adaptive Mean Shift algorithm.
The experimental results and discussion follow in Section 5.
2 Mean-Shift Analysis
K_H(x) = |H|^{-1/2} K(H^{-1/2} x)   (2)

where ||x|| denotes the norm of x, the function k(x) is called the kernel profile, and
c_{k,d} is a normalizing constant. Using this kernel, formula (1) is replaced by:

f_{h,K}(x) = (c_{k,d} / (n h^d)) \sum_{i=1}^{n} k(||(x - x_i)/h||^2)   (3)
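As a concrete illustration of the density estimate in (3), the following is a minimal NumPy sketch of a 1-D Epanechnikov kernel density estimate; the sample points and bandwidth are illustrative values, not data from the paper.

```python
import numpy as np

def epanechnikov_profile(z):
    """Kernel profile k(z) = 1 - z on [0, 1], zero elsewhere."""
    return np.where((z >= 0) & (z <= 1), 1.0 - z, 0.0)

def kde(x, samples, h):
    """Kernel density estimate of formula (3), 1-D case (d = 1).

    c_{k,1} = 3/4 normalizes the 1-D Epanechnikov kernel."""
    z = ((x - samples) / h) ** 2
    return (3.0 / 4.0) / (len(samples) * h) * epanechnikov_profile(z).sum()

# toy data: points clustered around 0
samples = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
print(kde(0.0, samples, h=1.0))   # highest density near the cluster centre
print(kde(5.0, samples, h=1.0))   # far from all samples -> 0.0
```

The profile's compact support is what makes the later Mean Shift iterations cheap: points outside the kernel window contribute nothing.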
Other common kernels are the Epanechnikov kernel and the Gaussian (normal)
kernel. We assume that the first derivative of the kernel profile, k'(x), exists for
x \in [0, \infty), and define a new profile as

g(x) = -k'(x)   (4)
G(x) = c_{g,d} g(||x||^2)   (5)

The target model is the kernel-weighted histogram

q_u = C \sum_{i=1}^{n_q} k(||x_i||^2) \delta[b(x_i) - u]

and the new location estimate is the weighted mean obtained from the profile g,

y_1 = \sum_{i=1}^{n_h} x_i w_i g(||(y_0 - x_i)/h||^2) / \sum_{i=1}^{n_h} w_i g(||(y_0 - x_i)/h||^2)
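The kernel-weighted histogram q_u above can be sketched in a few lines of NumPy; the bin indices b(x_i) and the normalized pixel coordinates below are toy values, not data from the paper.

```python
import numpy as np

def target_model(pixels, bins, coords):
    """Kernel-weighted histogram q_u: each pixel votes into its bin b(x_i),
    weighted by the Epanechnikov profile k(||x_i||^2) of its normalized
    position, then normalized so that sum_u q_u = 1."""
    r2 = np.sum(coords ** 2, axis=1)     # ||x_i||^2, coords already in [-1, 1]
    w = np.clip(1.0 - r2, 0.0, None)     # Epanechnikov profile weights
    q = np.bincount(pixels, weights=w, minlength=bins)
    return q / q.sum()

# toy example: 4 pixels, 3 histogram bins, normalized coordinates
pixels = np.array([0, 1, 1, 2])          # b(x_i): bin index of each pixel
coords = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [1.0, 0.0]])
q = target_model(pixels, 3, coords)
print(q)   # pixels near the centre get full weight; the edge pixel (bin 2) gets 0
```

The spatial weighting is what distinguishes this model from a plain color histogram: pixels at the target border, which are least reliable, contribute least.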
where

w_i = \sum_{u=1}^{m} \sqrt{q_u / p_u(y_0)} \delta[b(x_i) - u]   (11)
If only the color space is used as the feature space, disturbances will severely impact
the tracking result. A multi-feature description can reduce this effect, at the cost of
increased computational complexity, so we propose the cascade kernel function to
solve this problem.
Two Epanechnikov kernel functions are used here. The first one is a 1-dimensional
function, used as the weighting factor of the feature histogram:
K_1(x) = (3 / (4 h^3)) (h^2 - x^2)  for |x| < h;  K_1(x) = 0 otherwise   (12)
The second one, which is a 2-dimensional kernel function, is used as the description
of the color space:
K_2(x) = (2 / (\pi h^4)) (h^2 - x^T x)  for x^T x < h^2;  K_2(x) = 0 otherwise   (13)
where

C_h = 1 / \sum_{i=1}^{n_h} k_2(||(y - x_i)/h||^2)
From the above formula, we know that the bigger \sigma is, the more points are
sampled. The scale-mean satisfies the equilibrium of the sum of the pixel scales on
both sides of the scale-mean:
\int_0^{2\pi} \int_0^{\bar{\sigma} r(\theta)} \rho \, d\rho \, d\theta
  = \int_0^{2\pi} \int_{\bar{\sigma} r(\theta)}^{r(\theta)} \rho \, d\rho \, d\theta   (17)
Let \Omega = (\sigma, \theta, x, y) denote the joint space. The density estimator is
given by:

f(\Omega) = (1/n) \sum_{i=1}^{n} K(\Omega - \Omega_i)   (20)

Since the orientation and scale are independent of the centroid, the 4-dimensional
kernel can be written as a product of three different kernels:

K(x, y, \sigma, \theta) = K(x, y) K_E(\sigma) K_E(\theta)   (21)
where k_E(z) = 1 - z for |z| < 1, and k_E(z) = 0 otherwise.
So the 4-D Mean Shift vector is:

\hat{\Omega} = \sum_i K(\Omega_i - \Omega) w(x_i) \Omega_i
             / \sum_i K(\Omega_i - \Omega) w(x_i)   (22)
which updates the target state: the location, orientation, and scale information are all
included and updated through formula (22).
A similarity function is used to describe the similarity between the target model
and the target candidate. The Bhattacharyya coefficient is the most common one,
proved by Comaniciu in [8] to be better than the alternatives.
\rho(\Omega) = \rho[p(\Omega), q] = \sum_{u=1}^{m} \sqrt{p_u(\Omega) q_u}   (23)
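The Bhattacharyya coefficient of (23) is straightforward to compute for two normalized histograms; a small sketch with toy histograms:

```python
import numpy as np

def bhattacharyya(p, q):
    """rho(p, q) = sum_u sqrt(p_u * q_u) for two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint support."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))

p = [0.5, 0.5, 0.0]
q = [0.5, 0.5, 0.0]
print(bhattacharyya(p, q))                 # identical histograms -> 1.0
print(bhattacharyya(p, [0.0, 0.0, 1.0]))   # disjoint histograms  -> 0.0
```

Because the coefficient is bounded in [0, 1], it gives the tracker a normalized, scale-free measure of how well a candidate window matches the target model.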
(1) Calculate the initial target model q_u according to (9), and calculate the target
candidate p_u(\Omega_0) at the current frame according to (8);
(2) According to (24), calculate the similarity function \rho_0(\Omega_0) between
the target model and the target candidate, where w_i is given by (25);
(3) According to (22), obtain the four-dimensional vector \Omega_1 of the new state,
including the position information x_1, the scale factor \sigma, and the orientation
factor \theta;
(4) If ||x_1 - x_0|| < \epsilon, |\sigma - \sigma_0| < \epsilon_\sigma, and
|\theta - \theta_0| < \epsilon_\theta, the localization for this frame is done; go to the
next step. Otherwise, assign the value of x_1 to x_0, the value of \sigma to \sigma_0,
and the value of \theta to \theta_0, and jump back to (1);
(5) Read the next frame and repeat the above localization process until the end of
tracking.
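The iterate-until-convergence structure of steps (1)-(4) can be sketched in a much reduced form: 1-D, location only (no scale or orientation), and with uniform weights in place of the w_i of (25).

```python
import numpy as np

def mean_shift_mode(samples, x0, h, eps=1e-4, max_iter=100):
    """Iterate the mean-shift update until the shift falls below eps,
    mirroring steps (1)-(4): compute weights, move to the weighted mean,
    test convergence."""
    x = x0
    for _ in range(max_iter):
        z = ((samples - x) / h) ** 2
        g = np.where(z <= 1.0, 1.0, 0.0)   # g = -k' for the Epanechnikov profile
        if g.sum() == 0:
            break                          # no samples inside the window
        x_new = np.sum(g * samples) / g.sum()
        if abs(x_new - x) < eps:           # the convergence test of step (4)
            return x_new
        x = x_new
    return x

samples = np.array([0.9, 1.0, 1.1, 5.0])   # cluster near 1.0 plus an outlier
print(mean_shift_mode(samples, x0=0.5, h=1.0))   # converges to the cluster mean 1.0
```

The full tracker replaces this 1-D mean with the 4-D update of (22), but the control flow, shift then test against a threshold, is the same.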
6 Conclusion
Since the basic Mean Shift method cannot solve the problem of target scale and
orientation, we propose, based on research into Mean Shift tracking theory, adding a
scale factor and an orientation factor to the Mean Shift space, transforming the
original two-dimensional Mean Shift space into a four-dimensional one. Meanwhile,
a multi-kernel Mean Shift theory is brought forward to ensure tracking accuracy,
describing the Mean Shift model by cascading the two kernel functions. Experimental
results show that this algorithm adapts the target window well when the target zooms
in, zooms out, or rotates. At the same time, the proposed multi-kernel Mean Shift
algorithm improves both accuracy and real-time performance.
References
1. Comaniciu, D., Meer, P.: Kernel-based object tracking. IEEE Transactions on Pattern
Analysis and Machine Intelligence 25(3), 564–575 (2003)
2. Comaniciu, D., Ramesh, V., Meer, P.: The variable bandwidth Mean Shift and data-driven
scale selection. In: Proc. 8th Intl. Conf. on Computer Vision, Vancouver, Canada (2001)
3. Collins, R.T.: Mean Shift blob tracking through scale space. In: Proceedings of IEEE
Conference on Computer Vision and Pattern Recognition, pp. 234–240. IEEE, Madison,
Wisconsin (2003)
4. Yilmaz, A.: Object tracking by asymmetric kernel mean shift with automatic scale and
orientation selection. In: Proceedings of IEEE Conference on Computer Vision and Pattern
Recognition, pp. 1–6. IEEE, Minneapolis (2007)
5. Collins, R.T.: Mean-shift blob tracking through scale space. In: Proc. IEEE Conference on
Computer Vision and Pattern Recognition, pp. 234–240. IEEE Press (2003)
6. Yang, C., Duraiswami, R., Davis, L.: Efficient mean-shift via a new similarity measure. In:
Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 176–183. IEEE
Press (2005)
7. Hager, G.D., Dewan, M., Stewart, C.V.: Multiple kernel tracking with SSD. In: Proc. IEEE
Conference on Computer Vision and Pattern Recognition, pp. 790–797. IEEE Press (2004)
8. Comaniciu, D., Ramesh, V., Meer, P.: Kernel based object tracking. IEEE Transactions on
Pattern Analysis and Machine Intelligence 25(2), 564–577 (2003)
9. Moeslund, T., Granum, E.: A Survey of Computer Vision-Based Human Motion Capture.
Computer Vision and Image Understanding 81(3), 231–268 (2001)
10. Cheng, Y.: Mean Shift mode seeking and clustering. IEEE Transactions on Pattern
Analysis and Machine Intelligence 17(8), 790–799 (1995)
A Novel Backstepping Controller Based Acceleration
Feedback with Friction Observer for Flight Simulator
Abstract. Friction torque is the main factor influencing the dynamic response
performance of high-precision servo systems. To compensate for the friction
torque, a compound control strategy based on backstepping and acceleration
feedback with a friction observer is proposed. In this control strategy, a
backstepping controller with an integral element is used for the position loop,
and an acceleration feedback controller with a friction observer is introduced to
compensate for the friction torque. The simulation results show that dynamic
friction torque is inhibited more effectively, and the robustness of the system to
exterior disturbance is improved simultaneously.
1 Introduction
Research on nonlinear friction attracts much attention in the field of very low
velocity servo systems, such as flight simulators, because of its ubiquity in realistic
applications. In the first place, nonlinearities and uncertainties in the flight simulator,
such as friction moment, motor moment fluctuation, lopsided moment, and system
parameter change, often deteriorate the performance and robustness of the system.
Moreover, being highly nonlinear, the friction phenomenon causes steady-state
tracking errors, limit cycles, undesired stick-slip motion, low-speed shaking, and
other types of poor performance [1,2]. Therefore, to achieve high system
performance, an appropriate control method should be designed. At present, two
main methods are usually employed: the approach based on friction model
compensation [3,4] and the non-model compensation approach [5,6].
With the development of sensor technology and the successful application of
acceleration feedback controllers in some systems [7,8], acceleration feedback has
gradually attracted people's attention in the field of high-precision servo control.
Acceleration feedback is a robust control method based on state feedback, and it
improves the stiffness of the control system without broadening the bandwidth of the position or
180 Y. Ren et al.
where \theta is the angular position of the actual system, J is the inertia of the system,
and B is the damping; u represents the control variable, T_f stands for the friction
torque, and d is the external disturbance. Equation (1) can be written as the
state-space equation:

\dot{\theta} = \omega
\dot{\omega} = (1/J)(u - T_f) - (B/J)\omega + d   (2)

where \omega is the angular velocity of the actual system.
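For a quick numerical feel of the plant model (2), a minimal explicit-Euler simulation; the constant input and the step size are illustrative assumptions, not values from the paper.

```python
# plant parameters from the simulation section: B = 25/133, J = 1/133
B, J = 25.0 / 133.0, 1.0 / 133.0

def step(theta, omega, u, Tf, d, dt=1e-4):
    """One explicit-Euler step of state equation (2):
    theta_dot = omega,  omega_dot = (u - Tf)/J - (B/J)*omega + d."""
    omega_dot = (u - Tf) / J - (B / J) * omega + d
    return theta + dt * omega, omega + dt * omega_dot

# constant input, no friction or disturbance: omega settles at u/B
theta, omega = 0.0, 0.0
for _ in range(200000):                       # 20 s of simulated time
    theta, omega = step(theta, omega, u=1.0, Tf=0.0, d=0.0)
print(round(omega, 3))   # steady-state velocity u/B = 133/25 = 5.32
```

The mechanical time constant J/B = 0.04 s is short relative to the 20 s horizon, so the velocity has fully settled by the end of the run.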
Fig. 1. The control loop structure diagram of the servo flight simulator system
This paper adopts the backstepping controller with an integral element, whose design
steps are as follows:
Step 1: Define the error equation of the system:

e_1 = \theta_d - \theta   (3)

If e_2 = 0 then \dot{v}_1 \le 0; therefore, it is necessary to design the following step.
Step 2: Define a Lyapunov function:

v_2 = v_1 + (1/2) e_2^2   (10)
To make \dot{v}_2 \le 0, the backstepping control law is designed as:

u = B\omega + J[-(c_1 + c_2)e_2 - (\lambda_1 c_1^2 + 1)e_1 + \ddot{\theta}_d + c_1\lambda_1 e_1]
  = [B - J(c_1 + c_2)]\omega + J(c_1 + c_2)\dot{\theta}_d + J\ddot{\theta}_d
    - J(1 + \lambda_1 + c_1 c_2)e_1 - J\lambda_1 c_2 \int_0^t e_1(\tau)\,d\tau   (12)
The acceleration feedback signal, unlike traditional feedback signals, can increase the
dynamic stiffness of the whole system while the bandwidth of the position or velocity
loop is unchanged [12]. When the velocity is available, the acceleration signal can be
obtained from the formula below:

\ddot{\theta} = (u - T_f - B\dot{\theta}) / J   (14)
\hat{T}_c(t) = Z_f + K_f \dot{\theta}(t)   (15)

\dot{Z}_f = -K_f (u - \hat{T}_f - B\dot{\theta}(t)) \, \mathrm{sgn}(\dot{\theta}(t)) / J   (16)

with \hat{T}_f = \hat{T}_c \, \mathrm{sgn}(\dot{\theta}), and the observation error

e_c = T_c(t) - \hat{T}_c(t)   (17)

then

\dot{e}_c = \dot{T}_c(t) - \dot{\hat{T}}_c(t)
  = \dot{T}_c(t) + K_f (T_f - \hat{T}_f) \, \mathrm{sgn}(\dot{\theta}) / J
  = \dot{T}_c(t) + (K_f / J) e_c   (18)

Finally, the friction torque estimate is obtained as
\hat{T}_f = \hat{T}_c \, \mathrm{sgn}(\dot{\theta}).
Substituting \hat{T}_f into equation (14), it follows that

\ddot{\theta} = (u - \hat{T}_f - B\dot{\theta}) / J   (19)
To sum up, the control structure of the acceleration feedback is shown in the dashed
box of Fig. 1.
Letting U denote the lumped input term collecting u, the friction torque T_f, and the
damping B, the new state-space representation can be written as:

\dot{X} = [ 0, 1; -K_i/K_d, -(J + K_p)/K_d ] X + [0; 1] U   (21)
Let Q = [1, 0; 0, 1]; solving the Lyapunov equation A^T P + P A = -Q gives

P = [ ((K_p + J)^2 + K_i(K_i + K_d)) / (2 K_i (K_p + J)),   K_d / (2 K_i);
      K_d / (2 K_i),   K_d (K_i + K_d) / (2 K_i (K_p + J)) ]

To make the matrix P positive definite, K_i, K_p, and K_d must meet the following
condition:

[(K_p + J)^2 + K_i(K_i + K_d)] (K_i + K_d) > K_d (K_p + J)^2   (23)
Therefore, it follows from Lyapunov stability theory that the acceleration feedback
loop is asymptotically stable. Equation (23) implies that the acceleration feedback
system satisfies the stability condition when K_i, K_p, and K_d take appropriate
values.
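The Lyapunov argument above can be checked numerically. The sketch below solves A^T P + P A = -I in closed form for the 2x2 system matrix of (21) and tests positive definiteness, using the gain values quoted in the simulation section (J = 1/133, K_p = 66, K_i = 5, K_d = 0.5).

```python
import numpy as np

J, Kp, Ki, Kd = 1.0 / 133.0, 66.0, 5.0, 0.5

A = np.array([[0.0, 1.0],
              [-Ki / Kd, -(J + Kp) / Kd]])

# closed-form solution of A^T P + P A = -I for this companion structure
p12 = Kd / (2.0 * Ki)
p22 = Kd * (Ki + Kd) / (2.0 * Ki * (Kp + J))
p11 = ((Kp + J) ** 2 + Ki * (Ki + Kd)) / (2.0 * Ki * (Kp + J))
P = np.array([[p11, p12], [p12, p22]])

# verify the Lyapunov equation and the positive definiteness of P
assert np.allclose(A.T @ P + P @ A, -np.eye(2), atol=1e-9)
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))   # True: asymptotically stable
```

Both eigenvalues of P come out positive for these gains, confirming that the chosen K_i, K_p, K_d satisfy the stability condition.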
4 Simulation Results
Based on the above approach for the flight simulator, the parameters of the actual
plant, the friction model, and the control system are set as follows: B = 25/133,
J = 1/133, T_c(t) = 1, c_1 = c_2 = 350, \lambda_1 = 4.0, K_f = 28, K_p = 66,
K_i = 5, K_d = 0.5.
Further, let the reference input signal be a triangular signal whose amplitude is
0.01 degree and whose frequency is 0.025 Hz. In order to verify the robustness of the
system to external disturbances, a sinusoidal interference signal whose amplitude is
0.02 degree and whose frequency is 0.025 Hz is added to the system. Comparing the
traditional backstepping controller with the novel backstepping controller based on
acceleration feedback with an adaptive friction observer, the simulation results are
shown in Fig. 2 to Fig. 4.
From the above simulation results, we can see that the tracking error of the system
under the novel controller is evidently smaller than that under the traditional
backstepping controller. By using the novel backstepping controller based on
acceleration feedback with a friction observer, the low-speed instability of the system
is suppressed, the tracking accuracy of the flight simulator is improved, and the
dynamic friction torque and the perturbing torque are compensated effectively.
Fig. 2. Position tracking error for the flight simulator: (a) with the traditional
backstepping controller; (b) with the novel backstepping controller
Fig. 3. Position tracking response for the flight simulator: (a) position output with the
traditional backstepping controller; (b) position output with the novel backstepping controller
Fig. 4. Velocity tracking error for the flight simulator: (a) with the traditional
backstepping controller; (b) with the novel backstepping controller
sat(\dot{\theta}) = \mathrm{sgn}(\dot{\theta}) for |\dot{\theta}| > \Delta, and
sat(\dot{\theta}) = k\dot{\theta} for |\dot{\theta}| \le \Delta, with k = 1/\Delta,

where \Delta is the linear range. The simulation result of the observer output is
shown in Fig. 5.
Fig. 5. Observer output: reference friction vs. real friction
Fig. 5 shows that an effective observer can be achieved as long as K_f takes an
appropriate value. Simulation results show that the novel backstepping controller
based on acceleration feedback with the friction observer is more effective in
compensating the dynamic friction torque, and the influence of low-speed shaking on
the system is inhibited, so the performance of the flight simulator is improved
remarkably.
5 Conclusions
The flight simulator is a kind of servo system with uncertainties and disturbances
(such as nonlinear friction) that worsen its performance, especially when a
low-frequency, small-gain signal is input to the system. To obtain high performance
and good robustness for the flight simulator, a novel backstepping controller based on
acceleration feedback with a friction observer has been presented. The adaptive
friction compensation based on the Coulomb model can overcome the effect of
system friction. Based on the Lyapunov stability theorem, the novel backstepping
controller keeps the system globally asymptotically stable. Simulation results indicate
that the compound controller is capable of giving excellent position tracking and
velocity tracking for the flight simulator, and the effect of friction on the system is
overcome effectively.
References
1. Lischinsky, P., Canudas de Wit, C., Morel, G.: Friction compensation for an industrial
hydraulic robot. IEEE Control Systems Magazine 19, 25–30 (1999)
2. Zhu, Y., Pagilla, P.R.: Static and dynamic friction compensation in trajectory tracking
control of robots. In: Proceedings of the 2002 IEEE International Conference on Robotics &
Automation, pp. 2644–2649. IEEE Press, Washington (2002)
3. Noorbakhash, S.M., Yazdizadeh, A.: A new approach for Lyapunov based adaptive friction
compensation. In: IEEE Control Applications (CCA) & Intelligent Control (ISIC), pp. 66–70.
IEEE Press, Russia (2009)
4. Liu, G.: Decomposition-based friction compensation of mechanical systems.
Mechatronics 12, 755–769 (2002)
5. Morel, G., Iagnemma, K., Dubowsky, S.: The precise control of manipulators with high
joint-friction using base force/torque sensing. Automatica 36, 931–941 (2000)
6. Yuan, T., Zhang, R.: Design of guidance law for exoatmospheric interceptor during its
terminal course. Journal of Astronautics 30, 474–480 (2009)
7. Shen, D., Liu, Z., Liu, S.: Friction compensation based acceleration feedback control for
flight simulator. Advanced Materials Research 8, 1702–1707 (2010)
8. Nima Mahmoodi, S., Craft, M.J., Southward, S.C., Ahmadian, M.: Active vibration control
using optimized modified acceleration feedback with Adaptive Line Enhancer for frequency
tracking. Journal of Sound and Vibration 330, 1300–1311 (2011)
9. He, Y.Q., Han, J.D.: Acceleration feedback enhanced robust control of an unmanned
helicopter. Journal of Guidance, Control and Dynamics 33, 1236–1250 (2010)
10. Bousserhane, I.K., Hazzab, A., Rahli, M., Mazari, B., Kamli, M.: Mover position control of
linear induction motor drive using adaptive backstepping controller with integral action.
Tamkang Journal of Science and Engineering 12, 17–28 (2009)
11. Sanchez, E.N., Alanis, A.Y., Loukianov, A.G.: Real-time discrete backstepping neural
control for induction motors. IEEE Transactions on Control Systems Technology 19,
359–366 (2011)
12. Wang, Z.: Friction compensation for high precision mechanical bearing turntable. PhD
thesis, Harbin Institute of Technology (2007)
13. Mentzelopoulou, S., Friedland, B.: Experimental evaluation of friction estimation and
compensation techniques. American Control Conference 29, 3132–3136 (1994)
The Optimization Space Design on Natural Ventilation in
Hunan Rural Houses Based on CFD Simulation
1 Introduction
Recently, natural ventilation research has focused on thermal comfort under
ventilation conditions [1-3] and on the optimization of residential communities.
The studies on residential communities also concentrate on planning and on the
interaction between buildings, but research on rural houses, especially on the effects
of space design on natural ventilation, is rare. In rural houses, air conditioning is an
expensive technology, and natural ventilation is an important passive technology for
improving indoor thermal comfort. Therefore, the optimization of natural ventilation
in Hunan rural houses is worth studying, and the results can guide rural house design
in Hunan in order to enhance the natural ventilation and reduce the energy
consumption in summer.
2 Methodology
From an investigation of rural houses in Yueyang, Huarong, and Pingjiang counties
of Hunan, it was found that the space designs are similar and can be summarized into
several models. The building dual-graph method [7], which took shape in the 1960s,
is used in this article. First, the different spaces are named: bedroom is 1; hall is 2;
toilet is 3; stair hall is 4; corridor is 5; the southern outside is S; the northern outside
is N; the western outside is W; and the eastern outside is E. Then, a solid line is used
to show a connection between indoor and outdoor spaces through doors and windows; a broken
190 M. Xie et al.
line is used to show a connection between indoor and outdoor spaces through walls.
In this way, the dual graphs of the different houses are obtained. Graphs with the
same topological relation can be combined, and in the end four dual graphs with
different topological relations are obtained, as shown in Fig. 1. Based on the
suggestion of March and Steadman [7] and the average dimensions of rural houses,
the topological graphs can be restored to floor plans. These plans are then the
summarized models of the existing space designs in rural houses, shown in Fig. 2.
In discussing the effect of space design on natural ventilation, the main point is the
placement of the accessory spaces within the whole building, not the ventilation
effect of these spaces themselves. Because of their low frequency of use and their
closed connecting doors, the summarized models can be simplified for the
simulation: by default, the doors of the accessory spaces are closed and there is no
infiltration. The simulation models, shown in Fig. 3, are obtained by simplifying
Fig. 2. The dimensions of each space are average values from the investigation. The
toilet and stair hall of each model are deleted, except in Model 3, because one of the
bedroom windows faces the stair hall. To account for the effect of the stair on
ventilation, the stair is simplified as a wall in the stair hall of Model 3.
In Models 1 and 2, the building depth is the standard one; the simulation ranges in
the incoming-flow and downstream directions are both 6 times the depth, and the
height range is 5 times the building height. This simulation range is large enough for
uniform inflow and for full interaction between the flow and the building. The
simulation domain is therefore 156 m x 60 m, using the 2-D standard k-ε turbulence
model with an orthogonal grid of 312 x 120.
In Model 3, the default setting is the same as in Models 1 and 2, except that the
simulation height is 4 times the building height; the simulation domain is then
195 m x 60 m, with an orthogonal grid of 390 x 120. In Model 4, the default setting
is the same as in Models 1 and 2, except that the ranges in the incoming-flow and
downstream directions are both 5 times the building depth; the simulation domain is
then 198 m x 60 m, with an orthogonal grid of
396 x 120.
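The domain and grid sizes quoted above follow directly from the stated multiples of the building depth and height with 0.5 m cells; a small helper sketch (the 12 m depth and height are back-calculated assumptions that reproduce the 156 m x 60 m domain, not figures given in the paper):

```python
def domain_and_grid(depth_m, height_m, up=6, down=6, h_mult=5, cell=0.5):
    """Simulation domain and orthogonal grid for the 2-D cases described
    above: 'up' building depths upstream, 'down' downstream, plus the
    building itself, and a domain height of h_mult building heights."""
    length = (up + 1 + down) * depth_m
    height = h_mult * height_m
    nx, ny = int(length / cell), int(height / cell)
    return length, height, nx, ny

# a 12 m deep, 12 m tall house reproduces the Model 1/2 domain and grid
print(domain_and_grid(12.0, 12.0))   # (156.0, 60.0, 312, 120)
```

The same helper with 4 height multiples or 5 depth multiples reproduces the Model 3 and Model 4 domains described in the text.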
4 Conclusions
From the analysis, the best space design can be chosen, which is Model 2. The best
space-design model and the corresponding building dual graph for Hunan rural
houses are summarized in Fig. 8. In later rural house design in Hunan, this model can
be popularized among farmers for a better natural ventilation effect and for energy
saving. The detailed dimensions of each house can be adjusted based on the real
situation and the best building dual graph.
Fig. 8. The best model and the corresponding building dual graph
References
1. Zhang, G., Yang, L., Zhou, J., et al.: Development and Application of Natural Ventilation
Potential Evaluation System. Journal of Hunan University (Natural Sciences) 33(1), 25–28
(2006)
2. Yin, W., Zhang, G., Xu, F.: Preliminary Exploration of the Universal Design Process of
Natural Ventilation. Architectural Journal (5), 77–80 (2009)
3. Wang, Y., Liu, J., Xiao, Y.: Study on the effective hours of natural ventilation under the
regional climatic condition. Journal of Xi'an University of Architecture &
Technology 39(40), 541–546 (2007)
4. Nyuk, N.H., Wong, H., Huang, B.: Comparative study of the indoor air quality of naturally
ventilated and air-conditioned bedrooms of residential buildings in Singapore. Building and
Environment 39(9), 1115–1123 (2004)
5. Hummelgaard, J., Juhl, P., Sabjornsson, K.O., et al.: Indoor air quality and occupant
satisfaction in five mechanically and four naturally ventilated open-plan office buildings.
Building and Environment 42(12), 4051–4058 (2007)
6. Han, J.: Thermal Comfort Model of Natural Ventilation Environment and Its Application in
the Yangtze Climate Zone. Doctoral thesis, pp. 1–103. Hunan University, Hunan (2009)
7. Liu, X.: Theories of Modern Architecture, vol. 1, pp. 87536. China Building Industry
Press, Beijing (1999)
Appendix
Project supported by NSFC (51108469) and Hunan Provincial Natural Science
Foundation of China (11JJ5032).
Optimal Simulation Analysis of Daylighting Design in
New Guangzhou Railway Station
Abstract. This article uses the Ecotect software to simulate and analyze the
interior illumination of the new Guangzhou railway station and the lighting
energy-saving rate under natural lighting. The results show that the daylighting
design of the new Guangzhou railway station is satisfactory, and that for energy
efficiency the artificial lighting system should be divided into different regions
according to the illumination distribution.
1 Introduction
The total construction area of the new Guangzhou railway station is about
560,537 m², including 247,517 m² of above-ground construction area and
117,466 m² of underground construction area. The contour area of the awning that
does not cover the platform is about 195,554 m², and the passenger information
rooms take up 212,732 m². The depth of the station is 398 m, not including the
elevated driveways beside the station, and the width is 335 m.
The first floor of the station is the outbound area; the second floor is the platform;
the third floor is the elevated waiting area; the underground floor includes the metro
station, the equipment rooms, etc.
Because the structure is very large, the energy consumption of artificial lighting is
relatively high, so an analysis is needed at the beginning of the design in order to
optimize the natural lighting design. In that case, we can obtain a better indoor light
environment and save energy as well.
In this article, we use the Ecotect software to simulate and analyze the new railway
station's interior illumination and the lighting energy-saving rate under natural
lighting.
2 Methodology
2.1 Analysis Method
The main content of the natural lighting research on the new Guangzhou railway
station includes the interior illumination of both the elevated waiting area and the
198 L. Shi et al.
outbound area, and the amount of power consumption that can be saved over a whole
year.
The specific analysis method is as follows:
(1) Calculate the interior illumination at the summer solstice and the winter
solstice, assuming that the whole day is cloudy with no direct light. The cloudy-day
illumination at the winter solstice corresponds to the worst lighting results of the
whole year;
(2) Simulate the interior environment with direct sunlight at 2:00 pm on the
summer solstice, and calculate the illumination;
(3) Use the Ecotect software to calculate the whole year's lighting satisfaction rate
in both the elevated waiting area and the outbound area (the proportion of the time
when the interior illumination is above 75 lx), and analyze how much lighting energy
can be saved;
(4) The interior lighting power density is set to 7 W/m² for the energy-saving
study. Interior lighting runs from 5:00 am to 1:00 am of the next day (about
20 hours), and natural lighting is counted from 8:00 am to 5:00 pm (9 hours in total);
(5) The whole-year lighting satisfaction rate in different parts of the waiting room
is represented by DA (unit: percent). DA is defined as the proportion of the
accumulated hours during which the interior natural illumination is higher than the
allowable value (75 lx) within the natural lighting time (8:00 am to 5:00 pm), with
the base of the proportion confined to the natural lighting
hours (9*365=3285 hours) of the whole year.
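The DA definition in step (5) reduces to a thresholded average over the 3285 annual daylight hours; a minimal sketch with a synthetic illuminance profile (the toy profile is illustrative, not simulation output):

```python
import numpy as np

def daylight_autonomy(illuminance_lx, threshold_lx=75.0):
    """DA: the fraction of the 9 h x 365 d daylight hours (8:00-17:00) in
    which the natural illuminance at a point exceeds the 75 lx threshold."""
    E = np.asarray(illuminance_lx)
    assert E.size == 9 * 365          # one value per daylight hour of the year
    return float((E > threshold_lx).mean())

# toy profile: bright for 6 of every 9 daylight hours, all year round
day = [300.0] * 6 + [40.0] * 3
E = np.array(day * 365)
print(daylight_autonomy(E))   # 6/9 of the hours exceed 75 lx -> 0.666...
```

Multiplying (1 - DA) by the lit floor area, the 7 W/m² power density, and the overlapping operating hours then gives the residual artificial-lighting demand during daylight time.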
The lighting model used in the simulation is presented in the following figures
(Fig. 1 to Fig. 4). The model is constructed in accordance with the simplified
architectural design, and its materials are chosen from the scope offered by the
design. The calculated area includes the elevated waiting area and the outbound area.
When the station uses natural lighting, the interior illumination changes constantly
with the exterior illumination, so the exterior illumination must be taken into account
when determining the interior illumination of the natural lighting. Normally,
adopting the daylighting coefficient as the index is one of the main methods of
evaluating natural lighting.
Fig. 1. The complete model of the architecture Fig. 2. The model of the waiting area
Fig. 3. The model of the platform Fig. 4. The model of the computational area
The design standard of lighting in architecture (GB/T 50033) is the current norm to
follow in lighting design, and it contains regulations on the interior lighting index for
natural lighting. To ensure the quality of the interior lighting, this project requires
that both the waiting area and the outbound area meet the standard for general
operating accuracy; that is to say, the interior illumination under natural lighting
should reach 75 lx.
The main target of the lighting simulation analysis is to find the lower limit of the
optical properties of the glass that satisfies the interior lighting requirement under the
worst exterior conditions. Therefore, in the simulation research, the sky background
illumination is set to the CIE overcast-sky model, and the meteorological data used
in the calculation are taken from the website of the Lawrence Berkeley National
Laboratory.
According to the attributes of the transparent materials, the settings of each
transparent component are given in Table 1.
Table 1. The visible light transmissivity of the transparent exterior envelope
3 Results
The natural lighting of the elevated waiting area is achieved through the transparent
materials at the middle ridge, the skylights on the roof, the side skylights, and the
glass curtain walls in the four directions (as shown in Fig. 5). Due to the platform
awning, hardly any direct light comes through the glass curtain walls on the north
and south sides; as for the east and west glass curtain walls, part of the direct light is
also shielded by the overhanging roof.
The illumination of the natural lighting in the elevated waiting area is shown in
Table 2 and in Fig. 6 to Fig. 11.
Table 2. The data of the natural lighting illumination in the elevated waiting area

Summer solstice (cloudy): 8:00 - 387 lx; 14:00 - 483 lx; 17:00 - 169 lx.
Winter solstice (cloudy): 8:00 - 214 lx; 14:00 - 303 lx; 17:00 - no exterior light
resource.
Illustration: (1) The four sides of the waiting area are composed of glass curtain
walls, so the illumination in the perimeter zone is above normal; even at the winter
solstice it is basically beyond 200 lx (the allowable value for interior artificial
lighting). (2) The illumination in the central hall of the waiting area is lower than in
the perimeter zone, but even at the winter solstice, at 2:00 pm, it is basically beyond
75 lx.
Fig. 6. Interior illumination at 8:00 am on the summer solstice (cloudy day)
Fig. 7. Interior illumination at 2:00 pm on the summer solstice (cloudy day)
Fig. 8. Interior illumination at 5:00 pm on the summer solstice (cloudy day)
Fig. 9. Interior illumination at 8:00 am on the winter solstice (cloudy day)
Fig. 10. Interior illumination at 2:00 pm on the winter solstice (cloudy day)
Fig. 11. Interior illumination at 5:00 pm on the winter solstice (cloudy day)
According to the national standards for interior illumination and the Ecotect
analysis results obtained under natural light alone, we can calculate the
energy-saving rate in the elevated waiting area, as shown in Table 3.
Table 3. Energy-saving data for interior lighting in the elevated waiting area
There are three main lighting components in the outbound area, as shown in
Fig. 12:
Component 1: the glass curtain walls on the east and west sides of the
entrance hall.
Component 2: the oblong glass strip at the top of the outbound area, so that
the areas near its north and south ends gain sunlight from the awning.
Component 3: the skylight at the top of the outbound area, which gains sunlight
from the ridge in the middle through the holes in the waiting area.
Optimal Simulation Analysis of Daylighting Design in New Guangzhou Railway Station 203
The illumination from natural lighting is shown in Table 4 and Figs. 13 to 18.
Table 4. Natural-lighting illumination data in the outbound area

  Time                                 Average illumination (lx)
  Summer solstice (cloudy)    8:00     109
                             14:00     135.6
                             17:00     49.4
  Winter solstice (cloudy)    8:00     61.4
                             14:00     112
                             17:00     No exterior light resources

Notes: (1) The three types of lighting components in the outbound area let the
entrance halls on the east and west sides gain more sunlight, and they
significantly improve the illumination near the east and west glass curtain
walls. (2) The areas close to the glass curtain walls and just below the oblong
glass skylight have a better natural-lighting effect. (3) The four holes in the
waiting area have limited effect on the natural lighting of the outbound area.
Fig. 13. Interior illumination at 8:00 am on the summer solstice (cloudy)
Fig. 14. Interior illumination at 2:00 pm on the summer solstice (cloudy)
Fig. 15. Interior illumination at 5:00 pm on the summer solstice (cloudy)
Fig. 16. Interior illumination at 8:00 am on the winter solstice (cloudy)
Fig. 17. Interior illumination at 2:00 pm on the winter solstice (cloudy)
Fig. 18. Interior illumination at 5:00 pm on the winter solstice (cloudy)
According to the national standards for interior illumination and the Ecotect
analysis results obtained under natural light alone, we can calculate the
energy-saving rate in the outbound area, as shown in Table 5.
Table 5. Energy-saving data for interior lighting in the outbound area
Standard illumination: 75 lx
Watt density of lighting: 7 W/m2
Operating hours per year: 20 h × 365 days = 7300 h
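The quantities in Table 5 combine simply: baseline lighting energy is the watt density times the floor area times the annual operating hours, and a saving accrues for the fraction of those hours in which daylight alone meets the 75 lx standard. A minimal sketch of this arithmetic (the floor area and daylit-hours fraction below are illustrative assumptions, not values from the paper):

```python
def lighting_energy_saving(area_m2, watt_density=7.0, hours_per_year=7300,
                           daylit_fraction=0.0):
    """Baseline lighting energy (kWh/year) and the saving achieved when
    lamps are switched off for the fraction of hours in which daylight
    alone meets the illumination standard."""
    baseline_kwh = watt_density * area_m2 * hours_per_year / 1000.0
    saved_kwh = baseline_kwh * daylit_fraction
    return baseline_kwh, saved_kwh

# Illustrative only: a 10,000 m2 hall where daylight suffices 35.6% of the time.
baseline, saved = lighting_energy_saving(10_000, daylit_fraction=0.356)
print(f"baseline {baseline:.0f} kWh/year, saved {saved:.0f} kWh/year")
```

The actual saving fractions reported in the conclusions (35.6% for the waiting area, 22% for the outbound area) come from the Ecotect daylight results rather than a fixed assumption.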
4 Conclusions
According to the simulation analysis results for lighting, we can draw the
following conclusions:
(1) The waiting area obtains natural light mainly through the surrounding glass
curtain walls, while the central hall obtains it through the roof (ETFE), so
the waiting area gains much more natural light. Even at the winter solstice it
obtains a high interior illumination (over 200 lx), and the illumination in the
perimeter zone is higher than in the central hall.
(2) The outbound area obtains natural light mainly through the glass curtain
walls of the entrance halls on the east and west sides and through the oblong
skylights near the glass curtain walls on the north and south sides. The holes
in the waiting area contribute little to its lighting, and the illumination
everywhere in the surrounding area exceeds 75 lx, except in the central hall.
(3) If the lighting luminaires can be switched on and off automatically
according to the interior illumination, natural lighting allows the waiting
area to save 35.6% of its lighting energy consumption (1,042,000 kWh) and the
outbound area to save 22% of its lighting energy consumption (899,000 kWh).
(4) The illumination in the main activity regions of both the waiting area and
the outbound area is not excessive, so serious glare problems are unlikely.
(5) We suggest dividing the lighting system into regions according to their
illumination so that it can be controlled in an energy-saving mode. The
specific zoning method is shown in Figs. 19 and 20. Each region should have an
illumination sensor to switch its luminaires on and off independently.
Fig. 19. Zone chart of the lighting control in the elevated waiting area
Fig. 20. Zone chart of the lighting control in the outbound area
References
1. Zain-Ahmed, A., Sopian, K., Othman, M.Y.H., Sayigh, A.A.M., Surendran, P.N.:
Daylighting as a passive solar design strategy in tropical buildings. A case Study of
Malaysia 43 (2007)
2. Hua, Y., Oswald, A., Yang, X.: Effectiveness of daylighting design and occupant visual
satisfaction in a LEED Gold laboratory building, vol. 54. Crown pud, New York (2008)
3. Aghemo, C., Pellegrino, A., LoVerso, V.R.M.: The approach to daylighting by scale models
and sun and sky simulators: A case study for different shading systems, vol. 20 (2008)
4. Altan, H., Ward, I., Mohelnikova, J., Vajkay, F.: An internal assessment of the thermal
comfort and daylighting conditions of a naturally ventilated building with an active glazed
facade in a temperate climate, vol. 9, p. 12. Oxford University, New York (2009)
5. Miguet, F., Groleau, D.: A daylight simulation tool for urban and architectural spaces:
application to transmitted direct and diffuse light through glazing. Building and
Environment, 833–843 (2002)
6. Xie, H.: The design strategy of skylight in public buildings. Guangdong University of
Technology
7. Geng, J.: Using the natural light fully-The important thoroughfare of the energy saving
lighting. The Light and Lighting (1), 11 (2003)
8. Liu, J.: Building physics, 3rd edn., vol. (5). China Building Industry Press, Beijing (2000)
9. Li, X.: The natural light and modern architecture design, Zhengzhou University (2006)
Research on Passive Low Carbon Design Strategy
of Highway Station in Hunan
natural daylighting, shading and natural ventilation. Based on these different
design strategies (daylighting, shading and natural ventilation), it is
proposed that passive LC technology strategies, such as an atrium or skylight
combined with natural ventilation driven by thermal pressure, should be used in
order to reduce the emissions of highway stations in the Hunan area.
1 Introduction
In recent years, with the large-scale development and construction of
high-grade highways and freeways, highway stations have developed rapidly and
become one of the main modes of transportation. Their scale and construction
speed are increasing accordingly.
Since 1995, more than ten highway stations serving the freeway network have
been built across Hunan province, such as the East, South, West and North
Stations in Changsha; the Huaxin and Shuangfeng Stations in Hengyang; the East
Station in Yiyang; the Central Station in Huaihua; the North Station in
Yueyang; the Central Station in Chenzhou; and the East Station in Liuyang.
These show that highway station design is making steady progress, becoming more
complete in function and more humanized in service, and related design research
continues to emerge. However, research on low-carbon (LC) building design,
especially passive technology aimed at energy conservation, is still weak, and
research addressing the climate characteristics of the hot-summer and
cold-winter zone urgently needs to be strengthened.
According to statistics from the U.S. Energy Information Administration (EIA),
passive technology can reduce building energy consumption by 47 percent
compared with a normal new building, and by 60 percent compared with a normal
old building. Passive technology applies to most large-scale buildings and all
small-scale buildings. Therefore, passive LC design for highway stations in
Hunan deserves attention.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 207214.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
For now, there are no statistics focused specifically on the energy consumption
of highway stations. Since highway stations are public buildings, statistics on
the energy consumption of public buildings can provide a degree of reference
for highway station design.
According to 2008 statistics, there were 5.3 billion square meters of public
buildings, 36 percent of the total urban floor area, with an energy consumption
per unit area, excluding heating, of about 90–200 kWh/(m²·year). According to
an analysis by Ding and others in 2009, energy consumption per unit area
excluding heating is about 95–96 kWh/(m²·year) in central China, including
Hunan, and 82–114 kWh/(m²·year) in the hot-summer and cold-winter zone, which
shares Hunan's climate characteristics. These statistics show that the energy
consumption per unit area of public buildings is much higher than that of
residential buildings. Although Hunan's consumption is nearly the minimum for
the hot-summer and cold-winter zone, more LC design measures should be taken to
curb the rising trend of energy consumption.
In terms of composition, lighting, office appliances, air conditioning and
other loads make up the energy consumption of public buildings. Lighting energy
consumption is about 5–25 kWh/(m²·year), office appliance energy consumption
about 5–25 kWh/(m²·year), and air-conditioning energy consumption about 5–25
kWh/(m²·year). Since a highway station has fewer office appliances than an
office building or shopping mall, lighting and air conditioning are the
predominant parts of its energy consumption.
It follows that LC design should reduce lighting and air-conditioning energy
consumption; the passive technical strategies that match include natural
lighting, natural ventilation, thermal insulation by the enclosure structure,
and so on. The specific choice also needs to consider the climate
characteristics of Hunan.
Hunan lies in the subtropical East Asia monsoon region. Owing to its
geographical characteristics, the area enjoys a subtropical humid monsoon
climate with obvious continental features. Its monsoon character lies primarily
in the opposite wind directions of summer and winter: a south monsoon in summer
and a north monsoon in winter. In terms of building climate, Hunan belongs to
the hot-summer and cold-winter zone. It is persistently humid, with an average
temperature of 17 °C, a maximum of 41 °C and a minimum of −3 °C, an annual
average humidity of 79%, annual average rainfall of 1302 mm, 1722.1–1816.5
hours of sunshine per year, and an annual solar irradiance of 458.4–462.1
kJ/cm².
These climatic conditions show that highway stations in Hunan need passive
technology to cope with an adverse environment of hot summers, cold winters and
humid weather. Natural ventilation is undoubtedly the best choice, for it can
reduce energy consumption and alleviate the humidity problem simultaneously.
Jiang Yi, writing on LC building design in the south, holds that solar
radiation influences air-conditioning power consumption most, so the key
approaches to energy saving are external shading and outside-surface
ventilation. A study by Dean Heerwagen likewise holds that in the subtropical
humid climate (including the hot-summer, cold-winter climate), natural
ventilation, shading and a lightweight envelope should be considered first in
passive architectural design strategy, with natural ventilation in first
place.
Passive technologies have different characteristics for different building
types. The power consumption for air conditioning depends mainly on the
internal heat of the building. As far as heat gain is concerned, there are two
models: "envelope dominant" and "internal heat-gain dominant". Residential and
small public buildings belong to the former; their internal heat is mainly
transmitted through the envelope, which receives solar radiation. Large public
buildings belong to the latter; their heat is mainly given off by occupants and
equipment. Envelope-dominant buildings should focus on improving the thermal
insulation performance of the envelope structure, while internal
heat-gain-dominant buildings need to focus on it less. Because people are the
main heat source and the building is highly crowded, a highway station is
internal heat-gain dominant; as a result, when choosing passive technologies,
the thermal insulation performance of the envelope is not the top priority.
Accordingly, passive LC design strategies for highway stations in Hunan should
focus on daylighting, shading and natural ventilation.
Although natural daylighting design is much more complicated than electric
lighting design, it brings more aesthetic quality to the interior and exterior
of a building. Compared with electric lighting, natural daylighting is
significant as an important part of passive design: it reduces environmental
impact by cutting power consumption, benefits health, improves work efficiency,
and so on.
Usually, in a multi-storey residential building, the area within 5 meters of a
window can be lit by natural light, the area between 5 and 10 meters can be
partly lit, and the area beyond 10 meters can never be lit. Graph 1 shows three
plans of exactly the same area. In the square plan, 16% of the area has no
natural lighting and a further 33% has only partial lighting; in the
rectangular plan, no area lacks daylight entirely, but a substantial part has
only partial lighting; in the plan with a central courtyard, every area is
adequately daylit. Thus building depth should be reduced as much as possible,
and where it exceeds 10 meters a courtyard should be provided. Given the
functional requirements of a highway station, whose depth is always large,
daylighting of the central region can only be achieved by a courtyard or
lighting atrium; at the same time, the side windows can be raised higher to
extend the reach of natural light.
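The 5 m / 10 m depth rule above can be written as a tiny hypothetical helper; the zone names below are ours, while the boundaries are the figures quoted in the text:

```python
def daylight_zone(distance_from_window_m):
    """Classify a point by the rule of thumb quoted above: within 5 m of
    a window it is fully daylit, between 5 and 10 m only partly daylit,
    and beyond 10 m not daylit at all."""
    if distance_from_window_m <= 5:
        return "full"
    if distance_from_window_m <= 10:
        return "partial"
    return "none"

# A 16 m deep rectangular plan lit from both long sides: the worst point
# sits 8 m from the nearest window, so every point gets at least partial light.
print(daylight_zone(8))  # partial
```

This is why the courtyard plan performs best: it bounds the distance of every point to its nearest daylight opening.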
Of course, skylights can also be used to improve indoor lighting in
single-storey buildings. The main kinds of skylights are illustrated in Graph
2. However, every kind of skylight shares one problem without exception: it
faces the sun much more in summer than in winter. Using a skylight with a
larger slope, facing south or north, makes the light more evenly distributed
and reduces the solar radiation transmitted through it.
Natural ventilation brings two benefits. First, the passive cooling it provides
reduces energy consumption; second, it supplies fresh air by removing humid and
dirty air. When the outdoor humidity is high, natural ventilation helps sweat
evaporate from the skin surface and therefore improves thermal comfort.
Wind pressure and thermal pressure constitute the essential driving forces of
natural ventilation, in the basic forms of wind-pressure ventilation and
thermal-pressure ventilation. When airflow acts on a building surface, a
pressure difference arises between the windward and leeward sides, and air then
flows through the building. Studies show, however, that as building depth
increases, wind-pressure ventilation becomes less and less effective. Since the
depth of a highway station is always large, this paper focuses on design
strategies for thermal-pressure ventilation.
So-called thermal-pressure ventilation is based on the principle of thermal
buoyancy: warmer air rises and cooler air falls, known as the chimney (stack)
effect. According to the fluid balance equation, a classic thermal-pressure
flow formula is:

    q_s = C_D * A * sqrt( 2 g h (T_i - T_0) / T_0 )    (1)

where q_s is the volumetric flow rate, C_D the discharge coefficient of the
openings, A the opening area, g the gravitational acceleration, h the height
between inlet and outlet, and T_i and T_0 the indoor and outdoor absolute
temperatures.
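Equation (1) is straightforward to evaluate. A minimal sketch in Python; the discharge coefficient, opening area, height and temperatures below are illustrative values, not taken from the paper:

```python
import math

def stack_flow(C_D, A_m2, h_m, T_i_K, T_0_K, g=9.81):
    """Volumetric flow rate (m^3/s) from Eq. (1): buoyancy-driven
    (chimney-effect) ventilation through openings of area A, with height
    h between inlet and outlet and indoor/outdoor absolute temperatures
    T_i and T_0."""
    return C_D * A_m2 * math.sqrt(2 * g * h_m * (T_i_K - T_0_K) / T_0_K)

# Illustrative values: a 10 m tall atrium, 2 m2 openings, discharge
# coefficient 0.6, indoor 303 K versus outdoor 293 K.
q = stack_flow(0.6, 2.0, 10.0, 303.0, 293.0)
print(f"{q:.2f} m^3/s")
```

The square-root dependence on both h and the temperature difference is what makes a tall atrium effective: doubling the stack height raises the flow by only about 41%, so generous height and opening area are both needed.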
Fig. 4. The thermal-pressure ventilation of the atrium in the Frankfurt
Commerzbank building
cavity inside the building, such as a staircase, atrium or well hole, to meet
the height requirement between inlet and outlet, with controllable openings set
at the top of the building to exhaust the hot air of each storey and so achieve
natural ventilation. Compared with wind-pressure natural ventilation,
thermal-pressure ventilation can adapt to the ever-changing external wind
environment.
As mentioned, the atrium and skylight are compatible with daylighting in
highway stations and exactly meet the basic conditions for thermal natural
ventilation, so they are a suitable passive technology for highway station
buildings. In addition, the atrium connects originally isolated floors into a
whole, which helps an overall ventilation strategy. In essence, thermal natural
ventilation through an atrium artificially increases the thermal-pressure
effect and thus improves the ventilation rate. Fig. 4 shows the Frankfurt
Commerzbank building designed by Norman Foster, which uses atrium-based thermal
natural ventilation to enhance natural ventilation in the building, with good
effect.
5 Conclusions
With people's living standards rising and the traffic network becoming more
complete day by day, the construction of highway stations will develop further,
with increasingly complete functions. At the same time, passengers demand more
indoor comfort in highway stations, which places higher requirements on their
passive LC design. Which passive technologies, with low cost and low energy
consumption, should be adopted to reduce the effect on the natural environment
and improve satisfaction with the indoor environment will be a key point of
future research. This article only puts forward corresponding design strategies
based on limited data analysis, and they still need to be tested in practice.
We hope this paper can serve to attract further valuable opinions.
References
1. Ling, Z., Wenfang, C.: Discussion on Historical Development of the Design of Passenger
Station. Chinese and Overseas Architecture (1), 34–35 (2005)
2. Tan, F., Zhu, T., Huo, J.: Layout Method of Modern Automobile Passenger Transportation
Station. Huazhong Architecture 24(3), 84–87 (2006)
3. Xu, L., Li, B.: The design of east highway station in Guigang and the use of eco-
technology. In: The Third Guangxi Youth Conference Proceedings (natural science article)
(2004)
4. Huang, J., Dong, X.: Ecological Station: The Design of Haizhu Bus Station. New
Architecture (1), 48–51 (2004)
5. Wang, G.-G., Zeng, K.-M., Zhu, X.-M.: Design for Jiangmen Long-distance Passenger-
transport Bus Station. Journal of Guangdong University of Technology 22(4), 94–98
(2005)
6. Wang, C.: Green building techniques manual. China Architecture and Building Press,
Beijing (1999); Public Technology Inc., US Green Building Council
7. Zhang, G., Xu, F., Zhou, J.: Sustainable building technologies. China Architecture and
Building Press, Beijing (2009)
8. Tsinghua university building energy research center. China building energy saving annual
development research report 2008. China Architecture and Building Press, Beijing (2008)
9. Ding, H., Liu, H., Wang, L.: Preliminary analysis of the energy consumption statistics in
civil buildings. Heating Ventilating & Air Conditioning 39(10), 13 (2009)
10. Hunan meteorological nets. The climate characteristics of hunan,
http://www.hnqx.gov.cn/qxbk/2/2009-5-2/HuNaQiHou-Zheng.htm
(May 21, 2009)
11. Tsinghua university building energy research center. In: China building energy saving
annual development research report 2007. China Architecture and Building Press, Beijing
(2007)
12. Heerwagen, D.: Passive and active environmental controls. McGraw-Hill Companies, New
York (2004)
13. Public Works and Government Services Canada. In: Daylighting Guide for Canadian
Commercial Buildings, Canada (2002)
14. Yijian, S.: Industrial Ventilation. China Architecture and Building Press, Beijing (1994)
A Hybrid Approach to Empirically Test Process
Monitoring, Diagnosis and Control Strategies
Luis G. Bergh
1 Introduction
In the last decade a number of techniques have been proposed to improve the
operation of different processes. For example, the state of the art and the
challenges in the mining, minerals and metal processing area were recently
discussed in [1,2]. These works mainly focus on aspects such as process
modeling, data reconciliation, soft sensors and pattern recognition, process
monitoring, fault detection and isolation, control loop monitoring, control
algorithms and supervisory control.
What is common to most of these areas is that the theoretical advantages of a
novel method have to be confronted and tested in real plants. However,
experimentation on real plants is costly and may present other difficulties.
For example, the input variables can only be changed inside a narrow band, to
avoid risky operating conditions or high losses in products. Some important
disturbances cannot always be managed, interrupting and degrading the
experiments and leading to confusing results. Moreover, it is sometimes
extremely difficult to reproduce a given disturbance, for example a change in
particle size distribution. Another important factor may be the quality of the
collected data. For example, in the mineral processing industry, the
measurement of important variables, such as particle size or a stream grade,
demands
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 215222.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
2 A Different Approach
Some processes can be analyzed by separating the phenomena into different
levels. A first and basic level is how the different streams are mixed and
separated in a process unit. In most cases the physical properties of each
phase, such as density or viscosity, are invariant or experience only moderate
changes as a function of temperature. A second level is to take into account
the change in properties such as the concentration of a solute. This can occur
due to a chemical reaction, where some species partially disappear to form
other species, or due to a selective migration of some species from one phase
to another. In flotation, for example, some species are selectively attached to
gas bubbles and form a froth phase, while other solid particles remain in the
liquid phase, forming the tailings. Two caveats are that changes in solute
concentration may also change transport properties, and that this decoupling
may not hold effectively in practice.
More generally speaking, when the process hydrodynamics are not significantly
influenced by changes in solute concentration, the process behavior can be
represented separately at these two levels. If this hypothesis is accepted,
then the experimentation approach on pilot plants can be simplified in two
senses:
(i) Experimentally, each fluid (liquid, solid or gas) may be substituted by a
low-cost, easily manageable fluid, such as water and air in flotation, water
and an organic solvent in solvent extraction processes, or water in a
liquid-phase reaction. The process hydrodynamics will be well represented by
such fluid mixing and separation in a process unit. Thus experimental work can
be carried out under safe, low-cost conditions.
(ii) The solute concentration changes remain a problem because there will be
no real change under the previously discussed conditions. Recall also the
difficulties, in a real case, of using low-cost and reliable instrumentation.
Alternatively, the solute concentration changes can be obtained from detailed
models relating measured operating conditions, such as flow rates,
temperatures, pressures and levels, and the initial concentration state of the
feeds. This kind of model may also be difficult to obtain.
Therefore, if a model is available and the pilot plant is operated with these
low-cost fluids, a hybrid system can be developed. The real plant behavior is
simplified, but the main hydrodynamic characteristics are still well
represented. On the other hand, by using the model, the variables representing
the target of the process operation are predicted (not measured) over a wide
operating region.
If this hybrid system is built, distributed control of local objectives can be
administered by supervisory control strategies based on estimates of the
crucial variables. Process monitoring, diagnosis, and fault detection,
isolation and remediation studies can also be developed at low cost with a
reasonable approximation to the behavior of the real process.
Models of different kinds may be built and solved on-line to produce virtual
output variables from virtual and real input variables, as illustrated in
Figure 1.
Fig. 1. The plant simulator produces virtual output variables from virtual and
real input variables
4 Experimental Results
Once the hybrid system is built, with all measurements and model predictions
available in the computer network, different monitoring, diagnosis and
supervisory control systems can be tested. In this work, examples of building
and applying models based on principal component analysis (PCA) are discussed.
The concept of a latent variable model is that the true dimension of a process
is defined not by the number of measured variables but by the underlying
phenomena that drive the process. The latent variables themselves are modeled
as mathematical combinations of the measured variables and describe directions
of variation in the original data. A latent variable model can contain far
fewer dimensions than the original data, provide a useful simplification of
large data sets, and allow better interpretation of the measured data during
analysis [7].
A PCA model was built from 1800 sets of data corresponding to normal operating
conditions of 16 variables (froth depth, gas hold-up, low and high pressure, mA
signals to the air and tailings control valves, bias, air, wash water and feed
flow rates, feed particle size, grade and solids percentage, and the predicted
concentrate grade and process recovery). A model with 6 latent variables was
found to explain at least 92% of the variance in the centered and scaled
pretreated data. For monitoring the process, the Hotelling T2 limit was found
to be 12.6, while the Q residuals (prediction errors) limit was 3.81.
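The monitoring scheme just described can be sketched generically: autoscale the normal-operation data, fit a PCA model, and for each new sample compute the Hotelling T2 statistic in the score space and the Q statistic (squared prediction error) in the residual space. The following is an illustrative reimplementation on synthetic data, not the authors' code; the 6-component model and the limits 12.6 and 3.81 quoted above are specific to their data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 1800 samples of 16 correlated variables
# generated from 6 underlying latent directions plus noise.
latent = rng.normal(size=(1800, 6))
X = latent @ rng.normal(size=(6, 16)) + 0.1 * rng.normal(size=(1800, 16))

# Autoscale (center and scale), then fit PCA via the SVD.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 6                                 # number of latent variables retained
P = Vt[:k].T                          # loadings (16 x k)
lam = (S[:k] ** 2) / (len(Xs) - 1)    # variances of the scores

def t2_q(x):
    """Hotelling T^2 and Q (squared prediction error) for one sample."""
    xs = (x - mu) / sigma
    t = xs @ P                        # scores in the latent space
    t2 = np.sum(t ** 2 / lam)
    resid = xs - t @ P.T              # part not explained by the model
    q = resid @ resid
    return t2, q

t2, q = t2_q(X[0])                    # in-model sample: both stay small
```

In use, control limits for T2 and Q are set from the training data, and a new sample exceeding either limit is flagged as abnormal, as in Figure 4; the per-variable contributions to T2 then point to the offending variable, as in Figure 5.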
(Figure: piping and instrumentation diagram of the pilot flotation column,
showing the feed, reagents, wash water, air, concentrate and tailings streams
with their FIC flow controllers, LIC level controller, FI flow indicator, AI
analyzers and the supervisor.)
Experiments were carried out to test when the process is out of control and an
abnormal operating condition is met. Two results are presented: when the
process is at steady state and during the transient period. One example is
shown in Figure 4, where the T2 and Q tests were followed for over 600 samples,
taken every 5 seconds.
(Figure 4: time series over 600 samples of the Q residuals, the Hotelling T2
statistic, and the predicted concentrate grade and recovery.)
One can see that most of the time the Q test is satisfied, while the T2 test
fails in the intervals 130–200, 300–430 and 480–560. In these same periods the
concentrate grade is too low and the recovery is high, or the concentrate grade
is too high and the recovery is low, so an abnormal operation has been
detected. To identify which variables are causing this, the individual
contributions to the T2 residuals for sample 512 are shown in Figure 5. One can
see that the main contributions were the froth depth and the high and low
dp/cells. All variables consistently showed that the problem is due to a low
froth depth, causing high recovery and low concentrate grade. Figure 6 shows
the froth depth changes during the whole period. When the froth depth is
changed from 50 to 100 cm, as shown at sample 600, the column operation is
driven back to a normal condition, as can be seen from the previous figures.
When only the Q residuals test fails, the device measuring the isolated
variable must be recalibrated or replaced.
Several tests were carried out to find the sensitivity of the monitoring test
to the magnitude of the failure, measured as a percentage of error. Errors of
less than 5% on pressure to control valves, 7% on dp/cells, 15% on flow meters
and 10% on virtual measurements of concentrate grade were detected. These error
limits were found for a large number of different operating conditions. One
example is shown in Figure 7 for the virtual measurement of copper concentrate
grade.
The same PCA model was used to test abnormal operation due either to failed
sensors or to process variable deviations. The PCA model relies on the selected
data. If the data collected represent a narrow band of operation
(Figure 5: percentage contribution of each variable to the T2 residuals for
sample 512. Figure 6: froth depth over the sample period. Figure 7: Q residuals
versus error percentage for the virtual measurement of concentrate grade.)
5 Conclusions
This approach, combining on-line process measurements and model-predicted
variables, permits low-cost, safe and wide-range experimentation in pilot
plants. When the process hydrodynamics and the phenomena changing the
concentration of some species can be decoupled, considerable simplification of
the experimentation can be achieved. By using low-cost materials, such as
water, air or kerosene, real experiments can be performed to describe the real
process hydrodynamics. By adding the use of simpler models, reliable
information on key unmeasured variables can be obtained.
The application of multivariate statistical methods, and particularly PCA, is a
powerful tool for building linear models that capture the essentials of the
process phenomena with the minimum number of latent variables. The application
of PCA models to monitoring CSTRs and flotation columns has been demonstrated.
These PCA models can be used effectively as part of a supervisory control
strategy, especially when control decisions are made infrequently. A novel
approach for testing strategies for process monitoring, diagnosis and control
has been proposed.
In the near future, more tests of novel strategies for process monitoring,
diagnosis, fault detection and isolation, and supervisory control can be
performed, giving considerable insight into process performance under real
experimentation.
Acknowledgments. The author would like to thank Santa Maria University (Project
271123) and Fondecyt (Project 1100854) for their financial support.
References
1. Hodouin, D., Jämsä-Jounela, S.-L., Carvalho, T., Bergh, L.G.: State of the Art and
Challenges in Mineral Processing Control. Control Engineering Practice 9, 1007–1012
(2001)
2. Hodouin, D.: Methods for Automatic Control, Observation and Optimization in Mineral
Processing Plants. Journal of Process Control 21, 211–225 (2011)
3. Finch, J.A., Dobby, G.S.: Column Flotation. Pergamon Press (1990)
4. Bergh, L.G., Yianatos, J.B.: Control Alternatives for Flotation Columns. Minerals
Engineering 6(6), 631–642 (1993)
5. Bergh, L.G., Yianatos, J.B.: State of the Art: Automation on Flotation Columns. Control
Engineering Practice 11(1), 67–72 (2003)
6. Bergh, L.G., Yianatos, J.B., Leiva, C.: Fuzzy Supervisory Control of Flotation Columns.
Minerals Engineering 11(8), 739–748 (1998)
7. MacGregor, J.F., Kourti, T., Liu, J., Bradley, J., Dunn, K., Yu, H.: Multivariate Methods for
the Analysis of Databases, Process Monitoring, and Control in the Material Processing
Industries. In: Proceedings 12th IFAC Symposium MMM 2007, pp. 193–198 (2007)
Reconstructing Assessment in Architecture Design
Studios with Gender Based Analysis: A Case Study of 2nd
Year Design Studio of National University of Malaysia
1 Introduction
The design process in architectural studios is based on a number of small, well-defined
projects during the semester and on a final project at the end, which is ill-defined and
larger in scale. Students must finalize their projects before the deadline and present them
on submission day with proper documentation. On this day they have a chance to see
other students' projects and receive comments from peers and experts, and finally they
are given a mark. Experience shows that students worry about their grades to the extent
that they will not take part in discussions if they think their comments will affect grades,
and with small negative comments or fault-finding in their projects they become disappointed
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 223–229.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
224 N. Utaberta et al.
and miss the other comments and suggestions that follow. Most student complaints
concern the unfairness and inequity of grades. This may be rooted in
unawareness of the way they are evaluated and graded.
On the other hand, analysis shows that there is no common understanding of what
the grading process in architecture is; what occurs in faculties is simply instructors'
experience of what their own professors did. This has inhibited high-quality
discourse, research and development of grading systems in architecture education.
First of all, we have to investigate the past and currently implemented grading
systems in architecture faculties to find the characteristics and attributes of an
idealistic grading system. Since some of the terms related to this discussion are used
differently in different countries, and even within a single country in different
education sectors, finding an appropriate terminology for the analysis of assessment
and grading is essential. For instance, "assessment" in some contexts in the USA
refers to the evaluation of a wide range of characteristics and processes relating to
higher education institutions, including entry levels, attrition rates, student services,
physical learning environments and student achievements. In the UK, assessment can
mean what students submit by way of project reports, written papers and the like as
distinct from what they produce under examination conditions. Similarly, a grade
may refer to the classification of the level of a student's performance in an entire
degree, the summary of achievement in a single degree component, or the quality of a
single piece of work a student submits in response to a specified task.
Assessment in this article refers to the process of forming a judgment about the
quality and extent of students' achievement or performance. Such judgments are
mostly based on information obtained by requiring students to attempt specified tasks
and submit their work to instructors or tutors for an appraisal of its quality.
Student learning issues are currently at the forefront of education, especially in art
and architecture, where the challenge of identifying a problem, defining its limits and
developing a creative approach to solve it aids in the development of reasoned
judgment, interpersonal skills, reflection in action and critical reflection on
practice [1]. Hence design studios, as places where inherited traits and values are
transmitted and social relationships between students, tutors, and peers are
cultivated [2], play a crucial part.
Sara (2002) asserts that the nature of the studio as a confined space, isolated from
the social world, itself prevents the movement toward liberation. Each architecture
faculty promotes a different niche, and the different studio cultures and academic
climates affect students' interest, performance, and sense of self-worth.
Empirical studies of architecture education are few, and studies of gender issues in
architectural education are rarer still. Educational research [3] reveals that
male and female students are treated differently and that architecture pedagogy has
historically been constructed around a masculine identity and still persists in it.
Architecture students are usually presented with a history in which women do not
appear and in which women's particular contributions are not recognized [3].
Most women remain spectators in popular versions of both past and
present. A look at architectural history textbooks reveals little mention of women and
their contributions to the built landscape. We might reasonably assume that most
syllabi of architectural history courses also neglect women [3].
For example, the awarding of the 1991 Pritzker Architectural Prize solely to
architect Robert Venturi ignored the contributions of his partners, notably Denise
Scott Brown. Venturi commented on this omission when he acknowledged the award:
"It's a bit of a disappointment that the Prize didn't go to me and Denise Scott Brown,
because we are married not only as individuals, but as designers and architects" [4].
Another example is Julia Morgan. Her capabilities as a designer and architectural
professional were on a par with those of her male contemporaries. However, because
she was not male, the commissions she received and the publication of her design
work were not of the same caliber and prominence as those distinguishing her male
architectural colleagues [5].
Favro reveals that Morgan's work at the Ecole was every bit as object-oriented
and style-conscious as that of her peers; what she lacked was opportunity. Armed with her
diploma from the Ecole, Morgan sought professional validation, yet found that her
gender placed her in the position of an outsider. She displayed obvious skill as a designer and
engineer, yet was often given commissions because of preconceptions about female
sensitivity.
Genderization is the attaching of our cultural constructs of masculinity to our concept of
what constitutes a well-educated person or suitable educational methods [3].
Gender is not simply a biological difference, and should not be construed as the property of
individuals. Rather, gender reflects how social expectations and beliefs treat the
biological characteristics of sex to form a system of domination and subordination,
privilege and restraint. Domination does not necessarily have to be as overt as
physical oppression; it can be as pervasively subtle as silencing an individual's voice
in text, display, or class discussion [3]. It is important to recognize that our social
constructions of masculine and feminine are fluid: from one culture to another, within
any culture over time, over the course of one's life, and among different groups of men
and women depending on class, race, ethnicity and sexual orientation. We must be
constantly aware of how society treats gender and of how we may inadvertently
reinforce it [3].
In 2003, Phakiti [2] suggested that learning strategies, motivation and the role of
context are intertwined with gendered identities, and that research is needed to
understand why and in what contexts gender differences in learning occur. In this
paper, an example of a particular design studio is used to illustrate gendering in
design studios. Based on the literature review and the most cited research worldwide,
we administered a questionnaire to second-year studio students in the architecture
department at Universiti Kebangsaan Malaysia; drawing on the students' responses
and one-to-one interviews, we try to trace the weak points of current models, and at
the end some suggestions are given.
made some study of how people learn. To investigate students' perceptions and
feelings and to measure their satisfaction with the implemented models, we chose
second-year architecture students at the BS degree level at Universiti Kebangsaan
Malaysia as a case study. The questionnaire was administered at the end of the first
semester. All 23 students of the studio filled in the questionnaire and attended a
one-to-one interview; 14 were female and 9 were male, and 15 of them were Malay
while the rest were Chinese-Malaysian. The studio was run by 2 male lecturers and
3 female teaching assistants, of whom 3 were Malaysian, one Indonesian and one Iranian.
One-to-one tutorials consisted of informal crits and sometimes informal group
discussions, with one lecturer per group. During formal assessment periods such as
crits or reviews, students pinned up their sheets and explained their concepts and
ideas to justify their design process to lecturers, peers and reviewers. Juries used
evaluation sheets to assess students' work based on predefined criteria.
In the first part of the study, a questionnaire with 14 questions was distributed;
except for the first 3 questions and the last 4 questions, which were open-ended,
all items were Likert-type with five levels from 1 to 5, where 1 indicates the
minimum and 5 the maximum.
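Responses of this kind are typically summarized per item as a mean score and a distribution over the five scale points, optionally split by gender. The sketch below is illustrative only; the response values are invented, not the study's raw data.

```python
from statistics import mean

# Hypothetical Likert responses (1-5) to one questionnaire item, split by
# gender; the numbers are illustrative, not the study's raw data.
responses = {
    "female": [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 4, 2, 3],   # 14 students
    "male":   [4, 5, 3, 4, 4, 5, 3, 4, 4],                  # 9 students
}

def summarize(scores):
    """Mean score and the share of responses at each scale point (1-5)."""
    share = {level: scores.count(level) / len(scores) for level in range(1, 6)}
    return round(mean(scores), 2), share

for group, scores in responses.items():
    avg, share = summarize(scores)
    print(group, avg, share)
```

The per-level shares correspond directly to the percentages reported for the figures below.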
When students were asked how they learn to design, 47% of students identified working
in the studio as the primary mode of learning; 55% of the men also cited reading,
while 42% of the women chose discussion with peers. 47% of all students, and 33% of the
men, mentioned discussion with tutors as an effective means of learning in the studio.
This highlights the collaborative nature of architectural learning and stresses the
importance of the studio as the primary learning space in architecture, as well as the
role of the teacher.
Students were asked whether they had been encouraged to participate in the jury/panel
discussion when another student was presenting his or her project. As Figure 1 shows,
the percentage of female students who chose "never" is 15 percentage points higher
than that of male students, and the overall average illustrates that students are only
sometimes asked to join the discussion. This supports Frederickson's (1993) finding
that in small groups some students often do not receive a fair hearing. He also
emphasized the importance of tutors and leaders in encouraging students.
Fig. 1. Students' responses to the question of how often instructors encouraged them to join the
discussion
Students were asked how they usually feel after crit sessions. Interestingly, 49
percent of the female students cited feeling disappointed, uncertain and confused,
while 88 percent of the males felt inspired and motivated.
Students also complained of feeling humiliated and demoralized after receiving
negative comments, but many male students view the session as just one more battle
to be won. By contrast, to many women students this warrior mentality is truly
foreign, causing them to feel all the more self-conscious at the jury [3].
Like the studio, the design jury is a fundamental component of architectural
education. At most schools, the typical jury includes only men, or perhaps on
occasion, a token woman. Although we see a vast number of juries in which all jurors
are male, we rarely if ever see juries in which all jurors are female. Mark
Frederickson [6] reveals several important sources of gender and racial bias.
Compared to male jurors, female jurors receive less than their fair share of total time
to comment, they speak less often, and they are interrupted more often. Compared to
juries for male students, juries for female students are shorter. Female students are
interrupted more often. Jurors appear to have a condescending attitude and lower
expectations, and demonstrate coddling behavior toward female students. Data
obtained from students' responses to the question "Who benefits from jury sessions?"
are tabulated in Figure 2. The variance of the choices reveals that female students
benefit more than male students.
Fig. 2. Students' responses to the question "Who benefits from jury sessions?"
Students were asked how often they were interrupted by the juries or their tutor while
presenting their concept or project. More than 55 percent of the females complained
that they were interrupted all the time, and that afterwards they lost their words and
became more nervous, while most of the males mentioned that they were interrupted
by the teacher or juries in order to ask questions, which helped them explain better
what was needed.
In addition, research has shown that instructors give male students more detailed
instructions on how to complete assignments on their own, while they are more likely
to complete assignments for female students.
Surveys carried out in past years by Ayona Datta (2007) show that instructors talk more
to male students, ask them more challenging questions, listen more, give them more
extended directions, allow them more time to talk, and criticize and reward them
more frequently. Interestingly, in our research girls and boys reported the same feelings
about being rewarded by their tutors. This may derive from the number of female
teachers attending the studio. Changing the competitive atmosphere of
design studios to a cooperative climate can help female students show
themselves: as Laura Tracy put it, competing against the problem instead of against
one another.
The results of this survey illustrate that gender differences exist in some studio
contexts and that these differences are part of socialization into the culture. Students'
learning can be influenced by gender differences. This means that educators must
recognize that design and learning "differences" may reflect the different worlds in
which boys and girls are socialized, as well as our socialized expectations of men and
women.
So teachers need to be trained about this issue to be able to facilitate the learning
process. Most tutors in the studio find themselves thrust into teaching without much
training in gender-sensitive teaching skills. Hence, they pass down teaching models
gleaned from their own education without critically evaluating the hegemonic
ideologies that may be part of these models [7]. Adequate training of all tutors in the
skills of listening, reflective questioning, and gender-sensitive attitudes and behaviors
would create a more inclusive context for learning. On the other hand removing the
over-reliance on crits and increasing the range of assessment methods to cover self-
and peer-assessment as well as verbal presentation skills would provide more
empowerment to the students and allow them more involvement with their learning.
3 Conclusions
The intention of this paper is to make architecture educators aware of gendered
educational practices and their consequences, for students and for the discipline
itself. Since learning is always connected with concurrent experiences, there should
be as much focus in the curricula on gender-sensitive design projects as there is on
technology. We hope this paper can start further discussion and discourse in this
unpopular area.
References
1. Schon, D.: Educating the reflective practitioner: Towards a new design for teaching (1987)
2. Datta, A.: Gender and Learning in the Design Studio. Journal for Education in the Built
Environment 2(2), 21–35 (2007)
3. Ahrentzen, S., Anthony, K.: Sex, stars, and studios: A look at gendered educational
practices in architecture. Journal of Architectural Education 47(1), 11–28 (1993)
4. M.J.C.: Robert Venturi Awarded Pritzker Prize. Architecture 80(5), 21 (1991)
5. Favro, D.: Sincere and Good: The Architectural Practice of Julia Morgan. Journal of
Architectural and Planning Research 9(2), 125 (1992)
6. Frederickson, M.P.: Gender and racial bias in design juries. Journal of Architectural
Education 47(1), 38–47 (1993)
7. Glasser, D.E.: Reflections on architectural education. Journal of Architectural
Education 53(4), 250–252 (2000)
Re-assessing Criteria-Based Assessment in Architecture
Design Studio
1 Introduction
Architecture is involved with every aspect of the design process from concept to
completion, and because of the nature of its education, the architect is ideally suited to
exercise and maintain overall management of the project. A student, after taking
liberal arts subjects and basic architectural graphics and communication, is given an
associate degree in architectural technology. After another two years of architectural
building subjects he may be given a diploma or bachelor's degree in architectural
technology. After another year or two of graduate work in advanced architectural and
structural design and professional subjects he may be given a master's degree. This path
has a recurring component, called assessment.
As Derek Rowntree [1] stated, if we wish to discover the truth about an
educational system, we must first look to its assessment procedures. The locus of
studies in this millennium is shifting towards skills acquisition, rather than knowledge
accumulation, for autonomous, self-directed and lifelong learning. Similarly,
once a technology is developed in a certain country, its know-how can be instantly
spread all over the world, neglecting the cultural aspects of the countries to or from
which it propagates. On the contrary, the spiritual and cultural aspects of human life,
namely how to enrich people's day-to-day lives, cannot easily be communicated. The
interchange of cultural aspects is not as easy as that of materialistic ones. In
this paper, the assessment culture is discussed first, followed by the effects and
benefits of using formative assessment in education.
Since criteria are attributes or rules that are useful as levers for making judgments, it
is useful to have a general definition of what a criterion is. There are many meanings
of criterion (plural criteria), but most of them overlap. Here is a working,
dictionary-style definition, verbatim from Sadler (1987), which is appropriate to this
discussion and broadly consistent with ordinary usage [10]: Criterion (n): A
distinguishing property or characteristic of anything, by which its quality can be
judged or estimated, or by which a decision or classification may be made.
(Etymology: from Greek kriterion: a means for judging). Grading models may be
designed to apply to a whole course or, alternatively, to specific assessment tasks, and
some are appropriate for both. For all grading models explained below, the
interpretation of criteria is the same as the general definition given above, and all of
them make a clear connection between the achievement of course objectives and the
grades given, without reference to other students' achievements.
In this model, grades are based on students' achievement of the course objectives.
In this form, the given grades are based on interpretations that clarify the extent of
attainment of the course objectives (Figure 1). This kind of grading method is based
on a holistic attitude in evaluation.
In this form, the course objectives are partitioned into major and minor ones; the
achievement of each can be determined by yes or no, and the achievements of the
objectives are then computed [11] (Fig. 2). Both of these objective-based models
make clear connections between the attainment of course objectives and the grades
awarded, but students cannot easily see a close connection between the course
objectives and assessment items, and they are not in a strong position to judge how
far they have reached the objectives.
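A minimal sketch of this dichotomous model may help; the objectives, weights and letter-grade cut-offs below are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical illustration of the second objective-based model: course
# objectives are split into major and minor ones, each marked achieved
# (True) or not (False), and a grade is computed from the weighted share
# of objectives attained. Weights and cut-offs are assumptions.
MAJOR_WEIGHT, MINOR_WEIGHT = 2.0, 1.0

def objective_grade(major, minor):
    """major/minor: dicts mapping objective name -> achieved (yes/no)."""
    total = MAJOR_WEIGHT * len(major) + MINOR_WEIGHT * len(minor)
    earned = (MAJOR_WEIGHT * sum(major.values())
              + MINOR_WEIGHT * sum(minor.values()))
    score = earned / total
    # Map the attainment ratio onto letter grades (assumed cut-offs).
    for cutoff, letter in [(0.85, "A"), (0.70, "B"), (0.55, "C"), (0.40, "D")]:
        if score >= cutoff:
            return letter
    return "F"

grade = objective_grade(
    major={"critical explanation": True, "logical development": True},
    minor={"oral presentation": False, "graphic presentation": True},
)  # 5/6 of the weighted objectives attained -> "B"
```

The sketch also makes the model's weakness visible: each objective collapses to yes/no, hiding where on the learning continuum a student actually sits.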
Therefore, these types of models have little prospective value for students. There is
also no indication of whether a given grade reflects attainment of the objectives of a
specific task or of the whole set of objectives, nor whether each objective is assessed
on its own or in combination with other objectives. Most educational outcomes and
attainments cannot be assessed as dichotomous states like yes/no or zero/one, because
learning is a continuous process which, in contrast with discrete scales, can at best be
divided into the segments satisfactory and unsatisfactory [11].
desired, the conditions under which this behavior is to be demonstrated, and the
minimum acceptable level of performance that signifies attainment of that
objective. Defined architecture assignments, depending on their type, scale and
duration, have different objectives and expectations, and different tasks are required
to assess the students' submissions. These tasks are based on practical necessity and
on personal standards aligned with course objectives, and they create policies that
assessors intend to take into account in judgment. Examining different evaluation
sheets in a variety of studios for different projects leads to the conclusion that the
rubric of the tasks is as follows:
1. Critical Explanation
2. Logical Development
3. Proposal and recommendation
4. Oral and Graphic Presentation
The potential number of tasks relevant to the projects is large, but these are enough
to be illustrated and discussed in this paper. For each rubric and task, some criteria
will be defined. Segregating the evaluation into more tasks increases students'
opportunities to show their capabilities and sufficiency, and gives them more chances
to earn better marks. In contrast, the more objectives are expressed for each task, the
more they operate in isolation and recede from the overall configuration that
constitutes a unit of what the students are supposed to do. In addition, it restricts
assessors within these defined borders and confines their authority and experience in
recognizing and analyzing students' hidden intentions in their designs. This is in
complete opposition to the main purpose of inviting external jurors, which is to
benefit from a diversity of expert ideas and critical attitudes. So the characteristics of
the objectives are more effective than their number in defining flexible evaluation
borders.
Since students perform along a continuous path, the results of their performance can
only be revealed on a continuum that can be divided between satisfactory and
unsatisfactory. A student's locus on this vector derives from the quality of their work
in response to the defined criteria in each task. So it is necessary to define some
qualitative levels, such as little or no evidence, beginning, developing, accomplished,
and exemplary, to apply as norms in the assessment. Descriptions should have the
best overall fit with the characteristics of the submitted projects; the assessor does not
need to make separate decisions on a number of discrete criteria, as is usual in list
form.
Although these descriptions are very helpful and effective in an appraisal system, the
qualitative assessment must ultimately be translatable into grades and marks, so we
need to map this model onto one of the common grading systems. As mentioned
before, grading systems such as (1–100) or (A, B, ...) are not appropriate to import
into a criteria-based assessment model, because after translating students' work into
numerical grades the connection between course objectives and grades is completely
broken: marks and grades do not in themselves have absolute meaning, in the sense
that a single isolated result could stand alone as an achievement measure or indicator
with a universal interpretation. Assessment and grading do not take place in a
vacuum. Quality of students' work
5 Conclusion
Evaluation and grading in art and architecture, and especially in their studio-based
courses, are more difficult than in other majors and fields. Since their teaching and
learning processes are different from, and more complicated than, those of theory
courses, this is understandable. But there is a common belief that there is no criterion
or norm in their grading and assessment system; in other words, that grading is
holistic and subjective. This view is not incoherent: there are no explicit criteria or
norms among jurors and instructors in evaluating and grading students' projects, and
if there are, they are not known and explained to students. Students themselves
should be inducted directly into the processes of making academic judgments, so as
to help them make more sense of, and assume greater control over, their own
learning and therefore become more self-monitoring.
In recent years, more and more universities have made explicit overtures towards
criteria-based grading to make assessment less mysterious, more open and more
explicit. But where there is no discussion and contribution, there is no way to
improve and develop this model, and many institutions may employ identical
or related models without necessarily calling them criteria-based. A further
framework is self-referenced assessment and grading, in which the reference
point for judging the achievement of a given student is that student's previous
performance level or levels; what counts then is the amount of improvement each
student makes.
References
1. Biggs, J.: Teaching for quality learning at university: what the student does. SRHE &
Open University Press, Buckingham (1999)
2. Birenbaum, M., Dochy, F.: Alternatives in Assessment of Achievement, Learning
Processes and Prior Knowledge. Kluwer, Norwell (2009)
3. Teymur, N.: Towards a working theory of architectural education (2005)
4. Gijbels, D., Dochy, F.: Students' assessment preferences and approaches to learning: can
formative assessment make a difference? Educational Studies 32(4) (2006)
5. Prosser, M., Trigwell, K., Hazel, E., Gallagher, P.: Research and Development in Higher
Education 16, 305–310 (1994)
6. Dochy, F.J.R.C., McDowell, L.: Introduction: assessment as a tool for learning. Studies in
Educational Evaluation 23(4), 279–298 (1997)
7. Inbar-Lourie, O.: Language assessment culture. In: Shohamy, E., Hornberger, N.H. (eds.)
Encyclopedia of Language and Education, 2nd edn. Language Testing and Assessment,
vol. 7, pp. 285–299. Springer Science+Business Media LLC, Heidelberg (2008)
8. Black, P.J., Wiliam, D.: Assessment and Classroom Learning. Assessment in Education,
7–74 (March 1998)
9. Nitko, J.: Educational Assessment of Students. Prentice Hall, Upper Saddle River (2000)
10. Sadler, D.R.: Ah! ... so that's quality. In: Schwartz, P., Webb, G. (eds.) Assessment: Case
Studies, Experience and Practice from Higher Education. Kogan Page, London (2002)
11. Sadler, D.R.: Interpretations of criteria-based assessment and grading in higher education.
Assessment and Evaluation in Higher Education 30(2), 175–193 (2005)
Layout Study on Rural Houses in Northern Hunan
Based on Climate Adaptability
1 Introduction
The 2007 Annual Report on China Building Energy Efficiency stated that in South
China it is solar radiation that drives summer air-conditioning energy consumption,
and that shading and ventilation are vital for energy saving. The application of
passive technology is the key to the energy efficiency of residential buildings in
South China.
Theories and practices concerning rural houses have mostly focused on traditional
folk houses: in Asia, typical Japanese and Korean traditional folk houses were
thoroughly researched to summarize responsive sustainable design strategies, as has
been done in several places in China including Shanxi, Anhui, Hunan and Yunnan.
In contrast, for contemporary rural houses, especially those located in the
hot-summer and cold-winter zone, similar research on sustainable practice and
climate adaptability is insufficient, which could hinder the implementation of
energy efficiency policies in the countryside. Therefore, it is extremely significant for
reducing energy consumption and building ecological rural houses to deeply research the
240 X. Jin, S. Shen, and Y. Shi
layouts of rural houses based on climate adaptability in South China, and to develop
and systematize valuable passive strategies and techniques so as to optimize the
functional layout.
3 Research Methods
Most houses are compactly built along road sides with obvious linear features, which
achieves optimal land utilization. In the hot summer, it is appropriate for rural houses
to adopt a low-rise, high-density overall layout: firstly, it can form and promote
"lane wind" to improve human thermal comfort; secondly, this arrangement keeps
most rooms far from the outdoor ground, minimizing excessive thermal influence
on the indoors.
Most dwellings are townhouses. From Figure 2, it seems clear that under the same
testing conditions, the temperature difference among the three houses fluctuates
within 2 °C. This modest contrast is due to the relatively fine weather during the
testing days. Nonetheless, a remarkable tendency can be concluded: the thermal
conditions of house 1# (the midterm townhouse) are relatively stable, and it has the
minimum temperature variation and the best thermal performance of the three
houses. The thermal performance of house 5# (the endmost townhouse) is a little
poorer than that of house 1#, because of western solar radiation. House 3# (the single
house) has the poorest thermal conditions of the three, owing to exposure to both
eastern and western solar radiation. From these data, the objective conclusion can be
drawn that, as to the energy efficiency of building types, a midterm townhouse is
superior to an endmost townhouse, which in turn is superior to a single house; this
thermal contrast would undoubtedly be more obvious under extreme climate. What's
more, with a small building shape coefficient and a mutual shading system,
townhouses can avoid negative indoor effects resulting from excessive solar radiation.
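The building shape coefficient behind this comparison (exposed envelope area divided by enclosed volume) can be sketched for a simple box-shaped house; the dimensions below are invented for illustration, and party walls shared with neighbours are excluded from the exposed area.

```python
def shape_coefficient(width, depth, height, shared_sides=0):
    """Exposed envelope area / enclosed volume for a box-shaped house.

    shared_sides: number of side walls shared with neighbours (party
    walls), which are not exposed to the outdoor environment.
    """
    roof = width * depth
    front_back = 2 * height * width            # north and south facades
    sides = (2 - shared_sides) * height * depth
    volume = width * depth * height
    return (roof + front_back + sides) / volume

# Invented dimensions (metres) for illustration only.
detached = shape_coefficient(10, 12, 6, shared_sides=0)  # single house
end_unit = shape_coefficient(10, 12, 6, shared_sides=1)  # endmost townhouse
middle   = shape_coefficient(10, 12, 6, shared_sides=2)  # midterm townhouse
assert middle < end_unit < detached
```

With identical invented dimensions, the middle townhouse has the smallest coefficient, consistent with the energy-efficiency ordering reported above.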
In summary, light, temperature, humidity, and wind direction and velocity are all
considered in this layout to improve the microclimate and optimize climate
adaptability. This spontaneous rural layout embodies original and plain ideas of
Green Building.
Fig. 2. Temperature changes of tested houses in the winter and the summer
It was found that the hall-patio pattern of traditional folk houses in Hunan has been
inherited in rural house plans. Centered on the patio, rooms enclose each other to
form a space sequence that is closed to the outside and open to the inside (Fig. 3). The
typical eaves gallery–hall–patio–bedrooms layout has inherited the unique
architectural language of traditional folk houses and formed a spontaneous mutual
shading and ventilation system that improves the microclimate, making the houses
adapted to the local climate.
Fig. 3. Typical space sequence and the unobstructed ventilation aisle from south to north
Eaves gallery: In Huarong, not every house has a real eaves gallery. Through the
vertical concavity and convexity of the building envelope and the eaves projection of
the pitched roof, a climate buffer zone similar to a traditional eaves gallery is formed,
resulting in an original, passive, self-shading system: a) In a single-storey house with
three bays, the bedrooms and kitchen are located on both sides of the plan and bulge
outward relative to the hall, which is recessed about 1.5 m inward. The concave and
convex features of the building envelope, combined with the eaves projection of the
pitched roof, thus achieve favorable shading effects. b) In a multi-storey house, on the
second floor just above the hall, a balcony or bedroom always projects 1.5 m
outward, forming a side elevation with a recessed first floor and an indented second
or third floor; this greatly improves the shading of the hall. Whether intentional or
not, this "eaves gallery" ameliorates thermal problems by providing a self-shading
system, and it also relieves somewhat monotonous elevations (Fig. 4).
Hall: The hall is always the core of a rural house; 70% of home activities, such as
entertaining, dining and recreation, happen in the hall. In all surveyed houses, the
rooms are organized around the hall. Most halls are 4 m wide and more than 10 m
deep. Two gates are installed separately on the northern and southern walls of the
hall, facing the outdoors or the courtyard; this ensures that the hall is unobstructed in
both the northern and southern directions. The southern gate, placed in the middle of
the hall, is bigger than the northern gate, which is placed to one side. Because the
gate directions accord with the prevailing wind direction in summer, this measure
can give rise to cross ventilation, improving dehumidification and heat dissipation by
natural ventilation in summer. Most surveyed houses adopt this pattern. For example,
as Figure 2 shows, one hall is 3.9 m wide and its southern gate is almost 2.45 m wide
and 3 m high, ensuring the hall is unobstructed in both the northern and southern
directions in favor of ventilation.
more conveniently and efficiently. This special feature of the rural patio should be emphasized during function optimization.
Bedroom: According to the statistics, bedroom areas are relatively standardized, reflecting regional living habits. Firstly, all bedrooms face north or south. Most southern bedrooms exceed 20 m², larger than the northern bedrooms of typically 17 or 18 m². Secondly, a single-storey house has three bedrooms off the hall: two bigger southern rooms and a smaller northern room. Thirdly, a multi-storey house has three or four bedrooms; a southern bedroom on the first floor is usually reserved for the elderly, and the others are on the upper floors.
5 Conclusions
From the above analysis of the climate adaptability of rural houses in Huarong, we can summarize the application principles of passive technologies and scientific design strategies.
1) In Huarong, a low-rise, high-density townhouse layout is adopted overall. By enlarging building width, simplifying building form, and concentrating building volume, this layout improves the effects of passive technologies in rural houses.
2) A self-shading system has formed spontaneously through the vertical concave and convex features of the building envelope and the eaves projection of the pitched roof.
3) The inner layout is concentrated on the hall as a responsive core. By adjusting interfaces, the house forms an unobstructed eaves gallery–hall–courtyard–bedrooms aisle, adapted to the prevailing summer wind.
4) The patio is appropriately designed to improve the indoor thermal environment through thermal-pressure and wind-pressure ventilation. It is integrated with the kitchen and bathroom to decontaminate, discharge, and reuse energy more conveniently and efficiently.
In conclusion, it is advisable to link regional custom with the scientific application of passive technologies and to adopt economical, effective passive techniques so as to optimize the building layout and improve the indoor thermal environment. This is significant for green rural house design in Huarong, and may possibly be applied to rural construction across northern Hunan.
Determination of Software Reliability Demonstration
Testing Effort Based on Importance Sampling and Prior
Information
1 Introduction
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 247–255.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
248 Q. Li and J. Wang
References [9] and [10] introduced importance sampling theory into SRDT and proposed an accelerated operational profile (OP) and acceleration factors, which can significantly reduce the number of test cases while preserving the required confidence in the software. Reference [11] estimated the prior distribution of the software failure rate from the results of software reliability growth testing (SRGT) and established an SRDT method based on prior information. Reasonably introducing prior information or different statistical sampling methods can thus reduce the test effort and accelerate the SRDT process.
Therefore, this article first analyzes the characteristics of highly reliable software and proposes an accelerated OP, then estimates the acceleration factor according to importance sampling theory. For highly reliable continuous-type software, a Bayesian method for determining the SRDT effort with the accelerated OP and prior information is proposed. Finally, a method for estimating the hyper-parameters of the prior distribution is given.
Let $X$ be a random variable whose probability density function is $f(x)$, denoting the operation, with finite domain $\{1, 2, \dots, m\}$, the $m$ operations. $Y = h(X)$ is a function of $X$ such that $h(x) = 1$ if a failure occurs in a reliability test on operation $x$, and $h(x) = 0$ if no failure occurs. The mathematical expectation of $Y$,

$\mu = E(Y) = \int_{-\infty}^{+\infty} h(x) f(x)\,dx$,

denotes the software failure rate. Sampling $n$ test cases $\{x_1, x_2, \dots, x_n\}$ according to $f(x)$ and computing $y_i = h(x_i)$ for $i = 1, 2, \dots, n$, the sample mean is

$\hat{\mu} = \bar{Y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \frac{1}{n}\sum_{i=1}^{n} h(x_i)$.
According to the analysis in Section 2, failures are considered not to occur in the regular operations of highly reliable software; that is, $h(x) = 0$ for all such test cases, which contribute nothing to the estimation of $\mu$. Failures may, however, occur in the critical operations, which are therefore very important for estimating $\mu$. Thus, by means of importance sampling theory, the probabilities of occurrence of the critical operations should be increased in order to reduce the number of test cases.
Suppose the probability of critical operation $i$ is relatively high under a probability density function $g(x)$. Using $X^{*}$ to denote the random variable whose probability density function is $g(x)$, the mathematical expectation of $Y$ becomes

$\mu = \int_{-\infty}^{+\infty} h(x) f(x)\,dx = \int_{-\infty}^{+\infty} h(x)\,\omega(x)\,g(x)\,dx$,

where $\omega(x) = f(x)/g(x)$ is called the likelihood ratio. Letting $Y^{*} = h(x)\,\omega(x)$, the above formula becomes

$\mu = \int_{-\infty}^{+\infty} y^{*} g(x)\,dx = E(Y^{*})$.

That is to say, estimating the expectation of $Y$ by sampling from $f(x)$ becomes estimating the expectation of $Y^{*}$ by sampling from $g(x)$. In other words, generating samples $\{x_1, x_2, \dots, x_n\}$ from $g(x)$ and computing $y_i^{*} = h(x_i)\,\omega(x_i)$ for $i = 1, 2, \dots, n$, the sample mean is

$\hat{\mu} = \bar{Y}^{*} = \frac{1}{n}\sum_{i=1}^{n} y_i^{*} = \frac{1}{n}\sum_{i=1}^{n} h(x_i)\,\omega(x_i)$.
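The weighted estimator above can be illustrated with a minimal Python sketch. The two-operation profile, the probabilities, and the function names are all hypothetical, not from the paper: the rare critical operation "c" always fails, and drawing from an alternative density g while weighting each draw by the likelihood ratio f(x)/g(x) keeps the estimate unbiased for the true failure rate under f.

```python
import random

def is_estimate(h, f_probs, g_probs, n, rng):
    """Importance-sampling estimate of mu = E_f[h(X)] by sampling from g.

    f_probs, g_probs: dicts mapping operation -> probability under f and g.
    Each draw x ~ g contributes h(x) * f(x)/g(x), i.e. the likelihood ratio
    corrects for sampling from g instead of f.
    """
    ops = list(g_probs)
    weights = [g_probs[o] for o in ops]
    total = 0.0
    for _ in range(n):
        x = rng.choices(ops, weights=weights)[0]
        total += h(x) * f_probs[x] / g_probs[x]
    return total / n

# Hypothetical profile: operation "c" is critical (rare under f) and always fails.
f = {"c": 1e-3, "r": 1 - 1e-3}          # true operational profile
g = {"c": 0.5, "r": 0.5}                # accelerated profile: critical op boosted
h = lambda x: 1.0 if x == "c" else 0.0  # h(x) = 1 iff a failure occurs on x

rng = random.Random(0)
mu_hat = is_estimate(h, f, g, 2000, rng)
# mu_hat should be close to the true failure rate 1e-3, even though
# a naive estimator would need millions of draws from f to see any failure.
```

With only 2000 draws, roughly half hit the critical operation, each contributing the small weight 1e-3/0.5, which is exactly the mechanism the paper exploits to shrink the test-case count.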
The operations of highly reliable software can be divided into two parts: the set of critical operations and the set of regular operations. In this paper, operations whose probabilities of occurrence are no more than $10^{-4}$ are considered critical. Using $oc_1, oc_2, \dots, oc_n$ to denote the critical operations, with probabilities of occurrence $pc_1, pc_2, \dots, pc_n$, and $or_1, or_2, \dots, or_m$ to denote the regular operations, with probabilities of occurrence $pr_1, pr_2, \dots, pr_m$, these satisfy

$\sum_{i=1}^{n} pc_i + \sum_{j=1}^{m} pr_j = P_C + P_R = 1$.
Because $P_C \ll P_R$, test cases generated from the OP fall mainly on the regular operations, whereas, as shown in Section 3.1, failures concentrate in the critical operations. Therefore, the probabilities of the critical operations should be increased, yielding an accelerated OP that significantly reduces the number of test cases.
The sum of the probabilities of all critical operations is $P_C = \sum_{i=1}^{n} pc_i$. The accelerated probabilities of the critical operations are then $pc_i' = pc_i / P_C$, so that $\sum_{i=1}^{n} pc_i' = 1$. Let the probabilities of occurrence of the regular operations be 0, i.e. $pr_i' = 0$. The new probability density function $g(x)$ then contains only the critical operations $oc_i$, whose probabilities are increased to $pc_i'$; the accelerated OP is constructed from the critical operations $oc_i$ and their probabilities $pc_i'$.
By the definition of the likelihood ratio $\omega(x) = f(x)/g(x)$, we have $\omega(oc_i) = pc_i / pc_i' = P_C$ for $i = 1, 2, \dots, n$. The acceleration factor, denoting the degree of acceleration, is defined as

$\alpha = \frac{P(O_c)}{P'(O_c)} = \frac{pc_1 + pc_2 + \dots + pc_n}{pc_1' + pc_2' + \dots + pc_n'} = P_C$.
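The construction of the accelerated OP can be sketched in a few lines of Python. The helper name and the example probabilities are hypothetical, chosen so the critical operations occupy 5% of the original OP; the code only renormalizes the critical-operation probabilities and returns the acceleration factor.

```python
def accelerated_op(critical_probs):
    """Build the accelerated OP from the critical-operation probabilities.

    critical_probs: dict op -> pc_i (probabilities under the original OP).
    Returns (new_probs, P_C): the renormalized profile pc_i' = pc_i / P_C,
    and the acceleration factor P_C (the sum of the original pc_i).
    """
    P_C = sum(critical_probs.values())
    new_probs = {op: p / P_C for op, p in critical_probs.items()}
    return new_probs, P_C

# Hypothetical critical operations occupying 5% of the original OP.
pc = {"oc1": 0.02, "oc2": 0.02, "oc3": 0.01}
g, P_C = accelerated_op(pc)
# P_C == 0.05; g sums to 1; the likelihood ratio pc_i / pc_i' equals P_C for each op.
```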
The number of failures $x$ observed in testing time $t$ obeys a Poisson distribution with failure rate $\lambda$:

$p(x = k \mid \lambda) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}, \quad k = 0, 1, 2, \dots$ (1)

The conjugate prior of the Poisson distribution is the Gamma distribution, so the prior distribution of $\lambda$ is assumed to obey a Gamma distribution:

$\pi(\lambda) = \mathrm{Gamma}(a, b) = \frac{b^a}{\Gamma(a)} \lambda^{a-1} e^{-b\lambda}$ (2)

After observing $r$ failures in testing time $t/P_C$ under the accelerated OP, the posterior distribution is

$f(\lambda \mid r, t/P_C, a, b) = \mathrm{Gamma}(a + r,\, b + t/P_C) = \frac{(b + t/P_C)^{a+r}}{\Gamma(a+r)} \lambda^{a+r-1} e^{-(b + t/P_C)\lambda}$ (3)
For a given reliability target $(\lambda_0, C)$, where $C$ is the confidence level, the required minimum testing time $T$ is the least $t$ satisfying the following inequality:

$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid r, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^{a+r}}{\Gamma(a+r)} \lambda^{a+r-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C$ (4)

For highly reliable software no failures are allowed, which means $r = 0$. Then (4) can be rewritten as:

$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^{a}}{\Gamma(a)} \lambda^{a-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C$ (5)
In the case $a = 1$ with no prior information ($b = 0$), this gives

$T = -\frac{P_C}{\lambda_0} \ln(1 - C)$ (7)
For the same reliability target as above, the required minimum testing time $T$ based on the normal OP is the least $t$ satisfying the following inequality:

$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t)^{a}}{\Gamma(a)} \lambda^{a-1} e^{-(b + t)\lambda}\,d\lambda \ge C$ (8)
In the case $a = 1$ and $b = 0$ this gives

$T = -\frac{1}{\lambda_0} \ln(1 - C)$ (10)
It is evident from formulas (5), (7), (8), and (10) that the testing time needed under the accelerated OP is $P_C$ times that needed under the normal OP. Because $P_C \ll 1$, the SRDT method based on the accelerated OP can significantly reduce the testing time and accelerate the testing process.
For example, suppose prior information is available with hyper-parameters $a = 1$, $b = 50000$, the reliability target is $(10^{-4}, 1 - 10^{-4})$, and the acceleration factor is $P_C = 0.05$. Then inequality (5) gives:
$\int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, 1, b)\,d\lambda = \int_0^{\lambda_0} (b + t/P_C)\, e^{-(b + t/P_C)\lambda}\,d\lambda = 1 - e^{-(b + t/P_C)\lambda_0} \ge C$

$T = P_C\left[-\frac{1}{\lambda_0}\ln(1 - C) - b\right] = 0.05 \times (10^4 \ln 10^4 - 50000) = 2105.2$
Suppose no prior information is available, i.e. $a = 1$, $b = 0$. For the same reliability target and acceleration factor, formula (7) gives:

$T = P_C\left[-\frac{1}{\lambda_0}\ln(1 - C)\right] = 0.05 \times 10^4 \ln 10^4 = 4605.2$
By means of inequality (8), the testing time with prior information under the normal OP can be calculated from the following inequality:

$\int_0^{\lambda_0} f(\lambda \mid 0, t, 1, b)\,d\lambda = \int_0^{\lambda_0} (b + t)\, e^{-(b + t)\lambda}\,d\lambda = 1 - e^{-(b + t)\lambda_0} \ge C$

$T = -\frac{1}{\lambda_0}\ln(1 - C) - b = 10^4 \ln 10^4 - 50000 = 42103.4$
According to formula (10), the testing time needed under the normal OP with no prior information is as follows:

$T = -\frac{1}{\lambda_0}\ln(1 - C) = 10^4 \ln 10^4 = 92103.4$
As shown above, the SRDT method based on prior information and importance sampling significantly reduces the testing effort of SRDT.
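The four testing times can be checked numerically. The sketch below is ours, derived from the closed forms for the $a = 1$ case (the function name is hypothetical); it reproduces the paper's values.

```python
import math

def demo_time(lam0, C, P_C=1.0, b=0.0):
    """Minimum no-failure demonstration time for the a = 1 case, solving
    1 - exp(-(b + T/P_C) * lam0) >= C for T:  T = P_C * (-ln(1-C)/lam0 - b)."""
    return P_C * (-math.log(1 - C) / lam0 - b)

lam0, C = 1e-4, 1 - 1e-4                      # reliability target (lambda_0, C)
t1 = demo_time(lam0, C, P_C=0.05, b=50000)    # accelerated OP, prior information
t2 = demo_time(lam0, C, P_C=0.05)             # accelerated OP, no prior
t3 = demo_time(lam0, C, b=50000)              # normal OP, prior information
t4 = demo_time(lam0, C)                       # normal OP, no prior
# (t1, t2, t3, t4) ≈ (2105.2, 4605.2, 42103.4, 92103.4) hours
```

The ratio t1/t3 (and t2/t4) is exactly $P_C = 0.05$, matching the claim that the accelerated OP needs $P_C$ times the normal-OP testing time.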
$h(x_1, x_2, \dots, x_n) = \int f(p)\, g(x_1, x_2, \dots, x_n \mid p)\,dp$ (11)

obtained before SRDT. For example, many test records from SRGT are left before SRDT. For continuous-type software, after selecting the last $m$ inter-failure times $T_1, T_2, \dots, T_m$, which give the experience sample values, we can use the method of parameter synthesis to estimate the hyper-parameters.
The number of software failures $x$ obeys a Poisson distribution with parameter $\lambda t$, as shown in (1). Therefore, the marginal distribution of $x$ is as follows:

$m(x) = \int_0^{+\infty} \mathrm{Gamma}(a, b)\, p(x \mid \lambda)\,d\lambda = \int_0^{+\infty} \frac{b^a}{\Gamma(a)} \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!} e^{-\lambda t}\,d\lambda = \frac{b^a t^x\, \Gamma(a + x)}{x!\, \Gamma(a)\, (b + t)^{a+x}}$ (12)
The first and second moments of $m(x)$ are given by the following:

$E(x) = \sum_{x=0}^{+\infty} x\, m(x) = \sum_{x=0}^{+\infty} x \int_0^{+\infty} \frac{b^a}{\Gamma(a)} \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!} e^{-\lambda t}\,d\lambda = \frac{at}{b}$ (13)

$E(x^2) = \sum_{x=0}^{+\infty} x^2\, m(x) = \sum_{x=0}^{+\infty} x^2 \int_0^{+\infty} \frac{b^a}{\Gamma(a)} \lambda^{a-1} e^{-b\lambda}\, \frac{(\lambda t)^x}{x!} e^{-\lambda t}\,d\lambda = \frac{at}{b} + \frac{(a+1)a\, t^2}{b^2}$ (14)
Let $t$ be a time interval that is long relative to $T_1, T_2, \dots, T_m$. Then, over the interval $(0, t]$, the experience sample values of the numbers of software failures are $t/T_1, t/T_2, \dots, t/T_m$. Using the mean value and the mean square value of $t/T_1, t/T_2, \dots, t/T_m$ to estimate the first and second moments,

$\tilde{E}(x) = \frac{1}{m}\sum_{i=1}^{m} \frac{t}{T_i}, \qquad \tilde{E}(x^2) = \frac{1}{m}\sum_{i=1}^{m} \left(\frac{t}{T_i}\right)^2$ (15)
Then the estimates of $a$ and $b$ can be calculated from equations (13) and (14):

$\tilde{a} = \frac{\tilde{E}(x)}{(\tilde{E}(x^2)/\tilde{E}(x)) - \tilde{E}(x) - 1}, \qquad \tilde{b} = \frac{t}{(\tilde{E}(x^2)/\tilde{E}(x)) - \tilde{E}(x) - 1}$ (16)
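The moment-matching estimation of (15)–(16) can be sketched as follows. Since the paper's Table 1 data are not reproduced here, the inter-failure times below are hypothetical, and the function name is ours.

```python
def estimate_hyperparams(times_between_failures, t):
    """Moment-based estimates of the Gamma(a, b) hyper-parameters, eqs. (15)-(16).

    times_between_failures: the last m inter-failure times T_1..T_m from SRGT.
    t: a reference interval long relative to the T_i.
    """
    xs = [t / Ti for Ti in times_between_failures]  # experience failure counts
    m = len(xs)
    E1 = sum(xs) / m                     # first-moment estimate, eq. (15)
    E2 = sum(x * x for x in xs) / m      # second-moment estimate, eq. (15)
    d = E2 / E1 - E1 - 1                 # common denominator in eq. (16)
    a = E1 / d
    b = t / d
    return a, b

# Hypothetical inter-failure times in hours (not the paper's Table 1 data).
Ti = [2857, 2500, 2222, 2000, 1818, 1667, 2632, 1613]
a, b = estimate_hyperparams(Ti, t=100000)
# By construction the estimates satisfy the first-moment equation E(x) = a*t/b.
```

Note that the denominator in (16) must be positive, i.e. the sample of $t/T_i$ values must be overdispersed relative to a pure Poisson, which is exactly what the Gamma mixing in (12) models.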
For example, Table 1 gives a group of inter-failure times used as the experience sample values of $T_i$. Let $t = 100000$ hours; the corresponding experience sample values of software failures $t/T_i$ are also shown in Table 1. According to formulas (15) and (16), we obtain $\tilde{a} = 64.2$, $\tilde{b} = 223778.0$.
It then follows from formula (5) that, for the given reliability target $(\lambda_0, C)$, the required minimum testing time $T$ of no-failure SRDT based on importance sampling and the prior distribution is the least $t$ satisfying the following inequality:

$P(\lambda \le \lambda_0) = \int_0^{\lambda_0} f(\lambda \mid 0, t/P_C, a, b)\,d\lambda = \int_0^{\lambda_0} \frac{(b + t/P_C)^{a}}{\Gamma(a)} \lambda^{a-1} e^{-(b + t/P_C)\lambda}\,d\lambda \ge C$
6 Conclusion
In this paper, we brought importance sampling theory and prior information into SRDT and proposed an accelerated SRDT method combining the two. According to the characteristics of highly reliable software and importance sampling theory, an accelerated OP and an acceleration factor were given. Prior information on software reliability was also considered, which can significantly reduce the number of test cases. This provides theoretical and technical support for verifying highly reliable software. In future work, we will study the application of the method in actual projects to verify its feasibility.
References
1. Kuball, S., May, J.: Test-Adequacy and Statistical Testing: Combining Different Properties of a Test-Set. In: 15th ISSRE, pp. 161–172. IEEE Comp. Soc., Washington, DC (2004)
2. Fenton, N.E., Pfleeger, S.L.: Software Metrics: A Rigorous & Practical Approach, 2nd edn. Intl Thomson Computer Press (1996)
3. Butler, R.W., Finelli, G.B.: The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software. IEEE Transactions on Software Engineering 19(1), 3–12 (1993)
4. Andy, P., Wassim, M., Yolanda, M.: Estimation of software reliability by stratified sampling. ACM Transactions on Software Engineering and Methodology 8(3), 263–283 (1999)
5. Hecht, M., Hecht, H.: Use of importance sampling and related techniques to measure very high reliability software. In: Aerospace Conference Proceedings, pp. 533–546. IEEE Aerospace and Electronics Systems Soc., Montana (2000)
6. Alarm, S., Chen, H., Ehrlich, W.K., et al.: Assessing software reliability performance under highly critical but infrequent event occurrences. In: 8th ISSRE, pp. 294–303. IEEE Comp. Soc., Los Alamitos (1997)
7. Zhao, L., Wang, J.-M., Sun, J.-G.: Study on the Relationship between Software Testability and Reliability. Chinese Journal of Computers 30(6), 986–991 (2007) (in Chinese)
8. Li, Q., Li, H., Wang, J.: Effects of software test efficiency on software reliability demonstration testing effort. Journal of Beijing University of Aeronautics and Astronautics 37(3), 325–330 (2011) (in Chinese)
9. Li, Q.-Y., Li, X., Wang, J., Luo, L.: Study on the Accelerated Software Reliability Demonstration Testing for High Reliability Software Based on Strengthened Operational Profile. In: Proceedings of ICCTD 2010, pp. 655–662 (2010)
10. Jiong, Y., Ji, W.: Software statistical test acceleration based on importance sampling. Computer Engineering and Science (3), 64–66 (2005) (in Chinese)
11. Qin, Z., Chen, H., Shi, Y.: Reliability Demonstration Testing Method for Safety-Critical Embedded Applications Software. In: Proceedings of the International Conference on Embedded Software and Systems, pp. 481–487. IEEE Comp. Soc., Washington, DC (2008)
12. Miller, K.W., Morell, L.J., Noonan, R.E.: Estimating the probability of failure when testing reveals no failures. IEEE Transactions on Software Engineering 18(1), 33–43 (1992)
The Stopping Criteria for Software Reliability Testing
Based on Test Quality
1 Introduction
Since Goodenough and Gerhart first raised the question of when to stop software testing in 1975, while investigating whether testing can ensure the correctness of software, how to define a stopping criterion has been a hot and difficult problem in the software testing field [1]. Since the 1980s, such research has never stopped. References [2] and [3] established software reliability models based on reliability theory. Building on such models, [4] quantitatively addressed when to stop testing and gave a reliability metric model and method. Under given budget constraints and test costs, [5] and [6] proposed methods for determining the stopping time of testing. Reference [7] put forward an optimal software release tactic based on an optimal test-cost tactic containing three cost elements. References [8] and [9] studied the best software release time on the basis of comprehensive reliability requirements and cost constraints. Reference [10] studied when to stop testing in the case where large amounts of code change during the testing process. Clearly, the ideal and most reasonable approach is to quantify the testing process and use the quantitative measurement results to guide decision-making in testing. How to measure testing, and how to make the measurement results accurately reflect the testing process, are problems that researchers in the testing area are working hard to solve.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 257–264.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
258 Q. Li and J. Wang
$Successful(T)$ is a predicate stating that the test set $T$ is a subset of the input domain $D$ and that $Correct(t)$ is true for every element $t$ of $T$. In other words,

$T \subseteq D,\ \forall t \in T,\ Correct(t) \Leftrightarrow Successful(T)$ (2)

Ideally, a test set realizing the stopping criteria for SCT can be described as a test set $T$ with the following property:

$T \subseteq D,\ Successful(T) \Rightarrow Successful(D)$ (3)

In other words, a successful SCT comprises two parts: first, the software behaves normally during testing with enough test cases; second, the success of the software's behavior on the finite test set represents the success of its behavior on the whole input domain. However, Howden has shown that no computable function exists that can prove that the correctness of the testing domain represents that of the input domain when the former is only a proper subset of the input domain [12].
2.2.2 The Stopping Criteria for SRT Defined from the Point of Test Quality [14]
The actual usage of the software is a subset of all possible usage; each element of the set represents a possible running condition of the system. The purpose of test-quality measurement is to measure the system's ability to run such a sample properly. Because the population is infinite, it is impossible to test a system completely, and the usage of the system must be inferred validly with statistical methods.
The actual usage can be seen as a random process obeying a probability distribution. If the probability distribution used in the testing process is the same as the one found in real usage, the testing process can be regarded as converging to the usage process. Therefore, SRT quality can be considered in terms of measuring the statistical characterization of the test set.
which have different abilities to express the connotation that guides testing. However, these criteria are all called stopping criteria for SRT defined from the point of test quality.
Fig. 3. The relationship between test cases set T and input space D
Using test set $t$ to test the program $p$, testing can be stopped if $P(t) \ge P_r$, where $P(t)$ is the probability of the appearance of $t$ in the input space $D$, and $P_r$ is the required limit value.
D | D1  D2  …  Dm
p | p1  p2  …  pm

D1, D2, …, Dm are the partitions of the input space of program S according to the user's actual usage, and p1, p2, …, pm are the corresponding occurrence probabilities. The reliability test set T can therefore be seen as a sample drawn by N rounds of sampling from the above distribution.
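Drawing such a test set from the partitioned profile can be sketched as follows; the partition labels and probabilities are hypothetical.

```python
import random
from collections import Counter

def sample_test_set(partitions, probs, N, rng):
    """Draw a reliability test set of N cases: each case falls in partition D_i
    with usage probability p_i (i.e., sampling from the operational profile)."""
    return rng.choices(partitions, weights=probs, k=N)

partitions = ["D1", "D2", "D3"]
probs = [0.5, 0.3, 0.2]      # hypothetical usage distribution
rng = random.Random(42)
T = sample_test_set(partitions, probs, 1000, rng)
counts = Counter(T)
# counts["D1"]/1000 should be close to 0.5, and so on for the other partitions.
```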
It is known from the theory of the chi-square statistic that the reliability test set T is sampled from the population with the above distribution; the sample must therefore satisfy the following statistical properties. Suppose the numbers of test cases sampled in D1, D2, …, Dm are n1, n2, …, nm, which satisfy

$\sum_{i=1}^{m} n_i = N, \qquad q^2 = \sum_{i=1}^{m} \frac{(n_i - N p_i)^2}{N p_i}$ (4)
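The statistic $q^2$ of eq. (4) is straightforward to compute. In the hypothetical example below, 100 test cases are allocated over three partitions, and the result is compared informally with the chi-square critical value for $m - 1 = 2$ degrees of freedom.

```python
def chi_square_stat(counts, probs, N):
    """q^2 = sum_i (n_i - N*p_i)^2 / (N*p_i), eq. (4): how far the test-set
    allocation deviates from the usage profile."""
    return sum((n - N * p) ** 2 / (N * p) for n, p in zip(counts, probs))

# Hypothetical allocation of N = 100 test cases over three partitions.
probs = [0.5, 0.3, 0.2]
counts = [52, 29, 19]
q2 = chi_square_stat(counts, probs, 100)
# q2 = 0.08 + 1/30 + 0.05 ≈ 0.163, well below the chi-square(2) 95% critical
# value 5.991, so this allocation is consistent with the profile.
```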
$\lim_{n \to \infty} \frac{1}{n} \log_2 \frac{\Pr[x_0, x_1, \dots, x_n \mid U]}{\Pr[x_0, x_1, \dots, x_n \mid T]} < \varepsilon$ (5)

Here, the stochastic process U denotes the usage chain, the stochastic process T denotes the testing chain, $\Pr[x_0, x_1, \dots, x_n \mid U]$ denotes the probability of the input sequence $(x_0, x_1, \dots, x_n)$ being generated by the usage chain, and $\Pr[x_0, x_1, \dots, x_n \mid T]$ the probability of it being generated by the testing chain.
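The normalized log ratio of eq. (5) can be sketched for two finite-state Markov chains. The transition matrices and the input sequence below are hypothetical, and initial-state probabilities are ignored for simplicity; identical chains give a rate of exactly zero, and the criterion asks the rate to stay below a small ε.

```python
import math

def log_ratio_rate(seq, P_U, P_T):
    """(1/n) * log2( Pr[seq | U] / Pr[seq | T] ), eq. (5), for two Markov chains
    given as transition matrices P_U (usage) and P_T (testing).
    Only transition probabilities are used (the initial state is ignored)."""
    n = len(seq) - 1
    total = 0.0
    for s, t in zip(seq, seq[1:]):
        total += math.log2(P_U[s][t]) - math.log2(P_T[s][t])
    return total / n

# Two hypothetical 2-state chains and a hypothetical observed input sequence.
P_U = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
P_T = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}
seq = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
rate = log_ratio_rate(seq, P_U, P_T)
# log_ratio_rate(seq, P_U, P_U) is exactly 0: a testing chain identical to the
# usage chain trivially satisfies the criterion.
```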
$H_{RT} = -\sum_{i=1}^{m} p_i \log_2 p_i$ (7)

As described in 2.2.5, suppose the numbers of test cases sampled in D1, D2, …, Dm are n1, n2, …, nm, which satisfy

$\sum_{i=1}^{m} n_i = N$ (8)
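Both the profile entropy of eq. (7) and the empirical entropy of a concrete allocation $n_i/N$ can be computed directly; the profile and the counts below are hypothetical.

```python
import math

def entropy_bits(probs):
    """H = -sum_i p_i * log2(p_i), eq. (7): entropy of a usage distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.3, 0.2]        # hypothetical usage profile
H_RT = entropy_bits(probs)     # target entropy, about 1.485 bits

# Empirical entropy of a hypothetical test-set allocation n_i / N.
counts, N = [52, 29, 19], 100
H_emp = entropy_bits([n / N for n in counts])
# H_emp close to H_RT indicates the test set reflects the usage profile.
```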
Stopping criteria for software testing can quantitatively establish the requirements of software testing, measure the quality of software testing, and guide the selection of test cases and the observation and recording of the software's behavior during testing. They can also assure the quality of software testing while avoiding unnecessary testing. This paper summarized the existing results on SCT from the point of view of test quality, and four test-quality-based stopping criteria for SRT were proposed. In future work, we will compare the four criteria and study their relationships. As the process of establishing these criteria shows, they are not easy to measure; that is to say, using the stopping criteria to guide practical testing still requires a transitional process. Nevertheless, the research itself is indispensable.
References
1. Goodenough, J.B., Gerhart, S.L.: Toward a Theory of Test Data Selection. IEEE Transactions on Software Engineering SE-3(2), 156–173 (1975)
2. Musa, J.D., Frank Ackerman, A.: Quantifying Software Validation: When to Stop Testing? IEEE Software 6(3), 19–27 (1989)
3. Schneidewind, N.F.: Reliability modeling for safety-critical software. IEEE Transactions on Reliability 46(1), 88–98 (1997)
4. Garg, M., Lai, R., Jen Huang, S.: When to stop testing: A study from the perspective of software reliability models. IET Softw. 5(3), 263–273 (2011)
5. Hou, R.-H., Kuo, S.-Y., Chang, Y.-P.: Optimal release times for software systems with scheduled delivery time based on the HGDM. IEEE Transactions on Computers 46(2), 216–221 (1997)
6. Yang, B., Hu, H., Zhou, J.: Optimal Software Release Time Determination with Risk Constraint. In: Proc. 54th Ann. Reliability and Maintainability Symp., pp. 393–398 (2008)
7. Huang, C.-Y., Kuo, S.-Y., Lyu, M.R.: Optimal software release policy based on cost and reliability with testing efficiency. In: IEEE Computer Society's International Computer Software and Applications Conference, pp. 468–473 (1999)
8. Ehrlich, W., Prasanna, B., Stampfel, J., Wu, J.: Determining the cost of a stop-test decision. IEEE Software 10(2), 33–42 (1993)
9. Xie, M.: On the determination of optimum software release time. In: 1991 International Symposium on Software Reliability Engineering, pp. 218–224 (1991)
10. Siddhartha, R.D., McIntosh, A.A.: When to Stop Testing for Large Software Systems with Changing Code. IEEE Trans. on Software Engineering 30(4), 318–323 (1994)
11. Li, Q., Ruan, L., Liu, B.: Research on Software Reliability Testing Adequacy. Measurement & Control Technology (11), 49–52 (2003) (in Chinese)
12. Zhu, H., Jin, L.: Software Quality Assurance and Testing, pp. 70–215. Science Press, Beijing (1997) (in Chinese)
13. Li, Q., Lu, M., Ruan, L.: Theoretical Research on Software Reliability Testing Adequacy. Journal of Beijing University of Aeronautics and Astronautics 29(4), 312–316 (2003) (in Chinese)
14. Li, Q.: Theoretical Research on Software Reliability Testing Adequacy. Ph.D. Thesis, Beijing University of Aeronautics and Astronautics (2004) (in Chinese)
15. Chen, Y.: Modelling Software Operational Reliability via Input Domain-Based Reliability Growth Model. In: Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing, pp. 314–323 (1998)
16. Musa, J.D.: Operational Profiles in Software Reliability Engineering. IEEE Software 10(2), 14–32 (1993)
17. Whittaker, J.A.: A Markov Chain Model for Statistical Software Testing. IEEE Transactions on Software Engineering 20(10), 812–824 (1994)
18. Kullback, S.: Information Theory and Statistics. Wiley, New York (1958)
19. Zeng, G., et al.: Summary of Systems Theory, Information Theory, and Control Theory, pp. 149–151. Central South University of Technology Press, Hunan (1986) (in Chinese)
20. Bin, L.: Software Reliability Research. Postdoctoral Research Report, Beijing University of Aeronautics and Astronautics (2002) (in Chinese)
The CATS Project
1 Introduction
The `Campus Tools for Students' (CATS) project1 aims at supporting students with hearing impairments or Learning Disabilities (LD) during classes, individual study, and use of administrative, ICT-based services.
During lectures, hearing-impaired students lose a great amount of information delivered by the teacher via his/her voice. Students with LD, on the other hand, face difficulties in taking notes and, once again, lose a great amount of information. Specific tools are thus necessary to address these peculiar difficulties and maximize the usefulness of lectures.
During individual study, it is particularly important for hearing-impaired students to review a recording of the lecture, augmented with automatic captioning; this approach is highly effective in reducing learning time and increasing students' ability to
1 CATS is funded by the Italian Ministry of Education, University and Research (MIUR). See the project's web site at http://cats.unimore.it/.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 265–272.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
266 L. Sbattella et al.
take advantage of their notes. Students with LD can benefit from study materials whose linguistic complexity is reduced and tailored to their specific difficulties; such adapted materials could increase the effectiveness of individual study.
Finally, the software services students interact with are often not designed with universal accessibility in mind, as their interfaces cannot be personalized.
Thus, the project's main goals are: analysis of the difficulties faced by students with LD or hearing impairments during lectures and individual study; design of methodologies to support students during lectures and individual study; and personalization of software services. Students and teachers will be involved in evaluating the aforementioned solutions by means of extended studies. The solutions will be offered to other universities as free software (open source).
This paper is structured as follows. In Section 2, we analyze the state of the art of assistive tools. In Section 3, we describe the CATS project. In Section 4, we present the evaluation plan of the project. Finally, in Section 5, we sum up the CATS project and report future work.
2 See http://hcibib.org/accessibility
3 See http://www.w3.org/WAI/
4 The CC/PP standard, see http://www.w3.org/Mobile/CCPP
5 See http://www.who.int/classifications/icf/en
According to research carried out in a school environment [15], the introduction of Interactive White Boards (IWB) has been found to be an effective optimization of resources, providing a useful environment for authoring learning materials and giving classes. The video recorded by the IWB software should be augmented with closed captioning. In [12, 14, 16] some tools able to carry out automatic captioning of videos are presented; these software applications, already on the market, have become more and more sophisticated over the years, and nowadays obtain satisfying results.
Several devices have been developed to help persons with auditory impairments in their daily activities. These assistive listening devices can be exploited to overcome the negative effects of distance, background noise, or poor acoustics [11]. In particular: FM systems (a small radio station operating on special frequencies; the speaker uses a microphone that transmits to a receiver connected directly to the hearing aid in use); infrared systems (the sound is transmitted on infrared light waves); induction loop systems (normally installed permanently in a dedicated area; they are connected to the speaker's microphone and create a current that, in turn, produces a magnetic field in the room); text telephones (allow one to hold a telephone conversation using a keyboard); automatic speech recognition (allows the computer to convert a verbal message into text); closed captioning TV (allows one to see the transcription of a conversation); screen readers (read the content of computer interfaces); and warning systems (alert the person with a disability when a sound occurs, e.g. the ring of a doorbell or telephone, or danger alarms such as fire alarms).
Error correction is based on two fundamental approaches. The first catches
non-word errors (spelling errors that result in a sequence of characters not
belonging to the language vocabulary); classic algorithms are based on Kernighan's
confusion matrices and on the so-called "edit distance". The second catches real-word
errors (spelling errors that result in a sequence of characters still belonging to the
language vocabulary); a common approach is to leverage language models, defined
by means of n-grams.
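As an illustration of the first approach, the sketch below ranks candidate corrections for a non-word error by Levenshtein distance. The toy vocabulary and the misspelling are invented for this example; a real corrector would also weight edits with Kernighan-style confusion matrices and check real-word errors against an n-gram language model.

```python
# A minimal sketch of non-word error correction: candidate words from a toy
# vocabulary are ranked by edit (Levenshtein) distance.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

vocabulary = ["caption", "capture", "captioning", "action"]  # toy lexicon
misspelling = "captoin"   # not in the vocabulary: a non-word error
best = min(vocabulary, key=lambda w: edit_distance(misspelling, w))
print(best)  # "caption" is the closest vocabulary word
```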
TTS-based applications are widely used today and provide quite accurate pronunciation. For example, Loquendo TTS and Festival TTS are used for high-quality
text reading, while JAWS, NVDA, and ORCA are screen readers (i.e., they are used for
reading the system interface).
Finally, tools that can support students when they take notes during classes include
OneNote (a text editor able to record audio and connect it to the text) and the
Livescribe pen (which is used for handwriting and is able to record audio and connect
it to the notes).
3 The Project
The following sections summarize the activities that are planned and the
hardware/software solutions that are under development.
ICF and service adaptation. The project is based on the ICF* model [1], a subset of
the original WHO ICF specification, extended with new, technology-oriented
attributes. ICF* provides a simple yet expressive model for the description of user
models and the personalization of software applications. In particular, the project will
focus on using portable devices as a terminal for accessing such personalized services.
Software virtualization. Students with disabilities often encounter difficulties related
to the availability of the software they need on the computers of the university network. Whenever a student needs specific assistive software, that software must be
installed on the student's computer. Making assistive software directly available on
the university network gives more autonomy to the students, allowing them to access
software resources whenever needed. The technology known as VMware ThinApp
allows deploying virtualized software over the Internet. This system guarantees high
performance and effective management, saving time and resources. In particular,
this approach permits better management of software licenses: sharing licenses in
"time sharing" mode allows optimizing the total number of licenses acquired, based
on the students' actual usage of the software.
A web-based survey was developed, leveraging the ICF* model. Data related to
limitations in didactic participation, with respect to the university environment,
will be compared with information coming from the student database, compiled
by the social-psychological-pedagogical team that met students at the beginning of
their university career. This survey will allow a better definition of the most important
areas of participation on which the tools developed by the project should focus.
Moreover, as ICF* extends the ICF specification with technology-related attributes
that specify students' human-machine interaction skills, the survey will make it possible
to define user profiles, supporting personalization of software services in many ways.
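As a minimal sketch of such profile-driven personalization (the ICF*-style attribute names and numeric scales below are invented for illustration; the real model defines its own attributes), a profile could switch on adaptations as follows:

```python
# Hypothetical sketch: a user profile built from survey answers selects the
# adaptations applied when didactic material is served. Attribute names and
# the 0-4 severity scale are invented for this example.
def adaptations(profile):
    """Return material adaptations for a (hypothetical) ICF*-style profile."""
    adapt = []
    if profile.get("hearing_function", 0) >= 2:   # moderate or worse impairment
        adapt += ["subtitles", "transcripts"]
    if profile.get("reading_function", 0) >= 2:   # e.g. dyslexia-related difficulty
        adapt += ["larger_font", "text_to_speech", "concept_maps"]
    return adapt

print(adaptations({"hearing_function": 3, "reading_function": 1}))
# ['subtitles', 'transcripts']
```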
Fig. 1. PoliNotes (left) showing slide contents edited and annotated on the student's laptop.
PoliConcepts (right) summarizing an Italian text about history and drawing the related mental
map
270 L. Sbattella et al.
PoliConcepts. A system, based on domain ontologies, able to extract the main concepts
from texts belonging to a predefined domain. Such concepts will be used to produce a
summary and a conceptual map of the text. Moreover, the tool will be able to infer
new concepts, thus enhancing the domain ontology. Dyslexic students can use this
tool to build conceptual maps in a semi-automatic way. A beta version has been
developed (see Fig. 1) and testing is currently ongoing.
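The idea can be sketched as follows. This is an illustrative toy, not the actual PoliConcepts implementation: a hand-made "ontology" maps concepts to broader terms, concepts found in a text are collected, and parent-to-child edges for a concept map are emitted.

```python
# Illustrative sketch only: concept spotting against a tiny hand-made domain
# ontology (concept -> broader concept; entries invented for this example).
ontology = {
    "renaissance": "history",
    "reformation": "history",
    "printing press": "technology",
}

def extract_concepts(text):
    """Return the ontology concepts that occur in the text."""
    text = text.lower()
    return sorted(c for c in ontology if c in text)

def concept_map_edges(concepts):
    """Emit (broader concept, concept) edges for a simple concept map."""
    return sorted({(ontology[c], c) for c in concepts})

text = "The Renaissance and the printing press reshaped European culture."
found = extract_concepts(text)
print(found)                     # ['printing press', 'renaissance']
print(concept_map_edges(found))  # [('history', 'renaissance'), ('technology', 'printing press')]
```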
IWB-augmented classes. A component of the IWB software allows lecturers to
activate audio/video recording of the screen. The system then, leveraging ASR software,
generates subtitles for the audiovisual material. This audio/video material will be
integrated into slides and virtually any other didactic material that the lecturer
distributes to students using digital media [18]. The material will be accessed through
an accessible web interface (see Fig. 2) allowing students to access the available
material according to their needs: for example, adding or removing subtitles,
changing font size and typeface, selecting audio only, changing the position on the
page, and so forth. The web interface will allow students to personalize the material
and to create their own indexes and classifications, linking the material with
comments, attachments, etc. The possibility to share contents with others will also be
included.
Since the earliest stage of the project, a set of procedures has been defined in order
to carry out the evaluation process: 1) a semi-structured interview track was defined
for tutors working with students with disabilities or LD, in order to collect qualitative
data about the needs expressed by the students; 2) another interview track was
defined, to be administered to students together with the ICF* web survey, in order to
highlight the problems and needs arising in the context of lectures; 3) the complete
process going from a student's application to the final development of an Individualized
Services Plan (ISP) was analyzed; 4) a grid for evaluating already existing
inclusive teaching practices was elaborated, which will be fine-tuned after the
analysis of students' needs; and 5) the main perspective of the whole project was
defined as the passage from a "dependency culture" model to a needs-driven approach.
The evaluation of the proposed solutions will be carried out on two levels:
first, their impact on students' performance will be measured, adopting well-organized,
repeatable experimental settings; second, an in-depth analysis of further qualitative data
will be performed, with a particular focus on the social impact of the introduction of
campus tools on the personal learning experiences of university students.
5 Conclusion
In this paper we presented the CATS project, which aims at providing different tools
to support university students with hearing impairments or LD. The project, started in
July 2010, is currently ongoing. During the next year, we will conclude the development
of all the planned solutions and start the evaluation phase, which will involve
teachers and students of the three participating universities.
References
1. Sbattella, L., Tedesco, R., Pegorari, G.: Personalizing and making accessible innovative
academic services using ICF*, an extended version of the WHO ICF model. In: INTED
Conference, Valencia, Spain (2011)
2. Hovy, E., Lin, C.: Advances in Automated Text Summarization. MIT Press (1999)
3. Borrino, R., Furini, M., Roccetti, M.: Augmenting Social Media Accessibility. In: International
Cross-Disciplinary Conference on Web Accessibility, W4A (2009)
4. Corni, F., Gilberti, E.: A Proposal of VnR-Based Dynamic Modelling Activities to Initiate
Students to Model-Centred Learning. Physics Education 44 (2009)
5. Furini, M., Ghini, V.: An Audio-Video Summarization Scheme Based on Audio and Video
Analysis. In: IEEE CCNC (2006)
6. Gonzalez, D.: Text-to-speech applications used in EFL contexts to enhance pronunciation.
In: TESL-EJ (2007)
7. Cohen, V.: Learning styles in a technology-rich environment. Journal of Research on
Computing in Education 29(4), 338–351 (1981)
8. Hatzivassiloglou, V., Klavans, J., Eskin, E.: Detecting text similarity over short passages:
Exploring linguistic feature combinations via machine learning. In: EMNLP (1999)
9. Hatzivassiloglou, V., Klavans, J., Holcombe, M., Barzilay, R., Kan, M., McKeown, K.:
Simfinder: A flexible clustering tool for summarization. In: NAACL Workshop on Automatic
Summarization, Pittsburgh, PA, United States (2001)
10. Knight, K., Marcu, D.: Summarization beyond sentence extraction: A probabilistic
approach to sentence compression. Artificial Intelligence 139 (2002)
11. Copley, J., Ziviani, J.: Barriers to the use of assistive technology for children with multiple
disabilities. Occupational Therapy International 11, 229–243 (2004)
12. Higgins, S., Beauchamp, G., Miller, D.: Reviewing the literature on interactive
whiteboards. Learning, Media and Technology 32(3), 213–225 (2007)
13. Jurafsky, D., Martin, J.H.: Speech and Language Processing. Prentice Hall (2000)
14. Kennewell, S., Tanner, H., Jones, S., Beauchamp, G.: Analysing the use of interactive
technology to implement interactive teaching. Journal of Computer Assisted Learning 24,
61–73 (2007)
15. Somekh, B.: Evaluation of the Primary Schools Whiteboard Expansion Project. Report to
the Department for Children, Schools and Families, Becta (2007)
16. Swann, J.I.: Promoting independence and activity in older people. Quay Books (2007)
17. Marrandino, A., Sbattella, L., Tedesco, R.: Supporting note-taking in multimedia classes:
PoliNotes. In: ITHET Conference, Kusadasi, Turkey (2011)
18. Bertarelli, F., Corradini, M., Guaraldi, G., Fonda, S., Genovese, E.: Advanced learning and
ICT: new teaching experiences in a university setting. International Journal of Technology
Enhanced Learning 3(4), 377–388 (2011)
Application of Symbolic Computation in Non-isospectral
KdV Equation
Yuanyuan Zhang
1 Introduction
During the past several years, the study of coupled nonlinear evolution equations
(NEEs) has played an important role in explaining many interesting phenomena in
fields such as fluid dynamics and plasma physics. To understand these nonlinear
mechanisms, much work has been done on solitary wave solutions of NEEs [1-9].
In this paper, we generalize the Wronskian technique to the following
variable-coefficient KdV (vcKdV) equation
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 273–278.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
u = 2 (\ln f)_{xx} ,   (3)

f = W(\phi_1, \phi_2, \ldots, \phi_N) =
\begin{vmatrix}
\phi_1 & \phi_1^{(1)} & \cdots & \phi_1^{(N-1)} \\
\phi_2 & \phi_2^{(1)} & \cdots & \phi_2^{(N-1)} \\
\vdots & \vdots &        & \vdots \\
\phi_N & \phi_N^{(1)} & \cdots & \phi_N^{(N-1)}
\end{vmatrix} ,   N \ge 1 ,   (5)

where \phi_i^{(j)} = \partial^j \phi_i / \partial x^j , 1 \le i \le N , j \ge 1 . Actually, the Wronskian technique only needs the conditions

\phi_{i,xx} = \lambda_i \phi_i ,   (6)

to hold with respect to Eq.(1). These conditions (6) and (7) mean that all functions \phi_i are eigenfunctions
of the Lax representation of Eq.(1):
\phi_{xx} = (\lambda - u) \phi ,   (9)

\lambda_t = 2 h_3 \lambda ,   (11)

\phi_{i,xx} = \left( A_i e^{2\int^t h_3 \, dt} + \sqrt{-1} \, B_i e^{2\int^t h_3 \, dt} \right) \phi_i ,   (12)
\lambda_i = A_i e^{2\int^t h_3 \, dt} + \sqrt{-1} \, B_i e^{2\int^t h_3 \, dt}

still satisfies (11). Splitting each eigenfunction into real and imaginary parts,

\phi_i = \phi_{i1} + \sqrt{-1} \, \phi_{i2} ,   1 \le i \le N ,   (14)

the conditions

\phi_{i,xx} = \lambda_i \phi_i ,   (15)

mean that

\phi_{i1,xx} = A_i e^{2\int^t h_3 \, dt} \phi_{i1} - B_i e^{2\int^t h_3 \, dt} \phi_{i2} ,   (17)

\phi_{i2,xx} = B_i e^{2\int^t h_3 \, dt} \phi_{i1} + A_i e^{2\int^t h_3 \, dt} \phi_{i2} .
k_i(t) = \frac{3}{2} p_i^2 q_i \int^t h_1 e^{3\int h_3 \, dt} \, dt - \frac{1}{2} q_i^3 \int^t h_1 e^{3\int h_3 \, dt} \, dt + 2 q_i ,   (27)

l_i(t) = \frac{3}{2} q_i^2 p_i \int^t h_1 e^{3\int h_3 \, dt} \, dt - \frac{1}{2} p_i^3 \int^t h_1 e^{3\int h_3 \, dt} \, dt + 2 p_i .   (28)
If we choose

C_{1i} = \frac{1}{2} \cos(\delta_{1i}) e^{\gamma_{1i}} ,   C_{2i} = \frac{1}{2} \sin(\delta_{1i}) e^{\gamma_{1i}} ,
C_{3i} = \frac{1}{2} \cos(\delta_{2i}) e^{\gamma_{2i}} ,   C_{4i} = \frac{1}{2} \sin(\delta_{2i}) e^{\gamma_{2i}} ,   (29)
then we have a compact form for \phi_{i1} and \phi_{i2}:

\phi_{i1} = \cos\!\left( \frac{1}{2} q_i e^{\int^t h_3 \, dt} x + k_i(t) \right) \left( e^{\frac{1}{2}\left( p_i e^{\int^t h_3 \, dt} x + l_i(t) \right)} + e^{-\frac{1}{2}\left( p_i e^{\int^t h_3 \, dt} x + l_i(t) \right)} \right) ,   (30)

\phi_{i2} = \sin\!\left( \frac{1}{2} q_i e^{\int^t h_3 \, dt} x + k_i(t) \right) \left( e^{\frac{1}{2}\left( p_i e^{\int^t h_3 \, dt} x + l_i(t) \right)} + e^{-\frac{1}{2}\left( p_i e^{\int^t h_3 \, dt} x + l_i(t) \right)} \right) .   (31)
Let us first concentrate on the case of N = 1. It is not difficult to obtain the zero-order
complexiton solution for Eq.(1):

Case 1

u_1 = 2 \left[ \ln W(\phi_{11}, \phi_{12}) \right]_{xx}

Case 2

u = 2 \left[ \ln W(\phi_{11}, \phi_{12}) \right]_{xx}
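Although Eq.(1) is a variable-coefficient KdV, the core transformation u = 2(\ln f)_{xx} can be sanity-checked numerically in the constant-coefficient limit, where f = 1 + e^{kx - k^3 t} yields the classical one-soliton of u_t + 6uu_x + u_{xxx} = 0. The sketch below only illustrates this transformation, not the complexiton solutions above:

```python
# Numerical sanity check for the Wronskian/Hirota substitution u = 2 (ln f)_xx
# in the constant-coefficient KdV limit u_t + 6 u u_x + u_xxx = 0.
import math

k = 1.3  # arbitrary wave number for the test

def u(x, t):
    """One-soliton u = (k^2/2) sech^2((k x - k^3 t)/2), i.e. 2 (ln f)_xx."""
    eta = k * x - k**3 * t
    return (k**2 / 2.0) / math.cosh(eta / 2.0) ** 2

def u_from_f(x, t, h=1e-4):
    """Compute u = 2 (ln f)_xx directly from f = 1 + exp(k x - k^3 t)."""
    lf = lambda s: math.log(1.0 + math.exp(k * s - k**3 * t))
    return 2.0 * (lf(x + h) - 2.0 * lf(x) + lf(x - h)) / h**2

def kdv_residual(x, t, h=1e-3):
    """u_t + 6 u u_x + u_xxx evaluated with central finite differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2*h, t) - 2*u(x + h, t) + 2*u(x - h, t) - u(x - 2*h, t)) / (2 * h**3)
    return u_t + 6 * u(x, t) * u_x + u_xxx

print(abs(u_from_f(0.4, 0.2) - u(0.4, 0.2)) < 1e-6)  # True: f reproduces u
print(abs(kdv_residual(0.4, 0.2)) < 1e-4)            # True: u solves KdV
```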
4 Conclusion
The complexiton solutions of the variable-coefficient KdV equation are obtained
through the Wronskian technique. The method can also be used to solve other
variable-coefficient nonlinear partial differential equations.
References
1. Yong, X.L., Gao, J.W., Zhang, Z.Y.: Singularity analysis and explicit solutions of a new coupled
nonlinear Schrödinger type equation. Commun. Nonlinear Sci. Numer. Simul. 16, 2513–2518 (2011)
2. Huang, D.J., Zhou, S.G.: Group Properties of Generalized Quasi-linear Wave Equations. J.
Math. Anal. Appl. 366, 460–472 (2010)
3. Wang, Q.: Variational principle for variable coefficients KdV equation. Phys. Lett. A 358,
91–93 (2006)
4. Yan, Z.Y., Chow, K.W., Malomed, B.A.: Exact stationary wave patterns in three coupled
nonlinear Schrödinger/Gross-Pitaevskii equations. Chaos Sol. Fract. 42, 3013–3019 (2009)
5. Fan, E.G.: Supersymmetric KdV-Sawada-Kotera-Ramani equation and its quasi-periodic
wave solutions. Phys. Lett. A 374, 744–749 (2010)
6. Freeman, N.C., Nimmo, J.J.C.: Soliton solutions of the KdV and KP equations: the
Wronskian technique. Phys. Lett. A 95, 1–3 (1983)
7. Nimmo, J.J.C., Freeman, N.C.: A method of obtaining the soliton solution of the
Boussinesq equation in terms of a Wronskian. Phys. Lett. A 95, 4–6 (1983)
8. Hirota, R., Ohta, Y.: Hierarchies of coupled soliton equations. J. Phys. Soc. Jpn. 60, 798–809
(1991)
9. Hirota, R.: Direct Methods in Soliton Theory. Iwanami Shoten, Tokyo (1992) (in Japanese)
Modeling Knowledge and Innovation Driven Strategies
for Effective Monitoring and Controlling of Key Urban
Health Indicators
1 Introduction
Today, urban proliferation leads to sustainability challenges; in particular, sustaining
Urban Health (UH) is highly correlated with the long-term control of UH services [1].
In other words, UH is one of the most significant issues of urbanization, creating
social, political, environmental, and managerial opportunities and threats for health
contributors. UH depends crucially on the proper interaction and cooperation of
various groups and sectors. A wide range of multi-domain stakeholder partnerships and
a high volume of UH information and knowledge circulate in the relevant sections
of urban health, e.g., health care services such as hospitals and clinics. Health
care organizations therefore create and store information and knowledge regularly, e.g.,
patients' health records; how, then, can sustainable UH be achieved? Here,
Knowledge and Innovation Management have the potential to promote sustainable UH
strategies.
280 M. Khobreh, F. Ansari-Ch., and M. Fathi
Fig. 1. An Example of Urban Health Equity Matrix adapted from Urban HEART [8]
Once the performance has been monitored and evaluated, five situations/states (5S)
can be determined from the three-color codes. Figure 2 shows the 5S. The situations
are seen in the colored zones (1 to 5), where:
Situations 1, 3, and 5 are defined as in the Urban HEART matrix, while
situations 2 and 4 (transitional zones) are added in order to detect and
identify potentially risky points. For example, position 2 is close to the edge of
position 3; if the status of this position is not properly and regularly monitored and
the potential transition from zone 2 to zone 3 is not identified, then the possibility of
the relevant indicators changing state from GREEN to YELLOW increases
undesirably. Likewise, a transition from situation 4 to 5 (the worst case) leads
pertinent indicators to change from YELLOW to RED.
In consequence, consideration of the 5S promotes early-stage discovery of risky as
well as transitional points for the indicators. In this research the 5S are considered
especially on the basis of two fundamental assumptions:
First: detecting the potential that can lead to a transition from the desired to the fair,
from the fair to the poor, and then to the poorest situation is significantly important.
Second: the 5S provide a comprehensive view of the colored situations (indicator
states), particularly because the three color situations are extended to five
situations, providing a spectrum of change.
Based on the declaration of the 5S, three strategies are modeled to advance analysis and,
accordingly, inference based on evidence and reasons.
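The 5S idea can be sketched in code. The numeric score scale, the thresholds, and the zone-to-strategy pairing below are all invented for illustration; actual Urban HEART assessments define their own cut-offs:

```python
# Hypothetical sketch of the 5S refinement: the three color codes are split
# into five zones by adding transitional zones 2 and 4. Thresholds and the
# zone-to-strategy pairing are illustrative only.
def situation(score):
    """Map a normalized indicator score in [0, 1] (1 = best) to a 5S zone."""
    if score >= 0.8: return 1, "GREEN"
    if score >= 0.6: return 2, "GREEN, transitional (risk of YELLOW)"
    if score >= 0.4: return 3, "YELLOW"
    if score >= 0.2: return 4, "YELLOW, transitional (risk of RED)"
    return 5, "RED"

def rir_strategy(zone):
    """Illustrative pairing of zones with Radical/Incremental/Radar strategies."""
    return {1: "Radar", 2: "Radar", 3: "Incremental",
            4: "Incremental", 5: "Radical"}[zone]

zone, label = situation(0.65)
print(zone, label, "->", rir_strategy(zone))
```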
To provide the body of knowledge for the RIR strategies, two methods are determined
(see Figure 4):
Fig. 4. Explore new knowledge and Exploit existing knowledge as inputs of RIR strategies
3 Conclusion/Outlook
As explained earlier, the analysis of the Urban HEART Matrix, particularly its three-color
codes, leads to the identification of five situations for the UH indicators. These five
situations are used to select a Radical, Incremental, and/or Radar (RIR) strategy.
The RIR inputs are considered either as new ideas or as existing knowledge. In
addition, the RIR strategies require a body of knowledge, which optimally decreases
the decision failure rate and assures the accomplishment of strategic and operational
UH objectives. These strategies support UH decision and policy makers in identifying
improvement potentials based on acquired evidence and, accordingly, in selecting and
applying an adequate strategy.
References
1. United Nations Human Settlements Programme: UN-HABITAT State of the World's
Cities 2008/2009: Harmonious Cities. London, UK (2008)
2. Howlett, R.J. (ed.): Innovation through Knowledge Transfer. Springer, Heidelberg (2010)
3. Leavitt, P.: Using Knowledge Management to Drive Innovation. American Productivity &
Quality Center, APQC (2003) ISBN: 1928593798
4. Maier, R.: Knowledge Management Systems: Information and Communication
Technologies for Knowledge Management. Springer, Heidelberg (2007)
5. Ansari-Ch, F., Holland, A., Fathi, M.: Advanced Knowledge Management Concept for
Sustainable Environmental Integration. In: 8th IEEE International Conference on
Cybernetic Intelligent Systems, pp. 1–7. IEEE Press, Birmingham (2009)
6. Khobreh, M., Ansari-Ch, F., Nasiri, S.: Knowledge Management Approach for Enhancing
of Urban Health Equity. In: 11th European Conference on Knowledge Management,
Famalicão, Portugal, pp. 554–564 (2010)
7. Khobreh, M., Ansari-Ch., F., Nasiri, S.: Necessity of Applying Knowledge Management
towards Urban Health Equity. In: IADIS Multi Conference on Computer Science and
Information Systems, E-Democracy, Equity and Social Justice, Freiburg, Germany, pp. 3–10
(2010)
8. WHO Centre for Health Development: Urban HEART. World Health Organization,
Kobe, Japan (2010)
9. Kong, X.-Y., Li, X.-Y.: A Systems Thinking Model for Innovation Management: The
Knowledge Management Perspective. In: 14th International Conference on
Management Science & Engineering, pp. 1499–1504. IEEE Press, Harbin (2007)
10. Chuang, S.H.: A resource-based perspective on knowledge management capability and
competitive advantage: an empirical investigation. Expert Systems with
Applications 27(3), 459–465 (2004)
Team-Based Software/System Development
in the Vertically-Integrated Projects (VIP) Program
Randal Abler, Edward Coyle, Rich DeMillo, Michael Hunter, and Emily Ivey
1 Introduction
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 287294.
Springerlink.com Springer-Verlag Berlin Heidelberg 2012
288 R. Abler et al.
The research focus and long-term, large-scale nature of VIP projects provide
several advantages, including:
Engaging faculty in the project at a very high level, because the activities of the
team directly support the faculty member's research effort, including the
generation of publications and prototypes.
Engaging graduate students in the mentoring of undergraduate students who
assist them with their research efforts. This will accelerate the graduate
students' research and enable the undergraduates to learn directly about
the goals and responsibilities of graduate students.
Providing the time and context necessary for students to learn and practice many
different professional skills, make substantial technical contributions to the
project, and experience many different roles on a large design team.
Creating new and unique opportunities for the integration of the research and
education enterprises within the university.
In this paper, we discuss a subset of VIP projects that focus on the development of
large-scale software applications. The scale of these projects and the depth of
knowledge the undergraduates must develop to participate in them have led to the
creation of both new approaches to training the new students that join these teams
each semester and an industry-like approach to evaluating their performance.
In Section 2, we provide overviews of several VIP teams and the software/systems
they are developing. In Section 3, we describe the techniques we have developed for
bringing new VIP students up to speed and creating and supporting a team-based
software development process. In Section 4, we describe how we evaluate the
students' performance on the projects.
Fig. 1. The sensor network deployed by the eStadium team at Purdue. A second network will
be deployed at Georgia Tech to gather and process audio, RF, vibration, and image data.
The Wireless Network subteam of eStadium has designed WiFi networks for the
stands on the north side of the stadium and the suites on the west side of the stadium.
They measured the propagation of RF signals in the stadium when no people were
present and when the stadium was full during a game. They also considered a number
of antenna designs and access point configurations to ensure adequate coverage in the
stadium. These networks should be installed in the stadium within the next year.
The VIP eDemocracy team has developed an Android-based system to aid The Carter
Center's election observation missions [8]. Election observation is the process by
which countries invite organizations such as The Carter Center to observe their
elections to increase transparency and promote electoral validity. Election observation
occurs in several stages but our system focuses solely on election-day processes. In
the old observation process, paper-based forms with lists of questions were distributed
to observers who traveled to polling stations throughout the day and returned in the
evening after poll closing. Difficulties arose as forms were often lost, illegible or
returned late, making it difficult to make an accurate and timely analysis.
To solve these problems, the eDemocracy team developed an Android-based
mobile application [9]. It used the same questions as the paper-based form and sent
responses via SMS to a back-end Command Center for analysis. Development of the
mobile application was performed in Java using version 1.6 of the Android SDK. Use
of Google's API allowed direct integration of the application with the phone's
onboard hardware so that GPS tagging and SMS transmission take place transparently
and without user intervention.
The command center's structure is simple. MySQL is used as the central database
for storing all election observation data and SQLite is used to retrieve messages that
are received by FrontlineSMS. PHP and HTML are used to present the received data
to the administrator/moderator. The command center also consists of a map handler
that displays the current locations of observers and the geographical origin of each
SMS message. This system was beta tested during the Philippine presidential election
in May 2010. Since that time, the system has been updated with improved features
and interfaces. The new system will be deployed in an upcoming election in Liberia
this fall.

Fig. 2. Sample question for election monitors about chain of custody of ballots
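On the command-center side, an incoming message must be turned into a record ready for database insertion. The semicolon-separated key=value message format below is invented for illustration; the actual eDemocracy system defines its own encoding:

```python
# Hypothetical sketch of parsing an observer's SMS payload into a record.
# The message format (semicolon-separated key=value pairs) is invented here.
def parse_observer_sms(body):
    """Split 'k=v; k=v; ...' into a dict with normalized keys."""
    record = {}
    for pair in body.split(";"):
        key, _, value = pair.partition("=")
        record[key.strip().lower()] = value.strip()
    return record

msg = "station=101; q1=YES; q2=NO; gps=6.3106,-10.8047"
print(parse_observer_sms(msg))
# {'station': '101', 'q1': 'YES', 'q2': 'NO', 'gps': '6.3106,-10.8047'}
```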
The three VIP projects above demonstrate the depth and breadth of the teams. This is
made possible by the long-term nature of VIP teams. In fact, each VIP team is best
thought of as a small design firm that conducts research and then develops and
deploys systems based on that research. The experiences that students have on VIP
teams are thus very close to what they will later encounter in industry.
New students on VIP teams function like new employees in industry: they are
developing skills and learning about the team's objectives. Students who have been
on a team for a year perform significant technical tasks that contribute to the
development of prototypes. Students who are close to graduation have both project
management and technical responsibilities. Students who participate for 2 or 3 years
thus gain a clear understanding of how industry-scale software projects function.
New students, however, cannot simply be given access to production code or a deployed
system to experiment with, because of the damage that might result. It is also not
efficient to have the experienced students on a team spend a large amount of time
teaching the new students the basics of the system. Our solution to these problems is
to provide: (1) an initial five- to six-week-long formal training period for new students;
and (2) development servers on which the new students can safely install and experiment
with the latest deployed system without damaging the actual production server.
All computing-oriented VIP teams share a need to build up new students' knowledge
very quickly on such topics as C, MySQL, PHP, and Linux [11]. We have thus
created a collection of course modules on these topics that are available to new VIP
students at the beginning of each semester. The advisers for each team decide which
students from their team should participate. An evaluation of each student's progress
is provided to his/her adviser throughout the duration of the module, including a list
of students who complete exercises each week and, if requested, a demonstration of a
small application the student develops that is related to their team's effort.
The first time a course module is taught, the instructor is a faculty/staff member
who is an expert in the field. The instructor develops the reading list, lecture
materials, assignments, quizzes, and the grading process. During the second offering,
the lectures are taped and made available on the VIP Wiki [4] along with the assignments.
In subsequent semesters, teaching assistants run the course modules. They inform the
students about the lecture viewing and assignment schedule, grade the assignments
and quizzes, and report the students' performances to the advisers.
New students participating in a course module must still participate in their VIP
team's weekly meetings so that they become familiar with the project and contribute by
performing the tasks assigned to them. This participation also enables them to build up
technical and personal connections within the team, which are particularly helpful
for teaching new students.
Since many VIP applications do not conform to a simple web content model but
involve complex database and application programming support, staff administrators
are not able to support the needed servers for each team or application. Therefore the
VIP program at Georgia Tech has moved to a model that has successfully built,
maintained, and provided production-level services on web servers, in conjunction
with a development, test, and quality assurance plan for new software development.
Security policies in the department and campus generally require that production
systems with external visibility, such as a publicly accessible web server, not allow
students to have privileged access as this increases the security risk of other systems
on the same local area network IP subnet. To address this issue, a VIP subnet was
established with separately configured firewall policies. This required an initial effort
in the form of configuring a separate IP subnet, allocating a VLAN on the campus
network to support that subnet, propagating that subnet to the affected campus
network Ethernet switches, and creating a new policy configuration in the firewall
associated with that subnet.
Georgia Tech's VIP Cloud utilizes 4 physical servers to create virtual machines
called guest machines. Each guest machine acts as an independent server, with a
separate network identity, operating system installation, software installation, and
configuration. The guest configurations include team-specific guest servers for 5
teams. To simplify creating new guests, a template guest configuration is maintained.
Each guest machine is allocated to a responsible administrator: a student, staff, or
faculty member. Team-specific guest servers are typically administered by a graduate
student, but ultimately that decision resides with the VIP team's faculty advisor. The
designated administrator must sign a form [4] indicating that they are responsible for
assuring proper use of the guest in compliance with all applicable policies. If the
administrator is not a faculty member, the team's faculty advisor also signs.
To allow students to gain experience with such a challenge, each student involved in
the web project is assigned a specific guest server. Each guest server is configured
with the operating system (RedHat Linux 5.6) preinstalled and a unique network
identity preconfigured. This methodology, developed for team-based guest servers,
applies equally well to individual students' guest servers.
This provides clear objectives within the context of team-based projects. Established
team members generally have practices that align with these objectives, which helps
new team members adopt good techniques for succeeding on the project. All team
members are given mid-semester advisory assessment results. These results are
reviewed individually with new team members and with any student wishing to
discuss his/her progress. The general VIP syllabus, the peer evaluation form, and the
design notebook evaluation form are available on the Georgia Tech VIP wiki [4].
As part of the VIP program, the students are expected to maintain design
notebooks. In addition to meeting notes, these notebooks are expected to contain a
record of student efforts and accomplishments as well as task lists and current issues.
For primarily software-focused development efforts, however, the design notebook does
not provide a good mechanism for tracking code development. Since Subversion is used
to track code changes, the Subversion logs can be reviewed for student
accomplishments. This provides an incentive for the students to make proper and
frequent use of the version control software.
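A small sketch of such a review, assuming the standard quiet-log line format "rNNN | author | date ..." of `svn log -q` (the sample lines below stand in for a real repository):

```python
# Tally commits per author from `svn log -q` output so an adviser can skim
# contribution activity. Assumes the standard "rNNN | author | date" format.
import re
from collections import Counter

def commits_by_author(log_text):
    """Return a Counter of commit counts keyed by author name."""
    authors = re.findall(r"^r\d+ \| ([^|]+) \|", log_text, flags=re.M)
    return Counter(a.strip() for a in authors)

sample = """r3 | alice | 2011-09-01 10:00:00 +0000 (Thu, 01 Sep 2011)
r2 | bob | 2011-08-30 09:00:00 +0000 (Tue, 30 Aug 2011)
r1 | alice | 2011-08-29 08:00:00 +0000 (Mon, 29 Aug 2011)"""

print(commits_by_author(sample))  # Counter({'alice': 2, 'bob': 1})
```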
Acknowledgments. The work reported in this paper was funded in part by the
National Science Foundation under grant DUE-0837225.
References
1. Coyle, E.J., Allebach, J.P., Garton Krueger, J.: The Vertically-Integrated Projects (VIP)
Program in ECE at Purdue: Fully Integrating Undergraduate Education and Graduate
Research. In: ASEE Annual Conf. and Exposition, Chicago, IL, June 18-21 (2006)
2. Abler, R., Krogmeier, J.V., Ault, A., Melkers, J., Clegg, T., Coyle, E.J.: Enabling and
Evaluating Collaboration of Distributed Teams with High Definition Collaboration
Systems. In: ASEE Annual Conference and Exposition, Louisville, KY, June 20-23 (2010)
3. Abler, R., Coyle, E.J., Kiopa, A., Melkers, J.: Team-based Software/System Development
in a Vertically-Integrated Project-Based Course. In: Frontiers in Education, Rapid City,
SD, October 12-15 (2011)
4. The Vertically-Integrated Projects Program,
http://vip.gatech.edu
5. Ault, A.A., et al.: eStadium: The Mobile Wireless Football Experience. In: Conf. on
Internet and Web Applications and Services, Athens, Greece, June 8-13 (2008)
6. Zhong, X., Coyle, E.J.: eStadium: A Wireless Living Lab for Safety and Infotainment
Applications. In: Proc. of ChinaCom, Beijing, China, October 25-27 (2006)
7. Sun, X., Coyle, E.J.: Low-Complexity Algorithms for Event Detection in Wireless Sensor
Networks. IEEE Journal on Selected Areas in Communications 28(7) (September 2010)
8. The Carter Center,
http://www.cartercenter.org/peace/democracy/index.html
9. Osborn, D., et al.: eDemocs: Electronic Distributed Election Monitoring over Cellular
Systems. In: Intl Conf. on Internet and Web Applications and Services, Barcelona, SP
(2010)
10. Abler, R., Wells, G.: Supporting H.323 video and voice in an enterprise network. In: 1st
Conference on Network Administration, May 23-30, pp. 9–15 (1999)
11. Abler, R., Jackson, J., Brennan, S.: High Definition Video Support for Natural Interaction
through Distance Learning. In: Frontiers in Education, Saratoga Springs, NY (October
2008)
12. Mavlankar, A., et al.: An Interactive Region-of-Interest Video Streaming System for
Online Lecture Viewing. In: Intl Packet Video Workshop (PV), Hong Kong, China
(December 2010)
13. Abler, R., Wells, I.G.: Work in Progress: Rapid and Inexpensive Archiving of Classroom
Lectures. In: Frontiers in Education Conf., San Diego, CA (October 2006)
Frameworks for Effective Screen-Centred Interfaces
1 Introduction
It has become a commonplace notion that computer-based technology and forms of
expression transform human experience and that the screen is "the 21st century face
of the image" [1]. There is, thus, clearly an urgent need to examine the ways in which
screen-centred interfaces present images and encode and decode meaning, identity,
and culture, borne out of an intuitive sense that "whoever controls the metaphor
controls the mind" [2]. This is not a question of technology alone, for as Craig
Harris has argued, aesthetics and the technology for creating those aesthetics are
tightly intertwined: "Just as technology is influenced by its potential use, aesthetics or
content is molded by what is possible" [3]. And Lev Manovich has argued that we
are no longer interacting with a computer but with "a culture encoded in digital form" [4].
This paper presents the groundwork for an interdisciplinary project by four
researchers at the University of Regina who are working to advance the state of
knowledge in how aesthetically represented information, in language and in visual
media, is understood, mediated, and processed. Our project builds on our work on
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 295–301.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
296 L. Benedicenti et al.
As early as the 1980s, C. Crawford advocated that real art through computer
games "is achievable, but it will never be achieved so long as we have no path to
understanding. We need to establish our principles of aesthetics, a framework for
criticism, and a model for development" [6]. In his essay on whether computer games
will ever be a legitimate art form, Ernest W. Adams disagrees with the need for a
model of development, as he feels art should be intuitively produced, but he agrees
with the necessity for a methodology of analysis [7].
Other theoretical positions have evolved to focus on either the technological
construction of new media or their social impact. For example, in the quest to
quantify effective human interface design, Brenda Laurel turns to theatre and
Aristotle's Poetics by creating categories of action, character, thought, language,
melody (sound) and enactment [8]. However, Sean Cubitt argues that "the
possibilities for a contrapuntal organisation of image, sound and text [should be]
explored, in pursuit of a mode of consciousness which is not anchored in the old
hierarchies" [9]. Peter Lunenfeld takes a more radical stance by suggesting that once we
distinguish a technoculture of the future/present from that which preceded it, we
need to move beyond the usual tools of contemporary critical theory. His assertion of
the need for a hyperaesthetic that encourages "a hybrid temporality, a real-time
approach that cycles through the past, present and future to think with and through the
technocultures" [10] offers its own set of problematics: computer-based forms are
neither a-historical, nor do they represent a leap in technology so distinct that they are
unlinked to preceding forms.
Processing and experiencing text is embodied; linguistic meaning evokes all
aspects of the experience of reading, physical and cognitive, and every aspect of
language is implicated in embodiment [11], [12]. This notion of the embodied
experience of language corresponds with McLuhan's evocation of the medium as an
extension of the body in Understanding Media [13]. Ubiquitous computing embraces
the embodied nature of language and literature in that it brings the media in closer
contact with the human (for example, an individual becoming immersed in a virtual
reality world). As Peter Stockwell argues, "The notion of embodiment affects every
part of language. It means that all of our experiences, knowledge, beliefs and wishes
are involved in and expressible only through patterns of language that have their roots
in our material existence" [12].
Gibbs Jr. argues that "Understanding embodied experience is not simply a matter
of physiology or kinesiology (i.e., the body as object), but demands recognition of
how people dynamically move in the physical and cultural world (i.e., the body
experienced from a first-person, phenomenological perspective)" [14]. We link this
notion of the embodied experience with McLuhan's conception of the relationship of
media to human experience and understanding, for McLuhan's formulation inherently
recognizes that exposure to a new medium is not only an experience of a new form of
technology but that it also changes the way we relate to and understand the world and
our place in that world. For example, the mobile phone could be considered as an
extension of the ear in that it changes the fundamental way with which the human
body is situated within the world [15].
2. In the hypertext artwork, With Liberty and Justice for All by African
American artist Carmin Karasic, three browser windows weave a story. This work is
interactive as the viewer can click on the images and different photos appear:
http://www.carminka.net/wlajfa/pledge1.htm
Concrete Poetry
3. Concrete poetry created for a visual medium in which the moving visual image is
reflected in the text that emerges: http://www.vispo.com/guests/DanWaber/arms.htm
4. Concrete poetry created for a visual medium in which the animation illustrates the text:
http://dichtung-digital.mewi.unibas.ch/2003/parisconnection/concretepoetry.htm
In the first, the moving visual image is reflected in the text that emerges. In the
second, the animation illustrates the text. In both, metaphor operates at the lexical
level and at the level of image. Why might this be of interest? First, we can work at
metaphor from multiple directions, including at the level of linguistics, which reflects
more closely the experience of using a new media device. Second, because the poem is
fluid, it will lend itself well to the embodied nature of handheld and immersive
worlds: a question might be, what happens when we move the text because we hold it
in our hand as it too moves? Are there differences in cognitive processes and how
they work compared with a static image? The second example provided above might be very
useful for an experimental design because the animation is derived from a fixed text,
so one would have access to both versions (e.g., paper/conventional and digital).
These texts become a useful tool for methodological experimentation: how does one
deal with digital aesthetic objects presented on digital media versus conventional
forms? How do we deal with aesthetic experiences when the mode of delivery has
changed so radically?
Step 4: Data analysis. The team will code the data to prepare a final data set:
analyses of variance of the various cognitive measures (recall, reading speed, etc.)
will be conducted to examine how these measures are affected by the media platform.
Correlational analyses will be performed between the cognitive measures, the
questionnaires examining participants' aesthetic experiences, and the media platforms.
The correlational analyses will also be used to construct a decision support system
linking interface factors for all content with the parameter set as screens change. We
will use compression methods such as Principal Component Analysis and clustering
to extract a core set of measures that will constitute the initial vector state of the
decision support system. The correlational analyses will provide the rules for linking
these parameters and will be used to build an active rule set (either as a look-up table
or as a set of if-then rules) that will form
the knowledge base given to the system. The system, built in this way, essentially
becomes a decision support system, or computer program, capable of forming a
general prediction of the best type of content fragments to use in a certain defined
screen size format. Linking changes in interface parameters (cognitive, cultural, and
aesthetic) with different screens and their descriptions will allow us to infer how to
automatically change a presentation from one interface to another and obtain a desired
effect (cognitive, cultural, and aesthetic).
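The compression-and-rules step described above can be sketched as follows. This is a minimal illustration in Python, assuming synthetic data: the measure names (recall, reading speed, aesthetic rating) are taken from the text, but the data, thresholds, and rule phrasing are invented for illustration.

```python
import numpy as np

def core_measures(X, k=2):
    """Extract k principal components from a measures matrix X
    (rows = participants, columns = cognitive/aesthetic measures)."""
    Xc = X - X.mean(axis=0)                    # centre each measure
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # scores on the core components

def correlation_rules(X, names, threshold=0.7):
    """Turn strong pairwise correlations into simple if-then rules."""
    R = np.corrcoef(X, rowvar=False)
    rules = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(R[i, j]) >= threshold:
                direction = "rises" if R[i, j] > 0 else "falls"
                rules.append(f"if {names[i]} rises then {names[j]} {direction}")
    return rules

# Synthetic data: recall and reading speed strongly coupled, rating independent.
rng = np.random.default_rng(0)
recall = rng.normal(size=200)
speed = recall * 0.9 + rng.normal(scale=0.1, size=200)
rating = rng.normal(size=200)
X = np.column_stack([recall, speed, rating])

print(core_measures(X).shape)      # (200, 2)
print(correlation_rules(X, ["recall", "reading_speed", "aesthetic_rating"]))
```

The extracted rules, stored as a look-up table or if-then set, would seed the knowledge base of the decision support system in the manner the paragraph describes.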
References
1. Ramsay, C.: Personal conversation (January 19, 2011)
2. Bey, H.: The information war. In: Broadhurst Dixon, J., Cassidy, E.J. (eds.) Virtual Futures:
Cyberotics, Technology and Post-Human Pragmatism. Routledge, London (1998)
3. Harris, C. (ed.): Art and Innovation: the Xerox PARC Artist-in-Residence Program. The
MIT Press, Cambridge (1999)
4. Manovich, L.: The Language of New Media. The MIT Press, Cambridge (2001)
5. Mondloch, K.: Screens: Viewing Media Installation Art. University of Minnesota Press,
Minneapolis (2010)
6. Crawford, C.: The Art of Computer Game Design. McGraw-Hill/Osborne Media,
Berkeley, CA (1984)
7. Adams, E.W.: Will Computer Games Ever Be a Legitimate Art Form? In: Mitchell, G.,
Clarke, A. (eds.) Videogames and Art. Intellect Books (2007)
8. Laurel, B.: Computers as Theatre. Addison-Wesley (1991)
9. Cubitt, S.: The Failure and Success of Multimedia. Paper Presented at the Consciousness
Reframed II Conference at the University College of Wales, Newport (August 20, 1998)
10. Lunenfeld, P.: Snap to Grid: A User's Guide to Digital Arts, Media, and Cultures. The MIT
Press, Cambridge (2002)
11. Geeraerts, D.: Incorporated but not embodied? In: Brone, G., Vandaele, J. (eds.) Cognitive
Poetics: Goals, Gains and Gaps, pp. 445–450. Walter de Gruyter, New York (2009)
12. Stockwell, P.J.: Texture - A Cognitive Aesthetics of Reading. Edinburgh University Press,
Edinburgh (2009)
13. McLuhan, M.: Understanding Media: The Extensions of Man. The MIT Press, Cambridge
(1964)
14. Gibbs Jr., R.W.: Embodiment and Cognitive Science. Cambridge University Press,
Cambridge (2006)
15. Gordon, W.T., Hamaji, E., Albert, J.: Everyman's McLuhan. Mark Batty Publisher, New
York (2007)
16. Greenfield, A.: Everyware: The Dawning Age of Ubiquitous Computing. New Riders,
Berkeley (2006)
Analytical Classification and Evaluation of Various
Approaches in Temporal Data Mining
Abstract. Modern databases contain vast amounts of information, and their
manual analysis for the purpose of knowledge discovery is almost impossible.
Today the need for automatic extraction of useful knowledge from large
volumes of data is fully recognized, and automatic analysis and data discovery
tools are developing rapidly. Data mining analyzes large amounts of
unstructured data and helps discover the connections required for a better
understanding of the underlying concepts. Temporal data mining, in turn,
concerns the analysis of sequential data streams with temporal dependence; its
purpose is the detection of hidden patterns, unexpected behaviours, or other
exact connections in the data. Various algorithms have hitherto been presented
for temporal data mining. The aim of the present study is to introduce, collect,
and evaluate these algorithms in order to create a global view of temporal data
mining analyses. Given the significant importance of temporal data mining in
diverse practical applications, the collection we suggest can be considerably
beneficial in selecting an appropriate algorithm.
Keywords: Temporal data mining (TDM), TDM algorithms, Data set, Pattern.
1 Introduction
Analysis of sequential data streams to understand the hidden rules within various
applications (from the investment stage to the production process) is significantly
important. Since computation is growing in many practical fields, large amounts of
data are being collected rapidly, and various frameworks are required for the
extraction of useful knowledge from the resulting databases. Since the emergence of
data mining, new techniques have been developed for this purpose. Because many of
these fields deal specifically with temporal data, the time aspect must be considered
for a correct interpretation of the collected data. This clarifies the significance of
temporal data mining: in fact, TDM is equivalent to knowledge discovery from
temporal databases. TDM is a fairly young branch that can be considered the common
interface of various fields, namely statistics, temporal pattern recognition, temporal
databases, optimization, visualization, and high-level and parallel computation. In all
TDM applications, the large amount of data is the first limitation; consequently,
efficient algorithms are always required in this field. This study attempts to present a
comprehensive collection and evaluation of these algorithms.
304 M.R. Keyvanpour and A. Etaati
The paper is organized as follows: Section 2 introduces the basic concepts of TDM
and presents an architecture for TDM. TDM algorithms are classified in Section 3
based on the output type and the applied techniques. An evaluation of the TDM
algorithms according to this classification is presented in Section 4.
Figure 1 presents the architecture for the extraction of temporal patterns in TDM. The
architecture consists of the following components [6]:
Task analysis: When a user submits a request, this component analyzes the request
both syntactically and semantically and extracts the required data. In effect, it builds
the query for the appropriate data and extracts the information relevant to the patterns
the user expects. During the analysis procedure it calls the modules that support time;
these modules build time expressions by processing the time-related components
during the mining procedure. According to the obtained results, it invokes the data
access, pattern search, and pattern representation modules, respectively.
Data access: After the query request has been built, this component searches the
database for suitable data in the format required by the mining algorithm. The
temporal aspects must also be considered during the mining procedure; the data
access modules use services generated by the time-support module to interpret the
time-dependent components.
Pattern search: Based on the mining demand, this component selects and runs an
appropriate algorithm that passes through the chosen data in search of significant
patterns. The search demand specifies the type of knowledge the user requires and
applies the thresholds the user has determined. According to the type of demand and
the selected data set, the pattern search module runs the algorithm and stores the
extracted rules.
Pattern representation: Depending on the demands of pattern representation, the
extracted knowledge can be presented in different formats; for example, the patterns
may be represented as tables, graphs, etc.
Time support: This component is a crucial module supporting TDM and is used by
all the other modules. To identify the temporal aspects, each expression in the
temporal query must pass through the time support module, whose services all the
other time-related modules employ. The time support module stores and uses a
calendar knowledge base that contains the definitions of all the relevant calendars.
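The flow between these five components can be sketched as follows. This is a toy illustration, not the system of [6]: the query syntax ("pattern within period"), the calendar contents, and the data layout are all our own assumptions.

```python
# Hypothetical wiring of the five TDM components described above.
from dataclasses import dataclass

@dataclass
class TimeSupport:
    """Interprets time expressions via a (stubbed) calendar knowledge base."""
    calendar: dict

    def parse(self, expr):
        return self.calendar.get(expr, expr)

def task_analysis(request, time):
    # Syntactic/semantic analysis reduced to splitting "pattern within period".
    pattern, _, period = request.partition(" within ")
    return {"pattern": pattern, "period": time.parse(period)}

def data_access(query, database):
    # Select rows whose timestamp falls in the requested period.
    return [row for row in database if row["t"] in query["period"]]

def pattern_search(query, rows, minsup=2):
    # Count occurrences of the requested event and keep frequent ones.
    counts = {}
    for row in rows:
        counts[row["event"]] = counts.get(row["event"], 0) + 1
    return {e: c for e, c in counts.items() if c >= minsup and e == query["pattern"]}

def pattern_representation(patterns):
    return "\n".join(f"{e}: support={c}" for e, c in patterns.items())

time = TimeSupport(calendar={"q1": range(1, 91)})
db = [{"t": 5, "event": "login"}, {"t": 20, "event": "login"}, {"t": 200, "event": "login"}]
query = task_analysis("login within q1", time)
print(pattern_representation(pattern_search(query, data_access(query, db))))
```

Note how every time expression ("q1") passes through the time support module before the other components use it, mirroring the role the text assigns to that component.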
A pattern is a local structure that creates a particular state for a few variables at the
given points and that can typically be likened to a substring with a number of "don't
care" characters [7,8]. Matching and discovery of patterns play a significant role in
data mining [7].
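Matching such a substring pattern with don't-care characters can be sketched as follows (a minimal Python illustration; the choice of '?' as the don't-care character is our own convention):

```python
def matches(pattern, text):
    """Check whether `pattern` (with '?' as a don't-care character)
    occurs as a substring of `text`."""
    m, n = len(pattern), len(text)
    for start in range(n - m + 1):
        # A window matches if every position is either a wildcard or equal.
        if all(p == '?' or p == c for p, c in zip(pattern, text[start:start + m])):
            return True
    return False

print(matches("a?c", "xxabcxx"))   # True: substring 'abc' matches 'a?c'
print(matches("a?c", "xxacxx"))    # False: no 3-character window matches
```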
In contrast to search and retrieval applications, in pattern discovery there is no
particular query to be searched for within the database; the purpose here is the
discovery of all significant patterns [8].
Matching of given patterns and algorithms for discovering all repetitive patterns are
two different aspects of data mining. The methods applied for finding repetitive
patterns are important because they are used for discovering patterns and useful rules,
and these rules are employed to extract interesting orderings in the data [8].
support(X ⇒ Y) = |{T ∈ D : X ∪ Y ⊆ T}|  (1)
confidence(X ⇒ Y) = support(X ⇒ Y) / support(X)  (2)
Given a transaction set D, the rule extraction problem produces all association
rules whose support count is not less than a minimum repetition count defined by the
user and called minsup. In addition, the confidence of an association rule must not be
less than the user-defined minimum confidence (minconf) [10].
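The minsup/minconf filtering can be illustrated with a short Python sketch (the transaction set and candidate rules below are invented; a real miner such as Apriori [10] would also generate the candidates):

```python
def support_count(itemset, D):
    """Number of transactions in D containing every item of `itemset`."""
    return sum(1 for T in D if itemset <= T)

def strong_rules(D, candidates, minsup, minconf):
    """Keep rules X -> Y whose support count >= minsup and confidence >= minconf."""
    rules = []
    for X, Y in candidates:
        sup_xy = support_count(X | Y, D)
        sup_x = support_count(X, D)
        if sup_xy >= minsup and sup_x and sup_xy / sup_x >= minconf:
            rules.append((X, Y, sup_xy, sup_xy / sup_x))
    return rules

# Toy transaction set and candidate rules:
D = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}, {"milk"}]
cands = [({"bread"}, {"milk"}), ({"milk"}, {"butter"})]
print(strong_rules(D, cands, minsup=2, minconf=0.6))
```

Here bread ⇒ milk survives (support count 2, confidence 2/3), while milk ⇒ butter is dropped for insufficient support.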
A model is a high-level, general representation of the data. Models are usually
specified by a set of modeling parameters that are estimated from the data, and are
classified into predictive and descriptive ones: predictive models are used for
prediction and classification tasks, while descriptive models are useful for data
abstraction [7].
Existence of data with noise or missing values: If the data are collected
from different sources, the presence of noise and missing values in the
temporal database is quite probable. An algorithm of suitable efficiency is
required to analyze such data properly.
Capability of model determination for errors: In addition to the algorithm's
capability of producing an accurate and suitable output, its capability of
creating a model for the errors should be considered.
Existence of complex and correlated data: The existence of complicated
and correlated data decreases the efficiency of TDM algorithms. So, for the
analysis of this category of data, an algorithm should be selected whose
efficiency does not degrade.
5 Conclusions
In this paper, TDM algorithms are investigated, categorized, and evaluated based on
the applied techniques and the output obtained. In order to provide an appropriate tool
for selecting suitable algorithms, the results are presented in diagrams and the
attributes of each group are examined.
The results of this research assert that no algorithm can be declared optimal on the
basis of its structure alone: since each algorithm serves a special aim, a blanket
comparison of algorithms does not make sense. Among the most important problems
in TDM are the elimination of the remaining challenges and the improvement of the
algorithms' efficiency, an important and active research field that requires further
investigation.
References
1. Goebel, M., Gruenwald, L.: A Survey of Data Mining and Knowledge Discovery Software
Tools (1999)
2. Shapiro, G.P., Frawley, W.J.: Knowledge Discovery in Databases. AAAI/MIT Press
(1991)
3. Feelders, A., Daniels, H., Holsheimer, M.: Methodological and Practical Aspects of Data
Mining (2000)
4. Bellazzi, R., Larizza, C., Magni, P., Bellazzi, R.: Temporal Data Mining for The Quality
Assessment of Hemodialysis Services. Artificial Intelligence in Medicine 34, 25–39 (2004)
5. Laxman, S., Sastry, S.: A Survey of Temporal Data Mining. Sadhana 31(2), 173–198
(2006)
6. Chen, X., Petrounias, I.: An Architecture for Temporal Data Mining. In: IEE Colloquium
on Knowledge Discovery and Data Mining, vol. 310, pp. 8/1–8/4. IEEE (1998)
7. Hand, D., Mannila, H., Smyth, P.: Principles of Data Mining. MIT Press, Cambridge
(2001)
8. Gopalan, N.P., Sivaselvan, B.: Data Mining: Techniques and Trends. A.K. Ghosh, New
Delhi (2009)
9. Gharib, T.F., Nassar, H., Taha, M., Abraham, A.: An Efficient Algorithm for Incremental
Mining of Temporal Association Rules. Journal of Data & Knowledge Engineering 69,
800–815 (2010)
10. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules. In: 20th
International Conference on Very Large Data Bases (VLDB 1994), pp. 487–499 (1994)
A Novel Classification of Load Balancing Algorithms
in Distributed Systems
1 Introduction
Load balancing mechanisms are one of the most essential issues in distributed
systems. The final goal of load balancing is achieved by a fair distribution of load
across the processors, such that execution time decreases after the load balancing
operation. The problem of load balancing emerges when a processor that is ready to
execute tasks goes idle: idle processors are a sign of overloaded processors when
adequate tasks exist in the system. Such conditions can lead to a remarkable decrease
of performance in distributed systems. In the literature, load balancing algorithms
have been categorized as static or dynamic, centralized or decentralized, and
cooperative or non-cooperative [2, 3, 5, 9, 11, 12]. In this paper we categorize load
balancing algorithms into topology dependent and topology independent algorithms.
Topology dependent methods are algorithms designed to execute on a specific
topology in order to minimize the communication overhead. Topology independent
methods, however, are not restricted to execution on a specific topology and, instead
of minimizing the overhead, try to minimize the execution time. Although
synchronization has an essential effect on decreasing the execution time, topology
independent methods cannot guarantee synchronization. On the other hand, some
topology dependent methods can guarantee synchronization; therefore they can be
combined with some aspects of
314 M.R. Keyvanpour, H. Mansourifar, and B. Bagherzade
In the next sections we try to demonstrate the main aspects of each subclass and its
individual properties.
This category of load balancing algorithms is suitable for highly parallel systems:
some synchronous topology dependent algorithms have a minimal amount of
communication overhead, although some of them cannot guarantee a reasonable
overhead. For instance, the Dimension Exchange Model (DEM) [1, 8] is a
synchronous approach which tries to balance the system in an iterative manner. DEM
was conceptually designed for execution on a hypercube topology, such that load is
migrated only between directly connected nodes; therefore DEM can guarantee
minimal overhead. The main drawback of DEM is its dependence on log N iterations,
where N denotes the number of nodes in the system; for example, an overloaded node
may be forced to wait until the last iteration before transferring its load to another
node. Fig. 2 shows the process of load balancing in the DEM strategy. To solve this
problem, Direct Dimension Exchange (DDE) was proposed [8]. DDE eliminates the
unnecessary iterations by taking the load average in every dimension.
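The dimension-exchange sweep can be sketched for a small hypercube as follows (a simplified Python illustration: node loads are hypothetical, and we model the exchange as pairwise averaging across each of the log2 N dimensions, ignoring migration cost):

```python
def dimension_exchange(load):
    """One DEM sweep on a hypercube: in each of the log2(N) dimensions,
    every node averages its load with the neighbour whose id differs in
    that bit, so load migrates only between directly connected nodes."""
    n = len(load)               # n must be a power of two
    dims = n.bit_length() - 1   # log2(n) dimensions
    load = list(load)
    for d in range(dims):
        for i in range(n):
            j = i ^ (1 << d)    # neighbour across dimension d
            if i < j:
                avg = (load[i] + load[j]) / 2
                load[i] = load[j] = avg
    return load

print(dimension_exchange([8, 0, 4, 0]))   # -> [3.0, 3.0, 3.0, 3.0]
```

Note that an initially overloaded node (here node 0) only sheds its full excess over the course of all log N iterations, which is exactly the drawback that DDE addresses.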
Some load balancing algorithms are asynchronous, yet their local behavior makes
them suitable for highly parallel systems: such algorithms act locally in each domain,
and executing the algorithm simultaneously on various domains can satisfy the
synchronization. For instance, the Hierarchical Balancing Model (HBM) [1] is an
asynchronous algorithm which was conceptually designed for execution on a
hypercube topology. HBM organizes the nodes in a binary tree, such that each parent
node receives the triggers that indicate an imbalance among its children; Fig. 4 shows
the binary tree of HBM. Other instances of asynchronous load balancing algorithms
are the Gradient Model (GM) and the Extended Gradient Model, which are demand-
driven algorithms and work by detecting the globally or locally nearest lightly loaded
processors.
Primary load balancing algorithms are non-intelligent methods in which the processes
of setting the thresholds and selecting the destination of migration are based on trial
and error. Although these methods are very simple to execute, most of them can be
combined with artificial intelligence or optimization methods. For instance, the
Central Load Manager [2] is a static load balancing algorithm in which, when a
thread is created, a minimally loaded host is selected by the central load manager to
execute the new thread. The integrated decision making leads to uniform allocation
and consequently a minimum number of separated neighbor threads; however, a high
degree of communication overhead is the main drawback of this algorithm. The
Thresholds algorithm [2] is another static load balancing algorithm, in which the load
manager is distributed among the processors. Each local load manager knows the
load state of the whole system, and the following thresholds, which get default
values, represent the load state of the processors: Tunder and Tupper. In this
algorithm, if the local state is not overloaded, or if no underloaded host exists, the
thread is allocated locally; otherwise, a remote underloaded host is selected.
Compared to the Central Load Manager algorithm, distributing the load manager
among all processors leads to low communication overhead. However, when all
processors are overloaded, local load assignment can cause a significant load
imbalance: one host can end up much more overloaded than another, which conflicts
with the ultimate goal of load balancing algorithms. As illustrated, the process of
setting and changing the thresholds in primary load balancing algorithms follows trial
and error approaches, so they cannot guarantee the best decision.
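The allocation decision of the Thresholds algorithm can be sketched as follows (a Python illustration; the numeric values for Tunder and Tupper, and the choice of the least loaded underloaded host, are illustrative assumptions rather than details from [2]):

```python
T_UNDER, T_UPPER = 4, 8   # illustrative default thresholds Tunder/Tupper

def state(load):
    """Classify a host's load against the two thresholds."""
    if load < T_UNDER:
        return "underloaded"
    if load <= T_UPPER:
        return "medium"
    return "overloaded"

def allocate(local_id, loads):
    """Pick a host for a new thread, as each local load manager would:
    allocate locally unless this host is overloaded and some other host
    is underloaded."""
    if state(loads[local_id]) != "overloaded":
        return local_id
    under = [h for h, l in enumerate(loads) if state(l) == "underloaded"]
    return min(under, key=lambda h: loads[h]) if under else local_id

loads = [9, 2, 6]
print(allocate(0, loads))   # host 0 is overloaded, host 1 underloaded -> 1
print(allocate(2, loads))   # host 2 is medium, so it allocates locally -> 2
```

The failure mode described above is visible here: if every entry of `loads` exceeded T_UPPER, each manager would fall back to local allocation and the imbalance would persist.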
The selection phase in most primary load balancing algorithms is based on load-
related thresholds. However, some primary load balancing algorithms utilize
performance measures to select the destination of migration: for instance, Shortest
Expected Delay (SED) selects the host with the best expected response time, and the
Adaptive Separable Policy (ASP) selects the host with the best utilization during the
past interval [14].
Intelligent load balancing algorithms are methods in which the processes of setting
and changing the thresholds and selecting the destination of migration are based on
optimization or machine learning mechanisms. For instance, the Classifier Based
Load Balancer (CBLB) [7] employs a simple classifier system on a central host in
order to dynamically set the load balancing thresholds. For this purpose, the central
host classifies the state of the system based on the following parameters:
Mean response time since the last update.
Mean utilization per node since the last update.
Inverse standard deviation of arrivals since the last update.
Based on these parameters, the central host forms three classes and assigns each class
a specific action. The system parameters used in CBLB are the transfer queue
threshold (Tq), the update period (UP), and the CPU threshold (TCPU). The main
advantage of the CBLB algorithm is that it can work as an independent central
algorithm or be combined easily with primary load balancing algorithms.
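The shape of such a classification step can be sketched as follows. This is emphatically a toy stand-in, not the classifier system of [7]: the class names, the numeric cut-offs, and the actions on Tq/UP/TCPU are all invented for illustration; only the three input parameters and the three-class structure come from the text.

```python
def classify_state(resp_time, utilization, inv_std_arrivals):
    """Toy stand-in for the classifier on the central host: map the three
    observed parameters to one of three classes, each paired with an
    action that retunes the thresholds named in the text (Tq, UP, TCPU)."""
    if utilization > 0.8 and resp_time > 1.0:
        # Heavily loaded system: migrate sooner, refresh state more often.
        return "saturated", {"Tq": "lower", "UP": "shorten"}
    if inv_std_arrivals < 0.5:
        # Highly variable (bursty) arrivals: react faster.
        return "bursty", {"UP": "shorten", "TCPU": "lower"}
    # Lightly and evenly loaded: migrate less, update less often.
    return "steady", {"Tq": "raise", "UP": "lengthen"}

print(classify_state(1.4, 0.9, 1.0))   # a saturated system
print(classify_state(0.3, 0.4, 0.2))   # bursty arrivals
```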
6 Conclusion
In this paper we proposed a novel classification of load balancing algorithms based
on a topological view and seven load balancing features. Such a classification reveals
interesting facts about load balancing algorithms, as follows.
All intelligent load balancing algorithms are central; there are no local
intelligent load balancing algorithms in the research literature.
All intelligent load balancing algorithms that act based on machine learning
mechanisms are topology independent. To the best of our knowledge, there
are no topology dependent, machine learning based load balancing
algorithms in the research literature.
Minimal overhead and optimized execution time or utilization are the trade-
offs of every load balancing algorithm. The majority of synchronous load
balancing algorithms have minimal overhead. It seems that the combination
of such algorithms with artificial intelligence techniques can form the future
direction of load balancing algorithms.
References
1. Willebeek-LeMair, M.H., Reeves, A.P.: Strategies for Dynamic Load Balancing on Highly
Parallel Computers. IEEE Transactions on Parallel and Distributed Systems 4(9) (1993)
2. Dubrovski, A., Friedman, R., Schuster, A.: Load Balancing in Distributed Shared Memory
Systems. International Journal of Applied Software Technology 3, 167–202 (1998)
3. Zhou, S., Ferrari, D.: A Trace-Driven Simulation Study of Dynamic Load Balancing. IEEE
Transactions on Software Engineering 14(9), 1327–1341 (1988)
4. Das, S.K., Harvey, D.J., Biswas, R.: Parallel Processing of Adaptive Meshes with Load
Balancing. IEEE Trans. Parallel and Distributed Systems 12(12), 1269–1280 (2001)
5. Corradi, A., Leonardi, L., Zambonelli, F.: On the Effectiveness of Different Diffusive
Load Balancing Policies in Dynamic Applications. In: Bubak, M., Hertzberger, B., Sloot,
P.M.A. (eds.) HPCN-Europe 1998. LNCS, vol. 1401, Springer, Heidelberg (1998)
6. Corradi, A., Leonardi, L., Zambonelli, F.: Diffusive Load Balancing Policies for Dynamic
Applications. IEEE Concurrency 7(1), 22–31 (1999)
7. Baumgartner, J., Cook, D.J., Shirazi, B.: Genetic Solutions to the Load Balancing
Problem. In: Proc. of the International Conference on Parallel Processing, pp. 72–78
(1995)
8. Shu, W., Wu, M.Y.: The direct dimension exchange method for load balancing in k-ary n-
cubes. In: Proceedings of Eighth IEEE Symposium on Parallel and Distributed Processing,
New Orleans, pp. 366–369 (1996)
9. Osman, A., Ammar, H.: Dynamic load balancing strategies for parallel computers. In:
International Symposium on Parallel and Distributed Computing, ISPDC (2002)
10. Luque, E., Ripoll, A., Cortes, A., Margalef, T.: A distributed diffusion method for
dynamic load balancing on parallel computers. In: Proc. of EUROMICRO Workshop on
Parallel and Distributed Processing. IEEE CS Press (1995)
11. Sharma, S., Singh, S., Sharma, M.: Performance Analysis of Load Balancing Algorithms.
World Academy of Science, Engineering and Technology 38 (2008)
12. Xu, C.-Z., Lau, F.: Load Balancing in Parallel Computers: Theory and Practice. Kluwer
Academic Publishers, Dordrecht (1997)
13. Salim, M., Manzoor, A., Rashid, K.: A Novel ANN-Based Load Balancing Technique for
Heterogeneous Environment. Information Technology Journal 6(7), 1005–1012 (2007)
14. Ghanem, J.: Implementation of Load Balancing Policies in Distributed Systems, Master
thesis (2004)
Data Mining Tasks in a Student-Oriented DSS
1 Introduction
Universities, as integral parts of the local community, have important tasks in
education, training and research, and are also important suppliers of high-quality
future staff for local and international companies. Facing an increasingly competitive
and demanding environment, these institutions try to adopt innovative tools in an
attempt to improve all their activities. Such tools may be decision support systems
(DSS), whose purpose is to assist in all managerial and academic processes and in the
retrospective analysis of economic and organizational data.
The success of any organization depends greatly on the quality of its decision-making
processes, which demand assisting software tools such as decision support systems.
The most recent predilection of actual DSS is to smooth the progress of cooperation
between participants in collective decisions in all fields of activity. They denote
complex applications that assist, rather than substitute, human decision-making
processes and rely on the effectiveness and accuracy of the ensuing information.
The perception and purposes of DSS have been appreciably extended owing to the
hasty development [15] of IT and web technologies. Marakas's definition underlines
that a DSS is a system under the control of one or more decision makers that supports
the activity of decision making by offering an organized set of tools projected to
impose structure on portions of the decision-making situation and to improve the
eventual efficiency of the decision result [10].
While most of the previous DSS research focused on enterprise-level decision
support [15], in our research we center on individual support with regard to
personalized preferences and expectations. In the present article we introduce a DSS
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 321–328.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
322 V.P. Bresfelean, M. Bresfelean, and R. Lacurezeanu
architecture that assists students in decision processes, together with our view of
integrating several data mining tasks into the system.
Another data mining task to be included is association learning, where we seek any
association among features, not only those that predict a definite class value (e.g.
relations between subjects, courses, labs, and facilities that might attract new
students, or cause scholastic abandonment, transfers, or study interruptions).
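A minimal sketch of such association learning over student records, here reduced to counting frequently co-occurring feature pairs; the records and the support threshold below are hypothetical, and the real system would use a full association-rule learner:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(records, min_support):
    """Count how often pairs of features co-occur across student records
    and keep the pairs whose support reaches the threshold."""
    n = len(records)
    counts = Counter()
    for rec in records:
        for pair in combinations(sorted(rec), 2):
            counts[pair] += 1
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Hypothetical survey records: each set holds features observed for one student.
records = [
    {"lab:databases", "course:data_mining", "status:enrolled"},
    {"lab:databases", "course:data_mining", "status:transfer"},
    {"lab:networks", "course:data_mining", "status:enrolled"},
]
pairs = frequent_pairs(records, min_support=2 / 3)
```

Pairs kept this way (e.g. a lab frequently taken together with a course) are the raw material for rules relating courses, labs, and student outcomes.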
Fig. 1. Student-oriented DSS architecture, derived from our continuous research [4]
The system applies clustering in situations where information is lacking on the
connection of the data with predefined classes; it determines groups based on data
similarity, while disparate groups contain dissimilar data. For example, based on the
feedback data received from our employed master's degree students, the system builds
a profile for students with jobs in the graduated
Table 1. Cluster centroids for students with jobs in the same/different graduated specialization

Attribute | Score (based on chi-squared statistic) | Cluster 1: job in graduated specialization | Cluster 2: job in different field
Graduated school type 0.61214 University University
Year of last school graduation 0.1111 2005 2009
Final grades 2.55337 7.01-8 8.01-9
Qualification field 39.41197 Economic Natural sciences
Age 2.93085 26-35 36-45
Gender 0.11791 male male
Type of job 41.23092 Full time Full time
Headquarters 30.31725 Tg Mures, Mures Cluj-Napoca
Time to hire 1.82114 3 to 6 months 1 to 2 years
Job satisfaction 17.56531 Satisfied So and so
Number of job interviews participation 1.89431 2 to 5 2 to 5
Number of refused jobs 2.06833 2 to 5 2 to 5
Type of requested experience by employer 6.65839 In the graduated field In other fields
Years of requested experience 0.56306 2 to 5 years 1 year
Employer appreciation 3.20531 Very good Good
Firm technical level 4.42825 Average Higher
Level of self qualification vs. required tasks 0.89261 Adequate Higher
Firms staff fluctuation 3.5824 Low Average
Firm stimulates employees training 3.1144 Yes No
Firm stimulates innovation 3.6831 No No
Own innovations at work 0.47226 Yes No
Aware of the promotion criteria 1.2838 Yes Yes
Time to fulfill the promotion 7.00431 Between 1-2 years Now
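The chi-squared scores in the table above rank how strongly each attribute separates the two clusters. A minimal sketch of that statistic for a small attribute-value by cluster contingency table; the counts below are hypothetical, not taken from the study:

```python
def chi_squared(table):
    """Chi-squared statistic for an attribute-value x cluster contingency
    table: sum of (observed - expected)^2 / expected over all cells."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = attribute values, columns = the two clusters.
table = [[18, 4],   # e.g. "Economic" qualification
         [3, 15]]   # e.g. "Natural sciences" qualification
score = chi_squared(table)
```

A larger score means the attribute's value distribution differs more between the clusters, which is why highly scored attributes such as the type of job or the qualification field dominate the cluster description.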
For the classification learning tasks, the system applies, for example, the C4.5
algorithm to predict the students' present level of qualification versus the tasks
required by their employer. Based on the training set, we obtained a 79.22% success
rate (correctly classified instances), and a 72.08% success rate in the 10-fold
cross-validation experiment. The system used the Laplace estimator, where leaf counts
are smoothed by starting at 1 instead of zero to avoid zero-frequency problems, a
traditional procedure named after the 18th-century mathematician Pierre Laplace.
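The Laplace estimator mentioned above can be sketched in a few lines: each leaf's class counts start at 1 instead of 0, so no class probability collapses to zero. The leaf counts below are hypothetical:

```python
def laplace_probs(leaf_counts):
    """Laplace-smoothed class probabilities at a decision-tree leaf:
    every class count starts at 1, so an unseen class still receives
    a small non-zero probability."""
    k = len(leaf_counts)                       # number of classes at the leaf
    total = sum(leaf_counts.values()) + k      # +1 per class
    return {cls: (n + 1) / total for cls, n in leaf_counts.items()}

# Hypothetical leaf: 7 "adequate" instances, 0 "higher" instances.
probs = laplace_probs({"adequate": 7, "higher": 0})
```

Without smoothing, the "higher" class would get probability 0 at this leaf; with the Laplace estimator it gets 1/9 and "adequate" gets 8/9.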
Fig. 2. Classification learning tree: students' qualification versus job-required tasks
Here are some suggestive examples of interpreting the decision tree's branches:
- If the students believed their employer had average technical equipment, then they
would find their level of qualification to be just adequate to the required tasks.
- If the students believed their employer had a high level of technical equipment,
and they refused 0 jobs before taking the present one, and their most recent school
graduation was after 2008, then they would find their level of qualification to be
higher than the required tasks.
- If the students believed their employer had a high level of technical equipment,
and they refused 1 job before taking the present one, and this job's satisfaction is
"so and so", and they felt the employer was not motivating staff training, then they
would find their level of qualification to be lower than the required tasks.
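The three branches above can be read as explicit rules; the sketch below encodes them directly. The attribute and value names are our own paraphrase, not the system's internal representation:

```python
def qualification_vs_tasks(tech_level, refused_jobs=0, graduation_year=None,
                           satisfaction=None, training_stimulated=None):
    """The three quoted tree branches rewritten as explicit rules
    (a readability sketch only)."""
    if tech_level == "average":
        return "adequate"                                  # branch 1
    if tech_level == "high":
        if (refused_jobs == 0 and graduation_year is not None
                and graduation_year > 2008):
            return "higher"                                # branch 2
        if (refused_jobs == 1 and satisfaction == "so and so"
                and training_stimulated is False):
            return "lower"                                 # branch 3
    return "not covered by the quoted branches"
```

Reading a tree as such if/then rules is exactly what makes the classification output interpretable for the decision maker.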
For the numeric prediction tasks, the system uses, for instance, the REPTree method,
which generates a decision tree (fig. 3) based on information gain/variance reduction
and then prunes it using reduced-error pruning [14]; for speed, it sorts the values of
numeric attributes only once, and it handles missing values by splitting instances
into pieces. Based on several public statistics, the system tries to numerically
predict the national youth unemployment rate (ages 15-24), denoted YUR_15-24.
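The split criterion described above, choosing the numeric threshold that most reduces target variance with the attribute values sorted once, can be sketched as follows; the data values are hypothetical, not the paper's public statistics:

```python
def best_variance_split(xs, ys):
    """Pick the numeric threshold that maximally reduces the variance of the
    target, the splitting criterion used by REPTree-style learners. The
    attribute values are sorted once, mirroring the speed optimization."""
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    pairs = sorted(zip(xs, ys))          # sort the numeric attribute once
    n = len(pairs)
    total = var(ys)
    best_gain, best_thr = 0.0, None
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                      # no threshold between equal values
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        gain = total - (len(left) * var(left) + len(right) * var(right)) / n
        if gain > best_gain:
            best_gain = gain
            best_thr = (pairs[i][0] + pairs[i - 1][0]) / 2
    return best_thr, best_gain

# Hypothetical data: overall unemployment rate (x) vs. youth rate YUR_15-24 (y).
thr, gain = best_variance_split([5.0, 6.0, 7.0, 8.0], [12.0, 13.0, 20.0, 21.0])
```

A full REPTree would apply this recursively and then prune subtrees whose removal does not increase error on a held-out set (reduced-error pruning).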
Fig. 3. Generated REPTree for numeric prediction of the youth unemployment rate
It can be seen that the countrywide unemployment rate (across all age categories)
plays a role in the future evolution of YUR_15-24, with a split around 7.25 percent.
The next nodes of the decision tree reveal the importance of other factors, namely
LPFR_55-64 (labour force participation rate, age 55-64) and Net_product_taxes.
Net product taxes represent the difference between the taxes owed to the state budget
(VAT, excise, and other taxes) and the subsidies on products paid from the state
budget. Finally, the last split takes place on LPFR_65+ (labour force participation
rate, age over 65), whose 17.82% value also influences the future evolution of
YUR_15-24. A greater level of Net_product_taxes might indicate a sounder economy,
thus decreasing the youth unemployment rate. A motivating direction for our system's
upcoming assessment will be, for instance, to unravel the factors that determine the
influence of mature and senior citizens' employment on the youth
employment/unemployment rates.
4 Conclusions
In the present article we presented the first part of our research in developing a
student-oriented decision support system and its data mining tasks. We began by
reviewing recent DSS studies in several domains, then outlined some interesting
applications in the educational field. Afterward, we introduced our DSS architecture
and described the main modules and their roles, all addressed to the central focus of
education, namely the students.
The central point of our study was to integrate the data mining processes into the
decision support system. For that we envisioned a knowledge server comprising built-in
algorithms and open-source software to provide the required tasks: data clustering,
classification learning, and numeric prediction. We presented several examples of how
the system can generate significant knowledge for the decision-maker: clusters based
on students with jobs in the same/different graduated specialization; a classification
learning tree based on students' qualification versus job-required tasks; and a
REPTree for numeric prediction of the youth unemployment rate.
In our future research we plan to continue developing the system, by integrating more
planned components, such as the association learning tasks.
Acknowledgements. This work was supported by the CNCSIS TE_316 Grant.
References
1. Abdelhakim, M.N.A., Shirmohammadi, S.: Improving educational multimedia selection
process using group decision support systems. International Journal of Advanced Media
and Communication 2(2), 174–190 (2008)
2. Aslam, M.Z., Nasimullah, Khan, A.R.: A Proposed Decision Support System/Expert
System for Guiding Fresh Students in Selecting a Faculty in Gomal University, Pakistan
(2011), http://arxiv.org/abs/1104.1678
3. Bohanec, M., Zupan, B.: Integrating decision support and data mining by hierarchical
multi-attribute decision models. In: IDDM-2001: ECML/PKDD-2001 Workshop, Freiburg (2001)
4. Bresfelean, V.P., Ghisoiu, N., Lacurezeanu, R., Vlad, M.P., Pop, M., Veres, O.: Designing
a DSS for Higher Education Management. In: Proceedings of CSEDU 2009 International
Conference on Computer Supported Education, March 23-26, vol. 2, pp. 335–340 (2009)
5. Bresfelean, V.P., Ghisoiu, N., Lacurezeanu, R., Sitar-Taut, D.A.: Towards the
Development of Decision Support in Academic Environments. In: ITI 2009, Croatia, June
22-25, pp. 343–348 (2009)
6. Bresfelean, V.P.: Implicatii ale tehnologiilor informatice asupra managementului
institutiilor universitare. Ed. Risoprint, Cluj-Napoca, 277 pages (2008)
7. Frize, M., Frasson, C.: Decision-support and intelligent tutoring systems in medical
education. Clin. Invest. Med. 23(4), 266–269 (2000)
8. Iliev, R., Kirilov, L., Bournaski, E.: Web-based decision support system in regional water
resources management. In: Proceedings of CompSysTech 2010, pp. 323–328 (2010)
9. Mansmann, S., Scholl, M.H.: Decision Support System for Managing Educational
Capacity Utilization. IEEE Transactions on Education 50(2), 143–150 (2007)
10. Marakas, G.M.: Decision Support Systems in the 21st Century, 2nd edn. Pearson
Education (2003)
11. Hien, N.T.N., Haddawy, P.: A Decision Support System for Evaluating International
Student Applications. In: 37th ASEE/IEEE Frontiers in Education Conference,
Milwaukee, WI, USA, October 10-13 (2007)
12. Pestana, G., da Silva, M.M., Casaca, A., Nunes, J.: An airport decision support system for
mobiles surveillance & alerting. In: Proceedings of MobiDE 2005, pp. 33–40 (2005)
13. Williams, M., Wu, F., Kazanzides, P., Brady, K., Fackler, J.: A modular framework for
clinical decision support systems: medical device plug-and-play is critical. SIGBED
Rev. 6(2), Article 8, 11 pages (2009)
14. Witten, I.H., Frank, E., Hall, M.A.: Data Mining: Practical Machine Learning Tools and
Techniques, 3rd edn. Morgan Kaufmann, Elsevier (2011)
15. Yu, C.-C.: A web-based consumer-oriented intelligent decision support system for
personalized e-services. In: ICEC 2004, pp. 429–437 (2004)
Teaching Automation Engineering: A Hybrid Approach
for Combined Virtual and Real Training
Using a 3-D Simulation System
1 Introduction
In contrast to the field of education, the use of simulation systems is already
state-of-the-art in the daily routine of large industrial enterprises. The potential
of such software packages for the close-to-reality virtual handling of complex
mechatronic systems shall now also be exploited for a cost-effective and motivating
introduction to automation engineering.
The use of 3-D simulation in education not only offers excellent support for learning
processes but also familiarizes students with a tool that, especially in the
automotive industry, has by now reached maturity and become a standard.
In the stage of preparing the programming and commissioning of a manufacturing plant,
the real plant is very often not yet available due to schedule delays. Because of
this, a virtual model is programmed and the commissioning of this virtual plant is
carried out with the help of simulation [1]. The transfer of the results to the real
plant can then be done in less time, so that this time saving alone outweighs the
additional costs of the virtual plant in many cases. Similar time and cost savings
can also be achieved in the field of education if methods of virtual production are
established.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 329–337.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
330 J. Rossmann et al.
learning materials, and it must be open and extensible with regard to the simple
addition of individual learning scenarios.
Figure 1 shows a real working environment for a robotic example on the left side
and the corresponding virtual environment on the right side. The student practices
with the virtual model which only marginally differs in function and behavior from
the real plant. This way, knowledge gathered using simulation can directly be
transferred into practice. Here, it is essential that the correct mechanical, electrical,
and controller engineering details of the plant are simulated in real-time and that
students as well as teachers can interact with the plant while simulation is running.
Another important aspect of this concept, also when looking at costs, is the
possibility to replace selected parts of a real working environment by a virtual
counterpart (Hardware-in-the-Loop, HiL). Furthermore, this allows for the creation of
user-specific training scenarios for which a real working environment only exists in
parts or not at all. Scenarios prepared in this way can then also be used by students to
prepare learning contents outside of the teaching times.
Malfunction Simulation
On the left, figure 3 shows how a teacher can add malfunctions during plant operation
or commissioning, such as cable breaks, defective electrical wiring, or sensor
failures. These malfunctions are added to the learning scenarios while the simulation
is running, so that the students can analyze and compensate for them. These skills
are often required in vocational practice and can hardly be trained in classical
learning environments, as returning the plant to an error-free state is possible only
with high effort and thus at high cost. On the right, figure 3 shows a protocol of a
student's actions. The protocol supports the teacher in evaluating the learning
progress.
The content is separated into single information units which are linked by hyperlinks
and which consist of texts (concepts, explanations, directives, examples, etc.),
graphics, videos, and animations. Besides the cost-effectiveness, a major advantage
of the approach presented in this paper is that students arrive at the limited
available hardware well prepared, which enables them to work with the hardware more
efficiently. They are familiar with the system not only in a theoretical but also in
a practice-oriented way. They can concentrate on the essential details which separate
the real and the virtual world, e.g.:
- How can a sensor be adjusted with the corresponding hardware tools?
- Which safety regulations have to be considered for manual wiring?
- How can a robot be moved with the original manual control unit?
- How can sub-components be added to or removed from the system?
This way, the required amount of time for the first learning steps with the real
hardware is significantly reduced. As a consequence, it now becomes possible despite
all time restrictions to plan and organize courses so that every student understands all
the hardware details.
Essential elements of the CIROS Automation Suite are the ready-made work cell
libraries for the different fields of robotics, mechatronics, and production engineering.
The robotics library contains e.g. predefined robot-based work cells together with the
corresponding technical documentation. This library aims at two objectives:
- For the presentation of application examples, every work cell contains an example
solution which the teacher can show and explain.
- The work cells are the foundation for the students to solve the different project
tasks, i.e. to execute all project steps in the simulation. These steps range from
teaching positions over creating robot programs to the complete commissioning and the
final test of the application. Of course, at this point in time, students have no
access to the provided example programs yet.
To permit this flexible use of the libraries, we have implemented two access
modes. On the one hand, students can only access the work cells in a read-only mode
(presentation mode). On the other hand, the teacher can modify the work cells for the
students according to his requirements. The students can then open these modified
work cells in their personal work spaces and continue to work on them.
6 Conclusion
In this paper, we have shown how virtual learning can be used continuously across all
levels of education and training, covering all branches of automation engineering
such as production engineering, mechanical engineering, mechatronics, robotics, etc.
An essential aspect of the presented concept is the seamless integration of detailed
3-D simulation with the corresponding real work environments. This way, virtually
gathered knowledge can be directly transferred into practice and verified there.
The synergy effects of virtual and classical learning, which can be obtained with
our hybrid approach, are not restricted to an essential cost reduction for the initial
acquisition of the learning materials. Moreover, the consequent application of the
concept leads to a more efficient use of available hardware resources as the required
introduction overhead is reduced and the training at the real plant can concentrate on
details which cannot be simulated.
The possibility to use the virtual learning contents outside of teaching times leads
to a further improvement of the quality of teaching. This blended learning, a
methodological mix of e-learning and attendance-based learning, joins the advantages
of both learning methods. In contrast to common internet courses, the learning
contents are aimed exactly at the students and can furthermore be modified by the
teacher. If required, students can learn independently of time and place and control
their learning themselves, both in depth and in breadth.
References
1. Rossmann, J., Stern, O., Wischnewski, R.: Eine Systematik mit einem darauf abgestimmten
Softwarewerkzeug zur durchgängigen Virtuellen Inbetriebnahme von Fertigungsanlagen.
atp 49(7), 52–56 (2007)
2. Rossmann, J., Karras, U., Stern, O.: Ein hybrider Ansatz für die interdisziplinäre Aus- und
Weiterbildung auf Basis eines 3-D-Echtzeitsimulationssystems. In: Tagungsband zur 6.
Fachtagung Virtual Reality, pp. 291–300. Magdeburg, Germany (2009)
3. Rossmann, J., Wischnewski, R., Stern, O.: A Comprehensive 3-D Simulation System for the
Virtual Production. In: Proceedings of the 8th International Industrial Simulation
Conference (ISC), Budapest, Hungary, pp. 109–116 (2010)
The Strategy of Implementing e-Portfolio in Training
Elementary Teachers within the Constructive Learning
Paradigm
Abstract. The system of training elementary school teachers for work in the
constructive learning paradigm at the Siberian Federal University has
changed significantly after the Applied Bachelor degree in Education was
introduced. The article presents strategies for implementing e-Portfolio
technology in training first-year students: the e-Portfolio allows academic
teachers to carry out longitudinal research on the competencies being developed
in accordance with the federal Russian educational standards and encourages
students' reflexive work. The e-Portfolio is a learning tool supporting
reflection and individual progress assessment to develop pedagogical
competencies within the framework of the professional training of elementary
school teachers for work in the constructive learning model.
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 339–344.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
340 O. Smolyaninova and V. Ovchinnikov
During the two semesters the students filled their e-Portfolios with materials
illustrating the formation and development of their professional competencies. The
teachers of pedagogical and psychological disciplines assigned tasks to the students
through the virtual educational environment, supported the students with timely
feedback, and left comments on the students' work in their e-Portfolios.
The third place is occupied by the opportunity to present the results of the
pedagogical practical work (high rating 39%, average rating 39%). During detailed
interviews with the teachers taking part in the experiment, we found that all three
opportunities mentioned above are closely connected with the formation of
professional competencies with the help of the e-Portfolio technology. The results of
the questionnaire we carried out allowed us to confirm that the e-Portfolio is
considered a means of forming professional competencies: regarding the e-Portfolio
for developing professional and ICT competencies, 39% of teachers chose a high rating
and 39% an average rating.
4 Conclusion
The research we carried out with the support of the Krasnoyarsk Regional Scientific
Fund (KF-193) indicates that the e-Portfolio technology is a powerful resource for
students' professional development, by demonstrating students' individual progress to
teachers and prospective employers.
The e-Portfolio allows visualization of the professional competencies of future
teachers, both their level and their process of development. It contributes to the
formation of an effective integrative learning strategy. The e-Portfolio also
supports feedback between students and teachers, assessment of academic results in
mastering the curriculum, and analysis of pedagogical practical work; it enhances
students' educational technology learning, reflection, and collaboration. L.
Vygotsky [2] wrote that one step in education may correspond to one hundred steps in
development. In this context, the e-Portfolio is a tool for visualizing the process
of students' development.
Acknowledgments. This research was carried out with the support of the Krasnoyarsk
Regional Scientific Fund within the Project KF-193 "Increasing Quality and
Accessibility of Education at Krasnoyarsk Region: Formation of the Content
Structure for the eLibrary of the Siberian Federal University for Secondary Schools
(Profile: Natural Sciences)".
References
1. Huang, Y.-P.: Sustaining ePortfolio: Progress, Challenges, and Dynamics in Teacher
Education. In: Handbook of Research on ePortfolios (2006)
2. Vygotsky, L.S.: Mind in Society. Harvard University Press, Cambridge (1978)
3. Smolyaninova, O.G.: University Teacher Professional Development and Assessment on the
Basis of ePortfolio Method in Training Bachelors of Education at the Siberian Federal
University. Newsletter of the Fulbright Program in Russia 9, 14–15 (2010)
4. Smolyaninova, O.G., Glukhikh, R.S.: E-Portfolio as the Technology for Developing the
Basic Competencies of a University Student. Journal of Siberian Federal University,
Humanities & Social Sciences 2, 601–610 (2009)
5. Smolyaninova, O.G.: ePortfolio as the Technology for Developing the Basic University
Students' Competencies. In: Proceedings of the XVIII International Conference and
Exhibition on Information Technology in Education, Moscow (2008)
6. Gardner, H.: Assessment in Context: the Alternative to Standardized Testing. In: Gifford, B.,
O'Conner, M. (eds.) Changing Assessments: Alternative Views of Aptitude, Achievement, and
Instruction, pp. 77–119. Kluwer, Boston (1992)
Speech Recognition Based
Pronunciation Evaluation Using
Pronunciation Variations and Anti-models
for Non-native Language Learners
Yoo Rhee Oh, Jeon Gue Park, and Yun Keun Lee
1 Introduction
With the improved performance of speech recognition, there have been many attempts
to adopt speech recognition technology in computer-assisted language learning
(CALL) [1]. As one such speech recognition based CALL application, we propose a
speech recognition based automatic pronunciation evaluation method for non-native
language learners. In particular, we utilize a multiple pronunciation dictionary
for non-native language learners together with anti-models. To this end, the proposed
method consists of two main steps: (a) a speech recognition step with a given script
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 345–352.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
346 Y.R. Oh, J.G. Park, and Y.K. Lee
Fig. 1. Overall procedure of the proposed speech recognition based pronunciation
evaluation method for non-native language learners
and the corresponding speech data, and (b) a pronunciation analysis step using the
recognition results for the pronunciation evaluation. The speech recognition step
obtains the phoneme sequence, the log-likelihoods of the acoustic models and
anti-models, and the duration of each phoneme. The pronunciation analysis step then
evaluates each recognized phoneme using the results of the speech recognition step
and the reference phoneme sequence. For the experiments, we select English as the
target language and Korean speakers as the non-native language learners.
The organization of the remainder of this paper is as follows. In Section 2, we
present the overall procedure of the proposed speech recognition based pronunciation
evaluation method for non-native language learners. Next, we describe the generation
of a multiple pronunciation dictionary for non-native language learners and the
pronunciation variants in Section 3, and present an anti-model based pronunciation
analysis method in Section 4. In Section 5, we show the performance of the proposed
pronunciation evaluation method. Finally, we conclude our findings in Section 6.
Fig. 2. The procedure of the pronunciation analysis based on the pronunciation
variants for non-native language learners and the anti-models
In other words, a decision tree for each phoneme X is generated using the
mispronunciation rule patterns corresponding to X. The attributes of the decision
tree for X are the two left phonemes of X (L1 and L2) and the two right phonemes of X
(R1 and R2). The output class of the decision tree for X is determined as a phoneme
commonly produced by non-native language learners in that context. After that, each
decision tree is converted into the equivalent pronunciation variant rules for
non-native language learners.
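The conversion described above yields rules keyed on a phoneme and its context (L2, L1, X, R1, R2). A minimal sketch of applying such rules to a recognized phoneme sequence; the rule contents below are illustrative, not taken from the paper:

```python
def apply_variant_rules(phonemes, rules):
    """Apply context-dependent pronunciation-variant rules: each rule maps a
    phoneme X together with its two left and two right neighbours to the
    phoneme commonly substituted by non-native learners. '#' pads the
    utterance boundaries."""
    padded = ["#", "#"] + phonemes + ["#", "#"]
    out = []
    for i in range(2, len(padded) - 2):
        ctx = (padded[i - 2], padded[i - 1], padded[i],
               padded[i + 1], padded[i + 2])           # (L2, L1, X, R1, R2)
        out.append(rules.get(ctx, padded[i]))
    return out

# Illustrative rule only: word-final R after "EY" realized as "AH".
rules = {("#", "EY", "R", "#", "#"): "AH"}
variant = apply_variant_rules(["EY", "R"], rules)
```

Variants produced this way are what get added to the multiple pronunciation dictionary, so that the recognizer can match mother-tongue-influenced realizations.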
5 Experiments
In order to evaluate the proposed speech recognition based pronunciation evaluation
method, we select English as the target language and Korean adult speakers as the
non-native language learners. Section 5.1 describes the baseline automatic speech
recognition (ASR) system and Section 5.2 shows the performance of the proposed
pronunciation evaluation method.
FRR_p = N_(p,correct,wrong) / N_(p,correct),  (3)

FAR_p = N_(p,wrong,correct) / N_(p,wrong),  (4)
where Np,correct and Np,wrong were the number of phonemes that were correctly
uttered and the number of phonemes that were incorrectly uttered, respectively,
for p. Moreover, Np,correct,wrong and Np,wrong,correct were the number of phonemes
that were correctly uttered but evaluated as wrong and the number of phonemes
that were incorrectly uttered but evaluated as correct, respectively, for p.
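Equations (3) and (4) can be computed directly from per-phoneme flags; a small sketch with toy data:

```python
def frr_far(correct_flags, accepted_flags):
    """False rejection rate and false acceptance rate per Eqs. (3)-(4):
    correct_flags[i]  - True if phoneme i was actually uttered correctly,
    accepted_flags[i] - True if the evaluator scored phoneme i as correct."""
    n_correct = sum(correct_flags)
    n_wrong = len(correct_flags) - n_correct
    # correctly uttered but evaluated as wrong -> false rejection
    n_correct_rejected = sum(c and not a for c, a in zip(correct_flags, accepted_flags))
    # incorrectly uttered but evaluated as correct -> false acceptance
    n_wrong_accepted = sum((not c) and a for c, a in zip(correct_flags, accepted_flags))
    frr = n_correct_rejected / n_correct if n_correct else 0.0
    far = n_wrong_accepted / n_wrong if n_wrong else 0.0
    return frr, far

# Toy example: 4 correctly uttered phonemes (1 rejected), 2 wrong (1 accepted).
frr, far = frr_far([True, True, True, True, False, False],
                   [False, True, True, True, True, False])
```

In practice these rates are computed per phoneme p and then averaged, which is what the FRR/FAR figures in Table 1 report.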
Table 1 shows a performance comparison of three pronunciation evaluation methods
employing either the multiple pronunciation dictionary for non-native language
learners or anti-models. The first row shows that the average FRR and FAR were
measured as 52.6% and 20.1%, respectively, for the anti-model based pronunciation
evaluation method. In addition, the average FRR and FAR were 17.1% and 59.3%,
respectively, for the pronunciation evaluation method employing the multiple
pronunciation dictionary for non-native language learners. Moreover, it can be seen
from the third row that the

1 All pronunciation symbols in this paper are denoted in the form of the two-letter
uppercase ARPAbet [5].
Table 1. Performance comparisons of the average false rejection rate (FRR) and the
average false acceptance rate (FAR) for the pronunciation evaluation methods
employing either a multiple pronunciation dictionary for non-native language learners
or anti-models
average FRR and FAR were 32.1% and 32.7%, respectively, for the proposed
method employing both anti-models and the multiple pronunciation dictionary
for non-native language learners.
6 Conclusion
This paper proposed an automatic pronunciation evaluation method based on speech
recognition, using a multiple pronunciation dictionary for non-native language
learners and anti-models. In particular, the multiple pronunciation dictionary for
non-native language learners was automatically generated by an indirect data-driven
method, so the proposed method could cover the effects of the mother tongue of
non-native learners. Moreover, the proposed pronunciation evaluation method performed
two steps: (a) speech recognition and (b) pronunciation analysis. By performing
speech recognition using anti-models and the multiple pronunciation dictionary for
non-native language learners, we obtained the phoneme sequence, the log-likelihoods
of the acoustic models and of the anti-models, and the duration of each phoneme in
the recognized sequence. Using the speech recognition results, each phoneme was then
evaluated by comparing the phoneme sequences and the normalized phoneme
log-likelihood ratio. In the automatic English pronunciation evaluation experiments
with Korean adult speakers, the proposed pronunciation evaluation method achieved an
average FRR and FAR of 32.1% and 32.7%, respectively, which outperformed both the
anti-model based method and the pronunciation variant based method.
References
1. Eskenazi, M.: An overview of spoken language technology for education. Speech
Commun. 51, 832–844 (2009)
2. Kim, M., Oh, Y.R., Kim, H.K.: Non-native pronunciation variation modeling using
an indirect data driven method. In: ASRU, Kyoto, Japan, pp. 231–236 (2007)
3. Lee, S.J., Kang, B.O., Jung, H.-Y.: Statistical model-based noise reduction approach
for car interior applications to speech recognition. ETRI Journal 32, 801–809 (2010)
4. Young, S.J., Woodland, P.C.: Tree-based state tying for high accuracy acoustic
modeling. In: ARPA Human Language Technology Workshop, Plainsboro, NJ, pp.
307–312 (1994)
5. Deller, J.R., Hansen, J.H.L., Proakis, J.G.: Discrete-Time Processing of Speech
Signals. IEEE Press, New York (2000)
Computer Applications in Teaching and Learning:
Implementation and Obstacles among Science Teachers
drkhataybah@yahoo.com
K.S. Thaung (Ed.): Advanced Information Technology in Education, AISC 126, pp. 353–360.
springerlink.com © Springer-Verlag Berlin Heidelberg 2012
354 A.M.I. Khataybeh and K. Al Sheik
3 Methodology
A questionnaire was developed using a nine-step process for Likert-type scales
according to Koballa (1984); the full questionnaire can be requested from the
authors. The sample consisted of 52 science teachers selected randomly.
Results of the Study
Results related to question 1: To what extent do science teachers implement ICT in
their teaching?
Table 1. Percentages of respondents, Means and Standard deviation for each statement
Table (1a). Percentages of respondents, Means and Standard deviation for each domain
Table 1 shows the mean and standard deviation for each statement. According to the
criteria, each statement with a mean below 3.00 is considered low performance; 23
statements were classified as low performance, while 6 statements with means between
3.00 and 3.50 are considered satisfactory performance. The highest performance was for
statements 13, 19, 20, 21, 22, 23, and 25, and the lowest performance was for
statements 1–12, 14–18, 24, and 26–30. Table 1a shows that only one domain has a mean
above 3.00 while four domains have means below 3.00; the mean for the whole test was
also below 3.00, indicating low performance overall.
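The classification criterion above (means below 3.00 counted as low performance,
means between 3.00 and 3.50 as satisfactory) can be sketched as follows. The item
names and response values are hypothetical illustrations, not the study's data:

```python
import statistics

# Hypothetical 5-point Likert responses for two statements
# (illustrative values only; not the study's actual data)
responses = {
    "statement_13": [4, 3, 3, 4, 3],
    "statement_1":  [2, 1, 2, 3, 2],
}

def classify(mean):
    """Apply the paper's criterion: <3.00 low, 3.00-3.50 satisfactory."""
    if mean < 3.00:
        return "low performance"
    elif mean <= 3.50:
        return "satisfactory performance"
    return "high performance"

for item, scores in responses.items():
    m = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"{item}: mean={m:.2f}, sd={sd:.2f}, {classify(m)}")
```

The same per-item mean and standard deviation underlie the domain-level summaries in
Table 1a; a domain score is simply the mean over its statements.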
Table 2. Means and Standard Deviation of the obstacles for each domain
Table 2 shows that the highest obstacle rating was in domain 5, Computer Applications
for Project Work, with a mean of 3.12 out of 5. PowerPoint and Hands-on Learning had
the second highest obstacle rating (3.104 out of 5); these two domains were accepted
because their mean ratings exceed 3. The third was Excel Template Sheets, with a
rating of 2.964 out of 5; the fourth was Excel Spreadsheet Models (2.84 out of 5);
and the lowest obstacle rating was for Internet Resources (2.59 out of 5).
Table 3. Means and standard deviations for science teachers' implementation of
computer applications

Specialization   Mean    SD
Chemistry        2.449   0.95
Others           2.718   0.98
This table shows science teachers' lack of implementation of these applications. It
also shows that there is no difference between chemistry teachers and other teachers,
or between teachers with more than five years of experience and those with less.
Table 5. ANOVA Analysis for the obstacles of science teachers in using computer applications
Table 6. Correlation Coefficient between the Implementation and Obstacles in using computer
applications
Domain                                         Correlation
Using computers for statistical analysis       0.37
Integrating computers in students' learning    0.17
PowerPoint and hands-on learning               0.01
Using internet resources                       0.17
Computer application for project work          0.32
All items                                      0.14
the lack of computer laboratories. Table 2 showed that obstacles were concentrated in
almost all computer application domains. In the open question of the questionnaire,
science teachers mentioned that obstacles stemmed from a lack of knowledge, a lack of
teaching aids in the classrooms, high student numbers, and the lack of computer
laboratories. One of the chemistry teachers said: "Obstacles stemmed from my lack of
knowledge, not an institutional lack of equipment; therefore there is no contradiction
between the obstacles and the use of the different applications. I recommend that most
science teachers attend training courses in computer applications in teaching." Table 4
showed no statistically significant difference with respect to computer usage,
specialization, and teaching experience, because the teaching staff as a whole faced
similar implementation obstacles. Table 5 showed no significant differences for any
variable, because the obstacles are similar for all science teachers regardless of
their experience and specialization. Table 6 shows a negative correlation between
implementation and obstacles, which could be due to the lack of knowledge and practice
among science teachers and the lack of equipment and software in the classrooms.
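The coefficient reported in Table 6 is presumably a Pearson product-moment
correlation between teachers' implementation scores and obstacle scores. A minimal
sketch of that computation, using made-up per-teacher scores rather than the study's
data:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-teacher scores (not the study's data): higher
# obstacle ratings tending to accompany lower implementation,
# which yields a negative coefficient.
implementation = [2.1, 2.8, 3.2, 2.5, 1.9]
obstacles      = [3.4, 3.0, 2.6, 3.1, 3.6]
print(f"r = {pearson(implementation, obstacles):.2f}")
```

A negative r, as the discussion reports, means that teachers who perceive greater
obstacles tend to implement the applications less.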
5 Recommendations
In light of the findings of the study, the following recommendations can be offered:
equipping the laboratories with sufficient software and hardware; training science
teachers in how to use sophisticated software such as Excel spreadsheets, template
sheet models, PowerPoint, SPSS, and the Crocodile program; and equipping classrooms
with data projectors and PCs to allow students to present their projects.
References
1. Aiken, M.W., Hawley, D.D.: Designing an electronic classroom for large college
courses. T.H.E. Journal 23(2), 76–78 (1995)
2. Albrecht, W.S., Sack, R.J.: Accounting Education: Charting the Course through a
Perilous Future. American Accounting Association (August 2000)
3. Alexander, S., McKenzie, J.: An Evaluation of Information Technology Projects for
University Learning. Australian Government Printing Service, Canberra (1998)
4. Baker, W., Hale, T., Gifford, B.R.: Technology in the Classroom: From Theory to
Practice. Educom Review 32(5), 42–50 (1997)
5. Beard, R., Hartley, J.: Teaching and Learning in Higher Education, 4th edn. Paul
Chapman Publishing, London (1984)
6. Bissell, V., McKerlie, R.A., Kinane, D.F., McHugh, S.: Teaching periodontal pocket
charting to dental students: a comparison of computer assisted learning and
traditional tutorials. British Dental Journal 195(6), 333–336 (2003)
7. Byrom, E.: Evaluating the impact of technology (2002),
http://www.serve.org/_downloads/publications/Vol5.3.pdf
(retrieved May 12, 2011)
8. Chong, V.K.: Student Performance and Computer Usage: A Synthesis of Two Different
Perspectives. Accounting Research Journal 10(1), 90–97 (2002)
9. Dunn, J.G., Kennedy, T., Bond, D.J.: What skills do science graduates need?
Search 11, 239–242 (1980)
10. Freeman, M.A., Capper, J.M.: Obstacles and opportunities for technological
innovation in business teaching and learning (2007),
http://www.heacademy.ac.uk/assets/bmaf/documents/publications/IJME/Vol1no1/freeman_tech_innovation_in_Tandl.pdf
(retrieved April 17, 2011)
11. Ghosal, M., Arthur, D.: An Electronic Spreadsheet Solution to Simultaneous
Equations in Financial Models. Financial Practice and Education 1(2), 93–98 (1991)
12. Goggin, N.L., Finkenberg, M.E., Morrow Jr., J.R.: Instructional Technology in
Higher Education Teaching. Quest 49(3), 280–290 (1997)
13. Green, K.C.: Campus Computing 1998: the ninth national survey of desktop computing
and information technology in higher education. The Campus Computing Project,
California (1999)
14. Koballa, T.R.: Designing a Likert-type scale to assess attitudes towards energy
conservation. Journal of Research in Science Teaching 20, 709–723 (1984)
15. Leidner, D.E., Jarvenpaa, S.L.: The use of information technology to enhance
management school education: A theoretical view. MIS Quarterly 19(3), 265–291 (1995)
16. Lont, D., MacGregor, A., Willett, R.: Technology and the Accounting Profession.
Chartered Accountants Journal of New Zealand 77(1), 31–37 (1998)
17. McKenzie, J.: Computers in the teaching of undergraduate science. British Journal
of Educational Technology 8(3), 214–224 (1977)
18. McQuillan, P.: Computers and pedagogy: the invisible presence. Journal of
Curriculum Studies 26(6), 631–653 (1994)
19. Mykytyn, P.P.: Educating our Students in Computer Applications Concepts: A Case
for Problem-Based Learning. Journal of Organizational and End User Computing 19(1),
51–61 (2007)
20. Nicholson, A.H.S., Williams, B.C.: Computer use in accounting, finance and
management teaching amongst universities and colleges: a survey. Account 6(2), 19–27
(1994)