
IJDIWC

International Journal of
DIGITAL INFORMATION AND WIRELESS COMMUNICATIONS
ISSN 2225-658X (Online)

Volume 3, Issue 4

2013

TABLE OF CONTENTS
Original Articles

DEVELOPMENT OF A NETWORKING LABORATORY COMPLEX
Feliksas Kuliesius, Evaldas Ousinskis ....... 335

THE DIRECT INFLUENCE OF THE CELL PERIMETER ON THE HANDOVER DELAY IN THE BROADBAND NETWORK
Elmabruk Elgembari, Kamaruzzaman Seman ....... 341

CONSTRUCTING A COURSE PROFILE BY MEASURING COURSE OBJECTIVES
Neville I. Williams ....... 350

STATE OF THE ART OF A MULTI-AGENT BASED RECOMMENDER SYSTEM FOR ACTIVE SOFTWARE ENGINEERING ONTOLOGY
Udsanee Pakdeetrakulwong, Pornpit Wongthongtham ....... 363

ORGANIZATIONAL COMMITMENT AMONG PURCHASING AND SUPPLY CHAIN PERSONNEL
Jarno Einolander, Hannu Vanharanta ....... 377

A NOVEL RULE-BASED FINGERPRINT CLASSIFICATION APPROACH
Faezeh Mirzaei, Mohsen Biglari, Hossein Ebrahimpour-komleh ....... 385

E-GOVERNMENT WEB ACCESSIBILITY: WCAG 1.0 VERSUS WCAG 2.0 COMPLIANCE
Faouzi Kamoun, Basel M. Al Mourad, Emad Bataineh ....... 390

ADAPTIVE CHANNEL EQUALIZATION FOR FBMC BASED ON VARIABLE LENGTH STEP SIZE AND MEAN-SQUARED ERROR
Mashhoor AlTarayrah, Qasem Abu Al-Haija ....... 400

ELLIPTIC JES WINDOW FORMS IN SIGNAL PROCESSING
Claude Ziad Bayeh ....... 411

SMART ON-BOARD TRANSPORTATION MANAGEMENT SYSTEM USING GPS/GSM/GPRS TECHNOLOGIES TO REDUCE TRAFFIC VIOLATION IN DEVELOPING COUNTRIES
Saed Tarapiah, Shadi Atalla, Rajaa AbuHania ....... 430

OBJECT-ORIENTED IN ORGANIZATION MANAGEMENT: ORGANIC ORGANIZATION
Amr Badr El-Din ....... 440

AN EFFECTIVENESS TEST CASE PRIORITIZATION TECHNIQUE FOR WEB APPLICATION TESTING
Mojtaba Raeisi Nejad Dobuneh, Dayang N. A. Jawawi, Mohammad V. Malakooti ....... 451

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 335-340
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Development of a Networking Laboratory Complex


Feliksas Kuliesius, Evaldas Ousinskis
Physics faculty, Vilnius University
Sauletekio 9, bld. 3, LT-10222, Vilnius, Lithuania
feliksas.kuliesius@ff.vu.lt, evaldas.ousinskis@ff.vu.lt

ABSTRACT
This paper reviews the complex system implemented for teaching the network design and management topics taught at Vilnius University. These courses place special emphasis on sustaining practical skills. A set of training tools blending i) emulators, ii) a real in-class lab and iii) remote online laboratories has been deployed with the aim of expanding the advantages of interactive teaching. These tools, as well as their benefits and limitations in the teaching process, are discussed in this paper.

KEYWORDS
Network Management, Virtual/Managed Learning Environments, Remote/Online/Virtual Laboratories, Network Education

1. INTRODUCTION
Nowadays, especially as service and data storage virtualization and cloud computing play an increasingly significant role in the ICT sector, the demand for enhanced high-speed network infrastructure and, consequently, for well-educated network designers and administrators is growing tremendously [1]. Additionally, network engineers must possess a deeper understanding of the processes occurring in converged networks with mixed services.
It is evident that, when seeking to prepare highly qualified professionals, it isn't enough to convey solely written knowledge, i.e., the theoretical background; a complex teaching environment, in which special attention is paid to hands-on labs and case studies, must be planned.
This paper reviews the lab equipment complex used in the network design and management courses taught at Vilnius University, Faculty of Physics, and the ways to achieve the predefined goals: to acquire and sustain the practical skills necessary for network administrators.
In general, university studies create an excellent opportunity not only to gain a strong theoretical background, but also to build hands-on experience working with modern IT equipment. On the other hand, science (physics) students are very curious about and receptive to the technical questions that always arise when ICT is studied, so particular attention to the technical and technological side must be scheduled when delivering the courses.

2. DISCUSSION

The main goals of the networking courses, which are partly oriented toward preparing students for the CCNA and CCNP professional certification exams (recognized as the de facto standard in networking qualification grading), are to prepare creative engineers who have a deep understanding of the technology, infrastructure and processes related to converged, often inhomogeneous networks, and who possess teamwork skills.
As evidenced by extensive discussions with industry representatives on what they expect from future graduates, there is almost nothing to add to the verbal teaching methodology, except to change and renew the content of the courses, adapting it to contemporary technology trends. This is due to the well-known Cisco Networking Academy Program materials [2] and other readings (the study guides by W. Odom, T. Lammle, D. Teare, K. Wallace and others [3-6]): unrivaled textbooks which can be used as a background for the corresponding courses, discussing their topics in lectures and seminars.
However, to prepare well-qualified professionals, special attention must be paid to practical, hands-on lab teaching.
Three ways that can be used to solidify the practical skills of students are: working with real network appliances directly, remote access to equipment
and simulators. All of them have their own pros and cons. The simulators (Packet Tracer, GNS3, IOU, etc.) can be used as self-test tools, and can even partly replace the lab infrastructure, since they are available anywhere a computer is, but, unfortunately, they have limited and reduced functionality [7,8]. Packet Tracer supports only the basic functionality of the real equipment. GNS3, based on Dynamips, allows genuine IOS images to be loaded, but does not support switches. IOU is perhaps the best choice from the perspective of functionality, but its usability is limited by its proprietary license. Another important fact is that simulators, especially GNS3, are resource-intensive software, so it is problematic to design and test more complicated network topologies.
Real equipment, and the experience of working with it, overcomes these limitations and is preferable, since students can perceive all aspects of network setups: solving emerging problems, simulating (quasi-)real situations, setting up cabling structures and, perhaps most importantly, learning to work in teams.
Part of the equipment of the described laboratory is available directly, part via remote access. Direct access is very useful for beginners: students become acquainted with the appliances, can do the cabling, see the deployed topology directly, console straight into the gear and coordinate teamwork, as well as consult classmates or be consulted by the instructor. The main weakness of this approach is the limited access to the equipment: we have only 10 workplaces available in our lab for such direct-access training.
The way to utilize the resources and possibilities of the facility fully is to enable remote access, which can be implemented in different ways [9-11]. Additionally, remote access helps to develop creativity, mental visualization and imagination, which are necessary to perceive the maintained infrastructure better. In other words, the remote access mode accustoms the students to conditions close to the real ones which administrators face every day when monitoring and configuring extended networks.
When designing the lab, the main points pursued were as follows. In order to meet the students' needs to experiment effectively, handily and securely, the main labs (the CCNP, Security and Service Provider lab racks) were devoted to common collaborative use. They can be accessed via a separate VPN connection per lab pod. The conventional Cisco Easy VPN, implemented on a Cisco ASA 5510 firewall, was used to safeguard an authenticated and secured connection to each pod's access server. Pre-shared keys for individuals, or for all users of the group intended to access the pods, were used. Thus a single pod (or several of them, connected into a bigger cluster) can be used personally or by a team, depending on the teaching goals. The appliances are consoled into using the reverse lines of an access server (Cisco 2511 or 1841 routers with HWIC-8A cards). To visualize the lab topology, the access server menu was mapped and hyperlinked to the topology diagram on a web server attached to the pod, so that by selecting a device icon the user is connected to the appropriate device console. The versatile CCNP lab topology, adapted both to our own customized labs and to the standard labs from the Student Lab Guide [12], and designed to cover all three required courses (ROUTE, SWITCH, TSHOOT), is presented in Fig. 1. The PCs and servers required for testing, authenticating and monitoring the lab equipment were installed as virtual machines (VMs) using the VMware ESXi 5.0 hypervisor on a Dell PowerEdge R210 server. One interface of each PC was configured in a different VLAN, denoted for this PC on the VMware virtual switches, trunked to a Catalyst 2960 switch (not shown in the diagram) and separated by it. Additional VMs can easily be deployed on request. FTP, TFTP, DHCP, RADIUS, TACACS and Syslog servers, and network monitoring tools like Wireshark, Zenoss [13] and others, were implemented as test instruments in these VMs. Some security restrictions are applied to prevent machine misuse, e.g., the misconfiguration of management connections.
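The per-PC VLAN separation on the trunk to the hypervisor described above might be configured along the following lines (a sketch only; the VLAN numbers and interface names are illustrative assumptions, not taken from the paper):

```
! Illustrative Catalyst 2960 sketch; VLAN IDs and interfaces are assumptions
vlan 101
 name POD1-PC1
vlan 102
 name POD1-PC2
!
! Trunk carrying the per-PC VLANs from the VMware virtual switches
interface GigabitEthernet0/1
 description Trunk to ESXi host
 switchport mode trunk
 switchport trunk allowed vlan 101,102
```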
To access the PCs' remote desktops more conveniently than with the usual RDP, TinyVNC was installed on each PC/server of the lab, making these PCs reachable via a web browser, without the need for an RDP client.
The electric supply of the devices is controlled by Internet-driven APC switched rack PDUs (model AP7953), which enable the user to switch the lab equipment on and off remotely when needed. On start-up or a full reload of a laboratory pod, each device should be started with an appropriate delay to prevent power overload.

Fig. 1. The versatile CCNP lab topology.
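The staggered start-up could be scheduled along these lines (a minimal sketch; the outlet names and the delay step are illustrative assumptions, and the actual PDU switching call is left out):

```javascript
// Build a staggered power-on schedule so that switching all outlets on at
// once does not overload the supply. The outlet names and the 5-second
// step are illustrative assumptions, not values from the paper.
function powerOnSchedule(outlets, stepSeconds) {
  return outlets.map(function (outlet, i) {
    return { outlet: outlet, delaySeconds: i * stepSeconds };
  });
}

var schedule = powerOnSchedule(['R1', 'R2', 'SW1', 'SW2'], 5);
// schedule[3] is { outlet: 'SW2', delaySeconds: 15 }
```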
These labs fully solved the access problem, since students can access them 24x7 from anywhere, and it is convenient to organize team-work training using a group login in this way. From the instructor's point of view, the only downside of the system is that it is difficult to track and evaluate student performance, since the instructor often sees only the final result, without the intermediate actions; it is therefore difficult to help the students in time, if needed. The opportunity to evaluate a student's operation sequence is also essential when organizing a hands-on skills exam. To utilize the resources and possibilities of the facility and achieve the aforementioned goals, the lab system was supplemented with a lab management system which ensures i) individualized access to the lab with time reservation, ii) a database of predefined lab tasks, iii) tracking of student actions and iv) securing the system from misuse.
This additional management system has been deployed on some pods of the lab cluster. The structural model of the managed lab is presented in Fig. 2. The management system consists of i) a server dedicated to the main control processes, such as a RADIUS server, a MySQL database, and an Apache HTTP server as a platform for the web user and administrator/instructor interfaces, adapted for user registration, lab time reservation, control of the lab devices and power supply, and monitoring of student activities, as well as for the storage of lab descriptions, assignments and other materials; and ii) an intrusion prevention system (based on Suricata [14]) designed for monitoring and securing the system. The main server of the lab runs Ubuntu 11.10 installed on the same VMware ESXi 5.0 hypervisor where the lab PCs reside. The desired functionality is achieved by installing FreeRADIUS, a tiny, modular and flexible server, which provides not only AAA services but also the usual LAMP stack and additional services [15]. When designing and deploying the laboratory web interface, PHP, HTML, CSS and JavaScript with the jQuery framework library were used. The control of the devices was realized using AJAX and Java Web Start technologies.

Fig. 2. The structure of the managed laboratory pod.
To heighten the security level of the virtual machines, additional scripts were implemented on the management server. Not only is the user restricted to the rights and commands necessary to solve the presented assignments, but these scripts also make it possible to recover a PC easily: after a misconfiguration or other unpredicted change, it is enough to restart the VM, which is configured so that after reboot its configuration and storage are restored to the initial state. The reboot is available not only from the VM desktop (impossible after a system hang-up), but also from the system management panel, from which, in the same way, a power supply reset, a reset of the network devices (routers, switches, etc.) or the pre-configuration load (described below) can be initiated.
It is more difficult to control unwanted actions of users connected to the network equipment. These tasks were resolved using the Suricata IPS, through which all user traffic was directed. The IPS filters unwanted commands and generates logs for user activity monitoring.

To synchronize the user reservations, the RADIUS server and the timed access to the real lab equipment, cron processes were used.
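Such time-based synchronization might be wired up with crontab entries along these lines (a sketch; the script names and schedule are illustrative assumptions, not the authors' implementation):

```
# Illustrative crontab sketch; script names and schedule are assumptions
# Every 5 minutes: expire finished reservations and revoke their RADIUS access
*/5 * * * *  /opt/lab/expire_reservations.sh
# Every 5 minutes: enable access for reservations that have just started
*/5 * * * *  /opt/lab/activate_reservations.sh
```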
To use this system, students must register and fill in a lab user profile with a nickname, password, real name and e-mail address. The registration must be approved by the lab administrator in order to reject illegitimate users. When a user chooses a reservation time in the calendar, a temporary password, valid for the reservation time only, is generated and sent to the user. The access server allows connections to the lab appliances only for a user who successfully passes the RADIUS authentication.
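The temporary-password mechanism could be sketched as follows (an illustration only; the password alphabet, length and validity check are assumptions, not the authors' implementation):

```javascript
// Generate a random temporary password and its validity window.
// The alphabet and the 10-character length are illustrative assumptions.
function makeTemporaryPassword(startMs, endMs) {
  var alphabet = 'abcdefghijkmnpqrstuvwxyz23456789';
  var password = '';
  for (var i = 0; i < 10; i++) {
    password += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return { password: password, validFrom: startMs, validTo: endMs };
}

// RADIUS would accept the credential only inside the reserved slot.
function isValidNow(credential, nowMs) {
  return nowMs >= credential.validFrom && nowMs <= credential.validTo;
}
```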
Connecting to the lab at the reserved time, the student can start the desired assignment. For convenience and to save time, the assignments are supplied with preconfigured devices. This also allows the instructor to design specific tasks, which is especially necessary in the troubleshooting course. Each lab assignment is described by a separate URL element which generates the appropriate assignment description view and loads the predefined configurations onto the lab appliances:
<ul class="lab">
  <li>Lab 1-1: Switching basics</li>
  <li><span><a href="Labs/SW_Lab_1_1"
    data-link="http://172.16.83.53/ConfigurationLoad/LoadLab/Switch/lab_1_1"
    class="start">Begin</a></span></li>
  <li><span><a href="<?php echo base_url() . "assets/pdfs/Switch/Lab_1_1.pdf"; ?>"
    title="Switch Lab 1-1 Description" alt="Switch Lab 1-1 Description"
    target="_blank" class="description">Assignment description</a></span></li>
</ul>

Since a single anchor cannot carry two HREF targets directly, the additional attribute data-link="http://172.16.83.53/ConfigurationLoad/LoadLab/Switch/lab_1_1" is used to load the predefined configurations. The loading of the configuration is activated by this JavaScript code:
<script type="text/javascript">
$(function(){
  var seconds = 1;
  var canClick = true;
  seconds *= 1000;
  /* Hide the "Description" button when the "Start" button is hit */
  $('ul.lab a.start').bind('click', function(){
    $(this).parent().parent().siblings().find('.description').hide();
  });
  /* Config loading begins; a loading gif icon is shown while loading */
  $('a.start').bind('click', function(e){
    var _link = $(this).attr('data-link');
    var _default = $(this).attr('href');
    var loader = $('input[name="link"]').val();
    $(this).html('<img src="' + loader + '"/>');
    $.ajax({
      url: _link,
      dataType: 'html',
      cache: false,
      success: function(){
        window.location.href = _default;
      }
    });
    e.preventDefault();
  });
});
</script>

To track the activity of students in the lab and to enable the instructor to evaluate a student's intermediate actions, and to correct and advise him/her if required, an information flow from the bridged virtual interfaces (Fig. 3) of the filtering VM was branched to the log server. Since tcpdump (mainly used with Wireshark) writes a full two-way stream of the session, with annoying duplication and, often, packet misordering, the tcpflow [16] capture tool was used in our system. Since tcpflow understands TCP sequence numbers and correctly reconstructs data streams regardless of out-of-order delivery, it can store each flow (only the user commands issued to the lab equipment are important in our case) in a separate file with a clear name (src_ip.src_port-dst_ip.dst_port), which is convenient for later analysis. Afterwards, the interesting information can be grouped, formatted and accessed via a web interface.
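The grouping step could start from the tcpflow file names; a sketch (the zero-padded name layout follows tcpflow's src_ip.src_port-dst_ip.dst_port convention, but the parsing itself is an illustrative assumption, not the authors' code):

```javascript
// Parse a tcpflow capture file name such as
// "172.016.083.010.51234-172.016.083.053.00023" into a flow record
// (source and destination IP/port), for later grouping in a web view.
function parseFlowName(name) {
  var m = name.match(
    /^(\d+\.\d+\.\d+\.\d+)\.(\d+)-(\d+\.\d+\.\d+\.\d+)\.(\d+)$/);
  if (!m) return null;   // not a tcpflow-style file name
  return {
    srcIp: m[1], srcPort: parseInt(m[2], 10),
    dstIp: m[3], dstPort: parseInt(m[4], 10)
  };
}
```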
Fig. 3. Communication structure of the management server. The lab environment is denoted as a private LAN.

3. CONCLUSIONS
It can be summarized that, by combining online teaching materials and in-class discussions with hands-on labs using i) simulators, ii) real equipment and iii) remote labs, each of them having different benefits, we have succeeded in preparing highly qualified network administrators and designers who feel confident in challenging work situations.
The management system enables the instructor to track a student's intermediate actions, to correct and advise him/her if required, and to evaluate their activity more accurately.
The special components of the management system allow the users, students and instructors alike, to feel at ease with the lab assignments and to focus on solving the main tasks.
4. REFERENCES
1. OECD: ICT Skills and Employment: New Competences and Jobs for a Greener and Smarter Economy. OECD Digital Economy Papers, No. 198, pp. 1--59, OECD Publishing (2012). http://dx.doi.org/10.1787/5k994f3prlr5-en
2. Cisco Networking Academy Course Catalog. http://www.cisco.com/web/learning/netacad/course_catalog/CCNAexploration.html
3. Odom W.: CCNA 640-802 Official Cert Library. 3rd ed. Cisco Press, Indianapolis, IN (2012).
4. Lammle T.: CCNA: Cisco Certified Network Associate Study Guide. 7th ed. Wiley Publishing, Inc., Indianapolis, IN (2011).
5. Teare D.: Implementing Cisco IP Routing (ROUTE) Foundation Learning Guide. Cisco Press, Indianapolis, IN (2010).
6. Wallace K.: CCNP TSHOOT 642-832 Official Certification Guide. Cisco Press, Indianapolis, IN (2010).
7. White B., Lepreau J., Stoller L., Ricci R., Guruprasad S., Newbold M., Hibler M., Barb C., Joglekar A.: An integrated experimental environment for distributed systems and networks. SIGOPS Operating Systems Review, Vol. 36, pp. 255--270 (2002).
8. Jourjon G., Rakotoarivelo T., Ott M.: Why simulate when you can experience? In: Proc. ACM SIGCOMM Education Workshop, Toronto (2011).
9. Hua J., Ganz A.: Web enabled remote laboratory (R-lab) framework. In: Proc. 33rd ASEE/IEEE Frontiers in Education Conference, Boulder, CO, T2C, pp. 8--13 (2003).
10. DeHart J., Kuhns F., Parwatikar J., Turner J., Wiseman C., Wong K.: The Open Laboratory: a resource for networking research and education. ACM SIGCOMM Computer Communication Review, Vol. 35, No. 5, pp. 75--78 (2005).
11. Li Ch.: Developing an Innovative Online Teaching System. In: Proc. 2010 International Conference on Education, Training and Informatics, Vol. 2, pp. 365--370, Orlando, FL (2010). http://www.iiis.org/CDs2010/CD2010IMC/ICETI_2010/PapersPdf/EB531TD.pdf
12. CCNP ROUTE Lab Manual. Cisco Press, Indianapolis, IN (2011).
13. Release Notes for Zenoss Service Dynamics Version 4.2.2. http://community.zenoss.org/docs/DOC-13701 (2012).
14. Day D. J., Burns B. M.: A Performance Analysis of Snort and Suricata Network Intrusion Detection and Prevention Engines. In: The Fifth International Conference on Digital Society, ICDS 2011, pp. 187--192 (2011).
15. Walt D.: FreeRADIUS Beginner's Guide. Packt Publishing Ltd., Birmingham, UK (2011).
16. Garfinkel S. L.: TCPFLOW: TCP/IP packet demultiplexer. https://github.com/simsong/tcpflow


International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 341-349
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

The Direct Influence of the Cell Perimeter on the Handover Delay in the Broadband Network

Elmabruk S. M. Elgembari and Kamaruzzaman B. Seman
Universiti Sains Islam Malaysia
Bandar Baru Nilai, 71800 Nilai, Negeri Sembilan, Malaysia
mgembari@gmail.com

ABSTRACT
Among the main keys of positioning and ranging technologies in wireless telecommunication networks are the values of distance and RSSI (Received Signal Strength Indication); together with the cell perimeter, they have a significant effect on network quality, in terms of quality of service and connectivity, and not least in the roaming stage. Many research results have traced the influence of these keys on the handover delay in the broadband network: in many cases a mobile terminal (MT) handing over between the borders of location areas or zones is affected by a long delay and packet data loss. Over recent years many signal propagation schemes have been presented to describe the relationship between the cell size, the mobile terminal's distance from the current and the targeted base station, and the RSSI. In this paper, a modified scheme of the location management area (LMA)-based multimedia broadcast services scheme is presented, showing the effect of the cell size, the distance between the BS and the MT, and the RSSI on the handoff delay. The analytical algorithm results show a reduction of the handover delay in cells of smaller perimeter compared to large cells in a location management area.

KEYWORDS
WiMAX Network, LMA, Distance, RSSI, Handoff
Delay.

1 INTRODUCTION
Mobile WiMAX consists of three entities:
mobile station (MS), access service network
(ASN), and connectivity service network (CSN)
[1]. The BS performs radio-related functions,

which is located in the ASN. The CSN provides


connectivity to the Internet, ASP, other public
networks, and corporate networks. The CSN is
owned by the NSP and includes AAA servers
that support authentication for the devices, users,
and specific services. The CSN also provides per
user policy management of QoS and security.
The CSN is also responsible for IP address
management, support for roaming between
different NSPs, location management between
ASNs, and mobility and roaming between ASNs
[2].
To provide MBS, a new functional entity, the multicast and broadcast service controller (MBSC), is introduced in [LMB]. The handover delay of an MBS session includes two types of delay: 1) the delay due to the link-level messages during the IEEE 802.16e handover; and 2) the delay due to the MBS signaling messages. The IEEE 802.16e handover mechanism consists of cell selection, handover decision, synchronization, and ranging and termination with the serving BS, since 802.16e embraces the functionality for elaborate parameter adjustment and procedural flexibility. The RSSI and the distance take part in the signaling between the current BS and the target BS. The data exchange takes place via MOB_NBR-ADV messages. The MT analyzes all the received data, particularly the signal strength, and compares the data of the many neighboring BSs to find the strongest signal. The received signal strength is the quantity used to measure the power of the received radio signals [3]. For each base station there is a threshold point below which the connection with the active base station breaks; the signal strength must therefore be greater than the threshold point to maintain the connection with the active BS. The signal gets weaker as the mobile moves away from the active base station, and the signal towards the new base station gets stronger as the mobile moves closer to it.
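A simplified target-selection rule consistent with this description might look as follows (a sketch only; the threshold and hysteresis values are illustrative assumptions, not parameters from the paper):

```javascript
// Pick a handover target: the strongest neighbor whose RSSI (dBm) is above
// the connection threshold and exceeds the serving BS by a hysteresis margin.
// The threshold and hysteresis values in the test below are assumptions.
function selectHandoverTarget(servingRssi, neighbors, thresholdDbm, hysteresisDb) {
  var best = null;
  neighbors.forEach(function (bs) {
    if (bs.rssi < thresholdDbm) return;                // below threshold: unusable
    if (bs.rssi < servingRssi + hysteresisDb) return;  // not sufficiently stronger
    if (best === null || bs.rssi > best.rssi) best = bs;
  });
  return best;  // null means: stay on the serving BS
}
```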
In the modified LMB, the RSSI and the distance between BSs play a major part as parameters of the algorithm. The MOB_NBR-ADV message carries the signal strength information of the neighboring BSs; the MT compares all these signals to select the appropriate one and take the initial decision for the handover. This paper is organized as follows: Section 2 presents the summary and related work; Section 3 describes the WiMAX handover and the MAC layer, in terms of the scanning and ranging procedures; Section 4 describes the fundamentals of RSSI ranging and presents the comparison based on the cell perimeters and the mobile terminal speed; Sections 5 and 6 present the modified model analysis and the results, respectively. Finally, Section 7 concludes.
2 RELATED WORKS
2.1 LMA-based Multimedia Broadcast Services
The inter-MBS-zone handover is a critical issue when the network planning promotes real-time multimedia services as a QoS function, since the inter-MBS handover zone increases the number of BSs to which multicast packets are sent, most of the time without many users, causing a waste of bandwidth channels in addition to packet delay and loss. It is preferable to avoid inter-MBS-zone handovers in the network planning; to do so, the planning design adopts large MBS zones to eliminate the effect of the inter-MBS handover zone.
In [4] the MBS zone is divided into multiple location management areas (LMAs) and the MBS data packets are then transmitted only to the LMAs in which MBS users currently reside. An LMA is a group of geographically adjacent BSs; it is larger than a single BS, and smaller than a whole MBS zone. The management of MBS subscribers in each LMA is embodied in the WiMAX framework on location management. In IEEE 802.16e, a paging group, which is a given set of adjacent cells, is used to track the locations of MTs. A paging controller (PC) manages the up-to-date information as to which MSs are located in which paging groups. The location of an MT in ordinary mode is tracked by the PC at the level of BSs. Whenever the IEEE 802.16 MAC layer handover is performed, the target BS reports this action to the PC. Thus, the PC keeps track of the current BS and the current PG of an MS in ordinary mode.
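The bookkeeping performed by the paging controller can be sketched as follows (an illustration of the tracking described above, not the standard's data model; the names and the static BS-to-PG map are assumptions):

```javascript
// A paging controller keeps, per MS, the current BS and paging group (PG).
// On each MAC-layer handover, the target BS reports the move.
function PagingController(bsToPg) {
  this.bsToPg = bsToPg;  // static map: BS id -> PG id (assumed known)
  this.location = {};    // MS id -> { bs, pg }
}

PagingController.prototype.reportHandover = function (msId, targetBs) {
  this.location[msId] = { bs: targetBs, pg: this.bsToPg[targetBs] };
};

PagingController.prototype.currentPg = function (msId) {
  var loc = this.location[msId];
  return loc ? loc.pg : null;  // null: MS not tracked yet
};
```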
According to [5], when a mobile terminal stays on a link, it listens to the Layer 2 neighbor advertisement messages, named MOB_NBR-ADV, from its serving base station (BS). A BS broadcasts them periodically to identify the network and announce the characteristics of the neighbor BSs. On receiving one, the MT decodes the message to find information about the parameters of the neighbor BSs for its future handover. With the information provided in a MOB_NBR-ADV, the MT may minimize the handover latency by obtaining the channel numbers of the neighbors and reducing the scanning time, or may select the best target BS based on the signal strength and quality level. The whole handover mechanism is included in the handover procedure, which is conceptually divided into "handover preparation" and "handover execution".
According to [6], handover preparation can be initiated by either an MT or a BS. Within this period, the neighbor BSs are compared by metrics such as signal strength or QoS parameters, and a target BS is selected among them. Once the selection procedure completes, the MT starts to associate initial ranging with the candidate BSs to implement a new handover. Once the MT decides to hand over, it notifies its intent by sending a MOB_MSHO-REQ message to the serving BS (s-BS). Based on this notification, the BS then replies with a MOB_BSHO-RSP containing the recommended BSs for the MT, after negotiating with the candidates. Optionally, it may confirm the handover to the target BS (t-BS) over the backbone when the target is decided. Alternatively, the BS may trigger a handover with a MOB_BSHO-REQ message.
After the handover preparation, the handover execution takes place. The serving BS receives a MOB_HO-IND message from the MT as a final indication of its handover. After ranging with the target BS successfully, the MT negotiates basic capabilities such as the maximum transmit power and the modulator/demodulator type, then performs the authentication and key exchange procedures, and finally registers with the target BS.
2.2 Cell Size
A previous study [7] proposed a model for the TDMA-FDMA mobile cellular communication system. The authors presented traffic and coverage analyses for the cell planning procedure. As the cell radius increases, the location areas and multicast zones increase in number and coverage area, so the transmitted power of the base stations (BSs) within these areas and the path loss increase; however, the capacity performs better when the user capacity and the travel within the network are considered. Three environments were considered: urban, suburban and rural areas. The model presented results based on these criteria; in the case of the urban environment, the performance was considerably worse compared to the rural or suburban environments. In [8] another comparison is presented, based on the power consumption of access networks, namely passive optical networks, fiber to the node, point-to-point optical systems and WiMAX. The model results showed that the optical access technologies offer more power-efficient and economical solutions compared to the other access technologies introduced.
2.3 Handoff
In [8], many considerations have been investigated to present the performance of cellular mobile communication systems with handover procedures. The authors considered cellular organization, frequency reuse, and handoff for mobile radio telephone systems. They also analyzed the probability distribution of the residence time in a cell. One of the results showed that the mean channel holding time in a cell increases as the cell radius increases. In [9] a new vertical handover model was proposed, which enables a neighbor node to process the requests of a mobile node. The proposed model has better performance in terms of average handover latency, packet loss and power consumption. The final result was that the energy consumption per mobile node increases if and only if the speed of the mobile node is larger.
In this paper the cell perimeter is presented in terms of location areas and multicast broadcast zones, to show the effect and influence of this parameter on the handover in the broadband network, along with many scenarios, such as different distances of the mobile terminal from the base stations in the current location and the targeted one, and also the mobile terminal speed, which is considered one of the important parameters too.
3 HANDOVER IN WiMAX
A good way to examine the MAC-layer
adjustments is to follow the HO process through
its various scenarios.
3.1 MAC Layer Handover Procedure
A. Network Topology Advertisement
The BSs periodically broadcast the Mobile
Neighbor Advertisement control signal
(MOB_NBR_ADV). These signals contain
both physical-layer (i.e., radio channel) and
link-layer (e.g., MAC address) information.
B. Scanning/Ranging
The MT scans and synchronizes with the
neighboring BSs based on the channel information
from the neighbor advertisement. If the
synchronization succeeds, it then starts the
ranging procedure. The scanning and ranging
processes are shown in Fig. (1). The MT is first
allocated a ranging slot by the neighboring
BS. Then the MT starts a handshake ranging
procedure with the neighboring BS for
OFDMA uplink synchronization and parameter
(e.g., transmission power) adjustment. This
process may contain multiple message (Ranging
Request (RNG-REQ) and Ranging Response
(RNG-RSP)) transmission and parameter
adjustment transactions. This procedure ends
after the MT has completed ranging with all its
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 341-349
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
neighbors. In the ranging phase, an MT may
switch to a new channel, thus temporarily losing
its connection with the serving BS.

4 FUNDAMENTALS OF RSSI RANGING

The fundamentals of RSSI ranging [11] explain
the relationship between the transmitted and
received power of wireless signals and the
distance between nodes. This relationship is
illustrated in (1), where Pt is the transmitted
signal power, Pr is the received signal power,
d is the distance between the sending and
receiving nodes, and n is the transmission factor
whose value depends on the propagation
environment [12].

Pr = Pt * d^(-n)    (1)

Taking 10 times the logarithm of both sides of
(1), Equation (1) becomes Equation (2):

10 lg Pr = 10 lg Pt - 10n lg d    (2)

Figure 1. Scanning and Ranging [10]

Pt, the transmitted power of the nodes, is given.
If the expression of the power is converted to
dBm, Equation (2) can be directly written as
Equation (3):

Pr(dBm) = A - 10n lg d    (3)

C. Handover Decision and Initiation
The HO trigger decision and initiation can be
originated by either the MT or the BS, using an
MT HO Request message (MOB_MSHO-REQ)
or a BS HO Request message (MOB_BSHO-REQ),
respectively. This procedure is illustrated
in Fig. (2).

By Equation (3), it is clear that the values of the
parameters A and n determine the relationship
between the strength of the received signal and
the distance of signal transmission.
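As a worked illustration of Equations (1)-(3), the forward model and its inversion can be written directly. The values of A and n used as defaults below are illustrative assumptions, not calibrated parameters for any particular environment.

```python
import math

def rssi_from_distance(d, a=-40.0, n=2.3):
    """Forward model of Equation (3): Pr(dBm) = A - 10*n*lg(d).

    a : received power at unit distance in dBm (assumed value)
    n : transmission factor for the propagation environment (assumed value)
    """
    return a - 10.0 * n * math.log10(d)

def distance_from_rssi(pr_dbm, a=-40.0, n=2.3):
    """Invert Equation (3) to estimate the node separation d in metres."""
    return 10.0 ** ((a - pr_dbm) / (10.0 * n))
```

Round-tripping a distance through both functions recovers it exactly, which is a quick sanity check on the algebra relating A, n, and d.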
5 THE MODEL ANALYSIS
The location management algorithms for
multimedia multicast services are presented in
[1] and [4], where ρ is the mobile density
(mobiles/m2); v is the moving velocity (m/s),
with the mobile terminals moving at various
velocities; and l is the cell perimeter (m).
Mobile terminals move across a boundary in two
directions; however, only one direction needs to
be considered. The paging area boundary
crossing rate rp is:

Figure 2. HO decision and Initiation [10]

rp = ρ v l / π    (4)

The received signal strength can be expressed as
a function of the user's position and the base
station position as follows [13]:

RSS = h(X, Si)    (5)

where X(x, y) is the user's position and
Si(Si,x, Si,y) is the position of base station i.
The received signal strength can also be
considered via the following logarithmic
curve [14]:

Y = 22.98 log10(X) - 23.89    (6)

where Y is the received signal strength and X is
the distance between the BS and the SS.
Generally, the MT selects those BSs whose
RSSI values are higher than that of the serving
BS, which results in a better link for
communication with the target BS and a lower
bit error rate (BER). Typically, a handover is
initiated when the RSSI of the serving BS is less
than that of the targeted BS, and executed only if
there is another BS whose RSSI is at least H
higher than the drop threshold. This means that
the distance between the base stations and the
mobile terminals, together with the RSS, plays a
key role in the handover operation. Using the
location management area mathematical
algorithm in [4] as the basis, the modified
algorithm (7) adopts these new factors and pays
closer attention to their performance, in order to
present their effect in different scenarios,
particularly when the mobile terminals are
moving in different velocity categories.

6 THE ANALYTICAL RESULTS

The location management areas and the
multicasting and broadcasting zones are
considered fixed. Figure (3) shows the relation
between MT speed, RSS, and the distance
between the BS and the MT. It is clear that
increasing the MT speed leads to an increase in
the RSS as well; a composite relation of the
different factors explains why the handover delay
becomes smaller in terms of the RSS and the
distance. As the distance between the MT and
the targeted BS becomes smaller, the RSS
becomes higher; conversely, the RSS becomes
smaller as the distance from the serving BS
grows, and the signal strength then falls below
the threshold point. The target BS receives many
control signals from many MTs while, at the
same time, the MTs detect many power signals
from neighboring BSs. This heavy signaling
converges on the selected target BS for
handover, so the BS channel becomes busy and
bandwidth limitations delay or block much of the
traffic.

To calculate the function h, we calculate its
gradients, where di is the distance, or the range,
between the user and base station i. The received
signal strength can alternatively be considered in
the logarithmic curve of Equation (6) [14].

(7)


Figure 3. Effect of the Distance on the RSSI under Different Velocities


Figure (4) shows the effect of the RSS and the
distance between the MTs and the BS on the
handover delay. When the MT velocity increases,
the RSS increases too; when the velocity of the
MTs becomes high, the effect of the RSS on the
handover delay becomes very pronounced. Note
that for distances in the range 7 to 10, the rate of
growth of the handover delay is lower compared
to distances from 6 down to 1. Starting from an
MT velocity of 210, the increase in the handover
delay becomes steadier, and the distance between
the BS and the MTs coincides with the delay.


The results show that the cell size of the LMA
has an impact on the delay. Different cell
perimeters have been investigated at different
levels, with respect to the different velocity
levels of the mobile terminals, as described in
Figure (5). The increase in the delay is evident
for the small cell perimeters compared to the
large ones, because of the numerous consecutive
approaches and crossings by a high volume of
traffic within the small cells of the LMA. Where
many signals cross the borders, channel
bandwidth is wasted and much processing takes
place in the handover operation, such as
updating, registration, and paging, and this
processing is largely unnecessary.
Conversely, it is clear that a larger cell size
reduces the volume of signals approaching the
target BS at any one time; the pressure on the
bandwidth channels decreases and, coinciding
with this, the handover delay decreases too. The
RSS is advantageous when the MT is very close
to the target BS: with strong signals, it enables
the MT to carry out the handover decision in a
short time, without much delay.


Figure 4. Effect of the RSS on the Handover Delay



Figure 5. Effect of the Cell Perimeter of the LMA on the Handover Delay


Figure 6. Effect of the Distance of the MT to the Target BS on the Handover Delay

In Figure (6), the number of location areas is 16
while the number of multicast zones is 32, with a
cell perimeter of 100 m. The comparison is based
on different distances of the MTs between the
current BSs and the target BSs.

The figure shows that the farther the MT is from
the target BS, the more the handover delay
increases: at a distance of 100 m, the delay is
clearly larger than at distances of 50 m and 25 m.
At high MT speeds, crossing the handover area
takes a very short time, which helps the handover
processing complete very quickly; for example,
at an MT speed of 210, all the different distances
show a lower delay than at the higher MT
speeds, while in general, as the speed increases
the delay becomes lower, and vice versa.


Figure 8. Effect of a Large Cell Perimeter at Different Distances on the Handover Delay

Figure 7. Effect of the Cell Perimeter and the MT Speed on the Handover Delay

Figure (7) presents the effect of the cell
perimeter on the handover delay, taking the
distance into account, with 8 location areas and
16 multicast zones, and with the MTs moving
over a range of speeds. The comparison shows
that when the cell perimeter is small, the delay is
low and decreases as the MT speed increases;
the rate at which the delay grows slows down
with respect to the speed of the mobile terminal.

Figure (8) describes the relation between the
distance of the mobile terminal from the current
base station to the target one and the perimeter,
when the perimeter is fixed. The results show
that when the distance is large, the delay during
the mobile terminal's movements is high, and
this delay increases as the mobile speed becomes
slower. Since the cell perimeter is large, crossing
the cell area takes a long time, and the bandwidth
channels carry little signaling activity such as
location updates and searching, so the delay
decreases. When the speed increases, crossing
the location area becomes faster in terms of time
and intense signaling activity occurs, but at high
mobile speed the handover delay is nevertheless
low. It is clear that at a small distance to the
target base station the handover delay is lower
than at larger distances, with respect to the
location area size.


Figure 9. Effect of the Number of LAs and Zones on the Handover

Figure (9) shows that the delay is not influenced
by the number of location areas and zones if the
cell perimeter is fixed: whenever the number of
location areas increases, the number of cells
increases too, so the effect on the delay changes
in the same ratio. In this figure, the perimeter
and distances are fixed, but the number of
location areas is changed through 16, 32, and 48,
the number of zones is 16, and the distance is
100 m. Under all the different numbers of
location areas, the delay changes in the same
ratio as the increase in location areas.

7 CONCLUSION
This modified model presents different
parameters which have a significant effect on the
handoff delay, namely the velocities of the
mobile terminals, the distance between the
mobile terminal and the target base station, the
location management area size, the cell
perimeter, and the RSSI.
The results show that the received signal
strength for a high-velocity mobile terminal
increases in parallel with the distance between
the MT and the BS; on the other hand, the
increase can level off after a certain velocity.
Distance has little effect on the handover delay
at large distances, but it is clear that the
handover delay is smaller at small distances,
because the RSS toward the target BS becomes
much higher, so the chance of handover is high
with respect to the channel bandwidth. Using
different cell perimeter lengths, the RSS records
a lower delay for a large cell perimeter compared
to a smaller one. The location management areas
and zones have the same influence on the
handover if the cell perimeter is changed; in
other words, the cells and the location areas and
zones are strongly related in terms of locations
and areas of service, but in some cases the cells
have a clearer effect within the location area,
which happens when the number of cells
increases. Mobile terminals moving at speed
undergo many handoff processes within a short
time when crossing the border between location
areas, which is where the handoff takes place.
These actions reduce the capacity and quality of
the handover channels, where the signaling
traffic must be handled within the handover.

8 REFERENCES
1. Elgembari, E., Seman, K.: A Study on the Effect of
   Different Velocities on the Handover Delay in
   WiMAX Systems. International Review on Computers
   and Software (I.RE.CO.S.), Vol. 8, No. 1, ISSN
   1828-6003, Praise Worthy Prize S.r.l., January (2013).
2. Saini, M., Verma, A.: Analysis of Handover Schemes
   in IEEE 802.16 (WiMAX). Thesis submitted in partial
   fulfillment of the requirements for the award of the
   degree of Master of Engineering in Computer Science
   and Engineering, Thapar University, June (2008).
3. Al-Safwani, A., Sheikh, A.: Signal Strength
   Measurement at VHF in the Eastern Region of Saudi
   Arabia. The Arabian Journal for Science and
   Engineering, Vol. 28, No. 2C, December (2003).
4. Lee, J., Kwon, T., Choi, Y., Pack, S.: Location
   Management Area Based MBS Handover in Mobile
   WiMAX Systems. 3rd International Conference on
   Communication Systems Software and Middleware
   and Workshops, COMSWARE (2008).
5. Kim, K., Kim, C., Kim, T.: A Seamless Handover
   Mechanism for IEEE 802.16e Broadband Wireless
   Access. International Conference on Computational
   Science, Vol. 2 (2005).
6. Jang, H., Jee, J., Han, Y., Park, S., Cha, J. (SAMSUNG
   Electronics, ETRI, KUT): Mobile IPv6 Fast Handovers
   over IEEE 802.16e Networks. IETF, June (2008).
7. Cavdar, I.H., Akcay, O.: The Optimization of Cell
   Sizes and Base Stations Power Level in Cell
   Planning. VTC 2001, Vol. 4, pp. 2344-2348, May
   (2001).
8. Baliga, J., Ayre, R., Sorin, W.V., Hinton, K., Tucker,
   R.S.: Energy Consumption in Access Networks.
   OFC/NFOEC 2008, pp. 1-3, February (2008).
9. Hong, D.H., Rappaport, S.S.: Traffic Model and
   Performance Analysis for Cellular Mobile Radio
   Telephone Systems with Prioritized and Nonprioritized
   Handoff Procedures. IEEE Trans. Vehicular
   Technology, Vol. 35, No. 3, pp. 77-92, August (1986).
10. Handover Concept for Next-Generation Heterogeneous
    Networks. VTC 2005 Spring, Vol. 4, pp. 2225-2229,
    May (2005).
11. Makelainen, A.: Analysis of Handoff Performance in
    Mobile WiMAX Networks. Helsinki University of
    Technology, Espoo, Finland (2007).
12. Xu, J., Liu, W., Lang, L., Zhang, Y., Wang, Y.:
    Distance Measurement Model Based on RSSI in
    WSN. Wireless Sensor Network,
    doi:10.4236/wsn.2010.28072, published online
    August (2010).
13. Zheng, F., Zhan, Z., Guo, P.: Analysis of Distance
    Measurement Based on RSSI. Chinese Journal of
    Sensors and Actuators, Vol. 20, No. 11 (2007).
14. Gustafsson, F., Gunnarsson, F.: Possibilities and
    Fundamental Limitations of Positioning Using
    Wireless Communication Network Measurements.
    IEEE Signal Processing Magazine, Vol. 22 (2005).
15. Bshara, M., Deblauwe, N., Biesen, L.: Localization
    in WiMAX Networks Based on Signal Strength
    Observations: Applications in Location-based
    Services. Department of Electricity and
    Instrumentation, ICT-Mobile Summit (2008).


International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 350-362
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Constructing a Course Profile by Measuring Course Objectives


Neville I. Williams
School of Computer Science, Engineering and Mathematics
Flinders University,
GPO Box 2100, Adelaide 5001, Australia
Neville.Williams@flinders.edu.au

ABSTRACT
An ongoing goal in higher education is to provide
quality education programs and to produce high
quality graduates. While much of the attention to
quality matters is focussed on outputs, there is little
that addresses the preliminary or input side of the
educational program. The purpose of this paper is to
demonstrate that it is feasible to construct a course
profile based on standard inputs in the form of the
behavioral objectives that are stated for the subjects
comprising the course. The application of the course
profile will be found in supplementing the
understanding of quality frameworks in the
assessment of degree programs, thereby becoming a
useful tool for benchmarking courses and comparing
across department and institution boundaries.

KEYWORDS
course profile, benchmarking, course evaluation,
course quality

1. INTRODUCTION
In the higher education sector a continuing issue
remains at the forefront of the teaching and
learning agenda, and that is quality of the degree
programs offered. There are many initiatives
undertaken to investigate the quality of teaching,
the quality of assessment, the quality of
graduates, and other output evaluations. These
are all post-event or post-process activities that
have an important role in attempting to validate
and maintain institutional quality standards.
Applications of this approach are particularly
evident at times of course accreditations or
during benchmarking processes when various
forms of documentation are provided as evidence
of effective quality teaching, learning and
assessment processes being in place.
When these evaluation and audit processes
occur, a significant element that is examined is
the documentation associated with a complete
program (a degree, or a course) and its
component elements (the individual subjects, or

topics, or courses). In particular, the aims and


objectives are reviewed, and then the outputs and
deliverables associated with the member items
are examined and evaluated. The bracketed
terms are listed to show the variability in
terminology usage across the education sector
where for example the term course may mean
either a whole degree program in one institution
or a semester (subject) of study in another.
The purpose of this paper is to focus on the
initial part of this documentation, namely the
aims and objectives of individual component
elements, and to propose that a profile can be
established for each of the subjects in a degree
program, and by extension therefore to arrive at
an overall course profile that may be used as an
indicator of course intent. When implemented,
this profile can be used to provide key
stakeholders with a predictive capacity that
presently does not exist. For example, students
could compare courses in a quantitative manner
to supplement their qualitative decision making
on course selection. University administrations
could compare courses within their institution to
confirm consistency or identify inconsistency
between department offerings. External course
reviewers and evaluators could establish baseline
expectations for the conduct of their reviews and
audits. Being an input-side or pre-process
activity, there is an inherent value in such a
profile being created.
While at first appearing to be either confronting,
or perhaps an impossible dream, it should be
pointed out that many other areas of endeavor
have metrics that are used to provide initial
expectations for evaluators on which to base
their judgements. Simple examples include the
degree of difficulty factor used in judging some
Olympic events such as diving, gymnastics,
dance and similar. In the health sector in
Australia the case-mix approach identifies a
standard time in hospital for various medical
procedures, and in financial accounting there
exists the standard cost for production of
component items in manufacturing processes.
Why then should it not be possible to establish a
baseline value that may be used as an indicator
to the educational potential of course-work
studies? As will be shown in the remainder of
this paper, a proposed course profile indicator is
feasible.
2. THE HIGHER EDUCATION SECTOR
For the purposes of this paper, the focus will be
constrained to the degree programs of the higher
education sector.
Structurally, a degree program comprises a
number of specified studies that must be
undertaken in an acceptable combination to
satisfy the requirements of the particular degree.
Typically the studies are organized on a semester
basis, and, depending on the institution
concerned, the studies may have the same
weighting value in each semester, or there may
be differences. For each subject, there is a set of
aims and objectives that are intended to provide
information about the content of the subject and
the skills and knowledge that a student should
attain. To clarify the use of terms in this paper, a
brief set of interpretation definitions and
equivalences is given in Table 1 below:
2.1 Relationship between Elements
The higher education enterprise may be viewed
as a composite set of the elements just discussed
and arranged in an hierarchical order as shown in
Table 2.
At the subject level, the subject specification
may be thought of as the set of the learning
objectives for that subject. Typically these are
expressed in behavioral terms and are therefore
usually prefaced with a statement such as "On
successful completion of this subject the student
will be able to …".
In practice, each degree/course has its own
course aims and objectives, which are
presumably addressed by one or more of the
individual subject learning objectives. These
overarching aims and objectives are intended to
convey a sense of the overall graduate attributes
that should be realized in the successful students,
and provide some thematic relevance or intent
across the subjects in the course.

University standards require that each subject


has an approved assessment and examination
scheme, and a fundamental principle of
university teaching is that the assessment plan
tests the achievement of the subject learning
objectives. On the assumption that this principle
is valid and applied in every case, it is reasonable
to assume that any student who has received a
passing grade has met the subject specifications.
Of course the reality is that the assessment of
students is not quite so simplistic otherwise there
would exist just Pass and Fail as the two possible
outcomes for students. What students, educators,
and potential employers wish to see is a qualifier
on the level of pass attained, so we have grading
systems that extend beyond the simple Pass/Fail
criteria and include additional classifications
such as Credit, Distinction, and High Distinction.
Some systems allocate grades in the range A to
E, or A to F, with similar interpretations being
applied to the final grade. Rather than being
purely indicators of success, these categories
generally show some form of performance index,
and may include other factors such as the way in
which students have applied themselves to the
subject at hand. Typically those students who
engage well with the subject will achieve higher
grades than those students who minimize their
efforts to satisfy the subject requirements. The
relative performance of students is used by
universities world-wide and accumulated into a
statistic known as GPA (Grade Point Average).
This statistic is then used for subsequent
admissions to other courses or for the award of
scholarships.
The question now becomes this:
Is it feasible to construct a meaningful
a-priori profile of a degree course based on
subject learning objectives?
3. DETERMINATION OF AN INDIVIDUAL
SUBJECT PROFILE
In order to achieve a satisfactory course profile,
it is necessary to examine the individual subjects
that make up the course, and then aggregate the
individual subject assessments to create an
overall view.

Table 1: Terminology Interpretations

Term               | Meaning                                                        | Alternative Terminology
Course             | A complete degree program                                      | Degree, award
Course Rule        | Specification for the combination of subjects to be completed  | Degree Regulations,
                   | in order to satisfactorily complete the course                 | Schedule of Study
Subject            | A prescribed study program in a specific discipline area,      | Topic, Course
                   | typically over one semester                                    |
Unit Value         | The effective weight of the subject in the student load,       | Course credits, Credit
                   | typically expressed as a fraction of a full-time year          | Points, Units
Learning Objective | A student learning objective written in behavioral terms       | Learning Outcome
Table 2: Degree Hierarchy Structure

Degree Programs
  Specific Course 1
    Subject 1: Learning Objective 1, Learning Objective 2, ..., Learning Objective m1
    Subject 2: Learning Objective 1, Learning Objective 2, ..., Learning Objective m2
    ...
    Subject n1
  Specific Course 2
    Subject 1
    Subject 2
    ...
    Subject n2
Fortunately there have been several studies
undertaken in the field of learning objectives,
and two in particular deal with the development
of taxonomies for learning objectives in an
attempt to provide qualitative approaches to the
examination of learning objectives. One of the
key platforms that gained a great deal of support
was the taxonomy of educational objectives
proposed by Bloom, which subsequently became
widely referred to as Bloom's Taxonomy. The
underlying basis of Bloom's ideas was to create
a framework for classifying the statements of
what was expected for students to learn through
the teaching process. While the original
publication of Bloom's work dates back to the
1950s, the evolutionary work resulting from the
investigation and adoption of Bloom's approach
has resulted in a more recent version now
labelled as the revised Bloom's Taxonomy [1].
In essence, the revised taxonomy has expanded
the Knowledge dimension of the original
taxonomy and has become represented as a two-dimensional matrix mapping the Knowledge
dimension against the Cognitive dimension, as
shown in Table 3 [2]. Use of this tabular form
allowed the analysis of the objectives of a unit or
course of study, and in particular, enabled an
indication of the extent to which more complex
types of knowledge and cognitive processes were
involved. It was found that this tabular form was
able to be applied across a range of granularities,
from the fine-grained analysis of a module in a
larger teaching program, to broader analyses of
subject objectives. The application of the revised
Bloom Taxonomy matrix involves the
examination of learning objectives and
classifying them into the appropriate cells of the
matrix.
In the accompanying table (Table 3), the terms in
the cognitive dimension are self-explanatory, and
similarly, the first three terms in the knowledge
dimension are equally self-explanatory.
However, the fourth term, "Metacognitive
Knowledge", requires further explanation. In a
related work, Pintrich [3] discusses the
importance of metacognitive knowledge and
highlights three distinct types. Specifically,
Pintrich identifies the first type as "Strategic
Knowledge", which incorporates the knowledge
of strategies for learning, thinking and problem
solving in the domain area. The second type is
identified as "Knowledge about cognitive tasks",
which includes the ability to discern more about
the nature of the problems to be solved and to
begin to know about the "what" and "how" of
different strategies as well as "when" and "why"
the strategies may be appropriate. The third type
is described as "Self-Knowledge", which
includes understanding about one's own
strengths and weaknesses with respect to
learning.
A second significant model is that proposed by
Biggs in the form of the SOLO Taxonomy
(Structure of Observed Learning Outcome),
which is described as a means of classifying
learning outcomes in terms of their complexity,
leading to the ability to assess student work
in terms of its quality [4]. Earlier
publications from Biggs [5], which refer to an
even earlier study by Collis and Biggs, outline
the 5-level structure of the SOLO Taxonomy and
discuss the intent and interpretation of each of
the 5 levels:

1. Pre-Structural
2. Uni-Structural
3. Multi-Structural
4. Relational
5. Extended Abstract

The application of the SOLO Taxonomy to the
assessment of learning outcomes (objectives)
involves the review of the objectives in terms of
the functionality expected at the various levels.
In particular, there are typical verbs associated
with each level that are likely to appear in
statements of learning objectives.
A study that attempted to provide a quantitative
value conversion from the qualitative base of the
taxonomy structure was conducted in Denmark,
where the data considered comprised some 550
syllabi from the science faculties at two
universities [6]. The approach in this study listed
a number of typical verbs associated with the
SOLO Taxonomy, and identified levels 2 and 3
as providing mostly quantitative outcomes and
levels 4 and 5 as being more qualitative in
nature, as shown in Table 4. The mapping of a
learning objective statement to a value was then
given by the level number that the verb(s) in the
objective most closely matched. SOLO Level 1
was not included, as no teaching activity would
be deliberately aimed at this ab-initio state.
While the initial intention of using the SOLO
Taxonomy is to classify learning objectives into
the appropriate SOLO categories, the work
undertaken by Brabrand and Dahl enabled a
relative measure of competencies to be
established across the courses in the science
faculties of the universities in the study. The
body of evidence in the Brabrand and Dahl work
has established a method to create a quantitative
measure based on the statements of learning
objectives.
The method used by Brabrand and Dahl in the
examination of syllabi was to count the
frequencies of the verbs used in the learning
objectives for the subjects and apply an average
to the subject. It was further enhanced by using a
double-weight averaging scheme, which meant
that compound statements of learning objectives
such as "identify and compare" would
result in an averaging for that single objective of
(S2 + S4)/2. In this approach, the values 2 to 5
were applied to the learning objectives based on
their verb classification. The outcome of this
method is to create a singular value for each
subject syllabus objective within the range 2 to
5, and ultimately generate a single value for each
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 350-362
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
Table 3: Revised Bloom Taxonomy Matrix

Cognitive Dimension: Remember | Understand | Apply | Analyze | Evaluate | Create
Knowledge Dimension: Factual knowledge | Conceptual knowledge | Procedural knowledge | Metacognitive knowledge
Table 4: Prototypical verbs according to the SOLO Taxonomy per Brabrand and Dahl

Quantitative                             | Qualitative
SOLO 2           | SOLO 3               | SOLO 4                | SOLO 5
Uni-structural   | Multi-structural     | Relational            | Extended Abstract
Paraphrase       | Combine              | Analyze               | Theorize
Define           | Classify             | Compare               | Generalize
Identify         | Structure            | Contrast              | Hypothesize
Count            | Describe             | Integrate             | Predict
Name             | Enumerate            | Relate                | Judge
Recite           | List                 | Explain causes        | Reflect
Follow (simple)  | Do algorithm         | Apply theory          | Transfer theory
instructions     | Apply method         | (to its domain)       | (to new domain)
Table 5: Revised Bloom Ranking Schedule

Knowledge Dimension     | Remember | Understand | Apply | Analyze | Evaluate | Create
Factual knowledge       |    1     |     5      |   9   |   13    |    17    |   21
Conceptual knowledge    |    2     |     6      |  10   |   14    |    18    |   22
Procedural knowledge    |    3     |     7      |  11   |   15    |    19    |   23
Metacognitive knowledge |    4     |     8      |  12   |   16    |    20    |   24

subject. As described by the authors, there is an underlying assumption that the distance between each SOLO level is equal, to enable the values 2 to 5 to be used in this manner. The term for this metric given by Brabrand and Dahl is SOLO Average.
The use of the SOLO classifications is quite simple at the conceptual level, and it also has an implied equality of learning competencies within each level. Hence, any learning activity that is classified at a particular SOLO level may be thought of as being educationally equivalent to every other learning activity at that level.
This is somewhat different from the revised Bloom Taxonomy, which differentiates knowledge types within cognitive levels, but an
interesting question arises as to whether similar approaches can be used with the revised Bloom Taxonomy as a classification and metric determination tool. Under the equal-distance assumption proposed by Brabrand and Dahl, the cognitive levels within any one knowledge dimension should change by an equal amount. Similarly, a constant distance value between knowledge dimension levels should apply within any one cognitive dimension. Accordingly, using an integral unit value, a score table could be constructed as in Table 5.
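The ranking schedule just described can be generated mechanically. The sketch below is illustrative only; it assumes the numbering implied by the visible cells of Table 5, where scores advance by one unit per knowledge level and by four units per cognitive level, spanning the full 1 to 24 range.

```python
# Illustrative sketch (not the authors' code): generate the 24-cell ranking
# schedule of Table 5 under the equal-distance assumption, with scores
# advancing by one per knowledge level and by four per cognitive level.
COGNITIVE = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Metacognitive"]

def bloom_score(cognitive, knowledge):
    c = COGNITIVE.index(cognitive)   # 0..5 across the cognitive dimension
    k = KNOWLEDGE.index(knowledge)   # 0..3 down the knowledge dimension
    return 4 * c + k + 1             # integral unit distances, range 1..24

print(bloom_score("Remember", "Factual"))      # 1
print(bloom_score("Analyze", "Factual"))       # 13
print(bloom_score("Create", "Metacognitive"))  # 24
```

Under this scheme each of the 24 cells receives a unique score, so the resulting scale reflects both dimensions of the revised Bloom matrix.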
Given the revised Bloom Taxonomy matrix
contains 24 cells, the resultant scale will be in
the range 1 to 24. In pure numeric terms the
scores obtained using this scale will be vastly
different from those using the SOLO scale where
the range is between 2 and 5. However it is
worth examining whether meaningful outcomes
are obtained using the two techniques. This
approach has been taken on the grounds that
behavioral objectives are written as statements of
intended student behaviors and learning
outcomes, which concern the cognitive skills
rather than the subject content. Recognising that
subject content should become more in-depth as
a student progresses through their studies, it is
reasonable to remove the depth of knowledge
factor in determining a profile that examines the
cognate skills.
As the knowledge dimension addresses the nature of content within a subject, the comparison is not really comparing like with like when ranking against the SOLO scores. Therefore, to be more reflective of a properly constructed test that compares similar items, namely the cognate skills specified by learning objectives, an adjusted scale was constructed purely from the cognitive dimension by collapsing the knowledge dimension to a single integral value. This resulted in a scoring range between 1 and 6, where 1 was assigned to Remember and 6 was assigned to Create.
With two possible measuring instruments
available, the question of how to determine an
individual subject metric must now be answered.
When reviewing syllabus learning objectives it becomes clear that many are framed in compound terms, that is "to do x and do y", or "to understand x, y, and z". The evaluation of compound objective statements can be resolved by one of three methods, namely:
- to expand the compound statements into multiple simple statements, which in many instances would create a much longer list of objectives. The potential problem with this approach is that an objective of single intent but expressed in compound form would provide a doubling or tripling of scores, thus inflating the value of the objective;
- to evaluate the compound statement and average the individual parts that would be the simple statements under the expansion approach. In this method, the inflationary problem of the first method is overcome and it gives a score within the scaling range for the specific objective. This is the method that was adopted by Brabrand and Dahl;
- to use the maximum classification value obtained by inspecting the statement of the learning objective. While simplistic in nature, this method tends to err on the side of generosity when evaluating compound objective statements.

For consistency and comparison purposes it has been decided in this research to adopt the same approach as Brabrand and Dahl and use the double-weight averaging scheme.
4. METHODOLOGY
In this study the syllabi for a degree in
Information Technology were examined and
rated in conjunction with the individual subject
coordinators using both the SOLO Taxonomy
and the revised Bloom Taxonomy scales. The
average score for each objective was calculated
when the objective was expressed in compound
terms, and then an overall average was
calculated for each subject. It was necessary to
use a standardized item such as the average of
the individual objective scores because different
subjects list a different number of objectives.
Accordingly, the average score would highlight
the broad intentions of the subject on the
cognitive scales. The relative weight of the
subject is given in terms of its unit value, so this
weighting was applied to the subject score to
arrive at the year level aggregate.
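The aggregation steps above can be sketched as follows; the subject scores and unit values are hypothetical, chosen only to show the mechanics.

```python
# Hypothetical sketch of the aggregation: average the objective scores within
# each subject, then weight each subject average by its unit value.
def subject_score(objective_scores):
    """Average of the individual objective scores for one subject."""
    return sum(objective_scores) / len(objective_scores)

def year_aggregate(subjects):
    """subjects: list of (objective_scores, unit_value) pairs for a year level."""
    total_units = sum(units for _, units in subjects)
    weighted = sum(subject_score(scores) * units for scores, units in subjects)
    return weighted / total_units

# Two invented subjects: one worth 1.0 units, one worth 2.0 units.
year1 = [([3.0, 3.3, 4.0], 1.0), ([2.5, 3.5], 2.0)]
print(round(year_aggregate(year1), 2))  # 3.14
```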
The scores obtained were then grouped by the
year level of the course to consider whether there
were year level differences, and finally a score
for the degree program was calculated.
In the particular degree program examined, there
were three classes of subjects, the Core subjects
which were compulsory, Selective subjects
where students have a narrow choice from a
limited list of subjects, and Elective subjects
where students may choose from a broad range
of subjects. A total of 20 syllabi statements were
examined in this degree course to provide the
data for the core and selective subjects. To
effectively deal with the mix of subject types, the
following rules were applied:
a) The compulsory subjects were evaluated as
distinct entries;
b) The selective subjects were evaluated
individually but the number of required
selective subjects were included as
cumulative average values. That is, where the
course rule made a statement such as
"include 2 of the following 5 subjects",
then the average score for the 5 subjects
would be calculated and included twice to
allow for the number required;
c) The elective subjects needed for each year level would be included as the average of the core subjects for that year level. The underlying assumption here is that the subjects across year levels within an institution may be considered as approximately similar in educational content even though they may come from different domain areas.
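Rule (b) in particular can be illustrated with a short sketch; the selective scores below are invented.

```python
# Sketch of rule (b): a rule such as "include 2 of the following 5 subjects"
# contributes the average score of the 5 selectives, once per required choice.
# The scores here are hypothetical.
def selective_contribution(selective_scores, num_required):
    average = sum(selective_scores) / len(selective_scores)
    return [average] * num_required

contribution = selective_contribution([3.2, 3.5, 3.8, 3.0, 3.4], 2)
print(contribution)  # the 3.38 average, included twice
```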
5. RESULTS
It is important to demonstrate the application of
this methodology through an example. In the
case study application, a three-year degree of
Bachelor of Information Technology, there were
24 semester subjects required to complete the
degree. Of these, there were 20 distinct subjects
that were classified as core or selective subjects,
with the remaining 4 being elective choices from
a large range of options taken over different year
levels. Taking as an example the first-year introductory subject COMP1001 "Fundamentals of Computing", the behavioral objectives were listed as:
On successful completion of this subject, a student is expected to:
1. be familiar with the fundamentals, nature and
limitations of computation
2. be familiar with standard representations of
data and the translation to and from standard
forms
3. be aware of the structure of information
systems and their use
4. understand the social and ethical implications
of the application of information systems
5. be able to construct simple imperative
programs
In conjunction with the subject coordinator, the
following assessments on the SOLO scale and
Bloom scale were recorded:
Table 6: Subject Assessment of Objectives

Objective # | SOLO Score          | Bloom Score
1           | S3 + S3 + S4 = 3.33 | B2 + B2 + B3 = 2.33
2           | 2 * S3 = 3.0        | B2 + B3 = 2.5
3           | 2 * S3 = 3.0        | 2 * B2 = 2.0
4           | S3 = 3.0            | B2 + B4 + B5 = 3.67
5           | S4 = 4.0            | B6 = 6.0
Average     | 3.27                | 3.30
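The averages in Table 6 can be reproduced directly from the per-objective level values (a simple arithmetic check, not part of the original study):

```python
# Reproducing the COMP1001 averages recorded in Table 6.
solo = [(3 + 3 + 4) / 3, (3 + 3) / 2, (3 + 3) / 2, 3.0, 4.0]
bloom = [(2 + 2 + 3) / 3, (2 + 3) / 2, (2 + 2) / 2, (2 + 4 + 5) / 3, 6.0]

print(f"{sum(solo) / len(solo):.2f}")    # 3.27
print(f"{sum(bloom) / len(bloom):.2f}")  # 3.30
```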

The same process was followed for the other subjects in the degree and aggregated in a summary table where the subjects were grouped by year level, and weighted according to the unit value of the subject within the degree program. For the three-year Information Technology degree considered, the following summary of classifications for the subjects was obtained.
Table 7: SOLO vs Bloom Scores

               | Weighted SOLO Total | Adjusted Bloom Scores
First Year     | 3.43                | 3.57
Second Year    | 3.56                | 3.63
Third Year     | 3.84                | 4.04
Degree Total   | 10.83               | 11.24
Course Average | 3.61                | 3.78

6. DISCUSSION
In evaluating the behavioral objectives for the
subjects in this degree there were several
interesting points that were revealed. These are
discussed as separate items below.
6.1 Appropriateness of Taxonomic approach.

This paper has used two distinct taxonomic approaches, namely the revised Bloom Taxonomy and the SOLO Taxonomy, as vehicles to investigate the learning outcomes or objectives of the subjects in a particular course. The question of taxonomic appropriateness has been raised with respect to the use of Bloom's Taxonomy in the Computer Science domain [7].
In that work, Johnson and Fuller proposed a
slightly different structure to cater for the idea
that application is the aim of computer science
teaching. No firm resolution was given, but the
issue of whether the Bloom Taxonomy is
suitable in the computer science arena was
raised. Other works have proposed that the
revised Bloom Taxonomy was useful in
Computer Science teaching, particularly where
multiple staff members were involved in the
subject [8]. Developments and research into a
computer-science specific learning taxonomy
have been undertaken by Fuller et al. [9] with a
proposed model addressing the perceived
deficiencies in both the Bloom and SOLO
taxonomies. These research activities in concert
with the Brabrand and Dahl efforts highlight and
support that a taxonomic approach is relevant,
even though the taxonomic tools currently
available may not yet be the best fit, or may need
some refinement for domain areas such as
Computer Science. The experience gained in this
study suggests that it may be more an issue of
interpretation of the standard descriptors used in
the classifications rather than changing the
classification framework to suit the domain,
otherwise one spawns a whole new set of taxonomies for various discipline domains, each of which then needs to be validated.

work is required to provide comparative data and


overall calibration for this metric. What has been
revealed is that the closer analysis of the subject
behavioral objectives for this degree across year
levels does match the nave expectations
namely that as one progresses through the degree
studies from first year to second year to third
year there is a shift of emphasis from the lower
more functional or quantitative SOLO levels to
the more sophisticated qualitative levels. The
data in Table 7 demonstrates an increasing
SOLO Average through the year levels, and
provides a total of 10.83 for the course, or an
average of 3.61 if one wanted to arrive at a
single indicator figure within the scaling range.
This finding is consistent with the findings of a
separate study by Brabrand and Dahl [10] that
explored the use of the SOLO Taxonomy to
examine
competence
progression
from
undergraduate to graduate level studies. An
almost identical result was obtained when using
the Bloom Taxonomy, adjusted to consider only
the cognitive elements. The fact of being able to
establish a metric suggests that there is an
opportunity to further develop a set of expanded
tools that may be useful in the quality and
benchmarking domain for degree courses.
6.3 Current written form of the statements of
behavioral objectives.
The standard and consistency of the current
behavioral objective statements was quite
variable for this course. A significant number
were quite vague and therefore difficult to
classify appropriately. However, the vaguely
expressed objectives were more easily classified
using the Bloom Taxonomy than with the SOLO
Taxonomy. The challenge for educational
institutions is to ensure that the stated learning
objectives accurately reflect what is being
taught, what is being expected of students, and
subsequently what is being learned in the
subjects of the course.

6.2 Meaningful result.


The process applied in this research project has
demonstrated that a statistic can be determined
for a particular course of study. At this point the
value of that statistic as an indicator to the
academic rigor proposed for a course of study is
yet to be proved, either with the SOLO
Taxonomy or the Bloom Taxonomy. Subsequent

6.4 Subjective nature of assessment of


objectives.
A potential criticism of this approach is that the interpretation of behavioral objectives is subjective, and therefore suggests that repetition by different researchers would generate a different set of results. While there is some merit in this argument, it is defended by the confidence that we have in the professionalism of the people charged with making the assessment when they exercise their professional judgement. This is no different to examiners marking student papers, recognising that different examiners may arrive at slightly different final scores, but overall should return a similar result. Accreditation and benchmarking panels make subjective professional judgements based on the evidence presented to them when deciding to award particular achievement or status levels to courses.
6.5 Language-rich bias.
The subjects which have a stronger focus on language elements such as report writing and critiquing of subject materials tended to score more highly in both taxonomies. Some subject areas such as computer programming may involve quite complex levels of problem solving and formulation of creative approaches to resolve issues, but these elements were not explicitly stated in the subject learning objectives. Discussions with the subject coordinators highlighted that their impressions of some of the tasks required of the students involved the higher order taxonomy classifications, yet the subject learning objectives did not adequately express this.
6.6 Interpretation opportunities.
The two dimensional nature of the revised Bloom Taxonomy makes subsequent investigation of comparative subsets somewhat more difficult computationally. On the other hand, the SOLO approach allows for more internal analysis to be undertaken with relative ease, as can be seen in the comparative distribution of SOLO levels across the degree in Figure 1. The data shown has been calculated by accumulating the number of objective elements stated at each of the SOLO levels for each of the years of study in the degree program, as indicated in Table 8.
Table 8: Distribution of SOLO Levels across Degree

            | Solo2 | Solo3 | Solo4 | Solo5
First Year  |  5%   |  49%  |  39%  |   7%
Second Year |  7%   |  41%  |  42%  |  10%
Third Year  |  4%   |  25%  |  55%  |  16%
Overall     |  5%   |  36%  |  47%  |  12%

Using the adjusted Bloom scale to focus only on the cognitive dimension, a comparable set of data was obtained, with the equivalent statistic listed as the Adjusted Bloom Score in Table 7; the detailed breakdown is shown in Table 9, with the associated graphic in Figure 2.
While the proportions of objectives in each of the taxonomy classification levels across year levels appear as an interesting set of numbers in Tables 8 and 9, it becomes patently clear in the graphical representations in Figure 1 and Figure 2 that there is a distinct trend in the learning expectations in this course from the entry level at year 1 through to the final year program. In particular, it can be seen that there is a shift from the lower level learning activities based on simple recall and application of method through to the higher order analysis and evaluation required in critical thinking in later years.

Table 9: Distribution of Bloom Levels across Degree

            | Bloom1 | Bloom2 | Bloom3 | Bloom4 | Bloom5 | Bloom6
First Year  |   0%   |  21%   |  31%   |  22%   |  16%   |  10%
Second Year |   6%   |  14%   |  26%   |  20%   |  22%   |  12%
Third Year  |   0%   |  11%   |  18%   |  37%   |  20%   |  13%
Overall     |   2%   |  15%   |  25%   |  26%   |  19%   |  12%

Example graphical analysis of the Information Technology degree considered in this study.

Figure 1: Relative SOLO Levels in the Information Technology Degree (stacked bar chart, "SOLO Summary by Year Level", showing the proportion of SOLO classifications Solo2 to Solo5 for each year level and overall)

Figure 2: Relative BLOOM Levels in the Information Technology Degree (stacked bar chart, "Bloom Summary by Year Level", showing the proportion of Bloom classifications Bloom1 to Bloom6 for each year level and overall)

6.7 What then is the profile of this degree
course?
The analysis undertaken demonstrates that there
are several potential profiles that can be
determined.
6.7.1 Course Index Score
In the first instance, there is the Solo Average of
3.61, or the Bloom Average of 3.78 for this
course. As indicators, these values propose that
the overall course endeavors to go beyond basic
learning activities of recall and application of
standard methods and approaches, and ventures
well into the qualitative realm of analysis and
evaluation that would be expected of graduates
able to display critical thinking attributes in their
chosen field. Since the underlying nature of both
the SOLO and Bloom taxonomies is cumulative,
the equal-distance assumption in creating the
basic scoring system for analyzing the subject
objectives allows the interpretation of
progression to higher levels in this manner. This
can be represented on a graphical scale as shown
in Figure 3, which indicates the positioning of
the course score on a SOLO scale, and is labelled
as a Course Index or C-Index.

Figure 3: Course Index Score (graphical scale indicating the positioning of the course score on a SOLO scale)

6.7.2 Overall or Summary Profile
Secondly, a more detailed profile for the course can be claimed if one simply takes the overall line of the graphic representation or the corresponding table. This particular profile lends itself to closer examination of the extent of academic rigor that is proposed for the course, namely 59% of the learning objectives are oriented towards the higher order qualitative tasks versus 41% being oriented towards the lower level quantitative tasks in the SOLO analysis, or 57% higher order and 43% lower order skills in the Bloom analysis.
The overall distribution of subject objectives in the various SOLO classifications can be seen in the following charts as Figures 4 and 5. The representation in Figure 4 highlights that the most frequent behavioral objectives are at the SOLO 4 level (47%), which indicates a large concentration on analysis, comparison and evaluation, and application of theory within the course. To a lesser extent, the next most frequent classification is at the SOLO 3 level (36%), indicating a substantial amount of the more routine tasks such as describing, classifying and performing known tasks.

Figure 4: Proportions of SOLO Classifications in BIT (bar chart, "BIT Overall SOLO Profile")

This same information can be seen in Figure 5, which is more like a one-line summary of the overall course composition, showing the relative proportions of SOLO classifications, and therefore learning outcome expectations for the course.

Figure 5: Overall BIT Profile (single stacked bar: Solo2 5%, Solo3 36%, Solo4 47%, Solo5 12%)

6.7.3 Year-Level Profile
A third profile is achieved by examining the year-level breakdowns of learning intent. In this view the ratios of the different year level studies can be reviewed to ensure consistency with institutional goals and to validate against comparable courses to gauge the appropriate amount of different types of learning activity. For the course examined in this case study, it appears that each year level includes objectives related to basic recall of factual material, but the major shift in focus seems to be from application of routine processes initially into more critical evaluation and analysis tasks in later years.

Figure 6: Detailed Course Profile (stacked bar chart, "BIT Detailed Year-Level Profile", showing SOLO classification proportions for each year level)

It is evident from the data displayed in Figure 6 that there is a substantial jump in those higher order objectives between year 2 and year 3 (19%) compared with the difference between year 1 and year 2 (6%). It is feasible to speculate on why this may be the case, whether by design or coincidence, and indeed this could give course evaluators hints about empirically testing whether the implementation of the course matches the expected course rigor.
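The year-to-year jumps quoted above follow from the higher order (SOLO 4 plus SOLO 5) proportions in Table 8:

```python
# Higher order (SOLO 4-5) proportions per year level, from Table 8.
higher = {"First": 39 + 7, "Second": 42 + 10, "Third": 55 + 16}  # per cent

print(higher["Second"] - higher["First"])   # 6  (year 1 to year 2)
print(higher["Third"] - higher["Second"])   # 19 (year 2 to year 3)
```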
7. CONCLUSIONS
Traditionally it has been the case that teachers, academics and educators generally have rejected the notions of measurement and accountability in relation to the teaching process, even though they subject their students to exactly those elements. Many previous attempts at measurement of the education sector have been derived from administrators attempting to apply accounting principles which overlook many of the peculiarities of the education sector and invariably fail or invoke feelings of angst and hostility towards their implementation. This paper has introduced a concept for a metric that is systemic in nature, measuring attributes of the "system" via the individual subjects that comprise a course of study, and ultimately

generates a measure for the overall course of


study. Being a pre-activity indicator it is
independent of the approach taken by the
teaching team and the peculiarities of the
particular cohort of students. Individual
academics have control over the attributes being
measured in that they are the ones who write the
behavioral objectives for their subjects and
therefore contribute to the specifications for the
subjects under their control as they have always
done. The proposed value of this metric is that it
should be used as an indicator of the educational
rigor of the course examined. In such a context it
may be used in a comparable manner to the
degree of difficulty factor previously discussed,
and subsequently as a starting point for
comparison of courses in future benchmarking
processes.
One of the major findings of this research is that
the standard of written behavioral objectives in
the course examined was somewhat inconsistent.
Some of the subjects had well-formed statements
and made it clear about what was intended in the
subject. Others were somewhat vague and
provided minimal useful information about the
subject content or intended student expectations.
From an institutional perspective, a recommendation would be to tighten the
statements of behavioral objectives to improve
the subject specifications. With better and more
consistent statements of objectives the key
stakeholders who make use of those subject
specifications will be better informed, and more
reliable data based on those stated objectives
may be obtained.
This research has demonstrated that it is feasible
to construct a course profile for a degree using
either the SOLO Taxonomy or the amended
Bloom Taxonomy to evaluate the subject
learning objectives for the course. Although the
numeric values given in Table 7 are potentially
useful indicators, the distribution of expected
learning activity across year levels has proven to
be much more interesting and informative when
displayed either in tabular form (Table 8, Table
9), or graphically as in Figure 1 and Figure 2.
The more specific graphical displays of subsets
of the data in Figures 3 to 6 provide alternative
forms of interpretation tools which may be used
to examine courses from the broad overview
level through to detailed year-level views. When
used in conjunction with other examination tools
and inspection of output artifacts, the profile of
expected learning activities in the course should
be a valuable instrument that finds application in
course comparisons, benchmarking, and the
evaluation of course quality.
The language-rich subjects tended to score
higher in the methodology used in this research.
Although this may be a slight impediment to the
technically oriented courses, the overall
influence of the language-rich subjects tends to
be overshadowed by the inherent ratio of
technical to less/non-technical subjects in
structuring technically oriented degree programs.
There are many opportunities to extend the
research associated with this work, including the
expansion of the data sets involved, making
decisions about the relative ease of working with
each taxonomy, investigating the ways to better
interpret the results obtained, and assessing the
applicability of the approach in course
benchmarking when different courses are
compared.

REFERENCES
1. Anderson, L.W., Krathwohl, D.R.: A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Addison Wesley Longman, New York (2001).
2. Krathwohl, D.R.: A Revision of Bloom's Taxonomy: An Overview. Theory into Practice 41(4), 212-218 (2002).
3. Pintrich, P.R.: The Role of Metacognitive Knowledge in Learning, Teaching, and Assessing. Theory into Practice 41(4), 219-225 (2002).
4. Biggs, J.: SOLO Taxonomy, http://www.johnbiggs.com.au/academic/solo-taxonomy/, retrieved May 2013.
5. Biggs, J.: Individual Differences in Study Processes and the Quality of Learning Outcomes. Higher Education 8(4), 381-394 (1979).
6. Brabrand, C., Dahl, B.: Constructive Alignment and the SOLO Taxonomy: A Comparative Study of University Competences in Computer Science vs Mathematics. In Lister, R., Simon (eds.) Seventh Baltic Sea Conference on Computing Education Research, CRPIT vol. 88, pp. 3-17. ACS, Wollongong, NSW, Australia (2007).
7. Johnson, C.G., Fuller, U.: Is Bloom's Taxonomy Appropriate for Computer Science? In Berglund, A., Wiggberg, M. (eds.) 6th Baltic Sea Conference on Computing Education Research: Koli Calling 2006, pp. 120-123. Uppsala University, Sweden (2006).
8. Thompson, E., Luxton-Reilly, A., Whalley, J., Hu, M., Robbins, P.: Bloom's Taxonomy for CS Assessment. In Hamilton, S., Hamilton, M. (eds.) Tenth Australasian Computing Education Conference (ACE 2008), CRPIT vol. 78, pp. 155-162. ACS, Wollongong, NSW, Australia (2008).
9. Fuller, U., Johnson, C.G., Ahoniemi, T., et al.: Developing a Computer Science-specific Learning Taxonomy. ACM SIGCSE Bulletin 39(4), 152-170 (2007).
10. Brabrand, C., Dahl, B.: Using the SOLO Taxonomy to Analyze Competence Progression of University Science Curricula. Higher Education 58(4), 531-549 (2009).

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 363-376
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

State of the Art of a Multi-Agent Based Recommender System for Active Software Engineering Ontology
Udsanee Pakdeetrakulwong 1 and Pornpit Wongthongtham 2
School of Information Systems, Curtin Business School
Curtin University
Kent Street Bentley WA 6102, Australia
1 udsanee.pakdeetr@postgrad.curtin.edu.au, 2 ponnie.clark@curtin.edu.au

ABSTRACT
Software engineering ontology was first developed
to provide efficient collaboration and coordination
among distributed teams working on related software
development projects across the sites. It helped to
clarify the software engineering concepts and project
information as well as enable knowledge sharing.
However, a major challenge of the software
engineering ontology users is that they need the
competence to access and translate what they are
looking for into the concepts and relations described in
the ontology; otherwise, they may not be able to obtain
required information. In this paper, we propose a
conceptual framework of a multi-agent based
recommender system to provide active support to
access and utilize knowledge and project information
in the software engineering ontology. Multi-agent
system and semantic-based recommendation approach
will be integrated to create collaborative working
environment to access and manipulate data from the
ontology and perform reasoning as well as generate
expert recommendation facilities for dispersed
software teams across the sites.

KEYWORDS
Software engineering ontology, multi-agent based
systems, recommendation systems, multi-site software
development, ontology development

1 INTRODUCTION
Due to the emergence of the Internet and the globalization of software development, there has been a growing trend away from the traditional centralized form of software development towards the distributed form, which means that software team members work on the same project but they

are not co-located. They are distributed across


cities, regions, or countries. For example, the
requirement specification and design are done in
Austria, the development is done in China and
Brazil and the testing is done in Russia. There are
several terms used for this approach, for example,
Global software development (GSD), Distributed
software development (DSD), or Multi-site
software development (MSSD). Ågerfalk et al.
[1] discussed the reasons why organizations
consider adopting distributed development of
software systems and application models which
include utilizing larger labor pool, accessing
broader skill base, minimizing production costs
and reducing development duration from round
the clock working. Ó Conchúir et al. [2] also
mentioned other advantages like market
proximity, local knowledge accessibility and
adaptability to various local opportunities.
However, this type of long-distance collaborative
work is not without problems. It can cause
challenges such as communication difficulties,
coordination barriers, language and cultural
differences [3]. This may result in some tasks not
being carried out properly due to the difficulty of
communication and coordination among team
members located in different geographical areas
and lead to scenarios such as software project delays and budget overruns. Many approaches have been proposed to overcome these issues. Thissen et al.
[4] discussed the communication tools and
collaboration processes that were used in globally
distributed projects to facilitate team
communication and interaction. Biehl et al. [5]
proposed a framework for supporting
collaboration in multiple display environments
called IMPROMPTU. It enabled team members to
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 363-376
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
discuss software development tasks through
shared displays. Salinger et al. [6] presented Saros, an Eclipse plug-in for collaborative programming activities between distributed parties.
Since the Semantic Web emerged, ontologies have been widely used as a means of providing the semantics to support information retrieval based on the intended meaning rather than simple matching of search terms [7]. They have since been applied to several fields, including software engineering, throughout the various stages of the software development life cycle, because they can provide a shared conceptualization of the fundamental concepts and relationships of software development projects as well as semantics and mechanisms for communication and the structuring of knowledge. In addition, ontologies have great potential for the analysis and design of complex object-oriented software systems, where they can be used to create the object model for object-oriented software engineering [8].
In the multi-site software development environment, ontologies have played an important role in supporting the working context. There are several tools, techniques, models and best practices that utilize ontologies to facilitate collaboration, communication and project knowledge management, including software engineering process activities, and it has been shown that ontologies can bring benefits such as communication within remote teams, knowledge sharing and effectiveness in information management [9].
Wongthongtham et al. [10] introduced the
Software Engineering Ontology which was an
ontology model of software engineering as a part
of a communication framework to define common
software engineering domain knowledge and share
useful project information for multi-site
development environment. They defined the
software engineering ontology as a formal,
explicit specification of a shared conceptualization
in the domain of software engineering [11].
Formal implies that the software engineering
ontology should be machine-understandable to
enable a better communication and semantically
shared knowledge between humans and machines
(i.e. in the form of software application or
software agents). Explicit implies that the types of software engineering concepts and the constraints on their use are explicitly defined. Shared shows that the consensual knowledge of software engineering is public and accepted by a group of software engineers. Conceptualization implies an abstract model of the identified relevant software engineering concepts.
The software engineering ontology comprises
two sub-ontologies: the generic ontology and the
application specific ontology [11]. The generic
ontology contains concepts and relationships
annotating the whole set of software engineering
concepts which are captured as domain
knowledge. The application specific ontology defines concepts and relationships of software engineering for a particular software development project, captured as sub-domain knowledge. In addition, for each project, project information including project data, project understanding, and project agreement specific to that particular project's needs is defined as instance knowledge. Remote software
teams can access software engineering knowledge
shared in the ontology and query the semantic
linked project information to facilitate common
understanding and consistent communication.
However, the current software engineering
ontology has the same passive structure as other
ontologies [12]. Passive structure means that in
order to address the ontology, users need to have
competence to translate the issue to the concepts
and relationships to which they are referring;
otherwise, the user may not be able to obtain
precise knowledge and project information. In
order to address this drawback, active support is
needed that can utilize the ontology to advise users
on what to do in a certain situation.
In this paper, we propose a novel approach that
can offer active support to the software
engineering ontology users. Two key technologies will be used: agent technologies and recommendation systems.
This paper is organized as follows. In section 2,
we discuss the motivation of this work.
Background and related work are reviewed in
section 3. In section 4, we propose our conceptual
framework. Section 5 demonstrates some scenario
examples of multi-agent based recommender
system providing active support through software
engineering ontology. Finally, the conclusion and
future work are discussed in Section 6.
2 MOTIVATION
The potential benefits of this work are significant, as follows.
2.1 A report in the literature [13] mentions that not all globally distributed projects can benefit from working in the global context. Twenty to twenty-five percent of all outsourcing relationships fail within two years and fifty percent fail within five years. One of the main reasons for this failure rate is the communication barrier across multiple sites. The proposed work is intended to support effective communication within projects in order to reduce the failure rate of geographically distributed software development projects.
2.2 The proposed recommender approach, integrated with the automatic reasoning capability of autonomous software agents, will provide active support to multi-site software teams by recommending, as an expert would, useful project information and solutions for project issues that arise.
2.3 With the proposed framework, software
companies can take advantage of developing
software in a global context, the benefits of which
are: reduction in development costs, access to a
large skilled labor pool, effective utilization of
time zones etc. This will enable them to be more
competitive when bidding in the software
development market.

3 BACKGROUND AND RELATED WORK


3.1 Agent Technologies
The evolution of Web technologies started from Web 1.0, which was considered the traditional information web. It then moved to Web 2.0, focusing on user-generated content and community-oriented information gathering. However, given the substantial amount of data and unstructured content generated, web users have difficulty searching for content. Therefore, Web 3.0, also known as the Semantic Web, has emerged to alleviate this issue. The underlying principle is that data should be well organized to support information exchange and to enable a machine or software agent to understand, process and reason over it to produce new conclusions. Web 3.0 is the combination of the existing Web 2.0 and the Semantic Web, which integrates ontologies, intelligent agents, and semantic knowledge management [14].
A software agent is a computer program that
has relatively complete functionality and
cooperates with others to meet its designed
objectives [15]. The other characteristic of an
agent is its capability of flexible and autonomous
action in the environment where it is situated [16].
An agent is also active, task-oriented and is
capable of decision-making [17].
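The perceive-decide-act cycle implied by these definitions can be sketched in a few lines of Python. The thermostat example and all class and method names below are illustrative assumptions, not part of any cited agent framework:

```python
# Minimal sketch of an autonomous, goal-directed software agent:
# it perceives its environment, decides on an action toward its
# goal, and acts, without outside intervention.

class Agent:
    """A goal-directed agent following a perceive-decide-act cycle."""

    def __init__(self, goal):
        self.goal = goal

    def perceive(self, environment):
        raise NotImplementedError

    def decide(self, percept):
        raise NotImplementedError

    def act(self, environment):
        # One autonomous step: sense, then choose an action.
        return self.decide(self.perceive(environment))


class ThermostatAgent(Agent):
    """Keeps a room at the goal temperature autonomously."""

    def perceive(self, environment):
        return environment["temperature"]

    def decide(self, temp):
        if temp < self.goal:
            return "heat"
        if temp > self.goal:
            return "cool"
        return "idle"


agent = ThermostatAgent(goal=21)
print(agent.act({"temperature": 18}))  # -> heat
```

A multi-agent system would run many such agents concurrently and let them exchange messages, as discussed next.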
A multi-agent system (MAS) consists of multiple agents communicating and collaborating with each other in one system in order to achieve goals [17]. It is used to solve complex problems that cannot be handled by an individual agent. MAS is appropriate for
domains that are distributed such as global
manufacturing supply chain network [18, 19],
distributed computing [20, 21], collaborative software development environments [22, 23],
etc. It can increase the efficiency and effectiveness
of working groups in distributed environments.
Implicit [24] was a multi-agent recommendation
system for web search intended to support groups
or a community of people with similar but specific
interests. Romero, Vizcaíno and Piattini [25] introduced a multi-agent simulation tool to support training in the global requirements elicitation process. They used agent technology to simulate various stakeholders in order to enable requirements engineers to understand and gain experience in requirements elicitation. Knowledge
sharing and exchange is one of the key factors in the development of MAS [26]. Each agent collaborates with other agents, so they must be able to communicate and understand messages from one another. MAS has been widely used in several studies to support software collaborative systems in the distributed software development environment. For example, Col_Req was a multi-agent based collaborative requirements tool that supported requirements engineers for real-time systems during the requirements engineering phase [27]. Distributed stakeholders (e.g. software
teams, customers, etc.) worked on the system for collaborative acquisition, navigation and documentation activities.
Ontologies can be used to facilitate semantic interoperability, while the Agent Communication Language (ACL) defined by FIPA can be used as the language of communication between agents. There are several existing studies that integrate the use of ontologies and MAS. Paydar and Kahani [28]
introduced a multi-agent framework for automated
testing of web-based applications. The framework
was designed to facilitate the automated execution
of different types of tests and different information
sources. Ontology-based computational intelligent
multi-agent for Capability Maturity Model
Integration (CMMI) assessment was proposed by
Lee and Wang [29]. The multi-agent system
consisted of three main agents interacting with one
another to achieve the goal of effectively
summarizing the evaluation reports of the software
engineering process regarding CMMI assessment.
The CMMI ontology was developed to represent
the CMMI domain knowledge. This research did
not cover other knowledge areas of the software
engineering domain but it specifically focused on
the software engineering process with respect to
CMMI assessment only. The integration of two
promising technologies in software engineering, multi-agent systems and Software Product Lines (SPL), was addressed in [30]. It provided a solution for producing higher quality software at lower development cost and with less time-to-market by taking advantage of agent technologies. An ontology was used for modeling Multi-agent System Product Lines (MAS-PLs) and was represented by UML class diagrams. MADIS [21] was a multi-agent design
information system aiming at supporting the
distributed design process by managing
information, integrating resources dispersed over a
computer network and aiding collaboration
processes. The MADIS ontology was developed to
formally conceptualize the engineering design
domain to enable knowledge sharing, reuse and
integration in a distributed design environment.
Monte-Alto et al. [31] proposed a multi-agent context processing mechanism called ContextP-GSD (Context Processing on Global Software Development) that utilized contextual information to assist users' tasks during the software development project. This project applied agent-based technology to process contextual information and support human resource allocation. OntoDiSen was an application ontology exploited in this system, representing GSD contextual information. Although this research aimed at facilitating the collaboration and
Table 1. Review of some multi-agent system applications

Methodologies/Tools/Authors | Purpose of using multi-agent systems | Focus | Makes use of ontologies
Implicit | Supporting web search for groups or communities of people | Web search | No
Romero et al. | Being a simulation tool to support training in the global requirements elicitation process | E-learning | No
Col_Req | Supporting software engineers during the requirements engineering phase for collaborative acquisition, navigation and documentation activities | Requirements engineering activities | No
Paydar and Kahani | Performing the automated test process | Software testing | Yes
Lee and Wang | Summarizing the evaluation reports for the CMMI assessment | CMMI assessment | Yes
Nunes et al. | Supporting mass customized software production | Software product lines | Yes
MADIS | Supporting the distributed design process by managing information, integrating resources dispersed over a computer network and facilitating collaboration processes | Distributed collaborative engineering design | Yes
ContextP-GSD | Processing context information and supporting human resource allocation | GSD contextual information | Yes
coordination in global software development
environment and used ontology to define semantic
information which was quite similar to our
proposed work, it focused only on contextual
software engineering information, not the whole
software engineering domain knowledge.
The summary of the reviewed multi-agent system applications is presented in Table 1. It is evident that many studies have exploited multi-agent technology in various applications, and a number of them utilize multi-agent technology along with ontologies to support software development tasks. However, most of them cover only a specific phase or issue in the software engineering knowledge domain. Currently, there are no multi-agent system applications that provide active communication and coordination throughout the whole software engineering process.
3.2 Recommendation Systems
Recommendation systems are techniques or
software tools assisting users with suggestions for
items, contents or services to be of use in
overloaded amounts of information [32]. The
initial academic work on implementing
recommendation systems was first conducted in
the mid-1990s. Park et al. [33] undertook a
literature
review
and
classification
of
recommender systems based on 210 research
papers on recommendation systems published in
academic journals between 2001 and 2010. The
result showed that publications related to this topic
had increased significantly, especially after 2007
and also extended to fields other than movies and
shopping. They conclude from their review that it
is highly likely that research in the area of
recommendation systems will be active and has
the potential to increase significantly in the future.
Recommendation systems are normally classified based on how the recommendation is implemented, as follows [34].
- The content-based approach recommends items which resemble the ones that a specific user formerly preferred.
- The collaborative filtering approach recommends items to users based on the similarity between users.
- The hybrid approach combines collaborative filtering and content-based techniques.
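A minimal sketch of the collaborative filtering approach helps make the classification concrete: items are recommended to a user based on the ratings of the most similar other user. The ratings matrix, user names and helper functions below are invented for illustration:

```python
import math

# User-based collaborative filtering sketch: find the user most
# similar to the target user, then suggest items that the similar
# user rated but the target user has not seen.

ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 5, "item2": 3, "item4": 5},
    "carol": {"item1": 1, "item2": 5, "item4": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den

def recommend(user):
    """Suggest items liked by the nearest neighbor and unseen by `user`."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    return [i for i in ratings[nearest] if i not in ratings[user]]

print(recommend("alice"))  # bob is most similar -> ['item4']
```

A content-based variant would instead compare item features against the user's own history, with no reference to other users.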
The main strength of the content-based approach is that it can provide accurate recommendations to a user without knowing others' preferences. However, due to the syntactic similarity metrics employed, it suffers from the overspecialization problem, whereby only items similar to those the user already knows are recommended [35]. The collaborative filtering approach mimics human behavior in sharing opinions with others. It offers recommendations based not only on a user's interests but also on others' preferences; therefore, it can produce more unexpected or diverse items than the content-based technique. However, collaborative filtering also suffers from some severe drawbacks such as data sparsity, gray sheep, and synonymy [34]. The data sparsity issue means that a recommender is unable to make meaningful recommendations because of an initial lack of ratings, such as for a new user or a new item. The gray sheep problem refers to users whose interests do not match those of any group of people, so they do not benefit from this approach. The synonymy challenge causes poor quality of recommendations because the collaborative filtering approach cannot discover items that have different names but the same meaning.
Given the critical weaknesses of content-based and collaborative filtering recommender systems, the hybrid approach was introduced, combining the two to resolve certain problems associated with each. Nevertheless, hybrid recommender systems are still limited by syntactic matching and semantic mismatching [35]. Syntactic matching techniques relate items through common words rather than their meaning, so the resulting recommendations are sometimes limited and of poor quality.
Semantic-based recommendation systems have emerged to address the limitations of previous recommendation techniques. These approaches integrate semantic knowledge in their processes, and their performance is based on a knowledge base which contains relations between concepts,
normally defined through an ontology or a concept diagram (like a taxonomy) [36]. Semantic-based recommendation systems have been proven to perform better than previous approaches by applying a knowledge base and semantic reasoning filtering techniques. These two elements help to improve the accuracy of recommendation systems because semantic descriptions are used, unlike syntactic approaches which consider only the words [37]. Various applications in several fields have been proposed which include a semantic reasoning mechanism in their recommendation systems; for instance,
Blanco-Fernández et al. [38] presented a methodology to overcome the overspecialization problem and improve the effectiveness of content-based recommendation approaches by applying semantic descriptions of the items and including a semantic reasoning technique. They claimed that the proposed methodology had the potential to enhance the quality of recommendations beyond that of traditional recommendation systems and that it could be applied in various domains. This model was realized through the implementation of a prototype, AVATAR, a recommender system for personalized TV content. Cantador et al. [39]
explored a model of an enhanced semantic layer
for hybrid recommendation systems. Different
methods were integrated for different purposes in
order to improve the accuracy and quality of
recommendations such as ontology-based
knowledge representation concept, spreading
activation algorithm and three recommendation
techniques which were personalized, semantic
context-aware and content-based collaborative
recommendation systems. The authors illustrated
the use of their methodology in a news
recommendation system, News@Hand. An ontology-based semantic recommendation approach for a programming tutoring system called Protus 2.0 was proposed in the education domain by [40]. It was an adaptive and personalized web-based tutoring system that used recommendation approaches during the personalization process.
Web Ontology Language (OWL) was used to
represent context knowledge while Semantic Web
Rule Language (SWRL) was exploited to deal
with semantic reasoning. Although semantic-based recommendation systems were employed in several domains, none of them was specifically
intended to create recommendations to manage
queries or project issues raised in software
development teams through the use of ontologies
in software engineering.
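The advantage of semantic over syntactic matching described above can be illustrated with a toy concept taxonomy. The hierarchy, synonym table and function names below are invented assumptions, far simpler than a real software engineering ontology:

```python
# Semantic (taxonomy-based) matching versus syntactic matching.
# Two terms are related if the concepts behind them share a
# non-root ancestor in the concept hierarchy.

parent = {               # child concept -> parent concept
    "UnitTest": "Testing",
    "IntegrationTest": "Testing",
    "Testing": "SoftwareEngineering",
    "Design": "SoftwareEngineering",
}

synonyms = {             # surface term -> ontology concept
    "verification": "Testing",
    "testing": "Testing",
    "unit test": "UnitTest",
    "design": "Design",
}

def ancestors(concept):
    """Walk up the taxonomy, returning the concept and all its ancestors."""
    chain = [concept]
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def semantically_related(term_a, term_b):
    """True if the two terms' concepts share an ancestor below the root."""
    a, b = synonyms[term_a], synonyms[term_b]
    shared = set(ancestors(a)) & set(ancestors(b))
    return bool(shared - {"SoftwareEngineering"})

# Syntactic matching misses this pair; concept matching finds it.
print("verification" == "unit test")                      # False
print(semantically_related("verification", "unit test"))  # True
```

This is the kind of synonymy resolution that purely word-based recommenders cannot achieve.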
3.3 Recommendation systems for software
engineering
Recommendation systems for software engineering (RSSEs) are software tools introduced specifically to help software development teams deal with information-seeking and decision-making [41]. RSSEs have been an active area of research for the past several years, and they have proven effective and useful in helping software developers cope with the huge amount of information they face when working on software projects. They can provide recommendations for development information (i.e. code, artifacts, quality measures, tools) and collaboration information (i.e. people, awareness, status and priorities) [42].
Here are some reviews of recommendation systems that focus mainly on recommending experts or relevant people. Codebook [43] was a social network web service that linked developers with their work artifacts and maintained connections with other software team members. Conscius [44] was a recommender system that located a source code expert on a given software project by using communication history (archived mail threads), source code, documentation and SCM change history. Steinmacher et al. [45] proposed a recommendation system that could assist newcomers by discovering the expert whose skills matched a selected issue to mentor them regarding the technical and social aspects of a particular task. Ensemble was a recommender application that helped software team members communicate on current work by recommending other people when a developer updated related artifacts such as source code or work items [46]. These recommendations could help to locate related people and save time when seeking their expertise during the software development process. The accuracy of the recommendations was increased by exploiting user context, workspace information and social information.
Some other RSSEs focused on supporting developers while they were coding or debugging programs. Fishtail was a plug-in tool for the Eclipse IDE which automatically recommended source code examples from the web that were relevant to developers' current tasks [47]. Cordeiro et al. [48] proposed a context-based recommendation to support problem-solving in software development. They developed a client/server tool to integrate recommendations of question/answering web resources in the developer's work environment, providing automatic assistance when exception errors occurred. DebugAdvisor [49] was proposed as a search tool for debugging which supported a "fat query", a query carrying all the contextual information of the bug issue. Developers could search bug reports from multiple software repositories with a single query. The system returned a ranked list of bug descriptions that matched the query and then used it to retrieve recommendations of related artifacts, such as source code and functions, from the generated relationship graph. Jaekel et al. [50] developed a Semantic Helper component, one of the modules of the FACIT-SME project, a three-year project intended to assist IT SMEs to select and use quality business process models and software engineering methods in their software development projects. The Semantic Helper component was intended to assist other components by filtering information and performing automatic matching between the models stored in semantic format in FACIT-SME repositories. This recommender system also provided ranked lists of the most relevant models for a given query. Dhruv [51] advised software developers on relevant software artifacts and bug reports. Semantic web technology was explored in this research in order to facilitate problem-solving in the open-source software community. It exploited ontologies to identify where related artifacts were located and their descriptions, including relevant bug information.
All the described applications had been developed to improve the productivity of software development projects in only one of the phases of the SDLC, and most of them focus on the implementation phase in particular. However, software team members mostly need support in every phase of a software development project.
Regarding knowledge representation, all systems
except for Dhruv and Semantic Helper used
traditional knowledge representation and syntactic
matching techniques so they lacked integrated and
shared information and could not support a
semantic reasoning mechanism.
4 CONCEPTUAL FRAMEWORK
This section presents the proposed conceptual framework of the multi-agent based recommender approach for the active software engineering ontology. The users of the software engineering ontology will be provided with intelligent support to access and recommend knowledge and project information captured in the software engineering ontology. Intelligent agents will work collaboratively to facilitate the software project teams who are working together irrespective of their geographical location. The aims of the multi-agent based recommender system are:
1) to extract and convey semantically rich project information described in the software engineering ontology to team members,
2) to manage project issues that arise by utilizing the agents' capability of automated reasoning,
3) to recommend solutions for any project issues, acting as an expert, on a constant and autonomous basis,
4) to support the work of automatically adding semantic project information into the software engineering ontology instantiations during the refinement process.
The proposed conceptual framework of the multi-agent based recommender system is shown in Figure 1. It comprises four types of agents, whose roles are briefly described as follows.
1) User agents
- Act as representatives of each user.
- Build and maintain user profiles.
- Manage the semantic annotation service.
- Communicate with recommender and ontology agents.
2) Semantic recommender agent
- Recommend tentative solutions, including affected software artifacts and users.
- Work with ontology agents to make decisions based on knowledge in the software engineering ontology.
- Notify affected agents in case of ontology updates.
- Coordinate with the evolution agent in case of unresolved issues/queries.
3) Ontology agents
- Manage and maintain the software engineering ontology repository.
- Retrieve information from the ontology for other agents.
- Work with user agents for the annotation service.
- Manage the ontology population process.
- Notify the recommender agent of ontology updates.
4) Evolution agent
- Receive update requests regarding unresolved issues/queries in the existing software engineering ontology and coordinate with the Software Engineering Social Network system (SESN) for the ontology evolution process.
- Notify ontology agents of updates from the SESN process.
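As an illustration of how these agents might interact, the sketch below routes a user agent's request through a recommender agent to an ontology agent. The message fields loosely mirror a FIPA ACL message (performative, sender, receiver, content), but this is a simplified assumption, not a FIPA-compliant implementation, and the agent classes and sample triple are invented:

```python
from dataclasses import dataclass

@dataclass
class Message:
    performative: str   # e.g. "request", "inform"
    sender: str
    receiver: str
    content: str

class OntologyAgent:
    """Answers requests from the ontology repository (one sample triple)."""
    def __init__(self):
        self.triples = {("ProjectX", "usesDesignPattern", "Observer")}

    def handle(self, msg):
        hits = [t for t in self.triples if msg.content in t]
        return Message("inform", "ontology", msg.sender, str(hits))

class RecommenderAgent:
    """Consults the ontology agent before replying to a user agent."""
    def __init__(self, ontology_agent):
        self.ontology = ontology_agent

    def handle(self, msg):
        reply = self.ontology.handle(
            Message("request", "recommender", "ontology", msg.content))
        return Message("inform", "recommender", msg.sender,
                       "based on: " + reply.content)

class UserAgent:
    """Represents one user; forwards the user's request downstream."""
    def __init__(self, user, recommender):
        self.user, self.recommender = user, recommender

    def ask(self, topic):
        return self.recommender.handle(
            Message("request", self.user, "recommender", topic))

onto = OntologyAgent()
ua = UserAgent("alice", RecommenderAgent(onto))
print(ua.ask("ProjectX").content)
```

A real deployment would make these exchanges asynchronous and add the evolution agent for unresolved queries, as described in the processes below.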
The agents will work collaboratively through six processes, as follows.
1) Semantic Annotation Process
As mentioned, in the software engineering ontology there are two levels of abstraction: generic software engineering, representing the whole set of software engineering domain concepts, and application specific software engineering, covering the set of software engineering concepts used for particular projects. Instantiations, also known as the population, are part of the abstraction of the application specific software engineering ontology. They are used for storing data instances of the projects. Software project information is often updated according to changes in requirements or in design processes; therefore, manually transforming or mapping new changes into a semantically rich form and populating them as instances of the software engineering ontology is time-consuming, laborious, tedious and prone to error. With the help of agents which perform the semantic annotation process and ontology population, project information can be automatically transformed or mapped into concepts defined in the ontology with a minimum of human intervention.
This process starts with user agents receiving project information from software team members. User agents perform the information extraction process with reference to classes and instances in the software engineering ontology retrieved by ontology agents. The RDF annotation is then generated by the semantic annotation module and stored in the repository containing the annotations of other project information.
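The annotation step can be pictured as mapping recognized phrases in project text onto ontology concepts and emitting RDF-style triples. The concept table, the `se:` namespace prefix and the sample text below are invented for illustration:

```python
# Sketch of semantic annotation: free-form project information is
# scanned for known concept phrases and each hit becomes an
# RDF-style (subject, predicate, object) triple.

concepts = {
    "class diagram": "se:ClassDiagram",
    "use case": "se:UseCase",
    "sequence diagram": "se:SequenceDiagram",
}

def annotate(artifact_id, text):
    """Emit triples for every ontology concept mentioned in the text."""
    text = text.lower()
    return [(artifact_id, "se:annotatedWith", uri)
            for phrase, uri in concepts.items() if phrase in text]

triples = annotate("doc-42", "Updated the class diagram and one use case.")
print(triples)  # doc-42 annotated with se:ClassDiagram and se:UseCase
```

Real annotation would use proper information extraction rather than substring matching, but the output shape (triples ready for population) is the same.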
2) Ontology Population Process
Ontology population is the process of adding new instances into an existing ontology. When project information has been successfully annotated, it is ready to be populated into the software engineering ontology. In this research, ontology agents are responsible for managing the ontology population process. The annotated project information is identified as candidate ontological instances and is validated for consistency between the incoming instances and those already stored in the ontology. It is then inserted into the software engineering ontology as new instances.
3) Query Process
User agents will send their queries to
ontology agents. Ontology agents will retrieve and
provide information from the software engineering
ontology in accordance with their queries.
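The query step can be sketched as matching a triple pattern (with `None` as a wildcard) against the populated instances. The store contents and property names below are invented for illustration:

```python
# Sketch of the query process: an ontology agent answers a user
# agent's query by matching a triple pattern against stored instances.

store = [
    ("LoginModule", "se:hasDesignPattern", "Singleton"),
    ("LoginModule", "se:assignedTo", "team-sydney"),
    ("PaymentModule", "se:hasDesignPattern", "Strategy"),
]

def query(pattern):
    """Return all triples matching (s, p, o); None matches anything."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which design patterns does LoginModule use?"
print(query(("LoginModule", "se:hasDesignPattern", None)))
```

In practice this pattern matching would be expressed as a SPARQL query against the ontology repository; the wildcard logic shown here is the same idea in miniature.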
4) Recommendation Process
User agents will send their issues or requests to the semantic recommender agent. The recommender agent then cooperates with ontology agents to make a recommendation based on knowledge explicitly described in the software engineering ontology and other resources, e.g. user profiles or issue tracking systems. Semantic recommendation techniques will be employed during the recommendation process to improve the accuracy of the recommendations and to provide tentative solutions as well as the most relevant knowledge according to the user's request.
5) Ontology Evolution Update Process
If the recommender agent is unable to recommend a solution, either because a request does not match the concepts defined in the software engineering ontology or because of differing understandings of project-related information, the evolution agent will coordinate with the Software Engineering Social Network System (SESN) to run the ontology evolution process. This process is beyond the scope of this research; more information can be found in [52] and [53]. When the evolution process is completed and agreement on the changes has been reached, the evolution agent will notify ontology agents to merge these concepts into the existing software engineering ontology. When ontology agents complete the update, they will tell the recommender agent to notify all affected agents. Such a change adjusts particular concepts and relationships and leads to the change of generic concepts in the
ontology. This is called ontology evolution and may generate a new version of the software engineering ontology. It should be noted that a version of the software engineering ontology refers to a broad category of software applications, e.g. software engineering for CRM, ERP, or cloud computing, rather than to a specific software development project. Each version therefore still needs its own ontology agent to manage and maintain it, including ensuring its reliability and consistency.
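The merge-then-notify handoff could look like the following sketch; the agent classes, method names, and the version counter are assumptions for illustration only:

```python
# Hedged sketch of the evolution update: after agreement is reached,
# agreed concepts are merged (bumping the ontology version) and the
# recommender agent notifies every affected agent of the new version.

class EvolvingOntologyAgent:
    def __init__(self, concepts):
        self.concepts = set(concepts)
        self.version = 1

    def merge(self, new_concepts):
        """Merge agreed concepts; a real change yields a new version."""
        if not new_concepts - self.concepts:
            return self.version          # nothing new, keep current version
        self.concepts |= new_concepts
        self.version += 1
        return self.version

class NotifyingRecommender:
    def __init__(self):
        self.notified = []

    def notify_affected(self, agents, version):
        # Record which agents were told about which ontology version.
        for agent in agents:
            self.notified.append((agent, version))

oa = EvolvingOntologyAgent({"Class", "Method"})
rec = NotifyingRecommender()
new_version = oa.merge({"Microservice"})
rec.notify_affected(["user-agent-A", "user-agent-B"], new_version)
```

Merging a genuinely new concept produces version 2, and re-merging the same concepts leaves the version unchanged, mirroring the idea that only agreed changes generate a new ontology version.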
6) Issue Raising with Instance Update Process
This process is different from the ontology evolution update process. The ontology evolution update process is an evolution at the concept level, where changes are made to the underlying software engineering domain knowledge, while the instance update process is an evolution at the instance level, dealing with changes in the refinement process or in the conceptualization. The process starts when a software team member raises an issue to his personal user agent requesting a change to an instance in the software engineering ontology. Ontology agents will check which instances, components, or people will be affected by this change and notify the user. He or other members can then propose their opinions on the change until final agreement has been reached. Ontology agents will then update the related instances in the software engineering ontology repository and inform the semantic recommender agent about the update. The recommender agent will notify only those team members who should be advised of the changes and their effects.
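The impact check that ontology agents perform before an instance update can be sketched as a transitive traversal of dependency links; the relation format and all names below are assumptions:

```python
# Sketch of the impact check: follow "depends on" links transitively to
# find every instance affected by a proposed change to one instance.

from collections import deque

def affected_by(change_target, relations):
    """relations: (source, target) pairs meaning 'source depends on target'.
    Returns all instances transitively depending on change_target."""
    affected, frontier = set(), deque([change_target])
    while frontier:
        node = frontier.popleft()
        for src, dst in relations:
            if dst == node and src not in affected:
                affected.add(src)
                frontier.append(src)
    return affected

links = [("Order", "Customer"), ("Invoice", "Order"), ("Report", "Invoice")]
impact = affected_by("Customer", links)
```

Changing the `Customer` instance here flags `Order`, `Invoice`, and `Report`, so the agents would notify the owners of all three before the change is agreed.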
5 SCENARIO EXAMPLES OF MULTI-AGENT BASED RECOMMENDER SYSTEM PROVIDING ACTIVE SUPPORT THROUGH SOFTWARE ENGINEERING ONTOLOGY
Here are some scenarios that explain how the proposed system works. Suppose that Globeware Company is a US multinational company with three software development sites, located in the US, Australia, and India. They are currently working on a mobile application project. All requirement gathering and software specification are done in the US, while software design and implementation are done in Australia and India. Globeware utilizes the agent-based recommendation system for the software engineering ontology framework in this project to facilitate effective remote communication and coordination.
The software engineering ontology instantiations
for this project have been derived from populating
software project information, project agreement,
and problem domain from each phase in SDLC
which are mapped into the concepts defined in the
software engineering ontology. Here are some
examples showing how this methodology can
provide active support to team members when
working on software development project.
First example: Member A is a system analyst.
Since the user requirement has changed, an
additional class has to be added (considered as a
new instance) into the specific software
engineering ontology in which all project data is
generally stored as instances. He contacts his user
agent and inputs project information about the
additional class. The user agent will automatically annotate it against the concepts formed in the ontology through the semantic annotation process. Related concepts, classes, data types, object properties, and data type properties are used as metadata to annotate the content of documents (refer to Figure 1, semantic annotation process). The annotated
additional class will be in the semantic structure of
the software engineering domain and ready to be
populated to the ontology by ontology agents
(refer to Figure 1 ontology population
process). The recommender agent will take
responsibility for notifying all affected agent(s)
about this ontology instance update.
Second example: Member B is a new member
who has just joined this project as a developer. He
would like to learn more about project information
such as output from the design phase that only
relates to his work and catch up with the current
status of the project. He can query ontology agents
via his user agent to access project information
and status. The agent will autonomously consider
retrieving only particular project information
stored as instance knowledge in the specific
software engineering ontology that is related to his
work so it assists him to start working quickly
with the most relevant and precise situational
knowledge (refer to Figure 1 query process). If
he doubts the output from the design phase, he can raise a query or an issue through his user agent, which will communicate with the recommender agent to reason over the knowledge published in the ontology repository, find a possible solution, or recommend the most suitable person to clarify his issue (refer to Figure 1, recommendation process).
Third example: Member C finds a bug in the newly released system, so he informs his user agent. Before the bug issue is filed, the recommender agent and ontology agents will try to locate related problems in the project issue tracking system based on the bug's associated concepts defined in the software engineering ontology and its instances. The benefit is avoiding duplicate bug reports from different developers, which may create confusion and unnecessary information overload. Ontology agents will then attempt to
link the bug symptoms to related software artifacts
that are all annotated using the software
engineering ontology in order to help the
developer quickly diagnose which part of the
software artifacts might be causing the problem.
Additionally, before the developer fixes the bug, ontology agents will inform him of the classes or
components that might be affected. Furthermore,
with a full record of mappings between previously
reported bugs and people who resolved those
bugs, the recommender agent will be able to
recommend potential people to consult or to
resolve some particular bug issue (refer to Figure
1 recommendation process).
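The expert lookup at the end of this scenario can be sketched as concept-overlap voting over the history of resolved bugs; the data layout and member names are hypothetical:

```python
# Sketch of expert recommendation from resolved-bug history: each past fix
# votes for its resolver, weighted by concept overlap with the new bug.

from collections import Counter

def suggest_expert(bug_concepts, resolved_bugs):
    """resolved_bugs: list of (concepts, resolver) pairs. Returns the
    resolver whose past fixes share the most ontology concepts with the
    new bug, or None if no past fix is related."""
    votes = Counter()
    for concepts, resolver in resolved_bugs:
        overlap = len(set(bug_concepts) & set(concepts))
        if overlap:
            votes[resolver] += overlap
    return votes.most_common(1)[0][0] if votes else None

history = [({"Login", "Session"}, "memberB"),
           ({"Payment"}, "memberC"),
           ({"Session", "Cache"}, "memberB")]
expert = suggest_expert({"Session"}, history)
```

For a new session-related bug, `memberB`, who resolved two session-related bugs before, is suggested as the person to consult.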
Fourth example: Member D raises an issue
about customer class diagram through the
information platform in plain text. From the
content, the ontology agent will automatically
parse software engineering terms by referring to
the concept in software engineering ontology and
autonomously reason and derive only related
instances which are customer class and other
relevant classes and relationships. Then it will
dynamically draw the diagram from the retrieved
information and show it to Member A. He or other members can propose their opinions by working on the diagram itself and making tracked changes. Ontology agents will also warn them about classes or components affected by their change proposal. The content in the ontology repository will not be updated until final agreement has been reached. Ontology agents will then convert the solution diagram and store it back in the semantic format of the specific software engineering ontology. The recommender agent will automatically notify only those team members who should be advised of the changes and their effects (refer to Figure 1, issue raising with instance update process). This makes it easier for team members to discuss and propose issues, questions, or solutions than communicating in plain text or words alone. With the support of collaborative agents, long-distance communication, which often causes misunderstanding during software development, can proceed more clearly and effectively in the multi-site environment.
6 CONCLUSION AND FUTURE WORK
This paper proposes a multi-agent based recommender system conceptual framework for providing intelligent support to access and recommend knowledge and project information captured in the software engineering ontology. The roles of four types of software agents are analyzed and identified, and the interactions between the software agents and the ontology within the collaboration framework are defined as six processes. This work is intended to facilitate effective communication and coordination in remote software development teams and thereby reduce the failure rate of multi-site software development projects.
For future work, semantic annotation will be implemented to annotate project information such as user requirements, source code, etc., and populate it into the software engineering ontology instantiations. We will then design a semantic-based recommendation system built on the software engineering ontology and integrate it with the multi-agent implementation. We will evaluate and validate our work in accordance with the framework for evaluation in design science research proposed by Venable, Pries-Heje and
Baskerville [54]. The prototype will be developed
and evaluated by two groups of multi-site software
development teams in order to obtain feedback to
measure the usability and effectiveness of the
system to solve the problem. In addition, to
evaluate the system performance, simulation will
be used by executing a prototype with artificial
data.

7 ACKNOWLEDGEMENTS
Financial support for this study was provided by the Australian Government through the Endeavour Awards program and by the Royal Thai Government Scholarship program.

8 REFERENCES
1. P. J. Ågerfalk, B. Fitzgerald, H. Holmström, B. Lings, B. Lundell, and E. Ó Conchúir, "A framework for considering opportunities and threats in distributed software development," pp. 47-61.
2. E. Ó Conchúir, P. J. Ågerfalk, H. H. Olsson, and B. Fitzgerald, "Global software development: where are the benefits?" Communications of the ACM, vol. 52, no. 8, pp. 127-131, 2009.
3. S. Islam, M. M. A. Joarder, and S. H. Houmb, "Goal and risk factors in offshore outsourced software development from vendor's viewpoint," pp. 347-352.
4. M. R. Thissen, J. M. Page, M. C. Bharathi, and T. L. Austin, "Communication tools for distributed software development teams," in Proceedings of the 2007 ACM SIGMIS CPR Conference on Computer Personnel Research: The Global Information Technology Workforce, St. Louis, Missouri, USA, 2007, pp. 28-35.
5. J. T. Biehl, W. T. Baker, B. P. Bailey, D. S. Tan, K. M. Inkpen, and M. Czerwinski, "Impromptu: a new interaction framework for supporting collaboration in multiple display environments and its field evaluation for co-located software development," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 2008, pp. 939-948.
6. S. Salinger, C. Oezbek, K. Beecher, and J. Schenk, "Saros: an Eclipse plug-in for distributed party programming," in Proceedings of the 2010 ICSE Workshop on Cooperative and Human Aspects of Software Engineering, Cape Town, South Africa, 2010, pp. 48-55.
7. T. S. Dillon, E. Chang, and P. Wongthongtham, "Ontology-based software engineering - software engineering 2.0," pp. 13-23.
8. Y. Blanco-Fernández, J. J. Pazos-Arias, A. Gil-Solla, M. Ramos-Cabrer, M. López-Nores, J. García-Duque, A. Fernández-Vilas, and R. P. Díaz-Redondo, "Exploiting synergies between semantic reasoning and personalization strategies in intelligent recommender systems: A case study," Journal of Systems and Software, vol. 81, no. 12, pp. 2371-2385, 2008.
9. A. Borges, S. Soares, S. Meira, H. Tomaz, R. Rocha, and C. Costa, "Ontologies supporting the distributed software development: a systematic mapping study," in Proceedings of the 17th International Conference on Evaluation and Assessment in Software Engineering, Porto de Galinhas, Brazil, 2013, pp. 153-164.
10. P. Wongthongtham, E. Chang, T. S. Dillon, and I. Sommerville, "Development of a software engineering ontology for multi-site software development," IEEE Transactions on Knowledge and Data Engineering, 2008.
11. P. Wongthongtham, E. Chang, T. S. Dillon, and I. Sommerville, "Ontology-based multi-site software development methodology and tools," Journal of Systems Architecture, vol. 52, no. 11, pp. 640-653, 2006.
12. P. Wongthongtham, T. Dillon, and E. Chang, "State of the art of community-driven software engineering ontology evolution," pp. 1039-1045.
13. C. Ebert, "The dark side: challenges," in Global Software and IT, pp. 19-25, John Wiley & Sons, Inc., 2011.
14. H.-C. Chu and S.-W. Yang, "Innovative semantic web services for next generation academic electronic library via web 3.0 via distributed artificial intelligence," in Intelligent Information and Database Systems, Lecture Notes in Computer Science, pp. 118-124, Springer Berlin Heidelberg, 2012.
15. H. Qingning, Z. Hong, and S. Greenwood, "A multi-agent software engineering environment for testing Web-based applications," pp. 210-215.
16. N. R. Jennings, "On agent-based software engineering," Artificial Intelligence, vol. 117, no. 2, pp. 277-296, 2000.
17. V. N. Marivate, G. Ssali, and T. Marwala, "An intelligent Multi-Agent recommender system for human capacity building," pp. 909-915.
18. J. Jiao, X. You, and A. Kumar, "An agent-based framework for collaborative negotiation in the global manufacturing supply chain network," Robotics and Computer-Integrated Manufacturing, vol. 22, no. 3, pp. 239-255, 2006.
19. W. T. Goh and J. W. P. Gan, "A dynamic multi-agent based framework for global supply chain," pp. 981-984, Vol. 2.
20. Z. Zhong, J. D. McCalley, V. Vishwanathan, and V. Honavar, "Multiagent system solutions for distributed computing, communications, and data integration needs in the power industry," pp. 45-49, Vol. 1.
21. C. Chira, "A multi-agent approach to distributed computing," Computational Intelligence Report No. 4, 2007.
22. A. Y. Hamo and M. A. Aljawaherry, "Constructing a collaborative multi-agents system tool for realtime system requirements," International Journal of Computer Science (IJCSI), vol. 9, no. 4, 2012.
23. Z. Chuan, "A software collaborative developing environment based on intelligent agents," pp. 1-4.
24. A. Birukou, E. Blanzieri, and P. Giorgini, "Implicit: a multi-agent recommendation system for web search," Autonomous Agents and Multi-Agent Systems, vol. 24, no. 1, pp. 141-174, 2012.
25. M. Romero, A. Vizcaíno, and M. Piattini, "Towards the definition of a multi-agent simulation environment for education and training in global requirements elicitation," pp. 48-53.
26. V. Iordan, A. Naaji, and A. Cicortas, "Deriving ontologies using multi-agent systems," WSEAS Transactions on Computers, vol. 7, no. 6, pp. 814-826, 2008.
27. K. Giri, "Role of ontology in Semantic web," DESIDOC Journal of Library & Information Technology, vol. 31, no. 2, 2011.
28. S. Paydar and M. Kahani, "An agent-based framework for automated testing of web-based systems," Journal of Software Engineering and Applications, 2011.
29. C.-S. Lee and M.-H. Wang, "Ontology-based computational intelligent multi-agent and its application to CMMI assessment," Applied Intelligence, vol. 30, no. 3, pp. 203-219, 2009.
30. I. Nunes, C. P. Lucena, U. Kulesza, and C. Nunes, "On the development of multi-agent systems product lines: A domain engineering process," in Agent-Oriented Software Engineering X, Lecture Notes in Computer Science, pp. 125-139, Springer Berlin Heidelberg, 2011.
31. H. Monte-Alto, A. Biaso, L. Teixeira, and E. Huzita, "Multi-agent applications in a context-aware global software development environment," in Distributed Computing and Artificial Intelligence, Advances in Intelligent and Soft Computing, pp. 265-272, Springer Berlin / Heidelberg, 2012.
32. T. Mahmood and F. Ricci, "Improving recommender systems with adaptive conversational strategies," in Proceedings of the 20th ACM Conference on Hypertext and Hypermedia, Torino, Italy, 2009, pp. 73-82.
33. D. H. Park, H. K. Kim, I. Y. Choi, and J. K. Kim, "A literature review and classification of recommender systems research," Expert Systems with Applications, 2012.
34. A. Y. Hamo and M. A. Aljawaherry, "Constructing a Collaborative Multi-Agents System Tool for Real Time System Requirements," International Journal of Computer Science, vol. 9, 2012.
35. "Semantic Annotation, Indexing, and Retrieval."
36. Q. Gao, J. Yan, and M. Liu, "A semantic approach to recommendation system based on user ontology and spreading activation model," pp. 488-492.
37. Y. Blanco-Fernández, M. López-Nores, J. J. Pazos-Arias, and J. García-Duque, "An improvement for semantics-based recommender systems grounded on attaching temporal information to ontologies and user profiles," Engineering Applications of Artificial Intelligence, vol. 24, no. 8, pp. 1385-1397, 2011.
38. Y. Blanco-Fernández, J. J. Pazos-Arias, A. Gil-Solla, M. Ramos-Cabrer, M. López-Nores, J. García-Duque, A. Fernández-Vilas, R. P. Díaz-Redondo, and J. Bermejo-Muñoz, "A flexible semantic inference methodology to reason about user preferences in knowledge-based recommender systems," Knowledge-Based Systems, vol. 21, no. 4, pp. 305-320, 2008.
39. I. Cantador, P. Castells, and A. Bellogín, "An enhanced semantic layer for hybrid recommender systems: Application to news recommendation," IGI Global, 2011, pp. 44-78.
40. B. Vesin, M. Ivanović, A. Klašnja-Milićević, and Z. Budimac, "Protus 2.0: Ontology-based semantic recommendation in programming tutoring system," Expert Systems with Applications, vol. 39, no. 15, pp. 12229-12246, 2012.
41. M. Robillard, R. Walker, and T. Zimmermann, "Recommendation systems for software engineering," IEEE Software, vol. 27, no. 4, pp. 80-86, 2010.
42. H. J. Happel and W. Maalej, "Potentials and challenges of recommendation systems for software development," pp. 11-15.
43. A. Begel, K. Yit Phang, and T. Zimmermann, "Codebook: discovering and exploiting relationships in software repositories," pp. 125-134.
44. A. Moraes, E. Silva, C. d. Trindade, Y. Barbosa, and S. Meira, "Recommending experts using communication history," in Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering, Cape Town, South Africa, 2010, pp. 41-45.
45. I. Steinmacher, I. S. Wiese, and M. A. Gerosa, "Recommending mentors to software project newcomers," pp. 63-67.
46. P. F. Xiang, A. T. T. Ying, P. Cheng, Y. B. Dang, K. Ehrlich, M. E. Helander, P. M. Matchen, A. Empere, P. L. Tarr, C. Williams, and S. X. Yang, "Ensemble: a recommendation tool for promoting communication in software teams," in Proceedings of the 2008 International Workshop on Recommendation Systems for Software Engineering, Atlanta, Georgia, 2008, pp. 1-1.
47. N. Sawadsky and G. C. Murphy, "Fishtail: from task context to source code examples," in Proceedings of the 1st Workshop on Developing Tools as Plug-ins, Waikiki, Honolulu, HI, USA, 2011, pp. 48-51.
48. J. Cordeiro, B. Antunes, and P. Gomes, "Context-based recommendation to support problem solving in software development," pp. 85-89.
49. B. Ashok, J. Joy, H. Liang, S. K. Rajamani, G. Srinivasa, and V. Vangala, "DebugAdvisor: a recommender system for debugging," in Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, Amsterdam, The Netherlands, 2009, pp. 373-382.
50. F. W. Jaekel, E. Parmiggiani, G. Tarsitano, G. Aceto, and G. Benguria, "FACIT-SME: A semantic recommendation system for enterprise knowledge interoperability," Enterprise Interoperability V, pp. 129-139, 2012.
51. A. Ankolekar, K. Sycara, J. Herbsleb, R. Kraut, and C. Welty, "Supporting online problem-solving communities with the semantic web," pp. 575-584.
52. A. A. Aseeri, "Lightweight community-driven approach to support ontology evolution," School of Information Systems, Curtin University, 2011.
53. N. Kasisopha and P. Wongthongtham, "Semantic wiki-based ontology evolution," pp. 493-495.
54. J. Venable, J. Pries-Heje, and R. Baskerville, "A comprehensive framework for evaluation in design science research," in Design Science Research in Information Systems: Advances in Theory and Practice, Lecture Notes in Computer Science, pp. 423-438, Springer Berlin Heidelberg, 2012.


International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 377-384
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Organizational Commitment among Purchasing and Supply Chain Personnel
Testing Research Instrument
Jarno Einolander and Hannu Vanharanta
Dept. of Industrial Management and Engineering
Tampere University of Technology
Pori, Finland

Abstract: The concept of organizational commitment has been studied extensively during the past decades, and remains
one of the most challenging and studied concepts in
organizational research. In purchasing as well as in supply chain
management, commitment plays a central role, because the
personnel work directly with outside organizations, and
consequently their performance can have a significant effect on
the total effectiveness of the organization. This paper discusses
organizational commitment and its importance in supply chain
management and in the purchasing function. In addition, the
principles of our newly developed Internet-based evaluation
instrument are highlighted. This application has been
preliminarily tested and the verification and validation processes
are currently under evaluation.
Keywords: organizational commitment; purchasing; supply chain management; evaluation; measurement

I. INTRODUCTION
Organizational success is one of the main goals in
leadership and management. It is assumed that through good
leadership and management, success can be achieved.
Success greatly depends on how well leaders can manage the
workforce and get them to work towards their shared goals
and objectives. Employees have been found to contribute to
organizational effectiveness and work efficiently towards its
goals if they identify with the organization's goals and values
and are willing to engage in activities that go beyond their
immediate role requirements. One of the main sources of
competitive advantage for today's organizations is the ability to retain talented employees. In other words, long-term sustained success and growth can be achieved by attracting and retaining the best talent [1]. Heinen and O'Neill [1] argue that the relationship with the employee's immediate manager
has the greatest effect on employee commitment, growth, and
development. Furthermore, organizational policies and human
resource management practices can have a significant effect
on employee commitment. Therefore, successful development
and execution of organizational policies, systems,
management, and leadership are crucial because otherwise,
they could hinder highly committed employees from
converting their commitment into performance outcomes [2].
All these characteristics have been repeatedly associated with
the concept of organizational commitment. Particularly, it is
assumed that employees are likely to exert great effort on
behalf of the organization if their commitment is based on
affective attachment to the organization.

The theoretical part of this paper deals with the issue of organizational commitment linked to the context of purchasing
and supply chain management. The performance of activities
in the purchasing and supply chain function can have a
significant effect on the total performance of the
organization [3]. Special focus is directed on purchasing
personnel, since, as boundary spanners, employees in the
purchasing function represent their firms strategic goals and
intentions as they play a significant role in initiating and
establishing relationships with outside organizations.
Consequently, they can have a significant effect on their
organizations reputation and image [4] and therefore the
effectiveness of their performance can have an enormous
effect on the companys bottom line. These are some of the
reasons why organizations would like their employees to be
highly committed and why we believe that organizations
should first evaluate the degree of commitment and the factors
affecting it to find the best course of action for trying to
manage it.
II. THEORETICAL FRAMEWORK

A. Organizational Commitment
Organizational commitment refers to the extent to
which an individual regards him- or herself as an 'organizational person'. In particular, organizational
commitment refers to the relative strength of an individual's
identification with and involvement in a particular
organization [5]. Reichers [6] defines commitment as a
process of identification with the goals of an organization's
multiple constituencies [6], such as organization, occupation,
job, supervisor, workgroup, or organizational goals.
While there are several definitions of organizational
commitment, a common three-dimensional theme is found in
most of these definitions: (1) committed employees believe
in and accept organizational goals and values, (2) they are
willing to devote considerable effort on behalf of their
organization, and (3) they are willing to remain with their
organization [7; 8]. Hence, organizational commitment can
be described as a psychological state that binds an
individual to an organization [9] and influences individuals
to act in ways that are consistent with the interests of the
organization [5; 10]. Meyer and Allen [11; 12] defined
organizational commitment as consisting of three
components: affective, continuance, and normative

commitment. They argue that these components reflect
distinct psychological states and that employees can
experience each of these states to varying degrees. First,
affective commitment refers to how strongly the employee
identifies with, is involved in, and enjoys membership in an
organization. This dimension is closely related to Porter,
Steers, Mowday and Boulians [5] definition. Second,
continuance commitment [11; 12] is the cost-related aspect of
commitment. This form is the function of perceived cost
based on the sacrifices and investments made by the
employee. This view draws upon Becker's [13] early
thoughts about the reasons behind commitment. The third
component of the Meyer and Allen model [11; 12],
normative commitment, sees commitment developing based
on internalized loyalty norms, i.e. the feeling of obligation to
remain with an organization.
Organizational commitment has been considered as a
mediator variable in several causal models of employee
behavior. Often it has been included as a mediator focusing
on predicting other employee reactions or behaviors [14]. As
a consequence, organizational commitment has been linked to
several personal variables, role states, and aspects of the work
environment, such as job characteristics or organizational
structures. From an antecedent point of view, it has been
related to employees' absenteeism, performance, turnover,
and other behaviors. In addition, several other variables have
been found to correlate with organizational commitment,
such as job involvement and job satisfaction behaviors [14].
Additionally, DeCotiis and Summers [15] found that
commitment had a direct positive influence on employees'
work motivation and objective measures of job
performance, as well as a direct negative influence on
their intention to leave and actual turnover [14]. In other
words, employees who identify with and are involved in their
organization are committed, and presumably want to maintain
membership in their organization and exert effort on its
behalf [7]. Hence, Mowday et al. [7] argued that the strongest and most predictable behavioral consequence of organizational commitment is low turnover. Many extensive studies support this prediction [cf. 14; 16]. Meyer and Allen [17]
emphasized the positive correlation between affective
commitment and work attendance. A committed workforce
will be more dedicated to their jobs and more motivated to
give their time and effort to accomplish the required tasks.
This can also lead to a more autonomous and self-controlling
workforce [18]. Therefore, it is important to identify more
clearly what drives employees to become committed to their
organization and to understand how to influence and maintain
commitment in the workforce.
1) External Organizational Commitment
McElroy, Morrow and Laczniak [19] extended the
concept of commitment beyond the boundaries of one's
employing organization to include commitment to another
organization. They argued that an employee could develop
commitment, in other words, a psychological attachment to a
specific organization external to one's own employer. This is
known as external organizational commitment (EOC), and
is predicted to develop among those boundary spanning

members of an organization (e.g. people working in


purchasing, selling, consulting) who are in a position to
develop long-term relationships with members of other
organizations. However, whether EOC develops or not is
influenced by the nature of the interaction between the
individual and the external organization.
EOC can have both positive and negative effects for the
employing organization, the external organization, and the
individual [19]. For example, for employing organizations that
depend heavily on customers and clients, high levels of EOC
are beneficial as long as this loyalty does not come at the
expense of the employing organization (e.g. if in-house duties
are neglected, external agreements begin to favor the external
organization). One of the positive effects of a high EOC is that
employees who develop commitment to an external client
organization are likely to exert more effort than required for
that organization, which may lead to new and better business
opportunities and relationships [19]. Taking EOC to the
extreme, valued employees in boundary spanning roles may
even terminate their employment and take a position with the
external organization, which will lead to undesirable turnover
and may lead to a potential loss or deterioration of business.
However, it must be noted that commitment to one's own
organization will have the greatest impact on the potential
negative consequences of EOC. Specifically, because
organizational commitment deals with the relationship
between the individual and the employing organization,
organizational commitment is likely to moderate the
connection between EOC and the negative consequences to
the employing organization and the individual [19].
2) Measuring Organizational Commitment
The definitions of organizational commitment have been quite diverse [20], and interest in commitment as an explanation of employee behavior and performance has led to the development of several attempts to measure it. However,
they continue to draw criticism for a lack of precision and for
concept redundancy [6]. Allen and Meyer [11] conclude that
relatively little attention has been paid to the development of
measures of commitment that conform closely to the
researcher's particular definitions of commitment.
Perhaps the most widely used commitment scale, the
Organizational Commitment Questionnaire (OCQ), was
developed by Porter, Steers, Mowday and Boulian [5]. This
scale was developed based on their definition of commitment
and measures the affective dimensions of commitment,
although it incorporates some 'continuance' and 'normative'
elements. The OCQ is used to measure the state in which an
individual identifies with a particular organization and its
goals and wishes to maintain membership in order to facilitate
those goals [7].
Meyer, Allen and Smith [21] argue that different
components of commitment are differently related to
variables such as antecedents and consequences. Therefore,
Meyer and Allen developed independent scales to measure
these three components of commitment, i.e. the Affective
Commitment Scale, the Continuance Commitment Scale, and
the Normative Commitment Scale. It is important for
management to understand the relationships between these
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 377-384
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
different components of commitment because of their
differences in outcome variables. As Meyer and Herscovitch
[9] argue, the different components of organizational
commitment will have different effects on other variables
such as attitudes and behaviors. However, various studies
have shown that normative commitment overlaps with the
other two types of commitment (e.g. [22; 23]). On the other
hand, constructs of affective and continuance commitment
have been well supported in the literature [14].
It is clear that there will always be employee turnover in
organizations, and total commitment is not required from
all employees. As Pierce and Dunham [24] contend,
organizational commitment is more important in complex
jobs that require adaptability and demand that employees
take the initiative. Clearly, undesirable turnover can be
extremely costly to organizations given the high costs
incurred (e.g. losing productive employees, recruiting,
selecting, and training costs, and the potential negative
impact on current customer relationships). In contrast,
turnover of undesirable employees can be healthy for
organizations [c.f. 14]. For example, highly committed poorer
performers may even decrease organizational effectiveness as
a result of lower absenteeism and turnover. Based on these
arguments, a better understanding concerning the factors
affecting commitment is needed in order to manage the
workforce effectively.
As a result of all these points, leaders need to have an
understanding of how employee commitment develops and is
maintained over time [25]. A deep understanding of the
processes related to the causes and consequences of
commitment will enable management to create better
interventions. Management can, for example, adopt
appropriate leadership behaviors in order to enhance the levels
of job satisfaction, and in turn improve the levels of employee
commitment to their organization and job performance,
consequently increasing productivity and profitability [25].
Mathieu & Zajac [14] conclude that organizational
commitment is a useful criterion for various organizational
interventions designed to improve employees' attitudes and
behaviors. At minimum, they suggest that it should be used to
influence employees' socialization processes, participation,
ownership in the company, and reactions to job enrichment.
However, before interventions can be effectively
planned and executed, measurement of organizational
commitment and other mediating factors should be conducted.
B. Human Aspects in Purchasing and Supply Chain
Management
The purchasing function has evolved into an integral part
of supply chain management [26]. It has increasingly assumed
a more pivotal strategic role in supply chain management
(SCM) [27]. It can be seen as a subset of SCM that deals
primarily with managing all aspects related to the inputs to an
organization (e.g. purchased goods, materials, and services). It
can contribute both in quantitative and qualitative ways to
improving the organization's bottom line [28]. Since
performance in purchasing and materials-related activities can
have a significant effect on the total performance of the
organization [3], increased emphasis has naturally been placed
on the function's efforts to maintain or rebuild organizational
competitiveness [29]. Consequently, buyers entrusted with the
expenditure of company funds are automatically placed in a
more vulnerable position than most employees [29].
Many studies point out the fact that people working in
supply chains have a major effect on the building of trust
between organizations, which is one of the key factors in
mutually beneficial business relationships. Trust is critical
because, without trust, suppliers are unlikely to make
long-term investments to support future business with the buyer
[30]. The establishment and maintenance of a trusting
relationship rely on the motivated individuals who regularly
interact across organizational boundaries [30; 31]. In addition,
the purchasing agent's communication skills, professional
knowledge, decision-making autonomy, and ability to
compromise have been found to influence the supplier's trust
in purchasers significantly [32]. A supplier is more likely to
develop trust in a purchasing agent, and consequently in the
buying organization, when they perceive purchasing agents as
being competent and able to keep their promises [30; 32].
Zhang, Viswanathan and Henke [32] concluded that, because
of their position as boundary spanners, purchasing agents have
an influence on the amount of trust outside organizations place
in the company the purchasers represent.
In addition, the boundary spanning capabilities of
purchasing agents are critical in establishing and maintaining
supply chain relationships (e.g. [33; 30; 31]). Smith, Plowman,
Duchon and Quinn [34] argue that these capabilities can be
influenced by the intrinsic dispositional traits of the individual
[32]. The relationships between individuals in boundary
spanning positions provide a means for the development of
wider communications between their employing
organizations, which will create familiarity and trust between
the parties.
In addition, Perrone, Zaheer and McEvily [30] argue that
the purchasing agent's tenure, i.e. the length of time an
individual has spent working within an organization, can
significantly increase the supplier's trust in the purchasing
agent. This finding is also important because of its direct link
to organizational commitment. It is based on the assumption
that individuals with long tenure have acquired informal
power and knowledge over time [35; 30], making their
knowledge more valuable and thus making them more
powerful [30].
In their study of supply chain integration, Fawcett and
Magnan [36] identified the main benefits of, barriers to, and
bridges toward successful supply chain management. These
factors are presented in Table 1.
TABLE I. TOP TEN BENEFITS, BARRIERS, AND BRIDGES TO SUPPLY CHAIN MANAGEMENT [36]

Benefits
Increased customer responsiveness
More consistent on-time delivery
Shorter order fulfillment lead times
Reduced inventory costs
Better asset utilization
Lower cost of purchased items
Higher product quality
Ability to handle unexpected events
Faster product innovation
Preferred & tailored relationships

Barriers
Inadequate information sharing
Poor/conflicting measurement
Inconsistent operating goals
Organizational culture & structure
Resistance to change/lack of trust
Poor alliance management practices
Lack of SC vision/understanding
Lack of managerial commitment
Constrained resources
No employee passion/empowerment

Bridges
Senior & functional managerial support
Open & honest information sharing
Accurate & comprehensive measures
Trust-based, synergistic alliances
Supply chain alignment & rationalization
Cross-experienced managers
Process documentation & ownership
Supply chain education and training
Use of supply chain advisory councils
Effective use of pilot projects

As shown in Table 1, many of the barriers and bridges
concern managerial practices, organizational culture, trust,
passion, empowerment, managerial support, and information
sharing, many of which have been repeatedly associated with
motivation and job satisfaction and, in the long run, with
affective organizational commitment.
Therefore, it can be reasoned that these commitment-related
aspects of working life are among the key factors to
enhance when trying to improve the overall performance of
the supply chain function and consequently the performance
of the organization. Enabling organizations to improve these
factors requires commitment from senior management.
Saunders [3] argues that it is their responsibility to increase
passion in the workplace by establishing adequate systems that
help to nurture a work environment where participation is
highly valued, where people are empowered to experiment,
take risks, and solve problems, and where constant, life-long
learning and knowledge sharing is carried out [3].
In their study of purchasing executives, Kelley and Dorsch
[37] argue that purchasing executives' feelings of commitment
to their organization tend to reflect the extent to
which they identify themselves as a corporate person. As
individuals identify more strongly with their organization,
their interpretations of and reactions to events tend to be
influenced by their definition of who they are, i.e. a committed
employee [37].
As can be seen from the above, the commitment of those
employed in the supply chain is of major importance to their
organization. Successful operations and meeting customer and
financial goals are in large part determined by the abilities and
motivation of the employees. People working in purchasing
and the supply chain must have the right motivation and
abilities for strategic purchasing and supply chain
management to be successful. Most importantly, they must be
committed to the objectives of the organization and dedicated
to the long-term best interests of their employer. This
emphasizes ethical principles such as equity, trust,
responsibility, and commitment that are required from
employees in purchasing and supply management.
III. BUILDING AN EVALUATION INSTRUMENT

Our aim was to develop a tool that could be rapidly
administered to a large number of employees to evaluate the
various components of organizational commitment and their
primary correlate constructs, such as job satisfaction and
perceived organizational support. Our evaluation instrument
was designed to incorporate these various constructs in order
to unearth the primary factors that affect commitment and to
be able to pinpoint their current state in a given organizational
context. The evaluation of different components is important,
because the various components of organizational
commitment will lead to different effects on other variables
such as attitudes and behaviors [9].
A. Evolute System
Evolute is an online system that supports specific-purpose
fuzzy logic applications [38; 39]. Fuzzy logic is a conceptual
system of reasoning, deduction, and computation that makes it
possible to reason precisely with imperfect information.
Imperfect information is information which in one or more
respects is imprecise, uncertain, incomplete, unreliable, vague,
or partially true [40].
The Evolute system allows researchers to develop a
specific domain ontology and present it online to the target
group through semantic entities, such as statements [39]. Each
ontology and its propositions can be fine-tuned over time by
adjusting the fuzzy set design and fuzzy rule design; this is
the ontology lifecycle in ontology engineering [41].
Furthermore, the content of the ontology can be modified, i.e.
variables can be added and removed as more about the domain
is learned, thus making the ontology correspond better to the
phenomenon in question. Evolute makes the examination of
results possible both visually and numerically. In general, a
fuzzy logic application resembles an expert's task of
evaluating and reasoning based on linguistic information.
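A fuzzy-reasoning step of this kind can be sketched in a few lines. This is only an illustrative sketch, not the actual Evolute implementation: the triangular membership functions, the three linguistic labels, and the centroid-style aggregation are all assumptions made for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(response):
    """Map a response on [0, 1] to degrees of membership in three linguistic labels."""
    return {
        "low": tri(response, -0.5, 0.0, 0.5),
        "medium": tri(response, 0.0, 0.5, 1.0),
        "high": tri(response, 0.5, 1.0, 1.5),
    }

def category_score(responses):
    """Aggregate several statement responses into one crisp category score
    by centroid-style defuzzification of the summed label memberships."""
    centers = {"low": 0.0, "medium": 0.5, "high": 1.0}
    num = den = 0.0
    for r in responses:
        for label, mu in fuzzify(r).items():
            num += mu * centers[label]
            den += mu
    return num / den if den else 0.0

print(round(category_score([0.2, 0.4, 0.9]), 3))  # → 0.5
```

In Evolute the fuzzy set and rule designs are tuned per ontology over its lifecycle; the sketch only shows the general shape of reasoning from linguistic evaluations to a crisp score.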
B. Evaluation Instrument
From the literature we identified a broad range of
constructs related to organizational commitment as its
antecedent, determinant, or correlate factors [e.g. 2; 11; 14;
17; 42; 43]. We categorized these factors under relevant
constructs, such as work motivation, job satisfaction,
person-organization fit, perceptions of organizational support, and
turnover intentions. All the identified categories were grouped
under the three main dimensions of organizational commitment:
affective, continuance, and normative. As a result, 55
variables were identified in 15 categories under these three
main dimensions. These 55 variables were described in the
initial version of our instrument with an average of five
linguistic indicative statements (indicators) each.
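The layered structure just described, with three dimensions, categories under them, variables under those, and indicative statements at the bottom, can be sketched as a simple data model. The names and the single example statement below are illustrative placeholders, not the instrument's actual content.

```python
from dataclasses import dataclass

@dataclass
class Variable:
    name: str
    statements: list[str]  # linguistic indicative statements (indicators)

@dataclass
class Category:
    name: str
    variables: list[Variable]

@dataclass
class Dimension:
    name: str  # affective, continuance, or normative
    categories: list[Category]

ontology = [
    Dimension("affective", [
        Category("job satisfaction", [
            Variable("satisfaction with job security",
                     ["I feel secure about keeping my current job."]),
        ]),
    ]),
    # ... continuance and normative dimensions follow the same pattern
]

# Counting statements across the whole hierarchy:
n_statements = sum(len(v.statements)
                   for d in ontology
                   for c in d.categories
                   for v in c.variables)
print(n_statements)  # → 1
```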
The statements used in our research application were
developed based on various studies and models. For example,
organizational commitment statements were adapted from the
scales created by Meyer and Allen [12], and Porter, Steers,
Mowday and Boulian [5]. Job satisfaction and the motivating
potential of job measures were developed based on
measurement tools devised by Hackman and Oldham [44] and
Weiss, Dawis, England and Lofquist [45]. In addition,
Thomas's [46] intrinsic motivation theory was used as a
reference. Statements describing the components of
organizational justice were adapted from the Niehoff and
Moorman [47] scale. Role ambiguity and conflict were
measured with items based on the Rizzo and Lirtzman [48]
scale. Items relating to the psychological contract were
adapted from studies by Raja, Johns and Ntalianis [49], and
Rousseau [50; 51]. In addition, role overload was measured
with statements based on Pareek's [52] Organizational Role
Stress Scale.
In general, commitment studies have utilized Likert-type
scales. In this study, we propose to capture subjects' responses
through a continuous graphic rating scale. The verbal limit
values of the continuous scale can be set to be
statement-specific. By using the continuous scale, the aim is to overcome
some of the disadvantages that the conventionally used
Likert-type measures may possess [c.f. 53]. Russell & Bobko
[53] speculated that the Likert scale requires subjects to
somehow compress or otherwise reduce their latent response.
They suggest that information loss due to the coarseness of the
scale can cause false increases or decreases in moderated
regression effect sizes, and propose that it could result in an
unknown systematic error, which can have an enormous effect
on the ability to detect true interaction effects.
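The information-loss argument can be made concrete with a toy example: distinct latent responses that fall within one step of a 5-point scale collapse onto the same value, while a continuous scale preserves them. The rounding scheme below is an assumption for illustration only.

```python
def to_likert(x, points=5):
    """Compress a continuous response on [0, 1] onto a k-point scale."""
    step = 1.0 / (points - 1)
    return round(x / step) * step

latent = [0.42, 0.48, 0.55, 0.58]        # four distinct latent responses
likert = [to_likert(x) for x in latent]  # all collapse onto the same step
print(likert)                            # → [0.5, 0.5, 0.5, 0.5]
print(len(set(latent)), len(set(likert)))  # → 4 1
```

Whatever within-step variation existed in the latent responses is discarded by the coarse scale, which is the systematic error Russell & Bobko [53] warn about.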
On a practical level, respondents are asked to evaluate
their current reality and vision for the future as they perceive it
regarding statements describing the identified constructs. This
evaluation results in the creation of a proactive vision, i.e. the
gap between the current reality and future vision. The
reasoning from the indicative statement evaluation to the
visualized proactive vision is made with fuzzy logic; the
statements are semantic entities and the ontology is the
information resident in a knowledge base [c.f. 40; 54]. The
aim of our research application is to help organizations to
identify the prevailing nature of their employees' commitment
by examining its related constructs and aspects of working life
collectively at the team, workgroup, or organizational level.
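The gap computation itself is straightforward; a sketch of how category-level proactive visions could be derived and sorted for display (the category names and scores are invented for illustration, not data from the study):

```python
def proactive_vision(current, future):
    """Return categories sorted by the gap between the group's future
    vision and its perceived current state, largest gap first."""
    gaps = {cat: future[cat] - current[cat] for cat in current}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

current = {"job security": 0.45, "information sharing": 0.55,
           "opportunities for advancement": 0.50}
future = {"job security": 0.85, "information sharing": 0.80,
          "opportunities for advancement": 0.70}

for category, gap in proactive_vision(current, future):
    print(f"{category}: {gap:+.2f}")
```

Sorting by the gap reproduces the ordering used in the category-level bar charts, where the largest tension between the current and envisioned future state is listed first.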
DEMOGRAPHIC CHARACTERISTICS
Several demographic characteristics were also included in
the study as descriptive statistical variables. We included age,
gender, highest education level attained, job type, experience
in current job, and overall tenure in the current organization.
Nationality and total tenure in working life were also included.
As Mathieu and Zajac [14] state, most researchers have
included personal variables in commitment studies as
descriptive statistics rather than as explanatory variables.
C. Preliminary Research Instrument Testing
The first testing of the instrument was performed in the
fall of 2012 with 18 Finnish industrial management M.Sc.
program students with various engineering backgrounds. All
the subjects were asked to answer the statements in relation to
their own studies at the industrial management school. After
the first test run, adjustments were made to some of the items.
The second test run was made with 15 other M.Sc. students at
the same institution in early 2013. After the second test runs,
additional adjustments were made to the overall construction
of the ontology behind the application and to its statements.
The average age of participants in both studies was 31.5 years
and 70 percent of them were male.
The third testing of the instrument was conducted at a Polish
technical university in the spring of 2013 with 18 international
students from various countries, with an average age of 24.4
years; 65 percent of them were male. After the third test
run, some irregularities were detected and some major
adjustments had to be made to the overall construct of the
ontology and its statements. Based on these three test runs, the
application evolved into a new version with 59 main variables
in 18 categories. The development process led to the elimination
of 26 statements based on their similarity with others. In
addition, the wording of several statements was modified and
made clearer.
Because the development of this application is still
ongoing, only preliminary results from the testing of the
instrument are presented. Figs. 1 and 2 illustrate examples of
the category level results of the second instrument testing
(n=15).

Fig. 1. Example of Category Level Results.

Fig. 1 represents an example of the category level results
of the preliminary tests. The blue bars represent the group's
collective perception of the current reality (perceived current
state), the red bars represent their vision for the future, and
the difference is their collective proactive vision. The results
have been sorted based on the highest proactive vision, i.e. the
greatest collective feeling of tension between the current and
envisioned future state.

Fig. 2. Average and Standard Deviation of Category Level Results.

Fig. 2 presents the averages and standard deviations of the
same category level results as in Fig. 1. The blue bars
represent the current state results and their standard deviation
in the research group. Likewise, the red bars represent the
range of category level results and their standard deviation in
the future target state of the research group. The lines
represent the averages of these results.
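The category-level averages and standard deviations of Fig. 2 follow directly from the individual scores; a minimal sketch using the standard library (the scores below are invented sample values, not data from the study):

```python
from statistics import mean, stdev

# One category-level score per respondent, current state vs. future target state
current_scores = [0.40, 0.50, 0.45, 0.55]
future_scores = [0.75, 0.85, 0.80, 0.90]

print(f"current: mean={mean(current_scores):.3f}, sd={stdev(current_scores):.3f}")
print(f"future:  mean={mean(future_scores):.3f}, sd={stdev(future_scores):.3f}")
```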
In order to delve into what makes up the category level results
and to pinpoint the most appropriate targets for possible
development activities, we must look at the variable level
results. Fig. 3 shows the ten highest proactive vision results
from the third instrument testing study (n=18).

Fig. 3. Example of Variable Level Results of the third study.

Fig. 3 portrays the ten variables with the highest proactive
vision in the third study with a multicultural research group.
For example, it can be seen that the highest tension between
the current state and the target state is in satisfaction with and
need for job security, as well as in opportunities for
advancement and management commitment to staff
development. Furthermore, the fair distribution of pay and
other work rewards is seen to be highly important for
betterment. In addition, various aspects of job satisfaction and
motivational characteristics of the job are indicated to be
important, and there is a desire for them to be improved.

Fig. 4 shows the ten highest proactive vision variable
results in the Finnish sample in study 2. As can be seen, there
are similarities between Fig. 3 and Fig. 4.

Fig. 4. Example of Variable Level Results of the second study.

This sample shows the highest tension in overall
interaction and information sharing. Similarly, as in the
multicultural group, the need for job security is evident. In
addition, there is a gap between how much the work provides
opportunities for professional growth and achievement and
what is desired. Also, it can be seen that some of the
motivational characteristics of the job, e.g. task identity
(identifiable, visible outcomes from work), are regarded as
among the variables most needing improvement. This sample
also highlights the need for recognition from management for
work and performance.

A comparison of these two studies shows that there are
differences but also similarities. There are major similarities in
job security needs and desired fairness in the distribution of
work rewards. The biggest difference is in the information
sharing categories and the feeling of organizational support,
i.e. valued contributions. The Finnish research group shows
the need for recognition from management and indicates that
the information sharing culture should be improved. These
categories are not thought to be as important in the Polish
test group. On the other hand, the Polish test group
emphasized that it was more important to improve various
features of job satisfaction.

The examples above describe the ten variables with the
highest proactive vision in each group. Similar differences can
be observed in the rest of the categories. There are various
reasons for these differences; for example, they may stem
from cultural differences or from the fact that the members of
these groups have major differences in their overall length of
working life. The reasons for the similarities or differences are
not within the scope of this study and are not analyzed any
further.
Based on these preliminary results, it seems that
understanding the nature of proactive vision is an important
element of the process of developing these occupational
factors. We believe that the categories that have the highest
proactive vision are the targets that organizations should focus
their development activities on in order to increase overall
commitment in a specific organizational setting. The
development of this instrument is ongoing and more empirical
results are needed to improve its internal consistency, as well
as its usability. In addition, these preliminary tests have
shown some areas of the application that still need further
development. However, these tests have shown that this
application allows us to illustrate some trends that could be
used to assist organizations in developing their HRM
practices.
IV. CONCLUSIONS AND FUTURE RESEARCH
There are a wide variety of reasons why organizational
commitment is important and why organizations want their
key employees to be highly committed. Studies have
demonstrated that higher commitment is related to factors
such as greater levels of satisfaction, motivation, and
pro-social behavior, while lower levels are related to a higher
intent to quit, a higher turnover rate, and tardiness. In addition,
high levels of organizational commitment have been
associated with greater work attendance, extra-role behaviors,
and reduced levels of absenteeism.
The theoretical part of this paper dealt with the issue of
organizational commitment among employees in the supply
chain function, especially in purchasing. The focus was
directed to employees in purchasing because, in their position,
their work can have an enormous effect on company
performance. The important personal qualities of employees in
purchasing include the same qualities that are required from
any employee in a responsible position, such as honesty,
integrity, commitment, ambition, responsibility, and
willingness to grow. However, many of these qualities have a
special meaning for personnel in purchasing because of the
higher trust their company has placed in them [29]. In
addition, personnel in purchasing are in a position to develop
external commitment towards their client organization, which
emphasizes the importance of commitment to their own
employer.
We believe that organizational performance can be
achieved with employees who are motivated and committed to
their work and to their organization. Therefore, it is important
for management to be able to attain knowledge of the degree
of their employees' commitment to their organization and the
constructs affecting its development. In particular, retaining
key employees and their commitment can be critical to the
long-term success of the organization.
In this paper we also highlighted the basic principles of our
newly developed application for commitment evaluation
purposes. We developed this tool to make the factors affecting
commitment clearer for management. Also, the visual nature
of this instrument will enable management to gain a better
understanding of this concept and quickly see the degree of
the factors affecting it for a specific group. This
information will also help management to compare the results of
various evaluated groups and plan specific actions. We believe
that, in addition to scale validity and reliability, the
measurement method and the results it provides must be useful
with regard to the organization's goals and objectives, and it
must be possible to administer it cost-effectively and rapidly.
The first test runs of our evaluation instrument have
shown that our application allows the formation of a systemic
view of an organization's commitment-related environment.
We believe that our application can point out the areas where
the organization should direct the focus of its HRM practices
in order to enhance these commitment-related factors. With
the combined collective information gathered, the organization
will be able to provide interventions to improve its employees'
working life and consequently the effectiveness of the whole
organization. In addition, it will also be possible to monitor
systematically how various commitment-related aspects will
develop in the organization. However, more development is
needed before we can implement our application in a real
business context.
REFERENCES
[1] J. S. Heinen and C. O'Neill, "Managing Talent to Maximize Performance," Employment Relations Today, Vol. 31, Iss. 2, 2004, pp. 67-83.
[2] S. Swailes, "Organizational commitment: a critique of the construct and measures," International Journal of Management Reviews, Vol. 4, Iss. 2, 2002, pp. 155-178.
[3] M. Saunders, Strategic Purchasing and Supply Chain Management, London: Pitman Publishing, 1997, 354p.
[4] L. L. Stanley and J. D. Wisner, "Service quality along the supply chain: implications for purchasing," Journal of Operations Management, Vol. 19, Iss. 3, 2001, pp. 287-306.
[5] L. W. Porter, R. M. Steers, R. T. Mowday and P. V. Boulian, "Organizational commitment, job satisfaction and turnover among psychiatric technicians," Journal of Applied Psychology, Vol. 59, Iss. 5, 1974, pp. 603-609.
[6] A. E. Reichers, "A Review and Reconceptualization of Organizational Commitment," Academy of Management Review, Vol. 10, Iss. 3, 1985, pp. 465-476.
[7] R. T. Mowday, R. M. Steers and L. W. Porter, "The measurement of organizational commitment," Journal of Vocational Behavior, Vol. 14, Iss. 2, 1979, pp. 224-247.
[8] R. Steers, "Antecedents and outcomes of organizational commitment," Administrative Science Quarterly, Vol. 22, Iss. 1, 1977, pp. 46-57.
[9] J. P. Meyer and L. Herscovitch, "Commitment in the workplace: Toward a general model," Human Resource Management Review, Vol. 11, Iss. 3, 2001, pp. 299-326.
[10] R. Mowday and T. McDade, "Linking behavioral and attitudinal commitment: a longitudinal analysis of job choice and job attitudes," Academy of Management Proceedings, 1979, pp. 84-88.
[11] N. J. Allen and J. P. Meyer, "The measurement and antecedents of affective, continuance and normative commitment to the organization," Journal of Occupational Psychology, Vol. 63, Iss. 1, 1990, pp. 1-18.
[12] J. P. Meyer and N. J. Allen, "A three-component conceptualization of organizational commitment," Human Resource Management Review, Vol. 1, Iss. 1, 1991, pp. 61-89.
[13] H. S. Becker, "Notes on the Concept of Commitment," The American Journal of Sociology, Vol. 66, No. 1, 1960, pp. 32-40.
[14] J. E. Mathieu and D. M. Zajac, "A review and meta-analysis of the antecedents, correlates, and consequences of organizational commitment," Psychological Bulletin, Vol. 108, Iss. 2, 1990, pp. 171-194.
[15] T. A. DeCotiis and T. P. Summers, "A path-analysis of a model of the antecedents and consequences of organizational commitment," Human Relations, Vol. 40, Iss. 7, 1987, pp. 445-470.
[16] A. Cohen, "The relationship between commitment forms and work outcomes: A comparison of three models," Human Relations, Vol. 53, Iss. 3, 2000, pp. 387-417.
[17] J. Meyer and N. Allen, Commitment in the Workplace: Theory, Research, and Application, Sage Publications, Inc., 1997, 160p.
[18] R. Nehmeh, "What is organizational commitment, why should managers want it in their workforce and is there any cost effective way to secure it?" SMC Working Paper 05/2009, 10p.
[19] J. McElroy, P. Morrow and R. Laczniak, "External organizational commitment," Human Resource Management Review, Vol. 11, Iss. 3, 2001, pp. 237-256.
[20] K. Ferris and N. Aranya, "A comparison of two organizational commitment scales," Personnel Psychology, Vol. 36, Iss. 1, 1983, pp. 87-98.
[21] J. P. Meyer, N. J. Allen and C. A. Smith, "Commitment to Organizations and Occupations: Extension and Test of a Three-Component Conceptualization," Journal of Applied Psychology, Vol. 78, Iss. 4, 1993, pp. 538-551.
[22] H. L. Angle and M. B. Lawson, "Organizational commitment and employees' performance ratings: Both type of commitment and type of performance count," Psychological Reports, Vol. 75, 1994, pp. 1539-1551.
[23] R. B. Brown, "Organizational commitment: Clarifying the concept and simplifying the existing construct typology," Journal of Vocational Behavior, Vol. 49, Iss. 3, 1996, pp. 230-251.
[24] J. L. Pierce and R. B. Dunham, "Organizational Commitment: Pre-Employment Propensity and Initial Work Experience," Journal of Management, Vol. 13, Iss. 1, 1987, pp. 163-178.
[25] D. A. Yousef, "Organizational commitment: a mediator of the relationships of leadership behavior with job satisfaction and performance in a non-western country," Journal of Managerial Psychology, Vol. 15, Iss. 1, 2000, pp. 6-25.
[26] J. D. Wisner, K. Tan and G. Keong Leong, Principles of Supply Chain Management: A Balanced Approach, 2nd ed., Cengage Learning, 2008, 562p.
[27] I. J. Chen, A. Paulraj and A. A. Lado, "Strategic purchasing, supply management and firm performance," Journal of Operations Management, Vol. 22, Iss. 5, 2004, pp. 505-523.
[28] A. Van Weele, Purchasing and Supply Chain Management: Analysis, Strategy, Planning and Practice, 5th ed., Cengage Learning EMEA, 2009, 418p.
[29] H. E. Fearon, D. W. Dobler and K. H. Killen, The Purchasing Handbook, 5th ed., National Association of Purchasing Management, 1993, 907p.
[30] V. Perrone, A. Zaheer and B. McEvily, "Free to Be Trusted? Organizational Constraints on Trust in Boundary Spanners," Organization Science, Vol. 14, Iss. 4, 2003, pp. 422-439.
[31] R. D. Ireland and J. W. Webb, "A multi-theoretic perspective on trust and power in strategic supply chains," Journal of Operations Management, Vol. 25, Iss. 2, 2007, pp. 482-497.
[32] C. Zhang, S. Viswanathan and J. W. Henke, "The boundary spanning capabilities of purchasing agents in buyer-supplier trust development," Journal of Operations Management, Vol. 29, Iss. 4, 2011, pp. 318-328.
[33] H. Aldrich and D. Herker, "Boundary spanning roles and organization structure," Academy of Management Review, Vol. 2, Iss. 2, 1977, pp. 217-230.

383

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 377-384
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
[34] A.D. Smith, D.A. Plowman, D. Duchon, and A.M. Quinn, A qualitative
study of high-reputation plant managers: political skill and successful
outcomes. J. Oper. Manag. Vol. 27, Iss. 6, 2009, pp. 428443.
[35] J. Pfeffer, Managing with Power. Politics and Influence in
Organizations. Harvard Business School Press, Boston, MA. 1992.
[36] S. E. Fawcett and G. M Magnan. Achieving World-Class Supply Chain
Alignment: Benefits, Barriers, and Bridges, Center for Advanced
Purchasing Studies, 2001, 158p.
[37] S. W. Kelley and M. J. Dorsch, Ethical Climate, Organizational
Commitment, and Indebtedness Among Purchasing Executives Journal of
Personal Selling and Sales Management, Vol. 11, Iss. 4, 1991. pp. 5566.
[38] J. Kantola, Ingenious Management, Doctoral thesis, Pori, Finland:
Tampere University of Technology, 2005.
[39] J. Kantola, H, Vanharanta, and W. Karwowski,The evolute system: a
co-evolutionary human resource
development methodology, in
International Encyclopedia of Ergonomics and Human Factors, 2nd ed.,
2006.
[40] L. Zadeh, Toward extended fuzzy logic - a first step, Fuzzy Sets and
Systems, 160, Iss. 21, 2009, pp. 31753181.
[41] A, Gmez-Prez, Ontology evaluation, in: Handbook on Ontologies,
S, Staab and R. Studer, Eds. Berlin: Springer, 2004, pp. 251274.
[42] R. T. Mowday, L. W. Porter and R. M Steers, Employee-organization
linkages: The psychology of commitment, absenteeism, and turnover.
Academic Press, New York, 1982, 253p.
[43]42 A. Cohen, Commitment before and after: An evaluation
and reconceptualization of organizational commitment, Human Resource
Management Review, Vol. 17, Iss. 3, 2007, pp. 336354.
[44] J. R. Hackman and G. R. Oldham,. Work redesign. Reading, MA:
Addison-Wesley. 1980, 330p.
[45] D. J. Weiss, R. V. Dawis, G. W. England and L. H. Lofquist., Manual
for the Minnesota Satisfaction Questionnaire, Minnesota Studies in
Vocational Rehabilitation :xxii, University of Minnesota, 1967.
[46] K. W. Thomas, 2000. Intrinsic Motivation at Work: building energy and
commitment. 1st ed. Barrett-Koehler Publishers, Inc. San Francisco, CA.
143p.
[47] B. P. Niehoff and R. H. Moorman, Justice as a Mediator of the
Relationship between Methods of Monitoring and Organizational
Citizenship Behavior, The Academy of Management Journal, Vol. 36, Iss.
3, 1993, pp. 527557.
[48] J. R. Rizzo, R. J. House and S. I. Lirtzman, Role conflict and ambiguity
in complex organizations. Administrative Science Quarterly, Vol. 15, Iss.
2, 1970, pp. 150163.
[49] U, Raja, G. Johns and Ntalianis, F. 2004. The Impact of Personality on
Psychological Contracts, The Academy of Management Journal, Vol. 47, No.
3 (Jun., 2004), pp. 350-367.
[50] D. M. Rousseau, 2000. Psychological Contract Inventory. Technical
Report. Pittsburgh, PA: Carnegie Mellon University.
[51] D. M. Rousseau, 1999. New hire perceptions of their own and their
employer's obligations: A study of psychological contracts, Journal of
Organizational Behavior, Vol 11, Issue 5, pages 389400.
[52] U. Pareek, (1982). Organizational role stress scales (manual, scale,
answer sheet). Ahmedabad: Navin Publications.
[53] C. J. Russell and P. Bobko. Moderated Regression Analysis and Likert
Scales: Too Coarse for Comfort. Journal of Applied Psychology,
Jun92, Vol. 77, Iss. 3, 1992, pp. 336342.
[54] L. Zadeh, Outline of a new approach to the analysis of complex systems
and decision processes, IEEE Transactions on systems, Man, and
Cybernetics, Vol. 1, Iss. 1, 1973, pp.2844.

384

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 385-389
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

A Novel Rule-based Fingerprint Classification Approach


Faezeh Mirzaei, Mohsen Biglari*, Hossein Ebrahimpour-komleh
fmirzaei@grad.kashanu.ac.ir, Department of Electrical and Computer Engineering, Kashan University,
Kashan, Iran.
biglari@shahroodut.ac.ir, Department of Electrical and Computer Engineering, Shahrood University,
Shahrood, Iran.
ebrahimpour@kashanu.ac.ir, Department of Electrical and Computer Engineering, Kashan University,
Kashan, Iran.

ABSTRACT
Fingerprint classification is an important phase in increasing the speed of a fingerprint verification system, as it narrows down the search of the fingerprint database. Fingerprint verification is still a challenging problem due to poor-quality images and the need for faster response. The classification gets even harder when just one core has been detected in the input image. This paper proposes a new classification approach which also covers images with one core. The algorithm extracts singular points (core and deltas) from the input image and performs classification based on the number, locations and surrounding area of the detected singular points. The classifier is rule-based, and the rules are generated independently of a given data set. Moreover, shortcomings of a related paper are reported in detail. The experimental results and comparisons on the FVC2002 database show the effectiveness and efficiency of the proposed method.

KEYWORDS
Fingerprint classification, rule-based classification, singular point, core, delta

1 INTRODUCTION
In recent decades, fingerprint recognition has received great attention because of its unique properties, such as easy acquisition, universality, permanence and circumvention. The use of fingerprints for criminal verification, forensics, access control, credit cards, driver license registration and passport authentication is becoming very popular.
In huge databases, fingerprints are first divided into classes to reduce the search time, and only then does the matching phase take place: the input fingerprint needs to be matched only against images of the same class. Many fingerprint classification methods rely on ridge flow or global features.
In fingerprint classification algorithms, extracting the number and precise locations of singular points (SPs), namely core and delta points, is very important. Henry defined the core point as "the north most point of the innermost ridge line". A delta point is the center of a triangular region where three different direction flows meet [1, 2]. Figure 1 shows the five most common classes of the Galton-Henry classification scheme (arch, tented arch, left loop, right loop, and whorl).

Figure 1. The five most common classes of the Galton-Henry classification scheme

A fingerprint can be simply classified according to the number and positions of the singularities; this is the approach commonly used by human experts for manual classification, so several authors have proposed to adopt the same technique for
automatic classification [2-10]. Table 1 shows some of these rules. The key idea of the classification method proposed in [3] is to divide the fingerprint into small sub-images using the singular point location, and then to create a distinguishing pattern for each class using a frequency-domain representation of each sub-image. In [4], an algorithm based on the interactive validation of singular points and a constrained nonlinear orientation model is proposed; the final features used for classification comprise the coefficients of the orientation model and the singularity information, resulting in a very compact feature vector that is fed to an SVM classifier. Some singularity extraction methods are presented in [6, 8-9]. Qinzhi Zhang et al. [10] used pseudo ridge tracing and analysis of the traced curve, so their method does not rely on the extraction of the exact number and positions of the true singular points. Most approaches based on singular points combine several features or classification methods, which increases time and complexity.
On the other hand, the methods that rely on the exact number of singularities and simple rules such as those below do not have effective rules for accurate class separation. The quality of fingerprint images depends on many factors, such as breaks, scars, and skin that is too oily or too dry. Besides, in many cases we have a partial image, usually with the delta point outside the print. All of these factors make it extremely hard to classify such images according to the simple rules.

Table 1. Fingerprint classification rules using singular points [3, 7, 11]

Fingerprint class                      Singular points
Arch                                   No singular points
Tented arch, Left loop, Right loop     One loop and one delta
Whorl                                  Two loops (or a whorl) and two deltas

In this paper, a rule-based classification approach is proposed that classifies fingerprints into four classes, namely whorl (W), right loop (R), left loop (L) and arch (A). This approach uses the number and location of singular points for classification and is robust to translation, rotation and scale variations. Our approach also works on images with just one core; the rule used for this kind of image is presented and compared in detail with other methods.
The remaining sections of the paper are organized as follows. Section 2 presents the proposed method. Section 3 gives the obtained experimental results. Finally, in section 4 some conclusions are drawn.

2 THE PROPOSED METHOD
The rules in Table 1 are not exhaustive. For example, in many low-quality fingerprint images, such as those of the FVC databases, the delta is outside the borders due to low pressure at the surface or small sensors. In these cases, the delta is not in the ROI (region of interest, defined by the segmentation step). We must therefore use more robust approaches. Moreover, a certain number of images have no core or delta in their ROI at all. We tried to use the general rules set out in section 1, together with significant extensions, in the proposed system. Our rules are shown in Figure 2.

Figure 2. Proposed rules for fingerprint classification based on singular point information

In our system, arch and tented arch are considered as one class, named arch. The extended rules are as follows. If no singular point is recognized, the class is unknown; Figure 3 shows a sample of the unknown class in the FVC2002 database. If a delta
and no core is extracted, the location of the delta in the ROI determines either the left loop or the right loop class: if the x coordinate of the delta is less than half of the ROI width, the fingerprint class is left loop, and right loop otherwise.
If the numbers of cores and deltas are both equal to one, the decision will be one of the left loop, right loop and arch classes. To do this, we calculate the difference between the x coordinates of the core and the delta, |Xc - Xd|. If the result is less than a certain threshold, the class is arch; otherwise left loop or right loop is selected. In the case of Xc - Xd > 0, the core lies on the right-hand side of the delta and the class is recognized as right loop, and vice versa for left loop. If one core and two deltas, or just two deltas, are extracted, the class is whorl.
In the case that we have no core and two deltas, the class is whorl. If one core is extracted, the number of deltas determines the class type. If the number of deltas is zero, we must decide between the right and left loop classes. As mentioned earlier, the general rules do not support this case. Several methods have been proposed to solve this problem; for example [7], one of the most recent, divides the ROI into four regions and decides according to the location of the core in them. This rule is shown in Figure 4. But it is obvious that in the real world, cores sometimes lie in unexpected places; rotation and translation often create this situation. The authors of [7] used the FVC2002 DB1-A database, but many examples violating their proposed rule are found in this very database. Some of these images are shown in Figure 5.
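The extended rules above can be condensed into a short decision procedure. The following sketch is illustrative only: the function names, coordinate convention and threshold value are our assumptions, not the paper's, and the one-core fallback stands in for the MeanDir feature described in the text.

```python
import math

# Assumed pixel threshold separating arch from the loop classes.
THETA = 20

def mean_dir_points_right(mean_dir):
    # MeanDir lies in the lower half of the unit circle; an angle in the
    # lower-right half indicates a right loop (convention assumed here).
    return math.cos(mean_dir) > 0

def classify_fingerprint(cores, deltas, roi_width, mean_dir=None):
    """cores, deltas: lists of (x, y) singular-point coordinates."""
    if not cores and not deltas:
        return "unknown"
    if not cores and len(deltas) == 1:            # delta only: use its position
        return "left loop" if deltas[0][0] < roi_width / 2 else "right loop"
    if len(deltas) == 2:                          # two deltas (any cores): whorl
        return "whorl"
    if len(cores) == 1 and len(deltas) == 1:      # one core and one delta
        xc, xd = cores[0][0], deltas[0][0]
        if abs(xc - xd) < THETA:
            return "arch"
        return "right loop" if xc - xd > 0 else "left loop"
    if len(cores) == 1 and not deltas:            # one core, no delta:
        # fall back on the average direction (MeanDir) around the core
        return "right loop" if mean_dir_points_right(mean_dir) else "left loop"
    return "unknown"
```

Inputs where no rule fires, or where singular-point detection itself fails, remain in the unknown class, consistent with the UN column of Table 2.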

Figure 3. A fingerprint image without any singular point

Figure 4. Two samples of the rule that is used in [7]

Figure 5. Some images of FVC2002 that are misclassified by the rules of [7]

In the proposed approach, the image direction in a neighborhood area of the core is used for the classification of these images. A neighborhood area of the core is extracted first. Then this area is divided into B × B blocks, and each block's direction is calculated using the gradient method [12].
The overall direction, that is, the average direction (MeanDir) of all blocks, determines the class. MeanDir's range belongs to the lower half of the unit circle; in the right loop and left loop classes, MeanDir belongs to the lower-right and the lower-left half of the unit circle respectively. Figure 6 shows an example of the proposed approach for a right-loop and a left-loop case. The size and location of the neighborhood area, and the number of blocks in the area, were determined by examining the one-core left and right loop images in the FVC2002 database.
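A minimal sketch of the MeanDir computation, under stated assumptions: block orientation is estimated with a classical averaged-gradient formula (one common variant of the gradient method of [12]; the paper's exact formulation may differ), and orientations are averaged as doubled angles to respect their period of π. The block count and all names are illustrative.

```python
import numpy as np

def block_direction(block):
    # Averaged-gradient orientation estimate (assumed variant of the
    # gradient method of [12]).
    gy, gx = np.gradient(block.astype(float))
    gxx, gyy, gxy = np.sum(gx * gx), np.sum(gy * gy), np.sum(gx * gy)
    # Ridge orientation is perpendicular to the dominant gradient direction.
    return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2

def mean_dir(neighborhood, B=4):
    # Split the core neighborhood into B x B blocks and average their
    # directions; circular averaging of doubled angles is assumed here.
    h, w = neighborhood.shape
    bh, bw = h // B, w // B
    angles = np.array([block_direction(neighborhood[i*bh:(i+1)*bh, j*bw:(j+1)*bw])
                       for i in range(B) for j in range(B)])
    s, c = np.mean(np.sin(2 * angles)), np.mean(np.cos(2 * angles))
    return 0.5 * np.arctan2(s, c)
```

Whether MeanDir falls in the lower-right or lower-left half of the unit circle then separates right loops from left loops, as described above.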

Figure 6. First row: right loop class; second row: left loop class. (a) Neighborhood area (b) Block directions in the area (c) Average direction of the blocks (MeanDir)

3 EXPERIMENTAL RESULTS
The proposed algorithm was tested on the FVC2002 DB1-A database and the results are reported here. The fingerprint classification problem is considered as a four-class problem, because the fingerprint classes A (arch) and T (tented arch) have a substantial overlap and are very difficult to separate. The FVC2002 DB1-A database consists of 800 fingerprint images of size 388×374 pixels (500 dpi).
Table 2 shows the confusion matrix of the classification results of our proposed method on the FVC2002 database, and Table 3 shows the confusion matrix of the classification results of [7] on the same images. There are 35 left/right loop fingerprints misclassified by [7], of which at least 24 have one core. In our rule-based classification algorithm, a new feature named MeanDir is used to separate the left loop and right loop classes. With this feature, just two images are misclassified by our algorithm (Figure 7). As can be seen, the proposed approach has much better precision than [7]. The selection of the neighborhood area and block size proved to be accurate by experiment.
Table 4 shows the comparison of our system's performance with some other similar systems using the FVC2002 DB1-A database. All of these systems used four-class classification and the same database. The results show that our system performs better than all the compared systems.

Figure 7. Two samples of one-core images of FVC2002 that are misclassified by the proposed rule
Table 2. The confusion matrix of the proposed system classification result

Classified as:
Actual class    A     CT    LL    RL    UN    TOT
A               88    00    00    01    02    91
CT              00    90    01    02    00    93
LL              01    00    122   01    01    125
RL              01    00    01    119   01    122
Acc. 97.21%

Table 3. The confusion matrix of the [7] classification result

Classified as:
Actual class    A     CT    LL    RL    TOT
A               77    01    05    08    91
CT              00    83    05    04    93
LL              02    00    108   15    125
RL              04    02    20    96    122
Acc. 84.5%
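The reported accuracies follow directly from the confusion matrices: the diagonal holds the correctly classified prints, so accuracy is the diagonal sum divided by the total. A quick check against the values of Table 2:

```python
def accuracy(matrix):
    # Rows are actual classes; columns are predicted classes, so the
    # diagonal holds the correctly classified counts.
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return 100.0 * correct / total

# Rows A, CT, LL, RL; columns A, CT, LL, RL, UN (values from Table 2).
proposed = [[88,  0,   0,   1, 2],
            [ 0, 90,   1,   2, 0],
            [ 1,  0, 122,   1, 1],
            [ 1,  0,   1, 119, 1]]

print(round(accuracy(proposed), 1))  # about 97.2, matching the reported 97.21%
```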

Table 4. Comparison of the proposed system with other similar systems

Ref.              Features & Methods                       Accuracy
[7]               Coordinate geometry of singularities     84.5%
[13]              Orientation image and singular points    68%
[14]              Singular point location                  91.4%
Proposed system   Singular point and proposed feature      97.21%

4 CONCLUSION
This paper's focus was on rule-based fingerprint classification, an approach that concentrates on the number and location of singular points. The paper proposed a very accurate rule-based classification approach. This novel approach is invariant to translation, rotation and scale changes, and it can work with any number of possible singular points: none, one or two. For comparison purposes, the rules proposed in [7] have been presented. We showed that their rules for images with one core were not accurate and did not work on many images in the FVC2002 database that they used for their experiments; their approach did not cover the images with rotation and translation in FVC2002. A very accurate rule for this kind of image has therefore been proposed. The experimental results showed that the proposed method outperforms [7] and the other compared methods.

7 REFERENCES
1. S. E. R. Henry, Classification and Uses of Finger Prints. George Routledge and Sons, 1900.
2. D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. Springer-Verlag New York Inc, 2009.
3. A. I. Awad and K. Baba, "Efficient fingerprint classification using singular point," International Journal of Digital Information and Wireless Communications (IJDIWC), vol. 1, pp. 611-616, 2011.
4. J. Li, W. Y. Yau, and H. Wang, "Combining singular points and orientation image information for fingerprint classification," Pattern Recognition, vol. 41, pp. 353-366, 2008.
5. A. M. Bazen and S. H. Gerez, "Systematic methods for the computation of the directional fields and singular points of fingerprints," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 905-919, 2002.
6. F. Mirzaei, M. Biglari, and H. Ebrahimpour-Komleh, "First and second singular points detection in noisy images for designing a fingerprint classification system (in Persian)," presented at The 1st Conference on Novel Advances in Engineering, Kish, Iran, 2012.
7. I. S. Msiza, B. Leke-Betechuoh, F. V. Nelwamondo, and N. Msimang, "A fingerprint pattern classification approach based on the coordinate geometry of singularities," in Systems, Man and Cybernetics, 2009, pp. 510-517.
8. C. H. Park, J. J. Lee, M. J. T. Smith, and K. H. Park, "Singular point detection by shape analysis of directional fields in fingerprints," Pattern Recognition, vol. 39, pp. 839-855, 2006.
9. V. Srinivasan and N. Murthy, "Detection of singular points in fingerprint images," Pattern Recognition, vol. 25, pp. 139-153, 1992.
10. Q. Zhang and H. Yan, "Fingerprint classification based on extraction and analysis of singularities and pseudo ridges," Pattern Recognition, vol. 37, pp. 2233-2243, 2004.
11. A. Tariq, M. U. Akram, and S. A. Khan, "An automated system for fingerprint classification using singular points for biometric security," 2011, pp. 170-175.
12. Y. Wang, J. Hu, and F. Han, "Enhanced gradient-based algorithm for the estimation of fingerprint orientation fields," Applied Mathematics and Computation, vol. 185, pp. 823-833, 2007.
13. H. Kekre and V. Bharadi, "Fingerprint core point detection algorithm using orientation field based multiple features," International Journal of Computer Applications (IJCA), vol. 1, pp. 106-112, 2010.
14. A. I. Awad and K. Baba, "An application for singular point location in fingerprint classification," in Digital Information Processing and Communications. Springer, 2011, pp. 262-276.
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 390-399
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

E-Government Web Accessibility: WCAG 1.0 versus WCAG 2.0 Compliance


Faouzi Kamoun, Basel M. Al Mourad and Emad Bataineh
Zayed University.
P.O. Box 19282. Dubai. UAE.
Emails: {faouzi.kamoun@zu.ac.ae; basel.Almourad@zu.ac.ae; Emad.Bataineh@zu.ac.ae}

ABSTRACT
Most e-governments have traditionally used version
1.0 of the Web Content Accessibility Guidelines
(WCAG) as a basis to ensure that their websites are
accessible by people with disabilities. This was
reflected in their design guidelines, accessibility
evaluations, policy-making and legislations. Recently,
WCAG 2.0 emerged as an ISO/IEC International
accessibility standard that has been recommended for
adoption by the W3C WAI. This paper seeks to
examine if there is a need for e-governments to
reassess their web accessibility conformance, in light
of the latest WCAG 2.0 standard. A case study related
to the 21 Dubai e-government websites is presented
whereby accessibility is evaluated based on the WCAG
1.0 and WCAG 2.0 guidelines and using automated
accessibility testing tools. We found that WCAG 2.0
conformance testing identified some notable
accessibility issues that were not revealed by WCAG
conformance testing. Hence we recommend that e-governments should develop and update their web
content and accessibility policies to conform to the
latest WCAG 2.0 guidelines and success criteria.
Additional implications for practice and academic
research are also provided.

KEYWORDS
E-services, e-government, web accessibility, universal
access, e-inclusion.

1 INTRODUCTION AND RESEARCH MOTIVATIONS
Most governments are endorsing the migration towards an information society where e-government websites are becoming the primary doorways for citizens and businesses to government information and e-service delivery. However, to enable all citizens to benefit from e-government services, it is important to secure

universal accessibility. This accessibility enables people with disabilities to benefit from the information and services offered by e-governments, the same way a person with no disability would.
E-government public services have opened new
expectations for people with disabilities to access
public government online information and
services, without relying on the assistance from
others [1]. This brings new opportunities to people
with disability for more active social engagement
and participation. On the other hand, the lack of
web accessibility can turn e-government portals
into a new source of digital divide, public
deception and distrust among people with
disabilities ([2], [3]).
Although most governments have officially
endorsed web site accessibility via legislations and
conformity standards, many e-government
websites are still not conforming to basic
accessibility principles (see e.g. [2], [4]). To
make e-Government websites accessible and
inclusive, it is recommended that web designers
conform to the World Wide Web Consortium
(W3C), Web Content Accessibility Guidelines
(WCAG) [5]. WCAG guidelines are considered
today the most comprehensive de facto reference
for web accessibility assessment. These guidelines
set the minimum criteria for accessibility
compliance and are used as design guidelines as
well as a heuristic for website accessibility [6].
The first W3C accessibility standard for the Web
was WCAG 1.0, which was released on May 5th
1999. After years of tedious discussions,
deliberations and changes, this standard was
updated on December 11th 2008 to a newer
version, WCAG 2.0, in order to account for
various existing and emerging web technologies,
not just HTML.

390

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 390-399
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)
There are many ways by which one can assess and
test for e-government web accessibility. These
include expert testing, end-user testing, automated
testing, and surveys targeting e-government
webmasters and site developers [7]. All these
methods are based on cross-checking against some
accessibility targets, usually set by individual
governments, and often derived from the Web
Content Accessibility Guidelines [5].
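Automated tools of this kind crawl each page and test its markup against such accessibility targets. As a minimal illustration (not the implementation of any particular tool), the following sketch checks a single requirement, text alternatives for images (WCAG 1.0 checkpoint 1.1 / WCAG 2.0 success criterion 1.1.1), using Python's standard HTML parser:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements lacking an alt attribute (WCAG 1.0 checkpoint
    1.1 / WCAG 2.0 SC 1.1.1). A real accessibility tool evaluates dozens
    of such checks per page; this sketch covers just this one."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations += 1

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <img ... /> the same as an open tag.
        self.handle_starttag(tag, attrs)

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="map.png" alt="Site map">')
print(checker.violations)  # one image is missing its text alternative
```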
This work was particularly motivated by the fact
that most of the earlier reported e-government
accessibility assessment studies were based on the
WCAG 1.0 standard. This might be due to the lack
of WCAG 2.0 support on most automated
accessibility tools and to the fact that it was only in October 2012 that WCAG 2.0 became an
official and stable International ISO standard
(ISO/IEC 40500:2012). However, the W3C Web
Accessibility Initiative (WAI) recommends using
WCAG 2.0 instead of WCAG 1.0, although it
recognizes that most sites that already conform to
WCAG 1.0 should not make significant changes to
conform to WCAG 2.0 [5]. Accordingly, through
the lens of a specific case study related to Dubai egovernment, this paper aims to address the
following research question: Should e-government
web accessibility be retested against the latest
WCAG 2.0 guidelines? In other words, using
WCAG 2.0 conformance testing results as a
baseline, are there any noteworthy accessibility
issues that a WCAG 1.0 conformance testing
might not reveal? To the best of our knowledge,
this research question has not been empirically
validated by the current literature.
The remainder of this paper is organized as
follows: Section 2 presents a comparative analysis
between WCAG 1.0 and WCAG 2.0 standards.
Section 3 outlines our research methodology,
while section 4 presents and discusses the results
of our quantitative study. In section 5, we present
a summary of the main findings of the paper and
highlight the implication of this research for
practice and academic research.
2 HOW IS WCAG 2.0 DIFFERENT FROM
WCAG 1.0?
WCAG 2.0 emerged in response to the new
challenges posed by emerging interactive

technologies. In fact, WCAG 2.0 is a future


oriented standard that applies to a broader
spectrum of existing and emerging web content
technologies (including XML, Flash, JavaScript,
PDF, AJAX, DHTML, RIA, CSS, SML, and
SVG, in addition to HTML and XHTML) and
covers more types of disability than WCAG 1.0.
Besides adopting a technology-neutral language to
describe accessibility requirements, WCAG 2.0
strives to make all its requirements testable.
WCAG 2.0 also applies more stringent
accessibility requirements on the default look and
feel of a website [8]. The WCAG 2.0 requirements
are easier to understand and use, and are easier to
test via automated accessibility testing tools or
through human evaluation. Furthermore, although
the major issues of web accessibility are the same,
there are some notable differences in the approach,
organization and requirements between the
WCAG 1.0 and WCAG 2.0 standards [5]. These
are further discussed below.
2.1 WCAG 1.0 Standard
WCAG 1.0 was published in May 1999. The
standard promotes accessibility by outlining 14
high-level guidelines (table 1) to make web
content accessible to people with disabilities.
Table 1. Web content accessibility guidelines 1.0 [5]
Guideline 1: Provide equivalent alternatives to auditory and visual content
Guideline 2: Don't rely on color alone
Guideline 3: Use markup and style sheets and do so properly
Guideline 4: Clarify natural language usage
Guideline 5: Create tables that transform gracefully
Guideline 6: Ensure that pages featuring new technologies transform gracefully
Guideline 7: Ensure user control of time-sensitive content changes
Guideline 8: Ensure direct accessibility of embedded user interfaces
Guideline 9: Design for device-independence
Guideline 10: Use interim solutions
Guideline 11: Use W3C technologies and guidelines
Guideline 12: Provide context and orientation information
Guideline 13: Provide clear navigation mechanisms
Guideline 14: Ensure that documents are clear and simple

Conformance to WCAG 1.0 involves designing and testing e-government websites against these guidelines. Each guideline consists of a number of checkpoints, each assigned a priority level (1, 2 or 3) based on the checkpoint's impact on accessibility [5]. WCAG 1.0 therefore uses 65 checkpoints as the basis for determining and testing conformance. The full listing of the WCAG 1.0 checkpoints under each priority level can be found on the W3C WAI website [5]. Accordingly, WAI defines three possible WCAG 1.0 accessibility conformance levels, as illustrated in Table 2.

Table 2. WAI WCAG 1.0 conformance claims [5]

WAI-A (basic accessibility): all priority 1 checkpoints are met. This is the minimum (basic) W3C requirement; otherwise one or more groups of people will find it impossible to access information from the website. This minimum requirement must be met.

WAI-AA (intermediate accessibility): all priority 1 and 2 checkpoints are satisfied; otherwise one or more groups of people will find it difficult to access information from the website. This conformance level should be met, as it removes significant barriers to accessing Web documents.

WAI-AAA (high accessibility): all priority 1, 2 and 3 checkpoints are satisfied; otherwise one or more groups of people will find it somewhat difficult to access information from the website. This conformance level may be addressed by Web developers to improve access to Website documents.

WCAG 1.0 has long been criticized for its ambiguity, HTML-dependence, lack of guidance, usage of vague acronyms, and for being difficult to understand and apply.
2.2 WCAG 2.0 Standard
As illustrated in Figure 1, WCAG 2.0 is organized around four theoretical design principles that provide the foundations and guidelines for web accessibility. These overarching principles are:

Perceivable: content must be visible to the senses, including those of a blind user or a user with limited vision.
Operable: any user should be able to interact with the web content and operate its controls, navigation schemes and forms.
Understandable: the information (content) and the interaction (controls) must be understandable by all users, including those with learning disabilities (e.g. dyslexia) or cognitive impairments.
Robust: content can be easily read by browsers, readers, and current and future assistive technologies.

Figure 1. WCAG 2.0 layers of guidelines and success criteria

Under the above four design principles are twelve guidelines (Appendix 1) that, although not directly testable, define the basic requirements to be met in order to make web page content accessible. For each guideline, a series of testable Success Criteria (SC) is outlined to enable WCAG 2.0 conformance testing. For each success criterion, a list of sufficient and advisory techniques is provided to guide web developers and testers towards meeting the criterion within different contexts and using different web content technologies. Finally, in order to reflect the importance of a given success criterion, WCAG 2.0 uses three levels of conformance instead of the priority levels adopted by WCAG 1.0. These conformance levels are A (the minimum level of WCAG 2.0 accessibility conformance), AA (the mid-level of conformance) and AAA (the ideal and highest level of conformance). Accordingly, to achieve conformance level A, all level A success criteria must be met. Similarly, to achieve
conformance level AA, all level A and level AA
success criteria must be met. Furthermore, a
particular accessibility issue can be covered by
more than one success criterion at different
conformance levels. These conformance levels
are similar to WCAG 1.0 checkpoint priority
levels; although WCAG 2.0 emphasizes that all
the success criteria are important. Consequently,
unlike WCAG 1.0, which uses checkpoints as a
basis for determining conformance, WCAG 2.0
uses 61 success criteria instead.
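The conformance logic described above (a page conforms at level AA only if every level-A and every level-AA success criterion passes) can be sketched as a small decision routine. The function name and input format below are our own illustration, not part of WCAG:

```python
# Hypothetical sketch of WCAG 2.0 conformance-level determination.
# `failed` maps a conformance level to the number of failed success criteria.

def conformance_level(failed):
    """Return the highest conformance level a page achieves."""
    if failed.get("A", 0) > 0:
        return "none"   # any failed level-A SC: no conformance at all
    if failed.get("AA", 0) > 0:
        return "A"      # all level-A SC pass, but some level-AA SC fail
    if failed.get("AAA", 0) > 0:
        return "AA"     # A and AA pass; only AAA criteria fail
    return "AAA"

print(conformance_level({"A": 0, "AA": 3, "AAA": 10}))  # A
```

The ordering encodes the cumulative rule: each level requires all criteria of the levels below it.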
As illustrated in figure 2, for each of the three
conformance levels, there is a clear disparity in the
number of WCAG 1.0 checkpoints versus WCAG
2.0 success criteria. While the number of WCAG
2.0 success criteria at conformance levels A and
AAA outweighs the corresponding number of
WCAG 1.0 checkpoints, this is not the case at
conformance level AA.
Figure 2. WCAG 1.0 checkpoints versus WCAG 2.0
success criteria, per conformance level (A, AA, AAA)

Compared to WCAG 1.0, WCAG 2.0 puts more
emphasis on website usability by empowering
users to take more control over the interface while
interacting with the web content, with and without
assistive technologies. WCAG 2.0 also requires
websites to explicitly reveal the semantic structure
of each webpage to help users with disabilities
find content without being disoriented.
Another notable difference between WCAG 1.0
and WCAG 2.0 is the extent to which assistive
technologies are supported [8]. In fact, while
WCAG 1.0 supports assistive technologies by
requiring web programmers to code for
interoperability with these technologies, WCAG
2.0 requires website accessibility to be an integral
part of the design of the website. In addition,
WCAG 2.0 defines the visual, auditory and
interactive accessibility design requirements more
explicitly than WCAG 1.0. WCAG 2.0 also
provides more flexibility on how to meet these
design requirements through a set of fully
documented strategies and techniques.
Although the fundamental web accessibility issues
are similar across the two standards, as each
WCAG 1.0 checkpoint can, in principle, be
mapped to a WCAG 2.0 success criterion, WCAG
2.0 outlines new accessibility requirements that
are not covered by WCAG 1.0 [5]. As a result, the
mapping between WCAG 1.0 checkpoints and
WCAG 2.0 success criteria is not a direct one-toone mapping. Furthermore, a number of WCAG
1.0 checkpoints were excluded in WACG 2.0, as
they were deemed obsolete. These include for
instance support for outdated technologies such as
ASCII-art and the usage of keyboard shortcuts for
important links. The additional accessibility issues
introduced by WCAG 2.0 are briefly summarized
in Appendix 2. Additional details can be found in
the W3C WAI website [5].
Despite improvements, WCAG 2.0 has been
criticized for being vague, for overwhelming readers
with a massive amount of information, and for not
engaging people with disabilities in the
development and validation of the guidelines.
3 RESEARCH METHODOLOGY
To address our research question, we adopt a case
research approach. A case study is a research
strategy that focuses on understanding the
dynamics present within a single setting [9].
Adopting this methodology enables researchers to
better understand the dynamics present within a
given situation, and focus on emerging
phenomena [10]. Our research design is therefore
exploratory by nature as it seeks to better
understand whether testing web content for
compliance to WCAG 2.0 success criteria would
reveal any new accessibility issues that would not
be revealed by a WCAG 1.0 conformance test.
We selected the 21 Dubai e-government websites
as the target for our accessibility testing. Though
we cannot guarantee that our results could be
generalized to other contexts, the choice of 21
different cases to address our research question
was deemed appropriate for the purpose of this
study. Moreover, for the sake of simplicity,
objectivity and scalability, we opted for an
automated testing method, based on WCAG 1.0
and WCAG 2.0 guidelines. In addition we limited
the accessibility testing to the home page of each
tested website to keep the scope of the study
manageable and consistent. Accordingly, we
adopted a quantitative research method to derive a
comparative analysis of accessibility
conformance, based on the WCAG 1.0 and
WCAG 2.0 standards.
Several automatic accessibility tools are available
to assess web accessibility. For the purpose of this
study, we selected the standalone version 3.08 of
the TAW (Test de Accesibilidad Web)
accessibility testing tool (http://www.tawdis.net/)
to test for conformance to WCAG 1.0
checkpoints. We also selected the 1.10 standalone
version of the Web Accessibility Assessment Tool
(WaaT)
(http://sourceforge.net/projects/waat/files/?source
=navbar) to test for conformance to WCAG 2.0
success criteria. Both tools can test conformance
against all the checkpoint levels (A, AA, and
AAA).
Using TAW and WaaT automatic accessibility
tools, we conducted several rounds of WCAG 1.0
and WCAG 2.0 accessibility conformance tests on
the 21 Dubai e-Government Websites. All
accessibility conformance checks were performed
during the period August-September 2013, using
Internet Explorer 8.0 (IE) running on the Windows XP
operating system with Service Pack 3; our
accessibility results might therefore have changed
since we carried out our testing. We
provisioned TAW and WaaT tools to use the
highest AAA conformance level, thus testing all
WCAG 1.0 and WCAG 2.0 [A, AA, and AAA]
conformance levels.
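As a rough illustration of what such automated checkers do, the sketch below flags three of the machine-testable success criteria discussed later (1.1.1, 2.4.2 and 3.3.2) using only Python's standard HTML parser. It is a deliberately minimal stand-in for tools like TAW or WaaT, not a reimplementation of either, and its label heuristic (requiring an `id` or `aria-label` on inputs) is a crude proxy for proper `<label for=...>` association:

```python
from html.parser import HTMLParser

class MiniChecker(HTMLParser):
    """Flags simple violations of three machine-testable WCAG 2.0 SCs:
    1.1.1 (img without alt), 2.4.2 (missing <title>), 3.3.2 (unlabeled input)."""
    def __init__(self):
        super().__init__()
        self.has_title = False
        self.in_title = False
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "img" and not attrs.get("alt"):
            self.violations.append("SC 1.1.1: <img> missing alt text")
        elif tag == "input" and attrs.get("type") not in ("hidden", "submit", "button") \
                and not (attrs.get("aria-label") or attrs.get("id")):
            self.violations.append("SC 3.3.2: <input> without label or aria-label")

    def handle_data(self, data):
        if self.in_title and data.strip():
            self.has_title = True    # non-empty <title> text seen

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

def check(html):
    checker = MiniChecker()
    checker.feed(html)
    if not checker.has_title:
        checker.violations.append("SC 2.4.2: page has no descriptive <title>")
    return checker.violations

print(check("<html><head></head><body><img src='x.png'><input type='text'></body></html>"))
```

Real checkers cover far more criteria and, as noted in the limitations section, still cannot test criteria that require human judgment.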
4 RESULTS AND DISCUSSIONS
Table 3 summarizes the accessibility conformance
results for each tested Dubai e-government
website, based on WCAG 1.0 and WCAG 2.0
standards. It should be noted that the three (A,
AA, and AAA) conformance levels in WCAG 1.0

are not equivalent to the three levels in WCAG


2.0, meaning that a website that conforms to
WCAG 1.0 level A may not be compliant with
WCAG 2.0 level A conformance level. The
number of errors shown in table 3 corresponds to
the number of conformance violations to the tested
WCAG 1.0 checkpoint or WCAG 2.0
conformance level.
As shown in table 3, of the 21 tested websites,
only two (RTA and DAFZ) have fully conformed
to the WCAG 1.0 WAI-A conformance level and
thus met the minimum WCAG 1.0 accessibility
requirement for people with disabilities. On the
other hand, none of the tested websites has passed
the WCAG 2.0 Level A conformance level, which
constitutes the minimum WCAG 2.0 accessibility
requirements for people with disabilities. Table 3
also reveals that the number of WCAG 2.0 level-A
violations is consistently higher than their WCAG
1.0 counterparts. This result demonstrates the
wider breadth and stringent depth of WCAG 2.0
success criteria and techniques at this minimum
level of accessibility conformance. This is also a
reflection of the strict requirements that WCAG
2.0 puts on the website look and feel, as nearly
50% of its success criteria, at level A conformance
level, relate to website design [8].
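Transcribing the two level-A columns of Table 3 makes this observation directly checkable:

```python
# Level-A error counts (WCAG 1.0 vs WCAG 2.0) for the 21 tested Dubai
# e-government sites, transcribed from Table 3, in row order.

wcag1_a = [34, 64, 14, 2, 0, 15, 8, 38, 13, 33, 42,
           0, 64, 46, 2, 4, 120, 12, 28, 1, 16]
wcag2_a = [146, 183, 556, 811, 84, 781, 180, 1411, 818, 2702, 993,
           126, 892, 372, 19, 174, 1014, 233, 851, 153, 202]

# Count sites where WCAG 2.0 testing reports strictly more level-A errors.
higher = sum(v2 > v1 for v1, v2 in zip(wcag1_a, wcag2_a))
print(f"{higher} of {len(wcag1_a)} sites report more WCAG 2.0 level-A errors")  # 21 of 21
```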
Table 3 also shows that, for most of the tested
e-government websites, the number of WCAG 1.0
level AA violations is higher than its WCAG 2.0
level AA counterpart. This can be explained by
referring back to figure 2, where we noticed that
the number of WCAG 1.0 level-AA checkpoints is
more than double the number of WCAG 2.0
level-AA success criteria.
Further analysis of WaaT and TAW accessibility
reports enabled us to identify some accessibility
barriers that were only unveiled by the WCAG 2.0
conformance testing. These barriers are illustrated
in table 4 and are further discussed below in order
to shed light on their implications for people with
disabilities.
2.2.1 Time limits SC violations: Compared to
users without disabilities, people with
disabilities generally need more time to
complete a given task. For example a blind
person may need 3 times as much time as a
person with full vision takes to read a
document [11]. The additional slowdown may
be due to the usage of assistive technologies
(e.g. screen readers) or to some mobility or
cognitive impairments. Since some e-government
webpages impose time
limits on the duration of active sessions due to
limited resources or because of security
concerns, it is essential to provide people with
disabilities with the option to disable or at least
readjust the time limits of active interactive
sessions. Failure to do so will prevent people
with disabilities from successfully completing
their online e-government transactions.
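A minimal sketch of a session timer offering the "turn off", "adjust" and "extend" options required by SC 2.2.1 follows; the class and method names are illustrative, not taken from any framework:

```python
import time

class AdjustableSession:
    """Illustrative SC 2.2.1-friendly session timer: the default time limit
    can be turned off, adjusted before it starts, or extended before expiry."""
    def __init__(self, default_limit_s=600):
        self.default = default_limit_s
        self.limit = default_limit_s
        self.started = time.monotonic()
        self.extensions = 0

    def turn_off(self):
        self.limit = None                    # "Turn off" option: no limit at all

    def adjust(self, factor):
        self.limit = self.default * factor   # "Adjust": e.g. up to 10x the default

    def extend(self):
        if self.extensions < 10:             # "Extend": simple action, >= 10 times
            self.limit += self.default
            self.extensions += 1

    def expired(self):
        return self.limit is not None and time.monotonic() - self.started > self.limit

session = AdjustableSession(default_limit_s=600)
session.adjust(10)       # user requests 10x more time before starting the task
print(session.limit)     # 6000
```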
2.4.2 Page Titled SC violations: The absence
of a unique and descriptive page title prevents
people with disabilities from determining
where they are on a given e-government
website, thus complicating their navigation
experience.
3.3.1 Error Handling SC Violations: The issue
of properly detecting and correcting erroneous
data field entries is particularly important for
people with disabilities. In particular, some
e-government transactions involve
financial or legal implications for which errors
can have serious ramifications [8]. Hence if
users with disabilities are not provided with
alerts to specifically describe the error and
suggest how to correct it, they can become
frustrated to the point of quitting the
online transaction.
3.3.2 Missing Labels/Instructions for User
Input SC Violation:
This accessibility
violation will lead to a situation where people
with disabilities who need to fill out an online
e-government form will not receive the needed
assistance to avoid making mistakes. For
instance, they will not be able to search for
visual cues, such as icons, that tag erroneous
fields. Labels or instructions are critical for
people with cognitive disabilities and for those
who rely on screen readers and magnifiers to
fill out online forms without being overwhelmed
with cluttered information. This barrier
increases the likelihood of data entry errors and
complicates user interaction, thus leading to potential
confusion and frustration.
1.4.3 Contrast Violation: Unlike WCAG 1.0,
which does not impose specific requirements
for the contrast of the text and background
color combination, WCAG 2.0 does require a
minimum contrast ratio of 4.5:1. The failure
of some websites to guarantee a minimum
contrast ratio implies that people with
disabilities will be exposed to a combination
of text and background colors that can create
serious reading problems for them.
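The 4.5:1 threshold is computed from the relative luminance of the two colors, following the WCAG 2.0 definition (sRGB linearization, luminance weights, and a 0.05 flare term). A direct implementation:

```python
def _linear(c8):
    # sRGB channel value (0-255) to linear light, per the WCAG 2.0 definition
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # ratio of the lighter to the darker luminance, each offset by 0.05
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white gives the maximum possible ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0
# Mid-grey #666666 on white passes the 4.5:1 minimum (about 5.7:1).
print(contrast_ratio((102, 102, 102), (255, 255, 255)) >= 4.5)     # True
```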
1.4.8 Visible Presentation SC Violation: Some
people with mild visual impairment find it
difficult, if not impossible, to perceive some
words because of low contrast and the inability
to resize the text's font size to at least double
its default size. This accessibility barrier is
particularly significant for people with visual
impairment, who do not have access to
assistive technologies.
3.3.5 Context-sensitive Help SC Violation:
Context-sensitive help enables users to get
specific guidance about whatever part of the
dialog boxes or controls they are using at any
given moment. The absence of this type of
assistance will make it hard for people with
reading and intellectual disabilities to enter the
correct text in forms or other places that need
text input.
A major finding that can be inferred from the
above results is that, with the exception of success
criteria 2.4.2 and 3.3.1, fixing the remaining
WCAG 2.0-specific accessibility issues that are
highlighted in table 4 does involve noteworthy
additional retrofit work. The above results do not
seem to be congruent with earlier claims that
"most websites that conform to WCAG 1.0 should
not require significant changes in order to conform
to WCAG 2.0, and some will not need any
changes at all" [5]. Therefore, our research
highlights the need to use WCAG 2.0 success
criteria as reference guidelines to assess web
content accessibility.
5 CONCLUSIONS AND RESEARCH
IMPLICATIONS
There is a growing demand for e-governments to
ensure universal accessibility by conforming to
WCAG guidelines.
This contribution demonstrated via empirical
evidence that although WCAG 2.0 builds on
WCAG 1.0, a website that already meets WCAG
1.0 requirements might not comply with the new
WCAG 2.0 success criteria. In particular, we
found that since some WCAG 2.0 success criteria
are more specific and testable than the
corresponding WCAG 1.0 checkpoints, there is a

need for e-government web content developers to


retest web accessibility and update their web
content to comply with the latest WCAG 2.0
requirements.

Table 3: WCAG 1.0 & WCAG 2.0 accessibility conformance results (number of errors)

Dubai eGov.  WCAG 1.0  WCAG 2.0  WCAG 1.0  WCAG 2.0  WCAG 1.0   WCAG 2.0
Website      Level A   Level A   Level AA  Level AA  Level AAA  Level AAA
AMAF            34       146       153        38        50         76
DPP             64       183       246       137        54        243
DEWA            14       556       141        45        50        192
DCUS             2       811        48         6        35         32
RTA              0        84         2       105         1        276
IACAD           15       781       158        33       111         85
DMI              8       180         2         2         1         18
DED             38      1411       256       282        73        441
DGW             13       818        83        42        43         75
Dubai.ae        33      2702       372       324       145        291
DM              42       993       372       395        57        557
DAFZ             0       126        17         2         2          7
DNRD            64       892       731         4        85          2
DHA             46       372        61         2       135         25
DP               2        19        15         6         3         51
DC               4       174       163        31        23         33
LD             120      1014       368         2        52          8
DCCI            12       233        28         0         2         25
DCAA            28       851        67         2        25         14
DTCM             1       153        16         0         0         22
DCD             16       202         5        10         1         60

5.1 Implications for Practice
Based on our research findings, we suggest that
e-government entities that are currently using
WCAG 1.0 guidelines as a reference for
accessibility compliance must plan for a graceful
transition to update their web content to comply
with WCAG 2.0 success criteria. This transition
should follow a time-phased plan, targeting the A
conformance level within the short term (e.g. 1-2
years) and AA within a longer term (e.g. in year 3
and beyond). In addition, we outline below an
action plan that can assist e-governments to
successfully transition towards WCAG 2.0
compliance:
First, there is a need to update the existing
accessibility policy, if any, in light of the newest
WCAG 2.0 guidelines, success criteria and
associated conformance levels. If no such policy
already exists, then e-governments must develop a
web accessibility policy, based on the latest
WCAG 2.0 success criteria. Special care needs to
be taken with the new and more specific accessibility
requirements introduced in WCAG 2.0 that might
not be met.
Second, there is a need to design, code and test for
web accessibility conformance, based on the
WCAG 2.0 standard and the associated tips,
informative techniques and best practices for
developing accessible web content, which are
available on the W3C WAI homepage [5].
Third, e-government entities need to assess their
web accessibility through conformance testing
against the WCAG 2.0 success criteria. This can
be done using a suitable automatic accessibility
software tool, which should be complemented by
expert and/or end-user testing methods. In
addition, there is a need to consider WCAG 2.0
conformance during the early stages of website
design and maintenance tasks.
Table 4. Accessibility barriers identified by WCAG 2.0 conformance testing

2.2.1 Timing Adjustable (Level A): For each time limit that is set by the content, at least one
of the following is true:
- Turn off: The user is allowed to turn off the time limit before encountering it; or
- Adjust: The user is allowed to adjust the time limit before encountering it over a
wide range that is at least ten times the length of the default setting; or
- Extend: The user is warned before time expires and given at least 20 seconds to
extend the time limit with a simple action (for example, "press the space bar"),
and the user is allowed to extend the time limit at least ten times; or
- Real-time Exception: The time limit is a required part of a real-time event (for
example, an auction), and no alternative to the time limit is possible; or
- Essential Exception: The time limit is essential and extending it would invalidate
the activity; or
- 20 Hour Exception: The time limit is longer than 20 hours.
Affected websites (number of violations): DMI (7).

2.4.2 Page Titled (Level A): Web pages have titles that describe topic or purpose (the intent
of this SC is to help users find content and orient themselves within it by ensuring that each
Web page has a descriptive title).
Affected websites (number of violations): AMAF (1), DPP (1), DEWA (1).

3.3.1 Error Identification (Level A): If an input error is automatically detected, the item that
is in error is identified and the error is described to the user in text (the intent of this SC is
to ensure that users are aware that an error has occurred and can determine what is wrong by
referring to the error message).
Affected websites (number of violations): DMI (1), DHA (1), DCD (1).

3.3.2 Labels or Instructions (Level A): Labels or instructions are provided when content
requires user input (the intent of this SC is to have content authors place instructions or
labels that identify the controls in a form so that users know what input data is expected.
These labels will provide important cues and instructions that will benefit people with
disabilities).
Affected websites (number of violations): AMAF (14), DPP/DM (8), DEWA/DCUS (6),
DGW (20), Dubai.ae (35), DHA (11), DP (2), DC/DCCI (4).

1.4.3 Contrast (Minimum) (Level AA): The visual presentation of text and images of text
has a contrast ratio of at least 4.5:1, except for the following:
- Large Text: Large-scale text and images of large-scale text have a contrast ratio of
at least 3:1;
- Incidental: Text or images of text that are part of an inactive user interface
component, that are pure decoration, that are not visible to anyone, or that are part
of a picture that contains significant other visual content, have no contrast
requirement;
- Logotypes: Text that is part of a logo or brand name has no minimum contrast
requirement.
Affected websites (number of violations): AMAF/DCD (6), DPP (36), DEWA (19),
DCUS (4), RTA (62), DGW (1), Dubai.ae (28), DM (19), DNRD/DCAA (2), DP (3),
DC (31), LD (24).

1.4.8 Visual Presentation (Level AAA): For the visual presentation of blocks of text, a
mechanism is available to achieve the following:
- Foreground and background colors can be selected by the user.
- Width is no more than 80 characters or glyphs (40 if CJK).
- Text is not justified (aligned to both the left and the right margins).
- Line spacing (leading) is at least space-and-a-half within paragraphs, and
paragraph spacing is at least 1.5 times larger than the line spacing.
- Text can be resized without assistive technology up to 200 percent in a way that
does not require the user to scroll horizontally to read a line of text on a full-screen
window.
(The intent of this SC is to ensure that visually rendered text is presented in such a
manner that it can be perceived without its layout interfering with its readability.)
Affected websites (number of violations): AMAF (52), DPP (150), DEWA (151),
RTA (180), DMI/DCCI (1), DGW (49), Dubai.ae (194), DM (503), DP (28), LD (112),
DCAA (2), DCD (23).

3.3.5 Help (Level AAA): Context-sensitive help is available (the intent of this SC is to help
users avoid making mistakes. Some users with disabilities may be more likely to make
mistakes than users without disabilities. Using context-sensitive help, users find out how to
perform an operation without losing track of what they are doing.)
Affected websites (number of violations): AMAF/DEWA/DCCI/DP/DC (2), DCUS/LD (1),
DCAA (3).
Finally, once the additional work that is needed to
meet the WCAG 2.0 success criteria is identified,
the required fixes can be prioritized, based on their
impact on people with disabilities, the amount of
additional efforts involved and the risk associated
with the retrofit. As a rule of thumb, priority
should be given to fix the most frequently-visited
webpages that have the highest impact on
accessibility (in accordance with the associated
conformance level) and that can be fixed with
the least effort and risk [7].
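This rule of thumb can be expressed as a simple scoring function; the weights and field names below are our own illustrative assumptions, not part of the cited guidance [7]:

```python
# Illustrative fix-prioritization score: impact (conformance-level weight
# times page traffic) divided by cost (effort times retrofit risk).

LEVEL_WEIGHT = {"A": 3, "AA": 2, "AAA": 1}   # level-A fixes have the highest impact

def priority(issue):
    impact = LEVEL_WEIGHT[issue["level"]] * issue["visits"]
    cost = issue["effort"] * issue["risk"]   # e.g. person-days x relative risk
    return impact / cost

issues = [
    {"id": "missing page title", "level": "A",  "visits": 9000, "effort": 1, "risk": 1},
    {"id": "low contrast theme", "level": "AA", "visits": 9000, "effort": 5, "risk": 2},
]
for issue in sorted(issues, key=priority, reverse=True):
    print(issue["id"], round(priority(issue), 1))
```

A cheap, low-risk level-A fix on a busy page ranks far above a costly level-AA redesign, matching the prioritization described above.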
5.2 Implications for Research
In this contribution, we established, through a
single case study conducted in Dubai, UAE,
empirical evidence of the need to design, code and
test e-government websites for compliance with
the latest WCAG 2.0 standard. It would be
interesting to explore if the research findings of
this case study can be replicated or validated
across other contexts. We also invite other
researchers to develop automated web content
adaptation tools in line with the applicable WCAG
2.0 guidelines in order to serve the particular
needs of people with disabilities. The work of
Anam, Ho and Lim [12] in the context of mobile
devices can be a starting point for further
contributions to better adapt web content towards
enhanced accessibility.
5.3 Limitations
This research has a few limitations. First, the
findings of this contribution were based on a case
study involving 21 e-government websites in the
UAE, and therefore it is not certain whether they
are also applicable elsewhere. Second, our
accessibility testing for conformance to WCAG
standards was exclusively based on automated
testing tools. However, not all WCAG success
criteria can be tested by automated accessibility
checker tools, as some criteria still require human
evaluation and judgment. Finally, the WCAG
accessibility violations reported from automatic
accessibility tools are proxy indicators of web
accessibility and may not capture all the
accessibility barriers that people with disabilities
might encounter in real-life situations. For

instance, Rømen and Svanæs [6] conducted a
usability testing experiment involving people with
disabilities, and found that only 32% of the real
accessibility barriers were identified by WCAG
2.0. Hence more research is warranted to further
illuminate the extent to which WCAG 2.0
addresses accessibility issues, as actually
experienced by people with disabilities.
Acknowledgement
This research was supported by Zayed University
Research Incentive Fund (RIF) grant #R13067.
6 REFERENCES
1. Goodwin, M., Susar, D., Nietzio, A., Snaprud, M.,
Jensen, C. S.: Global Web Accessibility Analysis of
National Government Portals and Ministry Web Sites,
Journal of Information Technology and Politics, 8(1), pp.
41--67 (2011).
2. Jaeger, P. T.: User-centered Policy Evaluations of
Section 508 of the Rehabilitation Act: Evaluating
e-Government Websites for Accessibility, Journal of
Disability Policy Studies, 19(1), pp. 24--33 (2008).
3. Cullen, R., Hernon, P.: More Citizen Perspectives on
e-Government, In: P. Hernon, R. Cullen and H.C. Relyea
(eds.) Comparative Perspectives on e-Government:
Serving Today and Building for Tomorrow, pp.
209--242, Lanham, MD: Scarecrow (2006).
4. Kuzma, J.: Global E-Government Web Accessibility: A
Case Study, Proceedings of the British Academy of
Management 2010 Conference, pp. 14--16, September
2010, University of Sheffield, UK (2010).
5. Web Content Accessibility Guidelines (WCAG),
http://www.w3.org/WAI/intro/wcag.php.
6. Rømen, D., Svanæs, D.: Validating WCAG 1.0 and
WCAG 2.0 through Usability Testing with Disabled
Users, Proceedings of the International Conference on
Universal Technologies, May 19-20, Oslo University
College, Oslo, Norway (2010).
7. Al Mourad, B., Kamoun, F.: Accessibility Evaluation of
Dubai e-Government Websites: Findings and
Implications, Journal of E-Government Studies and Best
Practices, 2013, pp. 1--15 (2013).
8. Reid, L.G., Snow-Weaver, A.: WCAG 2.0 for Designers:
Beyond Screen Readers and Captions, Proceedings of the
13th International Conference on Human-Computer
Interaction (HCII 2009), Lecture Notes in Computer
Science, (5616), pp. 674--682 (2009).
9. Eisenhardt, K.M.: Building Theories from Case Study
Research, Academy of Management Review, 14(4), pp.
532--550 (1989).

10. Benbasat, I., Goldstein, D.K., Mead, M.: The Case
Research Strategy in Studies of Information Systems,
MIS Quarterly, 11(3), pp. 369--386 (1987).
11. Kapsi, M., Vlachogiannis, E., Spyrou, T., Darzentas, J.:
A Preliminary Feedback for the WCAG 2.0 - WCAG 1.0
Vs WCAG 2.0 Evaluation Study, Proceedings of the 2nd
International Conference on PErvasive Technologies
Related to Assistive Environments (PETRA) 2009, pp.
1--6, June 9-13, Corfu, Greece (2009).
12. Anam, R., Ho, C.K., Lim, T.Y.: Flexi-adaptor: An
Automated Web Content Adaptation for Mobile Devices,
International Journal of Digital Information and Wireless
Communications (IJDIWC), 1(3), pp. 656--670 (2011).

Appendix 1
Web Content Accessibility Guidelines 2.0 [5]

1. Perceivable
Guideline 1.1 [Text alternatives]: Provide text alternatives for any non-text content so that
it can be changed into other forms people need, such as large print, braille, speech, symbols
or simpler language.
Guideline 1.2 [Time-based media]: Provide alternatives for time-based media.
Guideline 1.3 [Adaptable]: Create content that can be presented in different ways (for
example simpler layout) without losing information or structure.
Guideline 1.4 [Distinguishable]: Make it easier for users to see and hear content, including
separating foreground from background.

2. Operable
Guideline 2.1 [Keyboard accessible]: Make all functionality available from a keyboard.
Guideline 2.2 [Enough time]: Provide users enough time to read and use content.
Guideline 2.3 [Seizures]: Do not design content in a way that is known to cause seizures.
Guideline 2.4 [Navigable]: Provide ways to help users navigate, find content, and determine
where they are.

3. Understandable
Guideline 3.1 [Readable]: Make text content readable and understandable.
Guideline 3.2 [Predictable]: Make Web pages appear and operate in predictable ways.
Guideline 3.3 [Input assistance]: Help users avoid and correct mistakes.

4. Robust
Guideline 4.1 [Compatible]: Maximize compatibility with current and future user agents,
including assistive technologies.

Appendix 2
WCAG 2.0 Accessibility Requirements [5]

New accessibility requirements introduced in WCAG 2.0, spanning conformance levels A,
AA and AAA: Sensory characteristics; Audio control; No keyboard trap; Page titled; Error
identification; Labels or instructions; Headings and labels; Error suggestions; Error
prevention (Legal, Financial, Data); Prerecorded sign language; Prerecorded media
alternative; Low or no background audio; Visual presentation; No timing;
Re-authentication; Focus visible; Location; Unusual words; Pronunciation; Help; Error
prevention.
International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 400-410
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Adaptive Channel Equalization for FBMC Based on Variable Length Step Size and
Mean-Squared Error
Mashhoor AlTarayrah and Qasem Abu Al-Haija
King Faisal University, Department of Electrical Engineering,
Al-Ahsa 31982, P.O. Box 380, Saudi Arabia
Emails: Mtarayrah@kfu.edu.sa; Qalhaija@kfu.edu.sa
ABSTRACT
Recently, increasing data transmission rates while
simultaneously demanding more bandwidth has been
a challenge. The trend now is to support high data
rates in wireless communications. Multicarrier
systems have overcome many challenges of high
bandwidth efficiency and at the same time also
provided high spectral efficiency. Filter bank
multicarrier (FBMC) systems provide several
advantages over traditional orthogonal frequency
division multiplexing (OFDM) with cyclic prefix
(CP): FBMC systems provide a much better spectral
shaping of the subcarriers than OFDM, so the most
obvious difference between the two techniques is in
frequency selectivity. In this paper, we present a
least-mean-square (LMS) algorithm based on a
well-known cost function, the mean-squared error
(MSE), adapted for an FBMC system with offset
QAM (OQAM) modulation. This leads to a
per-subchannel adaptive equalizer solution with low
complexity. The simulations use practical channel
information based on International
Telecommunication Union (ITU) standards.
Moreover, we discuss how the proposed algorithm
optimizes and evaluates the per-subcarrier
convergence characteristic curves of the LMS
equalization algorithm.

KEYWORDS
Filter Banks, Multicarrier Modulation, Offset
Quadrature Amplitude Modulation (OQAM),
Per-Subchannel Equalization, LMS Algorithm.

1 INTRODUCTION
Wireless communication systems are amongst the
most complicated communication systems; their
complexity arises essentially from the hostile
nature of wireless channels, which are far from
being linear or time invariant. Other sources of
complication are the market need for user mobility,
global roaming, diverse high-quality services, and
high-data-rate multimedia services. These
constraints naturally lead to complex transmitter
and receiver architectures [1], [2], [7].
Wireless communications need to support high
data rates with high quality, and thus require a
wide transmission bandwidth. Increasing the
transmission rate generally turns the
communication channel into a frequency-selective
one. Frequency selectivity appears in the form of
inter-symbol interference (ISI) that results from
the multipath effect [1], [5]. Figure 1 illustrates
how inter-symbol interference is generated when
the signal bandwidth is large (the case for
high-bit-rate signals). The condition for frequency
selectivity is that the signal bandwidth (B_T) is
much larger than the channel coherence bandwidth
(B_c), i.e.

B_T >> B_c        (1)

Figure 1: ISI generation for a large signal
bandwidth (a large signal bandwidth combined
with multipath yields frequency selectivity, which
manifests as inter-symbol interference)
Multicarrier Modulation (MCM) is an efficient
transmission technique that consists of splitting up
a high-symbol-rate channel into parallel lower-rate
sub-channels, each with a narrower band. MCM
divides the available channel bandwidth into M
sub-channels through the use of narrowband
subcarriers [7].
Figure 2 illustrates the MCM principle: the input
bit stream to be transmitted over the channel is
divided into blocks of length b bits, allocated
among the subcarriers such that the k-th subcarrier
carries a set of b_k bits. Thus the total number of
bits b per block can be expressed as:

b = Σ_{k=0}^{M-1} b_k        (2)

The number of bits b_k can vary from one
subcarrier to another. To increase the bit rate, we
can increase the number of bits on subcarriers that
have higher signal-to-noise ratios (SNR) [2].
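As a toy illustration of allocating the block of b bits across subcarriers according to SNR, per Eq. (2), the greedy sketch below assigns bits one at a time to the subcarrier with the most remaining SNR margin, charging roughly 3 dB of margin per additional bit. This cost model and the scheme itself are our own simplified assumptions, not an algorithm from this paper:

```python
# Greedy SNR-based bit loading: the per-block total is b = sum(b_k),
# and high-SNR subcarriers receive more bits.

def bit_loading(snr_db, total_bits):
    """Assign total_bits among len(snr_db) subcarriers, favouring high SNR
    (a simplified illustration, not an optimal water-filling algorithm)."""
    bits = [0] * len(snr_db)
    margin = list(snr_db)                 # remaining SNR margin per subcarrier
    for _ in range(total_bits):
        k = max(range(len(margin)), key=lambda i: margin[i])
        bits[k] += 1
        margin[k] -= 3.0                  # assumed ~3 dB cost per additional bit
    return bits

b_k = bit_loading([21.0, 9.0, 15.0, 3.0], total_bits=16)
print(b_k, sum(b_k))   # allocation favours the 21 dB subcarrier; total is 16
```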
Figure 2: Multicarrier modulation (the input bit
stream passes through a serial-to-parallel
converter, per-subcarrier symbol mappers
producing x_0(n) ... x_{M-1}(n), and subcarrier
modulators, followed by D/A conversion to the
channel)

Figure 3: Comparison of sub-channel filter
magnitude responses in the case of OFDM and
FBMC

Multicarrier systems provide many attractive
properties for high-rate wireless communications.
A major advantage of the multicarrier approach is
its robustness to the multipath effect, and therefore
to ISI, since multicarrier modulation splits the
high-bit-rate incoming sequence into several
parallel lower-bit-rate sequences. The number of
parallel sequences can be adjusted such that the
wireless channel becomes frequency non-selective.
Filter bank based multicarrier (FBMC) systems
offer a number of benefits over conventional
multicarrier systems based on Orthogonal
Frequency Division Multiplexing (OFDM) [2],
[6], [7].
The most obvious difference between the two
techniques is frequency selectivity. OFDM
exhibits large ripples in the frequency domain. In
contrast, FBMC divides the transmission channel
of the system into a set of sub-channels, where any
sub-channel overlaps only with its immediate
neighbors, as shown in Figure 3 [2], [6].
Channel equalization is a technique that can be used to improve received-signal quality and link performance, as it is used to combat ISI. The equalization process has two important tasks: the first is to mitigate the ISI effect, and the other is to prevent noise-power enhancement in the received signal while mitigating ISI. These tasks must be balanced in frequency-selective channel equalization. Since the channel is random and time varying, the equalizer must track the time-varying characteristics of the channel; it is therefore called an adaptive (or trained) equalizer. The adaptive algorithm is driven by an error signal e(n), obtained by comparing the equalizer output with a reference signal that is a replica of the transmitted signal. The adaptive algorithm uses e(n) to update the equalizer weights in such a way that the cost function is reduced iteratively [4].
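As a toy illustration of this error-driven update (not the paper's equalizer, which comes later), a single adaptive weight can track an unknown channel gain by descending the squared-error cost:

```python
# Toy example: one adaptive weight w tracks an unknown gain g by
# iteratively reducing the squared error e^2, with e = d - w*x.
g = 0.7          # unknown "channel" gain (assumed for illustration)
w = 0.0          # adaptive weight, initialized to zero
mu = 0.1         # step size
for n in range(200):
    x = 1.0 if n % 2 == 0 else -1.0   # known training (reference) symbols
    d = g * x                          # received reference signal
    e = d - w * x                      # error signal
    w = w + mu * e * x                 # update reduces the cost iteratively
assert abs(w - g) < 1e-6   # the weight converges to the channel gain
```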
Generally, channel equalization plays a major role in enhancing the performance of communication systems. In an FBMC system, simple sub-channel equalization can be used, where the equalizers take the form of FIR filters [1] with a very small number of taps. It should be emphasized that using multicarrier modulation leads to a substantial simplification of the required channel equalizers, due to its ability to combat the multipath/ISI effects of the frequency-selective wireless channel [5].
2 LITERATURE REVIEW
Tarayrah and Abu Al-Haija in [1] (2013) proposed a preliminary study of this paper as a reconfigurable design of channel equalization algorithms for an FBMC system with offset QAM modulation (OQAM). They optimized their algorithms based on the mean-squared error (MSE) criterion and the Least-Mean-Square (LMS) algorithm, and applied them to each sub-carrier, which resulted in an adaptive equalizer with lower complexity.
Yahia Medjahdi et al. in [8] (2011) provided a theoretical performance evaluation of the downlink of asynchronous orthogonal frequency division multiplexing (OFDM) and filter bank based multicarrier (FBMC) cellular radio communication systems, and developed an accurate derivation of the interference caused by timing synchronization errors in the neighboring cells. Their system model considered the multipath effects on the interfering and desired signals, as well as the frequency-correlated fading that arises with a block subcarrier assignment scheme. Finally, they derived exact expressions for the average error rates of OFDM and FBMC systems.
Tero Ihalainen et al. in [9] (2011) studied channel equalization in filter bank multicarrier (FBMC) transmission based on offset quadrature-amplitude modulation (OQAM) subcarrier modulation. They derived finite impulse response (FIR) per-subchannel equalizers based on the frequency sampling (FS) approach, both for single-input multiple-output (SIMO) receive diversity and for multiple-input multiple-output (MIMO) spatially multiplexed FBMC/OQAM systems. Their FS design consisted of computing the equalizer in the frequency domain at a number of frequency points within a subchannel bandwidth and then deriving the coefficients of the subcarrier-wise equalizers. They also evaluated the error rate performance and computational complexity for both antenna configurations and compared them with SIMO/MIMO OFDM equalizers. The results obtained confirmed the effectiveness of their proposed technique on channels that exhibit significant frequency selectivity at the subchannel level, and showed a performance comparable with the optimum minimum mean-square-error equalizer at a significantly lower computational complexity.
Tero Ihalainen et al. in [10] (2010) presented a new approach to generating an FBMC signal waveform, focusing on the contiguous narrowband subcarrier allocation that is often encountered in uplink transmission. Their proposed scheme relies on a cascade of a P-subchannel synthesis bank (P << M), time-domain interpolation, and a user-specific frequency shift. Their approach provided notable computational complexity savings over a wide range of practical user bandwidth allocations when compared to the conventional implementation consisting of equally sized filter banks. Their novel scheme also provided a flexible and low-complexity synthesis of spectrally well-localized FBMC uplink waveforms with strong potential for future broadband mobile communications.
Tobias Hidalgo Stitz et al. in [7] (2010) presented a detailed analysis of synchronization methods based on scattered pilots for filter bank based multicarrier (FBMC) communications, taking into account the interplay of the synchronization, channel estimation and equalization methods. They showed that by applying pilots designed specifically for filter banks, the carrier frequency offset (CFO), fractional time delay (FTD) and channel response can be accurately estimated. The channel parameter estimation and compensation were performed in the frequency domain, in a subchannel-wise fashion. Finally, the performance evaluation was applied in a hypothetical WiMAX scenario in which an FBMC system would substitute for OFDM while maintaining as much physical layer compatibility as possible.
Eleftherios Kofidis and Athanasios A. Rontogiannis in [11] (2010) proposed an adaptive T/2-spaced decision-feedback equalization (DFE) algorithm for MIMO-FBMC/OQAM systems that is both computationally efficient and numerically stable. The structure of their algorithm followed the V-BLAST idea, with the algorithm applied in a per-subcarrier fashion. Their simulation results showed that the proposed algorithm is effective in time-varying MIMO channels with high frequency selectivity.
Aissa Ikhlef and Jerome Louveaux in [12] (2009) investigated the problem of equalization for filter bank multicarrier modulation based on offset QAM (FBMC/OQAM). They found that the existing equalizers for FBMC/OQAM do not completely cancel inter-carrier interference (ICI), so some ICI remains even after equalization. To cope more efficiently with ICI, they proposed a two-stage MMSE (TS-MMSE) equalizer: the first stage consists of applying an MMSE equalizer and taking some preliminary decisions, then using these decisions to remove the ICI term from the received signal on each subchannel. The resulting signal from the first stage has its ICI practically removed (up to channel estimation errors and decision errors) and is therefore corrupted only by inter-symbol interference (ISI) and additive noise. In the second stage they applied an MMSE equalizer that copes only with ISI. The proposed two-stage MMSE equalizer showed better performance than the classical MMSE one.
Leonardo G. Baltar, Dirk S. Waldhauser and Josef A. Nossek in [13] (2009) presented a per-subchannel nonlinear equalizer for a class of filter bank based multicarrier (FBMC) systems. They considered the class of exponentially modulated FBMCs with offset quadrature amplitude modulated (OQAM) input symbols and designed a fractionally spaced decision feedback equalizer (DFE) that minimizes the mean squared error (MMSE) while taking into account the inter-(sub)channel interference (ICI). Their simulation results showed that, despite the increased computational complexity, the performance and higher bandwidth efficiency of OQAM FBMC systems make them a competitive alternative to conventional multicarrier systems such as cyclic prefix based orthogonal frequency division multiplexing (CP-OFDM).
Y. Medjahdi et al. in [14] (2009) focused on the downlink of multi-cellular networks and investigated the influence of inter-cell interference in an unsynchronized frequency division duplex (FDD) context with a frequency reuse of 1. They compared conventional orthogonal frequency division multiplexing with cyclic prefix modulation (CP-OFDM) and filter bank based multi-carrier modulation (FBMC). Finally, they evaluated the performance in terms of average capacity of FBMC multi-cell networks compared to CP-OFDM ones.
Ari Viholainen et al. in [15] (2009) discussed efficient prototype filter design in the context of filter bank based multicarrier (FBMC) transmission. They analyzed the performance of various designs using the offset-QAM based FBMC system and provided numerical results characterizing different optimization criteria in terms of the frequency selectivity of the resulting prototype filters and the total interference level of the filter bank structure. Finally, they showed the kind of performance trade-offs that can be obtained by adjusting the free parameters, which offers useful information to a system designer.
Aissa Ikhlef and Jerome Louveaux in [16] (2009) studied the distortions in multiple-input multiple-output (MIMO) filter bank multicarrier modulation (FBMC) systems, such as inter-symbol interference (ISI), inter-carrier interference (ICI) and, when multiple antennas are used, inter-antenna interference (IAI), all of which are caused by the frequency-selective transmission channel. To mitigate these distortions, they derived an MMSE equalizer assuming spatial multiplexing, and proposed successive interference cancellation (SIC) and ordered SIC (OSIC) techniques to extract the transmitted streams, further improving performance by introducing a two-stage OSIC (TS-OSIC) technique. Their simulation results confirmed the effectiveness of the proposed techniques over the classical one-tap equalizer.
Dirk S. Waldhauser, Leonardo G. Baltar and Josef A. Nossek in [17] (2008) presented a least-mean-square (LMS) algorithm adapted to the principle of orthogonally multiplexed QAM filter banks (OQAM-FBMC), which led to an adaptive equalizer solution with low complexity. The initialization of the LMS equalizer resulted from a pilot-based channel estimation. They compared their results with a classical OFDM system, where the loss in data rate is compensated with a higher-order modulation scheme.

In this paper, we propose an adaptive channel equalization algorithm for an FBMC system with offset QAM modulation based on the well-known mean-squared error (MSE) cost function. We design an LMS equalizer for every subcarrier in order to gain the benefit of using a different step-size value for each subcarrier.
3 PROPOSED SYSTEM MODEL
Figure 4 contains the general block diagram of the FBMC system used in this paper; the main processing blocks are OQAM pre-processing, the synthesis filter bank (SFB), the analysis filter bank (AFB), OQAM post-processing and LMS per-subcarrier equalization. We assumed a linear time-varying (LTV) channel model that is based on an AWGN channel and has physical characteristics such as multipath propagation. We focused on a specific prototype filter length; depending on this length, an extra delay has to be introduced at the SFB or AFB input, where the delay depends on the overlapping factor K [1].

Figure 4: FBMC system

3.1 OQAM PREPROCESSING


The first block is the OQAM pre-processing block, which converts the QAM symbols into OQAM symbols. There are two steps to convert QAM symbols into OQAM. First, a simple complex-to-real conversion is required; this conversion differs for even and odd sub-channels, as shown in Table 1, and it increases the sample rate by a factor of two [1], [5], [7]. Second, the conversion is followed by multiplication by the sequence θ_{k,n}, where, as an example [7],

θ_{k,n} = j^(k+n)   (3)

and n is a discrete time index that runs at twice the rate of the QAM symbols. The pattern of real and imaginary samples must follow the sign of the θ_{k,n} sequence.


After the OQAM preprocessing, the input signals


are either pure real or pure imaginary.
Converting the QAM symbols to OQAM format involves two important specificities [5]:
A time offset of half a QAM symbol period is applied to either the real part or the imaginary part of the QAM symbol when the OQAM signal is generated.
For two successive sub-channels, say k and k+1, the offset is applied to the real part of the QAM symbol in sub-channel k, while it is applied to the imaginary part of the QAM symbol in sub-channel k+1.
The OQAM symbols will be denoted by d_{k,n}; they are given in Table 1.

Figure 5: Polyphase filter bank structure: synthesis filter bank (SFB).

The output signal of the SFB is complex-valued. We can express the discrete-time baseband signal at the output of the SFB of an FBMC transmitter based on OQAM modulation as

s(m) = Σ_{k=0}^{M−1} Σ_n θ_{k,n} d_{k,n} g_k(m − nM/2)

where θ_{k,n} = j^(k+n) for even/odd values of k and n, M is the number of subcarriers, d_{k,n} is the real-valued symbol on the k-th subcarrier during the n-th symbol interval, and g_k are the shift-invariant impulse responses of the SFB channel filters.

Table 1: OQAM symbols d_{k,n}, where c_{k,n} are the input QAM symbols

            n even                        n odd
k even      d_{k,n} = Re(c_{k,n/2})       d_{k,n} = Im(c_{k,(n−1)/2})
k odd       d_{k,n} = Im(c_{k,n/2})       d_{k,n} = Re(c_{k,(n−1)/2})
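The two pre-processing steps (complex-to-real conversion at twice the rate, followed by multiplication by θ_{k,n} = j^(k+n)) can be sketched as below; the exact staggering convention is an assumed common choice and may differ in detail from the tables of [7].

```python
import numpy as np

def oqam_preprocess(c, k):
    """Convert complex QAM symbols c (sub-channel k) into a real OQAM
    sequence at twice the rate, then apply theta_{k,n} = j^(k+n)."""
    # Step 1: complex-to-real conversion (sample rate doubles). Even
    # sub-channels send Re then Im; odd ones Im then Re (assumed convention).
    d = np.empty(2 * len(c))
    if k % 2 == 0:
        d[0::2], d[1::2] = c.real, c.imag
    else:
        d[0::2], d[1::2] = c.imag, c.real
    # Step 2: multiply by the theta sequence j^(k+n).
    n = np.arange(2 * len(c))
    return d * (1j ** (k + n))

sym = np.array([1 + 1j, -1 + 1j])
out = oqam_preprocess(sym, k=0)
# After pre-processing, every sample is pure real or pure imaginary.
assert np.isclose(out.real * out.imag, 0).all()
assert len(out) == 2 * len(sym)   # sample rate increased by a factor of two
```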

3.2 SYNTHESIS FILTER BANK

The synthesis filter bank consists of simple multipliers, an IFFT, polyphase filtering, and a parallel-to-serial converter (up-samplers by M/2 and a delay chain), as shown in Figure 5.

3.3 ANALYSIS FILTER BANK

The analysis filter bank consists of a serial-to-parallel converter (delay chain and down-samplers by M/2), polyphase filtering, an FFT, and simple multipliers, as shown in Figure 6.

Figure 6: Polyphase filter bank structure: analysis filter bank (AFB).

Then the AFB output at time instant n on sub-channel k is

y_k(n) = Σ_m r(m) f_k(nM/2 − m)   (7)

where r(m) is the received signal and f_k are the impulse responses of the AFB channel filters. Because of the length of the prototype filters, the analysis output is subject to an additional filter-bank delay:

y_k(n) ≈ θ_{k,n−Δ} d_{k,n−Δ} + residual interference   (8)

where Δ denotes the overall delay. Here the extra delay is merged into the S/P converter.

3.4 OQAM POST-PROCESSING

The post-processing operation consists of two steps. First, the real part is taken after multiplication by the θ*_{k,n} sequence. The second operation is a real-to-complex conversion, in which two successive real-valued symbols (with one of them multiplied by j) form a complex-valued symbol c_{k,n}. This conversion decreases the sample rate by a factor of two [3], [7].

3.5 LMS PER-SUBCARRIER EQUALIZATION

Sub-channel equalization in this paper is based on the MSE criterion, and the LMS algorithm is used as the adaptive equalizer; the per-subcarrier equalizer works at T/2, where T is the symbol duration. One FIR equalizer per subcarrier is used to mitigate the ISI and ICI that result from the frequency-selective channel and to improve the symbol decisions. The LMS equalizer is used to minimize the mean square error (MSE) between the desired output of the equalizer and its actual output [1], [7].

In this paper we applied the equalizer to the real and imaginary parts of each sub-carrier individually. The filter weights are updated iteratively by the LMS recursion

y_k(n) = w_k(n)ᵀ u_k(n)   (10)
e_k(n) = d_k(n) − y_k(n)   (11)
w_k(n+1) = w_k(n) + μ e_k(n) u_k(n)   (12)

where u_k(n) is the equalizer input vector, w_k(n) the weight vector and μ the step size on sub-carrier k. The mean square error at time instant n is then

J_k(n) = E[|e_k(n)|²]   (13)

3.6 THE COMMUNICATION CHANNEL

Wireless communication channels can affect the input signals through different mechanisms, such as linear and nonlinear distortion, additive random noise, fading and interference. The communication channel can be modeled as a linear time-invariant system with transfer function C(z), followed by a zero-mean additive white Gaussian noise (AWGN) source η(n). We cannot apply this model directly to wireless communication channels, because they are time-varying; in fact, a short-term time-invariant representation of a wireless channel is useful in most cases of practical interest. Figure 7 shows a simplified block diagram of a communication channel [6].

Figure 7: Linear time-invariant communication channel model (x(n) → C(z) → y(n) → detector → x̂(n), with error e(n)).
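The per-subcarrier LMS recursion of Section 3.5 can be sketched as follows. The sub-channel here is a simple assumed FIR distortion plus noise rather than the full FBMC chain, and all names (h, u, w) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, n_iter = 3, 0.05, 4000            # taps, step size, iterations
h = np.array([1.0, 0.4, 0.2])            # assumed sub-channel impulse response
d = rng.choice([-1.0, 1.0], size=n_iter) # real OQAM-like training symbols
u = np.convolve(d, h)[:n_iter] + 0.01 * rng.standard_normal(n_iter)

w = np.zeros(N)                          # equalizer weight vector w_k(n)
mse_first = mse_last = 0.0
for n in range(N - 1, n_iter):
    x = u[n - N + 1:n + 1][::-1]         # input vector u_k(n)
    y = w @ x                            # equalizer output, eq. (10)
    e = d[n] - y                         # error signal, eq. (11)
    w = w + mu * e * x                   # weight update, eq. (12)
    if n < N - 1 + 100:
        mse_first += e * e               # squared error early on
    if n >= n_iter - 100:
        mse_last += e * e                # squared error after convergence
assert mse_last < mse_first              # the MSE decreases as LMS converges
```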

The channel input x(n) is assumed to consist of the sum of M sequences, each transmitted in one of the M subchannels. Mathematically, we have

x(n) = Σ_{k=0}^{M−1} x_k(n)   (14)

The output of the channel can then be written as

y(n) = Σ_l c(l) x(n − l) + η(n)   (15)

The received signal y(n) is a noisy and distorted version of the input signal x(n). The estimate x̂(n) at the output of the detector may not be identical to the transmitted signal, because of the ISI caused by the channel (itself due to multipath fading and the bandwidth limitation) and because of the noise η(n); this gives a nonzero probability of error. In wireless communication the channels are time-varying, so a single C(z) cannot be used to represent them successfully [6].

3.7 DOWNSAMPLERS/UPSAMPLERS

Up-samplers are part of the SFB, where they increase the sampling rate: the (M/2)-fold up-sampler inserts M/2 − 1 zeros between adjacent samples of its input signal. The up-sampling operation is illustrated in Figure 8 [6].

Figure 8: (M/2)-fold up-sampler.

The input-output relationship of the up-sampler can be expressed as

y(n) = x(2n/M) if n is an integer multiple of M/2, and y(n) = 0 otherwise   (16)

The down-sampler is part of the AFB, where it reduces the sampling rate. The (M/2)-fold down-sampling operation is illustrated in Figure 9 [6]. An (M/2)-fold down-sampler performs the input-output relation

y(n) = x(nM/2)   (17)

Figure 9: (M/2)-fold down-sampler.

4 SIMULATION ENVIRONMENTS

We used the computer simulation software package MATLAB to simulate the performance of the OQAM FBMC system with the per-subcarrier LMS equalizer. We assumed a 1 MHz channel bandwidth with M sub-channels and overlapping factor K, based on OQAM modulation. We assumed a linear time-varying (LTV) channel model that is based on an AWGN channel and has physical characteristics such as multipath propagation, and we used practical channel information based on the International Telecommunication Union (ITU) standards (Vehicular channel type A) and 12 dB SNR. In the LMS sub-channel equalizer we used two different step sizes for several sub-channels and a 3-tap equalizer.

5 RESULTS AND COMPARISONS

The following simulation results illustrate the performance of the sub-channel equalizers, which were evaluated in MATLAB as shown in the flowchart of Figure 12. The prototype filter was designed with the frequency sampling technique.

To design the prototype filter we used a simple technique called the frequency sampling technique. We started the design by determining the desired values of the filter in the frequency domain:
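The up- and down-sampling relations (16) and (17) can be checked with a few lines; here M/2 = 4 is an arbitrary illustrative factor.

```python
M2 = 4                                   # M/2, the up/down-sampling factor
x = [1, 2, 3, 4]

# Up-sampler (eq. 16): insert M/2 - 1 zeros between adjacent samples.
up = []
for s in x:
    up += [s] + [0] * (M2 - 1)

# Down-sampler (eq. 17): keep every (M/2)-th sample, y(n) = x(n*M/2).
down = up[::M2]

assert up == [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0]
assert down == x                         # down-sampling undoes the up-sampling
```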


(18)

Then, the prototype filter coefficients h(m) are obtained by the inverse DFT of these desired values. In fact, the condition h₁ = 0 determines the desired values H₁ and H₃. It is necessary to make the number of coefficients odd, so that the filter delay is an integer number of sample periods. The impulse response of the prototype filter is shown in Figure 10, and Figure 11 shows the frequency response obtained. In these figures, the sub-channel spacing Δf is taken as unity (Δf = 1).

Another important feature is the background noise of the system, due to the non-orthogonality of the sub-channels with even indices with respect to the sub-channel with zero index. With this prototype filter, the background noise is σ_b² ≈ −65 dB. Finally, the convergence characteristics of the LMS algorithm were evaluated. In the simulation results, N represents the number of equalizer taps and Mu (μ) represents the step size of the equalizer.
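A sketch of the frequency-sampling design: choose the desired frequency-domain samples H_k and take an inverse DFT. The H_k values below are widely used K = 4 choices from the FBMC literature, assumed here purely for illustration; the paper's exact numbers are not reproduced in the text.

```python
import cmath

M, K = 8, 4                   # subcarriers and overlapping factor (assumed)
L = K * M                     # inverse-DFT length
# Desired frequency-domain samples H_k (assumed K = 4 literature values).
Hk = [1.0, 0.971960, 2 ** -0.5, 0.235147]

A = [0.0] * L                 # full frequency response, zero elsewhere
A[0] = Hk[0]
for k in range(1, K):
    A[k] = A[L - k] = Hk[k]   # conjugate-symmetric placement -> real filter

# Prototype coefficients via inverse DFT.
h = [sum(A[k] * cmath.exp(2j * cmath.pi * k * m / L) for k in range(L)) / L
     for m in range(L)]
h = [c.real for c in h]       # imaginary parts vanish up to rounding

assert h[0] == max(h)                                # main lobe at m = 0 here
assert all(abs(h[m] - h[(L - m) % L]) < 1e-9 for m in range(L))  # symmetric
```

A causal implementation would additionally apply a linear-phase (delay) shift; that step is omitted in this sketch.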

Figure 12: Flowchart of Sub-channel equalization based on


FBMC (Read Left To- Right)
Vehicular A, Mu=0.008,N=3, Sub-carrier # 74
6
5.5
5

impulse response
1.2

4.5
4

MSE(dB)

0.8

3.5

magnitude

0.6

2.5
2

0.4
1.5

0.2

100

200

300

400
500
600
Number of Iterations

700

800

900

1000

-0.2

500

1000

1500

2000

2500

time

Figure 13: MSE vs. Number of Iteration for Vehicular a

Figure 10: Impulse Response of the prototype filter

channel

Figure 11: Frequency response of the prototype filter

It is important to notice that the filter attenuation


exceeds 60 dB for the frequency range above 2
sub-channel spacings.

Figure 14: MSE vs. number of iterations for the Vehicular A channel, with 0.007 step size (N = 3, sub-carrier #74).

The convergence characteristic curves of the LMS algorithm for subcarrier number 74 are shown in Figures 13 and 14. They were obtained for the Vehicular channel type A, a 3-tap equalizer and 12 dB SNR. We notice that the MSE in the Vehicular A channel model with step size μ = 0.007 converges to −18 dB, which is better than with μ = 0.008, which converges to 4 dB.
A comparison between the convergence characteristic curves of the LMS algorithm for different subcarriers (173, 253 and 330) and different equalizer step sizes is shown in Figures 15 and 16.

Figure 15: Comparison of MSE vs. number of iterations for different subcarriers (SC#173, SC#253, SC#330), μ = 0.008 (Vehicular A, N = 3).

Figure 16: Comparison of MSE vs. number of iterations for different subcarriers (SC#173, SC#253, SC#330), μ = 0.007 (Vehicular A, N = 3).

We notice that for subcarrier 173 the algorithm converges quickly to −15 dB with the 0.008 step size, while with step size 0.007 it converges to −10 dB. However, for subcarrier 330 it converges quickly to −10 dB with the 0.007 step size, and with the 0.008 step size it converges to 4 dB.

We notice from these results that the same subcarrier can have a good convergence curve at certain step sizes of the equalizer and a bad one at other step sizes, while at the same time another subcarrier behaves in the opposite way for the same step-size values. This is because frequency selectivity gives each subcarrier a different correlation matrix, which dictates the step size that should be taken.

6 CONCLUSIONS

Using FBMC systems eliminates inter-carrier interference (ICI) and ISI and renders each sub-channel frequency non-selective; there is therefore no need for a complex equalizer, and a simple adaptive LMS equalizer is enough. Moreover, with FBMC systems the data rates can be increased over large bandwidths, as demanded in wireless communication.

We have designed adaptive channel equalization algorithms for FBMC systems with offset QAM modulation and perfect channel estimation. We have optimized the mean squared error (MSE) criterion using a simple adaptive LMS equalizer, chosen for its simplicity; the LMS per-subchannel equalizer operates as a fractionally spaced (T/2) equalizer, which avoids irrevocable aliasing of the subchannels. We have also optimized and compared the convergence curves of the LMS equalization algorithms, and applied the equalizer to practical channel information based on the ITU standards (Vehicular A) at 12 dB SNR, with different equalizer step sizes and subcarriers. Because of frequency selectivity, every sub-channel has a different optimum LMS equalizer step size, based on the correlation matrix of that sub-channel, as illustrated in the simulation results.
ACKNOWLEDGEMENT
The authors appreciate the publication support of the Deanship of Scientific Research at King Faisal University (KFU), Al-Ahsa.

7 REFERENCES
1. Mashhoor Al Tarayrah and Qasem Abu Al-Haija, "LMS-based Equalization in Filter Bank Multicarrier Wireless Communication Systems", The International Conference on Digital Information Processing, E-Business and Cloud Computing (DIPECC2013), UAE, pp. 87-9, October 2013.
2. Tero Ihalainen, Yuan Yang and M. Renfors, "Filter Bank Based Frequency-Domain Equalizers with Diversity Combining", IEEE International Symposium on Circuits and Systems, pp. 93-96, 2008.
3. John Proakis and Masoud Salehi, Digital Communications, Fifth Edition, McGraw-Hill, 2008.
4. P. Siohan, C. Siclet, and N. Lacaille, "Analysis and Design of OFDM/OQAM Systems Based on Filterbank Theory", IEEE Transactions on Signal Processing, vol. 50, no. 5, pp. 1170-1183, May 2002.
5. Tero Ihalainen et al., "Channel equalization in filter bank based multicarrier modulation for wireless communications", EURASIP Journal on Advances in Signal Processing, 2007.
6. P. P. Vaidyanathan, "Filter Banks in Digital Communications", IEEE Circuits and Systems Magazine, vol. 1, no. 2, pp. 4-25, 2001.
7. Tobias Hidalgo Stitz, Tero Ihalainen, A. Viholainen, and M. Renfors, "Pilot-Based Synchronization and Equalization in Filter Bank Multicarrier Communications", EURASIP Journal on Advances in Signal Processing, 2010, 18 pages.
8. Yahia Medjahdi et al., "Performance Analysis in the Downlink of Asynchronous OFDM/FBMC Based Multi-Cellular Networks", IEEE Transactions on Wireless Communications, vol. 10, no. 8, August 2011.
9. Tero Ihalainen et al., "Channel Equalization for Multi-Antenna FBMC/OQAM Receivers", IEEE Transactions on Vehicular Technology, vol. 60, no. 5, June 2011.
10. Tero Ihalainen et al., "Generation of Filter Bank-Based Multicarrier Waveform Using Partial Synthesis and Time Domain Interpolation", IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 57, no. 7, July 2010.
11. Eleftherios Kofidis and Athanasios A. Rontogiannis, "Adaptive BLAST Decision-Feedback Equalizer for MIMO-FBMC/OQAM Systems", IEEE 21st International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2010.
12. Aissa Ikhlef and Jerome Louveaux, "An Enhanced MMSE Per Subchannel Equalizer for Highly Frequency Selective Channels for FBMC/OQAM Systems", IEEE 10th Workshop on Signal Processing Advances in Wireless Communications (SPAWC '09), 2009.
13. Leonardo G. Baltar, Dirk S. Waldhauser and Josef A. Nossek, "MMSE Subchannel Decision Feedback Equalization for Filter Bank Based Multicarrier Systems", IEEE Circuits and Systems, 2009.
14. Yahia Medjahdi et al., "Inter-Cell Interference Analysis for OFDM/FBMC Systems", on Signal.
15. Ari Viholainen et al., "Prototype Filter Design for Filter Bank Based Multicarrier Transmission", 17th European Signal Processing Conference (EUSIPCO 2009).
16. Aissa Ikhlef and Jerome Louveaux, "Per subchannel equalization for MIMO FBMC/OQAM system", on Communications.
17. Dirk S. Waldhauser, Leonardo G. Baltar and Josef A. Nossek, "Adaptive Equalization for Filter Bank Based Multicarrier Systems", IEEE Circuits and Systems, 2008.

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 411-429
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Elliptic Jes Window Forms in Signal Processing


Claude Ziad Bayeh
EEE GROUP
R&D department
LEBANON
Email: c.bayeh@eee-group.net
claude_bayeh_cbegrdi@hotmail.com

ABSTRACT

The Elliptic Jes window forms are original studies introduced by the author in mathematics and signal processing in 2012. They are based on an elliptical trigonometric function, Ejes, which can produce a large number of different signals and shapes by varying only one parameter. In this paper, the developed study is the application of Elliptical Trigonometry in signal processing, in which some formulae are introduced using the function Ejes. These formulae have many advantages over the traditional window functions, such as improving the convergence of the Fourier series at a discontinuity more rapidly than the traditional window functions; the proposed window functions are used to truncate the Fourier series with variable window shapes that keep the necessary information about the signal even after truncation. The proposed window functions are variable in form: they can take a huge number of different forms by varying only a few parameters. The proposed window functions can be used in both analog and digital filter design. In fact, General Trigonometry and its sub-topics, such as Elliptical Trigonometry, can also have other applications in any scientific field that uses trigonometry, and they can improve previous studies by replacing the traditional trigonometric functions such as cosine and sine with general trigonometric functions such as Gjes and Gmar.

KEYWORDS

Elliptical Trigonometry, Window functions, Signal processing, Fourier series, Truncated series.

1 INTRODUCTION

In mathematics and in signal processing, a window function is by definition zero-valued outside of some chosen interval [1-3]. A typical example is the rectangular window, in which any curve inside the window is conserved and any curve outside the window is set equal to zero [6-15]. When another function or a signal (data) is multiplied by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap. Applications of window functions include filter design, spectral analysis and beamforming [4-5], [28] and [33]. In typical applications, the window functions used are non-negative smooth "bell-shaped" curves, though rectangular, triangular and other functions are sometimes used [1-2]. They are used mainly to improve the convergence of the Fourier series at discontinuities [1-2].

In this paper, the author introduces four new window functions using an elliptical trigonometric function, the Elliptic Jes function [16-17]. Elliptical Trigonometry, also introduced by the author, can be considered the basis of a new generation of signal processing, electronics and electrical systems based on variable signals [17]. The new window functions based on the Elliptical Trigonometry have huge advantages over the traditional window functions based on traditional trigonometry. This will be discussed in this paper.
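Applying a window is just pointwise multiplication; outside the chosen interval the product is zero. A rectangular window, with illustrative sample values, makes this explicit:

```python
signal = [0.5, 1.0, -0.3, 0.8, -1.2, 0.4]
# Rectangular window: 1 inside the chosen interval [2, 4], 0 outside.
window = [1.0 if 2 <= n <= 4 else 0.0 for n in range(len(signal))]
product = [s * w for s, w in zip(signal, window)]
# Only the part where signal and window overlap is left.
assert product == [0.0, 0.0, -0.3, 0.8, -1.2, 0.0]
```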


2 BRIEF INTRODUCTION TO THE ELLIPTIC JES FUNCTION

The Elliptic Jes function (Ejes(x)) is a function
of the Elliptical Trigonometry defined in
the papers [16-17]. In this paper we will use it only
to create new window functions based on the
Elliptical Trigonometry.
The function Ejes(x) can describe an infinite
number of forms, but in this section we will see
only some important ones, as depicted in the
following figures. Figures 1.a to 1.f represent
multi form signals obtained by varying one
parameter (b) of the function Ejes(x).

[Figure 1: multi form signals of the function Ejes(x) for different values of b > 0; panels a) b = 0.001, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]

Important signals obtained using this function include:
impulse train with positive and negative parts,
elliptic deflated, quasi-triangular, sinusoidal,
elliptic swollen, square and rectangular signals [40].
These types of signals are widely used in power
electronics, electrical generators, signal processing
and in the transmission of analog signals [16-17], [35-45].

3 ELLIPTIC JES WINDOW FORM 3 FUNCTION

The Elliptic Jes window form 3 function is the
application of the Elliptic Jes function in signal
processing. It takes the following forms:

EjesW3(x) = 25/46 − (21/46)·Ejes(2πx)   (1)

with 0 ≤ x ≤ 1 and b > 0. And

w(n) = 25/46 + (21/46)·Ejes(πn/N)   (2)

with n ∈ ℤ and N ∈ ℕ*.

So the truncated Fourier series using the Elliptic
Jes window form 3 takes the following form:

f(x) = Σ_{n=−N..N} [ 25/46 + (21/46)·Ejes(πn/N) ]·c_n·e^{jnx}   (3)

3.1 Variable shapes of window formed by Elliptic Jes window form 3

The formed shapes of this function can be drawn
using MATLAB. In figures 2.a to 2.f, different
shapes of the window function are formed by
varying only one parameter, b.


[Figure 2: multi form signals of the function Elliptic Jes window form 3 for different values of b > 0; panels a) b = 0.001, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]

In fact, this window is very important because it has a
variable amplitude that can be changed as we wish
over a period or a half period. Applications of
window functions include spectral analysis, filter
design, beamforming and telecommunications. A
more general definition of window functions does
not require them to be identically zero outside an
interval, as long as the product of the window
multiplied by its argument is square integrable,
that is, the function goes sufficiently rapidly
toward zero.
3.2 Programming the function Elliptic Jes window form 3 using MATLAB

%-----------------------------------------------------------
%Elliptic Jes Window form 3
%Introduced by Claude Ziad Bayeh in 2012-06-21
clc
close all
M=2;
a=1; x=0:0.0001:M-1;
fprintf('---Elliptic Jes Window form 3 Introduced by Claude Ziad Bayeh in 2012-06-21---\n');
fprintf('---------------------------------------------\n');
repeat='y';
while repeat=='y'
b=input('determine the form of the Elliptic trigonometry: b=');
fprintf('b is a variable can be changed to obtain different signals \n');
%b is the intersection of the Ellipse and the axe y'oy in the positive part.
if b<=0,
b
error('ATTENTION: ERROR b must be greater than Zero');
end;
Ejes=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x); % the Elliptic Jes "Ejes"
Emar=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x).*tan(x).*a/b; % the Elliptic Mar "Emar"
% Elliptic Jes Window form 3
EjesW3=25./46-21./46.*((1./(sqrt(1.+((a/b).*tan(2.*pi.*x)).^2))).*angx(2.*pi.*x));
plot(x,EjesW3);
%xlabel('X''OX axis'); ylabel('f(x)=EjesW3(x)');
title('Elliptic Jes Window form 3');
axis([0 M-1 0 1.1]);
grid on; %grid minor (for more details) / grid on (for less details)
fprintf('Do you want to repeat ?\nPress y for ''Yes'' or any key for ''No''\n');
repeat=input('Y/N=','s'); %string input
clc
close all
end; %End while
%-----------------------------------------------------------
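The MATLAB listing depends on the author's angx helper from the angular functions [35]. As a cross-check outside MATLAB, the same window can be sketched in Python; here angx(x) is assumed to act as the quadrant sign, sign(cos x), which restores the sign removed by the square root (an assumption based on [35], not stated in the listing):

```python
import numpy as np

def ejes(x, a=1.0, b=1.0):
    # Elliptic Jes, assuming angx(x) == sign(cos(x)) (quadrant-sign assumption).
    return np.sign(np.cos(x)) / np.sqrt(1.0 + ((a / b) * np.tan(x)) ** 2)

def ejes_window_form3(x, a=1.0, b=1.0):
    # Elliptic Jes window form 3: Hamming-style 25/46, 21/46 weighting
    # with Ejes in place of the cosine.
    return 25.0 / 46.0 - 21.0 / 46.0 * ejes(2.0 * np.pi * x, a, b)

x = np.linspace(0.0, 1.0, 1001)
w = ejes_window_form3(x, b=1.0)
```

With a = b = 1 the ellipse degenerates to a circle, Ejes reduces to the cosine, and the window coincides with the classical Hamming window 25/46 − (21/46)·cos(2πx); other values of b bend the taper.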

3.3 Advantages of the function Elliptic Jes window form 3 over the traditional window functions

Similar to other windows used in signal processing,
such as Hamming, Hanning, Blackman, Kaiser,
Lanczos, Tukey and many others, the
main goal of introducing the Elliptic Jes window
form 3 is to improve the convergence of the
Fourier series at the discontinuity.
The advantages of the new window function over
the traditional windows are:
-The proposed window function is variable in
form; it can take more than 6 different forms by
varying only one parameter.
-It can help the Fourier series to converge more
rapidly compared to the traditional ones.
-It can be used in both analog and digital filter design.
-It can truncate the Fourier series with a
variable window shape that keeps the necessary
information about the signal even after truncation.
4 ELLIPTIC JES WINDOW FORM 4 FUNCTION

The Elliptic Jes window form 4 function is the
application of the Elliptic Jes function in signal
processing. It takes the following forms:

EjesW4(x) = [ (1/2)·(1 − Ejes(2πx)) ]^(1/R)   (4)

with 0 ≤ x ≤ 1 and b > 0, where R, the radical of the
function, is an integer with R ≥ 1. And

w(n) = [ (1/2)·(1 + Ejes(πn/N)) ]^(1/R)   (5)

with n ∈ ℤ and N ∈ ℕ*.

So the truncated Fourier series using the Elliptic
Jes window form 4 takes the following form:

f(x) = Σ_{n=−N..N} [ (1/2)·(1 + Ejes(πn/N)) ]^(1/R) · c_n · e^{jnx}   (6)

4.1 Variable shapes of window formed by Elliptic Jes window form 4

The formed shapes of this function can be drawn
using MATLAB. In figures 3.a to 3.f, different
shapes of the window function are formed by
varying only one parameter, b. We
consider in this case that R = 1.


[Figure 3: multi form signals of the function Elliptic Jes window form 4 for different values of b > 0 and R = 1; panels a) b = 0.001, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]

If we change the value of R, we can increase the
performance of the window at the discontinuity
and at each point within the window. The formed
shapes of this function can be drawn using
MATLAB. In figures 4.a to 4.f, different
shapes of the window function are formed by
varying only one parameter, b. We
consider in this case that R = 4.

[Figure 4: multi form signals of the function Elliptic Jes window form 4 for different values of b > 0 and R = 4; panels a) b = 0.001, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]


In fact, this window is very important because it has a
variable amplitude that can be changed as we wish
over a period or a half period. Applications of
window functions include spectral analysis, filter
design, beamforming and telecommunications. A
more general definition of window functions does
not require them to be identically zero outside an
interval, as long as the product of the window
multiplied by its argument is square integrable,
that is, the function goes sufficiently rapidly
toward zero.
This window function can be considered one of the most
powerful of all existing ones, because it can form
an infinite number of shapes that help the Fourier
series to converge more or less rapidly as desired.
4.2 Programming the function Elliptic Jes window form 4 using MATLAB

%--------------------------------------------------------------------
%Elliptic Jes Window form 4
%Introduced by Claude Ziad Bayeh in 2012-06-21
clc
close all
M=2;
a=1; x=0:0.0001:M-1;
fprintf('---Elliptic Jes Window form 4 Introduced by Claude Ziad Bayeh in 2012-06-21---\n');
fprintf('-----------------------------------------------------------\n');
repeat='y';
while repeat=='y'
b=input('determine the form of the Elliptic trigonometry: b=');
fprintf('b is a variable can be changed to obtain different signals \n');
%b is the intersection of the Ellipse and the axe y'oy in the positive part.
if b<=0,
b
error('ATTENTION: ERROR b must be greater than Zero');
end;
R=input('determine the root of the window: R=');
fprintf('R is a variable can be changed to obtain variable amplitude of the window function\n');
if R<=0,
R
error('ATTENTION: ERROR "R" must be greater than Zero');
end;
Ejes=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x); % the Elliptic Jes "Ejes"
Emar=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x).*tan(x).*a/b; % the Elliptic Mar "Emar"
% Elliptic Jes Window form 4
X=(2.*pi.*x);
EjesW4=(1./2.*(1-(1./(sqrt(1.+((a/b).*tan(X)).^2))).*angx(X))).^(1./R);
plot(x,EjesW4);
%xlabel('X''OX axis'); ylabel('f(x)=EjesW4(x)');
title('Elliptic Jes Window form 4');
axis([0 M-1 0 1.1]);
grid on; %grid minor (for more details) / grid on (for less details)
fprintf('Do you want to repeat ?\nPress y for ''Yes'' or any key for ''No''\n');
repeat=input('Y/N=','s'); %string input
clc
close all
end; %End while
%--------------------------------------------------------------------
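For a quick numerical check of form 4 outside MATLAB, the window can be sketched in Python (again under the assumption, not stated in the listing, that the angx helper behaves as the quadrant sign sign(cos x)):

```python
import numpy as np

def ejes(x, a=1.0, b=1.0):
    # Elliptic Jes, assuming angx(x) == sign(cos(x)).
    return np.sign(np.cos(x)) / np.sqrt(1.0 + ((a / b) * np.tan(x)) ** 2)

def ejes_window_form4(x, R=1, a=1.0, b=1.0):
    # Form 4: Hann-style raised taper with an R-th root; larger R lifts
    # the window toward 1 everywhere except at the zero endpoints.
    return (0.5 * (1.0 - ejes(2.0 * np.pi * x, a, b))) ** (1.0 / R)

x = np.linspace(0.0, 1.0, 1001)
w1 = ejes_window_form4(x, R=1, b=1.0)   # with b = 1 this is the Hann window
w4 = ejes_window_form4(x, R=4, b=1.0)
```

Since the base taper lies in [0, 1], taking the R-th root can only raise its values, which is how R "amplifies" the window.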

4.3 Advantages of the function Elliptic Jes window form 4 over the traditional window functions

Similar to other windows used in signal processing,
such as Hamming, Hanning, Blackman, Kaiser,
Lanczos, Tukey and many others, the
main goal of introducing the Elliptic Jes window
form 4 is to improve the convergence of the
Fourier series at the discontinuity.
The advantages of the new window function over
the traditional windows are:
-The proposed window function is variable in
form; it can take an infinite number of different
forms by varying only two parameters.
-It can help the Fourier series to converge more
rapidly compared to the traditional ones.
-It can be used in both analog and digital filter design.
-It can truncate the Fourier series with a
variable window shape that keeps the necessary
information about the signal even after truncation.


5 ELLIPTIC JES WINDOW FORM 5 FUNCTION

The Elliptic Jes window form 5 function is the
application of the Elliptic Jes function in signal
processing. It takes the following forms:

EjesW5(x) = (1/2)·[ 1 + Ejes( (1/2)·((b+2)/(b+1))·(2πx − π) ) ]   (7)

with 0 ≤ x ≤ 1. And

w(n) = (1/2)·[ 1 + Ejes( (1/2)·((b+2)/(b+1))·(πn/N) ) ]   (8)

with n ∈ ℤ and N ∈ ℕ*.

So the truncated Fourier series using the Elliptic
Jes window form 5 takes the following form:

f(x) = Σ_{n=−N..N} (1/2)·[ 1 + Ejes( (1/2)·((b+2)/(b+1))·(πn/N) ) ] · c_n · e^{jnx}   (9)

5.1 Variable shapes of window formed by Elliptic Jes window form 5

The formed shapes of this function can be drawn
using MATLAB. In figures 5.a to 5.g, different
shapes of the window function are formed by
varying only one parameter, b.

[Figure 5: multi form signals of the function Elliptic Jes window form 5 for different values of b; panels a) b = 0, b) b = 0.01, c) b = 0.2, d) b = √3/3, e) b = 1, f) b = √3, g) b = 90.]

In fact, this window is very important because it has a
variable amplitude that can be changed as we wish
over a period or a half period. Applications of
window functions include spectral analysis, filter
design, beamforming and telecommunications. A
more general definition of window functions does
not require them to be identically zero outside an
interval, as long as the product of the window
multiplied by its argument is square integrable,
that is, the function goes sufficiently rapidly
toward zero.

5.2 Programming the function Elliptic Jes window form 5 using MATLAB

%--------------------------------------------------------------
%Elliptic Jes Window form 5
%Introduced by Claude Ziad Bayeh in 2012-06-21
clc
close all
M=2;
a=1; x=0:0.0001:M-1;
fprintf('---Elliptic Jes Window form 5 Introduced by Claude Ziad Bayeh in 2012-09-21---\n');
fprintf('-------------------------------------------------------------------------------\n');
repeat='y';
while repeat=='y'
b=input('determine the form of the Elliptic trigonometry: b=');
fprintf('b is a variable can be changed to obtain different signals \n');
%b is the intersection of the Ellipse and the axe y'oy in the positive part.
if b<=0,
b
error('ATTENTION: ERROR b must be greater than Zero');
end;
Ejes=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x); % the Elliptic Jes "Ejes"
Emar=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x).*tan(x).*a/b; % the Elliptic Mar "Emar"
% Elliptic Jes Window form 5
X=1/2.*(b+2)./(b+1).*(2*pi*x-pi);
EjesW5=1/2.*(1+(1./(sqrt(1.+((a/b).*tan(X)).^2))).*angx(X));
plot(x,EjesW5);
%xlabel('X''OX axis'); ylabel('f(x)=EjesW5(x)');
title('Elliptic Jes Window form 5');


axis([0 M-1 0 1.1]);
grid on; %grid minor (for more details) / grid on (for less details)
fprintf('Do you want to repeat ?\nPress y for ''Yes'' or any key for ''No''\n');
repeat=input('Y/N=','s'); %string input
clc
close all
end; %End while
%--------------------------------------------------------------
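A Python sketch of form 5, under the same sign(cos x) assumption for the angx helper; the (b+2)/(b+1) factor compresses the argument so that the taper's frequency also depends on b:

```python
import numpy as np

def ejes(x, a=1.0, b=1.0):
    # Elliptic Jes, assuming angx(x) == sign(cos(x)).
    return np.sign(np.cos(x)) / np.sqrt(1.0 + ((a / b) * np.tan(x)) ** 2)

def ejes_window_form5(x, a=1.0, b=1.0):
    # Form 5: raised Ejes taper with a b-dependent frequency factor,
    # matching the MATLAB line X=1/2.*(b+2)./(b+1).*(2*pi*x-pi).
    X = 0.5 * (b + 2.0) / (b + 1.0) * (2.0 * np.pi * x - np.pi)
    return 0.5 * (1.0 + ejes(X, a, b))

x = np.linspace(0.0, 1.0, 1001)
w = ejes_window_form5(x, b=1.0)
```

With b = 1 the frequency factor is 3/4 and the window is 0.5·(1 + cos(0.75·(2πx − π))): it peaks at 1 at x = 0.5 but, unlike the Hann window, does not decay fully to zero at the edges.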

5.3 Advantages of the function Elliptic Jes window form 5 over the traditional window functions

Similar to other windows used in signal processing,
such as Hamming, Hanning, Blackman, Kaiser,
Lanczos, Tukey and many others, the
main goal of introducing the Elliptic Jes window
form 5 is to improve the convergence of the Fourier
series at the discontinuity.
The advantages of the new window function over
the traditional windows are:
-The proposed window function is variable in
form; it can take more than 6 different forms by
varying only one parameter.
-It can help the Fourier series to converge more
rapidly compared to the traditional ones.
-It can be used in both analog and digital filter design.
-It can truncate the Fourier series with a
variable window shape that keeps the necessary
information about the signal even after truncation.
6 ELLIPTIC JES WINDOW FORM 6 FUNCTION

The Elliptic Jes window form 6 function is the
application of the Elliptic Jes function in signal
processing. It takes the following forms:

EjesW6(x) = [ (1/2)·( 1 + Ejes( (1/c)·(2πx − π) ) ) ]^(1/R)   (10)

with 0 ≤ x ≤ 1 and b > 0.
R is an integer greater than zero (R ∈ ℕ*); it is
a variable parameter, and it is used to amplify the
window when its value is smaller than 1.
c is a real number greater than zero (c ∈ ℝ+*).
In general we take 1 ≤ c ≤ 2; it is also a variable
parameter which represents the frequency of the
window function, and it is used to adjust the
attenuation to zero at the extremities of the window.
And

w(n) = [ (1/2)·( 1 + Ejes( (1/c)·(πn/N) ) ) ]^(1/R)   (11)

with n ∈ ℤ and N ∈ ℕ*.

So the truncated Fourier series using the Elliptic
Jes window form 6 takes the following form:

f(x) = Σ_{n=−N..N} [ (1/2)·( 1 + Ejes( (1/c)·(πn/N) ) ) ]^(1/R) · c_n · e^{jnx}   (12)

6.1 Variable shapes of window formed by Elliptic Jes window form 6

The formed shapes of this function can be drawn
using MATLAB. In the sets of figures 5 to 7,
different shapes of the window function are
formed by varying one of the three parameters
b, c and R.


[Figure 5: multi form signals of the function Elliptic Jes window form 6 for different values of b > 0 and c = 1, R = 1; panels a) b = 0.01, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]


[Figure 6: multi form signals of the function Elliptic Jes window form 6 for different values of b > 0 and c = 1, R = 4; panels a) b = 0.01, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]


[Figure 7: multi form signals of the function Elliptic Jes window form 6 for different values of b > 0 and c = 1.5, R = 1; panels a) b = 0.01, b) b = 0.2, c) b = √3/3, d) b = 1, e) b = √3, f) b = 90.]

In fact, this window is very important because it has a
variable amplitude that can be changed as we wish
over a period or a half period. It can be considered
the most important window function presented here
because of its wide variety of forms and its
simplicity in the design of circuits. Applications of
window functions include spectral analysis, filter
design, beamforming and telecommunications. A
more general definition of window functions does
not require them to be identically zero outside an
interval, as long as the product of the window
multiplied by its argument is square integrable,
that is, the function goes sufficiently rapidly
toward zero.


6.2 Programming the function Elliptic Jes window form 6 using MATLAB

%--------------------------------------------------------------
%Elliptic Jes Window form 6
%Introduced by Claude Ziad Bayeh in 2012-06-21
clc
close all
M=2;
a=1; x=0:0.0001:M-1;
fprintf('---Elliptic Jes Window form 6 Introduced by Claude Ziad Bayeh in 2012-09-21---\n');
fprintf('-------------------------------------------------------------------------------\n');
repeat='y';
while repeat=='y'
b=input('determine the form of the Elliptic trigonometry: b=');
fprintf('b is a variable can be changed to obtain different signals \n');
%b is the intersection of the Ellipse and the axe y'oy in the positive part.
if b<=0,
b
error('ATTENTION: ERROR b must be greater than Zero');
end;
R=input('determine the root of the window: R=');
fprintf('R is a variable can be changed to obtain variable amplitude of the window function\n');
if R<=0,
R
error('ATTENTION: ERROR "R" must be greater than Zero');
end;
c=input('determine the frequency of the window: c=');
fprintf('"c" is a variable can take the values between 1 and 2\n');
if c<=0,
c
error('ATTENTION: ERROR "c" must be greater than Zero');
end;
Ejes=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x); % the Elliptic Jes "Ejes"
Emar=(1./(sqrt(1.+((a/b).*tan(x)).^2))).*angx(x).*tan(x).*a/b; % the Elliptic Mar "Emar"
% Elliptic Jes Window form 6
X=1./c.*(2*pi*x-pi);
EjesW6=(1/2.*(1+(1./(sqrt(1.+((a/b).*tan(X)).^2))).*angx(X))).^(1./R);
plot(x,EjesW6);
%xlabel('X''OX axis'); ylabel('f(x)=EjesW6(x)');
title('Elliptic Jes Window form 6');
axis([0 M-1 0 1.1]);
grid on; %grid minor (for more details) / grid on (for less details)
fprintf('Do you want to repeat ?\nPress y for ''Yes'' or any key for ''No''\n');
repeat=input('Y/N=','s'); %string input
clc
close all
end; %End while
%--------------------------------------------------------------
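To illustrate the intended use of these windows, truncating a Fourier series, the sketch below weights the Fourier coefficients of a square wave with a form-6-style taper and compares the overshoot near the discontinuity with the unweighted (rectangular) truncation. It is a Python illustration written under two assumptions flagged in the comments: angx(x) is taken as sign(cos x), and the discrete coefficient weighting of equation (11) is taken as w(n) = [½(1 + Ejes(πn/(cN)))]^(1/R), a reconstruction rather than a quotation of the original formula:

```python
import numpy as np

def ejes(x, a=1.0, b=1.0):
    # Elliptic Jes, assuming angx(x) == sign(cos(x)).
    return np.sign(np.cos(x)) / np.sqrt(1.0 + ((a / b) * np.tan(x)) ** 2)

def coeff_weight(n, N, b=1.0, c=1.0, R=1):
    # Assumed discrete form-6 taper: 1 at n = 0, decaying to 0 at |n| = N
    # when c = 1; c stretches the taper, R lifts it toward 1.
    return (0.5 * (1.0 + ejes(np.pi * n / (c * N), 1.0, b))) ** (1.0 / R)

N = 25
x = np.linspace(-np.pi, np.pi, 4001)
harmonics = np.arange(1, N + 1, 2)          # odd harmonics of a square wave
plain = sum(4.0 / np.pi * np.sin(k * x) / k for k in harmonics)
tapered = sum(coeff_weight(k, N) * 4.0 / np.pi * np.sin(k * x) / k
              for k in harmonics)
gibbs_plain = plain.max()      # raw truncation overshoots the unit amplitude
gibbs_tapered = tapered.max()  # tapered truncation rings far less
```

The raw partial sum exhibits the Gibbs overshoot near the jump; the tapered sum damps it strongly, which is the convergence improvement the paper attributes to these windows.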

6.3 Advantages of the function Elliptic Jes window form 6 over the traditional window functions

Similar to other windows used in signal processing,
such as Hamming, Hanning, Blackman, Kaiser,
Lanczos, Tukey and many others, the
main goal of introducing the Elliptic Jes window
form 6 is to improve the convergence of the
Fourier series at the discontinuity.
The advantages of the new window function over
the traditional windows are:
-The proposed window function is variable in
form; it can take an infinite number of different
forms by varying three parameters.
-It can help the Fourier series to converge more
rapidly compared to the traditional ones.
-It can be used in both analog and digital filter design.
-It can truncate the Fourier series with a
variable window shape that keeps the necessary
information about the signal even after truncation.
7 EXISTING WINDOW FUNCTIONS

In this section, the author presents other types of
window functions that are used to improve the
convergence of a Fourier series at a discontinuity
[1-2] and [39], such as:
7.1 Rectangular window

The rectangular window (sometimes known as the
boxcar or Dirichlet window) is the simplest
window, equivalent to replacing all but N values of
a data sequence by zeros, making it appear as
though the waveform suddenly turns on and off:

w(n) = 1 for 0 ≤ n ≤ N − 1, and w(n) = 0 otherwise   (13)
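The turn-on/turn-off behavior in (13) is easy to see numerically; the short Python sketch below (an illustration, not code from the paper) multiplies a data sequence by a rectangular window and checks that everything outside the first N samples is zeroed while the first N samples pass unchanged:

```python
import numpy as np

def rectangular_window(N, total):
    # Eq. (13): w(n) = 1 for 0 <= n <= N-1, and 0 otherwise.
    w = np.zeros(total)
    w[:N] = 1.0
    return w

data = np.sin(2.0 * np.pi * np.arange(100) / 10.0)  # an arbitrary test sequence
w = rectangular_window(60, 100)
windowed = data * w   # the product "suddenly turns off" at n = 60
```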

[Figure 8: Rectangular window function (from Wikipedia [46]).]

Other windows are designed to moderate these
sudden changes, because discontinuities have
undesirable effects on the discrete-time Fourier
transform (DTFT) and/or the algorithms that
produce samples of the DTFT.

7.2 Triangular window

The triangular window is defined by:

w(n) = 1 − | (n − (N − 1)/2) / ((N + 1)/2) |   (14)

The end samples are positive (equal to 2/(N + 1)).
This window can be seen as the convolution of
two half-sized rectangular windows (for N even),
giving it a main lobe width of twice the width of a
regular rectangular window. The nearest lobe is
26 dB down from the main lobe.

[Figure 9: Triangular window function (from Wikipedia [46]).]

7.3 Welch window

The Welch window consists of a single parabolic
section:

w(n) = 1 − ( (n − (N − 1)/2) / ((N − 1)/2) )²   (15)

[Figure 10: Welch window function (from Wikipedia [46]).]


The defining quadratic polynomial reaches a value
of zero at the samples just outside the span of the
window.

7.4 Hann (Hanning) window

The Hann window, also known as the Hanning
window, is defined by:

w(n) = (1/2)·( 1 − cos( 2πn/(N − 1) ) )   (16)
[Figure 11: Hann window function (from Wikipedia [46]).]

7.5 Tukey window

The Tukey window, also known as the tapered
cosine window, can be regarded as a cosine lobe of
width αN/2 that is convolved with a rectangular
window of width (1 − α/2)N:

w(n) = (1/2)·[ 1 + cos( π·( 2n/(α(N − 1)) − 1 ) ) ]   for 0 ≤ n ≤ α(N − 1)/2
w(n) = 1   for α(N − 1)/2 ≤ n ≤ (N − 1)(1 − α/2)
w(n) = (1/2)·[ 1 + cos( π·( 2n/(α(N − 1)) − 2/α + 1 ) ) ]   for (N − 1)(1 − α/2) ≤ n ≤ N − 1   (17)

[Figure 12: Tukey window function (from Wikipedia [46]).]

And so on.

The main purpose of developing these windows is
to obtain the smoother form that helps the
attenuation of the desired signal at the extremities of
the window function while at the same time obtaining
the minimum amplitude of the side lobes and the
maximum width of the main lobe (refer to figure
10). This is not possible with the existing window
functions, so there is a compromise to make.
The disadvantage of these window functions is
that their frequency response doesn't converge to
zero outside the interval and, moreover, their
amplitudes are not negligible.
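The piecewise Tukey definition (17) can be checked numerically. The Python sketch below (an illustration under the stated α-parameterization, not code from the paper) builds the window directly from the three branches; as sanity checks, α = 1 must reproduce the Hann window of section 7.4 and α = 0 the rectangular window:

```python
import numpy as np

def tukey(N, alpha):
    # Tapered cosine window, eq. (17): cosine tapers covering a total
    # fraction alpha of the window, flat (w = 1) in the middle.
    n = np.arange(N)
    w = np.ones(N)
    if alpha <= 0.0:
        return w                       # degenerates to the rectangular window
    left = n < alpha * (N - 1) / 2.0
    right = n > (N - 1) * (1.0 - alpha / 2.0)
    w[left] = 0.5 * (1.0 + np.cos(np.pi * (2.0 * n[left] / (alpha * (N - 1)) - 1.0)))
    w[right] = 0.5 * (1.0 + np.cos(np.pi * (2.0 * n[right] / (alpha * (N - 1)) - 2.0 / alpha + 1.0)))
    return w
```

tukey(N, 1.0) has no flat section and equals the Hann window; tukey(N, 0.0) is the rectangular window; intermediate α trades main-lobe width against side-lobe level.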


[Figure 13: window function in the frequency domain (from Wikipedia [46]).]

This problem is resolved by the window functions
proposed by the author based on the
Elliptical Trigonometry, in which we can regulate
the shape of the window to obtain a very smooth
form within the window function while obtaining, at the
same time, a wider main lobe and side
lobes of very small magnitude.
8 CONCLUSION

In this paper, the author introduced new window
functions based on the Elliptical Trigonometry.
These new window functions have many
advantages, as cited in the previous sections. The
main goal of introducing these new window
functions is to improve the convergence of the
Fourier series at the discontinuity. As we have
seen, the window functions based on the
Elliptical Trigonometry have variable shapes, and
we can regulate the shape in a way that improves the
convergence and, moreover, controls the
frequencies of the signals that we want and those that
we don't want. The Elliptical Trigonometry has other
applications, such as [16-17] and [35-45].
These new window functions have enormous
applications in mathematics and in signal
processing, and precisely in the design of analog
and digital filters.
9 REFERENCES
1. H. Baher, Signal processing and integrated circuits,
John Wiley & Sons Ltd., ISBN: 9780470710265, (2012).
2. J. G. Proakis, D. G. Manolakis, Digital Signal
Processing: Principles, Algorithms, and Applications,
Fourth edition, Pearson International Edition, ISBN:
0-13-228731-5.
3. N. Wirth, Digital Circuit Design, Springer, ISBN:
3-540-58577-X.
4. N. Senthil Kumar, M. Saravanan, S. Jeevananthan,
Microprocessors and Microcontrollers, Oxford
University Press, ISBN-13: 978-0-19-806647-7.

5. G. Wade, Signal Coding and Processing, Second
Edition, Cambridge University Press, ISBN:
0-521-42336-8.
6. S. J. Orfanidis, Introduction to Signal Processing,
Rutgers University, ISBN 0-13-209172-0, 2010.
7. B. Gold and C. M. Rader, Digital Processing of Signals,
McGraw-Hill, New York, (1969).
8. A. V. Oppenheim and R. W. Schafer, Discrete-Time
Signal Processing, Prentice Hall, Englewood Cliffs,
NJ, (1989).
9. A. V. Oppenheim and R. W. Schafer, Digital Signal
Processing, Prentice Hall, Englewood Cliffs, NJ,
(1975).
10. L. R. Rabiner and B. Gold, Theory and Application of
Digital Signal Processing, Prentice Hall, Englewood
Cliffs, NJ, (1975).
11. S. K. Mitra and J. F. Kaiser, eds., Handbook of Digital
Signal Processing, Wiley, New York, (1993).
12. T. W. Parks and C. S. Burrus, Digital Filter Design,
Wiley, New York, (1987).
13. A. Antoniou, Digital Filters: Analysis and Design, 2nd
ed., McGraw-Hill, New York, (1993).
14. D. F. Elliott, Handbook of Digital Signal Processing,
Academic Press, New York, (1987).
15. L. R. Rabiner and C. M. Rader, eds., Digital Signal
Processing, IEEE Press, New York, (1972).
16. C. Bayeh, M. Bernard, N. Moubayed, Introduction to the
elliptical trigonometry, WSEAS Transactions on
Mathematics, Issue 9, Volume 8, (September 2009), pp.
551-560.
17. N. Moubayed, C. Bayeh, M. Bernard, A survey on
modeling and simulation of a signal source with
controlled waveforms for industrial electronic
applications, WSEAS Transactions on Circuits and
Systems, Issue 11, Volume 8, (November 2009), pp.
843-852.
18. M. Christopher, From Eudoxus to Einstein: A History of
Mathematical Astronomy, Cambridge University Press,
(2004).
19. E. W. Weisstein, Trigonometric Addition Formulas,
Wolfram MathWorld, (1999-2009).
20. P. A. Foerster, Algebra and Trigonometry: Functions
and Applications, Addison-Wesley publishing company,
(1998).
21. F. Ayres, Trigonométrie: cours et problèmes, McGraw-Hill, (1991).
22. R. C. Fisher and Allen D.Ziebur, Integrated Algebra and
Trigonometry with Analytic Geometry, Pearson
Education Canada, (2006).
23. E. Demiralp, Applications of High Dimensional Model
Representations to Computer Vision, WSEAS
Transactions on Mathematics, Issue 4, Volume 8, (April
2009).
24. A. I. Grebennikov, Fast algorithm for solution of
Dirichlet problem for Laplace equation, WSEAS


Transactions on Computers Journal, 2(4), pp. 1039-1043, (2003).
25. I. Mitran, F.D. Popescu, M.S. Nan, S.S. Soba,
Possibilities for Increasing the Use of Machineries
Using Computer Assisted Statistical Methods, WSEAS
Transactions on Mathematics, Issue 2, Volume 8,
(February 2009).
26. Q. Liu, Some Preconditioning Techniques for Linear
Systems, WSEAS Transactions on Mathematics, Issue 9,
Volume 7, (September 2008).
27. A. I. Grebennikov, The study of the approximation
quality of GR-method for solution of the Dirichlet
problem for Laplace equation, WSEAS Transactions on
Mathematics Journal, 2(4), pp. 312-317, (2003).
28. R. Bracewell, Heaviside's Unit Step Function, The
Fourier Transform and its Applications, 3rd edition,
New York: McGraw-Hill, pp. 61-65, (2000).
29. M. Abramowitz and Irene A. Stegun, eds, Handbook of
mathematical functions with formulas, graphs and
mathematical tables, 9th printing, New York: Dover,
(1972).
30. V. Kantabutra, On hardware for computing exponential
and trigonometric functions, IEEE Transactions on
Computers, Vol. 45, issue 3, pp. 328-339, (1996).
31. H. P. Thielman, A generalization of trigonometry,
National mathematics magazine, Vol. 11, No. 8, (1937),
pp. 349-351.
32. N. J. Wildberger, Divine proportions: Rational
Trigonometry to Universal Geometry, Wild Egg,
Sydney, (2005).
33. C. W. Lander, Power electronics, third edition, McGraw-Hill Education, (1993).
34. C. Bayeh, Introduction to the Rectangular Trigonometry
in Euclidian 2D-Space, WSEAS Transactions on
Mathematics, ISSN: 1109-2769, Issue 3, Volume 10,
(March 2011), pp. 105-114.
35. C. Z. Bayeh, Introduction to the Angular Functions in
Euclidian 2D-space, WSEAS Transactions on
Mathematics, ISSN: 1109-2769, E-ISSN: 2224-2880,
Issue 2, Volume 11, (February 2012), pp.146-157.
36. C. Z. Bayeh, Introduction to the General Trigonometry
in Euclidian 2D-Space, WSEAS Transactions on
Mathematics, ISSN: 1109-2769, E-ISSN: 2224-2880,
Issue 2, Volume 11, (February 2012), pp.158-172.
37. C. Bayeh, Application of the Elliptical Trigonometry in
industrial electronic systems with analyzing, modeling
and simulating two functions Elliptic Mar and Elliptic
Jes-x, WSEAS Transactions on Circuits and Systems,
ISSN: 1109-2734, Issue 11, Volume 8, (November 2009),
pp. 843-852.
38. C. Bayeh, A survey on the application of the Elliptical
Trigonometry in industrial electronic systems using
controlled waveforms with modeling and simulating of
two functions Elliptic Mar and Elliptic Jes-x, in the book
Latest Trends on Circuits, Systems and Signals,

publisher WSEAS Press, ISBN: 978-960-474-208-0,


ISSN: 1792-4324, (July 2010), pp.96-108.
39. C. Z. Bayeh, Elliptic Jes window form 2 in Signal
Processing, International Journal of Digital Information
and Wireless Communications (IJDIWC) 3(3). The
Society of Digital Information and Wireless
Communications, 2013 (ISSN: 2225-658X), pp.1-9.
40. C. Z. Bayeh, Introduction to the Elliptical Trigonometry
in Euclidian 2D-space with simulation of four elliptical
trigonometric functions Jes, Jes-x, Mar and Rit, WSEAS
Transactions on Mathematics, ISSN: 1109-2769, E-ISSN:
2224-2880, Issue 9, Volume 11, September 2012, pp. 784-795.
41. C. Z. Bayeh, Introduction to the Rhombus
Trigonometry in Euclidian 2D-space with simulation of
four Rhombus trigonometric functions RhJes, RhJes-x,
RhMar and RhRit, WSEAS Transactions on
Mathematics, ISSN: 1109-2769, E-ISSN: 2224-2880,
Issue 10, Volume 11, October 2012, pp.876-888.
42. C. Z. Bayeh, Introduction to the Elliptical Trigonometry
Series Using two Functions Absolute Elliptic Jes (AEjes)
and Absolute Elliptic Mar (AEmar) of the First Form,
WSEAS Transactions on Mathematics, E-ISSN: 2224-2880, Issue 4, Volume 12, April 2013, pp. 436-448.
43. C. Z. Bayeh, Nikos E. Mastorakis, Rectangular Base
Function, in the book Mathematical Methods for
Science and Economics, Proceedings of the 17th
WSEAS International Conference on Applied
Mathematics (AMATH '12), Montreux, Switzerland,
Published by WSEAS Press, December 29-31, 2012,
ISBN: 978-1-61804-148-7, pp. 105-108.
44. C. Z. Bayeh, Nikos E. Mastorakis, Elliptic Jes Window
Form 1, in the book Mathematical Methods for
information Science and Economics, Proceedings of the
17th WSEAS International Conference on Applied
Mathematics (AMATH '12), Montreux, Switzerland,
Published by WSEAS Press, December 29-31, 2012,
ISBN: 978-1-61804-148-7, pp. 115-120.
45. C. Z. Bayeh, Nikos E.Mastorakis , Application of the
Rectangular Trigonometry in industrial electronic systems
with analyzing, modeling and simulating the function
Rectangular Rit, in the book Recent Researches in
Circuits, Communications and Signal Processing,
Proceedings of the 7th WSEAS International Conference
on Circuits, Systems, Signal and Telecommunications
(CSST '13), Milan, Italy, publisher WSEAS Press, ISBN:
978-1-61804-151-7, January 2013, pp.97-107.
46. Wikipedia, Window function.

429

International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 430-439
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Smart On-Board Transportation Management System Using GPS/GSM/GPRS Technologies to Reduce Traffic Violation in Developing Countries

Saed Tarapiah, Shadi Atalla, Rajaa AbuHania

Telecommunication Engineering Dept., An-Najah National University, Nablus, Palestine
Pervasive Technologies, Istituto Superiore Mario Boella, Torino, Italy

Email: s.tarapiah@najah.edu, shadi.atalla@ismb.it, Rajaa.ameen@gmail.com

ABSTRACT
Nowadays, the evolution in transportation technologies makes increasing road safety a necessity. In this context, we propose the implementation of a smart on-board GPS/GPRS system to be attached to vehicles for monitoring and controlling their speed. In case of a traffic speed violation, a GPRS message containing information about the vehicle, such as its location and maximum speed, is sent to a hosting server located in an authorized office so that the violating vehicle is ticketed. Moreover, this system can also track the vehicle's current location on a Google Map, which is mostly beneficial when vehicles should follow a specific road, and in case of robbery. Geo-casting can also play a major role in this model. Some sensors, such as the shock/vibration sensors usually attached to the air-bags in vehicles, are attached to the system so that, in case of an accident, it sends notifications to the nearest hospital, police station and civil defense. Our proposed model can be utilized for different implementations, both in the public and private sectors. While similar existing systems in Palestine have focused only on the tracking aspect of vehicle monitoring, ours would be the first system supporting both ticketing and tracking.

KEYWORDS
GPS; GPRS; Transportation; Ticketing; Tracking; Intelligent Transport System; ITS; Machine-to-Machine; M2M; Internet of Things; IoT; Smart City; GSM.

1. INTRODUCTION
Recent studies show that all over the world, including Palestine, there has been a rapid increase in vehicle numbers. The latest statistics show that there were approximately 140,000 licensed vehicles in the West Bank in 2011, about 17,000 of them newly registered [1]. As a result, as Figure [1] illustrates, traffic crashes have increased in the past few years in the West Bank. Investigation showed that the lack of proper road infrastructure is one of the reasons for these crashes. Moreover, people by nature are not deterred from doing something unless they are obliged by law and threatened with large fines or penalties. Thus, the resulting costs of damages add an extra burden to the development of society. As a consequence, it is worthwhile to provide solutions to these challenges.
Advances in Information and Communication
Technologies (ICT) represent a good potential tool
to tackle the increasing road accidents and vehicle
robbery. The application of the ICT to the
transportation and traffic management is called
Intelligent Transportation System (ITS) [2], [3],
[4].
In this work, a Smart Transportation Management System (STMS) based on the integration of GSM, GPS and a large array of smart sensors has been developed for enhancing public and private transportation services. The system is composed of an embedded microcontroller-based smart board called SmartBoard, a cloud-based web application and Google Maps services [5].
1.1. GSM Technology
The Global System for Mobile Communications (GSM), originally Groupe Spécial Mobile, is the world's most popular mobile telephone system. About 80 percent of mobile operators use this standard, providing services to over 1.5 billion people across more than 212 countries. This is because GSM was the first mobile generation to give subscribers the ability to roam and switch carriers without replacing their phones, and to offer interoperability to network operators. General Packet Radio Service (GPRS) represents an evolution of the GSM
standard, allowing data transmission in packet mode and providing higher throughput compared with the circuit-switched mode [6], [7].
1.2. GPS Technology
The Global Positioning System (GPS) is a worldwide radio-navigation system formed from a constellation of 24 satellites and their ground stations. It is mainly funded and controlled by the U.S. Department of Defense (DOD). The system was initially created and designed for U.S. military use, but nowadays it is available to civilians without any charge or restriction. GPS tracking is a method of working out the exact position of a GPS receiver, based on a simple mathematical principle called trilateration (or triangulation). Trilateration falls into two categories: 2-D trilateration and 3-D trilateration. It requires at least four satellites transmitting coded signals from known positions: three satellites are required to provide the three distance measurements, and the fourth to remove the receiver clock error [9].
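The trilateration principle can be illustrated in two dimensions. The sketch below is our own illustration, not code from the paper: it recovers a receiver position from three known anchor points and measured distances by subtracting the first circle equation from the other two, which yields a small linear system.

```python
import math

def trilaterate_2d(anchors, distances):
    """Solve for (x, y) given three anchor points and measured distances.
    Subtracting circle equation 1 from equations 2 and 3 cancels the
    quadratic terms, leaving a 2x2 linear system A [x, y]^T = b."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A receiver at (3, 4), measured from anchors at (0,0), (10,0), (0,10):
anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = (3, 4)
dists = [math.dist(a, true_pos) for a in anchors]
x, y = trilaterate_2d(anchors, dists)
```

Real GPS solves the 3-D analogue with a fourth satellite as an extra equation for the receiver clock bias, but the algebraic idea is the same.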
A GPS tracking system can work in various ways, i.e. active or passive tracking. In passive tracking, the position is usually stored in internal memory or on a memory card along the ride, while in active tracking, also referred to as real-time tracking, the data is transmitted to a central database via a modem within the GPS unit [9].

Figure 1. Number of registered road traffic accidents for major cities in Palestine, 2011 [1].

The paper is organized as follows: the first section gives an overview of the problem. Related works are discussed in the second section. The model components are introduced in section three. System requirements and implementation issues are described in section four. The ticketing and tracking algorithm is described in the fifth section, the sixth section discusses the experiments and user experience with the STMS, and conclusions are presented in the final section.
2. RELATED WORK
This section briefly introduces implementation
and development activities in research and
academia of selected smart transportation
systems. Specifically, GPS and GPRS based
models which have been designed for managing
and organizing transportation systems.
Patinge and Kolhare developed a GPS-based urban transportation management system with fleet tracking using GPS and GSM/GPRS technology and a public information system unit mounted on the bus [10]. Kumar and Prasad attempted to enhance public transportation management services based on GPS and GSM [9]. CIVITAS II [12] provides integrated real-time information on the traffic situation in the urban area (e.g. concerning parking spaces, congestion, and public transport), optimizing the traffic and passenger flows and improving system management. Goud and Padmaja proposed a useful approach to detecting accidents precisely by means of both a vibration sensor and a Micro-Electro-Mechanical System (MEMS) accelerometer [13].
In a preliminary research paper related to this work, Saed Tarapiah and others [14] identify common criteria and provide design guidelines for such a system. Furthermore, they have developed an initial prototype in Palestine to control public transportation.
3. MODEL MAIN COMPONENTS
This section provides detailed descriptions of the hardware modules used by the SmartBoard.
3.1. GSM-GPRS Module
A GSM module is a wireless transmission module that works with a GSM wireless network. It behaves like a dialup modem. The main difference
between a GSM module and a dialup modem is that a dialup modem sends and receives data through a fixed telephone line, while a wireless modem sends and receives data through radio waves. The module can be connected to a computer through a serial cable or a USB cable. In our project we use the SM5100B Cellular Shield, since it is easy to deal with, more flexible, and supports AT commands. Because it communicates over the cellular network rather than a point-to-point link, it can be used virtually anywhere with coverage. GSM can easily send and receive data across the mobile network, and it can transmit instructions, commands and SMS messages to and from the microcontroller [15].
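As a rough illustration of how a host controller drives such a module, the sketch below builds the standard text-mode SMS command sequence defined in 3GPP TS 27.005 (AT+CMGF to select text mode, AT+CMGS to send). The phone number and message body are placeholders, and the actual serial I/O and response handling (waiting for "OK" and the "> " prompt between writes) are omitted.

```python
CTRL_Z = "\x1a"  # terminates the SMS body in text mode

def sms_command_sequence(number, text):
    """Return the ordered AT command strings a host would write, one at a
    time, to the GSM modem's serial port to send an SMS in text mode."""
    return [
        "AT\r",                   # sanity check, modem answers "OK"
        "AT+CMGF=1\r",            # select SMS text mode
        f'AT+CMGS="{number}"\r',  # start a message, modem answers "> "
        text + CTRL_Z,            # message body; Ctrl+Z submits it
    ]

# Placeholder number and text, for illustration only:
seq = sms_command_sequence("+970599000000", "Speed violation: 97 km/h")
```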

3.2. Arduino Microcontroller
The Arduino Uno is a microcontroller board which has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started [15].
3.3. GPS-11058 Module
GPS-11058 is a development board built around a small, powerful and versatile GPS receiver. The module can be configured to a 10 Hz update rate with 14-channel tracking. It has two serial ports, UART and SPI interfaces, a 28 mA operating current and high sensitivity. An external battery or supercapacitor can be connected to the board to support very fast restarts after power is removed. There are even pads on the bottom of the board for a 0.2 F supercapacitor, which keeps the board hot-startable for up to 7 hours without power [15].
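GPS receivers of this kind typically emit NMEA 0183 sentences over the serial port. As an illustration (the paper does not show its parsing code), the sketch below extracts latitude, longitude and speed from a $GPRMC sentence, which is the sentence carrying both position and ground speed; the sample sentence is a widely used textbook example, not a reading from the authors' trips.

```python
def parse_rmc(sentence):
    """Parse a NMEA 0183 $GPRMC sentence into (lat, lon, speed_kmh).
    Returns None when the fix is flagged invalid ('V'), the case in
    which a fallback speed source would be needed."""
    fields = sentence.split("*")[0].split(",")
    if fields[0] != "$GPRMC" or fields[2] != "A":
        return None

    def to_degrees(value, hemisphere):
        # NMEA encodes angles as ddmm.mmmm (dddmm.mmmm for longitude)
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        deg = degrees + minutes / 60.0
        return -deg if hemisphere in ("S", "W") else deg

    lat = to_degrees(fields[3], fields[4])
    lon = to_degrees(fields[5], fields[6])
    speed_kmh = float(fields[7]) * 1.852  # NMEA speed is in knots
    return lat, lon, speed_kmh

fix = parse_rmc(
    "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A")
```

A production parser would also verify the trailing checksum before trusting the fields.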

Figure 2. GSM-GPRS module.

Figure 4. Arduino microcontroller.

Figure 3. GPS-11058 module.

4. SYSTEM REQUIREMENTS AND IMPLEMENTATION
This section provides a more in-depth look at the STMS architecture and elaborates on every single component of the system. It explains the enabling technologies used to glue the system components together, along with the motivations why these technologies are suitable for the STMS. The STMS is composed of an embedded smart board called SmartBoard, a cloud-based web application and Google Maps

Services.
Figure [5] illustrates the high-level network architecture of the system and shows the main communication technologies used for information and event flow between the system's major components.

Figure 5. The high-level network architecture of the system, showing the main communication technologies used for information and event flow between the system's major components.

4.1. Cloud-Based Web Application
A Web Service (WS) [16] is a communication paradigm which allows two Internet applications or two electronic devices (such as computers, embedded processors and microcontrollers, smart sensors and actuators, smartphones, cellular mobiles and tablet PCs) to interact in order to exchange information and events with each other through the WWW (World Wide Web). There are two important kinds of Web Services: the Simple Object Access Protocol (SOAP) [17] and Representational State Transfer (REST) [18].

SOAP is a standard communication protocol for XML-based data exchange between peers. SOAP can use different transport protocols such as HTTP, the Simple Mail Transfer Protocol (SMTP) and the Java Message Service (JMS), and it uses the XML language to define both the message architecture and the message format.
REST is a style of software architecture that describes how data is transmitted between peers over a standardized interface, e.g. HTTP. It is based on the client-server communication paradigm, and the interaction is made by request-response pairs. REST relies on a stateless and cacheable communication protocol.
In order to make our web application flexible and extendable, we have adopted the REST (RESTful) architecture, and our implemented STMS uses the three-tier architecture [19]:
1. A front-end corresponding to the client side. The user interface is a web-browser application: a responsive web page developed using Hypertext Markup Language (HTML5), JavaScript, the jQuery library [21] and Cascading Style Sheets (CSS), tested on both desktop and smartphone web browsers. This web page uses Asynchronous JavaScript (AJAX) to build a bidirectional data flow with the middle layer, which allows the user interface to receive real-time data from the servers and to interact with the system by issuing commands to the middle layer.
2. A middle layer which includes a dynamic PHP program running on top of the Apache web server [22]. This program exposes its internal functionality through a RESTful interface towards the front-end and uses the MySQL native driver for PHP for storing and retrieving data.
3. A back-end containing a MySQL database server [23] used to store all known roads in the region, system users, user profiles, and user alerts and tickets. This component is a relational database used to store and retrieve the data. Note that the positioning and speed data are time-stamped according to the UTC time reference.

Figure 6. The high-level design and architecture of the Web Application. The first tier contains the database storage.
The second tier runs the business logic and computations. Finally, the clients' tier.

Figure [6] shows the Web application's high-level architecture as well as the HTTP REST communication between the user's web browser and the Apache web server. The whole Apache and MySQL stack is tested and run on a Windows Server platform [24].
It is important to note that the middle and back-end layers can reside in two separate physical server machines (to improve the scalability of the STMS), or both of them can reside in the same physical server machine (a configuration suitable for small and medium-scale applications).
In the RESTful vocabulary, things are resources. Each resource is an entity uniquely addressable by a Uniform Resource Identifier (URI) attached to it. Each resource also has a representation that can be transferred and manipulated by means of four verbs: create, read, update and delete (CRUD).
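As a hypothetical illustration of this verb-to-resource mapping (the actual middle layer is a PHP program not reproduced in the paper, and the resource name and status codes here are our own assumptions), the sketch below dispatches requests on a /vehicles/<id> resource to CRUD operations over an in-memory store:

```python
# In-memory stand-in for the MySQL back-end, keyed by vehicle id.
vehicles = {}

def handle(method, uri, body=None):
    """Map the HTTP verb on /vehicles/<id> to the matching CRUD
    operation and return (status_code, representation)."""
    vehicle_id = uri.rsplit("/", 1)[-1]
    if method == "POST":                      # create
        vehicles[vehicle_id] = body
        return 201, body
    if method == "GET":                       # read
        if vehicle_id in vehicles:
            return 200, vehicles[vehicle_id]
        return 404, None
    if method == "PUT":                       # update
        vehicles[vehicle_id] = body
        return 200, body
    if method == "DELETE":                    # delete
        vehicles.pop(vehicle_id, None)
        return 204, None
    return 405, None                          # verb not allowed
```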
4.2. SmartBoard

Figure 7. High-level architecture of the SmartBoard.

The software architecture of the proposed SmartBoard (shown in Figure [7]) is organized as follows:
Sensing layer. This layer depends on off-the-shelf sensors used to collect information regarding the vehicle status, such as geographical location and speed. More specifically, we have GPS sensors used to collect location and speed information of the vehicle; accelerometer sensors (such as the vehicle's built-in speedometer) providing the vehicle's speed information; and shock and vibration sensors, which are usually attached to the air bags in vehicles. In case there is an accident, these latter sensors will send notifications to other components.
Processing layer. This layer uses the Arduino Uno microcontroller [8] and runs the ticketing and tracking algorithm explained in section [5].
Communications layer. This layer contains wired and wireless communication technologies. The former connects the sensing layer to the microcontroller using a Universal Synchronous/Asynchronous Receiver and Transmitter (USART) [25], while the latter (GPRS) connects the smart board to the Internet.

4.3. Google Maps API
The Google Maps API [5] is a public and free service offered by Google. The service is composed of a set of Application Programming Interfaces (APIs) which aim to facilitate the integration of Google Maps and its services into newly created geographical applications and services. We embedded a Google Map in our web application and then used the Google Maps APIs to place over this map a series of geographical locations, given as longitude-latitude pairs. The Google Maps APIs are based on Web Services technology: retrieving, creating, deleting and updating geographic data is done through HTTP requests to a specific Uniform Resource Identifier (URI). The response of the Google Maps APIs is based on the JavaScript Object Notation (JSON) or Extensible Markup Language (XML) data formats.
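As one hedged illustration of such an HTTP interface, the sketch below builds a Static-Maps-style request URL that draws a recorded trace as a path. The parameter names (size, path, key) follow Google's public Static Maps API; the coordinates and the API key are placeholders, not values from the paper, and the authors may have used a different Maps API surface.

```python
from urllib.parse import urlencode

def trip_map_url(points, size="640x400"):
    """Build a Static Maps request URL drawing the GPS trace as a red
    path. `points` is a list of (lat, lon) tuples; the API key is a
    placeholder to be replaced by a real credential."""
    path = "color:0xff0000ff|weight:3|" + "|".join(
        f"{lat:.6f},{lon:.6f}" for lat, lon in points)
    query = urlencode({"size": size, "path": path, "key": "YOUR_API_KEY"})
    return "https://maps.googleapis.com/maps/api/staticmap?" + query

# Hypothetical coordinates near Nablus, for illustration only:
url = trip_map_url([(32.2211, 35.2544), (32.2300, 35.2600)])
```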
5. THE PROPOSED TICKETING AND TRACKING ALGORITHM
Our proposed model performs online monitoring, ticketing and tracking. Moreover, our model supports geo-casting features. The geo-casting function is activated if an accident occurs in a certain area: all the vehicles within a range of specific geographical coordinates will receive a message to choose another route, so traffic jams and unnecessary delays can be avoided, saving time and money. The GPS receiver is used to determine the position and speed of vehicles. The location is used for tracking, while the measured speed is compared with a
limited, predetermined value stored in the microcontroller, extracted from legal maps. When the vehicle's speed approaches the specified limit, an alarm goes on to warn the driver. If the driver does not slow down and the speed keeps increasing beyond the maximum allowed speed, a GPRS packet containing the speed is sent to the hosting server. The ticket is registered at the server side, and an SMS informs the driver about the ticket. Because the GPS signal requires line of sight (LOS), in case there is no valid GPS signal the accelerometer is used to measure the vehicle's speed.
For accident prevention and notification, we use a vibration sensor attached to the vehicle's air-bags. When the air-bags are launched, an accident is detected, so the nearest hospital is informed to send paramedics to handle the situation, and all other vehicles near the crash receive a message to configure another route. The GSM/GPRS module and the GPS sensor are controlled by an Arduino microcontroller. Not all the available data is sent from the microcontroller to the web server; on the contrary, only a selected set of data (i.e. location and speed) is sent, after having been processed and analyzed.
Figure [8] illustrates the proposed model.

Figure 8. GSM-GPRS module.
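Server-side, selecting the geo-cast recipients amounts to a radius query around the crash site. A minimal sketch follows (our own illustration: the function names, the 2 km default radius and the vehicle table are assumptions), using the haversine great-circle distance between two latitude/longitude points:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def geocast_targets(crash, vehicles, radius_km=2.0):
    """Select the ids of vehicles whose last known position lies within
    radius_km of the crash site; these receive the re-route message."""
    return [vid for vid, (lat, lon) in vehicles.items()
            if haversine_km(crash[0], crash[1], lat, lon) <= radius_km]
```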

5.1. Flowchart
Figure 9 depicts the flowchart of our project, starting with measuring the speed and location of a vehicle, as longitude and latitude points. These readings are compared with the standard specified value stored in the microcontroller. If the measured speed is about to exceed a certain level, an alarm is activated. If the driver keeps speeding up and exceeds the maximum allowed speed, or threshold, for about 10 seconds, a GPRS packet is sent to a server so the driver can be ticketed. If there is a need for tracking, the coordinates are transmitted to our web application to be plotted over the Google map. Tracking can be online, by sending the coordinates periodically, or offline, by storing locations in the local memory.
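The warn-then-ticket decision described above can be sketched as a small state machine. This is our own illustrative reading of the flowchart: only the 10-second window comes from the text, while the 90% warning threshold and the event names are assumptions.

```python
class SpeedMonitor:
    """Per-vehicle decision logic: warn when nearing the limit, and
    ticket when the limit stays exceeded for GRACE_SECONDS."""
    GRACE_SECONDS = 10    # from the flowchart description
    WARN_FRACTION = 0.9   # assumed: alarm within 90% of the limit

    def __init__(self, limit_kmh):
        self.limit = limit_kmh
        self.violation_start = None  # time the current violation began

    def update(self, t, speed_kmh):
        """Feed one timestamped speed sample; return the action to take."""
        if speed_kmh > self.limit:
            if self.violation_start is None:
                self.violation_start = t
            if t - self.violation_start >= self.GRACE_SECONDS:
                return "SEND_TICKET"   # GPRS packet to the hosting server
            return "ALARM"
        self.violation_start = None    # back under the limit: reset
        if speed_kmh >= self.WARN_FRACTION * self.limit:
            return "ALARM"
        return "OK"
```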
In order to apply the ticketing algorithm, we need to determine the maximum speed on each road. Since in the West Bank there are no vector maps of the kind typically used for navigation, we have decided to divide the whole area into many small areas and to use a different maximum speed value for each single subarea.
For simplicity, we have considered only two types of subareas, where each subarea is identified by a polygon containing its geographical borders. The first subarea corresponds to the specific territory of the cities (Jerusalem, Bethlehem, Ramallah, Nablus, Jenin, Tubas/Salfit, Tulkarem, Qalqiliya, Hebron and Jericho), whilst the second one coincides with all the remaining territory outside the Palestinian cities' borders.
For the first subarea, the maximum speed has been set at 50 kilometers per hour (km/h), whilst for the second one it has been set at 90 km/h.
In order to check whether a certain vehicle is in the first or in the second subarea, we use a ray casting algorithm, which is explained in the following subsection.
5.2. Ray Casting Algorithm
The ray casting algorithm [26] is used to determine whether a given point P is inside or outside a polygon.

function checkInOrOut(Polygon $pol, Point $pt)
{
    $count = 0;
    foreach ($side in $pol) {
        if ($pt ray_intersect $side) {
            $count = $count + 1;
        } //end if
    } //end foreach
    if (is_odd($count)) {
        return "inside";
    } //end if
    else {
        return "outside";
    } //end else
} //end function

The pseudocode above shows a function which takes two input parameters:
1) a given point, representing the longitude and latitude values of the vehicle's location;
2) a given polygon, surrounding the border of a given geographical region on the map.
The function, after completing its calculations, returns whether the point is inside or outside the region.
The pseudocode line if ($pt ray_intersect $side) checks whether a horizontal ray, which begins at the point $pt and extends to infinity, intersects the segment $side of the polygon. If this is the case, the result is true, otherwise false.
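For concreteness, a runnable version of the same even-odd crossing test is sketched below (our own Python illustration; the deployed implementation is the PHP-style routine shown above) over a polygon given as an ordered list of (x, y) vertices:

```python
def point_in_polygon(pt, polygon):
    """Ray casting test: cast a horizontal ray from pt to +infinity and
    count crossings with the polygon's edges; an odd count means the
    point is inside. `polygon` is a list of (x, y) vertices in order."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the polygon
        # The edge straddles the ray's y-coordinate...
        if (y1 > y) != (y2 > y):
            # ...so find where it crosses that horizontal line,
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            # and toggle only if the crossing lies to the right of pt.
            if x_cross > x:
                inside = not inside
    return inside
```

For the speed-limit check, the vertices would be the longitude/latitude border points of a city polygon and pt the vehicle's current fix.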

Figure 9. Model flowchart. This algorithm runs inside the Arduino Uno microcontroller. Both online and offline modes are supported by this algorithm.

6. EXPERIMENTAL RESULTS
This section presents selected results from our experiments with the ticketing and tracking module established in our laboratory. The purpose of these experiments is to validate the proposed system under different real driving environments and to show a functional prototype.
We mounted two SmartBoards in two cars belonging to our faculty's teachers, and within the university campus we used a laptop to track the cars on the Google map. Furthermore, for the demonstration, we printed all the communication messages on another screen.
After testing the model, one of the cars made a short trip around the university campus. The campus is located in a rural area (an open space with scattered buildings and very wide streets) close to Nablus city, where we found good GPS signal quality. Hence, the algorithm always reads the speed and the geographical location, in terms of longitude and latitude values, from the GPS module and never needs to read the vehicle's speed from the vehicle's accelerometer.

Figure 10. Nablus-Tulkarem trip. The picture above is automatically generated by the system. The map source is Google Maps [5]. The red line represents the trip path, generated by inserting the GPS module readings into Google Maps. The blue lines depict the locations where the vehicle's driver exceeded the threshold speed.

The second car's driver performed a longer trip, starting from Tulkarem city and heading to the university campus in Nablus city. Along the trip path, in some crowded regions with tall buildings and narrow streets, the signal quality received by the GPS module was poor, or at worst no signal was received at all. This scenario allowed us to verify all the algorithm branches, such as reading the vehicle's speed either from the GPS module or from the vehicle's accelerometer depending on the received GPS signal quality. Figure [10] depicts the trace of the vehicle on the Google Map, and Figure [11] lists the geographical location as well as the speed of the vehicle over a 10-minute period.
A separate technical report containing a more detailed discussion of the STMS characteristics and implementation is also available [27].

Figure 11. Nablus city trip: GPS module readings, including the geographical location as well as the speed of the vehicle over a 10-minute period.

7. CONCLUSION AND DISCUSSION
Saving and protecting lives requires the cooperation and commitment of both government and drivers. Much effort and money will be essential to accomplish and maintain a very good level of road safety. Our target is to design a low-cost GPS/GPRS-based wireless control model; the wide spread of the GSM network increases the chance of applying this model in many areas around the world. We hope the proposed model achieves what it is meant for: reducing traffic violations and accidents, leading to a cut in crash expenses and a decrease in the number of resulting casualties, all in favor of human road safety.
The economic study shows the feasibility of our project. After running our model for a long enough period, we expect the data gathered by the tracking and ticketing system to give the authorized departments a clear view of the infrastructure, which can be used in planning and developing improvements in some fields, or in applying regulations aimed at reducing traffic accidents.
We will keep working to expand the experimental section of the proposed model, and we will also consider adding features (such as the geo-casting feature) to the designed model. Once the vibration sensor is added and the geo-casting procedures are complete, all the vehicles within a range of specific geographical coordinates will receive a message to choose another route, so traffic jams and unnecessary delays can be avoided, saving time and money, cutting crash expenses and decreasing the number of resulting casualties, all in favor of human road safety.

8. REFERENCES
[1] Transportation and Communication Statistics in the Palestinian Territory: Annual Report 2011. [Online] Available at: http://www.pcbs.gov.ps/Portals/_pcbs/PressRelease/trans_comm2011e.pdf [Accessed: 24 Nov 2013].
[2] Alkandari, Abdulrahman A., and Imad Fakhri
Alshaikhli. "Traffic Management System Based On
WSN IN Kuwait: AN INITIAL DESIGN." The
International Conference on Informatics and
Applications (ICIA2012). The Society of Digital
Information and Wireless Communication, 2012.
[3] Weiland, Richard J., and Lara Baughman Purser.
"Intelligent transportation systems." Transportation in
the New Millennium (2000).

[4] Dimitrakopoulos, George, and Panagiotis Demestichas. "Intelligent transportation systems." Vehicular Technology Magazine, IEEE 5.1 (2010): 77-84.
[5] Google Map APIs. [Online] Available at: https://developers.google.com/maps/documentation/webservices/ [Accessed: 25 Nov 2013].
[6] Eberspacher, Jorg, and H. Vogel. "GSM switching services and protocols." IEE Review 45.2 (1999): 77-77.
[7] Mobile technologies GSM. [Online] Available at: http://www.etsi.org/index.php/technologiesclusters/technologies/mobile/gsm [Accessed: 25 Nov 2013].
[8] Arduino Microcontroller. [Online] Available at: http://arduino.cc/ [Accessed: 24 Nov 2013].
[9] T. Pratt, Ch. W. Bostian, J. E.Allnutt. (Oct 25, 2002).
Satellite Communications.
[10] P. D. Patinge, N. R. Kolhare. (July 2012). Smart
Onboard Public Information System using GPS and
GSM Integration for Public Transport. International
Journal of Advanced Research in Computer and
Communication Engineering, Vol. 1, Issue V.
[11] EETimes. How does a GPS tracking system work? 2013. [Online] Available at: http://www.eetimes.com/document.asp?doc_id=1278363 [Accessed: 9 Oct 2013].
[12] S. Kallas, J. Yates (December 2009). CIVITAS II, Cleaner and Better Transport in Cities. [Online] Available at: http://www.civitas.eu/.../CIVITAS_II_Final_Brochure_EN.pdf [Accessed: 10 Sep 2013].
[13] V. Goud, V. Padmaja. (July 2012). Vehicle Accident Automatic Detection and Remote Alarm Device. International Journal of Re-configurable and Embedded Systems (IJRES), Vol. 1, No. 2, pp. 49-54.
[14] S. Tarapiah, R. AbuHania, I. Hindi, D. Jamal. (October 2013). Applying Web Based GPS/GPRS Ticketing and Tracking Mechanism to Reduce Traffic Violation in Developing Countries. [Online] Available at: http://sdiwc.net/digital-library/applying-web-basedpsgprs-ticketing-and-tracking-mechanism-to-reducetraffic-violation-in-developing-countries.html [Accessed: 20 Nov 2013].
[15] SparkFun Electronics. 2013. [Online] Available at: www.sparkfun.com [Accessed: 2 Sep 2013].
[16] Alonso, Gustavo, Fabio Casati, Harumi Kuno, and
Vijay Machiraju. Web services. Springer Berlin
Heidelberg, 2004.
[17] Box, Don, David Ehnebuske, Gopal Kakivaya, Andrew
Layman, Noah Mendelsohn, Henrik Frystyk Nielsen,
Satish Thatte, and Dave Winer. "Simple object access
protocol (SOAP) 1.1." (2000).
[18] R. T. Fielding, Architectural styles and the design of network-based software architectures, Ph.D. dissertation, University of California, 2000.
[19] W. W. Eckerson, Three tier client/server architecture: Achieving scalability, performance, and efficiency in client server applications. Open Information Systems, vol. 10, no. 1, 1995.
[20] Flanagan, David. JavaScript. O'Reilly, 1998.
[21] Chaffer, Jonathan, and Karl Swedberg. jQuery Reference Guide: A Comprehensive Exploration of the Popular JavaScript Library. Packt Publishing, 2010.
[22] Fielding, Roy T., and Gail Kaiser. "The Apache HTTP
server project." Internet Computing, IEEE 1.4 (1997):
88-90.
[23] Vegh, Aaron. "MySQL Database Server." Web
Development with the Mac(2010): 317-340.
[24] Friedman, Mark. Microsoft Windows Server 2003
Performance Guide. Microsoft Press, 2005.
[25] Durda, Frank. "Serial and UART Tutorial." FreeBSD
Documentation (1996).
[26] Sutherland, Ivan E., Robert F. Sproull, and Robert A.
Schumacker. "A characterization of ten hidden-surface
algorithms." ACM Computing Surveys (CSUR) 6.1
(1974): 1-55.
[27] S. Tarapiah, R. AbuHania, S. Atalla. "Web Based GPS/GPRS Ticketing and Tracking Vehicles." Technical report (2013). [Online] Available at: http://www.retitlc.polito.it/tarapiah/Technical_Report_Web_Based_GPS_GPRS_GSM_Ticketing_and_Tracking_Vehicles.pdf [Accessed: 26 Nov 2013].


International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 440-450
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

Object-Oriented in Organization Management: Organic Organization

Amr Badr-El-Din
The American University, Cairo, Egypt
AUC Avenue, P.O. Box 74, New Cairo
abadreldin@aucegypt.edu

Abstract
In a time when virtual organizations are fast becoming a phenomenon, this paper aims to explore how IT-based approaches, and specifically the object-oriented (organic) approach, can be used to structure and facilitate the management of such virtual organizations.
As with traditional organizations, a virtual organization needs a structure that is fast, reliable, adaptable and deterministic. In addition, such organizations need a time- and cost-efficient structure to allocate resources and maximize their utilization in a way that avoids deadlock and priority inversion. This paper sets out to present how the organic approach can meet the virtual organization's needs; it will also illustrate how this approach can be applied to the technology sector of Egypt's Ministry of Transport.

Keywords: Organization Structure, Object-Oriented, Organic Organization.

1. Introduction
Management is the set of tasks or activities that govern, design and maintain the environment in which employees work together, while competing at the same time for the organization's resources, to accomplish a set of jobs or tasks effectively and efficiently. As such, structuring the organization is a fundamental management task.
With modern virtual organizations, as with traditional large-sized organizations, setting the structure has become an extremely complicated procedure. Some organizations use classical structuring models, such as the hierarchical organization model (functional, divisional, geographical), the matrix organization or the flat organization, to tackle this issue. However, these models have proved somewhat limited in handling the needs of big virtual organizations.
Virtual organizations need a structure that is fast, reliable, adaptable and deterministic. In addition, these organizations need a structure to allocate resources and maximize their utilization in a way that avoids deadlock and priority inversion. The object-oriented approach is a structural approach that is uniquely positioned to handle these complex demands. Approaches based on information technology (the object-oriented model) and others, such as the monolithic and hierarchical approaches, take their cue from computer operating systems in how to structure an organization. All approaches have a similar end goal, yet apply different designs to structure and operate the organization.
This paper sets out to review the object-oriented (organic) approach and to explain how it can be applied to structure and manage the multitude of complex tasks that make up the virtual organization. Furthermore, it will also illustrate how this approach can be applied to the technology sector of Egypt's Ministry of Transport.
2. Organizations: A Background
2.1 Virtual Organizations vs Virtual Offices
Traditional organizations are an alliance of manufacturing and administrative services that provide specific business needs. The services provided by any organization may include finance, IT, sales, marketing, operations, and distribution, to name just a few. In a traditional organization setup, these services are located in a defined physical space for convenience of coordination, communication and resource

sharing. However, the setup is very different for a virtual organization, which usually outsources most of these services and keeps only its core activities in-house. As such, the typical alliance of manufacturing and services for a virtual organization's resources is located in virtual space.
Although many authors have addressed the concept of the virtual organization since it emerged almost 20 years ago (Lucas et al. [1]), there is no standard definition for the virtual organization concept yet. However, Quinn [2] argued that most successful enterprises can be considered "Intelligent Enterprises", converting intellectual resources into a chain of service outputs and integrating these into a form most useful for certain customers. Unless the facilities and manufacturing technologies are themselves part of the core competencies of the company, strategy dictates that they should be limited and selectively outsourced whenever feasible. On the other hand, Klein [3] argues that as the virtual organization consists of several independent organizations that function as a single entity, a virtual organization mainly depends on outsourcing all of the organization's activities, such as finance, production, sales, IT, etc.
The virtual office, by contrast, consists of many organizational and technological elements that operate as a regular office does, yet are not bound to a specific physical location. The virtual office concept suggests that employees are no longer bound by the management or the physical constraints of a conventional office space. To illustrate the concept, Thayer [4] provides the example of salespeople carrying their office in their portable computers; they communicate electronically and make rare office appearances.
Designers of virtual offices use three methods to accommodate human resources and allocate space: hoteling, moteling, and telecommuting. As Smith [5] explains, hoteling involves the use of unassigned offices that employees can reserve in advance for a specified period of time on a temporary basis, in the same way that business travelers reserve hotel rooms. Similarly, hoteling can be described as the process of sharing space, where employees with varying work schedules rotate into common offices (Hoewing [6]). Moteling works along the lines of just-in-time office planning: in this case no advance reservation is required. Finally, telecommuting refers to employees working from home, with occasional visits to the office.
2.2 Management
As previously described, management is defined as the set of activities that plans, directs and maintains an environment in which employees work together, yet compete for organization resources, to accomplish a set of selected jobs or tasks effectively and efficiently. The resources of the organization can be tangible (e.g. faxes, printers, computers, office space, etc.) or intangible (e.g. managers' time slots).
In that context, the most fundamental management task is setting the organization structure. Mintzberg [7] states that every organized human activity "gives rise to two fundamental and opposing requirements: the division of labor into various tasks to be performed and the co-ordination of these tasks to accomplish the activity". From that perspective, the organization structure can be simply defined as the sum total of the ways in which the organization divides its labor into performing specific tasks and then coordinates between those tasks to perform the ultimate activity or service required. However, a vital problem that faces management is how to organize and coordinate different tasks, with time constraints as well as dependence constraints, to achieve the final goal or objective of the organization (Mintzberg [7] and Lucas et al. [1]). In that sense, management is quite comparable to operating systems like UNIX, Windows, or Apple OS; operating systems can be viewed as managers of resources (hardware and software).
2.3 Management Challenges
While managing traditional organizations is a considerably difficult task, the difficulty is compounded when managing virtual organizations, given that partners and employees are managed and organized remotely. The concept of the virtual organization introduces new management and coordination challenges. The problem gets even more complicated when

the number of partners increases in the case of virtual organizations, or the number of employees increases in the case of virtual offices. As Thayer [4] points out, a manager can no longer walk down the hall and grab someone to do an urgent job. For a more elaborate discussion of the difficulties, as well as the advantages, of managing virtual organizations, see Klein [3].
A major problem that faces the management of both virtual organizations and virtual offices is allocating resources to employees in a way that ensures maximum utilization of those resources. With large organizations, whether traditional or virtual, the most important resources are the time slots of employees, managers, directors, and CEOs. However, there are many other resources that can be used by only one job at a time. In a more realistic setting, a job may even require exclusive access to not only one, but several resources, at the same time or at different times (for example, an employee may request an IT expert, a laptop and a printer all at the same time in order to perform one task or job). At times a job can hold a resource while waiting for another resource to become available to finish the job (for example, an employee may already have access to a laptop and a printer but needs the IT expert to fix something before the job can be done). In virtual organizations and offices this may cause serious problems to arise, i.e. deadlock, process starvation, and priority inversion.
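The hold-and-wait scenario just described is the classic precondition for deadlock. One standard remedy, sketched below under the assumption that each resource can be modeled as a lock, is to impose a single global acquisition order so that no circular wait can form. All resource and job names here are illustrative, not part of the paper's model.

```python
import threading

# Illustrative resources only; any shared asset works the same way.
laptop = threading.Lock()
printer = threading.Lock()
it_expert = threading.Lock()

# One fixed global acquisition order: because every job requests its
# resources in this same sequence, no circular wait (deadlock) can form.
RESOURCE_ORDER = [laptop, printer, it_expert]

def run_job(name, needed):
    """Acquire the needed resources in global order, do the job, release."""
    ordered = [r for r in RESOURCE_ORDER if r in needed]
    for r in ordered:
        r.acquire()
    try:
        return f"{name}: done with {len(ordered)} resource(s)"
    finally:
        # Release in reverse order once the job is finished.
        for r in reversed(ordered):
            r.release()

print(run_job("print-report", {printer, laptop}))
print(run_job("fix-config", {it_expert, laptop}))
```

Because both jobs sort their requests by the same global order, a job can never hold the printer while waiting for a laptop that another job holds while waiting for the printer.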
In addition to effective resource allocation, another problem that faces management is setting a proper organization structure that ensures effective communication and coordination between departments. In that respect, the proper structure will set the guidelines for all the reporting mechanisms that govern the workflow of the company and the communication mechanisms between its employees.
The Corporate Executive Board [8] conducted a study which compares the five basic organization structures from the strategy point of view. The study provides a description of each structure and its advantages, disadvantages, and characteristics. In their view, Kubrak et al. [9] maintain that companies with strong organizational structures benefit from a defined hierarchical structure, improved communications, and the ability to produce a unified company message. Effective communication is required to keep an organizational structure running smoothly and is critical to its success; without it, new ideas and processes can get confused, and managers may redouble efforts to claim certain parts of a process as their own. Communication difficulties become more pronounced in a virtual organization due to spatial considerations, and because of the multitude of tasks, functions and requirements, management can become difficult, if not impossible, with traditional management tools and techniques.
It is at this stage that constructing an intelligent, organic structure model proves helpful in managing the virtual organization's processes and resources, especially in assisting management in maximizing resource utilization and eliminating problems like deadlock, priority inversion, or process starvation. It is worth noting here that such problems could be avoided if resources were not shared by more than one job (thus eliminating the concept of resource sharing) and if each job could have its own set of dedicated resources. However, this is an unlikely scenario, as work will continue to increase but internal resources will not (Norman et al. [10]). Even if it were possible, it would typically be cost prohibitive.
Cordeiro et al. [11] argue that coordination is a fundamental aspect of organizational activity where computers can help. This is motivated by the need to reconcile the conflicts that arise from the division of labor that characterizes any organizational structure and that is present in almost all business processes.
To tackle all the coordination, communication and resource allocation issues listed above, different companies set up unique organizational structures that fit their own needs. However, there are several basic types of organizational structures that are the springboard from which these tailor-made structures arise. The three most common types are the functional structure, the divisional structure, and the product structure. Other types of organizational structures include the geographical structure and the process structure.
One of the more commonly used organization structures is the matrix structure presented by Bartlett et al. [12], who argued that "Top-level managers in many of today's leading corporations are losing control of their companies. The problem is not that they have misjudged the demands created by an increasingly complex environment and an accelerating rate of environmental change, nor even that they have failed to develop strategies appropriate to the new challenges. The problem is that their companies are organizationally incapable of carrying out the sophisticated strategies they have developed. Over the past 20 years, strategic thinking has far outdistanced organizational capabilities."
Thomas et al. [13], basing their findings on surveys, interviews, and workshops with 294 top-level and mid-level managers from seven major multinational corporations in six industries, identified the matrix structure's top five challenges: misaligned goals, unclear roles and responsibilities, ambiguous authority, lack of a matrix guardian, and silo-focused employees.
With the complexity of present-day virtual organizations, the design of their internal structure has become integral to the efficiency of the whole process, and organization designers' main focus is to solve the problem of managing processes competing for resources with certain basic requirements (a bare minimum) in mind. The best virtual organization structure is:
1. Fast, to minimize the average response time for the set of tasks managed.
2. Efficient in resource utilization, i.e. the time a resource is in use compared to the time the resource is available.
3. Adaptable, so that small changes in the system can be carried out easily and smoothly, and only affect well-defined entities.
4. Reliable, in terms of system performance.
5. Deterministic, in the sense that for each possible set of inputs, a unique set of outputs will be determined after a precise time.
3. Virtual Organization: The Object-Oriented (Organic) Structure
Where the object-oriented approach is concerned, the structure of the virtual organization is viewed as a collection of objects. One type of object may be a task or a process. Other types of objects may include IT resources, employees, time slots, managers, managers' time slots, synchronization primitives, and an endless variety of other objects. Each object contains a set of rights that define the operations applicable to that object. Interactions among objects are determined by messages. The resulting structure is a network of objects interconnected by messages. The principle of abstraction is also applied: each object embodies an abstraction of some concept; it hides the internal implementation of that concept and provides a set of operations applicable to the object. The object-oriented paradigm reflects a methodology that is booming. The benefits gained from implementing object-oriented models include:
- Simplicity of understanding, analyzing, and designing, since object-oriented models are very close to mental models of reality.
- Stability over changing requirements, since objects are more stable than functions.
- Excellent support for modularity and encapsulation.
- A high degree of flexibility and reusability.
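The network-of-objects idea can be made concrete with a short sketch: each object hides its internal state and reacts only to the messages it defines. This is an illustrative reduction, not part of the paper's model; all class, message, and task names are hypothetical.

```python
class OrgObject:
    """Base object: hides its state and reacts only to named messages."""

    def __init__(self, name):
        self.name = name

    def send(self, message, payload=None):
        # Interaction among objects happens only through messages.
        return self.handle(message, payload)

    def handle(self, message, payload):
        raise NotImplementedError

class TaskObject(OrgObject):
    """A task in the virtual organization, modeled as an object."""

    def __init__(self, name):
        super().__init__(name)
        self._done = False        # internal state, hidden from callers

    def handle(self, message, payload):
        if message == "start":
            self._done = True
            return f"{self.name} started"
        if message == "status":
            return "done" if self._done else "pending"
        return "unknown message"

task = TaskObject("prepare-report")
print(task.send("status"))   # pending
print(task.send("start"))    # prepare-report started
print(task.send("status"))   # done
```

Callers never touch `_done` directly; they only see the operations the object exposes, which is exactly the encapsulation the bullet list above refers to.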
4. Ministry of Transport Technology Department: Classical Structure
In 2010, the organization structure of Egypt's Ministry of Transport (MoT) consisted of four deputy minister sectors: policies and planning, follow-up and coordination, funding and investments, and technology development. Two other functions reported directly to the minister but not at the sectorial level: projects management, and finance, administration, and human resources. Technological development was clearly an integral part of the Ministry's structure (being at the level of a deputy minister sector), and in fact it was the first time that the ICT domain had become that important within the MoT. This shift within the MoT itself was a clear reflection of the importance of ICT in the transport industry in Egypt in general. Internally, the technology department is divided into four functions: ICT, transport technology, information centers, and research and development.
Figure 1 (ICT Department Functional Organization Structure) illustrates the functions of the MoT's ICT department. As clearly reflected in the figure, the ICT department itself is a mega-sized virtual organization which is responsible for overseeing the ICT departments of more than 60 huge entities, such as the Egyptian National Railway, the Port of Alexandria, the Roads and Bridges Authority, the River Transport Authority, etc. Although all those entities report to and observe the policies and regulations of the Ministry, each is considered a separate entity, which represents itself and has its own set of relevant departments. In other words, the MoT can be viewed as a virtual organization that focuses on its own set of competencies (policies, regulations and governance) and outsources all the other functions to other entities. Because the collective entities that report to the ministerial ICT department employ approximately 130,000 employees, coordination, follow-up, meeting objectives, and maximizing resource utilization become a substantial task. This makes it quite difficult to use classical organization structure approaches without the structure becoming too big and too complicated. To complicate matters further, the MoT set a number of priority objectives for the ICT department: the first is achieving the highest level of safety for the transport industry in Egypt; the second is the highest quality of service; the third is providing luxurious services; and the fourth is providing ultra-luxurious services throughout the transport industry in Egypt.
Figure 2 (ICT Department Matrix Organization Structure) shows the ICT department's complex matrix structure and depicts the priority objectives required of the ICT department, in addition to the technical support provided by the ICT department, such as telecommunication, ICT infrastructure [servers, ERP, Intelligent Transport Systems (ITS), etc.], ICT resources management, and ICT technology services. This matrix structure also takes into account the supra-classification of the transportation industry into railways and metro; maritime and river; and roads and bridges. Each of these sub-industries is quite different, and thus each requires and uses technology that is unique and specific to it. It is apparent from the figure that the huge matrix structure makes the business processes very hard to implement and modify under the given constraints.
5. The Object-Oriented (Organic) Structure: A Proposal
In virtual organizations in general, and especially when a huge number of large entities work to achieve certain tasks or jobs without being physically together, the organization's data structure becomes crucial for keeping track of the multitude of operations that take place. The data structure constitutes the jobs within the organization and the resources available to get these jobs done. The object-oriented approach is a communication model that synchronizes jobs and resources in such a way as to facilitate the workflow of the organization, prioritize jobs, and manage bottlenecks and competition for resources. This approach does not differentiate between a job and a resource; each element, whatever it may be, is considered an object.
Figure 3 (Virtual Organization Organic Structure) illustrates this organic structure for a virtual organization. In the model, the virtual organization (also considered an object under this approach) generates all operations applicable to all jobs and resources. However, the manipulation of each individual job or resource is the responsibility of the object representing that job or resource, respectively. In the model, the virtual organization maintains a list of all jobs that need to be completed; they are given identification numbers and saved in the Jobs List. This list is sorted by priority, and jobs with equal priority are further sorted by time (Priority/Time Queue). Like jobs, the resources of the organization are also identified and saved in lists (for example, an Employees List and a Managers List); other resources are similarly represented by the data structure. Since each job
needs a number of resources (both tangible and intangible) to be executed, each job needs to communicate its need for specific resources. At the same time, the resources of the organization need to be protected from simultaneous access by jobs. In 1965 Dijkstra [14] introduced the concept of semaphores as a method of communication to protect and prioritize the use of resources. This communication concept considerably simplified the synchronization between jobs competing for the organization's resources. In our proposed model, the semaphore registers a certain value for available resources (for example, 50 printers). When a job requests this resource (for example, 5 printers), the number used is deducted from the total resources available, and thus the semaphore value decreases (it registers that only 45 printers are available). If no resources are available at a given moment in time, the semaphore registers a value of zero for this specific resource, and jobs have to wait in a queue until the resource becomes available after being freed up by other jobs. The wait message signaled by the semaphore means that jobs should wait in the queue until a resource is freed up. From the other side, a job sends a signal message to the semaphore when it is done, which in turn increments the value of available resources by the number of resources released. In this sense, the semaphore manages the shared resources of the organization and all the jobs requesting those resources. There are also other types of inter-job communication methods, such as pipes and mailboxes. These communication methods are one-on-one messages between two parties. The pipe synchronizes the activities between two jobs: the receiver has to wait for the sender to fully send the data before it can start to work, and the sender then waits for the receiver to finish its part of the job before it can resume; neither can do another job while this job is ongoing. The mailbox also synchronizes activities between two parties, but it is a more flexible approach: the receiver does not have to wait for the sender to finish immediately, and both can engage in other jobs in the meantime.
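The printer example maps directly onto Dijkstra's counting semaphore: wait deducts from the count of free resources and blocks a job in the queue when too few remain, while signal returns resources and wakes the waiters. A minimal Python sketch (the class and method names are illustrative, chosen to mirror the wait/signal messages in the text):

```python
import threading

class PrinterSemaphore:
    """Dijkstra-style counting semaphore guarding a pool of resources."""

    def __init__(self, count):
        self.count = count                 # e.g. 50 printers available
        self._cond = threading.Condition()

    def wait(self, n=1):
        """Deduct n resources; a job blocks in the queue while too few are free."""
        with self._cond:
            while self.count < n:
                self._cond.wait()
            self.count -= n

    def signal(self, n=1):
        """Return n resources and wake any jobs waiting in the queue."""
        with self._cond:
            self.count += n
            self._cond.notify_all()

printers = PrinterSemaphore(50)
printers.wait(5)        # a job takes 5 printers; the value drops to 45
print(printers.count)   # 45
printers.signal(5)      # the job finishes and frees them again
print(printers.count)   # 50
```

Python's standard `threading.Semaphore` provides the same acquire/release semantics; the hand-rolled version is shown only to make the count and the waiting queue explicit.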
At the same time, the organic structure model creates a list of all the jobs that are ready to proceed. This list is maintained in priority order so that the next job to get resources and start running is always the first job on the list. This block is called Ready Jobs. For jobs that are not ready, another queue is created (Delayed Jobs); this queue is sorted based on the time each job needs to wait until it becomes ready.
Figure 4 (Virtual Organization Organic Structure Process) presents a process snapshot of object communication in a virtual organization. In this micro model, a manager receives a message to create a job. The manager sends a message to create the job and waits for the new job identification number (Job ID). The manager adds the job to the Jobs List and the Ready Jobs Queue. The Ready Jobs Queue requests a number of employees from the Employees List. If the requested employees are available to work, the job will be allocated the necessary resources and will then start. If there are no employees available, the job will be moved to the Delayed Jobs queue and, in turn, moved to wait in the semaphore queue for employees. When resources become available, they are allocated to the job, and the job moves from the semaphore queue back to the Ready Jobs Queue. This sequence happens for essentially all the jobs that take place in the virtual organization.
6. Ministry of Transport Technology Department: Organic Structure
Following the basis of the object-oriented (organic) structure explained in the previous sections, this section outlines the fine details of the proposed organic structure for the MoT's technology department. In addition to viewing each process as an object, another concept (inheritance) is introduced to further facilitate the organic structure. Inheritance is the term used to express the similarity between objects; it simplifies the definition of new objects (called sub-objects) when they have similarities to older objects (called super-objects) that were previously defined. Thus a sub-object inherits common attributes from a super-object. The structure need not define those inherited attributes, because they were previously defined for the super-object; however, the structure does define the attributes that are unique to the newly added sub-object. In short, inheritance portrays generalization and specialization,
making common attributes and services explicit within an object hierarchy (Coad et al. [15]), and one that also allows new objects to be built on top of older, less specialized objects rather than defined from scratch. For example, in Figure 5 (Hierarchy, messages and inheritance) a sub-object (such as FirstFitQ) inherits from the super-object (SortedQueues) shared attributes or messages such as add, del, and delAll. However, there are newly defined messages that are unique to the sub-object (FirstFitQ), such as delete and new. The hierarchy of objects is clearly presented in Figure 5. It starts with the super-object in the upper tier. From there, two categories of sub-objects are added. The first category (on the right-hand side of Figure 5) is related to databases of processes (business processes), resources (business resources), and processors (employees, managers, etc.). The second category (on the left-hand side of Figure 5) is related to the tools and techniques that manage, synchronize and allocate resources to the processes to be executed by the processors. For example, the Semaphore object is used to allocate resources to processes in an exclusive manner. Once a process requests a resource, it activates the message Wait for that resource. If the resource is available, the process will hold the resource until the process is executed. If the resource is not available, the process will wait in a queue for that specific resource until it is freed by the message Signal. Wait and Signal are the two types of messages that the Semaphore object manages. Other objects have other specific messages that are uniquely related to them, and so on.
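The inheritance relation in Figure 5 corresponds directly to class inheritance in an object-oriented language. A minimal Python sketch: a SortedQueue super-object defines the shared messages (add, delFirst, delAll), and sub-objects such as PriorityQ and FirstFitQ inherit them while adding only their own behavior. The "first fit" policy shown is an illustrative guess at that queue's rule, not taken from the paper.

```python
class SortedQueue:
    """Super-object: the messages shared by every queue sub-object."""

    def __init__(self):
        self.items = []

    def add(self, item, key):
        self.items.append((key, item))
        self.items.sort()              # keep the queue sorted by key

    def del_first(self):
        return self.items.pop(0)[1]    # remove and return the head item

    def del_all(self):
        self.items.clear()

class PriorityQ(SortedQueue):
    """Sub-object: inherits add/del_first/del_all unchanged."""
    pass

class FirstFitQ(SortedQueue):
    """Sub-object: inherits the shared messages, adds its own 'fit' rule."""

    def first_fit(self, capacity):
        # Return the first queued item whose key fits within the capacity.
        for key, item in self.items:
            if key <= capacity:
                return item
        return None

q = FirstFitQ()
q.add("small job", key=2)
q.add("big job", key=9)
print(q.first_fit(5))   # small job
print(q.del_first())    # small job (inherited from SortedQueue)
```

Only `first_fit` had to be written for the sub-object; everything else came from the super-object, which is the economy of definition the section describes.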
As such, the system can be viewed as hundreds of object instances that are all structured by hierarchical inheritance (as explained above) and that communicate by messages. A new object can be created at any time, after which it can wait for a resource, get assigned the resource and start running. It can also be delayed until another process finishes first. Finally, it can be terminated at any instant without affecting the whole structure. In the object-oriented approach, the creation and termination of objects is automated during the lifetime of the system, but at any one instant (statically), one can take a snapshot of the organization and view the current situation of all activities running at that particular moment in time. For a more dynamic view, however, tools like colored Petri nets can be used to graphically display the ongoing processes, which would be very useful in preventing deadlock, starvation or priority inversion.
7. Conclusion
This paper proposed the object-oriented (organic) approach as a fast, reliable, adaptable and deterministic organization structure model for the virtual organization. Fast, in that it provides real-time capability for the jobs to which time is essential. Reliable, in terms of the organization's reliance on its solid structure. Adaptable, since it can easily accommodate any changes required to the virtual organization's functions without disturbing other ongoing tasks. Deterministic, since all activities need to determine input, output and execution time with certainty. In that sense, the organic approach is well suited for huge virtual organizations that outsource most of their tasks but at the same time need a reliable way of following up on them.
Another contribution of this paper is a model that reconciles organizational and business process integration and theorizes about their impact in an extremely complicated environment like that of the Egyptian Ministry of Transport's ICT department.
However, it is worth noting that this model is static in presentation. For a dynamic execution of the model, we would need to simulate it in the future with a dynamic tool such as colored Petri nets.


Figure 1: ICT Department Functional Organization Structure

Figure 2: ICT Department Matrix Organization Structure

[Diagram of the virtual organization: organization structure tools (employees list, managers list, index lists, employee and manager objects); communication resources (semaphore, mailbox, and pipe objects with FIFO, priority, best-fit, and first-fit queues); and a jobs list of ready jobs, delayed jobs, and jobs waiting for resources, held in priority and priority/time queues of job objects.]

Figure 3: Virtual Organization Organic Structure

[Message-sequence diagram: the object manager creates a job object for an order, adds its JobID to the jobs list and the ready jobs queue; the job requests an employee object through the employees list and index list, is granted resources via a semaphore object, and may be added to the delayed jobs priority/time queue.]

Figure 4: Virtual Organization Organic Structure Process

[Class diagram: Object subclasses Department, Semaphore (Init:, Signal:, Wait:), Pipes (Init:, claim:, release:, Send:, Receive:), Business Resource, OrderedCollection, Stream, Mailbox (Init:, Send:, Receive:), FIFOQueue (Add:, Del:, delAll:, DelFirst:, DelProcess:), Business Process (Init, Terminate:, addPipe:, delPipe:, getEntry, addResource, delResource, delAllResource:), SortedCollection, Manager (Init:, newID:), SortedQueues, PriorityQ, FirstFitQ, Priority/SizeQ, BestFitQ, and DelayQ (Delete:, New:).]

Figure 5: Hierarchy, messages and inheritance






International Journal of Digital Information and Wireless Communications (IJDIWC) 3(4): 451-459
The Society of Digital Information and Wireless Communications, 2013 (ISSN: 2225-658X)

An Effectiveness Test Case Prioritization Technique for Web Application Testing


Mojtaba Raeisi Nejad Dobuneh (1), Dayang N. A. Jawawi (2) and Mohammad V. Malakooti (3)
(1, 2) Department of Software Engineering, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor Bahru, Malaysia
(3) Department of Computer Engineering, Islamic Azad University, UAE Branch
rmojtaba2@live.utm.my (1), dayang@utm.my (2), malakooti@iau.ae (3)

ABSTRACT
Regression tests are executed after changes are made to an existing application in order to check whether the changes have a negative impact on the rest of the system or on the expected behavior of other parts of the software. There are two primary options for the test suites used during regression testing: generating test suites that fulfill a certain criterion, or using user-session-based test suites. User sessions and cookies are unique features of web applications that are useful in regression testing. The main challenge with the existing techniques is their effectiveness in terms of the average percentage of fault detection and their time constraints. Test case prioritization techniques improve the performance of regression testing by arranging test cases so that the maximum number of available faults is detected in a shorter time. Thus, in this research, priority is given to test cases based on criteria derived from the log files collected on the server side. In this technique, faults are seeded in the subject application, and the prioritization criteria are then applied to the test cases to measure the effectiveness of the average percentage of fault detection.

KEYWORDS
Software testing, web application, regression testing,
prioritization test cases, clustering, user session

1 INTRODUCTION
Web applications have served as critical tools for different businesses. Failure of these critical tools means the loss of millions of dollars for the organizations that use them [1], [2]. Web applications represent their organizations in front of a large audience. Most web applications must run without any interruption, day and night. This requires software testers to detect the software bugs, and software engineers to fix those bugs immediately and release the new

versions. Under such circumstances, the execution of regression tests is performed in order to make the new version perform as per the requirements of the client or organization. The fixing of bugs in web applications must happen within a short time span, so well-chosen test suites can help the testers perform at their best in detecting new faults during the testing phase [3].
The purpose of regression testing is to provide
confidence that the newly introduced changes do
not obstruct the behavior of the existing,
unchanged part of the software. Regression testing
is one of the largest maintenance costs during the
software development cycle. It is a complicated
process for web applications based on modern
architectures and technologies. User session based
testing has the benefit that tests can be
automatically constructed from web logs for use in
regression testing and they contain sequences of
actions that real users have performed. User
session based test cases also have the benefit that
testers do not need to specify input for test cases.
For instance, web applications are accessible
through the Internet and each http POST and GET
request that a user makes is written to a log file.
The logs can then be passed into test cases by
using the IP addresses, cookies, and time stamp
for each POST and GET request in order to
identify the steps of each user and to create the
respective test cases [4].
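The log-to-test-case step described above can be sketched as follows: logged GET/POST requests are grouped into per-user sessions by IP address and cookie, ordered by time stamp. The log format, field names, and data below are invented for illustration and are not the paper's actual log schema.

```python
from collections import defaultdict

# Hypothetical log lines: timestamp, client IP, session cookie, method, URL.
LOG = """\
10:00:01 192.168.0.5 c1 GET /books/search?kw=java
10:00:04 192.168.0.7 c2 GET /login
10:00:09 192.168.0.5 c1 POST /cart/add?id=42
10:00:12 192.168.0.7 c2 POST /login
"""

def sessions_from_log(log_text):
    """Group GET/POST requests into per-user sessions keyed by (IP, cookie)."""
    sessions = defaultdict(list)
    for line in log_text.splitlines():
        ts, ip, cookie, method, url = line.split()
        sessions[(ip, cookie)].append((ts, method, url))
    # Each session, ordered by timestamp, becomes one candidate test case.
    return {user: sorted(reqs) for user, reqs in sessions.items()}

if __name__ == "__main__":
    for user, requests in sessions_from_log(LOG).items():
        print(user, [url for _, _, url in requests])
```

In a real setting the grouping key and time-window rules would follow the server's actual log schema; the point is only that each per-user request sequence maps to one test case.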
2 RESEARCH BACKGROUND
In its simplest form, regression testing executes all the existing test cases in the test suite without any extra handling. However, software engineers know that test suites gradually grow in size as the software is modified. Thus, executing the entire test suite would be very
expensive. This leads the software engineers to
think about deploying efficient techniques to
reduce the effort that is required for regression
testing in different ways.
In the life cycle of an application, a new version of the application is created as a result of (a) bug fixes and (b) requirements modifications [5]. A large number of reusable test cases may be accumulated from different application versions and remain suitable for testing newer versions of the application. However, running all the test cases may take a significant amount of time; one reported example spent weeks executing all the test cases of an earlier version [6]. Given such time restrictions, software testers need to select and order a covering subset of test cases for execution.
The three major approaches for regression testing are test suite minimization, test case selection, and test case prioritization [7]. Test case prioritization (TCP) helps us to find the optimal orderings of the test cases. A prioritization process is not associated with a selection process: it is assumed that all test cases must be executed. Instead, it tries to schedule the execution of test cases in such a way that, if the test process is interrupted or halted early at an arbitrary point, the best possible result is achieved, i.e., more faults have already been detected. TCP was introduced by Wong et al. [8]. The general TCP criteria and techniques are described in the literature review [7]. Structural coverage is the most commonly used metric for prioritization [9].
The logic of this criterion is that faster structural coverage of the whole software code leads to maximum detection of faults in a limited time. Therefore, the aim of this approach is to achieve higher fault detection rates via faster structural coverage. Although the most common prioritization technique is structural coverage in its different forms, some prioritization techniques with different criteria have also been presented [10], [11], [12].
An approach presented by Sampath et al. prioritizes recorded user sessions of previous software versions for TCP in web application testing. Session-based test cases are well suited to web applications because they reflect real user patterns and make the testing process quite realistic [3]. User-session-based techniques are new, lightweight, and useful testing mechanisms. Applying them to web applications makes automating the test process more feasible and simpler.
In user-session approaches, the interactions of the users with the server are collected and the test cases are generated using a suitable policy. The data to be captured are the clients' requests, transported as URLs composed of page addresses and name-value pairs. These data can be found in the log files stored on web servers or in cookies left on clients' machines. Captured data about user sessions can be used to generate a set of http requests and turned into real test cases.
The benefit of the approach is that test cases are generated without any awareness of the web application's internal structure. Test cases generated from user sessions do not depend heavily on the different technologies that are required for web-based applications.
Table 1 compares the existing methods based on user-session-based web application testing between the years 2001 and 2012.

Table 1. Test case prioritization techniques

Rothermel et al., 2001 [6]
Technique: techniques to prioritize test cases for regression testing, including criteria such as code coverage.
Strength: improved the rate of fault detection.
Limitation: not cost effective.

Elbaum et al., 2002 [9]
Technique: applied several new prioritization criteria to test suites.
Strength: leads to a reduction in the amount of required tester intervention.
Limitation: faults need to be seeded in the application.

Srikanth et al., 2005 [11]
Technique: Prioritization of Requirements for Test (PORT).
Strength: improves detection of severe faults during regression testing.
Limitation: not effective in time and cost.

Sampath et al., 2008 [3]
Technique: leverages captured user behavior to generate test cases.
Strength: increases the rate of fault detection.
Limitation: the costs associated with the prioritization strategies must be considered.

Sampath and Bryce, 2012 [13]
Technique: uses both reduction and prioritization of test cases for web applications.
Strength: increased the effectiveness of test suite reduction.
Limitation: using a reduced test suite can miss some of the test cases and consequently affect the finding of faults.

3 PROBLEM STATEMENTS
Current researchers have been using test suite reduction or test suite prioritization to improve the effectiveness of the fault detection rate and time. This research addresses the useful technique offered by Sampath et al., the session-based technique. Although in [3] both approaches were used to improve the effectiveness of user-session-based testing, reduced test suites omit some of the test cases and consequently affect fault detection in the web application.
Thus, it is suggested to improve the method by proposing a new technique that uses clustering and prioritization together with applied criteria. In this research, we propose a method in which the effectiveness of clustered test suites is further improved by ordering the clustered set of test cases, and we show the criteria for this ordering [14]. Such an ordering would be beneficial to a tester who is faced with limited time and resources but must still complete the testing process. Therefore, the problem is to
verify the new technique for improving the
effectiveness of fault detection rate and time of
testing in session-based test case prioritization of
web applications.
3.1 Research Questions
In this paper, a new technique is applied to all the test cases collected from the log files of the respective subject, a book store portal. The research question is:
How can we increase the effectiveness of current session-based test case prioritization techniques for web applications?
In order to answer the above question, the following questions need to be answered:
Question 1: How does the new technique, with test cases prioritized by the number of most common http requests in pages, improve the rate of average percentage of fault detection (APFD)?
Question 2: How effective is the new technique, with test cases ordered by the length of their http request chains, at obtaining a better APFD rate?
Question 3: How does the new technique, with test cases ordered by the dependency of their http requests, help to improve the APFD rate?
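The three criteria in these questions can be read as sort keys over session-derived test cases. The sketch below is a minimal illustration under stated assumptions: the test-case data, the dependency map, and the exact scoring functions are invented for illustration, not the paper's implementation.

```python
from collections import Counter

# Hypothetical test cases: each is the ordered list of HTTP requests
# from one recorded user session (illustrative data only).
test_cases = {
    "t1": ["/login", "/search", "/cart/add"],
    "t2": ["/login", "/search"],
    "t3": ["/login", "/search", "/cart/add", "/checkout"],
}

# Global request frequencies across all sessions.
freq = Counter(r for reqs in test_cases.values() for r in reqs)

# Hypothetical prerequisite map: request -> request it depends on.
DEPENDS = {"/cart/add": "/search", "/checkout": "/cart/add"}

def common_request_score(reqs):
    """Criterion 1: total frequency of the test case's requests."""
    return sum(freq[r] for r in reqs)

def chain_length(reqs):
    """Criterion 2: length of the HTTP request chain."""
    return len(reqs)

def dependency_score(reqs):
    """Criterion 3 (one simplified reading): count requests whose
    prerequisite request appears earlier in the same session."""
    return sum(1 for i, r in enumerate(reqs) if DEPENDS.get(r) in reqs[:i])

def prioritize(key):
    """Order test case ids by descending score under the given criterion."""
    return sorted(test_cases, key=lambda t: key(test_cases[t]), reverse=True)

print(prioritize(chain_length))  # ['t3', 't1', 't2']
```

Each criterion simply changes the sort key; ties and more realistic dependency detection would need refinement in a real implementation.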
3.2 Objectives
Permuting test cases so that the maximum number of available faults is detected faster in a modified version of a web application requires good criteria. The goal of this research is to propose a new technique which merges the two approaches of prioritizing and clustering the test suite, in order to improve one of the test case prioritization techniques, the session-based technique, in web application regression testing.
The research aims to improve the accuracy of existing test suites with respect to the effectiveness of time and the rate of fault detection. The following objectives need to be accomplished in order to achieve our goals:

1. To develop a new technique to prioritize the clustered test cases in the web application testing process.
2. To propose the applied criteria for the fault detection rate to improve the effectiveness of the new technique.
3. To verify the effectiveness of the technique when the above strategies are combined.
3.3 Research Justification
User sessions are used for web application testing, and this approach has been accurate and adequate for the dynamic web application domain; therefore, we conduct our research using session-based test case prioritization for web application testing. Large web applications contain millions of lines of code, and debugging and error detection across all these lines is time consuming. There are also many object interactions, which in turn depend significantly on the interactions of users. Automated testing becomes complicated due to the continuous maintenance process and the changes that occur in the profiles of the users [15]. Most of the existing literature studies the two test approaches disjointly. Reduction techniques generate a smaller set of tests than the original suite, yet even a reduced test suite can be so large that it cannot be executed completely under time constraints. In this paper, we propose a method in which the effectiveness of clustered test suites is further improved by ordering the clustered set of test cases, and we show the criteria for the ordering. Such an ordering would be beneficial to a tester who is faced with limited time and resources but must still complete the testing process.
4 RESEARCH METHODOLOGIES
This section describes the method used to study the field of research, perform the work, and explain the results. This research is conducted to improve the rate of fault detection in web applications. For the proposed new technique with three criteria for prioritizing test cases, the first step is dedicated to collecting the log
files from the server side of the subject Bookstore web application, which was chosen as the case study in this research. The online Book Store is a shopping portal for buying books: it allows users to register, log in, browse for books, search for books by keyword, rate books, add books to a shopping cart, modify personal information, and log out. Bookstore uses JSPs for its front end and a MySQL database for the back end.
The Bookstore application was introduced to undergraduate students of Universiti Teknologi Malaysia (UTM) in order to collect log files on the server side. The log files were collected for 60 days. Clustering the previous works and documents and gathering useful information for the next steps is done in step 2. A new problem is defined and explained in step 3; this step also includes the techniques and methods used to solve the problem. Performing some real experiments and writing up the results is the next step of our research, shown as step 4. Finally, some comparisons are made between the results that we obtained and those of previous works to reach the conclusion, which is explained in step 5. Thus, this project is planned to be accomplished in 5 steps. The research procedure is based on the five main phases shown in Figure 1. In the first phase we introduce the literature review. The second phase finds the problem by systematic mapping of web application testing and determines the problem statement in this domain. The third phase proposes the solution, and we check the rate of fault detection in the fourth phase. Finally, the last phase is an evaluation with a case study that compares the average fault detection rate with the results of previous techniques.

Figure 1. Research process

4.1 Literature Review
The focus of the research is on two approaches in web application regression testing, clustering the test suite and prioritizing the test suite, with respect to the effectiveness of time and the fault detection rate. This step is aimed at surveying the current state of the art and challenges in web application testing, considering test case prioritization techniques for better fault detection in less time. The drawbacks of these approaches are explored thoroughly in the literature review step of the research process.
4.2 Problem Definition
The analysis of the problem in test case prioritization and web application testing is performed using the current literature review as a base for the issue or problem.
4.3 Proposed Solution
The goal of this research is to propose a new test case prioritization technique for improving the effectiveness of the fault detection rate and time. Given these objectives, the first step of the proposed solution is generating test cases based on the user-session data in the log files on the server side. We then cluster the test suite into equivalence classes or groups of test suites in step 2. Finally, we prioritize the test suite based on the new criteria proposed in the research questions.
For test case generation, the test cases come from real user sessions of the web application: the log files on the server side are collected, and each log file is converted into test cases.
Test suite clustering techniques can be categorized into two modes, partitional or hierarchical. A partitional clustering algorithm constructs partitions of the data, where each cluster optimizes a clustering criterion, such as the minimization of the sum of squared distances from the mean within each cluster. The complexity of partitional clustering is large because it enumerates all possible groupings and tries to find the global optimum. Even for a small number of objects, the number of partitions is huge. That is why common solutions start with an initial, usually random, partition and proceed with its refinement. A better practice would be to run the partitional algorithm for different sets of initial points (considered as representatives) and investigate whether all solutions lead to the same final partition. Partitional clustering algorithms try to locally improve a certain criterion: first they compute the values of the similarity or distance, then they order the results and pick the one that optimizes the criterion [16]. Hence, the majority of them can be considered greedy-like algorithms.
Hierarchical algorithms create a hierarchical
decomposition of the objects. They are
agglomerative (bottom-up) or divisive (top-down):
(a) Divisive algorithms start with one group of all
objects and successively split groups into smaller
ones, until each object falls in one cluster, or as
desired [17].
(b) Agglomerative algorithms follow the opposite
strategy. They start with each object being a
separate cluster itself, and successively merge
groups according to a distance measure. The
clustering may stop when all objects are in a
single group or at any other point the user wants.
These methods generally follow a greedy-like
bottom-up merging.
Divisive approaches divide the data objects into disjoint groups at every step, and follow the same pattern until all objects fall into separate clusters. This is similar to the approach followed by divide-and-conquer algorithms. To gather all test cases into similarity groups, we have used the K-means clustering method.
The K-means algorithm comprises the following four steps:
1- Choose k initial cluster centers (representing the k transaction groups) randomly from the center of the hypercube.
2- Assign all data points (representing the transactions) to the closest cluster (measured from the cluster center). This is done by presenting a data point x and calculating the similarity (distance) d of this input to the weight w of each cluster center j; the closest cluster center to a data point x is the cluster center with the minimum distance to the data point x.
(1)  d(x, w_j) = || x - w_j ||

3- Recalculate the center of each cluster as the centroid of all data points in that cluster. The centroid c is calculated as follows:

(2)  c = (1/N) * sum_{i=1}^{N} x_i

where N is the number of data points in the cluster.
4- If the new centers are different from the previous ones, repeat steps 2 and 3; otherwise terminate the algorithm.

4.4 Prioritization
Test case prioritization (TCP) helps in finding the optimal orderings of the test cases. A prioritization process is not associated with a selection process, and it is assumed that all test cases must be executed; instead, it tries to schedule the running of the test cases so that, if the test process is interrupted or halted early at an arbitrary point, the best result, i.e., finding more faults, is achieved.
The three proposed prioritization criteria, prioritization by the number of most common http requests in pages, prioritization by the ordered length of http request chains, and prioritization by the order dependency of http requests, have been applied in this technique to obtain a better result.

4.5 Evaluation
The rate of fault detection is defined as the total number of faults detected in a given subset of the prioritized test case order [6]. For a test suite T with n test cases, if F is a set of m faults detected by T, then let TFi be the position of the first test case t in T', where T' is an ordering of T, that detects fault i. Then, the APFD metric for T' is given as:

(3)  APFD = 1 - (TF1 + TF2 + ... + TFm) / (n * m) + 1 / (2n)

For 100% detection of faults, the time used by each prioritized suite is measured properly. The best possible option is to detect the total faults in the earlier stages of the tests.

5 RESULTS
This phase of the research methodology comprises the verification and validation of the proposed new prioritization strategy for web-based applications. The subject Bookstore was selected in order to verify it by applying the criteria to the test cases of the web application.
The results of the applied criteria for Bookstore are shown in Table 2. As can be seen, the third criterion, which ordered the test cases based on the dependency of http requests, detected more faults at an early stage of running the test cases: 50% of the whole set of test cases detected about 94% of the faults. 93.33% of the total faults were detected by the first criterion (number of most common http requests in pages) after executing 60% of the existing test cases. The second criterion, prioritization by the length of http request chains, performed poorly, detecting 94% of the faults only by running more than 90% of the test cases.
Test cases executed with random priorities create a reasonably effective test order, with an APFD comparable to the other techniques.
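The four K-means steps described in Section 4.3 can be sketched in a few lines. For reproducibility this sketch initializes the centers deterministically from the first k points rather than randomly, and the 2-D sample "session feature vectors" are made up.

```python
import math

def kmeans(points, k, iters=100):
    """Plain K-means following the four steps in Section 4.3."""
    centers = list(points[:k])  # step 1 (deterministic here; the paper picks randomly)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:        # step 2: assign each point to the nearest center
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        new_centers = [         # step 3: recompute each centroid as the cluster mean
            tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # step 4: stop when the centers are unchanged
            break
        centers = new_centers
    return centers, clusters

# Two well-separated groups of 2-D points (made-up data).
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

On these well-separated points the algorithm converges in a few iterations to one cluster per group; with random initialization, multiple restarts would be advisable, as Section 4.3 notes.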

Table 2: Average percentage of fault detection for Bookstore

Percentage of test cases executed | First criterion | Second criterion | Third criterion | Random
10%  | 76.67 | 70.00 | 76.67 | 53.33
20%  | 83.33 | 80.00 | 76.67 | 53.33
30%  | 83.33 | 86.67 | 83.33 | 66.67
40%  | 83.33 | 86.67 | 83.33 | 80.00
50%  | 86.67 | 86.67 | 93.33 | 80.00
60%  | 93.33 | 86.67 | 93.33 | 80.00
70%  | 93.33 | 86.67 | 93.33 | 80.00
80%  | 93.33 | 93.33 | 93.33 | 86.67
90%  | 93.33 | 93.33 | 93.33 | 93.33
100% | 93.33 | 93.33 | 93.33 | 93.33

[Line chart of APFD (0-100%) versus the number of test cases executed (30-300) for the first, second, third, and random prioritization criteria.]

Figure 2. Results of APFD prioritization criteria

Figure 2 shows the graph of APFD for the prioritization criteria in the proposed technique.
6 CONCLUSIONS
The web application domain has the advantage that actual user sessions can be recorded and used for regression testing. While these tests are indicative of the users' interactions with the system, combining clustering and prioritizing of user sessions has not been thoroughly studied. In this paper we examined a new technique for cluster-based test case prioritization of such user sessions for web applications, applying several new prioritization criteria to these test suites to identify whether they can be used to increase the rate of fault detection. Prioritization by frequency metrics and systematic coverage of parameter-value interactions increased the rate of fault detection for web applications.

7 REFERENCES
1. Blumenstyk, M., Web application development - Bridging the gap between QA and development, (2002).
2. Pertet, S. and Narasimhan, P., Causes of Failure in Web Applications (CMU-PDL-05-109), Parallel Data Laboratory, 48, (2005).
3. Sampath, S., Bryce, R. C., Viswanath, G., Kandimalla, V. and Koru, A. G., Prioritizing user-session-based test cases for web applications testing, Proceedings of the 1st International Conference on Software Testing, Verification, and Validation, IEEE, 141-150, (2008).
4. Sprenkle, S., Gibson, E., Sampath, S. and Pollock, L., Automated replay and failure detection for web applications, Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, ACM, 253-262, (2005).
5. Onoma, A. K., Tsai, W.-T., Poonawala, M. and Suganuma, H., Regression testing in an industrial environment, Communications of the ACM, 41(5), 81-86, (1998).
6. Rothermel, G., Untch, R. H., Chu, C. and Harrold, M. J., Prioritizing test cases for regression testing, IEEE Transactions on Software Engineering, 27(10), 929-948, (2001).
7. Yoo, S. and Harman, M., Regression testing minimization, selection and prioritization: a survey, Software Testing, Verification and Reliability, 22(2), 67-120, (2012).
8. Wong, W. E., Horgan, J. R., Mathur, A. P. and Pasquini, A., Test set size minimization and fault detection effectiveness: A case study in a space application, Journal of Systems and Software, 48(2), 79-89, (1999).
9. Elbaum, S., Malishevsky, A. G. and Rothermel, G., Test case prioritization: A family of empirical studies, IEEE Transactions on Software Engineering, 28(2), 159-182, (2002).
10. Leon, D. and Podgurski, A., A comparison of coverage-based and distribution-based techniques for filtering and prioritizing test cases, Proceedings of the 2003 International Symposium on Software Reliability Engineering, IEEE, 442-453, (2003).
11. Srikanth, H., Williams, L. and Osborne, J., System test case prioritization of new and regression test cases, Proceedings of the 2005 International Symposium on Empirical Software Engineering, IEEE, 10 pp., (2005).
12. Tonella, P., Avesani, P. and Susi, A., Using the case-based ranking methodology for test case prioritization, Proceedings of the 22nd IEEE International Conference on Software Maintenance, IEEE, 123-133, (2006).
13. Sampath, S. and Bryce, R. C., Improving the effectiveness of test suite reduction for user-session-based testing of web applications, Information and Software Technology, 54, 724-738, (2012).
14. Raeisi Nejad Dobuneh, M., Jawawi, D. N. A. and Malakooti, M. V., Web application regression testing: A session based test case prioritization approach, The International Conference on Digital Information Processing, E-Business and Cloud Computing, 107-112, (2013).
15. Kirda, E., Jazayeri, M., Kerer, C. and Schranz, M., Experiences in engineering flexible web services, IEEE Multimedia, 8(1), 58-65, (2001).
16. Lazli, L., Mounir, B., Chebira, A., Madani, K. and Laskri, M. T., Connectionist probability estimators in HMM using genetic clustering application for speech recognition and medical diagnosis, International Journal of Digital Information and Wireless Communications, 1(1), 14-31, (2011).
17. Kaufman, L. and Rousseeuw, P. J., Finding Groups in Data: An Introduction to Cluster Analysis, John Wiley and Sons, (1990).

