
Proceedings of the

National Conference on

Communication Control and


Energy System
NCCCES'11

29-30 August 2011

Organized by

School of Electrical Sciences

Vel Tech Dr. RR & Dr. SR Technical University


(VEL TECH RANGARAJAN Dr. SAGUNTHALA R&D INSTITUTE OF
SCIENCE AND TECHNOLOGY)

#42, Avadi-Vel Tech Road, Avadi, Chennai - 600 062, Tamil Nadu, India.

PROCEEDINGS OF THE NATIONAL CONFERENCE ON


COMMUNICATION CONTROL AND ENERGY SYSTEM

Vel Tech Dr. RR & Dr. SR Technical University, Chennai


August 2011

No part of the material protected by this Copyright notice may be reproduced or utilized in any form or by any
means, electronic or mechanical including photocopying, recording or by any information storage and retrieval
system, without prior written permission from the Copyright owner.

ISBN: 978 93 80624 41 9

Published by
Jai Tech
10/785, Elangovan Salai,
Mogappair East,
Chennai - 600037, Tamil Nadu.

Page Layout and Printed by


HiKey Media,
greendurai@yahoo.com


VEL TECH Dr.RR & Dr.SR TECHNICAL UNIVERSITY

VEL TECH RANGARAJAN Dr.SAGUNTHALA R&D INSTITUTE OF SCIENCE AND TECHNOLOGY
(University u/s 3 of UGC Act, 1956)

#42, Avadi-Vel Tech Road, Avadi, Chennai - 600 062, Tamil Nadu, INDIA.

Prof. Dr. Vel. R. Rangarajan


Founder Chancellor

MESSAGE
Dear Students,
All the young minds of the EEE, ECE and EIE departments have brought NCCCES'11 together through pure dedication and willpower.
To embark on a journey of success, one needs the tools of preparedness, foresight and strategy. Education alone is not enough to complete a student's knowledge. The drive to reach your goal and the confidence with which you handle tough situations bring out your true caliber. Opportunity knocks at the door just once, and it is up to you, dear students, to recognize and utilize it to the fullest. I am writing all this to you now as a successful person who believes in hard work. Triumph will treat you to the sweet fruits of labour if you put in hard work to the fullest.
I presume that this conference will reveal the hidden talents of many and result in creating great engineers for the future.
I sincerely hope that NCCCES'11, organized by these blooming blossoms, will be a grand victory.

Prof. Dr. Vel. R. Rangarajan, B.E(Elec), B.E(Mech), M.S(Auto), D.Sc


Founder - Chancellor


Dr. (Mrs). Sagunthala Rangarajan


Founder Vice Chairman

MESSAGE
Dear Students,
It gives me great pleasure to note that the students of the Departments of Electrical and Electronics Engineering, Electronics and Communication Engineering, and Electronics and Instrumentation Engineering have come together to offer a National Level Technical Conference, NCCCES'11, for our benefit.
Nothing dies faster than a new idea in a closed mind. These students have brought out the best in themselves to make this event possible.
The conference serves as a platform for connecting young and intelligent minds.
The theme of the conference, NCCCES'11, fits very well into our education philosophy enshrined in Vision, Virtues and Ventures. This endeavour towards the fulfillment of our mission truly deserves our deep appreciation.
I wish the staff and students all the very best, and I am sure NCCCES'11 will be a grand success.

Dr. (Mrs.) Sagunthala Rangarajan, M.B.B.S.


Founder - Vice Chairman


Dr. Mrs. Rangarajan Mahalakshmi. K


Chair Person - Managing Trustee

MESSAGE
Beloved Students,
I congratulate the Departments of EEE, ECE, EIE of Vel Tech Dr.RR & Dr.SR Technical
University for holding a National Level Technical Conference NCCCES'11 on 29th and 30th
of August 2011.
A conference is a professional performance in which students share a common ideal, embrace a common goal, and strive with trust and commitment to the end.
The four essential ingredients that make a person truly wholesome:
Choose a career you love
Give it the best
Seize your opportunities
Be a member of the team
These words by Benjamin Fairless are truly inspiring. Opportunities come by rarely, and it is the duty of the student to make the most of them.
My heartfelt wishes for the grand success of their conference.

Dr. Mrs. Rangarajan Mahalakshmi. K, B.E., MBA (UK)


Chair Person - Managing Trustee


Mr. K.V.D. Kishore Kumar


Director

MESSAGE
Dear Students,
I extend my hearty wishes to the students of ECE, EEE and EIE who have made this mammoth event possible. Their fertile, creative minds have resulted in the sharing of their knowledge in the domain of the School of Electrical Sciences, which in turn has opened up new vistas of opportunities for world-class performance.
NCCCES'11, a National Level Students' Technical Conference, is a platform where innovative minds of great caliber come together and open up new possibilities in today's globalized, competitive scenario.
Young minds have the ability to grasp and process ideas fruitfully at a faster pace.
I am sure that this conference will create a stage for the exhibition of talents by the student community.
Once again I wish all the participants success in their careers, to create milestones continuously.

Mr. K.V.D. Kishore Kumar, B.E., M.B.A (U.S.A)


Director


Dr. M. Koteeswaran
Vice Chancellor

MESSAGE
It augurs well for the School of Electrical Engineering to organise a National Conference exclusively open to engineering students across the country.
We are confident that this National Conference on Communication Control and Energy System, to be held on 29-30 August 2011, will provide a platform for technical professional students to showcase their technical skills and knowledge and demonstrate their appreciation of the subject.
The School of Electrical Engineering deserves appreciation for having chosen the two most relevant and pervasive subjects, which have irreversibly changed the way humans live and broken the barrier of distance by keeping people connected at all times. This has spread a vast canvas of the widest diversity in nature and content to provoke interest in every participant. It should provide a fertile ground for the cross-fertilisation of ideas and technology and the application of multi-disciplinary approaches in identifying and demystifying the challenges and issues in a variety of areas of the communication and energy sectors. We are sure students will seize this opportunity and reap great benefits.
I wish the Conference success.
Dr.M. Koteeswaran
Vice -Chancellor


Dr. E. Kannan
Registrar

MESSAGE
I have great pleasure in welcoming you all to the National Conference on Communication, Control and Energy System, NCCCES 2011.
Globalization has brought many benefits, yet there is growing contention over how these benefits are shared, and increasing recognition that globalised markets require greatly improved global governance. Globalization is creating intense business pressure for many firms. Worldwide competition is fierce among organizations, and the recession is making it even harder for many organizations to sustain their competitive advantage. To combat this challenge, organizations worldwide have been forced to look for innovation in their business practices. Innovation is the specific tool of entrepreneurs, the means by which they exploit change as an opportunity for a different business or service. It is capable of being presented as a discipline, capable of being learned, capable of being practiced.
This conference aims to bring together national practitioners in the fields of electrical, communication and instrumentation engineering to exchange knowledge and understand the need for change, with the members of their own profession and members of the multi-professional team. The conference is designed to maximize the development of collaborative links and provide an opportunity for informal discussions and recreation.
I am grateful to our Founder and Chancellor Prof. Dr. R. Rangarajan, and the Management of Vel Tech Dr.RR & Dr.SR Technical University, Chennai, for all their help and support, without which this event could not have taken place. I thank all the keynote speakers, moderators and delegates whose contributions made NCCCES 2011 a successful event. My special thanks to the members of the conference committee, whose involvement and support are greatly appreciated.
Dr. E. Kannan
Registrar

Commodore (Rtd) S. Saseendran


Director

MESSAGE
I am very happy that Vel-Tech University is organising a National Conference on
Communication, Control and Energy System. The topics and themes covered in the event
have very high relevance to the world's emerging technologies. This conference is yet another
endeavour by Vel-Tech for sharing knowledge, information and trends in technological
advancements among participants, with a focus on the student community. The event will also act
as a good forum for interaction between the students and faculty, resulting in a greater bond
between them.
I am sure this conference will provide a great opportunity and a well-balanced platform for the
participants for enhancing knowledge and technical skill in relevant areas.
Best wishes for the grand success of the Conference.

Commodore (Rtd) S. Saseendran


Director

Programme Committee
CHIEF PATRONS

Dr. R. Rangarajan, Founder and Chancellor


Dr. Mrs. Sagunthala Rangarajan, Founder and Vice Chairman
Dr. Mrs. Rangarajan Mahalakshmi. K, Chairperson and Managing Trustee
Mr. K.V.D. Kishore Kumar, Director

PATRONS

Dr. M. Koteeswaran, Vice Chancellor


Dr. E. Kannan, Registrar

CONVENORS

Dr. M.S. Varadarajan, Dean, School of Electrical Sciences


Mr. S. Sivaperumal, DDD

CO-CONVENORS

Ms. R. Gayathri, ECE


Prof. S. Balraj, EEE
Mr. V.S. Hemakumar, EIE

TECHNICAL REVIEW

Dr. M.S. Varadarajan, Dean, SOE


Prof. S. Balraj, EEE
Prof. Joseph Henry, EEE
Ms. R. Gayathri, ECE
Mr. Ananda Saravanan, ECE
Mr. V.S. Hemakumar, EIE
Mr. A. Selvin Mich, EIE
Mr. P.K. Dhal, EEE

STAFF COORDINATORS

Mr. Sivakumar, EEE


Mr. Ananda Saravanan, ECE
Mr. A. Selvin Mich, EIE

STUDENTS COORDINATORS

Mr. V.Gauthugesh, ECE


Mr. B.Ashish, EEE
Mr. R.Jagateeshkumar, EIE
Faculty Members

Student Members

BROCHURES / POSTER PRINTING AND POSTING COMMITTEE

Mr. Sivakumar, EEE


Mr. Immanuel, EIE
Mr. Ramesh, ECE

Mr. V. Jaganathan, ECE


Mr. S. Ram Kumar, EEE
Mr. B. Vignesh Ram, EIE

BUDGET COMMITTEE

Mr. Selwin Mich, EIE


Mr. Ramkumar, ECE
Mr. K. Ganesan, EEE

Mr. T.L. Nirupama, ECE


Ms. R. Ramya, EEE
Ms. J. Reeni, EIE

INVITED SPEAKERS / JUDGES AND CHIEF GUEST

Ms. R. Gayathri, ECE


Mr. V.S. Hemakumar, EIE
Prof. Balraj, EEE

Mr. Nirmal, ECE


Mr. S. Mohana Priya, EEE
Ms. P. Yamini, EIE

PROCEEDING COMMITTEE

Mr. Sivakumar, EEE


Mr. Udhayakumar, EIE
Mr. Vignesh Prassana, ECE

Ms. S. Nandhini, ECE


Mr. M. Ram Prashanth, EEE
Mr. K. Sathish Kumar, EIE

INVITATION PREPARATION

Mr. V.S. Hemakumar, EIE


Mr. Anandh, EIE
Mr. Premanandh, Lab Asst., EIE

Mr. V. Gauthugesh, ECE


Mr. K. Siva Kumar, EEE
Mr. Anurag Srinivasan, EIE

CERTIFICATE PREPARATION

Mr. Anandh, EIE


Mr. Premanandh, Lab Asst., EIE

Ms. P. Gayathri, ECE


Mr. Koushik, EEE
Mr. Ramakrishan, EEE
Ms. B. Pavithra, EIE

TRANSPORT ARRANGEMENT FOR VIP

Mr. Sivaraj, EEE


Mr. K. Baskar, EEE
Mr. G.R. Karthi, EEE

Mr. H. Hemanth Kumar, ECE


Mr. D. Sanjay, EEE
Mr. R.S. Suriya Prakash, EIE

TRANSPORT ARRANGEMENT FOR PARTICIPANTS

Mr. Vimal, EEE


Mr. G. Ilangovan, EEE

Mr. N. Balaji, ECE


Mr. E.M.I. Hassan Meeran, EEE
Mr. E. Sathish Kumar, EIE

HOSPITALITY

Mr. P.K. Dhal, EEE


Mr. Ananda Saravanan, ECE

Mr. M.K. Kalaiarasi, ECE


Mr. V. Gunasundari, EEE
Ms. D. Aruna, EIE

ACCOMMODATION OF CHIEF GUEST / JUDGES

Mr. Kalidass, EEE


Mr. Vijay Albert, EEE

Mr. N. Pramodh, ECE


Mr. S. Praveen Kumar, EEE
Mr. V. Subin, EIE

ACCOMMODATION OF PARTICIPANTS IN HOSTEL

Mr. Meivel, ECE


Ms. Mona, ECE

Mr. A. Vimal Raja, ECE


Mr. P. Bharath, EEE
Mr. B. Thirumal, EIE

AUDITORIUM & SEMINAR HALLS COMMITTEE

Mr. J. Sreenivasan, ECE


Mr. Udhayakumar, EIE
Mr. S. Ramakrishnan, EEE

Mr. S. Karthik, ECE


Mr. K. Kishore, EEE
Mr. R. Mareeswaran, EIE

VIP BREAKFAST & LUNCH ARRANGEMENT

Mr. Senthil Kumar, EIE
Mr. K. Baskar, EEE

Mr. M.S. Sumesh, ECE
Mr. P. Udaya Kumar, EEE
Mr. G. Venkatesh, EIE

PARTICIPANTS BREAKFAST & LUNCH ARRANGEMENT INCLUDING SNACKS & TEA

Mr. Anandh (Program Manager)
Mr. Selvakumar, Lab Asst., EEE
Mr. Koteeswaran, Lab Asst., EEE

Mr. A. Arun Prasadh, ECE
Mr. K. Vetrivel, EEE
Mr. A. Sathish Kumar, EIE

COMPERING TEAM

Ms. Kamaline, EEE


Ms. Pavithra, ECE
Mr. Vijayaraghavan, ECE

Mr. Ashwin Sam, ECE


Mr. Mrinmayi Surendran, EEE
Ms. G. Shruthi, EIE

MULTIMEDIA PRESENTATION

Mr. Ramkumar, ECE

Mr. R. Karthik, ECE


Mr. S. Pon Jamieeson Paul, EEE
Mr. G. Gowtham Karthik, EIE

DECORATION COMMITTEE

Mr. Kanthikumar Kota, EIE


Mr. Ezhilan, EEE
Mr. Kalidoss, EEE
Mr. Vignesh Prassana, ECE

Mr. M. Goutham Raj, ECE


Mr. K. Siva Kumar, EEE
Mr. V. Vignesh, EIE

WEB DESIGNING

Mr. Ramkumar, ECE

Mr. Naveen Kumar, ECE


Mr. D. Karthick, EEE
Mr. N. Suresh, EIE

STATIONERY & PARTICIPANT KITS, BADGES

Mr. Tamilmani, EIE


Mr. V. Dillibabu, EIE

Mr. S. Mohamed Bilal, ECE


Mr. M. Sai Ganesh, EEE
Mr. S. Mukesh Anand, EIE



POOJA PRAYER

Ms. Pavithra, ECE


Ms. Anitha, ECE

Mr. D. Prasad, ECE


Ms. R. Suba Shriv, EEE
Ms. B. Pavithra, EIE

PUBLICITY

Mr. Ramesh, ECE

Mr. P. Ramki, ECE


Mr. Ajai Vaidyanathan, EEE
Mr. V. Arun, EIE

NCC

Mr. Senthil Kumar, EIE

Mr. R. Prem Kumar, ECE


Ms. M. Janani Sri, EEE
Mr. N. Suresh, EIE

RANGOLI

Ms. Kalpana, Lab Asst., EEE & Team

Ms. G. Preethi, ECE


Ms. S. Vidhaya, EEE
Ms. R. Divya, EIE

MEMENTO & GIFTS & FLOWERS

Mr. Iyyapan, ECE


Mr. Anand Babu, ECE
Mr. Uma Maheswaran, ECE

Mr. C. Naresh, ECE


Mr. Sateesh Kumar, EEE
Mr. J. Muthukrishnan, EIE

CLASSICAL DANCE/
BHARATHANATIYAM

Ms. Suganthy, ECE

Ms. S. Maya Kumari, ECE


Mr. Hamsavalli, EEE
Mr. K. Moorthy, EEE
Mr. N. Devi Priya, EIE

PHOTOS & VIDEO ARRANGEMENT

Mr. V. Dillibabu
Mr. Sridhar, Lab Asst., EEE

Mr. B. Karthiravan, ECE


Mr. Ruthuraj V. Sawanth, EEE
Mr. S. Karthick, EIE

REGISTRATION

Ms. J. Jothy, EIE


Ms. Bharathi, ECE
Ms. Malathy, ECE
Ms. Subhamalini, EEE
Ms. Thilagamani, EEE

Mr. Y.T. Siva Kumar, ECE


Ms. J. Nandhini, EEE
Ms. N. Nithya, EIE

RECEPTION

Ms. Mona, ECE


Ms. Margaret, EIE
Ms. K.S. Latha, EEE
Ms. Jibin Priya Devi, ECE

Ms. M. Aishwariya, ECE


Ms. J. Preethi, EEE
Ms. S. Dhivya, EIE


Contents
Messages

iii

Programme Committee

xi

MOBILE COMMUNICATION
Mobile Communication
R. Sridhar, S. Sivakumar, M. Ramesh and A. Rajasekar
Dhaanish Ahmed College of Engineering

1

Customized Applications for Mobile Enhanced Logic (CAMEL)


G. Venkatanadhan and S. Amal Raj
Veltech Dr.RR. & Dr.SR. Technical University

4

ZigBee Based Electric Meter Reading System


A. Kastro, K. Aanandha Saravanan and N. Vignesh
Veltech Dr.RR & Dr.SR Technical University

7

Cloud Computing and its Security Issues


C.K. Dinesh Kumar, D. Jayanth and K. Vinoth
Veltech Dr.RR & SR Technical University

11

Mobile Computing Location Management


S. Sapna and D. Balasathiya

15

Electronic Voting System using Mobile


H. Hemanth Kumar and C. Naresh
Veltech Dr.RR & Dr.SR Technical University

19

SOS Transmission through Cellular Phones to Help Accident Victims


Himanshu Joshi
Veltech Dr.RR & Dr. SR Technical University

23

4G Communications
D. Krishnakumar and J. Manikandan
Veltech Dr.RR & Dr.SR Technical University

28

1-Bit Nano-CMOS Based Full Adder Cells for Mobile Applications with Low Leakage and Ground Bounce
Noise Reduction
S. Porselvi
Veltech Dr.RR & Dr.SR Technical University

37

SIGNAL AND IMAGE PROCESSING


BioIDs - A Security Guaranteed System
V. Gauthugesh and M. Dhivya
Vel Tech Engineering College

43

Extraction of Scene Text using Mobile Camera


R. Bala Aiswarya and S. Naveena

49


Design and Implementation of ZigBee Controller to Save Energy


D. Deepa, K. Radhika and S. Naveena

53

DCT Based Digital Image Watermarking in SVD Domain


A.M.V.N. Maruti and N. Chandra Sekhar
Khammam Institute of Technology & Sciences

57

Realtime Navigation Monitoring System


T. Umamaheswari
Rajalakshmi Institute of Technology

60

Exposing Digital Image Forgeries by Detecting Discrepancies in Motion Blur


S.S. Vanaja
Rajalakshmi Institute of Technology

66

Significant Difference of Wavelet Coefficient Quantization Based Watermarking Method Attacks


G. Swati, C.I. Sherly Sofie and Sankara Gomathi
Panimalar Institute of Technology

76

Perception of Beacon in Localization in Wireless Sensor Networks


R. Krishnamoorthy, K. Aanandha Saravanan and N. Vignesh
Veltech Dr.RR & Dr.SR Technical University

81

Face Detection and Recognition Method based on Skin Color and Depth Information
R. Kavitha
Vel Tech Dr. RR & Dr. SR Technical University

87

Privacy-Preserving Updates to Anonymous and Confidential Databases using Cryptography with


ARM
T. Vanitha
S.A.Engineering College

91

Development of a MATLAB Simulation Environment for Vehicle-To-Vehicle and Infrastructure


Communication Based on IEEE 802.11P
D.V.S. Ramanjaneyulu and G. Naga Jyothi
Vel Tech Dr. RR & Dr. SR Technical University

94

Vehicle-To-Vehicle Wireless Real Time Image Transmission for Traffic Awareness


D.V.S. Ramanjaneyulu and G. Naga Jyothi
Vellore Institute of Technology

100

MFCC Feature Extraction Algorithm for Clinical Data


C.R. Bharathi and V. Shanthi
Vel Tech Dr. RR & Dr. SR Technical University and St.Josephs College of Engineering

103

An Introduction to Multimodal Biometrics


R. Gayathri and M. Suganthy
Vel Tech Dr. RR & Dr. SR Technical University

107

Efficient IRIS Recognition based on Improvement of Feature Extraction


M. Suganthy and R. Gayathri
Vel Tech Dr. RR & Dr. SR Technical University

113

Hand Gesture Recognition using Image Processing


C. Praveen
VelTech Dr.RR&Dr.SR Technical University

118

Biometric Authentication using Infrared Imaging of Hand Vein Patterns


P. Maragathavalli
Vel Tech Dr. RR & Dr. SR Technical University

127


Motion Analysis and Identification of Human Fall using MEMS Accelerometer


K. Malar Vizhi
Vel Tech Dr. RR & Dr. SR Technical University

132

BROAD BAND COMMUNICATION


Reflective Semiconductor Optical Amplifier (RSOA) Model used as a Modulator in Radio over Fiber
(RoF) Systems
S. Ethiraj, S. Anusha Meenakshi and M. Baskaran
Valliammai Engineering College

138

Employing Hybrid Automatic Repeat reQuest (HARQ) on MIMO


A. Ramakanth and N. Sateesh Kedarnath Kumar
G.Pulla Reddy Engineering College

141

Accident Detector using Wireless Communication


G. Gowri and K. Kavitha
Bharathiyar Institute of Engineering for Women

146

Transceiver Implementation for Software Defined Radio using DPSK Modulation and Demodulation
A. Saranya, S. Saranya Devi, B. Uma and N.L. Venkataraman
Jayaram College of Engineering & Technology

150

Routing for Re-configurable System for Wireless Mesh Networks


B. Sathyasri, K. Anandha Saravanan and N. N. Vignesh Prasanna
Rajalakshmi Institute of Technology and Vel Tech Dr. RR & Dr. SR Technical University

155

Study on Blocking Misbehaving Users in Anonymizing Networks


R. Divya
S.A. Engineering College

166

VLSI DESIGN
Thermal Management of 3-D FPGA: Thermal Aware Flooring Planning Tool
S. Kousiya Shehannas, D. Jhansi Alekhya and R. Vivekanandan
Panimalar Engineering College

168

Reduction of Inductive Effects using Fast Simulation Approach in VLSI Interconnects


J. Mohanraj
Vel Tech Dr.RR & Dr.SR Technical University

173

Design and VLSI Implementation of High-Performance Face-Detection Engine for Mobile Applications
R. Ilakiya
Vel Tech Dr. RR & Dr. SR Technical University

177

An Efficient Implementation of Floating Point Multiplier


B. Vijayarani
Vel Tech Dr. RR & Dr. SR Technical University

179

A FPGA-Based Viterbi Algorithm Implementation for Speech Recognition Systems


J. Ashok Kumar
Vel Tech Dr. RR & Dr. SR Technical University

185

Design and Simulation of UART Serial Communication Module based on VHDL


A. Sujitha
Vel Tech Dr. RR & Dr. SR Technical University

188


POWER SYSTEM AND SMART GRID


Fuzzy Logic Technique Applied to 3 Phase Induction Motor
S. Johnson
Vel Tech Dr. RR & Dr. SR Technical University

192

Design and Implementation of High Performance Optimal PID Controller for Fast Mode Control
System
K. Pooranapriya
Anna University of Technology

195

Design of Axial Flux Permanent MAGNET Synchronous Generator for Wind Power Generation
Rakesh Raushan and Rahul Gopinath
Mahendra Institute of Technology

199

LED Application in Power Factor Correction


S. Alexzander and V. Prem Kumar
Srinivasa Institute of Engineering and Technology

205

Wireless Power Transmission


Suraj and M. Vinoth
Srinivasa Institute of Engineering and Technology

209

Power Flow Control using UPFC with Fuzzy Logic Controller


G. Kumara swamy, K. Suresh and K. Manohar
Rajeev Gandhi Memorial College of Engineering & Technology

213

VIRTUAL INSTRUMENTATION
Human-Computer Interface Technology
R. Rama
Vel Tech Dr. RR & Dr. SR Technical University

219

Labview-Based Fuzzy Controller Design of a Lighting Control System


R. Saravana Kumar, R. Bharath Kumar and J. Saravanan
Jaya Engineering College

223

EMBEDDED SYSTEM TECHNOLOGY


Testing Methodologies for Embedded Systems and Systems-On-Chip
S. Sayee Kumar
Vel Tech Dr. RR & Dr. SR Technical University

228

Micro Electro Mechanical Systems


M. Yuvaraj and B. Gokul
Srinivasa Institute of Engineering and Technology

233

Design Technique of Hardware to Reduce Power Consumption in Embedded System


H. Anandkumar Singh
Vel Tech Dr. RR & Dr. SR Technical University

239

Embedded Virtual Machines for Robust Wireless Control Systems


D. Venkateshwari
Vel Tech Dr. RR & Dr. SR Technical University

243


ELECTRONIC DESIGN AND AUTOMATION


Design of new Auto Guard Anti-Theft System Based on RFID and GSM Network
M. Anjugam
Vel Tech Dr. RR & Dr. SR Technical University

248

Unmanned Adoration Car


S. Parijatham, N. Janani and C. Nagalalitha
Veltech Hightech Dr. RR & Dr. SR Engineering College

252

Patrolbot
S. Janani, C. Sai Smarana and G. Kalarani
Jaya Engineering College

255

ADVANCED CONTROL SYSTEM


Design of Reduced Order Controller for the Stabilization of Large-Scale Linear Control Systems
M. Kavitha and V. Sundarapandian
Vel Tech Dr. RR & Dr. SR Technical University

257

Global Chaos Synchronization of Liu-Su-Liu and Liu-Chen-Liu Systems by Active Nonlinear Control
R. Suresh and V. Sundarapandian
Vel Tech Dr. RR & Dr. SR Technical University

261

INSTRUMENTATION SYSTEM AND APPLICATION


New Improved Methodology for Pedestrian Detection in Advanced Driver Assistance System
Vijay Gaikwad, Sanket Borhade, Pravin Jadhav and Manthan Shah
Vishwakarma Institute of Technology

267

Spintronics Based Cancer Detection


V. Raghavi
VelTech Multitech Dr.Rangarajan Dr.Sakunthala Engineering College

271

Swarm Robotics - To Combat Cancer


Divya Devi Mohan, Reenu Aradhya Surathy and G. Kalarani
Jaya Engineering College

275

Remote Testing of Instruments using Image Processing


M. Sankeerthana, V. Sindhusha and V. Murugan
Jaya Engineering College

278

Application of Nanotechnology in Solar Cell


R. Adarsh, H. Prabhu and R. Bharath Kumar
Jaya Engineering College

283

Intrusion of Nanotechnology in Food Science


P. Arun Karthik Kani, A. Asif Iqbal and J. Saravanan
Jaya Engineering College

287

Microelectronic Pills for Remote Biomedical Measurements


P. Kavitha, E. Elakkiya and A.S. Anupama
Jaya Engineering College

290


System Implementation of Pushing DATA to Handheld Devices via Bluetooth High Speed
Specification, Version 3.0 + HS
A. Valarmathi
Rajalakshmi Institute of Technology

294

Text Detection and Recognition in Images and Video Frames


C. Selvi

298

Smart Power / Energy Metering


R. Balaji, K.S. Ashwin Kumar and M. Vignesh
Veltech Dr. RR & Dr. SR Technical University

304

Transparent Electronics
T. Gopala Krishnan, G.D. Vigneshvar and V.R. Arun
Veltech Dr. RR & Dr. SR Technical University

307

A Mobile Healthcare Service


A. Devendran

311

Remote Testing of Instruments using Image Processing


M. Sankeerthana, V. Sindhusha and V. Murugan
Jaya Engineering College

314

Wireless Communication using Human Area Networking


H. Lalithkrishnan, Kiran Mohan and S. Gowtham

319

AUTHOR INDEX

323


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.1-3.

Mobile Communication
R. Sridhar, S. Sivakumar, M. Ramesh and A. Rajasekar
Dhaanish Ahmed College of Engineering, Padappai, Chennai
Email: srikannan.2009@gmail.com
Abstract: Mobile communication systems have changed the lifestyle of human beings over the past two decades. Various research and development efforts are being carried out to increase the effectiveness of communication systems and to provide a sophisticated communication environment to users. The telephone system performance is assessed in terms of packet transfer from one device to another through communication system components that include a Medium Access Control (MAC) protocol, a routing protocol, and the treatment of voice packets. The performance is measured by the percentage of blocked and dropped calls, packet loss, and packet delay. The telephone system efficiency can be increased through effective packet transfer and control, which is achieved by various routing and traversal algorithms. This project proposal was initiated to analyze the existing packet-routing algorithms and arrive at an effective system architecture for a quality mobile telephone service.

I. INTRODUCTION

Our ultimate goal is to communicate any information with anyone, at any time, from anywhere. This is only possible through the aid of wireless technology. For the past decade, mobile communications have enhanced our communication networks by providing an important capability, i.e., mobility. Before the introduction of mobile communications systems, communications were possible only from/to fixed places, i.e., houses and public phones. It was December 1979 when the first public mobile communications services started in Japan. For the first 10 years, the growth rate was very low. However, through the liberalization of mobile communications services in 1988 and of terminal markets in 1994, the growth rate of mobile communications services accelerated over the following 10 years. In Japan, the number of subscribers to cellular and Personal Handy-phone System (PHS) services reached close to 57 million by March 2000; this number is equivalent to a penetration rate of around 45% of the population. One of the most important technical factors in this success is the increased utilization efficiency of portable phones (lighter weight and longer talk time, made possible by advanced LSI technology) and easier-to-use portables.

II. HISTORY OF WIRELESS COMMUNICATION

• 1895 Guglielmo Marconi: first demonstration of wireless telegraphy (digital); long wave transmission, high transmission power necessary (> 200 kW)
• 1907 Commercial transatlantic connections: huge base stations (30-100 m high antennas)
• 1915 Wireless voice transmission New York - San Francisco
• 1920 Discovery of short waves by Marconi
• 1926 Train-phone on the line Hamburg - Berlin
• 1928 Many TV broadcast trials (across the Atlantic, color TV, TV news)
• 1933 Frequency modulation (E. H. Armstrong)
• 1958 A-Netz in Germany
• 1972 B-Netz in Germany
• 1979 NMT at 450 MHz (Scandinavian countries)
• 1982 Start of GSM specification
• 1983 Start of the American AMPS (Advanced Mobile Phone System, analog)
• 1984 CT-1 standard (Europe) for cordless telephones
• 1986 C-Netz in Germany: analog voice transmission at 450 MHz, hand-over possible, digital signaling, automatic location of the mobile device; still in use today (as T-C-Tel); services: FAX, modem, X.25, e-mail; 98% coverage
• 1991 Specification of DECT (Digital European Cordless Telephone; today: Digital Enhanced Cordless Telecommunications): 1880-1900 MHz, ~100-500 m range, 120 duplex channels, 1.2 Mbit/s data transmission, voice encryption, authentication, up to several 10000 users/km²; used in more than 40 countries
• 1992 Start of GSM: in D as D1 and D2, fully digital, 900 MHz, 124 channels; automatic location, hand-over, cellular; roaming in Europe, now worldwide in more than 100 countries; services: data with 9.6 kbit/s, FAX, voice, ...
• 1994 E-Netz in Germany: GSM with 1800 MHz, smaller cells; supported by 11 countries; as Eplus in D (1997: 98% coverage of the population)
• 1996 HiperLAN (High Performance Radio Local Area Network): ETSI standardization of type 1: 5.15-5.30 GHz, 23.5 Mbit/s; recommendations for types 2 and 3 (both 5 GHz) and 4 (17 GHz) as wireless ATM networks (up to 155 Mbit/s)
• 1997 Wireless LAN - IEEE 802.11: IEEE standard, 2.4-2.5 GHz and infrared, 2 Mbit/s; already many products (with proprietary extensions)
• 1998 Specification of GSM successors: UMTS (Universal Mobile Telecommunication System) as the European proposal for IMT-2000; Iridium: 66 satellites (+6 spare), 1.6 GHz to the mobile phone
III. EVOLUTION OF MOBILE COMMUNICATIONS
• In 1895, a few years after the invention of the telephone, Marconi demonstrated the first radio-based wireless transmission. The first radio-based conversation was used on ships during 1915. The first public mobile telephone system, known as the Mobile Telephone System (MTS), was introduced in the United States in 1946.
• AT&T Bell Laboratories devised the cellular concept, which replaced the high-power, high-coverage base stations used by MTS with a number of low-power, low-coverage stations.
• The area of coverage of each such base station is called a cell. The operating area of the system was divided into a set of adjacent, non-overlapping cells.
• The first generation (1G) of cellular systems was designed in the late 1960s and deployed in the early 1980s. The first commercial analog system in the United States, known as the Advanced Mobile Phone System (AMPS), went operational in 1982 with only voice transmission.
• The disadvantages of analog systems were overcome by the second generation (2G) of cellular systems, which represent data digitally. The first commercial deployment of a 2G system, called GSM, was made in 1992. In 1993 another 2G system, known as CDMAone (IS-95), was standardized; it was commercially deployed in South Korea and Hong Kong in 1995, followed by the United States in 1996.
• An upgrade to 2G systems offering higher data speeds, called 2.5G, was developed. GSM has two such technologies, High Speed Circuit Switched Data (HSCSD) and the General Packet Radio Service (GPRS). Similarly, in CDMA an extension of IS-95 known as IS-95B or CDMATwo was developed.
• To meet future bandwidth-hungry services, 3G cellular systems were standardized in 2000. The different 3G standards that evolved include EDGE, CDMA2000 and WCDMA.
• It is envisioned that the future of mobile communication will be an integrated system which will produce a common packet-switched, possibly IP-based, system.

IV. MOBILE COMMUNICATION CURRENT SCENARIO

Since the time of wireless telegraphy, radio communication has been used extensively, and our society has been looking to acquire mobility in communication ever since. Initially, mobile communication was limited to one pair of users on a single channel pair. The range of mobility was defined by the transmitter power, the type of antenna used and the frequency of operation. With the increase in the number of users, accommodating them within the limited available frequency spectrum became a major problem. To resolve this problem, the concept of cellular communication evolved. Present-day cellular communication uses a basic unit called a cell. Each cell consists of a small hexagonal area with a base station located at the center of the cell, which communicates with the users. To accommodate multiple users, Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA) and their hybrids are used. Numerous mobile radio standards have been deployed in various places, such as AMPS, PACS, GSM, NTT, PHS and IS-95, each utilizing a different set of frequencies and allocating a different number of users and channels.

V. MOBILE COMMUNICATION FUTURE TRENDS

Tremendous changes are occurring in the area of mobile radio communications, so much so that the mobile phone of yesterday is rapidly turning into a sophisticated mobile device capable of more applications than PCs were capable of only a few years ago. Rapid development of the Internet, with its new services and applications, has created fresh challenges for the further development of mobile communication systems. Further enhancements in modulation schemes will soon increase the Internet access rates on the mobile from the current 1.8 Mbps to greater than 10 Mbps. Bluetooth is rapidly becoming a common feature in mobiles for local connections. Mobile communication has provided global connectivity to people at a lower cost due to advances in the technology and also because of the growing competition among the service providers.

VI. MOBILE COMMUNICATION APPLICATIONS

A. Vehicles
• transmission of news, road conditions, weather, music via DAB
• personal communication using GSM
• position via GPS
• local ad-hoc network with vehicles close by to prevent accidents, guidance system, redundancy
• vehicle data (e.g., from busses, high-speed trains) can be transmitted in advance for maintenance

B. Emergencies
• early transmission of patient data to the hospital, current status, first diagnosis
• replacement of a fixed infrastructure in case of earthquakes, hurricanes, fire etc.

C. Travelling Salesmen
• direct access to customer files stored in a central location
• consistent databases for all agents
• mobile office

D. Replacement of Fixed Networks
• remote sensors, e.g., weather, earth activities
• flexibility for trade shows
• LANs in historic buildings

E. Entertainment, Education
• outdoor Internet access
• intelligent travel guide with up-to-date location-dependent information
• ad-hoc networks for multi-user games

VII. CONCLUSION

Wireless networks constitute an important part of the telecommunications market. The result of the integration of the Internet with mobile systems, the wireless Internet, is expected to significantly increase the demand for wireless data services. The use of wireless transmission and the mobility of most wireless systems give rise to a number of challenges that must be addressed in order to develop efficient wireless systems. The challenges include wireless medium unreliability, spectrum use, power management, security, and location/routing. The digital cellular standards GSM and CDMA meet the current requirements in voice communications and are being upgraded to meet future demands in mobile multimedia applications. 3G mobile networks represent an evolution in terms of capacity, data speeds and new service capabilities from second generation mobile networks, providing an integrated solution for mobile voice and data with wide area coverage.

REFERENCES
[1] Gordon L. Stüber, Principles of Mobile Communication, Springer, 2000.
[2] Ivan Stojmenovic (Ed.), Handbook of Wireless Networks and Mobile Computing, 2002.
[3] T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd ed., Singapore: Pearson Education, Inc., 2002.
[4] J. G. Proakis, Digital Communications, 4th ed., NY: McGraw Hill, 2000.
[5] http://www.gsmworld.com
[6] http://www.cdg.org

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.4-6.

Customized Applications for Mobile Enhanced Logic (CAMEL)


G. Venkatanadhan and S. Amal Raj
Department of Electronics and Communication, Veltech Dr.RR & Dr.SR Technical University

Abstract: To keep GSM (Global System for Mobile Communications) competitive, a standard was needed to enable competition between operators based on the services offered. ETSI (European Telecommunications Standards Institute) started in 1994 with the specification of Intelligent Network functionality in GSM, named Customized Applications for Mobile Enhanced Logic (CAMEL). CAMEL provides the GSM operator with the ability to offer operator-specific services based on IN service logic to a GSM subscriber even when roaming outside the HPLMN (Home Public Land Mobile Network). It is used to shift call control away from the MSC (Mobile Switching Center) and to give intelligence to call handling. Before CAMEL it was all postpaid; after CAMEL, prepaid technology started to spread worldwide.

I. INTRODUCTION

CAMEL is a network feature and not a supplementary service. It is a tool for the network operator to provide subscribers with operator-specific services even when roaming in another network. CAMEL has been enhanced over several phases by the operators, introducing many innovative features. The first phase of the standard was approved in 1997 and is currently implemented by the major GSM vendors. The standardization of the second phase was finalized in 1998, with products to come in 1999. Phase 3 was finalized at the end of 1999, with products coming out in 2000. The fourth phase of CAMEL was built on the capabilities of phase 3; it was finalized in the year 2002 and released in 2005.

II. PHASES IN CAMEL

CAMEL was always intended to be specified in phases. As of 2007, there have been 4 phases specified, each building on the previous phase. Phases 1 and 2 were defined before 3G networks were specified, and as such support adding IN services to a GSM network, although they are equally applicable to 2.5G and 3G networks. Phase 3 was defined for 3GPP Release 1999 and phase 4, and hence these are GSM and UMTS (Universal Mobile Telecommunication System) common specifications, while Phase 4 was defined as part of a 3GPP release in 2005.

In line with other GSM specifications, later phases are fully backwards compatible with earlier phases; this is achieved by means of the Transaction Capabilities Application Part (TCAP) Application Context (AC) negotiation procedure, with each CAMEL phase being allocated its own AC version.
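To make the AC-negotiation idea above concrete, here is a small illustrative sketch (not from the paper, and not a real TCAP stack): a gsmSSF proposes the AC version of the highest CAMEL phase it supports, and the dialogue falls back to the highest version both sides share.

```python
# Illustrative sketch of TCAP Application Context (AC) version fallback
# between a gsmSSF and a gsmSCF. Names and structure are hypothetical;
# a real implementation negotiates via TCAP dialogue primitives.

SSF_SUPPORTED = {1, 2, 3}      # assumption: the switch supports CAMEL phases 1-3
SCF_SUPPORTED = {1, 2, 3, 4}   # assumption: the service platform supports 1-4

def negotiate_ac_version(proposed: int) -> int:
    """Return the AC version the dialogue proceeds with.

    The SSF proposes its highest phase; if the peer does not support it,
    the dialogue is re-attempted with the highest mutually supported version.
    """
    common = SSF_SUPPORTED & SCF_SUPPORTED
    if not common:
        raise RuntimeError("no common CAMEL phase; dialogue aborted")
    return proposed if proposed in common else max(common)

print(negotiate_ac_version(max(SSF_SUPPORTED)))  # -> 3: both sides speak phase 3
```

The point of allocating each phase its own AC version is precisely this graceful fallback: a phase-4 gsmSCF can still serve a phase-1 switch.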

A. Phase I

Camel Phase I architecture:

Fig. 1

Where,
gsmSCF: GSM Service Control Function
gsmSSF: GSM Service Switching Function
gsmSRF: GSM Specialized Resource Function
gprsSSF: GPRS Service Switching Function

CAMEL makes use of the IN SSP-SCP interface. The CAMEL Application Protocol (CAP) Phases 1 and 2 are based on ETSI Core INAP CS-1R. However, only a limited fraction of the whole operation set is used, in order to assure 100% vendor compatibility in the face of more than 200 mobile networks looking for mutual roaming agreements.

a) Features of Phase I
1. Introduced the concept of a CAMEL Basic Call State Model (BCSM) to the Intelligent Network (IN).
2. Gave the gsmSCF the ability to bar calls (release the call prior to connection).
3. Allowed a call to continue unchanged, or to modify a limited number of call parameters before allowing it to continue.
4. The gsmSCF could also monitor the status of a call for certain events (call connection and disconnection).
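To illustrate the BCSM idea introduced in feature 1 (a toy model, not the normative O-BCSM of the specifications): call processing moves through states, and at detection points the gsmSSF pauses and asks the gsmSCF whether to continue or release.

```python
# Toy originating basic call state model (BCSM) with detection points.
# State names and the service-logic decisions are simplified illustrations.

DETECTION_POINTS = ["collected_info", "analysed_info", "answer", "disconnect"]

def run_call(service_logic):
    """Walk the call through its detection points, obeying the gsmSCF."""
    for dp in DETECTION_POINTS:
        verdict = service_logic(dp)          # gsmSCF instruction for this DP
        if verdict == "release":
            return f"call released at {dp}"  # e.g. barring before connection
    return "call completed"

# A phase-1 style barring service: release before connection is attempted.
print(run_call(lambda dp: "release" if dp == "collected_info" else "continue"))
print(run_call(lambda dp: "continue"))       # -> call completed
```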


B. Phase II

a) Camel Phase II Architecture

Fig. 2

b) Features in Phase II
• Additional event detection points
• Interaction between a user and a service using announcements: ETC, CTR, PA, PAC, SRF, DFC
• Unstructured Supplementary Service Data (USSD) interaction
• Control of call duration and transfer of Advice of Charge information to the mobile station
• The ability to inform the gsmSCF about the invocation of the supplementary services Explicit Call Transfer (ECT), Call Forwarding (CF) and Multi-Party Calls (MPTY)
• The gsmSCF can be informed about the invocation of the supplementary service Call Completion to Busy Subscriber
• The ability, for easier post-processing, of integrating charging information from a serving node in normal call records
• Introduction of online charging: Apply Charging, Apply Charging Report, Send Charging Information, Furnish Charging Information
• CAP reporting: CIRq, CIR
• Call forwarding barring is introduced in CAP02

c) Facilities of Phase II
• On-line Charging Control: on-line charging and off-line charging (a sketch of this charging loop follows this list)
• Pre-paid subscriptions
• Post-paid subscribers
• Spending control
• Call monitoring
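As a rough illustration of the online-charging facility listed above (an illustrative sketch only, not the CAP encoding): the gsmSCF grants the call a charging period via Apply Charging, the gsmSSF reports consumed time via Apply Charging Report, and the service logic either grants another period or releases the call when the prepaid balance runs out.

```python
# Illustrative prepaid charging loop in the spirit of CAMEL phase 2
# ApplyCharging / ApplyChargingReport. All names and numbers are
# hypothetical; real CAP operations carry ASN.1-encoded parameters.

GRANT_SECONDS = 60  # charging period granted per ApplyCharging (assumption)

def prepaid_call(balance_seconds: int, call_length_seconds: int) -> int:
    """Simulate gsmSCF-controlled charging; return seconds actually allowed."""
    allowed = 0
    while allowed < call_length_seconds:
        if balance_seconds <= 0:
            break                      # balance exhausted: release the call
        grant = min(GRANT_SECONDS, balance_seconds)   # ApplyCharging
        used = min(grant, call_length_seconds - allowed)
        allowed += used                # ApplyChargingReport after the period
        balance_seconds -= used
    return allowed

print(prepaid_call(balance_seconds=150, call_length_seconds=200))  # -> 150
```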

C. Phase III

a) Camel Phase III Architecture

Fig. 3

b) Features of Phase III
• Support of facilities to avoid overload
• Capabilities to support Dialed Services
• Capabilities to handle mobility events, such as (not-)reachability and roaming
• Control of GPRS sessions and PDP contexts
• Control of Mobile Originated SMS through both circuit-switched and packet-switched serving network entities
• Interworking with SoLSA (Support of Localized Service Area); support for this interworking is optional
• Interrogation/Modification of CF, CB, ODB, CSIs

c) Additional Features
1. Short Message Service CAMEL Subscription Information (SMS-CSI).
2. SMS-CSI is transferred to the VPLMN. SMS-CSI contains trigger information which is required to invoke a CAMEL Service Logic for Mobile Originating Short Message submissions.

D. Phase IV
• CAMEL Phase 4 is an integral part of 3GPP Core Network Release 5.
• CAMEL Phase 4 enhances the capabilities of phase 3.
• CAMEL Phase 4 circuit-switched call control encompasses all features of previous CAMEL phases, but extends these to completeness.
• Support of the 3GPP IP Multimedia Domain (IMS).

a) Architecture of Phase IV Camel

Fig. 4

b) Specifications Used for Camel Phase 4
CAMEL phase 4 is specified in the following set of specifications:
• 3GPP TS 22.078 [66]: contains the service definitions for CAMEL phase 4;
• 3GPP TS 23.078 [83]: describes the information flows, subscription data, procedures, etc.;
• 3GPP TS 29.078 [106]: specifies the CAP for call control, SMS control and GPRS control.

c) Features of Phase IV
• Call control: call party handling; network-initiated call establishment; interaction with basic optimal routing; alerting detection point; mid-call detection point; change of position detection point; flexible warning tone; tone injection; enhancement to call forwarding notification; control of video telephony calls; service change and UDI/RDI fallback; reporting of IMEI and MS Classmark
• GPRS control
• SMS control: MO SMS control and MT SMS control
• Mobility management for PS subscribers
• Any-time interrogation for CS subscribers and any-time interrogation for PS subscribers
• Flexible tones injection
• Location information during call
• Services for IMS
• Any Time Interrogation for Terminal Capabilities
• Enhancements to Any Time Modification (ODB ...)

III. CONCLUSION

IMS opens up new perspectives for network operators, but several technical and business challenges have to be faced in order to enable the wide adoption of this promising technology. Issues that are currently being addressed, or still need to be addressed, in the standards include, for example, the advancement of the universal service obligation, number planning, lawful intercept, reliability and voice quality, emergency services, inter-carrier compensation, data security and data protection; these are the most awaited applications that become possible in phase 4.

Some of the European countries have established Phase IV CAMEL technology, and it has been worth it. The initialization may be difficult and costly, but if it is made possible in India, the technology level of India will increase and the standards of communication will be effective.

CAMEL will enable operators to implement many new innovative services at low implementation cost while keeping the subscriber base intact.

ACKNOWLEDGMENT
The author has referred to scribd.com and the PDFs available on that site for scientific documents on CAMEL.

REFERENCE
[1] Rogier Noldus, CAMEL: Intelligent Networks for the GSM, GPRS and UMTS Network, John Wiley & Sons Ltd, 2006.
[2] John R. Anderson, Intelligent Networks, Institution of Electrical Engineers, 2002.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.7-10.

ZigBee Based Electric Meter Reading System


A. Kastro, K. Aanandha Saravanan and N. Vignesh
Veltech Dr.RR & Dr.SR Technical University,Avadi, Chennai
Email: kastro.chinna@gmail.com

Abstract: According to the market requirements of Electric Meters, the system described here uses ZigBee and GSM as its communication protocols. ZigBee is used since the application does not need a high data rate but must be low powered and low cost. Presenting the remote wireless Electric Meter Reading System, this paper aims at resolving the shortcomings of traditional Electric Meter Reading technology by combining the characteristics of ZigBee technology and the IEEE 802.15.4 standard. The hardware implementation was designed, and the use cases for the Electric Meter were then analyzed.


Keywords: ZigBee; IEEE 802.15.4; Market Requirements

I. INTRODUCTION
Automatic Electric Meter reading is a method of reading and processing meter data automatically with computers and communication. It answers the need to improve the automation level of energy-consumption metering and follows from the rapid development of computer and communication technology. It not only relieves the meter reader's labor intensity and reduces reading mistakes, but also has the advantages of high speed and good real-time behavior. In this project on wireless Electric Meter reading, wireless communication technology completes the design of an automatic Electric Meter reading system. After studying the characteristics of the main wireless communication protocols, ZigBee was chosen as the lower-layer communication protocol. For such applications, the standard is optimized for low data rate, low power consumption, security and reliability. This paper describes the functional requirements to solve the technical issues related to the market applications.

II. IEEE 802.15.4 AND ZIGBEE

A. IEEE 802.15.4

The ZigBee protocol stack is described in Figure 1. As we can see, IEEE 802.15.4 and the ZigBee network layer are tightly coupled to provide consumer standardization for low-power and low-rate wireless communication devices. The IEEE 802.15.4 PHY layer provides 16 channels in the ISM 2.4 GHz band, 10 channels in the ISM 900 MHz band, and 1 channel at 868 MHz. IEEE 802.15.4 PHY provides an LQI (Link Quality Indicator) in order to characterize the quality of links between nodes. As for data transmission and reception, the IEEE 802.15.4 MAC uses the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism for accessing the channel, like other wireless networks such as IEEE 802.11 and IEEE 802.15.3. There are two variations, including the beacon-enabled network, which uses slotted CSMA/CA. Moreover, it provides the GTS (Guaranteed Time Slots) allocation method in order to provide real-time data communication.

Fig. 1. ZigBee Protocol Stack

B. ZigBee

Based on the IEEE 802.15.4 PHY/MAC, the ZigBee network layer provides functionality such as dynamic network formation, addressing, routing, and discovering 1-hop neighbors. The size of the network address is 16 bits, so ZigBee is capable of accepting about 65535 devices in a network, and the network address is assigned in a hierarchical tree structure. ZigBee provides not only star topology but also mesh topology. Since any device can communicate with other devices, not just with the PAN Coordinator, the network has high scalability and flexibility. Besides, the self-formation and self-healing features make ZigBee more attractive: the deployed ZigBee devices automatically construct the network, and subsequent changes such as the joining/leaving of devices are automatically reflected in the network configuration.
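The hierarchical tree addressing mentioned above can be made concrete with the ZigBee distributed address-assignment scheme. The sketch below (an illustration with example parameters, not taken from the paper) computes the address block size Cskip(d) that a router at depth d hands to each router child, given maximum children Cm, maximum router children Rm and maximum depth Lm.

```python
# Illustrative ZigBee distributed (tree) address assignment.
# Cm = max children per router, Rm = max router children, Lm = max depth.
# The parameter values below are examples, not mandated by the paper.

def cskip(d: int, Cm: int = 6, Rm: int = 4, Lm: int = 3) -> int:
    """Size of the address sub-block a depth-d router gives each router child."""
    if Rm == 1:
        return 1 + Cm * (Lm - d - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - d - 1)) // (1 - Rm)

def child_addresses(parent_addr: int, d: int) -> None:
    """Print the 16-bit addresses assigned to a depth-d router's children."""
    skip = cskip(d)
    for n in range(4):                       # router children (Rm = 4)
        print(f"router child {n}: 0x{parent_addr + 1 + n * skip:04X}")
    for n in range(2):                       # remaining end-device children
        print(f"end device {n}:  0x{parent_addr + 1 + 4 * skip + n:04X}")

child_addresses(0x0000, 0)  # the coordinator at depth 0 hands out sub-blocks
```

With Cm = 6, Rm = 4, Lm = 3 the coordinator hands each router child a block of Cskip(0) = 31 addresses, which is how the 16-bit space is carved up without any central address server.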

III. THE MARKET REQUIREMENTS

A. Market Needs

Utilities and Electric Metering companies continually look for improved methods to support their day-to-day operations, which include: providing flexible billing dates for customers, performing monthly/cycle billing reads, implementing Time-of-Use billing, capturing peak demand, supporting Critical Peak Pricing events, forecasting energy usage, positive outage and restoration detection and notification, theft detection, remote connect and validation, and marketing advanced Electric Metering and billing programs.

B. Market Analyses

Within the typical ZigBee network there is a single owner or stakeholder. This owner can determine which devices are allowed on the PAN by sharing network keys only with trusted devices. Here there may be two stakeholders for a single network: the utility and the end customer. Neither of these stakeholders necessarily trusts the other. The utility wants to be sure that the end customer cannot use ZigBee to inappropriately manipulate a load control or demand response system, or attack an energy service portal. The customers want to be sure that the energy service portal does not allow the utility to take liberties with their equipment or compromise their privacy. This results in four primary network ownership/deployment scenarios: utility-private, customer-private, shared, and bridged. Each of these scenarios has different implications. All of these scenarios are valid for EMI deployments, though their use may be specific to particular use cases or markets.

a) Utility-Private
A utility-private HAN might include an in-home display, or a load control device working in conjunction with an energy service portal, but it would not include any customer-controlled devices.

Fig. 2. Utility Private HAN

b) Customer-Private
In the most extreme form, a customer-private network might not even include an ESP on the ZigBee network, instead relying on some sort of customer-provided device with non-ZigBee access to usage, consumption, and price data. Control messages in these examples would be ones determined by the end customer, not the utility, and programmed into a home energy management console.

Fig. 3. Customer Private HAN

c) Customer and Utility Shared
The shared HAN represents the worst security scenario for an EMI deployment. Devices are on a network they cannot trust, with other devices they cannot trust. Application-level authentication and authorization are required to support a shared network environment.

Fig. 4. Shared HAN

d) Application-Linked
As an example, in this scenario the utility HAN is made available strictly to utility-controlled devices. The Home Energy Management Console is a utility-approved device that also lives on a customer-provided HAN. It can respond to EMI commands, as well as send out HA commands to devices within the home.

Fig. 5. Application-Linked
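Since the shared-HAN scenario above cannot rely on network-key trust, application-level checks must decide who may invoke what. The sketch below (invented for illustration; not a ZigBee Smart Energy API) shows the shape of such a check: each command carries the sender's application credential, and the device authorizes per command type.

```python
# Illustrative application-level authorization for a shared HAN.
# Credentials and command names are invented for the example; a real
# deployment would use certificate-based key establishment.

AUTHORIZED = {
    "load_control":   {"utility-esp"},                  # only the utility's portal
    "read_usage":     {"utility-esp", "home-console"},
    "set_thermostat": {"home-console"},                 # only the customer console
}

def authorize(sender_credential: str, command: str) -> bool:
    """Allow a command only if the sender is on that command's allow-list."""
    return sender_credential in AUTHORIZED.get(command, set())

assert authorize("utility-esp", "load_control")
assert not authorize("home-console", "load_control")    # customer device refused
```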
IV. DESIGN OF EMI

According to the design of this system, the hardware design of the EMI is divided into two parts: the Electric Meter end devices and the data acquisition device. The former acquires the data of the Electric Meter and then transmits it to the data acquisition device through the ZigBee network, meanwhile displaying the energy and system time on the Electric Meter for the customer. The latter functions as the coordinator of the whole ZigBee network. Its task is to obtain all the information from the Electric Meters and then transmit it to the energy management center through the parallel port.

A. Design for End Electric Meter Device

The CC2430 comes in three different versions: CC2430-F32/64/128, with 32/64/128 KB of flash memory respectively. The CC2430 is a true System-on-Chip (SoC) solution specifically tailored for IEEE 802.15.4 and ZigBee applications. It enables ZigBee nodes to be built with very low total bill-of-material costs. The CC2430 combines the excellent performance of the leading CC2420 RF transceiver with an industry-standard enhanced 8051 MCU, 32/64/128 KB flash memory, 8 KB RAM and many other powerful features. Combined with the industry-leading ZigBee protocol stack (Z-Stack) from Wireless/Chipcon, the CC2430 provides the market's most competitive ZigBee solution.

B. Designs for Data Acquisition Device

As the ZigBee node has to be used with the Electric Meter module, and it should be powered by battery, the ZigBee node must be small, low-rate, and highly stable. A small-encapsulation circuit design is chosen, and the PCB is used as the wireless antenna, minimizing the bulk of the module. The PIC18F4620 is used as the MCU; in the idle and sleeping states it minimizes the power consumption of the system. The CC2430 chip is chosen, which conforms to the ZigBee protocol stack standard, needs few external components, and offers stable performance and low power consumption.

The interface circuit between the PIC18F4620 and the CC2430 is simple and the external components are few, which simplifies debugging and improves the stability of the system. With the addition of the PCB antenna, this system can communicate 60 miles.

The data acquisition device reads the data from the Electric Meter in a timely fashion: it reads the impulses of the sensor, sending the data to the gatherer through the ZigBee communication module, until the Electric Meter data transport module reads the data of this area.
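As a rough sketch of the end-device behavior described above (illustrative only; a real CC2430 implementation would be C on a ZigBee stack, and all names and rates here are hypothetical): the meter end device counts sensor impulses, converts them to energy, and periodically pushes a reading toward the coordinator.

```python
# Illustrative end-device loop: count meter impulses and report them
# over the ZigBee network. All names, rates and the radio API are
# hypothetical stand-ins for a real Z-Stack/C implementation.

import time

PULSES_PER_KWH = 1600          # assumption: impulse constant of the meter
REPORT_PERIOD_S = 15 * 60      # assumption: report every 15 minutes

class MeterEndDevice:
    def __init__(self, radio_send):
        self.pulse_count = 0
        self.radio_send = radio_send   # callback standing in for the ZigBee stack

    def on_pulse(self):                # wired to the meter's impulse sensor
        self.pulse_count += 1

    def report(self):
        kwh = self.pulse_count / PULSES_PER_KWH
        self.radio_send({"kwh": round(kwh, 3), "ts": int(time.time())})

dev = MeterEndDevice(radio_send=print)   # print stands in for a radio transmit
for _ in range(800):                     # simulate 0.5 kWh worth of impulses
    dev.on_pulse()
dev.report()                             # -> {'kwh': 0.5, 'ts': ...}
```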

V. SOLUTION USE CASES

The following sections describe the predominant areas of use cases for the EMI/EMR market space:
1. Mobile EMR: describes the market needs and the utilization of ZigBee to facilitate Electric Meter reading using mobile reading devices.
2. Energy Management: provides the use cases that utilize ZigBee based devices that support or enable EMS programs within premises.

A. Mobile EMR
Mobile EMR solutions consist of two scenarios: a Walk-By solution, where Hand Held Computers are typically used to gather Electric Meter information, and a Drive-By solution, where computers used in conjunction with dedicated radios are installed in vehicles to remotely read Electric Meter information. Examples of both scenarios are depicted in the accompanying figures. In both, a ZigBee based profile is used to transport the Electric Meter information to the Walk-By and Drive-By solutions. The types of Electric Metered information collected on a monthly basis range from simple consumption to very complex Electric Metering including TOU (Time of Use), Load Profile (profile of consumption) and Peak Demand.

The steps to accomplish this use case are listed below (a sketch of the same flow follows the list):
 The CIS/MDMS requests the EMR solution to collect a series of Electric Meter reads. This may be for all Electric Meters or only for the ones needed for that day's business needs.
 The Electric Meter Reading Host Processor breaks the read requests into the appropriate routes for the individual Hand Held Computers or vehicle-based equipment.
 The Electric Meter Readers proceed to collect the Electric Meter information along their designated routes.
 The Electric Meter information is uploaded to the Electric Meter Reading Host Processor and then forwarded upstream to the CIS/MDMS.
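As a minimal illustration of that workflow, the sketch below models the round trip from the CIS/MDMS to the readers and back. All class and function names here are hypothetical, not part of any AMI product.

from collections import defaultdict

def split_into_routes(read_requests):
    """Group meter-read requests by their (assumed) pre-assigned route id."""
    routes = defaultdict(list)
    for meter_id, route_id in read_requests:
        routes[route_id].append(meter_id)
    return routes

def collect_route(meter_ids):
    """Hypothetical stand-in for a hand-held or drive-by reader on one route."""
    return {m: {"consumption_kwh": 0.0} for m in meter_ids}

def run_read_cycle(read_requests):
    # Host processor splits requests, readers collect, results flow upstream.
    results = {}
    for route_id, meters in split_into_routes(read_requests).items():
        results.update(collect_route(meters))
    return results  # forwarded upstream to the CIS/MDMS

print(run_read_cycle([("M1", "R1"), ("M2", "R1"), ("M3", "R2")]))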

B. Energy Management Use Cases

ZigBee is to be utilized as the communication medium between home and building automation devices, Electric Metering devices, in-home displays, and fixed network devices such as gateways, bridges or access points. ZigBee based solutions for Energy Management should be capable of operating independently but in conjunction with current and future EMI solutions.

1) Utility Customer Reduces Load Voluntarily in Response to CPP (Critical Peak Pricing)
When the utility determines that the next day will be a Critical Peak Pricing (CPP) day and needs to invoke a voluntary load reduction program, it will notify its customer base of the impending event. The notification can occur using a variety of methods such as newspaper, TV, website, email, etc., but may also include providing notice through the EMI solution via the Electric Metering device or customer display.

2) Utility Customer Accesses Pricing Information
Customers are becoming aware of the importance of understanding how much energy they are using and when it is being used. Customers want to understand how their energy consumption habits affect their monthly energy bills and to find ways to reduce their monthly energy costs. The utility and regulatory agencies also want customers to be aware of the energy they are consuming and its associated costs. By providing customers better visibility of their energy usage and cost at their site, they can make more educated energy-related decisions regarding participation in load reduction programs, be more inclined to install energy-efficient equipment and potentially change their energy consumption habits. EMI solutions will enable improved communication between the utility and its customers by making it possible to remotely transmit energy usage, cost and other related utility messages to the EMI solution and down to the customer display device within the home or business.

3) Utility Customer Uses Prepayment Services
Most utility customers pay for usage after the fact. The utilities would also like to provide customers the ability to prepay for their electricity. This would apply to purchasing power for a residence or a commercial site.

VI. CONCLUSION

ZigBee is a new wireless protocol that is widely used in various areas for its excellent performance in reliability, capability, flexibility and cost, so ZigBee corresponds to a large market. This paper provides an application in the field of the automatic Electric Meter Reading System. With the development of ZigBee technology and computer communication network technology, the wireless Electric Meter Reading System will mature and become widely practical.

REFERENCES
[1] ZigBee Alliance, ZigBee Specification Version 1.0, http://www.ZigBee.org, December 14th, 2004.
[2] Microchip Technology Inc., Microchip Stack for the ZigBee™ Protocol, Version 1.0, 2005.
[3] ZigBee Document 075356r08ZB, Advanced Metering Initiative Profile Specification, Version 08, November 13th, 2007.
[4] ZigBee Document 074994r08, ZigBee AMI Technical Requirements Document, Version 08, September 19th, 2007.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.11-14.

Cloud Computing and its Security Issues


C.K. Dinesh Kumar, D. Jayanth and K. Vinoth
Electronics and Communication Engineering, Veltech Dr.RR&SR Technical University, Chennai
Abstract Cloud computing has recently emerged as one of
the buzzwords in the IT industry. Several IT vendors are
promising to offer computation, data/storage, and
application hosting services, offering Service-Level
Agreements (SLA) backed performance and uptime
promises for their services. While these clouds are the
natural evolution of traditional clusters and data centers,
they are distinguished by following a utility pricing model
where customers are charged based on their utilization of
computational resources, storage and transfer of data. They
offer subscription-based access to infrastructure, platforms,
and applications that are popularly termed as IaaS
(Infrastructure as a Service), PaaS (Platform as a Service),
and SaaS (Software as a Service). Whilst these emerging
services have reduced the cost of computation, application
hosting and content storage and delivery by several orders of
magnitude, there is significant complexity involved in
ensuring applications, services and data can scale when
needed to ensure consistent and reliable operation under
peak loads.

I. INTRODUCTION

Fig. 1. A typical Cloud Storage system architecture

Cloud computing is not a totally new concept; it originated from the earlier large-scale distributed computing technology. However, it is a disruptive technology, and cloud computing will be the third revolution in the IT industry, representing the development trend of the IT industry from hardware to software, from software to services, and from distributed services to centralized services. Cloud computing is also a new mode of business computing, and it will be widely used in the near future. The core concept of cloud computing is to reduce the processing burden on the user's terminal by constantly improving the handling ability of the cloud, eventually simplifying the user's terminal to a simple input and output device. All of this is available through a simple Internet connection using a standard browser or other client. However, there still exist many problems in cloud computing today; a recent survey shows that data security and privacy risks have become the primary concerns for people shifting to cloud computing.

A. What is Cloud Computing?

A cloud is a virtualized pool of computing resources. It can do the following things:
 Manage a variety of different workloads, including batch back-end operations and user-oriented interactive applications.
 Rapidly deploy and grow workloads by speedily provisioning physical or virtual machines.
 Support redundancy, self-healing and a highly scalable programming model, so that workloads can recover from a variety of inevitable hardware/software failures.
 Monitor resource usage in real time and rebalance the allocation of resources when needed.

a) Cloud Computing Providers
Below is a partial list of companies that provide cloud computing services:
 Amazon
 Google
 Microsoft
 Salesforce.com
 Citrix
 IBM
 Mozyhome
 Sun
 CohesiveFT
 Icloud
 Nirvanix
 VMware
 Flexscale
 Joyent
 Rackspace
 3tera

b) Advantages
Here are key benefits of using cloud storage and of applications that take advantage of storage in the cloud.

 Ease of management: The maintenance of the software, hardware and general infrastructure to support storage is drastically simplified by an application in the cloud. Applications that take advantage of storage in the cloud are often far easier to set up and maintain than deploying an equivalent service on premise.

 Cost effectiveness: For total cost of ownership, cloud storage is a clear winner. Elimination of the costly systems and the people required to maintain them typically provides organizations with significant cost savings that more than offset the fees for cloud storage. The costs of being able to provide high levels of availability and the scalability an organization needs are also unmatched.

 Lower impact outages and upgrades: Typically cloud computing provides cost effective redundancies in storage hardware. This translates into uninterrupted service during a planned or unplanned outage. The same is true for hardware upgrades, which are no longer visible to the end user.

 Disaster preparedness: Keeping important data backed up off site has been the foundation of disaster recovery since the inception of the tape drive. Cloud storage services not only keep your data off premise, but they also make their living by ensuring that they have redundancy and systems in place for disaster recovery.

c) Service Model
 Software-as-a-Service (SaaS): Software as a service is software that is deployed over the Internet and/or deployed to run behind a firewall in your local area network or personal computer. This is a pay-as-you-go model and was initially widely deployed for sales force automation and Customer Relationship Management (CRM).

 Platform-as-a-Service (PaaS): Platform as a service, another form of SaaS, is a kind of cloud computing that provides a development environment as a service. You can use the equipment to develop your own program and deliver it to the users through the Internet and servers.

 Infrastructure-as-a-Service (IaaS): Infrastructure as a service delivers a platform virtualization environment as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service.

 Hardware-as-a-Service (HaaS): According to Nicholas Carr, this is the idea of buying IT hardware, or even an entire data center, as a pay-as-you-go subscription service that scales up or down to meet your needs. As a result of rapid advances in hardware virtualization, IT automation, and usage metering and pricing, this model is advantageous to enterprise users, since they do not need to invest in building and managing data centers.

Fig. 2. Users and Providers of Cloud Computing.

B. Cloud Computing Issues

In the last few years, cloud computing has grown from being a promising business concept to one of the fastest growing segments of the IT industry. Now, recession-hit companies are increasingly realizing that simply by tapping into the cloud they can gain fast access to best-of-breed business applications or drastically boost their infrastructure resources, all at negligible cost. But as more and more information on individuals and companies is placed in the cloud, concerns are beginning to grow about just how safe an environment it is.

a) Security
Where is your data more secure: on your local hard drive or on high-security servers in the cloud? Some argue that customer data is more secure when managed internally, while others argue that cloud providers have a strong incentive to maintain trust and as such employ a higher level of security. However, in the cloud, your data will be distributed over many individual computers regardless of where your base repository of data is ultimately stored. Industrious hackers can invade virtually any server, and there are statistics showing that one-third of breaches result from stolen or lost laptops and other devices and from employees accidentally exposing data on the Internet, with nearly 16 percent due to insider theft.

b) Cloud Computing Attacks
As more companies move to cloud computing, look for hackers to follow. Some of the potential attack vectors criminals may attempt include:
 Denial of Service (DoS) attacks: Some security professionals have argued that the cloud is more vulnerable to DoS attacks because it is shared by many users, which makes DoS attacks much more damaging. Twitter suffered a devastating DoS attack during 2009.
 Side Channel attacks: An attacker could attempt to compromise the cloud by placing a malicious virtual machine in close proximity to a target cloud server and then launching a side channel attack.
 Authentication attacks: Authentication is a weak point in hosted and virtual services and is frequently targeted. There are many different ways to authenticate users, for example based on what a person knows, has, or is. The mechanisms used to secure the authentication process and the methods used are a frequent target of attackers.
 Man-in-the-middle cryptographic attacks: This attack is carried out when an attacker places himself between two users. Anytime attackers can place themselves in the communications path, there is the possibility that they can intercept and modify communications (a transport-security sketch follows this list).
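The usual countermeasure to man-in-the-middle attacks is authenticated, encrypted transport. As a minimal sketch (the server name example.com is a placeholder, not anything from the paper), Python's standard ssl module can open a connection that verifies the server's X.509 certificate chain and hostname before any data is exchanged:

import socket
import ssl

HOST = "example.com"  # placeholder server name

# create_default_context() enables certificate-chain and hostname verification.
ctx = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # If an attacker sits in the path with a forged certificate,
        # the handshake fails here instead of silently proceeding.
        print("negotiated:", tls_sock.version())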

c) Security Concerns
While cost and ease of use are two great benefits of cloud computing, there are significant security concerns that need to be addressed when considering moving critical applications and sensitive data to public and shared cloud environments. To address these concerns, the cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud were not used. Listed here are ten items to review when considering cloud computing.

Fig. 4

d) Types of Security Concerns
The security problems can be broadly classified into three types:
 Host or client Security
 Information Security
 Network Security

Host Security: A trusted set of users is defined through the distribution of digital certificates, passwords, keys, etc., and then access control policies are defined to allow the trusted users to access the resources of the hosts.

Network Security covers attacks on the communication infrastructure:
Denial of service: servers and networks are brought down by a huge amount of network traffic, and users are denied access to a certain Internet-based service, as in DNS hacking, routing table poisoning and XDoS attacks.
QoS violation: through congestion, delaying or dropping packets, or through resource hacking.
Man in the middle attack: to overcome it, always use SSL.
IP spoofing: spoofing is the creation of TCP/IP packets using somebody else's IP address. Solution: the infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

Information Security: security related to the information exchanged between different hosts or between hosts and users. This covers issues pertaining to secure communication, authentication, and single sign-on and delegation. Secure communication issues include those security concerns that arise during the communication between two entities. These include confidentiality and integrity issues: confidentiality indicates that all data sent by users should be accessible to only legitimate receivers, and integrity indicates that all data received should only be sent/modified by legitimate senders.

Fig. 3. Information Security

Solution: public key encryption, X.509 certificates, and the Secure Sockets Layer (SSL) enable secure authentication and communication over computer networks.
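Integrity in this sense can be illustrated with a short sketch using Python's standard hmac module; the shared key here is a stand-in for whatever key distribution the deployment actually uses:

import hashlib
import hmac

SHARED_KEY = b"demo-key-distributed-out-of-band"  # placeholder key

def sign(message: bytes) -> bytes:
    """Tag a message so receivers can check it came from a key holder."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison guards against timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"meter=42,kwh=3.14"
tag = sign(msg)
assert verify(msg, tag)               # untampered: accepted
assert not verify(msg + b"!", tag)    # modified in transit: rejected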

e) Privacy
Unlike the traditional computing model, cloud computing utilizes virtual computing technology, so a user's personal data may be scattered across various virtual data centers rather than staying in one physical location, even across national borders; here, data privacy protection will face the controversy of different legal systems. On the other hand, users may leak hidden information when accessing cloud computing services, and attackers can analyze the critical tasks depending on the computing tasks submitted by the users.

f) Reliability
Servers in the cloud have the same problems as your own resident servers: cloud servers also experience downtimes and slowdowns. The difference is that users have a higher dependence on the cloud service provider (CSP) in the cloud computing model. There are also big differences between CSPs' service models; once you select a particular CSP, you may be locked in, which brings a potential business security risk.

g) Legal Issues
Regardless of efforts to harmonize the legal situation, as of 2009 suppliers such as Amazon Web Services cater to major markets by deploying local infrastructure and letting users choose availability zones. On the other hand, worries persist about safety measures and confidentiality, from the individual level all the way through the legislative level.

h) Freedom
Cloud computing does not allow users to physically possess the storage of their data, leaving data storage and control in the hands of cloud providers. Customers will contend that this is pretty fundamental: they want the ability to retain their own copies of data in a form that preserves their freedom of choice and protects them against certain issues outside their control, whilst still realizing the tremendous benefits cloud computing can bring.

C. Solution
To advance cloud computing, the community must take adequate measures to ensure security. Before storing data at a virtual location, encrypt it with your own keys, and make sure that a vendor is ready for security certifications and external audits. Identity management, access control, reporting of security incidents, and personnel and physical layer management should be evaluated before you select a CSP. You should also minimize the personal information sent to and stored in the cloud. The CSP should maximize user control and provide feedback. Organizations should run applications and data transfer in their own private cloud first and then transmute it into the public cloud. While many legal issues exist in cloud computing, the Cloud Security Alliance should design relevant standards as quickly as possible.
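As a minimal sketch of the "encrypt with your own keys before upload" advice, using the third-party cryptography package (an assumption of this example, not something the paper prescribes):

from cryptography.fernet import Fernet

# The key stays with the customer; only ciphertext reaches the provider.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"sensitive customer record"
ciphertext = fernet.encrypt(plaintext)   # what gets uploaded to the CSP

# upload(ciphertext)  # hypothetical call to the storage provider

assert fernet.decrypt(ciphertext) == plaintext  # only the key holder can read it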

II. CONCLUSION
In this paper we discuss a fresh technology, cloud computing, describing its definition and some existing issues. There is no doubt that cloud computing is the development trend of the future. Cloud computing offers real benefits to companies seeking a competitive edge in today's economy. Many more providers are moving into this area, and the competition is driving prices even lower. Just as there are advantages to cloud computing, there are also several key security issues to keep in mind. One such concern is that cloud computing blurs the natural perimeter between the protected inside and the hostile outside. The security of any cloud-based service must be closely reviewed to understand what protections your information has. There is also the issue of availability. Cloud computing brings us approximately infinite computing capability, good scalability and service on demand, but it also brings challenges in security, privacy, legal issues and so on. To welcome the coming cloud computing era, solving the existing issues becomes an utmost urgency!

REFERENCES
[1] http://en.wikipedia.org/wiki/Cloud_computing.
[2] Tharam Dillon, Chen Wu, Elizabeth Chang, "Cloud computing: issues and challenges", 2010 24th IEEE International Conference on Advanced Information Networking and Applications.
[3] Barrie Sosinsky, Cloud Computing Bible.
[4] Wikipedia, "Cloud computing", retrieved from http://en.wikipedia.org/wiki/Cloud-computing, 2010.
[5] Jianfeng Yang, Zhibin Chen, "Cloud Computing security research".
[6] http://www.howstuffworks.com/cloud-computing
[7] http://viewer.media.bitpipe.com/1078177630_947/1268847180_5/WP_VI_10SecurityConcernsCloudComputing.pdf
[8] http://www.infoworld.com.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.15-18.

Mobile Computing Location Management


S. Sapna and D. Balasathiya
Email: sapna91191@gmail.com, balasathiya90@gmail.com

Abstract Mobile computing is an emerging computing paradigm of the future. Data management and location management in this paradigm pose many challenging problems to the mobile database community. In the past decade, mobile communications have experienced explosive growth due to recent technological advances in mobile networks and cellular telephone manufacturing. Location management is a very important problem among these challenges.
It consists of updating the location of the user, searching the location and performing search-updates. When the host changes location, an update occurs. When a host wants to communicate with a mobile host whose location is unknown to the requesting host, a search occurs. A search-update occurs after a successful search, when the requesting host updates the location information corresponding to the searched mobile host. The goal of a good location management scheme should be to provide efficient searches and updates. In this paper, the different location management schemes and various search and update strategies are discussed.

Keywords Location, Mobile Station, MSC, Base Station, MSS & MH

I. INTRODUCTION
Managing location information of mobile nodes is an important issue in mobile computing systems. Location management is one of the fundamental issues in cellular networks: it deals with how to track subscribers on the move and how to update their movements. As mobile communication environments accommodate more subscribers, the size of the cells must be reduced to make more efficient use of the limited frequency spectrum allocation. This adds to the challenge of some fundamental issues in cellular networks. Location management consists of updating the location of the user, searching the location and performing search-updates. Strategies for the efficient performance of updating, searching and search-updating are discussed in this paper.

In a cellular network, a service coverage area is divided into smaller hexagonal areas referred to as cells. A base station serves each cell. The base station is fixed; it is able to communicate with mobile stations such as cellular telephones using its radio transceiver. The base station is connected to the mobile switching centre (MSC), which is, in turn, connected to the public switched telephone network (PSTN). The frequency spectrum allocated to wireless communication is very limited, so the cellular concept was introduced to reuse frequencies. Each cell is assigned a certain number of channels. To avoid radio interference, the channels assigned to one cell must be different from the channels assigned to its neighbouring cells, so that the radio interference between them is tolerable. By reducing the size of the cells, the cellular network is able to increase its capacity, and therefore to serve more subscribers.

A mobile station communicates with another station, either mobile or land, via a base station; a mobile station cannot communicate with another mobile station directly. To make a call from a mobile station, the mobile station first needs to make a request using a reverse control channel of the current cell. If the request is granted by the MSC, a pair of voice channels will be assigned for the call. To route a call to a mobile station is more complicated.

The network first needs to know the MSC and the cell in which the mobile station is currently located. Finding the cell in which a mobile station currently resides is an issue of location management. Once the MSC knows the cell of the mobile station, it can assign a pair of voice channels in that cell for the call. If a call is in progress when the mobile station moves into a neighbouring cell, the mobile station needs to get a new pair of voice channels in the neighbouring cell from the MSC so that the call can continue. This process is called handoff or handover. The MSC usually adopts a channel assignment strategy that prioritizes handoff calls over new calls.

Providing connection-oriented services to the mobile host requires that the host always be connected to the rest of the network in such a manner that its movements are transparent to the users. This requires efficient location management in order to minimise the time taken for updates and searches, so that there is no loss of connection. The ability of mobile hosts (MHs) to autonomously move from one part of the network to another sets a mobile computing system apart from static networks. Unlike static networks, the network configuration and topology keep changing in mobile computing systems. The mobility of some nodes in the network raises interesting issues in the management of location information of these nodes.

A location management strategy is a combination of a search strategy, an update strategy and a search-update strategy.

II. SYSTEM MODEL

A roaming mobile subscriber moves freely within the GSM network. Because the network knows the location of the mobile station, it is possible for the mobile subscriber to receive a call wherever he or she is. To keep the system updated with the current subscriber location information, the mobile station must inform the system whenever it changes location area. A location area consists of one or more cells in which a mobile station can move around without needing to update the system on its location. A location area is controlled by one or more Base Station Controllers (BSC) but by only one Mobile Services Switching Center (MSC). The BSC sends paging messages to the Radio Base Stations (RBS) defined within a certain location area. If the mobile station moves between cells belonging to different location areas, the network must be informed via a procedure called location updating.

Assume a cellular communication system that divides the geographical region served by it into smaller regions, called cells. Each cell has a base station, also referred to as the mobile service station (MSS). Figure 1 shows a logical view of a mobile computing system. A fixed wire network connects the mobile service stations to each other.

Fig. 1: Logical view of a mobile computing system

A mobile service station can be in wireless communication with the mobile hosts in its cell. The location of a mobile host can change with time: it may move from its present cell to a neighbouring cell while participating in a communication session, or it may stop communicating with all nodes for a period of time and then pop up in another part of the network.

A location server maintains the details about mobile users; it keeps a separate location directory entry for each MH. Creating a fixed location directory of all the nodes a priori is not a solution: the location directory has to be dynamically updated to account for the mobility of the MHs. The design of a location directory whose contents change dynamically raises important issues, some of which are as follows. (a) When should the location directory be updated? If the updates are done each time an MH's location changes, the directory will always have the latest location information, reducing the time and effort in locating an MH; however, such a policy imposes a burden on the communication network and the location servers, i.e., the nodes that maintain the directory. (b) Should the location directory be maintained at a centralized site, or should it be distributed? A central location server has problems with regard to robustness and scalability; hence, a distributed directory server is preferable. This leads us to the next questions. (c) How should the location information be distributed among the location servers? And (d) should location information about an MH be replicated across multiple location servers? It is not possible to determine a priori the variations in the spatial distribution of MHs in the network and the frequency with which node locations will be updated or queried.

A mobile host can communicate with other units, mobile or static, only through the mobile service station of the cell in which it is present. If a node wishes to communicate with a mobile host, it first has to determine the location of the MH (the cell in which the MH is currently residing). This location information is stored at location servers. Depending on the frequency of location updates, this location information may be current or out of date. Once the location of the MH has been determined, the information is routed through the fixed wire network to the MSS of the cell in which the MH is present; the MSS then relays the information to the destination MH over a wireless channel. We assume that MSSs act as location servers; hence all the MSSs collectively maintain the location directory.
III. MECHANISMS FOR LOCATION MANAGEMENT

The Base Transceiver Station (BTS) of every cell continuously transmits the location area identity on the control channel (BCCH). When the mobile station detects that the broadcast location area identity is different from the one stored in the SIM card, it performs a location update. If the mobile subscriber is unknown to the Mobile Services Switching Center/Visitor Location Register (MSC/VLR), that is, the broadcast location area belongs to a new MSC/VLR serving area, then the new MSC/VLR must be updated with subscriber information. This subscriber information comes from the Home Location Register (HLR).

This location updating procedure is described in the steps
below and in Figure 2
1. The mobile station requests a location update to be
carried out in the new MSC/VLR. The IMSI is used to
identify the mobile station. An International Mobile
Equipment Identity (IMEI) check is also performed.
2. In the new MSC/VLR, an analysis of the IMSI number
is carried out. The result of this analysis is a
modification of the IMSI to a mobile global title
which is used to address the HLR.
3. The new MSC/VLR requests the subscriber
information for the mobile station from the HLR.
4. The HLR stores the address of the new MSC/VLR.
5. The HLR sends the subscriber data to the new
MSC/VLR.
6. The HLR also orders the old serving MSC/VLR to
cancel all information for the subscriber because the
mobile subscriber is now served by another
MSC/VLR.

Fig. 2: Location updating


7. When the new MSC/VLR receives the information from the HLR, it sends a location updating confirmation message to the mobile station.
Note: The HLR is not informed if the mobile subscriber moves from one location area to another within the same MSC/VLR serving area.
IV. LOCATING USER
Location management deals with how to keep track of an active mobile station within the cellular network. In this paper, two basic operations involved in location management are discussed: location update and paging. The cellular network performs the paging operation: when an incoming call arrives for a mobile station, the cellular network pages the mobile station in all possible cells to find out the cell in which the mobile station is located, so the incoming call can be routed to the corresponding base station. The number of possible cells to be paged depends on how the location update operation is performed. The location update operation is performed by an active mobile station.

Locating users who are on the move, often at locations remote from home, is a challenging task. In general, it is unnecessary to track the locations of all users all the time. Hence, a database that stores the locations of users will often be imprecise in terms of the exact user location. For instance, a user's location may only be updated when the user crosses the border between two different areas or zones, as opposed to updates on crossing a small cell. This, in general, saves on the number of location updates that the moving user has to perform, but puts an additional burden on the search process if the exact location of the user is sought.

A location update scheme can be classified as either global or local. A location update scheme is global if all subscribers update their location at the same set of cells, and a scheme is local if an individual subscriber is allowed to decide when and where to perform the location update; a local scheme is also called individualized or per-user-based. A location update scheme is static if there is a predetermined set of cells at which a mobile station, regardless of its mobility, must generate location updates. A scheme is dynamic if a mobile station in any cell, depending on its mobility, can generate a location update. A global scheme is based on aggregate statistics and traffic patterns, and it is usually static too.

Location management involves signalling in both the wire line portion and the wireless portion of the cellular network. However, most research only considers signalling in the wireless portion, because the radio frequency bandwidth is limited whereas the bandwidth of the wire line network is always expandable. Location update involves reverse control channels whereas paging involves forward control channels. The total location management cost is the sum of the location update cost and the paging cost, and there is a trade-off between the two. If a mobile station updates its location more frequently, the network knows the location of the mobile station better, and the paging cost will be lower when an incoming call arrives for the mobile station. Therefore, the location update and paging costs cannot both be minimized at the same time. However, the total cost can be minimized, or one cost can be minimized by putting a bound on the other.
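The trade-off can be made concrete with a toy cost model. In the sketch below, the per-call paging cost grows with the number of cells in a location area while the update cost shrinks, so the total is minimized at an intermediate area size; the unit costs and rates are illustrative assumptions, not values from this paper.

# Toy model: choose the location-area size k (cells per area) that
# minimizes total signalling cost = update cost + paging cost.
U = 10.0                    # assumed cost of one location update
P = 1.0                     # assumed cost of paging one cell per incoming call
CROSSINGS_PER_HOUR = 6.0    # assumed cell-boundary crossing rate
CALLS_PER_HOUR = 2.0        # assumed incoming-call rate

def total_cost(k: int) -> float:
    updates = CROSSINGS_PER_HOUR / k     # bigger areas -> fewer updates
    paging = CALLS_PER_HOUR * k          # bigger areas -> more cells paged
    return U * updates + P * paging

best = min(range(1, 51), key=total_cost)
print(best, round(total_cost(best), 2))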

V. LOCATION QUERY
A static node, say an MSS or a mobile host in the cell corresponding to the MSS, wishing to communicate with a target mobile host first needs to know the location of the target. Let the target mobile host's identity be MH_id. To locate the target, the function locate_MH is invoked. First, the MSS searches its cache for MH_id's entry. If such an entry is found, the corresponding mobile service station, MSSi, is probed to determine if MH_id is still in the same cell. If so, MSSi returns its own location in the response. Otherwise, one of the virtual identities of MH_id is arbitrarily selected. This virtual identity is used by the hash function to determine the set of MSSs that should be queried about MH_id's location, which is the read set for location information.

If a queried mobile service station, MSSi, has the location of MH_id in its directory, it is sent in the response. If no queried mobile service station has the location of MH_id, the query is broadcast over the network. Once the MSS receives the location of the cell in which MH_id is present, the message is sent over the fixed wire network to the corresponding mobile service station. If MH_id has moved out of the cell since the last location update, a sequence of forwarding pointers (depending on the path taken by MH_id since it moved out of MSSi's cell) is followed to the cell in which MH_id is currently present.
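A compact sketch of this query path is given below; the cache probe, hash-derived read set and forwarding pointers follow the description above, while the data structures themselves are illustrative assumptions.

def locate_MH(mh_id, cache, directories, forwarding, read_set_of):
    """Illustrative sketch: cache probe, then read-set query, then pointer chase."""
    # 1. Probe the MSS remembered in the local cache, if any.
    candidates = []
    cached = cache.get(mh_id)
    if cached is not None and mh_id in directories[cached]:
        candidates = [cached]
    else:
        # 2. Otherwise query the read set derived from a virtual identity.
        candidates = [s for s in read_set_of(mh_id) if mh_id in directories[s]]
    if not candidates:
        return None  # the real system would broadcast the query network-wide
    cell = directories[candidates[0]][mh_id]   # last recorded cell
    # 3. Follow forwarding pointers if MH_id moved since its last update.
    while (mh_id, cell) in forwarding:
        cell = forwarding[(mh_id, cell)]
    return cell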

VI. LOGICAL ARCHITECTURE IN MOBILE NETWORKS

The mobile system consists of mobile hosts, mobile support stations and location servers. The logical network architecture is a hierarchical structure (a tree with H levels) consisting of mobile support stations and location servers. As shown in the figure, the mobile support stations (MSS) are located at the leaf level of the tree. Each MSS maintains information about the hosts residing in its cell. The other nodes in the tree structure are called location servers (LS); each location server maintains information regarding the mobile hosts residing in its subtree. Each communication link has a weight attached to it: the weight of a link is the cost of transmitting a message on the link. Let l[src][dest] represent the link between nodes src and dest, and let w(l) represent the weight of link l. The cost depends on the size of the message, the distance between the hosts, and the bandwidth of the link. For analysis purposes, we assume that w(l) = 1 for all links l; essentially, our cost metric is the number of messages.

VII. LOCATING MH

The problem at hand is as follows: given an MH, determine the location server(s) that will store the location of the MH (a sketch of one such mapping follows the reasons below).

Storing the location information of an MH at only one MSS (serving as the MH's location server) is not desirable for the following reasons:
1. MHs exhibit a spatial locality of reference: even though all nodes in the system can potentially communicate with the network, the bulk of the references originate from only a subset of them. The nodes in the working set may be clustered in different parts of the network. So, to reduce query costs, it is advisable to have location servers for the MH in the vicinity of such clusters.
2. Multiple location servers for an MH make the distributed directory tolerant to the failure of some of the servers.
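One simple way to replicate an MH's entry across several servers is to derive the replica set deterministically from a hash of the MH's identity, so every node computes the same set. The sketch below is an illustrative scheme, not the paper's prescribed mapping.

import hashlib

def location_servers_for(mh_id, servers, replicas=3):
    """Pick a deterministic replica set of location servers for an MH."""
    ranked = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{mh_id}:{s}".encode()).hexdigest(),
    )
    return ranked[:replicas]

servers = [f"LS{i}" for i in range(10)]
print(location_servers_for("MH42", servers))   # same answer on every node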

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.19-22.

Electronic Voting System using Mobile


H. Hemanth Kumar and C. Naresh
Veltech Dr.RR & Dr.SR Technical University
Abstract As we all know the current scenario of the polling system and the way of voting during the election process:
 Polling disrupted by militants in J&K.
 43,732 polling stations declared sensitive.
 Seeks security plan from home ministry.
 Candidate escapes gunshots in TN.
Can you imagine an election without booth capture, without firing at polling stations, without clashes between parties, without long standing queues, without a ferry service to vote: an election that is voter friendly, with an increased voting rate? (No, that we all know.) So here we come with new ideas and approaches to overcome such problems. As we all know, the public faces lots of problems during the election process and also after the election, all of which has a major effect on the economic and technological growth of our country. One day we will see our country well developed and friendly.

I. INTRODUCTION

M-voting is the technique of casting votes through cell phones. An iris scanner checks the person for validity: the eye pattern is transmitted to the server, and after getting permission from the server, the person can vote by sending the keys. This m-voting can be accessed anywhere in the world with the help of satellite communications.

II. CURRENT SCENARIO

Nowadays, large troops of military and paramilitary forces are employed for security purposes, which is a drain on our economy. The main stress on the EC is the protection of ballot boxes and EVMs, and on the day of counting, even higher security is needed.
III. TODAY'S VOTING PRACTICE

Fig. 1
Fig. 2

People have to stand in lengthy queues, in the hot sun, sometimes in the rain. Old-age persons face more trouble, and people in remote areas have to travel by boats, horses, bulls and even elephants. These factors reduce the voting rate.

So here we are with a new approach to overcome all these problems and bring our nation a beautiful and peaceful election; that day is not far away. Here we are going to simplify the voting process: in this hi-tech era, mobile phones attached with an iris scanner are used to cast the votes. Of course, this m-voting reduces time, cost and risks. Iris recognition technology will match our expectations with a high degree of authentication. The technology we are about to implement is m-voting, that is, mobile voting using biometrics, a few electronic devices and the iris signature. With it, we can conduct the polling in a graceful and peaceful manner.

Hence, the combination of mobile communication and biometric techniques paves the way for a new generation of voting, and we can expect that GOOD DEMOCRACY can flourish in the country.


IV. MISSION IMPOSSIBLE

The person who is going to vote has to call the election commission office and then show his or her eye for a fraction of a second. The photographed eye is transmitted to the office, where it is processed for validity and authentication, and the voter receives a unique code based on the digits 0-9 derived from the captured iris. After getting permission, the person can vote for the desired candidate.

Fig. 3

The voting process consists of:
 Enrollment of iris patterns.
 Iris code scanning of individuals.
 Verification for validity.
 Permission to vote.
 Voting.
All these lead to the success of democracy.
V. ENROLLMENT
Prior to the election, the public has to enroll their iris patterns with the main server provided by the election commission. This may be done at the district headquarters, municipal office, etc. The registration needs to be done only once in a lifetime. The database containing the iris patterns along with the personal data has to be kept confidential. Here the camera takes a photograph and generates the iris code; the distance between the camera and the eye can be 4 to 24 inches. This stage alone needs human work.

VI. BIOMETRICS

Biometric technologies are defined as automated methods of identifying or authenticating the identity of a person based on physiological or behavioural characteristics. The figure shows the location of the iris in the human eye.

Physiological characteristics are the more stable characteristics, such as:
 Iris recognition
 Face recognition
 Finger print recognition
 DNA recognition
Behavioural characteristics are a reflection of an individual's make-up, such as:
 Signatures
 Voices

VII. IRIS SCANNING

The person who is going to vote has to send the number provided by the EC via text message; with the help of the iris scanner, the server system scans the unique code sent by each individual and verifies it for the purpose of validation and authentication.

Fig. 5

A low-level incandescent light illuminates the iris so the video camera can focus on it, but the light is barely noticeable and used strictly to assist the camera. One of the frames is then digitized, transmitted and stored in a PC database of enrolled users. The whole procedure takes less than a few seconds and can be fully computerized. No scanning is actually performed, since the eye is simply photographed.

VIII. ABOUT IRIS

An iris has a mesh-like texture to it, with numerous overlays and patterns that can be measured by the computer. The iris recognition software uses about 260 degrees of freedom; even the best fingerprint technology uses only about 60 to 70 degrees of freedom.

A. Why IRIS Recognition?

Fig. 4

Glasses and contact lenses, even coloured ones, do not interfere with the process. In addition, recent medical advances such as refractive surgery, cataract surgery and cornea transplants do not change the iris characteristics; in fact, it is impossible to modify the iris without risking blindness. Moreover, even a blind person can participate: as long as a sightless eye has an iris, that eye can be identified by iris recognition.
IX. IRIS CODE GENERATION

The picture of an eye is first processed by software that localizes the inner and outer boundaries of the iris, and the eyelid contours, in order to extract just the iris portion. Eyelashes and reflections that may cover parts of the iris are detected and discounted.

Sophisticated mathematical software then encodes the iris pattern by a process called demodulation. This creates a phase code for the texture sequence in the iris, similar to a DNA sequence code. The demodulation process uses functions called 2-D wavelets that make a very compact yet complete description of the iris pattern, regardless of its size and pupil dilation, in just 512 bytes. The stored file is only 512 bytes, from an image with a resolution of 640x480, allowing for massive storage on a computer's hard drive.

The phase sequence is called an iris code template, and it captures the unique features of an iris in a robust way that allows easy and very rapid comparisons against large databases of other templates. The iris code template is immediately encrypted to eliminate the possibility of identity theft and to maximize security.
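Template matching of this kind is typically a bitwise comparison: two iris code templates are XORed and the fraction of disagreeing bits (the Hamming distance) is compared with a decision threshold. The sketch below illustrates the idea on toy 512-byte templates; the 0.32 threshold is an illustrative assumption, not a figure from this paper.

import os

TEMPLATE_BYTES = 512    # template size described above
THRESHOLD = 0.32        # assumed decision threshold (fraction of bits)

def hamming_fraction(a: bytes, b: bytes) -> float:
    """Fraction of bits that differ between two equal-length templates."""
    diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff_bits / (8 * len(a))

def is_match(live: bytes, enrolled: bytes) -> bool:
    return hamming_fraction(live, enrolled) < THRESHOLD

enrolled = os.urandom(TEMPLATE_BYTES)                   # stand-in template
print(is_match(enrolled, enrolled))                     # True: same eye
print(is_match(os.urandom(TEMPLATE_BYTES), enrolled))   # ~0.5 apart: False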

X. VERIFICATION

Fig. 6. One to many comparison

In less than a few seconds, even on a database of millions of records, the iris code template generated from a live image is compared to previously enrolled ones to see if it matches any of them. The decision threshold is automatically adjusted for the size of the search database, to ensure that no false matches occur even when huge numbers of iris code templates are being compared with the live one. Some of the bits in an iris code template signify whether some data is corrupted (for example by reflections or contact lens boundaries), so that it does not influence the process and only valid data is compared. Decision thresholds take account of the amount of visible iris data, and the matching operation compensates for any tilt of the iris.

XI. PERMISSION

The server provided at the polling station verifies the iris code against the database and then checks whether the person has already voted. If they are eligible to vote, the server permits them to vote through their cell phone; otherwise it requests them to register, once for a lifetime.

Fig. 7

XII. VOTING

The success of democracy lies here. After getting permission from the EC server, the person can vote from his mobile. Once the code gets verified, the server sends the polling screen via message directly to the mobile, which makes voting portable for everyone. The resulting increase in the voting rate reflects the growth in the technology and economy of our country. A prerecorded message helps the voter select the candidate of choice and also gives an acknowledgement.
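End to end, the PERMISSION and VOTING steps amount to a small state machine on the server. The following sketch is purely illustrative: the storage dicts and SMS-sending helper stand in for infrastructure the paper does not specify.

enrolled = {"IRIS123": "+91XXXXXXXXXX"}  # iris code -> registered mobile (toy data)
voted = set()                            # iris codes that have already voted
tally = {}                               # candidate -> vote count

def send_sms(number: str, text: str) -> None:
    print(f"SMS to {number}: {text}")    # stand-in for the SMS gateway

def grant_permission(iris_code: str) -> bool:
    """XI. PERMISSION: enrolled and not yet voted -> send the polling screen."""
    if iris_code in enrolled and iris_code not in voted:
        send_sms(enrolled[iris_code], "Ballot: 1) Candidate A  2) Candidate B")
        return True
    return False

def cast_vote(iris_code: str, choice: str) -> None:
    """XII. VOTING: record the choice and acknowledge the voter."""
    voted.add(iris_code)
    tally[choice] = tally.get(choice, 0) + 1
    send_sms(enrolled[iris_code], "Vote recorded. Thank you.")

if grant_permission("IRIS123"):
    cast_vote("IRIS123", "1")
print(tally)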

XIV. BENEFITS
 Time saving.
 Patterns are extremely complex to duplicate.
 Imitation is not possible: the pattern cannot be changed without risking the eye.
 Election malpractices can be stopped.
 Secured voting is enabled anywhere in the world.
 Further, this can be used for citizenship rights, border-crossing permissions, passports, and even ATM services.

XV. CONCLUSION

We have presented a newer approach to the voting process, one that will surely bring a change. We can reach the dream day, the peaceful election in which everyone can participate without any discrimination, threats or risks. That dream day is near.

REFERENCES
[1] Electronics For You.
[2] www.google.com

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.23-27.

SOS Transmission through Cellular Phones to Help Accident Victims


Himanshu Joshi
Student, ECE Department, Veltech Dr.RR & Dr. SR Technical University
Abstract This paper describes an original idea to help cellular phone users caught in an accident. The idea has been developed keeping in mind the considerations of cost and compatibility with the existing system. The Short Message Service, or SMS as it is popularly referred to, is made use of for this purpose.

The solution offered is the force-transducer method. The victim is assumed to be unconscious and the accident is detected automatically. Detailed simulation results at a scaled-down level are provided for this solution, and the threshold level is set based on data collected from the experiments.

One major problem in such a design is the technique to find the victim's position. The Global Positioning System (GPS) is found to be costly, so an unorthodox design using Radio Direction Finders (RDF) and beacon signals is described. The goniometer, or crossed loop antenna, is used for this purpose. This reduces cost effectively when compared with the GPS system.

The paper proceeds to suggest an abstract view of the software robot required to perform the Save Our Souls (SOS) message routing task. It uses a special hierarchical message dispatch system wherein people nearby and more likely to help are contacted. The robot also acts as a proxy for the victim and monitors responses for him.

This paper as a whole gives a cost-effective, high-performance system which can be introduced in the market if any of the cellular companies are willing to encourage it.

I. INTRODUCTION

Cellular phones are turning out to be a menace on the road, which is a major problem for the cellular phone manufacturers. This paper provides a solution which transmits an SOS signal to save the accident victim, and describes a cost-effective, foolproof design in detail. There are many factors to be considered when designing such a system. In most accidents, the victim becomes unconscious; how is an SOS transmitted then? Many ideas can be implemented here, and one such solution is described in this paper: the cell phone is fitted with a transducer which detects shocks, and the phone automatically transmits the SOS if the shock level goes beyond a certain threshold. The cell phone must not trigger an accidental SOS; to ensure this, the shock level that triggers the SOS must be high enough. On the other hand, if the shock level is made very high, an accident might not be identified at all.

Having thus identified the situations in the accident, one needs to understand the actual requirements in each case. They are given below.
i) The solution requires a software robot resident in the cellular phone provider's server, which can transmit the SOS signal in an intelligent manner and monitor responses for the victim.
ii) Similarly, the solution needs a positioning system to transmit the victim's whereabouts to others. This has to be a cheap system and should not increase the cell phone receiver's cost greatly.
iii) The solution requires a high-fidelity shock transducer and decoding circuit to identify the shock magnitude.
iv) The SOS has to be transmitted as soon as possible, so all systems must have a very small time delay.
v) Above all, the new system must fit in with the present system, i.e., there must be no difference in the information received between a user who requests this option and one who does not.

The detailed description of the solution will be presented now.

II. THE TOY CAR EXPERIMENT

In case the victim becomes unconscious, the system must be able to automatically detect an accident and transmit the SOS on its own. In order to achieve this, a shock transducer is used to measure the jerk experienced in the accident and trigger the SOS circuit if the force level is very high. This system needs statistical data acquisition to find the exact threshold level of the force in an accident. It is highly expensive to simulate an accident in real time, so a scaled-down experiment is used: a pair of toy cars of mass 200 g is made to collide with each other, and the force caused by them is measured by simple piezoelectric transducers. The results of this experiment are tabulated below.

Table 2.1
Sample No. | Measured Voltage (mV) | Actual Force (N)
1          | 113.2                 | 0.977
2          | 112.7                 | 0.972
3          | 114.3                 | 0.985
4          | 114.5                 | 0.987
5          | 113.3                 | 0.978
Mean       | 113.6                 | 0.980

As seen from the experiment, the average force acting on a toy car in case of an accident is approximately 1 N. For a car of 960 kg moving at 70 kmph, the force scales up roughly 18000 times, to 18 kN. These practical results can be verified by a simple theoretical calculation: a car weighing 960 kg decelerates from approximately 70 kmph to 0 kmph in seconds in case of an accident, and the force is given by F = ma, which is 960 * 70 * 1000/3600, or 18.67 kN approximately. This agrees with the scaled-down experimental results. However, in a four-wheeler, not all of the total force acts inside the vehicle; as per information obtained from Mercedes Benz, only 10% of the total force acts inside the car (Acknowledgement [4]). Thus, the threshold can be set at approximately 1 kN. The scaled-down experiment used a cheaper transducer that does not measure high forces; the transducer required for the actual system costs Rs. 1000 a pair. Based on the statistical data collected above, the approximate threshold level is determined. More accurate results can be obtained if the experiments are carried out in real time to the exact detail.

In order to ensure that the force calculated above acts on the cell phone, it is essential to place the phone in the stand that normally comes as a standard part of cars. This stand requires a slight modification to provide the cell phone a small moving space so that it is jerked when an accident occurs. The alternate and better solution would be to attach the transducer to some part of the vehicle itself and connect the cell phone to it whenever the user is driving his/her car; this solution would require that the transducer be properly protected. The problem of finding the victim's position is dealt with next.
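The scaling argument can be checked in a few lines. The deceleration time below is set to 1 s so that the numbers reproduce the paper's 18.67 kN figure; treat it as an assumption of the arithmetic, and the 10% interior-force factor comes from the Mercedes Benz figure quoted above.

MASS_KG = 960.0
SPEED_KMPH = 70.0
DECEL_TIME_S = 1.0        # assumed so that F matches the paper's 18.67 kN
INTERIOR_FRACTION = 0.10  # ~10% of the crash force acts inside the car

speed_ms = SPEED_KMPH * 1000.0 / 3600.0        # 19.44 m/s
force_n = MASS_KG * speed_ms / DECEL_TIME_S    # F = m * dv / dt
interior_n = force_n * INTERIOR_FRACTION

print(f"total force ~ {force_n/1000:.2f} kN")       # ~18.67 kN
print(f"inside car  ~ {interior_n/1000:.2f} kN")    # ~1.9 kN -> threshold near 1 kN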

In order to do this, the cellular phone needs to transmit a


microwave signal to the base station. This can be of any
frequency that has not been allocated for the existing
control frequencies. The base station is then fitted with
the CROSSED LOOP or BELLINI TOSI or
GONIOMETER type of direction finder. It has been
proved mathematically that the meter points to the
direction of the signal source (Reference [4])

III. IDENTIFYING THE POSITION OF THE VICTIM


The problem of knowing where we are has been an
interesting and difficult problem through the ages. Years
of research have resulted in the Global Positioning
System (GPS). This technique uses three satellites and pin
points the location by the triangulation process, wherein
the users position is located as the point of intersection of
the three circles corresponding to the satellites. Installing
such a system is quite simple. But the major constraint
here is the cost. A normal hand-held GPS costs around
$100 and weighs quite heavy. Minimizing the above
apparatus will increase the cost further. This would mean
an extra cost of Rs.10000 to Rs.15000 for the Indian user.

The user in distress sends out a microwave signal to the


base station just as the base station sends its beacon
signal. From the reflected beacon signal the radius of the
victims position is found. From the goniometer, the
direction is found as well. This system as assumed above
presents a design for only one user. To do this a small
electronic system, preferably a microcontroller based
system maybe used. Such systems are available widely in
the market and so there is no point in trying to design one.
Thus, the problem of identifying the victim is overcome.
Once the victims location is identified, the base station
transmits the SOS sent by the cell phone along with his
coordinates to the main server. The cell phone thus
initiates the process and the base station propagates it.

The better option would be to wait for an SOS signal and then identify the victim's position; being faster, this also makes the design process easy and cheap. This being the case, one could make use of certain obvious facts to identify the victim:

i) The cell within which the victim is present can be identified easily by the base station. However, this

IV. COMPLETE BLOCK DIAGRAM OF THE SYSTEM


Fig. 1. Block diagram of the system: the force caused by an accident acts on the transducer fitted in the cellular phone; a high-pass filter produces the trigger output that switches in the user ID and triggers the SMS subroutine; position-finding signals pass to the base station, which sends beacon signals to the various users; the base station's goniometer and positioning equipment pass the decoded position to the software robot, which sends out the help message.

The above diagram depicts the working of the complete system. The jerk caused by the accident is detected by the shock transducer and the SMS subroutine is triggered; the triggering is achieved using a high-pass filter that detects abrupt changes in the transducer output. Along with the message, control signals that inform the base station that an accident has occurred are transmitted. Simultaneously, the microwave signal for the goniometer is also transmitted, and the position is identified as described in the previous section. The user's ID and position in polar coordinates are given to the software robot, which then communicates the victim's position to other subscribers based on a priority list.
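As an illustrative sketch (the helper below is hypothetical, not part of the paper's software), the robot's input pair of radius and goniometer bearing can be converted to a Cartesian offset from the base station before being matched against surveyed localities:

public class PolarToPosition {
    public static void main(String[] args) {
        double r = 3000.0;                      // radius from the beacon delay, in m (assumed)
        double bearingDeg = 60.0;               // direction from the goniometer (assumed)
        double theta = Math.toRadians(bearingDeg);
        double east = r * Math.cos(theta);      // offset east of the base station
        double north = r * Math.sin(theta);     // offset north of the base station
        System.out.printf("victim at (%.0f m E, %.0f m N) of the base station%n", east, north);
    }
}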

So far, the hardware design of the system has been dealt with in detail. As mentioned at the start of the paper, a software robot that manages the whole show must also be designed. This robot resides in the main server in the control tower of the cellular service provider. The functions this robot has to perform are complex; the algorithm it follows and its code at the highest level of abstraction are explained in the next section.
V. DESIGNING THE SOFTWARE ROBOT
A software robot is a program that resides in a network
(or an environment) and executes a specific task assigned
to it. For this purpose, it may move around the
environment or contact other software robots in the same
or other environments. A software robot is to be designed
for this system so as to monitor and transmit the SOS
signals in an intelligent manner.
The tasks that are to be performed by the software robot
are listed below.


i) It has to transmit the SOS to the appropriate persons, as will be described.

ii) It has to act in the victim's place and monitor responses.



iii) It has to check for a confirmation from the victim to avoid false alarms. This is accomplished by interrogating the victim and waiting for the confirmation; if there is no response within a very short time, the transmitted SOS must be followed up with a False Alarm message.

Before designing the algorithm, the hierarchy in which the SOS is to be transmitted must be decided. This takes into account the following factors:

i) The proximity of the help source.

ii) The certainty with which help might be got. For example, a relative whose cell number is present in the victim's address book would be more likely to help than a third person.

Based on the above constraints, a suggested hierarchy for transmitting the SOS is given below. It is maintained as sets of indices in the agent's look-up table, with each index representing one group.

i) Emergency desk of all hospitals in the victim's cell.
ii) All doctors presently in the victim's cell.
iii) All subscribers in the victim's address book that are presently in the cell.
iv) All subscribers in the victim's cell.
v) Emergency desk of all hospitals in the next nearest cell, and so on.

This set of indices is changed dynamically by the MAILER DAEMON in the server. This MAILER DAEMON is normally present in all servers; it is the program that initiates the actual Agent whenever an SOS occurs. The code of the Agent is given at an abstracted level below.

public class Agent
{
    /* Get the victim's number and position from the MAILER
       DAEMON. The Subscriber class defines the victim. */
    Subscriber SOSTransmitter = new Subscriber( MAILERDAEMON.getVictim( ) );
    Position victimPosn = new Position( MAILERDAEMON.getPosition( ) );
    Boolean processingFlag = True;
    Subscriber helpSub;

    // lookUp : look-up table with all indices
    while( ( helpSub = readList( lookUp ) ) != EOF )
    {
        send( "HELP", victimPosn, helpSub );
        delay( 30 );                             // wait for 30 seconds
        Response resp1 = new Response( scanResponse( ) );
        if( resp1 == NULL )
            continue;                            // no reply yet: try the next group
        if( resp1 != NULL )
            exit( 0 );                           // somebody answered: stop transmitting
    }

    // Scan responses continuously for 120 secs after
    // transmitting to all subscribers.
    Response resp2 = scanResponse( 120 );
    if( resp2 != NULL )
        send( "HELP ON THE WAY", SOSTransmitter );   // inform the victim of help
    processingFlag = False;
}

The agent given here, once started, gets the victim's id and position into the respective objects. It then puts each of the indices from the look-up table into its corresponding object and sends the SOS to them. It then monitors the responses and informs the victim when somebody responds.

VI. SIMULATION

Fig. 2. Block diagram of the simulation: microphone, MW transmitter, MW receiver, analog-to-digital converter, serial port of a personal computer, C program to measure the input and set a flag, and a Java front-end connected to an Oracle mock database.

The simulation of this system has been carried out on a small scale. As seen from the block diagram, a microphone substitutes for the shock transducer of the original system. Its output is transmitted through a Medium Wave transmitter to a personal computer. The signal


received is passed through an ADC and read by a C program. This program checks the signal value and sets a flag variable when it goes beyond a certain level. The flag is continually checked by a thread of the Java front-end; if the flag is set, the program connects to the back-end database and displays a list of users to whom the mock message is sent, based on the hierarchy explained above. The simulation does not cover the positioning part of the system, as that is too expensive to do on a small scale. A screen shot of the Java front-end is shown in the next section.

VIII. APPROXIMATE COST ANALYSIS OF THE SYSTEM

1. Cost of installing a goniometer for one base station: Rs. 20,000
2. Number of base stations in Chennai city: 50
3. Cost of installing goniometers for the city: Rs. 10,00,000
4. Cost of other hardware and software (for the whole city): Rs. 1,00,000
5. Number of subscribers in the city: 10,000 (assumed value)
6. Cost per subscriber: Rs. 110
7. Cost of the transducer fitted in the cell phone: Rs. 1,000
8. Cost of other hardware in the cell phone: Rs. 1,000

Total cost per subscriber (approximate): Rs. 2,110

(The Rs. 11,00,000 of city-wide infrastructure spread over 10,000 subscribers gives Rs. 110 per subscriber; adding Rs. 2,000 of in-phone hardware gives the total of Rs. 2,110.)

It is thus seen that, even allowing for the widest possible cost, the total increase is only Rs. 2,110, which is a good price considering that the system can save the life of the subscriber.

IX. LIMITATIONS OF THE SYSTEM

The system, though complete, presents a few limitations. It requires the user to place the cellular phone in a stand or to connect the transducer to the vehicle in the case of four-wheelers; though this might seem to take choice away from the user, the fact that the system deals with a question of life or death is more important. The system needs detailed surveying to decode the position of the user from polar coordinates into actual localities; this, however, is a one-time job. The system does not handle multiple victims simultaneously, but priority can be allocated to users based on the force measured. False alarms are bound to occur in such a system; they can be reduced by ringing the cellular phone every time an SOS is sent, thereby warning the user. The data collected are approximate; however, accurate data can be collected if the system is tested in real time as a commercial venture.

Thus, if implemented, this system would prove to be a boon to all the people out there driving with hands-free earphones in their ears.

REFERENCES

[1] Helfrick and Cooper, Electronic Measurements and Instrumentation.
[2] Raj Pandya, Personal Mobile Communication Systems and Services.
[3] Thiagarajan Viswanathan, Telecommunication and Switching Systems.
[4] K. D. Prasad, Antennas and Wave Propagation.
[5] George Kennedy, Electronic Communication Systems.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.28-36.

4G Communications
D. Krishnakumar and J. Manikandan
Vel Tech Dr. RR & Dr. SR Technical University
Abstract: Mobile communication is continuously one of the hottest areas, developing at a booming speed, with advanced techniques emerging in all fields of mobile and wireless communications. Current times are just the beginning for deploying 3G mobile communication systems, while research on the next generation of mobile communications, 4G wireless and mobile networks, begins to pave the way for the future. This paper studies the visions of 4G from a technical perspective: after a brief review of the development history and status of mobile communications and related 4G perspectives, we present an overall 4G feature framework. 4G, a recent technology, is supposed to allow data transfer rates of 100 Mbps up to 1 Gbps. To provide excellent quality of service to 4G users, key technologies will be used along with OFDM (orthogonal frequency division multiplexing); the challenge is thus to present the latest information on key technologies such as MIMO, radio resource management, software defined radio (SDR) communication systems, mobile IP and relaying techniques. Finally, a short summary of the 4G visions is presented as a continuum of features in the development of the mobile communications world.

I. INTRODUCTION - EVOLUTION OF THE MOBILE MARKET

"The future always comes too fast," the eminent futurologist Alvin Toffler once said, and this is evident from the fast changes taking place in the telecommunication market. Over recent years telecommunications has been a fast-growing industry, as can be seen from the increase in the revenue of telecommunication companies and from the career openings in the industry.

By the end of the 1940s, the first radiotelephone service had been introduced in the US; it was used to connect mobile users in cars to the public fixed network. The Improved Mobile Telephone Service (IMTS), launched by Bell Systems in the 1960s, brought many changes to the technology, such as direct dialing and higher bandwidth. This eventually led to the invention of the first analog cellular systems, developed in the late 1960s and early 1970s. These systems were "cellular" because coverage areas were split into smaller areas called "cells", each of which was served by a low-power transmitter and receiver.

This first generation (1G) analog system for mobile communications saw key improvements during the 1970s, owing to the invention of the microprocessor and the digitization of the control link between the mobile phone and the cell site.

Second generation (2G): by the end of the 1980s, digital cellular systems had been developed. These systems digitized both the control link and the voice signal, and provided better voice quality with higher capacity at lower cost.

Third generation (3G) systems provide faster communication services, including voice, fax and Internet. Work has now started on the development of fourth generation (4G) technologies in Japan.

II. 1G TECHNOLOGY

The introduction of the 1G system began the wireless revolution towards mobility becoming an accepted and expected method of communication. Cellular communication, referred to as 1G, is one of the most prolific voice communication platforms. It enables the following features:

1. Frequency reuse: frequency reuse was an essential element in the quest for cellular systems to have a higher capacity per geographic area. It involves using the same frequency in a system many times over. The capability to reuse the same radio frequency many times in a system is the result of managing the C/I signal level for an analog system. The minimum separation required between two nearby co-channel cells is based on specifying a tolerable co-channel


interference level, which is measured by a required carrier-to-interference ratio, (C/I)s. The (C/I)s ratio is also a function of the minimum acceptable voice quality of the system.

2. Mobility of the subscriber: the subscriber was mobile and could use the service from many possible places.

3. Handoffs: this concept is one of the fundamental principles of this technology; thanks to its implementation, cellular operators can work at lower power levels. Multiple algorithms are used to generate and process the handoff request and an eventual handoff order; the individual algorithms depend on the individual vendors' network infrastructure and the software loads utilized. Handing off from cell to cell is the process of transferring a mobile unit that has a call in progress on a particular voice channel to another voice channel. Cellular communication is usually associated with the Advanced Mobile Phone System (AMPS) or Total Access Communication System (TACS); these are analog-based systems that operate on the concept of frequency reuse.
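As a rough numerical illustration of this frequency-reuse arithmetic (using the standard hexagonal-geometry textbook relations, which the paper itself does not spell out): for a cluster size N the co-channel reuse distance is D = R*sqrt(3N), and with six first-tier co-channel interferers and path-loss exponent n, C/I is approximately (D/R)^n / 6.

public class FrequencyReuse {
    public static void main(String[] args) {
        double n = 4.0;                               // assumed path-loss exponent
        for (int cluster : new int[] {4, 7, 12}) {    // common cluster sizes
            double q = Math.sqrt(3.0 * cluster);      // reuse ratio D/R = sqrt(3N)
            double ciDb = 10 * Math.log10(Math.pow(q, n) / 6.0);
            System.out.printf("N = %2d: D/R = %.2f, C/I ~ %.1f dB%n", cluster, q, ciDb);
        }
    }
}

With N = 7 this gives roughly 18.7 dB, which is why seven-cell clusters were common in AMPS-style analog systems.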

The cell sites act as a conduit for the information transfer: the cell site receives and sends information between the mobile telephone system and the mobile. The MTSO is connected to the cell site either by leased lines or through a microwave system; it processes the call and connects the cell site radio link to the PSTN (Public Switched Telephone Network). The 1G system suffered from a variety of difficulties, and the biggest problems that led to the introduction of 2G were:

1. The 1G system had limited capacity.
2. The technology initially used did not include security features.

The second generation (2G) is the generalization used to describe the advent of digital mobile communication in cellular mobile systems. When cellular systems were being upgraded to 2G capabilities, the description at the time was simply "digital". It involved a variety of technology platforms as well as frequency bands.

The main difference from the previous mobile telephone systems, i.e. 1G, is that the radio signals that 1G networks use are analog, while 2G networks are digital. Both systems use digital signaling to connect the radio towers to the rest of the telephone system.

The advantages of 2G are:

1. Increased capacity over analog
2. Reduced capital infrastructure costs
3. Reduced capital cost per subscriber
4. Reduced cellular fraud
5. Improved features
6. Encryption

The issues regarding 2G deployment are as follows:

1. Capacity
2. Spectrum utilization
3. Infrastructure changes
4. Subscriber unit upgrades

2G technologies can be divided into two main types depending on the type of multiplexing used.

TDMA-based: Time division multiple access (TDMA) is a technology used for shared networks. It allows many users to share the same frequency by dividing the frequency into timeslots. The users transmit in rapid succession, each using its own timeslot; thus multiple users share the same transmission medium but use only the part of its bandwidth they require. TDMA is used in:

1. Global System for Mobile Communications (GSM)
2. Personal Digital Cellular (PDC)
3. iDEN digital cellular standards
4. Satellite systems
5. Local area networks
6. Physical security systems
7. Combat-net radio systems

III. 2.5G TECHNOLOGY

2.5G represents various technology upgrades for existing second generation cellular networks. 2.5G upgrade technologies are designed to sit on top of 2G networks with minimal additional infrastructure. One of the most widely discussed and important 2.5G technologies is GPRS, designed to be layered on top of existing 2G GSM networks; it provides good-speed data transfer by using TDMA channels in the GSM network.

2.5G is a stepping stone between 2G and 3G cellular wireless technologies. The term "2.5G" is used to describe 2G systems that have implemented a packet-switched domain alongside the circuit-switched domain. While the terms "2G" and "3G" are officially defined, "2.5G" is not; it was coined for marketing purposes only.

Its advantages over 2G were:

1. High-speed packet data services
2. Use of the existing radio spectrum
3. Reuse of some of the existing 2G infrastructure in GSM and CDMA networks


3G (or 3-G) is short for third-generation technology. 3G wireless technology represents a shift from voice-centric services to multimedia-oriented services such as video, voice, data and fax.

The services associated with 3G are:

1. The ability to transfer voice data and non-voice data simultaneously.
2. Video telephony, which has often been cited as the flagship application for 3G.

3G networks are not an upgrade of 2G networks and do not operate on the same frequency spectrum. 3G, or third-generation wireless, refers to near-future developments in personal and business wireless technology, especially relating to mobile communications. 3G will usher in many benefits, such as roaming capability, broad bandwidth and high-speed communication (upwards of 2 Mbps).

The most significant features offered by third generation (3G) mobile technologies are:

1. Momentous capacity.
2. Broadband capabilities to support greater numbers of voice and data customers.
3. The 5 MHz channel carrier, which provides optimum use of radio resources for operators who have been granted large, contiguous blocks of spectrum.
4. Reduced cost of 3G networks while being capable of providing extremely high-speed data transmission to users.
5. Transmission at 384 kbps for mobile systems and 2 Mbps for stationary systems.
6. Greater capacity and improved spectrum efficiency, which will allow users global roaming between different 3G networks.
IV. DISADVANTAGES

A. Why Did 3G Fail?

The state of wireless telecoms is a classic example. Even as "third-generation" (3G) mobile networks are being switched on around the world, a couple of years later than planned, attention is shifting to what comes next: a group of newer technologies that are, inevitably, being called 4G. Interest in 4G owes much to the mess surrounding 3G. Operators spent 100 billion euros (about $100 billion) buying licenses to run 3G networks, but the technology was harder to implement than expected. Even where 3G networks are up and running, demand for the snazzy video and multimedia services they make possible is still uncertain.

Even though 3G has successfully been introduced to European mobile users, there are some issues debated by 3G providers and users:

1. High input fees for the 3G service licenses;
2. Great differences in the licensing terms;
3. Current high debt of many telecommunication companies, making it more of a challenge to build the necessary infrastructure for 3G;
4. Member State support for financially troubled operators;
5. Health aspects of the effects of electromagnetic waves;
6. Expense and bulk of 3G phones;
7. Lack of 2G mobile user buy-in for 3G wireless service;
8. Lack of coverage, because it is still a new service;
9. High prices of 3G mobile services in some countries.

V. 3G VS 4G

The following table shows comparisons between some key parameters of 3G and possible 4G systems.

Parameter          3G                              4G
Frequency band     1.8 - 2.5 GHz                   2 - 8 GHz
Bandwidth          5-20 MHz                        5-20 MHz
Data rate          Up to 2 Mbps (384 kbps WAN)     Up to 20 Mbps or more
Access             Wideband CDMA                   Multi-carrier CDMA or OFDM (TDMA)
FEC                Turbo codes                     Concatenated codes
Switching          Circuit/Packet                  Packet
Mobile top speed   200 kmph                        200 kmph

4G (or 4-G) is short for fourth generation, the successor wireless access technology to 3G.

According to 4G working groups, the infrastructure and the terminals will have almost all the standards from 2G to 3G implemented.

What if you could combine Wi-Fi-style internet access with the blanket coverage, and fewer base stations, of a mobile network? The various 4G technologies developed by such firms as IPWireless, Flarion, Navini, ArrayComm and Broadstorm offer just such a blend:

1. High-speed wireless networks,
2. covering a wide area,
3. designed above all for carrying data, rather than voice or a mixture of the two;
4. they can pipe data to and from mobile devices at "broadband" speed, typically 10-20 times faster than a dial-up modem connection.

Numerous 4G technologies are working today. The first commercial deployments are in parts of America, Canada, New Zealand, South Korea, Germany, Italy and the Netherlands. Vendors are licensing 4G to telecom-equipment makers such as:

1. Alcatel
2. Nortel
3. LG Electronics

What are the main features of 4G technology? The 4G technology will be able to:

1. Support interactive services like video conferencing (with more than two sites simultaneously), wireless Internet, etc.
2. The bandwidth would be much wider (100 MHz) and data would be transferred at much higher rates.
3. The cost of data transfer would be comparatively much lower, and global mobility would be possible.
4. The networks will be all-IP networks based on IPv6.
5. The antennas will be much smarter, and improved access technologies like OFDM and MC-CDMA (Multi-Carrier CDMA) will be used.
6. The security features will be much better.
7. The entire network would be packet switched (IP based), and all switches would be digital.
8. Higher bandwidths would be available, which would make cheap data transfer possible.
9. The network security would be much tighter; more efficient algorithms at the physical layer will reduce inter-channel interference and co-channel interference.
10. The other great advantage of 4G will be its speed: while 3G networks provide 2 megabits per second, 4G can reach anywhere between 20 and 100 megabits per second.
11. With 4G it will be possible to use several applications, such as videoconferencing or picture playback, simultaneously on the mobile phone at maximum resolution. The system will also serve as an open platform that new innovations can build on.

Some of the standards which pave the way for 4G systems are:

1. WiMAX
2. WiBro
3. 3GPP

VI. OFDM

A. Introduction

OFDM is a modified version of FDM (frequency division multiplexing). In general, multiplexing is applied to a number of signals originating from a number of sources. In the OFDM technique the signal is divided into a number of separate channels, these channels are modulated by the data to be sent, and all the modulated signals are then combined to form a single OFDM carrier signal.

In simple words, FDM is like water flowing from a single tap, while OFDM is the same amount of water coming out of a shower as a number of small streams. If the outlet of the tap is blocked by a finger, the whole water flow stops; but if the same flow through the shower is blocked by a finger, only some of the streams are blocked and the remaining ones continue to flow.

This is the basic principle implemented in OFDM. OFDM spreads the data over a number of carriers at specific predefined frequencies. "Orthogonal" refers to the perpendicular nature of the signals: the modulated frequencies are orthogonal to each other, which reduces or eliminates crosstalk. But if the transmitter or the receiver is in motion, for example in a vehicle, there is a problem of intersymbol interference, since the observed frequency changes with the motion between the transmitter and the receiver, and performance is poor due to this interference.

OFDM spectrum utilization is illustrated in Fig. 1 and Fig. 2.

Using OFDM, it is possible to exploit the time domain, the space domain, the frequency domain and even the code domain to optimize radio channel usage. It ensures very robust transmission in multipath environments with reduced receiver complexity. To reduce intersymbol interference, a guard interval is inserted between two successive OFDM symbols.

B. OFDM Transmitter

Fig. 3

The whole chunk of data to be transmitted, which is in serial form, is converted into n parallel data streams using a demultiplexer. Each data stream is sent through a QAM mapper, where the streams are mapped onto streams of symbols according to the constellation.

In the OFDM carrier signal, each of the sub-carriers in a particular frequency spectrum is modulated separately. The modulated sub-carriers are then passed through an inverse Fourier transform module, which generates the complex time-domain signal with real and imaginary parts. Since an analog signal is needed for transmission, each of these digital parts is converted to analog using a digital-to-analog converter. These analog signals modulate the sine and cosine components of the locally generated carrier signal fc, as shown in the figure, and the two components are combined to generate the final signal for transmission.

C. OFDM Receiver

At the receiver, the exact opposite takes place. First, the received signal is mixed with a locally generated carrier of the same frequency as at the transmitter. The baseband signals are thereby separated from the carrier; since this process also generates higher-frequency components of the carrier, a low-pass filter is used to block these unwanted signals. To recover the digital data, the signals are passed through an analog-to-digital converter. The result is the real and imaginary parts of the complex signal, similar to those created at the transmitter. These time-domain signals are then converted to the frequency domain using the forward Fourier transform.

Fig. 4

Now we have n parallel symbol streams. These streams are passed through a symbol-detection stage, which recovers the original binary streams. The individual streams are then sent to a multiplexer, where they are converted from parallel to serial form, reproducing the original chunk of data that was present at the transmitter.
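The chain just described can be condensed into a small, self-contained sketch. The code below is only an illustration under simplifying assumptions (QPSK mapping, a naive O(N^2) DFT instead of an FFT, no guard interval, no up-conversion onto the sine/cosine carrier); it shows the parallel symbol streams being combined by an inverse transform at the transmitter and recovered by the forward transform and sign-based symbol detection at the receiver.

public class OfdmSketch {
    public static void main(String[] args) {
        int n = 8;                                      // number of sub-carriers (assumed)
        int[] bits = {0,1,1,0,0,0,1,1,1,0,0,1,1,1,0,0}; // two bits per sub-carrier
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++) {                   // QPSK mapping onto each sub-carrier
            re[k] = bits[2 * k] == 0 ? 1 : -1;
            im[k] = bits[2 * k + 1] == 0 ? 1 : -1;
        }
        double[][] time = dft(re, im, true);            // transmitter: inverse transform
        double[][] freq = dft(time[0], time[1], false); // receiver: forward transform
        for (int k = 0; k < n; k++)                     // symbol detection by sign
            System.out.printf("sub-carrier %d: bits %d%d%n", k,
                    freq[0][k] > 0 ? 0 : 1, freq[1][k] > 0 ? 0 : 1);
    }

    // Naive DFT; inverse = true gives the IDFT with 1/N scaling.
    static double[][] dft(double[] re, double[] im, boolean inverse) {
        int n = re.length;
        double sign = inverse ? 1 : -1, scale = inverse ? 1.0 / n : 1.0;
        double[][] out = new double[2][n];
        for (int k = 0; k < n; k++)
            for (int t = 0; t < n; t++) {
                double a = sign * 2 * Math.PI * k * t / n;
                out[0][k] += (re[t] * Math.cos(a) - im[t] * Math.sin(a)) * scale;
                out[1][k] += (re[t] * Math.sin(a) + im[t] * Math.cos(a)) * scale;
            }
        return out;
    }
}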
D. Advantages of OFDM

 High efficiency of the spectrum
 FFT can be implemented efficiently
 Eliminates inter-symbol interference and fading caused by multipath propagation
 Eliminates narrowband co-channel interference
 Less sensitive to time synchronization errors

E. Disadvantages of OFDM

 Highly sensitive to frequency synchronization errors
 Peak-to-average power ratio (PAPR) is high
 High-power transmitter amplifiers are required for linear transmission
 Power and capacity are wasted on the guard band, which can consume up to 20% of the transmitted power and bandwidth
VII. MIMO

A. Introduction

OFDM-based access technology is the most appropriate for 4G. In order to provide the required quality of service to the users of 4G, several technologies will be used along with OFDM, among them MIMO, radio resource management, software defined radio (SDR) communication systems, mobile IP and relaying techniques, which come under multi-antenna technologies. MIMO is a method in which multiple antennas are used for wireless communication over the channel, mitigating the negative effects of the wireless channel and providing


better link quality and/or a higher data rate without consuming extra bandwidth or transmit power.

Multiple-input multiple-output (MIMO) channels, or vector channels, cover a wide range of applications. As special cases they also include MISO (multiple-input single-output), SISO (single-input single-output) and SIMO (single-input multiple-output) channels, but most of the time MIMO is associated with multiple-antenna systems. These technologies are mostly used in multi-user communications. The figure below shows this in a very abstract form.

Fig. 5. Principle of multiple access to a common channel

Here Ni denotes the inputs and No the outputs; the term "channel" is not limited to the physical transmission medium (the radio channel) but has a general meaning and also includes parts of the digital communication system.

The difference between single-user and multi-user communications is that in the single-user case the multiple inputs and outputs of a vector channel may correspond to different transmitting and receiving antennas, carrier frequencies and time slots. Because the data stems from a single user, intelligent signaling at the transmitter can be performed. Multiple antennas can also be employed to increase the system's diversity degree and therefore enhance the link performance. The reliability of the link can also be improved by beamforming, which enlarges the signal-to-noise ratio. Several data streams can thus be multiplexed over spatially separated channels in order to multiply the data rate without increasing the bandwidth.

B. Multiple Access Techniques

Transmission of multiple data streams sharing a common medium is separated and managed by multiplexing techniques in single-user communications and by multiple access techniques in multi-user communications. To ensure reliable communication, most systems try to avoid interference by choosing orthogonal access schemes, so that there is no multiple access interference (MAI) or disturbance in the transmission. However, in many cases orthogonality cannot be maintained due to the influence of the mobile channel.

C. Time Division Multiplexing (TDM) and Multiple Access (TDMA)

In this multiple access technique, time is divided into slots, each of length T, as shown in the figure. The data to be transmitted is divided into packets and each packet is assigned to a slot; a user can also occupy several slots. A defined number N of slots is built into a frame, which is periodically repeated, so each user has access to the shared medium in a periodic manner. A guard interval is inserted between the slots to avoid interference between them; during these intervals no data is transmitted, so they represent redundancy and reduce the spectral efficiency of the communication system. A toy calculation of this frame structure is given after the figure.

Fig. 6. Principle of time division multiple access
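The following toy calculation (with assumed numbers; the slot length is a GSM-like value, the guard time is illustrative) shows the frame structure just described:

public class TdmaFrame {
    public static void main(String[] args) {
        int slots = 8;             // N slots per frame (assumed)
        double slotMs = 0.577;     // slot length T, a GSM-like value
        double guardMs = 0.030;    // guard interval between slots (assumed)
        for (int k = 0; k < slots; k++) {
            double start = k * (slotMs + guardMs) + guardMs;
            System.out.printf("user %d transmits at %.3f ms into each frame%n", k, start);
        }
        System.out.printf("frame repeats every %.3f ms%n", slots * (slotMs + guardMs));
    }
}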

D. Frequency Division Multiplexing (FDM) and Multiple Access (FDMA)

Here the frequency axis is divided into Nf sub-bands, each of width B, as shown in the figure illustrating the principle of frequency division multiple access, and the data packets are distributed over the different frequency bands rather than over different time slots as in TDM. In mobile environments the signal bandwidth is spread by the Doppler effect, so gaps of an appropriate width have to be left between the sub-bands. This comes at the expense of the reduced spectral efficiency of frequency division multiple access (FDMA).
E. Code Division Multiplexing (CDM) and Multiple Access (CDMA)

In contrast to both preceding schemes, CDMA allows simultaneous access to a single channel in the same frequency range. The basic principle is to spectrally spread the data streams with specific sequences called spreading codes (the spread spectrum technique). The signals can be distinguished by the codes assigned to them individually, which opens up a third dimension, as seen in the figure below. This leads to orthogonal codes, ensuring parallel transmission for different users. However, the transmission channel generally destroys this orthogonality, and multi-user interference (MUI) becomes a limiting factor for the spectral efficiency.


Fig. 7. Principle of code and space division multiple access

F. Space Division Multiplexing (SDM) and Multiple Access (SDMA)

This scheme exploits the resources in space: data streams can simultaneously access the channel in the same frequency band, provided the locations of the transmit and receive antennas are appropriately chosen.

This requirement is sometimes difficult to fulfill in a mobile environment, as users change their position during the connection; therefore, quasi-static scenarios or combinations with the aforementioned access techniques are often considered. Mutual interference is likely to occur in space division multiple access (SDMA) systems, as the transmitter and the receiver do not have the perfect channel knowledge that would be necessary to avoid interference. As expected, all the mentioned access schemes can be combined. The well-known Global System for Mobile Communications (GSM) and Digital Cellular System (DCS) standards both combine time division multiple access (TDMA) and FDMA. In UMTS (Universal Mobile Telecommunications System) or IMT-2000 (International Mobile Telecommunications) systems, CDMA is used in connection with TDMA and FDMA. While TDMA, FDMA and CDMA have already been used for a fairly long time, SDMA is rather recent in comparison; its development resulted from the demand to use the licenses assigned to certain frequency bands as efficiently as possible.

Fig. 8. Principle of space division multiple access

VIII. 4G TERMINALS

The critical success factors for 4G terminals are:

 Service convergence
a) Convergence of wireless and wired, telecommunications and broadcasting
b) Seamless connectivity
c) Ubiquity

It is important that the many services from different service providers be compatible with a large variety of terminals with different capabilities. Examples are sending SMS to broadcasters while on the move during a broadcast program, and internet shopping for an object appearing in an MPEG4 movie. As mobile terminals become more complex, it is expected that convenient features aimed at enhancing their usability will emerge, for example advanced recognition technologies and user-friendly interfaces.

A. User Centric Interface

a) Provide various features with convenience and fun
b) Universal design of hardware and software
c) Recognition technology: voice, text, context, etc.

There is no single terminal capable of fulfilling the needs of all users. Different user segments have different needs, and they expect diversity in mobile terminals, such as devices intended for entertainment, information access and business.

For example, general purpose terminals targeting the horizontal market need to support imaging and rich media for entertainment. Furthermore, the market will be driven by specialized devices such as smart-phones with business applications like Personal Information Management (PIM) and office functions, or embedded terminals for logistics.
B. Multimode Versus Single Purpose Terminals

In the evolution of mobile terminals we can see two different basic design concepts: the old-fashioned single-mode terminal with just basic telephone services, and the multimode terminal in which different functionalities are incorporated into one single device. Most people feel that users should be able to experience various content, such as music and video, beyond simple telephony through one terminal with multi-functionality. However, some user segments require simple devices with only a minimum of essential functions. Thus there is still an ambiguity in the design of multipurpose terminals.


C. Application of 4G Terminals

a) Digital Rights Management

Since 4G is expected to rely heavily on multimedia communications, DRM is an issue that needs to be taken into account while developing this technology. DRM is a set of technologies that provides the means to control the distribution and consumption of digital media objects.

b) OMA DRM Functional Architecture

The DRM system is independent of media object formats, operating systems and runtime environments. Content protected by DRM can be of a wide variety: games, ring tones, photos, music clips, video clips, streaming media and many more. In order to protect the content from unauthorized access, it is converted into a packet. A content issuer delivers DRM content, and a rights issuer generates a rights object; the rights object governs how the DRM content may be used. The requests for the DRM content and the rights object can be made at the same time or separately, and the same holds for their delivery.

DRM content can only be accessed with a valid rights object, and so it can be freely distributed. This enables super-distribution, as users can freely pass DRM content between them. The DRM content is accessed by requesting and delivering a rights object to the DRM agent.

Open Mobile Alliance (OMA) DRM is one of the most well known standards for mobile devices. OMA DRM v1 is for low-value content such as ring tones; OMA DRM v2 aims at high-value content by providing sophisticated security features for the secure delivery of a rights object from a rights issuer to a mobile device. Some features of OMA DRM v2 are:

 Enhanced security for the acquisition of rights objects
 Support for unconnected devices (devices without network capability)
 Content sharing by multiple devices
 Exporting OMA DRM content and rights objects to other DRM systems

c) Short and Long-Term Vision of 4G

The Future Technology for Universal Radio Environment (FuTURE) is a government-driven research project in China. The plans and goals of the FuTURE project until 2010 are as shown below:

 Key technology trends are followed in spectrum and technology.
 Standardization of the trials and pre-commercial systems.
 The long-term goal is to put China into a competitive R&D position once the standardization and development activities on 4G are started on a global scale.
 The FuTURE project is divided into the following sub-projects:
 B3G radio access techniques
 Wireless LAN and ad hoc networks
 Multiple antenna environments
 3G-based ad hoc networking
 IPv6-based mobile core networks
 Generic techniques for mobile communication
 System structure, requirements and higher-layer applications

European-funded 4G research cooperation projects focus on FP6 (the Sixth Framework Programme), which has the following objectives:

 Strengthen the scientific and technological bases of industry.
 Encourage its international competitiveness while promoting research activities in support of other EU policies.

The research activities are carried out by 3 WGs (Working Groups):

1. Market and Service Working Group:
 Analyze trends in mobile communication market services
 Propose policies for the activation of the mobile internet markets
 Develop new services in fixed and mobile convergence markets
 Analyze the broadband service trends and practical possibilities in wireless environments
 Forecast the 4G demand from analyses of the 3G and mobile internet markets

2. System and Technology Working Group:
 Define and select 4G technologies
 Evaluate the 4G technologies and set up technical goals
 Propose the technical standardizations in 4G
 Cooperate with the international forums for technical developments

3. Spectrum Working Group:
 Analyze the trends of spectrum utilization in mobile communications
 Propose spectrum utilization plans for the activation of mobile communications
 Discuss the 4G spectrum

d) The Upcoming Challenges Ahead

Future terminals and technologies (4G terminals) face challenges in the following areas.

High speed and large capacity wireless transmission techniques:
1. Frequency refarming
2. Advanced adaptive techniques to increase spectral efficiency
3. MIMO techniques for exploiting spatial multiplexing
4. Multicarrier techniques
5. Interference and fading mitigation techniques
6. Error control techniques
7. High speed packet radio
8. Handover techniques

Network technologies:
1. Radio access networking techniques
2. Robust networks
3. Ad hoc networks
4. Seamless networking techniques
5. Approach link techniques
6. High speed transport technology

Mobile terminal technologies:
1. Circuit and component technology
2. Battery technology
3. Human interface
4. Terminal security techniques
5. Terminal software
6. Multisystem wireless terminal technologies
7. Software defined radio

Mobile system technologies:
1. Quality of service
2. Mobility control
3. Mobility multicast techniques
4. Location determination and navigation
5. Security, encryption and navigation

IX. CONCLUSION

4G base stations will use smart antennas to directly transmit or receive radio-beam patterns to and from individual users, which will make possible more reliable calls at greater distances from base stations. Greater DSP power will enable better amelioration of fading and of interference from multipath reflections and from other cell phones, producing better quality audio and video. We will also have biometric security features like thumbprint readers, and location-centric (GPS and more) capabilities as well. Fourth-generation devices are not likely to become widely available until about 2010.

Such a development in the world of wireless technology would mean that compatibility with multiple radio systems could be achieved in software alone, enabling the development of simple terminals that can communicate from anywhere in the world. Users could adapt communications according to end use, with complete freedom to select their own style of services irrespective of network or operator, bringing the ultimate dream of software-defined radio to reality.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.37-42.

1-Bit Nano-CMOS Based Full Adder Cells for Mobile Applications with Low Leakage Ground Bounce Noise Reduction

S. Porselvi
Vel Tech Dr. RR & Dr. SR Technical University, Avadi, Chennai.
Email: selvi1913@gmail.com

Abstract: As technology scales into the nanometer regime, ground bounce noise and noise immunity are becoming important metrics, of comparable importance to leakage current, active power, delay and area, for the analysis and design of complex arithmetic logic circuits. In this paper, low-leakage 1-bit full adder cells are proposed for mobile applications with low ground bounce noise, and a novel technique with improved staggered phase damping is introduced for further reduction in the peak of the ground bounce noise. Noise immunity has been carefully considered, since the significant threshold current of the low-threshold-voltage transistors becomes more susceptible to noise. We introduce a new transistor resizing approach for 1-bit full adder cells to determine the optimal sleep transistor size that reduces the leakage power and ground bounce noise. The simulation results show that the proposed designs also lead to efficient 1-bit full adder cells in terms of standby leakage power, active power, ground bounce noise and noise margin. We have performed simulations using Microwind 90nm standard CMOS technology at room temperature with a supply voltage of 1V.

Keywords: Low leakage power; Noise margin; Ground bounce noise; Sleep transistor; Adder cell.


I. INTRODUCTION

Adders are the heart of computational circuits, and many complex arithmetic circuits are based on addition [1], [2]. The vast use of this operation in arithmetic functions attracts a lot of research attention to adders for mobile applications. In recent years, several variants of different logic styles have been proposed to implement 1-bit adder cells. These adder cells commonly aim to reduce power consumption and increase speed. These studies have also investigated different approaches to realizing adders in CMOS technology [3], [4].

For mobile applications, designers have to work within a very tight leakage power specification in order to meet product battery life and package cost objectives. The designer's concern about the level of leakage current is not related to ensuring correct circuit operation but to minimizing power dissipation; for portable electronic devices this equates to maximizing battery life. For example, mobile phones need to be powered for extended periods (known as standby mode, during which the phone is able to receive an incoming call), but are fully active for much shorter periods (known as talk or active mode, while making a call). When an electronic device such as a mobile phone is in standby mode, certain portions of the circuitry that are active in talk mode are shut down. These circuits, however, still have leakage currents running through them, even though they have been de-activated. Although the leakage current is much smaller than the normal operating current of the circuit, it depletes the battery charge over the relatively long standby time, whereas the operating current during talk time only depletes the battery charge over the relatively short talk time. As a result, the leakage current has a disproportionate effect on total battery life. This is why building low-leakage adder cells for mobile applications is of great interest. To summarize, several performance criteria are considered in the design and evaluation of adder cells, such as leakage power, active power, ground bounce noise, area, noise margin, and robustness with respect to voltage and transistor scaling as well as process variations and compatibility with surrounding circuitry.

Shortening the gate length of a transistor increases its power consumption due to the increased leakage current between the transistor's source and drain when no signal voltage is applied at the gate [5], [6]. In addition to the subthreshold leakage current, the gate tunneling current also increases due to the scaling of gate oxide thickness; each new technology generation results in nearly a 30x increase in gate leakage [7], [8]. Leakage power is expected to reach more than 50% of total power in sub-100nm technology generations [9]. Hence, it has become extremely important to develop design techniques that reduce static power dissipation during periods of inactivity. The power reduction must be achieved without trading off performance, which makes it harder to reduce leakage during normal (runtime) operation. On the other hand, there are several techniques to reduce leakage power [10].

Power gating is one such well-known technique, where a sleep transistor is added between the actual ground rail and the circuit ground (called the virtual ground) [11], [12], [13], [14]. This device is turned off in sleep mode to cut off the leakage path. It has been shown that this technique provides a substantial reduction in leakage at a minimal


impact on performance [15], [16], [17], [18]; a further reduction in the peak of the ground bounce noise is possible with the proposed novel technique using improved staggered phase damping.

Modified sizings are shown in Fig. 2 and Fig. 5 respectively. The smallest transistor considered in 90nm technology has a width of 120nm and a length of 100nm, giving a W/L ratio of 1.2. In Design1 the W/L ratio of NMOS is fixed at 1.2 and the W/L of PMOS is 3.8, i.e. about 3.1 times that of NMOS. The sizing of each block is based on the following assumption: the Base case is considered as individual blocks, as shown in Fig. 3, each block is treated as an equivalent inverter, and the same inverter ratio is maintained in each block. This sizing greatly reduces the standby leakage current, because the subthreshold current is directly proportional to the width/length ratio of the transistor. The reduced sizes also reduce the area occupied by the circuit, and hence the silicon chip area, so there is an obvious reduction in cost.

This paper focuses on reducing subthreshold leakage power consumption and ground bounce noise. The remainder of the paper is organized as follows. In Section II, the proposed nano-CMOS full adder circuits and their equivalent circuits are discussed. In Section III, the performance analysis and simulation results of the conventional CMOS full adder cell and the proposed circuits are explained. The paper is summarized in Section IV.
II. PROPOSED FULL ADDER CIRCUITS

Recently, power dissipation has become an important concern, and considerable emphasis is placed on understanding the sources of power and approaches to dealing with power dissipation [3]. The static logic style gives robustness against noise effects and so automatically provides reliable operation. Pseudo-NMOS and pass-transistor logic can reduce the number of transistors required to implement a given logic function, but they suffer from static power dissipation. Implementing multiplexers and XOR-based circuits is advantageous in pass-transistor logic [4]. On the other hand, dynamic logic implementations of complex functions require a small silicon area, but charge leakage occurs and charge refreshing is required, which reduces the frequency of operation. In general, none of the mentioned styles can compete with the CMOS style in robustness and stability [4], [13].

Fig. 1. Conventional CMOS full adder

Fig. 1 shows the conventional CMOS 28-transistor adder [12]. This is considered the Base case throughout this paper, and all comparisons are made against it. The CMOS structure combines PMOS pull-up and NMOS pull-down networks to produce the considered outputs. Transistor sizes are specified as a width/length (W/L) ratio, and the sizing of transistors plays a key role in the static CMOS style. It is observed in the conventional adder circuit that the PMOS-to-NMOS transistor ratio is 2 for an inverter, and the remaining blocks follow the same ratio when considered as equivalent inverters. This ratio does not give the best results with respect to noise margin and standby leakage power when simulated in a 90nm process. Modified adder circuits with new sizing are therefore proposed as Design1 and Design2, targeting the noise margin and ground bounce noise.

Fig. 2. Proposed full adder (Design1) circuit with sleep transistor

Further, the power gating technique is used to reduce the leakage power: a sleep transistor is connected between the actual ground rail and the circuit ground. Ground bounce noise is estimated when the circuits are connected to a sleep transistor, and a further reduction in the peak of the ground bounce noise is achieved with the proposed novel technique.

In the modified adder circuit Design2, shown in Fig. 5, the W/L ratio of PMOS is 1.5 times the W/L ratio of NMOS, and each block has been treated as an equivalent inverter. The same inverter size has been maintained in each block, as shown in Fig. 4. The goal of this design

is to reduce the standby leakage power further compared to the Base case and Design1, as well as the ground bounce noise produced when the circuit is connected to a sleep transistor. There is only a slight variation in the noise margin levels, which remain almost equal to the Base case.

Table 1. Noise Margins for Base Case and Design1

Input     Base case                  Design1
Vector    VTH(V)  NML(V)  NMH(V)     VTH(V)  NML(V)  NMH(V)
S00       0.53    0.50    0.45       0.47    0.44    0.50
S01       0.51    0.50    0.48       0.48    0.47    0.51
S10       0.52    0.51    0.46       0.52    0.51    0.48
S11       0.53    0.50    0.44       0.53    0.50    0.44
0S0       0.51    0.48    0.46       0.48    0.46    0.49
0S1       0.49    0.48    0.49       0.50    0.49    0.49
1S0       0.51    0.49    0.46       0.49    0.49    0.50
1S1       0.51    0.49    0.46       0.52    0.49    0.50
00S       0.50    0.47    0.47       0.50    0.48    0.47
01S       0.47    0.45    0.52       0.48    0.47    0.52
10S       0.50    0.47    0.47       0.48    0.47    0.52
11S       0.50    0.47    0.47       0.50    0.47    0.47

Fig. 3. Equivalent circuit for Design1

Table 2. Noise Margins for Base Case and Design2

Input     Base case                  Design2
Vector    VTH(V)  NML(V)  NMH(V)     VTH(V)  NML(V)  NMH(V)
S00       0.53    0.50    0.45       0.47    0.44    0.49
S01       0.51    0.50    0.48       0.44    0.45    0.57
S10       0.52    0.51    0.46       0.45    0.47    0.56
S11       0.53    0.50    0.44       0.52    0.54    0.52
0S0       0.51    0.48    0.46       0.46    0.48    0.59
0S1       0.49    0.48    0.49       0.41    0.40    0.58
1S0       0.51    0.49    0.46       0.43    0.42    0.56
1S1       0.51    0.49    0.46       0.49    0.47    0.47
00S       0.50    0.47    0.47       0.43    0.47    0.60
01S       0.47    0.45    0.52       0.41    0.40    0.58
10S       0.50    0.47    0.47       0.41    0.41    0.42
11S       0.50    0.47    0.47       0.48    0.51    0.55

Fig. 4. Equivalent circuit for Design2

Fig. 5. Proposed 1-bit full adder (Design2) circuit with sleep transistor

III. PERFORMANCE ANALYSIS AND SIMULATION RESULTS

A. Noise Margin

The noise margin is measured from the voltage transfer characteristic (VTC) of the circuit. A sweep waveform varying from 0 to 1 V in increments of 0.001 V is applied to one of the inputs, named S, and different DC values are applied to the remaining inputs, as shown in Table 1. The values of VTH, VIL and VIH can be calculated from the obtained DC transfer characteristic of the circuit. As illustrated in Tables 1 and 2, the noise margin of Design1 is comparable to the Base case and its switching threshold voltage levels approach VDD/2, i.e. 0.5 V; the noise margin levels of Design2 are also comparable to the Base case.
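As a sketch of how these margins are read off a transfer curve (this is not the paper's measurement script; the VTC below is a synthetic inverter-like curve, and all numbers are assumed): VIL and VIH are the input voltages where the VTC slope crosses -1, and NML = VIL - VOL, NMH = VOH - VIH.

public class NoiseMargin {
    public static void main(String[] args) {
        int n = 1001;                            // 0 to 1 V in 1 mV steps, as in the text
        double[] vin = new double[n], vout = new double[n];
        for (int i = 0; i < n; i++) {            // synthetic inverter-like VTC (assumed)
            vin[i] = i * 0.001;
            vout[i] = 1.0 / (1.0 + Math.exp(40 * (vin[i] - 0.5)));
        }
        double vil = 0, vih = 0;
        for (int i = 1; i < n; i++) {
            double slope = (vout[i] - vout[i - 1]) / 0.001;
            if (vil == 0 && slope <= -1) vil = vin[i];   // first unity-gain point
            if (slope <= -1) vih = vin[i];               // last unity-gain point
        }
        double voh = vout[0], vol = vout[n - 1];
        System.out.printf("NML = VIL - VOL = %.2f V, NMH = VOH - VIH = %.2f V%n",
                vil - vol, voh - vih);
    }
}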

B. Active Power

Active power is the power dissipated by the circuit when it is in the active state. It is measured by applying input vectors and calculating the average power dissipation over this time; the simulation time considered for calculating active power is 50ns. The input vectors are chosen so that almost all input combinations are covered, and the same vectors and simulation time are applied to the Base case to compare the results. This power includes dynamic as well as static power, so it is termed active power. We have performed post-layout simulations using the Cadence Spectre simulator, and the technology employed for simulation is 90nm.

As shown in Table 3, the active power of both Design1 and Design2 is greatly reduced compared to the Base case: the reduction is about 40.49% for Design1 and 63.87% for Design2.

Table 3. Active Power Dissipation of 1-Bit Full Adder Cells

Design name         Base case   Design1   Design2
Active power (µW)     3.488      2.076     1.261
Fig. 7. Layout of proposed full adder circuit (Design1) with
90nm technology

C. Standby Leakage Power


Standby leakage power is measured when the circuit is in
standby mode. Sleep transistor is connected to the pull
down network of 1 bit full adder circuit. Sleep transistor
is off by asserting an input 0V. For simplicity, size of a
sleep transistor is equal to the size of largest transistor in
the network (pull up or pull-down) connected to the sleep
transistor. The sleep transistor size in Design1 and
Design2 is reduced due to the resizing of the adder cells
Fig. 8. Layout of proposed full adder circuit(Design2) with
90nm technology

in proposed circuit. Standby leakage power is measured


by giving different input combinations to the circuit.
Standby leakage is greatly reduced in both Design1 and
Design2 as shown in Fig. 6. In case of Design1 reduction
in standby power is about 82% and in Design2 it is about
84% for all input combinations.

Fig. 6. Comparison of standby leakage power with different input combinations for the three designs.

D. Area
The layouts are used to calculate the areas of the proposed designs, and the parasitics have been considered in the designs. The 90nm layouts of the proposed full adder circuits (Design1 and Design2) are shown in Fig. 7 and Fig. 8. As depicted in Table 4, area is reduced by 55.43% in Design1 and 72% in Design2 compared to the Base case.

Table 4. Area of 1-Bit Full Adder Cells

Design name    Area (µm²)
Base case      1.75
Design1        0.78
Design2        0.49

E. Ground Bounce Noise Reduction
During the power mode transition, an instantaneous charge current passes through the sleep transistor, which operates in its saturation region, and creates current surges. Because of the self-inductance of the off-chip bonding wires and the parasitic inductance inherent to the on-chip power rails, these surges result in voltage fluctuations on the power rails. If the magnitude of the voltage surge is large enough, the circuit may erroneously latch the wrong value or switch at the wrong time.
Inductive noise, also known as simultaneous switching noise, is a phenomenon traditionally associated with input/output buffers and internal circuitry. The noise immunity of a circuit decreases as its supply voltage is reduced, so techniques such as power gating must address the problem of ground bounce in low-voltage CMOS circuits. The ground bounce model used in our simulation is shown in Fig. 9. Ground bounce noise is reduced in both Design1 and Design2 compared to the Base case, as shown in Fig. 10.

Fig. 9. DIP-40 package pin ground bounce noise model [11].
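The essence of the Fig. 9 model is that a current surge i(t) through the package and bond-wire inductance produces a ground-rail excursion v = L*di/dt. A minimal Python sketch under assumed element values (not the paper's extracted parasitics):

    import numpy as np

    L = 5e-9                        # assumed bond-wire inductance (5 nH)
    t = np.linspace(0, 10e-9, 1001) # 10 ns window
    i = 2e-3 * np.exp(-t / 1e-9)    # assumed surge: 2 mA peak, 1 ns decay
    v_gnd = L * np.gradient(i, t)   # induced ground-rail fluctuation
    print(f"peak |v_gnd| = {1e3 * np.abs(v_gnd).max():.2f} mV")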


Fig. 10. Peak of ground bounce noise comparison with proposed and conventional power gating technique (Base case, Design1, Design2).

F. Improved Ground Bounce Noise Reduction
During the last decade, various alternatives to and improvements of conventional power gating have been proposed to reduce the ground bounce noise during mode transition. In the staggered-phase damping technique [15], during the standby-to-active power mode transition, the activation of one of the two sleep transistors is delayed relative to the other by a time equal to half the resonant oscillation period. As a result, noise cancellation occurs once the second sleep transistor turns on, owing to the phase shift between the noise contributions of the two sleep transistors, and the settling time is reduced. However, this technique is not very effective in reducing the peak noise caused by the initial spike. In another scheme [17], a two-stage procedure is used. In the first stage, the sleep transistor works as a diode: the control transistor connected across the drain and gate of the sleep transistor is turned on, so the drain-to-source current of the sleep transistor drops in a quadratic manner. This reduces the voltage fluctuation on the ground and power nets and also reduces the circuit wake-up time. In the second stage, the control transistor is turned off so that the sleep transistor works normally. This method is not effective at suppressing the overall fluctuations of the ground bounce noise. Therefore, a technique is needed that reduces both the peak of the ground bounce noise and its overall fluctuations. The idea of the proposed technique is to combine both of the above techniques to further reduce the peak of the ground bounce noise and the overall power mode transition noise.

Fig. 11. Proposed novel technique for ground bounce noise reduction.

Figure 11 shows the proposed scheme for reducing the peak of the ground bounce noise during mode transition. One-bit full adders (Base case, Design1, Design2) have been taken to apply the proposed technique. The one-bit full adder is considered as two cascaded blocks, a carry generation block and a sum generation block, and separate sleep transistors are added at the bottom of the blocks. The proposed technique works as follows. The applied signals are shown in Fig. 12.

Fig. 12. Applied signals to the proposed technique.

For the carry part, during stage 1 the transmission gate is turned off by the proper enable signals and, at the same time, the control transistor is turned on to make the sleep transistor work as a diode. The stored charge in the carry generation block is discharged through the sleep transistor. The drain current of ST1 during this stage is given by (1):

    Id = µCox(W/L)[(VGS - Vth)VDS - VDS^2/2]    (1)

Since the drain-to-source voltage of the control transistor (CT1) is zero, which makes VDS = VGS, the current Id through ST1 can be written as (2):

    Id = µCox(W/L)[VDS^2/2 - Vth*VDS]    (2)

As the voltage level of the virtual ground drops, VDS across ST1 drops, which makes the drain-to-source current Id of the sleep transistor (ST1) drop in a quadratic manner. The dropping Id decreases the voltage fluctuation on the ground and power nets. The same signals are applied to the sum generation part as well, but delayed by half of the oscillation period. As a result, noise cancellation occurs once the second sleep transistor (ST2) turns on, owing to the phase shift between the noise contributions of the two sleep transistors; the resulting reduction in the peak of the ground bounce noise is shown in Fig. 13.
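The cancellation argument can be illustrated numerically: two equal ringing components offset by half the resonant period largely cancel after the second one starts. A minimal Python sketch with assumed (not extracted) resonance parameters:

    import numpy as np

    f0 = 100e6                      # assumed resonant frequency
    T = 1 / f0
    t = np.linspace(0, 10 * T, 5001)

    def ring(t0):                   # damped ringing that starts at t0
        s = np.exp(-(t - t0) / (3 * T)) * np.sin(2 * np.pi * f0 * (t - t0))
        return np.where(t >= t0, s, 0.0)

    single = ring(0.0)                               # one full-size sleep transistor
    staggered = 0.5 * ring(0.0) + 0.5 * ring(T / 2)  # two half-size, T/2 apart
    print(np.abs(single).max(), np.abs(staggered).max())

In this toy model the staggered sum keeps a reduced initial spike (only the first half-size transistor rings at first) and then cancels, matching the qualitative behaviour described above.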


Fig. 13. Peak of ground bounce noise comparison with proposed and conventional power gating technique (Base case, Design1, Design2).

IV. CONCLUSION
In this paper, low leakage 1-bit full adder cells with low ground bounce noise are proposed for mobile applications. Noise immunity has been carefully considered, since the significant subthreshold current of low threshold voltage transistors makes circuits more susceptible to noise. Using the proposed technique, leakage power is reduced by 82% (Design1) and 84% (Design2) in comparison to the conventional adder cell (Base case). Ground bounce noise is reduced by about 1.5 times and 3 times in Design1 and Design2, respectively, compared to the Base case. Further, using the proposed novel technique, the ground bounce noise is reduced by about 4.5 times in all three designs (Base case, Design1, Design2) compared to operation without the technique. Area is reduced by 55.4% (Design1) and 72% (Design2) in comparison to the Base case. Active power is reduced by about 40.48% (Design1) and 63.85% (Design2) in comparison to the Base case. The noise immunity of the proposed full adder cells is comparable to that of the conventional adder cell (Base case). The proposed novel technique, which improves on the staggered-phase damping technique, further reduces the peak of the ground bounce noise and the overall power mode transition noise. The proposed 1-bit full adder cells are designed in 90nm technology and operate with a 1V supply voltage.


REFERENCES
[1] Radu Zlatanovici, Sean Kao, Borivoje Nikolic, "Energy-Delay Optimization of 64-Bit Carry-Lookahead Adders With a 240ps 90nm CMOS Design Example," IEEE J. Solid-State Circuits, vol. 44, no. 2, pp. 569-583, Feb. 2009.
[2] K. Navi, O. Kavehei, M. Rouholamini, A. Sahafi, S. Mehrabi, N. Dadkhai, "Low-Power and High-Performance 1-bit CMOS Full Adder Cell," Journal of Computers, Academy Press, vol. 3, no. 2, Feb. 2008.
[3] J. M. Rabaey, A. Chandrakasan, B. Nikolic, Digital Integrated Circuits: A Design Perspective, 2nd ed., Prentice Hall, Englewood Cliffs, NJ, 2002.
[4] R. Zimmermann, W. Fichtner, "Low-power logic styles: CMOS versus pass-transistor logic," IEEE J. Solid-State Circuits, vol. 32, pp. 1079-1090, July 1997.
[5] S. G. Narendra and A. Chandrakasan, Leakage in Nanometer CMOS Technologies. New York: Springer-Verlag, 2006.
[6] K. Bernstein et al., "Design and CAD challenges in sub-90nm CMOS technologies," in Proc. Int. Conf. Comput.-Aided Des., 2003, pp. 129-136.
[7] International Technology Roadmap for Semiconductors, Semiconductor Industry Association, 2005. [Online]. Available:
[8] H. Felder and J. Ganger, "Full Chip Analysis of Leakage Power Under Process Variations, Including Spatial Correlations," in Proc. DAC, pp. 523-528, June 2005.
[9] Jun Cheol Park and Vincent J. Mooney, "Sleepy Stack Leakage Reduction," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 14, no. 1, November 2006.
[10] Harmander Singh, Kanak Agarwal, Dennis Sylvester, Kevin J. Nowka, "Enhanced Leakage Reduction Techniques Using Intermediate Strength Power Gating," IEEE Transactions on VLSI Systems, vol. 15, no. 11, November 2007.
[11] Y. Chang, S. K. Gupta, and M. A. Breuer, "Analysis of ground bounce in deep sub-micron circuits," in Proc. 15th IEEE VLSI Test Symp., 1997, pp. 110-116.
[12] N. Weste, K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Addison-Wesley, 1993.
[13] Suhwan Kim, Chang Jun Choi, Deog-Kyoon Jeong, Stephen V. Kosonocky, Sung Bae Park, "Reducing Ground-Bounce Noise and Stabilizing the Data-Retention Voltage of Power-Gating Structures," IEEE Transactions on Electron Devices, vol. 55, no. 1, January 2008.
[14] S. Mutoh et al., "1-V power supply high-speed digital circuit technology with multithreshold-voltage CMOS," IEEE J. Solid-State Circuits, vol. SC-30, pp. 847-854, Aug. 1995.
[15] Charbel J. Akl, Rafic A. Ayoubi, Magdy A. Bayoumi, "An effective staggered-phase damping technique for suppressing power-gating resonance noise during mode transition," in Proc. 10th International Symposium on Quality Electronic Design, pp. 116-119, 2009.
[16] K. Kawasaki et al., "A sub-us wake-up time power gating technique with bypass power line for rush current support," IEEE J. Solid-State Circuits, vol. 44, no. 4, Apr. 2009.
[17] Ku He, Rong Luo, Yu Wang, "A Power Gating Scheme for Ground Bounce Reduction During Mode Transition," in Proc. ICCD'07, pp. 388-394, 2007.
[18] M. V. D. L. Varaprasad, Rohit Bapna, Manisha Pattanaik, "Performance Analysis of Low Leakage 1-bit Nano-CMOS Based Full Adder Cells for Mobile Applications," Proceedings of International Conference on VLSI Design & Communication Systems, pp. 233-238, January 2010.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.43-48.

BioIDs - A Security Guaranteed System


V. Gauthugesh and M. Dhivya
IV Year, Department of ECE, Vel Tech Engineering College
Email: gauthugesh17@gmail.com, mdhivya27@gmail.com

Abstract - Most systems that control access to financial transactions, computer networks, or secured locations identify authorized persons by recognizing passwords or personal identification numbers. The weakness of these systems is that unauthorized persons can discover others' passwords and numbers quite easily and use them without detection.


Biometric identification systems, which use physical features to check a person's identity, ensure much greater security than password and number systems. Biometric features such as the face or a fingerprint can be stored on a microchip in a credit card, for example. If someone steals the card and tries to use it, the impostor's biometric features will not match the features stored in the card, and the system will prevent the transaction.


Biometrics refers to automated methods of recognizing a person based on a physiological or behavioral characteristic. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions.
A single feature, however, sometimes fails to be exact enough for identification. Consider identical twins, for example: their faces alone may not distinguish them. Another disadvantage of using only one feature is that the chosen feature is not always readable. For example, some five percent of people have fingerprints that cannot be recorded because they are obscured by a cut or a scar or are too fine to show up well in a photograph.
This paper presents a system called BioID, which is developed to identify a person using different features: face, voice, lip movement, iris recognition, and finger and palm geometry. With its three modalities, BioID achieves much greater accuracy than single-feature systems.

I. INTRODUCTION
Biometric (biological features as a measure) recognition refers to the use of distinctive physiological and behavioral characteristics (e.g., fingerprints, face, hand geometry, iris, gait, signature), called biometric identifiers or simply biometrics, for automatically recognizing a person.

In multimodal biometric identification systems, even if one modality is somehow disturbed (for example, if a noisy environment drowns out the voice), the other two modalities still lead to an accurate identification. BioID is the first identification system that uses a dynamic feature, lip movement. This feature makes BioID more secure against fraud than systems using only static features such as fingerprints.

II. BIOIDS
BioID is suitable for any application in which people require access to a technical system: computer networks, Internet commerce and banking systems, and ATMs, for example. In addition, the system secures access to rooms and buildings. So far, most BioID installations serve physical structures and small office computer networks. Depending on the application, BioID authorizes people either through identification or verification. In identification mode, the system identifies a person exclusively through biometric traits. In verification mode, a user name or a number is given, and the system then verifies the identity by means of biometric traits. Figure 1 shows a user interacting with the system.

Fig. 1. Interacting with BioID: seeking access to a computer network, the user poses in front of the PC camera and speaks his name.

III. SYSTEM FUNCTIONS
Figure 2 shows BioID's functions. The system acquires (records), preprocesses, and classifies each biometric feature separately. During the training (enrollment) of the

system, biometric templates are generated for each


feature. For classification, the system compares these
templates with the newly recorded pattern. Then, using a
strategy that depends on the level of security required by
the application, it combines the classification results into
one result by which it recognizes persons.

Fig. 2. BioID's main functions: from video and audio samplings of a person speaking, the system extracts facial, lip movement, and voice features (a cepstrum is a special form of the frequency spectrum). Synergetic computers and a vector quantifier classify the recorded pattern and combine the results.

IV. DATA ACQUISITION AND PREPROCESSING
The input to the system is a recorded sample of a person speaking. The one-second sample consists of a 25-frame video sequence and an audio signal. From the video sequence, the preprocessing module extracts two optical biometric traits: the face and the lip movement while speaking a word.

To extract those features, the preprocessing module must have exact knowledge of the face's position. Since this recognition system should be able to function in any arbitrary environment with off-the-shelf video equipment, the face-finding process is one of the most important steps in feature extraction.

A. Using Hausdorff Distance for Face Location
To detect the location of a face in an arbitrary image, identification systems often use neural-net-based algorithms, but these approaches are very time-consuming. Instead, BioID uses a newly developed, model-based algorithm that matches a binary model of a typical human face to a binarized, edge-extracted version of the video image. Figure 4 illustrates this process. The face extractor bases its comparison on the modified Hausdorff distance, which determines the model's optimal location, scaling, and rotation. The Hausdorff distance uses two point sets, A and B. To obtain it, we calculate the minimum distance from each point of set A to a point of set B and vice versa; the maximum of these minimum distances is the Hausdorff distance. Point set A represents the face model, and point set B is a part of the image. The minimum of the calculated maximum distances determines the part of the image where the face is located. After detecting the face boundaries, the preprocessing module locates the eyes from the first three images of the video sequence, under the assumption that a person often closes his eyes when beginning to speak. As with face location, eye location also relies on an image model and the Hausdorff distance.
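For illustration, the classical (unmodified) Hausdorff distance between two point sets can be computed as follows; this is a minimal sketch with made-up point sets, not BioID's implementation:

    import numpy as np

    def hausdorff(A, B):
        # Pairwise distances between every point of A and every point of B
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        # Max of the two directed distances h(A,B) and h(B,A)
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # e.g., model points
    B = np.array([[0.1, 0.1], [1.2, 0.0]])              # e.g., image points
    print(hausdorff(A, B))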

Fig. 4. Face location: (a) original image, (b) edge-extracted image, (c) face model, and (d) face model overlaid on the edge-extracted image.

V. FACIAL FEATURES
For face recognition, the preprocessing module uses the first image in the video sequence that shows the person with eyes open. Once the eyes are in position, the preprocessing module uses anthropomorphic knowledge to extract a normalized portion of the face. That is, it scales all faces to a uniform size and crops the images uniformly for easier comparison; the photograph collection in Fig. 3 shows 12 individuals, and we note the uniformity that the system achieves. This procedure ensures that the appropriate facial features are analyzed (but not, for example, the head size, the hairstyle, a tie, or a piece of jewelry). After rotating and scaling the image, the preprocessing module extracts a grayscale image. Some further preprocessing steps take care of lighting conditions and color variance.

Fig. 3. Samples of different faces


B. Classification
1. Lip movement and face classification
2. Finger and palm geometry
3. Voice recognition
4. Iris recognition
VI. LIP MOVEMENT AND FACE RECOGNITION
The synergetic computer serves as a learning classifier for optical biometric-pattern recognition. In the training phase, BioID records several characteristic patterns of one person's face and lip movement and assigns them to a class; each class represents one person. During the training process, all patterns are orthogonalized and normalized. The resulting vectors, called adjunct prototypes, are compressed in each class. This leads to one prototype for each class (person), representing all patterns initially stored in the class without any loss of information. This prototype is called the biometric template. The classification process is fairly simple: the system preprocesses a newly recorded pattern, multiplies it with each biometric template, and ranks the obtained scalar products; the highest one (as an absolute value) determines the resulting class. This strategy is known as winner-takes-all. Because this principle always leads to a classification (that is, no pattern is rejected), we also take the second highest scalar product into account: if the difference between the highest and the second highest is smaller than a given threshold, we reject the pattern.
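A minimal sketch of this winner-takes-all rule with rejection (the templates, pattern and threshold are assumed placeholders, not BioID's actual data structures):

    import numpy as np

    def classify(pattern, templates, threshold=0.1):
        scores = np.abs(templates @ pattern)   # |scalar product| per class template
        order = np.argsort(scores)[::-1]
        best, second = scores[order[0]], scores[order[1]]
        if best - second < threshold:          # two classes nearly tie: reject
            return None
        return int(order[0])                   # index of the winning class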

Fig. 6. An overview of the optical preprocessing steps

VII. CLASSIFICATION OF RESULTS
If the two highest scalar products have nearly the same value, the two classes (two people) are indistinguishable, and the classification is insecure. The training process for the optical features of 30 persons with five learning patterns each takes about 15 minutes on an Intel Pentium II.

The classification time is very short (several milliseconds), since there are only 30 scalar products to calculate.

VIII. OPTICAL FLOW TECHNIQUE
BioID collects lip movements by means of an optical-flow technique that calculates a vector field representing the local movement of each image part from one image to the next in the video sequence. For this process, the preprocessing module cuts the mouth area out of the first 17 images of the video sequence. It gathers the lip movements in 16 vector fields, which represent the movement of identifiable points on the lip from frame to frame. Fig. 7 shows the optical-flow vector field of two consecutive images.

Fig. 7. Example of an optical-flow vector field. The lip movement between (a) the two images is defined by (b), the vector field.

IX. REDUCING AMOUNT OF DATA
To reduce the amount of data, we reduce the optical flow resolution by a factor of four through averaging. Finally, a 3D fast Fourier transformation of the 16 vector fields takes place. The result is a one-dimensional lip movement feature vector, which the system uses for training and classification of lip movement. Essentially, we are condensing the detailed movement defined by several vector fields into a single feature vector.

Fig. 5. The facial biometric templates of six classes (six people). Each template consists of several overlaid patterns.

X. FINGER GEOMETRY
Finger geometry biometric is very closely related to hand
geometry. The use of just one or two fingers means more
robustness, smaller devices and even higher throughput.

Two variations of the capture process are used, the first being similar to the hand geometry approach presented above. The second technique requires the user to insert a finger into a tunnel so that three-dimensional measurements of the finger can be made.

XI. PALM RECOGNITION
Palm biometrics is close to finger scanning and in particular to AFIS technology. Ridges, valleys and other minutiae data are found on the palm as with finger images. The main interest in the palm biometrics industry is law enforcement, since latent images ("palm prints") found at crime scenes are as useful as latent fingerprints. Certain vendors are also looking at the access control market and hope to follow in the footsteps of finger scanning.

XII. VOICE RECOGNITION
Voice biometrics examines particularly the sound of the voice. Speech recognition can be defined as a system that recognizes words and phrases that are spoken; voice identification has been derived from the basic principles of speech recognition.
• Speaker recognition focuses on recognizing the speaker, and is accomplished either by speaker verification or speaker identification.
• Speaker verification is a means of accepting or rejecting the claimed identity of a speaker.
• Speaker identification is the process of determining which speaker is present based solely on the speaker's utterance. The speaker identification application evaluates the input against models stored in a database to determine the speaker's identity.
The sound of a human voice is caused by resonance in the vocal tract. The length of the vocal tract and the shape of the mouth and nasal cavities are all important, and the sound is measured as affected by these specific characteristics. The technique of measuring the voice is discussed below.

XIII. ACOUSTIC PREPROCESSING
We record the speech sample using a 22-kHz sampling rate with 16-bit resolution. After channel estimation and normalization, the preprocessing module divides the time signal into several smaller, overlapping windows. For each window, it calculates the cepstral coefficients, which form the audio feature vector. The vector quantifier uses this feature vector for classifying audio patterns.
We use vector quantification to classify the audio sequence. In the system-training phase, the audio preprocessing module analyzes several recordings of a single person's voice. From each voice pattern, it creates a matrix, and the vector quantifier combines these matrices into one matrix. This matrix serves as a prototype (or codebook) that represents the reference voice pattern. Using this codebook, a minimum-distance classifier assigns the current pattern to the class showing the smallest distance.
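A minimal sketch of such minimum-distance codebook matching (the codebook matrices and cepstral feature arrays below are assumed placeholders):

    import numpy as np

    def classify_voice(features, codebooks):
        # features: (n, d) cepstral vectors; codebooks: {person: (m, d) matrix}
        def mean_nearest(cb):
            d = np.linalg.norm(features[:, None, :] - cb[None, :, :], axis=2)
            return d.min(axis=1).mean()   # mean distance to nearest codeword
        return min(codebooks, key=lambda person: mean_nearest(codebooks[person]))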

XIV. IRIS RECOGNITION
The iris is the colored part of the eye: a muscular diaphragm surrounding the pupil that regulates the light entering the eye by expanding and contracting the pupil.
Iris-recognition technology was designed to be less intrusive than retina scans, which often require infrared rays or bright light to get an accurate reading. Scientists also say a person's retina can change with age, while the iris remains intact. And no two iris blueprints are mathematically alike, even between identical twins and triplets.
To record an individual's iris code, a black-and-white video camera running at 30 frames per second zooms in on the eye and grabs a sharp image of the iris. A low-level incandescent light illuminates the iris so the video camera can focus on it, but the light is rarely noticeable and is used strictly to assist the camera. One of the frames is then digitized and stored in a PC database of enrolled users. The whole procedure takes less than a few seconds, and it can be fully computerized using voice prompts and autofocus.
An iris has a mesh-like texture, with numerous overlays and patterns that can be measured by the computer, said Johnston. The iris-recognition software uses about 260 degrees of freedom, or points of reference, to search the data for a match. By comparison, the best fingerprint technology uses only about 60 to 70 degrees of freedom, he noted.
This biometric technology could also be used to secure your computer files: by mounting a webcam on your computer and installing facial recognition software, your face can become the password you use to get into your computer. IBM has incorporated the technology into a screensaver for its A, T and X-series ThinkPad laptops.

A. Advantages
1. Glasses and contact lenses won't interfere with the process.
2. Blind people can also be enrolled.
3. Cataracts, cornea transplants, and surgery won't disturb the process.


XV. SENSOR FUSION
To analyze the classification results, BioID chooses from different strategies to obtain various security levels. Figure 8 shows the available sensor fusion options, that is, the combinations of the three results.

Fig. 8. Available sensor fusion options.

For normal operations, the system uses a two-out-of-three strategy, which requires two of the three biometric features to be classified to an enrolled class (person) without falling below threshold values set in advance. The threshold values apply to the relative distances of the best and second-best scalar products, that is, the two classes that best match, and can be set by the system administrator. For a higher security level, the system can demand agreement of all three traits, a three-out-of-three strategy. With this strategy, the probability that the system will accept an unauthorized person decreases, but one must live with the possibility that it will reject an authorized person. Additional methods make use of the sum of the classification results of all traits; these methods allow us to weight individual traits differently. For example, if the system always correctly identifies a person by lip movement, this feature will be weighted more heavily than the others.
The higher the security level, the higher the false-rejection rate. Thus, system administrators must find an acceptable false-rejection rate without letting the false-acceptance rate increase too much. The security level depends on the purpose of the biometric system.
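A minimal sketch of the two-out-of-three decision (the per-trait results, margins and thresholds are assumed placeholders):

    def fuse(results, min_margin=0.1, required=2):
        # results: {trait: (best_class, margin between best and second-best)}
        votes = {}
        for trait, (cls, margin) in results.items():
            if margin >= min_margin:           # trait passes its own threshold
                votes[cls] = votes.get(cls, 0) + 1
        winner = max(votes, key=votes.get) if votes else None
        return winner if winner is not None and votes[winner] >= required else None

    # e.g., voice disturbed (low margin), face and lips agree -> class 3
    print(fuse({"face": (3, 0.25), "voice": (3, 0.05), "lips": (3, 0.20)}))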

A. Applications
1. Electronic commerce
2. Information security
3. Entitlements authorization
4. Building entry
5. Automobile ignition
6. Forensic and police applications
7. Network access

XVI. FUTURE BIOMETRICS
A system that analyses the chemical make-up of body odor is currently in development. Its sensors are capable of capturing body odor from non-intrusive parts of the body, such as the back of the hand. Each unique human smell consists of different amounts of volatiles, which are extracted by the system and converted into a biometric template.
Even the fastest possible analysis of human DNA takes at least 10 minutes to complete and needs human assistance; thus, it cannot be considered a biometric technology in the sense of being fast and automatic. Additionally, current DNA capture mechanisms, such as taking a blood sample or a swab from inside the mouth, are extremely intrusive compared to other biometric systems. Apart from these problems, DNA as a concept has a lot of potential.
Ear shape biometrics research is based on law enforcement needs to collect ear markings and shape information from crime scenes. It has some potential in access control applications, in similar use to hand geometry. There are no extensive research activities going on with the subject.

XVII. KEYSTROKE DYNAMICS
Keystroke dynamics is a strongly behavioural, learnt biometric. Being behavioural, it evolves significantly as the user gets older. One of the many problems is that highly sophisticated measuring software and statistical calculations have to run in real time if the user's actions are to be constantly verified. A standard keyboard can be used in the simplest cases.

XVIII. VEINCHECK
Veincheck is a technique in which an infrared camera is used to extract the vein pattern from the back of the hand. The pattern is very unique, and the capture method is as user-friendly and non-intrusive as a hand geometry check. Combining the two could result in a very accurate and easy-to-use biometric.

XIX. CONCLUSION
With its multimodal concept, BioID guarantees a high degree of security against falsification and unauthorized access. It also protects the privacy rights of system users, who must speak their name or a key phrase and therefore cannot be identified without their knowledge. To guard against the threat of unauthorized use, users can invalidate their stored reference template at any time, simply by speaking a new word and thus creating a new reference template. In a test involving 150 persons over three months, BioID kept the false-acceptance rate significantly below 1 percent, depending on the security level.

Biometric templates provide a reference that is unique to just one identity. If stored on a central database, this can be too tempting a target for linking different personal data. Solutions with central databases are justified as providing better service to the customer: for example, replacement of a smart card that carries biometric information inside is time-consuming and inconvenient if the biometric data cannot be recovered from anywhere other than the user's body itself.

REFERENCES
[1] R. Frischholz and U. Dieckmann, "BioID: A Multimodal Biometric Identification System," IEEE Computer, vol. 33, no. 2, pp. 64-68, February 2000.
[2] H.A. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, Jan. 1998.
[3] D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge, "Comparing Images Using the Hausdorff Distance," IEEE Trans. Pattern Analysis and Machine Intelligence, Sept. 1993.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.49-52.

Extraction of Scene Text using Mobile Camera


R. Bala Aiswarya (B.E. Student) and S. Naveena (Asst. Professor)
Email: balaais@gmail.com, snaveenaece@gmail.com


Abstract - The extraction of text from scene images is essential for successful scene text recognition. Scene images have non-uniform illumination, complex backgrounds, and text-like objects. The common assumption of a homogeneous text region on a nearly uniform background cannot be maintained in real applications. The proposed text extraction method utilizes the user's hint on the location of the text within the image: a resizable square rim in the viewfinder of the mobile camera, called the focus, is the interface used to help the user indicate the target text. With the hint from the focus, the color of the target text is easily estimated by clustering colors only within the focused section. Image binarization with the estimated color is performed to extract components. After obtaining the text region within the focused section, the text region is expanded iteratively by searching neighbouring regions with the updated text color. This iterative method prevents one text region from being separated into more than one component due to non-uniform illumination and reflection.


I. INTRODUCTION
Scene text recognition is an attempt to recognize text in an image of a natural scene. In this method, the scene text can be directly recognized from a mobile camera: if a user simply snaps a photo of a restaurant signboard, the internet-connected camera can recognize the name and retrieve the relevant information about the restaurant. There are some challenging issues in separating text from camera-captured images. The images usually have non-uniform illumination due to lighting conditions and shadows. The intrinsic properties of scene text, such as the homogeneity of text pixel colors and the distinctiveness of the text pixel color from the background color, are difficult to preserve. Complex layout and interaction of the content and the background are common in outdoor images. When the system scans the whole image for text, non-text pixels surrounding the text can be confused for text because of their similar shapes.


Many approaches for the extraction of text from natural scene images have been proposed [2]. Ezaki et al. [3] proposed text extraction based on connected components: global edge detection, Otsu binarization, connected component extraction, and connected component filtering. Gatos et al. [4] applied binarization techniques to both gray and inverted images and chose the optimum of the two binarization results. But image binarization cannot distinguish different color components having the same luminance, and these approaches produce many missed and false detections on natural scene images. The proposed method extracts text with a handheld camera using a hint on the location of the target text.

By knowing the color of the text region, the text can be separated easily, even from a complex background. In addition, restricting the search area to the focused section can prevent misclassification caused by surrounding non-text regions. The target text region is expanded iteratively by searching neighbouring regions with the updated text color.

II. EXTRACTION OF TEXT USING FOCUS
This process consists of three steps: selection of text color candidates, extraction of connected components, and text verification. The extraction method is applied within the focus and then outside of the focus to detect the targeted text region.

Fig. 1. Overview of scene text extraction.

The system analyzes the small region inside the focus to select the color candidates for the target text. Since the focused area contains only a small number of different colors, it is easier to estimate the target text color than by searching the whole image. As seen in Figure 1, the system extracts the three distinct colors (blue, black, gray) that have the most pixels in the scene. For example, using the


blue color, the character "o" is first extracted within the focus and then expanded to find the neighboring character components "L", "n", and so on. Images with partial degradations due to uneven illumination or reflection are tough to discern. Finally, the extracted text candidate regions undergo a verification process, as in Ezaki [3] and Gatos [4], to find the true text components.

A. Selection of Text Color Candidates
Considering that scene text is designed to be easily visible, it is effective to use a color model close to human perception of colors; the uniform color space HCL (hue, chroma, and luminance) is used. The color similarity is measured using the HCL distance (DHCL) to express the color difference between text and background in HCL color space. The HCL distance between a pixel color (h,c,l) and a seed color (hs,cs,ls) is defined as

(1)

where AL = 0.1 and ACH = 0.2 + (h - hs)/2. AL is the linearization constant for luminance, and ACH is a parameter which helps to reduce the distance between colors having the same hue as the seed color. The HCL distance is more suitable for scene text images because it emphasizes hue differences; hue is robust to illumination changes compared to luminance or RGB color.

Fig. 2. Color distance in RGB and HCL

Figure 2 shows the difference between RGB and HCL distance, in which the gray scale represents the distance from the seed color. The red points in the two images indicate the seed text colors. In the RGB distance of the color image (figure 2(b)), the top and bottom parts differ greatly: the bottom parts of the text are rather close to the background. On the other hand, in the HCL distance of the color image (figure 2(c)), every part of the text region is uniformly darker than the background. A text seed color must be selected from the focused area to apply text extraction in HCL color space. The text color is considered to be one of the distinctive colors inside the focused section, assuming that the text region generally occupies a significant portion of the focused area. The mean shift clustering method is applied in RGB color space to find the seed colors (seedj) from sample pixels:

(2)

where the bandwidth describes the range of the mean shift, seedj is the (r0,g0,b0) color value, and n(r,g,b) is the number of pixels which have the (r,g,b) color value. The mean shift algorithm is a nonparametric clustering technique which does not require prior knowledge of the number of clusters and does not constrain the shape of the clusters. After the mean shift clustering selects a few of the most distinct colors as seed colors, connected component extraction is done with each color seed.
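A generic mean-shift step for seed-color selection might look like the following minimal Python sketch (the radius, iteration count and data layout are assumptions, not the paper's exact formulation of (2)):

    import numpy as np

    def mean_shift_seeds(pixels, seeds, h=30.0, iters=10):
        # pixels: (n, 3) RGB samples; seeds: (k, 3) initial seed colors
        seeds = seeds.astype(float).copy()
        for _ in range(iters):
            for j, s in enumerate(seeds):
                d = np.linalg.norm(pixels - s, axis=1)
                inside = pixels[d < h]              # samples within the radius
                if len(inside):
                    seeds[j] = inside.mean(axis=0)  # shift seed to local mean
        return seeds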

When clustering all pixels inside the focused area, the color of the text boundary pixels can be chosen as a representative color, dropping the true text color. This would bring up unexpected results, such as the text and the background being combined or the text being segmented into small pieces. To prevent this adverse effect of boundary pixels, non-boundary pixels are sampled from the homogeneous areas which have the minimum edge values within 3x3 windows. The edge value is obtained as the maximum magnitude M(x,y) of Sobel edge detection among the R, G, B color channels:

(3)

Since most pixels having the minimum edge values belong to the text region or the background region, the undesirable effect of boundary pixels can be avoided.

Fig. 3. Color distribution of pixel samples

Figure 3 illustrates the color distribution of the sampled pixels of the original image; the sampled non-boundary pixels are shown in figure 3(b). The color distribution of the original image shows that the colors of the text and the background are mixed without distinction (figure 3(c)). On the other hand, the color distribution of the sampled pixels shows that they are well separated (figure 3(d)).


B. Extraction of Connected Components
To extract text components, binarization is applied on a small region and the search is then expanded to its neighbouring areas. An image binarization technique with a seed color is conducted in the HCL color space to classify the area into two regions: one with colors similar to the seed color and the other with different colors. The binarization method can effectively separate the scene text from a complex background when the text pixels have similar HCL color values distinguishable from the background. Furthermore, it tends to extract the region as a single component even when the text color varies smoothly due to reflection or uneven illumination.

Fig. 4. Components expansion

A text candidate, which is a fully connected component in the binarization result, is first extracted as the initial component (figure 4(b)), and a search is made for neighbour components of the same color bounded by a certain distance (figure 4(c)). In the vast majority of cases, a text string is aligned horizontally in the image and, therefore, neighbouring characters are usually found within a certain distance.

The binarization method needs to set a threshold for the border between the two regions. In contrast to the global binarization method, which uses a fixed global threshold, the adaptive binarization method finds the threshold adaptively for each pixel.

Fig. 5. Binarization result: (a) original image, (b) HCL distance in seed color, (c) global binarization result, (d) adaptive binarization.

Figure 5 shows the difference between the two binarization methods on the HCL distance of a color image. Adaptive binarization (figure 5(d)) shows a better result than its counterpart, global binarization (figure 5(c)): the text region is well separated from the background. Five conditions are used to stop the component expansion. These rules take into account certain limits on the height, width, and location of a newly found component (C2) along with the appearance of the existing component (C1).

C. Text Verification
When the component expansion for each color has finished, the text string can be decided easily. Four conditions are used to determine the text string from the component candidates by checking the global consistency of the text string. The system compares all text candidates obtained from each seed color, and then selects the final text region which has the minimum variations under the following rules:
1) The number of components should be greater than or equal to 3.
2) Variation of distance between components.
3) Variation of heights of components.
4) Variation of compactness of components.

Fig. 6. Text verification

III. EXPERIMENTAL RESULT
The evaluation method is that of the ICDAR 2003 competition. It is based on the notions of precision and recall, which are calculated in terms of the number of pixels. Precision p is defined as the number of correct estimates (C) divided by the total number of estimates proposed by the algorithm (E). Recall r is defined as the number of correct estimates (C) divided by the total number of target pixels, i.e., the manually labelled text area (T). The average precision and recall are then computed over all the images in the dataset:

p = C/E,  r = C/T

Method      Precision   Recall
Proposed    0.90        0.51 (0.89)
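These pixel-level measures are straightforward to compute from boolean masks; a minimal sketch (est and gt are assumed detection and ground-truth masks):

    import numpy as np

    def precision_recall(est, gt):
        C = np.logical_and(est, gt).sum()  # correctly detected text pixels
        E = est.sum()                      # all pixels the algorithm proposed
        T = gt.sum()                       # all manually labelled text pixels
        return C / E, C / T                # (p, r)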

The table shows that the proposed method achieved a high precision rate on the test images. The proposed method works successfully even in the case of non-uniform text color. Excessive color change within the same component may cause errors; however, the overall results showed that the target text region is extracted well even from complex backgrounds in most cases.

Fig. 7. Example of text detection result.

IV. CONCLUSION
In this paper, we proposed a text extraction algorithm that utilizes focus information on scene images. In the first step, pixel sampling and a mean shift algorithm are used to choose the text color candidates. Second, all pixels are compared to the target seed color in the HCL distance measure, and the adaptive binarization method classifies them into two regions to form connected components. In the last step, rule-based text verification is used to determine the true text components. By indicating the location of the target text with the focus interface, the proposed method resolves the difficulties of text extraction in natural scene images caused by non-uniform illumination, complex backgrounds, and the existence of text-like objects. Restricting the search area to the focused section prevents misclassifications caused by the surrounding non-text regions. While the current method can only extract a single text line from the image, extension to multiline text is also feasible.

REFERENCES

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.53-56.

Design and Implementation of ZigBee Controller to Save Energy


D. Deepa (B.E. Student), K. Radhika (B.E. Student) and S. Naveena (Asst. Professor, Project Guide)
Email: deepaa.devarajan@gmail.com, radhukuppan@gmail.com, naveenavlsi@gmail.com


Abstract - This paper describes a more efficient home energy management system to reduce power consumption in the home area. We consider a room easily controllable with the IR remote control of a home device. The room has automatic standby power cut-off outlets, a light, and a ZigBee hub. The ZigBee hub has an IR code learning function and learns the IR remote control signal of a home device connected to a power outlet. The power outlets and the light in the room can then be controlled with an IR remote control. A typical automatic standby power cut-off outlet has a waiting time before cutting off the electric power and consumes standby power during that time. To eliminate the waiting time, we turn off the home device and the power outlet simultaneously with an IR remote control through the ZigBee hub. This method actively reduces the standby power. The proposed system provides an easy way to add, delete, and move home devices to other power outlets. When a home device is moved to a different outlet, the energy information of the home device is kept consistent and seamless regardless of the location change. The proposed architecture gives a more efficient energy-saving HEMS.

I. INTRODUCTION
As more and more home appliances and consumer electronics are installed, residential energy consumption tends to grow rapidly. A large number of home devices increases power consumption in two respects, standby power and normal operation power, both of which are proportional to the number of home devices. As a result, operational cost in the home area is also increasing. Standby power is electricity used by appliances and equipment while they are switched off or not performing their primary function. As around 10% of total power is consumed in standby mode, the reduction of standby power is greatly necessary to reduce the electricity cost in the home. Much research has been performed to reduce standby power at the chip, circuit, board, and system levels, and these technical efforts have contributed to the reduction of standby power of home devices. The normal operation power of home devices also matters: home appliances and consumer electronics account for about 27% of home energy consumption. Therefore, products with the ENERGY STAR label are recommended to minimize the cost of operating the products during their lifetime. To reduce the normal operation power of home devices, service-oriented power management technology was proposed for an integrated multi-function home server. Although advanced integrated circuit (IC) chipset and hardware technologies enhance the standby power reduction of home devices, the current energy crisis and greenhouse effect require more efficient energy management technology in the home area. The capability of controlling and power monitoring of home devices is indispensable to achieve efficient home energy management, in addition to the technologies for standby power reduction and normal operation power reduction. Network capability is also needed to connect home devices with each other and to manage them remotely. A home energy management system (HEMS) manages home energy more efficiently using the network: a PLC-based HEMS combining the home network and the Internet has been proposed, and an architecture for a home energy saving system based on energy awareness was proposed for real-time home energy monitoring service and reducing the standby power of home appliances [14]. However, the previous works just monitor information, and their standby power reduction methods are passive. To reduce and manage home energy more efficiently, a more active standby power reduction method is needed, and controlling the power outlets with a remote control should be enabled. A user-friendly and reconfigurable HEMS user interface (UI) is also greatly necessary.

In this paper we propose a more efficient HEMS based on ZigBee communication and infrared remote controls. In Section II, we describe several previous works related to our paper. In Section III, we propose and discuss the more efficient home energy management system. In Section IV, we show the implementation results. Finally, in Section V, we conclude and summarize the paper.
II. RELATED WORKS
A. Automatic Standby Power Cut-Off Outlet
As described in the introduction, various technical researches were conducted to reduce the standby power of home devices. Although home devices consume a very small amount of power in standby mode, it is more efficient to completely cut off the electric power supply to them. An automatic standby power cut-off outlet can contribute to the reduction of home energy cost. Fig. 1 shows the architecture of the automatic standby power cut-off outlet and its state transition diagram. The microcontroller is supplied with


electric power through the AC/DC circuit and includes a ZigBee radio frequency (RF) module to communicate with the ZigBee controller. ZigBee is a low-power and low-cost wireless personal area network (WPAN) standard based on IEEE 802.15.4, used to configure wireless sensor networks. The monitoring circuit measures the power consumption and converts it into a voltage. The microcontroller digitizes the voltage and calculates the consumed power. The power outlet has four kinds of state: boot, on, normal, and off. After booting, the power outlet goes to the on state. After the guard time elapses, the normal mode starts and the microcontroller monitors the consumed power. When the measured power stays below the threshold value for the predetermined time, the microcontroller decides that the connected home device is in standby mode and turns off the relay to cut off the power supply to the connected home device; it then goes to the off state. When it receives a wake-up command from the ZigBee controller, it goes back to the on state. This state transition repeats continuously.
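A minimal sketch of this state machine (the thresholds, timings and the hardware callbacks read_power, cut_relay, and wakeup_pending are assumed placeholders):

    import time

    STANDBY_W, HOLD_S, GUARD_S = 2.0, 30.0, 5.0

    def run(read_power, cut_relay, wakeup_pending):
        state, below_since, t0 = "on", None, time.time()
        while True:
            t = time.time()
            if state == "on" and t - t0 > GUARD_S:
                state = "normal"                       # guard time elapsed
            elif state == "normal":
                if read_power() < STANDBY_W:           # device looks idle
                    below_since = below_since or t
                    if t - below_since > HOLD_S:
                        cut_relay(False)               # cut off AC power
                        state, below_since = "off", None
                else:
                    below_since = None
            elif state == "off" and wakeup_pending():  # ZigBee wake-up command
                cut_relay(True)
                state, t0 = "on", time.time()
            time.sleep(0.5)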

Fig. 1. Automatic standby power cut-off outlet: (a) architecture, (b) state transition diagram.

B. ZigBee Controller and Remote Control
To control and wake up the power outlets, a ZigBee controller is needed. Fig. 2 shows the configuration of the ZigBee controller and the connected end devices. Each button is assigned to one of the power outlets, and a user can wake up the target power outlet by pressing the assigned button. To wake up the power outlet without pressing the button, the ZigBee controller has an IR code learning functionality: each button of the ZigBee controller can be assigned to a button of an IR remote control. A user can then control and wake up the power outlet without coming close to the ZigBee controller by using the IR remote control.

Fig. 2. ZigBee controller connected with power outlets.

C. Home Energy Management System
Energy monitoring systems can influence residents by informing them of real-time home energy usage with a graphical interface. If the breakdown of energy usage of each home appliance and consumer electronics device is displayed on a wall pad, a computer, or a television, residents can make an effort to reduce home energy use. Furthermore, web-based monitoring and control systems have been developed to enable users to view home energy data and control home devices remotely through the Internet. A recent study found that a 10% energy saving was achieved with a monitoring system providing real-time energy information. The home energy information UI on the web illustrates the daily and weekly energy consumption of both the total home and each home device; other examples of web-based home energy management systems are provided by Internet companies. A user can access his own HEMS UI via a smart phone and is encouraged to control home devices to reduce home energy usage, because the user can see the energy usage of both the whole home and each home device simultaneously.

III. PROPOSED HOME ENERGY MANAGEMENT SYSTEM
The home has two rooms, and each room is equipped with one dimming light, two power outlets, and one ZigBee hub. The dimming light and the power outlets include a power measurement function to measure the power consumption and the capability of ZigBee communication. The ZigBee hub is connected to the dimming light and the power outlets, and the home server communicates with the two ZigBee hubs. Through the configured ZigBee network, the home server can monitor and control the lights and the power outlets. When a home device is connected to a power outlet, a user can register the home device in the HEMS UI of the home server by assigning the outlet number to it; the HEMS can then monitor the energy usage of the home device according to the information from the corresponding power outlet. As a result, the HEMS on the home server can monitor and control the lights and the home devices. It displays hourly, daily, weekly, and monthly energy usage of each home device and encourages users to make efforts to save energy. The HEMS can also display the real-time active power consumption and the accumulated power consumption of each home device. A user can figure out which home appliance is unnecessarily turned on from the real-time active power consumption, and how much power each home appliance consumes in the current month from the accumulated power consumption. A user can also analyze the energy usage of each room through the ZigBee hub, access the HEMS through the Internet from a remote area, and turn off unnecessarily turned-on home devices.


When a user moves the home device to another power outlet, it is necessary to change the assignment of the home device. The user can change it in the user-friendly HEMS UI by clicking buttons a few times. The accumulated energy usage information of the home device is measured seamlessly and kept consistent regardless of the location change. Fig. 3 shows the consistent accumulation of energy usage information across a change of location.

Fig. 3. Consistent accumulation of energy usage information.

The power outlet has the automatic standby power cut-off function: it periodically monitors the power consumption of the connected home device. As soon as the monitored power consumption of the connected home device stays below the threshold value for the predetermined period, the power outlet automatically cuts off the AC power to reduce the standby power consumption. The ZigBee hub has several buttons and an IR receiver. The buttons are assigned to the power outlets and the light, and the hub's IR learning function enables the buttons of an IR remote control to correspond to the power outlets and the light. A user can control the light and the power outlets with both the buttons of the ZigBee hub and the IR remote control. When a user turns off a television with a remote control, the automatic standby power cut-off outlet waits for the predetermined period before transiting to the off state; unfortunately, it consumes electric power during that period. To reduce the power consumption during this decision time, we modified the ZigBee hub firmware. When a user presses the power button of an IR remote control, the IR signal can be simultaneously received by the ZigBee hub, because the emission angle of a remote control is wide or the reflection of the IR light is strong enough to reach the ZigBee hub. When the ZigBee hub receives a power button signal from a remote control and the monitored power consumption of the corresponding outlet is below the threshold, it decides that the user has turned off the home device and commands the power outlet to cut off the AC power. If the ZigBee hub does not receive the IR signal, the outlet operates according to the typical automatic standby power cut-off algorithm. This method actively reduces standby power consumption by turning off a home device and the power outlet simultaneously. Fig. 4 shows the firmware process flow chart of the ZigBee hub for controlling the power outlet. In addition, a user can command the home server, through the ZigBee hub, to display the power consumption information of the room and then check it at the home server.

Fig. 4. The firmware process of the ZigBee hub to control the power outlet.
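The modified hub rule reduces to a small decision; a minimal sketch (the power code, the outlet object and the threshold are assumed placeholders):

    STANDBY_W = 2.0  # assumed standby threshold in watts

    def on_ir_code(code, outlet, power_code="POWER"):
        # If the remote's power button is seen while the outlet already reads
        # below the standby threshold, cut AC immediately instead of waiting.
        if code == power_code and outlet.read_power() < STANDBY_W:
            outlet.cut_ac()
            return True
        return False  # otherwise fall back to the normal timer-based cut-off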

IV. IMPLEMENTATION RESULTS
Fig. 5 shows the implemented power outlet with a power measurement function and a ZigBee communication module, together with the ZigBee hub. The power outlet uses an electric power metering chipset for compactness instead of analog metering. It is composed of an AC/DC conversion part, a current measuring part, a voltage divider, a serial interface, and a power metering IC, which measures the power consumption reliably by multiplying the scaled voltage and the converted current through digital signal processing. The ZigBee communication module has one microcontroller with a ZigBee RF module and a 2.4GHz antenna; the power outlet communicates with the ZigBee communication module via a serial interface. The ZigBee hub has an AC/DC conversion part, six buttons, an IR receiver, a 2.4GHz antenna, and a microcontroller with a ZigBee RF module. The six buttons are used to assign the power outlets or the light, and the IR receiver acts as the detector for an IR remote control. A user can control the power outlet and the light with an IR remote control.

Fig. 3. Consistent accumulation of energy usage information.

The power outlet has the automatic standby power cut-off function. The power outlet periodically monitors the power consumption of the connected home device. As soon as the monitored power consumption of the connected home device stays below the threshold value for the predetermined period, the power outlet automatically cuts off the AC power to reduce the standby power consumption. The zigbee hub has several buttons and an IR receiver. The buttons are assigned to the power outlets and the light. Its IR learning function enables the buttons of an IR remote control to correspond to the power outlets and the light. A user can control the light and the power outlets with both the buttons of the zigbee hub and the IR remote control. When a user turns off a television with a remote control, the automatic standby power cut-off outlet waits for the predetermined period before transiting to the off state. Unfortunately, it consumes electric power during that period. To reduce the power consumption during the decision time, we modified the zigbee hub firmware. When a user presses the power button of an IR remote control, the IR signal can be simultaneously received by the zigbee hub, because the emission angle of a remote control is wide or the reflection of IR light is strong enough to reach the zigbee hub. When the zigbee hub receives a power button signal of a remote control and the monitored power consumption of that outlet is below the threshold, it decides that a user turned off a home device and commands the power outlet to cut off the AC power.
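To make this decision flow concrete, the following minimal Python sketch mirrors the hub-side logic described above. The helper functions, the threshold value and the button-to-outlet mapping are illustrative assumptions for this sketch, not the authors' firmware (which runs on a microcontroller).

    STANDBY_THRESHOLD_W = 5.0              # assumed standby-power threshold
    IR_BUTTON_TO_OUTLET = {"tv_power": 1}  # learned IR code -> outlet ID

    def read_outlet_power(outlet_id):
        # stub: would query the outlet's metering IC over zigbee
        return 2.0

    def send_cutoff_command(outlet_id):
        # stub: would send a zigbee "cut AC power" command to the outlet
        print("outlet %d: AC power cut off" % outlet_id)

    def on_ir_power_button(ir_code):
        """Called when the hub's IR receiver detects a learned power button."""
        outlet_id = IR_BUTTON_TO_OUTLET.get(ir_code)
        if outlet_id is None:
            return                          # not a learned code; ignore
        if read_outlet_power(outlet_id) < STANDBY_THRESHOLD_W:
            # the device was just switched off: cut AC power immediately,
            # skipping the predetermined waiting period of the outlet
            send_cutoff_command(outlet_id)
        # otherwise the outlet falls back to its periodic cut-off algorithm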

The HEMS UI of the home server provides several captured displays. Our proposed HEMS provides a mapping function between the power outlets and home devices by use of a 4-byte network node ID. It also provides time-based energy monitoring, time-based energy usage queries, and time-based statistics: the total power consumption trend over time, the corresponding price and the quantity of CO2, and the result of an energy usage query over a specified period. A pie chart illustrates the energy usage ratio of each home device, and a table shows each device's actual energy usage. With the help of these various HEMS dashboards, a user can figure out detailed home energy usage information.



A user can also determine which home device consumes the most power. The proposed HEMS UI provides an easy way to add, delete, and move home devices to other power outlets. It is easily reconfigurable and user-friendly. Because our HEMS is a web-based system, a user can access the HEMS through an internet web browser on a smart phone or a laptop computer. A user can monitor home energy usage and control home devices anywhere and anytime. The HEMS helps a user make active efforts to reduce home energy consumption and to decide what device to purchase and how to use it.

REFERENCES
[1] IEA, "Fact Sheet: Standby Power Use and the IEA 1 Watt Plan", Apr. 2007.
[2] Apinunt Thanachayanont and Silar Sirimasakul, "Ultra-Low-Power Differential ISFET/REFET Readout Circuit", ETRI Journal, vol. 31, no. 2, pp. 243-245, Apr. 2009.
[3] Mitsuru Hiraki, Takayasu Ito, Atsushi Fujiwara, Taichi Ohashi, Tetsuro Hamano, and Takaaki Noda, "A 63-uW Standby Power Microcontroller With On-Chip Hybrid Regulator Scheme", IEEE Journal of Solid-State Circuits, vol. 37, no. 5, pp. 605-611, May 2002.
[4] Jaesung Lee, "On-Chip Bus Serialization Method for Low-Power Communications", ETRI Journal, vol. 32, no. 4, pp. 540-547, Aug. 2010.
[5] Kurt Schweiger and Horst Zimmermann, "High-Gain Double-Bulk Mixer in 65 nm CMOS with 830 uW Power Consumption", ETRI Journal, vol. 32, no. 3, pp. 457-459, Jun. 2010.
[6] Calhoun B.H., Chandrakasan A.P., "Standby Power Reduction Using Dynamic Voltage Scaling and Canary Flip-Flop Structures", IEEE Journal of Solid-State Circuits, vol. 39, no. 9, pp. 1504-1511, Sep. 2004.
[7] Bo-Teng Huang, Ko-Yen Lee and Yen-Shin Lai, "Design of a Two-Stage AC/DC Converter with Standby Power Losses Less Than 1 W", Proceedings of Power Conversion Conference 2007, pp. 1630-1635, Apr. 2007.
[8] Joon Heo et al., "Design and Implementation of Control Mechanism for Standby Power Reduction", IEEE Trans. on Consumer Electronics, vol. 53, no. 1, pp. 179-185, Feb. 2008.
[9] Cheng-Hung Tsai, Ying-Wen Bai, Hao-Yuan Wang, and Ming-Bo Lin, "Design and Implementation of a Socket with Low Standby Power", IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1558-1565, Aug. 2009.
[10] U.S. Department of Energy, "Energy Saver Booklet: Tips on Saving Energy & Money at Home", May 2009.
[11] Jinsoo Han, Intark Han, and Kwang-Roh Park, "Service-Oriented Power Management for an Integrated Multi-Function Home Server", IEEE Trans. on Consumer Electronics, vol. 53, no. 1, pp. 204-208, Feb. 2007.
[12] Young-Sung Son and Kyeong-Deok Moon, "Home Energy Management System Based on Power Line Communication", Proceedings of the 28th International Conference on Consumer Electronics (ICCE), 2010.

Fig. 5. Implemented boards: (a) power outlet with power measurement function and ZigBee communication module; (b) ZigBee hub

V. CONCLUSION
We proposed a HEMS based on zigbee communication and infrared remote controls. The configured zigbee network is composed of the home server, the zigbee hub, and the power outlets and light. The home server is the central control unit. The power outlets and the light are the sensor nodes. The home server can manage the power outlets and the light through the zigbee hub. The zigbee hub with IR code learning function enables a user to control the power outlets and the light with an IR remote control. Furthermore, we actively reduce standby power consumption by turning off a home device and the power outlet simultaneously through the zigbee hub. This method eliminates the waiting period of a typical automatic power cut-off outlet. The proposed HEMS UI provides various kinds of dashboards for the user to figure out detailed home energy usage information. The proposed HEMS UI provides an easy way to add, delete, and move home devices to other power outlets. When a home device is moved to a different outlet, the energy information of the home device is kept consistent and seamless across the location change. We implemented the power outlet with a power measuring function and the zigbee hub with six buttons and an IR learning functionality. The web-based HEMS was implemented and could be accessed through a web browser. These implementation results showed the feasibility of our proposed HEMS. The proposed HEMS is expected to contribute to reducing home energy usage in the future.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.57-59.

DCT Based Digital Image Watermarking in SVD Domain


A.M.V.N. Maruti1 and N. Chandra Sekhar2
Assistant Professor, Dept of ECE, Khammam Institute of Technology & Sciences, Khammam, A.P.
Email: 1murali_mtechdsp@yahoo.co.in, 2chandu9903@gmail.com
The cover object is only used for the stego-object generation and is then discarded. The hope of the system is that the stego-object will be close enough in appearance and statistics to the original that the presence of information will go undetected. As mentioned previously, for the purposes of this report we will assume the stego-object to be a digital image, keeping in mind that the concepts may be extended to other cover objects as well.

Abstract: Information hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number, or even help to prevent unauthorized copying directly. Military communications systems make increasing use of traffic security techniques which, rather than merely concealing the content of a message using encryption, seek to conceal its sender, its receiver or its very existence. Similar techniques are used in some mobile phone systems and in schemes proposed for digital elections. Criminals try to use whatever traffic security properties are provided, intentionally or otherwise, in the available communications systems, and police forces try to restrict their use. However, many of the techniques proposed in this young and rapidly evolving field can trace their history back to antiquity, and many of them are surprisingly easy to circumvent.

A. Evaluation Techniques of Watermarking Systems


Capacity and speed can be easily evaluated using the
number of bits per cover size, and computational
complexity, respectfully. The systems use of keys is more
or less by definition, and the statistical imperceptibility by
correlation between the original images and watermarked
counterpart.

Keywords: Watermarking, DCT, SVD, attacks

I. DEFINITION OF TERMS

Cryptography can be defined as the processing of information into an unintelligible (encrypted) form for the purposes of secure transmission. Through the use of a key, the receiver can decode the encrypted message (decryption) to retrieve the original message.

Steganography improves on this by hiding the very fact that a communication occurred. The message m is embedded into a harmless message c, defined as the cover-object, generally with the use of a key k that is defined as the stego-key; the result is the stego-object s. Ideally, the stego-object is indistinguishable from the original message c, appearing as if no other information has been encoded.

Table 1. Summary of Possible Perceptibility Assurance Levels

Level of Assurance    Criteria
Low                   Peak Signal-to-Noise Ratio (PSNR)
Moderate              Slightly perceptible but not annoying
High                  Not perceptible in comparison with original under studio conditions
High                  Survives evaluation by large panel of persons under the strictest of conditions

It may also be noted that the only rigorously defined metric above is PSNR. The main reason for this is that no good rigorously defined metrics have been proposed that take the effect of the Human Visual System (HVS) into account. PSNR is provided only to give a rough approximation of the quality of the watermark. Further levels of evaluation rely strictly on observation under varied conditions, as shown in Table 1.
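Since PSNR is the one metric above with a precise definition, a short illustrative computation may be useful. The following is a standard PSNR routine in Python (numpy assumed), included here as a sketch rather than code from this paper.

    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two equally sized images."""
        a = np.asarray(original, dtype=np.float64)
        b = np.asarray(watermarked, dtype=np.float64)
        mse = np.mean((a - b) ** 2)        # mean squared error
        if mse == 0:
            return float("inf")            # identical images
        return 10.0 * np.log10(peak ** 2 / mse)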

II. DCT-SVD DOMAIN WATERMARKING


In two-dimensional DWT, each level of decomposition
produces four bands of data denoted by LL, HL, LH, and

Fig. 1. Illustration of a Steganographic System


HH. The LL sub-band can further be decomposed to


obtain another level of decomposition. In two-dimensional DCT, we apply the transformation to the
whole image but need to map the frequency coefficients
from the lowest to the highest in a zigzag order to 4
quadrants in order to apply SVD to each block. All the
quadrants will have the same number of DCT
coefficients. For example, if the cover image is 512x512,
the number of DCT coefficients in each block will be
65,536. To differentiate these blocks from the DWT
bands, we will label them B1, B2, B3, B4. This process is
depicted in Figure 2.
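The mapping just described can be sketched in a few lines of Python. The zigzag ordering below is one common variant, and scipy is assumed for the DCT, so this is an illustration of the idea rather than the authors' exact procedure.

    import numpy as np
    from scipy.fftpack import dct

    def dct_four_blocks(image):
        """Full-image 2-D DCT, zigzag-ordered into 4 equal blocks B1..B4."""
        c = dct(dct(np.asarray(image, float), axis=0, norm='ortho'),
                axis=1, norm='ortho')
        h, w = c.shape
        # one common zigzag variant: traverse anti-diagonals, alternating
        idx = sorted(((i, j) for i in range(h) for j in range(w)),
                     key=lambda p: (p[0] + p[1],
                                    p[1] if (p[0] + p[1]) % 2 else p[0]))
        coeffs = np.array([c[i, j] for i, j in idx])
        n = coeffs.size // 4               # e.g. 65,536 for a 512x512 image
        side = int(np.sqrt(n))             # assumes n is a perfect square
        return [coeffs[k * n:(k + 1) * n].reshape(side, side) for k in range(4)]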

A. Watermark Embedding

Fig. 2. Mapping of DCT coefficients into 4 blocks

In pure DCT-based watermarking, the DCT coefficients


are modified to embed the watermark data. Because of the conflict between robustness and transparency, the modification is usually made in the middle frequencies, avoiding the lowest and highest bands. Every real matrix $A$ can be decomposed into a product of three matrices, $A = U \Sigma V^T$, where $U$ and $V$ are orthogonal matrices, $U^T U = I$, $V^T V = I$, and $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots)$. The diagonal entries of $\Sigma$ are called the singular values of $A$, the columns of $U$ are called the left singular vectors of $A$, and the columns of $V$ are called the right singular vectors of $A$. This decomposition is known as the Singular Value Decomposition (SVD) of $A$, and can be written as

$A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$

where $r$ is the rank of matrix $A$. It is important to note that each singular value specifies the luminance of an image layer, while the corresponding pair of singular vectors specifies the geometry of the image.

B. Watermark Extraction

The DCT coefficients with the highest magnitudes are


found in quadrant B1, and those with the lowest
magnitudes are found in quadrant B4. Correspondingly,
the singular values with the highest values are in quadrant
B1, and the singular values with the lowest values are in
quadrant B4. The largest singular values in quadrants B2,
B3, and B4 have the same order of magnitude. So, instead
of assigning a different scaling factor for each quadrant,
we used only two values: One value for B1, and a smaller
value for the other three quadrants.

In SVD-based watermarking, several approaches are


possible. A common approach is to apply SVD to the
whole cover image, and modify all the singular values to
embed the watermark data. An important property of
SVD-based watermarking is that the largest of the
modified singular values change very little for most types
of attacks. We will combine DCT and SVD to develop a
new hybrid non-blind image watermarking scheme that is
resistant to a variety of attacks. The proposed scheme is
given by the following algorithm. Assume the size of the visual watermark is n x n, and the size of the cover image is 2n x 2n.
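One common way to realize the singular-value modification described above is sketched below in Python (numpy assumed). The additive rule and the per-quadrant scaling factor alpha follow the discussion above, but the exact embedding rule of the proposed scheme may differ.

    import numpy as np

    def embed_in_block(block, watermark, alpha):
        """Additively embed the watermark's singular values into those of one
        DCT-coefficient block (one common SVD-domain rule; the paper's exact
        embedding rule may differ)."""
        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        sw = np.linalg.svd(watermark, compute_uv=False)
        s_marked = s + alpha * sw          # modify the singular values
        return U @ np.diag(s_marked) @ Vt

In line with the two scaling factors discussed above, a larger alpha would be used for quadrant B1 and a smaller one for B2, B3 and B4.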

III. EXPERIMENTAL RESULTS

Evaluation of the algorithm is carried out on various images from available databases. The following is one of the experimental results.


Fig. 3

REFERENCES
[1] Xiang-Gen Xia, Charles G. Boncelet and Gonzalo R. Arce, "Wavelet transform based watermark for digital images", Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716.
[2] M. Kutter and F. A. P. Petitcolas, "Fair evaluation methods for image watermarking systems", Signal Processing Laboratory, Swiss Federal Institute of Technology, Ecublens, 1015 Lausanne, Switzerland.
[3] M. A. Suhail, M. S. Obaidat, S. S. Ipson and B. Sadoun, "Simulation Study of Digital Watermarking in JPEG and JPEG 2000 Schemes".
[4] R. G. van Schyndel, A. Z. Tirkel and I. D. Svalbe, "Key Independent Watermark Detection", Department of Physics, Monash University, Clayton, 3168, Australia.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.60-65.

Realtime Navigation Monitoring System


T. Umamaheswari
Associate Professor and Head Of Department, ECE Department, Rajalakshmi Institute of Technology.
Email: t.umasaravanan@gmail.com
Such networks are often deployed in resource-constrained environments, for instance with battery-operated nodes running continuously. These constraints dictate that sensor network problems are best approached in a holistic manner, by jointly considering the physical, networking, and application layers and making major design trade-offs across the layers.

Abstract: A sensor network is a set of small autonomous systems, called sensor nodes, that cooperate to solve at least one common application. Sensor networks involve sensor
nodes that are very small in size, low in cost and have short
battery-life. One of the critical Wireless Sensor Network
applications is localization and tracking mobile sensor
nodes. ZigBee is a low rate, low power Wireless Networking
Standard that allows it to be widely deployed in Wireless
Monitoring and Control Applications. The proposed project
aims at tracking a mobile node using ZigBee based Wireless
Sensor Network for a limited geographical area especially in
an indoor environment. The mobile node is given to the
navigator, the end devices are placed randomly in key spot
areas and finally the location of the mobile node is tracked
and monitored in the base station. The handheld device
broadcasts its packets to all the end devices in the network
through common air interface. The end device which
receives these packets at the earliest in turn communicates
with the base station to display the location. The proposed
system overcomes the drawbacks of existing GPS monitoring
system in terms of cost and inaccessibility. The system is
implemented using Embedded C programming and VB 6.0
as front end tool.

Sensor networks typically refer to a collection of nodes


that have sensing, processing, storage, and wireless
communication capabilities. A wireless sensor network consists of a large number of tiny, disposable and autonomous devices called motes. A sensor network can be either deterministically or randomly deployed in a rugged environment. The motes work in such a manner that their own energy consumption is minimized while the life of the WSN is maximized. In a WSN, the main function of
the mote is information gathering but they have to act as
repeaters to support multi-hop routing from distant node
to sink. They undergo sleep-awake cycle, data
integration/fusion, synchronization etc. Monitoring or
sensing a process and sending the monitored information
to a remote place via sink are the two primary jobs of the
sensor network.

I. INTRODUCTION
Recent technological advances allow us to envision a future where large numbers of low-power, inexpensive sensor devices are densely embedded in the physical environment, operating together in a wireless network. Wireless sensor networks promise an unprecedented fine-grained interface between the virtual and physical worlds.
They are one of the most rapidly developing new
information technologies, with applications in a wide
range of fields including industrial process control,
security and surveillance, environmental sensing, and
structural health monitoring. Wireless sensor networks
have recently come into prominence because they hold
the potential to revolutionize many segments of our
economy and life, from environmental monitoring and
conservation, to manufacturing and business asset
management, to automation in the transportation and
health-care industries.

Wireless sensor networks have been an active research


topic for around a decade. The recent release of standards
in the field, such as IEEE 802.15.4 and ZigBee, brought
the technology out of research labs and stimulated the
development of numerous commercial products. Moving
from early research in military applications, sensor
networks now are widely deployed in diverse applications
including home automation, building automation, and
utility metering. Although many early sensor networks
used proprietary routing algorithms and RF technology,
most recent products use standards-based networking and
RF solutions.
II. APPLICATIONS OF WSN
The applications for WSN are many and varied. They are
used in commercial and industrial applications to monitor
data that would be difficult or expensive to monitor using
wired sensors. They could be deployed in wilderness areas, where they would remain for many years (monitoring some environmental variable) without the need to recharge or replace their power supplies.

The design, implementation, and operation of a sensor network require the confluence of many disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms.


They could form a perimeter about a property and monitor the progression of intruders (passing information from one
node to the next). Typical applications of WSN include
monitoring, tracking, and controlling. Some of the
specific applications are habitat monitoring, object
tracking, nuclear reactor controlling, fire detection, traffic
monitoring, etc. In a typical application, a WSN is
scattered in a region where it is meant to collect data
through its sensor nodes.

D. Military Surveillance
The focus of surveillance missions is to acquire and verify
information about enemy capabilities and positions of
hostile targets. Such missions often involve a high element of risk for human personnel and require a high degree of stealthiness. Hence, the ability to deploy
unmanned surveillance missions, by using wireless sensor
networks, is of great practical importance for the military.
Because of the energy constraints of sensor devices, such
systems necessitate an energy-aware design to ensure the
longevity of surveillance missions.

Solutions proposed recently for this type of system show promising results through simulations.

A. Area Monitoring

Area monitoring is a common application of WSN. In


area monitoring, the WSN is deployed over a region
where some phenomenon is to be monitored. As an
example, a large quantity of sensor nodes could be
deployed over a battlefield to detect enemy intrusion
instead of using land mines. When the sensors detect the
event being monitored (heat, pressure, sound, light,
electro-magnetic field, vibration, etc), the event needs to
be reported to one of the base stations, which can take
appropriate action (e.g., send a message on the internet or
to a satellite). Depending on the exact application,
different objective functions will require different data-propagation strategies, depending on factors such as the need for real-time response, redundancy of the data (which can be tackled via data aggregation techniques), the need for security, etc.

E. Fire Rescue Applications


Wireless sensor network is also very promising for fire
rescue applications. The four requirements of this specific application include accountability of firefighters, real-time monitoring, intelligent scheduling and resource allocation, and web-enabled service and integration.
Based on these requirements and the characteristics of
wireless sensor networks, several research challenges in
terms of new protocols as well as hardware and software
support are examined. It can be concluded that wireless
sensor network is a very powerful and suitable tool to be
applied in this application. Figure 1 shows a sample WSN
and some of its applications.

B. Traffic Monitoring
Traffic surveillance is currently performed with inductive
loop detectors and video cameras for the efficient
management of public roads. An alternative traffic
monitoring system using wireless sensor networks can
offer high accuracy and lower cost. The system consists
of two parts: wireless sensor network and access point.
Traffic information is generated at the sensor nodes and
then transferred to the access point over radio. Use of the
prototype called Traffic-Dot achieves vehicle detection
accuracy of 97% and accurately measures speed with a
node pair.
C. Object Tracking
WSNs are an efficient solution for applications that involve deep monitoring of a deployment environment. The object-tracking application demonstrates a real-world use that employs a WSN to track objects and communicate their information. It is an event-driven application implemented in Java, built on top of the Crossbow MSP 410 wireless sensor system. The algorithm processes the data returned from the WSN detection signals and tracks
that demonstrate suitable node topologies for the system.
The evaluation of the object-tracking system is performed
by conducting a number of indoor and outdoor
experiments [2].

Fig. 1. Wireless Sensor Network and its applications

III. ZIGBEE PROTOCOL


ZigBee is a new technology now being deployed for wireless sensor networks. Back in 1999, we saw the need for an organization with a mission to define a complete, open, global standard for reliable, cost-effective, low-power, wirelessly networked products addressing monitoring and control. While there were other standards that addressed higher data rates or battery-powered networks for a very small number of devices, none of these truly met the needs of this market.

IEEE 802.15.4 details the specification of the PHY and MAC by offering building blocks for different types of networking known as star, mesh, and cluster tree. Network routing schemes are designed to ensure power conservation and low latency through guaranteed time slots. A unique feature of the ZigBee network layer is communication redundancy, eliminating single points of failure in mesh networks. Key features of the PHY include energy and link quality detection, and clear channel assessment for improved coexistence with other wireless networks [3].

Instead, what was needed was something focused on long battery life, large networks, low data rates and a standardized protocol. Thus in 2002, ZigBee was born. One may ask why a networking standard would encourage low data rates. The answer is simple: because ZigBee is all about wireless monitoring and control. While most wireless standards are striving to go faster, ZigBee aims for low data rates. While other wireless protocols add more and more features, ZigBee aims for a tiny stack that fits on 8-bit microcontrollers. While other wireless technologies
look to provide the last mile to the Internet or deliver
streaming high-definition media, ZigBee looks to control
a light or send temperature data to a thermostat. While
other wireless technologies are designed to run for hours
or perhaps days on batteries, ZigBee is designed to run for
years. And while other wireless technologies provide 12
to 24 months of shelf life for a product, ZigBee products
can typically provide decades or more of use. Wireless
communication is inherently unreliable. It's all because
radio waves are just that: waves. They run into
interference patterns, can be blocked by metal, water or a
lot of concrete, and vary depending on many complex
factors including antenna design, power amplification,
and even weather conditions. ZigBee is designed to
overcome these and hence we get these specifications.

B. IEEE 802.15.4 WPAN
The main features of this standard are network flexibility,
low cost, very low power consumption, and low data rate
in an ad hoc self-organizing network among inexpensive
fixed, portable and moving devices. It is developed for
applications with relaxed throughput requirements which
cannot handle the power consumption of heavy protocol
stacks.
a) Components of WPAN
A ZigBee system consists of several components. The
most basic is the device. A device can be a full-function
device (FFD) or reduced-function device (RFD). A
network shall include at least one FFD, operating as the
PAN coordinator. The FFD can operate in three modes: a
personal area network (PAN) coordinator, a coordinator
or a device. An RFD is intended for applications that are
extremely simple and do not need to send large amounts
of data. An FFD can talk to RFDs or FFDs, while an RFD can only talk to an FFD.

A. ZigBee and IEEE 802.15.4


ZigBee technology is a low data rate, low power
consumption, low cost, wireless networking protocol
targeted towards automation and remote control
applications. IEEE 802.15.4 committee started working
on a low data rate standard a short while later. Then the
ZigBee Alliance and the IEEE decided to join forces and
ZigBee is the commercial name for this technology.
ZigBee is expected to provide low-cost and low-power connectivity for equipment that needs battery life as long as several months to several years but does not require data transfer rates as high as those enabled by Bluetooth. In addition, ZigBee can be implemented in star networks larger than is possible with Bluetooth. ZigBee-compliant wireless devices are expected to transmit over 10-75 meters, depending on the RF environment and the power output required for a given application, and will operate in the unlicensed RF bands worldwide (2.4 GHz globally, 915 MHz in the Americas and 868 MHz in Europe). The data rate is 250 kbps at 2.4 GHz, 40 kbps at 915 MHz and 20 kbps at 868 MHz. The IEEE and the ZigBee Alliance have been working
closely to specify the entire protocol stack. IEEE 802.15.4
focuses on the specification of the lower two layers of the
protocol (physical and data link layer). On the other hand,
ZigBee Alliance aims to provide the upper layers of the
protocol stack (from network to the application layer) for
interoperable data networking, security services and a
range of wireless home and building control solutions,
provide interoperability compliance testing, marketing of
the standard, advanced engineering for the evolution of
the standard. This assures consumers that they can buy products from different manufacturers with confidence that the products will work together.

C. Network Topologies
Topology is the study of the arrangement or mapping of the elements of a network, especially the physical and logical interconnections between the nodes. Fig. 2 shows the three types of topology that ZigBee supports: star topology, peer-to-peer topology, and cluster-tree topology.

Fig. 2. Topology Models

a) Star Topology
In the star topology, the communication is established
between devices and a single central controller, called the
PAN coordinator. The PAN coordinator may be mains
powered while the devices will most likely be battery
powered. Applications that benefit from this topology

include home automation, personal computer (PC)


peripherals, toys and games. After an FFD is activated for the first time, it may establish its own network and become the PAN coordinator. Each star network chooses a PAN identifier which is not currently used by any other network within the radio sphere of influence. This allows each star network to operate independently.

D. ZigBee Protocol Stack


ZigBee builds upon the physical layer and medium access control defined in IEEE standard 802.15.4 (2003 version) for low-rate WPANs. The specification goes on to complete the standard by adding four main components: network layer, application layer, ZigBee device objects (ZDOs) and manufacturer-defined application objects, which allow for customization and favor total integration. Figure 3 explains the ZigBee protocol stack in detail.

b) Peer-To-Peer Topology
In peer-to-peer topology, there is also one PAN
coordinator. In contrast to star topology, any device can
communicate with any other device as long as they are in
range of one another. A peer-to-peer network can be ad
hoc, self-organizing and self-healing. Applications such
as industrial control and monitoring, wireless sensor
networks, asset and inventory tracking would benefit from
such a topology. It also allows multiple hops to route
messages from any device to any other device in the
network. It can provide reliability by multipath routing.

E. IEEE 802.15.4 Physical Layer


The PHY provides two services: the PHY data service
and PHY management service interfacing to the physical
layer management entity (PLME). The PHY data service
enables the transmission and reception of PHY protocol
data units (PPDU) across the physical radio channel. The
features of the PHY are activation and deactivation of the
radio transceiver, energy detection (ED), link quality
indication (LQI), channel selection, clear channel
assessment (CCA) and transmitting as well as receiving
packets across the physical medium. The standard offers
two PHY options based on the frequency band. Both are
based on direct sequence spread spectrum (DSSS). The
data rate is 250kbps at 2.4GHz, 40kbps at 915MHz and
20kbps at 868MHz. The higher data rate at 2.4GHz is
attributed to a higher-order modulation scheme. Lower
frequency provides longer range due to lower propagation
losses. Low rate can be translated into better sensitivity
and larger coverage area. Higher rate means higher
throughput, lower latency or lower duty cycle. This
information is given in Table 1.

c) Cluster-Tree Topology
Cluster-tree network is a special case of a peer-to-peer
network in which most devices are FFDs and an RFD may connect to a cluster-tree network as a leaf node at the end of a branch. Any of the FFDs can act as a
coordinator and provide synchronization services to other
devices and coordinators. Only one of these coordinators
however is the PAN coordinator. The PAN coordinator
forms the first cluster by establishing itself as the cluster
head (CLH) with a cluster identifier (CID) of zero,
choosing an unused PAN identifier, and broadcasting
beacon frames to neighboring devices. A candidate device
receiving a beacon frame may request to join the network
at the CLH. If the PAN coordinator permits the device to
join, it will add this new device as a child device in its
neighbor list. The newly joined device will add the CLH
as its parent in its neighbor list and begin transmitting
periodic beacons such that other candidate devices may
then join the network at that device. Once application or
network requirements are met, the PAN coordinator may
instruct a device to become the CLH of a new cluster
adjacent to the first one. The advantage of this clustered
structure is the increased coverage area at the cost of
increased message latency [4].

Table 1. Frequency bands and data rates

Frequency band     Region       Channels    Data rate
868-868.6 MHz      Europe       1           20 kbps
902-928 MHz        Americas     10          40 kbps
2.4-2.4835 GHz     Worldwide    16          250 kbps

There is a single channel between 868 and 868.6 MHz, 10 channels between 902.0 and 928.0 MHz, and 16 channels between 2.4 and 2.4835 GHz, as shown in Figure 4. Several channels in different frequency bands enable the ability to relocate within the spectrum. The standard also allows dynamic channel selection, a scan function that steps through a list of supported channels in search of a beacon, receiver energy detection, link quality indication, and channel switching. Receiver sensitivities are -85 dBm for 2.4 GHz and -92 dBm for 868/915 MHz. The 6-8 dB advantage comes from the lower rate. The achievable ranges are a function of receiver sensitivity and transmit power. The maximum transmit power shall conform to local regulations. A compliant device shall have its nominal transmit power level indicated by the PHY

Fig. 3. ZigBee Protocol Stack


The application layer comprises the majority of components added by the ZigBee specification: both the ZDO and its management procedures, together with application objects defined by the manufacturer, are considered part of this layer.

parameter, phyTransmitPower. Fig 4 describes the


operating frequency bands.

a) Components
The ZDO is responsible for defining the role of a device
as either coordinator or end device, as mentioned above,
but also for the discovery of new (one-hop) devices on the
network and the identification of their offered services. It
may then go on to establish secure links with external
devices and reply to binding requests accordingly. The
application support sub layer (APS) is the other main
standard component of the layer, and as such it offers a
well-defined interface and control services. It works as a
bridge between the network layer and the other
components of the application layer: it keeps up-to-date
binding tables in the form of a database, which can be
used to find appropriate devices depending on the services
that are needed and those the different devices offer. As
the union between both specified layers, it also routes
messages across the layers of the protocol stack.

Fig. 4. Operating Frequency Bands

F. IEEE 802.15.4 MAC


The MAC sub layer provides two services: the MAC data service and the MAC management service interfacing to the MAC sub layer management entity service access point (MLME-SAP). The MAC data service
enables the transmission and reception of MAC protocol
data units (MPDU) across the PHY data service. The
features of MAC sub layer are beacon management,
channel access, GTS management, frame validation,
acknowledged frame delivery, association and
disassociation.

b) Communication and Device History


In order for applications to communicate, their
comprising devices must use a common application
protocol (types of messages, formats and so on); these
sets of conventions are grouped in profiles. Furthermore,
binding is decided upon by matching input and output cluster identifiers, unique within the context of a given profile and associated with an incoming or outgoing data flow in a device. Binding tables contain source and
destination pairs. Depending on the available information,
device discovery may follow different methods. When the
network address is known, the IEEE address can be
requested using unicast communication. When it is not,
petitions are broadcast (the IEEE address being part of the
response payload). End devices will simply respond with
the requested address, while a network coordinator or a
router will also send the addresses of all the devices
associated with it. This extended discovery protocol
permits external devices to find out about devices in a
network and the services that they offer, which endpoints
can report when queried by the discovering device (which
has previously obtained their addresses). Matching
services can also be used. The use of cluster identifiers
enforces the binding of complementary entities by means
of the binding tables, which are maintained by ZigBee
coordinators, as the table must always be available within a network and coordinators are most likely to have a permanent power supply; backups may be needed by some applications, which their higher-level layers must manage.
link; after it exists, whether to add a new node to the
network is decided, according to the application and
security policies. Communication can happen right after
the association. Direct addressing uses both the radio address and endpoint identifier, whereas indirect addressing is mediated by the network coordinator.

G. Network Layer
The main functions of the network layer are to enable the
correct use of the MAC sub layer and provide a suitable
interface for use by the next upper layer, namely the
application layer. Its capabilities and structure are those
typically associated with such network layers, including
routing. On the one hand, the data entity creates and
manages network layer data units from the payload of the
application layer and performs routing according to the
current topology. On the other hand, there is the layer
control, which is used to handle configuration of new
devices and establish new networks: it can determine
whether a neighboring device belongs to the network and
discovers new neighbors and routers. The control can also
detect the presence of a receiver, which allows direct
communication and MAC synchronization. The routing
protocol used by the Network layer is an AODV distance
vector based routing algorithm. In order to find the
destination device, it broadcasts a route request to all of its neighbors. The neighbors then broadcast the request to their neighbors, and so on, until the destination is reached. Once the destination is reached, it sends its route reply via unicast transmission following the lowest-cost path back
to the source. Once the source receives the reply, it will
update its routing table for the destination address with
the next hop in the path and the path cost.
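As a small illustration of the route-reply handling just described, the following hypothetical Python sketch updates a routing table with the next hop and lowest path cost; the data structure is an assumption chosen for clarity and is not part of the ZigBee specification.

    def on_route_reply(routing_table, destination, next_hop, path_cost):
        """Keep the lowest-cost next hop for a destination (sketch only)."""
        entry = routing_table.get(destination)
        if entry is None or path_cost < entry["cost"]:
            routing_table[destination] = {"next_hop": next_hop,
                                          "cost": path_cost}

    table = {}
    on_route_reply(table, destination=0x3A2B, next_hop=0x11F0, path_cost=7)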
H. Application Layer
The application layer is the highest-level layer defined by the specification, and is the effective interface of the ZigBee system to its end users.

Indirect addressing requires every relevant field (address, endpoint, cluster and attribute) and sends it to the network coordinator, which maintains these associations and translates requests for communication. Indirect addressing is particularly useful to keep some devices very simple and minimize their need for storage. Besides these two methods, broadcast to all endpoints in a device is available, and group addressing is used to communicate with groups of endpoints belonging to a set of devices.
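The binding-table matching described above can be illustrated with a small hypothetical Python sketch; the five-field row layout is an assumption chosen for clarity, not the format mandated by the specification.

    def bound_destinations(binding_table, src_addr, src_endpoint, cluster_id):
        """Return the (dst address, dst endpoint) pairs bound to a source
        endpoint for a given cluster identifier."""
        return [(dst, ep) for (a, p, c, dst, ep) in binding_table
                if (a, p, c) == (src_addr, src_endpoint, cluster_id)]

    # one assumed row: (src addr, src endpoint, cluster id, dst addr, dst ep)
    table = [(0x0001, 1, 0x0006, 0x00A4, 3)]
    print(bound_destinations(table, 0x0001, 1, 0x0006))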


I. Security Services
As one of its defining features, ZigBee provides facilities
for carrying out secure communications, protecting
establishment and transport of cryptographic keys,
ciphering frames and controlling devices. It builds on the
basic security framework defined in IEEE 802.15.4. This
part of the architecture relies on the correct management
of symmetric keys and the correct implementation of
methods and security policies. The security suite consists of cryptographic keys and suites that provide encryption, done on the basis of AES (Advanced Encryption Standard) [7].

Table 2 tabulates the types of security and encryption services available in ZigBee.

Table 2. Security suites supported by IEEE 802.15.4

Name                 Description
Null                 No security
AES-CTR              Encryption only, CTR mode
AES-CBC-MAC-128      128-bit MAC
AES-CBC-MAC-64       64-bit MAC
AES-CBC-MAC-32       32-bit MAC
AES-CCM-128          Encryption & 128-bit MAC
AES-CCM-64           Encryption & 64-bit MAC
AES-CCM-32           Encryption & 32-bit MAC

IV. CONCLUSION


In the recent past, notable importance has been given to the lives of people who are mobile, and even more to the security of individuals. The importance of tracking has therefore increased by leaps and bounds. With the implementation of networking, tracking is much more reliable and secure. This system has a longer range of operation and longer battery life, and can be deployed in areas where the node density is low. The target and sink are connected most of the time, which increases the target tracking accuracy, as the system possesses a good link quality indicator (LQI), which is a measure of the connection quality between the sink and the target.


V. SCOPE OF FUTURE WORK

The proposed work involves tracking one mobile target. As a second step, we intend to track multiple mobile targets simultaneously. We intend to provide each target with a unique ID or name to distinguish it from the other targets. A prediction technique should be deployed when the number of nodes is high, as some of these nodes should go to sleep mode when the mobile target is not in their range. The distance or range of operation can be increased by using more powerful antennas in the sensor module, or by using other powerful transceivers existing in the market. The size of the handheld device can be reduced drastically by using advanced technologies like MEMS, NEMS, etc. The same device can be manufactured as a PMD. Two-way communication can be established between the target (handheld) and the base station (sink) so as to increase the reliability of the location. The same framework can be used for data transmission with very high security to the remote target, as the hardware contains encryption services too.


REFERENCES
[1] Alhmiedat, T. and Yang, S. (2007) "A survey: localization and tracking mobile targets through wireless sensor network", PGNet International Conference, ISBN: 1902560167.
[2] Bhaskar Krishnamachari (2005) "Networking Wireless Sensors", Cambridge University Press, pp. 32-85.
[3] Blumenthal, J., Grossmann, R., Golatowski, F. and Timmermann, D. (2007) "Weighted centroid localization in ZigBee-based sensor networks", IEEE International Symposium on Intelligent Signal Processing, WISP, Madrid, Spain.
[4] Drew Gislason (2004) "ZigBee Wireless Networking", ZigBee Alliance, Newnes Press, pp. 23-88.
[5] Feng Zhao and Leonidas J. Guibas (2004) "Wireless Sensor Networks: An Information Processing Approach", Morgan Kaufmann Publishers, Palo Alto, California.
[6] Jordan, R. and Abdallah, C.A. (2002) "Wireless communications and networking: an overview", Report, Elect. and Comp. Eng. Dept., Univ. New Mexico.
[7] Labiod, H., Afifi, H. and De Santis, C. (2007) "Wi-Fi, Bluetooth, ZigBee and WiMax", Springer, pp. 109-122.
[8] Microchip, "Microchip Stack for the ZigBee Protocol". [Online] Available: http://ww1.microchip.com/downloads/en/AppNotes/00965c.pdf
[9] Rappaport, T. S. (1996) "Wireless communications: principles and practice", Prentice-Hall Inc., New Jersey.
[10] Shorey, R., Ananda, A., Chan, M. and Ooi, W. (2006) "Mobile, Wireless and Sensor Networks", John Wiley & Sons, Canada.
[11] Tareq Ali Alhmiedat and Shuang-Hua Yang (2008) "A ZigBee-based mobile tracking system through wireless sensor networks", Loughborough University.
[12] Tim Patrick (2008) "Programming Visual Basic 2008: Build .NET 3.5", O'Reilly Media, Inc.
[13] S. P. Bingulac, "On the compatibility of adaptive controllers", in Proc. 4th Annu. Allerton Conf. Circuits and Systems Theory, New York, 1994, pp. 8-16.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.66-75.

Exposing Digital Image Forgeries by Detecting Discrepancies in Motion Blur

Mrs. S. Vanaja
Lecturer, ECE Department, Rajalakshmi Institute of Technology

Abstract: The widespread availability of photo manipulation software has made it unprecedentedly easy to manipulate images for malicious purposes. Image splicing is one such form of tampering. In
recent years, researchers have proposed various methods for
detecting such splicing. In this paper, we present a novel method of
detecting splicing in images, using discrepancies in motion blur. We
use motion blur estimation through image gradients in order to
detect inconsistencies between the spliced region and the rest of the
image. We also develop a new measure to assist in inconsistent region
segmentation in images that contain small amounts of motion blur.
Experimental results show that our technique provides good
segmentation of regions with inconsistent motion blur. We also
provide quantitative comparisons with other existing blur-based
techniques over a database of images. It is seen that our technique
gives significantly better detection results.

Fig. 1. Forged Tourist Guy image allegedly captured on September 11, 2001.

Even in the absence of such knowledge, one may observe that, as the camera is focused on the person, the aircraft should have appeared blurred in the image, due to its speed. The complete absence of motion blur in this image indicates a possible forgery. On the other hand, if the original image contains motion blur, how can a spliced region that has had motion blur introduced in it be detected? Introducing motion blur into a spliced object depends, in general, on the perception of the person creating the forgery and hence is unlikely to be completely consistent with the blur in the rest of the image. In this paper, we use this fact to present a solution to this tampering detection problem. Specifically, we address splicing in a motion-blurred region, with the artificial blur introduced in the spliced part similar to the background blur, so that the inconsistency is difficult to perceive visually.
Many techniques have been developed to discover splicing and compositing of images [2]. Statistical analyses [3], [4] and lighting inconsistencies [5], [6] may be used in order to detect image tampering. Other methods involve exploiting certain features of images which are characteristic of the imaging systems, formats, and the environment [7], [8]. Many of these techniques implicitly assume the lack of any postprocessing on the image [9]. With the appearance of sophisticated photo manipulation software, such an assumption is unlikely to hold for most believable forgeries. Therefore, significant research has gone into circumventing postprocessing (such as blurring) of images. Some techniques [10]-[12] use statistics and measures which are robust to blurring. Others [9], [13]-[16] use discrepancies in defocus blur to discover forgeries. We are not aware of any existing work that uses discrepancies in motion blur, although the authors of [13] suggest that their methods can be extended to motion blur as well. We provide a comparison of our results with theirs below.

The key contributions of this paper are:
an original forgery detection approach employing motion blur estimation via spectral characteristics of image gradients, which can detect small inconsistencies in motion blur;
a novel blur estimate measure designed especially to deal with very little motion blur;
a no-reference perceptual blur metric extended to directional motion blur;
a Hausdorff distance-based cost measure for evaluating the efficiency of our technique.

Index Terms: Image cepstrum, image forgery detection, motion blur estimation.

I. INTRODUCTION

Fake and manipulated images have proliferated in today's media-driven society. The power of the visual medium is compelling and so, malicious tampering can have a significant impact on people's perception of events. Misleading images are used for introducing psychological bias, sensationalizing news, political propaganda, and propagating urban myths. The image in Fig. 1, taken from [1], is an instance of the latter. This photograph was widely circulated via e-mail, supposedly having been obtained from a camera found in the debris of the World Trade Center buildings after the attacks of September 11, 2001. The approaching aircraft in the background seems to imply that this image was captured mere seconds before the impact. However, this image is clearly fake. There are many clues within this photograph that help decide that it is a hoax. A priori knowledge may be employed to prove that this image is unauthentic. For example, geographical knowledge or information about the type of aircraft involved in the attacks can be used to dismiss this image as fake.
Manuscript received November 03, 2010; revised February 15, 2011; accepted February 20, 2011. Date of publication February 28, 2011; date of current version May 18, 2011. This work was supported by a Ph.D. scholarship awarded to P. Kakar by the Institute for Media Innovation. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Oscar C. Au.
The authors are with the Institute for Media Innovation, Nanyang Technological University, Singapore 637553 (e-mail: pkakar@pmail.ntu.edu.sg; sudha@ntu.edu.sg; ewser@ntu.edu.sg).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.



This paper is an extension of our work reported in [17]. We change the motion blur estimation technique from spectral matting to image gradients for faster processing. We refine the segmentation process in order to provide better results and deal with cases of more complex blurs. We develop new measures in order to reduce the amount of human intervention needed in our technique and improve its robustness. We also provide a detailed quantitative analysis of the efficiency of our technique and test our technique on a database of images.
The organization of this paper is as follows. We present an overview of the cause of motion blur in images in Section II-A, background information about the blur estimation process in Section II-B, and our proposed forgery detection method in Section III. Results and comparisons are provided in Section IV.

II. MOTION BLUR ESTIMATION

Instead of using spectral matting as in [17], we employ image gradients in order to take advantage of their simpler and faster computations.
B. Blur Estimation
We use a variant of the widely recognized cepstral method [20]-[22] in order to estimate motion blur. Instead of employing the cepstrum directly, we use the spectral characteristics of the image gradients, as proposed in [23]. Such an approach has been shown to be more robust to noise and somewhat non-uniform motion than just using the cepstrum of the image.

For the case of uniform motion blur, the blurring process is modeled as the convolution of a sharp image with a blurring kernel:

$g(x,y) = f(x,y) * h(x,y) + \eta(x,y)$    (1)

where $g$ is the blurred image, $f$ is the sharp image, $h$ is the blurring kernel, $\eta$ is the noise present, and $(x,y)$ are the pixel coordinates.

For a horizontal uniform-velocity motion blur, the blurring kernel can be modeled as $h(x,y) = 1/L$ for $0 \le x \le L$, $y = 0$, and zero elsewhere, where $L$ is the length of the kernel. Note that a directional blurring kernel $h_\theta$ can be formulated by rotating $h$ by $\theta$ degrees about the x-axis. Taking the Fourier transform of (1),

$G(u,v) = F(u,v)H(u,v) + N(u,v)$    (2)

A. Overview of Motion Blur


One of the possible causes of motion blur is the slow speed
of the camera shutter relative to the object being imaged. In
many images, camera shake is found to be the culprit for the
presence of motion blur. Reducing the exposure interval of the
camera is a possible solution, but this often affects the amount
of noise or depth of field adversely. Tripods and flashes also
offer solutions to the problem of motion blur by allowing for
more stable exposures or greater illumination in a short
interval of time, respectively, but these are often impractical.
Hence, many images containing motion blur do exist and so, it
is useful to utilize the inconsistencies in motion blur in order
to detect image tampering.
Motion blur can be artificially created in an image by specifying the magnitude and direction of the desired blur. Although
perceptual clues in the image may be used to determine the direction of the blur, the magnitude of the blur is not, in general,
based on any such clues, except the perception of uniformity with
other parts of the image having the desired motion. Hence, it is
likely that a motion-blurred spliced region is not completely
consistent with the rest of the image.
We can model motion blur by averaging the instantaneous intensity falling on a pixel over the shutter interval. Such an averaging process can be weighted by a soft Gaussian window instead of using the idealized shutter interval, in order to allow for
non-ideal mechanical shutter effects. Alternatively, blurs arising
from motion, like other types of blur, can also be considered as
convolving an in-focus image with a blur kernel in the spatial
domain. The motion blur kernel is determined by the relative
velocities of the camera and the objects in the image.
However, in general, neither information about the motion nor the sharp image is available, making blind motion estimation a difficult problem. Approaches involving multiple images [18], [19] have been proposed, but may not be suitable for image forensics.

where $G$, $F$, $H$ and $N$ represent the Fourier transforms of $g$, $f$, $h$ and $\eta$, respectively. It is observed that

$H(u,v) = \mathrm{sinc}(\pi L u)$    (3)

which is known to have periodic zeros at $u = k/L$, $k \in \mathbb{Z}$. These periodic zeros also appear in $G$ if $N$ is ignored. The blur extent is estimated from the cepstrum of the image $g$:

$C(g) = \mathcal{F}^{-1}\{\log|\mathcal{F}(g)|\}$    (4)

which has two peaks separated by $2L$, due to the large periodic negative spikes in $\log|G|$.

The fast decay of the sinc function makes the periodic zero pattern difficult to discern, especially in noisy images. However, a similar periodic pattern that is easier to detect also exists in the blurred image's gradient in the spectral domain. Differentiating (1), we get

$\frac{\partial g}{\partial x} = \frac{\partial f}{\partial x} * h + \frac{\partial \eta}{\partial x}$    (5)

Taking the Fourier transform and omitting the noise term,

$\mathcal{F}\left\{\frac{\partial g}{\partial x}\right\} = \mathcal{F}\left\{\frac{\partial f}{\partial x}\right\} H(u,v)$    (6)

Therefore, since $\mathcal{F}\{\partial f/\partial x\} = j 2\pi u\, F(u,v)$, we obtain

$\mathcal{F}\left\{\frac{\partial g}{\partial x}\right\} = j\,\frac{2}{L}\,\sin(\pi L u)\, F(u,v)$    (7)

As can be seen, this is a non-decaying term and hence, detecting the peaks in this case should be easier. For a blurred image, the distance between the two peaks determines the blur extent, and the orientation of the straight line through the two peaks determines the direction of the blur. However, this is often found to be far from ideal due to noise, poor modeling of the blur, the presence of spurious peaks, etc.

So, instead of directly detecting the cepstral peaks, the Radon transform, which is widely used for detecting straight lines in noisy images, is used. For a motion-blurred image, there are periodic large negative lines in $\log|\mathcal{F}\{\partial g/\partial x\}|$ with slope $\theta$ and periodicity $1/L$. Denoting the Radon transform at projection angle $\phi$ by $R_\phi(\cdot)$, $R_\theta(\log|\mathcal{F}\{\partial g/\partial x\}|)$ will have periodic peaks with the same periodicity. Therefore, this should correspond to a peak in the Fourier transform of the Radon transform. Letting this peak occur at frequency $f^*$ for projection angle $\phi^*$, we have

$L = \frac{1}{f^*}, \qquad \theta = \phi^*$    (8)

We represent this estimated motion blur as a two-element vector $\mathbf{b} = (b_m, b_d)$, where $b_m = L$ and $b_d = \theta$.
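The estimation pipeline of this section can be sketched as follows in Python (numpy and scikit-image assumed). This is a rough illustration under stated assumptions: the conversion of the detected peak frequency into a blur length in pixels depends on the spectrum sampling and is deliberately left out.

    import numpy as np
    from skimage.transform import radon

    def gradient_spectrum_peak(img):
        """Locate the dominant periodic pattern in the log-magnitude spectrum
        of the horizontal gradient; its frequency relates to the blur extent
        L and its projection angle to the blur direction theta."""
        gx = np.diff(np.asarray(img, float), axis=1)   # horizontal gradient
        spec = np.log(np.abs(np.fft.fftshift(np.fft.fft2(gx))) + 1e-8)
        angles = np.arange(0.0, 180.0)                 # candidate directions
        sinogram = radon(spec, theta=angles, circle=False)
        # a 1-D FFT along each projection exposes the periodic dark lines
        mag = np.abs(np.fft.rfft(sinogram - sinogram.mean(axis=0), axis=0))
        mag[:2, :] = 0.0                               # suppress near-DC bins
        k, a = np.unravel_index(np.argmax(mag), mag.shape)
        return k / sinogram.shape[0], angles[a]        # (peak frequency, theta)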
An issue arises if the original image has multiple motion blurs
present. This is often the case when a moving object is imaged
against a still background or vice versa. In [24], segmentation of
an image into various regions depending on the estimated blur
model for each pixel has been proposed. The technique is useful
if the original image contains multiple motion blurs. In this case,
the blur models could be separated. While this is appropriate for
images with significantly different blurs, it is not feasible for
image forgeries. As the region spliced into an image is desired to
be concealed from detection, the blur added to it is likely to be
similar to that of the surrounding region. In such a case, it would
be difficult to estimate a separate blur model for the suspected
region at a global level. A localized technique is necessary to
separate blur models in case of forgeries.
Examples of such multiple motion blur segmentation are shown in Fig. 2. As can be seen, motion blurs which differ from each other to a relatively large extent are clearly segmented using a Markov random field-based energy segmentation technique [25]. However, it is important to note that the spliced region is not segmented due to having a different blur compared to its surrounding region in any of these images. Therefore, such a technique is useful in dealing with cases in which multiple motion blurs are present in the original image itself, but not for detecting the forged region. Each segment can then be analyzed for consistency using our technique described below.

Fig. 2. Multiple motion blur segmentation using Markov random fields [25].
The spliced regions are the Wikipedia and Youtube logos in the top and
bottom rows, respectively. Left column: Tampered images with multiple
blurs. Right column: Segmentation results.

III. PROPOSED FORGERY DETECTION TECHNIQUE

We propose a method to detect image forgeries using motion blur estimates. Blur estimates are first computed from the given image, as defined in Section II-B. Our technique then segments the image based on these estimates, giving the regions with inconsistent blur. Although the technique segments such regions in both authentic and forged images, it is especially useful for exposing possible forgeries in blurred regions, such as spliced objects with an artificial blur perceptually close to the background blur, making the inconsistency in blur difficult to detect. A flowchart outlining the steps in our technique is shown in Fig. 3.

A. Block-Level Analysis
Given an image having an artificially motion-blurred spliced region, it is not possible to extract multiple blur models over the whole image from its gradients, especially when the blurs are quite similar to each other. Hence, we propose estimating the blur at a local level, allowing for different blur models to be estimated without being lost in noisy data at a global level. The image is divided into m × n overlapping blocks B_ij, i = 1 to m, j = 1 to n, and the motion blur estimate b_ij for each block is calculated. Each b_ij is a two-dimensional vector consisting of the motion blur estimate magnitude M_ij and direction θ_ij. The image subdivision has two major benefits: 1) motion blur can be estimated at a number of points, as opposed to just a single estimate for the entire image, giving improved resolution, and 2) space-invariance of motion blur can be assumed over each block, allowing for simpler calculations.

B. Smoothing
The components of the motion blur estimates b_ij can be represented in magnitude and direction estimate matrices M and Θ, respectively, each of size m × n, i.e.,

[M]_ij = M_ij, [Θ]_ij = θ_ij.   (9)

Since b_ij is calculated independently for each block B_ij, we perform a smoothing operation to correct for small variations in the estimated blurs. We smooth both the magnitudes and the directions of the estimates:

M' = M * F, Θ' = Θ * F,   (10)


Fig. 3. Flowchart for proposed forgery detection technique.

where M' and Θ' represent the respective smoothed estimates, * denotes convolution, and F is the smoothing filter employed. A disk filter was used in both cases in our technique.
Fig. 4 shows an example of smoothed motion blur estimates. These estimates were obtained for a region containing different motion blurs. As can be seen, the local estimates display some variability at the border between the two regions due to the multiple blurs present in the same block for each local estimate. Smoothing allows the local estimates to vary in a non-abrupt manner, which results in better segmentation of the image in subsequent steps.
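A sketch of the block-level analysis and smoothing steps follows; estimate_blur is assumed to be a per-block estimator returning (direction, magnitude), and the block size and overlap match the values chosen in Section IV. Smoothing the direction matrix by plain convolution ignores angle wrap-around, which a careful implementation would handle:

import numpy as np
from scipy.ndimage import convolve

def disk_filter(radius):
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def blockwise_estimates(img, block=100, step=80):      # 20-pixel overlap
    mags, dirs = [], []
    for top in range(0, img.shape[0] - block + 1, step):
        row_m, row_d = [], []
        for left in range(0, img.shape[1] - block + 1, step):
            theta, mag = estimate_blur(img[top:top + block, left:left + block])
            row_d.append(theta)
            row_m.append(mag)
        mags.append(row_m)
        dirs.append(row_d)
    return np.array(mags), np.array(dirs)              # the matrices of (9)

def smooth_estimates(M, D, radius=2):
    f = disk_filter(radius)                            # the disk filter of (10)
    return convolve(M, f, mode='nearest'), convolve(D, f, mode='nearest')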
C. Blur Estimate Measures
For images in which certain regions appear to have little perceptible motion blur, we propose using the gradients of the image along with the motion blur estimate magnitudes M_ij, generating a new blur estimate measure (BEM), in order to improve robustness:

BEM_ij = M_ij · Σ_{(x,y) ∈ W_ij} ||∇g(x, y)||,   (11)

where W_ij is a neighborhood window located at the center, in pixel coordinates, of each block B_ij of the image and of the same size as the block. As formulated above, (11) can distinguish between the cases where there is little motion blur due to better focusing and where there is a small amount of motion blur due to the lack of enough texture to give significant information about the motion blur present. Only the magnitudes of the motion blur estimates are considered, as the direction estimates are perpendicular to the image gradients. Similar to M in (9), the BEM_ij can be arranged in an m × n matrix, as M_ij and BEM_ij are calculated block-wise.
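The following sketch weights each block's blur magnitude by the gradient energy of a window centred on the block; the exact weighting of (11) is an assumption here, but the behaviour matches the description above (low texture suppresses the measure):

import numpy as np

def bem(M, img, centers, win):
    # centers[i][j] gives the (y, x) pixel centre of block B_ij and is
    # assumed to lie at least win//2 pixels away from the image border.
    gy, gx = np.gradient(img.astype(float))
    energy = np.hypot(gx, gy)
    h = win // 2
    out = np.zeros_like(M, dtype=float)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            cy, cx = centers[i][j]
            out[i, j] = M[i, j] * energy[cy - h:cy + h, cx - h:cx + h].sum()
    return out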

Fig. 4. Smoothing motion blur estimates (magnitudes and directions indicated


by arrows). (a) Image with consistent motion blur. (b) Smoothing local
motion blur estimates. Green regular arrows indicate estimates before
smoothing; bold arrows, after smoothing.

Fig. 5. Computation of edge width w(p).

Fig. 6. Blur measurement for motion-blurred images.

In order to automate the decision of using BEMs or just motion blur estimates, we employ a modification of the no-reference perceptual blur metric (PBM) presented in [26]. This metric is intended for Gaussian and compression blurs, necessitating a modification to deal with motion blur, which is not isotropic. Let E_φ be the set of edge pixels in the binary edge map of the image obtained by applying the Sobel operator in the direction φ. A new metric, named the oriented blur metric PBM_φ, is defined as

PBM_φ = (1/|E_φ|) Σ_{p ∈ E_φ} w(p),   (12)

where w(p) is the width of the edge along the direction perpendicular to φ at the edge pixel p and |·| denotes cardinality.
In order to define w(p) in (12), let the two sides of the line in the direction φ at some p be denoted by s1 and s2, as shown in Fig. 5. Also, let the pixel locations of the first local maximum and minimum from this p, along the above perpendicular line, on the side s1 be denoted by p1_max and p1_min, respectively; p2_max and p2_min are defined similarly. Then, we can define the edge width as

w(p) = min(||p - p1_max||, ||p - p1_min||) + min(||p - p2_max||, ||p - p2_min||).   (13)

We compute the oriented PBM_φ for K orientations φ_1 to φ_K and then define the overall PBM as

PBM = max_k PBM_{φ_k}.   (14)
This metric is used to determine the overall blurriness of the image, as large edge widths, indicating blurred edges, give high values of the PBM. BEMs are chosen when the PBM is below a predetermined threshold T_pbm. The chosen blur estimate e is given by

e = BEM if PBM < T_pbm, and e = (M, Θ) otherwise.   (15)

It is to be noted that e comprises two m × n matrices in the latter case, as both magnitude and direction estimates are employed. In this case, all subsequent operations are carried out independently on each of the component matrices.
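A sketch of the oriented metric for a single orientation (widths of vertical edges measured along rows) is shown below; the remaining orientations follow by rotating the image or the scan direction, and the edge threshold is an assumed parameter:

import numpy as np
from scipy.ndimage import sobel

def run_length(row, x, step):
    # Walk away from the edge pixel while the intensity keeps rising or
    # falling, i.e., until the first local extremum on that side.
    i, d0 = x, row[x + step] - row[x]
    while 0 <= i + step < len(row) and (row[i + step] - row[i]) * d0 > 0:
        i += step
    return abs(i - x)

def pbm_one_orientation(img, edge_thresh=100.0):
    img = img.astype(float)
    edges = np.abs(sobel(img, axis=1)) > edge_thresh      # vertical edges
    ys, xs = np.nonzero(edges[:, 1:-1])
    widths = [run_length(img[y], x + 1, +1) + run_length(img[y], x + 1, -1)
              for y, x in zip(ys, xs)]
    return float(np.mean(widths)) if widths else 0.0      # one term of (14)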
The PBM computed for images containing various amounts of motion blur is plotted in the graph shown in Fig. 6. The blur metric was calculated as an average over ten random images containing motion blur, with the magnitudes specified in Fig. 6, in random directions. In calculating the PBM, we use four orientations: 0°, 45°, 90°, and 135°. The almost linear nature of the graph indicates that our metric works well in the case of motion blur.

Fig. 7. Segmentation outputs for various interpolation schemes applied to blur estimates. (a) Nearest neighbor. (b) Bilinear. (c) Bicubic.
D. Interpolation
The motion blur estimates are then upsampled to the size of the image using bicubic interpolation, in order to have an estimate of the blur at each pixel. The accuracy of the estimate depends on the amount of upsampling done. Bicubic interpolation provides better results than nearest neighbor (which gives a blocky segmentation) and bilinear interpolation (whose segmentation still has a few jagged edges), though these could be adequate for certain applications. Fig. 7 shows an example of the segmentation outputs for various interpolations. The circles are examples of regions where the improvement offered by bicubic interpolation over the other two is most clearly visible.
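The upsampling step admits a short sketch with SciPy's spline zoom (order 3 corresponds to the bicubic choice above; orders 0 and 1 reproduce the nearest-neighbor and bilinear alternatives of Fig. 7):

import numpy as np
from scipy.ndimage import zoom

def upsample_estimates(E, out_shape):
    # E is one of the m x n estimate matrices; out_shape the image shape.
    return zoom(E, (out_shape[0] / E.shape[0], out_shape[1] / E.shape[1]),
                order=3)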

E. Segmentation
We then segment the image into two regions that exhibit different motion blurs. This is done by adaptively thresholding the upsampled estimates using Otsu's method [27]. This method also provides an effectiveness metric, which is used to discard images which show consistent directions and/or magnitudes in their motion blur estimates and hence cannot be segmented effectively. The result of segmenting the magnitude and direction of the estimates provides us with an indication of regions with dissimilar motion blur.
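A sketch of this step, assuming scikit-image's Otsu threshold; the effectiveness check below uses the standard criterion of between-class variance over total variance, with an assumed cut-off value:

import numpy as np
from skimage.filters import threshold_otsu

def segment(E, min_effectiveness=0.5):
    if E.var() == 0:
        return None                      # estimates are fully consistent
    labels = E > threshold_otsu(E)
    w1 = labels.mean()
    if w1 in (0.0, 1.0):
        return None
    mu1, mu0 = E[labels].mean(), E[~labels].mean()
    eta = w1 * (1 - w1) * (mu1 - mu0) ** 2 / E.var()   # Otsu effectiveness
    return labels if eta >= min_effectiveness else None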
Fig. 8. Left: Energy-based segmentation output. Right: Ideal segmentation obtained by supervised matting.

Fig. 9. Images in our database.

Fig. 10. Detection for spliced blurred regions. Left column: Forged images with segmentation outputs. Right column: Ideal segmentations.

The results from this simple segmentation can be refined by again employing an energy-based segmentation. In this case, the pixel intensities are considered in addition to the motion blur discrepancies, giving smoother boundaries, more likely to correspond to the actual boundaries of the spliced region. This assumes that the spliced region has a different intensity than its immediate background, which is reasonable; otherwise, the boundary of the inconsistent region would not be detectable at all, by any method. In order to accomplish such a segmentation, we use the mean values of the motion blur estimates of the two regions obtained by Otsu's method and then find the Euclidean distance between this mean and the motion blur estimate at each pixel. Using graph cuts [28], we find a segmentation which minimizes the total cost consisting of the cost of assigning different adjacent region labels (based on the above Euclidean distance) and the cost of dissimilar neighboring pixel intensities. The results of such a segmentation are shown in Fig. 8.
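The data term of this energy can be sketched as follows (the smoothness term on neighboring pixel intensities and the graph-cut solver itself are omitted; [28] supplies the minimization):

import numpy as np

def unary_costs(E_pix, otsu_labels):
    # Distance of each pixel's blur estimate to the two region means
    # obtained from Otsu's method, used as the label-assignment cost.
    mu0 = E_pix[~otsu_labels].mean()
    mu1 = E_pix[otsu_labels].mean()
    return np.stack([np.abs(E_pix - mu0), np.abs(E_pix - mu1)])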

The ideal segmentation shown in Fig. 8 is obtained by using supervised spectral matting [29] in order to extract the spliced regions from the image and applying Otsu's method to this extracted matte. Obtaining such a segmentation requires knowledge of the spliced region, making it useful only for evaluating splicing detection. Note that the same ideal segmentation can be used for comparison with the energy-based segmentation approach as well, since supervised matting ensures that the extracted regions' boundaries correspond very closely with the spliced objects' boundaries.
IV. RESULTS AND COMPARISONS
We created a database of 15 forged images, shown in Fig. 9, containing camera shake and motion blur. The original images were obtained from the popular photo-sharing website Flickr [30]. We spliced different objects into the blurred backgrounds of the images and applied visually similar motion blurs, using the GIMP image editor. The results of detection of spliced regions along with the ground truths showing the actual regions (determined by color similarity) for some images are shown in Fig. 10. An example of the detection for a spliced region that is not blurred is shown in Fig. 11.

Fig. 11. Detection for non-blurred spliced region. (a) Segmentation of image with non-blurred spliced object. (b) Ideal segmentation.

A. Evaluation Criteria
In order to evaluate the efficacy of our technique and also determine the various parameters used, we employ a cost measure that is used to determine the difference between the observed and the ideal segmentations.
As the energy-based segmentation gives rise to a boundary dividing the image into different regions, a cost measure like the Hausdorff distance, which is used widely for shape


TABLE I
SEGMENTATION COST FOR DIFFERENT BLOCK SIZE

Fig. 12. Probability distributions of PBM values for using BEMs and using
only motion blur estimates.

TABLE II
SEGMENTATION COST FOR DIFFERENT BLOCK OVERLAP
Fig. 13. Image patches with similar motion blur estimates, but different BEM
values. (a) High BEM value. (b) Low BEM value.

matching [31], [32], is quite suitable. In the case of both the ideal segmentation and the segmentation from Otsu's method, the perimeter of the extracted region is used as the boundary of the segmentation for evaluation purposes. However, the conventional Hausdorff distance measure considers only the maximum distance between two boundaries and is not robust to outliers in the boundaries. Therefore, we use a modified Hausdorff distance measure, d_MH, defined in [33].
Let A and B be the observed and ideal segmentation boundaries, respectively. Then, the modified Hausdorff distance measure is given by

d_MH(A, B) = max(d(A, B), d(B, A)),   (16)

where d(A, B) = (1/|A|) Σ_{a ∈ A} min_{b ∈ B} ||a - b||, |·| denotes the cardinality of the set, and ||·|| is the Euclidean distance.
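This measure is straightforward to compute; a sketch for boundaries given as (N, 2) arrays of pixel coordinates:

import numpy as np

def directed(A, B):
    # Mean distance from each point of A to its nearest point of B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean()

def modified_hausdorff(A, B):
    return max(directed(A, B), directed(B, A))       # d_MH of (16)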

B. Results
Table I shows the variation of the segmentation cost with respect to the block size and Table II with respect to the block overlap. We calculate the motion blur estimates for block sizes of 80×80, 100×100, 120×120, 140×140, and 160×160 pixels with overlaps of 10, 15, 20, 30, and 40 pixels. The average cost function is then evaluated over 15 images for each of the 25 possible combinations of block size and overlap and the minimum average cost is noted. It is observed that the segmentation costs are fairly robust to changes in these parameters and energy-based segmentation provides more refined results than simply Otsu's method. We choose to use the values of parameters corresponding to the lowest costs for either method (shown in the highlighted cells), i.e., a block size of 100×100 and an overlap of 20 pixels.
1) Choice of T_pbm: We observed that BEMs generally provide good results with images having a mean PBM value of 6.8 and a standard deviation of 1.1, while using only motion blur estimates provides good results with a mean PBM value of 31 and a standard deviation of 13.3. We model the probability distributions for the two cases with normal and inverse Gaussian distributions, respectively. The inverse Gaussian distribution (Fig. 12) is employed for the latter case as significant skew towards higher PBM values is observed in this case. We assume that the prior probabilities of better results using BEMs or only motion blur estimates are equal. Thus, based on the results shown in Figs. 6 and 12, we set T_pbm to 10.
2) BEMs versus Motion Blur Estimates: Fig. 13 shows that BEMs provide greater discrimination than simply using the motion blur estimates. The normalized values of the motion blur estimates for the patches in Fig. 13(a) and (b) are 1 and 1.03, respectively, in the same direction. On the other hand, the normalized BEM values are 5.29 for the former and 1 for the latter. The greater discrimination is apparent in the case of BEMs. Normalized values are used due to the different scales of the motion blur estimates and BEMs.
3) Using the PBM for Selection: Fig. 14 shows the advantage of BEMs in images with little perceptible motion blur. The image in this figure has a PBM value of 6.73, which is below our threshold of 10, and hence using BEMs should result in an improved segmentation. Note that the motion blur magnitude in the image is small, since the plane is the object focused on and the sky is fairly uniform and hence does not appear blurred to a large extent. Fig. 14(b) and (c) shows the segmentation results based on motion blur estimates and BEMs, respectively. It can be seen that the plane is segmented well from the sky in Fig. 14(c), demonstrating the efficacy of BEMs.

Fig. 14. Segmenting regions with different blur. (a) Image with inconsistent motion blur. (b) Segmentation using motion blur estimates. (c) Segmentation using BEMs.

Fig. 15. Segmenting an image with consistent blur. (a) Image. (b) Segmentation using BEMs. (c) Segmentation using motion blur estimates.

Fig. 16. Distinguishing consistent and inconsistent regions. (a) Red box in image indicates inconsistent region; yellow box indicates consistent region. (b) and (c) Inconsistent region detection. (d) Consistent region detection.

The need for using a PBM in order to activate the use of BEMs is shown in Fig. 15. The image in this figure has a consistent motion blur and a PBM value of 20.2. It can be seen that using BEMs does not provide a result as good as using motion blur estimates only for images with significant blurring.
4) Distinguishing Consistent and Inconsistent Regions:
Fig. 16 shows how subimages with consistent and inconsistent
motion blurs are represented by our method. The consistent
subimage has all the pixels belonging to the same region. In
general, most of the pixels are classified as belonging to the same
region, though some insignificant outliers may be present. In the
inconsistent subimages, the segmentation shows distinct separate
regions of pixels that indicate the inconsistent regions.

C. Comparisons
There are a few methods that detect image tampering based on
different blurs. The work in [9], [14], and [15] used defocus blur
modeled with a two-dimensional circular Gaussian kernel. As
these methods are based on this isotropic kernel, they cannot be
extended directly to directional motion blur. Moreover, all of
these methods require considerable human intervention in
defining parameters and interpreting results. In another work
[16], the authors proposed a method intended for highly localized blur and mentioned that it is not suitable for motion blur. In
yet another work [13], the authors suggested that it is pos-sible to
distinguish natural and artificial motion blur using their method.
So, we compare our method with [13].
The authors of [13] use discrepancies in defocus blur to detect
forgeries. We implemented their technique and compared the results of our technique with those obtained using their technique.
The technique in [13] is based on normalizing the DCT coefficients of 8 8 image blocks and then computing the inverse DCT
of the image. This technique offers the advantages of speed and
simplicity. Enhancements such as threshold selection based on
global blur estimates and morphological operations may also be
used in order to improve the results of the technique. The output
images for the best thresholds are shown in Fig. 17. As can be
seen, while our technique generates a blob that covers the region
with inconsistent blur, the method in [13] results in

Fig. 17. Comparison with [13]. (a) Image with spliced motion-blurred logo. (b) and (c) Images with multiple motion blurs. (d)-(f) Our results. (g)-(i) Results from [13].

TABLE III
COMPARISON OF SEGMENTATION COSTS

only certain parts of the inconsistent region being identified as inconsistent.
Table III shows the performance of the DCT-based technique for various threshold choices. The average segmentation cost for this technique is calculated on the same database used to obtain Tables I and II. The segmentation cost for selecting a threshold based on a global blur estimate, as in [13], is significantly higher than that obtainable by our method. Although our proposed technique based on Otsu's method offers only slightly better performance than that offered by the DCT-based technique, applying energy-based segmentation gives a significant improvement.
The DCT-based technique is strongly influenced by the color and/or texture of the region with inconsistent motion blur compared to the rest of the image. For example, the image in Fig. 17(g) shows a distinct resemblance to Fig. 17(a) in terms of shading. This indicates a strong dependence on texture in the method of analysis used. Our method usually generates a large blob which allows easy interpretation of results. For images with multiple blurs, as in Fig. 17(b) and (c), we are able to segment regions with inconsistent motion blurs. The technique of [13] is not able to do so.

V. CONCLUSIONS
We have presented a technique for detecting spliced images using discrepancies in motion blur in this paper. Motion blur is observed in images of fast-moving subjects and may also appear due to camera shake. We have used the fact that introducing motion blur into a spliced object, in general, depends on the perception of the person creating the forgery and hence is unlikely to be completely consistent with the blur in the rest of the image.
Our approach has been based on the method of spectral analysis of image gradients. The image gradients of a blurred image in the spectral domain display some periodic characteristics which are correlated with the amount and direction of motion blur present.
The suspected image is divided into overlapping blocks and the motion blur for each block is estimated. This is followed by a postprocessing step of smoothing the blur estimates and upsampling to the size of the image. The regions of the image which show inconsistent blur are then segmented from the image and displayed to the user. We have also developed a BEM to provide robust segmentation in the case of little perceptible blur. The presence of low blur is determined by using a perceptual blur metric.
We have provided some results of detecting inconsistent regions by using our technique and have compared it with another technique applicable to motion blur. Quantitative and qualitative comparisons show that our technique provides better results in selecting the regions with inconsistent blur.

REFERENCES
[1] Museum of Hoaxes. [Online]. Available: http://www.museumofhoaxes.com/hoax/photo_database/image/tourist_guy/
[2] B. Mahdian and S. Saic, "A bibliography on blind methods for identifying image forgery," Image Commun., vol. 25, no. 6, pp. 389-399, 2010.
[3] A. Popescu and H. Farid, "Exposing digital forgeries by detecting traces of resampling," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 758-767, Feb. 2005.
[4] H. Farid, "Detecting Digital Forgeries Using Bispectral Analysis," Massachusetts Inst. Technol., Cambridge, MA, Tech. Rep. AIM-1657, 1999.
[5] M. Johnson and H. Farid, "Exposing digital forgeries by detecting inconsistencies in lighting," in Proc. 7th Workshop Multimedia and Security, 2005, pp. 1-10.
[6] M. Johnson and H. Farid, "Exposing digital forgeries in complex lighting environments," IEEE Trans. Inf. Forensics Security, vol. 2, no. 3, pt. 1, pp. 450-461, Sep. 2007.
[7] M. Johnson and H. Farid, "Exposing digital forgeries through chromatic aberration," in Proc. 8th Workshop Multimedia and Security, 2006, pp. 48-55.
[8] M. Johnson and H. Farid, "Metric Measurements on a Plane From a Single Image," Dept. Comput. Sci., Dartmouth College, Tech. Rep. TR2006-579, 2006.
[9] G. Cao, Y. Zhao, and R. Ni, "Edge-based blur metric for tamper detection," J. Inf. Hiding Multimedia Signal Process., vol. 2073, p. 4212, 2009.
[10] B. Mahdian and S. Saic, "Detection of copy-move forgery using a method based on blur moment invariants," Forensic Sci. Int., vol. 171, no. 2-3, pp. 180-189, 2007.
[11] Y. Yun, J. Lee, D. Jung, D. Har, and J. Choi, "Detection of digital forgeries using an image interpolation from digital images," in Proc. IEEE Int. Symp. Consumer Electronics, 2008, pp. 1-4.
[12] S. Bayram, H. Sencar, and N. Memon, "An efficient and robust method for detecting copy-move forgery," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2009, pp. 1053-1056.
[13] D. Hsiao and S. Pei, "Detecting digital tampering by blur estimation," in Proc. 1st IEEE Int. Workshop Systematic Approaches to Digital Forensic Engineering, 2005, pp. 264-278.
[14] X. Wang, B. Xuan, and S. Peng, "Digital image forgery detection based on the consistency of defocus blur," in Proc. Int. Conf. Intelligent Information Hiding and Multimedia Signal Processing, 2008, pp. 192-195.
[15] L. Zhou, D. Wang, Y. Guo, and J. Zhang, "Blur detection of digital forgery using mathematical morphology," Lecture Notes Comput. Sci., vol. 4496, pp. 990-998, 2007.
[16] J. Zhang and Y. Su, "Detecting logo-removal forgery by inconsistencies of blur," in Proc. Int. Conf. Industrial Mechatronics and Automation, 2009, pp. 1-4.
[17] P. Kakar, N. Sudha, and W. Ser, "Detecting digital image forgeries through inconsistent motion blur," in Proc. IEEE Int. Conf. Multimedia & Expo, 2010, pp. 486-491.
[18] A. Rav-Acha and S. Peleg, "Two motion-blurred images are better than one," Pattern Recognit. Lett., vol. 26, no. 3, pp. 311-318, 2005.
[19] X. Liu and A. El Gamal, "Simultaneous image formation and motion blur restoration via multiple capture," in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2001, vol. 3, pp. 1841-1844.
[20] D. Gennery, "Determination of optical transfer function by inspection of frequency-domain plot," J. Opt. Soc. Amer., vol. 63, p. 1571, 1973.
[21] M. Cannon, "Blind deconvolution of spatially invariant image blurs with phase," IEEE Trans. Acoust., Speech, Signal Process., vol. 24, no. 1, pp. 58-63, Feb. 1976.
[22] I. Rekleitis, "Steerable filters and cepstral analysis for optical flow calculation from a single blurred image," Vis. Interface, vol. 1, pp. 159-166, 1996.
[23] H. Ji and C. Liu, "Motion blur identification from image gradients," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008, pp. 1-8.
[24] S. Dai and Y. Wu, "Motion from blur," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008, pp. 1-8.
[25] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 6, no. 6, pp. 721-741, Nov. 1984.
[26] P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, "A no-reference perceptual blur metric," in Proc. IEEE Int. Conf. Image Processing, 2002, vol. 3, pp. 57-60.
[27] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, pp. 285-296, 1975.
[28] Y. Boykov, O. Veksler, and R. Zabih, "Fast approximate energy minimization via graph cuts," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 11, pp. 1222-1239, Nov. 2002.
[29] A. Levin, A. Rav-Acha, and D. Lischinski, "Spectral matting," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007, pp. 1-8.
[30] Flickr. [Online]. Available: http://www.flickr.com
[31] D. Huttenlocher, G. Klanderman, and W. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 9, pp. 850-863, Sep. 1993.
[32] X. Yu, M. Leung, and Y. Gao, "Hausdorff distance for shape matching," in Proc. 4th IASTED Int. Conf. Visualization, Imaging and Image Processing, 2004.
[33] M.-P. Dubuisson and A. Jain, "A modified Hausdorff distance for object matching," in Proc. IAPR Int. Conf. Computer Vision and Image Processing, Oct. 1994, pp. 566-568.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.76-80.

Significant Difference of Wavelet Coefficient Quantization Based Watermarking Method Attacks

G. Swati1, C.I. Sherly Sofie2 and Sankara Gomathi3
3HOD of ECE Dept.
Department of Electronics and Communication Engineering, Panimalar Institute of Technology, Poonamallee, Chennai - 600123.
Email: 1swaticutedog@gmail.com, 2sherlysofie4@gmail.com
Abstract: This paper describes an attack on the recently proposed "Watermarking Method Based on Significant Difference of Wavelet Coefficient Quantization". While the method is shown to be robust against many signal processing operations, the security of the watermarking scheme under intentional attack exploiting knowledge of the implementation has been neglected. We demonstrate a straightforward attack which retains the fidelity of the image. The method is therefore not suitable for copyright protection applications. Further, we propose a countermeasure which mitigates the shortcoming.

Keywords: Attack, copyright protection, quantization, watermarking.

I. INTRODUCTION
Copyright protection is an important watermarking application where information identifying the copyright owner is imperceptibly embedded in multimedia data such that this watermark information is detectable even in degraded copies. Quantization-based watermarking is an attractive choice as it combines high watermark capacity with robustness against manipulation of the cover data. The ability to embed many watermark bits (in the range of 256 to 1024 bits) allows hiding a small black-and-white logo image. An extracted logo image can be used to visually judge the existence of a particular watermark. Alternatively, the normalized correlation measure between the embedded and extracted watermark provides for numerical evaluation. Recently, Lin et al. [1] proposed a robust, blind watermarking scheme based on the quantization of the significant difference between wavelet coefficients. Their results for a 512-bit watermark demonstrate good robustness for a wide variety of signal processing attacks such as JPEG compression, median filtering, sharpening, and mild rotation. However, in the copyright protection scenario, a watermarking method must not only withstand unintentional processing of the cover data but also intentional, targeted attack by a malicious adversary. For the attack scenario in this paper, we assume that we have access to only a single watermarked image but possess full knowledge of the implementation details of the watermarking scheme. A public detector is not available. According to the classification suggested by Cayre et al. [2], this constitutes a watermark-only attack (WOA). Following Kerckhoffs' principle [3], a watermarking system should be secure even if everything except the key is known. Watermark security versus robustness is a controversial topic. Kalker [4] states that security refers to the inability of unauthorized users to have access to the raw watermarking channel. While general signal processing, geometric, and protocol-level attacks [5]-[7] have received ample attention in the literature, only few works investigate targeted attacks directed towards the weakness of a particular watermarking algorithm. The attacks mounted on the proposed scheme during the Break Our Watermarking System (BOWS) contest [8] expose vulnerabilities and indicate design guidelines for robustness and security to be incorporated in new watermarking schemes. It is thus worthwhile to consider attacking a particular watermarking method. Benchmarking may provide a robustness evaluation [9]; however, in the copyright protection scenario, a detailed analysis for potential weaknesses is required. For example, Das et al. [10] describe a successful analysis of another wavelet-based quantization watermarking method [11]. Although that scheme demonstrates good robustness against many signal processing operations, the embedding locations are revealed and can then be efficiently attacked. We exploit a similar weakness in SDWCQ and note that [1], [11] both perform ad-hoc quantization of small vectors, ignoring established security measures such as a key-dependent dither vector as proposed in the QIM embedding framework [12].

In Section II, we briefly review the watermarking method proposed by Lin et al. [1] based on the Significant Difference of Wavelet Coefficient Quantization (SDWCQ). Our attack is presented in Section III and, after discussing the weakness, we propose a countermeasure in Section IV. Section V provides experimental results of the attack's performance and the robustness of the modified scheme. Finally, we conclude the paper in Section VI with cautionary notes.

II. WATERMARKING METHOD
The SDWCQ method [1] selects the LH3 subband obtained by a three-level DWT for watermark embedding. Consecutive coefficients of the subband are grouped into blocks of a fixed size; see Fig. 1. A block size of 7 is suggested in the paper as a tradeoff between capacity, robustness, and security. A pseudo-random permutation of the blocks is performed and only the first Nw blocks are selected. Each block i, 1 ≤ i ≤ Nw, encodes one bit of watermark information w_i ∈ {1, -1} by imposing a constraint on the largest and second largest coefficients within the block. Let max_i and sec_i denote these two coefficient values for each block, and max_i - sec_i the significant difference. If watermark symbol 1 is to be embedded in block i, max_i is replaced and set to

(1)

where T is a threshold controlling the embedding strength (see [1]) and the average significant difference value of all n blocks is

(2)

where [.] denotes the floor operator. Similarly, to embed -1, max_i is set equal to sec_i.
For watermark extraction, an adaptive threshold τ is defined as

(3)

where the σ_i are the ordered significant differences of the received image and 0 ≤ α ≤ 1 is sensitive to the ratio between the two watermark symbols. For equiprobable watermark symbols, α is set to 0.9 (see [1] for details). The difference max*_i - sec*_i between the largest and second largest coefficients of each received block is compared against τ to extract one bit of watermark information:

w*_i = 1 if max*_i - sec*_i ≥ τ, and w*_i = -1 otherwise.   (4)

To judge the presence of the watermark in the received image, the normalized correlation (NC) between the embedded and extracted watermark, defined as

NC(w, w*) = (1/Nw) Σ_i w_i w*_i,   (5)

is compared against a decision threshold ρ. If NC(w, w*) ≥ ρ, the watermark is declared present, otherwise absent. For a watermark of length Nw = 512 and a false-positive probability of approximately 1.03 × 10^-7, ρ is set to 0.23.
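A sketch of the detector decision, for watermark symbols in {-1, +1}:

import numpy as np

def nc(w, w_star):
    # Normalized correlation of (5) for +/-1 symbol sequences.
    w = np.asarray(w, dtype=float)
    w_star = np.asarray(w_star, dtype=float)
    return float(np.mean(w * w_star))

def watermark_present(w, w_star, rho=0.23):
    return nc(w, w_star) >= rho     # rho = 0.23 for Nw = 512, see above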

III. ATTACK
The security measures employed by SDWCQ are the pseudo-random permutations of the blocks and of the watermark bits. However, the permutations merely change the order in which blocks are watermarked and thus the block locations where watermark bits are embedded in the subband. In case the number of watermark bits Nw is smaller than the number of available blocks Nb, the attacker does not know which blocks to target. The application scenario [1] assumes Nw = 512, a block size of 7 and, for a subband size of 64×64, the number of available blocks Nb = [(64·64)/7] = 585. Crucially, the permutations do not disguise which coefficients make up a block. Therefore, an attacker can derive the values max_j and sec_j for all blocks 1 ≤ j ≤ Nb which potentially carry watermark information. In [1], the authors claim that targeting all largest coefficients would significantly degrade the image quality and thus the commercial value of the attacked copy. This is not the case, as we show below. The attack computes the significant difference for all blocks 1 ≤ j ≤ Nb in the subband LH3. If max_j - sec_j < T'/2, then the block is presumably carrying watermark symbol -1, which we want to change to encode 1. Hence, the attack increases the significant difference between the attacked coefficients max_j and sec_j:

max_j = sec_j + T' + ε,   (6)

where T' is a crude estimate of the threshold T used for embedding. T' can easily be determined by observing the first sharp increase in the ordered significant differences; see Fig. 2. The variable ε is a small positive constant to guarantee that the significant difference is always greater than T'; thus, the extractor will always decode watermark symbol w*_i = 1 for all i [see (4)]. We set ε = 2 for all images.
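The authors' implementation is distributed separately (see Section V); the following independent sketch, using PyWavelets, illustrates the attack of (6). The threshold estimate is passed in as an assumed parameter, and PyWavelets' level-3 horizontal-detail band is taken as LH3 here (subband naming conventions differ):

import numpy as np
import pywt

def attack_sdwcq(img, block=7, T_est=12.0, eps=2.0):
    coeffs = pywt.wavedec2(img.astype(float), 'bior4.4', level=3)
    cH3, cV3, cD3 = coeffs[1]               # level-3 detail subbands
    flat = cH3.flatten()
    for b in range(flat.size // block):
        blk = flat[b * block:(b + 1) * block]      # view into flat
        order = np.argsort(blk)
        i_max, i_sec = order[-1], order[-2]
        if blk[i_max] - blk[i_sec] < T_est / 2:    # likely carries -1
            blk[i_max] = blk[i_sec] + T_est + eps  # force symbol 1, cf. (6)
    coeffs[1] = (flat.reshape(cH3.shape), cV3, cD3)
    return pywt.waverec2(coeffs, 'bior4.4')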

Fig. 1. Coefficient blocks in the LH3 DWT subband.

Results showing the effectiveness of the attack are presented in Section V. The watermark can be completely removed with an average PSNR of 54.59 dB between the watermarked and attacked image. Note that [1] defines 0 < τ ≤ T; therefore, we apply min(τ, T) in (4), different from [1, eq. (6)]. Clearly, τ becomes much larger than T under attack [see (6) and (3)]. We do not see the point in confining τ ≤ T. In case we lift the constraint τ ≤ T, we resort to a different attack strategy and aim to move the significant differences Δ_j = max_j - sec_j close to the decision threshold τ. A related, alternative modification of the coefficients is thus applied:

(7)

where the significant difference is increased by T' + ε1 for blocks likely carrying watermark symbol -1; otherwise, max_j is decreased by T'/2 + ε2. Here, ε1 is set to 2 as before and ε2 is determined for each image such that slightly more than 128 likely locations for watermark symbol 1 are altered (assuming a 1:1 ratio between watermark symbols). Again, the watermark can be completely removed. The average PSNR is 53.56 dB, with more detailed results in Section V.

Fig. 2. Threshold T' determined by observing the ordered significant differences Δ_j for two watermarked images.

Fig. 3. Ordered wavelet coefficients for two watermarked images.

Fig. 4. Ten 512×512 gray-scale test images. (a) Lena. (b) Goldhill. (c) Peppers. (d) Man. (e) Airport. (f) Tank. (g) Truck. (h) Elaine. (i) Boat. (j) Barbara.

IV. DISCUSSION AND COUNTERMEASURE
The previous section shows how an attacker can exploit knowledge of the watermarking scheme's implementation to gain access to the embedding locations. Specifically, the embedding subband (LH3), the formation of consecutive coefficients into blocks, and the quantization rule (1) are utilized. Even if kept secret, the embedding threshold T and the block size can be easily estimated from the received image. The two pseudo-random permutations (Per(S1, Nw) and Per(S2, 4096/7) in [1]) fail to protect the embedding locations. According to Kalker's definition [4], the SDWCQ method is insecure because the attacker can manipulate the raw watermark channel of significant differences and thus remove the watermark while maintaining a high PSNR for the attacked image. Further, the weakness permits copying a watermark to another image.
A simple countermeasure is to establish a key-dependent pseudo-random mapping of wavelet coefficients to coefficient blocks. For example, we can apply the pseudo-random permutation function Per defined in [1] with a secret seed S3 on the wavelet coefficients of subband LH3, Per(S3, 4096), before grouping non-overlapping wavelet coefficients into blocks. This modification conceals which wavelet coefficients make up a block; thus, the significant differences cannot be determined and the attack proposed in the previous section is mitigated.
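A sketch of this keyed mapping; NumPy's seeded generator is used here as a stand-in for the Per function of [1]:

import numpy as np

def keyed_permutation(subband, seed):
    rng = np.random.default_rng(seed)       # seed plays the role of S3
    perm = rng.permutation(subband.size)
    return subband.flatten()[perm].reshape(subband.shape), perm

def invert_permutation(permuted, perm):
    flat = np.empty(permuted.size)
    flat[perm] = permuted.flatten()         # undo the keyed mapping
    return flat.reshape(permuted.shape)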

Even without having access to the significant differences, we can use properties of the SDWCQ method to attack the watermark. First, the SDWCQ scheme constrains embedding of the watermark to the LH3 subband of the DWT. Second, only large positive coefficients are likely to contribute to the significant difference. Third, setting max_i equal to sec_i in order to embed -1 might result in an increased number of wavelet coefficient pairs with the same value, thus revealing potential embedding locations of w_i = -1.
It is well known that the energy of wavelet detail subband coefficients is concentrated in just a few large coefficients for natural images. Based on this fact and the first two assumptions above, we can design an attack which sets all positive coefficients to zero, excluding only the largest. Formally, let σ1 ≥ σ2 ≥ ... ≥ σ_Nc denote the Nc = 4096 ordered wavelet coefficients of subband LH3. Then choose two indices λ1 < λ2 and set


σ_k = 0 for λ1 ≤ k ≤ λ2.   (8)

Reasonable values are λ1 = 1500 and λ2 = 3900; see Fig. 3. Since the image's energy is concentrated mainly in the large coefficients and negative coefficients are hardly affected, the image quality is not severely degraded. The average watermark correlation is reduced to 0.156, well below the detection threshold. The average PSNR between the watermarked and attacked image is 42.57 dB. See Table 4 for detailed results.
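A sketch of (8) on the extracted LH3 subband; lam1 and lam2 follow the values suggested above:

import numpy as np

def zeroing_attack(LH3, lam1=1500, lam2=3900):
    flat = LH3.flatten()
    order = np.argsort(flat)[::-1]       # descending: sigma_1, sigma_2, ...
    flat[order[lam1:lam2]] = 0.0         # zero the mid-ranked coefficients
    return flat.reshape(LH3.shape)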

V. RESULTS
The implementation of the SDWCQ watermarking scheme, its modification, and the related attacks are available as Python code at http://www.wavelab.at/sources.
For our experiments, we use ten 512×512 gray-scale images freely available from the USC SIPI image database; see Fig. 4. We embed a random watermark sequence of length Nw = 512 with approximately the same number of watermark symbols 1 and -1 in each image. Note that Lin et al. [1] also consider the case where the ratio is 1:3, which might be useful for binary logo watermarking where the logo comprises an uneven number of black and white pixels. The ratio between watermark symbols affects the embedding strength in terms of PSNR: with equiprobable symbols, the PSNR is lower than indicated in [1]. To compensate, we choose T = 12 instead of T = 10. The Daubechies 7/9 wavelet filter is used for the DWT. The block size is set to 7 as suggested. For watermark extraction, the parameter α is set to 0.9 to reflect the even distribution of watermark symbols (see [1, Fig. 12]). In the following, we evaluate our attack on SDWCQ (including the detection variant described in Section III) and on the modified SDWCQ scheme proposed in Section IV. The experiment is repeated ten times and we report the averaged normalized correlation (NC) for the extracted watermark as well as the PSNR (dB) between the watermarked and attacked image (w, a), the original and attacked image (o, a), and the original and watermarked image (o, w).

Table 1. Attack Results on the SDWCQ Scheme Averaged Over Ten Test Runs for Block Size 7, Nw = 512, T = 12 and α = 0.9

Table 2. Attack Results on the SDWCQ Scheme (τ Unrestrained) Averaged Over Ten Test Runs (Same Parameters)

Table 1 presents the results of the attack on the SDWCQ watermarking method. The averaged normalized correlation is close to zero for all images, and the watermark is completely removed. The PSNR between the watermarked and attacked image is 54.59 dB on average, significantly higher than the average embedding PSNR of 45.88 dB between the original and watermarked image. Overall, the PSNR of the attacked image is 0.46 dB higher compared to the watermarked image. The results for the SDWCQ variant with τ unrestrained are provided in Table 2. Again, the watermark is completely removed, but the PSNR of the attacked image against the watermarked image is approximately 1 dB lower than before: 53.56 dB. Regarding the case where the watermark comprises {-1, 1} symbols in the ratio 1:3, we note that the scale parameter α for the detector has to be changed to 0.6 (see [1, Fig. 12]) and T = 10 now. It can easily be verified that the detection threshold has to be adapted to maintain the target false-positive error rate. If we assume a probability of error (PE) of 0.25 for each watermark bit, the threshold ρ = 0.68 results in a probability of false-positive error (Pfp) of 5.87 × 10^-7 according to [1, eq. (9)]. The NC can be reduced to zero as before, however at a slightly larger expense in PSNR. Note that in practice it would be sufficient to reduce the NC to just below the detection threshold. The attack performance is illustrated in Table 3.
Table 3. Attack Results on the SDWCQ Scheme (Symbol Ratio 1:3) Over Ten Test Runs (Block Size 7, Nw = 512, T = 10 and α = 0.6)



Table 4. Attack Results on the Modified SDWCQ Scheme Averaged Over Ten Test Runs for Block Size 7, Nw = 512, T = 12 and α = 0.9

In Table 4, we report the results for the proposed targeted attack on the modified SDWCQ scheme which occludes the embedding locations. We observe that the normalized correlation, 0.156 on average, is consistently below the detection threshold of 0.23 (for a false-positive rate of 10^-7). The PSNR between the original and attacked image is 38.88 dB on average, yet the images have good perceptual quality, as confirmed by visual inspection. We also assess the objective image quality using the SSIM metric [13]. For the watermarked images, the SSIM is 1 (perceptually identical). The average SSIM value for the attacked images is 0.98. For comparison, JPEG compression (Q = 95), which has a very minor impact on the perceived image quality, also yields 0.98 on the SSIM scale. So even for the low PSNR (o, a) value of 38.88 dB, the perceptual quality of the attacked images is maintained according to the SSIM metric. Hence, the watermark of the modified SDWCQ scheme can also be removed at acceptable perceptual quality.
The proposed attack would not be successful if the watermark energy were spread over more subbands. In Fig. 5, we confirm that the attacked images are visually almost identical to the original image using cropped 96×96 image regions of the Lena and Goldhill images. Only in direct comparison do differences become noticeable; for example, the tiles on the roof of the Goldhill image appear slightly brighter (marked with a white circle). The PSNR between the original and the attacked SDWCQ images shown is 46.78 and 46.51 dB; the PSNR after the attack on the modified SDWCQ scheme is 41.52 and 36.49 dB for Lena and Goldhill, respectively.

Fig. 5. Image quality of cropped image regions after the attack on the Lena and Goldhill images; PSNR (dB): 46.78 and 46.51 for the attack on SDWCQ, 41.52 and 36.49 for the attack on modified SDWCQ. (a) Original image. (b) SDWCQ attack. (c) Modified SDWCQ attack.

With the modified SDWCQ method, the average PSNR of the watermarked images is more than 3 dB lower compared to the original scheme for the same embedding parameters. Due to the pseudo-random assignment of the coefficients to a block, the average significant difference is increased. Hence, in case watermark symbol -1 is embedded, the coefficient max_i is on average changed by a larger amount; see (1). Consequently, the modified SDWCQ scheme does not suffer from blocks with approximately equal-valued coefficients due to smooth image regions, and its robustness is improved. The attack on SDWCQ relates to the security of the scheme, as an unauthorized user can gain access to the watermark bits; it is then possible to alter or copy the watermark. On the other hand, the attack on the modified SDWCQ scheme relates to the robustness of the watermarking method, as it is not possible to directly alter individual watermark bits or copy the watermark information.

VI. CONCLUSION
This paper presents an attack on the SDWCQ method, a recently published watermarking scheme for copyright protection. The attack exploits knowledge of the scheme's implementation and the lack of protection of the embedding locations to completely remove the watermark with high PSNR. Further, we propose a simple modification to SDWCQ which occludes the quantized coefficients' locations, inhibiting the attack. However, the modified SDWCQ scheme is also prone to a targeted attack. We highlight the need for a detailed security analysis, assuming the attacker is familiar with the watermarking scheme's implementation. We expect several quantization-based watermarking schemes to be vulnerable to similar attacks. Evaluation of the robustness against common signal processing operations alone is insufficient for watermarking schemes in the copyright protection scenario.

REFERENCES
[1] W.-H. Lin, S.-J. Horng, T.-W. Kao, P. Fan, C.-L. Lee, and Y. Pan, "An efficient watermarking method based on significant difference of wavelet coefficient quantization," IEEE Trans. Multimedia, vol. 10, no. 5, pp. 746-757, Aug. 2008.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.81-86.

Perception of Beacon in Localization in Wireless Sensor Networks


R. Krishnamoorthy, K. Aanandha Saravanan and N. Vignesh
Veltech Dr.RR & Dr.SR Technical University, Avadi, Chennai
Email: Krishnamoorthy.mkr@gmail.com
Abstract: Pervasive computing and sensor networks are emerging as key application drivers for wireless networks. Localization is of fundamental importance to pervasive computing and sensor networks because the former operates on devices based on their physical proximity and the latter names data and organizes the network in terms of physical location. Beacons (known-location nodes) are one key approach to achieving localization in wireless networks. In this paper we propose Beacon Movement Detection schemes, namely location-based, neighbor-based, signal-strength-binary, and signal-strength-real schemes, for localization accuracy, based on the automatic monitoring, identification, and removal of unreliable beacons from the localization module, under the assumption that there are unnoticed changes of locations of some beacons in the system which affect localization accuracy. The effect of RSSI error in the signal-strength-real scheme can be suppressed by the individual diversity difference coefficient and distance difference coefficients, and a distance difference localization equation is defined based on analysis models of maximum likelihood estimation and RSSI, which improves localization accuracy.

I. INTRODUCTION
Wireless sensor networks (WSNs) are essentially intended to observe spatial-temporal characteristics of the physical world. Locations of sensor nodes are fundamental to providing location stamps, locating and tracking objects, forming clusters, and facilitating routing, etc. However, a priori knowledge of locations is unavailable in large-scale and ad-hoc deployments, and a pure-GPS (Global Positioning System) solution is viable only with costly GPS receivers and good satellite coverage. In a general scenario, only a few nodes (called anchors) are aware of their positions either through manual configuration or by being equipped with GPS receivers, and the others (called unknown nodes) have to estimate their positions by making use of the positions of anchors.
Localization algorithms in WSNs are broadly divided into range-free approaches and range-based approaches. Range-free approaches normally rely on proximity, near-far information, or less accurate distance estimation to infer the locations of unknown nodes, and range-based approaches require accurate distance or angle measurements to locate the unknown nodes. Both approaches must rely on the positions of anchor nodes and some measured/estimated parameters, and the localization accuracy depends on the accuracy of reference positions and relative parameters. In most localization systems, we assume that there are sets of beacon sensors which may or may not be aware of their locations and can periodically transmit/receive short broadcast packets. By evaluating the distances, angles of arrival, or signal strengths of these broadcast packets, we can estimate the locations of objects by triangulation or pattern matching. Under such an architecture, we observe that most existing works have an underlying assumption that beacons are always reliable. Based on this observation, this paper proposes a new Beacon Movement Detection (BMD) scheme. The localization system itself is unaware of a beacon movement event. With unnoticed beacon movement events, the topology of the sensor network may be different from what it is supposed to be, and thus a localization algorithm may lose its accuracy. In this paper, we assume that beacons are static under normal circumstances, but occasional beacon movement events are not unusual. This is true especially in a wireless sensor network. The BMD problem involves two issues. First, we need to monitor and identify the beacons that change their locations unexpectedly. Second, the result has to be given to the positioning Module to reduce the impact of movement on localization accuracy.
Based on this assumption, we propose four schemes. The first, the location-based (LB) scheme, tries to calculate each beacon's current location and compares the result with its predefined location to decide if it has been moved. In the second, the neighbor-based (NB) scheme, beacons keep track of their nearby beacons and report their observations to the BMD Module to determine if some beacons have been moved. In the third, the signal strength binary (SSB) scheme, the change of signal strengths of beacons is exploited. In the last, the signal strength real (SSR) scheme, the BMD Module collects the sum of reported signal strength changes of each beacon to make decisions. Note that only the first scheme assumes that the original locations of beacons are known in advance. The other three schemes do not assume any a priori knowledge of the original locations of beacons. In real conditions, signal strengths may be influenced by many factors, such

as hardware difference, multipath propagation, and signal fading. To evaluate the proposed BMD schemes, we adopt a close-to-reality radio irregularity model to simulate the decay of signal strengths. This model has been shown to be able to reflect the propagation of radio signals, especially in indoor environments. The proposed schemes show their capability to improve the localization accuracy in events of beacon movement.

II. PREVIOUS WORK
There are two main approaches for localization: multilateration and pattern matching. Multilateration is a process of finding the location of an object based on measuring the distances or angles of three or more signal sources at known coordinates. In a special case of multilateration, triangulation, ultrasonic sensors are used to estimate the location and orientation of a mobile device. A distributed positioning system called the Ad Hoc Localization System has been proposed, where some beacons are aware of their own locations; the former are used to determine the positions of the latter. All the above systems require special hardware to support localization. Localization using pattern-matching techniques is based on the observation that the localization task can be achieved by off-the-shelf communication hardware, such as WiFi-enabled mobile devices. Such localization systems are more cost-effective. Pattern-matching localization does not rely on any range estimation between mobile devices and infrastructure networks. For example, a system can be based on WiFi access points at unknown locations to serve as beacons.

All the above works assume that beacons are reliable. In reality, some beacons may be moved to locations where they are not supposed to be without being noticed. Some beacon signals may be blocked by new obstacles deployed after the training phase, making their signal strengths untrustworthy. Some beacons may even conduct malicious attacks if they are compromised. A malicious beacon is one which is tampered with or compromised by an adversary and which can provide false distance or angle measurements. A malicious attack can be conducted individually or cooperatively. The major sources of unreliability come from unnoticed movement of some of these tiny beacons or unnoticed deployment of obstacles after the training phase, which may lower some beacons' signal quality. However, signal quality from beacons can always be correctly measured, unless they are interfered with by noise.

Fig. 1. System Model

Let us assume a sensing field in which a set of beacons H = {H1, H2, ..., Hn} is deployed for localization purposes. Depending on the scheme, the beacons' locations may or may not be known in advance. Periodically, each beacon will broadcast a packet. To determine its own location, an object will collect packets from its neighboring beacons and send a signal strength vector S = {s1, s2, ..., sn} to an external positioning Module, where si is the signal strength of the packet from Hi. If it cannot hear from Hi, we let si = smin, where smin denotes the minimum signal strength. The positioning Module can then estimate the object's location based on S.
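A small sketch of this object-side measurement step; the floor value standing in for smin is an assumed number:

S_MIN = -100.0   # dBm floor used for unheard beacons (assumed value)

def signal_vector(heard, beacon_ids):
    # heard: dict mapping beacon id -> measured signal strength.
    return [heard.get(b, S_MIN) for b in beacon_ids]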

To solve the BMD problem, we will monitor each beacon from time to time. The content of an observation will depend on the corresponding BMD scheme. The BMD Module is capable of calculating a set BD of suspicious beacons. The result is then sent to the calibration algorithm in the positioning Module. Fig. 1 illustrates our system model.
Considering the following reasons, we define the tolerable region of each beacon Hi as the geographic area within which a slight movement of Hi is acceptable. First, radio signal tends to fluctuate from time to time. Second, slight movement of a beacon should not change the signal strength much unless an obstacle is encountered. Third, ignoring the data of a slightly moved beacon in the location database will decrease the localization accuracy due to fewer beacons helping the localization procedure. So the slight movement of beacons is constrained by the tolerable regions. As a result, the unreliable set BD only contains those beacons which are moved out of their tolerable regions. The sizes of tolerable regions are application dependent, which is beyond the scope of this work. For simplicity, tolerable regions are assumed to be circles centered at the beacons, all of the same radius.

III. PROPOSED BEACON MOVEMENT DETECTION SCHEMES
To solve the BMD problem, we propose four detection schemes, namely the LB, NB, SSB, and SSR schemes. These schemes differ in their local processing rules at the beacons and the corresponding decision algorithms at the BMD Module. In the LB scheme, each beacon reports its observed signal strengths, which are used by the BMD Module to compute each beacon's current location. The result is used to compare against its original location. In the NB scheme, each beacon locally decides if some neighboring beacons have moved into or out of its communication coverage range and reports its binary observations to the BMD Module. The SSB scheme is similar to the NB scheme, but the definition of movement


A. Location Based Scheme
The LB scheme assumes that the initial locations of beacons are known by the BMD Module in advance and utilizes localization techniques to monitor the locations of beacons. Each beacon is in charge of reporting the observed signal strength values of its neighbors to the BMD Module, and the trilateration technique is applied to estimate beacon locations. Suppose a beacon is moved out of its tolerable region. Since the other beacons are unmoved, they can help to determine the new location of the moved beacon. One point worth mentioning is that, because of the beacon's movement, the estimated locations of other beacons may also change by a certain degree, so the outcome depends on the observations of all beacons. The LB scheme is sensitive to the performance of the adopted localization system: if the density of beacons is too low or signal strengths are too unstable, movement detection cannot perform well. Since this scheme uses beacons to localize each other, moved beacons will also contribute errors to the mutual localization process and thus influence the decisions. After the BMD Module receives the observations from all beacons, it estimates their possible locations under the current mutual observations. Then the beacon with the longest moved distance is selected. If that beacon's current location is out of its tolerable region, it is included in BD and any observations contributed by it are removed. This process is repeated until the most suspicious beacon is regarded as an unmoved one.
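To make the localization step concrete, the following MATLAB sketch shows one standard way the BMD Module could trilaterate a beacon from the positions of its unmoved neighbors and the distances estimated from signal strengths. This is a minimal illustration under our own assumptions; the function name and inputs are not from the paper.

% Least-squares trilateration sketch (illustrative, not the authors'
% implementation). anchors is an m-by-2 matrix of known beacon
% positions and d is an m-by-1 vector of estimated distances.
function p = trilaterate(anchors, d)
    m = size(anchors, 1);              % number of reference beacons, m >= 3
    % Subtract the m-th range equation from the others to linearize:
    % 2(xi-xm)x + 2(yi-ym)y = (xi^2+yi^2-xm^2-ym^2) - (di^2-dm^2)
    A = 2 * (anchors(1:m-1,:) - repmat(anchors(m,:), m-1, 1));
    b = (sum(anchors(1:m-1,:).^2, 2) - sum(anchors(m,:).^2)) ...
      - (d(1:m-1).^2 - d(m)^2);
    p = (A \ b)';                      % least-squares estimate [x y]
end

For example, trilaterate([0 0; 100 0; 0 100; 80 90], [50; 70; 60; 45]) returns the position most consistent with the four range estimates.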
B. Neighbor Based Scheme
In the previous LB scheme, beacons report observations derived directly from the received signal strengths, which makes the scheme sensitive to any slight movement. Hence, the NB scheme is designed to hide the signal strength information and to report only binary observations to the BMD Module. In this scheme, each beacon Hi monitors the change of its neighborhood relations with the other beacons in its coverage area. The neighborhood relation of Hi at time t is defined as the set of beacons from which Hi can hear packets at time t.

Fig. 2. NB scheme: (a) the original relation, (b) a movement scenario, (c) observation matrix, (d) another movement scenario, and (e) the observation graph.

An example with four beacons is shown in Fig. 2a, where the coverage of each beacon is a circle; the corresponding observation matrix is shown in Fig. 2c, and another movement scenario is shown in Fig. 2d. In the NB scheme we assume that unreliable beacons are only a small proportion of all beacons. This assumption is reasonable because, in practice, beacons are usually moved by accident. Hence, we try to construct a set BD that is as small as possible. First, we transform the observation matrix into an observation graph, as shown in Fig. 2e. After constructing the graph, the NB scheme adopts the following heuristic: the higher a beacon's in-degree in the graph, the more suspicious it is of having been moved. The Module therefore sorts the vertices of the graph by the in-degrees of their uncovered edges in descending order and selects the first one; this node is included in BD if any edge incident to it has not yet been covered. After selecting the most suspicious vertex, the vertices are sorted again, and this process is repeated until a vertex cover is found.
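A minimal MATLAB sketch of this greedy covering rule, assuming the observation graph is supplied as a 0/1 adjacency matrix (our illustration; the variable and function names are not from the paper):

% Greedy vertex-cover heuristic for the NB scheme (illustrative sketch).
% G(i,j) = 1 denotes an uncovered edge from beacon i to beacon j.
function BD = nb_vertex_cover(G)
    BD = [];
    while any(G(:))                 % while uncovered edges remain
        indeg = sum(G, 1);          % in-degrees over the uncovered edges
        [~, v] = max(indeg);        % most suspicious beacon
        BD = [BD, v];               % include it in the unreliable set
        G(v, :) = 0;                % cover every edge incident to v
        G(:, v) = 0;
    end
end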

C. Signal Strength Binary Scheme
In the previous NB scheme, only the neighborhood relations between beacons are considered; the LB scheme is more accurate because it considers the change of beacon locations. In the SSB scheme, we assume that beacons can measure the signal strengths of packets from their neighbors. However, beacons do not report these measurements to the BMD Module directly. Instead, each beacon Hi evaluates the amount of signal strength change of each neighboring beacon Hj locally and only reports a binary value to the BMD Module.


Let the observed signal strength by Hi on Hj at time t be Si,j, modeled as

Si,j = Pt − PL(di,j) + N(0, σ),

where Pt is the transmit power, which may vary among different hardware, PL(di,j) is the path loss, which has a non-isotropic and continuous property, and N(0, σ) is a zero-mean normal random variable whose standard deviation σ stands for dynamic shadowing noise. The observation of Hi on Hj is a binary value indicating whether the observed change of Si,j exceeds a given threshold.

For each neighboring beacon Hj, Hi measures the average signal strength at each of a set of sampling points, assuming that Hj has moved to that sampling point. Note that if beacon Hi does not hear any signal from Hj at a sampling point, we let the average signal strength be Smin. The major difference between the NB scheme and the SSB scheme is the calculation of the local observation; however, the ambiguity property still holds.

Fig. 3. (a) Evaluation of hit and false probabilities for the SSB scheme under different SSB thresholds.

D. Signal Strength Real Scheme
Similarly to the previous SSB scheme, the SSR scheme assumes that beacons can measure the signal strengths from their neighboring beacons. However, in this scheme, the real signal strength variations, instead of binary values, observed by a beacon are reported to the BMD Module. Note that in both schemes the RSSI cannot be measured perfectly accurately. Specifically, the observation is the absolute amount of signal strength change of each neighboring beacon.

E. Difference Correction Scheme
Let the actual distances between the object node O and the beacons H1, H2, ..., Hn be Do1, Do2, ..., Don, respectively, and let the measured distances between O and the beacons be D1, D2, ..., Dn, respectively. The difference distance localization is obtained from the difference correction and maximum likelihood estimation methods.

Fig. 3. (b) Evaluation of hit and false probabilities for the SSR scheme under different SSR thresholds.

IV. SIMULATION RESULTS

The radio model introduces the variance of sending power (VSP) to capture the impact of hardware differences and the remaining battery of a device on its transmit power:

Pt = Pt0 × (1 + N(0, VSP)),

where Pt0 denotes the initial transmit power and N(0, VSP) is a zero-mean normal random variable with standard deviation VSP. The parameter VSP controls the degree of variance of sending power among different beacons. Each beacon randomly selects its transmit power when the simulation starts.

The irregularity of signal fading is a common phenomenon, yet most path loss models do not take this non-isotropic property of signal coverage into consideration. To capture this effect, RIM imports a degree of irregularity (DOI) to control the amount of path loss in different directions:

PLi(d) = PL(d) × Ki,

where Ki models the level of irregularity at degree i and PL(d) is the optimal obstacle-free path loss formulation,

PL(d) = PL(d0) + 10η log(d/d0),

with d0 the reference distance and η the path loss exponent. The received signal strength at a distance d is then modeled by

S(d) = Pt − PLi(d) + N(0, σ).

The sensing field is a 300 m × 300 m area. There are 20 beacons randomly deployed on this field, with the restriction that the distance between any two beacons is at least 5 m. This restriction avoids beacons being placed too close together, which would reduce the detection capability of the network. When a scenario violating the restriction is generated, we discard it and generate another one.
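The following MATLAB fragment sketches this radio model as reconstructed above (log-distance path loss perturbed per direction by the DOI and transmit power perturbed by the VSP). All numeric values are illustrative, not the paper's settings.

% Sketch of the RIM-style received-signal-strength model (illustrative).
Pt0   = 15;      % initial transmit power [dBm] (example value)
VSP   = 0.1;     % variance of sending power
DOI   = 0.05;    % degree of irregularity
PLd0  = 40;      % path loss at the reference distance d0 [dB]
d0    = 1;       % reference distance [m]
eta   = 3;       % path loss exponent
sigma = 2;       % shadowing standard deviation [dB]

Pt = Pt0 * (1 + VSP * randn);        % VSP-perturbed transmit power
Ki = ones(1, 360);                   % per-degree irregularity coefficients
for i = 2:360                        % simple random walk over directions
    Ki(i) = Ki(i-1) + (2*rand - 1) * DOI;
end

d     = 50;                          % transmitter-receiver distance [m]
theta = 90;                          % direction of the receiver [degrees]
PL    = (PLd0 + 10*eta*log10(d/d0)) * Ki(theta);
S     = Pt - PL + sigma * randn;     % received signal strength [dBm]

Note that the full RIM model also enforces continuity of Ki at 360°, which this short random-walk sketch omits.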


All results are averaged over 20 experiments. To reduce the influence of noise, the signal strength is calculated from the average of 50 HELLO packets.

Fig. 4. Comparison of hit and false probabilities by varying the standard deviation σ.

Fig. 5. Comparison of hit and false probabilities by varying the degree of irregularity DOI.

Fig. 6. Comparison of hit and false probabilities by varying the varied sending power VSP of the RIM radio model.



Fig. 7. Error statistics of beacon nodes involved in Localization

V. CONCLUSION
In this paper, we have proposed Beacon Movement Detection (BMD) in wireless sensor networks for improving localization accuracy. The paper describes a situation where some beacons of the localization procedure are moved unexpectedly, called beacon movement events, and we propose to allow beacons to monitor each other to identify such events. Four schemes are presented for the BMD problem. Moreover, the SSB and SSR schemes exhibit some errors, which are improved by difference distance localization. As future work, the localization system could be extended to relocate the moved beacons.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.87-90.

Face Detection and Recognition Method Based on Skin Color and Depth Information
KAVITHA. R
VTP0479, M.TECH (VLSI)
Vel Tech Technical University, Avadi.
Email: kavisumm@gmail.com

Abstract—An improved face detection and recognition method based on skin color information and depth obtained by a binocular vision system is proposed in this paper. With this method, the face area is first detected using the Adaboost algorithm. Afterwards, the real face is distinguished from a fake one by using the skin color information and the depth data. Then, using the PCA algorithm, a specific face can be recognized by comparing the principal components of the current face to those of the known individuals in a facial database built in advance. This method was applied to a service robot equipped with a binocular camera system for real-time face detection and recognition experiments, and satisfactory results were obtained.

I. INTRODUCTION

Face detection and recognition technology has been widely discussed in relation to computer vision and pattern recognition, and numerous techniques have been developed owing to the growing number of real-world applications. For service robots, face detection and recognition are extremely important, and the emphasis must be put on security, real-time operation, and a high ratio of detection and recognition.

For real-time face detection, P. Viola presented in 1997 a machine learning approach based on Adaboost and the cascade algorithm, which is capable of detecting faces in images in real time. Based on this work, many researchers began to study boosting algorithms. Stan Z. Li proposed a multi-view face detection algorithm based on FloatBoost. Then P. Viola presented an asymmetric Adaboost algorithm which can be used for fast image retrieval and face detection. For face recognition, the eigenface approach was introduced by Turk and Pentland. This approach is based on PCA and was later refined by Belhumeur et al. and Frey et al.

However, most of the methods above are based on 2D face images and are easily affected by changeable factors such as pose, illumination, expression, makeup, and age. In order to overcome these problems, 3D face detection and recognition methods have developed rapidly in recent years. Bronstein et al. presented a recognition framework based on 3D geometric invariants of the human face. Wang et al. described a real-time algorithm based on fisherfaces. Though these 3D methods, which emphasize the shape of the human face, are robust in variable environments, they overlook the texture information of the face. Therefore, in order to get better efficiency, face data should be sufficiently used and both 2D and 3D face information should be considered.

In this paper, we propose an improved method which employs the skin color information and depth data of the human face for detection and the PCA (principal components analysis) algorithm for recognition. Finally, the experimental results on a service robot are given and discussed.

II. IMPROVED FACE DETECTION METHOD

A. Adaboost algorithm in OpenCV
The Adaboost algorithm is often used to detect the face area in an image. The main idea of this algorithm is to boost up a large number of generally weak classifiers to form a strong classifier, which has strong classification ability. Since OpenCV is a great tool for image processing and provides several examples including face detection, we use OpenCV as a development tool to implement this algorithm for real-time processing.

Figure 1. Example photo of face detection by using Adaboost algorithm

However, as shown in Fig. 1, though the face area can be easily detected by using only the Adaboost algorithm in OpenCV, it cannot distinguish a fake face, such as a picture face, from a real


human face. So, the detection algorithm needs to be improved for real-time applications such as a vision-guided service robot. From the security point of view, the robot needs to distinguish a picture face from a real one; it is an important ability for the service robot to avoid being cheated.

B. Face detection with skin color and depth data
In order to avoid being cheated by a picture face, skin color and depth data should be considered [1]. The skin color model in a color image can be given as:

r = R / (R + G + B),  g = G / (R + G + B)    (1)

Y = 0.30R + 0.59G + 0.11B    (2)

in which the RGB values are those of the original pixels. When the values of r, g, and Y satisfy the following formula (3), the color information of the human skin area can be determined:

0.333 < r < 0.664,  0.246 < g < 0.398,  |g − r| < 0.5,  Y > 40    (3)

The advantage of this model is better color dot density detection. By using this model, real faces with skin color information and drawing faces without skin color information can be distinguished, as shown in Fig. 2.

Figure 2. Result of skin color detection
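A short MATLAB sketch of this skin-color test, using the thresholds as reconstructed in formula (3); the input file name is a placeholder, and the Image Processing Toolbox is assumed:

% Skin-color pixel classification per formulas (1)-(3) (illustrative).
img = im2double(imread('face.bmp'));    % assumed test image file
R = img(:,:,1); G = img(:,:,2); B = img(:,:,3);
den = R + G + B + eps;                  % guard against division by zero
r = R ./ den;                           % formula (1)
g = G ./ den;
Y = 255 * (0.30*R + 0.59*G + 0.11*B);   % formula (2) on a 0..255 scale
skin = (r > 0.333 & r < 0.664) & ...    % formula (3)
       (g > 0.246 & g < 0.398) & ...
       (abs(g - r) < 0.5) & (Y > 40);
imshow(skin);                           % binary skin-color mask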

After this step, face detection can eliminate general drawing faces without skin color; a picture of a real face, however, is hard to eliminate. To solve this problem, we use the 3D information of the face [2]. Since there is a disparity field between the two stereo images of the binocular camera, the depth image of a face can be estimated. Fig. 3 gives an example of the depth images and the right images obtained from the binocular camera for a specific person's face; what is used here is the depth data.

Here we define Di (i = 1, ..., N) as the depth value of the ith pixel in the face area, where N is the total number of pixels in the face area. As can be observed in Fig. 4, the depth values of a real face generally vary over a much larger range than those of a picture face, which lies in a plane. Considering this, we define the average depth value of all skin pixels, AvgD, as:

AvgD = (1/N) Σ Di, i = 1, ..., N    (4)

Then the variance of the depth values of the face can be calculated as:

di = Di − AvgD    (5)

S = (1/N) Σ di², i = 1, ..., N    (6)

In formula (5), di represents the difference between the depth value of each skin pixel and the average depth value, and S gives the variance decision value of the detection result. Generally, di varies much more on a real face than on a picture face with the same average value, as depicted in Fig. 5. So the value S in (6) is appropriate for the decision.
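The decision of formulas (4)-(6) amounts to a few lines of MATLAB; the sketch below uses the threshold of 4000 reported in Section IV (the function name is ours):

% Depth-variance decision for real/picture faces (illustrative sketch).
% D is a vector with the depth values of the N skin pixels of the face.
function isReal = real_face_decision(D)
    N    = numel(D);
    AvgD = sum(D) / N;          % formula (4): average depth
    d    = D - AvgD;            % formula (5): per-pixel difference
    S    = sum(d.^2) / N;       % formula (6): variance decision value
    isReal = S > 4000;          % a real face shows much larger variation
end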

Figure 4. Depth of real face (left) and picture face (right)



Figure 3. Depth images and right images of a face

Figure 5. Average depth of skin-color pixels: real face (left) and picture face (right)

III. FACE RECOGNITION BASED ON PCA

The PCA algorithm is based on the K-L transform, which is a useful orthogonal transformation [5]. After the K-L transform, an image can be dimensionally reduced to a point in a feature subspace. Any face image can be projected onto this feature subspace to obtain a set of coordinate coefficients, which can be used as a basis for face recognition. Such a feature subspace is also known as the eigenface space, so the method is also known as the eigenface method.






In our work, by using the PCA algorithm, a specific face can be recognized by comparing the principal components of the current face to those of the known individuals in a facial database built in advance. The method is applied to a service robot equipped with a binocular camera system for real-time face recognition experiments.

The detailed procedure of the PCA algorithm is described below [3]. First, build a training database of human faces. Second, represent each image of the database as a vector; the average face vector is calculated and then subtracted from the vector of each face image. Third, calculate the eigenface vectors and the eigenface space, and project the training faces into the eigenface space to obtain their coordinate coefficients. Finally, project the test face image into the eigenface space to obtain its coordinate coefficients; the Euclidean distances between the coordinate coefficients of the test image and those of the images in the database are calculated, and the test image is classified by the nearest distance.
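The procedure above can be summarized by the following MATLAB sketch of eigenface training and nearest-neighbor matching. This is our illustration, not the paper's code; per the paper's settings, each face would be a 50 × 50 grayscale image reshaped into a column vector and K = 150 eigenfaces would be kept.

% Eigenface training and nearest-neighbor classification sketch.
% faces is a (w*h)-by-M matrix (one training image per column) and
% test is a (w*h)-by-1 probe vector; K is the number of eigenfaces.
function label = pca_recognize(faces, test, K)
    M   = size(faces, 2);
    avg = mean(faces, 2);                  % average face vector
    A   = faces - repmat(avg, 1, M);       % subtract the mean
    % Small M-by-M covariance trick: eigen-decompose A'A instead of AA'
    [V, D]  = eig(A' * A);
    [~, ix] = sort(diag(D), 'descend');
    U = A * V(:, ix(1:K));                 % top-K eigenfaces
    U = U ./ repmat(sqrt(sum(U.^2)), size(U,1), 1);   % unit columns
    W     = U' * A;                        % training coordinates
    wTest = U' * (test - avg);             % probe coordinates
    dists = sqrt(sum((W - repmat(wTest, 1, M)).^2));  % Euclidean distances
    [~, label] = min(dists);               % index of the nearest face
end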

IV. EXPERIMENTS AND RESULTS

In the experiment, a binocular camera (produced by the Point Grey company in Canada, model BB2-08S2), a PC (AMD 64 processor, 991 MHz), and a service robot equipped with the same binocular camera are used. The robot uses three wheels to move and can perform speech interaction, as shown in Fig. 6. The proposed method was programmed and tested on the PC and the robot; when the code runs on the robot, the recognition results are presented by voice interaction.

Figure 6. Experimental environment of a service robot with binocular camera vision system

A. Experiments for face detection
First, a drawing face on paper is presented to the binocular camera at different head rotation angles. Second, a real face and a picture face (of the same person) are both presented to the camera. The detection results are shown in Fig. 7 and listed numerically in Table I. Here, the variance decision value S in formula (6) was set to 4000 for the distinction, and one detection takes nearly 1 s. It can be observed that the real face and the fake one can be distinguished accurately within a certain range of head angles. The detection results are really good for the drawing faces; in other words, nearly all drawing faces are excluded in face detection. However, when the angle increases, the results are not always satisfactory. The possible reason is that the pixels of human skin are difficult to extract accurately in a changeable environment (illumination, race, and so on).

Figure 7. Face detection of real face and picture face

TABLE I. RESULTS OF FACE DETECTION

Angle of head rotation | Total | Success | Detection rate | Average time (once)
-30°                   |  50   |   49    |      98%       |     1022.34 ms
-15°                   |  50   |   50    |     100%       |      980.65 ms
  0°                   |  50   |   50    |     100%       |      976.24 ms
 15°                   |  50   |   50    |     100%       |      989.21 ms
 30°                   |  50   |   49    |      98%       |     1006.15 ms

B. Experiments for face recognition on Robot
For the real-time face recognition experiments on the robot, a training face database was built in advance. The face images in it meet the following conditions: (a) they are all grayscale images; (b) each image is 50 × 50 in size; (c) the posture of each face is allowed only small changes. One of the experimental training face databases is shown in Fig. 8. In the experiment, real faces of different persons are classified: some of the faces are stored in the database and are known to the robot, and some of the faces are 'strangers' to it. Several real faces and picture faces, some known and some unknown, are presented to the camera.


Figure 8. Faces in the training database

For the use of the PCA algorithm, the largest 150 eigenvalues and their eigenvectors are chosen to build the eigenface space. In the experiment, Euclidean distance classification is used; that is, the Euclidean distance between the projection of the test face and the various average faces is calculated, and the test sample is assigned to the class with the smallest Euclidean distance.

One of the successful face recognition results for a specific person is shown in Fig. 9. In Table II, the total face recognition trials and the successful recognitions are reported for different head tilt angles, from which the face recognition rate can be calculated; the average time for one recognition was also recorded. Although the recognition percentage is not very high, the method is fast enough for the robot to perform services.

REFERENCES

[1] S. Kosov, K. Scherbaum, K. Faber, T. Thormahlen, and H.-P. Seidel, "Rapid stereo-vision enhanced face detection," in Proc. IEEE International Conference on Image Processing, 2009, pp. 1221-1224.
[2] S. Kosov, T. Thormahlen, and H.-P. Seidel, "Accurate real-time disparity estimation with variational methods," in Proc. International Symposium on Visual Computing, 2009, pp. 796-807.
[3] T.-H. Sun, M. Chen, S. Lo, and F.-C. Tien, "Face recognition using 2D and disparity eigenface," Expert Systems with Applications, vol. 33, no. 2, 2007, pp. 265-273.
[4] R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," Springer-Verlag Berlin Heidelberg, LNCS 2781, 2003, pp. 297-304.
[5] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, 2006, pp. 1-15.

Figure 9. Face recognition result of the robot

TABLE II. FACE RECOGNITION WITH DIFFERENT HEAD POSTURES

Angle of head rotation | Total | Success | Recognition rate | Average time (once)
-30°                   |  50   |   40    |       80%        |      870.5 ms
-15°                   |  50   |   42    |       82%        |      830.23 ms
  0°                   |  50   |   44    |       88%        |      785.72 ms
 15°                   |  50   |   42    |       84%        |      805.25 ms
 30°                   |  50   |   41    |       82%        |      826.36 ms

V. CONCLUSION

The proposed method improves 2D face detection techniques by additionally considering the facial 3D information obtained by a binocular camera vision system. The skin color information and depth data of the human face are employed for detection, and the PCA algorithm for recognition. It can not only detect a person close to the camera but also distinguish between a real face and a picture face. Applying the method to a service robot, a face presented to the camera can be judged as a real face or a picture face, and recognition is then performed only on real faces. The results show that this method is fast enough for the robot to perform services, although the recognition rate is not yet high. Currently, the recognition accuracy may be affected by illumination, expressions, and mechanical vibrations in some cases; this is left for future investigation.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.91-93.

Privacy-Preserving Updates to Anonymous and Confidential Databases using Cryptography with ARM
T. Vanitha
ME-Communication Systems, S.A. Engineering College, Chennai-77.
Email: everfreshvani@gmail.com

Abstract—Databases represent an important asset for many applications, and their security is crucial; in today's scenario there is increased concern for privacy and confidentiality. Databases record a variety of information about individuals and are maintained by database owners and users respectively. Suppose Alice owns a k-anonymous database and needs to determine whether her database, when inserted with a tuple owned by Bob, is still k-anonymous. Suppose moreover that access to the database is strictly controlled: allowing Alice to directly read the contents of the tuple breaks the privacy of Bob (e.g., a patient's medical record). Thus, the problem is to check whether the database inserted with the tuple is still k-anonymous, without letting Alice and Bob know the contents of the tuple and of the database, respectively. To preserve and secure this database and its updates, cryptography is used. The existing cryptography is software based; the proposed system is hardware and software based, using an ARM processor. This system provides protection against brute force rewind attacks, offline parallel attacks, and other cryptanalysis attacks.

Keywords—Privacy, anonymity, data management, secure computation.

I. INTRODUCTION
Today data confidentiality is particularly relevant because of the value, often not only monetary, of the data, and the updates must be preserved as well. For example, medical data collected by following the history of patients over several years may represent an invaluable asset that needs to be adequately protected. Such a requirement has motivated a large variety of approaches aimed at better protecting data confidentiality and data ownership. Relevant approaches include query processing techniques for encrypted data and data watermarking techniques. Data confidentiality is not, however, the only requirement that needs to be addressed.

Fig. 1. Anonymous Database System

To address the problem of privacy via data anonymization, one well-known technique, k-anonymization, is used [4], [5]. This technique protects privacy by modifying the data so that a given data value cannot be linked, with high probability, to a specific individual. The problem arises when data stored in a confidential, anonymity-preserving database need to be updated. The operation of updating such a database, e.g., by inserting a tuple containing information about a given individual, introduces two problems concerning both the anonymity and confidentiality of the data stored in the database and the privacy of the individual to whom the data to be inserted are related: (i) is the updated database still privacy-preserving? and (ii) does the database owner need to know the data to be inserted? These two problems are overcome by the two protocols used in the existing work.

Although confidentiality and privacy are often used as synonyms, they are different concepts. Data confidentiality concerns the difficulty (or impossibility) for an unauthorized user to learn anything about data stored in the database; usually, confidentiality is achieved by enforcing an access policy, or possibly by using some cryptographic tools. Privacy relates to what data can be safely disclosed without leaking sensitive information regarding the legitimate owner [3]. (There are many ways to perform data anonymization; we focus only on the k-anonymization approach [4], [5].)

To better understand the difference between confidentiality and anonymity, consider the case of a medical facility connected with a research institution. Suppose that all patients treated at the facility are asked, before leaving the facility, to donate their personal health care records and medical histories to the research institution (under the condition that each patient's privacy is protected), which collects the records in a research database. To guarantee maximum privacy to each patient, the medical facility sends the research database only an anonymized version of each patient record. Once this anonymized record is stored in the research database, the non-anonymized version of the record is removed from the system of the medical facility. Thus the research database used by the researchers is anonymous.

The purpose of encryption is to prevent third parties from recovering the original information. This is particularly important for sensitive data like credit card numbers.


II. HARDWARE CRYPTOGRAPHY

A. Problem Statement
Figure 1 captures the main participating parties in our application domain. We assume that the information concerning a single patient (or data provider) is stored in a single tuple, and that DB is kept confidentially at the server. The users in Figure 1 can be treated as medical researchers who have access to DB. Since DB is anonymous, the data provider's privacy is protected from these researchers. (Note that, to follow the traditional convention, in the following we use Bob and Alice to represent the data provider and the server, respectively.)

Fig. 2. Typical cryptography hardware system

Figure 2 shows the basic hardware cryptography system. It involves two keys, a public key and a private (symmetric) key, assigned to Alice and Bob respectively. Embedded cryptography means that the cryptography is built into the embedded system, providing security for the embedded devices [1], [2]. Embedded systems commonly face attacks such as brute force rewind attacks, offline parallel attacks, and other cryptanalysis attacks. This system handles two types of databases: suppression-based and generalisation-based databases.

The modification of the anonymous database DB can be performed as follows. The existing cryptography is software based; the basic concept used here is embedded cryptography, and the objective is to design hardware- and software-based cryptography. The two parties, Alice and Bob, are realized as two ARM processors that hold the database and transfer the information; another ARM processor is considered as an intruder (Eve). Note that, to assure a higher level of anonymity to the party inserting the data, we require that the communication between this party and the database occurs through an anonymous connection, as provided by protocols like Crowds [27] or Onion routing [26].

The databases are analyzed using the Keil µVision software with embedded C. The system also preserves the updates of the database. We propose hardware to protect the updates of the databases and to preserve the privacy of the individual users of the database management system.

Table 1. Anonymous Database System Requirements

Requirement              | Objective                              | Protocol
Anonymous connection     | Protect IP address and sensitive info  | Crowds [27], Onion Routing [26]
Anonymous authentication | Protect sensitive authentication info  | Policy-hiding access control [20]
Anonymous update         | Protect non-anonymous data             | Proposed in this paper

Figure 3 shows the various possible security boundaries for embedded devices. The most common security mechanisms are IPsec, firewalls, etc.

Figure 1 summarizes the various phases of a comprehensive approach to the problem of anonymous updates to confidential databases, while Table 1 summarizes the required techniques and identifies the role of our techniques in such an approach.
B. Proposed Solutions
The goal is to create hardware- and software-based cryptography. Encryption software executes an algorithm that is designed to encrypt computer data in such a way that it cannot be recovered without access to the key. Software encryption is a fundamental part of all aspects of modern computer communication and file protection, and may include features like file shredding.

Fig. 3. Possible security boundaries

Unbreakable hardware cryptography is implemented in our paper. Almost all practical cryptosystems are theoretically breakable given sufficient time and computational resources; however, there is one system which is even theoretically unbreakable: the one-time pad. A one-time pad

requires exchanging a key that is as long as the plaintext. However impractical, it is still used in certain applications which necessitate very high-level security. The security of one-time-pad systems relies on the condition that the keys are generated using truly random sources.
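As an illustration, a one-time pad reduces to an XOR of the plaintext with an equally long key, as in the MATLAB sketch below. Note that randi is only pseudorandom; as stated above, a deployed one-time pad requires a truly random key source.

% One-time-pad sketch: XOR the message with a random key of equal length.
msg = uint8('patient record #42');             % plaintext bytes (example)
key = uint8(randi([0 255], 1, numel(msg)));    % key as long as the message
ct  = bitxor(msg, key);                        % encryption: C = M xor K
pt  = bitxor(ct, key);                         % decryption: M = C xor K
disp(char(pt));                                % recovers the plaintext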
III. ADVANTAGES
Hardware-based encryption, when implemented in a secure manner, is demonstrably superior to software-based encryption. That said, hardware-based encryption products can also vary in the level of protection they provide against brute force rewind attacks, offline parallel attacks, or other cryptanalysis attacks.

Fig. 5. Execution time of generalisation-based updates

Hardware-based encryption has other benefits for users. Software-based encryption typically runs much more slowly than hardware-based encryption. IronKey devices, for example, are specially optimized for high-speed data transfer, performing at the top of their class by reading data at up to 29 megabytes per second and writing data at 18 megabytes per second.
Fig. 6. Prototype architecture overview

IV. ARCHITECTURE AND EXPERIMENTAL RESULTS

Our prototype of a Private Checker (that is, Alice) is composed of the following modules: a crypto module that is in charge of encrypting all the tuples exchanged between a user (that is, Bob) and the Private Updater. The modules are represented along with labeled arrows denoting what information is exchanged among them. Note that the functionality provided by the Private Checker prototype concerns checking whether the tuple insertion into the k-anonymous DB is possible; we do not address the issue of actually inserting a properly anonymized version of the tuple.

The simulation results are shown in the graphs.

Fig. 4. Execution time of suppression-based updates

Fig. 6. Prototype architecture overview

V. CONCLUSION

In this paper we have presented hardware-based cryptography to preserve the privacy of individuals and database owners and to maintain the two different kinds of updates, following the suppression-based and generalisation-based approaches. The implementation was done with the help of ARM processors. It provides security and confidentiality to embedded devices.

REFERENCES

[1] "Cryptography using ARM processor," IEEE Security and Privacy, vol. 5, no. 1, Jan.-Feb. 2006.
[2] "The hardware-based PKCS#11 standard using the RSA algorithm," IEEE Latin America Transactions (Revista IEEE America Latina), vol. 7, no. 2, June 2009.
[3] E. Bertino and R. Sandhu, "Database security - concepts, approaches and challenges," IEEE Transactions on Dependable and Secure Computing, vol. 2, no. 1, pp. 2-19, 2005.
[4] P. Samarati, "Protecting respondents' privacy in microdata release," IEEE Transactions on Knowledge and Data Engineering, vol. 13, no. 6, pp. 1010-1027, Nov./Dec. 2001.
[5] L. Sweeney, "k-anonymity: a model for protecting privacy," International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557-570, 2002.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.94-99.

Development of a MATLAB Simulation Environment for Vehicle-To-Vehicle and Infrastructure Communication Based on IEEE 802.11p
D.V.S. Ramanjaneyulu1 and G. Naga Jyothi2
1Assistant Professor, Vel Tech Dr. RR & Dr. SR Technical University, Chennai.
Email: 1anji.dvsr@gmail.com, 2jyothigoriparthi@gmail.com

Abstract—In this paper the IEEE 802.11p physical layer (PHY) is presented. A MATLAB simulation is carried out in order to analyze the baseband processing of the transceiver. Orthogonal Frequency Division Multiplexing (OFDM) is applied in this project according to the IEEE 802.11p standard, which allows transmission data rates from 3 up to 27 Mbps. Distinct modulation schemes, Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), and Quadrature Amplitude Modulation (QAM), are used according to the differing data rates. These schemes are combined with time interleaving and a convolutional error correcting code. A guard interval is inserted at the beginning of each transmitted symbol in order to reduce the effect of Intersymbol Interference (ISI). A Viterbi decoder is used for decoding the received signal. Simulation results illustrate the Bit Error Rate (BER) and Packet Error Rate (PER) for different channels.


I. INTRODUCTION

IEEE 802.11p defines an international standard for wireless access in vehicular environments. Generally, wireless access in vehicular environments comprises two distinct types of networking, V2V and V2I. Vehicular communication is categorized as a part of Intelligent Transport Systems (ITS). Vehicular communication networks will offer a wide range of applications, such as providing traffic management with real-time data for responding to road congestion. Moreover, finding a better path through access to real-time data is another advantage of a vehicular communication system, saving time and fuel and yielding large economic benefits.

IEEE 802.11a is intended for indoor environments with high data rate communication and low user mobility, whereas IEEE 802.11p is designed for operation at high user mobility (vehicular communication). In this paper an existing IEEE 802.11a PHY Simulink model is updated to obtain an 802.11p PHY model. Decreasing the signal bandwidth from 20 MHz to 10 MHz in IEEE 802.11p makes the communication more efficient for the highly mobile vehicular channel, for example by reducing the ISI caused by the multipath channel through a doubled guard interval. This means that the parameters in the time domain are doubled compared with the parameters of IEEE 802.11a [2].

The draft IEEE 802.11p standard defines MAC and PHY layers based on the OFDM technique for future vehicular communication devices, but the specification of the IEEE 802.11p PHY working in high-mobility environments still has the potential to be improved. The main goal of this project is the evaluation of the V2V and V2I communication PHY layer based on the IEEE 802.11p standard by MATLAB simulations. In this work we refer to the BER versus SNR statistics, which can be used as the basic reference for the physical layer of the IEEE 802.11p standard in all vehicular wireless network simulations. In order to get realistic simulation results, a special radio channel simulator is used. The following sections describe a MATLAB Simulink model that includes transmitter, receiver, and channel models.

II. FADING STATISTICS IN VEHICULAR MOBILE CHANNEL

Fading occurs due to multipath propagation in communication systems. Signals reach the receiver along several different paths that may have different lengths, corresponding to different time delays and gains; the time delays cause additional phase shifts in the main signal component. Therefore the signal reaching the receiver is the sum of several copies of the original signal with different delays and gains [4].
III. PRINCIPLE OF OFDM TRANSMISSION
Orthogonal Frequency Division Multiplexing (OFDM) is a multiplexing technique that divides a channel with a higher relative data rate into several orthogonal sub-channels with a lower data rate.


For high data rate transmissions, the symbol duration Ts is short; therefore ISI due to multipath propagation distorts the received signal if the symbol duration Ts is smaller than the maximum delay of the channel. To mitigate this effect a narrowband channel is necessary, yet for high data rates a broadband channel is needed. To overcome this conflict, the total bandwidth can be split into several


parallel narrowband subcarriers. Thus a block of N serial data symbols with duration Ts is converted into a block of N parallel data symbols, each with duration T = N·Ts. The aim is that the new symbol duration of each subcarrier be larger than the maximum delay of the channel, T > Tmax. With many low data rate subcarriers operating at the same time, a higher overall data rate is achieved.


In order to create the OFDM symbol, a serial-to-parallel block is used to convert N serial data symbols into N parallel data symbols. Then each parallel data symbol is modulated with a different orthogonal frequency subcarrier, and the subcarriers are added to form an OFDM symbol [4].


The IEEE 802.11p PHY has similar specifications to IEEE 802.11a with some changes. In IEEE 802.11p, a 10 MHz frequency bandwidth is used instead of the 20 MHz bandwidth of IEEE 802.11a; thus all parameters in the time domain for IEEE 802.11p are doubled compared with IEEE 802.11a. The doubled guard interval reduces ISI more than the guard interval in IEEE 802.11a. Depending on the data rate, different modulation schemes and coding rates must be applied. Table 1 compares the key parameters of the IEEE 802.11p and IEEE 802.11a standards.

Table 1. Comparison of the key parameters of the IEEE 802.11p PHY and IEEE 802.11a PHY [4].

Parameter                    | IEEE 802.11p                 | IEEE 802.11a
Bit rate (Mb/s)              | 3, 4.5, 6, 9, 12, 18, 24, 27 | 6, 9, 12, 18, 24, 36, 48, 54
Modulation                   | BPSK, QPSK, 16-QAM, 64-QAM   | BPSK, QPSK, 16-QAM, 64-QAM
OFDM symbol duration         | 8 µs                         | 4 µs
Guard time                   | 1.6 µs                       | 0.8 µs
FFT period                   | 6.4 µs                       | 3.2 µs
Preamble duration            | 32 µs                        | 16 µs
Subcarrier frequency spacing | 0.15625 MHz                  | 0.3125 MHz

The IEEE 802.11p PHY uses 64-subcarrier OFDM, which includes 48 data subcarriers and 4 pilot subcarriers. The 4 pilot signals are used for tracing the frequency offset and phase noise and are located on subcarriers −21, −7, 7, and 21. The short training symbols placed at the first part of every data packet (t1 through t10, shown in Fig. 1) relate to signal detection, time synchronization, and coarse frequency offset estimation. The long training symbols (T1 and T2), which are located after the short training symbols, are used for channel estimation. GI2 is used as the guard interval for the long training sequence and GI is used as the guard interval for the OFDM symbols. The cyclic prefix is employed to reduce the ISI.

Fig. 1. OFDM training structure.

The total training length is 16 µs. A short OFDM training symbol consists of 12 subcarriers, which are given by

S−26,26 = √(13/6) {0, 0, 1+j, 0, 0, 0, −1−j, 0, 0, 0, 1+j, 0, 0, 0, −1−j, 0, 0, 0, −1−j, 0, 0, 0, 1+j, 0, 0, 0, 0, 0, 0, 0, −1−j, 0, 0, 0, −1−j, 0, 0, 0, 1+j, 0, 0, 0, 1+j, 0, 0, 0, 1+j, 0, 0, 0, 1+j, 0, 0},

where the modulation is given by the elements' values. The factor √(13/6) normalizes the average power of the OFDM symbol. To improve the channel estimation accuracy, long OFDM training symbols are used. The long training symbols consist of 53 subcarriers (with a zero value at DC), given by

L−26,26 = {1, 1, −1, −1, 1, 1, −1, 1, −1, 1, 1, 1, 1, 1, 1, −1, −1, 1, 1, −1, 1, −1, 1, 1, 1, 1, 0, 1, −1, −1, 1, 1, −1, 1, −1, 1, −1, −1, −1, −1, −1, 1, 1, −1, −1, 1, −1, 1, −1, 1, 1, 1, 1},

where the modulation is given by the elements' values.

IV. PHYSICAL LAYER CODING

The messages are affected by interference; in order to detect and correct errors in the received signals, redundancy is introduced. For a binary block code, an encoder is used in the transmission system to prepare the data for transmission. A binary convolutional encoder is used in the IEEE 802.11p standard, with coding rates of R = 1/2, 2/3, or 3/4 corresponding to the desired data rate. The convolutional encoder uses the




generator polynomials g0 = 133 and g1 = 171 in octal notation. The constraint length of the encoder is 7 with code rate 1/2. The bits denoted as A and B are the outputs of the encoder. Puncturing is used to create higher data rates: it is a procedure through which the number of transmitted bits is reduced and the coding rate is increased. Figure 2 illustrates the convolutional encoder.


Fig. 2. Convolutional encoder (k=7).

The performance of the convolutional code is determined by its minimum free distance, dfree, the minimum Hamming distance between two different code words.
V. FRAME TYPES
There are three main types of frames:
1. Data frames: used for data transmission.

2. Control frames: used to address some wireless communication phenomena and to control the medium access of nodes in order to reduce collisions.
3. Management frames: used to exchange management information; they do not belong to the upper layers.

VI. IEEE 802.11P PHYSICAL LAYER MODEL

The IEEE 802.11p model represents a baseband model for the physical layer of a Wireless Local Area Network (WLAN). The model has three main parts, transmitter, channel, and receiver, each of which is explained below.

The convolutional encoder performs the coding of the transmitted bits. Convolutional codes have three main parameters: the number of input bits k, the number of output bits n, and the number of memory registers m; k(m + 1) is the constraint length used to define a convolutional encoder in MATLAB Simulink. The poly2trellis function converts the generator polynomials to a trellis structure:

Trellis = poly2trellis(ConstraintLength, CodeGenerator)

In this system the trellis structure is poly2trellis(7, [171 133]).
Puncturing of convolutional codes is used to generate different code rates from the mother code rate of the convolutional encoder. The output elements follow the puncture vector: the kth element of the input vector is removed if the kth element of the puncture vector is zero; if the kth element of the puncture vector is equal to one, the kth element of the input vector is kept in the output vector. The following example illustrates the creation of a new code rate: to create a 3/4 code rate from a 1/2 code rate, a convolutional code with the puncture vector [1 1 0 1 1 0] and constraint length 7 can be used. The third and sixth elements of the input vector are removed according to the puncture vector; therefore the rate gain resulting from puncturing is 3/2, and the final code rate is 3/4 = (3/2) × (1/2).
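The encoding and puncturing chain described above can be reproduced with a few Communications Toolbox calls, as sketched below; the message length and the Viterbi traceback depth are illustrative choices, not values from the paper.

% Rate-1/2 convolutional coding punctured to rate 3/4 (illustrative).
trellis = poly2trellis(7, [171 133]);   % constraint length 7, octal polys
msg     = randi([0 1], 96, 1);          % example message bits
punc    = [1 1 0 1 1 0].';              % puncture vector for rate 3/4
coded   = convenc(msg, trellis, punc);  % encode, then drop punctured bits
decoded = vitdec(coded, trellis, 34, 'trunc', 'hard', punc);  % Viterbi
isequal(msg, decoded)                   % returns 1 on error-free decoding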

A. Transmitter Side
Binary data is created according to a predefined mode. This mode is created in the adaptive modulation control according to the SNR estimation at the receiver, and it has to be entered into the data source to create the binary data. In the subsystem of the data source there is a buffer whose output corresponds to the maximum bits per block chosen in the simulation parameter list.

The IEEE 802.11p OFDM PHY layer includes different data rates, which are selected according to the output of the adaptive modulation; the system uses different modulation schemes for the different data rates. Figure 3 illustrates the subsystem of the modulator, which is subdivided as follows:
• Padding
• Convolutional encoder
• Puncturing of convolutional codes
• Matrix interleaver
• General block interleaver
• Rectangular QAM

The Padding block changes the dimension of the input matrix along its columns, rows, or both, according to the specified values. In this system each row corresponds to one subcarrier; that is, rows are in the frequency domain and columns are in the time domain. In the IEEE 802.11p baseband model the padding is employed to truncate the input signal along the column size. The specified output dimension is the number of bits per block, which differs according to the data rate and the corresponding code rate and modulation scheme.

Fig. 3: Subsystem of modulator.



Matrix interleaving and general block interleaving are employed in digital data transmission to mitigate the effect of burst errors. When too many errors exist in one code word, due to a burst error, the code word cannot be decoded correctly. To reduce the effect of burst errors, the bits of one code word are interleaved before being transmitted; since interleaving changes the positions of the bits, a burst error cannot disturb a large part of any one code word.

The Simulink model implements the interleaving in two steps. The first step is the mapping of adjacent coded bits onto nonadjacent subcarriers, implemented with the matrix interleaver. The second step is the mapping of adjacent coded bits alternately onto less and more significant bits of the constellation, implemented with the general block interleaver.

The matrix interleaver interleaves the input vector according to the specified numbers of rows and columns. In this system they are given by:

Interleaver rows = 16
Interleaver columns = number of transmitted bits per block / interleaver rows
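The matrix interleaving step can be sketched in MATLAB as writing the block row by row into a 16-row matrix and reading it out column by column (one common convention for this block; the block size is an example):

% Matrix interleaver sketch: 16 rows, row-wise write, column-wise read.
bitsPerBlock = 192;                       % example block size
rows = 16;
cols = bitsPerBlock / rows;               % interleaver columns
in   = randi([0 1], bitsPerBlock, 1);     % example coded bits
M    = reshape(in, cols, rows).';         % fill the matrix row by row
out  = M(:);                              % read it out column by column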

The rectangular QAM block indicates how the binary words are assigned to the points of the signal constellation. In the IEEE 802.11p baseband model a Gray code is used. Four different modulation types are implemented:
• BPSK
• QPSK
• 16-QAM
• 64-QAM

To convert a block of N serial data symbols (each of duration Ts) into a block of N parallel data symbols (each of duration T = N·Ts), the modulator uses a reshape block. The output is a matrix of size (number of data subcarriers) × (OFDM symbols per frame).

Each OFDM symbol in IEEE 802.11p has four pilot subcarriers. The pilot signals are used for tracing frequency offset and phase noise and are located on subcarriers −21, −7, 7, and 21. A Pseudorandom Noise (PN) sequence generator block creates the pilot subcarriers; its sample time and number of samples per frame are defined as:

Sample time = period of the block / OFDM symbols per frame
Samples per frame = OFDM symbols per frame

Preamble insertion is used for channel estimation in the model. In order to improve the channel estimation accuracy, four long OFDM training symbols are used instead of two; the long training symbols consist of 53 subcarriers with a zero value at the DC subcarrier.

The Pad block extends the input vector along its columns. The padding values, equal to zero, are inserted at the end of the columns, and the specified output dimension is the number of points of the IFFT block. A cyclic prefix is used as a guard interval to mitigate the effect of ISI due to multipath propagation: a selector block acts as the cyclic prefix inserter, placing the last 16 time-domain samples at the beginning of each OFDM symbol.

The multiplex block is the last block in the transmitter part; it converts the signal from parallel to serial and transmits the time-domain samples of one symbol.
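The cyclic-prefix insertion performed by the selector block corresponds to prepending the last 16 time-domain samples of each symbol, e.g.:

% Cyclic-prefix insertion sketch for one 64-point OFDM symbol.
X   = randn(64, 1) + 1j*randn(64, 1);  % example frequency-domain symbol
x   = ifft(X, 64);                     % time-domain samples
sym = [x(49:64); x];                   % prepend the last 16 samples (GI)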

VII. RADIO CHANNEL

For the simulation, a simple Additive White Gaussian Noise (AWGN) channel model is used first, followed by simulations with multipath Rayleigh fading combined with AWGN.


Fig. 4: AWGN channel.

To implement the effect of AWGN on the input signal, an AWGN channel block is applied to the input signal. This block produces a complex output signal when the input signal is complex. In this model the variance is specified from the


port that inserts the SNR, in order to calculate the variance of the noise, as shown in Figure 4.
A. Rayleigh Fading Channel with AWGN
The multipath Rayleigh fading is added to an AWGN channel. Since a transmitted signal propagates along several paths in a multipath channel to reach the receiver, different time delays arise. In the block, two parameter dialogs are specified: the delay vector specifies the time delay of each path and the gain vector specifies the gain of each path at each delay. The number of paths equals the length of the delay vector and the gain vector, which must have the same length.
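Both channel effects can be sketched together in MATLAB as a short tapped delay line followed by additive noise; the delays, gains, and SNR below are illustrative, and awgn requires the Communications Toolbox.

% Two-path Rayleigh fading plus AWGN (illustrative sketch).
tx   = randn(1000, 1) + 1j*randn(1000, 1);          % transmitted samples
g    = [0.8; 0; 0.5];                               % gains at delays 0,1,2
h    = g .* (randn(3,1) + 1j*randn(3,1)) / sqrt(2); % Rayleigh channel taps
rxMP = filter(h, 1, tx);                            % multipath channel
rx   = awgn(rxMP, 10, 'measured');                  % AWGN at 10 dB SNR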

Fig. 5: Multipath Rayleigh fading channel.

B. Receiver Side
To convert the signal from serial to parallel, a demultiplex block is used; a reshaping block inside this subsystem produces a matrix from the input vector. In the receiver the inserted cyclic prefix must be removed to obtain the original input data, so a selector block removes the 16 samples that were inserted at the beginning of each OFDM symbol.

In this part the data subcarriers are separated from the pilot subcarriers, and the N parallel data symbols are converted back into N serial data symbols to recover the original signal.

Fig. 6: Disassemble OFDM frame.

The demodulator subsystem performs the inverse tasks of the modulator subsystem; Figure 7 illustrates the subsystem of the demodulator bank.

Fig. 7: Subsystem of demodulator bank.

VIII. SIMULATION RESULTS

The graphs show the IEEE 802.11p physical layer behavior in terms of SNR (dB), bit rate (Mb/s), and BER (per packet). In the above simulation results, SNR values of up to 30-40 dB are obtained; when the SNR falls below about 10 dB, the bit rate decreases and the number of bit errors per packet (BER) increases.

Fig. 8: BER (per packet) as a function of SNR (dB) and bit rate (Mb/s).
REFERENCES

[1] IEEE Std 802.11, IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 2007.
[2] Y. Zang, L. Stibor, G. Orfanos, S. Guo, and H. Reumerman, "An error model for inter-vehicle communications in highway scenarios at 5.9 GHz," in International Workshop on Modeling Analysis and Simulation of Wireless and Mobile Systems, Montreal, Quebec, Canada, 2005.
[3] R. A. Uzcategui (Universidad Nacional Experimental Politecnica Antonio Jose de Sucre) and G. Acosta-Marum (Georgia Institute of Technology), IEEE Communications Magazine, May 2009.
[4] R. A. Uzcategui and G. Acosta-Marum, "WAVE: A tutorial," Topics in Automotive Networking.
[5] A. Khan, S. Sadhu, and M. Yeleswarapu (Dept. of Computer Science, University of California, Santa Barbara), "A comparative analysis of DSRC and 802.11 over vehicular ad hoc networks," May 2009.
[6] T. S. Rappaport, Wireless Communications: Principles and Practice. NJ: Prentice-Hall, 1996.
[7] E. Mark, Wireless OFDM Systems: How to Make Them Work? Kluwer Academic Publishers, 2002.
[8] J. G. Proakis, Digital Communications. McGraw-Hill, 1995.
[9] ITU-R Rec. M.1225, "Guidelines for evaluation of radio transmission technologies (RTTs) for IMT-2000," 1997.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.100-102.

Vehicle-To-Vehicle Wireless Real Time Image Transmission for Traffic Awareness
D.V.S. Ramanjaneyulu1 and G. Naga Jyothi2
Vellore Institute of Technology (VIT University), Vellore, India
Email: 1anji.dvsr@gmail.com, 2jyothigoriparthi@gmail.com
Abstract—The number of vehicles on the road has increased in recent years, which causes high traffic density and further problems like accidents and road congestion. A solution to this problem is vehicle-to-vehicle communication, where vehicles are able to communicate with their neighboring vehicles, even in the absence of a central base station, to provide safer and more efficient road information. The goal of this paper is to communicate among vehicles for awareness of real-time traffic by sending an image of the road situation at a particular instant of time. One vehicle sends the information to the vehicle behind it, and so on; by using this image we obtain updated traffic information. Here we use WAVE (Wireless Access in Vehicular Environment), a new wireless technology for vehicle communication, which operates in the range of 5.850-5.925 GHz.

Keywords—IEEE 802.11p, WAVE, Matlab, Vehicle to Vehicle Communication, OFDM.


I. INTRODUCTION
Traffic congestion on the road is a large problem today in major cities. The congestion and the related vehicle accommodation problems are accompanied by a constant threat of accidents as well. The absence of road traffic safety takes a toll of precious lives and poses a dire threat to our environment. According to the National Highway Traffic Safety Administration (NHTSA) [9]:

Fig. 1: Using sensors in car

 6.3 million traffic accidents were reported.


 43,000 people were killed.
 Millions of people were injured.
 The economy effects caused due to these accidents
were more than $230 billion.
By these reasons, we need an effective communication
like V2V vehicular communication and networks through
which vehicles and road side units communicate each
other. The transferred information with this type of
communication is traffic information.
Vehicular communication systems are effective in
decreasing the accidents and traffic congestions. In recent
years the research on vehicle to vehicle communication is
increasing. IEEE 802.11p defines an international
standards for wireless communication on vehicle and is

Fig. 2: Use of wireless communication in cars

III. OVERVIEW OF WAVE SYSTEM
IEEE 802.11p and the IEEE 1609.xx series are together called wireless access in vehicular environments (WAVE); this is the next-generation dedicated short range communication (DSRC) technology, which provides high-speed vehicle-to-vehicle (V2V) communication and operates at 5.850-5.925 GHz [4]. The total 70 MHz band is divided into 7 channels of 10 MHz each; two short-range and two medium-range service channels are designated for extended data transfer, two service channels are designated for special safety-critical applications, and on all channels public safety application messages have the highest priority. The WAVE system adopts orthogonal frequency division multiplexing (OFDM) and achieves data rates of 6-27 Mb/s, with a coverage range of up to 1000 feet. WAVE systems are based on the IEEE 802.11p protocol, which is currently under development.
IV. WAVE COMMUNICATION STACK
IEEE 802.11p covers both the MAC and the physical layer; both are designed to operate in the rapidly varying vehicular environment. Two protocol stacks are used [2]: Internet Protocol version six (IPv6) and the WAVE Short Message Protocol (WSMP). The reason for having two protocol stacks is to accommodate high-priority, time-sensitive communications as well as more traditional, less demanding exchanges. WSMP enables an application to send short messages and to directly control certain parameters of the radio resources, so as to maximize the probability that all the implicated parties receive the messages in time.

A. IEEE 1609 Series (1609.xx)
Multi-Channel Operation (IEEE 1609.4) provides enhancements to the IEEE 802.11p MAC to support multi-channel operation [5]. WAVE Networking Services (IEEE 1609.3) provides addressing and routing services within a WAVE system [6]. WAVE Resource Management (IEEE 1609.1) describes an application that allows an OBU with limited computing resources to interact with complex processes running outside the OBU, in order to give the impression that the processes are running on the OBU [7]. WAVE Security Services (IEEE 1609.2) covers the format of secure messages and their processing [8].

V. FUNCTIONAL DESCRIPTION OF WAVE TRANSCEIVER
In the present test case, a camera fixed on top of the vehicle takes live video of the road traffic, and simple Matlab functions capture the video and take one snapshot from it. The Matlab code for taking the snapshot from the webcam is as follows:

vid = videoinput('winvideo', 1, 'RGB24_320x240');  % open the webcam
start(vid);                                        % begin acquisition
b = getsnapshot(vid);                              % grab one frame
imwrite(b, 'traffic.bmp');                         % save it to disk
imshow(b);                                         % display it

The WAVE transceiver transmits the image only when it contains more than 10 to 15 vehicles, and it repeats this process every 5 seconds from one vehicle to the next; the same sequence is followed down to the last vehicle.

Fig. 3. Functional block diagram of wave transmitter

The modulator converts the input information into the IEEE 802.11p OFDM format. It supports several data rates, selected according to the output of the adaptive modulation, and uses a different modulation scheme for each data rate. The modulator contains subsystems such as padding, a convolutional encoder, a punctured convolutional encoder, a matrix interleaver, a general block interleaver, and rectangular QAM; because of space limitations, these subsystems are not described here. The modulated output is given to the WAVE transceiver, which transmits the information into free space; it reaches another WAVE transceiver, which creates its own network, called a WAVE basic service set (WBSS), and exchanges the information.
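As a rough illustration of the capture-and-forward timing described above, the following Matlab sketch grabs a frame every 5 seconds and hands it to a transmit routine only when a vehicle-count estimate exceeds the threshold; note that countVehicles and waveTransmit are hypothetical placeholder functions for the detection step and the WAVE radio interface, which the paper does not specify.

% Sketch of the 5-second capture-and-forward loop. countVehicles and
% waveTransmit are hypothetical placeholders, not part of this paper.
vid = videoinput('winvideo', 1, 'RGB24_320x240');
while true
    start(vid);
    b = getsnapshot(vid);            % grab the current road image
    stop(vid);
    if countVehicles(b) > 10         % transmit only when traffic is dense
        imwrite(b, 'traffic.bmp');   % save the snapshot
        waveTransmit('traffic.bmp'); % hand the image to the WAVE radio
    end
    pause(5);                        % repeat every 5 seconds
end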

Table 1. Comparison of the key parameters of the IEEE 802.11p PHY and the IEEE 802.11a PHY [4].

Parameters                    | IEEE 802.11p                 | IEEE 802.11a
Bit rate (Mb/s)               | 3, 4.5, 6, 9, 12, 18, 24, 27 | 6, 9, 12, 18, 24, 36, 48, 54
Modulation                    | BPSK, QPSK, 16-QAM, 64-QAM   | BPSK, QPSK, 16-QAM, 64-QAM
OFDM symbol duration          | 8 µs                         | 4 µs
Guard time                    | 1.6 µs                       | 0.8 µs
FFT period                    | 6.4 µs                       | 3.2 µs
Preamble duration             | 32 µs                        | 16 µs
Subcarrier frequency spacing  | 0.15625 MHz                  | 0.3125 MHz
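As a quick consistency check on the table (added here for clarity, not part of the original), the 802.11p values follow from halving the 802.11a channel bandwidth from 20 MHz to 10 MHz, which doubles every time-domain parameter and halves the subcarrier spacing:

\Delta f = \frac{1}{T_{FFT}} = \frac{1}{6.4\ \mu s} = 0.15625\ \mathrm{MHz}, \qquad T_{sym} = T_{FFT} + T_{guard} = 6.4\ \mu s + 1.6\ \mu s = 8\ \mu s.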

Fig. 4. Functional block diagram of wave receiver

Figure 4 shows the block diagram of the WAVE receiver: the vehicle receives the information using the WAVE transceiver, the received signal is processed and demodulated, and the output is given to the image viewer located on the OBU, which shows the road condition and the traffic on the road.
VI. PROPOSED EXPERIMENTAL SETUP

Fig. 5. Experimental Setup

In the above setup, vehicle 1 captures the traffic image and transmits it to vehicle 2 through the WAVE transceiver.

Fig. 6. Graph between the number of vehicles and the reception time

Figure 6 shows the graph between the number of vehicles and the reception time in seconds. The first vehicle's transmission time is 2 s, and each subsequent vehicle forwards the image in less time than that, because it simply receives and retransmits to the following vehicles; the total transmission time for ten vehicles is 10 s.

VII. CONCLUSIONS
This article presents an overview of the IEEE standards for WAVE, namely IEEE 802.11p, IEEE 1609.1, IEEE 1609.2, IEEE 1609.3, and IEEE 1609.4, the communication stack and its details, and how WBSSs (WAVE basic service sets) are created in a network to exchange information among vehicles. Real-time traffic images were sent to the neighboring vehicles; the experimental results show that the images are communicated effectively, with a small time delay for communicating with each vehicle in the network. Future work includes implementing the complete WAVE protocol stack and avoiding collisions by sending warning messages; using such warning messages, the speed and dynamics of the vehicle can be controlled.

REFERENCES
[1] IEEE Std 802.11, IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 2007.
[2] R. A. Uzcátegui (Universidad Nacional Experimental Politécnica "Antonio José de Sucre") and G. Acosta-Marum (Georgia Institute of Technology), IEEE Communications Magazine, May 2009.
[3] R. A. Uzcátegui and G. Acosta-Marum, "WAVE: A Tutorial", Topics in Automotive Networking, IEEE Communications Magazine, May 2009.
[4] Arijit Khan, Shatrugna Sadhu and Muralikrishna Yeleswarapu, "A Comparative Analysis of DSRC and 802.11 over Vehicular Ad hoc Networks", Dept. of Computer Science, University of California, Santa Barbara, May 2009.
[5] IEEE Std 1609.1-2006, Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) - Resource Manager.
[6] IEEE Std 1609.2-2006, Trial-Use Standard for Wireless Access in Vehicular Environments - Security Services for Applications and Management Messages.
[7] IEEE Std 1609.3-2006, Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) - Networking Services.
[8] IEEE Std 1609.4-2006, Trial-Use Standard for Wireless Access in Vehicular Environments (WAVE) - Multi-Channel Operation.
[9] NHTSA, National Highway Traffic Safety Administration.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.103-106.

MFCC Feature Extraction Algorithm for Clinical Data

C.R. Bharathi1 and V. Shanthi2
1 Research Scholar, Sathyabama University; Asst. Prof., Veltech University
2 Professor, MCA Dept., St. Joseph's College of Engg.

Abstract: In real-world environments, speech signal processing plays a vital role among the research communities, and a wide range of research is carried out in this field for denoising, enhancement and more. Among other needs, stress management is important to improve the speech of children with mental retardation (MR) or intellectual disability (ID). MR or ID is a descriptive term for subaverage intelligence and impaired adaptive functioning arising in the developmental period (< 18 y). MR/ID and other neurodevelopmental disabilities are often seen in general pediatric practice. In order to provide proper speech practice for MR children, their speech is analyzed. Initially, normal and abnormal children's speech is obtained with the same set of words. As the initial process of this work, feature extraction is performed for normal and abnormal children's speech. This paper discusses feature extraction and dimensionality reduction: feature extraction is implemented using the well-known Mel Frequency Cepstrum Coefficients (MFCC) for the words of both normal and abnormal children's speech, and Principal Component Analysis (PCA) is then applied to reduce the dimensionality of the words, which leads to the next step, classification.
Keywords: speech signal, stress management, Mel Frequency Cepstrum Coefficients (MFCC), Principal Component Analysis (PCA)

I. INTRODUCTION
Terminology for MR/ID has been particularly challenging, as the term "mentally retarded" carries significant social and emotional stigma. The American Association for Intellectual and Developmental Disability (AAIDD) has been particularly influential in terminology changes, such that most professionals working in the field now refer to mental retardation as intellectual disability. "Developmental delay" is often used inappropriately as synonymous with MR/ID; developmental delay is an overly inclusive term and should generally be used for infants and young children in whom the diagnosis is unclear, such as those too young for formal testing. Approximately 10% of children have some learning impairment, while as many as 3% manifest some degree of MR/ID.
Speech is one of the salient forms of communication in daily life [1]. Speech is formed through the functioning of the time-varying vocal tract system: during the production of speech, both the excitation and the vocal tract change constantly with time [2]. In every verbal communication, the quality and precision of speech are given great importance [3]. Due to deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise, the performance of speech and speaker recognizers is often degraded [4]. Speech communication applications such as voice-controlled devices, hearing aids, and hands-free telephones mostly suffer from poor speech quality because of background noise and room echo [5]. Much of the time, particularly during travel, we encounter noisy environments. Signal processing techniques remove the noise from the signal-noise mixture and provide almost noise-free sound for enhanced communication [3]. Signal processing techniques are exploited in several applications such as speech acquisition, acoustic imaging and communications [6].
In speech processing, formants are resonances of the vocal tract, and the interpretation of their location and bandwidth is essential in several applications [7]. Many attempts have been made to improve the amplitude of speech: if it is feasible to recognize the speech when it is present, and to offer more gain for that speech than for the surrounding environmental sounds, both the accuracy and the comfort of speech are enhanced. The enhancement of speech from corrupted noisy observations is mostly based on probabilistic models of speech and noise; accurate modeling and estimation of the speech and noise statistics is therefore of vast importance [8]. In the modern period, there is great interest in developing techniques for both speech (and character/word sequence) recognition and synthesis [9]. Automatic speech recognition by computer is a process in which speech signals are mechanically changed into a sequence of words in text [10]. Generally, speech recognition has two stages: feature extraction and classification [11].
II. PROPOSED TECHNIQUE FOR FEATURE EXTRACTION USED FOR CLASSIFICATION OF CLINICAL DATA
Recently, speech signal processing has played an important role in the research community, and this research supports real-world environments. In this work, the speech of MR children of mild degree is taken as the abnormal children's speech dataset. To improve the abnormal speech, feature extraction and dimensionality reduction are performed first. Initially, the normal children's speech and abnormal children's speech datasets are obtained, and the MFCC features are extracted from both; both datasets are recorded from the same set of scripts. Subsequently, the dimensionality is reduced with the aid of PCA, and the parameters are obtained from the extracted MFCC features. The step-by-step process is explained in detail in the following subsections.

A. Feature Extraction
In this segment, speech samples are obtained from normal persons and abnormal persons with the aid of an audio synthesizer. Let D_a and D_b be the abnormal-person and normal-person speech datasets, respectively, from which the MFCC features are extracted:

D_a = \{w_1, w_2, w_3, \ldots, w_{N_w}\}   (1)
D_b = \{v_1, v_2, v_3, \ldots, v_{N_w}\}   (2)

where w_i and v_i denote the i-th word samples. The cepstrum and the mel-frequency cepstrum differ in that, in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly spaced frequency bands used in the normal cepstrum [34]. MFCCs are based on the known variation of the human ear's critical bandwidths with frequency. The MFCC technique makes use of two types of filters, namely linearly spaced filters and logarithmically spaced filters. To capture the phonetically important characteristics of speech, the signal is expressed in the mel frequency scale, which has linear frequency spacing below 1000 Hz and logarithmic spacing above 1000 Hz. The normal speech waveform may vary from time to time depending on the physical condition of the speaker's vocal cords; MFCCs are less susceptible to such variations than the speech waveforms themselves [33]. The following steps are involved in extracting the MFCC features (a minimal code sketch follows the list):
• The Fourier transform of the signal is taken.
• With the aid of triangular overlapping windows, the power spectrum is mapped onto the mel scale.
• At each mel frequency, the log of the power is taken.
• The Discrete Cosine Transform (DCT) of the mel log powers is taken.
• The amplitudes of the resulting spectrum are the MFCCs.
With the above steps, the MFCC features are obtained from both the normal and the abnormal dataset; they are referred to as M_a and M_b.
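As a minimal illustration of these steps, the sketch below computes MFCCs from scratch in Matlab; the frame length, hop size, number of filters and number of coefficients are assumptions chosen for illustration, not values taken from this work.

% Minimal MFCC sketch (frame length, hop, filter and coefficient counts
% are illustrative assumptions, not the values used in this paper).
function C = simpleMFCC(x, fs)
    N = 256; hop = 128; numFilt = 20; numCoef = 13;
    x = x(:);
    w = hamming(N);
    H = melFilterbank(numFilt, N, fs);           % numFilt x (N/2+1)
    nFrames = floor((length(x) - N) / hop) + 1;
    C = zeros(numCoef, nFrames);
    for k = 1:nFrames
        seg = x((k-1)*hop + (1:N)) .* w;         % windowed frame
        P = abs(fft(seg)).^2;                    % power spectrum
        E = log(H * P(1:N/2+1) + eps);           % log mel-band energies
        c = dct(E);                              % DCT of the log powers
        C(:, k) = c(1:numCoef);                  % keep the first coefficients
    end
end

function H = melFilterbank(M, N, fs)
    % Triangular filters equally spaced on the mel scale up to fs/2.
    mel  = @(f) 2595 * log10(1 + f / 700);
    imel = @(m) 700 * (10.^(m / 2595) - 1);
    edges = imel(linspace(0, mel(fs/2), M + 2)); % band edges in Hz
    bins  = 1 + round(edges / fs * N);           % 1-based FFT bin indices
    H = zeros(M, N/2 + 1);
    for m = 1:M
        for b = bins(m):bins(m+1)                % rising slope
            H(m, b) = (b - bins(m)) / max(bins(m+1) - bins(m), 1);
        end
        for b = bins(m+1):bins(m+2)              % falling slope
            H(m, b) = (bins(m+2) - b) / max(bins(m+2) - bins(m+1), 1);
        end
    end
end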

B. Principal Component Analysis (PCA)
The general equations used to generate the PCA are

N = w - \mu   (3)
cv = \sum_{n=1}^{N_T} N N^T   (4)
Y = N E^T   (5)
\hat{w} = Y E + \mu   (6)

where N is the mean deviation, w is a window of data, \mu is the mean, cv is the covariance, E is the eigenvector matrix, and \hat{w} in the reconstructed Eq. (6) denotes the inverse-PCA output. The above equations are used to obtain the PCA of both M_a and M_b. In PCA, the data are processed window by window; after PCA, the inverse PCA is also applied to recover the dimensionality-reduced original information. After this process is completed, the following parameters are obtained from the MFCC feature vectors M_a and M_b: the mean, the standard deviation, the maximum amplitude value and its index, the minimum amplitude value and its index, and the MFCC length. These are extracted for the MFCC-featured word as well as for the original word, and hence for each word we have 14 inputs.
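A compact Matlab sketch of this window-by-window PCA step, written to mirror Eqs. (3)-(6) up to transposition conventions, is given below; the number of retained components k is an assumption.

% PCA reduction and inverse-PCA reconstruction of MFCC feature windows
% (column-vector convention; k retained components is an assumption).
function [Y, wRec] = pcaReduce(W, k)
    % W: d x n matrix, one feature vector (window) per column
    mu = mean(W, 2);                                  % mean
    Nd = W - repmat(mu, 1, size(W, 2));               % mean deviation, Eq. (3)
    cv = (Nd * Nd') / size(W, 2);                     % covariance, Eq. (4)
    [E, D] = eig(cv);
    [~, idx] = sort(diag(D), 'descend');
    E = E(:, idx(1:k));                               % top-k eigenvectors
    Y = E' * Nd;                                      % projection, Eq. (5)
    wRec = E * Y + repmat(mu, 1, size(W, 2));         % inverse PCA, Eq. (6)
end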
III. RESULTS AND DISCUSSION
The proposed system was implemented on the MATLAB (version 7.11) platform. With the aid of the Free Audio Editor, we generated the dataset with normal and abnormal female children within the age limit 6-10: for the 100 normal data (words) we used two female children, and for the 100 abnormal data we used one female child; their normal frequency range is 0-300 kHz.
Fig. 1. Normal speech 1: "ganesh"


Initially, the words are extracted from both the normal and the abnormal children, and then the MFCC features are extracted from them. Subsequently, PCA is applied to reduce the dimensionality of the words. A few speech samples which are sent as input to MFCC feature extraction are shown here.

Fig. 6. Feature Extraction output of Abnormal Speech of an MR child

Fig. 2. Normal speech 2: "ganesh"

Fig. 7. Output of PCA

Fig. 3. Abnormal speech: "ganesh"

IV. CONCLUSION

To develop an effective system to identify an abnormal word and the spot where the speech has to be improved, the MFCC is first obtained from both the normal and abnormal words, and then PCA dimensionality reduction is performed. The results of this first step of the work have been discussed with outputs. The work will next be extended by (i) a classification phase, (ii) a testing phase and (iii) a spotting phase.
Fig. 4. Feature Extraction output of Normal Speech of Child 1

Fig. 5. Feature Extraction output of Normal Speech of Child 2

REFERENCES
[1] Shirbahadurkar and Bormane, "Speech Synthesizer Using Concatenative Synthesis Strategy for Marathi Language (Spoken in Maharashtra, India)", International Journal of Recent Trends in Engineering, Vol. 2, No. 4, pp. 80-82, November 2009.
[2] Yegnanarayana and Satyanarayana Murthy, "Source-System Windowing for Speech Analysis and Synthesis", IEEE Transactions on Speech and Audio Processing, Vol. 4, No. 2, pp. 133-137, March 1996.
[3] Singaram, Guru Raghavendran, Shivaramakrishnan and Srinivasan, "Real Time Speech Enhancement using Blackfin Processor BF533", J. Instrument Society of India, Vol. 37, No. 2, pp. 67-79, 2009.
[4] Qiguang Lin, Ea-Ee Jan and James Flanagan, "Microphone Arrays and Speaker Identification", IEEE Transactions on Speech and Audio Processing, Vol. 2, No. 4, pp. 622-629, October 1994.
[5] Thomas Lotter, Christian Benien and Peter Vary, "Multichannel Direction-Independent Speech Enhancement Using Spectral Amplitude Estimation", EURASIP Journal on Applied Signal Processing, Vol. 2003, No. 11, pp. 1147-1156, 2003.
[6] Sven Nordholm, Thushara Abhayapala, Simon Doclo, Sharon Gannot, Patrick Naylor and Ivan Tashev, "Microphone Array Speech Processing", EURASIP Journal on Advances in Signal Processing, Vol. 2010, pp. 1-3, 2010.
[7] Roy C. Snell and Fausto Milinazzo, "Formant Location from LPC Analysis Data", IEEE Transactions on Speech and Audio Processing, Vol. 1, No. 2, pp. 129-134, April 1993.
[8] David Zhao and Bastiaan Kleijn, "HMM-Based Speech Enhancement using Explicit Gain Modeling", in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2006.
[9] Marius Crisan, "Chaos and Natural Language Processing", Acta Polytechnica Hungarica, Vol. 4, No. 3, pp. 61-74, 2007.
[10] Aida-Zade, Ardil and Rustamov, "Investigation of Combined Use of MFCC and LPC Features in Speech Recognition Systems", World Academy of Science, Engineering and Technology, Vol. 3, No. 2, pp. 74-80, Spring 2007.
[11] Chulhee Lee, Donghoon Hyun, Euisun Choi, Jinwook Go and Chungyong Lee, "Optimizing Feature Extraction for Speech Recognition", IEEE Transactions on Speech and Audio Processing, Vol. 11, No. 1, pp. 80-87, January 2003.
[12] Rashad, Hazem M. El-Bakry and Islam R. Ismail, "Diphone Speech Synthesis System for Arabic Using MARY TTS", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 2, No. 4, pp. 18-26, August 2010.
[13] Guojun Zhou, John H. L. Hansen and James F. Kaiser, "Nonlinear Feature Based Classification of Speech under Stress", IEEE Transactions on Speech and Audio Processing, Vol. 9, No. 3, pp. 201-216, March 2001.
[14] Carolyn Reeves, A. Rene Schmauder and Robin K. Morris, "Stress Grouping Improves Performance on an Immediate Serial List Recall Task", Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 26, No. 6, pp. 1638-1654, 2000.
[15] Sigmund, Prokes and Brabec, "Statistical Analysis of Glottal Pulses in Speech under Psychological Stress", in Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008.
[16] Milan Sigmund, "Spectral Analysis of Speech under Stress", International Journal of Computer Science and Network Security, Vol. 7, No. 4, pp. 170-172, April 2007.
[17] Anandthirtha B. Gudi and Nagaraj, "Optimal Curve Fitting of Speech Signal for Disabled Children", International Journal of Computer Science and Information Technology (IJCSIT), Vol. 1, No. 2, pp. 99-107, November 2009.
[18] Caroline Floccia, Thierry Nazzi, Keith Austin, Frederique Arreckx and Jeremy Goslin, "Lexical Stress and Phonetic Processing in Word Learning in 20- to 24-Month-Old English-Learning Children", Developmental Science, pp. 1-12, 2010.
[19] Stelzle, Ugrinovic, Knipfer, Bocklet, Noth, Schuster, Eitner, Seiss and Nkenke, "Automatic, Computer-Based Speech Assessment on Edentulous Patients with and without Complete Dentures - Preliminary Results", Journal of Oral Rehabilitation, Vol. 37, No. 3, pp. 209-216, March 2010.
[20] Ibrahim Patel and Srinivas Rao, "Speech Recognition Using HMM with MFCC - An Analysis Using Frequency Spectral Decomposition Technique", Signal and Image Processing: An International Journal (SIPIJ), Vol. 1, No. 2, pp. 101-110, December 2010.
[21] Soumadip Ghosh, Sushanta Biswas, Debasree Sarkar and Partha Pratim Sarkar, "Mining Frequent Itemsets Using Genetic Algorithm", International Journal of Artificial Intelligence & Applications, Vol. 1, No. 4, pp. 133-143, October 2010.
[22] Fozia Hanif Khan, Nasiruddin Khan, Syed Inayatullah and Shaikh Tajussin Nizami, "Solving TSP Problem by Using Genetic Algorithm", International Journal of Basic & Applied Sciences, Vol. 9, No. 10, pp. 79-88, September 2009.
[23] Sufal Das and Banani Saha, "Data Quality Mining using Genetic Algorithm", International Journal of Computer Science and Security, Vol. 3, No. 2, pp. 105-112, 2009.
[24] Srinivas and Patnaik, "Adaptive Probabilities of Crossover and Mutation in Genetic Algorithms", IEEE Transactions on Systems, Man and Cybernetics, Vol. 24, No. 4, pp. 656-667, April 1994.
[25] Hiren Patel, Anirbid Sircar, Soham Sheth and Reshmi Jadvani, "Application of Genetic Algorithm to Hydrocarbon Resource Estimation", Journal of Petroleum and Gas Engineering, Vol. 2, No. 4, pp. 83-92, April 2011.
[26] Sumathi and SanthaKumaran, "Pre-Diagnosis of Hypertension Using Artificial Neural Network", Global Journal of Computer Science and Technology, Vol. 11, No. 2, pp. 43-47, February 2011.
[27] Karim Solaimani, "Rainfall-Runoff Prediction Based on Artificial Neural Network (A Case Study: Jarahi Watershed)", American-Eurasian J. Agric. & Environ. Sci., Vol. 5, No. 6, pp. 856-865, 2009.
[28] El-Shafie, Mukhlisin, Najah and Taha, "Performance of Artificial Neural Network and Regression Techniques for Rainfall-Runoff Prediction", International Journal of the Physical Sciences, Vol. 6, No. 8, pp. 1997-2003, April 2011.
[29] Mohammad Reza Zakerzadeh, Mohsen Firouzi, Hassan Sayyaadi and Saeed Bagheri Shouraki, "Hysteresis Nonlinearity Identification Using New Preisach Model-Based Artificial Neural Network Approach", Journal of Applied Mathematics, Vol. 2011, 2011.
[30] Timothy James Stich, Julie Spoerre and Tomas Velasco, "The Application of Artificial Neural Networks to Monitoring and Control of an Induction Hardening Process", Journal of Industrial Technology, Vol. 16, No. 1, January 2000.
[31] Nachamai, Santhanam and Sumathi, "A New Fangled Insinuation for Stress Affect Speech Classification", International Journal of Computer Applications, Vol. 1, No. 19, 2010.
[32] Xie, Andreae, Zhang and Warren, "Detecting Stress in Spoken English Using Decision Trees and Support Vector Machines", Australian Computer Science Communications, Vol. 26, No. 7, 2004.
[33] Rashidul Hasan, Mustafa Jamil, Golam Rabbani and Saifur Rahman, "Speaker Identification Using Mel Frequency Cepstral Coefficients", in Proceedings of the 3rd International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, December 2004.
[34] Abdalla and Ali, "Wavelet-Based Mel-Frequency Cepstral Coefficients for Speaker Identification using Hidden Markov Models", Journal of Telecommunications, Vol. 1, No. 2, pp. 16-21, March 2010.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.107-112.

An Introduction to Multimodal Biometrics

R. Gayathri and M. Suganthy
Research Scholar, Anna University and Assistant Professor, VelTech University
Abstract: Fusion of multiple biometric modalities for human authentication performance improvement has received considerable attention. This paper presents a robust multimodal biometric authentication scheme integrating face, ear and signature using rank-level fusion. The developed multimodal biometric system possesses a number of unique qualities, from utilizing principal component analysis and Fisher's linear discriminant methods for identity authentication by the individual matchers (face, ear, and signature) to utilizing a novel rank-level fusion method to consolidate the results obtained from the different biometric matchers. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data.
Keywords: biometrics, multimodal, identification, recognition, rank-level fusion.

I. INTRODUCTION
Biometrics refers to technologies that use physiological or behavioral characteristics to authenticate a person's identity [1]. In recent years, the increasing demand for enhanced security has led to an unprecedented interest in automated personal authentication based on biometrics. Biometric systems based on a single source of information are called unimodal systems. Although some unimodal systems have achieved considerable improvements in reliability and accuracy, they often suffer from enrollment problems due to non-universal biometric traits, susceptibility to biometric spoofing, or insufficient accuracy caused by noisy data [2]; hence, they may not achieve the desired performance requirements of real-world applications. One way to overcome these problems is the use of multimodal biometric authentication systems, which combine information from multiple modalities to arrive at a decision. Several studies have demonstrated that multimodal biometric systems can achieve better performance compared to unimodal systems [2-7].
Although existing multimodal fusion techniques have been shown to improve the accuracy of biometrics-based verification effectively, they also face some limitations. For example, most existing multimodal fusion schemes, especially some single parametric machine-learning fusion strategies, are based on the assumption that each biometric modality is available and complete [4-6], so each registered person must be enrolled in every modality; once a modality is unavailable or missing, the multimodal system breaks down or its accuracy degrades. The key to a successful multibiometric system is an effective fusion scheme, which is necessary to combine the information presented by multiple domain experts. The goal of fusion is to determine the best set of experts in a given problem domain and devise an appropriate function that can optimally combine the decisions rendered by the individual experts; a decision made by a biometric system is either a genuine type of decision or an impostor type of decision. In this paper, we propose a robust multimodal authentication scheme which addresses the problems mentioned above. The proposed scheme integrates three biometric modalities, face, ear and signature, and the three biometric verifiers are fused at the rank level.
Rank-level fusion is a relatively new fusion approach. The goal of rank-level fusion is to consolidate the rank outputs of the individual biometric subsystems in order to derive a consensus rank for each identity. This paper presents an effective fusion scheme that combines the information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system utilizes principal component analysis and Fisher's linear discriminant methods for the individual matchers (face, ear, and signature) and a novel rank-level fusion method to consolidate their results. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data.
II. PROPOSED MODEL
Figure 1 shows the block diagram of the proposed multimodal biometric authentication method integrating face, ear and signature. Eigenimage and fisherface techniques are used in this system for the enrollment and recognition of biometric traits. The goal of rank-level fusion is to consolidate the rank outputs of the individual biometric subsystems (matchers) in order to derive a consensus rank for each identity. There are basically two types of recognition approaches: appearance based and model based. PCA and LDA are examples of appearance-based recognition approaches. PCA is a statistical method which involves the analysis of n-dimensional data: it observes the correspondence between different dimensions and determines the principal dimensions along which the variation of the data is high.

Fig. 1. Block diagram of the proposed multibiometric system.

III. RANK LEVEL FUSION
Rank-level fusion is a relatively new fusion approach whose goal is to consolidate the rank outputs of the individual biometric subsystems in order to derive a consensus rank for each identity. Ross et al. [8] describe three methods to combine the ranks assigned by different matchers: the highest rank method, the Borda count method, and the logistic regression method. In the highest rank method, each possible match is assigned the highest (minimum) rank computed by the different matchers. The Borda count method uses the sum of the ranks assigned by the individual matchers to calculate the final rank. In the logistic regression method, a weighted sum of the individual ranks is calculated, with the weight assigned to each matcher determined by logistic regression [8]; this method is very efficient when the matching modules differ significantly in accuracy, but it requires a training phase to determine the weights. We propose to use all three matchers (face, ear, and signature) and to consider only those identities which appear in the results of at least two matchers; identities which appear in the result of only one matcher are discarded and not considered for the final rank in this system.

Fig. 2. Example of rank-level fusion (adopted from [8]).

Fig. 2 shows an example of the Borda count method and the logistic regression method of rank-level fusion; the lower the value of the rank, the more accurate the result. Here, the ranks for Person 1 are 3, 2, and 1 from the face, ear, and signature matchers, respectively. For the Borda count method, these ranks are added and then divided by 3 (the number of matchers); hence we get two, which is the second rank (as 1.33 is found for Person 2). For the logistic regression method, we have assigned 0.3, 0.4, and 0.3 as the weights for face, ear, and signature, respectively: the larger the weight, the lower the performance, meaning that the ear matcher gives less accurate results than the face or signature matchers. These weights are chosen by reviewing previous results obtained by different researchers and by repeatedly executing the system. For Person 1 we have ranks 3, 2, and 1 from face, ear, and signature, respectively; for the reordered rank calculation, these initial ranks are multiplied by their respective weights (3 multiplied by 0.3, 2 multiplied by 0.4, and 1 multiplied by 0.3). These three new ranks of Person 1 are then added and divided by 3 (the number of matchers), and the new rank 0.67 is found, which in turn is considered rank 2 (second from the lowest) in the final rank list. Also, since Person 5 appears only in the ear matcher's result, it is not considered in the final result.
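A compact Matlab sketch of the three fusion rules applied to this example follows; the ranks assigned to persons other than Person 1 are invented for illustration.

% Rank matrix: rows = matchers (face, ear, signature), columns = persons.
% Only the Person 1 column (3, 2, 1) is taken from the text; the rest is
% invented for illustration.
R   = [3 1 2;
       2 3 1;
       1 2 3];
wgt = [0.3; 0.4; 0.3];                 % weights for face, ear, signature

highest  = min(R, [], 1);              % highest (minimum) rank per person
borda    = sum(R, 1) / size(R, 1);     % Borda count: mean of the ranks
weighted = sum(R .* repmat(wgt, 1, size(R, 2)), 1) / size(R, 1);

% For Person 1: borda = (3+2+1)/3 = 2 and
% weighted = (3*0.3 + 2*0.4 + 1*0.3)/3 = 0.67, matching the text; the
% final identity list is ordered by ascending fused rank.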

This section deals with the development procedure of the proposed multimodal biometric system through the rank-level fusion method. Eigenimage and fisherface techniques are used in this system for the enrollment and recognition of biometric traits. A more detailed representation of the proposed system is shown in Fig. 1.

A. Recognition Using Eigenimage
Eigenimage feature extraction is based on the KL transform [9] and is used to obtain the most important features from the face, ear, and signature subimages in our system. These features are obtained by projecting the original subimages into the corresponding subspaces; the process of obtaining these subspaces and projecting the subimages into them is identical for all subspaces. The system is first initialized with a set of training images, and eigenvectors and eigenvalues are computed on the covariance matrix of these images according to the standard procedure. Fig. 3 shows the average image and eigenimages for face, ear, and signature, respectively. From the eigenvectors (eigenimages) that are created, we choose only the subset with the highest eigenvalues: the higher the eigenvalue, the more characteristic features of an image the particular eigenvector describes, so eigenimages with low eigenvalues can be omitted. Finally, the known images are projected onto the image space and their weights are stored; this process is repeated as necessary. The steps for the recognition process are as follows:
1. Project the test image into the eigenspace, and measure the distance between the unknown image's position in the eigenspace and all the known images' positions in the eigenspace.
2. Select the image closest to the unknown image in the eigenspace as the match.
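A minimal Matlab sketch of this enrollment-and-matching procedure follows; training images are assumed to be vectorized as the columns of A, and the subspace dimension k is an assumption.

% Eigenimage enrollment and nearest-neighbour matching (illustrative).
function id = eigenMatch(A, test, k)
    % A: d x n training matrix; test: d x 1 probe image; k: subspace size
    mu = mean(A, 2);
    X  = A - repmat(mu, 1, size(A, 2));        % center the training set
    [E, D] = eig(X' * X);                      % n x n trick avoids d x d eig
    [~, idx] = sort(diag(D), 'descend');
    U = X * E(:, idx(1:k));                    % top-k eigenimages (d x k)
    U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);  % unit columns
    Wtr = U' * X;                              % stored training weights
    wp  = U' * (test - mu);                    % project the probe
    d2  = sum((Wtr - repmat(wp, 1, size(Wtr, 2))).^2, 1);
    [~, id] = min(d2);                         % closest training image wins
end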

Fig. 3. (a) Average images and (b) eigenimages (for face, ear, and signature).

B. Recognition Using Fisherface
Eigenspace representation is very sensitive to image conditions such as background noise, image shift, occlusion of objects, scaling of the image, and illumination change. When substantial changes in illumination and expression are present in an image, much of the variation in the data is due to these changes [10], and the eigenimage technique cannot give highly reliable results in this case. To overcome this, the fisherface method is adopted. The fisherface method uses both PCA and LDA to produce a subspace projection matrix, similar to that used in the eigenface method; however, the fisherface method is able to take advantage of within-class information, minimizing variation within each class while still maximizing class separation. The scatter matrices representing the within-class (S_W), between-class (S_B), and total (S_T) distributions of the training set through the image space are (as reconstructed from the standard fisherface formulation)

S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T   (1)
S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T   (2)
S_T = S_W + S_B = \sum_{k=1}^{M} (x_k - \mu)(x_k - \mu)^T   (3)

where \mu is the average image vector of the entire training set and \mu_i is the average of each individual class X_i (person). Then, by performing PCA on the total scatter matrix S_T and taking the top M - c principal components, we produce a projection matrix U_pca, which is used to reduce the dimensionality of the within-class scatter matrix before computing the top c - 1 eigenvectors of the reduced scatter matrices, yielding U_fld. Finally, the matrix U_ff = U_pca U_fld is calculated to project a face image into a reduced space of c - 1 dimensions, in which the between-class scatter is maximized for all c classes, while the within-class scatter is minimized for each class.

Fig. 4. Fisherfaces generated from the training set.

Once the U_ff matrix has been constructed, it is used in much the same way as the projection matrix in the eigenface method. Like the eigenface system, the components of the projection matrix can be viewed as images, referred to as fisherfaces, shown in Fig. 4. The recognition procedure for the fisherimage technique is similar to that of the eigenimage technique, and the ranked output for the training set is generated with the same procedure described in the previous section.
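The projection just described can be sketched in a few lines of Matlab; the class labels y in 1..c and the PCA dimension M - c follow the text, while everything else is an illustrative assumption.

% Fisherface projection: PCA to M - c dimensions, then LDA for the top
% c - 1 discriminants (illustrative sketch).
function Uff = fisherface(A, y, c)
    [d, M] = size(A);                          % columns of A are images
    mu = mean(A, 2);
    X  = A - repmat(mu, 1, M);
    [E, D] = eig(X' * X);                      % PCA via the small matrix
    [~, idx] = sort(diag(D), 'descend');
    Upca = X * E(:, idx(1:M - c));             % keep M - c components
    Z = Upca' * X;                             % reduced training data
    Sw = zeros(M - c); Sb = zeros(M - c);
    mz = mean(Z, 2);
    for i = 1:c
        Zi = Z(:, y == i);
        mi = mean(Zi, 2);
        Zc = Zi - repmat(mi, 1, size(Zi, 2));
        Sw = Sw + Zc * Zc';                    % within-class scatter
        Sb = Sb + size(Zi, 2) * (mi - mz) * (mi - mz)';  % between-class
    end
    [V, L] = eig(Sb, Sw);                      % generalized eigenproblem
    [~, idx] = sort(real(diag(L)), 'descend');
    Uff = Upca * V(:, idx(1:c - 1));           % final c - 1 dim projection
end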

V. RESULT AND DISCUSSION
The multibiometric system is implemented in MATLAB 7.0 on a Pentium-IV Windows XP workstation. We compare the various eigenimage techniques and the fisherface technique in terms of FAR and GAR; Fig. 5 shows the results. From the graphs in Fig. 5, it is clear that fisherface works more efficiently than eigenface [Fig. 5(c)]. Among the three eigenimage methods, face-based recognition provides the best performance; between the eigenear and eigensignature methods, the eigensignature method is slightly better than the eigenear method. As shown in the previous section, the fisherface method has advantages over the eigenimage method because it is able to take advantage of within-class information, minimizing variation within each class while still maximizing class separation. Variations in lighting conditions, facial expression, and even a small change in orientation can cause the face image of a person to change from one form to another. Our face database has sets of face images of the same person with expression, illumination, and orientation changes; the fisherface method takes care of these changes, while the eigenimage method does not. Therefore, our system obtains better recognition performance with the fisherface method.
Fig. 5(d) shows the performance of the three different rank-level fusion approaches (highest rank, Borda count, and logistic regression) in terms of GAR and FAR.
Fig. 6 shows the combined receiver operating characteristic (ROC) curves in one graph. From this figure, it is clear that the error rate would be reasonably high without incorporating any fusion method; a significant performance gain can be achieved by combining the rank information of the different monomodal experts. The best performance we have obtained from this system is with the logistic regression approach of the rank-level fusion method: in this method, assigning different weights to the individual matchers based on their accuracy plays a significant role in determining the final result. The second best result is obtained through the Borda count method, which is similar to the logistic regression method except that there is no weight-assigning procedure; this is a vital issue for the performance of a biometric system. The least advantage is obtained with the highest rank method, which considers only the highest rank associated with each user and can often lead to a lower acceptance rate.


[2]
[3]

[4]
[5]

[6]

[7]
Fig. 5. ROC curves for different biometric systems for (a) ear,
(b) signature,(c) face (fisherface and eigenface), and (d) three
different approaches for ranklevel output fusion methods for
combining ear, face, and signature biometric systems

[8]
[9]
[10]

[11]

[12]

[13]
Fig. 6. ROC curves for different biometric systems in terms of
GAR and FAR.

[14]

VI. CONCLUSION

[15]

In this paper, we propose an effective multimodal


biometrics fusion method to get the optimal identification
result. In this a comparison between various rank level
fusion methods are obtained. Between the three ranklevel fusion approaches, the logistic regression method
gives us the better performance in terms of error rates.
The main reason for this is that, in this approach weights
are assigned to different matchers according to their
performance.

[16]

[17]

[18]

In future, a better result can be obtained by


improving the genuine acceptance rate and decreasing
the false acceptance rate. This can be done by using some
specific algorithm for each database.

[19]

REFERENCES

[20]

[1]

A.K. Jain, A. Roerifss, S. Prabhakar. An introduction to


biometric recognition. IEEE ransactions on Circuits and
Systems for Video Technology, Vol.14, 2004, 4-20.

111

A.K. Jain, A. Ross. Multibiometric systems.


Communications of the ACM, Vol. 47, 2004, 34-40.
Z. Liu, S. Sarkar. Outdoor recognition at a distance by
fusing gait and face. Image and Vision Computing, Vol.25,
2007, 817-832.
A. Ross, A. Jain. Information fusion in biometrics. Pattern
Recognition Letters, Vol.24, 2003, 2115-2125.
Y. Wang, T. Tan, A. K. Jain. Combining face and iris
biometrics for identity verification. Proceedings of the 4th
International Conference on Audio-and Video-Based
Biometric Person Authentication (AVBPA), June 9-11,
2003
N.A. Fox, R. Gross, J.F. Cohn, R.B. Reilly. Robust
Biometric Person Identification Using Automatic
Classifier Fusion of Speech, Mouth, and Face
Experts.IEEE Transactions on multimedia, Vol.9, 2007,
701-714.
S. Ben-Yacoub, Y. Abdeljaoued, E. Mayoraz. Fusion of
Face and Speech Data for Person Identity Verification.
IEEE Transactions on Neural Networks, Vol.10, 1999,
1065-1074.
A. Ross, K. Nandakumar, and A. K. Jain, Handbook of
Multibiometrics.New York: Springer-Verlag, 2006.
M. Turk and A. Pentland, Eigenfaces for recognition, J.
Cogn.Neurosci., vol. 3, no. 1, pp. 7186, 1991.
T. Heseltine, N. Pears, J. Austin, and Z. Chen, Face
recognition: A comparison of appearance-based
approaches, in Proc. 7th Digit. Image Comput.: Tech.
Appl., C. Sun, H. Talbot, S. Ourselin, and T. Adriaansen,
Eds., Sydey, Australia, 2003, pp. 5968.
U. M. Bubeck and D. Sanchez, Biometric authentication:
Technology and evaluation, San Diego State Univ., San
Diego, CA, 2003. Tech. Rep.
M. P. Down and R. J. Sands, Biometrics: An overview of
the technology,challenges and control considerations, Inf.
Syst. Control J., vol. 4, pp. 5356, 2004.
Y. Wang, The theoretical framework of cognitive
informatics, Int. J.Cognit. Informat. Nat. Intell., vol. 1,
no. 1, pp. 1022, 2007.
A. K. Jain and A. Ross, Fingerprint mosaicking, in
Proc. IEEEInt. Conf. Acoust., Speech Signal Process,
Orlando, FL, 2002, vol. 4, pp. 40644067.
A. Ross and R. Govindarajan, Feature level fusion using
hand and face biometrics, in Proc. SPIE 2nd Conf.
Biometric Technol. Human Identification, Orlando, FL,
2005, pp. 196204.
K. Chang, K. W. Bower, S. Sarkar, and B. Victor,
Comparison and combination of ear and face images in
appearance-based biometrics,IEEE Trans. Pattern Anal.
Mach. Intell., vol. 25, no. 9, pp. 11601165, Sep. 2003.
G. L. Marcialis and F. Roli, Fingerprint verification by
fusion of optical and capacitive sensors, Pattern Recogn.
Lett., vol. 25, no. 11, pp. 13151322, Aug. 2004.
A. Ross and A. K. Jain, Information fusion in
biometrics, Pattern Recogn. Lett. vol. 24, no. 13, pp.
21152125, Sep. 2003.
T. Kinnunen, V. Hautamki, and P. Frnti, Fusion of
spectral feature sets for accurate speaker identification, in
Proc. 9th Conf. Speech Comput.,St. Petersburg, Russia,
2004, pp. 361365.
A. K. Jain, A. Ross, and S. Pankanti, Biometrics: A tool
for information security, IEEE Trans. Inf. Forensics
Security, vol. 1, no. 2, pp. 125143, Jun. 2006.

An Introduction to Multimodal Biometrics


[21] J. Bhatnagar, A. Kumar, and N. Saggar, A novel
approach to improve biometric recognition using rank
level fusion, in Proc. IEEE Conf. Comput.Vis. Pattern
Recog. Minneapolis, MN, 2007, pp. 16.
[22] L. Hong and A. K. Jain, Integrating faces and fingerprints
for personal identification, IEEE Trans. Pattern Anal.
Mach. Intell., vol. 20, no. 12, pp. 12951307, Dec. 1998.
[23] R. Frischholz and U. Dieckmann, BiolD: A multimodal
biometric identification system, Computer, vol. 33, no. 2,
pp. 6468, Feb. 2000.
[24] J. Fierrez-Aguilar, J. Ortega-Garcia, D. Garcia-Romero,
and J. Gonzalez-Rodriguez, A comparative evaluation of
fusion strategies for multimodal biometric verification, in
Proc. 4th Int. Conf. Audio- Video-Based Biometric Person
Authentication.2003,Vol. LNCS 2688, pp. 830837.
[25] A. Kumar, D. C. M. Wong, H. C.Shen1, and A. K. Jain,

Personal verification using palmprint and hand geometry


biometric, in Proc. 4th Int.Conf. Audio- Video-Based
Biometric Person Authentication, J. Kittler and M. Nixon,
Eds., 2003, vol. LNCS 2668, pp. 668678.
[26] K. A. Toh, X. D. Jiang, and W. Y. Yau, Exploiting global
and local decisions for multi-modal biometrics
verification, IEEE Trans. Signal Process. vol. 52, no. 10,
pp. 30593072, Oct. 2004.
[27] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. K.
Jain, Large scale evaluation of multimodal biometric
authentication using state-of the-art systems, IEEE Trans.
Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 450455,
Mar. 2005.
[28] A. K. Jain, K. Nandakumar, and A. Ross, Score
normalization in multimodal biometric systems, Pattern
Recognit., vol. 38, no. 12, pp. 22702285, 2005.

112

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.113-117.

Efficient IRIS Recognition Based on Improvement of Feature Extraction

M. Suganthy1 and R. Gayathri2
1 Research Scholar (Anna University), Asst. Prof. (VelTech, Chennai), suganthym46@gmail.com
2 Research Scholar (Anna University), Asst. Prof. (VelTech, Chennai), gayathricontact@gmail.com

Abstract: Of the two classes of biometric methods, physical and behavioural, the former is more secure and accurate than the latter. Iris recognition, a relatively new biometric technology, has great advantages such as variability, stability and security; thus it is the most promising for high-security environments. A database of grayscale eye images was used to determine the performance of the recognition system. The iris is the part of the eye between the eyelids and the surrounding region. The selection of the optimal feature subset and the classification have become important issues in the field of iris recognition. In this paper we propose several methods for iris feature subset selection and vector creation. The deterministic feature sequence is extracted from the iris image using the contourlet transform, which captures the intrinsic geometrical structures of the iris image: it decomposes the iris image into a set of directional sub-bands with texture details captured in different orientations at various scales. To reduce the feature vector dimensions, we use a method that extracts only the significant bits and information from the normalized iris images, ignoring the fragile bits. Finally, we use an SVM (Support Vector Machine) classifier to estimate the identification rate of our proposed system. Experimental results show that the proposed method reduces the processing time and increases the classification accuracy, and that the iris feature vector length is much smaller than in other methods.
Keywords: Biometric, Iris Recognition, Contourlet, Support Vector Machine (SVM)

I. INTRODUCTION
An iris-based recognition system can be noninvasive to the users, since the iris is an internal organ that is also externally visible, which is of great importance for real-time applications [4]. The iris is well protected from the environment and stable over time. The function of the iris is to control the amount of light entering through the pupil, and this is achieved with the help of the sphincter and dilator muscles, which adjust the size of the pupil. Due to the epigenetic nature of iris patterns, every individual has a unique iris, in contrast with other biometric features such as face and fingerprints; even the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns. Figure 1 shows the irises of different persons.

Figure 1: Different Human Iris

First, the image preprocessing step performs the localization of the pupil, detects the iris boundary, and isolates the collarette region, which is regarded as one of the most important areas of the complex iris pattern. The collarette region is less sensitive to pupil dilation and is usually unaffected by the eyelids and the eyelashes [8]. We also detect the eyelids and the eyelashes, which are the main sources of possible occlusion. In order to achieve invariance to translation and scale, the isolated annular collarette area is transformed to a rectangular block of fixed dimensions. The discriminating features are extracted from the transformed image, and the extracted features are used to train the classifiers. The optimal feature subset is selected using several methods to increase the matching accuracy, based on the recognition performance of the classifiers. Based on the technology developed by Daugman [3, 5, 6], iris scans have been used in several international airports for the rapid processing of passengers through immigration who have pre-registered their iris images.

A. The Main Steps in the Proposed Method
Figure 2 illustrates the main steps of our proposed approach. First, the image preprocessing step performs the localization of the pupil, detects the iris boundary, and isolates the collarette region. In order to achieve invariance to translation and scale, the isolated annular collarette area is transformed to a rectangular block of fixed dimensions. The discriminating features are extracted from the transformed image and used to train the classifiers, and the optimal feature subset is selected using several methods to increase the matching accuracy, based on the recognition performance of the classifiers.

An iris recognition method based on the 2D wavelet transform for feature extraction and direct linear discriminant analysis for feature reduction, with SVM techniques as the iris pattern classifiers, was used in [31]. The performance of an iris-based identification system has been analyzed at the matching score level using a statistical matcher that analyzes the average Hamming distance between two codes. A biometric system achieving the offline verification of certified and cryptographically secured documents has been reported, and an iris recognition method has been proposed based on the histogram of local binary patterns to represent the iris texture, with a graph matching algorithm for structural classification. An elastic iris blob matching algorithm was proposed to overcome the limitations of local feature based classifiers (LFC), and, in order to recognize the various iris images properly, a novel cascading scheme was used to combine the LFC and an iris blob matcher.

B. Literature Survey
In [14], iris recognition technology was applied in mobile phones. In [15], correlation filters were utilized to measure the consistency of iris images from the same eye. An interesting solution to defeat fake-iris attacks based on the Purkinje image was depicted in [16]. An iris image was decomposed in [17] into four levels using the 2D Haar wavelet transform, the fourth-level high-frequency information was quantized to form an 87-bit code, and a modified competitive learning neural network (LVQ) was adopted for classification. A modification to the Hough transform was made to improve iris segmentation, and an eyelid detection technique was used in which each eyelid was modeled as two straight lines. A matching method was implemented and its performance was evaluated on a large dataset. A personal identification method based on iris texture analysis was described, and an algorithm was proposed for iris recognition by characterizing the key local variations.
A phase-based iris recognition algorithm has been proposed in which the phase components of the 2D discrete Fourier transform of the iris image are used with a simple matching strategy. Another iris recognition algorithm exploits integro-differential operators to detect the inner and outer boundaries of the iris, Gabor filters to extract the unique binary vectors constituting the iris code, and a statistical matcher that analyzes the average Hamming distance between two codes. The performance of an iris-based identification system was analyzed at the matching score level, and a biometric system called EyeCerts, which achieves the offline verification of certified and cryptographically secured documents, was reported for the identification of people.

II. IRIS IMAGE PREPROCESSING
First, we outline our approach; further details are described in the following subsections. The iris is surrounded by various non-relevant regions such as the pupil, the sclera and the eyelids, and by noise caused by the eyelashes, the eyebrows, reflections and the surrounding skin [9]. We need to remove this noise from the iris image to improve the iris recognition accuracy.

A. Detection of Eyelids, Eyelashes and Noise
Eyelids are detected using the linear Hough transform by fitting a line to the upper and lower eyelids; a second line is then drawn intersecting the iris edge closest to the pupil. Separable eyelashes are detected using 1D Gabor filters, while multiple eyelashes are detected using the variance of intensity: if the values in a small window are less than a threshold, the centre of the window is considered a point on an eyelash.

B. Iris Normalization


We use the rubber sheet model [12] for the normalization of the isolated collarette area. The center value of the pupil is considered the reference point, and radial vectors are passed through the collarette region. Filters are then applied to the iris images to extract information about the iris texture. The fractional Hamming distance between two iris codes is computed, and decisions about the identity of a person are based on it; however, not all bits in an iris code are equally useful. For a given iris image, a bit in its corresponding iris code is defined as fragile if there is any substantial probability of it ending up a 0 for some images of the iris and a 1 for other images of the same iris.
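A minimal Matlab sketch of this masked fractional Hamming distance follows; treating the fragile/occluded positions as mask bits is an assumption of the sketch, since the paper does not spell out its exact masking rule.

% Fractional Hamming distance between two binary iris codes, counting
% only positions valid in both masks (fragile/noisy bits masked out).
function hd = fracHamming(codeA, codeB, maskA, maskB)
    valid = maskA & maskB;                     % bits usable in both codes
    hd = sum(xor(codeA(valid), codeB(valid))) / max(sum(valid(:)), 1);
end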
Figure 3: Normalization procedure on the CASIA dataset (black region: region of interest; white region denotes noise).

C Contourlet Transform

A. GRAY LEVEL CO-OCCURRENCE MATRIX (GLCM)


The technique uses the GLCM of an image, and it
provides a simple approach to capture the spatial
relationship between two points in a texture pattern. It is
calculated from the normalized iris image using pixels as
primary information. The GLCM is a square matrix of size
G × G, where G is the number of gray levels in the image.
Each element in the GLCM is an estimate of the joint
probability of a pair of pixel intensities in predetermined
relative positions in the image. The (i, j)-th element of the
matrix is generated by finding the probability that if the pixel
location (x, y) has gray level Ii, then the pixel location
(x+dx, y+dy) has gray level Ij. The offsets dx and dy are
defined by considering various scales and orientations.
Various textural features have been defined based on the work
done by Haralick. These features are derived by weighting each
of the co-occurrence matrix values and then summing these
weighted values to form the feature value.
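A minimal sketch of this step using scikit-image, assuming its graycomatrix/graycoprops helpers; the quantization level, the (dx, dy) offsets expressed as distances and angles, and the particular Haralick statistics chosen here are illustrative.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(norm_iris, levels=32):
    """Quantize the normalized iris image to `levels` gray levels, build
    GLCMs for a few offsets, and derive Haralick-style statistics, i.e.
    weighted sums over the joint-probability matrix."""
    img = (norm_iris.astype(float) / norm_iris.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 4, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).ravel()
             for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
    return np.concatenate(feats)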

C. Contourlet Transform

The contourlet transform (CT) allows a different and
flexible number of directions at each scale. CT is
constructed by combining two distinct decomposition
stages: a multiscale decomposition followed by a directional
decomposition. Directionality and anisotropy are the
important characteristics of the contourlet transform.
Directionality means having basis functions in many
directions, whereas the wavelet transform offers only three
directions. The anisotropy property means the basis functions
appear at various aspect ratios, whereas wavelets are separable
functions and thus their aspect ratio is one. Due to these
properties, CT can efficiently handle 2D singularities, i.e.,
edges in an image. This property is exploited in this paper for
extracting directional features with various pyramidal and
directional filters.
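Contourlet implementations are not part of the standard Python imaging stack, so the sketch below only mirrors the two-stage structure described above, a Laplacian-pyramid multiscale stage followed by a directional stage; the simple oriented filters here are a stand-in for a true directional filter bank, not the paper's filters.

import numpy as np
from scipy import ndimage

def contourlet_like(img, n_scales=3):
    """Two-stage decomposition in the spirit of the contourlet transform:
    a multiscale (Laplacian pyramid) stage followed by a directional
    stage. Oriented derivative filters stand in for the DFB here."""
    subbands, current = [], img.astype(float)
    for _ in range(n_scales):
        low = ndimage.gaussian_filter(current, sigma=1.0)
        bandpass = current - low                      # multiscale stage
        for angle in (0, 45, 90, 135):                # directional stage (stand-in)
            rotated = ndimage.rotate(bandpass, angle, reshape=False, order=1)
            subbands.append(ndimage.sobel(rotated, axis=1))
        current = low[::2, ::2]                       # downsample for the next scale
    return subbands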
D. The Best Bits in an Iris Code

The fractional Hamming distance weights all bits in an iris
code equally. However, not all bits in an iris code are equally
useful: a fragile bit may end up a 0 for some images of the iris
and a 1 for other images of the same iris.


The Hamming distance between the vectors of the generated
coefficients is calculated. Numbers ranging from 0 to 0.5 for
the inter-class distribution and from 0.45 to 0.6 for the
intra-class distribution are included. In total, 192699 inter-class
comparisons and 1679 intra-class comparisons are carried out.
In Figure 7 you can see the inter-class and intra-class
distributions. In implementing this method, we have used the
point 0.42 as the inter-class and intra-class separation point.

Figure 4: Percentage of fragile bits in the iris pattern [52]

B. COMBINATION OF LOCAL AND GLOBAL PROPERTIES
IN AN IRIS IMAGE
Another method we used for creating the iris feature vector
relies on local and global properties of an iris image. The
detailed changes in an iris image are called the local
properties. For example, edges are considered a local property.
The edges should be extracted from the lower levels, because in
the upper levels the edges are usually removed. Another point is
that the first level is usually very sensitive to noise. By
studying the coefficients, we find that where edges are present
the coefficient is positive, and where the edges are gone the
coefficient is negative. After running the contourlet transform,
the extreme value, which could be positive or negative, is reached.
C. THE CREATION OF THE IRIS FEATURE VECTOR
BY USING PCA AND ICA
In this method, the features in question are extracted from the
generated sub-bands using the PCA (Principal Component
Analysis) and ICA (Independent Component Analysis)
techniques. PCA is a classic method for analyzing statistical
data, extracting features and condensing data. By transforming
the data, this method provides an appropriate representation
with smaller dimensions and less redundant information. ICA is
also a statistical method for finding the components of
multivariable data. However, what makes this method distinct
from other methods of data representation is that it searches for
statistically independent components that are at the same time
non-Gaussian in distribution. In fact, this method can be
regarded as an extension of methods such as PCA and FA
(Factor Analysis); it is comparatively stronger and is of much
use in many cases where the classic methods are insufficient.
Similar to the PCA method, we used the level 3 sub-bands of the
contourlet transform for creating the iris feature vector.
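A minimal sketch using scikit-learn's PCA and FastICA on the flattened level-3 sub-band coefficients (one sample per row); the component count is an illustrative assumption, not the paper's value.

import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_ica_features(subband_matrix, n_components=24):
    """Project the level-3 sub-band coefficients onto a smaller basis
    with PCA, and separately extract statistically independent
    components with FastICA."""
    pca_feats = PCA(n_components=n_components).fit_transform(subband_matrix)
    ica_feats = FastICA(n_components=n_components,
                        random_state=0).fit_transform(subband_matrix)
    return pca_feats, ica_feats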

D. Feature Vector in the Coefficient Domain

One of the most common methods for creating a feature
vector is using the coefficients extracted by various
transformations such as Gabor filters, wavelets, etc. Daugman
made use of this technique in his method. In our proposed
method, in this section the feature vector is created from the
coefficients extracted at level 2 of the contourlet transform.
Techniques for decreasing the vector dimensions are also used.

Figure 5: (a) Inter-class and (b) intra-class distributions.

2) Non-Linear Approximation Coefficients (NLAC): In this
method we use non-linear approximation coefficients to select
the significant coefficients from the binary feature vector we
create in the last section. For this purpose we use the following
formula:

Nsignif = round(npixel × 2.5 / 100)    (1)

where npixel is the number of pixels in the normalized iris image
and Nsignif is the number of significant coefficients.
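A short sketch of the NLAC selection of Eq. (1); interpreting "significant" as the largest-magnitude coefficients is our assumption.

import numpy as np

def significant_coefficients(coeffs, n_pixel):
    """Non-linear approximation: keep only the Nsignif largest-magnitude
    coefficients, with Nsignif = round(n_pixel * 2.5 / 100) as in Eq. (1)."""
    n_signif = int(round(n_pixel * 2.5 / 100))
    flat = np.asarray(coeffs, dtype=float).ravel()
    keep = np.argsort(np.abs(flat))[-n_signif:]  # indices of largest magnitudes
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(np.shape(coeffs)), n_signif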
3) Genetic Algorithm (GA): Optimal feature subset selection
with the aid of a genetic algorithm is studied in this section.
For creating the iris feature vector we use the level 2 binary
coefficients, and by using the GA we try to reduce the
dimensions of the iris feature vector.
Our problem consists of optimizing two objectives:
(i) minimization of the number of features, and
(ii) minimization of the recognition error rate of the classifier.
Therefore, we deal with a multi-objective optimization
problem. Table I lists the parameters used in the
genetic algorithm.
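A compact sketch of the GA loop using the Table I parameters; the weighted-sum fitness and the classifier_error hook are illustrative assumptions (the paper does not spell them out), and the generation count, missing from the extracted table, is set arbitrarily to 100 here.

import numpy as np

rng = np.random.default_rng(0)

def classifier_error(chromosome):
    # Placeholder hook: train/evaluate the SVM on the selected features
    # and return its error rate. Hypothetical, not from the paper.
    return rng.random()

def ga_feature_selection(dim=600, pop_size=108, p_cross=0.65,
                         p_mut=0.002, generations=100, w_feat=0.01):
    """Binary-chromosome GA: each gene marks one feature as selected (1)
    or not (0); fitness trades off error rate against subset size."""
    pop = rng.integers(0, 2, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([classifier_error(c) + w_feat * c.mean() for c in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]  # lower is better
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < p_cross:        # one-point crossover
                cut = rng.integers(1, dim)
                a = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(dim) < p_mut    # bitwise mutation
            children.append(np.where(flip, 1 - a, a))
        pop = np.array(children)
    scores = [classifier_error(c) + w_feat * c.mean() for c in pop]
    return pop[int(np.argmin(scores))]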

1) Binary vector creation with coefficients: As stated in the
previous section, the level 2 sub-bands are extracted and,
according to the following rule, are converted into binary form:
if Coeff(i) >= 0 then NewCoeff(i) = 1, else NewCoeff(i) = 0.
The Hamming distance between the vectors of the generated
coefficients is then calculated.
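A minimal sketch of the binarization rule and the fractional Hamming distance used to compare the resulting vectors:

import numpy as np

def binarize(coeffs):
    """Coeff(i) >= 0 -> 1, else 0, as in the rule above."""
    return (np.asarray(coeffs).ravel() >= 0).astype(np.uint8)

def fractional_hamming(code_a, code_b):
    """Fraction of disagreeing bits between two binary feature vectors;
    values near 0 suggest the same iris, near 0.5 different irises."""
    return np.count_nonzero(code_a != code_b) / code_a.size

A separation point of 0.42, as used in this method, would then decide whether a comparison is intra-class or inter-class.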

Figure 6: Binary feature vector of l dimensions (l = feature dimension, e.g. 1111100000011100000111100011110000); a 1 means the corresponding feature is selected for the classifier, a 0 means it is not.
Table I: GA Parameters

Parameters                   CASIA Dataset
Population size              108 (the scale of the iris sample)
Length of chromosome code    600 (selected dimensionality of the feature sequence)
Crossover probability        0.65
Mutation probability         0.002
Number of generations

Average Absolute Deviation (AAD): In this algorithm, the
feature value is the average absolute deviation (AAD) of each
output image, defined as follows:

AAD = (1/N) Σx,y |f(x, y) − m|    (2)

where N is the number of pixels in the image, m is the mean of
the image, and f(x, y) is the value at point (x, y). The AAD
feature is a statistic similar to variance, but experimental
results show that the former gives slightly better performance
than the latter. The average absolute deviation of each filtered
image constitutes the components of our feature vector.
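Equation (2) in code form, as a minimal sketch:

import numpy as np

def aad(filtered):
    """Average absolute deviation of one filtered image, Eq. (2):
    AAD = (1/N) * sum_{x,y} |f(x, y) - m|, with m the image mean."""
    f = np.asarray(filtered, dtype=float)
    return np.abs(f - f.mean()).mean()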
IV. EXPERIMENTAL RESULT
In our experiments, a three-level contourlet decomposition is
adopted. The experiments are performed in Matlab 7.0.
The normalized iris image is obtained from the localized iris
image segmented by Daugman's method. We have used the filters
designed by A. Cohen, I. Daubechies, and J.-C. Feauveau for the
quincunx filter banks in the DFB stage. In Table II we compare
our proposed methods with some other well-known methods from
three viewpoints: feature vector length, percentage of correct
classification, and feature extraction time. We also modified the
classifier of each well-known method to SVM for a better comparison.
Table II: Comparison of our proposed methods with some
well-known methods.

Method                                     Vector Length (bit)   Classifier (original / SVM)   Correct Classification (%) (original / SVM)   Feature Extraction (ms)

Well-known methods:
Daugman [3]                                2048                  HD / SVM                      100 / 100                                     628.5
Lim [17]                                   87                    LVQ / SVM                     90.4 / 92.3                                   180
Jafar Ali [48]                             87                    HD / SVM                      92.1 / 92.8                                   260.3
Ma [20]                                    1600                  ED / SVM                      95.0 / 95.9                                   80.3

Our proposed methods:
GLCM [54]                                  21                    SVM                           94.2                                          20.3
GLCM (combining sub-bands)                 56                    SVM                           96.3                                          20.3
Local and Global Feature                   24                    ED                            78.6                                          20.3
Feature vector in the coefficient domain   2520                  HD                            90.32                                         110

V. CONCLUSION

In this paper we proposed an effective algorithm for iris
feature extraction using the contourlet transform. To reduce the
iris feature vector we use several techniques. For segmentation
and normalization we use Daugman's methods. The contourlet
transform is used to extract the discriminating features, and
several methods are applied for the feature subset selection.
Our proposed methods can classify the iris feature vector
properly. The rate of correct classification for the fairly
large number of experimental data in this paper verifies this
claim. In other words, most methods proposed in this paper
provide a smaller feature vector length with only an
insignificant reduction in the percentage of correct classification.

VI. REFERENCES
[1] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
[2] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in a Networked Society, Kluwer Academic Publishers, Norwell, Mass, USA, 1999.
[3] J. Daugman, "Biometric personal identification system based on iris analysis," US patent no. 5291560, 1994.
[4] T. Mansfield, G. Kelly, D. Chandler, and J. Kane, "Biometric product testing," Final Report, National Physical Laboratory, Middlesex, UK, 2001.
[5] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993.
[6] J. Daugman, "Demodulation by complex-valued wavelets for stochastic pattern recognition," International Journal of Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.
[7] CASIA, Chinese Academy of Sciences Institute of Automation. Database of 756 Grayscale Eye Images. http://www.sinobiometrics.com, Version 1.0, 2003.


[8] X. He and P. Shi, "An efficient iris segmentation method for recognition," in Proceedings of the 3rd International Conference on Advances in Pattern Recognition (ICAPR 05), vol. 3687 of Lecture Notes in Computer Science, pp. 120-126, Springer, Bath, UK, August 2005.
[9] K. Bae, S. Noh, and J. Kim, "Iris feature extraction using independent component analysis," in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA 03), vol. 2688, pp. 1059-1060, Guildford, UK, June 2003.
[10] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.
[11] S. C. Chong, A. B. J. Teoh, and D. C. L. Ngo, "Iris authentication using privatized advanced correlation filter," in Proceedings of the International Conference on Advances on Biometrics (ICB 06), vol. 3832 of Lecture Notes in Computer Science, pp. 382-388, Springer, Hong Kong, January 2006.
[12] J. Daugman, "Statistical richness of visual phase information: update on recognizing persons by iris patterns," International Journal of Computer Vision, vol. 45, no. 1, pp. 25-38, 2001.
[13] X. He and P. Shi, "An efficient iris segmentation method for recognition," in Proceedings of the 3rd International Conference on Advances in Pattern Recognition (ICAPR 05), vol. 3687 of Lecture Notes in Computer Science, pp. 120-126, Springer, Bath, UK, August 2005.
[14] D. S. Jeong, H.-A. Park, K. R. Park, and J. Kim, "Iris recognition in mobile phone based on adaptive Gabor filter," in Proceedings of the International Conference on Advances on Biometrics (ICB 06), vol. 3832 of Lecture Notes in Computer Science, pp. 457-463, Springer, Hong Kong, January 2006.
[15] B. V. K. Vijaya Kumar, C. Xie, and J. Thornton, "Iris verification using correlation filters," in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA 03), vol. 2688 of Lecture Notes in Computer Science, pp. 697-705, Guildford, UK, June 2003.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.118-126.

Hand Gesture Recognition using Image Processing


C. Praveen
M.Tech, Embedded System, VelTech Dr.RR&Dr.SR Technical University, Chennai.
Email: Praveenchellapandian@gmail.com
Abstract In this paper, we describe techniques for
barehanded interaction between human and computer.
Barehanded means that no device and no wires are attached
to the user, who controls the computer directly with the
movements of his/her hand.
Our approach is centered on the needs of the user. We
therefore define requirements for real-time barehanded
interaction, derived from application scenarios and usability
considerations. Based on those requirements a finger-finding
and hand-posture recognition algorithm is developed
and evaluated.
To demonstrate the strength of the algorithm, we build three
sample applications. Finger tracking and hand posture
recognition are used to paint virtually onto the wall, to
control a presentation with hand postures, and to move
virtual items on the wall during a brainstorming session. We
conclude the paper with user tests, which were conducted to
prove the usability of bare-hand human computer
interaction.
Categories and Subject Descriptors: H.5.2 [Information
interfaces and presentation]: User interfaces - Input devices
and strategies; I.5.5 [Pattern recognition]: Implementation -
Interactive systems.
General Terms: Algorithms, Design, Experimentation,
Human Factors
Keywords: Computer Vision, Human-computer Interaction,
Real-time, Finger Tracking, Hand-posture Recognition,
Bare-hand Control.

I. INTRODUCTION
For a long time research on human-computer interaction
has been restricted to techniques based on the use of a
graphic display, a keyboard and a mouse. Recently this
paradigm has changed. Techniques such as vision, sound,
speech recognition, projective displays and context-aware
devices allow for a much richer, multi-modal interaction
between man and machine.
Today there are many different devices available for
hand-based human-computer interaction. Some examples
are keyboard, mouse, track-ball, track-pad, joystick,
electronic pens and remote controls. More sophisticated
examples include cyber-gloves, 3D-mice (e.g. Labtec's
Spaceball) and magnetic tracking devices (e.g. Polhemus'
Isotrack). But despite the variety of new devices,
human-computer interaction still differs in many ways from
human-to-human interaction. Natural interaction between
humans does not involve devices because we have the
ability to sense our environment with eyes and ears. In
principle, the computer should be able to imitate those
abilities with cameras and microphones.
In this paper, we will take a closer look at human-computer
interaction with the bare hand. In this context,
bare means that no device has to be in contact with the
body to interact with the computer. The position of the
hand and the fingers will be used to control applications
directly.
Our approach will be centered on the needs of the user.
Requirements derived from usability considerations will
guide our implementation, i.e. we will not try to solve
general computer vision problems, but rather find specific
solutions for a specific scenario.
In the next section of the paper, we will describe
applications, where bare-hand input is superior to
traditional input devices. Based on these application
scenarios, we will set up functional and non-functional
requirements for bare-hand human-computer interaction
systems.
In the fourth section, there will be an overview of
possible approaches and related work on the problem.
Because none of the current hand-finding and -tracking
systems sufficiently fulfill our requirements, we
developed our own algorithm, which will be described in
sections five and six. To demonstrate the strength of the
algorithm, we built three sample applications that are
presented in section seven. We conclude the paper with an
evaluation of the performance of our algorithm and
describe user tests, which were conducted to prove the
usability of bare-hand human-computer interaction.
II. APPLICATIONS
We found three main application scenarios for bare-hand
human-computer interaction. First, in many cases the bare
hand is more practical than traditional input devices:


 During a presentation, the presenter does not have to


move back and forth between computer and screen to
select the next slide.
 Remote controls for television sets, stereos and room
lights could be replaced with the bare hand.
 During video conferences, the camera's attention
could be acquired by stretching out a finger, similar
to a classroom situation.
 Household robots could be controlled with hand
gestures.
 Mobile devices with very limited space for user
interfaces could be operated with hand gestures.
Additionally, perceptual interfaces allow the creation of
computers that are not perceived as such. Without
monitor, mouse and keyboard, a computer can hide in
many places, such as household appliances, cars, vending
machines and toys. The main advantages of perceptual
interfaces over traditional buttons and switches are as
follows:
 Systems can be integrated on very small surfaces.
 Systems can be operated from a certain distance.
 The number of mechanical parts within a system can
be reduced, making it more durable.
 Very sleek designs are possible (imagine a CD-Player
without a single button).
 Systems can be protected from vandalism by creating
a safety margin between the user and the device.
 In combination with speech recognition, the
interaction between human and machine can be
greatly simplified.
Finally, there is a class of applications, which can be built
in combination with a projector. Virtual objects that are
projected onto the wall or onto a table can be directly
manipulated with the fingers. This setup can be useful in
several ways:
 Several persons can simultaneously work with the
objects projected onto the wall.
 Physical systems, such as a schedule on the wall, can
be replaced with digital counterparts. The digital
version can be easily stored, printed, and sent over the
Internet.
 If projector and camera are mounted in a place that is
not accessible for the user, an almost indestructible
interface can be built. To the user, the computer
physically only consists of the wall at which the
interface is projected.
It has to be noted that speech recognition systems might
also be able to provide some of the listed properties.
Vision-based techniques have the advantage that they do
not disturb the flow of conversation (e.g. during a
presentation) and work well in noisy environments (e.g.
for public-space installations).
III. REQUIREMENTS
In this section, we will define requirements for real-time
human-computer interaction with bare hands that will
guide our implementation and evaluation later on.
A. Functional Requirements
Functional requirements can be described as the
collection of services that are expected from a system. For
a software system, these services can cover several layers
of abstraction. In our context, only the basic services are
of interest.
We identify three essential services for vision-based
human-computer interaction: detection, identification and
tracking. We will briefly present the three services and
describe how they are used by our envisaged applications.
a) Detection
Detection determines the presence and position of a class
of objects in the scene. A class of objects could be body
parts in general, faces, hands or fingers. If the class
contains only one object type, and there is just one object
present in the scene at a time, detection suffices to build
simple applications.
For example, if we detect the presence of fingertips and
we constrain our application to one fingertip at a time, the
detection output can be used to directly control a mouse
pointer position. For more complex applications, such as
hand posture recognition and multi-handed interaction,
we will need an additional identification and tracking
stage.
b) Identification
The goal of identification is to decide which object from a
given class of objects is present in the scene.
For bare-hand interaction, different identification tasks
are potentially interesting:
 Identification of a certain hand posture: Many
applications can be realized with the reliable
identification of a stretched-out forefinger. Examples:
finger-driven mouse pointer, recognition of space-time
gestures, moving projected objects on a wall, etc.
 Number of fingers visible: Applications often
need only a limited number of commands (e.g.
simulation of mouse buttons, next slide/previous
slide command during a presentation). Those
commands can be controlled by the number of
fingers presented to the camera.
 2D-positions of fingertips and the palm: In
combination with some constraints derived from the


hand geometry, it is possible to decide which fingers
are presented to the camera. Theoretically, thirty-two
different finger configurations can be detected with
this information. For non-piano players only a subset
of about 13 postures will be easy to use, though.
 3D-position of all fingertips and two points on the
palm: As shown by [7], those parameters uniquely
define a hand pose. Therefore, they can be used to
extract complicated postures and gestures. An
important application is automatic recognition of hand
sign languages.
All those identification tasks are solved in the literature,
as long as the background is uniform and the speed of
hand movement is restricted (see section four). In order to
limit the difficulty of the problem, we limit ourselves to
the first two points. This allows us to build a system with a
minimum number of constraints on the user and the
environment while still providing the required services for
our targeted applications.

Fig. 1. Motion blurring. (a) Resting hand (b) Normal movement (c) Fast movement

c) Tracking
In most cases, the identified objects will not rest in the
same position over time. If two objects of the same class
are moving in the scene, tracking is required to be able to
tell which object moved where between two frames.
There are two basic approaches to deal with the tracking
problem. First, it is possible to remember the last known
position of an identifi ed object. Given some known
constraints about the possible movements of an object
between two frames, a tracking algorithm can try to
follow the object over time.
The second possibility is to re-run the identification stage
for each frame. While this approach might seem rather
crude and also requires a very fast identification
algorithm, it might be the only feasible tracking technique
for unconstrained hand motion for two reasons:
 Measurements show that hands can reach speeds of 5
m/s during normal interaction. At a frame rate of 25
frames/s the hand jumps with steps of up to 20cm
per frame.
 As shown in Figure 1, fast finger motion results in
strong motion blurring. The blurred fingers are almost
impossible to identify and therefore lead to unknown
finger positions during fast motion.

The two effects combined typically result in distances of
about one meter between two identified finger positions
for fast hand movements. This requires the tracking
algorithm to search through most of the image for each
frame. By skipping the tracking stage altogether one does
not lose much processing speed, but gains a lot of
stability (every tracker loses track from time to time, e.g.
due to occlusion, and has to be restarted; by restarting the
system at each frame, many tracking problems can be avoided).
B. Non-Functional Requirements
Non-functional requirements describe the minimum
quality expected from a service.
a) Latency
Latency is the time gap between an action of the user and
the system response. There is no system without latency,
so the basic question is what is the maximum acceptable
latency for our system.
Several studies ([10], [18]) have shown that user
performance degrades significantly at high latencies.
However, it is difficult to derive a maximum acceptable lag
from those studies, because the answer differs depending
on the chosen task and the performance degradation is
gradual.
We therefore take a different approach to the problem: the
described applications require real-time interaction, which
we define as interaction without a perceivable delay. A
classical experiment conducted by Michotte and reported
in [3] shows that users perceive two events as connected
by immediate causality if the delay between the events
is less than 50ms. The perception of immediate causality
implies that the user does not notice a delay between the
two events. We therefore conclude that the maximum
acceptable latency for real-time applications is in the area
of 50ms, resulting in a required minimum processing
frequency of 20Hz.
b) Resolution
The required spatial resolution depends on the
application. For point-and-click tasks, the smallest
possible pointer movement should be at most as large as
the smallest selectable object on the screen. For other
applications, such as simple gesture interfaces, the output
resolution does not affect the quality of the service, but
detection and identification processes require a minimum
input resolution. For example, we found it difficult to
identify fingers with a width below six pixels in the
image.
c) Stability
A tracking method can be called stable if the measured
position does not change, as long as the tracked object
does not move. There are several possible sources of
instability, such as changing light conditions, motion of
distracting objects and electrical noise.
The stability of a system can be measured by calculating
the standard deviation of the output data for a non-moving
object over a short period.
As a necessary condition, the standard deviation has to be
smaller than the smallest object the user can select on a
screen (e.g. a button). As a sufficient condition, it should
be smaller than the smallest displayable position change
of the pointer, to avoid annoying oscillation of the pointer
on the screen.
IV. RELATED WORK
In the last ten years, there has been a lot of research on
vision-based hand gesture recognition and finger tracking.
Interestingly, there are many different approaches to this
problem with no single dominating method. The basic
techniques include color segmentation [8], [12], infrared
segmentation [13], blob models [6], contours [13], [9],
[15], correlation [4], [11] and wavelets [17].
Typical sample applications are finger-driven mice [11],
[12], finger-driven drawing applications [4], [6], [9],
bare-hand game control [5], [15], and bare-hand
television control [5].
Most authors use some kind of restriction, to simplify the
computer vision process:
 Non real-time calculations [17]

V. HAND SEGMENTATION
When processing video images, the basic problem lies in
the extraction of information from a vast amount of data.
The Matrox Meteor frame grabber, for example, captures
over 33 megabytes of data per second, which has to be
reduced to a simple fingertip position value in fractions
of a second.
The goal of the segmentation stage is to decrease the
amount of image information by selecting areas of
interest. Due to processing power constraints, only the
most basic calculations are possible during segmentation.
Typical hand segmentation techniques are based on stereo
information, color, contour detection, connected
component analysis and image differencing.
Each technique has its specific disadvantages:
Stereo image based segmentation requires a hardware
setup that currently only can be found in laboratories.
Color segmentation is sensitive to changes in the overall
illumination [19]. In addition, it is prone to segmentation
errors caused by objects with similar colors in the image.
It also fails if colors are projected onto the hand (e.g.
during a presentation).
Contour detection tends to be unreliable for cluttered
backgrounds. Much stability is obtained by using a
contour model and post-processing with the condensation
algorithm, but this restricts the maximum speed of hand
movement [2].

 Colored gloves [8]


 Expensive hardware requirements (e.g. 3D-camera or
infrared-camera) [13], [14]
 Restrictive background conditions [15]
 Explicit setup stage before starting the tracking [4]
 Restrictions on the maximum speed of hand
movements [4], [6], [9], [11]
Most systems additionally have problems in the case of
changing light conditions and background clutter. The
systems described in [6] and [9] seem to perform the best
under such difficult conditions.
None of the presented work provides a robust tracking
technique for rapid hand movements. In addition, most
systems require some kind of setup-stage before the
interaction can start. Finally, none of the reviewed
systems allows simultaneous tracking of several fingers in
real-time.

Connected component algorithms tend to be heavy in
computational requirements, making it impossible to
search through the whole image in real-time. Successful
systems employ tracking techniques, which again restrict
the maximum speed of movement [6].
Image differencing generally only works well for moving
objects and requires sufficient contrast between
foreground and background. The next sub-section will
show how it nevertheless can work quite reliably with the
help of simple heuristics.
Looking at the failure-modes of the different
segmentation techniques, the obvious idea is to combine
several techniques to get results that are more robust.

Because we believe that those three points are crucial for
most bare-hand human-computer interaction applications,
we decided to develop our own finger-finding algorithm,
which will be described in the next two sections.

Fig. 2. Image differencing with reference image. From left to right: input image, reference image, output image


Fig. 3. The finger-finding and hand-posture recognition process

Surprisingly, a quantitative comparison of low-level
image processing techniques (segmentation based on color,
image differencing, connected components and correlation,
applied to a set of 14 different hand-labeled sequences)
showed that all evaluated methods tend to fail under similar
conditions (fast hand motion, cluttered background). For this
reason, a combination of techniques does not yield a much
better performance.


After comparing the different possible techniques
qualitatively as well as quantitatively, we decided to work
with a modified image differencing algorithm, which will
be described in the next sub-section.
5.1 Smart Image Differencing
Studies on human perception show that the visual system
uses changes in luminosity in many cases to set the focus
of attention. A change of brightness in one part of the
visual field, such as a flashing light, attracts our attention.
Image differencing follows the same principle. It tries to
segment a moving foreground from a static background
by comparing the gray-values of successive frames.
Inter-frame image differencing is only useful if the
foreground object constantly moves. If we take the
difference between the actual image and a reference
image instead, it is also possible to find resting hands
(see Figure 2). To cope with changes in illumination, the
reference image is constantly updated with each newly
arriving image using the following formula [16]:
R_t(x, y) = ((N − 1)/N) · R_{t−1}(x, y) + (1/N) · I_t(x, y),  for all (x, y)    (1)

with R standing for the reference image and I for the newly
arrived frame. The formula calculates a running average over
all frames, with a weighting that decreases exponentially over
time. For setups in front of a wall or white board, we found
that the user practically never rests with his hand in the same
position for more than 10 seconds, which implies a value of
about 500 for N.

The main problem with this type of reference image updating
is that dark, non-moving objects, such as the body, are added
to the reference image. If the user moves his/her hand into
regions with dark objects in the reference image, there is not
sufficient contrast between foreground and background to
detect a difference.

We found a simple but effective solution to this problem. In
our setup, the background is generally lighter than the
foreground (white walls, white boards). For this reason, we
can update dark regions slowly, but light regions instantly.
The value N in formula (1) is calculated as follows:

N(x, y) = 1 for I_{t−1}(x, y) ≤ I_t(x, y), and N(x, y) = 500 for I_{t−1}(x, y) > I_t(x, y)    (2)

With this modification, the algorithm provides the


maximum contrast between foreground and background at
any time.
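A minimal sketch of this smart-differencing update; rendering the "instant" case of formula (2) as N = 1 is our reading of the text, and grayscale float arrays are assumed for the frames.

import numpy as np

def update_reference(ref, prev_frame, frame, n_slow=500.0):
    """Reference-image update of formulas (1) and (2): where the image
    becomes lighter (I_{t-1} <= I_t) the reference is replaced instantly
    (N = 1); where it becomes darker it is blended in slowly (N = 500)."""
    ref, prev_frame, frame = (np.asarray(a, dtype=float)
                              for a in (ref, prev_frame, frame))
    n = np.where(prev_frame <= frame, 1.0, n_slow)   # formula (2)
    return (n - 1.0) / n * ref + frame / n           # formula (1)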

Fig. 4. Typical finger shapes. (a) Clean segmentation (b) Background clutter (c) Sparsely segmented fingers

VI. FINDING FINGERS AND HANDS

Figure 3 gives a schematic overview of the complete
finger-finding and hand-posture recognition process. The
system searches for fingertips first and uses the found
positions to derive higher-level knowledge, such as hand
postures or gestures.

A. Fingertip Shape Finding

This section will present a simple, fast and reliable
algorithm that finds both the position of fingertips and
the direction of the fingers, given a fairly cleanly
segmented region of interest.

Figure 4 shows some typical finger shapes extracted by
the image-differencing process. Looking at these images,
one can see two overall properties of a fingertip:
 The center of the fingertip is surrounded by a circle
of filled pixels. The diameter of the circle is defined
by the finger width.
 Along a square outside the inner circle, fingertips are
surrounded by a long chain of non-filled pixels and a
shorter chain of filled pixels (see Figure 5).



To build an algorithm that searches these two features,


several parameters have to be derived first:
Diameter of the little finger (d1): This value usually lies
between 5 and 10 pixels and can be calculated from the
distance between the camera and the hand.

Diameter of the thumb (d2): Experiments show that the
diameter is about 1.5 times the diameter of the little finger.

Size of the search square (d3): The square has to be at
least two pixels wider than the diameter of the thumb.

Minimum number of filled pixels along the search square
(min_pixel): As shown in Figure 5, the minimum number
equals the width of the little finger.

Maximum number of filled pixels along the search square
(max_pixel): Geometric considerations show that this
value is twice the width of the thumb.

Given those parameters, we wrote the straightforward
fingertip-finding algorithm reported in Figure 6:

For each (x, y) in Region_of_Interest:
    Add number of filled pixels in circle with diameter d1 and center (x, y)
    If (filled_pixel_nb < circle_area)
        Continue loop
    Calculate number of filled pixels along search square with diameter d3
    If (filled_pixel_nb < min_pixel) or (filled_pixel_nb > max_pixel)
        Continue loop
    If (connected_filled_pixel_nb < filled_pixel_nb - error_margin)
        Continue loop
    Memorize (x, y) position

Fig. 6. Fingertip-finding algorithm

The algorithm performs three checks to find out whether a
given position (x, y) is a fingertip:
1) There has to be a sufficient number of filled pixels
around the close neighborhood of the position (x, y).

2) There has to be the correct number of filled and
unfilled pixels along the described square around (x, y).
3) The filled pixels along the square have to be
connected in one chain.
This basic algorithm runs easily in real-time and reliably
finds possible fingertips. We implemented two
enhancements to further improve the stability. First, it is
useful to define a minimum distance between two
fingertips, to avoid classifying two pixels next to each
other as different fingertips. Second, the middle position
of the chain of inner pixels along the search square shows
the direction of the finger. This information can be used
to determine whether the found fingertip is connected
to an outstretched finger.

Fig. 5. A simple model of the fingertip

B. Hand Posture Classification

The second part of the algorithm analyzes the relationship
between the found fingers.
As a first step, a standard connected component analysis
algorithm is used to determine which of the found fingers
belong to the same hand. As a by-product, the size of the
hand region is calculated. This can be used to filter out
small finger-shaped objects, such as pens.

Table 1. Dropped and misclassified frames (out of 25 frames), for slow (<2.5 m/s) and fast (<4.5 m/s) hand motion against white and cluttered backgrounds, under three light conditions (diffuse daylight, daylight with shadows, diffuse neon light).

In a next step, the fingers are sorted into the right
geometric order (minimum distance between every pair).
Afterwards, the directions and positions of the fingers
relative to each other allow calculating an approximation
of the center of the palm. Fingers can then be classified by
their position relative to the palm and their position
relative to each other.

C. Evaluation

In section three we set up requirements for real-time
bare-hand human-computer interaction. While the functional
requirements have to be evaluated on a system level, we
can already check at this point whether the non-functional
requirements are met.

Our measurements were done on a Pentium III
1000MHz machine with 384x288 sized images. To test
accuracy and robustness we ran the finger-finding
algorithm on 12 sequences with varying light conditions
(daylight, neon light, diffuse and with shadows), different
degrees of background clutter and different speeds of
movement. The output of the finger tracker was
compared to hand-labeled ground truth data to calculate
mean error and error variance.

a) Latency
The total latency of the algorithm was between 26 and
34ms, depending on the number of pixels in the region of
interest. Image differencing alone takes about 10ms. The
maximum latency is therefore still well below the
required maximum latency of 50ms. We intentionally left
some room for the latency of image acquisition and
graphical output.

b) Robustness
For each sequence two types of frames have been counted:
 Dropped frames: Frames in which no finger could be
found.
 Misclassified frames: Frames in which the finger
position was off by more than 10 pixels from the right
position, or frames in which the nearby finger shadow
has been tracked instead of the finger itself.
Dropped frames usually occur if the finger moves very
fast. In this case they do not cause problems for most
applications, because the resting position of a finger is
usually much more important than the fast-moving
position. Misclassified frames, on the other hand, are
quite annoying. If the finger controls a pointer, for
example, this pointer might jump back and forth erratically
between correctly and misclassified finger positions.
Table 1 shows that the algorithm is quite robust under most
circumstances. The occasional misclassified frames can
be explained by nearby shadows and body parts that
resemble a finger. A simple stabilization step that always
chooses the finger position closest to the last known
position will eliminate most of those problems later on.

c) Accuracy
The accuracy was calculated for all correctly classified
frames. The mean error for the evaluated sequences was
between 0.5 and 1.9 pixels, with variances between 0.1
and 2.0. As expected, fast finger movements are tracked
with lower accuracy than slow movements, due to motion
blurring. Taking into account the error margin of the
hand-labeling process, the overall accuracy of the algorithm
is better than one pixel and therefore sufficient for precise
point-and-click tasks.

d) Finger Classification
Figure 7 shows two examples of the finger classification
algorithm. The finger positions (blue dots) and directions
(red dots) are reliably found for all fingers in front of a
cluttered background. Other finger-like objects such as
the pen or the ball are ignored. Additionally, the
forefingers (yellow dots) of the two hands are correctly
found. The ten fingers are grouped into two different
hand objects (not visible in the picture). Of course, the two
demonstrated cases are just examples of many different
possible conditions and hand states. MPEG movies of the
finger finder and of all described applications can be viewed
at http://iihm.imag.fr/hardenbe/Videos.htm

Fig. 7. Finger classification

Fig. 8. Bare-hand human-computer interaction. (a) Finger-controlled web browser (b) Painting with the finger (c) Controlling a presentation with hand postures (d) Multi-user spatial reorganization of text items.

VII. SAMPLE APPLICATIONS
We developed three applications, named FingerMouse,
FreeHandPresent and BrainStorm, for this paper. All of
them aim to improve the interaction between human and
computer for a specific scenario, and all demonstrate
different capabilities of the finger-finding and hand-posture
recognition system.


A. Description of the Applications


a) Fingermouse
The FingerMouse system allows control of the mouse
pointer with the bare hand. The user just moves an
outstretched forefinger in front of the camera, to position
the mouse pointer on the screen. Mouse-clicks are
generated by keeping the finger in the same position for
one second. The mouse-wheel is activated by stretching
out all fi ve fingers. In combination with a projector, the
system can be used to control Windows applications, such
as the Internet Explorer or Paint, directly on a wall (see
Figure 8a and b). The finger replaces both a physical
mouse as well as a mouse pointer, allowing for fast and
intuitive interaction.
b) FreeHandPresent
The second system is built to demonstrate how simple
hand postures can be used to control an application. A
typical scenario where the user needs to control the
computer from a certain distance is during a presentation.
The FreeHandPresent system makes it possible to replace
existing remote-control solutions with the bare hand. It
uses the following hand postures to control a presentation:
two outstretched fingers for next slide (see Figure 8c),
three outstretched fingers for previous slide, and five
outstretched fingers to open a slide menu. The slide menu
makes it possible to jump to a specific slide quickly during
a presentation, by selecting one of the displayed slide
numbers with the finger.
c) Brainstorm
The last system was built to demonstrate the multi-user/multi-hand tracking capabilities of our system. The application
scenario is a brainstorming session. Normally such
sessions consist of two phases: first, a large number of
ideas are collected from the participants and pinned to the
wall. Second, the items on the wall are sorted and
categorized.
With the BrainStorm system, users can type their ideas
to the wall, using a wireless keyboard. In the second
phase, everyone can walk up to the wall and rearrange
items with his/her fingers. An item is selected by resting
on it with an outstretched finger for half a second.
Selected items can be moved freely on the wall. To
unselect an item, the user again makes a short pause with
the finger. Several users can rearrange items in parallel
(see Figure 8d). The main advantage of the virtual
brainstorming system is that results can be saved, printed
and sent over the Internet at any time.

Fig. 9. User study. (a) Physical objects (b) Virtual objects.

B. Evaluation of the Applications

All applications fulfilled the functional requirements
defined in section three. They worked in real-time
(20-25Hz) and proved to be sufficiently precise and robust
to fulfill the chosen tasks. Inexperienced users were able to
use all of the applications after a short explanation.
Especially the selecting/clicking technique with a short
pause of the finger proved to be very intuitive.
Nevertheless, we noticed one main problem during our
evaluation: the projected background usually changes a lot
during interaction. The image-differencing layer therefore
produces plenty of false foreground objects, which might be
accidentally classified as fingers. There are two ways to
cope with this problem.
 For applications such as FreeHandPresent the user
can do his/her hand sign at the side of or above the
presentation, in a pre-defined control area.
 In all other cases, it is possible to eliminate the
disturbing effect of the projection by illuminating the
room (e.g. with sunlight or a strong lamp).
In principle, this problem could also be solved with
synchronized camera-projector setups, which capture
images during short periods of blacked-out projection.
To prove the usability of barehanded human-computer
interaction more formally, we arranged a user study with
the BrainStorm system. Eighteen inexperienced users had
to group twenty projected words on the wall into four
categories (cities, countries, colors and fruit). The same
experiment was done with physical objects (words glued
to magnets, see Figure 9). Half of the users did the
physical object-sorting first, the other half started with the
virtual items.
On average, it took users 37 seconds to sort the physical
objects and 72 seconds for the virtual objects, resulting in
a 95% increase in time. The difference can be mainly
explained with the selection and un-selection pause of 0.5
seconds, which adds up to 20 seconds for the 20 items on
the wall.


VIII. CONCLUSION
In this paper, we described how a computer can be
controlled with the bare hand. We developed a simple but
effective finger-finding algorithm that runs in real-time
under a wide range of light conditions. Unlike previous
work, our system does not constrain the hand movement
of the user. Also, there is no set-up stage. Any user can
simply walk up to the wall and start interacting with the
system.
The described user tests show that the organization of
projected items on the wall can be easily accomplished
with bare hand interaction. Even though the system takes
more time than its physical counterpart, we think that it is
still very useful: many value-adding services, such as
printing and storing, can only be realized with the virtual
representation.
Further research will be necessary to find a faster
selection-mechanism and to improve the segmentation
with a projected background under difficult light
conditions.

REFERENCES
[1] Bérard, F. Vision par ordinateur pour l'interaction homme-machine fortement couplée, Doctoral thesis, Université Joseph Fourier, Grenoble, 1999.
[2] Blake, A., Isard, M., and Reynard, D. Learning to track the visual motion of contours, Artificial Intelligence, 78, 101-134, 1995.
[3] Card, S., Moran, T., Newell, A. The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates, 1983.
[4] Crowley, J., Bérard, F., and Coutaz, J. Finger tracking as an input device for augmented reality, Automatic Face and Gesture Recognition, Zürich, 195-200, 1995.
[5] Freeman, W., Anderson, D. and Beardsley, P. Computer Vision for Interactive Computer Graphics, IEEE Computer Graphics and Applications, 42-53, May-June 1998.
[6] Laptev, I. and Lindeberg, T. Tracking of Multi-State Hand Models Using Particle Filtering and a Hierarchy of Multi-Scale Image Features, Technical report ISRN KTH/NA/P-00/12-SE, September 2000.
[7] Lee, J. and Kunii, T. Constraint-based hand animation, in Models and techniques in computer animation, 110-127, Springer Verlag, Tokyo, 1993.
[8] Lien, C. and Huang, C. Model-Based Articulated Hand Motion Tracking For Gesture Recognition, Image and Vision Computing, vol. 16, no. 2, 121-134, February 1998.
[9] MacCormick, J.M. and Isard, M. Partitioned sampling, articulated objects, and interface-quality hand tracking, European Conference on Computer Vision, Dublin, 2000.
[10] MacKenzie, I. and Ware, C. Lag as a determinant of Human Performance in Interactive Systems, Conference on Human Factors in Computing Systems, 488-493, New York, 1993.
[11] O'Hagan, R. and Zelinsky, A. Finger Track - A Robust and Real-Time Gesture Interface, Australian Joint Conference on Artificial Intelligence, Perth, 1997.
[12] Quek, F., Mysliwiec, T. and Zhao, M. Finger mouse: A freehand pointing interface, International Workshop on Automatic Face- and Gesture-Recognition, Zürich, 1995.
[13] Rehg, J. and Kanade, T. Digiteyes: Vision-based human hand tracking, Technical Report CMU-CS-93-220, School of Computer Science, Carnegie Mellon University, 1993.
[14] Sato, Y., Kobayashi, Y. and Koike, H. Fast Tracking of Hands and Fingertips in Infrared Images for Augmented Desk Interface, International Conference on Automatic Face and Gesture Recognition, Grenoble, 2000.
[15] Segen, J. GestureVR: Vision-Based 3D Hand Interface for Spatial Interaction, ACM Multimedia Conference, Bristol, 1998.
[16] Stafford-Fraser, J. Video-Augmented Environments, PhD thesis, Gonville & Caius College, University of Cambridge, 1996.
[17] Triesch, J. and Malsburg, C. Robust Classification of Hand Postures Against Complex Background, International Conference on Automatic Face and Gesture Recognition, Killington, 1996.
[18] Ware, C. and Balakrishnan, R. Reaching for Objects in VR Displays: Lag and Frame Rate, ACM Transactions on Computer-Human Interaction, vol. 1, no. 4, 331-356, 1994.
[19] Zhu, X., Yang, J. and Waibel, A. Segmenting Hands of Arbitrary Color, International Conference on Automatic Face and Gesture Recognition, Grenoble,

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.127-131.

Biometric Authentication using Infrared Imaging of Hand Vein Patterns


P. Maragathavalli
M.Tech., VEL Tech University, Avadi, Chennai
Abstract Hand vein patterns are unique and universal.
The vein pattern has been used as a biometric feature in recent
years. However, it is not as popular as other biometric systems
such as fingerprint and iris, because of its higher cost:
conventional algorithms require high-quality images, which
demand high-priced acquisition devices. There are two
approaches to vein authentication, hand dorsa and hand
ventral. Currently we are working on hand dorsa vein patterns.
Here we put forward a new approach for low-cost hand dorsa
vein pattern acquisition using a low-cost device, and propose an
algorithm to extract features from these low-quality images.
Keywords vein pattern, hand-dorsa, NIR-webcam.

I. INTRODUCTION
A reliable biometric system, which is essentially a
pattern-recognition system that recognizes a person based on
a physiological or behavioral characteristic, is an
indispensable element in several areas, including
e-commerce (e.g. online banking), various forms of access
control security (e.g. PC login), and so on. Nowadays,
security has become important for privacy protection and
national security in many situations, and biometric technology
is becoming the basic approach to address increasing
crime [1]. With the significant advances in computer
processing, automated authentication techniques using
various biometric features have become available over the
last few decades. Biometric characteristics include
fingerprint, face, hand/finger geometry, iris, retina,
signature, gait, voice, hand vein, odor and DNA
information, while fingerprint, face, iris and signature are
considered the traditional ones. Because each biometric
technology has its own merits and shortcomings, it is difficult
to make a direct comparison. Jain et al. have identified
seven factors, which are
1. Universality,
2. Uniqueness,
3. Permanence,

4. Measurability,
5. Performance,
6. Acceptability,
7. Circumvention,
to determine the suitability of a trait to be used in a
biometric application. The vein pattern is the network of
blood vessels beneath a person's skin. The idea of using vein
patterns as a form of biometric technology was first
proposed in 1992, while researchers only paid attention
to vein authentication in the last ten years [8]. Vein patterns
are sufficiently different across individuals, and observation
shows they are stable, unaffected by ageing, with no
significant changes in adults. It is believed that the patterns of
blood veins are unique to every individual, even among
twins. In contrast with other biometric traits, such as face
or fingerprint, vein patterns offer the specific advantage of
being hidden inside the human body, distinguishing them
from other forms, which are captured externally. Veins
are internal; this characteristic makes such systems
highly secure, and they are not affected by the
condition of the outer skin (e.g. a dirty hand).
II. FEATURES OF VEIN PATTERN
Vein-based authentication uses the vascular patterns of an
individual's hand as personal identification data.
Compared with a finger, the back of a hand or a palm has a
broader and more complicated vascular pattern and thus
contains a wealth of differentiating features for personal
identification. The features of vein patterns are listed
below:
1. Unique and universal.
2. Carry deoxidized blood towards the heart except
umbilical & pulmonary veins.
3. Palm & digits have collaterals of ulnar vein & palmer
digital veins
4. Developed before birth and persist throughout the life,
well into old age.
5. Differ even between identical twins.
6. Internal trait protected by the skin.
7. Less susceptible to damage.

A. Principle

When the hand is exposed to NIR light, the deoxidized
hemoglobin in the vein vessels absorbs light having a
wavelength of about 760 nm, within the near-infrared
region. When the infrared image is captured, only the
blood vessel pattern containing the deoxidized

hemoglobin is visible as a series of dark lines. In vein


authentication based on this principle, the region used for
authentication is photographed with near-infrared light,
and the vein pattern is extracted by image processing and
registered. The vein pattern of the person being
authenticated is then verified against the preregistered
pattern.
III. PREVIOUS WORKS
Vein pattern recognition technology appeared in the 1990s,
but it did not attract much attention in that decade. From
2000 onwards, more papers on this topic appeared. However,
up to now, there is no vein pattern database, so each researcher
has to design his own hardware setup. Toshiyuki Tanaka and
Naohiko Kubo [8] (2004) used two infrared LED arrays
(Sanyo SLR931A), a CCD camera (CV-15H) and a video
card (IO-DATA GV-VCP3/PCI), and adopted phase-only
correlation and template matching for authentication.
L. Wang, G. Leedham and S.-Y. Cho [10] used a NEC Thermo
Tracer TS 7302 for FIR imaging; Wang Lingyu and
G. Leedham used a Hitachi KP-F2A infrared CCD camera
for NIR imaging. C. Laxmi and A. Kandaswamy [9] used a
WAT902H near-IR camera. Some researchers have
worked with the Fujitsu palm vein sensor for vein pattern
acquisition.

Fig. 1. IR Frame

Fig. 2. IR Frame


IV. OUR WORK


From previous work, it is clear that former researchers
used high-cost devices to obtain high-quality images
which can be easily processed. In our research, we want
to reduce the cost of the system considerably to make the
device cheap, so we have used a webcam for this purpose.
First, the webcam is made sensitive to the IR region and used
to obtain vein images. Our webcam (Logitech Pro 2000)
costs about $25, which is much cheaper than the IR cameras
used by others. This can reduce the device price by at least
10 times. This is a significant cost reduction, and if we can
extract the vein structure from these low-cost devices as well
as others do, then we have done very meaningful work. In the
following, we will discuss the hardware setup and propose an
algorithm for vein pattern analysis.
V. HOW TO MAKE A WEBCAM SENSITIVE TO THE IR REGION?
Webcams are actually sensitive to both the visible spectrum and the IR spectrum of light, but an internal IR filter blocks IR light and passes only visible light to the camera. This filter is fitted either near the lens or on the casing of the chip. If this filter is removed and replaced by another filter that blocks light in the visible spectrum, the webcam becomes sensitive only to IR light. Special care has to be taken while removing an IR filter on the chip case, because doing so may damage the webcam permanently. Frames captured from the IR-sensitive webcam are shown in Fig. 1 and Fig. 2. We can see that IR illumination by a single LED of the kind used in a TV remote can provide sufficient illumination.
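For readers who wish to reproduce the acquisition step, a minimal sketch in Python/OpenCV is given below; the device index and output file name are assumptions, and our own implementation used OpenCV 1.1 under Visual C++ rather than the Python bindings shown here.

import cv2

# Grab one frame from the IR-sensitive webcam (device index 0 assumed)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # Save the NIR frame, comparable to the frames of Fig. 1 and Fig. 2
    cv2.imwrite("ir_frame.png", frame)
cap.release()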

VI. HARDWARE SETUP

Fig. 3. Hardware Setup

As mentioned in the introduction, the hardware setup (Fig. 3) has a crucial role in the acquisition of vein images. Two aspects can be underlined here:
1. The actual camera used for taking the snapshot has
only one important parameter, the response to near
infrared radiation. Spatial resolution and frame rate
are of lower importance since for the acquisition of a
vein pattern a still image is required and the details are
easily seen even at a lower resolution.
2. The design of the lighting system is one of the most important aspects of the image acquisition process. A good lighting system will provide accurate contrast between the veins and the surrounding tissue while keeping illumination errors to a minimum.

For the lighting system we have used IR LEDs, as they provide high contrast and are cheap and easily available. However, an LED array formed from IR LEDs does not give uniform illumination. Various matrix arrangements of LEDs will modify the illumination: LEDs can be arranged as a 2D single or double array, a rectangular array, or concentric arrays. Among these, the concentric arrangement gives a better distribution of light; with one or more concentric LED rings and the camera lens at the centre, images can be acquired with good contrast. The contrast of the image can be controlled by controlling the power supplied to the LEDs, and a few trials of power adjustment can give the required contrast; too much power to the light source will decrease contrast because of the high intensity. Polarizing filters can also be used to increase contrast by reducing the specular reflection of the skin. With reference to the above discussion, we have designed an IR light source arranged in concentric fashion; Fig. 4 and Fig. 5 show the IR illumination sensed by the IR-sensitive webcam.

Fig. 4. IR illumination

Fig. 5. IR illumination

Fig. 6 and Fig. 7 show the ventral and dorsal veins respectively, captured by the IR-sensitive webcam under the IR light source of Fig. 4.

Fig. 6. Ventral veins

Fig. 7. Dorsal veins

The limitation of our hardware setup is that the images obtained are of low quality, so we now propose an algorithm to extract the features from these images.

VII. VASCULAR PATTERN ANALYSIS
As we are doing image enhancement in the spatial domain, the steps in processing the image are as follows:
1. Image acquisition in the NIR region
2. Pre-processing
3. Finding the region of interest
4. Gray scaling
5. Thresholding
6. Edge detection
7. Removal of small unwanted objects
8. Thinning

A. Algorithm
Here we propose a vascular pattern analysis algorithm, which we are going to implement in the future:
a) Open the near-infrared palm image file in input mode
b) Convert the loaded image into a planar image
c) Set the horizontal and vertical kernels (3 x 3)
d) Pass the generated image through the kernels
e) Store the resulting image into a grayscale image file
f) Close all image file(s)
g) Open the resultant grayscale image file in input mode
h) Open a binary image file in output mode
i) While not end of file
j) Loop
k) Read the pixel intensity value
l) If the pixel intensity value lies above y, then
m) Convert the intensity value to 0 (black)
n) Elseif
o) If the pixel intensity value lies below x, then
p) Convert the intensity value to 255 (white)
q) Else
r) If a neighbouring pixel is an edge pixel, then make the current pixel 0 (black)
s) End if
t) Write the intensity value to the binary image
u) End loop
v) Close all image files

Note: x and y are the lower and upper thresholds, respectively, for Canny edge detection. Thresholding is an image processing technique for converting a grayscale image to a binary image based upon a threshold value. If a pixel in the image has an intensity value less than the threshold value, the corresponding pixel in the resultant image is set to black; otherwise, if the pixel intensity value is greater than or equal to the threshold intensity, the resulting pixel is set to white. This creates a binarized image, an image with only two colors, black (0) and white (255). Image thresholding is very useful for keeping the significant part of an image and getting rid of the unimportant part or noise; this holds true under the assumption that a reasonable threshold value is chosen. In our case the threshold range is taken as 10 to 70. We have done the image processing using OpenCV 1.1 (Open Computer Vision Library) on the platform of Microsoft Visual C++ 2003 Edition, on a computer with a 2.3 GHz Pentium Core 2 Duo processor. Results after the implementation of the above algorithm are shown below. Fig. 8 shows the gray-level palm vein image and the edge-detected image; by adjusting the lower and upper thresholds we can obtain successful edge detection (Fig. 9 and 10).

Fig. 9. Detected edges

Fig. 10. Detected edges
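A minimal sketch of the gray-scaling and Canny edge-detection stage described above is given below in Python/OpenCV; the input file name is an assumption, while the 3 x 3 kernel and the 10/70 threshold pair follow the values stated in the text.

import cv2

img = cv2.imread("palm_nir.png")               # NIR palm frame (assumed file name)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # gray scaling
gray = cv2.GaussianBlur(gray, (3, 3), 0)       # 3 x 3 kernel smoothing
edges = cv2.Canny(gray, 10, 70)                # lower threshold x = 10, upper y = 70
cv2.imwrite("palm_edges.png", edges)           # edge image, as in Figs. 9 and 10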

VIII. APPLICATIONS
a) Login control: PC access systems
b) Security systems: physical admission into secured areas
c) Healthcare: ID verification for medical equipment
d) Electronic record management
e) Banking and financial services: access to ATMs, kiosks, vaults
IX. DISCUSSION

Fig. 8. Gray Image

Due to the unavailability of a palm vein image database, we have considered images on which different people have already worked. Images captured by other researchers can be compared with the images captured by us using our IR-sensitive webcam, with which we are going to proceed further. Here we have proposed an algorithm and a way of performing low-cost vein pattern authentication using low-quality images.

X. FUTURE WORK
We are planning to do the verification and matching part based on our low-quality-image authentication system. We are aiming to implement it as a secured smart card and hand vein based person authentication system, and we have already started work in that direction.
REFERENCES
[1] Jain, A.K., Ross, A., Prabhakar, S.: An Introduction to Biometric Recognition. IEEE Transactions on Circuits and Systems for Video Technology 14(1), 4-20 (2004)
[2] Crisan, S., Tarnovan, I.G.: Vein Pattern Recognition. Image Enhancement and Feature Extraction Algorithms
[3] Cui, F.-y., Zou, L.-j.: Edge Feature Extraction Based on Digital Image Processing Techniques
[4] Hao, Y., Sun, Z., Tan, T., Ren, C.: Multispectral Palm Image Fusion for Accurate Contact-free Palmprint Recognition
[5] Jeyaprakash, R., Lee, J., Biswas, S., Kim, J.M.: Secured Smart Card Using Palm Vein Biometric On-Card-Process
[6] Wang, J.-G., Yau, W.-Y., Suwandy, A.: Fusion of Palmprint and Palm Vein Images for Person Recognition Based on Laplacianpalm Feature
[7] Wang, J.-G., Yau, W.-Y., Suwandy, A.: Feature-Level Fusion of Palmprint and Palm Vein for Person Identification Based on a Junction Point Representation
[8] Tanaka, T., Kubo, N.: Biometric Authentication by Hand Vein Patterns. In: SICE Annual Conference, Sapporo, August 4-6, pp. 249-253 (2004)
[9] Lakshmi Deepika, C., Kandaswamy, A.: An Algorithm for Improved Accuracy in Unimodal Biometric Systems through Fusion of Multiple Feature Sets. ICGST-GVIP Journal 9(III) (June 2009), ISSN 1687-398X
[10] Wang, L., Leedham, G., Cho, S.-Y.: Infrared Imaging of Hand Vein Patterns for Biometric Purposes. IET Comput. Vis. 1(3-4) (2007)


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.132-137.

Motion Analysis and Identification of Human Fall using MEMS Accelerometer
K. Malar Vizhi
M.Tech, Dept of Embedded Systems, Veltech Dr.RR & SR Technical University, Chennai.
Email: malarvizhik04@gmail.com
Abstract—This paper offers an overview of a system to protect elderly and frail persons from hip bone fracture using an airbag. The fall of a person is detected using a Micro Electro Mechanical System (MEMS) sensor. With the help of the MEMS sensor, the normal motion and the fall of the elderly or frail person are identified, and the airbag is inflated if a fall occurs. The entire system consists of a MEMS sensor, Microcontroller Unit (MCU), optocoupler, relay, solenoid valve and airbag. The MEMS sensor unit consists of a three-dimensional accelerometer with a range of ±2 g, which produces a static voltage corresponding to the angle of inclination. The Microcontroller Unit performs the Analog-to-Digital (A/D) conversion, detects the fall when it occurs, and triggers the inflation of the airbag.
The fall detection is done on the basis of a threshold value obtained from the evaluation of different sets of values taken during the experimental analysis. A solenoid valve is used to open the gas cartridge that supplies gas to the airbag; the solenoid valve is switched on with the help of a relay when a fall is detected. The entire process of airbag deployment should take place within 0.9 s to protect the elderly person from fracture.
Keywords—MEMS sensor, accelerometer, optocoupler

I. INTRODUCTION
A. Existing System
Falls and fall-induced fractures are very common among frail and elderly persons. Of all the fall-induced fractures, hip fractures account for most of the deaths and costs. After a hip fracture, an elderly person usually loses his/her independence of functional mobility. Hip protectors are protective devices made of hard plastic or soft foam, placed over the greater trochanter of each hip to absorb or shunt away the energy during mechanical impact on the greater trochanter. They have been widely demonstrated, both biomechanically and clinically, to be capable of reducing the incidence of hip fractures. However, the compliance of the elderly to wear them is very low, owing to discomfort, wearing difficulties, problems with urinary incontinence and illness, physical difficulties, and the perception that they are not useful.

B. Proposed System
The main aim of this project is to prevent the fracture induced by a fall on the hip bones. We use a MEMS sensor to sense the fall. If a fall occurs, the sensor sends a signal to the Microcontroller Unit, which triggers the actuator. The actuator opens the cartridge containing compressed gas, which then flows through a pipe connected to the airbag and fills it.
The fall detection system consists of a MEMS accelerometer which can sense the different motions of a human being. It can detect imbalance of the elderly person and send a signal to the fall protection system through the microcontroller output whenever a fall is detected by the MEMS accelerometer fixed on the hip. The protection unit consists of an airbag which is inflated to protect the elderly person. The fall detection and protection system is designed to be small, light-weight and comfortable, as the elderly have to wear it every day. Owing to the availability of low-cost, small-size MEMS sensors, it is possible to build self-contained inertial sensors with an overall system dimension of less than 1 cubic inch; at the same time, the sensor unit can track orientation and locomotion in real time.

C. Features of the Proposed System
The airbag system used for hip bone protection is a light-weight system and gives more comfort to the wearer. The airbag system is fast enough to protect the hip bones from fracture when the person wearing it falls. Unlike plastic hip protectors, the stress induced by the impact is very low in the case of the airbag system. The important factor to be considered in the fall detection and fall protection system is that a proper airbag triggering mechanism has to be used and an appropriate fall identification process has to be implemented to actuate the solenoid valve. The threshold-based fall identification method makes the entire system simple and reliable. The fall and non-fall motions are determined from different motions like walking, sitting, running, jumping and falling; the threshold value has to be fixed so that the fall detection system can distinguish fall from non-fall motion.


II. HIP PROTECTION SYSTEM
The hip protection system comprises two important units: the fall detection system and the fall protection system. Their operation is as follows.
A. Fall Detection System
To detect the fall of a person we may use sensors. Since the sensor has to be used in a portable system, it should be compact, so a MEMS sensor can be used to detect the fall and normal motion of a person. We may use a gyroscope or an accelerometer to identify the fall motion; in this project a tri-axial MEMS accelerometer is used. The accelerometer could also be dual-axis, but in that case two accelerometers would be needed to sense falls on three axes, whereas a single tri-axial accelerometer is enough to sense all three axes. Here we use the Analog Devices ADXL327 accelerometer to detect the fall of a person. The ADXL327 is an analog accelerometer and its output is in the form of voltage. Since we are using a threshold-based method to identify the fall, we have to fix a value between fall and non-fall motion. Generally a human being cannot maintain stability beyond an inclination of 30°. To determine the fall of a person, we have to measure the static acceleration of the person with respect to earth's gravity. Since no one can maintain stability at an inclination of 30°, we fix the voltage obtained at this angle as the threshold. Based on this threshold value, a program is written on the PIC microcontroller to identify the fall from the values obtained from the accelerometer.
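A minimal Python sketch of this threshold test is shown below; the threshold voltage is a hypothetical placeholder for the value actually measured at a 30° tilt, and on the PIC the same comparison is performed on the 10-bit digital value rather than on volts.

# Assumed static output voltage of the primary axis at a 30-degree tilt
FALL_THRESHOLD_V = 1.70  # illustrative value only

def is_fall(vx: float) -> bool:
    """Return True when the X-axis output voltage crosses the fall threshold."""
    return vx >= FALL_THRESHOLD_V

print(is_fall(1.52))  # upright posture -> False
print(is_fall(1.90))  # tilt beyond 30 degrees -> True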
B. Fall Protection System
After a fall has been detected, the fall protection arrangement has to be activated to protect the elderly person. Since we are using an airbag for protection, gas has to be stored in a gas cartridge to supply the airbag when a fall occurs. Recent advances in manufacturing technologies have made it possible to safely compress air in small, light-weight, low-cost pressurized cylinders, thereby making a personalized airbag system not only possible but economically feasible. The MEMS-based accelerometer measurement unit is suitable for a small, light-weight hip protector system, and can be intelligently programmed to measure and recognize human motions and trigger the inflation of the airbag before the subject falls to the ground.
To inflate the airbag, a mechanism is needed to open the gas cartridge; for this we use a solenoid valve, which is activated by a relay whenever a fall is detected.

Fig. 1. Conceptual illustration of the Smart Hip Protection system in action.

Fig. 1 illustrates the basic concept of an intelligent hip protector system. A micro 3-D motion-sensor based belt is mounted on the waist and is worn by a subject. The motion sensing system will be fully calibrated to compensate for temperature and environmental vibration effects.
The airbag is connected to a small compressed gas cylinder through a solenoid valve; the airbag is embedded in the belt and positioned over the greater trochanter of the hip. These airbags are expected to reduce the impact force during a fall. When an elderly person loses balance, the MEMS micro sensor in the belt detects his/her disorientation and triggers the inflation of the airbag on the side he/she falls, a few milliseconds before the person reaches the ground. The motion-based condition for activating the inflation process is defined so that it is sensitive enough to detect imbalance of an elderly person but not so hypersensitive as to induce false inflation of the airbag.
C. Hardware Description
The hardware units in the fall detection and protection system are the accelerometer, PIC microcontroller, optocoupler, relay, solenoid valve, gas cartridge and airbag. A three-axis accelerometer (ADXL327) is used to detect the acceleration of the human body in three dimensions. The analog accelerometer gives the motion of a person in terms of voltage.
The analog signal generated by the MEMS accelerometer is transmitted to the ADC channel of the PIC microcontroller (16F877A). The PIC microcontroller does the A/D conversion by the successive approximation method. The threshold value found earlier to distinguish fall and non-fall motion is programmed into the PIC microcontroller to identify a fall; the digital value obtained from the A/D converter is compared with this threshold value.


If the received value comes under the falling-motion category, the microcontroller sends a signal to the optocoupler through one of its output ports. The optocoupler then signals the PCB-mounted relay; once the relay receives the signal, it activates the solenoid valve to allow the gas to fill the airbag.
D. Software Description
The software of the hip protection system includes two important parts. The first part performs the conversion of the received analog values into corresponding 10-bit digital values. The second part comprises the fall detection program, which sends a signal to the optocoupler if a fall occurs. The A/D conversion and fall detection programs are written in assembly language.
The PIC microcontroller 16F877A is used to do the A/D conversion and fall detection, and a program has to be written on the PIC microcontroller to find a fall. The program is written in Microchip's MPLAB IDE 8.4; once the program is successfully built in the IDE, it is burned onto the chip using the PIC burner kit.
Before the system is used in a real-time application, the data from the accelerometer have to be analyzed to distinguish fall and non-fall motion. To get the fall and non-fall data, the output of the accelerometer is first recorded on an oscilloscope to find the voltage value at different inclination angles. The obtained values have to be plotted in a graph for further analysis and to find the threshold value. The vital role of the software in the hip protection system is to inflate the airbag before the person falls to the ground.

It may take 1 s for a person to fall to the ground, so fall detection and fall protection have to be completed within this time period. At the same time, the system should not indicate a fall and activate the airbag when there is no real fall.

III. ANALOG-TO-DIGITAL CONVERTER (A/D) MODULE
The Analog-to-Digital (A/D) converter module has eight inputs on the 40-pin devices. The analog input charges a sample-and-hold capacitor, and the output of the sample-and-hold capacitor is the input to the converter. The converter then generates a digital result of this analog level via successive approximation; the A/D conversion of the analog input signal results in a corresponding 10-bit digital number. The A/D module has high and low voltage reference inputs that are software selectable to some combination of VDD, VSS, RA2 or RA3.
The A/D converter has the unique feature of being able to operate while the device is in SLEEP mode. To operate in SLEEP, the A/D clock must be derived from the A/D's internal RC oscillator.
The A/D module has four registers:
- A/D Result High Register (ADRESH)
- A/D Result Low Register (ADRESL)
- A/D Control Register 0 (ADCON0)
- A/D Control Register 1 (ADCON1)
The ADCON0 register controls the operation of the A/D module, while the ADCON1 register configures the function of the port pins: the port pins can be configured as analog inputs or as digital I/O. The ADCON1 and TRIS registers control the operation of the A/D port pins. The port pins that are desired as analog inputs must have their corresponding TRIS bits set (input); if a TRIS bit is cleared (output), the digital output level will be converted.

IV. A/D RESULT REGISTERS
The ADRESH:ADRESL register pair is the location where the 10-bit A/D result is loaded at the completion of the A/D conversion. This register pair is 16 bits wide. The A/D module gives the flexibility to left- or right-justify the 10-bit result in the 16-bit result register; the A/D Format Select bit (ADFM) controls this justification. Fig. 2 shows the operation of the A/D result justification. The extra bits are loaded with 0s. When the A/D module is disabled and the result will not overwrite these locations, these registers may be used as two general-purpose 8-bit registers.

Fig. 2. A/D result justification
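The justification can be illustrated with the following Python sketch (not PIC assembly); it simply shows the bit arithmetic implied by the description above, with example register values chosen for illustration.

def adc_result(adresh: int, adresl: int, right_justified: bool) -> int:
    """Combine the 8-bit ADRESH:ADRESL pair into the 10-bit A/D result."""
    word = (adresh << 8) | adresl      # the 16-bit ADRESH:ADRESL pair
    if right_justified:                # ADFM = 1: six upper bits read as 0
        return word & 0x03FF
    return word >> 6                   # ADFM = 0: six lower bits read as 0

print(adc_result(0x02, 0xFF, True))    # right justified -> 767
print(adc_result(0xBF, 0xC0, False))   # left justified  -> 767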

V. FALL DETECTION AND PROTECTION PROGRAM
After the A/D conversion is done, the converted values from the A/D result register are transferred to a sample register to find a fall. A threshold value is already fed into the program through a temporary register, and for each axis a separate threshold value is set. Once the value from the X-axis is converted to its corresponding digital value, it is compared with the threshold value to find the fall; if a fall is detected, a signal is sent to the optocoupler for the further fall protection process.
Likewise, the analog data from the other two axes are also converted to their corresponding digital values and compared with their threshold values. As said for the X-axis, if a fall occurs, the Y and Z-axis will also intimate a fall to the protection unit. If a fall does not occur, the process of fall detection continues in round-robin fashion.
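A Python sketch of this round-robin polling is given below; read_adc() and the per-axis digital thresholds are hypothetical stand-ins for the PIC's A/D sampling and the values fixed during the experiments.

THRESHOLDS = {"x": 620, "y": 540, "z": 580}  # assumed 10-bit thresholds per axis

def read_adc(axis: str) -> int:
    """Placeholder for sampling the A/D channel of the given axis."""
    return 0

def poll_for_fall():
    # Check the three axes in round-robin fashion; report the first axis
    # whose converted value crosses its threshold, or None if no fall.
    for axis in ("x", "y", "z"):
        if read_adc(axis) >= THRESHOLDS[axis]:
            return axis
    return None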

Fig. 3

VI. FALL PROTECTION SYSTEM
The fall protection unit consists of the compressed gas cartridge, optocoupler, relay, solenoid valve and airbag. The output signal from the microcontroller is given to the optocoupler to protect the PIC microcontroller from any overvoltage, and the output of the optocoupler is given to the relay to drive the solenoid valve. The solenoid valve has one input and one output: its input is connected to the gas cartridge and its output is connected to the airbag. Whenever a fall is detected by the microcontroller, it sends a signal to the relay to open the solenoid valve and fill the airbag. The diagrammatic representation of the fall protection system is given in Fig. 4.

A. Block Diagram of Airbag Inflator
PIC 16F877A -> OPTOCOUPLER -> RELAY -> SOLENOID VALVE -> AIRBAG, with the GAS CARTRIDGE feeding the solenoid valve.

Fig. 4

VII. EXPERIMENTAL SETUP
To detect the fall of a person, an experimental setup was developed to analyze the different motions. The aim of this experimental setup is to measure the static acceleration at different angles of tilt. The experimental setups used for the measurement of tilt angle are as follows.

A. Experimental Setup 1
In experimental setup 1 we use an oscilloscope to measure the voltage levels at different angles of tilt. In this setup we used a scale arrangement, as shown in Fig. 5, to find the voltage values at different angles from 0° to 180°. To make the accelerometer function, we give it a supply voltage of 2.6 to 3.6 V.
The supply voltage for the accelerometer is obtained from the RPS unit as shown in the figure, and the output of the accelerometer is given to the oscilloscope, which is an analog type. The voltage values obtained for the corresponding tilt are recorded on a PC for further analysis.
Here we use a supply of three volts because that is the typical voltage at which the sensitivity of the accelerometer is 420 mV/g. As the supply voltage is increased, the sensitivity of the accelerometer also increases; likewise, at the minimum input voltage the sensitivity decreases.
Sensitivity at 3 V = 420 mV/g
Minimum sensitivity = 378 mV/g
Maximum sensitivity = 462 mV/g

Fig. 5. Experimental setup 1

Table 1. Experimental data from setup 1

Degree   X-axis (Primary axis) Vp+   Y-axis (Cross axis) Vp+   Z-axis (Cross axis) Vp+
0        1.38                        0.94                      1.36
20       1.62                        1.1                       1.36
40       1.74                        1.06                      1.48
60       1.76                        1.16                      1.48
80       1.84                        1.28                      1.46
90       1.86                        1.36                      1.42

In the above table the readings taken from 0° to 90° are shown. Here we kept the X-axis as the primary axis and the Y and Z-axis as cross axes. While the X-axis acts as the primary axis, we can see a variation of voltage for every angle in the X and Y-axis; for the Z-axis there is little variation, and it is almost constant. The graph plotted from the table is shown below.

Fig. 6. Graph from experimental setup 1 readings

After a set of values was collected from experimental setup 1, we plotted a graph from them. The graph shows that there is a variation for tilt at every angle, but the variation obtained is nonlinear.

B. Experimental Setup 2
We also used another experimental setup to measure the voltage variation at different angles. In experimental setup 1 we used an analog oscilloscope to measure the voltage values at various angles; in experimental setup 2 we simply use a multimeter. The voltage values obtained from experimental setup 2 are more or less equal to the values obtained from setup 1. Experimental setup 2 is also used to check fall detection, and it is illustrated in Fig. 7.

Fig. 7. Experimental setup 2

Experimental setup 2 is a simple arrangement. The supply voltage for the accelerometer is obtained from a 3 V battery, and the output of the accelerometer is connected to the multimeter. Through the multimeter we measured the voltage value of the accelerometer at different angles, as in the case of experimental setup 1.
After the values are measured, they are plotted in a graph and the threshold value is found. The obtained threshold value is programmed into the microcontroller; the programmed controller is then fixed on a PIC development board and falls are detected on the corresponding output ports of the microcontroller.
After we checked the operation of the fall detection system on the PIC development board, the fall detection system was developed on a general-purpose PCB. The voltage measured at various angles is given in Table 2; the readings are taken from 0° to 180°, and the variations in voltage value are clearly shown.
We have also done some real-time implementation work on this fall detection and protection system to distinguish fall and non-fall motions. The voltage values obtained for different motions are as follows:
Walking: 1.86 to 1.97 V
Jogging: 1.57 to 2.30 V
Bending: 1.80 V

Table 2. Experimental data from setup 2

Degree   X-axis (Primary axis)   Y-axis (Cross axis)   Z-axis (Cross axis)
0        1.517                   1.100                 1.577
20       1.661                   1.125                 1.579
40       1.794                   1.198                 1.582
60       1.890                   1.309                 1.585
80       1.942                   1.444                 1.588
100      1.949                   1.593                 1.591
120      1.909                   1.734                 1.593
140      1.822                   1.853                 1.594
160      1.696                   1.934                 1.596
180      1.546                   1.968                 1.597

In the above table we kept the X-axis as the primary axis and the Y and Z-axis as the secondary axes. There are some variations in the X and Y-axis for variation in angle, but the values obtained in the Z-axis are almost constant. From the values taken using experimental setup 2 we plotted the graph given below (Fig. 8).
The data collected from the experiment are given in Table 2 and plotted in the graph. From the graph it is clear that the values obtained are nonlinear from 0° to 180°: the Y-axis voltage increases continuously from 0° to 180°, while the X-axis voltage increases up to about 90°–100° and then decreases towards 180°. From the graph we can see a linear variation from 30° to 150°.

If the sensitivity of the accelerometer is high, its fall detection capability is also high.

Fig. 8. Graph from experimental setup 2 readings

VIII. RESULTS AND DISCUSSION
After the feasible results were obtained from the simulation, real-time implementation of the fall detection program was done. A fall detection and protection kit has been developed to be fixed on the human hip. The kit consists of a power supply unit, accelerometer, PIC microcontroller, optocoupler, relay and solenoid valve. The power supply to the microcontroller and accelerometer is obtained from a 9 V battery; since the microcontroller requires only 5 V, a 7805 voltage regulator is used between the PIC and the battery.
The voltage regulator gives an output of 5 V regardless of the input voltage. To supply the accelerometer with 3 V, we use diodes between the voltage regulator and the accelerometer. The output from the accelerometer is continuously given to the PIC microcontroller. If a fall is detected by the microcontroller, it sends a signal to the optocoupler; the optocoupler in turn signals the relay, and the relay switches on the solenoid valve to fill the airbag.
This entire arrangement is done on a PCB. By tilting the PCB at an angle of 30° in any direction, we can see the indication of a fall on the output port. Falls in the X, Y and Z-axis are indicated on a single output port: if a fall is identified on any of the axes, it is indicated on that one port. In the real application the kit is fixed on the hip of a person; when the person falls, the fall protection system falls along with him. As the person inclines to an angle of 30°, the fall is indicated by the fall detection system; otherwise it keeps checking continuously for a fall. Once a fall is detected by the fall detection system, the signal is given to the fall protection system through the output ports.

IX. CONCLUSION AND FUTURE WORK
The A/D conversion program and the fall detection program were successfully built and run in the MPLAB IDE, and the outputs for the given analog input values were visualized in the PIC simulator IDE. Depending upon the analog value given, the A/D conversion program changes it to the corresponding digital value; the digital value is compared with the threshold value to determine whether the motion is a fall or a non-fall motion. If the output value from the accelerometer comes under fall motion, it is indicated by the blinking of an LED on the output port of the PIC microcontroller. The time duration required for the execution of the program was also found with the help of the PIC simulator IDE.
After the fall was successfully detected in simulation, a fall detection kit was developed to identify falls. The fall detection kit is placed on the hip of the person; once it detects a fall, it sends a signal to the fall protection system to activate the airbag through a relay. The entire function of the airbag system ensures that the fall is detected and the fall protection unit is activated before the person falls to the ground. The future work is to implement the system fully in real time, to increase the efficiency of the system, and to reduce its total weight.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.138-140.

Reflective Semiconductor Optical Amplifier (RSOA) Model used as a Modulator in Radio over Fiber (RoF) Systems
S. Ethiraj1, S. Anusha Meenakshi2 and M. Baskaran3
Electronics and Communication Engineering, Valliammai Engineering College
Email: 1ethirajsrinivasan@gmail.com, 2 anusha_1091@yahoo.co.in, 3baski.maha@gmail.com
Abstract—We improve and validate the multisection model of a bulk reflective semiconductor optical amplifier (RSOA) used as a modulator in radio over fiber (RoF) systems. An accurate parameter extraction method is presented and the intrinsic parameters of the RSOA are obtained. The model is used to assess the static characteristics, harmonic and intermodulation distortions, and transmission performance of the RSOA. Simulations have been validated against experimental results with good agreement.
Keywords—Distributed antenna systems (DAS), modeling, radio over fiber (RoF), reflective semiconductor optical amplifier (RSOA).

I. INTRODUCTION
As wireless services increase in carrier frequency and data bandwidth, the problems associated with in-building coverage are becoming ever more severe. Distributed antenna systems (DAS) have been proposed as a cheaper, upgradable alternative to placing multiple access points and base stations throughout a building to provide ubiquitous coverage.
A DAS includes a central unit (CU) and a number of co-located radio units that feed remote antenna units (RAUs) positioned where coverage is required. Radio over fiber (RoF) technology has been proposed as a solution to implement the optical fibre link between the CU and the RAUs.
Radio over Fiber refers to a technology whereby light is modulated by a radio signal and transmitted over an optical fiber link to facilitate wireless access. Semiconductor optical amplifiers (SOAs) and reflective semiconductor optical amplifiers (RSOAs) will play an important role in future optical communication links and have been investigated recently.
When compared with the RSOA, the SOA has a larger modulation bandwidth but lower optical gain and a higher noise figure, for devices with similar physical dimensions [4], [5]. Efficient architectures have been proposed using wavelength division multiplexing (WDM) techniques, allowing a reconfigurable RoF network; colorless devices are therefore needed. The RSOA is a perfect candidate, as it has been optimized for RoF application [6] and shows high linearity and a low noise level, which are essential in the radio range calculation. These performances have been confirmed by comparing the different types of link. We validate the RSOA model [6] by static and large-signal measurements; the model can predict the transmission performance of the saturated RSOA modulator at different bias points driven by an orthogonal frequency division multiplexing (OFDM) 802.11g signal.

II. THEORY AND IMPLEMENTATION
The carrier density N at position z is governed by the rate equation

(1)

where N is the carrier density of the active region, I is the injection current of the RSOA, L, W and d are the length, width and thickness of the active region of the RSOA, respectively, and e is the electron charge. The second to fourth terms on the right-hand side represent the spontaneous recombination rate, the stimulated emission recombination rate and the amplified spontaneous emission (ASE) optical field, respectively. gc is the gain compression, h is Planck's constant, ν is the optical frequency, and Pav(z) is the average optical power at position z. R(N) was presented in [6].
gc is represented by

(2)

where S is the photon density, gm is the material gain coefficient [6], and the remaining parameter is the gain saturation factor.
The net gain gnet is depicted by

(3)

where Γ is the optical confinement factor and αint is the internal waveguide loss, given by

(4)

where K0 and K1 are the carrier-independent and carrier-dependent absorption loss coefficients, respectively.
The forward and reverse propagating optical fields are described by the relation between the input optical power and the output optical power, given by

(5)

where Γ is the confinement factor; the material loss coefficient and the material gain also appear, the latter normally being approximated by a linear function of the carrier density whose slope is the differential gain.
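For orientation, a standard form of these relations consistent with the definitions above (and with wideband SOA models such as [3]) is sketched in LaTeX below; it is an assumption for illustration, not a reproduction of the paper's exact Eqs. (3)-(4).

% Hedged sketch (assumption), consistent with the definitions above:
% net gain and internal waveguide loss of the active region.
g_{\mathrm{net}} = \Gamma\, g_c - \alpha_{\mathrm{int}}, \qquad
\alpha_{\mathrm{int}} = K_0 + \Gamma K_1 N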


III. PARAMETER EXTRACTION

Fig. 1. Experimental setup

The reliability of models depends crucially on the validity of the parameters which appear in them [6]. The best values for the model parameters are found by fitting the simulated data as closely as possible to the measured data. A parameter extraction technique based on OPTICSYSTEM was used: the measured data, output powers versus bias currents, were loaded into the OPTICSYSTEM simulator by a data access component, and an optimization controller based on a gradient algorithm was applied. The goal of the optimization was to minimize the difference between the measured and simulated output powers. A DC simulator performed DC simulation in order to sweep all bias currents and extract the parameters of the RSOA model. The parameters were updated after each simulation finished, and the simulation then restarted based on the new parameters. After several such iterations, stable model parameters were obtained.
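The extraction loop can be mimicked with a generic gradient-based least-squares fit, as in the Python sketch below; rsoa_output_power() is a hypothetical stand-in for the multisection RSOA simulation, and the measured values are illustrative only.

import numpy as np
from scipy.optimize import least_squares

bias_mA = np.array([20, 40, 60, 80, 100, 120])           # swept bias currents
p_meas = np.array([-18.0, -9.5, -4.0, -1.0, 1.2, 2.8])   # measured output (dBm), illustrative

def rsoa_output_power(params, i_mA):
    """Placeholder saturating model, not the real multisection RSOA model."""
    a, b, c = params
    return a * np.log10(1.0 + b * i_mA) + c

def residuals(params):
    # Objective: difference between simulated and measured output powers
    return rsoa_output_power(params, bias_mA) - p_meas

fit = least_squares(residuals, x0=[10.0, 0.1, -20.0])    # gradient-based optimization
print("extracted parameters:", fit.x)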

IV. SIMULATION AND EXPERIMENT
A back-to-back experimental setup is shown in Fig. 1. The seeding light of the RSOA comes from a commercial DFB laser with a wavelength of 1550 nm, biased at 30 mA. The RSOA, with a reflectivity of 20% at the rear facet and the geometrical length given in the table, is provided by Alcatel-Thales III-V Labs. The input optical power of the RSOA is set to -7 dBm by adjusting the tunable optical attenuator. A circulator is used to separate the forward and reverse optical signals. A photodiode with a responsivity of 0.8 A/W is used as the detector, and a commercial RF amplifier, ZHL-42W, is used to drive the RSOA.

A. Static Characteristics
The optical power versus bias current measurements were carried out with the vector signal generator (VSG), vector signal analyzer (VSA), photodiode (PD) and RF amplifier of Fig. 1 removed. An optical power meter was used to measure the optical power at the circulator output. The parameter extraction algorithm was performed according to the extraction technique of Section III. The measured and simulated output optical power versus bias current, and optical gain versus input optical power, characteristics of the RSOA are shown in Fig. 2. In Fig. 2(b) the measurement and the model fit well for 90 mA and 120 mA. However, for 60 mA there is a 3 dB difference in the small-signal gain. This is due to a shift of the peak optical gain towards higher wavelengths as the bias current decreases (the model assumes that the net gain does not depend on the wavelength).

Fig. 2. Measured and simulated (a) output power versus bias current; (b) gain versus input optical power under different bias currents.

B. Nonlinearity
It is common to use single-tone and two-tone measurements to characterize the nonlinearity of the RSOA. The RSOA was biased at 90 mA. Gain, compression point and harmonic distortion measurements were carried out using a single-tone signal (1 GHz) generated by the VSG, and the RF signal was recovered by a photodiode. The measured and simulated harmonic distortions are depicted in Fig. 3(a). The link gains of measurement and simulation are -21.3 and -21.5 dB, respectively. The input powers of measurement and simulation at the 1 dB compression point are 12.7 and 15.1 dBm, respectively. The measured and simulated two-tone intermodulation distortions of the RSOA are shown in Fig. 3(b); the two input tones, at 995 MHz and 1005 MHz, are generated by the VSG. The IP3 for measurement and simulation are 19.8 and 20.9 dBm, respectively. The measured output noise floor was -155 dBm/Hz at a bias current of 90 mA.

Fig. 3. Measured and simulated (a) harmonic distortions; (b) intermodulation distortions.

C. Transmission Performance
The transmission performance of the RSOA is evaluated with error vector magnitude (EVM) measurements of the upstream signals. An IEEE 802.11g signal at 1 GHz was used as the input, set as a 54 Mbit/s OFDM signal with 64-QAM modulation and 52 active subcarriers. The EVMs versus RF input powers were obtained for different optical input powers, -5 dBm and -10 dBm, and different bias currents, as shown in Fig. 4.

Fig. 4. Measured and simulated EVMs (a) at -5-dBm optical input; (b) at -10-dBm optical input.

REFERENCES
[1] M. J. Crisp, S. Li, A. Wonfor, R. V. Penty, and I. H. White, "Demonstration of a radio over fibre distributed antenna network for combined in-building WLAN and 3G coverage," presented at OFC/NFOEC 2007, Anaheim, CA, 2007, Paper JThA81, Optical Society of America.
[2] T. Durhuus, B. Mikkelsen, and K. Stubkjaer, "Detailed dynamic model for semiconductor optical amplifiers and their crosstalk and intermodulation distortion," J. Lightw. Technol., vol. 10, no. 8, pp. 1056-1065, Aug. 1992.
[3] M. Connelly, "Wideband semiconductor optical amplifier steady-state numerical model," IEEE J. Quantum Electron., vol. 37, no. 3, pp. 439-447, Mar. 2001.
[4] J.-M. Kang, Y.-Y. Won, S.-H. Lee, and S.-K. Han, "Modulation characteristics of RSOA in hybrid WDM/SCM-PON optical link," presented at OFC/NFOEC 2006, Anaheim, CA, 2006, Paper JThB68, Optical Society of America.
[5] E. Udvary and T. Berceli, "Linearity and chirp investigations on SOA as an external modulator in SCM systems," EUMA Special Issue on Microwave Photonics, vol. 3, pp. 217-222, Sep. 2007.
[6] G. de Valicourt, M. A. Violas, D. Wake, F. Van Dijk, C. Ware, A. Enard, D. Mak, Z. Liu, M. Lamponi, G. H. Duan, and R. Brenot, "Radio over fiber access network architecture based on new optimized RSOA devices with large modulation bandwidth and high linearity," IEEE Trans. Microw. Theory Tech., vol. 58, no. 11, pp. 3248-3258, Oct. 2010.
[7] D. Wake, A. Nkansah, N. J. Gomes, G. de Valicourt, R. Brenot, M. Violas, Z. Liu, F. Ferreira, and S. Pato, "A comparison of radio over fiber link types for the support of wideband radio channels," J. Lightw. Technol., vol. 28, no. 16, pp. 2416-2424, Aug. 2010.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.141-145.

Employing Hybrid Automatic Repeat reQuest (HARQ) on MIMO


A. Ramakanth1 and N. Sateesh Kedarnath Kumar2
1Assistant Professor, 2Student, M.Tech
Department of ECE, G. Pulla Reddy Engineering College, Kurnool, Andhra Pradesh
Email: 1nsk_1357@yahoo.co.in
Abstract—Hybrid Automatic Repeat reQuest (HARQ) is an extension of ARQ that incorporates forward error correction coding. It is a retransmission scheme employed in current communication systems. The use of HARQ can contribute to efficient utilization of the available resources and to the provision of reliable services in latest-generation systems. This paper focuses on wireless systems using HARQ, with an emphasis on the Multiple-Input Multiple-Output (MIMO) paradigm. MIMO systems with HARQ promise high throughput together with high reliability, and MIMO-HARQ offers new opportunities because of the additional degrees of freedom introduced by multiple antennas at the transmitter and the receiver. The architecture of MIMO transceivers that are based on bit-interleaved coded modulation and employ HARQ is described; receiver implementations are presented and compared in terms of complexity, memory requirements, and performance for different numbers of antennas and different modulation techniques, in order to find the optimal one for a MIMO transceiver.

Keywords—MIMO, hybrid ARQ, distance-level combining, IR-HARQ, CC-HARQ.

I. INTRODUCTION
To multiply the throughput of a radio link, multiple antennas are used at both the transmitter and the receiver. Such a system is referred to as Multiple Input Multiple Output (MIMO); a 2x2 MIMO system (Fig. 1) can double the throughput. MIMO often employs Spatial Multiplexing (SM) to enable signals (coded and modulated data streams) to be transmitted across different spatial domains. Mobile WiMAX supports multiple MIMO modes, using either SM or space-time coding (STC) or both, to maximize spectral efficiency (increase throughput) without shrinking the coverage area. The dynamic switching between these modes based on channel conditions is called Adaptive MIMO Switching (AMS). If combined with an Adaptive Antenna System (AAS), MIMO can further boost WiMAX performance. MIMO is a hot topic in today's wireless communications, since all wireless technologies try to adopt it to multiply data rates and satisfy their bandwidth-hungry broadband users.
Fig. 1. Multiple Input Multiple Output (MIMO), 2x2

Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a data frame or packet) and timeouts (specified periods of time allowed to elapse before an acknowledgment is to be received) to achieve reliable data transmission over an unreliable service. If the sender does not receive an acknowledgment before the timeout, it usually re-transmits the frame/packet until it receives an acknowledgment or exceeds a predefined number of re-transmissions. The types of ARQ protocols include Stop-and-Wait ARQ, Go-Back-N ARQ and Selective Repeat ARQ. These protocols reside in the Data Link or Transport layers of the OSI model.
The most important recent physical layer enhancement of wireless systems is the use of MIMO transmission.
Multiple antennas provide additional degrees of freedom,
leading to significant capacity increase. Multiple antennas
can also be used to provide beamforming gains and
reduce the outage probability. Therefore, it is of particular
interest to examine how HARQ can be incorporated into
MIMO transceivers and its impact on the system
performance, complexity, and storage requirements. The
performance of MIMO-HARQ depends not only on noise
and temporal channel variations but also on the
interference between the signals transmitted by the
multiple antennas. In some cases the design involves a
trade-off between system performance and receiver
complexity and memory requirements. Simplifying the
receiver of a MIMO-HARQ system to reduce storage and
complexity may increase sensitivity to interstream
interference. Not only can HARQ be viewed as a
retransmission technique that exploits time diversity; it
can also be used in the context of systems that employ
macrodiversity. If a mobile station communicates with
two or more base stations that can exchange information,
the system can combine the signals of the base stations
before decoding using the same techniques as for HARQ.

In that case, the HARQ receiver storage requirements
translate to requirements on the necessary bandwidth for
the communication between the base stations. Therefore,
results derived for HARQ can be applied to such systems,
which may become increasingly common in the future.
This paper is organized as follows: Section II deals with the MIMO model employing HARQ, and in Section III the MIMO case is compared across different receiver implementations.

Fig. 2. Transmitter architecture for a MIMO system employing BICM and ARQ. For SISO systems, the block mapping symbols to the antennas is not used.

II. MIMO MODEL EMPLOYING HARQ
As shown in Fig. 2a, compared to the SISO case, the transmitter for MIMO-HARQ includes an additional symbols-to-antennas mapping block after the generation of the modulated symbols s(i), which determines from which antenna each symbol will be transmitted. In the general case, a given symbol s may be transmitted from more than one antenna, or the antennas may transmit linear combinations of the original symbols. Specifically, the symbols-to-antennas mapper generates a sequence of nt x 1 symbol vectors x(i) = [x(i)[0], x(i)[1], ..., x(i)[K]] based on the symbol sequence s(i) = [s(i)[0], s(i)[1], ..., s(i)[M]]. Although not shown in the figure, the symbols-to-antennas mapper may also employ a space-time block code (STBC). Each transmission results in an nr x 1 vector y at the receiver, where nr is the number of receive antennas.
Flat fading is considered, and the effect of the MIMO channel is modelled using a sequence H(i) of nr x nt matrices. The capacity and diversity gains that can be achieved depend on the correlation between the received signals (i.e., the condition of the channel matrix); ideally, a well conditioned channel matrix is desired. Therefore, in addition to noise and fading, transmissions in MIMO systems are also subject to interstream interference.
A bits-to-symbols mapper creates the symbol sequence s based on the encoded bit sequence c, and is followed by a symbols-to-antennas mapper that transforms s into a symbol vector sequence x. When CC-HARQ is employed, the simplified transmitter of Fig. 2b can be used: the transmitter is simpler because the symbol vector sequence x can be precomputed. However, the main benefit is the simplification of the receiver, as described in the following.
Similar to the SISO case, as shown in Fig. 4a, the nr x 1 received symbol vectors can be sent to an ML detector and LLR calculator block that combats interstream interference in addition to compensating for noise and channel fading; this block is more complex than in the SISO case. CC-HARQ is considered first, and the observations can be extended to the case of IR-HARQ where bit-to-symbol vector alignment is preserved. The same symbol vector sequence x is sent during each retransmission. It can be shown that an MRC-like combining scheme can be used to form an equivalent nt x 1 symbol vector sequence y~ and an equivalent channel matrix sequence H~ from the received symbol vector and channel matrix estimate sequences y(i) and H(i), respectively. Each H~[m] is a Hermitian matrix of size nt x nt [6]. Thus, the MIMO-HARQ problem is converted to an equivalent single-transmission MIMO problem, because the sizes of y~ and H~ remain the same after each retransmission. An ML detector and LLR calculator block that uses only one symbol vector sequence and one channel estimate sequence can then be used. Therefore, the memory requirements of the symbol-level combining receiver of Fig. 4b are reduced compared with those of the receiver of Fig. 4a. This simplification of the receiver is aided by reusing the same ML detector and LLR calculator block after each transmission. Moreover, numerical techniques such as QR decomposition can be used for the implementation [6].

Fig. 3. Examples of bit selection for IR-HARQ transmission

and 4b are equivalent, so there is no loss in performance.


For IR-HARQ, the receiver of Fig. 4b can be used by
considering all different symbol vectors that may be
generated. The length of y_~ and H~ will be at least as
large as K, the length of x(i). When the alignment
between bits and symbol vectors is not fixed, the bit-level
combining receiver of Fig. 4c can be employed if using
symbol-level combining is impractical. The main cause is
not the combining of bits instead of symbol vectors, but
the separate detection and LLR calculation after each
transmission. From the viewpoint of the architecture of
Fig. 4b, the condition of the H(i)[m] equivalent matrix is

142

Proceedings of the National Conference on Communication Control and Energy System

Fig. 4. Receiver architectures for MIMO systems employing BICM and HARQ.

better than that of some of the matrices H~[m]. Although


bit-level combining is suboptimal, it is also less complex,
because the same blocks are reused in Fig. 4c regardless
of the number of retransmissions. The required storage is
also reduced because only the accumulated LLRs of the
bits of the mother code need to be stored.In some systems
the ML detector and LLR calculator block may be too
complex to implement, even in the simplified receiver of
Fig. 4b. In this case equalization across the spatial streams
can be used (recall that flat fading is assumed). Linear or
decision feedback equalizers (DFEs) (zero-forcing [ZF] or
minimum mean square error [MMSE]) can be employed.
The MIMO equalization schemes described above are
well known and not particular to HARQ. They can be
implemented efficiently, for example, using QR
decomposition. When CC-HARQ is employed, the
receiver of Fig. 4d can be used. First, the spatial streams
are decoupled using an equalizer. Then, for each element
xj ^ of the equalized symbol vector sequence x^, separate
LLR calculators are used that take into account the
mapping of the transmit symbols into symbol vectors and
the corresponding channel estimates. In general, the xj ^
are soft values and are not sliced to the nearest
constellation symbol. Each time symbol vectors from a

new transmission arrive, they are combined with the


symbol vectors of all previous transmissions, and the
equivalent symbol vectors are re-equalized using the
equivalent channel matrix sequence. The pre-equalization
symbol-level combining operation does not result in
information loss. For this reason, the scheme of Fig. 4d
exhibits the best performance among all equalizationbased architectures . After each retransmission, the
equivalent vector sequence y_~ is stored at the receiver,
together with the equivalent channel matrix sequence H~
Each H~[m] is Hermitian and of size nt nt. Hence, K
nt (nt + 1)/2 complex entries are required for H ~ and K
nt complex entries for y_~ In order to reduce storage, a
post-equalization symbol-level combining scheme, shown
in Fig. 4e, can be used. The received signal vectors y_(i)
are equalized after each retransmission, and the resulting
symbol vector sequences x ^(i) are combined before LLR
calculation. Only the y_(i) are used to obtain the x ^(i). It
can be shown that the optimal way to combine the x ^(i)
is using MRC,which consists of multiplying each element
xj ^(i) of x ^(i) with a complex weight that depends on the
channel estimate H(i) and accumulating the result with the
values from previous transmissions . The resulting
weighted sum is normalized before LLR calculation. Post-

143

Employing Hybrid Automatic Repeat reQuest (HARQ) on MIMO

equalization symbol-level combining reduces receiver


memory because instead of a sequence of K Hermitian
matrices, only K nt normalization weights gj(i) need to
be stored in addition to the weighted and accumulated x
^(i). However, post-equalization symbol-level combining
exhibits performance loss compared to pre-equalization
combining. Therefore, for fixed bit-to-symbol vector
alignment, use of post-equalization combining is
motivated by the need to reduce the storage at the
receiver. Even when nt is small, the savings can be
significant when K is large. The storage requirements can
be reduced further (by K nt complex values per
transmission)by combining the xj ^(i) using equal weights
at the cost of additional performance degradation. The
receiver of Fig. 4e can also be used for IR-HARQ. When
the bit-to-symbol vector alignment is not fixed or the
number of symbol vectors is large, the structure of Fig. 4f
can be employed, which differs from that of Fig. 4e in that the LLRs are calculated directly after equalization. The performance of the receiver of Fig. 4f is inferior to that of the other schemes. What needs to be stored
now are the LLRs of the bits of the mother code c that are
sent to the channel. Hence, if use of the receiver is
considered for HARQ with fixed bit-to-symbol vector
alignment, in order to determine whether memory
reduction can be achieved compared to other
architectures, the total number of different symbols that
are sent to the channel needs to be taken into account.


III. COMPARISON OF RECEIVER ARCHITECTURES


In the previous section it was argued that the receiver implementation depends on the transmission scheme (CC- or IR-HARQ), on whether the alignment between bits and symbol vectors is fixed, and on the constraints on memory and complexity. Simplifying the receiver may come at a price. As an example of the performance degradation caused by suboptimal receiver implementations, an IEEE 802.16e-compliant system using partial usage of subchannels (PUSC) and spatial multiplexing (Matrix B) is considered [2]. Two transmit and two receive antennas are employed, communicating through a vehicular Type A channel with a high degree of spatial correlation and a Doppler speed equal to 120 km/h. The data are encoded using the mother rate-1/3 CTC. Bits are punctured sequentially to produce sequences of equal length, as in Fig. 3. In Fig. 5a the bit-level combining receiver of Fig. 4f is employed using zero-forcing linear equalization (ZF-BLC); 64-QAM and the rate-1/2 code of Fig. 3a are considered. IR-HARQ has a coding gain of more than 1 dB over CC-HARQ because of the additional parity bits that are transmitted. However, when the optimal pre-equalization symbol-level combining receiver of Fig. 4d is used with CC-HARQ (MRC-ZF), the system exhibits a gain of almost 2 dB over IR-HARQ. CC-HARQ also outperforms IR-HARQ when ML detection is used instead of equalization, as seen from the curves MRC-ML and ML-BLC that correspond to the receivers of Figs. 4b and 4c, respectively. The performance advantage of IR-HARQ can be recaptured using the receiver of Fig. 4a at the cost of increased complexity and memory requirements. When a rate-5/6 code is used, the coding gain of IR-HARQ over CC-HARQ is much larger than with the rate-1/2 code (on the order of 4 dB, as shown in Fig. 5b). Therefore, although symbol-level combining improves the performance of CC-HARQ, IR-HARQ still achieves a gain of approximately 1 dB. The gain is attained for both equalizer-based and ML-based implementations.

Fig. 5. MIMO system, Type-A vehicular channel, 120 km/h,


high inter-stream correlation, PUSC, spatial multiplexing:
a) 64-QAM, code rate = 5/6, packet size = 54 bytes;
b) 64-QAM, code rate = 5/6, packet size = 60 bytes.

REFERENCES
[1] S. Lin, D. J. Costello, Jr., and M. J. Miller, "Automatic-Repeat-Request Error-Control Schemes," IEEE Commun. Mag., vol. 22, Dec. 1984, pp. 5-17.
[2] IEEE Std. 802.16e-2005, "IEEE Standard for Local and Metropolitan Area Networks, Part 16: Air Interface for Fixed Broadband Wireless Access Systems," Feb. 2006.
[3] 3GPP TS 25.201 V8.0.0 (2008-03), 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Physical Layer - General Description.
[4] K. R. Narayanan and G. Stüber, "A Novel ARQ Technique Using the Turbo Coding Principle," IEEE Commun. Lett., vol. 1, no. 2, Mar. 1997, pp. 49-51.
[5] Z. Ding and M. Rice, "Hybrid-ARQ Code Combining for MIMO Using Multidimensional Space-Time Trellis Codes," Proc. IEEE ISIT '07, Glasgow, Scotland, June 2007.
[6] E. W. Jang et al., "Optimal Combining Schemes for MIMO Systems with Hybrid ARQ," Proc. IEEE ISIT '07, Nice, France, June 2007.
[7] J.-F. Cheng, "Coding Performance of Hybrid ARQ Schemes," IEEE Trans. Commun., vol. 54, no. 6, June 2006.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.146-149.

Accident Detector using Wireless Communication


G. Gowri and K. Kavitha
B.E Final Year, ECE
Bharathiyar Institute of Engineering for Women, Deviyakurichi.
Abstract Life is precious and short. A lifetime is the opportunity given by God to prove ourselves. But our lifetimes are further shortened by natural and man-made disasters. An accident is a specific, identifiable, unexpected, unusual and unintended external event which occurs at a particular time and place, without apparent or deliberate cause but with marked effects. Accidents of particularly common types (auto, fire, etc.) are investigated to identify how to avoid them in the future. Though accidents cannot be stopped, the injured can be saved. This paper presents a system that automatically calls an ambulance to the place where an accident has occurred. The circuit uses the PIC16F877A microcontroller in the transmitter as well as in the receiver to transmit and receive the signal. A vibration sensor and an auto dialer are used along with it. The vibration sensor has a certain vibration threshold; if this is exceeded, the auto dialer is activated. The device is placed in the center of the vehicle, and the same setup is used at the receiving end (hospitals).
As soon as an accident is detected, the auto dialer raises an alarm at the nearby hospitals. By using this technique, the lives of many people can be saved. As this device automatically calls the ambulance, the loss of lives due to the time lag between the arrival of the ambulance and treatment can be avoided.

I. INTRODUCTION
The automatic accident detector is used to make an automatic call to the ambulance as soon as an accident has occurred. The auto dialer and the microcontroller play a vital role in this detection. The circuit is connected to the vehicle; as soon as an accident occurs, there is a strong vibration in the vehicle. The vibration activates the auto dialer, which then makes a call to the number saved in its memory. The number saved may be the contact number of the nearest hospital, so that the ambulance can arrive at the desired location as soon as possible. The microcontroller assists the auto dialer in transmitting and receiving the signals.
A. Main Components Used
• Microcontroller PIC16F877A
• 3202 vibration sensor
• Auto dialer.

B. Circuit Diagram

Fig. 1

II. WORKING
A. Accident Detection using Vibration Sensor
The vibration sensor, with a certain acceleration threshold, is fixed in the vehicle. In this project the 3202 vibration sensor is used, which operates at a frequency of 315 MHz.

Fig. 2

When the vehicle collides with another vehicle or any other obstacle, the vibration sensor detects whether the vibration is within range or not. If it is greater, it reports an accident and activates the auto dialer. The system also includes a switch placed within convenient reach. The vibration sensor waits for one minute to confirm the accident. If the persons inside the car are not injured, they can press the switch and stop the auto dialer. This helps avoid calling the ambulance when it is not needed. The operating range of the vibration sensor is 215-350 MHz. If the measured vibration crosses this range, an accident is flagged. For better operation the sensor is placed in the center of the vehicle.
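A minimal sketch of this detection logic, in Python for illustration (the firmware itself runs on the PIC16F877A), is given below; the threshold value, timings, and function names (cancel_pressed, dial) are assumptions, not taken from the paper.

```python
import time

VIBRATION_THRESHOLD = 350      # assumed sensor threshold for an accident
CONFIRM_WINDOW_S = 60          # one-minute cancellation window

def handle_vibration(level, cancel_pressed, dial, sleep=time.sleep):
    """If the measured vibration exceeds the threshold, wait up to one
    minute for the occupant to press the cancel switch; if nobody does,
    trigger the auto dialer with the stored hospital number."""
    if level <= VIBRATION_THRESHOLD:
        return "no accident"
    for _ in range(CONFIRM_WINDOW_S * 10):   # poll every 0.1 s
        if cancel_pressed():
            return "cancelled by occupant"
        sleep(0.1)
    dial("nearest-hospital")
    return "ambulance called"

# Simulated run: heavy vibration, nobody presses the cancel switch.
print(handle_vibration(420, cancel_pressed=lambda: False,
                       dial=lambda n: print("dialing", n),
                       sleep=lambda s: None))
```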
B. Microcontroller PIC16F877A
The term PIC, or Peripheral Interface Controller, is the name given by Microchip Technology to its single-chip microcontrollers. These devices have been phenomenally successful in the market for many reasons; the most significant ones are mentioned below. PIC micros have grown steadily in popularity over the last decade, ever since their introduction in the early 1990s, and have become the most widely used microcontrollers in the 8-bit segment. The PIC16F877 is a 40-pin IC. There are five ports in this microcontroller, namely PORT A, PORT B, PORT C, PORT D and PORT E. Among these ports, PORT B, PORT C and PORT D contain 8 pins each, while PORT A contains 6 pins and PORT E contains 3 pins.

Fig. 3

Each pin in the ports can be used as either an input or an output pin. Before using a port pin as input or output, its direction must be set in the corresponding TRIS register. For example, setting all the bits in the TRIS D register configures all the pins of PORT D as input pins, while clearing all the bits in the TRIS D register configures all the pins of PORT D as output pins. Likewise, the TRIS A, TRIS B, TRIS C and TRIS E registers are available for PORT A, PORT B, PORT C and PORT E.

C. PIC16F877
The architecture of the PIC16F877 contains 4 banks of register files, namely Bank 0, Bank 1, Bank 2 and Bank 3, at addresses 00h-7Fh, 80h-FFh, 100h-17Fh and 180h-1FFh respectively. It also has program FLASH memory, data memory and data EEPROM of 8K words, 368 bytes and 256 bytes respectively.

III. FEATURES
• Speed
• Instruction-set simplicity
• Integration of operational features
• Flexibility in clock sources
• High current capabilities of the ports
• Serial programming via two pins
• On-chip EEPROM
• Harvard architecture
Apart from these features, there are reasons of a non-technical nature, like the availability of free development software, low-cost device programmers, and free datasheets and application notes.

IV. LOCATION IDENTIFICATION USING GPS
A GPS receiver calculates its position by precisely timing the signals sent by the GPS satellites high above the Earth. Each satellite continually transmits messages which include
• the time the message was sent,
• precise orbital information (the ephemeris), and
• the general system health and rough orbits of all GPS satellites (the almanac).
The receiver measures the transit time of each message and computes the distance to each satellite. Geometric trilateration is used to combine these distances with the satellites' locations to obtain the position of the receiver. This position is then displayed, perhaps with a moving-map display or as latitude and longitude; elevation information may be included. Many GPS units also show derived information, such as direction and speed, calculated from position changes. A numerical sketch of this trilateration step follows.
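The linearized least-squares formulation and all coordinates below are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def trilaterate(sat_positions, distances):
    """Estimate the receiver position from >= 4 satellite positions and
    measured distances. Subtracting the first sphere's equation from the
    others cancels the quadratic terms, leaving a linear system A x = b
    in the receiver coordinates."""
    p0, d0 = sat_positions[0], distances[0]
    A = 2.0 * (sat_positions[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(sat_positions[1:]**2, axis=1) - np.sum(p0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Four assumed satellite positions (km) and exact ranges to a receiver.
sats = np.array([[15600., 7540., 20140.],
                 [18760., 2750., 18610.],
                 [17610., 14630., 13480.],
                 [19170., 610., 18390.]])
receiver = np.array([-41.8, -16.8, 6370.0])
ranges = np.linalg.norm(sats - receiver, axis=1)
print(trilaterate(sats, ranges))   # recovers the receiver coordinates
```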

Fig. 4

A satellite's position and distance from the receiver define


a spherical surface, centred on the satellite. The position
of the receiver is somewhere on this surface. Thus with
four satellites, the indicated position of the GPS receiver
is at or near the intersection of the surfaces of four
spheres. (In the ideal case of no errors, the GPS receiver
would be at a precise intersection of the four surfaces.)
If the surfaces of two spheres intersect at more than one
point, they intersect in a circle. The article trilateration


shows this mathematically. A figure, Two Sphere


Surfaces Intersecting in a Circle, is shown below.

Fig. 5

The intersection of a third spherical surface with the first


two will be its intersection with that circle; in most cases
of practical interest, this means they intersect at two
points. The two intersections are marked with dots.

Fig. 6. Surface of Sphere Intersecting a Circle (not disk)


at Two Points

For automobiles and other near-Earth vehicles, the correct position of the GPS receiver is the intersection closest to the Earth's surface. For space vehicles, the intersection farthest from Earth may be the correct one.
The correct position for the GPS receiver is also the
intersection closest to the surface of the sphere
corresponding to the fourth satellite.

A. Precise Monitoring
The accuracy of a calculation can also be improved through precise monitoring and measuring of the existing GPS signals in additional or alternate ways.
Now that SA (Selective Availability) has been turned off, the largest error in GPS is usually the unpredictable delay through the ionosphere. The spacecraft broadcast ionospheric model parameters, but errors remain. This is one reason the GPS spacecraft transmit on at least two frequencies, L1 and L2. Ionospheric delay is a well-defined function of frequency and the total electron content (TEC) along the path, so measuring the arrival-time difference between the frequencies determines the TEC and thus the precise ionospheric delay at each frequency.
Receivers with decryption keys can decode the P(Y)-code transmitted on both L1 and L2. However, these keys are reserved for the military and "authorized" agencies and are not available to the public. Without keys, it is still possible to use a codeless technique to compare the P(Y) codes on L1 and L2 to gain much of the same error information. However, this technique is slow, so it is currently limited to specialized surveying equipment. In the future, additional civilian codes are expected to be transmitted on L2 and L5. Then all users will be able to perform dual-frequency measurements and directly compute ionospheric delay errors.
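As a rough numerical illustration of the dual-frequency idea, the sketch below applies the standard dispersive-delay relation (delay proportional to 1/f^2) to made-up pseudoranges:

```python
# Ionospheric delay on L1 estimated from dual-frequency pseudoranges.
# Delay scales as 1/f^2, so I1 = (P2 - P1) / (gamma - 1), gamma = (f1/f2)^2.
F_L1, F_L2 = 1575.42e6, 1227.60e6   # GPS carrier frequencies (Hz)

def iono_delay_l1(p1, p2):
    """p1, p2: pseudoranges (m) measured on L1 and L2 (illustrative values)."""
    gamma = (F_L1 / F_L2) ** 2
    return (p2 - p1) / (gamma - 1)

print(iono_delay_l1(20_000_000.0, 20_000_003.5))  # about 5.4 m of delay on L1
```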
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). The error which this corrects arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite-receiver sequence matching) operation is imperfect. The CPGPS approach utilizes the L1 carrier wave, which has a period one one-thousandth of the C/A bit period, to act as an additional clock signal and resolve the uncertainty. The phase-difference error in normal GPS amounts to between 2 and 3 meters (6 to 10 ft) of ambiguity. CPGPS, working to within 1% of a perfect transition, reduces this error to 3 centimeters (1 inch) of ambiguity. By eliminating this source of error, CPGPS coupled with DGPS normally realizes between 20 and 30 centimeters (8 to 12 inches) of absolute accuracy.
Relative Kinematic Positioning (RKP) is another approach for a precise GPS-based positioning system. In this approach, the range signal can be resolved to a precision of less than 10 centimeters (4 in). This is done by resolving the number of cycles in which the signal is transmitted and received by the receiver. It can be accomplished by using a combination of differential GPS (DGPS) correction data, transmission of GPS signal phase information, and ambiguity-resolution techniques via statistical tests, possibly with processing in real time (real-time kinematic positioning, RTK).

Fig. 7

V. GPS RECEIVER


The user's GPS receiver is the user segment (US) of the


GPS. In general, GPS receivers are composed of an
antenna, tuned to the frequencies transmitted by the
satellites, receiver-processors, and a highly-stable clock
(often a crystal oscillator). They may also include a


display for providing location and speed information to


the user. A receiver is often described by its number of channels: this signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this has progressively increased over the years so that, as of 2007, receivers typically have between 12 and 20 channels. A typical OEM GPS receiver module measures about 15 × 17 mm.

Fig. 9

A. A Community Alarm
Then the name and address, as well as other bits of information stored in the control centre's database, are displayed on the control centre operator's screen so that the operator knows as much detail as possible about the source of the alarm. The base station also transmits the in-house source of the alarm, so the operator will know whether the pendant has been pressed, the button on the alarm unit has been pressed, or a smoke alarm has been triggered.
Fig. 8

GPS receivers may include an input for differential


corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers
with internal DGPS receivers can outperform those using
external RTCM data. As of 2006, even low-cost units
commonly include Wide Area Augmentation System
(WAAS) receivers.
Thus the location of the accident spot has been identified using GPS. The latitude, longitude and altitude are measured and the spot is detected.
VI. AUTO DIALER
An auto dialer, or automatic calling unit, is an electronic device that can automatically dial telephone numbers to communicate between any two points in the telephone, mobile-phone and pager networks. Once the call has been established (through the telephone exchange), the auto dialer announces verbal messages or transmits digital data (like SMS messages) to the called party.
A smart auto dialer is an auto dialer capable of personalizing messages and collecting touch-tone or speech feedback. A speech engine is usually included for converting text to speech and recognizing speech over the phone.
To customize or personalize messages, a smart auto dialer system uses a message template, which contains variables that can be replaced later by actual values. For example, a time variable included in the message template can be replaced by the actual time when a phone call is made.
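Such a message template can be sketched in a few lines of Python; the template text and variable names are illustrative assumptions.

```python
from datetime import datetime

# A template whose variables are replaced by actual values at call time.
TEMPLATE = "Accident detected at {time} near {location}. Please send an ambulance."

def render_message(location):
    """Fill in the template variables when the call is placed."""
    return TEMPLATE.format(time=datetime.now().strftime("%H:%M"),
                           location=location)

print(render_message("Avadi Main Road"))
```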

This auto dialer may be connected as a switch to all the


hospitals (nearly 12 switches per hospital), so that up to 12 accidents can be handled at a time. This can be implemented effectively on a large scale and with relative frequency variations.
VII. FUTURE OUTLOOK
This paper mainly deals with the control measures taken to save lives lost to accidents. This research has shown the importance of human lives. However, it should also be pointed out that the implemented systems are far from nature's sophistication. Approaches to prevent accidents have been made, and many have failed. As accidents are unexpected events, we turn to treatment rather than prevention. Due to the advancement of technology, several kinds of vehicles have been invented. In parallel, accident rates and the loss of lives are also increasing. It has been noted that an accident takes place once every microsecond.
This paper has shown that calling an ambulance in a fraction of a second is feasible today using recent advanced technology. Thus the ACCIDENT DETECTOR USING AUTO DIALER is a practical way to communicate automatically between the accident spot and the hospitals. It should be implemented in the near future in order to save lives.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.150-154.

Transceiver Implementation for Software Defined Radio using DPSK


Modulation and Demodulation
A. Saranya (1), S. Saranya Devi (2), B. Uma (3) and N.L. Venkataraman (4)
(1,2,3) U.G. Scholars, (4) Lecturer

Department of Electronics & Communication Engineering, Jayaram College of Engineering & Technology,
Thuraiyur, Tiruchirappalli
Abstract This paper presents the design and implementation of a low-cost reconfigurable radio transceiver platform. The platform consists of four hardware elements, namely a radio transmitter, a radio receiver, a baseband interface and a PC to perform signal processing and configuration. Data and control communication is performed via a USB 2.0 interface between the transceiver and a desktop PC. The platform development included a substantial software element to configure the hardware and to receive and transmit data between the PC and the transceiver. The transmission is done using a DPSK modulator and demodulator, because of DPSK's low error probability and efficient use of bandwidth.
Keywords SDR, ADC, DAC, USRP2, USB, Field Programmable Gate Arrays, VLSI, FPGA

I. INTRODUCTION
A software-defined radio system, or SDR, is a radio communication system where components that have typically been implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded computing devices. A basic SDR system may consist of a personal computer equipped with a sound card, or other analog-to-digital converter, preceded by some form of RF front end. Significant amounts of signal processing are handed over to the general-purpose processor, rather than being done in special-purpose hardware. Such a design produces a radio which can receive and transmit widely different radio protocols (sometimes referred to as waveforms) based solely on the software used. Software radios have significant utility for the military and cell-phone services, both of which must serve a wide variety of changing radio protocols in real time. The ideal receiver scheme would be to attach an analog-to-digital converter to an antenna. A digital signal processor would read the converter, and its software would then transform the stream of data from the converter into any other form the application requires. An ideal transmitter would be similar: a digital signal processor would generate a stream of numbers, which would be sent to a digital-to-analog converter connected to a radio antenna. The ideal scheme is not completely realizable due to the actual limits of the technology. The main problem in both directions is the difficulty of conversion between the digital and the analog domains at a high enough rate and a high enough accuracy at the same time, without relying upon physical processes like interference and electromagnetic resonance for assistance.
II. AMATEUR OR HOME USE
A typical amateur software radio uses a direct-conversion receiver. Unlike direct-conversion receivers of the more distant past, the mixer technologies used are based on the quadrature sampling detector and the quadrature sampling exciter. The receiver performance of this line of SDRs is directly related to the dynamic range of the analog-to-digital converters (ADCs) used. Radio-frequency signals are down-converted to the audio-frequency band, which is sampled by a high-performance audio-frequency ADC. First-generation SDRs used a PC sound card to provide ADC functionality. The newer software-defined radios use embedded high-performance ADCs that provide higher dynamic range and are more resistant to noise and RF interference. A fast PC performs the digital signal processing (DSP) operations using software specific to the radio hardware. Several software radio efforts use the open-source SDR library DttSP. The SDR software performs all of the demodulation, filtering (both radio frequency and audio frequency), and signal enhancement (equalization and binaural presentation). Uses include every common amateur modulation: Morse code, single-sideband modulation, frequency modulation, amplitude modulation, and a variety of digital modes such as radioteletype, slow-scan television, and packet radio. Amateurs also experiment with new modulation methods: for instance, the DREAM open-source project decodes the COFDM technique used by Digital Radio Mondiale. More recently, GNU Radio, used primarily with the Universal Software Radio Peripheral (USRP), uses a USB 2.0 interface, an FPGA, and a high-speed set of analog-to-digital and digital-to-analog converters, combined with reconfigurable free software. Its sampling and synthesis bandwidth is a thousand times that of PC sound cards,


which enables wideband operation. The HPSDR (High Performance Software Defined Radio) project uses a 16-bit 135 MSPS analog-to-digital converter that provides performance over the range 0 to 55 MHz comparable to that of a conventional analogue HF radio. The receiver will also operate in the VHF and UHF ranges using either mixer image or alias responses. Interface to a PC is provided by a USB 2.0 interface, though Ethernet could be used as well. The project is modular and comprises a backplane onto which other boards plug in. This allows experimentation with new techniques and devices without the need to replace the entire set of boards. An exciter provides 1/2 W of RF over the same range, or into the VHF and UHF ranges using image or alias outputs.

III. TRANSMITTER SECTION

Fig. 1

When differentially encoding a message, each input data bit must be delayed until the next one arrives. The delayed data bit is then mixed with the next incoming data bit. The output of the mixer gives the difference between the incoming data bit and the delayed data bit. The differentially encoded data is then spread by a high-speed pseudo-noise sequence. This spreading process assigns each bit its own unique code, allowing only a receiver with the same spreading sequence to despread the encoded data.
The DPSK Baseband Modulator block modulates the signal using the differential phase shift keying method. The output is a baseband representation of the modulated signal. If the Input type parameter is set to Integer, then valid input values are 0, 1, 2, and 3. In this case, the input can be either a scalar or a frame-based column vector. If the Input type parameter is set to Bit, then the input must be a binary-valued vector. In this case, the input can be either a vector of length two or a frame-based column vector whose length is an even integer. The sketch below illustrates the differential encoding and decoding steps.
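To illustrate the differential principle described above (the Simulink blocks themselves implement DQPSK), here is a minimal DBPSK encode/decode sketch in Python/NumPy under the assumption of a noiseless channel; all names are illustrative.

```python
import numpy as np

def dbpsk_modulate(bits):
    """Differentially encode, then map 0 -> +1, 1 -> -1 (BPSK)."""
    encoded = []
    prev = 0                       # reference bit before the first symbol
    for b in bits:
        prev ^= int(b)             # current bit XOR previously encoded bit
        encoded.append(prev)
    return 1.0 - 2.0 * np.array(encoded)

def dbpsk_demodulate(symbols):
    """Compare each symbol's phase with the previous one; a phase flip
    (negative product) means the transmitted bit was 1, so no absolute
    phase reference is needed."""
    ref = np.concatenate(([1.0], symbols[:-1]))   # +1 is the known reference
    return (symbols * ref < 0).astype(int)

bits = np.array([1, 0, 1, 1, 0, 0, 1])
assert np.array_equal(dbpsk_demodulate(dbpsk_modulate(bits)), bits)
print("recovered:", dbpsk_demodulate(dbpsk_modulate(bits)))
```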

IV. RECEIVER SECTION

Fig. 2

The DPSK Baseband Demodulator block demodulates a signal that was modulated using the differential phase shift keying method. The input is a baseband representation of the modulated signal. The input must be a discrete-time complex signal, and it can be either a scalar or a frame-based column vector. The block accepts the data types double, single, and signed fixed-point. If the Output type parameter is set to Integer, then the block outputs integer symbol values between 0 and 3. If the Output type parameter is set to Bit, then the block outputs 2-bit binary representations of such integers, in a binary-valued vector whose length is an even number.

V. USRP2 TRANSMITTER

Fig. 3

The Universal Software Radio Peripheral (USRP) 2


Transmitter block supports communication between
Simulink and a USRP2 board, allowing simulation and
development of various software-defined radio
applications. The USRP2 Transmitter block enables
communication with a USRP2 board on the same
Ethernet sub network. This block accepts a column vector
input signal from Simulink and transmits signal and
control data to a USRP2 board using User Datagram
Protocol (UDP) packets. Although the USRP2
Transmitter block sends data to a USRP2 board, the block
acts as a Simulink sink. The following block diagram
illustrates how Simulink, USRP2 blocks, and USRP2
hardware interface.


For hardware optimization and efficient use of Ethernet packets of 1500 bytes, it is most efficient to use a frame size of 358 samples for the input data to the transmitter block. During simulations, we can dynamically set the Center frequency, Gain, and Interpolation parameters, and observe the effect on the transmitted signal.

VI. USRP2 RECEIVER

Fig. 4

The Universal Software Radio Peripheral (USRP) 2


Receiver block supports communication between
Simulink and a USRP2 board, allowing simulation and
development for various software-defined radio
applications. The USRP2 Receiver block enables
communication with a USRP2 board on the same
Ethernet sub network. This block receives signal and
control data from a USRP2 board using User Datagram
Protocol (UDP) packets. Although the USRP2 Receiver
block receives data from a USRP2 board, the block acts
as a Simulink source that outputs a column vector signal
of fixed length. The following block diagram illustrates
how Simulink, USRP2 blocks, and USRP2 hardware
interface.

A. Dialogue Box

Fig. 6

Fig. 7

When this block is called, it is possible that the host may


not have received any data from the USRP2 hardware.
The data length port, Data Len, indicates when valid data
is present. When data length port contains a zero value,
there is no data. You can use the data length with an
enabled subsystem to qualify the execution of part of the
model. For hardware optimization and efficient use of Ethernet packets of 1500 bytes, the frame size of the output frame from the receiver block is always 358 samples.
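The 358-sample frame is consistent with the 1500-byte Ethernet limit if one assumes 4 bytes per complex sample (16-bit I and 16-bit Q) plus protocol overhead; a quick check:

```python
MTU = 1500                      # Ethernet payload limit (bytes)
BYTES_PER_SAMPLE = 4            # assumed 16-bit I + 16-bit Q per sample
samples = 358
payload = samples * BYTES_PER_SAMPLE
print(payload, MTU - payload)   # 1432 bytes of samples, 68 bytes for headers
```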
Fig. 5

During simulations, we can dynamically set the Center frequency, Gain, and Decimation parameters, and observe the effect on the model.


A. Dialogue Box

Fig. 8

VII. SIMULATION RESULTS

A. Eye Diagram

Fig. 9

B. DPSK Modulator

Fig. 10


C. DPSK Demodulator

Fig. 11

VIII. CONCLUSION
This paper presented a test platform for the exploration and development of SDR technology using DPSK modulation and demodulation. The hardware uses off-the-shelf components in a wide-bandwidth direct-conversion transceiver architecture. The software allows easy configuration of the hardware and can be integrated with existing software radio platforms such as OSSIE Tech or IRIS. A simple DPSK communications link was established between two desktop computers, and measurements at different points on this link were noted down for future reference.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.155-165.

Routing For Re-configurable System


for Wireless Mesh Networks
Mrs. B. Sathyasri
Senior Lecturer, ECE Department
Rajalakshmi Institute of Technology
sathyasri.me@gmail.com

Mr. K. Aanandha Saravanan
Assistant Professor, ECE Department
Veltech University

Mr. N. Vignesh Prasanna
Assistant Professor, ECE Department
Veltech University

Abstract A multi-hop wireless mesh network experiences frequent link failures caused by channel interference, obstacles, and/or applications' bandwidth demands. These failures cause severe performance degradation in wireless mesh networks or require expensive manual network management for their real-time recovery. An autonomous network reconfiguration system (ARS) enables a multi-radio wireless mesh network to autonomously recover from local link failures and preserve network performance, by generating the necessary changes in local radio and channel assignments. Based on the generated configuration changes, the system cooperatively reconfigures network settings among the local mesh routers. A new algorithm is proposed to improve channel efficiency. Wireless mesh networks are being developed actively and deployed widely for a variety of applications, such as public safety, environment monitoring, and city-wide wireless Internet services. Under heterogeneous and fluctuating wireless link conditions, preserving the required performance of such wireless mesh networks is still a challenging problem. ARS includes a monitoring protocol that enables a wireless mesh network to perform real-time failure recovery in conjunction with the planning algorithm. The accurate link-quality information from the monitoring protocol is used to identify network changes that satisfy applications' demands.
Keywords: Multi-radio wireless mesh networks, reconfigurable networks, wireless link failures.
I. INTRODUCTION

Wireless mesh networks (WMNs) are being developed actively and deployed widely for a variety of applications, such as public safety, environment monitoring, and city-wide wireless Internet services [1]-[3]. They have also been evolving in various forms (e.g., using multi-radio/channel systems [4]-[7]) to meet the increasing capacity demands of the above-mentioned and other emerging applications. However, due to heterogeneous and fluctuating wireless link conditions [8]-[10], preserving the required performance of such WMNs is still a challenging problem. For example, some links of a WMN may experience significant channel interference from other co-existing wireless networks. Some parts of the network might not be able to meet increasing bandwidth demands from new mobile users and applications. Links in a certain area (e.g., a hospital or police station) might not be able to use some frequency channels because of spectrum etiquette or regulation [11].


Even though many solutions for WMNs to recover from wireless link failures have been proposed, they still have several limitations, as follows. First, resource-allocation algorithms [12]-[14] can provide (theoretical) guidelines for initial network resource planning. However, even though their approach provides a comprehensive and optimal network configuration plan, they often require global configuration changes, which are undesirable in the case of frequent local link failures. Next, a greedy channel-assignment algorithm (e.g., [15]) can reduce the requirement of network changes by changing the settings of only the faulty link(s). However, this greedy change might not be able to realize the full improvement, which can only be achieved by considering the configurations of neighboring mesh routers in addition to the faulty link(s). Third, fault-tolerant routing protocols, such as local re-routing [16] or multi-path routing [17], can be adopted to use network-level path diversity for avoiding the faulty links. However, they rely on detour paths or redundant transmissions, which may require more network resources than link-level network reconfiguration.
To overcome the above limitations, we propose an autonomous network reconfiguration system (ARS) that allows a multi-radio WMN (mr-WMN) to autonomously reconfigure its local network settings (channel, radio, and route assignments) for real-time recovery from link failures. At its core, ARS is equipped with a reconfiguration planning algorithm that identifies local configuration changes for the recovery while minimizing changes of healthy network settings. Briefly, ARS first searches for feasible local configuration changes available around a faulty area, based on the current channel and radio associations. Then, by imposing the current network settings as constraints, ARS identifies reconfiguration plans that require the minimum number of changes to the healthy network settings.
Next, ARS also includes a monitoring protocol that enables a WMN to perform real-time failure recovery in conjunction with the planning algorithm. The accurate link-quality information from the monitoring protocol is used to identify network changes that satisfy applications' new QoS demands or that avoid propagation of QoS failures to neighboring links (ripple effects). Running in every mesh node, the monitoring protocol periodically measures wireless link conditions via a hybrid link-quality measurement technique, as we will explain in Section IV. Based on the measurement

information, ARS detects link failures and/or generates QoS-aware network reconfiguration plans upon detection of a link failure.
ARS has been implemented and evaluated extensively via experimentation on our multi-radio WMN test-bed as well as via ns-2-based simulation. Our evaluation results show that ARS outperforms existing failure-recovery methods, such as static or greedy channel assignments, and local re-routing. First, ARS's planning algorithm effectively identifies reconfiguration plans that maximally satisfy the applications' QoS demands, accommodating twice as many flows as static assignment. Next, ARS avoids the ripple effect via QoS-aware reconfiguration planning, unlike the greedy approach. Third, ARS's local reconfiguration improves network throughput and channel efficiency by more than 26% and 92%, respectively, over the local re-routing scheme. The rest of this paper is organized as follows. Section II describes the motivation behind this work. Section III provides the design rationale and algorithms of ARS. Section IV describes the implementation and experimentation results of ARS. Section V presents in-depth simulation results of ARS. Section VI concludes the paper.

II. MOTIVATION

We first describe the need for self-reconfigurable multi-radio WMNs (mr-WMNs). Next, we introduce the network model and assumptions used in this paper. Finally, we discuss the limitations of existing approaches to achieving self-reconfigurability of mr-WMNs.

A. Why is Self-Reconfigurability Necessary?
Maintaining the performance of WMNs in the face of dynamic link failures remains a challenging problem [18]. However, such failures can be withstood (hence maintaining the required performance) by enabling mr-WMNs to autonomously reconfigure channel and radio assignments, as in the following examples. (The terms radio and interface are used interchangeably in this paper.)
Recovering from link-quality degradation: The quality of wireless links in WMNs can degrade (i.e., link-quality failure) due to severe interference from other co-located wireless networks [8], [19]. For example, Bluetooth, cordless phones, and other co-existing wireless networks operating on the same or adjacent channels cause significant and varying degrees of losses or collisions in packet transmissions, as shown in Fig. 1. By switching the tuned channel of a link to another interference-free channel, local links can recover from such a link failure.
Satisfying dynamic QoS demands: Links in some areas may not be able to accommodate increasing QoS demands from end users (QoS failures), depending on spatial or temporal locality [20]. (We consider link bandwidth as the QoS parameter of interest.) For example, links around a conference room may have to relay too much data/video traffic during a session. Likewise, relay links outside the room may fail to support all attendees' Voice-over-IP calls during a session break. By re-associating their radios/channels with under-utilized radios/channels available nearby, links can avoid such communication failures.
Coping with heterogeneous channel availability: Links in some areas may not be able to access wireless channels during a certain time period (spectrum failures), due to spectrum etiquette or regulation [11], [21]. For example, some links in a WMN need to vacate their current channels if those channels are being used for emergency response near the wireless links (e.g., hospital, public safety). Such links can seek and identify alternative channels available in the same area.
Motivated by these three and other possible benefits of using reconfigurable mr-WMNs, in the remainder of this paper we develop a system that allows mr-WMNs to autonomously change channel and radio assignments (i.e., to self-reconfigure) to recover from the channel-related link failures mentioned above.

Fig. 1. The effect of wireless link failures and the need for network reconfiguration.

B. Network Model and Assumptions
Multi-radio WMN: A network is assumed to consist of mesh nodes, IEEE 802.11-based wireless links, and one control gateway. Each mesh node is equipped with n radios, and each radio's channel and link assignments are initially made (e.g., see Fig. 2) using global channel/link assignment algorithms [5], [12], [13]. Multiple orthogonal channels are assumed to be available; for example, an IEEE 802.11a/b/g combo PCMCIA card can tune to 16 orthogonal channels. The interference among multiple radios in one node is assumed to be negligible thanks to physical separation among antennas or the use of shields. The gateway is connected to the Internet via wire-line links as well as to other mesh routers via wireless links.
QoS support: During its operation, each mesh node periodically sends its local channel usage and the quality information for all outgoing links via management messages to the control gateway. Then, based on this information, the gateway controls the admission of requests for voice or video flows. For admitted flows, the information about QoS requirements is delivered to the corresponding nodes for resource reservation through the RSVP protocol [22]. Next,


the network runs routing protocols such as WCETT [6] or ETX [23] to determine the path of the admitted flows. This routing protocol is also assumed to include route-discovery and recovery algorithms [16], [24], [25] that can be used for maintaining alternative paths even in the presence of link failures.
Link failures: The channel-related link failures that we focus on are due mainly to narrow-band channel failures. These failures are assumed to occur and last in the order of a few minutes to hours, and reconfiguration is triggered in the same order as the failure occurrences. For short-term failures (lasting for milliseconds), fine-grained (e.g., packet-level or in the order of milliseconds) dynamic resource allocation might be sufficient [4], [7], and for long-term failures (lasting for weeks or months), network-wide planning algorithms [12]-[14] can be used. Note that hardware failures (e.g., node crashes) or broadband-channel failures (e.g., jamming) are beyond the scope of this paper.

C. Limitations of Existing Approaches

Given the above system models, we now discuss the pros and cons of using existing approaches for self-reconfigurable WMNs.
Localized reconfiguration: Network reconfiguration needs a planning algorithm that keeps the necessary network changes (to recover from link failures) as local as possible, as opposed to changing the entire network settings. Existing channel-assignment and scheduling algorithms [12]-[14] provide holistic guidelines, such as throughput bounds and schedulability, for channel assignment during a network deployment stage. However, these algorithms do not consider the degree of configuration change from previous network settings, and hence they often require global network changes to meet all the constraints, akin to edge-coloring problems [27]. Even though these algorithms are suitable for static or periodic network planning, they may cause network service disruption and thus are unsuitable for dynamic network reconfiguration that has to deal with frequent local link failures.
Next, the greedy channel-assignment algorithm, which considers only local areas in channel assignments (e.g., [15]), might do better in reducing the scope of network changes than the above-mentioned assignment algorithms. However, this approach still suffers from the ripple effect, in which one local change triggers the change of additional network settings at neighboring nodes (e.g., nodes using channel 3 in Fig. 2), due to the association dependency among neighboring radios. This undesired effect might be avoided by transforming a mesh topology into a tree topology, but this transformation reduces network connectivity as well as path diversity among mesh nodes.
Finally, interference-aware channel-assignment algorithms [5], [28] can minimize interference by assigning orthogonal channels as closely as possible geographically. While this approach can improve overall network capacity by using additional channels, the algorithm could further improve its flexibility by considering both radio diversity (i.e., link association) and local traffic information. For example, in Fig. 2, if channel 5 is lightly loaded in a faulty area, the second radio of node C can re-associate itself with the first radio of the neighboring node, avoiding configuration changes of other links.

Fig. 2. Multi-radio WMN: a WMN has an initial assignment of frequency channels as shown.

QoS-awareness: Reconfiguration has to satisfy the QoS constraints on each link as much as possible. First, given each link's bandwidth constraints, existing channel-assignment and scheduling algorithms [5], [12], [13] can provide approximately optimal network configurations. However, as pointed out earlier, these algorithms may require global network configuration changes upon changes in local QoS demands, thus causing network disruptions. We need instead a reconfiguration algorithm that incurs only local changes while maximizing the chance of meeting the QoS demands. For example, if link EH in Fig. 2 experiences a QoS failure on channel 1, then one simple reconfiguration plan would be to re-associate R1 of node H with R2 of node E on channel 5, which has enough bandwidth.
Next, the greedy algorithm might be able to satisfy a particular link's QoS demands by replacing a faulty channel with a new channel. However, neighboring links whose channel has been changed due to ripple effects (e.g., links GH and HI in Fig. 2) may fail to meet QoS demands if the links in the new channel experience interference from other co-existing networks that operate in the same channel.
Cross-layer interaction: Network reconfiguration has to jointly consider network settings across multiple layers. In the network layer, fault-tolerant routing protocols, such as local re-routing [16] or multi-path routing [17], allow for flow reconfiguration to meet the QoS constraints by exploiting path diversity. However, they consume more network resources than link reconfiguration, because of their reliance on detour paths or redundant transmissions. On the other hand, channel and link assignments across the network and link layers can avoid the overhead of detouring, but they have to take interference into account to avoid additional QoS failures at neighboring nodes.


III. THE ARS ARCHITECTURE

We first present the design rationale and overall algorithm of ARS. Then we detail ARS's reconfiguration algorithms. Finally, we discuss the complexity of ARS.

A. Overview
ARS is a distributed system that is easily deployable in IEEE 802.11-based mr-WMNs. Running in every mesh node, ARS supports self-reconfigurability via the following distinct features:
Localized reconfiguration: Based on the multiple channels and radio associations available, ARS generates reconfiguration plans that allow for changes of network configurations only in the vicinity where link failures occurred, while retaining the configurations in areas remote from the failure locations.
QoS-aware planning: ARS effectively identifies QoS-satisfiable reconfiguration plans by (i) estimating the QoS satisfiability of the generated reconfiguration plans and (ii) deriving their expected benefits in channel utilization.
Autonomous reconfiguration via link-quality monitoring: ARS accurately monitors the quality of the links of each node in a distributed manner. Furthermore, based on the measurements and the given links' QoS constraints, ARS detects local link failures and autonomously initiates network reconfiguration.
Cross-layer interaction: ARS actively interacts across the network and link layers for planning. This interaction enables ARS to include re-routing in its reconfiguration planning in addition to link-layer reconfiguration. ARS can also maintain connectivity during the recovery period with the help of a routing protocol.

Algorithm 1: ARS operation at mesh node i
(1) Monitoring period (tm)
1: for every link j do
2: measure link quality (lq) using passive monitoring;
3: end for
4: send monitoring results to a gateway g;
(2) Failure detection and group formation period (tf)
5: if link l violates link requirements r then
6: request a group formation on channel c of link l;
7: end if
8: participate in a leader election if a request is received;
(3) Planning period (M, tp)
9: if node i is elected as a leader then
10: send a planning-request message (c, M) to a gateway;
11: else if node i is a gateway then
12: synchronize requests from reconfiguration groups Mn;
13: generate a reconfiguration plan (p) for Mi;
14: send the reconfiguration plan p to the leader of Mi;
15: end if
(4) Reconfiguration period (p, tr)
16: if p includes changes of node i then
17: apply the changes to the links at t;
18: end if
19: relay p to neighboring members, if any

Algorithm 1 describes the operation of ARS. First, ARS in every mesh node monitors the quality of its outgoing wireless links at every tm (e.g., 10 s) and reports the results to a gateway via a management message. Second, once it detects a link failure(s), ARS in the detector node(s) triggers the formation of a group among the local mesh routers that use the faulty channel, and one of the group members is elected as a leader, using the well-known bully algorithm [29], to coordinate the reconfiguration. Third, the leader node sends a planning-request message to a gateway. Then the gateway synchronizes the planning requests (if there are multiple requests) and generates a reconfiguration plan for each request. Fourth, the gateway sends the reconfiguration plan to the leader node and the group members. Finally, all nodes in the group execute the corresponding configuration changes, if any, and dissolve the group. We assume that during the formation and reconfiguration, all messages are reliably delivered via a routing protocol and per-hop retransmission timers.
In what follows, we will detail each of these operations, including how to generate reconfiguration plans, how to monitor link conditions such as bandwidth (Section III-B), and how much overhead ARS generates for the monitoring and for maintaining a reconfiguration group (Section III-C).
B. Planning for Localized Network Reconfiguration


The core function of ARS is to systematically generate localized reconfiguration plans. A reconfiguration plan is defined as a set of link configuration changes (e.g., channel switch, link association) necessary for a network to recover from a link failure(s) on a channel, and there are usually multiple reconfiguration plans for each link failure. Existing channel-assignment and scheduling algorithms [5], [12], [13] seek optimal solutions by considering tight QoS constraints on all links, thus requiring a large configuration space to be searched and hence often making the planning an NP-complete problem [5]. In addition, a change in one link's requirement may lead to completely different network configurations. By contrast, ARS systematically generates reconfiguration plans that localize network changes by dividing the reconfiguration planning into three processes (feasibility, QoS satisfiability, and optimality) and applying different levels of constraints. As depicted in Fig. 3, ARS first applies connectivity constraints to generate a set of feasible reconfiguration plans that enumerate the feasible channel, link, and route changes around the faulty areas, given the connectivity and link-failure constraints. Then, within this set, ARS applies strict constraints (i.e., QoS and network utilization) to identify a reconfiguration plan that satisfies the QoS demands and that improves network utilization most.
Feasible plan generation: Generating feasible plans is essentially searching all legitimate changes in links' configurations, and their combinations, around the faulty area. Given multiple radios, channels, and routes, ARS identifies feasible changes that help avoid a local link failure but maintain existing network connectivity as much as possible. However, in generating such plans, ARS has to address the following challenges.
Avoiding a faulty channel: ARS first has to ensure that the faulty link is fixed via reconfiguration. To this end, ARS considers three primitive link changes, as explained in Table I. Specifically, to fix a faulty link(s), ARS can use (i) a

channel switch (S), where both end-radios of link AB can simultaneously change their tuned channel; (ii) a radio switch (R), where one radio in node A can switch its channel and associate with another radio in node B; and (iii) a route switch (D), where all traffic over the faulty link uses a detour path instead of the faulty link.
Maintaining network connectivity and utilization: While avoiding the use of the faulty channel, ARS needs to maintain connectivity with full utilization of the radio resources. Because each radio can associate itself with multiple neighboring nodes, a change in one link triggers other neighboring links to change their settings. To coordinate such propagation, ARS takes a two-step approach. ARS first generates the feasible changes of each link using the primitives,
and then combines sets of feasible changes that enable the network to maintain its connectivity. Furthermore, in making the combinations, ARS maximizes the usage of network resources by making each radio of a mesh node associate itself with at least one link and by avoiding the use of the same (redundant) channel among the radios of one node.

Fig. 3. Localized reconfiguration planning in ARS: ARS generates a reconfiguration plan by breaking the planning process down into three processes with different constraints.

Controlling the scope of reconfiguration changes: ARS has to keep network changes as local as possible, but at the same time it needs to find a locally optimal solution by considering more network changes, i.e., a larger scope. To make this tradeoff, ARS uses a k-hop reconfiguration parameter. Starting from the faulty link(s), ARS considers link changes within the first k hops and generates feasible plans. If ARS cannot find a local solution, it increases the number of hops k so that it may explore a broader range of link changes. Thus, the total number of reconfiguration changes is determined on the basis of the existing configurations around the faulty area as well as the value of k. Let us consider the illustrative example in Fig. 4. Given the failure of link CI, ARS first generates the feasible and desirable changes per link (gray columns) using the primitives. Here, the changes must not include the use of a faulty or redundant channel. Next, ARS combines the generated per-link primitives of neighboring links to generate a set of feasible plans. During the combination, ARS has to preserve link and/or radio connectivity. For example, plans S(C,I)3→6 and S(H,I)3→3 in Fig. 4 cannot be connected, because each change requires the same radio of node I to be set to a different channel. After the two steps, ARS has 11 feasible reconfiguration plans (F), obtained by traversing the connected changes of all links considered in the planning. Note that we set k to 2 in this example, but we will show the impact of k on the planning in Section V-B3. A toy sketch of this generation process is given below.
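The sketch illustrates the two-step generation: per-link primitives (only the channel switch S and route switch D are modeled; the radio switch R is omitted for brevity) are enumerated and then combined, pruning combinations that would force one node's radio onto two different channels. The encoding, in which a radio is identified with its node, is a simplifying assumption.

```python
from itertools import product

def per_link_primitives(link, channels, faulty_channel):
    """Channel switch S to each non-faulty channel, or route switch D."""
    plans = [("S", link, c) for c in channels if c != faulty_channel]
    plans.append(("D", link, None))          # detour instead of the link
    return plans

def feasible_plans(links_in_k_hops, channels, faulty_channel):
    """Combine per-link primitives, dropping combinations in which the
    same radio would have to be tuned to two different channels."""
    options = [per_link_primitives(l, channels, faulty_channel)
               for l in links_in_k_hops]
    plans = []
    for combo in product(*options):
        assigned = {}                        # radio -> channel
        ok = True
        for kind, (a, b), ch in combo:
            if kind != "S":
                continue
            for radio in (a, b):
                if assigned.setdefault(radio, ch) != ch:
                    ok = False               # connectivity conflict
            if not ok:
                break
        if ok:
            plans.append(combo)
    return plans

# Links within k = 1 hop of the faulty link (C, I) on channel 3.
print(len(feasible_plans([("C", "I"), ("H", "I")], [3, 5, 6], 3)))  # 7 plans
```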

QoS-satisfiability evaluation: Among the set of feasible plans F, ARS now needs to identify the QoS-satisfying reconfiguration plans by checking whether the QoS constraints are met under each plan. Although each feasible plan ensures that the faulty link(s) will use non-faulty channels and maintain connectivity, some plans might not satisfy the QoS constraints or might even cause cascaded QoS failures on neighboring links. To filter out such plans, ARS has to solve the following challenges.
out such plans, ARS has to solve the following challenges.
Per-link bandwidth estimation: For each feasible plan, ARS has to check whether each link's configuration change satisfies its bandwidth requirement, so it must estimate link bandwidth. To estimate link bandwidth, ARS accurately measures each link's capacity and its available channel air-time. In multi-hop wireless networks equipped with a CSMA-like MAC, each link's achievable bandwidth (or throughput) can be affected by both the link capacity and the activities of other links that share the channel air-time. Even though numerous bandwidth-estimation techniques have been proposed, they focus on the average bandwidth of each node in a network [23], [30] or on the end-to-end throughput of flows [17], which cannot be used to calculate the impact of per-link configuration changes. By contrast, ARS estimates an individual link's capacity (C) based on measured (or cached) link-quality information (the packet-delivery ratio and the data-transmission rate, measured by passively monitoring the transmissions of data or probing packets [31]) and the formula derived in Appendix A. Here, ARS is assumed to cache link-quality information for other channels and to use the cached information to generate reconfiguration plans. If the information becomes obsolete, ARS detects link failures and triggers another reconfiguration to find QoS-satisfiable plans (lazy monitoring).
Examining per-link bandwidth satisfiability: Given the measured bandwidth and the bandwidth requirements, ARS has to check whether the new link change(s) satisfies the QoS requirements. To do so, ARS defines and uses the expected busy air-time ratio of each link to check the link's QoS satisfiability. Assuming that a link's bandwidth requirement q is given, the link's busy air-time ratio (BAR) is defined as BAR = q/C, and it must not exceed 1.0 (i.e., BAR < 1.0) for the link to satisfy its bandwidth requirement. If multiple links share the air-time of one channel, ARS calculates the aggregate BAR (aBAR) of the end-radios of a link, defined as

aBAR(k) = Σl∈L(k) ql/Cl

where k is a radio ID, l is a link associated with radio k, and L(k) is the set of directed links within and across radio k's transmission range.
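As a concrete illustration of the satisfiability test above, the short sketch below computes BAR and aBAR from given bandwidth requirements and estimated capacities. The data layout is an assumption made for the example, but the formulas follow the definitions just given, and the numbers loosely mirror the Fig. 5 scenario.

def bar(q_mbps, capacity_mbps):
    # Busy air-time ratio of one link: requirement q over capacity C.
    return q_mbps / capacity_mbps

def abar(links):
    # Aggregate BAR over the directed links L(k) that share one radio's
    # channel air-time; 'links' is a list of (q, C) pairs.
    return sum(q / c for q, c in links)

# Four directed links on channel 1, each requiring 2 Mbps of a 10 Mbps capacity.
links_on_channel_1 = [(2.0, 10.0)] * 4
print(abar(links_on_channel_1))                    # 0.8 < 1.0: QoS satisfiable
print(abar(links_on_channel_1 + [(4.0, 10.0)]))    # 1.2 >= 1.0: violation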
Avoiding cascaded link failures: Besides the link change itself, ARS needs to check whether neighboring links are affected by the local changes (i.e., whether cascaded link failures occur). To identify such adverse effects of a plan, ARS also estimates the QoS-satisfiability of links one hop away from the member nodes whose link capacity can be affected by the plan. If these one-hop-away links still meet the QoS requirement, the effects of the changes do not propagate, thanks to the spatial reuse of channels. Otherwise, the effects of the local changes will propagate, causing cascaded QoS failures.

Fig. 5. Busy Air-time Ratio (BAR) of a directed link: BAR (e.g., of l1) is affected by the activities of neighboring links (l0, l2, and l3) in channel 1 and is used to evaluate the QoS satisfiability of a link.
Let us consider the example in Fig. 5. Assuming that the BAR of each directed link (li) is 0.2 (e.g., 2 Mbps / 10 Mbps) in a tuned channel, the aBAR of each radio tuned to channel 1 does not exceed 1.0, satisfying each link's QoS requirement. Now assume that BAR(l1) increases from 0.2 to 0.4 in Fig. 5. To accommodate this increase, reconfiguration plans that take a detour path through node Q do not affect the QoS-satisfiability of the neighboring nodes. On the other hand, plans with radio switches (e.g., R(L2,M1)1→2) satisfy the QoS of link MN but cause aBAR(OR2) to exceed 1.0, resulting in cascaded QoS failures of the links beyond node O.
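The cascaded-failure check can be phrased as re-running the aBAR test one hop beyond the nodes a plan touches. The sketch below illustrates this filter; member_nodes, one_hop_neighbors, and estimated_abar are hypothetical helpers standing in for ARS's internal state.

def avoids_cascaded_failures(plan, topology):
    for node in plan.member_nodes():
        for neighbor in topology.one_hop_neighbors(node):
            for radio in neighbor.radios:
                # Estimated aBAR of this radio after the plan's channel changes.
                if topology.estimated_abar(radio, plan) >= 1.0:
                    return False  # the local change would propagate QoS failure
    return True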
Choosing the best plan: ARS now has a set of QoS-satisfiable reconfiguration plans, and needs to choose the plan within the set that gives the local network evenly distributed link capacity. To incorporate this notion of fair share into the planning, ARS addresses the following challenges.

Quantifying the fairness of a plan: ARS has to quantify the potential changes in link-capacity distribution caused by a plan. To this end, ARS defines and uses a benefit function B(p) that quantifies the improvement in channel utilization that reconfiguration plan p makes. Specifically, the benefit function is defined as B(p) = (1/n) Σk=1..n b(k), where b(k) is the relative improvement in the air-time usage of radio k, and n is the number of radios whose b(k) has changed under the plan. This definition allows the benefit function to quantify the overall change in air-time usage resulting from the reconfiguration plan. Here, b(k) is considered a fairness index on the usage of channel air-time, and it is defined as follows:

b(k) = (|u1(k) − δ| − |u2(k) − δ|) / |u1(k) − δ|    (1)

where u1(k) and u2(k) are the estimated aBARs of radio k in the existing and in the new configurations, respectively, and δ is the desired channel utilization. Eq. (1) implies that if a reconfiguration plan brings the overall links' channel utilization closer to the desired utilization δ, then b(k) gives a positive value, and a negative value otherwise.

Fig. 6. Benefit function: B prefers a reconfiguration plan that improves overall channel utilization close to the desired parameter δ.

Breaking a tie among multiple plans: Multiple reconfiguration plans can have the same benefit, and ARS needs to break ties among them. ARS uses the number of link changes that each plan requires to break a tie. Although link-configuration changes incur only a small amount of flow disruption (e.g., on the order of 10 ms), the fewer the changes in link configuration, the less the network disruption.

Suppose δ is 0.5, as shown in Fig. 6; ARS then favors a plan that reconfigures links to have 50% available channel air-time (e.g., plan 1 in the figure). If a plan reconfigures the WMN to make some links heavily utilized while idling others (e.g., plan 2), then the benefit function considers the plan ineffective, placing it in a low-ranked position. The effectiveness of B and δ will be evaluated and discussed further in Section V-B2.
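To illustrate, the sketch below evaluates B(p) over the radios a plan affects and ranks QoS-satisfiable plans by benefit, breaking ties with the number of link changes. The plan representation (abar_pairs, num_link_changes) is hypothetical, and b(k) follows the reconstructed Eq. (1) above.

def b(u1, u2, delta):
    # Relative improvement of one radio's utilization toward delta (Eq. (1)).
    if u1 == delta:  # guard added for the sketch; Eq. (1) assumes u1 != delta
        return 0.0
    return (abs(u1 - delta) - abs(u2 - delta)) / abs(u1 - delta)

def benefit(plan, delta=0.5):
    # Average b(k) over the radios whose aBAR actually changed under the plan.
    changed = [(u1, u2) for (u1, u2) in plan.abar_pairs() if u1 != u2]
    if not changed:
        return 0.0
    return sum(b(u1, u2, delta) for u1, u2 in changed) / len(changed)

def choose_best(plans, delta=0.5):
    # Highest benefit first; among equal benefits, prefer fewer link changes.
    return max(plans, key=lambda p: (benefit(p, delta), -p.num_link_changes()))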
C. Complexity of ARS
Thanks to its distributed and localized design, ARS incurs reasonable bandwidth and computation overheads. First, the network-monitoring part of the reconfiguration protocols is made highly efficient by exploiting existing data traffic, and it consumes less than 12 Kbps of probing bandwidth (i.e., one packet per second) for each radio. In addition, the group formation requires only O(n) message overhead (in forming a spanning tree), where n is the number of nodes in the group. Next, the computational overhead in ARS stems mainly from the planning algorithms. Specifically, generating the possible link plans incurs O(n+m) complexity, where n is the number of available channels and m the number of radios. Finally, a gateway node needs to generate and evaluate the feasible plans, which incurs search overhead in a constraint graph that consists of O(l(n+m)) nodes, where l is the number of links that use a faulty channel in the group.
IV. SYSTEM IMPLEMENTATION AND EXPERIMENTATION

We have implemented ARS in the Linux OS and evaluated it in our testbed. We first describe the implementation details, and then present important experimental results on ARS.
A. Implementation Details
Fig. 7(a) shows the software architecture of ARS. First, ARS in the network layer is implemented using netfilter [32], which provides ARS with a hook to capture and send ARS-related packets such as group-formation messages. In addition, this module includes several important algorithms and protocols of ARS: (i) the network planner, which generates reconfiguration plans only in a gateway node; (ii) the group organizer, which forms a local group among mesh routers; (iii) the failure detector, which periodically interacts with a network monitor in the device driver and maintains an up-to-date link-state table; and (iv) the routing-table manager, through which ARS obtains or updates the state of the system routing table.

Fig. 7. ARS's implementation and prototype: (a) ARS is implemented across the network and link layers as a loadable module of the Linux 2.6 kernel.

Next, the ARS components in the device driver are implemented in the open-source MADWiFi device driver [33]. This driver is designed for Atheros chipset-based 802.11 NICs [34] and allows access to various control and management registers (e.g., longretry, txrate) in the MAC layer, making network monitoring accurate. The module in this driver includes (i) the network monitor, which efficiently monitors link quality and is extensible to support as many radios as needed [31]; and (ii) the NIC manager, which reconfigures NIC settings based on a reconfiguration plan received from the group organizer.

B. Experimental Setup
To evaluate our implementation, we constructed a multi-hop wireless mesh network testbed on the fourth floor of the Computer Science and Engineering (CSE) building at the University of Michigan. The testbed consists of 17 mesh nodes and has multiple (up to 5) links per node. Each node is deliberately placed on either ceiling panels or high shelves so as to send/receive strong signals to/from neighboring nodes. At the same time, each node experiences ample multi-path fading effects from obstacles and interference from co-existing public wireless networks.

As shown in Fig. 7(b), each mesh node is a small-size wireless router, a Soekris board 4826-50 [35] (Pentium-III 266 MHz CPU, 128 MB memory). This router is equipped with two EMP IEEE 802.11a/b/g miniPCI cards and 5-dBi-gain indoor omni-directional antennae. Each card operates at an IEEE 802.11a frequency in a pseudo ad-hoc mode, and is set to use a fixed data rate and transmission power. Next, all nodes run the Linux OS (kernel 2.6), the MADWiFi device driver (version 0.9.2) for the wireless interfaces, and the ARS implementation. In addition, the ETX [23] and WCETT [6] routing metrics are implemented for the routing protocols. Finally, the Iperf measurement tool [36] is used for measuring end-to-end throughput, and the numbers are derived by averaging the experimental results of 10 runs, unless otherwise specified.
C. Experimental Results
We evaluated the improvements achieved by ARS, including throughput and channel-efficiency gains, QoS satisfiability, and the reduction of ripple effects.

1) Throughput and channel-efficiency gains: We first study the throughput and channel-efficiency gains from ARS's real-time reconfiguration. We run one UDP flow at the maximum rate over a randomly-chosen link in our testbed, while increasing the level of interference every 10 seconds. We also set the QoS requirement of every link to 6 Mbps, and measure the flow's throughput progression every 10 seconds during a 400-second run. For comparison, we also ran the same scenario under local re-routing with the WCETT (Weighted Cumulative Expected Transmission Time) metric [37] and under static channel-assignment algorithms. Note that we intentionally do not run a greedy algorithm in this single-hop scenario, because its effect is subsumed by ARS; we compare it with ARS in multi-hop scenarios in Section IV-C3.

Fig. 8(a) compares the progression of link throughput achieved by the above three methods. ARS effectively reconfigures the network on detection of a failure, achieving 450% and 25.6% more bandwidth than static assignment and local re-routing, respectively. ARS accurately detects a link's QoS failure using link-quality monitoring information, and

completes network reconfiguration (i.e., channel switching) within 15 seconds on average, while static assignment experiences severe throughput degradation. Note that the 15-second delay is due mainly to the link-quality information update and the communication delay with the gateway, and the delay can be adjusted. Furthermore, within this delay, the actual channel-switch delay is less than 3 ms, which causes negligible flow disruption. On the other hand, local re-routing improves the throughput by using a detour path, but still suffers from throughput degradation because of an increased loss rate along its detour path.

Fig. 8. Gains in throughput and channel efficiency: ARS effectively reconfigures the network around a faulty link, improving both network throughput and channel efficiency by up to 26% and 92%, respectively. By contrast, local re-routing causes degradation in channel efficiency due to the use of a detour path, and static channel assignment does not react to faults in a timely manner.

ARS also improves channel efficiency (i.e., the ratio of the number of successfully delivered data packets to the number of total MAC frame transmissions) by more than 90% over the other recovery methods. Using the data collected during the previous experiment, we derive the channel efficiency of the UDP flow by counting the number of total MAC frame transmissions and the number of successful transmissions. As shown in Fig. 8(b), ARS improves channel efficiency by up to 91.5% over the local re-routing scheme, thanks to its on-line channel reconfiguration. On the other hand, static channel assignment suffers poor channel utilization due to frame retransmissions on the faulty channel. Similarly, local re-routing often routes traffic over longer or low-link-quality paths, thus consuming more channel resources than ARS.

2) QoS-satisfaction gain: ARS enhances the chance of meeting varying QoS demands. To show this gain, we first assign links and channels in our testbed as shown in Fig. 2. Here, nodes G, A, and C are a gateway, a mesh router in a conference room, and a mesh router in an office, respectively. We assume that mobile clients in the conference room request video streams through router A during a meeting; after the meeting, they return to the office and connect through router C. While increasing the number of video streams, we measure the total number of admitted streams after network reconfiguration for each place. We use static assignment, the WCETT routing metric (which finds a path with diverse channels), and ARS for reconfiguration.

QoS-aware reconfiguration planning in ARS improves the chance for a WMN to meet varying QoS demands, on average, by 200%. As shown in Fig. 9(a), a static channel-assignment algorithm cannot support more bandwidth than the initial assignment (e.g., 9.2 Mbps from G to C). Moreover, using the WCETT metric helps find a path that has channel diversity (e.g., RR1 in Fig. 9(b) favors the path G→H→I→F→C, and RR2 favors the path G→H→E→F→C), but consumes more channel resources due to the use of longer paths. On the other hand, ARS effectively discovers and uses idle channels through reconfiguration, thus satisfying QoS demands up to three times better than static-assignment algorithms (e.g., the bandwidth from G to C in Fig. 9(a)).

Fig. 9. QoS-satisfaction gain: ARS reconfigures networks to accommodate the varying QoS demands in time and space. (a) QoS-satisfaction gain across reconfiguration methods; (b) reconfiguration results (from G to C).


3) Avoidance of ripple effects: We also studied ARS's effectiveness in avoiding the ripple effects of network reconfiguration. Fig. 10(a) shows the initial channel and flow assignments in a part of our testbed. In this topology, we run six UDP flows (f1, . . ., f6), each at 4 Mbps, and measure each flow's throughput while injecting interference into a target channel. We run the same scenarios with two different interference frequencies (5.28 and 5.2 GHz) to induce failures on different links. Also, we use three failure-recovery methods (i.e., local re-routing, greedy, and ARS) for comparison.

Since ARS considers the effects of local changes on neighboring nodes via aBAR, it effectively identifies reconfiguration plans that avoid the ripple effects. Fig. 10(b) shows the average throughput improvement of the flows after network reconfiguration with each of the three recovery schemes. First, with interference on 5.28 GHz, nodes 1, 3, and 5 experience link-quality degradation, degrading throughput by 5 Mbps across the six flows. Under ARS, the network performs reconfiguration and recovers an average of 98% of the degraded throughput

(4.8 Mbps). On the other hand, the flows under local re-routing achieve only 82% of the degraded throughput (3.2 Mbps), because of the use of detour paths (f5: 5→4→3, f6: 1→2→3). Meanwhile, while partially recovering from the original link failures, the greedy approach causes throughput degradation of neighboring links. This is because one local greedy channel switch (from 5.26 to 5.32 GHz) requires the channel of neighboring links (e.g., between nodes 5 and 7) to change, creating interference with other neighboring nodes' links (e.g., between nodes 6 and 7) that use adjacent channels (e.g., 5.3 GHz). Next, in the second interference case (5.2 GHz), ARS also identifies QoS-satisfying reconfiguration plans, achieving a 97% throughput improvement over the degraded throughput, as shown in Fig. 10(b). On detecting the interference, ARS switches the channel of all the fault-related links to another channel, based on its planning algorithm. Naturally, this result (configuration and throughput) is the same as that achieved by the greedy method. On the other hand, local re-routing causes heavy channel contention on the detour paths, degrading neighboring flows' performance (i.e., f5) as well as that of others (f3, f4).

Fig. 10. ARS's avoidance of ripple effects: ARS finds a local reconfiguration plan that avoids the ripple effects by considering neighboring nodes' channel utilization, whereas greedy channel-switching and local re-routing cannot fully recover from the failure or cause additional QoS failures.

V. PERFORMANCE EVALUATION

We have also evaluated ARS in large-scale network settings via simulation. We first describe our simulation methodology, and then present the evaluation results on ARS.

A. The Simulation Model & Methods

ns-2 [38] is used in our simulation study. Throughout the simulation, we use a grid topology with 25 nodes in an area of 1 km × 1 km, as shown in Fig. 11(a). In the topology, adjacent nodes are separated by 180 m, and each node is equipped with a different number of radios depending on its proximity to a gateway: the gateway is equipped with four radios, one-hop-away nodes have three radios, and the other nodes have two radios.

For each node in the above topology, we use the following network protocol stack. First, the shadowing propagation model [39] is used to simulate varying channel quality and multi-path effects. Next, the CMU 802.11 wireless extension is used for the MAC protocol with a fixed data rate (i.e., 11 Mbps), further modified to support multiple radios and multiple channels. Finally, a link-state routing protocol, a modification of DSDV [40], and a multi-radio-aware routing metric (WCETT [6]) are implemented and used for routing.

In these settings, ARS is implemented as an agent in both the MAC layer and the routing protocol, as explained in Sections III and IV. It periodically collects channel information from the MAC, and requests channel switching or link-association changes based on its decision. At the same time, it informs the routing protocol of network failures or of routing-table updates.

Several settings are used to emulate real-network activities. First, to generate user traffic, multiple UDP flows between the gateway and randomly-chosen mesh nodes are introduced; each flow runs at 500 Kbps with a packet size of 1000 bytes. Second, to create network failures, uniformly-distributed channel faults are injected at a random time point. Random bit-errors are used to emulate channel-related link failures, and each failure lasts for a given failure period. Finally, all experiments are run for 3000 seconds, and the results of 10 runs are averaged unless specified otherwise.
B. Evaluation Results

1) Effectiveness of QoS-aware planning: We measured the effectiveness of ARS in meeting varying QoS requirements in a mr-WMN. We initially assign symmetric link capacity as shown in the channel assignment of the grid topology (Fig. 11(a)). Then, while changing the QoS constraints in the gray areas at different times (i.e., T1, . . ., T5), we evaluate the improvement in available capacity that ARS can generate via reconfiguration.

As shown in the tables of Fig. 11(b), ARS reconfigures the wireless mesh network to meet different QoS requirements. Before each reconfiguration, the gray areas can accept only 1 to 9 UDP flows. On the other hand, after reconfiguration, the network in those areas can admit 4 to 15 additional flows, improving the average network capacity of the gray areas by 3.5 times.

Fig. 11. Satisfying varying QoS constraints: Fig. 11(a) shows requests with different QoS requirements. Next, Fig. 11(b) shows the improved (or changed) network capability (i) before and (ii) after reconfiguration.

2) Impact of the benefit function: We also studied the impact of the benefit function on ARS's planning algorithm. We conducted the same experiment as the previous one with different values of δ in the benefit function. As shown in Fig. 11(b), a high value of δ (0.8) allows ARS to keep local channel-efficiency high. By contrast, a low value (0.4) can deliver more available bandwidth (on average, 1.2 Mbps) than the high value, since ARS then tries to reserve more capacity.

3) Impact of the reconfiguration range: We also evaluated the impact of the reconfiguration range. We used the same experimental settings as before and focused on the reconfiguration requests at T1. As we increase the hop count (k) from the faulty link(s), we measure the capacity improvement achieved by the reconfiguration plans. In addition, we calculate the capacity gain per change as the cost-effectiveness of reconfiguration planning with different k values.

Fig. 12 plots the available capacity of the faulty area after the reconfigurations. As shown in the figure, ARS can improve the available links' capacity by increasing the reconfiguration range. However, the improvement becomes marginal as the range increases. This saturation results mainly from the fixed number of radios at each node; in other words, the improvement is essentially bounded by the total capacity of the physical radios. Furthermore, because reconfiguration plans with a larger range incur more changes in network settings, the bandwidth gain per change degrades significantly (e.g., the capacity gain per change at a hop count of 4 in Fig. 12). We observed similar results for the other reconfiguration requests (T2, T3, T4), but omit them for brevity.

Fig. 12. The impact of the reconfiguration range: a longer hop length can help ARS search for reconfiguration plans. However, the benefit from the increased length is small, whereas the number of total changes for the reconfiguration adversely increases.

VI. CONCLUSION

We first make concluding remarks and then discuss some future work.

A. Concluding Remarks
This paper presented an Autonomous network Reconfiguration System (ARS) that enables a multi-radio WMN to autonomously recover from wireless link failures. ARS generates an effective reconfiguration plan that requires only local network configuration changes by exploiting channel, radio, and path diversity. Furthermore, ARS effectively identifies reconfiguration plans that satisfy applications' QoS constraints, admitting up to two times more flows than static assignment, through QoS-aware planning. Next, ARS's on-line reconfigurability allows for real-time failure detection and network reconfiguration, improving channel efficiency by 92%. Our experimental evaluation on a Linux-based implementation and our ns-2-based simulation have demonstrated the effectiveness of ARS in recovering from local link failures and in satisfying applications' diverse QoS demands.

REFERENCES
[1] I. Akyildiz, X. Wang, and W. Wang, "Wireless mesh networks: A survey," Computer Networks, no. 47, pp. 445-487, 2005.
[2] MIT Roofnet, http://www.pdos.lcs.mit.edu/roofnet.
[3] Motorola Inc. Mesh Broadband, http://www.motorola.com/mesh.
[4] P. Kyasanur and N. Vaidya, "Capacity of multi-channel wireless networks: Impact of number of channels and interfaces," in Proceedings of ACM MobiCom, Cologne, Germany, Aug. 2005.
[5] K. Ramanchandran, E. Belding-Royer, and M. Buddhikot, "Interference-aware channel assignment in multi-radio wireless mesh networks," in Proceedings of IEEE InfoCom, Barcelona, Spain, Apr. 2006.
[6] R. Draves, J. Padhye, and B. Zill, "Routing in multi-radio, multi-hop wireless mesh networks," in Proceedings of ACM MobiCom, Philadelphia, PA, Sept. 2004.
[7] P. Bahl, R. Chandra, and J. Dunagan, "SSCH: Slotted seeded channel hopping for capacity improvement in IEEE 802.11 ad-hoc wireless networks," in Proceedings of ACM MobiCom, Philadelphia, PA, Sept. 2004.
[8] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris, "Link-level measurements from an 802.11b mesh network," in Proceedings of ACM SIGCOMM, Portland, OR, Aug. 2004.
[9] A. Akella, G. Judd, S. Seshan, and P. Steenkiste, "Self-management in chaotic wireless deployments," in Proceedings of ACM MobiCom, Cologne, Germany, Sept. 2005.
[10] J. Zhao, H. Zheng, and G.-H. Yang, "Distributed coordination in dynamic spectrum allocation networks," in Proceedings of IEEE DySPAN, Baltimore, MD, Nov. 2005.
[11] M. J. Marcus, "Real time spectrum markets and interruptible spectrum: New concepts of spectrum use enabled by cognitive radio," in Proceedings of IEEE DySPAN, Baltimore, MD, Nov. 2005.
[12] M. Alicherry, R. Bhatia, and L. Li, "Joint channel assignment and routing for throughput optimization in multi-radio wireless mesh networks," in Proceedings of ACM MobiCom, Cologne, Germany, Aug. 2005.
[13] M. Kodialam and T. Nandagopal, "Characterizing the capacity region in multi-radio multi-channel wireless mesh networks," in Proceedings of ACM MobiCom, Cologne, Germany, Aug. 2005.
[14] A. Brzezinski, G. Zussman, and E. Modiano, "Enabling distributed throughput maximization in wireless mesh networks: a partitioning approach," in Proceedings of ACM MobiCom, Los Angeles, CA, Sept. 2006.
[15] A. Raniwala and T. Chiueh, "Architecture and algorithms for an IEEE 802.11-based multi-channel wireless mesh network," in Proceedings of IEEE InfoCom, Miami, FL, Mar. 2005.
[16] S. Nelakuditi, S. Lee, Y. Yu, J. Wang, Z. Zhong, G. Lu, and Z. Zhang, "Blacklist-aided forwarding in static multihop wireless networks," in Proceedings of IEEE SECON, Santa Clara, CA, Sept. 2005.
[17] S. Chen and K. Nahrstedt, "Distributed quality-of-service routing in ad hoc networks," IEEE JSAC, vol. 17, no. 8, 1999.
[18] L. Qiu, P. Bahl, A. Rao, and L. Zhou, "Troubleshooting multi-hop wireless networks," in Proceedings of ACM SIGMETRICS (extended abstract), Alberta, Canada, June 2005.
[19] D. Kotz, C. Newport, R. S. Gray, J. Liu, Y. Yuan, and C. Elliott, "Experimental evaluation of wireless simulation assumptions," Dept. of Computer Science, Dartmouth College, Tech. Rep. TR2004-507, 2004.
[20] T. Henderson, D. Kotz, and I. Abyzov, "The changing usage of a mature campus-wide wireless network," in Proceedings of ACM MobiCom, Philadelphia, PA, Sept. 2004.
[21] M. Buddhikot, P. Kolodzy, S. Miller, K. Ryan, and J. Evans, "DIMSUMnet: New directions in wireless networking using coordinated dynamic spectrum access," in Proceedings of IEEE Symposium on a World of Wireless, Mobile and Multimedia Networks, Naxos, Italy, June 2005.
[22] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource reservation protocol (RSVP)," Internet Request for Comments 2205 (rfc2205.txt), Sept. 1997.
[23] D. S. D. Couto, D. Aguayo, J. Bicket, and R. Morris, "A high-throughput path metric for multi-hop wireless routing," in Proceedings of ACM MobiCom, San Diego, CA, Sept. 2003.
[24] C. Perkins, E. Belding-Royer, and S. Das, "Ad-hoc on-demand distance vector routing," Internet Request for Comments 3561 (rfc3561.txt), July 2003.
[25] D. B. Johnson and D. A. Maltz, "Dynamic source routing in ad hoc wireless networks," in Mobile Computing. Kluwer Academic Publishers, 1996, vol. 353.
[26] G. Holland, N. Vaidya, and P. Bahl, "A rate-adaptive MAC protocol for multi-hop wireless networks," in Proceedings of ACM MobiCom, Rome, Italy, Sept. 2001.
[27] J. L. Gross and J. Yellen, Graph Theory and Its Applications, 2nd edition, Chapman & Hall/CRC, 2006.
[28] A. P. Subramanian, H. Gupta, S. R. Das, and J. Cao, "Minimum interference channel assignment in multi-radio wireless mesh networks," IEEE Transactions on Mobile Computing, Dec. 2008.
[29] A. S. Tanenbaum and M. V. Steen, Distributed Systems, Pearson Education.
[30] Q. Xue and A. Ganz, "Ad hoc QoS on-demand routing (AQOR) in mobile ad hoc networks," ACM Journal of Parallel and Distributed Computing, vol. 63, no. 2, pp. 154-165, 2003.
[31] K.-H. Kim and K. G. Shin, "Accurate and asymmetry-aware measurement of link quality in wireless mesh networks," to appear in IEEE/ACM Transactions on Networking, 2009.
[32] Netfilter, http://www.netfilter.org.
[33] MADWiFi, http://www.madwifi.org.
[34] Atheros Communications, http://www.atheros.com.
[35] Soekris Engineering, http://www.soekris.com.
[36] Iperf, "Network measurement tool," http://dast.nlanr.net/Projects/Iperf/.
[37] R. Draves, J. Padhye, and B. Zill, "Routing in multi-radio, multi-hop wireless mesh networks," in Proceedings of ACM MobiCom, Philadelphia, PA, Sept. 2004.
[38] ns-2 network simulator, http://www.isi.edu/nsnam/ns.
[39] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, 2002.
[40] C. Perkins and P. Bhagwat, "Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers," in ACM SIGCOMM, London, UK, Sept. 1994, pp. 234-244.
[41] M. M. Carvalho and J. J. Garcia-Luna-Aceves, "A scalable model for channel access protocols in multihop ad hoc networks," in Proceedings of ACM MobiCom, Philadelphia, PA, Sept. 2004.
[42] "Delay analysis of IEEE 802.11 in single-hop networks," in Proceedings of IEEE ICNP, Atlanta, GA, Nov. 2003.
[43] G. Bianchi, "Performance analysis of the IEEE 802.11 distributed coordination function," IEEE JSAC, vol. 18, no. 3, Mar. 2000.
[44] H. Wu, X. Wang, Y. Liu, Q. Zhang, and Z.-L. Zhang, "SoftMAC: Layer 2.5 MAC for VoIP support in multi-hop wireless networks," in Proceedings of IEEE SECON, Santa Clara, CA, Sept. 2005.
[45] S. Lee, S. Banerjee, and B. Bhattacharjee, "The case for a multi-hop wireless local area network," in Proceedings of IEEE InfoCom, Hong Kong, Mar. 2004.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.166-167.

Study on Blocking Misbehaving Users in Anonymizing Networks


R. Divya
II Year, ME, Communication Systems, S.A. Engineering College
Email: divyarajaa@gmail.com
Abstract — Anonymizing networks allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. Their success has been limited by users employing the anonymity for abusive purposes, such as defacing popular Web sites. Web site administrators rely on IP-address blocking to disable access by misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To solve this problem, a system is designed in which servers can blacklist misbehaving users, thereby blocking users without compromising their anonymity. The system is thus agnostic to different servers' definitions of misbehaviour: servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

Keywords — Anonymous, blacklisting, privacy, revocation.

I. INTRODUCTION
Anonymizing networks route traffic through independent nodes in separate administrative domains to hide a client's IP address. Unfortunately, some users have misused such networks: under the cover of anonymity, users have repeatedly defaced popular Web sites. Since Web site administrators cannot blacklist individual malicious users' IP addresses, they blacklist the entire anonymizing network. Such measures eliminate malicious activity through anonymizing networks, but at the cost of denying anonymous access to behaving users.
There are several solutions to this problem, each providing some degree of accountability. In pseudonymous credential systems, users log into Web sites using pseudonyms, which can be added to a blacklist if a user misbehaves. Unfortunately, this approach results in pseudonymity for all users and weakens the anonymity provided by the anonymizing network. Anonymous credential systems employ group signatures. Basic group signatures allow servers to revoke a misbehaving user's anonymity by complaining to a group manager; servers must query the group manager for every authentication, however, so this approach lacks scalability. Traceable signatures allow the group manager to release a trapdoor that allows all signatures generated by a particular user to be traced; such an approach does not provide the backward unlinkability that we desire, where a user's accesses before the complaint remain anonymous. Backward unlinkability allows for what we call subjective blacklisting, where servers can blacklist users for whatever reason, since the privacy of the blacklisted user is not at risk. In contrast, approaches without backward unlinkability need to pay careful attention to when and why a user must have all their connections linked, and users must worry about whether their behaviors will be judged fairly.
Subjective blacklisting is also better suited to servers where misbehaviours, such as questionable edits to a Web page, are hard to define in mathematical terms. In some systems, misbehaviour can indeed be defined precisely. For instance, double spending of an e-coin is considered misbehaviour in anonymous e-cash systems, following which the offending user is deanonymized. Unfortunately, such systems work only for narrow definitions of misbehaviour: it is difficult to map more complex notions of misbehaviour onto double spending or related approaches. With dynamic accumulators, a revocation operation results in a new accumulator and new public parameters for the group, and all other existing users' credentials must be updated, making the approach impractical. Verifier-local revocation (VLR) fixes this shortcoming by requiring the server (verifier) to perform only local updates during revocation. Unfortunately, VLR requires heavy computation at the server that is linear in the size of the blacklist. For example, for a blacklist with 1,000 entries, each authentication would take tens of seconds, a prohibitive cost in practice. In contrast, our scheme takes the server about one millisecond per authentication, which is several thousand times faster than VLR. We believe these low overheads will incentivize servers to adopt such a solution when weighed against the potential benefits of anonymous publishing (e.g., whistle-blowing, reporting, anonymous tip lines, activism, and so on).
A. In This Paper
Blacklisting anonymous users: servers can blacklist users of an anonymizing network while maintaining their privacy.


II. SECURITY MODEL

A. Goals and Threats
An entity is honest when its operations abide by the system's specification. An honest entity can be curious: it attempts to infer knowledge from its own information (e.g., its secrets, state, and protocol communications). An honest entity becomes corrupt when it is compromised by an attacker; it then reveals its information at the time of compromise and operates under the attacker's full control, possibly deviating from the specification.

Blacklistability assures that any honest server can indeed block misbehaving users. Specifically, if an honest server complains about a user that misbehaved in the current linkability window, the complaint will be successful, and the user will not be able to connect (i.e., establish an authenticated connection) to the server in subsequent time periods (following the time of the complaint) of that linkability window.

Rate-limiting assures any honest server that no user can successfully connect to it more than once within any single time period.

Non-frameability guarantees that any honest user who is legitimate according to an honest server can connect to that server. This prevents an attacker from framing a legitimate honest user, e.g., by getting the user blacklisted for someone else's misbehaviour. This property assumes each user has a single unique identity. When IP addresses are used as the identity, it is possible for a user to frame an honest user who later obtains the same IP address; non-frameability therefore holds only against attackers with different identities (IP addresses).

A user is legitimate according to a server if she has not been blacklisted by the server and has not exceeded the rate limit for establishing connections. Honest servers must be able to differentiate between legitimate and illegitimate users.

Anonymity protects the anonymity of honest users, regardless of their legitimacy according to the (possibly corrupt) server; the server cannot learn any more information beyond whether the user behind (an attempt to make) a connection is legitimate or illegitimate.
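As a toy illustration of these definitions, the sketch below shows the decision an honest server makes per connection attempt: a user is admitted only if she is not blacklisted and has not already connected in the current time period (rate limiting). The bookkeeping here is hypothetical and deliberately ignores the cryptographic machinery that keeps the user's token unlinkable to her identity.

class Server:
    def __init__(self):
        self.blacklist = set()         # pseudonymous tokens, not IP addresses
        self.seen_this_period = set()  # tokens that already connected

    def new_time_period(self):
        self.seen_this_period.clear()

    def admit(self, token):
        if token in self.blacklist:
            return False               # blacklisted: blocked, yet still anonymous
        if token in self.seen_this_period:
            return False               # rate limit: one connection per period
        self.seen_this_period.add(token)
        return True

    def complain(self, token):
        # Subjective blacklisting: the server may blacklist for any reason.
        self.blacklist.add(token)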
III. PRELIMINARIES

A. Cryptographic Primitives
Secure cryptographic hash functions. These are one-way and collision-resistant functions that resemble random oracles.
Secure message authentication (MA). These consist of the key generation (MA.KeyGen) and the message authentication code (MAC) computation (MA.Mac) algorithms.
Secure symmetric-key encryption (Enc). These consist of the key generation (Enc.KeyGen), encryption (Enc.Encrypt), and decryption (Enc.Decrypt) algorithms.
Secure digital signatures (Sig). These consist of the key generation (Sig.KeyGen), signing (Sig.Sign), and verification (Sig.Verify) algorithms.
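The MA primitive, for instance, can be instantiated with HMAC over a secure hash. The minimal sketch below uses Python's standard hmac and hashlib modules and mirrors the (MA.KeyGen, MA.Mac) naming informally; it is one possible realization, not the scheme's own construction.

import hmac, hashlib, os

def ma_keygen(nbytes=32):
    # MA.KeyGen: a fresh random symmetric key.
    return os.urandom(nbytes)

def ma_mac(key, message):
    # MA.Mac: message authentication code over the message (HMAC-SHA-256).
    return hmac.new(key, message, hashlib.sha256).digest()

def ma_verify(key, message, tag):
    # Constant-time comparison of a received MAC against a recomputed one.
    return hmac.compare_digest(ma_mac(key, message), tag)

key = ma_keygen()
tag = ma_mac(key, b"ticket for time period 7")
assert ma_verify(key, b"ticket for time period 7", tag)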
IV. DISCUSSIONS
Users of anonymizing networks would be reluctant to use resources that directly reveal their identity (e.g., passports or a national PKI). Email addresses could provide more privacy, but give weak blacklistability guarantees because users can easily create new email addresses. Other possible resources include client puzzles and e-cash, where users are required to perform a certain amount of computation or pay money to acquire a credential. These approaches limit the number of credentials obtained by a single individual by raising the cost of acquiring credentials.
A. Side-Channel Attacks
While our current implementation does not fully protect against side-channel attacks, we mitigate the risks. We have implemented the various algorithms in such a way that their execution time leaks little information that cannot already be inferred from the algorithm's output. Also, since a confidential channel does not hide the size of the communication, we have constructed the protocols so that each kind of protocol message is of the same size regardless of the identity or current legitimacy of the user.
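A simple way to realize the fixed-size property is to pad every protocol message of a given kind to one constant length before encryption. The sketch below illustrates the idea; the message size constant is arbitrary and not taken from the system's specification.

MSG_SIZE = 1024  # arbitrary fixed size for one kind of protocol message

def pad(msg):
    # Length-prefix the payload so unpadding is unambiguous, then zero-fill.
    if len(msg) > MSG_SIZE - 4:
        raise ValueError("message too long for its fixed-size class")
    return len(msg).to_bytes(4, "big") + msg + b"\x00" * (MSG_SIZE - 4 - len(msg))

def unpad(padded):
    n = int.from_bytes(padded[:4], "big")
    return padded[4:4 + n]

assert unpad(pad(b"hello")) == b"hello"
assert len(pad(b"hello")) == len(pad(b"a much longer message"))  # size leaks nothing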
V. CONCLUSION
Servers can blacklist misbehaving users while maintaining their privacy, and we show how these properties can be attained in a way that is practical, efficient, and sensitive to the needs of both users and services.
REFERENCES
[1] M. Bellare, R. Canetti, and H. Krawczyk, "Keying Hash Functions for Message Authentication," Proc. Ann. Int'l Cryptology Conf. (CRYPTO), Springer, pp. 1-15, 1996.
[2] J.E. Holt and K.E. Seamons, "Nym: Practical Pseudonymity for Anonymous Networks," Internet Security Research Lab Technical Report 2006-4, Brigham Young Univ., June 2006.
[3] P.C. Johnson, A. Kapadia, P.P. Tsang, and S.W. Smith, "Nymble: Anonymous IP-Address Blocking," Proc. Conf. Privacy Enhancing Technologies, Springer, pp. 113-133, 2007.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.168-172.

Thermal Management of 3-D FPGA: Thermal Aware Flooring Planning Tool


S. Kousiya Shehannas1, D. Jhansi Alekhya2 and R. Vivekanandan3
1,2III Year, 3Asst. Professor
Dept. of EIE, Panimalar Engg College
Abstract — Three-dimensional (3-D) integration is an attractive technology for reducing wire lengths in a field-programmable gate array (FPGA). However, it suffers from two problems: first, the inter-layer vias are limited in number, and second, the increased power density leads to high junction temperatures. The first problem was overcome by designing switch boxes that maximize the use of the vias. This paper attempts to show that there is a significant junction-temperature reduction potential in managing lateral heat spreading through a thermal-aware floor planning tool. It makes use of the HotSpot temperature modeling tool and the Parquet floor planning tool. HotSpot is used to calculate the maximum temperature of the floor plan, and the maximum temperature thus calculated is included in the objective function of Parquet for performing simulated-annealing-based floor planning. The peak temperature in a two-layer 3-D FPGA reduces by about 30°C after our change.

I. INTRODUCTION
As process technology scales into the nanometer region [1], the exponential increase of power densities across process generations results in higher die temperatures. Such high temperatures, when left unmanaged, could potentially affect the chip's operation. They could also result in its accelerated aging and reduce its operating speed, lifetime, and reliability. The reliability of the chip reduces exponentially as the temperature increases [2]. The time to failure has been shown to be a function of

exp(Ea / kT)    (1)

where Ea is the activation energy of the failure mechanism being accelerated by the increased temperature, k is Boltzmann's constant, and T is the absolute temperature. This has invariably resulted in some form of cooling solution.
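For intuition, the expression above can be used to compare the relative time to failure at two junction temperatures. The short sketch below assumes a representative activation energy of 0.7 eV purely for illustration; that value is not taken from this paper.

import math

K_EV = 8.617e-5   # Boltzmann's constant in eV/K
EA_EV = 0.7       # assumed activation energy (illustrative value only)

def ttf_factor(t_kelvin):
    # Arrhenius-style lifetime factor exp(Ea / kT): larger means longer life.
    return math.exp(EA_EV / (K_EV * t_kelvin))

# Expected lifetime at 85 degC relative to 115 degC, same failure mechanism:
print(ttf_factor(273.15 + 85) / ttf_factor(273.15 + 115))  # roughly 6x longer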

Traditional solutions have been designed for the worst-case power dissipation and have focused mainly on the thermal package (heat sink, fan, etc.). As designing the package for the worst-case junction temperature started becoming too expensive, researchers started looking at design-level solutions to reduce the temperature. More recent solutions involve managing the application's behaviour adaptively in response to the processor temperature [3]. These run-time, feedback-driven mechanisms are called Dynamic Thermal Management (DTM) schemes. They slow down the execution of the chip in response to the temperature sensed, resulting in a reduction of the power dissipated and hence of the on-chip temperature.

The question and the associated trade-off between performance and temperature are examined at a fairly high level of abstraction. In spite of using models that are not necessarily very detailed, this paper hopes to at least point out the potential of microarchitectural floorplanning in reducing peak chip temperature and the possibility of its complementing DTM schemes. It should be noted that floorplanning does not reduce the average temperature of the entire chip very much; it just evens out the temperatures of the functional units through better spreading. Therefore, the hottest units become cooler while the temperature of a few of the colder blocks increases accordingly. This aspect of floorplanning is particularly attractive in comparison with static external cooling: while cooling reduces the ambient temperature and hence the peak temperature on the chip, it does not reduce the temperature gradient across the chip.

II. PROJECT OVERVIEW
The objective of this project is to develop a thermal-aware floor planning tool which performs floor planning with the maximum temperature of the floor plan as one of the optimization objectives [4], along with the total area and the total wire length. This tool modifies the floor plan of the 3-D FPGA [5] in such a way that the maximum temperature is reduced significantly. A lower temperature results in an FPGA design with better reliability and longevity.

Fig. 1. A typical 3D integration structure

III. RELATED WORK


Han et al. [6] propose a floor planning tool based on Parquet [7]. The tool uses the Parquet floor planner to carry out the basic simulated annealing process. A temperature factor is added to the objective function of the tool by an approximation method known as the heat diffusion method. The idea is to surround the high-power-density blocks with low-power-density blocks, so that heat diffuses from a high-power-density block to the low-power-density blocks surrounding it, resulting in a lower temperature for the block. Overall, the maximum temperature of the whole chip is kept in check.

In [6], the authors prove that the temperature of an isolated block depends linearly on its power density. They define the following measure as an approximation for the heat diffusion between two adjacent blocks:

H(d1, d2) = (d1 − d2) × sharedLength    (2)

where H is the heat diffusion, d1 and d2 are the power densities of the two blocks, and sharedLength is the length of their shared boundary.

For each block, the total heat diffusion is

H(d) = Σi H(d, di), over all its neighbors di.    (3)

Rather than calculating the chip temperature, the heat diffusion of the chip is calculated to approximate it. Also, not all blocks are considered: only the heat diffusion of the blocks which may become the hottest (blocks with high power-density values) is taken into account.

The heat diffusion H of all the selected possibly-hot blocks is then calculated and added together. The total thermal diffusion D is defined as the sum of the heat diffusion of all possibly-hot blocks:

D = Σ H(d), over all possibly-hot blocks.    (4)

The final objective of the Parquet floor planner is modified as follows:

Obj = CA·A + CW·W − CD·D    (5)

where CA, CW, and CD are the weights of the area, the wire length, and the heat diffusion, respectively. CD carries a negative sign because the thermal diffusion D needs to be maximized. Thus, in this method, the temperature part of the objective is a rough approximation; the thermal-aware floor planning tool developed in this project instead makes use of the HotSpot tool to estimate the temperature part of the objective. Algorithms such as simulated annealing and genetic algorithms [8] can be used to tackle the floor planning problem.
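The sketch below renders Eqs. (2)-(5) directly in code. The block representation, the neighbor layout, and the hot-block threshold are assumptions made for the illustration; only the formulas themselves come from the description above.

def heat_diffusion(d1, d2, shared_length):               # Eq. (2)
    return (d1 - d2) * shared_length

def block_diffusion(block, neighbors):                    # Eq. (3)
    # neighbors[block] is assumed to be a list of (neighbor, shared_length).
    return sum(heat_diffusion(block.density, nb.density, shared)
               for nb, shared in neighbors[block])

def total_diffusion(blocks, neighbors, hot_threshold):    # Eq. (4)
    hot = [b for b in blocks if b.density > hot_threshold]  # assumed criterion
    return sum(block_diffusion(b, neighbors) for b in hot)

def objective(area, wirelength, D, ca=1.0, cw=1.0, cd=1.0):  # Eq. (5)
    # Minus sign on D: the thermal diffusion is to be maximized.
    return ca * area + cw * wirelength - cd * D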
IV. POTENTIAL IN LATERAL SPREADING

Before the description of the thermal-aware floor planner, it is important to perform a potential study that gives an idea of the gains one can expect from floor planning [9]. Since the cooling due to floor planning arises from the lateral spreading of heat, we study the maximum level of lateral heat spreading possible. This is done using the HotSpot thermal model, which models heat transfer through an equivalent circuit made of thermal resistances and capacitances corresponding to the package characteristics and to the functional blocks of the floor plan. In the terminology of the thermal model, maximum heat spreading occurs when all the lateral thermal resistances of the floor plan are shorted. This is equivalent to averaging out the power densities of the individual functional blocks: instead of the default floor plan and non-uniform power densities, we use a floor plan with a single functional block that equals the size of the entire chip and has a uniform power density equal to the average power density of the default case. At the other extreme, we set the thermal resistances corresponding to lateral heat spreading to infinity. This gives us an idea of the extent of temperature rise possible just due to the insulation of lateral heat flow.

Table 1 presents the results of the study for a subset of the SPEC2000 benchmarks [10]. The Min and Max columns correspond to the cases when the lateral thermal resistances are zero and infinity, respectively, while the Norm column shows the peak steady-state temperature of the chip when the thermal resistances have their normal, correct values.

Table 1: Peak steady-state temperature for different levels of lateral heat spreading (°C)

Bench    Min   Norm   Max
bzip2    56    123    222
gcc      55    120    220
crafty   54    120    217
gzip     54    120    215
perlbk   54    114    201
mesa     54    114    203
eon      54    113    201
art      55    109    188
facerec  52    104    183
twolf    51    98     168
mgrid    47    75     126
swim     44    59     84
V. TOOL OVERVIEW
This project develops a floor planning tool similar to the tool developed by Han et al. [6]. In this project, however, instead of approximating the chip temperature using the heat diffusion technique, the maximum temperature amongst all the blocks in the FPGA is calculated using the HotSpot 5.0 tool [11]. The Parquet floor planner is used, which employs simulated annealing to carry out the floor planning process. A function of the maximum temperature (calculated from HotSpot), the chip area, and the total wire length is used as the objective driving the simulated annealing.

Basically, the tool is an interfacing of the Parquet floor planning tool and the HotSpot thermal modeling tool. The method developed by Han et al. uses the heat diffusion technique, which is a rough estimate of the lateral heat diffusion phenomenon seen in silicon devices; the tool developed in this project makes use of the more accurate HotSpot tool to estimate this lateral heat diffusion. HotSpot models heat transfer through an equivalent circuit made of thermal resistances and capacitances corresponding to the package characteristics and to the functional blocks of the floor plan.

Fig. 2. The vertical B*-tree of the horizontal B*-tree.

In this project, HotSpot has been modified to return the maximum temperature amongst all the blocks. The Parquet software performs millions of moves in each simulated annealing run, and HotSpot needs to be called for every one of them: it calculates the steady-state temperature of the blocks for each move made by Parquet. In order to calculate the steady-state temperature, HotSpot needs to construct a new thermal-resistance matrix every time, and each call to HotSpot involves opening and closing a few files. As a result, floor planning using this method takes much longer to execute than the other available methods.

Fig. 3. The reverse horizontal B*-tree of the B*-tree. All blocks are packed to the top instead of the bottom.

The following sections describe the thermal-aware floor planning methodology used to develop the tool.

A. Parquet Floor Planner
Parquet is software for floor planning based on simulated annealing [7]. It has been developed by the University of Michigan and has been used in a number of projects in computer-aided design and computer architecture. While originally designed for fixed-outline floor planning [12] using the sequence-pair data structure, it can also be applied to classical min-area block packing. The internal floor plan representation alternates between sequence pairs and B*-tree data structures [13]. There are many different types of floor planning; Parquet supports two important ones: outline-free floor planning and fixed-outline floor planning.
B. Simulated Annealing
Simulated annealing is a technique for finding a good solution to an optimization problem by trying random variations of the current solution [14]. A worse variation is accepted as the new solution with a probability that decreases as the computation proceeds: the slower the cooling schedule, or rate of decrease, the more likely the algorithm is to find an optimal or near-optimal solution. This technique stems from thermal annealing, which aims to obtain perfect crystallizations by a temperature reduction slow enough to give atoms the time to attain the lowest energy state. The search inside simulated annealing tries to avoid local minima by jumping out of them early in the computation; towards the end of the computation, when the temperature (the probability of accepting a worse solution) is nearly zero, the search simply seeks the bottom of the local minimum.

Fig. 4. The reverse vertical B*-tree of the vertical B*-tree in (a).

The chance of getting a good solution can be traded off against computation time by slowing down the cooling schedule: the slower the cooling, the higher the chance of finding the optimum solution, and the longer the run time. Effective use of this technique thus depends on finding a cooling schedule that gets good enough solutions without taking too much time.
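A minimal cooling-schedule skeleton of the kind Parquet's annealer follows is sketched below. The move generator and the cost function are stubbed out, and the schedule constants are arbitrary; this is an illustration of the technique, not Parquet's actual code.

import math, random

def anneal(initial, perturb, cost, t0=1000.0, cooling=0.995, t_min=1e-3):
    current, best = initial, initial
    t = t0
    while t > t_min:
        candidate = perturb(current)           # random move (swap, rotate, ...)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worse moves with prob exp(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                           # slower cooling -> better solutions
    return best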
C. HotSpot Thermal Modeling Tool
The HotSpot tool has been developed by the University of Virginia. HotSpot is an accurate and fast thermal modeling tool suitable for use in architectural studies [15]. For every input floor plan, HotSpot generates an equivalent circuit of thermal resistances and capacitances, corresponding to the microarchitecture blocks [16] and the essential aspects of the thermal package. HotSpot has a simple set of interfaces. Its main advantage is its compatibility with the kinds of power/performance models used in the computer-architecture community: it does not require a detailed design or synthesis description, and it makes it possible to study thermal evolution over long periods of real, full-length applications.

In this project, the tool uses HotSpot to calculate the maximum temperature amongst all the blocks in the floor plan; the maximum temperature thus calculated is used in the objective function of the simulated annealing routine inside the Parquet floor planner. The tool integrates the source code of HotSpot 5.0 into the source code of Parquet 4.5, and the source code of both HotSpot and Parquet has been altered to facilitate the interfacing. In particular, the data structures of Parquet have been modified to accommodate an extra temperature factor, and the definitions of several classes and member functions have been modified as well. There are several models of HotSpot that could be used for temperature calculation [17]; the tool makes use of the block model [18]. The block model (also called the base model) is the simplest and fastest method of temperature calculation. The thermal model is bundled as a trace-level simulator that takes a power trace file and a floor plan file as inputs and outputs the corresponding transient temperatures to a temperature trace file. Instead of generating a temperature trace output, however, HotSpot has been modified into a function which is invoked inside the simulated annealing routine of the Parquet floor planner. The function takes a power trace file and a floor plan file (generated by Parquet at every iteration) as arguments and returns the maximum temperature of the floor plan. The maximum temperature thus calculated is used in the objective function of the annealing process.

D. Interfacing of HotSpot and Parquet
HotSpot is called during every iteration of the simulated annealing routine inside Parquet. Every time a new solution is generated, the area, wire length, and temperature (using HotSpot) are calculated, and optimization is done using the objective function shown in the sections above. Parquet takes a floor plan in the form of blocks/net/pl files; these are the industry-standard formats used in benchmarks like GSRC and MCNC for providing floor plan information. The blocks file consists of the list of all the blocks in the floor plan: it gives the name of each block, its type (hard rectilinear/soft rectilinear), its size, and the names of all the terminals on the block. The net and pl files provide the information related to the interconnects in the floor plan: how and where they are connected.

Parquet reads these files and extracts the block dimensions and other placement information from them. It creates the data structures (DB, Nodes, and Node) that hold all the information relevant to the floor plan. The data structures of Parquet have been modified in this project to contain information related to temperature: extra data members have been added, constructors have been modified, and member functions have been added to access the extra data members. After the construction of the data structures, Parquet selects either B*-trees or sequence pairs [19] to carry out the floor planning process, and the corresponding annealer is called.

During each iteration of the simulated annealing, the area, wire length, and maximum temperature of the chip are calculated and the optimization step is performed. For the temperature calculation, HotSpot is called in every iteration. In order to calculate the temperature, HotSpot takes a floor plan in the form of an flp file; this flp format is completely different from the input format of Parquet, so the tool writes the floor plan information taken from the Parquet data structures to the flp file every time a new solution is generated. The modified HotSpot returns the maximum temperature, which is stored in the data structures of Parquet and used for the optimization by simulated annealing.
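The per-iteration hand-off between the two tools can be summarized as below. Both write_flp and max_temperature are hypothetical stand-ins for the modified Parquet and HotSpot code described above; the weights are arbitrary.

def evaluate(solution, power_trace="bench.ptrace", ca=1.0, cw=1.0, ct=1.0):
    # Dump the candidate floor plan in HotSpot's flp format (hypothetical helper).
    write_flp(solution, "candidate.flp")
    # Ask the modified HotSpot block model for the peak steady-state
    # temperature of this floor plan (hypothetical wrapper around the call).
    t_max = max_temperature("candidate.flp", power_trace)
    # Fold temperature into the annealing objective with area and wire length.
    return ca * solution.area() + cw * solution.wirelength() + ct * t_max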
VI. CONCLUSION
This project develops a tool that helps in the temperature reduction of an FPGA through temperature-aware floor planning. As shown in Fig. 5, the maximum temperature of the chip reduces considerably when thermal-aware floor planning is used, in comparison to normal floor planning, i.e., floor planning without the temperature factor. Fig. 5 shows the normalized reduction of the maximum chip temperature for various benchmarks.

Fig. 5. Comparison of the maximum chip temperature with normal floor planning vs. thermal-aware floor planning

A reduction of about 5-20% in the maximum chip temperature was observed when the thermal-aware floor planner was run. For the apte benchmark, the temperature of the floor plan after thermal-aware floor planning was 30 °C, or approximately 18%, lower than the temperature of the floor plan after normal floor planning. The temperature reduction depends entirely on the variation in the power densities of the blocks: if all the blocks in the floor plan have power densities in the same range, there is hardly any reduction in temperature, whereas if there is a large variation in power densities, hot blocks can be placed beside cooler blocks and a better temperature reduction is obtained.

In future designs based on deep sub-micron technology [20], chip temperatures are expected to rise further, making the advantages of thermal-aware floor planning even more prominent.

REFERENCES

[1] V. Agarwal, S. W. Keckler, and D. Burger, "The effect of technology scaling on microarchitectural structures," Tech. Rep. TR-00-02, University of Texas at Austin Computer Sciences, May 2001.
[2] K. Sankaranarayanan, S. Velusamy, M. Stan, and K. Skadron, "A case for thermal-aware floorplanning at the microarchitectural level," Journal of Instruction-Level Parallelism, vol. 8, 2005.
[3] S. Gunther, F. Binns, D. M. Carmean, and J. C. Hall, "Managing the impact of increasing microprocessor power consumption," Intel Technology Journal, vol. 5, 2001.
[4] R. H. J. M. Otten, "Efficient floorplan optimization," in Proceedings of the International Conference on Computer Design, pp. 499-502, 1983.
[5] M. Ekpanyapong, M. B. Healy, C. S. Ballapuram, S. K. Lim, H. S. Lee, and G. H. Loh, "Thermal-aware 3D microarchitectural floorplanning," Tech. Rep. GIT-CERCS-04-37, Georgia Institute of Technology Center for Experimental Research in Computer Systems, 2004.
[6] Y. Han, I. Koren, and C. A. Moritz, "Temperature aware floorplanning," in Workshop on Temperature Aware Microarchitectures, 2005.
[7] Parquet. [Online]. Available: http://vlsicad.eecs.umich.edu/BK/parquet/, 2006.
[8] W. Hung, Y. Xie, N. Vijaykrishnan, C. Addo-Quaye, T. Theocharides, and M. J. Irwin, "Thermal-aware floorplanning using genetic algorithms," in Sixth International Symposium on Quality of Electronic Design (ISQED'05), March 2005.
[9]
[10] Standard Performance Evaluation Corporation, SPEC CPU2000 Benchmarks. http://www.specbench.org/osg/cpu2000.
[11] HotSpot. [Online]. Available: http://lava.cs.virginia.edu/hotspot, 2010.
[12] S. Adya and I. Markov, "Fixed-outline floorplanning: enabling hierarchical design," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 11, pp. 1120-1135, 2003.
[13] Y.-C. Chang, Y.-W. Chang, G.-M. Wu, and S.-W. Wu, "B*-trees: a new representation for non-slicing floorplans," in DAC '00: Proceedings of the 37th Annual Design Automation Conference, pp. 458-463, 2000.
[14] S. Kirkpatrick, "Optimization by simulated annealing: Quantitative studies," Journal of Statistical Physics, vol. 34, pp. 975-986, 1983.
[15] J. Cong, A. Jagannathan, G. Reinman, and M. Romesis, "Microarchitecture evaluation with physical planning," in Proceedings of the 40th Design Automation Conference, June 2003.
[16] K. Skadron, M. Stan, W. Huang, S. Velusamy, K. Sankaranarayanan, and D. Tarjan, "Temperature-aware microarchitecture," in ISCA '03: 30th Annual International Symposium on Computer Architecture, 2003.
[17] HotSpot HOWTO. [Online]. Available: http://lava.cs.virginia.edu/HotSpot/HotSpot-HOWTO.htm, 2010.
[18] W. Huang, M. R. Stan, K. Skadron, S. Ghosh, K. Sankaranarayanan, and S. Velusamy, "Compact thermal modeling for temperature-aware design," in Proceedings of the ACM/IEEE 41st Design Automation Conference, pp. 878-883, June 2004.
[19] H. Murata, K. Fujiyoshi, S. Nakatake, and Y. Kajitani, "VLSI module placement based on rectangle-packing by the sequence-pair," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 15, pp. 1518-1524, 1996.
[20] M. Ekpanyapong, J. R. Minz, T. Watewai, H. S. Lee, and S. K. Lim, "Profile-guided microarchitectural floorplanning for deep submicron processor design," in Proceedings of the 41st Design Automation Conference, pp. 634-639, 2004.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.173-176.

Reduction of Inductive Effects using Fast Simulation Approach in VLSI Interconnects

J. Mohanraj
PG Electronics and Communication Engineering, Vel Tech Dr.RR & Dr.SR Technical University
Abstract: Modeling on-chip inductive effects for the interconnects of multi-GHz microprocessors remains challenging. SPICE simulation of these effects is very slow because of the large number of mutual inductances, while ignoring the non-linear behavior of drivers in a fast linear circuit simulator results in large errors for the inductive effect. In this paper, a fast and accurate time-domain transient analysis approach is presented, which captures the non-linearity of circuit drivers and the effect of non-ideal ground and de-coupling capacitors in a bus structure. The proposed method models the non-linearity of drivers in conjunction with specific bus geometries. Linearized waveforms at each driver output are incorporated into a reduced-order interconnect simulator for fast transient simulation. In addition, non-ideal ground and de-coupling capacitor models enable accurate signal and ground bounce simulations. Results show that this simulation approach is up to 68x faster than SPICE while maintaining 95% accuracy.

I. INTRODUCTION
There is no doubt that our daily lives are significantly affected by electronic engineering technology. This is true on the domestic scene, in our professional disciplines, in the workplace, and in leisure activities. Revolutionary changes have taken place in this field in a relatively short time, and it is certain that even more dramatic advances will be made in the next decade. Electronics as we know it today is characterized by reliability, low power dissipation, extremely low weight and volume, and low cost, with an ability to cope easily with a high degree of sophistication and complexity. The integrated circuit has made possible the design of powerful and flexible processors which provide highly intelligent and adaptable devices for the user.

The vast majority of present-day electronics is the result of the invention of the transistor in 1947. The very first integrated circuit (IC) emerged at the beginning of 1960, and since that time there have already been four generations of ICs: small scale integration (SSI), medium scale integration (MSI), large scale integration (LSI), and very large scale integration (VLSI). Now we are beginning to see the emergence of the fifth generation, ultra large scale integration (ULSI).

II. WHAT IS INTERCONNECT


An interconnect in VLSI design consists of actual path
between global routs and actual circuit. While
interconnect the physical metal lines introduces R,L,C
parameters that can have a dominant influence on the
circuit operation, also this parasitic effects display a
scaling behavior that is different from the active devices.
Their effect increases as device dimensions are reduced
and dominate the performance in submicron technologies.
This situation is aggravated by the fact that improvements
in technology make the production of ever-larger die sizes
economically feasible, which results in an increase in the
average length of an inter connect wire and in the
associated parasitic effects. Interconnect wire introduces
three types of parasitic effects: capacitive, resistive and
inductive. All three have a dual effect on circuit behavior.
1. An introduction of noise, which affects the reliability
of the circuit.
2. An increase in propagation delay.
III. INDUCTANCE EFFECTS ON BUS WIRES
In multi-gigahertz microprocessor designs, ignoring the
parasitic inductance of on-chip interconnects may cause
inaccurate delay and noise estimations. Inductive effects
are quite evident in on-chip buses where signals may
switch simultaneously. Self- and mutual inductance
between parallel bus wires can cause additional timing
delay variations, overshoots, and significant inductive
noise. This is due to the additive inductive coupling from
all parallel wires.

Fig. 1. A stage-delay push-out due to inductance. There is no significant inductive effect when only the victim switches.

Figure 1 shows a victim's far-end waveform in a bus structure on three layers. When all the signals switch in the same direction, there are delay push-outs and overshoots compared with the case when only the victim signal switches. Due to the long-range nature of mutual inductive coupling, wider buses can cause more inductive problems for delays and noise; therefore, the proposed return-limited assumption may not be valid for on-chip bus simulations. In addition, if all the wires in a bus are cut into n segments for the RLC distributed effect, the number of mutual inductances is n(n-1)/2. Because of the huge number of mutual inductances in a bus, SPICE runs very slowly and is not feasible for large industry examples. Meanwhile, simple truncation of the mutual inductance couplings in an inductance matrix can lead to large simulation errors and can even result in an unstable circuit model. Because of the inefficiency of SPICE simulation, a fast yet accurate time-domain transient simulation including inductance is desirable. This paper focuses on fast and accurate inductance modeling and transient simulation for on-chip buses.
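To get a feel for the scale, the count of mutual terms can be computed directly; below is a tiny Python illustration of the n(n-1)/2 growth quoted above (the bus width and segment count are made-up example values, not data from this paper):

    def mutual_count(n_segments):
        # Each unordered pair of inductive segments contributes one
        # mutual-inductance term to the inductance matrix.
        return n_segments * (n_segments - 1) // 2

    # Hypothetical example: a 64-bit bus, each wire cut into 20 segments.
    print(mutual_count(64 * 20))  # -> 818560 mutual terms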
IV. SYSTEM OVERVIEW
There has been some previously reported work on improving the efficiency of simulating interconnects including inductance. The effective capacitance method for RC interconnect simulation has been extended to RLC interconnect simulation. The efficient Ceff model works by pre-characterizing the parameters of a time-varying Thevenin voltage source model (in series with a fixed resistor) over a wide range of effective capacitance load values.
To avoid the expensive procedure in the above model, which is due to a Padé approximation, Kashyap proposed a synthesis procedure for RLC circuits that guarantees a realizable reduced-order circuit using the first four moments of the input admittance. While these methods work well with self-inductance, mutual inductance is not included. For on-chip buses, it is the mutual inductance that makes signal cross-talk worse and jeopardizes signal integrity, so pre-characterization needs to include the mutual inductance for each specific bus structure. In addition, the Ceff method and its related techniques are based on timing analysis theory and may not work properly for noise estimation. Meanwhile, the non-linearity of devices plays an important role in inductive noise, and there is no reported work on fast simulation of the impact of device non-linearity on mutual inductive effects.
In this paper, an efficient transient analysis tool to model the RLC effect on timing and noise is presented. It captures the self-/mutual inductance effects, the non-linearity of drivers, the effect of non-ideal ground and de-coupling capacitors (decaps), and the 3-D geometries of interconnects. To accurately model the inductive effect, a fast and practical pre-characterization technique for on-chip buses is proposed. The non-linearity of drivers is one of the keys to accurately modeling the inductive effect in a bus structure. In order to have a realistic bus simulation, packaging parasitics (i.e., non-ideal grounds) are included along with on-chip de-coupling capacitors (decaps).

Fig. 2. Flow chart of the RLC interconnect simulation tool.

Figure 2 illustrates the flow chart of the implemented tool. Based on the user-specified bus geometries (i.e., signal wires and shields on the top, victim and bottom layers, plus power/grounds on a power layer), a complete multi-bus structure is constructed automatically. FastHenry is used to calculate the self- and mutual inductances. Together with RC wire models, a circuit netlist for a reduced-order interconnect simulation can be generated, and a fast reduced-order circuit simulator is used to generate the waveforms for timing and noise analysis. Among the other parameters that users can input are drivers/receivers, taps along each power/ground wire, decaps, and signal input waveforms. In Section V, a non-linear driver modeling approach for accurate inductive effects is proposed, which is one of the key contributions of this work; incorporating the device's non-linear behavior in a reduced-order linear circuit simulator significantly increases inductance simulation speed while maintaining good accuracy. The non-ideal ground modeling with decaps is also discussed. In Section VI, simulation results from microprocessor designs are presented to validate the RLC simulation capabilities. Section VII presents the conclusions.
V. MODELING FOR ACCURATE INDUCTIVE EFFECT
A. Modeling for the Non-Linearity of Drivers
Traditionally, a linear resistor model is used for a device in a linear circuit simulator, which is sufficient for an RC simulation. For inductance simulation, however, a linear driver model is unable to capture the inductance (especially the mutual inductance) interaction with the drivers. Simulations show that a linear driver model leads to underestimating the signal inductive noise by 70% (Figure 3).
In terms of delay simulation, although the linear model can predict delay with about 15% error, the waveform is not accurate: the overshoot/undershoot and fast ramp-up time are not properly captured. This is due to the fact that


the linear driver model does not capture the fast di/dt
behavior and cannot produce the correct waveforms at the
driver outputs.

Fig. 3. Cross-talk at a victim wire within a 16-bit bus on M6: linear driver models lead to underestimating the inductive noise by 70%.

Figure 4 shows a driver output waveform of a 16-bit bus on M6. The difference between the two waveforms is due to the non-linearity of the drivers/devices; the interaction between the self-/mutual inductance and the driver output voltage is not properly captured by a linear driver model. To accurately model the waveforms at the driver output and capture the non-linearity of a device, a fast pre-characterization is done based on the actual interconnect geometries and the driver models. A non-linear circuit simulator, such as SPICE, is used to extract the waveforms based on simplified circuit models for the interconnects to speed up the pre-characterization, but all self- and mutual inductances between the wires are preserved. For a transmission line system, the impedance which a driver sees is

Z = sqrt( (R + jwL) / (G + jwC) )

where R, G, L, C are the resistance, conductance, inductance and capacitance per unit length of the transmission line, and n is the number of segments used to model the distributed effects. The simplified circuit model uses the total resistance, conductance, inductance and capacitance values, respectively, instead of the per-unit-length values. It presents the same impedance to the driver as a complete distributed wire model, but with fewer segments, allowing a fast pre-characterization using SPICE. The simplified model is accurate for calculating the near-end waveforms, but it is not accurate enough to calculate the far-end signal waveforms.

Fig. 4. A non-linear driver model results in a much faster ramp-up time and a waveform with a small ledge.

Results show that the pre-characterized waveforms at the interconnect near end match SPICE simulation with n-segment wire models very well. Each active driver can have a different output waveform depending on the wires it connects to and their coupling; in other words, drivers on different layers and at different bit positions within a bus will have different waveforms. A piece-wise linear (PWL) approximation is then used to represent the characterized waveform of each driver. A fast linear circuit simulator uses these waveforms as interconnect inputs; with a complete wire circuit model, the interconnect far-end waveforms (i.e., noise and delay) can be accurately obtained. In Section VI, the pre-characterized piece-wise linear waveforms and the real driver waveforms are plotted for accuracy comparison.
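Evaluating a pre-characterized PWL waveform reduces to linear interpolation between breakpoints. The sketch below illustrates this in Python with made-up breakpoint values (the real breakpoints would come from the SPICE pre-characterization):

    import numpy as np

    # Hypothetical PWL breakpoints for one driver: (time [ns], voltage [V])
    t_pts = np.array([0.0, 0.05, 0.10, 0.20, 0.50])
    v_pts = np.array([0.0, 0.30, 1.20, 1.60, 1.80])

    def pwl_voltage(t):
        # Linearly interpolate the driver output voltage at time t.
        return np.interp(t, t_pts, v_pts)

    print(pwl_voltage(0.15))  # sample between breakpoints -> 1.40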

B. Modeling for the Non-Ideal Ground and Decaps

Unlike an RC extraction/simulation, where power/ground wires may be modeled as ideal voltage connections, the ground/power wires should be modeled by distributed RLC elements in an RLC simulation. The connection between any ground/power wire and the on-board VSS/VDD (e.g., C4 bumps) needs to be modeled to capture any inductive ground bounce. The non-ideal ground is modeled using RLC tap models. Decaps provide charging and discharging currents at very high frequencies, when the on-chip ground/power cannot draw enough current through the taps from the on-board power sources. In this work, decaps are modeled based on the area that the bus structure covers and the estimated decap value per unit chip area; the decap values are distributed along the VSS/VDD wires. With the help of decaps, the local VSS/VDD becomes more stable (i.e., less ground bounce).
VI. SIMULATION EXAMPLES
Various examples from microprocessor designs, using 0.18 µm technology, are simulated with typical values to demonstrate the validity of the proposed scheme. Results from the new tool are compared with the ones from full SPICE simulation, which includes all the non-linear drivers.

In Figure 5, a 16-bit same-layer bus with only two ground shields is constructed in order to exaggerate the inductive effects so that the accuracy can be better compared. The waveforms show the noise at the far end of the victim wire. The first peak noise is negative, which is mainly due to the inductive effect, while the first positive noise is largely due to the capacitive coupling.

Fig. 5. All the signals switch from low to high, and only the ninth signal keeps low. The pre-characterized driver PWL waveform is plotted with the one from SPICE simulation for accuracy comparison.

A more complicated example consists of three-layer buses with 12 bits on M6 and 48 bits on M4 and M2. All signals are fully shielded, but the shield widths are comparable to the signal wires and do not serve as completely effective shields against inductive effects. In the simulation, all the signals switch in the same direction at the same time. The victim wire is in the middle of the bus on M4. The signal delay (Figure 6(a)) and noise (Figure 6(b)) are plotted for the victim wire. In Figure 6(b), the noise waveform shows that the major noise is from the inductive coupling, since there is no capacitive coupling between the signals in a fully shielded co-planar structure. Both the signal delay and noise waveforms are almost indistinguishable from the ones obtained with full SPICE simulation.

Fig. 6. The delay (a) and noise (b) waveforms of the victim wire on M4, compared with the ones from full SPICE simulation.

Fig. 7. A three-layer bus example: the noise becomes larger due to the non-ideal ground. The inductance in the tap model is 500 pH and the resistance is 5 Ω.

Figure 7 shows the two noise waveforms of a signal wire within a bus where an ideal and a non-ideal ground tap model are used. Due to the parasitic inductance and resistance associated with the ground/power taps, a larger inductive noise is observed with the non-ideal ground model. The signal on a local ground wire on M6 is also plotted to illustrate the ground bounce. For the decap effect, one simulation shows that the delay of a bit in the center of a three-layer bus without decaps is 15% larger than with decaps.

VII. CONCLUSIONS

Due to parallel routing and possible simultaneous switching of signals in a bus, mutual and self-inductance can cause additional delays and inductive noise for bus signals. A fast and accurate time-domain transient waveform simulation approach for RLC interconnects is presented. The capability to include the non-linearity of drivers in a reduced-order linear circuit simulator enables a much faster yet accurate simulation of on-chip interconnects with inductance. With tap models and decaps, bus simulation becomes more realistic, and ground bounce can also be observed. Results from the proposed method are in very good agreement with the ones from SPICE simulation, while the simulation is 68x faster for large examples.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.177-178.

Design and VLSI Implementation of High-Performance Face-Detection Engine for Mobile Applications
R. Ilakiya, PG (Electronics & Communication Engineering)
Vel Tech Technical University

Abstract: In this paper, we propose a novel hardware architecture of a face-detection engine for mobile applications. We used MCT (Modified Census Transform) and the AdaBoost learning technique as the basic algorithms of the face-detection engine. We have designed, implemented and verified the hardware architecture of the face-detection engine for high-performance face detection and real-time processing. The face-detection chip was developed by verifying and implementing the design through FPGA and ASIC. The developed ASIC chip has advantages in real-time processing, low power consumption, high performance and low cost, so we expect this chip can be easily used in mobile applications.

I. INTRODUCTION

Face-detection systems carry out a major role in biometric authentication, which uses features of the face, iris, fingerprint, retina, etc. These systems are usually used in places requiring high security, such as government agencies, banks, and research institutes; face detection is also applied, in two or three dimensions, in areas such as artificial intelligence and robots, access control systems, cutting-edge digital cameras and advanced vehicle systems. Recently, face-detection technology has been adopted in mobile phone applications because of its easy installation, low cost, and non-contacting method. Most of the existing face-detection engines in digital cameras or mobile phones have been run in software; however, the tendency is for the technology to be implemented in hardware to improve the processing speed. These days, the trend is to combine the hardware technique of face detection with software techniques for emotion, feeling, physiognomy and fortune recognition.

Face detection performance is known to be highly influenced by variations in illumination. Especially in the mobile environment, the illumination condition depends on the surroundings (indoor and outdoor), time, and light reflection, etc. The proposed face-detection method is designed to detect faces under variable illumination conditions through the MCT technique, which can reduce the effects of illumination by extracting the structural information of objects. The proposed face-detection engine also achieves a high face detection rate by extracting highly reliable and optimized learning data through the AdaBoost learning algorithm.

II. BASIC ALGORITHMS

A. MCT (Modified Census Transform)

MCT presents the structural information of a window as a binary pattern {0, 1}, moving a 3x3 window over the image, the window being small enough to assume that the lightness value within it is almost constant, though the value is actually variable. This pattern contains information on edges, contours, intersections, etc. MCT can be defined with the equation below:

Γ(X) = ⊗_{Y ∈ N'} ζ( Ī(X), I(Y) )    (1)

Here X represents a pixel in the image; the 3x3 window of which X is the center is W(X); N' is the set of pixels in W(X), and Y represents each of the nine pixels in the window. Ī(X) is the average value of the pixels in the window, and I(Y) is the brightness value of each pixel in the window. As a comparison function, ζ(Ī(X), I(Y)) becomes 1 in the case Ī(X) < I(Y); in other cases it is 0. As a set operator, ⊗ concatenates the binary outputs of the comparison function, so the nine binary values are connected into one pattern. As a result, a total of 511 structures can be produced, since theoretically not all nine pixel values can be 1. The concatenated binary patterns are interpreted as binary numbers, which are the values of the pixels in the MCT-transformed image.
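As an illustration, the transform of equation (1) can be written directly in software. The following is an unoptimized Python sketch, assuming a grayscale image stored as a 2-D NumPy array; the hardware module computes the same 9-bit pattern per pixel:

    import numpy as np

    def mct(image):
        # For each interior pixel, compare the 9 pixels of its 3x3
        # window against the window mean and concatenate the bits.
        h, w = image.shape
        out = np.zeros((h, w), dtype=np.uint16)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                win = image[y-1:y+2, x-1:x+2].astype(float)
                bits = (win > win.mean()).flatten()  # zeta(I_bar(X), I(Y))
                idx = 0
                for b in bits:        # the set operator: concatenate 9 bits
                    idx = (idx << 1) | int(b)
                out[y, x] = idx       # value in 0..510 (all-ones impossible)
        return out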

B. AdaBoost Learning Algorithm

The AdaBoost learning algorithm creates highly reliable learning data as an early stage for face detection using face data. Viola and Jones [1] proposed a fast, high-performance face-detection algorithm. It is composed of a cascade structure with 38 stages, using extracted features to effectively distinguish face and non-face areas through the AdaBoost learning algorithm proposed by Freund and Schapire [2]. Furthermore, Fröba and Ernst [3] introduced MCT-transformed images and a face detector consisting of a cascade structure with 4 stages using the AdaBoost learning algorithm.
This paper uses a face detector with a single-layer structure, using only the fourth stage of the cascade structure proposed by Fröba and Ernst.
III. PROPOSED HARDWARE STRUCTURE

The proposed hardware structure, shown in Figure 1, is composed of: a color conversion module to convert the color image to a gray image; a noise reduction module to reduce image noise; an image scaler module to detect various sizes of faces; an MCT transform module to transform the image for robustness to various illuminations; a CD (candidate detector) / CM (confidence mapper) to detect candidates for the final face detection; a position resizer module to map face-candidate areas detected on the scaled-down images back to their corresponding positions at the original image size; a data grouper module to group the duplicate areas determined to be the same face prior to determining the final face detection areas; and an overlay processor to display the output by marking a square around the final face-detection area on the original color image from the camera, or to transfer the position and size information of the face-detection area to an embedded system through the host interface.

Fig. 1. Block diagram of the proposed face-detection engine

IV. EXPERIMENTAL RESULT

The developed face detection system has verified superb performance, with a 99.76% detection rate in various illumination environments using the Yale face database [4] and the BioID face database [5], as shown in Table 1, Table 2, and Figure 2. We also verified superb real-time hardware performance through FPGA and ASIC, as shown in Figure 3. First, the system was implemented using a Virtex5 LX330 board [6] with a QVGA (320x240) camera and an LCD display. The developed face-detection FPGA system can process at a maximum speed of 149 frames per second in real time and detects a maximum of 32 faces simultaneously. Additionally, we developed an ASIC chip [7] of 1.2 cm x 1.2 cm in a BGA package through a 0.18 um, 1-poly/6-metal CMOS logic process. Consequently, we verified the developed face-detection engine in real time at 30 frames per second with a 13.5 MHz clock frequency and 226 mW power consumption.

Table 1. Face detection result (Yale test set and BioID test set)

Type               | Detection rate     | False-positive rate
Yale Test set [6]  | 100% (165/165)     | 0% (0/165)
BioID Test set [7] | 99.74% (1517/1521) | 0.20% (3/1521)
Average            | 99.76% (1682/1686) | 0.18% (3/1686)

Fig. 2. Face detection results: (a) results in various illumination conditions; (b) results from the Yale and BioID face DBs.

Fig. 3. FPGA verification environment and ASIC chip system: (a) FPGA development environment; (b) developed chip and system.

V. CONCLUSION

This paper has verified a process that overcomes the low detection rates caused by variations in illumination thanks to the MCT technique. The proposed face-detection hardware structure, which can detect faces with high reliability in real time, was developed with optimized learning data through the AdaBoost algorithm. Consequently, the developed face-detection engine is robust to various illumination conditions and is able to detect 32 faces of various sizes simultaneously.
The developed FPGA module can detect faces with high reliability in real time. This technology can be applied to face-detection logic for cutting-edge digital cameras or recently developed smart phones. Finally, the face-detection chip was developed after verification and implementation through FPGA, and it has advantages in real-time operation, low power consumption and low cost, so we expect this chip can be easily used in mobile applications.

REFERENCES

[1] Paul Viola and Michael J. Jones, "Robust real-time face detection," International Journal of Computer Vision, pp. 137-154, 2004.
[2] Yoav Freund and Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, pp. 119-139, 1997.
[3] Bernhard Fröba and Andreas Ernst, "Face detection with the Modified Census Transform," IEEE International Conference on Automatic Face and Gesture Recognition, pp. 91-96, Seoul, Korea, May 2004.
[4] Georghiades, A.: Yale Face Database, Center for Computational Vision and Control at Yale University, http://cvc.yale.edu/projects/yalefaces/yalefa

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.179-184.

An Efficient Implementation of Floating Point Multiplier


B. Vijayarani
PG (Electronics and Communication Engineering), Vel Tech Technical University, Chennai.
Abstract: In this paper we describe an efficient implementation of an IEEE 754 single precision floating point multiplier targeted for the Xilinx Virtex-5 FPGA. VHDL is used to implement a technology-independent pipelined design. The multiplier implementation handles the overflow and underflow cases. Rounding is not implemented, to give more precision when using the multiplier in a Multiply and Accumulate (MAC) unit. With a latency of three clock cycles the design achieves 301 MFLOPs. The multiplier was verified against the Xilinx floating point multiplier core.

Keywords: floating point; multiplication; FPGA; CAD design flow

I. INTRODUCTION

Floating point numbers are one possible way of representing real numbers in binary format; the IEEE 754 standard [1] presents two different floating point formats, the binary interchange format and the decimal interchange format. Multiplying floating point numbers is a critical requirement for DSP applications involving large dynamic range. This paper focuses only on the single precision normalized binary interchange format. Fig. 1 shows the IEEE 754 single precision binary format representation; it consists of a one-bit sign (S), an eight-bit exponent (E), and a twenty-three-bit fraction (M, or mantissa). An extra bit is added to the fraction to form what is called the significand¹. If the exponent is greater than 0 and smaller than 255, and there is a 1 in the MSB of the significand, then the number is said to be a normalized number; in this case the real number is represented by (1):

Z = (-1)^S * 2^(E - Bias) * (1.M)    (1)

where M = m22*2^-1 + m21*2^-2 + m20*2^-3 + ... + m1*2^-22 + m0*2^-23, and Bias = 127.

Fig. 1. IEEE single precision floating point format

Multiplying two numbers in floating point format is done by: 1) adding the exponents of the two numbers and then subtracting the bias from their sum, 2) multiplying the significands of the two numbers, and 3) calculating the sign by XORing the signs of the two numbers. In order to represent the multiplication result as a normalized number there should be a 1 in the MSB of the result (leading one).

Floating-point implementation on FPGAs has been the interest of many researchers. In [2], an IEEE 754 single precision pipelined floating point multiplier was implemented on multiple FPGAs (4 Actel A1280). In [3], a custom 16/18-bit three-stage pipelined floating point multiplier that doesn't support rounding modes was implemented. In [4], a single precision floating point multiplier that doesn't support rounding modes was implemented using a digit-serial multiplier; using the Altera FLEX 8000 it achieved 2.3 MFLOPs. In [5], a parameterizable floating point multiplier was implemented using the software-like language Handel-C on the Xilinx XCV1000 FPGA; a five-stage pipelined multiplier achieved 28 MFLOPs. In [6], a latency-optimized floating point unit using the primitives of the Xilinx Virtex-II FPGA was implemented with a latency of 4 clock cycles; the multiplier reached a maximum clock frequency of 100 MHz.

¹ The significand is the mantissa with an extra MSB bit. This research has been supported by Mentor Graphics.

II. FLOATING POINT MULTIPLICATION ALGORITHM

As stated in the introduction, normalized floating point numbers have the form Z = (-1)^S * 2^(E - Bias) * (1.M). To multiply two floating point numbers the following is done:
1. Multiplying the significands; i.e., (1.M1 * 1.M2)
2. Placing the decimal point in the result
3. Adding the exponents; i.e., (E1 + E2 - Bias)
4. Obtaining the sign; i.e., S1 xor S2
5. Normalizing the result; i.e., obtaining 1 at the MSB of the result's significand
6. Rounding the result to fit in the available bits
7. Checking for underflow/overflow occurrence

Consider a floating point representation similar to the IEEE 754 single precision floating point format, but with a reduced number of mantissa bits (only 4), while still retaining the hidden 1 bit for normalized numbers:

A = 0 10000100 0100 = 40,  B = 1 10000001 1110 = -7.5

To multiply A and B:

1. Multiply the significands:

       1.0100
     x 1.1110
     --------
        00000
       10100
      10100
     10100
    10100
    ----------
    1001011000

2. Place the decimal point: 10.01011000

3. Add the exponents:

      10000100
    + 10000001
    ----------
     100000101

   The exponents of the two numbers are already shifted/biased by the bias value (127) and are not the true exponents; i.e., EA = EA-true + bias and EB = EB-true + bias, so EA + EB = EA-true + EB-true + 2*bias. We should therefore subtract the bias from the resulting exponent, otherwise the bias would be added twice:

     100000101
    - 01111111
    ----------
      10000110

4. Obtain the sign bit and put the result together: 1 10000110 10.01011000

5. Normalize the result so that there is a 1 just before the radix point (decimal point). Moving the radix point one place to the left increments the exponent by 1; moving it one place to the right decrements the exponent by 1:

   1 10000110 10.01011000 (before normalizing)
   1 10000111 1.001011000 (normalized)

   The result is (without the hidden bit): 1 10000111 00101100

6. The mantissa has more than 4 bits (the available mantissa bits), so rounding is needed. If we apply the truncation rounding mode, then the stored value is: 1 10000111 0010
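The steps above can be replayed in a few lines of Python. This is only a sketch of the worked example's reduced format (4 mantissa bits, truncation rounding), not the VHDL design itself, and it omits the underflow/overflow checks discussed later:

    def fp_mul4(s1, e1, m1, s2, e2, m2, bias=127):
        # m1, m2 are the 4 fraction bits as integers; the hidden 1 is
        # prepended here (16 | m), giving 5-bit significands.
        sig = (16 | m1) * (16 | m2)    # step 1: 1.M1 x 1.M2 -> 10-bit product
        exp = e1 + e2 - bias           # step 3: add exponents, remove one bias
        sign = s1 ^ s2                 # step 4: XOR the input signs
        if sig & (1 << 9):             # step 5: leading one at bit 9 -> shift
            exp += 1
            mant = (sig >> 5) & 0xF    # step 6: keep 4 fraction bits (truncate)
        else:
            mant = (sig >> 4) & 0xF
        return sign, exp, mant

    # The example from the text: A = 40, B = -7.5
    print(fp_mul4(0, 0b10000100, 0b0100, 1, 0b10000001, 0b1110))
    # -> (1, 135, 2), i.e. 1 10000111 0010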

In this paper we present a floating point multiplier in which rounding support isn't implemented. Rounding support can be added as a separate unit that can be accessed by the multiplier or by a floating point adder, thus accommodating more precision if the multiplier is connected directly to an adder in a MAC unit. Fig. 2 shows the multiplier structure: the exponent addition, significand multiplication, and sign calculation of the result are independent and are done in parallel. The significand multiplication is done on two 24-bit numbers and yields a 48-bit product, which we will call the intermediate product (IP). The IP is represented as (47 downto 0), and the decimal point is located between bits 46 and 45 of the IP. The following sections detail each block of the floating point multiplier.

Fig. 2. Floating point multiplier block diagram

III. HARDWARE OF THE FLOATING POINT MULTIPLIER

A. Sign Bit Calculation

Multiplying two numbers gives a negative result iff exactly one of the multiplied numbers is negative. With the aid of a truth table we find that this can be obtained by XORing the signs of the two inputs.

B. Unsigned Adder (For Exponent Addition)

This unsigned adder is responsible for adding the exponent of the first input to the exponent of the second input and subtracting the bias (127) from the addition result (i.e., A_exponent + B_exponent - Bias). The result of this stage is called the intermediate exponent. The add operation is done on 8 bits, and there is no need for a quick result because most of the calculation time is spent in the significand multiplication process (multiplying 24 bits by 24 bits); thus we need only a moderate exponent adder and a fast significand multiplier.
An 8-bit ripple carry adder is used to add the two input exponents. As shown in Fig. 3, a ripple carry adder is a chain of cascaded full adders and one half adder; each full adder has three inputs (A, B, Ci) and two outputs (S, Co). The carry out (Co) of each adder is fed to the next full adder (i.e., each carry bit "ripples" to the next full adder).

Fig. 3. Ripple Carry Adder

The addition process produces an 8-bit sum (S7 to S0) and a carry bit (Co,7). These bits are concatenated to form a 9-bit addition result (S8 to S0), from which the bias is subtracted. The bias is subtracted using an array of ripple borrow subtractors.
A normal subtractor has three inputs (minuend (S), subtrahend (T), borrow in (Bi)) and two outputs (difference (R), borrow out (Bo)). The subtractor logic can be optimized if one of its inputs is a constant value, which is our case, where the bias is constant (127|10 = 001111111|2). Table 1 shows the truth table for a 1-bit subtractor with the input T equal to 1, which we will call a one subtractor (OS).

Table 1. 1-bit subtractor with the input T = 1

S  T  Bi | Difference (R) | Borrow out (Bo)
0  1  0  |       1        |       1
1  1  0  |       0        |       0
0  1  1  |       0        |       1
1  1  1  |       1        |       1

Fig. 4. 1-bit subtractor with the input T = 1

The Boolean equations (2) and (3) represent this subtractor:

R = ~(S xor Bi)    (2)
Bo = ~S or Bi      (3)

Table 2 shows the truth table for a 1-bit subtractor with the input T equal to 0, which we will call a zero subtractor (ZS).

Table 2. 1-bit subtractor with the input T = 0

S  T  Bi | Difference (R) | Borrow out (Bo)
0  0  0  |       0        |       0
1  0  0  |       1        |       0
0  0  1  |       1        |       1
1  0  1  |       0        |       0

Fig. 5. 1-bit subtractor with the input T = 0

The Boolean equations (4) and (5) represent this subtractor:

R = S xor Bi       (4)
Bo = ~S and Bi     (5)

Fig. 6 shows the bias subtractor, which is a chain of 7 one subtractors (OS) followed by 2 zero subtractors (ZS); the borrow output of each subtractor is fed to the next subtractor. If an underflow occurs, then Eresult < 0 and the number is outside the IEEE 754 single precision normalized number range; in this case the output is signaled to 0 and an underflow flag is asserted.

Fig. 6. Ripple Borrow Subtractor
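The equations above (reconstructed here from the truth tables) can be checked mechanically against ordinary subtraction. The small Python sketch below verifies (2)-(5) over all input combinations; this check is ours, not part of the paper's design flow:

    # Verify R and Bo of the OS (T = 1) and ZS (T = 0) cells against S - T - Bi.
    for T in (1, 0):
        for S in (0, 1):
            for Bi in (0, 1):
                d = S - T - Bi
                R_ref, Bo_ref = d & 1, int(d < 0)
                if T == 1:                        # one subtractor (OS)
                    R, Bo = 1 ^ (S ^ Bi), (1 - S) | Bi
                else:                             # zero subtractor (ZS)
                    R, Bo = S ^ Bi, (1 - S) & Bi
                assert (R, Bo) == (R_ref, Bo_ref)
    print("equations (2)-(5) match the truth tables")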

C. Unsigned Multiplier (For Significand Multiplication)

This unit is responsible for multiplying the unsigned significands and placing the decimal point in the multiplication product. The result of the significand multiplication is called the intermediate product (IP). The unsigned significand multiplication is done on 24 bits. The multiplier's performance should be taken into consideration so as not to affect the overall multiplier's performance. A 24x24-bit carry save multiplier architecture is used, as it has moderate speed with a simple architecture. In the carry save multiplier, the carry bits are passed diagonally downwards (i.e., the carry bit is propagated to the next stage). Partial products are made by ANDing the inputs together and passing them to the appropriate adder.
The carry save multiplier has three main stages:
1. The first stage is an array of half adders.
2. The middle stages are arrays of full adders. The number of middle stages is equal to the significand size minus two.
3. The last stage is an array of ripple carry adders. This stage is called the vector merging stage.
The number of adders (half adders and full adders) in each stage is equal to the significand size minus one. For example, a 4x4 carry save multiplier is shown in Fig. 7, and it has the following stages:
1. The first stage consists of three half adders.
2. Two middle stages, each consisting of three full adders.
3. The vector merging stage consists of one half adder and two full adders.

Fig. 7. 4x4-bit Carry Save Multiplier

In Fig. 7:
1. Partial product: aibj = ai and bj
2. HA: half adder
3. FA: full adder

The decimal point is between bits 45 and 46 in the significand multiplier result. The multiplication time taken by the carry save multiplier is determined by its critical path. The critical path starts at the AND gate of the first partial products (i.e., a1b0 and a0b1), passes through the carry logic of the first half adder and the carry logic of the first full adder of the middle stages, and then passes through all the vector merging adders. The critical path is marked in bold in Fig. 7.

D. Normalizer

The result of the significand multiplication (intermediate product) must be normalized to have a leading 1 just to the left of the decimal point (i.e., in bit 46 of the intermediate product). Since the inputs are normalized numbers, the intermediate product has the leading one at bit 46 or 47:
1. If the leading one is at bit 46 (i.e., to the left of the decimal point), then the intermediate product is already a normalized number and no shift is needed.
2. If the leading one is at bit 47, then the intermediate product is shifted to the right and the exponent is incremented by 1.
The shift operation is done using combinational shift logic made of multiplexers. Fig. 8 shows the simplified logic of a normalizer that has an 8-bit intermediate product input and a 6-bit intermediate exponent input.

Fig. 8. Simplified Normalizer logic
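In software terms the normalizer is a one-step conditional shift. A minimal Python sketch for the 48-bit intermediate product follows (the hardware uses the multiplexer-based shift logic described above):

    def normalize(ip, exp):
        # If the leading one of the 48-bit intermediate product is at
        # bit 47, shift right once and bump the exponent; if it is at
        # bit 46, the product is already normalized.
        if ip & (1 << 47):
            return ip >> 1, exp + 1
        return ip, exp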

IV. UNDERFLOW/OVERFLOW DETECTION

Overflow/underflow means that the result's exponent is too large/small to be represented in the exponent field. The exponent of the result must be 8 bits in size and must lie between 1 and 254, otherwise the value is not a normalized one. An overflow may occur while adding the two exponents or during normalization. An overflow due to exponent addition may be compensated during subtraction of the bias, resulting in a normal output value (normal operation). An underflow may occur while subtracting the bias to form the intermediate exponent: if the intermediate exponent < 0, it is an underflow that can never be compensated; if the intermediate exponent = 0, it is an underflow that may be compensated during normalization by adding 1 to it.
When an overflow occurs, an overflow flag signal goes high and the result turns to Infinity (with the sign determined according to the signs of the floating point multiplier inputs). When an underflow occurs, an underflow flag signal goes high and the result turns to Zero (with the sign determined according to the signs of the inputs). Denormalized numbers are signaled to Zero with the appropriate sign calculated from the inputs, and an underflow flag is raised. Assume that E1 and E2 are the exponents of the two numbers A and B respectively; the result's exponent is calculated by (6):

Eresult = E1 + E2 - 127    (6)

E1 and E2 can have values from 1 to 254, so Eresult can have values from -125 (2 - 127) to 381 (508 - 127); but for normalized numbers, Eresult can only have values from 1 to 254. Table 3 summarizes the different Eresult values and the effect of normalization on them.

Table 3. Normalization effect on the result's exponent and overflow/underflow detection

Eresult              | Category          | Comments
-125 <= Eresult < 0  | Underflow         | Can't be compensated during normalization
Eresult = 0          | Zero              | May turn into a normalized number during normalization (by adding 1 to it)
1 < Eresult < 254    | Normalized number | May result in overflow during normalization
255 <= Eresult       | Overflow          | Can't be compensated
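Table 3 amounts to a simple classification of the intermediate exponent. A small Python sketch of that decision (before the +1 that normalization may add), for illustration only:

    def classify_eresult(e1, e2, bias=127):
        # Categorize the intermediate exponent per Table 3.
        e = e1 + e2 - bias
        if e < 0:
            return "underflow (cannot be compensated)"
        if e == 0:
            return "underflow (may be compensated by normalization)"
        if e <= 254:
            return "normalized (may still overflow during normalization)"
        return "overflow"

    print(classify_eresult(1, 1))      # -125 -> deep underflow
    print(classify_eresult(200, 200))  # 273  -> overflow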

V. PIPELINING THE MULTIPLIER

In order to enhance the performance of the multiplier, three pipelining stages are used to divide the critical path, thus increasing the maximum operating frequency of the multiplier. The pipelining stages are embedded at the following locations:
1. In the middle of the significand multiplier, and in the middle of the exponent adder (before the bias subtraction).
2. After the significand multiplier, and after the exponent adder.
3. At the floating point multiplier outputs (sign, exponent and mantissa bits).
Fig. 9 shows the pipelining stages as dotted lines.

Fig. 9. Floating point multiplier with pipelined stages

Three pipelining stages mean that the output has a latency of three clock cycles. The synthesis tool's "retiming" option was used so that the synthesizer applies its optimization logic to better place the pipelining registers across the critical path.

VI. IMPLEMENTATION AND TESTING

The whole multiplier (top unit) was tested against the Xilinx floating point multiplier core generated by Xilinx coregen. The Xilinx core was customized to have two flags indicating overflow and underflow, and to have a maximum latency of three cycles; it implements the "round to nearest" rounding mode. A testbench is used to generate the stimulus, apply it to the implemented floating point multiplier and to the Xilinx core, and compare the results. The floating point multiplier code was also checked using DesignChecker [7], a linting tool which helps in filtering design issues like gated clocks, unused/undriven logic, and combinational loops. The design was synthesized using the Precision synthesis tool [8] targeting the Xilinx Virtex-5 5VFX200TFF1738 with a timing constraint of 300 MHz. Post-synthesis and place-and-route simulations were made to ensure the design's functionality after synthesis and place and route. Table 4 shows the resources and frequency of the implemented floating point multiplier and the Xilinx core.

Table 4. Area and frequency comparison between the implemented floating point multiplier and the Xilinx core

                    | Our Floating Point Multiplier | Xilinx Core
Function Generators | 1263                          | 765
CLB Slices          | 604                           | 266
DFFs                | 293                           | 241
Max Frequency       | 301.114 MHz                   | 221.484 MHz

The area of the Xilinx core is less than that of the implemented floating point multiplier because the latter doesn't truncate/round the 48-bit result of the mantissa multiplier, which is reflected in the number of function generators and registers used to perform operations on the extra bits; the speed of the Xilinx core is also affected by the fact that it implements the round-to-nearest rounding mode.

VII. CONCLUSIONS AND FUTURE WORK

This paper presents an implementation of a floating point multiplier that supports the IEEE 754-2008 binary interchange format; the multiplier doesn't implement rounding and simply presents the significand multiplication result as is (48 bits). This gives better precision if the whole 48 bits are utilized in another unit, e.g., a floating point adder forming a MAC unit. The design has three pipelining stages, and after implementation on a Xilinx Virtex-5 FPGA it achieves 301 MFLOPs.

ACKNOWLEDGMENT

The authors would like to thank Randa Hashem for her invaluable support and contribution.

REFERENCES

[1] IEEE 754-2008, IEEE Standard for Floating-Point Arithmetic, 2008.
[2] B. Fagin and C. Renard, "Field Programmable Gate Arrays and Floating Point Arithmetic," IEEE Transactions on VLSI, vol. 2, no. 3, pp. 365-367, 1994.
[3] N. Shirazi, A. Walters, and P. Athanas, "Quantitative Analysis of Floating Point Arithmetic on FPGA Based Custom Computing Machines," Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines (FCCM'95), pp. 155-162, 1995.
[4] L. Louca, T. A. Cook, and W. H. Johnson, "Implementation of IEEE Single Precision Floating Point Addition and Multiplication on FPGAs," Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines (FCCM'96), pp. 107-116, 1996.
[5] A. Jaenicke and W. Luk, "Parameterized Floating-Point Arithmetic on FPGAs," Proc. of IEEE ICASSP, vol. 2, pp. 897-900, 2001.
[6] B. Lee and N. Burgess, "Parameterisable Floating-point Operations on FPGA," Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems, and Computers, 2002.
[7] DesignChecker User Guide, HDL Designer Series 2010.2a, Mentor Graphics, 2010.
[8] Precision Synthesis User's Manual, Precision RTL Plus 2010a update 2, Mentor Graphics, 2010.
[9] D. Patterson and J. Hennessy, Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann, 2005.
[10] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Third Edition, 1996.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.185-187.

A FPGA-Based Viterbi Algorithm Implementation


for Speech Recognition Systems
J. Ashok Kumar
PG Electronics and Communication Engineering, Vel Tech Dr.RR & Dr.SR Technical University, Chennai.
Abstract: This work proposes a speech recognition system based on a hardware/software co-design implementation approach. The main advantage of this approach is an expressive reduction of the processing time in speech recognition, because part of the system is implemented in dedicated hardware. This work also discusses another way to implement Hidden Markov Models (HMM), a probabilistic model extensively used in speech recognition systems. In this new approach, the Viterbi algorithm, used to compute the HMM likelihood score, is built together with the HMM structure designed in hardware, implementing probabilistic state machines that run as parallel processes, one for each word in the vocabulary handled by the system. So far, we have obtained a dramatic speed-up, with measures around 500 times faster than a classic implementation and correctness comparable to other isolated-word recognition systems.

I. INTRODUCTION

A. Speech Recognition System Structure

A speech recognition system (SRS) is basically a pattern recognition system dedicated to detecting speech or, in other words, to identifying language words in a sound signal received as input from the environment.

II. SECTION II

A. Signal Analysis Implementation

Signal analysis is responsible for signal sampling, its conversion to a digital representation, and vector quantization. At the end, the speech signal is replaced by sequences of label-codes (Figure 2).

Fig. 2: Signal Analysis main tasks

Figure 2 shows the six sequential tasks to be executed in the signal analysis step, which are:
- Sampling: the SRS converts speech sound from the outside world into a digital representation. Essentially, this task includes a sample-and-hold device and an analog-to-digital (A/D) converter. It is necessary to choose the appropriate sampling rate, according to the Nyquist-Shannon theorem.
- Low-pass filter: cuts the high frequencies found in the signal due to sampling. Usually this filter is adjusted by the sampling rate [DELL1993].
- Pre-emphasis filter: adjusts the high variations in the spectrum frequencies, due to the glottal pulse and lip radiation, found in the speech signal behavior [GRAY1982], [RABI1978].
- Windowing: cuts the speech signal into blocks of 10 ms signal frames. A Hamming window adjusts those frame samples [OSH1987], [HARR1978].
- LPC/Cepstral analysis: algorithms process each frame in order to compute the cepstral coefficients from the linear predictive coefficients [GRAY1982], [MAKH1975].
- VQ (vector quantization): each vector of cepstral coefficients is evaluated by a distance measure, using as a map a codebook with reference vectors in the acoustic space. The final output is a sequence of label codes¹ (usually called the observation sequence) that will be evaluated by the pattern matching process [RABI1985], [LIND1980], [DELL1993].

¹ These labels are numbers relating to the codebook reference vectors, usually called centroids, in VQ. Each label-code is really just a label for those centroids. After VQ, each label-code in the output sequence is called an observation.


III. SECTION III


A. The Pattern Matching Process
Pattern matching is the identification step, where the
words spelled in the speech signal are identified
generating a sequence of text words. The sequence
observation is evaluated using Hidden Markov Models
(HMM), which, as the acoustic reference pattern, plays
the main role in the recognition process. [RABI1989a],
[FAGU1993] [FAGU1998].
a) Hidden Markov Models (Hmm)
Figure 5 shows a HMM structure usually applied in
speech recognition systems [RABI1986][FAGU1993]:

For a more didactical approach in Hidden Markov Models


with applications in speech recognition, we would
recommend the following references [RABI1989b]
[HUAN1990] [FAGU1993]
b) FPGA Implementation
Our approach proposes the use of HMM implementation
with FPGA as a tool for the evaluation of the code label
sequences (instead of the traditional utilization of Viterbi
algorithm). The main idea is to use the FPGA to
implement another kind of HMM, by the use of adders,
comparators and logic operators, performing the pattern
matching process without running Viterbi algorithm. This
new HMM can be seen in the figure 6:

1 These labels are numbers, relating the codebook


reference vectors, usually called centroids, in VQ. Each
label-code is really just a label to those centroids. After
VQ, each label-code in the sequence output is called
observation.

2 Discrete Density function Hidden Markov Model

Figure 5: Hidden Markov Model

As seen in figure 5, we can define a HMM λ = (A, B, π) with this set of parameters:
A: {aij} = P{qj | qi}, the probability transition matrix, with dimension N², where N is the number of states. This matrix describes the probability of a transition from state qi to state qj.
B: the matrix bj(k) = P{Vk | qj}, the probability of getting the symbol Vk in the state qj; for a DDHMM2, bj(k) = {bjk}, 1 <= j <= N and 1 <= k <= M, for all N model states and M symbols used in VQ.
π: the initial probability vector. For the HMM of figure 5, this vector will always be defined as [1 0 0 0 ...].
{V1, ..., Vk, ..., VM}: set of M symbols.
O = {O1, ..., OT}: observation sequence in the interval [1, T].
Q = {q1, ..., qT}: state sequence through the HMM in the interval [1, T].
N: number of states.
M: number of symbols (number of centroids or, equivalently, of label-codes).

Figure 6: HMM implementation with FPGA

In this new approach, the recognition process is done by scoring the likelihood of the code-label sequence achieved by VQ (as done in the Viterbi algorithm) directly in logical blocks of the FPGA chip. The whole process is built by the structure shown in figure 7.

Figure 7: Pattern matching step implemented by dedicated hardware

Figures 6 and 7 show all the tasks that the pattern matching process performs, which are:
1. To fetch the probability score addressed by each code label from the observation sequence, for each current HMM state.
2. To accumulate those scores through all HMM states, generating a final accumulated score.
3. To choose the highest HMM accumulated score using a comparison logic block. This new approach saves time in the speech recognition process because, instead of running an algorithm to score, it computes the likelihood scoring directly.
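These three steps map directly onto adders and a comparison block in hardware. A rough software analogue (a hypothetical Python sketch, not the authors' circuit, which also ignores the transition matrix A for brevity) is:

import numpy as np

def hmm_score(obs, log_b):
    """Accumulate per-state observation log-scores for one word model.

    obs   : observation sequence of label codes (length T)
    log_b : (N, M) table of log P{symbol | state} (matrix B of the HMM)
    The left-to-right model of figure 5 is approximated by assigning an
    equal share of the observations to each of the N states.
    """
    n_states = log_b.shape[0]
    total = 0.0
    for state, chunk in enumerate(np.array_split(np.asarray(obs), n_states)):
        total += log_b[state, chunk].sum()      # fetch + accumulate
    return total

def recognize(obs, models):
    """Compare accumulated scores across word models (comparison block)."""
    return max(models, key=lambda w: hmm_score(obs, models[w]))

models = {"yes": np.log(np.random.rand(6, 128)),
          "no": np.log(np.random.rand(6, 128))}
print(recognize(np.random.randint(0, 128, 66), models))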

IV. SECTION IV

A. Methodology
As can be seen in figure 8, the left side of the internal bus comprises the Motorola 56002 microprocessor and is responsible for executing the software part of the speech recognition procedure (i.e., data acquisition and signal processing). The right side of the internal bus is composed of the Altera FPGA design kit, which is based on the MAX 7K128 [UP1BOARD] and FLEX 10K20 [FLEX10K] FPGA chips. This part implements the HMMs in dedicated hardware, and is responsible for the pattern


(voice) recognition procedure itself. This task is dramatically sped up by implementing in the FPGAs dedicated hardware that performs parallel arithmetic operations.

V. RESULTS
Table 1 below shows the results achieved in this new FPGA-based approach:

Figure 8: Speech recognition system. The left side of the internal bus (bold line) is based on the DSP 56002 microprocessor development kit from Motorola; the right side is based on the Altera FPGA development kit.

Table 1. Comparison results between the FPGA-based approach and the classic Viterbi approach.

Words   Obs. sequence size   Total [clk]   FPGA-based time [ms]   Classic Viterbi time [s]
2       66                   5280          1.320                  0.577
3       66                   7920          1.980                  0.917
4       66                   10560         2.640                  1.240

a) Signal Analysis
The signal analysis step is done by the Motorola 56002 EVM, with a 16 kHz sampling rate. The tasks listed above are programmed in C and Motorola assembly languages and stored in the Motorola 56002 EVM. We use 10 ms sample frames, with 2/3 frame overlapping in the windowing and LPC/cepstral steps, in order to avoid losing samples during parameter extraction. After running a short speech detector algorithm (to discard silence segments in the input), the signal analysis provides an observation sequence to the pattern matching step.
b) Pattern Matching
The pattern matching step, implementing the structures shown in figures 6 and 7, uses two Altera FPGA chips, MAX 7K128 and FLEX 10K20, placed on an Altera development board. We use VHDL (VHSIC Hardware Description Language) [BERG1996] [OTT1996] to describe the whole proposed structure, using a 6-state HMM and 128 vectors in the codebook. For each state there is a probability table relating each code label to its probability of occurrence in the input sequence for that state. Due to space limitations in the FPGA, we use a memory module, controlled by an Intel 8051 microcontroller, to store those probability tables and provide their values during score computation. Such tables play the role of matrix B of the HMM parameters, providing a discrete probability density function of the VQ symbols associated with each state.

The times measured for the FPGA Viterbi implementation and for a classic Viterbi implementation can be seen in table 1, for different numbers of words (at most 4 words, due to FPGA space limitations). The number of observations (observation sequence size) and the number of clock pulses are also presented for performance evaluation.
VI. CONCLUSIONS
Our research group is now working towards the integration of the speech-recognition system. This task is being executed by developing a system based on a single board containing the Motorola 56002 microprocessor, the Altera FPGAs, mass memory modules and the overall glue logic that the system requires in order to operate in a stand-alone configuration. At the same time, we have the isolated word recognition system already trained with MATLAB, using a VQ codebook with 128 centroids and a six-state HMM. We have also tried another configuration, with 8 states in the HMM; however, due to space limitations in the FPGA, we settled on 6 states as the best choice. In our test procedures we achieved a very good performance, with a 99.7% correctness rate on a vocabulary of more than 10 words. From the results above, the dramatic speed-up achieved should be noted: around 500 times faster than a classic Viterbi, due to the parallel hardware implementation. We must also point out that the small number of words currently handled by this implementation will be overcome by new VLSI technologies, supporting the use of this approach in small speech recognition circuits in the future.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.188-191.

Design and Simulation of UART Serial Communication Module Based on VHDL
A. Sujitha
PG Electronics and Communication Engineering
Vel Tech Dr. RR & Dr. SR Technical University

Abstract: UART (Universal Asynchronous Receiver Transmitter) is a kind of serial communication protocol, mostly used for short-distance, low-speed, low-cost data exchange between a computer and its peripherals. In actual industrial production we sometimes do not need the full functionality of a UART, but simply integrate its core part. The UART includes three kernel modules: the baud rate generator, the receiver and the transmitter. The UART implemented in the VHDL language can be integrated into an FPGA to achieve compact, stable and reliable data transmission, which is significant for the design of SoCs. The simulation results with Quartus II are completely consistent with the UART protocol.

I. INTRODUCTION
Asynchronous serial communication has the advantages of fewer transmission lines, high reliability and long transmission distance, and is therefore widely used in data exchange between computers and peripherals. Asynchronous serial communication is usually implemented by a Universal Asynchronous Receiver Transmitter (UART). A UART allows full-duplex communication over a serial link, and has thus been widely used in data communications and control systems. In actual applications, usually only a few key features of a UART are needed, so a specific interface chip causes a waste of resources and increased cost. Particularly in the field of electronic design, SoC technology has recently become increasingly mature; this results in the requirement of realizing the whole system function in a single chip or very few chips, so designers must integrate the required function modules into an FPGA. This paper uses VHDL to implement the UART core functions and integrates them into an FPGA chip to achieve compact, stable and reliable data transmission, which effectively solves the above problem.

In asynchronous transmissions, a bit called the "Start Bit" is added to the beginning of each word that is to be transmitted. The Start Bit is used to alert the receiver that a word of data is about to be sent, and to force the clock in the receiver into synchronization with the clock in the transmitter. These two clocks must be accurate enough that their frequencies do not drift by more than 10% during the transmission of the remaining bits in the word.
After the Start Bit, the individual data bits of the word are sent, with the Least Significant Bit (LSB) sent first. Each bit in the transmission is transmitted for exactly the same amount of time as all of the other bits, and the receiver looks at the wire approximately halfway through the period assigned to each bit to determine if the bit is a 1 or a 0. For example, if it takes two seconds to send each bit, the receiver will examine the signal to determine if it is a 1 or a 0 after one second has passed, then it will wait two seconds and examine the value of the next bit, and so on.
When the entire data word has been sent, the transmitter may add a Parity Bit that it generates. The Parity Bit may be used by the receiver to perform simple error checking. Then at least one Stop Bit is sent by the transmitter.
When the receiver has received all of the bits in the data word, it may check the Parity Bit (both sender and receiver must agree on whether a Parity Bit is to be used), and then the receiver looks for a Stop Bit. If the Stop Bit does not appear when it is supposed to, the UART considers the entire word to be garbled and will report a Framing Error to the host processor when the data word is read. The usual cause of a Framing Error is that the sender and receiver clocks were not running at the same speed, or that the signal was interrupted.
Regardless of whether the data was received correctly or not, the UART automatically discards the Start, Parity and Stop bits. If the sender and receiver are configured identically, these bits are not passed to the host.
If another word is ready for transmission, the Start Bit for the new word can be sent as soon as the Stop Bit for the previous word has been sent. Because asynchronous data is self-synchronizing, if there is no data to transmit the transmission line can be idle. The UART frame format is shown in Fig. 1.
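A minimal software model of this frame format (hypothetical Python, shown only to make the bit ordering concrete; the parity convention is selectable) could be:

def uart_frame(byte, even_parity=True):
    """Return the frame for one byte as a list of line levels (idle = 1).

    Order on the wire: start bit (0), 8 data bits LSB first,
    parity bit, stop bit (1).
    """
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    ones = sum(data) % 2                         # 1 if odd number of ones
    parity_bit = ones if even_parity else 1 - ones
    return [0] + data + [parity_bit] + [1]

print(uart_frame(0x55))   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1]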


Figure 1. UART frame format (idle bit, start bit, data bits 1-8, parity bit, stop bit)


II. IMPLEMENTATION OF UART
In this paper, the top-to-bottom (top-down) design method is used. The UART serial communication module is divided into three sub-modules: the baud rate generator, the receiver module and the transmitter module, shown in Fig. 2. The implementation of the UART communication module is therefore actually the realization of the three sub-modules. The baud rate generator produces a local clock signal, much higher than the baud rate, to control the UART receive and transmit operations; the UART receiver module receives the serial signals at RXD and converts them into parallel data; the UART transmitter module converts the bytes into serial bits according to the basic frame format and transmits those bits through TXD.

Figure 2. UART module: baud rate generator, receiver and transmitter

A. Baud Rate Generator
The baud rate generator is actually a frequency divider. The baud rate frequency factor can be calculated from a given system clock frequency (oscillator clock) and the requested baud rate, and the calculated factor is used as the divider factor. In this design, the clock produced by the baud rate generator is not the baud rate clock itself but 16 times the baud rate clock; the purpose is to sample the asynchronous serial data precisely at the receiver. Assume that the system clock is 32 MHz and the baud rate is 9600 bps; then the output clock frequency of the baud rate generator should be 16 x 9600 Hz, and the frequency coefficient (M) of the baud rate generator is:

M = 32 MHz / (16 x 9600 Hz) = 208

When the UART receives serial data, it is critical to determine where to sample the data information. The ideal time for sampling is at the middle point of each serial data bit. In this design the receive clock frequency is 16 times the baud rate; therefore, each data bit received by the UART spans 16 receive clock cycles.
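The divider arithmetic is easy to check in a few lines (a hypothetical Python sketch mirroring the numbers above):

def baud_divisor(f_clk_hz, baud, oversample=16):
    """Divider factor and the resulting baud-rate error for a UART clock."""
    m = round(f_clk_hz / (oversample * baud))
    actual = f_clk_hz / (oversample * m)
    error = (actual - baud) / baud
    return m, actual, error

m, actual, err = baud_divisor(32_000_000, 9600)
print(m, actual, f"{err:+.3%}")   # 208 -> about 9615 baud, roughly +0.16% error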
B. Receiver Module
During UART reception, the serial data and the receiving clock are asynchronous, so it is very important to correctly determine the start bit of a data frame. The receiver module receives data from the RXD pin. An RXD jump from logic 1 to logic 0 can be regarded as the beginning of a data frame: when the UART receiver module is reset, it waits for the RXD level to jump, and the start bit is identified by detecting the RXD level change from high to low. In order to avoid misjudging the start bit because of noise, a start-bit error detect function is added in this design, which requires the received low level on RXD to last at least 50% of a bit time before the start bit is accepted. Since the receive clock frequency is 16 times the baud rate in this design, an RXD low level lasting at least 8 receive clock cycles is considered a start bit. Once the start bit has been identified, from the next bit on, the receiver counts the rising edges of the baud clock and samples RXD while counting. Each sampled logic level is deposited in the register rbuf[7:0] in order. When the count equals 8, all the data bits have been received and the 8 serial bits have been converted into one byte of parallel data.
The serial receiver module includes receiving, serial-to-parallel transformation, receive caching, etc. In this paper we use a finite state machine for the design, shown in Fig. 3.
Figure 3. UART receiver state machine (states R_START, R_CENTER, R_WAIT, R_SAMPLE, R_STOP; transitions conditioned on RXD_SYNC and the RCNT16/RBITCNT counters)

The state machine includes five states: R_START (waiting for the start bit), R_CENTER (finding the midpoint), R_WAIT (waiting for sampling), R_SAMPLE (sampling) and R_STOP (receiving the stop bit).
R_START status: when the UART receiver is reset, the receiver state machine is in this state, waiting for the RXD level to jump from logic 1 to logic 0, i.e. the start bit, which signals the beginning of a new data frame. Once the start bit is identified, the state machine transfers to the R_CENTER state. In Fig. 3, RXD_SYNC is a synchronized version of RXD: because we do not want the detected signal to be unstable when sampling logic 1 or logic 0, we do not detect the RXD signal directly but its synchronized version RXD_SYNC.
R_CENTER status: for an asynchronous serial signal, in order to detect the correct value each time and to minimize the accumulated error in the later data bits, it is clearly most ideal to sample at the middle of each bit. In this state, the task is to find the midpoint of each bit by way of the start bit. The method is to count the number of bclkr cycles (bclkr being the receive clock generated by the baud rate generator; RCNT16 is

the counter of bclkr). In addition, the start bit detected in R_START may not be a real start bit; it may be an occasional interference spike (a short negative pulse). Therefore, only a signal that maintains logic 0 for over 1/4 of a bit time is accepted as a start bit.
R_WAIT status: in this state the machine waits while counting bclkr up to 15, then enters R_SAMPLE to sample the data bit at the 16th bclkr. At the same time it determines whether the number of collected data bits has reached the data frame length (FRAMELEN); if it has, the stop bit arrives next. FRAMELEN is modifiable in the design (using a generic); in this design it is 8, which corresponds to the 8-bit data format of the UART.
R_SAMPLE status: data bit sampling. After sampling, the state machine transfers to the R_WAIT state unconditionally and waits for the next bit.
R_STOP status: the stop bit is either 1, 1.5 or 2 bits. The state machine does not detect RXD in R_STOP, but outputs the frame-received-done signal (REC_DONE <= '1'). After the stop bit, the state machine turns back to the R_START state, waiting for the next frame's start bit.
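As a software analogue of this sampling scheme (a hypothetical Python sketch, not the paper's VHDL), a 16x-oversampled receiver can be modeled as follows:

def uart_receive(samples, oversample=16):
    """Recover one data byte from a 16x-oversampled RXD trace.

    samples: sequence of line levels (1 = idle); assumed long enough to
    hold one full frame. The low level must persist to mid-bit (8 ticks)
    before the start bit is accepted, as in the R_START/R_CENTER states.
    """
    s = list(samples)
    i = 0
    while i < len(s) and s[i] == 1:              # R_START: wait for falling edge
        i += 1
    if s[i:i + oversample // 2] != [0] * (oversample // 2):
        return None                               # noise spike, not a start bit
    i += oversample // 2                          # R_CENTER: middle of start bit
    byte = 0
    for bit in range(8):                          # R_WAIT / R_SAMPLE per data bit
        i += oversample                           # advance one bit time to mid-bit
        byte |= s[i] << bit                       # LSB first
    return byte                                   # R_STOP handling omitted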
C. Transmit Module
The function of the transmit module is to convert the outgoing 8-bit parallel data into serial data, adding a start bit at the head of the data as well as the parity and stop bits at its end. When the UART transmit module is reset by the reset signal, it immediately enters the ready-to-send state. In this state, the 8-bit parallel data is read into the register txdbuf[7:0]. The transmitter only needs to output 1 bit every 16 bclkt cycles (bclkt is the transmit clock generated by the baud rate generator), in the order: 1 start bit, 8 data bits, 1 parity bit and 1 stop bit. The parity bit is determined by the number of logic 1s in the 8 data bits and is output after them. Finally, logic 1 is output as the stop bit. Fig. 4 shows the transmit module state diagram.
Figure 4. Transmit module state diagram (states X_IDLE, X_START, X_WAIT, X_SHIFT, X_STOP; transitions conditioned on XMIT_CMD_P and the XCNT16/XBITCNT counters)

This state machine has 5 states: X_IDLE (idle), X_START (start bit), X_WAIT (wait between shifts), X_SHIFT (shift), X_STOP (stop bit).
X_IDLE status: when the UART is reset, the state machine is in this state, waiting for a data-frame send command XMIT_CMD. XMIT_CMD_P is a processed, short-pulse version of XMIT_CMD. Since XMIT_CMD is an external signal from outside the FPGA, its pulse width cannot be limited; if XMIT_CMD were still valid after one UART data frame had been sent, the UART transmitter would wrongly take it as a new transmit command and start the frame transmission again, which would obviously be wrong. We therefore limit the pulse width of XMIT_CMD, and XMIT_CMD_P is the resulting signal. When XMIT_CMD_P = '1', the state machine transfers to X_START and gets ready to send a start bit.
X_START status: in this state, the machine sends a logic 0 to TXD for one bit time, the start bit, and then transfers to the X_WAIT state. XCNT16 is the counter of bclkt.
X_WAIT status: similar to the R_WAIT state of the UART receive state machine.
X_SHIFT status: in this state, the state machine realizes the parallel-to-serial conversion of the outgoing data, then immediately returns to the X_WAIT state.
X_STOP status: the stop-bit transmit state. When the data frame transmission is completed, the state machine transfers to this state and sends a logic 1 for 16 bclkt cycles, that is, 1 stop bit. The state machine turns back to the X_IDLE state after sending the stop bit and waits for another data-frame transmit command.

III. SIMULATION OF MODULES
The simulation software is Quartus II, and the selected device is Altera's Cyclone II FPGA EP2C5F256C6.
A. Baud Rate Generator Simulation
During simulation, the system clock frequency is set to 32 MHz and the baud rate to 9600 bps. The receive sampling clock frequency generated by the baud rate generator is therefore 153600 Hz, which is 16 times the baud rate, giving a frequency coefficient of 208. Fig. 5 shows the simulation result of the baud rate generator. The simulation report shows that this module uses 42 logic elements (<1%) and 33 registers (2%), and meets the timing requirements.



Figure 5. Simulation result of the baud rate generator

B. Receiver Simulation
For the receiver simulation, the receive sampling clock frequency generated by the baud rate generator is set to 153600 Hz and the UART receive baud rate to 9600 bps. The input sequence is 00110110001, including the start bit 0, parity bit 0 and 1 stop bit. The received data is stored into the register rbuf.


Fig. 6 shows the receiver module simulation diagram. The figure shows that the data in rbuf, from high to low, is 00110110, which is exactly the data-bit field of the UART frame.

Figure 6. Receiver simulation diagram

C. Transmitter Simulation
For the transmitter simulation, the transmit clock frequency generated by the baud rate generator is set to 153600 Hz and the UART transmit baud rate to 9600 bps. Fig. 7 shows the transmitter module simulation diagram. The simulation report shows that this module uses 78 logic elements (<1%) and 13 pins (4%), and meets the timing requirements.

Figure 7. Transmitter simulation diagram

D. RTL of Top File
Fig. 8 shows the RTL of the UART top file. It includes the baud rate generator, receiver and transmitter modules.

Figure 8. RTL of the top file

IV. CONCLUSION
This design uses VHDL as the design language to realize the core modules of a UART. Simulation and testing were completed with the Quartus II software on Altera's Cyclone series FPGA chip EP2C5F256C6. The results are stable and reliable. The design has great flexibility and high integration. Especially in the field of electronic design, where SoC technology has recently become increasingly mature, this design shows great significance.

REFERENCES

[1] Zou Jie, Yang Jianning. Design and Realization of UART Controller Based on FPGA.
[2] Liakot Ali, Roslina Sidek, Ishak Aris, Alauddin Mohd. Ali, Bambang Sunaryo Suparjo. Design of a micro-UART for SoC application. Computers and Electrical Engineering 30 (2004): 257-268.
[3] HU Hua, BAI Feng-e. Design and Simulation of UART Serial Communication Module Based on Verilog-HDL. Jisuanji Yu Xiandaihua, 2008, Vol. 8.
[4] Frank Durda. Serial and UART Tutorial. uhclem@FreeBSD.org


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.192-194.

Fuzzy Logic Technique Applied to 3 Phase Induction Motor
S. Johnson
Veltech University
Abstract: Until not long ago, most fuzzy logic based control applications were limited to the management of user interfaces, sensors and actuators, corresponding to slow software operation. These techniques are especially appropriate whenever the system model is non-linear and/or difficult to obtain.
Is it viable to use fuzzy logic in faster real-time applications? Is it interesting to use these techniques even when the system is known (including linear cases)? We try to answer some of these questions by applying fuzzy logic to the control of electrical machines. In this paper we use, as an example, a slip control scheme for a three-phase induction motor fed by a voltage-source PWM inverter.
The evaluation of the fuzzy logic controller behaviour is made through comparison with a traditional technique (PI controller with anti-windup mechanism), using computer simulations (performed with Matlab/Simulink) and experimental results. The implementation was made using the fuzzyTECH software tool to design the fuzzy logic controller and to produce C code for an Intel 80C196KC microcontroller.
A 1 kW three-phase induction motor fed by a voltage-source IGBT power module was used in the experiments.

I. INTRODUCTION
Fuzzy logic has emerged as a profitable tool for the control of complex industrial processes, as well as for household and entertainment electronics, diagnosis systems and other expert systems. It is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth (truth values between completely true and completely false). Fuzzy logic was introduced by Dr. Lotfi Zadeh of UC Berkeley in the 1960s as a means to model the uncertainty of natural language, but only recently has its use spread over a large range of engineering applications. Fuzzy logic attempts to simulate human thought processes, even in technical environments; in doing so, the fuzzy logic approach allows the designer to handle very complex closed-loop control problems efficiently, reducing engineering time and cost.
Fuzzy logic is mainly used in industrial automation for relatively slow processes. Fuzzy control also supports the non-linear design techniques that are now being exploited in motor control applications; an example is the ability to adjust the gain over a range of inputs in order not to saturate the control capability.
In this paper we try to show, through simulation and experimental results, that the use of fuzzy logic techniques can be advantageous even in cases where classical control can be applied and performs well. As an example, fuzzy logic is compared with a PI controller with anti-windup mechanism in a slip control scheme for an induction motor fed by a voltage-source inverter.

II. SIMULATION & DESIGN TOOLS
Matlab/Simulink was chosen as the simulation tool. Matlab is an integrated technical computing environment that combines numeric computation, advanced graphics and visualization and a high-level programming language; it is a natural environment for analysis, algorithm prototyping and application development. Simulink is built on top of Matlab and is an interactive environment for modeling, analyzing and simulating a wide variety of dynamic systems. Together with a graphic interface, this tool provides an extensive block library and several integration algorithms, and allows the user to select the simulation parameters.
Fig. 1. Slip control for an induction motor voltage-source inverter drive

The Matlab non-linear control design blockset was used to optimize the PI controller parameters. This tool provides a time-domain-based optimization approach to control design; it is designed for use with Simulink block diagrams and automatically tunes system parameters based on user-defined time-domain performance constraints. The fuzzyTECH MCU-96 edition was used to design the fuzzy logic controller. It is a fully graphical tool that supports all design steps for fuzzy system engineering: structure design, definition of linguistic variables and rules, and interactive debugging.


Moreover, this tool generates C code with optimized assembly functions for the Intel MCS-96 microcontroller family. It also produces an M-code file representing the fuzzy logic controller developed with fuzzyTECH, which was imported into Matlab/Simulink to perform the fuzzy logic simulations.
III. SLIP CONTROL
A conventional slip control scheme for an inverter-fed induction motor, as used in low-performance variable-speed drives, is presented. Traditionally, the speed error is input to a PI or PID controller (block A) that sets the motor slip frequency. The stator frequency is obtained by adding the slip frequency to the rotor speed, and the stator voltage is set according to a pre-defined constant V/f law (block B), so that the motor flux is kept at its nominal value. Voltage and frequency values are then input to the voltage source inverter (block C).
Assuming that a fast response is not required, a linear approximation of the induction motor steady-state model can be used: the slip frequency is almost proportional to the torque and must be limited, setting (indirectly) a limit on both peak torque and stator current.
The use of a PID controller was considered; however, simulation results showed that, due to the relatively slow sampling rate used, the improvement achieved was minimal and would not compensate for the increase in computational effort.
Fig. 2 presents the system block diagram used to perform the PI controller simulations in the Simulink environment. The zero-order hold block is used to set the simulation sampling time equal to 5.1 ms, which is the value used in the practical implementation.

Fig. 2. Slip control system block diagram in the Simulink environment
IV. CONTROLLER DESIGN
In order to evaluate the merits of the fuzzy logic techniques compared to a classical approach, the induction motor slip controller was implemented first using PI control and then fuzzy logic.
A. PI Controller
Note that, because of the slip limitation, which introduces a non-linearity at the controller output, a PI with anti-windup mechanism must be used. The PI controller was first designed through a classical control approach (root locus); then the non-linear control design blockset (NCD) was used to optimize its response to a speed reference step and to minimize the speed variation when a torque disturbance is applied.

Fig. 3. PI controller block diagram expanded

B. Fuzzy Logic Controller
To simulate the fuzzy logic control, the PI controller block in Fig. 2 is replaced by the fuzzy controller block, which is expanded in Fig. 4. The M-file block seen in this figure was produced by the fuzzyTECH software tool and represents the same fuzzy logic controller implemented in the real-time environment.
The structure of the fuzzy logic controller is presented in Fig. 5. It has two inputs, the speed error (speed_error) and the speed error variation (error_Var), and one output, the slip increment (slip_Inc). speed_error is described with 3 membership functions, namely negative (N), zero (ZE) and positive (P); error_Var is described with 5 membership functions, namely negative large (NL), negative small (NS), zero (ZE), positive small (PS) and positive large (PL).
The defuzzification method applied was COM (center of maximum). In control applications COM is the most commonly used method, because the output value represents the best compromise of all inferred results, with high computational efficiency. Since COM considers just the maximum output values, the output was represented by singleton membership functions (which can be considered a special case of triangular M.F.). The output slip_Inc uses 7 membership functions, namely negative large (NL), negative medium (NM), negative small (NS), zero (ZE), positive small (PS), positive medium (PM) and positive large (PL). The membership functions for the inputs and the output are shown in Fig. 6. The inputs and the outputs are related through 11 rules (Table 1), and each rule output is determined by MIN-MAX inference.
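As an illustration of this inference chain, the sketch below (hypothetical Python; the membership breakpoints and the two-rule excerpt are made up, since Fig. 6 and Table 1 are not reproduced here, and only one antecedent per rule is used for brevity) strings together triangular input membership functions, MIN-MAX inference and COM defuzzification over singleton outputs:

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical normalized membership functions for the speed error input.
speed_error = {"N": lambda e: tri(e, -2, -1, 0),
               "ZE": lambda e: tri(e, -1, 0, 1),
               "P": lambda e: tri(e, 0, 1, 2)}
# Singleton positions for the output slip_Inc (special case of triangular MF).
slip_inc = {"NL": -1.0, "NS": -0.4, "ZE": 0.0, "PS": 0.4, "PL": 1.0}

def com_defuzzify(rules, e):
    """MIN-MAX inference + center-of-maximum over singleton outputs."""
    strength = {}
    for e_label, out_label in rules:
        w = speed_error[e_label](e)                 # rule activation
        strength[out_label] = max(strength.get(out_label, 0.0), w)  # MAX
    num = sum(w * slip_inc[lab] for lab, w in strength.items())
    den = sum(strength.values())
    return num / den if den else 0.0

# Two-rule excerpt: "if error P then slip PS", "if error N then slip NS".
print(com_defuzzify([("P", "PS"), ("N", "NS")], e=0.5))   # -> 0.4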


Fig. 4. Fuzzy Controller block diagram expanded


Table 1. Fuzzy Logic Slip Controller Rules

Fig. 5. Fuzzy logic controller structure

Fig. 5 shows that the inputs speed_error and error_Var are obtained from the speed error and its variation, after saturation and normalization, through blocks P1 and P3 respectively. The slip_Inc value is normalized through block P4 and then added to its previous value to give, after saturation, the slip value, which is the motor slip control output.

V. SIMULATION RESULTS
The computational simulations compare the behaviour of the PI and fuzzy logic controllers, showing their speed and slip values during motor start-up and then in response to a sudden load change from zero to the nominal motor torque value.
The first simulations were performed with the motor-load inertia value for which the PI controller parameters were optimized, which is the same value as in the experimental implementation. The inertia value was then modified in order to evaluate the controllers' sensitivity to changes in the system parameters.
Fig. 7 presents the simulation results with J = 2x10^-2 kg.m^2; the responses of both controllers are almost identical. Fig. 8 shows the simulation results with J = 10x10^-2 kg.m^2; in this case the fuzzy logic response is better: the overshoot values are smaller and the response is faster.
The controllers' behaviour can be better compared using standard performance indexes. Table 2 shows the values of IAE (integral of absolute error) and ITSE (integral of time-squared error) for the PI and fuzzy controllers, during start-up and load application conditions. These indexes show that the fuzzy logic controller performs better than the PI controller when the motor-load inertia is changed to J = 10x10^-2 kg.m^2.

Table 2. Simulation speed response performance, J = 10x10^-2 kg.m^2

VI. CONCLUSION
An evaluation of fuzzy logic techniques applied to the control of electric machines was presented. As an example, a slip control scheme for an induction motor fed by a voltage source inverter was presented. Simulation results confirmed that the fuzzy logic approach is feasible and can be an interesting alternative to conventional control, even when the system model is known and linear. The implemented fuzzy logic controller presented a slightly superior dynamic performance when compared with a more conventional scheme (PI controller with anti-windup mechanism), namely in terms of insensitivity to changes in model parameters and to speed noise, which can be an important requirement in speed/position control schemes using electrical machines, namely in robotics. Some authors claim that fuzzy logic controllers are easier to tune than conventional ones and that development times are therefore shortened; from our experience we cannot support this statement, at least for this type of application. Matlab/Simulink and the fuzzyTECH software tools were used for simulation and controller design respectively. The hardware used to implement the system is minimal: no fuzzy processors or any other specific hardware were used. A standard Intel 80C196KC microcontroller performs the control algorithms and generates the PWM waveforms for the IGBT motor drive inverter.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.195-198.

Design and Implementation of High Performance Optimal PID Controller for Fast Mode Control System
K. Pooranapriya
PhD scholar, Anna University of Technology, Coimbatore.
Email: pooranapriyak@gmail.com
Abstract: Fractional order control systems typically cannot be represented by exact analytical expressions, so filters are needed for their approximate realization. A digital realization algorithm for fractional order systems, the optimal Oustaloup digital realization algorithm, is proposed. An optimization algorithm is used to find the optimal parameters of the filter and achieve a high fitting accuracy of the fractional order system in the frequency band of interest. In flight, the attitude control model of a fast-mode (high speed) vehicle is a complex object with fast time-varying parameters over a wide uncertain range. This paper designs an optimal PID controller that avoids the increased complexity of controllers designed using modern control methods. The optimal PID controller inherits the advantages of the traditional PID controller and has stronger robustness and better control quality. In this paper the optimal PID controller is realized by the optimal Oustaloup digital algorithm, based on a time-varying nonlinear model of a high speed vehicle. D-decomposition is used to analyze the effect of the order of the optimal PID on the Mach number and angle-of-attack stability region. Simulation results show that, with proper optimal PID parameters, the stability of the high speed vehicle can be maintained over a wide range.
Keywords: fractional order PID controller; optimal Oustaloup digital algorithm; D-decomposition; fast mode vehicle.

I. INTRODUCTION
In the process of re-entry, the high speed vehicle is a strongly non-linear, complex object whose parameters vary rapidly over a wide range. The Mach number and angle of attack change over a large range and are subject to external disturbances, so the traditional PID control often finds it difficult to obtain a satisfactory control performance. At present the usual approaches are robust control, adaptive control, sliding-mode control, etc. [1][2], which not only increase the complexity of the controller design but also offer no analysis relating the parameters to the stability of the high speed vehicle.

Fractional-order PID [3] extends the integer order of the traditional PID to fractional order. Due to its stronger robustness and better control effect than the traditional PID [4][5], while inheriting the simple structure of the traditional PID, fractional-order PID has been applied in many areas. Among the fractional-order PID digital implementations [6][7], Xue Dingyu's improved Oustaloup algorithm has obtained the best approximation results. This paper proposes the optimal Oustaloup digital algorithm, based on Xue Dingyu's improved Oustaloup algorithm. In the stability domain analysis of fractional order PID [8][9], the D-decomposition method has so far only been used to analyze the stability region of the controller parameters for fractional order systems; there is no study of how the stability domain changes with the order of the fractional-order PID.

Based on the fractional PID controller realized by the optimal Oustaloup digital algorithm and on the nonlinear pitch channel model of the high speed vehicle, a simulation block diagram is built. D-decomposition is used to analyze the effect of the order of the fractional-order PID on the Mach number and angle-of-attack stability region, in order to determine the stability region of the controller designed for the high speed vehicle and to compare the fractional-order PID with the traditional PID. Finally, all the parameters of the fractional-order PID controller are determined by the ITAE index.
II. ATTITUDE CONTROL MODEL
Take the pitch channel model of the high speed vehicle as an example:

(1)

where V, θ, h, α and ωz respectively represent velocity, flight path angle, altitude, angle of attack and flight path angle rate; L and Mz represent lift and pitching moment; m, Iz, g, S and RE represent mass, moment of inertia, gravitational constant, reference area and radius of the Earth.

(2)

III. DIGITAL REALIZATION OF OPTIMAL OUSTALOUP
The block diagram of the implementation model is shown in figure 1:

Fig. 1. Block diagram model of digital realization

The approximation of the fractional differential operator s^α in the frequency band (ωb, ωh) is based on the Oustaloup filter. An additional filter is placed before the Oustaloup filter to improve the approximation accuracy of the digital realization. The fractional operator s^α is then approximated as:

(3)

where G is the added filter and Gc is the Oustaloup filter:

(4)

G is as follows:

(5)

in which the parameters of G are determined by the optimization algorithm. In order to improve the approximation accuracy of the amplitude and phase responses in the frequency band, the error between the amplitude and phase responses of the fractional calculus approximation and the actual responses is taken as the optimization performance index. That is,

(6)

where M1 and P1 represent the actual amplitude and phase responses, and M2 and P2 those of the approximation algorithm. λ is an adjustment factor that weights the amplitude response against the phase response; generally λ = 0.3 is taken. The parameters of the filter G are determined by minimizing J through the optimization.
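Since equations (3)-(6) are not reproduced here, the structure of the performance index can only be sketched. The hypothetical Python lines below assume J is a weighted sum of amplitude and phase fitting errors between the ideal s^α response and the realization's response on a frequency grid:

import numpy as np

def fit_index(w, h_filter, alpha, lam=0.3):
    """Weighted amplitude/phase fitting error of a filter against s**alpha.

    w        : frequency grid (rad/s) inside the band of interest
    h_filter : complex frequency response of the realization on that grid
    lam      : adjustment factor weighting amplitude vs. phase (0.3 in the text)
    """
    h_ideal = (1j * w) ** alpha                  # exact fractional operator
    amp_err = np.abs(20 * np.log10(np.abs(h_filter) / np.abs(h_ideal)))
    ph_err = np.abs(np.angle(h_filter) - np.angle(h_ideal))
    return np.sum(lam * amp_err + (1 - lam) * ph_err)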

The improved Oustaloup algorithm and the optimal Oustaloup algorithm were simulated in Simulink, and the frequency responses are shown in figure 2:

Fig. 2. Frequency response of simulation

IV. STABILITY DOMAIN ANALYSIS USING THE D-DECOMPOSITION METHOD

Transform the pitch channel model of the high speed vehicle into a nonlinear transfer function; the controlled system model is a function of Mach number and angle of attack:

(7)

where

(8)

Choose the fractional PID controller model as follows:


(9)

Therefore, the system closed loop transfer function is

(10)

The characteristic polynomial is:

(11)

Theorem: let S be the stability domain of the Mach number and angle of attack. When K = (Ma, α) is in S, the stability condition of the control system is that the characteristic roots lie in the left half of the s-plane. The boundaries of the stability domain S, described by the real root boundary (RRB), the infinite root boundary (IRB) and the complex root boundary (CRB), can be determined by the D-decomposition method:

RRB: P(0; K) = 0
IRB: P(∞; K) = 0
CRB: P(jω; K) = 0    (12)

The RRB border line equation can be obtained by putting s = 0 into the characteristic polynomial:

Ma = 0    (13)

Since the numerator and denominator of the high speed vehicle pitch channel transfer function are not of equal highest degree, there is no IRB boundary line. The CRB border line equation can be obtained by putting s = jω into the characteristic polynomial.
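The boundary-tracing recipe is easy to demonstrate on a toy problem. The hypothetical Python sketch below applies the same idea, setting P(jω) = 0 and splitting it into real and imaginary equations, to an integer-order plant with two PI gains as the free parameters (the paper's free parameters are Ma and α instead):

import numpy as np

def crb_curve(a, b, omegas):
    """Complex-root boundary in the (Kp, Ki) plane for
    P(s) = a(s) + (Kp*s + Ki)*b(s).

    a, b : callables returning the complex values a(jw), b(jw).
    For each w, P(jw) = 0 gives two real equations, linear in (Kp, Ki).
    """
    pts = []
    for w in omegas:
        jw = 1j * w
        A = np.array([[(jw * b(jw)).real, b(jw).real],
                      [(jw * b(jw)).imag, b(jw).imag]])
        rhs = -np.array([a(jw).real, a(jw).imag])
        try:
            kp, ki = np.linalg.solve(A, rhs)
            pts.append((w, kp, ki))
        except np.linalg.LinAlgError:
            pass
    return pts

# Toy example: plant 1/(s+1)^3 under PI control, P(s) = s(s+1)^3 + Kp*s + Ki.
pts = crb_curve(lambda s: s * (s + 1) ** 3, lambda s: 1.0 + 0j,
                np.linspace(0.01, 5, 200))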

The CRB computation uses the formula for the fractional power of a complex number,

(σ + jω)^γ = (σ² + ω²)^(γ/2) [cos(γ·arctan(ω/σ)) + j·sin(γ·arctan(ω/σ))]

where σ is the real part, ω is the imaginary part and γ is the order. The following can then be obtained:

(14)

Putting this into the characteristic polynomial and making the real and imaginary parts equal to 0, the stability domain of Ma and α can be calculated. The stability region boundary for the RRB connects with that of the CRB at ω = 0, and the stable region is above the boundary curve of the CRB.

A. The Impact on the Stability Region Caused by λ
Take μ = 1 and λ = 0, 0.2, 0.4, 0.6, 0.8, 1. The stability domains of the high speed vehicle are shown in Figure 3.

Fig. 3. The impact on the stability region when λ changes

From the figure we can see that the stability region first decreases and then increases as λ increases; the changes are apparent for angles of attack in [5°, 30°]. The stability domain of the traditional PID is the smallest, and the fractional order PID expands the stability domain. This illustrates that the fractional order PID is less sensitive to changes in the system parameters than the traditional PID, and its application is broader. Therefore λ should take a fractional value, and a smaller one; the final value of λ is determined in combination with the ITAE index.

B. The Impact on the Stability Region Caused by μ
Take λ = 1 and μ = 0, 0.2, 0.4, 0.6, 0.8, 1. The stability domains of the high speed vehicle are shown in Figure 4.

Fig. 4. The impact on the stability region when μ changes

From the figure we can see that the stability region first decreases and then increases as μ increases. The stable region is smaller than that of the traditional PID when μ is small, but is maintained when μ is 0.6-0.8, where it is larger than that of the traditional PID. Therefore μ should take a larger value. Finally, the value of μ is determined in combination with the ITAE index, or μ = 1 is taken to simplify the fractional PID design when the system parameters are not close to the critical boundary curves.

C. Fractional-Order PID Parameter Selection
Establish the fractional order PID controller together with the pitch-channel non-linear model of the hypersonic aircraft. The parameters of the fractional order PID are tuned by a genetic algorithm in combination with the ITAE index, with the integral and differential orders swept over [0:0.1:1]. The flight condition is altitude h = 30 km, Ma = 15 and an expected angle of attack of 10°. The simulation results are shown in Figure 5.

Fig. 5. ITAE index for different λ and μ

From the figure we can see that the fractional-order PID controller has a short rise time and a fast response, and better meets the rapidity requirements of the high speed vehicle. It has a higher steady-state accuracy and meets the expectations by tracking the control instructions accurately, whereas the traditional PID has a long rise time and a slow response.

D. Simulation with Interference
When a random disturbance (amplitude 2) and a pulse interference (amplitude 1) are added at the fourth second, the fractional order PID simulation results are as shown in Figure 6; the traditional PID controller is simulated under the same conditions.

Fig. 6. Simulation results of the angle of attack

From the figure we can see that the anti-jamming capability and robustness of the fractional-order PID are stronger than those of the traditional PID. The fractional-order PID controller can maintain system stability and keep good control quality at the same time in the case of strong interference. Therefore, the fractional order PID has a wider application.
wider application.
V. CONCLUSION
The optimal Oustaloup digital algorithm is used to design the fractional order PID attitude controller for the high speed vehicle, and the D-decomposition method is used to analyze the effect of the order of the fractional order PID on the Mach number and angle-of-attack stability region. The fractional order PID controller is determined by the combination of the stability domain and the ITAE index. Simulation results show that the fractional order PID not only inherits the advantages of the simple structure of the traditional PID but also achieves better control quality and greater robustness. It meets the rapidity requirements of the high speed vehicle and is not sensitive to changes in the system parameters. It can guarantee the control quality and keep the control system stable when the Mach number and angle of attack change within a wide range, so stable flight of the high speed vehicle can be achieved over a wide envelope. The fractional order PID facilitates the design of the attitude control system of the high speed vehicle for the entire journey, which has value in engineering.
REFERENCES
[1] Xu H J, Ioannou P A, Mirmirani M. Adaptive sliding mode control design for a high speed vehicle. Journal of Guidance, Control and Dynamics, 2004, 25(5): 829-838.
[2] Gao Daoxiang, Sun Zengqi, Luo Xiong, Du Tianrong. Adaptive fuzzy control of high speed vehicle based on backstepping. Control Theory and Applications, 2008, 25(5): 805-810.
[3] I. Podlubny. Fractional-order systems and PIλDμ controllers. IEEE Transactions on Automatic Control, 1999, 44(1): 208-214.
[4] Zhang Bangchu, Wang Shaofeng, Han Zipeng, Li Chenming. Fractional order PID control and digital implementation of cruise missile. Astronautics Journal, 2005, 26(5): 653-656.
[5] Zhang Bangchu, Zhu Jihong, Pan Shushan, Wang Shaofeng. Using fractional-order PIλDμ controller for control of aerodynamic missile. Journal of China Ordnance, 2006: 127-131.
[6] D. Valerio, J. Sa da Costa. Time-domain implementation of fractional order controllers. IEE Proc. Control Theory Appl., 2005, 152(5): 539-552.
[7] Xue Dingyu, Zhao Chunna, Chen Yangquan. A modified approximation method of fractional order system. Proceedings of the IEEE Conference on Mechatronics and Automation, Luoyang, China, 2006: 1043-1048.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.199-204.

Design of Axial Flux Permanent Magnet Synchronous Generator for Wind Power Generation
Rakesh Raushan and Rahul Gopinath
Mahendra Institute of Technology, Namakkal
Email: royalrakesheee@gmail.com, rahulg262@gmail.com
Abstract: Permanent magnet generators have been widely used in wind power conversion applications for a decade. The use of PM generators is justified by many advantages such as low maintenance, high power density, high efficiency and no gearbox requirement (direct drive). Cogging torque is an inherent characteristic of PM generators, due to the geometry of the generator; it affects the self-start ability of a PMSG-based wind turbine and produces noise and mechanical vibration. Thus, minimizing cogging torque is a mandatory requirement for improving the starting operation of small wind turbines. This project presents the basic analytical design and 3D finite element analysis (FEA) of a small single-sided axial flux permanent magnet synchronous generator (AFPMSG) using the MAGNET software. In the second part, design optimization is carried out to achieve cogging torque minimization by different methods.
Keywords: axial flux PMSG, cogging torque, fractional slot winding, skewing.

I. INTRODUCTION
To convert wind power into electricity, many generator concepts have been used and proposed. Most of the low-speed wind turbine generators presented are permanent-magnet (PM) machines. These have the advantages of high power density, high efficiency and reliability, since there is no need for external excitation and conductor losses are removed from the rotor.
Conventional wind energy conversion (WEC) systems with a gearbox have been more and more replaced by WEC systems without a gearbox, since systems with a gearbox are subject to vibration, noise and fatigue. Direct-drive generators must have a large number of poles, since the rotational speed of the wind turbine is very low. Generators with a large number of poles are easily realized using permanent magnets in a synchronous generator.

Basically, PM generators can be divided into radial-flux and axial-flux machines, according to the flux direction in the air gap. Axial flux machines [1] have a higher power density than radial flux machines. Transverse flux machines exist, but do not seem to have gained a foothold in wind power generation. In the PMSG family, the axial flux permanent magnet synchronous generator (AFPMSG) is superior to other types in some aspects: the overall length of an AFPMSG is short compared to a radial flux PMSG, and the PMs used in an AFPMSG can be of a flat shape that is easy to manufacture.
Funda Sahin [3] gave a detailed analytical design of an AFPMSG machine and analyzed it with 2D FEA. Azouzzi [7] presented an analytical model for analysis in wind energy applications. The analytical machine design can be optimized with the help of the MAGNET software. From the modeling point of view [2], the geometry of the axial flux machine is an actual 3D problem, which cannot be reduced to the 2D plane if an accurate electromagnetic analysis is required; so in this paper the machine is modeled as a 3D problem using the MAGNET 3D FEA software.
Cogging torque is an inherent characteristic which determines the self-start ability of PM generators, caused by the geometry of the generator; in addition, it produces noise and mechanical vibration. Muljadi [4] discussed methods to reduce cogging torque for RFPM machines.
An AFPMSG machine is designed for the required specifications in the first part, and the effort is made to reduce cogging torque by fractional slot winding, skewing and varying the pole-arc to pole-pitch ratio. Reduction of the harmonics is also discussed in the last section.

Fig. 1. Exploded View Of Axial Flux Machine


II. MACHINE STRUCTURE
AFPM machines can be designed as single-sided, double-sided or even multi-disk configurations. Naturally, the easiest and cheapest construction is the single-sided type (only one rotor and one stator disk). Fig. 1 shows the exploded view of the axial flux machine.

III. ANALYTICAL DESIGN
The basic dimensions related to the sizing of the AFPM machine can be determined by the sizing equations [5].

Fig. 2. Simplified representation of AFPMSG

The RMS phase EMF equation [3] of the AFPM machine, in terms of the average diameter of the machine (Dav) and the effective length of the stator core in the radial direction (Li), is

E = (√2/2)·Bg1·ωm·kw·Nph·Dav·Li    (1)

where Do and Di are the outer and inner diameters of the machine respectively, Bg1 is the fundamental component of the air gap flux density in tesla, ωm is the speed of the machine in rad/s, kw is the winding factor and Nph is the number of turns per phase.

The output power of the machine with a 3-phase stator system can be calculated as

Q = 3·Eph·I = (π/4)·Bg1·Ac·ωm·kw·Dav²·Li    (2)

where Ac is the ampere-conductors per meter (specific electric loading).

A. Basic Magnet Design
Using the axial cross-section shown in Fig. 3, where the flux paths are shown, the air gap flux density equation can be derived using Maxwell's equations. Assuming that the stator iron has infinite permeability, neglecting magnet leakage flux and using a simple magnetic circuit approximation [3]:

Bm = μ0·μr·Hm + Br    (3)

where Bm and Bg are the magnet and air gap flux densities respectively, Hm and Hg the magnet and air gap field strengths respectively, Br the magnet remanence and μr the relative permeability.

Fig. 3. Axial cross-section of the surface-mounted PM rotor AFPM machine

With the assumption that there is no tangential flux density component (Bg0 = Bm), the air gap flux density is derived:

Bg0 = Br / (1 + 2g·μr/Lm)    (4)

The fundamental component of the air gap flux density is given below:

Bg1 = (4/π)·Bg0·sin(αm/2)    (5)

where g is the air gap in mm, Lm is the magnet axial length in mm and αm is the magnet pitch in electrical angle.

B. Specific Electric Loading
Since there are 3 phases, 2Nph conductors in each phase and √2·I as the peak current, the fundamental component of the ampere-conductors per meter (specific electric loading) is defined as

Ac = 6·√2·I·Nph / (π·Dav)    (6)
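Equations (1) and (3)-(6) chain together directly. The sketch below (hypothetical Python; the winding factor, remanence, relative permeability, magnet pitch and speed are assumed values, not stated in this section) shows the order of evaluation using the geometry of Table 1 below:

import math

g, Lm = 1.5e-3, 4e-3          # air gap and magnet axial length (Table 1), m
Do, Di = 0.215, 0.085         # stator outer / inner diameter (Table 1), m
Nph = 372                     # turns per phase (Table 1)
kw = 0.95                     # winding factor, assumed
Br, mu_r = 1.2, 1.05          # NdFeB remanence / permeability, assumed
alpha_m = math.radians(120)   # magnet pitch in electrical angle, assumed
w_m = 2 * math.pi * 375 / 60  # mechanical speed, 375 rpm assumed

Dav = (Do + Di) / 2           # average diameter
Li = (Do - Di) / 2            # effective radial core length
Bg0 = Br / (1 + 2 * g * mu_r / Lm)                        # eq. (4)
Bg1 = (4 / math.pi) * Bg0 * math.sin(alpha_m / 2)         # eq. (5)
E = (math.sqrt(2) / 2) * Bg1 * w_m * kw * Nph * Dav * Li  # eq. (1)
print(f"Bg0={Bg0:.2f} T  Bg1={Bg1:.2f} T  E={E:.1f} V")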

Table 1. Design of the AFPM machine

Parameter                  Symbol   Designed value
Stator outside diameter    Do       215 mm
Stator inside diameter     Di       85 mm
Stator yoke length         Lya      30 mm
Number of poles            2p       12
No. of slots/pole/phase    q        1
Number of turns/phase      Nph      372
Air gap length             g        1.5 mm
Magnet axial length        Lm       4 mm
Tooth bottom width         Wtbi     3 mm
Tooth top width            Wti      6.5 mm
Slot depth                 Db       19 mm
Slot top depth 1           dt1      1.5 mm
Slot top depth 2           dt2      1.5 mm
Total slot depth           Ds       22 mm


For the required specifications given in Appendix I, Table 1 shows the final analytically designed dimensions of the AFPM machine with a single rotor and stator.

IV. FINITE ELEMENT ANALYSIS
The design completed in the previous section is now analyzed by the finite element method. To visualize the actual machine in the software, a 3D analysis has to be carried out rather than a 2D analysis. The static analysis of the machine is used to find the distribution of the static fields and flux linkages, while the transient analysis is performed using the TRANSIENT 3D capability of the MAGNET package. The parameters that can be calculated from the transient analysis include magnetic torque, position, speed, mass, moment of inertia, acceleration, etc.

A. FEA Model
The finite element analysis is a versatile tool to design a given machine to meet the required design specification, in terms of the performance and power density of the machine, and to reduce the cost of product development. The FEA model of the proposed AFPM machine (dimensions given in Table 1) is shown in Fig. 4.

Fig. 4. 3D FEA model

B. Flux Density Distribution
The study of the magnetic flux density distribution in different parts of the machine is essential to determine the magnetic losses and the saturation factor. The magnetic flux density distribution of the machine is shown in Fig. 5.

Fig. 5. Flux density distribution

C. Flux Linkage and Induced EMF
The main goal in designing the PM rotor is to provide the maximum number of flux lines per pole, with minimum cost and with the minimum amount of flux leakage. The flux linkage determines the EMF induced in the coil. The flux linkage and induced EMF waveforms under the no-load condition are obtained from the 3D static analysis and are shown in Fig. 6. Under the full load condition, the flux linkage, induced EMF and armature current of the machine are obtained using 3D transient analysis and are shown in Fig. 7 and Fig. 8.

Fig. 6. Flux linkage and EMF under no load

Fig. 7. Flux linkage and EMF under load

Fig. 8. Full load current

V. FRACTIONAL SLOT WINDING
For low speed machines, the number of slots per pole per phase (q) will normally be two or less, as such machines have a large number of poles. An integral slot winding with a small integer q gives rise to appreciable tooth harmonics in the induced EMF, because the corresponding winding elements of each phase occupy similar positions with respect to the pole axis under different poles, so the harmonic EMFs add algebraically. In a fractional slot winding [5] they occupy dissimilar positions with respect to the pole axis, and the displacement of the winding is selected so that the higher order harmonics are decreased without affecting the fundamental. Fig. 9 shows the harmonics present in the no-load induced EMF (Fig. 6) for q = 1, i.e. the integral slot winding.

Fig. 9. EMF harmonic spectrum for q = 1

Fig. 11. Flux linkage and EMF waveforms for fractional slot winding (flux linkage in Wb and induced EMF in volts, against electrical angle)

Fig. 9. EMF Harmonic spectrum for q=1.

Fig. 12. EMF Harmonic spectrum for fractional slot winding
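The THD values quoted above follow directly from the harmonic amplitudes in such a spectrum: THD is the RMS of the higher harmonics divided by the fundamental. A minimal sketch with made-up per-unit amplitudes (not the data behind Figs. 9 and 12):

    import math

    def thd(harmonics):
        """THD from harmonic amplitudes [h1, h2, h3, ...], h1 = fundamental."""
        distortion = math.sqrt(sum(h * h for h in harmonics[1:]))
        return distortion / harmonics[0]

    # Illustrative spectra in per unit of the fundamental (assumed numbers).
    integral_slot = [1.00, 0.02, 0.12, 0.01, 0.10, 0.005, 0.08]
    fractional_slot = [1.00, 0.02, 0.10, 0.01, 0.09, 0.005, 0.07]

    print(f"integral slot THD:   {100 * thd(integral_slot):.1f} %")
    print(f"fractional slot THD: {100 * thd(fractional_slot):.1f} %")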

In addition to the harmonic reduction, fractional slot winding
also helps in reducing cogging torque, which makes this
design modification attractive. This is dealt with in the next
section.

VI. COGGING TORQUE

Fig. 10. Fractional slot winding layout

Cogging torque is the torque produced at the shaft when
the rotor of a PM generator is rotated with respect to the
stator under the no-load condition. It occurs due to the
reluctance variations in the air gap, mainly because of
slotting.


This component exists even when there is no armature
current, so it can be determined with the FE method by
calculating the torque for several rotor positions in the
no-load case. Fig. 13 shows the cogging torque of the
machine with q = 1 over one slot angle; it amounts to
14.29% of the full-load torque. Fig. 14 shows the full-load
torque.

A. Effect of Cogging Torque on Wind Generators

The impact of cogging torque on the performance of
PMSG based wind generators is considerable and is
explained in detail in this section. The power available in
the wind is

    Pa = (1/2) ρ A V^3                    (7)

The mechanical power from the blades of the wind
turbine, Pt, develops the corresponding shaft torque T:

    Pt = Cp Pa = T ωr                     (8)

where ρ is the density of air, A is the swept area of the
blades, Cp is the performance coefficient, V is the wind
speed and ωr is the rotor speed of the wind turbine. The
torque coefficient [6] is

    CT = T / Tmax                         (9)

Since Tmax = Pa / ωr, the performance coefficient can be
written as

    Cp = λ CT                             (10)

where λ is the tip speed ratio. When Cp = 0.593 (the Betz
limit), CT = CTmax. A typical Cp curve is shown in
Fig. 15. This characteristic defines Cp as a function of the
tip-speed ratio (TSR), given by

    λ = ωr R / V                          (11)

where R is the radius of the wind turbine rotor. The
operating value of the TSR depends on the maximum
value of Cp; the upper limit of the TSR is set by the noise
generated by the wind turbine.

Fig. 13. Cogging torque for integral slot winding

Fig. 14. Torque under full load

Fig. 15. Cp versus TSR curve

Low speed machines require a high torque coefficient,
particularly during starting: the wind turbine needs a high
starting torque to overcome stall, and if the cogging torque
also acts against the movement, it clearly needs even more
torque to start. The effect of cogging torque is also
significant during running. It excites the structure of the
wind turbine; in a small wind turbine at low wind velocity
the rotor speed is low and the smoothing effect of inertia is
limited. Noise and mechanical vibration excited by the
cogging torque may threaten the integrity of the mechanical
structure of an improperly designed small wind turbine.
It is therefore concluded that minimization of cogging
torque is imperative in the design of an AFPMSG for wind
energy applications. Many reduction techniques are
available; this paper concentrates on three methods:
1) Skewing the stator teeth.
2) Reducing the pole arc to pole pitch ratio.
3) Fractional slot winding.
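A minimal numeric sketch of equations (7)-(11); the rotor speed is the rated 500 rpm from Appendix I, while the air density, rotor radius, wind speed and Cp are illustrative assumptions:

    import math

    rho = 1.225                          # air density in kg/m^3 (assumed)
    R = 1.5                              # rotor radius in m (assumed)
    V = 8.0                              # wind speed in m/s (assumed)
    w_r = 500.0 * 2.0 * math.pi / 60.0   # rated 500 rpm expressed in rad/s
    Cp = 0.40                            # performance coefficient (assumed < 0.593)

    A = math.pi * R ** 2                 # swept area of the blades
    Pa = 0.5 * rho * A * V ** 3          # eq. (7): power available in the wind
    Pt = Cp * Pa                         # eq. (8): mechanical power from the blades
    T = Pt / w_r                         # eq. (8): corresponding shaft torque
    lam = w_r * R / V                    # eq. (11): tip speed ratio
    CT = Cp / lam                        # eq. (10) rearranged: torque coefficient

    print(f"Pa = {Pa:.0f} W, Pt = {Pt:.0f} W, T = {T:.2f} N m")
    print(f"lambda = {lam:.2f}, CT = {CT:.3f}")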

B. Skewing the Stator Teeth


Cogging torque can be reduced by skewing the stator
stack or skewing the magnet pole. Skewing the stator


stack means spatially skewing one end of the stator stack a few
degrees with respect to the other end. Usually a full skew of
one slot pitch is implemented to reduce the cogging torque.
Fig. 16 shows the cogging torque for the skewed stator.

Fig. 16. Cogging torque for 7 deg skewed stator

C. Pole Arc to Pole Pitch Ratio

By reducing the pole arc to pole pitch ratio, the cogging
torque can be reduced. Fig. 17 shows the cogging torque
for the machine with pole arc to pole pitch ratios of 0.75
and 0.65; at a ratio of 0.65 the cogging torque is about 70%
lower than at 0.75.

Fig. 17. Cogging torque for 75% and 65% pole arc to pole pitch ratio


D. Cogging Torque for Fractional Slot Winding

In Section V it was mentioned that fractional slot winding
also minimizes cogging torque [8]. Fig. 18 shows the
cogging torque for the fractional slot winding with slots per
pole per phase q = 1.5. The cogging torque is greatly
reduced.

Fig. 18. Cogging torque for fractional slot winding

VII. CONCLUSION
This paper presented the basic analytical design of a small
axial flux permanent magnet wind generator and its 3D
static and transient FEA. The flux distribution, induced
EMF and its harmonic spectrum in the AFPMSG were
obtained and analyzed. It was also shown that a significant
reduction in cogging torque is achieved by optimizing the
basic design through skewing, fractional slot winding and a
reduced pole arc to pole pitch ratio. This design could be
implemented practically and its performance verified
against the results obtained.

Appendix I. Specification of the AFPMSG model

Parameter                       Specification
Output kVA, Q                   1 kVA
Rated speed of the machine      500 rpm
Nph
Frequency, f                    50 Hz
Phase voltage, Eph              110 V
Number of poles, p              12
Average flux density, Bav       0.608
Ampere conductors, ac           20000

REFERENCES

[1] Yicheng Chen, Pragasen Pillay and Azeem Khan, "PM Wind Generator Comparison of Different Topologies", 39th IAS Annual Meeting, IEEE Industry Applications Conference, Vol. 3, pp. 1405-1412, 2004.
[2] P. P. Silvester and R. Ferrari, "Finite Elements for Electrical Engineers", Cambridge University Press, 2001.
[3] Funda Sahin, "Design and Development of a High-Speed Axial-Flux Permanent-Magnet Machine", Technische Universiteit Eindhoven, 2001.
[4] E. Muljadi and J. Green, "Cogging Torque Reduction in a Permanent Magnet Wind Turbine Generator", 21st American Society of Mechanical Engineers Wind Energy Symposium, Reno, Nevada, January 14-17, 2002.
[5] Alexander S. Langsdorf, "Theory of Alternating Current Machinery", Tata McGraw-Hill edition, 1988.
[6] Z. Lubosny, "Wind Turbine Operation in Electric Power Systems: Advanced Modeling", Springer-Verlag, 2003.
[7] J. Azzouzi, G. Barakat and B. Dakyo, "Analytical Modeling of an Axial Flux Permanent Magnet Synchronous Generator for Wind Energy Application", IEEE Transactions on Industry Applications, Vol. 40, No. 3, pp. 771-779, 2004.
[8] Ion Boldea, "Variable Speed Generators", Taylor and Francis, Boca Raton, 2006.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.205-208.

LED Application in Power Factor Correction


S. Alexzander and V. Prem Kumar
IV Year, BE-EEE Department of Electrical & Electronics Engineering,
Srinivasa Institute of Engineering and Technology
Email: alexzander.eee@gmail.com
Abstract: This paper presents a single-stage fly back power
factor-correction (PFC) front-end for high-brightness
light-emitting-diode (HB LED) applications. The proposed PFC
front-end circuit combines the PFC stage and the dc/dc stage
into a single stage. Experimental results obtained on a 78-W
(24-V/3.25-A) prototype circuit show that at VIN = 110 V AC,
the proposed PFC front-end for HB LED applications can
achieve an efficiency of 87.5%, a power factor of 0.98, and a
total harmonic distortion (THD) of 14%, with line-current
harmonics that meet the IEC 61000-3-2 Class C standard.
Index terms: Driver, high-brightness light emitting diodes
(HB LEDs), power factor correction (PFC), single-stage, fly
back.

I. INTRODUCTION
The technology and performance of high-brightness light
emitting diodes (HB LEDs) has undergone significant
improvements driven by new applications in liquid-crystal
display (LCD) backlighting, automobiles, traffic lights,
and general-purpose lighting. As a solid state light source
which does not contain mercury, HB LEDs have been
widely accepted because of their superior longevity, low
maintenance requirements, and continuously-improving
luminance, and they have great potential to replace
existing lighting sources such as incandescent lamps and
fluorescent lamps in the future.
Another active PFC implementation employs a single
stage ac/dc converter, where the PFC stage is integrated
with the dc/dc stage, resulting in a reduced complexity
and cost. There are two embodiments of the single-stage
PFC ac/dc converters: without and with a bulk capacitor
at the primary side, as illustrated in Figs. 2 and 3,
respectively. Although the fly back single-stage PFC
circuit in Fig. 2 has the advantage of a low component
count, its output voltage has a high ripple at twice the line
frequency unless very large output capacitors are used.
For an LED load, a small change in the driving voltage
results in an increase of the LED current by orders of
magnitude. Therefore, with this approach, a post-regulator
is often required, which adds cost and lowers the
efficiency.
The fly back single-stage PFC topology shown in Fig. 3
presents one of the most cost-effective single-stage
solutions. In this converter, the PFC stage operates in

discontinuous conduction mode (DCM), while the dc/dc


stage operates at the DCM/CCM boundary. A low input-current harmonic distortion can be achieved due to the
inherent property of the DCM boost converter to draw a
near sinusoidal current if its duty cycle is held relatively
constant during a half line cycle. However, voltage VB
across bulk capacitor CB is unregulated and at high lines
it can increase to non-practical levels. To reduce the bulk
capacitor voltage, one terminal of the boost inductor
winding is connected to a tapping point of the primary
winding of the fly back transformer, which provides a
negative magnetic feedback. This solution has been
successfully applied in adapter/charger applications for
the universal line voltage, where the line current
harmonics need to meet IEC 61000-3-2 Class D limits.
However, the tapping of the fly back primary winding
also results in a zero-crossing distortion due to the dead
angle, as shown in Fig. 4. In fact, as long as the
instantaneous line voltage is lower than the feedback
voltage at the tapping point, no current is drawn from the
input, which deteriorates the power factor and the line
current harmonics.
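One way to see the size of this dead angle: input current starts to flow only when |vin(t)| exceeds the reflected voltage (N1/NP)VB at the tapping point, so the conduction delay after each zero crossing follows from an arcsine. A rough sketch with assumed values (the tapping ratio and bus voltage here are illustrative, not the prototype's):

    import math

    Vin_rms = 110.0        # line voltage in V rms (low-line case)
    VB = 160.0             # bulk capacitor voltage in V (assumed)
    n1 = 0.3               # tapping ratio N1/NP (assumed)

    v_peak = math.sqrt(2.0) * Vin_rms
    v_tap = n1 * VB        # feedback voltage seen at the tapping point

    # No input current is drawn while |vin| < v_tap: the dead angle.
    dead_angle = math.degrees(math.asin(v_tap / v_peak))
    print(f"conduction starts {dead_angle:.1f} deg after each zero crossing")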
Therefore, applying the fly back single-stage PFC
topology in Fig. 3 for lighting applications, where the line
current harmonics have to meet the more stringent limits
set by IEC 61000-3-2 Class C standard, presents a
challenging task, especially when the input voltage is
universal.

In this paper it is shown that by optimizing the location of


the tapping point of the primary-winding, the boost
inductor, and the fly back transformer, high power factor


and low THD can be achieved such that the line current
harmonics meet the limits set by the IEC 61000-3-2 Class
C standard, with a relatively high efficiency, if the line
voltage range is limited to either the universal low-voltage
(90-140 V AC) or the high-voltage range (180-276 V AC).

II. OPERATION OF THE PROPOSED SINGLE-STAGE PFC
FRONT-END FOR HB LED APPLICATIONS

To facilitate the understanding of operation, Fig. 5 shows
the topological stages while Fig. 6 shows the key
waveforms of the proposed PFC front-end for HB LEDs.
There are five basic operation modes; a detailed description
of each mode is given below, assuming that the input
voltage is positive.

1) Mode (a): Switch Q1 is turned on at t = T0; no current
flows through the secondary winding NS because diode D5
is reverse biased. The sum of inductor current iLB and
discharging current iCB of bulk capacitor CB flows through
the switch. Magnetizing current iM of the transformer
increases linearly with a slope of VB/LM, where LM is the
magnetizing inductance of the transformer and VB is the
voltage across bulk capacitor CB. When the input voltage is
higher than (N1/NP)VB, inductor current iLB can flow
because diodes D1 and D4 are forward biased; otherwise no
current iLB can flow. Inductor current iLB increases
linearly with a slope of (vin - (N1/NP)VB)/LB. Magnetizing
current iM of the fly back transformer is a function of iLB
and iCB, and can be expressed as

(1)

It can be seen that during the on-time of switch Q1, energy
from two sources is stored in the fly back transformer: part
of the magnetizing energy is supplied from the bulk
capacitor, whereas part of the energy is supplied directly
from the line. Generally, direct energy storage from the line
improves the conversion efficiency.

2) Mode (b): Switch Q1 is turned off at t = T1.
Magnetizing current iM continues to increase, and output
capacitor COSS of switch Q1 is charged by the sum of
currents iLB and iCB until the voltage across capacitor
COSS reaches (VB + nVO), where n = NP/NS.

3) Mode (c): Switch Q1 is turned off and diode D5
conducts at t = T2. Energy stored in the transformer is
released to the secondary side; inductor current iLB
decreases and continues to flow through primary winding
N2 to charge the bulk capacitor, i.e., iCB = iLB. The down
slope of inductor current iLB is given by

(2)

Magnetizing current iM can be expressed as

(3)

Rearranging (3) gives secondary current iS as expressed by

(4)

It can be seen from (4) that during the off-time the
secondary current is composed of two components, namely
the reflected magnetizing current and the reflected primary
current, which draws energy directly from the input line.
Generally, direct energy transfer from the line to the output
improves the conversion efficiency.

4) Mode (d): Switch Q1 is turned off and inductor current
iLB reaches zero at t = T3. Energy stored in the transformer
continues to be released to the secondary side.

5) Mode (e): Switch Q1 remains turned off and secondary
current iS falls to zero at t = T4. Capacitor COSS and
magnetizing inductance LM form a series resonant circuit.
During the resonance the switch voltage decreases and
reaches a minimum of (VB - nVO); the switch is turned on
again at this moment (t = T5), achieving partial
zero-voltage switching.

IV. PERFORMANCE EVALUATION

A PFC front-end employing the proposed scheme has been
simulated to verify the analysis in the previous section and
demonstrate its performance. Table I lists the key
components. Fig. 11 shows the measured line current and
voltage waveforms at VIN = 110 V AC (60 Hz) on a
prototype of the proposed PFC front-end with an output
voltage VO = 24 V and output current IO = 3.25 A. With a
ratio C = 0.16 (LB = 83 uH and LM = 530 uH) and n1 = 0.5
(N1 = N2 = 12 turns), the measured THD, PF, VB and
efficiency are 34.95%, 0.9284, 166.2 V and 88.65%,
respectively. When LB is increased to 166 uH, i.e., ratio
C = 0.31, the measured THD, PF, VB and efficiency are
40.56%, 0.9014, 147.2 V and 89.11%, respectively. It can
be seen that a higher boost inductance LB lowers bus
voltage VB, which agrees with the theoretical analysis.
Better efficiency results when the boost inductance is
increased, since the peak inductor current decreases, with
lower switching loss and conduction loss. However, as
inductance LB increases, bus voltage VB decreases, leading
to a lower reset voltage for inductor LB. As a result,
inductor LB might not be completely reset during the
turn-off time of switch Q1, and it enters CCM operation at
the peak of the input voltage, significantly deteriorating the
power factor and THD. Measured waveforms in Fig. 12(b)
show that inductor LB enters CCM operation from DCM
operation after LB is increased, and serious distortion of the
line current at the peak can be observed, as shown in
Fig. 11(b).
The tapping location also has a significant effect on THD
and PF as well as on efficiency and bus voltage. Therefore,
the tapping location as well as inductor LB has to be
optimized in order to obtain good efficiency while meeting
the THD and PF requirements.

V. CONCLUSION
A single-stage fly back power-factor-correction front-end
for HB LED applications is presented in this paper. With
the integration of the PFC stage and the dc/dc stage, a
significant reduction of component count, size and cost can
be achieved. Results obtained show that at VIN = 110 V
AC, VO = 24 V and IO = 3.25 A, the proposed PFC
front-end for an LED driver achieves an efficiency of
around 87.50%, a power factor of 0.98 and a total harmonic
distortion (THD) of 14% for the line current, with harmonic
contents meeting the IEC 61000-3-2 Class C standard.
Experimental results have also been obtained at high line
when the inductance of the input current-shaping inductor is
increased. The measured output voltage ripple with an
actual LED load at VO = 24 V, IO = 3.8 A is less than
20 mV. Therefore, LED strings can be directly driven
without a post-regulator, improving the efficiency, lowering
the cost and reducing the size.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.209-212.

Wireless Power Transmission


Suraj and M. Vinoth
3rd year EEE, Srinivasa Institute of Engg and Technology, Parivakkam Poonamallee bypass road, Chennai - 56
Email: Suraj.kkd@gmail.com, Vinothkumarm_91@yahoo.com
Abstract: In this paper we present the concept of
transmitting power without wires, i.e., transmitting power
as microwaves from one place to another in order to reduce
transmission and distribution losses. This concept is known
as Microwave Power Transmission (MPT). We also discuss
the technological developments in Wireless Power
Transmission (WPT), the advantages, disadvantages,
biological impacts and applications of WPT, and an
experimental demonstration of efficient non-radiative power
transfer over distances up to 8 times the radius of the coils,
in which 60 watts were transferred with ~40% efficiency
over distances in excess of 2 meters.


I. INTRODUCTION
One of the major issues in a power system is the loss that
occurs during the transmission and distribution of electrical
power. As the demand increases day by day, the power
generation increases and the power loss also increases. The
major share of power loss occurs during transmission and
distribution; the percentage loss is approximated as 26%.
The main reason for this loss is the resistance of the wires
used in the grid. The efficiency of power transmission can
be improved to a certain level by using high-strength
composite overhead conductors and underground cables
that use high-temperature superconductors, but the
transmission is still inefficient. According to the World
Resources Institute (WRI), India's electricity grid has the
highest transmission and distribution losses in the world: a
whopping 27%. Numbers published by various Indian
government agencies put that figure at 30%, 40% or greater
than 40%. This is attributed to technical losses (grid
inefficiencies) and theft. The problem can be addressed by
choosing an alternative option for power transmission that
provides much higher efficiency, low transmission cost and
protection against power theft. Microwave power
transmission is one of the promising technologies and may
be the right alternative for efficient power transmission.
II. HOW WIRELESS ENERGY COULD WORK
Resonance is a phenomenon that causes an object to vibrate
when energy of a certain frequency is applied, and two
resonant objects of the same frequency tend to couple very
strongly. Resonance can be seen in musical instruments, for
example: when you play a tune on one, another instrument
with the same acoustic resonance will pick up that tune and
visibly vibrate. Instead of acoustic vibrations, this system
exploits the resonance of electromagnetic waves.
Electromagnetic radiation includes radio waves, infrared
and X-rays. Typically, systems that use electromagnetic
radiation, such as radio antennas, are not suitable for the
efficient transfer of energy because they scatter energy in
all directions, wasting large amounts of it into free space.
Fig. 1

To overcome this problem, the team investigated a special
class of "non-radiative" objects with so-called "long-lived
resonances". When energy is applied to these objects it
remains bound to them, rather than escaping to space.
"Tails" of energy, which can be many metres long, flicker
over the surface. If another resonant object with the same
frequency is brought close enough to these tails, the energy
can tunnel from one object to another. Hence, a simple
copper antenna designed to have a long-lived resonance
could transfer energy to a laptop with its own antenna
resonating at the same frequency; the computer would be
truly wireless. Any energy not diverted into a gadget or
appliance is simply reabsorbed. The systems described
would be able to transfer energy over three to five metres.
This would work in a room, say, but could be adapted to
work in a factory, and could also be scaled down to the
microscopic or nanoscopic world.


III. WIRELESS POWER TRANSMISSION SYSTEM


William C. Brown, the pioneer of wireless power
transmission technology, designed, developed and
demonstrated a unit showing how power can be transferred
through free space by microwaves. The concept of the
wireless power transmission system is explained with the
functional block diagram shown in Fig. 2. On the
transmission side, the microwave power source generates
microwave power, and the output power is controlled by
electronic control circuits. The waveguide ferrite circulator,
which protects the microwave source from reflected power,
is connected to the microwave power source through the
coax-waveguide adaptor. The tuner matches the impedance
between the transmitting antenna and the microwave
source. The attenuated signals are then separated, based on
the direction of signal propagation, by the directional
coupler.

The transmitting antenna radiates the power uniformly
through free space to the rectenna. On the receiving side, a
rectenna receives the transmitted power and converts the
microwave power into DC power. The impedance matching
circuit and filter are provided to set the output impedance
of the signal source equal to that of the rectifying circuit.
The rectifying circuit, consisting of Schottky barrier diodes,
converts the received microwave power into DC power.

Fig. 2. Functional Block Diagram of Wireless Power
Transmission System

IV. COMPONENTS OF THE WPT SYSTEM

The primary components of wireless power transmission
are the microwave generator, the transmitting antenna and
the receiving antenna (rectenna).

A. Microwave Generator
The microwave transmitting devices are classified as
microwave vacuum tubes; a magnetron is widely used for
WPT experimentation. The microwave transmission often
uses the 2.45 GHz or 5.8 GHz ISM bands. Other choices of
frequency are 8.5 GHz, 10 GHz and 35 GHz. The highest
efficiency, over 90%, is achieved at 2.45 GHz among all
these frequencies.

B. Transmitting Antenna
The slotted waveguide antenna, microstrip patch antenna
and parabolic dish antenna are the most popular types of
transmitting antenna. The slotted waveguide antenna is
ideal for power transmission because of its high aperture
efficiency (> 95%) and high power-handling capability.

C. Rectenna
The rectenna is a passive element consisting of an antenna
and a rectifying circuit, with a low-pass filter between the
antenna and the rectifying diode. The antenna used in a
rectenna may be a dipole, microstrip or parabolic dish
antenna. The patch dipole antenna achieves the highest
efficiency among them all. The performance of printed
rectennas is shown in Table 1.

Table 1. Performance of Printed Rectenna

V. EXPERIMENT STUDY
In 2007, Marin Soljacic led a five-member team of
researchers at MIT (funded by the Army Research Office,
the National Science Foundation and the Department of
Energy) that experimentally demonstrated the transfer of
electricity without the use of wires. These researchers were
able to light a 60 W bulb from a source placed seven feet
away, with absolutely no physical contact between the bulb
and the power source.
The first copper coil (24 inches in diameter) was connected
to the power source and the second to the bulb, and both
were made to resonate at a frequency of 10 MHz. The bulb
glowed even when different objects (such as a wooden
panel) were placed between the two coils. The system
worked with 40% efficiency, and the power that was not
utilized remained in the vicinity of the transmitter and did
not radiate into the surrounding environment.
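As a rough illustration of the resonance condition behind this experiment, each coil with its capacitance forms an LC resonator with f0 = 1/(2π√(LC)). A minimal sketch; the inductance value is an assumption, and only the 10 MHz target comes from the text:

    import math

    f0 = 10e6          # resonant frequency from the experiment, 10 MHz
    L = 25e-6          # assumed coil self-inductance in H

    # Capacitance needed so that f0 = 1 / (2*pi*sqrt(L*C)).
    C = 1.0 / ((2.0 * math.pi * f0) ** 2 * L)
    print(f"a {L * 1e6:.0f} uH coil needs C = {C * 1e12:.1f} pF for 10 MHz")

    # Cross-check the resonant frequency.
    f_check = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    print(f"check: f = {f_check / 1e6:.2f} MHz")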
A. How Safe is WPT?
Human beings or other objects placed between the
transmitter and receiver do not hinder the transmission of
power. However, does magnetic coupling or resonance
coupling have any harmful effects on humans? MIT's
researchers are quite confident that WPT coupling
resonance is safe for humans. They say that the magnetic
fields tend to interact very weakly with the biological

tissues of the body, and so are not prone to cause any


damage to any living beings.


Fig. 3. 60 W light bulb being lit from 2 m away. Note the obstruction in the lower image.

VI. ADVANTAGES, DISADVANTAGES AND BIOLOGICAL
IMPACTS OF WPT

A. Advantages
A wireless power transmission system would completely
eliminate the existing high-tension power transmission line
cables, towers and substations between the generating
station and consumers, and would facilitate the
interconnection of electrical generation plants on a global
scale. It offers more freedom in the choice of both receiver
and transmitter; even mobile transmitters and receivers can
be chosen for the WPT system. The cost of transmission
and distribution would decrease, and the cost of electrical
energy for the consumer would also be reduced. Power
could be transmitted to places where wired transmission is
not possible. Transmission loss is negligible in wireless
power transmission; therefore, the efficiency of this method
is much higher than that of wired transmission. Power is
available at the rectenna as long as the WPT is operating.
Power failure due to short circuits and faults on cables
would never occur, and power theft would not be possible
at all.

B. Disadvantages
The capital cost for practical implementation of WPT
seems to be very high, and the other disadvantage of the
concept is the interference of microwaves with present
communication systems.

C. Biological Impacts
Common beliefs fear the effect of microwave radiation.
However, studies in this domain repeatedly show that the
microwave radiation level would never be higher than the
dose received while opening a microwave oven door,
meaning it is only slightly higher than the emissions created
by cellular telephones.

VII. APPLICATIONS OF WPT
Generating power by placing satellites with giant solar
arrays in geosynchronous Earth orbit and transmitting the
power as microwaves to the earth, known as Solar Power
Satellites (SPS), is the largest application of WPT.

A. Consumer Electronics
• Direct wireless powering of stationary devices (flat-screen
TVs, digital picture frames, home theater accessories,
wireless loudspeakers, etc.), eliminating expensive custom
wiring, unsightly cables and wall-wart power supplies.

Fig. 4

• Automatic wireless charging of mobile electronics
(phones, laptops, game controllers, etc.) in home, car,
office and Wi-Fi hotspots, while devices are in use and
mobile.

Fig. 5

VIII. FUTURE OF WPT

MIT's WPT is only 40 to 45% efficient; according to
Soljacic, it has to be twice as efficient to compete with
traditional chemical batteries. The team's next aim is to get
a robotic vacuum or a laptop working, charging devices
placed anywhere in the room, and even robots on factory
floors. The researchers are also currently working on the
health issues related to this concept and have said that in
another three to five years' time they will come up with a
WPT system for commercial use. WPT, if successful, will
definitely change the way we live. Imagine cellphones,
laptops and digital cameras getting self-charged! Wow!
Let us hope the researchers come up with the commercial
system soon. Till then, we wait in anticipation!



Fig. 6

IX. CONCLUSION
The concepts of microwave power transmission (MPT) and
of a wireless power transmission system were presented,
together with the technological developments in wireless
power transmission (WPT). The advantages, disadvantages,
biological impacts and applications of WPT were discussed,
and the efficiency of non-radiative power transfer was
experimentally demonstrated over distances up to 8 times
the radius of the coils, transferring 60 watts with ~40%
efficiency over distances in excess of 2 meters.

Fig. 7

This concept offers greater possibilities for transmitting
power with negligible losses and ease of transmission than
any invention or discovery heretofore made. Dr. Neville of
NASA states: "You don't need cables, pipes, or copper
wires to receive power. We can send it to you like a cell
phone call, where you want it, when you want it, in real
time." We can expect with certitude that in the next few
years wonders will be wrought by its applications, if all the
conditions are favourable.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.213-218.

Power Flow Control using UPFC with Fuzzy Logic Controller


G. Kumara swamy1, K. Suresh2 and K. Manohar3
1,2Asst. Professor, 3M.Tech Student
Rajeev Gandhi Memorial College of Engineering & Technology, Andhra Pradesh.
Email: 1kumar1718@gmail.com, 2suri01_276@yahoo.co.in, 3Kalangiri.manohar@gmail.com
Abstract: In modern deregulated power systems, much
focus has been placed on ways to better utilize the existing
transmission system. A new dimension in this direction is
the proper adjustment of parameters, including line
impedances, bus voltage magnitudes and phase angles.
High-speed power-electronics-based FACTS controllers
such as the STATCOM, TCSC, SSSC and phase shifters
independently control each variable parameter in
transmission lines. The most versatile FACTS controller is
the unified power flow controller (UPFC), which makes it
possible to control all these parameters independently as
well as simultaneously, shifting from one scheme to
another. Fuzzy logic adjustment of these parameters is
carried out; in this fuzzy controller the Mamdani fuzzy
inference engine is chosen for accurate and optimal results.
Simulation results are presented for the IEEE 30-bus system
using the MATLAB software package.
Keywords: Power flow, UPFC, Mamdani fuzzy inference
engine, power injection.

I. INTRODUCTION
The UPFC can be used for power flow control, loop-flow
control and load sharing among parallel corridors, voltage
regulation, enhancement of transient stability and mitigation
of system oscillations. Steady state models treat the UPFC
either as one series voltage source and one shunt current
source, or as two voltage sources. This work focuses mainly
on the development of a steady state UPFC model and its
implementation in a power flow algorithm. Firstly, the
power injection transformation of a two-voltage-source
UPFC model is derived; in rectangular form the power flow
equations are quadratic, and a certain degree of numerical
advantage can be obtained from this form. The rectangular
form also leads naturally to the idea of an optimal
multiplier. Issues in implementing the UPFC in the power
flow have been considered in detail for the various power
flow algorithms adopted: in one approach the sending end
of the UPFC is transformed into a PQ bus, whilst the
receiving end is transformed into a PV bus, and a standard
Newton-Raphson load flow is carried out. However, since
the UPFC parameters are computed after the load flow has
converged, there is no way of knowing during the iterative
process whether or not the UPFC parameters are within
limits. This has provided the motivation for developing,
from first principles, a new UPFC model suitable for
incorporation into an existing Newton-Raphson load flow
algorithm.


II. UPFC BASIC STRUCTURE
Within the framework of traditional power transmission
concepts, the UPFC is able to control, simultaneously or
selectively, all the parameters affecting power flow in the
transmission line (i.e., voltage, impedance and phase angle),
and this unique capability is signified by the adjective
"unified" in its name. Alternatively, it can independently
control both the real and reactive power flows in the line.
The unified power flow controller in its general form can
provide simultaneous, real-time control of all basic power
system parameters (transmission voltage, impedance and
phase angle), and any combination of these parameters can
be changed without hardware alterations.
The proposed implementation of the unified power flow
controller uses two voltage-sourced inverters operated from
a common DC link capacitor. This arrangement is actually a
practical realization of an AC-to-AC power converter with
independently controllable input and output parameters.
Inverter 2 is used to generate the voltage
vpq(t) = Vpq sin(ωt + φpq) at fundamental frequency, with
variable amplitude (0 ≤ Vpq ≤ Vpqmax) and phase angle
(0 ≤ φpq ≤ 2π), which is added to the AC system terminal
voltage Vo(t) by the series-connected coupling transformer.
The voltage-sourced inverter used in the implementation
can internally generate or absorb at its AC terminals all the
reactive power demanded by the voltage / impedance /
phase-angle control applied, and only the real power
demand has to be supplied at its DC input terminal.

Fig. 1. Simple two-machine power system with the generalized power flow controller

Inverter 1 (connected in shunt with the AC power system
via a coupling transformer) is used primarily to provide the
real power demand of Inverter 2 at the common DC link
terminal from the AC power system. The flow of real power
into or out of the DC link capacitor is controlled by the real
power exchange between the inverter and the AC power
system, which is governed by the phase angle between the
inverter and AC system voltages. By contrast, the reactive
power exchange resulting from the line compensation is
determined by the amplitude difference between the
inverter and AC system voltages.
Assume that the voltage source Vpq in series with the line
can be controlled without restriction: the phase angle of
phasor Vpq can be chosen independently of the line current,
between 0 and 2π, and its magnitude is variable between
zero and a defined maximum value Vpqmax. This implies
that the voltage source Vpq must be able to generate and
absorb both real and reactive power. The reactive current
source Iq is assumed to be either capacitive or inductive,
with a variable magnitude (0 ≤ Iq ≤ Iqmax) independent of
the terminal voltage.

Using phasor representation, the basic UPFC power flow
control functions are illustrated in Fig. 3.

III. LOAD FLOW STUDIES OF SYSTEMS EMBEDDED
WITH UPFC
The load flow solution provides the bus voltages and phase
angles, and hence the power injections at all buses and the
power flows through the interconnecting power channels.
The load flow solution is essential for designing a new
power system and for planning the extension of an existing
one for increased load demand.
A. Conventional Newton-Raphson Power Flow Algorithm
The Newton-Raphson method is the most commonly used
approach for the load flow solution, mainly because of its
quadratic convergence characteristics. Linearizing the
nonlinear equations Y = f(x1, ..., xn) about an operating
point (x1_0, ..., xn_0) gives

    [ Y1 - f1(x1_0, ..., xn_0) ]   [ df1/dx1 ... df1/dxn ]   [ dx1 ]
    [ Y2 - f2(x1_0, ..., xn_0) ] = [ df2/dx1 ... df2/dxn ] * [ dx2 ]
    [           ...            ]   [         ...         ]   [ ... ]
    [ Yn - fn(x1_0, ..., xn_0) ]   [ dfn/dx1 ... dfn/dxn ]   [ dxn ]

or, compactly, B = J C, where J is the first-derivative
(Jacobian) matrix. In rectangular coordinates the linearized
power flow equations take the form

    [ ΔP ]   [ J1  J2 ] [ Δe ]
    [ ΔQ ] = [ J3  J4 ] [ Δf ]
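A minimal sketch of the update B = J C above for a generic two-variable system; the equations here are illustrative stand-ins, not the actual power flow equations:

    import numpy as np

    def mismatch(x):
        """Illustrative nonlinear system, written as f(x) - Y = 0."""
        return np.array([x[0] ** 2 + x[1] - 3.0,
                         x[0] + x[1] ** 2 - 5.0])

    def jacobian(x):
        """First-derivative (Jacobian) matrix J of the system above."""
        return np.array([[2.0 * x[0], 1.0],
                         [1.0, 2.0 * x[1]]])

    x = np.array([1.0, 1.0])                  # flat start
    for iteration in range(1, 11):
        B = -mismatch(x)                      # mismatch vector B
        if np.max(np.abs(B)) < 1e-8:
            break
        C = np.linalg.solve(jacobian(x), B)   # correction vector: J C = B
        x = x + C

    print(f"converged to x = {x} after {iteration} loop passes")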
B. UPFC Mathematical Modelling

To formulate the UPFC, the power injection model is introduced (Figs. 4 and 5).
Fig. 2. Basic circuit arrangement of UPFC

The operation of the UPFC is based on series compensation,
reactive shunt compensation and phase shifting; the UPFC
can fulfil these functions, and thereby meet multiple control
objectives, by adding the injected voltage Vpq, with
appropriate amplitude and phase angle, to the terminal
voltage Vo.

Fig. 4. UPFC placed in a line

Fig. 5. UPFC power injection model


Fig. 3. Phasor diagrams illustrating the operation of UPFC

Under the assumption that the UPFC operation is lossless
and that the line resistance is small enough to be neglected,
the power exchanges caused by inserting the UPFC are
derived.

In the above equations, Ps,upfc + jQs,upfc and
Pr,upfc + jQr,upfc are respectively the equivalent complex
powers injected into the two busbars, buses s and r, which
are effectively the resultant power injections contributed by
both the series and shunt branches of the UPFC. Vs and Vr
are the voltage magnitudes, and θs and θr the phase angles,
of the voltages on buses s and r. bse is the leakage
susceptance of the series coupling transformer, and r and γ
are respectively the magnitude and phase angle of the series
voltage source (the UPFC control parameters).

In order to evaluate the overall steady-state performance of
the UPFC, an adequate model is required; here a UPFC
model using the power injection concept is derived for
power flow studies. In this model two voltage sources are
used to represent the fundamental components of the
pulse-width-modulated output voltage waveforms of the
two branches of the UPFC. The impedances of the two
coupling transformers are included in the proposed model,
and the losses of the UPFC are taken into account. In the
series injection branch, a series injection voltage source
performs the main function of controlling the power flow,
while the shunt branch is used to provide the real power
demanded by the series branch and the losses in the UPFC,
i.e., balancing the real power between the two branches. In
the proposed model, however, the reactive compensation
function of the shunt branch is completely neglected. The
series voltage source Vse can be mathematically expressed
as

    Vse = r Vs e^(jγ),  where 0 ≤ r ≤ rmax and 0 ≤ γ ≤ 2π

IV. FUZZY LOGIC CONTROLLER

In this work, parameter adjustment, namely the control
vector update, is through the use of fuzzy logic. The
adjustment procedure is illustrated schematically in Fig. 6.

Fig. 6. Schematic diagram of fuzzy logic controller

ΔW is sent to the fuzzy logic controller (FLC) to generate a
signal ΔOUT. The FLC consists of a fuzzifier, a defuzzifier,
an inference engine and a rule base. The fuzzifier
transforms the crisp input signal into a fuzzy one. The
difference between the constraint specified value and the
computed one during an iteration is selected as the crisp
input signal; it is denoted by ΔW and is defined by

    ΔW = Wspecified - Wcalculated

The constraint specified value could be the line active or
reactive power flow, or a node voltage magnitude. The
input signal ΔW is fuzzified into ΔWFUZZY with seven
linguistic variables: large negative (LN), medium negative
(MN), small negative (SN), very small (VS), small positive
(SP), medium positive (MP) and large positive (LP). They
are represented by triangular membership functions, each
defined by three points (left bottom, top, right bottom):
LN: [-INF, -10, -2], MN: [-9, -5, 0], SN: [-2, -0.8, 0],
VS: [-0.8, 0, 0.8], SP: [0, 0.8, 2], MP: [0, 5, 9],
LP: [2, 10, INF]. Fig. 7 gives sketches of these membership
functions.

Fig. 7. Input membership function

This fuzzy input signal is sent to the inference engine,
which generates a fuzzy output signal, denoted by
ΔOUTFUZZY and also represented by seven linguistic
variables similar to ΔWFUZZY; their membership functions
are illustrated in Fig. 8. The three points of each are
designed as LN: [-16, -10, -4], MN: [-6, -3, -2],
SN: [-4, -2, 0], VS: [-3, 0, 3], SP: [0, 2, 4], MP: [2, 3, 6],
LP: [4, 10, 16].

Fig. 8. Output membership function

There are seven rules in the rule base, corresponding to the
seven linguistic variables:

Rule 1: If ΔWFUZZY is LN then ΔOUTFUZZY is LN
Rule 2: If ΔWFUZZY is MN then ΔOUTFUZZY is MN
Rule 3: If ΔWFUZZY is SN then ΔOUTFUZZY is SN
Rule 4: If ΔWFUZZY is VS then ΔOUTFUZZY is VS
Rule 5: If ΔWFUZZY is SP then ΔOUTFUZZY is SP
Rule 6: If ΔWFUZZY is MP then ΔOUTFUZZY is MP
Rule 7: If ΔWFUZZY is LP then ΔOUTFUZZY is LP



The design of these fuzzy rules is based on the observation
that, when the computed value obtained in an iteration is far
away from the specified one, more compensation is
required from the apparatus. Though the fuzzy rules are
simple, their performance is sufficiently robust. Finally, the
defuzzifier transforms the fuzzy output signal ΔOUTFUZZY
into the crisp value ΔOUT; the centroid of area (COA)
defuzzification algorithm is adopted.
The number of fuzzy membership functions used, their
shapes and the fuzzy rules are selected heuristically and
from computational experience; comprehensive computer
simulations are required in order to make the proper
selection and to cater for a wide variety of different
conditions.
For the UPFC there are two parameters, r and γ, to be
determined. This can be done by sending the deviations of
active and reactive power into the FIS separately, which
results in output signals ΔOUTP and ΔOUTQ respectively.
The updates of the injected voltage magnitude and angle
can then be calculated from these output signals.
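A minimal sketch of the controller just described: the triangular input and output membership functions with the breakpoints listed above, the seven one-to-one rules, and centroid-of-area defuzzification over a discretized output universe. The finite stand-in for INF and the discretization step are assumptions:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership with corners a <= b <= c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    INF = 1e3  # finite stand-in for the open-ended LN/LP shoulders (assumed)
    IN_SETS = {'LN': (-INF, -10, -2), 'MN': (-9, -5, 0), 'SN': (-2, -0.8, 0),
               'VS': (-0.8, 0, 0.8), 'SP': (0, 0.8, 2), 'MP': (0, 5, 9),
               'LP': (2, 10, INF)}
    OUT_SETS = {'LN': (-16, -10, -4), 'MN': (-6, -3, -2), 'SN': (-4, -2, 0),
                'VS': (-3, 0, 3), 'SP': (0, 2, 4), 'MP': (2, 3, 6),
                'LP': (4, 10, 16)}

    def flc(dW):
        """Map a crisp deviation dW to a crisp output dOUT (Mamdani, COA)."""
        z = np.linspace(-16.0, 16.0, 641)      # output universe (assumed step)
        agg = np.zeros_like(z)
        for label, in_pts in IN_SETS.items():
            w = tri(dW, *in_pts)               # firing strength of one rule
            if w > 0.0:                        # rule: input label -> same output label
                out = np.array([tri(v, *OUT_SETS[label]) for v in z])
                agg = np.maximum(agg, np.minimum(w, out))  # clip and aggregate
        if agg.sum() == 0.0:
            return 0.0
        return float((z * agg).sum() / agg.sum())          # centroid of area

    # Example: a deviation of +3 fires MP strongly and the tail of LP.
    print(f"dOUT for dW = 3.0: {flc(3.0):.2f}")

With dW = 3, for instance, both MP and the tail of LP fire, and the centroid lands between their peaks; large deviations therefore produce correspondingly large control updates.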

Fig. 9. Flow Chart

V. SIMULATION RESULTS OF THE IEEE 30-BUS SYSTEM

The single line diagram of the IEEE 30-bus system is
shown in Fig. 10, with the generator at bus 1 considered as
the slack bus.

Fig. 10

This section showcases the simulation results obtained
using the MATLAB package for the four case studies
considered. In each case the UPFC is installed in a specified
line for power flow control. Each case study is accompanied
by plots of the active and reactive power differences against
the number of iterations.

VI. CASE-1. POWER FLOW CONTROL IN LINE 1-3

Table 1. Bus data before installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW  | Gen MVAR
1      | 1.060        |  0.000      | 0.000   | 0.000     | 261.278 | 16.979
3      | 1.021        | -8.017      | 2.400   | 1.200     | 0.000   | 0.000

Table 2. Line flows before installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
1-3  | 83.361  | 5.201     | 2.818   | 7.126

Desired active power Pdes = 88 MW; desired reactive power
Qdes = 8 MVAR. The UPFC is placed in line 1-3.

Table 3. Bus data after installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW  | Gen MVAR
1      | 1.060        |  0.000      | 0.000   | 0.000     | 261.278 | -16.979
3      | 1.016        | -8.458      | 2.400   | 1.200     | 0.000   | 0.000

Table 4. Line flows after installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
1-3  | 87.999  | 7.997     | 3.158   | 8.542

The solution converged in 4 iterations. Injected powers at
the buses: at bus 1, 10.199995 MW and 1.481084 MVAR;
at bus 3, -9.376171 MW and -7.16725 MVAR.

VII. CASE-2. POWER FLOW CONTROL IN LINE 6-10

Table 5. Bus data before installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
6      | 1.012        | -11.405     | 0.00    | 0.00      | 0.00   | 0.00
10     | 1.040        | -16.171     | 5.80    | 2.00      | 0.00   | 0.00

Table 6. Line flows before installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
6-10 | 16.233  | 1.547     | 0.00    | 4.917

Desired active power Pdes = 18 MW; desired reactive power
Qdes = 2 MVAR. The UPFC is placed in line 6-10.

Table 7. Bus data after installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
6      | 1.011        | -11.578     | 0.00    | 0.00      | 0.00   | 0.00
10     | 1.038        | -16.880     | 5.80    | 2.00      | 0.00   | 0.00

Table 8. Line flows after installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
6-10 | 17.999  | 2.006     | 0.000   | 1.674

The solution converged in 9 iterations. Injected powers at
the buses: at bus 6, 4.058937 MW and -1.46479 MVAR; at
bus 10, -7.524701 MW and -0.6657 MVAR.

VIII. CASE-3. POWER FLOW CONTROL IN LINE 4-12

Table 9. Bus data before installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
4      | 1.013        | -9.678      | 7.60    | 1.60      | 0.00   | 0.00
12     | 1.055        | -15.445     | 11.20   | 7.50      | 0.00   | 0.00

Table 10. Line flows before installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
4-12 | 45.010  | 15.547    | 0.000   | 1.355

Desired active power Pdes = 53 MW; desired reactive power
Qdes = 10 MVAR. The UPFC is placed in line 4-12.

Table 11. Bus data after installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
4      | 1.006        | -10.595     | 7.600   | 1.600     | 0.00   | 0.00
12     | 1.063        | -17.390     | 11.200  | 7.500     | 0.00   | 0.00

Table 12. Line flows after installation of UPFC

Line | Flow MW | Flow MVAR | Loss MW | Loss MVAR
4-12 | 53.003  | 10.003    | 0.00    | 6.396

The solution converged in 5 iterations. Injected powers at
the buses: at bus 4, -7.287912 MW and -16.984097 MVAR;
at bus 12, -11.740193 MW and 16.721857 MVAR.

IX. CASE-4. POWER FLOW CONTROL IN LINE 15-23

Table 13. Bus data before installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
15     | 1.033        | -16.411     | 8.20    | 2.500     | 0.00   | 0.00
23     | 1.019        | -16.773     | 3.20    | 1.600     | 0.00   | 0.00

Table 14. Line flows before installation of UPFC

Line  | Flow MW | Flow MVAR | Loss MW | Loss MVAR
15-23 | 5.631   | 4.721     | 0.051   | 0.102

Desired active power Pdes = 12 MW; desired reactive power
Qdes = 8 MVAR. The UPFC is placed in line 15-23.

Table 15. Bus data after installation of UPFC

Bus No | V mag (p.u.) | Angle (deg) | Load MW | Load MVAR | Gen MW | Gen MVAR
15     | 1.015        | -16.045     | 8.20    | 2.50      | 0.00   | 0.00
23     | 0.987        | -16.974     | 3.20    | 1.60      | 0.00   | 0.00

Table 16. Line flows after installation of UPFC

Line  | Flow MW | Flow MVAR | Loss MW | Loss MVAR
15-23 | 12.004  | 8.00      | 0.202   | 0.408

The solution converged in 3 iterations. Injected powers at
the buses: at bus 15, 8.666033 MW and -7.58838 MVAR; at
bus 23, -8.233847 MW and -6.88485 MVAR.

X. CONCLUSION
Simulation studies were conducted on the IEEE 30-bus
system with the UPFC located in specified lines to control
the power flow in those lines. The fuzzy logic control of
power flow converged in fewer than ten iterations in every
case. A fuzzy logic controller with properly chosen
membership functions can be a useful tool for power flow
control, and the proposed approach can easily be
incorporated into a standard Newton-Raphson load flow.
Investigations of the effect of the UPFC control parameters
on the power flow were also carried out.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.219-222.

Human-Computer Interface Technology


R. Rama
Veltech Dr.RR & Dr.SR Technical University, Avadi, Chennai.
Email: ramarju89@gmail.com
Abstract: This paper discusses the human-computer
interface (HCI) through body language, namely hand
gestures and facial expressions, which can be used for
intelligent-space applications such as the smart home. It is
achieved by capturing human actions with a camera,
processing the meaning of the actions and then interfacing
them with computer processes or applications. HCI
modalities have been developed to allow humans to interact
naturally and intuitively with these intelligent spaces
through non-verbal body-language modalities, which
include facial expressions, eye movement, hand gestures,
body postures and walking style. This paper deals with how
to interact with a computer using hand gestures and facial
expressions.
Keywords: Hand gesture, hand posture, facial expression,
anthropometric expression, computer vision.


I. INTRODUCTION
The human hand has a complex anatomical structure
consisting of many connected parts and joints, involving
complex relations between them and providing a total of
roughly 27 degrees of freedom (DOFs) [2]. User interface
development requires a sound understanding of the hand's
anatomical structure in order to determine what kinds of
postures and gestures are comfortable to make. Although
hand postures and gestures are often considered identical,
the distinction between them needs to be made clear. A
hand posture is a static hand pose without involvement of
movement; for example, making a fist and holding it in a
certain position is a hand posture. A hand gesture, by
contrast, is defined as a dynamic movement referring to a
sequence of hand postures connected by continuous motions
over a short time span, such as waving good-bye. With this
composite property of hand gestures, the problem of gesture
recognition can be decoupled into two levels: the low-level
hand posture detection and the high-level hand gesture
recognition. In a vision based hand gesture recognition
system, the movement of the hand is recorded by video
camera(s). This input video is decomposed into a set of
features taking individual frames into account. Some form
of filtering may also be performed on the frames to remove
unnecessary data and highlight necessary components; for
example, the hands are isolated from other body parts as
well as other background objects. The isolated hands are
recognized as different postures. Since gestures are nothing
but a sequence of hand postures connected by continuous
motions, a recognizer can be trained against a possible
grammar. With this, hand gestures can be specified as being
built up out of a group of hand postures in various ways of
composition, just as phrases are built up from words. The
recognized gestures can be used to drive a variety of
applications (Fig. 1).

II. APPROACHES FOR HAND POSTURE AND
GESTURE RECOGNITION

The approaches to vision based hand posture and gesture
recognition can be divided into two categories: 3D hand
model based approaches and appearance based
approaches [1].

A. 3D Hand Model Based Approach
Three-dimensional hand model based approaches rely on
the 3D kinematic hand model with considerable DOFs, and
try to estimate the hand parameters by comparison between
the input images and the possible 2D appearance projected
by the 3D hand model. Such an approach is ideal for
realistic interactions in virtual environments. One of the
earliest model based approaches to the problem of bare
hand tracking was proposed by Rehg and Kanade,
describing a model-based hand tracking system, called
DigitEyes, which can recover the state of a 27 DOF hand
model from ordinary gray scale images at speeds of up to
10 Hz. The hand tracking problem is posed as an inverse
problem: given an image frame (e.g. an edge map), find the
underlying parameters of the model. The inverse mapping is
non-linear due to the trigonometric functions modeling the
joint movements and the perspective image projection. A
key observation is that the resulting image changes
smoothly as the parameters are changed; therefore, this
problem is a promising candidate for assuming local
linearity. Several iterative methods that assume local
linearity exist for solving non-linear equations (e.g.
Newton's method). Upon finding the solution for a frame,
the parameters are used as the initial parameters for the next
frame and the fitting procedure is repeated. The approach
can be thought of as a series of hypotheses and tests, where
a hypothesis of model parameters at each step is generated
in the direction of the parameter space (from the previous
hypothesis) achieving the greatest decrease in
mis-correspondence. These model parameters are then
tested against the image.

This approach has several disadvantages that have kept it
from real-world use. First, at each frame the initial
parameters have to be close to the solution; otherwise the
approach is liable to find a suboptimal solution (i.e. a local
minimum). Secondly, the fitting process is sensitive to noise
(e.g. lens aberrations, sensor noise) in the imaging process.
Finally, the approach cannot handle the inevitable
self-occlusion of the hand.
Fig. 2. Snapshot of the 3D tracker in action

In a further approach, a deformable 3D hand model is used
(Fig. 2). The model is defined by a surface mesh
constructed via PCA from training examples. Real-time
tracking is achieved by finding the closest, possibly
deformed, model matching the image. The method,
however, is not able to handle the occlusion problem and is
not scale and rotation invariant. In another model-based
method, articulated hand motion is captured with constraints
on the joint configurations learned from natural hand
motions, using a data glove as the input device. A sequential
Monte Carlo tracking algorithm, based on importance
sampling, produces good results, but is view-dependent and
does not handle global hand motion.
Stenger et al. presented a practical technique for
model-based 3D hand tracking (Fig. 3). An anatomically
accurate hand model is built from truncated quadrics. This
allows for the generation of 2D profiles of the model using
elegant tools from projective geometry, and for an efficient
method of handling self-occlusion. The pose of the hand
model is estimated with an unscented Kalman filter (UKF),
which minimizes the geometric error between the profiles
and the edges extracted from the images. The use of the
UKF permits higher frame rates than more sophisticated
estimation methods such as particle filtering, whilst
providing higher accuracy than the extended Kalman filter.
For a single camera the tracking algorithm operates at a rate
of 3 frames per second on a Celeron 433 MHz machine.
However, the computational complexity grows linearly with
the number of cameras, making the system non-operable in
real-time environments.

Fig. 3. The 3D model (left) and its generated contour (right)

More recent efforts have reformulated the problem within a Bayesian (probabilistic) framework [2]. Bayesian approaches allow the pooling of multiple sources of information (e.g. system dynamics, prior observations) to arrive at both an optimal estimate of the parameters and a probability distribution over the parameter space to guide the future search for parameters. In contrast to the Kalman filter approach, Bayesian approaches allow non-linear system formulations and non-Gaussian (multi-modal) uncertainty (e.g. caused by occlusions), at the expense of a closed-form solution for the uncertainty. A potential problem with the approach is that certain independence assumptions about the underlying probabilistic distribution are made for reasons of computational tractability that may not hold in practice. It is also a computationally expensive approach.
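As a concrete illustration of the sequential Monte Carlo machinery such frameworks build on, the sketch below shows one importance-sampling step (predict, re-weight, resample); the dynamics and likelihood functions are placeholder assumptions, not the models of [2].

    import numpy as np

    def particle_filter_step(particles, weights, dynamics, likelihood, obs):
        # One sequential Monte Carlo update: predict, re-weight, resample.
        # particles: (N, d) array of pose hypotheses; weights: (N,) array.
        particles = dynamics(particles)                     # prediction model
        w = weights * np.array([likelihood(p, obs) for p in particles])
        w = w / w.sum()                                     # normalize
        # Resample so particles concentrate in high-probability (possibly
        # multi-modal) regions of the posterior.
        idx = np.random.choice(len(particles), size=len(particles), p=w)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

The multi-modal posterior is what lets such trackers survive occlusions, at roughly N times the cost of a single Kalman update.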
Three-dimensional hand model based approaches offer a rich description that potentially allows a wide class of hand gestures. However, as 3D hand models are articulated deformable objects with many DOFs, a very large image database is required to cover all the characteristic shapes under different views. Another common problem with model based approaches is the problem of feature extraction and the lack of capability to deal with singularities that arise from ambiguous views.
B. Appearance Based Approaches
Appearance based approaches use image features to model the visual appearance of the hand and compare these parameters with the extracted image features from the video input. Generally speaking, appearance based approaches have the advantage of real-time performance due to the simpler 2D image features that are employed. There have been a number of research efforts on appearance based methods in recent years. A straightforward and simple approach that is often utilized is to look for skin-colored regions in the image [3], [4]. Although very popular, this has some drawbacks. First, skin color detection is very sensitive to lighting conditions. While practicable and efficient methods exist for skin color detection under controlled (and known) illumination, the problem of learning a flexible skin model and adapting it over time is challenging. Secondly, this obviously only works if we assume that no other skin-like objects are present in the scene. Lars and Lindberg used scale-space color features to recognize hand gestures. Their gesture recognition method is based on feature detection and user independence, but the authors


showed real-time application only with no other skin-coloured objects present in the scene. So, although skin color detection is a feasible and fast approach in strictly controlled working environments, it is difficult to
employ it robustly on realistic scenes. Another approach
is to use the eigenspace, which provides an efficient representation of a large set of high-dimensional points using a small set of basis vectors. The eigenspace approach seeks an orthogonal basis that spans a low-ordered subspace accounting for most of the variance in a set of exemplar images. To reconstruct an image in the training set, a linear combination of the basis vectors (images) is taken, where the coefficients of the basis vectors are the result of projecting the image to be reconstructed onto the respective basis vectors. One group of authors presents an approach for tracking hands with an eigenspace approach, providing three major improvements to the original eigenspace formulation: a large invariance to occlusions, some invariance to differences in background between the input images and the training images, and the ability to handle both small and large affine transformations (i.e., scale and rotation) of the input image with respect to the training images. The authors demonstrate their approach by tracking four hand gestures using 25 basis images. For a small set of gestures this approach may be sufficient. With a large gesture vocabulary (e.g. American Sign Language) the space of views is large; this poses a problem for collecting adequate training sets and, more seriously, the compactness in the subspace required for efficient processing may be lost.
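A minimal sketch of the eigenspace idea: build the basis with PCA over vectorized training images, then represent and reconstruct an input by projection onto that basis. All names are illustrative; this is the textbook formulation, not the cited authors' extended method.

    import numpy as np

    def build_eigenspace(train_images, k=25):
        # train_images: (n, h*w) matrix of vectorized example images.
        mean = train_images.mean(axis=0)
        # Basis = top-k principal components of the mean-centered set.
        _, _, vt = np.linalg.svd(train_images - mean, full_matrices=False)
        return mean, vt[:k]

    def reconstruct(image, mean, basis):
        coeffs = basis @ (image - mean)          # projection coefficients
        approx = mean + basis.T @ coeffs         # linear combination of basis
        error = np.linalg.norm(image - approx)   # low error => familiar view
        return approx, error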
Recently there has been increased interest in approaches working with local invariant features [5]. In [5], the AdaBoost learning algorithm is used with the Scale-Invariant Feature Transform (SIFT), a histogram representing gradient orientation and magnitude information within a small image patch. Using the sharing-feature concept, an efficiency of 97.8% is achieved. However, different features, such as the contrast context histogram, need to be studied and applied to accomplish hand posture recognition in real time. Haar-like features have also been used for the task of hand detection. Haar-like features focus more on the information within a certain area of the image rather than on each single pixel. To improve classification accuracy and achieve real-time performance, the AdaBoost learning algorithm, which can adaptively select the best features in each step and combine them into a strong classifier, can be used. The training algorithm based on AdaBoost takes a set of positive samples, which contain the object of interest, and a set of negative samples, i.e., images that do not contain the object of interest. During the training process, distinctive Haar-like features are selected to classify the images containing the object of interest at each stage. Originally for the task of face tracking and detection, Viola and Jones proposed a statistical approach to handle the large variety of human faces. In their algorithm, the concept of the integral image is used to compute a rich set of Haar-like features. Compared with other approaches, which must operate on multiple image scales, the integral image achieves true scale invariance by eliminating the need to compute a multiscale image pyramid, and it significantly reduces the image processing time. The Viola and Jones algorithm is approximately 15 times faster than any previous approach while achieving accuracy equivalent to the best published results. However, training with this method is computationally expensive, prohibiting the evaluation of many hand appearances for their suitability for detection.
to detection. The idea behind the invariant features is that,
if it is possible to identify characteristic points or regions
on objects, an object can be represented as assembly of
these regions i.e. rather than modeling the object as a
whole, one models it as a collection of characteristic
parts. This has the advantage that partial occlusions of an
object can be handled easily, as well as considerable
deformations or changes in viewpoint. As long as a
sufficient number of characteristic regions can be
identified, the object may still be found. Therefore, these
approaches seem rather promising for the task of real time
hand detection.
III. FACIAL EXPRESSION
Human facial expression recognition by a machine can be described as an interpretation of human facial characteristics via mathematical algorithms. Gestures of the body are read by an input sensing device such as a web-cam, which reads the movements of the human body and communicates with a computer that uses these gestures as input. These gestures are then interpreted using algorithms based either on statistical analysis or on artificial intelligence techniques. The primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information. By observing the face, one can decide whether a person is serious, happy, thinking, sad, feeling pain, and so on. Recognizing a person's expression can help in many areas, such as the field of medical science, where a doctor can be alerted when a patient is in severe pain, helping prompt action to be taken at that time. In this paper the main focus is to define a simple architecture that recognizes human facial expressions.
IV. FACIAL EXPRESSION RECOGNITION
Our system, Fig. 5, consists of two main modules: (i) a tracking module, delivering the 2D point measurements pi(ui, vi) of the tracked features, where i = 1, ..., m and m is the number of measurement points, and (ii) an estimator module (for the estimation of 3D motion and expressions), delivering a state vector s = (tx, ty, tz, θx, θy, θz, f, aj), where (tx, ty, tz, θx, θy, θz) are the six 3D camera/object relative motion parameters, namely translation and rotation, f is the camera focal length, and aj are the 3D model's actuators (muscles) that generate facial expressions, where j = 1, ..., n and n is the number of actuators. The set of 2D features to be tracked is obtained


by projecting their corresponding 3D model points Pi(Xi, Yi, Zi), where i = 1, ..., m and m is the number of tracked features. In this formulation the observation motion vector is expressed from the 2D projection of the 3D head features. The facial expression parameters are fully observable; hence their corresponding parts of the observation and state vectors are identical.

Fig. 7. Visual tracking and recognition of facial expression
The 3D AMB AAM allows modeling the shape and appearance of human faces in 3D using a generic 3D wireframe model of the face, to solve the storage, pose and labeling related problems. The shape of our model is described by two sets of controls: (i) the muscle actuators, to model facial expressions, and (ii) the anthropometrical controls, to model different facial types. The 3D face deformations are defined on the Anthropometric-Expression (AE) space, which is derived from Waters' muscle-based face model and the Facial Action Coding System (FACS). FACS defines the Action Unit (AU) as the basic visual facial movement controlled by one or more muscles. The muscle model is independent of the facial topology and provides an efficient method to represent facial expressions by a reduced set of parameters. The Expression Action Unit (EAU) represents the facial expression space, and the Anthropometry Action Unit (AAU) represents the space of facial types.

Fig. 4. Facial expressions of surprise, disgust, fear, sadness, anger, happiness, and neutral position

Fig. 4 illustrates a few instances of the 3D Anthropometric Muscle-Based Model obtained by varying anthropometric and muscle parameter values. Tracking should allow estimating the motion while locating the face and coping with different degrees of rigid motion and non-rigid deformations. Since we separate the global from the local
motion search spaces, the features to be tracked must
originate from rigid regions of the face. The main purpose
of the EKF is to filter and predict the rigid and non-rigid
facial motion. The feedback from the EKF imposes a
global collaboration among the separate 2D feature
trackers. EKF consists of two stages: time updates (or
prediction) and measurement updates (or correction). The
filter provides iteratively an optimal estimate of the
current state using the current input measurement, and
produces an estimate of the future state using the

underlying state model. The EKF state and measurement


equations are: s(k +1) = As(k) + (k) m(k )=Hs(k ) + (k )
where s is the state vector, m is the measurement vector,
A is the state transition matrix, H is the Jacobian that
relates state to measurement, and(k) and (k) are error
terms modelled as Gaussian white noise. The observations
are the 2D feature coordinates (u,v) and aiare
concatenated into a measurement vector m(k) at each
step. The observation vector is the back-projection of the
sstate vector containing the relative 3D camera-scene
motion, and the camera focal length. In our case the state
vector is s(translation, rotation, velociyu, focal_length,
actuators) that contains the relative 3D camera-object
translation, rotation and their velocities, camera focal
length, and the actuators/muscles that generate the facial
expressions.
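To make the two EKF stages concrete, here is a single predict/correct cycle over the state and measurement models above. This is a generic sketch: in the real tracker, H is the Jacobian of the back-projection re-linearized at each step, which is abstracted here as a given matrix.

    import numpy as np

    def ekf_step(s, P, m, A, H, Q, R):
        # One EKF cycle for s(k+1) = A s(k) + w(k), m(k) = H s(k) + v(k).
        # Q, R: covariances of the process noise w and measurement noise v.
        # Time update (prediction).
        s_pred = A @ s
        P_pred = A @ P @ A.T + Q
        # Measurement update (correction).
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
        s_new = s_pred + K @ (m - H @ s_pred)                   # innovation
        P_new = (np.eye(len(s)) - K @ H) @ P_pred
        return s_new, P_new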
V. CONCLUSION
Both the hand gesture and the facial expression tracking and recognition systems are able to work in a realistic environment without makeup, with un-calibrated cameras, and with unknown lighting conditions and background. Hand gestures and facial expressions are powerful, strongly context-dependent nonverbal human-to-human communication modalities. While understanding them may come naturally to humans, describing them in an unambiguous algorithmic way is not an easy task. We will use Fuzzy Neural Networks and Fuzzy Cognitive Maps to develop an expert system that captures the collective wisdom of human experts on the best procedures to extract the semantic information from the multimodal gestural data streams. These systems are smart and helpful for the next generation.
REFERENCES
[1] K. G. Derpanis, "A Review of Vision-Based Hand Gestures," Internal Report, Department of Computer Science, York University, February 2004.
[2] B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla, "Model-Based Hand Tracking Using a Hierarchical Bayesian Filter," IEEE Transactions on Pattern Analysis and Machine Intelligence, September 2006.
[3] E. Sánchez-Nielsen, L. Antón-Canalís, and M. Hernández-Tejera, "Hand Gesture Recognition for Human Machine Interaction," in Proc. 12th International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), 2004.
[4] B. Stenger, "Template-Based Hand Pose Recognition Using Multiple Cues," in Proc. 7th Asian Conference on Computer Vision (ACCV), 2006.
[5] C. C. Wang and K. C. Wang, "Hand Posture Recognition Using AdaBoost with SIFT for Human Robot Interaction," Springer Berlin, ISSN 0170-8643, Vol. 370, 2008.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.223-227.

Labview-Based Fuzzy Controller Design of a Lighting Control System


R. Saravana Kumar1, R. Bharath Kumar2 and J. Saravanan3
1Student, Pre-Final Year, 2Lecturer, 3Asst. Professor
Jaya Engineering College
Email: zealysaravanan@gmail.com, bharathkumar_86@yahoo.co.in, jsarav78@yahoo.co.in
Abstract This paper describes how to design a lighting
control system including hardware and software. Hardware
includes light sensing circuit, control circuit, and 8255
expanding I/O circuit, PC, and bulb. Sensing circuit uses
photo-resistance component to sense the environmental light
and then transmit the signal of the lightness to the computer
through an 8-bit A/D converter 0804. The control circuit
applies reed relay in digital control way to adjust the
variable resistor value of the traditional dimmer. Software
incorporates LABVIEW graphical programming language
and MATLAB Fuzzy Logic Toolbox to design the light fuzzy
controller. The rule-base of the fuzzy logic controller either
for the single input single output (SISO) system or the
double inputs single output (DISO) system is developed and
compared based on the operation of the bulb and the light
sensor. The control system can dim the bulb automatically according to the environmental light. It can be applied to many fields, such as streetlight control and car headlight control, and it is possible to save energy by dimming the bulb.

I. INTRODUCTION

After Lotfi Zadeh introduced fuzzy logic in 1965, the fuzzy control method came to be used extensively, since it has the advantage of being model-free, with no a priori information required. It is easy to design a fuzzy control system given the requisite knowledge and the experience of a skilled operator. Many issues focus on determining fuzzy control rules, membership functions, and structures of fuzzy controllers [2, 4, 6]. Various ways of using fuzzy logic to improve industrial control [5] and methodologies to reduce the number of variables and the number of fuzzy if-then rules [3] have been discussed.

The application of fuzzy logic in lighting control has been presented in a number of papers; for example, it is possible to save energy by dimming the bulb or the fluorescent lamp:

1) A microprocessor-based intelligent control device for streetlight control applies fuzzy decision theory to distinguish various interferences accurately and to operate reliably; it can turn the transformer on or off automatically according to the environmental light [7].

2) Dimming of the fluorescent lamp can be done by changing the input frequency of the electronic ballast.

Section II introduces the proposed lighting control system, including its hardware design. One of the important problems involved in the design of fuzzy logic controllers is the development of fuzzy if-then rules. Section III presents the design method of the fuzzy logic controllers. The results and discussions are shown in Section IV.

Fig. 1. The block diagram of the proposed lighting control system

Fig. 2. The fuzzy control scheme

II. SYSTEM DESCRIPTION


The block diagram of the proposed lighting control
system is shown in Fig. 1, which can be represented by a
simple form like Fig. 2. The 8255 I/O shown in Fig. 1 is
the expanding I/O circuit of a PC printer port. Originally, only 8 bits of I/O on a PC printer port can be used; after the 8255 expanding I/O circuit shown in Fig. 3 has been established, 24 bits of I/O are available. The bulb drive shown in Fig. 2 is basically composed of a dimmer with 6 reed relays, and its corresponding hardware circuit is shown in Fig. 4.
The reed relays in the bulb drive are regarded as digital outputs of the light control system; thus a D/A converter is unnecessary in our case. One more thing that needs to be done is to design a parallel resistor circuit that works as the variable resistor in the dimmer. The main objective of designing a parallel-resistor circuit is to
223

Labview-Based Fuzzy Controller Design of a Lighting Control System

make the total resistance decrease homogeneously with the increase of the DN value of the control output. Fig. 5 shows the hardware layout of the parallel-resistor circuit; it stands for one of many possible solutions, as long as the above requirement (homogeneous decrease) is met. The simulation result of the total resistance versus control output is shown in Fig. 6.
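As a rough illustration of how a homogeneously decreasing resistance can be obtained, the sketch below models each of the 6 relay bits as switching one branch resistor in parallel, with binary-weighted branch values derived from Ro = 330K. The branch values and the simple conductance-sum model are assumptions for illustration, not the measured circuit of Fig. 5 and Fig. 6.

    def total_resistance(dn, branches=(330e3, 165e3, 82.5e3, 41.25e3, 20.625e3, 10.3125e3)):
        # Total resistance of the dimmer network for a 6-bit control value DN.
        # Bit i of DN closes relay i, adding branches[i] in parallel
        # (binary-weighted values Ro/2^i with Ro = 330 kOhm, assumed).
        g = 0.0                                   # total conductance
        for i, r in enumerate(branches):
            if dn & (1 << i):
                g += 1.0 / r
        return float('inf') if g == 0 else 1.0 / g   # R = Ro/DN for these values

    for dn in (1, 15, 63):                        # resistance falls as DN rises
        print(dn, total_resistance(dn))

With these binary-weighted branches the total resistance is Ro/DN, a smooth monotone decrease of the kind Fig. 6 depicts.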
The sensor block in Fig. 2 corresponds to the combination of the A/D block and the light sensor block; its hardware circuit is shown in Fig. 7.

Fig. 3. The 8255 expanding I/O circuit connected with a PC printer port

Fig. 4. The control circuit of the system

Fig. 5. The hardware layout of the parallel-resistor circuit (Ro = 330K)

Fig. 7

It contains a photo-resistor (CdS) sensor, an 8-bit A/D converter, and an LED display circuit. Consequently, the hardware layout of the lighting control system is shown in Fig. 8.

Fig. 8. The hardware layout of the lighting control system

After all the hardware work had been done, our goal was then to design an FLC to dim the bulb according to the environmental light.
Fig. 6. Total resistance versus control output (DN: 0-63)

III. FUZZY LOGIC CONTROLLER

In this paper, the single-input single-output (SISO) system and the double-input single-output (DISO) system are discussed. The environmental light is used as input variable x1 for the SISO system, while another input variable x2 (the changing rate of the environmental light) is considered for the DISO system. The output variable is the numerical value DN sent to the output circuit. The input variables are fuzzified by assigning them a singleton fuzzy set, i.e. a set whose membership function is unity at the measured value and zero


elsewhere. The fuzzy set of the output variable is inferred by the max-min composition with the fuzzy relation describing the desired control action, and is then defuzzified to deliver a crisp numerical value by the centre-of-gravity method. The fuzzy rule base consists of a collection of fuzzy IF-THEN rules of the form

IF x1 is A1(k) and x2 is A2(k) THEN y is B(k), for k = 1, 2, ..., n    (1)

where x1, x2 ∈ U and y ∈ R are the input and output of the fuzzy logic system, respectively; A1(k), A2(k) and B(k) are labels of fuzzy sets in U1, U2 and R representing the kth antecedent pair and consequent, respectively; and n is the number of rules. The SISO and DISO systems will be discussed and analyzed using the MATLAB Fuzzy Logic Toolbox in the following.

Table 1. Labels for the membership functions in the SISO system

A. SISO
We assume that the input to the system is the environmental light x1. We further assume that the environmental light can be Big, Medium, or Small. The output DN can range between 0 and 63 and is divided into Small, Medium, and Big. Figure 9 shows the fuzzy sets describing the above. Labels for the membership functions are given in Table 1. The rule base, with its rules, is in Table 2.

Fig. 9. Fuzzy sets showing the input and the output of the SISO system

Table 2. Rule base of the SISO system

Fig. 10. The inferred output of the SISO system

Figure 10 shows the results of the simulation of the rule base by the fuzzy inference development environment (MATLAB Fuzzy Logic Toolbox) for an input value of the environmental light x1 = 164. The output is 24.22. Here only two rules are needed to calculate the output. The inferred output solved by the centre-of-gravity method can also be checked by hand calculation as follows:
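The hand calculation follows the centre-of-gravity rule y* = Σ μ(yi)·yi / Σ μ(yi) over the aggregated (clipped) output sets. The sketch below runs the whole SISO inference for x1 = 164 with max-min composition and centroid defuzzification; the membership breakpoints and rule polarity are illustrative assumptions rather than the exact sets of Fig. 9 and Table 2, so it demonstrates the procedure rather than reproducing the value 24.22.

    import numpy as np

    def mf(x, pts_x, pts_y):
        # Piecewise-linear membership function via interpolation.
        return np.interp(x, pts_x, pts_y)

    def siso_fuzzy(x1):
        y = np.linspace(0, 63, 256)                      # discretized DN universe
        # Firing strengths of the input sets (breakpoints are assumed).
        fire = {'S': mf(x1, [0, 128], [1, 0]),
                'M': mf(x1, [64, 128, 192], [0, 1, 0]),
                'B': mf(x1, [128, 255], [0, 1])}
        # Output sets over DN (breakpoints are assumed).
        out = {'S': mf(y, [0, 32], [1, 0]),
               'M': mf(y, [16, 32, 48], [0, 1, 0]),
               'B': mf(y, [32, 63], [0, 1])}
        # Assumed rule polarity: dark -> large DN (brighten), bright -> small DN.
        rules = [('S', 'B'), ('M', 'M'), ('B', 'S')]
        # Max-min inference: clip each consequent by its rule's firing strength.
        agg = np.zeros_like(y)
        for ant, con in rules:
            agg = np.maximum(agg, np.minimum(fire[ant], out[con]))
        # Centre-of-gravity defuzzification.
        return (y * agg).sum() / (agg.sum() + 1e-12)

    print(round(siso_fuzzy(164.0), 2))

Note that at x1 = 164 only two input sets have non-zero membership, so only two rules fire, matching the observation above.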

B. DISO
We assume that the inputs to the system are the environmental light x1 and the changing rate of the environmental light x2, where the changing rate of the environmental light ranges between -10 and +10 and is divided into Negative-Small, Zero, and Positive-Small. We further assume that the environmental light can be Small, Medium, or Big. To make the inferred output value distribute homogeneously over all regions (especially the dark region), the output DN ranges between -20 and 80 and is divided into VS, S, SB, MB, B, and VB. Figure 11 shows the fuzzy sets describing the above. Labels for the membership functions are given in Table 3. The rule base, with its 9 rules, is shown in Table 4.

Fig. 11. Fuzzy sets showing the inputs (environmental light x1; changing rate of environmental light x2) and the output (control DN) of the DISO system

Table 3. Labels for the membership functions in the DISO system

Table 4. Rule base of the DISO system

Fig. 12. The inferred output of the DISO system

Figure 12 shows the results of the simulation of the rule base by the fuzzy inference development environment (MATLAB Fuzzy Logic Toolbox) for input values of the environmental light x1 = 150 and the changing rate of the environmental light x2 = 5. The output is about 9.41. Here only six rules are needed to calculate the output.

Fig. 13. The result of the simulation of the DISO system using the surface viewer of the MATLAB Fuzzy Logic Toolbox

The surface shown in Fig. 13 is the control surface. It means that for every possible value of the two inputs there is a corresponding output based on the rules. For example, if the environmental light x1 and the changing rate of the environmental light x2 are given, the control output DN of the system can be obtained immediately.

IV. RESULTS AND DISCUSSION

The light control system is constructed in a PC using the 8255 expanding I/O circuit instead of a DAQ card. The 8255 I/O expanding circuit, the sensing circuit, and the output control circuit become the requisite components in the design of a light control system. The resolution of the A/D device is 8 bits, and the digital outputs use only 6 bits. The tasks of acquiring the input signal, processing the input data, and outputting the DN are commanded using LabVIEW, a graphical programming language, to accommodate the light control system.

Fig. 14. Bulb responses for control value DN = 5, 15, 25

Figure 14 shows the corresponding bulb responses for the control values DN = 5, 15, and 25, respectively. Figure 15 shows the change of the lightness of the bulb against variations of the environmental lightness. The front panel and the block diagram of the LabVIEW program are shown in Fig. 16. At the beginning, we create a combo box with two items (SISO and DISO) in the front panel of LabVIEW, shown in the upper part of Fig. 16. To simplify the whole program, we make two sub-VI icons, denoted FLC SISO and FLC DISO, to do the corresponding job of the FLC, shown in the lower part of Fig. 16.


Fig. 15. Bulb output response against different environmental light

Fig. 16. Front panel and block diagram in LabVIEW

After that, we can easily run the LabVIEW program, choose an item in the combo box, and even switch the item to another one during the execution of the LabVIEW program. The main features in the design of the light control system using LabVIEW are:
1) Low cost.
2) Easy implementation in a PC without any DAQ card.
3) Incorporation of the MATLAB Fuzzy Logic Toolbox with the LabVIEW program.

Javidbakht [1] used a fuzzy controller with the SISO system to dim the fluorescent lamp based on the availability of outside light, and Zhang [7] adopted a fuzzy controller with the DISO system to distinguish environmental interferences, avoiding fault action and jitter and improving reliability. In this paper, both the SISO system and the DISO system are discussed in the fuzzy controller. The experimental results show that, because the light signal and its changing rate are both considered, the DISO system responds faster than the SISO system when the environmental light changes suddenly. This also verifies the effectiveness and robustness of the proposed fuzzy controller.

V. CONCLUSIONS
Important hardware such as the light sensing circuit and the dimmer with relays (the parallel-resistor circuit) was designed and proved to work well. The control device can be used to adjust the light level of the bulb, engaging or disengaging relays and saving power. The device is reliable and convenient to maintain. The control of the bulb light system using a fuzzy logic controller has been presented. Both the SISO system and the DISO system are discussed for the fuzzy logic controller, and the LabVIEW program incorporating the MATLAB Fuzzy Logic Toolbox for the FLC is carried out. The experimental results have shown satisfactory response of the bulb system to sudden changes of the environmental light. This intelligent dimming device can also be applied to dim streetlights or car headlights automatically according to the environmental light, working reliably and obtaining a good energy-saving effect.

REFERENCES
[1] Javidbakht, Saeid, "Design of a Controller to Control Light Level in a Commercial Office," M.S. Egr. Thesis, Department of Electrical Engineering, Wright State University, pp. 1-106, 2007.
[2] Mamdani, E. M., "Application of fuzzy algorithms for control of simple dynamic plant," IEE Proceedings, Vol. 121, No. 12, pp. 1585-1588, 1974.
[3] Mamlook, R., Tao, C. W., and Thompson, W. E., "An advanced fuzzy controller," Fuzzy Sets and Systems, Vol. 103, pp. 541-545, 1999.
[4] Takagi, T. and Sugeno, M., "Fuzzy identification of systems and its applications to modeling and control," IEEE Transactions on Systems, Man and Cybernetics, Vol. 15, No. 1, pp. 116-132, 1985.
[5] Van der Wal, A. J., "Application of fuzzy logic control in industry," Fuzzy Sets and Systems, Vol. 74, pp. 33-41, 1995.
[6] Wu, Z. Q., Wang, P. Z., and Wang, H. H., "A rule self-regulating fuzzy controller," Fuzzy Sets and Systems, Vol. 47, pp. 13-21, 1992.
[7] Zhang, C., Cui, N., Zhong, M., and Cheng, Z., "Application of Fuzzy Decision in Lighting Control of Cities," 44th IEEE Conference on Decision and Control, Seville, Spain, pp. 4100-4104, 2005.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.228-232.

Testing Methodologies for Embedded Systems and Systems-On-Chip


S. Sayee Kumar
M.Tech., VEL Tech University, Avadi, Chennai
Abstract Testing of a fabricated chip is a process that
applies a sequence of inputs to the chip and analyzes the
chip's output sequence to ascertain whether it functions correctly. As the chip density grows beyond millions of
gates, Embedded systems and systems-on-chip testing
becomes a formidable task. Vast amounts of time and money
have been invested by the industry just to ensure the high
testability of products. On the other hand, as design
complexity drastically increases, current gate-level design
and test methodology alone can no longer satisfy stringent
time-to-market requirements. The High-Level Test
Synthesis (HLTS) system, which this paper mainly focuses
on, is to develop new systematic techniques to integrate
testability consideration, specially the Built-In Self-Test
(BIST) methodology, into the synthesis process. It makes it possible for an automatic synthesis tool to predict the testability of the synthesized embedded systems or chips accurately at an early stage. It also optimizes the designs in terms of test
cost as well as performance and hardware area cost.

I. INTRODUCTION
Driven by the rapid growth of the Internet, communication technologies, pervasive computing, automobiles, airplanes, and wireless and portable consumer electronics, Embedded Systems and Systems-on-Chip (SoC) have moved from a craft to an emerging and very promising discipline in today's electronics industry. Testing of a fabricated very-large-scale integrated embedded system or system-on-chip is a process that applies a sequence of inputs to the circuit and analyzes the circuit's output sequence to ascertain whether it functions correctly. As the chip density grows beyond millions of gates, testing becomes a formidable task. Vast amounts of time and money have been invested by the semiconductor industry just to ensure the high testability of products. A number of semiconductor companies estimate that about 7% to 10% of the total cost is spent on single-device testing [17]. This figure can rise to as high as 20% to 30% if the cost of in-circuit testing and board-level testing is added. However, the most important cost can be the loss in time-to-market due to hard-to-detect faults. Recent studies show that a six-month delay in time-to-market can cut profits by 34% [17]. Thus, testing can pose serious problems in embedded system and system-on-chip designs.
Part of the reason testing costs so much is the traditional separation of design and testing. Testing is often viewed, inaccurately, as a process that should start only after the design is complete. Due to this separation, the designer usually has little appreciation of testing requirements, whereas the test engineer has little input into the design process. In order to reduce testing cost effectively, methods which take into account the testability of the final product are needed; these are usually called Test Synthesis. This approach is motivated by the high complexity of current designs and the related testing costs. The design's test-related activities, such as test generation and test application, usually have a relatively big share of the total design and test cost; in some cases, this can reach as high as 50% of the total cost. Thus the main idea of Test Synthesis is to improve the testability of the design during the early stages, which is expected to reduce the later design testing costs. On the other hand, as design complexity drastically increases, current gate-level synthesis methodology alone can no longer satisfy stringent time-to-market requirements. High-Level Synthesis [2,5], which takes a behavioral specification of a digital system and a set of design constraints as input and generates a Register-Transfer Level (RTL) hardware implementation, is hence considered a promising technology to boost design quality and shorten the development cycle.

The main objective of the High-Level Test Synthesis this paper focuses on is to develop new systematic techniques to integrate testability considerations into the synthesis process and to make it possible for an automatic synthesis tool to predict the testability of the synthesized circuits accurately at an early stage and to optimize the designs in terms of test cost as well as performance and area cost.
II. RECENT RESEARCH SUMMARY
Due to the increasing gate-to-pin ratios which limit the
feasibility of testing digital circuits externally, this paper
mainly describes some recent research progress in our work on a built-in self-test synthesis system. Its
framework is depicted in Figure 1.
A. Design Representation
First of all, our system takes a VHDL behavioral
specification of a digital system and a set of design
constraints as input and generates a Register-Transfer
Level (RTL) hardware implementation which consists of
a data path and a controller. The kernel of the system is an
intermediate design representation, called Extended
Timed Petri Net (ETPN), which can be used both for


testability analysis and high-level synthesis [14]. In ETPN, the structural properties of the data path and controller are explicitly captured in order to facilitate accurate analysis of the intermediate design in terms of performance, area and testability.

Fig. 1. The built-in self-test synthesis system

B. Data Path Testability Analysis
Based on the design representation, we have developed register-transfer-level data path testability metrics to evaluate various BIST configurations and to make improvement decisions [23,25]. Early decisions about testability improvement make it possible for designs to be optimized in later synthesis processes. Testability analysis carried out at a high level of abstraction also reduces the computational complexity, since the complexity of a design at this level is significantly lower. The objective of the testability metrics is to analyze and quantify BIST testability for a given register-transfer-level design. Basically, our BIST testability metrics quantify two important testability aspects, namely controllability and observability. In our approach, we mainly follow the test scheme, namely minimal behavioral BIST, originally proposed in [10]. Both controllability and observability are further divided into two factors: a combinational factor and a sequential factor. The combinational controllability is measured in terms of the quality of pseudo-random values as they propagate through embedded modules and registers, and the sequential factor estimates the number of steps or clock cycles needed to control the module under test. Similarly, the combinational observability is measured in terms of the sensitivity of embedded modules and registers to erroneous value propagation, i.e. in terms of how difficult it is to propagate an erroneous value through to an observable output, and the sequential factor estimates the number of steps or clock cycles needed to observe the module under test. Our testability metric therefore consists of four measures: combinational controllability (CC), sequential controllability (SC), combinational observability (CO) and sequential observability (SO) [31], based on a Markov chain model [3]. It provides a means of measuring the effect of test improvement with regard to BIST test quality.

C. State Reachability Analysis
Besides the data path testability metrics, we have also developed state reachability metrics, which are used to characterize the testability of the given controller in terms of an ETPN [23,25]. State reachability is defined by the difficulty of reaching a state from an initial state, and the measurement is associated with each state in the control part. It consists of two measurements, namely combinational state reachability (CSR) and sequential state reachability (SSR) [7]. The combinational state reachability measures the probability of reaching the current state from an initial state, and the sequential state reachability measures the number of cycles (transitions) needed to reach the current state from the initial state.

D. Incremental Testability and Reachability
Due to the large computational complexity of testability and state reachability analysis, and the need to perform such analysis after each synthesis step, we have applied a systematic technique, similar to that used in ATPG, to the present BIST technique to approximate the repeated testability and state reachability calculations and evaluations accurately [23,25]. First, the global testability of a data path is based on a cost function in [9] and is used to estimate the global testability of an entire design. Based on this global testability measurement, we propose a new and efficient estimation method [23,25], based partly on explicit re-calculation and partly on gradient techniques, for incremental testability and state reachability to update the test property.
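The incremental update can be pictured as a dirty-set propagation over the design graph: only measures downstream of the last transformation are re-evaluated, which is what keeps repeated analysis affordable. The sketch below is a generic illustration of that idea under assumed names (values, fanout, recompute); it is not the exact estimation method of [23,25].

    def update_testability(values, fanout, recompute, dirty):
        # Incrementally refresh cached testability measures after a change.
        # values: node -> cached measure; fanout: node -> dependent nodes;
        # recompute: f(node, values) -> fresh measure; dirty: nodes just touched.
        work = list(dirty)
        while work:
            n = work.pop()
            new = recompute(n, values)
            # Re-propagate only when the change is significant.
            if abs(new - values.get(n, 0.0)) > 1e-6:
                values[n] = new
                work.extend(fanout.get(n, ()))
        return values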
E. BIST Partitioning
Based on the above testability measurements, we develop a new improvement method with the BIST technique at the register-transfer level (RTL). RTL circuits consist of interconnections of registers, functional units (ALUs), multiplexors and buses. Both conventional BIST [1] and circular BIST [15,16] are well suited for automatic circuit improvement at the register-transfer level. Traditionally, each ALU in a circuit is made directly testable by placing test registers to generate test patterns at the ALU's inputs, and test registers to compact the responses at the ALU's output. However, it may not be necessary to add this many test registers [4]. For example, suppose that the input registers to the ALU are not directly controllable, but they can still generate patterns that are random enough to test the ALU efficiently; in this case, there is no need to replace the normal system registers with more expensive, slower test registers. Thus an efficient partitioning technique is necessary, which decides either which registers should be configured as test registers (conventional BIST) or which registers should be linked into the circular scan path (circular BIST).


Partitioning a design can lead to simplifications of many design procedures such as synthesis and test. Partitioning for testability leads to a simplification of the test effort and the ability to apply different test strategies to different partitions. The partitioning technique proposed in [23,25] transforms some hard-to-test registers and/or lines into boundary components. These components act as normal registers and/or lines in the normal mode, and serve as partitioning boundaries in test mode or as test registers. A design is thus partitioned into several sub-circuits, each of which can be tested and controlled based on BIST test schemes. It is therefore possible to apply different test strategies, such as scan for deterministic test and BIST for random test, to different partitions. The circuit partitioning problem can, in general, be formulated as a graph partitioning problem: given a graph with nodes and arcs, the objective is to partition the nodes into several subsets such that the total cost of the arcs between nodes in different partitions is minimized. Optimal partitioning is known to be NP-complete [6]. In our research work, we present an efficient and economic BIST partitioning approach [23,25]. It is based on a BIST testability analysis algorithm with an incremental testability analysis approach for the data path, and a state reachability analysis algorithm with its incremental analysis approach for the control path, at the register-transfer level. Initially we use the testability algorithm for the data path and the state reachability algorithm for the control part to find partitioning boundaries. Then the partitioning procedure is performed quantitatively by a clustering algorithm which clusters directly interconnected components, excluding boundary components, based on the global testability of the data path and the global state reachability analysis of the control part. After each selection step, we use the proposed estimation method, based partly on explicit re-calculation and partly on gradient techniques, for incremental testability and state reachability to update the test property. This process is iterated until the design is partitioned into several disjoint sub-circuits, each of which can be tested independently. The design is then fully self-testable.
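Since optimal partitioning is NP-complete, the clustering step is greedy. The following simplified sketch conveys the flavor: grow clusters through direct interconnections and stop at boundary components. The node/edge representation and the is_boundary predicate are illustrative assumptions, not the actual algorithm of [23,25].

    def bist_partition(nodes, edges, is_boundary):
        # Greedy clustering of directly interconnected components,
        # stopping at boundary components (test registers / cut lines).
        clusters, seen = [], set()
        for start in nodes:
            if start in seen or is_boundary(start):
                continue
            cluster, stack = set(), [start]
            while stack:
                n = stack.pop()
                if n in seen or is_boundary(n):
                    continue
                seen.add(n)
                cluster.add(n)
                stack.extend(b for (a, b) in edges if a == n)
                stack.extend(a for (a, b) in edges if b == n)
            clusters.append(cluster)
        return clusters   # each cluster can be tested independently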
F. Resource Optimization
Applying BIST techniques for resource optimization before going to RTL implementation, or when performing high-level synthesis, involves modification of the hardware on the chip so that the chip has the capability to test itself. Table 1 in [20] shows the different types of test registers that can be used. Concurrent built-in logic observation (CBILBO) and built-in logic observation (BILBO) registers can both generate test patterns and compress test responses, and ensure high fault coverage. BILBO registers need more test sessions, while CBILBO registers require more hardware area. Note that a test register usually has a larger hardware area than a normal register (see Table 1 in [20], where the area scaling factor is given relative to that of a normal register). One of the main considerations for BIST resource optimization is, therefore, the extra area for the test circuitry. Here we describe an optimal modification approach [21], based on an Integer Linear Programming formulation [11], to find BIST embeddings in the data path, prepared for the synthesis algorithm or before going to RTL implementation, such that the cost of modification is minimum.
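To make the optimization concrete: the idea is to choose, for each register, whether to upgrade it to a test register so that every ALU can receive test patterns and have its responses compacted, at minimum extra area. The brute-force sketch below states that objective and constraint in plain Python (a real ILP solver would be used at practical sizes); the area and covers inputs are assumptions for illustration, not part of the formulation in [21].

    from itertools import combinations

    def min_bist_upgrade(registers, alus, area, covers):
        # Choose a minimum-extra-area set of registers to convert into
        # test registers so every ALU is testable (pattern generation at
        # its inputs, response compaction at its output).
        best, best_cost = None, float('inf')
        for k in range(len(registers) + 1):
            for subset in combinations(registers, k):
                if all(covers(set(subset), alu) for alu in alus):
                    cost = sum(area[r] for r in subset)
                    if cost < best_cost:
                        best, best_cost = set(subset), cost
        return best, best_cost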
G. Data Path Allocation
It has been shown that MISR registers can also be used to generate pseudorandom test patterns [8,18]. This results in both a reduction of testing time and a reduction of extra registers, which reduces hardware area. However, since the actual time required for a MISR register to obtain exhaustive pattern coverage is exponential with respect to the number of bits in the register, as shown in [20], the test quality might be reduced. How to reduce this area overhead without sacrificing test quality is one of the major concerns of our research work. Considering testability issues at high-level synthesis can lead to a more efficient exploration of the design space, resulting in a digital circuit that requires minimal BIST area overhead and has high test concurrency while guaranteeing the test quality. The fact that the contents of signature registers (MISRs) can be used as test patterns leads to the following advantages. First, the algorithm produces designs with high test concurrency, which reduces the overall testing time due to increased testing parallelism. Moreover, the number of extra registers for implementing BIST can be reduced. However, since the time required for a MISR register to obtain exhaustive pattern coverage is exponential in the number of bits in the register, we consider such a template an incompletely embedded module. We describe a high-level data path allocation algorithm in [20] which generates highly testable data path designs while maximizing the sharing of modules and test registers. Module allocation is guided by a testability balance principle whereby incompletely embedded modules can be mapped onto the same function module that is completely embedded. In this way, the incompletely embedded module after allocation will be fully testable. The register allocation is mainly based on the sharing degrees of registers, which reflect the number of modules for which a register can be configured as an RTPG and the number of modules for which it can be configured as a MISR. Using this measure, the register allocation is guided by choosing mergers that result in large increases in the sharing degrees of registers over those resulting in smaller increases. This results in registers with high sharing degrees, thereby requiring a smaller number of BIST registers globally in the design. However, the approach still has some drawbacks; for example, if an incompletely embedded module cannot find a match to be merged with a completely embedded module during the iterative allocation algorithm, it probably will not become fully testable. If there are several such modules un-mapped in the design, the resulting test quality will not be satisfactory. This


motivates us to make use of two types of redundant transformations introduced in [11,12,13], which add redundancy that improves the sharing of test resources in the data path without affecting the scheduling (latency) and the functional resource requirements of the behavior, to improve our data path allocation algorithm and to make all incompletely embedded modules fully testable [27].
H. Integrated Synthesis Algorithm
After our system takes a VHDL behavioral specification of a digital system and a set of design constraints as input, the design representation is always unscheduled. Therefore, we need to consider not only operation scheduling but also data path allocation. In our research work, we describe a high-level test synthesis algorithm for operation scheduling and data path allocation [24]. It generates highly testable data path designs while maximizing the sharing of test registers, which means that only a small number of registers is modified for BIST. The algorithm also produces designs with high test concurrency, thereby decreasing test time. The algorithm is motivated by the fact that if the contents of signature registers can be used as test patterns, the overall testing time can be reduced due to increased testing parallelism; moreover, the number of extra registers for implementing BIST can be reduced. In our approach, module allocation is guided by a testability balance principle whereby incompletely embedded modules can be mapped onto the same function module that is completely embedded. In this way, the incompletely embedded module after allocation will be fully testable. The register allocation is guided by an incremental sharing measurement which chooses merges that result in large increases in the sharing degrees of registers. When two modules are merged, the operations executed on these modules must be scheduled in different control steps so that they can share the same component. Similarly for registers, the variables stored in them must be disjoint. We present a rescheduling transformation which is performed by a merge-sort algorithm. These transformations locally change the execution order of some operations in the current schedule in order to improve the testability and satisfy the scheduling constraints imposed by the merger. Contrary to other works in which the scheduling and allocation tasks are performed independently, our approach integrates scheduling and allocation by performing them simultaneously, so that the effects of scheduling and allocation on testability are exploited more effectively. In [22], we also introduce some concepts and techniques to improve our previous work [24] during the operation scheduling part, especially to determine the execution order of different operations when rescheduling transformations are performed.
However, the above integrated approach still has some drawbacks. For example, during module allocation, if an incompletely embedded module cannot find a match to be merged with a completely embedded module during the iterative allocation algorithm, it probably will not become fully testable. If there are several such modules un-mapped in the design, the resulting test quality will not be satisfactory. Similarly, once a pair of operations in the same scheduling step has been chosen to be merged based on the allocation balance principle, we have to introduce dummy places, which have negative impacts or increase the number of control steps, leading to longer execution time or slower performance. This motivates us to make use of two types of redundant transformations introduced in [11,13], which add redundancy that improves the sharing of test resources in the data path without affecting the scheduling (latency) and the functional resource requirements of the behavior, to improve our data path allocation algorithm and to make all incompletely embedded modules fully testable [28]. This also avoids an increase in scheduling steps, because one of the operations can be merged with the introduced redundant operations at different scheduling steps. In [28] we have demonstrated the advantage of the approach by introducing the redundant transformations for operation scheduling and data path allocation.
I. Testability Metrics-Based Synthesis
In [19], we also present a different BIST synthesis methodology, namely a BIST testability-metrics-based algorithm for operation scheduling and data path allocation. It is based on the BIST data path testability analysis algorithm at the register-transfer level described in the previous subsections. In this approach, module and register allocation are guided by a testability balance technique, and the selection of nodes to be merged is based on the testability measures generated by the testability analysis algorithm. The main goal is to generate a data path with good controllability and observability for all the nodes and with as few loops as possible. The basic idea is to fold nodes with good controllability and bad observability onto nodes with good observability and bad controllability. Note that the controllability of a node is defined as the best controllability of any of its input lines, while the observability of a node is the best observability of any of its output lines. In this way, the new node will inherit the good controllability from one of the old nodes and the good observability from the other. The synthesis algorithm introduces scheduling constraints imposed by data path allocation and performs scheduling and allocation simultaneously in an iterative fashion, so that their effects on testability are exploited more effectively. With the help of an incremental BIST testability analysis, and a state reachability analysis with its incremental approach for the control path at the register-transfer level, we make use of some concepts and techniques to improve the above work [19] not only during the data path synthesis part, but also during the operation scheduling part [26]. Similarly, redundant transformations


have been introduced [29]. We add redundancy that improves the sharing of test resources in the data path without affecting the scheduling step (latency) and the functional resource requirements of the behavior, to improve our data path allocation algorithm and to make all modules fully testable. This also avoids an increase in scheduling steps, because one of the operations can be merged with the introduced redundant operations at different scheduling steps [30].
REFERENCES
[1] V. D. Agrawal, C. R. Kime, and K. K. Saluja, "A tutorial on built-in self-test, part 1: principles," IEEE Design and Test of Computers, March 1993.
[2] R. Camposano and W. H. Wolf, High-Level VLSI Synthesis, Kluwer Academic Publishers, 1991.
[3] J. Carletta and C. A. Papachristou, "Testability analysis and insertion for RTL circuits based on pseudorandom BIST," in Proceedings of the International Conference on Computer Design, 1995.
[4] S. Chiu and C. A. Papachristou, "A design for testability scheme with applications to data path synthesis," in Proceedings of the Design Automation Conference, pp. 271-277, June 1991.
[5] D. D. Gajski, N. D. Dutt, A. C.-H. Wu, and S. Y.-L. Lin, High-Level Synthesis: Introduction to Chip and System Design, Kluwer Academic Publishers, 1992.
[6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Co., San Francisco, 1979.
[7] X. Gu, E. Larsson, K. Kuchcinski, and Z. Peng, "A controller testability and enhancement technique," in Proceedings of the European Design and Test Conference, pp. 153-157, Paris, France, March 1997.
[8] K. Kim, D. S. Ha, and J. G. Tront, "On using signature registers as pseudorandom pattern generators in built-in self-testing," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 7(8):919-928, August 1988.
[9] R. Lisanke, F. Brglez, A. J. Degues, and D. Gregory, "Testability-driven random test pattern generation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 6:1082-1087, 1987.
[10] C. A. Papachristou and J. Carletta, "Test synthesis in the behavioral domain," in Proceedings of the International Test Conference, October 1995.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.233-238.

Micro Electro Mechanical Systems


M. Yuvaraj and B. Gokul
IV Year, EEE Department, Srinivasa Institute of Engineering and Technology
Abstract Micro electro mechanical systems (MEMS) are the technology of very small mechanical devices driven by electricity; it merges at the nano-scale into nano electromechanical systems (NEMS) and nanotechnology. MEMS are also referred to as micromachines or Micro Systems Technology (MST). MEMS are separate and distinct from the hypothetical vision of molecular nanotechnology or molecular electronics. MEMS are made up of components between 1 and 100 micrometers in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometers (20 millionths of a meter) to a millimeter. They usually consist of a central unit that processes data, the microprocessor, and several components that interact with the outside, such as microsensors. This paper deals with the detailed explanation of MEMS and its wide range of applications in every field. It gives a brief idea of how existing technology is being replaced by MEMS. Some important applications of MEMS are covered in detail in this paper. One recent MEMS sensor, placed in the retina of the eye to combat glaucoma, a serious eye disease, is described in detail in this paper.

I. INTRODUCTION
MEMS technology can be implemented using a number of different materials and manufacturing techniques; the choice will depend on the device being created and the market sector in which it has to operate.

II. MATERIALS FOR MEMS MANUFACTURING
1. Silicon
2. Polymers
3. Metals

III. MEMS BASIC PROCESSES
1. Deposition processes
2. Physical deposition
3. Chemical deposition
4. Patterning
5. Lithography
6. Etching processes

IV. MEMS MANUFACTURING TECHNOLOGIES

A. Bulk Micromachining
Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. Bulk micromachining was essential in enabling the high-performance pressure sensors and accelerometers that changed the shape of the sensor industry in the 80's and 90's.

Fig. 1. Bulk micromachining

B. Surface Micromachining
Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigitated comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low-cost accelerometers, e.g. for automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient.


C. High Aspect Ratio (HAR) Silicon Micromachining


Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices, but in many cases the distinction between the two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine the good performance typical of bulk micromachining with the comb structures and in-plane operation typical of surface micromachining.
While it is common in surface micromachining to have
structural layer thickness in the range of 2 m, in HAR
silicon micromachining the thickness can be from 10 to
100 m.

damaged, leading to progressive, irreversible loss of


vision. It is often, but not always, associated with
increased pressure of the fluid in the eye. Glaucoma has
been nicknamed the "silent thief of sight" because the loss
of vision normally occurs gradually over a long period of
time and is often only recognized when the disease is
quite advanced. Once lost, this damaged visual field
cannot be recovered. Worldwide, it is the second leading
cause of blindness. It is also the first leading cause of
blindness among African Americans. Glaucoma affects 1
in 200 people aged fifty and younger, and 1 in 10 over the
age of eighty. If the condition is detected early enough it
is possible to arrest the development or slow the
progression with medical and surgical means.

The materials commonly used in HAR silicon


micromachining are thick polycrystalline silicon, known
as epi-poly, and bonded silicon-on-insulator (SOI) wafers
although processes for bulk silicon wafer also have been
created (SCREAM).
V. APPLICATIONS
1. Sensor
2. Actuator
3. Inkjet printers
4. Accelerometers

Fig. 2.Statistics of glaucoma in ages

5. Accelerometers in consumer electronics devices such


as game controllers (personal media players / cell
phones (Apple phone, various Nokia mobile phone
models, various HTC PDA models and a number of
Digital Cameras (various Canon Digital IXUS
models).
6. MEMS gyroscopes used in modern cars and other
applications to detect yaw; e.g. to deploy a roll over
bar or trigger dynamic stability control.
7. Silicon pressure sensors e.g. car tire pressure sensors,
and disposable blood pressure sensors.
8. Displays e.g. the DMD chip in a projector based on
DLP technology has on its surface several hundred
thousand micro mirrors.

a) The Risk in Glaucoma


Glaucoma is a random affliction, yet there are significant
risk groups. The main risks include age, race, and family
history. While prevalence of glaucoma increases
significantly with age, one also observes increasing
prevalence in younger populations. Prevalence is about 3
to 5 times higher among the population of African or
Hispanic ascent. Family history of glaucoma is another
strong risk factor. Other risk factors include diabetes and
nearsightedness. Glaucoma testing every five years is
recommended starting at age 35 for people at low risk,
and every one to two years for people at high risk or over
the age of 60.

9. Optical switching technology which is used for


switching technology and alignment for data
communications.

There are no initial indications or symptoms. The disease


progresses very gradually, and damage to the optic nerve
is irreversible. It is estimated that nearly half of the cases
go undiagnosed.

10. Bio-MEMS applications in medical and health related


technologies from Lab-On-Chip to Micro Total
Analysis (biosensor, chemo sensor).

B. Diagonising

VI. APPLYING MEMS TECHNOLOGY TO THE


DIAGNOSIS OF GLAUCOMA
A. Description
Glaucoma is a disease in which the optic nerve is

While checking nerve damage or loss of peripheral vision


works, by the time these indications appear irreversible
damage has occurred. The challenge is to prevent damage
from happening and progressing. The main obstacle to
earlier diagnosis is due to the fact that IOP has a dynamic
behavior and will vary throughout the day in cycles that


tend to be repeatable. This makes IOP very difficult to
monitor in real time because the eye is very sensitive.
Existing devices enable ophthalmologists to perform
single pressure measurement snapshots of IOP but fail to
capture the around-the-clock dynamic behavior of IOP,
particularly pressure peaks happening during sleep or
outside of office hours.
C. Objective of MEMS in Diagnosis of Glaucoma
The solution is designed to achieve two objectives. The
main objective is to help manage the treatment of
glaucoma patients by obtaining around-the-clock IOP
profiles and thereby enable tailoring personalized
treatment. Our system lets ophthalmologists verify that
the treatment is working and allows subsequent follow-up
measurements to monitor the efficacy of the treatment.
The second objective is to provide new insights for earlier
diagnosis in at-risk patients. From a technical point of
view, we do not perform a measurement of IOP by
applying a force, such as by indentation, applanation or
rebound techniques, on the cornea.

D. The Working of MEMS as a Sensor in the Eye
Instead, we developed a sensor that floats on the eye,
following changes in corneal curvature, with no
physical connection to other system components. The
sensor provides continuous monitoring information to
identify peaks in IOP and when they occur.

Fig. 3. The soft contact lens like sensor, with its MEMS antenna
(golden rings), its MEMS sensor (silver ring close to the outer
edge), and microprocessor.

Fig. 4. Sensor placed on the eye, centered on the cornea with no
elements in the line of sight.

E. Components of the Monitoring System
The main components are:
1. The soft, disposable contact lens with an embedded
MEMS sensor.
2. The microprocessor.
The MEMS sensor includes circular active and passive
strain gauges to measure corneal curvature changes (an
illustrative sketch of such a conversion follows Fig. 5), and a
loop antenna to receive power and to send back
information to the external system. The microprocessor is
a small full-custom ASIC about 2 mm square and 50
microns thick.
The other components include the adhesive external loop
antenna worn around the eye and the data cable driving the
antenna, connected to a portable, rechargeable
battery-powered recorder.
Finally, software on the ophthalmologist's computer
initiates the monitoring session and presents the IOP data
collected over a period of up to 24 hours. A monitoring session
will normally start and end with a regular gold-standard
IOP measurement called "tonometry". The Sensimed
monitoring data then provide additional information to the
ophthalmologist on the dynamic behavior of IOP to help
diagnose and treat glaucoma patients in a better way. Our
solution does not replace current equipment, nor does it
require the practitioner to change the basis for decisions.
Instead it complements current information and enhances
the physician's data for better patient care. However, the
solution is intended to replace the expensive and cumbersome
curves obtained by a series of single pressure
measurement snapshots currently made in a sleep
laboratory or by multiple office visits.

Fig. 5. This illustration shows the various components of the
solution placed on the body.
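As a rough illustration of how a strain-gauge reading could map
to an IOP estimate, the sketch below applies a simple linear
calibration. All constants and the conversion itself are
hypothetical placeholders, not Sensimed device parameters.

/*
 * Illustrative conversion of a strain-gauge bridge reading to an
 * intraocular pressure (IOP) estimate. All constants are hypothetical
 * placeholders for whatever a real device's calibration would provide.
 */
#include <stdio.h>

#define V_EXCITE   1.8      /* bridge excitation voltage (V), assumed   */
#define K_BRIDGE   2.0e3    /* microstrain per volt of bridge output    */
#define K_IOP      0.05     /* mmHg per microstrain, assumed linear fit */
#define IOP_OFFSET 10.0     /* baseline IOP (mmHg) at zero strain       */

/* Convert one differential bridge voltage sample to an IOP estimate. */
static double iop_from_bridge(double v_diff)
{
    double microstrain = (v_diff / V_EXCITE) * K_BRIDGE;
    return IOP_OFFSET + K_IOP * microstrain;
}

int main(void)
{
    /* A few made-up samples standing in for a 24-hour recording. */
    double v_samples[] = { 0.010, 0.012, 0.019, 0.015 };
    int n = sizeof v_samples / sizeof v_samples[0];
    double peak = 0.0;

    for (int i = 0; i < n; i++) {
        double iop = iop_from_bridge(v_samples[i]);
        if (iop > peak)
            peak = iop;
        printf("sample %d: %.1f mmHg\n", i, iop);
    }
    printf("peak IOP over session: %.1f mmHg\n", peak);
    return 0;
}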


The data cable and the recorder are reusable equipment
and belong to the ophthalmologist. The sensor and the
antenna are disposable.
People who are currently treated for glaucoma are
gradually losing sight. Based on our limited experience,
there is eagerness from these patients to understand what
happens in their eyes and high hope that the information
gathered by our system will help them and their
ophthalmologist keep the disease under control. Through
this MEMS sensor, we believe that patients will see
the benefits of our monitoring system as far outweighing
any resistance to the procedure.
VII. OTHER APPLICATIONS
A. MEMS Mirror

Digital Light Processing (DLP) has become a household
name with the advent of HD TVs. DLP technology is
based on Digital Micro-mirror Device (DMD) chips that
contain an array of more than 2 million micro-mirrors. In
a projection system, actuation electrodes tilt mirrors either
toward a light source (ON state) or away from it (OFF
state), thereby creating a light or dark pixel on the
projection surface. Precise control of the mirror is
therefore very important.

Fig. 5. 3D model of DLP mirror

The mirror plate is modeled using a 'Rigid Plate'
component, labeled as 'Mirror' in the schematic. By using
a rigid plate, it is assumed the mirror plate behaves like a
rigid body and does not flex. The electrodes used to
actuate the mirror are labeled 'Yoke Electrode' and
'Hinge Electrode' and are connected to 'Pulse' voltage
sources. The electrodes are modeled using 'Electrode'
components from the Architect parts library. There are
two sets of electrodes: one set tilts the mirror in the
clockwise direction; the other tilts the mirror in the
counter-clockwise direction. The mirror is mounted on a yoke,
also modeled using a 'Rigid Plate' component, and
attached to the mirror support posts using hinges. The
hinges are modeled using 'Linear Beam' components. A
'Plate Damper' is used to model damping forces as the
mirror moves.

a) Simulation
'e_left' shows the actuation voltage for the left-side
electrodes; 'e_right' for the electrodes on the right. The
'Displacement' curve shows the movement of one mirror
corner. The voltage pulse applied on the left electrode
causes the mirror to tilt at around 5 µs. At ~20 µs the
lower side of the mirror plate contacts the landing pad and
oscillates to rest. The plate stays in contact until ~40 µs, at
which time the left electrode is deactivated, causing the
mirror to lift off from the landing pad and oscillate with a
decaying amplitude. At 100 µs the right electrode is
activated, causing the mirror to tilt in the opposite
direction and come into contact with the other landing pad
at ~110 µs. After the right electrode is deactivated the
mirror again lifts off and starts to oscillate with decaying
amplitude towards rest. This simulation (a coupled
electro-mechanical simulation with damping and contact)
takes just 13 seconds to run on a 2 GHz processor.

Fig. 6. Simulation results of a transient analysis of a switch
cycle over 200 µs.
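The ring-down described above can be pictured with a minimal
second-order underdamped model. The sketch below prints the
decaying oscillation after lift-off; the natural frequency,
damping ratio and time step are chosen purely for illustration
and are not the Architect model's values.

/*
 * Minimal sketch of the decaying oscillation the mirror corner shows
 * after lift-off, modeled as an underdamped second-order response:
 *   x(t) = exp(-z*wn*t) * cos(wd*t)   (normalized amplitude)
 * The frequency and damping values below are illustrative only.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double wn = 2.0 * PI * 150e3;       /* natural frequency (rad/s), assumed */
    const double z  = 0.05;                   /* damping ratio, assumed             */
    const double wd = wn * sqrt(1.0 - z * z); /* damped oscillation frequency       */

    /* Sample the first 40 us of ring-down in 2 us steps. */
    for (double t = 0.0; t <= 40e-6; t += 2e-6) {
        double x = exp(-z * wn * t) * cos(wd * t);
        printf("t = %5.1f us  x = %+.3f\n", t * 1e6, x);
    }
    return 0;
}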

B. MEMS Microphones
A combination MEMS microphone/amplifier chip holds
the promise of generating large-scale applications for
hearing aids and cell phones.


The analog chip is a multi-membrane CMOS IC that
holds an array of 64 micromachined condenser
microphones etched in silicon. Also incorporated on the
3- by 3.65- by 0.5-mm chip is a MOSFET amplifier. The
chip's omnidirectional performance characteristics
include sensitivity (at 1 V/Pa) of -40 to +4 dB and a target
noise level of 35 dB SPL (sound pressure level). The
frequency range is 100 Hz to 10,000 Hz. The device
typically operates from a 3-V supply (5 V maximum) and
consumes a mere 130 µA or less. Matching this level of
performance would require much larger electret
microphone chips than those being used in hearing aids today.
The chip accepts a maximum input sound level of 110 dB
and produces an analog output voltage. Future versions
might also integrate analog-to-digital converters (ADCs)
for digital outputs, providing even higher levels of noise
immunity.

Fig. 7. MEMS microphones

Fig. 8. Usage of MEMS microphones over the years

C. Cardio MEMS
CardioMEMS, a member of Georgia Tech's Advanced
Technology Development Center (ATDC), is pioneering a
Bio-MEMS sensor to monitor heart patients. They have
combined wireless technology with Bio-MEMS to
provide doctors with more information while making
testing less invasive for patients. The EndoSensor
measures blood pressure in people who have an
abdominal aortic aneurysm, a weakening in the lower
aorta. This condition ranks as the 13th leading cause of
death in the United States. If the aneurysm ruptures, a
person can bleed to death within minutes. Doctors can
treat an aneurysm with a stent graft, a slender fabric tube
placed inside the bulging artery to brace it and relieve
pressure by creating a channel for blood flow. Still, the
stent can fail, resulting in leakage of blood into the
aneurysm, which can cause the aneurysm to burst. For
this reason, lifetime monitoring is required. The photo
shows the EndoSensor that can be implanted to measure
pressure in an aneurysm being treated by a stent graft.
The CardioMEMS biocompatible sensor, implanted along
with the stent, monitors the stent more effectively than CT
scans. It is also cheaper and more convenient. During
checkups, patients don't even need to remove clothes. The
physician merely waves an electronic wand in front of the
patient's chest. Radio-frequency waves activate the
EndoSensor, which takes pressure measurements and then
relays the information to an external receiver and monitor.

Fig. 9. CardioMEMS acting as a monitoring device for heart
attack

VIII. CONCLUSION
In many respects, MEMS technology development
parallels that of solid state electronics. However, it lacks
the definitive pedigree of that more extensive topic, which
started with the invention of the transistor in 1947,
followed by the integrated circuit in 1959. The closest
analogy for MEMS was the 1954 discovery of the
piezoresistive effect in silicon, which enabled strain in
micromechanical structures to be measured. Subsequent
developments in high-gain, low-noise amplifier
technology made capacitance-based sensors and actuators
feasible. MEMS products based on piezoresistive and
capacitance sensing now include pressure and flow
sensors, accelerometers, gyroscopes, microphones, digital
light projectors, oscillators, and RF switches. MEMS-based
products produced in 2005 had a value of $8
billion, 40% of which was sensors. The balance was for
products that included micromachined features, such as
ink-jet print heads, catheters, and RF IC chips with
embedded inductors.
Growth projections follow a hockey-stick curve, with the
value of products rising to $40 billion in 2015 and $200
billion in 2025. Growth to date has come from a
combination of technology displacement, as exemplified


by automotive pressure sensors and airbag
accelerometers, and from new products, such as miniaturized
guidance systems for military applications and wireless
tire pressure sensors. Much of the growth in the MEMS
business is expected to come from products that are in
early stages of development or yet to be invented. Some
of these devices include disposable chips for performing
assays on blood and tissue samples, which are now
performed in hospital laboratories, integrated optical
switching and processing chips, and various RF
communication and remote sensing products.

Fig. 10. Applications of MEMS in various fields and their evolution

The key to enabling the projected 25-fold growth in
MEMS products is the development of appropriate
technologies for integrating multiple devices with
electronics on a single chip. At present, there are two
approaches to integrating MEMS devices with
electronics: either the MEMS device is fabricated in
polysilicon as part of the CMOS wafer fabrication sequence,
or a discrete MEMS device is packaged with a separate
ASIC chip. Neither of these approaches is entirely
satisfactory, though, for building the high-value,
system-on-chip products that are envisioned. It is this
author's opinion that a combination of self-assembly
techniques in conjunction with wafer stacking offers a
viable path to realizing ubiquitous, complex MEMS systems.

REFERENCE
[1] Waldner, Jean-Baptiste (2008). Nanocomputers and Swarm
Intelligence. p. 205.
[2] Electromechanical monolithic resonator.
[3] R.J. Wilfinger, P.H. Bardell and D.S. Chhabra: The
resonistor, a frequency selective device utilizing the
mechanical resonance of a substrate, IBM J. 12, 113-118
(1968).
[4] Williams, K.R.; Muller, R.S. (1996). "Etch rates for
micromachining processing". Journal of
Microelectromechanical Systems 5: 256.
[5] Kovacs, G.T.A.; Maluf, N.I.; Petersen, K.E. (1998). "Bulk
micromachining of silicon". Proceedings of the IEEE 86:
1536.
[6] Chang, Floy I. (1995). Gas-phase silicon micromachining
with xenon difluoride. pp. 117.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.239-242.

Design Technique of Hardware to Reduce Power Consumption in Embedded System

H. Anandkumar Singh
M.Tech, EEE Dept., Vel Tech Technical University, Chennai
Email: hanandsingh@yahoo.com
Abstract: Power consumption has emerged as a distinct axis for
system optimization, especially for battery-operated
applications. Circuit- and device-level low-power
design has leveraged battery-operated embedded systems
for dozens of years. Today, high-level or system-level
power reduction is believed to offer another significant power
saving opportunity. Nevertheless, existing power-related
tools are unfamiliar to system and software designers,
who have to pay more attention to power consumption than
to other optimization factors. In this paper, we introduce a
series of power measurement and estimation designs that
differentiate the quality and effectiveness of high-level
power reduction practices for embedded systems. This paper
explains a prototype design of an embedded hardware
algorithm to fulfill the necessary requirements for high-level
power reduction; we have developed a cycle-accurate energy
measurement technique using switched capacitors. This new
technique enabled us to develop innovative power
measurement tools for memory devices, FPGAs and CPUs.
These individual power measurement tools contribute to quality
energy characterization of components, and eventually lead
to an integrated system-level power estimation tool [1].

I. INTRODUCTION
Together with speed and cost, energy consumption is now
a primary performance metric for battery-operated
embedded systems. A well-designed embedded system
should be globally optimized to the target application,
from user interface right through to device technology.
This kind of global optimization over many layers of
software and hardware is challenging, due to the need for
extensive inter-disciplinary collaborations. Energy
estimation is a routine job in low-level hardware design.
Unfortunately, at this stage, the specific application of
most hardware components is not known, and designers
cannot perform an application-specific optimization.
Another opportunity for optimization is given to software
and system designers; but they are often unfamiliar with
hardware-related energy issues. This problem is
compounded because traditional energy estimation tools
like SPICE and Power Mill [1] are designed for use by
low level hardware engineers, which can discourage
designers working at a higher level to attempt global
optimization.
With the increasing trend towards low-power design, a
higher-fidelity, system-level energy estimation
environment is demanded. In this paper, we propose a
series of energy measurement, estimation and exploration
tools which overcome the limitations of existing tools. We
also propose a hardware and software design technique
to overcome excess energy usage by the processor and
other controllers in an embedded system.
II. RELATED WORK
A simple approach is to analyze actual measurements
from a hardware platform. Tools like PowerScope [2] and
Itsy [3] use computer-controlled multimeters or A/D
converters to measure energy consumption. Other studies
adopt a software energy estimation methodology, which has
been an important aspect of embedded systems design
since it was first introduced by [4]. While the early
literature [4] focuses on the possibility of software
optimization, recently proposed tools extend the
methodology to support high-level hardware optimization
[5,6,7,8]. JouleTrack [5] is a publicly available web-based
software energy profiling tool for processor cores.
SimplePower [6] and Wattch [7] estimate the power
consumption of processors including the on-chip cache,
on-chip bus and on-chip SRAM, but still exclude the
off-chip subsystem. JouleTrack and similar systems [5,6,7]
are good for architecture-level analysis since they only
consider processors. On the other hand, a power
estimation framework [8] presents a system-level energy
estimation that includes a processor, L1/L2 cache, off-chip
memory, and DC-DC converter. However, the energy
models for off-chip memory devices are too simple to
support a cycle-level analysis. A simple power
characterization is taken and devices are assumed to have
only two modes: active and idle. Consequently, most
high-level energy estimators are incapable of cycle-accurate
analysis of each important component, since
energy consumption is averaged out over the entire
execution time, which means that they are suitable for
high-level power estimation but not for high-level power
reduction.
III. POWER CHARACTERIZATION FOR HIGH-LEVEL POWER REDUCTION
High-level design, such as the RTL and behavioral levels,
overcomes the complexity of power consumption with
proper abstraction, which also hides low-level design
details like power consumption. This means that high-level
power saving may not have to consider microscopic
power changes. This is true for high-level power
estimation. Suppose we have three different
characterization schemes for the petrol consumption of a
vehicle, as shown in Table 1.

Table 1. Characterization of petrol consumption of a vehicle
(G, S, I, R, n, c: petrol consumption, vehicle speed, idle
consumption, engine restarting cost, number of engine starts,
and a constant, respectively).

Characterization 1   Linear model                G = cS
Characterization 2   Non-linear model            G = cS + I
Characterization 3   Including restarting cost   G = cS + I + nR

All three characterization schemes are useful for
high-level petrol consumption estimation. Of course,
Characterization 3 is more accurate than Characterization
2, and Characterization 2 is better than Characterization 1.
However, if a vehicle stops for only a few minutes and the
engine is never turned off during the entire trip, all the
characterization schemes may show similar accuracy.
Now, suppose we need to devise a petrol-saving
scheme that is useful when a vehicle temporarily
stops at a parking space. Characterization 1 shows that the
vehicle does not consume any petrol when the speed is
zero, i.e., the vehicle stops. Thus, there is even no need to
devise such a petrol-saving technique.
Characterization 2 shows that I is still consumed while
the vehicle stops. Thus, the best way to save petrol
according to Characterization 2 is to turn off the
engine whenever the vehicle stops, even for just a second.
Characterization 3 considers the engine restarting cost,
and thus we had better keep the engine running during a short
stop, which is a practical solution while the previous ones are
not applicable to real situations. Consequently, for the
derivation of a power saving policy, we must be
extremely careful in the abstraction of low-level behaviors
even for high-level approaches, unlike in the case of
high-level power estimation.

IV. ENERGY STATE MACHINE
In this paper, we introduce an energy state machine to
describe the accurate energy consumption behavior of digital
systems, where each node and each transition describe the
static power and the dynamic energy, respectively. A
finite state machine M is a four-tuple (S, Σ, δ, s0) where S
= (s0, ..., sn) is a set of finite states, Σ is a finite input
alphabet, δ: Σ × S → S is a state transition function, and
s0 is the initial state. Each arc, which denotes a state transition
δ, is labeled with a finite set of state transitions T = (t0,
..., tm).

Definition 1. An energy state machine is a state machine
M(S, σ, ε) that associates a leakage (static) energy σi from
(σ0, ..., σn) with each of its finite states si in S = (s0, ..., sn)
and a dynamic energy εj with each state transition in
T = (t0, ..., tm).

Fig. 1 illustrates the variation of the power supply current idd
in asynchronous and synchronous devices. It also justifies
the static and the dynamic energy association of the
energy state machine. Asynchronous devices consume the
dynamic energy when strobe signals are issued. The
strobe signal changes the device state, leading to a variation
of the static energy consumption. Synchronous devices
consume the dynamic energy at each clock edge rather
than at logical state changes.

Fig. 1. (a) idd of an asynchronous device. (b) idd of a synchronous device.
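To make Definition 1 concrete, the sketch below renders a
two-state energy machine as a C structure and accumulates
energy over an input trace. The structure, names and numbers
are illustrative assumptions, not the paper's tool interface.

/*
 * Minimal sketch of the energy state machine of Definition 1: each
 * state carries a static (leakage) power, each transition a dynamic
 * energy, and a run over an input trace accumulates total energy.
 */
#include <stdio.h>

#define N_STATES 2

typedef struct {
    double static_power;   /* sigma_i: leakage power in state i (W)  */
    int    next[2];        /* delta: next state for inputs 0 and 1   */
    double dyn_energy[2];  /* epsilon: energy of each transition (J) */
} State;

int main(void)
{
    /* Two-state machine: IDLE (low leakage) and ACTIVE (high leakage). */
    State m[N_STATES] = {
        { 1.0e-6, {0, 1}, {0.0,    5.0e-9} },  /* IDLE   */
        { 8.0e-6, {0, 1}, {2.0e-9, 3.0e-9} }   /* ACTIVE */
    };
    const double t_clk = 1.0e-6;   /* clock period: 1 us, assumed */
    int inputs[] = { 1, 1, 0, 0, 1, 0 };
    int s = 0;
    double energy = 0.0;

    for (int i = 0; i < 6; i++) {
        energy += m[s].static_power * t_clk;   /* static energy over the cycle */
        energy += m[s].dyn_energy[inputs[i]];  /* dynamic energy of transition */
        s = m[s].next[inputs[i]];
    }
    printf("total energy over trace: %.3e J\n", energy);
    return 0;
}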

Fig. 2. (a) State diagram of an asynchronous machine. (b) State
diagram of a synchronous machine.

V. CYCLE-ACCURATE ENERGY MEASUREMENT
To complete the energy state machine, we have to
annotate the energy values ε and σ. At first, we need to
distinguish the energy consumption behavior of target devices
before deciding on a characterization method. While a
synchronous energy FSM is ideal for high-fidelity
characterization of the energy consumption of synchronous
digital systems, a special technique is required to annotate
energy values for transitions and states. As shown in Fig.
1, the dynamic energy consumption represented by the idd
current only occurs during the propagation time, which is
usually a matter of nanoseconds. The propagation delay is
not determined by the operating frequency but by the
physical design, and thus the power spectrum of idd
reaches well over several hundred MHz, regardless of the
operating frequency. This seriously discourages designers
from trying to distinguish idd from the cycle-by-cycle
dynamic energy using conventional equipment such as an
ammeter [9]. Since cycle-accurate energy measurement is
essential to annotate a synchronous energy FSM, we have
designed a special technique to handle the cycle-by-cycle
energy measurement of high-speed digital systems [9].
Fig. 3 shows a schematic diagram of the measurement
setup. We transfer charges to the capacitor and operate the
target device using these charges. By simply measuring
the initial and the final voltage at the capacitor, we can
derive the exact energy consumed by the target device.
There are on-chip bypass capacitors for mitigating power
supply fluctuation in most modern devices, which make
energy calculations complex. We denote this
capacitance by CB in Fig. 3. Its value is determined by the
charge-sharing rule [10]. In addition, modern
high-performance devices are not generally free from leakage
current. We add Rs into the device model to represent the
leakage current. While most system-level energy
simulators are primarily concerned with CL, we use
fairly realistic energy models for both measurement and
characterization.

Fig. 3. Schematic diagram of the measurement setup.

The switch represents the clock input to the circuit. The
static (leakage) energy consumed from the
capacitor during the i-th clock cycle can be represented by

Es(i) = (CS1 + CB) x (VC1(i)^2 - VC1(i+)^2) / 2

We eliminate Δt by converting the static power to energy
consumption over the clock period. The dynamic energy
of the i-th clock cycle, Ed(i), is denoted by

Ed(i) = (CS1 + CB) x (VC1(i+)^2 - VC1(i++)^2) / 2

The total energy is given by

E = Σi [Es(i) + Ed(i)]

It is not easy to determine the exact time that delimits the
period for the dynamic energy, i.e., VC1(i++). Improper
division into dynamic and static energy may cause severe
errors if there are major changes in clock frequency. To
avoid this, we have to measure the cycle-accurate energy
at various clock frequencies. We need to cross-check the
dynamic energy values measured at different clock
frequencies and thus confirm the dynamic energy values
[10].
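As a minimal illustration of how the measured voltages turn
into energy numbers, the following sketch applies the three
equations above to placeholder samples; the capacitance and
voltage values are invented for demonstration only.

/*
 * Direct translation of the per-cycle energy equations: given the
 * capacitor voltage at the start of cycle i, after the static interval
 * (i+) and after the clock edge (i++), compute Es(i), Ed(i) and the
 * running total E.
 */
#include <stdio.h>

/* Energy released by a capacitor discharging from v_hi to v_lo. */
static double cap_energy(double c, double v_hi, double v_lo)
{
    return 0.5 * c * (v_hi * v_hi - v_lo * v_lo);
}

int main(void)
{
    const double c = 100e-6;  /* CS1 + CB, assumed 100 uF */
    /* v[i][0] = VC1(i), v[i][1] = VC1(i+), v[i][2] = VC1(i++) */
    double v[3][3] = {
        { 3.000, 2.995, 2.980 },
        { 2.980, 2.976, 2.962 },
        { 2.962, 2.958, 2.945 },
    };
    double e_total = 0.0;

    for (int i = 0; i < 3; i++) {
        double es = cap_energy(c, v[i][0], v[i][1]); /* static over cycle */
        double ed = cap_energy(c, v[i][1], v[i][2]); /* dynamic at edge   */
        e_total += es + ed;
        printf("cycle %d: Es = %.2e J, Ed = %.2e J\n", i, es, ed);
    }
    printf("E = %.2e J\n", e_total);
    return 0;
}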
VI. PROPOSED HARDWARE DESIGN
The embedded hardware has a central processing unit
which performs ALU operations, an internal bus for data
communication with the memory device, and a D flip-flop
latch to store the output bit. The power requirement of
each unit can be calculated, and the required power can be
delivered to the unit only when it is needed for an operation.
Whenever an operation requires the unit, it is activated
using a switching transistor along with the capacitor CB.
When the operation is completed, the unit does not
consume energy.
After measuring the power required to execute on the
unit, the required clock frequency can be delivered and the
switching capacitor will store the energy required to
execute the instruction. When the capacitor receives
energy, it activates the transistor and the energy is
delivered to the CPU/memory device for the execution
of the instruction; a conceptual sketch of this policy is
given after Fig. 4.

Fig. 4. Schematic hardware prototype to control the CPU or
memory device
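A conceptual sketch of the gating policy is shown below: a
unit is powered only for the duration of its operation. The
register, bit masks and helper names are hypothetical,
standing in for whatever the real prototype hardware exposes.

/*
 * Conceptual sketch of the proposed gating policy: the switching
 * transistor powers a unit only while it has an instruction to
 * execute. The register and unit interface are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical memory-mapped control bits for the gating transistors. */
#define GATE_CPU   (1u << 0)
#define GATE_MEM   (1u << 1)

static volatile uint32_t power_gate_reg; /* stand-in for real hardware */

static void unit_power(uint32_t unit, bool on)
{
    if (on)
        power_gate_reg |=  unit;   /* charge switched capacitor, enable  */
    else
        power_gate_reg &= ~unit;   /* isolate unit: no idle consumption  */
}

/* Run one queued operation with the unit powered only for its duration. */
static void run_gated(uint32_t unit, void (*op)(void))
{
    unit_power(unit, true);
    op();                          /* unit draws energy only here */
    unit_power(unit, false);
}

static void alu_job(void) { /* ... ALU work ... */ }

int main(void)
{
    run_gated(GATE_CPU, alu_job);
    return 0;
}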

VII. CONCLUSION
We conclude that the energy requirement module will
measure the energy requirement of each module, and that
energy will be used to switch a transistor which activates
the module for the required duration of time, leaving it
idle otherwise. This optimizes the power usage of the
embedded system. System components are modeled as
finite-state machines, associating transitions with dynamic
energy and states with leakage power. The superior
modeling ability of the energy state machine enables
precise energy estimation while providing a fast and
user-friendly environment for system designers who are
not familiar with device technologies. The series of energy
measurement and estimation tools forms an easy and free
energy exploration environment which encourages users
without detailed knowledge to perform system-level
energy optimization.

REFERENCE
[1] Naehyuck Chang, In-House Tools for Low-Power
Embedded Systems, Seoul National University Energy
Explorer, http://see.snu.ac.kr.
[2] C. X. Huang, B. Zhang, A.-C. Deng, and B. Swirski, "The
design and implementation of PowerMill", in Proceedings
of the International Workshop on Low Power Design, pp.
105-110, Apr. 1995.
[3] J. Flinn and M. Satyanarayanan, "PowerScope: a tool for
profiling the energy usage of mobile applications", in
Proceedings of the Second IEEE Workshop on Mobile
Computing Systems and Applications, pp. 2-10, Feb.
1999.
[4] W. R. Hamburgen, D. A. Wallach, M. A. Viredaz, L. S.
Brakmo, C. A. Waldspurger, J. F. Bartlett, T. Mann, and
K. I. Farkas, "Itsy: Stretching the bounds of mobile
computing", IEEE Computer, vol. 34, pp. 28-37, Apr.
2001.
[5] V. Tiwari, S. Malik, and A. Wolfe, "Power analysis of
embedded software: A first step towards software power
minimization", IEEE Transactions on Very Large Scale
Integration (VLSI) Systems, vol. 2, pp. 437-445, Dec.
1994.
[6] A. Sinha and A. Chandrakasan, "JouleTrack - a web based
tool for software energy profiling", in Proceedings of the
ACM/IEEE Design Automation Conference, pp. 220-225,
June 2001.
[7] W. Ye, N. Vijaykrishnan, M. Kandemir, and M. J. Irwin,
"The design and use of SimplePower: a cycle-accurate
energy estimation tool", in Proceedings of the ACM/IEEE
Design Automation Conference, pp. 340-345, June 2000.
[8] D. Brooks, V. Tiwari, and M. Martonosi, "Wattch: A
framework for architectural-level power analysis and
optimizations", in Proceedings of the International
Symposium on Computer Architecture, pp. 83-94, June
2000.
[9] T. Simunic, L. Benini, and G. De Micheli, "Energy-efficient
design of battery-powered embedded systems",
IEEE Transactions on Very Large Scale Integration (VLSI)
Systems, vol. 9, pp. 15-28, Feb. 2001.
[10] N. Chang, K.-H. Kim, and H. G. Lee, "Cycle-accurate
energy measurement and characterization with a case
study of the ARM7TDMI", IEEE Transactions on Very
Large Scale Integration (VLSI) Systems, vol. 10, pp. 146-
154, Apr. 2000.
[11] H. G. Lee, S. Nam, and N. Chang, "Cycle-accurate energy
measurement and high-level energy characterization of
FPGAs", in Proceedings of the 4th International Symposium
on Quality Electronic Design (ISQED 2003), pp. 267-272,
Mar. 2003.
[12] A. Sinha and A. Chandrakasan, "Energy aware software",
in Proceedings of the 13th International Conference on
VLSI Design, pp. 50-55, Jan. 2000.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.243-247.

Embedded Virtual Machines for Robust Wireless Control Systems


D. Venkateshwari
Dept. of Embedded Systems & Technologies, Vel Tech Technical University, Avadi.
Email: venkateshwarid@gmail.com
Abstract: Embedded wireless networks have largely
focused on open-loop sensing and monitoring. To address
actuation in closed-loop wireless control systems there is a
strong need to re-think the communication architectures and
protocols for reliability, coordination and control. As the
links, nodes and topology of wireless systems are inherently
unreliable, such time-critical and safety-critical applications
require programming abstractions where the tasks are
assigned to the sensors, actuators and controllers as a single
component rather than statically mapping a set of tasks to a
specific physical node at design time. To this end, we
introduce the Embedded Virtual Machine (EVM), a
powerful and flexible programming abstraction where
virtual components and their properties are maintained
across node boundaries. In the context of process and
discrete control, an EVM is the distributed runtime system
that dynamically selects primary-backup sets of controllers
to guarantee QoS given spatial and temporal constraints of
the underlying wireless network.
The EVM architecture defines explicit mechanisms for
control, data and fault communication within the virtual
component. EVM-based algorithms introduce new
capabilities such as predictable outcomes and provably
minimal graceful degradation during sensor/actuator failure,
adaptation to mode changes and runtime optimization of
resource consumption. Through the design of a natural gas
process plant hardware-in-loop simulation we aim to
demonstrate the preliminary capabilities of EVM-based
wireless networks.

I. INTRODUCTION
Automation control systems form the basis for significant
pieces of our nation's critical infrastructure. Time-critical
and safety-critical automation systems are at the heart of
essential infrastructures such as oil refineries, automated
factories, logistics and power generation systems.
Discrete and process control represent an important
domain for real-time embedded systems, with over a
trillion dollars in installed systems and $90 billion in
projected revenues for 2008.
In order to meet their reliability requirements, automation
systems are traditionally severely constrained along three
dimensions, namely, operating resources, scalability of
interconnected systems and flexibility to mode changes.
Oil refineries, for example, are built to operate without
interruption for over 25 years and can never be shut down
for preventive maintenance or upgrades.
They are built with rigid ranges of operating throughput
and require a significant re-haul to adapt to changing
market conditions. This rigidity has resulted in proprietary
systems with limited scope for re-appropriation of
resources during faults and retooling to match design
changes on demand. For example, automotive assembly
lines lose an average of $22,000 per minute of downtime
during system faults. This has created a culture where the
operating engineer is forced to patch a faulty unit in an ad
hoc manner, which often necessitates masking certain
sensor inputs to let the operation proceed.
A. Embedded Virtual Machines
The current generation of embedded wireless systems has
largely focused on open-loop sensing and monitoring
applications. To address actuation in closed-loop
wireless control systems there is a strong need to re-think
the communication architectures and protocols for
reliability, coordination and control. As the links, nodes
and topology of wireless systems are inherently
unreliable, such time-critical and safety-critical
applications require programming abstractions where the tasks
are assigned to the sensors, actuators and controllers as a
single component rather than statically mapping a set of
tasks to a specific physical node at design time. Such
wireless controller grids are composed of many wireless
nodes, each of which shares a common sense of the control
application but without regard to physical node
boundaries.
To this end, we introduce the Embedded Virtual Machine
(EVM), a powerful and flexible programming abstraction
where virtual components and their properties are
maintained across node boundaries. EVMs differ from
classical virtual machines (VMs). In the enterprise or on
PCs, one (powerful) physical machine may be partitioned
to host multiple virtual machines for higher resource
utilization. On the other hand, in the embedded domain,
an EVM is composed across multiple physical nodes with
a goal to maintain correct and high-fidelity operation even
under changes in the physical composition of the network.
The goal of the EVM is to maintain a set of functional
invariants, such as a control law, and para-functional
invariants such as timeliness constraints, fault tolerance
and safety standards across a set of controllers given the
spatio-temporal changes in the physical network.


By incorporating EVMs in existing and future wireless
automation systems, our aim is to realize:
1. Predictable outcomes in the presence of controller
failure. During node or link faults, EVM algorithms determine
if and when tasks should be reassigned and provide
the mechanisms for timely state migration.
2. Provably minimal QoS degradation without violating
safety. In the case of (unplanned) topology changes of
the wireless control network, potential safety
violations are routine occurrences and hence the
EVM must reorganize resources and task assignments
to suit the current resource availability (i.e. link
bandwidth, available processing capacity, memory
usage, sensor input, etc.).
3. Composable and reconfigurable runtime system
through synthesis. In the EVM approach, a collection
of sensors, actuators and controllers makes up a Virtual
Component as shown in Fig. 1. A Virtual Component
is a composition of interconnected communicating
physical components defined by object transfer
relationships. At runtime, nodes determine (via
centralized or distributed algorithms) the task-set and
operating points of different controllers in the Virtual
Component. This machine-to-machine coordination
requires task-set generation, task migration and remote
algorithm activation, which are executed via synthesis
at runtime.
4. Adaptive resource re-appropriation and optimization
for dynamic changes in service. For planned system
changes such as a factory shift, an increase in output or
retooling for a different chassis, nodes are required to
be re-scheduled in a timely and work-conserving
manner. For example, if an assembly line is to process
two types of units, red units and blue units, it must
ensure that the additional processing time required for
blue units does not violate the processing of red units
along the shared conveyor belt.

B. Research Challenges
While there has been considerable research in the general
area of wireless sensor networks, a majority of the work
has been on open-loop and non-real-time monitoring
applications. As we extend the existing programming
paradigm to closed-loop control applications with tight
timeliness and safety requirements, we identify five
primary challenges with the design, analysis and
deployment of such networks:
1. Programming motes in the event-triggered paradigm
is tedious for control networks. It is hard to provide
any analytical bounds on the response time, stability
and timeliness of tasks in an event-driven regime [3,
4]. Real-time tasks are time-triggered while sensor
inputs are event-triggered. It is generally easier to
incorporate sporadic tasks in a time-triggered regime
than vice versa.
2. Programming of sensor networks is currently at the
physical node level, where the tasks are bound to the
node at compile time. This makes it non-trivial to
decompose a large control problem into defining
components and applications for each mote. In the
case of sensor network virtual machines such as Mate
[4], Scylla [5] and SwissQM [6] and runtime
programming frameworks such as SOS [7] and Contiki [8], the
interaction is assumed to be between an end-user and
a single isolated node in a network and not among the
nodes themselves.
3. Design of systems with flexible topologies is hard with
physical node-level programming, as the set of tasks
(or responsibilities) is associated with the physical
node. Thus, a change in the link capacity, node energy
level or connectivity in the current topology will
render the application useless. It is necessary to
associate a logical mapping of tasks to nodes and
incorporate mechanisms to transfer responsibilities
during physical and environmental changes in the
network.
4. Fault diagnostics, repair and recovery are manual
and template-driven for a majority of networked
control systems. Approximately 30% of the code in
automation systems is dedicated to fault detection
and recovery. In the case of WSAC networks, it is not
plausible to exhaustively capture all possible faults at
design time and thus provisions must be made for
runtime diagnostics and recovery.
5. Template-driven safety: A majority of automation
systems use if-then template-driven statements to detect
safety violations. With frequent code patches, it is hard to
provide safety guarantees. Any change in topology or
the number of associated nodes may violate the fixed
safety rules which are determined at design time.
Nodes must operate in tandem, where the performance
and operational safety of one node is continuously
monitored by others and vice versa.

II. BACKGROUND AND PRELIMINARY WORK
The EVM architecture and algorithms are built on a
modified version of the FireFly sensor network platform
[9] and the nano-RK sensor real-time operating system
(RTOS) [10]. The EVM is implemented in the form of a
virtual machine abstraction layer on top of the RTOS and
executes as a special task within nano-RK. As a special
task, the EVM has both parametric and programmable
control of the entire operating system and hardware
resources. We describe below the current developments
and experiences with the FireFly platform and nano-RK
RTOS and also the preliminary investigations with the
EVM.

A. Embedded Network Platforms for Time
Synchronized Communication
Several platforms, such as Mica2, MicaZ, Telos, ExScale
and TinyNode [11], that have enabled sensor networks are


available. Many of these platforms are based on the
component-based, event-triggered operating system and
application framework called TinyOS [3]. While this
framework is flexible, timing predictability and
fine-grained deterministic resource control were not its
primary design objectives.
We use the FireFly platform [9], which is designed to
support real-time sensor networking applications [12, 13].
The FireFly node shown in Fig. 2 is a low-cost, low-power
platform that is based on the Atmel ATmega1281
8-bit micro-controller with 8KB of RAM and 128KB of
ROM, along with a Chipcon CC2420 IEEE 802.15.4
standard-compliant radio transceiver. A FireFly node can
also operate with a solar cell driven by ambient light.
Each node supports an expansion card with light,
temperature, audio, passive infrared motion, dual-axis
acceleration and voltage sensors.

Fig. 2. FireFly node with sensors & AM time sync

The primary reason we use FireFly for EVMs is its
ability to support tight, global, hardware-based time
synchronization for real-time TDMA-based
communication with the RT-Link protocol [12]. FireFly
nodes are able to achieve sub-150 µs jitter by using a
passive AM radio receiver. Through the tight time
synchronization of RT-Link, an effective battery lifetime
of 1.8 years at a 5% duty cycle has been demonstrated.
RT-Link outperforms asynchronous protocols
such as B-MAC [14] and loosely synchronous protocols
such as S-MAC [15] across all duty cycles and event
rates. We have demonstrated real-time two-way
interactive voice streaming across multiple FireFly nodes
using the RT-Link protocol [13]. With RT-Link,
communication for real-time applications is collision-free
and is scheduled in well-defined TDMA slots, which ensures
timely communication between nodes within an EVM's
Virtual Component.

B. Real-Time Sensor Operating System as a Basis for
the EVM
To address the need for timing precision, priority
scheduling and fine-grained resource management, the
nano-RK resource kernel [10] was developed with
timeliness as a first-class citizen. nano-RK is a fully
preemptive RTOS with multi-hop networking support
that runs on a variety of sensor network platforms (8-bit
Atmel-AVR, 16-bit TI-MSP430, Crossbow motes,
FireFly). It supports fixed-priority preemptive scheduling
for ensuring that task deadlines are met, along with
support for and enforcement of CPU and network
bandwidth reservations. Tasks can specify their resource
demands and the operating system provides timely,
guaranteed and controlled access to CPU cycles and
network packets in resource-constrained embedded sensor
environments. It also supports the concept of virtual
energy reservations that allows the OS to enforce energy
budgets associated with a sensing task by controlling
resource accesses. nano-RK provides various medium
access control and networking protocols, including a
low-power-listen CSMA protocol called B-MAC, an implicit
tree routing protocol and RT-Link.
For networked control systems, it is essential that the
underlying sensor operating system expose precision
timing, scheduled tasks and synchronized networking so
that the trade-offs between energy consumption (node
lifetime), reliability and responsiveness are specifiable
and enforceable both at design time and runtime. Support
for the above services is required for low-duty-cycle and
energy-constrained sensor networks too, because the
computation and communication are packed into a short
duration so all nodes may maximize their common sleep
time. As shown in Fig. 3, the EVM is built upon nano-RK
and adds the capability for a suite of runtime services with
parametric and programmable control.

III. EVM ARCHITECTURE AND ALGORITHMS
The system under consideration includes a number of
wireless sensors, actuators and controllers composed into
a Virtual Component. The Virtual Component acts as a
single entity for the control algorithm execution. The
EVM provides a flexible programming abstraction to
share state and responsibilities across physical nodes and
allows multiple EVM-enabled nodes to be composed into
a single logical entity.
Control algorithms are automatically distributed across
physical nodes based on computing load and proximity to
the corresponding sensors and actuators. Multiple copies
of each algorithm are present on the physical nodes and
state is shared either passively or actively to enable fault
tolerance. Control algorithms spawn automatically,
proliferating to nodes capable of executing them, and
maintain a common state at all times.
If one of the nodes executing a control algorithm fails,
another node capable of performing the same control
function takes over control execution. Algorithm
migration from one physical node to another is a key
feature of this system. Control algorithm execution by
one node is passively observed by other nodes capable of
executing the same algorithm. Control algorithm failure is
detected by backup observers and a new master is
selected based on an arbitration algorithm.
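As an illustration of the takeover behavior just described, the
sketch below shows one way a backup could detect a silent
primary by counting missed heartbeats. The period, threshold
and all names are our assumptions, not nano-RK or EVM APIs.

/*
 * Minimal sketch of backup takeover: a backup controller passively
 * observes the primary's periodic state updates and assumes
 * mastership after a run of missed heartbeats.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MISS_LIMIT 3   /* consecutive faults tolerated before takeover */

typedef struct {
    uint32_t last_seq;  /* sequence number of last observed update */
    uint32_t misses;    /* consecutive missed heartbeats           */
    bool     is_master;
} BackupState;

/* Called once per control cycle with the latest sequence number heard
 * from the primary (unchanged if nothing arrived this cycle). */
static void observe(BackupState *b, uint32_t seq)
{
    if (seq != b->last_seq) {
        b->last_seq = seq;
        b->misses = 0;
    } else if (++b->misses >= MISS_LIMIT && !b->is_master) {
        b->is_master = true;  /* migrate task state, start control task */
        printf("backup: primary silent for %u cycles, taking over\n",
               b->misses);
    }
}

int main(void)
{
    BackupState b = { 0, 0, false };
    uint32_t trace[] = { 1, 2, 3, 3, 3, 3 };  /* primary fails after seq 3 */
    for (int i = 0; i < 6; i++)
        observe(&b, trace[i]);
    return 0;
}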


A. EVM Architecture
We now consider the design of the EVM within the
nano-RK RTOS framework. The EVM describes its own
instruction set for efficient control, task and fault
management between nodes. As with Mate, the EVM is
based on a FORTH-like interpreter. The interpreter runs
within nano-RK as a super task. However, unlike Mate,
the EVM's instruction set is extensible at runtime.
Furthermore, EVM instructions are focused on
node-to-node communication and control rather than PC-to-node
control. We describe two main architectural components
within the EVM.

Fig. 3. nano-RK sensor RTOS with interfaces to the EVM. The
EVM includes parametric and programmable control algorithms
for runtime logical-task to physical-node mapping, EVM
node-specific operations and object transfers for efficient
node-to-node communication.

a) EVM Node-Specific Operations
The EVM is responsible for the following core
node-specific operations. The parametric control has been
implemented as an EVM library for core pre-defined
instructions. The programmable control will be
implemented as a runtime service and requires hooks
within the kernel, device drivers and link layer.
1. Runtime task management: This includes basic task
allocation, assignment and manipulation. The specific
operations supported by the EVM are task assignment
to a particular node, task migration from one node to
another, task partition between one node and another and
itself, and finally task replication, where an instance of
a task is also invoked on another node (using the same
state information, stack and register settings).
2. Runtime resource allocation: This operation facilitates
allocation or re-allocation of a task control block and
reservation with the scheduler and network for a new
task or for an existing task on the local node.
3. Scheduling and schedulability analysis: This operation
is invoked when there has been a change to the
scheduler or task-set on a node. The new task-set or
schedule will only be activated if the schedulability
test is passed. This ensures that all tasks are
schedulable within the scheduler's utilization bounds
even after a new task is added.
4. Priority assignment: This parametric control operation
allows a node to re-prioritize its tasks upon the
admission of a new task or a change in operating
conditions.
5. Fault/failure detection and adaptation: This handler is
activated when a fault message is received by the
kernel, and the desired action is carried out. An
example of this would be when a fault message
informs the kernel that the battery is out of energy and
the kernel activates a task migration operation to move
operations to a more able node.
6. Node membership and data migration: The
membership of a Virtual Component is not fixed. If
new nodes are present they are admitted to the Virtual
Component. This operator ensures that the
requirements of new nodes or the network state of
surviving nodes is stable. Furthermore, this operator
invokes the optimization sub-routine if more resources
are added to the Virtual Component's resource pool.
7. Run-time optimization: This operation executes
optimization of resource allocation and task assignment at
runtime. We use Binary Quadratic Programming for
fixed-point optimization for functional and
para-functional requirements across controller nodes. Due
to space limitations we will not discuss this in detail.
8. Software attestation: When new code or data is
received by a node from another node, the node
executes a basic attestation test to ensure the
code/data is not corrupted and passes the
schedulability test.
While the above operations are not exhaustive, we select
the ones that matter the most in our case studies and
test them under changing conditions with large dynamic
ranges.

b) EVM Object Transfers
We now describe the mechanisms used to communicate
control, data and fault information between controllers
within a virtual component. Five elementary object
transfer types are included in the EVM design: disjoint,
(bi-)directional transfers, temporal-conditional transfers,
causal-conditional transfers and health assessment.
A disjoint relation between two nodes indicates that the
nodes may operate concurrently in both temporal and
spatial domains without any shared state. Directional and
bi-directional transfers define relationships such as
master-slave, publish-subscribe and producer-consumer.
This is the basic transfer type for all active controllers
within a virtual component. Temporal and causal transfers
define the type of relationship between interconnected
controllers and enforce a set of restrictions between the
controllers. Finally, health assessment transfers are used
for monitoring and tracking, and define which node is the
primary or backup and the nature of the response to faults
such as trigger alert, trigger backup, halt and local
fail-safe operation.
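To make this taxonomy concrete, the sketch below encodes the
five transfer types and the fault responses in a hypothetical
message header; the layout and field names are illustrative
assumptions, not the EVM's actual wire format.

/*
 * Sketch of the elementary object transfer types as a message header
 * a Virtual Component might exchange between controllers.
 */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    XFER_DISJOINT = 0,   /* no shared state between nodes          */
    XFER_DIRECTIONAL,    /* master-slave / producer-consumer       */
    XFER_BIDIRECTIONAL,  /* publish-subscribe style exchange       */
    XFER_TEMPORAL_COND,  /* valid only within a time window        */
    XFER_CAUSAL_COND,    /* valid only after a named predecessor   */
    XFER_HEALTH          /* primary/backup monitoring and faults   */
} TransferType;

typedef enum {
    FAULT_TRIGGER_ALERT = 0,
    FAULT_TRIGGER_BACKUP,
    FAULT_HALT,
    FAULT_LOCAL_FAILSAFE
} FaultResponse;

typedef struct {
    uint8_t  type;           /* one of TransferType                 */
    uint8_t  src_node;
    uint8_t  dst_node;
    uint8_t  fault_response; /* used when type == XFER_HEALTH       */
    uint32_t deadline_ms;    /* temporal restriction, 0 if unused   */
} TransferHeader;

int main(void)
{
    TransferHeader h = { XFER_HEALTH, 1, 2, FAULT_TRIGGER_BACKUP, 250 };
    printf("health transfer %u -> %u, response %u, deadline %u ms\n",
           h.src_node, h.dst_node, h.fault_response, h.deadline_ms);
    return 0;
}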
IV. EVM EVALUATION
We have implemented the parametric control capability of
the EVM on the FireFly nodes over the nano-RK sensor
RTOS. This allows remote runtime triggering of


individual sensor drivers, modification of task
reservations and network time-slot assignment. Through a
process control case study, we evaluate the
programmable control, more specifically the fault-tolerant
capability, of the EVM. We employ the Honeywell
UniSim plant simulator with hardware-in-loop via a set
of six inter-connected FireFly nodes. Each sensor,
controller and actuator node interfaces with a gateway
node via RT-Link. The gateway communicates with
UniSim (on the workstation) via ModBus. The
controllers operate on information generated by the plant
simulation and sensor I/O for a realistic closed-loop
WSAC evaluation. This allows us to evaluate the network
with large dynamic input ranges and dramatic topology
changes.
Our focus is on the fault tolerance of the controllers only, all
of which are connected by wireless links to each
other and to the physical sensors and actuators that
interface to UniSim. When a particular backup controller
detects a series of faults in the primary controller, it
triggers a task migration operation to the backup
controller. This operation includes a capabilities check
and the migration of the task control block, stack, data
and timing/precedence-related metadata. The backup
controller is activated and the primary controller switches to a
passive indicator mode.

A. Natural Gas Plant Model
We employed a UniSim model for a natural gas
processing application. This case study models a natural
gas processing facility that uses propane refrigeration to
condense liquids from the raw natural gas feed and a
distillation tower to process the liquids. The flowsheet for
this process is in Fig. 4. In this plant, a raw natural gas
stream containing N2, CO2, and C1 through n-C4 is
processed in a refrigeration system in order to remove the
heavier hydrocarbons. The liquids removed from the input
stream yield a liquid product that has the desired propane
content.

B. Fault-Tolerant Wireless Controllers
The plant model has several control loops (presented
with light green connections). In the considered application 8
different controllers are used (4 in the top-level system and 4
in the Depropanizer). These controller algorithms are
implemented using the EVM across multiple physical nodes.
To show the performance of the designed EVM we will focus on
the controller for the valve at the liquid flow from the LTS
output and TowerInlet (Fig. 6(a)). In the presented
configuration, 2 physical controllers, Ctrl-A and Ctrl-B,
are used. The LTS separates liquids from
its input stream, while the remaining gas is fed back to the
gas/gas exchanger. The liquid output of the LTS is mixed with
free liquids from the Inlet Separator, InletSep. These liquids
are then processed at the Depropanizer column to produce a
low-propane-content bottoms product.

a) In Summary, the Specific Objectives of This Effort
Are
1. Ability to deploy control algorithms in a virtual
component defined over a grid of wireless
controllers.
2. On-line capacity expansion where more controllers
can be added to share the load and trigger
redistribution of tasks.
3. Algorithm replication to a set of nodes capable of
performing the same control function for throughput
adaptation.
4. Fault tolerance to node and communication failures.
5. Control algorithm execution with high-speed
operation (1/4 second or less control cycle) and with a
small latency (1/3 of the control cycle or less).

REFERENCES
[1] Frost and Sullivan, North American Sensor Markets,
Technical Report A-761-32, 2004.
[2] Nielsen Research, Downtime Costs Auto Industry, March
2006.
[3] J. Hill et al. System architecture directions for network
sensors. ASPLOS, 2000.
[4] P. Levis and D. Culler. Mate: A tiny virtual machine for
sensor networks. ACM ASPLOS-X, 2002.
[5] P. Marbell and L. Iftode. Scylla: A smart virtual machine
for mobile embedded systems. In WMCSA, 2000.
[6] R. Müller, G. Alonso, and D. Kossmann. A virtual
machine for sensor networks. In ACM EuroSys, 2007.
[7] S. Han et al. SOS: A Dynamic Operating System for
Sensor Nodes. ACM MobiSys, 2005.
[8] A. Dunkels, N. Finne, J. Eriksson and T. Voigt.
Run-time dynamic linking for reprogramming wireless
sensor networks. ACM SenSys, 2006.
[9] R. Mangharam, A. Rowe, and R. Rajkumar. FireFly: A
Cross-layer Platform for Real-time Embedded Wireless
Networks. Real-Time Systems Journal, 2007.
[10] nano-RK sensor RTOS. http://nanork.org.
[11] J. Hill, M. Horton, R. Kling, and L. Krishnamurthy.
Platforms enabling wireless sensor networks.
Communications of the ACM, 47(6):41-46, 2004.
[12] A. Rowe, R. Mangharam, and R. Rajkumar. RT-Link: A
Time-Synchronized Link Protocol for Energy-Constrained
Multi-hop Wireless Networks. IEEE SECON, 2006.
[13] R. Mangharam, A. Rowe, and R. Rajkumar. Voice over
Sensor Networks. RTSS, 2006.
[14] J. Polastre, J. Hill, and D. Culler. Versatile Low Power
Media Access for Wireless Sensor Networks. ACM
SenSys, 2005.
[15] W. Ye, J. Heidemann, and D. Estrin. An Energy-Efficient
MAC Protocol for Wireless Sensor Networks. INFOCOM,
June 2002.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.248-251.

Design of New Auto Guard Anti-Theft System Based on RFID and GSM Network
M. Anjugam
M.Tech, Embedded Systems, Vel Tech Dr. RR & Dr. SR Technical University
Email: anjugam14@gmail.com
Abstract: This paper proposes an auto-guard system to reduce the theft rate of cars and meet people's demand for intellectualized auto-guarding. The system combines radio frequency identification technology with the global mobile communication network. The system takes the automotive microcontroller STM8AF51AA of STMicroelectronics as the control core. The radio reader IC MF RC522 can identify the car owner quickly and thereby realizes the functions of keyless entry and keyless start-up. At the same time, infrared sensors and vibration sensors complete the monitoring function. The GSM module SIM300DZ of Simcom sets and withdraws the guard by message or call and controls the car's states remotely through AT instructions. Practice has shown that, compared with the traditional auto-guard system, this system can not only uniquely identify the owner but also improve security and reliability, achieving the unity of intellectualized safeguarding and remote control.

Keywords: Auto-guard; RFID; GSM network; Remote control

I. INTRODUCTION
With the rapid development of the national economy, automobiles have greatly increased in number as important vehicles for humans. However, the development of modern technology also makes the means of committing crimes smarter and automobile theft more frequent. Electronic anti-theft devices are the most widely used among all the appliances at the moment, but chip-based and network-based devices are the developing directions of auto-guard technology. RFID (Radio Frequency Identification) technology, which is passive, contactless, secure and convenient, identifies objects automatically and gets the data through radio frequency signals. It meets the need of intellectualized guarding perfectly. GSM (Global System for Mobile Communication) is the most mature and widely used mobile communication system. It makes information transmission so real-time, secure and reliable that long-distance control is realized [1]. This paper presents the design of a new-fashion auto-guard system, a smart measure to be generalized in the automobile security area.

II. SYSTEM PRINCIPLE


This system mainly includes three parts: controller, RFID and mobile communication, as shown in Fig. 1.

Fig. 1. System structure

A. Controller
The core of this system is the microcontroller, which connects the monitoring circuit (checking the state), matrix keyboard, actuator, sound-light alarm devices (loudspeaker and car light), CAN communication and power management. The system sends a sound-light alarm as soon as damage is detected by the sensors, to warn off thieves, and sends the information to the ECU (Electronic Control Unit). In that case, the ECU can control the relay to cut off the electric circuit and the oil circuit. In the meanwhile, the ECU also sends a message to the owner for help, so it is convenient for the owner to control the auto. Users can also freely select the priority of auto control, such as MCU, ECU or owner.
B. RFID
The RFID system consists of three parts: antenna, tag and reader. The principle of the RFID system is that the reader encodes the signals to be sent and modulates them onto a carrier signal of a certain frequency, and the signals are then sent out by the antenna. When the pulse signals are received by a tag working within the scope of the reader, the circuit in the chip demodulates, decrypts and decodes them, and then distinguishes the command request, passwords or permission. If it is a read command, the control logic circuit gets the message from the memory. After encrypting, encoding and modulating, the message is sent back to the reader by the antenna in the chip; after demodulating, decoding and decrypting, the reader sends the message to the message system center to manipulate the data. If it is a write command for message modification, the inner charge pump, driven by the control logic circuit, pulls the working voltage up so that the content in the EEPROM can be modified.

Only when the key with the right identification code is inserted into the ignition switch can the auto start up properly. When the ignition switch is turned on, the reader sends a 13.56 MHz charge pulse out to the tag. The tag charges its capacitor as soon as the pulses are received, so the transponder is able to return a specific code to the reader. As a result, the signals are transmitted between the antenna of the reader and the antenna of the tag. The control part of the reader encodes the signals and compares them with the codes in the microcomputer. If they are identical, the control part immediately starts the control process of the engine and the ignition switch. As soon as there is even one bit different, the system sends out the alarm information.
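The compare-and-decide step described above can be illustrated with a short sketch. In the following Python fragment, the 64-bit stored code and the function name are hypothetical placeholders; the XOR comparison stands in for the bit-by-bit check performed by the reader's control part.

STORED_CODE = 0x1A2B3C4D5E6F7081   # 64-bit identification code kept in E2PROM

def check_tag(received_code: int) -> str:
    """Compare the code returned by the transponder with the stored one.

    The engine/ignition control process is started only on an exact match;
    a single differing bit triggers the alarm, as described above.
    """
    diff = received_code ^ STORED_CODE   # XOR: non-zero bits mark mismatches
    if diff == 0:
        return "START_ENGINE"            # identical codes: start control process
    return "SEND_ALARM"                  # at least one bit differs: alarm

# Example: a code that differs in exactly one bit is rejected.
print(check_tag(STORED_CODE))            # -> START_ENGINE
print(check_tag(STORED_CODE ^ 0x1))      # -> SEND_ALARM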
C. Mobile Communication
GSM is a module with an independent operating system, RF processing, baseband processing and standard function-module interfaces, integrating the GSM RF chips, baseband chips, memory and amplifier on the same circuit board. Designers make the microcomputer communicate with the GSM module through the RS232 serial port and use standard AT instructions to control the GSM module to realize all kinds of communication, for example sending messages, making telephone calls and GPRS dial-up Internet access. The function of sending messages is usually adopted to realize the long-range control because of its low cost and good real-time performance.

III. HARDWARE DESIGN

A. Control Circuit
The STM8AF51AA made by ST is selected as the microcontroller. It is an 8-bit chip which is very reliable, highly robust and low cost, and it is designed especially for automobiles. Inside the chip are an E2PROM for data and a variety of communication interfaces, such as CAN 2.0B, USART, LINUART (LIN 2.1), SPI and I2C. Its operation speed reaches 10 MIPS (at 16 MHz). The monitoring circuit is made up of infrared sensors (the finished component CS9803GP) and vibration sensors (CHT-ZD01) [2]. A relay is chosen as the actuator. The PCA82C250 is adopted as the drive interface of the CAN bus [3]. The LM7805 and ASM1117 are the power transfer modules. Since three kinds of voltage are needed, 12 V, 5 V and 3.3 V, the system pulls the voltage down to 5 V by the regulator LM7805, and then pulls it down to 3.3 V through the ASM1117 to supply the MF RC522. The MAX708 is selected as the power management chip. When the power fails, it saves a message with the previous state into the E2PROM immediately, so that it is convenient to read when the power is restored. Moreover, when the power voltage becomes low, it produces a reset pulse to prevent the program from running away.

B. RF Interface Circuit
The Mifare card of NXP is the general trend of the market recently. The MF RC522 is a low-power, contactless read-write base chip, especially suitable for such instruments. It is a highly integrated reader for contactless communication at 13.56 MHz. It supports all the levels of ISO 14443A and three kinds of interfaces: SPI, I2C and serial UART. It can communicate with any MCU and connect with a PC directly by RS232 or RS485. In that case, it makes the terminal design more flexible than ever [4-5].

Since the MF RC522 supports a variety of digital interfaces, the connection of the external pins is checked at the moment of reset. Besides the four universal SPI signal wires (clock wire SCK, input data wire MOSI, output data wire MISO and strobe wire NSS), the MF RC522 requires two extra pins, I2C and EA, to be connected to low level and high level respectively. These two pins do not take part in the transmission on the SPI bus, but only set the digital interface of the MF RC522 to the SPI mode. In addition, the CS signal is kept at low level when the data flow in; otherwise it is at high level. The hardware circuit connection of the MF RC522 and the STM8AF51AA in the SPI way is shown in Fig. 2.

Fig. 2. Circuit of the RF interface.
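As an illustration of the SPI access pattern, the sketch below reads one MF RC522 register using the common py-spidev library on a Linux host standing in for the MCU. The address-byte format (bit 7 = read flag, bits 6-1 = register address) follows the MF RC522 datasheet; the bus and chip-select numbers are assumptions for a test setup.

import spidev

def rc522_read_register(spi: spidev.SpiDev, reg: int) -> int:
    """Read one MF RC522 register over SPI.

    SPI address byte (MF RC522 datasheet): bit 7 = 1 for read,
    bits 6..1 = register address, bit 0 = 0.
    """
    addr = 0x80 | ((reg & 0x3F) << 1)
    resp = spi.xfer2([addr, 0x00])   # second byte clocks out the register value
    return resp[1]

spi = spidev.SpiDev()
spi.open(0, 0)                  # assumed bus 0, chip-select 0 (NSS)
spi.max_speed_hz = 1_000_000    # well below the chip's 10 Mbit/s SPI limit
version = rc522_read_register(spi, 0x37)   # 0x37 = VersionReg
print(f"MF RC522 version register: 0x{version:02X}")
spi.close()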

C. GSM Interface Circuit
In this system, the GSM 900/1800 MHz dual-band module SIM300DZ made by Simcom is selected as the GSM module. This module is able to detect the baud rate automatically and improves the performance of electronic public service. This module, with an energy-save function, embedded TCP/IP and a transparent mode, belongs to the GPRS series with three frequencies (900/1800/1900 MHz). The peripheral circuit of the SIM300DZ mainly consists of the communication interfaces of the SIM cassette and the module, such as SIM-CLK and SIM-I/O, which are the communication wires of the module clock and data, and SIM-RST and VCC, which are the reset and the power supply. What is more, the RXD and TXD included in the peripheral circuit of the SIM300DZ are connected with the serial port of the MCU; it is the AT instructions that are transported between the MCU and the GSM module through these two channels [6]. In addition, the GSM module includes a voice system channel and a MIC channel. These channels are switched by the MCU by means of AT instructions, which are mainly applied to the switching between the voice and the microphone in the monitoring system. Finally, the transmitting ports IN+ and IN-, which carry the dual-tone multi-frequency (DTMF) signals, are also included. When the user communicates with the phone equipped in the car, a pressed button produces a DTMF signal which is sent through IN+ and IN- to the multi-frequency decoding chip to be analyzed into a Q signal. At this moment, the MCU decides how to operate according to the Q signal.

IV. SOFTWARE DESIGN
The control software of the auto-guard system needs to complete the following functions: sensor signal detection, identification, sending short messages, control of the main components of the auto, and sound-light alarm. The system stays in a low-power standby state after initialization and the enabling of interrupts. When an interrupt takes place, the corresponding interrupt event is handled in the interrupt service subroutine from idle mode and the corresponding flag bit is set to 1. After the interrupt, the routine executes the process programs according to the states of the flag bits. Following the modular programming idea, the auto-guard system program mainly includes the control module, identification module, GSM manipulation module, alarm module, etc.

A. The Process of Main Program
This module is the core of the system, including the calling and setting of the initialization functions of the related equipment. In addition, in its main function, the related functions of the other modules can be called to complete the program. The basic idea is polling: a small loop for every function module is called inside one big loop, and the watchdog is set at each key position in case the system crashes. The main process is shown in Fig. 3.

Fig. 3. Flow chart of main program

B. Personal Identification Process
Firstly, the MF RC522 is initialized by the STM8A. After the registers are set, the MF RC522 can receive the commands of the MCU and carry out the operations to communicate with the Mifare card, and the Mifare card performs the appropriate operation according to the commands received. However, the IC card is accessed by the STM8A not through a few simple codes but through a series of operations. They mainly include: (1) card request; (2) anti-collision (preventing the data error caused by card overlapping); (3) card selection; (4) password authentication; (5) read-write. These procedures on the Mifare card must be done in a fixed order by the STM8A. When a Mifare card comes into the available range of the antenna, the reading program performs the operations above to read the unique 64-bit ID from the card and compares it to the one in the E2PROM to identify the user [7-8].

C. GSM Operation Process
In this design, the GSM part is the most important and most difficult point. The AT+CNMI command is issued at initialization so that the module sends +CMTI: <mem>,<index> to the STM8A automatically when a new message arrives in the SIM, where <index> represents the position of the new message in the SIM memory, which makes it convenient to read. Whether a +CMTI notification is received or not decides whether the GSM handling of the system is activated.

Fig. 4. Flow chart of GSM operation

The flow chart of the task is shown in Fig. 4. First, the PDU of the message is read into a dedicated array: the command AT+CMGR=<index> is sent out to the module, so the message at position <index> of the SIM memory is returned in the form +CMGR: <stat>,[<alpha>],<length><CR><LF><pdu> OK. It is convenient to read the PDU by searching with a pointer. Second, the telephone number of the sender, the password and the content of the UD (user data) are picked out from the PDU. The operation type is then chosen according to the key words, such as password setting, user number, oil and power supply, and oil and power stop. The PDU (Protocol Data Unit) codes of these key words are saved in a fixed array and are compared with the received codes by the strncmp function to get the appropriate operation type.
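A minimal host-side sketch of this polling logic, written in Python with pyserial in place of the MCU's serial port, is given below. The port name, baud rate and keyword table are assumptions, and plain-text matching stands in for the PDU decoding and strncmp comparison described above.

import re
import serial

# Command keywords compared against the received message (plain text here
# stands in for the fixed PDU arrays compared with strncmp in the firmware).
KEYWORDS = {"oil and power supply": "RESTORE", "oil and power stop": "CUT_OFF"}

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)   # assumed port and baud

def send_at(cmd):
    """Send one AT command and collect the response lines."""
    ser.write((cmd + "\r").encode())
    return ser.readlines()

send_at("AT+CNMI=2,1")      # ask the module to report new SMS via +CMTI
while True:
    line = ser.readline().decode(errors="ignore")
    m = re.search(r'\+CMTI:\s*"(\w+)",(\d+)', line)    # +CMTI: <mem>,<index>
    if not m:
        continue
    for reply in send_at(f"AT+CMGR={m.group(2)}"):     # read message <index>
        text = reply.decode(errors="ignore").lower()
        for key, action in KEYWORDS.items():
            if key in text:
                print("operation type:", action)       # dispatch to control code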
V. CONCLUSION
The auto-guard system combines the advantages of RFID and GSM. The key of the automobile is an RFID card, which is contactless, secure and convenient. Long-range monitoring and graded responses can be realized through the users' mobile phones, which gives the alarm a broad coverage. A microcontroller designed for vehicles was adopted, which enhances the reliability and the anti-interference capability. The advantages mentioned above meet the requirements of an auto-guard system, so a good effect was achieved in practice. In addition, it is easy to extend the functions: if position tracking is needed, a GPS module can be added; if the system is to join the Internet of Things, only the software needs to be rewritten [10]. As a result, the present radio technology can be replaced completely, so the practical value and the market prospect are considerable.

REFERENCES

[1] Huaqun Guo, Cheng H.S., Wu Y.D., Ang J.J., Tao F., et al., "An automotive security system for anti-theft," Proceedings of the Eighth International Conference on Networks, pp. 421-426, Mar. 2009.
[2] Hui Song, Sencun Zhu, Guohong Cao, "SVATS: A sensor-network-based vehicle anti-theft system," Proceedings of the 27th Conference on Computer Communications, pp. 2128-2136, May 2008.
[3] Yang Wang, Xian-Jun Gao, Zhang Gang, "A study on Mn coding for guarding against theft and remote control device of an automobile," Proceedings of the International Conference on Vehicle Electronics, pp. 294-297, 1999.
[4] Khangura K.S., Middleton N.V., Ollivier M.M., "Vehicle anti-theft system uses radio frequency identification," Proceedings of the IEE Colloquium on Vehicle Security Systems, pp. 1-7, Oct. 1993.
[5] Hirano M., Takeuchi M., Tomoda T., Nakano K.-I., "Keyless entry system with radio card transponder [automobiles]," IEEE Transactions on Industrial Electronics, vol. 35, no. 2, pp. 208-216, May 1988.
[6] Wan Lili, Chen Tiejun, "Automobile anti-theft system design based on GSM," Proceedings of the International Conference on Advanced Computer Control, pp. 551-554, Jan. 2009.
[7] Jayendra G., Kumarawadu S., Meegahapola L., "RFID-based anti-theft auto security system with an immobilizer," Proceedings of the International Conference on Industrial and Information Systems, pp. 441-446, Aug. 2007.
[8] Guo Hongzhi, Chen Hong, Ji Guohuang, Zhou Xin, "The vehicle passive keyless entry system based on RFID," Proceedings of the 7th World Congress on Intelligent Control and Automation, pp. 8612-8617, Jun. 2008.
[9] Zhixiong Liu, Guiming He, "A vehicle anti-theft and alarm system based on computer vision," Proceedings of the IEEE International Conference on Vehicular Electronics and Safety, pp. 326-330, Oct. 2005.
[10] Karimi H.A., Krishnamurthy P., "Real-time routing in mobile networks using GPS and GIS techniques," Proceedings of the 34th Annual Hawaii International Conference on System Sciences, p. 11, Jan. 2001.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.252-254.

Unmanned Adoration Car
S. Parijatham, N. Janani and C. Nagalalitha
Students, ECE Department, Veltech Hightech Dr. RR & Dr. SR Engineering College, Avadi-62
Email: parijathamsraj@gmail.com, jaanu.gn@gmail.com, c.nagalalitha@gmail.com
Abstract: This paper presents a real-time autonomous car system: a robotic vehicle that drives itself and takes decisions about speed, gear, steering and route. The overall system is controlled by an ARM processor. The main aim of this paper is to define the routes and give directions to the processor without any human intervention. The vehicle can detect movable and unmovable obstacles with the help of RADAR and sensors fixed at various positions on the vehicle.

Keywords: GPS, ARM processor, RADAR and scanners, HS-CAN protocol.

I. INTRODUCTION
This paper is based on the perception of the road environment using the Global Positioning System, an ARM processor and various sensors. A passenger will sit in the vehicle, which will drive autonomously; it will conduct experimental tests on the sensing, decision and control subsystems and will collect data throughout the drive, needing human intervention only to define the destination. Previous vehicles performed in unfriendly environments: they were fielded and operated during challenges defined by precise rules and in predefined scenarios. The current technology is mature enough for the deployment of non-polluting, non-oil-based autonomous vehicles for the transportation of people and goods in real environment conditions, thus setting a new milestone in the domain of intelligent vehicles. This paper takes all those advantages to drive autonomously using the processors, and the mechanical systems are interconnected by the CAN (Controller Area Network) protocol. The system is made more secure, and accidents are avoided, by the installation of a RADAR warning system.

Moreover, unexpected obstacles and damage in the path can be easily detected by sensors fixed at the frontal bumper and all other sides of the car. The main control of the whole system is embedded in an ARM7 (LPC2148) chip inside the car, and an additional security system can also be added to the car using the ARM processor. The full system, such as when to apply the brake and at what speed, is controlled by this processor. Moreover, the speed and the brake system can be handled by the cruise control in the processor.

Fig. 1. Block diagram of the system

II. SATELLITE NAVIGATION SYSTEM

Global positioning system (GPS) navigation systems for cars consist of a small unit with a screen that displays the shortest route to the desired destination and guides you throughout the journey by giving directions in advance: where to turn, which lane to follow, whether you are exceeding the speed limit, etc. The receiver in the GPS unit picks up the satellite signals and guides accordingly. With the help of these signals, the unit determines the direction and speed, and those outputs are given to the ARM processor to control various operations like automatic braking, steering control and gear control. You can select the voice and the language that tell you where to go. It is easy to install and use. Here we use the 3D position of routes, so we get signals from four or more satellites.
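As a concrete illustration of the navigation outputs, the sketch below computes the distance and initial bearing from the current GPS fix to the next route waypoint using the standard haversine and forward-azimuth formulas; the sample coordinates are made-up values.

import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) between two fixes."""
    R = 6371000.0                      # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# Example: current fix to the next waypoint (sample coordinates).
d, b = distance_and_bearing(13.0827, 80.2707, 13.1067, 80.0970)
print(f"distance {d:.0f} m, steer towards bearing {b:.1f} deg")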
III. ARM PROCESSOR
Here we use the ARM7 (LPC2148) chip to build the system. It is based on a high-performance 32-bit RISC microcontroller with 64 KB of RAM, which is quite a large capacity in the embedded world, to store the most important code. We collect statistics to pick out the kernel functions with the profile tools contained in the ARM Emulator (the instruction set simulator from ARM), then put them into ESRAM at run time through a scatter-loading file which guides the ARMCC (ARM compiler tool) to compile the corresponding files at special memory addresses, and the whole project program is compiled into the AXF (ARM executable file) format. During the boot process of the program, several assembly routines produced by ARMCC copy those special codes to special addresses. These kernel codes are stored in ESRAM and can be accessed quickly.


The embedded control platform is an embedded system board which contains the ARM processor chip, memory, outside interface modules and so on. The embedded control platform controls the following processes:
- Receiving signals from GPS
- Detecting obstacles using RADAR and sensors
- Controlling the mechanical systems like gear, steering and brake
- Interfacing with HS-CAN
IV. RADAR WARNING SYSTEM AND SCANNERS
Forward-looking radar is used to detect the distance to leading vehicles, to warn the driver, and to automatically reduce speed to maintain a safe distance. The figure shows the monitoring of the vehicle's immediate surroundings to create safety around the car.

Fig. 2

The system includes adaptive cruise control, imminent braking systems, forward-collision warning, blind-spot monitoring and lane-departure warning. In all the new systems, sensors play a major role, with radar, vision sensors and lasers monitoring a vehicle's immediate surroundings to create safety around the car. Adaptive cruise control uses forward-looking radar to track vehicles and objects, and to detect vehicles moving in front and measure their distance and relative speed.

The system also relies on a number of other sensors. In addition to the radar sensor, the system processes data from speed, yaw-rate and steering-angle sensors. It also uses a digital signal processor and a longitudinal controller. By adjusting engine speed and applying the brakes automatically, the system ensures that a preset distance is maintained between vehicles in the same lane. The forward-collision warning system is used to sense a potential collision in front of the vehicle. It uses a camera, a vision processor module and a radar sensor.

Radar also plays a role in blind-spot monitoring: shorter-range radar placed on the side of the vehicle detects other vehicles that might be lurking in the blind spot. Radar sensors can also detect vehicles crossing behind you when you are backing out of a parking spot. Lane-departure warning systems use a video camera and image-processing techniques that look at lane markings and edges ahead of the car to detect objects and vehicles. If the car crosses a lane divider, the turn signal is automatically turned ON.

Obstacles can be difficult to define and expensive to compute. To enable real-time operation, obstacles are classified as unmovable and movable. Unmovable obstacles are found using a maximal-curvature cubic spiral; movable obstacles that appear suddenly are found with the laser scanners.

Fig. 3 (a), (b)
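The gap-keeping rule of adaptive cruise control described above can be sketched as a simple proportional-derivative law on the radar measurements; the gains and comfort limits below are illustrative assumptions, not values from any production system.

def acc_command(gap_m, rel_speed_mps, desired_gap_m=30.0,
                kp=0.2, kd=0.6, a_min=-4.0, a_max=1.5):
    """Proportional-derivative gap controller for adaptive cruise control.

    gap_m:         distance to the leading vehicle from the radar
    rel_speed_mps: closing speed (positive when the gap is shrinking)
    Returns a commanded acceleration in m/s^2, clipped to comfort limits.
    """
    error = gap_m - desired_gap_m           # positive when we are too far back
    accel = kp * error - kd * rel_speed_mps
    return max(a_min, min(a_max, accel))

# Example: 18 m behind the leader and closing at 2 m/s -> brake gently.
print(acc_command(gap_m=18.0, rel_speed_mps=2.0))   # about -3.6 m/s^2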


A. Front Stereo Scanner
The frontal stereo system is used to locate obstacles, determine the terrain slope and locate lane markings. Figure 3(a) is the image obtained by the front scanner; Figure 3(b) shows the removal of the perspective effect from the image.

B. Rear Stereo Scanner
The rear stereo system is used to locate obstacles at short range. It is mounted at the back of the vehicle.

C. Off-Road Laser Scanner
This mono-beam laser scanner is pitched down so that the beam hits the ground in front of the vehicle; it provides information about the presence of ditches, bumps and obstacles right in front of the vehicle, especially when driving off-road.

D. Lateral Laser Scanners
Two single-beam laser scanners are mounted right on the corners of the frontal bumper; they are used to detect obstacles, pedestrians and vehicles in the immediate surroundings. Each laser scanner has an aperture of about 270 degrees, while the perception depth is about 30 meters.

E. Central Laser Scanner
This laser scanner is based on a 4-plane laser beam and is used to detect vehicles, obstacles and pedestrians in front of the vehicle. Its four planes allow it to partially overcome a common problem of laser scanners: when the vehicle pitches down or up, the laser beam hits the ground or points to the sky, acquiring no useful data. Its perception depth is about 80 meters and its aperture about 100 degrees.
V. CAN PROTOCOL
Here we use the TJA1048 dual HS-CAN (High-Speed Controller Area Network) transceiver to interface directly with microcontrollers with supply voltages from 3 V to 5 V. It combines two TJA1042/3 HS-CAN transceivers monolithically in a single package and supports very low power consumption for better energy efficiency. The CAN protocol may be used to connect the engine control unit and transmission, or (on a different bus) to connect the door locks, climate control, seat control, etc. Each node is able to send and receive messages from other devices like sensors and control devices. These devices are not connected directly to the bus, but through a host processor and a CAN controller. CAN has different layers, namely the application layer, object layer, transfer layer and physical layer, and it defines data frames, remote frames, error frames and overload frames.
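As a minimal illustration of node-to-node exchange, the sketch below uses the python-can package over a Linux SocketCAN interface; the channel name and the 0x123 identifier for a hypothetical gear-control frame are assumptions.

import can

# Open the CAN interface (assumed SocketCAN channel "can0" on the host).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Send a data frame: hypothetical gear-control message with ID 0x123.
msg = can.Message(arbitration_id=0x123, data=[0x02], is_extended_id=False)
bus.send(msg)

# Receive frames from other nodes (sensors, brake/steering controllers).
frame = bus.recv(timeout=1.0)
if frame is not None:
    print(f"ID=0x{frame.arbitration_id:X} data={frame.data.hex()}")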
VI. CONCLUSION
In this paper we described a method to improve the best-trajectory search for an autonomous car. The entire system is controlled by the ARM7 (LPC2148) processor; no action is carried out without the knowledge of this processor. The prescribed route and various directions are given by GPS, whose main purpose is to control the steering process. Unexpected obstacles can be avoided with the help of the laser scanners and RADAR. The mechanical devices are connected to the ARM processor with the help of the HS-CAN protocol.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.255-256.

Patrolbot
S. Janani1, C. Sai Smarana2 and G. Kalarani3
1,2Student, Pre-Final Year, Dept. of EIE; 3Asst. Prof & HoD, Dept. of EIE
Jaya Engineering College
Email: 1daalu.ja@gmail.com, 2smarana91@gmail.com, 3kalaaravindhnithish@yahoo.co.in
Abstract: This paper reviews current advances in the development of remote surveillance and security tasks. Humans, no matter how good they are, will not be able to recognize everyone; but with Artificial Intelligence we can program all possible faces into PatrolBots and allow them to do facial recognition and alert the crime authorities in time before anything happens, which is of great use in airports. A special eye-safe laser rangefinder displays the footprints of intruders walking by, even in darkness, and verifies false alarms. The robot also has robotic visual sensors and cameras that enable it to read dials from 20 feet away. Thus PatrolBots will be more efficient than human security personnel. PatrolBot also employs the revolutionary ARCS (Automatic Robotic Control System) controls shared by AGVs, which map buildings on the fly and navigate point-to-point without wires or other retrofitting. PatrolBot runs embedded Linux on VersaLogic's P-III-based VSBC-8 board; PatrolBot is made by ActivMedia Robotics. It can listen to speech and respond.

Keywords: facial recognition, remote surveillance, Artificial Intelligence

I. INTRODUCTION
PatrolBot is a programmable autonomous general-purpose service robot rover built by MobileRobots Inc. It is a high-quality differential-drive robot designed for research projects that require reliable, continuous 24x7 use or a mid-size payload. The PatrolBot has been designed from the ground up to carry payloads and sensors over all normal indoor surfaces in wheelchair-accessible facilities. PatrolBots are manufactured in various configurations and serve as bases for companies developing delivery robots, security robots, environmental monitoring rovers, robot guides and other indoor service robots.

II. CAPABILITIES OF PATROLBOT
PatrolBot can scan buildings, create floor plans and navigate them autonomously using a laser range-finding sensor inside the robot. It employs Monte Carlo/Markov-style localization techniques using a modified value-iterated search technique for navigation. It searches for alternative paths if a hall is blocked, circumnavigates obstacles and recharges itself at its automated docking/charging station as needed. Using a Wi-Fi system, the device can operate autonomously or be controlled remotely.

III. OVERCOME OF PATROLBOT
Early mail robots just followed a wire that had to be installed in the floor. Later robots used sonar and infrared range-finding devices to follow walls and corridors. PatrolBot fuses gyroscopic, wheel-shaft and laser-rangefinding data so that it knows its position in relation to its starting point and to the building around it. Because it can make and store floor plans in its memory, it can compare its current position to its calculated position many times a second. Using statistical methods, it determines whether walls in the building have been moved; if so, the robot can map the new layout in minutes.

IV. COMPONENTS OF PATROLBOT

Fig. 1

A. Visual Sensors
PatrolBot has robotic visual sensors and cameras that enable it to read dials from 20 feet away.

B. Microphone and Speaker
It also has a microphone and speaker system that enables the PatrolBot to communicate with a person; in this way the controller is not put in harm's way. The PatrolBot is also able to pick up small objects and deliver them to locations specified by the controller.


C. Eye Laser
PatrolBot's special eye-safe laser rangefinder displays the footprints of intruders walking by, even in darkness.

D. Gas Sensor
A gas sensor reads oxygen, CO2 and H2S levels and can sound an alarm on the robot and/or at the control station if acceptable levels are exceeded; likewise for smoke and temperature. PatrolBot verifies false alarms, instructing intruders to scan their IDs within a prescribed time to avert alarms and further investigation.
V. APPLICATIONS OF PATROLBOT
A. PatrolBots in the Enterprise
The robot is used to increase reliability, flexibility and redundancy, to provide better situational awareness to the staff, to reduce risk to the staff and to save money.

PatrolBot provides situational awareness to the other guards. To be looking at something and, at the same time, to see the floor plan, where the robot is in the floor plan, and what it is looking at, perhaps from a 360-degree view and a pan-tilt-zoom view: that gives you what is called situational awareness.

Fig. 2

B. Finding and Interacting with an Intruder
The standard PatrolBot has two-way audio. It also has text-to-speech, and it can play WAV files. The video is pan-tilt-zoom, and there is a 360-degree camera. You can also get a configuration of the robot that will flash startle lights. One thing security staff love is that you can see the people walking by the robot because of the laser, even in the dark, so you can see their footprints moving across the floor plan.

C. Face Recognition
Robots are embedded with a memory of suspected criminals or terrorists and perform face recognition procedures under different scenarios (e.g. beard shaved, plastic surgery). With A.I., we can program all possible faces into them and allow them to do facial recognition and alert the authorities in time before anything happens. This would be of great use in embassies and especially in airports.

VI. STRUCTURE OF PATROLBOT
PatrolBot has a 59 cm x 48 cm x 38 cm CNC aluminum body. Its 19 cm diameter tires handle nearly any indoor surface. The two motor shafts hold 1000-tick encoders. This differential-drive platform is holonomic, so it can turn in place; moving the wheels on one side only, it forms a circle of 29 cm radius. PatrolBot can move at speeds of 2 m/s. It can carry payloads of 25 kg on an 8:1 ramp and 40 kg on flat floors. PatrolBots can travel through shallow puddles. A PatrolBot with a PC and a single laser can operate approximately 3.5 hours on a fully charged battery. The recharge-to-run-time ratio is less than 1:1.
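The differential-drive geometry above translates directly into odometry code. The sketch below converts encoder deltas into a pose update using the stated 19 cm wheel diameter and 1000-tick encoders; the track width is an assumed value, since only the one-wheel turning radius is quoted.

import math

WHEEL_DIAMETER = 0.19   # m, from the PatrolBot specification above
TICKS_PER_REV = 1000    # encoder ticks per wheel revolution
TRACK_WIDTH = 0.40      # m, assumed distance between the drive wheels

M_PER_TICK = math.pi * WHEEL_DIAMETER / TICKS_PER_REV

def odometry_step(x, y, theta, left_ticks, right_ticks):
    """Update the pose (x, y in metres, theta in radians) from encoder deltas."""
    dl = left_ticks * M_PER_TICK        # distance travelled by the left wheel
    dr = right_ticks * M_PER_TICK       # distance travelled by the right wheel
    d = (dl + dr) / 2.0                 # forward motion of the robot centre
    dtheta = (dr - dl) / TRACK_WIDTH    # change of heading
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Example: right wheel only -> the robot pivots about the stationary left wheel.
print(odometry_step(0.0, 0.0, 0.0, 0, 200))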

Fig. 3

VII. CIRCUITRY BOARD
ActivMedia Robotics' PatrolBot includes a robot controlled by a VSBC-8 embedded computer from VersaLogic. This Pentium III EBX-compliant single board computer was designed for high-end embedded applications.

PatrolBot is a closed system, not designed for user access. Inside are a Hitachi SH-based 32-bit RISC microcontroller, sonar boards and other electronics. Each robot has 16 sonars and two 1000-tick motor encoders, accessed through ARIA. Digital I/O, A/D, RS232, USB and FireWire interfaces are all possible within the system and may be integrated.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.257-260.

Design of Reduced Order Controller for the Stabilization of Large-Scale Linear Control Systems
M. Kavitha1 and V. Sundarapandian2
1Department of Mathematics, 2Research and Development Centre
Vel Tech Dr. RR & Dr. SR Technical University, Avadi-Vel Tech Road, Avadi, Chennai-600 062, Tamil Nadu.
Email: 1kavi78_m@yahoo.com, 2sundarvtu@gmail.com
Abstract: In this paper, we investigate the design of reduced-order controllers for discrete-time linear control systems. Sufficient conditions are derived for the design of reduced-order controllers by obtaining a reduced-order model of the original large scale linear system using the dominant state of the system. The reduced-order controllers are assumed to use only the state of the reduced-order model of the original plant.

Keywords: Model reduction, discrete-time systems, linear systems, controller design.

I. INTRODUCTION
During the past four decades, significant attention has been paid to the construction of reduced-order observers and to stabilization using reduced-order controllers for linear control systems [1-10]. In recent decades, good attention has also been paid to the control problem of large scale linear systems. This is due to practical and technical issues like information transfer networks, data acquisition, sensing, computing facilities and the associated cost involved, which stem from using a full-order controller design. Thus, there is a great demand for the control of large scale linear systems with the use of reduced-order controllers rather than full-order controllers.

In this paper, we present the design of reduced-order controllers for large scale linear discrete-time control systems. Our design is carried out by first deriving a reduced-order model of the large scale linear discrete-time plant retaining only the dominant state of the given system. The dominant state of a linear control system corresponds to the slow modes of the linear system, while the non-dominant state corresponds to the fast modes [3-10].

As an application of our recent work [9-10], we first derive the reduced-order model of the given linear discrete-time control system. Using the reduced-order model obtained, we characterize the existence of a reduced-order controller that stabilizes the full linear system using only the dominant state of the system.

This paper is organized as follows. In Section II, we derive sufficient conditions for the derivation of a reduced-order model for the original large scale linear system. In Section III, we deploy the reduced-order model obtained in Section II to derive conditions for the existence of reduced-order controllers for the original system that use only the state of the reduced-order model. In Section IV, a numerical example is shown to verify the result. In Section V, we summarize the main results obtained in this paper.

II. REDUCED ORDER MODEL FOR THE LINEAR SYSTEM
In this section, we consider a large scale linear discrete-time control system given by

$$x(k+1) = A x(k) + B u(k) \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state and $u \in \mathbb{R}^m$ is the control input. Our goal is to derive a reduced-order model for the large scale linear plant (1). We assume that $A$ and $B$ are constant matrices with real entries of dimensions $n \times n$ and $n \times m$, respectively.

First, we suppose that we have performed an identification of the dominant (slow) and non-dominant (fast) states of the original linear system (1) using the modal approach as described in [9]. Without loss of generality, we may assume that

$$x = \begin{bmatrix} x_s \\ x_f \end{bmatrix},$$

where $x_s \in \mathbb{R}^r$ represents the dominant state and $x_f \in \mathbb{R}^{n-r}$ represents the non-dominant state. Then the system (1) takes the form

$$\begin{bmatrix} x_s(k+1) \\ x_f(k+1) \end{bmatrix} = \begin{bmatrix} A_{ss} & A_{sf} \\ A_{fs} & A_{ff} \end{bmatrix} \begin{bmatrix} x_s(k) \\ x_f(k) \end{bmatrix} + \begin{bmatrix} B_s \\ B_f \end{bmatrix} u(k) \qquad (2)$$

For the sake of simplicity, we shall assume that the matrix $A$ has distinct eigenvalues. We note that this condition is
usually satisfied in most practical situations. Then it follows that $A$ is diagonalizable. Thus, we can find a nonsingular (modal) matrix $M$ consisting of the $n$ linearly independent eigenvectors of $A$ so that the linear transformation

$$x(k) = M z(k) \qquad (3)$$

results in the original system (2) being transformed into the following diagonal form

$$z(k+1) = \Lambda z(k) + \Gamma u(k) \qquad (4)$$

where

$$\Lambda = M^{-1} A M = \begin{bmatrix} \Lambda_s & 0 \\ 0 & \Lambda_f \end{bmatrix} \qquad (5)$$

is a diagonal matrix consisting of the $n$ eigenvalues of $A$, and

$$\Gamma = M^{-1} B = \begin{bmatrix} \Gamma_s \\ \Gamma_f \end{bmatrix} \qquad (6)$$

Thus, the plant (4) can be written as

$$\begin{bmatrix} z_s(k+1) \\ z_f(k+1) \end{bmatrix} = \begin{bmatrix} \Lambda_s & 0 \\ 0 & \Lambda_f \end{bmatrix} \begin{bmatrix} z_s(k) \\ z_f(k) \end{bmatrix} + \begin{bmatrix} \Gamma_s \\ \Gamma_f \end{bmatrix} u(k) \qquad (7)$$

Next, we make the following assumptions:

(H1) As $k \to \infty$, $z_f(k+1) \approx z_f(k)$, i.e. $z_f$ takes a constant value in the steady state.

(H2) The matrix $I - \Lambda_f$ is invertible.

Then it follows from (7) that for large values of $k$, we have

$$z_f(k) = \Lambda_f z_f(k) + \Gamma_f u(k), \qquad (8)$$

i.e.

$$z_f(k) = (I - \Lambda_f)^{-1} \Gamma_f u(k) \qquad (9)$$

The inverse of the linear transformation (3) is given by

$$z(k) = M^{-1} x(k) = G x(k) \qquad (10)$$

i.e.

$$\begin{bmatrix} z_s(k) \\ z_f(k) \end{bmatrix} = \begin{bmatrix} G_{ss} & G_{sf} \\ G_{fs} & G_{ff} \end{bmatrix} \begin{bmatrix} x_s(k) \\ x_f(k) \end{bmatrix} \qquad (11)$$

Next, we assume the following:

(H3) The matrix $G_{ff}$ is invertible.

Using the assumption (H3), we derive the following equation from (11):

$$x_f(k) = -G_{ff}^{-1} G_{fs}\, x_s(k) + G_{ff}^{-1} z_f(k) \qquad (12)$$

Substituting (9) into (12), we get

$$x_f(k) = P x_s(k) + Q u(k), \qquad (13)$$

where

$$P = -G_{ff}^{-1} G_{fs}, \qquad Q = G_{ff}^{-1} (I - \Lambda_f)^{-1} \Gamma_f \qquad (14)$$

Substituting (13) into (2), we get the reduced order model of the original linear system as

$$x_s(k+1) = \tilde{A}_s x_s(k) + \tilde{B}_s u(k) \qquad (15)$$

where

$$\tilde{A}_s = A_{ss} + A_{sf} P \quad \text{and} \quad \tilde{B}_s = B_s + A_{sf} Q \qquad (16)$$

Thus, under the assumptions (H1)-(H3), the original linear system (1) can be expressed in a simplified form as

$$x_s(k+1) = \tilde{A}_s x_s(k) + \tilde{B}_s u(k), \qquad x_f(k) = P x_s(k) + Q u(k) \qquad (17)$$

III. REDUCED ORDER CONTROLLER DESIGN
In this section, we consider the design of a reduced order controller for the linear system (17) using only the dominant state $x_s$ of the system. Thus, for the linear system (17), we investigate the problem of finding a state feedback controller of the form

$$u(k) = -K_s x_s(k) = -\begin{bmatrix} K_s & 0 \end{bmatrix} \begin{bmatrix} x_s(k) \\ x_f(k) \end{bmatrix} \qquad (18)$$

so that the resulting closed-loop system governed by the equations

$$x_s(k+1) = \left(\tilde{A}_s - \tilde{B}_s K_s\right) x_s(k), \qquad x_f(k) = \left(P - Q K_s\right) x_s(k) \qquad (19)$$

is exponentially stable. [Note that the stabilizing feedback control law (18), if it exists, will also stabilize the reduced-order linear system (15).]

From the first equation of (19), it follows that

$$x_s(k) = \left(\tilde{A}_s - \tilde{B}_s K_s\right)^k x_s(0) \qquad (20)$$

which shows that $x_s(k) \to 0$ as $k \to \infty$ if and only if the closed-loop matrix $\tilde{A}_s - \tilde{B}_s K_s$ is convergent; such a gain matrix $K_s$ exists if and only if the system pair $(\tilde{A}_s, \tilde{B}_s)$ is stabilizable.
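The construction of Section II is straightforward to carry out numerically. The following numpy sketch computes the reduced-order model (15)-(16) for a given pair (A, B) and a chosen dominant order r, assuming real distinct eigenvalues and that sorting the modes by magnitude (largest first) stands in for the dominance measure of [9]; it also assumes the states are already ordered so that the first r components form the dominant state.

import numpy as np

def reduced_order_model(A, B, r):
    """Compute the reduced model (15)-(16): returns (A_s, B_s, P, Q).

    Assumes the first r state components form the dominant state x_s and
    that A has real, distinct eigenvalues.
    """
    n = A.shape[0]
    eigvals, M = np.linalg.eig(A)           # modal matrix M, x = M z   (3)
    order = np.argsort(-np.abs(eigvals))    # slow (dominant) modes first
    M, lam = M[:, order], eigvals[order]
    G = np.linalg.inv(M)                    # z = G x                  (10)
    Gamma = G @ B                           # transformed input         (6)
    lam_f = np.diag(lam[r:])
    G_fs, G_ff = G[r:, :r], G[r:, r:]
    # P and Q from (14); z_f comes from the steady-state relation (9).
    P = -np.linalg.solve(G_ff, G_fs)
    Q = np.linalg.solve(G_ff, np.linalg.solve(np.eye(n - r) - lam_f, Gamma[r:, :]))
    A_s = A[:r, :r] + A[:r, r:] @ P         # (16)
    B_s = B[:r, :] + A[:r, r:] @ Q
    return A_s, B_s, P, Q

# Example: the 4x4 system of Section IV with r = 2 dominant states.
# A_s, B_s, P, Q = reduced_order_model(A, B, r=2)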

Especially, if the system pair $(\tilde{A}_s, \tilde{B}_s)$ is controllable, then the eigenvalues of the closed-loop system matrix $\tilde{A}_s - \tilde{B}_s K_s$ can be arbitrarily placed in the complex plane. In particular, we can always find a gain matrix $K_s$ such that the closed-loop system matrix $\tilde{A}_s - \tilde{B}_s K_s$ is convergent. Hence, we obtain the following result.

Theorem 1. Under the assumptions (H1)-(H3), the system (15) is a reduced-order model for the original linear system (1). Also, the original linear system (1) can be expressed by the equations (17). Next, the feedback control law (18), which uses only the dominant state of the linear system (1), stabilizes the system (17) if and only if it stabilizes the reduced-order linear system (15). Thus, the reduced order feedback controller problem for the given linear system (1) is solvable if and only if the system pair $(\tilde{A}_s, \tilde{B}_s)$ is stabilizable, i.e. there exists a feedback gain matrix $K_s$ such that the closed-loop system matrix $\tilde{A}_s - \tilde{B}_s K_s$ is convergent. If the system pair $(\tilde{A}_s, \tilde{B}_s)$ is controllable, then we can always place the closed-loop eigenvalues inside the unit circle of the complex plane and hence we can construct a feedback control law (18) that stabilizes the full linear system (17).

IV. NUMERICAL EXAMPLE
We consider the fourth order linear discrete-time control system described by

$$x(k+1) = A x(k) + B u(k) \qquad (21)$$

where

$$A = \begin{bmatrix} 2.6966 & 0.8948 & 0.1310 & 0.2093 \\ 0.3557 & 1.3681 & 0.9408 & 0.4551 \\ 0.0490 & 0.2512 & 0.5124 & 0.0811 \\ 0.7553 & 0.9327 & 0.8477 & 0.8511 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0.9126 \\ 0.4523 \\ 0.6721 \\ 0.3895 \end{bmatrix}.$$

The eigenvalues of the matrix $A$ are

$$\lambda_1 = 3.1402, \; \lambda_2 = 1.5550, \; \lambda_3 = 0.4280, \; \lambda_4 = 0.3050$$

Thus, we note that $\lambda_1, \lambda_2$ are unstable (slow) eigenvalues and $\lambda_3, \lambda_4$ are stable (fast) eigenvalues of the system matrix $A$.

The dominance measure of the eigenvalues is calculated as in [9] and obtained as

$$\Omega = \begin{bmatrix} 0.5232 & 0.6021 & 0.0018 & 0.1814 \\ 0.1922 & 0.6495 & 0.0243 & 0.4396 \\ 0.0363 & 0.1598 & 0.0247 & 0.7480 \\ 0.2644 & 0.4070 & 0.0999 & 0.6610 \end{bmatrix}$$

To determine the dominance of the $k$-th eigenvalue in all the states, we use the measure

$$\sigma_k = \sum_{i=1}^{4} \omega_{ik} \qquad (22)$$

Thus, we obtain

$$\sigma = \begin{bmatrix} 1.0161 & 0.6143 & 0.0527 & 0.1713 \end{bmatrix}$$

Thus, it is clear that the first two states $(x_1, x_2)$ are the dominant (slow) states, while the last two states $(x_3, x_4)$ are the non-dominant (fast) states of the linear system (21).

Using the procedure described in Section II, the reduced-order linear model for the given linear system (21) is obtained as

$$x_s(k+1) = \tilde{A}_s x_s(k) + \tilde{B}_s u(k) \qquad (23)$$

where

$$\tilde{A}_s = \begin{bmatrix} 2.7375 & 1.0961 \\ 0.4343 & 1.9577 \end{bmatrix} \quad \text{and} \quad \tilde{B}_s = \begin{bmatrix} 0.2628 \\ 0.1301 \end{bmatrix}.$$

Clearly, the system pair $(\tilde{A}_s, \tilde{B}_s)$ is completely controllable. Thus, the eigenvalues of the closed-loop system matrix $\tilde{A}_s - \tilde{B}_s K_s$ can be placed arbitrarily inside the unit circle. In particular, we choose the control law

$$u = -K_s x_s \qquad (24)$$

such that the closed-loop system matrix $\tilde{A}_s - \tilde{B}_s K_s$ has the eigenvalues $\mu_1 = 0.1$ and $\mu_2 = 0.1$. A simple calculation in MATLAB (using Ackermann's formula) gives

$$K_s = \begin{bmatrix} 38.8383 & 43.9036 \end{bmatrix}$$
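The gain computation can be reproduced with a few lines of numpy implementing Ackermann's formula (the counterpart of MATLAB's acker). Since the signs of the printed matrices may have been lost in reproduction, the entries below are the printed magnitudes and the resulting gain should be read as illustrative; the closed-loop eigenvalues nevertheless land at the requested locations.

import numpy as np

def ackermann(A, B, poles):
    """Single-input pole placement by Ackermann's formula:
    K = [0 ... 0 1] C^{-1} phi(A), where C is the controllability matrix
    and phi is the desired closed-loop characteristic polynomial."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(poles)                    # [1, c1, ..., cn]
    phi = sum(c * np.linalg.matrix_power(A, n - i)
              for i, c in enumerate(coeffs))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.solve(C, phi)

A_s = np.array([[2.7375, 1.0961],
                [0.4343, 1.9577]])             # printed magnitudes (signs lost)
B_s = np.array([[0.2628], [0.1301]])
K_s = ackermann(A_s, B_s, poles=[0.1, 0.1])
print(K_s, np.linalg.eigvals(A_s - B_s @ K_s))  # eigenvalues land at 0.1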

Upon substitution of the control law (24) into the reduced-order linear system (23), we obtain the closed-loop linear system

$$x_s(k+1) = \left(\tilde{A}_s - \tilde{B}_s K_s\right) x_s(k) \qquad (25)$$

which has the stable eigenvalues $\mu_1 = 0.1$ and $\mu_2 = 0.1$. Thus, the closed-loop system (25) is globally exponentially stable.

The general solution of the system (25) is given by the equation

$$x_s(k) = \left(\tilde{A}_s - \tilde{B}_s K_s\right)^k x_s(0).$$

The response $x_s(k)$ of the closed-loop system (25) for the initial state

$$x_s(0) = \begin{bmatrix} 10 \\ 10 \end{bmatrix}$$

is depicted in Figure 1.

Fig. 1. Time Responses of the Closed-Loop System (25)

V. CONCLUSIONS
In this paper, sufficient conditions are derived for the
design of reduced order controllers by obtaining a reduced
order model of the original plant using the dominant state
of the system. The reduced order controllers are assumed
to use only the state of the reduced order model of the
original plant. An example has been presented to illustrate
the effectiveness of the proposed design of reduced order
controllers for a four-dimensional linear discrete-time
control system.

REFERENCES
[1] S.D. Cumming, "Design of observers for reduced dynamics," Electronics Letters, vol. 5, pp. 213-214, 1969.
[2] T.E. Fortman and D. Williamson, "Design of low-order observers for linear feedback control laws," IEEE Transactions on Automatic Control, vol. 17, pp. 301-308, 1971.
[3] L. Litz and H. Roth, "State decomposition for singular perturbation order reduction: a modal approach," International J. Control, vol. 34, pp. 937-954, 1981.
[4] G. Lastman, N. Sinha and P. Rozsa, "On the selection of states to be retained in a reduced-order model," IEE Proc. Part D, vol. 131, pp. 15-22, 1984.
[5] B.D.O. Anderson and Y. Liu, "Controller reduction: concepts and approaches," IEEE Transactions on Automatic Control, vol. 34, pp. 802-812, 1989.
[6] D. Mustafa and K. Glover, "Controller reduction by H∞ balanced truncation," IEEE Transactions on Automatic Control, vol. 36, pp. 668-692, 1991.
[7] M. Aldeen, "Interaction modelling approach to distributed control with application to interconnected dynamical systems," International Journal on Control, vol. 241, pp. 303-310, 1991.
[8] M. Aldeen and H. Trinh, "Observing a subset of the states of linear systems," IEE Proc. Control Theory, vol. 141, pp. 137-144, 1994.
[9] V. Sundarapandian, "Distributed control schemes for large scale interconnected discrete-time linear systems," Math. Computer Modelling, vol. 5, pp. 313-319, 2005.
[10] V. Sundarapandian, "Observer design and stabilization of the dominant state of discrete-time linear systems," Math. Computer Modelling, vol. 41, pp. 581-586, 2005.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.261-266.

Global Chaos Synchronization of Liu-Su-Liu and Liu-Chen-Liu Systems by Active Nonlinear Control
R. Suresh1 and V. Sundarapandian2
1Research and Development Centre, 2Department of Mathematics
Vel Tech Dr. RR & Dr. SR Technical University, Avadi-Vel Tech Road, Avadi, Chennai-600 062, Tamil Nadu.
Email: 1mrpsuresh83@gmail.com, 2sundarvtu@gmail.com
Abstract: This paper investigates the global synchronization of 3-D chaotic systems, viz. identical Liu-Su-Liu systems (2006), identical Liu-Chen-Liu systems (2007), and non-identical Liu-Su-Liu and Liu-Chen-Liu chaotic systems. Active nonlinear control is the method used to achieve the synchronization of the chaotic systems addressed in this paper, and our theorems on the global exponential synchronization of the Liu-Su-Liu and Liu-Chen-Liu chaotic systems are established using Lyapunov stability theory. Since the Lyapunov exponents are not required for these calculations, the active control method is effective and convenient for synchronizing identical and different Liu-Su-Liu and Liu-Chen-Liu chaotic systems. Numerical simulations are provided to illustrate and validate the synchronization results for the Liu-Su-Liu and Liu-Chen-Liu chaotic systems.

Keywords: Chaos synchronization, nonlinear control, Liu-Chen-Liu system, Liu-Su-Liu system.

AMS Subject Classification: 34H10, 93C10, 93B52.

I. INTRODUCTION
Chaotic systems are dynamical systems that are highly sensitive to initial conditions. This sensitivity is popularly known as the butterfly effect [1]. The chaos synchronization problem was first described by Fujisaka and Yamada [2] in 1983. This problem did not receive great attention until Pecora and Carroll ([3]-[4]) published their results on chaos synchronization in the early 1990s. From then on, chaos synchronization has been extensively and intensively studied in the last three decades ([3]-[22]). Chaos theory has been explored in a variety of fields including physical [5], chemical [6] and ecological [7] systems, secure communications ([8]-[10]), etc.

In most of the chaos synchronization approaches, the master-slave or drive-response formalism is used. If a particular chaotic system is called the master or drive system and another chaotic system is called the slave or response system, then the idea of synchronization is to use the output of the master system to control the slave system so that the output of the slave system tracks the output of the master system asymptotically.

Since the seminal work by Pecora and Carroll ([3]-[4]), a variety of impressive approaches have been proposed for the synchronization of chaotic systems, such as the PC method ([3]-[4]), the sampled-data feedback synchronization method ([10]-[11]), the OGY method [12], the time-delay feedback approach [13], the backstepping design method [14], the adaptive design method ([15]-[19]), the sliding mode control method [20], the Lyapunov stability theory method [21], hyperchaos [22], etc.

This paper is organized as follows. In Section II, we give the problem statement and our methodology. In Section III, we discuss the chaos synchronization of two identical Liu-Su-Liu chaotic systems ([23], 2006). In Section IV, we discuss the chaos synchronization of two identical Liu-Chen-Liu chaotic systems ([24], 2007). In Section V, we discuss the chaos synchronization of Liu-Su-Liu and Liu-Chen-Liu chaotic systems. In Section VI, we present the conclusions of this paper.

II. PROBLEM STATEMENT AND OUR METHODOLOGY
Consider the chaotic system described by the dynamics

$$\dot{x} = A x + f(x) \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state of the system, $A$ is the $n \times n$ matrix of the system parameters and $f : \mathbb{R}^n \to \mathbb{R}^n$ is the nonlinear part of the system. We consider the system (1) as the master or drive system.

As the slave or response system, we consider the following chaotic system described by the dynamics

$$\dot{y} = B y + g(y) + u \qquad (2)$$

where $y \in \mathbb{R}^n$ is the state of the system, $B$ is the $n \times n$ matrix of the system parameters, $g : \mathbb{R}^n \to \mathbb{R}^n$ is the nonlinear part of the system and $u \in \mathbb{R}^n$ is the controller of the slave system.

If $A = B$ and $f = g$, then $x$ and $y$ are the states of two identical chaotic systems. If $A \neq B$ and $f \neq g$, then $x$ and $y$ are the states of two different chaotic systems.


In the nonlinear feedback control approach, we design a feedback controller $u$ which synchronizes the states of the master system (1) and the slave system (2) for all initial conditions $x(0), y(0) \in \mathbb{R}^n$. If we define the synchronization error as

$$e = y - x, \qquad (3)$$

then the synchronization error dynamics is obtained as

$$\dot{e} = B y - A x + g(y) - f(x) + u \qquad (4)$$

Thus, the global synchronization problem is essentially to find a feedback controller $u$ so as to stabilize the error dynamics (4) for all initial conditions $e(0) \in \mathbb{R}^n$, i.e.

$$\lim_{t \to \infty} e(t) = 0 \quad \text{for all } e(0) \in \mathbb{R}^n. \qquad (5)$$

We take as a candidate Lyapunov function

$$V(e) = e^T P e,$$

where $P$ is a positive definite matrix. Note that $V : \mathbb{R}^n \to \mathbb{R}$ is a positive definite function by construction. We assume that the parameters of the master and slave systems are known and that the states of both systems (1) and (2) are measurable. If we find a feedback controller $u$ so that

$$\dot{V}(e) = -e^T Q e,$$

where $Q$ is a positive definite matrix, then $\dot{V} : \mathbb{R}^n \to \mathbb{R}$ is a negative definite function. Thus, by Lyapunov stability theory [25], the error dynamics (4) is globally exponentially stable and hence the condition (5) will be satisfied. Then the states of the master system (1) and the slave system (2) will be globally and exponentially synchronized.

III. SYNCHRONIZATION OF IDENTICAL LIU-SU-LIU SYSTEMS
In this section, we apply the nonlinear control technique for the synchronization of two identical Liu-Su-Liu chaotic systems ([23], 2006). The Liu-Su-Liu chaotic system is one of the paradigms of three-dimensional chaotic systems proposed by the scientists L. Liu, Y.C. Su and C.X. Liu.

Thus, the master or drive system is described by the Liu-Su-Liu dynamics

$$\begin{aligned} \dot{x}_1 &= a (x_2 - x_1) \\ \dot{x}_2 &= b x_1 + x_1 x_3 \\ \dot{x}_3 &= -c x_3 - x_1 x_2 \end{aligned} \qquad (6)$$

where $x_1, x_2, x_3$ are the states of the system (6) and $a > 0$, $b > 0$, $c > 0$ are parameters of the system (6).

The slave or response system is also described by the Liu-Su-Liu dynamics

$$\begin{aligned} \dot{y}_1 &= a (y_2 - y_1) + u_1 \\ \dot{y}_2 &= b y_1 + y_1 y_3 + u_2 \\ \dot{y}_3 &= -c y_3 - y_1 y_2 + u_3 \end{aligned} \qquad (7)$$

where $y_1, y_2, y_3$ are the states of the system (7) and $u = (u_1, u_2, u_3)$ is the nonlinear controller to be designed.

The Liu-Su-Liu system (6) is one of the three-dimensional chaotic systems discovered by the scientists L. Liu, Y.C. Su and C.X. Liu, and it is a modified Lorenz system. The Liu-Su-Liu system (6) is chaotic when

$$a = 10, \; b = 35, \; c = 1.4 \qquad (8)$$

Figure 1 illustrates the chaotic portrait of the Liu-Su-Liu dynamics (6).

Fig. 1. Portrait of the Liu-Su-Liu System (6)

The synchronization error $e$ is defined by

$$e_i = y_i - x_i, \quad (i = 1, 2, 3)$$

Then the error dynamics is obtained as

$$\begin{aligned} \dot{e}_1 &= a (e_2 - e_1) + u_1 \\ \dot{e}_2 &= b e_1 + y_1 y_3 - x_1 x_3 + u_2 \\ \dot{e}_3 &= -c e_3 - y_1 y_2 + x_1 x_2 + u_3 \end{aligned} \qquad (9)$$

We choose the nonlinear controller as

$$\begin{aligned} u_1 &= -(a + b) e_2 \\ u_2 &= -e_2 - y_1 y_3 + x_1 x_3 \\ u_3 &= y_1 y_2 - x_1 x_2 \end{aligned} \qquad (10)$$

Substituting (10) into (9), the error dynamics simplifies to


$$\begin{aligned} \dot{e}_1 &= -a e_1 - b e_2 \\ \dot{e}_2 &= b e_1 - e_2 \\ \dot{e}_3 &= -c e_3 \end{aligned} \qquad (11)$$

We consider the Lyapunov function defined by

$$V(e) = \frac{1}{2}\left(e_1^2 + e_2^2 + e_3^2\right), \qquad (12)$$

which is a positive definite function on $\mathbb{R}^3$. Differentiating (12) along the trajectories of (11), we get

$$\dot{V}(e) = -a e_1^2 - e_2^2 - c e_3^2, \qquad (13)$$

which is a negative definite function on $\mathbb{R}^3$. Thus, the error dynamics (11) is globally exponentially stable and hence the states of the master system (6) and slave system (7) are globally and exponentially synchronized by the active nonlinear controller (10).

A. Numerical Results
For simulations, the fourth-order Runge-Kutta method with time-step $10^{-6}$ is used to solve the differential equations (6) and (7) with the active nonlinear controller (10). The parameters of the chaotic systems are chosen as in (8). The initial conditions of the master and slave systems are chosen as $x(0) = (6, 8, 10)$ and $y(0) = (2, 10, 5)$. Figure 2 shows the synchronization of the master system (6) and slave system (7).

Fig. 2. Error Trajectories for Systems (6) and (7)

The Liu-Chen-Liu system (14) is one of the threedimensional chaotic systems discovered by the scientists
L. Liu, S.Y. Chen and C. X. Liu, which is a new reversed
butterfly-shaped attractor.

The parameters of the chaotic systems are chosen as in (8).


The initial conditions of the master and slave systems are
chosen as x(0) = (6,8,10) and y(0) = (2,10, 5).

The Liu-Chen-Liu system (14) is chaotic when

Figure 2 shows the synchronization of the master system


(6) and slave system (7).

Figure 3 illustrates the chaotic portrait of the Liu-ChenLiu dynamics (14).

= 10, = 40, = 2.5, h = 1, k = 16.

(16)

Fig. 3. Portrait of the Liu-Chen-Liu System (14)

Fig. 2. Error Trajectories for Systems (6) and (7)
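The simulation just described can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code: the step size is coarsened from $10^{-6}$ so the demo runs quickly, and the control u is held constant within each Runge-Kutta step):

    import numpy as np

    a, b, c = 10.0, 35.0, 1.4  # chaotic parameters, equation (8)

    def master(x):
        # Liu-Su-Liu dynamics (6)
        x1, x2, x3 = x
        return np.array([a*(x2 - x1), b*x1 + x1*x3, -c*x3 - x1*x2])

    def controller(x, y):
        # active nonlinear controller (10)
        x1, x2, x3 = x
        y1, y2, y3 = y
        e = y - x
        return np.array([-(a + b)*e[1],
                         -e[1] - y1*y3 + x1*x3,
                         y1*y2 - x1*x2])

    def slave(y, u):
        # controlled Liu-Su-Liu dynamics (7)
        y1, y2, y3 = y
        return np.array([a*(y2 - y1), b*y1 + y1*y3, -c*y3 - y1*y2]) + u

    def rk4(f, s, h):
        # one fourth-order Runge-Kutta step
        k1 = f(s); k2 = f(s + 0.5*h*k1); k3 = f(s + 0.5*h*k2); k4 = f(s + h*k3)
        return s + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

    x = np.array([6.0, 8.0, 10.0])   # x(0)
    y = np.array([2.0, 10.0, 5.0])   # y(0)
    h = 1e-4
    for _ in range(int(2.0/h)):
        u = controller(x, y)
        x = rk4(master, x, h)
        y = rk4(lambda s: slave(s, u), y, h)  # u frozen over the step
    print("final error e = y - x:", y - x)    # decays toward zero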

IV. SYNCHRONIZATION OF IDENTICAL LIU-CHEN-LIU SYSTEMS

In this section, we apply the nonlinear control technique for the synchronization of two identical Liu-Chen-Liu chaotic systems ([24], 2007). The Liu-Chen-Liu chaotic system is one of the paradigms of the three-dimensional chaotic systems proposed by the scientists L. Liu, S. Y. Chen and C. X. Liu.

Thus, the master or drive system is described by the Liu-Chen-Liu dynamics

$$\dot{x}_1 = \alpha(x_2 - x_1), \qquad \dot{x}_2 = \beta x_1 + k x_1 x_3, \qquad \dot{x}_3 = -\gamma x_3 - h x_1 x_2, \tag{14}$$

where $x_1, x_2, x_3$ are the states of the system (14) and $\alpha, \beta, \gamma, h, k$ are positive parameters of the system (14).

The slave or response system is also described by the controlled Liu-Chen-Liu dynamics

$$\dot{y}_1 = \alpha(y_2 - y_1) + u_1, \qquad \dot{y}_2 = \beta y_1 + k y_1 y_3 + u_2, \qquad \dot{y}_3 = -\gamma y_3 - h y_1 y_2 + u_3, \tag{15}$$

where $y_1, y_2, y_3$ are the states of the system (15) and $u = (u_1, u_2, u_3)$ is the nonlinear controller to be designed.

The Liu-Chen-Liu system (14) is one of the three-dimensional chaotic systems discovered by the scientists L. Liu, S. Y. Chen and C. X. Liu, and has a new reversed butterfly-shaped attractor. The Liu-Chen-Liu system (14) is chaotic when

$$\alpha = 10, \quad \beta = 40, \quad \gamma = 2.5, \quad h = 1, \quad k = 16. \tag{16}$$

Figure 3 illustrates the chaotic portrait of the Liu-Chen-Liu dynamics (14).

Fig. 3. Portrait of the Liu-Chen-Liu System (14)

The synchronization error e is defined by

$$e_i = y_i - x_i, \quad (i = 1, 2, 3). \tag{17}$$

Then the error dynamics is obtained as

$$\dot{e}_1 = \alpha(e_2 - e_1) + u_1, \qquad \dot{e}_2 = \beta e_1 + k(y_1 y_3 - x_1 x_3) + u_2, \qquad \dot{e}_3 = -\gamma e_3 - h(y_1 y_2 - x_1 x_2) + u_3. \tag{18}$$

We choose the nonlinear controller as

$$u_1 = -(\alpha + \beta) e_2, \qquad u_2 = -e_2 - k(y_1 y_3 - x_1 x_3), \qquad u_3 = h(y_1 y_2 - x_1 x_2). \tag{19}$$

Substituting (19) into (18), the error dynamics simplifies to

$$\dot{e}_1 = -\alpha e_1 - \beta e_2, \qquad \dot{e}_2 = \beta e_1 - e_2, \qquad \dot{e}_3 = -\gamma e_3. \tag{20}$$

We consider the Lyapunov function defined by

$$V(e) = \tfrac{1}{2}\left(e_1^2 + e_2^2 + e_3^2\right), \tag{21}$$

which is a positive definite function on $R^3$.

Differentiating (21) along the trajectories of (20), we get

$$\dot{V}(e) = -\alpha e_1^2 - e_2^2 - \gamma e_3^2, \tag{22}$$

which is a negative definite function on $R^3$.

Thus, the error dynamics (20) is globally exponentially stable and hence the states of the master system (14) and slave system (15) are globally and exponentially synchronized by the active nonlinear controller (19).

A. Numerical Results

For simulations, the fourth-order Runge-Kutta method with time step $10^{-6}$ is used to solve the differential equations (14) and (15) with the active nonlinear controller (19).

The parameters of the chaotic systems are chosen as in (16). The initial conditions of the master and slave systems are chosen as $x(0) = (2, 4, 15)$ and $y(0) = (5, 9, 1)$.

Figure 4 shows the synchronization of the master system (14) and slave system (15).

Fig. 4. Error Trajectories for Systems (14) and (15)

V. SYNCHRONIZATION OF LIU-SU-LIU AND LIU-CHEN-LIU CHAOTIC SYSTEMS

In this section, we apply the nonlinear control technique for the synchronization of the non-identical Liu-Su-Liu and Liu-Chen-Liu chaotic systems.

As the master system, we consider the Liu-Su-Liu dynamics described by

$$\dot{x}_1 = a(x_2 - x_1), \qquad \dot{x}_2 = b x_1 + x_1 x_3, \qquad \dot{x}_3 = -c x_3 - x_1 x_2. \tag{23}$$

As the slave system, we consider the controlled Liu-Chen-Liu dynamics described by

$$\dot{y}_1 = \alpha(y_2 - y_1) + u_1, \qquad \dot{y}_2 = \beta y_1 + k y_1 y_3 + u_2, \qquad \dot{y}_3 = -\gamma y_3 - h y_1 y_2 + u_3. \tag{24}$$

Here, $a, b, c, \alpha, \beta, \gamma, h, k$ are positive parameters and $u = (u_1, u_2, u_3)$ is the nonlinear controller to be designed.

The synchronization error is defined by

$$e_i = y_i - x_i, \quad (i = 1, 2, 3). \tag{25}$$

Then the error dynamics is obtained as

$$\dot{e}_1 = \alpha(e_2 - e_1) + (\alpha - a)(x_2 - x_1) + u_1, \qquad \dot{e}_2 = \beta e_1 + (\beta - b) x_1 + k y_1 y_3 - x_1 x_3 + u_2, \qquad \dot{e}_3 = -\gamma e_3 + (c - \gamma) x_3 - h y_1 y_2 + x_1 x_2 + u_3. \tag{26}$$

We choose the nonlinear controller as

$$u_1 = -(\alpha + \beta) e_2 + (a - \alpha)(x_2 - x_1), \qquad u_2 = -e_2 + (b - \beta) x_1 - k y_1 y_3 + x_1 x_3, \qquad u_3 = (\gamma - c) x_3 + h y_1 y_2 - x_1 x_2. \tag{27}$$

Substituting (27) into (26), the error dynamics simplifies to

$$\dot{e}_1 = -\alpha e_1 - \beta e_2, \qquad \dot{e}_2 = \beta e_1 - e_2, \qquad \dot{e}_3 = -\gamma e_3. \tag{28}$$

We consider the Lyapunov function defined by

$$V(e) = \tfrac{1}{2}\left(e_1^2 + e_2^2 + e_3^2\right), \tag{29}$$

which is a positive definite function on $R^3$.

Differentiating (29) along the trajectories of (28), we get

$$\dot{V}(e) = -\alpha e_1^2 - e_2^2 - \gamma e_3^2, \tag{30}$$

which is a negative definite function on $R^3$.

Thus, the error dynamics (28) is globally exponentially stable and hence the states of the master system (23) and slave system (24) are globally and exponentially synchronized by the active nonlinear controller (27).

A. Numerical Results

For simulations, the fourth-order Runge-Kutta method with time step $10^{-6}$ is used to solve the differential equations (23) and (24) with the active nonlinear controller (27).

The parameters of the chaotic systems are chosen as in (8) and (16). The initial conditions of the master and slave systems are chosen as $x(0) = (8, 2, 7)$ and $y(0) = (2, 7, 1)$.

Figure 5 shows the synchronization of the master system (23) and slave system (24).

Fig. 5. Error Trajectories for Systems (23) and (24)

VI. CONCLUSIONS

In this paper, a nonlinear control method based on Lyapunov stability theory has been deployed to achieve global chaos synchronization for the following three cases:

A) Identical Liu-Su-Liu chaotic systems (2006)
B) Identical Liu-Chen-Liu chaotic systems (2007) and
C) Non-identical Liu-Su-Liu and Liu-Chen-Liu systems.

Numerical simulations have been given to illustrate and validate the proposed synchronization approach for the global chaos synchronization of the above three cases of chaotic systems. Since Lyapunov exponents are not required for these calculations, the nonlinear control method is very effective and convenient to achieve global chaos synchronization for the three cases of chaotic systems discussed in this paper.

REFERENCES

[1] K. T. Alligood, T. Sauer and J. A. Yorke, Chaos: An Introduction to Dynamical Systems, Berlin, Germany: Springer-Verlag, 1997.
[2] H. Fujisaka and T. Yamada, "Stability theory of synchronized motion in coupled-oscillator systems," Progress of Theoretical Physics, vol. 69, no. 1, pp. 32-47, 1983.
[3] L. M. Pecora and T. L. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett., vol. 64, pp. 821-824, 1990.
[4] L. M. Pecora and T. L. Carroll, "Synchronizing chaotic circuits," IEEE Trans. Circ. Sys., vol. 38, pp. 453-456, 1991.
[5] M. Lakshmanan and K. Murali, Chaos in Nonlinear Oscillators: Controlling and Synchronization, Singapore: World Scientific, 1996.
[6] S. K. Han, C. Kurrer and Y. Kuramoto, "Dephasing and bursting in coupled neural oscillators," Phys. Rev. Lett., vol. 75, pp. 3190-3193, 1995.
[7] B. Blasius, A. Huppert and L. Stone, "Complex dynamics and phase synchronization in spatially extended ecological systems," Nature, vol. 399, pp. 354-359, 1999.
[8] J. Lu, X. Wu, X. Han and J. Lu, "Adaptive feedback synchronization of a unified chaotic system," Phys. Lett. A, vol. 329, pp. 327-333, 2004.
[9] L. Kocarev and U. Parlitz, "General approach for chaotic synchronization with applications to communications," Phys. Rev. Lett., vol. 74, pp. 5028-5030, 1995.
[10] K. Murali and M. Lakshmanan, "Secure communication using a compound signal using sampled-data feedback," Applied Mathematics and Mechanics, vol. 11, pp. 1309-1315, 2003.
[11] T. Yang and L. O. Chua, "Generalized synchronization of chaos via linear transformations," Internat. J. Bifur. Chaos, vol. 9, pp. 215-219, 1999.
[12] E. Ott, C. Grebogi and J. A. Yorke, "Controlling chaos," Phys. Rev. Lett., vol. 64, pp. 1196-1199, 1990.
[13] J. H. Park and O. M. Kwon, "A novel criterion for delayed feedback control of time-delay chaotic systems," Chaos, Solitons and Fractals, vol. 17, pp. 709-716, 2003.
[14] X. Wu and J. Lü, "Parameter identification and backstepping control of uncertain Lü system," Chaos, Solitons and Fractals, vol. 18, pp. 721-729, 2003.
[15] J. Lu, X. Wu, X. Han and J. Lu, "Adaptive feedback synchronization of a unified chaotic system," Phys. Lett. A, vol. 329, pp. 327-333, 2004.
[16] Y. G. Yu and S. C. Zhang, "Adaptive backstepping synchronization of uncertain chaotic systems," Chaos, Solitons and Fractals, vol. 27, pp. 1369-1375, 2006.
[17] J. H. Park, S. M. Lee and O. M. Kwon, "Adaptive synchronization of Genesio-Tesi chaotic system via a novel feedback control," Physics Letters A, vol. 371, no. 4, pp. 263-270, 2007.
[18] J. H. Park, "Adaptive control for modified projective synchronization of a four-dimensional chaotic system with uncertain parameters," J. Computational and Applied Math., vol. 213, no. 1, pp. 288-293, 2008.
[19] J. H. Park, "Chaos synchronization of nonlinear Bloch equations," Chaos, Solitons and Fractals, vol. 27, no. 2, pp. 357-361, 2006.
[20] H. T. Yau, "Design of adaptive sliding mode controller for chaos synchronization with uncertainties," Chaos, Solitons and Fractals, vol. 22, pp. 341-347, 2004.
[21] R. Suresh and V. Sundarapandian, "Synchronization of an optical hyperchaotic system," International J. Computational and Applied Math., vol. 5, no. 2, pp. 199-207, 2010.
[22] R. Vicente, J. Dauden, P. Colet and R. Toral, "Analysis and characterization of the hyperchaos generated by a semiconductor laser subject to a delayed feedback loop," IEEE J. Quantum Electronics, vol. 41, pp. 541-548, 2005.
[23] L. Liu, Y. C. Su and C. X. Liu, "A modified Lorenz system," Internat. J. Nonlinear Science and Numerical Simulation, vol. 7, pp. 187-190, 2006.
[24] L. Liu, S. Y. Chen and C. X. Liu, "Experimental confirmation of a new reversed butterfly-shaped attractor," Chin. Phys., vol. 16, pp. 1897-1900, 2007.
[25] W. Hahn, The Stability of Motion, Berlin, Germany: Springer-Verlag, 1967.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 -30 August, 2011. pp.267-270.

New Improved Methodology for Pedestrian Detection


in Advanced Driver Assistance System
Vijay Gaikwad, Sanket Borhade, Pravin Jadhav and Manthan Shah
Department of Electronics Engineering, Vishwakarma Institute of Technology, Pune
Email: vdg_3@rediffmail.com, sanket.borhade@gmail.com, pravin.jadhav.vit@gmail.com,
manthan.shah03@gmail.com
Abstract - In recent years, pedestrian detection (PD) has played a vital role in a variety of applications such as security cameras, automotive control and so forth. These applications require two essential features: high speed and high accuracy. The accuracy is determined by how the image features are described. The image feature description must be robust against occlusion, rotation, and changes in object shape and illumination conditions. A number of feature descriptors have been proposed. Previously, histogram of oriented gradients (HOG) features were extensively used along with a support vector machine (SVM) classifier for PD. HOG features and an SVM classifier can achieve good performance for PD, but they are time consuming. To achieve high detection speed with good detection performance, a two-step framework method was proposed. The two-step framework consists of a full-body detection (FBD) step and a head-shoulder detection (HSD) step. Zhen Li proposed the fusion of Haar-like and HOG features to get better performance, with the HSD step utilizing edgelet features for classification and detection. That approach has limitations, namely a low detection rate and low computation speed.
In order to alleviate these limitations, we propose here a new methodology for improving the detection rate and speed. The performance and accuracy of the detection can be improved by the combination of Haar-like and Triangular features for FBD and Edgelet/Shapelet features for HSD. We expect an average 95% detection rate and 60% faster speed for the proposed method.
Keywords - HOG, Haar-like features, Triangular features, edgelet/shapelet.

I. INTRODUCTION
Automatic PD is a rapidly evolving area in image analysis and surveillance, and it is essential in many applications such as advanced robotics and intelligent vehicles. It is a challenging problem due to changing articulated pose, style and color of clothing, background, illumination and weather conditions. More importantly, robust pedestrian identification and tracking are highly dependent on reliable detection and segmentation in each frame.
Many different techniques have been proposed to address the problem of people detection. The most popular methods are based on feature extraction, which has been developed in recent years. A good feature descriptor is crucial for the accuracy of PD. Viola built an efficient moving-person detector based on Haar-like features. Takuya Kobayashi improved the histogram of oriented gradients (HOG) features for pedestrians and obtained a high recognition rate. Bo Wu has used local edge pieces called edgelets for people detection. And Supriya Rao proposed a new method to detect individual body parts of people. However, these features are not robust enough against many unpredictable problems, so a fusion of these features is needed to realize a higher detection rate.
Haar-like features can capture different image details, and fast algorithms which use integral images can calculate such features very quickly. But they are not very efficient for PD due to the high variety of pedestrian appearances. HOG features and shapelet features can achieve good performance for PD, but they are time consuming. To achieve high detection speed with good detection performance, a new methodology is proposed.
A new two-step framework for detecting pedestrians in video images is introduced. This framework contains two steps, the first of which is full-body detection (FBD). In this step, a detector is trained by the fusion of Haar-like and Triangular features to detect candidate areas. In the second step, called head-shoulder detection (HSD), another detector trained by edgelet/shapelet features is applied to detect only the head-shoulder part of people in the FBD output area.
Experimental results show that this method can take advantage of the speed merit of Haar-like features and the good performance of edgelet/shapelet features.
II. ALGORITHMS
The detection process includes two stages. In the first stage, a detector using Haar-like features scans the whole image over different positions and scales, yielding several rectangular regions which may contain pedestrians. In the second stage, a classifier using shapelet features classifies these regions into pedestrian or non-pedestrian. The general architecture is shown in Fig. 1.
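As a rough illustration of this two-stage flow (a sketch under assumed interfaces, not the implementation evaluated in this paper; fbd_scan and hsd_classify are hypothetical stand-ins for the trained detectors described below):

    import numpy as np

    def detect_pedestrians(image, fbd_scan, hsd_classify):
        # Stage 1: the full-body detector proposes candidate boxes (x, y, w, h).
        # Stage 2: the head-shoulder classifier verifies the upper part of each box.
        confirmed = []
        for (x, y, w, h) in fbd_scan(image):
            head_shoulder = image[y:y + h // 3, x:x + w]  # upper third of the box
            if hsd_classify(head_shoulder):
                confirmed.append((x, y, w, h))
        return confirmed

    # Example with dummy stand-ins:
    dummy = np.zeros((240, 320), dtype=np.uint8)
    print(detect_pedestrians(dummy, lambda img: [(10, 20, 32, 64)], lambda patch: True))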


Fig. 1. General Architecture

A. Detector using Haar-Like Features
Viola and Jones proposed Haar-like features for rapid object detection, which were then applied to pedestrian detection. With simple Haar-like features, which can be calculated efficiently by using integral images, and AdaBoost classifiers in a cascade structure, their detector achieves a high detection speed. Lienhart added some other similar features to extend the Haar-like feature set. In our method we use the basic Haar-like features together with some extended Haar-like features proposed by Lienhart. These features are shown in Fig. 2.

Fig. 2. Haar-like features

A Haar-like feature is composed of several white or black areas. The intensity values of the pixels in the white or black areas are separately accumulated, and the feature value is then computed as a weighted combination of these two sums. The detector is trained through the AdaBoost algorithm and is finally composed of a cascade of classifiers. The first several classifiers are used to reject a large number of negative examples with simple computation; subsequent classifiers are then used to achieve low false positive rates with more complex processing. This kind of cascaded structure can achieve increased detection performance while radically reducing computation time.
The detector should have a high detection rate so as to generate as many ROI as possible. If a pedestrian region is rejected in this stage, it is judged as non-pedestrian directly, while false positives can be further classified in the second stage. So the focus of the first stage is to keep a high detection rate even at the cost of many false positives.
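To make the integral-image idea concrete, here is a minimal Python sketch (illustrative only; the two-rectangle feature shown corresponds to one of the basic types in Fig. 2):

    import numpy as np

    def integral_image(gray):
        # ii[r, c] = sum of gray[:r, :c]; zero-padded so lookups need no bounds checks
        return np.pad(gray, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    def rect_sum(ii, r, c, h, w):
        # sum of pixels in the h-by-w rectangle with top-left corner (r, c):
        # four lookups regardless of rectangle size
        return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

    def haar_two_rect_vertical(ii, r, c, h, w):
        # white half on top minus black half below: a basic edge-type feature
        half = h // 2
        return rect_sum(ii, r, c, half, w) - rect_sum(ii, r + half, c, half, w)

    gray = np.arange(64, dtype=float).reshape(8, 8)
    ii = integral_image(gray)
    print(haar_two_rect_vertical(ii, 0, 0, 4, 4))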

B. Edgelet Features
Improved edgelet features were proposed by Bo Wu and Jie Xu for moving-pedestrian detection. An edgelet is a short segment of line or curve, representing a small part of a human contour. Bo Wu defined three kinds of edgelet: line, arc and symmetric pair. In order to detect whether position w in an image is similar to the above kinds of edgelet features, define M_I(w) and n_I(w) to be the edge intensity and normal at position w. M_I(w) and n_I(w) are calculated by 3 x 3 Sobel kernel convolution, and the orientation of the normal vector is quantized into six discrete values. Because edgelet features are only used for head-shoulder detection in our method, we select the 3 edgelet features shown in Fig. 3 for detection, all of which give an accurate description of the head-shoulder part.

Fig. 3. Edgelet Features

C. Adaboost Algorithm
Adaboost is an iterated learning method: a strong learning algorithm built up from a group of weak learning algorithms. The algorithm works by changing the distribution of the data; the training-set results are used to determine the correctness of the classification of the samples and the overall classification accuracy, which in turn determine the weight of every single sample, and the weak classifiers obtained by every training round are combined together as the final decision classifier. In Adaboost, every training sample is assigned a weight, which indicates the probability of that sample being selected into the training set of some weak classifier. If a sample is not classified correctly, its probability of being chosen when the next training set is constructed will increase; otherwise, the probability will decrease. In this way, Adaboost can focus on the more difficult samples, which carry more extra information.
However, when training a classifier using Adaboost, there is a high probability of encountering the situation where one or several samples of the training set are difficult to classify correctly; after several rounds of updating, these difficult samples will be assigned a high weight. During the subsequent training process, the selected weak classifiers will then concentrate on this small portion of the total samples; in other words, training simply overfits on these samples. There are two ways to reduce this overfitting: pick out the high-weight difficult samples, which is easy to implement but restricts the classification capacity; or adjust the weight of the difficult samples, which is the way adopted in this paper.
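The reweighting rule described above can be sketched as follows (a minimal discrete-AdaBoost step, illustrative rather than the exact variant used in this paper):

    import numpy as np

    def adaboost_round(weights, predictions, labels):
        # One AdaBoost reweighting step.
        # predictions, labels: arrays of +/-1; weights: current sample distribution.
        err = np.sum(weights[predictions != labels])
        err = np.clip(err, 1e-10, 1 - 1e-10)        # guard against degenerate errors
        alpha = 0.5 * np.log((1 - err) / err)        # weak-classifier weight
        weights = weights * np.exp(-alpha * labels * predictions)
        return weights / weights.sum(), alpha        # renormalized distribution

    labels = np.array([1, 1, -1, -1])
    preds = np.array([1, -1, -1, -1])                # one mistake (sample 2)
    w = np.full(4, 0.25)
    w, alpha = adaboost_round(w, preds, labels)
    print(w, alpha)   # the misclassified sample's weight increases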

D. Classifier using Shapelet Features
Sabzmeydani and Mori proposed shapelet features for pedestrian detection. These features are a set of mid-level features focused on local regions of the image and built from low-level gradient information. The training phase of their algorithm consists of three steps:
Firstly, for a training sample, the gradient responses in different directions are extracted. Then the local averages of these responses around each pixel are used as low-level features to build the mid-level (shapelet) features.
Secondly, the training sample is divided into a number of small sub-windows. Then, for each sub-window, a subset of its low-level features is selected to construct a mid-level shapelet feature through the AdaBoost algorithm. So each shapelet feature is a combination of gradients with different orientations and strengths at different locations within the sub-window. We can see that the shapelet features describe local neighborhoods of the image. To combine the information from different parts of the image, the final step is to train the final classifier using the shapelet features through the AdaBoost algorithm.
In the shapelet classifier, one important parameter is the size of the sub-window. Sabzmeydani and Mori define three sets of shapelet features with different sizes and investigate the influence of shapelet size on the detector performance. Six types of sub-windows with sizes from 5×5 to 18×18 are used. In our classifier we use all six types of sub-windows. For each shapelet feature, we choose m_i = √n_i weak classifiers, where n_i is the number of low-level features inside the sub-window. For example, the 5×5 type has 100 (= 5 × 5 × 4) low-level features, so the number of weak classifiers in the shapelet feature is 10. The six types of sub-windows are shown in Table 1.
For each type of shapelet feature, we scan the detection window (of size 32×64) with that shapelet's sub-window size, with strides of 4 pixels between sub-windows.
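Under the counting rule just stated (n_i = s × s × 4 low-level features for four gradient directions, m_i = √n_i weak classifiers, and a stride-4 scan of the 32×64 window), the sub-window bookkeeping can be reproduced in a few lines of Python (an illustration; the position count is our own addition):

    import math

    for s in (5, 8, 10, 12, 15, 18):
        n = s * s * 4                      # low-level features per sub-window
        m = math.isqrt(n)                  # weak classifiers per shapelet feature
        positions = ((32 - s) // 4 + 1) * ((64 - s) // 4 + 1)  # stride-4 tiling
        print(f"{s}x{s}: n = {n}, m = {m}, {positions} sub-window positions")

The printed m values (10, 16, 20, 24, 30, 36) match Table 1.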

Table 1. Six types of sub-windows

Size      Number of weak classifiers
5×5       10
8×8       16
10×10     20
12×12     24
15×15     30
18×18     36

III. EXPERIMENTAL DATA
A. Software & Hardware Platform
MATLAB has been the ready tool for the simulation work of various papers, and OpenCV too forms a tool to test the various algorithms. These platforms are generally used in the initial stages of projects. Various DSP processors are used in the real-time implementation of the algorithms. Blackfin 16/32-bit embedded processors offer software flexibility and scalability for convergent applications: multi-format audio, video, voice and image processing, multimode baseband and packet processing, control processing, and real-time security. DaVinci digital video processors are optimized for digital video systems; DaVinci digital media processor solutions are tailored for digital audio, video, imaging, and vision applications. The DaVinci platform includes a general-purpose processor, video accelerators, an optional DSP, and related peripherals.
B. Database
The training dataset used in the experiments contains more than 3000 images. Most positive samples come from two well-known pedestrian datasets, shown in Fig. 6: the MIT dataset and the INRIA dataset, with a size of 32×64 pixels; others come from surveillance videos. The head-shoulder training samples are obtained by cutting the images from these two datasets and zooming out to 20×20 pixels. Some negative samples are selected from the INRIA dataset, and others are from the Internet. The experiments are carried out on videos from the CAVIAR sequences and a campus, which are down-sampled to 320×240 pixels in our framework.

IV. RESULTS
After experimentally testing various images from the MIT and INRIA datasets, the results shown in Fig. 4 and Fig. 5 were obtained. Fig. 4 shows the detection rate of head-shoulder detection for various methods, namely Haar-like, HOG and edgelet. From the obtained results it can be clearly concluded that edgelet features provide a better detection rate than the Haar-like and HOG methods.

Fig. 4. Result from head-shoulder part detection using Haar-like, HOG, and edgelet

Thus, from the obtained results it can be proposed that the performance and accuracy of the detection can be improved by the combination of Edgelet/Shapelet features for HSD and Haar-like and Triangular features for FBD. MATLAB can be used as a tool for simulation purposes and OpenCV as a tool to test our proposed method. Finally, our proposed method can be practically tested in the real world by developing an embedded system for it using either Blackfin 16/32-bit embedded processors or a DaVinci digital video processor, and we expect our proposed method to provide a very high average detection rate and less computation time.

Fig. 5. Comparison with other methods

Fig. 6. MIT dataset and the INRIA dataset used for training and testing purposes

V. CONCLUSION
A number of feature descriptors have been proposed. Previously, HOG features were extensively used along with an SVM classifier for PD. HOG features and an SVM classifier can achieve good performance for PD, but they are time consuming. To achieve high detection speed with good detection performance, a two-step framework method was proposed. The two-step framework consists of a full-body detection (FBD) step and a head-shoulder detection (HSD) step. Zhen Li proposed the fusion of Haar-like and HOG features to get better performance, with the HSD step utilizing edgelet features for classification and detection. That approach has limitations, namely a low detection rate and low computation speed.
In order to alleviate these limitations, we have proposed a new methodology for improving the detection rate and speed. The performance and accuracy of the detection can be improved by the combination of Haar-like and Triangular features for FBD and Edgelet/Shapelet features for HSD. We expect an average 95% detection rate and 60% faster speed for the proposed method.

REFERENCES

[1] P. Viola, M. Jones and D. Snow, "Detecting Pedestrians Using Patterns of Motion and Appearance," The 9th ICCV, France, 2003, pp. 734-741.
[2] Takuya Kobayashi, Akinori Hidaka and Takio Kurita, "Selection of Histograms of Oriented Gradients Features for Pedestrian Detection," Neural Information Processing: 14th International Conference, Japan, 2007, pp. 598-607.
[3] B. Wu and R. Nevatia, "Detection of multiple, partially occluded humans in a single image by Bayesian combination of edgelet part detectors," IEEE International Conference on Computer Vision, China, 2005, pp. 90-97.
[4] Supriya Rao, N. C. Pramod and Chaitanya Krishna Paturu, "People Detection in Image and Video Data," Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis, Canada, 2008, pp. 85-92.
[5] J. Janta, P. Kumsawat, K. Attakitmongcol and A. Srikaew, "A Pedestrian Detection System Using Applied Log-Gabor Filters," Proceedings of the 7th WSEAS International Conference on Signal, Speech and Image Processing, China, 2007, pp. 55-60.
[6] Navneet Dalal and Bill Triggs, "Histograms of Oriented Gradients for Human Detection," Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, USA, 2005, Vol. 1, pp. 886-893.
[7] MIT Pedestrian Dataset, http://cbcl.mit.edu/cbcl/softwaredatasets/PedestrianData.html
[8] INRIA Person Dataset, http://pascal.inrialpes.fr/data/human/

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.271-274.

Spintronics Based Cancer Detection


V. Raghavi
Final year, VelTech Multitech Dr.Rangarajan Dr.Sakunthala Engineering College, Chennai.
Email: raaghavee@gmail.com
Abstract - Spintronics is a branch of science that deals with the spin of an electron and not with its charge, as electronics does. There are two spins (UP spin and DOWN spin), like 0s and 1s, to represent two states. The important properties used in our paper are that spins can be directed in a particular direction and that a change in spin can be detected. Cancer cells are easy to identify only when they are large in number. When matured, these cancer cells result in the formation of a tumor, which has to be removed by surgery. After surgery, the presence of even a single cancer cell can result in the growth of a tumor in that part of the body. The spintronic scanning technique presented here is an efficient technique to detect cancer cells even when they are few in number. The steps involved are: 1) The patient is exposed to a strong magnetic field so that the body cells get magnetized. 2) A beam of electrons with polarized spin is directed at an unaffected part of the body and the change in spin is detected by a polarimeter; let it be X. 3) A beam of electrons with polarized spin is directed at the part which has undergone surgery, and let the corresponding change in spin be Y. 4) If X - Y = 0, it indicates that the cancer cells have been removed from the body; if not, it indicates the presence of traces of cancer cells, which have to be treated again to ensure the complete safety of the patient. Thus this technique efficiently identifies the presence of cancer cells in the part of the body that has undergone surgery, preventing any further development.

I. INTRODUCTION
An emerging research field in physics focused on spin-dependent phenomena applied to electronic devices is called spintronics. The promise of spintronics is based on manipulation not only of the charge of electrons, but also of their spin, which enables them to perform new functions. Currently, the ability to manipulate electron spin is expected to lead to remarkable improvements in electronic systems and devices used in photonics, data processing and communications technologies. This paper brings out an innovative idea of extending the hands of spintronics into the MEDICAL FIELD, in the detection of cancer cells even when they are very few in number in the human body. This approach relies on two important aspects:
- the behavior of electron spin in a magnetic field
- the cancer cells' abnormality compared with normal cells

II. SPINTRONICS
Spintronics, or spin electronics, refers to the study of the role played by electron spin in solid state physics, and of possible devices that specifically exploit spin properties instead of, or in addition to, charge degrees of freedom. In spintronics, electron spin, in addition to charge, is manipulated to yield a desired outcome.
An electron is just like a spinning sphere of charge. It has a quantity of angular momentum (its "spin") and an associated magnetism, and in an ambient magnetic field its energy depends on how its spin vector is oriented. Every electron exists in one of two states, namely spin-up and spin-down, with its spin either +1/2 or -1/2. In other words, an electron can rotate either clockwise or counterclockwise around its own axis with constant frequency. Two spins can be "entangled" with each other, so that neither is distinctly up nor down, but a combination of the two possibilities.
III. CANCER CELLS
Cancer cells are somatic cells which have grown to an abnormal size. Cancer cells have a different electromagnetic pattern compared with normal cells. For many types of cancer, it is easier to treat and cure the cancer if it is found early. There are many different types of cancer, but most cancers begin with abnormal cells growing out of control, forming a lump that is called a tumor. The tumor can continue to grow until the cancer begins to spread to other parts of the body. If the tumor is found when it is still very small, curing the cancer can be easy. However, the longer the tumor goes unnoticed, the greater the chance that the cancer has spread, which makes treatment more difficult. A tumor developed in the human body is removed by performing surgery. Even if a single cancer cell is present after the surgery, it can again develop into a tumor. In order to prevent this, an efficient method for detecting the cancer cells is required.
Here, in this paper, we introduce a new method for detecting cancer cells after surgery. This accurate detection of the existence of cancer cells at the beginning stage itself ensures the prevention of further development of the tumor.


IV. PROPERTIES OF SPIN
- The spin of an electron has a great dependence on an external magnetic field. If a polarized electron is exposed to a magnetic field, its spin orientation gets varied. This behavior of the electron plays a crucial role in the approach presented in this paper.
- Polarization of the spin of electrons is possible, so that the spins of all electrons orient in a particular direction.
- The electron spin can be detected by using devices like polarimeters.
- The electron spin can be controlled electrically just with the application of a few volts.

V. DETECTION OF CANCER CELLS
It is very important that cancer cells be diagnosed at the earliest stages, failing which they develop rapidly into acute tumors. The following setup is used for the detection of cancer cells in a human body:
- Polarized electron source
- Magnetic field
- Spin detector

A. Polarised Electron Source
A beam of electrons is said to be polarized if their spins point, on average, in a specific direction. There are several ways to impose spin on electrons and to control them. The requirement for this paper is an electron beam with all its electrons polarized in a specific direction. The following are the ways to meet the above-said requirement:
- Photoemission from negative-electron-affinity GaAs
- Chemi-ionization of optically pumped metastable Helium
- An optically pumped electron spin filter
- A Wien-style injector in the electron source

Fig. 1. Optical spin filter

A spin filter is a more efficient electron polarizer which uses an ordinary electron source along with a gaseous layer of Rb (Rubidium). Free electrons diffuse under the action of an electric field through Rb vapour that has been spin-polarized by optical pumping. Through spin-exchange collisions with the Rb, the free electrons become polarized and are extracted to form a beam. To reduce the emission of depolarizing radiation, N2 is used to quench the excited Rb atoms during the optical pumping cycle.

B. Spin Detectors
There are many ways by which the spin of the electrons can be detected efficiently. The spin polarization of the electron beam can be analyzed by using:
- a Mott polarimeter
- a Compton polarimeter
- a Moller-type polarimeter

Typical Mott polarimeters require electron energies of ~100 keV. To achieve these energies, high voltages are needed. Because of these voltage requirements, the spacing between electrodes needs to be sufficient to prevent electric discharge. The designed polarimeter uses energies of ~25 keV, requiring a smaller potential between the two hemispheres and hence a smaller overall design. More importantly, this device has a higher efficiency, defined to be I/Io, because of its smaller size. Increased efficiency improves the figure of merit.

Fig. 2. Schematic diagram of a Mott Polarimeter

Fig. 3. (a) The Mini Mott polarimeter; (b) Cross-sectional view of a Mott Polarimeter

The Mini Mott polarimeter has three major sections: the electron transport system, the target chamber, and the detectors. The first section the electrons enter is the transport system. An einzel lens configuration (an einzel lens is a charged-particle lens that focuses without changing the energy of the beam) was used here. Two sets of four deflectors were used as the first and last lens elements in this system. A cross-sectional view of the entire polarimeter is shown in Fig. 3(b); the transport system is represented by the light blue cylinders.
The electrons next enter the target chamber. The chamber consists of a cylindrical target within a polished stainless steel hemisphere. A common material used for the high-Z nuclei target is gold. Careful consideration must also be employed in choosing the material; low-Z nuclei help minimize unwanted scattering, so aluminum was chosen. Scattered electrons then exit the target chamber and are collected in the detectors. The target chamber is shown in detail in Fig. 4, with the transport system removed; the target is shown in red and the inner hemisphere in orange. Thus there are many methods for detecting the spin polarization of electrons.

Fig. 4. Cross-sectional view of the Target chamber in a Mott Polarimeter

C. External Magnetic Field
An external magnetic field is required during this experiment. The magnetic field is applied after the surgery has been performed: first to an unaffected part of the body, and then to the part of the body that has undergone surgery.

VI. THE SPINTRONIC SCANNER
This technique using spintronics is suggested by us to identify tumor cells after surgery.

A. Experimental Setup
The procedure for this experiment is as follows:
- After surgery and the removal of the tumor, the patient is exposed to a strong magnetic field.
- Now the polarized electron beam is applied over the unaffected part and the spin orientation of the electrons is determined using a polarimeter.
- Then the same polarized beam is targeted at the affected part of the body, and from the reflected beam the change in spin is determined.

Fig. 5. Mott polarimeter and optical spin filter experimental arrangement

Based on these two values of spin orientation, the presence of tumor cells can be detected even if they are very few in number. Hence, we suggest this method for detection purposes. A detailed view of this innovative approach is given as follows.

a) Spin Orientation of the Unaffected Part of the Body on Applying an External Magnetic Field

Fig. 6. Spin orientation of the unaffected part of the body on applying an external magnetic field

It has already been mentioned that a magnetic field can easily alter the polarization of electrons. When the magnetic field is applied to the unaffected part of the human body, the normal somatic cells absorb the magnetic energy and retain it.


b) Determining the Spin Orientation

Fig. 7. Determination of the spin orientation

When the electrons are incident on the cells, the magnetic energy absorbed by the cells alters the spin orientation of the electrons. These electrons are reflected, and the reflected beam is detected by the Mott polarimeter. The change in spin orientation of the electrons is measured as Sx.

c) Spin Orientation of the Surgery-Undergone Part of the Body on Applying an External Magnetic Field
In the part of the body that has undergone surgery, an external magnetic field is applied. The cancer cells which are present, if any, will absorb more magnetic energy than the normal cells (denoted by a red outline in Fig. 8), since they differ in their electromagnetic pattern.

Fig. 8. Spin orientation of the surgery-undergone part of the body on applying an external magnetic field

d) Determining the Spin Orientation
Now a polarized electron beam is incident on the part of the body that underwent surgery. The magnetic energy absorbed by the cells alters the spin orientation of the electron beam. Since cancer cells absorb more magnetic energy, the change in orientation caused by them is also greater. If no cancer cells are present, the amount of change is equal to that of the previous case. The change in spin is measured by the polarimeter as Sy.

Fig. 9. Determination of the spin orientation

VII. INFERENCE
If the change in spin in the unaffected part of the body is the same as that of the surgery-undergone part, i.e.
If Sx = Sy,
then there are no cancer cells in the surgery-undergone part of the body, and all the cancer cells have been removed by the surgery.
If the change in spin in the unaffected part is not equal to the change in the surgery-undergone part of the body, i.e.
If Sx is not equal to Sy,
then there are some cancer cells in the surgery-undergone part of the body, and the cancer cells have not been completely removed by the surgery.
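The decision logic reduces to a simple comparison of the two polarimeter readings; a toy Python sketch (our illustration only; the tolerance parameter is a hypothetical allowance for polarimeter noise, since the text above states exact equality):

    def residual_cancer_cells(sx, sy, tolerance=0.0):
        # sx: change in spin over the unaffected part; sy: over the operated part.
        # Returns True when the readings differ, i.e. cancer cells remain.
        return abs(sx - sy) > tolerance

    print(residual_cancer_cells(0.42, 0.42))  # False: region is clear
    print(residual_cancer_cells(0.42, 0.57))  # True: residual cells indicated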
VIII. FUTURE SCOPE
In this paper, the concept is that the magnetic field is applied to the unaffected part and to the surgery-undergone part of the body, and the difference in spin is measured to ascertain the presence of cancer cells. In the future, this idea may be extended to the application of the magnetic field not just to the surgery-undergone part of the body, but to all other suspected cancerous parts, thus detecting the spread of cancer as early as possible.

IX. CONCLUSION
Thus the usage of electron spin in cancer cell detection paves the way for the entry of spintronics into the medical field. The approach suggested in this paper acts as an efficient way of detecting cancer cells after surgery, thereby preventing the further growth of tumor cells.
REFERENCES
[1] S. A. Wolf, A. Y. Chtchelkanova and D. M. Treger, "Spintronics: A retrospective and perspective," IBM Journal of Research and Development - Spintronics, vol. 50, issue 1, January 2006.
[2] G. W. Hinkel, D. Farrell, S. S. Hook, N. J. Panaro and K. Ptak, "Cancer therapy through nanomedicine," pp. 6-12, June 2011, vol. 5, issue 2.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.275-277.

Swarm Robotics - To Combat Cancer


Divya Devi Mohan(1), Reenu Aradhya Surathy(2) and G. Kalarani(3)
(1,2) Student, Prefinal Year; (3) Asst. Prof & HOD
Dept of EIE, Jaya Engg College
Email: dvmhn07@gmail.com, reenu91@gmail.com, kalaaravindnithish@yahoo.co.in
Abstract - Science makes itself, grows itself and is transformed by its own upheavals and surprises. But still, combating cancer has been a great challenge to science. Cancer research has turned into something like a running hunt.
According to research, nanobots (nanotechnology + robots) are to be employed in killing the cancer-causing cells by a technology called RNA interference. The above-mentioned technology sneaks in, evades the immune system, treats and exits.
Nanobots target the messenger RNA (mRNA), which is the main cause of the cancer growth, to stop the production of protein in the cancer cell, thus starving the cancer of its source of survival. Many of these nanobots can be infused into the blood stream, and they'll keep on raiding and killing the cancerous cells and stopping tumors.
This paper takes this innovative technology into account and combines it with swarm robotics. By this, the treatment will be even more effective.
Keywords - mRNA, NanoRobots, Swarm Intelligence, Swarm Robotics.

I. INTRODUCTION
Chemotherapeutic cancer treatment necessitates making complex, and often life-critical, decisions about the best way to administer cytotoxic (i.e. cell-destroying) drugs. Typically, patient information is incomplete and noisy, the range of treatment options is subject to complex constraints, and the aims of treatment are multi-objective. For these reasons, medical decision support is a rich application area for evolutionary algorithms and related techniques.

II. THE NEW APPROACH
Swarm robotics is a new approach to the coordination of multirobot systems consisting of large numbers of mostly simple physical robots. It is supposed that a desired collective behavior emerges from the interactions between the robots and the interactions of the robots with the environment. This approach emerged from the field of artificial swarm intelligence, as well as from biological studies of insects, ants and other fields in nature where swarm behavior occurs.
Swarm robotics is a relatively young area of research, which is growing rapidly and comprehensively. As with many technologies, there is no formal definition of swarm robotics that engenders universal agreement; however, there are some characteristics that have been generally accepted. These include robot autonomy; decentralized control; large numbers of member robots; collective emergent behavior; and local sensing and communication capabilities. From our security perspective it is reasonable to consider swarm robotics as a special type of computer network with the aforementioned characteristics.
Swarm robotics differs from more traditional multi-robot systems in that its command and control structures are not hierarchical or centralized, but fully distributed, self-organized and inspired by the collective behavior of social insect colonies and other animal societies.

III. SWARM INTELLIGENCE
Swarm intelligence is the collective behaviour of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence.
Swarm intelligence is the term used to denote artificial intelligence systems where the collective behavior of simple agents causes coherent solutions or patterns to emerge. This has applications in swarm robotics. This type of artificial intelligence is used to explore distributed problem solving without a centralized control structure.
Further, the nanobots are programmed in such a way that they maintain a trace of the areas of the body which they have cured. Thus swarm intelligence, when incorporated in nanobots, facilitates collective interaction among the nanobots, identifies cured and uncured cancerous cells, and helps in effective treatment and complete eradication of the cancer.


IV. ALGORITHM
Ant colony optimization (ACO) is a class of optimization algorithms modeled on the actions of an ant colony. ACO methods are useful in problems that need to find paths to goals. Artificial "ants", i.e. simulation agents, locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated ants similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee.
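A toy Python sketch of the pheromone feedback loop described above (illustrative only: two candidate paths, proportional path choice, evaporation, and deposits inversely proportional to path length):

    import random

    # Two candidate paths with different lengths; each ant chooses a path in
    # proportion to its pheromone, and shorter paths receive more reinforcement.
    lengths = [1.0, 2.0]
    pheromone = [1.0, 1.0]
    EVAPORATION = 0.1

    for _ in range(200):
        r = random.uniform(0, sum(pheromone))
        path = 0 if r < pheromone[0] else 1
        pheromone = [(1 - EVAPORATION) * p for p in pheromone]  # evaporation
        pheromone[path] += 1.0 / lengths[path]                  # deposit
    print(pheromone)  # pheromone concentrates on the shorter path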
V. NANO ROBOTS
Nanotechnology as a whole is fairly simple to understand, but developing this universal technology into a nanorobot has been slightly more complicated. Nanorobots are essentially an adapted machine version of bacteria. They are designed to function on the same scale as both bacteria and common viruses, in order to interact with them and repel them from the human system.
Since they are so small that you can't see them with your naked eye, they will also possibly be used to perform "miracle" functions. Nanobots measure more like six atoms across, but they are far more complicated in design and need to be engineered in such a way that they are autonomous.
The ideal nanobot consists of a transporting mechanism, an internal processor and a fuel unit of some kind that enables it to function. The main difficulty arises around this fuel unit, since most conventional forms of robotic propulsion can't be shrunk to nanoscale with current technology. Scientists have succeeded in reducing a robot to five or six millimeters, but this size still technically qualifies it as a macro-robot.
Experts believe that silicon might make the ideal material, especially since it has been traditionally used for delicate electronics, particularly small computer parts. Microscopic silicon components called transducers have so far been successfully built into nanorobot legs. By equipping the nanobots with several sets of fast-moving legs and keeping the body low to the ground, they can create a quick, efficient machine that would also be suitably shaped for introduction into human blood vessels to perform functions such as clearing away built-up cholesterol or repairing tissue damage.
They could rebuild tissue molecules in order to close a wound, or rebuild the walls of veins and arteries to stop bleeding and save lives. They could make their way through the bloodstream to the heart and perform heart surgery molecule by molecule, without many of the risks and discomfort associated with traditional open-heart operations.
Likewise, researchers hope that nanorobots will have many miraculous effects on brain research, cancer research, and finding cures for difficult diseases like leukemia and AIDS.
VI. SWARM INTELLIGENCE SYSTEM
A targeted nanoparticle, used as an experimental therapeutic and injected directly into a patient's bloodstream, can traffic into tumors, deliver double-stranded small interfering RNAs (siRNAs), and turn off an important cancer gene using a mechanism known as RNA interference (RNAi).
Moreover, the team provided the first demonstration that this new type of therapy, infused into the bloodstream, can make its way to human tumors in a dose-dependent fashion, i.e., a higher number of nanoparticles sent into the body leads to a higher number of nanoparticles in the tumor cells.

Fig. 1. This picture illustrates how nanorobots are attached to the blood cells

Fig. 2. Illustration of the construction of a nanorobot with a micro camera and swimming tail


The electron micrograph shows the presence of numerous siRNA-containing targeted nanoparticles both entering and within a tumor cell. This demonstrates the feasibility of using both nanoparticle- and RNAi-based therapeutics in patients, and opens the door for future "game-changing" therapeutics that attack cancer and other diseases at the genetic level.
RNAi is a new way to stop the production of proteins. The vulnerable areas of a protein may be hidden within its three-dimensional folds, making it difficult for many therapeutics to reach them. In contrast, RNA interference targets the messenger RNA (mRNA) that encodes the information needed to make the protein in the first place. This means every protein is now "druggable", because its inhibition is accomplished by destroying the mRNA. And we can go after mRNAs in a highly designed way, given all the genomic data that are and will become available.
More to the point, the mRNA fragments found were exactly the length and sequence they should be if they had been cleaved in the spot targeted by the siRNAs; this proves that the RNAi mechanism can happen using siRNA in a human. There are many cancer targets that can be efficiently blocked using siRNA.

Fig. 3. Figure illustrating the nanorobots in the blood stream

VII. NANO ROBOTS IN BLOOD STREAM
This is a clean, safe way to deliver RNAi sequences to cancerous cells. RNAi (ribonucleic acid interference) is a technique that attacks specific genes in malign cells, disabling functions inside them and killing them.
The 70-nanometer "attack bots", made with two polymers and a protein that attaches to the cancerous cell's surface, carry a piece of RNA called small-interfering RNA (siRNA), which deactivates the production of a protein, starving the malign cell to death. Once it has delivered its lethal blow, the nanoparticle breaks down into tiny pieces that are eliminated by the body in the urine.

VIII. FUTURE
The most amazing thing is that you can send in as many of these soldiers as you want, and they will keep attaching to the bad guys, killing them left, right, and center, and stopping tumors. The more that are put in, the more end up where they are supposed to be, in tumour cells. The nanoparticle sneaks in, evades the immune system, delivers the siRNA, and the disassembled components exit out.
Both miniaturization and cost are key factors in swarm robotics. These are the constraints in building large groups of robots; therefore the simplicity of the individual team member should be emphasized. This should motivate a swarm-intelligent approach to achieve meaningful behavior at the swarm level, instead of the individual level.

REFERENCES
[1] G. Beni and J. Wang, "Swarm Intelligence in Cellular Robotic Systems," Proceedings of the NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26-30, 1989.
[2] M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, 2004.
[3] P. Couvreur and C. Vauthier, "Nanotechnology: Intelligent Design to Treat Complex Disease," 2006.
[4] G. M. Patel, G. C. Patel, R. B. Patel, J. K. Patel and M. Patel, "Nanorobot: A versatile tool in nanomedicine," 2010.
[5] Douglas Mulhall, Our Molecular Future: How Nanotechnology, Robotics, Genetics and Artificial Intelligence Will Transform Our World, Prometheus Books, 2002.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.278-282.

Remote Testing of Instruments using Image Processing


M. Sankeerthana(1), V. Sindhusha(2) and V. Murugan(3)
(1,2) Student, Final Year; (3) Assistant Professor
EIE, Jaya Engineering College, Thiruninravur.

Abstract - In this world of high-speed manufacturing industry, testing and measurement using the Internet and machine vision play a major role in producing good-quality products. The term "machine vision" describes work that provides a practical and affordable visual sense for any type of machine that works in real time, and is concerned with utilizing existing technology in the most effective way to endow a degree of autonomy in a specific application. Since many instruments are provided with communication interfaces, one can build a remote testing system from the actual hardware and a computing unit with Internet connection capabilities. In this work on remote testing of instruments, after presenting a simple client-server architecture, we show how the use of the mobile-agent technique solves most of the security issues and works efficiently. We use a high-resolution image-capturing device such as a CCD camera, interfaced with the computer, so that real-time images of the meter can be acquired. These images are then processed to analyze the position of the needle under different inputs, and with the data obtained we are able to determine the percentage of error and conclude whether the meter has passed the required levels of accuracy. The success of the project shows that such automation can be used to reduce inspection time and also to increase accuracy.

I. INTRODUCTION
Machine vision is an important sensor technology with
potential application in many industrial operations. Many
of the current application of machine vision are in
inspection, robotics and measurement. Machine vision
system may allow us for complex inspection for close
dimensional tolerance and improved recognition and part
location capabilities and increased speed.
The Recent development of PC-driven instruments and
the evolution of the networking capabilities of PCs over a
world wide network (Internet) have led to the realization
distributed measurement systems for industrial
applications.
The remote testing of instruments is an interesting
application that has not yet been exploited mainly due to
the security issues. In general, the tachometers are tested
manually; it leads some observational and environmental
errors. Machine vision is concerned with the sensing of
vision data and its interpretation by the computer .The
vision system consist of the Charge Coupled Device

(CCD) camera and digitizing hardware, a digital computer


with hardware and software necessary to interface them.
This work uses the endless possibilities of machine vision
and TCP\IP Protocol to automate the remote test of
measuring instruments specialized in automotive industry.
Here the camera and virtual instrument environment with
Data Acquisition (DAQ) interface is used to acquire the
data required for purpose of testing and determination of
the quality of product. The DAQ card, which we used, is
PCI 6024E National Instruments.
It consists of eight analog input/output channels, and eight
digital input/output channels with in build function
generator The product here is an analog meter where the
movement of the needle is used to determine the
measured value. The product is put under test according
to different conditions to check the functionality and the
accuracy within the required values. One of the main
featured tests is the calibration test, which checks the
needle position for various inputs given to the meter. For
the purpose of remote testing of measuring instruments
we have taken a tachometer as the test subject of our
work. The work uses the resource of machine vision
system for the purpose of calibration test done on the
analog tachometer depending on the needle deflection.
This system solely depends on the machine vision to
extract the details in accuracy, thus the accuracy and
perfection of this project depends on the type of the
machine vision system. Any clinch in the working of the
machine vision system will cause error in the readings
taken.
The tachometer is made to deflect to various levels of
input given by the computer from the server side using
Local Area Network (LAN) by TCP\IP Protocol. These
deflections are aptured at the right time using the CCD
camera. The processing step uses the Image Analysis
function, which uses pattern recognition technique for the
identification of the needle and its orientation.
The next step is to calculate the angle of deflection with
respect to the horizontal level. This process is carried out
for various values of inputs given by the user or generated
by the virtual instrument environment. Then these data
are compared to the standard values for the purpose of
error calculation. Then it decides whether the meter is fit
for commercial use or defective. In continuous production

278

Proceedings of the National Conference on Communication Control and Energy System

In a continuous production industry, it is necessary to have two or more personnel working in shifts, supervised by more senior personnel, to ensure that the products are thoroughly tested and certified before release. Mistakes here can cause loss of accuracy and the possibility of damaged goods floating in the market, which affects the name and value of the company. It is therefore in the company's interest to look for a completely automated system which can guarantee that the final product is tested and certified as accurately as possible.
The main objective of the project is to provide an automation solution using machine vision and a Local Area Network (LAN) for the remote testing of analog deflection-type meters. The project helps replace the usual testing technique, which in earlier days was carried out by a human being at the place of production. The human factor played an important role in the level of accuracy and the time taken for the complete test: owing to stress, strain and constant working environments, both the level of perception and the time taken for the proper testing procedure were liable to change. The objective of this project is to automate the entire procedure, from the initial input to the final decision of accepting or rejecting the meter, and also to maintain a database storing the batch and serial number and the time and date of testing of each particular device, so that these records can be used for further reference. The system developed finds its application in the large-scale testing of analog meters in the automotive industry; it can be extended to a wide area network, and digital meters can also be tested using the same concept.
II. IMAGE PROCESSING
A. Image Acquisition
The sensing and digitizing function involves the input of vision data by means of a camera focused on the region of interest (ROI). Special lighting techniques are frequently used to obtain an image of sufficient contrast and clarity. The digital image is called a frame of vision data, and is frequently captured by a hardware device called a frame grabber; these devices are capable of digitizing images at a rate of over 30 frames/sec. The frame consists of a matrix of data representing projections of the scene sensed by the camera, whose elements are called picture elements, or pixels. The digitized image matrix for each frame is stored and then subjected to image processing and analysis for data reduction and interpretation of the image. These steps are required to permit real-time application.
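As a concrete illustration only (the paper's own implementation is built in LabVIEW with IMAQ, not in the code below), the acquisition and grayscale data-reduction steps can be sketched in Python with the OpenCV library; the camera index, frame size and output file name are assumptions:

import cv2

# Open the first attached camera and request a 640x480 frame,
# matching the frame size quoted later in the paper.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()          # grab one frame of vision data
if not ok:
    raise RuntimeError("camera returned no frame")

# Image data reduction: collapse the colour frame to an 8-bit
# grayscale pixel matrix (see Section II-C below).
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.imwrite("tachometer_frame.png", gray)
cap.release()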
B. Image Processing and Analysis
The acquired image, after conversion into digital data, is sent for further processing. The processing involves one or all of the following steps:
1. Image data reduction.
2. Feature extraction.
3. Object recognition.
C. Image Data Reduction
The main objective of this process is to reduce a complex image to a simpler one. This is achieved by digital conversion, which converts the image into an 8-bit grayscale image. The process results in a large data reduction without losing the essential properties of the image, and is essential for the subsequent processing steps.
D. Feature Extraction
In machine vision applications, it is required to distinguish the main object from the rest of the image. This is accomplished by means of features that uniquely characterize the object. The main features of an image include its contrast, colour and brightness; the principal feature that we utilize is the grayscale level of the image. This feature is used to identify the object, or part of the object, and to determine its size, location and orientation.
E. Object Recognition
The next step in image data processing is to identify the object that the image represents, using the extracted feature information. The recognition algorithm must be powerful enough to uniquely identify the object. The main object recognition techniques are:
1. Template matching, a subset of pattern recognition techniques that serves to classify objects in an image. Here the object is matched against a model template previously obtained as a reference image; it is applicable when the number of model templates required is small.
2. The structural technique, a pattern recognition technique that considers the relationships between features or edges of an object.
F. Analysis of the Image using Pattern Matching
Pattern matching locates regions of a grayscale image that match a predetermined template, and finds template matches regardless of poor lighting, blur, noise, or shifting or rotation of the template. Shape matching searches for the presence of a shape in a binary image and specifies the location of each matched shape; IMAQ Vision (Image Acquisition Vision Builder) detects the shape even if it is rotated or scaled. Binary shape matching is performed by extracting, from a template image, parameters that represent the shape of the image and are invariant to the rotation and scale of the shape. These parameters are then compared with a similar set of parameters extracted from other images. Binary shape matching has the benefit of finding features
regardless of size and orientation. The analysis of the needle image is done using pattern-recognition template-matching techniques, using the features of the template image (most commonly shape, area, etc.) for identification. The system compares the model images of the needle tip and the pivot point, stored in separate image files. The comparison and classification functions are performed using a software program developed in LabVIEW.
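For illustration, the needle-location and angle step can be sketched with OpenCV template matching as below; the template file names are placeholders, and the actual system is a LabVIEW/IMAQ program rather than this Python code:

import cv2
import math

scene = cv2.imread("tachometer_frame.png", cv2.IMREAD_GRAYSCALE)
tip_tpl = cv2.imread("needle_tip_template.png", cv2.IMREAD_GRAYSCALE)
pivot_tpl = cv2.imread("pivot_template.png", cv2.IMREAD_GRAYSCALE)

def locate(template):
    # Return the (x, y) centre of the best template match in the scene.
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    h, w = template.shape
    return (max_loc[0] + w / 2.0, max_loc[1] + h / 2.0)

tip = locate(tip_tpl)
pivot = locate(pivot_tpl)

# Deflection angle with respect to the horizontal; image y grows
# downwards, hence the sign flip on the y difference.
angle = math.degrees(math.atan2(pivot[1] - tip[1], tip[0] - pivot[0])) % 360
print("needle deflection: %.1f degrees" % angle)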
III. METHODOLOGY

Fig. 1. Block diagram of remote testing system

The recent development of PC-driven instruments and the evolution of the networking capabilities of PCs over a worldwide network have led to the realization of distributed measurement systems for industrial applications. The remote testing of instruments is an interesting application that has not yet been fully exploited, mainly due to security issues. Machine vision is an important sensor technology with potential application in many industrial operations; many of its current applications are in inspection and robotics. A machine vision system allows complex inspection to close dimensional tolerances, with improved recognition and part-location capabilities and increased speed. The purpose of the vision system is to represent the input data as images stored in the form of feature values, which can subsequently be compared against the corresponding feature values from images of known objects. Machine vision is concerned with the sensing of vision data and its interpretation by the computer. The vision system consists of the camera and digitizing hardware, and a digital computer with the hardware and software necessary to interface them. The basic block diagram of the machine vision system consists of three functions. They are:
1. Image acquisition.
2. Image processing and analysis.
3. Application.
The calibration testing of a tachometer involves complex procedures if done manually, and the trustworthiness of the results relies on various conditions that are subject to change each time the test is performed. The testing methodology we have chosen is based entirely on LabVIEW, which is acclaimed for its fast response while testing the tachometer, as well as for its accuracy and precision; a user-friendly interface can also be modeled. The different processes involved in the method we adopted are confined to the following steps:

1. Implementation of client-server architecture.
2. Activation of tachometer.
3. Activation of camera.
4. Image acquisition.
5. Image processing.
6. Angle calculation.
7. Error calculation.
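A minimal sketch of step 1, the client-server exchange over the LAN via TCP/IP, is given below using Python's standard socket module; the host, port, message format and the measure_angle() helper are illustrative assumptions, not the paper's code:

import socket

def run_server(host="0.0.0.0", port=5000, frequency_hz=34):
    # Server side: push one test frequency and collect the result.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(("%d\n" % frequency_hz).encode())
            angle = float(conn.recv(64).decode())
            print("client reported %.1f deg for %d Hz" % (angle, frequency_hz))

def run_client(server_ip, port=5000):
    # Client side: receive the test input, drive the DAQ and camera
    # (measure_angle is a placeholder for that chain), reply with the angle.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((server_ip, port))
        frequency_hz = int(cli.recv(64).decode())
        angle = measure_angle(frequency_hz)
        cli.sendall(("%f\n" % angle).encode())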
Tachometer testing can be accomplished by providing three inputs: voltage, frequency and ground. The tachometer has three wires, colored black, orange and red: voltage is supplied through the red wire, the frequency signal through the orange wire, and the black wire is grounded. The voltage provided to the tachometer is in the range 10-14 V. The frequency input is a square pulse generated by the user and given to the DAQ card, whose output is connected back to the tachometer. With all inputs provided, the tachometer starts to indicate rpm over the range 0-12,000. The deflection of the tachometer needle varies linearly with the applied frequency. At first there may be some fluctuation or time delay in the needle while it shows the reading, but this can be corrected by adjusting the sample rate and sampling frequency so that they satisfy the Nyquist criterion

fs ≥ 2 fm

where fs is the sampling frequency and fm is the message (modulating) frequency; i.e., the sampling frequency should be at least twice the modulating frequency.
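The square-pulse generation described above can be sketched with the modern nidaqmx Python API; the paper drove an NI PCI-6024E from LabVIEW, so the device/counter name "Dev1/ctr0" and the settling time below are assumptions:

import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

def drive_tachometer(frequency_hz, settle_seconds=10):
    # A counter output generates the continuous square pulse train
    # that is wired to the tachometer's frequency (orange) input.
    with nidaqmx.Task() as task:
        task.co_channels.add_co_pulse_chan_freq(
            "Dev1/ctr0", freq=frequency_hz, duty_cycle=0.5)
        task.timing.cfg_implicit_timing(
            sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        time.sleep(settle_seconds)   # let the needle settle before imaging
        task.stop()

drive_tachometer(34)                 # first test point of Table 1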
The camera used here to acquire the image of the meter dial is a high-resolution, high-speed monochrome camera, activated by a 12 V input supply. The monochrome camera has two terminals: one is provided with the 12 V power supply, and the other is interfaced directly with the monitor through IMAQ, so that whenever the camera acquires an image of the dial the data goes directly to the monitor for further processing. The main function of the camera is thus to acquire the image and send it directly to the monitor. A precaution to be taken while using the monochrome camera concerns the lighting: with improper lighting or intensity, the images acquired by the camera may change, so proper illumination must be ensured. For this purpose we use a round fluorescent tube mounted directly in front of the camera and the tachometer, so that a proper and accurate image can be obtained by the camera. The adjustment of the camera
should be such that it is mounted on a stand made of steel or wood. The stand used allows the camera height to be varied so that a proper image can be acquired, and the distance between the camera and the tachometer is adjustable. At the front end of the camera there is a round movable mount; by varying this, a properly focused image can be acquired. The sensing and digitizing function involves the input of vision data by means of a camera focused on the region of interest, and special lighting techniques are frequently used to obtain an image of sufficient contrast and clarity.

IV. RESULT ANALYSIS


The standard frequency from the server side is used on the client side to generate the frequency using a frequency generator. The output of the frequency generator is given to the tachometer through the DAQ interface. The deflection of the needle is measured using the CCD camera and IMAQ software. Standard angles are calculated for an error-free meter and stored in a text document; during the testing process, the angles for the meter under test are stored in a separate text document. Using these two documents, the performance of a particular meter can be calculated; the average error is computed from the per-point percentage errors, as illustrated below.

After proper adjustment of the camera, frequency and sampling frequency, the needle shows its deflection on the tachometer. Keeping the monochrome camera at a proper distance and under proper lighting, an image of the deflected needle of pixel size 640×480 is acquired. The captured image is then processed by the pattern matching technique explained before, the deflection of the needle is calculated with respect to the horizontal, and various sets of readings are obtained.
The standard values are obtained from an error-free (standard) meter and stored in a text file along with the corresponding rpm values to which they relate; these standard values are acquired using the same image processing setup used for the testing procedure. For the meter under test, the deflection values obtained by pattern matching are stored in a buffer and then compared with the predetermined standard values; in this way the error of the tachometer is found.

Table 1

S.No  Input frequency  Standard Angle  Observed Angle  Percentage of error
1     34               342             340             -0.58
2     51               324             326             0.61
3     68               307             305             -0.65
4     85               290             288             -0.68
5     102              273             270             -1
6     119              257             256             -0.38
7     136              241             240             -0.41
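The percentage-of-error column above is consistent with computing (observed - standard) / standard x 100 for each test point, with the average error taken as the mean of these values; this formula is inferred from the tabulated numbers, since the paper's equation is not reproduced here. A short Python check:

# Data from Table 1.
standard = [342, 324, 307, 290, 273, 257, 241]
observed = [340, 326, 305, 288, 270, 256, 240]

# Per-point percentage error, then the average error over all points.
errors = [(o - s) / s * 100 for s, o in zip(standard, observed)]
average_error = sum(errors) / len(errors)

for i, e in enumerate(errors, start=1):
    print("point %d: %+.2f %%" % (i, e))
print("average error: %+.2f %%" % average_error)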

If the error is within the limit specified by the design department, then that particular meter is accepted; otherwise it is rejected. The limit should lie within the tolerable range. By calculating this error, the production facility comes to know whether the meter meets the standard. This testing quantifies the quality of the product, which in turn reflects the quality of the organization, and the performance of any type of meter can be calculated in the same way.
V. CONCLUSION
The image is acquired with the proper orientation and lighting, and the testing of the tachometer is carried out by the developed program. The machine vision system thus eliminates the observational and environmental errors that occur during a manual test, and it increases the speed of production and the quality of manufacturing during the batch production process.

Fig. 2. The basic block diagram of machine vision system

This project can be implemented on a conveyor system which holds the analog meters; the conveyor system can be controlled by the program through a stepper motor. The system finds application in the analog meter manufacturing industries where, after detection, an erroneous or faulty tachometer is sent back through the process to overcome the defect, or else the meter is dispatched for packing.
This project also finds immense scope for application in the testing of any type of analog meter, and it provides room for future extension, such as access over a WAN connection so that the whole testing process can be carried out far from the manufacturing area; the remote-access facilities can likewise be used to acquire the image and the report through the Internet. Using these features, one can access and operate the system, and study the report, from any PC with Internet access.

REFERENCES
[1] Matteo Bertocco, Sandro Cappellazzo, Alessio Carullo, Marco Parvis, Alberto Vallan, "Virtual Environment for Fast Development of Distributed Measurement Applications", IEEE Transactions on Instrumentation and Measurement, Vol. 52, pp. 681-685, June 2003.
[2] Giovanni Moschioni, "A Virtual Instrumentation System for Measurements on the Tallest Medieval Bell Tower in Europe", IEEE Transactions on Instrumentation and Measurement, Vol. 52, pp. 693-702, June 2003.
[3] F. Toran, D. Ramirez, S. Casans, A. E. Navarro, J. Pelegri, "Distributed Virtual Instrument for Water Quality Monitoring Across the Internet", in Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Baltimore, MD, USA, May 2000.
[4] F. Toran, D. Ramirez, S. Casans, "Programming Internet-Enabled Virtual Instruments", LabVIEW Tech Resource, 7(2), pp. 22-23, 1999.
[5] Giovanni Bucci and Carmine Landi, "A Distributed Measurement Architecture for Industrial Applications", IEEE Transactions on Instrumentation and Measurement, Vol. 52, No. 1, Feb 2003.
[6] D. Georges, E. Benoit, A. Chovin, D. Koenig, B. Marx, and G. Mauris, "Distributed Instruments for Control and Diagnosis Applied to a Water Distribution System", in Proc. IMTC, Vol. 1, pp. 565-569, May 2002.
[7] G. Bucci and C. Landi, "A Multi-DSP Based Instrument on a VXI C-Size Module for Real Time Measurements", IEEE Transactions on Instrumentation and Measurement, Vol. 49, pp. 884-889, Aug 2000.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.283-286.

Application of Nanotechnology in Solar Cell


R. Adarsh1, H. Prabhu2 and R. Bharath Kumar3
1,2Student, Pre-Final Year, 3Lecturer
Jaya Engineering College
Email: 1Adarsh_falcon@ymail.com, 2arik7prabhu@gmail.com, 3bharathkumar_86@yahoo.co.in
Abstract - This paper describes nanotechnology and its applications in the current world. The main objective of our paper is to present the use of nanomaterials in solar cells. Current solar power technology has little chance to compete with fossil fuels or large electric grids: today's solar cells are simply not efficient enough, and are currently too expensive to manufacture, for large-scale electricity generation. However, potential advancements in nanotechnology may open the door to the production of cheaper and somewhat more efficient solar cells. The new plastic solar cells utilize tiny nanorods dispersed within a polymer. To store the incident light for a longer time, organic layers can be used in the solar panel, which helps bring a larger number of electrons into flow; in conventional solar cells, by contrast, the full band-gap energy must be supplied by the incident sunlight for electrons to flow. With nanomaterials in solar cells, their efficiency is comparatively increased and their cost is decreased.

I. INTRODUCTION
Conventional solar cells have two main drawbacks: inefficiency and expensive manufacturing cost. The first drawback, inefficiency, is almost unavoidable with silicon cells. This is because the incoming photons, or light, must have the right energy, called the band gap energy, to knock out an electron. If a photon has less energy than the band gap energy, it passes through; if it has more energy than the band gap, the extra energy is wasted as heat. These two effects alone account for the loss of around 70 percent of the radiation energy incident on the cell.
Nanoparticles are motes of matter tens of thousands of times smaller than the width of a human hair. Because they are so small, a large percentage of a nanoparticle's atoms reside on its surface rather than in its interior, which means surface interactions dominate nanoparticle behavior. For this reason, nanoparticles often have different characteristics and properties than larger chunks of the same material.
Nano-structured layers in thin-film solar cells offer three important advantages. First, due to multiple reflections, the effective optical path for absorption is much larger than the actual film thickness. Second, light-generated electrons and holes need to travel over a much shorter path, so recombination losses are greatly reduced; as a result, the absorber layer in nano-structured solar cells can be as thin as 150 nm instead of the several micrometers of traditional thin-film solar cells. Third, the energy band gap of the various layers can be set to the desired design value by varying the size of the nanoparticles, allowing more design flexibility in the absorber of the solar cell.
Thin film is a more cost-effective solution and uses a cheap support onto which the active component is applied as a thin coating. As a result much less material is required (as little as 1% compared with wafers) and costs are decreased. Most such cells utilize amorphous silicon, which, as its name suggests, does not have a crystalline structure and consequently has a much lower efficiency (8%); however, it is much cheaper to manufacture.
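To make the band-gap argument concrete, the short sketch below evaluates photon energies against silicon's band gap of about 1.1 eV (a standard textbook value; the three wavelengths are chosen only for illustration):

# E_photon [eV] = 1239.84 / wavelength [nm]; photons below the band
# gap pass through, and any excess above it is lost as heat.
HC_EV_NM = 1239.84
E_GAP_SILICON = 1.1   # eV

for wavelength_nm in (400, 700, 1200):
    e_photon = HC_EV_NM / wavelength_nm
    if e_photon < E_GAP_SILICON:
        print("%d nm: %.2f eV -> passes through unused"
              % (wavelength_nm, e_photon))
    else:
        wasted = e_photon - E_GAP_SILICON
        print("%d nm: %.2f eV -> %.2f eV lost as heat"
              % (wavelength_nm, e_photon, wasted))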
II. NANOTECHNOLOGY BOOSTS SOLAR CELL PERFORMANCE
Current solar cells cannot convert all the incoming light into usable energy, because some of the light can escape back out of the cell into the air. Additionally, sunlight comes in a variety of colors, and a cell might be more efficient at converting bluish light while being less efficient at converting reddish light (see Figure 1). Lower-energy light passes through the cell unused. Higher-energy light does excite electrons to the conduction band, but any energy beyond the band gap energy is lost as heat. If these excited electrons aren't captured and redirected, they will spontaneously recombine with the created holes, and the energy will be lost as heat or light.

Fig. 1. Visible Light Spectrum

In conventional solar cells, ultraviolet light is either filtered out or absorbed by the silicon and converted into potentially damaging heat, not electricity. Ultraviolet light could, however, couple efficiently to correctly sized nanoparticles and produce electricity. Integrating a high-quality film of silicon nanoparticles, 1 nanometer in size, directly onto silicon solar cells improves power performance by 60
percent in the ultraviolet range of the spectrum. In bulk material the Bohr exciton radius is much smaller than the semiconductor crystal, but nanocrystal diameters are smaller than the Bohr radius. Because of this, the continuous band of electron energy levels can no longer be viewed as continuous: the energy levels become discrete, and quantum confinement is seen to operate. The difference of a few atoms between two quantum dots alters the band gap boundaries. Small nanocrystals absorb shorter wavelengths, or bluer light, whereas larger nanocrystals absorb longer wavelengths, or redder light. Changing the shape of the dot also changes the band gap energy level, as shown in Figure 2.

Fig. 2. The relationship of the size of a quantum dot to the light absorbed
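The size dependence shown in Figure 2 is commonly summarized by the effective-mass (Brus) approximation for a spherical dot of radius R; this is a standard result quoted for orientation, not a formula from this paper:

E_{\mathrm{QD}}(R) \approx E_{\mathrm{bulk}} + \frac{\hbar^{2}\pi^{2}}{2R^{2}}\left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right) - \frac{1.8\,e^{2}}{4\pi\varepsilon\varepsilon_{0}R}

The 1/R^2 confinement term dominates at small radii, so smaller dots have wider effective band gaps and absorb bluer light, while the final Coulomb term contributes only a small red shift.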

To make the improved solar cells, the researchers began by first converting bulk silicon into discrete nano-sized particles. Depending on their size, the nanoparticles fluoresce in distinct colors. Nanoparticles of the desired size were then dispersed in isopropyl alcohol and dispensed onto the face of the solar cell. As the alcohol evaporated, a film of closely packed nanoparticles was left firmly fastened to the solar cell. Solar cells coated with a film of 1-nanometer, blue-luminescent particles showed a power enhancement of about 60 percent in the ultraviolet range of the spectrum, but less than 3 percent in the visible range. Solar cells coated with 2.85-nanometer red particles showed an enhancement of about 67% in the ultraviolet range, and about 10% in the visible range of the spectrum. Ultrathin films of highly monodispersed luminescent Si nanoparticles are thus directly integrated on polycrystalline Si solar cells: films of 1 nm blue-luminescent or 2.85 nm red-luminescent Si nanoparticles produce large voltage enhancements, with improved power performance of 60% in the UV/blue range; in the visible, the enhancements are ~10% for the red and ~3% for the blue particles. Another potential feature of these solar cells is that the nanorods could be tuned to absorb various wavelengths of light, which could significantly increase the efficiency of the solar cell because more of the incident light could be utilized. Adding single-walled carbon nanotubes to a film made of titanium-dioxide nanoparticles has been shown to double the efficiency of converting ultraviolet light into electrons when compared with the performance of the nanoparticles alone.
Escape route: Electrons created in a nanoparticle-based
solar cell have to follow a circuitous path (red line) to
reach an electrode. Many don't make it, lowering the
efficiency of these cells. Researchers at Notre Dame have used carbon nanotubes to help the electrons reach the electrode, improving efficiency. Without the carbon
nanotubes, electrons generated when light is absorbed by
titanium-oxide particles have to jump from particle to
particle to reach an electrode. Many never make it out to
generate an electrical current. The carbon nanotubes
"collect" the electrons and provide a more direct route to
the electrode, improving the efficiency of the solar cells.
The CNTs provide a better ballistic electron-transport property along their axes, with high current-density capacity, on the surface of the solar cell without much loss. The alignment of the CNTs with the polymer composite substrate gives very high efficiency in photovoltaic conversion, and the polymer composite increases the contact area for better charge transfer and energy conversion. In this process the efficiency of the solar cell is about 50% at the laboratory scale. The optimum efficiency was achieved with aligned CNTs in a poly(3-octylthiophene) (P3OT) based PV cell; P3OT has improved properties due to the polymer-nanotube junctions within the polymer matrix. The high electric field within the nanotube splits the exciton into electrons and holes, and enables faster electron transfer with an improved quantum efficiency of more than 50%.

Fig. 3. Escape route of electron

A. Improving the Efficiency of Solar Cells by Using Semiconductor Quantum Dots (QD)
Another starting point for increasing the conversion efficiency of solar cells is the use of semiconductor quantum dots (QDs). By means of quantum dots, the band gaps can be adjusted specifically to convert longer-wavelength light as well, and thus increase the efficiency of the solar cells. These so-called quantum dot solar cells are at present still the subject of basic research. As material systems for QD solar cells, III/V semiconductors and other material combinations such as Si/Ge or Si/BeTe/Se are considered. Potential advantages of these Si/Ge QD solar cells are:
1) Higher light absorption, in particular in the infrared spectral region;
2) Compatibility with standard silicon solar cell production (in contrast to III/V semiconductors);
3) Increase of the photocurrent at higher temperatures;
4) Improved radiation hardness compared with conventional solar cells.

Charge injection from excited CdS into the SWCNT follows excitation of the CdS nanoparticle, and CNTs attached to CdSe and CdTe can likewise induce a charge-transfer process under visible-light irradiation. The enhanced interconnectivity between the titanium dioxide particles and the MWCNTs in the porous titanium dioxide film was concluded to be the cause of the improvement in short-circuit current density.
IV. COST REDUCTION BY NANO TECHNOLOGY
Conventional crystalline silicon solar cells are manufactured by a high-temperature vacuum deposition process; nanotechnology instead allows the use of a low-temperature process similar to printing.

Fig. 4. Schematic structure of a Si/Ge QD solar cell, with layers of Ge quantum dots in the active layer on the Si solar cell substrate

III. NANOTECHNOLOGY IMPROVES THE SOLAR CELL
Presently available nanotechnology solar cells are not as efficient as traditional ones; however, their lower cost offsets this. In the long term, nanotechnology versions should both be lower in cost and, using quantum dots, be able to reach higher efficiency levels than conventional ones. One approach is to coat the nanoparticles with quantum dots, tiny semiconductor crystals. Unlike conventional materials, in which one photon generates just one electron, quantum dots have the potential to convert high-energy photons into multiple electrons, producing up to three electrons for every photon of sunlight that hits the dots as electrons move from the valence band into the conduction band. The dots also capture more of the sunlight spectrum, increasing conversion efficiency to as high as 65 percent. Another area in which quantum dots could be used is in making so-called hot-carrier cells: typically the extra energy supplied by a photon is lost as heat, but in a hot-carrier cell the extra energy from the photons results in higher-energy electrons, which in turn lead to a higher voltage.

Nanotechnology reduces manufacturing costs as a result of using a low-temperature, printing-like process instead of the high-temperature vacuum deposition process, and it reduces installation costs by producing flexible rolls instead of rigid crystalline panels. Cells made from semiconductor thin films will also have this characteristic. The Nanosolar company has created a solar coating claimed to be the most cost-efficient solar energy source yet: their Power Sheet cells contrast with current solar technology systems by reducing the cost of production from $3 a watt to a mere 30 cents per watt. This would make, for the first time in history, solar power cheaper than burning coal.

Fig. 6. Cost/Efficiency Tradeoff

Fig. 5. (a) Quantum-dot (QD)-enhanced solar-cell design concept. (b) Current density-voltage curves for control and 5-20 layer enhanced cells under one-sun global air mass 1.5 (AM1.5g) light.

These cells did not have an antireflective coating. (InGaP: indium gallium phosphide; GaAs: gallium arsenide.) The transport of electrons across the particle network is the major problem in achieving higher photoconversion efficiency in nanostructured electrodes. Utilization of a CNT network as a support to anchor light-harvesting semiconductor particles assists the electron transport to the collecting electrode surface in a DSSC, via the charge injection discussed above.

Photovoltaic devices are limited in their practical efficiencies by thermodynamic limits and by production costs that involve tradeoffs in materials, production processes, and PV device packaging. The Lewis group provides a thorough illustration of the efficiency trends for various PV device materials, such as the crystalline silicon used in semiconductors, as well as for the new approaches to thin-film PV, including amorphous silicon, cadmium telluride (CdTe), copper indium diselenide (CIS) and copper indium gallium diselenide (CIGS) materials. These thin-film materials could offer substantial price reductions for PV devices, as a result of higher efficiency or lower production costs.
V. APPLICATIONS OF NANOTECHNOLOGY IN SOLAR CELLS
1) Inexpensive solar cells, which would utilize nanotechnology, would help preserve the environment.
2) If existing roofing materials were coated with plastic photovoltaic cells, which are inexpensive enough to cover a home's entire roof, then enough energy could be captured to power almost the entire house. If many houses did this, our dependence on the electric grid (fossil fuels) would decrease, helping to reduce pollution.
3) Nanotechnology in solar cells would also have military implications. The U.S. Army has already hired Konarka Technologies to help design a better way to power their soldiers' electrical devices. According to Daniel McGahn, Konarka's executive vice president, "A regular field soldier carries 1.5 pounds of batteries now. A special operations soldier has a longer time out, has to carry 140 pounds of equipment, 60 to 70 pounds of which are batteries." If nanotechnology could be used to create inexpensive and reasonably efficient solar cells, it would greatly improve soldiers' mobility.
4) Inexpensive solar cells would also help provide
electricity for rural areas or third world countries.
Since the electricity demand in these areas is not high,
and the areas are so distantly spaced out, it is not
practical to connect them to an electrical grid.
However, this is an ideal situation for solar energy.
5) Cheap solar cells could be used for lighting, hot water, medical devices, and even cooking. This would greatly improve the standard of living for millions, possibly even billions, of people.
6) Flexible, roller-processed solar cells have the potential to turn the sun's power into a clean, green, convenient source of energy. Even though the efficiency of plastic photovoltaic solar cells is not very high, covering cars with plastic photovoltaic cells or making solar-cell windows could generate power, save fuel and help reduce the emission of carbon gases.
VI. CONCLUSION
Nanotechnology (nano) incorporation into the films shows special promise both to enhance the efficiency of solar energy conversion, by increasing the absorption efficiency of light as well as the overall radiation-to-electricity efficiency, and to reduce the manufacturing cost. Although nanotechnology is at present only capable of supplying low-power devices with sufficient energy, its implications for society would still be tremendous: it would help preserve the environment, decrease soldiers' carrying loads, provide electricity for rural areas, and have a wide array of commercial applications due to its wireless capabilities.
REFERENCES
[1] Nayfeh, "Thin film silicon nanoparticle UV photodetector", IEEE Photonics Technology Letters, Volume 16, Issue 8, pp. 1927-1929, August 2004.
[2] Nayfeh, "Enhancement of polycrystalline silicon solar cells using ultrathin films of silicon nanoparticle", Applied Physics Letters, Volume 91, Issue 6, Article 063107, August 6, 2007.
[3] Scott Aldous, "How Solar Cells Work", How Stuff Works, 22 May 2005. <http://science.howstuffworks.com/solar-cell1.htm>
[4] K. R. Catchpole and A. Polman, "Plasmonic Solar Cells", Optics Express, Vol. 16, Issue 6, December 22, 2008, Focus Issue on Solar Energy edited by Alan Kost, University of Arizona.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.287-289.

Intrusion of Nanotechnology in Food Science


P. Arun Karthik Kani1, A. Asif Iqbal2 and J. Saravanan3
1,2Student, Pre-final year, 3Asst. Prof.
Dept of EIE, Jaya Engineering College
Email: arun.karthik.289@gmail.com1, Iqbal_june13@yahoo.co.in2, jsarav78@yahoo.co.in3
Abstract - Nanotechnology is having an impact on several aspects of food science, from how food is grown to how it is packaged. Zinc oxide nanoparticles can be incorporated into plastic packaging to block UV rays and provide antibacterial protection, while improving the strength and stability of the plastic film. Nanosensors are being developed that can detect bacteria and other contaminants, such as Salmonella, at a packaging plant. Point-of-packaging testing, if conducted properly, has the potential to dramatically reduce the chance of contaminated food reaching grocery store shelves.

Keywords - Nano sensors, nano composites, Salmonella

II. APPLICATIONS OF NANOTECHNOLOGY IN FOOD PROCESSING
The fundamental characteristics of food packaging
materials, such as strength, barrier properties, and stability
to heat and cold are being achieved using nano materials.
With the use of nanoparticles, bottles and packaging can
be made lighter and stronger with better thermal
performance and less gas absorption.
III. CURRENT NANOTECHNOLOGY APPLICATIONS
1. Clay nano composites

Nanocomposite technology improves barrier properties against gases, such as oxygen or carbon dioxide, against chemicals in both liquid and vapor phases, and against odour migration. A greater barrier also impedes absorption of flavors and vitamins by the plastic packaging itself. Barrier enhancement is important for longer product shelf life, and where the barrier is a limiting factor in existing packages its improvement can lead to lower-weight packages and reduced package cost. Improved shelf life and lower package cost dominate the uses of nanotechnology in consumer packaging.
I. INTRODUCTION
Nanotechnology is having an impact on several aspects of food science, from how food is grown to how it is packaged. This paper gives a brief account of the inclusion of nanotechnology and nanoparticles in current food processing and packaging units. The food industry is the largest manufacturing sector in the world. It presents a very different innovation scenario than the chemical and pharma industries do, and introducing new processing technologies has always been challenging. Nanotechnology research in the food industry is increasingly focused on food packaging. The final step in food processing is packaging the processed food. At present almost all food products are packed using plastic materials. These plastic materials are toxic and food cannot be preserved in them for a long time; as a result the food items might get contaminated. Contamination makes the packed food unfit to consume, so the packing materials have to undergo a number of sterilization processes before becoming ready for packing.

2. Silver nano particles


IV. APPLICATIONS UNDER DEVELOPMENT
1. Zinc oxide nano particles
2. Nano sensors
V. NECESSITY FOR FOOD PACKAGING
Packaging fulfils diverse roles, from protecting products, preventing spoilage and contamination, extending shelf life and ensuring safe storage, to helping make products readily available to consumers.
VI. WHY PLASTICS FOR PACKING?
Plastics have emerged as the most preferred choice of
packaging material for various products- from food,
beverages, chemicals, electronic items and so on. They
offer unique advantages over conventional materials.
• Safety: Plastics are safer materials for the packaging of food products, especially polyolefins, which do not react with food; pilferage and contamination are difficult.
• Shelf life: Plastic packaging materials offer better shelf life.
• Cost: Plastics are the most cost-effective medium of packaging compared with any other material; the cost of transportation is reduced considerably on account of lower weight and less damage.
• Convenience: Plastics can be converted into any form by various processing techniques, and thus can pack any type of substance: liquids, powders, flakes, granules, solids.

• Waste: Packaging in plastics reduces the wastage of various food products; a typical example is potatoes or onions packed in leno bags.
• Aesthetics: The right choice of plastic packaging increases the aesthetic value of products and helps in brand identity.
• Handling and storage: Products packed in plastics are the easiest to handle, store and transport.
• Ease of recycling: Every day new products packed in plastics replace conventional products, and when a thought is given to packing a new product the first choice that appears in the mind is plastic packaging material.

Fig. 1. Salmonella Bacteria

VII. ROLE OF PLASTIC IN FOOD PACKAGING

Plastics have developed into the most important class of packaging materials. Their relative impermeability to substances from the surroundings has a great influence on the shelf life and the quality of the packed goods, while the interaction between the contents and the various components of the packaging plays a decisive role. The plastic packaging used is usually non-biodegradable due to possible interactions with the food; biodegradable polymers, moreover, often require special composting conditions to degrade properly.

X. NANOCOMPOSITES

Nanocomposites are prepared by dispersing a Nanomer nanoclay into a host polymer, generally at less than 5 wt% loading. This process is termed exfoliation. Exfoliation is facilitated by surface compatibilization chemistry, which expands the nanoclay platelets to the point where individual platelets can be separated from one another by mechanical shear or by the heat of polymerization.

VIII. EFFECT OF PLASTIC IN FOOD PACKAGING


While plastic products give desirable performance properties, they also have negative environmental and human-health effects. These effects include:
• Direct toxicity, as in the cases of lead, cadmium, and mercury;
• Endocrine disruption, which can lead to cancers, birth defects, immune-system suppression and developmental problems in children.
Fig. 2. Clay nano composites

IX. MAJOR CONTAMINANT OF PLASTIC IN FOOD PACKAGING
A. Salmonella
Salmonella is a genus of rod-shaped, Gram-negative, predominantly motile enterobacteria, with flagella that project in all directions. They are chemoorganotrophs, obtaining their energy from oxidation and reduction reactions using organic sources, and are facultative anaerobes. Detection of this food contaminant is critical to food safety, and while different methods have been employed to detect Salmonella, the biggest challenges of the various approaches have been speed and sensitivity. A nanosensor to detect Salmonella bacteria has been developed which could enhance food safety and security.

Nanocomposites can be created using both thermoplastic and thermoset polymers, and the specific compatibilization chemistries designed and employed are necessarily a function of the host polymer's unique chemical and physical characteristics.
In some cases, the final nanocomposite will be prepared
in the reactor during the polymerization stage.
Nanocomposites also demonstrate enhanced fire resistant
properties and are finding increasing use in engineering
plastics.
Nanomer nanoclays provide plastics product development
teams with exciting new polymer enhancement and
modification options. With the proper choice of
compatibilizing chemistries, the nanometer-sized clay
platelets interact with polymers in unique ways.

Application possibilities for packaging include food and non-food films and rigid containers. In the engineering plastics arena, a host of automotive and industrial components can be considered, making use of the lightweight, impact- and scratch-resistant, and higher heat-distortion performance characteristics.

Zinc oxide nanoparticles, one of a range of nano ingredients produced by advanced nanotechnology, can also have antimicrobial properties and have been found to be effective in killing microorganisms.

Fig. 5. Zinc oxide nanoparticles


Fig. 3. Exfoliated nano Composites

Nanoclay polymer composites can be produced by a number of different methods:
• solution intercalation: the organo-clay is first swollen with a solvent (e.g. water or an organic solvent) before mixing with the polymer; the polymer diffuses between the nanoclay layers, displacing the solvent;
• in situ intercalative polymerisation: the organo-clay is swollen within a solution of monomer, so that polymerisation occurs between the clay layers;
• melt intercalation (used for thermoplastic polymers): the organo-clay and polymer are mixed at a temperature above the softening point of the polymer.

Zinc oxide exhibits antibacterial activity that increases with decreasing particle size. This activity does not require the presence of UV light (unlike titanium dioxide, a photocatalytic disinfecting material for surface coatings) but is stimulated by visible light. The exact mechanism(s) of action is still unknown. Zinc oxide nanoparticles have been incorporated into a number of different polymers, including polypropylene. In addition, zinc oxide effectively absorbs UV light without re-emitting it as heat, and therefore improves the stability of polymer composites.

Fig. 6. Zinc oxide nano powder

Fig. 4. Processing of nano composites

XI. ZINC OXIDE NANO PARTICLES


Replacing chemical UV absorbers with dispersed zinc oxide nanoparticles provides the required UV protection and product stability while remaining transparent, inert and stable within the film, it is claimed.

XII. CONCLUSION

From the above information, we have identified that nanotechnology is very useful for the preservation of food items and for protecting us from harmful diseases, so it is better to implement this technology in all types of packaging, especially for food materials. By using this technology we are protected from the contaminants coming from plastic packaging. Zinc oxide nanoparticles are the key agent for detecting and eliminating the harmful bacterium Salmonella; this is claimed to be the only technology that fully eliminates these harmful bacteria.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.290-293.

Microelectronic Pills for Remote Biomedical Measurements


P. Kavitha1, E. Elakkiya2 and A.S. Anupama3
1,2Student, Pre-final year, 3Lecturer, EIE
Dept of EIE, Jaya Engineering College, Thiruninravur-602024.
Email: bkavi.kavitha@gmail.com1, eelakkiya9@yahoo.com2, arjun_anu2000@yahoo.co.in3
Abstract - A novel microelectronic pill has been developed
for in situ studies of the gastro-intestinal tract, combining
microsensors and integrated circuits with system-level
integration technology. The measurement parameters
include real-time remote recording of temperature, pH,
conductivity, and dissolved oxygen. The unit comprises an
outer biocompatible capsule encasing four microsensors, a
control chip, a discrete component radio transmitter, and
two silver oxide cells (the latter providing an operating time
of 40 h at the rated power consumption of 12.1 mW). The sensors were fabricated on two separate silicon chips located
at the front end of the capsule. The robust nature of the pill
makes it adaptable for use in a variety of environments
related to biomedical and industrial applications.
Keywords - Microelectronic pill, microsensor integration, mobile analytical microsystem, multilayer silicon fabrication, radiotelemetry, remote in situ measurements.

I. INTRODUCTION
The invention of the transistor enabled the first
radiotelemetry capsules, which utilized simple circuits for
in vivo telemetric studies of the gastro-intestinal (GI)
tract. These units could only transmit from a single sensor
channel, and were difficult to assemble due to the use of
discrete components. The measurement parameters
consisted of either temperature, pH or pressure, and the
first attempts at conducting real-time noninvasive
physiological measurements suffered from poor
reliability, low sensitivity, and short lifetimes of the
devices. The first successful pH gut profiles were
achieved in 1972, with subsequent improvements in
sensitivity and lifetime. Single-channel radiotelemetry
capsules have since been applied for the detection of
disease and abnormalities in the GI tract where restricted
access prevents the use of traditional endoscopy. Most
radiotelemetry capsules utilize laboratory type sensors
such as glass pH electrodes, resistance thermometers, or
moving inductive coils as pressure transducers. The
relatively large size of these sensors limits the functional
complexity of the pill for a given size of capsule.
Adapting existing semiconductor fabrication technologies
to sensor development has enabled the production of
highly functional units for data collection, while the
exploitation of integrated circuitry for sensor control,
signal conditioning, and wireless transmission has

extended the concept of single-channel radiotelemetry to remote distributed sensing from microelectronic pills.
Our current research on sensor integration and onboard
data processing has, therefore, focused on the
development of Microsystems capable of performing
simultaneous multiparameter physiological analysis. The
technology has a range of applications in the detection of
disease and abnormalities in medical research. The overall
aim has been to deliver enhanced functionality and reduced size and power consumption, through system-level
integration on a common integrated circuit platform
comprising sensors, analog and digital signal processing,
and signal transmission.
In this paper, we present a novel analytical microsystem
which incorporates a four-channel microsensor array for
real-time determination of temperature, pH, conductivity
and oxygen. The sensors were fabricated using electron
beam and photolithographic pattern integration, and were
controlled by an application specific integrated circuit
(ASIC), which sampled the data with 10-bit resolution
prior to communication off chip as a single interleaved
data stream. An integrated radio transmitter sends the
signal to a local receiver (base station), prior to data
acquisition on a computer. Real-time wireless data
transmission is presented from a model in vitro
experimental setup, for the first time. Details of the sensors are provided later, but they include: a silicon diode to measure the body core temperature, while also compensating for temperature-induced signal changes in the other sensors; an ion-selective field effect transistor (ISFET) to measure pH; a pair of direct-contact gold electrodes to measure conductivity; and a three-electrode electrochemical cell to detect the level of dissolved oxygen in solution.
will, in the future, be used to perform in vivo
physiological analysis of the GI-tract.
II. MICROELECTRONIC PILL DESIGN AND FABRICATION
A. Sensors
The sensors were fabricated on two silicon chips located
at the front end of the capsule. Chip 1 comprises the
silicon diode temperature sensor, the pH ISFET sensor
and a two electrode conductivity sensor. Chip 2

comprises the oxygen sensor and an optional nickel-chromium (NiCr) resistance thermometer.

Fig. 1

1) Sensor Chip 1
An array of 4×2 combined temperature and pH sensor platforms was cut from the wafer and attached onto a glass cover slip using photoresist cured on a hotplate.
The cover slip acted as temporary carrier to assist
handling of the device during the first level of lithography
when the electric connection tracks, the electrodes and the
bonding pads were defined.

B. Control Chip
The ASIC was a control unit that connected together the
external components of the microsystem. It is a novel
mixed-signal design that contains an analog signal conditioning module operating the sensors, 10-bit analog-to-digital and digital-to-analog converters, and a
digital data processing module. An RC relaxation
oscillator provides the clock signal.
The analog module offers a combination of a power-saving scheme and a compact integrated circuit design.
The temperature circuitry biased the diode at constant
current, so that a change in temperature would reflect a
corresponding change in the diode voltage. The pH
ISFET sensor was biased as a simple source and drain
follower at constant current with the drain-source voltage
changing with the threshold voltage and pH. The
conductivity circuit operated at direct current measuring
the resistance across the electrode pair as an inverse
function of solution conductivity. An incorporated
potentiostat circuit operated the amperometric oxygen
sensor with a 10-bit DAC controlling the working
electrode potential with respect to the reference. The
analog signals had a full-scale dynamic range of 2.8 V.
The analog signals were sequenced through a multiplexer prior to being digitized by the ADC. The digital data processing module conditioned the digitized signals through a serial-bitstream data compression algorithm, which decided when transmission was required by comparing the most recent sample with the previously sampled data. This technique minimizes the transmission length, and is particularly effective when the measuring environment is quiescent, a condition encountered in many applications.
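A hedged sketch of this send-on-change scheme in Python is given below; the (index, value) output format and the one-LSB threshold are illustrative, and the ASIC's actual serial bitstream format is not reproduced here:

def compress_stream(samples, threshold=1):
    # Yield (index, sample) pairs only when the 10-bit value differs
    # from the previously transmitted one by at least 'threshold'.
    last_sent = None
    for i, s in enumerate(samples):
        if last_sent is None or abs(s - last_sent) >= threshold:
            yield i, s
            last_sent = s

# With a 10-bit ADC over the 2.8 V full scale, 1 LSB is about
# 2.8 / 1024 = 2.7 mV, so a quiescent channel sends almost nothing.
readings = [512, 512, 512, 513, 513, 520, 520, 520]
print(list(compress_stream(readings)))   # [(0, 512), (3, 513), (5, 520)]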

The pattern was defined in resist by photolithography prior to thermal evaporation of 200 nm gold. An
additional layer of gold was sputtered to improve the
adhesion of the electroplated silver used in the reference
electrode. Liftoff in acetone detached the chip array from
the cover slip. Individual sensors were then diced prior to
their re-attachment in pairs on a cover slip by epoxy resin.
The left-hand-side unit comprised the diode, while the
right-hand-side unit comprised the ISFET. The floating
gate of the ISFET was pre-covered with a proton sensitive
layer for pH detection.
2) Sensor Chip 2
The Level 1 pattern was defined in 0.9 µm UV3 resist by electron beam lithography. The chip contains the three-electrode electrochemical oxygen sensor and the NiCr resistance thermometer.
Fig. 2

The entire design is constructed with a focus on low power consumption and immunity from noise
interference. Separate on-chip power supply trees and
pad-ring segments were used for the analog and digital
electronics sections in order to discourage noise
propagation and interference.
C. Radio Transmitter
The radio transmitter is assembled on a single-sided printed circuit board prior to integration in the capsule. A second, crystal-stabilized transmitter is also used. This
second unit is similar to the free running standard
transmitter. Pills incorporating the standard transmitter
are denoted Type I, whereas the pills incorporating the
crystal stabilized unit are denoted Type II.
D. Capsule
The microelectronic pill consisted of a machined biocompatible (noncytotoxic), chemically resistant polyetheretherketone (PEEK) capsule and a PCB chip carrier acting as a common platform for attachment of the sensors, ASIC, transmitter and batteries.
The transmitter is integrated in the PCB which
incorporates the power supply rails, the connection points
to the sensors, as well as the transmitter and the ASIC and
the supporting slots for the capsule in which the chip
carrier was located.

The ASIC was attached with double-sided copper conducting tape prior to wire bonding to the power supply rails, the sensor inputs, and the transmitter. The unit is powered by two standard silver oxide cells.
The capsule was machined as two separate screw-fitting
compartments. The PCB chip carrier was attached to the
front section of the capsule. The rear section of the capsule was attached to the front section by a 13-mm screw connection. The seals render the capsule waterproof, as well as making it easy to maintain.
III. MATERIAL AND METHODS
A. General Experimental Setup
All the devices were powered by batteries in order to
demonstrate the concept of utilizing the microelectronic
pill in remote locations (extending the range of
applications from in vivo sensing to environmental or
industrial monitoring). The pill was submerged in a 250-mL glass bottle located within a 2000-mL beaker to allow
for a rapid change of pH and temperature of the solution.
A scanning receiver captured the wireless radio
transmitted signal from the microelectronic pill by using a
coil antenna wrapped around the 2000-mL polypropylene
beaker in which the pill was located. A portable Pentium
III computer controlled the data acquisition unit which
digitally acquired analog data from the scanning receiver
prior to recording it on the computer. The solution volume
used in all experiments was 250 mL.
The beaker, pill, glass bottle, and antenna were located
within a 25 × 25 cm container of polystyrene, reducing
temperature fluctuations from the ambient environment
(as might be expected within the GI tract) and as required
to maintain a stable transmission frequency. The data is
acquired and processed using a MATLAB routine.
IV. RESULTS
The power consumption of the microelectronic pill with the transmitter, ASIC and sensors connected was calculated to be 12.1 mW, corresponding to the measured current consumption of 3.9 mA at a 3.1-V supply voltage. The ASIC and sensors consumed 5.3 mW, corresponding to 1.7 mA of current, whereas the free-running radio transmitter (Type I) consumed 6.8 mW (corresponding to 2.2 mA of current), with the crystal-stabilized unit consuming 2.1 mA. The two SR44 batteries used provided an operating time of more than 40 h for the microsystem.
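These figures can be cross-checked with simple arithmetic, assuming a typical SR44 silver oxide cell capacity of roughly 170 mAh (the capacity is an assumption; the currents and voltage are the paper's measured values):

supply_v   = 3.1              # V, two cells in series
current_ma = 3.9              # mA total (1.7 mA ASIC + sensors, 2.2 mA Tx)
power_mw   = supply_v * current_ma
hours      = 170.0 / current_ma   # series cells share one capacity

print("power   ~ %.1f mW" % power_mw)   # ~12.1 mW, as reported
print("runtime ~ %.0f h" % hours)       # ~44 h, consistent with > 40 h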
VI. CONCLUSION

Fig. 3


We have developed an integrated sensor array system which has been incorporated in a mobile remote analytical
microelectronic pill, designed to perform real-time in situ
measurements of the GI tract, providing the first in vitro
wireless transmitted multichannel recordings of analytical
parameters. Further work will focus on developing
photopatternable gel electrolytes and oxygen- and cation-selective membranes. The microelectronic pill will
be miniaturized for medical and veterinary applications

by incorporating the transmitter on silicon and reducing power consumption by improving the data compression
algorithm and utilizing a programmable standby power
mode.
The generic nature of the microelectronic pill makes it
adaptable for use in corrosive environments related to
environmental and industrial applications, such as the
evaluation of water quality, pollution detection,
fermentation process control and the inspection of
pipelines. The integration of radiation sensors and the
application of indirect imaging technologies such as
ultrasound and impedance tomography, will improve the
detection of tissue abnormalities and radiation treatment
associated with cancer and chronic inflammation. In the
future, one objective will be to produce a device,
analogous to a micro total analysis system or lab-on-a-chip
sensor, which is not only capable of collecting and
processing data, but which can transmit it from a remote
location. The overall concept will be to produce an array
of sensor devices distributed throughout the body or the
environment, capable of transmitting high-quality
information in real time.
REFERENCES
[1] S. Mackay and B. Jacobson, "Endoradiosonde", Nature, vol. 179, pp. 1239-1240, 1957.
[2] H. S. Wolff, "The radio pill", New Scientist, vol. 12, pp. 419-421, 1961.
[3] S. J. Meldrum, B. W. Watson, H. C. Riddle, R. L. Bown, and G. E. Sladen, "pH profile of gut as measured by radiotelemetry capsule", Br. Med. J., vol. 2, pp. 104-106, 1972.
[4] D. F. Evans, G. Pye, R. Bramley, A. G. Clark, T. J. Dyson, and J. D. Hardcastle, "Measurement of gastrointestinal pH profiles in normal ambulant human subjects", Gut, vol. 29, no. 8, pp. 1035-1041, Aug. 1988.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.294-297.

System Implementation of Pushing DATA to Handheld Devices
via Bluetooth High Speed Specification, Version 3.0 + HS
Ms. A. Valarmathi
Lecturer, ECE Department, Rajalakshmi Institute of Technology
Valarmathi1010@gmail.com

Abstract- In the present scenario, mobile phones, laptops and
PDAs have become the elixir of human needs. Almost everyone
today has a mobile phone, and Bluetooth has already become a
standard inclusion on even the most basic mobile phones, student
laptops and PDAs. Data transfer through Bluetooth among
handheld devices is common and popular. Bluetooth establishes
links in a convenient manner to facilitate sharing data between
devices. This motivates us to apply the technology to hypermarket
advertising. In this paper, we present a framework that pushes
data to hypermarket customers with handheld devices via
Bluetooth. Implementing the system in supermarkets can reduce
the cost of distributing handbills by using this proactive
advertising system, which also provides dynamic delivery of
advertisements. In other words, the hypermarket can make and
deliver advertisements, offer pamphlets and price lists on the fly
without resorting to printing.
Keywords- System implementation, pushing data, Bluetooth,
version 3.0, handheld devices.
Introduction
Hypermarkets and supermarkets usually distribute handbills
manually on-site or mail DM (Direct Mail) to members'
mailboxes. Although the latter costs only postage, compared
with the former, which costs manpower and flier expenses,
mailing DM can only reach the members rather than the
potential consumers of the hypermarket. Seeing the
shortcomings of conventional advertising, we built a virtual
proactive data pushing system which pushes advertisements,
offer information and the price list to on-site customers with
handheld devices via Bluetooth. By using the system, the
hypermarket needs only to convert its major advertisements,
offer information and the price list into pictures or video clips,
which are stored in the servers deployed around the site, and then
push the converted files to on-site customers who have
handheld devices. The remainder of the paper is organized as
follows. The Bluetooth technique and its file transfer protocol
OBEX (Object Exchange), employed by our system, are
discussed in the next section. Section 3 describes the complete
system architecture and the functional modules. We present
the flow chart of the push mechanism and program logic in
Section 4. Finally, Section 5 concludes the paper.

I. BLUETOOTH V3.0 +HS AND OBEX PROTOCOL

Data procurement over the air has become prevalent in recent
years, and this phenomenon has inspired much research on the
application of mobile devices with Bluetooth wireless
communication [1-2][5]. But the rate at which the data is
pushed is a matter of concern in today's world. Mobile
devices such as cellular phones, Personal Digital
Assistants (PDAs), and laptops have been equipped with Bluetooth V2.1
as a basic function, providing short-distance file transfer without
using proprietary cables, but the process has been slow, which
makes Bluetooth a powerful yet slow technology to rely on.
To improve this aspect we use Bluetooth V3.0 + HS, which is
the best tool for exchanging messages among versatile mobile
devices. Bluetooth establishes links in a more convenient
manner to facilitate sharing data between devices. This motivates
us to apply the technology to data pushing fields such as
advertising.

Bluetooth high speed wireless technology V3.0 + HS is the
newest enhancement of the Bluetooth core specification. The
V3.0 + HS specification enables the use of the 802.11 radio to
provide greater data rate capabilities while maintaining the
familiar, easy-to-use classic Bluetooth interface. With V3.0,
high speed no longer requires an established network architecture.
V3.0 is paving the way for new use cases and the expansion of
existing product tiers. Consumers will benefit from two key
features of this specification enhancement: Unicast
Connectionless Data lowers latency, providing a faster, more
reliable experience, while Enhanced Power Control ensures
more limited dropouts, a key benefit for headset users. For
manufacturers and integrators, the enhancements of V3.0 can
improve the quality of products while at the same time
reducing costs. Bluetooth is a short-distance wireless data
transfer technology, operating in the frequency range of
2.4-2.485 GHz, the so-called ISM (Industrial, Scientific and
Medical) band, using spread spectrum, frequency hopping,
full-duplex signals. Let us compare Bluetooth V2.1 and V3.0
for a clear insight (see Table 2).

With high hopping rates, Bluetooth is difficult to intercept
and less interrupted by electromagnetic waves. The key features
of Bluetooth technology are robustness, low power, and low
cost. It is designed for compact devices, such as cellular phones
and PDAs, to provide point-to-point wireless data transfer. Bluetooth
devices are classified [4] by output power, which determines
the transmission range. Most handheld devices fall in class
2, which outputs 2.5 mW of power and has an approximate range of
up to 10 meters or 33 feet. OBEX is a communication protocol
which supports GET and PUT operations, like those in the
well-known FTP (File Transfer Protocol), and facilitates the exchange of
binary objects between devices.
It is maintained by the IrDA (Infrared Data Association) but
has also been adopted by the Bluetooth SIG (Bluetooth Special
Interest Group). Therefore, most data transfer between
Bluetooth devices is based on the OBEX protocol, by which a
sender can send a file along with its filename, file size, and file
description so that a receiver can better know about the
received data. The Community Development of Java
Technology Specifications also published the JSR-82 API [3], which
provides classes and interfaces for OBEX with which
programmers can more easily develop file transfer
programs on handheld devices.
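To make the OBEX exchange concrete, the snippet below hand-assembles a minimal OBEX PUT request in Python, purely as an illustration of the header structure. The header IDs (0x01 Name, 0xC3 Length, 0x49 End-of-Body) and the final-bit PUT opcode 0x82 come from the IrDA OBEX specification; a real client on a Java-enabled handset would instead use the JSR-82 OBEX classes mentioned above.

import struct

def obex_put_packet(name: str, body: bytes) -> bytes:
    # Name header (0x01): null-terminated UTF-16BE string, 2-byte length field.
    name_bytes = name.encode("utf-16-be") + b"\x00\x00"
    headers = struct.pack(">BH", 0x01, 3 + len(name_bytes)) + name_bytes
    # Length header (0xC3): 4-byte object size, no length field.
    headers += struct.pack(">BI", 0xC3, len(body))
    # End-of-Body header (0x49): the payload itself, 2-byte length field.
    headers += struct.pack(">BH", 0x49, 3 + len(body)) + body
    # PUT opcode with the final bit set (0x82), then the total packet length.
    return struct.pack(">BH", 0x82, 3 + len(headers)) + headers

print(obex_put_packet("offer.jpg", b"...").hex())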


II. SYSTEM ARCHITECTURE AND SOFTWARE MODULES
This section describes the system architecture and the software
modules.
A. System Architecture
The proactive advertising system architecture is shown in
Fig. 1.

Fig 1. System architecture

The system requires users to register a username and
password to become members, online or at the service counter
of the hypermarket, so that they can use this information to validate
their handheld devices when coming to the hypermarket. The
front end push servers, which are interconnected to the back
end database for synchronizing member and mobile device
information, are deployed around the hypermarket. These
servers push a registration program or advertisements to on-site
customers. Member customers who have not registered
their handheld devices receive the registration program in
order to complete the registration of the devices.
Those who have completed the registration of their handheld
devices can decide whether or not to accept the advertisements. The back
end database contains user registration information along with
the Bluetooth address of the mobile device. The data push module
and the member registration module run in the front end push
servers; the former pushes either the registration program
or advertisements, while user login information associated with
the Bluetooth device address is processed by the latter module.
A handheld device receives the registration program if its
Bluetooth device address is not registered; otherwise it
receives data such as advertisements, offer pop-ups and price lists.

B. Software Modules in the System
There are four software modules in our system:
1. Bluetooth Ad push module.
2. Server side Bluetooth member registration module.
3. Client side member registration module.
4. Dynamic URL advertisement generator.
The Bluetooth Ad push module and the server side Bluetooth member
registration module, which handle the data transfer and
registration processes respectively, are located in the front end servers.
The client side member registration module is pushed by the Bluetooth
Ad push module and runs in the customer's handheld device.
The dynamic URL advertisement generator, placed in the front end
servers, produces a Jar file containing a URL pointing to an
advertisement webpage. Fig. 2 shows the interactivity of these
four modules.

Fig 2. Interactivity of software modules

Functions of the four modules are described as follows.

1) Bluetooth Ad Push Module
This module periodically pushes files to nearby handheld
devices with Bluetooth turned on. If the push succeeds, it
stores the success information in the database for later inquiry. If
the push is rejected by some handheld device, the Bluetooth
address of the device is logged, together with a reset timer, in a
denial device list which is consulted before the next push.
Therefore, the device will not receive files pushed from the
module for some period of time. If the timer is up, the
associated device is removed from the list. When the
Bluetooth Ad Push Module is running, it receives Bluetooth
device addresses discovered by the front end servers.
The module checks these addresses against the database to see whether
they are registered, that is, whether the addresses are
associated with a member's username and password. If an
address is not registered, the module pushes the client
side member registration module to that device.

2) Server Side Bluetooth Member Registration Module
As the counterpart of the client side member registration module, this
module receives the registration information sent from
users and checks it against the database to see whether they are
members or not. This module notifies the handheld devices of
the registration status.

3) Client Side Member Registration Module
This module can be automatically installed and run on a
Java-enabled device. Customers use this module to register their
handheld devices by entering their username and password. The
username/password and the associated Bluetooth device
address are sent to the server side Bluetooth member registration
module to complete the one-time registration of the handheld
device. After the handheld device has been registered, the
Bluetooth Ad push module uses its Bluetooth device
address to identify the customer.

4) Dynamic URL Advertisement Generator
This module provides an alternative way to lead the customer to
the advertisements. The hypermarket prepares its
advertisements on a website rather than pushing them to the
customer, and then uses this module to create a Jar file
containing the URL of the website. The Jar file is pushed
to the customer's handheld device, on which it can be
executed to bring the URL to the browser of the handheld
device. Thus, customers can browse the advertisements on their
handheld devices. The functions of these modules are
summarized in Table 1.

TABLE 1. SOFTWARE MODULE FUNCTIONS

Module Name | Function | Location
Bluetooth data push module | Pushes the registration program, URL jar file or advertisements to handheld devices which are discovered by the front end servers. | Front End Servers
Server side Bluetooth member registration module | Gathers user login information and the associated Bluetooth device address, then checks whether the device is registered. | Front End Servers
Client side member registration module | This module, sent from the Bluetooth Ad push module, is used to register handheld devices. | Handheld Devices
Dynamic URL data generator | Prepares a jar file containing a URL that points to an advertisement webpage which will be shown on the handheld device. | Front End Servers

C. Flow Chart of Push Mechanism and Program Logics
Consider the following scenario: the hypermarket might
only want to send special offers to its members without
bothering non-member customers; some customers do not
want to get any advertisement, no matter whether they are
members or not; and a customer might already have received the
advertisement that was just pushed. How does the Bluetooth Ad
push module deal with these situations? In this section, we
describe the push mechanism and its logic. Fig. 3 displays the
flow chart of the push mechanism, and the program logic within the
Bluetooth Ad push module is expounded as follows (a minimal
code sketch of this logic is given after the list).

(1) There are two pushing modes in the Bluetooth Ad push
module: one is automatic push, the other is manual
push. The hypermarket can select either automatic push or
manual push depending on requirements.
   a. Automatic Push: the program continuously pushes
      files according to a predefined sequence within
      a period of time that is adjustable.
   b. Manual Push: manually push a specific file. We
      define the file being pushed as F (File).
(2) Search for Bluetooth devices around the front end servers and
put the Bluetooth devices found in a device list.
From the list, choose one device at a time, which
will go through the processes (3)-(6) described
below. We define the chosen device as D (Device).
(3) If device D is shown in the denial device list, that
means the device has rejected receiving files. If the
reset timer has expired, then remove device D from
the denial device list.
(4) If device D is not in the denial device list, then check
whether it is registered or not:
   a. Registered: check with the database to see if device D has received file F.
   b. Not registered: transmit the Client Side Member
      Registration Module to device D.
(5) If the registered device D has received file F, then stop
pushing file F to device D.
(6) If the registered device D rejects receiving file F, then the
system will not push any file to device D within a
period of time which is adjustable.
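The following is a minimal, self-contained sketch of steps (1)-(6) above; the plain Python dictionaries stand in for the front end server's database, and obex_push is a hypothetical placeholder for the server's Bluetooth/OBEX layer, not a real API.

import time

def obex_push(address, payload):
    # Hypothetical stand-in for an OBEX PUT; returns False if the device rejects it.
    print(f"pushing {payload} to {address}")
    return True

DENIAL_PERIOD_S = 3600   # adjustable "do not push" period of step (6)
denial_list = {}         # step (3): Bluetooth address -> denial expiry time
registered = set()       # addresses associated with a username/password
received = set()         # (address, file) pairs already delivered

def push_round(discovered, file_f, registration_module="register.jar"):
    now = time.time()
    for device_d in discovered:                   # step (2): device list
        if device_d in denial_list:               # step (3): denial check
            if denial_list[device_d] > now:
                continue                          # still within denial period
            del denial_list[device_d]             # reset timer expired
        if device_d not in registered:            # step (4b): push registration
            ok = obex_push(device_d, registration_module)
        elif (device_d, file_f) in received:      # steps (4a)/(5): already has F
            continue
        else:
            ok = obex_push(device_d, file_f)      # push file F
            if ok:
                received.add((device_d, file_f))
        if not ok:                                # step (6): start denial timer
            denial_list[device_d] = now + DENIAL_PERIOD_S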


Fig 3. Flow chart of push mechanism

No matter whether a device rejects receiving files or is not
registered, the above mechanism responds with the correct action
to the device: it either stops pushing files or pushes the registration
program. In other words, customers can interact with the
system, so the hypermarket cannot force them to receive
advertisements.

D. Conclusion
Hypermarkets and superstores usually distribute their fliers
manually or by mail, which is inefficient as well as
consuming workforce and material. We proposed a proactive
advertising system to cope with this problem. A hypermarket can
reduce the cost of distributing handbills by using this system,
which also provides dynamic delivery of advertisements. In
other words, the hypermarket can make and deliver advertisements
on the fly, depending on real-time sales data, without resorting to
printing. We have completed a field test in a local
superstore and the result was satisfactory. The proactive
advertising system can also be applied to similar places such as food
courts, amusement parks and cinemas, as an
alternative advertising method for these places.
TABLE 2. COMPARISON OF TECHNOLOGIES

Technical Specifications | Bluetooth v2.1 + EDR | Bluetooth v3.0 + HS
Radio frequency | 2.4 GHz | 2.4 GHz and 5 GHz
Distance/Range | 10 meters | 10 meters
Over-the-air data rate | 1-3 Mbps | up to 54 Mbps
Application throughput | 0.7-2.1 Mbps | up to 24 Mbps
Nodes/Active slaves | 7 / 16,777,184 | Same as 2.1 + EDR
Security | 64b/128b and application layer user defined | 128b AES
Robustness | Adaptive fast frequency hopping, FEC, fast ACK | CSMA/CA with collision detection, ARQ, FEC, CRC
Latency (from a non-connected state) | 100 ms | Same as 2.1 + EDR with AMP
Total time to send data (det. battery life) | - | Less than 2.1 + EDR with UCD
Government regulation | Worldwide | Same as 2.1 + EDR
Certification body | Bluetooth SIG | Same as 2.1 + EDR
Voice capable | Yes | Same as 2.1 + EDR
Network topology | Scatternet | Same as 2.1 + EDR
Power consumption | 1 (as the reference) | < 1
Service discovery | Yes | Same as 2.1 + EDR
Profile concept | Yes | Same as 2.1 + EDR
Primary use cases | Mobile phones, gaming, headsets, stereo audio streaming, automotive, PCs, etc. | Same as 2.1 + EDR plus bulk data transfer, synchronization and video streaming


REFERENCES
[1] Bluetooth SIG, "Bluetooth Basics".
[2] Yu-Liang Chen, Hung-Jen Chou, Chen-Pu Lin, Hsien-Tang Lin, Shyan-Ming Yuan and Yu-Chia Liu, "Blueg, A New Blog-like P2P System built on Bluetooth", Master Thesis of Computer Science and Engineering, National Chiao Tung University, Jun 2006.
[3] Bruce Hopkins, "Part 1: File transfer with JSR-82 and OBEX".
[4] Nai-Shuoh Yeh, "Linux Bluetooth Protocol Stack/BlueZ on SCAN Device", Computer and Communication, CEPS, Volume 107, Mar 2004, pp. 38-47.
[5] Chatschik, B., "An overview of the Bluetooth wireless technology", IEEE Communications Magazine, Volume 39, Issue 12, Dec 2001, pp. 86-94.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.298-303.

TEXT DETECTION AND RECOGNITION IN IMAGES AND VIDEO FRAMES
C. Selvi, M.E. (II year), Student, Department of Computer Science and Engineering.
Selvichandran.it@gmail.com
ABSTRACT
This paper presents a new method for detecting and recognizing text in complex images and video frames. Text detection is performed in a two-step approach that combines the speed of a text localization step, enabling text size normalization, with the strength of a machine learning text verification step applied on background independent features. Text recognition, applied on the detected text lines, is addressed by a text segmentation step followed by a traditional OCR algorithm within a multi-hypotheses framework relying on multiple segments, language modeling and OCR statistics. Experiments conducted on large databases of real broadcast documents demonstrate the validity of our approach.

INTRODUCTION
Content-based multimedia database indexing and retrieval tasks require automatic extraction of descriptive features that are relevant to the subject materials (images, video, etc.). The typical low-level features that are extracted in images and video include measures of color, texture, and shape. Although these features can easily be obtained, they do not give a precise idea of the image content. Extracting more descriptive features and higher level entities, such as text and human faces, has recently attracted significant research interest. Text embedded in images and video, especially captions, provides brief and important content information, such as the name of players or speakers, the title, location, and date of an event, etc. This text can be a keyword resource as powerful as the information provided by speech recognizers. Besides, text-based search has been successfully applied in many applications, while the robustness and computation cost of feature matching algorithms based on other high-level features are not efficient enough to be applied to large databases.

Text detection and recognition in images and video frames, which aims at integrating advanced optical character recognition (OCR) and text-based searching technologies, is now recognized as a key component in the development of advanced image and video annotation and retrieval systems. Unfortunately, text characters contained in images and videos can be of any gray-scale value (not always white), low-resolution, of variable size, and embedded in complex backgrounds. Experiments show that applying conventional OCR technology directly leads to poor recognition rates. Therefore, efficient detection and segmentation of text characters from the background is necessary to fill the gap between image and video documents and the input of a standard OCR system. Previously proposed methods can be classified into bottom-up methods and top-down methods. Bottom-up methods segment images into regions and then group character regions into words. The recognition performance therefore relies on the segmentation algorithm and the complexity of the image content. Top-down algorithms first detect text regions in images and then segment each of them into text and background. They are able to process more complex images than bottom-up approaches, but difficulties are still encountered at both the detection and segmentation/recognition stages.
The method we propose belongs to the top-down category and consists of two main tasks, as illustrated by the figure below: a text detection task, and a text recognition task applied to the detected text regions. Following the cascade filtering idea, which consists of the sequential processing of data with more and more selective filters, the text detection task is decomposed into two subtasks. These are a text localization step, whose goal is to quickly extract potential text blocks in images with a low rejection rate and a reasonable precision, and a text verification step based on machine learning. Such an approach allows us to obtain high performance with a lower computation cost than other methods. To address the recognition task we propose a multi-hypotheses approach. More precisely, in this approach the text image is segmented two or three times, assuming a different number of classes in the image each time. The different classes, all considered as text candidates, are processed by commercial optical character recognition (OCR) software, and the final result is selected from the generated text string hypotheses using a confidence level evaluation based on language modeling. Additionally, we propose a segmentation method based on Markov random fields to extract more accurate text characters. This methodology allows us to handle background gray-scale multi-modality and unknown text gray-scale values, which are problems often not taken into account in the existing literature. When applied to a database of several hours of sports video, it reduces by more than 50% the word recognition error rate with respect to a standard Otsu binarization step followed by the OCR.

Fig.: Algorithm proposed for text detection and recognition.

TEXT DETECTION
There are two problems in obtaining efficient and robust text detection using machine learning tools. One is how to avoid performing computationally intensive classification on the whole image; the other is how to reduce the variance of character size and gray scale in the feature space before training.

TEXT LOCALIZATION
The first part of the text localization procedure consists of detecting text blocks characterized by short horizontal and vertical edges connected to each other. The second part aims at extracting individual text lines from these blocks.

Candidate text region extraction: Let S denote the set of sites (pixels) in an input image. The task of extracting text-like regions, without recognizing individual characters, can be addressed by estimating at each site s (s ∈ S) in an image I the probability P(T|s, I) that this site belongs to a text block, and then grouping the pixels with high probabilities into regions. To this end, vertical and horizontal edge maps Cv and Ch are first computed from the directional second derivative zeros produced by a Canny filter [20]. Then, according to the type of edge, different dilation operators are used so that vertical edges extend in the horizontal direction while horizontal edges extend in the vertical direction:

Dv(s) = Cv(s) ⊕ Rect_v  and  Dh(s) = Ch(s) ⊕ Rect_h    (1)

The dilation operators Rect_v and Rect_h are defined to have the rectangle shapes 1 × 5 and 6 × 3. (b) and (c) display the vertical and horizontal edges resulting from this process for the video frame shown in (a). The vertical and horizontal edge dilation results are shown in (d) and (e). Since, due to the connections between character strokes, vertical edges contained in text-like regions should be connected with some horizontal edges, and vice versa, we consider only the regions that are covered by both the vertical and horizontal edge dilation results as candidate text regions. Thus, the probability P(T|s, I) can be estimated as P(T|s, I) = Dv(s)Dh(s); (f) illustrates the result of this step. The above text detection procedure is fast and invariant to text intensity changes. Also, ideally, the threshold of the edge detection step can be set in such a way that no true text regions will be rejected. The false alarms resulting from this procedure are often slant stripes, corners, and groups of small patterns, for example human faces.

Fig.: Candidate text region extraction. (a) original image, (b) vertical edges detected in image (a), (c) horizontal edges detected in image (a), (d) dilation result of vertical edges using the 1 × 5 vertical operator, (e) dilation result of horizontal edges using the 6 × 3 horizontal operator, (f) candidate text regions.

Text line localization in candidate text regions: In order to normalize text sizes, we need to extract individual text lines from paragraphs in candidate text regions. This task can be performed by detecting the top and bottom baselines of horizontally aligned text strings. Baseline detection also has two additional purposes. Firstly, it will eliminate false alarms, such as slant stripes, which do not contain any well-defined baselines. Secondly, it will refine the location of text strings in candidate regions that contain text connected with some background objects.

Fig.: Text line localization. (a) candidate text region with located baselines (top and bottom boundaries), (b) the rectangle boundaries of candidate text lines.

Typical characteristics of text strings are then employed to select the resulting regions, and the final candidate text line should satisfy the following constraints: it contains between 75 and 9000 pixels; the horizontal-vertical aspect ratio is more than 1.2; and the height of the region is between 8 and 35 pixels. (b) shows the rectangle boundaries of the candidate text lines. In general, the size of the text can vary greatly (more than 35 pixels high). Larger characters can be detected by using the same algorithm on a scaled image pyramid.
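The candidate-region extraction just described can be sketched compactly with OpenCV. The thresholded Sobel responses used here in place of the Canny-based directional edge maps, as well as the gradient threshold, are simplifications of ours:

import cv2
import numpy as np

def candidate_text_mask(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)     # responds to vertical edges
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)     # responds to horizontal edges
    c_v = (np.abs(gx) > 60).astype(np.uint8)   # vertical edge map Cv
    c_h = (np.abs(gy) > 60).astype(np.uint8)   # horizontal edge map Ch
    # Direction-dependent dilation as in Eq. (1): vertical edges grow
    # horizontally (1 x 5), horizontal edges grow vertically (6 x 3).
    d_v = cv2.dilate(c_v, np.ones((1, 5), np.uint8))
    d_h = cv2.dilate(c_h, np.ones((6, 3), np.uint8))
    return d_v & d_h   # P(T|s, I) ~ Dv(s)Dh(s): sites covered by both maps

mask = candidate_text_mask(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))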
TEXT VERIFICATION
FEATURE EXTRACTION
After the text localization step, each
candidate text line is normalized using
bilinear interpolation into an image I with a
height of 16 pixels. A feature image If is then
computed

from I. The fixed-size input feature vectors
zI for the MLP or SVM are directly extracted
from If on 16 × 16 sliding windows. Since
the gray-scale values of text and background
are unknown, we tested four alternative
features invariant to gray-scale changes.
Gray-scale spatial derivatives feature:
To measure the contribution of
contrast in the text verification process, the
spatial derivatives of the image brightness
function in both the X and Y directions are
computed at each site s.
Distance map features:
Since the contrast of text characters
is background dependent, the brightness
spatial derivatives may not be a stable
feature for text verification. Thus, we
considered as a second feature image the
distance map DM, which only relies on the
position of strong edges in the image.
Constant gradient variance features:
To avoid the need for a threshold, we
propose a new feature, called the constant
gradient variance (CGV), which normalizes the
contrast at a given point using the local
contrast variance computed in a neighborhood
of this point. More formally, let g(s) denote the
gradient magnitude at site s, and let LM(s)
(resp. LV(s)) denote the local mean (resp. the
local variance) of the gradient, defined by

LM(s) = (1/|Gs|) Σ_{si ∈ Gs} g(si)  and  LV(s) = (1/|Gs|) Σ_{si ∈ Gs} (g(si) − LM(s))²

where Gs is a 9 × 9 neighborhood around s.
Then, the CGV value at site s is defined as

CGV(s) = (g(s) − LM(s)) · √(GV / LV(s))

where GV denotes the global gradient
variance computed over the whole image
grid S. It can be shown that, statistically,
each local region in the CGV image has a
zero mean and the same contrast variance,
equal to the global gradient variance GV.
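A small NumPy/SciPy sketch of the CGV feature follows; the 9 × 9 window matches the text, while the epsilon guard against flat (zero-variance) regions is an addition of ours:

import numpy as np
from scipy.ndimage import uniform_filter

def cgv(image, size=9, eps=1e-6):
    gy, gx = np.gradient(image.astype(float))
    g = np.hypot(gx, gy)                      # gradient magnitude g(s)
    lm = uniform_filter(g, size)              # local mean LM(s) over 9 x 9
    lv = uniform_filter((g - lm) ** 2, size)  # local variance LV(s) (approx.)
    gv = g.var()                              # global gradient variance GV
    return (g - lm) * np.sqrt(gv / (lv + eps))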

DCT COEFFICIENTS
The last feature vector we tested is
composed of discrete cosine transform
(DCT) coefficients computed over 16 × 16
blocks using a fast DCT algorithm. These
frequency-domain features are commonly
used in texture analysis.
MULTI-LAYER PERCEPTRONS (MLPs)
MLPs are a widely used neural
network, usually consisting of multiple
layers of neurons: one input layer, hidden
layers and one output layer. Each neuron in
the hidden or output layers computes a
weighted sum of its inputs (each output of
the neurons in the previous layer) and then
passes this sum through a non-linear transfer
function to produce its output. In the binary
classification case, the output layer usually
consists of one neuron whose output
encodes the
class membership. In theory, MLPs can
approximate any continuous function, and
the goal in practice consists of estimating
the parameters of the best approximation
from a set of training samples. This is
usually done by optimizing a given criterion
using a gradient descent algorithm.
SUPPORT VECTOR MACHINES (SVMs)
SVMs are a technique motivated by
statistical learning theory which has shown
its ability to generalize well in high-dimensional
spaces, such as those spanned
by the texture patterns of characters. The
key idea of SVMs is to implicitly project the
input space into a higher dimensional space
(called the feature space) where the two classes
are more linearly separable. This projection,
denoted Φ, is implicit since the learning and
decision process only involves an inner dot
product in the feature space, which can be
directly computed using a kernel K defined
on the input space. In short, given m labeled
training samples (x1, y1), …, (xm, ym),
where yi = ±1 indicates the positive and
negative classes, and assuming there exists a
hyperplane defined by w·Φ(x) + b = 0 in the
feature space separating the two classes, it
can be shown that w can be expressed as a
linear combination of the training samples,
i.e. w = Σj αj yj Φ(xj) with αj ≥ 0. The
classification of an unknown sample z is
thus based on the sign of the SVM function:

G(z) = Σj αj yj Φ(xj)·Φ(z) + b = Σj αj yj K(xj, z) + b

where K(xj, z) = Φ(xj)·Φ(z) is called the kernel
function. The training of an SVM consists of
estimating the αj (and b) to find the
hyperplane that maximizes the margin,
which is defined as the sum of the shortest
distances from the hyperplane to the closest
positive and negative samples.
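Once the weights αj and the bias b have been obtained from training, the decision function G(z) is a few lines of code. The RBF kernel below is one common choice, and the trained values are assumed to be given; this is an illustration, not the paper's implementation:

import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    # K(x, z) = exp(-gamma * ||x - z||^2)
    return np.exp(-gamma * np.sum((x - z) ** 2, axis=-1))

def svm_decision(z, support_x, support_y, alpha, b, gamma=0.5):
    # G(z) = sum_j alpha_j * y_j * K(x_j, z) + b; its sign gives the class
    return np.sum(alpha * support_y * rbf_kernel(support_x, z, gamma)) + b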
TEXT-LINE VERIFICATION
In the text verification step, the
feature vectors discussed above are extracted from
the normalized candidate text line on 16 × 16
sliding windows with a slide step of 4 pixels
and provided to the classifier. Thus, for each
candidate text line r, we obtain a set of
feature vectors Zr = (zr1, …, zrl). The
confidence of the whole candidate text line r
is defined as

Conf(r) = Σ_{i=1..l} G(zri) · (1/(√(2π) σ)) · exp(−di² / (2σ²))

where di is the distance from the geometric
center of the i-th sliding window to the
geometric center of the text line r, and σ is a
scale factor depending on the text line length.
TEXT RECOGNITION
Most of the previous methods that
addressed text recognition in complex
images or video worked on improving the
binarization method before applying an
OCR module. However,
an optimal binarization might be difficult to
achieve when the background is complex
and the gray-scale distribution exhibits
several modes. Moreover, the gray-scale
value of the text may not be known in advance.
A segmentation algorithm that classifies the
pixels into K classes is applied on the text
image. Then, for each class label, a binary

text image hypothesis is generated by


assuming that this label corresponds to text
and all other labels correspond to
background. This binary image is then
passed through a connected component
analysis
and
gray-scale
consistency
constraint module and forwarded to the
OCR system, producing a string hypothesis.
Rather than trying to estimate the right
number of classes K, e.g. using a minimum
description length criterion, we use a more
conservative approach that varies K from 2
to 3 (resp. 4), generating in this way five
(resp. nine) string hypotheses from which
the text result is selected.
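A minimal sketch of this hypothesis generation, using K-means clustering of the gray levels (which the paper's conclusions report works well in this role); varying K over 2 and 3 and taking each label in turn as the text layer yields the five binary hypotheses mentioned above:

import numpy as np
from sklearn.cluster import KMeans

def segmentation_hypotheses(gray_text_line):
    pixels = gray_text_line.reshape(-1, 1).astype(float)
    hypotheses = []
    for k in (2, 3):                  # assume 2 then 3 classes in the image
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
        labels = labels.reshape(gray_text_line.shape)
        for label in range(k):        # each class is tried as the text layer
            hypotheses.append(labels == label)
    return hypotheses                 # five binary images, one per hypothesis

Each binary image would then pass through the connected component analysis, the gray-scale consistency constraint and the OCR, and the best string hypothesis is kept.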

Text recognition scheme.

Segmentation and post-processing steps

DISCUSSION AND CONCLUSIONS


This paper presents a general scheme
for extracting and recognizing embedded
text of any gray-scale value in images and
videos. The method is split into two main
parts: the detection of text lines, followed by
the recognition of text in these lines.
Applying machine learning methods for text
detection encounters difficulties due to
character size and gray-scale variations and
heavy computation cost. To overcome these
problems, we proposed a two-step
localization/verification scheme. The first
step aims at quickly locating candidate text
lines, enabling the normalization of
characters into a unique size. In the
verification step, a trained SVM or MLP is
applied on background-independent features
to remove the false alarms.
Experiments showed that the
proposed scheme improves the detection
result at a lower cost in comparison with the
same machine learning tools applied without
size normalization, and that an SVM was
more appropriate than an MLP to address
the text texture verification problem. The text
recognition method we propose embeds the
traditional character segmentation step
followed by an OCR algorithm within a
multiple hypotheses framework. A new
gray-scale consistency constraint (GCC)
algorithm was proposed to improve
segmentation results. The experiments that
were conducted on approximately 1 h of
sports video demonstrate the validity of our
approach.
More specifically, when
compared to a baseline system consisting of
the standard Otsu binarization algorithm,
the GCC postprocessing step was able to
reduce the character and word error rates by
more than 20%, showing its ability to
remove burst-like noise that greatly disturbs
the OCR software. Moreover, added to the
multiple hypotheses framework, the whole
system yielded approximately 97% character
recognition rate and a more than 93% word
recognition rate on our database, which
constitutes a reduction of more than 50%
w.r.t. the baseline system. This clearly
shows that (i) several text images may be
better modeled with 3 or 4 classes rather
than using the usual 2 class assumption (ii)
multiple segmentation maps provide
alternative solutions and (iii) the proposed
selection algorithm based on language
modeling and OCR statistics is often able to
pick up the right solution. We proposed to
use a maximum a posteriori criterion with a
MRF modeling to perform the segmentation.
Used as a traditional binarization algorithm,
it performed better than Otsu's method.
However, embedded in the multi-hypotheses
system with GCC, it yielded similar results
to the K-means. Thus, the latter is preferred
for real applications since it runs faster. The
performance of the proposed methodology is
good enough to be used in a video
annotation and indexing system. In the
context of the ASSAVID European project,
it was integrated with other components
(shot detector, speech recognizer, sports and
event recognizers, etc.) in a user interface
designed to produce and access sports video
annotation. A simple complementary
module combining the results from
consecutive frames containing the same text
was added. User experiments with librarians
at the BBC showed that the text detection
and recognition technology produced robust
and useful results, i.e. did not produce many
false alarms and the recognized text was
accurate. The same proposed scheme is
currently used in the CIMWOS project to
index French news programs.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.304-306.

Smart Power \ Energy Metering


R. Balaji1, K.S. Ashwin Kumar2 and M. Vignesh1
1

ECE, 2EEE
veltech Dr RR & Dr SR technical university, Chennai.
Abstract - Energy conservation has taken a new leap in the past
decade, but the main problem is the excess use of energy in
each household. We present a unique method for
charging hyperactive customers an extra charge.
We use a power measurement unit instead of an energy meter, so the rate
at each instant of time is measured, giving an indication of the
hyperactivity of a customer. This requires only a few components,
such as a sampler, an ADC and controllers.
Our scheme can easily be implemented in each household on a
proper basis.

I. EXISTING SYSTEM
The most common type of electricity meter is the
electromechanical induction watt-hour meter.
The electromechanical induction meter operates by
counting the revolutions of an aluminum disc which is
made to rotate at a speed proportional to the power. The
number of revolutions is thus proportional to the energy
usage. It consumes a small amount of power, typically
around 2 watts.
The metallic disc is acted upon by two coils. One coil is
connected in such a way that it produces a magnetic flux
in proportion to the voltage and the other produces a
magnetic flux in proportion to the current. The field of the
voltage coil is delayed by 90 degrees using a lag coil.[17]
This produces eddy currents in the disc and the effect is
such that a force is exerted on the disc in proportion to the
product of the instantaneous current and voltage. A
permanent magnet exerts an opposing force proportional
to the speed of rotation of the disc. The equilibrium
between these two opposing forces results in the disc
rotating at a speed proportional to the power being used.
The disc drives a register mechanism which integrates the
speed of the disc over time by counting revolutions, much
like the odometer in a car, in order to render a
measurement of the total energy used over a period of
time.
The type of meter described above is used on a single-phase
AC supply. Different phase configurations use
additional voltage and current coils.

Fig. 1

Three-phase electromechanical induction meter, metering


100 A 230/400 V supply. Horizontal aluminum rotor disc
is visible in center of meter
The aluminum disc is supported by a spindle which has a
worm gear which drives the register. The register is a
series of dials which record the amount of energy used.
The dials may be of the cyclometer type, an odometer-like
display that is easy to read where for each dial a single
digit is shown through a window in the face of the meter,
or of the pointer type where a pointer indicates each digit.
With the dial pointer type, adjacent pointers generally
rotate in opposite directions due to the gearing
mechanism.
The amount of energy represented by one revolution of
the disc is denoted by the symbol Kh, which is given in
units of watt-hours per revolution. The value 7.2 is
commonly seen. Using the value of Kh, one can
determine the power consumption at any given time by
timing the disc with a stopwatch. If the time in seconds
taken by the disc to complete one revolution is t, then the
power in watts is P = 3600 × Kh / t. For example, if Kh = 7.2,
as above, and one revolution took place in 14.4 seconds,
the power is 1800 watts. This method can be used to
determine the power consumption of household devices
by switching them on one by one.
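The stopwatch method amounts to a one-line computation, sketched here for convenience:

def disc_power_watts(kh, t_seconds):
    # One revolution represents Kh watt-hours = Kh * 3600 watt-seconds,
    # delivered over t seconds, so P = 3600 * Kh / t.
    return 3600.0 * kh / t_seconds

print(disc_power_watts(7.2, 14.4))   # -> 1800.0 W, matching the example above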


Most domestic electricity meters must be read manually,


whether by a representative of the power company or by
the customer. Where the customer reads the meter, the
reading may be supplied to the power company by
telephone, post or over the internet. The electricity
company will normally require a visit by a company
representative at least annually in order to verify
customer-supplied readings and to make a basic safety
check of the meter.
In an induction-type meter, creep is a phenomenon that
can adversely affect accuracy; it occurs when the meter
disc rotates continuously with potential applied and the
load terminals open-circuited. A test for error due to creep
is called a creep test.
Two standards govern meter accuracy, ANSI C12.20 for
North America and IEC 62053.
II. DRAWBACKS OF THE EXISTING SYSTEM
1. One of the principal disadvantages of the existing
   system is that, for power measurement, a representative
   has to come to each household to check the power
   consumption.
2. Its inability to detect electricity theft.
3. There are chances of tampering with meters by using
   external magnets.
4. It is not capable of supporting smart grid
   technology.
III. THE PROPOSED SYSTEM
A. Block Diagram

Fig. 2

B. Basic Household Power Consumption Table

Table 1
Appliance | Watts
Air Conditioner: Room* | 1000
Air Conditioner: Central* | 2000 - 5000
Air Conditioners rated in tons: Per Ton | 3517
Air Conditioners rated in tons: e.g., 5 Ton AC Unit | 17585
Blender | 300
Blow Dryer | 1000 - 1500
CB Radio | 5
CD Player | 15 - 30
Ceiling Fan | 10 - 50
Computer: Laptop | 20 - 75
Computer: Desktop PC | 80 - 200
Computer: Printer | 100
Coffee Maker | 800
Clock Radio | 1
Dishwasher | 1200 - 1500
Dryer (Clothes): Electric* | 4000
Dryer (Clothes): Gas Heated | 300 - 400
Electric Blanket | 200
Electric Clock | 1
Electric Frying Pan | 1200
Freezer: Conventional 14cf (15 hrs/day runtime) | 445
Freezer: Sun Frost 19cf | 112
Furnace Blower | 300 - 1000
Garage Door Opener | 350
Heater: Engine Block* | 150 - 1000
Heater: Portable* | 1500
Heater: Waterbed* | 400
Heater: Stock Tank* | 100
Hot Plate | 1200
Iron | 1000
Lightbulbs: Incandescent 100 W | CFL equivalent 23 W
Lightbulbs: Incandescent 75 W | CFL equivalent 20 W
Lightbulbs: Incandescent 60 W | CFL equivalent 15 W
Lightbulbs: Incandescent 40 W | CFL equivalent 11 W
Microwave | 600 - 1500
Popcorn Popper | 250
Refrigerator/Freezer (runtime in hours/day): Conventional 20cf (15) | 540
Refrigerator/Freezer: Conventional 16cf (15) | 475
Refrigerator/Freezer: Sun Frost 16cf DC (7) | 112
Refrigerator/Freezer: Sun Frost 12cf DC (7) | 70
Refrigerator/Freezer: Conserv 10.5cf (8) | 60
Refrigerator/Freezer: Conserv 7.5cf (8) | 50
Satellite Dish | 30
Sewing Machine | 100
Shaver | 15
Sink Waste Disposal | 450
Stereo | 10 - 30
Table Fan | 10 - 25
Toaster | 800 - 1500
Tools: Weed Eater | 500
Tools: 1/4" drill | 250
Tools: 1/2" drill | 750
Tools: 1" drill | 1000
Tools: 9" disc sander | 1200
Tools: 3" belt sander | 1000
Tools: 12" chain saw | 1100
Tools: 14" band saw | 1100
Tools: 7 1/4" circular saw | 900
Tools: 8 1/4" circular saw | 1400
Vacuum Cleaner: Upright | 200 - 700
Vacuum Cleaner: Hand | 150
VCR | 40
Waffle Iron | 1200
Washing Machine | 500

C. Typical House Power Consumption from the Above Chart
A village house consumes approximately 110-150 kWh.
A city one-bedroom house consumes approximately 500-600 kWh.
A two-bedroom apartment with multiple air conditioners consumes approximately 600-800 kWh.
Electric vehicles consume approximately 240 to 960 watts.

D. Block Diagram

Fig. 3

Thus, by using this technology and our metering system we can
easily differentiate between hyperactive consumers and normal
consumers; using the sirens we can warn customers of over-usage,
and the utility company can also charge the over-consumer
extra, indirectly making them reduce their ultimate
consumption.
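As an illustration of how the sampler/ADC/controller chain could flag a hyperactive consumer, here is a minimal sketch; the threshold value and the function names are our own assumptions, not part of the meter design above:

HYPERACTIVE_LIMIT_W = 2000.0   # assumed tariff threshold, for illustration only

def mean_power(v_samples, i_samples):
    # Instantaneous power p[n] = v[n] * i[n], averaged over the sample window.
    pairs = list(zip(v_samples, i_samples))
    return sum(v * i for v, i in pairs) / len(pairs)

def classify_customer(v_samples, i_samples):
    if mean_power(v_samples, i_samples) > HYPERACTIVE_LIMIT_W:
        return "hyperactive: sound siren and apply extra charge"
    return "normal tariff"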

IV. CONCLUSION
In this paper we have presented a model of a meter which
can easily differentiate between normal users and
hyperactive users in order to differentiate power charging.
The system not only warns the user of extra power
consumption but also helps to differentiate them from
other users.


Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.307-310.

Transparent Electronics
T. Gopala Krishnan, G.D. Vigneshvar and V.R. Arun
Veltech Dr. RR & Dr. SR Technical University
Abstract - Transparent electronics is an emerging science
and technology field focused on producing invisible
electronic circuitry and opto-electronic devices. Applications
include consumer electronics, new energy sources, and
transportation. The first scientific goal of this technology
must be to discover, understand, and implement transparent
high-performance electronic materials. The second goal is
their implementation and evaluation in transistor and circuit
structures. The third goal relates to achieving application-specific
properties, since transistor performance and
materials property requirements vary depending on the
final product device specifications.

I. INTRODUCTION
During the past 10 years, the classes of materials available
for transparent electronics applications have grown
dramatically. Historically, this area was dominated by
transparent conducting oxides (oxide materials that are
both electrically conductive and optically transparent)
because of their wide use in antistatic coatings, touch
display panels, solar cells, flat panel displays, heaters,
defrosters, smart windows and optical coatings. All
these applications use transparent conductive oxides as
passive electrical or optical coatings
II. TRANSPARENT ELECTRONICS DEVICES
In order to produce a transparent-electronics-based
system, appropriate materials must be selected,
synthesized, processed, and integrated together in order to
fabricate a variety of different types of devices. In turn,
these devices must be chosen, designed, fabricated, and
interconnected in order to construct circuits, each of
which has to be designed, simulated, and built in such a
way that they appropriately function when combined
together with other circuit and ancillary non-circuit
subsystems. Thus, this product flow path involves
materials devices circuits systems, with each
level of the flow more than likely involving multifeedback iterations of selection, design, simulation,
fabrication,
integration,
characterization,
and
optimization. From this perspective, devices constitute a
second level of the product flow path. The multiplicity,
performance, cost, manufacturability, and reliability of
available device types will dictate the commercial product
space in which transparent electronics technology will be
able to compete

III. PASSIVE, LINEAR DEVICES


A passive device absorbs energy, in contrast to an active
device, which is capable of controlling the flow of energy.
A linear device is distinguished by the fact that its
input-output characteristics are describable using a linear
mathematical relationship. The three passive, linear
devices of interest are resistors, capacitors, and inductors.
IV. COMBINING OPTICAL TRANSPARENCY WITH
ELECTRICAL CONDUCTIVITY
Transparent conductors are neither 100% optically
transparent nor metallically conductive. From the band
structure point of view, the combination of the two
properties in the same material is contradictory: a
transparent material is an insulator which possesses
completely filled valence and empty conduction bands,
whereas metallic conductivity appears when the Fermi
level lies within a band with a large density of states,
providing a high carrier concentration. Efficient
transparent conductors find their niche in a compromise
between a sufficient transmission within the visible
spectral range and a moderate, but useful in practice,
electrical conductivity. This combination is achieved in
several commonly used oxides: In2O3, SnO2, ZnO and
CdO. In the undoped stoichiometric state, these materials
are insulators with an optical band gap of about 3 eV. To
become a transparent conducting oxide (TCO), these TCO
hosts must be degenerately doped to displace the Fermi
level up into the conduction band. The key attribute of
any conventional n-type TCO host is a highly dispersed
single free-electron-like conduction band. Degenerate
doping then provides both (i) the high mobility of extra
carriers (electrons) due to their small effective mass and
(ii) low optical absorption due to the low density of states
in the conduction band. The high energy dispersion of the
conduction band also ensures a pronounced Fermi energy
displacement up above the conduction band minimum, the
Burstein-Moss (BM) shift. The shift helps to broaden the
optical transparency window and to keep the intense
optical transitions from the valence band out of the visible
range. This is critical in oxides which are not transparent
throughout the entire visible spectrum, for example in
CdO, where the optical (direct) band gap is 2.3 eV.
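For reference, in the simplest one-band parabolic approximation the magnitude of this Burstein-Moss shift follows the standard free-electron expression (our addition, with m* the conduction-band effective mass and n_e the electron concentration):

\[
E_{\mathrm{BM}} = \frac{\hbar^{2}}{2m^{*}}\left(3\pi^{2} n_{e}\right)^{2/3}
\]

so the shift grows with doping level and is largest in materials with a small effective mass.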
Fig. 1: (a) Schematic electronic band structure of a TCO host: an insulator with a band gap Eg and a dispersed parabolic conduction band which originates from interactions between metal s and oxygen p states. (b) Schematic band structure and density of states of a TCO, where degenerate doping displaces the Fermi level (EF) via a Burstein-Moss shift, EBM, making the system conducting. The shift gives rise to inter-band optical transitions from the valence band.

Fig. 1


V. TRANSPARENT ELECTRONICS DEVICES
As outlined in Section II, the product flow path involves
materials → devices → circuits → systems, and devices
constitute the second level of this flow: their multiplicity,
performance, cost, manufacturability, and reliability will
dictate the commercial product space in which transparent
electronics technology will be able to compete. Thus, an
assessment of the device toolset available to transparent
electronics is of fundamental interest, and is the central
theme of this chapter. Passive, linear devices (resistors,
capacitors, and inductors) comprise the first topic discussed.
Passive devices are usually not perceived to be as glamorous
as active devices, but they can be enabling from a circuit
system perspective, and they are also the simplest device
types from an operational point of view. Together, these two
factors provide the rationale for considering this topic
initially. Next, two-terminal electronic devices (pn
junctions, Schottky barriers, heterojunctions, and
metal-insulator-semiconductor (MIS) capacitors) constitute the
second major topic. The motivation for this topical
ordering is again associated with their relative operational
complexity, rather than their utility. The third and final
major topic addressed is transistors. This is the most
important matter considered in this chapter. Most of this
discussion focuses on TTFTs, since they are perceived to
be the most useful type of transistor for transparent
electronics. Additionally, a very brief overview of
alternative transistor types (static-induction transistors,
vertical TFTs, hot electron transistors, and nanowire
transistors) is included. This is motivated by recognizing
the desirability of achieving higher operating frequencies
than are likely obtainable using TTFTs with minimum
gate lengths greater than ~2-10 μm, a probable lower-limit
dimensional constraint for many types of low-cost,
large-area applications. Alternative transistors such as
these offer possible routes for reaching higher operating
frequencies in the context of transparent electronics.

VI. TRANSPARENT THIN-FILM TRANSISTORS (TTFTs)
TTFTs constitute the heart of transparent electronics. The
first two sections focus on ideal and non-ideal behavior of
n-channel TTFTs. Next, n-channel TTFT stability is
considered. Finally, issues related to alternative device
structures (double-gate TTFTs) and the realization of
p-channel TTFTs are discussed.
Two possible transparent thin-film transistor (TTFT)
device structures are (a) a staggered, bottom-gate structure and (b) a
coplanar, top-gate structure. Whereas a
conventional TFT employs a narrow band gap, opaque
semiconductor, a highly insulating, wide band gap
transparent semiconductor is used in a TTFT.
VII. ACTIVE-MATRIX OLED

Fig. 2

Active-matrix OLED (active-matrix organic light-emitting


diode or AMOLED) is a display technology for use in
mobile devices and televisions. OLED describes a
specific type of thin film display technology in which
organic compounds form the electroluminescent material,
and active matrix refers to the technology behind the
addressing of pixels. As of 2011, AMOLED technology is
used in mobile phones, media players and digital
cameras [1] and continues to make progress toward low-power, low-cost and large-size (for example, 40-inch)
applications.


The amount of power the display consumes varies


significantly depending on the color and brightness
shown. As an example, one commercial QVGA OLED
display consumes 3 watts while showing black text on a
white background, but only 0.7 watts showing white text
on a black background
X. APPLICATIONS
 Printed electronics
 Low-cost electronics

Fig. 3

Magnified image of the AMOLED screen on the Google


Nexus One smartphone using the RGBG system of the
PenTile Matrix Family.

 Disposable electronics
 Large-area electronics
 Macroelectronics
 Flexible electronics

VIII. TECHNICAL
An active matrix OLED display consists of a matrix of
OLED pixels that generate light upon electrical activation
that have been deposited or integrated onto a thin film
transistor (TFT) array, which functions as a series of
switches to control the current flowing to each individual
pixel.[5]
Typically, this continuous current flow is controlled by at
least two TFTs at each pixel, one to start and stop the
charging of a storage capacitor and the second to provide
a voltage source at the level needed to create a constant
current to the pixel and eliminating need for the very high
currents required for passive matrix OLED operation.[6]
TFT backplane technology is crucial in the fabrication of
AMOLED displays. Two primary TFT backplane
technologies, namely polycrystalline silicon (poly-Si) and
amorphous silicon (a-Si), are used today in AMOLEDs.
These technologies offer the potential for fabricating the
active matrix backplanes at low temperatures (below
150C) directly onto flexible plastic substrates for
producing flexible AMOLED displays.

 Wearable electronics
Reversible Display, Front Drive Structure for Color
Electronic
Paper,
Color
Micro
encapsulated
Electrophoretic Display, Novel Display Structure Front
Drive Structure. Indium oxide nano wire mesh as well as
indium oxide thin films were used to detect different
chemicals, including CWA simulants.
XI. FUTURE SCOPE
It should be apparent from the discussion that although
much progress has been made in developing new
materials and devices for high performance transparent
solar cells, there is still plenty of opportunity to study and
improve device performance and fabrication techniques
compared with the nontransparent solar cell devices. In
particular, the stability of transparency solar cells has not
been studied yet. Solution-processable transparent PSC s
have become a promising emerging technology for
tandem solar cell application to increase energy
conversion efficiency. The transparency of solar cells at a
specific light band will also lead to new applications such
as solar windows. The field of energy harvesting is
gaining momentum by the increases in gasoline price and
environment pollution caused by traditional techniques.
Continued breakthroughs in materials and device
performance, accelerate and establish industrial
applications. It is likely that new scientific discoveries
and technological advances will continue to cross fertilize
each other for the foreseeable future

XII. CONCLUSION AND REMARKS


Oxides represent a relatively new class of semiconductor materials applied to active devices such as TFTs. The combination of high field-effect mobility and low processing temperature for oxide semiconductors makes them attractive for high performance electronics on flexible plastic substrates. The marriage of two rapidly evolving areas of research, OLEDs and transparent electronics, enables the realization of novel transparent OLED displays. This appealing class of see-through devices will have a great impact on human-machine interaction in the near future. EC device technology for the built environment may emerge as one of the keys to combating the effects of global warming, and this novel technology may also serve as an example of the business opportunities arising from the challenges posed by climate change.
REFERENCES
[1] J. F. Wager, D. A. Keszler and R. E. Presley, Transparent Electronics, Springer.
[2] A. Facchetti and T. J. Marks (eds.), Transparent Electronics: From Synthesis to Applications, Wiley.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.311-313.

A Mobile Healthcare Service


A. Devendran
Abstract - Nowadays there are many healthcare questionnaires which help to check our health status easily. The healthcare questionnaires are, however, usually accessible only through Web pages or hardcopies. According to World Health Organization reports, chronic diseases are by far the leading cause of mortality in the world, which places an ever greater strain on the world's healthcare industry. At the same time, mobile phones have been gradually adopted for solving some tough healthcare issues which are hard to tackle with conventional medical strategies. Since the mobile phone is the most easily available electronic device supporting a variety of technical functions for human daily activities, efforts are being made to explore its roles in the delivery of healthcare services and the promotion of personal health. Some typical technical approaches leading to several patented healthcare mobile phones are outlined and digested here. A mobile healthcare questionnaire service is a useful way for users to check their health status anytime and anywhere.

Keywords - Mobile phone, health care, wearable medical device, low cost medicine.

I. INTRODUCTION
Nowadays there are many healthcare questionnaires which have been used as simple indicators for diseases. As a matter of fact, sometimes and in some cases, we can simply check our health status by answering some questions and summing up the points of the questionnaires without any help from disease experts. The results of healthcare questionnaires are reliable if the questionnaires are deliberately designed by experts and the diagnosis results are backed by reliable validations and tests. There are many other healthcare questionnaires for checking whether a person may have a disease; sometimes they provide decisive clues in diagnosing a disease. The healthcare questionnaires are, however, usually accessible only through Web pages or hardcopies. As a consequence, they have limitations in terms of usage. In order to enhance the use of questionnaires in a mobile environment, the ways of accessing questionnaires must be improved so that users can access and reply to them anytime and anywhere. Such ubiquitous access to the questionnaire service has become possible with the progress of IT technologies, which enable users to fetch and reply to healthcare questionnaires on their mobile devices.
The Mobile Healthcare Questionnaire Service provides healthcare questionnaires and delivers the analysis results of the replies from users via mobile devices. There are some studies on questionnaire systems; however, most of them focus on how to distribute questionnaires to remote participants and gather information from the replies. K. Morton developed a questionnaire service system for the analysis of target groups; this system deals with the questionnaire results of a group of people, but not those of a single user. J. Cheng et al. developed a general-purpose questionnaire service system for ubiquitous environments; it provides questionnaires to users anytime and anywhere, but it also targets gathering information from a group of people. By and large, the previous questionnaire service studies are inadequate for mobile healthcare questionnaire services that check the health status of individual users.
The proposed framework provides the healthcare questionnaire service to users. In this framework users can access the questionnaire services and check their health status anytime and anywhere via mobile devices with the support of Web technologies. Applications ported to a mobile device communicate with server-side modules through Web protocols. For this, all the components are developed in the form of Web services, and sometimes the Web services are bundled to build a composition of Web services for applications. The framework is not designed just for the delivery of a single health index for a disease; if necessary, it can deliver rich information on the health status of users.
The framework can be incorporated with other u-healthcare services as well. Many u-healthcare services adopt learning-based diagnosis methods and often use only bio-signals for the diagnosis of some diseases. The framework can extract and provide many useful features from the questionnaires, so it can help the diagnosis methods improve the accuracy of diagnosis.
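As a concrete illustration of the points-based analysis described above, the following minimal Python sketch sums the points of a reply and maps the total to a status label, as a server-side Web service might; the questionnaire items, point values, thresholds and the score_reply name are hypothetical examples, not part of the paper's actual service.

# Minimal sketch of server-side questionnaire scoring, assuming a
# points-based questionnaire as described above. All item texts,
# point values, and thresholds here are hypothetical examples.
import json

QUESTIONNAIRE = {
    "q1": {"text": "Do you smoke?", "yes": 3, "no": 0},
    "q2": {"text": "Do you exercise weekly?", "yes": 0, "no": 2},
    "q3": {"text": "Family history of diabetes?", "yes": 2, "no": 0},
}

def score_reply(reply: dict) -> dict:
    """Sum the points of a reply and map the total to a status label."""
    total = sum(QUESTIONNAIRE[q][answer] for q, answer in reply.items())
    if total <= 2:
        status = "low risk"
    elif total <= 4:
        status = "moderate risk"
    else:
        status = "see a doctor"
    return {"score": total, "status": status}

# A Web-service endpoint would return this JSON to the mobile client.
print(json.dumps(score_reply({"q1": "no", "q2": "no", "q3": "yes"})))
# -> {"score": 4, "status": "moderate risk"}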

II. THE RECENT DEVELOPMENT IN HEALTH MONITORING
At the present stage, health monitoring is mostly implemented in hospitals or healthcare centers. However, this increases the cost of treating ailments, and at the same time the treatment of chronic diseases requires long-term monitoring of health data. To some extent, the current resource allocation is unreasonable, and the requirement for mobile monitoring is increasing rapidly.

In mobile health monitoring applications, the patients are located at a distance and are mobile. Small portable devices such as mobile phones, personal digital assistants and watches, rather than fixed equipment, are used for collecting and processing health information. Transmission technologies such as Bluetooth, USB, the Global System for Mobile Communication (GSM) and the General Packet Radio Service (GPRS) are used to communicate information between patients and healthcare providers. To meet the increasing demands for health monitoring and healthcare management, many research projects and public administrative reforms are being carried out worldwide. Since mobile phones integrate various functions, such as personal information management, cameras and application platforms, and 3G mobile phones make it possible to transfer large data such as photos, movies and application files, many large enterprises, research institutes and universities are applying mobile phone based infrastructure and mature IT technologies to provide new services for mobile health monitoring and daily healthcare management.

III. TYPICAL USE OF MOBILE PHONE IN HEALTHCARE
According to research in recent years, the most common use of the mobile phone in healthcare is a combination of sensor technology, such as home-based patient monitoring devices, with web-based programs. The overall system implementation is structured into three functional layers: sensing, communication, and management (Fig. 1). The sensing layer implements mobile monitoring of health data, signal processing, and data analysis. The communication layer includes a short-range wireless connection via Bluetooth, or other short-range communication technologies, and a worldwide wireless connection via a mobile phone. The management layer carries out data processing and management tasks, mostly through the Internet. Researchers have proposed several systems based on these three layers.

Fig. 1

IV. RECENT PATENTS OF MOBILE PHONE IN HEALTHCARE
Some applications of mobile phones used for healthcare are emerging in the commercial field. The mobile phone can be used on its own or combined with other technologies. Obesity is a major health challenge, with over 65% of Indian adults either overweight or obese. Some mobile phone applications allow users to self-monitor their caloric balance in real time and help them track their behaviors related to weight management. Applications such as calorie counting can be supported by software on the phone, and customized meal recommendations can be delivered to the user's mobile phone during the day. The phone camera can be used to take a picture of a meal, which is sent to a dietician who interprets the meal content and makes recommendations about dietary needs.
In the following part, according to the innovativeness of the inventors' work, we have chosen several patents related to the mobile phone used in healthcare. Some of them have been applied in several areas throughout the world.

V. SYSTEMS MODIFYING BATTERY PACK OF MOBILE PHONE
The battery pack of a mobile phone is replaceable, so users can choose whether or not to use a special battery pack. By modifying the battery pack, researchers have provided several systems.
A blood sugar tester and data uploading method was disclosed by Lee and Kim, which later led to a product of Health Company and is perhaps the world's first all-in-one diabetes phone (Fig. 2). According to that invention, two embodiments are provided. One is a mobile phone integrated with a blood sugar test function, and the other is a mobile phone with a connection terminal for a blood sugar test adaptor used to perform blood sugar tests. Both can transmit the measured blood sugar level to a blood sugar level administration server. In the former, a strip connector, into which a strip carrying a blood sample is inserted, is integrated into the battery pack, while in the latter the strip is located at an outer face of the adapter. Furthermore, in the former the battery pack includes a sensor part for measuring the electric current corresponding to the glucose level, a temperature sensor part, and a signal conversion part for converting a measured electric current into a glucose value with reference to the measured temperature, while in the latter all of these are integrated into the adapter. The method thus provides a blood sugar test adaptor that allows a conventional mobile phone to be used as a blood sugar test device.

Fig. 2. (a) Diabetes phone, a convergence product of mobile phone and glucose meter; (b) regardless of phone model, it works with the mobile phone through the phone connector.

The health conscious phone designed by Lee enables users to track their health through nutrition and fitness; users of this phone with similar goals, such as losing weight, network together to motivate one another and hold each other accountable for their workout sessions. The phone is capable of recognizing foods eaten by their unique chemical signature. It tracks intake wherever one goes, and will periodically analyze the information to let one know what food groups are missed.
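The signal conversion part described above maps a measured current to a glucose value with reference to temperature; the Python sketch below illustrates the idea with entirely assumed calibration constants (no real meter uses these numbers).

# Illustrative current-to-glucose conversion with temperature
# compensation, as performed by the signal conversion part described
# above. All calibration constants are assumed for illustration.

SLOPE = 25.0        # assumed mg/dL per microampere at 25 degrees C
TEMP_COEFF = 0.01   # assumed fractional correction per degree C

def glucose_mg_dl(current_ua: float, temp_c: float) -> float:
    """Convert measured current (uA) to glucose, correcting for temperature."""
    correction = 1.0 + TEMP_COEFF * (temp_c - 25.0)
    return SLOPE * current_ua / correction

print(f"{glucose_mg_dl(4.2, 30.0):.1f} mg/dL")  # 4.2 uA at 30 C -> 100.0 mg/dL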

VI. HEALTHCARE RELATED TO THE ENVIRONMENT
Given that some chronic diseases, such as asthma, are largely influenced by environmental factors, for instance the air pollution index, temperature and moisture, an invention providing a method, an internet platform and a mobile phone for analyzing and managing the health state and environment of a patient was disclosed by the Taiwan Chest Disease Association. A user may complete a physical state information survey provided by the software and send a message via a mobile phone to an internet platform. The internet platform determines the present location of the user, obtains environment information for the user's present location, and then analyzes the health data according to the environment information.
According to this invention, one can imagine that it is possible to integrate sensors used to collect environment information into the mobile phone, which would be even more convenient for users.

VII. CURRENT & FUTURE DEVELOPMENTS
Advances in the technologies that underlie mobile phones are enabling them to become better, faster and less expensive. Mobile phones will play a part-and-parcel role in healthcare, and there have already been some examples. T+ Medical, which works with Oxford University, uses mobile phones, mainly for diabetes, to offer cost-effective disease management and remote monitoring solutions. Researchers have also presented a new medical imaging system made of two independent components connected through cellular phone technology. The cellular phone technology transmits unprocessed raw data from the patient site, which includes a data acquisition device with limited controls and no image display capability, and receives and displays the processed image from the central site, which includes an advanced image reconstruction and hardware control multi-server unit.

VIII. CONCLUSION
This paper has attempted to give a snapshot of completed, ongoing and emerging applications of mobile phone based healthcare technologies. With regard to the concerns mentioned here, it is worth noting that mobile phone technologies are affecting, but are not limited to, the medical field. Emerging technological trends provide promising solutions for mobile healthcare applications. In order to facilitate wider application of mobile healthcare and offer better service to patients, efforts in compatibility, standards and security are needed.
The mobile healthcare questionnaire service provides questionnaires and analysis results to users through mobile devices and Web technologies. With this service, users can check their health status easily, anytime and anywhere.

REFERENCES
[1] http://www.who.int/topics/chronic_diseases/en/, last accessed 2008 Sep. 16.
[2] http://www.who.int/topics/ageing/en/, last accessed 2008 Sep. 16.
[3] Intille SS. A new research challenge: Persuasive technology to motivate healthy aging. IEEE Trans Inf Technol Biomed 2004; 8:235-237.
[4] Chen W, Wei D, Zhu X, Uchida M, Ding S, Cohen M. A mobile phone-based wearable vital signs monitoring system. Proceedings of the 2005 fifth international conference on computer and information technology. Shanghai, China, 2005.
[5] Kogure Y, Matsuoka H, Kinouchi Y, Akutagawa M. The development of a remote patient monitoring system using Java-enabled mobile phones. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th annual conference. Shanghai, China, 2005.
[6] Takeuchi H, Kodama N, Hashiguchi T, Hayashi D. Automated healthcare data mining based on a personal dynamic healthcare system. Proceedings of the 28th IEEE EMBS annual international conference.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.314-318.

Remote Testing of Instruments using Image Processing


M. Sankeerthana1, V. Sindhusha1 and V. Murugan2
1Student, Final Year, 2Assistant Professor
EIE, Jaya Engineering College, Thiruninravur.
Abstract - In today's world of high speed manufacturing industry, testing and measurement using the internet and machine vision play a major role in producing good quality products. The term Machine Vision describes work that provides a practical and affordable visual sense for any type of machine that works in real time, and is concerned with utilizing existing technology in the most effective way to endow a degree of autonomy in a specific application. Since many instruments are provided with communication interfaces, one can build a remote testing system from the actual hardware and a computing unit with Internet connection capabilities. In this work on remote testing of instruments, after presenting a simple client-server architecture, we show how the use of the mobile agent technique solves most of the security issues and works efficiently. We use a high-resolution image capturing device such as a CCD camera, interfaced with the computer, so that real-time images of the meter can be acquired. These images are then processed to analyze the position of the needle under different inputs, and with the data obtained we are able to determine the percentage of error and conclude whether the meter has passed the required levels of accuracy. The success of the project shows that such automation can reduce inspection time and also increase accuracy.

I. INTRODUCTION
Machine vision is an important sensor technology with potential application in many industrial operations. Many of the current applications of machine vision are in inspection, robotics and measurement. Machine vision systems allow complex inspection for close dimensional tolerances, improved recognition and part location capabilities, and increased speed.
The recent development of PC-driven instruments and the evolution of the networking capabilities of PCs over a worldwide network (the Internet) have led to the realization of distributed measurement systems for industrial applications.
The remote testing of instruments is an interesting application that has not yet been fully exploited, mainly due to security issues. In general, tachometers are tested manually, which leads to observational and environmental errors. Machine vision is concerned with the sensing of vision data and its interpretation by the computer. The vision system consists of a Charge Coupled Device (CCD) camera and digitizing hardware, and a digital computer with the hardware and software necessary to interface them.
This work uses the endless possibilities of machine vision and the TCP/IP protocol to automate the remote testing of measuring instruments specialized for the automotive industry. Here a camera and a virtual instrument environment with a Data Acquisition (DAQ) interface are used to acquire the data required for testing and for determining the quality of the product. The DAQ card we used is the PCI 6024E from National Instruments. It consists of eight analog input/output channels and eight digital input/output channels, with a built-in function generator. The product here is an analog meter where the
movement of the needle is used to determine the
measured value. The product is put under test according
to different conditions to check the functionality and the
accuracy within the required values. One of the main
featured tests is the calibration test, which checks the
needle position for various inputs given to the meter. For
the purpose of remote testing of measuring instruments
we have taken a tachometer as the test subject of our
work. The work uses the resource of machine vision
system for the purpose of calibration test done on the
analog tachometer depending on the needle deflection.
This system solely depends on the machine vision to
extract the details in accuracy, thus the accuracy and
perfection of this project depends on the type of the
machine vision system. Any clinch in the working of the
machine vision system will cause error in the readings
taken.
The tachometer is made to deflect to various levels of input given by the computer from the server side over a Local Area Network (LAN) using the TCP/IP protocol. These deflections are captured at the right time using the CCD camera. The processing step uses the image analysis function, which applies pattern recognition techniques to identify the needle and its orientation.
The next step is to calculate the angle of deflection with
respect to the horizontal level. This process is carried out
for various values of inputs given by the user or generated
by the virtual instrument environment. These data are then compared with the standard values for error calculation, and the system decides whether the meter is fit for commercial use or defective. In continuous production


industry it is required to have two or more personnel working in shifts, further supervised by higher-level personnel to ensure that the products are thoroughly tested and certified before release. Here mistakes can cause loss of accuracy and the possibility of damaged goods floating in the market, which affects the name and value of the company. So it is in the company's interest to look for a completely automated system which can guarantee that the final product is tested and certified as accurately as possible.
The main objective of the project is to provide an automation solution using machine vision and a Local Area Network (LAN) as the methodology for remote testing of analog deflection type meters. This project helps replace the normal testing technique, which in earlier days was conducted by a human being at the place of production. The human factor played an important role in the level of accuracy and the time taken for the complete test: due to the stress and strain of constant working environments, these factors proved to change the level of perception and also the time taken for a proper testing procedure. The objective of this project is to automate the entire procedure, from the initial input to the final decision of accepting or rejecting the meter, and also to maintain a database storing the batch and serial numbers and the time and date of testing of each particular device, so that these records can be used for further reference. The system developed finds its application in the large-scale testing of analog meters in the automotive industry. It can be extended to a wide area network, and digital meters can also be tested using the same concept.
II. IMAGE PROCESSING
A. Image Acquisition
The sensing and digitizing function involves the input of vision data by means of a camera focused on the region of interest (ROI). Special lighting techniques are frequently used to obtain an image of sufficient contrast and clarity. The digital image is called a frame of vision data, and is frequently captured by a hardware device called a frame grabber. These devices are capable of digitizing images at a rate of over 30 frames/sec. The frame consists of a matrix of data representing projections of the scene sensed by the camera. The elements of the matrix are called picture elements, or pixels. The digitized image matrix for each frame is stored and then subjected to image processing and analysis for data reduction and interpretation of the image. These steps are required to permit real-time application.
B. Image Processing and Analysis
The acquired image, after conversion into digital data, is sent for further processing. The processing involves one or all of the following steps:
1. Image data reduction
2. Feature extraction
3. Object recognition
C. Image Data Reduction
The main objective of this step is to reduce the complex image to a more compact form. This is achieved by digital conversion, which converts the image into an 8-bit grayscale image. This results in a large data reduction without losing the essential properties of the image. Image data reduction is essential for the subsequent processing steps.
D. Feature Extraction
In machine vision applications, it is required to distinguish the main object from the rest of the image. This is accomplished by means of features that uniquely characterize the object. Some of the main features of an image are its contrast, colour and brightness. The main feature we utilize is the grayscale level of the image. This feature is used to identify the object or part of the object and to determine its size, location and orientation.
E. Object Recognition
The next step in image data processing is to identify the object that the image represents using the extracted feature information. The recognition algorithm must be powerful enough to uniquely identify the object. The various object recognition techniques are:
1. Template matching - a subset of pattern recognition techniques that serves to classify objects in an image. It matches the object with a model template, previously obtained as a reference image. It is applicable when the number of model templates required is small.
2. Structural technique - a pattern recognition technique that considers the relationships between features or edges of an object.
F. Analysis of the Image using Pattern Matching
Pattern matching locates regions of a grayscale image that match a predetermined template. Pattern matching finds template matches regardless of poor lighting, blur, noise, or shifting and rotation of the template. Shape matching searches for the presence of a shape in a binary image and specifies the location of each matched shape. IMAQ Vision (Image Acquisition Vision Builder) detects the shape even if it is rotated or scaled. Binary shape matching is performed by extracting parameters from a template image that represent the shape of the image and are invariant to the rotation and scale of the shape. These parameters are then compared with a similar set of parameters extracted from other images. Binary shape matching has the benefit of finding features regardless of size and orientation. The analysis of the needle image is done using pattern recognition template matching techniques, using the features of the template image, mainly shape and area, for identification. It compares the model images of the needle tip and the pivot point, stored in separate image files. The comparison and classification functions are performed using a software program developed in LabVIEW.
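The authors implement this matching in LabVIEW/IMAQ; purely as an illustration of the same idea, the Python sketch below uses OpenCV to locate a needle-tip template and compute the needle angle from the pivot. The file names and the pivot coordinates are assumed placeholders, not values from the paper.

# Illustration of template matching and needle-angle computation.
# The paper's implementation is LabVIEW/IMAQ; this sketch uses OpenCV
# only to show the idea. File names and pivot location are assumed.
import math
import cv2

meter = cv2.imread("meter.png", cv2.IMREAD_GRAYSCALE)
tip = cv2.imread("needle_tip.png", cv2.IMREAD_GRAYSCALE)
if meter is None or tip is None:
    raise SystemExit("provide meter.png and needle_tip.png")

PIVOT = (320, 240)  # assumed pivot (needle axis) coordinates in pixels

# Normalized cross-correlation tolerates uniform lighting changes.
result = cv2.matchTemplate(meter, tip, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)
tip_center = (top_left[0] + tip.shape[1] // 2,
              top_left[1] + tip.shape[0] // 2)

# Needle angle measured from the horizontal through the pivot.
dx = tip_center[0] - PIVOT[0]
dy = PIVOT[1] - tip_center[1]  # image y grows downward
angle = math.degrees(math.atan2(dy, dx)) % 360
print(f"match score {score:.2f}, needle angle {angle:.1f} degrees")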
III. METHODOLOGY

Fig. 1. Block diagram of remote testing system

The recent development of PC-driven instruments and the evolution of the networking capabilities of PCs over a worldwide network have led to the realization of distributed measurement systems for industrial applications. The remote testing of instruments is an interesting application that has not yet been fully exploited, mainly due to security issues. Machine vision is an important sensor technology with potential application in many industrial operations.
Many of the current applications of machine vision are in inspection and robotics. Machine vision systems allow complex inspection for close dimensional tolerances and provide improved recognition and part location capabilities at increased speed. The purpose of the vision system is to represent the input data, in the form of images, as stored feature values, which can subsequently be compared against the corresponding feature values from images of known objects. Machine vision is concerned with the sensing of vision data and its interpretation by the computer. The vision system consists of the camera and digitizing hardware, and a digital computer with the hardware and software necessary to interface them. The basic block diagram of the machine vision system consists of three functions. They are:
1. Image acquisition
2. Image processing and analysis
3. Application
The calibration testing of a tachometer involves complex procedures if done manually, and the trustworthiness of the test results relies on various conditions that are subject to change on each occasion of testing.
The testing methodology we have chosen is purely based on LabVIEW, which is acclaimed for its fast response while testing tachometers as well as for its accuracy and precision, and a user-friendly interface can be modeled. The method we adopted comprises the following steps:
1. Implementation of client-server architecture.


2. Activation of tachometer.
3. Activation of camera.
4. Image Acquisition.
5. Image Processing.
6. Angle calculation.
7. Error calculation.
Tachometer testing can be accomplished by providing three inputs: voltage, frequency and ground. The tachometer has three wires, colored black, orange and red. We provide voltage to the tachometer through the red wire and the frequency signal through the orange wire, while the black wire is grounded. The voltage provided to the tachometer is in the range of 10 V to 14 V. The frequency input is a square pulse generated by the user, fed to the DAQ card, with the DAQ card output connected back to the tachometer. With all inputs provided, the tachometer indicates rpm in the range 0 to 12 thousand. The deflection of the needle of the tachometer varies linearly with the applied frequency. At first there may be some fluctuation or time delay in the needle while it shows the reading, but this can be corrected by adjusting the sample rate and sampling frequency so that the Nyquist criterion is satisfied:
fs >= 2 fm
where fs is the sampling frequency and fm is the message (modulating) frequency; i.e., the sampling frequency should be greater than or equal to twice the modulating frequency.
The camera used here to acquire the image of the meter is a monochrome camera with high resolution and high speed. To activate this camera we apply an input supply of 12 V. The monochrome camera has two terminals: one terminal is provided with the 12 V power supply, and the other is directly interfaced with the monitor; the interfacing terminal is provided with IMAQ, so that whenever the camera acquires an image of the meter, the data goes directly to the monitor for further processing. The main function of the camera used here is to acquire the image and send it directly to the monitor. A certain precaution must be taken while using the monochrome camera regarding proper lighting intensity; with improper lighting, the images acquired by the camera may change, so care must be taken to provide proper lighting. For this purpose we use a round fluorescent tube mounted directly in front of the camera and the tachometer, so that a proper and accurate image can be obtained by the camera.

The camera is mounted on a stand made of steel or wood. The stand used here allows the camera height to be varied so as to acquire a proper image, and the distance between the camera and the tachometer is adjustable. At the front end of the camera there is a round movable mount; by adjusting this, a properly focused image can be acquired. The sensing and digitizing function involves the input of vision data by means of a camera focused on the region of interest, and special lighting techniques are frequently used to obtain an image of sufficient contrast and clarity.

After proper adjustment of the camera, the frequency and the sampling frequency, the needle shows its deflection on the tachometer. By keeping the monochrome camera at a proper distance and under proper lighting, an image of the deflected needle can be acquired, producing an image of pixel size 640x480. The captured image is then processed by the pattern matching technique explained before. The deflection of the needle with respect to the horizontal is calculated by pattern matching, and various sets of readings are obtained.
The standard values are obtained from an error-free (standard) meter and stored in a text file together with the corresponding rpm values to which they relate. These standard values are acquired using the same image processing setup used for the testing procedure. The deflection values obtained by pattern matching are stored in a buffer and then compared with the predetermined standard values; in this way we can find the error of the tachometer.

Fig. 2. The basic block diagram of machine vision system

IV. RESULT ANALYSIS
The standard frequency from the server side is used on the client side to generate the frequency using a frequency generator. The output of the frequency generator is given to the tachometer through the DAQ interface. The deflection of the needle is calculated using the CCD camera and IMAQ software. Standard angles are calculated for an error-free meter and stored in a text document. During the testing process, the angles measured for the meter under test are stored in a separate text document. Using these two documents, the performance of a particular meter can be calculated. The error is calculated as shown in Table 1.

Table 1
S.No  Input frequency  Standard angle  Observed angle  Percentage of error
1     34               342             340             -0.58
2     51               324             326              0.61
3     68               307             305             -0.65
4     85               290             288             -0.68
5     102              273             270             -1
6     119              257             256             -0.38
7     136              241             240             -0.41
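A minimal Python sketch reproducing the percentage-of-error column of Table 1 from the standard and observed angles follows; the error formula (observed minus standard, relative to standard) is inferred from the tabulated values rather than stated explicitly in the paper.

# Reproduce the percentage-of-error column of Table 1.
# The error formula is inferred from the tabulated values:
# error % = 100 * (observed - standard) / standard.

standard = [342, 324, 307, 290, 273, 257, 241]
observed = [340, 326, 305, 288, 270, 256, 240]

for i, (s, o) in enumerate(zip(standard, observed), start=1):
    error_pct = 100.0 * (o - s) / s
    print(f"{i}: standard {s} deg, observed {o} deg, error {error_pct:+.2f}%")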

If the error is within the limit specified by the design department, that particular meter is accepted; otherwise it is rejected. The limit should lie within a tolerable range. By calculating this error, the production facility knows whether the meter meets the standard. This testing establishes the quality of the product, which in turn reflects the quality of the organization. The performance of any type of meter can be calculated in this way.
V. CONCLUSION
The image is acquired with proper orientation and lighting, and the testing of the tachometer is carried out by the developed program. Machine vision thus eliminates the observational and environmental errors that occur during manual testing, and it increases the speed of production and the quality of manufacturing during batch production. This project can also be implemented on a conveyor system that holds the analog meters; the conveyor system can be controlled by the program through a stepper motor. The system finds its application in analog meter manufacturing industries, where a detected faulty tachometer is sent back through the process to overcome the defect, or else dispatched for packing.
This project also finds immense scope of application in the testing of all types of analog meters. The project provides room for future extensions, such as access over a WAN connection so that the whole testing process can be performed far away from the manufacturing area; the remote access facilities can also be used to acquire the image and the report through the Internet. Using these features, one can access and operate the system and study the report from any PC with Internet access.



REFERENCES
[1] Matteo Bertocco, Sandro Cappellazzo, Alessio Carullo, Marco Parvis and Alberto Vallan, "Virtual Environment for Fast Development of Distributed Measurement Applications," IEEE Transactions on Instrumentation and Measurement, Vol. 52, pp. 681-685, June 2003.
[2] Giovanni Moschioni, "A Virtual Instrumentation System for Measurements on the Tallest Medieval Bell Tower in Europe," IEEE Transactions on Instrumentation and Measurement, Vol. 52, pp. 693-702, June 2003.
[3] F. Toran, D. Ramirez, S. Casans, A. E. Navarro and J. Pelegri, "Distributed Virtual Instrument for Water Quality Monitoring Across the Internet," in Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Baltimore, MD, USA, May 2000.
[4] F. Toran, D. Ramirez and S. Casans, "Programming Internet-Enabled Virtual Instruments," LabVIEW Tech Resource 7(2) (1999) 22-23.
[5] Giovanni Bucci and Carmine Landi, "A Distributed Measurement Architecture for Industrial Applications," IEEE Transactions on Instrumentation and Measurement, Vol. 52, No. 1, Feb 2003.
[6] D. Georges, E. Benoit, A. Chovin, D. Koenig, B. Marx and G. Mauris, "Distributed Instruments for control and diagnosis applied to a water distribution system," in Proc. IMTC, Vol. 1, May 2002, pp. 565-569.
[7] G. Bucci and C. Landi, "A Multi-DSP based instrument on a VXI C-Size Module for Real Time Measurements," IEEE Transactions on Instrumentation and Measurement, Vol. 49, pp. 884-889, Aug 2000.

Proceedings of the National Conference on Communication Control and Energy System (NCCCES'11),
Vel Tech Dr. RR & Dr. SR Technical University, Chennai, TN. 29 - 30 August, 2011. pp.319-322.

Wireless Communication using Human Area Networking


H. Lalithkrishnan, Kiran Mohan and S. Gowtham
Email: lalithkrishnanh@gmail.com, scifikiran@gmail.com, gautham_narnia@yahoo.co.in
Abstract - A major problem in today's world is communication. Even though digital communication has evolved, it is still not efficient for all people to access, and infrared communication has some serious disadvantages. To overcome this, a newer communication method called RedTacton, which uses the conduction property of the human body to transmit signals, is proposed in this paper. RedTacton is a new Human Area Networking technology that uses the surface of the human body as a safe, high-speed network transmission path. RedTacton uses the very weak electric field emitted on the surface of the human body. A transmission path is formed automatically by body contact, and this initiates communication between electronic devices.
Keywords - Human area network, RedTacton, Bluetooth, intra-body communication, infrared, electro-optic sensor, laser.

I. INTRODUCTION
Today people can communicate any time, anywhere, and with anyone over a cellular phone network. Moreover, the Internet lets people download immense quantities of data from remotely located servers to their home computers. Essentially, these two technologies enable communications between terminals located at a distance from each other. Short-range wireless communication systems such as Bluetooth and wireless local area networks also exist, but they suffer from several constraints: throughput is reduced by packet collisions in crowded spaces such as meeting rooms and auditoriums filled with people, and communication is not secure because signals can be intercepted. The ultimate human area network solution to all these constraints of conventional technologies is intra-body communication, in which the human body serves as the transmission medium. If we could use the human body itself as a transmission medium in everyday services, then this would be an ideal way of implementing human area networks, because it would solve at a stroke all the problems of throughput reduction, low security, and high network setup costs.
II. RED TACTON
RedTacton involves initiating action with a touch that could result in a wide range of actions in response. NTT therefore combined "touch" and "action" to coin the term "Tacton", and then added the word "Red", a warm color, to emphasize warm and cordial communications, creating the name RedTacton. RedTacton is a new Human Area Networking technology that uses the surface of the human body as a safe, high-speed network transmission path. RedTacton uses the very weak electric field emitted on the surface of the human body, and it works through shoes and clothing as well, using any body surface, such as the hands, fingers, arms, feet, face, legs or toes.
III. PRINCIPLE
The principle of RedTacton is illustrated in Fig. 1. Weak electric fields pass through the body to a RedTacton receiver, where they affect the optical properties of an electro-optic crystal. The extent to which the optical properties are changed is detected by laser light, which is then converted to an electrical signal by a detector circuit. Ea represents the electric field induced toward the body by the transmitter's signal electrode. The system requires a ground close to the transmitter signal electrode, so that part of the field induced in the body (Eb) can follow a return path to the transmitter ground. Moreover, since people are usually standing on a floor or the ground, an electric field Ec escapes from the body to the earth ground, mainly from the feet. The electric field Es that reaches the receiver is therefore
Es = Ea - (Eb + Ec)
This allows communication to be initiated through natural actions such as grasping, sitting, walking and stepping onto something. Electrical sensors basically involve two-wire communication with a signal line and a ground line, so when there is only a signal line (i.e., the human body), the signal cannot be transmitted properly.

Fig. 1. Principle of RedTacton
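As a worked example with made-up field strengths (the paper quotes none): if Ea = 10 in arbitrary units, Eb = 3 and Ec = 4, then the field available at the receiver is Es = 10 - (3 + 4) = 3; only the residue of the induced field that neither returns to the transmitter ground nor escapes to earth reaches the detector.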
The three major functional features of RedTacton are
highlighted below.


1) A communications path can be created with a simple touch, which automatically initiates the flow of data. For example, two people equipped with RedTacton devices could exchange data just by shaking hands.
2) Using a RedTacton electro-optic sensor, two-way communication is supported between any two points on the body, at a throughput of up to 10 Mbps. Communication is not just confined to the surface of the body.
3) RedTacton can utilize a wide range of materials as a transmission medium, as long as the material is conductive and dielectric; this includes water and other liquids, various metals, certain plastics, glass, etc. Using ordinary structures such as tables and walls that are familiar and readily available, one could easily construct a seamless communication environment at very low cost using RedTacton (Fig. 3).

Fig. 3
IV. RED TACTON TRANSCEIVER
Fig. 2 shows a photograph of the RedTacton transceiver connected to a PDA, together with a block diagram of the RedTacton transceiver developed by NTT. The transmitter consists of a transmitter circuit that induces electric fields toward the body, and a data sense circuit which distinguishes transmitting and receiving modes by detecting both transmission and reception data and outputs control signals corresponding to the two modes to enable two-way communication. The receiver consists of an electro-optic sensor and a detector circuit that amplifies the minute incoming signal from the electro-optic sensor and converts it to an electrical signal. We quantitatively measured the bit error rates of signals sent through the body.

Fig. 2. Block diagram of the RedTacton transceiver

V. ANALYSIS
The results showed that the system had no significant practical problems at a transmission speed of 10 Mbit/s. Besides communication between two hands, we also demonstrated reliable communication between a foot and a finger, and between other locations on a person's body. We also verified that good communication was achieved not only when the electrodes were in direct contact with the person's skin, but also when the signals passed through clothing and shoes.

VI. APPLICATIONS
Information recorded in the RedTacton device is sent to the touched objects, enabling the following applications.
1) Personalization
1.1) Personalization of mobile phones
1.2) Personalization of automobiles
2) New behavior patterns
2.1) Conferencing systems
3) Security applications
3.1) User verification and lock management at an entrance: just by touching the lock (Fig. 4), the transceiver circuit is activated, enabling the system to recognize the individual. It forms a replacement for biometric systems and an alternative for a secured world.

Fig. 4. Just by touching the lock

3.2) Confidential document management
4) Mobile computation
4.1) A single phone can be connected with several other phones, via Bluetooth, using this principle.

Fig. 5

4.2) Without any Bluetooth device, just by touching a device one can hear sounds.

Fig. 6

VII. DESIGN ARCHITECTURE
The design architecture of our prototype system exploits wideband signaling (WBS) for digital audio streaming over the human body. The architecture consists of five parts: the physical medium, the interface layer, the signaling link layer, the transceiver module, and the application. The human body, as the physical medium, shows a wide band-pass operational spectrum and sustains a channel capacity sufficient to transfer real-time multimedia data streams, as investigated in [2]. The electrical signal is carried by the feeble electric field induced on the skin of the body. The interface layer provides the connection with the physical medium. After the connection is established, the digital audio data transfers to the wearable I/O devices over a WBS link. The transceiver module performs packet generation for the streaming data and supports the WBS link that enables point-to-point transfer or broadcasting between application layers.
VIII. REALIZATION USING MP3 PLAYER
The prototype system is realized using the WBS transceiver chip developed in [2]. The system hardware, excluding the transceiver chip, consists of off-chip components: a 24-bit analog-to-digital converter (ADC) and digital-to-analog converter (DAC), and a digital audio interface transmitter and receiver, as shown in Fig. 7. The S/PDIF standard is chosen for the digital audio interface. The analog audio signals for two stereo channels are sampled by the 24-bit sigma-delta ADC. The 24-bit sampled audio data is converted to a 32-bit packet consisting of a 4-bit preamble, the 24-bit data, and 4 bits of channel status and error detection. Subsequently, the data is transmitted with biphase-mark encoding at twice the bit rate, which allows clock recovery from the data at the receiver. The effective data rate is 2.048 Mb/s at a sampling rate of 16 kHz over the channel. The audio sound is played back by the 24-bit DAC. No audio decoder is required at the earset, which reduces both the cost and the power consumption. The transceiver chip controls the WBS link for the transmission of the digital audio data. The 0.25-um CMOS WBS transceiver chip, including a receiver analog front end (AFE), consumes only 5 mW from a 1-V supply. Compared with a Bluetooth transceiver [3], it exhibits lower power consumption at a high data rate. Figure 1(b) shows a photo of the prototype system, consisting of a WBS audio transmitter connected to an MP3 player and a WBS audio receiver assembled into an earset. The transmitter attaches to the back of the audio player, and the Rx electrode is mounted on the hanger of the earset.

Fig. 7. Block diagram of the wireless MP3 player
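To make the framing and line coding described above concrete, here is a small Python sketch that packs a 32-bit frame and biphase-mark encodes it. The text specifies only the field widths (4 + 24 + 4 bits); the preamble value and status bits used below are assumptions.

# Sketch of the 32-bit framing and biphase-mark line coding described
# above. The 4-bit preamble value and status bits are assumed; the
# text only specifies the field widths (4 + 24 + 4 bits).

def make_frame(sample_24bit: int, preamble: int = 0b1011, status: int = 0b0000) -> int:
    """Pack preamble (4b) + audio sample (24b) + status/error bits (4b)."""
    return (preamble << 28) | ((sample_24bit & 0xFFFFFF) << 4) | (status & 0xF)

def biphase_mark(bits: str, level: int = 0) -> str:
    """Biphase-mark coding: invert the level at every bit boundary,
    and invert again mid-cell for a '1' (two half-cells per bit)."""
    out = []
    for b in bits:
        level ^= 1                 # transition at each bit boundary
        out.append(str(level))
        if b == "1":
            level ^= 1             # extra mid-bit transition encodes 1
        out.append(str(level))
    return "".join(out)

frame = make_frame(0x123456)
bits = format(frame, "032b")
print(bits)                  # the 32-bit frame
print(biphase_mark(bits))    # 64 half-cells; clock is recoverable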

IX. ADVANTAGE OVER BLUETOOTH
The system envisioned by NTT utilizes a conversion method which turns digital data into a stream of low-power digital pulses. These can be easily transmitted and read back through the human electric field. While it is true that similar personal area networks are already accessible using radio-based technologies like Wi-Fi or Bluetooth, this new wireless technology claims to be able to send data over the human skin surface at transfer speeds of up to 10 Mbps, better than a broadband T1 connection. Receiving data in such a system is more complicated, because the strength of the pulses sent


through the electric field is so low. RedTacton solves this issue by utilizing a technique called electric field photonics: a laser is passed through an electro-optic crystal, which deflects light differently according to the strength of the field across it. These deflections are measured and converted back into electrical signals to retrieve the transmitted data.
Earlier intra-body systems had two limitations:
1) the operating range through the body was limited to a few tens of centimeters, and
2) the top communication speed was only 40 kbit/s.
These limitations arise from the use of an electrical sensor for the receiver. An electrical sensor requires two lines (a signal line and a ground line), whereas in intra-body communication there is essentially only one signal line, i.e., the body itself, which leads to an unbalanced transmission line, so the signal is not transmitted correctly.
X. CONCLUSION
In a busy place there could be hundreds of Bluetooth devices within range, and the chances of information being tracked are higher. Using this transceiver, we succeeded in achieving 10BASE communications through a human body from one hand to the other. While our immediate objective is to implement a RedTacton system supporting two-way intra-body communication at a rate of 10 Mbit/s between any two points on the body, our longer-term plans include developing a mass-market transceiver interface supporting PDAs and notebook computers. This problem has thus been solved by RedTacton, making it a very innovative technology.
REFERENCES
[1] T. G. Zimmerman, "Personal Area Networks: Near-field intrabody communication," IBM Systems Journal, Vol. 35, Nos. 3&4, pp. 609-617, 1996.
[2] M. Shinagawa, "Development of Electro-optic Sensors for Intra-body Communication," NTT Technical Review, Vol. 2, No. 2, pp. 6-11, 2004.
[3] http://www.redtacton.com/
[4] http://www.arib.or.jp/english/html/overview/st_j.html
[5] M. Mizoguchi, T. Okimura, and A. Matsuda, "Comprehensive Commercialization Functions," NTT Technical Review, Vol. 3, No. 5, pp.


Author Index

A
Aanandha Saravanan, K.  7, 81, 155
Adarsh, R.  283
Alexzander, S.  205
Amal Raj, S.  4
Anandkumar Singh, H.  239
Anjugam, M.  248
Anupama, A.S.  290
Anusha Meenakshi, S.  138
Arun, V.R.  307
Arun Karthik Kani, P.  287
Ashok Kumar, J.  185
Ashwin Kumar, K.S.  304
Asif Iqbal, A.  287

B
Bala Aiswarya, R.  49
Balaji, R.  304
Balasathiya, D.  15
Baskaran, M.  138
Bharath Kumar, R.  223, 283
Bharathi, C.R.  103

C
Chandra Sekhar, N.  57

D
Deepa, D.  53
Devendran, A.  311
Dhivya, M.  43
Dinesh Kumar, C.K.  11
Divya Devi Mohan  275
Divya, R.  166

E
Elakkiya, E.  290
Ethiraj, S.  138

G
Gauthugesh, V.  43
Gayathri, R.  107, 113
Gokul, B.  233
Gopala Krishnan, T.  307
Gowri, G.  146
Gowtham, S.  319

H
Hemanth Kumar, H.  19
Himanshu Joshi  23

I
Ilakiya, R.  177

J
Janani, N.  252
Janani, S.  255
Jayanth, D.  11
Jhansi Alekhya, D.  168
Johnson, S.  192

K
Kalarani, G.  255, 275
Kastro, A.  7
Kavitha, K.  146
Kavitha, M.  257
Kavitha, P.  290
Kavitha, R.  87
Kiran Mohan  319
Kousiya Shehannas, S.  168
Krishnakumar, D.  28
Krishnamoorthy, R.  81
Kumara swamy, G.  213

L
Lalithkrishnan, H.  319

M
Malar Vizhi, K.  132
Manikandan, J.  28
Manohar, K.  213
Manthan Shah  267
Maragathavalli, P.  127
Maruti, A.M.V.N.  57
Mohanraj, J.  173
Murugan, V.  278, 314

N
Naga Jyothi, G.  100, 94
Nagalalitha, C.  252
Naresh, C.  19
Naveena, S.  49, 53

P
Parijatham, S.  252
Pooranapriya, K.  195
Porselvi, S.  37
Prabhu, H.  283
Praveen, C.  118
Pravin Jadhav  267
Prem Kumar, V.  205

R
Radhika, K.  53
Raghavi, V.  271
Rahul Gopinath  199
Rajasekar, A.  1
Rakesh Raushan  199
Rama, R.  219
Ramakanth, A.  141
Ramanjaneyulu, D.V.S.  100, 94
Ramesh, M.  1
Reenu Aradhya Surathy  275

S
Sai Smarana, C.  255
Sankara Gomathi  76
Sankeerthana, M.  278, 314
Sanket Borhade  267
Sapna, S.  15
Saranya Devi, S.  150
Saranya, A.  150
Saravana Kumar, R.  223
Saravanan, J.  223, 287
Sateesh Kedarnath Kumar, N.  141
Sathyasri, B.  155
Sayee Kumar, S.  228
Selvi, C.  298
Shanthi, V.  103
Sherly Sofie, C.I.  76
Sindhusha, V.  278, 314
Sivakumar, S.  1
Sridhar, R.  1
Suganthy, M.  107, 113
Sujitha, A.  188
Sundarapandian, V.  257, 261
Suraj  209
Suresh, K.  213
Suresh, R.  261
Swati, G.  76

U
Uma, B.  150
Umamaheswari, T.  60

V
Valarmathi, A.  294
Vanaja, S.  66
Vanitha, T.  91
Venkatanadhan, G.  4
Venkataraman, N.L.  150
Venkateshwari, D.  243
Vignesh, M.  304
Vignesh, N.  7, 81
Vigneshvar, G.D.  307
Vignesh Prasanna, N.  155
Vijay Gaikwad  267
Vijayarani, B.  179
Vinoth, K.  11
Vinoth, M.  209
Vivekanandan, R.  168

Y
Yuvaraj, M.  233
