
The Proceedings of

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH
ICCTER 2014

On
10th August, 2014

Venue
LIBERTY PARK HOTEL
Chennai

Editors
Prof. M. Vani, Prof. B. N. Siva Prasanna Kumar

Organized by
INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT
www.iaetsd.in

About IAETSD:
The International Association of Engineering and Technology for Skill Development (IAETSD) is a professional, non-profit conference-organizing body devoted to promoting social, economic, and technical advancement by conducting international academic conferences in various fields of Engineering around the world. IAETSD organizes multidisciplinary conferences for academics and professionals, and was established to strengthen the skill development of students.
IAETSD is a meeting place where Engineering students can share their views and ideas, improve their technical knowledge, develop their skills, and present and discuss recent trends in advanced technologies, new educational environments and innovative technology learning ideas. The intention of IAETSD is to expand knowledge beyond boundaries by joining hands with students, researchers, academics and industrialists to explore technical knowledge all over the world and to publish proceedings. IAETSD offers learning professionals opportunities to explore problems from many Engineering disciplines, to discover innovative solutions, and to implement innovative ideas. IAETSD aims to promote upcoming trends in Engineering.

About ICCTER:
The main objective of ICCTER is to present the latest research and results of scientists across all engineering disciplines. The conference provides opportunities for delegates from different areas to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration. We hope that the conference results will constitute a significant contribution to knowledge in these up-to-date scientific fields. The organizing committee of the conference is pleased to invite prospective authors to submit their original manuscripts to ICCTER 2014.
All full paper submissions are peer reviewed and evaluated based on originality, technical and/or research content and depth, correctness, relevance to the conference, contributions, and readability.
The conference will be held every year to make it an ideal platform for people to share views and experiences in current trending technologies in the related areas.

Conference Advisory Committee:

Dr. P Paramasivam, NUS, Singapore
Dr. Ganapathy Kumar, Nanometrics, USA
Mr. Vikram Subramanian, Oracle Public Cloud
Dr. Michal Wozniak, Wroclaw University of Technology
Dr. Saqib Saeed, Bahria University
Mr. Elamurugan Vaiyapuri, tarkaSys, California
Mr. N M Bhaskar, Micron Asia, Singapore
Dr. Mohammed Yeasin, University of Memphis
Dr. Ahmed Zohaa, Brunel University
Kenneth Sundarraj, University of Malaysia
Dr. Heba Ahmed Hassan, Dhofar University
Dr. Mohammed Atiquzzaman, University of Oklahoma
Dr. Sattar Aboud, Middle East University
Dr. S Lakshmi, Oman University

Conference Chairs and Review Committee:

Dr. Shanti Swaroop, Professor, IIT Madras
Dr. G Bhuvaneshwari, Professor, IIT Delhi
Dr. Krishna Vasudevan, Professor, IIT Madras
Dr. G. V. Uma, Professor, Anna University
Dr. S Muttan, Professor, Anna University
Dr. R P Kumudini Devi, Professor, Anna University
Dr. M Ramalingam, Director (IRS)
Dr. N K Ambujam, Director (CWR), Anna University
Dr. Bhaskaran, Professor, NIT Trichy
Dr. Pabitra Mohan Khilar, Associate Prof., NIT Rourkela
Dr. V Ramalingam, Professor
Dr. P. Mallikka, Professor, NITTTR, Taramani
Dr. E S M Suresh, Professor, NITTTR, Chennai
Dr. Gomathi Nayagam, Director, CWET, Chennai
Prof. S Karthikeyan, VIT, Vellore
Dr. H C Nagaraj, Principal, NIMET, Bengaluru
Dr. K Sivakumar, Associate Director, CTS
Dr. Tarun Chandroyadulu, Research Associate, NAS

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH
ICCTER 2014

Contents

1. IMPLEMENTATION OF CHAOTIC ALGORITHM FOR SECURE IMAGE TRANSCODING ... 1
2. SYNTHESIS AND CHARACTERIZATION OF CDXHG1-XTE TERNARY SEMICONDUCTOR THIN FILMS ... 8
3. MANET: A RELIABLE NETWORK IN DISASTER AREAS ... 15
4. ENERGY MANAGEMENT OF INDUCTION MOTOR USING TIME VARYING ANALYSIS ... 21
5. ELECTROCHEMICAL MACHINING OF SS 202 ... 27
6. GESTURE RECOGNITION ... 42
7. STRUCTURAL AND ELECTRONIC PROPERTIES OF DOPED SILICON NANOWIRE ... 49
8. STUDY AND EXPERIMENTAL ANALYSIS OF LINEAR AND NON LINEAR BEHAVIOUR OF PIPE BEND WITH OVALITY ... 64
9. A REVIEW ON PERFORMANCE ANALYSIS OF MIMO-OFDM SYSTEM BASED ON DWT AND FFT SYSTEMS ... 70
10. STUDY OF VARIOUS EFFECTS ON PEAK TO AVERAGE POWER REDUCTION USING OFDM ... 76
11. ECO FRIENDLY CONSTRUCTION METHODS AND MATERIALS ... 83
12. A REVIEW ON ENHANCEMENT OF DEGRADED DOCUMENT IMAGES BASED ON IMAGE BINARIZATION SYSTEM ... 86
13. SEARL EFFECT ... 91
14. BUS TRACKER SYSTEM WITH SEAT AVAILABILITY CHECKER ... 99
15. TECHNO-HOSPITAL ... 104
16. PREPARATION OF W.P.S FOR STAINLESS STEEL (NI, CR, MO, NU) WELDING W.R.T MECHANICAL & THERMAL PROPERTIES ... 107
17. REPAIRING CRACKS IN CONCRETE STRUCTURES ... 110
18. RECOGNITION OF EMG BASED HAND GESTURES FOR PROSTHETIC CONTROL USING ARTIFICIAL NEURAL NETWORKS ... 115
19. VERTICAL FARM BUILDINGS SUSTAINABLE AND GREEN MANUFACTURING ... 121
21. SUSTAINABILITY THROUGH CRYOGENIC WATER FUEL ... 126
22. SILVERLINE FOR THE BLIND ... 133
23. CYBER CRIME AND SECURITY ... 139
25. NETWORK SECURITY AND CRYPTOGRAPHY ... 150
26. THE WIRELESS WATER AND VACCINE MONITORING WITH LOW COST SENSORS [COLD TRACE] ... 163
27. ADVANCED CONSTRUCTION MATERIALS ... 169
28. FPGA BASED RETINAL BLOOD OXYGEN SATURATION MAPPING USING MULTI SPECTRAL IMAGES ... 174
29. LITERATURE REVIEW ON GENERIC LOSSLESS VISIBLE WATERMARKING & LOSSLESS IMAGE RECOVERY ... 180
30. ARTIFICIAL INTELLIGENCE (AI) FOR SPEECH RECOGNITION ... 186
31. STRENGTH AND DURABILITY CHARACTERISTICS OF GEOPOLYMER CONCRETE USING GGBS AND RHA ... 191
32. VULNERABILITIES IN CREDIT CARD SECURITY ... 194
33. IMPLEMENTATION OF AHO-CORASICK ALGORITHM IN INTRUSION DETECTION SYSTEM USING TCAM FOR MULTIPLE PATTERN MATCHING ... 199
34. ADVANCED RECYCLED PAPER CELLULOSE AEROGEL SYNTHESIS AND WATER ABSORPTION PROPERTIES ... 205
35. NEAR FIELD COMMUNICATION TAG DESIGN WITH AES ALGORITHM ... 208
36. GREEN CONCRETE ECO-FRIENDLY CONSTRUCTION ... 212
37. ANALYTICAL STUDY OF GENERAL FAILURE IN PROMINENT COMPONENTS ... 215
38. ORGANIZING THE TRUST MODEL IN PEER-TO-PEER SYSTEM USING SORT ... 219
39. WEB PERSONALIZATION: A GENERAL SURVEY ... 227
40. A NOVEL SCHEDULING ALGORITHM FOR MIMO BASED WIRELESS NETWORKS ... 233
41. BER PERFORMANCE OF CDMA, WCDMA, IEEE 802.11G IN AWGN AND FADING CHANNEL ... 243
42. A SURVEY OF VARIOUS DESIGN PATTERNS FOR IMPROVING QUALITY AND PERFORMANCE OF WEB APPLICATIONS DESIGN ... 249
43. QUICK DETECTION TECHNIQUE TO REDUCE CONGESTION IN WSN ... 254
44. HEURISTICS TO DETECT AND EXTRACT LICENSE PLATES ... 258
45. DYE SENSITIZED SOLAR CELL ... 263
46. WEB PERSONALIZATION: A GENERAL SURVEY ... 269
47. AUTOMATED BUS MONITORING AND TICKETING SYSTEM USING RF TRANSCEIVER AND RFID ... 275
48. COMPARATIVE STUDY: MIMO OFDM, CDMA-SDMA COMMUNICATION TECHNIQUES ... 281


Implementation of Chaotic Algorithm for Secure Image Transcoding

PRADEEP K.G.M
IV Sem M.Tech, DEC, Dept. of E&C
KVGCE, Sullia
Pkgm77@gmail.com

Dr. Ravikumar M.S.
Professor and Head, Dept. of E&C
KVGCE, Sullia


ABSTRACT
Transcoding is the direct digital-to-digital conversion of data from one encoding to another. It refers to a two-step process in which the original data/file is decoded to an intermediate uncompressed format, which is then encoded into the target format. This paper proposes a secure image transcoder aimed mainly at multimedia applications such as web browsing through mobile phones, in order to improve delivery to client devices with a wide range of communication, storage and display capabilities. The system is based on CKBA encryption and ensures end-to-end security. Its performance has been evaluated for different images, and it is verified that the proposed system has low resource complexity with good performance.

Keywords: CKBA, Encryption, Transcoding
1. INTRODUCTION

Images are collections of pixels, and image encryption is a means of converting an image into an unreadable format. Many digital services require reliable security in the storage and transmission of digital images. Due to the rapid growth of the internet in today's digital world, the security of digital images has become more important and has attracted much attention. The prevalence of multimedia technology in our society has promoted digital images to a more significant role than traditional text, which demands serious protection of users' privacy in all applications. Encryption techniques for digital images are therefore very important and should be used to frustrate attacks from unauthorized access.

Nowadays, communication networks such as mobile networks and the internet are well developed. However, they are public networks and are not suitable for the direct transmission of confidential messages. To make use of the communication networks already deployed while simultaneously keeping secrecy, cryptographic techniques need to be applied. Traditional symmetric ciphers such as the Data Encryption Standard (DES) are designed with good confusion and diffusion properties. These two properties can also be found in chaotic systems, which are usually ergodic and sensitive to system parameters and initial conditions. In recent years, a number of chaos-based cryptographic schemes have been proposed; some are based on one-dimensional chaotic logistic maps and are applied to data sequences or document encryption. This project mainly focuses on bit rate transcoding. Recently, many papers have been proposed around the idea of stream ciphering. Generally, encryption can be categorized into three types: complete encryption, selective encryption and joint encryption. In complete encryption the entire data is encrypted, whereas in selective encryption only a part of the data is encrypted. Joint encryption refers to encryption performed during compression.

Digital images are exchanged over various types of networks, and a large part of this information is either confidential or private. Encryption is the preferred technique for protecting the transmitted data, and various encryption systems exist to encrypt and decrypt image data. However, it can be argued that no single encryption algorithm satisfies all the different image types. Most available encryption algorithms are designed for text data; due to the large data size and real-time constraints of multimedia, algorithms that are good for textual data may not be suitable for multimedia data. Although traditional encryption algorithms can encrypt images directly, this may not be a good idea for two reasons. First, the image size is often much larger than text, so traditional encryption algorithms need a longer time to encrypt the image data directly. Second, decrypted text must be identical to the original text, but this requirement is not necessary for image data: due to the characteristics of human perception, a decrypted image containing small distortions is usually acceptable. The intelligible information in an image comes from the correlation among the image elements in a given arrangement; this perceptive information can be reduced by decreasing the correlation among image elements using certain transformation techniques. In addition to cryptography, chaos-based image security techniques are becoming significantly more sophisticated and are widely used. Chaotic image transformation techniques are a perfect complement to encryption: a user applies a transformation that totally distorts the image, so nobody can see what information the image carries. They are therefore often used in conjunction with cryptography so that the information is doubly protected; first it is transformed by chaotic map encryption techniques, and then it is encrypted, so that an adversary has to find the hidden information before decryption takes place.
The processing of digital multimedia data is growing rapidly; the increasing demands of content consumers, with a widening variety of digital equipment, require bit streams to be modified after transmission. A transcoder can be placed in the channel to reduce the bit rate prior to retransmission to end-user devices. A simplified view of a typical broadcasting system is shown in Fig. 1.1.


The proposed secure image transcoder has many multimedia applications, such as medical imaging and mobile web browsing.
3. PROBLEM STATEMENT
There are two main drawbacks in using conventional crypto-systems to protect image data. The first is that traditional systems are either too slow or require excessive complexity for real-time operations on large volumes of data. The second is that any modification of the ciphertext generated using an off-the-shelf cipher would render the resulting bit stream undecipherable.

Fig. 1.1: Potential broadcasting network

One of the challenges in using such a network is protecting the intellectual property rights of the content owners and producers when the data is transmitted over a public channel. In a traditional cryptosystem, a secret key is used to encrypt and decrypt the data; as noted above, there are two main drawbacks in using traditional cryptosystems to protect image data.

4. MOTIVATION
The motivation for image transcoding research comes mainly from contemporary developments in the security and compression of multimedia applications. In today's digital world, the security of digital images becomes more and more important as digital products are communicated over networks more and more frequently. Furthermore, special and reliable security in the storage and transmission of digital images is needed in many applications, such as medical imaging systems, military image databases/communications and confidential data.

The first drawback is that traditional systems are either too slow or require excessive complexity for real-time operations on large volumes of data; the second is that any modification of the ciphertext generated using an off-the-shelf cipher would render the resulting bit stream undecipherable. An intuitive approach is to allow the transcoder to decrypt the bit stream prior to transcoding, re-encryption and retransmission, as shown in Fig. 1.2. While this approach is very effective in ensuring efficient content delivery, it does not allow end-to-end security.

5. METHODOLOGY
The proposed secure image transcoder system is basically a modification of the existing JPEG encoder-decoder algorithm. The main modifications are:
- Encryption
- Transcoding

Fig. 1.2: Traditional transcoder with encrypted data

To ensure end-to-end security, one possible approach is shown in Fig. 1.3. Here the transcoder stage is designed so that no plain text is freely available in the intermediate stages of transcoding. For this purpose a decode and re-encode stage is used with different quantization values.

Fig. 5.1: The proposed framework

Fig. 1.3: Secure transcoder using ciphers designed for transcoding

The proposed work consists of the following steps.

STEP 1: Consider an input image of size M×N and apply the 2-D DCT to the image.
The DCT is a mathematical technique used to convert an image from the spatial domain to the frequency domain; the JPEG image compression standard uses the DCT.


The discrete cosine transform is a fast transform and a widely used, robust method for image compression. It has excellent energy compaction for highly correlated data and fixed basis images, and it gives a good compromise between information packing ability and computational complexity.


Fig. 5.2: An 8×8 sub-image shown in 8-bit grayscale

Each image is divided into 8×8 blocks, and the 2-D DCT is applied to each block, with the output being the DCT coefficients for that block. If the input matrix is P[x,y] and the transformed matrix is F[i,j], then the DCT of an 8×8 block is computed using the expression:

Forward DCT:

F[i,j] = (1/4) C(i) C(j) \sum_{x=0}^{7} \sum_{y=0}^{7} P[x,y] \cos[(2x+1)i\pi/16] \cos[(2y+1)j\pi/16]    (5.1)

where C(i), C(j) = 1/\sqrt{2} for i, j = 0; C(i), C(j) = 1 for all other values of i and j; and x, y, i and j all vary from 0 through 7.
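Since Eq. (5.1) with the 1/4 C(i)C(j) scaling is the orthonormal 2-D DCT-II used by JPEG, it can be computed with a standard library routine. The short sketch below (our illustration, not part of the original paper) verifies that the forward transform of Eq. (5.1) and the inverse transform of Eq. (5.3) round-trip an 8×8 block:

import numpy as np
from scipy.fft import dctn, idctn

def forward_dct(block):
    # 8x8 orthonormal 2-D DCT-II; equivalent to Eq. (5.1)
    return dctn(block, norm='ortho')

def inverse_dct(coeffs):
    # 2-D inverse DCT; equivalent to Eq. (5.3)
    return idctn(coeffs, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels
F = forward_dct(block)
assert np.allclose(inverse_dct(F), block)  # perfect reconstruction before quantization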
STEP 2: CKBA encryption
The proposed chaotic key based algorithm (CKBA) is a complete encryption technique. Its encryption procedure can be briefly described as follows. Assume the size of the plain-image is M×N. The DCT matrix is shuffled using a generated random key; this can be performed before or after the quantization stage. Incorporating a shuffling algorithm in the spatial domain would cause an immense reduction in the compression ratio, and since large compression is a mandatory requirement in most multimedia applications, the shuffling algorithm cannot be implemented in the spatial domain. Performing the shuffling operation in the transform domain instead leaves the compression ratio unaffected; thus, block-wise shuffling of the DCT matrix is performed so that compression remains intact. The shuffling is based on a chaotic map.

Select an initial condition x(0) of a one-dimensional chaotic system as the secret key of the encryption system, defined by the following logistic map:

x(i+1) = λ x(i) (1 - x(i))    (5.2)

It has been proved that the system behaves chaotically if λ > 3.5699.

This chaotic system is run to produce a chaotic sequence x(i) for i varying from 0 to (M×N)/8 - 1. The sequence is then grouped into 8-bit integers so that a pseudo-random array of (M×N)/64 integers is formed. By skipping repeated elements, an array of length 256 can be formed, which is taken as an index to shuffle the columns of the input DCT matrix. This yields a well-encrypted system that provides good compression when the quantization matrix is suitably selected.
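The key-stream and shuffling step can be sketched as follows; the initial condition and the exact index-construction details below are illustrative assumptions, since the paper only outlines the procedure:

import numpy as np

def shuffle_indices(x0=0.3141, lam=3.99, n_cols=256):
    # Iterate the logistic map of Eq. (5.2) and quantise each sample to an
    # 8-bit integer; skipping repeats yields a permutation of column indices.
    seen, perm, x = set(), [], x0
    while len(perm) < n_cols:
        x = lam * x * (1.0 - x)
        k = int(x * 256) % n_cols
        if k not in seen:
            seen.add(k)
            perm.append(k)
    return np.array(perm)

perm = shuffle_indices()
# encryption: shuffled = dct_matrix[:, perm]
# decryption: dct_matrix = shuffled[:, np.argsort(perm)]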

STEP 3: Apply quantization to the CKBA-encrypted image.
The quantization process aims to reduce the size of the DC and AC coefficients so that less bandwidth is required for their transmission. The human eye responds primarily to the DC coefficient and the lower spatial frequency coefficients: if the magnitude of a higher-frequency coefficient is below a certain threshold, the eye will not detect it. This property is exploited in the quantization phase by dropping (in practice, setting to zero) those spatial frequency coefficients in the transformed matrix whose amplitudes are less than a defined threshold value.

The sensitivity of the human eye varies with spatial frequency, which implies that the amplitude threshold below which the eye will detect a particular spatial frequency also varies. In practice, therefore, the threshold values used vary for each of the 64 DCT coefficients. These are held in a two-dimensional matrix known as the quantization table, with the threshold value to be used for a particular DCT coefficient in the corresponding position in the matrix. A common quantization matrix is the example luminance table given in the JPEG standard. For instance, for a DCT coefficient matrix whose DC coefficient is -415 and a DC threshold of 16, rounding to the nearest integer gives Round(-415/16) = Round(-25.9375) = -26.

Fig.: Quantized coefficient matrix values
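The quantization table itself is not reproduced in these proceedings; assuming the standard JPEG luminance table as the "common quantization matrix", the quantization step can be sketched as:

import numpy as np

# Standard JPEG luminance quantization table (an assumption; the paper's
# own table is not reproduced in these proceedings).
Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]])

def quantise(F):
    # e.g. a DC coefficient of -415: round(-415 / 16) = -26
    return np.round(F / Q).astype(int)

def dequantise(Fq):
    return Fq * Q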


STEP 4: Apply entropy encoding to the quantized coefficient matrix, in four stages: zigzag scanning, differential encoding, run-length encoding and Huffman encoding.

ZIGZAG SCANNING
The output of typical quantization is a 2-D matrix of coefficients which are mainly zeros, except for a number of non-zero values in the top left-hand corner of the matrix. Clearly, if we simply scanned the matrix using a line-by-line approach, the resulting (1×64) vector would contain a mix of non-zero and zero values; in general, this type of information structure does not lend itself to compression. In order to exploit the presence of the large number of zeros in the quantized matrix, a zigzag scan is used.

Fig. 5.4: Zigzag Scanning Pattern
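A minimal sketch (ours) of the zigzag scan: coefficients are read along anti-diagonals so that low-frequency values come first and the trailing zeros cluster at the end of the 1×64 vector.

import numpy as np

def zigzag_scan(block):
    n = len(block)
    # Sort indices by anti-diagonal; the direction alternates on
    # odd/even diagonals, tracing the pattern of Fig. 5.4.
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])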


DIFFERENTIAL ENCODING
Differential encoding is used when the amplitude of the values that make up the source information covers a large range but the difference between successive values is relatively small. Instead of using a set of relatively large codewords to represent the actual amplitude of each value, a set of smaller codewords is used, each of which indicates only the difference in amplitude between the present value being encoded and the immediately preceding value.
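Applied, for instance, to the DC coefficients of successive blocks (a sketch with made-up values):

def differential_encode(values):
    prev, out = 0, []
    for v in values:
        out.append(v - prev)  # transmit only the change from the previous value
        prev = v
    return out

# e.g. DC coefficients [-26, -25, -27] are encoded as the smaller values [-26, 1, -2]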

RUN-LENGTH ENCODING TECHNIQUE
Run-length encoding is used when the source information contains long strings of the same symbol, such as a character, a bit or a byte. Instead of sending the source information as independent codewords, it is sent by simply indicating the particular symbol in each string together with the number of symbols in the string. For example, the string 000000060040009 is represented as (7, 6) (2, 4) (3, 9).
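A sketch of this (run-of-zeros, value) pairing, matching the example above:

def run_length_encode(seq):
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1          # count the zeros preceding the next nonzero symbol
        else:
            out.append((run, v))
            run = 0
    return out

digits = [int(c) for c in "000000060040009"]
print(run_length_encode(digits))  # [(7, 6), (2, 4), (3, 9)]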


HUFFMAN COMPRESSION TECHNIQUE
Huffman coding is a method for the construction of minimum-redundancy codes. The Huffman code procedure is based on two observations:
a. More frequently occurring symbols have shorter codewords than symbols that occur less frequently.
b. The two symbols that occur least frequently have codewords of the same length.
The Huffman coding algorithm works as follows:
- Convert the given colour image into a grey-level image.
- Find the frequency of occurrence of each symbol (i.e. each distinct pixel value).
- Calculate the probability of each symbol.



- Arrange the probabilities of the symbols in decreasing order and merge the lowest probabilities; continue this step until only two probabilities are left, and assign codes according to the rule that the most probable symbol gets the shortest codeword.
- Perform the Huffman encoding, i.e. map the codewords to the corresponding symbols, producing the compressed data.
- Reconstruct the original image, i.e. decompress, by using Huffman decoding.
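The procedure can be sketched with a standard heap-based construction (our illustration; the symbol values are arbitrary):

import heapq
from collections import Counter

def huffman_codes(symbols):
    # Merge the two least probable entries repeatedly, prefixing '0'/'1',
    # so that frequent symbols end up with the shortest codewords.
    heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, [w1 + w2, count, merged])
        count += 1
    return heap[0][2]

print(huffman_codes([3, 3, 3, 7, 7, 1]))  # the most frequent symbol, 3, gets the shortest code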

STEP 5: Design of the transcoder block.
The string of zeros and ones from the Huffman encoding is applied to the transcoder block, which performs a digital-to-digital conversion of the data from one encoding to another. The transcoder can be placed in the channel to reduce the bit rate prior to retransmission to target client devices, according to the bandwidth available to the end users.

Fig. 5.5: Schematic of Transcoder block


The output of the Huffman encoder is first passed through a Huffman decoding block and an inverse quantiser; the inverse-quantised coefficients are then passed through the transcoder's own quantiser and Huffman encoded again. This transcoder block achieves a good compression ratio.
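The rate-reducing core of this block amounts to re-quantising the decoded coefficients with a coarser table; a minimal sketch (function and table names are ours, and the entropy coding stages are omitted):

import numpy as np

def requantise(Fq, Q_fine, Q_coarse):
    F = Fq * Q_fine                             # inverse quantise the decoded coefficients
    return np.round(F / Q_coarse).astype(int)   # a coarser table means fewer bits to retransmit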
STEP 6: On the receiver side, the output of the Huffman encoding is applied through a Huffman decoding block to the inverse quantiser.
STEP 7: CKBA decryption is then performed by applying the 8-bit key at the inverse quantiser output.
Fig. 5.6: CKBA decryption flow chart

STEP 8: Each resulting 8×8 block of spatial frequency coefficients is passed in turn to the inverse DCT, which transforms it back into its spatial form using the following expression:

P[x,y] = (1/4) \sum_{i=0}^{7} \sum_{j=0}^{7} C(i) C(j) F[i,j] \cos[(2x+1)i\pi/16] \cos[(2y+1)j\pi/16]    (5.3)

where C(i), C(j) = 1/\sqrt{2} for i, j = 0 and C(i), C(j) = 1 for all other values of i and j.

Finally, the image is reconstructed by applying the IDCT to the decrypted image, as shown in the flow chart of Fig. 5.2.


6. SIMULATION RESULTS AND ANALYSIS
Example 1

Fig. 6.1: Cameraman original image of size 256×256

Fig. 6.2: CKBA encrypted image


Fig. 6.3: CKBA decrypted output image

The performance of the proposed system is evaluated for an image of size 256×256, with a quantization factor of 5 before transcoding which is then increased to 85 in the transcoder block. After transcoding we obtain a compression ratio of 18.0870 and an average of 0.4423 bits per pixel.

7. CONCLUSION
Several transcoding algorithms have already been examined for the effective utilization of bandwidth and user satisfaction. The proposed image transcoder shows how high-quality data such as images can be securely transmitted to different environments with effective utilization of the available bandwidth. From the results it is clear that the transcoder exploits the full efficiency of the existing JPEG algorithm and gives better compression for different quantization values, with security provided by CKBA encryption.
REFERENCES
[1] Nithin Thomas, David Redmill, David Bull, "Secure transcoders for single layer video data," Signal Processing: Image Communication, pp. 196-207, 2010.
[2] Huafei Zhu, "Adaptive and composable multimedia transcoders," Proceedings of the 3rd IEEE International Conference on Ubi-media Computing (U-Media), doi: 10.1109/UMEDIA.2010.5543914, pp. 113-117, 2010.
[3] Samit Desai, Usha B., "Medical image transcoder for telemedicine based on wireless communication devices," Proceedings of the 3rd IEEE International Conference on Electronics Computer Technology (ICECT), Vol. 01, pp. 389-393, 2011.
[4] John R. Smith, Rakesh Mohan, Chung-Sheng Li, "Content based transcoding of images in the internet," Proceedings of the IEEE International Conference on Image Processing (ICIP-98), Vol. 03, pp. 7-11, 1998.
[5] Richard Han, Pravin Bhagwat, Richard LaMaire, "Dynamic adaptation in an image transcoding proxy for mobile web browsing," IEEE Personal Communications, Vol. 05, Issue 6, pp. 8-17, 1998.
[6] Jui-Cheng Yen and Jiun-In Guo, "A new chaotic key-based design for image encryption and decryption," Proceedings of the IEEE International Conference on Circuits and Systems, Vol. 4, pp. 49-52, 2000.
[7] M. Sahithi, B. MuraliKrishna, M. Jyothi, K. Purnima, A. Jhansi Rani, N. Naga Sudha, "Implementation of random number generator using LFSR for high secured multi-purpose applications," (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 3(1), pp. 3287-3290, 2012.

Synthesis and Characterization of CdxHg1-xTe Ternary Semiconductor Thin Films

Prof. Vedam RamaMurthy (1,a), Alla Srivani (2,b), G. Veeraraghavaiah (3,c)
1. Prof. and Head of the Department, T.J.P.S College, Guntur, Andhra Pradesh, India
2. Assistant Professor, Vasi Reddy Venkatadri Institute of Technology (VVIT); JRF, IIT Kharagpur, India
3. Assistant Professor, P.A.S College, Pedanandipadu, Guntur, Andhra Pradesh, India

Abstract:
Photoelectrochemical deposition of CdxHg1-xTe thin films on a Te-modified Au electrode, studied using electrochemical quartz crystal microgravimetry (EQCM) and voltammetry, is described. Corrosion of pre-deposited Te electrodes under illumination at a fixed potential produced Te2- species, which was manifest from the EQCM frequency changes. The Te2- species generated by the photocorrosion reacted with Cd2+ and Hg2+ ions in the electrolyte to form CdxHg1-xTe films on the Au electrode. The effect of electrolyte composition on the composition and band gap of the CdxHg1-xTe films was studied in detail. Photoelectrochemistry, EDX and Raman spectroscopy were used for the characterization of the CdxHg1-xTe thin films.

Key Words: Photoelectrochemistry, CdxHg1-xTe, ternary semiconductor.


Introduction
Group 12-16 compound semiconductors are important in a wide spectrum of optoelectronic applications.1 In particular, ternary compounds including CdxHg1-xTe have attracted much attention in the field of solar cells due to their interesting property of band gap modulation by composition.2-6 For example, the band gap of CdxHg1-xTe ternary semiconductors can be varied from 1.7 eV (HgTe) to 2.7 eV (CdTe) with composition.2-4
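The paper gives only these end-point values; a common first-order description of such composition tuning, assumed here for illustration rather than taken from the paper, is a linearly interpolated alloy band gap with a bowing correction:

\[
E_g(x) \approx x\,E_g(\mathrm{CdTe}) + (1-x)\,E_g(\mathrm{HgTe}) - b\,x(1-x),
\]

where b is an empirical bowing parameter (b = 0 reduces to linear interpolation between the end members).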
Different methods have been used for the synthesis of ternary compounds, including vacuum techniques, chemical bath deposition and electrodeposition.1-6 In particular, electrodeposition is a simple and cost-effective approach, which can conveniently modulate the composition of the thin films through the electrolyte composition and deposition potentials.1
The photocorrosion reaction is detrimental to the long-term stability of a photoelectrode and can be prevented by a suitable redox couple.7 As an extension of previous studies,8,9 we utilized the photocorrosion reaction to synthesize ternary semiconductor CdxHg1-xTe thin films. Photoexcitation of the pre-deposited p-Te generates Te2- under light illumination, and the Te2- species thus generated react with Cd2+ and Hg2+ in the electrolyte to give CdxHg1-xTe. The composition as well as the band gap of the film varies with the electrolyte composition. This approach is validated using electrochemical quartz crystal microgravimetry (EQCM) and voltammetry.

Experimental
Details of the electrochemical instrumentation and the set-up for electrochemical quartz crystal microgravimetry (EQCM) are given elsewhere.8,9 For voltammetry and film deposition, an EG&G Princeton Applied Research (PAR) 263A instrument equipped with Model M250/270 electrochemistry software was used. For EQCM, a Seiko EG&G Model QCA 917 instrument, consisting of an oscillator module (QCA 917) and a 9 MHz AT-cut gold-coated quartz crystal (geometric area 0.2 cm2) as the working electrode, was used. A Pt wire served as the counter electrode and the reference electrode was Ag/AgCl/3 M NaCl.
Cadmium sulfate hydrate (purity 98+%), tellurium dioxide (purity 99.8%), zinc sulfate heptahydrate (purity 99%), sodium sulfate (purity 99+%) and sulfuric acid (purity 98+%) were obtained from Aldrich. All chemicals were used as received.
A Müller Elektronik-Optik halogen lamp was used as the light source. The light intensity measured on the electrode surface with a Newport 70260 radiant power meter combined with the 70268 probe was ~100 mW in all the experiments described below; this value is uncorrected for cell reflection and electrolyte absorption losses. Raman spectra were measured using the 514 nm line of an Ar+ ion laser (HORIBA LabRAM) at room temperature. Film morphology and atomic composition of the electrodeposited Te and CdxHg1-xTe were studied by energy dispersive X-ray analysis (EDX) attached to a field emission scanning electron microscope (FESEM, JEOL 6700F). Interferometric reflectance spectra were obtained with an Ocean Optics R4000 spectrometer system equipped with a fiber optic probe and a W-halogen light source.

Results and Discussion

As a prelude to the experiments on CdxHg1-xTe, the deposition and photoelectrochemical characteristics of Te on the Au substrate were studied in detail to complement our earlier studies.8,9 Figure 1A contains representative electrochemical, photoelectrochemical and EQCM data for a polycrystalline Au electrode pre-modified with Te. Te was deposited by holding an Au electrode at -0.6 V for 120 s in 0.1 M Na2SO4 electrolyte containing 10 mM TeO2. The photocurrent transients (solid line) at -0.25 V in 0.1 M Na2SO4 electrolyte are cathodic, signaling that the Te deposited on the Au surface acts as a p-type semiconductor.10 The cathodic photocurrents are accompanied by the photocorrosion of Te to Te2-, which results in mass decreases (EQCM frequency increases, dashed line).
It should be noted that the electrode mass is regained when the light is switched off in each cycle, showing that Te0 re-deposits on the Au surface in the dark. To understand this, combined cyclic voltammetry (CV) and EQCM data were obtained for a Te-modified Au electrode in 0.1 M Na2SO4. As shown in Figure 1B, the cathodic stripping of Te to Te2- is accompanied by a frequency increase (mass decrease), and the as-generated Te2- species re-deposit on the Au electrode during the return scan (from ~ -0.5 V). This explains why photogenerated Te2- is oxidized and re-deposited on the Au electrode in the dark. The frequency change with an anodic peak at 0.8 V is due to the oxidation of Te to Te4+.11,12
Under light illumination, the electrodeposited Te is stripped off due to the photocorrosion.11,13

By contrast, the addition of Cd2+ and/or Hg2+ to the electrolyte produced different frequency changes during the light illumination.


For example, irradiation of the Te-modified Au electrode in 0.1 M Na2SO4 electrolyte containing 30 mM CdSO4 at a fixed potential of -0.25 V results in a frequency increase followed by a frequency decrease (Figure 3A). The frequency decrease results from CdTe formation by the precipitation process:

Cd2+ + Te2- (by photocorrosion) → CdTe    (1)

Unlike p-Te, the photocurrent is anodic, as shown in Figure 3B, diagnosing that the CdTe thus formed is n-type.10,14-17 Again, the frequency changes systematically during the light on-off cycles, as explained before (Figure 1A).
When the electrolyte contains Hg2+ ions, similar frequency changes are observed during light illumination of the Te-modified Au electrode. Figure 4A contains chronoamperometric and EQCM data for the Te-modified Au electrode under light illumination in 0.1 M Na2SO4 electrolyte containing 50 mM HgSO4 at a fixed potential of -0.25 V. This fixed potential was selected since we observed cadmium and mercury deposition at more negative potentials in the dark. In the mercury case the frequency increases initially and then starts to decrease, which implies the formation of HgTe. Here the frequency decreases from ~40 s, which is later than in the CdTe case. We believe this is due to the difference in the solubility products of HgTe (Ksp = 4.0 x 10^-35) and CdTe (Ksp = 1.0 x 10^-27).18


Figure 4B shows that the HgTe formed by photocorrosion of Te followed by the precipitation reaction is n-type, since the photocurrent is anodic.19 It should be mentioned that the CdTe and HgTe synthesized by photoelectrodeposition are not stable in blank Na2SO4 electrolyte, which can be seen from the decay in photocurrent (Figures 3B and 4B). The photocorrosion can be inhibited using a suitable photoelectrolyte (redox couple).
Next, CdxHg1-xTe films were synthesized by the same approach described above. When the Te-modified Au electrode is subjected to photocorrosion in 0.1 M Na2SO4 electrolyte containing 1 mM CdSO4 and 25 mM HgSO4 at a fixed potential of -0.25 V, CdxHg1-xTe films are synthesized, as indicated by the frequency decrease (Figure 5A). The photoelectrochemical behavior in Figure 5B is also consistent with n-type behavior for the as-synthesized CdxHg1-xTe films.3
It is well known that the band gap of CdxHg1-xTe depends on the film composition.3,4 The composition of the CdxHg1-xTe thin films synthesized by the approach developed here was determined by EDX, Raman spectroscopy and UV reflectance spectroscopy. Figure 6A clearly shows that the film composition, and therefore the band gap, can be modulated by controlling the electrolyte composition. As shown in the figure, the ratio of Hg/Cd in the films increases with the ratio of Hg/Cd in the electrolyte. In addition, the band gap has been modulated by the electrolyte composition.

Conclusion
We have demonstrated the photoelectrochemical deposition of CdxHg1-xTe thin films on a Te-modified Au electrode using photocorrosion. Unlike previous studies on binary semiconductors, the composition and band gap of the ternary semiconductor have been modulated by controlling the composition of the electrolyte. We also presented a new route for the synthesis of CdTe and HgTe films using a photoelectrochemical approach. Finally, EQCM combined with amperometry proved effective for studying the photoelectrochemical behavior of binary and ternary semiconductors.

References
1. (a) Rajeshwar, K. Adv. Mater. 1992, 4, 23. (b) Lee, J.; Kim, Y.; Kim, K.; Huh, Y.; Hyun, J.; Kim, H. S.; Noh, S. J.; Hwang, C. Bull. Korean Chem. Soc. 2007, 28, 1091.
2. (a) Krishnan, V.; Ham, D.; Mishra, K. K.; Rajeshwar, K. J. Electrochem. Soc. 1992, 139, 23. (b) Chae, D.; Seo, K.; Lee, S.; Yoon, S.; Shim, I. Bull. Korean Chem. Soc. 2006, 27, 762.
3. Natarajan, C.; Nogami, G.; Sharon, M. Thin Solid Films 1995, 261, 44.
4. Chandramohan, R.; Mahaligam, T.; Chu, J. P.; Sebastian, P. J. Solar Energy Mater. Solar Cells 2004, 81, 371.
5. Loglio, F.; Innocenti, M.; Pezzatini, G.; Foresti, M. L. J. Electroanal. Chem. 2004, 562, 117.
6. Kaschner, A.; Strassburg, M.; Hoffmann, A.; Thomsen, C.; Bartels, M.; Lischka, K.; Schikora, D. Appl. Phys. Lett. 2000, 76, 2662.
7. Licht, S. In Semiconductor Electrodes and Photoelectrochemistry; Licht, S., Ed.; Wiley: Weinheim, Germany, 2002; Vol. 6, p 325.
8. Ham, S.; Choi, B.; Paeng, K.; Myung, N.; Rajeshwar, K. Electrochem. Commun. 2007, 9, 1293.
9. Ham, S.; Paeng, K.; Park, J.; Myung, N.; Kim, S.; Rajeshwar, K. J. Appl. Electrochem. 2008, 38, 203.
10. Myung, N.; de Tacconi, N.; Rajeshwar, K. Electrochem. Commun. 1999, 1, 42.
11. Rabchynski, S. M.; Ivanou, D. K.; Streltsov, E. A. Electrochem. Commun. 2004, 6, 1051.
12. Myung, N.; Wei, C.; Rajeshwar, K. Anal. Chem. 1992, 64, 2701.
13. Streltsov, E. A.; Poznyak, S. K.; Osipovich, N. P. J. Electroanal. Chem. 2002, 518, 103.
14. Lade, S. J.; Uplane, M. D.; Lokhande, C. D. Mater. Chem. Phys. 2001, 68, 36.
15. Kazacos, M. S.; Miller, B. J. Electrochem. Soc. 1980, 127, 2378.


MANET: A RELIABLE NETWORK IN DISASTER AREAS

K. BHARGAVA
IV B.Tech, ECE, BITS, Adoni

D. KHADHAR BASHA
IV B.Tech, ECE, BITS, Adoni

Abstract
The role of telecommunications in disaster reduction is critical for improving the timely flow of the crucial information needed for appropriate assistance to be delivered before, during and after a disaster. The breakdown of essential communications is one of the most widely shared characteristics of all disasters; the collapse of communications infrastructure, due to the collapse of antennas, buildings, power supplies etc., is the usual effect of a disaster. This paper describes a communication network that is appropriate, suitable and reliable in any disaster: the Mobile Ad hoc Network (MANET). A mobile ad hoc network is formed dynamically by an autonomous system of mobile nodes connected via wireless links, without using existing network infrastructure or centralized administration. The ease of deployment and the infrastructure-less nature of the network are some of the reasons why MANET is recommended in disaster areas.

Introduction
Disasters are of varying intensity and occurrence, ranging from less frequent events such as earthquakes and volcanic eruptions to more frequent ones such as floods, fires, droughts, cyclones and landslides, besides industrial accidents and epidemics. Disasters kill at least one million people each decade and leave millions more homeless (ITU, TDBEC). When disaster strikes, communication links are often disrupted, yet for disaster relief workers these links are essential in order to answer critical questions: how many people have been injured or died, where the injured are located, and the extent of the medical help needed. In disaster and emergency situations, communications can save lives.
The collapse of communications infrastructure, due to the collapse of antennas, buildings, power supplies etc., is the usual effect of a disaster. Whether partial or complete, the failure of communications infrastructure leads to preventable loss of life and damage to property by causing delays and errors in emergency response and disaster relief efforts (Townsend and Moss, 2005). Yet despite the increasing reliability and resiliency of modern communications networks to physical damage, the risk associated with communications failures remains serious because of the growing dependence upon these tools in emergency operations. One solution to this problem is the use of wireless communication systems.
In wireless networking there are two main architectures: infrastructure networks and mobile ad hoc networks (MANETs). Infrastructure networks include cellular networks and wireless local area networks, where users are connected via base stations/access points and backbone networks. Although users can hand over between base stations or access points and roam among different networks, their mobility is limited to the coverage areas of the base stations or access points, as illustrated in Figure 1. Ad hoc networks, on the other hand, exclude the use of a wired infrastructure: mobile nodes can form arbitrary networks on the fly to exchange information without any pre-existing network infrastructure, extending communication beyond the limits of infrastructure-based networks.
A mobile ad hoc network (MANET) is defined by the Internet Engineering Task Force (IETF) as an autonomous system of mobile nodes connected by wireless links, in which each node operates as both an end system and a router for all other nodes in the network (IETF, MANET). As illustrated in Figure 2, it is a collection of wireless mobile nodes which dynamically form a temporary network without the aid of any established infrastructure or centralized administration.


These characteristics make MANET suitable for mission-critical applications such as disaster recovery, crowd control, search and rescue, and automated battlefield communications.
In a general mobile network, which consists of wireless access networks and interconnecting backbone networks, the mobile terminals are connected to the base stations (access points) by wireless access networks, and the base stations are connected to the wired backbone networks (Woerner and Howlader, 2001). These systems have drawbacks when large-scale disasters, such as earthquakes, occur. Communications may be impossible if the base stations or other elements of the infrastructure comprising these networks are damaged; even if the infrastructure is not damaged, spikes in traffic and congestion may render communication virtually impossible. It is very important and necessary that communication networks be restored in such areas, yet repairing the infrastructure can be time-consuming and expensive and lead to unnecessary loss of lives and property. The use of mobile ad hoc networks in such disaster areas therefore becomes imperative. The ease and speed of deployment, the cost-effectiveness of setting up a MANET, and its outstanding terminal portability and mobility make the mobile ad hoc network of great use in disaster areas (Bauer et al., 2004).
Communication and sharing of information in emergencies are important and should not be taken lightly, as lives can be either lost or saved depending on the actions taken. MANET can enable communication among temporarily assembled user terminals without relying on conventional communication infrastructure. It is therefore important to configure a communication network that offers sufficient QoS after a catastrophic disaster, using an ad hoc network to help protect people. We therefore present an overview of the Mobile Ad hoc Network (MANET) and the characteristics and features that render it highly useful and reliable in disaster areas.

Figure 1: Infrastructure wireless network.


Figure 2: Mobile infrastructure-less wireless network

MANET features
MANET has the following features that make it a useful and reliable network in disaster areas:
Autonomous terminal: In MANET each mobile terminal is an autonomous node, which may function as both a host and a router. In other words, besides its basic processing ability as a host, a mobile node can also perform switching functions as a router, so endpoints and switches are usually indistinguishable in MANET (Abolhassan, Wysocki and Dutkiewicz, 2004).
Distributed operation: The control and management of the network is distributed among the terminals, since there is no access point or base station for central control of network operations (Chakrabarti and Mishra, 2001). The nodes involved in a MANET should collaborate amongst themselves, and each node acts as a relay as needed to implement functions such as security and routing.
Ease of deployment: MANET does not depend on any established infrastructure or centralized administration. Each node operates in distributed peer-to-peer mode, acts as an independent router and generates independent data. This makes it possible to apply ad hoc networking anywhere there is little or no communication infrastructure, or where the existing infrastructure is expensive or inconvenient to use. Ad hoc networking allows devices to maintain connections to the network as well as easily adding and removing devices to and from the network (Corson and Maker, 1999).
Multihop routing: Basic ad hoc routing algorithms can be single-hop or multihop, based on different link layer attributes and routing protocols. In MANET there is no default router available; every node acts as a router and forwards other nodes' packets to enable information sharing between mobile hosts (Murthy and Garcia, 1996). A minimal sketch of multihop route discovery is given after this list.
Dynamic network topology: Since the nodes are mobile, the network topology may change rapidly and unpredictably, and the connectivity among the terminals may vary with time. MANET adapts to the traffic and propagation conditions as well as to the mobility patterns of the mobile network nodes (Royer and Toh, 1999). The mobile nodes dynamically establish routing among themselves as they move about, forming their own network on the fly.
Network scalability: Currently popular network management algorithms were mostly designed to work on fixed or relatively small wireless networks, whereas many mobile ad hoc network applications involve large networks with tens of thousands of nodes, as found, for example, in sensor networks and tactical networks.

Scalability is critical to the successful deployment of these networks (Campbell, Conti and Giordano, 2003). The steps toward a large network consisting of nodes with limited resources are not straightforward, and they present many challenges that are still to be solved in areas such as addressing, routing, location management, configuration management, interoperability, security and high-capacity wireless technologies.
Light-weight terminals: In most cases the MANET nodes are mobile devices with limited CPU processing capability, small memory size and low power storage (Chang and Tassiulas, 2000). Such devices need optimized algorithms and mechanisms that implement the computing and communicating functions.
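As a minimal illustration of the multihop-routing feature above (the topology and node names are invented for this sketch), breadth-first flooding finds a relay path between two nodes that are out of direct radio range:

from collections import deque

def find_route(links, src, dst):
    # links maps each node to the set of neighbours it can hear directly
    parent, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in links[node]:
            if nb not in parent:
                parent[nb] = node
                frontier.append(nb)
    return None  # no route: the network is partitioned

links = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B', 'D'}, 'D': {'C'}}
print(find_route(links, 'A', 'D'))  # ['A', 'B', 'C', 'D']: B and C act as relays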

MANET applications
The application of MANET systems is not limited to disaster areas: with the spread of portable devices and progress in wireless communication, ad hoc networking is gaining importance through an increasing number of widespread applications. Ad hoc networking can be applied anywhere there is little or no communication infrastructure, or where the existing infrastructure is expensive or inconvenient to use. The set of applications for MANETs is diverse, ranging from large-scale, mobile, highly dynamic networks to small, static networks that are constrained by power sources. Table 1 gives a brief description of the various application areas of MANET.

Table 1: Mobile Ad hoc Network Applications

Application: Emergency Services
Possible Services: search and rescue operations; disaster recovery; replacement of fixed infrastructure in case of environmental disasters; policing and fire fighting; supporting doctors and nurses in hospitals.

Application: Tactical Networks
Possible Services: military communications and operations; automated battlefields.

Application: Commercial and Civilian environments
Possible Services: e-commerce (electronic payments anytime and anywhere); business (dynamic database access, mobile offices); vehicular services (road and accident guidance, transmission of road and weather conditions, taxi cab network); sports stadiums, trade fairs, shopping malls; networks at construction sites.

Application: Home and enterprise networking
Possible Services: home/office wireless networking; conferences, meeting rooms; personal area networks (PAN), personal networking (PN).

Application: Education
Possible Services: universities and campus settings; virtual classrooms; ad hoc communications during meetings or lectures.

Application: Entertainment
Possible Services: multi-user games; outdoor internet access; robotic pets.

Application: Sensor networks
Possible Services: home applications (smart sensors and actuators embedded in consumer electronics); data tracking of environmental conditions, animal movements, chemical/biological detection; body area networks (BAN).


Bluetooth
Two standards are emerging for ad hoc wireless networks: the IEEE 802.11 standard for WLANs, and the Bluetooth specifications for short-range wireless communications (IEEE 802.11, WLAN).

Figure 3: Bluetooth range

The Bluetooth system can manage a small number of low-cost point-to-point and point-to-multipoint communication links over a distance of up to 10 m with a transmit power of less than 1 mW. It operates in the globally available unlicensed ISM (industrial, scientific, medical) frequency band at 2.4 GHz and applies frequency hopping to transmit data over the air, using a combination of circuit and packet switching (Conti, 2003).
Bluetooth technology is a de facto standard for low-cost, short-range radio links between mobile PCs, mobile phones and other portable devices. The Bluetooth Special Interest Group (SIG) releases the Bluetooth specifications, which were established by the joint effort of over two thousand industry-leading companies, including 3Com, Ericsson, IBM, Intel, Lucent, Microsoft, Motorola, Nokia and Toshiba, under the umbrella of the Bluetooth SIG. In addition, the IEEE 802.15 Working Group for Wireless Personal Area Networks approved its first WPAN standard derived from the Bluetooth specification; the IEEE 802.15.1 standard is based on the lower portions of the Bluetooth specification (IEEE 802.15, WPAN).
A Bluetooth unit, integrated into a microchip, enables wireless ad hoc communication of voice and data between portable and/or fixed electronic devices such as computers, cellular phones, printers and digital cameras (IEEE 802.16, WG). Due to its low-cost target, Bluetooth microchips may become embedded in virtually all consumer electronic devices in the future.


Conclusion
Mobile ad hoc networks can be used in emergency/rescue operations for disaster relief efforts, e.g. in fire, flood or earthquake. Emergency rescue operations take place where communications infrastructure is non-existent or damaged and rapid deployment of a communication network is needed; information is relayed from one rescue team member to another over small handheld devices.
This paper has provided a comprehensive overview of a network that is reliable for, and recommended to be used in, disaster areas.
Most of the time disasters do not announce themselves and people are caught unaware. The establishment of an infrastructure communication network will be almost impossible, especially in remote areas where one never existed. MANET therefore offers the best approach to a communications solution, and the importance of communications during a disaster cannot be over-emphasized.

References
Abolhassan M, Wysocki T and Dutkiewicz E (2004), "A review of routing protocols for mobile ad hoc networks," Ad Hoc Networks 2(1), pp. 1-22.
Bauer J, Lin Y, Maitland C, Tarnacha A (2004), "Transition Paths to Next-Generation Wireless Services," 32nd Research Conference on Communication, Information and Internet Policy. http://quello.msu.edu/wp/wp-04-04.pdf
Cali F, Conti M, Gregori E (2000), "Dynamic tuning of the IEEE 802.11 protocol to achieve a theoretical throughput limit," IEEE/ACM Transactions on Networking 8(6), pp. 785-799.
Campbell A, Conti M and Giordano S (2003), "Special issue on mobile ad hoc network," ACM/Kluwer MONET 8(5).
Chakrabarti S and Mishra A (2001), "QoS issues in ad hoc wireless networks," IEEE Communications Magazine, 39(2), pp. 142-148.
Chang J and Tassiulas L (2000), "Energy conserving routing in wireless ad-hoc networks," in: Proceedings of IEEE INFOCOM, pp. 22-31.


Energy Management of Induction Motor Using Time Varying Analysis

S. Sankar [1], M. Sakthi [2], J. SuryaKumar [3], A. DineshBabu [4], G. Sudharsan [5]

Abstract: This paper describes an induction motor drive using a pulse width modulated signal with an energy conservation system. The op-amp capabilities of a microcontroller are used to realize a very inexpensive and efficient controller. The aim of the overall design is to improve the performance of industrial machinery by analyzing and designing an optimal stator current controller for the induction motor, which minimizes the stator current under different loading conditions. The output load current is controlled, and the energy use of the induction motor is managed using the op-amp based closed-loop control system.

The controller also uses a quadrature encoder to allow the user to input the desired speed of the motor into the system [3]. The microcontroller is the main circuit component between the user input from the quadrature encoder and the motor: it reads the input from the quadrature encoder and creates a comparator signal based upon the desired speed of the motor. The comparator signal is output through the low pass filter to the motor. The output voltage from the tachometer is fed back to the PIC and used to adjust the speed of the motor to the desired speed, forming a feedback loop in the system.

Index Terms: Induction motor, energy conservation, microcontroller, comparator.

I. INTRODUCTION
The induction motor is a simple and robust machine, but its control can be a complex task: when managed directly from the line voltage, the motor operates at a nearly constant speed. To obtain speed and torque variations, it is necessary to modify both the voltage and the frequency. Following this was the realization that the ratio of the voltage and frequency should be approximately constant [1].

The realized system is set up to give speed feedback to the user as well as to the controller. When the system is operating, the red LED on the control board is lit when the maximum duty cycle is fed into the motor. This indicates that the motor cannot spin any faster than the current speed, letting the user know when the maximum speed has been reached for an applied motor load [4].

The induction motor also has the disadvantages of higher losses and lower efficiency when it works at variable speeds. The need for efficient drive systems is met by special controllers that not only reduce the losses and improve the efficiency, but also search for the optimal values of stator current so that the power consumption from the source is minimized [2]. The trend of designing optimal controllers developed due to increasing power consumption, which represents one of the most important problems in the world owing to the decrease in power resources over the last few decades. Studies have proved that there is a possibility to decrease the power consumption and increase the efficiency of the induction motor.

The motor speed is increased with counterclockwise rotation of the quadrature encoder knob and decreased with clockwise rotation of the knob. When the knob is pushed, the yellow LED on the control board is lit; otherwise the pushbutton is not utilized in this project. The default speed of rotation when the system is reset is 20 rpm. The maximum and minimum speeds are 32 and 10 rpm, respectively.

II. SIMULATION MODEL OF ENERGY SAVING OF INDUCTION MOTOR

In order to improve the efficiency of the driving signal, the signal is passed through a low pass filter to smooth it and round out the current spike that is applied to the motor on every rising edge of the comparator signal. Rounding out the current spike also has the effect of lengthening the motor life, because the unfiltered current spike would cause increased wear and tear on the motor compared to a DC input.

III. OPEN LOOP SYSTEM OF INDUCTION MOTOR

A low pass filter is needed to run the motor in order to increase its efficiency. The increase in efficiency is due to the reduction of the current spike applied to the motor on every rising edge of the control signal [5]. Rounding off the current spike also has the added benefit of increasing the lifetime of the motor compared to a motor run with a control mode but no low pass filter.
In order to determine the values of the components for the low pass filter, the operating frequency had to be determined first. A search yielded the following equation for calculating the fundamental operating frequency for running the motor [6]:




f = R / (2·L·ln(1/(1 − P/100)))

where R is the resistance of the motor, L is the inductance of the motor, P is the permissible ripple expressed as a percentage, and v1 is the magnitude of the input voltage.

Fig.1. Energy saving of induction motor (simulation circuit: 230 V, 50 Hz AC source, MUR130 diode bridge, MCR230M SCR gated through an MCT2E optocoupler, and series R-L motor load).


The AC voltage that the motor runs on is 230 V. The maximum voltage that the motor will be run with in this project is 30 V; this is due to ease of design and reduces the number of voltage inputs necessary to the board. The motor has a built-in tachometer that ranges from 10 V to 230 V; the peak tachometer output voltage for the 180 V input is 230 V. The motor draws a maximum current of 5.11 A at stall and has an inductance of 73.8 mH. The motor also has a 96:1 gear ratio and can produce up to 5.0 oz·in of torque (85.4 mN·m). In order to determine the constant for converting the tachometer value to the rpm of the motor, experimental data was gathered. Experiments running the motor also show that the motor stalls at a 98% duty cycle and runs at peak speed between 0% and 15% duty cycle.

For the motor used, R1 = 2.5 Ω and L1 = 2.1 mH. This gives the required frequency using 50% ripple, and from the controller a frequency of 1.22 kHz is available [7]. The simulation circuit of the open loop controlled system is shown in Fig. 1, with the firing angle alpha varied at 40°. The peak output voltage and current are shown in Fig. 2.
The simulation was also run at a comparator frequency of 10 kHz to provide an additional set of test data. The R and L values needed for the desired output were found to be 7.8 kΩ and 6 mH, respectively.
The low pass filter circuit was constructed on a breadboard and tested with the motor to ensure that the motor would run efficiently. The motor was found to run at varying speeds from 15% to 98% duty cycle. The circuit was also tested at 10 kHz; the motor then operated at varying speeds from 50% to 75% duty cycle, these speeds being slower than those for the 1 kHz frequency. This proved that the frequency and component values provided the desired motor response and a wide operating range.




The incoming analog signal must be within 0 to 5 V, which means that each digital number corresponds to about 0.0195 V. The maximum tachometer value from the motor running at 15 V is 6.2 V. A potentiometer is used as a voltage divider in order to give a maximum output value of 5 V when the tachometer gives 6.2 V. Operation at a firing angle alpha of 140° is shown in Fig. 4, which shows the peak-to-peak output voltage and output current waveforms.

Fig.2. Peak-to-peak output voltage and current waveforms at alpha = 140°.

In order to drive the motor using the modulated signal, a FET switch was used to provide the current that the PIC alone cannot supply. The FET is an IPS5551T, and the datasheet shows that the switch can easily handle the maximum current draw of the motor (7.11 A) with a maximum current output of 8 A at 85 °C. The switch inverts the duty cycle from the controller during operation, which makes 98% duty cycle the slowest drive cycle instead of 2% as expected. It should be noted that the FET has to sink a large amount of current when the motor is running at a slow speed; this is the reason the program allows a minimum speed of 10 rpm. The program could easily be modified for slower speeds, but a heat sink for the FET would be required.
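Because the FET stage inverts the control signal, the firmware has to pre-invert the commanded duty cycle before loading the PWM peripheral. A minimal sketch of that mapping (the function name and the percent representation are our assumptions):

/* The FET inverts the PWM signal, so the value written to the PWM
   peripheral is the complement of the duty the motor actually sees.
   Limits follow the text: 98% is the slowest usable drive cycle. */
unsigned int pwm_register_duty(unsigned int motor_duty_percent)
{
    if (motor_duty_percent > 98u) motor_duty_percent = 98u;
    if (motor_duty_percent < 2u)  motor_duty_percent = 2u;
    return 100u - motor_duty_percent;  /* undo the FET inversion */
}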

Fig.4. Peak-to-peak output voltage and current waveforms.

The tachometer value is now within 0 to 5 V and can be fed into the ADC port on the controller. The conversion factor giving the speed of the motor must now be modified so that the controller has an accurate speed value from the motor. The original conversion factor of 5.0375 must be multiplied by 0.0195 to convert the digital number to a voltage, and also by 6.2/5 to take into account the voltage reduction of the potentiometer. Multiplying all three numbers together gives a conversion factor of 0.12 for the controller.
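The arithmetic behind this factor can be verified in a few lines of C (variable names are illustrative only):

#include <stdio.h>

int main(void)
{
    double base_factor   = 5.0375;      /* original tachometer-to-rpm factor  */
    double volts_per_lsb = 5.0 / 256.0; /* about 0.0195 V per ADC count       */
    double divider_gain  = 6.2 / 5.0;   /* undo the potentiometer attenuation */
    printf("factor = %.4f\n", base_factor * volts_per_lsb * divider_gain);
    /* prints 0.1220, i.e. the 0.12 used by the controller */
    return 0;
}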

The microcontroller is able to output a maximum of 5 V for the op-amp controller. This voltage is 10 V below the 15 V source voltage on the FET, and the difference between the two must be less than 4 V when the modulated signal is high in order for the FET to switch. To solve this problem, an op amp with a non-inverting gain was added to increase the maximum modulated voltage to about 12 V. The resulting voltage level is sufficient to allow normal switching of the FET. The RMS output voltage and output current waveforms are shown in Fig. 3.

Fig.3. RMS output voltage and current waveforms.

Fig.5. RMS output voltage and current waveforms.

In order for the tachometer to be accurately read at slow motor speeds, a decoupling capacitor must be used to decouple the tachometer from the AC input to the motor. The RMS output voltage and current are shown in Fig. 5. The capacitor value is large because a 1 µF capacitor did not sufficiently smooth the voltage reading, and 100 µF was the next largest readily available capacitor value that sufficiently smoothed the motor output. The modulated signal at the input of the motor causes the output of the tachometer to not be smooth, whereas a pure AC voltage driving the motor would give a smooth tachometer output. The modulated final output of peak-to-peak voltage and current waveforms is shown in Fig. 6.


In order to create a feedback loop between the controller and the motor, the PIC must be able to read the tachometer on the motor and transform the data into the speed of the motor. The controller has a built-in analog-to-digital converter (ADC). From experiments provided with the controller board, the data on the ADC port is stored as an 8-bit number (0-255).
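A minimal sketch of one iteration of the resulting speed loop is shown below; adc_read() and set_duty() stand in for the PIC peripheral routines, and the single-step adjustment policy is our assumption, not the authors' firmware:

extern unsigned int adc_read(void);         /* placeholder ADC read       */
extern void set_duty(unsigned int percent); /* placeholder PWM duty write */

void speed_loop_step(double desired_rpm, unsigned int *duty_percent)
{
    double rpm = 0.12 * adc_read();   /* conversion factor derived above */
    if (rpm < desired_rpm && *duty_percent < 98u)
        (*duty_percent)++;            /* too slow: raise the duty cycle  */
    else if (rpm > desired_rpm && *duty_percent > 2u)
        (*duty_percent)--;            /* too fast: lower the duty cycle  */
    set_duty(100u - *duty_percent);   /* FET stage inverts the signal    */
}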




Fig.6. Peak-to-peak output voltage and current waveforms.

Fig.7. RMS output voltage and current waveforms.

The RMS values of the output voltage and output current waveforms are shown in Fig. 7.
Fig.8. Closed loop control of the induction motor (simulation schematic: 230 V, 50 Hz source, MUR130 diodes, MCR230M SCR gated through an MCT2E optocoupler, op-amp comparator stages and AND-gated pulse sources driving the gate).




Fig.9. Output current and triggering pulses.

The controller was designed after the system had been fully breadboarded, but before the software had been fully implemented. At that point the potentiometer had not yet been added to the circuit to reduce the 6.2 V maximum tachometer output to 5 V, which is why it is not included in the schematics.

J(W) = λ·Jr(W) + (1 − λ)·Jp(W)        (1)

where λ is a scalar parameter with values 0 or 1. When λ = 1, (1) reverts to the error norm of the RRLS algorithm, whereas for λ = 0, (1) becomes the error norm of the PA-RLS algorithm. Thus a careful choice of λ provides a mechanism to mitigate the disturbance problem of outliers in the RRLS algorithm. However, since impulsive noise is sparse and the parameter changes can be continuous, it is preferable for the PA-RLS algorithm to dominate almost all the time, and for the RRLS algorithm to be active only in the intervals in which impulsive noise is detected. Namely, the component of n(k) belonging to the impulsive noise can be defined as n(k) = ξ(k)·A(k), where ξ(k) is a binary independent identically distributed occurrence process with probabilities P[ξ(k) = 1] = c and P[ξ(k) = 0] = 1 − c, c is the arrival probability, and A(k) is a process with symmetric amplitude distribution which is uncorrelated with ξ(k). Starting from such an additive noise structure, we propose the following strategy for the selection of λ:

λ = 1,  if c·med{|e(k)|; nH} < mean{|e(k)|; nH}
λ = 0,  if c·med{|e(k)|; nH} ≥ mean{|e(k)|; nH}        (2)

Here c is a proportionality constant, depending on the variance of the nominal Gaussian component of the noise n(k) and on the impulsive contamination noise variance. The median of the error signal, med{|e(k)|; nH}, and the mean of the error signal, mean{|e(k)|; nH}, are calculated on a sliding window of length nH. The length of the sliding window is selected in such a way that the probability of an outlier occurrence within the window is very low. In other words, the aim of the value determination is the detection of impulsive noise occurrences. By a proper sliding window length selection, a significant difference between med{|e(k)|; nH} and mean{|e(k)|; nH} is achieved in the case of an impulsive noise occurrence. When choosing the constant c, one should take into account that its value should rise with the additive Gaussian noise variance, in order to minimise the probability of false outlier detections; on the other hand, c should not be very large, so that outliers with lower amplitudes can still be detected.

The proposed algorithm, in essence, represents a combination of the PA-RLS and RRLS algorithms, in which PA-RLS is dominant almost all the time, since most of the measurement residual data is normally distributed. PA-RLS efficiently tracks the changing values of the estimated filter parameters and updates the value of the forgetting factor accordingly. At the moment of outlier detection, λ assumes the value 1, implying application of the RRLS algorithm. In addition, the VFF retains its previous value for the subsequent nH samples, which is necessary for the mean calculation to be insensitive to the detected outlier. In other words, a detected outlier has to be outside the sliding data window, since the mean value is very sensitive to its presence. On the contrary, the median is a robust estimate, insensitive to outliers [7]. Thus, when an outlier is present, the discrepancy between the calculated median and mean will be very large, providing a basis for outlier detection.

Fig. 10 Time-varying parameter
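A minimal C sketch of the selection rule (2), assuming a window length nH of 21 samples and treating c as a tuning constant; the sorting-based median is only one possible implementation:

#include <stdlib.h>
#include <string.h>

#define NH 21  /* sliding window length (assumed value) */

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* e_abs holds |e(k)| over the current window.
   Returns lambda: 1 -> outlier detected, apply RRLS; 0 -> PA-RLS. */
int select_lambda(const double e_abs[NH], double c)
{
    double w[NH], mean = 0.0, med;
    int i;
    for (i = 0; i < NH; i++) mean += e_abs[i];
    mean /= NH;
    memcpy(w, e_abs, sizeof w);
    qsort(w, NH, sizeof w[0], cmp_double);
    med = w[NH / 2];                 /* window median */
    return (c * med < mean) ? 1 : 0; /* rule (2)      */
}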

In the simulations only the first FIR filter parameter is changed. This change is defined in Fig. 10: the parameter value remains constant and equal to 0.1 for the first 1100 steps and then rises linearly to reach 0.4 at step 1400. After decreasing abruptly at step 1650 and remaining equal to 0.1 for 300 steps, it increases linearly twice as fast as before. For the last 300 steps it experiences an abrupt change, as shown in Fig. 10. Fig. 11 shows the simulation results before compensation. As seen from Fig. 12, a sag of 50% is considered in all phases of the terminal voltages for five cycles. The simulation results for both the conventional topology and the proposed modified topology are presented in



this section for better understanding and comparison of the two topologies.
Fig. 11 shows the simulation results before compensation: the load currents and the terminal (PCC) voltages. The terminal voltages are unbalanced and distorted because of these load currents.

The load voltages after compensation are shown in Fig. 13 along with the phase-a source voltage. The sag in the source voltages is mitigated by the voltages injected by the series active filter, and the load voltages are maintained at the desired voltage. The filter currents are injected into the PCC to make the source currents balanced and sinusoidal. After compensation the source currents are balanced and sinusoidal, though the switching frequency components remain. The load voltages are maintained at the desired voltage using the series active filter; the peak-to-peak voltage across the inductor is 560 V, which is far lower than the voltage across the inductor in the conventional topology.
IV. CONCLUSION
Efficiency optimization is very much essential, and not only for electrical systems; all systems benefit from it in terms of money and also in the reduction of global warming. This paper presented energy conservation of the induction motor and a review of developments in the field of efficiency optimization of the three-phase induction motor through optimal control and design techniques. Optimal closed loop control covered both broad approaches, namely loss model control and search control. Optimal design covers the design modifications of materials and construction in order to optimize the efficiency of the motor.
Fig.11. Simulation results before compensation: (a) load currents, (b) terminal voltages.

Fig.12. Sag of 50% considered in all phases of the terminal voltages for five cycles.

Fig.13. Simulation results after compensation.

References
[1] R. Fei, E. F. Fuchs, H. Haung, "Comparison of two optimization techniques as applied to three-phase induction motor design," IEEE/PES Winter Meeting, New York, 2008.
[2] K. Schittkowski, "NLPQL: a Fortran subprogram solving constrained nonlinear programming problems," Annals of Operations Research, Vol. 5, 2005, pp. 485-500.
[3] J. Faiz, M.B.B. Sharifian, "Optimal design of three-phase induction motors and their comparison with a typical industrial motor," Computers and Electrical Engineering, Vol. 27, 2009, pp. 133-144.
[4] O. Muravlev, et al., "Energetic parameters of induction motors as the basis of energy saving in a variable speed drive," Electrical Power Quality and Utilization, Vol. IX, No. 2, 2009.
[5] Christian Koechli, et al., "Design optimization of induction motors for aerospace applications," IEEE Conf. Proc. IAS, 2008, pp. 2501-2505.
[6] W. Jazdzynski, "Multicriterial optimization of squirrel-cage induction motor design," IEE Proceedings, Vol. 136, Part B, No. 6, 2009.
[7] K. Idir, et al., "A new global optimization approach for induction motor design," IEEE Canadian Conf. Proc. Electrical and Computer Engineering, 2007, pp. 870-873.



ELECTROCHEMICAL MACHINING OF SS 202
Bharanidharan (1), Jhonprabhu (2)
Department of Mechanical Engineering
M. Kumarasamy College of Engineering, Karur - 639 113
Email: bharanidharan1021@gmail.com, bharanivijay11@gmail.com

ABSTRACT

The aim of this project is to study the material removal rate in an electrochemical machining process of SS 202 material. When the electrodes are immerged in the electrolyte, electrons are removed from the anode and deposited in the electrolytic tank. The material removal rate is dependent on three factors:
1) Power supply,
2) Distance between anode and cathode,
3) Depth of immersion.

Results

Results indicated that when the immersion depth is increased the material removal rate decreased, and when the electrode distance is increased the material removal rate increased. When the voltage is increased, the material removal rate is also increased, so the material removal rate is dependent upon the voltage, the immerged distance and the electrode distance. After the first reading was taken, the electrode was immerged in the electrolyte again at a greater distance, so that the electrons are removed from the anode for the given distance; the electrode distance was also varied.


CHAPTER-1
INTRODUCTION

IMPORTANCE OF ECM:
In the electrochemical machining process there is no residual stress induced in the work piece, whereas in other machining processes like the lathe residual stresses are induced. There is no tool wear; machining is done at low voltages compared to other processes, with a high metal removal rate; parameters can be controlled; hard conductive materials can be machined into complicated profiles; the work piece structure suffers no thermal damage; and the process is suitable for mass production work with low labour requirements.

CHAPTER-2
LITERATURE REVIEW

M.H. Abdel-Aziz - May 2014:
The effect of electrode oscillation on the rate of diffusion-controlled anodic processes such as electro polishing and electrochemical machining was studied by measuring the limiting current of the anodic dissolution of a vertical copper cylinder in phosphoric acid. The parameters studied were the frequency and amplitude of oscillation, the electrode dimensions and the phosphoric acid concentration. Within the present range of operating conditions, oscillation was found to enhance the rate of anodic dissolution up to a maximum of 7.17, depending on the operating conditions. The mass transfer coefficient of the dissolution of the oscillating vertical copper cylinder in H3PO4 was correlated to the other parameters by the equation Sh = 0.316·Sc^0.33·Rev^0.64. The importance of the present study for increasing the rate of production in electro polishing, electrochemical machining, and other electrometallurgical processes limited by anode passivity due to salt formation, such as electro refining of metals, was highlighted.


F. Klocke - 2013:
In order to increase the efficiency of jet engines, hard to machine nickel-based and titanium-based alloys are in common use for aero engine components such as blades and integrated blisks (blade disks). Electrochemical Machining (ECM) is a promising alternative to milling operations. Due to a lack of appropriate process modeling capabilities, a still knowledge based and cost intensive cathode design process has to be passed through beforehand. Therefore this paper presents a multi-physical approach for modeling the ECM material removal process by coupling all relevant conservation equations. The resulting simulation model is validated by the example of a compressor blade. Finally a new approach for an inverted cathode design process is introduced and discussed.

M. Burger - January 2012:
Nickel-base single-crystalline materials such as LEK94 possess excellent thermo-mechanical properties at high temperatures combined with low density compared to similar single-crystalline materials used in aero engines. Since the components of aero engines have to fulfill demanding safety standards, the machining of the material used for these components must result in a high geometrical accuracy in addition to a high surface quality. These requirements can be achieved by electrochemical machining (ECM/PECM). In order to identify proper and precise machining parameters for PECM, the characteristics dependent on the microstructure and the chemical homogeneity of LEK94 are investigated in this contribution. The current density was found to be the major machining parameter affecting the surface quality of LEK94; it depends on the size of the machining gap, the applied voltage and the electrical conductivity of the electrolyte used. Low current densities yield inhomogeneous electrochemical dissolution of different microstructural areas of the material and lead to rough surfaces. High surface qualities can be achieved by employing homogeneous electrochemical dissolution, which can be undertaken at high current densities. Furthermore, a special electrode was developed for the improvement of the quality of side-gap machined surfaces.

CHAPTER-3
ELECTROCHEMICAL MACHINING PROCESS:

fig.3.1

In ECM, the principles of electrolysis are used to remove metal from the work pieces. FARADAY'S LAWS of electrolysis may be stated as: the weight of substance produced during electrolysis is directly proportional to the current which passes, to the length of time of the electrolysis process, and to the equivalent weight of the material which is deposited. The work piece is made the anode and the tool is made the cathode. The electrolyte is filled in the beaker. As the power supply is switched on and the current flows through the circuit, electrons are removed from the surface atoms of the work piece; these can get deposited in the electrolytic tank. After applying current the electrons move towards the work piece and also settle down at the bottom. The tool is fed towards the work piece automatically at constant velocity to control the gap between the electrodes. The tool face has the reverse shape of the desired work piece, and the sides of the tool are insulated to concentrate the metal removal action at the bottom face of the tool. The dissolved metal is carried away in the flowing electrolyte. The positive supply is connected to the Stainless Steel 202 material.

3.1 Experimental setup:

fig.3.1A

COMPONENTS:
Power supply
Work piece
Tool
Electrolyte and Electrolytic tank

3.2 POWER SUPPLY:
The range of voltage on the machine is 240 volts A.C. In the ECM method a constant voltage has to be maintained. At high current densities the metal removal rate is high, and at low current densities the metal removal rate is low. In order to have sufficient metal removal from the anode, a sufficient amount of current has to be given. The power supply is one of the main sources in our project, because the material removal rate calculated depends on the amount of power supplied to the work piece.

3.3 WORKPIECE:
The work piece is stainless steel 202. It is a general purpose stainless steel; decreasing nickel content and increasing manganese results in weak corrosion resistance.

fig.3.2

Length of the SS 202 = 35.7 cm
Diameter of the SS 202 = 0.8 cm

PROPERTIES:
The SS 202 material is selected as the anode based on the following properties.

PHYSICAL PROPERTIES:
PROPERTY | VALUE
Density | 7.80 g/cm3
Thermal expansion | 17 x 10^-6 /K
Modulus of Elasticity | 200 GPa
Thermal Conductivity | 15 W/m·K

MECHANICAL PROPERTIES:
PROPERTY | VALUE
Proof Stress | 310 MPa
Tensile Strength | 655 MPa
Elongation | 40 %

3.4 Tool:
The tool is iron. Increasing the carbon content of the iron will increase the tensile strength and the hardness of the iron. Iron is suitable for the cathode and easily reacts with the anode.
Length of the iron = 15 cm
Diameter of the iron = 1 cm

fig.3.4

fig. Low pressure phase diagram of iron

fig.3.3

3.5 Electrolyte:
The electrolyte is hydrochloric acid. Its boiling point, melting point, density and pH depend on the concentration. It is a colourless and transparent liquid and a highly corrosive, strong mineral acid. HCl is found naturally in gastric acid. The HCl used here is highly concentrated; in this process the amount of HCl is 550 ml.

3.6 Electrolytic tank:
Length of the tank = 20 cm
Height of the tank = 12.5 cm


3. EXPERIMENTAL ELECTROCHEMICAL MACHINING CALCULATION FOR CONSTANT POWER SUPPLY

3.1. TABULATION 1:

Table 1
S.NO | VOLTAGE (Volts) | TIME (Min) | IMMERGED DISTANCE (cm) | MATERIAL REMOVAL RATE (cm3/min)
1. | 240 | 15 | 2.8 | 1.191
2. | 240 | 15 | 3.1 | 1.186
3. | 240 | 15 | 6.5 | 1.157

The material removal rate is calculated by the immerged distance of the anode and cathode. A 240 V supply is the input for the anode and cathode and is held constant. The MRR is calculated after 15 minutes by using the formulas.

Table 2
S.NO | VOLTAGE (Volts) | TIME (Min) | ELECTRODE DISTANCE (cm) | MATERIAL REMOVAL RATE (cm3/min)
1. | 240 | 15 | 12 | 1.191
2. | 240 | 15 |    | 1.186
3. | 240 | 15 |    | 1.157

The material removal rate is calculated by the electrode distance of the anode and cathode. A 240 V supply is the input for the anode and cathode and is held constant. The MRR is calculated after 15 minutes by using the formulas.

3.2. FORMULAE USED:
MATERIAL REMOVAL RATE: It is the ratio between the volume of the work piece and the time taken for the material removal.
MRR = (VOLUME OF WORK PIECE) / (TIME TAKEN)
UNIT: cm3/min
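The formula translates directly into C; the worked example of Section 3.3 appears in the comments (function names are ours, and the code follows the paper's own convention of dividing the total work piece volume by the machining time):

#include <stdio.h>
#include <math.h>

/* Volume of a cylindrical work piece, V = pi * r^2 * h, in cm^3. */
double cyl_volume(double r_cm, double h_cm)
{
    return M_PI * r_cm * r_cm * h_cm;
}

/* MRR = volume of work piece / time taken, in cm^3 per minute. */
double mrr(double volume_cm3, double time_min)
{
    return volume_cm3 / time_min;
}

int main(void)
{
    double removed   = cyl_volume(0.39, 2.8);  /* ~1.34 cm^3  */
    double remaining = cyl_volume(0.40, 32.9); /* ~16.53 cm^3 */
    printf("MRR = %.4f cm3/min\n", mrr(removed + remaining, 15.0));
    /* ~1.1913 cm3/min, matching Table 1 */
    return 0;
}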


Volume of work piece: V = π·r²·h, where r is the radius of the work piece and h is the length of the work piece.

3.3. Calculation:
Volume of removed material: V = π·r²·h = π·(0.39)²·2.8 = 0.4778 × 2.8 = 1.34 cm³
Volume of remaining material: V = π·r²·h = π·(0.4)²·32.9 = 0.5026 × 32.9 = 16.53 cm³
Total volume = volume of removed material + volume of remaining material = 1.34 + 16.53 = 17.87 cm³
MRR = (VOLUME OF WORK PIECE) / (TIME TAKEN) = 17.87 / 15 = 1.1913 cm³/min

3.4. RESULT:

Fig 10 & 11. MRR vs electrode distance, and MRR vs immerged distance (2.8, 3.1 and 6.5 cm).

When the immerged distance is increased the material removal rate decreased, and when the electrode distance is increased the material removal rate increased.

4. EXPERIMENTAL ELECTROCHEMICAL MACHINING CALCULATION FOR VARIABLE POWER SUPPLY:


The material removal rate is calculated by varying the power supply. The material removal rate is calculated up to 30 V, with the power varied in steps of 10 V.

COMPONENTS:
RPS meter
Power supply
Work piece
Tool
Electrolyte
Electrolytic tank

4.1. RPS METER:
RPS stands for Regulated Power Supply. The function of the meter is to vary the power supply. The range of the meter is 0-30 V.

Table 3
S.NO | VOLTAGE (volts) | TIME (min) | IMMERGED DISTANCE (cm) | MATERIAL REMOVAL RATE (cm3/min)
1. | 10 | 45 | 2.8 | 0.129
2. | 20 | 45 | 2.8 | 0.167
3. | 30 | 45 | 2.8 | 0.216

In this tabulation the voltage is varied from 0-30 V. At the same time the immerged distance (2.8 cm) and time (45 min) are constant. When the voltage is increased, the material removal rate also increased.

Table 4
S.NO | VOLTAGE (volts) | TIME (min) | IMMERGED DISTANCE (cm) | MATERIAL REMOVAL RATE (cm3/min)
1. | 10 | 45 | 3.1 | 0.118
2. | 20 | 45 | 3.1 | 0.156
3. | 30 | 45 | 3.1 | 0.194


In this tabulation the voltage is varied from 0-30 V. At the same time the immerged distance (3.1 cm) and time (45 min) are constant.

RESULT:

Fig 13 & 14. MRR vs voltage for immerged distances of 2.8 cm and 3.1 cm.

When the applied voltage is increased, the material removal rate also increased. In the first graph the workpiece immerged distance is 2.8 cm and in the second graph the workpiece immerged distance is 3.1 cm. In these two graphs the material removal rate is taken on the X-axis and the voltage on the Y-axis, so each graph shows material removal rate vs voltage.

5. CONCLUSION:
When the immerged distance is increased the material removal rate decreased, and when the electrode distance is increased the material removal rate increased. When the voltage is increased the material removal rate is increased. So the material removal rate is dependent upon the voltage and the immerged distance.

6. APPLICATION:
Some of the very basic applications of ECM include:
1. Die-sinking operations.
2. Drilling jet engine turbine blades.
3. Multiple hole drilling.
4. Machining steam turbine blades within close limits.


7. REFERENCES:
1. M.H. Abdel-Aziz, I. Nirdosh, G.H. Sedahmed, Journal of the Taiwan Institute of Chemical Engineers, Volume 45, Issue 3, May 2014, Pages 840-845.
2. F. Klocke, M. Zeis, S. Harst, A. Klink, D. Veselovac, M. Baumgärtner, Procedia CIRP, Volume 8, 2013, Pages 265-270.
3. M. Burger, L. Koll, E.A. Werner, A. Platz, Journal of Manufacturing Processes, Volume 14, Issue 1, January 2012, Pages 62-70.
4. K.G. Swift, J.D. Booker, Manufacturing Process Selection Handbook, 2013, Pages 205-226.

8. EXPERIMENT PICTURES:


GESTURE RECOGNITION

BY
AMREEN AKTHAR .J.
AKSHAYA .B.
III YEAR E.C.E
PANIMALAR ENGINEERING COLLEGE


Abstract: The tongue and ear play a major role in speaking and hearing for a normal person, but it is impossible for deaf and dumb people to speak and hear. They normally speak using sign actions, which are easily understood by their community; however, they find it difficult to communicate with normal people, because a normal person cannot understand their sign symbols. To tackle this situation we design a system which converts their sign symbols to text as well as voice output, and converts a normal person's voice to the corresponding sign symbol, for two way communication. This system has flex sensors and an IMU (Inertial Measurement Unit) to recognize the sign symbols, a speech synthesis chip for voice output, and a speech recognizing module for converting voice to sign symbols. These are interfaced with a microcontroller, which is programmed to obtain the corresponding output.

Keywords: Flex sensor, IMU, speak jet IC, speech recognition module.

I. INTRODUCTION

Many researchers have undertaken research in order to overcome the difficulties faced by physically challenged people. Many developed systems related to prosthetic hands, which are used to study the behavior of the human hand. This project is of a similar kind, determining sign words using hand motion.
The main features of this system are:
Hand held real time embedded device.
Low cost and reliable.
Operated using battery.

In order to get accurate words or sentences, the IMU is used to find the exact position of the hand, because placing the hand in any position will give the same flex values; to overcome this, the IMU is used. It is placed on the hand along with the flex sensors, so that its value changes according to the position of the hand. Thus, for a particular word, the condition that the hand should be placed in a particular position is satisfied. This value is fed to the microcontroller, which is preprogrammed to display the corresponding words. At the same time a voice output is heard for the corresponding words with the help of the speak jet chip. On the other hand, voice from normal people is converted and displayed as the corresponding pre-stored sign symbol.

II. RELATED WORK

L.K. Simone introduced a low cost method to measure the flexion of fingers, using a flex sensor to measure flexion, and evaluated a custom glove for measuring finger motion; some of the parameters evaluated are donning, glove comfort and durability. Wald developed software for editing automatic speech recognition in real time for deaf and hard-of-hearing people. Syed Faiz Ahmed, Syed Baber Ali and Saqib Qureshi developed an electronic speaking glove for speechless patients, which is one way communication. Jingdong Zhao developed a five finger underactuated prosthetic hand system.

III. SYSTEM LAYOUT

Figure 1 below shows the proposed system module. In this system a flex sensor is used to recognize finger positions to obtain words, phrases, sentences etc. This value is signal conditioned using an LM324 IC and other components and is given as input to the microcontroller. In order to get accurate sign symbols, words or phrases as output, the microcontroller is interfaced with an IMU, which consists of a gyroscope, an accelerometer and a magnetometer. This sensor is used to determine the tilt, rotation and rate of the fingers. By comparing the flex sensor and IMU values, the microcontroller will display the corresponding words or phrases. As an option these captured words are also sent to a mobile using a Bluetooth module. Output from the speak jet IC is fed to a speaker to speak according to the phonemes stored in the controller for the captured values, combining the flex sensor and IMU sensor readings.

Similarly, voice from normal people is captured using a microphone and fed to the microcontroller through the speech



recognizing module. The controller, which is preprogrammed, will display the corresponding symbol.

Figure 1. Block Diagram.

IV. IMPLEMENTATION

Obtaining the signal from the fingers to recognize the sign symbols can be done by a number of methods, including [5]:
EMG (Electromyography)
Load cell
Wearable conductive fiber
Sliding fiber optic cable
Flex sensor

In this system the flex sensor is used to recognize hand gestures, as it is reliable and cost effective.

A. Flex sensor:
Flex sensor technology is based on resistive carbon elements. As a variable printed resistor the sensor achieves a great form factor on a thin substrate. When the substrate is bent, the sensor produces a resistance output correlated to the bend radius, as shown in figure 2: the smaller the radius, the higher the resistance value. It varies approximately from 10 kΩ to 50 kΩ.

Figure 2. Flex sensor.

It offers a superior solution for applications that require accurate measurement and sensing of deflection and acceleration. It is placed inside the gloves, which are to be worn. As a finger is bent for the corresponding word, the resistance value changes and is fed to the controller after signal conditioning.

Signal conditioning circuit: This is shown in figure 3. For a simple deflection to voltage conversion, the bend sensor is tied to a resistor Rm in a voltage divider configuration. The output is described by
Vout = (V+) / [1 + Rm/Rbend]

Figure 3. Flex sensor signal conditioning circuit.

Output from this sensor is fed to the LM324 op-amp to boost the circuit current. For different values of the resistor Rm, different deflection vs voltage curves are obtained, as shown in figure 4.

Figure 4. Deflection vs Vout [6].
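For a quick feel of the numbers, the divider equation can be tabulated over the quoted 10 k to 50 k sensor range; the C sketch below is ours, and the 22 k value for Rm is only an assumed example, not a value from the paper:

#include <stdio.h>

/* Voltage-divider output for the bend sensor (equation above):
   Vout = V+ / (1 + Rm / Rbend). */
double divider_vout(double vplus, double rm_ohm, double rbend_ohm)
{
    return vplus / (1.0 + rm_ohm / rbend_ohm);
}

int main(void)
{
    double rbend;
    /* Flex sensor varies roughly from 10 kohm (flat) to 50 kohm (bent);
       Rm = 22 kohm is an assumed divider resistor. */
    for (rbend = 10e3; rbend <= 50e3; rbend += 10e3)
        printf("Rbend = %5.0f ohm -> Vout = %.2f V\n",
               rbend, divider_vout(5.0, 22e3, rbend));
    return 0;
}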

B. Inertial Measurement Unit:
The IMU used in this system is the MinIMU-9 v2 from Pololu. It consists of a 3-axis accelerometer, a 3-axis magnetometer and a 3-axis gyro sensor. An I2C interface accesses nine independent rotation, acceleration and




magnetic measurement that can be used to calculate the
sensors absolute orientation.
L3GD20: It is low power 3-axis angular rate sensor
capable of providing the measured angular rate to the
external world through I2C terminal. The direction of
angular rate is shown in figure 5.

Figure.8. Speak jet typical connection.

D. Speech recognizing module


The module used here is VRbot. It is used to recognize
voice from normal people through inbuilt microphone. It
communicates with microcontroller using UART interface
which is shown in figure 9.
Figure.5. Direction of angular rate

LSM303DLHC: It is a combination of digital linear


accelerometer and magnetometer sensor. It can support
standard and fast mode I2C serial interface. Direction of
acceleration and magnetic field is shown in figure 6 and
figure 7.

Figure.9. VRbot interfaced with microcontroller .

Fig.6.Direction of ACC

It has built-in speaker independent command and also


supports 32 user defined commands, which is used to
recognize corresponding words spoken by people and
displayed it in LCD which can be understood by physically
challenged people.

Fig.7.Direction of Mag field

Whenever we change the position of hand for


particular word, the values of IMU gets changed. This
IMU sensor is interfaced with microcontroller. By
comparing the values of flex sensor and IMU we can
recognize the correct word or phrase with correct position
of hand, which can be displayed in LCD.
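One simple way to realise this compare step is a nearest-template match against stored calibration vectors. The sketch below is our illustration of the idea, not the authors' firmware; the template values are placeholders:

#include <stdlib.h>
#include <limits.h>

#define NUM_FLEX  5  /* one flex sensor per finger            */
#define NUM_IMU   3  /* e.g. roll, pitch, yaw from the IMU    */
#define NUM_WORDS 2  /* stored vocabulary size (example only) */

typedef struct {
    int flex[NUM_FLEX];  /* calibrated flex ADC values    */
    int imu[NUM_IMU];    /* calibrated orientation values */
    const char *word;
} gesture_t;

static const gesture_t vocab[NUM_WORDS] = {
    { {512, 600, 580, 610, 590}, { 0, 90, 0}, "WELCOME"     },
    { {300, 310, 620, 615, 605}, {45,  0, 0}, "HOW ARE YOU" },
};

/* Return the stored word whose template is closest (L1 distance)
   to the current combined flex + IMU reading. */
const char *match_gesture(const int flex[NUM_FLEX], const int imu[NUM_IMU])
{
    int best = INT_MAX, i, j;
    const char *word = 0;
    for (i = 0; i < NUM_WORDS; i++) {
        int d = 0;
        for (j = 0; j < NUM_FLEX; j++) d += abs(flex[j] - vocab[i].flex[j]);
        for (j = 0; j < NUM_IMU;  j++) d += abs(imu[j]  - vocab[i].imu[j]);
        if (d < best) { best = d; word = vocab[i].word; }
    }
    return word;  /* caller shows this on the LCD / speaks it */
}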


C. Speak jet IC
The speak jet IC is a self contained single chip sound synthesizer using mathematical sound architecture technology. It is preconfigured with 72 speech elements, 43 sound effects and 12 DTMF touch tones. It is interfaced with the microcontroller, which is preprogrammed to send serial data to the speak jet IC to speak the corresponding words or sentences by combining words. In order to be heard by normal people, the output from the speak jet IC is amplified by an LM386 audio amplifier connected for a particular gain and then fed to the speaker. The connection recommended by the manufacturer is shown in figure 8.

Figure 8. Speak jet typical connection.
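At the firmware level, speaking reduces to streaming phoneme codes to the speak jet over the serial line. The sketch below is a hedged illustration: uart_putc() stands in for the target's byte-write routine, and the commented code values are placeholders of the kind Phrase-A-Lator emits, not a verified phrase:

extern void uart_putc(unsigned char c);  /* placeholder UART write */

/* Stream a phoneme/command sequence to the speak jet; the chip
   speaks the elements as the bytes arrive. */
void speak(const unsigned char *codes, unsigned int n)
{
    unsigned int i;
    for (i = 0; i < n; i++)
        uart_putc(codes[i]);
}

/* Example usage with placeholder codes pasted from Phrase-A-Lator:
   const unsigned char phrase[] = { 183, 7, 159, 146, 164 };
   speak(phrase, sizeof phrase);                                   */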

D. Speech recognizing module
The module used here is the VRbot. It is used to recognize voice from normal people through an inbuilt microphone. It communicates with the microcontroller using a UART interface, as shown in figure 9.

Figure 9. VRbot interfaced with microcontroller.

It has built-in speaker independent commands and also supports 32 user defined commands, which are used to recognize the corresponding words spoken by people and display them on the LCD, where they can be understood by physically challenged people.

V. PROGRAMMING THE HARDWARE

The code required for this system is written in the embedded C language, which can be compiled and debugged using an integrated development environment (IDE). The software used to program the hardware is:

A. MPLAB
This is the IDE with an integrated toolset for developing an embedded application using PIC microcontrollers. It consists of a text editor, simulator and device drivers made by Microchip. In this, code can be built using assembly or C language. The device and language for coding can be selected according to our need.
B. CCS Compiler
A compiler is a program used to translate source code into object code. The compiler used



here is the CCS compiler, which is commonly used for PIC controllers. It has many inbuilt functions; by including them we can call whole functions, which makes the program very simple. This compiler can implement normal C constructs, input/output operations etc.

VI. RESULT
The hardware circuit of the module is shown in figure 12. It consists of the microcontroller interfaced with the flex sensors, the speak jet IC, etc.

C. Phrase-A-Lator
This software is a demonstration program from Magnevation which allows the speak jet IC to speak. It is used to set voice qualities like pitch, volume and speed. It generates the code for the corresponding words we need, which is then used in the main code to make the speak jet IC speak. The main menu of this software, shown in fig. 10, is used to select communication settings and the other editor menus. When connected to a PC, the serial port check box turns green if the correct COM port is selected.

Figure 10. Magnevation Phrase Translator for the speak jet.

The phrase editor is used to develop words, phrases and sound effects using the built-in phonemes and sound effects. Required words or phrases can be written in the say data area, and the corresponding code can be obtained by pressing the view code button. This is shown in figure 11.

Figure12. Hardware circuit of the system.

The hex file obtained for the code after compilation is downloaded into the PIC controller, and the corresponding words are displayed on the LCD. The words or conversation are obtained by taking signed English as a reference. For each and every word, the values from the flex sensors are compared with the IMU sensors and fed to the microcontroller; the result is displayed on the LCD display, and a voice output is also obtained through the speak jet IC. This output is also transmitted to a mobile using the Bluetooth module.
The accelerometer output obtained from the IMU sensor is shown in figure 13; it shows results for all three axes. The magnetometer output obtained from the IMU sensor for all three axes is shown in figure 14. The gyroscope output obtained from the IMU sensor for all three axes is shown in figure 15.

Figure11. Phrase editor menu with words and code.



from normal people is not converted in to sign symbol.
In this system flex sensor is used along with IMU
sensor to capture the words. Flex sensor is used to obtain
fingers change to capture words. Even though we change
the position of hand the flex sensor values will not get
changed because flex sensor is placed in fingers. All the
conversation used by physically challenged people will be
using fingers or hands in particular position and rotating
hands or fingers. To capture these positions of fingers or
hand IMU sensor is used.
IMU sensor which consists of accelerometer,
magnetometer, and gyroscope will tackle this situation. This
sensor is placed along with flex sensors in the hand so that
by changing the finger for conversation, flex sensor and
IMU sensor will get changed, by comparing these two
values, output is displayed in display.

Figure13. Accelerometer output in display.


Figure14. Magnetometer output in display

Figure15. Gyroscope output in display.

VII. RESULT COMPARISON

Outputs obtained in previous work simply capture finger flex, using speech recognition software that needs a computer to run, and since these systems use a PC they cannot be used in every situation. Another system used only the flex sensor to recognize words, obtaining the output from the change in the flex sensor alone. A system which uses a computer is not portable, and offers only one way communication: it displays a result which can be understood by normal people, but speech from normal people is not converted into a sign symbol.
In this system the flex sensor is used along with the IMU sensor to capture the words. The flex sensor captures the finger bends for each word. Even when we change the position of the hand, the flex sensor values do not change, because the flex sensor is placed on the fingers; yet the conversations used by physically challenged people involve placing or rotating the fingers or hands in particular positions. To capture these positions of the fingers or hand, the IMU sensor is used.
The IMU sensor, which consists of an accelerometer, a magnetometer and a gyroscope, tackles this situation. It is placed along with the flex sensors on the hand, so that on changing the fingers for conversation both the flex sensor and the IMU sensor values change; by comparing these two values, the output is shown on the display.

VIII. OUTPUT

Sample outputs obtained for some important conversations such as WELCOME and HOW ARE YOU are shown in figures 16a and 16b below. The left portion of figures 16a and 16b shows the exact position in which to keep the fingers for the word, obtained using the reference site, and the right portion shows the output obtained on the digital display by wearing the gloves with the flex sensors and IMU on the hands and keeping the hands in the position shown in the picture. At the same time, the displayed word is also heard through the speaker using the speak jet IC.

Figure 16.a. Digital display shows WELCOME



Figure 16.b. Digital display shows HOW ARE YOU

IX. CONCLUSION AND FUTURE ENHANCEMENT

This system will be useful for physically challenged people and will bridge the gap between them and normal people. Since it is a two way portable communication system, it can be used at any time.
This system can be enhanced by using extra flex sensors at the wrist and elbow, so that conversations which use these bent positions can be captured accurately. Further, a storage device like an SD card can be used to store more phrases as a dictionary to speak and to visualize. It can also be enhanced by covering it with a waterproof layer so that it can be used in any situation.

REFERENCES
[1] L. K. Simone, E. Elovic, U. Kalambur, D. Kamper, "A Low Cost Method to Measure Finger Flexion in Individuals with Reduced Hand and Finger Range of Motion", 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEMBS '04), Volume 2, 2004, pp. 4791-4794.
[2] M. Wald, "Captioning for Deaf and Hard of Hearing People by Editing Automatic Speech Recognition in Real Time", Proceedings of the 10th International Conference on Computers Helping People with Special Needs, ICCHP 2006, LNCS 4061, pp. 683-690.
[3] Syed Faiz Ahmed, Syed Baber Ali, Saqib Qureshi, "Electronic Speaking Glove for Speechless Patients: A Tongue to a Dumb", Proceedings of the 2010 IEEE Conference on Sustainable Utilization and Development in Engineering and Technology, University Tunku Abdul Rahman, 2010.
[4] Jingdong Zhao, Li Jiang, Shicai Shi, Hegao Cai, Hong Liu, G. Hirzinger, "A Five-fingered Underactuated Prosthetic Hand System", Proceedings of the 2006 IEEE International Conference on Mechatronics and Automation, June 2006, pp. 1453-1458.
[5] N. P. Bhatti, A. Baqai, B. S. Chowdhry, M. A. Umar, "Electronic Hand Glove for Speech Impaired and Paralyzed Patients", EIR Magazine, pp. 59-63, Karachi, Pakistan, May 2009.
[6] Flex Point Inc., USA, http://www.flexpoint.com. Last accessed September 06, 2010.
[7] www.pololu.com/catalog/product/1268
[8] Magnevation SpeakJet, http://www.speakjet.com
[9] VRbot module, www.VeeaR.eu
[10] http://www.sign.com.au


Structural and Electronic Properties of Doped Silicon Nanowire

Anurag Srivastava*, Florina Regius
*Advance Materials Research Lab, Indian Institute of Information Technology and Management, Gwalior, Madhya Pradesh, India

Abstract:
The electronic and structural properties of silicon nanowire (SiNW) doped with Al and P atoms, obtained from simulation studies, are reviewed. The bandgap, density of states and structural properties of the silicon nanowire are compared when the nanowire is doped with phosphorus and aluminium atoms. We observe that a decrease in bandgap increases the metallic character of silicon. When the total energy is maximum the structure is least stable, so we can say that total energy is inversely related to stability. In the density of states we clearly see a decline in DOS/eV with increasing doping of Al and P atoms. In this paper we discuss all of these electronic and structural properties.

Keywords: Band structure, Band gap, Density of States, SiNW

1. Introduction
The nanowire is a structure with an amazing length-to-width ratio. Nanowires can be incredibly thin: it is possible to create a nanowire with a diameter of just one nanometre, though engineers and scientists tend to work with nanowires that are between 30 and 60 nanometres wide. Scientists hope that we will soon be able to use nanowires to create the smallest transistors yet, though there are some pretty tough obstacles in the way. Nanowires possess unique electrical, electronic, thermoelectrical, optical, magnetic and chemical properties, which are different from those of their parent counterpart. Silicon nanowires are an answer to ongoing obstacles in the electronics field. These SiNWs are one-dimensional structures, and their electronic conduction can be controlled by doping. SiNWs can be used for everything from field effect transistors to biosensors [4,5]. Photoluminescence in Si nanowires and nanoparticles has been observed [7,8], which is experimental evidence of quantum confinement [6]. Silicon-wire research started in the mid 1990s, when advances in microelectronics triggered a renewed interest in silicon nanowire research [1]. Last, we turn our attention to the electrical properties of silicon nanowires and discuss the different doping methods. Experimentally it has been observed that the band gap can be tuned by choosing different growth directions and diameters of the wire [9,10]; the electronic structure of a SiNW depends critically on the size, orientation, passivation and doping level of the nanostructure. Three effects essential for the conductivity of a silicon nanowire are then treated: the diameter dependence of the dopant ionization efficiency, the influence of surface traps on the charge-carrier density (also causing a diameter dependence), and the charge-carrier mobility in silicon nanowires [1]. Many techniques, including both top-down and bottom-up approaches, have been developed and applied for the synthesis of nanowires: the Vapor-Liquid-Solid (VLS) mechanism, Chemical Vapor Deposition (CVD), evaporation of SiO, Molecular Beam Epitaxy (MBE), laser ablation, and electroless metal deposition and dissolution (EMD) [2]. These days SiNWs are used for enhanced thermoelectric performance [3]. Silicon nanowires can be made uniformly at low temperature using Vapour-Liquid-Solid growth.

2. Computational Method
We have performed the calculations using the ab-initio pseudopotential method, which is based
on density functional theory, and analysed the electronic properties of the silicon nanowire.
We have used the Atomistix ToolKit (ATK) [11] for the computation, a further development
of TranSIESTA-C [13,14] which, in turn, is based on the technology, models and algorithms
developed in the academic code TranSIESTA and, in part, McDCal [12], employing
localized basis sets as developed in SIESTA [15]. The computation has been made in a
self-consistent manner using steepest-descent geometry optimization with the Pulay algorithm
for iteration mixing. A mesh cutoff of 70 Hartree has been used throughout the study. The
Brillouin-zone (BZ) integration is performed with a Monkhorst-Pack scheme using 1x1x11
k-points. The cutoff energy and the number of k-points were varied to test the convergence,
and the reported values are converged within a force tolerance of 0.05 eV/Å. The
exchange-correlation functional described within the extended Hückel method and with the
generalised gradient approximation revised PBE (revPBE), as proposed by Zhang and Yang [16],
is used for the computation of the total energies of the Si nanowire and of its doping with
aluminium and phosphorus atoms. The total energy of the Si nanowire with the extended
Hückel potential is -2426.37 eV; the extended Hückel potential is quite good for the
computation of total energies [17]. The nanowires are placed in the supercell along the wire
length in the z-direction, while the supercell lengths in the x and y directions are chosen big
enough to avoid interaction between the nanowire and its periodic images [18]. For a better
understanding of the fundamental physics associated with the different structures, the binding
energies of the Si nanowire have also been analysed; and to understand the nature of the
material and the localization and delocalization of states near the Fermi level, we have
analysed the electronic band structure and density of states for all the doping
configurations of the Si nanowire.
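To make the convergence procedure concrete, the sketch below scans the k-point sampling along the wire axis at the fixed 70 Hartree cutoff until the total energy stops changing. The total_energy function here is a toy stand-in for a full self-consistent ATK/SIESTA run, not a real API call; the convergence loop itself is the point of the example.

```python
# Hedged sketch of the cutoff/k-point convergence test described above.
# total_energy() is a toy stand-in for a self-consistent DFT run; in practice
# it would drive the electronic-structure code and return the total energy.

def total_energy(cutoff_hartree: float, nkz: int) -> float:
    """Toy model: the energy approaches the converged value as nkz grows."""
    return -2426.37 + 5.0 / nkz**4  # eV; -2426.37 eV is the reported E_tot

def first_converged(nkz_values, tol_ev=1e-3):
    """Return the first k-point count where the energy change is < tol_ev."""
    previous = None
    for nkz in nkz_values:
        energy = total_energy(70.0, nkz)  # 70 Hartree cutoff, as in the study
        if previous is not None and abs(energy - previous) < tol_ev:
            return nkz, energy
        previous = energy
    raise RuntimeError("not converged over the scanned range")

nkz, e_tot = first_converged([3, 5, 7, 9, 11, 13])
print(f"converged at 1x1x{nkz} k-points, E_tot = {e_tot:.2f} eV")
```

With this toy model the loop settles at a 1x1x11 grid, mirroring the Monkhorst-Pack sampling used in the study; the same scan would be repeated over the mesh cutoff.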

3. Results


3.1 Structural Properties


The atomic configurations of the Si nanowires are presented in Figs. 2 and 3. It may be
cautioned that these figures are schematic and the separations between atoms are not to scale;
as the figures depict the structures quite clearly, they are not discussed separately in the
text. The energetic stability of the various Si nanostructures has been examined under the
extended Hückel method with the revised PBE exchange-correlation functional; the total energy
as a function of lattice constant is shown in Fig. 1, and the total energies for all the Si
nanowire structures are reported in Table 1. We observe that the pristine silicon nanowire has
the lowest energy, -2426.37 eV, which shows it is the most stable structure; when it is doped
with one aluminium atom the total energy is -2375.27 eV, which is less negative than before,
making it the next most stable structure. In this way the total energy increases with
increasing aluminium doping of the Si nanowire. After doping with phosphorus atoms the total
energy shows a different trend: the lowest energy, -2594.44 eV, is found when the silicon
nanowire is doped with four phosphorus atoms, which shows it is a highly stable structure;
-2548.93 eV is the next lowest energy, found when the silicon nanowire is doped with three
phosphorus atoms, which is also highly stable but less so than the former. Here we observe
that with an increase in the number of phosphorus dopant atoms the total energy decreases and
the stability increases. For further detail on the structural stability, the formation energy
and binding energy are calculated [19].
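To make the stability bookkeeping explicit, the sketch below computes a formation energy from the Table 1 total energies. The formation-energy convention used here (balancing by atomic chemical potentials) and the chemical-potential values are assumptions on our part, since the paper defers the definitions to [19].

```python
# Assumed convention: E_form = E(doped) - E(pristine) + n*(mu_host - mu_dopant),
# for n Si atoms replaced by n dopant atoms. The chemical potentials below are
# hypothetical placeholders, not values taken from the paper.

def formation_energy(e_doped, e_pristine, n, mu_host, mu_dopant):
    return e_doped - e_pristine + n * (mu_host - mu_dopant)

E_PRISTINE = -2426.37           # eV, Table 1
E_DOPED_1AL = -2375.27          # eV, Table 1 (one Al substituting one Si)
MU_SI, MU_AL = -107.5, -55.0    # eV, hypothetical atomic reference energies

e_form = formation_energy(E_DOPED_1AL, E_PRISTINE, 1, MU_SI, MU_AL)
print(f"E_form(1 Al) = {e_form:.2f} eV")
```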

Fig.1. Total energy (eV) versus lattice constant (c/a) for the pristine Si nanowire, with a polynomial fit to the data.


Fig.2. The atomic configurations of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 Al atom, (c) SiNW doped with 2 Al atoms, (d) SiNW doped with 3 Al atoms and (e) SiNW doped with 4 Al atoms.

Fig.3. The atomic configurations of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 P atom, (c) SiNW doped with 2 P atoms, (d) SiNW doped with 3 P atoms and (e) SiNW doped with 4 P atoms.

Table 1: Lattice constant (c/a), total energy, binding energy and formation energy of pristine and doped Si nanowires (energies in eV).

Atomic Configuration     Lattice constant (c/a)   Total energy   Binding energy   Formation energy
Silicon pristine         0.285                    -2426.37       4.29             -
Doped with 1 Al atom     0.27                     -2375.27       4.64             1.29
Doped with 2 Al atoms    0.27                     -2326.86       4.99             2.55
Doped with 3 Al atoms    0.29                     -2282.88       5.35             3.71
Doped with 4 Al atoms    0.30                     -2237.89       5.70             4.9
Doped with 1 P atom      0.27                     -2468.46       3.89             0.58
Doped with 2 P atoms     0.27                     -2510.69       3.49             1.66
Doped with 3 P atoms     0.27                     -2548.93       3.03             1.8
Doped with 4 P atoms     0.27                     -2594.44       2.88             2.33


3.2 Bandstructure analysis


To understand the material behaviour of the stable geometries of the silicon nanowires,
electronic band structure analyses have been performed, and the band structures are compared
in Figs. 4 and 5. The conductivity of the various Si nanowires is explained on the basis of
the number of bands crossing the Fermi level. In the band structure of the undoped Si
nanowire, shown in Fig. 4a, no conduction bands cross the Fermi level, which shows it is a
semiconductor; it has a band gap of 8.03 eV. When the Si nanowire is doped with 1 Al atom,
as shown in Fig. 4b, there is no band gap and two bands cross the Fermi level at 0.1 eV,
showing that metallic behaviour has entered the play. When the silicon nanowire is doped with
2 Al atoms, as shown in Fig. 4c, there is again no band gap and two bands cross the Fermi
level at 0.01 eV and -0.01 eV from the valence and conduction sides respectively, showing
that the metallic character has increased compared to the previous case. When the Si nanowire
is doped with three aluminium atoms, as shown in Fig. 4d, two bands intersect the Fermi level
at 0.1 eV and -0.1 eV respectively; the conduction region becomes denser and the metallic
character increases further. Finally, when the Si nanowire is doped with 4 Al atoms, as shown
in Fig. 4e, three bands cross at 0.02 eV, 0.01 eV and -0.01 eV respectively; the conduction
region becomes denser still because of the crowded bands, and the metallic character again
increases. Turning to phosphorus doping: when the Si nanowire is doped with 1 P atom, as
shown in Fig. 5b, there is no band gap and one band crosses the Fermi level just below
-0.002 eV, showing that the metallic character is stronger in the case of phosphorus. When
the silicon nanowire is doped with two phosphorus atoms, as shown in Fig. 5c, there is again
no band gap and three bands cross the Fermi level below -1.0 eV, -1.2 eV and -0.2 eV
respectively, showing increased metallic character compared to the previous case. When the Si
nanowire is doped with three P atoms, as shown in Fig. 5d, there are again three bands
crossing the Fermi level from the valence side, and the conduction region has become denser
and thicker compared to the others. Finally, when the Si nanowire is doped with 4 phosphorus
atoms, as shown in Fig. 5e, four bands intersect the Fermi level on the valence side and two
on the conduction side. The bands become denser and denser as the number of dopant atoms
increases, which shows that the metallic nature increases.
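The metallicity criterion used in this discussion, counting how many bands change sign across the Fermi level along the k-path, is easy to automate. A minimal sketch follows; the bands array would come from the band-structure output of the electronic-structure code, and the toy input here is purely illustrative.

```python
import numpy as np

# Count bands that cross the Fermi level (E_F = 0 here) along the k-path.
# `bands` has shape (n_kpoints, n_bands), energies in eV relative to E_F.

def crossing_bands(bands: np.ndarray) -> int:
    """A band crosses E_F if its energy changes sign along the k-path."""
    signs = np.sign(bands)
    crosses = np.any(signs[:-1] * signs[1:] < 0, axis=0)
    return int(np.count_nonzero(crosses))

# Toy example: one band dipping through E_F, one band staying above it.
k = np.linspace(0.0, 1.0, 50)
bands = np.stack([0.3 * np.cos(2 * np.pi * k), 1.0 + 0.1 * k], axis=1)
print(crossing_bands(bands))  # -> 1, i.e. metallic character present
```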


Fig.4. The band structures of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 Al atom, (c) SiNW doped with 2 Al atoms, (d) SiNW doped with 3 Al atoms and (e) SiNW doped with 4 Al atoms.

Fig.5. The band structures of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 P atom, (c) SiNW doped with 2 P atoms, (d) SiNW doped with 3 P atoms and (e) SiNW doped with 4 P atoms.

3.3 Density of States


The DOS profiles for all the stabilized structures are shown in Figs. 6 and 7, along with
that of the undoped wire. In the case of the undoped silicon nanowire, the graph shows a high
peak at 23.7 DOS/eV on the y-axis and -4.2 eV on the x-axis, with groups of peaks on either
side of it: the peaks on the right appear at 10.4 and 9.4 DOS/eV at 11 and 15.4 eV
respectively, while the peaks on the left lie at 9.3 and 6.1 DOS/eV at 6 and -12 eV. We
observe a small band gap, which shows the semiconducting behaviour. When the silicon nanowire
is doped with one Al atom, the graph shows a high peak at 21.2 DOS/eV and -4 eV; two peaks on
the right lie at 7.3 and 5.4 DOS/eV at -6.3 and -11.2 eV, and the peaks on the left appear at
9 and 8.8 DOS/eV at -14.2 and 16 eV. When the silicon nanowire is doped with two Al atoms,
the graph shows a high peak at 20.4 DOS/eV and -3 eV; the peak on the right lies at
11.5 DOS/eV at 5.2 eV, and the peaks on the left at 8 and 8.9 DOS/eV at 14 and 17 eV
respectively. We see that the peaks diminish as the number of Al and P atoms increases. When
the silicon nanowire is doped with three Al atoms, the graph shows a high peak at 20.2 DOS/eV
and -2.8 eV; two peaks on the right appear at 5.5 and 4 DOS/eV at -11 and 14 eV respectively,
and two peaks on the left lie at 8.7 and 8 DOS/eV at -11 and 15.4 eV respectively. When the
silicon nanowire is doped with four Al atoms, the graph shows a high peak at 19.4 DOS/eV and
-4.7 eV; two peaks on the right appear at 3.7 and 5.4 DOS/eV at -13 and -15 eV respectively,
and two peaks on the left lie at 7.4 and 8.2 DOS/eV at 10 and 11.7 eV respectively. The left
side of the Fermi level is denser, which clearly indicates a strong metallic nature. Turning
to phosphorus doping of the silicon nanowire: when the wire is doped with one P atom, the
graph shows a high peak at 22.67 DOS/eV and -11 eV; three peaks on the right appear at 4.4, 6
and 8.7 DOS/eV at -11, -17 and -20.7 eV respectively, and five peaks on the left lie at 9.5,
9.7, 4.7, 1.6 and 1.08 DOS/eV at 4, 9.6, 17, 20 and 25 eV respectively. When the wire is
doped with two P atoms, the graph becomes very dense, which clearly indicates a very high
metallic nature; there are three peaks on each side of the Fermi level, with peaks at 23, 6.2
and 9.2 DOS/eV on one side at -0.3, -12.5 and -21.6 eV. When the wire is doped with three P
atoms, one side of the Fermi level has three peaks at -1.3, -0.5 and -1.7 DOS/eV near 10.3,
10.4 and 10.2 eV respectively, while the other side shows many small peaks; the many
distorted peaks on the valence side indicate metallic behaviour. Finally, when the wire is
doped with four P atoms, the graph shows three peaks on one side of the Fermi level at 7.5,
12.6 and 19.8 DOS/eV, and around seven peaks on the other side at 9.6, 10.4, 4.7, 5.4, 4.8,
1.8 and 1 DOS/eV. Here we see small peaks rising up; as before there are more peaks, but now
the peaks become distorted on both sides because of the increase in the number of electrons
contributed by the aluminium and phosphorus atoms.
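Profiles like Figs. 6 and 7 are conventionally obtained by broadening each discrete eigenvalue with a Gaussian and summing on an energy grid. The following is a minimal sketch of that construction; the eigenvalue clusters used as input are toy data, not the paper's results.

```python
import numpy as np

# Gaussian-broadened density of states: each eigenvalue contributes a
# normalized Gaussian, and the contributions are summed on an energy grid.

def dos(eigenvalues_ev, grid_ev, sigma=0.1):
    """Broadened density of states (states/eV) evaluated on `grid_ev`."""
    diffs = grid_ev[:, None] - np.asarray(eigenvalues_ev)[None, :]
    g = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return g.sum(axis=1)

rng = np.random.default_rng(0)
grid = np.linspace(-6.0, 6.0, 601)                   # energies rel. to E_F (eV)
levels = np.concatenate([rng.normal(-4.2, 0.5, 40),  # toy valence cluster
                         rng.normal(2.5, 0.8, 20)])  # toy conduction cluster
profile = dos(levels, grid)
print(grid[profile.argmax()])  # energy of the highest DOS peak
```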

Fig.6. The density of states of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 Al atom, (c) SiNW doped with 2 Al atoms, (d) SiNW doped with 3 Al atoms and (e) SiNW doped with 4 Al atoms.


Fig.7. The density of states of Si nanowires: (a) SiNW without doping, (b) SiNW doped with 1 P atom, (c) SiNW doped with 2 P atoms, (d) SiNW doped with 3 P atoms and (e) SiNW doped with 4 P atoms.

Conclusions
After analysing the structural and electronic properties of the silicon nanowire, we can
conclude that, as far as the structural properties are concerned, the structure with the
highest total energy is the least stable, so total energy is inversely related to stability;
with increasing Al doping the total energy increases, while with increasing P doping it
decreases. In the density of states we clearly see a decline in the peak heights (DOS/eV)
with increasing Al and P doping, while the plots become denser and more crowded as the
metallic character increases. Finally, in the band structure we observe that the band gap
disappears under the crowded bands crossing the Fermi level when the wire is doped with Al
and P atoms, which depicts the increase in metallic nature, whereas the undoped Si NW showed
a small band gap, depicting semiconducting behaviour.

Acknowledgement
The authors gratefully acknowledge ABP-IIITM Gwalior for providing the infrastructure support
for this research work.


References
1. Volker Schmidt, Joerg V. Wittemann, Stephan Senz and Ulrich Goesele, Adv. Mater. 2009, 21, 2681-2702.
2. Mehedhi Hasan, Md Fazlul Huq and Zahid Hasan Mahmood, 2013, Hasan et al.; licensee Springer.
3. Kui-Qing Peng, Xin Wang, Li Li, Ya Hu and Shuit-Tong Lee, doi:10.1016/j.nantod.2012.12.009.
4. F. Patolsky and C. M. Lieber, Mater. Today 8, 20 (2005).
5. A. M. Morales and C. M. Lieber, Science 279, 208 (1998).
6. Bozhi Tian, Xiaolin Zheng, Thomas J. Kempa, Ying Fang, Nanfang Yu, Guihua Yu, Jinlin Huang and Charles M. Lieber, Nature 449, 885-889 (18 October 2007).
7. Z. G. Bai, D. P. Yu, J. J. Wang, Y. H. Zou, W. Qian, J. S. Fu, S. Q. Feng, J. Xu and L. P. You, Mater. Sci. Eng. B 72, 117 (2000).
8. D. B. Geohegan, A. A. Puretzky, G. Duscher and S. J. Pennycook, Appl. Phys. Lett. 73, 438.
9. M. Hofheinz, X. Jehl, M. Sanquer, G. Molas, M. Vinet and S. Deleonibus, Eur. Phys. J. B 54, 299 (2006).
10. D. D. D. Ma, C. S. Lee, F. C. K. Au, S. Y. Tong and S. T. Lee, Science 299, 1874 (2003).
11. http://www.atomistix.com
12. Jeremy Taylor, Hong Guo and Jian Wang, Phys. Rev. B 63, 245407 (2001).
13. Mads Brandbyge, Jose-Luis Mozos, Pablo Ordejon, Jeremy Taylor and Kurt Stokbro, Phys. Rev. B 65, 165401 (2002).
14. Kurt Stokbro, Jeremy Taylor, Mads Brandbyge and Pablo Ordejon, Annals N.Y. Acad. Sci. 1006, 212 (2003).
15. Jose M. Soler, Emilio Artacho, Julian D. Gale, Alberto Garcia, Javier Junquera, Pablo Ordejon and Daniel Sanchez-Portal, J. Phys.: Condens. Matter 14, 2745 (2002).


16. B. Hammer, L. B. Hansen and J. K. Norskov, 'Improved adsorption energetics within density-functional theory using revised Perdew-Burke-Ernzerhof functionals'.
17. Anurag Shrivastav, Neha Tyagi and R. K. Singh, JCTN, Vol. 8, 1-6, 2011.
18. Anurag Shrivastav, Neha Tyagi and R. K. Singh, Materials Chemistry and Physics 127 (2011) 489-494.
19. Anurag Srivastava, Mohammad Irfan Khan, Neha Tyagi and Purnima Swaroop Khare, Volume 2014 (2014), Article ID 984591.


STUDY AND EXPERIMENTAL ANALYSIS OF LINEAR AND NON LINEAR BEHAVIOUR OF PIPE BEND WITH OVALITY

Balaji A#1, Faheem Ashkar H#2, Jahir Hussain H#3, Elamparithi R#4
#1 Assistant Professor, #2,#3,#4 UG Scholars
Department of Mechatronics Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India-638052
1 bala2009mct@gmail.com
2 faheem.ashkar@gmail.com
3 iamjahirhussain@gmail.com
4 elamparithiramasamy@gmail.com

Abstract: The present study performed a series of experiments using a real-scale
experimentation process to evaluate the effects of load variation with respect to ovality for
various schedule numbers, namely SCH 40 long radius, SCH 40 short radius and SCH 80 short
radius bends, with and without internal pressure. The experiments were conducted at ambient
temperature, within the elastic limit of the bend, under in-plane opening and in-plane
closing bending moments and also out-of-plane clockwise and out-of-plane anticlockwise
bending moments. The experiments included calculating the displacement as well as the
percentage change in ovality in the intrados, crown and extrados regions of the bend. The
displacement in the intrados and extrados regions increased almost linearly with load for
both in-plane and out-of-plane bending moments. Allowable limit loads and ovality are
suggested for different diameters of pipe bends and for different pipe materials; this helps
in avoiding rejection of pipes due to insufficient wall thickness. The mathematical results
and software results are compared with the experimental results to obtain the optimised
output.

Keywords: Pipe bends, Internal Pressure, In-plane and Out of plane bending moments.

I INTRODUCTION
Large pipelines and pipe networks are part of almost every industrial setup today. These are
most commonly found in petroleum rigs, refineries, factories producing chemicals and
pharmaceuticals, and in power plants. In these and other industrial applications, pipes are
very often used to carry substances that, by virtue of their pressure, temperature, and
physical and chemical characteristics, can have serious negative effects on health, property
and the environment if released into the atmosphere. Examples of such substances include
steam, oil and chlorine gas. Failure in a piping system could cause problems like an
unscheduled, and hence costly, plant shutdown for maintenance, or even a catastrophe, like
exposing the core of a nuclear reactor. Therefore, the integrity of pipes in industrial
contexts is of paramount importance. This integrity relies heavily on the correctness of pipe
design, which can only be achieved through a thorough understanding of the behavior of piping
components and systems under different types of loads.

II EQUATIONS USED FOR CALCULATING BASIC PARAMETERS
The ovality of a bend section is shown in figure 1.1. The minimum required wall thickness at
the pipe bend demands that the bending process does not produce a difference between the
maximum and minimum diameters greater than 8% for internal pressure service and 3% for
external pressure service (Engineering Design and Analysis Ltd.). The centre line radius of
pipe bends should typically be a minimum of 3 times the nominal pipe diameter. The codes have
certain requirements for the acceptability of finished bends, which depend upon the following
parameters:
i. Thinning and thickening.
ii. Ovality.
iii. Buckling.

Figure 1.1 Bend Ovality


A. Thinning and Thickening
In every bending operation the outer portion of the bend is stretched and the inner portion
is compressed. This leads to thinning at the extrados and thickening at the intrados of the
pipe bend. Thinning is defined as the ratio of the difference between the nominal thickness
and the minimum thickness to the nominal thickness of the pipe bend. Thickening is defined as
the difference between the maximum thickness and the nominal thickness divided by the nominal
thickness of the pipe bend. The percentage change in thinning and thickening is calculated
using equations 2.1 and 2.2 (Veerappan A and Shanmugam S, 2012). Because of uncertainties
introduced by the pipe-manufacturing method, it is not possible to exactly predetermine the
degree of thinning.

Thinning (%) = ((Tnom - Tmin) / Tnom) x 100 ------- (2.1)

Thickening (%) = ((Tmax - Tnom) / Tnom) x 100 ------- (2.2)

Where,
Tnom = Nominal Thickness of the Bend (mm).
Tmax = Maximum Thickness of the Bend (mm).
Tmin = Minimum Thickness of the Bend (mm).

B. Ovality
During the bending operation the cross section of the bend assumes an oval shape whose major
axis is perpendicular to the plane of the bend. The degree of ovality is determined by the
difference between the major and minor axes divided by the nominal diameter of the pipe. When
the bend is subjected to internal pressure, it tries to re-round the cross section by
creating secondary stress in the hoop direction. The percentage change in ovality is
calculated using equation 2.3 (Veerappan A and Shanmugam S, 2012).

Ovality (%) = ((Dmax - Dmin) / Dnom) x 100 -------- (2.3)

Where,
Dmax = Maximum Outside Diameter of the Bend (mm).
Dmin = Minimum Outside Diameter of the Bend (mm).
Dnom = Nominal Outside Diameter of the Bend (mm).
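A short sketch of these acceptance checks follows. Only the 33.4 mm nominal outside diameter comes from Table 3.1; the measured diameter readings in the example are illustrative values, and the 8% limit is the internal-pressure service limit quoted in section II.

```python
# Acceptance checks from equations 2.1-2.3, with the 8% internal-pressure
# ovality limit quoted in section II. Sample readings are illustrative only.

def thinning_pct(t_nom, t_min):
    return (t_nom - t_min) / t_nom * 100.0

def thickening_pct(t_nom, t_max):
    return (t_max - t_nom) / t_nom * 100.0

def ovality_pct(d_max, d_min, d_nom):
    return (d_max - d_min) / d_nom * 100.0

D_NOM = 33.4  # nominal outside diameter (mm), Table 3.1
ov = ovality_pct(d_max=34.2, d_min=32.9, d_nom=D_NOM)  # illustrative readings
print(f"ovality = {ov:.2f}% -> {'OK' if ov <= 8.0 else 'reject'} for internal pressure")
```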

III STANDARD PARAMETERS
The specifications, such as the bend dimensions and chemical composition of the bend section,
were taken from the ASME B36.10 catalogue for a 1 inch diameter bend. The bends are
classified into the following categories:
1) SCH40 (Wall Thickness 3.4 mm)
2) SCH80 (Wall Thickness 4.5 mm)
3) SCH160 (Wall Thickness 6.4 mm)
The outer diameter of the bend is kept constant and the bore size is varied according to the
schedule number of the bend. Within each schedule number the bends are classified into two
categories:
1) Long Radius Bend.
2) Short Radius Bend.
In our investigation three types of specimens have been used, according to their availability:
1) SCH40 Long Radius Bend.
2) SCH40 Short Radius Bend.
3) SCH80 Short Radius Bend.
In each case a straight pipe of six inch length has been attached at both sides of the bend
section. The standard parameters and chemical composition of the pipe are given in tables 3.1
and 3.2, taken from the ASME B36.10 and ASTM A106 Grade B catalogues.

Table 3.1 Standard Parameters
Description                Parameters
Pipe Standard              ASTM A106 Grade B
Schedule Number (SCH)      40 and 80
Pipe Size                  25 mm
Outside Diameter (D)       33.4 mm
Inside Diameter            26.6 mm and 24.4 mm
Wall Thickness (t)         3.4 mm and 4.5 mm
Tensile Strength (min)     413 MPa
Yield Strength (min)       241 MPa

Table 3.2 Chemical Compositions
Composition          Percentage
Carbon (max)         0.30
Manganese            0.29 to 1.06
Phosphorous (max)    0.025
Sulfur (max)         0.025
Silicon (min)        0.10

Figure 4.1 Diagrammatical Model of Set-up [Karmanos et al.]

IV EXPERIMENTATION SETUP
The diagrammatical model of the experimental setup for testing the elbow under the in-plane
bending mode is shown in figure 4.1. One end of the pipe is clamped to the ground and the
other end is kept free for applying the in-plane moment load. A long rod is attached at the
free end so that the in-plane load can be applied easily. The length of the straight pipes EB
and GH is equal to six times the diameter of the bend section. The in-plane bending mode is
created when the load is applied in the vertical direction on the beam BA: when the load is
applied vertically upward the bend section is subjected to the in-plane opening mode, and
when it is applied vertically downward the bend section is subjected to the in-plane closing
mode. The out-of-plane bending mode is created by applying the load in the horizontal
direction on the beam. A spring balance and a load cell placed at the free end of the rod are
used to measure the magnitude of the applied load. Dial gauges fixed at the intrados, crown
and extrados regions are used to measure the deflection in the bend section. The maximum
elastic bending moment that can be applied to the test specimen is calculated using the
formula taken from ASME BPVC, Section III, shown in equation 4.1.

Maximum bending moment (Mi(max)):
------ (4.1)

Where,
z = Section Modulus.
Sm = Allowable Stress Value.
Bend Factor = Rt/r2.

The resulting maximum bending moments and corresponding loads are:
Long Radius SCH40 Bend = 613.54 N-m = 1067.64 N (108.8 kg)
Short Radius SCH40 Bend = 468.25 N-m = 814.91 N (83.09 kg)
Short Radius SCH80 Bend = 675.40 N-m = 1175.43 N (119.81 kg)
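The moment-to-load conversion above is consistent with a single lever arm: dividing each moment by its load gives approximately 0.575 m in all three cases. The two-line check below uses that inferred arm (it is derived from the quoted numbers, not stated in the paper):

```python
# Inferred check: load = moment / lever_arm reproduces the three quoted loads.
LEVER_ARM_M = 0.5747  # metres, inferred from the moment/load ratios above

for name, moment_nm in [("SCH40 long", 613.54), ("SCH40 short", 468.25),
                        ("SCH80 short", 675.40)]:
    load_n = moment_nm / LEVER_ARM_M
    print(f"{name}: {load_n:.1f} N (~{load_n / 9.81:.1f} kg)")
```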
The Pro/E model and photographic view of the setup are shown in figures 4.2 and 4.3 in front,
side and top views. By rotating the hand wheel the desired load is applied to the
corresponding bend section.

Figure 4.2 CREO Elements Model (front, side and top views)
Figure 4.3 Photographic View of the Setup (front, side and top views)



Figure 4.4 Isometric View of CREO Elements Model
Figure 4.5 Isometric View of the Setup

The bend section to be tested is fixed in the base frame. One end of the long rod is attached
to the bend section and the other side of the rod is clamped in the plate which moves up and
down for applying the in-plane bending modes. The movement of the plate is attained by
rotating the hand wheel. The spring balance is placed at the bottom of the vertical frame
when the bend is subjected to the in-plane opening mode, and at the top of the vertical frame
when the bend is subjected to the in-plane closing mode. Two supports resting on the base
frame are used for applying out-of-plane bending in the clockwise and anticlockwise modes.
Dial gauges fixed at the intrados, crown and extrados regions are used to measure the
deflection in the bend section. During experimentation the load is applied in an incremental
manner up to the maximum bending moment that can be applied to the bend section. For each
mode three sets of readings have been taken and then averaged. In this experimental setup the
readings are taken without any internal pressure medium such as water, oil or steam.

V RESULTS AND DISCUSSION

V.I ANALYSIS RESULTS

Finite Element Analysis (FEA) was performed in ANSYS 12.1 with a solid element type. The
following values of material properties were used in the present calculations: E = 193 GPa,
Poisson's ratio = 0.3 and limiting stress = 193 MPa. The FEA models were subjected to
internal pressure, in-plane bending and out-of-plane bending modes. Internal pressures were
applied as a distributed load to the inner surface of the FEA model.

Figure 5.1.1 Cross Section of the Pipe bend attached to straight pipe
Figure 5.1.2 Pipe bend attached to straight pipe
Figure 5.1.3 In-plane bending mode (closing)
Figure 5.1.4 Stress Distribution of the pipe bend in in-plane bending mode (closing)



Figure 5.1.5 Out of Plane bending model
Figure 5.1.6 Out of Plane bending (clockwise)
Figure 5.1.7 Stress Distribution of the pipe bend in out-of-plane bending mode (clockwise)

The same procedure was performed for the in-plane opening mode and the out-of-plane
anticlockwise mode, and graphs were plotted for the displacements and deflections. These
values are compared with the experimental setup values.

V.II EXPERIMENTAL RESULTS

A. In-plane Closing Bending Mode:
The maximum displacement results in the intrados, crown and extrados regions, as well as the
percentage change in ovality of the bend during the in-plane closing bending mode, are shown
in Table 5.2.1. The readings were taken as five sets and the averaged values are shown in the
table.

Table 5.2.1 Displacement and percentage change in ovality (in-plane closing mode)
Schedule Number    Max. Displacement (mm)                % change in Ovality
                   Intrados   Crown    Extrados          Zero Load   Max. Load
40-Long Radius     1.69       0.92     1.48              4.310       3.070
40-Short Radius    0.74       0.151    0.62              1.417       0.299
80-Short Radius    0.75       0.258    0.99              3.333       1.856

B. Out-of-Plane Clockwise Mode:
The maximum displacement results in the intrados, crown and extrados regions, as well as the
percentage change in ovality of the bend during the out-of-plane clockwise mode, are shown in
Table 5.2.2. The readings were taken as five sets and the averaged values are shown in the
table.

Table 5.2.2 Displacement and percentage change in ovality (out-of-plane clockwise mode)
Schedule Number    Max. Displacement (mm)                % change in Ovality
                   Intrados   Crown    Extrados          Zero Load   Max. Load
40-Long Radius     0.79       0.56     0.74              4.32        2.09
40-Short Radius    0.54       0.05     0.35              0.35        0.299
80-Short Radius    0.65       0.154    0.59              0.59        1.57

The same procedure was performed for the in-plane opening mode and the out-of-plane
anticlockwise mode, and graphs were plotted for the displacements and deflections.

VI GRAPHS AND DISCUSSIONS
The displacement variations from the analysis results were compared with the experimental
results and show approximately the same variation; the percentage change in ovality for the
various schedule numbers of pipe bends is plotted in the graphs below.



VII CONCLUSIONS
The experimental results and analytical results show that the in-plane bending moment gives
larger deflections compared to the out-of-plane bending moment. The percentage variation in
ovality during the in-plane closing mode and the out-of-plane clockwise mode, from zero to
maximum loading, increased from schedule 40 long radius to schedule 40 short radius (from
28.77% to 72.89%) but decreased from schedule 40 short radius to schedule 80 short radius
(from 72.89% to 44.31%). For the in-plane opening mode and the out-of-plane anticlockwise
mode the percentage variation in ovality behaves in the same manner as for the out-of-plane
clockwise mode. The experimental results show that the maximum displacement occurs in the
intrados region; hence the intrados region is more flexible than the extrados region. When
the schedule number is increased, the rigidity of the bend material increases, and the
percentage variation in ovality also increases.
The present work can be extended to include the effect of internal pressure on the pipe bend
under in-plane and out-of-plane bending moments, the temperature effects involved, and
material microstructure analysis to control geometrical irregularities due to ovality. The
influence of initial ovality in pipe bends is considered to be one of the major factors in
reducing the percentage ovality, which demands analysis of the behavior to avoid geometrical
irregularities. The temperature effects involved in pipe bends when internal pressure is
introduced need to be analyzed to find the exact reason for geometrical irregularities in
pipe bends due to ovality. Material microstructure analysis can help to predict the behavior
of various materials at elevated temperature and pressure, and to predict grain size changes
when a pipe bend is loaded.

Graph 6.1 Displacement Variations for Analysis Results
Graph 6.2 Displacement Variations for Experimental Results
Graph 6.3 Percentage Change in Ovality for SCH 40 radius pipe bends (experimental values)

VIII REFERENCES
1. Veerappan AR, Shanmugam S and T. Christo Michael, 'Effect of ovality and variable wall thickness on collapse loads in pipe bends subjected to in-plane bending closing moment', Engineering Fracture Mechanics, Vol. 7, pp. 138-148, 2012.
2. Veerappan A and Soundrapandian S, 'The Accepting of Pipe Bends With Ovality and Thinning Using Finite Element Method', Journal of Pressure Vessel Technology, Vol. 132(3), pp. 031204, 2010.
3. Chattopadhyay J, 'The effect of internal pressure on in-plane collapse moment of elbows', Nuclear Engineering and Design, Vol. 212, pp. 133-144, 2002.
4. Weib E, 'Linear and nonlinear finite-element analyses of pipe bends', International Journal of Pressure Vessels and Piping, Vol. 67(2), pp. 211-217, 1996.


A REVIEW ON PERFORMANCE ANALYSIS OF MIMO-OFDM SYSTEM BASED ON DWT AND FFT SYSTEMS

Pitcheri Praveen Kumar#1, I. Suneetha#2, N. Pushpalatha#3
#1 M.Tech (DECS) Student, #2 Associate Professor & Head, #3 Assistant Professor
Department of ECE, AITS
Annamacharya Institute of Technology and Sciences, Tirupati, India-517520
1 ppvvsreddy@gmail.com
2 iralasuneetha.aits@gmail.com
3 pushpalatha_nainaru@rediffmail.com

Abstract: The MIMO-OFDM system has the capability of transmitting high-speed data rates to
the end user without limiting the channel capacity and with no ISI. MIMO-OFDM is implemented
on the basis of the FFT because of its multicarrier modulation and efficient computational
capability. However, the FFT in MIMO-OFDM suffers from the high side lobes produced by the
rectangular window and from the cyclic prefix, which offers no flexibility, no optimum
resolution and high PAR losses. The alternative to FFT-based MIMO-OFDM is the wavelet-based
multi-user MIMO-OFDM system. An OFDM system implemented with wavelets has many advantages
over FFT-based OFDM: unlike FFT-based MIMO-OFDM, the wavelet-based OFDM system does not
require a cyclic prefix, and it offers optimal resolution and flexibility. In this review the
DWT-based OFDM is compared with the FFT-based OFDM by examining the BER performance with
BPSK and QPSK modulation in AWGN.

I. INTRODUCTION
The increased number of users in wireless mobile communication systems makes it necessary to
provide a high QOS (Quality of Service) and high data rates. High data rates can be achieved
by increasing the spectrum efficiency. The goal of future communication networks is to
provide high data rates, with services such as high speed internet access, video
communication and videophones. The barrier in any communication system is the transmission
loss of the propagating signal between the transmitter and the receiver over a number of
different paths, called multipath propagation. Multipath propagation subjects the signal to
various types of fading, which change the phase, amplitude and frequency of the transmitted
signal and cause a decay of the signal strength at the receiver. Multipath propagation losses
occur in both the time domain and the frequency domain.

The invention of Orthogonal Frequency Division Multiplexing (OFDM) has made it a popular
technique for mitigating multipath effects in transmission. OFDM in a wireless communication
channel transmits very high data rates without limiting the channel capacity in an allocated
frequency band. OFDM has a number of advantages: reduction of the impulse response over the
channel, high spectral efficiency, and robustness against Inter Symbol Interference (ISI) and
Inter Carrier Interference (ICI). The goal of any communication system is to increase the
spectral efficiency and to improve link reliability.

MIMO (Multiple-input Multiple-output) wireless technology [1] gives improved spectral
efficiency through its spatial multiplexing gain, and improved link reliability through its
antenna diversity gain. In a MIMO system, multiple antennas are deployed at both the
transmitter side and the receiver side of the wireless system; such systems have generated
considerable interest in recent years [2], [3].

OFDM is an MCM (Multi Carrier Modulation) scheme which is used to convert frequency-selective
channels into parallel flat-fading narrowband sub-channels. Because of multipath propagation,
a cyclic prefix is added to the transmitted signal to mitigate the Inter Symbol Interference
(ISI) [4]. OFDM implemented on the basis of the Fast Fourier Transform (FFT) multiplexes the
signals together at the input side and decodes the signal at the receiver side to regain the
original signal. Adding the cyclic prefix to the transmitted signal causes a reduction of the
spectral efficiency, and the narrowband analysis of the FFT-based MIMO-OFDM produces high
side lobes in the rectangular window, which enhances its sensitivity to ICI and narrowband
interference.

The alternative to FFT-based MIMO-OFDM is the Discrete Wavelet Transform based MIMO-OFDM
system. The DWT consists of a Low Pass Filter (LPF) and a High Pass Filter (HPF) functioning
as a Quadrature Mirror Filter (QMF) pair, with the capability to regenerate the original
signal and with orthogonal properties. In the DWT-based OFDM system, sub-band coding is
performed using the low-pass and high-pass sub-signals. Wavelets in wireless communication
systems have many advantages, by way of channel characterization, interference mitigation,
cognitive radio and networking [7], flexibility, and optimal resolution.

The applications of wavelets extend to image synthesis, nuclear engineering, biomedical
engineering, magnetic resonance imaging, pure mathematics, data compression, computer
graphics and animation, human vision, radar, optics and astronomy. MIMO-OFDM implemented with
wavelets can provide high performance in these different fields.


II. OVERVIEW

2.1 MIMO-OFDM SYSTEM:
To meet the increased demand on wireless systems it is necessary to establish efficient use
of the radio spectrum, which is done by placing the modulation carriers as closely as
possible without causing Inter Symbol Interference (ISI), while carrying as many bits as
possible. For the transmission of higher data rates a short symbol period should be used. The
symbol period T is the inverse of the baseband data rate R (R = 1/T), so T must be small if
the transmitted data rates are to be high. However, a shorter symbol period increases the
ISI; OFDM addresses this with its particular modulation and multiplexing technique.

A MIMO (Multiple-input Multiple-output) communication system is the implementation of an
array of antennas at the transmitter and at the receiver side of a wireless communication
system. MIMO systems were invented in the mid 1980s by Jack Winters and Jack Salz of Bell
Laboratories. MIMO systems provide enhanced system performance under the same transmission
capability as a single-input single-output system. The MIMO-OFDM system has the capability of
transmitting higher data rates to the user with no ISI (Inter Symbol Interference) and no ICI
(Inter Carrier Interference). MIMO systems are used in various applications such as digital
television, wireless local area networks, metropolitan networks and mobile communications,
owing to their large channel capacity, which is proportional to the total number of
transmitters and receivers.

III. RELATED WORK
Several methods have been proposed by different researchers to analyse the performance of
MIMO-OFDM based on the FFT and on the DWT, and to compare the performance of FFT-based
MIMO-OFDM with that of DWT-based MIMO-OFDM systems. The main research on FFT and DWT based
MIMO-OFDM systems is summarized below.

3.1 FFT BASED MIMO-OFDM SYSTEMS:
MIMO-OFDM implemented with the FFT has the advantages of easy computation and easy
implementation. Fourier analysis is used to obtain the frequency spectrum of the signals; the
FFT algorithm computes the Discrete Fourier Transform in a fast way with reduced complexity,
and converts the input data stream into N parallel data streams after the modulation
technique is performed on the input sequence.

For a given finite data stream, which can be represented as x[n], n = 0 to N-1, the
corresponding FFT output sequence is

X(k) = Σ_{n=0}^{N-1} x(n) e^(-j2πnk/N), k = 0, 1, ..., N-1    (1)

and the IFFT, i.e. the discrete-domain sequence, is

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^(j2πnk/N), n = 0, 1, ..., N-1    (2)

The FFT based MIMO-OFDM system performs three types of operations: pre-coding, spatial
multiplexing and diversity coding. The pre-coding technique can be used to improve the gain
of the received signal and to reduce the effect of multipath fading. Spatial multiplexing is
used to split the higher data rate into lower data rates and to transmit them on the
corresponding antennas. In the FFT system the cyclic prefix is added to the sequence before
transmission to evade Inter Symbol Interference (ISI) and Inter Carrier Interference (ICI),
and the inverse FFT operation is performed at the receiver side to reconstruct the original
input signal; the added cyclic prefixes come at the cost of bandwidth (BW) [5]. The FFT based
MIMO-OFDM uses a narrowband analysis which produces high side lobes due to the rectangular
window of the FFT, leading to high PAPR losses as well as ISI and ICI.

3.2 WAVELET BASED MIMO-OFDM SYSTEM:
Wavelets are short pulses of a continuous-time signal. The advantages of wavelets are that
they have simple mathematical notation, and that they cut up the data into different
frequency components and then study each component with a resolution matched to its scale.
The key advantage of wavelets is that the symbols overlap in both the time and frequency
domains, which exhibits high spectral efficiency. Depending on the requirement, various types
of wavelets can be constructed. Wavelets are used in many applications, including quantum
physics, electrical engineering and seismic geology [6]. The wavelet based MIMO-OFDM system
has high flexibility, optimum resolution and no need to add cyclic prefixes to the
transmitted signal. The wavelet transform produces a multi-resolution decomposition of a
continuous-time signal into different frequencies at different times [5]. The data rates in
the wavelet transform are split into smaller data rates, and these are sampled into upper
frequencies as well as lower frequencies. The wavelet transform has strong orthogonality
properties and can reproduce the original signal; it is calculated independently on each
split part of the sub-signal [9].
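To make equations (1)-(2) of section 3.1 and the cyclic-prefix step concrete, here is a minimal numpy sketch of the FFT-based OFDM chain; the subcarrier count, prefix length and BPSK mapping are illustrative choices, not parameters from the papers under review.

```python
import numpy as np

# Minimal FFT-based OFDM chain: map symbols onto N subcarriers with an IFFT,
# prepend a cyclic prefix, then undo both at the receiver (ideal channel).

N, CP = 64, 16                               # subcarriers, cyclic-prefix length
bits = np.random.randint(0, 2, N)
symbols = 2 * bits - 1                       # BPSK mapping: {0,1} -> {-1,+1}

tx_time = np.fft.ifft(symbols, N)            # equation (2): IFFT modulator
tx_with_cp = np.concatenate([tx_time[-CP:], tx_time])   # add cyclic prefix

rx_time = tx_with_cp[CP:]                    # receiver strips the prefix
rx_symbols = np.fft.fft(rx_time, N)          # equation (1): FFT demodulator
recovered = (rx_symbols.real > 0).astype(int)
assert np.array_equal(recovered, bits)       # ideal channel: exact recovery
```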


The wavelet transform comes in two types: the continuous wavelet transform and the discrete
wavelet transform. Splitting the full-band source signal into different frequency bands and
encoding each band individually according to its spectrum specification is called the
sub-band coding method. The IDWT works as the modulator at the transmitter end, and the DWT
works as the demodulator at the receiver side.

The different types of wavelets are used for different requirements in the communication
system. Some of the wavelets, their uses and their functionalities are explained below.

3.2.1 DAUBECHIES WAVELETS:
The Daubechies wavelets, introduced by Daubechies, are at the origin of wavelet functions.
They have a scaling function, are orthogonal, and have a finite number of vanishing moments
of the continuous-time signal; this property ensures that the number of non-zero coefficients
in the associated filter is finite. There are many extensions of this transform, such as db4
and db8; this wavelet has a simple mathematical notation and works like the Haar
transformation function.

3.2.2 BIORTHOGONAL WAVELET TRANSFORM:
The bi-orthogonal wavelets are families of compactly supported symmetric wavelets; the
symmetric coefficients of the bi-orthogonal wavelets result in a linear phase of the transfer
function [10][11]. Wavelets are very short samples of a continuous-time signal, and the
ripples vanish after a certain number of levels. The bi-orthogonal wavelets perform an
analysis as well as a synthesis operation, as shown in Fig1. In the notation {φ, ψ, φ~, ψ~},
once the degree N of vanishing moments of the wavelet ψ is chosen, the corresponding degree
of vanishing moments of the scaling function φ and of the dual pair φ~, ψ~ is fixed. The
bi-orthogonal wavelet is implemented with a two-band bi-orthogonality consisting of an
analysis bank and a synthesis bank to construct the shorter sequence, as shown in Fig1.

Fig1. Block diagram for a two band orthogonal FB.

The bi-orthogonal wavelet functions have the following properties:
1. They are zero outside of the segment.
2. The calculation algorithm is maintained and is thus very simple.
3. The associated filters are symmetrical.

3.2.3 HAAR WAVELET TRANSFORM:
The Haar wavelet is the simplest wavelet used in communication systems. It has a simple
mathematical function, is highly symmetric, and has an explicit expression in discrete form.
The advantages of the Haar wavelet are that it is conceptually simple, fast and memory
efficient. The Haar wavelets are compactly supported, and this compact support enables the
Haar decomposition to have good time localization; it is useful for locating jump
discontinuities and for working on signals with small-support wavelets, and the construction
of the wavelets is easy.

3.3 PERFORMANCE ANALYSIS OF FFT OFDM AND DWT OFDM SYSTEMS:
The performance of OFDM using FFT and DWT based systems is analyzed here by examining various
measures such as the BER and the constellation diagrams. The wavelet based OFDM system, using
Haar, bi-orthogonal and Daubechies wavelets, provides better performance than the FFT based
system. The performance is analyzed using the BER versus SNR.

3.3.1 FAST FOURIER TRANSFORM:
Fourier analysis is used to obtain the frequency spectrum of the signals. The FFT algorithm
computes the Discrete Fourier Transform in a fast way with reduced complexity, and converts
the input data stream into N parallel data streams after the modulation technique is
performed on the input sequence. The FFT operations of modulation and of adding and removing
the cyclic prefix are illustrated in Fig2.

For a given finite data stream x[n], n = 0 to N-1, the FFT output sequence is

X(k) = Σ_{n=0}^{N-1} x(n) e^(-j2πnk/N)    (3)

and the IFFT, i.e. the discrete-domain sequence, is

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^(j2πnk/N)    (4)

Fig2. FFT with cyclic prefix

3.3.2 DISCRETE WAVELET TRANSFORM:
A wavelet is a small portion of a continuous signal. Wavelets cut up the data into different
frequency components and then study each component with a resolution matched to its scale.
The key advantage of wavelets is that the symbols overlap in both the time and frequency
domains, which exhibits high spectral efficiency. The data rates in the wavelet transform are
split into smaller data rates, and these are sampled into upper as well as lower frequencies,
as constructed in Fig3. The wavelet transform has strong orthogonality properties and can
reproduce the original signal.

Fig3. DWT filter splitting.

First the input is processed with the low pass filter h(n), and the corresponding outputs are
the approximation coefficients; then the high pass filter g(n) is applied, and the resulting
outputs are the detail coefficients. This represents how the signal is decomposed, and the
mathematical expressions of the decomposition are

y1(n) = Σ_k x(k) h(2n - k)    (5)

y2(n) = Σ_k x(k) g(2n - k)    (6)

3.3.3 TABULAR REPRESENTATION OF WAVELETS:

Table1. BER for BPSK modulation.
Eb/N0   FFT      HAAR     DB4      BIOR
-       0.5004   0.4148   0.501    0.5052
-       0.5003   0.3777   0.5005   0.5039
-       0.5003   0.3481   0.4998   0.5005
10      0.4998   0.2486   0.4996   0.4992
20      0.4998   0.0117   0.4995   0.4992
50      0.4996   -        0.4995   0.4991
100     0.4996   -        0.4995   0.4989

Table2. BER for QPSK modulation.
Eb/N0   FFT      HAAR     DB4      BIOR
-       0.7493   0.6506   0.7591   0.7584
-       0.7493   0.6088   0.753    0.757
-       0.7493   0.571    0.7563   0.7566
10      0.7493   0.4277   0.7541   0.7574
20      0.7495   0.0245   0.7519   0.7567
50      0.7497   -        0.7495   0.7545
100     0.7496   -        0.7479   0.7511

3.4 WAVELETS IN ANALYSIS OF RAYLEIGH FADING CHANNELS
MIMO-OFDM is normally implemented using the Fast Fourier Transform (FFT) and the Inverse Fast
Fourier Transform (IFFT). The FFT uses a rectangular window technique, and the rectangular
window produces high side lobes; the impact of the side lobes causes interference when the
impairments are not compensated. The alternative to the FFT and IFFT is the wavelet based
OFDM [8], [9]. The wavelet transform produces nested resolution subspaces that are used to
decompose the signal:

... ⊂ V-1 ⊂ V0 ⊂ V1 ⊂ V2 ⊂ ...    (7)

The decomposition is done using the translation and dilation of a wavelet function; from the
various subspaces the dilated and scaled functions are formulated, i.e. {φ(t - n)} forms a
basis of V0. The scaling function and the wavelet function should both satisfy the dilation
equation

φ(t) = Σ_n φ(2t - n) h(n)    (8)

If φ(t) is to be orthogonal to its translates, then h[n] should satisfy the orthogonality
condition

Σ_n h[n] h[n - 2m] = δ[m]  and  Σ_n (-1)^n h(n) = 0    (9)

Given this sequence, we find another sequence g(n) such that the wavelet function satisfies
the dilation equation

ψ(t) = Σ_n φ(2t - n) g(n)    (10)

This function ψ(t) is the wavelet function, which is orthogonal to the scaling function; the
scaling and wavelet functions can be used to decompose the signal into different subspaces.
φ(t) occupies half the frequency space of φ(2t), and similarly for the wavelet frequencies;
the corresponding low-pass frequency function is h(-n) and the high-pass frequency function
is g(-n). This can be represented using the structures shown in Fig4 and Fig5.

On the analysis side the samples are downsampled by two; on the synthesis side the samples
are upsampled by a factor of 2, and filtering the low and high pass coefficients at K levels
and adding them gives the low pass coefficients at the (K-1) level, as described in Fig6.

The performance over a flat Rayleigh fading channel of the FFT based system and of the DWT
based system using bi-orthogonal wavelets is shown below (Fig7).
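The analysis/synthesis structure of equations (5)-(6) and Fig6 can be illustrated with one level of a Haar filter bank; this is a minimal sketch chosen because the Haar pair makes the perfect-reconstruction property checkable in a few lines, and it is not the exact filter pair used in the papers reviewed.

```python
import numpy as np

# One level of a Haar analysis/synthesis filter bank: filter and downsample
# by 2 (analysis), then upsample and recombine (synthesis).

def haar_analysis(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch, cf. eq. (5)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch, cf. eq. (6)
    return approx, detail

def haar_synthesis(approx, detail):
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.random.randn(8)
a, d = haar_analysis(signal)
assert np.allclose(haar_synthesis(a, d), signal)   # perfect reconstruction
```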

Fig4. The decomposition tree structure of the wavelet.
Fig5. The structure of wavelet decomposing.
Fig6. Analysis and synthesis representation of wavelets.

3.4.1 SIMULATION RESULTS:
Fig7. Performance of FFT-OFDM and DWT-OFDM in flat-fading Rayleigh channels.

IV. PROPOSED DWT MIMO-OFDM SYSTEM
The proposed DWT MIMO-OFDM system has many advantages over the FFT based MIMO-OFDM system. A
DWT based system constructed with the Haar mother wavelet reduces the number of antennas
below the otherwise required number, which suits the arena of 3GPP networks. The DWT based
MIMO-OFDM with the modified Haar transform converts the matrix into circulant form, so the
simulation can be done easily; it also performs an interleaving operation that produces
efficient results. Unlike the FFT based MIMO-OFDM, the DWT based MIMO-OFDM, particularly with
the Haar mother wavelet transformation, is conceptually simple, provides strong orthogonality
properties and has an absolutely symmetric property. The DWT based MIMO-OFDM oversamples the
signal into different resolutions, in both the time domain and the frequency domain. By using
sub-band coding, the DWT with the Haar transformation can reconstruct the samples accurately.
The performance is measured by relating the BER to the transmission capability. The DWT based
MIMO-OFDM is suitable for all applications of next-generation wireless systems and has the
capability of delivering high speed packet access (HSPA). The DWT based transceiver of
MIMO-OFDM is represented below.


Fig8. Block diagram representation of DWT based MIMO-OFDM transmitter.
Fig9. Block diagram representation of DWT based MIMO-OFDM receiver.

VI. CONCLUSION
The DWT implementation of the MIMO-OFDM system with the Haar-constructed transform produces
better results when compared to the FFT based MIMO-OFDM system, and it can be used
effectively for future generations of wireless communication systems such as 4G and HSPA.

References:
[1] A. J. Paulraj, R. U. Nabar and D. A. Gore, Introduction to Space-Time Wireless Communications, Cambridge, UK: Cambridge Univ. Press, 2003.
[2] G. J. Foschini, 'Layered space-time architecture in fading environment for wireless communication when using multi-element antennas', Bell Labs Tech. J., pp. 41-59, Autumn 1996.
[3] G. G. Raleigh and J. M. Cioffi, 'Spatio-temporal coding for wireless communication system', IEEE Trans. Communications, vol. 46, pp. 357-366, Mar. 1998.
[4] J. A. C. Bingham, 'Multicarrier modulation for data transmission: an idea whose time has come', IEEE Communications Magazine, vol. 28, no. 5.
[5] R. Dilmarghani and M. Ghavami, 'Fourier vs wavelet based ultra wide band systems', 18th IEEE International Symposium on Indoor and Mobile Radio Communications, pp. 1-5, September 2007.
[6] R. Crandall, Projects in Scientific Computation, New York, 1994, pp. 197-198, 211-212.
[7] Hussain and Abdullah, 'Studies on FFT-OFDM and DWT-OFDM systems', ICCCP, 2009.
[8] W. Saad, N. El-Fishawy and S. EL-Rabaie, 'An Efficient Technique for OFDM System Using DWT', Springer-Verlag Berlin Heidelberg, pp. 533-541, 2010.
[9] Jarrod Cook and Nathan Gov, 'Research and Implementation of UWB Technology', Final Report, Senior Capstone Project, 2007.
[10] B. Vidakovic and P. Muller, 'Wavelets for Kids', 1994, Part One and Part Two.
[11] M. V. Wickerhauser, 'Acoustic Signal Compression with Wave Packets', 1989.
[12] B. Monisha, M. Ramkumar, M. V. Priya, A. Jenifer Philomina, D. Parthiban, S. Suganya and N. R. Raajan, 'Design and implementation of orthogonal transform based Haar Division Multiplexing for 3GPP Networks', ICCCI-2012, Jan. 10-12, 2012.


Study of Various Effects on Peak to Average Power Reduction using OFDM

EM HARINATH, M.Tech DECS Student, Department of ECE, AITS, Tirupati
Smt. N. PUSHPALATHA, Assistant Professor, Department of ECE, AITS, Tirupati
Annamacharya Institute of Technology and Sciences (AITS), Tirupati
Hari.810213@gmail.com
Pushpalatha_nainaru@gmail.com

Abstract:
In this paper the novel method of complex weighting for peak-to-average power ratio (PAPR)
reduction of OFDM is addressed, and various effects on peak-to-average power reduction using
OFDM are studied. The simulation results examine the combination of different amplitude
weighting factors, including rectangular, Bartlett, Gaussian, raised cosine, half-sine,
Shannon, and subcarrier masking, with phasing of each OFDM subcarrier using a random phase
updating algorithm. Using amplitude weighting, the bit error performance of weighted
multicarrier transmission over a multipath channel is also investigated. In the random phase
updating algorithm the phase of each carrier is updated by a random increment until the PAPR
goes below a certain threshold level. Further, the random phase updating algorithm has been
extended by dynamically reducing the threshold level. For an OFDM system with 322 subcarriers
and Gaussian weighting combined with random phase updating, a PAPR reduction gain of 3.2 dB
can be achieved. Results show that grouping of amplitude weights and phases reduces the
hardware complexity while not much impacting the PAPR reduction gain of the method. The
dynamic threshold gives the best results and can reduce the mean power variance of an
8-carrier OFDM signal with BPSK modulation by a factor of 7 dB.

1. Introduction:
Orthogonal frequency division multiplexing (OFDM) is a parallel transmission method in which the input data is divided into several parallel information sequences, each of which modulates a subcarrier. The OFDM signal has a non-constant envelope since the modulated signals from the orthogonal subcarriers are summed. The PAPR problem occurs when these signals add up coherently, resulting in a high peak. The high PAPR of the OFDM signal is not favorable for power amplifiers working in the non-linear region. Different methods have been proposed to mitigate the PAPR problem of OFDM. These techniques are mainly divided into two categories: signal scrambling and signal distortion techniques. Signal scrambling techniques are all variations on how to modify the phase of the OFDM subcarriers to decrease the PAPR. Signal distortion techniques reduce the amplitude of samples whose power exceeds a certain threshold. The signal scrambling techniques are: block coding techniques, block coding with error correction, selected mapping (SLM), partial transmit sequence (PTS), the interleaving technique, tone reservation (TR) and tone injection (TI). The signal distortion techniques are: peak windowing, envelope scaling, peak reduction carrier, and clipping and filtering. This paper addresses the PAPR reduction of OFDM by a combination of both signal scrambling and signal distortion techniques.
2.0. Related project work:
This section explains the two existing methods on which this paper builds:
2.1. Weighted OFDM for wireless multipath channels
2.2. Random phase updating algorithm for OFDM transmission with low PAPR
These two methods are described in the following subsections.
2.1. Weighted OFDM for wireless multipath channels
2.1.1. Description:
OFDM, also called the multicarrier (MC) technique, is a modulation method that can be used for high-speed data communications. In this modulation scheme, transmission is carried out in parallel on different frequencies. The technique is desirable for the transmission of digital data through multipath fading channels. One advantage of this technique is spectral efficiency: in the MC method the spectra of the subchannels overlap each other while satisfying orthogonality. Because of the parallel transmission, the symbol duration is increased. Another advantage of this method is its ability to work in channels with impulsive noise characteristics. One more advantage of the MC method is its implementation with the FFT algorithm, which provides a fully digital implementation of the modulator and demodulator.
2.1.2. Weighted multicarrier modulation:
In this method weighted OFDM is explained and the PAPR reduction associated with this technique is reported. In serial data transmission, a sequence of data is transmitted as a train of serial pulses. In parallel transmission, however, each bit of a sequence of M bits modulates a carrier, so in the multicarrier technique transmission is parallel. The block diagram is similar to the conventional MC method, with the difference that each carrier is weighted by a real factor α_m, m = 0, 1, ..., M-1. In the modulator the input data with rate R is divided into M parallel information sequences, each of which modulates a weighted subcarrier. The frequency of the m-th carrier is

f_m = f_0 + m/T,   m = 0, 1, ..., M-1,     (1)

where f_0 is the lowest frequency, M is the number of carriers and T is the OFDM symbol duration. The weighted MC transmitted signal is

s(t) = Σ_i Σ_{m=0}^{M-1} α_m b_m(i) p(t - iT) e^{j2π f_m t},     (2)

where α_m is the real weighting factor of the m-th carrier, b_m(i) is the symbol of the m-th subchannel at time interval iT, which is ±1 for BPSK modulation and (±1 ± j)/√2 for QPSK, and p(t) is a rectangular pulse with amplitude one and duration T.
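As a numerical illustration of (1)-(2) — a sketch with assumed parameter values, not the paper's own code — the following Python/NumPy fragment builds one weighted BPSK OFDM symbol on M carriers and measures its PAPR:

import numpy as np

M, T = 16, 1.0                        # number of carriers and symbol duration (assumed)
t = np.linspace(0.0, T, 1024, endpoint=False)
f = np.arange(M) / T                  # eq. (1) with f_0 = 0 (baseband)

rng = np.random.default_rng(1)
b = 2.0 * rng.integers(0, 2, M) - 1.0          # BPSK symbols, +/-1
alpha = np.ones(M)                             # rectangular weighting for now

# Eq. (2) restricted to one symbol interval (i = 0, p(t) = 1 on [0, T)).
s = (alpha * b) @ np.exp(2j * np.pi * np.outer(f, t))

P = np.abs(s) ** 2
print(f"PAPR of this symbol: {10 * np.log10(P.max() / P.mean()):.2f} dB")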
2.1.3. Different weighting factors
Several weighting factors for weighting the OFDM signal are described below; each is defined for 0 ≤ m ≤ M-1 and is zero outside this interval.

Rectangular: this weighting function has a rectangular shape,
α_m = 1.     (3)

Bartlett: this weighting function has a simple triangular shape,
α_m = 1 - |2m/(M-1) - 1|.     (4)

Gaussian: these factors are generated from the Gaussian function,
α_m = exp( -[(m - M/2)/s]² / 2 ),     (5)
where s is the spread (standard deviation) of the weighting factors around M/2.

Raised cosine: the shape of this function on the interval [0, M-1] is described by 1 - cos(2πm/M),
α_m = 1 - cos(2πm/M).     (6)

Half-sin: this weighting function is given by
α_m = sin(πm/M).     (7)

Shannon: the shape of these weighting factors is the sinc function, sinc(x) = sin(x)/x, sampled across the M carriers, e.g.
α_m = sinc(π(2m - M)/M).     (8)

2.1.4. PAPR of weighted OFDM
In this section the impact of weighting the OFDM signal on the PAPR is investigated. The OFDM signal of (2) in the time interval 0 ≤ t ≤ T can be written as

s(t) = Σ_{m=0}^{M-1} c_m e^{j2π f_m t},  with  c_m = α_m b_m.     (9)

The instantaneous power of the OFDM signal is

P(t) = |s(t)|²     (10)
     = Σ_m |c_m|² + Σ_{m≠n} c_m c_n* e^{j2π(m-n)t/T}.     (11)

The symbols on different carriers are assumed to be independent; therefore the second term in (11) has zero mean and, averaging the power P(t), the average power becomes

E[P(t)] = Σ_{m=0}^{M-1} α_m².     (12)

The variation of the instantaneous power of the OFDM signal from the average is

p(t) = P(t) - E[P(t)].     (13)

Averaging p²(t) over a symbol period T,

(1/T) ∫_0^T p²(t) dt,     (14)

yields

(1/T) ∫_0^T p²(t) dt = 2 Σ_{i=1}^{M-1} |R_cc(i)|²,     (15)

where R_cc(i) is the autocorrelation function of the complex sequence c_m = α_m b_m,

R_cc(i) = Σ_{m=0}^{M-1-i} c_m c*_{m+i}.     (16)
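The sketch below (again my own illustration rather than the authors' code) generates the weighting factors listed above for M = 32 carriers and estimates the mean power variance of (13)-(15) by Monte Carlo simulation; the Gaussian spread s = M/16 and the exact sampling of the Shannon/sinc weights are assumptions:

import numpy as np

M = 32
m = np.arange(M)
weights = {
    "rectangular": np.ones(M),                                    # eq. (3)
    "bartlett":    1.0 - np.abs(2.0 * m / (M - 1) - 1.0),         # eq. (4)
    "gaussian":    np.exp(-0.5 * ((m - M / 2) / (M / 16)) ** 2),  # eq. (5), s = M/16
    "raised_cos":  1.0 - np.cos(2.0 * np.pi * m / M),             # eq. (6)
    "half_sin":    np.sin(np.pi * m / M),                         # eq. (7)
    "shannon":     np.sinc(2.0 * m / M - 1.0),                    # eq. (8), assumed sampling
}

t = np.linspace(0.0, 1.0, 512, endpoint=False)
carriers = np.exp(2j * np.pi * np.outer(m, t))    # f_m = m/T with T = 1
rng = np.random.default_rng(2)

for name, alpha in weights.items():
    pv = 0.0
    for _ in range(2000):                          # Monte Carlo over random BPSK symbols
        b = 2.0 * rng.integers(0, 2, M) - 1.0
        P = np.abs((alpha * b) @ carriers) ** 2
        pv += np.var(P)                            # variance of the instantaneous power
    print(f"{name:12s} mean power variance: {pv / 2000:.2f}")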
The parameter defined by (15) is the power variance of the OFDM signal and, as described below, is a good measure of the PAPR. The PAPR of the OFDM signal is

PAPR = max{P(t)} / E{P(t)}.     (17)

Normalizing the time-averaged p²(t) of (14)-(15) by the squared average power gives the power variance (PV) of the OFDM signal,

PV = E{p²(t)} / (E{P(t)})².     (18)

Using (17) and (18) it can be shown that the PAPR grows monotonically with the power variance, so a weighting or phasing scheme that minimizes the PV also minimizes the PAPR.
2.2. Random phase updating algorithm for OFDM transmission with low PAPR
OFDM is the basic technology for a number of communication systems such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), HIPERLAN/2, IEEE 802.11a and the digital subscriber line. The random phase updating algorithm is a signal scrambling technique: signal scrambling techniques are all variations on how to modify the phase to decrease the PAPR.
2.2.1. Description
In the random phase updating algorithm a random phase is generated for each carrier and assigned to that carrier.

2.2.2. PAPR of the OFDM signal
The OFDM signal in the period 0 ≤ t ≤ T can be written as

s(t) = Σ_{m=0}^{M-1} b_m e^{j2π f_m t},     (20)

where T is the OFDM symbol duration, b_m is the symbol of the m-th subchannel, which is ±1 for BPSK modulation and (±1 ± j)/√2 for QPSK modulation, and M is the number of carriers. The power of s(t) is

P(t) = |s(t)|² = Σ_m Σ_n b_m b_n* e^{j2π(m-n)t/T}.     (21)

The PAPR of the OFDM signal is written as PAPR = max{P(t)}/E{P(t)}. The variation of the instantaneous power of the OFDM signal from the average is p(t) = P(t) - E[P(t)], and its mean square can be expressed through the autocorrelation of the symbol sequence,

E[p²(t)] = 2 Σ_{i=1}^{M-1} |R_bb(i)|²,     (22)

where R_bb(i) is the autocorrelation function of the sequence b_m,

R_bb(i) = Σ_{m=0}^{M-1-i} b_m b*_{m+i}.     (23)

2.2.3. Random phase updating algorithm
Using (20), the OFDM signal with per-carrier phase shifts becomes

s(t) = Σ_{m=0}^{M-1} b_m e^{j(2π f_m t + φ_m)},     (24)

where φ_m is the m-th subcarrier phase shift. Adding random phases to the subcarriers changes the power variance of the OFDM signal. In the random phase updating algorithm, the phase of each subcarrier is updated by a random increment:

φ_m(i) = φ_m(i-1) + Δφ_m(i),     (25)

where i is the iteration index and Δφ_m(i) is the phase increment of the m-th subcarrier at the i-th iteration. The initial phase is assumed to be zero; at each iteration a random phase increment is generated and added to the phase of each subcarrier. The flow chart of this iterative phase updating is shown in Fig. 2: in variant (a) a certain threshold for the PV is set, while in variant (b) a limited number of iterations is allowed. Gaussian and uniform distributions were considered for the phase increments, and the uniform distribution was chosen; a connection exists between the phase-shift variance and the number of iterations needed to reach the threshold.

Fig. 2. Flow chart of the iterative random phase updating algorithm: (a) with threshold; (b) with a limited number of iterations.
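The update rule (25) can be sketched compactly in Python (an illustration only; the increment spread, the threshold and the carrier count are hypothetical). Phases are updated by uniform random increments until the normalized power variance of the symbol drops below a fixed threshold, as in Fig. 2(a), with the iteration cap of Fig. 2(b) as a safeguard; the dynamic-threshold variant of Section 2.2.4 below would simply lower the threshold each time it is reached.

import numpy as np

M = 8
t = np.linspace(0.0, 1.0, 512, endpoint=False)
carriers = np.exp(2j * np.pi * np.outer(np.arange(M), t))
rng = np.random.default_rng(3)
b = 2.0 * rng.integers(0, 2, M) - 1.0              # one BPSK OFDM symbol

def power_variance(phi):
    # Normalized PV of the symbol with per-carrier phase shifts phi, cf. (18).
    P = np.abs((b * np.exp(1j * phi)) @ carriers) ** 2
    return np.var(P) / np.mean(P) ** 2

phi = np.zeros(M)                                  # zero initial phases
threshold = 0.5 * power_variance(phi)              # fixed PV target (hypothetical)
for i in range(1000):                              # iteration cap, Fig. 2(b)
    if power_variance(phi) <= threshold:
        break
    phi += rng.uniform(-0.3, 0.3, M)               # eq. (25): random increments
print(f"stopped after {i} iterations, normalized PV = {power_variance(phi):.3f}")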
2.2.4. Phase updating with dynamic threshold
By proper selection of the random phase increments it is possible to reduce the PV threshold progressively. In this approach, illustrated in Fig. 3, the threshold level of the algorithm is dynamically reduced. The first step of the algorithm is to calculate the PV of the original OFDM symbol and to set the first threshold, e.g., 10% lower. Then, starting from zero initial phases, the random phase shifts are generated and combined with the symbols; the threshold is not changed during these iterations, and once the PV reaches the threshold, the threshold is lowered further.

Fig. 3. Flow chart of the random phase updating algorithm with dynamic threshold.
2.3. PROPOSED METHOD:
In the existing methods above, reducing the high PAPR increases the hardware complexity while the impact on the PAPR itself remains limited; their PAPR reduction gain is low. The two methods are therefore combined into a new method, complex-weighted OFDM transmission with low PAPR. Applying both techniques together further reduces the PAPR, by a factor of 4.8 dB.

Complex weighting for multicarrier modulation: the OFDM signal for one symbol interval 0 ≤ t ≤ T is written as

s(t) = Σ_{m=0}^{M-1} w_m b_m e^{j2π f_m t},     (26)

where M is the number of subcarriers, b_m is the modulation data of the m-th subcarrier, T is the OFDM symbol period, and w_m is a complex factor defined as

w_m = α_m e^{jφ_m},

where α_m is a positive real value and φ_m is the phase of the m-th subcarrier. The block diagram of an OFDM modulator with complex weighting factors is shown in Fig. 5.
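A sketch of the proposed complex weighting of (26), with hypothetical Gaussian amplitude weights and random phases standing in for the output of the phase-updating loop:

import numpy as np

M = 32
m = np.arange(M)
rng = np.random.default_rng(4)

alpha = np.exp(-0.5 * ((m - M / 2) / (M / 16)) ** 2)   # Gaussian amplitude weights
phi = rng.uniform(0.0, 2.0 * np.pi, M)                 # per-carrier phases (placeholder)
w = alpha * np.exp(1j * phi)                           # complex factor w_m of eq. (26)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
b = 2.0 * rng.integers(0, 2, M) - 1.0                  # BPSK data
s = (w * b) @ np.exp(2j * np.pi * np.outer(m, t))      # complex-weighted OFDM symbol
P = np.abs(s) ** 2
print(f"PAPR = {10 * np.log10(P.max() / P.mean()):.2f} dB")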
Fig. 5. Block diagram of the complex-weighted OFDM modulator.

3.0. Simulation results:
The two existing methods and the proposed method are compared below. For weighted OFDM over wireless multipath channels, BPSK and QPSK modulations are considered and simulations are carried out for 5000 symbols weighted by the respective weighting functions. Figs. 3 and 4 sketch the power variance versus the number of messages for BPSK and QPSK modulations and for the different weightings of the OFDM signal, respectively. Figure 6 illustrates the CDF of the power variance for the different weighting functions. As summarized in Table 1, for an OFDM signal with 256 carriers and BPSK modulation, applying Gaussian weighting reduces the power variance by a factor of 3.2 dB, and for QPSK modulation by a factor of 6.1 dB. In Fig. 7 the irreducible bit error probability of the OFDM signals with the different weighting functions is illustrated versus the rms delay spread of the channel. The CDF of the PAPR of the OFDM signal with several scenarios of weighting and phasing is depicted in Fig. 8 for M = 32; phasing is applied by random phase updating with a uniform distribution and a power variance threshold Th = -4 dB.

Fig. 8. CDF plots of PAPR for different weightings with and without phasing. c1: rectangular weighting, no phasing; c2: rectangular weighting with phasing; c3: Gaussian weighting (std = M/16), no phasing; c4: Gaussian weighting (std = M/16), with phasing.

4.0. Conclusion:
In this paper we have addressed a novel method of PAPR reduction for the OFDM signal by applying both amplitude weighting and phasing of the OFDM subcarriers. This joint application gives more PAPR reduction gain than weighting or phasing alone. Employing both weighting and phasing of the subcarriers implies a more complex implementation; however, the complexity can be reduced by grouping the subcarriers when weighting or phasing is applied. Furthermore, complex weighting with a dynamic threshold was studied. Combining amplitude weighting, phasing and dynamic thresholding results in the largest PAPR reduction gain of the proposed algorithm.
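Curves of the kind shown in Fig. 8 can be reproduced in outline with the sketch below (my own illustration; the symbol count and the 8 dB read-out point are arbitrary), which estimates the empirical CDF of the PAPR over many random BPSK symbols for two of the weightings:

import numpy as np

M, n_sym = 32, 5000
m = np.arange(M)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
carriers = np.exp(2j * np.pi * np.outer(m, t))
rng = np.random.default_rng(5)

def papr_samples(alpha):
    out = np.empty(n_sym)
    for k in range(n_sym):
        b = 2.0 * rng.integers(0, 2, M) - 1.0
        P = np.abs((alpha * b) @ carriers) ** 2
        out[k] = 10 * np.log10(P.max() / P.mean())
    return np.sort(out)

for name, alpha in [("rectangular", np.ones(M)),
                    ("gaussian", np.exp(-0.5 * ((m - M / 2) / (M / 16)) ** 2))]:
    samples = papr_samples(alpha)
    # Empirical CDF value at 8 dB: fraction of symbols with PAPR <= 8 dB.
    print(f"{name:12s} P(PAPR <= 8 dB) = {np.searchsorted(samples, 8.0) / n_sym:.3f}")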
References:
[1] T. A. Wilkinson et al., "Block coding scheme for reduction of peak to mean envelope power ratio of multicarrier transmission schemes," Electronics Letters, Vol. 30, No. 25, 1994.
[2] R. F. H. Fischer et al., "Reducing the peak-to-average power ratio of multicarrier modulation by selected mapping," Electronics Letters, Vol. 32, No. 22, 1996.
[3] G. Wade et al., "Peak-to-average power reduction for OFDM schemes by selective scrambling," Electronics Letters, Vol. 32, No. 21, 1996.
[4] S. H. Muller and J. B. Huber, "OFDM with reduced peak-to-average power ratio by optimum combination of partial transmit sequences," Electronics Letters, Vol. 33, No. 5, 1997.
[5] M. Friese, "Multitone signals with low crest factor," IEEE Trans. Commun., Vol. 45, No. 10, 1997.
[6] J. Tellado, Multicarrier Modulation with Low PAR, Kluwer Publishers, MA, USA, 2000.
[7] G. L. Stuber and D. Kim, "Clipping noise mitigation for OFDM by decision aided reconstruction," IEEE Commun. Letters, Vol. 3, No. 1, 1999.
[8] R. J. van Nee and A. de Wild, "Reducing the peak-to-average power ratio of OFDM," IEEE Veh. Technol. Conf., pp. 2072-2076, 1996.
[9] R. Prasad and R. J. van Nee, OFDM for Wireless Multimedia Communications, Artech House, Boston, 1999.
[10] O. Muta et al., "Peak power suppression with parity carrier for multicarrier transmission," IEEE Veh. Technol. Conf. (VTC'99-Fall), pp. 2923-2928, 1999.
[11] H. Nikookar and R. Prasad, "Weighted multicarrier modulation for peak-to-average power reduction," IEICE Trans. Commun., Vol. E83-B, No. 8, 2000.
[12] X. Wang and T. T. Tjhung, "Reduction of peak-to-average power ratio of OFDM system using companding technique," IEEE Trans. on Broadcasting, Vol. 45, No. 3, 1999.
[13] Y. S. Chu et al., "On compensating nonlinear distortions of an OFDM system using an efficient adaptive predistorter," IEEE Trans. Commun., Vol. 2.
[14] K. S. Lidsheim, Peak-to-Average Power Reduction by Phase Shift Optimization for Multicarrier Transmission, MSc. thesis, Delft University of Technology, Delft, The Netherlands, April 2001.
[15] R. Prasad and R. D. J. van Nee, OFDM for Wireless Multimedia Communications, Boston: Artech House, 1999.
[16] H. Nikookar and R. Prasad, "Weighted OFDM for wireless multipath channels," IEICE Trans. Commun., vol. E83-B, no. 8, pp. 1864-1872, Aug. 2000.
[17] H. Nikookar and K. S. Lidsheim, "Random phase updating algorithm for OFDM transmission with low PAPR," IEEE Trans. Broadcasting, vol. 48, no. 2, pp. 123-128, Jun. 2002.
ECO FRIENDLY CONSTRUCTION METHODS AND MATERIALS

PRIYANKA.M
Kingston Engineering College, Vellore
Contact: 9894723744, priyaevergreen05@gmail.com
ABSTRACT:
A green building, also known as a sustainable building, is designed to meet objectives such as occupant health; using energy, water and other resources more efficiently; and reducing the overall impact on the environment. It is an opportunity to use resources efficiently while creating healthier buildings that improve human health, build a better environment and provide cost savings. Development projects lead to over-consumption of natural resources, which causes serious environmental problems. The green building concept deals with the optimum use of natural resources for the development of infrastructure. The low-cost eco-friendly house is a modern construction method that uses locally available material and unskilled labor and also reduces construction time. Similarly, the use of recycled plastic, recycled aggregates and municipal wastes for pavement construction has a considerable effect on the environment of the earth. Another advanced method is the construction of low-carbon buildings, which use sustainable materials such as blended cement, compacted fly ash blocks, low-energy-intensity floor and roofing systems, rammed earth walls and stabilized mud blocks. This ultimately reduces greenhouse gas emissions, which helps to mitigate the greenhouse effect. This paper presents an overview of the application of modern green infrastructure construction technology, which makes a significant impact on the conservation and proper utilization of resources such as land, water, energy, air and materials, thereby reducing the overall cost of construction as well as the adverse impacts of climate change.

KEYWORDS:
Sustainable building, eco-friendly house, low carbon building, greenhouse effect, optimum use.
INTRODUCTION:
In today's world of climate change and high energy prices, it is critical that buildings use as few fossil fuels as possible, to future-proof the home against unpredictable and rapidly rising prices. There are many definitions of what a green building is or does. Definitions range from a building that is not as bad as the average building in terms of its impact on the environment, or one that is notably better than the average building, to one that may even represent a regenerative process in which there is actually an improvement and restoration of the site and its surrounding environment. The ideal green project preserves and restores habitat that is vital for sustaining life and becomes a net producer and exporter of resources, materials, energy and water rather than a net consumer. A green building is one whose construction and lifetime of operation assure the healthiest possible environment while representing the most efficient and least disruptive use of land, water, energy and resources.
SETTING GREEN GOALS AND OBJECTIVES
Once the decision to build green has been made, one of the first steps in the green design process is to establish firm environmental goals for the project. This is often done during what is called a goal-setting or targeting session. During this session, it is important to set specific measurable goals for things like energy efficiency, water conservation, on-site treatment of rain water and storm water, material and resource management, and construction waste management, and to assign responsibility for meeting these goals to specific members of the design team. Each goal needs a champion who will see that objective through to the end.
GREEN BUILDING
The 'Green Building' concept is gaining importance in various countries, including India. These are buildings that ensure waste is minimized at every stage during the construction and operation of the building, resulting in low costs, according to experts in the technology. A green building is a structure that is environmentally responsible and resource-efficient throughout its life-cycle. Green buildings expand and complement the classical building design concerns of economy, utility, durability and comfort.
COMPACTED FLY ASH BLOCKS
A mixture of lime, fly ash and stone crusher dust can be compacted into a high-density block. Lime reacts with the fly ash minerals, forming water-insoluble bonds that impart strength. Some advantages of the technology are:
(a) decentralized production in tiny-scale industries,
(b) utilization of industrial waste products, and
(c) energy efficiency and environmental friendliness.
STABILIZED MUD BLOCKS FOR MASONRY
Stabilized mud blocks (SMB) are energy-efficient, eco-friendly alternatives to burnt clay bricks. These are solid blocks manufactured by compacting a mixture of soil, sand, stabilizer (cement/lime) and water. After 28 days of curing, the blocks are used for wall construction. The compressive strength of the block greatly depends upon the soil composition, the density of the block and the percentage of stabilizer (cement/lime).
Major advantages of SMB are:
(a) energy efficiency: no burning is required, giving 60-70% energy savings compared with burnt clay bricks,
(b) decentralized production: production on site is possible,
(c) utilization of other industrial solid wastes such as stone quarry dust, fly ash, etc., and
(d) easy adjustment of the block strength by adjusting the stabilizer content.
BLENDED CEMENT
These are cements containing a high volume of one or more complementary cementing materials (CCM), such as coal fly ash, granulated slag, silica fume and reactive rice husk ash. A large volume of CO2 is directly emitted during the cement manufacturing process (0.9 tonnes per tonne of clinker). Reducing the quantity of clinker by substituting CCM results in lower CO2 emissions; for example, replacing 30% of the clinker with CCM would cut the direct emission from 0.9 to about 0.63 tonnes of CO2 per tonne of clinker.
GREEN TECHNOLOGY FOR ROAD CONSTRUCTION
Road construction technology needs changes to minimize damage to the environment of the earth. Aggregates are heated to temperatures between 150°C and 180°C for drying, proper coating and mixing with bitumen. The mixing temperature of bituminous mixes can be lowered by using foamed bitumen, bitumen emulsion and certain chemicals that reduce the viscosity of bitumen, so that less fuel is used, with a consequent reduction of greenhouse gases. The use of recycled plastic, recycled aggregates and municipal wastes will slow down the degradation of the earth. Municipal wastes contain considerable amounts of waste materials such as plaster, brick bats and demolished concrete; they can easily be used as materials for the widening of roads as well as for new road construction. Some of the waste products from coal mining are highly variable and may sometimes ignite due to the presence of pyrites; if they are used deep in an embankment, there is little risk of combustion, because the air content is too low to allow it. All footpaths, parking yards, residential roads and low-volume roads can be made permeable, and such pavements can be designed to help the recharging of groundwater by rain water.
GREEN TECHNOLOGY FOR ECO-FRIENDLY HOUSES
Buildings are among the major consumers of energy, the third largest after industry and agriculture; buildings annually consume more than 20% of the electricity used in India. Awareness of the depletion of fossil fuels and of the global warming phenomenon has led to renewed interest in green technologies. An eco-friendly house uses naturally available resources. The house can be built in such a way that it uses natural light and ventilation: openings can be provided on the south-west side for better ventilation, and windows can be placed considering the cross-ventilation concept. Wind breakers can be provided for west-side windows to guide more air into the house. A ceiling higher than the conventional one will give relatively cool air in the living area. If the built-up area is smaller, more space is available around the building for air circulation. Solar panels can be installed to reduce the burden of electricity consumption, and solar systems can be used for cooking food as well as for water heating, reducing the consumption of electricity or LPG. Implementation of a rain water harvesting system can be beneficial in many ways; a few of them are:
i) an independent and ample supply of water in the dwelling;
ii) the water received is free of cost, and its use significantly reduces bills for water purchased from the municipal supply;
iii) the costs incurred in purifying the water for potable use are nominal;
iv) for users located in rural areas, an independent supply of water avoids the cost of installing a public water supply system;
v) rainwater harvesting lessens local soil erosion and flooding caused by the rapid runoff of water from impervious cover such as pavements and roofs.
WOOL BRICKS
The objective was to produce bricks reinforced with wool and to obtain a composite that was more sustainable and non-toxic, used abundant local materials, and mechanically improved the brick's strength. With the added wool and alginate (a natural polymer found in seaweed), the researchers' mechanical tests showed that this new brick was 37% stronger than regular unfired, stabilized earth bricks. The fibres improve the strength of compressed bricks, reduce the formation of fissures and deformities resulting from contraction, reduce drying time, and increase the brick's resistance to flexion.
SOLAR TILES - Ordinary tiles exist only to protect the building; solar tiles instead spend a large portion of the day absorbing energy from the sun.
PAPER INSULATION - Made from recycled newspaper and cardboard, mixed with chemical additives that make it insect resistant and fire retardant.
TRIPLE GLAZED WINDOWS - Super-efficient windows that stop heat from direct sun from entering the building.
ECO-FRIENDLY REINFORCEMENT - Use of bamboo sticks instead of steel bars.
SOCIAL BENEFITS:
Enhance occupant comfort and health
Heighten aesthetic qualities
Minimize strain on local infrastructure
Improve overall quality of life
ENVIRONMENTAL BENEFITS:
Enhance and protect biodiversity and ecosystems
Improve air and water quality
Reduce waste streams
Conserve and restore natural resources
ECONOMIC BENEFITS:
Reduce operating costs
Create, expand, and shape markets for green products and services
Improve occupant productivity
Optimize life-cycle economic performance
MERITS OF GREEN BUILDING:
Efficient Technologies
Easier Maintenance
Return On Investment
Improved Indoor Air Quality
Energy Efficiency
Waste Reduction
Temperature Moderation
Water Conservation
Economical Construction For Poor
Healthier Lifestyles and Recreation
Improved Health
DEMERITS OF GREEN BUILDING:
Initial Cost Is High
Availability Of Materials
Need More Time To Construct
Need Skilled Workers
CONCLUSION
Nowadays we should find ways to maximize our natural resources and give our mother earth some relief, since pollution is everywhere and we are all experiencing global warming. Non-renewable energy is expensive and unsafe, but through green building we can save a great deal of energy. Green building (also known as green construction) is the practice of creating structures and using processes that are environmentally responsible and resource-efficient throughout a building's life-cycle: from siting to design, construction, operation, maintenance, renovation and deconstruction. Its importance is that it lessens both the consumption of energy and pollution, because the more we use non-renewable energy, the higher the risk of pollution.
REFERENCES:
1. K. Ghavami, "Bamboo as reinforcement in structural concrete elements - Lightweight Concrete Beams," Pontificia Universidade Catolica, Rio de Janeiro, Brazil, 2003.
2. V. R. Desai, "Green infrastructure: overview and case study," IIT Kharagpur, 2010.
3. B. B. Pandey, "Green technology for road construction," Workshop on Green Infrastructure, 2010.
4. Sherwood, P. T. (2001), Alternate Materials in Road Construction, Thomas Telford.
5. Heerwagen, J., "Green buildings, organizational success and occupant productivity," Building Research & Information, 2000; 28(5/6): 353-367.
6. Ajdukiewicz, A. B., Kliszczewicz, A. T. (2007), "Comparative tests of beams and columns made of recycled aggregate concrete and natural aggregate concrete," Journal of Advanced Concrete Technology, 5(2): 259-273.
7. Rao, A., Jha, K. N., Misra, S. (2007), "Use of aggregates from recycled construction and demolition waste in concrete," Resources, Conservation and Recycling, 50(1): 71-81.
8. Charles J. Kibert, Sustainable Construction: Green Building Design and Delivery (book).
9. Jerry Yudelson, S. Richard Fedrizzi, The Green Building Revolution, U.S. Green Building Council.
http://www.greenconcepts.com/
A REVIEW ON ENHANCEMENT OF DEGRADED DOCUMENT IMAGES BASED ON IMAGE BINARIZATION SYSTEM

N.S.Bharath Kumar (1), Sreenivasan.B (2), A.Rajani (3)
(1) M.Tech (DSCE) Student, (2) Assistant Professor, (3) Assistant Professor
Department of ECE, Annamacharya Institute of Technology and Sciences (AITS), Tirupati, India - 517520
(1) nsnandi1990@gmail.com, (2) srinu3215@gmail.com, (3) rajanirevanth446@gmail.com

Abstract - Separating the text from low-quality and noisy document images is a challenging task, owing to the small or large variation between the foreground and background regions of different document images. The new document Image Binarization Technique addresses these issues by using Adaptive Image Contrast. The Adaptive Image Contrast is a combination of the local image contrast and the local image gradient that is tolerant to the text and background variations caused by different kinds of document degradation. The technique first constructs an adaptive contrast map for the input degraded image. The contrast map is then binarized and combined with Canny's edge map to determine the text stroke edge pixels. Edge detection through the Canny process gives accurate results instead of producing responses for non-edges, and it reduces the amount of unwanted and erroneous information through filtering. The next step is the separation of text from the document by adaptive thresholding that is determined from the intensities of the detected text stroke edge pixels. The Document Image Binarization Technique is simple, robust, and yields good results on several database sets evaluated against different techniques.

Keywords: Adaptive Contrast Image, Document Analysis, Degraded Document Binarization Process, Edge Detection, Local Thresholding.

I. INTRODUCTION
The Document Image Binarization technique is applied at the starting stage of the document analysis process. Binarization refers to the process of converting a gray-scale image into a binary image. The binarization technique segments the image pixels into two classes: white pixels as the foreground text and black pixels as the background surface of the document. A fast and accurate image binarization technique must be chosen in order to accomplish document analysis tasks such as Optical Character Recognition (OCR). As more and more degraded documents are digitized, the importance of document image binarization increases accordingly.
Thresholding of degraded document images is an unsolved problem due to the high and low variations between the document text and the background. Thresholding plays a major role in binarization and is mainly divided into two classes, namely global thresholding and local thresholding. The major issues that appear in degraded document images are variations in the character stroke width, stroke brightness, stroke connections, shadows on image objects, smear and stains. In some handwritten degraded documents the ink of one side has seeped through to the other side. A suitable and efficient technique has to be used to achieve the best results. The Image Binarization Technique became popular due to its step-wise procedure, which reduces the normalization noise and other issues that appear in previous methods and finally applies a post-processing procedure to yield good results.
The Image Binarization Technique extends the earlier local maximum and minimum method and the methods used in the latest Document Image Binarization Contests, and it is capable of handling many kinds of degraded documents with minimum parameter tuning. Historical documents do not have a clear bimodal pattern, so global thresholding is not a suitable approach for document binarization: it is simple, but cannot work properly on complex document backgrounds. Adaptive or local thresholding, which estimates a threshold for each pixel in the image, is usually a better approach for historical document binarization. Unlike the existing methods, the proposed technique does not use any windowing technique; it calculates the image contrast using the local maximum and minimum. Compared with the image gradient, the image contrast is more reliable for detecting the high-contrast image pixels around the text stroke boundaries, and the thresholding method is capable of handling complex variations in the background such as uneven illumination, uneven surfaces, noise and bleed-through. The adaptive contrast image also makes wide use of the image gradient for edge detection and for finding the text stroke edges.
The existing method for degraded document images is Niblack's approach ('An Introduction to Digital Image Processing'). The procedure involved in Niblack's approach to separate text from background is de-noising through a Wiener filter, estimation of the foreground regions, estimation of the background
surface, interpolation of the foreground pixels with the background surface, and finally post-processing. Niblack's approach calculates the local threshold using the mean and standard deviation of the image pixels within a local neighborhood window. The main disadvantage of window-based thresholding techniques is that the thresholding performance depends on the window size and the character stroke width. Niblack's approach also introduces some extra noise during the detection of pixels in the approximated foreground region.
The local image contrast and the local image gradient also play a major role in the segmentation of text from the document background. Both are very effective and have been used in many image binarization techniques.
II. OVERVIEW
2.1 DOCUMENT IMAGE BINARIZATION SYSTEM
In order to overcome the issues that appear in degraded document images, there is a need to develop an enhanced system for low-contrast and noisy document images. The document image binarization scheme involved in the Document Image Binarization Contest (DIBCO) for degraded document images is an effective scheme for the analysis of text from degraded documents. The binarization technique is a well-suited procedure for applications like Optical Character Recognition (OCR) and Contrast Image Enhancement (CIE). The recent Document Image Binarization Contest (DIBCO), held under the combined framework of the International Conference on Document Analysis and Recognition (ICDAR), and the Handwritten Document Image Binarization Contest (H-DIBCO) show the recent efforts on these issues. The adaptive document image binarization combines the local image contrast with the local image gradient to derive an adaptive local image contrast, in order to overcome the normalization problem. Thresholding of degraded document images is a serious problem due to the large variations between foreground and background, as shown in the figure below.

Figure 1: Examples of degraded document images.
III. RELATED WORK
There are several document image binarization contests (DIBCO) based on different methodologies developed by various researchers. The methods and systems reviewed here are:
3.1. Binarization of degraded document images by the local maximum and minimum filter.
3.2. Niblack's approach ('An Introduction to Digital Image Processing').

3.1. LOCAL MAXIMUM AND MINIMUM:
The Local Maximum and Minimum (LMM) method makes use of an image contrast that is calculated from the distance between the maximum and minimum intensities of the pixels within a local neighborhood window. Compared with the image gradient, the image contrast evaluated by the LMM method has the good property that it is tolerant to uneven illumination. The LMM method first constructs a contrast image and detects the high-contrast image pixels that lie around the text stroke boundaries; thresholding is then applied in order to separate the text, and finally post-processing is applied. The LMM technique yields good results when applied to the datasets used in the recent Document Image Binarization Contests (DIBCO).

3.1.1. Contrast Image Construction
The local maximum and minimum filter extends the earlier Bernsen technique, which calculates the image contrast from the maximum and minimum intensities within a local neighborhood window. In Bernsen's paper the local contrast is evaluated as

C(i, j) = I_max(i, j) - I_min(i, j),     (1)

where C(i, j) is the local image contrast. The local image contrast evaluated by the local maximum and minimum filter is

C(i, j) = (I_max(i, j) - I_min(i, j)) / (I_max(i, j) + I_min(i, j) + ε),     (2)

where ε is a small positive number that is added to avoid division by zero when the local maximum is equal to 0.

3.1.2. High Contrast Pixel Detection
As the local image contrast and gradient are determined by the difference between the maximum and minimum intensities within the local window, the pixels on both sides of a text stroke are selected as high-contrast pixels. The binary map can be improved by combining it with the edges from the Canny edge detector. The image gradient plays an important role in edge detection.
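Equations (1) and (2) translate directly into a few lines of Python using SciPy's running maximum/minimum filters (a sketch, not the contest code; the window size and ε are assumptions):

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def bernsen_contrast(img, win=3):
    # Eq. (1): plain local contrast, I_max - I_min over the window.
    img = img.astype(float)
    return maximum_filter(img, win) - minimum_filter(img, win)

def lmm_contrast(img, win=3, eps=1e-6):
    # Eq. (2): contrast normalized by local brightness; eps guards against
    # division by zero when the local maximum equals 0.
    img = img.astype(float)
    i_max, i_min = maximum_filter(img, win), minimum_filter(img, win)
    return (i_max - i_min) / (i_max + i_min + eps)

# Toy 'document': dark stroke on an unevenly lit background.
rng = np.random.default_rng(6)
page = np.clip(rng.normal(200.0, 5.0, (64, 64)), 0, 255)
page[20:24, 8:56] = 40.0
print(bernsen_contrast(page).max(), lmm_contrast(page).max())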
The image gradient indicates the sharp discontinuities in the gray-scale image and the directional changes in the intensities of the image pixels. The figures below compare global thresholding of the image gradient and of the image contrast.
Figure 2: (a) Global thresholding of the image gradient. (b) Global thresholding of the image contrast.

3.1.3. Threshold estimation:
The threshold determined from the detected high-contrast image pixels is based on two factors. First, the text pixels should be close to the high-contrast image pixels. Second, the intensity of the text pixels should be smaller than the average intensity of the detected high-contrast image pixels within the local window. The document text can be extracted from the detected text stroke edge pixels as

R(x, y) = 1  if  N_e ≥ N_min  and  I(x, y) ≤ E_mean + E_std / 2,  and 0 otherwise,     (3)

where N_e is the number of detected high-contrast (edge) pixels within the local window, N_min is a minimum required number of such pixels, and E_mean and E_std are the mean and standard deviation of the intensities of the detected pixels within the window,

E_mean = Σ I(x, y) (1 - E(x, y)) / N_e,     (4)

E_std = sqrt( Σ ( (I(x, y) - E_mean) (1 - E(x, y)) )² / 2 ),     (5)

where E(x, y) denotes the binary edge map (E = 0 at detected edge pixels) and the sums run over the local window.

3.2. Niblack's Approach:
3.2.1. Description:
Niblack's approach overcomes the ambiguities present in degraded document images through a windowing technique. In Niblack's method the threshold value is calculated from the mid-range values of the neighboring window, and the scheme consists of a step-wise procedure: it first computes a rough estimate of the foreground regions; the foreground pixels are a subset of Niblack's result, which also introduces some noise. The algorithm calculates a threshold pixel by pixel by shifting a rectangular window across the image. The threshold T for the centre pixel of the window is calculated using the mean M and standard deviation σ of the grey values in the window,

T = M + k σ,     (6)

with the constant k = -0.2. The value of k determines how much of the object boundary is taken as part of the given object. The methodology of Niblack's approach is summarized below.

3.2.2. Pre-processing:
Pre-processing is essential in order to remove the noise present in the gray-scale source image and to smooth the background texture. A Wiener filter is used to remove the noise and to increase the contrast between text and background; its main purpose here is image restoration. The source image I_s is transformed into the restored grayscale image according to

I(x, y) = μ + (σ² - ν²)(I_s(x, y) - μ) / σ²,     (7)

where μ is the local mean, σ² the variance within an N×M neighborhood window, and ν² the noise variance. A 3×3 Wiener filter is used for documents with one to two pixel wide characters. The figure below shows the result of applying the Wiener filter to a document image.

Figure 3: (a) Original image. (b) 3×3 Wiener filter output.
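The per-pixel rule (6), with the sliding-window mean and standard deviation obtained from uniform filters, can be sketched as follows (an illustration with an assumed window size; for the 3×3 Wiener pre-filter of Section 3.2.2, scipy.signal.wiener(img, 3) provides a comparable restoration step):

import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, win=25, k=-0.2):
    # Eq. (6): per-pixel threshold T = M + k*sigma over the local window.
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    sigma = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return mean + k * sigma

def binarize(img, win=25, k=-0.2):
    # Foreground (text) = 1 where the pixel is darker than its local threshold.
    return (img < niblack_threshold(img, win, k)).astype(np.uint8)

# Example: an unevenly lit synthetic page with one dark stroke.
page = np.tile(np.linspace(120.0, 230.0, 64), (64, 1))
page[30:33, 10:50] = 20.0
mask = binarize(page, win=15)
print(mask.sum(), "foreground pixels detected")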
3.2.3. Estimation of the foreground regions:
Niblack's method is applied for adaptive thresholding, although it introduces some noise. At this stage the image I(x, y) is processed to extract the binary image N(x, y), which contains ones for the estimated foreground region. Niblack's method makes use of a Laplacian of Gaussian (LoG) filter to reduce the errors, but this has the disadvantage of responding to some of the existing edges.

Figure 4: (a) Original image. (b) Estimation of the foreground region.

3.2.4. Background surface estimation:
The pixels of the source image I(x, y) belong to the background surface B(x, y) only if the corresponding pixels of the roughly estimated foreground-region image N(x, y) have zero values:

B(x, y) = I(x, y)  if  N(x, y) = 0,  and an interpolation of the neighboring background pixel values  if  N(x, y) = 1.     (8)

The average distance D_average between the foreground and the background can then be calculated as

D_average = Σ (B(x, y) - I(x, y)) N(x, y) / Σ N(x, y).     (9)

3.2.5. Threshold estimation:
The threshold value is calculated by combining the background surface B with the original image I. Text is present where the distance of the original image from the background exceeds a threshold d, which changes according to the gray value of the background surface:

T(x, y) = 1  if  B(x, y) - I(x, y) > d(B(x, y)),  and 0 otherwise.     (10)

The histogram of a document image shows two peaks, one for the text and one for the background region, as in the figure below.

Figure 5: (a) Original image. (b) Histogram.

IV. IMAGE BINARIZATION SYSTEM:
The image binarization system has several advantages over the previous systems. It reduces the normalization problem that appears in the local maximum and minimum method by combining the local image contrast with the local image gradient, which gives rise to the adaptive local image contrast

C_a(i, j) = α C(i, j) + (1 - α)(I_max(i, j) - I_min(i, j)),     (11)

where (I_max(i, j) - I_min(i, j)) is the local image gradient normalized to [0, 1], and the local window size is empirically set to 3. α is the weight between the local contrast and the local gradient, set from the statistical information of the given document; it is obtained through a power function, which has the nice property of increasing smoothly from 0 to 1 with a curve shape easily controlled by different values of its exponent γ.
The first step of the binarization system is high-intensity pixel detection by constructing the adaptive contrast map. The next step is text stroke edge pixel detection: edges characterize boundaries, and edge detection reduces the amount of data and filters out useless information. The Canny edge detection algorithm (implemented here in MATLAB 7.0) performs well in all scenarios, and it is combined with Otsu's algorithm to extract the text stroke edge pixels properly. The Canny detector has the good localization property of marking edges close to the real edges. It first filters the image to eliminate noise and then finds the image gradient; hysteresis is then used to track along the pixels that have not been suppressed. Hysteresis uses two threshold values: if the gradient magnitude is below the low threshold, the pixel is set to 0; if it is above the high threshold, the pixel is marked as an edge; if it lies between the two thresholds, it is set to 0 unless there is a path to another pixel above the high threshold. The binarization performance is evaluated using the F-measure, the Peak Signal-to-Noise Ratio (PSNR) and the Distance Reciprocal Distortion (DRD).
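The adaptive local image contrast (11) can be sketched as below (illustrative only: α is left as a fixed parameter here, whereas in the paper it is derived from the document statistics through a power function with exponent γ):

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def adaptive_contrast(img, alpha=0.5, win=3, eps=1e-6):
    # Eq. (11): C_a = alpha * C + (1 - alpha) * normalized local gradient.
    img = img.astype(float)
    i_max, i_min = maximum_filter(img, win), minimum_filter(img, win)
    c = (i_max - i_min) / (i_max + i_min + eps)    # local contrast, eq. (2)
    grad = i_max - i_min
    grad = grad / (grad.max() + eps)               # gradient term normalized to [0, 1]
    return alpha * c + (1.0 - alpha) * grad

# The contrast map would next be binarized (e.g. with Otsu's threshold) and
# combined with a Canny edge map to locate the text stroke edge pixels.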
V. SIMULATION RESULTS

Figure 6: F-measure performance graph.
VI. CONCLUSION
The binarization system involves several parameters that can be easily estimated from the statistics of the input document. Compared with the previous LMM method, the binarization system calculates the threshold pixel by pixel instead of taking the threshold as the value of the highest-intensity pixel. Compared with Niblack's approach, which calculates threshold values from the mid-range value within the window, in the binarization system the threshold depends on the adjacent pixel distance, which yields better results. The earlier Niblack approach makes use of the LoG operator, which has the disadvantage of malfunctioning at corners and curves; the image binarization system makes use of the Canny operator, which has the advantages of a low error rate and an improved SNR. The proposed enhanced system is stable and easy to use with degraded document images; it avoids the normalization problem, incorporates contrast image enhancement, and yields good results in several document analysis tasks.

VII. REFERENCES
[1] B. Gatos, K. Ntirogiannis, and I. Pratikakis, "ICDAR 2009 document image binarization contest (DIBCO 2009)," in Proc. Int. Conf. Document Anal. Recognit., Jul. 2009, pp. 1375-1382.
[2] I. Pratikakis, B. Gatos, and K. Ntirogiannis, "ICDAR 2011 document image binarization contest (DIBCO 2011)," in Proc. Int. Conf. Document Anal. Recognit., Sep. 2011, pp. 1506-1510.
[3] I. Pratikakis, B. Gatos, and K. Ntirogiannis, "H-DIBCO 2010 handwritten document image binarization competition," in Proc. Int. Conf. Frontiers Handwrit. Recognit., Nov. 2010, pp. 727-732.
[4] S. Lu, B. Su, and C. L. Tan, "Document image binarization using background estimation and stroke edges."
[5] B. Su, S. Lu, and C. L. Tan, "Binarization of historical handwritten document images using filter method," in Proc. Int. Workshop Document Anal. Syst., Jun. 2010, pp. 159-166.
[6] G. Leedham, C. Yan, K. Takru, J. Hadi, N. Tan, and L. Mian, "Comparison of some thresholding algorithms for text/background segmentation in difficult document images," in Proc. Int. Conf. Document Anal. Recognit., vol. 13, 2003, pp. 859-864.
[7] M. Sezgin and B. Sankur, "Survey over image thresholding techniques and quantitative performance analysis," J. Electron. Imag., vol. 13, no. 1, pp. 146-165, Jan. 2004.
[8] O. D. Trier and A. K. Jain, "Goal-directed evaluation of binarization methods," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 12, pp. 1191-1201, Dec. 1995.
[9] O. D. Trier and T. Taxt, "Evaluation of binarization methods for document images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 3, pp. 312-315, Mar. 1995.
[10] A. Brink, "Thresholding of digital images using two-dimensional entropies," Pattern Recognit., vol. 25, no. 8, pp. 803-808, 1992.
SEARL EFFECT
(By John Roy Robert Searl)

B Chenna Krishna Reddy
3rd year Mechanical Engg., SRM University, Chennai
Mobile no: 9962937400

Abstract
Searl Effect Generator: The Searl Effect Generator is in essence a converter of electrons into usable electricity. Its working and construction are described below.
Introduction:
The solution to the world's energy crisis might be found in a magnetic device that utilizes both ambient temperature and magnets to generate power. In theory, the "Searl Effect Generator" (SEG) has the potential to run completely on its own through a complex interaction between various materials, resulting in an unlimited supply of free and clean energy.
Discovered by John Roy Robert Searl in 1946, the "Searl Effect", as it is called, effectively captures the kinetic energy generated by natural changes in ambient temperature and uses it to create electricity. The technology works on the basis of a precise design that utilizes the unique characteristics of various materials to create continual motion. These materials include magnets, neodymium, Teflon and copper. When joined together in just the right way, these materials are capable of perpetuating a cycle of electron movement that is virtually infinite, which gives the device the potential to create more power than it uses to power itself. In other words, if successful, the SEG could one day replace our dependence on fossil fuels as a primary and superior energy source.
So how does the device work exactly?
The SEG is composed of a series of circular cylinders, all of which have small magnetic rotors spaced evenly around them. One of the neat things about Searl technology is that it is scalable, meaning it can be designed in varying sizes for different applications. According to the company, SEG sizes range from a machine one meter in diameter capable of powering an average house, to a machine 12 meters in diameter that could power an entire city; the team says machines even larger than 12 meters are possible as well.
There has been great interest in examining non-linear effects in systems of rotating magnetic fields. Such effects have been observed in the device called the Searl generator or Searl Effect Generator (SEG). A SEG consists of a series of three concentric rings and rollers that circulate around the rings. All parts of a SEG are
based on the Law of the Squares. The rollers revolve around the concentric rings, but they do not touch them. There is a primary north pole and a primary south pole on the rollers, and a primary north pole and a primary south pole on the concentric rings; the north pole of a roller is attracted to the south pole of the concentric rings and vice versa.
The rollers have a layered structure similar to that of the concentric rings: the external layer is titanium, then iron, then Teflon, and the innermost layer is made from neodymium. John R. R. Searl supposed that electrons are given off from the central element (neodymium) and travel out through the other elements. Dr. Searl contends that if nylon had not been used, the SEG would act like a laser: one pulse would go out, it would stop, build up, and another pulse would go out. The nylon acts as a control gate that yields an even flow of electrons throughout the SEG.
It was shown that, in the process of magnetizing the plate and rollers, a combination of constant and variable magnetic fields was used to create a special wave (sine wave) pattern on the plate and roller surfaces. The basic effects consist of the rollers self-running around the ring plate with a concurrent reduction of weight and an increasing propulsion. These effects come about because of the special geometry of the experimental setup. It was also shown that operation of the device in the critical regime is accompanied by biological and real physical phenomena.
2. Description of the Experimental Installation
The basic difficulty lies in choosing the materials and maintaining the necessary pattern imprinted on the plate and roller surfaces. To simplify the technology we decided to use a one-ring design with a one-ring plate (stator) and one set of rollers (rotor). It was obviously necessary to strengthen the roller rotor near the bearings and to balance the rollers well. In the suggested design, air bearings were used, which provided minimum friction losses.
From the available description it was not clear how to build and magnetize a stator with a one-meter diameter. In order to make the stator, separately magnetized segments of rare earth magnets with a residual induction of 1 T were used. The segments were magnetized in the usual way, by discharging a capacitor battery through a coil. Afterwards, the segments were assembled and glued together in a special iron armature, which reduced the magnetic energy. To manufacture the stator, 110 kg of neodymium magnets were used, and 115 kg of neodymium were used to manufacture the rotor. High-frequency field magnetization was not applied. It was decided to replace the imprinting technology described in the original work with cross-magnetic inserts having a flux vector directed at 90 degrees to the primary magnetization vector of the stator and rollers.
For the cross inserts, modified rare earth magnets with a residual magnetization of 1.2 T and a coercive force a little greater than in the base material were used. Figs. 1 and 2 show the joint arrangement of the stator 1 and the rotor made up of rollers 2, and the manner of their mutual gearing (sprocket effect) by means of the cross magnetic inserts 19. Between the stator and roller surfaces an air gap of 1 mm is maintained.
No layered structure was used except a continuous copper foil of 0.8 mm thickness, which wrapped the stator and rollers. This foil has direct electrical contact with the magnets of the stator and rollers. The distance between inserts on the rollers is equal to the distance between inserts on the stator; in other words, t1 = t2 in Fig. 2.
Fig. 1. Variant of the one-ring converter. / Fig. 2. Sprocket effect of the magnetic stator and roller inserts.
The ratio of the parameters of the stator 1 and the rotor 2 in Fig. 2 is chosen so that the ratio of the stator diameter D to the roller diameter d is an integer equal to or greater than 12. Choosing such a ratio allowed us to achieve a "magnetic spin wave resonant mode" between the elements of the working body of the device, since the circumferences also maintain the same integer ratio.
The elements of the magnetic system were assembled into a uniform design on an aluminum platform. Fig. 3 shows the general view of the platform with the one-ring converter. The platform was supplied with springs and shock absorbers and a limited ability to move vertically on three supports. The system has a maximum displacement of about 10 mm, measured by an induction displacement meter; thus the instantaneous change of the platform weight was recorded during the experiment in real time. The gross weight of the platform with the magnetic system in the initial condition was 350 kg.
The stator was mounted motionlessly, and the rollers were assembled on a mobile common separator, regarded as the rotor, connected with the main shaft of the device. The rotary moment was transferred through this shaft. The base of the shaft was connected through a friction clutch to a starting motor, which accelerated the converter up to the mode of self-sustained rotation. An electrodynamic generator was connected to the main shaft as the main load of the converter. Adjacent to the rotor, electromagnetic inductors with open cores were located.
Fig.3. General view of the one-ring converter and platform.


The magnetic rollers crossed the open cores of the inductors and closed the magnetic flux circuit through the electromagnetic inductors, inducing an electromotive force (emf) in them, which acted directly on an active load (a set of inductive coils and incandescent lamps with a total power load of 1 kW). The electromagnetic inductor coils were equipped with an electrical drive on supports; the driven coils were used for smooth stabilization of the rotor's rpm, but the speed of the rotor could also be adjusted by changing the main loading.
To study the influence of high voltage on the characteristics of the converter, a system for radial electrical polarization was mounted. On the periphery of the rotor ring, electrodes were set between the electromagnetic inductors, with an air gap of 10 mm to the rollers. The electrodes were connected to a high-voltage source: the positive potential was connected to the stator, and the negative to the polarization electrodes. The polarizing voltage was adjustable in a range of 0-20 kV; in the experiments, a constant value of 20 kV was used.
For emergency braking, a friction disk from an ordinary car braking system was mounted on the basic shaft of the rotor. The electrodynamic generator was connected to an ordinary passive resistive load through a set of switches ensuring step connection of the load from 1 kW to 10 kW through a set of ten ordinary electric water heaters.
The converter undergoing testing had in its inner core an oil-friction generator of thermal energy (item 15), intended for tapping surplus power (more than 10 kW) into the heat-exchange circuit. But since the real output power of the converter in the experiment did not exceed 7 kW, the oil-friction thermal generator was not used. The electromagnetic inductors were connected to an additional load, a set of incandescent lamps with a total power of 1 kW, which facilitated complete stabilization of the rotor revolutions.


3. Experimental results
The magnetic-gravity converter was built in a laboratory room on three concrete supports at ground level. The ceiling height of the lab room was 3 meters; the common working area of the laboratory was about 100 square meters. Besides the ferroconcrete ceiling, in the immediate proximity of the magnetic system there were a generator and an electric motor, which contained some tens of kilograms of iron and could potentially deform the field's pattern.
The device was initially started by the electric motor, which accelerated the rotation of the rotor. The revolutions were smoothly increased up to the moment the ammeter included in the circuit of the electric motor started to show zero or a negative value of consumed current. The negative value indicated the presence of back current, which was detected at approximately 550 rpm. The displacement meter (item 14) started to detect a change in the weight of the whole installation at 200 rpm. Afterwards, the electric motor was completely disconnected by the electromagnetic clutch, and the ordinary electrodynamic generator was connected to the switchable resistive load. The rotor of the converter continued to self-accelerate, approaching the critical mode of 550 rpm where the weight of the device changes quickly.

Fig.4. -ΔG and +ΔG changes in weight of the platform vs. rpm.


In addition to the dependence on the speed of rotation, the weight differential depends on the power delivered to the load and on the applied polarizing voltage. As seen in Fig.4, at the maximum output power of 6-7 kW, the change of weight ΔG of the whole platform (total weight about 350 kg) reaches 35% of its weight in the initial condition. Applying a load of more than 7 kW results in a gradual decrease in rotation speed and an exit from the mode of self-generation, with the rotor subsequently coming to a complete stop.


The net weight Gn of the platform can be controlled by applying high voltage to polarization ring electrodes located at a distance of 10 mm from the external surfaces of the rollers. Under a high voltage of 20 kV (electrodes having negative polarity), an increase of the tapped power of the basic generator to more than 6 kW does not influence ΔG if the rotation speed is kept above 400 rpm. A "tightening" of this effect is observed, as well as a hysteresis effect on ΔG (a kind of "residual induction"). The experimental diagrams given in Fig.4 illustrate the +ΔG and -ΔG modes of the converter operation vs. rotor rpm.

4. Discussion
All the results we obtained are extremely unusual and require some theoretical explanation. Unfortunately, an interpretation of the results within the framework of conventional physical theory cannot explain all the observed phenomena, above all the change of weight. It is possible to interpret the change of weight either as a local change of gravitational force or as an antigravity force. A direct experiment confirming the presence of a thrust force was not performed, but in any case both interpretations of the weight change do not correspond to the modern physics paradigm. A reconsideration of the standard theory of gravitation is possible if we take space-time curvature into consideration. For example, the Kerr metric usually represents the field exterior to an axially symmetric rotating body and distinguishes between positive and negative spin directions as well as forward and backward time directions. An examination of the physical vacuum as a source of these phenomena may also lend itself to better interpretation, since the Maxwell stress-energy tensor in the vicinity of the converter undergoes a complex evolution.
From the position of modern physics, the electrification and luminescence of the converter's magnetic system in the near zone are not completely clear. The phenomenon of the magnetic and thermal "walls" may be connected with Alfvén magnetosonic waves raised in the near zone in magnetized plasma, induced by the variable magnetic field of the rotating rotor. An energy exchange between ambient air molecules and the converter may be occurring. At the present time we cannot give an exact description of the mechanism of the interactions and the transformation of energy, but without a relativistic approach we are completely unable to give a physically substantive theory of these phenomena.
In conclusion, we emphasize that issues of biological influence effects, and especially of the variations of real time-stream effects which must be taking place in the operative zone of the converter, were not considered at all. These issues are extremely important and absolutely unexplored, though there are some mentions by J.R.R. Searl of a healing action of the SEG's radiation. Our own experience allows us to make only the cautious assumption that a short-term stay (a dozen minutes) in the working zone of the converter, with a fixed output power of 6 kW, remains without observed consequences for those exposed. The present paper is only a beginning.

References
1. Schneider, Koeppl & Ehlers, "Beginning with John R.R. Searl", Raum und Zeit, #39, 1989, pp. 75-80.
2. Sandberg, S. Gunnar von, Raum und Zeit, #40, 1989.
3. Schneider & Watt, "The Searl effect and its working", Raum und Zeit, #42, 1989, pp. 75-81; #43, pp. 73-77.


Bus tracker system with seat availability checker

V.Sundara moorthy
smsubasantosh@gmail.com
3rd Year EEE Department
Panimalar Engineering College, Chennai

G.Surendiran
Surendiran888@gmail.com
3rd Year EEE Department
Panimalar Engineering College, Chennai
Abstract
In this paper we discuss solving common people's problems using innovative ideas, with the help of circuits and an Android app (application). Here we consider some problems people face in bus transport. The Android app tracks current bus locations using the GPS in the ticket printer (a new single GPS chip would cost under $5 = Rs.300). We have also created an algorithm to find the number of seats available. We use a ticket printer with a SIM card to send this information, so the app gives the number of seats available, obtained with the help of the ticket printer, and the bus location on the mobile phone. Another arrangement is made to give this information at bus stops using display boards. One more feature is that the app gives your current location during travel using the mobile signal you receive, at zero cost. So you need not worry, during night travel or while going to a new place, about whether your stop has arrived. This will solve problems like waiting at the bus stop and wasting time or going in a crowded bus, and gives you a restful, peaceful journey.

1. Statistics
In the city of Chennai alone, 47 lakh people use the services of the MTC on a daily basis. A total of 21,989 buses are currently being operated in the State, and around 1.82 crore commuters use the service daily. So it is a must to make their travel easy.

2. Android and Our App
Android was initially developed by Android, Inc., which Google backed financially and later bought in 2005; Android was unveiled in 2007. It is now developing very fast and has an app for every human need and problem, and in 2014 a million more Android phones are yet to come. So Android will have the world in its hand and take control over everything, and we therefore chose to solve problems using an Android app. Here we are going to discuss some problems of common people for which Android apps are not yet developed.

3. Problems
Nowadays people are busy every day moving to offices, schools, or other places. The main thing they need is transport, obviously the bus. Many people wait for it and waste their precious time, since bus timings are generally unpredictable in our country. Many times they have to go in a very crowded bus, sometimes hanging from it, which may lead to accidents. Another problem is sleepless night travel, when they need to get down from the bus at the right place. The next problem comes when one goes to a new place: they need to be aware of whether their stop has arrived. Let us see some solutions to these problems through Android apps.


4. Existing App
The already existing apps only have bus routes and timings. But those are not exact and do not help people much. E.g., MTC Bus Route.

5. Solution
We have some innovative ideas for an Android app. We have designed an app named Smart Bus Tracker (SBT), and this app has three features. The first helps in tracking buses with the help of GPS, finding the exact bus location. This will very much help the people who usually waste time waiting for buses. Second, it has a facility to show the number of tickets available, with the help of the ticket printer of the conductor. This prevents us from going in a crowded bus. Third, a tower detector detects and gives an alarm when the tower of a particular area is reached. This helps us to find our location (without GPS) when we are new to a place or if we need a sleep-filled ride at night. The mock-up is shown below in figure 1.

Figure 1. Smart Bus Tracker app.

6. Smart Bus Tracker
From the name itself, we can understand that we are going to track the location of buses. So we need GPS to be fitted in every bus. Texas Instruments announced at CTIA that a new single GPS chip would cost under $5 = Rs.300, and very small GPS devices are now produced.

7. Seat availability checker
Here we join the GPS device with the ticket printer to track the location. We have another feature in this app to check the availability of seats inside the buses. Nowadays bus conductors use a ticket printer to give out the tickets. When a person takes a ticket, the starting place and destination are recorded in the ticket printer; similarly, the details of every person in the bus are recorded in it. By doing simple mathematical calculations with a simple algorithm (as shown in figure 2), the number of passengers in the bus is calculated.


8. Simulation for seat availability checker
First we get the bus location using the GPS in the ticket printer. Then each person's traveling region is calculated from the starting place and destination given in the ticket printer, and the bus position is compared with the traveling region of every person. If the bus position is inside the region, the passenger count is incremented by 1; this process repeats for every person's traveling region in the bus. If the bus position is not in the traveling region, the information about the person is deleted, which means the person has got down. This cycle repeats frequently with the change in bus position, and this is how the number of persons in the bus is calculated. The simulation for this is shown below: the number of points of intersection between the bus position and the regions gives the number of persons in the bus. A simulation for tracking the bus using GPS is not needed, since everyone knows about that.

Figure 2. Flow chart to calculate the number of passengers.

Figure 3. Simulation for seat availability checker.

So we can determine the number of seats filled and vacant in the buses by using a ticket printer with SIM card facility (as in figure 4). We synchronize the signals with a server, and our app can then get the details about the bus location.

Figure 4. Ticket printer with SIM card slot.
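As an illustration of the counting logic just described, the following minimal Java sketch implements the same cycle under stated assumptions: each ticket's traveling region is reduced to a pair of stop indices (boarding and alighting), and the bus position arrives as a stop index. All class and member names here are our own hypothetical choices; the paper itself specifies only the flow chart of figure 2.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical representation of one ticket's traveling region:
// the passenger is on board between stop indices 'from' and 'to'.
class Ticket {
    final int from;
    final int to;
    Ticket(int from, int to) { this.from = from; this.to = to; }
}

public class SeatCounter {
    private final List<Ticket> tickets = new ArrayList<>();

    // Called whenever the ticket printer issues a ticket.
    public void addTicket(int from, int to) {
        tickets.add(new Ticket(from, to));
    }

    // Called whenever a GPS fix is mapped to a stop index. Counts the
    // tickets whose region contains the bus position, and deletes the
    // tickets whose region has been passed (the passenger got down).
    public int passengersOnBoard(int busStopIndex) {
        int count = 0;
        Iterator<Ticket> it = tickets.iterator();
        while (it.hasNext()) {
            Ticket t = it.next();
            if (busStopIndex >= t.from && busStopIndex < t.to) {
                count++;              // bus is inside this traveling region
            } else if (busStopIndex >= t.to) {
                it.remove();          // region passed: delete the record
            }
        }
        return count;
    }

    public static void main(String[] args) {
        SeatCounter counter = new SeatCounter();
        counter.addTicket(0, 5);
        counter.addTicket(2, 8);
        System.out.println(counter.passengersOnBoard(3)); // prints 2
        System.out.println(counter.passengersOnBoard(6)); // prints 1
    }
}

The number of vacant seats then follows by subtracting the returned count from the bus capacity.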


9. Overall Process
In the diagram we give the overall process (figure 5). The bus receives the GPS signal from the satellite; it then gives that signal to the mobile phone tower, along with the number of passengers, via the SIM card in the ticket printer. From there we are able to get those details.

Figure 5. Overall process.

By the above process we can get the details on the mobile, as shown in figure 6.

Figure 6. Smart Bus Tracker showing bus locations.

So, using this app, people can plan ahead by knowing the number of seats available in the bus and avoid crowded travel; they can easily see the number of seats filled and vacant.

10. Giving information to depot
We can also send this information to the bus depot, by which they can easily get information about the buses. It may help them to quickly find out if a bus has met with an accident or has stopped due to a problem like a breakdown or puncture. Figure 7 shows the flow of information to the bus depot.

Figure 7. Transferring information to the bus depot.


11. Display in bus stand
We provide a display at the bus stand for people to get information about the bus arrival time.

12. Tower Detector
The next feature of our app is to detect our current location using only the mobile tower signals. The mobile receives signals all the time, and given a simple database containing the locations where the towers are placed in a state, we can get our location on the mobile. We know that mobile signals are wide-ranged, so we can detect the current location. The mock-up of the tower detector is shown in figure 8.

Figure 8. Tower Detector.
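A minimal Java sketch of the tower-detector idea follows, assuming the "simple database" of tower locations is a lookup table from cell-tower IDs to place names. The IDs and names below are invented for illustration, and on a real device the current cell ID would come from the phone's telephony interface rather than a method parameter.

import java.util.HashMap;
import java.util.Map;

public class TowerDetector {
    private final Map<Integer, String> towerToPlace = new HashMap<>();
    private final String destination;

    public TowerDetector(String destination) {
        this.destination = destination;
        // In a real app this table would ship as a database of the
        // towers in a state, as described in the text.
        towerToPlace.put(40121, "Guindy");
        towerToPlace.put(40257, "Tambaram");
        towerToPlace.put(40388, "Chengalpattu");
    }

    // Called with the ID of the cell the phone is currently camped on.
    // Reports the area and raises the alarm at the destination tower.
    public void onCellChanged(int cellId) {
        String place = towerToPlace.get(cellId);
        if (place == null) return;          // unknown tower: no update
        System.out.println("Current area: " + place);
        if (place.equals(destination)) {
            System.out.println("ALARM: approaching " + destination);
        }
    }

    public static void main(String[] args) {
        TowerDetector d = new TowerDetector("Tambaram");
        d.onCellChanged(40121);  // Current area: Guindy
        d.onCellChanged(40257);  // Current area: Tambaram + alarm
    }
}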

13. Future
When our app comes out, there will be many changes. No one will be waiting for buses at the bus stops, and we need not go in a crowded bus. Everyone will have a comfortable journey without worrying about anything; mainly, we need not keep checking whether our stop has arrived.

14. Conclusion
This app will give the exact location of buses at various positions, tell the number of seats available, and give your current location by tower signal. This app will modernize the transport system. Private buses, or private companies having their own transport, may implement this easily. Around 1.82 crore commuters use the service on a daily basis, so it is a must to make their travel easy. This will be cost-effective and definitely feasible.

15. References
[1] "Android Code Analysis". Retrieved June 6, 2012.
[2] Claburn, Thomas (March 4, 2009).
[3] "Court Asked To Disallow Warrantless GPS Tracking". Information Week. Retrieved 2009-03-18.
[4] "Traccar Client - free open source Android tracker". Retrieved 2012-08-15.
[5] "Widgets | Android Developers". Developer.android.com. Retrieved 2012-09-15.
[6] Saylor, Michael (2012). The Mobile Wave: How Mobile Intelligence Will Change Everything. Perseus Books/Vanguard Press. p. 304. ISBN 978-1593157203.
[7] "Ticket-in, Ticket-out Technology". Retrieved January 22, 2014.



TECHNO-HOSPITAL
(Android App Development)

Sanjay. T

Preethika. R

B TECH IT (4th year)


PSG College of Technology
Coimbatore, India
thillaivasan.sanjay@gmail.com

B TECH IT (4th year)


PSG College of Technology
Coimbatore, India
preethika.psg@gmail.com

Abstract: This electronic document is about generating an Android application to solve a problem of people in India: to reduce the effort of maintaining all their medical prescriptions and surgery reports. Instead, they can keep all of these in a cloud, accessible from anywhere at any time. Hence, people can make sure they do not miss any medical record, so that they can get proper treatment.

Index Terms: Feasibility, System Requirements, Issue Being Addressed, Solution, Potential Challenges and its Suggestions.

I. INTRODUCTION
Our idea belongs to the category "Internet of Things to deliver public services for the future". Today's world is highly connected through the internet, and this can help people widely. The Internet of Things describes a future where everyday physical objects will be connected to the Internet and be able to identify themselves to other devices. An efficient implementation of the Internet of Things does lead to higher citizen satisfaction.

II. PROPOSED SYSTEM
The Android application created for the hospital is user friendly. It allows faster access to data and needs less memory storage capacity. The application does not need any guidance for a naive user to use it.

III. FEASIBILITY
The main purpose of a feasibility study is to consider each and every possible factor associated with the project and determine whether the investment of time and other resources will yield the desired results. It also includes determining the investments, manpower and costs incurred on this project. The following feasibility studies were used in this project:
Economic feasibility
Technical feasibility
Social feasibility

Economic Feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. The project is economically feasible as it only requires a mobile phone with the Android operating system. The application is free to download once released into the Android market. The users should be able to connect to the internet through the mobile phone, and this would be the only cost incurred on the project.

Technical Feasibility
To develop this application, a high speed internet connection, a cloud database server, a web server and an IDE (such as Eclipse) are required.

Social Feasibility
The application is socially feasible since it requires no technical guidance; all the modules are user friendly and execute in the manner they were designed to.

IV. SYSTEM REQUIREMENTS
A. HARDWARE REQUIREMENTS
Processor: Pentium 4
RAM: 256 MB
Space on disk: minimum 250 MB

B. SOFTWARE REQUIREMENTS
Operating system: Windows XP / Mac OS 10.5.8 or later / Linux
Platform: Android SDK Framework
IDE: Eclipse (Juno)
Android Emulator: SDK version 2.2 or higher
Database: MySQL
Technologies: Java, XML
Front end: Eclipse
Back end: SQLite, Cloud SQL server
Data analytics for handling huge data.

V. THE ISSUE THIS IDEA WILL ADDRESS
In our day-to-day life, most of us do not care to maintain our old medical prescriptions. This in turn makes it difficult for a doctor to provide better treatment based on previous reports. Also, in critical situations like accidents, one cannot always carry all of one's previous medical prescriptions. As a result, proper treatment would be a question mark, as the doctors could not determine which medicines are


allergic to the patient or what medical treatment he presently undergoes. Many deaths have occurred because the lifesaving treatment (medicines) given to the victims was allergic or in opposite reaction to the medicine they were taking.
Another major problem is that, in emergency situations like accidents where heavy blood loss occurs, there is an urgent need of rare blood, and if the blood bank unfortunately fails to provide it, the hospital finds it difficult to get help from the people by immediately conveying the urgency to save a life. As a result, it leads to the unfortunate death of the person.
Nowadays we have the token system in hospitals, where people contact the hospital through telephone and get a token number and time slot to obtain treatment on a specified day. But on the specified day many hospitals fail to keep to the time slot provided to the patients, so the patients suffer a lot by waiting a whole day, thus wasting their precious time.

VI. SOLUTION
1) Each person will be provided a login, with a database that maintains the person's entire medical details (prescriptions, scan reports and surgeries undergone), with dates and the names of the hospitals and doctors who served him.
2) Consider an application on your mobile phone that notifies you immediately when an urgent blood need occurs. It also specifies what blood group is needed, and the name of the hospital with its address and phone number, so that the need of blood reaches the people immediately, and those having the same blood group and willing to provide blood can immediately rush to the hospital and save a life. Sometimes it may take a few hours or days for the work to get done; after the work is completed, the user must be notified through a simple message, so that the user need not waste his time by visiting often and can do other things.
3) Consider an app which allows you to select the place and hospital you want, and immediately the app tells you the token number of the patient who is currently undergoing treatment, so that the patient can estimate the time at which he should be at the hospital for getting treatment. Thus the waste of time can be reduced.
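To make the data handling concrete, here is a minimal Java sketch of the two core lookups implied by the solution: the per-patient record history and the urgent-blood donor filter. Every class, field and method name is a hypothetical illustration; the paper specifies only that SQLite and a cloud SQL server hold the data, not a schema.

import java.util.ArrayList;
import java.util.List;

// Hypothetical record kept for each treatment: prescription or report
// with date, hospital and doctor, keyed by the patient's unique id.
class MedicalRecord {
    final String patientId, date, hospital, doctor, prescription;
    MedicalRecord(String patientId, String date, String hospital,
                  String doctor, String prescription) {
        this.patientId = patientId; this.date = date;
        this.hospital = hospital; this.doctor = doctor;
        this.prescription = prescription;
    }
}

// Hypothetical donor entry with the "urgent blood needs" switch.
class Donor {
    final String phone, bloodGroup;
    final boolean notifyOn;
    Donor(String phone, String bloodGroup, boolean notifyOn) {
        this.phone = phone; this.bloodGroup = bloodGroup;
        this.notifyOn = notifyOn;
    }
}

public class TechnoHospital {
    // History lookup: every record stored under one patient id.
    static List<MedicalRecord> historyFor(List<MedicalRecord> all,
                                          String patientId) {
        List<MedicalRecord> out = new ArrayList<>();
        for (MedicalRecord r : all)
            if (r.patientId.equals(patientId)) out.add(r);
        return out;
    }

    // Donor filter: who should receive an urgent-blood message.
    static List<Donor> donorsToNotify(List<Donor> donors, String group) {
        List<Donor> out = new ArrayList<>();
        for (Donor d : donors)
            if (d.notifyOn && d.bloodGroup.equals(group)) out.add(d);
        return out;
    }

    public static void main(String[] args) {
        List<Donor> donors = new ArrayList<>();
        donors.add(new Donor("98400xxxxx", "O-", true));
        donors.add(new Donor("98410xxxxx", "O-", false));
        System.out.println(donorsToNotify(donors, "O-").size()); // 1
    }
}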

VII. POTENTIAL CHALLENGES
i) The expected issue would be that doctors will have to update the prescribed medicines in the patient's database.
ii) The token numbers in all the hospitals should be updated every minute.

VIII. SUGGESTIONS FOR POTENTIAL CHALLENGES
i) The doctors can use the Android app to update as well. The doctor can get the patient's login id, enter the prescriptions and click the enter icon to update the details in the patient's database.
ii) A device can be set up in the hospital, so that when each person completes his consultation with the doctor, the device is pressed and the token number of the patients of that particular hospital is immediately incremented.

IX. STEPS THAT THE PATIENT SHOULD FOLLOW
1) Initially download the app. Register your details and get a unique id.
2) Each time you undergo a treatment, provide your id to the doctor and get the prescription or scan details updated in your account.
3) If you are willing to donate your blood, keep the urgent blood needs notification ON, so that you will be notified through a message if a blood need arises.
4) If you want to know the token number of the patient currently undergoing treatment, just select the place where the hospital is located, the name of the hospital and the doctor's name (if there are many doctors in the hospital). You will be notified immediately.

X. VALUE PROPOSITION
1. Since the most valuable thing in the world is the life of a person, this project will reduce the death rate that has been increasing due to improper treatment provided by doctors in critical situations.
2. Lives of the critically injured can be saved by the blood donated.
3. It saves time for people who take medical checkups regularly.

XI. CONCLUSION
Thus, this idea can be implemented to help people keep their medical records safe, take proper treatment and save lives, just by a user friendly Android app.



ACKNOWLEDGMENT
I would like to express my heartfelt gratitude to Dr. R. Rudramoorthy, Principal, PSG College of Technology, for providing me an opportunity to do my paper with excellent facilities and infrastructure, without which this paper would not have been successful. I extend my sincere thanks to Dr. K. R. Chandran, Professor and Head of the Department of Information Technology, PSG College of Technology, for his unfailing support throughout this project.

REFERENCES
[1] Zigurd Mednieks, Laird Dornin, G. Blake Meike, and Masumi Nakamura, Programming Android, Second Edition, 2009.
[2] Diego Torres Milano, Android Application Testing Guide, Packt Publishing, 2011.
[3] Wallace Jackson, Learn Android App Development, Third Edition, 2010.
[4] http://developer.android.com/guide/topics/resources/providing-resources.html
[5] Android SDK | Android Developers, http://developer.android.com/sdk/index.html


PREPARATION OF WPS FOR STAINLESS STEEL (Ni, Cr, Mo, Nb) WELDING
w.r.t. MECHANICAL & THERMAL PROPERTIES
M. VALLIAPPAN
ASSISTANT PROFESSOR
M. KUMARASAMY COLLEGE OF ENGINEERING
KARUR
TAMILNADU 639 113

Abstract: Today the most commonly used materials in boilers and boiler accessories are stainless steels, which operate under 700°C with greater performance (creep strength, corrosion resistance) and reliability, especially for temperatures of 700-750°C. The modern technology in welding of materials moves towards anti-corrosion and wear resistance, to increase the service life of a product. In SS alloys, the Ni content is very good against corrosion; hence we have selected stainless steel 321, 316 and 347 materials, which contain Ni, Cr, Mo and Nb and are widely used for power piping in 800 MW and 1000 MW boilers. In order to maintain the material properties before and after welding, we have prepared a Welding Procedure Specification (WPS) based on the thermal and mechanical properties of the materials.

I. INTRODUCTION
This paper deals with the preparation of a welding procedure specification for SS welding, by optimizing the existing WPS method with respect to mechanical and thermal properties. It also aims to reduce creep, fatigue, residual stress and thermal stresses in boiler materials by the application of stainless steel material and welding it.
Stainless steels are weldable materials, and a welded joint can provide optimum corrosion resistance, strength and fabrication economy. However, since the material may undergo certain changes during welding, it is necessary to take care in welding to minimize the defects and to maintain in the weld zone the same strength and resistance that is an inherent part of the base metal. Seamless tubes of 316 and 347 and 321 plate materials have improved austenite stability. The stability of the austenite in CrNi and CrNiMo steels is achieved by increasing the nickel content over that of standard 18/8 CrNi and 18/8/2 CrNiMo steels, and more especially by additions of nitrogen, which is particularly effective in promoting austenite stability.
The project starts with micro testing and chemical analysis of the materials and the preparation of a WPS based on their thermal and mechanical properties, finally calculating the strength of the material so as to replace the current boiler materials with improved SS materials. In welding, by controlling the thermal-property parameters, mechanical deviations are reduced, so a weld with fewer defects can be obtained and the service life of the material is increased.

II. SS MATERIALS
In this paper we weld 321 plate of 5.6 mm thickness by TIG and ARC; similarly, 316 seamless tube of 60.3 mm diameter and 6 mm thickness, and 347 seamless tube of 60.3 mm diameter and 5 mm thickness, by TIG and ARC. Welding is carried out by preparing a WPS based on the properties of the material. The main reason for choosing TIG and ARC welding is the material thickness; apart from this, TIG gives higher accuracy and ARC vice versa. The main reason for choosing Nb, Mo and Ti in the Ni alloy combination is that SS 347 contains Nb, which gives high creep strength; SS 316 contains Mo, which gives fatigue strength; and 321 contains Ti, which gives high ductility.

III. CHEMICAL & MICRO RESULT
SS321: Grain: FSS and ASS grains; ASME grain size: No. 7
SS316: Grain: FSS and ASS grains; ASME grain size: No. 7
SS347: Grain: ferrite and austenite grains; ASME grain size: No. 8

IV. WPS (EXISTING METHOD)
The welding procedure form contains all of the essential information required to make the weld and verify the test result. This information may include the type of base and filler metals, the welding process used, preheat, interpass or post weld heat treatment, shielding gases and so on. (Ref: Book-PQR, Chap. 24, p. 380)
Base metal
Filler metal
Position
Preheat
Electrical characteristics
Technique
Parameters
Joints



V. WPS (REVISED)

V.A. TIG
Polarity
Composition
Color code
Inert gas

V.B. ARC
V.B.1. Base metal
Position
Edge preparation
Root gap
Cleaning

V.B.2. Electrode
Type
Diameter
Arc length
Coating

V.B.3. Process
Polarity
Bead
Technique
Run
Speed
Preheating
Post heating
Heat treatment

By optimizing the above mentioned parameters within the existing WPS, the strength of the material can be increased.

VI. NEED OF WPS
Welding can be done without preparing a WPS, but that may result in an improper weld or in defects like weld decay, knife-line attack and stress corrosion cracking. To avoid such cases, a WPS is followed in all industries. Sometimes improper welding will increase the strength, and due to this the lifetime of the material may be changed. To overcome these weld defects the WPS is optimized, an extra-low-carbon electrode is used, and the halogen family is avoided.

VII. WELDING PROCESS
Welding is carried out with the following parameters:
Polarity: straight and reverse
Position: downhand, vertical, horizontal and 1G, 2G, 5G
Process: TIG and SMAW
Bead: stringer and weaving
Technique: forehand and backhand
Speed: low and high
Edge preparation: based on angle (more than 75°)
Root gap: based on thickness
Diameter of electrode: based on thermal conductivity
Arc length: shorter, longer and correct
Run: longer, shorter, skipping, alternate skipping
Preheating: based on thickness of material

Hence, based on the selection of methods from the above mentioned conditions, welding can be carried out. For each of the above mentioned parameters, certain methods are chosen according to AWS and the WPS, so that welding can be achieved with a greater degree of accuracy.

VIII. PREHEATING & POST HEATING
Preheating is done under the environmental conditions by adjusting the current and voltage before welding; generally, material of small thickness need not be considered for preheating. Post heating will be carried out in case of any defects in the weld. After welding, heat treatment is required to maintain the material strength and other properties.

Preheating conditions: for stainless steel the chromium and nickel equivalents are the important quantities; for other alloys, the carbon equivalent.

For stainless steel:
Chromium equivalent = %Cr + %Mo + 1.5 %Si + 0.5 %Nb
Nickel equivalent = %Ni + 30 %C + 0.5 %Mn

Carbon equivalent = %C + %Mn/4 + %Si/4 (for carbon steel)
Carbon equivalent = %C + %Mn/6 + %Cr/5 + %Ni/15 + %Mo/4 + %V/5 (for AS and SS)

CE < 40: no preheating
CE = 40 to 70: preheating 100-300°C
CE > 70: welding is difficult

Preheating temperature (°F) = 1000(C - 0.11) + 1.8 x thickness

Preheating temperature by P-number:
P1, t < 19: nil
P1, t > 19: 100-150°C
P4, all: 200-300°C
P5, all: 200-300°C
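A minimal Java sketch of the equivalent-composition arithmetic above. Inputs are weight percent; the CE thresholds are applied exactly as printed (40/70), which reads as the carbon equivalent expressed in hundredths of a percent, and the demonstration compositions are invented for illustration.

public class PreheatCheck {
    // Chromium and nickel equivalents for stainless steel, per the text.
    static double crEquivalent(double cr, double mo, double si, double nb) {
        return cr + mo + 1.5 * si + 0.5 * nb;
    }
    static double niEquivalent(double ni, double c, double mn) {
        return ni + 30 * c + 0.5 * mn;
    }

    // Carbon equivalent for carbon steel, per the text.
    static double ceCarbonSteel(double c, double mn, double si) {
        return c + mn / 4 + si / 4;
    }

    // Preheat decision using the printed 40/70 thresholds.
    static String preheatAdvice(double cePercent) {
        double ce = cePercent * 100;   // match the printed scale
        if (ce < 40) return "no preheating";
        if (ce <= 70) return "preheat 100-300 C";
        return "welding is difficult";
    }

    public static void main(String[] args) {
        // Illustrative carbon-steel composition (not from the paper).
        double ce = ceCarbonSteel(0.20, 0.80, 0.30);
        System.out.printf("CE = %.3f -> %s%n", ce, preheatAdvice(ce));
        // Cr/Ni equivalents for a nominal 18/8-type analysis.
        System.out.println(crEquivalent(18, 2, 0.5, 0.0));
        System.out.println(niEquivalent(10, 0.06, 1.5));
    }
}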

IX. ELECTRODE (Ref: AWS)
E 308: corrosion resistant
E 316, 317, 330: high-temperature strength
E 410, 420: abrasion resistant
X. WELDING METALLURGY
(Ref: ASME Section IX)
Voltage: V = 17.34 + 0.023I - 6.3x10^-6 I^2
Speed: S = 1.6x10^6 I^-6.38
Deposition: Y = 1.5 + 0.17I + 2.8x10^-5 I^2
Heat input: Q = VI/S (without heat transfer); Q = ηVI/S (with heat transfer, η being the transfer efficiency)
Power density ranges (W/m^2):
SMAW: 5x10^6 to 5x10^8
GMAW: same as SMAW
PAW: 5x10^6 to 5x10^10
EBW, LBW: 10^10 to 10^12
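The heat-input relation Q = VI/S can be sketched as below, using the V(I) fit as printed. Since the paper does not state units for its S(I) fit, this sketch takes the travel speed as a caller-supplied value in mm/s instead, and the factor eta stands in for the "with heat transfer" variant; all numbers in main are illustrative only.

public class HeatInput {
    // Arc voltage as a function of current, per the fit in the text.
    static double arcVoltage(double amps) {
        return 17.34 + 0.023 * amps - 6.3e-6 * amps * amps;
    }

    // Heat input in J/mm for current in A and travel speed in mm/s;
    // eta = 1 reproduces the gross (without heat transfer) value.
    static double heatInput(double amps, double speedMmPerS, double eta) {
        return eta * arcVoltage(amps) * amps / speedMmPerS;
    }

    public static void main(String[] args) {
        double q = heatInput(120, 2.5, 0.8);   // illustrative values
        System.out.printf("Q = %.1f J/mm%n", q);
    }
}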
Stress analysis:
Stress due to sustained load = 0.72 Sy
Stress due to occasional load = 0.80 Sy
Sy = minimum yield strength of the material



Sy = 0.6 to 0.7 of the tensile strength
Stress due to expansion load = E α ΔT - ν Sh, where:
E = Young's modulus
α = coefficient of linear expansion
ΔT = change in temperature
ν = Poisson's ratio
Sh = hoop stress
Resultant bending stress = sqrt[(Ii Mi)^2 + (Io Mo)^2] / Z, where:
Ii = SIF in-plane
Io = SIF out-of-plane
Mi = bending moment in-plane
Mo = bending moment out-of-plane
Resultant torsional stress = Mt / 2Z, where:
Z = section modulus
Mt = torsional bending moment
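A short sketch of the resultant-stress expressions above, under the stated symbol definitions; the torsional form Mt/2Z follows the symbols defined in the text, and the inputs in main are illustrative only.

public class PipeStress {
    // Resultant bending stress: sqrt((Ii*Mi)^2 + (Io*Mo)^2) / Z.
    static double resultantBending(double ii, double mi,
                                   double io, double mo, double z) {
        return Math.sqrt(Math.pow(ii * mi, 2) + Math.pow(io * mo, 2)) / z;
    }

    // Resultant torsional stress: Mt / (2Z).
    static double resultantTorsion(double mt, double z) {
        return mt / (2 * z);
    }

    public static void main(String[] args) {
        // Moments in N*mm, section modulus in mm^3 (illustrative only).
        System.out.println(resultantBending(1.2, 5.0e5, 1.0, 3.0e5, 4.2e4));
        System.out.println(resultantTorsion(2.0e5, 4.2e4));
    }
}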


Material (AWS & ASME sections):
Tensile strength = 15.4 (19.1 + 1.8 %Mn + 5.4 %Si + 0.025 x %pearlite + 0.5 d^(-1/2))
Vickers hardness = 90 + 1050 %C + 45 %Si + 97 %Mn + 30 %Cr + 31 %Ni
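The two empirical fits above can be evaluated as below; the d^(-1/2) term is read as the usual grain-size (Hall-Petch type) term with d the grain diameter, and the compositions in main are invented for illustration.

public class MaterialEstimates {
    // Tensile-strength fit, inputs in weight percent, d in mm (assumed).
    static double tensileStrength(double mn, double si,
                                  double pearlitePct, double dMm) {
        return 15.4 * (19.1 + 1.8 * mn + 5.4 * si
                       + 0.025 * pearlitePct + 0.5 / Math.sqrt(dMm));
    }

    // Vickers-hardness fit, inputs in weight percent.
    static double vickersHardness(double c, double si, double mn,
                                  double cr, double ni) {
        return 90 + 1050 * c + 45 * si + 97 * mn + 30 * cr + 31 * ni;
    }

    public static void main(String[] args) {
        System.out.println(tensileStrength(1.5, 0.5, 20, 0.02));
        System.out.println(vickersHardness(0.06, 0.5, 1.5, 18, 10));
    }
}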
XI. APPLICATIONS
Non-ferrous metals with high strength and toughness
Magnetic properties, nuclear power systems
Corrosion resistance, wear resistance
Aerospace aircraft gas turbines
Steam turbine power plants, medical applications
Chemical and petrochemical industries
Strength at elevated temperature.
Nickel-chromium alloys, or alloys that contain more than about 15% Cr, are used to provide both oxidation and carburization resistance at temperatures exceeding 760°C.

XII. CONCLUSION
The various SS materials were welded by TIG and ARC, and finally the following tests were carried out: hardness test, impact test, bend test and tensile test. These are carried out after welding to measure the strength of 321, 316 and 347, which is compared with the readings from AWS and ASME, and finally the suitable boiler material is analyzed. Thus optimization of the welding process is also made by checking the strength of each material in different welds. Similarly, readings from TIG and SMAW are compared for the three materials. Strength is calculated manually and compared with the ASME, WPS and AWS values.
The main factor for choosing TIG and SMAW is that in a boiler, at higher altitudes, PAW, EBW, LBW and other types of welding cannot be carried out and are also very expensive; hence, to overcome such cases, TIG and SMAW were selected.

REFERENCES
[1] R.S. Parmar, Welding Engineering and Technology, Khanna Publishers, Delhi, 1997.
[2] Avanar, Material Science Engineering.
[3] Dr. Srinivasan, Engineering Materials.
[4] Peter Mayr, Stefan Mitsche, Horst Cerjak, Samuel M. Allen, "The impact of weld metal creep strength on the overall creep strength of 9% Cr steel weldments", Journal of Engineering Materials and Technology, vol. 133, 2011.
[5] Nattaphon Tammasophan, Weerasak Homhrajai, Gobboon Lothongkum, "Effect of post weld heat treatment on microstructure and hardness of TIG weldment between P22 and P91 steels with Inconel 625 filler metal", Journal of Metals, Materials and Minerals, vol. 21, 2011.
[6] Samsiah Sulaiman, "Structure of properties of the heat affected zone of P91 creep resistant steel", research work.
[7] Greg J. Nakoneczny, Carl C. Schultz, "Life assessment of high temperature headers", American Power Conference, 1995.


REPAIRING CRACKS IN CONCRETE STRUCTURES


Anilkumar P M and Dr. J Sudhakumar
B.Tech Student, Professor
Department of Civil Engineering, National Institute of Technology, Calicut
anilkumar_bce11@nitc.ac.in, skumar@nitc.ac.in

Abstract: Studies related to crack formation in concrete are most interesting, because sometimes the same causes produce different cracking patterns and sometimes the same cracking pattern is produced by different causes. Sometimes concrete cracks in a location where no cause can be found, and in other places it does not crack where there is every reason why it should. Eighty percent of these cases, however, are straightforward. In order to determine whether the cracks are active or dormant, periodic observations are to be made.
Various factors have impeded improvements in the durability of concrete repairs, including inadequate condition evaluation and design; lack of quality construction practices and quality control; and the choice of repair materials. It is necessary to reconsider some recent developments in structural repairs from the viewpoint of extending the service lives of structures under repair [1].
Index Terms: Concrete, cracks, reinforcement, repair

I. INTRODUCTION
Cracks can be broadly divided into two categories, namely solitary cracks and pattern cracks. Generally, a solitary crack is due to a positive overstressing of the concrete, either due to load or shrinkage, the cause of which becomes apparent when the line of the crack is compared with the layout of the portion of the concrete, its reinforcement and the known stresses in it. Overload cracks are fairly easily identified because they follow the lines demonstrated in laboratory load tests. A crack due to setting and hardening shrinkage is formed in the first week of the life of the concrete. If the length of concrete under inspection is more than about 9 m, it is not likely that there is only a solitary crack; usually there will be another one of a similar type, and the analysis of the second confirms the finding from the first.
Regular pattern of cracks may occur in the surfacing of
concrete and in thin slabs. The term pattern cracking is
used to indicate that all the cracks visible have occurred
more or less at the same time.
II. TYPES OF CRACKS
Cracks can be divided into two types: solitary cracks and pattern cracks.
A. Solitary cracks (isolated cracks)
Generally, a solitary crack is due to a positive overstressing of the concrete, either due to load or shrinkage, the cause of which becomes apparent when the line of the crack is compared with the layout of the portion of the concrete, its reinforcement and the known stresses in it. Overload cracks are fairly easily identified because they follow the lines demonstrated in laboratory load tests. A crack due to setting and hardening shrinkage is formed in the first week of the life of the concrete. If the length of concrete under inspection is more than 9 m, it is not likely that there is only a solitary crack; usually there will be another one of a similar type, and the analysis of the second confirms the finding from the first. In a long retaining wall or long channel, the regular formation of cracks indicates faults in the design rather than the construction, but an irregular distribution of solitary cracks may indicate poor construction as well as poor design.
B. Pattern cracking
Regular pattern of cracks may occur in the surfacing
of concrete and in thin slabs. The term pattern cracking is
used to indicate that all the cracks visible have occurred
more or less at the same time.
III. IDENTIFICATION OF CRACKS
Crack movement can be detected by placing a mark at the end of the crack. Subsequent extension of the crack beyond the mark indicates probable continuance of the activity originally producing the defect. The deficiency of this method is that it will not show any tendency of the crack to close, nor provide any quantitative data on the movement. In another method, a pair of toothpicks is lightly wedged into the crack and falls out if there is any extension of the defect. The deficiencies of this method, as before, are that there is no indication of closing or movement, nor any quantitative measure of the changes which occur.
A strip of notched tape works similarly: movement is indicated by tearing of the tape. An advantage is that some indication of closure can be realized by observing any wrinkling of the tape. This device is not reliable, however, because the tape is not dimensionally stable under changing conditions of humidity, so one can never be sure whether the movements are due to shrinkage or swelling of the marker. A device using a typical vernier caliper is the most satisfactory of all. Both extension and compression are indicated, and movements of about 1/100 inch can be measured using a vernier caliper. If more accurate readings are desired, extensometers can be used. The reference points must be rigidly constructed and carefully glued to the surface of the concrete, using a carborundum stone to prepare the bonding surface before attaching the reference mark.



Where extreme accuracy is required, resistance strain gauges can be glued across the crack. They are, however, expensive, sensitive to changes in humidity, and easily damaged.

IV. METHODS OF REPAIR


A. Bonding with epoxies
Cracks as narrow as 0.05mm can be bonded by the
injection of epoxy. The technique generally consists of
establishing entry and venting ports at close intervals
along the crack, sealing the crack on exposed surfaces, and
injecting the epoxy under pressure [2]. Usual practice is to
drill into the crack from the face of the concrete at several
locations; inject water or a solvent to flush out the defect;
allow surface to dry (using hot air jet, if needed);
surface-seal the cracks between the injection points; and
inject the epoxy until it flows out of the adjacent sections
of the crack or begins to bulge out the surface seals, just as
in pressure grouting. Usually the epoxy is injected through
holes of about 0.75 inch diameter and 0.75 inch deep at
about 6- to 12-inch centers (smaller spacings are used for
finer cracks). This method of bonding represents an
application for which there is no real substitute procedure.
However, unless the crack is dormant (or the cause of
cracking is removed, thereby making the crack dormant), it
will probably recur, possibly somewhere else in the
structure. Also, this technique is not applicable if the
defects are actively leaking to the extent that they cannot
be dried out, or where the cracks are numerous.
B. Routing and sealing:
This method involves enlarging the crack along its
exposed face and filling and sealing it with a suitable
material. This is the simplest and most common technique
for sealing cracks and is applicable for sealing both fine
pattern cracks and longer isolated cracks.
The cracks should be dormant. This technique is not
applicable for sealing cracks subjected to a pronounced
hydrostatic pressure. The routing operation consists of
following along the cracks with a concrete saw or with
hand or pneumatic tools, opening the crack sufficiently to
receive the sealant. A minimum surface width of 0.25 inch
is desirable. Smaller openings are difficult to work on. The
surface of the routed joints should be rinsed clean and
permitted to dry before placing the sealant. The method
used for placing the sealant depends on the material to be
used and follows standard technique.
Routing and sealing of leaking cracks should preferably
be done on pressure face so that the water or other
aggressive agents cannot penetrate the interior of the
concrete and cause side effects such as swelling, chemical
attack or corrosion of the bars.
The sealant may be any of several materials, depending
on how tight or permanent a seal is desired. On roadway
pavements it is common to see cracks which have been
sealed by pouring hot tar over them. This is a simple and
inexpensive method where thorough water tightness of the

joint is not required and where appearance is not


important.
C. Stitching:
The tensile strength of a cracked concrete section can
be restored by stitching in a manner analogous to sewing
cloth. Concrete can be stitched by iron or steel dogs in the
same way as timber. The best method of stitching is to
bend bars into the shape of a broad flat-bottomed letter U,
between 1 foot and 3 feet long and with ends about 6
inches long (or less if the thickness of the concrete is less),
and to insert them in holes drilled to match in the
concrete on either side of the crack. The bars are then
grouted up- some grout being placed in the holes in
advance of the bars.
In case of stitching of concrete pavements, after
stitching a longitudinal crack it may be necessary to treat a
nearby longitudinal joint. A primary concern is whether a
crack has formed below the saw cut for longitudinal joints.
If a crack has occurred and the joint functions properly,
then no treatment other than joint sealing is warranted.
However, if there is no crack extending below the saw cut
joint, then it is advantageous to fill the saw cut with epoxy
to strengthen the slab at this location. If the joint is not
functioning, but a joint sealant has already been installed, then
no further action is recommended [3].
Usually cracks start at one end and run away from the
starting place quicker on one side of the concrete than on
the other. The stitching should be on the side which is
opening up first. The following points should be observed,
in general
i. Any desired degree of strengthening can be
accomplished, but it must be considered that the
strengthening also tends to stiffen the structure locally.
This may accelerate the restraints causing the cracking
and reactivate the condition.
ii. Stitching the cracks will tend to cause the problem to
migrate elsewhere in the structure. If it is decided to
stitch, investigate and if necessary, strengthen the
adjacent areas of the construction to take the
additional stress. In particular, the stitching dogs should
be of variable length and /or orientation and so located
that the tension transmitted across the crack does not
devolve on a single plane of the section, but is spread
out over an area.
iii. Where there is a water problem, the crack should be
sealed as well as stitched so that the stitches are not
corroded, and because the stitching itself will not seal
the crack. Sealing should be completed before stitching
is commenced both to avoid the aforementioned
corrosion and because the presence of dogs tends to
make it difficult to apply the sealant.
iv. Stress concentrations occur at the ends of cracks, and
the spacing of the stitching dogs should be closed up at
such locations.
v. Where possible, stitch both sides of the concrete
section so that further movement of the structure will
not exert any bending action on the dogs. In bending
members it is possible to stitch one side of the crack
only, but this should be the tension side of the section,
where movement is originating. If the members are in



a state of axial tension, then a symmetrical placement
of the dogs is a must, even if excavation or demolition
is required to gain access to opposing sides of the
section.
vi. As an alternative to stitching cracks at right angles,
which does not resist shear along the crack, it is
sometimes necessary to use diagonal stitching. One
set of dogs can be placed on each side of the concrete
if necessary. The length of the dogs is random so that
the anchor points do not form a plane of weakness.
vii. The dogs must be grouted with a non-shrink or
expanding mortar, so that they have a tight fit, and
movement of the cracks will cause the simultaneous
stressing of both old and new sections. If this is not
possible, proportion the stitching to take the entire
load without precipitation.
viii. The dogs are relatively thin and long and so cannot take
much in the way of compressive force. Accordingly, if
there is a tendency for the crack to close as well as to
open, the dogs must be stiffened and strengthened by
encasement in an overlay or by some similar means.
D. External stressing
Development of cracking in concrete is due to tensile
stresses and can be arrested by removing these stresses.
Further, the cracks can be closed by inducing a
compression force sufficient to overcome the tension and
to provide a residual compression. The compressive force
is applied by use of the usual prestressing wires or rods.
The principle is very similar to stitching, except that the
stitches are tensioned; rather than plain bar dogs which
apply no closing force to the crack and which may in fact
have to permit the crack to open up a bit before they
begin to take the load. All the points noted regarding
stitching must be considered. An additional problem is that
of providing an anchorage for the prestressing wires or
rods. Some form of abutment is needed for this purpose.
The effect of the tensioned force on the stress conditions
in the structure should be analyzed.
E. Grouting
Grouting of the cracks can be performed in the same
manner as the injection of an epoxy, and the technique
has the same areas of application and limitations.
However, the use of an epoxy is the better solution except
where considerations of fire resistance or cold weather
prevent such use, in which case grouting is the comparable
alternative. The procedure is similar to other grouting
methods and consists of cleaning the concrete along the
crack; installing built-up seats at intervals along the crack,
sealing the crack between the seats with a cement paint or
grout, flushing the crack to clean it and test the seal; and
then grouting the whole. The grout itself is
high-early-strength Portland cement.
An alternative and better method, where it can be
performed, is to drill down the length of the crack and
grout it so as to form a key. The grout key functions to
prevent relative transverse movements of the sections of
concrete adjacent to the crack. However, this technique is
applicable only where the cracks run in a reasonably

straight line and are accessible at one end. The drilled hole
should preferably be 2 or 3 inches in diameter and flushed
to clean out the crack and permit better penetration of the
grout.
F. Blanketing
Blanketing is similar to routing and sealing, but is used
on a larger scale and is applicable for sealing active as well
as dormant cracks. The following are the principal types of
blanket joints
i. Type 1: Where an elastic sealant is used. The sealant
material is the one which returns to its original shape
when not under an externally induced stress, i.e., acts
elastically. The recessed configuration is used where
the joint is subjected to traffic or a pressure head. The
strip sealant is applicable where there are no traffic or
pressure problems and is somewhat less costly. The
first consideration in the selection of sealant materials
is the amount of movement anticipated and the
extremes of temperature at which such movement
occur. It should be capable of deforming the required
amount under applicable conditions of temperature.
The material should be able to take traffic, be resistant
to chemical spillage, be capable of being pigmented, if
desired.
ii. Type 2: Mastic-filled joint. The sealant is a mastic
rather than a compound having elastic properties. This
type of joint is for use where the anticipated
movements are small and where trafficability or
appearances are not considerations. The advantage is
that the mastic is less costly than the elastic type of
sealant material.
iii. Type 3: A mortar-plugged joint. Proper sealing of the
crack against leakage is provided using a temporary
mortar plug. The mortar plug provides the strength for
the joints. The plug resists the pressure on the joint by
arching the load to the sides of the chase. Where the
pressure acts on the face of the joint, static balance
requires the development of the tensile and shear
stresses between the plug and the sides of the slot.
Compression and shear are developed when the
pressure head is internal. Theoretically, the edges of
the chase should be undercut so that the mortar plug,
which dries more quickly and more thoroughly at the
surface, does not shrink differentially and so pull away
from the sides and bottom of the chase.
iv. Type 4: A crimped water bar. Mortar is required at
the junctions which bear traffic. A crimped
water stop joint sealant is not applicable for the use
where the crack is subject to a heavy pressure head
from inside the joint, or where movement occurs as a
shear along the length of the crack. In the first case, the
pressure would tend to bulge out the crimp, and in the
second case, the longitudinal movement would tend to
tear the stop. Accordingly, this type of joint is primarily
for use where the anticipated movements are limited
to a simple extension or contraction and where the
pressure head is either small or acts from the face of
the joint. However, while not specifically intended for
such applications, a rubber type or similar crimped joint



material will take a little longitudinal movement, and
the crimped stop will function under limited amount of
relative transverse displacement.
v. Dealing with the reinforcement: Whatever detail is used, when cutting the chase it is possible that some of the reinforcement will be exposed. If this is the case, cut deep enough so that the sealant will be behind the reinforcing, clean the bars, and paint them with bitumen as a protection against moisture penetration and corrosion. If the crack is an active one with substantial movements, cut the bars so that they do not impede the movement of the joint.
G. Overlays:
Overlays may be used to seal cracks and are very
useful and desirable where there are large numbers of
cracks and treatment of each individual defect would be
too expensive.

i. Active cracks: Sealing of active cracks by use of an
overlay requires that the overlay be extensible. The
occurrence or prolongation of a crack automatically
means that there has been an elongation of the surface
fibers of the concrete. Accordingly, an overlay which is
flexible but not extensible, i.e., can be bent but cannot
be stretched, will not seal a crack that is active. A two-
or three-ply membrane of roofing felt laid in a mop
coat of tar, with tar between the plies, the whole
covered with a protective course of gravel, concrete, or
brick, functions very well for this purpose. The type of
protective course depends on the use to which it will
be subject. Gravel is typically used for applications such
as roofs, and concrete or brick is applicable where fill
is to be placed against the overlay. An asphalt block
pavement also works well and may be used where the
area is subjected to heavy traffic. If the cracks are
subjected to longitudinal movements parallel to their
axis, the overlay will wrinkle or tear; be very careful in
repairing such joints. Blanketing may be a better
solution.

ii. Dormant cracks: If the cracks are dormant, almost any
type of overlay may be used, provided that it will take
the traffic to which it is subject and that it is either
adequately bonded or thick enough so that curling due
to differential deformations is not a problem. Epoxy
compounds are coming into increasingly frequent use
for this purpose.

H. Autogenous healing
The inherent ability of concrete to heal cracks within
itself is termed autogenous healing, and it is a
phenomenon which has been known for some time. It has
practical application for sealing dormant cracks, such as in
the repair of precast units cracked in handling, the
reknitting of cracks developed during the driving of precast
piling, the sealing of cracks in water tanks, and the sealing
of cracks which are the result of temporary conditions or
inadvertencies. The effect also provides some increase in
the strength of concrete damaged by vibration during set
and of concrete disrupted by the effects of freezing and
thawing.

The mechanism whereby healing occurs is the
carbonation of the calcium oxide and calcium hydroxide in
the cement paste by carbon dioxide in the surrounding air
and water. The resulting calcium carbonate and calcium
hydroxide crystals precipitate, accumulate and grow out
from the cracks. The crystals interlace and twine, producing
a mechanical bonding effect, which is supplemented by
chemical bonding between adjacent crystals and the
surfaces of the paste and aggregate. As a result, some of
the tensile strength of the concrete is restored across the
cracked section, and the crack may become sealed.
Saturation of the cracks and the adjacent concrete with
water during the healing process is essential for the
development of any substantial strength; submergence of
the cracked section is desirable. Alternatively, water may
be ponded so that the crack is saturated. Up to 25% of the
normal tensile strength may be restored by healing under
conditions of submergence in water. The saturation must
be continuous for the entire period of healing: a single
cycle of drying and re-immersion will produce a drastic
reduction in the amount of healing strength. Healing
should be commenced as soon after the crack appears
as possible; delayed healing results in less restoration of
strength. Healing will not occur if the crack is active.
Healing also will not occur if there is a positive flow of
water through the crack which dissolves and washes away
the lime deposit, unless the flow is slow enough that
complete evaporation occurs at the exposed face, causing
re-deposition of the dissolved salts.

Concrete cracks, both dormant and active, subjected to
water pressure are able to heal themselves with time. The
greatest autogenous healing effect occurs between the
first 3 to 5 days of water exposure. In addition, skin
reinforcement proves to be highly effective in supporting
the autogenous healing effect. The growth rate of the
calcium carbonate crystals depends on crack width and
water pressure, whereas concrete composition and type of
water have no influence on the autogenous healing rate [4].

V. CONCLUSION
General precautions to be followed for the repair of
cracks in concrete are listed below.
i. Do not fill the cracks with new concrete or mortar.
ii. Try to avoid the use of a brittle overlay to seal an active
crack.
iii. Do not fail to remove the restraints causing the crack.
iv. Do not surface seal cracks over corroded
reinforcement without encasing the bar.
v. Do not bury or hide a joint so that it is inaccessible.

REFERENCES
[1] C.S. Suryawanshi, "Structural concrete repair - a durability
based revised approach is needed", The Indian Concrete
Journal, 2012.
[2] "Causes, Evaluation and Repair of Cracks in Concrete
Structures", reported by ACI Committee 224.
[3] "Stitching Concrete Pavement", International Grooving and
Grinding Association, June 2010.
[4] Carola Edvardsen, "Water Permeability and Autogenous
Healing of Cracks in Concrete", ACI Materials Journal, Title no.
96-M56, July/August 1999.


RECOGNITION OF EMG BASED HAND GESTURES FOR PROSTHETIC CONTROL USING ARTIFICIAL NEURAL NETWORKS

Sasipriya.S, Prema.P
Department of Biomedical Engineering,
PSG College of Technology, Coimbatore.
email id: sashpsg@gmail.com, ppr@bme.psgtech.ac.in
Abstract
EMG (Electromyography) is a biological signal derived from the summation of electrical signals produced by muscular actions. This EMG can be integrated with external hardware and can control prosthetics in rehabilitation. Pattern recognition plays an important role in developing myo-electric control based interfaces with prosthetics, and Artificial Neural Networks (ANN) are widely used for such tasks. The main purpose of this paper is to classify EMG signals of different predefined hand gestures (wrist up and finger flexion) using an ANN and to compare the performances of four different back propagation training algorithms used to train the network. The EMG patterns are extracted from the signals for each movement, and the ANN is then utilized to classify the EMG signals based on their features. The four training algorithms used were SCG, LM, GD and GDM, with different numbers of hidden layers and neurons. The ANNs were trained with those algorithms using the available experimental data as the training set. It was found that LM outperformed the others, giving the best performance within the shortest time. This classification can further be used to control devices based on EMG pattern recognition.

Keywords - Electromyography, Myo-electric control, Artificial Neural Network, Back propagation algorithms, Pattern recognition.

I. INTRODUCTION
Congenital defects or the accidental loss of limbs can be corrected by the use of artificial limbs or prostheses. The most efficient way of controlling a prosthesis is by the use of EMG signals obtained from the active muscles. EMG signals obtained from different actions can be used to make the prosthesis perform those different functions in real time. For doing so, the EMG signal obtained is filtered, windowed and, after digital conversion, classified based on the features extracted and given as control inputs to a prosthetic system for activating the correct functions. The classifier is an important element in this system. Artificial neural networks (ANN) are mathematical models of biological neuronal systems, and they are particularly useful for complex pattern recognition and classification tasks that are difficult for conventional computers or human beings. The nonlinear nature of neural networks, their ability to learn from their environments in supervised as well as unsupervised ways, and their universal approximation property make them highly suited for solving difficult signal processing problems. But it is critical to select the most appropriate neural network paradigm for a specific function. Artificial neural networks based on the Multi-Layer Perceptron model (MLP) are most commonly used as classifiers for EMG pattern recognition tasks, and selecting an appropriate training function and learning algorithm for a classifier is a crucial task. The time delay between the command and the activation of the prosthetic function should be minimal (not more than 100 ms) for the comfort of the users [1], [2], [3], [4]. Hence the classifier should be precise and rapid in producing correct signals for controlling the prosthesis, in spite of inaccuracies that may occur during the detection and acquisition of EMG signals. So it is necessary to assess the impact of neural network algorithms on the performance, robustness and reliability aspects, and to identify those that work best for the problem of interest. This work has attempted to classify EMG based on two predefined hand gestures and to identify the better performing training functions in the Feed Forward Backpropagation algorithm (the standard strategy of MLP) used in recognizing the patterns.

II. METHODOLOGY

A. EMG SIGNAL ACQUISITION AND FEATURE EXTRACTION
EMG signals used in this study are acquired from the muscles of the forearm, namely the Flexor Carpi Ulnaris (FCU), Extensor Carpi Radialis (ECR) and Extensor Digitorum (reference), for two types of hand movements: finger flexion and wrist up. The FCU assists in wrist flexion with ulnar deviation, and the ECR assists in extension and radial abduction of the wrist. The myoelectric signals are acquired by means of single channel differential electrodes (disposable Ag/AgCl surface electrodes) and are then amplified and filtered before further processing.

The 13 different statistical features extracted from the acquired EMG signals are Integrated EMG, Mean Absolute Value, Modified Mean Absolute Value 1, Modified Mean Absolute Value 2, Simple Square Integral (energy), Variance, Root Mean Square, Waveform Length, Zero Crossing, Slope Sign Change, Willison Amplitude, Difference Absolute Mean Value and Histogram of EMG. These 13 features are extracted from 150 samples of EMG signals, in which 75 represented finger flexion and the rest represented wrist up functions, obtained from volunteers and input to the neural network for classification.
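Most of these features have standard closed-form definitions in the EMG literature. The minimal Python sketch below (not part of the original work; the window length and zero-crossing threshold are illustrative assumptions) shows how a few of them can be computed from one analysis window:

    import numpy as np

    def emg_features(x, zc_threshold=0.01):
        """Compute a subset of the 13 statistical EMG features for one window x."""
        iemg = np.sum(np.abs(x))            # Integrated EMG
        mav = np.mean(np.abs(x))            # Mean Absolute Value
        ssi = np.sum(x ** 2)                # Simple Square Integral (energy)
        var = np.var(x)                     # Variance
        rms = np.sqrt(np.mean(x ** 2))      # Root Mean Square
        wl = np.sum(np.abs(np.diff(x)))     # Waveform Length
        # Zero Crossings: sign changes whose amplitude jump exceeds a noise threshold
        zc = np.sum((x[:-1] * x[1:] < 0) &
                    (np.abs(x[:-1] - x[1:]) > zc_threshold))
        return np.array([iemg, mav, ssi, var, rms, wl, zc])

    window = np.random.randn(256)           # stand-in for one filtered EMG window
    print(emg_features(window))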


B. NEURAL NETWORK ARCHITECTURE
A neural network is a general mathematical computing paradigm that models the operations of biological neural systems [4]. The weights (which are determined by the training algorithm), bias and activation function are important factors for the response of a neural network. A Multi-Layer Perceptron model based on the Feed Forward Backpropagation algorithm is used here for classification. Fig 1 shows the basic architecture of a neural network with hidden layers between the input and the output layers.

Fig 1: Neural network with hidden layers.

The designed ANN for EMG pattern recognition consists of 3 layers: an input layer, a tan-sigmoid (standard activation function) hidden layer and a linear (purelin) output layer. Each layer except the input layer has a weight matrix, a bias vector and an output vector. The learning rule for the propagation of the neural network defines how the weights between the layers will change [5], [6]. Here, the input is a 13 x 150 matrix and the corresponding target is a 2 x 150 matrix (as 13 features are extracted from 150 samples and there are 2 types of gestures to be classified). The classification was divided into 3 stages: training (70% of samples), validation (15% of samples) and test (15% of samples). The four main training algorithms used were Scaled Conjugate Gradient (SCG), Levenberg-Marquardt (LM), Gradient Descent (GD), and Gradient Descent with Momentum (GDM). The sample input vectors, their corresponding target vectors and the output vectors after classification are shown in Table 1.
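As a rough cross-check of this setup outside MATLAB, the sketch below builds an analogous 13-input, 20-hidden-neuron, tanh MLP in Python with scikit-learn. This is only an analogy, not the authors' setup: scikit-learn offers the lbfgs/sgd/adam solvers rather than the paper's trainlm/trainscg/traingd/traingdm training functions, and random data stands in for the real feature matrix.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 13))            # placeholder for the 13 x 150 feature matrix
    y = np.r_[np.ones(75), np.zeros(75)]      # 75 finger flexion, 75 wrist up samples

    # 70% training; the paper further splits the remainder into validation and test halves
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

    for solver in ("lbfgs", "sgd", "adam"):   # loose stand-ins for the paper's four algorithms
        net = MLPClassifier(hidden_layer_sizes=(20,), activation="tanh",
                            solver=solver, max_iter=1000, random_state=0)
        net.fit(X_tr, y_tr)
        print(solver, "test accuracy:", net.score(X_te, y_te))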
III. RESULTS AND DISCUSSION
The classification is done by altering the training functions, the learning rate (for GD and GDM), and the number of neurons in the hidden layer, with the standard performance function MSE (Mean Square Error), and the better performing algorithm is identified. A summary of the results of the various training algorithms used and the corresponding numbers of neurons is listed in Table 2. It is found that, on training the network with 10 neurons, SCG gives the least error and the best performance. But considering fast convergence (which is the actual need for prosthetic control), LM with 20 neurons gives good classification performance earlier than SCG. If the number of neurons is increased to 30, the performance of LM is better than that of SCG, with the least time elapsed. Also, the classification rate of the SCG algorithm saturates at 81.3% after 30 neurons: no matter how many neurons are added, the rate of correct classification remains constant. In the LM algorithm, by contrast, the higher the neuron number, the better the classification rate. The other two algorithms, GD and GDM, require more time and more iterations for classification, and their performances are also undesirable. Hence the classification efficiencies of the SCG and LM algorithms alone are shown in Table 3 and Table 4 respectively. The best performance and classification rate of the LM based ANN with 50 neurons are shown in Fig 2 and Fig 3. Thus the LM network outperforms the other algorithms by providing the least response time (fastest convergence), a higher classification rate, fewer iterations and lower error values.

Fig 2: Best Performance of LM

Fig 3: Classification Rate of LM

IV. CONCLUSION AND FUTURE WORK
This work has been carried out to find the appropriate classifier for EMG signals. The experimental results show that the Levenberg-Marquardt algorithm gives good and fast classification performance with the least computation. The main disadvantages of this algorithm are its requirements of large memory and an increased number of neurons, which may affect the hardware design (DSP processors) of the prosthetics. Hence the future work will be to optimize the performance and provide an interface to the hardware.


Fig 4: LM Training tool

Table 1: SAMPLE INPUT DATA AND SIMULATED OUTPUT DATA

Input vectors (13 extracted features from EMG):
Sample 1: 257.92, 0.42987, 0.32246, 0.20606, 643.08, 1.0647, 0.80911, 67.429, 89.086, 0.14872, 6.5714, 67.571, 116.29
Sample 2: 318.26, 0.53044, 0.39805, 0.28215, 695.51, 1.1527, 0.88938, 88, 115.55, 0.1929, 9.1429, 68, 111.86
Sample 3: 154.49, 0.25749, 0.19358, 0.1467, 425.03, 0.70138, 0.58973, 26.857, 46.511, 0.077647, 0.85714, 73.857, 108.86

Target vectors (wrist up, finger flexion):
Sample 1: (1, 0)   Sample 2: (1, 0)   Sample 3: (0, 1)

Output vectors (wrist up, finger flexion):
Sample 1: (0.7866, 0.2339)   Sample 2: (0.9530, 0.0411)   Sample 3: (0.3495, 0.6390)


Table 2: COMPARISON OF VARIOUS ALGORITHMS

Best performance (MSE):
HIDDEN NEURONS    GDM       GD        LM         SCG
10                0.15275   0.14585   0.18598    0.14172
20                0.27954   0.16829   0.0991680  0.13132
30                0.12168   0.1526    0.14919    0.17166

Epoch at which best performance is obtained:
HIDDEN NEURONS    GDM       GD        LM         SCG
10                1000      1000      21         10
20                1000      968       20         20
30                1000      1000      27         30

Table 3: CLASSIFICATION RATE SATURATION OF SCG

HIDDEN NEURONS                             10        20        30        40        50
Correctly classified (%)                   76.7      80        81.3      81.3      81.3
Misclassified (%)                          23.3      20        18.7      18.7      18.7
BEST PERFORMANCE                           0.17323   0.10074   0.22587   0.14376   0.13385
EPOCH AT WHICH BEST PERFORMANCE OBTAINED   11        12        19        15        -

Table 4: CLASSIFICATION RATE EFFICIENCY OF LM

HIDDEN NEURONS                             10        20        30        40        50
Correctly classified (%)                   78.7      76        84        87.3      88
Misclassified (%)                          21.3      24        16        12.7      12
BEST PERFORMANCE                           0.20428   0.22699   0.14131   0.30642   0.20215
EPOCH AT WHICH BEST PERFORMANCE OBTAINED   10        -         -         -         -


REFERENCES
[1] Alcimar Soares et al., "Development of a virtual myoelectric prosthesis controlled by an EMG pattern recognition system based on neural networks", Journal of Intelligent Information Systems, 2003.
[2] Md. R. Ahsan, Muhammad I. Ibrahimy, Othman O. Khalifa, "EMG Signal Classification for Human Computer Interaction: A Review", European Journal of Scientific Research, 2009.
[3] Md. Rezwanul Ahsan, Muhammad Ibn Ibrahimy, Othman O. Khalifa, "Electromyography (EMG) Signal based Hand Gesture Recognition using Artificial Neural Network (ANN)", 4th International Conference on Mechatronics (ICOM), 17-19 May 2011, Kuala Lumpur, Malaysia.
[4] Yu Hen Hu, Jenq-Neng Hwang, "Introduction to Neural Networks for Signal Processing", in Handbook of Neural Network Signal Processing, CRC Press, 2001.
[5] A. Ghaffari et al., "Performance comparison of neural network training algorithms in modeling of bimodal drug delivery", International Journal of Pharmaceutics, 2006.
[6] Claudio Castellini, Patrick van der Smagt, "Surface EMG in advanced hand prosthetics", Biol Cybern, 2009.

Vertical farm buildings: sustainable and green manufacturing

C.BALACHANDAR, A.SIDHARTHAN
Dept. of Civil Engineering, SMVEC, Puducherry, India
balaji1145@hotmail.com, sidhu1194@gmail.com

Abstract - This paper deals with vertical farm buildings and some advanced technologies that can be added as upgrade features to improve the efficiency of these buildings. It also discusses the wide range of benefits of this modern form of agriculture.

Index Terms - Artificial photosynthesis, filtering system, Organic wastes, Pesticide-free.

I. INTRODUCTION
Due to the fast growth of urbanisation, agricultural lands are being converted to industrial zones. The land available for agricultural activities is decreasing at a faster rate. Agricultural lands have been heavily occupied by industries and business centres in the last few decades. Rural people also migrate towards urban areas. The growth comparison of rural and urban population is given in the figure below.

Fig.: Growth comparison of rural and urban population.

It is estimated that by the year 2050, close to 80% of the world's population will live in urban areas and the total population of the world will have increased by 3 billion people. A very large amount of land may be required, depending on the change in yield per hectare. Scientists are concerned that this large amount of required farmland will not be available and that severe damage to the earth will be caused by the added farmland.

For the survival of mankind, agriculture is necessary. But there is a lack of interest in agricultural activities in the minds of people. To overcome these difficulties and reduce land exploitation, the vertical farm concept has arisen. In this paper, I have discussed some new features to be added to these vertical farms.

II. DESIGNING FEATURES
The features mentioned here are the advanced technologies that need to be designed into the vertical farm to increase its efficiency. Here the word efficiency refers to the crop growth rate. These features will help the crops grow in a well conditioned environment. They are designed to utilize waste energies, such as waste water, and to utilize renewable sources of energy. Listed below are the features that make up the advancement.

A. Thermal Insulation
Crops grow healthily, and at a faster rate, at certain temperatures. The reason is that temperature influences the reactions that take place in growing a crop. The appropriate temperature must be steadily maintained to achieve the steady growth of a crop. But the environmental temperature may vary according to the different climatic and weather conditions. Hence, to maintain a temperature that differs from the environment, we need thermal insulation for the building. While building the outer walls that are exposed to the environment, we must take care to provide thermal insulation. It is also necessary to provide thermal insulation on the top floor of the building. For this purpose we may use cavity walls, foam concrete, loose fill insulation or lightweight aggregate. Using glass wall panels provides better thermal insulation and also allows natural light to enter the building to some extent.

B. Air-Conditioning
As mentioned above, the temperature needs to be maintained. Thermal insulation only prevents the exchange of heat with the surroundings; it cannot provide the required temperature. So we have to use air conditioners inside the building to provide the appropriate temperature for the crops, either hot or cold, depending on the crops we cultivate. As we provide thermal insulation for the building, the power we need to spend on the air-conditioners is greatly reduced. With the use of an air-conditioning system, we have the choice of adjusting the temperature. Crops will grow and mature at a faster rate if the temperature needed by the crops at their different stages of growth is promptly provided. Hence, maintaining the temperature will greatly increase crop growth as compared to conventional farming practices.

C. Drip Irrigation
Water that is required for the crops needs to be supplied from the ground or underground where the water is stored. The water supply inside the building is divided into a primary supply and a secondary supply. The primary water supply is similar to the pipeline systems used in ordinary buildings; it takes the water to each floor level. The secondary water supply takes the water to the crops. This secondary supply is achieved by irrigation, so as to reduce the consumption of water. Drip irrigation is the most economical irrigation system for saving water. A further advancement in irrigation technology is sub-surface drip irrigation, which will further reduce the water consumption, with the water reaching the root system of the crops directly. As there is no evaporation of water inside the building, the water loss above the soil surface is eliminated.

D. Water filtering system for recycling
Water is also lost below the soil surface. When cultivated land is watered in conventional farming practices, the excess water goes deeper than where the root system is located. But in vertical farms we can easily eliminate this loss by a simple technique. We set up a pool of soil for the crops. In this pool, below the soil layer, we place a thick plastic membrane which will filter off the excess water that the soil does not retain. This excess water thus gets filtered off and collected for reuse. The water filtering system is shown below; the white layer in the figure indicates the plastic membrane for filtering the water.

Fig. Water filtering system using membrane

E. Power supplement
In vertical farms, some electrical energy is required to achieve these processes. We need power for artificial lighting, water pumping and the lift systems that carry the loads. This power can be supplemented by using renewable energy sources. We can fit solar panels on the walls of the vertical farm, and on the top floor we can have windmills. Hence the power that is required can be easily supplemented.

F. Lightings for Artificial Photosynthesis
By using proper lighting for the plants we can stimulate their growth; this is termed artificial photosynthesis. This is what we see in botanical gardens. So above the crops we provide lights focused on the crops. Nowadays, LED bulbs are more energy conserving, and they are also suitable for the artificial photosynthesis process.

G. Alternating photosynthesis and respiration
A plant does not survive if light is provided all the time. It manufactures its food in the presence of light and CO2, and for this process we provide light. After some time, the plants need to sleep, so they need darkness and O2. To alternate this cycle, we have a special sensing system that measures the parts per million of CO2 molecules present in the air. After a certain limit is reached, the air with high CO2 is sucked out, the lights are switched off, and fresh air with oxygen is pumped in, which is achieved by the air-conditioners.
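The alternation just described is essentially a threshold-driven control loop. The following minimal Python sketch illustrates the idea under stated assumptions: the sensor and actuator functions are hypothetical stubs, and the paper gives no CO2 limit, so the threshold below is purely illustrative.

    import time

    CO2_LIMIT_PPM = 1000          # illustrative threshold; not specified in the paper

    def read_co2_ppm():           # stub for the CO2 sensing system described above
        raise NotImplementedError

    def set_lights(on): ...       # stub actuators, named here for illustration only
    def vent_stale_air(): ...
    def pump_fresh_air(): ...

    def alternate_cycle(poll_seconds=60):
        while True:
            if read_co2_ppm() > CO2_LIMIT_PPM:
                vent_stale_air()      # suck out the CO2-rich air
                set_lights(False)     # begin the dark (respiration) phase
                pump_fresh_air()      # supply oxygen-rich air
            else:
                set_lights(True)      # resume the light (photosynthesis) phase
            time.sleep(poll_seconds)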

H. Utilization of Organic wastes
The amount of organic waste material generated in metropolitan cities is huge, and it can ultimately be used as fertilizer in these vertical farm buildings. The waste products of the whole city are collected in one place for disposal or recycling. From there we can get organic wastes, and we can adopt any method for converting them into a good fertilizer. The vermicompost pit method is very suitable.

III. ADVANTAGES OF VERTICAL FARM

A. Ultimate benefits that help the Green Economy
Chemical and Pesticide-Free: Due to a controlled indoor growing environment, vertical farm facilities are not affected by natural pests, and therefore do not require chemicals or pesticides to ensure a healthy crop. Vertical farms offer consumers safe, healthy and organic produce, 365 days a year.
Freshest Produce Available: Vertical farms produce fresh, pesticide-free, local greens that not only taste better, but have a much longer shelf-life compared to most other produce, which has to be harvested weeks in advance and then trucked close to 2,000 miles before making it to the market.
Buying Local: Vertical farm facilities bring the farm right to the people. This means that urban dwellers can buy produce that was grown as close as 5 miles to their homes. Vertical farms make buying local accessible to everyone, every season of the year.
Increases productivity: The vertical farm system grows plants in cubic feet versus square feet, producing yields up to ten times greater than traditional greenhouses, and up to 100 times greater than conventional field agriculture, on a per square foot basis.
Year-round production: Due to the controlled indoor environment, vertical farms can produce crops 24 hours a day, 7 days a week, 365 days a year. This means stable and consistent revenue for vertical farm facility owners/operators.
Strengthens local economy: When you purchase vertical farm produce from your local vertical farm, your dollars stay within the community and circulate throughout, giving other local businesses a healthy boost.

B. Environmental benefits
Eliminates the need for chemical pesticides: Because plants are grown in a controlled indoor environment, the facilities are not affected by pests. Pesticides are not only an environmental and health concern, but also represent an additional cost in field agriculture.
Reduces water pollution: Run-off from chemical fertilizers used on commercial farms often contaminates nearby water supplies.
Uses less water: The vertical farm's closed-loop hydroponic irrigation systems use only 20% of the amount of water required in conventional field agriculture. That is 5 times less water per sq. ft. of production.
Reduces fossil fuel use: Since vertical farms grow produce in close proximity to end consumers, limited transportation is required between the production site and the market, significantly decreasing fuel usage and greenhouse gases. Additionally, no machinery (e.g., tractors) is required to plant or harvest a vertical farm crop, as is required in field agriculture.
Minimizes wastewater: The watering injection system recycles water, generating little to no waste. Wastewater is one of the most significant environmental costs associated with traditional methods of hydroponics.
Re-purposes existing structures: Vertical farms can be built on underutilized or abandoned properties, reducing their environmental footprint.

C. Retailer benefits
Price stability: Due to year-round production and controlled growing conditions, vertical farm produce does not fall victim to seasonal availability or price swings. Vertical farms are able to offer a fixed and competitive price for produce year-round.
Consistent and reliable crop: Due to the controlled indoor growing environment, vertical farm produce is not affected by crop loss due to natural disasters or weather related issues, such as droughts or floods. Vertical farms offer suppliers a consistent quality crop regardless of the time of year or the outdoor climate.
Longer shelf-life: Vertical farm crops remain attached to their roots until they reach the cooler, which makes for a much longer shelf life than crops harvested by traditional methods. The produce can be delivered to stores the same day it is harvested.

D. Social Benefits
Creates local jobs: A vertical farm facility creates employment opportunities that pay livable wages, plus benefits, for local residents.
Promotes economic growth: Vertical farms create opportunities for community entrepreneurs to grow and sell the produce, and replace imported goods with local goods.
Promotes social responsibility: Vertical farms are committed to sustainable design and building practices, as well as superior energy efficiency in all operations.
Vertical farms are good neighbors: Vertical farms partner with local businesses and community leaders to make sure that the community's needs and concerns are addressed. The goal is to create jobs, economic growth, and a healthier environment.

Case study:
Some existing vertical farms and vertical farms under construction were used for case study. Notable among them are Sky Green in Singapore, Farmed Here and Bedfork in the United States, Gotham Greens in New York City, and Local Garden in Vancouver, Canada.

Conclusion:
The fast urbanisation of recent decades affects agriculture, as there is no proper land left for cultivation. The conventional methods of agriculture also make the soil non-fertile and thus drive the clearing of further land. Although building these vertical farms involves some initial costs, they are the best way to overcome land depletion and save agriculture. Hence, I conclude that vertical farms are the best choice for agriculture's future.


SUSTAINABILITY THROUGH CRYOGENIC WATER FUEL

V.Prasanth1, Dr.R.Gnanaguru2
1 Final year Mechanical, Narasus Sarathy Institute of Technology, Salem. Email: prasanthvbala@gmail.com
2 Professor & Head / Mechanical, Narasus Sarathy Institute of Technology, Salem. Email: nsithodmech@gmail.com

ABSTRACT:
Today we are going to pour a few drops of water into our car's fuel tank and triple our mileage; we are going to electrolyze hydrogen from our municipal water supply and run our house; and with a cup of seawater, the most plentiful substance available on earth, we are going to extract energy from water and solve the world's energy crisis. But the existing hydrolysis process suffers a series of disadvantages; thereby, the idea is to apply the cryogenic principle to hydrogen gas to convert it into liquid hydrogen for better utilization as a fuel in automobiles. Thus it serves as a renewable source of energy.

Keywords: Cryogenics, Electrolysis, Hydrogen.

1. INTRODUCTION
In this work, hydrogen is separated from water through a process known as electrolysis. In electrolysis, hydrogen is separated from water by passing current through the water via electrodes. The hydrogen is trapped separately from the electrolysis kit and mixed with air for a better calorific value. This mixture of hydrogen and air is sent to the engine, where combustion takes place. The exhaust produced is water vapor, and this helps to minimize pollution.

2. PROPERTIES OF HYDROGEN
Hydrogen is the simplest, lightest and most plentiful element in the universe. Hydrogen is made up of one proton and one electron revolving around the proton. In its normal gaseous state, hydrogen is colorless, odorless, tasteless and non-toxic, burns invisibly, and is over eleven thousand times lighter than water. It is very abundant, being an ingredient of water and of many other substances, especially those of animal or vegetable origin. It may be produced in many ways, but is chiefly obtained by the action of acids (such as sulphuric) on metals such as zinc, iron, etc. It is very inflammable, and is an ingredient of coal gas and water gas. It is the standard of chemical equivalents or combining weights, and also of valence, being the typical monad. Symbol H. Atomic weight 1.

3. COMBUSTION OF HYDROGEN
Hydrogen also has combustion qualities as a fuel and energy source. The hydrogen must first be broken out of its compound form with oxygen as water (H2O) using electrolysis, or gathered by other means, as it does not occur naturally by itself. Hydrogen cannot be mined or drilled as with fossil fuels. The production of hydrogen by the electrolysis process requires an electrical energy input. With this generated hydrogen, combustion can be carried out.

4. PARTS NEEDED FOR EXTRACTION OF HYDROGEN GAS
- Electrolysis kit (to do the electrolysis process)
- Compressor (to compress the hydrogen produced by the electrolysis process)
- D.C. motor (to drive the compressor)
- Rheostat (to vary the speed of the compressor)
- L.P.G. tank (to store the hydrogen at high pressure)
- Gas kit (fixed on the tank for safety)

a. Electrolysis Process
Electrolysis Process: The process by which we generate hydrogen (and oxygen) from water is called electrolysis. The word "lysis" means to dissolve or break apart, so the word "electrolysis" literally means to break something apart (in this case water) using electricity. Electrolysis of water is the decomposition of water (H2O) into oxygen (O2) and hydrogen gas (H2) due to an electric current being passed through the water. This electrolytic process is used in some industrial applications when hydrogen is needed.

Battery: The battery is an electro-chemical device for converting electrical energy into chemical energy. It stores the electrical energy in the form of chemical energy and provides a current for the electrolysis process. It is a 12 V battery which supplies 7 A of current. Electrical wires are connected between the battery and the electrodes for the passage of current from the battery to the electrodes.

Fig 1: Battery and electrode.

Electrolysis Kit: The electrolysis kit is made with safety measures. A cooker is used as the electrolysis kit. In the top of the kit two graphite electrodes are inserted. Water is filled inside the electrolysis kit and then current is passed through the electrodes to break up the water.

Fig 2: Electrolysis process in cooker.

A safety valve is a valve mechanism for the automatic release of a substance from a boiler, pressure vessel, or other system when the pressure or temperature exceeds preset limits. It is part of a bigger set named pressure safety valves (PSV) or pressure relief valves (PRV). The other parts of the set are named relief valves, safety relief valves, pilot-operated safety relief valves, low pressure safety valves, and vacuum pressure safety valves.

Adding Electrolyte: Electrolysis of pure water is very slow. Pure water has an electrical conductivity about one millionth that of seawater. So some electrolytic solution must be added, such as sulphuric acid (H2SO4) or a base or a salt; this makes it easier to split the hydrogen and oxygen.

Electrolysis of Water: An electric power source (the battery) is connected to the two electrodes (positive and negative). When electricity is sent into the water through the electrodes, the oxygen atoms are attracted to the anode (positive electrode) and the hydrogen atoms are attracted to the cathode (negative electrode). These split atoms appear as small bubbles. The amount of hydrogen generated is twice the amount of oxygen, i.e. more bubbles come from the cathode, because in water there are twice as many hydrogen atoms as oxygen atoms (H2O: two hydrogen atoms and one oxygen atom).

b. Compressor: A compressor is a machine that increases the pressure of a gas or vapor (typically air), or a mixture of gases and vapors. The pressure of the air is increased by reducing the specific volume of the air during its passage through the compressor. When compared with centrifugal or axial-flow fans on the basis of discharge pressure, compressors are generally classed as high-pressure and fans as low-pressure machines.

Fig 3: Compressor

Role of Compressor: In this system the compressor is used to compress the hydrogen produced by the electrolysis process, because the hydrogen produced by electrolysis will not be at a sufficient pressure to store in a tank. So a compressor must be used to boost the pressure of the hydrogen. The calorific value of the hydrogen produced from electrolysis is 13,000 kJ/m3.
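The slowness of the electrolysis noted above can be checked with Faraday's law. A back-of-envelope Python sketch for the 12 V / 7 A battery described earlier, assuming ideal current efficiency (which a real cell will not reach):

    F = 96485.0                    # Faraday constant, C/mol
    I, t = 7.0, 3600.0             # cell current (A) and one hour of operation (s)

    mol_h2 = I * t / (2 * F)       # two electrons are needed per H2 molecule
    litres_h2 = mol_h2 * 22.4      # ideal-gas volume at STP, litres
    print(round(mol_h2, 3), "mol =", round(litres_h2, 1), "L of H2 per hour")
    # ~0.13 mol, i.e. only about 2.9 L of hydrogen gas per hour

This is why the gas must be accumulated, compressed and stored rather than consumed directly as it is produced.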

c. D.C. Motor: When a current passes through the coil wound around a soft iron core, the side of the positive pole is acted upon by an upward force, while the other side is acted upon by a downward force. According to Fleming's left hand rule, these forces cause a turning effect on the coil, making it rotate. To make the motor rotate in a constant direction, "direct current" commutators make the current reverse in direction every half a cycle (in a two-pole motor), thus causing the motor to continue to rotate in the same direction.

Role of D.C. Motor: The D.C. motor is used to run the compressor. There must be an input to run the compressor, and so the motor is used to drive it. The input for the motor is the 12 V battery. The current from the battery runs the motor, and since a chain drive connects the motor to one end of the compressor (the end for the gear drive), the compressor is thus able to run.

Fig 4: Compressor and Motor

Fig 5: Compressor, Motor and Rheostat

e. Rheostat: A rheostat is an electrical component that has an adjustable resistance. It is a type of potentiometer that has two terminals instead of three. The two main types of rheostat are the rotary and the slider. The symbol for a rheostat is a resistor symbol with an arrow diagonally across it. Rheostats are used in many different applications, from light dimmers to the motor controllers in large industrial machines. In this system, the rheostat is used to vary the speed of the compressor at the required torque.

f. Hydrogen tank: While the electrolysis process is going on, the engine is also running on hydrogen. If the hydrogen tank were not present, after some time there would be no hydrogen left to keep the engine running, because the electrolysis process is very slow. For these reasons the hydrogen is stored at high pressure in a 2 kg cylindrical tank. This tank has two valves, inlet and outlet. The inlet is for taking the compressed hydrogen gas from the compressor. The other valve is for releasing the gas present in the tank to the vaporizer. A gas regulator is employed at the outlet valve to prevent backfire when it occurs.

g. Gas regulator: A gas regulator is also proposed as a safety measure. If the gas regulator were not present in this system, a backfire could burst the tank, just as with a domestic cooking cylinder. This gas regulator is fixed at the top of the tank.

Fig 6: Tank with Gas Regulator

5. SYSTEM WORKING PRINCIPLE
Since the motor is running and it drives the compressor by a chain drive, the compressor sucks in the produced hydrogen to compress it to a high pressure. From the outlet of the compressor the pressurized hydrogen flows out, owing to the lower pressure at the tank. When a low pressure is created at the vaporizer, which is next to the tank, the compressed hydrogen flows to the vaporizer through the gas regulator. The low pressure produced in the vaporizer is due to the acceleration of the vehicle, just as in the engine carburetor principle. The pressurized hydrogen is then passed to the carburetor through the vaporizer, and the hydrogen from the carburetor is passed to the engine, where the engine begins to start.

6. CRYOGENICS
The word cryogenics stems from Greek and means "the production of freezing cold". Cryogenics is the study of the production and behavior of materials at very low temperatures (below -150 C, -238 F or 123 K). A person who studies elements that have been subjected to extremely cold temperatures is called a cryogenicist. The field of cryogenics advanced during World War II, when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Ed Busch.

7. PROPERTIES OF LIQUID HYDROGEN
The byproduct of its combustion with oxygen alone is water vapor (although if its combustion is with oxygen and nitrogen it can form toxic chemicals), which can be cooled with some of the liquid hydrogen. Since water is considered harmless to the environment, an engine burning it can be considered "zero emissions". Liquid hydrogen also has a much higher specific energy than gasoline, natural gas, or diesel. The density of liquid hydrogen is only 70.99 g/L (at 20 K), a relative density of just 0.07. Although its specific energy is around twice that of other fuels, this gives it a remarkably low volumetric energy density, many fold lower.

Liquid hydrogen requires cryogenic storage technology, such as special thermally insulated containers, and requires special handling common to all cryogenic fuels. This is similar to, but more severe than, the handling of liquid oxygen. The triple point of hydrogen is at 13.81 K and 7.042 kPa.

Fig 7: Liquid Hydrogen stored in the tank
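The quoted density makes the volumetric claim easy to check. Taking the commonly cited lower heating value of hydrogen, about 120 MJ/kg (a figure not given in the text and assumed here), the volumetric energy density of liquid hydrogen is roughly

\[ 70.99\ \mathrm{g/L} \times 0.120\ \mathrm{MJ/g} \approx 8.5\ \mathrm{MJ/L}, \]

several times below the roughly 32 MJ/L of gasoline, even though the energy per kilogram is more than twice as high.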

8. PROCESS OF LIQUEFACTION
The hydrogen collected from the various processes is subjected to liquefaction. The liquefaction of gases is a complicated process that uses various compressions and expansions to achieve high pressures and very low temperatures. It is done mainly by these two processes:
1. Linde's process
2. Claude's process

a. LINDE'S PROCESS
Air is liquefied by the Linde process, in which the air is alternately compressed, cooled, and expanded, the expansion resulting each time in a considerable reduction in temperature. At the lower temperature the molecules move more slowly and occupy less space, so the air changes phase to become liquid.

Fig 8: Linde's Process

b. CLAUDE'S PROCESS
Air can also be liquefied by Claude's process, in which the gas is allowed to expand isentropically twice in two chambers. While expanding, the gas has to do work as it is led through an expansion turbine. The gas is not yet liquefied at this stage, since liquid would destroy the turbine. Final liquefaction takes place by isenthalpic expansion in a Joule-Thomson valve.

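For hydrogen, a standard thermodynamic caveat (not stated in the text) applies to the final Joule-Thomson step: isenthalpic expansion cools the gas only where the Joule-Thomson coefficient

\[ \mu_{JT} = \left( \frac{\partial T}{\partial p} \right)_{H} \]

is positive. Hydrogen's inversion temperature is only about 200 K, so in a Linde-type cycle the hydrogen must first be precooled (for example with liquid nitrogen) before throttling will cool and eventually liquefy it.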

9. FUEL TANK
Liquid hydrogen is the fuel that we are going to use in our system. The pressure of the tank is maintained at 350 atmospheres. The idea is to use a closed cycle helium refrigeration system to sustain the hydrogen as a liquid.


Fig 9: Hydrogen Storage Arrangement

10. WORKING PRINCIPLE
Since the BHP produced in the burning of liquid hydrogen is very large, it cannot be used directly. Instead, the idea is to use a controlled injection system in this operation. The stroke length of the piston is increased and the bore diameter is reduced, so that all the power is transmitted to the flywheel. The liquid hydrogen combines with air only in the combustion chamber. Triple spark plug technology is used so that all the fuel is burned away. Since a large amount of air is required for the combustion, a turbocharger is used, and its motive power is given to the blower to suck in the air. When the combustion takes place purely with oxygen alone, the exhaust is just water vapor.

11. CONCLUSION
Water serves as a good source of energy for fuel. This lets us imagine filling fuel from our water tank, which simply sounds awesome. Though applying cryogenics seems costlier, when commercialized it becomes very cheap compared to petrol or diesel.


SILVERLINE FOR THE BLIND

VENKATRAMAN.R#, VIKRAM.V#, SURENDHAR.R#
Department of Information Technology,
Dhanalakshmi College of Engineering,
Tambaram, Manimangalam, Chennai.
venkatraman094@gmail.com, vasuvikram5@gmail.com, surensrn3@gmail.com

ABSTRACT
This paper addresses what can be done when any tissue or layer of the retina, its cells or the optic nerves of the eye get damaged. Blindness is more feared by the public than any other ailment. Artificial vision for the blind was once the stuff of science fiction; now a limited form of artificial vision is a reality, and we are at the beginning of the end of blindness with this type of technology. In an effort to illuminate the perpetually dark world of the blind, researchers are turning to technology. They are investigating several electronic-based strategies designed to bypass various defects or missing links along the brain's image processing pathway and provide some form of artificial sight.

Keywords - Vision, ASR, ARCC.

I. INTRODUCTION
This paper is about curing blindness. Linking electronics and biotechnology, scientists have made a commitment to the development of technology that will provide or restore vision for the visually impaired around the world. This paper describes the development of the artificial vision system, which cures blindness to some extent. It explains the process involved and the concepts of the artificial silicon retina, cortical implants, etc. The roadblocks that arise are also explained clearly. Finally, the advancements made in this system and its future scope are presented.

Artificial-vision researchers take inspiration from another device, the cochlear implant, which has successfully restored hearing to thousands of deaf people. But the human vision system is far more complicated than that of hearing. The eye is one of the most amazing organs in the body. Before we understand how artificial vision is created, it is important to know about the important role that the retina plays in how we see. Here is a simple explanation (Figures 1 & 2) of what happens when we look at an object:
- Scattered light from the object enters through the cornea.
- The light is projected onto the retina.
- The retina sends messages to the brain through the optic nerve.
- The brain interprets what the object is.

Figure (1): Architecture of eye


Figure (2): Anatomy and its path view of eye

The retina is complex in itself. This thin membrane at the back of the eye is a vital part of our ability to see. Its main function is to receive and transmit images to the brain. Three main types of cells in the eye help perform this function: rods, cones and ganglion cells. The information received by the rods and cones is transmitted to the nearly 1 million ganglion cells in the retina. These ganglion cells interpret the messages from the rods and cones and send the information on to the brain by way of the optic nerve. There are a number of retinal diseases that attack these cells, which can lead to blindness. The most notable of these diseases are retinitis pigmentosa and age-related macular degeneration. Both of these diseases attack the retina, rendering the rods and cones inoperative, causing either loss of peripheral vision or total blindness. However, it has been found that neither of these retinal diseases affects the ganglion cells or the optic nerve. This means that if scientists can develop artificial cones and rods, information could still be sent to the brain for interpretation. This concept laid the foundation for the invention of the ARTIFICIAL VISION SYSTEM technology.

II. HOW TO CREATE ARTIFICIAL VISION?
The current path that scientists are taking to create artificial vision received a jolt in 1988, when Dr. Mark Humayun demonstrated that a blind person could be made to see light by stimulating the nerve ganglia behind the retina with an electrical current. This test proved that the nerves behind the retina still functioned even when the retina had degenerated. Based on this information, scientists set out to create a device that could translate images into electrical pulses that could restore vision. Today, such a device is very close to being available to the millions of people who have lost their vision to retinal disease. In fact, there are at least two silicon microchip devices being developed. The concept for both devices is similar, with each being:
- Small enough to be implanted in the eye
- Supplied with a continuous source of power
- Biocompatible with the surrounding eye tissue

Figure (3): The dot above the date on this penny is the full size of the Artificial Silicon Retina.

Perhaps the most promising of these two silicon devices is the ARTIFICIAL SILICON RETINA (ASR). The ASR is an extremely tiny device. It has a diameter of just 2 mm (0.078 inch) and is thinner than a human hair (Figure 3). In order for an artificial retina to work, it has to be small enough for doctors to transplant it in the eye without damaging the other structures within the eye. Groups of researchers have found that blind people can see spots of light when electrical currents stimulate cells, following the experimental insertion of an electrode device near or into their retina. Some patients even saw crude shapes in the form of these light spots. This indicates that despite damage to cells in the retina, electronic techniques can transmit signals to the next step in the pathway and provide some form of visual sensation. Researchers are currently developing more sophisticated computer chips with the hope that they will be able to transmit more meaningful images to the brain.

III. How does ARTIFICIAL SILICON


RETINA works?

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

134

www.iaetsd.in

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH, ICCTER - 2014

The ASR contains about 3,500 microscopic


solar cells that are able to convert light into electrical
pulses, mimicking the function of cones and rods. To
implant this device into the eye, surgeons make three
tiny incisions no larger than the diameter of a needle
in the white part of the eye. Through these incisions,
the surgeons introduce a miniature cutting and
vacuuming device that removes the gel in the middle
of the eye and replaces it with saline. Next, a pinpoint
opening is made in the retina through which they
inject fluid to lift up a portion of the retina from the
back of the eye, which creates a small pocket in the
subretinal space for the device to fit in. The retina is
then resealed over the ASR (Figure 4).

A. WORKING OF ARTIFICIAL VISION SYSTEM:
The main parts of this system are a miniature
video camera, a signal processor, and a brain
implant. The tiny pinhole camera, mounted on a pair
of eyeglasses, captures the scene in front of the
wearer and sends it to a small computer on the
patient's belt. The processor translates the image into
a series of signals that the brain can understand and
then sends the information to a brain implant
placed in the patient's visual cortex. And, if everything
goes according to plan, the brain will "see" the
image.
Light enters the camera, which then sends
the image to a wireless wallet-sized computer for
processing. The computer transmits this information
to an infrared LED screen on the goggles. The
goggles reflect an infrared image into the eye and on
to the retinal chip, stimulating photodiodes on the
chip. The photodiodes mimic the retinal cells by
converting light into electrical signals, which are then
transmitted by cells in the inner retina via nerve
pulses to the brain. The goggles are transparent, so if
the user still has some vision, they can match that
with the new information; the device would cover
about 10 degrees of the wearer's field of vision.
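To make the image-to-stimulation step concrete, here is a
minimal sketch in Python of how a belt-worn processor might
reduce a camera frame to per-electrode levels. It is an
illustration only, not the device's actual firmware; the 10 by
10 grid matches the ARCC resolution discussed below, and the
synthetic frame stands in for real camera input.

import numpy as np

GRID = 10  # assumed 10 x 10 electrode grid, per the ARCC discussion

def frame_to_stimulation(frame):
    # Average a grayscale frame down to GRID x GRID stimulation
    # levels in [0, 1], one value per electrode/photodiode.
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    return blocks.mean(axis=(1, 3)) / 255.0

# Example: a synthetic 240 x 320 frame containing one bright square.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[80:160, 120:200] = 255
print(frame_to_stimulation(frame).round(2))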

Figure (4): Here you can see where the ASR is placed between the
outer and inner retinal layers.

The patient wears sunglasses
with a tiny pinhole camera mounted on one lens and
an ultrasonic range finder on the other. Both devices
communicate with a small computer carried on the
hip, which highlights the edges between light and
dark areas in the camera image (Figure 5). It then
tells an adjacent computer to send appropriate signals
to an array of small electrodes on the surface of the
patient's brain, through wires entering the skull.

For any microchip to work it needs power,
and the amazing thing about the ASR is that it
receives all of its needed power from the light
entering the eye: once the implant is in place behind
the retina, the light reaching it is enough to run it.
This solar energy
eliminates the need for any wires, batteries or other
secondary devices to supply power. Another
microchip device that would restore partial vision,
the artificial retina component chip (ARCC), is
currently in development and is quite similar
to the ASR. Both are made of silicon and both are
powered by solar energy. The ARCC is also a very
small device, measuring 2 mm square and
0.02 millimeters (0.00078 inch) thick. There are
significant differences between the devices, however.
According to researchers, the ARCC will give blind
patients the ability to see 10 by 10 pixel images,
which is about the size of a single letter on this page.
However, researchers have said that they could
eventually develop a version of the chip that would
allow a 250 by 250 pixel array, which would allow
those who were once blind to read a newspaper.


IV. ADVANCEMENTS IN CREATING ARTIFICIAL VISION:
Ceramic optical detectors based on the
photo-ferroelectric effect are being developed for
direct implantation into the eyes of patients with
retinal dystrophies. In retinal dystrophies where the
optic nerve and retinal ganglia are intact (such as
retinitis pigmentosa), a direct retinal implant of an
optical detector to stimulate retinal ganglia could
allow patients to regain some sight. In such cases
additional wiring to the brain cortex is not required,
and for biologically inert detectors, surgical
implantation can be quite direct. The detector
currently being developed for this application is a
thin film ferroelectric detector, which under optical
illumination can generate a local photocurrent and
photovoltage. The local electric current generated by
this miniature detector excites the retinal neural
circuit, resulting in a signal at the optic nerve that may
be interpreted by the cortex of the brain as "seeing
light". Detectors based on PbLaZrTiO3 (PLZT) and
BiVMnO3 (BVMO) films exhibit a strong photo
response in the visible range, overlapping the eye's
response from 380 nm to 650 nm.
The thin film detector heterostructures have
been implanted into the eyes of rabbits for
biocompatibility tests, and have shown no
biological incompatibilities. The bionic devices
tested so far include both those attached to the back
of the eye itself and those implanted directly in the
brain. Patients with both types of implants describe
seeing multiple points of light and, in some cases,
crude outlines of objects. Placing electrodes in the
eye has proved easier. During the past decade, work
on these retinal implants has attracted growing
government funding and commercial interest. Such
implants zap electrical signals to nerves on the back
of the eye, which then carry them to the brain.
However, since these devices take advantage of
surviving parts of the eye they will help only the
subset of blind people whose blindness is due to
retinal disease, by some estimates about 30% of the
blind. Moreover, scientists don't believe any implant
could help those blind since birth, because their
brains never have learned to recognize vision.

Figure (5): Illustrating the AV system.

The electrodes stimulate certain brain cells, making
the person perceive the specks of light. The shifting
patterns, as the camera scans across a scene, tell him where light
areas meet dark ones, letting him find the black cap
on the white wall, for example. The device provides a
sort of tunnel vision, reading an area about the size of
a card 2 inches wide and 8 inches tall, held at arm's
length.

V. VISIBILITY:
It has been demonstrated in some studies
that to a sighted person, an image resolution of some
64 by 64 pixels is (more than) enough to get easily
recognizable images. See for instance the reference at
the end of this page, which suggested a lower limit of
around 625 pixels. Similarly, a study by Angélica
Pérez Fornos suggested a minimum of 400 to 500
pixels for reading text, with less than a factor of two
further reduction in the case of real-time visual
feedback. Thus 1000 pixels should do for many
purposes, but some 64 pixels (for instance arranged in
an 8 by 8 matrix) or fewer rarely gives recognizable
images to a sighted person, so we cannot expect this
to be any better in an alternative display that is likely
to be much cruder than what Nature normally
provides us with.

The effect of image resolution is further
illustrated in Figure (6), where a photograph has been
pixelized to 4 by 4, 8 by 8, 12 by 12, 16 by 16, 32 by
32, 64 by 64 and 128 by 128 pixels, respectively. The
images there still include shading, while some
implants may give little more than on/off signals per
pixel or phosphene.

Figure (6): Illustrating the resolution of images.
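The pixelization of Figure (6) can be reproduced with a short
script. The sketch below uses Python with the Pillow imaging
library; the file name photo.jpg is a placeholder. It averages
an image down to an n by n grid, enlarges it again so the
blocks are visible, and can also threshold each block to the
kind of on/off phosphene signal some implants provide.

from PIL import Image

def pixelize(path, n):
    # Reduce to an n x n grid, then blow back up without smoothing.
    img = Image.open(path).convert("L")          # grayscale, keeps shading
    small = img.resize((n, n), Image.BILINEAR)
    return small.resize(img.size, Image.NEAREST)

def pixelize_binary(path, n, cutoff=128):
    # On/off phosphenes instead of shading: threshold each block.
    return pixelize(path, n).point(lambda p: 255 if p >= cutoff else 0)

for n in (4, 8, 12, 16, 32, 64, 128):
    pixelize("photo.jpg", n).save("pixelized_%dx%d.png" % (n, n))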

VI. BOTTLENECKS RAISED BY THIS
TECHNOLOGY:
A. The first and foremost issue is cost. The
miniaturization of equipment and more powerful
computers have made artificial vision possible, but it
is not cheap: the operation, equipment and necessary
training cost about $4,800 per patient, and may be
much higher depending on the context and severity.
B. It may not work for people blinded as
children or as infants, because their visual cortex did
not develop normally. But it will work for the vast
majority of the blind -- 98 to 99 percent.
C. Researchers caution, however, that
artificial vision devices are still highly experimental
and practical systems are many years away. Even
after they are refined, the first wave will most likely
provide only crude images, such as the outline of a
kitchen doorway. The device does not function as
well as the real eye and does not give crystal-clear
vision (as it is only a camera); it is a very limited
navigational aid, and a far cry from the visual
experience normal people enjoy.
D. The current version has only 96
stimulating electrodes, while researchers are working
on a bionic eye that contains 1024 stimulating
electrodes. The 1024 electrode system will give a
much higher image resolution.
E. Visual acuity is not the only challenge for
these devices. The hard, silicon-based chips are
placed in an extremely delicate tissue that is part of a
fluid-filled moving orb. Chips can slip out of place.
Furthermore, the implanted devices need to survive
the tough conditions of the body for years without
harming their users.
VII. OTHER REASONS CAUSING
BLINDNESS AND THEIR REMEDIES:


The main aim of Artificial Vision is to
restore some degree of sight to the profoundly blind.
Since blindness can result from defects at many
different points along the visual pathway, there is
accordingly a wide variety of proposed models for an
"Artificial Eye".


The earliest stage of visual processing is the
transduction of light into electrical signals by the
photoreceptors. If this is the only process that is
interrupted in a blind individual, he or she may
benefit from a sub-retinal prosthesis, a device that is
designed to replace only the photoreceptors in the
retina. However, if the optic nerve itself is damaged,
the only possibility for restoring sight is to directly
stimulate the visual cortex; a cortical prosthesis is
designed specifically for this task. Although the
categories presented account for most of the research
in Artificial Vision, there are a few more exotic
techniques being developed. One of these is the Bio
Hybrid Implant, a device that incorporates living cells
with man-made elements.


Regardless of the specific design, all of
these devices are working towards the same goal: a
permanent replacement for part of the human visual
system.


Figure (7): Illustrating the elimination of avoidable
blindness.

VIII. ANALYSIS:
Though this technology is welcome for a blind
person suffering from retinal disease, it may be noted
that even after spending some $5,000 on the most
complicated technology devices, implanted before
and behind the retina, the blind person does not get
crystal clear vision. Moreover, the artificial miniature
devices are placed in three different places, namely
inside the eye, near the brain, and outside the chest in
the form of a belt. Here one question arises: how can
the blind person operate the said miniature computer
and coordinate it with the other devices inside the
eye? Practically, a blind person would like to move
either with a physical assistant or with the technically
developed assistance, but not with both.

IX. CONCLUSION:
In view of the above analysis we conclude
that researchers may go deeper into this matter and
find a new, simple and compact technical device
using the latest welcome nanotechnology, which may
lessen the burden of carrying multiple devices and
also lessen the cost factor. We would like to mention
that angioplasty for the insertion of artificial valves
into the heart was successfully replaced some years
ago by a simple needle insertion directly into the
heart using nanotechnology. Hence the same
technology may be developed for eye diseases such
as retinal disease, death of eye cells, blockage of the
optic nerve, etc. We hope this may also result in
crystal clear vision without carrying miniature
devices inside and outside the body, and finally that
this nanotechnology may some day help the blind
person to move without any personal assistance.

ACKNOWLEDGMENT:
We would like to express our sincere thanks
to Mr. Arul (HOD, Information Technology, DCE,
Chennai) and Mr. Ramakrishnan (Senior Faculty,
Information Technology, DCE, Chennai), who were
abundantly helpful and supported us throughout this
research work.

REFERENCES:
[1] Humayun MS, de Juan E Jr., Dagnelie G, et
al., "Visual perception elicited by electrical
stimulation of retina in blind humans",
Archives of Ophthalmology, vol. 114.
[2] "Artificial Vision for the Blind by
Connecting a Television Camera to the
Brain", ASAIO Journal, 2000.
[3] www.artificialvision.com
[4] www.howstuffworks.com
[5] www.wikipedia.org


CYBER CRIME AND SECURITY
P.V.SubbaReddy
3rd year-CSE
SRM University,
Chennai.
Mobile no:9444153735

ABSTRACT:
The terms computer crime and cybercrime are more properly restricted to describing criminal
activity in which the computer or network is a necessary part of the crime, these terms are also
sometimes used to include traditional crimes, such as fraud, theft, blackmail, forgery, and
embezzlement, in which computers or networks are used. As the use of computers has grown,
computer crime has become more important.
Computer crime can broadly be defined as criminal activity involving an information technology
infrastructure, including illegal access (unauthorized access), illegal interception (by technical
means of non-public transmissions of computer data to, from or within a computer system), data
interference (unauthorized damaging, deletion, deterioration, alteration or suppression of
computer data), systems interference (interfering with the functioning of a computer system


by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer
data), misuse of devices, forgery (ID theft), and electronic fraud.
Computer crime issues have become high-profile, particularly those surrounding hacking,
copyright infringement through warez, child pornography, and child grooming. There are also
problems of privacy when confidential information is lost or intercepted, lawfully or otherwise.

CONTENTS:

Cyber crime
Specific computer crimes
o Spam
o Phishing
o Fraud
o Obscene or offensive content
o Harassment
o Drug trafficking
o Cyberterrorism
Documented cases
Security
Approaches
Some techniques
Applications
Conclusion.
References

CYBER CRIME:
Why learn about cybercrime?
Because
Everybody is using COMPUTERS.
From white-collar criminals to terrorist organizations, and from teenagers to adults.
Conventional crimes like forgery, extortion, kidnapping etc. are being committed with the
help of computers.
The new generation is growing up with computers.


MOST IMPORTANT - monetary transactions are moving on to the INTERNET.

Computer crime, cybercrime, e-crime, hi-tech crime or electronic crime generally refers to
criminal activity where a computer or network is the source, tool, target, or place of a crime.
Computer crime encompasses a broad range of potentially illegal activities. Generally, however, it
may be divided into two categories:
(1) crimes that target computer networks or devices directly;
(2) crimes facilitated by computer networks or devices, the primary target of which is
independent of the computer network or device.
Examples of crimes that primarily target computer networks or devices would include:
Malware and malicious code
Denial-of-service attacks
Computer viruses

Examples of crimes that merely use computer networks or devices would include:
Cyber stalking
Fraud and identity theft
Phishing scams
Information warfare

A common example is when a person steals information from, or causes damage to, a
computer or computer network. This can be entirely virtual in that the information only exists in
digital form, and the damage, while real, has no physical consequence other than the machine
ceases to function. In some legal systems, intangible property cannot be stolen and the damage
must be visible, e.g. as resulting from a blow from a hammer. Where human-centric terminology
is used for crimes relying on natural language skills and innate gullibility, definitions have to be
modified to ensure that fraudulent behavior remains criminal no matter how it is committed.
A computer can be a source of evidence. Even though the computer is not directly used for
criminal purposes, it is an excellent device for record keeping, particularly given the power to
encrypt the data. If this evidence can be obtained and decrypted, it can be of great value to
criminal investigators.
In news:

1 out of 5 children received a sexual solicitation or approach over the Internet in a one-year period (www.missingchildren.com)

California warns of massive ID theft: personal data stolen from computers at University
of California, Berkeley (Oct 21, 2004, IDG news service)


Microsoft and Cisco announced a new initiative to work together to increase internet
security (Oct 18, 2004, www.cnetnews.com)
Cyber attack: customer information misappropriated through unauthorised access to
privileged systems or other electronic means.
For example:
through tapping the ATM/POS network connection cables,
hacking into the network computer.

SPECIFIC COMPUTER CRIMES:

USING MALWARE:
Malware is malicious software - deliberately created and specifically designed to
damage, disrupt or destroy network services, computer data and software.
There are several types:
Computer virus: program which can copy itself and surreptitiously infect another computer,
often via shared media such as a floppy disk, CD, thumb drive, shared directory, etc. Viruses are
always embedded within another file or program.

Worm: self-reproducing program which propagates via the network.
Trojan horse: program which purports to do one thing, but secretly does something else;
example: a free screen saver which installs a backdoor.
Rootkit: set of programs designed to allow an adversary to surreptitiously gain full
control of a targeted system while avoiding detection and resisting removal, with the
emphasis being on evading detection and removal.
Botnet: set of compromised computers ("bots" or "zombies") under the unified command
and control of a "botmaster"; commands are sent to bots via a command and control
channel (bot commands are often transmitted via IRC, Internet Relay Chat).
Spyware: assorted privacy-invading/browser-perverting programs.
Malware: an inclusive term for all of the above -- "malicious software".
Example: David Smith and the Melissa virus.

Spam
Spam, or the unsolicited sending of bulk email for commercial purposes, is unlawful to varying
degrees. As applied to email, specific anti-spam laws are relatively new; however, limits on
unsolicited electronic communications have existed in some forms for some time. Spam
originating in India accounted for one percent of all spam originating in the top 25 spam-producing
countries, making India the eighteenth ranked country worldwide for originating spam.


Phishing
Phishing is a technique used by strangers to "fish" for information about you, information that
you would not normally disclose to a stranger, such as your bank account number, PIN, and
other personal identifiers such as your National Insurance number. These messages often contain
company/bank logos that look legitimate and use flowery or legalistic language about improving
security by confirming your identity details.

1. Fraud

Computer fraud is any dishonest misrepresentation of fact intended to induce another to do or
refrain from doing something which causes loss. In this context, the fraud will result in obtaining
a benefit by:

altering computer input in an unauthorized way. This requires little technical expertise
and is not an uncommon form of theft by employees altering the data before entry or
entering false data, or by entering unauthorized instructions or using unauthorized
processes;
altering, destroying, suppressing, or stealing output, usually to conceal unauthorized
transactions: this is difficult to detect;
altering or deleting stored data; or
altering or misusing existing system tools or software packages, or altering or writing
code for fraudulent purposes. This requires real programming skills and is not common.

Other forms of fraud may be facilitated using computer systems, including bank fraud, identity
theft, extortion, and theft of classified information (Csonka, 2000).

2. Obscene or offensive content

The content of websites and other electronic communications may be distasteful, obscene or
offensive for a variety of reasons. In some instances these communications may be illegal.
Many jurisdictions place limits on certain speech and ban racist, blasphemous, politically
subversive, libelous or slanderous, seditious, or inflammatory material that tends to incite hate
crimes.
The extent to which these communications are unlawful varies greatly between countries, and
even within nations. It is a sensitive area in which the courts can become involved in arbitrating
between groups with entrenched beliefs.

3. Harassment

Whereas content may be offensive in a non-specific way, harassment directs obscenities and
derogatory comments at specific individuals focusing for example on gender, race, religion,


nationality, sexual orientation. This often occurs in chat rooms, through newsgroups, and by
sending hate e-mail to interested parties (see cyber bullying, cyber stalking, harassment by
computer, hate crime, Online predator, and stalking). Any comment that may be found
derogatory or offensive is considered harassment.

4. Drug trafficking

Drug traffickers are increasingly taking advantage of the Internet to sell their illegal substances
through encrypted e-mail and other Internet Technology. Some drug traffickers arrange deals at
internet cafes, use courier Web sites to track illegal packages of pills, and swap recipes for
amphetamines in restricted-access chat rooms. The rise in Internet drug trades could also be
attributed to the lack of face-to-face communication. These virtual exchanges allow more
intimidated individuals to more comfortably purchase illegal drugs. The risky dealings
often associated with drug trades are greatly reduced, and the filtering process that comes
with physical interaction fades away. Furthermore, traditional drug recipes were carefully kept
secrets. But with modern computer technology, this information is now being made available to
anyone with computer access.

5. Cyberterrorism

Government officials and Information Technology security specialists have documented a
significant increase in Internet problems and server scans since early 2001. There is a growing
concern among federal officials that such intrusions are part of an organized effort by
cyberterrorists, foreign intelligence services, or other groups to map potential security holes in
critical systems. A cyberterrorist is someone who intimidates or coerces a government or
organization to advance his or her political or social objectives by launching computer-based
attacks against computers, networks, and the information stored on them.
Cyberterrorism, in general, can be defined as an act of terrorism committed through the use of
cyberspace or computer resources (Parker 1983). As such, simple propaganda on the Internet
that there will be bomb attacks during the holidays can be considered cyberterrorism. At worst,
cyberterrorists may use the Internet or computer resources to carry out an actual attack. There
are also hacking activities directed towards individuals and families, organised by groups
within networks, tending to cause fear among people, demonstrate power, collect information
relevant for ruining people's lives, and carry out robberies, blackmail etc.

Documented cases

The Yahoo! website was attacked at 10:30 PST on Monday, 7 February 2000. The attack
lasted three hours. Yahoo was pinged at the rate of one gigabyte/second.
On 3 August 2000, Canadian federal prosecutors charged MafiaBoy with 54 counts of
illegal access to computers, plus a total of ten counts of mischief to data for his attacks on
Amazon.com, eBay, Dell Computer, Outlaw.net, and Yahoo.


MafiaBoy had also attacked other websites, but prosecutors decided that a total of 66
counts was enough. MafiaBoy pleaded not guilty. About fifty computers at Stanford
University, and also computers at the University of California at Santa Barbara, were
amongst the zombie computers sending pings in DDoS attacks.
On 26 March 1999, the Melissa worm infected a document on a victim's computer, then
automatically sent that document and a copy of the virus via e-mail to other people.
On 21 January 2003, a UK virus writer who infected 27,000 PCs was jailed for two years.

SECURITY:
Computer security is a branch of technology known as
information security as applied to computers and networks. The objective of computer security
includes protection of information and property from theft, corruption, or natural disaster, while
allowing the information and property to remain accessible and productive to its intended users.
SOME APPROACHES:
There are several approaches to security in computing; sometimes a combination of approaches
is valid:
1. Trust all the software to abide by a security policy but the software is not trustworthy
(this is computer insecurity).
2. Trust all the software to abide by a security policy and the software is validated as
trustworthy (by tedious branch and path analysis for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy
(again this is computer insecurity).
4. Trust no software but enforce a security policy with trustworthy mechanisms.
HARDWARE MECHANISMS THAT PROTECT COMPUTERS AND DATA:
Hardware based or assisted computer security offers an alternative to software-only computer
security. Devices such as dongles may be considered more secure due to the physical access
required in order to be compromised.
While many software based security solutions encrypt the data to prevent data from being stolen,
a malicious program may corrupt the data in order to make it unrecoverable or unusable.
Hardware-based security solutions can prevent read and write access to data and hence offer
very strong protection against tampering.

SECURE OPERATING SYSTEMS:
One use of the term computer security refers to technology used to implement a secure operating
system. Much of this technology is based on science developed in the 1980s and used to produce
what may be some of the most impenetrable operating systems ever. Though still valid, the
technology is in limited use today, primarily because it imposes some changes to system

management and also because it is not widely understood. Such ultra-strong secure operating
systems are based on operating system kernel technology that can guarantee that certain security
policies are absolutely enforced in an operating environment. An example of such a Computer
security policy is the Bell-LaPadula model. The strategy is based on a coupling of special
microprocessor hardware features, often involving the memory management unit, to a special
correctly implemented operating system kernel. This forms the foundation for a secure operating
system which, if certain critical parts are designed and implemented correctly, can ensure the
absolute impossibility of penetration by hostile elements. This capability is enabled because the
configuration not only imposes a security policy, but in theory completely protects itself from
corruption. Ordinary operating systems, on the other hand, lack the features that assure this
maximal level of security. The design methodology to produce such secure systems is precise,
deterministic and logical.

If the operating environment is not based on a secure operating system capable of maintaining a
domain for its own execution, and capable of protecting application code from malicious
subversion, and capable of protecting the system from subverted code, then high degrees of
security are understandably not possible. While such secure operating systems are possible and
have been implemented, most commercial systems fall in a 'low security' category because they
rely on features not supported by secure operating systems (like portability, et al.). In low
security operating environments, applications must be relied on to participate in their own
protection. There are 'best effort' secure coding practices that can be followed to make an
application more resistant to malicious subversion.
In commercial environments, the majority of software subversion vulnerabilities result from a
few known kinds of coding defects. Common software defects include buffer overflows, format
string vulnerabilities, integer overflow, and code/command injection.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord,
"Secure Coding in C and C++"). Other languages, such as Java, are more resistant to some of
these defects, but are still prone to code/command injection and other software defects which
facilitate subversion.
Recently another bad coding practice has come under scrutiny: dangling pointers. The first
known exploit for this particular problem was presented in July 2007. Before this publication the
problem was known but considered to be academic and not practically exploitable.
In summary, 'secure coding' can provide significant payback in low security operating
environments, and is therefore worth the effort. Still, there is no known way to provide a reliable
degree of subversion resistance with any degree or combination of 'secure coding.'
CAPABILITIES VS. ACLS:
Within computer systems, the two fundamental means of enforcing privilege separation are
access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be


insecure in many situations (e.g., the confused deputy problem). It has also been shown that ACLs'
promise of giving access to an object to only one person can never be guaranteed in practice.
Both of these problems are resolved by capabilities. This does not mean practical flaws exist in
all ACL-based systems, but only that the designers of certain utilities must take responsibility to
ensure that they do not introduce flaws.
Capabilities have been mostly restricted to research operating systems and commercial OSs still
use ACLs. Capabilities can, however, also be implemented at the language level, leading to a
style of programming that is essentially a refinement of standard object-oriented design. An open
source project in the area is the E language.
First the Plessey System 250 and then Cambridge CAP computer demonstrated the use of
capabilities, both in hardware and software, in the 1970s. A reason for the lack of adoption of
capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive
redesign of the operating system and hardware.
The most secure computers are those not connected to the Internet and shielded from any
interference. In the real world, the most security comes from operating systems where security is
not an add-on, such as OS/400 from IBM. This almost never shows up in lists of vulnerabilities,
for good reason: years may elapse between one problem needing remediation and the next.

APPLICATIONS:
IN AVIATION
The aviation industry is especially important when analyzing computer security because the
involved risks include human life, expensive equipment, cargo, and transportation infrastructure.
Security can be compromised by hardware and software malpractice, human error, and faulty
operating environments. Threats that exploit computer vulnerabilities can stem from sabotage,
espionage, industrial competition, terrorist attack, mechanical malfunction, and human error. The
consequences of a successful deliberate or inadvertent misuse of a computer system in the
aviation industry range from loss of confidentiality to loss of system integrity, which may lead to
more serious concerns such as data theft or loss, network and air traffic control outages, which in
turn can lead to airport closures, loss of aircraft, loss of passenger life. Military systems that
control munitions can pose an even greater risk.
NOTABLE SYSTEM ACCIDENTS:
In 1994, over a hundred intrusions were made by unidentified hackers into the Rome Laboratory,
the US Air Force's main command and research facility. Using trojan horse viruses, hackers were
able to obtain unrestricted access to Rome's networking systems and remove traces of their
activities. The intruders were able to obtain classified files, such as air tasking order systems data
and furthermore able to penetrate connected networks of National Aeronautics and Space
Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some
Defense contractors, and other private sector organizations, by posing as a trusted Rome center
user. Now, a technique called Ethical hack testing is used to remediate these issues.


Electromagnetic interference is another threat to computer safety and in 1989, a United States
Air Force F-16 jet accidentally dropped a 230 kg bomb in West Georgia after unspecified
interference caused the jet's computers to release it.
A similar telecommunications accident also happened in 1994, when two UH-60 Blackhawk
helicopters were destroyed by F-15 aircraft in Iraq because the IFF system's encryption system
malfunctioned.

TERMINOLOGY:
The following terms used in engineering secure systems are explained below.

Authentication techniques can be used to ensure that communication end-points are who
they say they are.
Automated theorem proving and other verification tools can enable critical algorithms
and code used in secure systems to be mathematically proven to meet their specifications.
Capability and access control list techniques can be used to ensure privilege separation
and mandatory access control.
Chain of trust techniques can be used to attempt to ensure that all software loaded has
been certified as authentic by the system's designers.
Cryptographic techniques can be used to defend data in transit between systems, reducing
the probability that data exchanged between systems can be intercepted or modified.
Firewalls can provide some protection from online intrusion.
Mandatory access control can be used to ensure that privileged access is withdrawn when
privileges are revoked. For example, deleting a user account should also stop any
processes that are running with that user's privileges.
Secure cryptoprocessors can be used to leverage physical security techniques into
protecting the security of the computer system.
Microkernels can be reliable against errors: e.g. EROS and Coyotos.

Some of the following items may belong to the computer insecurity article:

Anti-virus software consists of computer programs that attempt to identify, thwart and
eliminate computer viruses and other malicious software (malware).

Cryptographic techniques involve transforming information, scrambling it so it becomes
unreadable during transmission. The intended recipient can unscramble the message, but
eavesdroppers cannot.

Backups are a way of securing information; they are another copy of all the important
computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs,
and tapes. Suggested locations for backups are a fireproof, waterproof, and heat-proof
safe, or a separate, offsite location from that in which the original files are
contained. Some individuals and companies also keep their backups in safe deposit boxes
inside bank vaults. There is also a fourth option, which involves using one of the file
hosting services that backs up files over the Internet for both businesses and individuals.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

148

www.iaetsd.in

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH, ICCTER - 2014

Encryption is used to protect the message from the eyes of others. It can be done in
several ways by switching the characters around, replacing characters with others, and
even removing characters from the message. These have to be used in combination to
make the encryption secure enough, that is to say, sufficiently difficult to crack. Public
key encryption is a refined and practical way of doing encryption. It allows for example
anyone to write a message for a list of recipients, and only those recipients will be able to
read that message.
Firewalls are systems which help protect computers and computer networks from attack
and subsequent intrusion by restricting the network traffic which can pass through them,
based on a set of system administrator defined rules.
Honey pots are computers that are either intentionally or unintentionally left vulnerable to
attack by crackers. They can be used to catch crackers or fix vulnerabilities.
Intrusion-detection systems can scan a network for people that are on the network but
who should not be there or are doing things that they should not be doing, for example
trying a lot of passwords to gain access to the network.
Pinging: the ping application can be used by potential crackers to find out if an IP address
is reachable. If a cracker finds a computer, they can try a port scan to detect and attack
services on that computer.
Social engineering awareness - keeping employees aware of the dangers of social
engineering and/or having a policy in place to prevent social engineering - can reduce
successful breaches of the network and servers.
File Integrity Monitors are tools used to detect changes in the integrity of systems and
files.

REFERENCES:
Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed
Systems, ISBN 0-471-38922-6.
Morrie Gasser: Building a Secure Computer System, ISBN 0-442-23022-2, 1988.
Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard
Donovan: Management Information Systems for the Information Age, ISBN 0-07-091120-7.
E. Stewart Lee: Essays about Computer Security, Cambridge, 1999.
Peter G. Neumann: Principled Assuredly Trustworthy Composable Architectures, 2004.
Paul A. Karger, Roger R. Schell:

CONCLUSION:
Computer security is critical in almost any technology-driven industry which
operates on computer systems. Computer security can also be referred to as computer
safety. Addressing the countless vulnerabilities of computer-based systems is an
integral part of maintaining an operational industry.


NETWORK SECURITY AND
CRYPTOGRAPHY
G.HarshaVardhan
3rd year-CSE
SRM University,
Chennai.
Mobile no: 9791095378
ABSTRACT:
Network security is a complicated subject, historically
tackled only by well-trained and experienced experts.
However, as more and more people become ``wired'', an
increasing number of people need to understand the
basics of security in a networked world. This document
was written with the basic computer user and
information systems manager in mind, explaining the
concepts needed to read through the hype in the
marketplace and understand risks and how to deal with
them. So it is very important for all users to get familiar
with various aspects of network security. In this article
the basics of network security are discussed. With the
millions of Internet users able to pass information from
the network, the security of business networks is a major
concern. The very nature of the Internet makes it
vulnerable to attack. Hackers and virus writers try to
attack the Internet and computers connected to the
Internet. With the growth in business use of the Internet,
network security is rapidly becoming crucial to the
development of the Internet. Many businesses set up
firewalls to control access to their networks by persons
using the Internet.


Introduction:
For the first few decades of their existence, computer networks were
primarily used by university researchers for sending e-mail and by
corporate employees for sharing printers. Under these conditions,
security did not get a lot of attention. But now, as millions of ordinary
citizens are using networks for banking, shopping, and filing their tax
returns, network security is looming on the horizon as a potentially
massive problem.
The requirements of information security within an organization have
undergone two major changes in the last several decades. Before
the widespread use of data processing equipment, the security of
information felt to be valuable to an organization was provided
primarily by physical and administrative means.
With the introduction of the computer, the need for automated tools
for protecting files and other information stored on the computer
became evident. This is especially the case for a shared system
such as a time-sharing system, and the need is even more acute for
systems that can be accessed over a public telephone or a data
network. The generic name for the collection of tools to protect data
and to thwart hackers is computer security.

Network security:
Security is a broad topic and covers a multitude of sins. In its simplest
form, it is concerned with making sure that nosy people cannot
read, or worse yet, secretly modify messages intended for other
recipients. It is concerned with people trying to access remote
services that they are not authorized to use. Most security problems
are intentionally caused by malicious people trying to gain some
benefit, get attention, or harm someone. Network security
problems can be divided roughly into four closely intertwined areas:
secrecy, authentication, nonrepudiation, and integrity control.
Secrecy, also called confidentiality, has to do with keeping
information out of the hands of unauthorized users. This is what
usually comes to mind when people think about network security.
Authentication deals with determining whom you are talking to


before revealing sensitive information or entering into a business
deal. Nonrepudiation deals with signatures.

Secrecy: Only the sender and intended receiver should be able to
understand the contents of the transmitted message. Because
eavesdroppers may intercept the message, this necessarily requires
that the message be somehow encrypted (the data disguised) so that an
intercepted message cannot be decrypted (understood) by an
interceptor. This aspect of secrecy is probably the most commonly
perceived meaning of the term "secure communication." Note,
however, that this is not only a restricted definition of secure
communication, but a rather restricted definition of secrecy as well.
Authentication: Both the sender and receiver need to confirm the
identity of the other party involved in the communication - to confirm
that the other party is indeed who or what they claim to be. Face-to-face
human communication solves this problem easily by visual
recognition. When communicating entities exchange
messages over a medium where they cannot "see" the other party,
authentication is not so simple. Why, for instance, should you
believe that a received email containing a text string saying that
the email came from a friend of yours indeed came from that
friend? If someone calls on the phone claiming to be your bank
and asking for your account number, secret PIN, and account
balances for verification purposes, would you give
that information out over the phone? Hopefully not.

Message Integrity: Even if the sender and receiver are able to
authenticate each other, they also want to ensure
that the content of their communication is not altered, either
maliciously or by accident, in transmission.
Extensions to the checksumming techniques that we encountered in
reliable transport and data link protocols can provide such an
integrity check.
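One standard extension of checksumming to a hostile setting is a
keyed message authentication code (MAC): a plain checksum can
simply be recomputed by whoever altered the message, while a MAC
cannot be forged without the shared key. A minimal sketch with
Python's standard hmac module follows; the key and messages are
illustrative.

import hmac, hashlib

key = b"shared-secret-key"              # known only to sender and receiver
msg = b"transfer $100 to account 42"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key, msg, tag):
    # Recompute the tag and compare in constant time; any change
    # to msg in transit produces a different tag.
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, msg, tag)
assert not verify(key, b"transfer $999 to account 42", tag)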
Nonrepudiation: Nonrepudiation deals with signatures.
Having established what we mean by secure communication, let us
next consider exactly what is meant by an "insecure channel." What

information does an intruder have access to, and what actions can
be taken on the transmitted data?
The figure illustrates the scenario:
Alice, the sender, wants to send data to Bob, the receiver. In order
to securely exchange data, while meeting the
requirements of secrecy, authentication, and message integrity,
Alice and Bob will exchange both control messages and data
messages (in much the same way that TCP senders and receivers
exchange both control segments and data
segments). All, or some, of these messages will typically be encrypted.
A passive intruder can listen to and record the
control and data messages on the channel; an active intruder can
remove messages from the channel and/or itself add messages into
the channel.

Network Security Considerations in the Internet:
Before delving into the technical aspects of network security in the
following sections, let's conclude our introduction by relating our
fictitious characters - Alice, Bob, and Trudy - to "real world" scenarios
in today's Internet.
Let's begin with Trudy, the network intruder. Can a "real world"
network intruder really listen to and record network traffic? The
answer is yes: a packet sniffer passively receives all
data-link-layer frames passing by the device's network interface. In a
broadcast environment
such as an Ethernet LAN, this means that the packet sniffer receives
all frames being transmitted from or to all hosts on the local area
network. Any host with an Ethernet card can easily serve as a packet
sniffer, as the Ethernet interface card needs only be set to
"promiscuous mode" to receive all passing Ethernet frames. These
frames can then be passed on to application programs that extract
application-level data. For example, in the telnet scenario, the login
password prompt sent from A to B, as well as the password entered
at B, are "sniffed" at host C. Packet sniffing is a double-edged sword:
it can be invaluable to a network administrator for network
monitoring and management but is also used by the unethical
hacker. Packet-sniffing software is freely available at various WWW
sites, and as commercial products.
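For illustration, a sniffer of this kind takes only a few lines
with the third-party scapy library (an assumption about tooling,
not part of the original discussion). It normally needs
administrator privileges and should only be run on networks one
is authorized to monitor.

from scapy.all import sniff  # third-party: pip install scapy

# Print a one-line summary of each of the next ten frames seen by
# the interface; scapy typically captures in promiscuous mode.
sniff(count=10, prn=lambda frame: print(frame.summary()))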

Cryptography: Cryptography comes from the Greek words for
''secret writing.'' It has a long and colorful history going back
thousands of years. Professionals make a distinction between ciphers
thousands of years. Professionals make a distinction between ciphers
and codes. A cipher is a character-for-character or bit-for-bit
transformation, without regard to the linguistic structure of the
message. In contrast, a code replaces one word with another word
or symbol. Codes are not used any more, although they have a
glorious history
The messages to be encrypted, known as the plaintext, are
transformed by a function that is parameterized by a key. The output
of the encryption process, known as the ciphertext, is then
transmitted, often by messenger or radio. We assume that the
enemy, or intruder, hears and accurately copies down the complete
ciphertext. However, unlike the intended recipient, he does not
know what the decryption key is and so cannot decrypt the
ciphertext easily. Sometimes the intruder can not only listen to the
communication channel (passive intruder) but can also record
messages and play them back later, inject his own messages, or
modify legitimate messages before they get to the receiver (active
intruder). The art of breaking ciphers, called cryptanalysis, and the
art of devising them (cryptography) are collectively known as cryptology.
It will often be useful to have a notation for relating plaintext,
ciphertext, and keys. We will use C = EK(P) to mean that the


encryption of the plaintext P using key K gives the ciphertext C.


Similarly, P = DK(C) represents the decryption of C to get the
plaintext again.
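As a toy illustration of this notation, the sketch below
implements a (completely insecure) XOR cipher in Python, where
encryption and decryption happen to be the same function:

from itertools import cycle

def E(K, P):
    # Toy cipher: XOR the plaintext with a repeating key.
    return bytes(p ^ k for p, k in zip(P, cycle(K)))

D = E  # XOR is its own inverse, so decrypting reuses E

P = b"attack at dawn"
K = b"MEGABUCK"
C = E(K, P)            # C = EK(P)
assert D(K, C) == P    # P = DK(C)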

Two Fundamental Cryptographic Principles:
Redundancy
The first principle is that all encrypted messages must contain some
redundancy, that is, information not needed to understand the
message.
Cryptographic principle 1: Messages must contain some
redundancy

Freshness
Cryptographic principle 2: Some method is needed to foil replay
attacks
One such measure is including in every message a timestamp valid
only for, say, 10 seconds. The receiver can then just keep messages
around for 10 seconds, to compare newly arrived messages to
previous ones to filter out duplicates. Messages older than 10
seconds can be thrown out, since any replays sent more than 10
seconds later will be rejected as too old.
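A minimal sketch of this freshness rule in Python; the 10-second
window follows the text, while the message-id scheme is
illustrative:

import time

WINDOW = 10.0   # seconds for which a message counts as fresh
seen = {}       # message id -> arrival time, remembered for WINDOW

def accept(msg_id, timestamp):
    # Reject messages that are too old or already seen (replays).
    now = time.time()
    for m, t in list(seen.items()):   # forget expired entries
        if now - t > WINDOW:
            del seen[m]
    if now - timestamp > WINDOW:      # stale: a late replay
        return False
    if msg_id in seen:                # duplicate inside the window
        return False
    seen[msg_id] = now
    return True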

Symmetric key encryption model:
Beyond that, the security of conventional encryption depends on
the secrecy of the key, not the secrecy of the algorithm. We do not
need to keep the algorithm secret; we need to keep only the secret
key.
The fact that the algorithm need not be kept secret means that
manufacturers can and have developed low-cost chip
implementations of data encryption algorithms. These chips are
widely available and incorporated into a number of products.


Substitution Ciphers
In a substitution cipher each letter or group of letters is replaced by another letter
or group of letters to disguise it. One of the oldest known ciphers is the Caesar
cipher, attributed to Julius Caesar. In this method, a becomes D, b becomes E,
c becomes F, ..., and z becomes C. For example, attack becomes DWWDFN.
The next improvement is to have each of the symbols in the plaintext, say, the 26
letters for simplicity, map onto some other letter. For example,
plaintext: a b c d e f g h i j k l m n o p q r s t u v w x y z
ciphertext: Q W E R T Y U I O P A S D F G H J K L Z X C V B N M
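Both ciphers amount to a table lookup per letter. A short Python
sketch using the monoalphabetic key above, plus the Caesar shift
as the special case:

import string

plain  = string.ascii_lowercase
cipher = "QWERTYUIOPASDFGHJKLZXCVBNM"   # the key from the text

encrypt = str.maketrans(plain, cipher)
decrypt = str.maketrans(cipher, plain)

print("attack".translate(encrypt))      # -> QZZQEA
print("QZZQEA".translate(decrypt))      # -> attack

# The Caesar cipher is the shift-by-3 special case: attack -> DWWDFN
shift3 = str.maketrans(plain, plain[3:] + plain[:3])
print("attack".translate(shift3).upper())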
Transposition Ciphers: Substitution ciphers preserve the order of the
plaintext symbols but disguise them. Transposition ciphers, in contrast, reorder
the letters but do not disguise them. A common transposition cipher is the
columnar transposition, keyed here by the word MEGABUCK:

M E G A B U C K
7 4 5 1 2 8 3 6
W E L C O M E T
O S A f i r e 2
K 8 C H I R A L
A P R A K A S A
M A P

PLAINTEXT: WELCOME TO SAfire-2K8, CHIRALA, PRAKASAM, AP.
CIPHERTEXT: CfHAOiIKEeASES8PALACRPT2LAWOKAMMrRA

The cipher is keyed by a word or phrase not containing any repeated letters. In
this example, MEGABUCK is the key. The purpose of the key is to number the
columns, column 1 being under the key letter closest to the start of the alphabet,
and so on. The plaintext is written horizontally, in rows, padded to fill the matrix if
need be. The ciphertext is read out by columns, starting with the column whose
key letter is the lowest.
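A sketch of the encryption side of this columnar transposition in
Python; it reproduces the MEGABUCK example above:

def columnar_encrypt(plaintext, key):
    # Keep only the characters that enter the matrix.
    text = [c for c in plaintext if c.isalnum()]
    # Read-out order: columns ranked by their key letter.
    order = sorted(range(len(key)), key=lambda i: key[i])
    return ''.join(''.join(text[i] for i in range(col, len(text), len(key)))
                   for col in order)

print(columnar_encrypt("WELCOME TO SAfire-2K8,CHIRALA,PRAKASAM,AP.",
                       "MEGABUCK"))
# -> CfHAOiIKEeASES8PALACRPT2LAWOKAMMrRA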


Public key algorithm:
While there may be many algorithms and keys that have this
property, the RSA algorithm (named after its inventors, Ron Rivest,
Adi Shamir, and Leonard Adleman) has become almost synonymous
with public key cryptography.
In order to choose the public and private keys, one must do the
following:
Choose two large prime numbers, p and q. How large should p and
q be? The larger the values, the more difficult it is to break RSA,
but the longer it takes to perform the encoding and decoding. RSA
Laboratories recommends that the product of p and q be on the
order of 768 bits for personal use and 1024 bits for corporate use.
Compute n = pq and z = (p-1)(q-1).
Choose a number, e, less than n, which has no common factors
(other than 1) with z. (In this case, e and z are said to be relatively
prime.) The letter 'e' is used since this value will be used in
encryption.
Find a number, d, such that ed - 1 is exactly divisible (i.e., with no
remainder) by z. The letter 'd' is used because this value will be
used in decryption. Put another way, given e, we choose d such
that the integer remainder when ed is divided by z is 1. (The
integer remainder when an integer x is divided by the integer n is
denoted x mod n.)
The public key that Bob makes available to the world is the pair of
numbers (n, e); his private key is the pair of numbers (n, d).
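A toy numeric walk-through of these steps in Python; the primes
are tiny for readability, whereas real RSA uses primes hundreds
of digits long together with padding schemes:

# Toy RSA key setup with small primes (Python 3.8+ for pow(e, -1, z)).
p, q = 61, 53
n = p * q                  # n = 3233
z = (p - 1) * (q - 1)      # z = 3120
e = 17                     # shares no factor (other than 1) with z
d = pow(e, -1, z)          # d = 2753, since (e * d) mod z == 1

public_key = (n, e)
private_key = (n, d)

m = 65                     # a message, encoded as a number < n
c = pow(m, e, n)           # encryption: c = m^e mod n
assert pow(c, d, n) == m   # decryption: m = c^d mod n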
Key distribution: For symmetric key cryptography, the trusted
intermediary is called a Key Distribution Center (KDC), which is a
single, trusted network entity with whom one has established a
shared secret key. One can use the KDC to
obtain the shared keys needed to communicate securely with all
other network entities. For public key cryptography, the trusted
intermediary is called a Certification Authority (CA). A certification

authority certifies that a public key belongs to a particular entity (a
person or a network entity). For a certified public key, if one can
safely trust the CA that certified the key, then one can be sure
about to whom the public key belongs. Once a public key is
certified, it can be distributed from just about anywhere,
including a public key server, a personal Web page or a diskette.
Security in the layers:
Before getting into the solutions themselves, it is worth spending a
few moments considering where in the protocol stack network
security belongs. There is probably no single place: every layer
has something to contribute.
Physical layer: In the physical layer, wiretapping can be foiled by
enclosing transmission lines in sealed tubes containing gas at high
pressure. Any attempt to drill into a tube will release some gas,
reducing the pressure and triggering an alarm. Some military systems
use this technique.
Data link layer: In this layer, packets on a point-to-point line can be
encrypted as they leave one machine and decrypted as they enter
another. All the details can be handled in the data link layer, with
higher layers oblivious to what is going on. This solution breaks down
when packets have to traverse multiple routers, however, because
packets have to be decrypted at each router, leaving them
vulnerable to attacks from within the router.
Network layer: In this layer, firewalls can be installed to keep bad
packets out. IP security also functions in this layer.
In the transport layer, entire connections can be encrypted, end to
end, that is, process to process. For maximum security, end-to-end
security is required. Finally, issues such as user authentication and
nonrepudiation can only be handled in the application layer.
Since security does not fit neatly into any single layer, solutions are
usually applied at several layers at once.

Secure Internet Commerce:

SET (Secure Electronic Transactions) is a protocol specifically designed to secure payment-card transactions over the Internet. It was originally developed by Visa International and MasterCard International in February 1996, with participation from leading technology companies around the world. SET Secure Electronic


Transaction LLC (commonly referred to as SET Co) was established in December 1997 as a legal entity to manage and promote the global adoption of SET.

1. Bob indicates to Alice that he is interested in making a credit card purchase.
2. Alice sends the customer an invoice and a unique transaction identifier.
3. Alice sends Bob the merchant's certificate, which includes the merchant's public key. Alice also sends the certificate for her bank, which includes the bank's public key. Both of these certificates are encrypted with the private key of a certifying authority.
4. Bob uses the certifying authority's public key to decrypt the two certificates. Bob now has Alice's public key and the bank's public key.
5. Bob generates two packages of information: the order information (OI) package and the purchase instructions (PI) package. The OI, destined for Alice, contains the transaction identifier and the brand of card being used; it does not include Bob's card number. The PI, destined for Alice's bank, contains the transaction identifier, the card number and the purchase amount agreed to by Bob. The OI and PI are dual encrypted: the OI is encrypted with Alice's public key; the PI is encrypted with Alice's bank's public key. (We are bending the truth here in order to see the big picture. In reality, the OI and PI are encrypted with a customer-merchant session key


and a customer-bank session key.) Bob sends the OI and the PI to Alice.

6. Alice generates an authorization request for the card payment request, which includes the transaction identifier.
7. Alice sends to her bank a message encrypted with the bank's public key. (Actually, a session key is used.) This message includes the authorization request, the PI package received from Bob, and Alice's certificate.
8. Alice's bank receives the message and unravels it. The bank checks for tampering. It also makes sure that the transaction identifier in the authorization request matches the one in Bob's PI package.
9. Alice's bank then sends a request for payment authorization to Bob's payment-card bank through traditional bank-card channels, just as Alice's bank would request authorization for any normal payment-card transaction.


One of the key features of SET is the non-exposure of the credit card number to the merchant. This feature is provided in Step 5, in which the customer encrypts the credit card number with the bank's key. Encrypting the number with the bank's key prevents the merchant from seeing the credit card number. Note that the SET protocol closely parallels the steps taken in a standard payment-card transaction. To handle all the SET tasks, the customer will have a so-called digital wallet that runs the client side of the SET protocol and stores customer payment-card information (card number, expiration date, etc.).
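To make the dual-encryption idea of Step 5 concrete, here is a minimal Python sketch assuming the 'cryptography' package; the key sizes, message contents and variable names are illustrative assumptions, and the real protocol additionally uses session keys and certificates as described above.

# Sketch of SET-style dual encryption: OI for the merchant, PI for the bank.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

merchant_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bank_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oi = b"txn=1234;brand=VISA"               # order information (no card number)
pi = b"txn=1234;card=...;amount=100.00"   # purchase instructions (card number)

oi_for_merchant = merchant_key.public_key().encrypt(oi, oaep)
pi_for_bank = bank_key.public_key().encrypt(pi, oaep)

# Only the bank can open the PI; the merchant forwards it unread.
assert bank_key.decrypt(pi_for_bank, oaep) == pi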

Conclusion:
All the three techniques discussed in this presentation, i.e., network security, cryptography and firewalls, are the most widely used and implemented network security tools. Each of them has its own significance in its own mode. For example, a single organization or establishment can use cryptography to maintain the privacy of information within itself. These methods are being used to provide the confidentiality required by the network. There is a lot of scope for development in this field. Digital signatures are one of the latest developments in the field of cryptography. With the increase in the number of computers and their usage worldwide, the demand for network security is increasing exponentially. This has led to the development of major companies like Symantec Corporation, McAfee, etc. So this field is putting up a big employment potential for the young generation of today. And, not to forget, there is no end to the complexity of this subject, which means that any amount of research will not go futile for the world of computers.


A Technical Paper on

THE WIRELESS WATER AND VACCINE MONITORING WITH LOW COST SENSORS [COLD TRACE]

AUTHOR
K. HUSENAIAH
IV B.Tech C.S.E
Email: kinghusen100@gmail.com
Ph. no: 9550277153

ANNAMACHARYA INSTITUTE OF TECHNOLOGY AND SCIENCES,
BOYANAPALLI, RAJAMPETA, KADAPA DISTRICT, ANDHRA PRADESH
(An autonomous institution)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


ABSTRACT

Cold Trace is a low-cost wireless sensor designed to improve access to vaccines, which protect thousands of children against diseases such as tuberculosis and polio. The sensor remotely monitors the temperature of vaccines. It also provides a better understanding of the vaccine cold storage, distribution and transportation infrastructures, particularly refrigeration units along the supply chain, in areas where regular records are not maintained. It will help us move toward providing more vaccines and safer medical treatments to millions of children around the world.

The Cold Trace project aims to construct a temperature sensor that connects to cold boxes and refrigeration units along the cold chain, has a low cost, and can provide real-time monitoring and alerts for clinics and government agencies. Unlike similar approaches that develop their own microcontrollers, the ColdTrace project uses modified cell phones for data collection and transmission to cut down on upfront capital costs. The data gathered from each device in the network is sent to a global database that can track historical vaccine distribution patterns, which can then be used by clinics and health agencies to improve forecasting needs for the area over time.

INTRODUCTION

Cold Trace is a new, innovative project that uses wireless temperature sensors, repurposed cell phones, and a notification system to track the temperatures of cold storage units and alert clinic managers by SMS when temperatures begin to threaten the medicine. In many developing countries, electricity and backup power can be erratic in clinics, and oftentimes there are not enough resources available to accurately and routinely measure vaccine refrigeration temperatures. Many of the power outages in these areas happen at night or over the weekend and result in spoiled vaccine supplies that play a role in the spreading of preventable diseases in these regions.

REAL-TIME TRANSPARENCY INTO VACCINE SUPPLY CHAINS:

ColdTrace reimagines the cold chain infrastructure used to store and transport vaccines by enabling a low-cost wireless sensor to remotely monitor vaccines and transfer the monitoring data to a global database that fosters coordinated decisions by governments, global partners, pharmaceutical companies and clinics, and helps ensure life-saving treatments reach everyone. Vaccinating children is an amazingly effective and successful public health solution. Millions of children are saved each year, thanks to vaccines. Yet 25 million children, that's one in every 5 kids born, are still not vaccinated each year. And most of these children are in developing


countries. It is building real-time visibility into the temperature-controlled supply chains, called the "cold chain". This platform, called Cold Trace, continuously records the temperature of refrigerated storage units used along every mile of the cold chain, from distribution warehouse to clinic, using cell phone-enabled sensors. The data is wirelessly and automatically uploaded to the Cold Trace servers, where it is stored and analyzed. Real-time SMS alerts notify clinic workers and Ministry of Health staff about vaccines reaching critical temperatures. Each level of the supply system can act on this immediate information on equipment health and vaccine safety. Failure rates across local, regional and national areas, in addition to monthly status reports, can be used to improve forecasting and capacity planning. Cold Trace can help major donors to detect emerging risks and identify regions with the strongest cold chain infrastructure to target for increased distribution, and also help attract new donations. Cold Trace has been deployed in clinics.

COLDTRACE FEATURES:

- Generates SMS alerts about equipment failures and medicines reaching critical temperatures.
- Uploads temperature traces via SMS or cellular network.
- Continuously tracks GPS location coordinates.
- The cell phone display and keypad facilitate extensive local configuration, including communication and upload intervals.
- The Cold Trace database keeps a constant log of refrigeration unit temperatures in clinics and supports analysis of failure rates across local, regional and national areas.
- A Web dashboard allows for remote configuration, device status monitoring, data visualization, and sharing with donors, NGOs and ministries of health.
- Can integrate with existing logistics management information systems in order to oversee and provide alerts about cold chain equipment at each stage of the distribution.

Fig: Real-Time Transparency Into Vaccine Supply Chains

COLD CHAIN MONITOR:

This is piloting a new cell phone-enabled sensor that remotely monitors the temperatures of refrigerated units used to store and transport these vaccines and drugs. Our integrated device captures temperature data about vaccine stocks using a sensor that connects to cold boxes and refrigeration units along the supply chain, from warehouses to clinics. The sensor uses mobile phones to collect and wirelessly transmit temperature data, and our backend system integrates the collected data with logistics management information systems in order to oversee and provide alerts about cold chain equipment at each stage of vaccine distribution. The system will make it possible to: wirelessly upload temperature data through a cell phone from almost any location; correlate temperature data with the geolocation of the phone (when possible); and immediately generate SMS and email alerts about vaccines reaching critical temperatures. This allows each level of the supply system to have real-time information on equipment health and vaccine safety. Moreover, the data collected by our system will help accelerate the tracking of historical vaccine distribution patterns that can be used to improve forecasting and capacity planning.

Cold chain management is a key challenge faced by various industries. Despite regulations that aim to ensure the proper management and monitoring of temperature-sensitive pharmaceuticals (such as insulin and the polio vaccine), as well as fresh produce and other foodstuffs, a lack of infrastructure for the supply and transport of these products means that they are often wasted through exposure to changes in temperature. Mismanagement of the cold chain also results in food safety issues, while the ethical considerations around temperature-sensitive drugs are enormous: the shelf-life of many medications is entirely dependent on correct temperature control. The importance of accurate cold chain management cannot be understated.

Fig: Cold Chain Monitoring

Fig: Cold Chain Monitoring with Wireless Communication


HOW DOES COLDTRACE WORK?

Cold Trace uses existing cell phone technologies in a new way to dramatically reduce the cost of wireless sensor data collection. Unlike similar approaches, which employ microcontrollers or other costly components, Cold Trace takes advantage of the sophisticated electronics, communication, battery and other features that already exist on the cell phone. By building upon this standard technology, we can dramatically reduce the overall cost of the sensors and make it possible for almost any basic phone to serve as a wireless sensor. This allows Cold Trace to bring remote monitoring to developing areas that have extremely low resources.

Fig: Block Diagram of Cold Trace Working

WATER AND SANITATION ACCESS:

Historically, many low-income residents of urban and peri-urban areas in the world have faced a lack of clean water and low-quality water and sanitation services. Vaccine monitoring is building an SMS-enabled issue tracking system that allows poor residents to communicate with their water and sanitation service providers. The way that this works is as follows: residents can send text messages with complaints, concerns, news, questions and other reports to the system database, which then distributes those messages to the water service providers and other stakeholders who can provide answers and take action in response. As a result, the system transforms community needs and concerns into a live data feed that helps service providers improve their services and that highlights both effective and ineffective solutions for utilities, governments, civic leaders and aid organizations. Additionally, broadcast messages sent by service providers, NGOs, or governments can be used to inform citizens about important updates, expected outages, and help launch local campaigns for change. It is believed that citizens will be motivated to participate because they get a direct communication channel with providers to improve their services, and service providers and other stakeholders will participate because they get access to critical planning information that would otherwise be unavailable.
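Both of the mechanisms above lean on the same primitive: a phone that reads a value and pushes an SMS. The following minimal Python sketch, not the ColdTrace implementation, illustrates the temperature-alert variant; the serial port, phone number, thresholds and the simulated probe are all assumptions, and the SMS is sent through a GSM modem with standard AT commands via pyserial.

# Minimal sketch of threshold-based SMS alerting for a vaccine refrigerator.
import random
import time
import serial  # pip install pyserial

SAFE_RANGE_C = (2.0, 8.0)          # typical vaccine storage band
ALERT_NUMBER = "+10000000000"      # hypothetical clinic manager's number

def read_temperature_c() -> float:
    # stand-in for the real probe attached to the phone or cold box
    return random.uniform(1.0, 10.0)

def send_sms(modem: serial.Serial, number: str, text: str) -> None:
    modem.write(b"AT+CMGF=1\r")                   # switch modem to SMS text mode
    time.sleep(0.5)
    modem.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    modem.write(text.encode() + b"\x1a")          # Ctrl-Z terminates the message

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as modem:
    while True:
        t = read_temperature_c()
        if not SAFE_RANGE_C[0] <= t <= SAFE_RANGE_C[1]:
            send_sms(modem, ALERT_NUMBER, f"ALERT: vaccine fridge at {t:.1f} C")
        time.sleep(600)                           # sample every ten minutes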

Fig: Water and Sanitation Access With SMS Alerts

SMS Platform:

Wireless vaccine monitoring is building an SMS-enabled issue tracking and communication system that allows peri-urban residents to communicate with their water and sanitation service providers. The way that this works is as follows: residents can send text messages with complaints, concerns, news, questions and other reports to the system database, which then distributes those messages to the water service providers and other stakeholders who can provide answers and take action in response. As a result, the system transforms community needs and concerns into a live data feed that helps service providers improve their services.

CONCLUSION

ColdTrace is a low-cost wireless sensor designed to improve access to vaccines, which protect thousands of children against diseases such as tuberculosis and polio. Cold Trace is an important part of an integrated, broadly available, data-driven process for managing vaccine distribution in low-resource areas. As a result, it will help us move toward providing more vaccines and safer medical treatments to millions of people around the world.


ADVANCED CONSTRUCTION MATERIALS


Viknesh.J, Shankar Narayan.R
B.Tech (civil department)
Sri Manakula Vinayagar Engineering College,
Pondicherry
Ph: 8220782967
e-mail id: uniquevicky12@gmail.com

Abstract- This paper is intended to replace the usage of cement by using bagasse, lime and fly ash. The main aim is to promote composite materials in construction and to provide high strength within a shorter period of time than normal OPC. The composites in the concrete mixture were altered: the proportions of fly ash, lime and bagasse used as cement replacement were varied. The different compositions were cast and tested under a UTM, which gave good results with more strength.
I. SCOPE

- The main aim of this project is to promote the use of lime and industrial waste products such as fly ash and bagasse.
- To encourage housing developers to invest in these materials for house construction.
- To compare the cost incurred and the strength characteristics of concrete produced using fly ash and lime along with cement as replacement in proportion.

II. INTRODUCTION

The requirement for economical and environment-friendly materials has extended an interest in natural fibers. Most natural fibers, like softwood pulp and banana fibers, have been used as reinforcement materials in cement composite products. In this work, natural sugar cane bagasse has been utilised for a similar study. Bagasse is a fibrous residue obtained from sugar cane during extraction of sugar juice. Sera and co-workers studied the effect of reinforcing concrete with bagasse fibers. The main aim of this paper is to promote composite materials in construction which provide high strength.

III. MATERIALS

- Fly ash is produced in tonnes as a byproduct of the coal industry.
- There are three types of fly ash (fly ash F, fly ash N, fly ash C).
- Fly ash C is of a more alkaline and sulphate nature.
- So we chose fly ash F, which is more widely available.
- The chemical compositions, such as the silicates and aluminates present in fly ash, bagasse and cement, were compared.
- Lime is added as a hydrating agent to the fly ash cement mixture, as it is rich in calcium oxide.

IV. LITERATURE STUDY AND ANALYSIS

Chemical comparison of the pozzolan types (fly ash classes) and cement:

CHEMICAL COMPOUND | CLASS F | CLASS C | CLASS N | CEMENT
SiO2       | 54.90 | 39.90 | 58.20 | 22.60
Al2O3      | 25.80 | 16.70 | 18.40 |  4.30
Fe2O3      |  6.90 |  5.80 |  9.30 |  2.40
CaO        |  8.70 | 24.30 |  3.30 | 64.40
MgO        |  1.80 |  4.60 |  3.90 |  2.10
SO3        |  0.60 |  3.30 |  1.10 |  2.30
Na2O & K2O |  0.60 |  1.30 |  1.10 |  0.60

- The replacement of 15% bagasse ash in cement with respect to weight gave relatively equal strength when compared to ordinary Portland cement (Ref.1).
- The replacement of 20% fly ash in cement with respect to weight gave equal strength to that of ordinary Portland cement.

DESIGN MIXTURE

- The design mixture is based on a 1:2:4 ratio.
- Gravel of 20 mm size is taken as the coarse aggregate.
- River bed sand is taken as the fine aggregate.
- A design cube of 85 mm sides is taken as the sample piece, to be tested after 14 and 28 day curing periods.
- Ordinary Portland cement is used.
- The bagasse ash used is in the form of fibre.

Fig: The composite mixture during the curing period.

Fig: The concrete block after hardening.

COMPRESSION TEST

Fig: The concrete block during the compression test under a UTM.

Cube No | 14 Days (N/mm2) | 28 Days (N/mm2)
1       | 1.72            | 4.81
2       | 3.61            | 4.41
3       | 2.15            | 3.61
4       | 1.41            | 3.63
5       | 0.78            | 0.88
6       | 0.12            | 0.23
7       | 0.28            | 0.48
8       | 4.08            | 4.91

VARIED COMPOSITIONS

Cube No | Composition
1       | 100% cement
2       | 80% cement, 20% fly ash (Ref.1)
3       | 85% cement, 15% bagasse (Ref.2)
4       | 70% cement, 15% fly ash, 7% bagasse, 8% lime
5       | 60% lime, 40% fly ash
6       | 65% lime, 35% bagasse
7       | 40% lime, 20% bagasse, 40% fly ash
8       | 47% cement, 30% fly ash, 20% lime, 3% bagasse
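To make the comparison in the two tables concrete, a few lines of Python suffice to rank the mixes by 28-day strength relative to the 100% cement control; the dictionary simply restates the table values.

# Rank the eight cubes by 28-day compressive strength (N/mm2).
strengths_28d = {1: 4.81, 2: 4.41, 3: 3.61, 4: 3.63,
                 5: 0.88, 6: 0.23, 7: 0.48, 8: 4.91}
control = strengths_28d[1]  # cube 1 = 100% cement

for cube, s in sorted(strengths_28d.items(), key=lambda kv: -kv[1]):
    print(f"cube {cube}: {s:.2f} N/mm2 ({s / control:.0%} of the control)")
# Cubes 8 and 1 top the list, matching the inference drawn below.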

GRAPH

[Graph: compressive strength (N/mm2, scale 0-6) of cubes 1-8 at 14 and 28 days.]

V. INFERENCE

- The blocks of varied proportions were cast and tested.
- Block 1 and Block 8 gave more relative compressive strength.
- In Block 8 the usage of cement is reduced by nearly 50%.

VI. CONCLUSION

- The experimental behaviour of concrete beams with lime, cement and fly ash in varied proportions was tested.
- The overall compressive and flexural behaviour of the (lime, cement and fly ash mix) concrete beams used in this study experimentally produced more strength when compared to the ordinary Portland cement beams in this report.
- Hence I conclude by saying that this type of procedure can replace nearly 50% of the usage of cement and also reduces carbon dioxide emission.

VII. REFERENCE

1. International Journal for Service Learning in Engineering, Vol. 5, No. 2, pp. 60-66, Fall 2010. ISSN 1555-9033.
2. flyash.sustainablesources.com
http://www.flyash.com/data/upimages/press/TB.2%20Chemical%20Comparison%20of%20Fly%20Ash%20and%20Portland%20Cement.pdf


FPGA BASED RETINAL BLOOD OXYGEN SATURATION MAPPING USING MULTI SPECTRAL IMAGES

Mr. Vijay Prabakaran.L (Final year UG/ECE)

Abstract: We report on an experiment for mapping the two-dimensional blood oxygen saturation distribution by measuring multispectral images in the wavelength range from 500 to 650 nm with a resolution of 7 nm. The multispectral images of the retina are acquired with an originally designed imaging system equipped with a tuneable spectral filter. To separate retinal blood vessels from other tissue areas, morphological image processing is adopted. The small flick motion is also compensated. After preprocessing, a partial least squares regression model for the oxygen saturation is built by sampling typical spectra reflected from artery and vein. Applying the regression model to all the points extracted by the morphological processing yields the two-dimensional oxygen saturation map.

This paper also presents the high-level, single assignment programming language SA-C and its optimizing compiler targeting reconfigurable systems. SA-C is intended for image processing applications. Language features are introduced and discussed. The intermediate forms DDCF, DFG and AHA, used in the optimization and code-generation phases, are described. Conventional and reconfigurable-system-specific optimizations are introduced, and the code generation process is described. The performance of these systems is analyzed using a range of applications.

INTRODUCTION

If a blood vessel is occluded and the blood stream is obstructed, the oxygen supply for cells will be insufficient. This leads to disorders such as the malfunction of cells or the generation of vulnerable neovascularity. Even advanced retinal disease symptoms may be treated by laser treatment; however, early stage detection of the vascular malfunction is important for preventing alteration in visual acuity or the loss of eyesight. The retinal vessels are the only blood vessels inside the human body that can be seen directly from outside. If one has retinal disorders, it is suspected that he or she has blood stream disorders in the whole system, such as hypertension. Actually, it is reported that rates of brain and cardiac infarctions are greatly correlated with retinal malfunctions. Therefore the reliable measurement of retinal functions such as the oxygen supply has been expected to be effective in the prevention or the early treatment of retinopathy and the complications of hypertension.

Oxygen in the blood stream is transported by haemoglobin contained in red blood cells. A haemoglobin molecule consists of four units, and each unit has a heme that can be bound to oxygen. Haemoglobin is bound to oxygen at high oxygen partial pressure, while it releases oxygen at low oxygen partial pressure. Thus oxygen is transported to the whole body. The content of haemoglobin that binds to oxygen is represented as the degree of oxygen saturation; 100% oxygen saturation represents that all haemoglobin molecules bind the maximum oxygen. To measure the oxygen saturation in the blood stream, spectroscopy is useful. Most pulse oximeters that are commonly used in medical agencies utilize the difference of oxy- and deoxyhaemoglobin absorptions at two wavelengths: red and near-infrared. We expect that oxygen saturation levels at retinal vessels can be measured in the same manner as pulse oximetry, though the retinal spectroscopy should be a reflectance measurement, unlike pulse oximetry, which measures the transmittance of the light passed through the tip of a finger.

Measuring the oxygen saturation of the retinal blood stream based on spectroscopy has long been developed since the late 1980s. Delori reported in 1988 the measurement of oxygen saturation using three wavelengths. Smith et al. reported in 1998 on an oxygen measurement experiment using a laser at two wavelengths. In 1999 an experiment employed four wavelengths and a calibration model was proposed. Heaton et al. reported an originally designed handheld oximeter employing four wavelengths.

In spite of this vigorous research, a reliable measurement technique has not yet been

established. This may be because the following problems are still unsolved. First of all, spectroscopic retinal oximetry requires high wavelength resolution, because the absorption spectrum of haemoglobin in the wavelength range from 500 to 600 nm has a fine structure compared to the longer wavelength range, such as the range around 800 nm utilized in pulse oximetry. However, increasing the wavelength resolution is not easy because of the limitation of the light intensity available to illuminate a fundus and the requirement of a short exposure time to prevent blur caused by flick motion of the eye. Hence this results in a lack of spectral information from which to retrieve oxygen saturation. Second, it is difficult to determine the light path length inside the retinal tissue in the reflectance measurement. Even if the absorption coefficients of oxy- and deoxyhemoglobin are known in advance, oxygen saturation cannot be estimated without knowledge of the light path length, based on the Lambert-Beer law. An inaccurate assumption of the light path length will lead to incorrect estimations depending on the thickness of the blood vessels or the scattering coefficient.

From this background, this paper reports on a preliminary experiment employing both hardware-based and software-based improvements: the measurement setup introduces a highly sensitive CCD camera and a tunable spectral filter to ensure high spectral resolution, and data processing techniques such as morphological structure analysis and multivariate regression enable improvement of the accuracy of the estimated result, preventing the distortion caused by the scattering effect.

The biggest obstacle to the more widespread use of reconfigurable computing systems lies in the difficulty of developing application programs for them. FPGAs are typically programmed using hardware description languages such as VHDL. Application programmers are typically not trained in these hardware description languages and usually prefer a higher level, algorithmic programming language to express their applications. Turning a computation into a circuit, rather than into a sequence of CPU instructions, may seem to offer obvious performance benefits, but an effective code generation strategy requires an understanding of the fundamental differences between conventional CPUs and FPGAs.

The SA-C Language

The design goals of SA-C are to have a language that can express image processing (IP) applications elegantly, and to allow seamless compilation to reconfigurable hardware. IP applications are supported by data parallel loops with structured access to rectangular multidimensional arrays. Reconfigurable computing requires fine grain expression-level parallelism, which is easily extracted from a SA-C program because of its single assignment semantics. Variables in SA-C are associated with wires, not with memory locations. Data types in SA-C include signed and unsigned integers and fixed point numbers, with user-specified bit widths. The extents of SA-C arrays can be determined either dynamically or statically. The type declaration int14 M, for example, declares a matrix M of 14-bit signed integers; the left dimension will be determined dynamically, while the right dimension has been specified.
Lower Level Code Generation

A dataflow graph (DFG) is a low-level, non-hierarchical and asynchronous program representation. DFGs can be viewed as abstract hardware circuit diagrams without timing or resource contention taken into account. Nodes are operators and edges are data paths. DFGs have token-driven semantics. The SA-C compiler attempts to translate every innermost loop to a DFG. The innermost loops the compiler finds may not be the innermost loops of the original program, as loops may have been fully unrolled or strip-mined.
as loops may have been fully unrolled or stripmined.
Intel Image Processing Library
When comparing simple IP operators one
might write corresponding SA-C and C codes and
compare them on the Starfire and Pentium II.
However, neither the Microsoft nor the Gnu C++
compilers exploit the Pentiums MMX technology.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

175

www.iaetsd.in

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH, ICCTER - 2014

Instead, we compare SA-C codes to corresponding


operators from the Intel Image Processing Library
(IPL). The Intel IPL library consists of a large
number of low-level Image Processing operations.
Many of these are simple point- (pixel-) wise
operations such as square, add, etc. These
operations have been coded by Intel for highly
efficient MMX execution. Comparing these on a
450 MHz Pentium II (with MMX) to SA-C on the
Star Fire, the Star Fire is 1.2 to six times slower
than the Pentium. This result is not surprising.
Although the FPGA has the ability to exploit fine
grain parallelism, it operates at a much slower clock
rate than the Pentium. These simple programs are
all I/O bound, and the slower clock of the FPGA is
a major limitation when it comes to fetching data
from memory. However, the Prewitt edge detector
written in C using IPL calls and running on the 450
MHz Pentium II, takes 53 milliseconds as compared
to 17 milliseconds when running the equivalent SAC code on the StarFire. Thus, non I/O bound SA-C
programs running on the StarFire board are
competitive with their hand optimized IPL
counterparts.

EXPERIMENT

In this experiment, an optical system that has the basic function of a commonly used fundus camera is built on an optical table to keep enough space for a high intensity light source and the tunable spectral filter. The optical setup is shown in Fig. 1. A photographic-type fundus camera needs a mechanism in the illumination system to prevent specular reflection from the surface of the cornea; a ring-shaped aperture is illuminated from its back and focused onto the surface of the cornea. In addition, a beamsplitter or a perforated mirror is necessary to split the illumination light and the light reflected from the retina, because both share the optical path just before the eye. In our optical setup, the perforated mirror with a 10 mm circular aperture is located at an angle of 45 degrees to the optical path. The focusing lens between the ring slit and the perforated mirror has a black circular mask at its center to prevent specular reflection at the surface of the lens. The image of the retina acquired by the CCD camera is degraded without this mask.

Fig. Optical setup for acquiring multispectral retinal images. Emitted light from the 300 W xenon lamp is filtered by the IR blocking filter and passes through the diffusing filter for a uniform intensity distribution. The diffusing filter plane is then focused on the ring slit with a diameter of about 10 mm, and is focused again on the cornea of the subject's eye. The planar mirror before the eyepiece moves according to the visual check and the image acquisition by the CCD camera.

Next, brief explanations about the light source, the imaging CCD camera, and the tunable spectral filter are given as follows. The light source is a 300 W xenon lamp. Emitted light passes through the IR blocking filter and the diffusion filter to cut off the IR light and to make a uniform distribution. Note that a halogen lamp is generally thought to be better for spectroscopic purposes because it has no line spectra; however, the xenon lamp was employed in our experiment for its stronger emission intensity. The tuneable filter we used (VariSpec VIS; CRi Inc., USA) is known as a Lyot filter, which allows tuning the transmission wavelength with the combination of a birefringent plate and a rotating polarizer. The tuneable wavelength range is from 400 to 720 nm with a full width at half maximum of 7 nm. The centre wavelength of the transmitted light can be selected arbitrarily, and the response time to change the center wavelength was approximately 65 ms by actual measurement. The imaging device is a highly sensitive EM-CCD (Electron Multiplying CCD) camera (ADT-100; Flovel, Japan) with 1000x1000 pixels, 10 bit depth, and a 30 fps transmission rate. The exposure time was set to 1/60 s in this experiment. The spectral filter and the CCD camera are synchronized by PC control. The data acquisition time for a frame, including the data transfer and the wavelength switch, is about 150 ms.

The measurement experiment was performed with a subject, a healthy male in his 30s, after the instillation of mydriatic drops. The multispectral images were acquired in the wavelength range from 500 to 650 nm with an interval of 5 nm. The acquired region is near the optic nerve head (ONH) with a viewing angle of


about 20 degrees. Six examples of images are shown in Fig. 2. Note that the images were compensated with regard to the spectral transmittance and the sensitivity through the entire system; hence the images are proportional to the reflectance at each wavelength. The reflectance of the ONH is much higher than that of other areas. The dynamic range was optimized to the blood vessels because we will extract only the blood vessel area in the data processing step. It is seen that the reflectance, especially of the artery, increases with wavelength, reflecting the absorption spectrum of hemoglobin. The background reflection seen at long wavelengths is from the choroid; the light passes through the pigment epithelial layer because of the small absorption in the long wavelength range.

Morphological angiography

Morphology is one of the digital image processing techniques based on a firm mathematical foundation, defined as set operations between a subject pattern and an elemental structure. Originally, morphology was developed in the late 1960s for analysing microscope images. Recently morphology has been extended to a wide variety of fields. Morphological image processing is typified by opening and closing, which enable deletion of unnecessary small patterns and clarification of the desired subject. They are implemented by combining dilation and erosion, defined mathematically as Minkowski addition and subtraction.

Since the blood vessels have clear edges and their structural feature is apparently different from other areas in the multispectral images, it is expected that the FA-like effect can be implemented by applying morphological image processing. The images created by this method will be used as the mask filter that is overlaid on the predicted 2-D oxygen saturation distribution for eliminating retinal tissue areas.

The mask filter is created based on the bottom-hat filtering method; bottom-hat filtering is defined as subtracting the original image from the result of performing a closing operation on it. In the obtained multispectral images, the reflectance of the ONH is much higher than that of surrounding areas, so the algorithm adopted for our purpose needs to be robust to large changes in the intensity distribution. Bottom-hat filtering is one of the adequate techniques for this purpose. However, we should consider the boundaries between thick main vessels and thin capillaries. This time, we focused on extracting comparatively thick main vessels. For extracting thick vessels near and within the ONH area, we adopted the combination of bottom-hat and top-hat filtering: subtracting the top-hat result from the bottom-hat result. Top-hat filtering is a subtraction of the opening-operated image from the original image. As a result, thick vessels can be extracted clearly from the measured multispectral image. Results of both the gray scale and binary operations are shown in Figs. 3 (a) and (b). The original image is the observed data at the wavelength of 580 nm.
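As an illustration of this masking step, the following is a minimal Python sketch assuming OpenCV; the kernel shape and size, the stand-in image and the Otsu threshold are assumptions, not the authors' parameters.

# Vessel mask by bottom-hat minus top-hat filtering, then binarization.
import cv2
import numpy as np

img = np.uint8(np.random.randint(0, 256, (512, 512)))  # stand-in for the 580 nm image

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
bottom_hat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)  # dark vessels pop out
top_hat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)       # bright structures

gray_mask = cv2.subtract(bottom_hat, top_hat)     # gray-scale mask (cf. Fig. 3a)
_, binary_mask = cv2.threshold(gray_mask, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # cf. Fig. 3b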

Fig. Six examples of acquired multispectral images. The maximum of the sensitivity, including the transmittance of the spectral filter and the sensitivity of the CCD camera, is around 640 nm. The brightness level of these figures was modified for better visualization.

DATA PROCESSING

We process the measured multispectral images in three steps for estimating the oxygen saturation levels across the retinal vessels. The first step is a technique similar to fluorescence angiography (FA) that extracts the blood vessel structure from the background image; this step is performed by digital image processing based on morphology. The second step is compensating for the small involuntary eye movement; even very small gaps among the images will result in inaccurate spectra. The last step is estimating the oxygen saturation levels at all the points extracted in the first step, employing multivariate regression analysis.


B. Compensation of eye movement

An oculus constantly makes slight movements owing to our cognition system, which recognizes an image by changing the light intensity carried to the optic nerve. Therefore the acquired retinal image moves slightly within a measurement time of even a few seconds, and all the images have to be adjusted to the same position in order to obtain the correct spectrum.

An oculus movement is, in general, not only translation but also rotation. However, images acquired in this experiment seemed not to contain rotation, thus only translation compensation was implemented. The alignment algorithm is based on pattern matching. For accurate alignment, linear subpixel interpolation was adopted. The obtained accuracy was within 0.5 pixel. The ONH area was excluded from the matching calculation because the large intensity change among different wavelengths causes a large error. It was found that using an area including many vessels was effective for accurate alignment.

Fig. 3. Mask filters created by the morphological operation. (a) Gray scaled filter and (b) binary filter. For robustness to the large intensity change especially around the ONH, and for separating thick main vessels from thin capillaries, the algorithm combining bottom-hat and top-hat operations was adopted. The mask filter was calculated by subtracting the top-hat result from the bottom-hat result.

C. Oxygen saturation mapping

To build a regression model for the oxygen saturation and to obtain the 2-D oxygen distribution at the retinal blood stream, the PLS (Partial Least Squares) regression method was employed. PLS regression is one of the multivariate analyses; an objective variable is predicted using a regression model that consists of multiple explanation variables, as in multiple regression analysis. PLS regression is, however, more useful especially for spectroscopic chemical analyses because it is applicable to a sample set having correlation among the explanation variables. In addition, PLS regression can be adopted in the case that the number of explanation variables is larger than the number of samples, in which case multiple regression cannot be solved.

First, 20 sample points were selected, 10 points each from artery and vein. In this experiment, we have no knowledge in advance of the actual oxygen saturation values. Thus we predict the similarity of the spectrum to artery and vein instead of actual oxygen saturation values. In Fig. 4, typical spectra of selected points for artery and vein are plotted. In the wavelength range from 500 to 580 nm, no significant difference between arterial and venous spectra is seen, and both lines show that the absorption is higher than that in the longer wavelength range. From the wavelength of 580 nm, the reflectances suddenly increase. However, unlike in the short wavelength range, the difference between arterial and venous reflectances is large because the absorptions of oxy- and deoxyhemoglobin reflect on the arterial and venous spectra.

Fig. 4. Typical examples of reflectance spectra from artery and vein. The short wavelength range shows no significant difference between the arterial and venous spectra, while the long wavelength range represents the spectral features of oxy- and deoxyhemoglobin.

Next, the PLS regression model was built using the selected 20 spectra. As mentioned above, the actual oxygen saturation values are unknown, thus objective variables of 1 and 0 were given for arterial and venous spectra, respectively. The full cross validation method was performed to evaluate the regression model. The regression model was built using four PLS factors because the minimum residual was given by the fourth PLS factor. The regression coefficients were multiplied with the corresponding images, and the 2-D distribution of the predicted oxygen saturation was then obtained by summing up all the weighted images. Finally, the obtained image was overlaid with the mask filters that were created by the morphological technique.


RESULTS AND DISCUSSIONS

Obtained 2-D distributions of the predicted oxygen saturation are shown in Figs. 5 (a) and (b); their difference is only the mask filter that was overlaid in the final step. The oxygen saturation level is represented by the gray scale; white represents a high value. Thick veins that run upward and downward from the ONH show low values, while thin arteries show relatively high values. This means the oxygen saturation level was retrieved almost correctly. The area within the ONH shows very high values, and blood vessels within the ONH area show higher values than the outer area. This is because the regression results were influenced by the original intensity values: the reflectance of the ONH is much higher than that of blood vessels and other retinal tissue areas. Hence it follows that these high values seen around the ONH have not been retrieved correctly.

The predicted oxygen saturation is represented by relative values because the actual oxygen saturation values were unknown in this experiment. Generally, the oxygen saturations of blood in the main artery and vein are, respectively, more than 95% and about 75%. If the oxygen exchange does not happen in the blood vessels above the ONH, retrieving the absolute value of oxygen saturation may be possible. Concerning the spectrum measurement, higher wavelength resolution is desirable. In this experiment, the fine spectral structure in the range from 500 to 600 nm that should be useful for estimating the oxygen saturation could not be detected due to the lack of wavelength resolution. In future work, introducing some optical improvement such as a confocal imaging system will be necessary for determining the optical path length. It is difficult to evaluate the effectiveness of the software-based scattering correction that we employed in this research.

CONCLUSION

We reported on a preliminary experiment for measuring the blood oxygen saturation at the retina. The multispectral images were acquired in the wavelength range from 500 to 650 nm with a wavelength resolution of 7 nm. The measured images were pre-processed by the morphological blood vessel extraction and the alignment processing. Finally, the two-dimensional distribution of oxygen saturation across the retina was calculated by the PLS regression method. The result showed clearly the difference of the oxygen saturation levels in the retinal blood stream.
REFERENCES
[1] F. C. Delori, "Noninvasive technique for oximetry of blood in retinal vessels," Appl. Opt., vol. 27, no. 6, pp. 1113-1125, 1988.
[2] M. H. Smith, K. R. Denninghoff, L. W. Hillman, and R. A. Chipman, "Oxygen saturation measurements of blood in retinal vessels during blood loss," J. Biomed. Opt., vol. 3, no. 3, pp. 296-303, 1998.
[3] J. J. Drewes, M. H. Smith, K. R. Denninghoff, and L. W. Hillman, "Instrument for the measurement of retinal vessel oxygen saturation," Proc. SPIE, vol. 3591, pp. 114-120, 1999.
[4] A. Agarwal, S. Amarasinghe, R. Barua, M. Frank, W. Lee, V. Sarkar, D. Srikrishna, and M. Taylor, "The RAW compiler project," in Proc. Second SUIF Compiler Workshop, August 1997.
[5] Annapolis Micro Systems, Inc., Annapolis, MD, STARFIRE Reference Manual, 1999. www.annapmicro.com.

Fig. 5. Retrieved 2-D oxygen saturation levels. (a) Result with the gray scaled mask filter and (b) with the binary filter. High predicted values around the ONH are caused by the extremely high reflectance of the ONH. Thin vessels seen in the original multispectral images were removed by the mask filters.

LITERATURE REVIEW ON GENERIC LOSSLESS VISIBLE WATERMARKING & LOSSLESS IMAGE RECOVERY

1D. Phaneendra, 2I. Suneetha, 3A. Rajani
1M.Tech (DECS) Student, 2Associate Professor & Head, 3Assistant Professor
Department of ECE, AITS
Annamacharya Institute of Technology and Sciences, Tirupati, India - 517520
1dorasalaphanendrakumarreddy@gmail.com
2iralasuneetha.aits@gmail.com
3rajanirevanth446@gmail.com

Abstract: One way for copyright protection is digital watermarking. Digital watermarking is the process of embedding information regarding the authenticity or the identity of the owners into an image or any piece of data. Digital watermarking has been classified into two types: visible and invisible watermarking. By the use of reversible watermarking, the embedded watermark can be removed and the original content restored. Lossless image recovery is a difficult task, but it is important in most of the applications where the quality of the image is a concern. There are many methods for visible watermarking with lossless image recovery; one-to-one compound mapping is one such technique. The compound mapping is reversible and allows lossless recovery of original images from the watermarked images. Security protection measures can be used to prevent illegal attacks.

Key Terms: Reversible visible watermarking, Discrete Cosine Transform (DCT), Discrete Fourier Transform (DFT), Discrete Wavelet Transform (DWT).

I. INTRODUCTION

The concepts of authenticity and copyright protection are of major importance in the framework of our information society. For example, TV channels usually place a small visible logo on the image corner (or a wider translucent logo) for copyright protection. In this way, unauthorized duplication is discouraged and the recipients can easily identify the video source. Official scripts are stamped or typed on watermarked papers for authenticity proof. Bank notes also use watermarks for the same purpose, which are very difficult to reproduce by conventional photocopying techniques.

Digital image watermarking methods are usually classified into two types: visible and invisible [1-7]. Invisible watermarking aims to embed copyright information into host media so that, in case of copyright infringements, the hidden information can be retrieved to identify the ownership of the protected host. It is important that the watermarked image be resistant to common image operations, which ensures that the hidden information is still retrievable after alterations without any defect, meaning that the recovered image is the same as the original. On the other hand, methods of visible watermarking yield visible watermarks. These visible watermarks are generally still clearly visible after applying common image operations. In addition, ownership information is conveyed directly on the media, and copyright violation attempts can be deterred.

In general, embedding of watermarks degrades the quality of the host media. Legitimate users are allowed to remove the embedded watermark and restore the original content as needed using a group of techniques, namely reversible watermarking [8-11]. However, lossless image recovery, which means that the recovered image is the same as the original, is not guaranteed by all reversible watermarking techniques. Lossless recovery is important where there are serious concerns about image quality, such as in forensics, military applications, historical art imaging, or medical image analysis.

The most common approach is to embed a monochrome watermark using deterministic and reversible mappings of pixel values or DCT coefficients in the watermark region [6,9,11]. Another is to rotate consecutive watermark pixels to embed a visible watermark [11]. Watermarks of arbitrary sizes can be embedded into any host image, but only binary visible watermarks can be embedded using these approaches.

Lossless visible watermarking is proposed by using one-to-one compound mappings which allow the mapped values to be controllable. The approach is generic, leading to the possibility of embedding different types of visible watermarks into cover images. Two applications of the proposed method are demonstrated, where we can embed opaque monochrome watermarks and non-uniformly translucent full-color ones into color images.
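The reversibility that this review emphasizes can be illustrated with a toy example; the affine map modulo 256 below only shows why an invertible (one-to-one) pixel mapping permits lossless recovery, and is not the compound mapping of the reviewed method.

# Toy invertible pixel mapping: forward marking and exact recovery.
import numpy as np

A, B = 7, 33                     # odd multiplier, hence invertible modulo 256
A_INV = pow(A, -1, 256)          # modular inverse (Python 3.8+)

host = np.random.randint(0, 256, (64, 64)).astype(np.uint16)
marked = (A * host + B) % 256                      # visibly alters the region
restored = (A_INV * ((marked - B) % 256)) % 256    # inverse mapping

assert np.array_equal(restored, host)              # lossless recovery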


II. RELATED WORK

2.1 EXISTING WATERMARKING TECHNOLOGIES

A. Spatial-Domain Technologies

Spatial-domain technologies refer to those embedding watermarks by directly changing pixel values of host images. A common spatial-domain algorithm is Least Significant Bit (LSB) embedding. LSB is the most straightforward method of watermark embedding. The most serious drawback of spatial-domain technologies is limited robustness.

In the spatial domain, pixels in randomly selected regions of the image are modified according to the signature or logo desired by the author of the product. This method involves modifying the pixel values of the original image where the watermark should be embedded. Fig. 1 shows the block diagram of a spatial-domain data embedding system.

Fig. 1. Spatial domain data embedding system.

Randomly selected image data are dithered by a small amount according to a predefined algorithm, whose complexity may vary in practical systems. The algorithm defines the intensity and the position of the watermark on the original image. One of the major disadvantages of conventional watermarking is that the watermark can be easily extracted from the original image, which makes this technique unsuitable for copyright authentication. There are three factors that determine the parameters of the algorithm applied in spatial-domain watermarking. The three factors are:

- The information associated with the signature. Basically, the signature is the watermark embedded on the original image. The information of the signature is closely related to the size and quality of the signature.
- The secret random key. The secret key may be included in the process of watermarking to improve the security during transmission. If a key is included, only the receiver who knows the key can extract the watermark, and not any intruders.
- The masking property of the image. The masking property of the image is related to the quality and composition of the image, which signifies the clarity of the watermark on the original image.

One form of the data embedding algorithm is given by the equation

y_w(i,j) = y(i,j) + a * I(i,j)

where y(i,j) is the original image intensity at pixel position (i,j), y_w(i,j) is the watermarked image, a is an embedding strength factor, and I represents the embedded data in the form of small changes in intensity levels. The author of the watermark holds two keys: the region of the image where the logo is marked, and the information in the watermark, I. Given the marked image, the original owner will be able to recover the watermark by comparing the marked image with the original. In the reconstruction of the embedded watermark, the following computation is made:

I = (y_w - y) / a

Fig. 2. Watermarking result of a color host image. (a) Host image. (b) Logo image. (c) Resulting watermarked image. (d) Extracted watermark image.

Fig. 2 shows the output results of the spatial-domain technique with the host image, logo image, watermarked image and extracted watermark image, respectively.
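A minimal NumPy sketch of this additive embedding and recovery follows; the region, the strength factor a and the logo pattern are chosen arbitrarily for illustration.

# Additive spatial-domain embedding y_w = y + a*I and recovery I = (y_w - y)/a.
import numpy as np

a = 8.0                                        # embedding strength (assumption)
y = np.random.randint(0, 248, (256, 256)).astype(np.float64)  # stand-in host image
I = np.zeros_like(y)
I[64:128, 64:128] = 1.0                        # secret region carrying the logo

y_w = y + a * I                                # watermarked image
I_rec = (y_w - y) / a                          # recovery, given the original image
assert np.allclose(I_rec, I)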


It is difficult for spatial-domain watermarks to survive attacks such as lossy compression and low-pass filtering. Moreover, the amount of information that can be embedded in the spatial domain is very limited.

B. Frequency-Domain Technologies
Compared to spatial-domain watermarks, watermarks in the frequency domain are more robust and more compatible with popular image compression standards, so frequency-domain watermarking has received much more attention. To embed a watermark, a frequency transformation is applied to the host data and modifications are then made to the transform coefficients. Possible frequency image transformations include the Discrete Fourier Transform (DFT), the Discrete Cosine Transform (DCT) and others.

Figures 3 and 4 illustrate watermark embedding and detection/extraction in the frequency domain, respectively. Most frequency-domain algorithms make use of the spread-spectrum communication technique. By using a bandwidth larger than required to transmit the signal, the SNR in each frequency band can be kept small even when the total transmitted power is large; when information in several bands is lost, the transmitted signal can still be recovered from the remaining ones. Spread-spectrum watermarking schemes apply spread-spectrum communication to digital watermarking: the watermark is embedded across the whole host image and distributed over the whole frequency band. To destroy the watermark, one has to add noise of sufficiently large amplitude, which would heavily degrade the quality of the watermarked image and would therefore be considered an unsuccessful attack.

The first efficient watermarking scheme was introduced by Koch et al. In their method, the image is first divided into square blocks of size 8x8 for DCT computation, and a pair of mid-frequency coefficients is chosen for modification from 12 predetermined pairs. Bors and Pitas developed a method that modifies DCT coefficients satisfying a block site selection constraint: after dividing the image into blocks of size 8x8, certain blocks are selected based on a Gaussian network classifier decision, and the middle-range frequency DCT coefficients are then modified, using either a linear DCT constraint or a circular DCT detection region. A DCT-domain watermarking technique based on the frequency masking of DCT blocks was introduced by Swanson. Cox developed the first frequency-domain watermarking scheme, and many frequency-domain watermarking algorithms have been proposed since.
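To make the block-DCT idea concrete, here is a hedged Python sketch in the spirit of Koch et al.'s method: one bit is embedded per 8x8 block by enforcing an order on a predetermined pair of mid-frequency DCT coefficients. The coefficient pair and the margin D are illustrative assumptions, not the published parameters.

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(block, bit, p1=(3, 4), p2=(4, 3), D=10.0):
        c = dctn(block.astype(float), norm='ortho')
        a, b = c[p1], c[p2]
        # Encode the bit in the relative order of the two coefficients.
        if bit == 1 and not a > b:
            c[p1], c[p2] = max(a, b) + D, min(a, b)
        elif bit == 0 and not a < b:
            c[p1], c[p2] = min(a, b), max(a, b) + D
        return idctn(c, norm='ortho')

    def extract_bit(block, p1=(3, 4), p2=(4, 3)):
        c = dctn(block.astype(float), norm='ortho')
        return 1 if c[p1] > c[p2] else 0

    blk = np.random.default_rng(0).integers(0, 256, (8, 8))
    assert extract_bit(embed_bit(blk, 1)) == 1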
One major reason why frequency-domain watermarking schemes are attractive is their compatibility with existing image compression standards, in particular the JPEG standard. This compatibility ensures good performance when the watermarked image is subjected to lossy compression, which is one of the most common image processing operations today.

Besides these advantages, the frequency domain has the disadvantage that it is not suitable for visible watermarking; mostly, only invisible watermarking is performed in the frequency domain.

C. Wavelet-Domain Technologies
The wavelet transform is equivalent to a hierarchical sub-band system, where the sub-bands are logarithmically spaced in frequency. The basic idea of the DWT for a two-dimensional image is as follows. An image is first decomposed into four parts of high, middle and low frequencies (the LL1, HL1, LH1 and HH1 sub-bands) by critically sub-sampling the horizontal and vertical channels using Daubechies filters. The sub-bands HL1, LH1 and HH1 represent the finest scale of wavelet coefficients, as shown in Fig. 5. To obtain the next coarser scale of wavelet coefficients, the sub-band LL1 is further decomposed and critically sub-sampled. This process is repeated several times, as determined by the application at hand. An example of an image decomposed into ten sub-bands over three levels is shown in Fig. 6. Each level contains low-low, low-high, high-low and high-high frequency bands.

Fig. 5. Three-level wavelet decomposition

Fig. 6. (a) Three-level decomposition. (b) Coefficient distribution.

Furthermore, the original image can be reconstructed from these DWT coefficients; the same filters must be used for the reconstruction. This reconstruction process is called the inverse DWT (IDWT). If I(m, n) represents an image, the DWT and IDWT for I(m, n) can be defined by applying the DWT and IDWT on each dimension m and n separately.
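A short Python sketch of this multi-level decomposition and its inverse, using the PyWavelets package (a library choice assumed here; the text names Daubechies filters but no specific tool):

    import numpy as np
    import pywt  # PyWavelets

    image = np.random.default_rng(1).random((64, 64))
    # One decomposition level: the approximation (LL) band plus the
    # three detail sub-bands (horizontal, vertical, diagonal).
    LL1, details1 = pywt.dwt2(image, 'db2')
    # Repeating the split on the LL band gives the ten sub-bands of a
    # three-level decomposition.
    coeffs = pywt.wavedec2(image, 'db2', level=3)
    # The same filters are used for the inverse transform (IDWT).
    reconstructed = pywt.waverec2(coeffs, 'db2')
    assert np.allclose(reconstructed, image)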

III. CONCLUSION
In this paper we have briefly discussed the methods (spatial domain, frequency domain and wavelet domain) formerly used for visible watermarking. These methods use DCT, DFT, DWT and LSB (Least Significant Bit) techniques to achieve the desired visible watermarking.
A novel method for generic visible watermarking
with a capability of lossless image recovery is
proposed. The method is based on the use of
deterministic one-to-one compound mappings of
image pixel values for overlaying a variety of visible
watermarks of arbitrary sizes on cover images. The
compound mappings are proved to be reversible,
which allows for lossless recovery of original images
from watermarked images. The mappings may be
adjusted to yield pixel values close to those of desired
visible watermarks. Different types of visible
watermarks, including opaque monochrome and
translucent full color ones, are embedded as
applications of the proposed generic approach. A
two-fold monotonically increasing compound
mapping is created and proved to yield more
distinctive visible watermarks in the watermarked
image. Security protection measures by parameter
and mapping randomizations have also been
proposed to deter attackers from illicit image
recoveries. Experimental results demonstrating the
effectiveness of the proposed approach are also
included.



REFERENCES
[1] G. Braudaway, K. A. Magerlein, and F. Mintzer, "Protecting publicly available images with a visible image watermark," in Proc. SPIE Int. Conf. Electronic Imaging, Feb. 1996, vol. 2659, pp. 126-133.
[2] I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Process., vol. 6, no. 12, pp. 1673-1687, Jun. 1997.
[3] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Information hiding - a survey," Proc. IEEE, vol. 87, no. 7, Jul. 1999.
[4] M. S. Kankanhalli, Rajmohan, and K. R. Ramakrishnan, "Adaptive visible watermarking of images," in Proc. IEEE Int. Conf. Multimedia Computing and Systems, 1999, vol. 1.
[5] S. P. Mohanty, K. R. Ramakrishnan, and M. S. Kankanhalli, "A DCT domain visible watermarking technique for images," in Proc. IEEE Int. Conf. Multimedia and Expo, Jul. 2000, vol. 2, pp. 1029-1032.
[6] N. F. Johnson, Z. Duric, and S. Jajodia, Information Hiding: Steganography and Watermarking - Attacks and Countermeasures. Boston, MA: Kluwer, 2001.
[7] Y. Hu and S. Kwong, "Wavelet domain adaptive visible watermarking," Electron. Lett., vol. 37, no. 20, pp. 1219-1220, Sep. 2001.
[8] Y. J. Cheng and W. H. Tsai, "A new method for copyright and integrity protection for bitmap images by removable visible watermarks and irremovable invisible watermarks," presented at the Int. Computer Symp. - Workshop on Cryptology and Information Security, Hualien, Taiwan, R.O.C., Dec. 2002.
[9] P. M. Huang and W. H. Tsai, "Copyright protection and authentication of grayscale images by removable visible watermarking and invisible signal embedding techniques: A new approach," presented at the Conf. Computer Vision, Graphics and Image Processing, Kinmen, Taiwan, R.O.C., Aug. 2003.
[10] Y. Hu, S. Kwong, and J. Huang, "An algorithm for removable visible watermarking," IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 1, pp. 129-133, Jan. 2006.
[11] S. K. Yip, O. C. Au, C. W. Ho, and H. M. Wong, "Lossless visible watermarking," in Proc. IEEE Int. Conf. Multimedia and Expo, Jul. 2006, pp. 853-856.
[12] Neminath Hubballi and Kanyakumari D P, "Novel DCT based watermarking scheme for digital images," International Journal of Recent Trends in Engineering, vol. 1, no. 1, pp. 430-433, May 2009.
[13] Mohammad Reza Soheili, "A Robust Digital Image Watermarking Scheme Based on DWT," Journal of Advances in Computer Research, vol. 2, pp. 75-82, 2010.
[14] R. Aarthi, V. Jaganya, and S. Poonkuntran, "Modified LSB Watermarking for Image Authentication," International Journal of Computer & Communication Technology (IJCCT), ISSN (Online): 2231-0371, ISSN (Print): 0975-7449, vol. 3, iss. 3, 2012.
[15] Vaishali S. Jabade and Sachin R. Gengaje, "Literature Review of Wavelet Based Digital Image Watermarking Techniques," International Journal of Computer Applications, vol. 31, no. 1, pp. 28-35, Oct. 2011.


Artificial Intelligence

K. Sai Krishna
3rd Year CSE, SRM University, Chennai
Mobile No.: 9003226055

ABSTRACT:
Artificial Intelligence (AI) for speech recognition involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (like computers, robots, etc.). AI is the behaviour of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural language processing (NLP) refers to artificial intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action. The input words are scanned and matched against internally stored known words; identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language.

INTRODUCTION:
Artificial intelligence involves two basic ideas. First, it involves studying the
thought processes of human beings. Second, it deals with representing those processes
via machines (like computers, robots, etc.).
AI is the behaviour of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence.
Natural language processing (NLP) refers to artificial intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action.
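The keyword-driven behaviour described here can be made concrete with a tiny Python sketch; the word list and actions are invented purely for illustration:

    # Minimal keyword-spotting sketch: stored known words trigger actions.
    KNOWN_ACTIONS = {
        "book": lambda: print("Opening the reservation form..."),
        "cancel": lambda: print("Cancelling the reservation..."),
        "schedule": lambda: print("Showing the schedule..."),
    }

    def respond(sentence):
        # Scan the input words and match them against stored known words.
        for word in sentence.lower().split():
            action = KNOWN_ACTIONS.get(word.strip(".,!?"))
            if action:
                action()
                return True
        print("Sorry, I did not understand.")
        return False

    respond("Please book a ticket to Chennai")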

Definition:


It is the science and engineering of making intelligent machines, especially intelligent computer programs. Intelligence itself cannot easily be defined, but AI can be described as the branch of computer science dealing with the simulation of machines exhibiting intelligent behaviour.

History:
Work on AI started soon after World War II, and the name was coined in 1956. Several other names were proposed, including:
Complex Information Processing
Heuristic Programming
Machine Intelligence
Computational Rationality

Foundation:
Philosophy (428 B.C.-present)
Mathematics (c. 800-present)
Economics (1776-present)
Neuroscience (1861-present)
Psychology (1879-present)
Computer Engineering (1940-present)
Control theory and cybernetics (1948-present)
Linguistics (1957-present)

Speaker independency:
Speech quality varies from person to person, so it is difficult to build an electronic system that recognises everyone's voice. By limiting the system to the voice of a single person, the system becomes not only simpler but also more reliable; the computer must then be trained to the voice of that particular individual. Such a system is called a speaker-dependent system.
Speaker-independent systems can be used by anybody and can recognise any voice, even though the characteristics vary widely from one speaker to another. Most of these systems are costly and complex, and they have very limited vocabularies.
It is also important to consider the environment in which the speech recognition system has to work. The grammar used by the speaker and accepted by the system, the noise level, the noise type, the position of the microphone, and the speed and manner of the user's speech are some factors that may affect the quality of speech recognition.

Environmental influence:
Real applications demand that the performance of the recognition system be unaffected by changes in the environment. However, it is a fact that when a system is trained and tested under different conditions, the recognition rate drops unacceptably. We therefore need to be concerned about the variability introduced when different microphones are used in training and in testing, specifically during the development of procedures; such care can significantly improve the accuracy of recognition systems that use desktop microphones.
Acoustical distortions can degrade the accuracy of recognition systems. Obstacles to robustness include additive noise from machinery, competing talkers, reverberation from surface reflections in a room, and spectral shaping by microphones and by the vocal tracts of individual speakers. These sources of distortion fall into two complementary classes: additive noise, and distortions resulting from the convolution of the speech signal with an unknown linear system.
A number of algorithms for speech enhancement have been proposed. These include the following:
1. Spectral subtraction of DFT coefficients
2. MMSE techniques to estimate the DFT coefficients of corrupted speech
3. Spectral equalisation to compensate for convolutional distortions
4. Combined spectral subtraction and spectral equalisation

Although relatively successful, all these methods depend on the assumption that the spectral estimates are independent across frequencies. Improved performance can be obtained with an MMSE estimator in which the correlation among frequencies is modelled explicitly.
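Of the methods listed above, the first is easy to sketch: spectral subtraction removes an estimated noise magnitude spectrum from the DFT of a noisy frame. In the Python sketch below, the flat noise estimate and the spectral floor factor are illustrative assumptions, not values from the text:

    import numpy as np

    def spectral_subtract(noisy_frame, noise_mag, floor=0.05):
        spectrum = np.fft.rfft(noisy_frame)
        mag, phase = np.abs(spectrum), np.angle(spectrum)
        # Subtract the noise magnitude; keep a small spectral floor so
        # magnitudes never go negative.
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy_frame))

    rng = np.random.default_rng(0)
    frame = np.sin(2 * np.pi * 440 * np.arange(256) / 8000) + 0.1 * rng.standard_normal(256)
    noise_mag = np.full(129, 0.1 * np.sqrt(256 / 2))  # flat white-noise estimate (assumed)
    enhanced = spectral_subtract(frame, noise_mag)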

Speaker-specific features:
Speaker identity correlates with the physiological and behavioural characteristics of the speaker. These characteristics exist in the vocal tract characteristics and in the voice source characteristics, as well as in dynamic features spanning several segments.
The most common short-term spectral measurements currently used are the spectral coefficients derived from Linear Predictive Coding (LPC) and their regression coefficients. A spectral envelope reconstructed from a truncated set of spectral coefficients is much smoother than one reconstructed directly from the LPC coefficients, and therefore provides a more stable representation from one repetition to another of a particular speaker's utterances.
As for the regression coefficients, typically the first- and second-order coefficients are extracted at every frame period to represent the spectral dynamics. These coefficients are derivatives of the time functions of the spectral coefficients and are called the delta and delta-delta spectral coefficients, respectively.
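A short sketch of how such regression (delta) coefficients are commonly computed; the +/-2-frame regression window and edge padding are conventional choices assumed here, not taken from the text:

    import numpy as np

    def delta(features, N=2):
        # features: (num_frames, num_coeffs); pad the edges by repetition.
        padded = np.pad(features, ((N, N), (0, 0)), mode='edge')
        T = len(features)
        denom = 2 * sum(n * n for n in range(1, N + 1))
        # Standard regression formula over a window of +/-N frames.
        return sum(n * (padded[N + n:T + N + n] - padded[N - n:T + N - n])
                   for n in range(1, N + 1)) / denom

    frames = np.random.default_rng(2).random((100, 12))  # e.g. 12 LPC-derived coeffs
    d = delta(frames)   # delta coefficients
    dd = delta(d)       # delta-delta coefficients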


Speech Recognition:
The user communicates with the application through the appropriate input device, i.e. a microphone. The recogniser converts the analog signal into a digital signal for speech processing, and a stream of text is generated after the processing. This source-language text becomes the input to the translation engine, which converts it to the target-language text.

Salient Features:
Input modes: through a speech engine or through soft copy
Interactive graphical user interface
Format retention
Fast and standard translation
Interactive preprocessing tools: spell checker, phrase marker, proper noun, date and other package-specific identifiers
Input formats: .txt, .doc, .rtf
User-friendly selection of multiple outputs
Online thesaurus for selection of contextually appropriate synonyms
Online word addition, grammar creation and updating facility
Personal account creation and inbox management

Applications:
One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously: the user can concentrate on observation and manual operations, and still control the machinery by voice input commands.
Another major application of speech processing is in military operations, for example voice control of weapons. With reliable speech recognition equipment, pilots can give commands and information to their computers by simply speaking into their microphones; they don't have to use their hands for this purpose.
Another good example is a radiologist scanning hundreds of X-rays, ultrasonograms and CT scans while simultaneously dictating conclusions to a speech recognition system connected to a word processor; the radiologist can focus his attention on the images rather than on writing the text.
Voice recognition could also be used on computers for making airline and hotel reservations: a user simply needs to state his needs, to make a reservation, cancel a reservation, or make enquiries about schedules.

Ultimate Goal:
The ultimate goal of the Artificial Intelligence is to build a person, or, more humbly,
an animal.

Conclusion:


By using speaker recognition technology we can achieve many uses. This technology helps physically challenged skilled persons, who can do their work using it without pushing any buttons. ASR technology is also used in military weapons and in research centres. Nowadays this technology is also used by CID officers, who use it to trace criminal activities.



Strength and Durability Characteristics of Geopolymer Concrete using GGBS and RHA

S. Subburaj, T. Anantha Shagar
V V College of Engineering, Tisayanvilai, Tuticorin
Abstract - Cement, the second most consumed product in the world, contributes nearly 7% of the global carbon dioxide emission. Several efforts are in progress to reduce the use of Portland cement in concrete in order to address global warming issues. Geopolymer concrete is a cementless concrete: it has the potential to reduce carbon emission globally, leading to sustainable development and growth of the concrete industry. In this study, geopolymer concrete is prepared by incorporating ground granulated blast furnace slag (GGBS) and black rice husk ash (BRHA) as source materials. In India, RHA is used for cattle feed, partition board manufacturing, land filling, etc. RHA is either white or black in colour: if the rice husk is burnt at controlled temperature and duration, the resulting ash is white, with a high percentage of silica content; the readily available RHA is black in colour, owing to the uncontrolled burning temperature and duration in various rice mills, and the resulting ash is therefore called black rice husk ash (BRHA).
In this study GGBS is used as the base material for geopolymer concrete and is replaced up to 30% by BRHA. The strength characteristics of GGBS- and BRHA-based geopolymer concrete have been studied through compressive strength tests. The results show that the replacement with BRHA decreases the compressive strength of geopolymer concrete, because of the unburnt carbon content present in the BRHA.

Keywords: Geopolymer concrete, GGBS, Black Rice Husk Ash, Compressive strength

INTRODUCTION
Concrete is the second most used material in the world after water. Ordinary Portland cement has been used traditionally as the binding material for the preparation of concrete. An estimated one tonne of carbon dioxide is released to the atmosphere when one tonne of ordinary Portland cement is manufactured, and the cement manufacturing process contributes 7% of the global carbon dioxide emission. It is therefore important to find an alternative binder with lower CO2 emission than cement. Geopolymer is an excellent alternative, which transforms industrial waste products like fly ash, GGBS and rice husk ash into binders for concrete. The Al-Si materials used as source materials undergo dissolution, gel formation, setting and hardening stages to form geopolymers.
There are two main constituents of geopolymers, namely the source materials and the alkaline liquids. The source materials for geopolymers based on alumina-silicate should be rich in silicon (Si) and aluminium (Al). These could be natural minerals such as kaolinite, clays, etc.; alternatively, by-product materials such as fly ash, silica fume, slag, rice-husk ash, red mud, etc. could be used as source materials. The choice of source materials for making geopolymers depends on factors such as availability, cost, type of application, and the specific demand of the end users. The alkaline liquids are from soluble alkali metals that are usually sodium or potassium based. The most common alkaline liquids used in geopolymerization are a combination of sodium hydroxide (NaOH) or potassium hydroxide (KOH) and sodium silicate (Na2SiO3) or potassium silicate (K2SiO3).
The alumino-silicate material used in this study is a combination of rice husk ash and ground granulated blast furnace slag (GGBS). RHA is either white or black in color: if the rice husk is burnt at controlled temperature and duration, the resulting ash is white, with a high percentage of silica content; the readily available RHA is black in color, owing to the uncontrolled burning temperature and duration in various rice mills, and is therefore called black rice husk ash (BRHA). The RHA used in this study was black rice husk ash. This study aims to synthesize geopolymer concrete using a combination of GGBS and BRHA, with GGBS used as the base material and replaced up to 30% by BRHA, to understand the strength and durability characteristics.
MATERIALS
The materials used for making the GGBS-based geopolymer concrete specimens are GGBS, rice husk ash, aggregates, alkaline liquids, water and super plasticizer. Ground granulated blast furnace slag was procured from JSW Cements in Bellari, Karnataka. Black rice husk ash was obtained from a rice mill near Karaikudi and then finely ground. The properties of GGBS and BRHA are given in Table I.

TABLE I. PROPERTIES OF GGBS AND BRHA

Property            GGBS      BRHA
SiO2                31.25 %   93.96 %
Al2O3               14.06 %   0.56 %
Fe2O3               2.80 %    0.43 %
CaO                 33.75 %   0.55 %
MgO                 7.03 %    0.4 %
Specific gravity    2.61      2.11


A. Aggregates
Coarse aggregate passing through a 20 mm sieve and fine aggregate of river sand from a local supplier were used for the present study; their properties are given in Table II.

TABLE II. PROPERTIES OF AGGREGATES

Property           Coarse Aggregate   Fine Aggregate
Specific gravity   2.73               2.60
Fineness modulus   7.36               2.63
Bulk density       1533 kg/m3         1254 kg/m3

B. Alkaline Solution
A mixture of sodium hydroxide and sodium silicate was used as the alkaline solution in the present study. Commercial grade sodium hydroxide in pellet form (97%-100% purity) and sodium silicate solution (7.5%-8.5% Na2O, 25%-28% SiO2 and 63.5%-67.5% water) were used. The ratio of sodium silicate to sodium hydroxide was kept at 2.5. In this study the compressive strength of geopolymer concrete is examined for a mix with 8 M NaOH solution. The molecular weight of NaOH is 40, so to prepare 8 M NaOH solution, 320 g of NaOH flakes are weighed and dissolved in distilled water to form 1 litre of solution: a volumetric flask of 1 litre capacity is taken, and the NaOH flakes are added slowly to distilled water to prepare the 1 litre solution.
In order to improve the workability of the fresh concrete, a high-range water-reducing naphthalene-based super plasticizer was used. Extra water, nearly 15% of the binder, is added to increase the workability of the concrete.
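As a quick check of this preparation arithmetic (molar mass of NaOH taken as 40 g/mol), a one-line Python sketch:

    # Mass of NaOH pellets needed for a given molarity and volume.
    def naoh_mass_grams(molarity, litres=1.0, molar_mass=40.0):
        return molarity * molar_mass * litres

    print(naoh_mass_grams(8))  # 320.0 g of NaOH per litre for an 8 M solution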
METHODOLOGY

C. Mixing, Casting and Curing
The mix proportions were taken as given in Table III. As there are no code provisions for the mix design of geopolymer concrete, the density of geopolymer concrete was assumed to be 2400 kg/m3 and the other quantities were calculated from the density of the concrete [4]. The combined total volume occupied by the coarse and fine aggregates was assumed to be 77%, and the alkaline-liquid-to-binder ratio was taken as 0.40. GGBS was kept as the primary binder, replaced by BRHA at 0, 10, 20 and 30% by weight. The normal mixing procedure was adopted: first, the fine aggregate, coarse aggregate and GGBS & BRHA were mixed in dry condition for 3-4 minutes, and then the alkaline solution (a combination of sodium hydroxide and sodium silicate solution) with super plasticizer was added to the dry mix. Extra water, about 15% by weight of the binder, was then added to improve the workability, and the mixing was continued for about 6-8 minutes. After mixing, the concrete was placed in cube moulds of size 150 mm x 150 mm x 150 mm with proper compaction. The GPC specimens were then placed in a hot air oven at a temperature of 60 °C for 48 hours, after which the specimens were taken out and cured at room temperature till the time of testing. The cubes were then tested at 3, 7 and 28 days from the day of casting.

TABLE III. MIX PROPORTIONS OF GEOPOLYMER CONCRETE

Mass (kg/m3)
Materials            Mix 1      Mix 2      Mix 3      Mix 4
                     (0% RHA)   (10% RHA)  (20% RHA)  (30% RHA)
GGBS                 394        355        315        276
RHA                  0          39         79         118
Coarse Aggregate     647        647        647        647
Fine Aggregate       1201       1201       1201       1201
Sodium Hydroxide     45         45         45         45
Sodium Silicate      113        113        113        113
Super Plasticizer    -          -          -          -
Extra Water (15%)    59         59         59         59
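The arithmetic behind the Mix 1 column can be sketched in a few lines of Python. Treating the 77% aggregate share as a fraction of the assumed 2400 kg/m3 total mass reproduces the tabulated values, so that reading is assumed here; this is a check on the stated assumptions, not the authors' design code:

    density = 2400.0                        # assumed density of GPC, kg/m3
    aggregates = 0.77 * density             # coarse (647) + fine (1201) = 1848
    binder = (density - aggregates) / 1.40  # binder + 0.40*binder fills the rest
    alkaline = 0.40 * binder                # alkaline-liquid-to-binder ratio 0.40
    naoh = alkaline / 3.5                   # Na2SiO3 : NaOH = 2.5, so NaOH is 1/3.5
    na2sio3 = alkaline - naoh
    extra_water = 0.15 * binder             # added over and above, 15% of binder
    print(round(binder), round(naoh), round(na2sio3), round(extra_water))
    # -> 394 45 113 59, matching the GGBS, NaOH, Na2SiO3 and extra water rows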

RESULTS AND DISCUSSION

The cubes were tested in the compression testing machine to determine their compressive strength at the ages of 3, 7 and 28 days from the day of casting. Table IV and Fig. 1 show the variation of compressive strength with the percentage replacement of BRHA. Table IV shows that the GGBS-based geopolymer concrete attained a compressive strength of 69 MPa, while 10% replacement of GGBS by RHA gives a compressive strength of 58 MPa.
Fig. 1 shows that compressive strength increases with curing time: the increase is approximately 16 to 20% from 3 days to 28 days of curing, and approximately 24% for Mix 1. The graph also shows that replacing GGBS with BRHA in GGBS-based geopolymer concrete decreases the compressive strength, because of the unburnt carbon content present in the BRHA. The average 28-day compressive strength of Mix 2 and Mix 3 decreases by 20% and 46%, respectively, compared to Mix 1.
TABLE IV. COMPRESSIVE STRENGTH TEST RESULTS

Mix                          3-day (MPa)   7-day (MPa)   28-day (MPa)
Mix 1 (100% GGBS, 0% RHA)    55.9          60.5          69.2
Mix 2 (90% GGBS, 10% RHA)    48.6          54.3          57.46
Mix 3 (80% GGBS, 20% RHA)    40.75         44.72         47.36
Mix 4 (70% GGBS, 30% RHA)    20.8          23.54         27.36

Fig. 1. Variation of compressive strength at 3, 7 and 28 days with replacement of BRHA

CONCLUSIONS
From the limited experimental study conducted on geopolymer concrete made with GGBS and BRHA, the following conclusions are drawn.
1. The GGBS-based geopolymer concrete gives higher strength.
2. The replacement of GGBS by BRHA decreases the compressive strength because of the unburnt carbon content.
3. The replacement of BRHA in GGBS-based geopolymer concrete is significant only up to 10%.
4. Due to the high silica content of BRHA (94%), a fast chemical reaction occurs, resulting in quick setting of the geopolymer concrete.
5. In this study, the Si/Al ratio was not maintained, owing to the low alumina content of the source materials, resulting in lower compressive strength.
6. GGBS with 10% RHA replacement appears sound and eco-friendly when compared with OPC.

ACKNOWLEDGMENT
The author would like to acknowledge his research supervisor, Mr. P. Muthuraman, for his meticulous guidance and constant motivation. The author would also like to thank the faculty members of the Division of Structural Engineering, V V College of Engineering, Tisayanvilai, for their constant encouragement and support during the project work, and his family and friends for their complete moral support.

REFERENCES

Alireza Naji Givi, Suraya Abdul Rashid, Farah Nora A. Aziz and Mohamad Amran Mohd Salleh (2010), "Assessment of the effects of rice husk ash particle size on strength, water permeability and workability of binary blended concrete," Construction and Building Materials, Vol. 24, Issue 11, pp. 2145-2150.
Bhosale, M. A. and Shinde, N. N. (2012), "Geo-polymer concrete by using fly ash in construction," IOSR Journal of Mechanical and Civil Engineering, Vol. 1, Issue 3, pp. 25-30.
Detphan, S. and Chindaprasirt, P. (2009), "Preparation of fly ash and rice husk ash geo-polymer," International Journal of Minerals, Metallurgy and Materials, Vol. 16, Issue 6, pp. 720-726.
Hardjito, D. and Rangan, B. V. (2005), "Development and Properties of Low Calcium Fly Ash Based Geo-polymer Concrete," Research Report GC 1, Faculty of Engineering, Curtin University of Technology.
Joseph Davidovits (1994), "Global Warming Impact on the Cement and Aggregates Industries," World Resource Review, Vol. 8, No. 2, pp. 263-278.
Kartini, K., Mahmud, H. B. and Hamidah, M. S. (2006), "Strength Properties of Grade 30 Rice Husk Ash Concrete," 31st Conference on Our World in Concrete & Structures.
Malhotra, V. M. (1999), "Making Concrete 'Greener' With Fly Ash," American Concrete Institute, pp. 61-66.
McCaffery, R. (2002), "Climate Change and the Cement Industry," Global Cement and Lime Magazine (Environment Special Issue), pp. 15-19.
Mehta, P. K. (2001), "Reducing the Environmental Impact of Concrete," ACI Concrete International, Vol. 23, No. 10, pp. 61-66.


Vulnerabilities in Credit Card Security

S. Krishnakumar
Department of Computer Science and Engineering
MNM Jain Engineering College, Chennai
krishnakumar.srinivasan1@gmail.com
9789091381

Abstract - This paper highlights the vulnerabilities in the online payment gateways of e-commerce sites and in the security of customer credit card information stored in them, and shows how they can be exploited. The paper also suggests mechanisms by which these vulnerabilities can be plugged.

I. INTRODUCTION

The reach of the Internet throughout the world is growing day by day, and this development has led to the launch of a host of online services like banking, online shopping, e-ticket booking, etc. Almost all such services have an element of monetary transaction in them; the services most heavily based on online monetary transactions are banking and online shopping (e-commerce). These service providers collect and record a lot of information about their customers and clients, especially financial information like the credit card number and, in some cases, the PIN, apart from name, address and preferences. Hence it becomes imperative for these service providers to have a foolproof security mechanism to protect the customer data they store. Failure to do so may expose such sensitive data to hackers and third parties who might misuse it for personal gain, which not only affects the customer but may also erode trust in the integrity of service providers; highly vulnerable sites in particular have this problem. This paper therefore highlights some of the security loopholes found in payment gateways and in the storage of customer data.

II. VULNERABILITY
A. Storage of customer data
First, almost all highly vulnerable e-commerce sites collect and store customer information such as name, address, password, preferences, purchase and transaction history, and in some cases even the credit card number and its CVV number. They use these data for analyzing customer trends and customizing the customer's online experience. Hence it becomes imperative for them either to have a foolproof mechanism to protect these data or to avoid collecting such sensitive data at all.

B. Access of customer data
The customer records and data stored on the website should be accessible only by the authorized personnel of the service provider and the system administrator, and should be secured from unauthorized access by third parties or individuals. Above all, the database information must be properly encrypted to prevent illegal elements from accessing such data.

C. Vulnerability analysis tools
A lot of open source and free cyber security and penetration testing tools are freely available on the internet. While the purpose of these tools is to detect and expose vulnerabilities and loopholes in the integrity of a system, they can also be used by hackers to scan for systems and websites which are vulnerable and can be easily exploited. Hence service providers and e-commerce companies must periodically test and analyze their websites and systems for security loopholes and find ways to plug them; otherwise hackers may use the same tools to commit credit fraud by stealing customer data from websites found to be vulnerable. Either the service provider does the integrity test, or a hacker will save him the trouble.

D. Authentication mechanisms
The authentication mechanisms employed by e-commerce sites for their customers also have loopholes which can be exploited by a hacker once he gains access to sensitive customer data of the e-commerce website. Most of the prominent e-commerce sites do not verify whether the shipping and billing addresses are the same when a customer places an order. This gives leeway to a hacker impersonating a customer to get away with goods purchased with the customer's id and credit card. A proper layer of authentication would be essential to prevent such frauds, apart from securing customer data.

III. EXPLOITING THE VULNERABILITIES
This section explains how a combined exploitation of the mentioned vulnerabilities can be used to gain access to


customer data stored on e-commerce sites, and how that data can be exploited for the personal gain of the hacker.

F. Scanning for vulnerabilities

First, a hacker scans for vulnerable e-commerce sites with a free security tool called a dork scanner. This tool is a simple Python script which returns websites whose URLs include a specified keyword, such as buy_now, add_to_cart or PayPal.
The hacker enters payment-related keywords (add_to_cart, payment, PayPal, buy_now), which are found in the URLs of the payment pages of e-commerce websites, as input to the dork scanner.
The dork scanner scans the web for websites containing the keywords and returns a list of matching websites.
From the returned list the hacker filters the sites having a particular kind of vulnerability, such as SQL injection or XSS.
After selecting a vulnerable site, the SQLMap Python tool is used to retrieve tables from the website database which store customer financial data.
In the case of highly vulnerable sites, tables containing keywords like "payment" are retrieved along with the data stored in them.
Some of these tables display only the last four digits of a credit card; the least protected websites display all the digits.

How hackers get the information:
After scanning, highly vulnerable sites are targeted. The database of the website is attacked using SQL injection or XSS; the attacker first finds the length (structure) of the database.
After getting the database, the attacker finds the tables in the database, and in each table scans the columns.
In shopping websites one can typically see tables such as TOTAL_ORDERS, CUST_DETAILS, CUST_PAYMENTS and CUST_EMAILS.
If the site is highly vulnerable, the payment methods can be seen, and all the credit card details and the address of the owner can be retrieved. These steps apply only to highly vulnerable shopping sites; these days many websites record all information, including the CVV, so such websites are always vulnerable.
If full credit card information is needed, a shell can be uploaded to the website; if the attacker gets the cPanel, the full credit card information will be there. Many websites even display the full credit card number directly.
Encryption is therefore very much essential, and SQL databases should be tested.

G. Exploiting the authentication mechanism
After obtaining customer financial data, including login id, password and credit card number, the hacker can impersonate the customer and place orders online. Since most websites do not check whether the billing and shipping addresses are the same, this forms a loophole for the hacker to exploit, even on major websites. The hacker, after extracting the customer data, knows where the customer is from and what his preferences are.

He then gets a VPN connection or remote desktop of that location and places orders on the website.
The hacker can match the credit card location and the VPN location, so that it is easy for him to purchase.
The hacker clears all his cookies and avoids blacklisted IPs, so that he gets a fresh VPN matching the card location.


This methodology is used to bypass websites, and many websites are not able to detect it; this illegal access should be fixed.

Some common loopholes in storing the data:
Highly secure websites will not store the credit card information of the customer after the transaction, but vulnerable sites will store the credit card data even after the transaction.
The stored data in these sites are not secure, as most of them are not encrypted.

Loopholes in payment gateways:
Even good shopping websites cannot detect unauthorized transactions; payment gateways fail to check the IP address of the person placing the order against the location of the credit card, and do not have enough security to verify and reject illegal IP addresses.

Loopholes in payment gateway SSL:
Hackers sometimes use a VPN to bypass the website; many websites do not detect such untrusted connections, and SSL verification is not secure enough, so hackers tend to use a VPN to bypass it. All websites should have good SSL verification to stop illegal IP access, along with good certificate verification.

IV. PREVENTIVE SECURITY MEASURES

These security loopholes can be plugged by various security mechanisms.

H. Encryption of user records

The e-commerce websites which store customer data and financial records must encrypt all data in their database to prevent unauthorized use of these sensitive data. Several free and strong open source encryption tools are easily available. Encrypted data are hard to decrypt without the proper authentication key and would require huge computing resources to break. This technique renders the data unusable even if they are retrieved in an unauthorized way.
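A minimal sketch of such record encryption with the open-source Python "cryptography" package; this is an illustrative tool choice, not the paper's prescribed mechanism, and key management is out of scope here:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the key must itself be stored securely
    f = Fernet(key)
    record = b"cust_id=1042;card=************1234"  # hypothetical record
    token = f.encrypt(record)          # ciphertext safe to store in the DB
    assert f.decrypt(token) == record  # recoverable only with the key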

I. Avoid storing credit card information

The e-commerce websites must avoid retaining sensitive financial information about the customer. The website should not save CVV numbers; once the customer is directed to the payment gateway, the CVV number should be encrypted even from the website, and only then passed on for bank authentication.

J. Authentication mechanism to fix loopholes in payment gateways

The e-commerce websites must have a proper authentication mechanism to make sure an order is placed by the original customer and not by an impersonator:
First, it should check whether the billing address matches the credit card address.
If it matches, it should then verify the billing phone number against the bank details.
Verification of the IP should be strong, with good SSL certification to check that the connection is trusted.
After this, it should compare the IP location where the card was first accessed with the IP of the user who is placing the order.
Dynamic IP addresses can be recorded, and matching against the dynamic range makes it possible to identify illegal access.
If the user is placing an order from some other location, the website should issue an authentication code, and the user should then contact the bank to process the order.
Banking websites should maintain the IPs from which each credit card was first used.
Payment gateways should be built with open source coding, such as Python, so that untrusted and encrypted IPs can be verified.

Basic algorithm:

if the user enters the website, proceeds to checkout and enters his card information,
then the website payment gateway should verify that the connection is trusted, to find illegal IP access
if yes,
then the website should check the IP location where the credit card was first accessed;
the dynamic range of IP addresses should match the location of the user who is placing the order (basic authentication)

// e.g., if the credit card is from New York, the user's IP should be near that state's location
else the transaction should be declined
if the user is placing the order from some other country with his own card,
then an authentication code from the shopping website to the bank is supplied

New solution: finding highly encrypted VPNs or SOCKS5 proxies

The fragmentary code below is reworked into a runnable Python sketch: it gets the IP address from the user who is ordering on the website and matches it against the dynamic IP range the bank holds for that card. The card-to-prefix table and the prefix check are illustrative stand-ins for a real bank lookup.

    # Illustrative sketch: approve the payment only if the orderer's IP
    # falls within the dynamic IP range recorded by the bank for this card.
    BANK_CARD_IP_PREFIXES = {"4532XXXX": ["203.0.113.", "198.51.100."]}  # assumed bank data

    def ip_entered():
        # Get the IP address from the user who is ordering on the website.
        return input("please enter the ip: ").strip()

    def authorize(card_id, ip):
        prefixes = BANK_CARD_IP_PREFIXES.get(card_id, [])
        if any(ip.startswith(p) for p in prefixes):
            return True      # IP matches the card's usual dynamic range
        print("failed")      # illegal ip
        return False

    if authorize("4532XXXX", ip_entered()):
        print("payment success")
    else:
        print("payment failed")

Every payment gateway should be tested, and all anonymous proxies should be verified.
Blacklisted IPs should be blocked to avoid illegal access.
All banking websites should maintain a database containing, for dynamic matching, the IP address range from which each credit card was first accessed.
If the user places an order from a different location, the website should issue an authentication code to process the order; authentication codes for different-location users should be automated.
Tor browser anonymity should be detected and cleared.
Most users use their credit cards from their own location, so this method would be very useful.
Python should be integrated with the database for easy access.
Trusted and secured SSL should be used to avoid man-in-the-middle attacks.
All website payments and the IP address of the order should be verified against the database records of the bank.
Even if the hacker uses a highly encrypted VPN, the exact location will not be acquired; and even if it happens to match, IP authentication against the bank database can still find the illegal access.
After verifying the IP range with the bank database, the website order would be fulfilled.

This is not a complete implementation but a basic authentication scheme for payment gateways; if websites add some open-source code to verify untrusted connections and to match IPs, illegal access can be reduced.

Comparing with existing payment gateways:
Existing systems mainly use PHP or ASP to create payment gateways; PHP websites in particular are vulnerable to cross-site scripting attacks, and the server reply is not fully secured. Existing systems use just normal PHP code to accept the card details, after which the payment is fulfilled, and they do not have a good verification method to find illegal IPs.

Conclusion:
Thus, if we maintain well-encrypted records and a good payment gateway with a trusted SSL connection, this illegal access can be stopped.




IMPLEMENTATION OF AHO CORASICK ALGORITHM IN INTRUSION DETECTION SYSTEM USING TCAM FOR MULTIPLE PATTERN MATCHING

Y. Madhu Keerthana (1), S. Vamsee Krishna (2)
(1) P.G. Student in VLSI, Department of E.C.E, SIETK, Tirupati.
(2) Associate Professor, Department of E.C.E, SIETK, Tirupati.
E-mail: madhu.keerthu@gmail.com, vamseebe@gmail.com

Abstract - The various advantages provided by the internet and society's increasing dependence on it have brought a flood of security attacks. The defense against these attacks is provided by network intrusion detection/prevention systems, which monitor the headers and payload of packets flowing through the network. A NIDS performs matching of attack string patterns by means of multi-pattern matching, which is solved by the Aho-Corasick algorithm. Aho-Corasick is a string matching algorithm which performs exact matching of the patterns in the text. The performance goals and throughput of high-speed networks are difficult to meet in software. Making a network-based scheme practical therefore requires efficient algorithms suitable for hardware implementation that can be updated continuously, and so Ternary Content Addressable Memory (TCAM) and FPGA based architectures, which are programmable or reconfigurable, are generally adopted in hardware-based pattern matching.

Index Terms - Intrusion detection system, Aho-Corasick algorithm, string matching, multi-pattern matching, TCAM

I. INTRODUCTION
In almost every information retrieval and text application it is essential to quickly locate some or all occurrences of user-defined patterns in a text. The string matching problem is to locate all occurrences of a given pattern in a text string. String matching can be performed for single-pattern and multi-pattern occurrences; for many string matching problems, multi-pattern matching is the only solution. Alfred V. Aho and Margaret J. Corasick invented one of the best string matching algorithms, the Aho-Corasick algorithm. Multiple patterns can easily be found in a text string by means of this algorithm, as it performs exact matching of patterns in a given text. Like a dictionary-matching algorithm, it searches for patterns on the basis of sub-string matching: at every unit of time a character is read from the input string and followed in the automaton that has already been constructed; if the automaton enters a final state, the corresponding pattern occurrence is reported. All patterns are matched simultaneously in this way. Intrusion detection at multi-gigabit rates is achieved using hardware acceleration; one prospect is the use of Ternary Content Addressable Memories (TCAM). A TCAM is a type of memory that can execute parallel searches at high speed. A TCAM contains a set of entries: the top entry of the TCAM has the least index and the bottom entry has the biggest. Each entry is a bit vector of cells, where each cell can store one bit; as a result, a TCAM entry can be used to store a string. The working of a TCAM is as follows: a given input string is compared against all entries in its memory in parallel, and the TCAM reports the one which matches the input.
II. AHO CORASICK ALGORITHM
Aho-Corasick is a multi-pattern matching algorithm which finds all occurrences of a set of patterns in a string. It does so by creating a Deterministic Finite Automaton (DFA) for all the predefined patterns and then using the automaton to process a text in a single pass.
Example: consider a finite set of patterns, {OTHER, HER, HATE, HEIGHT}.
Preprocessing Phase
Step 1: Construct a finite state automaton for the set of predefined patterns that are to be found in the text string. The states are numbered, and transitions are labelled by the characters leading between the defined states along each pattern. As a first step, the Aho-Corasick algorithm constructs this finite automaton for the set of patterns.
Fig. 1. Automaton for the pre-defined patterns


Step 2: After constructing the automaton, the failure function of each node is calculated and the corresponding transitions are added; the resulting automaton is called an "automaton with failure links". This is highlighted in Fig. 3. Fig. 2 represents the transition table of the finite automaton.


Fig. 2. Transition table of the finite automaton


Step 3: Lastly, the output function for the final states of the automaton is calculated, in order to recognize the pattern strings found in the text. The resulting automaton is called an "automaton with output functions".
Fig. 3. Failure functions of the automaton


FAILURE FUNCTION:
The failure function of a state is defined by the longest suffix of the string read so far that is also a prefix of some pattern. The main goal is not to scan any character more than once. The red arrows in Fig. 3 represent failure transitions.
OUTPUT FUNCTION:
The set of patterns recognized when entering a final state is the output function of that state. Nodes 5, 8, 11 and 17 carry output functions in Fig. 3.
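To make the construction concrete, here is a compact Python sketch of the three ingredients (goto trie, failure links, merged output sets) and the single-pass search, illustrated on the example pattern set; it is a software sketch, not the TCAM encoding discussed later:

    from collections import deque

    def build_ac(patterns):
        # 1) Goto function: a trie, one dict of transitions per state.
        goto, fail, out = [{}], [0], [set()]
        for p in patterns:
            s = 0
            for ch in p:
                if ch not in goto[s]:
                    goto.append({}); fail.append(0); out.append(set())
                    goto[s][ch] = len(goto) - 1
                s = goto[s][ch]
            out[s].add(p)                  # output function of final states
        # 2) Failure links, computed breadth-first (depth-1 states fail to 0).
        q = deque(goto[0].values())
        while q:
            s = q.popleft()
            for ch, t in goto[s].items():
                q.append(t)
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f].get(ch, 0)
                out[t] |= out[fail[t]]     # a suffix may itself be a pattern
        return goto, fail, out

    def search(text, goto, fail, out):
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in goto[s]:  # follow failure transitions
                s = fail[s]
            s = goto[s].get(ch, 0)
            hits += [(i - len(p) + 1, p) for p in out[s]]
        return hits

    g, f, o = build_ac(["OTHER", "HER", "HATE", "HEIGHT"])
    print(search("MOTHERLAND", g, f, o))  # finds OTHER at index 1 and HER at index 3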

APPLICATIONS: This algorithm can be used in practice to solve various problems such as plagiarism detection, intrusion detection, text mining, bioinformatics and digital forensics. Intrusion detection is a method in which intrusions are detected by an Intrusion Detection System (IDS). The process of finding plagiarism inside a text or document is known as plagiarism detection. The application of computer technology to managing biological information is bioinformatics. Digital forensics is a technique to retrieve information from digital devices after processing, along with generating an outcome. Text mining is developed with the intention of finding patterns in large data sets.
INTRUSION DETECTION SYSTEM: With each passing day there is more critical data available in some form over the network. Every publicly reachable system on the Internet today is quickly subjected to break-in attempts. These attacks range from email viruses, to corporate espionage, to general destruction of data, to attacks that hijack servers from which to spread further attacks. Intrusion Detection Systems (IDSs) are emerging as one of the most promising ways of providing protection to systems on the network.
Compared with end-host based solutions, a NIDS responds and can be updated at faster rates. The time saved is significant for prevention, particularly when the network is under attack from new worms. The traditional software-based NIDS architecture fails to keep up with the throughput of high-speed networks because of the large number of patterns and the complete payload scrutiny of packets. This has led to hardware-based schemes for multi-pattern matching. To deploy intrusion detection systems at multi-gigabit rates by means of hardware acceleration, one option is to use Ternary Content Addressable Memories (TCAM). TCAMs are extensively used for IP-header based processing such as longest prefix match. Because of their built-in parallel search capability, TCAMs can also be used efficiently for the pattern matching functions needed in intrusion detection systems.
III. TCAM
The traditional approaches to facilitating security and high-speed packet forwarding in the Internet are in general complex and costly. For instance, network-based intrusion detection is usually implemented by means of dedicated chips or separate boxes, rather than standard components, adding complexity and integration costs. Also, packet classification has been recognized as the most critical data path function, creating potential bottlenecks for high-speed packet forwarding. To eliminate these potential bottlenecks, a variety of algorithmic and hardware approaches have been developed, attempting to meet the targeted performance for different single-packet classification tasks.
A promising approach is to use TCAM for packet classification and intrusion detection. A TCAM-based solution applies to different packet classification tasks and allows parallel rule matching against all the rules in a rule table, offering the highest possible packet classification performance. As a result, TCAM coprocessors are extensively used in industry to perform the multiple packet classification tasks typically seen in an Internet router.



Ternary content addressable memories (TCAMs) are increasingly used for high-speed packet classification. TCAMs compare packet headers against all rules in a classification database in parallel and thus provide high throughput unmatched by software-based solutions. TCAM-based architectures, which are programmable or reconfigurable, are commonly adopted for hardware-based pattern matching.

An automaton is made up of states and transitions, and can be classified as a Deterministic Finite Automaton (DFA) or a Nondeterministic Finite Automaton (NFA).
DETERMINISTIC FINITE AUTOMATON (DFA):
The AC algorithm first constructs a deterministic finite state machine from the set of patterns and then uses the finite state machine to process the text string in a single pass, by constructing the goto and cross transitions. In the DFA, the dotted lines represent transitions, called cross transitions, which are newly added by eliminating failure transitions. Shaded states represent the pattern matching states, called output states. The trivial transition going to the initial state is omitted.

Fig. 5. TCAM-based architecture


Content-addressable memory (CAM) is a special type of computer memory used in certain very high speed searching applications. Binary CAM is the simplest type of CAM and uses data search words consisting entirely of 1s and 0s. Ternary CAM (TCAM) allows a third matching state of "X" or "don't care" for one or more bits in the stored data word, thus adding flexibility to the search. For example, a ternary CAM might have a stored word of "10XX0", which will match any of the four search words "10000", "10010", "10100", or "10110". The architecture consists of a TCAM, an SRAM, and logic. Each TCAM entry represents a lookup key, which consists of the current state and the input; the corresponding data, which is the next state, resides in the SRAM at the address given by the TCAM output. Two registers, the current state and the input, are initialized to state 0 and the first data of the input buffer, respectively. If there is a matching entry for the state and input value in the TCAM, the TCAM outputs the index of the matching entry and the SRAM then outputs the next-state data from the corresponding
location. Because a TCAM has don't care bits, multiple entries can be matched simultaneously. If there is no matching entry in the TCAM, the next state is the initial state. In the AC DFA, the number of transitions increases rapidly as the number of patterns increases, so a TCAM-based implementation becomes infeasible when there are a large number of patterns.
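As a toy software model of this ternary matching rule (our own illustration in Python, not a description of the hardware), a single stored TCAM word with "X" bits matches several search words at once:

def ternary_match(stored, search):
    # a stored bit matches if it equals the search bit or is a don't care
    return all(s in (c, 'X') for s, c in zip(stored, search))

entry = '10XX0'
for word in ('10000', '10010', '10100', '10110', '11000'):
    print(word, ternary_match(entry, word))
# the four words of the example above match; '11000' does not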
IV. AUTOMATA THEORY
The theory of computation is the branch of computer science that deals with the problems that can be solved by means of algorithms. One of its branches is automata theory, which covers the study of abstract mathematical machines or systems that can be used to solve computational problems.

Fig. 6. DFA for {he, she, his, hers}


One starts from the initial state (usually 0) to match a string. If a goto transition or a cross transition is matched with an input in the current state, the current state is moved along the matched transition. Otherwise, the current state goes back to the initial state and the matching procedure repeats for the next input. In matching a text string of length n, the DFA makes n state transitions. The DFA examines each input only once, but it requires a huge amount of memory in a RAM-based implementation that keeps a table of pointers to the next states for every input. The AC DFA can be implemented more efficiently using a TCAM, since only the nontrivial transitions need to be stored.
NFA (NONDETERMINISTIC FINITE AUTOMATON):
The AC DFA has more transitions than the corresponding NFA. In the NFA, the solid lines represent goto transitions and the dotted lines represent failure transitions; in the DFA, the dotted lines represent the cross transitions that are newly added by eliminating failure transitions, and output states are represented by shaded states. For NFA-based matching, the current state is moved along its failure transition and the matching process repeats for the current input. The DFA examines each input only once, while the NFA may examine each input several times along the failure transition path; in matching a text string of length n, the NFA makes fewer than 2n state transitions.
The multipattern matching can be performed using either the NFA or the DFA. To match a string, the initial state is usually taken as 0. With an input in the current state, if a goto transition or a cross transition is matched, the current state is moved along the matched transition.




Fig. 7. DFA for {by, byte, boy, eye}


In DFA-based matching, on a mismatch the current state goes back to the initial state and the matching process repeats for the next input; in NFA-based matching, the current state is moved along its failure transition and the matching process repeats for the current input. The DFA examines each input only once, while the NFA may examine an input several times along the failure transition path. For a text string of length n, the DFA makes n state transitions and the NFA makes fewer than 2n state transitions.
V. COVERED STATE ENCODING SCHEME
5.1. Basic Idea
Since the AC NFA has a smaller number of transitions than the AC DFA, the NFA can be implemented with fewer TCAM entries than the DFA. The number of TCAM entries in the NFA-based implementation is the sum of the number of goto transitions and the number of nontrivial failure transitions in the NFA. In the NFA design, a 1-bit field F indicating a failure transition is added to each SRAM entry. If an entry is associated with a failure transition, then F = 1 and its input field is *, which means that it can match any input value. Fig. 9 shows the TCAM/SRAM entries for the AC NFA of Fig. 7.
If the matched transition is a failure transition (F = 1), the input is not advanced and the current input is used once more for the next matching step. One character may be processed repeatedly along the states in the failure transition path until a non-failure transition is matched (F = 0) or the state goes back to the initial state; this is a key drawback of the NFA-based architecture. The failure transition graph is the graph consisting of only the failure transitions of the AC NFA.
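The following minimal Python sketch (our own model, not the authors' hardware; the entries are those reconstructed in Fig. 9 for the patterns {by, byte, boy, eye}) illustrates the F-bit mechanism: a failure entry (F = 1) changes the state without consuming the current input character.

ENTRIES = [  # (current state, input or '*', next state, F), in priority order
    (8, 'e', 9, 0), (6, 'y', 7, 0), (4, 'e', 5, 0), (3, 'y', 4, 0),
    (2, 't', 8, 0), (1, 'y', 2, 0), (1, 'o', 6, 0), (0, 'b', 1, 0),
    (0, 'e', 3, 0),
    (9, '*', 3, 1), (7, '*', 3, 1), (5, '*', 2, 1), (4, '*', 1, 1),
]
OUTPUT_STATES = {2: 'by', 9: 'byte', 7: 'boy', 5: 'eye'}

def match(text):
    state, i = 0, 0
    while i < len(text):
        for cs, inp, ns, f in ENTRIES:       # first matching entry wins
            if cs == state and inp in (text[i], '*'):
                state = ns
                if f == 0:                   # only non-failure transitions consume input
                    if state in OUTPUT_STATES:
                        print('matched', OUTPUT_STATES[state], 'ending at', i)
                    i += 1
                break
        else:                                # no TCAM hit: back to the initial state
            state = 0
            i += 1

match('xbyteye')   # prints matches for by, byte and eye

(For brevity, the sketch reports matches only on consuming transitions.)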
5.2. Covered State Encoding Algorithm
In an FSM design, each state is represented by a binary code to identify the state of the machine; these codes are the possible values of the state register. The procedure for assigning the binary codes to the states is known as state encoding. The choice of encoding plays a key role in FSM design; it depends on the target technology (FPGA, CPLD, ASIC, etc.) and on the design specifications, and it influences the complexity, size, power consumption and speed of the design. If the encoding is such that the transitions of the state register are minimized, power is saved; the timing of the device is also often affected by the choice of encoding. An algorithm is proposed for performing the covered state encoding for the AC NFA. The proposed algorithm consists of four steps, as follows:
Step 1. Construct a failure tree.
Step 2. Determine the size of the cover code of each state.
Step 3. Assign a unique code and a cover code to each state.
Step 4. Build the TCAM entries.

Fig. 8. Failure tree and covered state encoding.


Step 1 - Construction of a Failure Tree
By using the failure tree, unnecessary failure transitions can be avoided. First, the failure tree is constructed by reversing the failure transitions in the failure transition graph, so that each state s can easily find PRED(s). The initial state becomes the root of the failure tree, and the set of all descendants of a state s in the failure tree is PRED(s).
Step 2 - Determining the Size of a Cover Code
The number of * bits within a cover code is called its dimension, and the number of unique codes covered by a cover code is called its size. The size and dimension of the cover code of a state s are denoted size(s) and dim(s), respectively, where size(s) = 2^dim(s). If a state s has no child in the failure tree, then dim(s) = 0, since c_code(s) need not cover any other code; in that case its cover code is the same as its unique code. If a state s has children in the failure tree, c_code(s) must cover not only u_code(s) but also c_code(c) for every child c.
The dimensions of all states are determined recursively during the computation of dim(0), where 0 denotes the initial state.

Fig. 9. TCAM/SRAM entries in the NFA architecture:

CS  IN  NS  F
8   e   9   0
6   y   7   0
4   e   5   0
3   y   4   0
2   t   8   0
1   y   2   0
1   o   6   0
0   b   1   0
0   e   3   0
9   *   3   1
7   *   3   1
5   *   2   1
4   *   1   1

The same goto entries, with each entry listing the set of current states it must match once the failure transitions are folded in:

CS          IN  NS
8           e   9
6           y   7
4           e   5
3, 7, 9     y   4
2, 5        t   8
1, 4        y   2
1, 4        o   6
All states  b   1
All states  e   3
Step 3 - Assigning State Codes
For a state s, the unique code u_code(s) can be any code covered by c_code(s); here, the code obtained by replacing every * with 0 in c_code(s) is used as u_code(s). The codes are assigned recursively. Initially, the code of the root 0 is assigned as follows: c_code(0) = **...* and u_code(0) = 00...0.



The children of a state are assigned codes in decreasing order of dimension, in order to assign the codes efficiently:
procedure AssignCode(node s, int base)
  // assign codes to this node s
  u_code(s) = base                       // unique code
  c_code(s) = covercode(base, dim(s))    // cover code
  if s has no child then return
  sort the children of node s in decreasing dimension order
  cbase = base + 2^dim(s)
  // assign codes to children recursively
  for each child node c of node s do
    cbase = cbase - 2^dim(c)
    AssignCode(c, cbase)
  endfor
end
Fig. 8 shows the failure tree for the AC NFA of Fig. 7. In the example shown in Fig. 8, the dimension of the root is 4 and it has five children whose dimensions are 2, 1, 1, 0, and 0, respectively. The cover codes of the children are assigned as 11**, 101*, 100*, 0111, and 0110, and their unique codes are 1100, 1010, 1000, 0111, and 0110, obtained by replacing * with 0. The codes of the children inherit the fixed bit values (0 or 1) of the cover code of their parent, and new values are assigned only in the don't care bit locations.
Step 4 - Building TCAM Entries
Every child entry in the failure tree must be located in front of its parent entry in the TCAM (a lower address, i.e., higher match priority). The procedure BuildTCAM is used to build the TCAM/SRAM entries.
Current state  Input  Next state
11**(3)        y      1011(4)
1011(4)        e      1001(5)
101*(1)        y      1000(2)
101*(1)        o      0111(6)
100*(2)        t      0110(8)
0111(6)        y      1111(7)
0110(8)        e      1110(9)
****(0)        b      1010(1)
****(0)        e      1100(3)
Fig. 10. TCAM/SRAM entries using covered state encoding.
procedure BuildTCAM(node s)
  // insert entries of children recursively
  for each child node c of node s do
    BuildTCAM(c)
  endfor
  // insert the entries of this node s
  for each goto transition g(s, i) of node s do
    next = g(s, i)
    insert (c_code[s], i) into the TCAM
    insert (u_code[next]) into the SRAM
  endfor
end
Calling BuildTCAM(0) constructs all the TCAM/SRAM entries recursively, inserting the children in the failure tree into the TCAM before their parents.
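To make the four steps concrete, the following self-contained Python sketch (our own illustration, not the authors' implementation; the failure tree and goto transitions are those of Figs. 7 and 8) computes the dimensions, assigns the unique and cover codes, and emits the TCAM entries. Running it reproduces the nine entries of Fig. 10 in order.

# failure tree: state -> children (failure transitions reversed)
children = {0: [1, 2, 3, 6, 8], 1: [4], 2: [5], 3: [7, 9],
            4: [], 5: [], 6: [], 7: [], 8: [], 9: []}
# goto transitions of the AC NFA for {by, byte, boy, eye}
goto = {(0, 'b'): 1, (1, 'y'): 2, (2, 't'): 8, (8, 'e'): 9,
        (1, 'o'): 6, (6, 'y'): 7, (0, 'e'): 3, (3, 'y'): 4, (4, 'e'): 5}

dim = {}
def compute_dim(s):
    # size(s) = 2^dim(s) must cover u_code(s) plus the children's cover codes
    need = 1 + sum(2 ** compute_dim(c) for c in children[s])
    dim[s] = (need - 1).bit_length()        # smallest d with 2^d >= need
    return dim[s]
compute_dim(0)                              # dim(0) = 4, so 4-bit codes suffice

u_code, c_code = {}, {}
def assign(s, base):
    u_code[s] = format(base, '04b')                      # unique code
    c_code[s] = u_code[s][:4 - dim[s]] + '*' * dim[s]    # cover code
    cbase = base + 2 ** dim[s]
    for c in sorted(children[s], key=lambda t: -dim[t]):
        cbase -= 2 ** dim[c]                # children in decreasing dimension order
        assign(c, cbase)
assign(0, 0)

tcam = []
def build(s):
    for c in sorted(children[s], key=lambda t: -dim[t]):
        build(c)                            # children go in front of their parent
    for (st, ch), nxt in goto.items():
        if st == s:
            tcam.append((c_code[s], ch, u_code[nxt], nxt))
build(0)

for cs, ch, ns, n in tcam:
    print(cs, ch, '->', ns, '(%d)' % n)     # matches Fig. 10 row by row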

VI. MULTICHARACTER PROCESSING USING COVERED STATE ENCODING
The covered state encoding scheme for an efficient TCAM-based implementation was proposed in the previous section. However, the Aho-Corasick algorithm processes only one character at a time, and multicharacter processing is required to achieve high-speed matching.
In this section, a finite state machine called the k-AC NFA is proposed, which has state transitions on k input characters, obtained by combining k consecutive goto transitions of the Aho-Corasick NFA. Since the k-AC NFA consists of goto transitions and failure transitions like the AC NFA, the covered state encoding scheme can be used in the TCAM-based implementation of the k-AC NFA. The major advantage of the k-AC NFA is that each state transition consumes exactly k input characters, while in the compressed DFA a state transition is done on a variable number (between 1 and k) of characters.
6.1 Construction of k-AC NFA
When k input characters are processed at a time, a pattern can start at any of the k possible offsets within the input characters. In order to detect the patterns starting at any of the k offsets, we construct k finite state machines, each of which corresponds to one of the k offsets. For example, consider the set of patterns {abc, xyapq, pqrxyz}. Fig. 11a shows the AC NFA for these patterns. We construct the 4-AC NFA by creating four state machines, each of which can detect the patterns starting at one of the four offsets, as shown in Fig. 11b. In Fig. 11, states with the same label are the same state, and the gray colored states are output states.

Fig. 11. Construction of the k-AC NFA, k = 4: (a) AC NFA, (b) k-AC NFA.
In Fig. 11b, the dotted lines represent the failure transitions. The failure function of the k-AC NFA is the same as that of the AC NFA.


Fig. 12. TCAM entries in the k-AC NFA


6.2 Implementation of k-AC NFA
The k-AC NFA can be implemented in the TCAM-based architecture using the covered state encoding. In the k-AC NFA, start transitions from the initial state may be done on inputs with leading *s (e.g., **pq), and output transitions may be done on inputs with trailing *s (e.g., bc**). Both transitions may be matched simultaneously by k input characters (e.g., bcpq), and we cannot determine the priority between the two transitions since they have no inclusion relationship. To solve this problem, we use two TCAMs, called the transition TCAM and the output TCAM, which can generate their matched entries simultaneously. The output transitions are stored in the output TCAM. The output TCAM detects the final pattern matches, and the transition TCAM controls the state transitions of the k-AC NFA. If the two TCAMs generate matching results simultaneously, the state is moved according to the result of the transition TCAM and the output is generated by the output TCAM.
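To illustrate how a single pattern contributes transitions at each of the k offsets, with leading *s on start transitions and trailing *s on output transitions, the following small Python helper (our own hypothetical illustration, not the paper's construction procedure) enumerates the k-character input chunks of a pattern:

def chunks(pattern, k):
    # pad the pattern with don't cares for each of the k starting offsets
    for off in range(k):
        padded = '*' * off + pattern
        padded += '*' * (-len(padded) % k)
        yield off, [padded[i:i + k] for i in range(0, len(padded), k)]

for off, parts in chunks('abc', 4):
    print(off, parts)
# 0 ['abc*'];  1 ['*abc'];  2 ['**ab', 'c***'];  3 ['***a', 'bc**']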
The simulated response of the multipattern matching algorithm for the given example is shown in Figs. 13 and 14.

VII. RESULTS AND CONCLUSION
The results for the existing and proposed methods are displayed in the figures shown below. In the NFA-based architecture, the existence of a failure transition is recorded in each SRAM entry: a 1-bit field F is used to indicate a failure transition. If an entry is associated with a failure transition, its F is 1 and its input field is *, which can match any input value. A covered state encoding scheme for the TCAM-based implementation of the Aho-Corasick algorithm, which is extensively used in network intrusion detection systems, is proposed in this paper. The covered state encoding takes advantage of the don't care feature of TCAMs, and the information of the failure transitions is implicitly captured in the covered state encoding. By using covered state encoding, the failure transitions are not implemented as TCAM entries, because all the states in the failure transition path can be examined simultaneously. The covered state encoding therefore requires a smaller number of TCAM entries than the NFA-based implementation and thus enables an efficient TCAM-based implementation of a multipattern matching algorithm.
The scheme for constructing the finite state machine called the k-AC NFA for multicharacter processing, which uses the covered state encoding, is also proposed. The k-AC NFA using the covered state encoding has a smaller TCAM memory requirement and can process exactly k characters at a time. Thus, the covered state encoding can be used efficiently in multicharacter processing.

Fig. 13. Simulated outputs for the existing version.
Fig. 14. Simulated outputs for the proposed version.




Advanced Recycled Paper Cellulose Aerogel Synthesis and Water Absorption Properties
A. Azhagesan, T. Thiyagarajaperumal, M. Kumarasamy College of Engineering, Karur, Tamilnadu, India. Email: azhagu0003@gmail.com. Contact No: 8438280052

Keywords: Recycled Cellulose Aerogel, Water Absorption

Abstract:
By applying a sodium hydroxide/urea treatment and a simple freeze-drying method, we successfully synthesized green cellulose aerogel from paper waste for the first time. We adjusted the cellulose concentration from 1.5 percent to 4 percent in the initial hydrogel, which yielded densities ranging from 0.03 to 0.1 g/cm3. The absorption capacities varied from 9 to 20 times the aerogels' own weight. To investigate the water absorption capacity in relation to the initial chemical dosage, we changed the input amounts of urea and sodium hydroxide, which is the first such study. The thermal conductivity of the 2 percent sample ranged from 0.029 to 0.032 W/mK. Besides, the material has high potential for use in the diaper industry, as it is biodegradable, which is better than most super absorbent polymers.

Synthesis Procedure:
For the standard sample, urea (13.7 wt%, 5.025 g) and sodium hydroxide (1.9 wt%, 0.725 g) were dispersed in DI water (30 g) using a magnetic stirrer. Recycled cellulose (2 wt%, 0.75 g) was added to the transparent solution, which was pre-cooled in an ice/water bath for 10 minutes and then sonicated for 10 minutes. Thereafter, the solution was placed in a refrigerator for more than 24 h to allow gelation. After the solution was frozen, it was thawed at room temperature and then immersed in ethanol (99 vol%) for coagulation for two days. After coagulation, solvent exchange was carried out by immersing the gel in de-ionised (DI) water for 2 days. The sample was then frozen in a freezer at -18 °C for 24 h. After that, freeze drying was carried out for 2 days with a ScanVac CoolSafe 95-15 Pro freeze dryer (Denmark) to obtain the desired aerogel. Other samples were prepared by the same method, varying one component of the standard sample at a time; details are given in Table 1.

Introduction:
Most diapers on the market are composed of super absorbent polymers, which are not biodegradable. On average, a baby needs more than 5,000 diapers before it is potty-trained, and adult diapers are also required in large amounts every day, so a huge number of diapers is consumed daily in this world. Since super absorbent polymer is not biodegradable, the used diapers waste a large amount of land to store them. Our paper gives a potential alternative to traditional diapers: cellulose aerogel, which, with its high absorbent capacity (almost 20 times its weight) and biodegradability, can be used to replace the non-biodegradable super absorbent polymer. Cellulose aerogel can be derived from paper. The United States alone uses 85.5 million tons of paper per year. This huge amount of used paper needs to be recycled and turned into useful material to prevent waste. Our group converts the recycled cellulose into cellulose aerogel. An aerogel is a porous material with more than 90 percent pores by volume; most aerogels are silica or carbon aerogels, and cellulose aerogel is a new material.

TABLE 1: PARAMETERS USED IN MORPHOLOGY CONTROL OF CELLULOSE AEROGEL

Group 1:   Sample     Cellulose amount   Cellulose concentration
           G1_015     0.5444 g           1.50%
           Standard   0.75 g             2%
           G1_03      1.1057 g           3%
           G1_04      1.4896 g           4%

Group 2:   Sample     Urea amount        Urea concentration
           G2_10      3.4972 g           10%
           Standard   5.025 g            13.76%

Characterization:
Thermal conductivity measurement: A C-Therm TCi Thermal Conductivity Analyzer (C-Therm Technologies, Canada) was used to measure the thermal conductivity of the aerogel.
Water absorption test: This test was carried out in DI water, following a modified ASTM D570-98 standard.

TABLE 2: CELLULOSE CONCENTRATION AFFECTS WATER UPTAKE RATIO AND POROSITY

Sample No   Dry Weight [g]   Wet Weight [g]   Water uptake Ratio   Porosity   Density [g/cm3]
G1_015      0.3928           8.3431           20.24                0.984      0.03
Standard    0.7280           13.4140          17.43                0.978      0.04
G1_03       0.8976           12.3885          12.80                0.970      0.05
G1_04       2.4507           24.4634          8.98                 0.935      0.10

Results and Discussion:
The thermal conductivity of the standard sample was 0.03 W/mK, which is comparable to silica aerogels (0.026 W/mK) and wool (0.03-0.04 W/mK).
As the cellulose concentration varies from 1.5% to 4%, the water uptake ratio drops from 20 to 9, and the relationship is quite linear. By increasing the cellulose content in a sample, the density of the solution is increased and the structure of the aerogel becomes more compact. Hence, the porosity decreases (see Table 2), leaving less space for water molecules to be taken in. As a result, the water uptake ratio decreases linearly with increasing cellulose concentration.
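As a quick consistency check (our own arithmetic; we assume the water uptake ratio is defined as the mass of absorbed water divided by the dry mass), the tabulated ratios of Table 2 can be reproduced from the dry and wet weights:

samples = {              # sample: (dry weight [g], wet weight [g]) from Table 2
    'G1_015':   (0.3928,  8.3431),
    'Standard': (0.7280, 13.4140),
    'G1_03':    (0.8976, 12.3885),
    'G1_04':    (2.4507, 24.4634),
}
for name, (dry, wet) in samples.items():
    print(name, round((wet - dry) / dry, 2))
# G1_015 20.24, Standard 17.43, G1_03 12.8, G1_04 8.98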
TABLE 3: UREA AMOUNT AFFECTS WATER UPTAKE RATIO AND POROSITY

Sample No   Dry Weight [g]   Wet Weight [g]   Water uptake Ratio   Porosity
G2_10       0.6712           14.1073          20.02
Standard    0.7280           13.4140          17.43                0.978
G2_15       0.7532           14.5129          18.27                0.975
G2_20       0.6985           13.1829          17.87                0.978

Urea forms urea hydrate in the solution. Urea hydrate helps prevent cellulose molecules from coming near each other; hence, urea restrains the formation of hydrogen bonds among the cellulose. The experimental results for the water uptake ratio show little difference within this group, which may be because the urea amount plays an insignificant role in the porosity. However, the porosity does not follow the water uptake ratio results exactly (see Table 3). This may be due to measurement errors, as the porosity is calculated from the weight divided by the


volume. The volume is derived from the diameter and height of the cellulose samples using the cylinder volume formula, with the diameter determined as the average of three readings. Errors may be generated in this process, as the samples are not exactly cylindrical; however, all the porosity data agree in the first two digits. Part of the sodium hydroxide can bind to cellulose, which improves the dissolution of the cellulose. More sodium hydroxide causes less hydrogen bonding among the cellulose, which leads to less porosity and thus a lower water uptake ratio (see Table 4). However, when the sodium hydroxide content changes from 2.5 percent to 3 percent, the water uptake ratio does not drop further; this is because 2.5 percent sodium hydroxide is already the maximum amount that can bind with the cellulose, and more sodium hydroxide will not further the reaction.

Summary:
Our group successfully synthesized recycled cellulose aerogel from paper waste. The newly developed material has a high absorption capacity. The sodium hydroxide amount and the cellulose concentration alter the water uptake ratio of the cellulose aerogel greatly, while the urea amount does not play a significant role in the water uptake ratio.


TABLE 4: SODIUM HYDROXIDE AMOUNT AFFECTS WATER UPTAKE RATIO AND POROSITY

Sample No   Dry Weight [g]   Wet Weight [g]   Water uptake Ratio   Porosity
G3_015      0.5925           11.9001          19.08                0.983
Standard    0.7280           13.4140          17.43                0.978
G3_025      0.7639           12.6251          15.53                0.974
G3_03       0.7066           11.3263          15.03                0.976


Near Field Communication Tag Design with AES Algorithm
1Ashwin S, 2Priyatam Ch
UG Students, Dept of ECE, SRM University, Kattankulathur, Tamilnadu, India
1ashwinksk@yahoo.com, 2kakarotgokuvegeta@gmail.com

Abstract: This paper presents the design of a Near-Field Communication (NFC) tag architecture with high security supported by the AES algorithm. NFC standards cover data exchange within a close proximity range and data transmission protocols. Because of the recent applications and advancement of Near Field Communication, security has to be provided with reduced complexity. To achieve this, a 128-bit AES-based Near Field Communication cryptographic tag architecture controlled by an 8-bit microcontroller has been designed in our work. In current applications, RFID tags are controlled by finite state machines embedded in their hardware, but with the support of additional functionality like security, their design complexity increases. A cryptography technique controlled by a microcontroller reduces this design complexity, because the hardware requirement of the AES algorithm is much lower than that of asymmetric cryptographic techniques. This cryptography technique has been successfully synthesized and implemented using Xilinx ISE DS 13.2.2 with VHDL source code.
Keywords: AES Algorithm, 8-Bit Microcontroller, Cryptography, Finite state machines, NFC, RFID.

I INTRODUCTION
NFC stands for Near Field Communication; the NFC specifications can be found in ISO 18092. The main characteristic of NFC is that it is a wireless communication interface with a working distance limited to about 10 cm. The interface operates in different modes, distinguished by whether a device creates its own RF field or retrieves power from the RF field generated by another device. Radio-frequency identification (RFID) technology is the enabler for a variety of new applications. Many of these new applications will require RFID tags to support additional functionality, which increases their design complexity; security functionality in particular will play an important role. In order to cope with this increased complexity of the tags, new design concepts such as programmable approaches are necessary. Cryptography is the science of information and communication security: it enables confidentiality of communication through an insecure channel and prevents unauthorized alteration or use. It uses a cryptographic system to transform a plaintext into a ciphertext, most of the time using a key (certain ciphers do not need a key at all). The AES is the winner of the contest, initiated in 1997 by the US Government, after the Data Encryption Standard was found too weak because of its small key size

and the technological advancements in processor power. It takes an input block of a certain size, usually 128 bits, and produces a corresponding output block of the same size; the transformation requires a secret key as a second input. Cryptography development has been a major concern in the fields of mathematics, computer science and engineering. One of the main classes in cryptography today is symmetric key cryptography, where the same key is used for the encryption and decryption processes. Our work relates to the field of secure tag design [1], with the following contributions:
1) First low-resource RFID tag that supports symmetric cryptography,
2) First low-area implementation of AES,
3) Flexible tag design based on a microcontroller for protocol handling and steering of the cryptographic module.
The remainder of this work is organized as follows. Section II provides a brief description of the deployed low-resource 8-bit microcontroller. In Section III, the analyzed architecture of the NFC tag's digital part is introduced in short. The AES algorithm and its processing architecture are presented in Section IV. The implementation of the AES algorithm supported by the 8-bit microcontroller and the analyzed area requirement output are shown in Section V, followed by conclusions in Section VI.
II DESCRIPTION OF THE LOW-RESOURCE 8-BIT MICROCONTROLLER
The 8-bit microcontroller [2] performs all controlling tasks, with a focus on low chip area and low power consumption. The microcontroller is based on a Harvard architecture with separate program and data memory. The program memory ROM is implemented as a lookup table in hardware and is divided into several pages; each page can hold up to 256 16-bit instruction words, and a maximum of 256 pages is supported. Data memory is realized as a register file with up to 64 8-bit registers. The registers consist of three special-purpose (SP) registers, input-output (I/O) registers, and general-purpose (GP) registers. SP registers are used for indicating status information of the arithmetic-logic unit (ALU), the paging mechanism of the program ROM, and a dedicated accumulator (ACC) register. I/O registers allow interfacing of external modules. GP registers are used for computing and temporarily storing data. The instruction set of the microcontroller comprises 36 instructions, which are



divided into arithmetic operations, logical operations and control operations. Most of the operations are executed within a single clock cycle.
A. Hardware and Software Functionality
The control complexity increases when security and data management [1] features are integrated into a tag. Data has to be shifted from one component to another, and commands that are broken up into a number of blocks need to be rebuilt. A tag architecture with a microcontroller can manage such increased control complexity better than a conventional state machine. Even when using a microcontroller, the design has to fulfill the requirements in terms of chip area and power consumption: only a very simple microcontroller can be deployed for a small chip size, clocked at the lowest possible clock frequency for low power consumption. Basic tag functionality is directly handled by the framing logic hardware circuit; since the controlling complexity of basic tag functionality is low, implementation in hardware is achievable [1]. Advanced tag functionality, however, leads to high control complexity.

III OVERVIEW OF THE NFC TAG ARCHITECTURE DIGITAL PART
This section presents the design and implementation of cryptographically protected tags for new RFID applications; the tag operates at 13.56 MHz and works passively. We target a low-area design that requires as few resources as possible, such that the tag production does not exceed the practical limits of a possible commercial launch. The input data is fed directly to the NFC tag through the framing logic. The area can be reduced by reusing a common 8-bit microcontroller. The tag architecture consists of a low-resource 8-bit microcontroller [2], a framing logic (FL), a crypto unit (CU), and a memory unit. The microcontroller acts as the central element that controls all operations on the tag. The microcontroller's program is stored in an internal ROM, and the microcontroller communicates via an AMBA bus with the crypto unit and the framing logic.
constants (ROM) are located in the memory unit.
Fig 1 NFC Tag Architecture Digital Part
The FL is connected to the analog front-end and provides a byte interface for the microcontroller. In our case, only the digital part of the NFC tag is taken into consideration, so the input data is given directly to the FL through an RS232 interface. Cryptographic operations are processed within the CU, which is accessed by the microcontroller via micro-code patterns. RAM stores temporary results. An EEPROM for permanently storing data in files (i.e., the certified secret key) and a memory for storing constants (ROM) are located in the memory unit.

A. Framing Logic
The FL [1] is an interface that converts serial data into parallel data and also handles the basic tag functionality. It contains the following components: a receive-and-transmit (RxTx) unit, a control unit, and an AMBA interface.

Fig 2 Framing Logic Architecture

The RxTx unit is the interface between the serial data signals of the RS232 and the parallel data signals of the control unit. Additionally, the RxTx unit is provided with an external clock signal, which provides a bit-clock signal to all components of the tag's digital part. In this analysis our attention is only on the digital part of the system, so the input signal (data) is given directly to the RxTx unit. Incoming serial data from the RS232 interface is first sampled by the RxTx unit, decoded into bits, and transformed into byte data. The AMBA interface places this data into a FIFO buffer (storing up to 6 bits) that is accessed by the microcontroller over the AMBA bus. The outgoing data from the microcontroller is first placed in the FIFO buffer of the FL and then transmitted to the RxTx unit by the control unit. The AMBA interface connects the FL with the AMBA bus; it also contains a status register that provides information about the internal state of the FL (i.e., about the presence of data in the FIFO) and a configuration register. All the components can be accessed by the microcontroller via the AMBA bus.
B. 8 Bit Microcontroller


Both memories are freely scalable, i.e., their size can be adjusted during the design phase based on the requirements of the desired application. The program memory ROM is implemented as a lookup table in hardware and is divided into several pages; a maximum of 256 pages is supported. Data memory is realized as a register file with up to 64 8-bit registers. There are three special-purpose registers, I/O registers, and general-purpose registers. The SP registers give the status information of the ALU, the ROM's paging mechanism, and an accumulator register. Interfacing with external modules is allowed by the I/O registers. Thus the microcontroller controls the entire module during both data transmission and reception.

Fig 3 An 8 Bit Microcontroller Architecture



GP registers are available for storing temporary data. The microcontroller's instruction set consists of 36 instructions, which are divided into logical operations, arithmetic operations, and control operations. Most of the operations are executed in a single clock cycle.
C. Cryptographic Unit
The cryptographic unit basically consists of three parts: a controller, a RAM, and a data path [3]. The controller communicates with the other modules on the tag to exchange data, and it sequences the ten rounds of an AES encryption [6]; accordingly, it addresses the RAM and generates the control signals for the data path. The RAM stores the 128-bit State and a 128-bit round key. These 256 bits are organized as 32 bytes to suit the intended 8-bit architecture.
Fig 4 Data path of AES algorithm

The memory unit is composed of three memory types, RAM, ROM, and EEPROM, organized as a 16-bit linear dual-port memory space. A dual-port RAM proved advantageous, since it allows reading two words within one clock cycle; writing into one port while reading from the other is also possible.

IV PROCESSING ARCHITECTURE OF AES ALGORITHM
Fig 5: Architecture of AES Algorithm


AES is composed of four main stages for both the encryption and decryption processes in order to produce the final ciphertext. The four main stages in the encryption process are the SubBytes transformation, ShiftRows, MixColumns and AddRoundKey. Similarly, the decryption process consists of four stages which perform the inverse operations: inverse SubBytes transformation, inverse ShiftRows and inverse MixColumns, while the AddRoundKey step is common to both encryption and decryption. Our NFC work is based on the AES-128 algorithm [4]. AES is a symmetric algorithm where the same key is used for both encryption and decryption; it transforms a 128-bit block into a new block of the same size. At first the given input data is arranged into a matrix of 8-bit elements. After this transformation, AES [5] goes through the four main stages to produce the final ciphertext. In the SubBytes step, bytes are substituted with the corresponding values from a fixed 8-bit lookup table. The next stage is ShiftRows, where each row of bytes is rotated left by an incremental number of positions depending on the row: the first row remains unchanged, row 2 shifts by 1 position, row 3 by 2 positions, and row 4 by 3 positions. In the MixColumns step, every column is multiplied by a fixed polynomial. At last, the AddRoundKey step is performed, where for each AES round a subkey obtained from the key schedule is combined with the State; this continues for ten rounds. As shown in Figure 5, Nr represents the number of rounds; finally the encrypted data is obtained. Decryption is performed in the same manner as encryption but with the inverse operations, as discussed above.
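For illustration, the following minimal Python sketch (our own model for exposition, not the VHDL design of this work) implements one middle round of AES-128 encryption on the 16-byte column-major State; the key schedule and the full ten-round loop are omitted.

def gf_mul(a, b):
    # multiplication in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return r

def _sbox_entry(b):
    # multiplicative inverse (b^254) followed by the AES affine transform
    inv = 1
    for _ in range(254):
        inv = gf_mul(inv, b)
    res = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
               ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        res |= bit << i
    return res

SBOX = [_sbox_entry(b) for b in range(256)]     # SBOX[0] = 0x63, SBOX[1] = 0x7C

def sub_bytes(state):                  # fixed 8-bit lookup table substitution
    return [SBOX[b] for b in state]

def shift_rows(state):                 # row r rotates left by r positions
    # byte (row r, column c) sits at index 4*c + r (column-major State)
    return [state[(4 * (c + r) + r) % 16] for c in range(4) for r in range(4)]

def mix_columns(state):                # each column multiplied by the fixed polynomial
    out = []
    for c in range(4):
        col = state[4 * c:4 * c + 4]
        for r in range(4):
            out.append(gf_mul(col[r], 2) ^ gf_mul(col[(r + 1) % 4], 3)
                       ^ col[(r + 2) % 4] ^ col[(r + 3) % 4])
    return out

def add_round_key(state, rk):          # XOR with the 128-bit round subkey
    return [s ^ k for s, k in zip(state, rk)]

def aes_round(state, rk):              # one middle round of AES encryption
    return add_round_key(mix_columns(shift_rows(sub_bytes(state))), rk)

print(aes_round(list(range(16)), [0] * 16))     # demo with a dummy State and key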



V IMPLEMENTATION RESULTS
We have implemented our flexible tag platform in VHDL and designed it towards low resource usage and low area requirement in Xilinx ISE Design Suite 13.2.2. AES is a flexible algorithm for hardware implementations because it covers the entire range of applications, and AES hardware implementations can be applied in embedded systems because of their low resource requirements. In this design, the complete data encryption and decryption is done using the AES algorithm controlled by an 8-bit microcontroller. In previous systems, control of the hardware components was done by fixed state machines, which required nearly 49,950 gate elements; with our architecture the area requirement is reduced by nearly half, as shown in Figures 6 and 7.

Fig 6 The chip-area of the microcontroller unit
The above table shows the chip-area results of the microcontroller unit in terms of the number of flip-flops, registers and LUTs. The result shows that the area requirement of this microcontroller-based architecture is much lower in comparison with the conventional state machine approach.

Fig 7 The chip-area results of the AES algorithm in the NFC tag architecture
The above table shows the chip-area results in terms of the number of flip-flops. The total number of flip-flops required for implementing the AES algorithm in the NFC tag architecture is given in Fig 7, as per the obtained simulation output.
VI CONCLUSION AND FUTURE WORK
This paper presents a flexible NFC tag architecture that provides enhanced security features using symmetric as well as asymmetric cryptography. As a main contribution, the work described an entire real-world RFID system, including all hardware components needed for a practical chip fabrication. The design shows that significant resources can be saved by applying a microcontroller-based architecture instead of finite-state-machine-based control. The reason lies in the fact that the controller can simply be reused by many hardware components, such as the CU or the RFID FL, that would require more area when implemented as individual hardware modules. For example, AES encryption and decryption has been realized with an area overhead of only 374 slice registers. In the future, we plan to further analyze the design regarding enhanced implementation attacks, and to reduce the area requirement. The control can be completely realized as a microprogram, which further reduces chip-area requirements while increasing flexibility and assembly-based implementation convenience. The proposed system is planned to be designed using the RSA algorithm, an asymmetric cryptographic algorithm, in VHDL, simulated using Xilinx software and implemented on a Spartan-3E FPGA. Being asymmetric, it can resist unexpected security attacks, but it is a very big challenge to integrate an asymmetric algorithm in an NFC tag design because of its huge area requirement. So, we are currently working towards the implementation of the RSA algorithm in the NFC tag architecture along with area and power minimization.
REFERENCES
[1] T. Plos, M. Hutter, M. Feldhofer, M. Stiglic, and F. Cavaliere, "Security-Enabled Near-Field Communication Tag with Flexible Architecture Supporting Asymmetric Cryptography," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 21, no. 11, November 2013.
[2] H. Gross and T. Plos, "On Using Instruction-Set Extensions for Minimizing the Hardware-Implementation Costs of Symmetric-Key Algorithms on a Low-Resource Microcontroller," Institute for Applied Information Processing and Communications (IAIK), Graz University of Technology.
[3] M. Feldhofer, S. Dominikus, and J. Wolkerstorfer, "Strong Authentication for RFID Systems Using the AES Algorithm," in Proc. CHES, vol. 3156, Aug. 2004.
[4] P. Hamalainen, T. Alho, M. Hannikainen, and T. D. Hamalainen, "Design and Implementation of Low-Area and Low-Power AES Encryption Hardware Core," in Proc. 9th EUROMICRO Conf. Digital System Design, Sep. 2006, pp. 577-583.
[5] A. Ricci, M. Grisanti, I. De Munari, and P. Ciampolini, "Design of a 2 uW RFID Baseband Processor Featuring an AES Cryptography Primitive," in Proc. Int. Conf. Electronics, Circuits and Systems, Sep. 2008, pp. 376-379.
[6] S. Pampattiwar, "Literature Survey on NFC, Applications and Controller," International Journal of Scientific & Engineering Research, vol. 3, no. 2, February 2012.


Green Concrete: Eco-friendly Construction
M. SASIDHAR, Dept. of Civil Engineering, SMVEC, Puducherry, India, sasidharm007@gmail.com
V. SUNDARKISHNAN, Dept. of Civil Engineering, SMVEC, Puducherry, India, sundarakishnan14@gmail.com

Abstract: This paper discusses the CO2 emissions associated with concrete, green concrete as a remedy, and the materials used for manufacturing green concrete.

Index Terms: CO2 emission, fly ash, microsilica
I. INTRODUCTION
Green concrete is a revolutionary topic in the history of the concrete industry; it was first invented in Denmark in 1998. Green concrete has nothing to do with colour. It is a concept of considering the environment in every aspect of concrete, from raw materials and mixture design to structural design, construction, and service life. Green concrete is very often also cheap to produce because, for example, waste products are used as a partial substitute for cement, charges for the disposal of waste are avoided, energy consumption in production is lower, and durability is greater. Green concrete is a type of concrete which resembles conventional concrete, but whose production or usage requires a minimum amount of energy and causes the least harm to the environment. The CO2 emission related to concrete production, inclusive of cement production, is between 0.1 and 0.2 t per tonne of produced concrete.

II. PROBLEMS IN CONVENTIONAL CONCRETE
Since the total amount of concrete produced is so vast, the absolute figures for the environmental impact are quite significant, owing to the large amounts of cement and concrete produced. Since concrete is the second most consumed entity after water, it accounts for around 5% of the world's total CO2 emission (Ernst Worrell, 2001). The solution to this environmental problem is not to substitute concrete with other materials, but to reduce the environmental impact of concrete and cement. Pravin Kumar et al., 2003, used quarry rock dust along with fly ash and micro silica and reported satisfactory properties.

III. REDUCTION OF CO2 EMISSION
The potential environmental benefit to society of being able to build with green concrete is huge. It is realistic to assume that technology can be developed which can halve the CO2 emission related to concrete production. With the large consumption of concrete, this would potentially reduce the world's total CO2 emission by 1.5-2%. Concrete can also be the solution to environmental problems other than those related to CO2 emission: it may be possible to use residual products from other industries in concrete production while still maintaining a high concrete quality. During the last few decades, society has become aware of the disposal problems connected with residual products, and demands, restrictions and taxes have been imposed.

IV. RAW MATERIALS FOR GREEN CONCRETE
Several residual products have properties suited for concrete production, so there is large potential in investigating their possible use in concrete production; well-known residual products such as silica fume and fly ash may be mentioned. The concrete industry realised at an early stage that it is a good idea to be in front with regard to documenting the actual environmental aspects and working on improving the environment, rather than being forced to deal with environmental aspects due to demands from authorities, customers and economic effects such as imposed taxes. Furthermore, some companies in the concrete industry have recognised that reductions in production costs often go hand in hand with reductions in environmental impacts. Thus, environmental aspects are interesting not only from an ideological point of view, but also from an economic aspect.

A. RECYCLED MATERIALS IN GREEN CONCRETE:
The production of cement used in concrete results in the creation of greenhouse gases, including CO2. The U.S.


cement industry has reduced CO2 emissions by 30% since 1972 and now accounts for approximately 1.5% of U.S. emissions, well below other sources such as heating and cooling homes and buildings (33%), truck and auto use (27%) and industrial operations (19%). The CO2 embodied in concrete as a finished building product is a very small quantity, considering that cement accounts for only a small proportion of the finished product.
The concrete industry also uses industrial waste by-products such as fly ash (from coal combustion) and blast furnace slag (created in iron manufacture) to constitute a portion of the cement used in producing concrete. Use of such by-products in concrete prevents 15 million metric tons a year of these waste materials from entering landfills. Utilizing these "supplemental cementitious materials" as a replacement for cement improves the strength and durability of concrete and also further reduces the CO2 embodied in concrete by as much as 70%, with typical values ranging from 15% to 40%.

Finally, when a concrete structure has served its purpose, it can be crushed for use as aggregate in new concrete or as fill or base material for roads, sidewalks and concrete slabs. Even the reinforcing steel in concrete (which often is made from recycled materials) can be recycled and reused.

B. FLY ASH:
Fly ash is one of three general types of coal combustion by-products (CCBPs). The use of these by-products offers environmental advantages by diverting the material from the waste stream, reducing the energy investment in processing virgin materials, conserving virgin materials, and allaying pollution.
Thirteen million tons of coal ash are produced in Texas each year. Eleven percent of this ash is used, which is below the national average of 30%. About 60-70% of central Texas suppliers offer fly ash in ready-mix products, and they will substitute fly ash for 20-35% of the portland cement used to make their products.
Although fly ash offers environmental advantages, it also improves the performance and quality of concrete. Fly ash affects the plastic properties of concrete by improving workability, reducing water demand, reducing segregation and bleeding, and lowering heat of hydration. Fly ash increases strength, reduces permeability, reduces corrosion of reinforcing steel, increases sulphate resistance, and reduces alkali-aggregate reaction. Fly ash concrete reaches its maximum strength more slowly than concrete made with only portland cement. The techniques for working with this type of concrete are standard for the industry and will not impact the budget of a job.
This section also addresses wall-form products. Most of these products have hollow interiors and are stacked or set in place and then filled with steel-reinforced concrete, creating a concrete structure for a house. Some wall-form materials are made from EPS (expanded polystyrene), a lightweight non-CFC foam material; there are also fiber-cement wall-form products that can contain wood waste. The EPS/concrete systems offer high insulating qualities and easy installation, the fiber-cement blocks offer insulating qualities as well, and some EPS products also have recycled content.

C. SILICA FUME:
Silica fume, also known as microsilica, is an amorphous (non-crystalline) polymorph of silicon dioxide (silica). It is an ultrafine powder collected as a by-product of silicon and ferrosilicon alloy production, and it consists of spherical particles with an average particle diameter of 150 nm. Its main field of application is as a pozzolanic material for high performance concrete. It is sometimes confused with fumed silica; however, the production process, particle characteristics and fields of application of fumed silica are all different from those of silica fume.
Silica fume is an ultrafine material with spherical particles less than 1 um in diameter, the average being about 0.15 um; this makes it approximately 100 times smaller than the average cement particle. The bulk density of silica fume depends on the degree of densification in the silo and varies from 130 to 600 kg/m3. The specific gravity of silica fume is generally in the range of 2.2 to 2.3. The specific surface area of silica fume can be measured with the BET or nitrogen adsorption method; it typically ranges from 15,000 to 30,000 m2/kg.
ENVIRONMENTAL GOALS:
Green concrete is expected to fulfil the following environmental obligations:
- Reduction of CO2 emissions by 21%, in accordance with the Kyoto Protocol of 1997.
- Increase the use of inorganic residual products from industries other than the concrete industry by approx. 20%.


- Reduce the use of fossil fuels by increasing the use of waste-derived fuels in the cement industry.
- The recycling capacity of green concrete must not be less than that of existing concrete types.
- The production and the use of green concrete must not deteriorate the working environment.
- The structures must not impose much harm on the environment during their service life.

ADVANTAGES OF GREEN CONCRETE:
Green concrete has manifold advantages over conventional concrete. Since it uses recycled aggregates and materials, it reduces the extra load on landfills and mitigates the wastage of aggregates; thus, the net CO2 emissions are reduced. The reuse of materials also contributes substantially to the economy, since waste materials such as aggregates from a nearby area and fly ash from a nearby power plant are inexpensive and the transport costs are minimal. Green concrete can be considered elemental to sustainable development since it is eco-friendly itself, and it is being widely used in green building practices.

There are several other advantages related to green concrete, which can be summarized as below:
a) Reduced CO2 emissions.
b) Low production costs, as wastes directly substitute for cement.
c) Saves energy, emissions and waste water.
d) Helps in recycling industry wastes.
e) Reduces the overall consumption of cement.
f) Better workability.
g) Sustainable development.
h) Greater strength and durability than normal concrete.
i) Compressive strength and flexural behaviour fairly equal to that of conventional concrete.
j) Green concrete might solve some of society's problems with inorganic residual products which would otherwise have to be deposited.

CONCLUSION:
The newer the technology, the more eco-friendly it must be. Using green concrete in the construction field is a revolution for eco-friendly civil infrastructure development. The upcoming generation must use green concrete instead of conventional concrete so that CO2 emissions are considerably reduced.

It also helps green buildings achieve LEED and Golden Globe certifications. The use of fly ash in concrete also increases its workability and many other properties, such as durability, to an appreciable extent. One practice in manufacturing green concrete involves reducing the amount of cement in the mix, which helps reduce the overall consumption of cement. The use of waste materials also solves the problem of disposing of excessive amounts of industrial waste.



Analytical Study of General Failure in Prominent Components
1Balaji V.R, 1Amitesh Jain, 1A. Kirthivasan, 2D. Ananthapadmanaban
1Undergraduate students, SSN College of Engineering, Kalavakkam-603110, Tamilnadu, India
2Associate Professor, Department of Mechanical Engineering, SSN College of Engineering, Kalavakkam-603110, Tamilnadu, India
Email: ananthapadmanaband@ssn.edu.in, balaji12303@mech.ssn.edu.in

I. Abstract
The importance and value of failure analysis for safety, reliability, performance, and economy are well documented in this paper. To prevent failure of machine components, failure analyses are made on the components before and after manufacturing. Even though various tests are conducted, failures still happen at some stage. In this paper, a brief overview of the causes of failure is discussed; some case studies are given and the possible causes of failure are analyzed.
Key words: Failure Analysis, Case histories, recent trends.

II. Introduction
Defects in quality, design, process or part application are the underlying causes of a failure. Human errors are considered when the failure depends on the user of the product or process. Failure analysis covers the areas of creep, fatigue, structural resonance, crack initiation, crack propagation, spalling and pitting, fretting and wear, and component failure. Components are subjected to failure analysis before and after manufacturing; even though various tests are conducted, failure can still happen at some stage.
The mechanism of failure can be attributed to multiple factors which simultaneously play an influential role. These include corrosion, abnormal electric current, welding of contacts, return-spring fatigue failure, unintended failure, and dust accumulation and blockage of the mechanism.

Introduction

The defects in quality, design, process or part application


are the underlying cause of a failure. The human errors are
considered, when failure depends on the user of the
product or process. The failure analysis includes the area
of creep, fatigue, structural resonance, crack initiation,
crack propagation, spalling and pitting, fretting and wear,
component failure. The components are subjected to
failure analysis before and after manufacturing. Even
though the various tests are conducted, failure happens at
one stage.

Fatigue failure may also occur due propagation of the


cracks originating from the surface of the component.
They are of two types namely spalling and pitting. It
occurs due to sub surface tensile and shear stresses that
exceed materials fatigue limits. Gears and bearings are
usually subjected to such stresses. When surfaces of two
components mate each other they are subjected to normal
pressure and tangential oscillatory motion fretting failure
occurs. The surface undergoes failure due to fatigue, high
normal forces or wear and failure can be accelerated in the
presence of chemical attack.

The mechanism of failure can be attributed to multiple


factors which simultaneously plays an influential role.
These include corrosion, abnormal electric current
welding of contacts, returns spring fatigue failure,
unintended failure, dust accumulation and blockage of
mechanism, etc.
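The crack-propagation stage named above is commonly quantified with Paris' law, da/dN = C (ΔK)^m, which relates crack growth per cycle to the stress-intensity range. The paper does not state this model itself, so the short Python sketch below is purely illustrative; the material constants C and m and the geometry factor Y are placeholders, not values from any case study here.

import math

def paris_life(a0, ac, delta_sigma, C=1e-12, m=3.0, Y=1.12, steps=100000):
    # Estimate crack-propagation life (in cycles) by numerically integrating
    # Paris' law da/dN = C*(dK)^m, with dK = Y*dS*sqrt(pi*a).
    # a0, ac: initial and critical crack lengths (m); delta_sigma: stress
    # range (MPa); C, m, Y: placeholder constants, not from this paper.
    da = (ac - a0) / steps
    a, cycles = a0, 0.0
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress-intensity range
        cycles += da / (C * dK ** m)                   # dN = da / (C * dK^m)
        a += da
    return cycles

# Example: growth from a 1 mm to a 10 mm crack under a 100 MPa stress range
print(round(paris_life(1e-3, 10e-3, 100.0)))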

III. Failure analysis - History and Inception

The importance and value of failure analysis to safety, reliability, performance, and economy are well documented. The strategy for safety is to perform various tests before the product comes into use. Failure analysis involves the consideration of physical evidence and the use of engineering and scientific principles and analytical tools. Often, the reason one performs a failure analysis is to characterize the causes of failure with the overall objective of avoiding repetition of similar failures. However, analysis of the physical evidence alone may not be adequate to reach this goal. The scope of a failure analysis can, but does not necessarily, lead to a correctable root cause of failure; many times, a failure analysis incorrectly ends at the identification of the failure mechanism.

The importance of investigating failures is vividly illustrated in the pioneering efforts of the Wright Brothers in developing self-propelled flight. While Wilbur was traveling in France in 1908, Orville was conducting flight tests for the U.S. Army Signal Corps and was injured when his Wright Flyer crashed (Fig. 1). His



passenger sustained fatal injuries [1]. Upon receiving word
of the mishap, Wilbur immediately ordered the delivery of
the failed flyer to France so that he could conduct a
thorough investigation. This was decades before the formal
discipline called failure analysis was introduced.
Unfortunately, there are many dramatic examples of
catastrophic failures that result in injury, loss of life, and
damage to property. For example, a molasses tank failed in
Boston in 1919, and another molasses tank failed in
Bellview, NJ, in 1973 [2]. Were the causes identified in
1919? Were lessons learned as a result of the accident?
Were corrective actions developed and implemented to
prevent recurrence?

Fig. 1 Crash of the Wright Flyer, 1908. Courtesy of the National Air and Space Museum, Smithsonian Institution, Photo A-42555-A [1]

Conversely, failures can also lead to improvements in engineering practices. The spectacular failures of the Liberty ships during World War II were studied extensively in subsequent decades, and the outcome of these efforts was a significantly more thorough understanding of the phenomenon of fracture, culminating in part with the development of the engineering discipline of fracture mechanics [3]. Through these and other efforts, insights into the cause and prevention of failures continue to evolve.

The Space Shuttle Columbia orbiter failed upon re-entry into the earth's atmosphere. It was believed that the failure was caused by a piece of foam that dislodged from the external tank during takeoff and struck the leading edge of the left wing. This damaged the wing's leading edge, made of carbon-carbon composites, thereby providing a breach in the shuttle that led to catastrophic failure [4].

On analysis, the foam tiles showed signs of erosion due to temperatures exceeding 2000 K during re-entry, which is sufficient to melt aluminium. It was therefore expected that the aluminium sandwich panel skins below the tiles had been protected during re-entry, and an analysis was carried out on aluminium sandwich panel skin recovered from the debris field. The skins were 0.6 mm thick sheets made of a 2000-series aluminium alloy.

A micrograph of a sample taken a few millimetres behind the fracture surface reveals that it was subjected to much higher temperatures than the area directly beneath the tile. From these images it is clear that second-phase particles present in the microstructure, composed of copper, manganese and iron, melted and subsequently wetted the grain boundaries upon exposure to high temperature, significantly weakening the grain boundaries and ultimately resulting in intergranular failure.

Fig. 2 Light optical micrograph of the panel skin section at the fracture surface. Liquation is observed across the entire micrograph; a crack along a grain boundary is also present. [4]

The sandwich panel was locally heated, causing liquation only near the fracture surface, whereas the aluminium microstructure directly beneath an insulating tile shows no sign of liquation. This points to the formation of local hotspots where thermal protection was lost as a result of the accident. These localised hotspots led to the failure of the aluminium sandwich panel and hence to the failure of the space shuttle.

IV. Recent trends and advancements

Modern automobiles increasingly utilize high-strength, low-weight alloys for better fuel efficiency. Aluminium alloys serve this purpose well owing to their high strength-to-weight ratio. Several major automobile components, such as engine blocks, pistons, intake manifolds, carburettors and brake parts, make use of aluminium castings. Since aluminium alloys such as Al 356 are extensively used, a study of the reasons for their mechanical failure is necessary.

S. Nasrazadani and L. Reyes investigated a clutch pedal lever made of permanent-mould cast Al 356-T6 aluminium by way of metallography, SEM, hardness testing and visual inspection [5]. They concluded that the parts in the clutch assembly must be designed with thicker sections to resist the applied stress. Fatigue and brittle failure occur due to the presence of dendrite phases and micro-porosity, and they can be avoided by producing properly heat-treated Al 356-T6.


Fig. 3 Image of the fractured part showing two visible cracks [5]

Y. Huang and Y. Zhu metallurgically examined the section of a fractured spindle from the axle housing of a truck [6]. It was found that the axle had fractured at the friction weld interface after about 8,000 miles of service. Metallographic and SEM examinations at the fracture site revealed the existence of micro-porosity and the formation of a ferrite layer, which had reduced the impact strength of the weld, resulting in overload fracture. Due to air exposure at the molten layer interface, a band of oxides was formed, and the solidification of the liquid film led to micro-shrinkage.

Fig. 5 The assembly fractured along the weld interface (a ferrite band with oxides), etched, 2% nital [6]

A lap-welded steel joint failed when it was operated at very high speeds [7]. The joint was fabricated by laser beam welding, which uses a high-energy coherent optical heat source at low pressure; the steel sheets used for this purpose were 0.5 mm thick. The components of the debris were subjected to failure analysis for metallurgical investigation, and the sheet and plate used to form the joint were analysed for their chemical composition. The results showed that both were made of the same steel grade, X2NiCoMo18-8-5. Samples were cut from both defective and non-defective regions and analysed using optical microscopy. The analysis showed that the weld bead had serious problems due to oxidation of the plate: because the plate was under-machined before welding, oxidation products had been deposited on its surface. During welding, these oxidation products become entrapped within the weld bead or appear on the surface of the weld, weakening the weld and thereby causing failure.

Fig. 4 Oxidation product visible at the grain boundary (arrows), with other oxidized grain boundaries (encircled) [7]

V. Root-Cause Analysis

Failure analysis is the examination of the characteristics and causes of equipment or component failure [8]. In most cases this involves the consideration of physical evidence and the use of engineering and scientific principles and analytical tools. Often, the reason one performs a failure analysis is to characterize the causes of failure with the overall objective of avoiding a repeat of similar failures. However, analysis of the physical evidence alone may not be adequate to reach this goal. The scope of a failure analysis can, but does not necessarily, lead to a correctable root cause of failure. Many times, a failure analysis incorrectly ends at the identification of the failure mechanism and perhaps causal influences. The principles of root-cause analysis (RCA) may be applied to ensure that the root cause is understood and appropriate corrective actions are identified. An RCA exercise may simply be a momentary mental exercise or an extensive logistical charting analysis.
Many volumes have been written on the process and methods of RCA. The concept of RCA does not apply to failures alone, but is applied in response to any undesirable event or condition (Fig. 6). Root-cause analysis is intended to identify the fundamental cause(s) that, if corrected, will prevent recurrence.



Fig. 6 Root-cause analogy [8]

VI. Recent Case History

A building under construction in Chennai collapsed recently (2014) after rain lasting about one to two hours. Most probably the soil itself was loose around that area, Chennai being home to clayey soil [9]. Another possible reason is lack of proper curing of the foundation. In earlier days, foundations used to be cured for close to three weeks; these days, it is not certain whether proper procedures are followed. Most often, a combination of factors is involved, each of which contributes to the ultimate failure. It is therefore suggested that mandatory checks be carried out at each stage, so that even if a structure fails, one can pinpoint exactly what went wrong and at which stage it went wrong.

Fig. 7 Collapse of an 11-floor building at Bai Kadai junction, Moulivakkam, near Porur, Chennai, Tamil Nadu, India (photo: The Hindu) [9]

VII. Conclusions

Various procedures followed during failure analysis have been documented in this paper, and root-cause analysis has been discussed. Starting from the history of failure analysis, some case studies have been presented. It can be concluded that failure analysis is a very vast field of research, and any analysis can give only the possible cause of failure; sometimes a combination of factors causes a material or component to fail.

VIII. References

[1]. P.L. Jakab, Visions of a Flying Machine: The Wright Brothers and the Process of Invention, Smithsonian Institution, 1990, p. 226.
[2]. R.W. Hertzberg, Deformation and Fracture Mechanics of Engineering Materials, John Wiley & Sons, 1976, pp. 229-230.
[3]. D.J. Wulpi, Understanding How Components Fail, 2nd ed., ASM International, 1999.
[4]. Metals Handbook, American Society for Metals, Volume 5, Failure Analysis and Prevention.
[5]. S. Nasrazadani and L. Reyes, Failure Analysis of Al 356-T6 Clutch Lever.
[6]. Y. Huang and Y. Zhu, Failure Analysis of Friction Weld (FRW) in Truck Axle Application, submitted 17 September 2007, revised 16 November 2007, published online 20 December 2007, ASM International.
[7]. A. Nusair Khan, W. Mohammad, I. Salam, Failure Analysis of Laser Weld Joint of X2NiCoMo18-8-5 Steel.
[8]. http://en.wikipedia.org/wiki/Root_cause_analysis
[9]. http://www.thehindu.com/news/cities/chennai/ap-cmannounces-exgratia-for-telugu-victims-in-chennaibuilding-collapse/article6159984.ece


Organizing the Trust Model in Peer-To-Peer System Using SORT

R. Kiruthikalatha, M.Phil. Scholar, Department of Computer Science, Bishop Heber College (Autonomous), Tiruchirappalli-620017
B. Arputhamary, M.C.A., M.Phil., Asst. Professor, Department of Computer Applications, Bishop Heber College (Autonomous), Tiruchirappalli-620017
kiruthikalatha@gmail.com, arputhambaskaran@rediffmail.com
ABSTRACT
Peer-to-Peer (P2P) systems have attracted significant interest in recent years. In P2P networks, each peer acts as both a server and a client. This characteristic makes peers vulnerable to a wide variety of attacks, so robust trust management is critical in such an open environment in order to exclude unreliable peers from the system. Managing trust is a problem in peer-to-peer environments, so our SORT model is built on trust metrics. Trust metrics such as reputation, service trust, and recommendation trust are defined to precisely measure the trustworthiness of peers. A peer's trustworthiness is evaluated by considering the services it provides and the recommendations it gives, in service and recommendation contexts. When evaluating a recommendation, the recommender's trustworthiness and its level of confidence in the information provided are also considered. Experiments on a file sharing application demonstrate that peers with the highest trust values are selected, build the trust model in their contiguity, and insulate malignant peers. Building trust relationships among peers can mitigate attacks of malicious peers. Peers create their own trust network in their proximity by using locally available information and global trust information.

Keywords: Peer-to-peer systems, trust management, reputation, services, security.

I. INTRODUCTION

PEER-TO-PEER (P2P) systems rely on the collaboration of peers to accomplish tasks. The ease of performing malicious activity is a threat to the security of P2P systems. Creating long-term trust relationships among peers can provide a more secure environment by reducing risk and uncertainty in future P2P interactions. However, establishing trust in an unknown entity is difficult in such a malicious environment. Furthermore, trust is a social concept and is hard to measure with numerical values; metrics are needed to represent trust in computational models. Classifying peers as either trustworthy or untrustworthy is not sufficient in most cases: metrics should have enough precision that peers can be ranked according to trustworthiness. Interactions and feedbacks of peers provide the information needed to measure trust among peers. Interactions with a peer provide certain information about the peer, but feedbacks might contain deceptive information. This makes the assessment of trustworthiness a challenge.


In the presence of an authority, a central server is the preferred way to store and manage trust information, e.g., on eBay: the central server securely stores trust information and defines the trust metrics. Since there is no central server in most P2P systems, peers organize themselves to store and manage trust information about each other [1], [2]. Management of trust information depends on the structure of the P2P network. In distributed hash table (DHT)-based approaches, each peer becomes a trust holder by storing feedbacks about other peers [1], [3], [4]; global trust information stored by trust holders can be accessed efficiently through the DHT. In unstructured networks, each peer stores trust information about peers in its neighborhood or about peers it has interacted with in the past. A peer sends trust queries to learn the trust information of other peers; a trust query is either flooded to the network or sent to the neighborhood of the query initiator. Generally, the calculated trust information is not global and does not reflect the opinions of all peers.

We propose a Self-Organizing Trust model (SORT) that aims to decrease malicious activity in a P2P system by establishing trust relations among peers in their proximity. No a priori information or trusted peers are used to leverage trust establishment. Peers do not try to collect trust information from all peers; each peer develops its own local view of trust about the peers it has interacted with in the past. In this way, good peers form dynamic trust groups in their proximity and can isolate malicious peers. Since peers generally tend to interact with a small set of peers [7], forming trust relations in the proximity of peers helps to mitigate attacks in a P2P system.

In SORT, peers are assumed to be strangers to each other at the beginning. A peer becomes an acquaintance of another peer after providing a service, e.g., uploading a file. If a peer has no acquaintances, it chooses to trust strangers; an acquaintance is always preferred over a stranger if they are equally trustworthy. Using a service of a peer is an interaction, which is evaluated based on the weight (importance) and recentness of the interaction and the satisfaction of the requester. An acquaintance's feedback about a peer, a recommendation, is evaluated based on the recommender's trustworthiness. It contains the recommender's own experience about the peer, information collected from the recommender's acquaintances, and the recommender's level of confidence in the recommendation. If the level of confidence is low, the recommendation has a low value in the evaluation and affects the trustworthiness of the recommender less.

SORT defines two contexts of trust, the service context and the recommendation context. Information about past interactions and recommendations is stored in separate histories to assess the competence and integrity of acquaintances in these contexts. SORT defines three trust metrics. The reputation metric is calculated based on recommendations; it is important when deciding about strangers and new acquaintances, and it loses its importance as experience with an acquaintance increases. Service trust and recommendation trust are the primary metrics for measuring trustworthiness in the service and recommendation contexts, respectively. The service trust metric is used when selecting service providers; the recommendation trust metric is important when requesting recommendations. When calculating the reputation metric, recommendations are evaluated based on the recommendation trust metric.
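To make these definitions concrete, the short Python sketch below aggregates an interaction history and a set of recommendations in the spirit of the metrics just described. The linear weighting and all names used here (Interaction, service_trust, reputation) are illustrative assumptions, not SORT's actual equations.

from dataclasses import dataclass, field

@dataclass
class Interaction:
    satisfaction: float  # requester's satisfaction, in [0, 1]
    weight: float        # importance of the interaction, in [0, 1]
    recentness: float    # fading factor in (0, 1]; 1 = most recent

@dataclass
class Peer:
    service_history: dict = field(default_factory=dict)  # peer id -> [Interaction]

    def service_trust(self, pid):
        # Illustrative service trust: satisfaction weighted by the
        # importance and recentness of past interactions with peer pid.
        past = self.service_history.get(pid, [])
        if not past:
            return 0.0
        num = sum(i.satisfaction * i.weight * i.recentness for i in past)
        den = sum(i.weight * i.recentness for i in past)
        return num / den

def reputation(recommendations):
    # Each recommendation is (value, recommender_trust, confidence);
    # low-confidence or low-trust recommendations contribute less.
    den = sum(rt * c for _, rt, c in recommendations)
    if den == 0:
        return 0.0
    return sum(v * rt * c for v, rt, c in recommendations) / den

# Example: one acquaintance's history and two recommendations about a stranger
p = Peer({"q": [Interaction(0.9, 1.0, 1.0), Interaction(0.6, 0.5, 0.7)]})
print(p.service_trust("q"))                            # ~0.82
print(reputation([(0.8, 0.9, 1.0), (0.2, 0.3, 0.5)]))  # ~0.71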


We implemented a P2P file sharing simulation tool and conducted experiments to understand the impact of SORT in mitigating attacks. Parameters related to peer capabilities (bandwidth, number of shared files), peer behavior (online/offline periods, waiting time for sessions), and resource distribution (file sizes, popularity of files) are approximated to several empirical results. This enabled us to make more realistic observations on the evolution of trust relationships. We studied 16 types of malicious peer behavior, which perform both service-based and recommendation-based attacks. SORT mitigated service-based attacks in all cases. Recommendation-based attacks were contained except when malicious peers are present in large numbers, e.g., 50 percent of all peers. Experiments on SORT show that good peers can defend themselves against malicious peers without having global trust information. SORT's trust metrics let a peer assess the trustworthiness of other peers based on local information. The service and recommendation contexts enable better measurement of trustworthiness in providing services and giving recommendations. The outline of the paper is as follows: Section 2 discusses related research; Section 3 explains the computational model of SORT; Section 4 presents the simulation experiments and results; Section 5 summarizes the results and possible future work directions.

Fig 1. Peer-to-Peer Model

II. RELATED WORK
[1] Runfang Zhou and Kai Hwang [R Zhou, Kai Hw, 06] describe how to evaluate the trustworthiness of participating peers and to combat selfish, dishonest, and malicious peer behaviors.
[2] Guruprasad Khataniar, Hitesh Tahbildar, and Prakriti Prava Das [Guru, Hit Tah, PravDa, 13] describe various overlay structures that provide efficient and scalable solutions for point and range queries in a peer-to-peer network.
[3] Karl Aberer, Philippe Cudre-Mauroux, Anwitaman Datta, Zoran Despotovic, Manfred Hauswirth, Magdalena Punceva, and Roman Schmidt [Mag Pun, Zor, Anw Datt, 12] observe that P2P system structures are sometimes not uniform; their concept is used to organize the P2P system structure in a self-organizing manner.
[4] Kevin Hoffman, David Zage, and Cristina Nita-Rotaru [Ke Hoff, Cris Nit Dav Za, 07] describe how the rapid growth of communication networks such as the Internet and ad hoc wireless mesh networks has spurred the development of numerous collaborative applications.
[5] Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah [ApriGho, Pra Dev, 06], motivated by applications to sensor, peer-to-peer, and ad hoc networks, study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join and old nodes leave the network.
[6] Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker [Sy Pat, Paul, Mk Han, 01] note that hash tables, which map keys onto values, are an essential building block in modern software systems, and that similar functionality would be equally valuable to large distributed systems.


[7] Yao Wang [Ya Wan, 05] describes peer-to-peer networks as networks in which all peers cooperate with each other to perform functions in a decentralized manner. All peers are both users and providers of resources and can access each other directly without intermediary agents. Compared with a centralized system, a peer-to-peer (P2P) system provides an easy way to aggregate large amounts of resources residing on the edge of the Internet or in ad hoc networks with a low cost of system maintenance.
[8] Jon Kleinberg [Jo Kl, 09] shows that a social network exhibits the small-world phenomenon if, roughly speaking, any two individuals in the network are likely to be connected through a short sequence of intermediate acquaintances.
[9] Runfang Zhou and Kai Hwang [R Zhou, Kai Hw, 07] note that Peer-to-Peer (P2P) reputation systems are needed to evaluate the trustworthiness of participating peers and to combat selfish and malicious peer behaviors. The reputation system collects locally generated peer feedbacks and aggregates them to yield global reputation scores. Development of decentralized reputation systems is in great demand for unstructured P2P networks, since most P2P applications on the Internet are unstructured.
[10] Ion Stoica, Robert Morris, David Liben-Nowell, David R. Karger, M. Frans Kaashoek, Frank Dabek, and Hari Balakrishnan [I Stoica, R Morris, D Karger, Fr Kaashoek, F Dabek, 01] describe peer-to-peer systems as loosely organized systems without any centralized control or hierarchy, in which each node runs software with equivalent functionality. Peer-to-peer systems provide users with features such as availability, permanence, redundant storage, selection of nearby servers, and anonymity.
[11] Shanshan Song, Kai Hwang, and Runfang Zhou [S Song, K Hwang, R Zhou, 05] use fuzzy-logic inference rules to calculate local trust scores and to aggregate global reputation. This system benefits from the distinct advantages of fuzzy inference, which can handle imprecise linguistic terms effectively; the authors give details on how to perform fuzzy inference in their FuzzyTrust system.

III. SYSTEM MODEL
SORT defines three trust metrics. The reputation metric is calculated based on recommendations; it is important when deciding about strangers and new acquaintances, and it loses its importance as experience with an acquaintance increases. Service trust and recommendation trust are the primary metrics for measuring trustworthiness in the service and recommendation contexts, respectively. The service trust metric is used when selecting service providers; the recommendation trust metric is important when requesting recommendations. When calculating the reputation metric, recommendations are evaluated based on the recommendation trust metric. We implemented a P2P file sharing simulation tool and conducted experiments to understand the impact of SORT in mitigating attacks. Parameters related to peer capabilities (bandwidth, number of shared files), peer behavior (online/offline periods, waiting time for sessions), and resource distribution (file sizes, popularity of files) are approximated to several empirical results. This enabled us to make more realistic observations on the evolution of trust relationships. We studied 16 types of


malicious peer behavior, which perform both service-based and recommendation-based attacks. SORT mitigated service-based attacks in all cases. Recommendation-based attacks were contained except when malicious peers are present in large numbers, e.g., 50 percent of all peers. Experiments on SORT show that good peers can defend themselves against malicious peers without having global trust information. SORT's trust metrics let a peer assess the trustworthiness of other peers based on local information. The service and recommendation contexts enable better measurement of trustworthiness in providing services and giving recommendations.

Fig. 3 Architecture Flow Process

A. Trust Model for Peer-To-Peer
In the proposed SORT, a reputation-based metric is used. After calculating the reputation value, pi updates the recommendation trust values of recommenders based on the accuracy of their recommendations. Many important files are shared in a peer-to-peer system. A central server can establish trust between peer systems, but in many cases there is no central server in a peer-to-peer system, and there are many possibilities for information to be hacked. Self-organized peer systems provide better security among a number of systems. In the SORT method, trust is based on service and recommendation, and three metrics are used for the peer-to-peer system. The reputation metric is calculated based on recommendations; it is important when deciding about strangers and new acquaintances, and it loses its importance as experience with an acquaintance increases. Different types of attacks are prevented in this method. When pi searches for a particular service, it gets a list of service providers. Considering a file sharing application, pi may download a file from either one or multiple uploaders. With multiple uploaders, checking integrity is a problem, since any file part downloaded from an uploader might be inauthentic. When evaluating an acquaintance's trustworthiness in the service context, a peer first calculates competence and integrity belief values using the information in its service history.

IV. PERFORMANCE EVALUATION
A file sharing simulation program is implemented in Java to observe the results of using SORT in a P2P environment. Some questions studied in the experiments are as follows: how SORT handles attacks, how far attacks can be mitigated, how much recommendations help (or do not help) in correctly identifying malicious peers, and which types of attackers are the most harmful.

A. Method
The simulation runs as cycles; each cycle represents a period of time.


Downloading a file is an interaction; a peer sharing files is called an uploader.

B. Attacker Model
Attackers can perform service-based and recommendation-based attacks. Uploading a virus-infected or inauthentic file is a service-based attack; intentionally giving a misleading recommendation is a recommendation-based attack.

C. Analysis on Individual Attackers
This section explains the results of the experiments on individual attackers. For each type of individual attacker, two separate network topologies are created: one with 10 percent malicious peers and one with 50 percent malicious peers. Each network topology is tested with four trust calculation methods. In the experiments, a hypocritical attacker behaves maliciously in 20 percent of all interactions, and a discriminatory attacker selects 10 percent of all peers as victims.

D. Analysis on Individual Pseudospoofers
This section explains the results of the experiments on individual pseudospoofers. Pseudospoofers change their pseudonyms after every 1,000 cycles. For the other parameters, attackers behave as described above.

E. Analysis on Collaborators
Collaboration of attackers generally makes attack prevention harder. This section presents the experimental results on collaborators. Collaborators form teams of size 50 and launch attacks as teams. We first tried teams of size 10, but this was not enough to benefit from collaboration and the results were close to those of individual attackers; hence the team size was increased to 50 to better observe the effects of collaboration.

V. CONCLUSION AND FUTURE WORK

A. Conclusion
A trust model for P2P networks is presented, in which a peer can develop a trust network in its proximity. A peer can isolate malicious peers around itself as it develops trust relationships with good peers. Two contexts of trust, the service and recommendation contexts, are defined to measure the capabilities of peers in providing services and giving recommendations. Interactions and recommendations are evaluated using satisfaction, weight, and fading-effect parameters. A recommendation contains the recommender's own experience, information from its acquaintances, and its level of confidence in the recommendation. These parameters provide a better assessment of trustworthiness. Individual, collaborative, and pseudonym-changing attackers are studied in the experiments. The damage caused by collaboration and pseudo-spoofing depends on the attack behavior. Although recommendations are important when dealing with hypocritical and discriminatory attackers, attacks were mitigated in most experiments. However, in extremely malicious environments, such as a network that is 50 percent malicious, collaborators can continue to disseminate large amounts of misleading recommendations. Another issue with SORT is maintaining trust across the whole network: if a peer changes its point of attachment to the network, it might lose part of its trust network. These issues might be studied in future work to extend the trust model. Using trust information does not solve all security problems in P2P systems, but it can enhance the security and effectiveness of systems. If interactions are



modeled correctly, SORT can be adapted to various P2P applications, e.g., CPU sharing, storage networks, and P2P gaming. Defining application-specific contexts of trust and related metrics can help to assess trustworthiness in various tasks.


B. Future Work
Managing trust is a problem of particular importance in peer-to-peer environments, where one frequently encounters unknown agents. We present an approach that addresses the problem of reputation-based trust management at both the data management and the semantic level. We employ at both levels scalable data structures and algorithms that require no central control and allow assessing trust by computing any agent's reputation from its former interactions with other agents. Thus the method can be implemented in a P2P environment and scales well for a very large number of participants. We expect that scalable methods for trust management are an important factor if fully decentralized peer-to-peer systems are to become the platform for more serious applications than simple file exchange.

Reference
[1] A Robust and Scalable Reputation System for Trusted Peer-to-Peer Computing: Runfang Zhou, Kai Hwang [R Zhou, Kai Hw, 06].
[2] Attacks and Counter Measures in BST Overlay Structure of Peer-To-Peer System: Guruprasad Khataniar, Hitesh Tahbildar, and Prakriti Prava Das [Guru, Hit Tah, PravDa, 13].
[3] P-Grid: Self-organizing Structured P2P System: Karl Aberer, Philippe Cudre-Mauroux, Anwitaman Datta, Zoran Despotovic, Manfred Hauswirth, Magdalena Punceva, Roman Schmidt [Mag Pun, Zor, Anw Datt, 12].
[4] A Survey of Attacks on Reputation Systems: Kevin Hoffman, David Zage, Cristina Nita-Rotaru [Ke Hoff, Cris Nit Dav Za, 07].
[5] Randomized Gossip Algorithms: Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, Devavrat Shah [ApriGho, Pra Dev, 06].
[6] A Scalable Content-Addressable Network: Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, Scott Shenker [Sy Pat, Paul, Mk Han, 01].
[7] Bayesian Network Trust Model in Peer-to-Peer Networks: Yao Wang [Ya Wan, 05].
[8] The Small-World Phenomenon: An Algorithmic Perspective: Jon Kleinberg [Jo Kl, 09].
[9] Gossip-based Reputation Aggregation for Unstructured Peer-to-Peer Networks: Runfang Zhou, Kai Hwang [R Zhou, Kai Hw, 07].
[10] A Scalable Peer-to-Peer Lookup Protocol for Internet Applications: Ion Stoica, Robert Morris, David Liben-Nowell, David R. Karger, M. Frans Kaashoek, Frank Dabek, Hari Balakrishnan [I Stoica, R Morris, D Karger, Fr Kaashoek, F Dabek, 01].
[11] Trusted P2P Transactions with Fuzzy Reputation Aggregation: Shanshan Song, Kai Hwang, and Runfang Zhou [S Song, K Hwang, R Zhou, 05].


WEB PERSONALIZATION: A GENERAL SURVEY


J. JOSIYA ANCY
Research Scholar, Department of Information Technology,
Vellore Institute of Technology,
Vellore District, Tamilnadu.
ABSTRACT:
Retrieving relevant information from the web has become difficult nowadays because of the large amount of data available in various formats. Users rely on the web, but they must go through long result lists to choose the documents they need, which is a very time-consuming process. The approach that satisfies the requirements of the user by personalizing the data available on the Web is called Web Personalization. Web personalization is a solution to the information overload problem: it provides what users need without requiring them to ask or search for it explicitly. This approach helps to improve the efficiency of Information Retrieval (IR) systems. This paper presents a survey of personalization and its approaches and techniques.
Keywords- Web Personalization, Information Retrieval, User Profile, Personalized Search

I. INTRODUCTION:

WEB PERSONALIZATION:
The web is used for accessing a variety of information stored in various locations and in various formats all over the world. The content on the web is increasing rapidly, so identifying, accessing and retrieving content based on the needs of users is required.
An ultimate need nowadays is to predict user requirements in order to improve the usability of a web site. Personalized search is the solution to this problem, since different search results are provided based on the preferences of users.
In brief, web pages are personalized based on the characteristics of an individual user: interests, social category, context, etc. Web Personalization is defined as any action that adapts the information or services provided by a web site to an individual user or a set of users. Personalization implies that the changes are based on implicit data, such as items purchased or pages viewed.
Personalization is an overloaded term: there are many mechanisms and approaches, both automated and controlled by marketing rules, whereby content can be focused to an audience in a one-to-one manner.


WEB PERSONALIZATION ARCHITECTURE:

Fig 1.1: Web personalization process

The web personalization process shown above uses the web site's structure, web logs created by observing the users' navigational behavior, and user profiles created according to the users' preferences, along with the web site's content, to analyze and extract the information needed to find the pattern expected by the user. This analysis creates a recommendation model which is presented to the user. The recommendation process relies on existing user transactions or rating data, so items or pages recently added to a site cannot be recommended.
II. WEB PERSONALIZATION APPROACHES

Web mining is the mining of web data on the World Wide Web, and it carries out the process of personalizing these web data. The web data may be of the following kinds:
1. Content of the web pages (the actual web content)
2. Inter-page structure
3. Usage data, i.e., how the web pages are accessed by users
4. User profile information collected about users (cookies/session data)

With personalization, the content of the web pages is modified to better fit the user's needs. This may involve actually creating web pages that are unique per user, or using the desires of a user to determine which web documents to retrieve. Personalization can be targeted at a group of customers with specific interests, based on the users' visits to a web site. Personalization also includes techniques such as the use of cookies, the use of databases, and machine learning strategies. Personalization can be viewed as a type of clustering, classification, or even prediction.
USER PROFILES FOR PERSONALIZED INFORMATION ACCESS:


In the modern Web, as the amount of available information causes information overload, the demand for personalized approaches to information access increases. Personalized systems address the overload problem by building, managing, and representing information customized for individual users. There is a wide variety of applications to which personalization can be applied, and a wide variety of devices available on which to deliver the personalized information.
Most personalization systems are based on some type of user profile, a data instance of a user model that is applied to adaptive interactive systems. User profiles may include demographic information, e.g., name, age, country, education level, etc., and may also represent the interests or preferences of either a group of users or a single person. Personalization of web portals, for example, may focus on individual users (for example, displaying news about specifically chosen topics or the market summary of specifically selected stocks) or on groups of users for whom distinctive characteristics were identified (for example, displaying targeted advertising on e-commerce sites). In order to construct an individual user's profile, information may be collected explicitly, through direct user intervention, or implicitly, through agents that monitor user activity.

Fig. 2.1. Overview of user-profile-based personalization

As shown in Figure 2.1, the user profiling process generally consists of three main phases. First, an information collection process is used to gather raw information about the user; depending on the collection process selected, different types of user data can be extracted. The second phase focuses on constructing the user profile from the user data. In the final phase, a technology or application exploits information in the user profile in order to provide personalized services.
TECHNIQUES USING USER PROFILE:
From an architectural and algorithmic point of view, personalization systems fall into three basic categories: rule-based systems, content-based filtering systems, and collaborative filtering systems.


Rule-Based Personalization Systems. Rule-based filtering systems rely on manually or automatically generated decision rules that are used to recommend items to users. Many existing e-commerce web sites that employ personalization or recommendation technologies use manual rule-based systems.
The primary drawbacks of rule-based filtering techniques, in addition to the usual knowledge-engineering bottleneck problem, emanate from the methods used for the generation of user profiles. The input is usually a subjective description of the users or their interests by the users themselves, and it is thus prone to bias. Furthermore, the profiles are often static, so system performance degrades over time as the profiles age.
Content-Based Filtering Systems. In content-based filtering systems, a user profile represents the content descriptions of items in which the user has previously expressed interest. The content descriptions of items are represented by a set of features or attributes that characterize each item. The recommendation generation task in such systems usually involves comparing features extracted from unseen or unrated items with the content descriptions in the user profile; items considered sufficiently similar to the user profile are recommended to the user.
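As a toy illustration of this feature-matching step (not an implementation from any surveyed system; the bag-of-words representation and all names are assumptions), unseen items can be scored by cosine similarity against the profile vector:

import math

def cosine(u, v):
    # Cosine similarity between two sparse feature dicts (term -> weight).
    common = set(u) & set(v)
    dot = sum(u[t] * v[t] for t in common)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile, items, threshold=0.3):
    # Recommend items whose feature vectors are sufficiently
    # similar to the user's profile vector.
    scored = [(cosine(profile, feats), name) for name, feats in items.items()]
    return sorted([(s, n) for s, n in scored if s >= threshold], reverse=True)

profile = {"football": 0.8, "cricket": 0.5}
items = {"match-report": {"cricket": 1.0}, "recipe": {"cooking": 1.0}}
print(recommend(profile, items))  # [(~0.53, 'match-report')]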
Collaborative Filtering Systems. Collaborative filtering has tried to address some of the shortcomings of the other approaches mentioned above. Particularly in the context of e-commerce, recommender systems based on collaborative filtering have achieved notable success. These techniques generally involve matching the ratings of the current user for objects (e.g., movies or products) with those of similar users (nearest neighbors) in order to produce recommendations for objects not yet rated or seen by the active user.
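The nearest-neighbor idea can be sketched in a few lines: predict the active user's rating for an unseen item as a similarity-weighted average of the neighbors' ratings. This is a deliberately simplified toy, assuming small dense rating dicts, and is not taken from the surveyed systems:

import math

def user_sim(a, b):
    # Cosine similarity over the items both users have rated.
    common = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(active, others, item):
    # Similarity-weighted average of the neighbors' ratings for the item.
    num = den = 0.0
    for ratings in others:
        if item in ratings:
            s = user_sim(active, ratings)
            num += s * ratings[item]
            den += abs(s)
    return num / den if den else None

active = {"m1": 5, "m2": 3}
others = [{"m1": 4, "m2": 3, "m3": 4}, {"m1": 1, "m2": 5, "m3": 2}]
print(predict(active, others, "m3"))  # ~3.2, a guess for the unseen movie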
III. WEB PERSONALIZATION AND RELATED WORKS

A lot of research has been conducted on personalized ontology. Generally, personalization methodologies are divided into two complementary processes: (1) the collection of user information, used to describe the user's interests, and (2) inference over the gathered data to predict the content closest to the user's expectation. In the first case, user profiles can be used to enrich queries and to sort results at the user interface level, or, in other techniques, they are used to infer relationships, as in social-based filtering and collaborative filtering. For the second process, extraction of information about users' navigation from system log files can be used. Some information retrieval techniques are based on extracting the user's contextual information.
Dai and Mobasher [5] proposed a web personalization framework that characterizes the usage profiles of a collaborative filtering system using ontologies. These profiles are transformed into domain-level aggregate profiles by representing each page with a set of related ontology objects. In this work, the mapping of content features to ontology terms is assumed to be performed either manually or using supervised learning methods. The defined ontology includes classes and their instances, and therefore the aggregation is performed by grouping together different


instances that belong to the same class. The recommendations generated by the proposed collaborative system are in turn derived by binary matching of the current user visit, expressed as ontology instances, to the derived domain-level aggregate profiles; no semantic relations beyond hyperonymy/hyponymy are employed.
Acharyya and Ghosh [7] also propose a general personalization framework based on conceptual modeling of the users' navigational behavior. The proposed methodology involves mapping each visited page to a topic or concept, imposing a tree hierarchy (taxonomy) on these topics, and then estimating the parameters of a semi-Markov process defined on this tree based on the observed user paths. In this Markov-models-based work, the semantic characterization of the context is performed manually, and no semantic similarity measure is exploited for enhancing the prediction process, except for generalizations/specializations of the ontology terms.
Middleton et al. [8] explore the use of ontologies in the user profiling process within collaborative filtering systems. This work focuses on recommending academic research papers to the academic staff of a university. The authors represent the acquired user profiles using terms of a research paper ontology (an is-a hierarchy), and research papers are also classified using ontological classes. In this hybrid recommender system, which is based on collaborative and content-based recommendation techniques, the content is characterized with ontology terms using document classifiers (therefore a manual labeling of the training set is needed), and the ontology is again used for making generalizations/specializations of the user profiles.
Kearney and Anand [9] use an ontology to calculate the impact of different ontology concepts on the users' navigational behavior (selection of items). In this work, they suggest that these impact values can be used to more accurately determine the distance between different users, as well as between user preferences and other items on the web site, two basic operations carried out in content- and collaborative-filtering-based recommendations. The similarity measure they employ is very similar to the Wu & Palmer similarity measure. This work focuses on the way these ontological profiles are created, rather than on evaluating their impact on the recommendation process, which remains open for future work.
IV. CONCLUSION

In this paper we surveyed the research in the area of personalization in web search. The study reveals a great diversity in the methods used for personalized search. Although the World Wide Web is the largest source of electronic information, it lacks effective methods for retrieving, filtering, and displaying the information that is exactly needed by each user. Hence the task of retrieving only the required information keeps becoming more difficult and more time consuming.
To reduce information overload and create customer loyalty, Web Personalization, a significant tool that provides users with important competitive advantages, is required. A

Personalized Information Retrieval approach that is mainly based on end-user modeling increases user satisfaction, and personalizing web search results has been shown to greatly improve the search experience. This paper also reviewed the various research activities carried out to improve the performance of the personalization process and the performance of Information Retrieval systems.
REFERENCES
[1] K. Sridevi and R. Umarani, Web Personalization Approaches: A Survey, IJARCCE, Vol. 2, Issue 3, March 2013.
[2] Adar, E., Karger, D.: Haystack: Per-User Information Environments. In: Proceedings of the 8th International Conference on Information and Knowledge Management (CIKM), Kansas City, Missouri, November 2-6 (1999) 413-422.
[3] Bamshad Mobasher, Data Mining for Web Personalization. The Adaptive Web, LNCS 4321, pp. 90-135, Springer-Verlag, Berlin Heidelberg, 2007.
[4] Indu Chawla, An overview of personalization in web search. 2011 IEEE.
[5] H. Dai, B. Mobasher, Using Ontologies to Discover Domain-Level Web Usage Profiles. In: Proceedings of the 2nd Workshop on Semantic Web Mining, Helsinki, Finland, 2002.
[6] D. Oberle, B. Berendt, A. Hotho, J. Gonzalez, Conceptual User Tracking. In: Proceedings of the 1st Atlantic Web Intelligence Conference (AWIC), 2003.
[7] S. Acharyya, J. Ghosh, Context-Sensitive Modeling of Web Surfing Behaviour Using Concept Trees. In: Proceedings of the 5th WEBKDD Workshop, Washington, August 2003.
[8] S.E. Middleton, N.R. Shadbolt, D.C. De Roure, Ontological User Profiling in Recommender Systems. ACM Transactions on Information Systems (TOIS), Vol. 22, No. 1, January 2004, 54-88.
[9] P. Kearney, S.S. Anand, Employing a Domain Ontology to gain insights into user behaviour. In: Proceedings of the 3rd Workshop on Intelligent Techniques for Web Personalization (ITWP 2005), Edinburgh, Scotland, August 2005.
[10] http://en.wikipedia.org/wiki/Ontology
[11] Bhaganagare Ravishankar, Dharmadhikari Dipa, Web Personalization Using Ontology: A Survey. IOSR Journal of Computer Engineering (IOSRJCE), ISSN 2278-0661, Vol. 1, Issue 3 (May-June 2012), pp. 37-45, www.iosrjournals.org.
[12] Cooley, R., Mobasher, B., Srivastava, J.: Web mining: Information and pattern discovery on the World Wide Web. In: Proceedings of the 9th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'97), Newport Beach, CA (November 1997) 558-567.
[13] Joshi, A., Krishnapuram, R.: On mining web access logs. In: Proceedings of the ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD 2000), Dallas, Texas (May 2000).
[14] Liu, C. Yu, and W. Meng, Personalized web search by mapping user queries to categories. In: Proceedings of CIKM (2002), 558-565.
[15] Morris, M.R. (2008). A survey of collaborative Web search practices. In: Proceedings of CHI '08, 1657-1660.
[16] Jones, W.P., Dumais, S.T. and Bruce, H. (2002). Once found, what next? A study of keeping behaviors in the personal use of web information. In: Proceedings of ASIST 2002, 391-402.
[17] Deerwester, S., Dumais, S., Furnas, G., Landauer, T.K., Harshman, R.: Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6) (1990) 391-407.
[18] Dolog, P., and Nejdl, W.: Semantic Web Technologies for the Adaptive Web. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.): The Adaptive Web: Methods and Strategies of Web Personalization, Lecture Notes in Computer Science, Vol. 4321. Springer-Verlag, Berlin Heidelberg New York (2007).


Novel Scheduling Algorithms for MIMO-Based Wireless Networks

*Ajantaa Priyaadharsini G.V., email: ajantaavenkatesan@gmail.com
**Deepika R., email: dpka.sush94@gmail.com
Department of Electronics and Communication Engineering,
Narasus Sarathy Institute of Technology, Poosaripatty, Salem-636305.
Abstract
MIMO systems have attracted considerable attention recently for their potential to increase system capacity. In this paper, we aim to design practical user scheduling algorithms to maximize system performance. Various MAC scheduling policies have been implemented in order to provide distributed traffic control and robustness against interference. Further, in order to increase the efficiency of resource utilization, the scheduling policies have been modified, and the modified policies have also been implemented. MATLAB simulations have been used throughout, and the various policies have been compared with each other in order to draw important results and conclusions. The paper ends with a few suggestions for future improvements.

Index Terms: MIMO, MAC, Scheduling, Resource utilization

1. Introduction
Multi-antenna systems have been researched intensively in recent years due to their potential to increase channel capacity in fading channels. It has been shown that MIMO systems can support higher data rates under the same transmit power and BER performance requirements. Such systems find wide application in WLAN networks. The conventional collision avoidance (CSMA/CA) approach described in the 802.11 standard [9] makes use of control messages (RTS/CTS) to mitigate the hidden terminal problem, thus preventing collisions that would result in loss of data and waste of resources. In a MIMO wireless network, however, this is not always the best solution. Specifically, the receiver structure is able to separate incoming PDUs, which would then not result in a collision but could instead be detected separately. The networking protocols may then choose how many and which channels to estimate, taking into account that the limited receiver capabilities allow locking onto at most N sequences simultaneously. While doing this, trying to detect too many destination-oriented data packets could leave limited resources for interference cancellation, leading to data loss. Even with channel estimation and spatial de-multiplexing, the MIMO receiver itself is still vulnerable to hidden terminals in some sense: if the receiver is not aware of interfering nodes nearby, it cannot estimate their channels and cancel them.

Hence, in this paper we propose different scheduling algorithms in which awareness of interference is incorporated. For better performance, the receiver node first schedules all the requests contained in every correctly decoded RTS packet sent by multiple senders. By enabling proper scheduling in the Medium Access Control (MAC) layer, the system-level performance is improved by canceling interference in the priority scheduling we propose for the MAC layer. We have also analyzed the data rates and the interference cancellation capability of the different scheduling policies we propose for the MAC layer operating on RTS/CTS packets.

This paper is organized as follows. In the next three sections, the theory of the system model, MAC layer scheduling, and the MAC layer policies is described. The simulation results, obtained using MATLAB, are presented in Section 5, together with comparisons of the different MAC layer scheduling policies and related discussion.
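As a rough illustration of this interference-aware idea (an illustrative sketch only, not the paper's actual algorithm; the function name and parameters are hypothetical), a receiver with a limited number of antennas can rank correctly decoded RTS requests by received power, grant the strongest ones, and reserve the remaining degrees of freedom for estimating and canceling known interferers:

def schedule_rts(requests, n_antennas, reserve_for_interference=1):
    # requests: list of (sender_id, received_power_dBm) pairs taken from
    # correctly decoded RTS packets. Grant the strongest requests first
    # (priority scheduling) and keep 'reserve_for_interference' receiver
    # degrees of freedom free for interference cancellation.
    budget = max(0, n_antennas - reserve_for_interference)
    ranked = sorted(requests, key=lambda r: r[1], reverse=True)
    granted = [sender for sender, _ in ranked[:budget]]
    deferred = [sender for sender, _ in ranked[budget:]]
    return granted, deferred

# Example: four RTS requests arriving at a 4-antenna receiver
grants, defers = schedule_rts(
    [("A", -52.0), ("B", -60.5), ("C", -48.3), ("D", -71.0)], n_antennas=4)
print(grants, defers)  # ['C', 'A', 'B'] ['D']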

2. System Model
Traditionally, the growing demand for capacity has been met by increasing the bandwidth and/or by inventing more spectrally efficient communication protocols. However, since its introduction at Bell Labs about a decade ago, the concept of MIMO (Multiple Input Multiple Output), shown in figure 1, has received increasing attention. The main observation is that if both the transmitter and the receiver are equipped with n antennas, the capacity (bit rate) can be increased by up to a factor of n, depending on the richness of the wireless channel. In principle, one can form n parallel channels, which can transmit independently of one another. In general, this is not possible for line-of-sight (LOS) channels, since the multiple channels cannot be independent and will therefore interfere. However, in a rich scattering environment, the capacity can increase by a factor of up to n. The transmission of data in parallel streams is usually referred to as spatial multiplexing.
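To make the factor-of-n observation concrete, the following MATLAB sketch averages the Shannon capacity log2 det(I + (SNR/n) H H') of an i.i.d. Rayleigh n x n channel over random realizations and compares it with the SISO case. The antenna count, SNR range and number of trials are assumed here for illustration only; they are not the paper's simulation parameters.

```matlab
% Sketch: ergodic capacity of an n x n i.i.d. Rayleigh MIMO channel vs. SISO.
% Assumes equal power allocation over antennas and no channel knowledge
% at the transmitter (illustrative only, not the authors' exact script).
snr_dB = -10:2:10;                      % Eb/No range, in dB
n      = 4;                             % number of antennas at each end
trials = 500;                           % channel realizations to average over
C_siso = zeros(size(snr_dB));
C_mimo = zeros(size(snr_dB));
for k = 1:numel(snr_dB)
    snr = 10^(snr_dB(k)/10);
    for t = 1:trials
        h = (randn + 1j*randn)/sqrt(2);            % SISO Rayleigh gain
        H = (randn(n) + 1j*randn(n))/sqrt(2);      % n x n Rayleigh matrix
        C_siso(k) = C_siso(k) + log2(1 + snr*abs(h)^2);
        C_mimo(k) = C_mimo(k) + real(log2(det(eye(n) + (snr/n)*(H*H'))));
    end
end
C_siso = C_siso/trials;  C_mimo = C_mimo/trials;
plot(snr_dB, C_siso, '-o', snr_dB, C_mimo, '-s');
xlabel('Eb/No in dB'); ylabel('Capacity in bps/Hz'); legend('siso','mimo');
```

In a rich scattering environment the MIMO curve grows roughly n times as fast as the SISO curve at high SNR, which is the behaviour discussed above.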


Many detection algorithms have been proposed in order to exploit the high spectral capacity offered by MIMO channels. One of them is the V-BLAST (Vertical Bell-Labs Layered Space-Time) algorithm, which uses a layered structure. This algorithm offers considerably better error performance than conventional linear receivers and still has low complexity.

2.1. Proposed System Model
In the system being implemented the MAC layer takes decisions based on received power levels. Hence there is a need for scheduling in the MAC layer for performance improvements.

2.1.1. Transmitting Nodes -- Any node splits the transmit data into sub-packets called Packet Data Units or PDUs. We suppose uj PDUs are sent through spatial multiplexing, i.e., over uj antennas, one per PDU, where j is the node index. The power of the ith antenna, given that it belongs to user j, is Ptot/uj; the maximum total power of any node is constrained to Ptot.

2.1.2. Receiving Nodes -- Any receiver, say node j, uses all its available antennas NA. Thus, the received signal can be denoted using the NA-length column vector

r(j) = H(j) s + n(j)

Here n(j) represents the channel noise, and H(j) is the NA x U channel gain matrix. Under a Rayleigh fading assumption, the entry H(j)n,m is a circularly Gaussian complex random variable, including the fading gain and path loss between the mth transmit and the nth receive antenna. We assume that the nodes' channel knowledge is limited, i.e. at most NSmax channels related to as many transmit antennas can be estimated at the beginning of each reception. The set N(j) = {n1, . . . , nNSmax} contains the indices of such known antennas (KAs), for which we assume perfect channel estimation.

2.1.3 The BLAST Receiver (Zero Forcing Algorithm with Optimal Ordering) -- We take a discrete-time baseband view of the detection process for a single transmitted vector symbol, assuming symbol-synchronous receiver sampling and ideal timing. Letting a = (a1, a2, . . . , aM)T denote the vector of transmit symbols, the corresponding received N-vector is

r1 = H a + v ..(1)

Here v is a noise vector. One way to perform detection for this system is by using linear combinatorial nulling. Conceptually, each sub-stream in turn is considered to be the desired signal, and the remaining ones are considered as "interferers". Nulling is performed by linearly weighting the received signals so as to satisfy some performance-related criterion, such as Zero-Forcing (ZF). When symbol cancellation is used, the order in which the components of a are detected becomes important to the overall performance of the system. We first discuss the general detection procedure with respect to an arbitrary ordering. Let the ordered set

S = {k1, k2, . . . , kM}

be a permutation of the integers 1, 2, . . . , M specifying the order in which components of the transmitted symbol vector a are extracted. The detection process proceeds generally as follows:

Step 1: Using nulling vector wk1, form the decision statistic yk1:

yk1 = wk1^T r1 ..(2)

Step 2: Slice yk1 to obtain the estimate âk1:

âk1 = Q(yk1) ..(3)

Here Q(.) denotes the quantization (slicing) operation appropriate to the constellation in use.

Step 3: Assuming that âk1 = ak1, cancel ak1 from the received vector r1, resulting in the modified received vector r2:

r2 = r1 - âk1 (H)k1 ..(4)

Here (H)k1 denotes the k1-th column of H. Steps 1-3 are then performed for components k2, . . . , kM by operating in turn on the progression of modified received vectors r2, r3, . . . , rM. The specifics of the detection process depend on the criterion chosen to compute the nulling vectors wki, the most common of these being ZF. The ki-th ZF-nulling vector is defined as the unique minimum norm vector satisfying

wki^T (H)kj = 0 for j > i, and wki^T (H)ki = 1 ..(5)

Thus, wki is orthogonal to the subspace spanned by the contributions to ri due to those symbols not yet estimated and cancelled. It is not difficult to show that the unique vector satisfying (5) is just the ki-th row of H+_{k(i-1)}, where the notation H_{ki} denotes the matrix obtained by zeroing columns k1, k2, . . . , ki of H and + denotes the Moore-Penrose pseudo-inverse.
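The detection loop of Steps 1-3 can be condensed into a short MATLAB sketch. This is a minimal illustration that assumes a BPSK constellation for the slicer Q(.) and computes the ZF nulling vectors from the Moore-Penrose pseudo-inverse, picking at each iteration the sub-stream with the smallest nulling-vector norm (the usual optimal ordering); it is not the authors' simulation code.

```matlab
% Sketch of ZF V-BLAST detection with optimal ordering (equations (1)-(5)).
% BPSK symbols are assumed for the slicer Q(.); the authors' constellation
% and simulation parameters may differ. (Save as vblast_zf.m.)
function a_hat = vblast_zf(H, r)
    [~, M]  = size(H);
    a_hat   = zeros(M, 1);
    active  = true(M, 1);             % columns of H not yet cancelled
    for i = 1:M
        Hi = H(:, active);
        W  = pinv(Hi);                % Moore-Penrose pseudo-inverse
        % optimal ordering: detect the sub-stream with the smallest
        % nulling-vector norm (largest post-detection SNR) first
        [~, j] = min(sum(abs(W).^2, 2));
        idx    = find(active);
        k      = idx(j);              % index into the original columns
        y      = W(j, :) * r;         % step 1: decision statistic
        a_hat(k) = sign(real(y));     % step 2: slice (BPSK Q(.))
        r      = r - a_hat(k) * H(:, k);   % step 3: cancel and iterate
        active(k) = false;
    end
end
```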

3. MAC Layer Scheduling


A well-designed MAC protocol can offer much help
to solve the channel estimation problem. In designing
such a protocol, the concurrent channel access typically
found in ad hoc networks can be exploited, instead of
being suppressed. Collision avoidance schemes, such as
802.11, try to avoid concurrency by blocking the nodes
that receive an RTS or CTS. Instead of blocking,
simultaneous transmissions have to be encouraged. It is
also desirable to make the receivers aware of potential
interferers, and to exploit the spatial de-multiplexing
capabilities of MIMO processing. To this aim, an
assessment of the receiver performance when receiving
data PDUs and signaling packets has to be done.

3.1 MAC Layer Design


We have framed a communication structure with four phases. For this scheme to work correctly, all nodes have to share the same frame synchronization. These phases are designed according to the standard sequence of messages in a collision avoidance mechanism, and are summarized as follows.
3.1.1 RTS phase -- In this phase, all senders look into
their backlog queue, and if it is not empty they compose
transmission requests and pack them into a single RTS
message. Each packet in the queue is split into multiple
PDUs of fixed length, such that each PDU can be
transmitted through one antenna. For this reason, any
request has to specify the number of PDUs to be sent
simultaneously, in addition to the intended destination
node. Any RTS may contain several such requests.
Moreover, an RTS is always sent with one antenna and
at full power.
3.1.2 CTS phase -- During the RTS phase, all nodes that
were not transmitters themselves receive multiple
simultaneous RTSs, and apply the reception algorithm as
described in the previous section, to separate and decode
them. In the CTS phase, when responding to the
correctly received RTSs, nodes have to account for the
need to both receive intended traffic (thus increasing
throughput) and protect it from interfering PDUs (thus
improving reliability). The constraint in this tradeoff is
the maximum number of trackable channels, i.e., the
maximum number of training sequences a node can lock
onto. CTSs are also sent out using one antenna and at
full power.
3.1.3 DATA phase -- All transmitters receive
superimposed CTSs and, after BLAST detection, they
follow CTS indications and send their PDUs. Each PDU
has a fixed predefined length and is transmitted through
one antenna, but a node can send multiple PDUs
simultaneously, possibly to different receivers.

Figure 2 shows the MIMO system with the scheduler. Here priority based scheduling and partially fair scheduling, each with and without interference cancellation, are proposed. In priority scheduling, the scheduler receives many RTS packets and schedules according to priority, namely destination oriented (D) packets and non destination oriented (ND) packets. The performance of all these scheduling policies is analyzed in Section 5.


3.1.4 ACK phase -- After detection, all receivers


evaluate which PDUs have been correctly received,
compose a cumulative PDU-wise ACK, and send it back
to the transmitters. After this last phase, the data
handshake exchange is complete, the current frame ends
and the next is started. This corresponds to the
implementation of a Selective Repeat Automatic Repeat
reQuest (SRARQ) protocol, where PDUs are
individually acknowledged and, if necessary,
retransmitted.
Before going more deeply into the CTS policy definition, it should be noted that a random backoff is needed for nodes that do not receive a CTS, as otherwise persistent attempts may lead the system into deadlock. Here, a standard exponential backoff is used. Accordingly, before transmitting, nodes wait for a random number of frames, uniformly distributed in the interval [1, BW(i)], where i tracks the current attempt and BW(i) = 2^(i-1) W, with W a fixed backoff window parameter. An accurate study of the effects of different backoff strategies can be found in [12].
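As a small illustration, the backoff rule can be written in MATLAB as follows; W = 4 frames is an assumed value used only for this example.

```matlab
% Sketch of the exponential backoff rule BW(i) = 2^(i-1)*W: after the i-th
% failed attempt a node waits a uniform number of frames in [1, BW(i)].
W = 4;                        % assumed backoff window parameter
for i = 1:5
    BW   = 2^(i-1) * W;       % window doubles with each failed attempt
    wait = randi([1, BW]);    % frames to wait before retrying
    fprintf('attempt %d: window %3d, waiting %3d frames\n', i, BW, wait);
end
```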

4. Class and MAC Layer Policies
Class is a new concept that limits the maximum number of antennas that a transmitter can use while transmitting to a particular receiver. There exists a tight relationship between the number of used antennas (and thus the bit rate) and the average received power, and hence the maximum affordable coverage distance.

4.1 Class
The maximum number of antennas as related to the distance of a node is called the class of the node. For any transmitter, the total power allocated for a single instance of transmission is a constant quantity, say for example 100 W. As the number of transmit antennas increases, this power is divided equally among them, i.e. 2 transmit antennas implies 50 W through each, 4 transmit antennas implies 25 W through each and 10 transmit antennas implies 10 W through each.
Now, based on the location of the receiver, it is an obvious conclusion that as the distance between the transmitter and the receiver increases, the power necessary to ensure successful reception with good signal quality increases, and hence the CLASS of the receiver with respect to that particular transmitter decreases. In order to calculate the class of different nodes with respect to each other, assuming free space propagation losses only, the free space path loss model is used to account for the power loss. By setting a minimum threshold of necessary received power for satisfactory signal quality, the maximum number of transmit antennas permissible is calculated. In simple terms, the maximum number of antennas permissible is inversely related to the distance between the transmitter and the receiver.
Looking at a multiple receiver context as in MIMO, where the transmitter could send data to many neighbours at once, the concept of class can be a very useful tool to ensure a satisfactory amount of quality along with the maximum data rate. Together with this concept of class and a modified set of RTS and CTS policies, performance levels may be increased by making the best use of the spatial diversity available due to MIMO.

4.2 MAC Layer Policies
The traditional collision avoidance approach makes use of control signals (RTS/CTS) in order to avoid collisions by ensuring only one transmission in every time slot. But when MIMO is used at the physical layer, multiple transmissions can be supported simultaneously with the use of modified RTS and CTS policies.
4.2.1 RTS -- In this RTS policy, parallelism is encouraged and simultaneous transmissions are allowed. Here, RTS/CTS messages are used for traffic load estimation rather than for blocking simultaneous transmissions. Since signalling packets are shorter and transmitted with a single antenna at full power, they are expected to be detectable in large quantities without significant errors.
In the modified policy, the concept of class has been integrated along with the RTS messages of the traditional 802.11 to create a new RTS policy. The algorithm recursively checks the sender-end queue, which holds the receiver ID, the number of PDUs to be transmitted and the class of the receiver with respect to the particular transmitter, for each intended transmission. Based on the class of the receiver, the algorithm successively includes requests to various receivers in the same RTS packet. Each RTS packet includes as many requests for PDUs as the minimum class of those receivers included in that packet.
Two modifications in the RTS packaging that would result in performance improvements are as follows.
1. The queue is scheduled (reordered) with all the requests with higher class at the front end, so that the number of simultaneous requests is large. This ensures the best utilization of the available antenna resources. It also implies that the number of RTS packets itself reduces, thereby providing further power savings.
2. The FIFO queue that was assumed in the original policy could result in starvation of a particular node if its distance from the transmitter is particularly large and hence its class is minimum. Hence priorities may be assigned to all the neighbours of a node; in case a node is bypassed once, its priority comes into the picture and it has to be included in the next round of RTS packaging.

4.2.2 CTS -- In collision avoidance schemes like 802.11, concurrency is avoided by blocking all nodes other than one sender and receiver pair. In contrast, in the following CTS policy, simultaneous transmissions are encouraged. At the same time, the receivers should also be warned of potential interferers and should be capable of exploiting the spatial de-multiplexing capabilities of the MIMO system. A receiver node can receive multiple RTS packets, each of which can contain multiple requests. Each request in turn comprises the receiver ID and the number of PDUs requested to be sent. Against this background, the receiver node first sorts all the requests contained in every correctly decoded RTS packet in order of decreasing received power, and divides them into two subsets depending on the receiver ID mentioned in the request, namely destination oriented D (containing the requests meant for itself) and non destination oriented ND (containing all remaining requests). If a request by node x implies the transmission of, say, y PDUs, the receiver has to account for the channel estimation resources that will be needed for all the y PDU transmissions. Since the maximum number of simultaneous PDUs that can be tracked by a receive antenna is limited to, say, Nsmax, each time a transmission is granted the number of available tracking resources is decreased by y. This is done until there are no more resources left. This process of granting resources involves a tradeoff between the number of simultaneous transmissions that the receiver allows to itself and the amount of interference from transmissions by other nodes that it cancels. There are four different CTS policies here:

Priority Scheduling (PS):
Do the following steps till Ns=0
- Start with the requests in D.
- Read source Si and number of PDUs Pi for the request with index i.
- Insert grant (Si,Pi) in the CTS.
- Ns = Ns - Pi
- If for any i, Ns < Pi, allot the remaining Ns PDUs to that particular request.
- After all the requests in D are exhausted, if Ns > 0, do the following steps for ND:
- Read Si of the non destination oriented request and the number of PDUs Pi.
- If Pi < Ns, add (Si,Pi) to the interference cancellation list.
- Ns = Ns - Pi
- Stop.
- Using the resources allotted, accept incoming packets and cancel interference from other exchanges.

Partially Fair Scheduling Without Interference Cancellation (PFS-WIC):
Do the following steps till Ns=0
- i = D(1). (Insert the first request in the destination oriented list in the CTS.)
- Read source Si and number of PDUs Pi for the request with index i.
- Insert grant (Si,Pi) in the CTS.
- Ns = Ns - Pi
- queue = queue - i
- Let k be the request with the highest power in the queue D ∪ ND.
- If k ∈ D then insert grant (Sk,Pk) in the CTS, else store it in the interference cancellation list.
- Stop.
- Using the resources allotted, accept incoming packets.

Priority Scheduling Without Interference Cancellation (PS-WIC):
Do the following steps till the end of D.
- Read source Si and number of PDUs Pi for the request with index i.
- Insert grant (Si,Pi) in the CTS.
- Ns = Ns - Pi
- If for any i, Ns < Pi, allot the remaining Ns PDUs to that particular request.
- If Ns = 0, stop.

Partially Fair Scheduling (PFS):
Do the following steps till Ns=0
- i = D(1). (Insert the first request in the destination oriented list in the CTS.)
- Read source Si and number of PDUs Pi for the request with index i.
- Insert grant (Si,Pi) in the CTS.
- Ns = Ns - Pi
- queue = queue - i
- Let k be the request with the highest power in the queue D ∪ ND.
- If k ∈ D then insert grant (Sk,Pk) in the CTS, else store it in the interference cancellation list.
- Stop.
- Using the resources allotted, accept incoming packets and cancel interference from other exchanges.
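To show how such grant rules translate into code, the following MATLAB sketch implements the PS policy above. The struct-array representation of the request lists and the field names are assumptions made for this example, not part of the paper.

```matlab
% Sketch of the Priority Scheduling (PS) CTS policy described above.
% D and ND are struct arrays of requests already sorted by decreasing
% received power, each with fields .S (source id) and .P (PDU count);
% Ns is the number of trackable channels. Field names are illustrative.
function [grants, cancelList] = ps_cts(D, ND, Ns)
    grants = [];  cancelList = [];
    for i = 1:numel(D)                     % destination-oriented first
        if Ns <= 0, return; end
        p = min(D(i).P, Ns);               % partial grant if Ns < Pi
        grants = [grants, struct('S', D(i).S, 'P', p)]; %#ok<AGROW>
        Ns = Ns - p;
    end
    for i = 1:numel(ND)                    % leftovers go to cancellation
        if Ns <= 0, return; end
        if ND(i).P <= Ns
            cancelList = [cancelList, ND(i)]; %#ok<AGROW>
            Ns = Ns - ND(i).P;
        end
    end
end
```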

In real-world networks, only Partially Fair Scheduling (PFS) and Priority Scheduling (PS) are practical for use, since the other two do not provide any interference cancellation. Between PFS and PS, the choice is made depending on which of the two performance parameters, SNR or throughput, is critical to the network under consideration.

5. Simulation Results and Discussions
In order to evaluate the performance of these RTS/CTS policies specifically designed for the MIMO V-BLAST physical layer, 4 nodes, each with 10 antennas, are deployed. The 4 nodes are assigned varying coordinates, thereby simulating a mobile topology. The assumption made is that the condition of frame synchronization holds throughout the simulation. Traffic is generated according to a Poisson process at a given rate of packets per second per node. Each generated packet has k 1000-bit long PDUs, where k is a whole number. This specific configuration is tested because all nodes are within coverage range of each other; this is a demanding scenario in terms of interference, required resources and efficient protocol design. All the simulations have been made using MATLAB code. Transmissions follow the MAC protocol, as described in the previous section.

5.1 MIMO Performance
A comparison is made between the capacity of Single Input Single Output and Multiple Input Multiple Output systems for specific Eb/No values. Capacity is measured in bits per second per hertz (bps/Hz) of the given frequency, and Eb/No is measured in decibels (dB). From figure 3, it is observed that the capacity of the MIMO system is higher than that of the SISO system for every value of Eb/No. Shannon's capacity theorem is used for the capacity calculation. Thus, the performance of MIMO is found to be much better than the performance of SISO for every value of Eb/No. In fact, the capacity increases N-fold for MIMO, where N is related to the number of transmitting and receiving antennas.

[Figure 3: MIMO performance -- comparison of capacity (bps/Hz) for SISO and MIMO versus Eb/No (dB)]

5.2 V-BLAST Performance
To simulate the performance of the BLAST physical layer, the V-BLAST algorithm with optimal ordering has been used for a codebook of a specified length. Optimal ordering of received signals in descending order of power ensures that signal decoding is of better quality. In this paper, the spatial multiplexing technique has been implemented using V-BLAST in the physical layer.
5.2.1 Transmitter Diversity -- Figure 4 gives an insight into the performance of the system. Here, the Bit Error Rate (BER) vs. SNR values have been plotted for a system having 12 receivers and a varying number of transmitters. It can be seen that for the same value of SNR, in every case, the system with fewer antennas is found to have a better BER performance, i.e. a lower bit error rate than systems with more transmitters. This is because as the number of transmitters increases, there is more interference caused at the receiver side due to other unwanted transmissions (transmissions not addressed to the receiver). This causes degradation in the performance, as shown in the graph.

[Figure 4: Transmitter diversity -- BER for BLAST with 12 receive antennas and 4, 10 or 14 transmit antennas, versus Eb/No (dB)]



To combat this degradation in performance, the concept of CLASS has already been introduced in this paper. This specifies the optimal number of transmitter antennas to be used for a specific distance between the transmitter and receiver. In mobile wireless networks, where the distances of the nodes keep varying with respect to each other, it is not advisable to use a fixed number of transmitter antennas for all distances. A brief discussion of CLASS follows next.


5.2.2 Class -- To do the classification, a topology consisting of a number of transmitters at varying distances from the receiver has been considered. The graph of figure 5 specifies the maximum number of antennas a transmitter can use when it is at a particular distance from the receiver. This number (the number of transmit antennas to be used) classifies the transmitter into its respective CLASS. This classification is based on the power levels of the received packets. When transmit diversity is employed, the total power level at the node is divided equally among all the transmit antennas to be used for the transmission. Thus the power of every PDU (each antenna transmits one PDU per transmission) decreases in accordance with this division. The channel employed here is a multipath Rayleigh fading channel. The power allotted to each transmit antenna should be sufficient to withstand the fading caused by the channel. Each receiver has a threshold power level for decoding; if a packet arrives with a power level below the threshold, it cannot be detected.
In figure 5 it can be seen that when the distance is very large the number of transmitter antennas used is very small. This is because the packet has to travel a long distance and thus requires a lot of power to withstand the fading and attenuation losses. For the maximum distance, only one antenna is used; for distances above this maximum distance, multi-hop transmission is employed. The number increases exponentially with decreasing distance, and it is observed that the maximum number of antennas is used for the shortest distances.

[Figure 5: Class vs. distance -- permissible distance between transmitter and receiver versus number of transmit antennas]
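For concreteness, a minimal MATLAB sketch of this classification follows. It assumes free-space (Friis) path loss, an equal split of the total power among the transmit antennas, and illustrative values for the total power, receiver threshold and carrier frequency; none of these numbers are taken from the paper.

```matlab
% Sketch of the CLASS computation: the largest number of transmit
% antennas for which the per-antenna received power, after free-space
% path loss, still exceeds the receiver threshold. All numbers below
% (Ptot, threshold, frequency) are assumed for illustration.
Ptot   = 100;                 % total transmit power, W
Pth    = 1e-9;                % receiver decoding threshold, W
lambda = 3e8 / 2.4e9;         % wavelength at an assumed 2.4 GHz carrier
Nmax   = 10;                  % antennas available at the transmitter
d      = 1:100;               % distances, m
class  = zeros(size(d));
for k = 1:numel(d)
    loss = (lambda / (4*pi*d(k)))^2;        % Friis free-space gain
    nOK  = find((Ptot ./ (1:Nmax)) * loss >= Pth, 1, 'last');
    if isempty(nOK), nOK = 0; end           % out of single-hop range
    class(k) = nOK;
end
plot(d, class); xlabel('distance (m)'); ylabel('max transmit antennas');
```

Because the per-antenna power falls as Ptot/n, the admissible antenna count drops as the distance grows, reproducing the inverse relationship between class and distance described above.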

5.2.3 Receiver Diversity -- Contrary to the previous case of transmitter diversity, performance increases in the case of receiver diversity. Figure 6 clearly illustrates this. Here the cases of 8 transmit antennas with a varying number of receiver antennas are compared.

[Figure 6: Receiver diversity -- BER for BLAST with 8 transmit antennas and 8, 10, 12, 14 or 22 receive antennas, versus Eb/No (dB)]


It is seen that the best Bit Error Rate performance is for
the receiver having 22 antennas. This is because with
increase in the number of receivers more paths exist
from each transmitter antenna and each path exhibits
varied levels of fading. This indicates possibilities of
channels with lesser levels of fading. In every case it can
be seen BER decreases with increasing values of SNR.
However, for each value of SNR the node with 22
antennas has the least value of BER. Thus, robustness
increases with receiver diversity.

5.3 Performance Comparisons
The primary comparison among the policies is based on data rates, which in turn depend on the number of grants allotted for the wanted PDUs. The packet arrival rate is varied each time, and the corresponding data rate is noted. As seen in figure 7, in every case, i.e. for any packet arrival rate, the data rate of Priority Scheduling (PS) is greater than that of Partially Fair Scheduling (PFS). This is because PS prioritizes allotting resources to the destination oriented packets over interference cancellation. Thus, the data rate is higher in the PS scheme.

[Figure 7: Comparison of data rates between PFS and PS -- data rate (bps) versus packet arrival rate per node per second]

The plots for Priority Scheduling Without Interference Cancellation (PS-WIC) and Partially Fair Scheduling Without Interference Cancellation (PFS-WIC) are not shown here, because their grants are similar to PS and PFS, respectively. Thus, it is sufficient to compare the latter two schemes. The next parameter for comparison is the amount of interference cancelled by the two schemes. From figure 8, it can be seen that PFS outperforms PS for almost every value. This is just the inverse of the previous graph, as the total resources are divided between the two activities of accepting data and cancelling interference from other parallel transmissions. For the initial values, both PFS and PS seem to show the same performance in terms of interference cancellation, because the number of arriving packets is still very small. As the number of arriving packets increases, PS has only enough resources to grant the wanted packets; thus it can be seen that, for higher arrival rates, the interference cancellation for PS is zero.

[Figure 8: Comparison of interference cancellation for PFS and PS -- number of interfering PDUs cancelled versus packet arrival rate per node per second]

Another very important way of interpreting the above two graphs is by comparing the SNR performance of the schemes. The interference cancelled and the grants given actually have a direct implication on the SNR at the receiver. From figure 9, it can be seen that the SNR performance of PFS is the best, followed by PS. In PFS, a major portion of the resources is allotted for interference cancellation; hence the noise caused by other interfering packets is less, and the SNR is higher. In PS, the resources are given preferentially to the wanted packets. Interference cancellation plays second fiddle here, a direct consequence of which is seen in the graph. However, as the number of packets arriving increases, there is a decrease in the SNR in both schemes due to the limited availability of resources. In every case, PS-WIC is found to have the worst performance. As the arrival rate becomes higher, it can be seen that PFS-WIC performs slightly better than PS. This can be explained as follows: at high arrival rates, PS exhausts all its resources towards allocation to the wanted set and hence may not be left with any resources for interference cancellation. PFS-WIC, too, by itself does not perform any interference cancellation. However, the above mentioned performance degradation in PS can be attributed to the fact that PS could allocate resources to requests with very low power levels, which have low immunity to noise, whereas PFS-WIC, following PFS, allocates resources only to packets with sufficient power. Thus, the SNR performance of PFS-WIC is better than that of PS at high arrival rates.

[Figure 9: SNR for the various CTS policies (PS-WIC, PFS, PS, PFS-WIC) versus packet arrival rate per node per second]

Next, the importance of the RTS/CTS schemes explained so far is highlighted. This is done by making a comparison of data rates between our scheme and the conventional 802.11 collision avoidance scheme. In the conventional collision avoidance system, simultaneous transmissions are not allowed, and the MIMO wireless channel is reserved for one request at a time. This limits the data rate. However, in the improved RTS/CTS policy, simultaneous transmissions from different senders are encouraged by providing for interference cancellation, thereby improving the data rate per receiver. In both figures 10 and 11 (comparison of PS with the conventional policy, and comparison of PFS with the conventional policy), the improved RTS/CTS policy is found to give a better data rate than the conventional policy. However, the performance improvement in PS is found to be greater than in PFS.

[Figure 10: Data rate comparison of conventional RTS/CTS and PS -- data rate (bps) versus packet arrival rate per node per second]

[Figure 11: Data rate comparison of conventional RTS/CTS and PFS -- data rate (bps) versus packet arrival rate per node per second]

6. Conclusions and Future Work
In this work, the advantages of Multiple Input Multiple Output (MIMO) over Single Input Single Output (SISO) have first been addressed. The performance of the V-BLAST physical layer (with optimal ordering) has also been studied. Cross-layer policies to drive traffic requests and grants have been considered, with the aim of designing an efficient way to let multiple point-to-point links coexist while keeping interference under control. Simulations of the MAC policies in a demanding mobile network scenario, with all nodes within coverage of each other, have been carried out. These results have been used to highlight the key features that yield the best performance in terms of throughput and signal to noise ratio.
Future work on this topic includes a study of the impact of channel estimation at the transmitter on the overall performance, and the extension to multihop topologies and routing issues.

References
[1] P. Wolniansky, G. Foschini, G. Golden, and R. Valenzuela, "V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel," in Proc. 1998 URSI Int. Symp. on Signals, Systems, and Electronics, 1998, pp. 295-300.
[2] P. Casari, M. Levorato, and M. Zorzi, "MAC/PHY Cross-Layer Design of MIMO Ad Hoc Networks with Layered Multiuser Detection," IEEE Trans. Wireless Commun., vol. 7, no. 11, pp. 4596-4607, Nov. 2008.
[3] G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multiple antennas," Bell Labs Tech. J., vol. 1, no. 2, pp. 41-59, 1996.
[4] H. Jafarkhani, Space-Time Coding: Theory and Practice. Cambridge University Press, 2005.
[5] P. W. Wolniansky, G. J. Foschini, G. D. Golden, and R. A. Valenzuela, "V-BLAST: an architecture for realizing very high data rates over the rich-scattering wireless channel," in Proc. IEEE ISSSE, Pisa, Italy, Sept. 1998, pp. 295-300.
[6] L. Zheng and D. N. C. Tse, "Diversity and multiplexing: a fundamental tradeoff in multiple-antenna channels," IEEE Trans. Inform. Theory, vol. 49, no. 5, pp. 1073-1096, May 2003.
[7] S. M. Alamouti, "A simple transmit diversity technique for wireless communications," IEEE J. Sel. Areas Commun., vol. 16, no. 8, pp. 1451-1458, Oct. 1998.
[8] M. Hu and J. Zhang, "MIMO ad hoc networks: medium access control, saturation throughput, and optimal hop distance," J. Commun. Networks, special issue on mobile ad hoc networks, pp. 317-330, Dec. 2004.
[9] IEEE Standards Department, ANSI/IEEE Standard 802.11. IEEE Press, 1999.
[10] H. Boche and M. Wiczanowski, "The Interplay of Link Layer and Physical Layer under MIMO Enhancement: Benefits and Challenges," IEEE, Aug. 2006, pp. 48-55.
[11] M. Bengtsson and B. Ottersten, "Optimal and Sub-optimal Transmit Beamforming," Ch. 18 in Handbook of Antennas in Wireless Communications, CRC Press, 2001.
[12] M. Levorato, P. Casari, and M. Zorzi, "On the performance of access strategies for MIMO ad hoc networks," in Proc. IEEE GlobeCom, Nov. 2006, pp. 1-5.


BER Performance of CDMA, WCDMA, IEEE 802.11g in AWGN and Fading Channels
M.A.ARCHANA, Dr.C.PARTHASARATHY, C.V.S RAVI TEJA
maarchaname@gmail.com, drsarathy45@gmail.com , ravi.teja2394@gmail.com
ABSTRACT
In recent years wireless communication has played a vital role in the world, freeing us from the overhead of cables. Nowadays WCDMA, CDMA2000, and IEEE 802.11g systems receive a great deal of attention in wireless communications. One of the common parts of these three systems is the direct-sequence code-division multiple access (DS-CDMA) technology. In DS-CDMA systems, pseudo-noise (PN) code synchronization is one of the main challenges due to the code length. PN codes are usually used to identify channels during communications. Before a handset starts to receive system broadcast channels, a code synchronization process is necessary to achieve code, timing, and frequency synchronization with the target cell site, and vice versa; this is made more pressing by the tremendous increase in the usage of wireless communication. In this paper we propose an integrated code synchronizer for WCDMA, CDMA 2000 and IEEE 802.11g, designed by applying three low power techniques: power management, absolute weighted magnitude calculation, and a spurious power suppression adder. In addition, a common combined scheme is used to evaluate the bit-error-rate performance of the different wireless standards, both in AWGN and fading channels. Simulation results are obtained using MATLAB.
Key words: WCDMA, CDMA 2000, IEEE 802.11g, Absolute Weighted Magnitude Calculation, Spurious Power Suppression Adder, AWGN, Fading channel.

1 INTRODUCTION
WCDMA, CDMA2000 and 802.11g systems receive a great deal of attention in wireless communications. This paper is divided into three parts, covering code synchronization for the three different systems. In WCDMA, a three-stage code synchronization process is implemented in the 3GPP standards, including slot synchronization, frame synchronization with code group identification, and scrambling code identification. In CDMA 2000, a pilot channel is spread by a PN code with a cell-specific code phase to help timing synchronization between mobile and base stations. In 802.11g, a Barker code is applied to detect the bit boundary before packet identification. In this paper we illustrate, through simulation results, the level of power consumption of the different wireless communication standards in AWGN and fading channels.
1.1 Additive White Gaussian Noise (AWGN) Channel: accounts only for white noise and does not account for fading, frequency selectivity, interference or dispersion. A BSC channel corrupts a binary signal by reversing each bit with a fixed probability p. A multipath fading channel includes multipath scattering effects, time dispersion, and Doppler shifts that arise from relative motion between the transmitter and receiver. CDMA-2000 is a terrestrial radio interface for 3G wireless communication, with specifications developed by 3GPP2. It is developed within the IMT-2000 specification of the International Telecommunication Union (ITU) and is backward compatible and well suited for global roaming. The CDMA-2000 physical layer includes several modes of operation as per the requirements of the user and local conditions.
1.2 Fading Channel: The rapid fluctuation of the amplitude of a signal over a relatively small distance is referred to as fading. Interference between two or more versions of the transmitted signal, arriving with different propagation delays at the receiver, is known as multipath. Fading channel models are often used to model the effects of electromagnetic transmission of information over the air in cellular networks and broadcast communication. Fading channel models are also used in underwater acoustic communications to model the distortion caused by the water.

2 TECHNOLOGIES USED
WCDMA, CDMA2000, and IEEE 802.11g are the three different wireless standard technologies used.
2.1 WCDMA
W-CDMA (Wideband Code Division Multiple Access) is a wideband spread-spectrum channel access method that utilizes the direct-sequence spread spectrum method of asynchronous code division multiple access to achieve higher speeds and support more users compared to most time division multiple access (TDMA) schemes used today.
2.2 CDMA2000
CDMA2000 is a family of 3G mobile technology standards, based on CDMA, used to send voice, data, and signaling data between mobile phones and cell sites. CDMA2000 is also known as IMT Multi-Carrier (IMT-MC).
2.3 IEEE 802.11g
The IEEE 802.11g wireless standard protocol is growing rapidly worldwide and has become the most mature technology for WLANs. The IEEE 802.11g standard consists of detailed specifications for both the medium access control (MAC) and the physical layer (PHY). The PHY layer selects the correct modulation scheme that provides spread spectrum channel accessibility and the required data rate as well as the necessary bandwidth. The IEEE 802.11g physical layer basically uses four types of wireless data exchange techniques: infrared (IR), frequency hopping spread spectrum (FHSS), direct sequence spread spectrum (DSSS), and orthogonal frequency division multiplexing (OFDM).
2.3.1 Infrared (IR)
IR transmission operates at wavelengths near 850 nm. The IR signal is produced either by semiconductor laser diodes or LEDs, with the former being preferable because their electrical-to-optical conversion behavior is more linear. The infrared technology has not been successfully commercialized.
2.3.2 Frequency hopping spread spectrum (FHSS)
Frequency hopping spread spectrum (FHSS) divides the allocated frequency band into a number of channels. Each channel has an equal bandwidth, determined by the target data bit rate and the modulation scheme used. A transmitter sends data on each channel for a fixed amount of time called the dwell time. There are two types of frequency hopping: slow frequency hopping and fast frequency hopping. The main advantage of frequency hopping over DSSS is its flexibility to use an alternative channel within the allocated band. Typically there are 22 hop patterns and 79 non-overlapping frequency channels with 1 MHz channel spacing.
2.3.3 Direct Sequence Spread Spectrum (DSSS)
Direct Sequence Spread Spectrum (DSSS) is a modulation technique in which a message signal is spread over a bandwidth that is typically much greater than that required for reliable communications. The DSSS PHY is the part of the PHY layer that works in the 2.4 GHz frequency band. Data transmission is run through the DSSS PMD sublayer. The DSSS PMD gets the binary bit information from the PLCP protocol data unit (PPDU), then changes it into RF signals for the wireless medium with the help of carrier modulation and DSSS techniques. The PLCP preamble and PLCP header are sent at 1 Mbps with differential binary phase shift keying (DBPSK), and the MPDU is sent at either 1 Mbps or 2 Mbps, the latter using differential quadrature phase shift keying (DQPSK) modulation. DSSS techniques are also used in cellular networks (CDMA) and the Global Positioning System (GPS).
2.3.4 Orthogonal Frequency Division Multiplexing (OFDM)
OFDM is a multi-carrier modulation technique which is used to transmit a single data stream over a number of lower-rate subcarriers; it can be viewed as either a modulation or a multiplexing technique. The IEEE 802.11 standard adopted OFDM technology for transmission in high-rate wireless local area networks (WLANs). The main reasons for merging OFDM into IEEE 802.11 are to increase the robustness against frequency selective fading and narrowband interference. Further features of the OFDM technology are: high spectral efficiency, great flexibility, and conformance to the available channel bandwidth.
2.3.5 Bit error rate
Bit error rate is a method of assessing digital wireless transmission and thus is a very good way of increasing system integrity; even the robustness can be improved by calculating the bit error rate and, if it is below a certain threshold, taking steps to improve it.

Bit Error Rate = Number of errors / Number of bits transmitted

3 PROPOSED SYSTEM
3.1 WCDMA
In WCDMA, the code synchronization is divided into a three-stage process: the primary and secondary synchronization codes have to be identified by an active correlator, and the scrambling code is identified by a ballot machine. Stage 1 has to detect a 256-chip primary synchronization code (PSC). Generally, a matched filter is used to detect the slot boundary, and the correlation results are further accumulated over 15 slots to improve the signal-to-noise ratio (SNR). In stage 2, one of 16 secondary synchronization codes (SSCs) has to be identified by an active correlator for further comma-free Reed-Solomon decoding. In stage 3, one of eight complex-valued scrambling codes is identified as the cell-specific scrambling code by the ballot machine. Figure 3.1 shows the generalized model for the three stages. First, input data are correlated with a locally generated PN code sequence. Then, correlation results are calculated and accumulated, and the maximal result is identified as the desired timing, or the local PN code is identified as the desired one, by the peak detector.

[Figure 3.1: WCDMA Code Synchronizer Model]

3.2 CDMA 2000
In CDMA2000, the pilot channel is spread by a PN code with a length of 32,768 chips. It is not necessary to detect all 32,768 chips; there is a tradeoff between hardware complexity and correct detection rate. We decide to detect 128 chips of the PN code in a CDMA2000 system to achieve hardware efficiency as well as a good detection rate. Figure 3.2 shows the generalized model.

[Figure 3.2: CDMA2000 Code Synchronizer Model]

3.3 IEEE 802.11g
In 802.11g, the bit boundary is identified through Barker code detection. Accordingly, an 11-chip detection of Barker codes is applied to 802.11g code synchronization. A synchronization field matched filter is employed to calculate the correlations between the input and the Barker codes. When the peak detector generates periodic peak values, the Barker codes are located. Figure 3.3 shows the generalized model.

[Figure 3.3: IEEE 802.11g Code Synchronizer Model]

4. DESIGN METHODOLOGY
4.1 System Architecture
According to the models of Figures 3.1-3.3, the integrated synchronizer for WCDMA, CDMA2000, and 802.11g is generalized, and the overall model is shown in Figure 4.1. Input data are correlated with locally generated PN-code sequences. Then, correlation results are calculated and accumulated, and the peak result is identified as the desired timing, or the local PN code is identified as the desired one. Note that only the necessary hardware blocks in this generalized model are enabled when operating in the different systems.
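The common structure of all three synchronizers -- correlate the input with a local code, accumulate, and pick the correlation peak -- can be illustrated in a few lines of MATLAB. The sketch below uses the 11-chip Barker code of 802.11g as the local code; the bit count and noise level are assumptions for illustration, not values from the paper.

```matlab
% Sketch of the generalized synchronizer model: correlate the input with a
% local code, then locate the periodic correlation peaks. The 11-chip
% Barker code of 802.11 is used; offsets and noise level are illustrative.
barker = [1 -1 1 1 -1 1 1 1 -1 -1 -1];      % 11-chip Barker code
bits   = 2*randi([0 1], 1, 20) - 1;          % random data bits
tx     = kron(bits, barker);                 % spread each bit by the code
rx     = tx + 0.3*randn(size(tx));           % received chips (AWGN)
corr   = filter(fliplr(barker), 1, rx);      % matched filter output
% peak detector: peaks of |corr| every 11 chips mark the bit boundaries
[~, firstPeak] = max(abs(corr(1:11)));
peaks    = corr(firstPeak:11:end);
bits_hat = sign(peaks);                      % recovered bits
fprintf('bit errors: %d\n', sum(bits_hat ~= bits));
```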


[Figure 4.1: System Architecture]

4.2 Spurious Power Suppression Technique
The adders in the RCU design, which use the spurious power suppression technique, are separated into two parts, namely the most significant part (MSP) and the least significant part (LSP), between the eighth and the ninth bits. The adder is designed such that it latches the input data of the MSP whenever the MSP does not affect the computation results. By eliminating the computation in the MSP, this not only saves power consumption inside the adder in the current stage but also reduces the glitching noise which occurs in the arithmetic units of the next stage. The detection logic unit and the SE unit are used to determine the effective input ranges of the operands and to compensate the sign signals, respectively.

[Figure 4.2: Low power adder design using the spurious power suppression technique]
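The following MATLAB sketch illustrates the SPST idea at the bit level for a single 16-bit addition: the operands are split into LSP and MSP halves, and the MSP adder is exercised only when its inputs or the LSP carry can affect the result. This is a simplified software model of the hardware behaviour, not the authors' design.

```matlab
% Bit-level sketch of the SPST idea: a 16-bit addition is split into a
% least significant part (LSP, bits 1-8) and a most significant part
% (MSP, bits 9-16); the MSP adder is only exercised when its inputs (or
% the LSP carry) can affect the result. Simplified illustration only,
% assuming non-negative operands.
function [s, mspUsed] = spst_add(a, b)
    mask = 2^8 - 1;
    aL = bitand(a, mask);  aM = bitshift(a, -8);
    bL = bitand(b, mask);  bM = bitshift(b, -8);
    sL    = aL + bL;
    carry = bitshift(sL, -8);              % carry out of the LSP
    if aM == 0 && bM == 0 && carry == 0
        sM = 0;  mspUsed = false;          % MSP inputs latched, adder idle
    else
        sM = aM + bM + carry;  mspUsed = true;
    end
    s = bitshift(sM, 8) + bitand(sL, mask);
end
```

Since PN-code correlation sums are mostly small values, the MSP half stays idle for most additions, which is where the power saving comes from.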
4.3 Correlation Element Array
It is composed of 16 rows, each row include
16 CEs So CE Array contains 256 CEs It is
configured to work in 2 modes
Active mode: partial results are fed back to its input
to execute self accumulated correlations.
Passive mode: Serially linked CEs form a matched
filter. 256 CEs are linked into chain
4.4 Code synchronization
Most of the power consumption occurs only
in CE array. Adders play an important role when

5
0

1
W CDMA W it hout S PST

WCDMA With SPST

Figure 5.1 Power consumption in the CE array of


WCDMA technology in AWGN channel
40
35

Pow
er(m
W
)

30
25
20
15
10
5
0

1
WCDMA Without SPST

W CDMA W ith SPST

Figure 5.2 Power consumption in the CE array of


WCDMA technology in fading channel


Figure 5.3 illustrates the power consumption in the CE array of CDMA2000 technology in the AWGN channel with and without the spurious power suppression technique, and Figure 5.4 illustrates the power consumption in the CE array of CDMA2000 technology in the fading channel. Compared to the AWGN channel, the fading channel has consumed less power, and the proposed design has a low effect on the control of the power consumption of the system.

[Figure 5.3: Power consumption (mW) in the CE array of CDMA2000 technology in AWGN channel, without and with SPST]

[Figure 5.4: Power consumption (mW) in the CE array of CDMA2000 technology in fading channel, without and with SPST]

Figure 5.5 illustrates the power consumption in the CE array of IEEE 802.11g technology in the AWGN channel with and without the spurious power suppression technique, and Figure 5.6 illustrates the power consumption in the CE array of IEEE 802.11g technology in the fading channel. The power consumed by this technology is very much lower when compared to WCDMA and CDMA2000.

[Figure 5.5: Power consumption (mW) in the CE array of IEEE 802.11g technology in AWGN channel, without and with SPST]

[Figure 5.6: Power consumption (mW) in the CE array of IEEE 802.11g technology in fading channel, without and with SPST]

Finally, the bit error rate performance of the system when the signal is passed through the AWGN and fading channels is shown in figure 5.7 and figure 5.8, respectively.

[Figure 5.7: BER of WCDMA, CDMA2000 and 802.11g in AWGN and fading channels without SPST -- BER versus Es/No]

[Figure 5.8: BER of WCDMA, CDMA2000 and 802.11g in AWGN and fading channels with SPST -- BER versus Es/No]
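For reference, the sketch below shows one way such BER curves can be generated in MATLAB, using BPSK over an AWGN channel and over a flat Rayleigh fading channel. The modulation, Es/No range and bit count are assumptions made for illustration; the full synchronizer chain of the paper is not reproduced here.

```matlab
% Sketch of how BER curves like figures 5.7-5.8 can be generated: BPSK
% over AWGN and over flat Rayleigh fading, with coherent detection.
EsNo_dB  = -20:5:10;
nBits    = 1e5;
ber_awgn = zeros(size(EsNo_dB));  ber_fade = zeros(size(EsNo_dB));
for k = 1:numel(EsNo_dB)
    tx    = 2*randi([0 1], 1, nBits) - 1;          % BPSK symbols
    sigma = sqrt(1 / (2*10^(EsNo_dB(k)/10)));      % noise std per dimension
    n     = sigma*(randn(1,nBits) + 1j*randn(1,nBits));
    h     = (randn(1,nBits) + 1j*randn(1,nBits))/sqrt(2);   % Rayleigh gains
    ber_awgn(k) = mean(sign(real(tx + n)) ~= tx);
    ber_fade(k) = mean(sign(real(conj(h).*(h.*tx + n))) ~= tx);
end
semilogy(EsNo_dB, ber_awgn, '-o', EsNo_dB, ber_fade, '-s');
xlabel('EsNo'); ylabel('BER'); legend('AWGN', 'Fading');
```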

6 CONCLUSIONS
In this paper, the proposed scheme of adding the spurious power suppression technique to the code synchronizer shows that the power consumption can be reduced to 56.27% in WCDMA synchronization, 5.86% in CDMA2000 synchronization, and 83.98% in 802.11g synchronization, respectively. Moreover, the proposed scheme takes system integration issues into account by balancing the data I/O rate and the data processing rate through an interlaced I/O schedule.


A Survey of Various Design Patterns for Improving Quality and Performance of Web Applications Design

Md Umar Khan
Associate Professor, Computer Science and Engineering Dept.
Prakasam Engineering College, Kandukur, A.P, India
v.umar2003@gmail.com

Dr T.V. Rao
Professor & HOD, CSE Department
PVP Siddartha Institute of Technology, Kanuru, Vijayawada, AP, India
tv_venkat@yahoo.com

Prof. A. Ananda Rao
Director, Industrial Relations & Placements
JNTUA, Ananthapuramu, A.P., India
akepogu@yahoo.co.in
Abstract
The last decade has witnessed increasing usage of design patterns in web application design for improving quality and performance. Web application developers have been using creational, behavioral and structural design patterns since they are based on design principles. Recurring object-oriented design problems are addressed by design patterns. A design pattern, when used appropriately, is capable of improving the quality and performance of a web application. Providing a coherent study and presentation of such design patterns would certainly help web application developers in the appropriate usage of design patterns. In this paper we survey various design patterns for developing web applications. By understanding design patterns properly, developers can avoid reinventing the wheel by reusing them. We also provide an evaluation of existing design patterns based on quality and performance metrics.
Index Terms - Design patterns, web application design, web application quality, web application performance

I. INTRODUCTION
A design pattern is a proven solution for frequently occurring design problems, so design patterns play an important role in web application design. Such patterns, when used appropriately, can improve the quality and performance of a web application design. Gamma et al. [1] and Buschmann et al. [2] observed that design patterns are reusable software components that can solve a particular design problem. Design patterns enable the reuse of expert design experience, so as to stop reinventing the wheel, and solve recurring design problems [39], [40], [41], [42], [43]. According to Patil et al. [4], low latency is an important characteristic of modern web applications. Towards achieving low latency they proposed a proxy cache to access large scale web applications. Caching is generally used when multiple queries have the same payload [5]. Commercial web browsers are making use of this feature by becoming cache-friendly [6].
The main architectural design pattern which is widely used for web application development is MVC (Model View Controller). Along with MVC, other design patterns such as DTO, DAO, and the decorator design pattern are used. Various categories of patterns are used in the process of web application development. There are many attributes that can characterize the quality and performance of web applications; they include response time, throughput, availability, cost, security, reliability, scalability and extensibility.
The remainder of this paper is organized as follows. Section II reviews related works that provide insights into the present state of the art of the usage of design patterns for web application development. Section III discusses the quality and performance attributes that can be realized using design patterns, while Section IV concludes the paper.

endpoints while navigating, to help algorithms take the best navigation paths; it also reduces memory usage. The news design pattern accommodates new information to be incorporated into a web site with ease [14]. The pagination design pattern improves performance while viewing search results. Set-based navigation groups related items together for convenience and a rich user experience [15].
Rossi et al. [16] proposed an architecture for improving Web Information Systems (WIS) with navigational patterns. The navigational patterns they studied for this purpose include node in context, active reference, shopping basket, landmark, news and set-based navigation. The landmark pattern helps users to have a consistent cue about important interfaces. The shopping basket pattern is similar to providing bookmarks, or to the shopping cart used in shopping malls in the real world. The node in context pattern customizes object representation as per the context. The active reference pattern defines and maintains indexes in the form of active orientation tools for better navigation. Yi Li and Kevin L [19] proposed an architecture for improving web database access. They suggested connection pooling to improve the performance of web applications. Connection pooling provides a set of pre-established connections to the application, which eliminates scarcity of connections and promotes scalability of web applications.

II. RELATED WORKS

David Parsons [3] studied evolving architectural patterns for web applications such as auto-completion architecture, delta management architecture, and client-side buffering. Kwon and Bang [7] proposed a design method for web applications that improves scalability and flexibility. They implemented business logic in JSP files and a reusable servlet for SQL-related search. They also implemented a forward servlet which avoids restarting the web server and prevents maintenance problems. Mustafa Mamawala [8] studied web application performance over a period of time and observed that performance deteriorates when performance tuning is not part of the framework. QoS requirements such as security, manageability, scalability, and availability are to be considered in the design of a web application for performance. Schwabe et al. [9] studied cohesive web application design by identifying patterns and separating concerns. They proposed a framework known as the Object-Oriented Hypermedia Design Method (OOHDM) for web application design, which focuses on conceptual, navigational, and interface design models, and they worked on recommendation patterns and push communication for personalization. Carreras et al. [10] proposed bottom-up design patterns that are used for building the energy web with a combination of innovative communication and information technologies; the main focus is to have self-organizing components with continuous evaluation.


Chang and Hon [11] proposed navigational structures for assessing the quality of web applications; they applied statistical testing theory to the Totavia search engine web site, which proved to be effective. Thung et al. [12] studied design patterns for improving the quality and performance of web applications. They compared both architectural and navigational design patterns and proposed suitable design patterns for a school web site. The architectural patterns they analyzed include MVC (Model View Controller) and PAC (Presentation Abstraction Control); they also studied different navigational patterns such as set-based navigation, pagination, news, navigation strategy, and navigation observer. The MVC pattern makes a web application maintainable and scalable [17]. The PAC pattern makes it highly interactive and modular. Navigation observer records visited links to help with navigation history; thus it makes navigation easier and decouples the navigation and recording processes [13]. Navigation strategy computes link endpoints while navigating to help algorithms take the best navigation paths; it also reduces memory usage. The news design pattern accommodates new information to be incorporated into a web site with ease [14]. The pagination design pattern improves performance while viewing search results. Set-based navigation groups related items for convenience and a rich user experience [15].
Rossi et al. [16] proposed an architecture for improving Web Information Systems (WIS) with navigational patterns. The navigational patterns they studied for this purpose include node in context, active reference, shopping basket, landmark, news, and set-based navigation. The landmark pattern helps users to have a consistent cue about important interfaces. The shopping basket pattern is similar to providing bookmarks, or to the shopping cart used in shopping malls in the real world. The node in context pattern customizes object representation as per the context. The active reference pattern defines and maintains indexes in the form of active orientation tools for better navigation. Yi Li and Kevin L. [19] proposed an architecture for improving web database access; they suggested connection pooling to improve the performance of web applications. Connection pooling provides a set of pre-established connections to the application, which eliminates the scarcity of connections and promotes the scalability of web applications.
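Yi Li and Kevin L. [19] describe connection pooling only at the architectural level; as an illustration under our own assumptions (pool size, timeout, and class names are ours), a minimal pool of pre-established JDBC connections can be sketched in Java as follows.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal connection pool sketch: connections are created once and reused.
class SimpleConnectionPool {
    private final BlockingQueue<Connection> idle;

    SimpleConnectionPool(String url, int size) throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url)); // pre-established connections
        }
    }

    // Borrow a connection, waiting briefly instead of opening a new one.
    Connection acquire() throws InterruptedException {
        return idle.poll(5, TimeUnit.SECONDS);          // timeout is an assumption
    }

    // Return the connection for reuse, which avoids scarcity of connections.
    void release(Connection c) {
        idle.offer(c);
    }
}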
Lorna Uden [20] studied the design process for web applications; it is essential to capture the right requirements for the success of a web application, besides using UI models like RMM (Relationship Management Methodology) and OOHDM. Srikanth and Savithri [21] proposed a new architecture for improving the quality of web applications using design patterns; towards it, they combined many existing design patterns in an efficient manner. The set of patterns they combined along with MVC includes the template method pattern, the dependency injection pattern, the inversion of control pattern, the Database Access Object (DAO) pattern, and the aspect proxy pattern. An Liu et al. [22] observed that fault tolerance is very important in web services composition, as web applications might interact with web applications in enterprise applications. They proposed a fault-tolerant framework for transactional web services; the framework includes high-level strategies for exception handling and transaction techniques, and its attributes include reusability, portability, convenience, and economy. Dong et al. [23] studied the process of identifying design patterns in UML models and developed a tool that visualizes the design patterns existing in the UML model of a web application.
Smith [24] emphasized the use of AJAX technology as part of web application design to provide a rich user experience. Cane [25] studied the problem of measuring the performance of web applications; his main focus was on latency as a measure. According to Garrido et al. [28], a layered approach using design patterns can improve the usability of web applications; such a layered approach also makes the applications maintainable. Taleb et al. [30] presented a patterns-oriented design approach for web application development. The categories of design patterns they focused on include information architecture design patterns, page layout and user interface design patterns, navigation design patterns, interaction design patterns, information visualization design patterns, and interoperability and system design patterns. They identified that developers often have no in-depth knowledge about the types of patterns to be used to solve various design problems.
Leff and Rayfield [36] proposed flexible web application partitioning based on the MVC architecture for improving performance. They also studied various deployment scenarios that can be supported without the need to change source code; by their design, repartitioning the web application incurs no additional cost. Lucca et al. [37] proposed a mechanism to automatically identify the design patterns implemented in a web application. They used reverse engineering techniques to discover patterns in web applications; their work makes it possible to know whether typical design patterns are used in an application and to take the necessary decisions to redesign it. Tammet et al. [38] proposed a rule-based approach for systematic application development with a web-based interface. They separated the user interface and business logic so as to make the application maintainable; they implemented a rule server which manages business rules, while the application's business-logic layer adapts to the latest business rules without the need to modify source code.
Muñoz-Arteaga et al. [44] studied design patterns for interactive web applications from the perspective of a classification of security feedback. Their effort was to find an equilibrium between the security and usability of web application development. Liu Yong-Jun and Li Ke-Xi [45] proposed a framework for web application development which provides basic building blocks that make use of best practices such as design patterns to reuse software components. Their framework is characterized by DTO (Data Transfer Object) for parameter passing, Database Access Objects (DAOs), a connection pool, and an SQL processor; all the patterns are used within the confines of the MVC architectural pattern. Kerji [46] proposed the use of the decorator design pattern to adapt to the changing requirements of end users who need different kinds of information in the form of XML. Their approach promoted code reusability besides reducing maintenance costs.
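Kerji [46] is summarized above without code; purely as an illustration of the decorator idea applied to XML output, the Java sketch below wraps a plain report in an XML envelope without modifying the wrapped class. All class names are hypothetical.

// Component interface shared by the concrete object and its decorators.
interface Report {
    String render();
}

class PlainReport implements Report {
    public String render() { return "quarterly figures"; } // hypothetical content
}

// Decorator: adds an XML representation without modifying PlainReport.
class XmlReportDecorator implements Report {
    private final Report inner;
    XmlReportDecorator(Report inner) { this.inner = inner; }
    public String render() {
        return "<report>" + inner.render() + "</report>";
    }
}

class DecoratorDemo {
    public static void main(String[] args) {
        Report report = new XmlReportDecorator(new PlainReport());
        System.out.println(report.render()); // prints: <report>quarterly figures</report>
    }
}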
III. QUALITY ATTRIBUTES AND PERFORMANCE

Jeff Offutt [18] studied the quality of web applications and observed that reliability, usability, and security are the most important quality attributes for the success of web applications. Other quality attributes include availability, scalability, maintainability, and time-to-market. Design patterns are proven industry best practices that improve the quality attributes of applications, making them more maintainable [26]. Bernardi et al. [27] observed that pattern-based development of web applications can improve quality attributes, resulting in reusability and maintainability. In particular, they proposed mechanisms for the semi-automatic redesign of existing web applications to improve their quality; to redesign applications they used the Java Server Faces (JSF) framework along with the MVC architectural pattern.
Ricca [29] studied web application development closely and opined that web application quality is a multidimensional attribute which includes performance, accessibility, usability, maintainability, reliability, correctness, and conformance to standards. Yang and Chen [31] presented a framework named ISPWAD which enables the security of web applications besides improving performance. Design issues with regard to security are to be considered while developing web applications; the security issues to be considered include access control, SQL injection, session hijacking, information disclosure, and hidden field tampering. Design issues with respect to performance include the time taken for accessing a web page and the time taken for making DML (Data Manipulation Language) operations on the database. Menasce and Almeida [32] defined parameters for the performance of a web application: response time, throughput, availability, and cost. Response time is the time taken to respond to queries. Throughput is the rate of processing requests. Availability refers to the time in which the services of the web application are available. Cost refers to a measure of performance economy such as the price/performance ratio. Besides these attributes, other attributes like security, reliability, scalability, and extensibility can characterize the Quality of Service (QoS) of a web application [32], [33], [34], [35].
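These performance parameters are related by the operational laws used in capacity planning. As a small worked illustration (ours, not taken from [32]), Little's law connects the average number N of requests concurrently in the system to the throughput X and the response time R:

N = X × R

For assumed values of X = 200 requests/s and R = 0.25 s, the application holds on average N = 200 × 0.25 = 50 requests in the system; the numbers are illustrative only.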
CONCLUSIONS
In this paper we surveyed the usage of design patterns for improving the quality and performance of web application development. From the survey of the literature it is understood that MVC is the common thread in all architectures proposed for web application development. However, many other design patterns can be used along with MVC, including the decorator design pattern, DTO, DAO, and so on. In fact, the usage of many design patterns depends on the application at hand. Thus it can be said that, along with MVC, the creational, structural, and behavioral patterns proposed by the GoF can be used appropriately on demand. A variety of navigational patterns are widely used in web application development. AJAX is given paramount importance in modern web application development, and architectural patterns should have a place for its usage. Patterns dealing with dynamic web content presentation through XML are essential. Finally, it is to be borne in mind that the quality and performance of a web application can be realized through the use of design patterns.


REFERENCES
[1] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass., 1995.
[2] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns, Wiley, Chichester, 1996.
[3] David Parsons, "Evolving Architectural Patterns for Web Applications."
[4] S. B. Patil, Sachin Chavan, Preeti Patil, and Sunita R. Patil, "High Quality Design to Enhance and Improve Performance of Large Scale Web Applications," IEEE, 2012.
[5] S. Ngamsuriyaroj, P. Rattidham, I. Rassameeroj, P. Wongbuchasin, N. Aramkul, and S. Rungmano, "Performance Evaluation of Load Balanced Web Proxies," IEEE, 2011.
[6] S. B. Patil, Sachin Chavan, and Preeti Patil, "High Quality Design and Methodology Aspects to Enhance Large Scale Web Services," International Journal of Advances in Engineering & Technology, 2012, ISSN 2231-1963.
[7] OhSoo Kwon and HyeJa Bang, "Design Approaches of Web Application with Efficient Performance in Java," IEEE, 11(7), 2012.
[8] Mustafa Mamawala, "Web Application Performance Tuning: A Systematic Approach," IEEE.
[9] Daniel Schwabe and Robson Mattos Guimarães, "Cohesive Design of Personalized Web Applications," IEEE.
[10] Iacopo Carreras, "Bottom-Up Design Patterns and the Energy Web," IEEE, 40(4), 2010.
[11] Wen-Kui Chang and Shing-Kai Hon, "Assessing the Quality of Web-Based Applications via Navigational Structures," IEEE.
[12] Phek Lan Thung, Chu Jian Ng, Swee Jing Thung, and Shahida Sulaiman, "Improving a Web Application Using Design Patterns," IEEE.
[13] G. Rossi, A. Garrido, and S. Carvalho, "Design Patterns for Object-Oriented Hypermedia Applications," in Pattern Languages of Program Design, vol. 2, ch. 11, pp. 177-191, Vlissides, Coplien, and Kerth, eds., Addison-Wesley, 1996.
[14] G. Rossi, D. Schwabe, and F. Lyardet, "Improving Web Information Systems with Navigational Patterns," Proc. Eighth International World Wide Web Conference, 1999.
[15] G. Rossi, D. Schwabe, and F. Lyardet, "Patterns for Designing Navigable Information Spaces," Proc. PLoP, 1999.
[16] Gustavo Rossi, Daniel Schwabe, and Fernando Lyardet, "Improving Web Information Systems with Navigational Patterns," IEEE.
[17] Avraham Leff and James T. Rayfield, "Web-Application Development Using the Model/View/Controller Design Pattern," IEEE.
[18] Jeff Offutt, "Web Software Applications Quality Attributes," IEEE.
[19] Yi Li and Kevin L., "Performance Issues of a Web Database," IEEE.
[20] Lorna Uden, "Design Process for Web Applications," IEEE.
[21] J. Srikanth and R. Savithri, "A New Approach for Improving Quality of Web Applications Using Design Patterns," IEEE.
[22] An Liu, Qing Li, Liusheng Huang, and Mingjun Xiao, "FACTS: A Framework for Fault-Tolerant Composition of Transactional Web Services," IEEE, 3(1), 2010.
[23] Jing Dong, Sheng Yang, and Kang Zhang, "Visualizing Design Patterns in Their Applications and Compositions," IEEE, 33(7), 2007.
[24] Keith Smith, "Simplifying Ajax-Style Web Development," IEEE.
[25] John W. Cane, "Measuring Performance of Web Applications: Empirical Techniques and Results," IEEE.
[26] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
[27] Mario Luca Bernardi, Giuseppe Antonio Di Lucca, and Damiano Distante, "Improving the Design of Existing Web Applications," IEEE, 2010.


[28] Alejandra Garrido, Gustavo Rossi, and Damiano Distante, "Model Refactoring in Web Applications," IEEE.
[29] Filippo Ricca, "Analysis, Testing and Re-structuring of Web Applications," IEEE.
[30] M. Taleb, A. Seffah, and A. Abran, "Patterns-Oriented Design Applied to Cross-Platform Web-based Interactive Systems," IEEE.
[31] Shin-Jer Yang and Jia-Shin Chen, "A Study of Security and Performance Issues in Designing Web-based Applications," IEEE.
[32] Daniel A. Menasce and Virgilio A. F. Almeida, Capacity Planning for Web Services: Metrics, Models, and Methods, Prentice Hall PTR, Upper Saddle River, NJ, USA, 2001.
[33] Daniel A. Menasce, "QoS Issues in Web Services," IEEE Internet Computing, 6(6):72-75, 2002.
[34] Daniel A. Menasce, "Response-Time Analysis of Composite Web Services," IEEE Internet Computing, 8(1):90-92, 2004.
[35] Daniel A. Menasce, Virgilio A. F. Almeida, and Lawrence W. Dowdy, Performance by Design: Computer Capacity Planning by Example, Prentice Hall PTR, Upper Saddle River, NJ, USA, 2004.
[36] Avraham Leff and James T. Rayfield, "Web-Application Development Using the Model/View/Controller Design Pattern," IEEE.
[37] Giuseppe Antonio Di Lucca, Anna Rita Fasolino, and Porfirio Tramontana, "Recovering Interaction Design Patterns in Web Applications," IEEE.
[38] Tanel Tammet, Hele-Mai Haav, Vello Kadarpik, and Marko Kramees, "A Rule-Based Approach to Web-Based Application Development," IEEE.
[39] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, 1996.
[40] J. O. Coplien and D. C. Schmidt, Pattern Languages of Program Design, Addison-Wesley, 1995.
[41] M. Fowler, Analysis Patterns: Reusable Object Models, Addison-Wesley, 1997.
[42] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
[43] W. Pree, Design Patterns for Object-Oriented Software Development, Addison-Wesley, 1995.
[44] Jaime Muñoz-Arteaga, "A Classification of Security Feedback Design Patterns for Interactive Web Applications," IEEE.
[45] Liu Yong-Jun and Li Ke-Xi, "Design and Implementation of the New Web Application Framework JEMSF," IEEE.
[46] Vijay K. Kerji, "Decorator Pattern with XML in Web Application," IEEE.


Quick Detection Technique to Reduce Congestion in WSN

M. Manipriya (1), B. Arputhamary (2)
(1) M.Phil Scholar, Department of Computer Science, Bishop Heber College (Autonomous), Tiruchirappalli-620 017
(2) Asst. Professor, Department of Computer Applications, Bishop Heber College (Autonomous), Tiruchirappalli-620 017
priyayehova@gmail.com, arputhambaskaran@rediffmail.com


Abstract - Wireless Sensor Networks (WSNs) are employed for either continuous monitoring or event detection in a target area of interest. In event-driven applications it is critical to report the detected events in the area, and with sudden bursts of traffic possible due to spatially-correlated events or multiple events, data loss due to congestion will result in information loss or delayed arrival of the sensed information. Congestion control techniques detect congestion and attempt to recover from packet losses due to congestion, but they cannot eliminate or prevent the occurrence of congestion. Congestion avoidance techniques employ proactive measures to alleviate future congestion using parameters like queue length, hop count, channel conditions, and priority index. However, maintaining and processing such information becomes a significant overhead for the sensor nodes and degrades the performance of the network. This paper proposes a congestion-avoidance quick detection technique (QDT) that uses the queue buffer length of the sensor nodes to estimate congestion and diffuses traffic to provide a congestion-free routing path towards the base station. The protocol improves event reporting and packet delivery ratio by dynamically diffusing the traffic in the network using multiple forwarders in addition to backup forwarding. Results show that our protocol significantly improves event reporting in terms of packet delivery ratio by avoiding congestion while diffusing the traffic effectively.

Keywords: wireless sensor network, node detection algorithm, reducing congestion.


I. INTRODUCTION
Wireless sensor networks are an emerging technology in the research field; they are used to monitor health conditions and temperature, and are also used in military applications, home applications, etc. Wireless sensors are further used in forest fire detection, inventory control, energy management, and so on. Thousands of nodes are interconnected with one another; the control station collects all data from each node and transmits the information from one node to another. Nodes have limited resources in terms of bandwidth, storage space, and battery level; sensor networks adopt a multi-hop communication architecture, since nodes send their data to a sink node for transmitting the packets. Sensor applications have been classified into two classes: event-driven and continuous dissemination. Historically, sensor nodes are constrained in battery level and bandwidth, and current research on sensor networks focuses on maximizing the network lifetime. Node delay and throughput are common issues in sensor networks: the end-to-end delay of data transmission must lie in a range acceptable to the users, and as delay decreases the users' Quality of Service (QoS) requirements are met. During transmission some nodes have too little energy to transmit, or become inactive and stop sending packets; such nodes waste resources and increase the congestion between nodes, which causes high delay and a chance of packet loss. To avoid this issue, our proposed technique achieves congestion control and minimum delay by detecting inactive nodes during the transmission.

1.1 Congestion in WSN

Congestion is harmful to wireless sensor networks because it lowers the throughput of the network by dropping packets that contain crucial sensed information, and it reduces the lifetime of the network through weakened energy efficiency at every sensor node, particularly for spatially-correlated events. With the buffers of the sensor nodes close to full, there is invariably contention at each node among the data packets, which results in increased rivalry, increased retransmissions, reduced packet delivery ratios,
and increased energy consumption. In event-driven applications, when there is an abrupt increase in the traffic, congestion degrades the performance of the network through the loss of event packets or their delayed arrival at the sink. Congestion control is necessary not only to improve the overall throughput but also to prolong the network lifetime and improve the end-to-end throughput, referred to as the accuracy level, by avoiding packet loss due to congestion. Congestion, being one of the biggest problems for a sensor network, has to be avoided to improve the Quality of Service (QoS) in terms of throughput, packet delivery ratio, latency, and energy efficiency. Congestion management in WSNs has been widely concerned with detecting congestion in the network and controlling it by adjusting the rate of the input traffic, prioritizing the data packets, or load balancing among the sensor nodes. The traffic in the network is adjusted either hop-by-hop, at every sensor node, or by end-to-end rate adjustment at the source nodes where the traffic is generated. While congestion control concentrates on enabling the network to recover from packet loss caused by the occurrence of congestion, congestion avoidance detects early congestion, or estimates the congestion in the network, and tries to forestall its occurrence. For example, in an event-based approach, a suitable congestion avoidance mechanism may help to sight the approaching congestion and react to the situation before the actual collapse takes place. Congestion avoidance is the core idea of the model in this paper: to proactively identify and alleviate congestion in the network and prepare the network to handle future traffic.



II. RELATED WORK

Chieh-Yih Wan et al. [1] proposed a congestion detection and avoidance technique for sensor networks that significantly improves the performance of data dissemination applications such as directed diffusion by mitigating hotspots and reducing the energy tax with a low fidelity penalty on sensing applications. Sivapuja Srikanth Babu et al. [2] investigated jamming avoidance for video traffic in wireless sensor networks; they reduced packet drops at intermediate nodes, and the cost of retransmitting dropped packets was significantly reduced. Pooja Sharma et al. [3] tried to prolong the lifetime of a wireless sensor network by congestion avoidance techniques; the techniques included Congestion Detection and Avoidance, Event-to-Sink Reliable Transport, and Pump Slowly, Fetch Quickly. Akoijam Premita et al. [4] discussed a power-efficient energy-aware routing protocol for wireless sensor networks which consumed less energy and reduced the end-to-end delay. Jayachandran et al. [5] explained fast data collection with reduced interference and increased lifetime; they improved the packet delivery ratio and saved the energy of each node. Abhay Raman et al. [6] minimized the delay and maximized the lifetime of the network by reducing the delay from source to destination. Hao-Li Wang et al. [7] presented an enhanced scheme for controlling both coverage and quality of service, used to detect inactive node bandwidth and energy. Navneet Kaur [8] elaborated a load balancing technique in sensor networks to distribute the packets, which reduces packet loss. M. Vijayalakshmi et al. [9] proposed clustering and prediction techniques which use temporal correlation among the sensor data and provide a chance of reducing the energy consumption of continuous sensor data collection; thus stability can be achieved and the network lifetime prolonged. G. Srinivasan et al. [10] analyzed congestion detection and congestion control, which are major research areas in WSNs; it is important to design protocols for controlling congestion. Parveen Yadav et al. [11] proposed a new cluster-based security architecture which starts from the initialization of the network; the safe path is not the shortest but the next alternative shortest path in a mobile ad hoc network. R. Sumathi et al. [12] surveyed QoS-based routing protocols for wireless sensor networks; many QoS issues are focused on in their work, and the performance of QoS-based routing protocols such as SAR, MMSPEED, MCMP, MCBR, and EQSR has been analyzed and compared.

III. PROPOSED WORK

Inactive node
Inactive nodes try to obtain many benefits from the network while occupying battery or bandwidth. An inactive node might not send the data packets in a proper way, and it can mount any of the possible attacks in the sensor network:
- it turns off its power when it does not have communication with other nodes;
- it may not forward the packets from the source node to the exact destination node;
- it may send some packets and drop the other packets.
Techniques that properly cope with inactive nodes through the replication allocation approach outperform the traditional cooperative technique in terms of accessibility, cost, and minimum delay.


The QDT function operates as shown in Figure 1: a node processes a send request and checks for an available route; if a route is available, the message is forwarded, otherwise the message is saved in the buffer and a route request is initiated, after which the process stops.

Figure 1: Flowchart of QDT function


When data is transmitted, the node does not forward the reply request on the reverse route. The proposed quick detection algorithm detects and solves the problem of inactive nodes over the wireless sensor network; it considers partial greediness and a new replication allocation.

The quick detection algorithm proceeds as follows.

Step 1: Initiate the process to send the packets.
Step 2: Identify the number of nodes and establish a connection with each node.
Step 3: Check for the available route in the path.
Step 4: If a route is available, forward the message to the next node, selecting the path which is congestion-free; else save the message in the buffer, initiate a route request, and select an alternative path; otherwise select any other path which is available.
Step 5: If an active node becomes an inactive node, find the inactive node, correct the node error, and retransmit the packet again from the buffer itself.
Step 6: Repeat the steps until a path is selected.
Step 7: Stop the search.
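As an illustration only, the forwarding decision of Steps 3-5 can be rendered in Java as below; the queue-capacity threshold, the node model, and all identifiers are our own assumptions rather than details from the paper.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal sketch of the QDT forwarding decision (Steps 3-5), under assumed types.
public class QdtNode {
    static final int QUEUE_CAPACITY = 64;            // congestion threshold (assumed)

    final Deque<byte[]> buffer = new ArrayDeque<>(); // local packet buffer
    List<QdtNode> forwarders;                        // potential next hops
    QdtNode backupForwarder;                         // backup next hop
    boolean inactive;                                // set when the node stops responding
    int queueLength;                                 // current queue occupancy

    boolean congested() {
        // Queue buffer length is used to estimate congestion, as in the abstract.
        return queueLength >= QUEUE_CAPACITY;
    }

    void send(byte[] packet) {
        // Steps 3-4: prefer a congestion-free, active forwarder.
        for (QdtNode next : forwarders) {
            if (!next.inactive && !next.congested()) {
                next.receive(packet);
                return;
            }
        }
        // Diffuse traffic through the backup forwarder when primaries are congested.
        if (backupForwarder != null && !backupForwarder.inactive
                && !backupForwarder.congested()) {
            backupForwarder.receive(packet);
            return;
        }
        // Step 4 (else branch): buffer the packet and wait for a new route.
        buffer.addLast(packet);
    }

    void receive(byte[] packet) {
        queueLength++;                               // placeholder for real queuing
    }

    void onNodeRecovered() {
        // Step 5: once the error is corrected, retransmit from the buffer itself.
        int pending = buffer.size();                 // retry each buffered packet once
        for (int i = 0; i < pending; i++) {
            send(buffer.pollFirst());
        }
    }
}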

IV. NUMERICAL EXAMPLE

The network is loaded such that traffic converges towards the base station from different directions, and the traffic from these sources is sent at different rates. Table 1 shows the results with and without a backup forwarder. Once backup forwarding is utilized, more packets are received in a shorter duration than without backup. Though the packets get diffused through backup forwarders, the delay is lower, because the time taken for the packets to reach the base station through a backup forwarder is smaller than the wait time at the potential forwarders once their queues are full.

Table 1: Backup forwarder

No. of Packets Sent | Packets Received (Without Backup) | Delay in s (Without Backup) | Packets Received (With Backup) | Delay in s (With Backup)
1500 | 1015 | 0.16 | 1302 | 0.14
2250 | 1555 | 0.18 | 1872 | 0.15
4500 | 2791 | 0.20 | 3171 | 0.12

For instance, for 1500 packets sent, the delivery ratio improves from about 68% without backup (1015/1500) to about 87% with backup forwarding (1302/1500).

The traffic diffusion approach of proactively avoiding congestion at the nodes makes our protocol deliver more packets even under high traffic loads. In our proposed method, though every source transmits at 100 Kbps towards the base station, the rate controller at every node adjusts the packet loss rate at every hop level and reduces the actual packets generated.


Figure 2: No. of packets delivered (comparison of CTR and QDA)


V. CONCLUSION
This paper introduced a quick detection algorithm that aims to reduce the congestion level and minimize delay in order to enhance network performance. This is achieved by finding the inactive nodes present during the transmission: if any error occurs at an inactive node, the algorithm finds and corrects the error and sends the data to the next node from the current node itself. Thus the proposed algorithm can keep the congestion level among the nodes of a sensor network minimized, prolong the network lifetime, and reduce service disturbance. Among the many algorithms that have been used in wireless sensor networks, each with its own advantages, the quick detection algorithm can additionally accommodate the nodes of other algorithms. By this method we can improve the QoS of the WSN, reduce the congestion of the information, and increase the range of sensor nodes.


REFERENCES
[1] Chieh-Yih Wan, Shane B. Eisenman, and Andrew T. Campbell, "Congestion Detection and Avoidance in Sensor Networks," Proceedings of SenSys '03, pp. 266-279, 2003.
[2] Sivapuja Srikanth Babu, R. Konda Reddy, P. Eswaraiah, Supriya Sivapuja, and Srikar Babu S.V, "Jamming Avoidance for Video Traffic in Wireless Sensor Networks," International Journal of Emerging Trends & Technology in Computer Science, Volume 2, Issue 2, pp. 293-297, 2013.
[3] Pooja Sharma, Deepak Tyagi, and Pawan Bhadana, "A Study on Prolong the Lifetime of Wireless Sensor Network by Congestion Avoidance Techniques," International Journal of Engineering and Technology, Volume 2, Issue 9, pp. 4844-4849, 2010.
[4] Akoijam Premita and Mamta Katiyar, "A Review on Power Efficient Energy-Aware Routing Protocol for Wireless Sensor Networks," International Journal of Engineering Research & Technology, Volume 1, Issue 4, pp. 1-8, 2012.
[5] J. Jayachandran and R. Ramalakshmi, "Fast Data Collection with Reduced Interference and Increased Life Time in Wireless Sensor Networks," International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 2, pp. 1-5, 2013.
[6] Abhay Raman, Ankit Kr. Singh, and Abhishek Rai, "Minimizing Delay and Maximizing Lifetime for Wireless Sensor Networks with Anycast," International Journal of Communication and Computer Technologies, Volume 1, No. 26, Issue 4, pp. 96-99, 2013.
[7] Hao-Li Wang, Rong-Guei Tsai, and Long-Sheng Li, "An Enhanced Scheme in Controlling Both Coverage and Quality of Service in Wireless Sensor Networks," International Journal on Smart Sensing and Intelligent Systems, Volume 6, No. 2, pp. 772-790, 2013.
[8] Navneet Kaur, "Review on Load Balancing in Wireless Sensor Network," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 5, pp. 1044-1047, 2013.
[9] M. Vijayalakshmi and V. Vanitha, "Cluster Based Adaptive Prediction Scheme for Energy Efficiency in Wireless Sensor Networks," International Research Journal of Mobile and Wireless Communications, Volume 4, pp. 174-181, 2013.
[10] G. Srinivasan and S. Murugappan, "A Survey of Congestion Control Techniques in Wireless Sensor Networks," International Journal of Information Technology and Knowledge Management, Volume 4, No. 2, pp. 413-415, 2011.
[11] Parveen Yadav and Harish Saini, "An Alternative Path Routing Scheme for Intrusion Prevention in Wireless Network," International Journal of Engineering and Innovative Technology, Volume 2, Issue 1, pp. 248-251, 2012.
[12] R. Sumathi and M. G. Srinivas, "A Survey of QoS Based Routing Protocols for Wireless Sensor Networks," J Inf Process Syst, Vol. 8, No. 4, pp. 589-602, 2012.
[13] C. Basaran, K. Kang, and M. H. Suzer, "Hop-by-Hop Congestion Control and Load Balancing in Wireless Sensor Networks," IEEE Conference on Local Computer Networks, 2010.
[14] M. M. Bhuiyan, I. Gondal, and J. Kamruzzaman, "CAM: Congestion Avoidance and Mitigation in Wireless Sensor Networks," IEEE Vehicular Technology Conference, 2010.
[15] J. B. Helonde, V. Wadhai, V. Deshpande, and S. Sutar, "EDCAM: Early Detection Congestion Avoidance Mechanism for Wireless Sensor Network," International Journal of Computer Applications, 2010.
[16] K. K. Sharma, H. Singh, and R. B. Patel, "A Reliable and Energy Efficient Transport Protocol for Wireless Sensor Network," Global Journal of Computer Science and Technology, 2010.
[17] P. Sharma, D. Tyagi, and P. Bhadana, "A Study on Prolong the Lifetime of Wireless Sensor Network by Congestion Avoidance Techniques," International Journal of Engineering and Technology, 2010.


HEURISTICS TO DETECT AND EXTRACT LICENSE PLATES

Rajivegandhi C, Sungkrityayan Khan, Manas Srivastava, Sabarinathan C
Department of Computer Science and Engineering, SRM University, Chennai, India
rajivegandhi@gmail.com, sungkrityayankhan@gmail.com, manas_srivastava@live.com, sabaricsn@gmail.com
Abstract - Image-based license plate detection requires the segmentation of the license plate (LP) region from a larger image taken with the help of a camera. While there may be numerous LP candidate areas, the detection of the LP area (LPA) requires many diverse methods of image processing to be applied. After we detect a candidate LPA, we extract the possible alphanumeric information. The LPA information is further processed using heuristics to reject those candidates which contain alphanumeric-like regions that are not LP information. In this paper we present several heuristics that successfully detect and extract the alphanumeric regions from a correctly discovered LPA. The extracted alphanumeric regions are then identified by Optical Character Recognition (OCR). A large number of experiments have been performed on images of Indian and international license plates, taken from the internet along with some images taken by us. An accuracy of 85% is achieved on the images available in the database. The open-source software OpenCV, configured with the Code Blocks IDE (http://sourceforge.net/projects/opencvlibrary/), is used in the experiments.


Keywords - License Plate Extraction, Image Segmentation, Character Extraction, Character Recognition

I. INTRODUCTION
A license plate is the unique identification of a vehicle. The basic issues in real-time license plate recognition are accuracy and recognition speed. License Plate Recognition (LPR) has been applied in numerous applications such as automatically identifying vehicles in parking slots, access control in restricted areas, and detecting and verifying stolen vehicles. Till now, there have been some well-known commercially operational LPR systems around the world. It is assumed that these systems work under some given constraints and that the cameras are mounted at fixed locations without mobility. LPR systems consist of three major components: license plate detection, character extraction, and character recognition.
License plate detection is the first important stage of an LPR system. The quality of the algorithms used in a license plate detector determines the speed and accuracy of license plate detection. In this paper the distance between the camera and the car is kept approximately constant. For license plate detection, the concepts of edge detection [5,9], contour determination, and bounding box formation and elimination are used. The selection of license plate areas (LPAs) and their elimination to obtain the actual license plate area is based on various heuristics. This stage is important since improper detection of the license plate can lead to misrecognized characters.
Character extraction, or character segmentation, is the second component of our LPR system. It takes a properly segmented license plate as input. Some preprocessing is done on the license plate image for noise removal; a number of morphological operators are applied to the image for this purpose, and the noise-free output image is sent for character segmentation. Image binarization and image projections are used for character extraction. The extracted characters are then input to the OCR for recognition.
The rest of the paper is organized as follows. In Section II we briefly describe the related works in this field. In Section III we give an overview of our approach, followed by some examples and testing in Section IV. The results of our experiments, along with a discussion, are presented in Section V. Finally we conclude the paper in Section VI, followed by the references.
II. RELATED WORK

Extensive research has been done in the area of license plate recognition since its invention in the year 1976 at the Police Scientific Development Branch in the UK. It is an interesting topic in recent research, attracting several papers from researchers around the world. Here we mention some of the relevant works.
Some important concepts relevant to LP detection approaches are mentioned. Processing of boundary lines, from a gradient filter, and an edge image is discussed in [10]. This edge image is thresholded and then processed with the Hough Transform (HT) to detect lines; eventually, couples of two parallel lines are considered as plate candidates. However, boundary line detection is not suitable in the case of not finding horizontal pairs. It may also happen that the image boundary line is absent or not detected properly due to noise and uneven brightness. In addition, HT is inherently a computationally heavy task. The color and textures of the LP have also been used to identify it [11], but they seem to be inefficient, especially when the system has plates of different colors and sign patterns. Other common approaches involve Top Hat and Bottom Hat filtering (highlighting the black-white transitions) [12] and binary morphology algorithms (like Otsu's method) [13]. But all these algorithms rely on color information and special signs.


Many common algorithms for character segmentation are present, such as direct segmentation [14], projection and cluster analysis [15], and template matching [16].


III. PROPOSED WORK

We describe here our approach, where we take up direct techniques from essential image processing to obtain the candidate areas and subsequently apply domain heuristics to obtain the LPR. The relevant methods used in our LPR are implemented in OpenCV using some predefined functions, which are mentioned in parentheses.


[Figure 1 (block diagram of our LPR system): input car image (RGB) → conversion of the RGB image to gray scale → conversion of the gray-scale image to a Canny edge image → finding contours in the Canny edge image → approximating contours to quadrilaterals and applying bounding boxes → applying certain heuristics to extract the original LP from the candidate LPs → applying morphological operations and thresholding to remove noise and enhance the characters in the LP → plotting the vertical projection histogram of the noiseless LP image to segment characters → extraction of characters using the concept of ROI → recognition of each character using OCR]

FIGURE 1. BLOCK DIAGRAM OF OUR LPR SYSTEM

A. License Plate Detection

In this stage we locate and isolate the license plate region from the given image. The quality of the image plays an important part, hence prior to this stage preprocessing of the image is necessary. Preprocessing of the image includes conversion of the RGB image to gray scale, followed by the edge detection, contour, and bounding box steps shown in Figure 1.
For character extraction, the morphological operations of erosion and dilation are applied on the LP image. Dilation causes the bright pixels within a region to grow, and erosion is the converse operation. Dilation tends to smooth concavities and erosion tends to smooth away protrusions in the image, which enhances its quality and makes it noise-free. After this step the output image is thresholded to enhance the numerals and characters present in the image: the characters are light-shaded, like white, over a darker background, like black. Now character segmentation is carried out, and the approach used is Vertical Image Projection [6,8]. Boundaries from the noise-free license plate image are removed (imclearborder) before applying the vertical projection histogram, in order to threshold the histogram bin value to zero. The coordinates where the histogram bin value is zero are stored; the boundary of each character in the license plate image is formed by these coordinates. The characters are cropped subsequently using the concept of ROI (Region of Interest) [2].
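The pipeline of Figure 1 maps onto a handful of OpenCV calls. The authors' implementation is a C++ program; purely for illustration, the following minimal sketch uses OpenCV's Java bindings (a 3.x installation is assumed), and the input file name, the Canny thresholds, and the aspect-ratio and area heuristics are our own assumptions, not values from the paper.

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class LprSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Preprocessing: RGB input converted to gray scale, then Canny edges.
        Mat bgr = Imgcodecs.imread("car.jpg");               // hypothetical input
        Mat gray = new Mat(), edges = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.Canny(gray, edges, 100, 200);                // thresholds assumed

        // Contours, bounding boxes, and heuristic elimination of candidates.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect plate = null;
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            double aspect = r.width / (double) r.height;
            // Heuristic: plates are wide, reasonably large regions (values assumed).
            if (aspect > 2.0 && aspect < 6.0 && r.area() > 2000) {
                plate = r;
                break;
            }
        }
        if (plate == null) return;

        // Character extraction: binarize the plate and take a vertical projection.
        Mat lp = new Mat(gray, plate), bin = new Mat();
        Imgproc.threshold(lp, bin, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
        Mat proj = new Mat();
        Core.reduce(bin, proj, 0, Core.REDUCE_SUM, CvType.CV_32S);

        // Columns whose histogram bin value is zero mark character boundaries.
        for (int col = 0; col < proj.cols(); col++) {
            if (proj.get(0, col)[0] == 0) {
                System.out.println("Character boundary at column " + col);
            }
        }
    }
}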

IV. EXAMPLES AND TESTING

Experiments were performed to test the accuracy of the proposed heuristic algorithm. Our sample space for the experiments included a variety of test images containing license plates made in India and foreign license plates, in addition to some of our own snapshots of cars. Our algorithm was converted into a C++ program which was run in the Code Blocks environment configured with OpenCV. We show two examples below. Input Image 1 is of an Indian LP taken by our own camera (14 megapixel Sony Cybershot DSC-H55, optical zoom 10x). Input Image 2 is of an international LP taken from the internet.


Figure 2. Input Image 1
Figure 3. Image showing all possible Bounding Boxes
Figure 4. Extracted License Plate
Figure 5. VP Histogram of noiseless LP image
Figure 6. Segmented LP showing character boundaries
Figure 7. Extracted Characters
Figure 8. Input Image 2
Figure 9. Image showing all possible Bounding Boxes
Figure 10. Extracted License Plate
Figure 11. VP Histogram of noiseless LP image
Figure 12. Segmented LP showing character boundaries
Figure 13. Extracted Characters


V. RESULTS AND DISCUSSIONS
After running the program on various test images, we obtained properly segmented characters of the LP in 80% of the cases. The license plate detection approach presented in Section III was used to obtain license plate images. A total of 250 different license plate images were included in the experiment. All candidate images are processed in one format, i.e., light-colored characters on a darker background. The binary enhanced license plate character images obtained from our proposed method were sent to the OCR for recognition. The accuracy is 83% for the extraction of the license plate region, 93% for the segmentation of characters, and 90% for OCR. The overall algorithm accuracy combining detection and extraction is 88%.
Type of LP Image | Success Rate in LP extraction | Success Rate in Character segmentation | Success Rate of OCR
Indian LP Images | 82 | 93 | 90
International LP Images | 88 | 93 | 92
LP Images taken by own camera | 78 | 91 | 89

VI. CONCLUSION

In this paper we have proposed a heuristic method to segment a license plate from an image. The algorithm used in this paper not only accelerates the process but also increases the probability of detecting the license plate and extracting its characters, under a certain set of constraints. The process works through the steps of character width estimation, vertical height estimation, segmentation of the license plate into blocks, and identification of the character blocks. Various well-known techniques were combined to come out with the final algorithm. The results show high accuracy of non-character area removal and thus better recognition of characters after their segmentation. Finally, the percentage accuracy for the entire process came to 88%. Our proposed approach is under improvement, and its accuracy will be increased by using more sophisticated approaches.

VII. REFERENCES

[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., vol. 2, Pearson Education.
[2] Gary Bradski and Adrian Kaehler, Learning OpenCV, O'Reilly, safari.oreilly.com.
[3] Ondrej Martinsky, Algorithmic and Mathematical Applications of Automatic Number Plate Recognition Systems, B.Sc. thesis, BRNO University of Technology, 2007.
[4] Lihong Zheng, Bijan Samali, Xiangjian He, and Laurence T. Yang, "Accuracy Enhancement for License Plate Recognition," 10th IEEE Conference on Computer and Information Technology, 2010, p. 512.
[5] Bai Hongliang and Liu Changping, "A Hybrid License Plate Extraction Method Based on Edge Statistics and Morphology," 17th International Conference on Pattern Recognition (ICPR'04).
[6] Richard G. Casey and Eric Lecolinet, "A Survey of Methods and Strategies in Character Segmentation," Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04).
[7] Xiangjian He, Lihong Zheng, Qiang He, Wenjing Jia, Bijan Samali, and Marimuthu Palaniswami, "Segmentation of Characters on Car License Plates," IEEE, 2008, p. 399.
[8] Huadong Xia and Dongchu Liao, "The Study of License Plate Character Segmentation Algorithm Based on Vertical Projection," IEEE, 2011, p. 4583.
[9] Wai-Yiu Ho and Chi-Man Pun, "A Macao License Plate Recognition System Based on Edge and Projection Analysis," IEEE, 2010, p. 67.
[10] V. Kamat and H. Ganesan, "An Efficient Implementation of the Hough Transform for Detecting Vehicle License Plates Using DSPs," Proceedings of the Cellular Neural Networks and Their Applications, Proceedings of the IEEE, 31st Annual.
[11] J. R. Parker and P. Federl, "An Approach to License Plate Recognition," Technical Report, Library of Computer Vision, University of Calgary, 1996.
[12] Fernando Martin and David Borges, "Automatic Car Plate Recognition Using a Partial Segmentation Algorithm," Signal Theory and Communications Department, Vigo University, Pontevedra, Spain.
[13] Pierre Ponce, Stanley S. Wang, and David L. Wang, "License Plate Recognition - Final Report."
[14] Hongwei Ying, Jiatao Song, and Xiaobo Ren, "Character Segmentation for License Plate by the Separator Symbol's Frame of Reference," 2010 International Conference on Information, Networking and Automation (ICINA).
[15] Shuang Qiao, Yan Zhu, Xiufen Li, Taihui Liu, and Baoxue Zhang, "Research of Improving the Accuracy of License Plate Character Segmentation," 2010 Fifth International Conference on Frontier of Computer Science and Technology.
[16] Deng Hongyao and Song Xiuli, "License Plate Characters Segmentation Using Projection and Template Matching," 2009 International Conference on Information Technology and Computer Science.
[17] http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-tro/index.html
[18] http://www.mathworks.com/help/toolbox/images
[19] http://en.wikipedia.org/wiki/Automatic_number_plate_recognition

DYE SENSITIZED SOLAR CELL

By
Dhanush.M, IV year, R.M.K. Engineering College, dhanushdhruvan@gmail.com
Diwakar.B, IV year, R.M.K. Engineering College, diwa.balakrishnan@gmail.com



ABSTRACT:
The development of new types of solar cells is promoted by increasing public awareness that the earth's oil reserves could run out during this century. As the energy need of the planet is likely to double within the next 50 years, and frightening climatic consequences of the greenhouse effect caused by fossil fuel combustion are anticipated, it is urgent that we develop new kinds of renewable energy to cover the substantial deficit left by fossil fuels. Since the prototype of the Dye Sensitized Solar Cell (DSSC) was reported in 1991, it has aroused intense interest owing to its low cost, simple preparation procedure, and benign effect on the environment. However, the potential problems caused by liquid electrolytes limit the long-term performance and practical use of DSSCs. Therefore, much attention has been given to improving the light-to-electrical power conversion and to replacing the liquid electrolytes by solid-state or quasi-solid-state electrolytes. This review will focus on progress in the development of improved electrolytes, especially quasi-solid-state electrolytes, for DSSCs.


Keywords: dye-sensitized solar cells; liquid electrolytes; solid-state electrolytes; quasi-solid-state electrolytes; photoelectric performance; long-term stability.

INTRODUCTION:
The prototype of a DSSC was reported in 1991 by M. Grätzel. In Grätzel cells the functions of light absorption and charge-carrier transport are separated. Although the solar power conversion efficiencies of DSSCs are lower than those of classical crystalline silicon cells, in DSSCs based on liquid electrolytes a photoelectric conversion efficiency of 11% has been achieved. However, the potential problems caused by liquid electrolytes, such as the leakage and volatilization of solvents, possible desorption and photo-degradation of the attached dyes, and the corrosion of the platinum counter electrode, are considered some of the critical factors limiting the long-term performance and practical use of DSSCs. It is believed that quasi-solid-state electrolytes, especially those utilizing thermosetting gels, are particularly applicable for fabricating DSSCs with high photoelectric performance and long-term stability in practical applications.
Here we describe a photovoltaic cell created from low- to medium-purity materials through low-cost processes, which exhibits commercially realistic energy conversion efficiency. The device is based on a 10 μm thick, optically transparent film of titanium dioxide particles a few nanometers in size, coated with a monolayer of a charge-transfer dye to sensitize the film for light harvesting. Because of the high surface area of the semiconductor film and the ideal spectral characteristics of the dye, the device harvests a high proportion of the incident solar energy flux (46%) and shows exceptionally high efficiencies for the conversion of incident photons to electrical current (>80%). The overall light-to-electric energy conversion yield is 7.1-7.9% in simulated solar light and 12% in diffuse daylight. The large current densities (>12 mA cm^-2) and exceptional stability (sustaining at least five million turnovers without decomposition), as well as the low cost, make practical applications feasible.

STRUCTURE AND PRINCIPLES OF DSSC:

DSSCs include a substrate of fluorine-doped SnO2 conducting glass (FTO); a porous nano-crystalline semiconductor oxide (TiO2) film, sensitized by a dye, for absorbing visible light; a redox electrolyte layer (usually an organic solvent containing a redox system such as the iodide/triiodide couple) for reducing the oxidized dye; and a platinized cathode to collect electrons and catalyze the regeneration reaction of the redox couple. The light-to-electricity conversion in a DSSC is based on the injection of electrons from the photo-excited state of the sensitizing dye into the conduction band of TiO2. The dye is regenerated by electron donation from iodide in the electrolyte. The iodide is restored, in turn, by the reduction of triiodide at the cathode, with the circuit being completed via electron migration through the external load. The elementary steps are:
INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

264

www.iaetsd.in

INTERNATIONAL CONFERENCE ON CURRENT TRENDS IN ENGINEERING RESEARCH, ICCTER - 2014

1.
2.
3.
4.
5.
6.


(1) TiO2|S + hν → TiO2|S*  (Excitation)
(2) TiO2|S* → TiO2|S+ + e-(cb)  (Injection)
(3) TiO2|2S+ + 3I- → TiO2|2S + I3-  (Regeneration)
(4) I3- + 2e-(Pt) → 3I-  (Reduction)
(5) I3- + 2e-(cb) → 3I-  (Recapture)
(6) TiO2|S+ + e-(cb) → TiO2|S  (Recombination)

The electrolytes employed in DSSCs can be classified as liquid, solid-state, or quasi-solid-state. Several aspects are essential for any electrolyte in a DSSC.
(1) The electrolyte must be able to transport the charge carriers between the photoanode and the counter electrode. After the dye injects electrons into the conduction band of TiO2, the oxidized dye must be rapidly reduced to its ground state; thus the choice of the electrolyte should take into account the dye redox potential and the regeneration of the dye.
(2) The electrolyte must permit the fast diffusion of charge carriers (higher conductivity) and produce good interfacial contact with the porous nano-crystalline layer and the counter electrode. For liquid electrolytes, it is necessary to prevent the loss of the liquid electrolyte by leakage and/or evaporation of the solvent.
(3) The electrolyte must have long-term stability, including chemical, thermal, optical, electrochemical, and interfacial stability, and must not cause desorption and degradation of the dye from the oxide surface.
(4) The electrolyte should not exhibit significant absorption in the range of visible light. For electrolytes containing the I-/I3- redox couple, since I3- is colored and reduces the visible light absorption by the dye, and I3- ions can react with the injected electrons and increase the dark current, the concentration of I-/I3- must be optimized.

QUASI-SOLID-STATE ELECTROLYTES:

The quasi-solid state, or gel state, is a particular state of matter, neither liquid nor solid, or conversely both liquid and solid. Generally, a quasi-solid-state electrolyte is defined as a system which consists of a polymer network (polymer host) swollen with liquid electrolytes. Owing to its unique hybrid network structure, a quasi-solid-state electrolyte always possesses, simultaneously, both the cohesive property of a solid and the diffusive transport property of a liquid. Namely, quasi-solid-state electrolytes show better long-term stability than liquid electrolytes do, and have the merits of liquid electrolytes, including high ionic conductivity and excellent interfacial contact. Because of these unique characteristics, quasi-solid-state electrolytes have been actively developed as highly conductive electrolyte materials for DSSCs, lithium secondary batteries, and fuel cells. Quasi-solid-state electrolytes are usually prepared by incorporating a large amount of a liquid electrolyte into an organic monomer or polymer matrix, forming a stable gel with a network structure via a physical or chemical method. A quasi-solid-state electrolyte formed via physical cross-linking is called an entanglement network, which is thermo-reversible (thermoplastic). By contrast, chemical or covalent cross-linking leads to the formation of a thermo-irreversible (thermosetting) gel electrolyte.
QUASI-SOLID-STATE ELECTROLYTES:
The quasi-solid state, or gel state, is a particular state of matter, neither liquid nor solid, or conversely both liquid and solid. Generally, a quasi-solid-state electrolyte is defined as a system consisting of a polymer network (polymer host) swollen with a liquid electrolyte. Owing to this unique hybrid network structure, quasi-solid-state electrolytes simultaneously possess both the cohesive property of a solid and the diffusive transport property of a liquid: they show better long-term stability than liquid electrolytes do, while retaining the merits of liquid electrolytes, namely high ionic conductivity and excellent interfacial contact. These unique characteristics make them strong candidates for DSSC applications.

Although attempts to use dye-sensitized photoelectrochemical cells in energy conversion had been made before, the efficiency of such devices had been low.

One problem was poor light harvesting: on a smooth surface, a monomolecular layer of sensitizer absorbs less than 1 % of the incident monochromatic light. The remaining option is to increase the roughness of the semiconductor surface, so that a larger number of dye molecules can be adsorbed directly onto the surface while simultaneously being in direct contact with the redox electrolyte. By using semiconductor films consisting of nanometer-sized TiO2 particles, together with newly developed charge-transfer dyes, the efficiency and stability of the solar cell have been improved.

High-surface-area TiO2 films were deposited on a conducting glass sheet from colloidal solutions. Electronic contact between the particles was produced by brief sintering at 450 °C. The size of the particles and pores making up the film is controlled by the size of the particles in the colloidal solution, and the internal surface area of the film is determined by the particle size and the film thickness. These parameters were optimized to obtain efficient light harvesting while maintaining a pore size large enough to allow the redox electrolyte to diffuse easily. Films of 10 μm thickness consisting of particles with an average size of 15 nm gave a linear photocurrent response up to full sunlight and were used throughout. As each dye molecule occupies an area of 1 nm², the inner surface of the film is 780 cm² for each 1 cm² of geometric surface; the roughness factor is thus 780, which is smaller than the predicted value of 2000.

The loss mechanisms, such as recombination, normally encountered in semiconductor photoconversion have been minimized: the role of the semiconductor in a dye-sensitized device is merely to conduct the injected majority charge carriers, and no minority carriers are involved in the photoconversion process. Therefore, the surface and bulk recombination losses due to lattice defects that are encountered in conventional photovoltaic cells are not observed in such a device.

The long-term stability of the cell performance was tested by illuminating the dye-loaded thin TiO2 film with visible light for 2 months. The change in the photocurrent was less than 10 % over this period, during which a charge of 62,000 C cm−2 was passed through the device, corresponding to a turnover number of 5 × 10^6 for the sensitizer. This implies that if any degradation occurred, its quantum yield was less than 2 × 10^−8. As Φ_dec = k_dec/k, the rate constant for excited-state decomposition, k_dec (s−1), must be at least about 5 × 10^7 times smaller than k, the sum of the rate constants for all channels of dye deactivation. Because charge injection is the predominant channel, this sum is practically equal to the rate constant for charge injection, which exceeds 10^12 s−1. Therefore, the upper limit for k_dec is 2 × 10^4 s−1, which agrees with the known photophysics of this class of transition-metal complexes. The very fast electron injection observed with such dyes, combined with their high chemical stability, renders these compounds very attractive for practical development.
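The figures above are mutually consistent, and it may help to see the arithmetic spelled out. The short Python sketch below re-derives the turnover number and the upper limit on k_dec from the quantities quoted in the text (one dye molecule per nm², roughness factor 780, 62,000 C cm−2, Φ_dec < 2 × 10^−8, injection rate constant 10^12 s−1); it is only a back-of-the-envelope check, not part of the original experiments.

# Back-of-the-envelope check of the stability figures quoted above.
E_CHARGE = 1.602e-19            # elementary charge in coulombs

charge_passed = 62_000          # C per cm^2 of geometric surface
roughness = 780                 # cm^2 of inner surface per cm^2 of geometric surface
area_per_dye = 1e-14            # cm^2 occupied by one dye molecule (1 nm^2)

electrons = charge_passed / E_CHARGE     # electrons injected per cm^2: ~3.9e23
dyes = roughness / area_per_dye          # dye molecules per cm^2: ~7.8e16
turnover = electrons / dyes              # ~5e6, matching the text

phi_dec_max = 2e-8              # upper bound on the decomposition quantum yield
k_inj = 1e12                    # charge-injection rate constant, s^-1
k_dec_max = phi_dec_max * k_inj # ~2e4 s^-1, matching the text

print(f"turnover number   ~ {turnover:.1e}")
print(f"k_dec upper limit ~ {k_dec_max:.0e} s^-1")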


Thermoplastic Gel Electrolytes (TPGEs):

The formation of the TPGE is based on the physical cross-linking of gelators to form an entanglement network that solidifies the liquid electrolyte. The main characteristic of this kind of gel electrolyte is the reversible gel-sol conversion with changing temperature, which benefits deep penetration of the electrolyte into the mesoporous dye-coated nano-crystalline TiO2 film. Besides the ionic conductivity of the gel electrolyte, the interfacial contact between the electrolyte layer and the nano-crystalline TiO2 film or counter electrode is one of the most important factors influencing the photovoltaic performance of DSSCs.
The TPGE contains a gelator and a liquid electrolyte; the liquid electrolyte consists of an organic solvent, a redox couple, and additives, or of an ionic-liquid (IL) electrolyte system. The first thermoplastic polymer gel electrolyte used in DSSCs was reported in [89]; that electrolyte was composed of poly(acrylonitrile) (PAcN), EC, PC, AcN, and NaI. The light-to-electricity conversion efficiency of this DSSC was lower than that of DSSCs based on liquid electrolytes, owing to the unoptimized components and the difficult penetration of the PAcN network into the nano-crystalline TiO2 film. High photovoltaic performance and excellent stability of a DSSC were later obtained by using a TPGE containing poly(vinylidene fluoride-co-hexafluoropropylene) (PVDF-HFP) combined with an MePN-based liquid electrolyte containing 1,2-dimethyl-3-propylimidazolium iodide and iodine. The DSSC showed photovoltaic performance similar to that of an analogous cell containing the same liquid electrolyte, which means that the polymer matrix has no negative effect on the performance of the DSSC.

Characteristics of TPGE:
(1) Poly(ethylene glycol) (PEG) contains many ether groups and polyhydric side groups; both kinds of groups can complex alkali metal ions such as potassium and sodium ions. Because of the interaction among PEG, propylene carbonate (PC), and the alkali metal iodide salts, the iodide anions can be separated from the alkali cations to form free anions.
(2) The large number of ether groups and polyhydric side groups on the PEG matrix can form hydrogen bonds with the PC solvent, which hangs the solvent molecules on the entanglement network of the polymer chains and results in the formation of a stable thermo-reversible polymer gel electrolyte.

(3) The TPGE is in a solid (gel) state at low temperature but shows fluidity, with a viscosity of 0.76 Pa·s, at temperatures above 50 °C, which makes for deep penetration into the mesoporous dye-coated nano-crystalline TiO2 film [102] and forms sufficient interfacial contact between the electrolyte layer, the nano-crystalline TiO2 film, and the platinum counter electrode. At temperatures below 20 °C, the TPGE is in a gel state with a viscosity of 2.17 Pa·s. This reversible thermoplastic behavior of the TPGE is very useful for sealing and assembling DSSCs.
The characteristics of the thermoplastic polymer gel electrolyte depend markedly on temperature. This is because an increase in temperature causes a phase transfer from the gel state to the sol state and a change of the dominant conduction mechanism from the Arrhenius type to the Vogel-Tamman-Fulcher (VTF) type, which in turn changes the ionic conductivity of the thermoplastic polymer gel electrolyte and the photovoltaic performance of the DSSC. The photovoltaic performance of DSSCs based on this kind of polymer gel electrolyte thus depends strongly on temperature, which is a typical characteristic of such cells.
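To illustrate the two conduction laws just mentioned, the following minimal Python sketch evaluates generic Arrhenius and VTF conductivity expressions over a temperature range spanning the gel-sol transition. The parameter values (σ0, Ea, B, T0) are purely illustrative assumptions, not fitted to any measurement discussed in this review.

# Illustrative comparison of Arrhenius- and VTF-type conductivity laws
# (parameter values are generic assumptions, not data from this review).
import math

K_B = 8.617e-5                       # Boltzmann constant, eV/K

def arrhenius(T, sigma0=1.0, Ea=0.30):
    # sigma = sigma0 * exp(-Ea / (kB * T)), Ea in eV, T in K
    return sigma0 * math.exp(-Ea / (K_B * T))

def vtf(T, sigma0=1.0, B=900.0, T0=200.0):
    # sigma = sigma0 * exp(-B / (T - T0)), B and T0 in K
    return sigma0 * math.exp(-B / (T - T0))

for T in (290, 310, 330, 350):       # temperatures around the gel-sol transition
    print(T, f"Arrhenius: {arrhenius(T):.3e}", f"VTF: {vtf(T):.3e}")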


Thermosetting Gel Electrolytes (TSGEs):

The TPGE is convenient for fabricating DSSCs. However, there is a potential risk for the practical application of DSSCs, namely chemical instability such as phase separation and crystallization of the iodide salts. Therefore, more environmentally stable electrolytes are still needed. Among the available options, the TSGE is a good choice for achieving high photovoltaic performance and good long-term stability of DSSCs. A highly stable DSSC based on a TSGE containing ureasil precursors and an organic liquid electrolyte containing iodide salts has been prepared; the unique structure of this thermosetting organic-inorganic hybrid gel electrolyte leads to a high-quality DSSC, which maintains 5-6 % light-to-electricity conversion efficiency even after storage for several years.

REFERENCES:
1. M. Grätzel. Nature 414, 338 (2001).
2. B. O'Regan, M. Grätzel. Nature 353, 737 (1991).
3. M. Grätzel. Inorg. Chem. 44, 6841 (2005).
4. M. Grätzel. J. Photochem. Photobiol., A 164, 3 (2004).
5. A. F. Nogueira, C. Longo, M. A. De Paoli. Coord. Chem. Rev. 248, 1455 (2004).
6. P. Wang, S. M. Zakeeruddin, J. E. Moser, T. Sekiguchi, M. Grätzel. Nat. Mater. 2, 402 (2003).
7. U. Bach, D. Lupo, P. Comte, J. E. Moser, F. Weissörtel, J. Salbeck, H. Spreitzer, M. Grätzel. Nature 395, 583 (1998).

CONCLUSIONS:
In this review, we have introduced quasi-solid-state electrolytes for DSSCs. Although a light-to-electricity conversion efficiency of 11 % has been achieved for DSSCs containing liquid electrolytes, the potential problems caused by liquid electrolytes, such as leakage and volatilization of the organic solvents, are considered to be among the critical factors limiting the long-term performance and practical use of DSSCs. Solid-state electrolytes overcome the fluidity and volatility disadvantages of liquid electrolytes; however, their poor interfacial contact and lower conductivity lead to lower light-to-electricity conversion efficiencies. Quasi-solid-state electrolytes, especially TSGEs, combine the ionic conductivity and interfacial contact of liquid electrolytes with the long-term stability of solid-state electrolytes, and are therefore believed to be among the most suitable electrolytes for fabricating DSSCs with high photoelectric performance and long-term stability in practical applications.


WEB PERSONALIZATION: A GENERAL SURVEY


J. JOSIYA ANCY
Research Scholar, Department of Information Technology,
Vellore Institute of Technology,
Vellore District, Tamilnadu.
ABSTRACT:
Retrieving relevant information from the web has become difficult nowadays because of the large amount of data available in various formats. Users rely on the web, but they must go through long result lists to choose the documents they need, which is a very time-consuming process. The approach of adapting the data available on the Web to satisfy each user's requirements is called Web Personalization. Web personalization is a solution to the information-overload problem: it provides what users need without requiring them to ask or search for it explicitly. This approach helps to improve the efficiency of Information Retrieval (IR) systems. This paper presents a survey of personalization, its approaches, and its techniques.
Keywords: Web Personalization, Information Retrieval, User Profile, Personalized Search

I. INTRODUCTION

WEB PERSONALIZATION:
The web is used for accessing a variety of information stored in various locations and formats all over the world. The content on the web is increasing rapidly, and there is a need to identify, access, and retrieve content based on the needs of users.
An ultimate need nowadays is to predict user requirements in order to improve the usability of a web site. Personalized search is the solution to this problem, since different search results are provided based on the preferences of each user.
In brief, web pages are personalized based on the characteristics of an individual user: interests, social category, context, etc. Web personalization is defined as any action that adapts the information or services provided by a web site to an individual user or a set of users. Personalization implies that the changes are based on implicit data, such as items purchased or pages viewed.
Personalization is an overloaded term: there are many mechanisms and approaches, both automated and controlled by marketing rules, whereby content can be focused on an audience in a one-to-one manner.


WEB PERSONALIZATION ARCHITECTURE:

Fig 1.1: Web personalization process

The web personalization process of Fig 1.1 uses the web site's structure, web logs created by observing the users' navigational behavior, and user profiles created according to the users' preferences, along with the web site's content, to analyze and extract the information needed to find the patterns expected by the user. This analysis creates a recommendation model which is presented to the user. The recommendation process relies on existing user transactions or rating data, so items or pages recently added to a site cannot be recommended.
II. WEB PERSONALIZATION APPROACHES

Web mining is the mining of web data on the World Wide Web, and web personalization operates on these data. The web data may be of the following kinds:
1. Content of the web pages (the actual web content)
2. Inter-page structure
3. Usage data, i.e., how the web pages are accessed by users
4. User profiles, i.e., information collected about users (cookies/session data)

With personalization, the content of the web pages is modified to better fit user needs. This may involve actually creating web pages that are unique per user, or using the desires of a user to determine which web documents to retrieve. Personalization can target a group of customers with specific interests, based on the users' visits to a web site. Personalization also includes techniques such as the use of cookies, the use of databases, and machine-learning strategies, and it can be viewed as a type of clustering, classification, or even prediction.
USER PROFILES FOR PERSONALIZED INFORMATION ACCESS:


In the modern Web, as the amount of available information causes information overload, the demand for personalized approaches to information access increases. Personalized systems address the overload problem by building, managing, and representing information customized for individual users. There is a wide variety of applications to which personalization can be applied, and a wide variety of devices on which to deliver the personalized information.
Most personalization systems are based on some type of user profile, a data instance of a user model that is applied to adaptive interactive systems. User profiles may include demographic information, e.g., name, age, country, education level, etc., and may also represent the interests or preferences of either a group of users or a single person. Personalization of web portals, for example, may focus on individual users (displaying news about specifically chosen topics or the market summary of specifically selected stocks) or on groups of users for whom distinctive characteristics were identified (displaying targeted advertising on e-commerce sites). In order to construct an individual user's profile, information may be collected explicitly, through direct user intervention, or implicitly, through agents that monitor user activity.

Fig. 2.1. Overview of user-profile-based personalization

As shown in Figure 2.1, the user profiling process generally consists of three main phases. First, an information collection process is used to gather raw information about the user; depending on the collection process selected, different types of user data can be extracted. The second phase focuses on constructing the user profile from the user data. In the final phase, a technology or application exploits the information in the user profile in order to provide personalized services.
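As an illustration of these three phases, the following minimal Python sketch collects raw page texts, builds a term-frequency profile, and uses it to rank candidate documents. The page texts and the bag-of-words profile representation are illustrative assumptions, not a description of any particular system surveyed here.

# Minimal sketch of the three user-profiling phases described above.
# The page texts and profile representation are illustrative assumptions.
from collections import Counter
import re

# Phase 1: information collection -- e.g., texts of pages the user visited.
visited_pages = [
    "latest cricket scores and match highlights",
    "cricket world cup schedule and team news",
    "stock market summary and selected stocks",
]

# Phase 2: profile construction -- here, a simple term-frequency profile.
def build_profile(pages):
    terms = Counter()
    for page in pages:
        terms.update(re.findall(r"[a-z]+", page.lower()))
    return terms

profile = build_profile(visited_pages)

# Phase 3: exploitation -- rank candidate documents by overlap with the profile.
def score(document, profile):
    return sum(profile[t] for t in re.findall(r"[a-z]+", document.lower()))

candidates = ["cricket match report", "gardening tips for spring"]
print(max(candidates, key=lambda d: score(d, profile)))  # -> "cricket match report"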
TECHNIQUES USING USER PROFILES:
From an architectural and algorithmic point of view, personalization systems fall into three basic categories: rule-based systems, content-based filtering systems, and collaborative filtering systems.


Rule-Based Personalization Systems. Rule-based filtering systems rely on manually or automatically generated decision rules that are used to recommend items to users. Many existing e-commerce web sites that employ personalization or recommendation technologies use manual rule-based systems. The primary drawbacks of rule-based filtering techniques, in addition to the usual knowledge-engineering bottleneck, stem from the methods used to generate the user profiles. The input is usually a subjective description of the users or their interests by the users themselves, and is thus prone to bias; furthermore, the profiles are often static, so system performance degrades over time as the profiles age.
Content-Based Filtering Systems. In content-based filtering systems, user profiles represent the content descriptions of items in which the user has previously expressed interest. The content descriptions of items are represented by a set of features or attributes that characterize each item. The recommendation-generation task in such systems usually involves comparing features extracted from unseen or unrated items with the content descriptions in the user profile; items considered sufficiently similar to the user profile are recommended to the user.
Collaborative Filtering Systems. Collaborative filtering addresses some of the shortcomings of the approaches mentioned above. Particularly in the context of e-commerce, recommender systems based on collaborative filtering have achieved notable successes. These techniques generally involve matching the ratings of the current user for objects (e.g., movies or products) with those of similar users (nearest neighbors) in order to produce recommendations for objects not yet rated or seen by the active user.
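The nearest-neighbor idea behind collaborative filtering can be made concrete with a small sketch. The following Python code predicts a rating as a similarity-weighted average of the ratings of similar users, using cosine similarity over co-rated items; the rating matrix is a toy example invented purely for illustration.

# Minimal user-based collaborative filtering sketch (toy data, illustrative only).
import numpy as np

# Rows: users, columns: items; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)                  # compare co-rated items only
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(R, user, item):
    sims = np.array([cosine(R[user], R[v]) if v != user and R[v, item] > 0 else 0.0
                     for v in range(len(R))])
    if sims.sum() == 0:
        return 0.0
    return float(sims @ R[:, item] / sims.sum())  # similarity-weighted average

print(round(predict(R, user=0, item=2), 2))       # predicted rating for user 0, item 2

In practice the rating matrix is extremely sparse, which is why the similarity here is computed only over the items both users have rated.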
III. WEB PERSONALIZATION AND RELATED WORKS

A great deal of research has been conducted on personalized ontologies. Generally, personalization methodologies are divided into two complementary processes: (1) the collection of user information, used to describe the user's interests, and (2) inference over the gathered data to predict the content closest to the user's expectation. In the first case, user profiles can be used to enrich queries and to sort results at the user-interface level; in other techniques, they are used to infer relationships, as in social-based filtering and collaborative filtering. For the second process, information about users' navigation can be extracted from system log files, and some information retrieval techniques are based on extracting the user's contextual information.
Dai and Mobasher [5] proposed a web personalization framework that characterizes the usage profiles of a collaborative filtering system using ontologies. These profiles are transformed into domain-level aggregate profiles by representing each page with a set of related ontology objects. In this work, the mapping of content features to ontology terms is assumed to be performed either manually or using supervised learning methods. The defined ontology includes classes and their instances, so aggregation is performed by grouping together different instances that belong to the same class. The recommendations generated by the proposed collaborative system are in turn derived by binary matching of the current user visit, expressed as ontology instances, to the derived domain-level aggregate profiles; no semantic relations beyond hyperonymy/hyponymy are employed.
Acharyya and Ghosh [7] also propose a general personalization framework based on conceptual modeling of the users' navigational behavior. The proposed methodology involves mapping each visited page to a topic or concept, imposing a tree hierarchy (taxonomy) on these topics, and then estimating the parameters of a semi-Markov process defined on this tree based on the observed user paths. In this Markov-model-based work, the semantic characterization of the context is performed manually, and no semantic similarity measure is exploited to enhance the prediction process, except for generalizations/specializations of the ontology terms.
Middleton et al. [8] explore the use of ontologies in the user-profiling process within collaborative filtering systems. This work focuses on recommending academic research papers to the academic staff of a university. The authors represent the acquired user profiles using terms of a research-paper ontology (an is-a hierarchy), and research papers are also classified using ontological classes. In this hybrid recommender system, based on collaborative and content-based recommendation techniques, the content is characterized with ontology terms using document classifiers (so a manually labeled training set is needed), and the ontology is again used for making generalizations/specializations of the user profiles.
Kearney and Anand [9] use an ontology to calculate the impact of different ontology concepts on the users' navigational behavior (selection of items). They suggest that these impact values can be used to determine more accurately the distance between different users, as well as between user preferences and other items on the web site, two basic operations carried out in content-based and collaborative filtering recommendations. The similarity measure they employ is very close to the Wu & Palmer similarity measure. This work focuses on the way these ontological profiles are created, rather than on evaluating their impact on the recommendation process, which remains open for future work.
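Since the measure just mentioned is reported to be close to Wu & Palmer, a toy version is sketched below on a hypothetical is-a taxonomy. The concept names and the depth convention are illustrative assumptions; the measure scores two concepts by the depth of their lowest common subsumer relative to their own depths.

# Toy Wu & Palmer concept similarity on a small, invented is-a taxonomy.
PARENT = {"laptop": "computer", "desktop": "computer",
          "computer": "electronics", "camera": "electronics",
          "electronics": "root"}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def wu_palmer(c1, c2):
    p1, p2 = path_to_root(c1), path_to_root(c2)
    lcs = next(c for c in p1 if c in p2)      # lowest common subsumer
    depth = lambda c: len(path_to_root(c))    # depth counted to the root
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("laptop", "desktop"))   # 0.75: siblings under "computer"
print(wu_palmer("laptop", "camera"))    # ~0.57: related only via "electronics"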
IV. CONCLUSION

In this paper we have surveyed research in the area of personalization in web search. The study reveals great diversity in the methods used for personalized search. Although the World Wide Web is the largest source of electronic information, it lacks effective methods for retrieving, filtering, and displaying exactly the information that each user needs; hence, the task of retrieving only the required information keeps becoming more difficult and time-consuming.
To reduce information overload and create customer loyalty, Web Personalization, a significant tool that provides users with important competitive advantages, is required.


A Personalized Information Retrieval approach that is mainly based on end-user modeling increases user satisfaction, and personalizing web search results has been shown to greatly improve the search experience. This paper has also reviewed the various research activities carried out to improve the performance of the personalization process and of Information Retrieval systems.
REFERENCES
1. K. Sridevi and R. Umarani, "Web Personalization Approaches: A Survey," IJARCCE, Vol. 2, Issue 3, March 2013.
2. Adar, E., Karger, D., "Haystack: Per-User Information Environments," in Proceedings of the 8th International Conference on Information and Knowledge Management (CIKM), Kansas City, Missouri, November 2-6, 1999, 413-422.
3. Bamshad Mobasher, "Data Mining for Web Personalization," in The Adaptive Web, LNCS 4321, pp. 90-135, Springer-Verlag Berlin Heidelberg, 2007.
4. Indu Chawla, "An overview of personalization in web search," IEEE, 2011.
5. H. Dai, B. Mobasher, "Using Ontologies to Discover Domain-Level Web Usage Profiles," in Proc. of the 2nd Workshop on Semantic Web Mining, Helsinki, Finland, 2002.
6. D. Oberle, B. Berendt, A. Hotho, J. Gonzalez, "Conceptual User Tracking," in Proc. of the 1st Atlantic Web Intelligence Conf. (AWIC), 2003.
7. S. Acharyya, J. Ghosh, "Context-Sensitive Modeling of Web Surfing Behaviour Using Concept Trees," in Proc. of the 5th WEBKDD Workshop, Washington, August 2003.
8. S. E. Middleton, N. R. Shadbolt, D. C. De Roure, "Ontological User Profiling in Recommender Systems," ACM Transactions on Information Systems (TOIS), Vol. 22, No. 1, January 2004, 54-88.
9. P. Kearney, S. S. Anand, "Employing a Domain Ontology to gain insights into user behaviour," in Proc. of the 3rd Workshop on Intelligent Techniques for Web Personalization (ITWP 2005), Edinburgh, Scotland, August 2005.
10. http://en.wikipedia.org/wiki/Ontology
11. Bhaganagare Ravishankar, Dharmadhikari Dipa, "Web Personalization Using Ontology: A Survey," IOSR Journal of Computer Engineering (IOSRJCE), ISSN 2278-0661, Vol. 1, Issue 3 (May-June 2012), pp. 37-45, www.iosrjournals.org.
12. Cooley, R., Mobasher, B., Srivastava, J., "Web mining: Information and pattern discovery on the World Wide Web," in Proceedings of the 9th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'97), Newport Beach, CA, November 1997, 558-567.
13. Joshi, A., Krishnapuram, R., "On mining web access logs," in Proceedings of the ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD 2000), Dallas, Texas, May 2000.
14. F. Liu, C. Yu, and W. Meng, "Personalized web search by mapping user queries to categories," in Proceedings of CIKM 2002, 558-565.
15. Morris, M. R., "A survey of collaborative Web search practices," in Proc. of CHI '08, 2008, 1657-1660.
16. Jones, W. P., Dumais, S. T., and Bruce, H., "Once found, what next? A study of keeping behaviors in the personal use of web information," in Proceedings of ASIST 2002, 391-402.
17. Deerwester, S., Dumais, S., Furnas, G., Landauer, T. K., Harshman, R., "Indexing by Latent Semantic Analysis," Journal of the American Society for Information Science, 41(6), 1990, 391-407.
18. Dolog, P., and Nejdl, W., "Semantic Web Technologies for the Adaptive Web," in Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.), The Adaptive Web: Methods and Strategies of Web Personalization, LNCS 4321, Springer-Verlag, Berlin Heidelberg New York, 2007.


Automated Bus Monitoring and Ticketing System using RF Transceiver and RFID

S. DEEPIKA, Student, Information Technology, Velammal Engineering College, Surapet, Chennai - 600 066, deepu.neethu238@gmail.com

P. PRASANTH, Student, Information Technology, Velammal Engineering College, Surapet, Chennai - 600 066, pprasanth94@gmail.com

Abstract: Two major issues in today's cities are controlling traffic and the problems faced by passengers of public transport. This paper proposes a solution to the unawareness of people at the bus stop about departed and arriving buses, and of people in every corner of the city about the current status of the buses, using an RF transceiver, GSM, and Zigbee. In addition, an RFID bus ticketing system is proposed. The congestion and confusion at the bus stop and in buying tickets on buses can be avoided by applying this system. The RF bus tracking system tracks government buses to inform passengers about their current location, and the RFID ticketing system helps commuters and conductors resolve the problem of issuing the appropriate tickets instantly and automatically, with exact currency and without chaos. The bus ticketing system also includes a human detecting sensor to prevent malpractice and detect culprits, which improves accuracy.

The RFID-enabled smart card is used to pay the ticket fare. The purposes of the automated ticketing system are: to reduce the conflict between commuters and conductors regarding the issue of wrong tickets with improper change; to reduce the number of fare evaders and to find them; and to reduce the usage of paper in the form of tickets.
The RF transceiver uses RF modules for high-speed data transmission; the microelectronic circuits in the digital-RF architecture work at speeds up to 100 GHz. RF transceivers are available in various ranges and at various costs to suit the user. RFID (Radio Frequency Identification) uses three parts, namely the reader, the antenna, and the tag, and is classified into two types, active RFID and passive RFID. The tag is a small chip carrying a unique ID burnt in at the time of manufacture; data can be stored in the tag, the amount depending on the tag's capacity, and the reader reads the tag when it comes within range. GSM (Global System for Mobile Communications, originally Groupe Spécial Mobile) is a standard describing the protocols for second-generation (2G) digital cellular networks used by mobile phones. Zigbee is a protocol used to transfer data wirelessly over a greater distance than Bluetooth with minimal power usage.

Keywords: RF Transceiver, RFID, NFC, GSM, Zigbee, human detecting sensor.

I. INTRODUCTION

The automated bus monitoring system using an RF transceiver proposed in this paper is designed to track government buses (especially MTC buses). The purpose of tracking the government buses is to create an advanced bus transport system which informs people all around the city about the arrival, departure, and current status of the buses. This reduces traffic, congestion at bus stops, the waiting time of commuters at the bus stop, and unawareness about arriving and departed buses.

II. NECESSITY OF TECHNOLOGY

The Metropolitan Transport Corporation (MTC) runs an extensive city bus system consisting of 3280 buses on 643 routes, and moves an estimated 5.52 million passengers each day. Chennai's transportation infrastructure provides coverage and connectivity, but growing use has caused traffic congestion. The government has tried to address these problems by constructing grade separators and flyovers at major intersections, but this alone does not suffice to reduce traffic.
The automated bus ticketing system using RFID proposed in this paper is designed to collect the exact bus fare instantly and automatically. This system also makes it possible to pay the bus fare without handing over money during the journey, which resolves the problem of giving and collecting exact change.


Not only the government but also the people are affected by the congestion and traffic near the bus stops (fig.1). Infrequent commuters are unaware of the schedule of the bus for which they are waiting; in general, every commuter is unaware of the arriving and departed buses. This in turn paves the way for major congestion near the bus stops and traffic in the city.

III. SYSTEM OVERVIEW

A. Automated Bus Monitoring System
1) System Description: The ABMS consists of RF transceivers, a GSM modem, display boards, and a microcontroller. In the ABMS, both the bus and the bus stop are equipped with an RF transceiver, a microcontroller, and a display board, while a GSM modem is additionally installed at the bus stop. An RF transceiver and a display board are installed inside the bus, along with a microcontroller to control the entire system, while the bus stop is installed with an RF transceiver, a GSM module, and a display board, along with a microcontroller. If necessary, a Zigbee module can be installed at the bus stop to extend the range of the RF transceiver.
Fig.1 Crowded Bus Stop

When a bus, say Bus 1, nears a bus stop, say Bus Stop A, the RF transceivers in the bus and at the bus stop start communicating with each other and exchange their corresponding information. The microcontroller processes this information and sends the respective information to the display board and the GSM module. The display board at stop A displays the details of Bus 1 that has just arrived at the stop; the details may contain the name, number, type, source and destination, and arrival time of Bus 1. These details are then sent to the two successive bus stops, say B and C, using GSM. Stops B and C display the current status of Bus 1: that it has arrived at stop A and is expected in some (approximately) calculated time. Meanwhile, the RF transceiver at stop A transmits the name of the bus stop, which is displayed on the display board inside the bus; this helps the commuters inside the bus to know the stop they have reached. When the bus moves away from stop A, it goes out of range of the stop's RF transceiver, which makes the microcontroller at stop A change the displayed information to indicate that the bus is moving from A to B, while GSM makes the corresponding changes at stops B and C. When the bus reaches B, the same procedure follows as at A, and so on.
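The arrival/departure handling just described is essentially event-driven, and can be summarized in the Python sketch below. The stop list, function names, and message formats are our own illustrative assumptions, not the authors' firmware; `gsm` stands for any object offering a `send(stop, text)` method.

# Illustrative sketch of the ABMS update logic (names and message
# formats are assumptions, not the authors' firmware).
STOPS = ["A", "B", "C"]                          # ordered stops on the route

def next_two_stops(stop):
    i = STOPS.index(stop)
    return STOPS[i + 1:i + 3]                    # the two successive stops, if any

def on_rf_contact(bus, stop, display, gsm):
    # Called when the RF transceivers of the bus and the stop come into range.
    display[stop] = f"Bus {bus['number']} ({bus['route']}) has arrived"
    bus["display"] = f"Current stop: {stop}"     # board inside the bus
    for nxt in next_two_stops(stop):             # notify the two following stops
        gsm.send(nxt, f"Bus {bus['number']} arrived at {stop}; expected soon")

def on_rf_lost(bus, stop, display, gsm):
    # Called when the bus leaves the RF range of the stop.
    display[stop] = f"Bus {bus['number']} departed from {stop}"
    for nxt in next_two_stops(stop):
        gsm.send(nxt, f"Bus {bus['number']} left {stop}")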

In order to overcome this hitch, the technology comes into play through the system called the Automated Bus Monitoring System (ABMS), which intimates people about the current status of the buses, i.e., arriving or departed, through display boards and a mobile application, flawlessly using an RF transceiver and a GSM module. The government can also keep a record of bus timings and check whether the buses are operating as per schedule.
Ticketing in government buses has been carried out manually for several decades. A few years ago the MTC introduced the ETM (Electronic Ticketing Machine), which proved of little use within a short period. The major issue in these practices is that the manual work of the conductors delays the issue of tickets and leads to improper change. In many situations, especially during peak hours, buses are halted between bus stops by the conductor to issue tickets and collect fares, which has become a big hindrance to commuters. In addition, buses are halted by ticket-checking inspectors even during peak hours, creating yet more trouble; the fact that tickets are checked even at peak hours also means that many commuters travel without tickets. Therefore, to overcome these hitches faced by both the government and the commuters, the technology comes into play through the system called the Automated Bus Ticketing System (ABTS), in which the passenger pays the fare through a smart card and a virtual ticket is generated. This results in a cashless, paperless bus transport system with few or no ticket checkers.

2) Enhancement of the System: Even though this system helps the commuters, it focuses only on the commuters waiting at the bus stop. In order to make the details available at the bus stop usable by people all over the city, a

mobile application is developed in such a way that the user gets the necessary information about the current status of all the buses at the doorstep. The layout of the entire system is shown in fig.2.

B. Automated Bus Ticketing System
The ABTS comprises a smart card and two devices, namely the Human Detecting Sensor (HDS) and the Integrated Ticketing Machine (ITM) (fig.5 & fig.6); the ITM integrates an RFID reader, an NFC reader, LED indicators, a keypad, and a small display board. Smartcards are typically the size of a credit card and contain a microchip (in this case an RFID tag) that stores and transmits data using radio frequency identification (RFID), enabling the card to communicate with a device within eight centimeters without physical contact. Smartcards are able to store enough information to process monetary transactions and to profile a card holder's details for security purposes. Each smartcard thus corresponds to a unique user account that holds the money for the transactions, and the user is supposed to recharge the smart card with a minimum balance.

Fig.2 Layout of ABTS

Fig. 5 Sample ITM position

Fig.3 App Sample

Fig. 6 Sample ITM

1) Integrated Ticketing Machine: This is the heart of the system. Two ITMs (one for each entrance) are required for each bus. On entering the bus, the commuter shows the smart card to one of the ITMs; the RFID reader in the ITM detects the presence of the card, checks for the minimum balance, records the card number along with the boarding point of that particular commuter into an entry log, and then initially debits the ticket fare for the distance from the commuter's boarding point to the destination point of the bus.

Fig.4 App Sample

The application should be developed in such a manner that the current status of all the buses is available in one tab, while another tab provides a search facility to look up the required bus's status. There are many apps available with the entire database of the cities; for example, an iOS mobile app named ChennaiBus, available in Apple's App Store, has the entire database of Chennai's metropolitan buses.


The HDS differentiates the commuters' movement, i.e., entering and exiting (fig.9). This paves the way for counting the number of passengers inside the bus and keeping track of the number of commuters who entered at a particular bus stop, and it is used to check whether the commuters are paying the fare correctly. Once the bus halts at a bus stop, the HDS starts counting and storing the numbers of commuters entering and exiting the bus in its own entry log and exit log, respectively.

Fig.8 HDS at the entrance of a bus

Fig.7 ITM flowchart (Start → show the card to the Integrated Ticketing Machine (ITM) → the ITM checks for the minimum balance → if the balance is below the minimum, a red LED indicates insufficient balance → otherwise the ITM records the starting stage and debits the maximum ticket fare → on alighting, the card is shown to the reader again → the ITM calculates the ticket fare from the source to the destination stage and credits any remaining amount back to the card → Stop)

On reaching the destination point, the commuter is supposed to show the card once again to any one of the ITMs. The machine detects the card, looks up the card number in the entry log to retrieve the boarding point, stores the destination point in the exit log, calculates the ticket fare for the distance actually travelled, and finally credits the remaining amount of the initially debited fare back to the card. The working procedure of the ITM is described as a flow chart in fig.7.
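The debit-then-refund procedure mirrors the flow of fig.7 and can be captured in a few lines. In the Python sketch below, the stage numbering, per-stage fare, and minimum balance are invented purely for illustration; it is not the authors' implementation.

# Illustrative sketch of the ITM debit/credit logic described above.
# Stage numbers, fares, and the minimum balance are invented for the example.
MIN_BALANCE = 20
FARE_PER_STAGE = 2
LAST_STAGE = 10                                  # destination stage of the bus

entry_log = {}                                   # card_id -> boarding stage

def on_card_shown(card_id, balance, stage):
    """First tap debits the maximum fare; second tap refunds the difference."""
    if card_id not in entry_log:                 # boarding tap
        if balance[card_id] < MIN_BALANCE:
            return "RED LED: insufficient balance"
        entry_log[card_id] = stage
        balance[card_id] -= FARE_PER_STAGE * (LAST_STAGE - stage)
        return "GREEN LED: maximum fare debited"
    boarded = entry_log.pop(card_id)             # alighting tap
    balance[card_id] += FARE_PER_STAGE * (LAST_STAGE - stage)  # refund unused part
    return f"ticket: stages {boarded}-{stage}, fare {FARE_PER_STAGE * (stage - boarded)}"

balance = {"card42": 50}
print(on_card_shown("card42", balance, stage=3))   # boards at stage 3
print(on_card_shown("card42", balance, stage=7))   # alights at stage 7; net fare 8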

Fig.9 Counting people in HDS

As soon as the door closes, the HDS checks the number of passengers who entered and allots an Average Ticketing Time (ATT), the average time limit within which the passengers are supposed to pay the bus fare. After the ATT expires, the HDS compares its entry log with the ITM's entry log; any mismatch indicates the number of commuters who have not yet paid the bus fare.

2) Human Detecting Sensor: The human detecting sensor is mounted at the top of the entrance and the exit of the bus in a convenient manner (fig.8). This sensor is capable of detecting people passing beneath it and the direction of their movement.

It then compares its exit log with the ITM's exit log; any mismatch indicates the number of culprits inside the bus. The working procedure of the HDS is described as a flow chart in fig.10.
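The log comparison performed after the ATT reduces to a simple count check, as in the sketch below (the counts are invented for the example):

# Illustrative sketch of the HDS/ITM log comparison performed after the ATT.
def check_for_culprits(hds_entries, itm_entries, hds_exits, itm_exits):
    """Counts are compared after the Average Ticketing Time (ATT) expires."""
    unpaid = hds_entries - itm_entries    # boarded but never tapped in
    culprit = hds_exits - itm_exits       # left without tapping out
    return max(unpaid, 0), max(culprit, 0)

# Example: 12 people entered, 10 tapped in; 5 left, 5 tapped out.
print(check_for_culprits(12, 10, 5, 5))   # -> (2, 0): two travelling without tickets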

Switches are used to indicate the movement of the buses. We make use of a GSM module (SIM900) fitted with a SIM card to retrieve the current status of each bus, and an LCD (20×2) is used as a display board to show the status of each bus.

Fig.10 HDS flow chart (Start → count the people entering and exiting → allot the Average Ticketing Time (ATT) → wait till the ATT is over → compare the HDS's entry log with the ITM's entry log → if the counts of the two logs are not equal, indicate the number of commuters travelling without tickets → proceed)

Fig.11 Project Prototype
Fig.12 Bus Kit
Fig.13 Bus Stop Kit

NOTE: HDS is the general term; specifically, the device used here is known as a people counter.

IV. PROTOTYPE EXPERIMENTS AND RESULTS

As substantiation for the ABMS, a prototype has been developed using an RF transceiver, a GSM module, and PIC microcontrollers 16F887 and 16F883. The prototype is developed as two kits (fig.11): one is the bus kit (fig.12) and the other the bus stop kit (fig.13). Three bus stops and three buses have been used in this module.

Thus, with the help of the prototype, it has been clearly shown that the bus monitoring system can easily be implemented in real time exactly as stated in this paper. The automated bus ticketing system can also easily be applied in real time, owing to its use of very common components and an already existing technology.


V. MERITS AND DEMERITS

Result/Outcome of the Automated Bus Ticketing System:

TABLE 1: Merits and Demerits. The traditional ticketing system, the electronic ticketing system, and the proposed ABTS are compared on the following criteria: self-help/computerized operation; involuntary and spontaneous ticketing; paperless ticketing; eco-friendliness; user-friendliness; time consumption; always issuing appropriate tickets; always issuing the exact number of tickets; avoidance of misunderstanding; determination of the culprit; omission of the ticket inspector; economy; no necessity of carrying smartcards; and requiring money only for the distance to be travelled.

VI. CONCLUSION

It is believed that by the implementation of this system, problems such as underutilization of the bus fleet and long waiting times at the bus station will be reduced, so both passengers and bus station administrators will benefit from the real-time information provided. Resources should be integrated and coordinated based on RFID, RF transceivers, and GSM in the transport medium, which can easily meet the requirements of the public, including emergencies. Thus, by implementing this entire system, advanced bus transportation can be achieved, bringing the people of the city into a comfort zone. This system can be implemented efficiently in real time.

VII. REFERENCES

[1] http://www.tn.gov.in/district_details/535
[2] http://www.eurotech-inc.com/bus-passengercounter.asp
[3] http://www.eurotech-inc.com/automaticpassenger-counter.asp
[4] http://gpsmonitoring.trustpass.alibaba.com/product/112187149101822825/capnavi_bus_people_counter_solution.html
[5] http://www.alibaba.com/productdetail/Automatic-Passenger-Counting-SystemFor-Transit_11137078/showimage.html
[6] http://www.indiamart.com/exiontechnology/products.html
[7] http://www.pcb.its.dot.gov/factsheets/core.asp
[8] http://www.dnaindia.com/bangalore/reportrecycling-of-tickets-can-save-many-trees1745406
[9] http://timesofindia.indiatimes.com/city/chennai/400-passengers-travel-without-tickets-onMTC-buseseveryday/articleshow/4146022.cms
[10] http://jaroukaparade.blogspot.in/2012/02/bus-tickets-honestly-waste-of-good.html
[11] http://www.thehindu.com/news/cities/chennai/theproblem-of-issuing-tickets-onbuses/article5627863.ece
[12] http://www.complaintboard.in/complaintsreviews/tamil-nadu-transport-corporationl179177.html
[13] http://www.hindu.com/2009/05/21/stories/2009052159590400.htm
[14] 4Vol38No1
[15] F6 Hasan - RFID Ticketing
[16] researchpaper_Intelligent-Cars-using-RFIDTechnology
[17] Bus_Management_System_Using_RFID_In_WSN
[18] WCECS2012_pp1049-1054


Comparative Study: MIMO OFDM, CDMA-SDMA Communication Techniques

G. Manikandan¹, P. Mathu Meena²
¹Assistant Professor, ²Student B.E/ECE, Kodaikanal Institute of Technology
¹mrg.manikandan@gmail.com, ²mathumeenabe93@gmail.com


Abstract: This paper investigates CDMA-SDMA techniques combined with Orthogonal Frequency Division Multiplexing (OFDM). Space Division Multiple Access (SDMA) is a notable application of Multiple Input Multiple Output (MIMO) systems and is one of the most promising techniques for solving the capacity problem of wireless communication systems and achieving higher spectral efficiency, since it multiplexes signals based on their spatial signatures. On the other hand, most third-generation mobile phone systems use Code Division Multiple Access (CDMA) as their modulation technique. CDMA is not as complicated to implement as OFDM-based systems, but because CDMA has a wide bandwidth, it is difficult to equalize the overall spectrum: significant processing would be needed, since the signal is continuous rather than composed of discrete carriers, and spectrum is not as easy to aggregate as with OFDM. For this reason, CDMA is also investigated, so that the performance of OFDM-CDMA-SDMA can be compared. Various SDMA detection techniques are investigated, including linear detection schemes, minimum mean square error, ordered successive cancellation, and maximum likelihood methods. Promising results are obtained for enhancing spectral efficiency at the expense of computational complexity, which needs to be addressed.

Keywords: Space Division Multiple Access (SDMA); Code Division Multiple Access (CDMA); Orthogonal Frequency Division Multiplexing (OFDM); Multi-User Detection (MUD).

I. INTRODUCTION

Novel wireless communication and multimedia services are being introduced almost daily, and this spectacular progress is to a great extent due to continuous advances in electronic and micro-electronic technology; such advances have also been fostered by major theoretical developments. A long list of wireless air-interface protocol standards is in use around the world nowadays. The next-generation broadband wireless communication system (4G) will be able to provide users with wireless multimedia services such as high-speed wireless Internet access, wireless video, and mobile computing, pushing communication technologies towards higher speed and greater reliability [1].

In order to satisfy the future requirements of wireless access systems, Orthogonal Frequency Division Multiplexing (OFDM) and the Multiple Input Multiple Output (MIMO) technique are the most competitive technologies for future wireless communication systems. Moreover, practical communications systems have to support a multiplicity of users, and hence diverse schemes have been proposed for supporting multiple users, including Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Space Division Multiple Access (SDMA). However, all the performance gains from these technologies can be achieved only if channel state information is available at the receive side [2]; robust channel estimation and multi-user detection are required. Furthermore, an important problem in the design of digital communication receivers is the detection of the data from noisy measurements of the transmitted signals; designing a receiver which minimizes this error probability while remaining realistic from the computational-complexity point of view has thus attracted a lot of attention among researchers. Multi-User Detection (MUD) refers to the scenario in which a single receiver jointly detects multiple simultaneous transmissions. More generally, multi-user detection techniques apply to the joint detection of different signals transmitted over a MIMO channel; one example is channels in which multiple transmitted information streams are multiplexed over multiple transmit antennas, in which case the multiple "users" are the multiple information streams, even though the transmitted signal may originate from a single user [3]. A variety of MUD schemes, such as Minimum Mean-Square Error (MMSE) detectors, Successive Interference Cancellation (SIC), Parallel Interference Cancellation (PIC), Ordered Successive Interference Cancellation (OSIC), and Maximum Likelihood (ML) detection, may be invoked for the sake of separating the different users at the Base Station (BS).

SDMA techniques have the advantage of improving the capacity of wireless systems, at the expense of requiring robust detection at the receiver. This paper investigates multi-user detection methods and applies them to systems of different capacities, noting the computational-complexity trade-off of these detection techniques. The next section introduces the MIMO-OFDM technologies; in Section III, we investigate the multi-user detection methods and the effect of increasing the number of users in an SDMA-OFDM system.

Capacity of OFDM and CDMA with Multiple Cells

In any cellular system, interference from neighboring cells lowers the overall capacity of each cell. For conventional TDMA and FDMA systems, each cell must have a different operating frequency from its immediate neighbor cells, in order to reduce the amount of interference to an acceptable level.


The frequencies are reused in a pattern, with the spacing between cells using the same frequency determined by the reuse factor. The required frequency reuse factor depends on the interference tolerance of the transmission system. Analog systems typically require a carrier-to-interference ratio (C/I) of greater than 18 dB [7], which requires a reuse factor of 1/7 (see Figure 2(b)). Most digital systems only require a C/I of 12 dB, allowing a reuse factor of 1/3 to 1/4 (see Figure 2(a)). CDMA uses the same frequency in all cells, ideally allowing a reuse factor of 1 (see Figure 2(c)).

II. MIMO-OFDM TECHNOLOGIES

A. MIMO-Assisted OFDM
The MIMO scheme has attracted considerable interest as a means of enhancing the data-rate capability of a system. A particularly promising candidate for next-generation fixed and mobile wireless systems is the combination of MIMO technology with OFDM. MIMO systems can be used for beamforming, diversity combining, or spatial multiplexing; spatial multiplexing is the transmission of multiple data streams on multiple antennas in parallel, leading to a substantial increase in capacity [2].
A MIMO-OFDM system can achieve high data rates while providing better system performance by using both antenna diversity and frequency diversity, which makes it attractive for high-data-rate wireless applications. Initial field tests of broadband wireless MIMO-OFDM communication systems have shown that increased capacity, coverage, and reliability are achievable with the aid of MIMO techniques.

Fig 2: Frequency reuse patterns for (a) 3 frequencies (digital systems), (b) 7 frequencies (analog FDMA), (c) CDMA

B. SDMA-Based MIMO-OFDM Systems
Among the notable applications of MIMO systems are the Space Division Multiple Access (SDMA) techniques. The implementation of SDMA comes in two main scenarios: multi-user access and multiple-stream transmission. In the first scenario, multiple users share the uplink and downlink channels for connection to the base station; the second scenario is achieved by the Bell Labs Layered Space-Time (BLAST) technology, where multiple sub-streams of a single user's data are transmitted in the same frequency band. SDMA offers high spectral efficiency and enhances the effective transmission rate, and it is thus one of the most promising techniques for solving the capacity problem of wireless communication systems; theoretically, SDMA can be incorporated into any existing multiple-access standard at the cost of a limited increase in system complexity, while attaining a substantial increase in capacity [4]. More specifically, the exploitation of the spatial dimension, namely the so-called spatial signature, makes it possible to identify the individual users even when they are in the same time/frequency/code domains, thus increasing the system's capacity. To mitigate the channel impairments, OFDM can be used, which transforms a frequency-selective channel into a set of frequency-flat channels.

In practice, the frequency reuse efficiency of CDMA is low, as neighbouring cells cause interference, which reduces the user capacity of both; the frequency reuse factor for a CDMA system is about 0.65 [8]. Figure 3 shows the interference contributions from neighbouring cells, most of which come from the immediate neighbours of the cell.

Fig 3: Interference contributions from neighbouring cells in a CDMA system.

The cell capacity of a multi-cellular CDMA system equals the single-cell capacity reduced by the frequency reuse factor, and it is very low unless cell sectorization and voice activity detection are used: a straight CDMA system can only support somewhere between 5 and 11 users/cell/1.25 MHz, while voice activity detection and cell sectorization allow the capacity to be increased by up to 6.4 times, to somewhere between 30 and 70 users/cell/1.25 MHz. OFDM requires a frequency reuse pattern in a multi-cellular environment to reduce the level of inter-cellular interference; the required C/I must be greater than about 12 dB. This can be achieved with a frequency reuse factor of about 3, which should be easy to attain since cell sectorization can also be used to reduce the level of interference. This would result in a cell capacity for an OFDM system of approximately 128/3 ≈ 42.7 users/cell/1.25 MHz in a multi-cellular environment.

Fig. 1 illustrates the concept of SDMA systems, where each user, employing a single-transmit-antenna-aided Mobile Station (MS), simultaneously communicates with a BS equipped with an array of receiver antennas. In SDMA-OFDM systems, the signals transmitted by L simultaneous uplink mobile users, each equipped with a single transmit antenna, are received by the P different receiver antennas of the BS, as shown in Fig. 2. At the BS, Multi-User Detection (MUD) techniques are invoked for detecting the different users' transmitted signals with the aid of their unique, user-specific spatial signatures constituted by their Channel Impulse Responses (CIRs).


Linear detection methods treat all transmitted signals as interference except for the desired stream from the target transmit antenna; interference signals from the other transmit antennas are therefore minimized or nullified in the course of detecting the desired signal. These linear methods consider the input-output relation of a MIMO system as an unconstrained linear estimation problem, which can be solved using the Zero-Forcing (ZF) or Minimum Mean Square Error (MMSE) criterion [5].
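For concreteness, a minimal NumPy sketch of both linear detectors is given below for a toy 4×4 channel with BPSK symbols; the dimensions, noise level, and random seed are arbitrary illustrative choices. The ZF detector applies the pseudo-inverse of H, while the MMSE detector regularizes the inversion with the noise variance.

# Minimal ZF and MMSE linear MIMO detection sketch (toy 4x4 example).
import numpy as np

rng = np.random.default_rng(0)
NT = NR = 4
H = (rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=NT) + 0j           # BPSK symbols
sigma2 = 0.1                                         # noise variance
z = np.sqrt(sigma2 / 2) * (rng.normal(size=NR) + 1j * rng.normal(size=NR))
y = H @ x + z

# Zero-Forcing: invert the channel, ignoring the noise.
x_zf = np.linalg.pinv(H) @ y

# MMSE: balance interference suppression against noise enhancement.
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(NT)) @ H.conj().T
x_mmse = W @ y

print(np.sign(x_zf.real), np.sign(x_mmse.real), x.real)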

P-element antenna array

.....
BS

.....
MS2

MSL

x(1)
x(2)

JOINT CHANNEL ESTIMATION AND MULTI-USER


DETECTION

B. Multi-User Detection
Robust multi-user detection schemes are necessary in order to exploit the advantage of the SDMA technique for solving the capacity problem of wireless systems, since an increasing number of users degrades the performance of the system, particularly when the number of users exceeds the number of receive antennas.
Consider the NR × NT MIMO system, and let H denote the channel matrix whose entry hij is the channel gain between the ith transmit antenna and the jth receive antenna, j = 1, 2, ..., NR and i = 1, 2, ..., NT. The spatially multiplexed user data and the corresponding received signals are represented by x = [x1, x2, ..., xNT]T and y = [y1, y2, ..., yNR]T, respectively, where xi and yj denote the transmit signal from the ith user (each having a single transmit antenna) and the received signal at the jth receive antenna, respectively. Let zj denote the white Gaussian noise with a variance of σz² at the jth receive antenna, and let hi denote the ith column vector of the channel matrix H. The NR × NT MIMO system is then represented as

y = Hx + z = h1x1 + h2x2 + ... + hNT xNT + z        (1)

Figure 2. Schematic of the SDMA uplink MIMO channel model.
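To make the notation of equation (1) concrete, the following sketch instantiates the model for hypothetical dimensions (NT = NR = 4 and QPSK symbols; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
NT, NR = 4, 4        # single-antenna users and BS receive antennas
sigma_z = 0.1        # noise standard deviation (illustrative)

# Rayleigh-faded channel matrix H; column i is the spatial signature h_i.
H = (rng.standard_normal((NR, NT)) + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2)

# One QPSK symbol per user.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, NT)

# Received vector of equation (1): y = Hx + z.
z = sigma_z * (rng.standard_normal(NR) + 1j * rng.standard_normal(NR)) / np.sqrt(2)
y = H @ x + z

# The same y, written as the column expansion h_1 x_1 + ... + h_NT x_NT + z.
y_expanded = sum(H[:, i] * x[i] for i in range(NT)) + z
print(np.allclose(y, y_expanded))   # True
```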

Linear signal detection methods treat all transmitted signals as interference except for the desired stream from the target transmit antenna. Interference signals from the other transmit antennas are therefore minimized or nullified in the course of detecting the desired signal from the target transmit antenna. These linear methods consider the input-output relation of the MIMO system as an unconstrained linear estimation problem, which can be solved using the Zero-Forcing (ZF) or the Minimum Mean Square Error (MMSE) criterion [5].
Instead of forcing the interference terms to zero without considering the noise, as ZF detection does, the MMSE criterion minimizes the overall expected error by taking the presence of the additive noise into account. Linear detection schemes are simple, but unfortunately they do not provide near-optimal performance, especially when the channel matrix is near singular. Building on the linear detection methods, Successive Interference Cancellation (SIC) detects and cancels the streams successively, layer by layer. The algorithm first detects (using ZF or MMSE) an arbitrarily chosen data symbol, treating the other symbols as interference. The detected symbol is then cancelled from the received signal vector, and the procedure is repeated until all the symbols are detected. Compared to the linear detection schemes, SIC achieves an increase in diversity order with each iteration. The performance of the linear detection methods is worse than that of nonlinear receiver techniques, but they only require low-complexity hardware implementations.
One method used to improve the performance without increasing the complexity significantly is the Ordered Successive Interference Cancellation (OSIC) scheme [5], which improves on SIC by selecting the stream with the highest signal-to-interference-plus-noise ratio (SINR) at each detection stage. Fig. 3 illustrates the OSIC signal detection process for NT = 4 [6]. Let x(i) denote the symbol to be detected in the ith order, and let x'(i) denote the sliced value of x(i). Suppose that the MMSE method is used: the first stream is estimated using the MMSE criterion and then sliced to produce x'(1). The remaining signal in the first stage is formed by subtracting it from the received signal, that is


y'(1) = y − h(1) x'(1) = h(1)(x(1) − x'(1)) + h(2) x(2) + ... + h(NT) x(NT) + z        (2)

Figure 3. OSIC signal detection for four transmit antennas: the first stream is estimated (by MMSE) and sliced to x'(1); each later stage cancels the previously detected stream, y'(1) = y − h(1)x'(1), y'(2) = y'(1) − h(2)x'(2), y'(3) = y'(2) − h(3)x'(3), before the fourth stream estimate x'(4) is produced.

Figure 4. Performance of OSIC methods with different detection ordering (SINR-based, SNR-based, and column-norm-based): BER versus Eb/No (dB).
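A compact sketch of the staged cancellation in Fig. 3 is given below. MMSE weights are recomputed on the reduced system at every stage; for brevity, the stream picked at each stage is the one with the largest channel-column norm, a simplification of the SINR-based ordering discussed later (all parameters are illustrative):

```python
import numpy as np

def mmse_osic(y, H, sigma2, constellation):
    """Ordered SIC with MMSE weights, largest-column-norm-first ordering."""
    y = y.astype(complex).copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1], dtype=complex)
    for _ in range(H.shape[1]):
        Hr = H[:, remaining]
        # MMSE filter of the reduced system: W = (Hr^H Hr + sigma2 I)^-1 Hr^H
        W = np.linalg.inv(Hr.conj().T @ Hr + sigma2 * np.eye(Hr.shape[1])) @ Hr.conj().T
        k = int(np.argmax(np.linalg.norm(Hr, axis=0)))   # stream detected next
        est = W[k] @ y
        s = constellation[np.argmin(np.abs(constellation - est))]   # slicing
        x_hat[remaining[k]] = s
        y = y - Hr[:, k] * s        # cancellation: y'(i) = y'(i-1) - h(i) x'(i)
        remaining.pop(k)
    return x_hat

rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(mmse_osic(y, H, 0.05 ** 2, qpsk), x))  # usually True at this SNR
```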

Among the MUD techniques, the ML detection scheme achieves the optimal performance, coinciding with maximum a posteriori (MAP) detection when all the transmitted vectors are equally likely. The ML method calculates the Euclidean distance between the received signal vector and the product of the channel matrix with each possible transmitted signal vector, and picks the minimum [6]. The ML detection determines the estimate of the transmitted signal vector x as

x'ML = arg min_x ||y − Hx||²        (3)

where ||y − Hx|| corresponds to the ML metric. The number of candidate symbol vectors grows exponentially with NT and with the number of bits per constellation point; thus, with higher-order constellations and multiple transmit antennas (or a large number of users in an SDMA system), ML detection becomes computationally intensive. The simulation results below investigate the performance of the ML scheme and compare it to the OSIC method.
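A brute-force sketch of the rule in equation (3), enumerating every candidate vector (feasible only for small problems; parameters illustrative):

```python
import itertools
import numpy as np

def ml_detect(y, H, constellation):
    """Exhaustive ML detection: arg min over all x of the metric ||y - Hx||."""
    best, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=H.shape[1]):
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x)
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# The search space is |constellation| ** NT candidates: 4**4 = 256 for QPSK
# with four users, but 16**4 = 65536 for 16-QAM, and 16**6 ~ 1.7e7 for six
# users, which illustrates the exponential growth noted above.
```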

Returning to the OSIC procedure: if x(1) = x'(1), the interference is successfully canceled in the course of estimating x(2); however, if x(1) ≠ x'(1), error propagation is incurred, because the MMSE weight that was designed under the condition x(1) = x'(1) is used for estimating x(2). Owing to the error propagation caused by erroneous decisions in earlier stages, the order of detection has a significant influence on the overall performance of OSIC detection. There are three different methods of detection ordering. The first is SINR-based ordering, which detects first the signal with the highest post-detection signal-to-interference-plus-noise ratio (SINR); in the course of choosing the second detected symbol, the interference due to the first detected symbol is canceled from the received signals. The second method is SNR-based ordering, which uses the ZF weight and follows the same ordering procedure as the first method, but without the interference term. The third method is column-norm-based ordering, which reduces the ordering complexity by using the norms of the column vectors of the channel matrix: from the representation of the received signal in equation (1), the received signal strength of the ith transmitted signal is proportional to the norm of the ith column of the channel matrix, so the signals can be detected in decreasing order of the norms ||hi||. Here the NT norms are computed and sorted only once, and detection then proceeds in decreasing order of norm; the complexity is significantly reduced compared with the previous methods, since ordering is required only once [6].
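The one-off ordering step is a two-liner (sketch; the 4 × 4 Rayleigh channel is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

norms = np.linalg.norm(H, axis=0)          # the NT column norms ||h_i||
detection_order = np.argsort(norms)[::-1]  # detect in decreasing norm order
print(detection_order)
```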



Fig. 4 compares the error performance of OSIC detection for the three ordering methods (SINR-based, SNR-based, and column-norm-based). Four users transmit at the same time using the SDMA technique, each with a single transmit antenna (NT = 4), the number of receive antennas at the BS is NR = 4, and 16-QAM modulation is used. It is observed that SINR-based ordering achieves the best performance among the three methods. OSIC receivers reduce the probability of error propagation at the cost of slightly higher computational complexity than the SIC algorithm.


Figure 5. Performance comparison: ML vs. OSIC signal detection methods for NT = NR = 4.


Fig. 5 shows the performance of the ML method compared to OSIC for an SDMA system with 4 users, each with a single transmit antenna, NR = 4 receive antennas, and 16-QAM modulation; the ML detector clearly outperforms OSIC detection. ML achieves the optimal performance and serves as a reference for other detection methods; however, its complexity increases exponentially as the number of users increases.

The SDMA system addresses the capacity problem of wireless systems. However, when the number of users is higher than the number of receive antennas, the performance of the system degrades, and the multi-user detection method must be robust enough to restore it. The ML method improves the performance relative to the other methods at the cost of increased computational complexity, especially for a large number of users and higher-order modulation schemes. Fig. 6 shows the improvement achieved by the ML method in comparison to OSIC detection when the number of users is increased to 6 while the number of receive antennas remains 4: the ML method maintains a low bit error rate even when the number of users exceeds the number of receive antennas at the BS.

Figure 6. Performance comparison: ML vs. OSIC detection methods when the number of users equals 6 and NR = 4.

In order to reduce the computational complexity of the ML algorithm, a QR-decomposition-aided M-algorithm, named the QRM-ML algorithm, has been proposed. It performs signal detection sequentially in NT stages after QR decomposition. By appropriately choosing the parameter M, the QRM-ML algorithm can achieve performance similar to that of the ML algorithm with significantly reduced computational complexity. Assuming that the number of users (or transmit antennas, NT) and the number of receive antennas are equal, the QRM-ML detection method can be investigated for different cases.
Consider the QR decomposition of the channel matrix, that is, H = QR. Then the ML metric in equation (3) can be equivalently expressed as

||y − Hx|| = ||y − QRx|| = ||QH(y − QRx)|| = ||y' − Rx||        (4)

where y' = QH y. The performance of QRM-ML depends on the parameter M, the number of surviving candidates kept at each stage. As M increases, the performance approaches that of ML at the sacrifice of complexity [6]. Fig. 7 shows the performance of QRM-ML for M = 4 and 16; with M = 4 the performance degrades, with the benefit of reduced complexity.

Figure 7. The performance of QRM-ML signal detection using NT = NR = 4 with 16-QAM modulation.
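A sketch of this idea under the stated assumptions (H = QR, detection in NT stages, M surviving candidates per stage; with M equal to the full candidate count the search reduces to exhaustive ML):

```python
import numpy as np

def qrm_ml(y, H, constellation, M):
    """QRM-ML: QR decomposition plus an M-algorithm breadth-first search,
    keeping the M best partial candidates at each of the NT stages."""
    NT = H.shape[1]
    Q, R = np.linalg.qr(H)
    yp = Q.conj().T @ y              # y' = Q^H y; metric is ||y' - Rx||
    survivors = [(0.0, [])]          # (accumulated metric, symbols i..NT-1)
    for i in range(NT - 1, -1, -1):  # R is upper triangular: detect bottom-up
        expanded = []
        for metric, tail in survivors:
            for s in constellation:
                part = [s] + tail
                r = yp[i] - np.dot(R[i, i:], np.array(part))  # row-i residual
                expanded.append((metric + abs(r) ** 2, part))
        expanded.sort(key=lambda t: t[0])
        survivors = expanded[:M]     # M-algorithm pruning
    return np.array(survivors[0][1])

rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(qrm_ml(y, H, qpsk, M=4), x))   # usually True at this SNR
```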

In Direct Sequence Code Division Multiple Access (DS-CDMA) systems [1], various versions of the so-called Gaussian approximation are widely used for modeling the distribution of the Multiple Access Interference (MAI). A few examples are the Standard Gaussian Approximation (SGA) [2], [15], the Improved Gaussian Approximation (IGA) [2], [6], [9], [10], [12], [13], [15]-[17], the Simplified IGA (SIGA) [2], [6], [9], [11], [13], [15], and the Improved Holtzman Gaussian Approximation (IHGA) [11]. However, the accuracy of the various Gaussian approximation techniques depends on the specific configuration of the system; it is well known that they become less accurate when a low number of users is supported or when there is a dominant interferer [12].
Therefore, an accurate BER analysis dispensing with the above assumptions on the MAI distribution is desirable, and a number of accurate techniques have been developed, such as the series expansion [12], [13], [18]-[20], and the Fourier [2], [3], [11], [15], [18] and Laplace [21] transform based methods. The latter two lead to the Characteristic Function (CF) and Moment Generating Function (MGF) based approaches, which have been prevalent in the accurate BER analysis of communication systems.
The BER performance of DS-CDMA systems communicating over Additive White Gaussian Noise (AWGN) channels has been extensively studied [5]-[10], [12], [13], [15], [16], [18]-[20], [22]-[26], and there are numerous studies also of transmission over both Rayleigh [2], [12] and Nakagami-m [3], [4], [11], [17], [21] channels. Geraniotis and Pursley [18] were the first authors to investigate the accurate BER calculation of asynchronous DS-CDMA systems over AWGN channels using the CF approach; Cheng and Beaulieu then extended the results to both Rayleigh [2] and Nakagami-m [3] channels.
However, to the authors' best knowledge, the accurate BER analysis of asynchronous Ricean-faded DS-CDMA systems using random spreading sequences [2]-[17], [19], [20] is still an open problem. There are many propagation environments, such as microcellular urban and suburban land mobile, as well as picocellular indoor and factory scenarios, in which Line-Of-Sight (LOS) propagation paths exist between the transmitter and the receiver [27]. In the presence of a LOS component, the Ricean distribution, also known as the Nakagami-n distribution [28], [29], is a better model of the fading channel; the Ricean distribution reduces to the Rayleigh distribution when the energy of the LOS component becomes zero [27], [30]. The novel contribution of this paper is an accurate BER expression for asynchronous DS-CDMA systems in a Ricean fading environment, which requires only a single numerical integration when the hypergeometric functions of two variables [31]-[33] are used.
The organization of the remainder of this paper is as follows. In Section II an asynchronous DS-CDMA system using BPSK modulation is considered in the context of a Ricean fading channel. Then in Section III an accurate BER expression based on the characteristic function approach is derived for the BER calculation of the system using random spreading sequences.
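A small sketch of the Ricean amplitude model referenced above: a fixed LOS component plus diffuse scattering. The K-factor parameterization (K = LOS power over diffuse power) is an assumption here, since the paper does not define its parameterization at this point; setting K = 0 removes the LOS energy and recovers Rayleigh fading:

```python
import numpy as np

def ricean_samples(K, n, rng):
    """Amplitude samples of a Ricean channel with assumed K-factor K.
    Total average power is normalized to 1; K = 0 degenerates to Rayleigh."""
    los = np.sqrt(K / (K + 1))                     # deterministic LOS part
    diffuse = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    diffuse *= np.sqrt(1 / (2 * (K + 1)))          # scattered (Rayleigh) part
    return np.abs(los + diffuse)

rng = np.random.default_rng(4)
for K in (0.0, 5.0):                               # K = 0 -> Rayleigh fading
    a = ricean_samples(K, 200_000, rng)
    print(K, round(float(np.mean(a ** 2)), 3))     # average power ~ 1 in both cases
```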



IV. SIMULATION RESULTS


In this section, we present and discuss the numerical results of the BER performance of an asynchronous DS-CDMA system over a frequency-selective multipath Rayleigh fading channel. The numerical results are based on the standard Gaussian approximation (SGA) for the multipath and multiple access interference (MAI) (eq. (11)) with perfect power control. Each cell is equipped with a conventional correlation receiver, and the received power of the desired signal is normalized to 1.


Fig. 2 shows the BER performance over a frequency-selective multipath Rayleigh fading channel with perfect power control, as a function of the number of interfering cells Mc. In this simulation, the number of multipaths is set to Lk = 5, the signal-to-noise ratio (SNR) Eb/No = 20, the process gain N = 84, and = 1. As illustrated in Fig. 2, the BER grows as the number of interfering cells increases; it is evident that the performance of the DS-CDMA system depends on the number of interfering cells.

Fig. 2. BER performance over a frequency-selective multipath Rayleigh fading channel with perfect power control, as a function of the number of interfering cells, with Lk = 5, process gain N = 84, and SNR = 20.

Fig. 3 shows the BER performance over a frequency-selective multipath Rayleigh fading channel with perfect power control, as a function of the number of multipath components (Lk = 3, 5, and 10). The number of interfering cells is set to Mc = 4, the SNR = 20, the process gain N = 84, and = 1. From the figures, it is clear that varying the number of multipath components, Lk, has a significant influence on the BER performance using the SGA approximation.
Fig. 4 shows the BER performance over a frequency-selective multipath Rayleigh fading channel with perfect power control, as a function of the process gain (N = 32, 84, and ...). The number of interfering cells is set to Mc = 4, the SNR = 20, the number of multipath components is set to Lk = 4, and = 1. From the figures, it is clear that varying the process gain, N, has a significant influence on the BER performance using the SGA approximation.
Fig. 4 also shows the average BER performance obtained by simulation over a frequency-selective multipath Rayleigh fading and AWGN channel. From the simulation, we find that averaging over more experiments and using a larger symbol size produces results closer to the theoretical results. The symbol size used in this simulation is 10,000.
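The effect of averaging is the usual Monte-Carlo argument: the BER estimate tightens as the number of symbols and experiments grows. A generic sketch of such an estimator is shown below; the BPSK-over-flat-Rayleigh link is a stand-in for illustration, not the paper's exact DS-CDMA simulator:

```python
import numpy as np

def mc_ber(snr_db, n_symbols, n_trials, rng):
    """Monte-Carlo BER of coherent BPSK over flat Rayleigh fading; larger
    n_symbols and n_trials give a smoother estimate, as noted above."""
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(n_trials):
        bits = rng.integers(0, 2, n_symbols)
        s = 1 - 2 * bits                         # BPSK mapping 0 -> +1, 1 -> -1
        h = np.abs(rng.standard_normal(n_symbols) +
                   1j * rng.standard_normal(n_symbols)) / np.sqrt(2)
        n = rng.standard_normal(n_symbols) / np.sqrt(2 * snr)
        r = h * s + n                            # faded symbol plus noise
        errors += np.count_nonzero((r < 0).astype(int) != bits)
    return errors / (n_symbols * n_trials)

rng = np.random.default_rng(5)
print(mc_ber(20.0, 10_000, 10, rng))   # ~2.5e-3, near the Rayleigh/BPSK theory value
```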


V. CONCLUSION
In this paper, we considered an OFDM/SDMA/CDMA approach that couples the capabilities of these techniques. OFDM helps to enhance the system performance, while SDMA increases the spectral efficiency, and CDMA uses high-rate signature pulses to spread the signal bandwidth far beyond what is necessary for a given data rate. The performance of various multi-user detection schemes based on OFDM/SDMA/CDMA was investigated for different scenarios. The results show the importance of robust detection methods in improving the performance of the system, particularly when the SDMA technique is used to achieve higher spectral efficiency. It is found that CDMA performs well in a multi-cellular environment because a single frequency can be used in all cells; this improves its comparative performance against systems that require a cellular pattern of frequencies to reduce inter-cellular interference. ML is the optimal multi-user detection scheme, giving high performance in comparison to the OSIC method even when the number of users is higher than the number of receive antennas. Thus the ML method can support the SDMA technique; however, it is associated with large computational requirements, and the long running time of the algorithm usually rules out real-time implementation. One major area which has not been investigated is the set of problems that may be encountered when OFDM is used in a multi-user environment; one possible problem is that the receiver may require a very large dynamic range in order to handle the large signal-strength variation between users. Hardware implementation of the algorithm can reduce the running time considerably, and success in the real-time implementation of optimization techniques for SDMA can play a major role in next-generation communication systems.
REFERENCES
[1] L. Jie and W. Yiwen, "Adaptive resource allocation algorithm based on sequence strategy of modulated MIMO-OFDM for UWB communication system," in Proc. IEEE International Conference on Measuring Technology and Mechatronics Automation, 2010.
[2] L. Hanzo and T. Keller, OFDM and MC-CDMA, 2nd ed. West Sussex, United Kingdom: Wiley, 2006.
[3] M. L. Honig, Advances in Multiuser Detection. United States of America: Wiley, 2009.
[4] L. Hanzo, J. Akhtman, M. Jiang, and L. Wang, MIMO-OFDM for LTE, WiFi and WiMAX: Coherent versus Non-Coherent and Cooperative Turbo-Transceivers. UK: University of Southampton, 2010.
[5] R. Lupas and S. Verdú, "Linear multiuser detectors for synchronous code-division multiple-access channels," IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 123-136, Jan. 1989.
[6] Y. S. Cho, J. Kim, W. Y. Yang, and C. G. Kang, MIMO-OFDM Wireless Communications with MATLAB. Singapore: John Wiley & Sons (Asia) Pte Ltd, 2010.
[7] J. Proakis, Digital Communications. New York: McGraw-Hill, 2008.
[8] K. J. Kim and J. Yue, "Joint channel estimation and data detection algorithms for MIMO-OFDM systems," in Proc. 36th Asilomar Conference on Signals, Systems and Computers, pp. 1857-1861, Nov. 2002.
