
From The Desk of HOD

Keeping with the motto of Acquire Knowledge and Grow, and under the patronage of our Director, Dr. R.K. Agarwal, the department has published Vol. 7, No. 1 of its half-yearly journal. The aim of publishing the Journal of Computer Application (JCA) is to inculcate the habit of writing and reading technical papers among faculty and students. Topics and contents have been selected to educate our students on the current state of the art in commonly used technologies, presented in a simple manner without going through the mathematical details. The length of the topics has deliberately been kept very short, to accommodate most of the writers and to keep the interest of the readers as well. My sincere appreciation goes to all the writers, especially the students of the MCA department. Enjoy the reading, and kindly offer your valuable suggestions for improvement in our subsequent issues.

Looking forward to your support.

Prof. S. L. Kapoor

CONTENTS

S. No. TITLE Page No.

1 Pro*C Embedded SQL 5-7


Dr. B.K. Sharma, Professor, MCA department, AKGEC, GZB

2 ORIENTDB – Use for Handling Graph Database 8-9


Snehlata Kaul, Assistant Professor, AKGEC, Ghaziabad

3 Green Computing 10-12


Saroj Bala, Assistant Professor, AKGEC, Ghaziabad

4 Li-Fi (Light Fidelity) - The Future Technology in Wireless Communication 13-15


Harnit Saini, Assistant Professor, AKGEC, Ghaziabad

5 An Efficient Indexing Tree Structure For Multidimensional Data On Cloud 16-18


Mani Dwivedi, Assistant Professor, AKGEC, Ghaziabad

6 Mobile cloud computing Integration: Architecture, Applications, and Approaches 19-23


Anjali Singh, Assistant Professor, AKGEC, Ghaziabad

7 Towards Developing Reusable Software Components 24-26


Aditya Pratap Singh, Assistant Professor, AKGEC, Ghaziabad

8 Genetic Algorithm Framework for Parallel Computing Environments 27-29


Ruchi Gupta, Assistant Professor, AKGEC, Ghaziabad

9 A Survey on Big Data and Mining 30-32


Dheeraj Kumar Singh, Assistant Professor, AKGEC, Ghaziabad

10 Mobile Cyber Threats 33-37


Arpna Saxena, Assistant Professor, AKGEC, Ghaziabad

11 Applications of Palm Vein Authentication Technology 38-40


Indu Verma, Assistant Professor, AKGEC, Ghaziabad

12 PON Topologies for Dynamic Optical Access Networks 41-43


Sanjeev K. Prasad, Assistant Professor, AKGEC, Ghaziabad

13 An Overview of Semantic Search Systems 44-46


Dr. Pooja Arora, Assistant Professor, AKGEC, Ghaziabad

14 A Soft Introduction to Machine Learning 47-48


Anurag Sharma, Student, MCA Department

PRO*C EMBEDDED SQL
Dr. B.K. Sharma
Professor, MCA Department, AKGEC, GZB
Email : bksharma888@yahoo.com

Abstract – The Pro*C/C++ precompiler enables you to create applications that access your Oracle database whenever rapid development and compatibility with other systems are your priorities. The Pro*C/C++ programming tool enables you to embed Structured Query Language (SQL) statements in a C or C++ program. The Pro*C/C++ precompiler translates these statements into standard Oracle runtime library calls, then generates a modified source program that you can compile, link, and run in the usual way.

1. INTRODUCTION
Embedded SQL is a process of combining the computing power of a high-level language like C/C++ with the database manipulation capabilities of SQL. It allows you to execute any SQL statement from an application program. Oracle's embedded SQL environment is called Pro*C.

A Pro*C program is compiled in two steps. First, the Pro*C precompiler recognizes the SQL statements embedded in the program and replaces them with appropriate calls to the functions in the SQL runtime library. The output is pure C/C++ code with all the pure C/C++ portions intact. Then, a regular C/C++ compiler is used to compile the code and produce the executable.

Pro*C Syntax
All SQL statements need to start with EXEC SQL and end with a semicolon ";". The SQL statements can be placed anywhere within a C/C++ block, with the standard restriction that declarative statements do not come after executable statements. As an example:

    {
        int a;
        EXEC SQL SELECT salary INTO :a
            FROM Employee
            WHERE SSN = 876543210;
        printf("The salary is %d\n", a);
    }

Preprocessor Directives
The C/C++ preprocessor directives that work with Pro*C are #include and #if. Pro*C does not recognize #define. For example, the following code is invalid:

    #define THE_SSN 876543210
    EXEC SQL SELECT salary INTO :a
        FROM Employee
        WHERE SSN = THE_SSN;

2. HOST VARIABLES
Host variables are the key to the communication between the host program and the database. A host variable expression must resolve to an lvalue (i.e., it can be assigned). You can declare host variables according to C syntax, as you declare regular C variables. The host variable declarations can be placed wherever C variable declarations can be placed. The C datatypes that can be used with Oracle include:
- char
- char[n]
- int
- short
- long
- float
- double
- VARCHAR[n] - a pseudo-type recognized by the Pro*C precompiler. It is used to represent blank-padded, variable-length strings. The Pro*C precompiler converts it into a structure with a 2-byte length field and an n-byte character array.

2.1 Pointers
You can define pointers using the regular C syntax, and use them in embedded SQL statements. As usual, prefix them with a colon:

    int *x;
    /* ... */
    EXEC SQL SELECT xyz INTO :x FROM ...;

The result of this SELECT statement will be written into *x, not x.

Structures
Structures can be used as host variables, as illustrated in the following example:

    typedef struct {
        char name[21];    /* one greater than column length, for '\0' */
        int SSN;
    } Emp;
    /* ... */
    Emp bigshot;
    /* ... */
    EXEC SQL INSERT INTO emp (ename, eSSN)
        VALUES (:bigshot);

2.2 Arrays
Host arrays can be used in the following way:

    int emp_number[50];
    char name[50][11];
    /* ... */
    EXEC SQL INSERT INTO emp (emp_number, ename)
        VALUES (:emp_number, :name);

which will insert all the 50 tuples in one go. Arrays can only be single-dimensional. The example char name[50][11] would seem to contradict that rule; however, Pro*C actually considers name a one-dimensional array of strings rather than a two-dimensional array of characters. You can also have arrays of structures.

When using arrays to store the results of a query, if the size of the host array (say n) is smaller than the actual number of tuples returned by the query, then only the first n result tuples will be entered into the host array.

2.3 Indicator Variables
Indicator variables are essentially "NULL flags" attached to host variables. You can associate every host variable with an optional indicator variable. An indicator variable must be defined as a 2-byte integer (using the type short) and, in SQL statements, must be prefixed by a colon and immediately follow its host variable. Alternatively, you may use the keyword INDICATOR between the host variable and the indicator variable. Here is an example:

    short indicator_var;
    EXEC SQL SELECT xyz INTO :host_var:indicator_var
        FROM ...;
    /* ... */
    EXEC SQL INSERT INTO R
        VALUES (:host_var INDICATOR :indicator_var, ...);

You can use indicator variables in the INTO clause of a SELECT statement to detect NULLs or truncated values in the output host variables.
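As a further illustration (a sketch of ours, not from the original text, assuming the classic Oracle emp sample table with a nullable comm column), a program can test the indicator after the fetch: -1 signals a NULL, 0 a normal value, and a positive value gives the original length of a truncated string:

    short comm_ind;
    float comm;
    EXEC SQL SELECT comm INTO :comm:comm_ind
        FROM emp WHERE empno = 7499;
    if (comm_ind == -1)
        printf("commission is NULL\n");    /* -1: the column value was NULL */
    else
        printf("commission = %f\n", comm); /* 0: a valid value was fetched */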


3. DATATYPE EQUIVALENCING
Oracle recognizes two kinds of datatypes: internal and external. Internal datatypes specify how Oracle stores column values in database tables. External datatypes specify the formats used to store values in input and output host variables. At precompile time, a default Oracle external datatype is assigned to each host variable. Datatype equivalencing allows you to override this default equivalencing and lets you control the way Oracle interprets the input data and formats the output data.

The equivalencing can be done on a variable-by-variable basis using the VAR statement. The syntax is:

    EXEC SQL VAR <host_var> IS <type_name> [ (<length>) ];

For example, suppose you want to select employee names from the emp table, and then pass them to a routine that expects C-style '\0'-terminated strings. You need not explicitly '\0'-terminate the names yourself. Simply equivalence a host variable to the STRING external datatype, as follows:

    char emp_name[21];
    EXEC SQL VAR emp_name IS STRING(21);

The length of the ename column in the emp table is 20 characters, so you allot emp_name 21 characters to accommodate the '\0'-terminator. STRING is an Oracle external datatype specifically designed to interface with C-style strings. When you select a value from the ename column into emp_name, Oracle will automatically '\0'-terminate the value for you.

You can also equivalence user-defined datatypes to Oracle external datatypes using the TYPE statement. The syntax is:

    EXEC SQL TYPE <user_type> IS <type_name> [ (<length>) ] [REFERENCE];

You can declare a user-defined type to be a pointer, either explicitly, as a pointer to a scalar or structure, or implicitly as an array, and then use this type in a TYPE statement. In these cases, you need to use the REFERENCE clause at the end of the statement, as shown below:

    typedef unsigned char *my_raw;
    EXEC SQL TYPE my_raw IS VARRAW(4000) REFERENCE;
    my_raw buffer;
    /* ... */
    buffer = malloc(4004);

Here we allocate more memory than the type length (4000) because the precompiler also returns the length, and may add padding after the length in order to meet the alignment requirement on your system.

4. DYNAMIC SQL
While embedded SQL is fine for fixed applications, sometimes it is important for a program to dynamically create entire SQL statements. With dynamic SQL, a statement stored in a string variable can be issued. PREPARE turns a character string into a SQL statement, and EXECUTE executes that statement. Consider the following example:

    char *s = "INSERT INTO emp VALUES(1234, 'jon', 3)";
    EXEC SQL PREPARE q FROM :s;
    EXEC SQL EXECUTE q;

Alternatively, PREPARE and EXECUTE may be combined into one statement:

    char *s = "INSERT INTO emp VALUES(1234, 'jon', 3)";
    EXEC SQL EXECUTE IMMEDIATE :s;
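Dynamic statements can also carry placeholders whose values are supplied at execution time with the USING clause. The following sketch (ours, not from the original text, reusing the illustrative emp table from above) prepares the statement once and binds the input host variables when it runs:

    char *s = "INSERT INTO emp VALUES(:n, :name, 3)"; /* placeholders, not literals */
    int  n = 1234;
    char name[11] = "jon";
    EXEC SQL PREPARE q FROM :s;
    EXEC SQL EXECUTE q USING :n, :name; /* host variables bound positionally */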


Transactions
Oracle Pro*C supports transactions as defined by the SQL standard. A transaction is a sequence of SQL statements that Oracle treats as a single unit of work. A transaction begins at your first SQL statement. A transaction ends when you issue "EXEC SQL COMMIT" (to make permanent any database changes during the current transaction) or "EXEC SQL ROLLBACK" (to undo any changes since the current transaction began). After the current transaction ends with your COMMIT or ROLLBACK statement, the next executable SQL statement will automatically begin a new transaction. If your program exits without calling EXEC SQL COMMIT, all database changes will be discarded.

Error Handling
After each executable SQL statement, your program can find the status of execution either by explicit checking of SQLCA, or by implicit checking using the WHENEVER statement. These two ways are covered in detail below.

5. SQLCA
SQLCA (SQL Communications Area) is used to detect errors and status changes in your program. This structure contains components that are filled in by Oracle at runtime after every executable SQL statement.

6. WHENEVER STATEMENT
This statement allows you to do automatic error checking and handling. The syntax is:

    EXEC SQL WHENEVER <condition> <action>;

Oracle automatically checks SQLCA for <condition>, and if such a condition is detected, your program will automatically perform <action>. <condition> can be any of the following:
- SQLWARNING - sqlwarn[0] is set because Oracle returned a warning
- SQLERROR - sqlcode is negative because Oracle returned an error
- NOT FOUND - sqlcode is positive because Oracle could not find a row that meets your WHERE condition, or a SELECT INTO or FETCH returned no rows

Examples of the WHENEVER statement:

    EXEC SQL WHENEVER SQLWARNING DO print_warning_msg();
    EXEC SQL WHENEVER NOT FOUND GOTO handle_empty;

7. CONCLUSIONS
Pro*C is a precompiler that enables you to create applications that access your Oracle database. The tool lets you embed Structured Query Language (SQL) statements in a C or C++ program.

8. REFERENCES
[1] Hector Garcia-Molina, Jeff Ullman and Jennifer Widom, Database Systems: The Complete Book.
[2] Jeff Ullman and Jennifer Widom, A First Course in Database Systems.
    Gradiance SQL Tutorial
[3] https://docs.oracle.com/cd/B28359_01/appdev.111/b28427.pdf
[4] https://docs.oracle.com/cd/E11882_01/appdev.112/e10825.pdf

ABOUT THE AUTHORS
Dr. B. K. Sharma is a Professor and Dean Hostel of Ajay Kumar Garg Engineering College, Ghaziabad. He obtained his MCA degree from JNU, New Delhi, M.Tech. from Guru Gobind Singh Indraprastha University, Delhi and Ph.D. from Shobhit University, Meerut. His areas of specialization are Software Watermarking, Discrete Mathematics, Theory of Computation and Compiler Design. During his career of more than a decade in teaching, he has published many research papers in international/national journals and conferences. He has also published many books for engineering students.


ORIENTDB – USE FOR HANDLING GRAPH DATABASES
Snehlata Kaul
Assistant Professor, MCA Department
Email Id : sneha8kaul@yahoo.co.in

ABSTRACT – Data management is no longer just about managing text and data; it goes beyond that. At present we need to manage the multimedia, dynamic and swiftly evolving nature of data, which contains text, numbers, audio, video, graphics, images etc. Through this article I would like to discuss the multi-model database "OrientDB", which supports graph data.

1. INTRODUCTION
In computing, a graph database is a database that uses graph structures for semantic queries, with nodes, edges and properties to represent and store data. Graph databases employ nodes, properties, and edges. To manage this type of data, different graph database products are taking on a prominent role as well.

Neo4j, OrientDB, InfiniteGraph and AllegroGraph are some of the graph databases. Through this article I will try to discuss one of the latest graph databases, "OrientDB", the NoSQL database [2].

The aim of this article is to guide the reader through understanding the most interesting features that OrientDB brings to the table out of the box and how, melding them altogether, this database differs from traditional relational systems and other NoSQL products, be it document DBs like MongoDB or key-value stores like Redis or Memcache.

2. ORIENTDB – LET US START
OrientDB is a graph database written in Java, mainly developed by Luca Garulli, AssetData's CTO. It is a second-generation distributed graph database with the flexibility of documents in one product, with an open-source, commercially friendly license (Apache 2 license), which overcomes the lacking Big Data handling features of the first-generation graph databases.

OrientDB is incredibly fast: it can store 220,000 records per second on common hardware. Even for a document-based database, the relationships are managed as in graph databases, with direct connections among records [1].

2.1 Features of OrientDB
Fully transactional: supports ACID transactions, guaranteeing that all database transactions are processed reliably and that, in the event of a crash, all pending documents are recovered and committed.

Graph-structured data model: native management of graphs; fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open-source graph computing framework [2].

3. ORIENTDB – WORKING WITH GRAPHS
In graph databases, the database system organizes data into network-like structures consisting of vertices and edges. In the OrientDB Graph model, the database represents data through the concept of a property graph, which defines a vertex as an entity linked with other vertices, and an edge as an entity that links two vertices.

OrientDB ships with a generic vertex persistent class, called V, as well as a class for edges, called E. In effect, the Graph model database works on top of the underlying document model. But, in order to simplify this process, OrientDB introduces a new set of commands for managing graphs from the console [3].

4. ORIENTDB – THE GRAPH MODEL
A graph represents a network-like structure consisting of Vertices (also known as Nodes) interconnected by Edges (also known as Arcs). OrientDB's graph model is represented by the concept of a property graph, which defines the following:

Vertex - an entity that can be linked with other Vertices and has the following mandatory properties:
- unique identifier
- set of incoming Edges
- set of outgoing Edges

Edge - an entity that links two Vertices and has the following mandatory properties:
- unique identifier
- link to an incoming Vertex (also known as head)
- link to an outgoing Vertex (also known as tail)
- label that defines the type of connection/relationship between head and tail vertex

In addition to mandatory properties, each vertex or edge can also hold a set of custom properties. These properties can be defined by users, which can make vertices and edges appear similar to documents.

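To make this concrete, here is a small sketch in OrientDB's console SQL (the Person class, the name property and the values are invented for the example; V and E are the built-in vertex and edge classes mentioned above):

    CREATE CLASS Person EXTENDS V
    CREATE VERTEX Person SET name = 'Alice'
    CREATE VERTEX Person SET name = 'Bob'
    CREATE EDGE E FROM (SELECT FROM Person WHERE name = 'Alice')
                  TO (SELECT FROM Person WHERE name = 'Bob')

Because the graph layer sits on the document layer, vertices created this way are stored as documents of class Person, which is why vertices with custom properties appear similar to documents.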

5. ORIENTDB – GRAPH CONSISTENCY
Previously in OrientDB, graph consistency could be assured only by using transactions. The problems with using transactions for simple operations like the creation of edges are:
- Speed: a transaction has a cost in comparison with non-transactional operations. Furthermore, with 'remote' connections this means high latency.
- Management of optimistic retry at the application level.
- Low scalability on high concurrency (this will be resolved in OrientDB v3.0, where commits will not lock the database anymore).

OrientDB now provides a new mode to manage graphs without using transactions. It uses the Java class OrientGraphNoTx, or, via SQL, changing the global setting sql.graphConsistencyMode to one of the following values:

tx, the default, uses transactions to maintain consistency. This was the only available setting in earlier OrientDB releases.

notx_sync_repair avoids the use of transactions. Consistency, in case of a JVM crash, is guaranteed through a database repair operation, which runs at startup in synchronous mode. The database cannot be used until the repair is finished.

notx_async_repair also avoids the use of transactions. Consistency, in case of a JVM crash, is guaranteed through a database repair operation, which runs at startup in asynchronous mode. The database can be used immediately, as the repair procedure will run in the background.

Both the new modes notx_sync_repair and notx_async_repair will manage conflicts automatically, with a configurable RETRY (default = 50). In case changes to the graph occur concurrently, any conflicts are caught transparently by OrientDB and the operations are repeated. The operations that support the auto-retry are:
- CREATE EDGE
- DELETE EDGE
- DELETE VERTEX

6. CONCLUSION
OrientDB is fully transactional: it supports ACID transactions, guaranteeing that all database transactions are processed reliably and that, in the event of a crash, all pending documents are recovered and committed [4]. Its graph-structured data model provides native management of graphs and is fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open-source graph computing framework.

7. REFERENCES
[1] "Multi-Model Database", OrientDB Manual.
[2] "Popularity ranking of database management systems", db-engines.com, Solid IT, retrieved 2015-07-04.
[3] Apache Software Foundation, "Apache CouchDB", retrieved 15 April 2012.
[4] Oracle NoSQL High Availability.

ABOUT THE AUTHORS
Snehlata Kaul is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her MCA from Dr. B. R. Ambedkar Marathwada University, Aurangabad, Maharashtra, and M.Tech. from KSOU, Mysore. She has more than a decade of teaching experience. Her research areas include multi-agent systems, DBMS, ADBMS and SPM. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals and conferences.


GREEN COMPUTING
Saroj Bala
Assistant Professor, MCA Department
E-mail: saroj_kkr@rediffmail.com

ABSTRACT – During recent years, attention to Green Computing has moved research in energy-saving techniques from home computers to enterprise systems' client and server machines. Saving energy, or the reduction of carbon footprints, is one aspect of Green Computing. Research in the direction of Green Computing is more than just saving energy and reducing carbon footprints. This study provides a brief account of Green Computing. The emphasis of this study is on current trends in Green Computing, challenges in the field of Green Computing, and the future trends of Green Computing.

1. INTRODUCTION
The field of Green Computing encompasses a broad range of subjects, from new energy-generation techniques to the study of advanced materials to be used in our daily life. Green technology focuses on reducing the environmental impact of industrial processes and innovative technologies caused by the Earth's growing population. It has taken upon itself the goal of providing for society's needs in ways that do not damage natural resources. This means creating fully recyclable products, reducing pollution, proposing alternative technologies in various fields, and creating a center of economic activity around technologies that benefit the environment. The huge amount of computing equipment manufactured worldwide has a direct impact on environmental issues, and scientists are conducting numerous studies in order to reduce the negative impact of computing technology on our natural resources. A central point of research is testing and applying alternative nonhazardous materials in the products' manufacturing process.

The article is structured as follows: Section 2 discusses current trends, and Sections 3 and 4 discuss challenges and future trends respectively.

2. CURRENT TRENDS
Current trends of Green Computing are towards efficient utilization of resources. Energy is considered the main resource, and carbon footprints are considered the major threat to the environment. Therefore, the emphasis is to reduce energy utilization and carbon footprints and to increase the performance of computing. There are several areas where researchers are putting in a lot of effort to achieve the desired results:

1. Organizations are realizing that the source and amount of their energy consumption significantly contribute to greenhouse gas emissions. In response to this finding, organizations are currently using the following equation:

   Reduced energy consumption =
   Reduced greenhouse gas emissions =
   Reduced operational costs for the data center

   It means that adopting fewer and more energy-efficient systems, while refactoring application environments to make optimal use of physical resources, is the best architectural model.

2. Based on Gartner estimations, over 133,000 PCs are discarded by U.S. homes and businesses every day, and less than 10 percent of all electronics are currently recycled. The majority of countries around the world require electronic companies to finance and manage recycling programs for their products, especially under-developed countries. Green Computing must take the product life cycle into consideration, from production to operation to recycling. E-waste is a manageable piece of the waste stream, and recycling e-waste is easy to adopt. Recycling computing equipment keeps materials such as lead and mercury in use and enables the replacement of equipment that would otherwise have been newly manufactured. The reuse of such equipment saves energy and reduces the environmental impact that can be due to electronic waste.

3. Currently, much of the emphasis of the Green Computing area is on data centers, as data centers are known for their energy hunger and wasteful energy consumption. With the purpose of reducing energy consumption in data centers, it is worthwhile to concentrate on the following:
   - Information systems – efficient and right-sized information systems for business needs are key in building green data centers.
   - Cooling systems – the cooling system should be designed in such a way that it is expandable as the need for cooling dictates.
   - A standardized environment for equipment is a must for data center air management and cooling systems.
   - Consider initial and future loads when designing and selecting data center electrical system equipment.

4. Virtualization is a trend of Green Computing; it offers virtualization software as well as management software for virtualized environments. Virtualization runs fewer systems at higher levels of utilization. Virtualization allows full utilization of computer resources and benefits in:
   - reduction of the total amount of hardware;
   - powering off idle virtual servers to save resources and energy; and
   - reduction in total space, air and rent requirements, which ultimately reduces cost.

5. Another approach to promote Green Computing and save the environment is to introduce policies all around the world, so that companies design products to receive an eco-label. There are several organizations in the world which support "eco-label" IT products. These organizations provide certificates to IT products based on factors including design for recycling, recycling systems, noise, energy consumption etc.

3. CHALLENGES
According to researchers, in the past the focus was on computing efficiency and the cost associated with IT equipment, while infrastructure services were considered low-cost and readily available. Now infrastructure is becoming the bottleneck in IT environments, and the reason for this shift is growing computing needs, energy cost and global warming. This shift is a great challenge for the IT industry. Therefore, researchers are now focusing on cooling systems, power and data center space. At one extreme it is the processing power that is important to business, and at the other extreme it is the drive and challenge of environment-friendly systems and infrastructure limitations. Green Computing challenges are not only for IT equipment users but also for IT equipment vendors. Several major vendors have made considerable progress in this area; for example, Hewlett-Packard recently unveiled what it calls "the greenest computer ever", the HP rp5700 desktop PC. The HP rp5700 exceeds U.S. Energy Star 4.0 standards, has an expected life of at least five years, and 90% of its materials are recyclable. Dell is speeding up its programs to reduce hazardous substances in its computers, and its new Dell OptiPlex desktops are 50% more energy-efficient than similar systems manufactured in 2005; credit goes to more energy-efficient processors, new power management features, and other related factors. IBM is working on technology to develop cheaper and more efficient solar cells, plus many other solutions from IBM to support sustainable IT. According to researchers of Green Computing, the following are a few prominent challenges that Green Computing is facing today:
- equipment power density / power and cooling capacities;
- increase in energy requirements for data centers and growing energy cost;
- control of the increasing requirements for heat-removing equipment, which increase because of the increase in total power consumption by IT equipment;
- equipment life cycle management – cradle to grave; and
- disposal of electronic waste.

4. FUTURE TRENDS
The future of Green Computing is going to be based on efficiency, rather than reduction in consumption. The primary focus of Green IT is the organization's self-interest in energy cost reduction, at data centers and at desktops, the result of which is a corresponding reduction in carbon generation. The secondary focus of Green IT needs to be beyond energy, on innovation and on improving alignment with overall corporate social responsibility efforts. This secondary focus will demand the development of Green Computing strategies. The idea of sustainability addresses the subject of business value creation while ensuring that long-term environmental resources are not impacted. There are a few efforts which all enterprises are supposed to take care of. In future, certifications together with recommendations and government regulations will put more pressure on vendors to use green technology and reduce the impact on the environment. Cloud computing is an energy-efficient technology for ICT, provided that its potential for significant energy savings, which has so far focused only on hardware aspects, can be fully explored with respect to system operation and networking aspects as well. Cloud computing results in better resource utilization, which is good for the sustainability movement for green technology. Product durability and/or longevity is one of the best approaches towards achieving Green Computing objectives. A long product life allows more utilization of products and puts a control on unnecessary manufacturing of products. It is obvious that government regulations will push product vendors to make more efforts to increase product life.

Power management is proving to be one of the most valuable and clear-cut techniques in the near future to decrease energy consumption. IT departments with a focus on saving energy can decrease use with a centralized power management tool. One of the areas where Green Computing can grow is sharing and efficiently using the unused resources on idle computers. Leveraging the unused computing power of modern machines to create an environmentally proficient substitute for traditional desktop computing is a cost-effective option. Intelligent compression techniques can be used to compress the data, and eliminating duplicates helps in cutting data storage requirements.

5. CONCLUSION
Technology is an active contributor in achieving the goals of Green Computing. The IT industry is putting effort into all its sectors to achieve Green Computing. Equipment recycling, reduction of paper usage, virtualization, cloud computing, power


management and green manufacturing are the key initiatives towards Green Computing. Current challenges to achieving Green Computing are enormous, and the impact is on computing performance. Government regulations are pushing vendors to act green, behave green, go green, think green, use green and, no doubt, to reduce energy consumption as well. All these efforts are still in limited areas, and current efforts are mainly to reduce energy consumption and e-waste, but the future of Green Computing will depend on efficiency and green products. Future work in the Green Computing discipline will also rely on research work in academics, since this is an emerging discipline and much more needs to be done. There is a need for more research in this discipline, especially within the academic sector.

REFERENCES
[1] Tariq Rahim Soomro and Muhammad Sarwar, "Green Computing: From Current to Future Trends", International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, Vol. 6, No. 3, pp. 326-329, 2012.
[2] Pushtikant Malviya and Shailendra Singh, "A Study about Green Computing", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 6, pp. 790-794, June 2013.

ABOUT THE AUTHORS
Saroj Bala is working as an Assistant Professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her MCA from Punjabi University, Patiala, and B.Sc. from Kurukshetra University, Kurukshetra. She has over 17 years of teaching experience. Her research areas include data clustering, swarm intelligence, image processing and green computing. She has attended several seminars, workshops and conferences at various levels. She has published many papers in national and international journals.

LI-FI (LIGHT FIDELITY)-THE FUTURE TECHNOLOGY IN
WIRELESS COMMUNICATION
Harnit Saini
Assistant Professor, MCA Department, AKGEC, GZB
E-mail: harnit_saini@yahoo.com

Abstract – Whether you're using wireless internet in a coffee shop, stealing it from the guy next door, or competing for bandwidth at a conference, you have probably gotten frustrated at the slow speeds you face when more than one device is tapped into the network. As more and more people and their many devices access wireless internet, clogged airwaves are going to make it more and more difficult to get a reliable signal. One German physicist, Harald Haas, has come up with a solution he calls "data through illumination" – taking the fibre out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow. It's the same idea as behind infrared remote controls, but far more powerful. Haas says his invention, which he calls D-Light, can produce data rates faster than 10 megabits per second, which is speedier than your average broadband connection. He envisions a future where data for laptops, smart phones, and tablets is transmitted through the light in a room. And security would be a snap – if you can't see the light, you can't access the data.

1. INTRODUCTION
In simple terms, Li-Fi can be thought of as a light-based Wi-Fi. That is, it uses light instead of radio waves to transmit information. And instead of Wi-Fi modems, Li-Fi would use transceiver-fitted LED lamps that can light a room as well as transmit and receive information. Since simple light bulbs are used, there can technically be any number of access points. This technology uses a part of the electromagnetic spectrum that is still not greatly utilized: the visible spectrum. Light has in fact been very much a part of our lives for millions and millions of years and does not have any major ill effect. Moreover, there is 10,000 times more space available in this spectrum, and just counting on the bulbs in use, it also multiplies to 10,000 times more availability as an infrastructure, globally.

It is possible to encode data in the light by varying the rate at which the LEDs flicker on and off to give different strings of 1s and 0s. The LED intensity is modulated so rapidly that human eyes cannot notice, so the output appears constant.

More sophisticated techniques could dramatically increase VLC data rates. Teams at the University of Oxford and the University of Edinburgh are focusing on parallel data transmission using arrays of LEDs, where each LED transmits a different data stream. Other groups are using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel.

Li-Fi, as it has been dubbed, has already achieved blisteringly high speeds in the lab. Researchers at the Heinrich Hertz Institute in Berlin, Germany, have reached data rates of over 500 megabits per second using a standard white-light LED. Haas has set up a spin-off firm to sell a consumer VLC transmitter that is due for launch next year. It is capable of transmitting data at 100 Mbit/s - faster than most UK broadband connections.

2. HOW LI-FI WORKS?
Li-Fi is typically implemented using white LED light bulbs at the downlink transmitter. These devices are normally used for illumination only, by applying a constant current. However, by fast and subtle variations of the current, the optical output can be made to vary at extremely high speeds.

This very property of the optical output is used in the Li-Fi setup. The operational procedure is very simple: if the LED is on, you transmit a digital 1; if it's off, you transmit a 0. The LEDs can be switched on and off very quickly, which gives nice opportunities for transmitting data. Hence, all that is required is some LEDs and a controller that codes data into those LEDs. All one has to do is vary the rate at which the LEDs flicker depending upon the data we want to encode.

Further enhancements can be made to this method, like using an array of LEDs for parallel data transmission, or using mixtures of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel. Such advancements promise a theoretical speed of 10 Gbps, meaning one can download a full high-definition film in just 30 seconds.
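As a toy illustration of this on-off keying idea (a sketch of ours, not an actual Li-Fi driver; led_set() stands in for whatever hardware call switches the LED), each bit of a byte simply maps to the LED being on or off for one symbol period:

    #include <stdio.h>

    /* Hypothetical hardware hook: 1 = LED on, 0 = LED off. */
    static void led_set(int on) { printf("%d", on); }

    /* Transmit one byte, most significant bit first: on = 1, off = 0. */
    static void transmit_byte(unsigned char b)
    {
        for (int i = 7; i >= 0; i--)
            led_set((b >> i) & 1); /* hold for one symbol period on real hardware */
    }

    int main(void)
    {
        transmit_byte('H'); /* emits the bit pattern 01001000 */
        return 0;
    }

A real transmitter would flicker millions of times per second, far too fast for the eye to notice, which is why the lamp still appears to glow steadily.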


To further get a grasp of Li-Fi, consider an IR remote. It sends a single data stream of bits at the rate of 10,000-20,000 bps. Now replace the IR LED with a light box containing a large LED array.

3. APPLICATIONS OF LI-FI TECHNOLOGY

3.1 You Might Just Live Longer
For a long time, medical technology has lagged behind the rest of the wireless world. Operating rooms do not allow Wi-Fi over radiation concerns, and there is also that whole lack of dedicated spectrum. While Wi-Fi is in place in many hospitals, interference from cell phones and computers can block signals from monitoring equipment. Li-Fi solves both problems: lights are not only allowed in operating rooms, but tend to be the most glaring (pun intended) fixtures in the room. And, as Haas mentions in his TED Talk, Li-Fi has 10,000 times the spectrum of Wi-Fi, so maybe we can, I dunno, delegate red light to priority medical data. Code Red!

3.2 Airlines
Airline Wi-Fi. Ugh. Nothing says captive audience like having to pay for the "service" of dialup-speed Wi-Fi on the plane. And don't get me started on the pricing. The best I've heard so far is that passengers will "soon" be offered a "high-speed like" connection on some airlines. United is planning on speeds as high as 9.8 Mbps per plane. Uh, I have twice that capacity in my living room. And at the same price as checking a bag, I expect it. Li-Fi could easily introduce that sort of speed to each seat's reading light. I'll be the guy WoWing next to you. It's better than listening to you tell me about your wildly successful son, ma'am.

3.3 Smarter Power Plants
Wi-Fi and many other radiation types are bad for sensitive areas, like those surrounding power plants. But power plants need fast, inter-connected data systems to monitor things like demand, grid integrity and (in nuclear plants) core temperature. The savings from proper monitoring at a single power plant can add up to hundreds of thousands of dollars. Li-Fi could offer safe, abundant connectivity for all areas of these sensitive locations. Not only would this save money related to currently implemented solutions, but the draw on a power plant's own reserves could be lessened if they haven't yet converted to LED lighting.

3.4 Undersea Awesomeness
Underwater ROVs, those favourite toys of treasure seekers and James Cameron, operate from large cables that supply their power and allow them to receive signals from their pilots above. ROVs work great, except when the tether isn't long enough to explore an area, or when it gets stuck on something. If their wires were cut and replaced with light - say from a submerged, high-powered lamp - then they would be much freer to explore. They could also use their headlamps to communicate with each other, processing data autonomously and referring findings periodically back to the surface, all the while obtaining their next batch of orders.

3.5 It Could Keep You Informed and Save Lives
Say there's an earthquake in New York. Or a hurricane. Take your pick - it's a wacky city. The average New Yorker may not know what the protocols are for those kinds of disasters. Until they pass under a street light, that is. Remember, with Li-Fi, if there's light, you're online. Subway stations and tunnels, common dead zones for most emergency communications, pose no obstruction. Plus, in less stressful times, cities could opt to provide cheap high-speed Web access to every street corner.

4. ADVANTAGES OF LI-FI
- Li-Fi can solve problems related to the insufficiency of radio-frequency bandwidth, because this technology uses the visible light spectrum, which has still not been greatly utilized.
- High data transmission rates of up to 10 Gbps can be achieved.
- Since light cannot penetrate walls, it provides privacy and security that Wi-Fi cannot.
- Li-Fi has low implementation and maintenance costs.
- It is safe for humans, since light, unlike radio frequencies, cannot penetrate the human body. Hence, concerns of cell mutation are mitigated.

5. DISADVANTAGES OF LI-FI
- Light can't pass through objects.
- A major challenge facing Li-Fi is how the receiving device will transmit back to the transmitter.
- High installation cost of the VLC systems.
- Interference from external light sources like sunlight and normal bulbs, and from opaque materials.

6. CONCLUSION
The possibilities are numerous and can be explored further. If this technology can be put into practical use, every bulb can be used as something like a Wi-Fi hotspot to transmit wireless data, and we will proceed toward a cleaner, greener, safer and brighter future.

The concept of Li-Fi is currently attracting a great deal of interest, not least because it may offer a genuine and very efficient alternative to radio-based wireless. As a growing number of people and their many devices access wireless

internet, the airwaves are becoming increasingly clogged, making it more and more difficult to get a reliable, high-speed signal. Li-Fi may solve issues such as the shortage of radio-frequency bandwidth and also allow internet access where traditional radio-based wireless isn't allowed, such as in aircraft or hospitals. One of the shortcomings, however, is that it only works in direct line of sight.

REFERENCES
[1] seminarprojects.com/s/seminar-report-on-lifi
[2] http://en.wikipedia.org/wiki/Li-Fi
[3] http://teleinfobd.blogspot.in/2012/01/what-is-lifi.html
[4] technopits.blogspot.com; technology.cgap.org/2012/01/11/a-lifi-world/
[5] www.lificonsortium.org/
[6] the-gadgeteer.com/2011/08/29/li-fi-internet-at-thespeed-of-light

ABOUT THE AUTHORS
Harnit Saini did her Bachelor of Computer Applications at Kanya Maha Vidyalya, Jalandhar, Punjab, affiliated to Guru Nanak Dev University, Amritsar, Punjab, in the year 2002. She did her Master of Computer Applications with honours at Punjab Institute of Management and Technology, Mandi Gobindgarh, Punjab, affiliated to Punjab Technical University, Jalandhar, Punjab, in the year 2005. She did her Master of Technology degree in Computer Science and Engineering at Ajay Kumar Garg Engineering College, Ghaziabad. She attended a national conference on Development of Reliable Information Systems, Techniques and Related Issues during her M.Tech. at Ajay Kumar Garg Engineering College, Ghaziabad in February 2012. She also attended a workshop on Formal Languages, Automata Theory and Computations at Ajay Kumar Garg Engineering College, Ghaziabad in April 2012. She was also in the organizing team of the national conference DRISTI-2012 and the national seminar CYST-2013 held at Ajay Kumar Garg Engineering College, Ghaziabad. She has published two papers during her M.Tech. in international journals. She is an active member of IEEE. She possesses good moral values and calmness. She is ready to face challenges at every moment of life. Faith in God is her biggest strength.


AN EFFICIENT INDEXING TREE STRUCTURE FOR MULTIDIMENSIONAL DATA ON CLOUD
Mani Dwivedi
Assistant Professor, AKGEC, Ghaziabad
dwivedimani@gmail.com

Abstract – Nowadays, cloud computing, which provides storage resources on demand, has become increasingly important. Cloud computing has attracted much attention in industrial and research areas for its advantages such as high availability, high reliability and cost-saving to business organizations. However, multidimensional data indexing remains a big challenge for cloud computing, because of the inefficiency in storage and search caused by complicated existing index structures, which greatly limits the scalability of applications and the dimensionality of data to be indexed. A novel index scheme with a simple tree structure for multidimensional data indexing (SDI) in cloud systems has been proposed, which overcomes the root-bottleneck problem existing in most other tree-based multidimensional indexing schemes for cloud data management. Extensive study verifies the superiority of SDI in both search and storage management performance.

1. INTRODUCTION
Cloud computing is the technology used to access remotely stored data through the internet. It protects the data from disasters like earthquakes, tsunamis, cyclones, fire etc. As cloud computing becomes more prevalent, more information is being centralized into the cloud. Data owners are relieved from the burden of data storage and maintenance and enjoy on-demand, high-quality data service. The customers of the cloud now have to be secured against the cloud service providers themselves, because the providers can leak information to prohibited entities or get hacked. Though current cloud systems have achieved great success in file sharing and file management with the help of mature technologies such as keyword-based search and one-dimensional data indexing, extending cloud technologies to applications with more complicated data management tasks is nontrivial. There are several difficulties in implementing multidimensional data indexing and supporting multidimensional complex/similarity queries, which can be either k-nearest-neighbor (KNN) queries or range queries.

Almost every existing cloud computing environment employs a one-dimensional identifier (or ID, for short) space. The only exception protocol, CAN [2], uses a low-dimensional torus as the topology of the ID space. The dimensionality of the ID space usually cannot be matched with the dimensionality of the data to be indexed. Apparently, these distributed-hash-table (DHT) based methods cannot be used directly for our purpose. Another natural choice is to extend the tree-based indexing of centralized databases or traditional distributed databases with a limited number of nodes. Essentially, these methods imitate a multidimensional tree index with an additional ring-based overlay linking all nodes in the tree together.

A tree-structured overlay network should be used in such a way that additional links are added with care. With appropriately designed algorithms, a simple tree-structured overlay network with fewer links can be more efficient than a complex one in terms of not only network maintenance but also multidimensional data search. In this paper, a novel simple tree structure for multidimensional data indexing (SDI) is presented, overcoming the problems of the VBI-tree [1].

2. SDI STRUCTURE
In order to overcome the shortcomings of the VBI-Tree, SDI is put forward, as shown in Fig. 1. Each routing node may "link" to five kinds of

Fig. 1. SDI Structure

nodes, if any: one parent, two children, two adjacent routing nodes, neighbor routing nodes, and one ancestor node. Compared with the VBI-Tree, SDI defines new ancestor links and different adjacent links, but removes the upside path. If all data nodes, which are leaves, are removed, then a routing tree with only routing nodes is created. By an inorder traversal of this routing tree, we create an adjacent link between any two neighboring nodes, as shown in Fig. 1. Given a node x, the nodes immediately prior to and after it, connected by the adjacent link, are the left and right adjacent nodes, respectively. An adjacent link will always connect to one LRN (Leaf Routing Node) at one end. The ancestor link is distributed by the lower-level routing node to its specific higher-level routing node descendants, which are at least two levels higher and lie on the left (right) child branch but at the right (left) most positions at each level. The ancestor link brings the ancestor's coverage information to its selected descendants. Any node at level l (l >= 0) will distribute at most 2*(log N - l - 2) links out, where N is the number of network nodes. The child height is the child subtree height, which is used to activate the balance algorithm [9]. A data node has no sideways routing tables, ancestor or adjacent links, but only one parent link to an LRN. The modification to the original adjacent link restricts the routing to the inside of the routing tree, which makes the tree succinct, and using the ancestor link instead of the upside path reduces the updating cost from O(N) to O(log N), which makes the tree speedy.

2.1 Ancestor link distribution
In SDI, ancestor links are distributed evenly to routing nodes at each level, except the left- and right-most ones. Each link will be maintained by at most two nodes of the same level: the right (left) most descendant of the left (right) child branch. All nodes at level no less than 2, except the left-most and right-most ones, will have ancestor links. If a node maintains an ancestor link, there is only one. Each node will keep the region information for the linked ancestor, if any.

2.2 Index building
Index building for SDI is the same as in the VBI-Tree [1], which deals with the data indexing distribution among tree nodes. The ancestor node will cover the descendant node. Node joining or departure causes the data space to be divided or combined. Data adding or deleting causes the coverage space of a node to be expanded or shrunk, which is the same process as in centralized index schemes [3,4].
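To make the link structure concrete, a routing node can be pictured as in the following C sketch (the field names are ours, not from the SDI paper; the neighbor routing tables are omitted for brevity):

    #define DIM 2                          /* dimensionality of the indexed data */

    struct region {                        /* axis-aligned region covered by a node */
        double low[DIM], high[DIM];
    };

    struct sdi_node {
        struct sdi_node *parent;           /* one parent link */
        struct sdi_node *child[2];         /* two children */
        struct sdi_node *adjacent[2];      /* left/right adjacent routing nodes */
        struct sdi_node *ancestor;         /* at most one ancestor link */
        struct region    cover;            /* coverage region of this node */
        int              child_height;     /* child subtree height, drives balancing */
    };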
3. QUERY PROCESSING
According to the modification to the tree structure, new algorithms for query processing have been defined. By visiting ancestor links among neighbors, the query efficiency is still O(log N), but with no root-bottleneck problem or high maintenance cost. A point query is a special case of a range query or a KNN query, setting the query radius to zero. For simplicity, we first consider the case where no sibling nodes overlap with each other. The ancestor link helps to do a discrete data search with fewer hops, which reduces the query processing cost drastically compared with the VBI-Tree. It is ensured that all the nodes which intersect with the query are visited once and only once. In the query processing algorithm, by using additional parameters, we restrict the search to the nodes or branches which have not been checked. For query processing, we use a parallel query distribution which guarantees the query efficiency to be O(log N).
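A minimal sketch of the pruned traversal (ours; it ignores the ancestor-link shortcuts and the parallel distribution of sub-queries that give the O(log N) bound): only branches whose coverage region intersects the query box are entered, so every intersecting node is visited exactly once. It reuses struct sdi_node and struct region from the sketch above:

    #include <stddef.h>

    static int intersects(const struct region *a, const struct region *b)
    {
        for (int d = 0; d < DIM; d++)
            if (a->high[d] < b->low[d] || b->high[d] < a->low[d])
                return 0;              /* disjoint along dimension d */
        return 1;
    }

    static void range_query(const struct sdi_node *n, const struct region *q,
                            void (*report)(const struct sdi_node *))
    {
        if (n == NULL || !intersects(&n->cover, q))
            return;                    /* prune: no answers below this node */
        report(n);                     /* leaf routing nodes report their data */
        range_query(n->child[0], q, report);
        range_query(n->child[1], q, report);
    }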
3.1 Performance of query processing
The VBI-Tree uses the upside path to log the coverage information for all ancestors along the way to the root, so each node has a wider view. Hence the VBI-Tree beats SDI in query efficiency; however, both of them can resolve a query in O(log N). The most important thing for SDI is that it can resolve the query with much less cost (query messages), and the more skewed the data distribution, the more benefit it gets. SDI beats the VBI-Tree in three cases, especially for the range query and the KNN query. By using ancestor links, SDI resolves discrete data checking with fewer hops, whereas the VBI-Tree can only jump one level at a time. The query cost increases with increasing dimensionality: the bigger the dimensionality, the more the space overlaps. KNN query processing adopts the range query processing algorithm, and SDI wins in query cost.

4. CONCLUSION
Indexing of multidimensional data is an essential problem for bringing cloud technologies into mission-critical data management applications. An enabling technique for this purpose should not only keep search efficiency in a static environment, but also provide availability and robustness without performance sacrifice in large-scale cloud-based systems where nodes may join or leave the system dynamically. SDI, a tree-based overlay network, is introduced, in which each node maintains only carefully selected additional links to ancestor and descendant nodes. It has been shown here that even with fewer additional links, the search algorithms based on SDI still bound the query efficiency by O(log N). The advantage achieved by this simple yet efficient index structure is huge. It eliminates the root-bottleneck problem suffered by most other tree-based overlay networks. Furthermore, since fewer links are maintained, it reduces the cost of both network maintenance and query processing.

5. REFERENCES
[1] H.V. Jagadish, B.C. Ooi, Q.H. Vu, R. Zhang, A. Zhou, VBI-Tree: A peer-to-peer framework for supporting multi-dimensional indexing schemes, in: ICDE, 2006, pp. 34-43.
[2] S. Ratnasamy, P. Francis, M. Handley, R. Karp, S. Shenker, A scalable content-addressable network, in: SIGCOMM, 2001, pp. 161-172.
[3] A. Guttman, R-trees: A dynamic index structure for spatial


searching, in: SIGMOD, 1984, pp. 47-57.
[4] P. Ciaccia, M. Patella, P. Zezula, M-tree: An efficient access method for similarity search in metric spaces, in: VLDB, 1997, pp. 426-435.

ABOUT THE AUTHORS
Mani Dwivedi did her MCA at IIMT Management College, Meerut, affiliated to UP Technical University, Noida, in the year 2008. She also received her M.Tech. degree in Computer Science Engineering from UP Technical University, Noida. She is currently working in the MCA department as an Assistant Professor at Ajay Kumar Garg Engineering College, Ghaziabad.

MOBILE CLOUD COMPUTING INTEGRATION:
ARCHITECTURE, APPLICATIONS, AND APPROACHES
Anjali Singh
Assistant Professor, MCA Department, AKGEC, GZB
E-mail: sinajali2001@yahoo.com

Abstract – Together with the explosive growth of mobile applications and the emergence of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., battery life, storage, and bandwidth), environment (e.g., heterogeneity, scalability, and availability), and security (e.g., reliability and privacy) discussed in mobile computing. This paper gives a survey of MCC, which helps general readers have an overview of MCC, including the definition, architecture, and applications. The issues, existing solutions, and approaches are presented. In addition, the future research directions of MCC are discussed.

1. INTRODUCTION
Mobile devices (e.g., smartphones and tablet PCs) are increasingly becoming an essential part of human life as the most effective and convenient communication tools, not bounded by time and place. Mobile users accumulate rich experience of various services from mobile applications (e.g., iPhone apps and Google apps), which run on the devices and/or on remote servers via wireless networks. The rapid progress of mobile computing (MC) has become a powerful trend in the development of IT technology as well as in the commerce and industry fields. However, mobile devices are facing many challenges in their resources (e.g., battery life, storage, and bandwidth) and communications (e.g., mobility and security). The limited resources significantly impede the improvement of service quality. Cloud computing (CC) has been widely recognized as the next-generation computing infrastructure. CC offers some advantages by allowing users to use infrastructure (e.g., servers, networks, and storage), platforms (e.g., middleware services and operating systems), and software (e.g., application programs) provided by cloud providers (e.g., Google, Amazon, and Salesforce) at low cost. In addition, CC enables users to elastically utilize resources in an on-demand fashion. As a result, mobile applications can be rapidly provisioned and released with minimal management effort or service provider interaction. With the explosion of mobile applications and the support of CC for a variety of services for mobile users, mobile cloud computing (MCC) is introduced as an integration of CC into the mobile environment. MCC brings new types of services and facilities for mobile users to take full advantage of CC.

2. WHAT IS MOBILE CLOUD COMPUTING?
'Mobile cloud computing, at its simplest, refers to an infrastructure where both the data storage and the data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones and into the cloud, bringing applications and MC to not just smartphone users but a much broader range of mobile subscribers.'

MCC is a new paradigm for mobile applications whereby the data processing and storage are moved from the mobile device to powerful and centralized computing platforms located in clouds. These centralized applications are then accessed over the wireless connection based on a thin native client or web browser on the mobile devices. Alternatively, MCC can be defined as a combination of the mobile web and CC, which is the most popular tool for mobile users to access applications and services on the Internet. Briefly, MCC provides mobile users with the data processing and storage services in clouds. The mobile devices do not need a powerful configuration (e.g., CPU speed and memory capacity), because all the complicated computing modules can be processed in the clouds.

2.1 ARCHITECTURES OF MOBILE CLOUD COMPUTING
From the concept of MCC, the general architecture of MCC can be shown as in Figure 1. In Figure 1, mobile devices are connected to the mobile networks via base stations (e.g., base transceiver station, access point, or satellite) that establish and control the connections (air links) and functional interfaces between the networks and the mobile devices. Mobile users' requests and information (e.g., ID and location) are transmitted to the central processors that are connected to servers providing mobile network services. Here, mobile network operators can provide services to mobile users, such as authentication, authorization, and accounting, based on the home agent and the subscribers' data stored in databases. In the cloud, cloud controllers process the requests to provide mobile users with the corresponding cloud services. These services are developed with the concepts of utility computing, virtualization, and service-oriented architecture (e.g., web, application, and database servers). Generally, a CC is a large-scale distributed network system implemented based on a
number of servers in data centers. The cloud services are generally classified based on a layer concept. In the upper layers of this paradigm, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are stacked.

a. Data centers. The data center layer provides the hardware facility and infrastructure for the clouds to provide services to customers. Typically, data centers are built in less populated places, with high power-supply stability and a low risk of disaster.

b. IaaS. Infrastructure as a Service is built on top of the data center layer. IaaS enables the provision of storage, hardware, servers, and networking components. The client typically pays on a per-use basis and can thus save cost, as the payment is based only on how much resource is really used. Infrastructure can be expanded or shrunk dynamically as needed.

c. PaaS. Platform as a Service offers an advanced integrated environment for building, testing, and deploying custom applications. Examples of PaaS are Google App Engine, Microsoft Azure, and Amazon MapReduce/Simple Storage Service.

d. SaaS. Software as a Service supports remote access to applications via the Internet. Note that a data storage service can be viewed as belonging to either IaaS or PaaS. Given this architectural model, users can use the services flexibly and efficiently.
Figure 1. Mobile cloud computing architecture.
2.2 Advantages of mobile cloud computing
Cloud computing is known to be a promising solution for MC for many reasons (e.g., mobility, communication, and portability). In the following, we describe how the cloud can be used to overcome obstacles in MC, thereby pointing out the advantages of MCC.

Extending battery lifetime. Battery life is one of the main concerns for mobile devices. Several solutions have been proposed to enhance CPU performance and to manage the disk and screen intelligently so as to reduce power consumption. However, these solutions require changes to the structure of mobile devices, or they require new hardware that results in an increase of cost and may not be feasible for all mobile devices. The computation offloading technique has been proposed with the objective of migrating large computations and complex processing from resource-limited devices (i.e., mobile devices) to resourceful machines (i.e., servers in clouds). This avoids long application execution times on mobile devices, which cause a large amount of power consumption. The effectiveness of offloading techniques has been demonstrated through several experiments: the results show that remote application execution can save energy significantly. In addition, many mobile applications benefit from task migration and remote processing. For example, offloading a compiler optimization for image processing can reduce the energy consumption of a mobile device by 41%. Also, using the memory arithmetic unit and interface (MAUI) to migrate mobile game components to servers in the cloud can save 27% of the energy consumption for computer games and 45% for the chess game.

Improving data storage capacity and processing power. Storage capacity is also a constraint for mobile devices. MCC is developed to enable mobile users to store/access large data sets on the cloud through wireless networks. A first example is the Amazon Simple Storage Service, which supports file storage. Another example is Image Exchange, which utilizes the large storage space in clouds for mobile users. This mobile photo-sharing service enables mobile users to upload images to the clouds immediately after capturing them. Users may then access all images from any device, and they can save a considerable amount of energy and storage space on their mobile devices because all images are sent to and processed in the clouds.
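To make the energy argument concrete, the following back-of-the-envelope sketch (ours, not taken from the experiments cited above) encodes a simple offloading rule. All class names and parameter values are illustrative assumptions: local execution costs compute power multiplied by local runtime, while offloading costs the radio energy needed to transmit the input plus idle power while waiting for the cloud result; the task is offloaded when the first quantity exceeds the second.

/**
 * Illustrative offloading decision based on a simple energy model.
 * All parameters are hypothetical; real systems measure them at run time.
 */
public class OffloadDecision {

    // Device and network characteristics (assumed example values).
    static final double P_COMPUTE_W = 0.9;   // power while computing locally (watts)
    static final double P_IDLE_W    = 0.3;   // power while idle, awaiting the cloud (watts)
    static final double P_RADIO_W   = 1.3;   // power while transmitting (watts)
    static final double LOCAL_MIPS  = 400;   // device speed (million instructions/second)
    static final double CLOUD_MIPS  = 4000;  // cloud server speed
    static final double BANDWIDTH_BPS = 1_000_000; // uplink bandwidth (bits/second)

    /** Energy (joules) to run the task on the device. */
    static double localEnergy(double millionInstructions) {
        return P_COMPUTE_W * (millionInstructions / LOCAL_MIPS);
    }

    /** Energy (joules) to ship the input and wait for a cloud result. */
    static double offloadEnergy(double millionInstructions, double inputBits) {
        double sendTime = inputBits / BANDWIDTH_BPS;
        double waitTime = millionInstructions / CLOUD_MIPS;
        return P_RADIO_W * sendTime + P_IDLE_W * waitTime;
    }

    public static void main(String[] args) {
        double mi = 2000;          // task size: 2000 million instructions
        double bits = 500_000;     // input data: 500 kilobits
        boolean offload = localEnergy(mi) > offloadEnergy(mi, bits);
        System.out.printf("local=%.2f J, offload=%.2f J -> %s%n",
                localEnergy(mi), offloadEnergy(mi, bits),
                offload ? "offload to cloud" : "run locally");
    }
}

Under this kind of model, compute-heavy but data-light tasks, such as the chess-move computation mentioned below, benefit the most from offloading: the computation term shrinks with the cloud speedup while the transmission term stays small.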
Mobile cloud computing also helps in reducing the running cost of compute-intensive applications that take a long time and a large amount of energy when performed on resource-limited devices. CC can efficiently support various tasks such as data warehousing and managing and synchronizing multiple documents online. For example, clouds can be used for transcoding, playing chess, or broadcasting multimedia services to mobile devices. In these cases, all the complex calculations, such as transcoding or computing an optimal chess move, that take a long time when performed on mobile devices can be processed efficiently on the cloud. Mobile applications are also not constrained by the storage capacity of the devices, because their data is now stored on the cloud.

Improving reliability. Storing data or running applications on clouds is an effective way to improve reliability, because the data and applications are stored and backed up on a number of computers. This reduces the chance of data and application loss on the mobile devices. In addition, MCC can be designed as a comprehensive data security model for both service providers and users. For example, the cloud can be used to protect copyrighted digital content (e.g., video clips and music) from abuse and unauthorized distribution. Also, the cloud can remotely provide mobile users with security services such as virus scanning, malicious code detection, and authentication. Moreover, such cloud-based security services can make efficient use of the records collected from different users to improve the effectiveness of the services.

In addition, MCC also inherits some advantages of clouds for mobile services, as follows:

Dynamic provisioning. Dynamic, on-demand provisioning of resources on a fine-grained, self-service basis is a flexible way for service providers and mobile users to run their applications without advance reservation of resources.

Scalability. The deployment of mobile applications can be performed and scaled to meet unpredictable user demands thanks to flexible resource provisioning. Service providers can easily add and expand an application and service with little or no constraint on resource usage.

Multitenancy. Service providers (e.g., network operators and data center owners) can share resources and costs to support a variety of applications and a large number of users.

Ease of integration. Multiple services from different service providers can be integrated easily through the cloud and the Internet to meet user demand.

3. APPLICATIONS OF MOBILE CLOUD COMPUTING
Mobile applications gain an increasing share of the global mobile market, and various mobile applications have taken advantage of MCC. In this section, some typical MCC applications are introduced.

3.1. Mobile commerce
Mobile commerce (m-commerce) is a business model for commerce using mobile devices. M-commerce applications generally fulfill tasks that require mobility (e.g., mobile transactions and payments, mobile messaging, and mobile ticketing). M-commerce applications can be classified into a few classes, including finance, advertising, and shopping. These applications have to face various challenges (e.g., low network bandwidth, high complexity of mobile device configurations, and security). Therefore, m-commerce applications are integrated into the CC environment to address these issues. Yang et al. propose a 3G e-commerce platform based on CC. This paradigm combines the advantages of both the third-generation (3G) network and CC to increase data processing speed and the security level based on public key infrastructure (PKI). The PKI mechanism uses encryption-based access control and over-encryption to ensure the privacy of users' access to the outsourced data. A 4PL-AVE trading platform utilizes CC technology to enhance security for users and improve customer satisfaction, customer intimacy, and cost competitiveness.

3.2. Mobile learning
Mobile learning (m-learning) is designed based on electronic learning (e-learning) and mobility. However, traditional m-learning applications have limitations in terms of the high cost of devices and network access, low network transmission rates, and limited educational resources.

Cloud-based m-learning applications have been introduced to overcome these limitations. For example, utilizing a cloud with large storage capacity and powerful processing ability, such applications provide learners with much richer services in terms of data (information) size, faster processing speed, and longer battery life.

One study shows the benefits of combining m-learning and CC to enhance the communication quality between students and teachers. In this case, smartphone software based on open-source components answers students' questions in a timely manner. In addition, a contextual m-learning system based on IMERA (Mobile Interaction in Augmented Reality Environment) has been proposed.

3.3. Mobile healthcare
The purpose of applying MCC in medical applications is to minimize the limitations of traditional medical treatment (e.g., small physical storage, security and privacy issues, and medical errors). Mobile healthcare (m-healthcare) provides mobile users with convenient help to access resources (e.g., patient health records) easily and efficiently. Besides, m-healthcare offers hospitals and healthcare organizations a variety of on-demand services on clouds, rather than owning standalone applications on local servers.
• Intelligent emergency management systems can manage and coordinate the fleet of emergency vehicles effectively and in time when receiving calls from accidents or incidents.
• Health-aware mobile devices detect pulse rate, blood pressure, and level of alcohol to alert the healthcare emergency system.
• Pervasive access to healthcare information allows patients or healthcare providers to access current and past medical information.
• Pervasive lifestyle incentive management can be used to pay healthcare expenses and manage other related charges automatically.

A paper proposes @HealthCloud, a prototype implementation of an m-healthcare information management system based on CC and a mobile client running the Android operating system (OS). This prototype presents three services utilizing Amazon's S3 Cloud Storage Service to manage patient health records and medical images.
• Seamless connection to cloud storage allows users to retrieve, modify, and upload medical contents (e.g., medical images, patient health records, and biosignals) utilizing web services and a set of available APIs based on Representational State Transfer.
• A patient health record management system displays the information regarding patients' status, related biosignals, and image contents through the application's interface.
• Image viewing support allows mobile users to decode large image files at different resolution levels given different network availability and quality.

4. ISSUES AND APPROACHES OF MOBILE CLOUD COMPUTING
As discussed in the previous section, MCC has many advantages for mobile users and service providers. However, because of the integration of two different fields, that is, CC and mobile networks, MCC has to face many technical challenges. This section lists several research issues in MCC related to the mobile communication and CC sides. Then, the available solutions to address these issues are reviewed.

4.1 Issues in mobile communication side
(1) Low bandwidth. Bandwidth is one of the big issues in MCC because the radio resource of wireless networks is much scarcer compared with traditional wired networks.
(2) Availability. Service availability becomes a more important issue in MCC than in CC with wired networks. Mobile users may not be able to connect to the cloud to obtain a service due to traffic congestion, network failures, or out-of-signal conditions.
(3) Heterogeneity. Mobile cloud computing will be used in highly heterogeneous networks in terms of wireless network interfaces. Different mobile nodes access the cloud through different radio access technologies such as WCDMA, GPRS, WiMAX, CDMA2000, and WLAN. As a result, the issue arises of how to handle wireless connectivity while satisfying MCC's requirements (e.g., always-on connectivity, on-demand scalability of wireless connectivity, and energy efficiency of mobile devices).

4.2 Issues in computing side
(1) Computation offloading. As explained in the previous section, offloading is one of the main features of MCC for improving the battery lifetime of mobile devices and increasing the performance of applications.
(2) Security. Protecting user privacy and data/application secrecy from adversaries is key to establishing and maintaining consumers' trust in the mobile platform, especially in MCC.
(3) Enhancing the efficiency of data access. With an increasing number of cloud services, the demand for accessing data resources (e.g., images, files, and documents) on the cloud increases. As a result, a method to deal with (i.e., store, manage, and access) data resources on clouds becomes a significant challenge.
(4) Context-aware mobile cloud services. It is important for the service provider to fulfill mobile users' satisfaction by monitoring their preferences and providing appropriate services to each of the users.

5. PRICING
Using services in MCC involves both the mobile service provider (MSP) and the cloud service provider (CSP). However, MSPs and CSPs have different service management, customer management, methods of payment, and prices. This leads to several issues: how to set the price, how the price will be divided among the different entities, and how the customers pay. For example, when a mobile user runs a mobile gaming application on the cloud, this involves the game service provider (providing a game license), the mobile service provider (accessing the data through a base station), and the CSP (running the game engine on a data center).

6. CONCLUSION
Mobile cloud computing is one of the mobile technology trends of the future because it combines the advantages of both MC and CC, thereby providing optimal services for mobile users. That traction is predicted to push the revenue of MCC to $5.2 billion. With this importance, this article has provided an overview of MCC in which its definitions, architecture, and advantages have been presented. The applications supported by MCC, including m-commerce, m-learning, and mobile healthcare, have been discussed; they clearly show the applicability of MCC to a wide range of mobile services. Then, the issues and related approaches for MCC (i.e., from the communication and computing sides) have been discussed. Finally, future research directions have been outlined.

REFERENCES
1. Satyanarayanan M. Proceedings of the 1st ACM Workshop on Mobile Cloud Computing & Services: Social Networks and Beyond (MCS), 2010.
2. Satyanarayanan M. Fundamental challenges in mobile computing, In Proceedings of the 5th annual ACM
symposium on Principles of distributed computing, 1996; 1–7.
3. Ali M. Green cloud on the horizon, In Proceedings of the 1st International Conference on Cloud Computing (CloudCom), Manila, 2009; 451–459.
4. http://www.mobilecloudcomputingforum.com
5. White Paper. Mobile Cloud Computing Solution Brief. AEPONA, 2010.
6. Christensen JH. Using Restful web-services and cloud computing to create next generation mobile applications, In Proceedings of the 24th ACM SIGPLAN conference companion on Object oriented programming systems languages and applications (OOPSLA), 2009; 627–634.
7. Liu L, Moulic R, Shea D. Cloud service portal for mobile device management, In Proceedings of IEEE 7th International Conference on e-Business Engineering (ICEBE), 2011; 474.
8. Foster I, Zhao Y, Raicu I, Lu S. Cloud computing and grid computing 360-degree compared, In Proceedings of Workshop on Grid Computing Environments (GCE), 2009.
9. Calheiros RN, Vecchiola C, Karunamoorthy D, Buyya R. The Aneka platform and QoS-driven resource provisioning for elastic applications on hybrid Clouds. Future Generation Computer Systems, to appear.
10. Buyya R, Yeo CS, Venugopal S, Broberg J, Brandic I. Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility. Journal on Future Generation Computer Systems 2009; 25(6): 599–616.
11. Huang Y, Su H, Sun W, et al. Framework for building a low-cost, scalable, and secured platform for web-delivered business services. IBM Journal of Research and Development 2010; 54(6): 535–548.
12. Tsai W, Sun X, Balasooriya J. Service-oriented cloud computing architecture, In Proceedings of the 7th International Conference on Information Technology: New Generations (ITNG), 2010; 684–689.

Ms. Anjali Singh has a teaching experience of 14 years. She obtained her MCA and M.Tech degrees in 2001 and 2013, respectively. Her experience includes teaching subjects such as Computer Networks, Multimedia Systems, Mobile Computing, Modelling and Simulation, Cyber Security, Web Technology, and Human Values.
GENETIC ALGORITHM FRAMEWORK FOR PARALLEL COMPUTING ENVIRONMENTS
Ruchi Gupta
Assistant Professor, MCA Department
E-mail: 80ruchi@gmail.com
Abstract – In this research article, we present a framework to execute genetic algorithms (GA) in various parallel environments. GA researchers can prepare implementations of GA operators and fitness functions using this framework. In the proposed framework, the GA model is restricted to a coarse-grained and micro-grained model. In this paper, different parallel computing environments using genetic algorithms are presented. Computational performance is also discussed through examples.
Index Terms: Genetic Algorithm, Parallel Computing.

1. INTRODUCTION
Recently, several types of parallel architecture have come into wide use. For example, calculation with a multi-core CPU with more than four cores is not unusual. General-purpose GPUs have also become easy to use. In Japan, some of the supercomputing centers are open for researchers to use high-end computational resources.

Thus, even when we wish to use the same algorithms, it is necessary to prepare different implementation codes suitable for different parallel architectures. This places a heavy burden on algorithm researchers, because in-depth knowledge of the different parallel architectures is required to run their implementation codes efficiently on parallel machines. GA is a type of optimization algorithm with multipoint search [1]. GA may find the optimum point even when the landscape of the objective function has multiple peaks. However, GA requires many iterations to find the optimum, which results in high calculation cost. As GA is a multipoint search algorithm, it implicitly has several types of parallelism [2][3][4][5]. Thus, several types of research regarding the parallelization of GAs exist. Ono et al. introduced the GA model and its implementation; the logical and implementation parallel models of GA should be clarified. As there is parallelism in the GA itself, a parallel GA can be performed even in a single process. We call this the logical parallel model. On the other hand, because GA has multiple search points, a single model can be implemented on parallel computers. In this case, an implementation parallel model should be prepared.

In most GA research, these logical and implementation parallel models are not distinguished clearly and are often the same [6][7][8]. When the logical model is closely related to the implementation model, GA users must have deep knowledge of the parallel architectures on which their parallel GAs are running. At the same time, as the logical model and implementation model are closely related, different parallel codes are required for different parallel machines. Therefore, it would be of great benefit if GA users were not required to have such deep knowledge of novel parallel architectures to run their GAs in parallel.

Here, we present a parallel environment framework for GA that adopts the coarse-grained and micro-grained model as an implementation model. GA researchers prepare the implementations of genetic algorithm operators and fitness functions using the proposed framework.

2. GENETIC ALGORITHM
The GA is an optimization algorithm that mimics natural evolution with variation and adaptation to the environment. In evolution processes in nature, an individual that is better adapted to the environment among a group of individuals forming a certain generation survives at a higher rate and leaves offspring to the next generation. In the GA concept, the computer finds an individual that is better adapted to the environment, or a solution that yields an optimum value of an evaluation function, by modeling the mechanism of biological evolution. Figure 1 shows a typical flowchart of GA.

Fig. 1. Flowchart of GA.
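To make the flowchart concrete, the following minimal sketch (ours, not part of the paper's framework) implements the loop of Fig. 1 for the one-max problem, where fitness is simply the number of ones in a bit string: evaluate, select, recombine, mutate, and repeat.

import java.util.Arrays;
import java.util.Random;

/** Minimal serial GA for the one-max problem, mirroring the flow of Fig. 1. */
public class SimpleGa {
    static final int POP = 30, LEN = 40, GENERATIONS = 100;
    static final double MUTATION = 0.01;
    static final Random RNG = new Random();

    static int fitness(boolean[] ind) {            // evaluation: count the ones
        int f = 0;
        for (boolean b : ind) if (b) f++;
        return f;
    }

    static boolean[] tournament(boolean[][] pop) { // selection: best of two random picks
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];   // random initial population
        for (boolean[] ind : pop)
            for (int i = 0; i < LEN; i++) ind[i] = RNG.nextBoolean();

        for (int g = 0; g < GENERATIONS; g++) {
            boolean[][] next = new boolean[POP][LEN];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                int cut = RNG.nextInt(LEN);        // one-point crossover
                for (int i = 0; i < LEN; i++) {
                    next[k][i] = (i < cut ? p1[i] : p2[i]);
                    if (RNG.nextDouble() < MUTATION) next[k][i] = !next[k][i]; // mutation
                }
            }
            pop = next;                            // generation change
        }
        int best = Arrays.stream(pop).mapToInt(SimpleGa::fitness).max().orElse(0);
        System.out.println("best fitness = " + best + " / " + LEN);
    }
}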
3. PARALLEL MODEL OF GA
The GA can be parallelized because it searches multiple points and repeats sampling. One approach is to parallelize the evaluation of individuals; this method has the same search characteristics as a serial GA. The other is to split a population into multiple subpopulations. Today, the latter method is frequently used, because it has higher parallelism than the former method. There are also GA methods performing very efficient parallelism, which combine both methods. When changing the method of parallel GA, it is important to consider that it may change the amount of calculation and the accuracy of solutions. This brings us to the point that parallel GA has the following two meanings:
• Parallel algorithm for increasing search performance
• Parallel implementation for reducing execution time

For example, Pospichal [9] has proposed a distributed population GA based on GPUs and achieved high efficiency. Although this method achieves high efficiency, GAs other than the distributed population GA cannot be implemented with it, since the GA and the parallel implementation are inseparable. Also, it is difficult to port this method to architectures other than GPUs. The most basic parallel models are introduced in the following: parallel models of GA can be divided into coarse-grained and micro-grained models.
1) Coarse-grained model
The coarse-grained model is generally called a distributed population model. This model splits the population into multiple subpopulations, which are then searched separately. Periodically, several individuals in the subpopulations are moved into other subpopulations; this operation is called migration. Figure 2 shows the flow of the coarse-grained model. This model uses computational resources effectively, because it communicates with other computational nodes only during migration. In addition, this model changes the performance of the search compared to a serial algorithm.

Fig. 2. Coarse-grained model.
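A rough illustration of the distributed population idea is sketched below. This is our own simplified example, not the paper's library: each island evolves its own subpopulation on its own thread (the per-island variation step is reduced to a single mutate-and-keep-if-better move for brevity), and every few generations it mails a copy of its best individual to the next island. Only the migration step requires communication, which is why the model maps well onto loosely coupled nodes.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.*;

/**
 * Sketch of the coarse-grained (distributed population) model: each island
 * evolves its own subpopulation on its own thread, and every MIGRATE_EVERY
 * generations sends a copy of its best individual to the next island.
 */
public class IslandGa {
    static final int ISLANDS = 4, POP = 20, LEN = 40, GENS = 200, MIGRATE_EVERY = 25;

    static int fitness(boolean[] ind) {
        int f = 0;
        for (boolean b : ind) if (b) f++;
        return f;
    }

    static boolean[] best(boolean[][] pop) {
        boolean[] b = pop[0];
        for (boolean[] ind : pop) if (fitness(ind) > fitness(b)) b = ind;
        return b.clone();
    }

    static int evolve(int id, BlockingQueue<boolean[]>[] mailbox) {
        Random rng = new Random(id);
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] ind : pop) for (int j = 0; j < LEN; j++) ind[j] = rng.nextBoolean();

        for (int g = 1; g <= GENS; g++) {
            // Simplified variation step: mutate one individual, keep it if not worse.
            int k = rng.nextInt(POP);
            boolean[] child = pop[k].clone();
            child[rng.nextInt(LEN)] ^= true;
            if (fitness(child) >= fitness(pop[k])) pop[k] = child;

            if (g % MIGRATE_EVERY == 0) {                      // migration phase
                mailbox[(id + 1) % ISLANDS].offer(best(pop));  // send emigrant (a copy)
                boolean[] immigrant = mailbox[id].poll();      // adopt one if available
                if (immigrant != null) pop[rng.nextInt(POP)] = immigrant;
            }
        }
        return fitness(best(pop));
    }

    public static void main(String[] args) throws Exception {
        // One mailbox per island; island i mails island (i + 1) % ISLANDS.
        @SuppressWarnings("unchecked")
        BlockingQueue<boolean[]>[] mailbox = new BlockingQueue[ISLANDS];
        for (int i = 0; i < ISLANDS; i++) mailbox[i] = new LinkedBlockingQueue<>();

        ExecutorService pool = Executors.newFixedThreadPool(ISLANDS);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < ISLANDS; i++) {
            final int id = i;
            results.add(pool.submit(() -> evolve(id, mailbox)));
        }
        for (int i = 0; i < ISLANDS; i++)
            System.out.println("island " + i + " best fitness = " + results.get(i).get());
        pool.shutdown();
    }
}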
2) Micro-grained model
Evaluations account for a large share of the total execution time in problems with complex objective functions. The micro-grained model is based on this general concept of parallelization. This model is a master-slave model: a master processor executes all genetic operations besides evaluation, while the evaluations are executed by slave processors. The master processor sends out the individuals that should be evaluated; the slave processors evaluate these individuals and return them to the master processor. Figure 3 shows the flow of the micro-grained model. This model shows inferior parallelization performance compared to the coarse-grained model, because it needs many connections and the master processor occupies a CPU. On the other hand, this model does not alter the search performance compared to a serial algorithm.

Fig. 3. Micro-grained model.
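A minimal master-slave sketch (again ours, with a stand-in objective function) can be built on a thread pool: the master submits one fitness evaluation per individual to the worker pool and gathers the results, while selection, crossover, and mutation stay sequential on the master.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

/**
 * Sketch of the micro-grained (master-slave) model: the master thread submits
 * one fitness evaluation per individual to a pool of workers ("slaves") and
 * collects the results; all other genetic operations remain sequential.
 */
public class MasterSlaveEvaluation {

    /** Stand-in for an expensive objective function. */
    static double expensiveFitness(double[] individual) {
        double sum = 0;
        for (double x : individual) sum += Math.sin(x) * Math.cos(2 * x); // dummy work
        return sum;
    }

    static double[] evaluateAll(double[][] population, ExecutorService slaves)
            throws InterruptedException, ExecutionException {
        List<Future<Double>> pending = new ArrayList<>();
        for (double[] ind : population)                 // master distributes the work
            pending.add(slaves.submit(() -> expensiveFitness(ind)));
        double[] fitness = new double[population.length];
        for (int i = 0; i < fitness.length; i++)        // master gathers the results
            fitness[i] = pending.get(i).get();
        return fitness;
    }

    public static void main(String[] args) throws Exception {
        double[][] population = new double[1000][50];
        for (double[] ind : population)
            for (int j = 0; j < ind.length; j++) ind[j] = Math.random();

        ExecutorService slaves =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        double[] fitness = evaluateAll(population, slaves);
        System.out.println("first individual fitness = " + fitness[0]);
        slaves.shutdown();
    }
}

As the text notes, this parallelization leaves the search trajectory identical to the serial GA (the same evaluations occur, just concurrently); the achievable speedup is limited by the master's sequential work and by the cost of distributing individuals.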
4. CONCLUSIONS AND FUTURE WORK
In this paper, we have surveyed a framework for GAs in parallel environments. GA researchers can prepare implementations of GA operators and fitness functions using this framework. In the proposed framework, the GA model is restricted to the coarse-grained and micro-grained models.

In future work, a mechanism to find the best number of individuals and to tune it dynamically will be implemented in the libraries. In addition, we will also attempt to prepare some parallel libraries for other parallel architectures.

5. REFERENCES
1. David E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
2. T. Starkweather, D. Whitley, and K. Mathias, Optimization using Distributed Genetic Algorithms, Parallel Problem Solving from Nature, 1991.
3. H. Mühlenbein, "Parallel genetic algorithms, population genetics and combinatorial optimization," in Parallelism, Learning, Evolution, vol. 565 of Lecture Notes in Computer Science, pp. 398–406, Springer Berlin/Heidelberg, 1991.
4. Theodore C. Belding, "The Distributed Genetic Algorithm Revisited," Proc. 6th International Conf. Genetic Algorithms, pp. 114–121, 1995.
5. M. Miki, T. Hiroyasu, M. Kaneko, and K. Hatanaka, "A Parallel Genetic Algorithm with Distributed Environment Scheme," IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, pp. 695–700, 1999.
6. Lim D., Ong Y. Soon, Jin Y., Sendhoff B., and Lee B. Sung, "Efficient hierarchical parallel genetic algorithms using grid computing," Future Generation Computer Systems, vol. 23, no. 4, pp. 658–670, 2007.
7. J. Ming Li, X. Jing Wang, R. Sheng He, and Z. Xian Chi, "An efficient fine-grained parallel genetic algorithm based on gpu-accelerated," in Network and Parallel Computing Workshops, 2007. NPC Workshops. IFIP International Conference on, 2007, pp. 855–862.
8. Thompson, A. Matthew and Dunlap, I. Brett, "Optimization of analytic density functionals by parallel genetic algorithm," Chemical Physics Letters, vol. 463, no. 1–3, pp. 278–282, 2008.
9. P. Pospichal and J. Jaros, "GPU-based Acceleration of the Genetic Algorithm," GPU competition of GECCO competition, 2009.

Ruchi Gupta is an Assistant Professor at AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). She did her MCA and M.Tech (CS) from U.P. Technical University, Lucknow, and is pursuing a Ph.D. (Computer Science) from Sharda University, Greater Noida. Her areas of interest include Genetic Algorithms.
TOWARDS DEVELOPING REUSABLE SOFTWARE
COMPONENTS
Aditya Pratap Singh
Assistant Professor AKGEC, Ghaziabad
adityapsingh@gmail.com
Abstract – CBSE (Component Based Software Engineering) allows faster development at lower cost and with better usability, using software development with reuse and for reuse. Software components work like plug-and-play devices thanks to their reusability properties, which abstract software complexity and increase performance. In spite of numerous research efforts, there are no mature software component development guidelines defined for current technologies such as .NET or Java.
This study presents guidelines for component development for reuse in the .NET environment. It demonstrates an approach by designing a binary component as part of development for reuse based on the .NET component framework. The significant contribution of this study is to propose generic, comprehensive guidelines for development teams to adopt while developing software components for reuse.
Keywords – Software Reuse, Software Component, CBSE.

INTRODUCTION
In the last three decades, software development has established itself as a global market for business. In the current scenario, IT companies have slowly shifted from the traditional software development environment to a component-based environment. The reason behind this change is increased productivity and reduced cost due to the reusability of software components. Correspondingly, CBSE (Component Based Software Engineering) is the new trend for software development industries.

A software component is an independent element which can be deployed and composed in a further software development cycle without any modification. Software components reduce complexity from the end user's point of view and provide quick and easy implementation, subsequently helping to reduce cost. Much development work in the software industry tries to incorporate component reusability in one way or another. Zirpins et al. [6] suggested that today's modern ERP systems are made of several software components, which shows a real example of software component reuse on a big scale.

Ravichandran and Rothenberger [4] stated that software components work similarly to autonomous hardware components, which abstract the internal complexity of a device and provide an easy user interface to operate; similarly, components can be used as building blocks in new product development. Among all IT giants, Microsoft was the first who understood the industry's needs and succeeded in cashing in on this ready market. Microsoft offered the development kit known as "Microsoft Visual Studio" to develop reusable software components with many development options. Nowadays, all software industries have understood that components can deliver great reusability, extensibility, and maintainability benefits for creating large-scale software systems by breaking them down into small binary components. Ravichandran and Rothenberger [5] noticed that industry is now emphasizing black-box reuse over white-box reuse. Component-based programming emphasizes "black-box reuse", which means that a client implementing such a component need not worry about its internal functionality.

COMPONENT BASED SOFTWARE ENGINEERING
Component-Based Software Engineering (CBSE) is concerned with improving Component-Based Development (CBD) practices. In particular, CBSE aims to provide developers with predictability of the final software system's properties based on the analysis of its constituent components. Therefore, there is a need to develop effective ways of developing software components.

A software component is an independent unit of binary code, which can be used as plug and play, like a hardware device. It is designed and developed in a modular architecture, which promotes interoperability with other components and frameworks, for reuse or with reuse.

Although a software component is an independent modular unit, which is loosely coupled and not bound to one client, most importantly it possesses official usage guidelines for further reuse. In general, a typical software component model is divided into three parts:
a) Semantics: What are components meant to be?
b) Syntax: How are they constructed, developed, and represented?
c) Composition: How are they going to be composed and reassembled?

Semantics provides a description of components and describes a component's usage and functionality, whereas syntax
denotes the component's algorithms and development complexity, which give the component its physical structure. Finally, composition provides the overall wrapper mechanism to compose the various functionalities of a component and present it for reuse to customers.

In some areas, the life cycle of a component is very similar to a windows or web software development lifecycle, but modeling, packaging, and implementation are totally different from the general software development process. An in-depth domain analysis, and the method of packaging and deploying compiled binary code in such a loosely coupled way, make the component's lifecycle special.

The demands and requirements for software components keep fluctuating according to new system requirements in the market, so while developing a new software component for reuse, developers need to follow some guidelines for component design and development. However, the software industry still does not have a standard guideline for building reusable components, though seasoned authors like Lowy [2] and Ramachandran [4] have proposed their own guidelines for component architecture. On comparison, this study finds that [3] is more specific about low-level programming concepts, while [5] is more focused on the bigger picture of component design and reuse potential for industries.

Figure 1. Difference between Producer Reuse and Consumer Reuse.

Furthermore, software components fall into two categories, "for reuse" and "with reuse". Similarly, [1] claimed that the usage of software components can be divided into two categories:

a) Consumer reuse: the development of new software systems using existing components, called "development with reuse".
b) Producer reuse: developing and building new portable components for further reuse, called "development for reuse".

Figure 1 presents the differentiation between producer reuse and consumer reuse.

PROPOSED GUIDELINES FOR DEVELOPING REUSABLE .NET COMPONENTS
One of the main advantages of building components is to promote reusability. Visual Studio .NET also supports such a paradigm, wherein the components that we develop can be hosted inside the toolbar and then dragged and dropped into various projects; the development environment then writes the necessary code for us. Before starting component design, some self-assessment questions can be helpful for architects [5]:

• Identifying the common functions in the domain, to avoid duplication of tasks.
• Dependency on other components and hardware devices.
• Optimized design for further technology upgrades.
• Easy use and implementation with some minor changes.
• How valid is the component decomposition for reuse?
• From a business point of view, a good Return on Investment (ROI) of a software component ensures the component's longer life and usage in application domains.

The design for reuse provides a set of clear implementation steps, which should be followed by architects and developers. This set of guidelines can be classified into a number of categories (a sketch illustrating a few of them follows this list):

• Supporting component reuse by providing exception handlers.
• Interface-based programming.
• Using delegates to provide flexibility through strongly typed function pointers.
• Use of Generic<T> to make classes and functions reusable with any data type.
• Inheritance ("is-a" relationship), which ensures the relationship between classes and their derived objects and provides the foundation of object-oriented programming concepts.
• Designing and developing components with interface-based programming, which abstracts the code complexity for the user.
• Using abstract classes, which provide flexibility over interfaces to alter the implementing class's functionality.
• Remoting and object marshaling.
• Multithreading with thread safety.
• Packaging and deployment.
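A few of these guidelines, interface-based programming, generics, and a well-defined exception boundary, can be illustrated compactly. The sketch below is ours and is written in Java for brevity; the same shape carries over directly to .NET interfaces, Generic<T>, and custom exception types. The client binds only to a small generic interface, so the concrete implementation remains a replaceable black box.

import java.util.ArrayDeque;
import java.util.Deque;

/** Contract the client programs against; implementations stay replaceable. */
interface Repository<T> {
    void save(T item);
    T fetchLatest();
}

/** Exception type that defines the component's error boundary. */
class RepositoryException extends RuntimeException {
    RepositoryException(String message) { super(message); }
}

/** One interchangeable implementation; clients never see its internals. */
class InMemoryRepository<T> implements Repository<T> {
    private final Deque<T> items = new ArrayDeque<>();

    @Override public void save(T item) {
        if (item == null) throw new RepositoryException("cannot save null");
        items.push(item);
    }

    @Override public T fetchLatest() {
        if (items.isEmpty()) throw new RepositoryException("repository is empty");
        return items.peek();
    }
}

public class ComponentDemo {
    public static void main(String[] args) {
        Repository<String> repo = new InMemoryRepository<>(); // bind to the interface only
        repo.save("order-42");
        System.out.println(repo.fetchLatest());               // prints order-42
    }
}

Because the client holds only a Repository<String>, a database-backed or remote implementation can later be substituted without touching client code, which is precisely the black-box reuse discussed above.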
CONCLUSION
Current technologies help establish a solid foundation to enable the functionalities of software components. With adequate technologies in the field of software development, software component guidelines offer best practices for component development. The guidelines for new component development should be flexible enough to add more artifacts and principles, and to manipulate them according to current business requirements, which helps to save cost and labor and provides a flexible development environment. This small article tries to present some of the guidelines for software component development in the .NET environment.
REFERENCES
[1] Lau, K. K. and Wang, Z. (2007) Software Component Models. IEEE Transactions on Software Engineering, Vol. 33(10), p. 709-724.
[2] Lowy, J. (2003) Programming .NET Components. Cambridge, O'Reilly.
[3] MSDN (2013) An Introduction to C# Generics. [online]. MSDN.
[4] Ramachandran, M. (2008) Software Components: Guidelines and Applications. New York: Nova Science Publishers.
[5] Ravichandran, T. and Rothenberger, A. (2003) Software reuse strategies and component markets. Communications of the ACM. Vol. 6(8), p. 109-110.
[6] Zirpins, C., Ortiz, G., Lamersdorf, W. and Emmerich, W. (2013) Proceedings of the first international workshop on engineering service compositions. IBM.

Aditya Pratap Singh received his Master of Computer Applications degree from Uttar Pradesh Technical University, Lucknow, in 2003. He is pursuing a PhD from Gautam Buddha University, Greater Noida. He is an assistant professor in the Department of MCA at Ajay Kumar Garg Engineering College, Ghaziabad. His current research interests are in component-based software engineering and software measurement. He has presented his work in several national and international conferences, and his work has appeared in IEEE Xplore. He has served on program committees relating to national conferences on cyber security issues.
A SURVEY ON BIG DATA AND MINING
Dheeraj Kumar Singh
Assistant Professor, Department of MCA, AKGEC, Ghaziabad
singhdheeraj4@gmail.com
Abstract – Big Data concerns large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and data collection capacity, Big Data is now rapidly expanding in all science and engineering domains, including the physical, biological, and biomedical sciences. This paper presents the HACE theorem, which characterizes the features of the Big Data revolution, and proposes a Big Data processing model from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and also in the Big Data revolution.
1. INTRODUCTION
What is Big Data? A name and a marketing term, for sure, but also shorthand for advancing trends in technology that open the door to a new approach to understanding the world and making decisions. There is a lot more data, all the time, growing at 50 percent a year, or more than doubling every two years, estimates IDC, a technology research firm. It's not just more streams of data, but entirely new ones. For example, there are now countless digital sensors worldwide in industrial equipment, automobiles, electrical meters, and shipping crates. They can measure and communicate location, movement, vibration, temperature, humidity, and even chemical changes in the air, meaning a lot of different types of data altogether.

2. BIG DATA AND DATA MINING
Big Data is data available from heterogeneous, autonomous sources in extremely large amounts, which gets updated in fractions of seconds. An example is the data stored at the servers of Facebook: most of us use Facebook daily and upload various types of information and photos, and all this data gets stored in the data warehouses at Facebook's servers. This data is big data, so called due to its complexity. Another example is the storage of photos at Flickr. These are good real-time examples of Big Data; yet another would be the readings taken by electronic telescopes surveying the universe. Now, the term Data Mining: finding the exact, useful information or knowledge in the collected data for future actions is data mining. So, collectively, the term Big Data Mining means a close-up view, with a great deal of detailed information, of Big Data, as shown in Fig. 1 below.

Fig. 1. Data Mining with Big Data.

3. KEY FEATURES OF BIG DATA
The features of Big Data are:
• It is huge in size.
• The data keeps changing from time to time.
• Its data sources are of different types and from different places.
• It is free from the influence, guidance, or control of any single entity.
• It is very complex in nature, and thus hard to handle.
It is huge in size because data from various sources is collected together. Consider the example of Facebook: huge numbers of people upload their data in various forms, such as text, images, or videos, and they also keep changing their data continuously. This tremendous, instantaneously and continually changing stock of data is stored in a warehouse, and such large storage of data requires a large area for actual implementation. As the size is too large, no one is capable of controlling it alone; Big Data needs to be controlled by dividing it into groups. Due to its large size, decentralized control, and different data sources with different types, Big Data becomes very complex and hard to handle. We cannot manage it with the local tools that we use for managing regular data in real time. For major Big Data-related applications, such as Google, Flickr, and Facebook, a large number of server farms are deployed all over the world to ensure nonstop services and quick responses for local markets.

4. CHALLENGING ISSUES IN DATA MINING WITH BIG DATA
There are three sectors in which the challenges for Big Data arrive. These three sectors are:
• Mining platform.
• Privacy.
• Design of mining algorithms.

Basically, Big Data is stored at different places, and the data volumes keep growing as data is generated continuously, so collecting all the data stored at different places into one location is very expensive. If we use typical data mining methods (those used for mining small-scale data on our personal computer systems) for mining Big Data, this becomes an obstacle, because the typical methods require the data to be loaded into main memory, and even a very large main memory cannot hold it all. Maintaining privacy is one of the main aims of data mining algorithms. Presently, to mine information from Big Data, parallel-computing-based algorithms such as MapReduce are used. In such algorithms, large data sets are divided into a number of subsets, and mining algorithms are applied to those subsets; finally, summation algorithms are applied to the results of the mining algorithms to meet the goal of Big Data mining. In this whole procedure, privacy constraints can obviously break, as we divide the single Big Data set into a number of smaller datasets.
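The divide-mine-merge procedure described above can be sketched in a few lines. The toy example below is ours: the "mining" step is reduced to frequency counting so that the partial results from each partition merge cleanly, the same map-then-reduce shape that frameworks like MapReduce industrialize.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Toy partition-mine-merge pipeline in the spirit of MapReduce:
 * each partition is mined independently and the partial results are merged.
 */
public class PartitionMineMerge {
    public static void main(String[] args) {
        List<String> records = List.of(
                "photo", "text", "video", "photo", "text", "photo");

        // "Map" phase: process records in parallel, emitting (type, 1) pairs;
        // "Reduce" phase: merge the partial counts into one global model.
        Map<String, Long> counts = records.parallelStream()
                .collect(Collectors.groupingBy(r -> r, Collectors.counting()));

        System.out.println(counts); // e.g. {photo=3, text=2, video=1}
    }
}

The caveat from the text applies: per-partition statistics aggregate correctly only when the mining step is decomposable (counts and sums are; many global patterns are not), which is exactly why naive splitting can mislead, as the blind-men-and-elephant analogy below illustrates.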
While designing such algorithms, we face various challenges. As shown in Figure 2, several blind men are observing a giant elephant, and each is trying to draw a conclusion about what the thing actually is. One says the thing is a hose; another says it is a tree, or a pipe, and so on. Each one is observing only some part of the giant elephant and not the whole, so each blind person's prediction differs from what the thing actually is. Similarly, when we divide Big Data into a number of subsets and apply mining algorithms to those subsets, the results of those mining algorithms, once collected together, will not always point us to the actual result we want.

5. RELATED WORK
At the level of the mining platform sector, parallel programming models like MapReduce are at present being used for the analysis and mining of data. MapReduce is a batch-oriented parallel computing model, and there is still a certain gap in performance with relational databases. Improving the performance of MapReduce and enhancing the real-time nature of large-scale data processing have received a significant amount of attention, with MapReduce parallel programming being applied to many machine learning and data mining algorithms. Data mining algorithms usually need to scan through the training data to obtain the statistics needed to solve or optimize the model. For those who intend to hire a third party, such as auditors, to process their data, it is very important to have efficient and effective access to the data. In such cases, the user's privacy restrictions, like no local copies or no downloading allowed, may be faced. Therefore, a privacy-preserving public auditing mechanism has been proposed for large-scale data storage [1]. This public-key-based mechanism is used to enable third-party auditing, so users can safely allow a third party to analyze their data without breaching the security settings or compromising data privacy. In the case of the design of data mining algorithms, knowledge evolution is a common phenomenon in real-world systems, and as the problem statement differs, the knowledge will differ accordingly. For example, when we go to a doctor for treatment, the doctor's treatment program continuously adjusts to the condition of the patient; knowledge evolves similarly. For this, Wu [2][3][4] proposed and established the theory of local pattern analysis, which has laid a foundation for global knowledge discovery in multisource data mining. This theory provides a solution not only for the problem of full search, but also for finding global models that traditional mining methods cannot find.

6. CONCLUSION
Big Data is going to continue growing during the next years, and each data scientist will have to manage a much greater amount of data every year. This data is going to be more diverse, larger, and faster. We discussed some insights about the topic, and what we consider to be the main concerns and the main challenges for the future. Big Data is becoming the new Final Frontier for scientific data research and for business applications. We are at the beginning of a new era where Big Data mining will help us to discover knowledge that no one has discovered before. Everybody is warmly invited to participate in this intrepid journey.

REFERENCES
[1] C. Wang, S.S.M. Chow, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE Trans. Computers, vol. 62, no. 2, pp. 362-375, Feb. 2013.
[2] X. Wu and S. Zhang, "Synthesizing High-Frequency Rules from Different Data Sources," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 2, pp. 353-367, Mar./Apr. 2003.
[3] X. Wu, C. Zhang, and S. Zhang, "Database Classification for Multi-Database Mining," Information Systems,
vol. 30, no. 1, pp. 71-88, 2005.
[4] K. Su, H. Huang, X. Wu, and S. Zhang, "A Logical Framework for Identifying Quality Knowledge from Different Data Sources," Decision Support Systems, vol. 42, no. 3, pp. 1673-1683, 2006.
[5] E.Y. Chang, H. Bai, and K. Zhu, "Parallel Algorithms for Mining Large-Scale Rich-Media Data," Proc. 17th ACM Int'l Conf. Multimedia (MM '09), pp. 917-918, 2009.
[6] D. Howe et al., "Big Data: The Future of Biocuration," Nature, vol. 455, pp. 47-50, Sept. 2008.
[7] A. Labrinidis and H. Jagadish, "Challenges and Opportunities with Big Data," Proc. VLDB Endowment, vol. 5, no. 12, pp. 2032-2033, 2012.
[8] Y. Lindell and B. Pinkas, "Privacy Preserving Data Mining," J. Cryptology, vol. 15, no. 3, pp. 177-206, 2002.
MOBILE CYBER THREATS
Arpna Saxena
Assistant Professor, MCA Department
E-mail: saxenaarpna@gmail.com
Abstract – Smart phones and tablets have long been established as popular personal electronic devices. Cyber-threats, including those targeting mobile devices, are directly linked to cybercrime. In most developed countries, creating and distributing malicious software is a criminal offence. Although such criminal acts are perpetrated in virtual environments, their victims lose real assets, such as personal data and money. Combating cybercrime is particularly difficult because cybercriminals do not need to cross the borders of other countries to commit crimes in those territories. At the same time, enforcement authorities in these same countries have to overcome numerous barriers in order to administer justice. Therefore, international cooperation between information security experts and law enforcement authorities is required to effectively combat crime in the virtual world.

1. INTRODUCTION
Smart phones, or mobile phones with advanced capabilities like those of personal computers (PCs), are appearing in more people's pockets, purses, and briefcases. Smart phones' popularity and relatively lax security have made them attractive targets for attackers.

The number and sophistication of attacks on mobile phones are increasing, and countermeasures are slow to catch up.

Smart phones and personal digital assistants (PDAs) give users mobile access to email, the internet, GPS navigation, and many other applications. However, smart phone security has not kept pace with traditional computer security. Technical security measures, such as firewalls, antivirus, and encryption, are uncommon on mobile phones, and mobile phone operating systems are not updated as frequently as those on personal computers [3]. Mobile social networking applications sometimes lack the detailed privacy controls of their PC counterparts. Unfortunately, many smart phone users do not recognize these security shortcomings. Many users fail to enable the security software that comes with their phones, and they believe that surfing the internet on their phones is as safe as or safer than surfing on their computers [4].

Meanwhile, mobile phones are becoming more and more valuable as targets for attack. People are using smart phones for an increasing number of activities and often store sensitive data, such as email, calendars, contact information, and passwords, on the devices. Mobile applications for social networking keep a wealth of personal information. Recent innovations in mobile commerce have enabled users to conduct many transactions from their smart phone, such as purchasing goods and applications over wireless networks, redeeming coupons and tickets, banking, processing point-of-sale payments, and even paying at cash registers.

2. TYPICAL ATTACKS LEVERAGE PORTABILITY AND SIMILARITY TO PCS
Mobile phones share many of the vulnerabilities of PCs. However, the attributes that make mobile phones easy to carry, use, and modify open them to a range of attacks.
• Perhaps most simply, the very portability of mobile phones and PDAs makes them easy to steal. The owner of a stolen phone could lose all the data stored on it, from personal identifiers to financial and corporate data. Worse, a sophisticated attacker with enough time can defeat most security features of mobile phones and gain access to any information they store [5].
• Many seemingly legitimate software applications, or apps, are malicious [6]. Anyone can develop apps for some of the most popular mobile operating systems, and mobile service providers may offer third-party apps with little or no evaluation of their safety. Sources that are not affiliated with mobile service providers may also offer unregulated apps that access locked phone capabilities. Some users "root" or "jailbreak" their devices, bypassing operating system lockout features to install these apps.
• Even legitimate smart phone software can be exploited. Mobile phone software and network services have vulnerabilities, just like their PC counterparts do. For years, attackers have exploited mobile phone software to eavesdrop, crash phone software, or conduct other attacks [7]. A user may trigger such an attack through some explicit action, such as clicking a maliciously designed link that exploits a vulnerability in a web browser. A user may also be exposed to attack passively, however, simply by using a device that has a vulnerable application or network service running in the background [8].
• Phishing attacks use electronic communications to trick users into installing malicious software or giving away

sensitive information. Email phishing is a common attack on PCs, and it is just as dangerous on email-enabled mobile phones. Mobile phone users are also vulnerable to phishing voice calls ("vishing") and SMS/MMS messages ("smishing") [9]. These attacks target feature phones (mobile phones without advanced data and wireless capabilities) as well as smart phones, and they sometimes try to trick users into receiving fraudulent charges on their mobile phone bill. Phishers often increase their attacks after major current events, crafting their communications to look like news stories or solicitations for charitable donations. Spammers used this strategy after the March 2011 earthquake and tsunami in Japan [10].

3. CONSEQUENCES OF A MOBILE ATTACK CAN BE SEVERE
Many users may consider mobile phone security to be less important than the security of their PCs, but the consequences of attacks on mobile phones can be just as severe. Malicious software can make a mobile phone a member of a network of devices that can be controlled by an attacker (a "botnet"). Malicious software can also send device information to attackers and perform other harmful commands. Mobile phones can also spread viruses to PCs that they are connected to. Losing a mobile phone used to mean only the loss of contact information, call histories, text messages, and perhaps photos. In more recent years, however, losing a smart phone can also jeopardize financial information stored on the device in banking and payment apps, as well as usernames and passwords used to access apps and online services. If the phone is stolen, attackers could use this information to access the user's bank account or credit card account. An attacker could also steal, publicly reveal, or sell any personal information extracted from the device, including the user's own information, information about contacts, and GPS locations. Even if the victim recovers the device, he or she may receive many spam emails and SMS/MMS messages and may become the target of future phishing attacks. Some personal and business services add a layer of authentication by calling a user's mobile phone or sending an additional password via SMS before allowing the user to log onto the service's website. A stolen mobile phone gets an attacker one step closer to accessing such services as the user. If the device contains the owner's username and password for the service, the attacker would have everything necessary to access the service.

4. FIVE CORNERSTONES OF MOBILE CYBERSECURITY
Mobile communications are a complex ecosystem comprised of a broad list of technologies and players, integrated into a "system-of-systems" that enables the wireless environment that consumers enjoy today. Within this ecosystem, security is often addressed in terms of five cornerstone segments, as shown below.

Next, we explore each of the five segments, the threat landscape, and the proactive steps the mobile industry is taking to address the threats through solutions available today.

1. Consumers and End Users
Industry is working hard, and with growing success, to educate users on how to reduce their cybersecurity risks. Best practices that the industry recommends for consumers to become security savvy include:

• Configure Devices to Be More Secure – Smart phones and other mobile devices have password features that lock the devices on a scheduled basis. After a predetermined period of inactivity (e.g., one minute, two minutes, etc.), the device requires the correct PIN or password to be entered. Encryption, remote-wipe capabilities and, depending on the operating system, anti-virus software may also serve to improve security.

• "Caveat Link" – Beware of suspicious links. Do not click on links in suspicious emails or text messages, as they may lead to malicious websites.

• Exercise Caution Downloading Apps – Avoid applications from unauthorized application stores. Some application stores vet apps so they do not contain malware. Online research on an app before downloading is often a sound first step.

• Check Permissions – Check the access (i.e., access to which segments of your mobile device) that an application requires, including Web-based applications, browsers, and native applications.


• Know Your Network – Avoid using unknown Wi-Fi networks and use public Wi-Fi hot spots sparingly. Hackers can create “honey pot” Wi-Fi hot spots intended to attract, and subsequently compromise, mobile devices. Similarly, they troll public Wi-Fi spots looking for unsecured devices. If you have Wi-Fi at home, enable encryption.

• Don’t Publish Your Mobile Phone Number – Posting your mobile phone number on a public website can make it a target for software programs that crawl the Web collecting phone numbers that may later receive spam, if not outright phishing attacks.

• Use Your Mobile Device as It Was Set Up – Some people use third-party firmware to override settings on their mobile devices (e.g., enabling them to switch service providers). Such “jail breaking” or “rooting” can result in malware or malicious code infecting the mobile devices.

These are only a few of the strategies and resources available from the industry, but the bottom line is that users play an important role in protecting their devices, especially in what they download and which links they click on. Consumers benefit most from cybersecurity when they are aware of the variety of security options that are a part of their mobile devices.

2. Device
Today’s mobile devices are miniature computers. In addition to these truly “smart” phones, there is a growing variety of devices such as tablets and netbook computers that include wireless connectivity. These new mobile devices are more advanced than those sold even five years ago. All computers, including mobile devices, need to be secured to prevent intrusion. Applications downloaded from questionable, or even legitimate, sites can record information typed onto the device (e.g., bank account numbers, passwords and PINs); read data stored on the device (including emails, attachments, text messages, credit card numbers and login/password combinations to corporate intranets); and record conversations (not only telephone calls) within earshot of the phone. A malicious application or malware can transmit any of this information to hackers (including those in foreign countries) who then use the information for nefarious and criminal purposes, such as transferring money out of bank accounts and conducting corporate espionage.

3. Network-Based Security Policies
From a consumer perspective, network operators provide a wealth of tools that can be used to provide improved security and data protection for information that resides on the smart phone or tablet. Such tools include device management capabilities, firewalls and other network-based functionality. These tools give consumers the power to protect their information, but network service providers cannot dictate security policies for consumers to follow. However, service providers offer a wealth of consumer educational materials and practices for enhanced security protection.

4. Authentication and Control
A lost, unlocked smart phone with pre-programmed access to a bank account or a corporate intranet can cause incalculable damage. Authentication control is the process of determining if a user is authorized to access information stored on the device or over a network connection.
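
As a toy illustration of this idea (not any particular handset’s implementation), the sketch below combines the two mechanisms mentioned above, an inactivity timer and a PIN check, with a retry limit that triggers a wipe; the timeout, attempt limit and salt are invented placeholders.

    import time, hmac, hashlib

    LOCK_TIMEOUT_S = 120   # assumed policy: auto-lock after two minutes idle
    MAX_ATTEMPTS = 10      # assumed policy: wipe after ten wrong PINs

    def pin_digest(pin: str, salt: bytes) -> bytes:
        # Store only a salted, slow hash of the PIN, never the PIN itself.
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

    class DeviceLock:
        def __init__(self, pin: str):
            self.salt = b"demo-salt"            # a real device uses a random per-device salt
            self.stored = pin_digest(pin, self.salt)
            self.last_activity = time.time()
            self.locked = False
            self.failures = 0

        def touch(self):
            # Called on user activity: re-lock if the device sat idle too long.
            if time.time() - self.last_activity > LOCK_TIMEOUT_S:
                self.locked = True
            self.last_activity = time.time()

        def unlock(self, pin: str) -> bool:
            # Constant-time comparison avoids leaking information through timing.
            if hmac.compare_digest(pin_digest(pin, self.salt), self.stored):
                self.locked, self.failures = False, 0
                return True
            self.failures += 1
            if self.failures >= MAX_ATTEMPTS:
                self.remote_wipe()              # stand-in for the remote-wipe capability
            return False

        def remote_wipe(self):
            print("wiping device data")
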
5. Cloud, Networks and Services
Networks deliver many of the applications and services that consumers enjoy today. As illustrated, the complex security solutions the industry provides encompass multiple types of network access connections: the cloud, the Internet backbone, core network and access network connections.

5. ACT QUICKLY IF YOUR MOBILE PHONE OR PDA IS STOLEN
• Report the loss to your organization and/or mobile service provider. If your phone or PDA was issued by an organization or is used to access private data, notify your organization of the loss immediately. If your personal phone or PDA was lost, contact your mobile phone service provider as soon as possible to deter malicious use of your device and minimize fraudulent charges.
• Report the loss or theft to local authorities. Depending on the situation, it may be appropriate to notify relevant staff and/or local police.
• Change account credentials. If you used your phone or PDA to access any remote resources, such as corporate networks or social networking sites, revoke all credentials that were stored on the lost device. This may involve contacting your IT department to revoke issued certificates or logging into websites to change your password.
• If necessary, wipe the phone. Some mobile service providers offer remote wiping, which allows you or your provider to remotely delete all data on the phone.

6. CYBER SAFETY
As our use of these devices increases and expands to new features and functions in other areas such as banking and healthcare, they may hold even more personal data. By following the simple CYBERSAFETY tips of CTIA – The Wireless Association and its members, consumers can actively protect themselves and their data.

C – Check to make sure the websites, downloads, SMS links, etc. are legitimate and trustworthy BEFORE you visit or add them to your mobile device so you can avoid adware/spyware/viruses/unauthorized charges/etc.


Spyware and adware may provide unauthorized access to your information, such as location, websites visited and passwords, to questionable entities. You can validate an application’s usage by checking with an application store. To ensure a link is legitimate, search the entity’s website and match it to the unknown URL.

Y – Year-round, 24/7, always use and protect your wireless device with passwords and PINs to prevent unauthorized access. Passwords/PINs should be hard to guess, changed periodically and never shared. When you aren’t using your device, set its inactivity timer to a reasonably short period (i.e., 1–3 minutes).

B – Back up important files from your wireless device to your personal computer or to a cloud service/application periodically in case your wireless device is compromised, lost or stolen.

E – Examine your monthly wireless bill to ensure there is no suspicious and unauthorized activity. Many wireless providers allow customers to check their usage 24/7 by using shortcuts on their device, calling a toll-free number or visiting their website. Contact your wireless provider for details.

R – Read user agreements BEFORE installing software or applications on your mobile device. Some companies may use your personal information, including location, for advertising or other uses. Unfortunately, there are some questionable companies that include spyware/malware/viruses in their software or applications.

S – Sensitive and personal information, such as banking or health records, should be encrypted or safeguarded with additional security features, such as Virtual Private Networks (VPNs). For example, many application stores offer encryption software that can be used to encrypt information on wireless devices.

A – Avoid rooting, jail breaking or hacking your mobile device and its software, as it may void your device’s warranty and increase the risk of cyber threats to a wireless device.

F – Features and apps that can remotely lock, locate and/or erase your device should be installed and used to protect your wireless device and your personal information from unauthorized users.

E – Enlist your wireless provider and your local police when your wireless device is stolen. If your device is lost, ask your provider to put your account on “hold” in case you find it. In the meantime, your device is protected and you won’t be responsible for charges if it turns out the lost device was stolen. The U.S. providers are creating a database designed to prevent smart phones, which their customers report as stolen, from being activated and/or provided service on the networks.

T – Train yourself to keep your mobile device’s operating system (OS), software or apps updated to the latest version. These updates often fix problems and possible cyber vulnerabilities. You may need to restart your mobile device after the updates are installed so they are applied immediately. Many smart phones and tablets are like mini-computers, so it’s a good habit to develop.

Y – You should never alter your wireless device’s unique identification numbers (i.e., International Mobile Equipment Identity (IMEI) and Electronic Serial Number (ESN)). Similar to a serial number, the wireless network authenticates each mobile device based on its unique number.

7. CONCLUSION
Effective cybersecurity – whether for a nation, business, organization or individual – is the result of a partnership between the entity being protected and those in the industry that makes mobile communications possible. All of the participants, from the consumer to the manufacturers, carriers, applications developers, software providers, etc., have a role to play. At every step of the process, there is a shared responsibility for making cybersecurity a priority. The good news is that as a result of the historical, ongoing and concerted efforts of industry, regulators and lawmakers, public knowledge of the need for heightened cybersecurity has grown and continues to grow.

While achieving political consensus is always a challenge, there appears to be a widespread understanding among policymakers that a single legislative “fix” for cybersecurity does not exist; therefore, a flexible approach to legislation in the wireless arena is necessary. The threat landscape is, by definition, a non-static one. Enabling cybersecurity, as a result, cannot be achieved by following a set list of mandated criteria. Even if such a list were to exist, it would be outdated the same day it was established.

Cybersecurity threats and vulnerabilities can change from day to day, and even hour to hour. The effective steps for managing cyber risks today are unlikely to suffice for very long. Maintaining security in a wireless environment is a constantly evolving dynamic.

However, policymakers play an important role in cybersecurity. Policy efforts that are informed by the realities of the cybersecurity atmosphere – no silver bullet, no single fix, many moving parts and all of them interdependent – are a must.

8. REFERENCES
[1] PandaLabs. “Quarterly Report PandaLabs (January-March 2011).”

[2] Symantec. “Symantec Report Finds Cyber Threats Skyrocket in Volume and Sophistication.”
[3] National Institute of Standards and Technology. “Guidelines on Cell Phone and PDA Security (SP 800-124).”
[4] Trend Micro. “Smartphone Users: Not Smart Enough About Security.”
[5] National Institute of Standards and Technology. “Guidelines on Cell Phone and PDA Security (SP 800-124).”
[6] “Technical Information Paper: Cyber Threats to Mobile Devices” (http://www.us-cert.gov/reading_room/TIP10-105-01.pdf)
[7] National Institute of Standards and Technology. “Guidelines on Cell Phone and PDA Security (SP 800-124).”
[8] John Cox. “iPhone on Wi-Fi Vulnerable to Security Attack.”
[9] US-CERT. “Technical Information Paper-TIP-10-105-01: Cyber Threats to Mobile Devices.”

Ms. Arpna Saxena is working as Assistant Professor with Ajay Kumar Garg Engineering College, Ghaziabad. She has completed her MCA in 2003 from HNBU University, Uttranchal and M.Tech. in 2014 from Guru Gobind Singh University, Delhi.


APPLICATIONS OF PALM VEIN AUTHENTICATION TECHNOLOGY
Indu Verma
Assistant Professor, MCA Department AKGEC, Ghaziabad (U.P)
E-mail: induverma029@gmail.com

Abstract– The vein information is hard to duplicate since veins are internal to the human body. The palm vein authentication technology offers a high level of accuracy. Palm vein authentication uses the vascular patterns of an individual’s palm as personal identification data. Compared with a finger or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. This paper discusses the contactless palm vein authentication device that uses blood vessel patterns as a personal identifying factor.

I. INTRODUCTION
Palm vein authentication uses the vascular patterns of an individual’s palm as personal identification data. Compared with a finger [1] or the back of a hand, a palm has a broader and more complicated vascular pattern and thus contains a wealth of differentiating features for personal identification. The palm is an ideal part of the body for this technology; it normally does not have hair, which can be an obstacle for photographing the blood vessel pattern, and it is less susceptible to a change in skin color, unlike a finger or the back of a hand. The deoxidized hemoglobin in the vein vessels absorbs light having a wavelength of about 7.6 × 10^-4 mm within the near-infrared area [2]. When the infrared ray image is captured, unlike the image seen in Fig. 1, only the blood vessel pattern containing the deoxidized hemoglobin is visible as a series of dark lines (Fig. 2). Based on this feature, the vein authentication device translates the black lines of the infrared ray image as the blood vessel pattern of the palm (Fig. 3), and then matches it with the previously registered blood vessel pattern of the individual.

Fig. 1: Visible ray image; Fig. 2: Infrared ray image; Fig. 3: Extracted vein pattern; Fig. 4: Palm vein sensor.

Biometrics are automated methods of recognizing a person based on a physiological or behavioral characteristic. Among the features measured are: face, fingerprints, hand geometry, handwriting, iris, retina, vein, and voice. Biometric systems are superior because they provide a nontransferable means of identifying people, not just cards or badges. The key point about an identification method that is “nontransferable” is that it cannot be given or lent to another individual, so nobody can get around the system; they personally have to go through the control point. The fundamentals of biometrics are that they are things about a person:
- Measurable - things that can be counted, numbered or otherwise quantified
- Physiological characteristics - like height, eye color, fingerprint, DNA etc.
- Behavioral characteristics - such as the way a person moves, walks, types.

In a practical biometric system (i.e., a system that employs biometrics for personal recognition), there are a number of other issues that should be considered, including: Performance, which refers to the achievable recognition accuracy and speed, the resources required to achieve the desired recognition accuracy and speed, as well as the operational and environmental factors that affect the accuracy and speed; Acceptability, which indicates the extent to which people are willing to accept the use of a particular biometric identifier (characteristic) in their daily lives; Circumvention, which reflects how easily the system can be fooled using fraudulent methods. A key advantage of biometric authentication is that biometric data is based on physical characteristics that stay constant throughout one’s lifetime and are difficult (some more than others) to fake or change. Biometric identification can provide extremely accurate, secured access to information; fingerprints, palm vein and iris scans produce absolutely unique data sets (when done properly). Automated biometric identification can be done rapidly and uniformly, without resorting to documents that may be stolen, lost or altered. It is not easy to determine which method of biometric data gathering and reading does the “best” job of ensuring secure authentication. Each of the different methods has inherent advantages and disadvantages. Some are less invasive than others; some can be done without the knowledge of the subject; others are very difficult to fake.

Palm vein authentication uses an infrared beam to penetrate the user’s hand as it is held over the sensor; the veins within the palm of the user are returned as black lines. Palm vein authentication has a high level of authentication accuracy due to the uniqueness and complexity of the vein patterns of the palm.

Fig. 5: Iris scan; Fig. 6: Face recognition; Fig. 7: Finger print; Fig. 8: Palm print.

Because the palm vein patterns are internal to the body, this is a difficult method to forge. Also, the system is contactless and hygienic for use in public areas.

II. PREVIOUS WORKS
Biometrics authentication is a growing and controversial field in which civil liberties groups express concern over privacy and identity issues. Today, biometric laws and regulations are in process and biometric industry standards are being tested. Biometrics enables automatic recognition based on “who you are” as opposed to “what you know” (a PIN) or “what you have” (an ID card). Recognition of a person by his body and then linking that body to an externally established identity forms a very powerful tool for identity management: biometric recognition. Figure 1 shows the different types of biometric authentication. Canadian airports started using iris scans in 2005 to screen pilots and airport workers. Pilots were initially worried about the possibility that repeated scans would negatively affect their vision, but the technology has improved to the point where that is no longer an issue. Canada Customs uses an iris scan system called CANPASS-Air for low-risk travelers at Pearson airport.

Finger vein authentication is a new biometric method utilizing the vein patterns inside one’s fingers for personal identification. Vein patterns are different for each finger and for each person, and as they are hidden underneath the skin’s surface, forgery is extremely difficult. These unique aspects of finger vein pattern recognition set it apart from previous forms of biometrics and have led to its adoption by many financial institutions as their newest security technology.

III. IMPLEMENTATION OF CONTACTLESS PALM VEIN AUTHENTICATION
The contactless palm vein authentication technology consists of image sensing and software technology. The palm vein sensor (Fig. 6) captures an infrared ray image of the user’s palm. The lighting of the infrared ray is controlled depending on the illumination around the sensor, and the sensor is able to capture the palm image regardless of the position and movement of the palm. The software then matches the translated vein pattern with the registered pattern, while measuring the position and orientation of the palm by a pattern matching method.
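
The vendors’ matching algorithms are proprietary, but the general idea — scoring a freshly captured binary vein mask against the enrolled template over a small range of positions, in the spirit of the correlation-based vein matching of [5] — can be sketched as follows. The image size, shift range and acceptance threshold are illustrative assumptions, and np.roll’s wrap-around is tolerated for brevity.

    import numpy as np

    def match_score(template: np.ndarray, capture: np.ndarray, max_shift: int = 8) -> float:
        # Best overlap ratio (intersection over union) of two binary vein
        # masks over small x/y shifts, to absorb palm position differences.
        best = 0.0
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(capture, dy, axis=0), dx, axis=1)
                overlap = np.logical_and(template, shifted).sum()
                union = np.logical_or(template, shifted).sum()
                if union:
                    best = max(best, overlap / union)
        return best

    enrolled = np.random.rand(64, 64) > 0.8     # stand-in for a stored template
    probe = np.roll(enrolled, 3, axis=1)        # stand-in for a freshly captured mask
    print(match_score(enrolled, probe) > 0.6)   # acceptance threshold 0.6 is invented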


Implementation of a contactless identification system enables applications in public places or in environments where hygiene standards are required, such as in medical applications. In addition, sufficient consideration was given to individuals who are reluctant to come into direct contact with publicly used devices.

The first step in all palm vein authentication applications is the enrolment process, which scans the user’s palm and records the unique pattern as an encrypted biometric template in the database or on the smart card itself. In banking applications, for example, once a new customer has been issued a smart card, he/she is asked to visit the bank in order to enroll his/her vein data.

Fig. 9: Schematic of the hand vein pattern imaging module.

IV. COMPARISONS
Comparing Bioguard with the proposal of Mohamed Shahin et al. [5], Bioguard’s palm vein technology appears to provide advantages such as:

Accuracy and Reliability – The uniqueness and complexity of vein patterns, together with advanced authentication algorithms, ensure unsurpassed accuracy, a field test duration shorter than iris recognition, and near-zero false rejection and false acceptance rates.

Security – Vein patterns are internal and unexposed, making them almost impossible to duplicate or forge. Images are converted into encrypted biometric templates at the sensor level, preventing misuse of the actual image.

Contactless – Hygienic, non-invasive, “no touch” technology enables use when hands are dirty, wet or even wearing some types of latex gloves.


Cost-Effective – Attractively priced while saving the huge potential costs of malpractice litigation, privacy violations, etc. Provides a high level of security at a reasonable cost.

Usability – A compact form factor provides greater flexibility and ease of implementation in a variety of security applications. Palm vein technology supports a variety of banking scenarios:

1. ATMs 2. Walk-in customers 3. Internal branch security 4. Remote banking.

The advantage in [5], by contrast, is that the Hand Vein Verification System (HVVS) is accurate at the low to medium security level. This hand vein verification is purely system based; it puts more effort into the overall performance of a system. The system checks the FAR (%) (false acceptance rate) and the FRR (%) (false rejection rate) at different threshold values to get the optimal threshold.
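
To make that threshold sweep concrete, a minimal sketch with invented score lists might look like this; a real evaluation uses thousands of genuine and impostor comparisons.

    import numpy as np

    genuine = np.array([0.91, 0.85, 0.88, 0.79, 0.95])   # same-user match scores (invented)
    impostor = np.array([0.30, 0.45, 0.52, 0.61, 0.25])  # cross-user match scores (invented)

    for t in np.arange(0.4, 0.9, 0.1):
        far = np.mean(impostor >= t) * 100   # impostors wrongly accepted, in %
        frr = np.mean(genuine < t) * 100     # genuine users wrongly rejected, in %
        print(f"threshold={t:.1f}  FAR={far:.0f}%  FRR={frr:.0f}%")

    # The optimal threshold is commonly read off near the point where the
    # FAR and FRR curves cross (the equal error rate).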
Biological and Medical Sciences, vol 2,No.1,winter 2007,
V. CONCLUSION
Reliable personal recognition is critical to many applications in our day-to-day life.

Biometrics refers to automatic recognition of an individual based on her behavioural and/or physiological characteristics. It is obvious that any system assuring reliable personal recognition must necessarily involve a biometric component. This is not, however, to state that biometrics alone can deliver a reliable personal recognition component. Biometric-based systems also have some limitations that may have adverse implications for the security of a system. While some of the limitations of biometrics can be overcome with the evolution of biometric technology and a careful system design, it is important to understand that foolproof personal recognition systems simply do not exist and perhaps never will. Security is a risk management strategy that identifies, controls, eliminates, or minimizes uncertain events that may adversely affect system resources and information assets. The security level of a system depends on the requirements (threat model) of an application and the cost-benefit analysis.

As biometric technology matures, there will be an increasing interaction among the market, technology, and the applications. This interaction will be influenced by the added value of the technology, user acceptance, and the credibility of the service provider. It is too early to predict where and how biometric technology will evolve and in which applications it will get embedded. But it is certain that biometric-based recognition will have a profound influence on the way we conduct our daily business.

REFERENCES
[1] N. Miura, A. Nagasaka, and T. Miyatake, “Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles,” in Proceedings of the 9th IAPR Conf. on Machine Vision Applications (MVA2005), Tsukuba Science City, Japan, 2005, pp. 347-350.
[2] Bio-informatics Visualization Technology Committee, Bio-informatics Visualization Technology (Corona Publishing, 1997), p. 83, Fig. 3.2.
[3] “Palm Vein Authentication System: A Review”, International Journal of Control and Automation, Vol. 3, No. 1, March 2010.
[4] “Palm vein authentication technology and its applications”, Proceedings of the Biometric Consortium Conference, September 19-21, 2005, Hyatt Regency Crystal City, Arlington, VA, USA.
[5] Mohamed Shahin, Ahmed Badawi, and Mohamed Kamel, “Biometric Authentication Using Fast Correlation of Near Infrared Hand Vein Patterns”, International Journal of Biological and Medical Sciences, Vol. 2, No. 1, Winter 2007, pp. 141-148.
[6] Shi Zhao, Yiding Wang and Yunhong Wang, “Extracting Hand Vein Patterns from Low-Quality Images: A New Biometric Technique Using Low-Cost Devices”, Fourth International Conference on Image and Graphics, 2007.
[7] Masaki Watanabe, Toshio Endoh, Morito Shiohara, and Shigeru Sasaki, “Palm vein authentication technology and its applications”, The Biometric Consortium Conference, September 19-21, 2005, USA, pp. 1-2.
[8] “Palm Vein Authentication Technology” white paper, Bioguard, Innovative Biometric Solutions, March 2007.
[9] Yuhang Ding, Dayan Zhuang and Kejun Wang, “A Study of Hand Vein Recognition Method”, The IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, July 2005.

Indu Verma is working as an assistant professor in Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). She obtained her M.Tech. (Computer Engineering) with Hons. from Shobhit University and MCA from U.P. Technical University, Lucknow (U.P.). She has been in teaching for the last 9.5 years and has been a member of several academic and administrative committees. During her teaching tenure she has also coordinated a National Conference and many technical fests at college level. She has attended several seminars, workshops and conferences at various levels, and has papers published at national and international conferences. Her areas of research include biometric system authentication, computer networks, network security and databases.

PON TOPOLOGIES FOR DYNAMIC OPTICAL
ACCESS NETWORKS
Sanjeev K. Prasad
Asst. Prof. MCA Department, AKGEC Ghaziabad
sanjeevkps2002@gmail.com

Abstract– PON-based access networks envisage the demonstration of scalability to allow gradual deployment of time and wavelength multiplexed architectures in a single platform without changes in fiber infrastructure, and also highly efficient bandwidth allocation for service provision and upgrade on demand. This is achieved by the application of coarse-fine grooming to route reflective ONUs of time and wavelength PONs collectively, and the development of MAC protocols enabling the OLT to dynamically assign available time slots among ONUs and demonstrate efficient bandwidth and wavelength assignment.

Keywords: PON, coarse WDM (CWDM), dynamic bandwidth allocation (DBA), quality of service (QoS).

1. INTRODUCTION
The emergence of new bandwidth-intensive applications, articulated by distance learning, online gaming, Web 2.0 and movie delivery by means of high-definition video, has ultimately justified the necessity of upgrading the access network infrastructure to provide fat-bandwidth pipelines in close proximity to subscribers. Passive optical networks (PONs) currently offer more opportunities to communicate these services than ever before, with potential connection speeds of up to 100 Mbit/s in mind [1]. A scalable multi-PON access network architecture [2] has been investigated in that direction to provide interoperability among dynamic time division multiplexing (TDM) and wavelength division multiplexing (WDM) PONs through coarse WDM (CWDM) routing in the optical line terminal (OLT). To provide bandwidth on demand, a novel TDM dynamic minimum bandwidth (DMB) allocation protocol and an upgraded version have been proposed to achieve quality of service (QoS) at three different service levels and diverse network throughputs [3]. In addition, to allow for WDM-PON resource allocation and to overcome the inevitable network congestion of single-wavelength networks, the developed medium access control (MAC) protocols have been extended to implement logical point-to-point topologies based on general loop-back WDM-PON architectures [2] to increase service provisioning between reflective optical network units (ONUs) and the OLT by vigorously distributing network capacity simultaneously between the upstream and downstream.

2. NETWORK ARCHITECTURE
The network architecture in Fig. 1 exhibits a single 4x4 coarse arrayed waveguide grating (AWG) in the OLT to route multiple TDM and WDM-PONs by means of a single tunable laser (TL1) and receiver (RX1), allowing for coarse-fine grooming to display smooth network upgrade. Proposed coarse AWG devices display 7 nm, 3 dB Gaussian passband windows [4], denoted in Fig. 1 by coarse ITU-T channels λ1 = 1530 nm and λ2 = 1550 nm, set to accommodate up to 16, 0.4 nm-spaced wavelengths to address a total of 16 ONUs per PON. In downstream, TL1 will optimally utilize λ1,9, placed at the centre of the AWG coarse channel λ1, to broadcast information to all ONUs of TDM-PON1. To address a WDM-PON, TL1 will switch on all 16 wavelengths, centered ±3.2 nm around coarse channel λ2, i.e. λ2,1–λ2,16, to address jointly all ONUs in WDM-PON4 [5].

Figure 1. Unlimited-capacity multi-PON access network architecture.

The established network interoperability is a key feature since it allows a smooth migration from single to multi-wavelength optical access to address increasing bandwidth requirements. Reflective semiconductor optical amplifier (RSOA)-based ONUs are universally employed, avoiding the necessity of wavelength-specific, local optical sources. The use of multiple transceivers in a single OLT to serve all reflective PONs [5] allows for centralized control to distribute ONU capacity among upstream and downstream on demand and concurrently provide each PON with multiple wavelengths for enhanced bandwidth allocation flexibility. Finally, the network exhibits increased scalability since extra TLs can be directly applied at unused AWG ports in the OLT, e.g. TL2 at I/O port 2, to maintain high network performance at increased traffic load with a low OLT inventory count, since the ratio of subscriber number to OLT transceivers is comparatively high.


3. NETWORK BANDWIDTH MANAGEMENT
To allow centralized, dynamic bandwidth allocation among the architecture’s TDM and WDM-PONs, the OLT, according to the developed algorithms, will assign varying frame time slots, initially to each PON in order of demand, and subsequently arrange each PON’s bandwidth among its ONUs based on their service level and individual bandwidth requirement [3, 6]. In that direction, the DMB protocol [3] provides ONUs in TDM-PONs with three service levels at different weights, Wt, to represent network accessing priority. Subsequently the algorithm automatically assigns to each ONU a guaranteed minimum bandwidth, Bt_min, from the overall network capacity, to satisfy their basic service requirements at various service levels t, and apportions any unused bandwidth to ONUs according to their buffer queuing status. Following probable variations in network capacity, it is capable of readjusting the guaranteed minimum and unused bandwidths among ONUs to comply with subscriber contracts. For service level t, for example, the maximum allocated bandwidth Bmax_allocated for ONUi will be equal to the sum of Bt_min and the extra assigned bandwidth, Bex_assigned. Otherwise, if the bandwidth requirement, Ri, is smaller than that total, Bmax_allocated will be equal to the required bandwidth, Ri, as given by (1):

    Bmax_allocated = min(Ri, Bt_min + Bex_assigned)    (1)
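
In code form, the rule in (1) is a capped minimum-plus-excess assignment. The sketch below also apportions spare capacity in proportion to outstanding demand as a stand-in for the protocol’s buffer-queue-status weighting; all names and numbers are illustrative, not the published implementation.

    def dmb_allocate(requests, minimums, capacity):
        # Guaranteed minima first, each capped by the actual request, as in (1).
        grant = {i: min(requests[i], minimums[i]) for i in requests}
        # Then share any unused bandwidth according to remaining demand.
        spare = max(0.0, capacity - sum(grant.values()))
        backlog = {i: requests[i] - grant[i] for i in requests}
        total = sum(backlog.values())
        if spare and total:
            for i in requests:
                grant[i] += min(backlog[i], spare * backlog[i] / total)
        return grant

    # ONU 1 asks for 40 units against a guaranteed 20; ONU 2 asks for less than its minimum.
    print(dmb_allocate({1: 40, 2: 10}, {1: 20, 2: 20}, capacity=60))
    # {1: 40.0, 2: 10} -- ONU 2 is capped at its request Ri, exactly as (1) states.
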
To further reduce the packet waiting time in the ONUs, the network traffic self-similarity characteristics have been incorporated into the DMB protocol [6]. In addition, since the upstream and downstream channels are independent, the grant messages for subsequent polling cycles can be communicated before the last ONU has finished its upstream transmission. Consequently, the OLT in what is known as the advanced DMB (ADMB) protocol possesses the capability to automatically re-arrange the upstream transmission order by assigning the ONU with the longest upstream transmission period to the last upstream time slot, reducing the idle period and increasing the overall network throughput. Contrasting the ADMB protocol with published dynamic bandwidth assignment algorithms, simulation results have shown [6] substantial reductions in mean packet delay, particularly at high network load, in relation to the IPACT algorithm [7].
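
The re-ordering step itself is simple to picture: one easy way to guarantee the longest burst ends up in the last slot is to sort the grants in ascending order of duration, as in this toy illustration with invented numbers.

    # (ONU id, granted upstream transmission time in microseconds); values invented.
    grants = [("onu3", 180), ("onu1", 420), ("onu2", 90)]

    # Ascending sort guarantees the ONU with the longest upstream period
    # transmits in the last time slot, shrinking the idle period.
    schedule = sorted(grants, key=lambda g: g[1])
    print(schedule)   # [('onu2', 90), ('onu3', 180), ('onu1', 420)]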
single OLT with coarse-fine grooming features. The use of a
4. LONG-REACH PONS
The scalability of the presented network architecture is currently being further explored by investigating its ability to demonstrate a long-reach, wide-splitting-ratio network shared by numerous TDM and WDM-PONs [8], offering significant cost savings in an incorporated access-metro network infrastructure [8]. The current architecture is believed to be straightforwardly extendable to such a topology since it already serves a number of different physical PON locations with dedicated coarse wavelength channels. To that extent, extensive splitting ratios can be achieved by exploiting the decentralized coarse routing capability of the architecture to allow amplification units to be installed en route if necessary, and to allocate an increased number of dedicated wavelength channels to each physical PON using the multiple free spectral ranges (FSRs) of the CWDM AWG.

In view of the MAC layer of a typical 100 km long-reach TDM-PON, the direct implementation of the DMB protocol has displayed limited performance in terms of bandwidth utilization due to a recorded 500 μs increase in packet propagation time compared to 25 km PONs, exhibiting a total of 1000 μs-wide idle time slots in each transmission cycle as a result of the report and grant packet polling times. To achieve acceptable channel throughput and packet delay performance, an innovative two-state DMB (TSD) protocol has been demonstrated that utilizes the idle time slots in the sense of virtual polling cycles, during which the ONUs can transmit data by means of a prediction method estimating their bandwidth requirement [9]. The amounts of virtual bandwidth allocated to ONUs are determined by the DMB algorithm, with each estimated ONU bandwidth requirement and the 1000 μs idle period regarded as the maximum polling period parameter [9]. As a result, simulation results have presented a significant 34% improvement in terms of channel throughput performance as well as a reduction in packet delay and packet loss rate [9].

5. HYBRID WIRELESS OPTICAL NETWORKS
The need for convergence of wired and low-cost wireless technologies in the access network, providing end users with connection flexibility and mobility, is being explored by investigating the interoperability of the multi-PON architecture [2] with WiMAX. This could offer network resilience in case of fiber failure to individual ONUs through the use of overlapping WiMAX cells, while allowing for efficient dynamic resource allocation to base-station ONUs of a TDM-PON to provide additional WiMAX channels by means of a centralized signal processing approach in the OLT.

6. CONCLUSIONS
The access network architecture presented in this paper utilizes the coarse channels of an AWG in the OLT and reflective ONUs to demonstrate dynamic TDM and WDM-PONs through a single OLT with coarse-fine grooming features. The use of a single-AWG, single-TL OLT to address multiple reflective ONUs of a WDM-PON has demonstrated error-free routing of 16, 0.4 nm-spaced wavelengths over a single, 7 nm-wide Gaussian passband window of the AWG in the presence of PDW shifting.

To manage packet transmission, several DBA protocols have been proposed to dynamically and efficiently arrange the bandwidth among ONUs. Depending on the physical-layer architecture, the innovative DMB protocol has successfully been modified to develop the ADMB and MDMB protocols for TDM and WDM-PONs respectively, to efficiently improve the performance of channel throughput and packet delay. Recent developments have concentrated on 100 km-reach networks to achieve network performance comparable to standard access PONs in terms of channel utilization rate, packet delay, and packet loss rate, at a superior 400% wider network coverage, by means of the application of the TSD protocol. Notable initiatives have also been carried out to investigate the application of multiple-wavelength operation over standard splitter-based GPONs, by extending the dynamic bandwidth algorithms to include an additional dimension, that of wavelength, and the integration of WiMAX to terminate wireless users at base-station ONUs with the intention of providing flexibility in resource allocation among end users.

REFERENCES
[1] P.-F. Fournier, “From FTTH pilot to pre-rollout in France,” presented at CAI Cheuvreux, France, 2007.
[2] Y. Shachaf, C.-H. Chang, P. Kourtessis, and J. M. Senior, “Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON,” OSA Optics Express, vol. 15, pp. 7840-7844, 2007.
[3] C.-H. Chang, P. Kourtessis, and J. M. Senior, “GPON service level agreement based dynamic bandwidth assignment protocol,” Electronics Letters, vol. 42, pp. 1173-1174, 2006.
[4] J. Jiang, C. L. Callender, C. Blanchetière, J. P. Noad, S. Chen, J. Ballato, and D. W. Smith, “Arrayed Waveguide Gratings Based on Perfluorocyclobutane Polymers for CWDM Applications,” IEEE Photonics Technology Letters, vol. 18, pp. 370-372, 2006.
[5] Y. Shachaf, P. Kourtessis, and J. M. Senior, “An interoperable access network based on CWDM-routed PONs,” presented at the 33rd European Conference and Exhibition on Optical Communication (ECOC), Berlin, Germany, 2007.
[6] C.-H. Chang, P. Kourtessis, and J. M. Senior, “Dynamic Bandwidth assignment for Multi-service access in GPON,” presented at the 12th European Conference on Networks and Optical Communications (NOC), Stockholm, Sweden, 2007.
[7] G. Kramer, B. Mukherjee, and G. Pesavento, “IPACT: a dynamic protocol for an Ethernet PON (EPON),” IEEE Communications Magazine, vol. 40, pp. 74-80, 2002.
[8] R. P. Davey and D. B. Payne, “The future of fiber access systems?” BT Technology Journal, vol. 20, pp. 104-114, 2002.
[9] C. H. Chang, N. M. Alvarez, P. Kourtessis, and J. M. Senior, “Dynamic Bandwidth assignment for Multi-service access in long-reach GPON,” presented at the 33rd European Conference and Exhibition on Optical Communication (ECOC), Berlin, Germany, 2007.

Sanjeev K. Prasad is an Assistant Professor in AKGEC, Ghaziabad, an affiliated college of Uttar Pradesh Technical University, Lucknow (India). He received his M.Tech. degree in Computer Science & Engineering from Uttar Pradesh Technical University, Lucknow (India) and is pursuing a Ph.D. in Computer Science from Gurukula Kangri Vishwavidyalaya, Haridwar (India). Before joining this college, he served at several other colleges affiliated to Uttar Pradesh Technical University, Lucknow (India). He has published 4 papers in international journals and 3 papers in international conferences. His areas of interest include Mobile Ad-Hoc Networks and Computer Graphics.


AN OVERVIEW OF SEMANTIC SEARCH SYSTEMS


Dr. Pooja Arora
Assistant Professor, MCA Department AKGEC
Ghaziabad, India
puja.arora06@gmail.com

Abstract–Research on semantic search aims to improve conventional information search and retrieval methods, and facilitate information acquisition, processing, storage and retrieval on the semantic web. The past ten years have seen a number of implemented semantic search systems and various proposed frameworks. A comprehensive survey is needed to gain an overall view of current research trends in this field. This article reports findings based on which a generalized semantic search framework is formalized. Further, issues with regard to future research in this area are described.

Keywords–Semantic Search, Knowledge Acquisition, Semantic Web, Information Retrieval.

1. INTRODUCTION
Research in the information retrieval (IR) community has developed a variety of techniques to help people locate relevant information in large document repositories. Besides classical IR models (i.e., the Vector Space and Probabilistic Models) [7], extended models such as Latent Semantic Indexing, machine learning based models (i.e., Neural Network, Symbolic Learning, and Genetic Algorithm based models) and Probabilistic Latent Semantic Analysis (PLSA) have been devised with the hope of improving the information retrieval process. However, the rapid expansion of the Web and its growing wealth of information pose increasing difficulties in retrieving information efficiently on the Web. To arrange more relevant results on top of the retrieved sets, most contemporary Web search engines utilize various ranking algorithms such as PageRank, HITS, and Citation Indexing that exploit link structures to rank the search results. Despite the substantial success, those search engines face perplexity in certain situations due to the information overload problem on one hand, and superficial understanding of user queries and documents on the other. The semantic web is an extension of the current Web in which resources are described using logic-based knowledge representation languages for automated machine processing across heterogeneous systems. In recent years, its related technologies have been adopted to develop semantic-enhanced search systems. The significance of research in this area is clear for two reasons: it supplements conventional information retrieval by providing search services centered on entities, relations, and knowledge; and development of the semantic web also demands enhanced search paradigms in order to facilitate acquisition, processing, storage, and retrieval of the semantic information. The article provides a survey to gain an overall view of the current research status [1].

2. SEMANTIC SEARCH SYSTEMS
Conventional search techniques are developed on the basis of a word-computation model and enhanced by link analysis. On one hand, semantic search extends the scope of the traditional information retrieval paradigm from mere document retrieval to entity and knowledge retrieval; on the other hand, it improves the conventional IR methods by looking from a different perspective: the meaning of words, which can be formalized and represented in machine-processible format using ontology languages such as RDF and OWL. For example, an arbitrary resource or entity can be described as an instance of a class in an ontology, having attribute values and relations with other entities. With the logical representation of resources, a semantic search system is able to retrieve meaningful results by drawing inference on the query and knowledge base. As a simple example, the meaning of the query for “people in School of Computer Science” will be interpreted by a semantic search system as individuals (e.g., professors and lecturers) who have relations (e.g., work for or affiliated with) with the school. On the contrary, conventional IR systems interpret the query based on its lexical form. Web pages in which the words “people” and “computer science” co-occur are probably retrieved. The cost is that users have to extract useful information from a number of pages, possibly querying the search engine several times.
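
As a toy illustration of this contrast (not the implementation of any system surveyed here), the fragment below answers such a query by matching typed relations over subject-predicate-object triples rather than keywords; every name in it is invented.

    # A miniature in-memory knowledge base of (subject, predicate, object) triples.
    triples = [
        ("alice", "type", "Professor"),
        ("bob", "type", "Lecturer"),
        ("alice", "worksFor", "SchoolOfCS"),
        ("bob", "affiliatedWith", "SchoolOfCS"),
        ("page17", "mentions", "computer science"),   # a mere keyword co-occurrence
    ]

    PERSON_TYPES = {"Professor", "Lecturer"}          # subclass knowledge an ontology would supply
    PERSON_RELATIONS = {"worksFor", "affiliatedWith"}

    def people_in(org):
        # "people in X": entities of a person type linked to X by a work relation.
        persons = {s for s, p, o in triples if p == "type" and o in PERSON_TYPES}
        return {s for s, p, o in triples
                if s in persons and p in PERSON_RELATIONS and o == org}

    print(people_in("SchoolOfCS"))   # {'alice', 'bob'}; the page merely mentioning the words is not returned
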
2.1 Document-oriented Search
Document-oriented search can be thought of as an extension of the conventional IR approaches where the primary goal is to retrieve documents, such as web pages and text documents, document fragments, scientific publications and ontologies. In search systems of this category, documents are annotated using logic-based knowledge representation languages, with respect to annotation and domain ontologies, to provide approximate representations for the topics or contents of the original documents. The retrieval process is carried out by matching user queries with the resulting semantic annotations [3]. The early work in the SHOE search system facilitates users constructing constrained logical queries by specifying attribute values of ontology classes to retrieve precise results. The main limitations are the manual annotation process and lack of inference support. There has been considerable work on speculating the significance of logical reasoning for semantic search systems.


considerable work on speculating the significance of logical or an inference process using a knowledge base. AquaLog is a
reasoning for semantic search systems. OWLIR adopts an question-answering system. It implements similarity services
integrated approach that combines logical inference and for relations and classes to identify synonyms of verbs and
traditional information retrieval techniques. A document is nouns appearing in a query using WordNet5. The obtained
represented using original text and semantic markup. The synonyms are used to match property and entity names in the
semantic markup and the domain ontology are exploited by knowledge base for answering queries. SemSearch supports
logical reasoners to provide enhanced search results. natural language queries and translates them into formal queries
Conventional IR system based on the original text is integrated for reasoning. An entity referred by a keyword is matched
into the semantic search system to provide complementary against a subject, predicate or object in the knowledge base
results in case no result is returned by the semantic search. using combinations. The above two systems process entity
relations using language lexicon or word combinations.
2.2 Entity and Knowledge-oriented Search
Entity and knowledge-oriented search method expands the 2.5 Semantic Analytics
scope of conventional IR systems which solely retrieve Semantic analytics, also known as semantic association
documents. Systems based on this method often model analysis, is introduced by Sheth et al. to discover new insights
ontologies as directed graphs and exploit links between entities and actionable knowledge from large amounts of
to provide exploration and navigation. Attribute values and heterogeneous content. It is ssentially a graph-theoretic based
relations of the retrieved entities are also shown to provide approach that represents, discovers and interprets complex
additional knowledge and rich user experiences. TAP is one of relationships between resources[5]. By modeling RDF database
the first empirical studies of large scale semantic search on the as directed graph, relationships between entities in knowledge
Web[4]. It improves the traditional text search by understanding base are represented as graph paths consisting of a sequence
the denotation of the query terms, and augmenting search of links. Search algorithms for semantic associations such as
results using a broad-coverage knowledge base with an breadth first and heuristics-based search are discussed in.
inference mechanism based on graph traversal. Recently a Effective ranking algorithms are indispensable because the
number of systems such as CS-Aktive, RKB Explorer, and number of relationships between entities in a knowledge base
SWSE have been implemented based on the principles of “open might be much larger than the number of entities. In [2] the
world” and “linked data4”, which coincide with spirit of the ranking evaluates a number of parameters: context,
semantic web of being “distributed”. subsumption, trust, rarity, popularity, and association length.
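
A breadth-first search for such association paths over an RDF-style directed graph takes only a few lines; the graph below is invented and stands in for a real knowledge base.

    from collections import deque

    # Labeled, directed edges of a tiny RDF-like graph (invented data).
    edges = {
        "authorA": [("wrote", "paper1")],
        "paper1": [("citedBy", "paper2")],
        "paper2": [("writtenBy", "authorB")],
    }

    def association_path(start, goal):
        # BFS returns a shortest chain of labeled links between two entities.
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            node, path = queue.popleft()
            if node == goal:
                return path
            for label, nxt in edges.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(node, label, nxt)]))
        return None

    print(association_path("authorA", "authorB"))
    # [('authorA', 'wrote', 'paper1'), ('paper1', 'citedBy', 'paper2'), ('paper2', 'writtenBy', 'authorB')]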

2.6 Mining-based Search
The effectiveness of semantic search depends largely on the quality and coverage of the underlying knowledge base. The search methodologies discussed so far either utilise explicit knowledge, which is asserted in the knowledge base, or implicit knowledge, which is derived using logical inference with rules [6]. Another kind of knowledge, which we refer to as “hidden knowledge”, cannot be easily observed using techniques such as information extraction, natural language processing, logical inference, and semantic analytics. For example: “Who are the experts in the semantic web research community?”, “Which institutions rank highly in the machine learning research area?”. Such knowledge can only be derived from large amounts of data by using some sort of sophisticated data analysis technique. We refer to approaches that utilise techniques to infer hidden knowledge as mining-based semantic search.

3. CONCLUSION AND FUTURE WORK
Research on semantic search aims to expand the scope and improve the retrieval quality of conventional IR techniques. I have investigated a number of existing systems, and classified them into several categories in accordance with their methodologies, scope, functionalities, as well as most distinctive features. The main finding is: though varieties of systems have been developed, a logical semantic search framework is not formalized.


REFERENCES
[1] H. Alani, K. O’Hara and N. Shadbolt, “Ontocopi: Methods and tools for identifying communities of practice,” Intelligent Information Processing 2002, Vol. 221, pp. 225-236.
[2] B. Aleman-Meza, C. Halaschek-Wiener, I. B. Arpinar, C. Ramakrishnan and A. P. Sheth, “Ranking complex relationships on the semantic web,” IEEE Internet Computing, Vol. 9, No. 3, 2005, pp. 37-44.
[3] B. Aleman-Meza, M. Nagarajan, C. Ramakrishnan, L. Ding, P. Kolari, A. P. Sheth, I. B. Arpinar, A. Joshi and T. Finin, “Semantic analytics on social networks: experiences in addressing the problem of conflict of interest detection,” WWW 2006, ACM, pp. 407-416.
[4] K. Anyanwu, A. Maduko and A. P. Sheth, “Semrank: ranking complex relationship search results on the semantic web,” WWW 2005, ACM, pp. 117-127.
[5] K. Anyanwu and A. P. Sheth, “ρ-Queries: Enabling Querying for Semantic Associations on the Semantic Web,” WWW 2003, pp. 690-699.
[6] D. Artz and Y. Gil, “A survey of trust in computer science and the semantic web,” J. Web Sem., Vol. 5, No. 2, 2007, pp. 58-71.
[7] R. A. Baeza-Yates and B. A. Ribeiro-Neto, Modern Information Retrieval, ACM Press / Addison-Wesley, 1999.

A SOFT INTRODUCTION TO MACHINE LEARNING
Anurag Sharma
Student, MCA Department
sharmaanurag.236@gmail.com

Abstract– Machine Learning is an important area of Artificial Intelligence. It involves automating the procurement of the information base required for the functioning of expert systems. In this paper we will be introduced to machine learning in its softest form and will learn what it is, why it is so important and what its various applications are in today’s scenario in the ongoing development of artificial intelligence.

INTRODUCTION
Machine Learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience [1]. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, it has now grown into a subfield of computer science and an important sector to understand if one wants to efficiently implement knowledge in a machine. A machine can be made to learn to respond to a problem in two ways, manually and automatically. To do it manually is to write programs which teach the machine how to respond to a particular situation, but wouldn’t it be wonderful if the machine could understand it by itself and act accordingly? That’s where machine learning plays its part.

In this paper we will learn the algorithms to implement machine learning, why it’s so important and what some of its real world applications are.

DIFFERENT STYLES OF MACHINE LEARNING

Supervised Learning
Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a time. A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Example problems are classification and regression. Example algorithms include Logistic Regression and the Back Propagation Neural Network.
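
A minimal supervised example in this spirit, using scikit-learn’s logistic regression on an invented spam/not-spam feature set (the features and labels are made up for illustration):

    from sklearn.linear_model import LogisticRegression

    # Invented training data: [number of links, count of the word "free"] per email.
    X_train = [[0, 0], [1, 0], [7, 4], [9, 6], [1, 1], [8, 5]]
    y_train = [0, 0, 1, 1, 0, 1]            # known labels: 1 = spam, 0 = not spam

    model = LogisticRegression()
    model.fit(X_train, y_train)             # training corrects the model against known labels

    print(model.predict([[6, 3], [0, 1]]))  # e.g. [1 0]: spam, not spam
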
Unsupervised Learning
Input data is not labelled and does not have a known result. A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize data by similarity. Example problems are clustering, dimensionality reduction and association rule learning. Example algorithms include the Apriori algorithm and k-Means.
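
A matching unsupervised sketch, clustering unlabeled points with k-Means (data invented):

    from sklearn.cluster import KMeans

    # Unlabeled 2-D points; no known result is supplied.
    X = [[1, 1], [1, 2], [2, 1], [9, 9], [9, 8], [8, 9]]

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)   # e.g. [0 0 0 1 1 1]: structure deduced from the data alone
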
Semi-Supervised Learning
Input data is a mixture of labeled and unlabelled examples. There is a desired prediction problem, but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression. Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data [2].

Reinforcement Learning
The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics, trial-and-error search and delayed reward, are the two most important distinguishing features of reinforcement learning [3].

ALGORITHMS OF MACHINE LEARNING

WHY IS MACHINE LEARNING SO IMPORTANT?
Things like growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage have made machine learning important in today’s scenario.

All of these things mean it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. The result? High-value predictions that can guide better decisions and smart actions in real time without human intervention.


One key to producing smart actions in real time is automated model building. Analytics thought leader Thomas H. Davenport wrote in The Wall Street Journal that with rapidly changing, growing volumes of data, “... you need fast-moving modeling streams to keep up.” And you can do that with machine learning. He says, “Humans can typically create one or two good models a week; machine learning can create thousands of models a week.”

APPLICATIONS OF MACHINE LEARNING
Some real world applications of machine learning are:
• Face Recognition
• Anti-virus
• Email Filtering
• Genetics
• Signal Denoising
• Weather Forecast
• Computer Vision
• Internet Fraud Detection

CONCLUSION
Machine learning is a type of artificial intelligence that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data. The process of machine learning is similar to that of data mining. Both systems search through data to look for patterns. However, instead of extracting data for human comprehension – as is the case in data mining applications – machine learning uses that data to improve the program’s own understanding. Machine learning programs detect patterns in data and adjust program actions accordingly. It was thus seen that machine learning not only plays an important part in making a machine more intelligent without human intervention, but is also a very elaborate yet interesting field of computer science. With the help of machine learning, we can not only make a system which can predict its own actions, but also make a system which is close to our own mind.

REFERENCES
[1] Pat Langley, “Applications of Machine Learning and Rule Induction”, Stanford University, Stanford, California (USA), March 30, 1995, pp. 1.
[2] Jason Brownlee, “A Tour of Machine Learning Algorithms”, Machine Learning Mastery, November 25, 2013.
[3] Richard S. Sutton and Andrew G. Barto, “Reinforcement Learning: An Introduction”, The MIT Press, Cambridge, London (England).

Anurag Sharma is pursuing MCA from Ajay Kumar Garg Engineering College, Ghaziabad, U.P. (India). He has completed his graduation from CCS University. His hobbies are traveling and reading.

