CHAPTER 1 - INTRODUCTION
SPATIAL query processing is becoming an integral part of many new mobile applications. Recently, there has been a growing interest in the use of location-based spatial queries (LBSQs), which represent a set of spatial queries that retrieve information based on mobile users' current locations [2], [29]. User mobility and data exchange through wireless communication give LBSQs some unique characteristics that traditional spatial query processing in centralized databases does not address. Novel query processing techniques must be devised to handle the following new challenges:

1. Mobile query semantics. In a mobile environment, a typical LBSQ is of the form "find the top-three nearest hospitals." The result of the query depends on the location of its requester. Caching and sharing of query results must take into consideration the location of the query issuer.

2. High workload. The database resides in a centralized server, which typically serves a large mobile user community through wireless communication. Consequently, bandwidth constraints and scalability become the two most important design concerns of LBSQ algorithms.

3. Query promptness and accuracy. Due to user mobility, answers to an LBSQ will lose their relevance if there is a long delay in query processing or in communication. For example, answers to the query "find the top-three nearest hospitals" received after 5 minutes of high-speed driving will become meaningless. Instead, a prompt, albeit approximate, answer, telling the user right away the approximate top-three nearest hospitals, may serve the user much better. This is an important issue, as long latency in a high-workload wireless environment is not unusual.
The wireless environment and the communication constraints play an important role in determining the strategy for processing LBSQs. In the simplest approach, a user establishes point-to-point communication with the server so that her queries can be answered on demand. However, this approach suffers from several drawbacks. First, it may not scale to very large user populations. Second, to communicate with the server, a client must most likely use a fee-based cellular-type network to achieve a reasonable operating range. Third, users must reveal their current location and send it to the server, which may be undesirable for privacy reasons [19]. A more advanced solution is the wireless broadcast model [1], [15], [30]. It can support an almost unlimited number of mobile hosts (MHs) over a large geographical area with a single transmitter. With the broadcast model, MHs do not submit queries. Instead, they tune in to the broadcast channel for the information that they desire. Hence, the user's location is not revealed. One of the limitations of the broadcast model is that it restricts data access to be sequential. Queries can only be fulfilled after all the required on-air data arrives. This is why, in some cases, a 5-minute delay to the query "find the top-three nearest hospitals" would not be unusual. To alleviate this limitation, we propose a scalable low-latency approach for processing LBSQs in broadcast environments. Our approach leverages ad hoc networks to share information among mobile clients in a peer-to-peer (P2P) manner [17], [18]. The rationale behind our approach is based on the following observations: As mentioned previously, when a mobile user launches a nearest neighbor (NN) query, in many situations she would prefer an approximate result that arrives with a short response time rather than an accurate result with a long latency. The results of spatial queries often exhibit spatial locality.
For example, if two MHs are close to each other, the result sets of their spatial queries may overlap significantly. Query results of a mobile peer are valuable for two reasons:
- they can be used to answer queries of the current MH directly, and
- they can be used to dramatically reduce the latency for the current MH relative to on-air information.
P2P approaches can be valuable for applications where the response time is an important concern. Through mobile cooperative caching of the result sets, query results can be efficiently shared among mobile clients. In this paper, we concentrate on two common types of spatial searches, namely, kNN queries and window queries (WQs). The contributions of our study are as follows:

1. We identify certain characteristics of LBSQs that enable the development of effective sharing methods in broadcast environments.
2. We introduce a set of algorithms that verify whether data received from neighboring clients are complete, partial, or irrelevant answers to the posed query.
3. We utilize a P2P-based sharing method to improve the current approaches in answering on-air kNN queries and WQs.
4. We evaluate our approach through a probabilistic analysis of the hit ratio in sharing. In addition, through extensive simulation experiments, we evaluate the benefits of our approach with different parameter sets.

1.1 Scope of the Project

The scope of the project is to considerably reduce the latency in answering LBSQs. Our approach is based on peer-to-peer sharing, which enables us to process queries without delay at a mobile host by using query results cached in its neighboring mobile peers.
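Contribution 2 above, checking whether a neighbor's cached result is a complete, partial, or irrelevant answer, can be illustrated for window queries with a small sketch. The class and method names below are our own illustrative choices, not taken from the paper's algorithms:

```java
// Sketch of the answer-verification idea for window queries: a peer's
// cached window-query result is a COMPLETE answer if our query window
// lies entirely inside the peer's window, a PARTIAL answer if the two
// windows overlap, and IRRELEVANT otherwise.
public class AnswerVerifier {
    public enum Coverage { COMPLETE, PARTIAL, IRRELEVANT }

    // Axis-aligned rectangle [minX, maxX] x [minY, maxY].
    public static class Rect {
        final double minX, minY, maxX, maxY;
        public Rect(double minX, double minY, double maxX, double maxY) {
            this.minX = minX; this.minY = minY;
            this.maxX = maxX; this.maxY = maxY;
        }
        boolean contains(Rect o) {
            return minX <= o.minX && minY <= o.minY
                && maxX >= o.maxX && maxY >= o.maxY;
        }
        boolean intersects(Rect o) {
            return minX <= o.maxX && o.minX <= maxX
                && minY <= o.maxY && o.minY <= maxY;
        }
    }

    // Classify a peer's cached window against our query window.
    public static Coverage classify(Rect peerWindow, Rect queryWindow) {
        if (peerWindow.contains(queryWindow)) return Coverage.COMPLETE;
        if (peerWindow.intersects(queryWindow)) return Coverage.PARTIAL;
        return Coverage.IRRELEVANT;
    }
}
```

For kNN queries the verification is more involved, since a peer's cached result only certifies a circular region around the peer's own query point.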
Thus, the general access protocol for retrieving data on a wireless broadcast channel involves three main steps [15]:
Initial probe

The client tunes in to the broadcast channel and determines when the required index segment will arrive.
Index search
The client accesses a sequence of pointers in the index segment to figure out when to tune in to the broadcast channel to retrieve the required data.
Data retrieval
The client tunes in to the channel when packets containing the required data arrive and then downloads all the required information.

Two parameters, access latency and tuning time, characterize the broadcast model. The access latency represents the time duration from the point when a client requests its data to the point when the desired data is received. The tuning time is the amount of time spent by a client listening to the broadcast channel, which is proportional to the power consumption of the client. However, nearly all the existing spatial access methods are designed for databases with random-access disks. These existing techniques cannot be used effectively in a wireless broadcast environment, where only sequential data access is supported. Zheng et al. [31] proposed indexing the spatial data on the server by a space-filling curve. The Hilbert curve [16] was chosen for this purpose because of its superior locality. The index values of the data packets represent the order in which these data packets are broadcast. For example, the Hilbert curve in Fig. 3 groups data with close values so that they can be accessed within a short interval when they are broadcast sequentially. The MHs use on-air search algorithms [31] to answer LBSQs (kNN queries and WQs) over data that arrives in the order prescribed by the Hilbert curve.
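The Hilbert-curve ordering described above can be computed directly. The following is a standard sketch of the (x, y)-to-curve-index mapping on an n x n grid (n a power of two); it is a common textbook formulation, not code taken from [16] or [31]:

```java
// Map a cell (x, y) of an n x n grid (n a power of two) to its position
// d along the Hilbert curve. Cells that are close in space tend to get
// close d values, which is why broadcasting packets in d order keeps
// spatially related data near each other on air.
public class Hilbert {
    public static long xy2d(int n, int x, int y) {
        long d = 0;
        for (int s = n / 2; s > 0; s /= 2) {
            int rx = (x & s) > 0 ? 1 : 0;
            int ry = (y & s) > 0 ? 1 : 0;
            d += (long) s * s * ((3 * rx) ^ ry);
            // Rotate the quadrant so the curve's orientation is preserved.
            if (ry == 0) {
                if (rx == 1) {
                    x = s - 1 - x;
                    y = s - 1 - y;
                }
                int t = x; x = y; y = t;
            }
        }
        return d;
    }
}
```

On a 2 x 2 grid this visits (0,0), (0,1), (1,1), (1,0) in that order, and on larger grids each quadrant is traversed completely before the curve moves on, which is the locality property the broadcast schedule exploits.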
Three cooperative caching schemes, that is, CachePath, CacheData, and HybridCache, have been proposed for ad hoc networks. With CachePath, mobile nodes cache the data path and use it to redirect prospective requests to a neighboring node that has the data, instead of fetching the data from the remote data center. With CacheData, intermediate nodes cache the data to serve prospective queries. The HybridCache approach further improves performance by taking advantage of both CacheData and CachePath while avoiding their weaknesses. P2P cooperative caching can bring several distinctive benefits to a mobile system: improved access latency, reduced server workload, and alleviated point-to-point channel congestion. In this research, we leverage the P2P caching technique to alleviate the inherent access latency limitation in wireless broadcast environments.
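A minimal sketch of the HybridCache decision described above, as it might look at a single node. The size threshold and all names here are our own illustrative assumptions, not the original protocol:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the HybridCache idea: small items are cached directly
// (CacheData), while for large items only the id of a neighboring node
// known to hold the data is cached (CachePath), so a later request can
// be redirected instead of going to the remote data center.
public class HybridCache {
    private final int sizeThreshold;  // bytes; above this, cache only the path
    private final Map<String, byte[]> dataCache = new HashMap<>();
    private final Map<String, String> pathCache = new HashMap<>(); // dataId -> holder node

    public HybridCache(int sizeThreshold) {
        this.sizeThreshold = sizeThreshold;
    }

    // Called when a reply for dataId passes through this node from sourceNode.
    public void onDataSeen(String dataId, byte[] data, String sourceNode) {
        if (data.length <= sizeThreshold) {
            dataCache.put(dataId, data);        // CacheData
        } else {
            pathCache.put(dataId, sourceNode);  // CachePath
        }
    }

    // Local hit (CacheData) or null.
    public byte[] lookupData(String dataId) { return dataCache.get(dataId); }

    // Redirect hint (CachePath) or null; null means forward to the data center.
    public String lookupPath(String dataId) { return pathCache.get(dataId); }
}
```

A real HybridCache also weighs data freshness and passing frequency when choosing between the two modes; the threshold rule above captures only the core trade-off.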
Feasibility study should be performed on the basis of various criteria and parameters. The various feasibility studies are:
- Economic Feasibility
- Operational Feasibility
- Technical Feasibility
ECONOMIC FEASIBILITY
It refers to the benefits or outcomes we derive from the product as compared to the total cost we spend on developing the product. If the benefits are more or less the same as those of the older system, then it is not feasible to develop the product.
OPERATIONAL FEASIBILITY
It refers to the feasibility of the product to be operational. Some products may work very well at design and implementation but may fail in the real-time environment.

DEPT OF MCA, SSCET, KNL.
TECHNICAL FEASIBILITY
It refers to whether the software that is available in the market fully supports the present application. It studies the pros and cons of using a particular software for the development and its feasibility. It also studies the additional training that needs to be given to the people to make the application work.
SOFTWARE REQUIREMENTS:
Operating System : Windows XP
Software         : JAVA (JDK 1.6.0)
Back End         : Oracle
Each process at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process. Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design. A DFD, also known as a bubble chart, has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design. So it is the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.
Constructing a DFD:
Several rules of thumb are used in drawing DFDs:
- Processes should be named and numbered for easy reference. Each name should be representative of the process.
- The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source. An alternative is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal.
- The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store. Each data store should contain all the data elements that flow in and out. Questionnaires should contain all the data elements that flow in and out. Missing interfaces, redundancies, and the like are then accounted for, often through interviews.
Salient Features of a DFD:
The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD. The DFD does not indicate the time factor involved in any process, that is, whether the data flows take place daily, weekly, monthly, or yearly. The sequence of events is not brought out on the DFD.
Types of DFD:
1. Current Physical
2. Current Logical
Fig: 3.1 Mobile Hosts (MH1-MH4)
Advantages
- Maintaining high scalability and accuracy.
- Reducing the latency in processing a query by using neighboring peers.
Applications
Nowadays this is used in mobile search applications to obtain an approximate result in a short time instead of an accurate result after a long delay.

SDLC METHODOLOGIES:
This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by the developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement." This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The steps for the Spiral Model can be generalized as follows:
- The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
- A preliminary design is created for the new system.
- A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
- A second prototype is evolved by a fourfold procedure: 1. Evaluating the first prototype in terms of its strengths,
weaknesses, and risks. 2. Defining the requirements of the second prototype. 3. Planning and designing the second prototype. 4. Constructing and testing the second prototype.
At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
The existing prototype is evaluated in the same manner as was the previous prototype, and if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired. The final system is constructed based on the refined prototype. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.
ADVANTAGES:
- Estimates (i.e., budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.
- It is better able to cope with the changes that software development generally entails.
- Software engineers can get their hands on and start working on the core of a project earlier.
3.2 MODULES
- Multiple peer simulation module
- Server module
- Sharing-based nearest neighbor query visualization module
Server Module
The server module is responsible for storing points of interest indexed by an R-tree structure. It performs NN queries from peers with pruning bounds and records the I/O load and access frequency of the spatial database server.
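The server module's NN search could be sketched as follows. A real implementation would traverse the R-tree with pruning bounds; here a linear scan over the points of interest with a bounded heap stands in for that traversal so the example stays self-contained, and a counter mirrors the module's recording of server load. All names are illustrative:

```java
import java.util.*;

// Sketch of the server module: stores points of interest and answers
// kNN queries, recording how often the spatial database is accessed.
public class SpatialServer {
    public static final class Point {
        public final double x, y;
        public final String name;
        public Point(double x, double y, String name) {
            this.x = x; this.y = y; this.name = name;
        }
    }

    private final List<Point> pois = new ArrayList<>();
    private long accessCount = 0;   // stands in for the recorded I/O load

    public void add(Point p) { pois.add(p); }
    public long accesses() { return accessCount; }

    // k nearest neighbors of (qx, qy), nearest first.
    public List<Point> knn(double qx, double qy, int k) {
        accessCount++;
        // Max-heap by distance: the root is the current farthest candidate.
        PriorityQueue<Point> heap = new PriorityQueue<>(
            Comparator.comparingDouble((Point p) -> dist2(p, qx, qy)).reversed());
        for (Point p : pois) {
            heap.add(p);
            if (heap.size() > k) heap.poll();  // drop the farthest candidate
        }
        List<Point> result = new ArrayList<>(heap);
        result.sort(Comparator.comparingDouble(p -> dist2(p, qx, qy)));
        return result;
    }

    private static double dist2(Point p, double qx, double qy) {
        double dx = p.x - qx, dy = p.y - qy;
        return dx * dx + dy * dy;
    }
}
```

Swapping the linear scan for an R-tree branch-and-bound traversal changes only the inside of knn; the interface the peers see stays the same.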
The query location and preferred criteria are the input for the Mobile Host. The Mobile Host gets the results for the corresponding location and criteria, with considerably reduced latency, by obtaining results from neighboring peers.
Module 1: Mobile Host 1 finding the nearest neighbor through Mobile Host 2.

Module 2: Mobile Host communicating with the Centralized Server.

Module 3: MH1, MH2, and MH3 interacting with the Server.
The Unified Modeling Language (UML) is a standard language for writing software blueprints. The UML is appropriate for modeling systems ranging from enterprise information systems to distributed Web-based applications and even to hard real-time embedded systems. It is a very expressive language, addressing all the views needed to develop and then deploy such systems. The vocabulary of the UML encompasses three kinds of building blocks:
- Things
- Relationships
- Diagrams

Things are the abstractions that are first-class citizens in a model, relationships tie these things together, and diagrams group interesting collections of things. A diagram is the graphical presentation of a set of elements, most often rendered as a connected graph of vertices (things) and arcs (relationships). Diagrams project a system from different angles and perspectives. The UML has nine diagrams, classified into two categories.
A use case diagram shows a set of use cases and actors and their relationships. Use case diagrams address the static use case view of a system. These diagrams are especially important in organizing and modeling the behaviors of a system. Use case diagrams commonly contain:
- Use cases
- Actors
- Dependency, generalization, and association relationships
Use case:
A use case describes a set of sequences, in which each sequence represents the interaction of the things outside the system (its actors) with the system itself. A use case represents a functional requirement of a system as a whole. A use case involves the interaction of actors and the system.
Actor:
An actor represents a coherent set of roles that users of use cases play when interacting with these use cases. Actors can be human or they can be automated systems.
Dependency:
A dependency is a semantic relationship between two things in which a change to one thing (the independent thing) may affect the semantics of the other thing (the dependent thing).
Generalization:
A generalization is a specialization/generalization relationship in which objects of the specialized element (the child) are substitutable for objects of the generalized element (the parent). With this, the child shares the structure and the behavior of the parent.
Association:
An association is a structural relationship that describes a set of links, a link being a connection among objects. Aggregation is a special kind of association, representing a structural relationship between a whole and its parts.
USE CASE DIAGRAM:
Fig: 3.3 Use Case Diagram for Finding the Nearest Neighbor

CLASS DIAGRAM:
ACTIVITY DIAGRAM:
APPLICATION DEVELOPMENT:
N-TIER APPLICATIONS
N-tier applications can easily implement the concepts of distributed application design and architecture. N-tier applications provide strategic benefits to enterprise solutions. While 2-tier client-server applications can help us create quick and easy solutions and may be used for rapid prototyping, they can easily become a maintenance and security nightmare. N-tier applications provide specific advantages that are vital to the business continuity of the enterprise. Typical features of a real-life n-tier application may include the following:
- Security
- Availability and Scalability
- Manageability
- Easy Maintenance
- Data Abstraction
The above mentioned points are some of the key design goals of a successful n-tier application that intends to provide a good Business Solution.
DEFINITION:
Simply stated, an n-tier application helps us distribute the overall functionality into various tiers or layers:
- Presentation Layer
- Business Rules Layer
- Data Access Layer
- Database/Data Store
Each layer can be developed independently of the others, provided that it adheres to the standards and communicates with the other layers as per the specifications. This is one of the biggest advantages of the n-tier application. Each layer can potentially treat the other layers as black boxes. In other words, each layer does not care how another layer processes the data as long as it sends the right data in the correct format.
1. THE PRESENTATION LAYER

Also called the client layer, it comprises components that are dedicated to presenting the data to the user. For example: Windows/Web forms and buttons, edit boxes, text boxes, labels, grids, etc.

2. THE BUSINESS RULES LAYER
This layer encapsulates the business rules or the business logic of the application. Having a separate layer for business logic is of great advantage, because any changes in business rules can be easily handled in this layer. As long as the interface between the layers remains the same, any changes to the functionality/processing logic in this layer can be made without impacting the others. A lot of client-server applications failed to implement successfully because changing the business logic was a painful process.
3. THE DATA ACCESS LAYER

This layer comprises components that help in accessing the database. If used in the right way, this layer provides a level of abstraction for the database structures. Simply put, changes made to the database, tables, etc., do not affect the rest of the application, because of the data access layer. The different application layers send their data requests to this layer and receive the response from this layer.

4. THE DATABASE LAYER

This layer comprises the database components such as DB files, tables, views, etc. The actual database could be created using SQL Server, Oracle, flat files, etc. In an n-tier application, the entire application can be implemented in such a way that it is independent of the actual database. For instance, you could change the database location with minimal changes to the data access layer. The rest of the application should remain unaffected.
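The layering described above can be made concrete with a small sketch: the business rules layer depends only on a data-access interface, so the backing store can change without touching it. All names here are illustrative:

```java
import java.util.*;

// Sketch of n-tier layering: the business layer talks to the PoiDao
// interface and never to the database directly, so swapping the
// backing store does not ripple upward.
public class LayeringDemo {
    // Data access layer contract.
    public interface PoiDao {
        void save(String id, String name);
        Optional<String> findName(String id);
    }

    // One interchangeable implementation; a JDBC-backed one could replace it
    // without any change to PoiService.
    public static final class InMemoryPoiDao implements PoiDao {
        private final Map<String, String> table = new HashMap<>();
        public void save(String id, String name) { table.put(id, name); }
        public Optional<String> findName(String id) {
            return Optional.ofNullable(table.get(id));
        }
    }

    // Business rules layer: depends only on the PoiDao interface.
    public static final class PoiService {
        private final PoiDao dao;
        public PoiService(PoiDao dao) { this.dao = dao; }
        public String describe(String id) {
            return dao.findName(id).map(n -> "POI: " + n).orElse("unknown POI");
        }
    }
}
```

The presentation layer would in turn call only PoiService, completing the chain of black-box dependencies the text describes.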
With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, you first translate a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.
Fig: 4.1.1 Procedure of program execution

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make "write once, run anywhere" possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or an iMac.
The Java Application Programming Interface (Java API)

You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms. The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide. The following figure depicts a program that's running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.
Fig: 4.1.2 Java API and the virtual machine insulate the program from the hardware.
Native code is code that, after you compile it, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.
However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs. An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet. A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server. How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:
- The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
- Applets: The set of conventions used by applets.
- Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.
- Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
- Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.
- Java Database Connectivity (JDBC): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.
We can't promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and requires less effort than other languages. We believe that Java technology will help you do the following:
- Get started quickly: Although the Java programming language is a powerful object-oriented language, it's easy to learn, especially for programmers already familiar with C or C++.
- Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.
- Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people's tested code and introduce fewer bugs.
- Develop programs more quickly: Your development time may be as much as twice as fast versus writing the same program in C++. Why? You write fewer lines of code, and it is a simpler programming language than C++.
- Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.
- Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.
- Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature of allowing new classes to be loaded on the fly, without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming interface for application developers and database systems providers. Before ODBC became a de facto standard for Windows programs to interface with database systems, programmers had to use proprietary languages for each database they wanted to connect to. Now, ODBC has made the choice of the database system almost irrelevant from a coding perspective, which is as it should be. Application developers have much more important things to worry about than the syntax that is needed to port their program from one database to another when business needs suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the particular database that is associated with a data source that an ODBC application program is written to use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a particular database. For example, the data source named Sales Figures might be a SQL Server database, whereas the Accounts Payable data source could refer to an Access database. The physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program, and each maintains a separate list of ODBC data sources.

From a programming perspective, the beauty of ODBC is that the application can be written to use the same set of function calls to interface with any data source, regardless of the database vendor. The source code of the application doesn't change whether it talks to Oracle or SQL Server. We only mention these two as an example.
There are ODBC drivers available for several dozen popular database systems. Even Excel spreadsheets and plain text files can be turned into data sources. The operating system uses the Registry information written by ODBC Administrator to
determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC application program. In a client/server environment, the ODBC API even handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably thinking there must be some catch. The only disadvantage of ODBC is that it isn't as efficient as talking directly to the native database interface. ODBC has had many detractors make the charge that it is too slow. Microsoft has always claimed that the critical factor in performance is the quality of the driver software that is used. In our humble opinion, this is true. The availability of good ODBC drivers has improved a great deal recently. And anyway, the criticism about performance is somewhat analogous to those who said that compilers would never match the speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner programs, which means you finish sooner. Meanwhile, computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of plug-in database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain wider acceptance of JDBC, Sun based JDBC's framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC allowed vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90-day public review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon after. The remainder of this section will cover enough
information about JDBC for you to know what it is about and how to use it effectively. This is by no means a complete overview of JDBC; that would fill an entire book.
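A minimal JDBC usage sketch, tying the driver discussion above to actual code. The JDBC URL, credentials, and the hospitals table are illustrative placeholders; the same code works against any vendor whose driver is on the classpath, which is precisely the point of the API:

```java
import java.sql.*;

// Sketch of the standard JDBC flow: obtain a Connection from the
// DriverManager, prepare a parameterized statement, and read the
// ResultSet. Only the URL decides which vendor's driver is used.
public class JdbcSketch {
    public static final String SQL = "SELECT name FROM hospitals WHERE id = ?";

    public static String findHospitalName(String url, String user,
                                          String pass, int id) throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, pass);
             PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setInt(1, id);                       // bind the ? placeholder
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```

The try-with-resources blocks close the connection, statement, and result set automatically, which is the idiomatic way to avoid leaking driver resources.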
JDBC Goals
Few software packages are designed without goals in mind. JDBC is no exception; its many goals drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java. The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

1. SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must be able to sit on top of other common SQL-level APIs. This goal allows JDBC to use existing ODBC-level drivers through a software interface that translates JDBC calls to ODBC calls and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers felt that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.
The project uses Java Networking, and for dynamically updating the cache table it uses an MS Access database.
Fig: My Program -> Java Compiler -> Java byte codes -> Java VM
You can think of Java byte codes as the machine-code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it is a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.
DEPT OF MCA, SSCET, KNL. PAGE NO: 38
Java byte codes help make "write once, run anywhere" possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.
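As a minimal illustration of this portability, the class below compiles once to byte codes and runs unchanged on any Java VM; only the os.name system property reveals the host platform. The class name is illustrative.

```java
// Minimal illustration of "write once, run anywhere": this class compiles to
// byte codes once; the same .class file then runs on any Java VM.
public class Portable {
    public static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
        // The byte codes are identical everywhere; only the host OS differs.
        System.out.println("Running on: " + System.getProperty("os.name"));
    }
}
```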
IP datagrams
The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others; any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header, and the header includes the source and destination addresses. The IP layer handles routing through the Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.
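The fragmentation step described above can be sketched as a simplified count of fragments per datagram. Real IP fragmentation also aligns fragment payloads to 8-byte boundaries; this hypothetical helper omits that detail.

```java
// Hypothetical illustration (not from the report): a simplified count of how
// many fragments a large datagram needs for a given per-fragment payload size.
public class Fragmenter {
    public static int fragmentsNeeded(int datagramBytes, int payloadPerFragment) {
        // Ceiling division: each fragment carries at most payloadPerFragment bytes.
        return (datagramBytes + payloadPerFragment - 1) / payloadPerFragment;
    }

    public static void main(String[] args) {
        // A 4000-byte datagram over a link carrying 1480 payload bytes per frame.
        System.out.println(fragmentsNeeded(4000, 1480)); // 3
    }
}
```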
UDP
UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to support a client/server model, as described later.
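A minimal sketch of this connectionless, port-addressed model, using Java's DatagramSocket; the demo class and message are illustrative, not part of the report.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Hypothetical demo: sends a UDP datagram to a local receiver socket and reads
// it back. There is no connection; each datagram is addressed individually to
// a (host, port) pair.
public class UdpDemo {
    public static String roundTrip(String message) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);  // OS picks a free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] out = message.getBytes("UTF-8");
            // Address the datagram to the receiver's port on the loopback host.
            sender.send(new DatagramPacket(out, out.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] in = new byte[1024];
            DatagramPacket packet = new DatagramPacket(in, in.length);
            receiver.receive(packet);  // blocks until the datagram arrives
            return new String(packet.getData(), 0, packet.getLength(), "UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```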
TCP
TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.
Internet addresses
In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32-bit integer, which gives the IP address. This encodes a network ID and further addressing. The network ID falls into various classes according to the size of the network address.
Network address
Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and class D uses all 32.
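These class ranges can be captured in a small helper that inspects the first octet of the address; this is a hypothetical utility, not from the report.

```java
// Hypothetical helper: classifies an IPv4 address by its first octet,
// following the network-address widths described above.
public class AddressClass {
    public static char classify(int firstOctet) {
        if (firstOctet < 128) return 'A';  // leading bit 0:   8-bit network ID
        if (firstOctet < 192) return 'B';  // leading bits 10: 16-bit network ID
        if (firstOctet < 224) return 'C';  // leading bits 110: 24-bit network ID
        return 'D';                        // remaining addresses
    }

    public static void main(String[] args) {
        System.out.println(classify(10));   // A
        System.out.println(classify(172));  // B
        System.out.println(classify(192));  // C
    }
}
```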
Subnet address
Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.
Host address
Eight bits are finally used for host addresses within our subnet, which places a limit of 256 machines on the subnet.
Total address
(Figure: the full 32-bit address laid out as network, subnet, and host fields.)
Port addresses
A service exists on a host and is identified by its port, a 16-bit number. To send a message to a server, you send it to the port for that service on the host that it is running on. This is not location transparency! Some of these ports are "well known".
Sockets
A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor; in fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>
int socket(int family, int type, int protocol);
Here "family" will be AF_INET for IP communications, protocol will be zero, and type will depend on whether TCP or UDP is used. Two processes wishing to communicate over a network create a socket each. These are similar to two ends of a pipe, but the actual pipe does not yet exist.
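In Java, the same two-ends-of-a-pipe picture looks like the following sketch, with a ServerSocket standing in for the listening end; the class name and message are illustrative.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch: the Java analogue of the C socket() call above. A
// ServerSocket accepts a connection and reads one line sent by a client
// thread, giving the "virtual circuit" that TCP provides.
public class TcpEcho {
    public static String echo(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {  // port 0: OS assigns a free port
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(message);  // one end of the "pipe"
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            try (Socket conn = server.accept();  // the other end
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                String line = in.readLine();
                client.join();
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("ping"));
    }
}
```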
JFree Chart
JFreeChart is a free 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. JFreeChart's extensive feature set includes:
A consistent and well-documented API, supporting a wide range of chart types;
A flexible design that is easy to extend, and targets both server-side and client-side applications;
Support for many output types, including Swing components, image files (including PNG and JPEG), and vector graphics file formats (including PDF, EPS and SVG).
JFreeChart is "open source" or, more specifically, free software. It is distributed under the terms of the GNU Lesser General Public Licence (LGPL), which permits use in proprietary applications.
1. Map Visualizations
Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include: sourcing freely redistributable vector outlines for the countries of the world and for states/provinces in particular countries (the USA in particular, but also other areas); creating an appropriate dataset interface (plus a default implementation) and a renderer, and integrating these with the existing XYPlot class in JFreeChart.
3. Dashboards
There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time series) that can be delivered easily via both Java Web Start and an applet.
4. Property Editors
The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.
Testing Objectives
The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally:
Testing is a process of executing a program with the intent of finding an error. A successful test is one that uncovers an as-yet-undiscovered error. A good test case is one that has a high probability of finding an error, if it exists. Testing, however, can only show the presence of errors, not their absence; tests that find no errors may simply be inadequate to detect the errors that are present.
Levels of Testing
In order to uncover the errors introduced in different phases, we have the concept of levels of testing. The basic levels of testing are shown below:
Client Needs -> Acceptance Testing
Requirements -> System Testing
Design -> Integration Testing
Code -> Unit Testing
Fig: 5.1 Levels of Testing
A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software.
In this project, each service can be thought of as a module. There are many modules, such as Login, New Registration, Change Password, Post Question, and Modify Answer. Each module was checked both during development and after completion so that it works without any error, and inputs are validated when accepted from the user.
Test Plan: A number of activities must be performed for testing software. Testing starts with a test plan, which identifies all testing-related activities that need to be performed, along with the schedule and guidelines for testing. The plan also specifies the levels of testing that need to be done, by identifying the different units. For each unit specified in the plan, test cases and reports are first produced, and these reports are then analyzed. The test plan is a general document for the entire project, which defines the scope, the approach to be taken, and the personnel responsible for the different activities of testing. The inputs for forming the test plan are:
1. Project plan
2. Requirements document
3. System design
Integration testing can be considered as testing the design, and hence the emphasis is on testing module interactions. It also helps to uncover errors associated with interfacing. Here, the inputs to this level are the unit-tested modules. Integration testing is classified into two types: Top-Down Integration Testing and Bottom-Up Integration Testing. In Top-Down Integration Testing, modules are integrated by moving downward through the control hierarchy, beginning with the main control module. In Bottom-Up Integration Testing, each sub-module is tested separately and then the full system is tested.
Integration Testing in this project: In this project, integrating all the modules forms the main system, which means Bottom-Up Integration Testing was used. When integrating the modules, I checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the services ran perfectly before integration.
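A bottom-up flavour of this can be sketched with two hypothetical modules (illustrative names, not the project's actual classes): each is exercised on its own, then through the composed flow.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of bottom-up integration testing. RegistrationModule
// and LoginModule stand in for the project's services; each can be tested
// separately, then together through the combined register-then-login flow.
public class IntegrationSketch {
    static class RegistrationModule {
        private final Map<String, String> users = new HashMap<>();

        boolean register(String user, String password) {
            if (user.isEmpty() || users.containsKey(user)) return false;
            users.put(user, password);
            return true;
        }

        String passwordOf(String user) {
            return users.get(user);
        }
    }

    static class LoginModule {
        private final RegistrationModule registry;

        LoginModule(RegistrationModule registry) {
            this.registry = registry;
        }

        boolean login(String user, String password) {
            return password != null && password.equals(registry.passwordOf(user));
        }
    }

    // Integrated flow: register, then log in through the combined modules.
    public static boolean registerThenLogin(String user, String password) {
        RegistrationModule reg = new RegistrationModule();
        LoginModule login = new LoginModule(reg);
        return reg.register(user, password) && login.login(user, password);
    }

    public static void main(String[] args) {
        System.out.println(registerThenLogin("alice", "secret")); // true
    }
}
```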
This refers to the system testing that is performed by a select group of friendly customers.
MODULE 2:
BIBLIOGRAPHY
[1] S. Acharya, R. Alonso, M.J. Franklin, and S.B. Zdonik, "Broadcast Disks: Data Management for Asymmetric Communications Environments," Proc. ACM SIGMOD '95, pp. 199-210, 1995.
[2] D. Barbara, "Mobile Computing and Databases: A Survey," IEEE Trans. Knowledge and Data Eng., vol. 11, no. 1, pp. 108-117, Jan./Feb. 1999.
[3] N. Beckmann, H.-P. Kriegel, R. Schneider, and B. Seeger, "The R*-Tree: An Efficient and Robust Access Method for Points and Rectangles," Proc. ACM SIGMOD '90, pp. 322-331, 1990.
[4] J. Broch, D.A. Maltz, D.B. Johnson, Y.-C. Hu, and J.G. Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," Proc. ACM MobiCom '98, pp. 85-97, 1998.
[5] V. Bychkovsky, B. Hull, A.K. Miu, H. Balakrishnan, and S. Madden, "A Measurement Study of Vehicular Internet Access Using In Situ Wi-Fi Networks," Proc. ACM MobiCom '06, Sept. 2006.
[6] C.-Y. Chow, H. Va Leong, and A. Chan, "Peer-to-Peer Cooperative Caching in Mobile Environments," Proc. 24th IEEE Int'l Conf. Distributed Computing Systems Workshops (ICDCSW '04), pp. 528-533, 2004.
[7] C.-Y. Chow, H. Va Leong, and A.T.S. Chan, "Distributed Group-Based Cooperative Caching in a Mobile Broadcast Environment," Mobile Data Management, pp. 97-106, 2005.
[8] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf, Computational Geometry: Algorithms and Applications, second ed. Springer, 2000.
[9] G. Gaertner and V. Cahill, "Understanding Link Quality in 802.11 Mobile Ad Hoc Networks," IEEE Internet Computing, vol. 8, no. 1, pp. 55-60, 2004.
[10] A. Guttman, "R-Trees: A Dynamic Index Structure for Spatial Searching," Proc. ACM SIGMOD '84, pp. 47-57, June 1984.
[11] T. Hara, "Cooperative Caching by Mobile Clients in Push-Based Information Systems," Proc. 11th ACM Int'l Conf. Information and Knowledge Management (CIKM '02), pp. 186-193, 2002.
[12] T. Hara and S.K. Madria, "Data Replication for Improving Data Accessibility in Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 5, no. 11, pp. 1515-1532, Nov. 2006.
[13] G.R. Hjaltason and H. Samet, "Distance Browsing in Spatial Databases," ACM Trans. Database Systems, vol. 24, no. 2, pp. 265-318, 1999.