
Every Web site sits on a computer known as a Web server. This server is always connected to the Internet.

Every Web server that is connected to the Internet is given a unique address made up of a series of four numbers between 0 and 255 separated by periods, for example 68.178.157.132 or 68.122.35.127. When you register a Web address, also known as a domain name, such as tutorialspoint.com, you have to specify the IP address of the Web server that will host the site.

There are five leading Web servers: Apache, IIS, lighttpd, Sun Java System Web Server and Jigsaw. We will look at each of these in a bit more detail below. Apart from these, other Web servers are also available in the market, but they are very expensive. Major ones are Netscape's iPlanet, BEA's WebLogic and IBM's WebSphere.

Apache HTTP Server

This is the most popular Web server in the world, developed by the Apache Software Foundation. Apache is open source software and can be installed on almost all operating systems, including Linux, Unix, Windows, FreeBSD, Mac OS X and more. About 60% of Web server machines run the Apache Web server. You can pair Apache with the Tomcat module for JSP and J2EE support.

Internet Information Services

Internet Information Services (IIS) is a high-performance Web server from Microsoft. It runs on the Windows NT/2000 and 2003 platforms (and may run on upcoming Windows versions as well). IIS comes bundled with Windows NT/2000 and 2003; because it is tightly integrated with the operating system, it is relatively easy to administer.

lighttpd

The lighttpd server, pronounced "lighty", is a free Web server that is distributed with the FreeBSD operating system. This open source Web server is fast, secure and consumes much less CPU power. lighttpd also runs on Windows, Mac OS X, Linux and Solaris.
Sun Java System Web Server

This Web server from Sun Microsystems is suited to medium and large web sites. Though the server is free, it is not open source. It runs on Windows, Linux and Unix platforms. The Sun Java System Web Server supports the various languages, scripts and technologies required for Web 2.0, such as JSP, Java Servlets, PHP, Perl, Python, Ruby on Rails, ASP and ColdFusion.

Jigsaw Server

Jigsaw (the W3C's server) comes from the World Wide Web Consortium. It is open source and free, and runs on various platforms such as Linux, Unix, Windows, Mac OS X and FreeBSD. Jigsaw is written in Java and can run CGI scripts and PHP programs.
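The mapping from domain name to IP address described above can be observed with a few lines of code. This sketch uses Python's standard socket module; "localhost" is chosen only because it resolves on any machine without network access - a registered domain such as tutorialspoint.com works the same way.

```python
import socket

def resolve(domain: str) -> str:
    """Return the dotted-quad IPv4 address a domain name maps to."""
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    # Each of the four numbers in the result falls between 0 and 255.
    print(resolve("localhost"))  # typically 127.0.0.1
```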

Setting up a Web Server


The tasks below lead you through the process of setting up your own Web server. While the process described is necessarily incomplete, it is offered as a guide to the things you must do to successfully set up a Web server. Links to software (for networking, HTTP, CGI, etc.) are provided to encourage you to gather your own toolkit of Web server products. Commercial software is also available, including Netscape's Communication Server (available free to educational institutions) and Microsoft's Internet Information Server, for both PC and Macintosh platforms (see Internet Resources below).

Step 1 - The computer: A Web server requires a dedicated computer that is directly connected to the Internet, usually through an ethernet network (LAN/WAN). You can run a Web server on a low-end computer (an 80386-based PC or a 68040 Macintosh), but if you want your server to be responsive to Web surfers you should probably use a more powerful computer (such as a Pentium or a PowerPC-based Macintosh). A Web server needs a fast, large hard drive and should have plenty of RAM (over 16 MB).

Step 2 - The operating system software: The following operating systems can support a Web server: Windows/NT, Windows/95, MacOS, Unix, and Linux. Of these, most of the existing Web servers run on Windows/NT, MacOS (on a PowerMac) or Unix. Linux is a freely available Unix-like operating system that runs on PC hardware.

Step 3 - The networking software: All Internet computers need TCP/IP, and a Web server is no exception. As stated above, your computer should be directly connected to the Internet and thus may require appropriate ethernet software.

Step 4 - The Web server software: There are a variety of Web server programs available for a variety of platforms, from Unix to DOS machines. For the Macintosh, a popular Web server is WebStar from StarNine (see Internet Resources below). For the Windows/NT platform, both Microsoft and Netscape offer a powerful Web server program free to educational institutions (see Internet Resources below). Download or purchase the Web server software and install it on your computer using the instructions provided.

Step 5 - Configuring your Web server: When you install your Web server, you will be prompted for basic settings - the default directory or folder, whether to allow visitors to see the contents of a directory or folder, where to store the log file, etc. Depending on the Web software you install, you will have to configure the software per the instructions that come with it.
Step 6 - Managing your Web server: As your Web server is accessed by more and more people, you may need to monitor the log file to see which files people are reading, identify peak access times, and consider upgrading your computer. You can always add more RAM and disk space to your Web server computer to improve its performance. Also check for bottlenecks - such as your TCP/IP software. For example, Open Transport 1.1 from Apple has been modified to support faster TCP/IP access if installed on a Web server.

Step 7 - Getting more information on operating a Web server: For more information on finding, downloading, installing, and operating a Web server, see the Internet Resources below. For example, Web66 has information on setting up a Macintosh and Windows/95 Web server, and there are many other useful resources available.
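To make steps 4 and 5 concrete, here is a minimal Web server sketch using Python's standard library rather than any of the commercial packages named above. The port number and published directory are arbitrary choices for the example, corresponding to the basic settings described in step 5.

```python
import http.server
import socketserver

PORT = 8080        # assumption: any free port will do
DIRECTORY = "."    # the "default directory" the server publishes (step 5)

class Handler(http.server.SimpleHTTPRequestHandler):
    """Serves files out of DIRECTORY over HTTP."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=DIRECTORY, **kwargs)

def serve() -> None:
    # Binding to "" listens on all interfaces, as a public server would.
    with socketserver.TCPServer(("", PORT), Handler) as httpd:
        print(f"Serving {DIRECTORY} on port {PORT}")
        httpd.serve_forever()   # blocks, handling one request after another

# serve()  # uncomment to run; stop with Ctrl-C
```

Each request the server handles is also written to a log, which is the raw material for the monitoring described in step 6.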

Internet Resources Web Server Software

Communication Server from Netscape [http://home.netscape.com/comprod/mirror/server_download.html]
Netscape offers a free Web server program for Windows/NT to educational institutions.

Internet Information Server from Microsoft [http://iisa.microsoft.com/iis3/freesoftware/downloadiis3/systemreq.htm]
Microsoft offers a "free" Internet server program for Windows/NT.

WebStar from StarNine/Quarterdeck [http://emod.starnine.com/evals/evals.html]
WebStar is a Macintosh-based Web server available for evaluation from StarNine.

Locating Web Server Programs and Utilities

Macintosh Web Servers [http://www.yahoo.com/Computers_and_Internet/Internet/World_Wide_Web/Servers/Macintosh/]
A list of links to Macintosh Web server programs at Yahoo.

Macintosh Classroom Internet Server Cookbook [http://web66.coled.umn.edu/Cookbook/MacContents.html]
At Web66, this page contains everything you need to set up your own Macintosh-based Web server.

Server Watch [http://serverwatch.iworld.com/]
"ServerWatch is designed to be the ultimate resource on the Web for timely and accurate information on Web server technology and supporting tools." Be sure to check out their Web server comparison chart!

Windows/95 Classroom Internet Server Cookbook [http://web66.coled.umn.edu/Cookbook/Win95/Contents.html]
At Web66, this page has everything you need to set up a Windows/95-based Web server.

Windows & Windows/95 Web Servers [http://www.yahoo.com/Computers_and_Internet/Internet/World_Wide_Web/Servers/Microsoft_Windows_Windows_95/]
A list of links to Windows & Windows/95 Web servers at Yahoo.

Windows/NT Web Servers [http://www.yahoo.com/Computers_and_Internet/Internet/World_Wide_Web/Servers/Windows_NT/]
A list of links to Windows/NT Web server programs at Yahoo.

Understanding Web Services

Over the last couple of years, Web services have grown in popularity with application developers, and for good reason. Web services technology represents an important way for businesses to communicate with each other and with clients as well. Unlike traditional client/server models, such as a Web server or Web page system, Web services do not provide the user with a GUI. Instead, Web services share business logic, data and processes through a programmatic interface across a network. The applications interface with each other, not with the users. Developers can then add the Web service to a GUI (such as a Web page or an executable program) to offer specific functionality to users. Web services' distributed computing model allows application-to-application communication. For example, one purchase-and-ordering application could communicate to an inventory application that specific items need to be reordered. Because of this level of application integration, Web services have grown in popularity and are beginning to improve business processes. In fact, some even call Web services the next evolution of the Web.

Web Services Technology


Web services are built on several technologies that work in conjunction with emerging standards to ensure security and manageability, and to make certain that Web services can be combined to work independent of a vendor. The term Web service describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet protocol backbone.

XML
Short for Extensible Markup Language, a specification developed by the W3C. XML is a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations.
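The customized tags XML allows can be seen in a short sketch. The element names below (order, item, quantity) are invented for this illustration and belong to no standard vocabulary; Python's standard xml.etree module parses them like any other.

```python
import xml.etree.ElementTree as ET

# A document using designer-defined tags -- nothing here is predefined by XML.
doc = """
<order id="1042">
  <item sku="A-7">widget</item>
  <quantity>3</quantity>
</order>
"""

root = ET.fromstring(doc)
print(root.tag, root.attrib["id"])      # the custom <order> element
print(root.find("item").text)           # data carried between applications
print(int(root.find("quantity").text))  # interpreted by the receiving side
```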

SOAP
Short for Simple Object Access Protocol, a lightweight XML-based messaging protocol used to encode the information in Web service request and response messages before sending them over a network. SOAP messages are independent of any operating system or protocol and may be transported using a variety of Internet protocols, including SMTP, MIME, and HTTP.

WSDL
Short for Web Services Description Language, an XML-formatted language used to describe a Web service's capabilities as collections of communication endpoints capable of exchanging messages. WSDL is an integral part of UDDI, an XML-based worldwide business registry, and is the language UDDI uses to describe services. WSDL was developed jointly by Microsoft and IBM.

UDDI
Short for Universal Description, Discovery and Integration. It is a Web-based distributed directory that enables businesses to list themselves on the Internet and discover each other, similar to a traditional phone book's yellow and white pages.

XML is used to tag the data, SOAP is used to transfer the data, WSDL is used for describing the services available and UDDI is used for listing what services are available. Used primarily as a means for businesses to communicate with each other and with clients, Web services allow organizations to communicate data without intimate knowledge of each other's IT systems behind the firewall.
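As a sketch of how these pieces fit together, the snippet below builds a SOAP request envelope with the standard library: XML tags the data, and SOAP wraps it for transfer. The operation name (GetStockPrice), its parameter, and the service namespace are hypothetical; a real client would take them from the service's WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope
SVC_NS = "http://example.com/stock"                     # hypothetical service

def build_request(symbol: str) -> bytes:
    """Wrap a hypothetical GetStockPrice call in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}GetStockPrice")
    ET.SubElement(call, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(env, encoding="utf-8", xml_declaration=True)

# The resulting bytes could be POSTed over HTTP or carried by SMTP --
# as noted above, SOAP does not care which transport moves the message.
print(build_request("IBM").decode())
```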

The Critical Choice of Client Server Architecture: A Comparison of Two and Three Tier Systems

Introduction

Most of the initial client/server success stories involve small-scale applications that provide direct or indirect access to transactional data in legacy systems. The business need to provide data access to decision makers, the relative immaturity of client/server tools and technology, the evolving use of wide area networks and the lack of client/server expertise make these attractive yet low risk pilot ventures. As organizations move up the learning curve from these small-scale projects towards mission-critical applications, there is a corresponding increase in performance expectations, uptime requirements and in the need to remain both flexible and scalable. In such a demanding scenario, the choice and implementation of appropriate architecture becomes critical. In fact one of the fundamental questions that practitioners have to contend with at the start of every client/server project is - "Which architecture is more suitable for this project - Two Tier or Three Tier?". Interestingly, 17% of all mission-critical client/server applications are three tiered and the trend is growing, according to Standish Group International, Inc., a market research firm. Architecture affects all aspects of software design and engineering. The architect considers the complexity of the application, the level of integration and interfacing required, the number of users, their geographical dispersion, the nature of networks and the overall transactional needs of the application before deciding on the type of architecture. An inappropriate architectural design or a flawed implementation could result in horrendous response times. The choice of architecture also affects the development time and the future flexibility and maintenance of the application. Current literature does not adequately address all these aspects of client/server architecture. 
This paper defines the basic concepts of client/server architecture, describes the two tier and three tier architectures and analyzes their respective benefits and limitations. Differences in development efforts, flexibility and ease of reuse are also compared in order to aid further in the choice of appropriate architecture for any given project.

DEFINITION

Despite the massive press coverage of client/server computing, there is much confusion around defining what client/server really is. Client and server are software and not hardware entities. In its most fundamental form, client/server involves a software entity (client) making a specific request which is fulfilled by another software entity (server). Figure 1 illustrates the client/server exchange. The client process sends a request to the server. The server interprets the message and then attempts to fulfill the request. In order to fulfill the request, the server may have to refer to a knowledge source (database), process data (perform calculations), control a peripheral, or make an additional request of another server. In many architectures, a client can make requests of multiple servers and a server can service multiple clients.
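The exchange in Figure 1 can be sketched as two small software entities talking over TCP; the one-word "TIME" protocol used here is invented purely for the illustration.

```python
import socket
import threading
import time

def server(listener: socket.socket) -> None:
    """Accept one request, interpret it, and attempt to fulfil it."""
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()
        if request == "TIME":                       # interpret the message
            conn.sendall(str(int(time.time())).encode())
        else:
            conn.sendall(b"ERROR")                  # cannot fulfil the request

def client(port: int, request: str) -> str:
    """The client initiates the dialog; the server only ever responds."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(request.encode())
        return conn.recv(1024).decode()

# Both entities run on one machine here -- as the text notes, client and
# server are software, and may share a hardware box or be distributed.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))                     # OS assigns a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()
reply = client(listener.getsockname()[1], "TIME")
print(reply)
```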

Figure 1 - Client/Server Transactions

It is important to understand that the relationship between client and server is a command/control relationship. In any given exchange, the client initiates the request and the server responds accordingly. A server cannot initiate dialog with clients. Since the client and server are software entities, they can be located on any appropriate hardware. A client process, for instance, could reside on network server hardware and request data from a server process running on other server hardware, or even on a PC. In another scenario, the client and server processes can be located on the same physical hardware box. In fact, in the prototyping stage, a developer may choose to have both the presentation client and the database server on the same PC. The server can later be migrated (distributed) to a larger system for further pre-production testing after the bulk of the application logic and data structure development is complete. Although the client and server can be located on the same machine, this paper is concerned primarily with architectures used to create distributed applications, i.e. those where the client and server are on separate physical devices. According to Bever et al., a distributed application consists of separate parts that execute on different nodes of the network and cooperate in order to achieve a common goal[1]. The supporting infrastructure should also render the inherent complexity of distributed processing invisible to the end-user.

The client in a client/server architecture does not have to sport a graphical user interface (GUI); however, the mass-commercialization of client/server has come about in large part due to the proliferation of GUI clients. Some client/server systems support highly specific functions such as print spooling (i.e. network print queues) or presentation services (i.e. X Window). While these special purpose implementations are important, this paper is predominantly concerned with distributed client/server architectures that demand flexibility in functionality and enhanced graphical user interfaces.

ARCHITECTURE TYPES

When considering a move to client/server computing, whether to replace existing systems or to introduce entirely new systems, practitioners must determine which type of architecture they intend to use. The vast majority of end-user applications consist of three components: presentation, processing, and data. Client/server architectures can be defined by how these components are split up among software entities and distributed on a network. There are a variety of ways to divide these resources and implement client/server architectures. This paper will focus on the most popular implementations of two-tier and three-tier client/server computing systems.

Two-tier Architecture

Although there are several ways to architect a two-tier client/server system, we will focus on what is overwhelmingly the most common implementation. In this implementation, the three components of an application (presentation, processing, and data) are divided among two software entities (tiers): client application code and database server (Figure 2). A robust client application development language and a versatile mechanism for transmitting client requests to the server are essential for a two-tier implementation.
Presentation is handled exclusively by the client, processing is split between client and server, and data is stored on and accessed via the server. The PC client assumes the bulk of responsibility for application (functionality) logic with respect to the processing component, while the database engine - with its attendant integrity checks, query capabilities and central repository functions - handles data intensive tasks. In a data access topology, a data engine would process requests sent from the clients. Currently, the language used in these requests is most typically a form of SQL. Sending SQL from client to server requires a tight linkage between the two layers. To send the SQL the client must know the syntax of the server or have this translated via an API (Application Program Interface).

It must also know the location of the server, how the data is organized, and how the data is named. The request may take advantage of logic stored and processed on the server which would centralize global tasks such as validation, data integrity, and security. Data returned to the client can be manipulated at the client level for further sub selection, business modeling, "what if" analysis, reporting, etc.
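A minimal two-tier sketch follows, with sqlite3 standing in for a networked database server; the orders table and its rows are invented for the example. Note how the client must know the schema, names, and SQL dialect of the data engine, and how returned rows are then manipulated at the client level.

```python
import sqlite3

# Tier 2: the database server (an in-memory stand-in for a remote SQL engine).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("widget", 3), ("gadget", 7)])

# Tier 1: the client ships SQL to the server -- it must know the table name,
# the column names, and the server's SQL syntax (the tight linkage described).
rows = db.execute("SELECT item, qty FROM orders WHERE qty > ?", (5,)).fetchall()

# Returned data is manipulated client-side: sub-selection, modeling, reporting.
total = sum(qty for _, qty in rows)
print(rows, total)
```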

Figure 2 - Data Access Topology for two-tier architecture. The majority of functional logic exists at the client level.

The most compelling advantage of a two-tier environment is application development speed. In most cases a two-tier system can be developed in a small fraction of the time it would take to code a comparable but less flexible legacy system. Using any one of a growing number of PC-based tools, a single developer can model data and populate a database on a remote server, paint a user interface, create a client with application logic, and include data access routines. Most two-tier tools are also extremely robust. These environments support a variety of data structures, include a number of built-in procedures and functions, and insulate developers from many of the more mundane aspects of programming such as memory management. Finally, these tools lend themselves well to iterative prototyping and rapid application development (RAD) techniques, which can be used to ensure that the requirements of the users are accurately and completely met. Tools for developing two-tier client/server systems have allowed many IS organizations to attack their applications backlog, satisfying pent-up user demand by rapidly developing and deploying what are primarily smaller workgroup-based solutions.

Two-tier architectures work well in relatively homogeneous environments with fairly static business rules. This architecture is less suited to dispersed, heterogeneous environments with rapidly changing rules. As such, relatively few IS organizations are using two-tier client/server architectures to provide cross-departmental or cross-platform enterprise-wide solutions[2].

Since the bulk of application logic exists on the PC client, the two-tier architecture faces a number of potential version control and application redistribution problems. A change in business rules would require a change to the client logic in each application in a corporation's portfolio which is affected by the change. Modified clients would have to be re-distributed through the network - a potentially difficult task given the current lack of robust PC version control software and the problems associated with upgrading PCs that are turned off or not "docked" to the network.

System security in the two-tier environment can be complicated, since a user may require a separate password for each SQL server accessed. The proliferation of end-user query tools can also compromise database server security. The overwhelming majority of client/server applications developed today are designed without sophisticated middleware technologies which offer increased security[3]. Instead, end-users are provided a password which gives them access to a database. In many cases this same password can be used to access the database with data-access tools available in most commercial PC spreadsheet and database packages. Using such a tool, a user may be able to access otherwise hidden fields or tables and possibly corrupt data.

Client tools and the SQL middleware used in two-tier environments are also highly proprietary, and the PC tools market is extremely volatile. The client/server tools market seems to be changing at an increasingly unstable rate. In 1994, the leading client/server tool developer was purchased by a large database firm, raising concern about the manufacturer's ability to continue to work cooperatively with RDBMS vendors which compete with the parent company's products[4]. The number two tool maker lost millions[5] and has been labeled a takeover target[6]. The tool which received some of the brightest accolades in early 1995 is supplied by a firm in the midst of severe financial difficulties and management transition[7]. This kind of volatility raises questions about the long-term viability of any proprietary tool an organization may commit to.
All of this complicates implementation of two-tier systems - migration from one proprietary technology to another would require a firm to scrap much of its investment in application code, since none of this code is portable from one tool to the next.

Three-tier Architecture

The three tier architecture (Figure 3) attempts to overcome some of the limitations of the two-tier scheme by separating presentation, processing, and data into separate, distinct software entities (tiers). The same types of tools can be used for presentation as were used in a two-tier environment; however, these tools are now dedicated to handling just the presentation. When calculations or data access is required by the presentation client, a call is made to a middle-tier functionality server. This tier can perform calculations or can make requests as a client to additional servers. The middle-tier servers are typically coded in a highly portable, non-proprietary language such as C. Middle-tier functionality servers may be multi-threaded and can be accessed by multiple clients, even those from separate applications.

Although three-tier systems can be implemented using a variety of technologies, the calling mechanism from client to server in such a system is most typically the remote procedure call, or RPC. Since the bulk of two-tier implementations involve SQL messaging and most three-tier systems utilize RPCs, it is reasonable to examine the merits of these respective request/response mechanisms in a discussion of architectures. RPC calls from presentation client to middle-tier server provide greater overall system flexibility than the SQL calls made by clients in the two-tier architecture. This is because in an RPC, the requesting client simply passes the parameters needed for the request and specifies a data structure to accept returned values (if any). Unlike most two-tier implementations, the three-tier presentation client is not required to "speak" SQL. As such, the organization, names, or even the overall structure of the back-end data can be changed without requiring changes to PC-based presentation clients. Since SQL is no longer required, data can be organized hierarchically, relationally, or in object format. This added flexibility can allow a firm to access legacy data and simplifies the introduction of new database technologies.
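As a sketch of the RPC style, the snippet below uses the standard library's XML-RPC modules: the presentation client passes plain parameters to a middle-tier functionality server and says nothing about how or where the data is stored. The reorder_level function and its ten-day stocking rule are invented for the illustration.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def reorder_level(on_hand: int, daily_use: int) -> int:
    """Middle-tier business rule (hypothetical): keep ten days of stock."""
    return max(0, daily_use * 10 - on_hand)

# The functionality server could live on any host; the back-end data store
# behind it could change format without touching the presentation client.
srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
srv.register_function(reorder_level)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

# The presentation client: parameters in, a value back -- no SQL, no schema.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
needed = proxy.reorder_level(12, 4)
print(needed)   # 4 * 10 - 12 = 28
srv.shutdown()
```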

Figure 3 - Three Tier Architecture. Most of the logic processing is handled by functionality servers. Middle-tier code can be accessed and utilized by multiple clients.

In addition to the openness stated above, several other advantages are presented by this architecture. Having separate software entities allows for the parallel development of individual tiers by application specialists. It should be noted that the skill sets required to develop c/s applications differ significantly from those needed to develop mainframe-based character systems. For example, user interface creation requires an appreciation for platform and corporate UI standards, and database design requires a commitment to and understanding of the enterprise's data model. Having experts focus on each of these three layers can increase the overall quality of the final application.

The three tier architecture also provides for more flexible resource allocation. Middle-tier functionality servers are highly portable and can be dynamically allocated and shifted as the needs of the organization change. Network traffic can potentially be reduced by having functionality servers strip data to the precise structure required before distributing it to individual clients at the LAN level. Multiple server requests and complex data access can emanate from the middle tier instead of the client, further decreasing traffic. Also, since PC clients are now dedicated to just presentation, memory and disk storage requirements for PCs will potentially be reduced. Modularly designed middle-tier code modules can be re-used by several applications. Reusable logic can reduce subsequent development efforts, minimize the maintenance work load, and decrease migration costs when switching client applications. In addition, implementation platforms for three tier systems such as OSF/DCE offer a variety of additional features to support distributed application development. These include integrated security, directory and naming services, server monitoring and boot capabilities for supporting dynamic fault-tolerance, and distributed time management for synchronizing systems across networks and separate time zones.

There are, of course, drawbacks associated with a three tier architecture. Current tools are relatively immature and require more complex 3GLs for middle-tier server generation.
Many tools have under-developed facilities for maintaining server libraries - a potential obstacle to simplifying maintenance and promoting code re-use throughout an IS organization. More code in more places also increases the likelihood that a system failure will affect an application, so detailed planning with an emphasis on the reduction or elimination of critical paths is essential. Three tier also brings with it an increased need for network traffic management, server load balancing, and fault tolerance. For technically strong IS organizations servicing customers with rapidly changing environments, three tier architectures can provide significant long-term gains via increased responsiveness to business climate changes, code reuse, maintainability, and ease of migration to new server platforms and development environments.

COMPARING TWO AND THREE TIER DEVELOPMENT EFFORTS

The graphs in Figures 4-6 illustrate the time to deployment for two tier vs. three tier environments. Time to deployment is forecast in overall systems delivery time, not man-hours. According to a Deloitte & Touche study, rapid application development time is cited as one of the primary reasons firms choose to migrate to a client/server architecture. As such, strategic planning and platform decisions require an understanding of how development time relates to architecture and how development time changes as an IS organization gains experience in c/s.

Figure 4 - Initial Development Effort

Figure 4 shows the initial development effort forecast to create comparable distributed applications using the common two tier and three tier approaches discussed above. The three tier application takes much longer to develop; this is due primarily to the complexity involved in coding the bulk of the application logic in a lower-level 3GL such as C, and to the difficulties associated with coordinating multiple independent software modules on disparate platforms. In contrast, the two-tier scheme allows the bulk of the application logic to be developed in a higher-level language within the same tool used to create the user interface.

Figure 5 - Subsequent Development Efforts

Subsequent development efforts may see three-tier applications deployed with greater speed than two tier systems (Figure 5). This is entirely due to the amount of middle-tier code which can be re-used from previous applications. The speed advantage favoring the three-tier architecture will only result if the three-tier application is able to use a sizable portion of existing logic. Experience indicates that these savings can be significant, particularly in organizations which require separate but closely related applications for various business units. Re-use is also high for organizations with a strong enterprise data model, because data-access code can be written once and re-used whenever similar access needs arise across multiple applications. The degree of development time reduction on subsequent efforts will grow as an organization deploys more c/s applications and develops a significant library of re-usable, middle-tier application logic.

Figure 6 - Client Tool Migration

Figure 6 makes the important case for code savings when migrating from one client development tool to another. It was stated earlier that client tools are highly proprietary and code is not portable between the major vendor packages. The point was also made that the PC tools market is highly volatile, with vendor shake-outs and technical "leapfrogging" commonplace. In a two-tier environment, IS organizations wishing to move from one PC-based client development platform to another will have to scrap their previous investment in application logic, since most of this logic is written in the language of the proprietary tool. In the three-tier environment this logic is written in a re-usable middle tier; thus, when migrating to the new tool, the developer simply has to create the presentation and add RPC calls to the functionality layer.

Flexibility in re-using existing middle-tier code can also assist organizations developing applications for various PC client operating system platforms. Until recently there were very few cross-platform client tool development environments, and most of today's cross-platform solutions are not considered "best-of-breed". In a three-tier environment the middle-tier functionality layer can be accessed by separate client tools on separate platforms. Coding application logic once in an accessible middle tier decreases the overall development time of a cross-platform solution, and it provides the organization greater flexibility in choosing the best tool on any given platform.

SUMMARY

In the early 1980's, ANSI, in conjunction with the University of Minnesota, defined a three layer architecture for building portable systems. This architecture divided data processing into presentation, processing (functionality logic), and data. This paper has considered the role of each of these data processing layers within the framework of two popular client/server architectures.
Two-tier architectures group the presentation with most of the non-database processing in a single client application. The robustness and ease of use of two-tier development tools dramatically decrease initial development time; however, IS organizations may pay a penalty when trying to update functionality simultaneously in a variety of systems, when trying to integrate systems, or when trying to migrate from a proprietary development tool. Three-tier architectures split these three layers into three distinct software entities. This architecture requires more planning and support, but can reduce development and maintenance costs over the long term by leveraging code re-use and flexibility in product migration. Three-tier architectures are also the most vendor-neutral of the architectures considered, and thus can facilitate the integration of heterogeneous systems.

Keen (1991) pointed out that a firm's long-term ability to compete is directly related to (enabled or limited by) the reach and range provided by the firm's technical architecture. His suggestions for defining a platform include selecting architectures which:

* protect existing IT investments
* ensure the firm's ability to adopt new technologies
* provide integration of heterogeneous resources
* accommodate emerging standards embraced by a broad base of firms [8]

Our discussion of popular client/server architectures exposes the weaknesses in the overwhelming majority of current client/server systems - systems employing a two-tier architecture - as they relate to Keen's platform selection criteria. Such systems may provide adequate workgroup-level systems which can be developed rapidly and employ empowering interfaces. However, such systems lack the openness, flexibility, scalability, and integration provided by three-tier systems. The case for deploying three-tier systems will develop over time as tools mature and momentum for vendor-neutral standards increases. A variety of research opportunities exist, including examining issues in migration from two-tier to three-tier systems, operationalizing the conceptual graphs presented here as they relate to development time, and studying how the level of complexity in three-tier systems acts as a barrier to their widespread acceptance.

Features of the client-server architecture


Context
Each session maintains its own context. This allows sessions to be owned by different active objects within the same thread, by different components within the same thread, etc., without interference.

Context switching
Client-server communication requires context switching:

* messages are sent via the Kernel
* handling the message involves a switch from the client thread, to the server thread, and back to the client thread
* inter-thread data transfer can never be done with simple C++ pointers: it always involves data copying; furthermore, it may involve cross-address-space data transfer, if the threads are in different processes

Compared with a simple function call or memory copy, context switching is a relatively expensive operation, and should be minimised. Servers whose performance is critical use buffering to minimise context switches. Sometimes, this is transparent. Often, the client interface design is affected by the requirement for buffering.
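The buffering idea above can be sketched in a few lines. This is an illustrative model, not Symbian code: the class and method names (`BufferedClient`, `server_write`) are invented, and the "server" is a stand-in object that simply counts round-trips, each of which models one expensive context switch.

```python
# Sketch: client-side write buffering to reduce expensive server round-trips.
# All names here are illustrative, not a real IPC API.

class FakeServer:
    """Stands in for the server end of a session; counts round-trips."""
    def __init__(self):
        self.round_trips = 0
        self.data = bytearray()

    def server_write(self, payload: bytes):
        self.round_trips += 1          # each call models one context switch
        self.data.extend(payload)

class BufferedClient:
    """Accumulates small writes and flushes them to the server in one call."""
    def __init__(self, server, capacity=64):
        self.server = server
        self.capacity = capacity
        self.buffer = bytearray()

    def write(self, payload: bytes):
        self.buffer.extend(payload)
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        if self.buffer:
            self.server.server_write(bytes(self.buffer))
            self.buffer.clear()

server = FakeServer()
client = BufferedClient(server, capacity=64)
for _ in range(100):
    client.write(b"x" * 8)     # 100 logical writes of 8 bytes each
client.flush()
print(server.round_trips)       # 13 round-trips instead of 100
print(len(server.data))         # 800: no data was lost
```

Note how the buffer is visible in the client interface: the caller must remember to `flush()`, which is exactly the kind of design impact on the client API that the paragraph above mentions.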

Thread-based
The basic architecture of servers is thread-based. This gives the implementers of a system the flexibility to package the server threads into whatever processes they choose, depending on the balance of requirements for security and economy. This can be contrasted with systems in which all, or most, servers run as part of the Kernel; on Symbian OS this is not necessary, so security is better. It can also be contrasted with systems which require each server to have its own process, which uses more memory (e.g. for address translation tables) and has worse performance (inter-thread data transfer involves address translation as well as copying).

Cleanup
When a client process terminates, all server resources associated with it should be cleaned up. When a session is ended, servers must clean up all objects associated with it, and clients must consider any handles associated with it as invalid, and perform any necessary client-side cleanup.

For a non-sharable session, if the client thread dies, then the Kernel performs thread-death cleanup and sends a disconnect message to the server end of all sessions associated with that client thread. For a sharable session, the death of any or all client threads does not trigger closure of the session. This is because the session is process relative. To close a shared session, either the process must terminate, or the session must be explicitly closed through a client side call to Close() on the client-side session handle, RSessionBase.
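The cleanup obligation on the server side can be sketched as a session table that releases everything a session owned when its disconnect message arrives. This is a language-neutral model in Python, not the Symbian API: `Server`, `Session`, and `disconnect` are illustrative names.

```python
# Sketch: server-side cleanup when a disconnect message arrives.
# Names (Server, Session, disconnect) are illustrative, not the Symbian API.

class Session:
    def __init__(self):
        self.objects = []      # server-side objects owned by this session
        self.open = True

class Server:
    def __init__(self):
        self.sessions = {}

    def connect(self, session_id):
        self.sessions[session_id] = Session()

    def create_object(self, session_id, obj):
        self.sessions[session_id].objects.append(obj)

    def disconnect(self, session_id):
        # On session close (or, for a non-sharable session, on client-thread
        # death), release every object the session owned.
        session = self.sessions.pop(session_id)
        session.objects.clear()
        session.open = False
        return session

server = Server()
server.connect("s1")
server.create_object("s1", "file-handle")
server.create_object("s1", "timer")
closed = server.disconnect("s1")
print(len(server.sessions))    # 0: the server holds nothing for the client
print(closed.open)             # False: any client-side handle is now invalid
```

The client-side half of the contract is the mirror image: after the disconnect, the client must treat its handles as invalid, just as the paragraph above describes for `Close()` on `RSessionBase`.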

Introduction to Web services technologies


Web Services Introduction

Before considering why web services are so popular and important, you should first ask: what are web services, what are they used for, and how do they work? The nature and functionality of web services have made them very popular. Our business systems today are more mature, transparent, logical, and technically advanced, largely because of web services.

Definition

Web services are the combination of eXtensible Markup Language (XML) and the HyperText Transfer Protocol (HTTP); together they can turn your application into a Web application that publishes its functions or messages to the rest of the world. In other words, web services are simply Internet Application Programming Interfaces (APIs) that can be accessed over a network, such as the Internet or an intranet, and executed on a remote system hosting the requested services. Web applications are simply applications that run on the web. Web services are browser- and operating-system-independent, which means they can run in any browser without the need for any changes. Web services take Web applications to the next level. The World Wide Web Consortium (W3C) has defined web services: according to the W3C, web services are the message-based designs frequently found on the Web and in enterprise software. The Web of Services is based on technologies such as HTTP, XML, SOAP, WSDL, SPARQL, and others.
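The XML-plus-HTTP idea in the definition above can be sketched offline: a web service's HTTP response is just XML text, which any platform can parse. The payload below is a made-up response from a hypothetical temperature service, not a real API.

```python
# Sketch: the XML-over-HTTP idea behind web services, shown offline.
# The response body below is a hypothetical payload, not a real service.
import xml.etree.ElementTree as ET

# What the HTTP response body of an imagined temperature service might
# look like: plain XML, readable on any operating system or browser.
response_body = """<?xml version="1.0"?>
<TemperatureResponse>
    <City>Oslo</City>
    <Celsius>4.5</Celsius>
</TemperatureResponse>"""

root = ET.fromstring(response_body)
city = root.findtext("City")
celsius = float(root.findtext("Celsius"))
print(city, celsius)    # Oslo 4.5
```

In a real call, the same XML would simply arrive as the body of an HTTP response; the platform independence comes from the fact that both sides only need to agree on the XML, not on an operating system or language.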

The Elements of Web Services

Web services use XML to encode and decode data, and SOAP to transport it (using open protocols). Besides these, HTTP, the Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI), and SPARQL are elements of web services. To understand web services clearly, it helps to have some brief knowledge of each of these elements.

HTTP

HyperText Transfer Protocol, or HTTP, is the protocol most widely used by the World Wide Web. It defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands. One of the shortcomings of HTTP is that it is a stateless protocol, which means each command is executed independently, without any knowledge of the commands that came before it. This shortcoming has been addressed by newer technologies, including ActiveX, Java, JavaScript, and cookies.

Simple Object Access Protocol (SOAP)

SOAP is a simple XML-based protocol that allows applications to exchange information over HTTP without any dependency on the OS platform. SOAP uses HTTP and XML as its mechanisms for information exchange.

Universal Description, Discovery and Integration (UDDI)

Universal Description, Discovery and Integration, or UDDI, is a web-based distributed directory, like a traditional phone book's yellow and white pages, that enables businesses to list themselves on the Internet and discover each other. It defines a registry service: a Web service that manages information about service providers, service implementations, and service metadata, for Web services and for other electronic and non-electronic services. Service providers can use UDDI to advertise the services they offer, while service consumers can use UDDI to discover services.
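Since SOAP is just XML in a fixed envelope shape, a minimal request can be assembled with the standard library alone. The envelope namespace below is the real SOAP 1.1 namespace, but the body operation (`GetPrice`/`Item`) is a hypothetical example, not any real service's interface.

```python
# Sketch: assembling a minimal SOAP 1.1 envelope with the standard library.
# The body operation (GetPrice/Item) is hypothetical, not a real API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)   # serialize with the "soap" prefix

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetPrice")        # hypothetical operation
ET.SubElement(request, "Item").text = "Apples"

xml_bytes = ET.tostring(envelope)
print(xml_bytes.decode())
```

The result is an ordinary XML document; to invoke a SOAP service, this would be POSTed as the body of an HTTP request, which is exactly the "SOAP uses HTTP and XML" pairing described above.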

Web Services Description Language (WSDL)

WSDL, the Web Services Description Language, is an XML-based protocol used for sending and receiving information through decentralized and distributed environments. WSDL is an integral part of UDDI and was developed jointly by Microsoft and IBM. It defines what services are available in a Web service, as well as the methods, parameter names, parameter data types, and return data types for the Web service. The WSDL document is quite reliable, and applications that use web services accept it.

SPARQL

SPARQL, the SPARQL Protocol and RDF Query Language, is an RDF query language that defines a standard query language and data access protocol for use with the Resource Description Framework (RDF) data model. It works for any data source that can be mapped to RDF. SPARQL allows a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. It is standardized by the RDF Data Access Working Group (DAWG) of the W3C, and is considered a key semantic web technology.

Uses of Web Services

Web services are a set of tools that can be used in a number of ways, most commonly in three styles:

* Remote procedure calls
* Service-oriented architecture
* Representational state transfer
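The triple patterns at the heart of SPARQL can be illustrated with a tiny in-memory matcher. This is a toy analogue, not a SPARQL engine: the data and the `match` helper are made up, and variables are modeled as strings starting with `?`, as in SPARQL syntax.

```python
# Sketch: the triple-pattern matching idea behind SPARQL, run against an
# in-memory list of RDF-style (subject, predicate, object) triples.
# The data and the match() helper are illustrative, not a SPARQL engine.
triples = [
    ("alice", "knows", "bob"),
    ("alice", "knows", "carol"),
    ("bob",   "knows", "carol"),
]

def match(pattern, data):
    """Return one variable-binding dict per triple the pattern matches."""
    results = []
    for triple in data:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value       # variables bind to any value
            elif term != value:
                break                       # constants must match exactly
        else:
            results.append(binding)
    return results

# Analogue of: SELECT ?who WHERE { alice knows ?who }
print(match(("alice", "knows", "?who"), triples))
```

Real SPARQL then builds conjunctions, disjunctions, and optional patterns on top of this basic pattern-matching step.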

Moreover, web services are also used as reusable application components and to connect existing software.

Remote procedure calls (RPC)

RPC Web services present a distributed function-call interface, which is familiar to many developers. Typically, the basic unit of an RPC Web service is the WSDL operation.

Service-oriented architecture (SOA)

Under a service-oriented architecture (SOA), Web services are used to implement an architecture in which the basic unit of communication is a message, rather than an operation. This is often referred to as "message-oriented" services. Major software vendors and industry analysts support SOA Web services.

Representational state transfer (REST)

Representational State Transfer (REST) Web services describe architectures based on REST. Such architectures can use WSDL to describe SOAP messaging over HTTP, which defines the operations; alternatively, REST operations can be implemented as an abstraction purely on top of SOAP, or can be created without using SOAP at all. As reusable application components, Web services offer frequently used services such as currency conversion, weather reports, language translation, and much more. To connect existing software, Web services let you exchange data between different applications and different platforms; in this kind of use, you can solve the interoperability problem by giving different applications a way to link their data. You have now seen why web services are so popular and so widely used. In this section we introduced web services; in the next section you will learn about their importance.
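The resource-plus-verb style of REST can be sketched without any network code: a handler dispatches on an HTTP-style method and a resource path over an in-memory store. The routes, status codes, and data here are illustrative; no real HTTP server or service is involved.

```python
# Sketch: the resource-plus-verb dispatch style of REST, in memory.
# Paths and payloads are made up; no real HTTP server is involved.
store = {}

def handle(method, path, body=None):
    """Dispatch on an HTTP-style verb applied to a resource path."""
    if method == "PUT":                  # create/replace the resource
        store[path] = body
        return 201, body
    if method == "GET":                  # read the resource
        if path in store:
            return 200, store[path]
        return 404, None
    if method == "DELETE":               # remove the resource
        return 204, store.pop(path, None)
    return 405, None                     # verb not supported

print(handle("PUT", "/rates/usd", "1.00"))   # (201, '1.00')
print(handle("GET", "/rates/usd"))           # (200, '1.00')
print(handle("GET", "/rates/eur"))           # (404, None)
```

The contrast with RPC is visible in the interface: instead of many named operations, there is a small fixed set of verbs applied to many resources, which is what lets REST be layered over SOAP or implemented without it.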
