
Clustering in GlassFish Version 3.1


By Tom Mueller, Bobby Bissett, Joe Fialli, and Mahesh Kannan, February 2011

One of the main new features of version 3.1 of the GlassFish Java EE Application Server is its clustering capability. The new clustering support brings forward the cluster capabilities from GlassFish 2, but with some new twists on the implementation. The synchronization that occurs when an instance is started uses a new algorithm, as does the software for replicating dynamic configuration changes that are made while an instance is running. This article describes the clustering capabilities of GlassFish version 3.1 and helps you get started deploying your application to a GlassFish cluster. Oracle GlassFish Server 3.1 is the Oracle-supported distribution of the open-source GlassFish version 3.1 application server. This article uses the name GlassFish version 3.1 to embrace both of them.


Table of Contents
Basic Concepts
Domain Administration Architecture
Clustering Architecture
Typical Failover Scenario
Group Management Service
Using the Command Line Interface for Monitoring Clusters
Memory Replication Configuration
Memory Replication Implementation
Application Server Installation
Domain Examination
Creating a Cluster Using the Command Line Interface
HTTP Load Balancer Plug-In
Conclusion
Acknowledgments
References

Basic Concepts
Clusters in an application server enhance scalability and availability, which are related concepts. In order to provide high availability of service, a software system must have the following capabilities:

- The system must be able to create and run multiple instances of service-providing entities. In the case of application servers, the service-providing entities are Java EE application server instances configured to run in a cluster, and the service is a deployed Java EE application.
- The system must be able to scale to larger deployments by adding application server instances to clusters in order to accept increasing service loads.
- If one application server instance in a cluster fails, it must be able to fail over to another server instance so that service is not interrupted. Although failure of a server instance or physical machine is likely to degrade overall quality of service, complete interruption of service is not acceptable in a high-availability environment.
- If a process makes changes to the state of a user's session, session state must be preserved across process restarts. The most straightforward mechanism is to maintain a reliable replica of session state so that, if a process aborts, session state can be recovered when the process is restarted. The principle is similar to that used in high-reliability RAID storage systems.

Taken together, these demands necessarily result in a system that sacrifices high efficiency to attain high availability. In order to support the goals of scalability and high availability, the GlassFish application server provides the following server-side entities:

Server Instance
A server instance is the Java EE server process (the GlassFish application server) that hosts your Java EE applications. As required by the Java EE specification, each server instance is configured for the various subsystems that it is expected to run.


Node [1]
A node is a configuration of the GlassFish software that exists on every physical host where a server instance runs. The life cycle of a server instance is managed by the Domain Administration Server (DAS), described later in this article, by local operating system services that are responsible for starting and managing the instance, or by both. Nodes come in two flavors: Secure Shell (SSH) and config. An SSH node provides centralized administration of instances using the SSH protocol. A config node provides just configuration information, without centralized administration. (A sketch contrasting the two node types follows these definitions.)

Cluster
A cluster is a logical entity that determines the configuration of the server instances that make up the cluster. Usually, the configuration of a cluster implies that all the server instances within the cluster have a homogeneous configuration. An administrator typically views the cluster as a single entity and uses the GlassFish Administration Console or a command-line interface (CLI) to manage the server instances in the cluster.
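As a rough sketch of the difference between the two node types, the commands below create an SSH node whose instances are managed centrally from the DAS, and a config node whose instances are created and started locally on the node's own host. The host names, install directory, node names, and cluster name are illustrative placeholders, and the cluster is assumed to exist already (creating a cluster is covered later in this article).

# On the DAS host: register a remote host as an SSH node, then create an
# instance on it centrally (host1.example.com, node1, etc. are placeholders).
asadmin create-node-ssh --nodehost host1.example.com --installdir /opt/glassfish3 node1
asadmin create-instance --node node1 --cluster cluster1 instance1

# For a config node, register the host from the DAS, then log in to that host
# and create and start the instance locally.
asadmin create-node-config --nodehost host2.example.com node2
# ...then, on host2.example.com itself:
asadmin create-local-instance --node node2 --cluster cluster1 instance2
asadmin start-local-instance instance2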

Nodes, server instances, and clusters can be created at GlassFish installation time, as described near the end of this article. Clusters and instances are organized into administrative domains, described below, that are characterized by the Domain Administration Server (DAS).

Domain Administration Architecture


Central to the GlassFish clustering architecture is the concept of an administrative domain. The administrative domain is a representation of access rights for an administrator or group of administrators. The following figure shows an overview of the domain administration architecture in the context of a single domain.

Figure 1: Domain Administration Architecture


An administrative domain is a dual-natured entity:

- Used by a developer, it provides a fully featured Java EE process in which to run your applications and services.
- Used in a real-world enterprise deployment, it provides a process that is dedicated to configuration and administration of other processes. In this case, an administrative domain takes the form of a Domain Administration Server (DAS) that you can use purely for administration purposes.

In the file system, an administrative domain is composed of a set of configuration files. At runtime, it is a process administering itself, independent server instances, clusters, applications, and resources. In general, high-availability installations require clusters, not independent server instances. The GlassFish application server provides homogeneous clusters and enables you to manage and modify each cluster as though it were a single entity. As shown in the figure, each domain has a Domain Administration Server (DAS), which is used to manage the Java EE server instances in the domain. The Administration Node at the center of the figure supports the DAS. Applications, resources, and configuration information are stored very close to the DAS. The configuration information managed by the DAS is known as the central repository. Each domain process must run on a physical host. When running, the domain manifests itself as a DAS. Similarly, every server instance must run on a physical host and requires a Java Virtual Machine. The GlassFish application server must be installed on each machine that runs a server instance.

Administrative Domains
Don't confuse the concepts administrative domain and network domain; the two are not related. In the world of Java EE, domain applies to an administrative domain: the machines and server instances that an administrator controls.

Two nodes are shown on the right side of the figure: SSH Node 1 and Config Node 2, each hosting two GlassFish server instances. Typically, all of the nodes in a domain will be of the same type, either SSH or config.

With an SSH node, the node and the instances can be managed through the use of commands that are sent from the DAS using the SSH protocol. The asadmin subcommands such as create-instance and start-instance (or the console equivalents) internally use SSH, via sshd on the remote host, to run the asadmin commands that perform the operation on the node. The asadmin start-cluster command provides the ability to start an entire cluster with a single command. In this way, the life cycle of instances can be administered centrally from the DAS.

With a config node, the asadmin subcommands to manage instances, such as create-local-instance and start-local-instance, must be run by logging in to the node itself.

For either type of node, data synchronization is accomplished using HTTP/S. To provide automatic startup and runtime monitoring (watchdogs) for instances, the asadmin create-service subcommand can be used to create an operating system service for an instance. Once created, the service is managed using operating system service management tools. With this in place, if a server instance fails, it is restarted without administrator or DAS intervention. If the DAS is unavailable when an instance is started, the instance is started using the cached repository information.

Several administrative clients are shown on the left side of Figure 1. The following administrative clients are of interest:

- Admin Console: The Admin Console is a browser-based interface for managing the central repository. The central repository provides configuration at the DAS level.
- Command-Line Interface: The asadmin command duplicates the functionality of the Admin Console. In addition, some actions can only be performed through asadmin, such as creating a domain. You cannot run the Admin Console unless you have a DAS, which presupposes a domain. The asadmin command provides the means to bootstrap the architecture.
- IDE: The figure shows the logo for the NetBeans IDE. Tools like the NetBeans IDE can use the DAS to connect with and manage an application during development. The NetBeans IDE can also support cluster-mode deployment. Most developers work within a single domain and machine, in which the DAS itself acts as the host of all the applications.
- REST Interface: A computer with an arbitrary management application can use the REST interface provided by the DAS to manage the domain (see the example below).
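For instance, assuming the DAS is running locally on the default administration port 4848 and remote secure administration is not yet required, the REST interface can be exercised with any HTTP client. This is only a sketch of two read-only requests, not a reference for the API:

# List the clusters known to the DAS, as JSON.
curl -s -H "Accept: application/json" http://localhost:4848/management/domain/clusters/cluster

# Read the top-level attributes of the domain itself.
curl -s -H "Accept: application/json" http://localhost:4848/management/domain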

Clustering Architecture
Figure 2 shows the GlassFish clustering architecture from a runtime-centric viewpoint. This view emphasizes the high-availability aspects of the architecture. The DAS is not shown in Figure 2, and the nodes with their application server instances are shown grouped as clustered instances.

Figure 2: Clustering Architecture Overview


At the top of Figure 2, various transports (HTTP, JMS, RMI-IIOP) are shown communicating with the clustered instances through a load-balancing tier. Custom resources, such as enterprise information systems, are connected through resource adapters based on the Java EE Connector architecture. All of the transports can be load balanced across the cluster, both for scalability and for fault tolerance: redundant instances remain available if a single instance fails. At the bottom of the figure is a High-Availability Application State Repository, an abstraction of session state storage. The repository stores session state, including HTTP session state, stateful EJB session state, and single sign-on information. In GlassFish 3.1, this state information is stored by means of memory replication.

High-Availability Database Alternative
Previous versions of GlassFish have offered a robust high-availability solution for application servers based on High-Availability Database (HADB) technology. However, its cost to implement and maintain is relatively high and, although freely available, it has not been offered in an open-source version. Requests for a lighter-weight, open-source alternative to accompany the open-source GlassFish application server resulted in a memory replication feature for GlassFish, starting with version 2. Memory replication relies on instances within the cluster to store state information for one another in memory, not in a database. The HADB option is not supported in GlassFish 3.1.

Memory Replication in Clusters
Several features are required of a GlassFish-compatible fault-tolerant system that maintains state information in memory. The system must provide high availability for HTTP and EJB session state. The memory replication feature takes advantage of the clustering feature of GlassFish to provide most of the advantages of the HADB strategy with much less installation and administrative overhead. In GlassFish version 2, cluster instances were organized in a ring topology. Each member in the ring sent memory state data to the next member in the ring, its replica partner, and received state data from the previous member. This replicated the entire state of one instance in only one other instance. In contrast, GlassFish 3.1 uses a consistent hash algorithm to determine which instance should replicate the individual sessions of another. Sessions from one instance are distributed among the other instances in the cluster. For example, if the load balancer is routing sessions S1, S2, and S3 to Instance 1, Instance 1 may replicate S1 and S2 to Instance 2 and S3 to Instance 3, based on the algorithm and the available instances. This leads to more efficient failover behavior, as described below.

Figure 3: Replication Topology: Active Sessions in Instance 1

Typical Failover Scenario


The GlassFish HTTP load balancer plug-in participates in the replication topology so that it can optimally reroute requests when an instance fails. However, the application server has been designed so that the load balancer tier requires no special information in order to perform well when a failure occurs. When the load balancer reroutes a session to a working instance, that instance obtains the stored replica session data it needs from another instance, if necessary. Failover requests from a load balancer fall into one of two cases:

- Case 1: The failover request lands on an instance that is already storing replica data for the session. In this case, the instance takes ownership of the session, and processing continues.
- Case 2: The failover request lands on an instance without the required replica data. In GlassFish version 2, the instance was required to broadcast a request to all instances to retrieve the data. With the algorithm used by GlassFish 3.1, the instance knows where the replica data is located and requests the data directly from the instance storing the replica session data. The instance storing the replica data deletes its copy after an acknowledgment message indicates that the data has been successfully received by the requestor. The data exchange is accomplished through GMS, which is described in more detail later.

Whenever an instance uses replica data to service a session (both Case 1 and Case 2), the replica data is first tested to make sure it is the current version.

Group Management Service


Group Management Service (GMS) provides dynamic membership information about a cluster and its member instances. Its design owes much to Project Shoal, a clustering framework based on Java technology. At its core, GMS uses Grizzly. GMS manages cluster shape-change events in GlassFish, coordinating such events as members joining, members shutting down gracefully, or members failing. Through GMS, memory replication takes the necessary action in response to these events and provides continuous availability of service. GMS is used in the GlassFish application server to monitor cluster health and supports the memory replication module. In summary, GMS provides support for the following:

- Cluster membership change notifications and cluster state
- Cluster-wide or point-to-point messaging
- Recovery-oriented computing, including recovery member selection and recovery chaining in case of multiple failures
- A service-provider interface (SPI) for plugging in group communication providers
- Timer migration: GMS selects an instance to pick up the timers of a failed instance, if necessary

Using the Command Line Interface for Monitoring Clusters


You can use the asadmin subcommand get-health to see the health of instances within a cluster. The following shows example output where one instance has failed, one instance has not been started, and the others are running normally.

bin/asadmin get-health myCluster
instance01 failed since Thu Feb 24 11:03:59 EST 2011
instance02 not started
instance03 started since Thu Feb 24 11:03:08 EST 2011
instance04 started since Thu Feb 24 11:03:08 EST 2011
Command get-health executed successfully.
If the state of an instance is reported as not started even though the instance appears to be operational in its server log, there may be an issue with UDP multicast between that instance and the DAS machine. To diagnose these kinds of issues, a new asadmin subcommand, validate-multicast, has been introduced in GlassFish 3.1. This command can be run on two or more machines to verify that multicast traffic from one is seen by the other(s). The following shows the command output when the command is run on hosts host1 and host2; in this output, we see that they can communicate with each other. If host1 only received its own loopback message, then multicast would not be working between these machines as currently configured.

bin/asadmin validate-multicast
Will use port 2048
Will use address 228.9.3.1
Will use bind interface null
Will use wait period 2,000 (in milliseconds)
Listening for data...
Sending message with content "host1" every 2,000 milliseconds
Received data from host1 (loopback)
Received data from host2
Exiting after 20 seconds. To change this timeout, use the --timeout command line option.
Command validate-multicast executed successfully.
In the above, the default values were used. When diagnosing a potential issue between two instances, use the subcommand parameters to specify the same multicast address and port that are being used by the instances.
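For example, if the instances in question are configured to use multicast address 228.9.3.1 and port 2048, the check can be pinned to those values explicitly. The option names below are an assumption about the 3.1 subcommand and should be confirmed with asadmin validate-multicast --help on your installation:

# Run on each machine that hosts an instance (and on the DAS machine); each run
# should report data received from all of the others.
bin/asadmin validate-multicast --multicastaddress 228.9.3.1 --multicastport 2048 --timeout 30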

Memory Replication Configuration


To configure cluster memory replication, you must perform three steps:

1. Create an administrative domain.
2. Create a cluster and its instances, as described later in this article. When the cluster is created, a cluster configuration is created within domain.xml. The configuration sets defaults for replication, enables GMS, and sets the persistence-type property to replicated.
3. Deploy your applications with the availability-enabled property set to true.

These steps can be accomplished with either the GUI or the CLI. Some additional tuning may be required. For example, the default heap size for the cluster configuration is 512 MB. For an enterprise deployment, this value should be increased to 1 GB or more. This is easily accomplished through the domain admin server by setting JVM options with the following tags:

<jvm-options>-Xmx1024m</jvm-options>
<jvm-options>-Xms1024m</jvm-options>
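The same change can also be made from the command line rather than by editing domain.xml. The sketch below uses the asadmin JVM-option subcommands against the cluster's configuration; the configuration name cluster1-config is an assumption based on the default naming for a cluster called cluster1, and the exact quoting needed for the leading dashes depends on your shell:

# Remove the default heap setting for the cluster configuration and replace it
# with larger values (cluster1-config is a placeholder name).
bin/asadmin delete-jvm-options --target cluster1-config "-Xmx512m"
bin/asadmin create-jvm-options --target cluster1-config "-Xmx1024m:-Xms1024m"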
You also need to be sure to add the <distributable /> tag to your web application's web.xml file. This tag identifies the application as being cluster-capable. The requirement to insert the <distributable /> tag is a reminder to test your application in a cluster environment before deploying it to a cluster. Some applications work well when deployed to a single instance but fail when deployed to a cluster. For example, before an application can be successfully deployed in a cluster, any objects that become part of the application's HTTP session must be serializable so that their states can be preserved across a network. Non-serializable objects may work when deployed to a single server instance but will fail in a cluster environment. Examine what goes into your session data to ensure that it will work correctly in a distributed environment.
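Putting these steps together, a deployment of a cluster-ready web application with availability enabled might look like the following. The archive name myapp.war is a placeholder, and the application's web.xml is assumed to already contain the <distributable /> element:

# Deploy to the cluster with session availability (memory replication) enabled.
bin/asadmin deploy --target cluster1 --availabilityenabled=true myapp.war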

Memory Replication Implementation


In GlassFish 3.1, the memory replication feature is implemented using GMS. GMS uses Grizzly transport for point-to-point communications and UDP multicast broadcast for one-to-many communications. Therefore, cluster topologies are limited to a single subnet at this time. There are future plans to provide an alternative communication mechanism to UDP multicast.
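To see which multicast address and port GMS will use for a given cluster, the asadmin dotted-names interface can be queried. The attribute names below reflect how the cluster element is commonly stored in domain.xml for 3.1; this is an assumption, so confirm the names with a wildcard get on your own system:

# Inspect the GMS multicast settings recorded for cluster1.
bin/asadmin get "clusters.cluster.cluster1.gms-multicast-address"
bin/asadmin get "clusters.cluster.cluster1.gms-multicast-port"

# If the attribute names differ on your installation, list them all.
bin/asadmin get "clusters.cluster.cluster1.*"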

Application Server Installation


The GlassFish Application Server is available in several distributions, each offered in several installation types. Choose the distribution format that is most appropriate for you. The distributions include:

- The Java EE full distribution, including support for web applications, EJBs, and all functionality defined by Java EE
- The Java EE web distribution, including the Java EE web profile that supports web applications
- The Java EE SDK, which includes GlassFish (either full or web profile) as well as Java EE samples

These distributions are available either in English or with multiple languages included. The installation types include an executable graphical installer for Windows, an executable graphical installer for Unix or Unix-like platforms, and a ZIP file containing an installation image.

To install the ZIP file form of the GlassFish Application Server:

1. Type the following command:

unzip -q filename.zip
For example:

unzip -q glassfish-3.1.zip
2. This unpacks GlassFish into a glassfish3 installation directory. The installation image is already configured with a domain called domain1, which supports clustering. [2]

To install the GlassFish Application Server using an executable installer, run the downloaded file and enter the requested information. The installer allows you to choose the installation directory, choose whether the update tool should be included, and choose whether to create an initial domain.

Domain Examination
You can learn about and manage domains from the CLI (the asadmin command) or the GUI (the GlassFish Server Administration Console).

Examining Domains From the Command-Line Interface

The installation step created a glassfish/domains subdirectory in the installation directory. This directory stores all the GlassFish domains. You can interact with domains from the CLI with the asadmin command, located in the bin subdirectory beneath the installation directory. The asadmin command can be used in batch or interactive mode. For example, you can list all domains and their statuses with the following command:

bin/asadmin list-domains
If you haven't started domain1 yet, the above command issues the following output:

domain1 not running


To start domain1, type the following command:

bin/asadmin start-domain domain1


The argument domain1 is optional if only one domain exists. The command starts domain1 and provides information about the location of the log file, the domain name, and the administrative ports being used.

Examining Domains With the GlassFish Server Administration Console

As an alternative to the asadmin command, you can use the GlassFish Server Administration Console to control the Application Server. The steps below describe how to start the console. The Administration Console makes it easy to deploy applications from .war or .ear files. From the console, you can monitor resource use, search log files, start and stop server instances and clusters, access online help, and perform many other administrative and server management functions. To access the console:

1. From the GlassFish installation directory, start the domain by typing the following command:

bin/asadmin start-domain domain_name


For example:

bin/asadmin start-domain domain1


The command starts the GlassFish application server in the domain and provides information in the command shell window, including the port that is needed to access the console.

2. Start the Administration Console by directing your web browser to the following URL:

http://hostname:port
The default port is 4848. For example:

http://kindness.example.com:4848


If the browser is running on the machine on which the Application Server was installed, specify localhost for the host name. On Windows, start the Application Server Administration Console from the Start menu.

With the default configuration, which has no password for the admin user, the browser is taken directly to the home page for the console.
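Before administering the server from other machines, it is a good idea to set an admin password and enable secure administration. A minimal sketch, assuming the default domain1 and the default admin user:

# Set a password for the admin user (the command prompts for the old and new passwords).
bin/asadmin change-admin-password

# Allow remote administration over SSL; the domain must be restarted for this to take effect.
bin/asadmin enable-secure-admin
bin/asadmin restart-domain domain1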

Creating a Cluster Using the Command Line Interface


Once the domain administration server (DAS) has been started with the start-domain command, a cluster with two instances that reside on the same host as the DAS can be created using the following procedure.

1. From the GlassFish installation directory, create a cluster by typing the following command:

$ bin/asadmin create-cluster cluster1
Command create-cluster executed successfully.


2. Create two instances for the cluster. The create-local-instance command creates the instance on the system where the command is run. Since this system is the same as that of the DAS, the instance will use a default config node called localhost-domain1. To create an instance on an SSH node on another system, use the create-instance command with the --node option. The output of the command displays the ports that are automatically assigned for the instance.

$ bin/asadmin create-local-instance --cluster cluster1 instance1
Rendezvoused with DAS on localhost:4848.
Port Assignments for server instance instance1:
JMX_SYSTEM_CONNECTOR_PORT=28686
JMS_PROVIDER_PORT=27676
HTTP_LISTENER_PORT=28080
ASADMIN_LISTENER_PORT=24848
JAVA_DEBUGGER_PORT=29009
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
OSGI_SHELL_TELNET_PORT=26666
HTTP_SSL_LISTENER_PORT=28181
IIOP_SSL_MUTUALAUTH_PORT=23920
Command create-local-instance executed successfully.

$ asadmin create-local-instance --cluster cluster1 instance2
Rendezvoused with DAS on localhost:4848.
Using DAS host localhost and port 4848 from existing das.properties for node
localhost-domain1. To use a different DAS, create a new node using create-node-ssh or
create-node-config. Create the instance with the new node and correct host and port:
asadmin --host das_host --port das_port create-local-instance --node node_name instance_name.
Port Assignments for server instance instance2:
JMX_SYSTEM_CONNECTOR_PORT=28687
JMS_PROVIDER_PORT=27677
HTTP_LISTENER_PORT=28081
ASADMIN_LISTENER_PORT=24849
JAVA_DEBUGGER_PORT=29010
IIOP_SSL_LISTENER_PORT=23821
IIOP_LISTENER_PORT=23701
OSGI_SHELL_TELNET_PORT=26667
HTTP_SSL_LISTENER_PORT=28182
IIOP_SSL_MUTUALAUTH_PORT=23921
Command create-local-instance executed successfully.


3. Start the cluster. This command starts both instances in the cluster.

$ bin/asadmin start-cluster cluster1
Command start-cluster executed successfully.


4. This completes creation and startup of the cluster. Now applications or resources can be deployed to the cluster, and the cluster can be managed with various asadmin commands. Here are some examples.

$ bin/asadmin list-instances -l
NAME        HOST       PORT   PID    CLUSTER   STATE
instance1   localhost  24848  15421  cluster1  running
instance2   localhost  24849  15437  cluster1  running
Command list-instances executed successfully.

$ bin/asadmin collect-log-files --target cluster1
Log files are downloaded for instance1.
Log files are downloaded for instance2.
Created Zip file under /scratch/trm/test/glassfish3/glassfish/domains/domain1/collected-logs/log_2011-02-24_08-3225.zip.
Command collect-log-files executed successfully.
The last command collects the log files from the instances in the cluster. For complete information about the cluster, it is also worth examining the DAS log file. To stop the cluster, use the asadmin stop-cluster command, as shown below.
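For example, using the names created above, the cluster can be stopped and, if it is no longer needed, torn down entirely:

# Stop both instances in the cluster.
bin/asadmin stop-cluster cluster1

# Optional cleanup: remove the (stopped) instances, then the cluster definition.
bin/asadmin delete-local-instance instance1
bin/asadmin delete-local-instance instance2
bin/asadmin delete-cluster cluster1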

HTTP Load Balancer Plug-In


A load balancer distributes the workload among multiple application server instances, increasing the overall throughput of the system. Although the load balancer tier requires no special knowledge when routing session requests to server instances, it does need to maintain a list of available nodes. If a node fails to reply to a request as expected, the load balancer picks another node. Load balancers can be implemented in software or hardware. Refer to information supplied by hardware vendors for details about implementing their devices. An HTTP load balancer plug-in is available for the GlassFish version 3.1 Application Server. The plug-in works with Oracle iPlanet Web Server as well as Apache Web Server and Microsoft IIS. The load balancer also enables requests to fail over from one server instance to another, contributing to high-availability installations. For more information about how to set up the load balancer plug-in, refer to the online help available from the GlassFish Server 3.1 Admin Console. For more detailed information, see Chapter 7, Configuring Web Servers for HTTP Load Balancing, and Chapter 8, Configuring HTTP Load Balancing, in the GlassFish Server 3.1 High Availability Administration Guide.

Conclusion
The GlassFish version 3.1 Application Server provides a flexible clustering architecture composed of administrative domains, domain administration servers, server instances, and physical machines. The architecture combines ease of use with a high degree of administrative control to improve high availability and horizontal scalability.

- High availability: Multiple server instances, capable of sharing state, minimize single points of failure, particularly when combined with load-balancing schemes. In-memory replication of server session data minimizes disruption for users when a server instance fails.
- Horizontal scalability: As user load increases, additional machines, server instances, and clusters can be added and easily configured to handle the increasing load. GMS eases the administrative burden of maintaining a high-availability cluster.

Acknowledgments
Thank you to Kedar Mhaswade, Prashanth Abbagani, and Rick Palkovic, who authored the original article about clustering in GlassFish 2 upon which this article was based.

References
- Oracle GlassFish Server 3.1 High Availability Administration Guide
- Oracle GlassFish Server 3.1 Collection of Guides
- Download Page for the GlassFish Community
- The Aquarium: the GlassFish Community blog
- Oracle GlassFish Server page, with links to the support offerings
- Java EE At a Glance: overview of Java EE, with downloads, documentation, and training

[1] The node agent entity that was available in GlassFish 2 has been replaced by the node entity in GlassFish 3.1.


[2] GlassFish 3.1 no longer has the concept of domain profiles, such as developer and cluster from GlassFish 2. Any domain can support clustering or any other feature as long as the necessary software modules are installed.
