
Application Flow Acceleration Technology


What Is AppFlow?
AppFlow provides application-level acceleration for three business-critical applications: Microsoft's Common Internet File System (CIFS), Microsoft Exchange traffic, and HTTP. AppFlow builds on TCP acceleration by anticipating client-server requests, accelerating data across the WAN, and reducing the number of round-trip times (RTTs) necessary to transfer data with these applications.

AppFlow Requires TCP Acceleration


You must enable TCP acceleration on both the client-side and the server-side WX platforms before you can use AppFlow.

© 2008 Juniper Networks, Inc. All rights reserved.


AppFlow Builds on Existing Technologies


AppFlow is one of a series of optimization technologies the WX platform provides. The fundamental layer of optimization is the compression capability of Molecular Sequence Reduction (MSR) and Network Sequence Caching (NSC). These WX technologies allow you to place more data in the WAN pipe. The next level of WX acceleration, Packet Flow Acceleration (PFA), optimizes data transfers that rely on TCP as the transport mechanism. TCP acceleration can reduce the overall number of round-trip requests and responses often necessary for hosts to transmit or receive data using TCP. At the top layer of these optimization techniques is the acceleration of specific applications, which use the following protocols:

- Common Internet File System (CIFS): This protocol is Microsoft's file services transport.
- Messaging Application Programming Interface (MAPI): This protocol transfers data between Microsoft Exchange servers and Outlook clients.
- Hypertext Transfer Protocol (HTTP): This protocol transfers Web pages to a client's browser.

Although each of these three protocols uses TCP as its transport mechanism, each presents unique challenges that hamper PFA's efforts to accelerate it. We examine each of these protocols in more detail over the remainder of this chapter so you can better understand these challenges and the benefits offered by AppFlow.


CIFS Protocol Details: Part 1


CIFS is an extremely chatty protocol developed by Microsoft for file services. When you map a network drive in Windows Explorer or copy and paste a file from a network drive to your PC, your host uses CIFS. CIFS is a LAN-oriented protocol, originally designed for workgroups and small domains whose members all reside on the same local network. Local host-to-host transfer of files on a LAN using CIFS, even of large files, does not affect WAN traffic, for obvious reasons. However, as Windows domains have grown to span multiple networks in multiple locations, using CIFS as a means of file transfer can have a tremendous impact on WAN performance. Users in Windows networks often have numerous drive mappings to remote file servers and send and retrieve documents and other files over WAN connections using CIFS. CIFS uses a ping-pong, request-and-response model between clients and servers, an architecture that might be acceptable on a LAN but is particularly inefficient across the WAN. CIFS breaks files into small blocks of data and sends them one at a time to the destination host. The destination host must acknowledge each block of data before the sender sends the next block. Any delay on the WAN therefore delays each request and response.


CIFS Protocol Details: Part 2


The packet capture on the slide shows some of the request-and-response process that occurs between two hosts transferring a file using CIFS:

- The client receiving the file (192.168.1.25) makes a Read AndX (transfer) Request for a block of data from the server sending the file (192.168.1.44).
- The client host must receive the entire block before making a request for the next block.
- This process continues until the file transfer is complete.

The entire transfer of a 1-MB file requires a total of 1639 packets.

Not illustrated in the packet capture is the tremendous amount of overhead required to begin the actual data transfer. CIFS uses a large number of packets simply to allow a client access to a shared folder and to view that folder's contents. This verification process must conclude before the client can even begin to download a file. File transfers using CIFS involve examinations of the client's access privileges, the client's operating system, the file systems of both the client and the server, and a host of additional details contained within a series of packet exchanges not covered here. Needless to say, any latency on a WAN link delays all of these packets, as well as the actual data itself, for clients opening or downloading files from remote sites using CIFS.
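The cost of this serial request-and-response model can be sketched with a little arithmetic. The block size below is an illustrative assumption, not a value taken from the capture:

```python
# Rough model of a serial CIFS transfer: one Read AndX request/response
# pair per block, so every block costs at least one WAN round trip.

def serial_round_trips(file_bytes, block_bytes):
    """Number of request/response round trips to move a file block by block."""
    return -(-file_bytes // block_bytes)  # ceiling division

# Illustrative example: a 1-MB file moved in 64-KB blocks.
blocks = serial_round_trips(1_000_000, 64 * 1024)
print(blocks)                      # 16 round trips at minimum

# With 100 ms of round-trip WAN latency, those round trips alone add:
print(blocks * 0.100, "seconds")   # 1.6 seconds of pure waiting
```

The setup and verification exchanges described above add still more round trips before the first Read AndX is even sent.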


CIFS Acceleration: The Solution


CIFS acceleration provides a mechanism for pipelining the data blocks. The WX devices request data blocks as quickly as possible to fill the available WAN capacity. Because CIFS handles file sharing and transfers, a WX device knows that when a client requests the first block of data for a specific file, that client will subsequently ask for the remaining blocks. The WX device can therefore begin retrieving all data blocks for the file even before the client requests them. CIFS acceleration separates each file transfer process across the WAN into three sessions:

- Session 1: Between the requesting client and the local WX platform;
- Session 2: Between the two WX devices; and
- Session 3: Between the WX platform and the sending host.

The WX platform near the sending host issues ACK messages to that host at exactly the rate needed to fill the WAN pipe with compressed data. The WX device on the sending host's side of the transfer then transmits this data across the WAN to the destination WX device using the configured transport protocol (UDP or IPComp). Once the data arrives at the destination device, the requesting client's WX device delivers it to the destination host. Continued on next page.


CIFS Acceleration: The Solution (contd.)


From a technical standpoint, the client and the host do not actually communicate directly with one another, only with their own local WX devices. However, CIFS acceleration is completely transparent to client and host machines, just as TCP acceleration is.

CIFS-Compatible Platforms
CIFS acceleration supports the more recent Windows operating systems, including Windows 2000, XP, Vista, and Windows Server 2003 clients and servers. CIFS acceleration also supports the Samba v3.0 server for Linux. CIFS acceleration does not accelerate traffic from Windows NT, Windows 95, or older clients and servers.


CIFS File Transfer Without AppFlow


As a block-based protocol, CIFS differs from a streaming protocol such as FTP. When a client requests a file transfer using FTP, the server streams the entire file to the client at once, pausing only to wait for acknowledgment packets based on the client's TCP window size. CIFS, however, breaks bulk file transfers into many small data blocks (from 64 KB down to as little as 256 bytes) and transmits these blocks serially. The client application and server exchange messages to acknowledge and verify each block that is transmitted, and both hosts must wait for an acknowledgment of each block of data before requesting or transmitting the next. These acknowledgments are in addition to any similar ones provided by TCP.


CIFS File Transfer with AppFlow


AppFlow accelerates CIFS reads and writes by prefetching blocks of data that the client will request. Logically, if a client requests the first block of a file, it will subsequently request all remaining blocks as well. When a CIFS client opens a file, it initiates a series of read requests. The WX platforms at the remote and central sites work together to determine that a client is opening a file and, based on the available WAN bandwidth, issue the appropriate number of reads needed to fill the WAN link. The slide depicts the following scenario:

1. When the client requests the first block of data, WX-2 proactively reads the rest of the file from the server.
2. WX-2 then pipelines the data back to WX-1.
3. By the time the client requests subsequent data blocks for the remainder of the file, those blocks are already available on WX-1, which can then supply them to the client as quickly as possible across the LAN.
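The read-ahead behavior in these steps can be sketched as a simple simulation. The class, block layout, and counters below are hypothetical illustrations of the prefetch idea, not the WX implementation:

```python
# Sketch of CIFS read-ahead: after the first block of a file is requested,
# the remaining blocks are fetched in the background so that later client
# requests are served from a local buffer instead of crossing the WAN.

class ReadAheadBuffer:
    def __init__(self, fetch_block, total_blocks):
        self.fetch_block = fetch_block      # function that reads one block from the server
        self.total_blocks = total_blocks
        self.buffer = {}                    # blocks already pipelined to the client side
        self.server_reads = 0               # how many reads went to the server

    def read(self, n):
        if n == 0 and not self.buffer:
            # First request: proactively pull the whole file (steps 1 and 2).
            for i in range(self.total_blocks):
                self.buffer[i] = self.fetch_block(i)
                self.server_reads += 1
        # Step 3: subsequent requests are served locally from the buffer.
        return self.buffer[n]

# Hypothetical 4-block file stored on the "server".
file_blocks = [b"block0", b"block1", b"block2", b"block3"]
buf = ReadAheadBuffer(lambda i: file_blocks[i], total_blocks=4)

data = [buf.read(i) for i in range(4)]     # client still reads serially
print(b"".join(data))                      # full file reassembled
print(buf.server_reads)                    # 4
```

The key point is that the serial reads now happen between the client and its nearby buffer at LAN speed, while the WAN carries one pipelined burst.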

The process for CIFS writes is the same, with the WX platforms acknowledging the appropriate number of writes to keep the WAN link full. A 1-MB document, for example, takes about 20 seconds to copy over a 512-Kbps link with 100 ms of latency; the same document takes only about 2 seconds with AppFlow.
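These figures can be sanity-checked with a simple model. The assumptions below are not from the course material: a 512-kbit/s link, 100 ms of round-trip latency, 64-KB blocks, and one request/response round trip per block:

```python
# Back-of-the-envelope check of the serial (non-AppFlow) CIFS copy time.
# Assumed values: 512 kbit/s link, 100 ms RTT, 64-KB blocks, one round
# trip per block.

LINK_BPS = 512_000          # 512 kbit/s
RTT = 0.100                 # seconds per block round trip
FILE_BYTES = 1_000_000      # 1-MB document
BLOCK_BYTES = 64 * 1024

blocks = -(-FILE_BYTES // BLOCK_BYTES)          # ceiling division -> 16 blocks
serialize = FILE_BYTES * 8 / LINK_BPS           # time to clock the bits onto the wire
waiting = blocks * RTT                          # per-block request/response stalls

print(round(serialize + waiting, 1), "seconds") # ~17.2 s, in the ballpark of 20 s
```

The 2-second AppFlow figure reflects both the removal of the per-block stalls and the compression of the data stream, which this simple model does not capture.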


The Client-Side WX Device Performs All CIFS Optimizations


The optimization work for CIFS is performed on the client-side WX platform, meaning that all statistics for CIFS acceleration appear on the client side; the server-side device has no CIFS statistics. Note that any device can be a server, a client, or both, depending on who is mounting the file system. For definition purposes, a client is a host requesting a file, mounting a network drive, or viewing remote drive contents or other file services through CIFS. A server in a CIFS environment is the host that contains the file or owns the file share.


CIFS Uses SMB


CIFS request and response messages use TCP port 139 or 445. CIFS uses the Server Message Block (SMB) protocol, which originally ran over Microsoft's Network Basic Input/Output System (NetBIOS). SMB over NetBIOS uses TCP port 139 (also known as native SMB). SMB over TCP/IP uses TCP port 445 (known as naked or raw SMB transport). Windows 2000 and later and Samba servers use SMB over TCP/IP. SMB over NetBIOS adds a few more handshakes compared to SMB over TCP/IP. Note that the request-and-response architecture forces a round trip for every transaction.

Application Definitions
The default CIFS application definition includes both port 445 and port 139. When creating new CIFS-based definitions, you should specify both ports.
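A CIFS application definition is essentially a port match. The helper below is a hypothetical illustration of that classification logic, not WX configuration syntax:

```python
# Illustrative classifier: CIFS/SMB traffic is identified by its TCP port.
CIFS_PORTS = {139, 445}   # 139 = SMB over NetBIOS, 445 = SMB over TCP/IP

def is_cifs(dst_port):
    """True when a TCP flow's destination port matches the CIFS definition."""
    return dst_port in CIFS_PORTS

print(is_cifs(445))  # True  (raw SMB)
print(is_cifs(139))  # True  (native SMB over NetBIOS)
print(is_cifs(80))   # False (HTTP, covered by a different definition)
```

Matching both ports, as the default definition does, ensures that both SMB transports receive CIFS acceleration.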


CIFS Transaction with Common SMB Messages


The packet capture on the slide illustrates a simple CIFS conversation between a server and a client in which the client maps a network drive to the server. Beyond the simple drive mapping, the client also queried the directory and then disconnected the mapped drive. Notice the SMB messages sent back and forth to accomplish this event. The details of the messages are the following:

- Negotiate Protocol Request: Starts all CIFS client-server conversations and sets the dialect of CIFS;
- Session Setup: Used for authentication of the client-server conversation;
- Tree Connect: Connects to a share;
- Trans2 Request and Trans2 Response: Often used to view attributes of the file; and
- Tree Disconnect: Disconnects from a share.

Other frequently used commands include the following:

- NT Notify: Indicates changes on the server;
- NT Create AndX: Opens or creates a file;
- Write AndX (WriteX): Writes data to the server; and
- Read AndX (ReadX): Reads data from the server.

Continued on next page.
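On the wire, each of these messages is identified by a one-byte SMB command code. The codes below are taken from Microsoft's published CIFS documentation (verify against the MS-CIFS specification before relying on them); the point of the sketch is that ReadX and WriteX are the bulk-data commands an accelerator targets:

```python
# One-byte SMB1 command codes (per Microsoft's CIFS documentation) for the
# messages discussed above. ReadX/WriteX carry the bulk file data, so they
# are the commands that benefit from pipelining.
SMB_COMMANDS = {
    0x72: "Negotiate Protocol",
    0x73: "Session Setup AndX",
    0x75: "Tree Connect AndX",
    0x32: "Trans2",
    0x71: "Tree Disconnect",
    0xA2: "NT Create AndX",
    0x2E: "Read AndX",
    0x2F: "Write AndX",
}

BULK_DATA_COMMANDS = {0x2E, 0x2F}  # ReadX and WriteX

def is_bulk_transfer(command_code):
    """True when the SMB command moves file data and so benefits from read-ahead."""
    return command_code in BULK_DATA_COMMANDS

print(SMB_COMMANDS[0x2E], is_bulk_transfer(0x2E))  # Read AndX True
print(SMB_COMMANDS[0x73], is_bulk_transfer(0x73))  # Session Setup AndX False
```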


CIFS Transaction with Common SMB Messages (contd.)


WriteX and ReadX are the two most useful commands for acceleration, because these commands carry the data during a file copy operation. CIFS uses ReadX and WriteX SMBs to receive data from or send data to the server (always from the client's perspective). ReadX and WriteX requests include the file ID, which data blocks to pull or push, and where in the file those blocks reside.


SMB Secures CIFS Transactions


SMB signing attaches a hash value to each packet so that the receiving host can verify that a packet coming from a server has not changed while en route, which helps reduce the chance of man-in-the-middle attacks. SMB signing was first available in Windows NT 4.0 Service Pack 3 and Windows 98, and has since been included in Windows 2000, Windows XP, and Windows Server 2003. These operating systems support message authentication between clients and servers by placing a digital signature into each SMB; both the client and the server verify this signature. To use SMB signing, you must either enable it or require it on both the SMB client and the SMB server. If SMB signing is enabled on a server, clients that are also enabled for SMB signing use the packet-signing protocol during all subsequent sessions. If SMB signing is required on a server, a client cannot establish a session unless it is at least enabled for SMB signing, and the client must then sign its packets. If the policy is disabled, the client is not required to sign packets.
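The idea can be sketched as follows. This is a deliberately simplified illustration, not the actual SMB algorithm: real SMB signing seeds an MD5 hash with the negotiated session key and also mixes in a per-message sequence number to prevent replay (see Microsoft's SMB documentation for the exact construction):

```python
import hashlib

# Simplified illustration of SMB-style packet signing: sender and receiver
# share a session key; the signature is a truncated hash over key + message.
# The real protocol also includes a per-message sequence number.

def sign(session_key: bytes, message: bytes) -> bytes:
    return hashlib.md5(session_key + message).digest()[:8]  # 8-byte signature field

def verify(session_key: bytes, message: bytes, signature: bytes) -> bool:
    return sign(session_key, message) == signature

key = b"negotiated-session-key"    # hypothetical shared secret
pkt = b"SMB Read AndX response payload"
sig = sign(key, pkt)

print(verify(key, pkt, sig))                  # True: packet unmodified
print(verify(key, pkt + b"tampered", sig))    # False: in-flight change detected
```

Because a middlebox that rewrites signed packets would invalidate the signature, signing is exactly what complicates CIFS acceleration, as the next section explains.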


Support for SMB Signing


WXOS software Release 5.5 (and later) supports SMB signing by logging in to a server with a pre-existing user account. You must create this account on the server (or the domain) and assign appropriate privileges to files, directories, and resources, just as you would for an individual user who needs access to those resources. This setup allows the WX device to simulate a user's access and apply CIFS acceleration to SMB-signed files. However, the WX platform does not support SMB2, the preferred version of the protocol used by Windows Vista hosts. For transactions between Vista hosts and non-Vista hosts, the WX platform downgrades the version to SMB (as part of the negotiation process between hosts using CIFS). The WX device cannot apply CIFS acceleration between two Vista hosts (because they default to SMB2); such traffic is simply passed through unaccelerated.

SMB Signing Must Be Disabled (or Enabled but Not Required)


For any version of WX software prior to Release 5.5, you must disable SMB signing on hosts using CIFS, or set the option to enable but not require signing. For a more in-depth explanation of SMB signing, refer to Microsoft's Knowledge Base at http://support.microsoft.com/default.aspx?scid=kb;en-us;887429, Overview of Server Message Block signing, Article ID 887429.


SMB Signing: Example


The slide shows an Ethereal packet capture of the SMB protocol with SMB signing enabled. When SMB signing is disabled, the Signature field contains all zeros.


AppFlow Must See the Flow to Accelerate It


AppFlow cannot accelerate a traffic flow unless the WX platform sees the start of the flow. By default, flow-reset mode is on, so the WX device resets eligible CIFS traffic flows if it receives a packet for the flow within 900 seconds (15 minutes) of the tunnel establishment time. This timing allows the WX device to accelerate CIFS traffic flows each time it is restarted. You can configure flow reset through the command line, and you can set it per tunnel or globally. The following commands control flow resets:

- flow-reset start duration seconds (5 to 86400)
- flow-reset stop
- show flow-reset [configuration | status]

For more information on flow reset, refer to the WX/WXC Operator's Guide.


Centralized Exchange and the WAN


In a centralized environment with all users in a single location, Microsoft's Exchange application performs well because communications take place over the LAN, where redundant transmissions do not matter. Once the enterprise extends to include remote locations, however, Exchange's inefficiencies begin to affect performance, placing a significant burden on WAN resources and slowing user productivity. In the example on the slide, with servers in a centralized location, Exchange sends the exact same e-mail from the company CEO to each recipient in all remote offices, consuming bandwidth across each of those WAN connections. Additionally, Exchange clients such as Outlook and Outlook Express issue various repetitive transmissions to synchronize accounts, particularly when a user closes Outlook. Not only does this process monopolize WAN bandwidth, often at the expense of more critical applications, but it also reduces productivity through long delays and by freezing user desktops while downloading or sending messages, opening attachments, or shutting down and synchronizing Exchange and Outlook.


Distributed Exchange and the WAN


Businesses have addressed these problems with a decentralized architecture, deploying Exchange servers at remote sites so e-mail messages can be stored and accessed locally. Using the previous example, under the distributed model the CEO would send the message to the central office Exchange servers. Each Exchange server in a remote office would need to retrieve the message only once and then distribute it locally to each Outlook client over the LAN. This architecture dramatically reduces WAN traffic, not to mention the impact of latency.

Over time, however, the widespread deployment of Exchange servers has created significant management problems. Patches, upgrades, and updates are difficult to perform on distributed servers, and regulatory requirements add to the complexity. The Sarbanes-Oxley Act, for example, requires companies to archive e-mail for a minimum of five years, and backing up multiple distributed Exchange servers poses a tremendous administrative challenge. As IT department budgets shrink, the decentralized Exchange architecture has become a nightmare for large companies to maintain.

Centralizing Exchange servers in a single facility greatly reduces management and administrative complexity. Patches, upgrades, and backups are easy to perform, and the business requires fewer servers, reducing capital expenses and further easing the IT department's administrative burden. For companies that have completed this centralization, however, the old WAN-performance problems have returned; businesses are back where they started.


Exchange Acceleration: The Problem


Microsoft Exchange suffers from the same problems as CIFS. Exchange uses MAPI (TCP port 135), which is a chatty protocol. MAPI uses a request-and-response model between clients and servers, breaking files into small blocks of data and sending them one at a time to the destination host. The destination host must acknowledge each block of data before the next block can be sent, creating a ping-pong effect. This inefficiency results in poor performance, even on low-latency links.


Exchange Acceleration: The Solution


Similar in nature to CIFS acceleration, Exchange acceleration provides a method for accelerating bulk writes and reads by sending as many writes and reads in quick succession to the local WX device as needed to fill the available WAN capacity. The WX platform near the source sends ACK messages to the source at exactly the rate needed to fill the WAN pipe with compressed data. The source WX device then sends the data across the WAN to the destination WX device using UDP or IPComp in its service tunnel. Once the data arrives at the destination device, the WX device can deliver it to the destination hosts at LAN speed.

Best in Centralized Deployments


While the WX platform works within a distributed Exchange server environment, Exchange acceleration works best in a centralized deployment with the clients working in online mode; the next slide elaborates on this topic. The WX platform accelerates traffic between all Outlook clients and Exchange servers, with one exception: Exchange acceleration does not support mail traffic between Outlook 2003 clients and Exchange 2003 servers. The same is true for Outlook 2007 clients and Exchange 2007 servers. However, the WX platform can still optimize such traffic by accelerating it with TCP acceleration and compressing it with MSR or NSC. Microsoft itself offers an acceptable means of accelerating traffic between Exchange 2003 servers and Outlook 2003 clients. Continued on next page.


Best in Centralized Deployments (contd.)


Microsoft also offers a less-than-optimal solution to accelerate mail traffic between older versions of Exchange and Outlook clients. Exchange acceleration on the WX platform offers a better solution, provided you disable Microsoft's mail compression on each Exchange server. To perform this task, you must edit the server's registry, locate HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, and change the Rpc Compression Enabled value to 0 (zero). You must reboot the Exchange server to disable compression.
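The registry change above could be captured in a .reg file such as the following sketch, built only from the path and value named above (back up the key before importing anything like this):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\ParametersSystem]
"Rpc Compression Enabled"=dword:00000000
```

Remember that the change does not take effect until the Exchange server is rebooted.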


Exchange and Outlook Communication Modes


Let us take a closer look at how Exchange and Outlook operated prior to 2003. While working in online mode on the LAN, users saw no problems because bandwidth was plentiful and latency was minimal. Users worked in offline mode usually because they were traveling and needed their e-mail stored on their local disks. The only disadvantage to this mode was the lack of access to free/busy and current information while scheduling meetings and reading or responding to the latest e-mail. Also, users needed to wait until they were back in online mode to upload any e-mail responses they had created while offline. Continued on next page.


Exchange and Outlook Communication Modes (contd.)


To resolve these issues and deliver acceptable performance for users, Microsoft introduced Exchange 2003. Touted as a more WAN-friendly solution, Exchange 2003, paired with Outlook 2003 clients, uses caching to pull information once, in the background, and store it on recipients' desktops, eliminating multiple transmissions. This modification, including a default setting for Outlook to automatically download new messages and attachments from the Exchange server immediately upon notification, improves performance significantly, at least for users. Delays and desktop-freezing problems have essentially disappeared, IT departments avoid the need for remote Exchange servers, and e-mails are not sent repeatedly across the WAN. At first glance, the Exchange problem seems to be solved.

The IT staff responsible for managing WAN resources, however, sees things differently: while Exchange 2003 reduces the impact of latency for remote users, it actually places additional and unforeseen burdens on the WAN. Before Exchange 2003, Outlook waited for the user to take some sort of action, such as opening an e-mail, before downloading the file from the central Exchange server. Using our example of an e-mail from the CEO to all employees, the transmission of the data would typically span several hours, because Outlook would retrieve the message as each employee opened the e-mail at various times throughout the day. With Exchange 2003, by default, each recipient's Outlook application automatically downloads the message and attachment and stores them locally, without any user intervention. Consequently, assuming no users change the default setting, Exchange servers send the message out to all recipients immediately. In a centralized architecture, this behavior imposes up to 100 times the load on WAN links, reducing the bandwidth available for more critical operations.


Exchange Inefficiencies
The slide illustrates four different ways that a pre-2003 Outlook client can trigger a file download. Consider a simple e-mail exchange between coworkers. When the recipient opens a message, the Exchange server sends the message content (and any attachments) to the user's Outlook client for viewing, although the message itself remains on the Exchange server. If the user closes the application, Outlook initiates a synchronization process, pulling a copy of the message and its attachments from the server and storing it on the user's hard drive for later offline viewing. The same message, therefore, is sent twice: once for the original viewing and again for the synchronization, consuming twice the bandwidth. Exchange compounds this problem with the Sent Items folder. Because most users leave the default setting to copy all messages to that folder, the Outlook client sends the entire message again to the Exchange server to keep the Sent Items folders in sync between the server and the client. Therefore, a single message read and replied to by a single recipient could cross the network four times. Multiply this redundancy by hundreds or thousands of messages per day for a large organization, add large attachments to the mix, and you can easily see how quickly e-mail can consume bandwidth. Continued on next page.


Exchange Inefficiencies (contd.)


This bandwidth consumption is truly an issue with versions prior to Exchange 2003. Exchange 2003 was a step in the right direction, dramatically improving performance for end users. However, upgrading to Exchange 2003 is not practical for everyone. As with any application, this upgrade is a major, and disruptive, process that requires tremendous planning and preparation, and for many companies the benefits are not worth the effort. For organizations that have made the transition, Exchange 2003 has not proven to be a universal cure: WAN utilization issues still exist in Exchange 2003, often canceling out the performance improvements. Regardless of the Exchange version in use, customers need a solution to the WAN problem.


MAPI Is Inherently Inefficient


With the increased use of media-rich messages, users can become understandably frustrated when retrieving 1-MB messages; the process can seem to take forever. In addition to simple file attachments, many e-mail messages contain HTML formatting and objects, further delaying delivery. Rather than sending an entire attachment or the embedded HTML objects at once, MAPI divides the attachment into data blocks that vary from 8 KB to 16 KB. For each data block sent, MAPI requires a reply from the recipient before sending the next block. This ping-pong behavior, like that of CIFS, means that a 1-MB message can require as many as 100 to 150 RTTs. Waiting for these serial RTTs to complete, users feel the impact of even modest latency; as little as 30 ms of delay dramatically increases the time needed to retrieve messages.
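The block counts behind that RTT estimate are easy to reproduce; the model below assumes only the 8-KB-to-16-KB block sizes stated above:

```python
# How many request/response round trips does a 1-MB MAPI message cost?
# MAPI blocks range from 8 KB to 16 KB, and each block needs a reply
# before the next one is sent.

def blocks_needed(message_bytes, block_bytes):
    return -(-message_bytes // block_bytes)  # ceiling division

MSG = 1_000_000  # 1-MB message

worst = blocks_needed(MSG, 8 * 1024)    # smallest blocks -> most round trips
best = blocks_needed(MSG, 16 * 1024)    # largest blocks -> fewest round trips
print(best, "to", worst, "round trips") # 62 to 123 round trips

# Even a modest 30 ms of round-trip latency adds noticeable stall time:
print(round(worst * 0.030, 2), "seconds of waiting at 30 ms RTT")
```

The 100-to-150 RTT figure quoted above is somewhat higher than this block-only count, presumably because MAPI also exchanges control messages around the data blocks.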


Exchange and Outlook Network Utilization


Exchange suffers from two distinct issues that can impact efficient transmission of data. The first issue is the number of times Exchange transmits a particular attachment to the client during a specific set of operations. Using MAPI and other complex protocols to communicate with Outlook clients, the Exchange server requires that both e-mail messages and attachments be sent back and forth repeatedly between the server and each recipient during the reading, confirmation, and synchronization processes. The next page discusses the second form of inefficiency.


Exchange Receive Without AppFlow


The second form of inefficiency in Exchange is exactly the same one from which CIFS suffers. Exchange uses a block-based approach to transfer large files. Therefore, when an Outlook client downloads an e-mail with an attachment, the attachment does not transfer all at once; rather, Exchange transmits it in small blocks, one at a time.


Exchange Receive with AppFlow


The WX platform improves Exchange performance through a combination of AppFlow features designed to accelerate the MAPI protocol. PFA ensures that TCP does not become the bottleneck, and NSC eliminates the repeated transmission of data contained within Exchange messages. With the AppFlow feature, the WX devices send multiple data blocks in quick succession without waiting for acknowledgements for each block. AppFlow handles Exchange receive and send operations similarly. When an Exchange client wants to read an attachment from the Exchange server, the client requests the first data block. Working together, the WX devices on both ends of the WAN link determine that the client has begun to download an attachment. The WX devices then make the appropriate number of receive requests to fill the available WAN capacity. By the time the Exchange client requests the subsequent data blocks, the WX devices have already transmitted and received the data. The client-side WX platform can now forward the data at LAN speeds to the client. Compression complements the AppFlow feature by identifying and eliminating repeated data sequences that have already crossed the WAN.


Accelerating Web Applications: The Problem


HTTP has become extremely popular as many companies look to transition their thick-client applications, such as SAP or Oracle, to a Web interface. Web interfaces are easier to manage from an IT department's perspective because every computer has a browser. The only difficulty this approach presents is that Web-based applications are often bandwidth intensive. In fact, a Web-based application can require ten times more bandwidth to perform the same functions as some older, client-server versions. Latency can dramatically affect HTTP, just as it affects Exchange and CIFS. HTTP is a block-oriented protocol driven by client requests and server responses. Complex Web pages can contain dozens of embedded objects, all of which can require large amounts of bandwidth to retrieve. Although Web browsers typically have built-in caches to help reduce the amount of bandwidth required to display a particular page, these caches do not necessarily help in networks with high latency.


Accelerating Web Applications: The Solution


HTTP acceleration provides users the fastest response times when they access their Web applications. With WXOS Release 5.7, the WX platform acts as a transparent proxy, requesting and retrieving Web-based content through service tunnels from Web servers behind other WX devices. Because the cached objects reside on the WX device closest to the clients, the WX platform can send those items from its cache to Web clients at LAN speeds. Just as we have seen with TCP acceleration, HTTP acceleration divides the client-server conversation into three separate sessions:

- Client to local WX device;
- Local WX device to server-side WX device; and
- Server-side WX device to Web server.

The Result
From the users perspective, the transaction appears to be a normal end-to-end HTTP session, but with much quicker download times.


Cache Server Behavior


Proxy devices act as middlemen between Web clients and Web servers. This setup allows a proxy to retrieve and store objects from Web servers so it can serve those objects more quickly to local LAN clients, thus speeding up Web response times. The caching process in the slide follows this general logic:
1. A LAN Web client makes a request for www.bnn.com.
2. The proxy server forwards this request to the Web server for www.bnn.com.
3. The Web server replies with page content and all objects for the page.
4. The proxy server stores those objects that are valid for caching (as indicated by the Web server in the response).
5. The proxy server returns the content to the Web client.
6. Other LAN Web clients request the same content (www.bnn.com).
7. The proxy server provides the objects from its cache without making the same request to the original Web server.
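The steps above can be sketched as a toy caching proxy in Python. This is a minimal illustration of the logic, not the WX implementation; the URLs and cacheability flags are hypothetical:

```python
# A toy caching proxy: serve from cache when possible, otherwise fetch from
# the origin server and store only the objects the origin marks cacheable.

class CachingProxy:
    def __init__(self, fetch_from_origin):
        self.cache = {}                  # url -> cached object body
        self.fetch = fetch_from_origin   # callable simulating the origin Web server
        self.origin_requests = 0

    def get(self, url):
        if url in self.cache:
            return self.cache[url]       # step 7: serve from cache, no WAN request
        self.origin_requests += 1        # steps 2-3: forward to the origin server
        body, cacheable = self.fetch(url)
        if cacheable:                    # step 4: store only objects marked cacheable
            self.cache[url] = body
        return body                      # step 5: return the content to the client

# Simulated origin: the logo is cacheable, the personalized page is not.
def origin(url):
    return {"/logo.gif": ("LOGO", True),
            "/account": ("PRIVATE", False)}[url]

proxy = CachingProxy(origin)
for url in ("/logo.gif", "/logo.gif", "/account", "/account"):
    proxy.get(url)
print(proxy.origin_requests)  # 3: the logo hit the origin once, /account twice
```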

A number of factors determine which objects a caching server can and cannot (or should not) store locally. Various parameters within the Web server's reply help a cache server verify that the objects it has stored locally are still current before handing them out to other Web clients on the LAN. We examine these factors (like Pragma: no-cache and If-Modified-Since headers) on the next several slides.


Browsers Can Store Content


Most current Web browsers provide an internal cache that allows them to store previously retrieved objects on the local hard drive. If the user later visits the same site, the browser can retrieve these stored objects from its local cache without having to retrieve them from remote servers. The local cache feature of browsers can dramatically improve response times, as long as those stored objects are still valid. HTTP, and more specifically HTML, includes a bewildering array of instructions that a browser must use to determine whether a locally cached object is still fresh. In simplified terms, a browser checks with a Web server before it displays the contents of a cached Web page to make certain that the page has not changed. If the page has changed since the browser's last visit, the server instructs the browser to download the page again. If the page has not changed, the server issues a specific HTTP response code to the browser, indicating that there have been no changes.

Users Can Modify Browser Caching Behavior


You can define how often the browser checks with a Web server to verify the integrity of cached objects. The browser can check on every visit to the page, every time you launch the browser, never, or automatically (typically the default setting).


Embedded Objects
Initial responses from a Web server to a client browser include the HTML code that defines the layout of the Web page. Included (or embedded) within that code are the locations of all objects that the browser needs to display. These embedded objects can include a wide variety of items that might reside on the Web server itself. Images, style sheets, documents, and JavaScript files are just a few examples of embedded objects. The src attribute and its value instruct the client's browser where a given object resides and, therefore, how to retrieve and display it on the page.
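As a rough illustration of how embedded objects are discovered, the following sketch scans HTML for src attributes using Python's standard parser. Real browsers also follow href links for style sheets, CSS url() references, and more:

```python
# Collect the URLs of embedded objects by scanning start tags for src
# attributes, the same cue a browser uses to issue its follow-up requests.
from html.parser import HTMLParser

class SrcCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "src":
                self.objects.append(value)

page = '<html><body><img src="/logo.gif"><script src="/app.js"></script></body></html>'
parser = SrcCollector()
parser.feed(page)
print(parser.objects)  # ['/logo.gif', '/app.js']
```

Each collected URL represents another request/response exchange, which is why object-heavy pages suffer so badly over high-latency links.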


Is an HTTP Object Fresh?


Theoretically, a browser could cache an object locally forever, unless a method existed to ensure that the browser displays only the most current (or fresh) version of an object. Without some sort of update mechanism, you could visit a Web site that has changed, but your browser might display objects that it cached three months ago. Fortunately, HTTP provides a way for a browser to check an object's freshness without having to download the entire object again. The browser performs this check by asking the server if the object has changed since the last time the browser downloaded it. Using an If-Modified-Since request, a browser provides the server a date and time (typically, the last time the browser downloaded the object). If the Web server determines that the object has changed since that date, the server instructs the browser to download the most recent copy of the object. If the Web server determines that the object has not changed, it returns a response header to the browser (a 304 Not Modified response) indicating that the object in cache is still current (and therefore the browser can display the cached object to the user).
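The server side of this freshness check can be sketched as follows. This is simplified: real servers also consult ETags and Cache-Control directives, which this model ignores:

```python
# Decide how a server answers a conditional GET: send the full object (200)
# if it changed after the date the client supplied, otherwise 304 Not Modified.
from datetime import datetime

HTTP_DATE = "%a, %d %b %Y %H:%M:%S GMT"

def respond(object_last_modified, if_modified_since):
    """Return the status code a server would send for a conditional GET."""
    changed_at = datetime.strptime(object_last_modified, HTTP_DATE)
    client_has = datetime.strptime(if_modified_since, HTTP_DATE)
    if changed_at > client_has:
        return 200   # object changed: the browser must download it again
    return 304       # Not Modified: the browser may display its cached copy

print(respond("Tue, 01 Jan 2008 00:00:00 GMT", "Mon, 31 Dec 2007 12:00:00 GMT"))  # 200
print(respond("Tue, 01 Jan 2008 00:00:00 GMT", "Wed, 02 Jan 2008 12:00:00 GMT"))  # 304
```

The 304 path is the win: only headers cross the wire, never the object body.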


Cache Server BehaviorNontransparent Versus Transparent


Generally speaking, there are two different types of caching servers: nontransparent and transparent. While the overall function of both types is essentially the same, the nature of their network behavior differs significantly. Regardless of how a proxy cache device operates, you can accurately consider both transparent and nontransparent proxy servers as middlemen between clients and external Web servers. The TCP conversation between the client and the proxy is completely separate from the conversation between the proxy and the Web server; in fact, they are two different TCP sessions altogether.

If you have ever configured your Internet browser to use a proxy server, the device in question was a nontransparent proxy. When your browser makes a request for a Web page, the destination IP address in the request is that of the nontransparent proxy. Included in your browser's request is information that tells the proxy the Web site you want to see. The slide illustrates the Layer 3 and Layer 4 network translations that occur between a client, a nontransparent proxy, and a Web server. Note that the nontransparent proxy uses a completely different IP address and source port from those of the client. Upon receipt of the Web server's response, the proxy server reverses this translation before sending the reply to the client.


Transparent Proxy
This type of proxy server does not require any special browser configuration because it is, as the term implies, transparent to network devices, including your browser. Often deployed physically in the path of traffic, transparent proxies can intercept Web requests from clients and return cached objects to them without retrieving those objects from Web servers. The illustration on the slide shows that a transparent proxy server does not alter the client's source IP address or source port before forwarding that request to the Web server. Therefore, the proxy does not need to alter the destination IP address or destination port in the Web server's reply to the client.

These two behaviors, transparent and nontransparent, are simply general descriptions of how proxy servers can operate. It might be difficult to categorically describe a given vendor's caching solution as one or the other because many caching servers can operate transparently for some types of traffic and nontransparently for others. We offer the explanations here to help you better understand how the WX device operates when you deploy it for HTTP acceleration.


The WX: A (Mostly) Transparent Proxy Server


In most respects, the WX platform operates as a transparent proxy server when you implement HTTP acceleration. Fortunately, you do not need to reconfigure each browser on the LAN to access a proxy server, nor does the WX platform alter the source IP address of the client before forwarding the request to the Web server. However, the WX platform does alter the client's source port. For reasons we cover on the following slides, the WX platform adds 10,000 to the client's original source port before forwarding the request to the Web server. Upon receipt of the server's reply, the WX platform subtracts 10,000 from the server's destination port so the client receives the appropriate response. There are still three completely different TCP sessions involved when deploying the WX platform for HTTP acceleration: client-to-WX, WX-to-WX, and WX-to-Web server.



Note that you must deploy WX devices in pairs to take advantage of the WX platforms HTTP acceleration feature. Although only the client-side WX device actually applies caching and acceleration for HTTP, it can do so only for traffic destined to Web servers located behind remote WX devices. This distinction is significant. The WX platform will not operate as a traditional caching server (either transparently or not) for traffic destined to sites other than those on remote subnets. If a client makes a request for content from a Web site not behind a WX device within the same community, the client-side WX device simply treats that traffic as it does all other uncompressed traffic and forwards it.


Why 10,000?
The TCP session between the WX device and an external Web server is separate from the initial conversation between the Web client and the WX device. However, the sessions are very similarso much so that should packets from the server inadvertently reach the client directly (that is, if they somehow bypass the WX device), the client might actually begin an ACK storm.

TCP Requirements
The nature of TCP requires that any receiving host acknowledge all data received from a sending host. If a client receives a packet with a sequence number it has already received, the client issues an ACK for that packet and all subsequent packets it has already received; therefore, the potential for an ACK storm exists if a client receives an original-looking packet directly from the server. The WX platform therefore adds 10,000 to the client's source port so that any direct replies from the server to the client will not match the response packets the client expects. If one of these modified packets reaches the client from the server, the client simply issues a RST (reset) to the server and drops the packet. You might wonder how a well-designed network can allow any responses to bypass an inline device like the WX platform; however, in networks with multiple paths, the scenario is possible. Additionally, if a WX device suddenly switches to wire, all responses from a Web server pass through the WX device unmodified to the client, potentially causing an ACK storm.
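The port shift itself is simple arithmetic. The +10,000 offset comes from the text above; the wraparound handling below is an assumption for illustration, because valid TCP ports cannot exceed 65535:

```python
# Shift the client's source port on the way out and restore it on the way
# back, so the server-facing session never shares a port with the client
# session. The modulo keeps the result inside the valid 1..65535 range.

OFFSET = 10_000
MAX_PORT = 65_535

def shift_client_port(src_port):
    """Applied before forwarding the client's request toward the server."""
    return (src_port + OFFSET - 1) % MAX_PORT + 1

def restore_client_port(dst_port):
    """Applied to the server's reply so the client sees its original port."""
    return (dst_port - OFFSET - 1) % MAX_PORT + 1

print(shift_client_port(51234))    # 61234
print(restore_client_port(61234))  # 51234
```

Because the shifted port differs from the one the client's TCP stack is using, any reply that bypasses the device arrives on an unknown port and draws a RST instead of feeding an ACK storm.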


HTTP Acceleration Summary


HTTP acceleration operates by caching static objects (.css, .gif, .js, .jpg). HTTP acceleration requires two WX or WXC platforms, one on each side of the connection (that is, the WX platform is not a traditional, single-sided cache). The WX and WXC platforms do not cache requests and responses with unknown headers. Cookies can contain any arbitrary information the server chooses and are used to introduce state into otherwise stateless HTTP transactions; without cookies, each retrieval of a component of a Web page from a Web site is an isolated event. The WX and WXC platforms do not cache responses from sites that use cookies or session identifiers; however, the platforms still maintain the URL object database.
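A hypothetical decision function capturing these caching rules might look like this. The extension list comes from the summary above; the set of "known headers" is an invented stand-in, since the actual list is not documented here:

```python
# Illustrative cacheability check: only static object types qualify, and a
# response is skipped if it sets a cookie or carries an unrecognized header.

STATIC_EXTENSIONS = (".css", ".gif", ".js", ".jpg")
KNOWN_HEADERS = {"content-type", "content-length", "last-modified", "date", "server"}

def cacheable(url, response_headers):
    if not url.lower().endswith(STATIC_EXTENSIONS):
        return False                     # only static object types are cached
    names = {name.lower() for name in response_headers}
    if "set-cookie" in names:
        return False                     # cookie-bearing responses are not cached
    if not names <= KNOWN_HEADERS:
        return False                     # unknown headers disqualify the response
    return True

print(cacheable("/img/logo.gif", {"Content-Type": "image/gif"}))   # True
print(cacheable("/cart.php",     {"Content-Type": "text/html"}))   # False
print(cacheable("/img/logo.gif", {"Content-Type": "image/gif",
                                  "Set-Cookie": "session=abc"}))   # False
```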


CIFS Acceleration Requirements


You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry CIFS-accelerated traffic. For WXOS releases earlier than 5.5, turn off SMB signing on both the server and the client. Also, ensure that the end hosts are running CIFS-compatible operating systems.

Configuring CIFS Acceleration


CIFS acceleration is enabled by default. The only step you must complete is to confirm which CIFS applications the WX platform will accelerate.


CIFS Acceleration Settings


CIFS acceleration is enabled by default. If you do not want to use SMB signing when servers do not require it, you can enable that option. If you want to use CIFS with SMB signing, you must select that option and configure the appropriate account information. The account you specify should be preconfigured on servers (or domain controllers) for the WX platform itself. For WXOS releases prior to 5.5, the WX platform can accelerate only CIFS traffic that does not use SMB signing; only Release 5.5 and later support signing. Note that the user account information you supply in this configuration screen must have sufficient rights to access any and all server or domain resources (for example, files, documents, and directories) that individuals might need to retrieve. For further information about the details of this account, see the WX/WXC Operator's Guide.


Exchange Acceleration Requirements


You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry Exchange-accelerated traffic. When pairing Outlook 2003 or 2007 with Exchange 2003 or 2007, remember that the WX and WXC platforms can perform only compression on this traffic because Microsoft changed its procedure for MAPI RPCs for Exchange and Outlook, making it difficult to accelerate the traffic. Also, you must verify that no pre-existing Outlook processes (for example, synchronization) exist before the service tunnel is initiated.


Configuring Exchange Acceleration


Exchange acceleration is enabled by default. The only step you must complete is to confirm which applications the WX platform will accelerate.


HTTP Acceleration Requirements


You must configure compression, QoS, and TCP acceleration on both ends of the tunnel that will carry HTTP-accelerated traffic. Remember that HTTP traffic must be unencrypted. You should define HTTP application definitions as specifically as possible (for example, specify TCP ports). If you want to use HTTP acceleration in both directions (for example, you have Web servers and Web clients at both locations), you should configure both WX platforms as client-side devices. Note that the WX platform will not accelerate HTTP traffic if there is a proxy server between the server-side WX platform and the actual HTTP server. However, if the proxy server is located between the Web client and the client-side WX platform, the WX platform can accelerate HTTP traffic.


Configuring HTTP Acceleration


HTTP acceleration is not enabled by default, so you must manually enable it and select the application.


Viewing AppFlow Results


The slide shows a screen capture of the monitoring report for CIFS acceleration. One of the differences between monitoring CIFS acceleration and monitoring other features of the WX platform is that CIFS acceleration displays time saved rather than the increase in effective bandwidth.


Troubleshooting AppFlow: Part 1


The slide lists several common areas to examine when troubleshooting AppFlow.


Troubleshooting AppFlow: Part 2


The slide lists several common areas to look at when troubleshooting AppFlow.

