
Uncovering the new RPC Client Access Service in Exchange 2010 - Part 1

Introduction

Among the architectural changes made in Exchange Server 2010 is the introduction of the new RPC Client Access service, which changes the client access business logic as we know it. This new service moves Outlook MAPI mailbox connections from the back-end Mailbox servers, and directory access from domain controllers/global catalog servers in the data tier, to the Client Access servers in the middle tier. In this article we will begin with a nostalgic look at how the business logic worked back in the Exchange 2000/2003 days, where we had the concept of front-end and back-end servers. We will then talk about the improvements that were delivered with the introduction of the Client Access server role in Exchange 2007. From there we will move on to and concentrate on the new RPC Client Access service included with Exchange 2010. We will take a look at how this new service works and how you can set static ports for MAPI connections. Let's get started.

A brief look at Exchange 2000/2003 front-end & back-end architecture

In Exchange 2000 and 2003, we had a basic front-end and back-end architecture, where the front-end servers accepted requests from clients and proxied them to the back-end servers for processing. An Exchange 2000/2003 front-end server could proxy RPC over HTTP (now known as Outlook Anywhere), HTTPS (OWA, Entourage etc.), POP, and IMAP clients to the relevant back-end servers. The front-end servers also supported referrals to public folder data on back-end servers. In Exchange 2000/2003, internal Outlook MAPI clients did not use the front-end server at all; they connected directly to the back-end servers via MAPI over RPC. In fact, because the DSProxy component did not run on the front-end servers, you could not point Outlook MAPI clients to the NetBIOS name or FQDN of a front-end server.
With Exchange 2000/2003, the DSAccess component also accessed the Netlogon service on the domain controllers and global catalog servers in Active Directory directly via remote procedure calls (RPCs), and Outlook clients then connected directly to the DCs/GCs; Outlook 2000 and earlier clients instead connected through DSProxy.

Figure 1: Exchange 2000/2003 front-end and back-end architecture

One of the main benefits of Exchange 2000/2003 front-end servers was that they allowed you to configure a single namespace (such as mail.domain.com). With a single namespace, users didn't have to know the name of the server on which their mailbox was stored. Another benefit was that SSL encryption and decryption were offloaded to the front-end servers, thereby freeing up what was at that time expensive processing power on the back-end servers. But in the end, a front-end server was really just a proxy server that did not process or render anything on its own. Instead, it authenticated and forwarded logon requests to the back-end servers, which suffered severely from a 32-bit architecture that, amongst other things, limited Exchange 2000/2003 servers to a maximum of 4 GB of memory.

Introducing the Exchange 2007 Client Access Server role

When Exchange 2007 was released, things improved significantly. The intention with the Exchange 2007 Client Access Server (CAS) role was to optimize the performance of the Mailbox server role. Unlike Exchange 2000/2003 front-end servers, the CAS role is not just a proxy server. For instance, the CAS server holds the business logic processes for Exchange ActiveSync policies and OWA segmentation. In addition, OWA UI rendering also happens on the CAS server and not the Mailbox server. In fact, all client connections except Outlook (MAPI) use the Client Access servers as the connection endpoint in an Exchange 2007 infrastructure. This offloads a significant amount of the load that occurred against the back-end mailbox servers in Exchange 2000/2003.

Figure 2: Exchange 2007 Client Access architecture

Then came the Exchange 2010 Client Access Server RPC service

Exchange 2010 takes things one step further. With Exchange 2010, MAPI and directory access connections have also been moved to the Client Access Server role. This has been done by introducing a new Client Access Server service known as the RPC Client Access service.

Figure 3: RPC Client Access Service in the Services MMC snap-in on a Client Access server

This means that MAPI clients no longer connect directly to a Mailbox server when opening a mailbox. Instead, they connect to the RPC Client Access service, which then talks to Active Directory and the Mailbox server. For directory information, Outlook connects to an NSPI endpoint on the Client Access Server, and NSPI then talks to Active Directory via the Active Directory driver. The NSPI endpoint replaces the DSProxy component as we know it from Exchange 2007.

Figure 4: Exchange 2010 Client Access architecture

How is this different from Outlook Anywhere (RPC over HTTP) clients that connect to a mailbox in Exchange 2007? Well, although Outlook Anywhere clients connected to the RPC Proxy component on the Client Access Server, they also talked MAPI over RPC directly with the Mailbox server and with the NSPI endpoint in Active Directory. Some of you might wonder what the benefits of the RPC Client Access service are. There are several, actually. First, with MAPI and directory connections moved to the Client Access Server role in the middle tier, Exchange now has a single common path through which all data access occurs. This not only improves consistency when applying business logic to clients, but also provides a much better client experience during switchovers and failovers when you have deployed a highly available solution that makes use of the new Database Availability Group (DAG) HA feature, which I will cover in depth in a future article. If the Outlook client user notices a disconnection at all, it will not last for more than approximately 30 seconds, compared to a disconnection in Exchange 2007 that could take several minutes, heck, even up to 30 minutes in a complex AD topology consisting of many AD sites and domain controllers throughout which DNS had to replicate. Lastly, having a single common path for all data access allows for more concurrent connections and mailboxes per Mailbox server. In Exchange 2007, a Mailbox server could handle 64,000 connections; Exchange 2010 increases that number to a 250,000 RPC context handle limit.

Exchange 2010 Client Access Server Arrays

So now that we rely even more on the Client Access Servers within an Exchange 2010 infrastructure, clients need to be able to quickly re-connect to another CAS server in case the one they are connected to is down for planned or unplanned reasons. Say hi to the new Client Access array feature in Exchange 2010. A Client Access array is, as the name implies, an array of CAS servers. More specifically, it is an array consisting of all the CAS servers in the Active Directory site where the array is created. So instead of connecting to the FQDN of a single CAS server, an Outlook client can connect to the FQDN of the CAS array (such as outlook.domain.com). This ensures that Outlook clients connecting via MAPI stay connected all the time, even during mailbox database failovers and switchovers (aka *-overs). Here is how things work in regards to CAS arrays. An Exchange 2010 mailbox database has an attribute called RpcClientAccessServer. When creating a new mailbox database in an Active Directory site where a CAS array has not been created, this attribute will be set to the first CAS server installed in the AD site. You can see what this attribute is set to by running the following command:

Get-MailboxDatabase <DB name> | fl RpcClientAccessServer

Figure 5: RPC Client Access Server FQDN specified on Mailbox database

If a CAS array exists in the AD site when you create a new mailbox database, this attribute will automatically be set to the FQDN of the CAS array. This is how the Client Access servers know which Mailbox server and database a user should be directed to. A CAS array is configured the following way. First you create the new CAS array using the following command:

New-ClientAccessArray -Name "<name of CAS array>" -FQDN <FQDN of CAS array> -Site "<name of AD site>"

Figure 6: Creating a new Client Access array

When the CAS array has been created, you should create an A record in your internal DNS named outlook.domain.com, pointing to the virtual IP address of your internal load-balancing solution.
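If you prefer the command line over the DNS Manager GUI, the A record can also be created with dnscmd. A sketch, assuming the zone is domain.com and the load balancer VIP is 192.168.2.240 (both placeholders for your own values):

```powershell
# Run on a DNS server (or a box with the DNS server tools installed).
# Creates outlook.domain.com pointing at the load balancer VIP.
dnscmd /recordadd domain.com outlook A 192.168.2.240
```

When no server name is given, dnscmd targets the local DNS server.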

Figure 7: Creating an A record for the CAS array in DNS

Note that Windows NLB can't be used on Exchange servers where mailbox DAGs are also in use, because WNLB is incompatible with Windows failover clustering. If you're using an Exchange 2010 DAG and you want to use WNLB, you need to have the Client Access server role and the Mailbox server role running on separate servers; if the roles are co-located, you should instead use an external hardware-based or virtual load balancer. If you use WNLB, it is just a matter of creating the WNLB cluster, pointing the DNS record at the WNLB VIP, and making sure that TCP port 135 (EndPoint Mapper) and the dynamic RPC port range (TCP 1024-65535) are added to the port rules list.

Note: Later in this article series I will show you how to set static ports for MAPI and directory access.

If you use a load balancing solution from a 3rd party vendor, you must create rules in the LB device that round-robin traffic for the respective ports. Lastly, if you created mailbox databases on Mailbox servers in the AD site before you created the CAS array, you must change the FQDN specified in the RpcClientAccessServer attribute on these databases. You do this using the following command:

Set-MailboxDatabase <name of DB> -RpcClientAccessServer outlook.domain.com

Figure 8: Changing the value of the RpcClientAccessServer attribute on any existing mailbox databases

We should now see outlook.domain.com as the FQDN.
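If the site contains many databases, the same change can be applied in one pass. A sketch in the Exchange Management Shell, assuming the array FQDN outlook.domain.com used in this article; scope the Get-MailboxDatabase call further if only some databases apply:

```powershell
# Stamp the CAS array FQDN on every mailbox database,
# then display the result for verification.
Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer outlook.domain.com
Get-MailboxDatabase | fl Name, RpcClientAccessServer
```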

Figure 9: RpcClientAccessServer attribute set to FQDN of CAS array

If you protect the mailbox databases using a Database Availability Group, and a copy of the respective database in another AD site becomes the active one, remember that CAS servers will talk directly with the Mailbox server on which the mailbox database is now mounted. This communication will happen via RPC, as Client Access servers and Mailbox servers talk RPC to each other. This is an important detail. If you have a complete site outage, clients will not automatically re-connect to CAS servers in another site; this will instead require manual intervention. This topic deserves an article of its own and is outside the scope of this one.

Part 2
Introduction

In part one of this multi-part article, we began with a nostalgic look at how the business logic worked back in the Exchange 2000/2003 days, where we had the concept of front-end and back-end servers. We also had a look at the improvements that were delivered with the introduction of the Client Access server role in Exchange 2007. From there we moved on to and had a look at the new RPC Client Access service as well as the new CAS array functionality included with Exchange 2010. In this article we will talk about supported Outlook clients and why public folder connections still hit the Mailbox server directly from the Outlook client. Finally, I will show you how to configure static RPC ports, so you do not have to specify a large range of ports in your load balancer device or in your WNLB cluster. Let's get moving.

Supported Outlook Clients

You might think that the new RPC Client Access service only supports newer Outlook clients. Well, actually all Outlook clients supported by Exchange 2010, that is Outlook 2003, 2007, and 2010, can connect to the RPC Client Access service on an Exchange 2010 CAS server, no matter if it's a single CAS server or an array of CAS servers.

Note: Whether Outlook clients older than Outlook 2003 can connect to the RPC Client Access service is unknown, as this has not been tested by either the Exchange Product group or myself during the writing of this multi-part article.

Although Outlook 2003 clients are supported and can connect to the RPC Client Access service, there is one thing you should bear in mind. One of the default settings of the RPC Client Access service is that it requires encryption for RPC connections. You can check this setting using the following command:

Get-RpcClientAccess | fl

Figure 1: RPC encryption set to true by default

This is not an issue if you use Outlook 2007 or Outlook 2010, since these Outlook versions have RPC encryption enabled by default when you create a new Outlook profile (see Figure 2).

Figure 2: RPC encryption enabled in Outlook 2007/2010

But guess what? Yes, the old Outlook 2003 version behaves differently. When you create a new Outlook 2003 profile, RPC encryption is disabled by default (see Figure 3).

Figure 3: RPC encryption disabled in Outlook 2003

This means that if you migrate an Exchange 2003 or 2007 mailbox to Exchange 2010, or try to create a new Outlook 2003 profile against an Exchange 2010 mailbox, you will not be able to connect to the mailbox. After authenticating, you will instead receive a dialog box similar to the one shown in Figure 4.

Figure 4: Warning shown when RPC encryption is required on the server but disabled on the client

The issue can be resolved in two ways. You can either enable RPC encryption in the Outlook 2003 profile (if you have many, you could do so via a GPO) or disable the RPC encryption requirement on the Exchange 2010 Client Access server. Enabling RPC encryption on the client is of course the recommended approach, but if you insist on disabling this setting server-side, you can use the following command:

Set-RpcClientAccess -Server <CAS server> -EncryptionRequired $false
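If you have more than a couple of servers, the server-side change can be scripted. A sketch in the Exchange Management Shell that loops over all Client Access and Mailbox servers (the Mailbox servers also need the change for public folder access, as explained in the next section); enabling encryption client-side remains the recommended fix:

```powershell
# Disable the RPC encryption requirement on every CAS and Mailbox server.
$servers = @(Get-ClientAccessServer | Select-Object -ExpandProperty Name) +
           @(Get-MailboxServer | Select-Object -ExpandProperty Name)
$servers | Sort-Object -Unique | ForEach-Object {
    Set-RpcClientAccess -Server $_ -EncryptionRequired $false
}
```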

Figure 5: Disabling the RPC encryption requirement on the server side

When you have disabled the RPC encryption requirement on the relevant CAS servers, verify that the setting has been set to false by running:

Get-RpcClientAccess | fl EncryptionRequired

If this is the case, your users can now connect to an Exchange 2010 mailbox using Outlook 2003 clients that do not have RPC encryption enabled.

Note: If you choose to disable RPC encryption server-side, remember that you must do so on both the Exchange 2010 Client Access and Mailbox servers. This is because public folder connections still go directly to the RPC Client Access service on the back-end Mailbox servers. See the next section for more information about this.

Public Folder Connections still go to the Mailbox Server?

That all MAPI connections now occur against the Client Access servers in the middle tier is not exactly true. You see, there is still support for public folders in Exchange 2010, and for public folder connections, Outlook will still connect to the Mailbox server when accessing public folder databases. However, an important detail is that while you connect to the Mailbox server role for PF connections, at the service level the RPC Client Access service will still be the answering RPC endpoint, as the RPC Client Access service also exists on a Mailbox server (see Figure 6).

Figure 6: RPC Client Access Service in the Services MMC snap-in on a Mailbox server

Note: The RPC Client Access service on a Mailbox server is not used by the RPC Client Access service on the Client Access server. In fact, all requests coming from the RPC Client Access service on a Client Access server to a Mailbox server are ignored, as they could otherwise result in loops. Also, if both the CAS and Mailbox server roles are installed on the same machine, the RPC Client Access service will only be registered once on that machine.

Why don't PF connections go through the new RPC Client Access service? Public folders have their own replication mechanisms, and having PF connections go through the RPC Client Access service on the CAS would require a lot of additional complexity due to the way PFs replicate data. The Exchange Product group would have to re-implement legacy functionality that they are trying to drop. Public folders work the same as in Exchange 2007, and you do not really lose anything with this design, so don't expect the Exchange PG to change this business logic in a future service pack.

In Figure 7, we see the Connection Status window on an Outlook 2010 client opened by a user who has a mailbox on the Mailbox server named E2K10EX02. Because MAPI for mailbox access now goes to the CAS server, you will only see the FQDN of the CAS server, not the Mailbox server. The reason you see E2K10EX02 listed once is that public folders are used in this organization, and as I just explained above, PF connections go directly to the RPC Client Access service on the Mailbox server.

Figure 7: Public folder connections occur against the Mailbox server

Setting static RPC ports for MAPI and Directory Access

Since you now connect directly to the RPC Client Access service on the Client Access server for mailbox access, or to the RPC Client Access service on the Mailbox server for public folder connections, and since clients connect to the NSPI endpoint for directory access, by default you need to open TCP port 135 (EndPointMapper) and the dynamic RPC range (TCP 1024-65535) between your internal client network and your Client Access servers or arrays as well as your Mailbox servers. Luckily, you can still configure static port mappings, just as was possible in versions earlier than Exchange 2010. But you have to do so both on any CAS servers and on any Mailbox servers that are accessed by Outlook MAPI clients. On the CAS servers, for mailbox connections, you need to add a DWORD registry value named "TCP/IP Port" under:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRpc\ParametersSystem

Figure 8: Adding the required DWORD value on a CAS server to configure a static port number

Set the value to the port number to be assigned. In this article we use port 55000, but you are free to choose whatever port you want; just remember that it should not conflict with other applications using that port. It is recommended that you choose a port within the dynamic RPC range (1024-65535). To use a static port for public folder access, you need to do the same on the Mailbox servers: first open the Registry Editor, then add a DWORD value named "TCP/IP Port" under:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem

It's fine to use the same port that you specified on the CAS server.

Figure 9: Adding the required DWORD value on a Mailbox server to configure a static port number
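Rather than editing the registry by hand, the change can be scripted. A sketch in PowerShell, assuming the port 55000 used in this article; run it on each CAS and Mailbox server (the ParametersSystem key already exists on servers with the role installed):

```powershell
# Create the "TCP/IP Port" DWORD value that pins the RPC Client
# Access service to a static port (55000 in this article's example).
$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRpc\ParametersSystem'
New-ItemProperty -Path $path -Name 'TCP/IP Port' -PropertyType DWord -Value 55000 -Force
```

The -Force switch overwrites the value if it already exists.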

Finally, you need to limit the port usage for clients that connect to the NSPI endpoint for directory access. Unlike MAPI access to mailboxes and public folders, this is done by modifying a file, more specifically the Microsoft.exchange.addressbook.service.exe.config configuration file located in the Exchange Bin folder.

Figure 10: Microsoft.exchange.addressbook.service.exe.config

Open the file in Notepad and change the RpcTcpPort value from the default assignment of 0 to the port you want Outlook clients and Exchange to use for directory access via the NSPI endpoint. In this article we use port 55001.
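For reference, the entry you are changing sits in the appSettings section of the file and looks roughly like this after the edit (surrounding keys omitted; the exact layout may vary by Exchange build):

```xml
<configuration>
  <appSettings>
    <!-- 0 means a dynamic port; set a static port for the NSPI endpoint -->
    <add key="RpcTcpPort" value="55001" />
  </appSettings>
</configuration>
```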

Figure 11: Setting a static port for the NSPI endpoint

When you have made the above changes, you should reboot the Mailbox and Client Access servers on which you performed them.

Verifying static ports are used between Outlook and Exchange 2010

Now that we have rebooted the relevant servers, let us verify that Outlook actually connects to Exchange 2010 using the static RPC ports we just specified. To do so, first connect to your mailbox using Outlook, then open a command prompt window and type:

netstat -na
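The netstat output can be lengthy; a convenience filter that shows only the connections of interest, assuming the static ports from this article:

```powershell
# Show only connections involving the static MAPI (55000)
# and NSPI/directory access (55001) ports.
netstat -na | findstr ":55000 :55001"
```

findstr treats the space-separated strings as an OR, so lines matching either port are shown.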

Figure 12: Verifying that Outlook connects to the static RPC ports on the Exchange 2010 servers (client-side)

As you can see in Figure 12 above, our Outlook client connects to 192.168.2.238 (the Client Access server) and 192.168.2.239 (the Mailbox server) using port 55000 (MAPI) and port 55001 (directory access). Let us also try to run netstat on the Client Access server. As you can see, the client with IP address 192.168.2.198 connects to both port 55000 (the static RPC port) and port 55001 (the static directory access port).

Figure 13: Verifying that Outlook connects to the static RPC ports on the Exchange 2010 servers (server-side)

That was all I had to share with you in part 2, but you can look forward to part 3, which will be published in the near future here on MSExchange.org. In part 3, I will show you how to configure WNLB and an external load balancer to work with a Client Access array.

Part 3
Introduction

In part two of this multi-part article series, we spoke about which Outlook clients are supported by the new RPC Client Access service, as well as why public folder connections still go directly from Outlook MAPI to the Mailbox server. I also showed you how to configure static RPC ports, so you do not have to specify a large range of RPC ports in your load balancer device or in your Windows NLB array. As you learned in the previous articles in this multi-part article series, with Exchange 2010 the Client Access Server (CAS) has become a true middle tier server. With Exchange 2010, all Exchange clients, including Entourage and internal Outlook MAPI clients, now connect directly to the CAS role. This also means that making this server role highly available via load balancing technology has become even more important than was the case with previous versions of Exchange Server. In part 3, we will take a look at how you provide true high availability for internal Outlook MAPI clients by combining a Client Access Server (CAS) array with Windows Network Load Balancing (NLB) technology. Let us get moving!

What is Windows Network Load Balancing and how does it Work?

Network Load Balancing (NLB) technology can be used to distribute client requests across a set of servers. Windows NLB is often used to ensure that stateless applications, such as IIS-based web or application servers, can be scaled out by adding additional servers as client load increases. Doing so makes sure that clients always experience acceptable performance levels. In addition, it reduces downtime caused by a malfunctioning server, as the end users will never know that a particular member server in the Windows NLB cluster is or has been down for planned or unplanned reasons. Windows NLB clusters can provide scalability for services and applications based on both TCP and UDP. On top of that, you can have up to 32 servers in a Windows-based NLB cluster.
Although up to 32 nodes in a WNLB cluster is feasible for some web/application servers, best practice is to have a maximum of 8 Exchange 2010 Client Access Server (CAS) servers configured in a WNLB cluster. If you need to load balance more than 8 CAS servers, it is recommended to use a hardware-based load balancing solution. Windows NLB is included in both the Standard and Enterprise editions of Windows Server 2008 SP2/R2, and because WNLB is a standard component, it does not require you to use any special or specific server hardware for each member server in the NLB cluster. From the Exchange 2010 perspective, though, you should of course follow the CAS server sizing guidelines available in the Exchange 2010 online documentation on Microsoft TechNet. When a Windows NLB array has been properly configured, all servers in the array are represented by a single virtual IP (VIP) address and a fully qualified domain name (FQDN). When a client request comes in, it will be sent to all servers in the Windows NLB array. The client will then be mapped to a particular server, and the request to the other servers will be dropped. Having said this, you can use affinity to direct specific client requests to particular member servers. You can even configure each member server with a priority, although both topics are outside the scope of this article.

Unicast or Multicast Mode?

A Windows NLB array can be configured in either unicast or multicast mode. Since WNLB is not something Exchange administrators or consultants deal with on a daily basis, it can be difficult to decide which mode to choose when the time comes to load balance the RPC traffic to the RPC Client Access service on the CAS servers in the Exchange 2010 infrastructure.
So let's take a look at each WNLB mode:

Unicast Mode

With the WNLB cluster configured in unicast mode, the MAC address of each server's network adapter will be changed to a virtual cluster MAC address, which is the MAC address that will be used by all servers in the Windows NLB cluster. When unicast mode is enabled, clients can only connect to the servers via the VIP address on the network interface card (NIC) that has been configured with the cluster MAC address.

Multicast Mode

With the Windows NLB cluster configured in multicast mode, a multicast MAC address is added to the cluster adapter of each server in the cluster. Note that I write "is added", as each server will retain its original MAC address. A Windows NLB cluster, no matter what mode it is configured in, works with just a single network adapter installed in each server, but it is recommended to install a second network adapter in each server in order to achieve optimal performance and to separate ordinary and cluster-related network traffic. So what mode should I use for my Exchange 2010 CAS array, and how many network adapters should I install in each Client Access server? Well, a best practice recommendation is to install two network adapters and use unicast mode, so that the host and cluster network traffic are separated on their own respective network interfaces. However, if you only have the option of installing one NIC in each CAS server, or if you're forced to use multicast mode because of the switches used in your organization, you should pick multicast mode.

Note: You can read more about the pros and cons of each NLB mode here.

Windows NLB Arrays in a Virtual Environment

Many organizations are virtualizing most of their servers nowadays, and these organizations often also have a demand for virtualizing the Exchange servers within the infrastructure. As many of you know, it is perfectly fine to virtualize all Exchange 2010 server roles (except Unified Messaging), and this is fully supported by Microsoft, but there are a few things to be aware of when it comes to configuring virtual Exchange 2010 CAS servers in a WNLB array.

VMware ESX Server

When you plan to configure a WNLB array for virtual Exchange 2010 CAS servers that use VMware ESX Server as the virtualization platform, it is recommended to configure your WNLB array in multicast mode, since you will otherwise run into an issue with the WNLB array not working properly.
For details on this issue, see this VMware KB article. If you insist on running in unicast mode, VMware has an alternative method to resolve the issue; this method is also described in the VMware KB article.

Hyper-V Server

When you plan to configure a WNLB array for virtual Exchange 2010 CAS servers that use Hyper-V Server or Hyper-V Server R2 as the virtualization platform, you can configure the WNLB array in unicast mode, but you will face a known issue while configuring the WNLB array. The issue occurs because the MAC address of the NIC you use for the WNLB array is changed when you create the WNLB array. This makes the MAC address different from the one Hyper-V originally assigned when you added the NIC under the Hyper-V settings of the virtual machine. To fix this issue, you need to configure a new static MAC address identical to the one assigned by the WNLB (Figure 1). If you are running the virtual Exchange 2010 CAS servers on Hyper-V Server R2, you should also tick "Enable spoofing of MAC addresses".

Figure 1: Configuring a static MAC address and enabling MAC address spoofing

The issue is described in more detail in this Microsoft KB article. Finally, if you use two NICs in each NLB node and you don't have a default gateway specified on the NLB NIC, you should also enable IP forwarding on the NLB NIC. IP forwarding is disabled by default in Windows Server 2008 and Windows Server 2008 R2. To enable it, use the following command:

netsh interface ipv4 set int "[NLB NIC]" forwarding=enabled

Since many of you probably want to deploy a solution similar to the one I describe in this article in your virtualized Exchange 2010 environments, I thought it was a good idea to mention these issues.

Configuring the Network Settings

Although not necessary (as explained earlier), we will use unicast mode with two network adapters installed in this setup (this gives us the most optimal performance). To configure the second network adapter in each Exchange 2010 Client Access server, open Network Connections and give each LAN connection a meaningful name, such as NLB, as shown in Figure 2.

Figure 2: Configuring network connections

Also make sure you change the binding order of the NICs in each server that will be part of the WNLB array. The production NIC should be listed first, as shown below. You get to Advanced Settings by opening Network Connections and selecting Advanced in the menu. You probably need to hold down the ALT key to make the menu bar visible, since it is hidden by default.

Figure 3: Changing the binding order of the NICs

Installing the Windows Server 2008 NLB component

Unlike in previous versions of Windows Server, the WNLB component is not installed by default in Windows Server 2008 and Windows Server 2008 R2. To install WNLB on Windows Server 2008, you can either use the Server Manager GUI or ServerManagerCmd.exe (using ServerManagerCmd.exe -install NLB). To install the component on Windows Server 2008 R2, you can also use PowerShell if you like. In this article we will just stick to using the GUI. So open Server Manager, select Features, and then click Add Features. This brings up the Add Features Wizard, where you simply tick Network Load Balancing and click Install. When the NLB feature has been installed, click Finish and exit Server Manager.
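For reference, the scripted alternative on Windows Server 2008 R2 looks like this (the ServerManager module ships with the operating system):

```powershell
# Load the Server Manager module and install the NLB feature,
# equivalent to ticking Network Load Balancing in the wizard.
Import-Module ServerManager
Add-WindowsFeature NLB
```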

Figure 4: Installing the NLB feature

Creating the CAS array host record in DNS

If you have not already done so, it is time to create the CAS array record in DNS. I explained how to do this back in part one of this multi-part article series. Basically, you simply log on to a domain controller in your Active Directory forest, then open the DNS Manager by clicking Start > Run and typing dnsmgmt.msc. Now expand the Forward Lookup Zones container and right-click the respective forward lookup zone for your Active Directory. On the context menu, select New Host (A), then type the name you want to use. As you can see in Figure 5, I used OUTLOOK for the purpose of this specific setup, and using outlook.domain.com is actually a pretty good best practice. When you have entered the name for the host record, type the IP address you plan to use for the WNLB array that we create in the next section.

Figure 5: Creating the DNS record for the CAS array

If you did not create the CAS array object in Active Directory yet, now would also be a good time to do that. This was also explained in part 1, but I can just as well include the command here:

New-ClientAccessArray -Name "<name of CAS array>" -FQDN <FQDN of CAS array> -Site "<name of AD site>"

Figure 6: Creating the CAS array object in Active Directory

Creating the Windows NLB array

Okay, we have reached the part where we create the actual WNLB array. We can do this using the command prompt, or, if you have installed the Exchange 2010 CAS server role on Windows Server 2008 R2, we can even use PowerShell. Again, I like to include visual representations of what occurs when you configure the solution, so I will use the Network Load Balancing Manager console. Launch this console via Start > Administrative Tools. Now select Cluster in the menu and then New.

Figure 7: Creating a new NLB array

Enter the name of the first node you wish to add to the WNLB array, then click Connect. After a little while, you will see the NICs available for configuring the new NLB array. Select the one named NLB and click Next.

Figure 8: Specifying the NIC to be used as the NLB NIC On the Host Parameter page, leave the defaults as is and click Next.

Figure 9: Host Parameters page in new NLB array wizard It's time to add the IP address you wish to associate with the WNLB array. Remember, this should be the same IP address that you specified when you created the DNS record (outlook.domain.com) for the CAS array. When the IP address has been added, click Next.

Figure 10: Adding an NLB array IP address On the next page, we need to specify the FQDN of the WNLB array as well as the operation mode. In this article I will call the WNLB array array01.exchangelabs.dk, but if you have a special naming standard you must follow, feel free to use another name. Unless your Exchange 2010 CAS servers run on VMware ESX Server or you have other requirements that call for Multicast mode, make sure Unicast mode is selected and click Next.

Figure 11: Specifying the FQDN for the WNLB On the Port rules page, delete the default port rule.

Figure 12: Default Port Rule Click Add. On the Add/Edit Port Rule page, untick All and then specify the first port you wish to add to the WNLB array. In this case that would be TCP port 135, the RPC endpoint mapper port, which is required for the CAS array. Make sure the port rule is set to single affinity and click OK.

Figure 13: Adding new Port Rule for Endpoint mapper Now create an additional port rule where you specify the RPC ports that are used by the Outlook clients and the CAS array. Since we configured static ports for RPC communication between the Exchange 2010 CAS servers and the Outlook MAPI clients, and because we chose to use TCP port 55000 for mailbox connections and TCP port 55001 for directory access connections, these are the ones we will add in this article. If you chose to use other static RPC ports or just the default dynamic range of RPC ports, you should specify those instead. If you have not specified that static RPC ports should be used, you should add TCP ports 1024-65535 in the port rule. Also select single affinity for this port rule and click OK.
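As a reminder of what part 2 covered, the static RPC ports themselves are set on the CAS servers, not in NLB. The sketch below shows the registry value commonly used for the RPC Client Access (mailbox) port; the 55000/55001 values follow this article's examples, and the note about the address book service reflects Exchange 2010 RTM behavior.

```powershell
# Set a static port (55000 in this article) for the RPC Client Access service.
# Run on each CAS server, then restart the Microsoft Exchange RPC Client Access service.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem" `
    -Name "TCP/IP Port" -PropertyType DWord -Value 55000

# The address book service port (55001 here) is set on Exchange 2010 RTM in
# Microsoft.Exchange.AddressBook.Service.Exe.config (RpcTcpPort key), not in the registry.
```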

Figure 14: Adding new Port Rule for RPC traffic Since you most probably need to use this WNLB array for other Exchange clients as well, you should of course also remember to add port rules for these clients. In the figure below, I also added the port rules required by Outlook Anywhere (TCP/443), Exchange ActiveSync (TCP/443), Outlook Web App (TCP/443), secure IMAP (TCP/993), and secure POP (TCP/995). I also added port 80 since this is used for internal IIS redirection (HTTP > HTTPS). Also note that best practice is to configure the secure IMAP and POP port rules with affinity set to none.

Figure 15: Required Port Rules added Now select Finish. The NLB array will now be configured, and after a little while you should have a WNLB array with one node running. Note: Some of you might feel tempted to just use the default port rule for all Exchange client types and services, and although this should work okay for most Exchange clients/services, personally I like to create separate port rules. The reason behind this thinking is that not all Exchange services/clients use the same affinity settings. For instance, the IMAP/POP protocols should be configured with affinity set to none, unlike the other Exchange clients/services, which should use single affinity. From a security standpoint we of course have the Windows Firewall running on the servers, so separating the port rules is not really done in order to gain extra security.

Figure 16: WNLB array created Now we just need to add any additional nodes to the WNLB array. To do so you can right-click on the new WNLB array and select Add Host to Cluster as shown in Figure 17 below.

Figure 17: Adding host to WNLB array Now enter the NetBIOS name or FQDN of the Exchange 2010 CAS server you want to add to the WNLB array, then click Connect.

Figure 18: Specifying the new WNLB array member On the Host Parameters page, click Next.

Figure 19: Host Parameters page in WNLB wizard Click Finish on the Port Rules page.

Figure 20: Port Rules in WNLB array The WNLB array will now begin adding the extra node(s) and update the configuration as necessary.
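On Windows Server 2008 R2, the GUI steps walked through above can also be expressed with the NetworkLoadBalancingClusters PowerShell module. This is a rough sketch, not a tested script: the interface name NLB, the node name EX02, the IP address, and the 135/55000-55001/443 port rules mirror the examples used in this article.

```powershell
Import-Module NetworkLoadBalancingClusters

# Create the WNLB array on the first node, bound to the NIC named "NLB"
New-NlbCluster -InterfaceName "NLB" -ClusterName "array01.exchangelabs.dk" `
    -ClusterPrimaryIP 192.168.1.50 -OperationMode Unicast

# Replace the default catch-all port rule with the rules needed by the CAS array
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -StartPort 135 -EndPort 135 -Protocol Tcp -Affinity Single
Add-NlbClusterPortRule -StartPort 55000 -EndPort 55001 -Protocol Tcp -Affinity Single
Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol Tcp -Affinity Single

# Add the second CAS server to the array
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "EX02" -NewNodeInterface "NLB"
```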

Figure 21: Second WNLB array node added with success We now have a fully working WNLB array that load balances the RPC connections that go from Outlook MAPI clients to the RPC Client Access service on the Exchange 2010 CAS servers in the Exchange organization. But before Outlook MAPI clients begin to use the RPC CAS array, you must configure any mailbox databases within the AD site where the CAS array was created to use the CAS array's FQDN (in this case outlook.domain.com).

To do so, open the Exchange Management Shell and enter the following command: Set-MailboxDatabase <name of DB> -RpcClientAccessServer outlook.domain.com
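If all mailbox databases should point at the array, the command can be run as a single pipeline instead of once per database. A short sketch, with outlook.domain.com being the example FQDN from this article:

```powershell
# Point every mailbox database at the CAS array FQDN in one go.
# Scope the Get-MailboxDatabase call down if only some databases
# belong to the AD site where the CAS array was created.
Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer "outlook.domain.com"
```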

Figure 22: Specifying the FQDN of the CAS array on the Mailbox databases Connecting to the CAS Array with an Outlook MAPI Client Okay, it's time to test that we can connect to an Exchange 2010 mailbox and that the new CAS array (aka RPC connection point) is used when a new Outlook profile is created. So let's launch Outlook 2010 on a domain-joined client machine. This brings up the Add New Account wizard, where the email address of the user logged on to the client machine is automatically entered in the e-mail address field as shown below.

Figure 23: Launching Outlook 2010 in order to create a new profile When clicking Next, Outlook will create the new profile automatically. When it has finished, let us tick Manually configure server settings and then click Next, so that we can see that the new RPC connection point is used.

Figure 24: Outlook 2010 profile created with success As we can see in Figure 25, the new CAS array FQDN is used as the RPC endpoint. Click Finish.

Figure 25: New CAS array FQDN used as RPC connection endpoint Outlook is launched and a local copy of the mailbox is created. Now hold down CTRL while right-clicking the Outlook icon in the systray in the lower right corner. Select Connection status in the context menu. As we can see in the connection status window, we have two mailbox and two directory connections against the CAS array FQDN and a single public folder connection against a mailbox server storing the public folder database. As you learned in part 2 of this article series, the public folder connection to the mailbox server is expected, since public folder connections still go directly to the mailbox server (more specifically the RPC Client Access service on the Mailbox server role).

Figure 26: Connection status window showing the CAS array is used as the RPC connection endpoint That was all I had to share with you in part 3, but you can look forward to part 4, which will be published in the near future here on MSExchange.org. In part 4, I will show you how to configure an external load balancer to work with a Client Access array.

Part 4
Introduction In part three of this multi-part article series, I explained how to provide true high availability for internal Outlook MAPI clients by load balancing the RPC Client Access service (RPC CA) using a combination of a Client Access Server array (CAS array) and Windows Server Network Load Balancing (WNLB) technology. In part four, we will take a look at how to provide true high availability for internal Outlook MAPI clients using a combination of a CAS array and a redundant external hardware-based load balancer solution. Lots of folks seem to think that a hardware-based load balancer solution is an expensive luxury of LORGs (large organizations) with just as large IT budgets at their disposal. But you know what? A hardware load balancer solution does not necessarily need to cost thousands of dollars. You can actually get sophisticated, high-performance devices at a very affordable price (you just need to find the right vendor). This means that even if you work with/for a SMORG (small or medium organization) with a very limited IT budget, it does not mean they cannot afford to invest in a hardware load balancer solution. Personally, I have recommended different hardware load balancer solutions from different vendors to my customers over the years, but for Exchange 2010, I really like the low-cost devices from KEMP Technologies. Their smallest device (LoadMaster 2000) has a price tag of $1,590, which even includes one year of support. This means that you can get a redundant hardware load balancer solution for approximately $3,000!
On top of that, the LoadMaster 2000 device has the same rich feature set as the LoadMaster 2500, 3500, and 5500 models, which means that it has full support for fancy features such as layer 4 and layer 7 load balancing, automatic failover clustering (active/hot standby with a failover time of less than 3 seconds in my test environment), SSL offloading, layer 7 persistence (stickiness), up to 256 virtual services (with a total of up to 1000 real servers!), and server/application health checking. These are features you typically only see listed when looking at expensive load balancer devices from the more well-known vendors on the market.

By the way, if you are on the virtualization bandwagon (who isn't?), KEMP Technologies also has a virtual appliance with a feature set identical to the hardware-based devices. Because I have had very good experience with the devices from KEMP Technologies, and because they are affordable even for the SMORGs that typically plan to deploy a fully redundant Exchange solution consisting of two multi-role Exchange 2010 servers, I've used two LoadMaster 2000 devices configured in a cluster (one active and one hot standby) as the basis for this article.

Figure 1: Topology Note: I'm in no way affiliated with KEMP Technologies, and I am not being paid to point readers at the hardware load balancer devices they provide. Alright, enough shameless promotion; let's get moving. Why use a Hardware Load Balancing solution over Windows NLB? With the architectural changes in Exchange 2010 that, amongst other things, introduce the new RPC Client Access service (which moves Outlook MAPI mailbox connections from the back-end Mailbox servers in the data tier to the Client Access servers (CAS) in the middle tier), providing a load balanced and highly available Client Access server service is more important than ever before.

Windows Network Load Balancing (WNLB) technology can be the right choice for organizations that do not plan to deploy multi-role Exchange 2010 servers with both DAG-protected mailbox databases and a load balanced/highly available CAS service. In addition, using WNLB can be a valid approach for organizations that do not have more than 8 nodes in a WNLB-based array (the Exchange Product Group does not recommend more than 8 nodes in a WNLB-based cluster due to scalability and functionality limitations). However, if you plan to deploy multi-role Exchange 2010 servers with both DAG-protected mailbox databases and a load balanced/highly available CAS service, you cannot use WNLB due to Windows Failover Cluster (WFC) and WNLB hardware sharing conflicts (see this KB article for more information). Also, depending on your environment and network topology, the persistence (affinity) settings provided by WNLB may not be sufficient. For instance, it is recommended to use a hardware-based load balancing solution when you cannot use the client IP settings for session persistence (affinity). When a hardware-based load balancer CAS array has been properly configured, all servers in the array are represented by a single virtual IP (VIP) address and a fully qualified domain name (FQDN). When a client request comes in, it will be sent to an Exchange 2010 CAS server in the CAS array using the round robin distribution method. Of course, we have options to prefer one or more CAS servers over others. What Persistence (affinity) type should I use for the RPC CA Service? Persistence (aka affinity, stickiness, etc.) is the ability of a load balancer to maintain a connection between a client and a server. Persistence can make sure that all requests from a client are sent to the same server in an NLB array or server farm. For the RPC CA service, it is recommended to use client IP session (source IP address) based persistence.
When using client IP session based persistence, the session requests from an individual client are directed to the same CAS server based solely on the source IP address of the client. I will not dive into the details of persistence in this article. I will instead save this topic for a future article, which will focus on how to load balance not only the RPC CA service but all services living on the CAS server (including OWA, ECP, EAS, OA, EWS, etc.). It's just important you understand that the recommended persistence type for Outlook MAPI clients connecting to the RPC CA service on the CAS server is client IP, aka source IP address. Creating the CAS array host record in DNS If you have not already done so, it is time to create the CAS array record in DNS. I explained how to do this back in part one of this multi-part article series. Basically, you simply log on to a domain controller in your Active Directory forest, then open the DNS manager by clicking Start > Run and typing dnsmgmt.msc. Now expand the Forward Lookup Zones container and right-click the respective forward lookup zone for your Active Directory. On the context menu select New Host (A), then type the name you want to use. As you can see in Figure 2, I used OUTLOOK for the purpose of this specific setup, but using outlook.domain.com is a pretty good best practice. When you have entered the name for the host record, type the IP address you plan to use for the virtual services that we create in the next section.

Figure 2: Creating DNS record for the CAS array If you did not create the CAS array object in Active Directory, now would also be a good time to do that. This was also explained in part 1, but I can just as well include the command here: New-ClientAccessArray -Name <name of CAS array> -Fqdn <fqdn of CAS array> -Site <name of AD site>

Figure 3: Creating the CAS array object in Active Directory Configuring the Hardware Load Balancer to work with the CAS array It is now time to configure our hardware load balancer solution. With a LoadMaster HA pair (one-armed), this is very simple and only requires a few steps. Important: The LoadMaster device (no matter which model we are talking about) does not support specific port ranges for a virtual service as of this writing. This is being added in a future firmware build, though. This means that in order to use LoadMaster devices to load balance traffic going to an Exchange 2010 CAS array, it is a requirement to set static RPC ports. Please refer to part 2 of this multi-part article series for the specific steps on doing so. When we have logged into the web-based GUI, we need to click Virtual Services > Add New.

Figure 4: Opening the Add New Virtual Service page On the Add a new Virtual Service page, enter the IP address of the Exchange 2010 CAS array. This is the VIP we specified back in Figure 2. Then enter the port number and protocol type. The first port we need to create is TCP port 135, the RPC endpoint mapper port. Now click Add this Virtual Service.

Figure 5: Specifying IP address, port, and protocol type of the virtual service

We will now be directed to the Basic Properties page, which is the place where we configure the persistence settings. Here we enable Force L7 and L7 Transparency and specify Source IP Address under Persistence Options. The Scheduling Method (aka the distribution method for client traffic going to the CAS servers) should be set to round robin. When we have configured the different options, we can move on by clicking Add New at the bottom of the page.

Figure 6: Property page of the virtual service

This is the page where we specify the IP address of the real servers (Exchange 2010 CAS servers).

Figure 7: Add the real servers to the virtual service When we have added each CAS server participating in the CAS array, we can move on and create the next virtual service using similar steps and configuration options.

Figure 8: List of real servers added to the virtual service Besides port 135, you also need to create a virtual service for the static port set for RPC connections to the mailbox servers and the static port set for the address book service. Again, refer to part two for details on how to configure these. In this multi-part article series I use port 55000 for mailbox connections and port 55001 for the address book service, so I end up with the list of virtual services shown in Figure 9. If the real servers are reachable on the specified ports, you should see a green indicator.
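Independently of the load balancer's own health checks, you can verify from any machine that a real server actually listens on the required ports. A small hedged sketch: the server name is a placeholder, and the ports match the examples used in this article.

```powershell
# Check that a CAS server answers on the endpoint mapper and static RPC ports
$server = "ex01.domain.com"
foreach ($port in 135, 55000, 55001) {
    $tcp = New-Object System.Net.Sockets.TcpClient
    try {
        $tcp.Connect($server, $port)
        Write-Host "$server responds on TCP port $port"
    } catch {
        Write-Host "$server does NOT respond on TCP port $port"
    } finally {
        $tcp.Close()
    }
}
```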

Figure 9: List of required virtual services We now have a fully working CAS array that load balances the RPC connections that go from Outlook MAPI clients to the RPC Client Access service on each Exchange 2010 CAS server using a hardware-based load balancer solution. But before Outlook MAPI clients begin to use the CAS array, we must configure any mailbox databases within the AD site where the CAS array was created to use the CAS array's FQDN (outlook.domain.com). To do so, open the Exchange Management Shell and enter the following command: Set-MailboxDatabase <name of DB> -RpcClientAccessServer outlook.domain.com

Figure 10: Specifying the FQDN of the CAS array on the Mailbox databases Connecting to the CAS Array with an Outlook MAPI Client Okay, it's time to test that we can connect to an Exchange 2010 mailbox and that the new CAS array (aka RPC connection point) is used when a new Outlook profile is created. So let's launch Outlook 2010 on a domain-joined client machine. This brings up the Add New Account wizard, where the email address of the user logged on to the client machine is automatically entered in the e-mail address field as shown below.

Figure 11: Launching Outlook 2010 in order to create a new profile When clicking Next, Outlook will create the new profile automatically. When it has finished, let's tick Manually configure server settings and then click Next, so that we can see that the new RPC connection point is used.

Figure 12: Outlook 2010 profile created with success As we can see in Figure 13, the new CAS array FQDN is used as the RPC endpoint. Click Finish.

Figure 13: New CAS array FQDN used as RPC connection endpoint Outlook is launched and a local copy of the mailbox is created. Now hold down CTRL while right-clicking the Outlook icon in the systray in the lower right corner. Select Connection status in the context menu. As we can see in the connection status window, we have two mailbox and two directory connections against the CAS array FQDN and a single public folder connection against a mailbox server storing the public folder database. As you learned in part 2 of this article series, the public folder connection to the mailbox server is expected, since public folder connections still go directly to the mailbox server (more specifically the RPC Client Access service on the Mailbox server role).

Figure 14: Connection status window showing the CAS array is used as the RPC connection endpoint This concludes this article series. Hope you enjoyed it!
