This document contains guidelines to help you design and configure EMC ControlCenter for performance and scalability. The guidelines apply to ControlCenter components and products and the networks that connect them.
Note: This document assumes that you are familiar with ControlCenter and have reviewed the EMC ControlCenter 6.1 installation and configuration documentation, especially EMC ControlCenter 6.1 Release Notes, EMC ControlCenter 6.1 Overview and EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1.
Topics include:
Revision History ................................................................ 2
Introduction .................................................................... 3
Planning guidelines for the infrastructure ...................................... 6
Infrastructure installation guidelines .......................................... 13
Data Collection Policy guidelines ............................................... 17
Upgrade guidelines .............................................................. 21
Networking guidelines ........................................................... 22
Console guidelines .............................................................. 25
Agent guidelines ................................................................ 27
StorageScope Guidelines ......................................................... 53
Hardware configuration examples ................................................. 69
Revision History
Table 1    Performance and scalability content updates by revision

Revision A07: Configured FCC Agent DCPs to run more efficiently (page 35). Removed reference to CNT InRange. Added Note before Table 3 (page 9) for StorageScope parameters.
Revision A06: Added StorageScope FLR information.
Revision A05: Editorial changes.
Revision A04: Editorial changes.
Revision A03: Editorial changes.
Revision A02: Added SPEC information (throughout entire document).
Revision A01: New document for ControlCenter 6.1 release.

Note: If you are using StorageScope functionality, refer to the StorageScope Guidelines on page 53 for specific information.
Introduction
This document contains guidelines for planning, installing, and configuring EMC ControlCenter 6.1. The guidelines can help you achieve maximum performance and scalability and therefore realize the full potential of ControlCenter. Before you begin using the guidelines, read this section to gain a high-level understanding of ControlCenter. Also, read the following publications for details on ControlCenter installation, configuration, administration, and operation:
- EMC ControlCenter 6.1 Overview
- EMC ControlCenter 6.1 Release Notes
- EMC ControlCenter 6.1 Administration Guide
- EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1
- EMC ControlCenter 6.1 Planning and Installation Guide, Volume 2
- EMC ControlCenter 6.1 Upgrade Guide (for upgrade only)
The physical and logical elements managed by ControlCenter are known as managed objects; they include storage arrays, hosts, virtual machines, switches, databases, fabric zones, and so on. One of the major tasks in configuring ControlCenter for optimal performance and scalability is determining the overall size of a ControlCenter configuration, which involves considering the size, number, and types of the managed objects that ControlCenter will manage. Data collection policies (DCPs) specify the data to be collected by ControlCenter agents and the frequency of collection. Each agent has predefined DCPs and collection policy templates that can be managed through ControlCenter Administration. When planning and installing ControlCenter, you will be asked to choose among the following configuration types:
- Single-host infrastructure configuration: the infrastructure components (ECC Server, Store, Repository, conditionally StorageScope Repository, and Web Server) reside on one host.
- Distributed infrastructure configuration: the infrastructure components listed above reside on two or more hosts.
- Single-host infrastructure with agents: the infrastructure components and agents reside on one host.
EMC ControlCenter 6.1 Performance and Scalability Guidelines
The listed configuration types, and the ControlCenter components that comprise the infrastructure, are described in detail in EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1. The following interfaces allow users to monitor and manage the ControlCenter environment:
- The ControlCenter Console, a Java application through which you view and manage your ControlCenter environment, using a basic set of common functions, such as discovery, to perform monitoring, management, and (for authorized users only) ControlCenter administration.
- The Web Console, a Web-based interface where you can perform a limited number of ControlCenter functions such as monitoring, reporting, and alert management. Unlike the ControlCenter Console, the Web Console requires no installation; users access it through a Web browser. The Web Console is best suited to users who monitor ControlCenter remotely, or to local users who do not require full ControlCenter Console capabilities.
- The StorageScope FLR Console, a browser-based interface that provides policies and reports. Console pages can be customized to include snapshots of information from other areas within the product.
- The Performance Manager Console, a browser-based interface that is invoked after data collection is complete. Performance Manager uses the collected data to create performance and configuration displays of components in your EMC ControlCenter environment.
Specifically, the performance and scalability guidelines in this document can help you determine:
- The optimal ControlCenter configuration type (single-host or distributed infrastructure).
- The sizes of storage arrays, switches, hosts, and other managed objects.
- When to upgrade a ControlCenter configuration to the next size classification (refer to Guideline P2 on page 7 for details).
- Optimal settings for data collection policies (DCPs).
- Where to place ControlCenter components so that network latency is minimized.
- The minimum number of gatekeepers (GKs) required to manage and monitor a Symmetrix array.
When reading the guidelines, keep in mind that the following factors affect ControlCenter performance and scalability:
- Type, number, and size of managed objects
- Type, number, and frequency of DCPs
- System resources available to the agent host and ControlCenter infrastructure components (ECC Server, Repository, Store, StorageScope Repository, StorageScope Server, and Web Server)
- Number of active ControlCenter Consoles (and optionally, Web Consoles)
- Number of available Stores
Table 2    Managed object size classifications (managed objects include Symmetrix logical volumes, virtual machines, storage array devices, file systems (vmfs3), SAN mapped LUNs, and redundant paths)

    Small      Medium      Large
    1-800      801-2000    2001-8192
    1-1600     1601-4000   4001-16000
    1-90       91-120      121-240
    1-512      513-1024    1025-2048
    1-512      513-1024    1025-2048
    2          4           8
    8          16          32
    1-16       17-64       65-256
    1-100      101-200     201-400
    1-100      101-200     201-400
    8          16          32
    16         32          64
    8          16          32
    8          16          32
    2          2           2
A Symmetrix logical volume (also known as a Symmetrix logical device or hypervolume) is a virtual disk drive in a Symmetrix array. If the size of a managed object exceeds Large (other than an extra-large Symmetrix with up to 64K devices, which is addressed in the Storage Agent guidelines section of this document), contact the EMC Solutions Validation Center (SVC@EMC.com).
Guideline P2
Determine the configuration size. Using Table 3 on page 9 and Table 4 on page 11, determine the configuration size. The numbers in the tables are based on medium-size managed objects and five-hour data collection intervals. Refer to Hardware configuration examples on page 69 for additional information on the specifications of the hosts used in the lab.
Note: ControlCenter infrastructure, StorageScope, Console, and agent performance and scalability have been measured on two distinct hardware types: single-core and multi-core. A single-core host (one core per processor) should have at least two processors, whereas a multi-core host (two or more cores per processor) can have one or more processors (2 x dual-core or 1 x quad-core). Refer to page 69 for specifications of the single-core and multi-core hosts used in the lab. To help you compare the performance of your hosts to the hosts used in the lab, the SPEC score is used as an index. Throughout this document these two hardware types are referred to as single-core and multi-core. The minimum SPEC CINT2000 Rate baseline score for a single-core host should be at least 26. Multi-core hardware should have a minimum SPEC CINT2000 Rate baseline score of 123 or a SPEC CINT2006 Rate baseline score of 61. For more information on SPEC scores, refer to Understanding SPEC on page 70.
- If a host is to be connected to a storage array, use only dedicated server-class hosts listed in the EMC 6.1 Support Matrix on Powerlink. Otherwise, select a server-class host comparable to those listed in Table 43 on page 69.
- Disk space is the amount required for installing both ControlCenter and the operating system on the target host. If you are upgrading, the amount of disk space required is greater; refer to the EMC ControlCenter Upgrade Guide for information.
- The ControlCenter Components column shows the installed components; although not listed, it is assumed that the Master Agent and Host Agent are also installed.
- The Storage Arrays column lists the total number of medium-sized storage arrays of any type (EMC, HDS, HP, Sun, IBM, and so on).
- The numbers of managed objects published in a table row are the totals supported per column. For example, the last row of Table 3 on page 9 lists 200 arrays, 2,500 hosts, and so on; however, ControlCenter supports a combined total of up to 4,400 hosts, arrays, ESX servers, databases, and switches together.
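The combined-object ceiling described above can be sanity-checked during planning with a short sketch. The function name and example values are illustrative, not part of ControlCenter:

```python
# Hypothetical sizing check for the combined managed-object ceiling:
# ControlCenter supports up to 4,400 hosts + arrays + ESX servers +
# databases + switches per instance, even though each column's
# individual maximum may be higher.
COMBINED_LIMIT = 4400

def within_combined_limit(hosts, arrays, esxs, databases, switches):
    """Return (ok, total) for a proposed configuration."""
    total = hosts + arrays + esxs + databases + switches
    return total <= COMBINED_LIMIT, total

# The per-column maxima from the large multi-core row cannot all be
# reached at once: their sum exceeds the combined limit.
ok, total = within_combined_limit(hosts=2500, arrays=200, esxs=100,
                                  databases=1500, switches=156)
print(ok, total)  # False 4456
```

This illustrates why the per-column totals in the tables should not simply be added together when sizing a configuration.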
Various tables in this document (for example, Table 3 on page 9) provide minimum or recommended specifications of hosts for installing ControlCenter infrastructure components or agents.
Note: In Table 3, a medium installation size covers configurations up to medium size and includes the small installation size.
The number of hosts that a ControlCenter instance can manage is defined by the number of host device logical paths or the number of hosts, whichever limit is reached first. Host device logical paths: the total number of logical paths to storage devices accessible from a host. Refer to Figure 1, Host Device Logical Paths, for an example.
Figure 1    Host Device Logical Paths

A host with two HBAs (ports HBA0 P0 and HBA1 P0) connects through switch ports 1 through 4 to Symmetrix directors FA-2C0 and FA-16C0, which present device 020. Four zones (Z1 through Z4) are defined, and the number of host device logical paths equals the number of zones created:

Host Device Logical Paths = 4
    HBA0 P0 - port1 - port3 - FA-2C0
    HBA0 P0 - port1 - port4 - FA-16C0
    HBA1 P0 - port2 - port3 - FA-2C0
    HBA1 P0 - port2 - port4 - FA-16C0
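The path arithmetic in Figure 1 can be sketched in a few lines. This is an illustration only; the port names follow the figure, and the zone list assumes every HBA port is zoned to every FA port:

```python
from itertools import product

# Sketch of the Figure 1 path count: each zone joins one host HBA port
# to one Symmetrix FA port, and the number of host device logical paths
# equals the number of zones created.
hba_ports = ["HBA0 P0", "HBA1 P0"]   # host initiator ports
fa_ports = ["FA-2C0", "FA-16C0"]     # Symmetrix director ports

# Fully zoned: one zone per (HBA port, FA port) pair, i.e. Z1-Z4.
zones = list(product(hba_ports, fa_ports))
host_device_logical_paths = len(zones)
print(host_device_logical_paths)  # 4
```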
Note: StorageScope may be deployed on ControlCenter infrastructure hosts provided some conditions are met. Refer to Determine the appropriate placement of StorageScope. on page 55 and Allocate disk space for the StorageScope host. on page 56 for specific recommendations on component placement and disk space requirements.
The configurations in Tables 3 and 4 support StorageScope. For specific StorageScope information, refer to StorageScope Guidelines on page 53.
Table 3    Distributed infrastructure configurations and supported managed objects

Single-core processor host:

Installation Size   Configuration Type   Min. Memory (each host)   Min. Disk (each host)
Large (1 Store)     Distributed          2 GB                      72 GB, 36 GB
Large (2 Stores)    Distributed          2 GB                      72 GB, 36 GB, 36 GB
Large (3 Stores)    Distributed          2 GB                      72 GB, 36 GB, 36 GB, 36 GB

Large (1 Store) supports: 60 storage arrays; 87,040 host device logical paths (680 hosts); 35 ESXs; 380 Oracle databases; 44 switches (2,816 ports).

Multi-core processor host:

Installation Size   Configuration Type   Min. Memory (each host)   Min. Disk (each host)
Large (2 Stores)    Distributed          2 GB                      72 GB, 72 GB

Large (2 Stores) supports: 200 storage arrays; 320,000 host device logical paths (2,500 hosts); 100 ESXs; 1,500 Oracle databases; 156 switches (9,984 ports).
Table 4    Single-host configurations and supported managed objects

Installation Size   Configuration Type   Min. Memory   Min. Disk   Storage Arrays   Host Device Logical Paths (Hosts)   # ESXs   # Oracle DBs   # Switches (Ports)
Small               Single-Host          2 GB          36 GB       30               42,240 (330)                        25       180            24 (1536)
Small               Single-Host          3 GB          36 GB       -                -                                   -        -              -
Small               Single-Host          3 GB          72 GB       60               87,040 (680)                        35       380            44 (2816)
StorageScope Repository: When deploying StorageScope on the Infrastructure host or on the host installed with the ControlCenter Repository, ensure that the host has the additional RAM for its configuration (Configuration #1: 2 GB of RAM; Configuration #2: 3 GB of RAM), and ensure that a minimum of 1 GB of free space has been provided on the drive used by the new StorageScope Repository and other ControlCenter components. Refer to "StorageScope Guidelines" on page 64 for additional information.
Note: Refer to the Agent guidelines on page 27 for restrictions on agent configuration.
Note: When deploying Storage Agent for Centera on an infrastructure host like the one shown in Configuration #1: Single-core host with infrastructure and agents or Configuration #2: Multi-core host with infrastructure and agents in Table 4, do not include it in the count of agents such as "Any three of the following:" or "Any five of the following:" or "Any seven of the following:" Storage Agent for Centera has a much smaller footprint than any of the other agents listed.
- Add 1 GB of RAM to the infrastructure host to allow for the Web Server and ECCAPI components.
- Uninstall the Console and install the Web Server component.
- Install the EMC ControlCenter Web Server and ECCAPI components on a separate host.
If you decide to install Web Server on a dedicated host, depending on the configuration, you can deploy some ControlCenter agents on that same host, as described in Table 5.
Table 5    Web Server host with agents

Minimum SPEC CINT2000 Rate (baseline) = 26; minimum memory 2 GB; minimum disk 8 GB.

Web Server and ECCAPI, and any two of the following:
- Workload Analyzer Archiver (a)
- FCC Agent
- Storage Agent for CLARiiON
- Common Mapping Agent
- Storage Agent for NAS
- VMWare Agent
- Console
- Storage Agent for Centera
a. See Guideline A22 on page 51 for additional disk space requirements when installing WLA Archiver.
Note: When deploying Storage Agent for Centera on an infrastructure host like the one shown in Table 5, do not include it in the count of agents such as "Any three of the following:" or "Any five of the following:" or "Any seven of the following:" Storage Agent for Centera has a much smaller footprint than any of the other agents listed.
Guideline I3
When a Store is overloaded or the configuration size increases to the next classification, install another Store or redistribute DCPs. A Store is considered overloaded when the alert with the display name Store Load Alert appears in the Alerts View and does not clear for about an hour. The alert triggers when the workload of a Store exceeds a threshold value, indicating that you might need to add another Store. A Store requires a minimum of 600 MB of disk space. If no other ControlCenter components (other than a Store) are to reside on a disk drive, you can install the Store on a smaller drive (for example, 9 GB). To determine the cause of the Store being overloaded:
Investigate whether DCPs are properly distributed. If they are scheduled to start concurrently, reschedule them so they are staggered. Refer to Guideline D1 on page 17 and use the DCP Schedule Utility on Powerlink for help with tuning your DCP schedules.
Note: Be aware that DCPs run based on the system time of the agent host. The agent host system time and the Store system time may be different (for example, in different time zones).
Note: WLA DCP frequency does not impact Store processing. Transactions generated by WLA DCPs are either saved on the local disk of the agent host running WLA policies or sent to the WLA Archiver, bypassing the Store.
Determine if managed objects were added to ControlCenter over time without being redistributed or without DCPs being reconfigured.
After determining which DCPs caused the overload, address the overload conditions by doing one or all of the following:
- Decrease the frequency of the DCPs (for example, change the frequency from one minute to five minutes) so that bottlenecks are not created.
- Move from a single-host configuration to a distributed configuration by uninstalling the Store and installing it on a new host, or by adding more Stores on new hosts. Migration from a single-host to a distributed configuration on dual-core hosts does not require uninstallation of the Store on the Repository host; the faster CPU and additional memory of the dual-core configuration allow the Store to be collocated with the Repository in a distributed configuration.
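The rescheduling step above (staggering DCPs so they do not start concurrently) can be sketched with a small helper. This is a hypothetical illustration, not the DCP Schedule Utility mentioned earlier; the policy names and window are invented:

```python
from datetime import datetime, timedelta

# Illustrative helper: spread DCP start times evenly across a window so
# that policies assigned to the same Store do not all fire at once.
def stagger_starts(dcp_names, window_start, window_minutes):
    """Return a dict mapping each DCP name to a staggered start time."""
    step = window_minutes / max(len(dcp_names), 1)
    return {name: window_start + timedelta(minutes=i * step)
            for i, name in enumerate(dcp_names)}

starts = stagger_starts(["sym_discovery", "clar_discovery", "fcc_topology"],
                        datetime(2024, 1, 1, 22, 0), window_minutes=60)
for name, when in starts.items():
    print(name, when.strftime("%H:%M"))  # 22:00, 22:20, 22:40
```

Remember that DCPs run on the agent host's system time, so the window should be expressed in each agent's local time zone.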
Guideline I4
Follow these recommendations when providing high availability and allocating disk space for infrastructure hosts.
- For high availability, use a mirrored (RAID 1) disk (keep in mind that this doubles the number of required physical hard drives).
- For system performance, a RAID controller is the minimum recommendation for disk mirroring. Other options for increased system performance include using disk arrays such as Symmetrix (metavolume) or CLARiiON (RAID-10 LUN). Host-based mirroring is strongly discouraged.
- Do not use RAID 5 when installing the Repository. ControlCenter is considered an OLTP application, and RAID 5 may degrade its performance.
Guideline I5
Follow these recommendations to ensure optimal host operating system performance. Windows provides tools and programs designed to keep your computer safe, manage your system, and perform regularly scheduled maintenance that keeps your computer running at optimum performance. Here are some recommended maintenance tasks:
- Clean up temporary disk space.
- Shut down all ControlCenter components on the host and run a disk defragmentation utility to defragment the drive.
- Using a Microsoft performance tuning guide, shut down any unnecessary services, leaving the required ControlCenter services running.
Single-core processor host (minimum SPEC CINT2000 Rate baseline = 26):
    Medium, Single-Host: 179,200 (40)
    Large (1 Store), Distributed: 21,760 (170)
    Large (2 Stores), Distributed: 44,800 (350)
    Large (3 Stores), Distributed: 65,280 (510)

Multi-core processor host (minimum SPEC CINT2000 Rate baseline = 123, or SPEC CINT2006 Rate baseline = 61):
    Medium, Single-Host: 44,800 (350)
    Large (2 Stores), Distributed: 80,000 (625)
Guideline D2
Schedule daily data collection (polling) policies. Customers with more than two Stores (distributed configuration) should ensure that the daily Discovery policies are evenly distributed to avoid overloading the ControlCenter Repository. The following are best practices:
- Avoid overlapping DCPs with critical application tasks running on the agent hosts. For example, avoid scheduling an agent host DCP execution at midnight if the agent host has to run a mission-critical backup task at that time.
- Recommendations for File Level Collection (FLR) DCPs are discussed in Data Collection Policy guidelines on page 59.
- If possible, avoid overlapping DCP execution with ControlCenter maintenance tasks such as those listed in Table 7 on page 18.
Table 7    ControlCenter maintenance tasks, scheduled at 10 p.m. daily, 2 a.m. daily, 9 p.m. daily, 11 p.m. daily, or at customer-defined times
Guideline D3
Disable the WLA Revolving data collection policy to reduce resource utilization on the agent host. The performance statistics collected by the WLA Daily, Revolving, and Analyst DCPs are identical, but the policies operate at different frequencies. Typically, the WLA Revolving DCP collects statistics for the last two hours and saves them on the local disk of the agent host. Data collected by the WLA Revolving DCP is used during the day, whereas WLA Daily data is for short- and long-term trending.
Note: The WLA Daily, Revolving, and Analyst policies are disabled by default.
Beginning with ControlCenter 5.2 Service Pack 3, data collected by the WLA Daily DCP is processed every hour by the WLA Archiver agent, instead of waiting until midnight before it begins to process WLA Daily data for the previous day. Automation reports will continue to be produced starting at midnight on a daily basis. This change provides several benefits, including but not limited to:
Performance statistics data gathered by the WLA Daily DCP is available for immediate use.
The WLA Revolving DCP may no longer be needed, since WLA Daily data can be used instead. Disabling the WLA Revolving policy helps conserve system resources on agent hosts running agents such as the Storage Agent for CLARiiON and the FCC Agent.
Guideline D4
Schedule the WLA Analyst DCP for short durations only. The WLA Analyst policy, designed to collect granular performance data for troubleshooting a performance problem on one or a few managed objects at a time, should run only for time periods that sufficiently capture the events necessary to identify problems. It is not intended to run routinely for all managed objects.

To achieve the desired level of data granularity, the data collection frequency may be set at a higher rate than other DCPs. This high collection frequency can affect the performance of the target managed object, as well as the ControlCenter component hosts for the collecting agent and the WLA Archiver. The degree of impact depends on the type and size of the managed object and many other factors. Only experienced users, who understand the effects of data collection frequency on the performance of a particular managed object, should enable this policy. Furthermore, it is expected that this DCP will be used when a performance issue has been identified using other tools or DCPs, and further detailed performance data is required to characterize the problem or determine its root cause. The WLA Daily DCP should be routinely used to collect round-the-clock performance data for managed objects.

When running concurrent WLA Analyst DCPs, each collecting data from a different managed object but using the same agent, restrict concurrent WLA Analyst policies to two managed objects for 3-6 hours per agent, or only as long as required to capture the necessary data, whichever duration is shorter, in order to minimize the performance impact on the affected managed objects.

Scenario: Symmetrix #37 is having a spike in IO rate between 2:00 and 3:30 p.m. This Symmetrix is managed by a Storage Agent for Symmetrix installed on host #46. Create one WLA Analyst policy at a 2-minute interval for this Symmetrix, and apply a custom schedule to this policy for the period 1:00 p.m. to 4:00 p.m.
At 4:00 p.m., the policy will automatically close and the data will be available in Performance Manager for analysis.
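The scenario above can be checked numerically before enabling the policy. This is a hypothetical helper (ControlCenter applies the custom schedule itself); it verifies the window stays within the recommended 3-6 hour maximum and estimates how many samples the interval produces:

```python
from datetime import datetime

# Sketch: validate a WLA Analyst collection window against the guideline
# (run only as long as needed, at most a few hours) and estimate the
# number of samples the chosen interval will generate.
def analyst_window(start, end, interval_minutes, max_hours=6):
    """Return (hours, samples) or raise if the window is too long."""
    hours = (end - start).total_seconds() / 3600
    if hours > max_hours:
        raise ValueError("Analyst DCP window exceeds the recommended maximum")
    samples = int((end - start).total_seconds() // (interval_minutes * 60))
    return hours, samples

# Scenario above: 2-minute interval from 1:00 p.m. to 4:00 p.m.
hours, samples = analyst_window(datetime(2024, 1, 1, 13, 0),
                                datetime(2024, 1, 1, 16, 0), 2)
print(hours, samples)  # 3.0 90
```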
Upgrade guidelines
This section contains guidelines for upgrading to ControlCenter 6.1 from a previous version.

Guideline U1

Upgrade agents in batches. When upgrading multiple agents, ControlCenter begins discovery of managed objects assigned to agents that are already deployed. To avoid system-level performance problems during that time, restrict concurrent agent upgrade sessions as recommended in Table 8. If an agent host is selected for upgrade, all agents on that host are upgraded automatically; limiting the number of agent hosts upgraded concurrently also limits the number of concurrent agent upgrades. Also, wait at least one hour between agent upgrade sessions so the Infrastructure can finish discovering managed objects assigned to the upgraded agents. Do not run multiple agent upgrade sessions at the same time; if you do, ControlCenter Console response time, and refreshes resulting from configuration commands (SDR, Meta Device Configuration, etc.), might be slow.
Table 8    Concurrent agent upgrade session limits by configuration size

Medium    200
Large     10    5
Note: Install one Storage Agent for Symmetrix at a time when the agent manages a Symmetrix with 32K devices or more. Upgrade two Storage Agents for Symmetrix at a time when the agents manage Symmetrix arrays with fewer than 32K devices.
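The batching recommendation above might be scripted as follows. This is a sketch; the batch size of 5 is an illustrative placeholder, and the real limit should come from Table 8 for your configuration size:

```python
# Hypothetical batching helper for agent upgrades: split agent hosts
# into limited-size batches; the operator runs one batch per session
# and waits at least one hour before starting the next.
def upgrade_batches(agent_hosts, max_concurrent):
    """Split the host list into batches of at most max_concurrent."""
    return [agent_hosts[i:i + max_concurrent]
            for i in range(0, len(agent_hosts), max_concurrent)]

hosts = [f"host{n:02d}" for n in range(1, 13)]   # 12 hosts to upgrade
for batch in upgrade_batches(hosts, max_concurrent=5):
    print(batch)  # run one batch, then wait >= 1 hour before the next
```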
Networking guidelines
This section provides guidelines related to ControlCenter components in a distributed network.

Guideline N1

Minimize the effect of network latency on ControlCenter. As network latency increases, so does execution time. In situations where latency is high, some ControlCenter operations may time out, especially real-time actions. Do the following to reduce the effect of latency on a ControlCenter configuration:
- Ensure that latency between infrastructure components (ECC Server, Store, Repository, StorageScope Repository, StorageScope, WLA Archiver, and Web Server) is less than 8 ms (milliseconds). Try to keep infrastructure components on the same LAN segment.
- Ensure that latency between ControlCenter agents and the infrastructure does not exceed 200 ms. The minimum bandwidth is 386 Kbps with 50 percent utilization.
Note: Because data traffic between agents and the infrastructure is far less than between infrastructure components or between ControlCenter Console interfaces and the infrastructure, agents can tolerate a higher latency.
- Limit the network latency between the SMI provider and the FCC Agent to 100 ms.
- Network latency between the SMI provider and connectivity devices should not exceed 150 ms.
- Ensure that latency between the Web Console and the infrastructure does not exceed 50 ms.
- Ensure that latency between the ControlCenter Console and the infrastructure does not exceed 15 ms. Optimally, latency should be 8 ms or less. If network latency exceeds 8 ms:
  - Consider using the Web Console instead of the ControlCenter Console. Keep in mind that the Web Console does not have all the capabilities of the ControlCenter Console (refer to Key ControlCenter concepts on page 3 for details).
  - If you require the extended capabilities of the ControlCenter Console, use Citrix MetaFrame XP for Windows or Microsoft Terminal Services to access the ControlCenter Console.
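The latency ceilings above can be collected into a simple lookup for use in planning scripts. This is an illustration only; the link names are invented, not ControlCenter identifiers:

```python
# Illustrative table of the latency ceilings (milliseconds) listed in
# Guideline N1; the dictionary and check are not part of ControlCenter.
MAX_LATENCY_MS = {
    "infrastructure-to-infrastructure": 8,
    "agent-to-infrastructure": 200,
    "smi-provider-to-fcc-agent": 100,
    "smi-provider-to-connectivity-device": 150,
    "web-console-to-infrastructure": 50,
    "console-to-infrastructure": 15,   # 8 ms or less is optimal
}

def latency_ok(link, measured_ms):
    """True if a measured latency is within the ceiling for that link."""
    return measured_ms <= MAX_LATENCY_MS[link]

print(latency_ok("console-to-infrastructure", 12))   # True
print(latency_ok("agent-to-infrastructure", 250))    # False
```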
Note: Do not access the ControlCenter Console through Citrix or Terminal Services running on an infrastructure host.
Using Citrix MetaFrame XP allows you to:

- Access the ControlCenter Console over a high-latency, low-bandwidth network.
- Access the ControlCenter Console through a Network Address Translation (NAT) firewall, for example, on a virtual private network (VPN) over cable or DSL. Note that this only applies when the ControlCenter Console is accessed through Citrix MetaFrame XP.
- Operate with an additional layer of security.

Guideline N2

Consider the impact of network latency on various DCPs. High latency is generally associated with low bandwidth. ControlCenter agents are very efficient at data transfer. However, when deploying many ControlCenter agents in a remote location, consider the following:
- A host agent managing a medium or large host generates a burst of data nightly when Discovery or FLR DCPs execute. By default, Discovery DCPs are scheduled to occur at the same time each day; this results in all agents transmitting data concurrently and possibly overloading the network segment or connection.
- Disable WLA DCPs if the agent-to-WLA Archiver network latency is more than 75 ms.
- Concurrent execution of Discovery DCPs by many remote host agents over a low-bandwidth network with latency under 200 ms can cause latency to increase to over 200 ms. As previously recommended, avoid latency above 200 ms. High latencies may cause agent transactions to the Store to be incomplete and rejected, resulting in unpredictable behavior.
- The ECC Server dynamically assigns agents to Stores, taking the Store load into account. There is no benefit in adding a Store in a remote location with the intention of having all the remote agents in that location assigned to that remote Store.
Table 9 on page 24 lists some DCPs and the resulting data volume that the agent sends to either Store or WLA Archiver. Using Table 9 on page 24, consider adjusting the DCPs of the remote agents so that they are staggered. In this way, the agents do not execute concurrently and create a bottleneck in the network.
Table 9    Data volumes sent to the Store or WLA Archiver, by data collection policy

- Full configuration load of a medium Symmetrix array
- Rediscovery of a medium Solaris host
- Topology validation of a medium switch
- WLA Daily policy data for a medium Symmetrix array
- WLA Daily DCP data for a host with 128 host device logical paths
Console guidelines
These guidelines apply to installing and configuring the ControlCenter Console, as well as using the ControlCenter Console and Web Console simultaneously.

Guideline C1

When installing a stand-alone ControlCenter Console, ensure that the client host meets minimum specifications. Table 10 shows the minimum hardware requirements for stand-alone ControlCenter Console hosts (a stand-alone ControlCenter Console host does not contain any ControlCenter infrastructure components). The requirements in Table 10 are for one instance of the ControlCenter Console. Refer to the EMC ControlCenter 6.1 Support Matrix on Powerlink for supported operating system versions.
Table 10    Host requirements for stand-alone consoles with logging

Operating System   # of Processors   Minimum Speed   Minimum Memory
Windows            1                 500 MHz         512 MB
Solaris            1                 360 MHz         512 MB
Guideline C2
When running the Web Console, ensure that the client host meets minimum specifications. Table 11 lists the minimum hardware requirements for ControlCenter Web Console hosts.
Table 11    Host requirements for Web Consoles

Operating System                 # of Processors   Minimum Speed   Minimum Memory
Windows                          1                 500 MHz         128 MB
AIX, HP-UX, Linux, and Solaris   1                 360 MHz         128 MB
Guideline C3
Limit the combined number of active ControlCenter Consoles and Web Consoles to ten during peak ControlCenter use. Limiting the combined number of active ControlCenter Consoles and Web Consoles to ten helps maintain performance. ControlCenter Consoles and Web Consoles are considered active if they are running user commands, including views that update real-time information. An active Web Console session is one that is processing a user request. Additional Web Console sessions can be open as long as they are not simultaneously processing user requests. Contact the Solutions Validation Center (svc@EMC.com) for assistance.
Guideline C4
Use terminal emulation or the Web Console when latency between the ControlCenter Console and infrastructure exceeds 8 ms. When latency between the ControlCenter Console and infrastructure components exceeds 8 ms, consider using terminal emulation software to access the ControlCenter Console, or switch to the Web Console if limited functionality is acceptable. Refer to Guideline N1 on page 22 for details.
Agent guidelines
This section contains guidelines for deploying and managing ControlCenter Agents.
Note: The FCC and NAS agents cannot coexist on the same host because of a port conflict. Refer to the EMC ControlCenter Planning and Installation Guide, Volume 1 for more information on port assignments.
Guideline A1
When deploying agents, limit the initial discovery of managed objects. ControlCenter agent installation consumes more resources during initial discovery than during rediscoveries of the same managed objects. Therefore, restrict the concurrent initial discovery of managed objects per hour per Store as recommended in Table 12. This is accomplished by limiting the number of concurrent agent installs.
Table 12    Limitations of initial discovery (large managed objects per Store per hour)
- Do not run multiple agent deployment sessions through the ControlCenter Console Agent Installation Wizard.
- When deploying agents using tasks in the Agent Installation Wizard, wait at least one hour between large tasks (those that involve more than 20 agents). After the wizard reports successful completion of each task, the Infrastructure is still busy processing recently discovered managed objects, so starting consecutive installation processes may overload the Infrastructure.
- ControlCenter Console response time may be degraded during the agent deployment process because the Infrastructure is busy processing recently discovered managed objects.
- Do not overlap the agent installation process with configuration management commands (SDR, Meta Device Configuration, etc.). The ControlCenter Console might not refresh in this situation.
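The per-hour cap in Guideline A1 could be modeled as follows. This is hypothetical; the limit of three objects per hour is a placeholder, not a value from Table 12:

```python
# Illustrative throttle for initial discovery: assign managed objects
# to hourly windows so that no more than per_store_per_hour of them
# begin initial discovery against one Store in the same hour.
def discovery_schedule(objects, per_store_per_hour):
    """Return {hour_index: [objects]} respecting the hourly cap."""
    hours = {}
    for i, obj in enumerate(objects):
        hours.setdefault(i // per_store_per_hour, []).append(obj)
    return hours

plan = discovery_schedule([f"array{n}" for n in range(7)], per_store_per_hour=3)
for hour, batch in sorted(plan.items()):
    print(f"hour {hour}: {batch}")
```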
Guideline A2
Follow these guidelines when deploying agents on stand-alone Store hosts.
Note: Stand-alone Store hosts are hosts installed only with Store components in a distributed configuration.
Master and Host agents can be deployed unconditionally on stand-alone hosts. Deploy any two of the agents listed in Table 13, or any one of them if you plan to deploy the Console on the stand-alone host. 3 GB of RAM is required if the Storage Agent for Symmetrix and the Symmetrix SDM Agent are installed on a multi-core host; 2.5 GB of RAM is required on a single-core host. Otherwise, the minimum requirement is 2 GB of RAM. Avoid executing Symmetrix configuration change commands during the nightly discovery processing period (typically midnight through 5 a.m.).
Table 13
Agent Type (Pick any 2)
Storage Agent for Symmetrix and Symmetrix SDM Agent
Fibre Channel Connectivity (FCC) Agent
WLA Archiver
Storage Agent for CLARiiON
Common Mapping Agent
Storage Agent for NAS
VMware Agent
Oracle
HDS
Storage Agent for Centera
Guideline A3
Follow these guidelines when setting up a dedicated ControlCenter agent host. Refer to Table 14 for a list of agents that can be deployed on a dedicated agent host that runs ControlCenter agents only.
Table 14
Agent Type
Storage Agent for Symmetrix and Symmetrix SDM Agent
Fibre Channel Connectivity (FCC) Agent
WLA Archiver
Storage Agent for CLARiiON
Common Mapping Agent
Storage Agent for NAS
VMware Agent
Oracle
HDS
Storage Agent for Centera
Guideline A4
On agent hosts, allocate 300 MB of additional disk space for each agent that runs WLA DCPs. The WLA Revolving DCP saves collected data to the local host disk and does not send it to the WLA Archiver unless the user requests it via the Console. The maximum amount of data that this policy can save to disk is 100 MB per policy per managed object. If ControlCenter agents that run WLA Daily and WLA Analyst DCPs (for example, Storage Agent for Symmetrix) cannot communicate with the Workload Analyzer Archiver (because of a network failure, for example), those agents save up to 100 MB of WLA data for each policy on the agent host. This allows WLA data collection to continue until connectivity to the Workload Analyzer Archiver is restored. To support this function, on hosts where agents that run WLA Daily and WLA Analyst DCPs reside, allocate 200 MB of additional disk space for each agent that runs those DCPs. For example, if the WLA Daily, WLA Analyst, and WLA Revolving DCPs will be enabled for the Host Agent for Windows and the FCC Agent on a single host, allocate 600 MB (300 MB for each agent) of additional disk space on that host.
Note: Concurrent processing of all WLA DCPs on the same host will likely cause data gaps. Refer to Guideline D3 on page 18 and Guideline D4 on page 20 for more details.
Note: If the amount of locally collected data for WLA DCPs exceeds 100 MB, local data collection continues (if connectivity to the Workload Analyzer Archiver host is not restored); however, some of the collected data might be lost due to the 100 MB limit.
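The disk-space arithmetic in Guideline A4 can be sketched as follows, assuming one managed object per agent as in the guideline's example. The mapping structure is illustrative, not a ControlCenter API.

```python
MB_PER_WLA_POLICY = 100  # local cap per enabled WLA policy (Guideline A4)

def wla_extra_disk_mb(agents_policies):
    """Estimate the additional local disk space (in MB) to reserve for WLA
    data, at 100 MB per enabled WLA policy per agent. `agents_policies`
    maps an agent name to the list of its enabled WLA policies
    (a hypothetical structure used only for this sketch)."""
    return sum(MB_PER_WLA_POLICY * len(policies)
               for policies in agents_policies.values())
```

With WLA Daily, WLA Analyst, and WLA Revolving enabled for both the Host Agent for Windows and the FCC Agent, this yields the 600 MB figure from the guideline's example.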
Guideline A5
Use agent resource utilization to help schedule DCPs. The impact of an agent on system resources depends on the agent type, polling frequency, and the number and size of the managed objects managed by the agent.
For agents with polling frequencies that are less than one hour, the average Steady State (over 24 hours) CPU resource utilization of the agent should preferably remain under 1 percent when managing one medium-size managed object. For agents having a daily polling cycle, small spikes of resource utilization occur for a few minutes each day.
Table 15 on page 32 shows the CPU utilization for ControlCenter agent polling with one medium-size managed object. These duration and average CPU usage values were determined by tests conducted on the following agent host configurations.
Note: Consider these differences in hardware configuration when using performance data for agent host comparison.
Windows: 2x3.0 GHz processors with 2 GB of RAM, running Windows 2003 Enterprise Edition, SP1. Solaris: 2x1.5 GHz processors with 4 GB of RAM, running Solaris 10 (SunFire v240).
HP-UX: 2x1.0 GHz processors with 4 GB of RAM, running HP-UX B11.11 (rp 3440). AIX: 2x1.89 GHz processors with 4 GB of RAM, running AIX 5.3 (eServer P5 model 510).
Note: These performance measurements reflect maximum expected performance for hosts that are dedicated to agent processing with no other applications running. Your actual performance may vary if agents are run on the same host with other applications or other ControlCenter agents.
Use Table 15 to determine resource requirements for ControlCenter agents. When using the table, keep the following in mind.
The information in the Default Polling Cycles column does not address functionality associated with user commands. Average CPU Usage By Platform includes Master Agent CPU usage. No real-time commands were processed during testing. Agents detected a very low number of alerts during testing. Estimated Memory By Platform shows an estimated memory allocation for an agent managing one medium-size managed object. Memory consumption increases proportionally when the size or count of the managed object increases.
Table 15
Resource requirements for ControlCenter agents (Sheet 1 of 3) Duration (mm:ss) Avg. CPU Usage Estimated Memory Required Disk Space
Agent Type
Local Discovery Performance Statistics BCV/RDF Status Real-time BCV/RDF Status WLA Daily WLA Revolving
Steady State
Steady State 2 Minutes Windows 01:10 Solaris 01:57 HP-UX 01:10 AIX 01:43
Windows <1% Solaris 1.9% HP-UX 1.9% AIX 3.1% Windows 6% Solaris 54% HP-UX 8% AIX 13% Windows 3.0% Solaris 1.0% HP-UX 1.0% AIX 10%
15 Minutes Steady State 15 Minutes 1 Hour 1 Hour 15 Minutes 15 Minutes 15 Minutes Steady State
Table 15
Agent Type
Estimated Memory
Discovery
WLA Daily Collection Host Agent WLA Revolving Collection Watermarks for File Systems Watermarks for Logical Volumes Watermarks for Volume Groups
Steady State
Windows 6.5% Solaris 10.3% HP-UX 10.0% AIX 21.0% Windows 1.2% Solaris 2.5% HP-UX 1.5% AIX 2.0% Windows 3.8% Solaris 6.2% HP-UX 3.0% AIX 4.3% Windows 2.0% Solaris 1.5% HP-UX 2.0% AIX 2.2%
Steady State 2 Minutes Windows 01:10 Solaris 01:15 HP-UX 01:00 AIX 01:05
Table 15
Agent Type
Estimated Memory
12 Hours
Symmetrix SDM Agent Masking Configuration (4000 masking entries) Celerra Discovery WLA Daily Storage Agent for NAS WLA Revolving NetApp Discovery (1 filer, 12 file systems, 23 devices, 2 logical volumes) Discovery (16 nodes) Host Discovery (proxy) Informix Discovery (proxy) Common Mapping Agent Sybase Discovery (proxy) SQL Server Discovery (proxy) IBM UDB Discovery (proxy) Discovery VMware Agent CheckVCForServer Initial Discovery 6 Hours
Windows 10.5 MB Windows 0.1% Windows 80 MB Windows 0:05 Windows < 1% Windows 8.2 MB
10 Minutes
Windows 00:30 Windows 01:14 Solaris 01:39 Solaris 01:38 Solaris 04:20 HP-UX 02:51 AIX 04:25 Windows 00:45 AIX 19:22 1:00 Steady State
Windows 1.0% Windows 1% Solaris 4% Solaris < 1% Solaris 11.38% HP-UX 12.50% AIX 12.2% Windows 2% AIX 6.21% Windows 5%
Windows 15 MB Windows 22.6 MB Solaris 23 MB Solaris 15.5 MB Solaris 25 MB HP-UX 20 MB AIX 50 MB Windows 14 MB AIX 17 MB
Windows 80 MB
Windows 73 MB Solaris 95 MB
Windows 130 MB
5 Minutes
a. To reduce CPU usage by WLA policies, adjust the policy frequency as described in Guideline A20 on page 50.
Fibre Channel Connectivity (FCC) Agent discovers and monitors connectivity devices and fabrics. This section contains guidelines for deploying and managing the FCC Agent.
Guideline A6
Deploy two FCC Agents to considerably improve DCP processing speed. Two FCC Agents process DCPs faster than one because they share the workload associated with processing DCPs and other user-initiated tasks. In a failover situation, either agent can take on the workload of the other.
Guideline A7
Before deploying the FCC Agent and Storage Agent for NAS on the same host, update the agent port configuration. Unless you perform these configuration updates, the Fibre Channel Connectivity Agent and Storage Agent for NAS cannot reside on the same host. Refer to Planning for Agents in the EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1 for details.
Guideline A8
Perform single-switch discovery whenever possible. When the configuration of only one switch has changed, rediscover only that switch instead of the entire fabric. Rediscovery of a single switch typically takes less than 2 minutes.
Guideline A9
Configure FCC Agent DCPs to run more efficiently. DCP execution time depends on the policy types, availability of the Store, switch size (number of ports, number of zones in the zoneset/VSANs, nicknames/aliases, and so on), and network latency between the agent and switches. Additionally, the performance of different DCPs varies by switch type. Considering these factors, Tables 16, 17, and 18 show the recommended DCP collection intervals for Brocade, Cisco, and McDATA switches, and Table 19 shows the recommended DCP settings for mixed-switch environments.
Note: The WLA Daily DCP interval must divide evenly into 60 minutes (for example, 10, 15, 20, or 30 minutes, or 1 hour). Do NOT set the WLA Daily DCP to 45 minutes or to any interval longer than 1 hour (for example, 2 or 3 hours), because WLA processing occurs at 1-hour intervals and requires at least one data point per hour.
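The interval rule in the note above can be checked with a short sketch. This is illustrative only and not part of ControlCenter.

```python
def is_valid_wla_daily_interval(minutes):
    """Check the WLA Daily DCP interval rule: the interval must divide
    evenly into 60 minutes and must not exceed one hour, because WLA
    processing runs at 1-hour intervals and needs at least one data
    point per hour."""
    return 0 < minutes <= 60 and 60 % minutes == 0
```

Under this rule, 10, 15, 20, 30, and 60 minutes are valid, while 45 minutes and anything over an hour are not.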
Table 16    Recommended DCP collection intervals for Brocade switches (Fabric Validation, Device Validation, WLA Daily, WLA Revolving, and Performance Statistics, by switch port count and number of agents)
Table 17    Recommended DCP collection intervals for Cisco switches
Table 18    Recommended DCP collection intervals for McDATA switches
Table 19    Recommended DCP settings for mixed-switch environments
This section contains guidelines for deploying and configuring these agents:
Storage Agent for Symmetrix
Storage Agent for CLARiiON
Storage agents for third-party arrays
Storage Agent for NAS
Storage Agent for Centera
Guideline A10
Follow these guidelines when deploying the Storage Agent for Symmetrix/Symmetrix SDM Agent on any dedicated, infrastructure, or production host. Master and Host agents can be installed unconditionally.
The appropriate number of gatekeepers must be assigned as described in Guideline A11 on page 40. The number of HBAs required to support the configuration depends on the fan-out ratio. If an HBA has a 12:1 fan-out ratio (12 Symmetrix arrays to 1 HBA), 2 HBAs are required on an agent host managing 20 Symmetrix arrays. If you plan to deploy Symmetrix Management Console on the host installed with the Storage Agent for Symmetrix and Symmetrix SDM Agent, refer to Guideline A11 on page 40 and Guideline A14 on page 44.
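The fan-out arithmetic above reduces to a ceiling division, sketched here for illustration (the function is not part of ControlCenter):

```python
import math

def hbas_required(symmetrix_count, fan_out=12):
    """Number of HBAs needed on the agent host for a given fan-out ratio
    (Symmetrix arrays per HBA). With the 12:1 ratio cited in the
    guideline, a host managing 20 Symmetrix arrays needs 2 HBAs."""
    return math.ceil(symmetrix_count / fan_out)
```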
Table 20
1 to 12,000 Devices
Manage up to 6 Symmetrix
24,001 to 40,000 Devices Set Configuration DCP to 15 min interval Set BCV/RDF Status DCP to 10 min interval Set BCV/RDF Real-time Status DCP a to 3 min interval
40,001 to 64,000 Devices Manage only one 64,000-device, two 32,000-device, or three 16,000-device Symmetrix arrays. 64,000 devices spread over 20 Symmetrix arrays is not supported on a single-core host. Installed on a dedicated agent host Not supported on AIX and HP-UX Set Configuration DCP to 20 min interval. Set BCV/RDF Status DCP to 15 min interval Set BCV/RDF Real-time Status DCP a to 4 min interval Set Performance Statistics DCP to 5 min interval Set WLA Revolving DCP to 10 min Not supported
Installed on a Stand Alone Store as described in Guideline A2 on page 28 Installed on a production host or infrastructure host as described in Configuration#1: Single-core infrastructure host on page 9
Manage up to 6 Symmetrix
Manage up to 12 Symmetrix
Not supported
Manage up to 6 Symmetrix
Not supported
Not supported
Not supported
a. The BCV/RDF Real-time Status DCP interval controls how frequently the state shown in data protection task views is updated. Examples of states are restored, sync in progress, and split. When this DCP is set to a 3-minute interval, the Console refreshes only every 3 minutes. Even though the state value displayed on the Console may be out of sync, ControlCenter checks the state value upon new task submission and prevents inappropriate tasks from being executed.
Table 21
1 to 12,000 Devices
Manage up to 6 Symmetrix
40,001 to 64,000 Devices Manage up to 20 Symmetrix Support for 64,000 devices is available on Windows platform only. Not supported
Installed on a Stand Alone Store as described in Guideline A2 on page 28 Installed on a production host or infrastructure host as described in Configuration#2 : Multi-core infrastructure host on page 10
Manage up to 6 Symmetrix
Manage up to 12 Symmetrix
Manage up to 20 Symmetrix
Manage up to 6 Symmetrix
Manage up to 12 Symmetrix
Not supported
Not supported
Note: The Storage Agent for Symmetrix installed on an HP-UX host may consume 1.2 GB of virtual memory when managing 40,000 devices. The HP-UX operating system limits the amount of virtual memory a process can consume, so kernel tuning may be necessary to allow the agent process to grow to 1.2 GB. Setting the maxdsiz kernel parameter larger than the memory size of the process, and setting maxssiz to 10 MB, may be required. The limit may be increased for the login session to match the maxdsiz and maxssiz kernel parameters.
Symmetrix/SDM Agent
Gatekeeper requirements for the Storage Agent for Symmetrix/Symmetrix SDM Agent depend on the size of the Symmetrix and the agent functionality in use. A larger Symmetrix requires a gatekeeper to be kept open longer to complete a DCP; the longer a gatekeeper is kept open by a DCP, the more likely a different DCP will require an additional gatekeeper.
Storage Agent for Symmetrix/Symmetrix SDM Agent functionality, as it corresponds to gatekeeper usage, is categorized as follows:
Monitoring/reporting - all default-enabled DCPs (not including WLA)
Workload Analyzer functionality - WLA default policies, Daily DCP (15 minutes)
Management - any active management of the Symmetrix (SDR, device configuration, feature controls, masking, etc.)
Symmetrix Management Console
The size of the Symmetrix being managed by SMC has little effect on SMC gatekeeper requirements. Refer to Table 22 to determine gatekeeper requirements for the Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC.
Table 22    Total gatekeeper requirements for Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC

Symmetrix size              Monitoring/Reporting, WLA Policies,    Monitoring/Reporting, WLA Policies,
                            Config Change, and SMC                 Config Change, SMC, and CLI Scripts
                            (6 Gatekeepers)a                       (2 Gatekeepers)b
6,001 to 16,000 devices     12                                     14
16,001 to 32,000 devices    13                                     15
32,001 to 64,000 devices    14                                     16
a. Gatekeeper requirements are for Solutions Enabler processes required by Storage Agent for Symmetrix/Symmetrix SDM Agent (storapid, storsrvd, storevntd). Some other running Solutions Enabler processes may require additional gatekeepers. b. Gatekeeper requirements are for Solutions Enabler processes required by Storage Agent for Symmetrix/Symmetrix SDM Agent and SMC (storapid, storsrvd, storevntd, storsrpd). Some other running Solutions Enabler daemons may require additional gatekeepers.
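A lookup over the Table 22 tiers can be sketched as follows. The pairing of values to columns is an interpretation of the extracted table, so verify it against the printed document; the function itself is illustrative, not part of ControlCenter.

```python
# Tiers and gatekeeper counts as read from Table 22.
# Each entry: (upper bound on Symmetrix device count, without CLI scripts, with CLI scripts)
GATEKEEPER_TIERS = [
    (16000, 12, 14),   # 6,001 to 16,000 devices
    (32000, 13, 15),   # 16,001 to 32,000 devices
    (64000, 14, 16),   # 32,001 to 64,000 devices
]

def gatekeepers_needed(devices, cli_scripts=False):
    """Total gatekeepers for the Storage Agent for Symmetrix/Symmetrix SDM
    Agent plus SMC, by Symmetrix size tier; the second column adds CLI
    scripts (2 additional gatekeepers)."""
    for upper, base, with_cli in GATEKEEPER_TIERS:
        if devices <= upper:
            return with_cli if cli_scripts else base
    raise ValueError("device count outside the range covered by Table 22")
```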
Guideline A12
Know the scalability limit of the open-systems Storage Agent for Symmetrix proxy. One Storage Agent for Symmetrix can act as a proxy for up to four hosts (each running Solutions Enabler) that are connected to three medium Symmetrix arrays. The recommended dedicated hardware configuration is a single-core host. Table 23 lists the recommended DCP frequencies when the Storage Agent for Symmetrix in proxy mode manages varying numbers of Symmetrix arrays.
Table 23
Recommended DCP frequency for Storage Agent for Symmetrix in proxy mode Recommended Frequency
Default Frequency
2 Minutes 5 Minutes Daily 10 Minutes Daily Daily 2 Minutes 1 Minute 15 Minutes 2 Minutes
Guideline A13
Plan for at least two dedicated agent hosts to manage Symmetrix arrays with more than 32K devices. The Storage Agent for Symmetrix, when managing a Symmetrix array with 32K devices or more, has a resource footprint that is best managed by installing it on a dedicated agent host. By default, the Storage Agent for Symmetrix participates in failover, and if proper safeguards are not in place, management of a 32K- or 64K-device Symmetrix array may be assigned to agent hosts that are ill-equipped to handle the workload.
Ensure that only one 64K-device Symmetrix array, or two 32K-device Symmetrix arrays, are zoned-in to two hosts with the proper hardware specification (Guideline A10 on page 38). For example, HostA and HostB are both zoned-in to a 64K-device Symmetrix array. HostA is the primary for the Symmetrix array while HostB is secondary (active/passive). If HostA goes down, the ECC Server is forced to assign responsibility for managing the 64K-device Symmetrix array to HostB, as that is the only surviving host that can communicate with it. A 32K- or 64K-device Symmetrix array is likely to have remote Symmetrix arrays (R2). Use techniques like symavoid to exclude remote Symmetrix arrays from being discovered via the primary agent managing 32K- or 64K-device Symmetrix arrays. Increase the number of agent log files for a Storage Agent for Symmetrix managing 64K devices: to maintain 36 hours of historical log files, especially when running WLA DCPs with a 64K-device Symmetrix array, increase the log file count from 10 to 25. The log file size should remain the same. Increasing the log file count requires 45 MB of additional disk space.
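The extra disk space for the larger log file count works out as below. The 3 MB per-file size is inferred from the 45 MB figure for 15 extra files; the document does not state the per-file size explicitly.

```python
def extra_log_disk_mb(old_count=10, new_count=25, mb_per_file=3):
    """Additional disk space needed when raising the agent log file count
    for a 64K-device Symmetrix array. mb_per_file is an inferred value,
    not one stated in the guideline."""
    return (new_count - old_count) * mb_per_file
```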
Guideline A14
Provide additional memory when deploying SMC and the Storage Agent for Symmetrix on the same host. The following options are available for deploying Symmetrix Management Console (SMC) with ControlCenter.
Table 24
Additional host memory requirements for SMC and Storage Agent for Symmetrix Multi-core host with 2 GB RAM Minimum of SPEC CINT2000 Rate (baseline) = 123 or SPEC CINT2006 Rate (baseline) = 61 24,000 devices or 12 medium Symmetrix arrays whichever is smaller. Conditions: Installed on infrastructure host as described in Configuration #2: Multi-core host with infrastructure and agents Add 512 MB of memory to support SMC Or 64,000 devices or 20 Symmetrix arrays whichever is smaller Conditions: Support for 64,000 devices is available on Windows platform only. Add 768 MB of memory to support SMC
Single-core host with 2 GB RAM Minimum SPEC CINT2000 Rate (baseline) = 26 12,000 devices or 6 Symmetrix arrays whichever is smaller Conditions: Installed on a production host or infrastructure host as described in Configuration #1: Single-core host with infrastructure and agents Add 512 MB of memory to support SMC Or 24,000 devices or 12 Symmetrix arrays, whichever is smaller Conditions: Add 512 MB of memory to support SMC Or 40,000 devices or 20 Symmetrix arrays, whichever is smaller
Conditions:
Provide additional gatekeepers for SMC as per Guideline A11 on page 40. For more information regarding SMC scalability and additional disk space requirements, refer to the SMC documentation.
Guideline A15
Follow these guidelines when deploying the Storage Agent for CLARiiON on any infrastructure, dedicated, or production host. The agent DCP settings must comply with the guidelines shown in Table 26 on page 46, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large CLARiiONs. If your configuration contains a mixture of small, medium, and large CLARiiONs, use the DCP settings for the highest recommended configuration in Table 26 on page 46. For example, if your configuration has 5 small and 3 medium CLARiiONs, use the recommended DCP settings for 8 medium CLARiiONs. Refer to the discussion on disabling the WLA Revolving policy for the Storage Agent for CLARiiON in Guideline D3 on page 18.
If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each CLARiiON array. Set the frequency to 5 minutes for these DCPs.
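The mixed-configuration rule from Guideline A15 can be sketched as follows; the dict structure is illustrative only, not a ControlCenter API.

```python
SIZE_RANK = {"small": 0, "medium": 1, "large": 2}

def effective_clariion_config(counts):
    """Reduce a mixed CLARiiON configuration to the single row of Table 26
    to use: the total array count, treated as the largest size class
    present. For example, 5 small + 3 medium arrays are treated as
    8 medium arrays. `counts` maps size class to array count."""
    present = [size for size, n in counts.items() if n > 0]
    largest = max(present, key=SIZE_RANK.__getitem__)
    return sum(counts.values()), largest
```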
Table 25
Recommended number of managed objects for Storage Agent for CLARiiON deployment on hosts Configuration #2: Multi-core host with infrastructure and agents Large: 4 Medium: 8 Small: 14 Dedicated or shared single-core or multi-core host Large: 10 Medium: 14 Small: 18
Configuration #1: Single-core host with infrastructure and agents Large: 4 Medium: 8 Small: 14
The Storage Agent for CLARiiON managing medium and large CLARiiON arrays consumes significant CPU resources on the agent host. For example, on a single-core Windows host (with 2 GB of RAM), the Storage Agent for CLARiiON consumed 15% CPU and 160 MB of memory while managing 10 large or 14 medium CLARiiON arrays executing DCPs at recommended frequency, as provided in Table 26 on page 46. If this level of CPU utilization is unacceptable on a production host, install the Storage Agent for CLARiiON on a dedicated agent host. Managing a small CLARiiON would result in 2-10% CPU utilization on an agent host based on the count of CLARiiON arrays being managed, and thus may be installed on production host.
Table 26
Recommended DCP collection settings for CLARiiON arrays: Discovery and WLA Daily frequency; Small (up to 90 disks, 512 LUNs)
5 Minutes 5 Minutes 10 Minutes 10 Minutes 15 Minutes Once a day 20 Minutes 30 Minutes 60 Minutes Disabled Disabled
Guideline A16
Follow these guidelines when deploying the Storage Agent for NAS (Celerra) on any infrastructure, dedicated, or production host. The agent DCP settings must comply with the guidelines shown in Table 28 on page 47, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large Celerras. If your configuration contains a mixture of small, medium, and large Celerras, use the DCP settings for the highest recommended configuration in Table 28 on page 47. For example, if your configuration has 5 small and 3 medium Celerras, use the recommended DCP settings for 8 medium Celerras. Refer to the discussion on disabling the WLA Revolving policy for the Celerra agent in Guideline D3 on page 18. If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each Celerra. Set the frequency to five minutes for such DCPs.
Table 27
Recommended number of managed objects for Storage Agent for NAS (Celerra) deployment on hosts Configuration #2: Multi-core host with infrastructure and agents Large: 10 Medium: 14 Small: 16 Dedicated or shared single-core host Large: 8 Medium: 12 Small: 14 Dedicated or shared multi-core host Large: 10 Medium: 14 Small: 16
Configuration #1: Single-core host with infrastructure and agents Large: 8 Medium: 12 Small: 14
The NAS (Celerra) agent managing medium and large Celerras consumes significant CPU resources on the agent host. For example, on a single-core Windows host (with 2 GB of RAM), the NAS agent consumed 10-12% CPU and 300 MB of memory while managing 10 large or 14 medium Celerras executing DCPs at the recommended frequencies, as provided in Table 28. If this level of CPU utilization is unacceptable on a production host, install the NAS agent on a dedicated agent host. Managing small Celerras results in 2-12% CPU utilization on an agent host, based on the number of Celerras being managed, so the agent may be installed on a production host.
Table 28
1 2 4 6 8 10 12 14 16
Guideline A17
Follow these guidelines when deploying the Storage Agent for HDS on any infrastructure, dedicated, or production host. The agent DCP settings must comply with the guidelines shown in Table 30 on page 49, which shows recommended DCP frequencies for configurations that contain all small, all medium, and all large HDS arrays. If your configuration contains a mixture of small, medium, and large HDS arrays, use the DCP settings for the highest recommended configuration in Table 30 on page 49. For example, if your configuration has 5 small and 3 medium arrays, use the recommended DCP settings for 8 medium HDS arrays. Refer to the discussion on disabling the WLA Revolving policy for the Storage Agent for HDS in Guideline D3 on page 18. If WLA Revolving data is required for troubleshooting, consider creating individual WLA Revolving DCPs for each array. Set the frequency to five minutes for such DCPs.
Table 29
Recommended number of managed objects for Storage Agent for HDS deployment on hosts Configuration #2: Multi-core host with infrastructure and agents Large: 8 Medium: 14 Small: 20 Dedicated or shared single-core host Large: 8 Medium: 14 Small: 20 Dedicated or shared multi-core host Large: 10 Medium: 16 Small: 20
Configuration #1: Single-core host with infrastructure and agents Large: 4 Medium: 8 Small: 16
The Storage Agent for HDS managing medium and large arrays consumes significant CPU resources on the agent host. For example, on a single-core Windows host with 2 GB of RAM, the Storage Agent for HDS consumed 10-15% CPU while managing 8 large or 14 medium HDS arrays executing DCPs at recommended frequency, as provided in Table 30. If this level of CPU utilization is unacceptable on a production host, install the Storage Agent for HDS on a dedicated agent host.
Table 30
Minimum SPEC CINT2000 Rate (baseline) = 26 1 2 4 6 8 10 12 14 16 Once a day 5 Minutes 5 Minutes 10 Minutes 10 Minutes 15 Minutes 20 Minutes 30 Minutes 30 Minutes Disabled 5 minutes 5 Minutes 10 Minutes 15 Minutes 15 Minutes 30 Minutes 30 Minutes Disabled Disabled 10 Minutes 10 Minutes 15 Minutes 15 Minutes 30 Minutes Disabled Disabled Disabled Disabled
Guideline A18
Follow these guidelines when deploying the Storage Agent for Centera on any infrastructure, dedicated, or production host. The Storage Agent for Centera can manage up to 20 Centera content-addressed storage systems of any size when the agent is deployed according to Table 31.
Table 31
Recommended number of managed objects for Storage Agent for Centera deployment on hosts Configuration #2: Multi-core host with infrastructure and agents Large: 20 Medium: 40 Small: 60 Dedicated or shared single-core host Large: 20 Medium: 40 Small: 60 Dedicated or shared multi-core host Large: 20 Medium: 40 Small: 60
Configuration #1: Single-core host with infrastructure and agents Large: 20 Medium: 40 Small: 60
The Common Mapping Agent can discover and monitor (but not manage) databases and host configuration information locally on a single host or on remote hosts (via proxy). One Common Mapping Agent can replace several individual host and database agents if limited functionality is needed for the associated managed objects. However, to ensure proper performance and scalability, limit the number of managed objects per Common Mapping Agent and consider using host and database agents for large managed objects, as described in this section.
Guideline A19
Do not discover large hosts or large databases via proxy with the Common Mapping Agent. Discovery and rediscovery of large hosts and databases (refer to Table 2 on page 6 for details) using the Common Mapping Agent via proxy can take a very long time. For example, daily rediscovery of large hosts and databases with the Common Mapping Agent can take up to an hour. When large hosts are to be monitored, use the Host agent for the appropriate platform. Avoid monitoring large SQL Server, Sybase, Informix, and IBM DB2 database instances with the Common Mapping Agent. For details on deploying those agents, refer to Planning for Agents in the EMC ControlCenter 6.1 Planning and Installation Guide, Volume 1.
Guideline A20
Limit the number of managed objects per Common Mapping Agent. When the number of managed objects per Common Mapping Agent exceeds the counts listed in Table 32, deploy additional Common Mapping Agents to sustain optimum performance. For example, one Common Mapping Agent can manage up to a total count of 140 managed objects before a second Common Mapping Agent is needed. Discovery polling is performed during five off-peak hours.
Table 32
Note: The limitation is based on an agent host with minimum SPEC rate = 24 and 2 GB of memory.
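The agent-count rule in Guideline A20 reduces to a ceiling division, sketched here for illustration (not a ControlCenter API):

```python
import math

MAX_OBJECTS_PER_CMA = 140  # total managed-object ceiling from Guideline A20

def common_mapping_agents_needed(total_managed_objects):
    """Number of Common Mapping Agents to deploy so that no single agent
    exceeds the 140-managed-object total from Guideline A20."""
    return max(1, math.ceil(total_managed_objects / MAX_OBJECTS_PER_CMA))
```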
Host Agent for Windows allows you to effectively manage Windows servers. Workload Analyzer Archiver generates Performance Manager Automated Reports and processes performance data collected by individual ControlCenter agents. The Database Agent for Oracle allows you to manage Oracle databases. The VMware Agent manages ESX Servers.
Guideline A21
Ensure Watermark DCPs are not set below the default of 15 minutes for hosts with more than 1024 volumes and file systems. Starting with the 5.2 Service Pack 3 release, the Host agent has three Watermark DCPs, enabled by default, that collect capacity utilization information for volume groups, volumes, and file systems every 15 minutes (the default). The capacity utilization data for the day is sent to the Store along with the Daily Discovery transactions, which are typically scheduled once a day. Watermark data gathered from hosts is reported in various StorageScope reports. Ensure that Watermark DCP frequencies are not faster than 15 minutes for hosts with more than 1024 volumes and file systems. On a test Solaris 10 host (SunFire V240R, 2x1.5 GHz, 4 GB RAM) with 256 devices, 1024 volumes, and 1024 file systems, the WLA Daily, WLA Revolving, and Watermark DCPs (all at default frequencies) consumed a steady state of 4% CPU utilization. If this level of CPU utilization is not acceptable on a production host, disable the Watermark policies or reduce their frequency.
Guideline A22
Limit the number of managed objects per Workload Analyzer Archiver. Table 33 on page 52 shows the number of medium-sized managed objects that can be managed by a single Workload Analyzer Archiver when installed on a dedicated host or shared with other components on a single-core or multi-core host. Provide for 36 GB of storage.
Table 33
If one WLA Archiver manages more managed objects than shown in Table 33, assign the excess to a new dedicated (or shared) WLA Archiver. On the WLA Archiver host, search for yesterday's BTP file (for example, 20080708.btp) to get a count of the managed objects that the WLA Archiver manages. If the number of managed objects on the WLA Archiver host exceeds the count in Table 33, contact the Solutions Validation Center (svc@EMC.com) for assistance.
Guideline A23
Know the scalability limit of the Oracle agent (local and proxy). Manage no more than 50 medium instances when the Oracle agent is installed on the database host (local agent), assuming the database server is a single-core host. An Oracle agent running on such a database server needs approximately 25 MB of virtual memory. In proxy mode, a single Oracle agent can manage up to 30 medium database instances running on a database server. This limitation takes into account processing of WLA Daily.
Guideline A24
Limit the number of ESX Servers managed by the VMware Agent:
50 medium ESX servers when deployed on the Infrastructure hosts (e.g., single-host Infrastructure with agents)
100 medium ESX servers when deployed on an external Store, or on dedicated agent hosts
StorageScope Guidelines
This section provides guidelines specific to StorageScope technology. It is designed to answer common performance and scalability questions and to provide recommendations that improve the user experience and guide the placement of StorageScope components. EMC StorageScope technology changed significantly in ControlCenter 6.0: StorageScope data is now stored in a second Oracle database, the StorageScope Repository, separate from the ControlCenter Repository. Capacity and utilization data for managed objects is moved nightly from the ControlCenter Repository to the StorageScope Repository by a process called Extract-Transform-Load (ETL).
- Reporting on detailed file-level storage utilization metrics and attributes, including file type, owner, size, path, folder, volume, and host. This allows easy identification of hosts with large, rarely used, or non-business-related files that may be candidates for storage reclamation or migration.
EMC ControlCenter 6.1 Performance and Scalability Guidelines
- Unified drill-down views and built-in reports of VisualSRM, combined with the custom reporting flexibility of StorageScope 5.x, in a single unified GUI.
The StorageScope FLR license remains separately purchasable from the StorageScope base license. If the StorageScope FLR license is enabled, the Data Collection Policy called File Level Collection (referred to as the FLR DCP in the rest of this document) for Host agents can be scheduled to collect file and folder statistics on a nightly basis. The amount of data that the FLR DCP collects and stores in the StorageScope Repository depends on the selections the ControlCenter administrator makes in the DCP wizard. When installing StorageScope, keep in mind that the following factors affect its performance and scalability:
- ControlCenter installation size (small, medium, or large)
- ControlCenter configuration type (single-host, distributed, or single-host with agents)
- Number of files and folders in the file systems processed by the FLR DCPs
- Concurrency and types of FLR DCPs
- Scope of data collection. All Files and Folders retrieves much more information for the target file systems than Folders only. It also greatly increases the DCP processing time on the agent host and StorageScope server, the ETL duration, and the demand for disk space. Refer to Table 42 on page 68 for DCP performance information. Exceptional Files and Folders scans files and folders that are in the top <n> of the following categories, where <n> is any value specified (for example, files greater than 1 MB or files not accessed in more than 60 days). These file statistics are used for file summary reports in StorageScope. The categories are: oldest by create date, oldest by modified date, oldest by access date, largest by actual size, and largest by allocated size.
- System resources available on the StorageScope server host
- Time to complete the ETL process
- Data retention period of StorageScope data
Planning guidelines
Guideline P1 Determine managed object size based on Table 34.

For the FLR DCP, the managed objects are file systems. System resource utilization for the FLR DCP on the agent and StorageScope hosts depends on the file and folder counts of the file systems and on the type of DCP. The FLR DCP can collect folder-level information, or file- and folder-level information. The host on which StorageScope is installed requires significantly more system resources (CPU and disk space) to process and record a file- and folder-level DCP than a folder-only one. Use Table 34 on page 55 to determine the size of the managed objects as applicable to StorageScope. If the size of a managed object exceeds Large, contact the EMC Solutions Validation Center (svc@EMC.com) for additional recommendations.
Table 34 File system size classification (files/folders per file system)

Managed Object: File System
  Small:  124,000 files / 2,480 folders
  Medium: 376,000 files / 7,520 folders
  Large:  1,125,000 files / 22,500 folders
Note: Classification of file servers is based on the size and count of the file systems mounted on them:
- Small file server: 6 small file systems
- Medium file server: 4 medium file systems
- Large file server: 2 large file systems

Each folder is configured as follows, irrespective of the size of the file system or the file server:
- There are 50 files per folder
- 6 to 10 distinct file types (.txt, .jpg, and so on) are present in each folder

Guideline P2 Determine the appropriate placement of StorageScope.

Based on the factors discussed earlier, the following recommendations are made for placing StorageScope on an appropriate host, along with the associated deployment requirements.
Table 35 StorageScope placement by installation size

Installation Size | StorageScope | StorageScope with FLR
Small  | Install on Infrastructure (a) | Install on Infrastructure (a)
Medium | Install on Infrastructure (a) | Dedicated StorageScope host (up to 2000 file systems)
Large  | Install on Infrastructure (a) | Dedicated StorageScope host (up to 2000 file systems)
Note: When StorageScope is installed on a dedicated host, the number of file systems that it can scan on a nightly basis is independent of installation size and configuration type. The current scalability limit of StorageScope is a maximum of 2,000 file systems per night. Of those 2,000 file systems, All Files and Folders scans should be limited to a maximum of 160. Refer to Guideline D5, Distribute FLR DCP evenly, for how to schedule DCPs.
Guideline P3 Allocate disk space for the StorageScope host.

The disk space requirement for StorageScope depends on three factors:
- The number of managed objects present in the ControlCenter Repository
- The number of file systems scanned by FLR DCPs on a nightly basis
- The FLR Data Removal Schedule for FLR StorageScope data

Use Table 36 on page 57 to estimate the disk space requirement for StorageScope. Note that this estimate includes the StorageScope Repository, database backup, database export, and the StorageScope application with the default one year of trending data.
Table 36 StorageScope disk space estimate by configuration size

- 30 arrays, 330 hosts, 180 databases, 25 ESXs, and 24 switches
- 45 arrays, 550 hosts, 280 databases, 25 ESXs, and 28 switches
- 60 arrays, 680 hosts, 380 databases, 25 ESXs, and 44 switches
- 120 arrays, 1,400 hosts, 750 databases, 50 ESXs, and 88 switches
- 175 arrays, 2,050 hosts, 1,200 databases, 75 ESXs, and 132 switches
- 200 arrays, 2,500 hosts, 1,500 databases, 100 ESXs, and 156 switches
To compute the disk space requirement for FLR, use the following formulas:

All Files and Folders: One medium host (4 file systems with a total of 1.5 million files) requires 900 MB of disk space (400 MB in the SRM_SCANDATA tablespace and 500 MB in the cold backup folder).

Note: If more than 250 million file records are to be stored in the database as a result of All Files and Folders scans, you need to add data files to the SRM_SCANDATA tablespace.

Folders only: One medium host (4 file systems with a total of 30,080 folders) requires 280 MB of disk space (130 MB in SRM_SCANDATA, 150 MB in the cold backup folder).

Note: If more than 15 million folder records are to be stored in the database as a result of Folders only scans, you need to add data files to the SRM_SCANDATA and SRM_SCANIDX tablespaces. Refer to Guideline M1 on page 68 to monitor disk space usage of the data and index tablespaces.

Exceptional Files and Folders: The disk space requirement falls between those of the Folders only and All Files and Folders scans. If the number of exceptional files/folders to collect per category is kept low, disk space usage leans toward the Folders only formula.
By default, StorageScope FLR data is removed from the StorageScope Repository nightly. Use the formulas above to estimate the total disk space if you wish to retain StorageScope FLR data for a longer period for reporting (for example, Folders only data for 500 file systems on Monday, All Files and Folders data for 100 file systems on Tuesday, and Folders only data for 1,000 file systems on Wednesday).
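The per-medium-host figures above can be turned into a rough estimator. The sketch below is illustrative, not a ControlCenter utility; it assumes disk usage scales linearly with the number of medium hosts retained, which is an approximation (actual usage depends on real file and folder counts).

```python
# Rough FLR disk-space estimator based on the per-medium-host figures above.
# Assumption: usage scales linearly with the number of medium hosts retained.
MB_PER_MEDIUM_HOST = {
    "all_files_and_folders": 900,  # 400 MB SRM_SCANDATA + 500 MB cold backup
    "folders_only": 280,           # 130 MB SRM_SCANDATA + 150 MB cold backup
}

def flr_disk_estimate_mb(hosts_by_scan_type):
    """hosts_by_scan_type maps scan type -> number of medium hosts retained."""
    return sum(MB_PER_MEDIUM_HOST[scan] * n
               for scan, n in hosts_by_scan_type.items())

# Example: retain Folders only data for 10 medium hosts and
# All Files and Folders data for 2 medium hosts.
print(flr_disk_estimate_mb({"folders_only": 10, "all_files_and_folders": 2}))  # → 4600
```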
Installation guidelines
Guideline I1 Follow these recommendations when selecting a host for a dedicated StorageScope installation.

Table 37 Hardware specifications and component placement

Hardware: Multi-core host with 2 GB RAM; minimum SPEC CINT2000 Rate (baseline) = 123 or SPEC CINT2006 Rate (baseline) = 61

Components:
- StorageScope Repository
- StorageScope Web Application
- Master Agent
- Host Agent
- With 1 additional GB of RAM (3 GB total): Store, or Storage Agent for Symmetrix and Symmetrix SDM agent managing up to 40,000 devices or 20 Symmetrix arrays, whichever is lower
Guideline I2 Follow these recommendations to provide high availability for a dedicated StorageScope host.

When allocating disk space for the StorageScope Repository, consider these recommendations:

- For high availability, use mirrored (RAID 1) disks (keep in mind that this doubles the amount of required disk space).
- Other options for increasing system performance include the use of disk arrays such as Symmetrix and CLARiiON. Host-based mirroring is strongly discouraged.
Guideline I3 Follow these recommendations to ensure an optimal operating environment.

Perform these maintenance best practices at least once a month:
- Clean up temporary disk space
- Shut down the StorageScope components and run a disk defragmentation utility
Data Collection Policy guidelines

There are two data feeds into StorageScope. First, metrics, configuration, status, and usage data for the managed objects that ControlCenter agents manage are extracted from the ControlCenter Repository on a nightly basis and stored in the StorageScope Repository; the ETL process performs this data migration. Second, if the StorageScope FLR license is enabled, the Host Agent collects file and folder data for file systems on a nightly basis; these transactions are processed by the StorageScope Web Application (bypassing the ControlCenter infrastructure) and stored in the StorageScope Repository. This section provides recommendations for the various DCPs that impact StorageScope.
Guideline D1 Schedule Discovery DCPs so that they complete before ETL starts.

The ETL process extracts managed object information from the ControlCenter Repository, processes it, and loads it into the StorageScope Repository for trending and reporting. To obtain the most current information about the various managed objects (CLARiiON, third-party arrays, hosts, databases, virtual machines, and so on), schedule their nightly discovery DCPs so that they all complete before the scheduled start time of the ETL process. If you have a large installation, it may be necessary to delay the default start time (4:00 a.m.) of the ETL process. A quick and easy way to identify the duration of nightly discovery is to track the CPU utilization of the oracle.exe process on the ControlCenter Repository host using a utility such as Windows Performance Monitor. Discovery DCPs are typically scheduled between midnight and 5 a.m. daily, and all Discovery DCPs update the ControlCenter Repository. If
you observe that the CPU utilization of oracle.exe flattens out after 4:30 a.m., you know that scheduling ETL at 5:00 a.m. would allow the most up-to-date managed object information to be presented in the various StorageScope reports.
Note: Track the CPU utilization of the oracle.exe process before you schedule an ETL for the first time.
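The "flattens out" check in Guideline D1 can be applied to any series of CPU samples. The helper below is a hypothetical sketch, not a ControlCenter utility: given (minutes-past-midnight, CPU%) samples for oracle.exe, it returns the start of the final quiet stretch, after which utilization stays below a threshold. The function name and the 10% threshold are illustrative assumptions.

```python
def discovery_end_minute(samples, threshold=10.0):
    """Return the first sample time after which CPU% stays below threshold,
    or None if utilization never settles.
    samples: list of (minute_past_midnight, cpu_pct), sorted by time."""
    settle = None
    for minute, cpu in samples:
        if cpu >= threshold:
            settle = None      # still busy; discard any earlier candidate
        elif settle is None:
            settle = minute    # first quiet sample of the current quiet run
    return settle

# Discovery load tapering off shortly after 4:30 a.m. (270 minutes past midnight):
samples = [(240, 85.0), (255, 60.0), (270, 12.0), (285, 3.0), (300, 2.0)]
print(discovery_end_minute(samples))  # → 285
```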
Guideline D2 Schedule processing of custom reports after ETL completes.

Completion of the ETL process updates the StorageScope Repository with up-to-date managed object data and file/folder metrics. The ETL process duration depends on the count of managed objects in the ControlCenter Repository and the number of file systems scanned by the various FLR DCPs. Use the following table to estimate how long ETL takes to complete.
Table 38 ETL processing time estimates for a dedicated StorageScope server

ETL processing time (hh:mm): single-core host (minimum SPEC CINT2000 Rate (baseline) = 26) | multi-core host (minimum SPEC CINT2000 Rate (baseline) = 123 or SPEC CINT2006 Rate (baseline) = 61)

Small (up to 30 arrays, 330 hosts, 25 ESXs, 180 databases, and 24 switches):    00:15 | 00:14
Medium (up to 45 arrays, 550 hosts, 25 ESXs, 280 databases, and 28 switches):   00:26 | 00:22
Large (up to 60 arrays, 680 hosts, 25 ESXs, 380 databases, and 44 switches):    00:30 | 00:27
Large (up to 80 arrays, 1,400 hosts, 50 ESXs, 750 databases, and 64 switches):  00:59 | 00:58
Large (up to 175 arrays, 2,050 hosts, 75 ESXs, 1,200 databases, and 132 switches): 01:30 | 01:15
Large (up to 200 arrays, 2,500 hosts, 100 ESXs, 1,500 databases, and 156 switches): 01:54 | 01:27
Note: The current scalability limit of StorageScope is a maximum of 2,000 file systems per night. Add 6 hours to the ETL time for processing Folders only transactions for 2,000 file systems. All Files and Folders ETL time for 160 file systems is estimated to take an additional 4 hours. Add these durations to the processing times in Table 38 to arrive at the estimated total ETL time.
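The total-ETL-time arithmetic in the note above can be sketched as a small helper. This is illustrative only: the function name is invented, and scaling the FLR allowances linearly for partial loads is an assumption (the source only gives the full-load figures of 6 hours per 2,000 Folders only file systems and 4 hours per 160 All Files and Folders file systems).

```python
def total_etl_minutes(base_hh_mm, folders_only_fs=0, all_files_fs=0):
    """Estimate total ETL time in minutes: base time from Table 38 plus
    FLR allowances, scaled linearly (an assumption for partial loads)."""
    h, m = map(int, base_hh_mm.split(":"))
    base = h * 60 + m
    folders = 360 * folders_only_fs / 2000    # 6 hours for 2,000 file systems
    all_files = 240 * all_files_fs / 160      # 4 hours for 160 file systems
    return base + folders + all_files

# Large multi-core configuration (01:15) carrying the full nightly FLR load:
print(total_etl_minutes("01:15", folders_only_fs=2000, all_files_fs=160))  # → 675.0
```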
Guideline D3 To conserve system resources on agent hosts, schedule only one type of FLR DCP per file system.

The FLR DCP definition wizard allows the same file system to be assigned to more than one DCP. For example, if file system 1 on Host1 is already assigned to the Folders only FLR DCP, the user may also assign the same file system (file system 1 on Host1) to the All Files and Folders FLR DCP. If this happens, these consequences occur:
- The file server (agent host) incurs the processing cost of multiple FLR DCPs. Refer to Table 41 on page 67 for the system resource footprint of FLR DCPs.
Note: The StorageScope FLR license enables FLR functionality on host agents running ControlCenter version 6.0 and higher.
- The StorageScope Repository stores the last transaction, overwriting all previous transactions for the same file systems. If you schedule summary and detailed FLR at 3 a.m. and 1 a.m. respectively for the same file system (file system 1 on Host1), the StorageScope Repository records the summary-level information, overwriting the detailed information collected earlier in the night.
If these consequences are undesirable, do not assign multiple FLR DCPs to the same file system.

Guideline D4 Avoid overlapping FLR DCPs with scheduled tasks.

The StorageScope installation utility configures various maintenance tasks that ensure data recovery and routine upkeep of the StorageScope Repository. Avoid overlapping StorageScope processing tasks (ETL, FLR DCPs, and so on) with these important maintenance tasks so that they can complete on time.
Table 39 StorageScope maintenance task schedule

When scheduled:
- Daily, 11:00 p.m.
- Defined by the customer
- Weekly, Sunday 2:30 a.m.
- Weekly, Sunday 6:00 p.m.
- Weekly, Sunday 12:05 a.m.
- Weekly, Sunday 1:00 a.m.
- Weekly, Saturday 7:30 a.m.
- Weekly, Saturday 10:30 p.m.
- Last day of each month, 9:30 p.m.
The StorageScope Repository export process starts at 11 p.m. daily and may take 15-60 minutes depending on the size of the repository. During this time, the StorageScope Repository is available for transaction processing, but response time could degrade. Once the export has finished, the resulting .DMP file is compressed; depending on the size of the export file, the compression can run for 1-3 hours. Because the compression runs on an already exported external database file, the StorageScope Repository remains available to process transactions during this time. Schedule FLR DCPs to start at midnight. By default, a cold backup of the StorageScope Repository is scheduled every Sunday at 2:30 a.m. During the cold backup, the StorageScope Repository is not available for transaction processing, so do not schedule FLR DCPs on Sunday (uncheck Sunday in the DCP definition wizard; it is enabled by default).

Guideline D5 Distribute FLR DCPs evenly.

The FLR DCP collects different types and volumes of data depending on the selections made in the DCP definition wizard. To simplify assigning various file systems to a specific type of FLR DCP, you can create a separate collection definition for each type.
As an example, you may create the following three distinct FLR DCPs from the standard template and assign appropriate file systems to them.
- Summary-only: Scope of data collection = Folders only; Collect summary information on file types = No. Appropriate for utilization reporting, trending, and quick identification of under- or over-utilized file systems.
- Summary with file type: Scope of data collection = Folders only; Collect summary information on file types = Yes. In addition to the information collected above, utilization by file type (multimedia, audio, video, log, executable, and so on) is collected. Information retrieved by this type of FLR DCP can assist in planning storage provisioning or reclamation.
- Detailed: Scope of data collection = All Files and Folders; Collect summary information on file types = Yes. This collects all the information needed to troubleshoot issues with file systems.
The FLR DCP for all supported Host agent platforms is scheduled at 2 a.m. by default. To avoid resource contention on the StorageScope host (either dedicated or collocated with the ControlCenter Infrastructure), spread FLR DCPs evenly over an extended period starting at midnight. Schedule summary-level DCP start times (Folders only, with or without file type summary) evenly between midnight and 3 a.m., and schedule detail DCPs (All Files and Folders with file type summary) to start at 4 a.m. Following these recommendations spreads the processing load over an extended period. This applies to StorageScope collocated on the ControlCenter infrastructure hosts as well as installed on a dedicated host. Schedule a maximum of 250 hosts (or 1,000 file systems) in a Folders Only DCP; the StorageScope server is expected to take an hour to complete processing of these transactions. If more than 250 hosts are to be scheduled for Folders Only DCPs, create multiple DCPs, assigning up to 250 hosts (or 1,000 file systems) to each.
Note: To cover 2,000 file systems nightly, you can schedule 1,000 file systems in a first DCP and the other 1,000 file systems in a second DCP.
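Splitting hosts across multiple DCPs, as recommended above, is simple arithmetic. The sketch below is a hypothetical helper (not part of ControlCenter) that divides a host population into evenly sized batches under a per-DCP cap; 250 is the Folders Only limit from this guideline, and the cap can be lowered (for example, to 40 for All Files and Folders DCPs).

```python
import math

def dcp_batches(n_hosts, max_hosts_per_dcp=250):
    """Split hosts into the fewest DCP batches of at most max_hosts_per_dcp,
    with batch sizes differing by at most one host."""
    n_dcps = math.ceil(n_hosts / max_hosts_per_dcp)
    base, extra = divmod(n_hosts, n_dcps)
    return [base + (1 if i < extra else 0) for i in range(n_dcps)]

print(dcp_batches(500))                        # → [250, 250]
print(dcp_batches(600))                        # → [200, 200, 200]
print(dcp_batches(90, max_hosts_per_dcp=40))   # → [30, 30, 30]
```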
Schedule a maximum of 40 hosts (or 160 file systems) in an All Files and Folders DCP; expected processing time is 2 hours on the StorageScope server. StorageScope server processing time does not include the time the FLR DCP takes on agent hosts; refer to Table 42 on page 68 for estimates of DCP processing times for the various FLR DCPs on agent hosts. Distributing FLR DCPs over time also spreads the transaction payload evenly across the customer network.

Guideline D6 Use filters to reduce the volume of information processed and stored in the StorageScope Repository.

Consider the following settings in the FLR DCP to reduce processing and storage costs:
- Set the Ignore files smaller than option to a reasonable value (say, 1 MB).
- Use Collect file owner information only when needed. On the Windows platform, collecting file owner information takes additional time for a medium file system (03:23 minutes compared to 01:42 minutes). On UNIX platforms, the additional cost is only marginal.

Guideline D7 Consider alternatives to All Files and Folders FLR DCPs whenever possible.

The All Files and Folders DCP can drastically impact system performance while gathering data that is not very useful except in specific circumstances. It is best to run the Folders Only DCP on a regular basis and create an All Files and Folders scan only when trouble spots are identified. It may be preferable to perform an Exceptional Files and Folders scan instead of an All Files and Folders scan. The Exceptional Files and Folders scan reports on large and rarely used files based on the criteria defined in the DCP; these files may be candidates for storage reclamation or migration. The five Exceptional categories are predefined in the system. If the same file belongs to multiple categories, only one record is written to the StorageScope Repository. For example, suppose an Exceptional Files and Folders DCP is assigned to Host A, which has file systems C:, D:, and E:, each with 100,000 files, and the number of exceptional files per category is set to 1,000. At the end of the DCP execution, the following counts of files are stored in the StorageScope Repository:
- Host A, file system C: 1,000-5,000 files
- Host A, file system D: 1,000-5,000 files
- Host A, file system E: 1,000-5,000 files
By using Exceptional Files and Folders scans on these three file systems, a maximum of only 15,000 file records is written to the StorageScope Repository, instead of the 300,000 an All Files and Folders scan would have written. The processing workload on the StorageScope server for the Exceptional FLR DCP is very close to, or slightly higher than, that of the Folders only DCP. Refer to the Folders only recommendations in Guideline D5 on page 62 for scheduling the Exceptional Files and Folders DCP.
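The worked example above generalizes: per file system, an Exceptional scan writes at most (categories x per-category limit) records, and at least (per-category limit) records when the same files top every category and are deduplicated. A hypothetical sketch of those bounds (the function name is invented for illustration):

```python
# The five predefined Exceptional categories: oldest by create/modified/access
# date, largest by actual/allocated size.
CATEGORIES = 5

def exceptional_record_bounds(file_systems, per_category_limit):
    """Best-case and worst-case record counts written to the StorageScope
    Repository. Duplicates across categories are stored once, which gives
    the lower bound."""
    best = file_systems * per_category_limit
    worst = file_systems * CATEGORIES * per_category_limit
    return best, worst

# Host A: three file systems, 1,000 exceptional files per category.
print(exceptional_record_bounds(3, 1000))  # → (3000, 15000)
```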
Upgrade Guidelines
This section contains guidelines for upgrading to StorageScope 6.1 from previous versions. Review the StorageScope Data Migration Utility section in the EMC ControlCenter 6.1 Upgrade Guide for a discussion of the prerequisites, usage, and troubleshooting of the migration utility.

Guideline U1 Upgrade from previous versions of StorageScope in batches.

StorageScope 6.1 includes a utility (STSMigrate.bat) that migrates the old 5.2.x XML repository to the new Oracle-based StorageScope Repository. If you are migrating a very large amount of data (say, 3 years of XML data for a medium or large ControlCenter configuration), migrate the data in stages (for example, one year at a time).

Guideline U2 Allow sufficient time to complete the migration.

Migration of XML data to the StorageScope Repository is a one-time process. It may take up to 5 hours to complete the migration for a medium or large installation with 3 years of StorageScope data.
Networking Guidelines
This section provides guidelines for StorageScope component placement in a distributed environment.

Guideline N1 Minimize the impact of network latency on StorageScope.

As network latency among the ControlCenter Infrastructure, agents, and StorageScope increases, processing time goes up, transactions may time out, and application reliability may degrade. Follow these recommendations when deploying StorageScope in a distributed configuration:
- Ensure that the network latency between the ControlCenter Repository and the dedicated StorageScope host does not exceed 8 ms (milliseconds). Try to keep these two hosts on the same LAN segment.
- Ensure that the network latency between the dedicated StorageScope host and the hosts executing the FLR DCP does not exceed 200 ms. The minimum bandwidth is 386 Kbps with 50% average utilization.
- Keep the network latency between the SRM Console and the StorageScope host at or below 50 ms for optimum user response time.
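The three limits above can be captured in a small validation helper. This sketch is hypothetical (the link names and function are invented for illustration); measure the actual round-trip latencies with your own tooling, such as ping.

```python
# Maximum round-trip latencies from Guideline N1, in milliseconds.
LATENCY_LIMITS_MS = {
    "repository_to_storagescope": 8,
    "storagescope_to_flr_host": 200,
    "srm_console_to_storagescope": 50,
}

def latency_violations(measured_ms):
    """Return the links whose measured latency exceeds its Guideline N1 limit."""
    return sorted(link for link, ms in measured_ms.items()
                  if ms > LATENCY_LIMITS_MS[link])

print(latency_violations({
    "repository_to_storagescope": 5,
    "storagescope_to_flr_host": 250,   # too slow for FLR transactions
    "srm_console_to_storagescope": 30,
}))  # → ['storagescope_to_flr_host']
```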
Guideline N2 Consider the impact of the FLR DCP on network latency and bandwidth.

Table 40 lists the resulting payload when the FLR DCPs execute on a medium file system.

Table 40 FLR DCP payload

Data Collection Policy | Approximate payload*
- FLR (Scope of data collection = Folders only; Collect summary information on file types = No): 1 MB
- FLR (Scope of data collection = Folders only; Collect summary information on file types = Yes): 2 MB
- FLR (Scope of data collection = All Files and Folders; Collect summary information on file types = Yes): 443 MB
- FLR (Scope of data collection = Exceptional Files and Folders; Collect summary information on file types = Yes; number of exception files/folders to collect per category = 50,000): 77 MB

*This volume of data is transferred between the Host agent and the StorageScope server per medium file system.
Depending on the time synchronization of the FLR DCPs and the number of file systems being polled, a sudden surge of payload can be placed on the network. If this is undesirable, spread the DCPs evenly over time. This is especially recommended when Host agents are deployed at locations served by networks with high latency and low bandwidth.
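At the minimum link from Guideline N1 (386 Kbps with 50% average utilization), the Table 40 payloads translate into substantial transfer times per medium file system. The back-of-the-envelope sketch below is illustrative only; it assumes half the link capacity is free (one reading of "50% average utilization") and treats 1 MB as 8 x 10^6 bits for simplicity.

```python
def transfer_minutes(payload_mb, link_kbps=386, utilization=0.5):
    """Minutes to move payload_mb over a link_kbps link when only the
    (1 - utilization) share of the bandwidth is available."""
    bits = payload_mb * 8e6                       # 1 MB ~ 8e6 bits (approximation)
    effective_bps = link_kbps * 1e3 * (1 - utilization)
    return bits / effective_bps / 60

# All Files and Folders payload (443 MB) on the minimum recommended link:
print(round(transfer_minutes(443)))  # → 306 (roughly five hours)
```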
Table 41
Console Guidelines

Guideline C2 Limit the number of concurrent active SRM Consoles to ten.

An SRM Console session is considered active if a user is using it to request or view snapshots, reports, and so on.
Note: Active ControlCenter Java and Web Console sessions do not influence the performance of StorageScope. Likewise, the SRM Console does not impact the performance of the ControlCenter Infrastructure, agents, or Consoles.
Agent Guidelines
Guideline A1 Use agent resource utilization to help schedule DCPs.

Table 42 shows the system resource utilization of the Host agent executing the FLR DCP on a medium file server. Use this information to schedule the FLR DCPs efficiently based on your resources. Tests were conducted on hosts with the following configurations:

- Windows: 2x3.0 GHz processors with 2 GB RAM, running Windows 2003 Enterprise Edition SP1
- Solaris: 2x1.5 GHz processors with 4 GB RAM, running Solaris 10 (SunFire V240)
- HP-UX: 2x1.0 GHz processors with 4 GB RAM, running HP-UX B11.11 (rp3440)
- AIX: 2x1.89 GHz processors with 4 GB RAM, running AIX 5.3 (eServer P5 model 510)
Table 42 Host agent system resource utilization during FLR data collection

Agent type: Host Agent. Each DCP runs once daily at 2 a.m. Duration (mm:ss) by platform:

FLR (Scope of data collection = Folders only; Collect summary information on file type = No):
  Windows 01:57, Solaris 06:06, HP-UX 04:16, AIX 05:00, Linux 08:49
FLR (Scope of data collection = Folders only; Collect summary information on file type = Yes):
  Windows 02:09, Solaris 05:49, HP-UX 04:35, AIX 05:14, Linux 07:32
FLR (Scope of data collection = All Files and Folders; Collect summary information on file type = Yes):
  Windows 08:00, Solaris 07:47, HP-UX 09:37, AIX 06:09, Linux 14:06
FLR (Scope of data collection = Exceptional Files and Folders; Collect summary information on file type = Yes):
  Windows 04:18, Solaris 07:00, HP-UX 04:53, AIX 05:05, Linux 11:23
Maintenance Guidelines
Guideline M1 Routinely review storage utilization by the StorageScope Repository.

The emcsts_tbspusage.bat file runs automatically each day between 7:00 p.m. and 7:00 a.m. Check the rpt_emcsts_tbspusage.log file routinely to ensure that your StorageScope Repository is not running out of storage. Use the ETL Data Removal Schedule to strike the appropriate balance between the demand for disk space and the availability of file-level data for reporting. You can also use this scheduler to purge data on demand, freeing disk space to avoid potential out-of-disk-space situations.
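The routine review in Guideline M1 lends itself to a small alerting helper. The sketch below is hypothetical: the 85% threshold is an arbitrary illustrative choice, and the usage dictionary stands in for values you would parse yourself from rpt_emcsts_tbspusage.log (whose actual format is not documented here).

```python
def nearly_full(tablespaces, threshold_pct=85.0):
    """Given {tablespace: used_pct} parsed from the usage report, return
    the tablespaces at or above the alert threshold."""
    return sorted(name for name, pct in tablespaces.items()
                  if pct >= threshold_pct)

# Values as they might be parsed from the nightly usage report (hypothetical):
usage = {"SRM_SCANDATA": 91.2, "SRM_SCANIDX": 64.0, "STS_DATA": 85.0}
print(nearly_full(usage))  # → ['SRM_SCANDATA', 'STS_DATA']
```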
Hardware configuration examples

Part:
- SPEC Score
- Base Unit
- Processor
- Second Processor
- Memory
- Hard Drive
- Hard Drive Controller
- Floppy Disk Drive
- Operating System
- NIC
- CD-ROM or DVD-ROM Drive
- Additional Storage Products
- Miscellaneous
Understanding SPEC
What is SPEC
The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from its member organizations and other benchmark licensees.

What is a benchmark

A computer benchmark is typically a computer program that performs a strictly defined set of operations (a workload) and returns some form of result (a metric) describing how the tested computer performed. Computer benchmark metrics usually measure speed (how fast the workload was completed) or throughput (how many workload units were completed per unit of time). Running the same computer benchmark on multiple computers allows a comparison to be made.

The basic SPEC methodology provides the benchmarker with a standardized suite of source code, based on existing applications, that has already been ported to a wide variety of platforms by the SPEC membership. The benchmarker takes this source code, compiles it for the system in question, and can then tune the system for the best results. The use of already accepted and ported source code greatly reduces the problem of comparing unlike items. SPEC has benchmarks for CPUs, mail servers, web servers, file systems, and more. Refer to www.spec.org for more information.
Note: ControlCenter uses the CPU benchmark.
If you plan to compare your host's performance with the equipment used in the lab to measure ControlCenter performance and scalability, the information on the SPEC website (www.spec.org) helps. Two distinct CPU benchmark results are available on this website. Hosts using older technology (for example, single-core hosts) are benchmarked using SPEC CPU2000; that benchmark was retired in February 2007, and a newer benchmark (SPEC CPU2006) is used by most hardware vendors. Follow these steps to run the comparison:

1. Navigate to the SPEC website at www.spec.org.
2. From the top navigation bar, choose "Result," "CPU2006," then "Search CPU2006 Results."
3. From the drop-down list of Available Configurations, select "SPECint2006 Rates."
4. Select the "Advanced" radio button for Search Form Request and click "Go!"
5. Fill out the form with the appropriate information (for example, Hardware Vendor, System, # Cores, # Chips, # Cores Per Chip).
Note: In SPEC nomenclature, the term for "Processor" is "Chip".
6. Submit the search by clicking "Fetch Results" at the bottom of the page.
7. The value listed in the Baseline column for your host is the value to compare against the minimum specified in this document.

If the CPU2006 benchmark is not available, try the CPU2000 benchmark. The steps to find results for the CPU2000 benchmark are the same as for CPU2006.
Note: Results for CPU2000 cannot be compared with or extrapolated to CPU2006, and vice versa.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. All other trademarks used herein are the property of their respective owners.