Group Administration
PS Series Firmware Version 5.0
Copyright 2010 Dell, Inc. All rights reserved.
All trademarks and registered trademarks mentioned herein are the property of their respective owners.
June 2010
Preface
This document describes PS Series group functionality and management operations. It describes how you use the
Group Manager graphical user interface (GUI) to perform tasks.
For information about using the command line interface (CLI) to manage a group, see the CLI Reference manual on
the EqualLogic customer support Website.
Audience
This documentation is for administrators responsible for managing a PS Series group or SAN storage, and for individuals interested in learning more about group operation. You can manage a PS Series array without extensive network or storage experience, although it helps to understand basic network and storage concepts.
Note: Although this documentation includes examples of PS Series arrays in some common network
configurations, information about setting up a network is beyond the scope of this manual. Go to the
EqualLogic customer support Website to find more information about network requirements.
Organization
Part I: General Group Management:
• Chapter 1, Storage solutions for all enterprises, includes an introduction to PS Series arrays, group features,
and functionality.
• Chapter 2, Common group tasks, describes some common post-setup tasks that Dell recommends.
• Chapter 3, Group Manager user interfaces, describes the user interfaces for managing a group and how to
access online help.
• Chapter 4, Group security, describes how to use accounts to protect your group from unauthorized access and
how to set up a dedicated management network.
• Chapter 5, Group configuration, describes how to modify the basic group configuration.
• Chapter 6, Group members, describes how to set the RAID policy for a member and configure the member's
network interfaces.
• Chapter 14, Group event logging, describes how the group logs events.
• Chapter 15, Group monitoring, describes how to monitor events, interpret status, identify and solve problems,
and monitor performance statistics.
In addition, this guide includes a Glossary of terms used in PS Series group management and related product
offerings from EqualLogic.
Conventions
Documentation conventions are shown in the following table.
Document Conventions
Convention Usage
fixed width Command, parameter, output, file name, link, button, field, URL address, or e-mail
address.
group_ip_address Variable. Replace the text in italics with the actual object name or identifier.
Note: Text . . . Important information that you should read.
Requirement: Text . . . Information relating to one or more requirements for performing a task.
Recommendation: Text . . . Information relating to one or more Dell recommendations for performing a task.
Restriction: Text . . . Information relating to one or more restrictions for performing a task.
PS Series arrays optimize resources by automating performance and network load balancing. Additionally, PS
Series arrays offer all-inclusive array management software, host software, and free firmware updates. The features
and products described next are available at no additional cost.
Current Customers: You may not be running the latest versions of the software listed above. If you are under a valid
warranty or support agreement for your PS Series array, you are entitled to obtain the latest updates and new
releases as they become available.
Related Documentation
For detailed information about PS Series arrays, groups, volumes, array software, and host software, see the
documentation listed in the following figure:
Contacting Dell
Dell provides several online and telephone-based support and service options. Availability varies by country and
product, and some services might not be available in your area.
Note: If you do not have access to an Internet connection, contact information is printed on your invoice, packing
slip, bill, or Dell product catalog.
Use the following procedure to contact Dell for sales, technical support, or customer service issues:
1. Visit support.dell.com or the Dell support URL specified in information provided with the Dell product.
2. Select your locale. Use the locale menu or click on the link that specifies your country or region.
3. Select the required service. Click the “Contact Us” link, or select the Dell support service from the list of
services provided.
4. Choose your preferred method of contacting Dell support, such as e-mail or telephone.
Online Services
You can learn about Dell products and services using the following procedure:
1. Visit www.dell.com (or the URL specified in any Dell product information).
2. Use the locale menu or click on the link that specifies your country or region.
1 Storage solutions for all enterprises
PS Series storage arrays provide consolidated storage in a self-managing, iSCSI (Internet Small Computer System
Interface) storage area network (SAN). Featuring automated management and fast, flexible scalability, PS Series
arrays can greatly decrease the total cost of storage acquisition and management.
Today, large and small businesses alike are under pressure to manage rapidly growing storage environments. Some
still use storage directly attached to servers, which can be a difficult configuration to scale and which offers low
service levels. The EqualLogic storage solution provides the benefits of storage consolidation to enterprises of all
sizes and types.
To use storage resources efficiently and economically, you can consolidate disparate storage configurations, making
administration easier and increasing service levels. The EqualLogic storage solution also provides centralized data
provisioning, backup, and disaster recovery features to simplify SAN management.
You can connect multiple PS Series arrays as a group to an IP network and manage them as a single storage system,
while leveraging your current network infrastructure. With PS Series arrays, you can deploy a full-featured iSCSI
SAN. Access to storage requires only an iSCSI initiator, which is available with most operating systems.
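For example, on a Linux host running the open-iscsi initiator, discovering and logging in to a group volume might look like the following sketch. The group IP address and target IQN are illustrative placeholders, not values from this manual:

```shell
# Discover iSCSI targets exported by the group (address is a placeholder).
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to a discovered target; the IQN below is illustrative only.
sudo iscsiadm -m node \
  -T iqn.2001-05.com.equallogic:0-8a0906-example-volume \
  -p 192.168.1.10:3260 --login
```

After login, the volume appears to the operating system as a local disk.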
Virtualization technology masks the underlying complexity of the storage configuration, saving you time and
effort. Arrays share configuration data and “collaborate” to achieve automatic data provisioning and load
balancing.
Self-managing features of PS Series arrays enable straightforward configurations that do not disrupt running
applications. For example, you can add more arrays to a group to seamlessly increase capacity and performance.
You can automatically replicate data over long distances for a simple, yet robust, disaster recovery strategy.
Administration accounts provide security for group management operations. You can set up a dedicated
management network for additional security, if required.
By default, a group provides a single pool of storage. If you have multiple members, you can divide group space
into different storage pools and then assign members. Pools help you organize storage according to usage and give
you a way to allocate resources from a single-system management view.
You create volumes to access storage space in a pool, and you can modify volume size and attributes on-demand.
Cloning a volume enables you to make an exact copy of the volume.
Figure 1-1 shows a three-member group (three network connections on each member), with members and volumes
distributed across two pools, and two servers, with iSCSI initiators accessing the volumes.
Modular hardware
A PS Series group can easily adjust to workload requirements; therefore, administrators purchase hardware only
when necessary. New array models fully interoperate with previous models.
Depending on the model, a PS Series storage array provides the following features:
• Support for standard Ethernet networks. Only one IP network connection is necessary for array operation.
You can configure all the network interfaces for maximum bandwidth. You do not have to train administrators
in unfamiliar and complex technologies such as Fibre Channel. Also, high-volume production and intense
competition among Ethernet hardware vendors decrease the cost of ownership.
• Easy and online expansion. You can expand SAN capacity with no disruption to users and applications. Old
array models can work in groups with new models, ensuring the viability of your initial purchase.
• Optional management connectivity through a serial port. If network connectivity is lost, a serial
connection—providing full management capabilities—lets you connect the array to a console, terminal server,
or computer running a terminal emulator and manage the storage.
• SAS, SATA, and SSD disk drives. A variety of disk drive speeds and capabilities enable you to optimize your
storage environment to meet your business needs.
Interoperability
PS Series arrays are ideal for multi-platform, heterogeneous environments that previously required a different
storage system for each operating system or application. Because a PS Series group provides block-level storage,
computers can use a standards-compliant iSCSI initiator—available for most major operating systems—to access
data. When a computer connects to a volume, it looks like a regular disk that you can format using the normal
operating system utilities.
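As a brief illustration, once a Linux host connects to a volume, the new disk can be formatted and mounted with ordinary utilities. The device name /dev/sdb and the mount point below are assumptions; actual names vary by system:

```shell
# Identify the newly attached disk (the device name varies by host).
lsblk

# Create a filesystem and mount it, as with any local disk.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/psvol
sudo mount /dev/sdb /mnt/psvol
```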
You can configure arrays and groups by using a serial connection to an array and running the setup utility.
Alternatively, Windows computers can run the Remote Setup Wizard to configure an array and set up access to the
SAN. See Host-based applications on page 1-6 for more information.
Password-protected management accounts give easy, secure access to the group. Graphical and command-line user
interfaces give you a single-system view of the storage. With the Group Manager graphical user interface (GUI),
creating and managing volumes, configuring security, and setting up event notification are simple and intuitive
operations. You can access the same functionality by using the Group Manager command-line interface (CLI)
through telnet, SSH, or a serial connection.
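For example, an SSH session to the group CLI might be opened as follows; the group IP address is a placeholder:

```shell
# Connect to the group (or management) IP address with SSH and log in
# with a group administration account, such as grpadmin.
ssh grpadmin@192.168.1.10
```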
• Automatic RAID configuration and data provisioning. Administrators do not have to manually create
RAID sets or map data onto disks or individual network interfaces. Whether you are expanding a group or
creating, expanding, or deleting volumes, the group manages storage allocation and load balancing across the
resources in the group. iSCSI access to volumes continues without interruption.
• Automatic spare disk configuration and use. A PS Series array can include spare disks, which the array
automatically configures and uses in the event of a disk failure, offering “hands-off” storage management.
• Automatic event notification. A PS Series group uses standard event logging mechanisms and can
automatically notify users of significant events through e-mail, remote syslog servers, or SNMP traps.
A group provides three types of load balancing within the arrays in a storage pool:
• Capacity load balancing. The group distributes volume data across disks and members, based on capacity.
• Performance load balancing. The group tries to store volume data on members with a RAID configuration
that is optimal for volume performance, based on internal group performance metrics.
• Network connection load balancing. The group distributes iSCSI I/O across network interfaces, minimizing
I/O contention and maximizing bandwidth.
Password-protected administration accounts prevent unauthorized users from accessing the group. The grpadmin
account is the default administration account. You can create additional accounts with a variety of privileges.
You can set up and authenticate administration accounts locally in the group or through a RADIUS (Remote
Authentication Dial-In User Service) server. Environments that require additional security can use a dedicated
management network, which enables you to separate management traffic from iSCSI traffic.
You can restrict iSCSI access to volumes according to IP address, iSCSI initiator name, or Challenge Handshake
Authentication Protocol (CHAP) user name. You can set up and authenticate CHAP accounts locally in the group
or through a RADIUS (Remote Authentication Dial-In User Service) server.
• Online scalability. You can expand array capacity or overall group capacity while online and with no effect on
users or availability. Hot-swappable hardware means that you can replace failed components while data stays
online.
• Multiple storage pools. In a PS Series group, you can divide storage into multiple pools. This helps you
organize storage according to usage, offering more control over resource allocation, while giving you a single
system management view. You get the advantages of storage consolidation and the ability to easily segregate
different workloads.
• Thin provisioning. To use storage more efficiently, you can apply thin provisioning to a volume. Thin
provisioning lets you over-allocate group space to satisfy current and future storage requirements. This
functionality can be useful if an operating system or application cannot handle online volume expansion.
To an application or operating system, a thin-provisioned volume is fully allocated. However, the group
initially allocates only a portion of the volume size. As you use the volume, the group automatically allocates
more space, with no disruption in availability.
• Thin clones. To use storage space efficiently, you can create multiple thin clone volumes that are based on a
template volume. You can write data to each thin clone to make it unique. Only the data that is unique to a thin
clone is consumed from free pool space. Thin clones can be beneficial in storage environments that use
multiple volumes that contain a large amount of common data.
• Volume collections. You can organize volumes into collections. This lets you perform a snapshot or replication
operation on all the volumes in the collection simultaneously.
• Scheduled operations. You can set up schedules to create volume snapshots or replicas at a specific time in
the future or on a regular basis.
• Cloning. Cloning a volume creates a new volume with the same size, contents, and attributes as the original
volume. Cloning is useful when you need a copy of an existing volume, such as when you are deploying copies
of a computer or database. When compared to traditional copy or restore operations, cloning dramatically
decreases the time required to make complete copies of a volume. You can also clone a snapshot or a replica,
creating a new volume with the same contents as the volume at the time you created the snapshot or replica.
• Snapshots. A snapshot is a point-in-time copy of volume data. Snapshots greatly simplify and increase the
performance of backup and recovery operations. You can create snapshots on-demand and through schedules.
To recover data from a snapshot, you can set the snapshot online, connect to the target, and copy the data. You
can also restore the entire volume from a snapshot or clone a snapshot. Cloning a snapshot creates a volume
containing the same data that was in the volume at the time the snapshot was created.
• Replication. By replicating volumes from one group to another, you can set up a simple, yet robust disaster
recovery strategy. Groups can be in the same building or a large distance apart.
A replica represents the contents of a volume at a specific point in time. You can create replicas on-demand and
through schedules.
• Failover and failback. If a volume is destroyed, you can fail over to the recovery group and recover data from
a replica. Users can then resume access to the recovery volume. When the original volume becomes available,
you can fail back to the original group.
Host-based applications
Host-based applications facilitate access to PS Series group storage and expand group capabilities at no extra cost.
These applications include user interfaces that provide interaction between the computer and the group. See
Overview of EqualLogic Products on page xiv.
Part I: Managing Groups
2 Common group tasks
When you use the setup utility or the Remote Setup Wizard to create a new PS Series group with one or more
members, you have a fully functioning group with many features.
Dell recommends you perform the common post-setup tasks listed here. See About the Group Manager GUI on
page 3-1 for information about logging in to the group.
Group time is based on the first member's clock. Each array you add updates its clock to match the group's date
and time. The group date and time determine the timestamp used to identify some objects (snapshots, for
example) that you create on the group.
You can change the group time manually (see Modifying the time zone and clock time on page 5-3).
You can also specify that the group use an external Network Time Protocol (NTP) server to automatically set the
same time for all members (see Setting the time through an NTP server on page 5-4).
See Group event logging on page 14-1 and select one or more notification methods to automatically receive
notification when events occur:
• E-mail notification. If an event occurs, the group automatically sends a message to designated e-mail
addresses. See Configuring E-Mail notification on page 14-3.
• E-Mail Home. If a hardware component fails or if you update firmware, the group automatically notifies
customer support. E-Mail Home is available to all PS Series customers, but response time and assistance are
based on the validity and level of your support contract. See Configuring E-Mail home on page 14-4.
• Remote server logging. The group logs events to a remote server at the syslog facility. You can also access
events from the syslog server. See Configuring syslog notification on page 14-5.
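As a sketch, if the remote server runs rsyslog (an assumption; the group only requires a standard syslog listener, and directive syntax varies by syslog implementation and version), reception of events over UDP port 514 might be enabled like this:

```shell
# Enable UDP syslog reception on the remote server (rsyslog assumed).
sudo tee /etc/rsyslog.d/10-remote.conf >/dev/null <<'EOF'
module(load="imudp")
input(type="imudp" port="514")
EOF

# Restart the service so the new input takes effect.
sudo systemctl restart rsyslog
```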
You can change the default values to meet the needs of your environment.
See Setting the RAID policy and pool for a new member on page 6-6.
For network configuration requirements and recommendations, see the Hardware Maintenance manual for your
array model.
To configure member network interfaces, see Configuring network interfaces on page 6-9.
3 Group Manager user interfaces
PS Series arrays provide simple, yet robust user interfaces for creating, expanding, and managing groups. The
graphical user interface (GUI) and the command-line user interface provide virtually identical functionality.
You can run the Group Manager GUI in either of two ways:
• From a Web browser. In addition to ports 3002 and 3003, the GUI uses the standard HTTP port (80).
• By installing the GUI on a local computer and running it as a standalone application. See Installing and
starting the GUI application on page 3-1.
See the PS Series Release Notes for the latest information on GUI requirements.
2. At the login prompt, enter a valid group administration account name (for example, the grpadmin account)
and password.
In addition, you can install the Group Manager GUI on a computer and run the GUI as a standalone application.
See Installing and starting the GUI application on page 3-1.
Requirement: Make sure the computer on which you installed the GUI application is running the required Java
version. See the PS Series Storage Arrays Release Notes for details.
Note: See the PS Series Release Notes for the latest information on standalone GUI requirements.
1. Using the computer where you installed the GUI, launch the Group Manager GUI. See Starting the GUI from a
Web browser on page 3-1.
2. Expand Tools in the lower-left portion of the GUI and click Run as application.
3. Confirm you want to install the GUI application. Depending on the Java version, you might be prompted to
create a shortcut on the desktop. After installation, the standalone GUI starts automatically.
4. At the login prompt, enter a valid group administration account name and password.
5. Start the standalone GUI application, using one of the following methods (the sequence might vary,
depending on your operating system):
b. Navigate to the Windows Programs menu, and click EqualLogic PS Group, then group_name
Group Manager.
When you install the GUI locally, it automatically updates when you upgrade the PS Series firmware. However,
you must log out of the GUI and then log in again after performing a firmware update to make sure the GUI
displays all the features in the updated firmware.
If you change the IP address of a group for which you are running the GUI locally, or configure a management
network for the group, you must uninstall the GUI application and then install it again.
3. Right-click the GUI application for the group and select Delete. The name uses the format: group_name
Group Manager.
Depending on the Java version, the GUI application for a group might have two components, the applet and
the library; both are prefaced with the group name and show EqualLogic as the vendor. Make sure you remove
both components.
Note: The PS Series firmware GUI can also show replication storage objects that originate from other PS Series
groups (replication partners).
The GUI provides a set of view buttons in the bottom left of the window:
– Inbound replica collections - Related sets of storage objects located on a partner group, sending replicated
data to the login group.
– Inbound replicas - Individual storage objects located on a partner group, sending replicated data to the login
group.
– Outbound replica collections - Related sets of storage objects located on the login group, sending replicated
data to a partner group.
– Outbound replicas - Individual storage objects located on the login group, sending replicated data to a
partner group.
• Monitoring - Provides a view of group information, status, and messages, organized as:
– Events - The group event log messages, filtered by log event class.
– Statistics - Events and data relating to administrative login sessions and iSCSI initiator connections.
– Schedules - Events and data relating to snapshot and replication schedules.
– Replication - The status of storage objects configured for replication on the login group and on replication
partner groups. This view also provides a cumulative log showing all replication events.
Keyboard shortcuts
Table 3-1 lists keyboard shortcuts you can use to navigate the GUI without using a mouse. The first column lists
the pane in the window and the second column lists the keyboard shortcut for it.
Note: There are also keyboard shortcuts for individual buttons and fields in the GUI. See the GUI online help for
more information.
GUI icons
Table 3-2 identifies the icons at the top of the GUI window. The first column lists the icons and the second
describes them.
Refresh data in a window - Refreshes the data that appears in the GUI. Do not use the browser refresh button
to refresh the data that appears in the GUI. Shortcut: Control+R
Navigate the GUI - Moves backward or forward through the GUI windows, according to the window history.
Shortcuts: Alt+Left Arrow (to go back), Alt+Right Arrow (to go forward)
• Click the Show window or Hide window icon in the Alarms title bar, or click the title bar itself, to
open and close the panel. Each alarm contains a link to the object (member or volume). Click the link for
additional information.
• Click the Acknowledge all icon to acknowledge all alarms and stop the Caution icon from flashing.
Table 3-3 shows the Tools panel configuration options and utilities for working in the Group Manager:
• Change the look and behavior of the GUI, and the location of online help.
• Specify what the GUI does when a connection is lost.
• Change event logging options and how alarms are indicated.
• Control the GUI response to typed input and debugging.
2. In the User Preferences – General dialog box, select your preferences and click OK.
2. In the User Preferences – Alarms dialog box, select policies and click OK.
To access the group to run CLI commands, you can do one of the following:
• Use a network connection. From a computer, use telnet or SSH to connect to the group (or management) IP
address or—if you are running array management commands on a specific array—connect to an IP address
assigned to a network interface on the array.
• Use a serial connection. Set up a serial connection to the array, as appropriate for the control module model.
Make the connection to Port 0 on the active control module (the ACT LED is green). Use the serial cable that
shipped with the array. See the Hardware Maintenance manual shipped with your array for more information.
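For example, on a Linux management host, a terminal emulator such as screen can attach to the serial port. The device name is an assumption, and you should verify the serial parameters against the Hardware Maintenance manual for your array model:

```shell
# Attach to the serial console (device name varies; /dev/ttyUSB0 is an
# assumption for a USB serial adapter). 9600 baud is a common setting
# for PS Series control modules; confirm for your model.
screen /dev/ttyUSB0 9600
```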
See the CLI Reference manual for more information about using the CLI. You can download documentation from
the EqualLogic Customer Support Website.
Note: Setting the online help to a local path only applies to users of the local system.
The online help is context-sensitive. Click any of the question mark icons embedded in the Group Manager GUI to
open a specific help topic. You can also open the help in a browser, and use the table of contents or search option to
find information.
To launch online help from the Web site, in the GUI’s far-left panel, expand Tools and click Online Help.
1. Log on to the Customer Support Web site and navigate to the downloads area.
2. Copy the folder named eqlgmhlpnn (where nn corresponds to the PS Series firmware release, such as 5.0).
3. Save the contents of the folder to a local disk, a network share, or a Web server repository. The top-level file in
the help hierarchy in this folder is named groupmanager.htm. For example: c:/eqlgmhlp50.
6. In the User Preferences – General dialog box, enter the new help directory location in the Location of
online help files field and click OK. For example:
file:///c:/eqlgmhlp50
http://servername/eqlgmhlp50
file:///system.directory.company.com/myhelpshare/eqlgmhlp50
Depending on your browser choice and the local Internet security settings, you might need to configure browser
access to the help folder. For example, for Internet Explorer, you might need to add the help URL to the list of
trusted sites:
3. Add the group’s IP or management address to the list of trusted sites, using the format: http://
group_ip_address. (Do not select the option to require server verification.)
4 Group security
Group security features enable you to control access to the group and the data it contains.
In addition to administration account security, Table 4-2 identifies other group security options.
You can disable CLI access, preventing any administrator from logging in to the group or from using CLI
commands.
Note: If you disable all methods of access to the group, you must use a serial connection and the CLI to manage
the group or to re-enable access. See Using the CLI on page 3-6 and your Hardware Maintenance manual
for information about serial connections.
The CLI provides additional options for managing network services. Using the CLI, you can also enable or disable
SSH V1 protocol support or the ftp service. See the CLI Reference manual.
2. In the Administration Access panel, enable or disable the GUI or CLI access options and network services as
described in Administration access options on page 4-2.
To manage or monitor a group, you must log in to an administration account. Administration accounts prevent
unauthorized individuals from accessing a group.
An account can authorize an individual to perform all group operations, perform only operations on a pool (and
optionally monitor the entire group), manage its own volumes within an assigned quota, or only monitor the group,
depending on the type of account.
The default administration account, grpadmin, can perform all group operations. Dell recommends that you set up
an account for each administrator.
Recommendation: Dell recommends one account per user and that the group administrator monitors the activity
of other accounts. See Chapter 15, Group monitoring, for more information.
• Locally in the group – If you have relatively few administration accounts, this method is practical. Account
authentication occurs within the group. The default administration account, grpadmin, is a local account
created automatically when the group is first configured.
A group can use both local accounts and RADIUS-authenticated accounts. However, each account name
must be unique.
• You cannot change the name of an administration account. Instead, you must delete the account and then re-
create it with the new name.
• You cannot disable, delete, change the name, or change the type of the grpadmin account.
• Only group administrator accounts can change the attributes of accounts, with the exception of the grpadmin
account restrictions above.
• Volume administrator, pool administrator, and read-only accounts can only change the password, description,
and contact information for their accounts.
See the online help for information about the data fields and options.
Before creating a local administration account, gather the information described in Administration account
attributes on page 4-4.
3. Enter the account name, password, and description (optional) and click Next. The Create Account - Account
Permissions dialog box appears.
4. Select the type of account and (if applicable) the pool access and read access to the group.
• For a pool administrator, select one or more pools the account can manage and whether the account has
read-only access to the entire group.
• For a volume administrator, select one or more pools the account can manage and specify the quotas for
each pool.
5. Select whether to enable (default) or disable the account, then click Next. (You can enable and disable
accounts at any time.)
If you created a volume administrator account, and the group has replication partner(s) configured, the Create
Account - Allowed replication partners dialog box appears.
6. [Optional] Select one or more replication partners that this account can replicate to, then click Next. The Create
Account - Contact information dialog box appears.
7. [Optional] Enter contact information for the account and click Next. The Create Account - Summary dialog
box opens.
8. Review the account information. Click Back to make changes, or click Finish to create the account.
In addition, you cannot disable, delete, change the name, or change the type of the grpadmin default
administration account.
1. Click Group, then Group Configuration, and then the Administration tab.
2. In the Administration Accounts panel, select the account and click Modify.
• To change the account password or description, click the General tab and change the information in the
Modify Administration Account – General dialog box.
• To change the account type or pool or volume administrator settings, click the Permissions tab and
change the information.
• To change replication partners for a volume administrator, click the Replication Partners tab and
change the selection(s).
• To change the account contact information, click the Contact tab and change the information.
3. Click OK.
Note: When you delete a volume administrator account, the volumes it manages are not deleted, and their
replication and other scheduled operations continue as scheduled.
Using a RADIUS server can simplify account management if you have a large number of accounts.
There are various implementations of RADIUS, including Microsoft Windows Internet Authentication Service
(IAS). Depending on the implementation, a RADIUS server can verify account credentials against a local database,
or it can verify them against an external resource, such as a Microsoft Windows Active Directory™ service
domain.
Note: External administration accounts depend on the availability of the RADIUS server and any related
resources. If these resources are not available, accounts cannot be authenticated and logins fail.
For information about using IAS and Active Directory to manage and authenticate administration accounts, see the
Technical Report Using Active Directory for Account Authentication to a PS Series Group on the customer support
web site.
For other RADIUS implementations, see your RADIUS server documentation for information about setting up the
RADIUS server and configuring vendor-specific attributes (VSAs).
You can use multiple RADIUS authentication servers for increased availability.
Recommendation: For security reasons, Dell recommends that you require vendor-specific attributes.
See your RADIUS server documentation for information on how to set attributes.
For each account, you must set the Service-Type attribute to one of these values:
• Administrative – Specifies that the account is a group administrator, pool administrator, or volume
administrator account.
Note: If you do not specify the EQL-Admin attribute, the account is a group administrator account by default.
In addition, you must configure vendor-specific attributes (VSAs) for each account if you meet one of these
conditions:
• You want to create a pool administrator account. You must specify the EQL-Admin attribute and the EQL-
Pool-Access attribute.
• You want to create a volume administrator account. You must specify the EQL-Admin attribute, the EQL-
Pool-Access attribute, and (optionally) the EQL-Replication-Site-Access attribute.
• You want to create a read-only account. You must specify the EQL-Admin attribute and the EQL-Admin-
Account-Type attribute.
• You plan to select the Require vendor-specific RADIUS attribute option when you configure the
group to use a RADIUS authentication server. You must specify the EQL-Admin attribute.
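As an illustration only, a RADIUS account entry that carries these attributes might look like the following FreeRADIUS-style users-file fragment. The attribute names (Service-Type, EQL-Admin, EQL-Pool-Access) come from this chapter; the account name, password, pool name, and the attribute values shown are hypothetical placeholders, and the users-file syntax itself is an assumption. Consult Table 4-6 and your RADIUS server documentation for the actual values and format.

```
# Hypothetical example: a pool administrator account limited to pool "staging".
# Replace the placeholder attribute values with the values listed in Table 4-6.
padmin  Cleartext-Password := "Secret123"
        Service-Type = Administrative,
        EQL-Admin = <value-from-Table-4-6>,
        EQL-Pool-Access = "staging"
```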
Table 4-6 describes the Dell vendor-specific attributes (VSAs) for RADIUS and lists their possible values.
See the online help for information about the data fields and options.
Perform the tasks in Table 4-7, in the order listed, before you use a RADIUS server to authenticate administration
accounts (or CHAP accounts for iSCSI access, as described in Using CHAP accounts on a RADIUS authentication
server on page 8-3).
3. Under RADIUS authentication servers click Add and specify the IP address of the server.
• Request timeout, seconds — Number of seconds the group waits for a response from the server before
timing out. The default is two seconds.
• Number of retries — Number of times the group tries to contact an accounting server after the first
failure. The default is one.
6. [Optional] Using the RADIUS accounting servers panel, repeat steps 2 through 5 if you also want to add
information for RADIUS accounting servers.
All RADIUS authentication and accounting options are now enabled. For a description of these options, see
Displaying RADIUS authentication and accounting servers on page 4-9.
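The timeout and retry settings above determine how long a login attempt can hang before failing. Assuming the group tries each configured server in turn until one responds (an assumption about failover order, not stated in this chapter), the worst-case delay can be estimated with the defaults:

```python
# Worst-case RADIUS login delay, assuming sequential failover across servers.
# Defaults are from the settings described above; the server count is hypothetical.
timeout_s = 2   # Request timeout default, in seconds
retries = 1     # Number of retries default
servers = 2     # hypothetical: two authentication servers configured

attempts_per_server = 1 + retries          # first attempt plus retries
worst_case_s = timeout_s * attempts_per_server * servers
print(worst_case_s)  # 8 seconds before authentication fails across both servers
```

Keeping the defaults therefore bounds an unreachable-server login failure to a few seconds per server.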
• Server IP address
• Password (secret)
• Request timeout value
1. Click Group, then Group Configuration, and then the Administration tab.
2. In the RADIUS Authentication panel, click RADIUS settings. The RADIUS settings dialog box appears.
3. To change a server IP address or password, select the server IP address and click Modify in the RADIUS
settings dialog box. Change the settings and click OK.
4. Click OK.
To disable RADIUS authentication or accounting:
1. In the Group Administration window, deselect Enable RADIUS authentication for login or deselect
Enable RADIUS accounting for authenticated users.
Note: SAN HeadQuarters requires you to configure SNMP access to a group. The Manual Transfer Utility
requires you to configure SNMP access and specify public for the SNMP community name.
See the online help for information about the data fields and options.
2. Enter an SNMP community name (for example, public) and click OK. You can specify up to five names.
Each application requires a method of accessing the group and its storage:
• SAN HeadQuarters – Enables you to monitor the performance and status of multiple groups from a central user
interface. The application uses SNMP to access the group.
See the SAN HeadQuarters documentation and Displaying and configuring SNMP access to a Group on page
4-11.
• Manual Transfer Utility – Enables you to perform a replication operation using external media, instead of the
network. The application uses SNMP to access the group.
See the Manual Transfer Utility documentation and Displaying and configuring SNMP access to a Group on
page 4-11.
• Auto-Snapshot Manager/Microsoft Edition – Enables you to create consistent backups on group storage. The
application uses special access controls to access the group.
See the ASM/ME documentation and Displaying and configuring Windows service access to a Group on page
4-12.
• Auto-Snapshot Manager/VMware Edition – Enables you to create backups on group storage in a VMware
environment. The application uses the Group Manager GUI channel to access the group.
See the ASM/VE documentation and Accessing the GUI or CLI on page 4-1.
Note: Auto-Snapshot Manager/Microsoft Edition requires you to configure Windows service access to a group.
To allow VDS and VSS access to the group, you must create at least one VDS/VSS access control record that
matches the access control credentials you configure on the computer by using Remote Setup Wizard or Auto-
Snapshot Manager/Microsoft Edition.
VDS/VSS access control records use the same criteria for restricting access as iSCSI target access control records:
CHAP user name, iSCSI initiator name, or iSCSI initiator IP address. See About iSCSI target access controls on
page 8-1.
To display VDS/VSS Access, click Group, then Group Configuration, and then the VDS/VSS tab.
See the online help for information about the data fields and options.
• Check the box marked Authenticate using CHAP user name to use CHAP (see Authenticating
initiators through CHAP on page 8-2).
• Check the box marked Limit access by IP Address to constrain access to an IP address or range of
addresses. Use an asterisk as a wildcard to specify a range of addresses, such as 127.200.*.*.
• Check the box marked Limit access to iSCSI initiator name to grant access to a specific
iSCSI initiator (see iSCSI target security on page 8-1). For example:
iqn.2000-05.com.qlogic.qla-4000.sn00044.
4. Click OK.
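The asterisk wildcard in an IP-address restriction behaves like per-octet pattern matching: each of the four dotted-decimal fields must either equal the pattern field or be a *. The group evaluates these records internally; the sketch below is only an illustration of the matching semantics described above, not the array's actual implementation.

```python
def ip_matches(pattern: str, address: str) -> bool:
    """Return True if each address octet equals the pattern octet or the pattern octet is '*'."""
    p_octets = pattern.split(".")
    a_octets = address.split(".")
    if len(p_octets) != 4 or len(a_octets) != 4:
        return False
    return all(p == "*" or p == a for p, a in zip(p_octets, a_octets))

print(ip_matches("127.200.*.*", "127.200.15.8"))   # True: first two octets match
print(ip_matches("127.200.*.*", "127.201.15.8"))   # False: second octet differs
```

A record of 127.200.*.* therefore admits any initiator whose address begins with 127.200.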
To delete a record, select the record and click Delete. Then, confirm that you want to delete the record.
When you delete or modify a record, you might need to update any computer that previously accessed
volumes through that access control record.
Without a dedicated management network (the default configuration), administrators connect to the group IP
address for both administrative access to the group and iSCSI initiator access to iSCSI targets (volumes and
snapshots).
With a dedicated management network, administrators do not use the group IP address for administrative access to
the group. Instead, administrators connect to the management network address. All iSCSI traffic, including traffic
by replication partners, continues to use the group IP address.
Although a dedicated management network can provide additional group administration security, it has
disadvantages:
• Because you assign the highest-numbered network interface on each group member to the management
network, iSCSI traffic is limited to the remaining network interfaces. Therefore, total iSCSI bandwidth might
decrease, depending on the control module type.
Note: Some control modules have a network interface that you can use only if you configure a dedicated
management network. For these control modules, using a dedicated management network does not
decrease iSCSI bandwidth. See the Hardware Maintenance manual for your array model.
• If the management interface fails in a single-member group, or if the management network loses connectivity,
you lose management access to the group. However, you can always connect to the serial port on a group
member and use the Group Manager CLI to manage the group.
Only experienced administrators should configure a management network, and only if the environment requires
separating management traffic from iSCSI traffic.
• Make sure your network environment can support a dedicated management network. You need a subnet for the
management network that is separate from the subnet (or subnets) for iSCSI traffic.
• Obtain an IP address and default gateway information for the management network address. This is the address
to which administrators can connect.
• For each group member, obtain an IP address for the management network interface. The IP address must be
on the same subnet as the management network address, and this subnet should not be the same as the subnet
used for data I/O.
• On each group member, connect at least one network interface, other than the highest-numbered interface, to
the iSCSI network and configure and enable the interface. For the best performance, connect, configure, and
enable all iSCSI-eligible network interfaces. To support control module failover, connect the ports on the
active and secondary control modules to the network.
• On each group member, connect the highest-numbered network interface on both control modules to the
management network.
For example, if you have a three-port control module, connect the Ethernet 2 port on both control modules to
the management network. For some control modules, this interface is labeled Management.
1. Click Group, then Group Configuration, and then the Advanced tab.
2. Click Configure management network in the Dedicated Management Network panel (Alt+C). The
Configure Management Network window appears.
a. Double-click to configure and enable at least one network interface, other than the highest-numbered
interface, on the iSCSI network.
For the best performance, connect, configure, and enable all iSCSI-eligible interfaces. See Configuring a
management network on page 4-14.
b. Click Configure for management-only access next to the highest-numbered network interface.
8. In the Warning dialog box, click Yes to restart the Group Manager GUI session using the new management IP
address.
Note: When you configure a management network correctly, the highest-numbered interface is on the same subnet
as the management IP address and Mgmt appears in the Traffic column (Figure 4-1). The remaining
interfaces for iSCSI traffic are on a different network and SAN appears in the Traffic column.
The management network address is shown in the Management IP address field. (If there is no
management network, this field is not shown.) The General Settings panel also shows the group IP address,
which you use for all iSCSI traffic, including traffic between replication partners.
After you select the pool and RAID policy for the new member, as described in Setting the RAID policy and pool
for a new member on page 6-6, the Modify IP Settings – Management Network dialog box appears.
4. In the Modify IP Settings dialog box, double-click the items and make your changes.
Note: Make sure all the network addresses used in the management network are on the same subnet, which is
different from the iSCSI network subnet.
5. Click OK.
6. Verify the configuration. See Displaying management network information on page 4-16.
5. Click OK to confirm that you want to perform the operation. The Group Manager GUI automatically restarts,
using the group IP address.
1. Click Group, then Group Configuration, and then the Advanced tab.
2. In the Group Advanced window’s Dedicated Management Network panel, click Configure management
network.
3. In the Configure Management Network dialog box, deselect Enable dedicated management network.
4. Click OK to confirm that you want to perform the operation. The Group Manager GUI automatically restarts,
using the group IP address.
b. If the interface is eligible for iSCSI traffic and you want to use it, connect it to the iSCSI network.
c. Click Members, then the member name, and then the Network tab.
e. In the Modify IP settings dialog box, if you do not want to use the interface, delete the IP address and click
OK.
• Click OK.
5 Group configuration
You can change the group configuration defaults and initial settings. You can also add members to a group to
expand group capacity and improve performance.
The Group Summary window displays the basic group configuration and resources in the following panels:
• Group information panel – Shows a summary of group parameters and resource use
• Group disk space panel – Shows how the group is using space. You can display details about the space by
selecting from the radio button options.
Note: The group free space might not be precise because the GUI uses a rounding algorithm to calculate free
space. To display the actual free space in megabytes, you must use the CLI. Enter the cli-settings
displayinMB on command. Then enter the show command to display the free group space. See the
CLI Reference manual.
• Storage pools and Group members – Shows storage pools in the group, with details of pool capacity and used
space. See About storage pools on page 7-1.
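For example, the two CLI commands described in the note above are entered in sequence from a Group Manager CLI session (the prompt shown here is illustrative):

```
> cli-settings displayinMB on
> show
```

The output of the show command then reports group free space in megabytes rather than in rounded units.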
See the online help for information about the data fields and options.
Group Administration Group configuration
Click Group Configuration, then the General tab. See the online help for information about the data fields
and options.
1. Select the time zone and city from the Time zone pull-down menu.
3. Change the date and time. To synchronize the time with the computer running the GUI, click Set to time
of system running GUI.
4. Click OK to save the changes. Click Cancel or Reset to current group time to cancel the changes.
Requirement: The NTP server must be accessible to all the group members.
Recommendation: If you are using a dedicated management network, Dell recommends that the NTP server be
on the same subnet as the dedicated management network.
2. Click Add under NTP servers in the Date and Time panel.
3. Enter the IP address for an NTP server. Use the ip_address:port format to specify a port number other than
the default (123), then click OK.
4. [Optional] Repeat steps 2 and 3 to add more NTP servers. You can specify up to three NTP servers. Use the
arrows to move a server up or down in the list.
If the group time changes (because you modified it manually or the NTP server time changed), the
synchronization operation can take as long as 24 hours when the time changes by an hour or more.
1. Install the new array hardware and connect the array to the network. Make sure you connect the array to the
iSCSI network.
2. Use the setup utility or the Remote Setup Wizard to configure the array as a group member. You need:
• Group name and group IP address for the group that you want to join.
• Group membership password.
• IP address for a network interface on the new array (usually, Ethernet 0), in addition to the netmask and
default gateway for the iSCSI network. You must connect the network interface to the network.
See the Installation and Setup manual for your array model for details.
3. Perform the post-setup member tasks. See Common group tasks.
Note: If you configured a dedicated management network in the group, you must configure the highest-
numbered network interface on the management network. See About dedicated management networks
(advanced).
• Group name – Identifies the group on the network. For example, when you configure a replication partner, you
use the name to identify the group.
• Group IP address – Network address for the group. You use the group IP address as the iSCSI discovery
address when connecting initiators to iSCSI targets in the group.
You also use the group IP address to access the group for management purposes, unless you configured a
dedicated management network.
• You identify replication partners by group name and use the group IP address to perform replication. If you
modify the group name or IP address, make sure replication partner administrators make the change to their
partner configuration. Replication fails if the partner information is incorrect. See Replication partner
requirements for information.
• You use the group IP address as the iSCSI discovery address when connecting initiators to iSCSI targets in the
group. If you modify the group IP address, you might need to change your initiator configuration to use the
new discovery address.
• You use the group IP address to access the group for management purposes, unless you configured a dedicated
management network. If you modify the group IP address, make sure administrators are aware of the change.
• Changing the group IP address disconnects any iSCSI connections to the group and any administrators logged
in to the group through the group IP address.
• If you change the IP address of a group for which you are running the GUI locally, or configure a management
network for the group, you must uninstall the standalone GUI application and then install it again.
• Applications such as SAN HeadQuarters use the group IP address to access the group for monitoring purposes.
If you modify the group IP address, you must modify the SAN HeadQuarters configuration to use the new
group IP address.
Requirement: Group names must be unique in your network environment. A group name can consist of one to 54
letters, numbers, or hyphens. The first character must be a letter or number.
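The naming rule above can be checked mechanically. As an illustration only (a sketch of the stated rule, not a utility provided with the firmware), a regular expression encoding it — 1 to 54 letters, numbers, or hyphens, with the first character a letter or number — looks like this:

```python
import re

# First character: letter or digit; then up to 53 more letters, digits, or hyphens.
GROUP_NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]{0,53}$")

def is_valid_group_name(name: str) -> bool:
    """Return True if the name satisfies the stated group-naming rule."""
    return bool(GROUP_NAME_RE.match(name))

print(is_valid_group_name("backup-group-1"))  # True
print(is_valid_group_name("-badname"))        # False: must start with a letter or number
```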
IPv6 Note: If you are using IPv6 addresses exclusively in the group, you must use the CLI to change the group IP
address. See the CLI Reference manual for more information.
2. Select the General tab. (See the online help for information about the data fields and options.)
2. Enter the new password twice and click OK. The password can contain 3 to 16 alphanumeric characters and is
case-sensitive.
To perform maintenance, you might need to shut down a PS Series group. While the group is shut down, group
volumes are not available.
3. Shut down each group member. See Shutting down a member on page 6-14.
Note: Do not turn off power to a group member until you cleanly shut down the member.
By default, the group tries to store volume data on pool members with a RAID configuration optimal for volume
performance, based on metrics collected from each group member. You can enable or disable automatic
performance load balancing. Other load balancing methods continue to apply.
1. Click Group, then Group Configuration, and then the Advanced tab.
2. In the Load Balancing panel, select or deselect Enable performance load balancing in pools
(Alt+E).
3. Click Save all changes (Control+S).
6 Group members
A PS Series group includes one or more PS Series arrays configured as group members.
• Group disk space panel – Provides information about capacity use for the group and pools, and capacity
distribution by RAID level.
• Group members panel – Provides information about individual members (arrays) in the group, physical
attributes, capacity and capacity use.
See the online help for information about the data fields and options.
2. Select the member name, and click the appropriate information tab.
• Member status tab – The general status of a member, such as member information and hardware health.
• Enclosure tab – Array hardware status.
• Controllers tab – The status of controllers, such as which controller is active.
• Disks tab – The status of disks and model or revision of firmware.
• Network tab – Ethernet interface operational status.
• Connections tab – iSCSI connections to storage objects such as volumes.
• Service tab – Information about the array, such as the type, model, and firmware revision of its components.
See the online help for information about the data fields and options.
• General member information – Provides information about the array model, its configuration settings, and
RAID status.
• The Member Health Status panel – Provides a physical view of the array hardware and enables you to examine
the following details:
Group Administration Group members
– Click Front view to see the front panel of the array. For some array models, the GUI shows disk drives
located behind the front bezel.
– Click Inside view (not available on all array models) to see the disk drives located inside the array.
– Click Rear view to see the back panel of the array, including the control modules and the power supply
and cooling modules. The front and rear views shown in your GUI depend on the array model of the group
member.
- A red “X” over a hardware component indicates uninstalled or unconfigured hardware. Place the pointer
over a component to display status details.
- The Member Health Status panel also shows a table with the alarm status for the array hardware
components.
– Click View Alarms to display all the alarms in the member.
• Member space panel – Provides information about the member’s storage capacity and how storage space is
allocated. Be aware of the following:
– Total member capacity does not include the space in spare disk drives (the member uses spare disk
drives only if a failure occurs). Member capacity depends on the number and size of the installed disks and
the selected RAID policy.
– If free member space is low, you can increase the member capacity by adding disk drives, if empty slots are
available, or by replacing the current drives with higher-capacity drives. You can also add more group
members to increase overall group capacity.
See the online help for information about the data fields and options.
Enclosure tab
1. Click Group, then expand Members.
The following panels provide information about the member’s enclosure (array enclosure):
• Power supplies panel – Enables you to identify power supply locations and verify their status.
• Cooling fans panel – Enables you to identify cooling fan locations and verify their status, including operational
parameters.
• Temperature sensors panel – Enables you to identify temperature sensor locations and verify their status and
operational parameters.
See the online help for information about the data fields and options. Your array hardware manual contains more
information about physical features of the array.
See also Monitoring the member enclosure on page 15-17 and Monitoring cooling and fans on page 15-18.
Controllers tab
1. Click Group, then expand Members.
The following panels provide information about the member’s controller modules:
• Control module panel – Enables you to identify controller locations and verify their status and cache battery
condition. Also identifies the current PS Series firmware version.
• Memory cache panel – Describes the cache mode and current caching policies.
See the online help for information about the data fields and options. See also Monitoring control modules on page
15-20.
Disks tab
1. Click Group, then expand Members.
• Disk array summary panel – Enables you to identify disk drive locations and verify the status of a disk.
Provides information about failed disks and operations in progress.
Note: The disk slots are color-coded to represent the status. Mouse over a disk to view its status information as
text.
• Installed disks panel – Describes the physical attributes of installed disks. You can select an individual disk and
display disk activity statistics and performance data. See Using the Performance Monitor on page 15-29.
See the online help for information about the data fields and options. See also Monitoring disk drives on page 15-
22.
Network tab
1. Click Group, then expand Members.
The following panels provide information about the member’s network interfaces:
• Status of network interfaces panel – Provides a description of the operational status of network interfaces in the
member’s controllers, and the physical attributes of each interface.
• IP configuration panel – Describes the IP configuration values for each network interface and enables you to
modify the IP configuration.
See the online help for information about the data fields and options. See also About member network
configuration on page 6-7.
Connections tab
1. Click Group, then expand Members.
The iSCSI Connection panel displays details of the current initiator connections. See the online help for
information about the data fields and options. See also Monitoring iSCSI connections on page 15-3.
Service tab
1. Click Group, then expand Members.
The following panels provide information about the member’s hardware components:
• Component versions panel – Provides information that enables you to identify your array and its components
for the purpose of service and upgrades, including installed firmware revisions.
• Disk versions panel – Provides information that enables you to identify disk drives for the purpose of service
and upgrades, including installed firmware revisions.
You can also restart or shut down a member from this panel, as described in Restarting a member on page 6-15 and
Shutting down a member on page 6-14.
See the online help for information about the data fields and options. See also Displaying member service
information.
To display current RAID policies, click Members in the far-left panel. The Group Members panel shows the RAID
policy for each member.
• RAID 10
• RAID 10 no spares
• RAID 50
• RAID 50 no spares
• RAID 5
• RAID 6
• RAID 6 no spares
• RAID 6 Accelerated (supported only on array models with disk drive configurations that include solid-state
drives and hard disk drives.)
• RAID level – RAID 10, RAID 50, RAID 5, or RAID 6. See RAID level characteristics.
Recommendation: For optimal performance, Dell recommends that you assign the same RAID level to pool
members with the same disk type and disk speed.
• Spare drive policy – Whether the member automatically configures and uses spare disk drives. Spare drives
increase availability.
The number of spare drives depends on the array model and the number of installed drives. For RAID 6
Accelerated, only one hard disk drive is configured as a spare. The solid-state drives are not protected by
sparing.
Recommendation: Dell recommends that you use a spare-drive RAID policy. Only use a no-spare-drive
RAID policy if you have sufficient support staff and maintain a stock of replacement disk
drives.
The storage in the member is available after you set the RAID policy. The member automatically configures the
disk drives according to the designated RAID level, with the appropriate number of spare drives.
Although all RAID levels provide good performance and data protection, there are some differences. When
choosing a RAID policy, you should identify the performance and availability needs of your workload and select a
RAID policy that meets those needs. If your workload has mixed requirements in terms of performance and
availability, you might want to consider mixing RAID levels in a multi-member group.
RAID 6 Accelerated is supported only on array models with disk drive configurations that include both solid-
state drives and hard disk drives. RAID 6 Accelerated optimizes the use of solid-state drives for critical data.
One hard disk drive is configured as a spare and provides redundancy protection in the event of a hard disk
drive failure or a solid-state disk drive failure.
Each RAID 1 set or RAID 5 set can survive a single disk drive failure. A RAID 6 set can survive two simultaneous
drive failures. If a drive fails in a RAID set, the RAID set is degraded.
Consider the following performance and availability factors when choosing a RAID level for a member:
• RAID 10 and RAID 50 provide excellent reliability for your data, in addition to overall high performance.
• RAID 10 provides the best performance for workloads that are mostly small random writes.
• RAID 6 provides high availability, but at the expense of performance during data reconstruction.
• RAID 6 is not recommended for workloads consisting mainly of random writes.
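The capacity side of these trade-offs can be approximated with generic RAID arithmetic. The sketch below is illustrative only: the 16-drive member is hypothetical, the RAID 50 and RAID 5/6 set layouts are assumptions, and actual usable capacity on a PS Series member also depends on spare drives, metadata overhead, and the array's internal RAID set layout.

```python
# Approximate usable drive count under each RAID level for an n-drive member
# (generic RAID arithmetic; set layouts are assumptions, spares are ignored).
def usable_drives(n: int, level: str) -> int:
    if level == "RAID 10":   # mirrored pairs: half the drives hold copies
        return n // 2
    if level == "RAID 50":   # assume two RAID 5 sets, one parity drive each
        return n - 2
    if level == "RAID 5":    # assume a single set with one parity drive
        return n - 1
    if level == "RAID 6":    # assume a single set with two parity drives
        return n - 2
    raise ValueError(level)

for level in ("RAID 10", "RAID 50", "RAID 5", "RAID 6"):
    print(f"{level}: {usable_drives(16, level)} of 16 drives usable")
```

The arithmetic shows why RAID 10 trades the most capacity for its write performance, while RAID 5 and RAID 6 keep more capacity at the cost of parity reconstruction overhead.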
Table 6-1 compares the performance and availability characteristics of the supported RAID levels. The first
column lists workload requirements, with the other columns respectively listing the performance quality for each
requirement at RAID 10, RAID 50, RAID 5, and RAID 6.
Table 6-2 shows the supported RAID policy conversions. The first column lists RAID configurations, and the
second column lists supported RAID policy conversions for them.
If a RAID policy conversion is not supported, you can remove the member from the group and then add it to the
group again. You can then set the RAID policy.
Notes: If you used the Remote Setup Wizard to create a group and add the first member to the group, you already
set the RAID policy for the member, and the group automatically assigned the member to the default pool.
If you want to use a no-spare disks RAID policy, you must use the Group Manager CLI to set the RAID
policy.
1. In the Group Summary window, expand Members and double-click the member name or click Group, then
expand Members, and then select the member name.
2. In the warning dialog box that appears, click Yes to configure RAID on the member.
3. In the Configure Member – General Settings dialog box, select the pool and click Next.
Recommendation: For optimal performance, Dell recommends that you assign the same RAID level to pool
members with the same disk type and disk speed.
6. Click Next.
7. When the configuration is complete, click Finish in the Configure Member – Summary dialog box.
Note: To convert to a no-spare-drive RAID policy, use the Group Manager CLI.
1. Click Group, then expand Members, then select the member name, and then click Modify RAID
configuration.
2. In the Modify RAID Configuration dialog box, select the new RAID policy.
The values in the Member Capacity table automatically update, based on the RAID policy you selected.
3. To ensure member space is not available until the RAID verification completes, you can select Wait until
the member storage initialization completes. If this option is already selected, you cannot
change the selection.
4. Click OK.
While the RAID policy is changing, the member’s RAID status is expanding.
Recommendation: For high availability and performance, Dell recommends that you configure multiple network
interfaces and use redundant network switches.
Note: Some control module types include a network interface that can be used only in a dedicated management
network.
When you add a member to a group, you configure a network interface with an IP address and netmask and enable
the interface (typically, Ethernet 0).
To increase performance and availability, Dell recommends that you expand the minimum network configuration.
See the Hardware Maintenance manual for your array model for network configuration details.
Redundant network connections protect against network interface failure and increase performance (network
bandwidth). If a network interface fails or you disconnect it, another interface can continue to service I/O requests.
1. Connect two or more network interfaces on the active control module to the network.
2. Configure the interfaces (assign an IP address and netmask and enable each interface). See Configuring
network interfaces.
Redundant control modules protect against control module failure by enabling control module failover. If the active
control module fails, the secondary control module takes over and becomes active.
2. For each configured network interface, connect the Ethernet port on the secondary control module to the
network.
Redundant network switches protect against network and switch failures and improve network performance.
Requirement: Redundant network connections must be configured. See Configuring redundant network connections.
2. In the Member Network window, review the current network configuration for the member.
You can also change the existing member network configuration or enable or disable an interface. If you change the
IP address for a functioning network interface, the member disconnects all iSCSI initiators from that interface.
Most initiators reconnect automatically.
IPv6 Note: If you are using IPv6 addresses exclusively in the group, you must use the CLI to configure a network
interface. See the CLI Reference manual.
1. Connect the network interface on the active control module to the network.
Recommendation: Dell recommends that you also connect the network interface on the secondary control
module to increase availability.
2. Obtain the IP address and netmask for the interface. At least one network interface on a member must be on the
same subnet as the group IP address.
3. Click Group, then expand Members, then select the member name, and then click the Network tab.
4. In the IP Configuration panel, select a network interface and click Modify IP settings.
Note: Unless you are using a dedicated management network, the default gateway is the same for all network
interfaces on an array. To modify the default gateway, see Modifying the default gateway for a member.
6. Select Enable this interface. You must enable the interface in order to use it.
7. Click OK.
Disabling a functioning network interface disconnects all iSCSI initiators associated with the member. Most
initiators reconnect to another interface automatically.
Requirement: A member must have at least one functioning network interface that is configured, enabled, and
connected to a network. If you disable the only functioning network interface on a member, the
group sets the member offline. If the member is the only member in the group, the group is not
accessible from the network.
1. Click Group, then expand Members, then select the member name, and then click the Network tab.
Requirement: A member must have at least one network interface that is functioning, configured, enabled, and
connected to a network. If you unconfigure the only functioning network interface on a member,
the group sets the member offline. If the member is the only member in the group, the group
becomes inaccessible from the network.
1. Click Group, then expand Members, then select the member name, and then click the Network tab.
5. Click OK.
Requirement: You must use a default gateway to enable communication outside the local network.
1. Click Group, then expand Members, then select the member name, and then click the Network tab.
2. In the IP Configuration panel next to the Default gateway field, click Modify.
3. Specify a default gateway IP address if you want to use a default gateway, or delete the IP address if you do not
want to use a default gateway.
4. Click OK.
2. In the Modify Member Settings dialog box, modify the member name or specify a description for the member.
3. Click OK.
• Write-back – The member stores data in the cache until it is written to disk drives, which can improve
performance. The cache battery protects the data in the cache. In a dual control module configuration, the
member mirrors cache data across the two controllers, providing additional data protection.
• Write-through – The member immediately writes data to the disk drives, which might decrease performance.
You can set policies that control the cache mode when the battery charge is low or when a control module fails in a
dual control module configuration. See Setting write cache policies.
Recommendation: Dell recommends that you keep the default write cache policy settings for the best
performance and availability.
• Use write-through mode if only one controller is functional – Also called single-controller-safe mode, if you
enable this policy on a member with a single control module, the cache always uses write-through mode. If you
disable this policy (the default), under non-failure conditions, the cache uses write-back mode.
If you enable this policy on a member with dual control modules, but only one module is functioning, the cache
on the functional control module uses write-through mode. If you disable this policy (the default), the cache on
the functioning control module uses write-back mode.
• Use write-through mode if battery charge is below tolerance – Also called low-battery-safe mode, if you enable
this policy (the default) on a member with a single control module, the cache uses write-through mode if the
charge on the cache battery is low.
If you enable this policy on a member with dual control modules, but do not enable the single-controller-safe
policy, the active control module cache uses write-back mode as long as its own cache battery has an adequate
charge, even if the secondary control module’s cache battery charge is low. If you enable both the
low-battery-safe policy and the single-controller-safe policy, the active control module cache uses write-through
mode if the battery charge on either control module is low.
If you disable the low-battery-safe policy (not recommended), a cache uses write-back mode, regardless of
cache battery charge.
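The interactions between the two policies, the number of functioning control modules, and battery charge can be summarized in a small decision function. This is an illustrative sketch only, not Dell code; the function and parameter names are invented for clarity.

```python
def cache_mode(controllers_functioning, batteries_low,
               single_controller_safe, low_battery_safe):
    """Return the cache mode for the active control module.

    controllers_functioning: number of functioning control modules (1 or 2)
    batteries_low: one boolean per functioning module; index 0 is the
                   active module's cache battery
    """
    # Single-controller-safe: always write-through when only one module
    # is functioning (single-module array, or dual array with one failed).
    if single_controller_safe and controllers_functioning == 1:
        return "write-through"
    if low_battery_safe:
        if single_controller_safe:
            # Both policies enabled: a low battery on either module
            # forces write-through.
            if any(batteries_low):
                return "write-through"
        else:
            # Low-battery-safe alone: only the active module's battery
            # charge matters.
            if batteries_low[0]:
                return "write-through"
    # Low-battery-safe disabled (not recommended): write-back regardless
    # of battery charge.
    return "write-back"
```

For example, with dual modules, only low-battery-safe enabled, and a low charge only on the secondary module's battery, the active cache stays in write-back mode, matching the behavior described above.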
To change the write cache policies, see Modifying write cache policies.
1. Click Group, then expand Members, then select the member name, and then click the Controllers tab to
display the Member Controllers window.
2. In the Memory Cache panel, select or deselect the cache mode policies.
3. Click Save all changes (Control+S).
Recommendation: Dell recommends that you always run the latest firmware to take advantage of new features
and fixes.
Dell recommends that all group members run the same firmware version. If you are adding a
new array to a group, update the group to the latest firmware before adding the new member.
You can upgrade member firmware to a higher version or downgrade member firmware to a lower version.
Before updating firmware, see Firmware update considerations and prerequisites for important information to
consider.
Before performing any firmware update, read the Release Notes for the new firmware and the Updating Storage
Array Firmware document. In addition, see your PS Series support provider for detailed information about
firmware and firmware updates.
When updating array firmware, keep in mind the issues listed below. The Updating Storage Array Firmware
document describes them in greater detail and provides recommendations for addressing them.
• SAN planning. Before updating a SAN component, you must fully understand the impact of the update on the
infrastructure. Careful planning of the upgrade process can help you avoid unplanned downtime.
• Timing. Dell recommends that you perform firmware upgrades during off hours or scheduled maintenance
periods to avoid disruption of service to the applications and servers that the storage group supports. When
scheduling a firmware update, allow enough time to update and restart the entire group of arrays.
• Planning for downtime and minimizing host disruption. The update procedure requires an array restart.
During an array restart, volumes with data on the array are temporarily unavailable until the restart completes.
To make sure that applications are not affected, follow the Dell guidelines in the iSCSI Initiator and Operating
System Considerations document.
• Backing up data. Back up your data before implementing an upgrade.
• Working with Multi-Member Groups. Dell recommends that all PS Series group members run the same
version of the storage array firmware. The PS Series Release Notes describe which firmware versions can
coexist in a group; however, only those features and bug fixes common to all versions are available.
• Software Prerequisites. Depending on the firmware version, you might be required to update certain software
components prior to applying the upgrade.
• Array Prerequisites. Before upgrading the firmware, you need network access to all group members, the
group IP address, and access to the grpadmin account.
• Supported Upgrade Paths. Usually, you can update an array directly to the latest firmware version. However,
in some cases, you might need to update to an interim version before updating to the latest version.
The Updating Storage Array Firmware document provides step-by-step instructions for updating member
firmware. You can obtain this document from the downloads area of the EqualLogic Customer Support Website.
Requirement: Before you disallow firmware downgrades, make sure all group members are running the same
firmware version.
1. Click Group, then Group Configuration, and then the Advanced tab.
Note: You cannot remove a member from the group unless the remaining pool members have enough space to
store the data from the member that you want to remove.
Members you remove from a group are automatically reset to the factory defaults. Previous group, member, and
volume information, as well as any volume data on the member, are removed.
Removing a member from a group can be a long operation, depending on the amount of data the group must move
to the remaining pool members. While data is moving, the member status is vacating-in-progress.
1. Click Group, then expand Members, then select the member name, and then click Delete member.
If you receive a “vacate failed” message, the member is offline.
Shutting down a member does not turn off array power. To turn off array power, turn off all power switches on the
array after the shutdown completes.
Note: To restart a member that you shut down, turn on all power switches.
1. Click Group, then expand Members, then select the member name, and then click the Service tab.
3. In the Member Shutdown confirmation dialog box, enter your group administrator account password and click
OK.
The group sets any volumes with data on the member offline. The group sets the volumes online when you
restart the member.
Restarting a member
When you restart a member, the group sets any volumes with data on the member offline. The group sets the
volumes online when the restart completes. Restarting a member has no effect on member, volume, or group
configuration information or volume data stored on the member.
1. Click Group, then expand Members, then select the member name, and then click the Service tab.
3. In the Member Restart confirmation dialog box, enter your administration account password.
4. Click OK.
Part II: Using Group Storage Space
7 Storage pools
By default, a group provides a single pool of storage. If your group has multiple members, you can divide group
space into different pools and then assign members to them.
By default, a PS Series group provides a single pool of storage. From this pool, you allocate space to users and
applications by creating volumes, which are seen on the network as iSCSI targets.
When you first create a group, there is one storage pool in a group, called default. Unless you specify a different
pool, members and volumes are assigned to the default pool.
However, some environments might need to segregate storage space. In this case, you can divide PS Series group
space into multiple storage pools. Using this “SAN within a SAN” technology, administrators can easily separate
workloads, while retaining the advantages of storage consolidation.
For example, you can segregate storage space according to application, service level, RAID type, or department.
Mission critical applications can use storage resources that ensure fast and consistent performance, while little-
used or archived data can use different resources.
To use multiple storage pools, a group must contain more than one member. You can then assign members to
different pools. A pool can contain multiple members. Performance load balancing occurs only across the pool
members. You can create up to four pools in a group.
After you assign a member to a pool, you can assign volumes to the pool. Only the resources that the pool members
provide are available to the volumes. As capacity needs change, you can move members or volumes from one pool
to another, while data remains online.
For detailed information about using storage pools, see the Technical Report Deploying Pools and Tiered Storage
in a PS Series SAN, which you can download from the EqualLogic customer support website.
7–1
Group Administration Storage pools
– Disk speed
– Member RAID level (RAID 10, RAID 50, RAID 5, RAID 6)
Recommendation: Dell recommends that pool members with the same disk spin rate have the same RAID level.
For example, if a pool contains two members that have 7200 RPM disks installed, configure
both members with the same RAID level.
• Disk space and performance needs of each application. Identify which applications deserve priority and
calculate their disk space and network bandwidth needs and performance characteristics. For example, some
applications do many random data transfers, while others do a small number of large sequential data transfers.
In addition, identify rarely-accessed volumes that you use only to archive data.
• How the pools are organized. Identify the number of pools and the members that belong to each pool, based on
the member performance characteristics and capacities and the requirements of the volumes. For example, you
can organize your pools according to disk speed. You could also organize pools according to member RAID
level.
• Which volumes belong to each pool. Put volumes with the highest service level requirements in a pool whose
members can provide that service level. If an application uses multiple volumes, those volumes can be in
different pools.
When deciding which volumes belong to a pool, consider the RAID level of the pool members. See RAID level
characteristics on page 6-5. If you do not know the optimal RAID level for a volume, make sure that the pool
to which the volume belongs contains members with a variety of RAID levels, so automatic performance load
balancing can occur.
• Create an empty pool. You can then assign members to the pool.
• Create a new pool and immediately move a member from its current pool to the new pool.
When you move a member from one pool to another, the remaining pool members in the original pool must have
enough space to store any volume data that is on the member. Moving a member to a different pool can take a
long time, depending on the amount of data that the group must move.
Requirement: A storage pool name can be up to 63 characters. Valid characters include letters, numbers, and
hyphens. The first character must be a letter or number. A pool description can be up to 127
alphanumeric characters.
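The pool name rules above can be expressed as a single pattern check. The following is an illustrative sketch (the function name is invented; it is not part of the Group Manager interface):

```python
import re

def valid_pool_name(name):
    """Check a pool name against the stated rules: up to 63 characters,
    letters, numbers, and hyphens only, and a first character that is a
    letter or number."""
    return bool(re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9-]{0,62}", name))
```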
1. Click Group, then Storage Pools, and then Create storage pool.
2. In the Create Storage Pool dialog box, specify a pool name and description and click OK.
1. Click Group, then expand Members, then select the member name, and then click Modify member
settings.
3. In the Create Storage Pool dialog box, enter a name and description for the new pool and click OK. If a
description contains spaces, surround it with quotation marks.
The member status shows as moving until the move operation completes.
If necessary, you can cancel an in-progress member pool move operation. The member immediately returns to the
original pool.
To cancel an in-progress member pool move operation, click Members, then member_name, then Cancel
member move.
• Group disk space panel – Shows available storage space and space usage. Provides information about RAID
distribution. You can change the view by selecting an option.
Recommendation: Dell recommends that you do not allow free pool space to fall below the recommended
minimum.
See the online help for information about the data fields and options.
The Status, Volumes, and Volume Templates tabs provide storage pool information and links to task activities.
• General pool information panel – Provides information about members, volumes and snapshots.
• Pool disk space panel – Provides information about available storage space, space usage, and RAID
distribution. Enables you to select a volume administrator account.
• Pool members panel – Provides information that enables you to identify members in the pool, member status
and configuration, and pool iSCSI connections.
See the online help for information about the data fields and options.
The table is sorted alphabetically by Volume name. You can sort the information in this panel table by clicking the
heading of any column.
Moving a member from one pool to another decreases the capacity of the current pool and increases the capacity of
the destination pool.
Requirement: To move a member from one pool to another, the remaining pool members must have enough free
space to store the data from the moved member. Moving a member to a different pool can take a
long time, depending on the amount of data that the group must move to the remaining current pool
members.
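The free-space requirement amounts to a simple arithmetic check. The sketch below is illustrative only (the function name is invented); a negative projected value corresponds to the negative free-space figure that the Pool Space table displays when a pool cannot absorb the change:

```python
def can_move_member(member_used_space, remaining_free_space):
    """Check whether the remaining members of the source pool can absorb
    the data stored on the member being moved.

    Returns (allowed, projected_free_space); projected_free_space is
    negative when the remaining members lack sufficient space.
    """
    projected_free = remaining_free_space - member_used_space
    return projected_free >= 0, projected_free
```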
1. Click Group, then expand Members, then select the member name, and then click Modify member
settings.
2. In the Modify Member Settings dialog box, select the pool for the member.
The group updates the Pool Space table to show the new amounts of free and used space in each pool. If a pool
does not have enough space to accommodate the pool change, the table cell showing free pool space displays a
negative value.
3. Click OK.
The member status is shown as moving until the move operation completes.
If necessary, you can cancel an in-progress member pool move operation. The member immediately moves to the
original pool.
To cancel an in-progress member pool move operation, click Members, then member_name, then Cancel
member move.
While a volume is moving, the group allocates volume space in the current pool and in the new pool.
Restriction: Thin clones inherit the pool setting of the template volume. You cannot move a thin
clone separately from its template volume.
Requirement: The pool to which you want to move a volume must have free space equal to the volume reserve
and any snapshot reserve and local replication reserve for the volume.
1. Click Volumes, then expand Volumes, then select the volume name, then click Modify volume settings,
and then click the General tab.
2. In the Modify volume settings dialog box, select the destination pool for the volume. The group updates the
Pool Space table to show the new amounts of free and used space in each pool. If a pool does not have enough
space to accommodate the pool change, the table cell showing free pool space displays a negative value.
3. Click OK.
The volume status is shown as moving until the pool move operation completes.
If necessary, you can cancel an in-progress volume pool move operation. Volume data moves to the original storage
pool.
To cancel an in-progress volume pool move operation, click Volumes, then volume_name, then Cancel
volume moving.
1. Click Group, then expand Storage pools, then select the pool name, and then click Merge storage
pool.
2. In the Merge Storage Pools dialog box, select the destination pool and click OK.
Requirement: A storage pool name can be up to 63 characters. Valid characters include letters, numbers, and
hyphens. The first character must be a letter or number. A pool description can be up to 127
alphanumeric characters.
1. Click Group, then expand Storage pools, then select the pool name, and then click Modify pool
settings.
1. Click Group, then expand Storage pools, then select the pool name, and then click Delete storage
pool.
8 iSCSI target security
Volumes and snapshots are seen on the network as iSCSI targets. It is important to understand how to protect your
volumes and snapshots from unauthorized and uncontrolled access by iSCSI initiators.
A volume and its snapshots share a list of access control records (up to 16 for each volume). An access control
record can apply to the volume, its snapshots, or both. For example, you can let a computer have access to the
volume and its snapshots or access only to the volume.
When you create a volume, you can set up one access control record. You can later set up additional records.
To log in to a volume or snapshot, the initiator must comply with conditions specified in one access control record.
You can specify one or more of the following conditions:
• IP address – Restricts access to iSCSI initiators that match the specified IP address (for example,
12.16.22.123). Use asterisks to indicate that any value is accepted in an octet (for example, 12.16.*.*).
• iSCSI initiator name – Restricts access to iSCSI initiators that match the specified name (for example,
iqn.2000-05.com.qlogic.qla-4000.sn00044).
• CHAP user name – Restricts access to computers that supply the specified CHAP user name and its associated
password (or “secret”). The credentials must match a local CHAP account or a CHAP account on an external
RADIUS server. See Authenticating initiators through CHAP.
For example, if a volume has only one access control record, which includes an IP address and CHAP user name,
only a computer with that IP address and the appropriate CHAP credentials can access the volume. If an
administrator creates another record that includes an iSCSI initiator name, a computer with that initiator name can
also access the volume.
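The matching behavior described above — an initiator must satisfy every condition present in at least one record, and an asterisk in an IP address pattern accepts any value in that octet — can be sketched as follows. This is an illustrative model, not the group's actual implementation; the record fields and function names are invented:

```python
def ip_matches(pattern, address):
    """Compare IPv4 addresses octet by octet; '*' in the pattern accepts
    any value in that octet (for example, '12.16.*.*')."""
    pat, addr = pattern.split("."), address.split(".")
    return len(pat) == len(addr) == 4 and all(
        p == "*" or p == a for p, a in zip(pat, addr))

def record_allows(record, initiator):
    """An initiator must satisfy every condition present in a record."""
    if "ip" in record and not ip_matches(record["ip"], initiator["ip"]):
        return False
    if "iqn" in record and initiator.get("iqn") != record["iqn"]:
        return False
    if "chap_user" in record and initiator.get("chap_user") != record["chap_user"]:
        return False
    return True

def access_allowed(records, initiator):
    """Access is granted if at least one of the volume's records matches."""
    return any(record_allows(r, initiator) for r in records)
```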
You can create an access control record that gives unlimited computer access. However, Dell does not recommend
unlimited computer access unless you are testing access to a target.
8–1
Group Administration iSCSI target security
Using CHAP for iSCSI authentication can help you manage access controls more efficiently because it restricts
target access by using user names and passwords, instead of unique IP addresses or iSCSI initiator names.
Before you can use CHAP for initiator authentication, you must set up the CHAP accounts consisting of a user
name and password (or “secret”). There are two options for accounts; you can use both options simultaneously in a
group:
• CHAP accounts in the group. Local CHAP accounts do not rely on any external system. You can create up to
100 local CHAP accounts. See Displaying local CHAP accounts on page 8-2.
• CHAP accounts on an external RADIUS authentication server. Using a RADIUS server to manage CHAP
accounts is beneficial if you are managing a large number of accounts. However, computer access to targets
depends on the availability of the RADIUS server. See Using CHAP accounts on a RADIUS authentication
server on page 8-3.
Note: If you use CHAP for initiator authentication, you can also use target authentication for mutual
authentication, which provides additional security. See Configuring target authentication.
The Local CHAP accounts panel provides information about the account credentials, account status, and the related
administration account. See the online help for information about the data fields and options.
1. Click Group, then Group Configuration, and then the iSCSI tab. The Group Configuration – iSCSI
window appears.
2. Optionally, in the iSCSI Authentication panel, select Consult locally defined CHAP accounts
first. If selected, credentials that an iSCSI initiator supplies are checked against local CHAP accounts before
external CHAP accounts on a RADIUS server.
• Enter a CHAP user name and (optionally) a password. If you do not enter a password, the group
automatically generates a password that is 16 ASCII characters in length.
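The group's behavior of generating a random 16-character ASCII password when you leave the field empty can be approximated with Python's secrets module. This sketch assumes a letters-and-digits character set; the group's actual generation algorithm and character set are not documented here:

```python
import secrets
import string

def generate_chap_secret(length=16):
    """Generate a random ASCII secret of the given length, using a
    cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```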
Note: For optimal security, passwords must contain at least 12 characters (preferably random). Individual
iSCSI initiators have their own rules and restrictions for length and format. Consult your initiator
documentation for details.
• Select whether to enable the account. You must enable an account to use it for initiator authentication. You
can later modify an account and enable or disable it.
• Click OK.
After creating the CHAP account, you can create an access control record and use the CHAP user name in the
record. See Configuring access control records.
If you want to enable target authentication (for mutual authentication), see Configuring target authentication on
page 8-4.
2. Select the account name in the Local CHAP Accounts panel and click Modify.
3. Click Delete.
1. Set up the RADIUS server and CHAP accounts. See the prerequisites in Using RADIUS authentication and
accounting servers on page 4-9.
Requirement: The RADIUS server must be accessible to all the group members.
2. Click Group, then Group Configuration, and then the iSCSI tab. The Group Configuration – iSCSI
window appears. See Table 8-2.
3. In the iSCSI Authentication panel, select Enable RADIUS authentication for iSCSI initiators.
5. If you have not already configured the group to use a RADIUS server, click RADIUS settings and add at
least one RADIUS server. See the procedure in Using RADIUS authentication and accounting servers on page
4-9 for adding RADIUS servers.
After creating the CHAP account, create an access control record for a volume and specify the CHAP user name in
the record. See Configuring access control records.
If you want to enable target authentication (for mutual authentication), see Configuring target authentication on
page 8-4.
With target authentication, when the initiator tries to connect to a target, the target supplies a user name and
password to the initiator. The initiator compares the user name and password to mutual authentication credentials
that you configure in the initiator configuration interface. The iSCSI connection succeeds only if the information
matches.
A group automatically enables target authentication using a default user name and password, which you can
change. Whether the initiator requires target authentication depends on the initiator configuration settings.
To display the current target authentication user name and password, click Group, then Group Configuration,
and then the iSCSI tab. The Group Configuration – iSCSI window appears. See the online help for information
about the data fields and options.
2. Enter the target authentication user name and password in the iSCSI initiator configuration interface, where
you enable mutual authentication.
An iSNS (Internet Storage Name Service) server can facilitate iSCSI initiator discovery of iSCSI targets in a SAN.
Note: By default, the group disables target discovery by iSNS servers. If you want iSNS servers to discover a
target, you must enable this functionality on the target.
Requirement: The iSNS server must be accessible to all the group members.
A group disables automatic discovery of group targets by iSNS servers only if all the group
members are running PS Series Firmware Version 4.1.4 or a higher version. If a member is running
a previous firmware version, iSNS servers can automatically discover all group targets.
Set up the iSNS server and configure the iSCSI initiator to use the iSNS server for discovery. See
your iSNS server and iSCSI initiator documentation for details.
To display the iSNS configuration in the group, click Group, then Group Configuration, and then the iSCSI
tab. The Group Configuration – iSCSI window appears. See the online help for information about the data fields
and options.
You can specify up to three IP addresses. The group uses only one iSNS server at a time. The first server listed
is the default server. If the default server is not available, the group uses the other servers in the order specified.
Click the up and down arrows to change the IP address order.
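The failover behavior — consult the configured servers strictly in order and use the first one that responds — can be sketched as follows (an illustrative model with invented names, not the group's implementation):

```python
def select_isns_server(configured_servers, available_servers):
    """Return the first configured iSNS server that is currently
    available, or None if no server responds.

    configured_servers: ordered list of IP addresses (first is default)
    available_servers:  set of IP addresses currently reachable
    """
    for server in configured_servers:
        if server in available_servers:
            return server
    return None
```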
To modify the IP address for an iSNS server, select the address in the iSCSI Discovery panel in the Group iSCSI
window and click Modify. Change the IP address, click OK, and then click Save all changes (Control+S).
To delete the IP address for an iSNS server, select the address in the iSCSI Discovery panel in the Group iSCSI
window and click Delete. Then, click Save all changes (Control+S).
You can prevent computers from discovering unauthorized targets by enabling the iSCSI discovery filter. If you
enable the iSCSI discovery filter, initiators only discover targets for which they have the correct access credentials.
2. In the iSCSI Discovery panel, under iSNS discovery filter, select Prevent unauthorized hosts from
discovering targets.
2. In the iSCSI Discovery panel, under iSNS discovery filter, deselect Prevent unauthorized hosts from
discovering targets.
When an initiator tries to log in to a target, the group uses access control records to determine if access should be
authorized. However, access control records do not prevent multiple initiators, either on the same computer or
different computers, from accessing the same target.
Therefore, by default, the group disables multi-host (shared) access to a target, and only one iSCSI qualified
name (IQN) can connect to a target at one time.
Restriction: If all group members are not running PS Series Firmware Version 4.0 or higher, the
group allows multi-host access to targets.
If you disable multi-host access to a volume, when an initiator tries to log in to the volume:
• If there is no iSCSI initiator connection to the volume, the group uses access control records to determine
whether to authorize access.
• If an initiator is connected to the volume, the group compares the IQN of the current connection to the IQN of
the incoming connection. If the IQNs are not the same, access is denied. If the IQNs are the same, the group
uses access control records to determine whether to authorize access.
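The login decision described above can be sketched as a short function. This is an illustrative model only (the names are invented): with multi-host access disabled, a second IQN is rejected before access control records are even consulted.

```python
def login_decision(connected_iqns, incoming_iqn, acl_allows, multi_host=False):
    """Decide whether an initiator login is granted.

    connected_iqns: IQNs of initiators currently connected to the volume
    incoming_iqn:   IQN of the initiator attempting to log in
    acl_allows:     result of evaluating the access control records
    multi_host:     whether multi-host (shared) access is enabled
    """
    if connected_iqns and not multi_host:
        # Multi-host access disabled: a different IQN is denied outright.
        if any(iqn != incoming_iqn for iqn in connected_iqns):
            return "denied"
    # No conflicting connection: fall back to access control records.
    return "granted" if acl_allows else "denied"
```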
However, some environments might need multi-host access to a target. You can enable multi-host access to a target
if you meet one of the following conditions:
• Your cluster environment gives the initiators on each cluster computer a different IQN, and the environment
can manage multiple connections to a target. For example, the environment uses a Distributed Lock Manager
or SCSI reservations.
• Your multipathing solution does not use the same IQN on all initiators, and you cannot modify the names to be
the same.
• You use an environment, such as a virtual server, that can manage multiple connections to the same iSCSI
target (for example, through SCSI reservations).
• Initiators on a single computer do not use the same IQN.
In all cases, use access control records as the primary method of protecting iSCSI targets in a group.
You can enable or disable multi-host access when creating a volume. You can also modify a volume or snapshot
and enable or disable multi-host access.
Both hardware and software iSCSI initiators are available from a variety of vendors. Install and configure an
initiator according to the vendor-supplied instructions. Contact your PS Series support provider for information
about your iSCSI initiator, and see the PS Series Release Notes for initiator information.
Note: Access to iSCSI targets is through TCP/IP port 3260 (the standard iSCSI port).
See your initiator documentation for the exact procedure for logging in to an iSCSI target.
1. Specify the group IP address as the discovery address or target portal in the iSCSI initiator configuration
interface. If you are using iSNS, the initiator automatically discovers targets from the iSNS server that you
configured in the group.
2. Log in to a target. The initiator must match at least one of the target’s access control records.
As part of the login procedure, you might need to enter a CHAP user name and password (secret) and target
authentication credentials. See About iSCSI target access controls.
After the initiator logs in to the iSCSI target, the computer sees the target as a disk that you can format using the
usual operating system utilities. You can then partition the disk and create a file system.
Note: In some file systems, volumes and snapshots must have read-write permission even if the file system is
read-only. See Modifying volume permission and Modifying snapshot permission.
9 Basic volume operations
Basic volume operations consist of creating volumes, setting up access controls, and modifying volume attributes.
You can also clone a volume to create an exact copy.
About volumes
A computer uses an industry-standard iSCSI initiator to access a volume in a group. Most operating systems
provide an iSCSI initiator (in the form of an iSCSI driver or a host bus adapter), with a range of price and
performance options.
To access storage in a PS Series group, you allocate portions of a storage pool to volumes. Each volume appears on
the network as an iSCSI target. Computers with iSCSI initiators can connect to the target, which appears as a
regular disk.
When you connect to a volume, it appears like a regular disk drive that you can format by using the normal
operating system utilities. As the group configuration changes, volumes continue to be accessible through the same
iSCSI targets, and no modifications are necessary.
You assign each volume a size (reported size) and a storage pool. The group automatically load balances volume
data across pool members. Optionally, you can reserve snapshot space or replication space for a volume.
For each volume (and snapshot), the group generates an iSCSI target name, which you cannot modify. An iSCSI
target name includes a prefix (iqn.2001-05.com.equallogic), a string, and the volume name. Initiators use
the target name to connect to a volume.
The following is an example of an iSCSI target name for a volume with the name db3:
iqn.2001-05.com.equallogic:7-8b0900-6d0000000-001ebbc5d80sf0k0-db3
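The structure of a target name can be sketched as simple string assembly. The middle string is a group-generated identifier that you cannot predict or choose; the value used below is purely illustrative:

```python
# Sketch of how a PS Series iSCSI target name is composed: a fixed
# prefix, a group-generated string, and the volume name. The
# generated identifier used here is illustrative only.
PREFIX = "iqn.2001-05.com.equallogic"

def target_name(generated_id, volume_name):
    """Compose an iSCSI target name from the fixed prefix, a
    group-generated string, and the volume name."""
    return f"{PREFIX}:{generated_id}-{volume_name}"

name = target_name("7-8b0900-6d0000000-001ebbc5d80sf0k0", "db3")
# name == "iqn.2001-05.com.equallogic:7-8b0900-6d0000000-001ebbc5d80sf0k0-db3"
```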
Access control records and other mechanisms protect your volumes from unauthorized and uncoordinated access
by iSCSI initiators. See iSCSI target security on page 8-1.
You can use snapshots to protect volume data from mistakes, viruses, or database corruption. To protect against
disasters, you can replicate volume data from one group to another.
• Standard volume. The default volume type is a standard volume. There are no restrictions on a standard
volume. You can enable (and disable) thin provisioning on a standard volume.
• Template volume. A template volume is a type of volume that is useful if your environment requires multiple
volumes that share a large amount of common data. When you write the common data to a standard volume,
you can convert it to a template volume and then create thin clones. Template volumes are read-only to protect
the common data.
• Thin clone volume. Thin clones are based on a template volume and enable you to use space efficiently in
storage environments that require multiple volumes with a large amount of common data. After you create a
thin clone, you can write to the thin clone. See About template volumes and thin clones on page 10-6.
You can replicate any volume type, resulting in a replica set for the volume. In addition, you can fail over any
volume type, resulting in a recovery version of the volume. However, you can only fail back a standard volume or
a thin clone volume.
See the online help for information about the data fields and options.
When you create a volume, you specify the reported size for the volume, which is the maximum amount of space
that the group might be able to allocate to the volume. You can increase or decrease the reported size.
The reported size is seen by iSCSI initiators. If a write to a volume exceeds the reported size, the write fails, and the
group generates event messages.
The actual amount of pool space that the group allocates to a volume is called the volume reserve. The value of the
volume reserve depends on whether you enable thin provisioning on a volume:
Space allocated for volume operations (for example, snapshot reserve and local replication reserve) is based on the
volume reserve.
You cannot use space that the group allocates to a volume (or for volume operations) for other purposes. Therefore,
make sure you allocate space only when necessary.
You must fully understand application and workload space requirements to allocate the correct amount of space.
Access control records prevent unauthorized computer access to iSCSI targets (volumes or snapshots). A volume
and its snapshots share a list of access control records. An access control record can apply to the volume, its
snapshots, or both. For example, you can authorize computer access to a volume and its snapshots or only to the
volume.
When you create a volume, you can create an access control record for the volume. You can later create additional
access control records (up to 16) for a volume. See Configuring access control records on page 9-10.
In addition, you can allow or disallow volume access from multiple initiators, depending on your configuration
needs. See Allowing or disallowing multi-host volume access on page 9-14.
A snapshot is a point-in-time copy of volume data that can protect against mistakes, viruses, or database
corruption. You can recover data from a snapshot by setting it online or by restoring the volume from a snapshot.
See Snapshot management.
To protect against disasters, you can replicate volume data from one group to another. A replica is a point-in-time
copy of volume data that is located on a different group and has no dependencies on the original volume. In the
event of a disaster, you can host the volume from the recovery group and later fail back to the original group, with
minimal disruption to users. See Volume replication.
Recommendation: Dell recommends that you protect data by using a robust backup application, in addition to
snapshot and replication functionality.
Volume attributes
Table 9-1 describes the attributes that allocate space and set the characteristics of a volume. The first column lists
volume attributes, and the second column describes them.
You set some attributes when you create a volume; other attributes use default values. In most cases, you can
modify all the volume attributes. Template volumes and thin clones have some restrictions.
To display group-wide default volume settings, click Group, then Group Configuration, and then the
Defaults tab. The Group Configuration – Defaults window appears, displaying the information about the default
volume settings. You can change the default volume settings. See Modifying group-wide volume settings.
You can change a setting for an existing volume. See the online help for information about the data fields and
options. See also Volume attributes on page 9-3.
You can modify the group-wide default values to meet the needs of your configuration.
Note: Changes to the group-wide default values apply only to new volumes.
1. Click Group, then Group Configuration, and then the Defaults tab.
2. In the Group Defaults window, change the default settings.
• Volume attributes and the group-wide default settings that the group applies to a volume. See Volume attributes
on page 9-3 and Displaying group-wide default volume settings on page 9-5.
• Volume security and access controls. See iSCSI target security.
• Risks and benefits associated with thin provisioning before applying this functionality to a volume. See About
thin provisioning.
1. Click Volumes in the lower-left panel and then click Create volume.
• Volume name
• Optional volume description
• Storage pool
3. Click Next.
The values in the Pool Space table depend on the reported size and thin provisioning setting. If creating the
volume exceeds the capacity of the pool, the table cell showing free pool space displays a negative value.
If you enable thin provisioning, the Create Volume – Space dialog box allows you to adjust the default
settings for the following thin-provisioned volume attributes, which are based on a percentage of the
reported size:
5. Click Next.
6. In the Create Volume – iSCSI Access dialog box, specify:
• Conditions a computer must match to connect to the volume and its snapshots. Specify a CHAP user name,
IP address, or iSCSI initiator name. This information generates an access control record that applies to the
volume and its snapshots. See About iSCSI target access controls on page 8-1.
• Permission for the volume, either read-write or read-only.
• Whether to allow or disallow (default) access to the volume and its snapshots by initiators with different
iSCSI qualified names (IQNs). See Multi-host access to targets on page 8-6.
7. In the Create Volume – Summary dialog box, review the volume configuration. If the configuration is correct,
click Finish. Click Back to make changes.
To display the new volume, expand Volumes in the far-left panel and select the name of the new volume.
Displaying volumes
To display all the volumes in a group, click Volumes in the lower-left panel and then Volumes in the far-left tree.
See the online help for information about the data fields and options.
1. To display the thin clones attached to each template volume, in the far-left column, pull down the Tree view
options menu and select Show thin clones under templates.
2. Select Organize by volume type to display template volumes and thin clones in separate categories.
Table 9-2 shows the icons that identify a volume in the Group Manager GUI, depending on the volume’s
configuration (for example, a distinct icon identifies a template volume).
Move the pointer over a volume to display a context message showing the requested volume status and the current
volume status.
See the online help for information about the data fields and options.
An asterisk in the CHAP user, IP address, or iSCSI initiator column means that any value meets the condition. If a
record shows asterisks in all columns, the volume or snapshot has unrestricted access, which Dell does not
recommend.
See the online help for information about the data fields and options. See also iSCSI target security on page 8-1.
See the online help for information about the data fields and options.
Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name, and
then click the Replication tab. The Volume Replication window appears, containing the following panels:
See the online help for information about the data fields and options. See also Volume replication on page 12-1.
Note: Because template volumes are not supported in volume collections, the Collections tab is not applicable
to template volumes.
See the online help for information about the data fields and options.
• Schedules summary panel – Provides information about the snapshot and replication schedules for a volume,
schedule status, and the next scheduled event.
• Snapshot and Replication schedules panel – Provides information about individual schedules, event times, and
schedule status.
Select Also show schedules for collections that include the volume or press Alt+A to include
the snapshot and replication schedules for volume collections that include the volume.
See the online help for information about the data fields and options.
• Volume iSCSI settings panel – Provides information about the iSCSI target name and alias name.
• iSCSI connections panel – Provides information about the iSCSI initiator name and connection statistics.
See the online help for information about the data fields and options.
You can set up an access control record when you create a volume. You can later set up additional access control
records (up to 16) and apply them to the volume or its snapshots.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click the Access tab.
The Access Control List panel displays the access control records, the conditions in each record, and whether
the record applies to the volume or its snapshots.
Note: An asterisk in the CHAP user, IP address, or iSCSI initiator column means that any value meets the
condition. If a record shows asterisks in all columns, the volume or snapshot has unrestricted access,
which Dell does not recommend.
2. In the Access Control List panel, click Add. The Add Access Control Record dialog box appears.
3. Select the conditions that a computer must meet and specify the required information (CHAP user name, IP
address, iSCSI initiator name).
4. Select whether the access control record applies to the volume, its snapshots, or the volume and its snapshots
(default).
5. Click OK.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click the Access tab.
2. In the Volume Access window, select the record and click Modify. Change the information as needed and click
OK.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click the Access tab.
2. In the Volume Access window, select the record and click Delete. Confirm that you want to delete the record.
Notes: You cannot change the iSCSI initiator name in an access control record. Instead, you must delete the record
and recreate it using the new initiator name.
If you modify or delete a record, computers that met the original conditions might not meet the new
conditions and, therefore, might not be able to log in to the target.
Cloning a volume consumes space from the pool where the original volume resides. The space required for cloning
a volume is equal to the volume reserve at the time of the clone operation. Reserving snapshot space for the new
volume requires additional pool space.
Restriction: If the original volume is a recovery volume, the new volume is not a recovery volume.
See Volume attributes for a description of the attributes that apply to a new volume. See iSCSI target security for
information about volume security.
Cloning a volume
To clone a volume, including a template volume or a thin clone volume:
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name.
2. Click Clone. The following dialog box appears: Clone Volume – Settings. Specify the new volume name and
description.
3. Click Next. The following dialog box appears: Clone Volume – Space. Enable or disable thin provisioning
(only applicable to standard volumes). You can also change the thin provisioning space allocation settings for
the new volume:
• Minimum volume reserve
• In-use space warning threshold
• Maximum in-use space (maximum volume reserve)
The group updates the values in the Pool Space table, based on the space settings. If the new volume exceeds
the capacity of the pool, the table cell showing free pool space becomes red, displays a negative number, and
an error message appears.
Optionally, specify the amount of space, as a percentage of the volume reserve, to reserve for snapshots of the
new volume.
4. Click Next. The following dialog box appears: Create Volume – iSCSI Access. Specify the following:
• Conditions that a computer must match to connect to the volume and its snapshots. Specify a CHAP user
name, IP address, or iSCSI initiator name. This information generates an access control record that applies
to the volume and its snapshots. See About iSCSI target access controls.
• Permission for the volume, either read-write or read-only.
• Whether to allow or disallow (default) access to the volume and its snapshots by initiators with different
iSCSI qualified names (IQNs). See Multi-host access to targets.
5. Click Next. The following dialog box appears: Clone Volume – Summary. Review the volume configuration.
Click Volumes in the far-left panel. The new volume appears in the list of volumes. See Displaying volumes on
page 9-6.
Note: Enabling or disabling thin provisioning on a volume is an advanced operation. See Advanced volume
operations.
• If you modify a volume name, the iSCSI target name (and any snapshot or replica set names) does not change.
However, if you modify a volume name, and the volume alias is set to be the same as the volume name, the
alias also changes.
• If you modify the name of a replicated volume, you continue to identify the replica set on the secondary by the
original volume name.
Requirement: A volume name must be unique in the group and can contain up to 63 alphanumeric characters
(including periods, hyphens, and colons). A volume description can contain up to 127 alphanumeric characters.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
then click Modify settings, and then click the General tab.
2. In the Modify volume settings – General dialog box, modify the name or the description. The volume name
must be unique in the group and can contain up to 63 alphanumeric characters, including periods, hyphens, and
colons. The description can contain up to 127 ASCII characters.
3. Click OK.
When you create a volume, it has an alias only if the group-wide default is to use the volume name as the alias.
Otherwise, the volume does not have an alias.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
then click Modify settings, and then click the Advanced tab.
2. In the Modify volume settings – Advanced dialog box, specify a volume alias in the Public alias field.
Like its name, the volume alias can contain up to 63 alphanumeric characters, including periods, hyphens, and
colons.
3. Click OK.
2. In the Modify volume settings – General dialog box, change the name of the administrator or select none in
the Volume administrator field.
3. Click OK.
Requirement: To set a volume online, each member that contains volume data must be online.
To set a volume online, click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select
the volume name, and then click Set volume online.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click Set volume offline.
Requirement: To change a volume permission to read-only, you must first set the volume offline.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click Set access type.
2. In the Set Access Type dialog box, select read-write or read-only permission.
3. Click OK.
You can allow or disallow multi-host (shared) access to a volume. If you disallow multi-host access to a volume,
only one iSCSI qualified name (IQN) can connect to the volume at one time. However, if you have a certain
environment, you might want to allow multi-host access to a volume. See Multi-host access to targets.
Requirement: Before disallowing multi-host access to a volume, disconnect all initiators from the volume except
one, unless the initiators have the same IQN. If multiple initiators with different IQNs have
connections to the volume, you cannot disallow multi-host access.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click Set access type.
2. In the Set Access Type dialog box, allow or disallow multi-host access.
3. Click OK.
By default, the group disables automatic discovery of targets by iSNS servers. If you want an iSNS server to
automatically discover a group target, you must enable this functionality on the target.
Requirement: A group disables automatic discovery of group targets by iSNS servers only if all the group
members are running PS Series Firmware Version 4.1.4 or a higher version. If a member is
running a previous firmware version (for example, if you downgrade a member from V4.1.4),
iSNS servers can automatically discover all group targets.
You cannot use the Group Manager GUI to enable or disable iSNS discovery for a volume or snapshot. Instead,
you must use the CLI. See the CLI Reference manual for the command formats.
Deleting a volume
You can delete a volume. Space that the group allocated to the volume becomes part of free pool space.
Note: If you delete a volume, the group also deletes its snapshots. However, the group does not delete any volume
replicas on the secondary group.
Requirement: You must set a volume offline before you delete it. The group closes any active iSCSI connections
to the volume.
Restriction: You cannot delete a template volume if it still has thin clones attached to it or if recovery thin clones
exist.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click Delete volume, Delete template, or Delete thin clone.
2. Confirm that you want to delete the volume and its data.
10 Advanced volume operations
Only knowledgeable group administrators should perform advanced volume operations, including creating
thin-provisioned volumes, using template volumes and thin clones, changing a volume size, using volume
collections, and scheduling volume operations.
Recommendation: Dell recommends that you fully understand the benefits and risks of using thin provisioning
before implementing it in your environment. Environments that use thin provisioning should
have around-the-clock support to handle any space allocation issues and prevent service level
disruption.
Thin provisioning is beneficial in a number of environments. For example, if your environment does not
easily allow you to expand file systems or raw disks, you can give thin-provisioned volumes excessively large
reported sizes to account for future growth. The group automatically allocates space to volumes only if usage
patterns warrant the space.
Thin provisioning also helps you plan for future group expansion. For example, you can size volumes according to
their maximum possible space requirements, even if the group currently cannot provide all the required space (that
is, you can “over-provision” group space). As volume usage increases, you can expand group capacity, with no
user impact. You do not need to change drive letters, expand volume sizes, or add volumes.
However, if your environment requires guaranteed space for volumes, thin provisioning might be inappropriate.
Thin provisioning is most effective if you can accurately predict how volume usage increases over time.
When you create a volume, you specify the reported size for the volume. The reported size is seen by iSCSI
initiators. The actual amount of pool space that the group allocates to a volume is called the volume reserve. The
value of the volume reserve depends on whether you enable thin provisioning on a volume:
• No thin provisioning – The volume reserve equals the reported volume size. For example, even if only 10% of a
volume is in use, the group allocates the full reported size.
• Thin provisioning – The volume reserve is equal to or less than the reported size, depending on volume usage
and the thin provisioning settings.
Initially, the group allocates the minimum amount of volume reserve for a thin-provisioned volume. The
minimum is 10% of the reported volume size or the user-specified percentage.
As initiators write to the volume, free volume reserve decreases. When free volume reserve falls below a
threshold, the group increases volume reserve, up to a user-defined maximum (assuming available free pool
space):
– For a volume with a reported size of 100 GB or greater, when free volume reserve is less than 6 GB, the
group allocates an additional 10 GB.
– For a volume with a reported size of less than 100 GB, when free volume reserve falls below 6% of the
reported volume size, the group allocates an additional 10% of the reported volume size.
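The two allocation rules above can be expressed as a small helper. This is a sketch of the documented thresholds, not the group firmware logic itself:

```python
def reserve_increase_gb(reported_gb, free_reserve_gb):
    """Return the additional volume reserve (in GB) the group allocates
    when free volume reserve falls below the documented threshold.
    Sketch of the rules above; returns 0 when no growth is triggered."""
    if reported_gb >= 100:
        # Reported size 100 GB or greater: allocate 10 GB more when
        # free volume reserve drops below 6 GB.
        return 10 if free_reserve_gb < 6 else 0
    # Reported size under 100 GB: allocate an additional 10% of the
    # reported size when free reserve drops below 6% of the reported size.
    return reported_gb * 0.10 if free_reserve_gb < reported_gb * 0.06 else 0
```

For example, a 500 GB thin-provisioned volume with 5 GB of free volume reserve triggers a 10 GB allocation, while a 50 GB volume with 2 GB free (below its 3 GB threshold) triggers a 5 GB allocation.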
Event messages inform you when in-use volume reserve surpasses a user-defined limit and reaches the
maximum.
• Minimum volume reserve – Minimum amount of pool space that the group allocates to the volume, based on
a percentage of the reported volume size. The default group-wide setting is 10%.
• In-use space warning limit – Amount of in-use volume reserve that results in notification, based on a
percentage of the reported volume size. The default group-wide setting is 60%.
When in-use volume reserve reaches the in-use warning limit, the group generates a warning event message.
Additional warning event messages occur as follows:
– For volumes larger than 200 GB, each time the in-use volume reserve increases by an additional 10 GB.
– For volumes smaller than 200 GB, each time the in-use volume reserve increases by an additional 5%.
For example, if you create a thin-provisioned volume with a size of 500 GB and set the warning limit to 75%, a
warning occurs when the amount of in-use volume reserve is more than or equal to 75% of 500 GB, or 375 GB.
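The example above can be generalized with a short sketch that computes the first few warning points. It assumes, based on the rules above, that the 5% step for smaller volumes is 5% of the reported size:

```python
def warning_points_gb(reported_gb, warn_limit_pct, count=3):
    """Return the first `count` in-use space warning thresholds, in GB.
    Sketch of the documented behavior: the first warning occurs at the
    warning limit, then every additional 10 GB for volumes larger than
    200 GB, or every additional 5% of the reported size for smaller
    volumes (the 5% base is an assumption)."""
    first = reported_gb * warn_limit_pct / 100
    step = 10 if reported_gb > 200 else reported_gb * 0.05
    return [first + i * step for i in range(count)]

# The 500 GB / 75% example from the text: first warning at 375 GB,
# then a new warning every additional 10 GB.
points = warning_points_gb(500, 75)   # [375.0, 385.0, 395.0]
```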
• Maximum in-use space – Maximum amount of in-use volume reserve (maximum size of the volume reserve),
based on a percentage of the reported volume size. The default group-wide setting is 100%.
The maximum in-use space value determines the behavior when the volume reserve reaches its maximum size:
– If the maximum in-use space value is less than 100%, and an initiator write exceeds this limit, the write
fails. The group sets the volume offline and generates event messages.
If you increase the maximum in-use space value or the reported volume size (both operations require free
pool space), the group automatically sets the volume online and writes succeed.
– If the maximum in-use space value is 100%, and an initiator write exceeds this limit, the volume is not set
offline; however, the write fails, and the group generates event messages. If you increase the reported size
of the volume, writes succeed.
This behavior is the same as when in-use space for a volume that is not thin-provisioned reaches its
reported size.
The maximum in-use space value helps prevent one volume from consuming all the pool free space and setting
other thin-provisioned volumes offline.
You can change the group-wide default volume settings (see Modifying group-wide volume settings on page 9-5),
override the default values when you create a thin-provisioned volume, or modify a volume and change the
settings.
Thin provisioning is not appropriate for all environments or volumes. You must fully understand thin provisioning
before implementing the functionality on a volume. See About thin provisioning on page 10-1.
• Enabling thin provisioning on a volume usually decreases the amount of space that the group allocates to the
volume (called the volume reserve).
• Enabling thin provisioning changes the amount of allocated snapshot space and replication space, because the
group allocates snapshot space and replication space based on a percentage of the volume reserve.
However, the group increases the snapshot space and replication space percentages to prevent the deletion of
snapshot or replication data.
1. Click Volumes in the lower-left panel, then expand Volumes, then select the volume, then click Modify
settings, and then click the Space tab.
2. In the Modify volume settings – Space dialog box, select Thin-provisioned volume. The Pool Space table
values change.
3. Optionally, use the sliders to modify the group-wide default thin provisioning space settings:
4. Click OK.
Make sure you carefully monitor the space usage for a thin-provisioned volume. See Monitoring volumes,
collections, and snapshots on page 15-24.
Restriction: You cannot disable thin provisioning on a template volume, thin clone volume, recovery
template volume, or a recovery thin clone volume.
• If you disable thin provisioning on a volume, the group allocates the full reported volume size (the reported
size and the volume reserve are the same). Therefore, you must have sufficient free pool space.
• Because the group bases snapshot space and replication space on a percentage of the volume reserve, disabling
thin provisioning increases snapshot space and replication space. Therefore, you must have sufficient free pool
space.
• In some cases, if you disable thin provisioning on a volume, the group automatically decreases the snapshot
reserve percentage to prevent an excessive allocation of snapshot space.
This can occur if you previously set the snapshot reserve percentage to a high value to prevent the group from
deleting snapshots (for example, if you increased the snapshot reserve percentage from 100% to 500%). If you
disable thin provisioning on the volume, the group might decrease the percentage to a more appropriate value,
closer to 100%. The group does not decrease the snapshot reserve percentage if the decrease requires deleting
snapshots.
1. Click Volumes, then expand Volumes, then select the volume, then click Modify settings, and then click
the Space tab.
2. In the Modify volume settings – Space dialog box, deselect Thin-provisioned volume.
The Pool Space table values change, based on the new volume setting. If the volume change exceeds pool
capacity, the free space cell displays a negative value.
3. Click OK.
1. Click Volumes, then expand Volumes, then select the volume, then click Modify settings, and then click
the Space tab.
2. In the Modify volume settings – Space dialog box, use the sliders to adjust the settings for:
The Pool Space table values change, based on the new values. If a change exceeds capacity, the free pool space
cell displays a negative value.
3. Click OK.
Warning: Not all operating systems, file systems, and applications easily handle volume size changes or behave
in a predictable manner when you change a volume size. Before changing a reported volume size, Dell
recommends that you fully understand the impact on the operating system, file system, and applications
using the volume.
Changing the reported volume size affects the space that the group allocates to the volume and for volume
snapshots and replication:
• For a volume that is not thin-provisioned, changing the reported size proportionally changes the amount of
space the group allocates to the volume (the volume reserve).
• For a thin-provisioned volume, changing the reported size changes the minimum volume reserve, in-use space
warning limit, and maximum in-use space, because they are based on a percentage of the reported size. The
space that the group allocates to the volume (the volume reserve) might also change.
• If the volume reserve changes due to the reported volume size change, snapshot space and replication space
also change.
Note: If you are replicating the volume, the secondary group does not recognize the reported size change until
the next replication.
See About reported volume size on page 10-5 for information on the impact of changing the reported size.
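For a thin-provisioned volume, the scaling described above can be sketched directly, since each setting is a percentage of the reported size. The function name and the percentage values in the example are illustrative assumptions, not values taken from this manual.

```python
def thin_prov_settings_mb(reported_mb, min_reserve_pct, warn_pct, max_pct):
    """Thin-provisioning settings scale with the reported size because
    each is defined as a percentage of it (percentages are illustrative)."""
    return {
        "min_volume_reserve_mb": reported_mb * min_reserve_pct / 100,
        "in_use_warning_mb": reported_mb * warn_pct / 100,
        "max_in_use_mb": reported_mb * max_pct / 100,
    }

# Example: 10 GB reported size with assumed 10% / 60% / 100% settings
print(thin_prov_settings_mb(10240, 10, 60, 100))
```

Changing `reported_mb` changes all three derived values at once, which is why resizing a thin-provisioned volume affects its reserve and warning limits.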
1. Click Volumes, then expand Volumes, then select the volume, then click Modify settings, and then click
the Space tab.
2. In the Modify volume settings – Space dialog box, specify the new reported volume size in the Volume size
field. If the size you specify is not a multiple of 15 MB, the group rounds up the value to the nearest multiple
of 15 MB.
The values in the Pool Space table change, based on the new volume size. If you configured the volume for
replication, a table showing delegated space on the replication partner also appears. If the new volume size
exceeds the capacity of a pool, the free space cell displays a negative value.
3. For a thin-provisioned volume, optionally modify the in-use warning value and maximum in-use space value
using the slider bars. See Thin provisioning space settings on page 10-2.
4. Click OK.
5. Optionally, confirm that you want to create a snapshot of the current volume prior to the resizing.
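The rounding rule for the reported size can be expressed as a one-line calculation; the helper name here is ours, not part of the product.

```python
def rounded_volume_size_mb(requested_mb):
    """Round a requested volume size up to the nearest multiple of 15 MB,
    as Group Manager does for the reported size (illustrative sketch)."""
    return -(-requested_mb // 15) * 15  # ceiling division

print(rounded_volume_size_mb(100))  # -> 105
print(rounded_volume_size_mb(90))   # -> 90
```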
See About reported volume size on page 10-5 for information on the impact of changing the reported size of a
volume.
Caution: If you decrease a volume size to less than the amount of space currently in use, you can lose data.
You cannot use the Group Manager GUI to decrease the reported size of a volume. Instead, you must use the
following Group Manager CLI command:
See the CLI Reference manual for more information about using CLI commands.
To use pool space more efficiently, instead of cloning standard volumes, you can create one volume and populate it
with the common data. After you convert the volume to a template volume, you can create multiple thin clone
volumes and then write to each thin clone to make it unique. For example, you can add data such as a page file to a
thin clone.
Because a thin clone shares the common data in the template volume, each thin clone only consumes the space
needed to store the differences (or deltas) between the thin clone and the template volume.
Initially, a template volume and thin clone are identical in reported size and content. Because the group allocates
space to the new thin clone in the same way it allocates space to a new standard thin-provisioned volume, only the
minimum volume reserve is consumed from free pool space.
When initiators write data to a thin clone, space is consumed from free volume reserve. As needed, the group
allocates additional volume reserve to the thin clone, up to the maximum in-use space setting for the thin clone.
You can also modify the thin clone and change the data that the thin clone shares with the template volume.
However, the data in the template volume is always preserved because a template volume is read-only. Group
Manager tracks the amount of data that is shared between each thin clone and template volume and displays it in
the Volume Status window.
See Space considerations for template volumes and thin clones on page 10-7.
With a few exceptions, all normal volume operations apply to template volumes and thin clones. See Restrictions
on template volumes and thin clones on page 10-8.
Thin clones are considered attached to the template volume and cannot exist without it, similar to how snapshots
depend on the base volume.
The group always maintains and shows the dependency of a thin clone on a template volume. If you expand
Volumes in the far-left panel, you can choose to display all volumes in alphabetical order or display thin clones
under the template volume on which they depend.
If you replicate a template volume and its attached thin clones, the primary and secondary groups maintain the
dependency. For example, you must replicate a template volume before replicating any of its thin clones. On the
secondary group, if you expand Inbound Replicas in the far-left panel, you can choose to display thin clone
replica sets under the template replica set on which they depend.
In addition, the group maintains and shows the dependency of a thin clone on a template volume (or a thin clone
replica set on a template replica set), even if a volume (or replica set) changes state, as occurs during failover and
failback operations. For example, if you promote a thin clone replica set to a recovery thin clone, you can still see
the dependency of the recovery thin clone on the template replica set.
Because of this dependency, the group does not allow you to delete a template volume, a template replica set, or a
recovery template if a thin clone, thin clone replica set, or recovery thin clone depends on it. Also, you cannot
disable replication on a template volume until you disable replication on all its thin clones.
When you convert a standard volume to a template volume:
• Thin provisioning is enabled on the volume, the volume is set offline, and the volume permission is set to read-
only.
Note: If you are using the Group Manager CLI, you must perform these tasks manually before you can
convert to a template volume.
• Volume reserve decreases to the amount of in-use space (or the minimum volume reserve, whichever is
greater), and free volume reserve becomes unreserved space.
• Snapshot reserve is adjusted, based on the new volume reserve. If necessary to preserve existing snapshots, the
snapshot reserve percentage is increased.
When you create a thin clone volume, it has the same reported size and contents as the template volume. If you
mount the thin clone, you can see the data that the thin clone shares with the template volume.
The group allocates only the minimum volume reserve when you first create a thin clone. The group allocates
additional space if you specify snapshot reserve for the thin clone. Just as with a standard thin-provisioned volume,
as you write to a thin clone, the group allocates more space and increases the volume reserve.
In the Volume Status window for a thin clone volume, the Volume Space table shows the space utilization for the
thin clone, including the in-use space, which is the portion of volume reserve that is storing data unique to the thin
clone. When you first create a thin clone, it has zero in-use space.
In the Volume Status window for a thin clone volume, the Shared Space table (only appears for thin clones) shows
the amount of space that is shared with the template volume and the unshared (in-use) space. As you write to the
thin clone, unshared (in-use) space increases. In some cases, when you write to a thin clone, shared space can
decrease (for example, if you are overwriting shared data).
Free space in the Shared Space table shows the amount of unwritten thin clone space (that is, the reported volume
size minus the combined shared space and unshared space). This represents the amount of data you can write to the
thin clone before you need to increase its size. This value is the same as the value for “unreserved” space in the
Volume Space table in the Volume Status window for the template volume.
If you detach a thin clone, the resulting new standard volume has in-use space equal to the combined shared space
and unshared space, as shown in the Shared Space table in the Volume Status window.
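The Shared Space accounting can be summarized in two small calculations. This is a sketch of the arithmetic the text describes; the function names and example figures are assumptions for illustration.

```python
def thin_clone_free_mb(reported_mb, shared_mb, unshared_mb):
    """Free space in the Shared Space table: the reported volume size
    minus the combined shared and unshared (in-use) space."""
    return reported_mb - (shared_mb + unshared_mb)

def detached_in_use_mb(shared_mb, unshared_mb):
    """After detaching a thin clone, the new standard volume's in-use
    space equals the combined shared and unshared space."""
    return shared_mb + unshared_mb

# Example: 1024 MB thin clone sharing 900 MB with its template,
# with 24 MB of unique (unshared) data
print(thin_clone_free_mb(1024, 900, 24))  # -> 100
print(detached_in_use_mb(900, 24))        # -> 924
```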
Note: When you convert to a template volume, the group disables any schedules that include the volume. If you
later convert the template volume to a standard volume, the group does not automatically enable the
schedules.
Requirement: Before converting to a template volume, make sure the standard volume contains all the data that is
shared with the thin clones. Also, make sure that the standard volume has sufficient free space to
hold the approximate amount of data that you write to each thin clone.
For example, if the reported size of the template volume is 1 GB, and in-use space is 900 MB, you
can write approximately 100 MB to each thin clone before you must increase the thin clone size.
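The estimate in the Requirement example is simply the template's reported size minus its in-use space. The helper name is ours; the figures follow the manual's round-number example (treating 1 GB as 1000 MB).

```python
def writable_per_thin_clone_mb(template_reported_mb, template_in_use_mb):
    """Approximate space writable to each thin clone before it must be
    resized: the template's reported size minus its in-use space."""
    return template_reported_mb - template_in_use_mb

print(writable_per_thin_clone_mb(1000, 900))  # -> 100
```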
1. Click Volumes, then expand Volumes, then select the volume, and then click Convert to template.
Restriction: You must disable replication before you can convert a template volume to a volume.
Note: Space used to store template replicas on the secondary group becomes unmanaged if you convert a template
volume to a standard volume.
1. Click Volumes, then expand Volumes, then select the volume, and then click Convert to volume.
2. In the Create Thin Clone – Volume Settings dialog box, enter a unique name and optional description and click
the Next button.
3. In the Create Thin Clone – Space dialog box, change the snapshot reserve setting and the thin provisioning
settings, and click the Next button.
4. In the Create Thin Clone – iSCSI Access dialog box, create an access control record for the volume, select the
permission (read-write or read-only), and specify the multi-host access setting. Then, click the Next button.
5. In the Summary dialog box, click the Finish button if the thin clone configuration is correct. Click the Back
button to make changes.
If you detach a thin clone from a template volume, the thin clone is converted to a standard volume and no longer
shares space with the template volume. Therefore, when you detach a thin clone, the volume reserve for the thin
clone increases by the amount of space the thin clone shares with the template volume.
Note: If you detach a thin clone from a template volume that is bound to a member or has a RAID preference, the
resulting volume does not inherit the binding or the RAID preference.
Restriction: You cannot detach a thin clone if replication is enabled for the thin clone.
1. Click Volumes, then expand Volumes, then select the thin clone, and then click Detach from template.
1. Click Volumes, then expand Volumes, and then click the template volume or thin clone.
2. Click the Status tab to display the Volume Status window.
For template volumes, there is no Collections tab. You cannot use a template volume in a volume collection.
For template volumes, click the Thin Clones tab to display the thin clones that are attached to the template
volume.
To display thin clones under the template volume in the far-left panel, pull down the tree view options menu and
select Show thin clones under templates.
A volume collection includes one or more volumes from any pool. In a single operation, you can create snapshots
of the volumes (a snapshot collection) or replicas of the volumes (a replica collection).
2. In the Create Volume Collection – General Settings dialog box, specify a name (up to 63 ASCII characters) for
the collection and an optional description (up to 127 ASCII characters). Then, click Next.
3. In the Create Volume Collection – Components dialog box, select the volumes to include in the collection (up
to eight volumes) and click Next.
4. Review the Create Volume Collection – Summary dialog box and, if satisfactory, click Finish. To make
changes, click Back.
The volume collection appears in the far-left GUI panel, under Volume Collections.
See the online help for information about the data fields and options.
Status tab
Click Volumes, then expand Volume Collections, then select the collection. Click the Status tab. The
Volume Collection Status window appears, containing the following panels:
• Collection status panel – Provides information about snapshot collections and replication status.
• Collection volumes panel – Provides information about:
– Volumes in the collection, and the storage pool
– Capacity and reserve
– Volume status and number of snapshots
– Replication partner
– iSCSI connections
The information is sorted by Volume name. You can sort the information by clicking the column headings.
See the online help for information about the data fields and options.
Snapshots tab
Click Volumes, then expand Volume Collections, then select the collection. Click the Snapshots tab. The
Volume Collection Snapshot window appears, containing the following panels:
• Snapshot summary panel – Provides information about snapshot collections, schedules, and scheduled events.
• Snapshots panel – Provides information about snapshot timestamp, schedule name, security, and connections.
See the online help for information about the data fields and options.
Replicas tab
Click Volumes, then expand Volume Collections, then select the collection. Click the Replicas tab. The
Volume Collection Replicas window appears, containing the following panels:
• Replication summary panel – Provides information about the replication partner and replication schedules and
schedule status.
• Remote replicas panel – Provides information about remote replicas and their status.
See the online help for information about the data fields and options.
Schedules tab
Click Volumes, then expand Volume Collections, then select the collection. Click the Schedules tab.
• Schedules summary panel – Provides information about snapshot and replication schedules, schedule status,
and scheduled events.
• Snapshot and replication schedules panel – Provides information about objects that a schedule can create and
the parameters of the schedule.
You can use the Snapshot and Replication Schedules panel to create, modify, or delete scheduled snapshot and/or
replication operations. See the online help for information about the data fields and options.
• To change the collection name or description, click the General tab and modify the name (up to 63 ASCII
characters) or description (up to 127 ASCII characters).
• To add volumes or remove volumes from the collection, click the Components tab. Select and deselect
volumes.
3. Click OK.
1. Click Volumes, then expand Volume Collections, then select the collection, and then click Delete
volume collection.
Note: Using a schedule can generate a large number of snapshots or replicas, so make sure that you have sufficient
snapshot or replication space. You can set a limit on the maximum number of snapshots or replicas that a
schedule creates.
If a volume is part of a volume collection, make sure a schedule for the collection does not overlap a schedule for
the volume.
Restriction: You cannot use manual transfer replication with a replication schedule.
Schedule attributes
Table 10-2 describes the schedule attributes. You set the attribute values when you create a schedule, and you can
later modify the schedule to change the values.
Creating a schedule
When you create a schedule, you specify the attributes described in Table 10-2.
• For a volume, click Volumes, then expand Volumes, then select the volume, and then click Create
schedule.
• For a volume collection, click Volumes, then expand Volume Collections, then select the collection,
and then click Create schedule.
Click Next.
3. The next dialog box depends on the selected schedule frequency and enables you to control how often the
schedule runs, the number of snapshots or replicas to keep, and whether the snapshots are read-write or read-
only.
If you select Run once for a frequency, the Create Schedule – Run Once dialog box appears.
If you select Hourly schedule for a frequency, the Create Schedule – Hourly Schedule dialog box appears.
If you select Daily schedule for a frequency, the Create Schedule – Daily Schedule dialog box appears.
The Schedules Summary panel shows the status of the volume schedules and the creation time for the next
scheduled snapshot or replica. A running schedule is a schedule that you enabled.
2. In the Snapshot and Replication Schedules panel, select a schedule to display details. To display the schedules
for volume collections that include the selected volume, select Also show schedules for collections
that include the volume.
See the online help for information about the data fields and options.
Modifying a schedule
Table 10-2 describes the schedule attributes. You can modify any attribute except for:
• For a volume, click Volumes, then expand Volumes, then select the volume, and then click the
Schedules tab.
• For a volume collection schedule, click Volumes, then expand Volume Collections, then select the
collection, and then click the Schedules tab.
2. Select the schedule in the Snapshot and Replication Schedules panel and click Modify.
Deleting a schedule
Deleting a schedule does not affect the snapshots or replicas that the schedule created.
• For a volume, click Volumes, then expand Volumes, then select the volume, and then click the
Schedules tab.
• For a volume collection schedule, click Volumes, then expand Volume Collections, then select the
collection, and then click the Schedules tab.
2. Select the schedule in the Snapshot and Replication Schedules panel and click Delete.
You can override automatic performance load balancing by enabling a RAID level preference (RAID 10, RAID 50,
RAID 5, or RAID 6) on a volume.
If you enable a RAID preference, the group attempts to store volume data on pool members with that RAID level.
The group still uses capacity-based load balancing on the volume.
If you disable a RAID preference on a volume, the group resumes automatic performance load balancing.
Restriction: Thin clones inherit the RAID preference, if any, of the template volume. You cannot set
a separate RAID preference for a thin clone.
Requirement: To enable a volume RAID preference, make sure at least one member in the volume’s pool has the
preferred RAID level. If no pool member has the preferred RAID level, the group ignores the
RAID preference until a member exists with the preferred RAID level.
See Displaying storage pools on page 7-3 to display the RAID levels of the pool members.
1. Click Volumes, then expand Volumes, then select the volume, then click Modify settings, and then click
the Advanced tab.
2. In the Modify volume settings – Advanced dialog box, under Volume RAID preference, select the preferred
RAID level or select Automatic to disable a RAID preference.
3. Click OK.
You can override automatic performance load balancing (or ignore any RAID preference for the volume) by
binding a volume to a specific pool member.
If you bind a volume to a pool member, the group stores the volume data on the member, instead of distributing
data across multiple pool members.
You can bind a volume only to a member that is in the same pool as the volume. If you bind a volume to a member
and then delete that member from the pool or group, the group cancels the bind operation.
Restriction: Thin clones inherit the member binding setting, if any, of the template volume. You
cannot have a separate member binding setting for a thin clone.
You cannot use the Group Manager GUI to bind a volume to a member. Instead, you must use the following Group
Manager CLI command format:
To unbind a volume from a member, use the following CLI command format:
See the PS Series CLI Reference manual for more information about using CLI commands.
If a volume (or snapshot) loses blocks, the current status of the volume (or snapshot) is offline-lost-cached-
blocks. In addition, the group generates an event message.
1. Click Volumes, then expand Volumes, then select the volume name, and then click the Status tab.
2. Click offline-lost-cached-blocks.
3. Do one of the following:
• Click Set the volume online but retain the lost blocks to set the volume or snapshot
online but keep the lost blocks.
If an application tries to read a lost block, an error occurs. If an initiator writes new data to a lost block
before it is read, the block is no longer lost. The members containing lost blocks have a status of RAID
lost blocks until initiators write to all the lost blocks.
• Click Mark the lost blocks valid and set the volume online to set the lost block status to
valid.
The volume (or snapshot) status changes to online. The status of the members containing volume data
changes to online.
4. Click OK.
Note: Setting a volume with lost blocks online is a data integrity risk. The blocks might contain old or
invalid data.
11 Snapshot management
Snapshots greatly simplify backup and recovery operations and improve their performance.
About snapshots
A snapshot is a point-in-time copy of volume data. Creating snapshots on a regular basis can protect you from data
loss due to mistakes, viruses, or database corruption.
A snapshot represents the contents of a volume at the time of creation. You can create snapshots of standard
volumes, in addition to template volumes and thin clone volumes.
Creating a snapshot does not prevent access to a volume, and the snapshot is instantly available to authorized iSCSI
initiators. If you accidentally delete data, you can set a snapshot online and retrieve the data. If a volume is
corrupted, you can restore the volume from a snapshot. You can also clone a snapshot to create a new copy of a
volume.
To create snapshots of a volume, you must allocate snapshot reserve for the volume. Initially, a snapshot
consumes no space from the snapshot reserve because it shares all data with the volume (sometimes called the base
volume). When the volume changes, the snapshot reserve tracks those changes to maintain the volume contents at
the time of snapshot creation.
Note: If a volume is offline, all its snapshots are also offline. If you delete a volume, the group deletes all its
snapshots.
Like volumes, snapshots appear on the network as iSCSI targets. All the iSCSI target security mechanisms apply to
snapshots. See iSCSI target security on page 8-1.
You can access the data in a snapshot by using the following methods:
• Restore a volume from a snapshot. This operation replaces the volume with the data that existed at the time you
created the snapshot. See Restoring a volume from a snapshot on page 11-9.
• Clone a snapshot. The new volume contains the volume data that existed at the time you created the snapshot.
See Cloning a snapshot to create a new volume on page 11-9.
• Set the snapshot online. iSCSI initiators can access the target in the usual way. See Setting a snapshot online or
offline on page 11-11.
You can create snapshots of individual volumes or volume collections. When you perform a snapshot operation on
a volume collection, the group creates a set of snapshots (one for each volume in the collection) called a snapshot
collection. You can also simultaneously create snapshots of multiple volumes that are not in a collection. The
resulting set of snapshots is called a custom snapshot collection.
Use volume schedules to create snapshots or snapshot collections at a specific time or time interval, such as hourly,
daily, or weekly.
11–1
Group Administration Snapshot management
You can allocate snapshot reserve when you create a volume, or you can modify a volume’s properties to change
the snapshot reserve. Snapshot reserve is a percentage of the volume reserve. Because the volume reserve for a
thin-provisioned volume changes as volume usage increases, the snapshot reserve for a thin-provisioned volume
also changes.
The group generates event messages when the amount of free snapshot reserve falls below a user-defined
threshold. Depending on the policy that you set for snapshot space recovery, the group preserves snapshot reserve
as described in About snapshot reserve settings.
Note: When a snapshot is online and accessible, a user or application can change the contents of the snapshot. If
this happens, the snapshot no longer represents a point-in-time copy of a volume and has limited use for
data recovery.
All iSCSI target security mechanisms apply to snapshots, including access control records, which prevent
unauthorized iSCSI initiator access to a volume and its snapshots. See iSCSI target security.
• Snapshot reserve – Amount of space, based on a percentage of the volume reserve, that the group allocates to
snapshots. When you create a volume, you can specify the snapshot reserve percentage for the volume.
Otherwise, the group applies the group-wide default value. You can modify the snapshot reserve value.
• Snapshot space recovery policy – Action the group takes when a new snapshot exceeds snapshot reserve:
– Delete the oldest snapshots to free space for new snapshots.
– Set the volume (and snapshots) offline.
If a snapshot has active iSCSI connections, the group closes the connections before deleting the snapshot.
Note: In some cases, you might want to preserve the data in a snapshot that might be at risk of deletion. To
preserve the data in a snapshot, you can clone the snapshot. See Cloning a snapshot to create a new
volume on page 11-9.
• Snapshot space warning percentage – Percentage of the snapshot reserve that, when reached by in-use snapshot
reserve, results in an event message. The default is 90% of the snapshot reserve.
For example, if snapshot reserve space is 200 MB and the warning level is 90%, a warning occurs when in-use
snapshot reserve equals or exceeds 180 MB.
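The warning threshold is a straightforward percentage calculation, sketched below with the manual's own example figures; the function name is ours.

```python
def snapshot_warning_mb(snap_reserve_mb, warning_pct=90):
    """In-use snapshot reserve at which the group generates a warning
    event: a percentage of the snapshot reserve (default 90%)."""
    return snap_reserve_mb * warning_pct / 100

# 200 MB snapshot reserve with the default 90% warning level
print(snapshot_warning_mb(200))  # -> 180.0
```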
Using a schedule can cause a large number of snapshots. Make sure you have sufficient snapshot reserve. You can
set a limit on the number of snapshots the schedule can create. In addition, the size of the volume’s snapshot
reserve limits the number of volume snapshots.
See Scheduling volume operations for information about creating a snapshot schedule.
2. In the Modify Snapshot Settings dialog box, modify the snapshot reserve, space recovery policy, or the
snapshot space warning percentage.
If you change the snapshot reserve, the values in the Pool Space table change. If the new snapshot reserve
value exceeds the capacity of the pool, the free pool space cell displays a negative value.
3. Click OK.
Creating snapshots
You can create a snapshot of a single volume at the current time. Snapshot creation occurs immediately, with no
impact on volume availability or performance.
To create snapshots of multiple volumes at the same time, see About snapshot collections on page 11-5.
Requirement: Before you can create a snapshot, you must allocate snapshot reserve for the volume. See About
snapshot reserve settings on page 11-2.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left tree, then select the volume name,
and then click Create snapshot now.
2. Specify the following snapshot settings:
• Enter an optional description for the snapshot (up to 127 ASCII characters).
• Select whether to keep the snapshot offline (default) or set the snapshot online.
• Select whether to make the snapshot permission read-write (the default) or read-only.
3. Click OK.
The snapshot appears in the far-left panel, under the volume name.
The default snapshot name is the volume name followed by the date and time when you created the snapshot (for
example, dbase-2009-03-25-15:31:14.7668). Snapshots appear under a volume in the far-left panel, listed
by timestamp. When you select a snapshot timestamp, its full name (volume and timestamp) appears in the GUI
main window and in the Snapshot iSCSI Settings pane.
2. Select the volume name, and then click the Snapshots tab. The Snapshot Summary panel provides the
following information:
3. Move the pointer over a snapshot to display a context message showing its name, the current snapshot status,
and the requested snapshot status.
See the online help for information about the data fields and options.
If the snapshot belongs to a snapshot collection, click View snapshot collection to see details of the
snapshot collection. This link appears in the General Snapshot Information panel only if applicable.
See the online help for information about the data fields and options.
See the online help for information about the data fields and options.
A volume and its snapshots share a list of access control records. See About iSCSI target access controls and
Configuring access control records.
In the Snapshot Access window, select a record in the Access Control List panel to display details, including
whether the record applies to the volume or its snapshots.
See the online help for information about the data fields and options.
• Create snapshots of all the volumes in a volume collection. The resulting set of snapshots, one for each volume
in the collection, is called a snapshot collection. You can also schedule snapshot collections.
• Create snapshots of multiple volumes without using a volume collection. The resulting set of snapshots, one
for each volume, is called a custom snapshot collection. You cannot schedule custom snapshot collections.
Requirement: Before you create a snapshot collection, you must allocate snapshot reserve for each volume in the
volume collection. See About snapshot reserve settings on page 11-2.
1. Click Volumes, then expand Volume Collections, then select the collection name, and then click Create
snapshot now.
2. In the Create Snapshot Collection dialog box, enter an optional description (up to 127 ASCII characters).
3. Click OK.
Snapshot collections appear under the volume collection name in the far-left panel.
The default snapshot collection name is the volume collection name, followed by the date and time when you
created the snapshots (for example, datavols-2009-03-25-15:31:14.7668). However, when snapshot
collections appear under a volume collection, you can identify them only by timestamp.
The Snapshot Summary panel shows the number of snapshot collections for the volume collection, the most
recent snapshot collection timestamp, and any snapshot schedules for the volume collection.
2. Expand a snapshot collection in the Snapshots panel to display the individual snapshots that comprise the
collection. A snapshot that is part of a snapshot collection also appears under its volume name in the far-left
panel.
3. Place the pointer over the snapshot collection or a snapshot to display details.
2. Select the snapshot collection timestamp. The Volume Collection – Snapshot Collection window appears.
The Snapshot Collection Status panel describes the snapshot collection, including the volume collection name, the
time you (or a schedule) created the snapshot collection, the original number of snapshots in the collection, and the
collection Integrity status and Modification status.
• incomplete — A snapshot that was originally part of the collection does not exist.
• not modified — No snapshot in the collection is set online with read-write permission.
• potentially modified — One or more snapshots in the collection are currently set online with read-write
permission.
The Snapshots panel displays information about the snapshots in the collection. Double-click a snapshot to display
details about the snapshot.
Requirement: Before you create a custom snapshot collection, you must allocate snapshot reserve for each
volume that you include in the collection. See About snapshot reserve settings on page 11-2.
1. Click Volumes and then Custom snapshot collections.
2. In the Create Custom Snapshot Collection – General Settings dialog box, enter a name for the custom snapshot
collection (up to 63 ASCII characters) and an optional description (up to 127 ASCII characters). You cannot
specify spaces in the name. Then, click Next.
3. In the Create Custom Snapshot Collection – Components dialog box, select the volumes and click Next.
4. In the Create Custom Snapshot Collection – Summary dialog box, review the information. If correct, click
Finish. Click Back to make changes.
The default custom snapshot collection name is the name you specified, followed by the date and time when
you created the snapshots (for example, UserData-2009-05-14-15:08:18.3).
To display custom snapshot collections, see Displaying custom snapshot collections on page 11-7.
Click Volumes and then expand Custom Snapshot Collections. The Custom Snapshot Collections panel
provides the following information:
See the online help for information about the data fields and options.
The snapshot collection name appears in the title of the Volume Collection – Snapshot Collection window. Place
the pointer over a snapshot collection in the Snapshots panel to see the description.
• If you are modifying a snapshot collection name or description:
Click Volumes, then expand Volume Collections, then expand the collection, and then select the
timestamp for the snapshot collection.
• If you are modifying a custom snapshot collection name or description:
Click Volumes, expand Custom Snapshot Collections, and then select the timestamp for the custom
snapshot collection.
3. In the Modify Snapshot Collection Settings dialog box, modify the snapshot collection name or description and
click OK.
• If you are deleting a snapshot collection:
Click Volumes, then expand Volume Collections, then expand the collection, and then select the
timestamp for the snapshot collection.
• If you are deleting a custom snapshot collection:
Click Volumes, then expand Custom Snapshot Collections, and then select the timestamp for the
custom snapshot collection.
3. Confirm that you want to delete the snapshot collection or the custom snapshot collection.
Note: Before a volume restore operation starts, the group automatically creates a snapshot of the current volume.
Requirement: To restore a volume from a snapshot, all group members that contain data from the volume or
snapshot must be online.
1. Disconnect any iSCSI initiators from the volume. Follow the instructions for your operating system and
initiator.
2. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, and then select the volume.
3. Click Set volume offline and confirm that you want to set the volume offline.
5. In the Restore Volume from Snapshot dialog box, select the snapshot and click OK.
6. In the Restore Volume Confirmation dialog box, choose whether to set the volume online after the restore
operation completes (the default) and confirm that you want to restore the volume.
The restored volume has the same name and iSCSI target name as the original volume.
The group allocates space equal to the volume reserve you specify for the new volume. If you reserve snapshot
space for the new volume, the group allocates additional space.
See Volume attributes for attributes that apply to a new volume. In addition, you should fully understand volume
access controls. See iSCSI target security.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then select the volume, then
select the snapshot timestamp, and then click Clone snapshot.
2. Follow the prompts in the Clone Snapshot wizard. See Cloning a volume on page 9-11 for information about
the information you specify in the wizard dialog boxes.
Note: If you modify a snapshot name, and the public alias (public name) is set to be the same as the snapshot name
(as described in Displaying group-wide default volume settings on page 9-5), the alias changes to match the
new name. The iSCSI target name for the snapshot does not change if you change the snapshot name.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, then select the snapshot timestamp, and then click Modify snapshot properties.
2. Click the General tab and enter the new snapshot name or description in the Modify Snapshot Properties –
General dialog box.
3. Click OK.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, then select the snapshot timestamp, and then click Modify snapshot properties.
2. Click the iSCSI tab and enter the new alias in the Public alias field in the Modify Snapshot Properties –
iSCSI dialog box.
3. Click OK.
If you set a snapshot offline, any current iSCSI connections to the snapshot are lost.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, and then select the snapshot timestamp.
If you write to a snapshot, it might no longer represent the contents of the base volume at the time of snapshot
creation.
Requirement: To change the permission of an online snapshot to read-only, first set the snapshot offline.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, then select the snapshot timestamp, and then click Set access type.
2. In the Select Access Type dialog box, select the permission for the snapshot.
3. Click OK.
You can allow or disallow multi-host (shared) access to a snapshot. If you disallow multi-host access to a snapshot,
only one iSCSI qualified name (IQN) can connect to the snapshot at one time. However, in some environments, you
might want to allow multi-host access to a snapshot. See Multi-host access to targets.
Note: To disable multi-host access to a snapshot, first disconnect all initiators from the snapshot except one. If
multiple initiators have connections when you try to disable multi-host access, the operation fails, unless the
initiators have the same IQN.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, then select the snapshot timestamp, and then click Set access type.
2. In the Select Access Type dialog box, allow or disallow multi-host access.
3. Click OK.
Deleting snapshots
Note: If you delete a snapshot that is part of a snapshot collection or a custom snapshot collection, the collection
Integrity status changes to incomplete.
1. Click Volumes in the lower-left panel, then expand Volumes in the far-left panel, then expand the volume
name, then select the snapshot timestamp, and then click Delete snapshot.
12 Volume replication
Volume replication between different groups provides protection against data loss. If a volume is destroyed, you
can fail over to the recovery group and recover data from a replica. Users can then resume access to the recovery
volume. When the original volume becomes available, you can fail back to the original group.
About replication
An effective data recovery solution must help you recover from day-to-day mistakes (such as users erroneously
deleting files or volumes), computer viruses, and site disasters.
Some solutions are time-consuming and involve backing up data and manually transporting the backups to a
different physical location. Other solutions rely on expensive hardware and the ability to copy data over long
distances, which can decrease application performance.
Using the replication technology provided by PS Series firmware, you can copy volume data from one group to
another, protecting the data from a variety of failures, ranging from the destruction of a volume to a complete site
disaster, with no effect on data availability or performance.
You can use PS Series replication functionality alone or in conjunction with Auto-Snapshot Manager (for Windows
or VMware) or Storage Replication Adapter for VMware Site Recovery Manager. See the product documentation
for details.
About replicas
Similar to a snapshot, a replica represents the contents of the volume at the time the replica was created. Each
replicated volume has a replica set, which is the set of replicas created over time.
You can create replicas of individual volumes or volume collections. You can create replicas at the current time, or
you can set up a schedule.
Individual replicas are identified by the date and time that the replication operation started.
The replica set name is based on the volume name and includes a dot-number extension to ensure that all replica set
names are unique, in case two different partners replicate volumes with the same name to the same group. The
number in the extension reflects the order that each partner was configured as a replication partner to the group. For
example, all replica sets from the first configured partner have a dot-1 extension (such as dbase.1). Replica sets
from the next configured partner have a dot-2 extension (such as dbase.2).
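The naming convention above can be sketched in Python. This is a hypothetical helper for illustration only, not part of any PS Series tool or firmware:

```python
def replica_set_name(volume_name, partner_index):
    """Replica set naming as described above: the volume name plus a
    dot-number extension reflecting the order in which the partner was
    configured as a replication partner to the secondary group."""
    return f"{volume_name}.{partner_index}"

# Replica sets from the first and second configured partners:
print(replica_set_name("dbase", 1))  # dbase.1
print(replica_set_name("dbase", 2))  # dbase.2
```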
A volume and its replica set are always stored in different groups connected by a robust network link. Separating
the groups geographically protects volume data in the event of a complete site disaster.
12–1
Group Administration Volume replication
Each partner plays a role in the replication of a volume, and you can monitor replication activity from either
partner:
• Primary group. Location of the volume. The primary group administrator configures the secondary group as a
replication partner and initiates the volume replication operation. Replication of the volume is considered
outbound from the view of the primary group.
• Secondary group. Location of the volume’s replica set. The secondary group administrator configures the
primary group as a replication partner and delegates space for storing replicas from the primary group.
Replication of a volume is considered inbound from the view of the secondary group (sometimes called the
destination group).
When you configure the two groups as replication partners, you can configure a volume or volume collection for
replication, specifying the replication partner, the local group space for the replication operation, and the remote
partner space for storing the replicas.
The first time you replicate a volume, the primary group copies the entire contents of the volume to the secondary
group. For subsequent replication operations, the primary group copies only the data that changed since the
previous replication operation started.
Eventually, the oldest replicas are deleted from the replica set to free space for new replicas. The amount of space
you allocate for storing replicas limits the number of replicas you can keep on the secondary group.
Note: To ensure that a complete copy of volume data exists on the secondary group, the most-recent complete
replica of a volume cannot be deleted.
If the primary group becomes available again, you can replicate the recovery volume to the primary group and then
fail back to the primary group, returning to the original configuration.
Note: Replication is used primarily for disaster recovery and does not take the place of a comprehensive backup
strategy.
You can manually delete replicas and replica sets that are no longer needed. You cannot delete the most recent
complete replica from a replica set, but you can delete the entire replica set, which disables replication on the
volume.
For example, the first replication of a volume copies the contents of the volume to the secondary group. If in-use
volume space is large and the network is slow, the replication operation can take a long time. In this case, you
might want to use manual transfer replication.
You also might want to use manual transfer replication after you defragment a large volume.
Manual transfer replication should only be used when necessary. A properly constructed and sized network should
be able to handle network replication.
• Manual Replication in Current Group panel – Provides information about the manual transfer replication
operations for the group to which the computer is currently connected:
– Volume name and direction of transfer
– Replica timestamp and status
– Pending actions
• Data Transfers on Local Machine panel – Provides information about the manual transfer operations in the
group that are running on the local computer:
– Group and volume name
– Current operation, its status and progress
– Available user actions
Select the checkbox next to Show data transfers launched from other group to display all manual
transfer operations that are running on the local computer.
See the online help for information about the data fields and options.
Centralized replication
In centralized replication, multiple partners replicate volumes to a single partner. For example, in Figure 12-4,
GroupA and GroupB replicate volumes to GroupC. In this configuration, GroupA and GroupB are primary groups,
and GroupC is the secondary group.
The first replication of a volume copies the entire volume contents from the primary group to the secondary group.
Subsequent replication operations transfer only the data that changed since the previous replication. Replication
time and space requirements increase as the amount of transferred data increases.
It can be difficult to estimate the rate of volume changes because volume usage can vary; therefore, it can also be
difficult to estimate replication time and space requirements. For example:
• Although some applications perform a consistent number of volume writes, others have a workload that
changes daily. Therefore, one replication operation might transfer little data and complete quickly, while
another replication might transfer a large amount of data and take a long time.
• In some cases, a volume might appear to have few changes, but the transferred data is relatively large. Random
writes to a volume can result in a large amount of transferred data, even if the actual data changes are small.
• Some disk operations, such as defragmenting a disk or reorganizing a database, can increase the amount of
transferred data. However, the defragmentation or reorganization can make subsequent replications more
efficient.
In addition, because volume usage can change over time, replication space that was adequate for one workload
might become inadequate when you add more users.
If a replication operation requires copying a large amount of data, you might want to use manual transfer
replication. See About manual transfer replication on page 12-3.
For each replication operation, you can display the amount of data that the primary group is transferring to a
replication partner. See Displaying replication activity and replicas for a volume on page 12-28. You can also
display the replication history for a volume and the amount of data transferred for each replication operation.
a. Gather the following information to help you determine how much replication space you need:
• Number of replicas you want to keep and the average time span between each consecutive replica
• Whether the volume is thin-provisioned
c. Identify a replication partner (secondary group) to store the volume replicas. This secondary group must
meet the space and network connectivity requirements in Replication partner requirements on page 12-16.
a. Log in to the primary group and configure the secondary group as a replication partner.
b. Log in to the secondary group and configure the primary group as a replication partner. Make sure you
delegate the correct amount of space to the primary group for storing replicas of primary group volumes.
If you already configured the groups as replication partners, check the secondary group space that is delegated
to the primary group and increase it, if necessary.
3. On the primary group, configure the volume for replication, specifying the appropriate replication space
values. See Configuring a volume for replication on page 12-25.
4. On the primary group, replicate the volume. See Creating a replica on page 12-28.
You can set up a schedule to create replicas on a regular basis. See Using schedules to create replicas on page
12-30.
5. Monitor each replication operation and make sure it is successful. See Displaying replication activity and
replicas for a volume on page 12-28.
If the replication operation is not successful, identify and correct the problem. For example, you might need to
increase network bandwidth or increase replication space.
6. Monitor the number of replicas and replication space usage over time. The goal is to keep a specific number of
replicas without wasting replication space.
If replicas are deleted before you reach the number of replicas you want to keep, you might want to increase the
replica reserve percentage for the volume.
If you are keeping an excessive number of replicas, you might want to decrease the replica reserve percentage
for the volume.
To recover volume data from replicas, see Data recovery on page 13-1.
• Local replication reserve. Each volume requires primary group space for use during replication and,
optionally, for storing the failback snapshot. See About local replication reserve on page 12-8.
• Delegated space. To provide space for storing replicas, the secondary group delegates space to the primary
group. All primary group volumes that you replicate to the secondary group share the delegated space.
Each volume is assigned a portion of delegated space, called the replica reserve. The replica reserve for a
volume limits the number of replicas you can keep. When replica reserve is consumed, the oldest replicas are
deleted to free space for new replicas.
See About delegated space and replica reserve on page 12-11 and Replica reserve usage on page 12-12.
To make sure replication operations complete and to keep the desired number of volume replicas, you must allocate
sufficient replication space.
To determine the optimal amount of replication space, Dell recommends that you set up replication using the
default space values, monitor activity over some time period, analyze the space usage, and make adjustments. This
helps you keep the desired number of replicas while using replication space efficiently.
• Preserve the contents of the volume at the time replication started. If volume writes occur during a
replication operation, the primary group creates a snapshot of the volume in the local replication reserve to
preserve the contents of the volume at the time replication started. As volume changes occur during replication,
the snapshot consumes more local replication reserve.
When replication completes, the primary group deletes the snapshot, freeing the space, unless you chose the
option to keep the failback snapshot.
• Store the failback snapshot (optional). The failback snapshot for a volume can expedite volume failback
operations.
If you choose to keep the failback snapshot when configuring a volume for replication, the primary group does
not delete the snapshot in the local replication reserve when replication completes. Instead, it becomes the
failback snapshot. As volume changes occur between replication operations, the failback snapshot consumes
more local replication reserve.
After each replication completes, the primary group replaces the failback snapshot to update the failback
baseline. Therefore, the volume data represented by the failback snapshot on the primary group always
matches the volume data represented by the most recent complete replica on the secondary group.
If you failover to the secondary group and write to the recovery volume, you can fail back to the primary group
by replicating only the changes made to the recovery volume if the failback snapshot still exists. If the failback
snapshot does not exist, you must replicate the entire volume contents to the primary group to complete the
failback operation.
It is important to allocate sufficient local replication reserve to ensure that replication operations complete and,
optionally, to maintain the failback snapshot.
If there is not enough free local replication reserve to complete a replication operation, one of the following occurs:
• If you enabled the option to borrow free pool space, and sufficient free pool space is available (at least 10%
free pool space), replication continues. The primary group generates an informational message, specifying that
it is temporarily using free pool space during the replication.
• If you did not enable the option to borrow free pool space, or if you enabled the option, but there is not enough
free pool space, the primary group cancels the replication and generates an event message, stating that the
replication was cancelled.
If there is not enough free local replication reserve to maintain the failback snapshot, one of the following occurs:
• If you enabled the option to borrow free pool space, and free pool space is available, the primary group
generates an informational message specifying that it is temporarily using free pool space.
• If you did not enable the option to borrow free pool space, or if you enabled the option, but there is not enough
free pool space, the primary group deletes the failback snapshot and generates an event message. To reestablish
the failback snapshot, increase the local replication reserve and replicate the volume.
In addition, if you attempt to replicate a recovery volume to the primary group, and there is insufficient local
replication reserve to store the changes, the primary group generates an event message advising you to increase the
space.
The local replication reserve size is based on a percentage (5% to 200%) of the volume reserve. For a thin-
provisioned volume, the volume reserve size changes dynamically, based on volume usage; therefore, the local
replication reserve size also changes.
The recommended local replication reserve percentage depends on whether you want to keep the failback snapshot.
However, using the recommended values might not be the most efficient use of local replication reserve. Ideally,
you want to allocate only enough space to meet the volume requirements. However, specifying too little space can
prevent successful replication.
The optimal value for local replication reserve depends on the volume change rate, the replication frequency, and
whether you are keeping the failback snapshot. The volume change rate can be difficult to estimate. See How
volume changes affect replication space on page 12-6.
If you want to use less than the recommended value for local replication reserve, follow these guidelines:
• No failback snapshot. Size the local replication reserve based only on its use during replication.
• Keep the failback snapshot. Size the local replication reserve based on its use during replication and also for
maintaining the failback snapshot. Then, combine the two values.
See Sizing local replication reserve for use during replication and Sizing the local replication reserve for the
failback snapshot.
• Recommended value. A value of 100% ensures sufficient local replication reserve even if the entire volume
changes during a replication operation.
• Space-efficient value. If few volume changes are expected during an average replication operation, use a
value less than 100%.
To obtain an appropriate value, estimate the average volume changes that occur during a typical replication
operation. Then, use this equation, where 5% is the minimum local replication reserve:
5% + change rate
For example, if you estimate that at most 10% of the volume changes, a value of 15% might be appropriate
(5% plus 10%).
If you use a local replication reserve value that is less than 100%, sufficient space might not be available to
complete a replication operation (for example, if more volume changes than expected occur during a
replication operation). Therefore, Dell recommends that you select the option that allows you to borrow free
pool space if there is not enough local replication reserve to complete a replication.
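As a minimal sketch of the space-efficient equation above (Python, with illustrative function and parameter names):

```python
def local_reserve_for_replication_pct(change_during_replication_pct, minimum_pct=5):
    """Space-efficient local replication reserve, per the equation above:
    the 5% minimum plus the estimated percentage of the volume that
    changes during a typical replication operation."""
    return minimum_pct + change_during_replication_pct

# Example from the text: at most 10% of the volume changes during a
# replication, so a value of 15% might be appropriate.
print(local_reserve_for_replication_pct(10))  # 15
```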
• Recommended value. A value of 100% ensures sufficient local replication reserve even if the entire volume
changes between replication operations.
• Space-efficient value. If few volume changes are expected on average between replication operations, use a
value less than 100%.
To obtain an appropriate value, estimate the average volume changes that occur between consecutive
replication operations. Then, use this equation, where 5% is the minimum local replication reserve:
5% + change rate
For example, if you estimate that at most 20% of the volume changes, a value of 25% might be appropriate
(5% plus 20%).
If you use a local replication reserve value that is less than 100%, sufficient space might not be available to
maintain the failback snapshot. Therefore, Dell recommends that you select the option that allows you to
borrow free pool space if there is not enough local replication reserve to maintain the failback snapshot.
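The guidelines above (size for use during replication, size for the failback snapshot, then combine the two values) can be sketched as follows. This is an illustrative calculation only; the function and parameter names are not part of the product:

```python
def local_reserve_with_failback_pct(change_during_pct, change_between_pct,
                                    minimum_pct=5):
    """Combined space-efficient local replication reserve, per the
    guidelines above: the reserve for use during replication (5% minimum
    plus changes during a replication) plus the reserve for maintaining
    the failback snapshot (5% minimum plus changes between replications)."""
    for_replication = minimum_pct + change_during_pct
    for_failback = minimum_pct + change_between_pct
    return for_replication + for_failback

# 10% of the volume changes during a replication and 20% changes between
# consecutive replications:
print(local_reserve_with_failback_pct(10, 20))  # 40
```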
For example, if you want to replicate GroupA volumes and GroupB volumes to GroupC, GroupC must delegate
space to GroupA and GroupB. See Figure 12-6.
The secondary group administrator delegates space to the primary group when configuring the group as a
replication partner. The administrator can modify the partner configuration and increase or decrease delegated
space.
When the primary group administrator configures a volume for replication, the administrator assigns a portion of
delegated space to the volume. This space, called replica reserve, limits the number of replicas that you can keep
on the secondary group. You can modify the volume replication configuration and increase or decrease the replica
reserve value.
To determine the correct amount of space that the secondary group must delegate to the primary group, you must
obtain the replica reserve requirement for each primary group volume that you are replicating to the secondary
group. Replica reserve is based on a percentage of the volume’s replica volume reserve. See Replica volume
reserve and Replica reserve usage.
When you configure a volume for replication, the primary group sets the initial value of the replica volume reserve:
• For volumes that are not thin-provisioned, 10% of the reported volume size.
For example, if you have a 10 GB volume that is not thin-provisioned, the initial replica volume reserve is 1
GB (10% of 10 GB).
• For thin-provisioned volumes, the current volume reserve.
For example, if you have a 10 GB volume that is thin-provisioned with a volume reserve of 2.5 GB, the initial
replica volume reserve is 2.5 GB.
The initial value of the replica volume reserve appears in the Configure Replication – General Settings dialog box.
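The two initial-value rules above can be sketched in Python (an illustrative model, not a PS Series utility):

```python
def initial_replica_volume_reserve_gb(reported_size_gb, thin_provisioned,
                                      volume_reserve_gb=None):
    """Initial replica volume reserve, per the rules above:
    10% of the reported volume size for volumes that are not
    thin-provisioned, or the current volume reserve for
    thin-provisioned volumes."""
    if thin_provisioned:
        return volume_reserve_gb
    return 0.10 * reported_size_gb

# 10 GB volume that is not thin-provisioned:
print(initial_replica_volume_reserve_gb(10, False))      # 1.0
# 10 GB thin-provisioned volume with a 2.5 GB volume reserve:
print(initial_replica_volume_reserve_gb(10, True, 2.5))  # 2.5
```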
At the start of each replication operation, the primary group determines whether to increase the value of the replica
volume reserve:
• For volumes that are not thin-provisioned, the replica volume reserve increases by the amount of new data
added to the volume since the start of the previous successful replication, up to the reported volume size. That
is, the replica volume reserve increases by the amount of data that must be replicated.
For example, if you have a 10 GB volume that is not thin-provisioned, and initiators write 2 GB to the volume,
the replica volume reserve increases by 2 GB.
• For thin-provisioned volumes, the replica volume reserve increases to the current size of the volume reserve,
up to the maximum in-use space value.
For example, if you have a 10 GB volume that is thin-provisioned, and the volume reserve increases from 2.5
GB to 4.0 GB, the replica volume reserve increases by 1.5 GB.
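The growth rule above can be modeled as a simple capped increase (a sketch; the cap is the reported volume size for fully provisioned volumes, or the maximum in-use space value for thin-provisioned volumes):

```python
def grown_replica_volume_reserve_gb(current_reserve_gb, new_data_gb, cap_gb):
    """Replica volume reserve growth at the start of a replication:
    the reserve increases by the amount of data that must be replicated,
    up to the cap (reported size or maximum in-use space)."""
    return min(current_reserve_gb + new_data_gb, cap_gb)

# 10 GB fully provisioned volume, 1 GB initial reserve, 2 GB of new writes:
print(grown_replica_volume_reserve_gb(1.0, 2.0, 10.0))  # 3.0
# Growth never exceeds the cap:
print(grown_replica_volume_reserve_gb(9.5, 2.0, 10.0))  # 10.0
```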
The size of the replica reserve for a volume is based on a percentage of the replica volume reserve. Therefore, as
the replica volume reserve increases, the replica reserve also increases, providing more space for replicas, up to a
limit. See Replica reserve usage.
It is important to correctly size the replica reserve for a volume. Too little replica reserve might prevent you from
keeping the desired number of replicas. However, delegated space is limited, so keeping too many replicas might
not be an efficient use of delegated space.
When you configure a volume for replication, you specify the replica reserve size as a percentage (minimum
105%) of the replica volume reserve, which approximates in-use volume space. As volume usage increases, the
replica volume reserve increases; therefore, the replica reserve also increases, providing more space for replicas, up
to a limit. See Replica volume reserve.
For example, if you specify 200% for the replica reserve, and the replica volume reserve is 2.5 GB, the replica
reserve size is 5.0 GB. If the replica volume reserve increases to 6.0 GB, the replica reserve size increases to 12.0
GB.
The replica volume reserve has a maximum size (the reported volume size for volumes that are not thin-
provisioned, or the maximum in-use space value for thin-provisioned volumes); therefore, the replica reserve has a
maximum size.
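The percentage arithmetic above is straightforward; as an illustrative sketch (the function name is hypothetical):

```python
def replica_reserve_gb(replica_volume_reserve_gb, reserve_pct):
    """Replica reserve as a percentage (minimum 105%) of the replica
    volume reserve, per the example above."""
    if reserve_pct < 105:
        raise ValueError("replica reserve percentage minimum is 105%")
    return replica_volume_reserve_gb * reserve_pct / 100

# 200% replica reserve with a 2.5 GB replica volume reserve:
print(replica_reserve_gb(2.5, 200))  # 5.0
# The same percentage after the replica volume reserve grows to 6.0 GB:
print(replica_reserve_gb(6.0, 200))  # 12.0
```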
To properly size replica reserve, you must understand how replica reserve space is used. See Replica reserve usage
– first replication and Replica reserve usage – subsequent replications.
Replica reserve usage – first replication
2. The primary group increases the replica reserve if the replica volume reserve increased since you enabled
replication on the volume.
Note: If there is not enough free delegated space for the replica reserve increase, the primary group generates
an event message, and the replication pauses. Replication continues automatically once there is
sufficient delegated space.
3. The primary group copies the contents of the volume to replica reserve, decreasing the amount of free replica
reserve. For example, if the volume consists of 10 GB of data, free replica reserve decreases by 10 GB.
At this point, the replica reserve contains one replica, which is the most recent complete replica.
Replica reserve usage – subsequent replications
2. The primary group increases the replica reserve if the replica volume reserve increased since the previous
replication operation.
Note: If there is not enough free delegated space for the replica reserve increase, the primary group generates
an event message, and the replication pauses. Replication continues automatically once there is
sufficient delegated space.
3. If there is not enough free replica reserve for the volume data, the primary group deletes the oldest replica to
free space for the new replica. For example, if the data transfer consisted of 5 GB of new data, the replica
reserve must have 5 GB of free space to store this data.
Because each replica is a representation of volume data at the time the replication started, only the data that
differentiates the oldest replica from the other replicas is deleted. Therefore, multiple replicas might be deleted
to free sufficient space.
The most recent complete replica is never deleted automatically, ensuring that you always have a viable copy
of volume data on the secondary group.
Note: If you cannot free enough replica reserve for the volume data by deleting replicas, the replication
pauses, and the primary group generates an event message, indicating the replica reserve percentage
required to complete the replication.
4. The primary group copies the volume data to replica reserve, decreasing the amount of free replica reserve. For
example, if the replication transferred 5 GB of new data, free replica reserve decreases by 5 GB.
When each replication completes, the new replica becomes the most recent complete replica.
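The reclamation rule in step 3 above can be modeled with a short sketch. This is illustrative Python, not part of the PS Series firmware; the replica names and per-replica unique-space figures are hypothetical.

```python
from collections import deque

def free_reserve_for(new_data_gb, replicas, free_gb):
    """Model of the rule above: delete oldest replicas until there is
    room for the new data, but never delete the most recent complete
    replica. `replicas` is an oldest-first deque of (name, unique_gb)
    pairs, where unique_gb is the space freed by deleting that replica
    (only data not shared with newer replicas is actually freed)."""
    while free_gb < new_data_gb and len(replicas) > 1:
        _, unique_gb = replicas.popleft()  # drop the oldest replica
        free_gb += unique_gb
    if free_gb < new_data_gb:
        # Mirrors the paused-replication event described above
        raise RuntimeError("replication pauses: replica reserve too small")
    return free_gb - new_data_gb           # free space after the transfer

replicas = deque([("replica-1", 3), ("replica-2", 4), ("replica-3", 0)])
print(free_reserve_for(5, replicas, 2))  # 0 (replica-1 was deleted)
```

Because deleting the oldest replica frees only its unique data, the loop may remove several replicas before enough space is available, matching the behavior described above.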
To make sure that replication operations complete and that you keep the desired number of volume replicas, it is
important to allocate sufficient delegated space and specify the correct replica reserve percentage. The amount of
delegated space you need depends on the replica reserve requirements for all the replicated volumes from a partner.
See Guidelines for sizing replica reserve for a volume on page 12-14 and Guidelines for sizing delegated space on
page 12-15.
When you configure a volume for replication, you specify the replica reserve size as a percentage (minimum
105%) of the replica volume reserve, which approximates in-use volume space.
As volume changes occur, the replica volume reserve increases; therefore, the replica reserve increases, providing
more free space for replicas, up to a limit. If there is insufficient free replica reserve for a new replica, the oldest
replicas are deleted to increase free space.
Note: Replica reserve can increase automatically or by administrator action only if free delegated space is
available. See Guidelines for sizing delegated space on page 12-15.
Ideally, you want to allocate only enough replica reserve to store the desired number of replicas. In general, the
higher the replica reserve percentage, the more replicas you can store. However, specifying a high percentage for
all volumes might not be the most efficient use of delegated space.
The optimal value for replica reserve depends on the volume change rate, which can be difficult to estimate, and
the replication frequency. See How volume changes affect replication space on page 12-6.
• Recommended value. Dell recommends that you specify 200% for the replica reserve. Specifying 200%
guarantees that you can store at least two replicas, assuming there is sufficient delegated space for the replica
reserve to reach its maximum size. If the replica reserve cannot reach its maximum size due to lack of
delegated space, you are not guaranteed two replicas.
If you want to guarantee more than two replicas, specify a higher percentage.
• Space-efficient value. For volumes that are not frequently modified, you might be able to keep the desired
number of replicas by using a replica reserve value that is less than 200%.
To obtain an appropriate replica reserve value, estimate the average volume changes that occur between
replication operations. Then, use this calculation, where 105% is the minimum replica reserve value:

replica reserve = 105% + ((number of replicas to keep – 1) × percent volume change between replications)

For example, if you estimate that at most 20% of the volume changes between replication operations, and you
want to keep three replicas, specify 145% for the replica reserve value (105% + (2 × 20%)).
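The calculation and worked example above can be expressed as a small sketch. This is illustrative Python (the function name is invented); the formula is the one implied by the worked example.

```python
def replica_reserve_pct(change_pct, replicas_to_keep, minimum_pct=105):
    """Replica reserve percentage: the 105% minimum plus one change
    allowance for each additional replica beyond the most recent one."""
    return minimum_pct + (replicas_to_keep - 1) * change_pct

# Manual's example: at most 20% change between replications, 3 replicas.
print(replica_reserve_pct(20, 3))  # 145
```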
When replication is ongoing, monitor the number of replicas for each volume and the replica reserve usage. If more
than the desired number of replicas exist, consider decreasing the replica reserve percentage. If fewer than the
desired number of replicas exist, consider increasing the replica reserve percentage. A low free replica reserve can
indicate optimal use of replica reserve space, provided the desired number of replicas exist.
Ideally, you should request from the secondary group administrator only enough delegated space to store the
desired number of replicas for each volume.
• Recommended value. Add together the maximum replica reserve space requirements for all the primary group
volumes that you want to replicate to the secondary group and request at least that much delegated space.
If you later decide to replicate additional volumes, the secondary group administrator might need to increase
the delegated space. See Modifying space delegated to a partner on page 12-20.
• Space-efficient value. You might want to request delegated space that is less than the recommended value
described above.
Initially, replica reserve is not fully allocated. Instead, it increases automatically, based on volume usage. This
enables you to over-provision delegated space. When you over-provision delegated space, the total maximum
replica reserve space for all the partner volumes exceeds delegated space.
Warning: If you over-provision delegated space, a volume’s replica reserve might not be able to increase
automatically or through administrative action, preventing the replication operation from
completing.
For example, assume you are replicating five volumes, and the maximum combined replica reserve needed is 70
GB. If you allocate 50 GB for delegated space, delegated space is over-provisioned by 20 GB. If you specify 70
GB for delegated space, each volume’s replica reserve can increase to its maximum.
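The delegated-space arithmetic in the example above can be sketched as follows. This is illustrative Python; the per-volume breakdown of the 70 GB total is hypothetical.

```python
def overprovisioned_by_gb(max_reserves_gb, delegated_gb):
    """How far the combined maximum replica reserve for all replicated
    volumes exceeds the space delegated on the secondary group.
    Zero means every volume's replica reserve can reach its maximum."""
    return max(0, sum(max_reserves_gb) - delegated_gb)

volumes = [20, 15, 15, 10, 10]             # five volumes, 70 GB combined
print(overprovisioned_by_gb(volumes, 50))  # 20 (over-provisioned)
print(overprovisioned_by_gb(volumes, 70))  # 0
```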
After you set up replication, you should monitor delegated space usage. If free delegated space is low and the
replica volume reserve for each replicated volume has not reached its maximum, consider increasing the delegated
space.
Each partner plays a role in the replication of a volume, and you can monitor replication activity and manage
replicas from either partner:
• Primary group. Location of the volume. The primary group administrator configures the secondary group as a
replication partner and initiates the replication operation. Replication of the volume is considered outbound
from the view of the primary group.
• Secondary group. Location of the volume’s replica set. The secondary group administrator configures the
primary group as a replication partner and provides space for replicas. Replication of the volume is considered
inbound from the view of the secondary group (sometimes called the destination group).
When you configure the replication partners, you can replicate a volume or replicate all the volumes in a volume
collection.
• The primary group must have enough free space for the local replication reserve for each replicated volume.
Local replication reserve is located in the same pool as the volume. See About local replication reserve on page
12-8.
• The secondary group must have enough free space to delegate to the primary group. See Guidelines for sizing
delegated space on page 12-15.
• The groups must have network connectivity. The link between the groups must support full IP routing and must
provide sufficient bandwidth to complete replication operations in a timely manner.
• The network link between the groups must be secure (for example, through use of a firewall, VPN, or
encryption).
• The groups must run the same PS Series firmware version. If the groups are not running the same firmware
version, features in the most recent firmware version might not be available. In some cases, you must also
disallow firmware downgrades.
You must:
• Log in to the primary group (where the volume is located) and configure the secondary group as a replication
partner.
• Log in to the secondary group (where the replicas are stored) and configure the primary group as a replication
partner. Make sure you delegate space to the primary group.
Note: Password or configuration problems between partners do not occur until you enable replication on a
volume. If you receive a login error message, make sure that you specified the correct passwords when
configuring the partners.
On each group:
• Group IP address.
• Optional description for the partner.
3. In the Configure Replication Partner – Contact dialog box, enter the optional name, e-mail address, and phone
number or mobile number for the partner administrator. Then, click Next.
4. In the Configure Replication Partner – Authentication dialog box, enter the passwords that the partners use for
mutual authentication:
When you configure the other group as a replication partner, you must specify this password in the
Password obtained from partner field.
For example, if you specify the password 123abc123 in the Password for partner field, specify
123abc123 in the Password obtained from partner field when configuring the other partner.
When you configure the other group as a replication partner, you must specify this password in the
Password for partner field.
For example, if you specify the password abc123abc in the Password obtained from partner
field, specify abc123abc in the Password for partner field when configuring the other partner.
5. In the Configure Replication Partner – Delegated Space dialog box, optionally, select the storage pool and
enter the amount of space the group delegates to the partner. Then, click Next.
6. In the Configure Replication Partner – Summary dialog box, review the configuration. If it is acceptable, click
Finish. To make changes, click Back.
When you have configured both replication partners, you can replicate volume data.
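The cross-wise password symmetry described in step 4 can be checked with a small sketch. This is illustrative Python; the dictionary field names are hypothetical and not a PS Series interface.

```python
def partner_passwords_consistent(group_a, group_b):
    """Each group's 'Password for partner' must match the other group's
    'Password obtained from partner', in both directions."""
    return (group_a["for_partner"] == group_b["obtained_from_partner"]
            and group_b["for_partner"] == group_a["obtained_from_partner"])

# Passwords from the manual's example.
primary = {"for_partner": "123abc123", "obtained_from_partner": "abc123abc"}
secondary = {"for_partner": "abc123abc", "obtained_from_partner": "123abc123"}
print(partner_passwords_consistent(primary, secondary))  # True
```

A mismatch in either direction corresponds to the login errors noted earlier, which surface only when you enable replication on a volume.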
The Replication Partners panel appears, providing the following information for each partner:
• The top entry shows whether outbound replication from the group to the partner is enabled or paused, any
space that the partner delegated to the group, and free delegated space.
• The bottom entry shows whether inbound replication from the partner to the group is enabled or paused, any
space that the group delegated to the partner, and free delegated space.
See the online help for information about the data fields and options.
• General partner information panel – Provides information about the replication partner:
– Group name and IP address
– Partner information and status
• Replication status panel – Provides information about inbound replication, outbound replication, and
unmanaged space:
– Whether replication is paused or not
– Delegated and unmanaged space
– Volumes and collections replicated
• Replication progress panel – Provides information about in-progress replication operations:
– Volume name and direction of transfer
– Timestamp of transfer and transfer status
– Amount of data transferred
See the online help for information about the data fields and options.
Note: Replication partner changes you make on the secondary group are not updated on the primary group until
the next replication.
2. In the Modify Replication Partner – General window, change the group name, IP address, or description.
3. Click OK.
2. In the Modify Replication Partner - Contact window, change the contact name, e-mail address, or phone
numbers.
3. Click OK.
1. Click Replication, then select the partner, and then click Modify passwords.
3. Click OK.
1. Click Replication, then select the partner, and then click Modify delegated space.
2. In the Modify Delegated Space dialog box, enter the new delegated space value or change the delegated space
pool.
The Pool Space table shows how pool space is currently used and how much space is used after the change. If
the new delegated space exceeds the capacity of the pool, the color of the table cell showing free pool space
changes to red. Changing the pool moves all the replicas and any recovery volumes for the partner to the new
pool.
3. Click OK.
Deleting a partner deletes any inbound replicas stored in space that the group delegated to the partner. Then, the
delegated space becomes free pool space. However, replicas stored on the deleted partner are not deleted, and you
can access them by logging in to the partner.
Note: If the group is hosting a recovery volume from the partner, do one of the following before you delete the
partner:
• Demote the recovery volume to an inbound replica set (which is deleted when you delete the partner).
Double-click the recovery volume in the far-left panel and click Demote to replica set.
• Promote the recovery volume to a permanent volume.
1. Click Replication, then select the partner, and then click Pause inbound.
• Click Inbound Replicas to display partner replicas stored in the group. Select an individual replica to
display detailed information.
• Click Inbound Replica Collections to display partner replica collections stored in the group. Select an
individual replica collection or replica to display detailed information.
• Click Outbound Replicas to display replicas of group volumes stored on the partner. Select an individual
replica to display detailed information.
• Click Outbound Replica Collections to display replicas of group volume collections stored on the
partner. Select an individual replica or replica collection to display detailed information.
Note: Individual replicas are identified by the date and time the replication operation started. The time stamp for
the primary group is based on its time zone; likewise, the time stamp for the secondary group is based on its
time zone.
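The time-zone behavior in the note above can be illustrated with a short sketch: the same replication start instant is rendered differently on each group. This is illustrative Python; the zone names and timestamp are hypothetical.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One replication start instant, shown in each group's local time zone.
started = datetime(2010, 6, 15, 14, 30, tzinfo=timezone.utc)
primary_view = started.astimezone(ZoneInfo("America/New_York"))
secondary_view = started.astimezone(ZoneInfo("Europe/London"))
print(primary_view.strftime("%Y-%m-%d %H:%M"))    # 2010-06-15 10:30
print(secondary_view.strftime("%Y-%m-%d %H:%M"))  # 2010-06-15 15:30
```

Both labels identify the same replica; only the local rendering of the timestamp differs.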
The Inbound Replicas and Inbound Replica Collections windows show the following:
• The Delegated Space panel shows the usage of the space that the group delegated to the partner.
• The Local Replicas panel shows each replicated volume or volume collection, the volume replicas, the replica
reserve size, the replication status, and the replica status.
The Outbound Replicas and Outbound Replica Collections windows show the following:
• The Replica Space panel shows the usage of the space the partner delegated to the group.
• The Remote Replicas panel shows each replicated volume or volume collection, the volume replicas, the
replica reserve size, the replication status, and the replica status.
In the Remote Replicas panel, click Replication history to display details about the last 10 replication
operations for each volume, including the duration of the replication and the size and speed of the data transfer. The
duration includes the amount of time during which replication was paused or the network was down.
To display an individual replica collection, click Replication, then expand the partner name, then expand
Inbound Replica Collections, and then select the volume collection name.
The Inbound Replica Collections window appears, containing the following panels:
• Delegated space panel – Provides information about the space the group delegated to the selected partner:
– Space delegated, used, and free
– Failback replica space
– Inbound replica status
• Inbound replicas panel – Provides information about inbound replicas and replica status.
See the online help for information about the data fields and options.
The Inbound Replica Summary window appears, containing the following panels:
• Delegated Space – Provides information about the space that the group delegated to the partner.
• Inbound Replicas panel – Provides information about each replicated volume, the volume replicas, the replica
reserve size, the replication status, and the replica status.
See the online help for information about the data fields and options.
• General replica set information panel – Provides information about the inbound volume replication
configuration:
– Status and latest transfer timestamp
– Whether a failback snapshot or thin provisioning is enabled
– User description and identifiers: Volume name, storage pool, and replication partner
– Requested and free reserve
• Replicas panel – Provides information about the replicas created for the selected volume.
Note: Select the column headers in the GUI to sort the table. By default, the table is sorted by the Created date.
See the online help for information about the data fields and options.
• Template replicas panel – Provides information about the template volume replication configuration:
– Size, status, and latest transfer timestamp
– Whether a failback snapshot or thin provisioning is enabled
– Storage pool and replication partner
– Requested and free reserve
• Replicas panel – Provides information about the replicas created for the selected volume.
• Number of thin clone replica sets attached to the template replica set.
• Number of promoted thin clone replica sets (that is, recovery thin clones) attached to the template replica set.
• Thin Clone Replica Sets panel – Provides information about each thin clone replica set or promoted thin clone
replica set, including:
– Name, reported size, and status
– Replication partner and status
– iSCSI Connections
See the online help for information about the data fields and options.
• Remote delegated space panel – Provides information about capacity usage for delegated space on the selected
replication partner.
• Remote replicas panel – Provides information about the replicas stored on a partner.
See the online help for information about the data fields and options.
Note: You can also display information about volume collection replication by clicking Volumes in the lower-left
corner, then expanding Volume Collections, then selecting the collection name, and then clicking the
Replicas tab.
• Replication summary panel – Provides information about the volume collection replication configuration,
including replication partner, and schedule status.
• Remote replicas panel – Provides information about the replicas stored on a replication partner.
See the online help for information about the data fields and options.
The Outbound Replicas Summary window appears, containing the following panels:
• The remote delegated space panel – Provides information about the space the selected partner delegated to the
group.
• Remote replicas panel – Provides information about replicas stored on the partner.
In the Remote Replicas panel, select Replication history to display details about the last 10 replication
operations for each volume, including the duration of the replication and the size and speed of the data transfer. The
duration includes the amount of time during which replication was paused or the network was down.
See the online help for information about the data fields and options.
Note: You can also display information about volume replication by clicking Volumes in the lower-left corner,
then expanding Volumes, then selecting the volume name, and then clicking the Replication tab.
• Replication summary panel – Provides information about the replication configuration for the volume:
– Replication partner
– Replica and local reserve
– Failback snapshot and baseline
– Schedule and status
• Remote replicas panel – Provides information about each replica:
– Click Replication history to display details about the last 10 replication operations for the volume.
– Select Thin clone replicas (only applicable if the outbound replica is for a template volume) to
display information about any replicated thin clones attached to the template volume.
Restriction: You cannot configure a thin clone for replication until you replicate the template volume
to which the thin clone is attached.
1. Gather the volume replication attributes. See Volume replication configuration attributes.
2. Click Volumes in the lower-left panel, then expand Volumes, then select the volume, and then click
Configure replication.
4. In the Configure Volume Replication – Advanced Settings window, select whether to keep the failback
snapshot and click Next.
Because a template volume is read-only and cannot be failed back from the secondary group, keeping the
failback snapshot is not necessary for this type of volume.
5. In the Configure Volume Replication – Summary window, review the information and click Finish if the
configuration is correct. Click Back to make changes.
When you complete the volume replication configuration, you can choose to create a replica. You can also choose
to perform the replication by using manual transfer replication. See About replicas on page 12-1.
To modify the settings, click Volumes, then expand Volumes, then select the volume, then click Modify
replication settings, and then click the General tab.
In the Modify Volume Replication Settings – General dialog box, you can change:
• Replication partner. The replicas remain on the original partner. The next replication to the new partner
transfers the full volume contents.
• Replica reserve percentage.
Note: The space currently used to store replicas represents the lower limit for replica reserve. You cannot
decrease replica reserve below this limit.
To change the failback snapshot setting, click the Advanced tab. In the Modify Volume Replication Settings –
Advanced dialog box, select or deselect Keep failback snapshot.
Note: If you select Keep failback snapshot, you must create a replica to establish the failback snapshot.
Note: To replicate a volume collection, you must configure all the volumes in the collection to replicate to the
same partner. See Configuring a volume for replication on page 12-25.
1. Click Volumes, then expand Volume Collections, then select the collection, and then click Configure
replication.
If a volume is not configured for replication, click the not replicated link and configure the volume.
3. Click OK.
1. Click Volumes, then expand Volume Collections, then select the collection, and then click Modify
replication settings.
2. In the Volume Collection – Modify Replication Settings dialog box, you can change the partner name for the
collection. You can also click the partner name next to a volume to change the replication configuration for the
volume. See Modifying volume replication configuration settings on page 12-26.
3. Click OK.
Disabling replication
Disabling replication for a volume or volume collection unconfigures replication on the volume or volume
collection.
Disabling replication does not delete the volume replicas stored on the partner. You can log in to the partner and
access the replicas.
If you later reconfigure replication on the same volume to the same partner, you must delete the existing replica set
on the partner before creating a replica. The first replication is a complete copy of the volume data.
Note: When you disable replication on a volume, the delegated space on the secondary group that is storing the
replicas becomes unmanaged space; that is, the space cannot be managed from the primary group. If you do
not need the replicas, log in to the secondary group and delete the replica set.
Restriction: You cannot disable replication on a template volume if any attached thin clones have
replication enabled.
1. Click Volumes, then expand Volumes, then select the volume name, and then click Disable volume
replication.
1. Click Volumes, then expand Volume Collections, then select the collection name, and then click
Disable volume replication.
Creating a replica
The first time you replicate a volume to a partner, the primary group copies the entire volume contents to replica
reserve on the secondary group. Subsequent replication operations transfer only the volume data that changed since
the previous complete replication.
Note: Very large data transfers might exceed the capacity of the network link between the primary group and the
secondary group. For replication operations that require transferring large amounts of data, consider using
manual transfer replication. See About manual transfer replication on page 12-3.
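The full-versus-incremental behavior described above can be modeled with a short sketch. This is illustrative Python; representing volume data as block-to-hash mappings is a simplification of the array's internal change tracking.

```python
def transfer_size(current_blocks, previous_blocks=None):
    """Blocks transferred by a replication: everything on the first
    replication, otherwise only blocks changed or added since the
    previous complete replica."""
    if previous_blocks is None:      # first replication: full copy
        return len(current_blocks)
    return sum(1 for blk, digest in current_blocks.items()
               if previous_blocks.get(blk) != digest)

v1 = {0: "aa", 1: "bb", 2: "cc"}
v2 = {0: "aa", 1: "xx", 2: "cc", 3: "dd"}  # block 1 changed, block 3 added
print(transfer_size(v1))      # 3 (first replication copies everything)
print(transfer_size(v2, v1))  # 2 (only the changed and new blocks)
```

This is why the first replication is the natural candidate for manual transfer replication: its transfer size equals the full in-use volume size.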
Prerequisite: Set up the replication partners and configure the volume or volume collection for replication.
Prerequisite: You must replicate a template volume before replicating any of its thin clones.
See Configuring replication partners on page 12-17, Configuring a volume for replication on page 12-25.
1. Click Volumes, then expand Volumes, then select the volume name, and then click Create replica now.
• If you want to use manual transfer replication to perform the replication, select Perform manual
replication.
3. Monitor replication to make sure replicas complete in a timely manner. See Displaying replication activity and
replicas for a volume on page 12-28.
If replication operations take longer than expected, make sure you have adequate network bandwidth between the
groups, in addition to full IP routing. A slow network link can cause long replication times.
• Replica set and individual replicas for the volume. Each replica is identified by the date and time the
replication operation started.
• Number of complete replicas and the current size of the replica reserve.
2. In the Remote Replicas panel, click Replication History to show details about each replication operation.
If you replicate a volume collection, the resulting set of replicas is called a replica collection. When complete, a
replica collection contains one replica for each volume in the collection. A replica collection set is the set of all the
replica collections for a volume collection.
Requirement: Before you can replicate a volume collection, the volumes in the collection and the volume
collection must be configured to replicate to the same partner. See Configuring a volume for
replication on page 12-25 and Configuring a volume collection for replication on page 12-26.
1. Click Volumes, then expand Volume Collections, then select the collection, and then click Create
replica now.
• If you want to perform the replication using manual transfer replication, select Perform manual
replication. See About manual transfer replication on page 12-3.
You should monitor replication to make sure replicas complete in a timely manner. See Displaying replication
activity and replicas for a volume collection on page 12-29.
If replication operations are taking longer than expected, make sure you have adequate network bandwidth
between the groups, in addition to full IP routing. A slow network link can cause long replication times.
In the Volume Collection Replicas window, the Replication Summary panel shows:
The Remote Replicas panel shows the replica collections for the volume collection. Expand a replica collection to
see the individual replicas and their status.
Note: Schedules apply only to network replications. Scheduled replications do not run until any in-process manual
transfer replications are complete.
To pause replication for a volume, click Volumes, then expand Volumes, then select the volume name, and then
click Pause volume replication.
To resume replication for a volume, click Volumes, then expand Volumes, then select the volume name, and then
click Resume volume replication.
Click Replication, then expand the partner name, and then click Pause outbound.
• To resume outbound replication to a partner:
Click Replication, then expand the partner name, and then click Resume outbound.
Click Replication, then expand the partner name, and then click Pause inbound.
Click Replication, then expand the partner name, and then click Resume inbound.
Note: To temporarily stop volume replication instead of cancelling it, pause the replication. See Pausing and
resuming replication of a volume on page 12-30.
Click Volumes, then expand Volumes, then select the volume name, and then click Cancel replication.
You can clone an inbound replica to create a new volume on the secondary group. The new volume has the same
reported size, is the same type (standard volume, template volume, or thin clone volume), and has the same data as
the original volume at the time you created the replica.
The new volume is located in the same pool as the replica. After the clone operation completes, the replica still
exists, and replication continues as usual.
Note: If you clone a thin clone replica to create a new thin clone volume, the new volume is attached to the
template replica set on which the thin clone replica depends.
1. Click Replication, then expand the replication partner, then expand Inbound replicas, then select the
replica set, and then click Clone replica.
2. In the Clone Replica dialog box, select the timestamp of the replica.
3. In the Clone Volume Replica – Volume Settings dialog box, specify the new volume name and (optional)
description. Then, click Next.
4. In the Clone Volume Replica – Space dialog box, change the thin provisioning settings and the snapshot
reserve value, and click Next.
6. In the Clone Volume Replica – Summary dialog box, review the information and click Finish if satisfactory.
Click Back to make changes.
The new volume appears in the list of volumes in the far-left panel.
1. Click Volumes, then expand Volumes, then select the volume name, and then click the Replication tab.
2. In the Volume Replication window, click Volume replicas in the Remote Replicas panel.
3. To delete the replica set, select the replica set and click Delete replica set.
If you delete a replica from a replica collection, the replica collection no longer represents the contents of the
volumes in the collection at the time you created the replica collection.
Deleting a replica collection set disables replication on the volume collection. If you re-enable replication on the
volume collection, the first replication of each volume is a complete transfer of volume data.
1. Click Volumes, then expand Volume Collections, then select the collection name, and then click the
Replicas tab.
2. In the Volume Collection Replicas window, to delete the replica collection set, select it in the Remote Replicas
panel and click Delete replica collection set.
To delete a replica collection, expand the replica collection set, select the replica collection, and click Delete
replica collection.
To delete a replica from a replica collection, expand the replica collection, select the replica, and click Delete
replica.
Recommendation: Dell recommends that you delete replicas when logged in to the primary group. See Deleting
outbound replica sets or replicas on page 12-32.
1. Click Replication, then expand the partner name, and then click Inbound Replicas.
To delete a replica set, select the replica set and click Delete replica set.
To delete a replica, expand the replica set, select the replica, and click Delete replica.
4. Confirm that you want to delete the replica or replica set.
The replica or replica set no longer appears in the group. However, the replica or replica set still appears on the
partner (primary group), if it is available. You can log in to the primary group and delete the replica or replica set.
Recommendation: Dell recommends that you delete replicas when logged in to the primary group. See Deleting
outbound replica collection sets, replica collections, or replicas on page 12-32.
1. Click Replication, then expand the partner name, and then click Inbound Replica Collections.
2. If you are deleting an inbound replica collection set, click Pause inbound.
3. To delete a replica collection set, select it in the Inbound Replicas panel and click Delete replica
collection set.
To delete a replica collection, expand the replica collection set, select the replica collection, and click Delete
replica collection.
To delete a replica from a replica collection, expand the replica collection, select the replica, and click Delete
replica.
The replica, replica collection, or replica collection set no longer appears in the group. However, the replica or
replica set still appears on the partner (primary group), if it is available. You can log in to the primary group and
delete the replica or replica set.
13 Data recovery
If you replicate a volume to a partner (see Chapter 12, Volume replication), you can recover volume data on the
partner. In addition, you might be able to fail over to the partner and later fail back to the original group.
If the volume becomes unavailable—either temporarily or permanently—you can recover the data from the
partner.
When volume failure occurs, or if the primary group is unavailable because of maintenance, it is important to
resume data availability as soon as possible to prevent or limit application downtime.
The method for recovering data depends on the state of the groups and your specific data recovery requirements.
See Data recovery procedures.
For example, you can clone a replica to create a new volume on the secondary group. The new volume contains the
same data that existed at the time you created the replica; initiators can connect to it in the usual way. Cloning a
replica has no impact on the original volume and the replication configuration. If the original volume is still
available, replication can continue, as usual. See Cloning an inbound replica on page 12-31.
In most situations in which you must recover data, the primary group is not available because of maintenance or a
failure. In this case, you can temporarily – or permanently – fail over the volume to the secondary group and make
the volume data available to initiators. If the original volume on the primary group becomes available again, you
can fail back the volume to the primary group, returning to the original replication configuration.
• promote – Enables you to convert a replica set into a volume and snapshots. The volume contains the data
represented by the most recent complete replica. The snapshots correspond to the remaining replicas.
For example, you can promote an inbound replica set to a recovery volume, as part of a failover operation.
• demote – Enables you to convert a volume into a replica set.
For example, you can demote a volume to a failback replica set, as part of a failback operation.
See Failing over and failing back a volume on page 13-2 for more details.
13–1
Group Administration Data recovery
Restriction: You cannot replicate a recovery template volume, and you cannot demote a template volume to a
failback replica set.
1. Promote the replica set to a recovery volume (and snapshots) on the secondary group, and allow initiators to
connect to the volume. You can choose to keep the same iSCSI target name to facilitate iSCSI initiator access
to the recovery volume. See Promoting an inbound replica set to a recovery volume.
2. When the original volume on the primary group becomes available, synchronize the volume data on both
groups. Use the Replicate to Partner operation to:
• Demote the original volume to a failback replica set on the primary group.
• Replicate the recovery volume to the primary group. If you kept the failback snapshot for the original
volume, only the changes made to the recovery volume are replicated.
3. When you are ready to fail back to the primary group, use the Failback to Partner operation to:
At this point, initiators can connect to the volume on the primary group and replication can continue as usual. By
default, the failback baseline is reestablished.
Figure 13-2 shows the replication configuration after a failure in the primary group (GroupA).
Figure 13-3 shows the first step in recovering data on the secondary group, which is to fail over the volume to the
secondary group. To do this, promote the inbound replica set to a recovery volume and snapshots. The recovery
volume contains the volume data represented by the most recent complete replica. Users can connect to the
recovery volume to resume access to volume data.
Figure 13-4 shows the second step in recovering data—replicate to the primary group. When the primary group is
available:
Note: If the failback snapshot is not available on the primary group, the first replication transfers all the
recovery volume data, instead of only the changes that users made to the recovery volume.
You can perform these tasks separately, or use the Replicate to Primary operation, which encompasses both tasks.
Figure 13-4: Step 2: Replicate to the Primary Group – Data Is Available and Protected
Figure 13-5 shows the final step in recovering data—fail back to the primary group. To fail back to the primary
group:
You can perform these tasks separately, or use the Replicate to Primary operation, which encompasses all the tasks.
At this point, the volume replication configuration returns to its original state and users can connect to the volume
on the primary group.
A recovery volume name is generated automatically, based on the volume name, with a dot-number extension (for
example, vol01.1). You can choose to keep the same iSCSI target name as the original volume to facilitate iSCSI
initiator connections to the recovery volume.
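The dot-number naming scheme can be sketched as follows. This is an illustrative sketch, not a firmware API; the existing_names set is a hypothetical input representing names already in use in the group.

```python
def recovery_volume_name(volume_name, existing_names):
    """Append the lowest free dot-number extension (vol01 -> vol01.1)."""
    n = 1
    while f"{volume_name}.{n}" in existing_names:
        n += 1
    return f"{volume_name}.{n}"
```

For example, if vol01.1 already exists in the group, the next recovery volume promoted from vol01 would be named vol01.2.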
Promoting an inbound replica set does not require any additional space on the secondary group, because it reduces
delegated space by the size of the volume’s replica reserve.
With some exceptions, all volume operations apply to a recovery volume. See Recovery volume restrictions.
Restriction: You cannot convert a recovery template to a standard volume; you must first make the
promotion permanent. You cannot detach a recovery thin clone.
1. Click Replication, then expand the replication partner, then expand Inbound Replicas, then select the
replica set, and then click Promote to volume.
2. Confirm that you want to pause inbound replication from the partner. Replication resumes automatically for all
other volumes after the replica set is promoted.
3. In the Promote Replica Set – Volume Options dialog box, specify the following:
• Whether you want to set the recovery volume online or offline. Set the volume online if you want initiators
to connect to it.
• Whether you want to retain the iSCSI target name of the original volume.
• Whether you want to keep the ability to demote to replica set. Unless you are permanently promoting the
replica set, make sure you keep this ability.
5. In the Promote Replica Set – Summary dialog box, review the information and click Finish if satisfactory.
Click Back to make changes.
After the promote operation completes, the replica set disappears from the list of inbound replica sets, and the
recovery volume appears in the list of volumes.
Where to go next
• Connect to the recovery volume. See Connecting initiators to iSCSI targets on page 8-7.
• Go to the next step in the failover and failback process when the original volume on the primary group
becomes available. See Replicating a recovery volume to the primary group on page 13-9.
• To reverse the inbound replica set promotion and cancel the failover, demote the recovery volume to an
inbound replica set:
Click Volumes, then expand Volumes, then select the recovery volume name, and then click Demote to
replica set.
• If the original volume becomes permanently unavailable, you can make the inbound replica set promotion
permanent. See Making an inbound replica set promotion permanent on page 13-13.
To temporarily fail over a volume to the secondary group, you promote the volume’s inbound replica set to a
recovery volume. Users can connect to the recovery volume and resume access to the volume data.
All volume operations apply to a recovery volume, with some exceptions. You cannot change:
• Volume size
• Volume name
• Public alias
• RAID preference
• Replication partner
• Thin provisioning settings (applicable only to recovery template volumes and recovery thin clone volumes)
• Permission (applicable only to recovery template volumes)
In addition, you cannot delete a recovery template volume if there are still recovery thin clone volumes, thin clone
replica sets, or permanently promoted thin clone replica sets attached to the volume.
See Chapter 9, Basic volume operations for information about modifying volumes.
Recommendation: Dell recommends that you replicate the recovery volume to the primary group immediately
before failing back, because the volume is offline during the final replication that is part of
failing back to the primary group.
The Replicate to Partner operation is available only if the primary group and the secondary group are running PS
Series Firmware Version 4.0 or greater. If you do not meet this requirement, you must perform the steps
individually, as described in Manually performing the replicate to partner operation on page 13-16.
How quickly you can replicate the recovery volume depends on the presence of the failback snapshot on the
primary group. The failback snapshot establishes the failback baseline, which is the point in time at which the
volume on the primary group and the most recent complete replica on the secondary group have the same data. If
the failback snapshot exists, only the changes made to the recovery volume are replicated. If the failback snapshot
does not exist, the first replication is a complete copy of the recovery volume data.
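Conceptually, the choice between a delta transfer and a full copy looks like the following sketch, where the block maps are hypothetical stand-ins for the volume data (the firmware's actual transfer mechanism is not documented here).

```python
def blocks_to_replicate(recovery_blocks, baseline_blocks=None):
    """Return the blocks that must be transferred to the primary group.

    With a baseline (the failback snapshot), only blocks that changed on
    the recovery volume are sent; without it, every block is sent.
    """
    if baseline_blocks is None:  # no failback snapshot: full copy required
        return dict(recovery_blocks)
    return {addr: data for addr, data in recovery_blocks.items()
            if baseline_blocks.get(addr) != data}
```

This is why keeping the failback snapshot matters: it limits the first replication to the changes made while the volume was failed over.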
1. Obtain the name and password for a group administrator account on the primary group.
2. Click Volumes, expand Volumes, select the recovery volume, and then click Replicate to partner.
4. Click OK.
To monitor the Replicate to Partner operation and make sure all tasks complete, open the Alarms panel at the
bottom of the GUI window and click the Failback Operations tab. If an individual task fails, you must correct
the problem and then retry the task. See Handling a failed operation on page 13-15.
Note: If you chose to use manual transfer replication, the status of the create replica task is in-progress until
you complete the manual transfer replication. When the manual transfer replication is complete, the
Replicate to Partner operation continues automatically.
When the volume demote task on the primary group completes, the original volume disappears from the list of
volumes, and the failback replica set appears under Inbound Replicas in the far-left panel.
Where to go next
• To create more replicas, select the recovery volume and click Create replica now. You can also configure
replication schedules on the recovery volume.
• When you are ready to fail back, see Failing back to the primary group.
• You can move a failback replica set to a different pool in the primary group. If you later promote the failback
replica set to a volume, the volume belongs to the new pool.
1. On the primary group, click Replication, expand the replication partner, expand Inbound Replicas,
select the failback replica set, and click Change storage pool.
• If you do not want to return to the original replication configuration or switch roles, you can make the inbound
replica set promotion permanent and then delete the replica set:
1. On the primary group, click Replication, then expand the replication partner, then expand Inbound
Replicas, then select the failback replica set, and then click Convert to replica set.
The replica set continues to be shown in the Replication Partner – Inbound window, but it is no longer a
failback replica set.
3. Select the replica set and then click Delete replica set.
Recommendation: Dell recommends that you use the Replicate to Partner operation before failing back to the
primary group. Although the Failback to Primary operation performs a final replication, the
recovery volume is offline during the final replication. See Replicating a recovery volume to
the primary group.
Restriction: The Failback to Primary operation is available only if the primary group and the secondary group are
running PS Series Firmware Version 4.0 or greater. If you do not meet this requirement, you must
perform the steps individually, as described in Manually performing the failback to primary operation
on page 13-16.
1. Obtain the name and password for a group administrator account on the primary group.
2. Click Volumes, then expand Volumes, then select the recovery volume, and then click Failback to
primary.
As part of the failback operation, a replica is created immediately on the secondary group to reestablish the failback
snapshot (and set the failback baseline). Because the volume data is already synchronized between the groups, no
data is actually transferred.
To monitor the Failback to Primary operation and make sure all tasks complete, open the Alarms panel at the
bottom of the GUI window and click the Failback Operations tab. If an individual task fails, you must correct
the problem and then retry the task. See Handling a failed operation on page 13-15.
Note: If you chose to use manual transfer replication, the status of the create replica task is in-progress until
you complete the manual transfer replication. When the manual transfer replication is complete, the
Failback to Primary operation continues automatically.
When the Failback to Primary operation completes, on the secondary group, the recovery volume disappears from
the list of volumes, and the inbound replica set reappears in the list of inbound replica sets. On the primary group,
the failback replica set disappears from the list of inbound replica sets, and the volume reappears in the list of
volumes.
Note: This procedure assumes that the volume does not change while available on the secondary group, or—if the
volume changes—those changes are not replicated to the primary group. If you want to replicate changes,
follow the procedure described in Failing over and failing back a volume.
1. Promote the replica set to a recovery volume. See Promoting an inbound replica set to a recovery volume.
Make sure you select the option that enables you to demote the recovery volume.
2. Perform the desired operation on the recovery volume. See Recovery volume restrictions on page 13-9.
Click Volumes, then expand Volumes, then select the recovery volume, and then click Demote to replica
set.
Note: Because you cannot permanently demote a template volume, when you switch roles for a replication
configuration that includes a template volume with thin clone volumes, only the thin clone replication
configuration switches. Therefore, the original template volume must still exist on the original primary
group after the switch, because the new thin clone replica sets depend on the template volume.
a. Make sure the volume replication configuration includes keeping the failback snapshot. See Modifying
volume replication configuration settings on page 12-26.
b. Set the volume offline. See Setting a volume offline or online on page 9-13.
c. Perform a final replication. This synchronizes volume data across the primary group and the secondary
group. See Creating a replica on page 12-28.
a. Promote the replica set to a recovery volume. Make sure you keep the ability to demote the recovery
volume, in case you decide to cancel the role switch. See Promoting an inbound replica set to a recovery
volume.
Users can now access volume data by connecting to the recovery volume. See Connecting initiators to
iSCSI targets on page 8-7.
b. Replicate the recovery volume to the primary group. See Replicating a recovery volume to the primary
group.
c. Make the inbound replica set promotion permanent. See Making an inbound replica set promotion
permanent.
3. On the primary group, convert the failback replica set to an inbound replica set. See Converting a failback
replica set to an inbound replica set.
Note: After making an inbound replica set promotion permanent, you can no longer demote the volume to the
original inbound replica set.
Restriction: Before you can make a template replica set promotion permanent, you must permanently promote all
the attached thin clone replica sets.
1. Click Volumes, then expand Volumes, then select the recovery volume, then click Make promote
permanent.
• Enter a new volume name, up to 63 alphanumeric characters (including periods, hyphens, and colons). A
volume name must be unique in a group.
• Enter an optional description.
• Select the storage pool.
• Access control credentials for the recovery volume. Specify a CHAP user name, IP address, or iSCSI
initiator name. See About iSCSI target access controls on page 8-1.
• Permission, either read-only or read-write.
• Whether to allow initiators with different iSCSI qualified names (IQNs) access to the volume. See
Multi-host access to targets on page 8-6.
4. Review the information in the Convert Recovery Volume – Summary dialog box and click Finish if
satisfactory. Click Back to make changes.
Where to go next
• If you are permanently switching partner roles, see Converting a failback replica set to an inbound replica set
for the next step in the procedure.
Note: After you convert a failback replica set to an inbound replica set, you cannot promote the inbound replica set
to the original volume.
1. On the primary group, click Replication, then expand the partner, then expand Inbound Replicas, then
select the failback replica set, then click Convert to replica set.
When the conversion completes, the replica set continues to appear in the Replication Partner – Inbound window,
but it is no longer a failback replica set.
Permanently promoting an inbound replica set does not require any additional space on the secondary group,
because it reduces delegated space by the size of the volume’s replica reserve.
Restriction: In some cases, you cannot permanently promote a replica set in a single operation. If you do not get
the option to deselect the Keep ability to demote to replica set option, you must
temporarily promote the replica set and then make the promotion permanent. See Promoting an
inbound replica set to a recovery volume on page 13-7 and Making an inbound replica set promotion
permanent on page 13-13.
Restriction: Before you can permanently promote a template replica set, you must permanently promote all the
attached thin clone replica sets.
1. On the secondary group, click Replication, then expand the partner, then expand Inbound Replicas,
then select the replica set name, and then click Promote to volume.
2. Confirm that you want to pause inbound replication from the partner.
• Enter a new volume name, up to 63 alphanumeric characters (including periods, hyphens, and colons). A
volume name must be unique in a group.
• Enter an optional description.
• Select the storage pool.
• Access controls for the recovery volume. Specify a CHAP user name, IP address, or iSCSI initiator name.
See About iSCSI target access controls on page 8-1.
• Permission, either read-only or read-write.
• Whether to allow initiators with different iSCSI qualified names (IQNs) access to the volume. See
Multi-host access to targets on page 8-6.
6. If the information in the Promote Replica Set – Summary dialog box is satisfactory, click Finish. Click Back
to make changes.
The replica set disappears from the list of inbound replicas, and the new volume appears in the list of volumes.
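The volume naming rule (1 to 63 characters drawn from alphanumerics, periods, hyphens, and colons, unique within the group) can be checked with a small validator. An illustrative sketch, not a firmware API:

```python
import re

# 1-63 characters: letters, digits, periods, colons, hyphens
_NAME_RULE = re.compile(r"^[A-Za-z0-9.:-]{1,63}$")

def is_valid_volume_name(name, existing_names=()):
    """Check the documented rules and uniqueness within the group."""
    return bool(_NAME_RULE.match(name)) and name not in set(existing_names)
```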
To check the status of a Replicate to Partner or Failback to Primary operation:
1. Open the Alarms panel and click the Failback Operations tab.
2. Expand the recovery volume to display the status of each task in the operation.
If an individual task fails during a Replicate to Partner or Failback to Primary operation, correct the problem.
After correcting the problem, in the Failback Operations panel, right-click the failed operation and select Retry
task. The operation continues automatically.
Requirement: You must promote the inbound replica set to a recovery volume, before performing the individual
Replicate to Partner tasks. See Promoting an inbound replica set to a recovery volume.
a. Set the original volume offline. See Setting a volume offline or online on page 9-13.
b. Cancel any in-progress replication. See Cancelling a volume replication on page 12-31.
c. Set any snapshots for the volume offline. See Setting a snapshot online or offline on page 11-11.
Click Volumes, then expand Volumes, then select the volume, and then click Demote to replica
set.
a. Configure the recovery volume to replicate to the primary group. See Configuring a volume for replication
on page 12-25.
You can use manual replication if a large amount of data must be transferred. See the Manual Transfer
Utility Installation and User Guide.
1. Perform the Replicate to Partner operation. See Replicating a recovery volume to the primary group on page
13-9 or Manually performing the replicate to partner operation.
b. Set the recovery volume offline. See Setting a volume offline or online on page 9-13.
You can use manual replication if a large amount of data must be transferred. See the Manual Transfer
Utility Installation and User Guide.
d. Demote the recovery volume to the original inbound replica set:
Click Volumes, then expand Volumes, then select the volume, and then click Demote to replica
set.
3. On the primary group, promote the failback replica set to the original volume:
Click Replication, expand the partner name, expand Inbound Replicas, select the failback replica set,
and click Promote to volume.
Part III: Troubleshooting
14 Group event logging
A PS Series group generates events during normal group operations and when significant incidents occur.
Events enable you to track operations and to detect and solve problems before they affect performance or data
availability.
In addition, you can use SNMP traps to track significant group events.
Event messages help you monitor normal operations and also identify problems before they disrupt service.
Events appear in the Monitoring window, the CLI console, and in the output of the show recentevents
command.
Note: You can disable the display of informational messages in the GUI and CLI; however, the group still logs
these events. See Enabling or disabling the display of INFO event messages on page 14-5.
Example event messages:
Control modules are initializing. Control module failover cannot occur until the
initialization completes.
There are 1 outstanding health conditions. Correct these conditions before they affect array
operation.
Each event message includes the following information:
• Event priority: INFO, WARNING, ERROR, or FATAL (see Event priorities on page 14-2)
• Date and time that the event occurred
• Member on which the event occurred
• Descriptive event text
In addition, the group generates hardware alarms. An alarm is a persistent condition in an array (for example, high
temperature). Alarms help you find and correct problems before they disrupt operations. An alarm always has a
corresponding event. See About hardware alarms on page 14-2.
14–1
Group Administration Group event logging
Event priorities
Each event has a priority. Table 14-1 lists event priorities in order from lowest (least severe) to highest (most
severe); the first column lists the priorities, and the second column describes them.
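Because the priorities are ordered, notification filtering ("this priority or higher") can be modeled with an ordered enumeration. An illustrative sketch; the numeric values are arbitrary:

```python
from enum import IntEnum

class EventPriority(IntEnum):
    """Priorities in order from lowest (least severe) to highest (most severe)."""
    INFO = 1
    WARNING = 2
    ERROR = 3
    FATAL = 4

def at_least(events, minimum):
    """Filter (priority, text) pairs, keeping the minimum priority or higher."""
    return [e for e in events if e[0] >= minimum]
```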
Alarms appear in the Alarms and Operations panel at the bottom of the Group Manager GUI. Click the panel
header to open and close the panel.
Each alarm has a priority level, either Warning or Critical, based on the severity of the problem. An alarm always
has a corresponding event. For more information, see Monitoring alarms and operations on page 15-10.
• E-mail notification. If an event occurs, the group automatically sends a message to designated e-mail
addresses.
The group collects multiple events into a single message, eliminating the need for multiple e-mails. If only one
event occurs within one minute, the group sends an e-mail to the addresses you configured for notification. If
another event occurs within that minute, the timer restarts, and the group sends the e-mail after two minutes.
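The one-minute coalescing behavior described above can be modeled as a quiesce timer that each new event restarts. A simplified sketch, with timestamps in seconds (not the firmware's actual implementation):

```python
def notification_send_time(event_times, quiet_period=60):
    """Return when the batched e-mail goes out for the first burst of events.

    Each event that arrives within quiet_period seconds of the previous one
    restarts the timer, so the message is sent quiet_period seconds after
    the last event in the burst.
    """
    last = event_times[0]
    for t in event_times[1:]:
        if t - last < quiet_period:  # within the window: restart the timer
            last = t
        else:
            break  # quiet period elapsed; the first batch was already sent
    return last + quiet_period
```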
E-Mail Home is available to all PS Series customers, but response time and assistance are based on the validity
and level of your support contract.
1. Click Group, then Group Configuration, and then the Notifications tab. See the online help for
information about the data fields and options.
3. Under E-mail recipients, click Add and enter an e-mail address. You can enter up to five e-mail addresses to
receive e-mail notifications. Then, click OK.
4. Under Event Priorities, select events for which you want to generate an e-mail message. See About event
messages on page 14-1.
5. Under E-Mail Configuration Settings, click Add and enter an IP address for the SMTP server or e-mail relay
that handles e-mail forwarding. Then, click OK.
Use the ip_address:port format to specify a port number other than the default (25).
You can enter up to three IP addresses. The group uses one SMTP server or e-mail relay at any time. The first
server you specify is the default server. The group uses the other servers in the order specified, if the default
server is not available. Click the up and down arrows to change the order.
6. In the Sender in e-mail address field, type the e-mail address that appears in the “From” field in the
notification e-mail. You can use the group name at your company’s e-mail address. For example:
GroupA@company.com. When the intended recipient receives e-mail, the e-mail itself specifies which group it
came from. This is helpful in multi-group environments.
7. Click Save all changes (Control+S).
To test the e-mail notification, select Enable live informational messages (you can later disable the
display of informational events), log out of the group, and then log in to the group. If an e-mail recipient does not
receive notification of the logout and login events, check and fix the configuration.
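You can also verify the SMTP path from any host that can reach the relay by sending a test message with the same sender address. A sketch using Python's standard smtplib; the addresses shown are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient, group_name):
    """Build a message shaped like a group notification; the sender address
    in the From: field tells recipients which group it came from."""
    msg = EmailMessage()
    msg["From"] = sender          # e.g. GroupA@company.com
    msg["To"] = recipient
    msg["Subject"] = f"Test notification from PS Series group {group_name}"
    msg.set_content("Test of the e-mail event notification settings.")
    return msg

def send_test_message(msg, smtp_host, smtp_port=25):
    """Hand the message to the SMTP server or e-mail relay (default port 25)."""
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.send_message(msg)
```

If this message arrives but group notifications do not, the problem is likely in the group's notification settings rather than the SMTP server.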
Recommendation: Dell strongly recommends that you enable E-Mail Home so that customer support can engage
more quickly in solving any problems.
E-Mail Home is available to all PS Series customers, but response time and assistance are based on the validity and
level of your support contract.
Requirement: To support E-Mail Home, the group must have access to an SMTP server or e-mail relay.
1. Click Group, then Group Configuration, and then the Notifications tab.
2. In the E-Mail Event Notifications panel, select Send e-mail alerts to Customer Support (E-Mail
Home).
3. In the Local contact e-mail field, enter an e-mail address to receive E-Mail Home notification messages.
4. Under E-Mail Configuration Settings, click Add and enter an IP address for the SMTP server or e-mail relay
that handles e-mail forwarding. Then, click OK.
Use the ip_address:port format to specify a port number other than the default (25).
You can enter up to three IP addresses. The group uses one SMTP server or e-mail relay at any time. The first
server you specify is the default server. The group uses the other servers in the order specified, if the default
server is not available. Click the up and down arrows to change the order.
5. In the Sender in e-mail address field, type the e-mail address that appears in the “From” field in the
notification e-mail. You can use the group name at your company’s e-mail address. For example:
GroupA@company.com. When the intended recipient receives e-mail, the e-mail itself specifies which group it
came from. This is helpful in multi-group environments, and reduces the chance that the e-mail server or
recipient discards or rejects notifications.
When you first enable E-Mail Home, the group sends the local contact e-mail address a confirmation message. If
you do not receive this message:
• Make sure that you specified the correct information in the Group Notifications window.
• Examine the PS Series event log. If no errors are logged, contact your support provider. If you have a service
agreement, your support provider can help you resolve the problem.
• To disable E-Mail Home, deselect Send e-mail alerts to Customer Support (E-Mail Home).
• To modify the local e-mail address, enter the new address in the Local contact e-mail field.
• To modify the IP address for an SMTP server, select the IP address, click Modify, change the address, and
click OK.
• To delete an SMTP server, select the IP address and click Delete.
2. Click the Notifications tab. See the online help for information about the data fields and options in the
Event Logs panel.
4. Under Syslog Servers, click Add and specify an IP address for the syslog server. You can specify up to three
syslog servers. All the servers receive events.
5. Under Event Priorities, select the event priorities that result in syslog server notification. See About event
messages on page 14-1.
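To confirm from a host that a syslog server is reachable on the standard UDP port (514), you can forward a test record with Python's standard logging module. The logger name and addresses are placeholders:

```python
import logging
import logging.handlers

def make_syslog_logger(server_ip, port=514):
    """Return a logger that forwards records to the given syslog server over
    UDP on the standard syslog port, similar to how the group forwards events."""
    logger = logging.getLogger(f"ps-events-{server_ip}")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(server_ip, port))
    logger.addHandler(handler)
    return logger
```

Because UDP is connectionless, a missing or misconfigured server fails silently; check the server's log files to confirm delivery.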
1. Click Group, then Group Configuration, and then the Notifications tab.
2. In the Event Logs panel, select or deselect Enable live informational messages.
See Displaying and configuring SNMP access to a Group on page 4-11 and Configuring SNMP trap destinations
on page 14-7.
The PS Series array MIBs (Management Information Bases) contain information about SNMP traps and trap
thresholds. See Accessing PS Series array MIBs on page 14-7.
1. Click the SNMP tab. See the online help for information about the data fields and options.
3. Enter the IP address where SNMP traps are sent and click OK. You can specify up to five IP addresses. All the
addresses receive traps.
4. Optionally, modify the SNMP trap community name. The default is SNMP-trap. The group uses the SNMP
trap community when it sends SNMP traps. SNMP trap community names must be unique and can contain up
to 63 alphanumeric characters, but no commas.
• To delete an SNMP trap destination, select the IP address, and click Delete.
• To modify the SNMP trap community name, change the name in the SNMP trap community name field.
Requirement: To use MIBs, you must install them on a management station. Not all traps apply to all array
models.
If you have a support account, you can access the PS Series array MIBs from the customer support website.
3. Click MIBS.
15 Group monitoring
It is best practice to regularly monitor a PS Series group, so you can address issues before service is interrupted.
If you configured SNMP trap notification, you can examine the traps using an SNMP console. See About SNMP
traps on page 14-6.
Table 15-1 describes general best practices for monitoring a group. The first column lists the monitoring condition,
the second column describes it, and the third column provides a reference for more information about addressing
the issue.
15–1
Group Administration Group monitoring
Monitoring events
The group generates a message when a significant event that requires corrective action occurs in the group (for
example, when hardware fails, or replication space is insufficient). The group also generates event messages when
certain normal operations occur (for example, when a user logs in to the group, or you create a volume).
Event messages help you monitor group operations and correct problems before they disrupt operations. See About
event messages on page 14-1.
Table 15-2 shows the filtering options available in the Events panel.
• Display all events or events of a specific priority. Pull down the View menu and select the events you want to
display.
• Retrieve previous events. To retrieve the most recent 100 events, click the More icon. Click it again to
retrieve the next 100 events.
• Acknowledge all events. Unacknowledged events appear in bold. To acknowledge the receipt of all event
messages, click the Acknowledge all icon.
• Clear the event list. To erase all the events from the panel, click the Clear event list icon ( ). To show
the events again, click the More icon.
• Show or hide details, by doing any of the following:
• Move the pointer over an event. A pop-up window appears, showing event details.
• Double-click an event. The event details panel opens at the bottom of the events list.
• Select an event and click the View details or Hide details icon. The event details panel opens at the bottom of the events list.
See the online help for information about the data fields and options.
Double-click the name of a user to open the Modify Administration Account wizard.
See the online help for information about the data fields and options.
Check for multiple initiators writing to the same target. This can cause volume corruption if not handled correctly
by the servers.
Note: You can sort the table in the GUI by clicking column headings. By default, the table is sorted by Initiator
address.
See the online help for information about the data fields and options.
To display more detail about a schedule, move the pointer over a schedule entry in the panel. A pop-up window appears, showing additional information.
• To modify a schedule, either double-click the schedule, or select it and click the Modify icon in the Actions
column. The Modify Schedule dialog box opens. Make the changes, then click OK. You can change the:
• Schedule name
• Run date and time
• Number of snapshots to keep
• To disable or enable a schedule, select it and click the flag icon (enable or disable). In the confirmation dialog box, click Yes to confirm.
• To delete a schedule, select it and click the X icon (delete). In the confirmation dialog box, click Yes to delete
the schedule.
See the online help for information about the data fields and options.
To see more detail about a schedule, move the pointer over a schedule entry in the panel. A pop-up window appears, showing additional information.
• To modify a schedule, either double-click the schedule, or select it and click the Modify icon in the Actions
column. The Modify Schedule dialog box opens. Make the changes, then click OK. You can change the:
• Schedule name
• Run date and time
• Number of replicas to keep
• To disable or enable a schedule, select it and click the flag icon (enable or disable). In the confirmation dialog box, click Yes to confirm.
• To delete a schedule, select it and click the X icon (delete). In the confirmation dialog box, click Yes to delete
the schedule.
See the online help for information about the data fields and options.
Monitoring replication
If you are replicating volume data, you should monitor replication operations to ensure that each operation
completes.
From the Monitoring tab in the navigation panel, you can see information about replication operations.
See the online help for information about the data fields and options.
In addition, you should monitor the usage of delegated space. If free delegated space is not available, replica
reserve cannot increase automatically. You can also monitor replica reserve for a volume. Insufficient replica
reserve limits the number of replicas.
For details about displaying information about replication partners, see Monitoring replication partners on page
15-9.
Table 15-3 describes some best practices for monitoring replication between groups. The first column lists the
monitoring condition, the second column describes it, and the third column provides a reference for more
information about addressing the issue.
To monitor outbound replication, click Monitoring and then Outbound Replication. The Outbound
Replication panel provides the following information:
See the online help for information about the data fields and options.
To see more detail about a volume in the list, move the pointer over the volume name. A pop-up window appears,
listing the following:
• Volume name
• Storage pool
• Thin provisioning status (enabled or disabled)
• Reported size, volume reserve, and snapshot reserve
• Current and requested status (online or offline)
• Access type (read-write permission and whether shared for cluster access)
• Replication partner
• Description, if any
Table 15-4 shows the outbound replication operation status. The first column lists the status values, the second
column provides descriptions, and the third column provides solutions.
To monitor inbound replication, click Monitoring and then Inbound Replication. The Inbound Replication
panel provides the following information:
See the online help for information about the data fields and options.
Table 15-5 shows the inbound replication operation status, its description, and how to solve any issues.
To see more detail about a replica set in the list, move the pointer over the replica set name. A pop-up window appears, showing the replica set details.
Click a replica set name or double-click a row to navigate to the Inbound Replicas window for that partner, with the
replica set selected and its detail shown.
See the online help for information about the data fields and options.
Periodically examine the replication duration information. If you see long replication times, make sure the network
connection between the partners is sufficient. A slow network link between the partners can cause long replication
times. If a replication operation makes no progress, the group generates a warning event. Make sure you have
adequate network bandwidth between the groups, in addition to full IP routing. If necessary, increase the network
bandwidth.
In the Transferred column, check how much data you are replicating. You might want to use manual transfer
replication if you are transferring a large amount of data.
To display a list of all the replication partners for a group, click Replication and then Replication
Partners.
The Delegated Space panel shows all the delegated space for all partners, and how much free space is available.
The Replication panel shows all the replications, and their direction, between this group and all configured
partners.
You should monitor the usage of delegated space. If free delegated space is not available, replica reserve cannot
increase automatically.
The General Partner Information panel shows the partner name, IP address, and contact information.
The Replication Status panel shows the status of outbound and inbound replications between this partner and the
group. In this panel, check the amount of free delegated space. If free delegated space is low and the replica volume
reserve for each replicated volume has not reached its maximum (and, therefore, can increase), consider increasing
the delegated space.
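The decision rule described here — consider increasing delegated space when free delegated space is low and at least one replica volume reserve can still grow — can be sketched as a small check. The function name and the 10% "low space" threshold below are illustrative assumptions, not firmware values:

```python
def should_grow_delegated_space(free_mb: int, total_mb: int,
                                reserves_at_max: list,
                                low_threshold: float = 0.10) -> bool:
    """Return True when free delegated space is low (below an assumed
    threshold fraction of the total) and at least one replica volume
    reserve has not yet reached its maximum, so it can still increase."""
    low = free_mb < total_mb * low_threshold
    can_grow = not all(reserves_at_max)   # any reserve still below its max?
    return low and can_grow
```

For example, with 50 MB free out of 1000 MB delegated and one volume whose replica reserve is not at its maximum, the check suggests growing delegated space.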
The Replication Progress panel shows any in-process replication activity between the groups.
For each partner, you can display additional details about its configuration and replication activity.
To open or close the Alarms and Operations panel, click the panel header or the arrow in the header.
Monitoring alarms
The group generates an alarm if a persistent hardware condition occurs in a member (for example, high
temperature or a failed power supply). Alarms help you discover and correct problems before they disrupt
operations. Make sure you investigate all alarms. See About hardware alarms on page 14-2.
Each alarm has a priority level, based on the severity of the problem:
• Warning – Condition that decreases performance or can become critical if you do not correct it. See Displaying
warning alarms on page 15-12.
• Critical – Serious problem that can cause damage to the array or data loss. See Displaying critical alarms on
page 15-11.
When the group generates an alarm:
• The Alarms panel header flashes. Click the header to open and close the panel.
• The group generates a corresponding event message.
• LEDs on the array chassis light.
The Alarms and Operations panel header is divided into two areas: Alarms and Operations. Each area includes icons that match the tabs in the panel.
Each alarm entry includes the severity, the member that reported the alarm (if applicable), and the message text.
Move the pointer over the message text to display more information.
Alarms remain in the Alarms panel until you correct the condition or complete the task. However, the event
message associated with the alarm remains in the event log even after the task is complete or the condition is
corrected.
Click the Acknowledge all icon in the Alarms panel to acknowledge all alarms.
Open the Alarms and Operations panel and click the Critical tab to display Critical alarms.
Table 15-7 shows the data fields available in the Critical tab.
Critical alarms appear on the Critical tab of the Alarms panel. A critical alarm indicates a serious problem that
can cause damage to the array or data loss. Correct the problem that causes a critical alarm immediately.
• Data integrity:
– RAID is not functioning.
– More than one valid RAID set in the array.
– Full lost block table.
• Cache:
– Control module cache has lost data.
– Cache battery is not charging because it exceeds the temperature limit.
– Cache contains data that does not belong to any of the installed disk drives.
• Cooling component fault:
– Array temperature exceeds upper or lower limit.
– Missing fan tray or cooling module.
– Both fans failed on a fan tray or cooling module.
• Hardware component fault:
– Failed NVRAM coin cell battery.
– Control modules are different models.
– Failed critical hardware component.
– Missing or failed operations panel (not all array models).
– Failed array monitoring processor (not all array models).
Open the Alarms and Operations panel and click the Warnings tab to display Warning alarms.
Table 15-8 shows the data fields available in the Warning tab.
Warning alarms appear on the Warnings tab of the Alarms panel. A warning alarm indicates a condition that
decreases performance or can become critical if you do not correct it.
• Data integrity:
– Degraded, but functioning RAID set.
– RAID (volume-level) has lost blocks.
– Installed spare drive does not have enough capacity to replace a RAID set drive.
• Hardware Component:
– Failed non-critical hardware component.
– Component temperature is near upper or lower limit.
– Fan RPMs exceed upper or lower limit.
– Failed power supply fan.
– Missing power supply.
– Power supply does not have power.
• Control Module:
– One installed control module.
– Control module failover occurred.
– Control module has insufficient RAM.
– Lock on secondary control module is open (not all array models).
– Active control module syncing with secondary.
– No communication between control modules.
• Batteries:
– Real-time-clock battery has low charge.
– Cache battery has less than 72 hours of charge.
Monitoring actions
Open the Alarms and Operations panel and click the Actions tab to display any steps that you must complete. See
Monitoring alarms and operations on page 15-10.
Table 15-9 shows the data fields available in the Actions tab.
Some complex operations, such as manual transfer replication, require administrators to perform multiple tasks.
To display incomplete tasks, open the Alarms and Operations panel and click the Actions tab.
If a multi-task operation is in progress, the GUI displays the incomplete tasks. Make sure you complete all multi-task operations.
Table 15-10 shows the data fields available in the Group Operations tab.
The Alarms and Operations panel displays details about in-process operations in the group, including moving
volumes or members to another pool, moving a partner’s delegated space to another pool, or deleting a member.
Depending on the operation, you might be able to perform actions on it, such as canceling it.
Table 15-11 shows the data fields available in the Failback Operations tab.
1. Click Group and then Storage Pools. The Storage Pool Summary window appears.
2. Check the free pool space value in the Storage Pools panel. Dell recommends that free pool space does not fall
below the following, whichever is smaller:
You can increase free pool space by moving volumes from the low-space pool to a different pool. See Moving a
volume to a pool on page 7-5.
To sort the table, click a column heading. The table is sorted by Volume Template by default.
1. Click Group and then Members. The Group Disk Space panel shows the total amount of free space in the
group and in each pool (if applicable).
The Group Members panel lists all members, the pool to which each member belongs, the capacity and
amount of free space, RAID policy, number of disks, status, PS Series Firmware version (should be the same
for all members), and the number of iSCSI connections to each member. (This indicates the number of volumes
or snapshots with data on that member that are connected to an initiator. Nothing connects directly to a
member.)
• Member status – Table 15-13 describes member status. If a member is offline, investigate the cause.
Volumes with data on an offline member are also offline. If a member has a problem, double-click the
member to display additional information.
• Low free space – Low free space in a member might indicate that overall group space is low. You can free
space in a member by adding more members to the same pool (the group distributes volume data across the
pool members).
In the General Member Information panel, check the RAID status. Table 15-12 describes the RAID status values for a member: the first column lists the status values, the second column provides descriptions, and the third column provides solutions where appropriate.
A red X over a hardware component indicates uninstalled or unconfigured hardware. A warning or error status
symbol in the array graphic indicates a failed or failing component. Move the pointer over a component to show
status details.
Table 15-13 shows member status, description, and how to solve any issues.
Click Group, then expand Members, then select the member name, and then click the Status tab.
The Member Space panel shows the total amount of usable space on the member, how much space is used by
volumes, snapshots, and replicas, and the amount of free space, numerically and in a graphic.
In addition, to help you identify a member, you can make the fan tray LED and the control module ERR LED on
the member chassis flash.
• To make a member’s LED flash, click Group, then expand Members, then select the member name, and then
click Start LEDs flashing.
• To stop flashing a member’s LED, click Group, then expand Members, then select the member name, and then click Stop LEDs flashing.
Warning: Never turn off power to a group member unless the member has been cleanly shut down. See Shutting
down a member on page 6-14.
To display the member enclosure information, click Group, then expand Members, then select the member name,
and then click the Enclosure tab.
A member has two or three power supplies. Most PS Series arrays use power supplies that have integrated cooling
modules.
A member can survive one power supply failure. Replace failed power supplies as soon as possible.
For proper cooling, do not remove a power supply until you have a replacement.
For information about replacing a power supply, see the Hardware Maintenance manual for your array model or
contact your PS Series support provider.
The Power Supplies panel shows the status of the power supplies. The number and type of hardware components
shown depends on your array model.
Table 15-14 shows power supply status and possible solutions. The first column lists the status values, the second
column provides descriptions, and the third column provides solutions.
A member has two or three cooling modules and multiple fans. Most PS Series arrays use power supplies that have
integrated cooling modules.
Periodically, feel the room temperature where the hardware is located and make sure that the room is sufficiently
cool and ventilated. Also make sure the fan trays and cooling modules have no red LEDs, and monitor the member
temperature.
A member can survive one cooling module failure. Replace failed cooling modules as soon as possible.
The Cooling Fans panel shows the status of the fans on the cooling modules. Table 15-15 shows the cooling fan
status. These status values apply to array models with combination power supply and cooling modules. The first
column lists the status values, the second column provides descriptions, and the third provides solutions.
The Temperature Sensors panel shows the current temperature for the various array controllers and processors, in
addition to the normal temperature range. Table 15-16 describes the array temperature status, descriptions, and how
to solve any issues.
Multiple fan failures increase the array temperature. A high temperature results in event messages. The array might
shut down before damage occurs.
Some PS Series arrays also show the ambient temperature, which is calculated in Celsius from the two sensors that report the highest temperatures.
Some array models include redundant channel cards. An array continues to operate if a channel card fails. You can
replace the failed channel card with no impact on group operation.
Table 15-17 shows channel card status, descriptions, and how to solve any issues.
For information about replacing channel cards, see the Hardware Maintenance manual for your array model or
contact your PS Series support provider.
Some array models include an Enclosure Interface Processor (EIP) card. An array continues to operate if the
EIP card fails. You can replace the failed EIP card with no impact on group operation.
In the Member Enclosure window, the EIP card panel shows the EIP card status.
Table 15-18 describes EIP card status, descriptions, and how to solve any issues.
For information about replacing the EIP card, see the Hardware Maintenance manual for your array model or
contact your PS Series support provider.
In a dual control module array, one control module is active (serving I/O), and the other is secondary (it mirrors cache data from the active control module). Upon startup, either control module can be designated active or secondary, regardless of its previous status.
Under normal operation, the status of a control module (active or secondary) does not change, unless you restart the
member.
In a single control module array, if the control module fails, the member is offline.
In a dual control module array, if the active control module fails, the secondary control module becomes active and
begins serving I/O. This is called control module failover. I/O should continue if you connect cables to the newly
active control module.
For information about replacing control modules, see the Hardware Maintenance manual for your array model or
contact your PS Series support provider.
To display control module information, click Group, then expand Members, then select the member name, and
then click the Controllers tab.
• Cache battery status and NVRAM battery status. See Table 15-20 and Table 15-21 for descriptions of battery
status and possible solutions where appropriate.
• Model number.
• Boot ROM version.
• PS Series firmware version.
An empty slot means that a control module is not installed or has failed.
For information about replacing a control module, see the Hardware Maintenance manual for your array model or
contact your PS Series support provider. Do not remove a failed control module until you have a replacement.
The Memory Cache panel displays the cache mode. Control module and battery status affect the cache mode.
Write-through mode might impair performance. Identify why the cache is in write-through mode and correct the
problem, if necessary. See About write cache operations on page 6-11.
Table 15-19 describes the control module status, descriptions, and how to solve any issues.
Table 15-20 describes the control module cache battery status, descriptions, and how to solve any issues.
Table 15-21 describes the control module NVRAM coin cell battery status, descriptions, and how to solve any
issues. Not every array has an NVRAM battery.
To display the disk drive information, click Group, expand Members, then select the member name, and then click
the Disks tab.
The Disk Array Summary panel shows the disk drives in the member. The number and type of drives shown
depends on your array model.
The Installed Disks panel shows more information about each disk, including the slot, type, model and revision,
size, status, and errors. Closely monitor drives with errors.
Table 15-22 shows disk drive status, descriptions, and how to solve any issues.
Warning: A disk drive failure in a RAID 5 or RAID 10 set that is degraded might result in data loss.
When a disk drive fails, the result depends on whether a spare drive is available:
• If a spare disk drive is available: Data from the failed drive is reconstructed on the spare. During the reconstruction, the RAID set that contains the failed drive is temporarily degraded.
• If a spare disk drive is not available, and the RAID set has not reached the maximum number of drive
failures: The RAID set that contains the failed drive is degraded. For RAID 5, RAID 50, or RAID 6,
performance might decrease.
• If a spare disk drive is not available, and the RAID set has reached the maximum number of drive
failures: The member is set offline, and any volumes and snapshots that have data stored on the member are
set offline. Data might be lost and must be recovered from a backup or replica.
After you replace the failed drive:
• If a spare disk drive was used: The new drive automatically becomes a spare, with no effect on performance.
• If a RAID set was degraded: Data is automatically reconstructed on the new drive and performance goes
back to normal after reconstruction.
• If a member was offline because of multiple RAID set drive failures: Any volumes and snapshots with data on the member are set offline and data might be lost.
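The disk-failure outcomes listed above can be summarized as a small decision function. This is an illustrative sketch of the documented behavior, not firmware logic; the function name and status strings are hypothetical:

```python
def raid_failure_outcome(spare_available: bool, at_max_failures: bool) -> str:
    """Map a single disk failure to the documented outcome:
    - spare available: rebuild on the spare (RAID set temporarily degraded)
    - no spare, below the maximum failure count: RAID set degraded
    - no spare, at the maximum failure count: member (and its volumes) offline
    """
    if spare_available:
        return "reconstructing-on-spare"
    if not at_max_failures:
        return "degraded"          # performance might decrease (RAID 5/50/6)
    return "member-offline"        # recover data from a backup or replica
```

The actual maximum number of tolerated drive failures depends on the member's RAID policy, which this sketch takes as a given.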
In some cases, a member might detect a problem with a disk drive. The member automatically copies the data on
the failing disk drive to a spare disk drive, with no impact on availability and little impact on performance. The
group generates event messages informing you of the progress of the copy-to-spare operation. I/O is written to both
drives until the copy-to-spare operation completes. If the disk drive completely fails during the operation, data is
reconstructed on the spare using parity data, as usual.
Replace any failed disks immediately. For information about replacing disk drives, see the Hardware Maintenance
manual for your array model or contact your PS Series support provider.
If you experience network problems, group members might lose the ability to communicate with each other over
the network. In such a group, some management operations are not allowed. For example, you cannot change the
IP addresses of an isolated member.
If the members of a group cannot communicate, identify and correct the network problems. This restores the group
to normal full operation, including network communication.
To display the network information, click Group, expand Members, then select the member name, and then click
the Network tab.
• Operational status – This is the current status of the network interface and can be:
– up – Operational, connected to a functioning network, configured with an IP address and subnet mask, and
enabled.
– down – Not operational, not connected to a functioning network, not configured with an IP address or
subnet mask, or disabled.
• Requested status – This status is set by administrative action:
– enabled – Configured and serving I/O.
If the operational status is down, but the requested status is enabled, identify and correct the error.
To protect against network interface or port failure, connect multiple network interfaces on both control
modules to the network.
The IP Configuration panel shows each interface and its IP address, netmask, MAC address and description, if any.
The iSCSI Connections panel shows information about the initiator address, which volume or snapshot it is
connected to (Target column), how long the connection has been active, and which Ethernet port the initiator is
using.
Check for multiple initiators writing to the same iSCSI target. This can cause target corruption if not handled
correctly by the servers.
To display information for all group volumes, click Volumes in the lower-left panel and then click Volumes in the
far-left panel. The Volume Summary window appears.
To display detailed information about a specific volume, expand Volumes and select the volume name. The
Volume Status window appears. To display snapshot information, expand a volume name and select a snapshot
timestamp. The Snapshot Status window appears.
• For volumes that are not thin-provisioned, check for in-use space that is near a volume’s reported size. When
free volume space is exhausted, writes to the volume fail, potentially disrupting applications, but the volume
remains online.
To increase free volume space, increase the reported volume size. See Increasing the reported size of a volume
on page 10-5.
• For thin-provisioned volumes, check for in-use space that is near the maximum in-use space setting.
Thin-provisioned volumes are set offline if their maximum in-use space is set to less than 100% and a write
exceeds this value. If the maximum in-use space is set to 100%, and a write exceeds this value, the write fails,
but the volume remains online.
To increase free volume space, increase the reported volume size. See Increasing the reported size of a volume
on page 10-5. You can also increase free volume space by increasing the value of the maximum in-use space
value up to 100%.
• Check the snapshot reserve free space and the space recovery setting. If free snapshot reserve is exceeded, the
space recovery setting controls whether the oldest snapshots are deleted to free space for new snapshots, or the
volume and its snapshots are set offline.
To increase the snapshot reserve or change the space recovery setting, see About snapshot reserve settings on
page 11-2.
• Requested status – Administrator-applied setting for the volume or snapshot. For example, an administrator
can set a volume online or offline. See Table 15-23.
• Current status – Actual status of the volume or snapshot, regardless of the requested status. See Table 15-24.
Under normal conditions, the requested status and current status are the same. However, an event in the group can
result in a current status that differs from the requested status.
For example, if an administrator sets a volume online, but a member containing volume data is shut down, the
requested status is online, but the current status is offline-member-down. Always investigate when the
current status is different from the requested status.
The group sets a volume or snapshot offline if network problems occur or if a member that contains volume data is
not available. If the group sets a volume offline because of a problem, the group also sets all its snapshots offline.
The group also sets a thin-provisioned volume offline if the volume’s maximum in-use space setting is less than
100% and a write exceeds this value.
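The write-handling rules for thin-provisioned and fully provisioned volumes described above can be sketched as a decision function. The function name and the returned status strings are illustrative assumptions, not firmware states:

```python
def write_outcome(in_use_pct: float, max_in_use_pct: float, thin: bool) -> str:
    """Outcome of a write against a volume's space limit, per the rules above:
    - thin-provisioned, maximum in-use space < 100%: volume is set offline
      when a write exceeds the limit
    - otherwise (thin at 100%, or not thin-provisioned): the write fails,
      but the volume remains online
    """
    if in_use_pct < max_in_use_pct:
        return "write-succeeds"
    if thin and max_in_use_pct < 100.0:
        return "volume-offline"
    return "write-fails-volume-online"
```

Raising the reported size, or raising the maximum in-use space setting toward 100%, moves the limit and therefore the point at which either outcome occurs.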
In the Volume Summary window, check the current volume status in the Volumes panel. The requested status
appears when you move the pointer over the volume in the Volumes panel. If the requested status and current status
are not the same, investigate further.
In the Volume Snapshots window, check the current snapshot status in the Snapshots panel. The requested status
appears when you move the pointer over the snapshot in the Snapshots panel. If the requested status and current
status are not the same, investigate further.
Table 15-23 shows the possible values for the requested status for a volume or snapshot, descriptions, and how to
solve any issues.
Table 15-24 shows the possible values for the current status for a volume or snapshot, descriptions, and how to
solve any issues.
You can start the Performance Monitor from the Tools menu, or from the Members panel in the Group Manager
GUI.
a. Expand Members
e. Click OK
The data is displayed in the Performance Monitor main window (Figure 15-1).
2. On the Disks or Network tab, click a row for a drive or an Ethernet port.
The Performance Monitor window opens and displays the selected data (Figure 15-1).
Recommendation: While it is possible to open several Performance Monitor windows at the same time, Dell
recommends running only one to avoid possible unexpected results.
Figure 15-1: Performance Monitor
a. Expand Members
Icons in the header of each statistics panel perform additional operations.
Note: The display mode you select applies to all panels currently open in the Performance Monitor window.
In the chart view (for both linear and logarithmic scales), when you click inside the data window, the cursor
changes to a crosshair. You can click at any place along the graph to display the details for that time slice.
For example, you can click the crosshairs on the peak of a graph line to display the data for that moment in time in the panel on the left (Figure 15-2). In this example, at the peak of the graph, the group is processing 303 output requests.
On the left side of the window within each statistics panel, you can change the colors used in the display. Click a colored box to open the Select a color dialog box (Figure 15-3). You can choose from a predefined swatch panel, or open the HSB or RGB tabs to specify a custom color value. Click OK when done.
• The time interval between data points. The default is 1 second, and the maximum is 60 seconds. You can
choose from a list of predefined intervals (1, 5, 10, 30, and 60) or enter any integer value between 1 and 60; for
example, 45.
• The number of data points to save. The default is 100 and the maximum is 1000. You can choose from a list of
predefined values (100, 250, 500, or 1000) or enter any integer value between 100 and 1000; for example, 300.
As you choose different values for the interval and number of data points to save, the dialog box shows you the
total amount of time you can save with that combination of values; for example, using an interval of 60 seconds
and saving 300 data points, you can store up to 5 hours of statistics.
Click OK to save your changes. The changes take effect immediately; you do not have to stop and restart polling.
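The limits above, and the retention arithmetic in the example (a 60-second interval with 300 saved data points spans 5 hours), can be expressed as a small helper. The function name is hypothetical:

```python
def retention_seconds(interval_s: int, points: int) -> int:
    """Total time span the Performance Monitor can retain, given a polling
    interval (1-60 seconds) and a number of saved data points (100-1000)."""
    if not 1 <= interval_s <= 60:
        raise ValueError("interval must be 1-60 seconds")
    if not 100 <= points <= 1000:
        raise ValueError("data points must be 100-1000")
    return interval_s * points

# The example from the text: 60 s * 300 points = 18000 s = 5 hours.
assert retention_seconds(60, 300) == 5 * 3600
```

The same arithmetic is what the dialog box shows interactively as you change the two values.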
• SAN HeadQuarters – Enables you to monitor multiple PS Series groups from a single graphical interface. It
gathers and formats performance data and other vital group information. Analyzing the data might help you
improve performance and more effectively allocate group resources.
Visit the customer support website for more information about SAN HeadQuarters.
• Multiple Router Traffic Grapher (MRTG) – A supplement to the Performance Monitor that enables you to monitor the I/O activity, latency, and throughput of volumes and members.
For more information on using MRTG, see the CLI Reference manual.
To launch the EqualLogic Customer Support website from the GUI, click Tools and then Customer Support.
For more information about creating or logging into a support account, and receiving technical support, see
Technical Support and Customer Service on page xvii.
In the Member Service window, component and disk information is specific to your array model.
You can display specific information about member hardware (for example, a component model, revision, or serial
number). You can also display the Dell service tag identification for each individual array (not available on all
arrays).
Note: Do not collect diagnostic information unless instructed by your support provider.
a. Click Group, then Group Configuration, and then the Notifications tab.
b. In the E-Mail Event Notifications panel, under E-Mail Configuration Settings, click Add, enter the IP
address for an SMTP server (or e-mail relay), and click OK.
Use the ip_address:port format to specify a port number other than the default (25).
You can enter up to three IP addresses. The group uses only one SMTP server or e-mail relay at a time. The
first server you specify is the default; if it is not available, the group tries the remaining servers in the order
listed. Click the up and down arrows to change the order.
c. Optionally, in the Sender e-mail address field, enter the address that appears in the message “From”
field.
2. Optionally, configure E-Mail Home notification. See Configuring E-Mail home on page 14-4. The support
provider address for E-Mail Home receives the reports.
3. Click Tools and then Diagnostic reports.
• Select whether to send reports to your support provider (requires E-Mail Home), one or two e-mail
addresses (separated by a comma), or both.
Note: Do not change the default settings unless your support provider instructs you to change them.
5. Click OK.
To monitor progress while the group generates diagnostic reports, check the Alarms and Operations panel.
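Before entering an SMTP server or relay address in step b, you may want to confirm that it answers on the expected port. The sketch below is a hypothetical pre-check using Python's standard smtplib; the host, port, and function name are placeholders for your environment and are not part of Group Manager.

```python
# Hypothetical pre-check for an SMTP server or e-mail relay before you
# enter its address in the E-Mail Event Notifications panel. The default
# SMTP port is 25; use the ip_address:port format in the GUI for any other port.
import smtplib

def smtp_reachable(host: str, port: int = 25, timeout: float = 5.0) -> bool:
    """Return True if the server completes an EHLO/HELO exchange (reply 250)."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as server:
            code, _banner = server.ehlo()
            return code == 250
    except (OSError, smtplib.SMTPException):
        # Connection refused, timed out, or protocol error: not usable.
        return False
```

If the check fails, verify the relay's firewall rules and port number before troubleshooting notification settings on the group.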
Appendix A Legal notices
This appendix lists the third-party copyrights for software used in the PS Series product.
This product contains portions of the NetBSD operating system:
For the most part, the software constituting the NetBSD operating system is not in the public domain; its authors retain their
copyright.
Copyright © 1999-2001 The NetBSD Foundation, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following
conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the distribution.
All advertising materials mentioning features or use of this software must display the following acknowledgement:
This product includes software developed by the NetBSD Foundation, Inc. and its contributors.
Neither the name of the NetBSD Foundation nor the names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
THE FOUNDATION OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This code is derived from software contributed to The NetBSD Foundation by Charles M. Hannum and by Jason R. Thorpe of
the Numerical Aerospace Simulation Facility, NASA Ames Research Center.
This code is derived from software contributed to The NetBSD Foundation by John T. Kohl and Charles M. Hannum.
This code is derived from software contributed to The NetBSD Foundation by Kevin M. Lahey of the Numerical Aerospace
Simulation Facility, NASA Ames Research Center.
This code is derived from software contributed to The NetBSD Foundation by Jun-ichiro Hagino.
This product includes software developed by Christopher G. Demetriou for the NetBSD Project.
Copyright © 1995, 1996, 1997, 1998 Christopher G. Demetriou.
This code is derived from software contributed to The NetBSD Foundation by Luke Mewburn.
This code is derived from software contributed to The NetBSD Foundation by Klaus Klein.
This code is derived from software contributed to The NetBSD Foundation by Jonathan Stone.
This code is derived from software contributed to The NetBSD Foundation by Jason R. Thorpe.
This code is derived from software contributed to The NetBSD Foundation by UCHIYAMA Yasushi.
This product includes software developed for the NetBSD Project by Wasabi Systems, Inc.
Copyright © 2000-2001 Wasabi Systems, Inc. All rights reserved.
This product includes software developed by the University of California, Berkeley and its contributors. This product includes
software developed by the University of California, Lawrence Berkeley Laboratory.
Copyright 1985-1995 The Regents of the University of California.
Copyright 1997-2000 Niels Provos.
This code is derived from software contributed to Berkeley by Ralph Campbell.
A–1
Group Administration Legal notices
Copyright © 1990,1994 The University of Utah and the Computer Systems Laboratory (CSL).
Copyright © 1991 Bell Communications Research, Inc. (Bellcore).
Copyright © 2000 Caldera Systems, Inc. All rights reserved.
Copyright © 1995 - 2000 Kungliga Tekniska Högskolan.
(Royal Institute of Technology, Stockholm, Sweden). All rights reserved.
Copyright © 1993-1995 HEWLETT-PACKARD COMPANY.
Copyright © 1995-1997 Eric Young All rights reserved.
Copyright © 1992 Simmule Turner and Rich Salz. All rights reserved.
Copyright © 1999 - 2001, PADL Software Pty Ltd. All rights reserved.
Copyright © 1985 - 1988 by the Massachusetts Institute of Technology.
Copyright © 1995 by Wietse Venema. All rights reserved.
Copyright © 1999 The OpenSSL Project. All rights reserved.
Copyright © 1992 – 1999 Theo de Raadt. All rights reserved.
Copyright © 1999 Dug Song. All rights reserved.
Copyright © 2000-2002 Markus Friedl. All rights reserved.
Copyright © 2001 Per Allansson. All rights reserved.
Copyright © 1998 CORE SDI S.A., Buenos Aires, Argentina.
Copyright © 2001-2002 Damien Miller. All rights reserved.
Copyright © 2001 Kevin Steves. All rights reserved.
Copyright © 1999 Aaron Campbell. All rights reserved.
Copyright © 2002 Nils Nordman. All rights reserved.
Copyright © 2000 Todd C. Miller. All rights reserved.
Copyright © 1995, 1996 by David Mazieres.
Copyright © 2000 Zembu Labs, Inc. All rights reserved.
Copyright © 2000 Takuya SHIOZAKI. All rights reserved.
Copyright © 1992 Keith Muller.
Copyright © 1994, Jason Downs. All rights reserved.
Copyright © 1997 Matthew R. Green. All rights reserved.
Copyright © 1999 Citrus Project. All rights reserved.
Copyright © 1990-2, RSA Data Security, Inc. All rights reserved.
Copyright © 1995 by International Business Machines, Inc.
Copyright © 1996 by Internet Software Consortium.
Copyright © 1995, 1999 Berkeley Software Design, Inc. All rights reserved.
Copyright © 1993 Carlos Leandro and Rui Salgueiro Dep. Matematica Universidade de Coimbra, Portugal, Europe.
Copyright © 1992, 1993, 1994 Henry Spencer.
Copyright © 1986-1991 by Sun Microsystems, Inc.
Copyright © 1993 Martin Birgmeier.
Copyright © 1991 by AT&T.
Copyright © 1997 Frank van der Linden. All rights reserved.
Copyright © 1999 Michael Graff. All rights reserved.
This product includes software developed by Alistair G. Crooks.
Copyright © 1999 Alistair G. Crooks. All rights reserved.
Copyright © 2001 Cerbero Associates Inc.
Glossary
This glossary defines the storage technology terminology that is specific to EqualLogic. If a term has unique
meaning in the context of hardware or of a specific software application, that context is indicated.
.bcd
(Auto-Snapshot Manager) The file extension used by ASM to identify a Smart Copy backup document.
.pvss
(Auto-Snapshot Manager) The file extension for a backup document relating to a mounted Smart Copy Set
(post-VSS).
access control list (ACL)
A list of permissions attached to an object such as a storage volume. See access control record.
access control record
The means by which you restrict and control access to a PS Series volume. To ensure that only authorized computers
and users can access a PS Series volume (iSCSI target), use access control records. You can restrict volume access by
using any combination of specific IP addresses, iSCSI initiator names, or CHAP user names and passwords (secrets).
access credentials
Identity information that is checked against access control records. A computer must match the credentials
specified in an access control record in order to connect to a volume. See access control record.
actions pane
(Auto-Snapshot Manager) The part of the ASM GUI main window that itemizes operations (actions). When you
select a node in the Console tree, available operations for that node appear in the Actions Pane.
active control module
(hardware) In a dual control module array, the control module that is actively serving I/O on a network. If the active
control module ceases to function, control fails over to the secondary control module. See secondary control module.
application component
(Auto-Snapshot Manager) Any part of an application that ASM supports for Smart Copy operations.
application component (node)
(Auto-Snapshot Manager) Objects in the ASM console tree that represent components of software applications,
such as SQL Server or Microsoft Exchange.
applications
(Auto-Snapshot Manager) Installed applications for which a VSS writer is available, such as Microsoft Exchange
or SQL Server.
applications master node
(Auto-Snapshot Manager) The location of applications and their components in the ASM console tree.
apply logs
(Auto-Snapshot Manager) An ASM option that enables you to apply database transaction logs manually to a
restored database.
Glossary–1
Group Administration Glossary
array
(hardware) A PS Series storage array is a completely self-contained storage unit that includes multiple disk drives
configured in a highly available RAID set with dual control modules and redundant power supplies and cooling
modules.
array serial number
(Auto-Snapshot Manager) A unique PS Series array identification string that is encoded in the array's hardware.
See service tag.
ASM/ME
(Auto-Snapshot Manager) Auto-Snapshot Manager/Microsoft Edition, a snap-in console application for the
Microsoft Management Console that enables you to administer Smart Copies.
ASM/VE
(Auto-Snapshot Manager) Auto-Snapshot Manager/VMware Edition, a web-based application that works with
VMware virtual environments to enable you to administer Smart Copies.
ASMCLI
(Auto-Snapshot Manager) A set of Smart Copy operation commands that you can execute at the Windows
command prompt.
automatic RAID configuration
Internal process that configures the user-selected RAID policy on a member's disk array.
backup document
(Auto-Snapshot Manager) A file residing on the host that describes a Smart Copy on the PS Series array.
backup type
(Auto-Snapshot Manager) Specifies the backup behavior type that you want to create, either copy or full. Microsoft
Exchange Storage Groups support only the copy backup type.
backup validation on startup
(Auto-Snapshot Manager) Automatically validate Smart Copies when ASM is started.
bandwidth
Rate at which an I/O subsystem or component can transfer bytes of data. Also called the transfer rate.
base volume
1. (Auto-Snapshot Manager) A PS series array volume mounted on the computer and reachable through its
Windows-assigned drive letter (such as G:) or mount point.
2. (Group Manager) A volume that has snapshots. Snapshots depend on the base volume; if the base volume is
deleted, its snapshots are also removed.
broken Smart Copies
(Auto-Snapshot Manager) Smart Copies that ASM can no longer validate.
CHAP
Challenge Handshake Authentication Protocol, a network login protocol that uses an encrypted challenge-response
mechanism. Used to limit access to volumes and snapshots to hosts that supply the correct account name and
password. CHAP is also used for login/administrator accounts. See access credentials.
CHAP account
An account that uses CHAP configured on an external RADIUS server.
CHAP properties
(Auto-Snapshot Manager) An ASM configuration option that enables you to specify CHAP credentials for VSS or
VDS access to groups and for computer access to Smart Copies for importing.
checksum verification
The process of verifying the integrity of Microsoft Exchange Smart Copies by using eseutil.exe, the Windows
database maintenance utility.
cloning
The process of creating a new copy of an object such as a volume. The new object is the same type and has the
same contents as the original. Contrast with a thin clone.
collection
(Auto-Snapshot Manager) Related groups of volumes or application components. These objects are represented by
nodes in the ASM console tree under the Collections master node. Create collections of related volumes that you
copy frequently; this ensures that ASM creates all the relevant Smart Copies simultaneously in one set. You can
create, modify, or delete a collection, create a Smart Copy Set for the collection, or configure a schedule for the
collection.
command generation
(Auto-Snapshot Manager) The process of using the ASM GUI to generate an ASMCLI command.
console pane
(Auto-Snapshot Manager) The section of the ASM GUI that contains the console tree. This pane contains a
collapsing, branched structure of related groups of clickable objects on which you can perform many different
operations. See console tree.
console tree
(Auto-Snapshot Manager) A hierarchical structure of branched nodes representing objects on which ASM can
perform operations. Nodes represent objects such as applications, volumes, and collections. The ASM console tree
consists of related groups of objects (nodes) organized in a branching tree structure. Depending on the status of a
node, you are presented with a menu of actions in the Actions Pane. See console pane.
control module
(hardware) The processor and interface component in a PS Series array. A control module contains the PS Series
firmware in flash memory and provides temporary power continuity for data stored in cache memory. It has
multiple network interfaces and an optional serial port. An array can contain two hot-swappable, dual redundant
controllers. The active control module serves I/O, while the secondary control module mirrors data in its cache.
cooling module
(hardware) Hot-swappable hardware component, optionally integrated with a power supply, that provides cooling
to a PS Series array. Arrays are shipped with redundant cooling modules. An array can continue to operate if one
cooling module fails.
custom snapshot collection
(Auto-Snapshot Manager) One or more snapshots created at the same time through a multi-volume snapshot
operation.
defer verification
(Auto-Snapshot Manager) When creating a replica, this operation causes ASM to defer Checksum Verification and
Soft Recovery to a later time. You can invoke the procedure manually or schedule it for a later time.
delegated space
(Group Manager) Space on a group set aside to store received replicas for a partner.
demote
(Group Manager) Convert a volume in a replication configuration into a replica set.
group access
1. (Auto-Snapshot Manager) The process of enabling computer access to a PS Series group by configuring and
supplying credentials. See access credentials and CHAP.
2. (Group Manager) Access to the Group Manager UIs for management purposes. Access the GUI or CLI through
the network. You can also access the CLI through the optional serial port on the controller.
3. (hardware) Access to the group storage. iSCSI initiators access group iSCSI targets through the group IP
address (discovery address). Access to a specific target is controlled through the access controls assigned to the
target.
group administrator
An account on a PS Series group that has permission to manage all features and objects in a PS Series group,
including configuring replication partners. The default group administrator account is grpadmin. See pool
administrator.
group IP address
The network address that iSCSI initiators use to discover iSCSI targets and administrators use to access the group.
See management IP address.
group member
See member.
group name
A unique identifier assigned to a group.
Host Integration Tools
(HIT) A suite of applications that enable you to configure and manage an array. It includes ASM/ME, DSM
(Multipath I/O Device Specific Module), and RSW (Remote Setup Wizard).
hot-swap
(hardware) Removing a redundant component and installing a replacement while the array is running.
imported Smart Copy credentials
(Auto-Snapshot Manager) When a computer imports a Smart Copy, it automatically presents default credentials,
which must match one of the Smart Copy’s access control records.
iSCSI host bus adapter (HBA)
(hardware) An iSCSI initiator implemented as a physical I/O adapter through which a computer connects to a
physical storage device such as a volume.
iSCSI initiator
The hardware or software component in a computer that starts the transfer of information to or from an iSCSI
target. See iSCSI host bus adapter (HBA).
iSCSI portal verification
(Auto-Snapshot Manager) Verifying that ASM can connect to arrays to which it previously connected when last
active.
iSCSI target
An iSCSI block storage device that you access through an iSCSI initiator. The volumes and snapshots in a PS
Series group appear on the network as individual iSCSI targets.
iSCSI target discovery
An iSCSI protocol exchange that finds the available iSCSI targets from a target portal or discovery address.
jumbo frames
Ethernet frames that carry more than 1,500 bytes of payload (MTU). Enabling jumbo frames might improve
performance in certain configurations.
keep count
1. (Auto-Snapshot Manager) The maximum number of snapshots or replicas retained by a Smart Copy schedule.
2. (Group Manager) The user-established limit on the number of snapshots or replicas created by using a schedule
on the PS Series group.
latency
The time required to complete a specific I/O operation.
load balancing
Automatic distribution of I/O across resources to improve performance.
local replication reserve
(Group Manager) Storage space on a primary group that is used to record changes to a volume when replication is
configured, and optionally to store a failback snapshot for a volume.
management IP address
In a group with a management network configured and enabled, an address used exclusively to log into the Group
Manager GUI or CLI.
management network
An optional management network separates iSCSI traffic (volume I/O) from management traffic (GUI and CLI
sessions, and other group management communications and intergroup operations).
manual restore
(Auto-Snapshot Manager) Mounting a Smart Copy and manually restoring data items.
manual transfer replication
(Host Integration Tools, Group Manager) Replication done through transportable media instead of over a network.
Used in cases where the network link between replication partners is too slow or otherwise unsuitable for
transferring large amounts of data.
Manual Transfer Utility
A stand-alone utility from EqualLogic that performs volume replication using transportable media, instead of the
network. The utility has both graphical and command line user interfaces.
member
A PS Series array configured into a PS Series group. Groups can have several members.
member name
The unique name used to identify a specific member within a group.
membership password
The password required to add an array to a group, making it a member of the group.
merging pools
(Group Manager) The process of moving all the members and volumes from a source pool to a destination pool,
deleting the source pool on completion.
mount
(Auto-Snapshot Manager) To create a connection to an iSCSI volume (clone, replica, or snapshot) and make its file
system accessible to the operating environment.
MPIO
Acronym for multipath I/O. Multiple connections from an iSCSI initiator to targets on a PS Series Group over the
network to provide redundancy and enhance performance. See device-specific module (DSM).
MPIO properties tab
(HIT) An EqualLogic-specific tab on the iSCSI Initiator properties page that provides status information about
multipathing sessions. The ehcmn.log file is a rotating log file (such as ehcm0.log) that contains the data displayed
in the MPIO properties tab. See MPIO.
notification
(Auto-Snapshot Manager, Group Manager) The method that a group uses to inform you of significant events
through e-mail, remote syslog files, and SNMP traps.
path failover
(hardware, Host Integration Tools) Relocating data traffic from a failed network path to a functional network path.
This can occur automatically if the computer's software and hardware are configured for failover. MPIO provides
server-side path failover.
path uptime
The elapsed time during which a session is active, displayed in the MPIO properties tab.
pool
Storage space provided by one to four group members. You assign volumes to a specific pool, and load balancing
operates only within pools. See load balancing and merging pools.
pool administrator
(Group Manager) An account on a PS Series group that has permission to manage objects only in a specific pool or
set of pools for a group. Compare to group administrator.
power supply
(hardware) Hot-swappable hardware component, sometimes integrated with a cooling module, that enables you to
connect a PS Series array to a source of power. Arrays are shipped with redundant power supplies. An array can
continue to operate if one power supply fails. Dell recommends that you connect power supplies to different
sources of power, preferably on separate circuits.
primary group
(Group Manager) In a replication partnership, the group that contains the original volume. See secondary group.
primary volume
(Group Manager) A volume configured for replication to a replication partner.
promote
(Group Manager) To convert a replica set in a replication configuration into a volume. See demote.
PS Series array
A single EqualLogic iSCSI storage unit, usually configured as a PS Series group. You can join multiple PS Series
arrays into a larger PS Series group and manage them as a single iSCSI SAN.
PS Series group
An iSCSI storage entity comprised of one or more PS Series storage arrays that you access through a single IP
address and manage as a storage area network (SAN).
Queue-depth reporting
(SAN HQ) The average number of outstanding I/O operations at the start of each incoming I/O operation.
RAID policy
The type of RAID level (such as RAID 10 or RAID 6) that you configure for a member, coupled with the sparing
policy (spares or no spares).
read-only account
(Group Manager) An administration account that only provides read-only access to group information.
recovery volume
Temporary volume created by promoting an inbound replica set as part of a failover operation. You can also create
recovery template volumes and recovery thin clones. See promote.
replica
A point-in-time representation of a PS Series volume. The original volume and its replica are located on different
PS Series groups (replication partners) potentially separated at some geographical distance to facilitate disaster
tolerance.
replica collection
(Group Manager) The set of replicas resulting from each replication of a volume collection.
replica collection set
(Group Manager) The set of replica collections for a volume collection.
replication partner
(Group Manager) A group that is configured to send or receive replicas from another partner.
replica reserve
(Group Manager) Portion of the delegated space on a replication partner that is set aside for the replica sets for a
specific volume. You configure the replica reserve for the volume on the primary group, but the actual replica
reserve is on the secondary group.
replica set
(Group Manager) Set of complete replicas for a volume, template volume, or thin clone volume.
replication
(Group Manager) Copying volume data (only deltas) from the primary group, where the volume is stored, to the
secondary group. Groups can be an unlimited distance apart. You can recover data from the secondary group, if
necessary.
restore
secondary group
(Group Manager) In a replication configuration, the group that receives replicas of a source volume. See primary
group.
service tag
(Group Manager) A unique ID assigned by Dell to particular equipment, for use by customer service.
sessions
(Group Manager) In a multipath configuration, the number of connections made to targets that span multiple
members. You can configure the number of sessions to manage bandwidth use.
shrink (volume)
(Group Manager) Decreasing the reported size of a volume.
Smart Copy
(Auto-Snapshot Manager) Point-in-time, application-consistent copy of objects in a PS Series group. Smart Copies
can be of type snapshot, clone, or replica, depending on the edition of Auto-Snapshot Manager that you are using.
snapshot
A point-in-time representation of a PS Series iSCSI volume, seen on the network as an iSCSI target. A snapshot is
maintained in the array as deltas from the original volume.
snapshot collection
A set of snapshots resulting from a snapshot operation on a volume collection. See volume collection.
spare disk
(hardware) An unused disk in a PS Series array that is used automatically to replace a failed disk.
storage pool
(Group Manager) See pool.
template volume
Read-only volume from which you create thin clones.
thin clone
Volume that shares space with a template volume. Thin clones provide an efficient use of storage space for
configurations with multiple volumes that have a large amount of common data.
thin provisioning
(Group Manager) The process of optimizing the use of storage space in a group through over-allocation. An object
(such as a volume) is allocated less physical space than the size the group reports to any computer connected to the
volume.
torn Smart Copy
(Auto-Snapshot Manager) The resulting Smart Copy that contains only partial data. The partial data set is referred
to as torn because it does not contain all the files in a particular data set. This situation can occur when you attempt
to do a Smart Copy of data sets that span multiple volumes.
transportable
(Auto-Snapshot Manager) A characteristic of a Smart Copy that enables it to be created on one computer and then
moved to another computer to be used.
unmanaged space
Delegated space capacity on the secondary group that is no longer accessible from the primary group.
vacate
(Group Manager) To remove a member from a group while the member remains online.
VDS provider
(hardware) The EqualLogic VDS provider is a component of the Host Integration Tools that enables you to use
Microsoft Storage Manager for SANs to create and manage volumes in a PS Series group.
volume
Storage allocated by a PS Series group as an addressable iSCSI target.
volume collection
(Group Manager) A number of volumes grouped together for purposes of performing operations on the volumes
simultaneously. See also snapshot collection and replica collection.
volume administrator
An account on a PS Series group that has permission to manage a quota of storage in one or more pools. A volume
administrator can perform any volume operations, including replication, within their quota limit.
volume reserve
(Group Manager) Amount of space allocated to a volume from free pool space. Without thin provisioning, volume
reserve is the same as reported size.
vss-control volume
(Auto-Snapshot Manager) A special logical volume that enables Microsoft VSS/VDS services to communicate
with a PS Series array. The vss-control volume appears as a non-initialized disk in the Windows Disk Management
interface.
XMLlogsize
(hardware) The variable that controls the size of ehcmn.log.
Index
A
access control
    snapshot 11-2
access control records
    creating 9-6, 9-10
    deleting 9-11
    iSCSI target access 8-1
    modifying 9-11
    snapshot access 9-2
    VDS/VSS access 4-12
    volume access 9-2
access controls, iSCSI targets 8-1
accessing data
    snapshot 11-1
accounts (administration)
    attributes 4-4
    group administrator 4-3
    local 2-1, 4-5
    creating 4-5
    displaying 4-5
    modifying 4-6
    monitoring 15-3
    pool administrator 4-3
    RADIUS 4-7
    attributes 4-7
    configuring servers 4-10
    disabling 4-11
    prerequisites 4-9
    read-only 4-3
    types 4-3
    unsupported operations 4-4
    volume administrator 4-4
accounts (CHAP), See CHAP
alarms 14-2
    critical 15-12
    warning 15-12
allocating snapshot reserve 11-2
audience 2-xiii
B
backup 11-1
batteries
    monitoring 15-21
    status 15-21, 15-22
binding a volume 10-16
C
cache 6-3
cache modes
    setting policies 6-12
channel cards
    monitoring 15-19
    status 15-19
CHAP
    initiator authentication 8-1
    local accounts
    creating 8-2
    deleting 8-3
    displaying 8-2
    modifying 8-3
CLI
    accessing 3-6
    managing a group 3-6
    network connection 3-6
    serial connection 3-6
clone
    from snapshots 11-1
cloning
    replicas 12-31
    snapshots 11-9
    volumes 9-11
collection
    snapshots 11-1
collections, See volume collections, snapshot collections, and replica collections
control modules
    cache modes 6-11, 6-12
    monitoring 15-20
    status 15-21
cooling modules
    monitoring 15-18
    status 15-18
copy
    point-in-time 11-1
    snapshots 11-1
custom snapshot collections
    creating 11-7
    deleting 11-8
    displaying 11-7
    naming 11-7
custom snapshot collections, See also snapshot collections
Customer Support, contacting 15-33
D
data
    accessing in a snapshot 11-1
data recovery
    example 13-3
    failback 13-3
    failover 13-3
    methods 13-1, 13-2
date
    setting 5-4
dedicated management network, See management network
default gateway
    management network 4-14
    members 6-10
delegated space
    monitoring 15-10
    over-provisioning 12-15
    sizing 12-15
    usage 12-11
demote
    converting volume to replica set 13-1, 13-11
diagnostics, collecting 15-33
disk drives
    monitoring 15-22
    status 15-22
displaying 6-1
downgrades, disallowing 6-14
E
EIP card
    monitoring 15-20
    status 15-20
E-Mail Home 14-2
    changing configuration 14-4
    checking configuration 14-4
    configuring 14-4
e-mail notification 14-2
    accessing events 15-3
    changing configuration 14-3
    configuring 14-3
events
    disabling INFO message display 14-5
    displaying 14-1, 15-2
    ERROR 14-2
    FATAL 14-2
    format 14-1
    INFO 14-2
    monitoring 15-2
    notification 14-2
    priorities 14-2
    WARNING 14-2
F
failback
    demoting a recovery volume 13-11
    example 13-3
    promoting a failback replica set 13-11
    steps 13-2
failback baseline 13-9
failback replica set 13-1
    changing pool 13-10, 13-11
    converting to inbound replica set 13-14
    deleting 13-10
    promoting to volume 13-11
failback snapshot 12-8, 13-9
Failback to Primary operation
    individual tasks 13-16
    retrying 13-15
    starting 13-11
failover
    example 13-3
    promoting a replica set 13-7
    steps 13-2
firmware 6-12
    disallowing downgrades 6-14
    displaying 15-21
    replication requirements 12-16
    updating 6-13
G
group
    alarms 14-2
    automatic operations 1-4
    capacity 5-1, 5-4
    CLI management 3-6
    connecting to targets 8-7
    default volume settings 9-5
    displaying configuration 5-1
    events 14-1
    expanding 5-4
    features 1-1
    firmware 6-12
    free space recommendation 15-15
    hardware 1-2
    host-based applications 1-6
    interoperability 1-3
    introduction 1-1
    IP address 5-5
    iSNS servers 8-5
    load balancing 1-4, 5-6
Index- 2
Group Administration Index
Index-3
Group Administration Index
Index- 4
Group Administration Index
Index-5
Group Administration Index
Index- 6
Group Administration Index
Index-7
Group Administration Index
Index- 8