
Virtual LUN Migration

Virtual LUN technology for disk group provisioned (DP) devices offers two types of data movement:
migration to configured space and migration to unconfigured space.
Virtual LUN VP migration migrates thin devices from one thin storage pool to another. In this way, data can
be moved between storage pools configured on different drive technologies and with different RAID
protection types.

Reasons for VLUN migration

Virtual LUN Migration is a technology that enables storage virtualization. It allows Symmetrix users
to seamlessly move application data between different storage tiers. The storage tiers can reside
inside the Symmetrix, or outside the Symmetrix if FTS is being used. As data ages and the
need for its availability diminishes, it can be moved to less expensive storage as part of the ILM
process. Performance optimization is another function of V-LUN migration, as applications are moved
between storage tiers.
Virtual LUN Technologies
Virtual LUN DP (Disk Group Provisioning)
Moves Symmetrix LUNs non-disruptively.
A move to unconfigured space requires available disk space.
A move to configured space requires an existing, unused device.
Virtual LUN VP (Virtual Provisioning)
Moves thin device allocations from one thin pool to another.
VLUN VP migration binds thin devices to the target pool unless a source pool is specified.
Supports device-level and pool-level migrations.
If a source pool is specified, allocated tracks from the source pool are migrated to the target
pool without affecting the binding.
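The two VP migration modes above differ only in whether the binding moves with the data. A minimal sketch of that distinction (an illustrative model only; the device and pool names are hypothetical, not SYMCLI output):

```python
# Simplified model of VLUN VP migration semantics (illustrative only).
class ThinDevice:
    def __init__(self, name, bound_pool, allocations):
        self.name = name
        self.bound_pool = bound_pool     # pool the device is bound to
        self.allocations = allocations   # {pool_name: allocated track groups}

def vlun_vp_migrate(dev, target_pool, source_pool=None):
    if source_pool is None:
        # Device-level migration: move ALL allocations and rebind the device.
        moved = sum(dev.allocations.values())
        dev.allocations = {target_pool: moved}
        dev.bound_pool = target_pool
    else:
        # Pool-level migration: move only the source pool's allocations;
        # the binding is left untouched.
        moved = dev.allocations.pop(source_pool, 0)
        dev.allocations[target_pool] = dev.allocations.get(target_pool, 0) + moved

dev = ThinDevice("01A1", "FC_Pool", {"FC_Pool": 80, "SATA_Pool": 20})
vlun_vp_migrate(dev, "EFD_Pool", source_pool="SATA_Pool")
# Binding unchanged; only the SATA allocations were relocated:
# dev.bound_pool == "FC_Pool", dev.allocations == {"FC_Pool": 80, "EFD_Pool": 20}
```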
VLUN Migration for Disk Group Provisioning (DP)
RAID Virtual Architecture
Abstracts RAID protection from the device.
A RAID group occupies a single mirror position.
Enabling technology for Virtual LUN migration and Fully Automated Storage Tiering (FAST).
All RAID types (RAID 1, RAID 5, RAID 6) are supported.
Enginuity supports up to four mirrors for each Symmetrix volume.
RAID Virtual Architecture is used for handling device mirror position usage. RAID Virtual
Architecture is an enabling technology for the Enhanced Virtual LUN technology and Fully
Automated Storage Tiering (FAST). Note that RAID Virtual Architecture does not introduce any new
RAID types. All device protection types use the same I/O execution engine which separates the
mirror position interface from RAID internal operations.
With the RAID Virtual Architecture, a mirror position holds a logical representation of a RAID group rather
than a device, resulting in additional free mirror positions, as demonstrated in this example. Notice that the
RAID 5 volume with SRDF protection consumes only two mirror positions. The RAID 5 group occupies only one
mirror position, with the SRDF protection occupying a second position. This frees two mirror positions for
other operations, such as a migration to another RAID type.
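The mirror-position accounting above can be illustrated with a small model (a conceptual sketch, not actual Enginuity internals): each volume has four mirror positions, and under RVA a whole RAID group consumes only one.

```python
# Toy model of Symmetrix mirror positions under RAID Virtual Architecture.
MAX_MIRROR_POSITIONS = 4

class Volume:
    def __init__(self):
        self.positions = []          # each entry is one logical mirror

    def attach(self, mirror):
        if len(self.positions) >= MAX_MIRROR_POSITIONS:
            raise RuntimeError("no free mirror position")
        self.positions.append(mirror)

    def free_positions(self):
        return MAX_MIRROR_POSITIONS - len(self.positions)

vol = Volume()
vol.attach("RAID-5 group")   # the whole RAID 5 group takes ONE position
vol.attach("SRDF mirror")    # remote replication takes a second position
print(vol.free_positions())  # -> 2 (free for e.g. a migration target)
```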
RVA Operations and Limitations
Two RAID groups can be attached, during migration only.
The RAID groups can be in different disk groups.
Non-disruptive to local and remote replication.

The diagram shows the migration process of an R2 device, which has one remote mirror. When a migration
is initiated, the target device occupies the third mirror position during the course of the migration. The RAID
Virtual Architecture (RVA) will allow a maximum of two RAID groups attached as a mirror at the same time.
However, this is a transitional state to support operations such as migrations and Optimizer swaps. The
array operations associated with the RAID groups will be handled independently for both attached groups.
The attached RAID groups may be in different disk groups containing different capacities and speeds.

The SYMCLI command symmigrate can be used to perform and monitor migrations. When performing a
migration you must designate the source, which is the device the data will be migrated from. The device can
be selected from the Unisphere for VMAX GUI, or on the command line by device group, Auto-provisioning
storage group, or a device file that contains a single column listing only the desired source devices.
Next, you will need to identify the target, which is the volume the data will be migrated to. The criteria and
syntax for designating the target will vary based on whether the target is unconfigured or configured. A
SYMCLI device group cannot be designated as a target.
When working with configured space you can control which source volumes are migrated to which target
volumes by creating a device file with two columns. The first column containing the source device numbers,
and the second column containing the desired target device numbers.
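A two-column pairing file of this kind is easy to generate or check with a script. The sketch below (an assumption about the layout, based only on the description above) parses such a file into (source, target) pairs:

```python
def parse_pair_file(text):
    """Parse a symmigrate-style device file: one pair per line,
    source device number in column 1, target device number in column 2."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # skip blanks and comments
        cols = line.split()
        if len(cols) != 2:
            raise ValueError(f"expected 2 columns, got: {line!r}")
        pairs.append((cols[0], cols[1]))
    return pairs

sample = """\
00B1 01A3
00B2 01A4
"""
print(parse_pair_file(sample))  # -> [('00B1', '01A3'), ('00B2', '01A4')]
```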
Migrations are submitted and managed as sessions, so you can assign a session name with the -name option.
Note that the control host for the migration must be locally connected to the Symmetrix VMAX array on
which the migration is being performed. Note that the source or target volumes can reside on eDisks that
were created on an external array.
The symmigrate command has three control actions and three monitor actions. The control actions
are validate, which tests the user input to see if the migration will run; establish, which creates the
migration session and starts the synchronization process; and terminate, which removes a session by name.
The terminate action can only be performed when all volumes in the session are in the Migrated state.
The monitor actions are query, which provides status about a specified session; list, which shows all
sessions for a given Symmetrix array or all local Symmetrix arrays; and verify, which determines if the
specified session is in a specified state.
V-LUN migration is made possible by the creation and movement of RAID groups. Initially the target RAID
group is added as a secondary mirror. Once the migration completes, the target RAID group is made the
primary mirror.
If migrating to unconfigured space, the RAID group that originally backed the source device is deleted.
If migrating to configured space, the target device, now backed by the original source RAID group, is
iVTOCd after the migration is completed.
V-LUN Migration can be controlled either from the command line using the symmigrate command,
or from the Unisphere for VMAX GUI.

VLUN migration to a configured space
In the first example we will demonstrate a Virtual LUN Migration which transfers two volumes
non-disruptively from RAID-1 to RAID-6 protection.
The symmigrate validate command checks the Symmetrix for suitable migration candidates.
In our example the validation step identifies devices 1A3 and 1A4 as suitable targets for migration.
The RAID-1 volumes are part of disk group 3 and the RAID-6 volumes are in disk group 1. When the LUN
migration starts the RAID-6 volume temporarily takes up a mirror position while the two volumes are
synchronized. Once the migration is complete the original target device assumes the identity of the source
and the original source assumes the identity of the target. Data on the target LUN, i.e. the original source,
is destroyed through iVTOC.
Migration to configured space
symdisk list -dskgrp_summary -sid <xx> ============> Listing the disk groups of the Symmetrix array.

The symdisk list -dskgrp_summary command lists the disk groups present in the
Symmetrix. A disk group generally contains disks of a certain technology and RPM. Drives of
different technologies (such as FC, SATA, or Flash) are placed in different disk groups, whereas drives
of similar technologies are placed in the same disk group.
Here disk group 3 contains 15K RPM FC drives, disk group 2 contains 10K RPM FC drives and disk group 1
contains 7200 RPM SATA drives.
The migration source will be a storage group (SG) called appsg with two devices, B1 and B2.
symsg show appsg -sid <xx> =============> Listing the devices of a storage group

The devices in the storage group belong to disk group 3, i.e. they reside on 15K RPM FC drives.
symdev list -disk_group 3 devs B1|B2 =========> Listing specific devices from the disk group

The validate command queries the Symmetrix to find if there are devices that suit the criteria that we asked
for, namely, RAID-6 devices with 6+2 protection. The tgt_config option specifies that the migration should
be undertaken to configured space. If valid targets exist the device pairing should be written out to a file.
This file can later be used as input to the symmigrate command.
symmigrate validate -nop -outfile devlist -sg appsg -tgt_raid6 -tgt_prot 6+2 -name migdemo1 -tgt_config
-tgt_dsk_grp 1 ========> Looking for target devices with RAID 6 (6+2) protection in configured space on
disk group 1.
Here we take a look at the two devices suggested by the output of the validate command.
The question marks in the SA column of the symdev list output mean the devices are
unmapped. Asterisks would have meant that the devices were mapped to more than one
front-end port.
There happen to be two RAID-6 devices in this Symmetrix that are not mapped to any director, as evidenced
by the question marks in the SA column. The devices in the output are connected to back-end directors
8A:D0 and 8D:C0. These devices are in disk group 1.

Start the migration using the file devlist created during the validation session. Since the file devlist
contains the pairings of the devices, it is clear that we are migrating to configured space, so we no
longer need to specify the target disk group and RAID protection. The syntax of the
symmigrate command is shown below.
symmigrate -sid <xx> establish -file devlist -nop -name migdemo1
The symmigrate list command shows information on all migration sessions in a Symmetrix.
The Migrated status shows that the migration is complete.
The C in the flags column shows this is a migration to configured space.

A listing of the storage group shows that devices B1 and B2 are now RAID-6 protected. They used to be
RAID-1 protected.
The output shows that 1A3 and 1A4 have swapped places with B1 and B2. This includes the RAID protection
and the disk groups. During the migration process devices B1 and B2 continued to stay host accessible.

The session can now be terminated with the command below.

symmigrate terminate -sid 77 -name migdemo1 -nop

Let's review the migration process we observed in the previous demonstration. After specifying
configured space and the target protection (or disk type), the establish action instructs the array to
perform the following steps:
1. An available target Symmetrix LUN is determined with a RAID group of the specified protection and
disk type.
2. The RAID group is disassociated from the target Symmetrix LUN and associated to the
Symmetrix LUN being migrated as the secondary mirror.
3. The secondary mirror is synchronized with the primary mirror.
4. The primary and secondary indicators are swapped between the two mirrors.
5. The secondary mirror, which now points to the original RAID group, is disassociated from the
Symmetrix LUN being migrated and associated as the primary mirror to the target Symmetrix LUN.
6. The target Symmetrix LUN is then iVTOCd to clear the data from it.
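The six steps above can be traced with a small simulation (a conceptual model only; real migrations are executed inside Enginuity, and synchronization is elided here):

```python
# Conceptual walk-through of a V-LUN migration to configured space.
def migrate_configured(source_lun, target_lun):
    # Steps 1-2: take the target's RAID group and attach it to the
    # source LUN as a secondary mirror.
    new_group = target_lun["raid_group"]
    target_lun["raid_group"] = None
    source_lun["secondary"] = new_group

    # Step 3: synchronize secondary with primary (data copy, elided).
    # Step 4: swap the primary/secondary indicators.
    source_lun["raid_group"], source_lun["secondary"] = (
        source_lun["secondary"], source_lun["raid_group"])

    # Step 5: the original RAID group goes back to the target LUN.
    target_lun["raid_group"] = source_lun.pop("secondary")

    # Step 6: iVTOC the target LUN to clear its (old) data.
    target_lun["data"] = None

src = {"raid_group": "RAID-1 in dg3", "data": "app data"}
tgt = {"raid_group": "RAID-6 in dg1", "data": "stale"}
migrate_configured(src, tgt)
print(src["raid_group"])  # -> RAID-6 in dg1  (source is now RAID-6 protected)
print(tgt["raid_group"])  # -> RAID-1 in dg3  (the groups swapped places)
print(tgt["data"])        # -> None           (cleared by iVTOC)
```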

VLUN Migration to unconfigured space

This time we'll perform a migration from the same device group to unconfigured space.
The unconfigured space to which the data is being moved is designated on the command line with
the -tgt_unconfig option. When migrating to unconfigured space we must designate the
desired target configuration and protection on the command line; in this case we are designating
RAID 1, which will be built automatically and placed in the next available mirror position.
Once we issue the command, the migration session is established and the source is synchronized with the
target, while also servicing production I/O from the host. After synchronization completes, the target
assumes the identity of the source. The source is then deallocated and the storage capacity is freed up in
the original disk group. From the host's point of view this operation is completely transparent. Keep in mind
that migrating to unconfigured space is a slightly longer operation, as the target configuration is created
before the migration begins.
This time we will use a device file called devs to move devices B1 and B2 back to disk group 3 with RAID-1
protection. We first validate if this will work and execute the migration.
Validate the devs file and start the migrations as below.
symmigrate validate -f devs -tgt_raid1 -tgt_dsk_grp 3 -tgt_unconfig -name Migdemo2 -sid <xx>
symmigrate establish -f devs -tgt_raid1 -tgt_dsk_grp 3 -tgt_unconfig -name Migdemo2 -sid <xx>

The query confirms that the migration is complete. It is also a migration to unconfigured space, as the
U flag in the second-to-last column indicates.
symmigrate -sid 77 query -name migdemo2

After specifying unconfigured space and the target protection (or disk type), the establish action
instructs the array to perform the following steps:
1. A new RAID group of the specified protection type is created on the specified disk type.
2. The newly created RAID group is associated to the Symmetrix LUN being migrated as the
secondary mirror.
3. The secondary mirror is synchronized to the primary mirror.
4. The primary and secondary indicators are swapped.
5. The secondary mirror, which now points to the original RAID group, is deleted.

Virtual LUN Migration for Thin pools

Virtual LUN VP mobility is a non-disruptive migration of a (complete) thin device from one pool (or
multiple pools) to another thin pool.
The migration operation is initiated by the Storage Administrator. All extents associated with the
designated thin device are moved to the designated target pool. When complete, the thin device is
bound to the target pool. When performing a thin device migration, the thin pool that the thin
device is currently bound to may be specified as the target of the migration. In this case any tracks
for the device that are allocated in pools other than the bound pool will be consolidated to that pool.
Manual migration operations take precedence over FAST VP operations, which will be covered later
in the course. If a device is under FAST VP control, a migration will abort all FAST VP movements in
progress for the migrating device.
Virtual LUN VP allows thin devices to be moved between pools. The source thin devices can be specified in a
file, device group, storage group, or another pool. The target is always a thin pool.

VLUN VP Features
Non-disruptive movement of thin devices and thin metadevices between storage tiers.
Supported with replication technologies:
SRDF, TimeFinder/Snap, TimeFinder/Clone, Open Replicator
Active replication on Symmetrix devices being migrated stays intact.
Incremental relationships are maintained for migrated devices.
Migration is facilitated by moving thin device track groups to unallocated space in the target pool.
Only allocated track groups are relocated.
Track groups that are allocated but not written to are moved, but no data is copied.
Moved track groups are deallocated in the source pool.
During migration, new host-generated allocations come from the target pool.
Migration is complete when all allocated track groups have been moved.
The migration of VP volumes, or thin devices, is achieved by rebinding the device to a new thin pool,
and then relocating all the allocated extents belonging to the device to that pool.
As thin pools can be of varying RAID types, VLUN VP allows the migration of data from one
protection scheme to another, or from one drive technology to another, or both. This data
movement is performed without interruption to the host application accessing data on the thin device.
Virtual LUN VP can be used to migrate Symmetrix thin devices, and thin metadevices, configured as
FBA in open systems environments.
Migrations can be performed between thin pools configured on all drive types including high-performance
Flash drives, Fibre Channel drives, and large capacity SATA drives. This feature can also be a strong
complement to automated tiering, as it enables administrators to override the FAST VP algorithm and
manually re-tier thin volumes based on new or unexpected performance requirements.
Data is migrated to unallocated space in the target thin pool. There must be sufficient unallocated space to
accommodate thin device extents. As they are relocated, they are de-allocated from the source pool,
leaving additional unallocated space in that pool.
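The space accounting described above can be sketched as follows (a simplified model in which pools track only free and used track-group counts; real pools track individual extents):

```python
# Simplified accounting for a VLUN VP migration between thin pools.
def migrate_extents(source_pool, target_pool, allocated_groups):
    """Relocate `allocated_groups` track groups from source to target.
    The target must have enough unallocated space; relocated groups
    are deallocated from the source pool as they move."""
    if target_pool["free"] < allocated_groups:
        raise RuntimeError("insufficient unallocated space in target pool")
    target_pool["free"] -= allocated_groups
    target_pool["used"] += allocated_groups
    source_pool["used"] -= allocated_groups
    source_pool["free"] += allocated_groups  # space freed in source pool

src = {"name": "DG3R1FC15_Pl", "free": 100, "used": 400}
tgt = {"name": "DG1R6SATA_Pl", "free": 500, "used": 0}
migrate_extents(src, tgt, 400)
print(src["used"], tgt["used"])  # -> 0 400 (source emptied, target filled)
```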
Migration Considerations
Compressed tracks can only be migrated as compressed tracks to a pool with compression enabled.
Uncompressed tracks can be migrated as uncompressed tracks regardless of whether
compression is enabled on the target pool.
Refer to the module on VP for more detail.
This example was created on Solutions Enabler 7.4 before VP compression was introduced. Thin devices
1A1 and 1A2 are bound to pool P1. Both devices are fully allocated.
Pool DG1R6SATA_Pl is free of Thin devices and all of its space is available for use. The validate action
confirms that the pool can accept the allocated tracks from thin devices F1 and F2.

Migrate the thin devices from pool DG3R1FC15_Pl to DG1R6SATA_Pl.

From Uncompressed TDEV to Uncompressed Pool
Thin devices are now bound to Pool DG1R6SATA_Pl. The old pool DG3R1FC15_Pl is now empty and free
of thin device allocations.

Two Variations of FAST

1) FAST (also referred to as FAST DP) supports disk group provisioning for Symmetrix VMAX.
Full-LUN movement of disk group provisioned (thick) devices.
Supports FBA and CKD devices.
Introduced in Enginuity 5874.
2) FAST VP supports Virtual Provisioning for Symmetrix VMAX.
Sub-LUN movement of thin devices.
Supports FBA and CKD devices.
Introduced in Enginuity 5875.
FAST moves application data at the volume (LUN) level. Entire devices are promoted or
demoted between tiers based on overall device performance.
FAST VP adds finer granularities of performance measurement and data movement. The data from a
single thin device under FAST control can be spread across multiple tiers. The FAST controller is free
to relocate individual sub-extents of a thin device, based on performance data gathered at the
extent level.
FAST and FAST VP help with skewed workloads.
Skew is the uneven distribution of I/Os over a defined capacity, leading to hot spots
and cold spots on the storage.
If 95% of the I/O goes to 5% of the storage capacity, the skew is said to be 95/5.
Most production workloads are skewed.
FAST and FAST VP take advantage of workload skew by moving heavily accessed data
to higher performing storage tiers and rarely accessed data to less expensive, lower
performing tiers.
Understanding of workload skew facilitates building efficient configurations.
FAST is most effective in an environment where the workload tends to be skewed and the skew
patterns are dynamic.
FAST is not beneficial if the workload is not skewed or if the I/O workload is static, since workload
skews can then be addressed using manual methods such as Symmetrix VLUN Migration.
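Skew like the 95/5 example above can be estimated from per-device I/O counts. This sketch (with hypothetical data, using device count as a rough proxy for capacity) finds the smallest fraction of devices that serves a given fraction of the I/O:

```python
def skew(io_by_device, io_fraction=0.95):
    """Return the fraction of devices (busiest first) that together
    serve at least `io_fraction` of the total I/O."""
    counts = sorted(io_by_device, reverse=True)
    total = sum(counts)
    running, n = 0, 0
    for c in counts:
        running += c
        n += 1
        if running >= io_fraction * total:
            break
    return n / len(counts)

# 20 devices: one hot device does almost all the work.
ios = [9500] + [26] * 19
print(skew(ios))  # -> 0.05 (the hottest 5% of devices serve 95% of the I/O)
```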
Key features of FAST and FAST VP
Monitors I/O activity
Identifies data for relocation
Automates migration during user defined time windows
Support for FBA and CKD
FAST and FAST VP automate and optimize storage tiers allowing better utilization of EFDs for
data in need of high performance and SATA technology for infrequently accessed data. Combining
EFD, Fibre Channel, and low-cost SATA drives provides improved performance in a tiered storage
solution at a lower operating cost than a similarly sized Fibre Channel only array.
FAST and FAST VP use defined policies to non-disruptively relocate Symmetrix devices to the
most beneficial drive technology based upon I/O profile. FAST VP can relocate smaller chunks of
data located on thin devices from one tier of storage to another.
FAST and FAST VP can be managed via the Unisphere for VMAX or SYMCLI.
The benefits of FAST are:
Automated movement of hot devices to higher performing drives and less-used
devices to lower performing drives in a dynamically changing application environment.
Achieves better cost/benefit ratios by improving application performance at the
same cost, or providing the same application performance at lower cost. Cost factors
include the price of drives, their energy usage, and the effort to manage them.
Management and operation of FAST is provided by Unisphere for VMAX, as well as
the Solutions Enabler Command Line Interface (SYMCLI).
Reporting: Unisphere for VMAX provides relative performance information and tier
utilization for each storage group.
VMware and vSphere interaction with FAST and FAST VP
EMC does not recommend mixing datastores backed by devices having different
properties, unless the devices are part of a FAST VP policy.
Do not combine replicated and non-replicated datastores in the same cluster.
When using VMware Storage Distributed Resource Scheduler (SDRS) and FAST (VP or DP):
Only capacity-based SDRS is recommended.
Uncheck the Enable I/O metric for SDRS recommendations box.
For VMs with performance-sensitive applications, EMC advises not using SDRS with FAST VP.
Set up a rule or use Manual Mode for SDRS.
VMware Storage Distributed Resource Scheduler (SDRS) operates on a Datastore Cluster. A
Datastore Cluster is a collection of datastores with shared resources. SDRS provides initial placement
and ongoing balancing recommendations to datastores in a SDRS enabled datastore cluster.
A datastore cluster can contain a mix of datastores with different sizes and I/O capacities,
and can be from different arrays and vendors. However, EMC does not recommend mixing
datastores backed by devices that have different properties, i.e. different RAID types or disk
technologies, unless the devices are part of a FAST VP policy.
Replicated datastores cannot be combined with non-replicated datastores in the SDRS
cluster. If SDRS is enabled, only manual mode is supported with replicated datastores.
When EMC FAST (DP or VP) is used in conjunction with SDRS, only capacity-based SDRS is
recommended. Storage I/O load balancing is not recommended: uncheck the Enable I/O metric for
SDRS recommendations box for the datastore cluster. Unlike FAST DP, which operates on thick
devices at the whole-device level, FAST VP operates on thin devices at the far more granular extent
level. Because FAST VP is actively managing the data on disk, knowing the performance
requirements of a VM (on a datastore under FAST VP control) is important before a VM is migrated
from one datastore to another. This is because the exact thin pool distribution of the VM's data may
not be the same as it was before the move.
If a VM houses performance sensitive applications, EMC advises not using SDRS with FAST VP for
that VM. Preventing SDRS from moving the VM can be achieved by setting up a rule or using Manual
Mode for SDRS.

Traditional systems often have a range of hot and cold data activity where some LUNs are more
active than others. Placing these different workloads on a single tier can lead to active data being
underserviced, limiting performance. Meanwhile, less active data is being over serviced, increasing
storage costs. With FAST, it is possible to tier within the array which enables the use of Enterprise
Flash Drives (EFD) for heavily utilized LUNs and low cost SATA for less frequently used LUNs.
There are three main elements related to the use of FAST DP. These are:
Symmetrix Tier: A shared resource with common technologies
FAST Policy: A policy that manages data placement and movement across
Symmetrix tiers to achieve service levels for one or more storage groups
Storage Group: A logical grouping of devices for common management

Symmetrix disk groups

Symmetrix disks can be grouped into disk groups.
A disk group contains drives of the same technology and speed.
Fibre Channel and SATA drives have to be in different disk groups.
10K RPM drives and 15K RPM drives have to be in different disk groups.
Disk group numbers start with 0 or 1, and each group has a name.
Default names are DISK_GROUP_001, DISK_GROUP_002, etc.
Users can change the default names using the configuration manager.
If the disks belong to an external array (FTS), group numbers start with 512.
Solutions Enabler supports setting and reporting disk group names. The addition of
names allows easier management of disk groups.
Symmetrix Tiers
A disk group tier is the combination of a disk technology and a RAID protection type,
and is managed using the symtier command.
Symmetrix VMAX supports up to 256 tier definitions.
A Symmetrix tier name:
Is case-insensitive (Tier A is equivalent to tier a).
Can contain up to 32 alphanumeric characters, including hyphens and underscores.
May not start with a hyphen or underscore.
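The naming rules above are simple to capture in a validation helper (a sketch only; SYMCLI performs its own validation):

```python
import re

# Tier names: up to 32 chars from [A-Za-z0-9_-], must not start with
# a hyphen or underscore, and are compared case-insensitively.
_TIER_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_-]{0,31}$")

def valid_tier_name(name):
    return bool(_TIER_NAME.match(name))

def same_tier_name(a, b):
    return a.lower() == b.lower()   # "Tier_A" matches "tier_a"

print(valid_tier_name("EFD_R5-Tier"))      # -> True
print(valid_tier_name("_hidden"))          # -> False (leading underscore)
print(same_tier_name("Tier_A", "TIER_a"))  # -> True
```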
A Symmetrix tier is a specification of a set of resources of the same disk technology
type (EFD, FC, or SATA) combined with a given RAID protection type (RAID 1, RAID 5, or RAID 6).
There are two types of storage tiers: disk group provisioning (DP) and virtual
provisioning (VP).
The Symmetrix tier used by FAST DP is called a DP tier. It is defined by combining one
or more physical disk groups, of the same technology type, with a RAID protection type.
For FAST VP, the storage tier is called a VP tier and is described later.
The maximum number of tiers that can be defined on a Symmetrix array is 256. If creating a
new tier exceeds this limit, an existing tier must be deleted before creating the new one.
Dynamic Symmetrix tier
Only the technology (EFD, FC, SATA) and desired RAID specification need to be specified.
Size and rotational speed are not considered, so a tier may contain disks of
differing performance characteristics.
Includes all disk groups in the Symmetrix that match the tier technology.
Newly added disk storage will automatically be included in the tier if it matches
the tier's disk technology.
Can only be used for FAST, not FAST VP.
There are two types of Symmetrix tiers, dynamic and static. A dynamic tier
automatically includes all disk groups currently in the array that match the tier technology.
This type of tier will expand to accommodate any newly added disk groups.
Dynamic tiers are easy to set up and involve less management overhead, but since they
allow disk groups of varying speeds to be part of a tier, it is difficult to get predictable
performance from a particular tier.
Static tiers allow for better performance predictability as a result of FAST moves, and
are preferable to dynamic tiers.
FAST VP cannot run on dynamic Symmetrix tiers.
Static Symmetrix tier
Each Symmetrix disk group to be added to the static FAST DP tier must be
specified by the user.
Symmetrix disk groups may belong to more than one FAST DP tier.
External disk groups cannot belong to a FAST DP tier.
To create a static tier, each Symmetrix disk group to be included in the tier must be
explicitly specified. Each physical disk group added to a static Symmetrix tier must be of the
same disk technology. If additional capacity is added to the Symmetrix, and it is added to a
new physical disk group, expansion of a static Symmetrix tier must be performed manually.
This is done by adding any newly added physical disk groups to the Symmetrix tier.
External tiers are not supported by FAST DP.
FAST DP Policy
Manages data placement and movement across Symmetrix tiers to achieve service levels
for one or more storage groups.
The SYMCLI command symfast is used to create and manage FAST.
Each policy can contain up to three tiers and their corresponding usage rules.
Usage of a tier is specified as an upper-limit percentage of the total storage required by
the devices in the storage group.
The percentages specified for all tiers in a policy must total at least 100%.
May be as high as 300% (3 tiers x 100%).
Tiers may be shared among multiple FAST policies.
A FAST DP policy groups between one and three tiers and assigns an upper usage limit
for each Symmetrix tier. The upper limit specifies how much capacity of a storage group
associated with the policy can reside on that particular Symmetrix tier.
FAST policies may include storage tiers of only one type: disk group provisioning (DP) or
virtual provisioning (VP). The first tier added to a policy determines the type of tiers that can
subsequently be added.
The usage limit for each tier must be between 1 percent and 100 percent. When
combined, the upper usage limit for all thin storage tiers in the policy must total at least 100
percent, but may be greater than 100 percent.
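The usage-limit rules above (one to three tiers, each limit 1-100 percent, total at least 100 percent) can be sketched as a quick validator:

```python
def validate_policy(tier_limits):
    """tier_limits: list of per-tier upper usage limits in percent."""
    if not 1 <= len(tier_limits) <= 3:
        return False                   # a policy groups one to three tiers
    if any(not 1 <= p <= 100 for p in tier_limits):
        return False                   # each limit must be 1..100 percent
    return sum(tier_limits) >= 100     # limits must total at least 100%

print(validate_policy([100]))            # -> True  (single-tier policy)
print(validate_policy([50, 30, 20]))     # -> True  (exactly 100%)
print(validate_policy([100, 100, 100]))  # -> True  (maximum, 300%)
print(validate_policy([40, 30]))         # -> False (totals only 70%)
```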
Creating a policy with a total upper usage limit greater than 100 percent allows flexibility
with the configuration of a storage group, whereby data may be moved between tiers without
necessarily having to move a corresponding amount of other data within the same storage group.
Multiple FAST policies may reuse the same tier, allowing different usage limits to be
applied to different storage groups for the same tier.
A Symmetrix VMAX storage array will support up to 256 FAST policies. Each FAST policy name
may be up to 32 alpha-numeric characters, hyphens (-), and underscores (_). Policy names are
not case-sensitive.

Storage Groups
Storage groups logically group together devices for common management.
Used for FAST, FAST VP, and Auto-provisioning, and managed with the symsg and/or
symaccess commands (symaccess is mainly used for Auto-provisioning).
Devices may only belong to one storage group managed by FAST.
Devices may belong to other storage groups used for Auto-provisioning.
Parent storage groups cannot be managed by FAST.
FAST does not support the following device types:
CKD EAV, CKD EAV phase 3, and CKD concatenated metadevices
Diskless devices
Private devices (SFS, Vault, DRV)
VDEV (can be added to a storage group but will be ignored by FAST)
Metadevice members
A storage group is a logical collection of Symmetrix devices that are to be managed together.
Storage group definitions are shared between FAST and Auto-provisioning Groups. However, a
Symmetrix device may only belong to one storage group that is under FAST control.
Storage groups are associated with a FAST policy, thereby defining the maximum percentage of
devices in the storage group that can exist in a particular tier.
Some devices are not supported by FAST as shown. Details can be found in the Array Control Guide.
A Symmetrix VMAX storage array will support up to 8192 storage groups associated with FAST policies.
Storage groups may contain up to 4096 devices.

FAST DP Policy / Storage Group Association

A policy associates a storage group with up to three tiers. The percentage of storage specified for
each tier in the policy when aggregated must total at least 100 percent.
The same FAST policy may be applied to multiple storage groups; however, a storage group may only
be associated with one policy.
When a storage group is associated with a FAST policy, a priority value must be assigned to the
storage group. This priority value can be between 1 and 3, with 1 being the highest priority; the
default is 2.
When multiple storage groups share the same policy, the priority value is used when the devices contained
in the storage groups are competing for the same resources in one of the associated tiers. Storage groups
with a higher priority will be given preference when deciding which devices need to be relocated to another
tier.
Time windows for FAST and FAST VP
There are three different time windows:
Data movement time windows for disk group provisioned devices (Symmetrix volumes)
Data movement time windows for virtually provisioned devices
Performance time windows, which control statistics collection
Time windows can be inclusive or exclusive:
Both can be set in half-hour increments from 00:00 to 24:00.
Time windows are stored in the Symmetrix in GMT but displayed in host time through SYMCLI.
Older time windows created in Enginuity 5874 should be converted to the new format using the
symtw convert command.
Time windows are used by FAST, FAST VP, and Symmetrix Optimizer to collect performance statistics
and execute data movement within the array. There are three different types of time windows:
1. Data movement for disk group provisioned devices (-dp).
2. Data movement for virtually provisioned devices (-vp).
3. Performance time windows, which control the collection of statistics.
In addition, a defined time window needs to be specified as either inclusive, which allows the
operation to be executed repetitively, or exclusive, which prevents the operation for a future
specific date and time. All inclusive time windows are similar to the weekly by-day time window
definitions without the start and end date. The inclusive time windows are defined by using one or
more days of the week and the start/end time to be applied to each day. The start and end times are
in 30-minute increments from 00:00 to 24:00. The time window definitions stored in the Symmetrix
database are in GMT. Solutions Enabler has the option in the API to display the host local time when
adding, removing, or querying time windows. Any newly added time windows will not replace the
current time windows; they will be added on top of them. The remove operation allows the user to
remove any specified time windows. Any expired exclusive time windows will be deleted whenever
the time window database is updated.
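The half-hour boundary rule and the GMT storage convention described above can be sketched as follows (illustrative only; real time windows are managed with the symtw command):

```python
from datetime import datetime, timezone

def on_half_hour(hhmm):
    """True if an HH:MM string falls on a 30-minute boundary (00:00-24:00)."""
    h, m = map(int, hhmm.split(":"))
    return (0 <= h <= 24) and m in (0, 30) and (h < 24 or m == 0)

def to_gmt(local_dt):
    """Windows are stored on the array in GMT; convert a tz-aware local time."""
    return local_dt.astimezone(timezone.utc)

print(on_half_hour("13:30"))  # -> True
print(on_half_hour("13:45"))  # -> False (not a half-hour boundary)
print(on_half_hour("24:00"))  # -> True  (end-of-day boundary)
```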