
EMC TimeFinder Clone Notes: VP Snap improves cache utilization.

TimeFinder uses a track table to record which tracks have changed. The track table is the key technology behind TimeFinder/Snap, TimeFinder/VP Snap, TimeFinder/Clone, and SRDF.
TF/Clone is suitable if:
- clones are to be used for recovery scenarios.
- multiple copies of production data are needed and you want to reduce disk contention and improve data access.
TF/Snap is suitable if:
- only a fraction of the data changes on the production volumes.
TF/VP Snap is suitable if:
- you want to create space-efficient snaps for thin devices.
VMAX 20K/40K -- TF/VP Snap, TimeFinder/Clone, and TimeFinder/Snap.
## TF on VMAX 10K/VMAXe
TF/Clone and TF/VP Snap only (TimeFinder/Snap is not available).
TF/Clone
No mirror position is required. Can be a source for the SRDF family.
Copies are R/W enabled; 4 mainframe copies.
Suited to high workloads and data availability; 16 concurrent copies; targets can be RAID 1/5/6.
Protected establish & restore; incremental resync. 100% of the source volume's space is required.
TF/Snap (not available for VMAX 10K/VMAXe)
Does not require a mirror position.
Supports moderate I/O workloads and functionality.
Data is immediately available.
Copies are R/W enabled.
Up to 128 copies.
Cannot be a source for the SRDF family. A server mounting a snapshot has full read/write capability on the snapshot.
Symmetrix 40K always creates multi-virtual snap copies.
TF/VP Snap: improved cache utilization.
No mirror position.
32 snaps per source volume.
Available at Enginuity 5875 or higher.
With VP Snap the saved tracks can live in the same thin pool as the source or in another pool.
TF Fundamentals - Clone: a full volume copy.
RAID 1/5/6 targets.
Immediately R/W.
16 copies of a production volume.
Precopy and copy-on-first-write options are available.
Supports TF/Mirror scripts via TF/Mirror emulation (Mirror commands are converted to clone commands).
Max. 8 differential sessions, as 2 sessions are reserved.
TF/Clone operations:
1. Create - creates the relationship between source and target.
2. Activate - the clone becomes active and is available for R/W access immediately. Production I/O continues to be processed against the standard device.
3. Establish - create and activate in one step.
4. Recreate and activate - take a new point-in-time and activate the session.
5. Restore - re-attach to the standard device (incremental or full restore).
6. Terminate - the target host loses access to the clone.
A clone is used when parallel access to production information is needed.
There can be 8 differential clones.
With nocopy, TF copies data only when it is accessed.
With precopy, TF starts copying before the session is activated.
TF supports TF/CG to provide consistency groups.
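The operation sequence above can be sketched as a small state machine. This is a conceptual sketch only; the state names are simplified illustrations, not actual SYMCLI session states.

```python
# Conceptual sketch of the TF/Clone session lifecycle described above.
# State names are simplified illustrations, not real SYMCLI output.

class CloneSession:
    def __init__(self):
        self.state = "none"

    def create(self):            # build the source/target relationship
        assert self.state == "none"
        self.state = "created"

    def activate(self):          # target becomes R/W immediately
        assert self.state in ("created", "recreated")
        self.state = "activated"

    def establish(self):         # create + activate in one step
        self.create()
        self.activate()

    def recreate(self):          # take a new point-in-time
        assert self.state == "activated"
        self.state = "recreated"

    def restore(self):           # copy target back to the standard
        assert self.state == "activated"
        self.state = "restored"

    def terminate(self):         # target host loses access
        self.state = "none"

s = CloneSession()
s.establish()
print(s.state)   # activated
```

Establish is simply create followed by activate, which is why the two entry points converge on the same state.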
#### Use cases
> Copy on access - use the nocopy option (default mode); source and target must both be FBA or both be CKD (standard/BCV volumes).
> Full background copy - use the copy option to complete the background copy.
> Precopy - the precopy option starts copying before the session is activated.
##### Mainframe environment
> Copy mode is the default in mainframe environments.
> To get copy on access, specify the nocopy parameter.
### Restoring data from a clone device
> Full restore - TF/Clone restores from the target to the source or to another device. The original session is terminated and a full restore session is started.
> Incremental restore - restoration occurs from target to source. The ongoing session stops and a new incremental session is started from the clone back to the source.
#### Thin-to-thick and thick-to-thin replication
With thick-to-thin replication (Enginuity 5874), the copy is feasible; NWBH (never written by host) tracks are not copied to the thin device.
> The thin pool must have an adequate amount of space.
> The source and target can have different RAID protection.
Replication can be performed from unprotected to protected, but protected to unprotected is not possible.
TF/Clone supports thin-to-thick replication for FBA emulation on EFD, FC, and SATA drives.
#### Cascaded TF/Clone
Cascading must be either all-thick or all-thin; mixed thin and thick is not supported.
Cascading for thin devices requires Enginuity 5875 or above.
RTT (restore to target) is supported with 5875 or higher.
Cascading is supported with VP Snap.
A > B > C: the session from A > B must be established first, then the session from B > C.
### TF/Clone functional capabilities
Max. TF pairs: 50,000.
Max. clones per production volume: 16.
Max. clones per production volume that provide changed-data incremental synchronization: 8.
Can ensure changed-data synchronization: yes.
Supports RAID 1/5/6 volumes.
Supports three-site SRDF.
Provides protected restore; the target is immediately available for host writes.
Can ensure database consistency with restartable copies of an image: yes, with EMC TF/CG.
Can perform a remote point-in-time copy: yes, with SRDF.
TimeFinder/Snap (compared with Clone):
> Uses VDEVs with track-level (pointer-based) location.
> Source can be RAID protected - RAID 1, RAID 5, and RAID 6.
> 128 copies of a production volume.
> Copies can be made accessible to secondary hosts.
> A snap can be made read/write; new writes go to the save pool.
> Multi-virtual snap.
> EMC TimeFinder addresses multiple service-level requirements through the existing TimeFinder solutions (Clone/Mirror).
TF/Snap overview: when the source device receives a write, the existing data is first written to the save pool.
> Save devices cannot be metadevices.
6 snap operations:
> Create - creates the relationship between the standard and the virtual device.
> Recreate - creates a new point-in-time of the standard.
> Activate - activates the session, giving the host R/W access and starting the COFW mechanism.
> Establish - creates and activates the relationship between source and target.
> Restore - copies data from the virtual device back to the standard device.
> Terminate - the target VDEV loses access from the host.
> Prior to Enginuity 5874, creating a new PIT required terminating the source/VDEV relationship; now it can be recreated without termination.
> The save area is typically a fraction of the source capacity.
> As a best practice, a snap should not require more than 20% of the production volume's capacity in the save area.
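Using the 20% rule of thumb above, a quick sizing sketch (the volume capacities are invented for illustration):

```python
# Rough save-area sizing using the ~20% best-practice figure above.
# All capacities are illustrative assumptions, in GB.

production_volumes_gb = [500, 500, 1000]      # hypothetical source volumes
change_rate = 0.20                            # <= 20% of each source, per the note

save_area_gb = sum(v * change_rate for v in production_volumes_gb)
print(save_area_gb)   # 400.0
```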
Performing a backup off a TF/Snap:
> In this method the backup runs against the snap; the backup host reads the data sequentially from the volume.
> This may cause contention at the source device, since the source and the snap are both reading from the same spindles.
TF/Snap COFW: to handle writes, snap uses copy-on-first-write.
Steps:
1. The host writes to a track on the business continuity volume (the source).
2. The original track is copied to the save pool before the write completes (on the first write to that track only).
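The copy-on-first-write mechanism can be sketched in a few lines. This is a conceptual model with made-up track contents, not array internals:

```python
# Conceptual copy-on-first-write (COFW) model, as described above.
# Track contents are made-up strings; real arrays work on track tables.

source = {0: "A0", 1: "B0", 2: "C0"}    # standard device tracks
save_pool = {}                          # original data preserved here

def host_write(track, data):
    # First write to a track: preserve the original in the save pool.
    if track not in save_pool:
        save_pool[track] = source[track]
    source[track] = data                # then the write proceeds

def snapshot_read(track):
    # The point-in-time view: saved original if the track changed,
    # otherwise redirect to the (unchanged) source track.
    return save_pool.get(track, source[track])

host_write(1, "B1")
host_write(1, "B2")                     # second write: no extra copy
print(snapshot_read(1), snapshot_read(2))   # B0 C0
```

Note that only the first write to a track triggers a copy; subsequent writes to the same track go straight to the source, which is why the save area stays a fraction of the source capacity.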
##### TimeFinder Clone with thin devices
Allows replication into and out of the virtual (thin) environment.
Cascaded thin replication requires Enginuity 5875.
Replication from thin to thick and vice versa is possible with Enginuity above 5874 and Solutions Enabler 7.1.1.
### Clone from clone target
A -> B -> C: when the copy happens from STD to STD, TF/Clone is used; if the copy happens from STD to BCV, TimeFinder emulation is used.
If A -> B and B -> C are started in precopy mode simultaneously, then once the precopy has completed one copy cycle, A -> B can be activated; once it reaches the Copied state, B -> C can be activated.
With Enginuity 5874, TF/Mirror operations run in TF emulation mode whenever TF/Mirror functionality is invoked.
#### Clone from clone
symdg show clone_a_to_b
To use a clone target in a SYMAPI group as the source of a cascaded clone, set SYMAPI_ALLOW_DEV_IN_MULTIPLE_GRPS = ENABLE in /var/emc/config/options.
## Creating the clone sessions ###
The SYMAPI option SYMAPI_ALLOW_DEV_IN_MULTIPLE_GRPS lets the target act as a source.
For the devices A -> B, the state must first reach Copied; only then can the next activation be done, otherwise it fails.
### Session states when both clone sessions exist
A->B: Created or Copied, but not CopyInProgress.
B->C: Created, Copied, or CopyInProgress.
** Both sessions cannot be in CopyInProgress at the same time.
Once the A->B state is Copied, an RTT restore to target can be performed from C > B.
## Recreate in a cascaded environment
A->B in TF/Clone; B->C in TF/VP Snap or TF/Snap (Enginuity 5874, Q4 2012).
The A->B session can be recreated, leading to an incremental copy of the contents of A->B, without affecting the existing B->C session.
##### Copying to larger targets: SYMCLI_CLONE_LARGER_TGT=ENABLED
Copying from source to a larger target is allowed, but restore is not, which makes this useful for migrations. Differential copy is not allowed; only full copy.
Concatenated metadevices are not allowed for these operations.
Striped metadevices can be used if the source and target have the same number of metamembers; the target metamembers can be larger than the source's.
##### TimeFinder/Clone and VP operations
##### TimeFinder/VP Snap: new features with Enginuity 5876
Multiple targets are possible for a source volume; targets that lie in the same thin pool share tracks.
All targets must be bound to the same thin pool.
Both the source device and the clone target must be virtual (thin) devices.
Supported with FBA and AS400 D910 iSeries devices.
32 sessions can be created with the same source device.
#### TimeFinder VP Snap - how it works
Sessions are created as nocopy and nodifferential.
Uses COFW, optimized as ACOFW (asynchronous COFW).
A new write to the source device triggers a copy to the save devices.
Reads from the snap target are redirected to the source device; for point-in-time data, the save device is referenced.
On activation the session enters the CopyOnWrite state.

#### TimeFinder VP Snap - how it works (2)
On activating a second session between the same source and a different target:
Case 1: if there has been no write to the protected track, reads are served from the same source track.
Case 2: if a write has occurred:
- If the write occurred after the activation of the first session, the second session gets its own track - the track is copied from source to target.
- If the track was not modified after the first activation, the original is copied to the thin pool once, hence the tracks are shared between the sessions.
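The shared-track case above can be modeled in a tiny sketch. The data structures are invented for illustration; real arrays track this per thin-pool track:

```python
# Sketch of VP Snap track sharing across two sessions, per the cases above.
# Illustrative model only: tracks are strings, the pool is a list.

source = {0: "orig"}
pool = []                  # saved originals in the thin pool
sessions = {1: {}, 2: {}}  # per-session track pointers into the pool

def preserve(track):
    # Copy the original to the thin pool once; all sessions that still
    # need the old data share the same saved copy (the shared-track case).
    pool.append(source[track])
    idx = len(pool) - 1
    for s in sessions.values():
        if track not in s:          # session has not diverged yet
            s[track] = idx
    return idx

def host_write(track, data):
    preserve(track)
    source[track] = data

host_write(0, "new")       # both sessions point at the one saved copy
shared = sessions[1][0] == sessions[2][0]
print(shared, pool)        # True ['orig']
```

Only one copy of the original lands in the thin pool, which is exactly why VP Snap is space-efficient compared with per-target copies.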
symdg show vpsnapdg | more
symclone create -vse DEV001 sym ld TGT001 -nop
symclone create -vse DEV002 sym ld TGT002 -nop (after creation the session state shows as Created)
symclone query -multi
Flags in the query output:
C - background copy: X = background copy, . = none, V = VSE (VP Snap).
G - grouped with target.
D - differential.
P - precopy.
### Activating a TimeFinder VP Snap session
After activation the session changes to CopyOnWrite.
symclone activate -consistent DEV001 sym ld TGT001 -nop
symclone activate -consistent DEV002 sym ld TGT002 -nop
symclone query -multi
### Displaying shared tracks
symcfg list -sid 12 -pool -thin -detail
PTESL flags:
P pool type - S = Snap pool, R = RDFA DSE, T = Thin.
T technology - S = SATA, F = Fibre, E = EFD.
E emulation - F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390.
Compression - E = enabled, D = disabled, N = enabling, S = disabling.
S state - E = enabled, D = disabled, B = balancing.
L location - internal or external disk.
## Displaying shared tracks (2)
symcfg list -sid 12 -tdev -range 0243:0323 -bound -detail
symcfg list -sid 12 -tdev -range 121:2323 -bound -detail
ESPT flags:
E emulation - A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390.
S shared tracks - S = shared.
P persistent allocation status - whether it has completed one cycle or not.
T status - binding, bound, allocating, compressing, unbound.
### Restore TF/VP Snap
symclone restore -sid 12 DEV001 sym ld TGT002 -nop
This creates a new session; all the existing sessions remain as they are.
Once the restoration has completed, the session can be terminated with the -restored option.
### TimeFinder/VP Snap considerations
-vse can only be specified at session creation; once the session is created, the mode cannot be changed.
If the target devices are managed by FAST VP, relocation is not possible.
TimeFinder/Snap and TimeFinder/VP Snap cannot coexist on the same source volume.
TimeFinder/Clone and TimeFinder/VP Snap can coexist on the same source volume, but the TimeFinder/Clone session cannot be created as a nocopy session.
A target larger than the source is not possible in TF/VP Snap.
### TimeFinder/VP Snap considerations (2)
Cascading of TimeFinder/VP Snap is not allowed: the target of a VP Snap cannot be the source of another VP Snap session.
A VP Snap session off a TimeFinder/Clone target is allowed, provided the clone state is Split or Copied.
A VP Snap from an SRDF R2 device in a consistent, active SRDF session is possible, provided device-level write pacing is enabled.
Both the R1 and R2 arrays must be at Enginuity 5876 or above.
###### Restore operation in a cascaded environment
A->B TimeFinder/Clone; B->C TimeFinder/VP Snap session.
Restore C->A via B (without terminating the A->B and B->C sessions).
## TimeFinder VP Snap restore to target
Preconditions:
A->B state should be Copied or Split.
B->C state should be Copied or CopyOnWrite.
1. symclone restore B <- C once Copied/CopyOnWrite.
2. symclone restore A <- B once Copied.
Terminate the sessions once the restoration has completed successfully.
TimeFinder Snap restore to target: A->B in Copied/Split, B->C in Copied/CopyOnWrite.
1. symsnap restore B <- C (symsnap is used for TF/Snap; TF/VP Snap uses symclone with -vse).
2. symclone restore A <- B.
Terminate the sessions once both restores have completed successfully.
### Restore operation in a concurrent environment
A->B TimeFinder/Clone, A->C TimeFinder/VP Snap: restoration can be done A <- B.
A->B TimeFinder/Clone, A->C TimeFinder/Snap: restoration can be done A <- B.
(TimeFinder/Clone and TimeFinder/Snap restore sessions cannot exist together.)
### Module 2: TF/Clone and TimeFinder VP Snap operations
symdg create rdmdg -nop
symclone -g rdmdg establish -full -consistent -tgt -nop -v -sid 12
symclone -g rdmdg query
##### Module 2 Session 5: Device groups
All device groups live in symapi_db.bin, or in GNS if it is active.
A DG can be created, recreated, or renamed.
Data Protection > TimeFinder/Clone actions:
create, recreate, terminate, activate, set mode, establish, restore.
### Module 3: TimeFinder/Snap operations
Type 1: Normal snap --> 16 sessions. Multi-virtual snap --> 128 sessions.
Logical point-in-time images: pointers are created on the virtual devices, and these virtual devices are then given access to the host.
128 snap sessions, available immediately - TimeFinder/Snap provides unmatched replication flexibility.
The default max. number of snaps is 16; if a restore is planned, the default drops to 15.
With Enginuity 5875, 2 sessions are reserved for restoration, so the max. possible number of snaps is 14.
Multi-virtual snap can be enabled with the option SYMCLI_MULTI_VIRTUAL_SNAP = ENABLED.
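The session accounting above works out as follows (a toy calculation that just mirrors the note's numbers):

```python
# Session accounting from the note: 16 snap sessions by default,
# minus sessions reserved when a restore is planned.

MAX_SESSIONS = 16
restore_reserved_default = 1   # restore planned -> 15 usable snaps
restore_reserved_5875 = 2      # Enginuity 5875 reserves 2 -> 14 usable

print(MAX_SESSIONS - restore_reserved_default,
      MAX_SESSIONS - restore_reserved_5875)   # 15 14
```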
### TimeFinder/Snap operations
The symsnap command is used for normal snap operations; for VP Snap, the symclone command is used with the -vse option.
## TimeFinder Snap
The target is a virtual device (VDEV) mapped to a host.
Copying only occurs when there are writes to the source or the target.
Only original data that has changed is saved to the save pool.
Queries can be done using the symsnap command.
TimeFinder/Snap - copying data:
TimeFinder/Snap uses a process called copy-on-first-write: when the host attempts to write a track on the source, the original data is copied to the save pool the first time that track is written; the existing track remains unchanged until a write is first initiated on it.
New writes to the VDEV are also saved to the save pool.
## Striping across save devices
A copy-on-write is done for each changed track.
The copy goes from the source to striped save devices: tracks are striped in a round-robin manner across the save devices to improve performance.
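Round-robin placement of copied tracks across save devices can be sketched as follows (the device names are invented for illustration):

```python
# Round-robin distribution of changed tracks across save devices,
# as described above. Device names are illustrative.

from itertools import cycle

save_devices = ["SAVE_A", "SAVE_B", "SAVE_C"]
rr = cycle(save_devices)

# Assign six changed tracks to save devices in round-robin order.
placement = {track: next(rr) for track in range(6)}
print(placement[0], placement[3])   # SAVE_A SAVE_A

# Each device ends up with an even share of the copied tracks.
counts = {d: sum(1 for v in placement.values() if v == d)
          for d in save_devices}
print(counts)   # {'SAVE_A': 2, 'SAVE_B': 2, 'SAVE_C': 2}
```

The even spread is the point: no single save device becomes a hotspot for COFW traffic.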

## Terminating a copy session
When snap sessions are terminated, the tracks are reclaimed in the background and the space is released. All copy structures are freed up, and the virtual devices are made ready.
## Multiple save device pools
Symmetrix save pools are special devices that provide the physical storage; save pool allocation should be considered carefully.
Write-intensive applications should have larger snap pools, and long-duration snapshots should have larger snap pools.
The -svp option on the create action specifies which save pool to use.
## symsnap operations
create, activate, restore, terminate, recreate (Enginuity 5874 and SE 7.2 or higher), establish (Enginuity 5874 and SE 7.4 or higher).
Prior to Enginuity 5874: Create --> Activate --> Terminate.
With Enginuity 5874 and SE 7.4 and above: Recreate --> Activate (incremental).
## Configuration considerations
Some cache is required for TF/Snap operations; the number of snap VDEVs must also be considered.
VDEVs (snapshots) are persistent, cache-only devices and consume a Symmetrix device ID.
Save devices should be spread across as many physical devices as possible.
Save area monitoring: a save device threshold can be set and monitored.
If the save device area fills, the sessions that require free space are put in the Failed state.
Save devices can be added dynamically.
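The threshold-and-action pattern that symsnap monitor provides (via -percent and -action, shown below) can be sketched generically. The pool numbers and the callback are invented for illustration:

```python
# Generic sketch of the save-pool threshold monitoring described above:
# when utilization crosses a percentage threshold, run an action.
# Pool sizes and the action are invented for illustration.

def check_pool(used_tracks, total_tracks, percent, action):
    utilization = 100 * used_tracks / total_tracks
    if utilization >= percent:
        action(utilization)     # e.g. add save devices, alert an admin
        return True
    return False

alerts = []
triggered = check_pool(used_tracks=850, total_tracks=1000, percent=80,
                       action=lambda u: alerts.append(f"pool at {u:.0f}%"))
print(triggered, alerts)   # True ['pool at 85%']
```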
Save device space considerations:
If a write cannot be completed because there is no space in the save devices, the affected devices move to the Not Ready state and copy-on-write is disabled.
Draining save devices:
A disable command is permitted on an active save device.
All active tracks are copied to other devices in the save pool; disabling the save device drains its data to the other devices.
If draining causes the pool to overflow, the sessions are terminated.
disable dev 2323 in pool snappool, type=snap
## Monitoring save devices
symsnap -sid 20 monitor -svp appn_a_pool -i 5
symsnap monitor -sid 12 -percent 80 -action onepercentscript.sh -svp default -i 60 -c 5
## TimeFinder/Snap operations
symdev list -vdev -sid 12
2E3:2E6 are the production host devices; 2E7:2EA are the backup host devices.
symsnap list -pools
symsnap list -svp DEFAULT_POOL -type savedevs
## Create a new save pool and add save devices
symconfigure -sid 12 -c "create pool appn_a_pool, type=snap"
symconfigure -sid 12 -c "disable dev 0247:024A in pool DEFAULT_POOL"
symconfigure -sid 12 -c "add dev=0247:024A in pool appn_a_pool, type=snap, member_state=enable"

## Show device group
symdg create snapdg
set SYMCLI_DG=snapdg
symdg add all dev -range 2C3:2C4
symdg add all dev -range 2E7:2E8 -vdev (-vdev adds the devices as targets)
symsnap create -svp appn_a_pool
symsnap query
GD flags:
G - the target device is associated with this group.
D - duplicates exist for this target (there is more than one inactive duplicate); "." means no duplicates for this target.

## symsnap list (a hold is placed on devices created as snap sources)
symdev list -held
symsnap activate -consistent -nop
symsnap query (the session goes into CopyOnWrite)
##Save Pool Utilization
symsnap list -svp appn_a_pool -savedev
symsnap query -multi
## Creating concurrent sessions with symsnap
2E9:2EA are added to the symdg device group.
symsnap create -svp appn_a_pool
symsnap activate -consistent
symsnap create -svp appn_a_pool -concurrent
symsnap activate -consistent
## Draining save devices
symsnap list -svp appn_a_pool -savedev
symconfigure -sid 24 -c "disable dev 0247 in pool appn_a_pool, type=snap" commit -nop
symsnap list -svp appn_a_pool -savedev -all (shows the disabled devices as well)
symcfg show -sid 24 -pool appn_a_pool -snap -all
symsnap terminate -nop
symsnap list -svp appn_a_pool -savedev -all (the used tracks go to zero after the sessions are terminated)
## symsnap restore
3 types of restore operations can be performed on virtual devices:
1. Incremental restore from the target VDEV to the source device.
2. Incremental restore to a BCV that has been split but still holds an incremental relationship with the source device.
3. Restore to a device outside of the sessions, for which a full restore has to be performed; the restore target should be equal to or greater than the source device, and if the target is a BCV, emulation mode will be used.
During a restore all the existing sessions are maintained. With Enginuity 5876, 1 restore session is created, while with Enginuity 5875, 2 sessions are created for the restoration.
## Restore a snap session
symsnap restore -nop
symsnap query
When the symsnap restore command is issued, it makes the source devices Not Ready for a short time; when the restore starts, the source devices become Ready again, although the VDEVs remain Not Ready. They can be made Ready again by issuing these commands:
symdev ready 2E7 -sid 20; symdev ready 2E8 -sid 20
## symdev list -held
## symsnap query -multi
Even after the restore operation the original snap session is maintained, so in order to recreate the existing snap session, the restore session has to be closed first:
symsnap terminate -restore -nop
symsnap query -multi
This converts the existing VDEVs back to read/write.
## Duplicate snap sessions
Introduced with Enginuity 5875 and SE 7.2.
The original snap session with the source must be created before the duplicate session; once the first snap session is activated, the duplicate snapshot can be taken.
A duplicate snap session persists even if the original snap session is terminated.
Duplicate snap sessions allow a snap of a VDEV: given a snap session between the STD device and a first VDEV, that VDEV can be used as the source for the next snap create. When the duplicate snap is activated, it is actually activated against the original STD device, so the PIT is the same for both VDEVs, although the timestamps differ (e.g. the original snap session at 10am and the duplicate at 11am). Both snaps use the same save pool.
At most two duplicate snap sessions will persist, while the original sessions may exist up to the permissible limit.
## TimeFinder considerations with XtremSW Cache
Creating a duplicate snap:
symdg show duplicate_snap | more
symsnap create -svp appn_a_pool DEV001 vdev ld VDEV001 -nop
symsnap activate -consistent -nop (after activation the status goes to CopyOnWrite)
symsnap query -multi
symsnap create -duplicate DEV001 vdev ld VDEV002
symsnap query -multi
GD status:
G - associated with a device group.
D - the target device has one or more inactive duplicates.
symsnap activate -duplicate -consistent VDEV001 vdev ld VDEV002 -nop
symsnap query -multi
symsnap query -summary -multi
This shows the 2 sessions running in CopyOnWrite.
## Module 3: TimeFinder/Snap operations in Unisphere
Create a save pool (Data Protection -> Replication Groups and Pools).
Select DEFAULT_POOL -> click on the save volumes/devices.
Disable the devices existing in the default pool.
Click on create save pool -> in the advanced options, enable the new pool members.
Now create device groups and add the STD devices; in the device group choose the STD volumes and the VDEVs as well.
TimeFinder/Snap operations: Data Protection -> Local Replication.
TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder Snap actions:
create pairs, activate, establish, terminate, duplicate, restore, recreate.

#### Module 4: SRDF
Operating-system independent (open systems and mainframe).
An SRDF group is the relationship between local director ports and remote director ports. Any Symmetrix device configured for SRDF must be added to an SRDF group for replication.
Static SRDF groups are kept in the IMPL.bin file; dynamic SRDF groups are not written to the symapi_db.bin file, yet they are persistent across power cycles and IMPL.
Dynamic SRDF is enabled by default on Symmetrix VMAX arrays with Enginuity 5874.
## Dynamic SRDF groups
Check that both arrays have the dynamic RDF configuration state enabled (it is enabled by default).
symcfg list (the "num phys devices" column shows the number of physical devices assigned to the local host where the command was run)
symcfg list -sid 20 -v
Dynamic device pairing allows the creation of SRDF groups, and dynamic device pairs enable one to create, delete, and swap SRDF R1-R2 pairs.
## List available Remote Adapters and currently configured SRDF groups
symcfg list -ra all -sid 20
A Symmetrix VMAX with Enginuity 5875 can support a max. of 250 SRDF groups.
## Enhancements to the symsan command
symsan list -sid 20 -sanrdf -dir all
With Enginuity 5876 Q4 2012 and SE 7.5 and above, symsan can be run on the local system.
The symcfg command cannot fetch the RDF groups or the connectivity details between two storage systems before the first SRDF group is created; until then it shows nothing.
To create the first SRDF group, the full serial number of the remote array is needed, which can be fetched easily with the symsan command: symsan shows the local and remote RA groups and the remote array serial number.
To run the symsan command, the remote array may be at a lower Enginuity level.
symsan list -sid 20 -sanrdf -dir all
symsan list -sid 12 -sanrdf -dir all
## Determine the dynamic-capable devices
symdev list -dynamic -sid 12
symdev show 2C7 (check "Dynamic RDF Capability": RDF1 and RDF2)

## Creating RDF groups
symrdf addgrp -label srdf_s -sid 20 -remote_sid 12 -dir 09F,10F -remote_dir 09F,10F -rdfg 10 -remote_rdfg 10
symcfg list -sid 12 -rdfg all
symrdf addgrp creates an empty dynamic SRDF group on each array and logically links them. The RDF group number is entered as a decimal number and is stored internally as a zero-based hexadecimal number; keeping it the same on both the local and remote arrays is common practice, though not a requirement.
So if RDF group number 10 is given, the output will show 09.
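The decimal-to-internal-number mapping mentioned above is just an off-by-one, zero-based hex rendering:

```python
# SRDF group numbers are entered as 1-based decimals but displayed
# internally as zero-based hexadecimal, as noted above.

def internal_group_number(rdfg_decimal):
    return f"{rdfg_decimal - 1:02X}"

print(internal_group_number(10))   # 09
print(internal_group_number(26))   # 19
```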
## New options with createpair
Prior to Enginuity 5876 and Solutions Enabler 7.4, createpair simply created the R1 -> R2 pairing.
With Enginuity 5876 and Solutions Enabler 7.4:
symrdf createpair -format (-establish, -type, -rdf_mode, -cons_exempt)
Prior to issuing this command, both devices should be unmapped from the front-end ports or made Not Ready; after the operation completes, they are made R/W again.
## Create RDF device pairs
symrdf createpair -sid 20 -f pairs.txt -rdfg 10 -type r1 -establish
With -type r1, the first column of devices in the file become the R1 devices and their corresponding devices act as the R2s.
With Solutions Enabler 7.4 the default SRDF mode on createpair is adaptive copy; before SE 7.4 the default SRDF mode was synchronous.
A new option was also added in the /var/emc/config/options file: SYMAPI_DEFAULT_RDF_MODE = Synchronous.
## Device pairs created
symrdf -f pair.txt query -sid 20 -rdfg 10
MDAE flags:
M mode - S = sync, A = async, E = semi-sync, C = adaptive copy.
D domino - X = enabled, . = disabled.
E consistency exempt - X = enabled, . = disabled, M = mixed.
## List available RAs and currently configured SRDF groups
symcfg list -sid 12 -RA all
## Deleting device pairings
Deleting the RDF pairs removes the pairing information from the Symmetrix.
You must suspend the RDF links before issuing the symrdf deletepair command; the state should be Suspended, Split, or Failed Over.
Cancelling the SRDF pairing changes the devices' status from R1/R2 to regular, and devices in the device group change from RDF to RDF-capable.
symrdf suspend -sid 12 -f pair.txt -rdfg 5
symrdf deletepair -sid 12 -f pair.txt -rdfg 5
## Identify accessible SRDF volumes
syminq
symdev list -r1 (shows all the R1 devices configured on the host)
symdev list -r2 (shows all the R2 devices configured on the host)
## SYMCLI SRDF device groups
Devices can be grouped into device groups.
All devices in a device group must be in the same Symmetrix array, and all must be of the same type (R1, R2, or R21). The type of the device group must be specified; only devices of that type can be added.
symdg create -type R1 srdfsg
set SYMCLI_DG=srdfsg
symdg add all dev -range 2c7:289
The device group definition is stored in symapi_db.bin on the host where symdg was created.
## Display symdg device groups
symdg show srdfsg | more
symdg show
## Displaying SYMCLI device groups
DEV001 DEV002
## symrdf command syntax
symrdf -g <device group> <action> [options]
Actions: suspend, resume, establish, terminate, split, activate, failover, failback, update, restore, set mode.
symrdf ping -sid 12
symrdf ping -sid 20
## Changing the SRDF operational mode
symrdf set mode <mode val> [skew]
mode val = async | sync | acp_wp | acp_off
symrdf set mode sync -nop
symrdf query
(check the MDAE flags in the output)
## Suspending the SRDF links
symrdf suspend -nop
symrdf query
Suspend is a singular operation that changes the link state to NR; the source device accumulates invalid tracks owed to R2.
To invoke a suspend, the RDF pair must already be in one of the following states: Synchronized or R1 Updated.
After the suspend the copy operation stops, but the RDF group remains and replication can be resumed.
## Resuming the SRDF links
symrdf resume -nop
symrdf query
As soon as the resume operation runs, the state changes to SyncInProg. During the transfer of the R2 invalid tracks, write serialization is not maintained. The links are set back to read/write; R1 will be R/W and R2 will be WD (write disabled).
## SRDF/Synchronous operations
SRDF disaster recovery operations:
Failover - makes the R2 devices R/W and the R1 devices WD, so the R2 site becomes the production site; invoked in the case of a disaster. If the links have failed, the R1 invalid tracks will increase.
Update - transfers the accumulated invalid tracks owed to R1 from R2 while production work continues on R2.
Failback - resumes production work on the R1 site, making R1 R/W and R2 WD; as soon as the failback command is given, the production host continues its work.
Failover switches data processing from the source side to the target side.
## SRDF Failover
symrdf failover -nop (executed from the R2 side)
In the failover scenario the R1 devices are write disabled, the SRDF device-pair links become Suspended, and R2 acts as the production device in read/write.
Steps:
1. Stop all applications.
2. Unmount the file systems.
3. Perform the failover operation.
4. Resume the applications from R2.
## symrdf query
R2 will be read/write and R1 will be write disabled. RDF pair state: Failed Over.
## symrdf update -nop
This can only be performed in the Failed Over state, in which R1 is WD. The command copies all the write-pending tracks owed to R1; after running it, the SRDF link status changes from NR to R/W.
The update command should be performed before failback, because pending write I/Os would make R1 lag R2 and reduce failback performance.
The status changes to UpdInProg.
## symrdf query
## SRDF Failback
A failback should always be done gracefully.
R1 will be R/W and R2 will be WD; the SRDF links are set to R/W.
## SRDF decision support (concurrent) operations
SRDF split - places the two sides in concurrent operation: suspends the links between R1 and R2 and makes both volumes R/W.
SRDF establish - resumes normal SRDF operation: preserves the data on R1 and discards the changes on R2.
SRDF restore - resumes SRDF operation: preserves the data on R2 and discards the changes on R1.
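The data-preservation semantics of establish vs. restore can be captured in a toy model (the device contents are invented strings standing in for volume data):

```python
# Toy model of the SRDF split/establish/restore semantics described above.
# Device contents are invented strings standing in for volume data.

r1, r2 = "prod", "prod"        # synchronized starting point

# Split: both sides become writable and diverge.
r1, r2 = "prod+r1_changes", "prod+r2_changes"

def establish(r1, r2):
    # Keep R1 data, discard R2 changes.
    return r1, r1

def restore(r1, r2):
    # Keep R2 data, discard R1 changes.
    return r2, r2

print(establish(r1, r2))   # ('prod+r1_changes', 'prod+r1_changes')
print(restore(r1, r2))     # ('prod+r2_changes', 'prod+r2_changes')
```

The only difference between the two operations is which side's post-split changes survive.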
## symrdf split -nop
symrdf query
## symrdf establish
If the establish command is run in the Split state, the R1 device is copied to the R2 side, discarding the R2 changes. R2 becomes WD and R1 R/W; the links become read/write.
## SRDF Restore
symrdf restore -nop
The restore operation resumes SRDF remote mirroring: changes made to the target while in the Split state are kept, and changes made to the source are overwritten. The R2 device becomes write disabled and the links are resumed. R1 can be accessed again without requiring synchronization, as the data is copied over from the R2 side.
## Query after restore
Restore also resumes the SRDF links.
## split, establish, restore - summary
Production continues on the R1 volumes; the R2 volumes are used for DSS operations.
##RDF R1/R2 Personality swap operations
symrdf swap
It changes the personality of the R1 devices to new R2 devices and of the R2 devices to new R1 devices.
symrdf failover -establish
An R1/R2 personality swap is sometimes necessary for:
Symmetrix load balancing
disaster recovery drills
data center relocation
maintenance operations on hosts while production continues at the DR site.
##Concurrent SRDF device: an SRDF device with two SRDF mirrors is called a concurrent SRDF device.
R11 - each R1 mirror is paired with a different R2 device on a remote array.
R21 - this device is used at the secondary site of cascaded SRDF. The R21 acts as the R2 mirror of the primary R1 site, and also as the R1 source for the tertiary site.
R22 - each R2 mirror is paired with two different remote Symmetrix arrays. Only one of the R2 mirrors can be read/write at a time. It is used in SRDF/Star environments, which means it can receive data from just one R1 mirror at a time.
##Concurrent SRDF R11: concurrent SRDF allows two remote SRDF mirrors of a single R1 device. Each pair belongs to a different RA group - for example, one copy for disaster recovery and another for backup. Any combination of SRDF modes is allowed except async with async, although with Enginuity 5875 and above both legs can be in async mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in asynchronous mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in adaptive copy mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in synchronous mode.
##2 synchronous remote mirrors: a write at the primary site does not return as complete until both remote Symmetrix arrays acknowledge that the I/O is in their cache.
1 sync and 1 adaptive copy remote mirror: the SRDF I/O to the synchronous secondary device must show ending status at the Symmetrix before a second host I/O can be accepted; in adaptive copy mode the host does not wait for an acknowledgement.
A simultaneous restore from both R2s to the R1 cannot be performed, and SRDF swap is not allowed in this configuration.
##Concurrent RDF Example
symrdf query -rdfg all
creating concurrent srdf connections
symrdf addgrp -sid 12 -remote_sid 20 -label conc -dir 09F,10F -remote_dir 09F,10F -rdfg 11 -remote_rdfg 11
symrdf createpair -sid 20 -f createpair.txt -type r1 -establish -rdfg 11
createpair.txt:
2C7 2C9
2C8 2CA
##symrdf list -concurrent
The mode can be changed using:
symrdf set mode sync -rdfg 11
##RDF-ECA for consistency protection
RDF-ECA (Enginuity Consistency Assist) is a feature of the Symmetrix Enginuity environment.
It is used with SRDF/Synchronous to hold write I/O across all devices of a composite group until all relevant links are suspended.
It interacts with the RDF daemon on one or more hosts to manage consistency.
It holds write I/O to a user-defined list of Symmetrix devices and their corresponding replicas.
RDF-ECA suspends I/O across all devices of the composite group; it is supported by the RDF daemon, which performs monitoring and cache recovery. If one or more R1 sources are unable to send their data, all corresponding replicas stop replicating, and they resume together. This ensures database consistency across all devices, forming a recoverable point-in-time image.
A composite group must be created using -rdf_consistency and enabled using the symcg enable command.
symcg enable starts daemon monitoring and management of the RDF consistency group. Starting with Enginuity 5874, SRDF/S consistency is supported only with RDF-ECA.
##Fast VP Coordination for SRDF device pair
Prior to Enginuity 5876, data movement of the R1 depends on the storage group's FAST VP statistics, while R2 data movement depends on the FAST VP statistics collected on the R2 device itself. In practice the R2 may be demoted to the SATA tier because it is not being read, since the FAST VP statistics reflect only local access. So R1 and R2 movement are driven by independent statistics.
With Enginuity 5876 and Solutions Enabler 7.4, the R1 FAST VP statistics are transmitted to the R2, so both sides are tiered consistently.
FAST VP SRDF coordination is supported with single, concurrent, and cascaded SRDF, in synchronous and asynchronous modes, and with SRDF/Star and SRDF/EDP. The R21 device acts as a relay, transmitting the R1 metrics to the R2 device.
In the case of an R22, the metrics are received from only one of the R1 mirrors.
##Enabling FAST VP SRDF coordination
symfast -sid 20 -fp_name BC_POLICY -sg esx161_FASTsg modify -rdf_coordination enable

symfast list -sid 20 -association
##SRDF considerations with XtremSW Cache (VFCache)
Volumes that are actively cached by VFCache require special consideration with symcg, symstar, and symrcopy operations.
##Mixed RDF modes on remote adapters: prior to Enginuity 5876, if the same RA was used for both synchronous and asynchronous traffic, asynchronous got preference, which led to a performance impact on synchronous applications.
With Enginuity 5876 and Solutions Enabler 7.4, RA CPU allocation can be enabled, which gives independent CPU cycle allocation between sync/async/copy modes (e.g. 70/20/10); this can also be disabled.
symqos -sid 20 -ra dir 9F set io -sync 50 -async 40 -copy 10
symqos -sid 20 list -ra -io
##SRDF virtual provisioning to standard

                      2-site           cascaded         EDP              STAR
Thin  (5876 Q4 2012)  vmax10k/20k/40k  vmax10k/20k/40k  vmax10k/20k/40k  vmax10k/20k/40k
Thick (5876 Q4 2012)  vmax20k/40k      vmax20k/40k      vmax20k/40k      vmax20k/40k

SE 7.5 now supports SRDF device pairs between virtually provisioned (thin) and standard (thick) devices.
SRDF /EDP (Extended Distance Protection):
3-way SRDF for long distance with secondary site as a pass through site using Ca
scaded SRDF.
For Primary to Secondary sites customers can use SRDF/S, for Secondary to Tertia
ry sites customer can use SRDF/A
Diskless R21 pass-through device, where the data does not get stored on the driv
es or consume disk. R21 is really in cache so the host is not able to access it.
Needs more cache based on the amount of data transferred.
R1 -S-> R21 -A-> R2 (Production site > Pass-thru site > Out-of-region site)
Primary (R1) sites can have DMX-3 or DMX-4 or V-Max systems, Tertiary (R2) sites
can have DMX-3 or DMX-4 or V-Max systems, while the Secondary (R21) sites needs
to have a V-Max system.
An SRDF/Star configuration allows a mixture of thin and thick volumes with all VMAX series arrays running 5876 Q4 2012 SR.
SRDF MODULE 4.
##DR for VMFS datastore Primary ESXI server
RDF_DATASTORE - 002F6
RDF_DATASTORE contains RDFStudentVM - Contain R1_Data
SRDF LESSON 5:- DR using RDM
symrdf -g srdf_rdm set mode sync -nop
symrdf -g srdf_rdm query
symrdf -g srdf_rdm failover -nop
symrdf -g srdf_rdm query
The pair state becomes Failed Over.

##Rescan Remote ESXi Server


Upload the downloaded .vmx file to any datastore.
##SRDF Failback
symrdf -g srdf_rdm query
symrdf -g srdf_rdm failover -nop
symrdf -g srdf_rdm query
symrdf -g srdf_rdm failback -nop
symrdf -g srdf_rdm query

##SRDF synchronous operation
SRDF/A settings, SRDF/A DSE settings
##SRDF asynchronous operations - asynchronous remote replication:
minimal impact to production applications
extended distance
always consistent on R2
efficient bandwidth usage
supports mainframe and open systems
meets a wide range of RPO/RTO requirements
##SRDF/A architecture - delta sets
SRDF/A uses delta sets to maintain groups of writes collected over short periods of time. Delta sets are discrete buckets of data that reside in different sections of the Symmetrix cache. There are 4 delta sets that manage the flow:

source host --> Capture (N) --> Transmit (N-1) --> [SRDF links] --> Receive (N-1) --> Apply (N-2) --> target host

The capture delta set contains all writes coming from the source host and is numbered N.
The transmit delta set in the source Symmetrix, numbered N-1, contains the data currently being sent across the links.
The N-1 set on the remote side is called the receive delta set; it receives the data sent by the transmit delta set.
The apply delta set, numbered N-2, is always consistent and is applied on the remote Symmetrix array: its data is moved to the appropriate cache slots, ready to destage to disk. The data in the apply delta set is restartable and consistent.
The Symmetrix performs a cycle switch once the N-1 set is completely received, the N-2 apply set is completely applied, and the minimum cycle time has elapsed. During the cycle switch, N+1 becomes the capture set, N becomes transmit and receive, and N-1 becomes the apply delta set.
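The four-delta-set flow above can be illustrated with a toy simulation (an illustration of the concept only, not Enginuity internals): host writes land in the capture set, and a cycle switch promotes capture to transmit and receive to apply only when the previous transmit set has been fully sent and the previous apply set fully applied.

```python
# Toy model of SRDF/A delta sets (illustrative only, not Enginuity internals).

class SrdfaSession:
    def __init__(self):
        self.cycle = 0
        self.capture = []      # N   - writes from the source host
        self.transmit = []     # N-1 - being sent across the link
        self.receive = []      # N-1 - arriving on the remote side
        self.apply = []        # N-2 - consistent set applied to R2

    def host_write(self, data):
        self.capture.append(data)

    def drain_link(self):
        # Link transfer: move everything from transmit to receive.
        self.receive.extend(self.transmit)
        self.transmit = []

    def apply_to_r2(self, r2_disk):
        r2_disk.extend(self.apply)
        self.apply = []

    def try_cycle_switch(self):
        # Switch only when N-1 is fully received and N-2 fully applied.
        if self.transmit or self.apply:
            return False
        self.apply = self.receive      # received N-1 becomes the apply set
        self.transmit = self.capture   # capture N becomes the transmit set
        self.receive = []
        self.capture = []              # new empty capture set (N+1)
        self.cycle += 1
        return True

r2 = []
s = SrdfaSession()
s.host_write("w1"); s.host_write("w2")
s.try_cycle_switch()        # w1, w2 move to transmit
s.host_write("w3")
s.drain_link()              # w1, w2 arrive on the remote side
s.try_cycle_switch()        # w1, w2 become apply; w3 becomes transmit
s.apply_to_r2(r2)
print(r2)                   # -> ['w1', 'w2']: the consistent N-2 image
```

Note how "w3" is still in flight when R2 already holds a consistent (if slightly older) image - this is why the apply set is always restartable.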

##Dependent write consistency


##SRDF/A system attributes: symcfg list -v -sid 20 | more
> The RDF configuration state must be Enabled in this configuration.
The host throttle and maximum cache usage settings are shown in the output.
##List RDF group - SRDF/A properties
symcfg list -sid 12 -rdfg 10 -rdfa | more
Flag columns (group CSRM TDA, plus group/device write pacing SAU and P):
C Consistency : X = enabled, . = disabled, - = N/A
S Status : A = active, I = inactive
R RDFA mode : S = single session, M = multi-session
M Multi-session cleanup : C = cleanup required
T Transmit idle : X = enabled, . = disabled, - = N/A
D Delta set extension : A = active, . = disabled
A Autostart : X = enabled, . = disabled
Write pacing flags (GRP = group-level pacing, DEV = device-level pacing):
S Status : A = active, I = inactive
A Autostart : X = enabled, . = disabled
U Supported : X = supported, . = not supported
Flag for group-level and device-level pacing:
P Devs paceable : X = all devices are paceable, . = not all devices are paceable
##List an individual RDF group:symrdf list -sid 20 -rdfg 10

SRDF adaptive copy modes
Posted on February 24, 2013
Here I share some information about adaptive copy modes in SRDF. Basically, adaptive copy modes allow the primary and secondary volumes to be more than one I/O out of synchronization.
There are two adaptive copy modes:
1) Adaptive copy write-pending (AW) mode
2) Adaptive copy disk (AD) mode


Both modes allow write tasks to accumulate on the local system before being sent
to the remote system.
Adaptive copy write-pending mode: write tasks accumulate in global memory. A background process moves, or destages, the write-pending tasks to the primary volume and its corresponding secondary volume on the other side of the SRDF link. The advantage of this mode is that it is faster to read data from global memory than from disk, thus improving overall system performance. An additional advantage is that the unit of transfer across the SRDF link is the updated blocks rather than an entire track, resulting in more efficient use of SRDF link bandwidth. The disadvantage is that global memory is temporarily consumed by the data until it is transferred across the link. Consequently, adaptive copy write-pending mode should only be used where the host write workload is fully understood.
Adaptive copy disk mode is similar to adaptive copy write-pending mode, except that write tasks accumulate on the primary volume rather than in global memory. A background process destages the write tasks to the corresponding secondary volume. The advantages and disadvantages of this mode are the opposite of those of adaptive copy write-pending mode: less global memory is consumed, but it is typically slower to read data from disk than from global memory, and more bandwidth is used because the unit of transfer is the entire track. In addition, because it is slower to read data from disk than from global memory, device resynchronization time increases.
##symrdf list -sid 20 -rdfg 10
symrdf addgrp -label srdf_a -sid 12 -remote_sid 23 -dir 09F,10F -remote_dir 09F,10F -rdfg 10 -remote_rdfg 10
##Create RDF device pairs: symrdf createpair -sid 20 -rdfg 10 -f pairs.txt -type R1 -establish -g srdfadg
(srdfadg is the device group name)
pairs.txt:
2C7 2C7
2C8 2C8
##Transitioning to SRDF/A mode
From synchronous: if the devices are in the Synchronized state, then by definition the R2 devices are always consistent, so enabling SRDF/A immediately yields a consistent image.
From adaptive copy disk: in this mode the R1 acknowledges the write immediately after the data is written to the local volume; a background process later reads the data from disk and transfers it to the remote R2 volumes, which uses more bandwidth (whole tracks are transferred) but less global memory. Any invalid tracks owed to the R2 are synchronized first, and two cycle switches are required before the data is consistent at the R2. SRDF/A then provides consistent data.
##From adaptive copy write-pending: this mode keeps the data in global memory, so the R1 acknowledges faster, and a background process destages the data to disk; more global memory is used, but bandwidth utilization is lower because updated blocks rather than whole tracks are transferred, so link utilization is less. The write-pending slots are merged into SRDF/A cycles; once there are no more write-pending slots, it takes two more cycles before the R2 is consistent.
##Sync to SRDF/A (1): symrdf query
Any SRDF/A operation must be performed on the whole device group; all the SRDF devices must be part of the same SRDF group. This is in contrast to synchronous mode, where operations can be performed on a subset of devices in an SRDF group.
Transition from synchronous to asynchronous:
##Sync to SRDF/A: symrdf set mode async -nop
symrdf enable -nop
symrdf query -rdfa
Transition from ACP to asynchronous:
The current state is SyncInProg.
symrdf set mode async -nop
symrdf enable -nop
symrdf query -rdfa
The state remains SyncInProg; after synchronization it takes two more cycles before the data is consistent at the R2 side.
##symrdf query -rdfa
MDACE flags:
M Mode : S = sync, A = async, E = semi-sync, C = ACP
D Domino : X = enabled, . = not enabled
A Adaptive copy : D = disk mode, W = write-pending, . = disabled
C Consistency : X = enabled, . = disabled
E Consistency exempt : X = enabled, . = disabled, M = mixed, - = N/A
##SRDF/A configuration parameters - maximum SRDF/A cache usage:
The system-wide parameters are set using the symconfigure command, while the group-level settings are done with the symrdf command.
set symmetrix rdfa_cache_percent = 50;
set symmetrix host_throttle_time = 2;
##SRDF group-level settings:
Minimum cycle time:
symrdf -sid 12 -rdfg 10 set rdfa -cycle_time 10
Session priority:
symrdf -sid 12 -rdfg 10 -rdfa set session_priority 30
Session priority: the priority ranges from 1 to 64, with 1 being the highest priority (last to be dropped).
Minimum cycle time: the minimum time after which SRDF attempts a cycle switch; ranges from 1 to 59 seconds.
The minimum cycle time for SRDF/A with MSC is 3 seconds. For Enginuity 5875 and above the default value is 15 seconds.
##SRDF/A system configuration parameters
rdfa_cache_percent - defaults to 75, ranges up to 100.
This is the percentage of the system's maximum write-pending slots available to SRDF/A. Its purpose is to allow other applications to use part of the WP limit.
If SRDF/A usage exceeds this share of the WP cache limit, the SRDF/A session is dropped to free up the cache.
Setting it lower reserves WP limit for non-SRDF/A cache usage; setting it higher allows SRDF/A to use more cache, which may impact other applications using the cache.
rdfa_host_throttle_time - defaults to 0 (range 0-65535).
If > 0, this value overrides rdfa_cache_percent: when the WP limit is reached, the system delays host writes until cache slots become available, for at most this many seconds, instead of dropping the session.
Each system has a write-pending limit of 75% of cache slots. The purpose of this limit is to ensure the cache is not completely filled with write-pending tracks, leaving no place to put new I/O in cache. SRDF/A creates WP tracks as part of each cycle.
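The interaction between the WP limit, rdfa_cache_percent, and rdfa_host_throttle_time can be condensed into a decision function (a simplification for study purposes, not actual Enginuity behavior; the numbers in the calls are made up):

```python
# Simplified sketch of how SRDF/A reacts as cache fills (not Enginuity logic).

def srdfa_cache_action(wp_slots_used: int, wp_limit: int,
                       rdfa_cache_percent: int = 75,
                       rdfa_host_throttle_time: int = 0) -> str:
    """Return the action taken when SRDF/A write-pending usage grows.

    wp_limit is the system write-pending limit (75% of cache slots);
    SRDF/A may use rdfa_cache_percent percent of that limit.
    """
    srdfa_budget = wp_limit * rdfa_cache_percent // 100
    if wp_slots_used < srdfa_budget:
        return "ok"
    if rdfa_host_throttle_time > 0:
        # A nonzero throttle time overrides the drop behavior: host
        # writes are delayed (up to this many seconds) instead.
        return "throttle host writes"
    return "drop SRDF/A session"

print(srdfa_cache_action(500, 1000))                             # -> ok
print(srdfa_cache_action(800, 1000))                             # -> drop SRDF/A session
print(srdfa_cache_action(800, 1000, rdfa_host_throttle_time=2))  # -> throttle host writes
```

This captures the trade-off described above: throttling protects the session at the cost of host write latency, while the default behavior sacrifices the session to protect cache.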
##Monitoring - symstat command options
symstat -type cycle -reptype rdfa -rdfg all -i <interval>
symstat -type cycle -reptype rdfa -rdfg 10 -i <interval>
symstat -type cache -reptype rdfa -rdfg all -i <interval>
symstat -type request -reptype rdfa -rdfg all -i <interval>
symevent list -error
##Monitoring SRDF/A
symstat -type cycle -reptype rdfa -rdfg 10 -i 5
Flags TAS:
T Type : 1 = R1, 2 = R2
A Async : Y = yes, N = no
S Status : A = active, I = inactive

Active cycles: capture (source), apply (target)
Inactive cycles: transmit (source), receive (target)

So when the status is Active, the R2 side is applying and the R1 side is capturing; when the status is Inactive, the R1 side is transmitting and the R2 side is receiving.
##Monitoring SRDF/A
symstat -type cache -reptype rdfa -rdfg 10 -i 5
Note that the cache slots available for SRDF/A sessions are 75% of the system write-pending limit.
##SRDF asynchronous operations: SRDF/A resiliency features
SRDF transmit idle - in case the links are lost:
data transmission to the target stops
the link is suspended
SRDF/A remains active and the session grows in cache
after the link is restored, cycle switching resumes
should be used together with Delta Set Extension
Transmit idle is the feature by which SRDF/A can extend the capture, transmit, receive, and apply delta set phases while masking the effect of an "all SRDF links lost" condition.
Normally, in the absence of transmit idle, SRDF/A terminates if the links are lost. With dynamic SRDF groups, transmit idle is enabled by default.
While the links are down, the capture cycle grows in cache; eventually, if a cache-full condition occurs, SRDF/A becomes inactive. In addition, the Delta Set Extension feature pages delta set data out of and back into cache when the cache threshold is reached.
##SRDF Delta Set Extension
Allows offloading of SRDF/A delta sets from cache to specially configured pools.
> Delta Set Extension pools use SAVE-device pools, which help extend the capture, transmit, and receive phases to disk by providing delta set buffering capability.
DSE extends the cache space available to an SRDF/A session by offloading some or all of a cycle to preconfigured storage pools. DSE must be used in conjunction with transmit idle; transmit idle is enabled by default.
##SRDF/A Delta Set Extension: SAVE pools are designated as DSE pools at creation.
> Contain SAVE devices of a single emulation
> CKD, FBA or AS400
> DSE pools are like TimeFinder/Snap pools: each DSE pool must contain a single device emulation type (e.g. CKD or FBA).
An rdfa_dse pool can be associated with more than one RDFA group.
DSE has to be configured at both ends of the SRDF link so that the remote side can handle long transmit cycles.
There is a DSE threshold; once it is reached, data from cache is staged to the DSE pools.
> The DSE task is the process that destages cache data to the DSE pools as soon as the cache threshold is reached.
SRDF/A Delta Set Extension: the traditional SRDF/A data flow continues to interact directly with cache buffers.
A separate DSE task monitors delta set cache utilization and transfers delta set data between cache and disk.
Transmit idle must be enabled.

##When can DSE help?
SRDF/A DSE solves abnormal and temporary problems:
unexpected host load
link bandwidth issues
temporary link loss
It increases resiliency for SRDF/A; DSE is not going to solve permanent or persistent problems.
##Listing configured DSE pools: symcfg list -sid 20 -pools -rdfa_dse
symconfigure -sid 20 -cmd "create pool BC_DSE, type=rdfa_dse;" commit
symconfigure -sid 20 -cmd "disable dev 24B:24E in default_pool, type=snap;" commit
symconfigure -sid 20 -cmd "add dev 24B:24E to pool BC_DSE, type=rdfa_dse, member_state=ENABLE;" commit
PTECSL flags:
P Pool
T Technology
E Emulation
C Compression
S State
L Disk location
##Set RDF group attributes and activate DSE
> symrdf -sid 20 -rdfg 10 set rdfa_dse -autostart on -fba_pool bc_dse -both_sides
##symrdf -sid 20 -rdfg 10 -rdfa_dse activate -both_sides
##symcfg list -sid 20 -rdfg 10 -rdfa
TDA flags:
T Transmit idle
D Delta set extension status
A Autostart
Also shown: C Consistency, S State, R RDF mode, M Multi-session consistency.
When the DSE threshold (default 50%) is reached, cache data is staged to the DSE pool.
##symcfg show -sid 20 -pool BC_DSE -rdfa_dse
##Query after temporary link loss: at the time of the query the capture session has been in transmit idle for 35 seconds; when the link is lost, the transmit idle time counter is reported (here 35 seconds).
##Query after temporary link loss: the RDF pair state is TransIdle.
##DSE pool utilization
##SRDF/A group-level write pacing: extends availability by avoiding cache overflow.
It monitors:
> the R1-side I/O rate
> R2-side restore rates
> the transmit and receive rates on the source and target sides
SRDF/A group-level write pacing is available with Enginuity 5874 and above. It helps secure the availability of an SRDF/A session by preventing conditions that cause cache overflow: host writes are paced according to the SRDF I/O rate so that cache overflow is avoided at both the R1 and R2 ends.
SRDF/A write pacing can also monitor and respond to spikes in host write I/O rates, slowdowns in data transmittal to R2, and slow R2 restore rates.
##SRDF/A device-level write pacing: device-level write pacing is a feature supported with Enginuity 5875 and above and SE 7.2 on both sides (R1 and R2).
The restore rate (also called the apply rate) is monitored on the R2 side; the write rate at the R1 site is compared with the apply rate at the R2 site, and if the R1 rate is higher, corrective action (pacing) is taken.
Both group-level and device-level write pacing can be enabled at the same time.
**Only those R2 devices that have snap devices attached will be paced.
##SRDF/A - device-level write pacing
When the apply rate is no longer slower than the write rate, pacing stops.
With Enginuity 5876 and Solutions Enabler 7.5, device-level and group-level write pacing can be set on the R21:
R21 -> R2 leg of a cascaded SRDF configuration: group- and device-level write pacing are supported.
The R1 -> R21 leg must be in synchronous mode and R21 -> R2 must be asynchronous.
The R21 array must be at Enginuity 5876 Q4 2012; the other two Symmetrix arrays must be at a minimum of 5875.135.91 for enhanced group- and device-level write pacing.
This allows TF/Snap, TF/VP Snap, and TF/Clone of the R2 device.
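The pacing decision described above - compare the R1 write rate to the R2 apply (restore) rate and inject a delay while writes outpace the apply - can be sketched as follows (illustrative only; the fixed-delay-per-write model and the example rates are assumptions, not Enginuity's actual algorithm):

```python
# Illustrative sketch of SRDF/A write-pacing logic (not actual Enginuity code).

def pacing_delay_ms(write_rate: float, apply_rate: float,
                    cache_used_pct: float,
                    threshold_pct: float = 60.0,
                    delay_ms: int = 50) -> int:
    """Delay injected into each host write, in milliseconds.

    Pacing activates only when cache usage is past the threshold AND
    the R2 apply rate is slower than the R1 write rate; it stops as
    soon as the apply rate catches up (per the text above).
    Defaults mirror the documented 60% threshold and 50 ms delay.
    """
    if cache_used_pct >= threshold_pct and apply_rate < write_rate:
        return delay_ms
    return 0

print(pacing_delay_ms(write_rate=1000, apply_rate=600, cache_used_pct=70))   # -> 50 (paced)
print(pacing_delay_ms(write_rate=1000, apply_rate=1200, cache_used_pct=70))  # -> 0 (not paced)
print(pacing_delay_ms(write_rate=1000, apply_rate=600, cache_used_pct=40))   # -> 0 (below threshold)
```

Slowing host writes lowers the rate at which the capture set grows, which is exactly what keeps the session from hitting the cache-full condition that would otherwise drop SRDF/A.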
##Activate group- and device-level write pacing:
> symrdf -rdfg 10 activate -rdfa_pace -nop
With Solutions Enabler 7.4 both group- and device-level write pacing can be activated in a single command:
# symrdf -sid 20 -rdfg 10 activate -dp_autostart on -wp_autostart on -nop
##Group and Device-Level Write Pacing

symcfg list -sid 20 -rdfg 10 -rdfa


The default threshold is 60% and the default pacing delay is 50 ms; these can be changed with:
symrdf -rdfg 10 set rdfa_pace -delay xxxx -threshold yyy
##Recovering after loss of links: it is recommended to make a GOLD COPY of the R2 prior to starting any resynchronization.
In the event of a loss of links, a large number of invalid tracks may build up at the R1 side.
It is advisable to enable async mode only once both sides are synchronized.
Setting the SRDF mode to adaptive copy mode leads to less impact on the production side.
Note that the R2 does not hold consistent data during resynchronization after a long link failure: there will be many write-pending and invalid tracks in flight, so simply re-enabling the RDF links in async mode is not a good idea. It is better to resynchronize in adaptive copy mode and switch to SRDF/A once both sides are properly synchronized.
##Recovery example: when the links fail, the device pair state goes to Partitioned; production work continues on the R1 side.
# symrdf query -rdfa (the loss of links placed the status in the Partitioned state)
Once the link is recovered:
the session is still inactive; the mode is asynchronous.
When the links are active again the pair moves to the Suspended state, even though the link is active.
symrdf query -rdfa
##symrdf disable -nop
##symrdf set mode acp_disk -nop
##symrdf query -rdfa
Since consistency is enabled, we have to disable consistency before changing the mode.
symrdf resume
symrdf set mode async
symrdf enable
##SRDF session recovery tool: the SRDF session recovery utility is initiated by the symrecover command.
It runs in the background (e.g. as a Windows scheduled task) and monitors synchronous and asynchronous operations.
If a failure is detected, automatic recovery is initiated using a preconfigured options file with gold copy parameters.
The symrecover command can be run from the R1 or R2 side, but in the case of concurrent SRDF it must be run from the R1 side.
symrecover start -cg RDFAmon -mode async -options cg_mon_opts
This starts symrecover from the R1 host; RDFAmon is the consistency group and cg_mon_opts is the options file.
##Failover/failback with SRDF/A
Again, it is advisable to make a copy of the R2 prior to executing a failback operation.
The SRDF failback should be performed once the pair is in a synchronized state. During failback there are many write-pending tracks on the R2 owed to the R1, so right after re-enabling the R1 there is a window in which the data may be inconsistent. It is therefore better to create a gold copy of each side before performing these operations.
##Lesson 3 - Multi-session consistency
# symcfg list -sid 20 -rdfg all
##Multiple independent SRDF/A groups
symrdf -g rdfg1 query -rdfa (group number 10, cycle number 4)
symrdf -g rdfg2 query -rdfa (group number 11, cycle number 12)
Multiple independent SRDF/A groups:
symrdf -g rdfg1 query -rdfa (link loss - session changes to TransIdle)
symrdf -g rdfg2 query -rdfa (still Consistent, as its link is enabled)
##SRDF multi-session consistency (MSC)
Manages multiple SRDF/A sessions logically as if they were a single session.
Uses the RDF daemon on open systems.
Sessions can be within one Symmetrix or across different Symmetrix arrays.
Ensures complete restartable image copies.
*If MSC is enabled and transmission stops on one R1 -> R2 leg, replication is halted for all the other targets in the consistency group, stopping all data flow to the R2s. The RDF daemon storrdfd performs cycle switching and cache recovery; this ensures data consistency of the R2s at all times.
> All hosts performing management operations should run the storrdfd daemon with access to all arrays, and more than one such host should be used.
A composite group must be created with the RDF consistency protection option (-rdf_consistency) and must be enabled using symcg enable.
##SRDF/A multi-session consistency
The RDF daemon coordinates cycle switching of SRDF/A MSC sessions.
It is responsible for detecting failure conditions that would cause data on the R2 side to become inconsistent.
When a failure condition occurs, the SRDF/A sessions in the group are stopped in a manner that leaves the R2 side with a consistent data image.
##The RDF process daemon maintains the consistency environment; each locally attached host performing management operations must run an instance.
storrdfd - RDF daemon for cycle switching.
storapid - base daemon.
GNS (Group Naming Services) - communicates the composite group definition to the RDF daemon. If the GNS daemon is not running, the composite group must be defined individually on each host.
In a single session, an SRDF/A cycle switch occurs when the transmit cycle on R1 and the apply cycle on R2 are both empty; this switch is controlled by Enginuity.
In MSC, the transmit cycles on the R1 side of all participating sessions must be empty, as must the corresponding apply cycles on the R2 side; the switch is coordinated by the RDF daemon and controlled by Enginuity.
##All host writes are held for the duration of the cycle switch; this ensures dependent-write consistency. If one or more sessions in the MSC group complete their transmit and apply cycles ahead of other sessions, they have to wait for all sessions to complete prior to the cycle switch.

##SRDF/A MSC operations: there are three ways the RDF daemon can be started. If the RDF daemon is enabled (set SYMAPI_USE_RDFD = ENABLE), the daemon is started automatically by Solutions Enabler; it may take a bit of time to first connect and build its cache.
Create a composite group with the -rdf_consistency option: the group definition is passed to the RDF daemon as a candidate group. If the daemon is not already running, it is started automatically.
##Enable the consistency group:
symrdf -cg <composite_group> set mode async
symcg -cg <composite_group> enable
##Managing the RDF daemon
Prior to starting storrdfd, ensure that the default SYMAPI configuration database is up to date; the storrdfd daemon uses the database information to connect to the remote arrays.
There are 3 ways the RDF daemon can be started:
1. If SYMAPI_USE_RDFD is enabled, Solutions Enabler starts the daemon automatically. During the first connection it may take time to build its cache.
2. It can be started manually:
stordaemon start storrdfd -wait 10
3. It can be installed to start automatically at boot:
stordaemon install storrdfd -autostart
Setting autostart is recommended because the cache may take time to rebuild, depending on the number of RDF groups in use.
##SRDF/A with MSC
The composite group is created and the RDF groups are added to it; the CG is then enabled for multi-session consistency:
symrdf -g rdfg1 disable -nop
symrdf -g rdfg2 disable -nop
symdg dg2cg rdfg1 rdfa_msc_cg -rdf_consistency
symdg dg2cg rdfg2 rdfa_msc_cg -rdf_consistency -rename
symcg list
symcg -cg rdfa_msc_cg enable -nop
##SRDF/A with MSC (2)
symrdf -cg rdfa_msc_cg enable -nop
symrdf -cg rdfa_msc_cg query -rdfa
Now you can see the cycle number changes to the same number (3) for both RDF groups.
##SRDF/A with MSC (3)
If one RDF group's link is unable to propagate, the other RDF groups also stop propagating and go into the Partitioned state.
Once the link comes back up:
symrdf -cg rdfa_msc_cg establish
Once the invalid tracks are marked, merged, and synchronized, MSC protection is automatically re-instated, i.e.
symcg -cg rdfa_msc_cg enable
##MSC cleanup
There are 3 possible scenarios during MSC cleanup:
1. All receive cycles are marked as complete. In this case the receive cycles can be applied, so they are promoted to apply and committed.
2. Some receive cycles are complete and some are not. This situation arises if a failure occurs while some receive cycles are complete and others are still in transit; clearly only the apply cycles contain consistent data, so all receive cycles are discarded and only the apply cycles are committed.
3. Some receive cycles are already promoted and some are not yet promoted. In this case the promoted receive cycles and those not yet promoted are committed together: for a cycle switch to happen, all transmit and apply sets must have been empty, so the failure must have occurred during the cycle switch and the receive cycles contain the most recent consistent data. Note that receive cycles can only be promoted during a cycle switch.
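The three cleanup scenarios above can be condensed into a decision function (a study sketch of the rule, not the RDF daemon's implementation):

```python
# Study sketch of the MSC cleanup rule (not the actual RDF daemon logic).

def msc_cleanup(receive_states: list) -> str:
    """Decide what to do with the receive cycles of all MSC sessions.

    Each entry is one session's receive-cycle state:
    'complete', 'incomplete', or 'promoted' (already promoted to apply).
    """
    if any(s == "promoted" for s in receive_states):
        # Scenario 3: a cycle switch was in progress, so every
        # transmit/apply set was empty - the receive data is the
        # newest consistent image. Commit all of it.
        return "commit all receive cycles"
    if all(s == "complete" for s in receive_states):
        # Scenario 1: every receive cycle is whole, so promote and commit.
        return "promote receive cycles to apply and commit"
    # Scenario 2: mixed complete/incomplete - only the apply cycles
    # are guaranteed consistent.
    return "discard receive cycles, commit apply cycles"

print(msc_cleanup(["complete", "complete"]))
print(msc_cleanup(["complete", "incomplete"]))
print(msc_cleanup(["promoted", "complete"]))
```

The invariant behind all three branches is the same: whatever is committed must be a dependent-write-consistent image across every session in the group.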
##MSC cleanup
Cleanup is performed automatically by the RDF daemon if the link to the R2 side is available.
If the link to the R2 is not available, invoking any SRDF command such as symrdf failover or split from the R2 side performs the automatic cleanup.

##SRDF/A consistency exempt feature: motivation - adding and removing devices from an active SRDF/A group frequently.
> Without it, all devices currently in the SRDF/A session must be suspended in order to remove a device.
> If there are writes to the devices in the session during the suspension, those writes become invalid tracks.
> After adding/removing devices the group can be resumed, making the session active again.
> During resynchronization the status is SyncInProg until all invalid tracks are cleared and cycle switches occur.
> If a disaster occurs during this time, the R2 DR copy is inconsistent, so a DR exposure exists.
##SRDF/A Consistency Exempt feature: a feature that allows devices to be exempted from the dependent-write consistency calculations.
Requires Enginuity 5773.150 and Solutions Enabler 7.0.
The consistency exempt attribute is maintained on an SRDF mirror once set by the user, although it is cleared in the following situations:
1. Deleting SRDF pairs.
2. Moving SRDF pairs.
3. Resuming SRDF pairs - the attribute is cleared once synchronization completes, there are no invalid tracks left, and two cycle switches have occurred.
While the attribute is set, the R2 reports the R1 as not available. The attribute cannot be removed by the user - there is no CLI command to remove it; it is cleared only by Enginuity. With this feature, devices can be added without suspending the link: the consistency algorithm is not applied to a newly added device while the consistency exempt attribute is set, and the attribute is cleared only once the device is synchronized and two cycle switches have occurred.
##Moving a device clears the Consistency Exempt indicator from the SRDF mirror in the group; use the -cons_exempt flag with the movepair operation. The consistency exempt indicator is then set again when the device is moved into the new SRDF group.
##SRDF operations allowed with Consistency Exempt: operations such as establish, resume, and suspend can be performed on a subset of devices; split and failover cannot be performed on a subset of devices.
-cons_exempt can be used with a device file, device group, or consistency group.
symrdf createpair -cons_exempt :- creates both pairs as consistency exempt.
symrdf movepair -cons_exempt :- creates both pairs in the target group as consistency exempt.
symrdf suspend -cons_exempt :- enables consistency exempt on the current SRDF pair.
##Adding devices to an active SRDF/A session:-
1. Create the new device pairs in a temporary SRDF group.
2. Synchronize them with the -establish option.
3. Suspend the pairs.
4. Move the pairs from the temporary SRDF group to the active group (-cons_exempt).
5. Resume the pairs, then wait for the consistency-exempt SRDF pairs to become consistent.
##Removing devices from an active SRDF/A session: requires Solutions Enabler 7.0 and Enginuity 5874.
1. Suspend the relevant device pair(s) in the current SRDF/A session. This requires the -cons_exempt flag; if consistency is enabled for the SRDF group, the -force option may also be required for the suspend to succeed.
2. Verify the devices are suspended and the consistency exempt attribute is set on them.
3. Move the pairs to a different RDF group. movepair also needs -cons_exempt, but only if the devices are being moved into an SRDF/A group.
##Query the existing SRDF group:-
> symrdf query -rdfa
In the MDACE flags, E stands for consistency exempt: X = enabled, . = disabled.
## Create New Pair
symrdf addgrp -label temp -sid 12 -remote_sid 20 -dir 09F,10F -remote_dir 09F,10F -rdfg 11 -remote_rdfg 11
symrdf createpair -sid 20 -type r1 -establish -f pairs.txt
symrdf -sid 20 -rdfg 11 query -f pair2.txt
Now move the device pair:
symrdf -sid 20 -rdfg 11 -f pair.txt suspend -nop
symrdf movepair -rdfg 11 -new_rdfg 10 -f pair.txt -cons_exempt
##Query the SRDF/A pair.
symrdf query -rdfa
##Verify consistency:-
symrdf -sid 20 -rdfg 10 resume -f pair.txt -nop
symrdf query -rdfa

##Open Replicator
Open Replicator copies a point-in-time (PIT) image of local Symmetrix volumes and transfers it from one storage array to another; only the PIT is transferred. Open Replicator also offers live and incremental migrations, and during a migration you do not have to wait for the data to finish copying.
- Uses the SAN/WAN to make copies.
- Full/incremental copies.
- No server/LAN impact.
Simple push/pull/live/BCV.
##Hot push
> At creation, all control tracks are marked as protected.
> A protected track must be moved before a read or write to that track is allowed (the same concept is used in Snap/VP Snap/Clone).
> The -precopy option with the create and recreate commands initiates a data copy immediately in the background, before the session is activated.
> All tracks are marked as protected once the Open Replicator session is created.
> If background copy was specified at creation, the copy starts in the background; if it wasn't, a track is copied when the host accesses it.
> During a hot push the control device is available for reads and writes, and a track is copied to the remote on the first write to it; there is a delay for that track at that point, after which there is no further copy overhead.
> While Open Replicator transfers data, the data on the remote device should not be altered until the transfer has finished; most operating systems leave it open for reads.
> Every FA port on the DMX through which I/O is received for the control device must be able to reach the remote device, so the local FA ports for the control device must be mapped/zoned to the remote device.

##Hot push (2)
> Up to 15 differential sessions can be created and activated one after another, with one session active at a time.
> A total of 1024 sessions are allowed per Symmetrix.
> Upon link failure between control and remote, the session fails.
symrcopy -file <filename> -push -hot create
symrcopy -file <filename> activate
symrcopy -file <filename> terminate
File format:
Hot:
symdev=<symmetrix id>:<dev>
e.g. symdev=000196232020:2C3
Cold:
WWN of the remote device, e.g.
wwn=<remote_lun_wwn>

> A device is limited to a total of 16 SDDF sessions.
> Created, recreated, and CopyInProg sessions use an SDDF session for the protection bitmap.
> A differential session uses an additional SDDF session to track changes since activate.
> The 15-session limit comes from 15 differential sessions with one currently active (the protection bitmap is the 16th device SDDF session).
> By default 512 total sessions are allowed, which can be changed to 1024 by editing /var/emc/config/options and setting SYMAPI_RCOPY_SESSION_LIMIT 1024 (applies to Enginuity 5671 and above).
> If the links fail and communication is lost between the two sites, the session fails, because you do not want to hold up production on the local host.
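The session-limit change mentioned above is a one-line edit to the Solutions Enabler options file (path and parameter exactly as stated in these notes); a sketch:

```shell
# /var/emc/config/options  (Solutions Enabler options file)
# Raise the Open Replicator session limit from the default of 512:
SYMAPI_RCOPY_SESSION_LIMIT 1024
```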
##Cold Push:-
> Always remember to use the -copy option; otherwise the session will sit there forever, because the device is cold and the production volume is not being accessed.
> Usually a BCV is used as the cold device in a push session; this is not an absolute requirement, as long as the control volume is WD or NR to the host.
> Since the control volume is WD/NR there is no COFW activity, so no SDDF session is required for that; all 16 SDDF sessions can be used.
> Created, recreated, and CopyInProg sessions use an SDDF session for the protection bitmap.
> Differential sessions use an additional session since activate.
> The 15-session limit is reached with 15 differential sessions with one currently active; the protection bitmap is the 16th device SDDF session.
##Cold Push (2):-
-> Not all FA ports (mapped to the control volume) are required to have access to the remote volume.
-> Upon loss of links between the remote/control Symmetrix, the session stalls; the Symmetrix keeps trying until the session is recovered.
-> CLI examples:
symrcopy -f <filename> -push -cold create
symrcopy -f <filename> activate
symrcopy -f <filename> terminate
CLI file format:
COLD control
symdev=<symmetrixid>:<symdev>
remote
wwn=<remote_lun_wwn>

##Incremental Push
symrcopy create -differential
symrcopy activate
symrcopy verify -copied
symrcopy recreate
symrcopy activate
Initially the session has to be created as differential.
##Symmetrix Differential Data Facility:-
Each Symmetrix logical volume can support up to 16 SDDF sessions. An SDDF session comprises a bitmap that flips a bit for every track that has changed since the session was initiated.
SDDF sessions are used to monitor changes in:
- Clones
- Snaps
- BCVs
- Change Tracker
- Open Replicator
##Incremental Push Details
> Upon creation of the session, two bitmaps are set up:
Protection bitmap :- 1111111111111111111111111
SDDF bitmap :- 0000000000000000000000000
After copy:
Protection bitmap :- 0000000000000000000000000
SDDF bitmap :- 0101011100000000000000000
The protection bitmap represents which tracks still have to be copied to the remote volume; as the copy of each track completes, its bit is cleared, and once all the data is copied the protection bitmap is completely clear. The SDDF bitmap represents the tracks that have been written since activate: a bit is set to 1 whenever anything is written to that track.
##Incremental Push Details (2)
When the recreate/activate commands are issued, the SDDF bits are copied into the protection bitmap: only the tracks that changed are set to 1 (and so are copied again), and the SDDF bitmap is reset to 0.
Protection bitmap :- 0000100110110100000111000
SDDF bitmap :- 0000000000000000000000000
After copy:
Protection bitmap :- 0000000000000000000000000
SDDF bitmap :- 0101011100000000000000000
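The bitmap lifecycle above can be sketched in a few lines of Python (an illustrative 8-track toy model, not EMC code):

```python
# Toy model of the ORS protection / SDDF bitmaps for an 8-track device.
def create_session(tracks=8):
    # On create: every track still needs copying; nothing written yet.
    return {"protection": [1] * tracks, "sddf": [0] * tracks}

def host_write(s, track):
    s["sddf"][track] = 1  # SDDF flips a bit for each changed track

def background_copy(s):
    # Copying a track clears its protection bit; here all tracks finish.
    s["protection"] = [0] * len(s["protection"])

def recreate(s):
    # On recreate/activate the SDDF bits become the new protection
    # bitmap (only changed tracks are re-copied); SDDF resets to 0.
    s["protection"] = list(s["sddf"])
    s["sddf"] = [0] * len(s["protection"])

s = create_session()
background_copy(s)          # initial full copy completes
host_write(s, 1)            # two tracks change after activate
host_write(s, 3)
recreate(s)
print(s["protection"])      # [0, 1, 0, 1, 0, 0, 0, 0]
```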
##Incremental restore after an incremental push.
For incremental push operations only, data can be restored to the control device by pulling back only the changed tracks. The session must have been created with the -differential option and must be in the Copied state. Hot or cold push sessions can be restored.
For example, if you copied all the data from the control to the remote with -differential, and changes have since been made, the changed tracks can be restored to the control device using the symrcopy restore command. With a hot push, the control device is NR until the copy completes; once the copy is done, the control device is accessible again. Similarly, if a cold push was done, the data can be restored back to the control volume, and the device remains NR after the restoration.
##Hot pull:-
During a hot pull, the control device is the destination and the remote device is the source.
The control device is host accessible and can be larger than the remote device.
The remote device should not be accessed by any other host during the transfer.
-> At activation, all control tracks are marked protected.
-> If -copy is specified, a background copy is initiated between remote and control, and the copy continues until complete.
-> A read or write to the control device causes the track to be pulled over before access is permitted (copy on access).
-> All FAs mapped to the control device must have access to the remote device.
A hot pull is a way to perform a migration with minimum application downtime. However, there is a risk of data loss if the hot pull session is terminated prematurely, before the data transfer completes, because writes to the control volume are not written back to the remote side.
If the target (control) device is larger than the source, -force_copy should be used in the command.
Hot push - COFW behavior; hot pull - COFA behavior: a read or write to the control volume triggers a copy from the remote volume.

##Hot pull (2):-
Upon loss of the link, application I/O is impacted; however, the session persists, and once the link comes back the copy proceeds.
CLI examples:
symrcopy -file <filename> -pull -hot create
symrcopy -file <filename> activate
symrcopy -file <filename> terminate
CLI format:
control
symdev=<vmaxid>:<symdev>
remote
wwn=<remote_lun_wwn>

##Hot pull protection during donor update.
Using the -donor_update option during a hot pull protects new data written to the control device: both during and after the data transfer, all new writes are also sent to the remote device.
symrcopy -file <filename> -donor_update -hot -pull create -copy
##Cold Pull:-
The control device is Not Ready to the host and can be larger than the remote device.
The remote device should be inaccessible during the transfer.
At activation all tracks are marked as protected.
-copy specified: a background copy is initiated between remote and control, and the track copy continues until complete.
All remote volumes must be accessible to the local FAs for the duration of the copy, but not every FA with access to the control device must have access to the remote devices.
Upon loss of the link between control and remote, the session stalls, and the copy proceeds once the link comes back.
symrcopy -file <filename> -cold -pull create
symrcopy -file <filename> activate
symrcopy -file <filename> terminate
CLI format:
control
symdev=<symmetrixid>:<symdev>
remote
wwn=<remote_lun_wwn>

##Throttling of Open Replicator data transfer
If not restricted, Open Replicator can consume all available bandwidth, so it is important to set the pace and ceiling.
Pace :- specifies the delay (in ms) between tracks of transmitted data.
Ceiling :- restricts the maximum bandwidth allocated to an FA port for symrcopy transfers.
Throttling is important for hot transfers, where the FA port is shared between host I/O and Open Replicator I/O. For cold sessions a separate FA port can be assigned for the transfer.
##Pace of an Open Replicator session:-
pace = 0 is the fastest; pace values 1-9 are progressively slower (9 is the slowest). The default pace is 5.
symrcopy -file <filename> set pace 0
##Ceiling: setting the max. bandwidth
Set as a percentage (0-100) of the maximum bandwidth of a Symmetrix director port.
Syntax:
symrcopy set ceiling 80 -sid 12 -dir 7F -p 1
Summary of actions:
HOT push :- COFW; control is R/W, remote NHDC; control:remote ratio 1:1; all FAs need access; session fails when the link fails.
Cold push :- all tracks copied; control is NR, remote NHDC; control:remote ratio 1:16; even 1 FA with access will work; session stalls when the link fails.
Hot pull :- COFA; control is R/W, remote NHDC; control:remote ratio 1:1; all FAs need access; session stalls when the link fails.
Cold pull :- all tracks copied; control is NR, remote NHDC; control:remote ratio 1:1; even 1 FA with access will work; session stalls when the link fails.

The session fails only in the hot push case, because you do not want production I/O to the control volume held up waiting for the remote.
##Open Replicator Symmetrix operational details.
Zoning: hot push/pull
Every FA that has access to the control device must have access to the FA of the corresponding remote device, so that all reads/writes can be pushed to/pulled from the remote volume.
##Zone and mask two Symmetrix arrays:-
#On the controlling Symmetrix:
Identify the FAs with access to the control devices.
Get the WWNs of those FA ports.
#On the remote Symmetrix:
Identify the FA port numbers.
Mask the WWNs of the controlling Symmetrix FA ports to the remote FA ports.
Create a zone between the controlling and remote FAs of your choice.
symmask - if DMX; symaccess - if Symmetrix VMAX array.
* Masking has to be done at the remote array.
##symaccess list view -sid 12
##symaccess -sid 33 show ors_20_ig -type initiator
The masking has to be done on the remote storage. The initiator group contains the WWNs of the controlling Symmetrix array's FAs: the control FAs act as initiators and the remote array acts as the target.
##symaccess for the port group on the remote array.
symaccess -sid 33 show esx163_pg -type port
The port group contains the port numbers of the remote array.
##symaccess command viewing the storage group:-
symaccess -sid 33 show ors_33_sg -type storage
Contains the volumes.
##SYMCLI to perform ORS operations
SYMCLI_RCOPY_COPY_MODE environment variable:
COPY_DIFF :- sets background copy mode; when the session is activated it transitions into CopyInProg. Sets the default mode for create as differential, allowing for a subsequent recreate. Do not use with offline or online pull.
NOCOPY_DIFF :- no background copy; the copy occurs on access (CopyOnAccess) and the session is differential. Do not use with offline or online pull.
COPY_NODIFF :- sets background copy mode; when the session is activated it transitions into CopyInProg. The session is not differential at create time, so a recreate fails.
NOCOPY_NODIFF :- does not set background copy mode (copy on access); the session is not differential, so a recreate fails.
PRECOPY_DIFF :- sets precopy mode; the session is in the Precopy state and can be recreated. Use only for hot push.
PRECOPY_NODIFF :- sets precopy mode; the session is in the Precopy state but is not differential at create time, so a recreate fails. Use only for hot push.
##SYMAPI_RCOPY_GET_MODIFIED_TRACKS option
An options file variable that affects all sessions.
##Devices to use for the ORS transfer. sid 20 is the control array and sid 33 is the remote array; these devices are also added to the masking at the remote array.
symdev list -sid 20 pd
##symsan command
Lists the port and LUN WWNs seen from a particular director and port, so you can validate the zoning between the port and the intended OR target. Does not require an OR session to be created.
Examples:
##Display remote port WWNs.
symsan -sanports -sid <symid> -dir all
##Display the LUNs behind a remote port WWN.
symsan -sanluns -wwn <san_port_wwn> -sid <symid> -dir all
Also confirms the SAN connectivity.
##Command to view remote director port WWNs
symsan list -sid 20 -sanports -dir 7F -p 1 -detail
##Command to see the remote LUNs from the control Symmetrix
symsan list -sid 20 -sanluns -dir 7F -p 1 -wwn <san_port_wwn>
Flags: ICRTHS
I :- X = record is incomplete.
C :- X = record is a controller, . = record is not a controller.
R :- X = record is reserved, . = record is not reserved.
T :- device type: A = AS400, F = FBA, C = CKD.
H :- X = thin device.
S :- X = Symmetrix device.

##Device file and environment variable.
##symcfg list
more rsymm.txt
control           remote
symdev=20:2c3     symdev=33:2c3
symdev=20:2c4     symdev=33:2c4
set SYMCLI_RCOPY_COPY_MODE=COPY_DIFF
##Hot push creation:-
symrcopy create -file rsymm.txt -push -hot -nop
symrcopy -f rsymm.txt query
Flags: CDSHUTZ
C :- X = copy.
D :- X = differential.
S :- X = session is pushing data to the remote device, . = session is pulling data from the remote device.
H :- X = hot copy session, . = cold copy session.
U :- X = donor update enabled, . = donor update not enabled.
T :- C = continuous session, M = migration session, R = recovery point session, S = standard ORS session.
Z :- front-end zero detection enabled.
* :- the session can be reactivated.
##Activate hot push
symrcopy -f rsymm.txt activate -consistent -nop
symrcopy -f rsymm.txt query
>> The session changes to CopyInProg.
symrcopy -f rsymm.txt verify

##Recreate command
symrcopy -f rsymm.txt recreate -nop
symrcopy -f rsymm.txt query
##Restore from a recreated hot push session
symrcopy -f rsymm.txt restore -nop
symrcopy -f rsymm.txt query
In order to restore, the session must have been created as differential (incremental restore).
##Terminate session.
symrcopy -f rsymm.txt terminate -nop
symrcopy -f rsymm.txt query
##Hot pull creation
set SYMCLI_RCOPY_COPY_MODE=COPY_NODIFF
symrcopy create -f rsymm.txt -hot -pull -donor_update -nop
symrcopy -f rsymm.txt query
##Hot pull activation
symrcopy -f rsymm.txt activate -consistent -nop
symrcopy -f rsymm.txt query
symrcopy -f rsymm.txt set donor_update off
symrcopy -f rsymm.txt terminate

##Preparing for Cold Push:-
set SYMCLI_RCOPY_COPY_MODE=COPY_NODIFF
symdev list -sid 20 -range 2c13:2c14
symdev not_ready -range 2c13:2c14 -sid 20 -nop
symrcopy -f rsymm.txt create -cold -push -pace 0 -nop
symrcopy -f rsymm.txt activate -nop
symrcopy -f rsymm.txt query

##Create a cold pull and activate the session
symrcopy -f rsymm.txt -cold -pull create -nop
symrcopy -f rsymm.txt activate -nop
symrcopy -f rsymm.txt query

##ORS with a VDEV as the control device (cold push only)
Prior to Enginuity 5875, cold push operations were only allowed from full-volume copies such as TF/Clone or TF/Mirror BCVs, or by making the STD volume Not Ready. From Enginuity 5875, a TimeFinder/Snap VDEV can be used as the control device, which makes the operation space efficient.
##ORS steps for using a VDEV (cold push only)
1. Create the TimeFinder/Snap session.
2. Create the ORS session.
3. Activate the snap session (with the -not_ready flag); this creates a point-in-time copy.
4. Activate the ORS session.
5. Wait for the ORS session to complete.
##Repeating the ORS operation using a VDEV:-
1. Recreate the snap session.
2. Recreate the ORS session.
3. Activate the snap session.
4. Activate the ORS session.
5. Terminate the ORS session.
6. Terminate the snap session.
##Considerations for keeping a VDEV as the control device:-
The VDEV must remain Not Ready to the user while the ORS session is present.
The ORS session can be created as a cold differential or non-differential push.
There can be only 1 ORS session for a given VDEV control device.
ORS restore to an ORS TimeFinder/Snap session is not supported.
ORS precopy is not supported.
##With a VDEV for cold push, there is a single cold push ORS session.
##Zone and mask a CLARiiON.
On the Symmetrix control box:
Identify the FA ports with access to the control devices.
Identify the WWNs of the FAs.
Note the size in blocks of the control devices.
On the CLARiiON remote box:
Discover the CLARiiON over the network.
Verify the size of the CLARiiON devices.
Register the control FAs on the CLARiiON as a host.
Create masking on the CLARiiON if required.
Also create a storage group for Open Replicator.
Perform the zoning between the VMAX FAs and the CLARiiON FAs using any tool.
##Device file for an undiscovered CLARiiON.
control
symdev=<vmaxid>:<symdev>
remote
wwn=<remote_lun_wwn>
For a discovered CLARiiON:
control
symdev=<vmaxid>:<symdev>
remote
clardev=<clariion_device_id>
If you have not discovered the CLARiiON using SYMCLI, the devices can only be identified by WWN; it is best to discover the CLARiiON devices, since using raw WWNs may cause errors.
symcfg list -authorization
symcfg add authorization -host <ip address> -username <username> -password <password>
symcfg add authorization -host <ip address> -username <username> -password <password>
symcfg discover -clariion -file <assisted_discover_file>


##more clar.txt
10.1.1.1 10.1.1.2
symcfg list -clariion
##Non-Symmetrix array considerations:-
Supported list of arrays:
> available in the E-Lab Navigator.
Obtaining the WWN of the remote device:
- the inq utility distributed by EMC;
- array vendor tools to determine the WWNs of other vendors' LUNs.
(Diagram: CLARiiON storage processors SPA and SPB.)

##Sizing remote array volumes:-
Transferring data between a DMX and a non-Symmetrix array is no different from VMAX to VMAX; the E-Lab Navigator has the details of the arrays qualified for data transfer. The principal challenge is to find the WWNs of the devices on the remote array; once the WWNs are known, the Open Replicator session can be created over the SAN.
Symmetrix arrays measure the size of their LUNs in cylinders, while other arrays use byte blocks, so care must be taken if bidirectional data transfer is planned.
>> When the remote device is smaller than the control device, a push cannot be performed without specifying the -force_copy flag; a pull works, because the extra tracks on the control device are simply left untouched.
>> When the remote device is bigger than the control device, a pull cannot be performed without -force_copy; a push works, because the extra tracks on the remote device are simply left untouched.
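The two sizing rules above reduce to one check: -force_copy is needed whenever the destination of the copy is smaller than the source. A sketch (illustrative Python; the function name is my own):

```python
def needs_force_copy(direction, control_tracks, remote_tracks):
    """Push: control -> remote (remote is the destination).
    Pull: remote -> control (control is the destination).
    -force_copy is required when the destination is smaller."""
    if direction == "push":
        src, dst = control_tracks, remote_tracks
    elif direction == "pull":
        src, dst = remote_tracks, control_tracks
    else:
        raise ValueError("direction must be 'push' or 'pull'")
    return dst < src

print(needs_force_copy("push", 100, 80))  # remote smaller: True
print(needs_force_copy("pull", 100, 80))  # control larger: False
```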
##Open Replicator and thin devices.
Thin devices can be used as control or remote devices. Thin-to-standard replication can also be performed using Open Replicator.
##Federated Live Migration
A non-disruptive migration approach that combines array-based migration of data using ORS with host-based I/O redirection using PowerPath. A set of coordinated commands through EMC Symmetrix Management Console coordinates the array migration and the host application redirection from one central point, making the migration truly non-disruptive.
Additionally, FLM is flexible: it can migrate thin-to-thin, thin-to-thick, and thick-to-thin, and the host-level redirection using PowerPath helps eliminate time-consuming remediation.
Federated Live Migration requires Enginuity 5671/5773/5875 and PowerPath 4.5.

##FLM migration considerations.
FLM has unique requirements that must be met, and unique procedures that must be checked; these vary from operating system to operating system.
##Underlying technology:-
FLM has the new VMAX device assume the identity and geometry of the old device.
FLM terminology (FLM uses mainly hot pull):
Control device = FLM target.
Remote device = FLM source (donor).
Host access mode: new in Enginuity 5875; Active or Passive.
Device external identity: the FLM Symmetrix array presents the host-visible Symmetrix logical volume with a spoofed identity made up of the WWN, front-end director, and device geometry. The spoofed identity can be recognized because the director port numbers are offset by 2, e.g. 7E:2 instead of 7E:0.
##Migration considerations:-
Old zone - connectivity between the DMX and the application host.
New zone 1 - connectivity between the VMAX and the application host.
New zone 2 - connectivity between the VMAX and the old DMX.
Solutions Enabler 7.2 on the control host, Enginuity 5875 with ACLX on the new VMAX, and an Enginuity ePack on the old DMX are required.
The donor DMX device should not be part of any local or remote replication.
Max. 32 pairs at a time.
SAN view equivalent to symsan.
