Abstract
As the need to store, protect, and process more information
rapidly increases, businesses experience a growing demand for
intelligent storage systems while at the same time being required
to keep costs to a minimum. The Symmetrix VMAX 10K system is a
new member of the trusted Symmetrix product family, designed to
be simple, cost-effective, and reliable in supporting anywhere
from one to many databases and applications. EMC Symmetrix
VMAX 10K satisfies these needs by combining Symmetrix
Enginuity features, 100% Virtually Provisioned storage for speed
and ease of deployment, FAST VP for improved performance,
and a combination of TimeFinder and the native RecoverPoint
splitter for robust replication. This white paper describes how
EMC Symmetrix VMAX 10K can be deployed to support Oracle
databases and applications.
September 2012
Table of Contents
Executive summary
    Audience
Introduction
Products and features overview
    Symmetrix VMAX 10K series with Enginuity
    Unisphere for VMAX 10K
    Symmetrix VMAX 10K Auto-provisioning Groups
    Symmetrix VMAX 10K Virtual Provisioning
        Automated pool rebalancing
    Symmetrix VMAX 10K FAST VP
        Evolution of storage tiering
        Symmetrix FAST VP
        FAST VP and Virtual Provisioning
        FAST VP elements
        FAST VP Performance Time Window considerations
        FAST VP Move Time Window considerations
        FAST VP architecture
    Symmetrix VMAX 10K TimeFinder product family
        TimeFinder/Clone full clone and clone with no-copy option
        TimeFinder Consistent Split
        General best practices for ASM when using TimeFinder based local replications
    EMC RecoverPoint/EX
        RecoverPoint components
    Combining TimeFinder and RecoverPoint for repurposing and recovery
Thin devices and ASM disk group planning
Configuration 1 details
    Review of configuration 1
Configuration 2 details
    Review of configuration 2
Conclusion
Appendixes
    Appendix A Example of storage provisioning steps for configuration 1
        Detailed configuration steps
    Appendix B TimeFinder/Clone configuration steps
Executive summary
The EMC Symmetrix VMAX 10K with Enginuity delivers a multi-controller, scale-out
architecture for enterprise reliability, availability, and serviceability at an affordable
price. Built on the strategy of simple, intelligent, modular storage, it incorporates a
scalable Virtual Matrix interconnect that connects all shared resources across all
VMAX 10K engines, allowing the storage array to grow seamlessly from an entry-level
configuration with one engine up to four engines. Each VMAX 10K engine contains
two directors and redundant interfaces to the Virtual Matrix™ interconnect for
increased performance and availability.
EMC Symmetrix VMAX 10K delivers enhanced capability and flexibility for deploying
Oracle databases throughout the entire range of business applications, from
mission-critical applications to test and development. To support this wide range of
performance and reliability requirements at minimum cost, Symmetrix VMAX 10K can
start with as few as 24 drives and grow to 240 drives with a single engine (a single
system bay), and can scale up to four engines supporting 960 drives and 512 GB of
cache (four system bays and two drive bays). Symmetrix VMAX 10K arrays support
multiple drive technologies, including Enterprise Flash Drives (EFDs), Fibre Channel
(FC) drives, and SATA drives. Symmetrix VMAX 10K with FAST VP technology provides
automatic policy-driven storage tiering, based on the actual application workload.
For ease of deployment and improved performance, Symmetrix VMAX 10K is fully
based on Virtual Provisioning technology. Virtual Provisioning provides ease and
speed of storage management, and a natively wide-striped storage layout for higher
performance. When oversubscription is used, it can greatly improve storage capacity
utilization with a seamless, grow-as-you-go thin provisioning model.
For business continuity and disaster recovery, Symmetrix VMAX 10K offers
TimeFinder/Clone for creating space-efficient local copies of the data for
recoverability and restartability. Symmetrix VMAX 10K also offers a native
RecoverPoint splitter. RecoverPoint provides local and remote replication with
any-point-in-time recovery using CDP, CRR, or CLR RecoverPoint technology.
Audience
This white paper is intended for Oracle database administrators, storage
administrators and architects, customers, and EMC field personnel who want to
understand a Symmetrix VMAX 10K deployment with Oracle databases.
Introduction
This white paper demonstrates how to implement a typical Oracle database using the
new installation and configuration features specific to the VMAX 10K platform.
Because Symmetrix VMAX 10K is 100% Virtually Provisioned, and combines TimeFinder
for point-in-time replicas with a native RecoverPoint splitter for local and
remote data protection, the paper focuses on the storage layout choices that can
best accommodate performance, protection, and availability for Oracle databases.
The VMAX 10K packaging and ease-of-use features streamline the implementation of
Symmetrix systems from order entry through final configuration. A complete VMAX
10K system can be selected with just a few mouse clicks to specify the host
connectivity, the disk types, and the total capacity. Standard systems are
pre-configured in the factory, a new VMAX 10K installation script is executed at the
customer site, and the final application-related configuration is completed with
Unisphere for VMAX (although Solutions Enabler CLIs are available as well).
Unisphere for VMAX can also be used to perform and monitor replication operations
(TimeFinder for VMAX 10K, Open Replicator).
[Figure: an Auto-provisioning Group combining initiator groups (RAC1_HBAs,
RAC2_HBAs), a port group (ports 07E:1 and 10E:1), and a storage group, connected
through the SAN]
When more physical data storage is required to service existing or future thin devices,
for example, when a thin pool is approaching full storage allocations, data devices
can be added to existing thin pools dynamically without causing a system outage.
New thin devices can also be created and bound to an existing thin pool at any time.
When data devices are added to a thin pool they can be in an enabled or disabled
state. In order for the data device to be used for thin extent allocation it needs to be
in the enabled state. For it to be removed from the thin pool, it needs to be in a
disabled state. Symmetrix automatically initiates a drain operation on a disabled data
device without any disruption to the application. Once all the allocated extents are
drained to other data devices, a data device can be removed from the thin pool.
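These operations can be scripted with Solutions Enabler; the following is a minimal sketch, assuming an illustrative array ID (123), pool name, and device ranges (exact symconfigure syntax may vary between Solutions Enabler releases and should be verified against the documentation):

    # Add four pre-configured data devices to the thin pool in the enabled
    # state so they can immediately accept new thin extent allocations
    symconfigure -sid 123 -cmd "add dev 0F00:0F03 to pool FC_Pool type=thin, member_state=ENABLE;" commit -nop

    # Disable a data device; Symmetrix drains its allocated extents to the
    # other enabled data devices without disrupting the application
    symconfigure -sid 123 -cmd "disable dev 0F00 in pool FC_Pool type=thin;" commit -nop

    # Once fully drained, the data device can be removed from the pool
    symconfigure -sid 123 -cmd "remove dev 0F00 from pool FC_Pool type=thin;" commit -nop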
Figure 5 depicts the relationships between thin devices and their associated thin
pools. Thin pool A contains six data devices, and thin pool B contains three data
devices. There are nine thin devices associated with thin pool A and three thin
devices associated with thin pool B. The data extents for the thin devices are
distributed across the various data devices, as shown in the figure.
Automated pool rebalancing
Automated pool rebalancing redistributes allocated extents evenly across all of the
enabled data devices in a thin pool. Because the thin extents are allocated from the
thin pool in round-robin
fashion, the rebalancing mechanism will be used primarily when adding data devices
to increase thin pool capacity. If automated pool rebalancing is not used, existing
data extents will not benefit from the added data devices as they will not be
redistributed.
The balancing algorithm will calculate the minimum, maximum, and mean used
capacity values of the data devices in the thin pool. The Symmetrix will then move
thin device extents from the data devices with the highest used capacity to those with
the lowest until the pool is balanced. Pool rebalancing is a nondisruptive operation
and thin devices (LUNs) can continue to be accessed by the applications during the
rebalance.
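For example, after growing a pool, a rebalance can be started and monitored with Solutions Enabler, as in this minimal sketch (the array ID and pool name are illustrative; verify the syntax for your Solutions Enabler release):

    # Start redistributing allocated extents across all enabled data devices
    symconfigure -sid 123 -cmd "start balancing on pool FC_Pool type=thin;" commit -nop

    # Monitor the pool while the nondisruptive rebalance proceeds
    symcfg -sid 123 show -pool FC_Pool -thin -detail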
Symmetrix VMAX 10K FAST VP
FAST VP elements
Storage tiers are the combination of drive technology and RAID protection
available in the VMAX 10K array. Examples of storage tiers are RAID 5 EFD, RAID 1
FC, RAID 6 SATA, and so on.
A FAST VP policy combines storage groups with storage tiers, and defines the
configured capacities, as a percentage, that a given storage group is allowed to
consume on each of these tiers. For example, a FAST VP policy can define 10
percent of its allocation to be placed on EFD_RAID5, 40 percent on FC15k_RAID1,
and 50 percent on SATA_RAID6, as shown in Figure 7. Note that these allocations
are the maximum allowed. For example, a policy of 100 percent on each of the
storage tiers means that FAST VP has the liberty to place up to 100 percent of the
storage group data on any of the tiers. When combined, the policy must total at
least 100 percent, but may be greater than 100 percent, as shown in Figure 8. In
addition, the FAST VP policy defines the exact time windows for performance
analysis, data movement, data relocation rate, and other related settings.
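As an illustration, the 10/40/50 policy above might be defined and associated with a database storage group roughly as follows. This is a hedged sketch: the policy, tier, and storage group names are illustrative, and the symfast option names are given from memory of the Solutions Enabler family of that era, so they should be checked against the documentation for your release.

    # Create a FAST VP policy capping each associated storage group at
    # 10% EFD, 40% FC, and 50% SATA (these maximums total exactly 100%)
    symfast -sid 123 create -fp -name Oracle_FP \
            -tier_name EFD_RAID5 -max_sg_percent 10
    symfast -sid 123 modify -fp_name Oracle_FP -add \
            -tier_name FC15k_RAID1 -max_sg_percent 40
    symfast -sid 123 modify -fp_name Oracle_FP -add \
            -tier_name SATA_RAID6 -max_sg_percent 50

    # Associate the database storage group with the policy
    symfast -sid 123 associate -sg Oracle_SG -fp_name Oracle_FP -priority 2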
Allocation compliance
The allocation compliance algorithm enforces the upper limits of storage
capacity that can be used in each tier by a given storage group by also issuing
data movement requests to the VLUN VP data movement engine.
Symmetrix VMAX 10K TimeFinder product family
TimeFinder/Clone full clone and clone with no-copy option
When restoring from a TimeFinder/Clone replica, the database can be brought up
immediately, without waiting for the background copy back to the source to
complete (any needed data tracks are prioritized as the background copy proceeds).
This provides a tremendous improvement in RTO.
TimeFinder Consistent Split
With TimeFinder you can use the Enginuity Consistency Assist (ECA) feature to perform
consistent splits between source and target device pairs across multiple, heterogeneous hosts.
Consistent split helps to avoid inconsistencies and restart problems that can occur if you split
database-related devices without first quiescing the database. The difference between a
normal instant split and a consistent split is that when using consistent split on a group of
devices, the database writes are held at the storage level momentarily while the foreground
split occurs, maintaining dependent-write order consistency on the target devices comprising
the group. Since the foreground split completes in just a few seconds, Oracle needs to be in hot
backup mode only for this short time when hot backup is used. Consistent split can
also be used stand-alone to create a restartable replica, as described in the white
paper referenced above.
After performing a consistent split, the TimeFinder target devices are in a state that
is equivalent to the state a database would be in after a power failure, or if all
database instances were aborted simultaneously. This state is well known to Oracle,
and the database recovers from it easily by performing crash recovery the next time
the database instance is started.
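The following sketch shows a consistent clone activation wrapped in hot backup mode for a recoverable copy; the device group name is illustrative, and for a restartable replica the sqlplus steps can be omitted and the -consistent activation used on its own:

    #!/bin/sh
    # Assumes a TimeFinder/Clone session was created earlier, for example:
    #   symclone -g oradb_dg create -copy -differential

    # Place the database in hot backup mode for the few seconds of the split
    sqlplus -s / as sysdba <<EOF
    alter database begin backup;
    EOF

    # Activate the clone with Enginuity Consistency Assist so dependent-write
    # order is preserved across every device in the group
    symclone -g oradb_dg activate -consistent

    sqlplus -s / as sysdba <<EOF
    alter database end backup;
    EOF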
General best practices for ASM when using TimeFinder based local replications
• Use external redundancy (not ASM mirroring), in accordance with EMC's
recommendation to leverage the Symmetrix array RAID protection instead.
• Use separate disk groups for redo, data, and archive logs. For example: +REDO
(redo logs), +DATA (data, control, and temp files), and +FRA (archive logs,
flashback logs). Typically EMC recommends separating logs from data for
performance monitoring and backup offload reasons. Finally, +FRA can typically use
a lower-cost storage tier such as SATA drives, and therefore warrants its own disk
group.
• Starting with Oracle 11gR2, Oracle Cluster Ready Services (CRS) and ASM have
been merged; therefore, when CRS is installed the first ASM disk group is created.
In that case it is recommended to create a small ASM disk group exclusively for CRS
(no database objects should be stored in it), for example +GRID, and provide it
with 5 LUNs. That allows the ASM disk group to use High Redundancy ASM protection,
which is the only way to have Oracle clusterware create multiple voting disks
(quorum devices). As described earlier, all other ASM disk groups should use
External Redundancy, making use of the storage RAID protection (see the sketch
after this list).
• Whenever TimeFinder is used to clone an ASM disk group, consistency technology
should be used (the -consistent flag) even if Hot Backup mode is used at the
database level, because Hot Backup mode does not protect ASM metadata writes.
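Under these guidelines, the disk group layout might be created as in the sketch below (disk names and counts are illustrative): +GRID uses high redundancy so the clusterware creates multiple voting disks, while all database disk groups rely on the array's RAID protection.

    sqlplus -s / as sysasm <<EOF
    -- Small high-redundancy disk group exclusively for CRS (5 LUNs)
    CREATE DISKGROUP GRID HIGH REDUNDANCY
      DISK '/dev/emcpowera1', '/dev/emcpowerb1', '/dev/emcpowerc1',
           '/dev/emcpowerd1', '/dev/emcpowere1';

    -- Database disk groups use external redundancy (Symmetrix RAID)
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/emcpowerf1', '/dev/emcpowerg1', '/dev/emcpowerh1';
    CREATE DISKGROUP REDO EXTERNAL REDUNDANCY
      DISK '/dev/emcpoweri1', '/dev/emcpowerj1';
    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK '/dev/emcpowerk1', '/dev/emcpowerl1';
    EOF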
EMC RecoverPoint/EX
EMC RecoverPoint provides DVR-like point-in-time recovery with three topologies: the
first is local continuous data protection (CDP), the second is synchronous or
asynchronous continuous remote replication (CRR), and the third is a combination of
both (CLR). RecoverPoint/EX is the offering that simplifies continuous data
protection and replication by using the VMAX 10K with its Enginuity-based write
splitter. RecoverPoint/EX is an appliance-based, out-of-band data protection
solution designed to ensure the integrity of production data at local and/or remote
sites. It enables customers to centralize and simplify their data protection
management, and allows for the recovery of data to nearly any point in time.
RecoverPoint provides continuous replication of every write between a pair of local volumes
residing on one or more arrays. RecoverPoint also provides remote replication between pairs of
volumes residing in two different sites. For local replication and remote synchronous
replication, every write is collected, written to the local and remote journals, and
then distributed to the target volumes. For remote asynchronous replication, multiple
writes are collected at the local site, deduplicated, compressed, and sent
periodically across to the remote site, where they are uncompressed, written to the
journals, and then distributed to the target volumes. Figure 10 depicts the
RecoverPoint configuration for local and remote replication.
Combining TimeFinder and RecoverPoint for repurposing and recovery
A replica volume can also be associated with a TimeFinder/Clone operation, allowing
similar use cases from the replica volume as well. Creating periodic
TimeFinder/Clones of the replica volume, and refreshing the replica volume from the
production data, extends the data protection window beyond what the RecoverPoint
journals alone can support, by reusing the journal volumes for more recent changes.
Note that when TimeFinder/Clone is used to restore the production data, RecoverPoint
consistency group operations should be disabled on those volumes, as such a restore
would invalidate the RecoverPoint-based replica. Once the restore completes, the
consistency groups can be re-enabled, which results in a full sweep to refresh the
RecoverPoint replica.
Consider, for example, 32 drives placed in one pool, where the pool has eight RAID 5
devices of four drives each. If one of the drives in this pool fails, you are not
losing one drive from a pool of 32 drives; rather, you are losing one drive from one
of the eight RAID-protected data devices, and that RAID group can continue to
service read and write requests, in degraded mode, without data loss. Also, as with
any RAID group, with a failed drive Enginuity
will immediately invoke a hot sparing operation to restore the RAID group to its
normal state. While this RAID group is rebuilding, any of the other RAID groups in the
thin pool can have a drive failure and there is still no loss of data. In this example,
with eight RAID groups in the pool there can be one failed drive in each RAID group in
the pool without data loss. In this manner data stored in the thin pool is no more
vulnerable to data loss than any other data stored on similarly configured RAID
devices. Therefore RAID 1 or RAID 5 protection for thin pools is acceptable for
most applications, and RAID 6 is only required in situations where additional
parity protection is warranted.
The choice of drive technology and RAID protection is the first factor in determining
the number of thin pools. The other factor has to do with the business owners. When
applications share thin pools they are bound to the same set of data devices and
spindles, and they share the same overall thin pool capacity and performance. If
business owners require their own control over thin pool management they will likely
need a separate set of thin pools based on their needs. In general, however, for ease
of manageability it is best to keep the overall number of thin pools low, and allow
them to be spread widely across many drives for best performance.
Thin device sizing should also be considered. Creating thin devices larger than the
current needs of the database optimizes storage capacity utilization and reduces the
impact on the database and application as they continue to grow. Note, however, that
the larger the device, the more metadata is associated with it and tracked in the
Symmetrix cache. Therefore the sizing should be reasonable and realistic to limit
unnecessary cache overhead.
Thin devices and ASM disk group planning
Thin devices are presented to the host as SCSI LUNs. Oracle recommends creating at
least a single partition on each LUN to identify the device as being used. On
x86-based platforms it is important to align the LUN partition, for example by using
fdisk or parted on Linux. With fdisk, after the new partition is created, type x to
enter Expert mode, then use the b option to move the beginning of the partition.
Either a 128-block (64 KB) offset or a 2,048-block (1 MB) offset is a good choice
and aligns with the Symmetrix 64 KB cache track size. After assigning Oracle
permissions to the partition, it can become an ASM disk group member or be used in
other ways for the Oracle database.
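A scriptable equivalent using parted (the PowerPath pseudo-device name and the owner/group are illustrative and should be adjusted to the local Grid installation) aligns the partition start at 2,048 blocks (1 MB), matching the Symmetrix 64 KB track boundary:

    #!/bin/sh
    DEV=/dev/emcpowera    # illustrative PowerPath pseudo-device

    # Label the LUN and create one partition starting at sector 2048 (1 MB)
    parted -s $DEV mklabel msdos
    parted -s $DEV mkpart primary 2048s 100%

    # Assign Oracle permissions so the partition can join an ASM disk group
    chown grid:asmadmin ${DEV}1
    chmod 660 ${DEV}1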
When using Oracle Automatic Storage Management (ASM), Oracle recommends using a
minimal number of ASM disk groups for ease of management. Indeed, when multiple
smaller databases share the same performance and availability requirements, they can
also share ASM disk groups; however, larger, more critical databases may require
their own ASM disk groups for better control and isolation. EMC best practice for
mission-critical Oracle databases is to create a few ASM disk groups based on the
following guidelines:
+GRID: Starting with database 11gR2 Oracle has merged Cluster Ready Services
(CRS) and ASM and they are installed together as part of Grid installation.
Therefore when the clusterware is installed the first ASM disk group is also
created to host the quorum and cluster configuration devices. Since these devices
contain local environment information such as hostnames and subnet masks,
there is no reason to replicate them. EMC best practice starting with Oracle
Database 11.2 is to only create a very small disk group during Grid installation for
the sake of CRS devices and not place any database components in it. When other
ASM disk groups containing database data are replicated with storage technology
they can simply be mounted to a different +GRID disk group at the target host or
site, already with Oracle CRS installed with all the local information relevant to
that host and site. Note that while external redundancy (RAID protection is
handled by the storage array) is recommended for all other ASM disk groups, EMC
recommends normal or high redundancy only for the +GRID disk group. The reason
is that Oracle automates the number of quorum devices based on redundancy
level and it will allow the creation of more quorum devices. Since the capacity
requirements of the +GRID ASM disk group are tiny, very small devices can be
provisioned (Normal redundancy implies 3 failure groups, quorum devices and
LUNs, High redundancy implies five failure groups, quorums devices, and LUNs).
+DATA, +LOG: While separating data and log files into two different ASM disk groups
is optional, EMC recommends it in the following cases:
Another reason for separating data from log files is performance and
availability. Redo log writes are synchronous and need to complete in
the least amount of time. By placing them on separate storage
devices, the commit writes won't have to share a LUN I/O queue with large
asynchronous buffer cache checkpoint I/Os. Placing the logs in different thin
devices than the data makes it possible to use a different thin pool, and
therefore gain increased availability (when the thin pools don't
share spindles) and possibly different RAID protection (when the thin pools
use different RAID protection).
+FRA: The Fast Recovery Area typically hosts the archive logs and sometimes
flashback logs and backup sets. Since the I/O operations to the FRA are typically
sequential writes, it is usually sufficient to locate it on a lower tier such as
SATA drives. It is also an Oracle recommendation to keep the FRA in a disk group
separate from the rest of the database, to avoid keeping the database files and the
archive logs or backup sets (that protect them) together.
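Directing the database at the +FRA disk group is then a matter of the standard initialization parameters, as in this minimal sketch (the size is illustrative):

    sqlplus -s / as sysdba <<EOF
    -- Send archived logs, flashback logs, and backup sets to +FRA
    ALTER SYSTEM SET db_recovery_file_dest_size = 500G SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
    EOF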
To simplify the reclamation of thin pool space no longer needed by ASM objects,
Oracle and its storage partners have developed the ASM Storage Reclamation Utility
(ASRU). ASRU, in conjunction with Symmetrix Space Reclamation, helps consolidate the
Oracle ASM disk group and reclaim the space that was freed in the ASM disk group
from the Symmetrix storage array. The integration of Symmetrix with ASRU is covered
in the white paper Implementing Virtual Provisioning on EMC Symmetrix VMAX with
Oracle Database 10g and 11g.
An ASM rebalance moves database extents across devices, which can cause storage
tiering optimized by FAST VP to temporarily degrade until FAST VP re-optimizes the
database layout. ASM rebalance commonly takes place when devices are added to or
dropped from the ASM disk group. These operations are normally known in advance
(although not always) and will take place during maintenance or low-activity times.
Typically, new thin devices given to the database (and ASM) will be
bound to a medium- or high-performance storage tier, such as FC or EFD. Therefore
when such devices are added, ASM will rebalance extents into them, and it is unlikely
that database performance will degrade much afterward (since they are already on a
relatively fast storage tier). If such activity takes place during low-activity or
maintenance time it may be beneficial to disable FAST VP movement until it is
complete and then let FAST VP monitor the performance and initiate a move plan
based on the new layout. FAST VP will respond to the changes and re-optimize the
data layout. Of course, it is important that any new devices added to ASM also be
added to the FAST VP controlled storage groups, so FAST VP can operate on them
together with the rest of the database devices, as in the sketch below.
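For example, during a maintenance window the new LUNs can be added with an explicit rebalance, and then added to the FAST VP controlled storage group (the disk group name, power level, array ID, device range, and storage group name are all illustrative):

    # Add the new thin LUNs to ASM and rebalance at a moderate power level
    sqlplus -s / as sysasm <<EOF
    ALTER DISKGROUP DATA ADD DISK '/dev/emcpowerm1', '/dev/emcpowern1'
      REBALANCE POWER 4;
    EOF

    # Add the same devices to the FAST VP controlled storage group so they
    # are managed together with the rest of the database devices
    symaccess -sid 123 -type storage -name Oracle_SG add devs 0F20:0F21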
The active data set of the OLTP applications will have a higher priority to be
uptiered by FAST VP
over DSS. However, DSS applications can benefit from FAST VP as well. First, data
warehouse/BI systems often have large indexes that generate random read activity.
These indexes generate an I/O workload that can benefit greatly from being uptiered
to EFD. Master Data Management (MDM) tables are another example of objects that can
benefit greatly from the EFD tier. FAST VP also downtiers inactive data. This is
especially important in DSS databases, which tend to be very large. FAST VP can
reduce costs by downtiering aged data and partitions while keeping the active data
set in the faster tiers. FAST VP does the storage tiering automatically, without the
need to continuously perform complex ILM actions at the database or application
tiers.
Manual storage tiering can also be performed by dedicating ASM disk groups to
specific storage tiers (for example, create an +EFD ASM disk group from thin LUNs
bound to the EFD tier). A combination of manual and automated storage tiering
(FAST VP) can be used as well.
Note that when database data files are spread across all tiers, some of the isolation
advantages of configuration 1 will be diminished, and therefore configuration 2 will
be more attractive. Making use of FAST VP (or manual storage tiering) for Oracle
databases is highly recommended.
Table 1 and Table 2 show the storage and host environments used for the deployment
and testing of the two configurations. Note that the Symmetrix VMAX 10K arrives with
disk groups and data devices preconfigured.
Table 1. Symmetrix VMAX 10K storage environment

Configuration aspect       Description
Storage array              Symmetrix VMAX 10K
Disk Group 1               83 x 15k rpm 450 GB FC
Disk Group 2               15 x 7500 rpm 2000 GB SATA
Disk Group 3               4 x 200 GB EFD
FC tier data devices       162 x 154 GB devices
SATA tier data devices     64 x 224 GB devices
EFD tier data devices      8 x 70 GB devices

Table 2. Host environment

Configuration aspect       Description
CRS and database version   11gR2
Operating system           Oracle Enterprise Linux 5.3
Multipathing               EMC PowerPath 5.3 SP1
Server                     Dell R900 (4 quad core)
Volume management          Oracle ASM
Configuration 1 details
Review of configuration 1
This configuration segregates Oracle data and logs into separate storage disk groups,
storage tiers, and RAID protections. Symmetrix data devices from each tier are added
to a single thin pool for that tier, for ease of manageability. TimeFinder/Clone is
used to create backups and gold copies of the database that can be kept for a long
time or used for test/dev/reporting, and RecoverPoint, if used locally, allows local
CDP with DVR-like recovery of the production database. Table 3 shows the storage
configuration used in this configuration.
Table 3. Configuration 1 details

[Table: thin devices (LUNs), their assignment, and their thin pool binding (2); data
devices are pre-configured. Recoverable entries: 52 x RAID1 data devices in the SATA
thin pool (SATA_Pool). (3)]

(2) All thin LUNs were fully allocated at creation, consuming their full capacity in
the thin pools they were bound to.
(3) As discussed earlier, RAID1- and RAID5-protected thin pools can sustain many
drive failures and remain fully available, as long as no two disks have failed in
the same RAID group. RAID6 protects against two drive failures in the same RAID
group.
Even though the database components were segregated to separate tiers, the
combination provided more physical disks for the workload, achieving a rate of about
7,500 average transactions per minute (TPM).
[Figure: transactions per minute over time for configuration 1 (FC pool: data; SATA
pool: REDO, FRA); the y-axis spans 0 to 9,000 TPM]
[Table: configuration 1 top timed events from the AWR report]

Event                     Waits       Time(s)   Avg wait (ms)   % DB time   Wait Class
db file sequential read   5,962,793   33,999                    85.21       User I/O
db file parallel read     276,081     3,292     12              8.25        User I/O
DB CPU                                1,549                     3.88
                          82,133      666                       1.67        User I/O
                          610,432     374                       0.94        Commit
Configuration 2 details
Review of configuration 2
This configuration puts the Oracle data, log, and temp files in the FC thin pool,
whereas the FRA is placed in the SATA thin pool together with the TimeFinder clone
devices. TimeFinder/Clone is used to create backups and gold copies of the database
that can be kept for a long time or used for test/dev/reporting, and RecoverPoint,
if used, allows CDP, CRR, or CLR with DVR-like recovery of the production database.
Table 5 shows the storage configuration used in this configuration.
Table 5. Configuration 2 details

[Table: thin devices (LUNs), their assignment, and their thin pool binding (4); data
devices are pre-configured. Recoverable entries: database LUNs bound to the FC thin
pool (FC_Pool); 52 x RAID1 data devices in the SATA thin pool (SATA_Pool).]
In this configuration the data, log, and temp files are in the FC RAID-5 thin pool,
and the FRA is in the SATA RAID-6 thin pool. Having the database data and log files
share a single pool is recommended for overall configuration simplicity, provided
that the VMAX 10K is protected by remote replication such as RecoverPoint. In this
configuration, if the FC thin pool fails due to a catastrophic event, the database
can be recovered from the TimeFinder clone devices or a remote replica; however,
zero data loss of committed transactions can be achieved only if the logs were
synchronously replicated remotely.
(4) All thin LUNs were fully allocated at creation, consuming their full capacity in
the thin pools they were bound to.
Had FAST VP been used, any performance difference due to the total number of
physical drives would have been irrelevant (since the database would be spanning
multiple tiers, benefiting from all spindles in the system).
[Table: configuration 2 top timed events from the AWR report]

Event                     Waits       Time(s)   Avg wait (ms)   % DB time   Wait Class
db file sequential read   6,161,968   34,621                    86.75       User I/O
db file parallel read     260,994     2,782     11              6.97        User I/O
DB CPU                                1,522                     3.81
                          80,625      648                       1.62        User I/O
                          590,362     323                       0.81        Commit
Conclusion
VMAX 10K, with its modular design and industry-standard components, is a highly
scalable storage array that can support anywhere from one to many application
workloads. It is fully based on Virtual Provisioning and therefore offers high
performance and ease of use. As Symmetrix VMAX 10K comes pre-configured with tiers,
RAID protection, and data devices, customers can more easily and quickly create and
provision thin LUNs for their applications. To take full advantage of this ease of
deployment, the choice of tiers and RAID protection should be made prior to purchase
(although changes can be made later as well). Unisphere for VMAX makes management
and provisioning of the entirely thin-provisioned VMAX 10K array very easy. With
features like FAST VP, customers can effectively achieve higher performance at lower
overall cost. A choice of local and remote replication using TimeFinder and
RecoverPoint greatly improves the protection and availability of database
environments and reduces recovery time considerably. Thus VMAX 10K provides a
cost-effective alternative for customers who are looking for a multi-controller,
scalable storage array with the advanced feature set of the Symmetrix family.
Appendixes
Appendix A Example of storage provisioning steps for configuration 1
The Symmetrix VMAX 10K comes with factory-configured devices using standard RAID
protection for the specified disk technology. Thin devices and pools can be easily
created from these pre-configured devices and provisioned to the host to suit the
requirements of the databases. The Symmetrix VMAX 10K allows very easy provisioning
and management using the Unisphere for VMAX graphical user interface. The following
steps are required to provision storage to the Oracle databases:
(1) Create the thin pool using factory-configured data devices
(2) Create the database thin LUNs of the desired capacities and bind them to the pool
(3) Create the Auto-provisioning storage groups, port groups, and initiator groups
(4) Create the masking view to provision the storage to the host
The following sections describe each of these steps in more detail.
The PRDDB database requires the creation of multiple ASM disk groups, and the sizes
and protection for the devices also differ. The following Unisphere screen shot
shows the steps to create thin LUNs of the desired capacity. The process can easily
be repeated to create the sizes required for the database.

Twenty 75 GB thin devices are created and bound to FC_Pool. The devices are created
as fully allocated thin devices, which can be changed to any amount of pre-allocated
capacity, or no pre-allocation at all.
Once a new configuration session is created, select the config session tab on the
Unisphere menu bar and commit the session to have the new devices created and
allocated to the desired capacity. Creation and commit of the configuration
operations can be done in sequence, or multiple device configuration tasks can be
defined and then committed together.
Figure 19. Creation of a storage group specifying thin devices for the Oracle database
(4) Create the masking view to provision the storage to the host
Once the initiator group, port group, and storage group are created, a masking view
must be created; creating the view automatically performs the necessary mapping and
masking commands to make the storage visible to the host.
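For reference, the same four steps can be scripted end to end with Solutions Enabler. The following is a hedged sketch: the array ID, pool, group, and view names, device ranges, ports, WWN file, and sizes are all illustrative, and the exact symconfigure syntax should be verified for your Solutions Enabler release (Unisphere performs the equivalent operations through the GUI):

    #!/bin/sh
    SID=123    # illustrative array ID

    # (1) Create the thin pool from factory-configured data devices
    symconfigure -sid $SID -cmd "create pool FC_Pool type=thin;" commit -nop
    symconfigure -sid $SID -cmd "add dev 0E00:0E1F to pool FC_Pool type=thin, member_state=ENABLE;" commit -nop

    # (2) Create fully allocated thin LUNs and bind them to the pool
    symconfigure -sid $SID -cmd "create dev count=20, size=75 GB, emulation=FBA, config=TDEV, binding to pool=FC_Pool, preallocate size=ALL;" commit -nop

    # (3) Create the Auto-provisioning storage, port, and initiator groups
    symaccess -sid $SID create -name PRDDB_SG -type storage devs 0F00:0F13
    symaccess -sid $SID create -name PRDDB_PG -type port -dirport 7E:1,10E:1
    symaccess -sid $SID create -name PRDDB_IG -type initiator -file rac1_hbas.txt

    # (4) Create the masking view to make the storage visible to the host
    symaccess -sid $SID create view -name PRDDB_view \
              -sg PRDDB_SG -pg PRDDB_PG -ig PRDDB_IG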