
HPE 3PAR Adaptive Flash Cache

Lab guide

Rev. 16.42
© Copyright 2016 Hewlett Packard Enterprise Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for
HPE products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional
warranty. HPE shall not be liable for technical or editorial errors or omissions contained
herein.
This is an HPE copyrighted work that may not be reproduced without the written permission
of HPE. You may not use these materials to deliver training to any person outside of your
organization without the written permission of HPE.
Microsoft, Encarta, MSN, and Windows are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.
Printed in United States of America
HPE 3PAR Adaptive Flash Cache Lab
Lab Activity Guide
August 2016
Contents

Lab 1—Implementing HPE 3PAR Adaptive Flash Cache Prerequisites
  Objectives
  Requirements
  Overview of AFC
    AFC data specifications
    Caching versus tiering
    AFC best practices
    Lab environment
  Exercise 1—Verifying prerequisites and setup

Lab 2—Using Iometer
  Objectives
  Requirements
  Exercise 1—Adding an I/O test when Iometer is already running
  Exercise 2—Configuring Iometer
  Exercise 3—Configuring monitoring in the SSMC

Lab 3—Creating and Enabling HPE 3PAR Adaptive Flash Cache
  Objective
  Requirements
  Exercise 1—Creating and enabling AFC

Implementing HPE 3PAR
Adaptive Flash Cache Prerequisites
Lab 1

Objectives
After completing this lab, you should be able to:
 Describe the features and best practices of HPE 3PAR Adaptive Flash Cache
(AFC)
 Verify the prerequisites to implement AFC and virtual volume setup

Requirements
To complete this lab, you will need:
 Knowledge of provisioning storage on a 3PAR array and basic administration
 An HPE 3PAR StoreServ array with at least two tiers of storage:
• Fast Class (FC) drives
• Solid state disk (SSD) drives
 The base license for the StoreServ to enable functionality, including the AFC
feature
 A Microsoft Windows host for provisioning storage with a supported Fibre
Channel host bus adapter (HBA) and multipath I/O connected directly (or
through a SAN fabric) to the StoreServ array
 Software
• HPE 3PAR OS 3.2.1 or later
• HPE 3PAR StoreServ Management Console (SSMC) 2.2 or later
• Iometer on the host to generate an I/O workload

Important
AFC was introduced with HPE 3PAR OS 3.2.1 and is included as part of the base
operating system license; no additional license is required. The feature requires
SSDs in the system. It can be implemented with a minimum of four dedicated SSDs
(two mirrored behind each node of a two-node 7200, which therefore requires four
drives), or by sharing an existing tier of at least eight SSDs, in which case the
SSDs are used both to provision storage and to host the AFC. With HPE 3PAR OS
3.2.1, the AFC is configured from the command line interface (CLI).
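
You can confirm these prerequisites from the CLI before you begin. A quick sketch
(standard 3PAR CLI commands; output varies by array):

    showversion
    showlicense
    showpd -p -devtype SSD

showversion confirms the HPE 3PAR OS level (3.2.1 or later), showlicense lists the
installed licenses, and showpd -p -devtype SSD lists the SSDs available to host the
flash cache.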

Overview of AFC

One of the most difficult workloads for a storage array to handle is a random read
workload. A random read workload is a sequence where future read requests
cannot be predicted based on previous read requests.
By increasing the size of the primary read cache on an array or by adding large
Level 2 (L2) read cache, you can increase the probability that a block of random
read data will be serviced multiple times from a much faster cache tier than the
spinning media where it resides on the back end of the array. AFC is a built-in
array functionality that uses capacity on SSDs to act as L2 read cache holding
random read data that has been removed from DRAM read cache. By serving a
large percentage of I/Os from flash, you reduce front-end I/O latency and decrease
the back-end I/O load.
One use case for AFC is read acceleration—high-frequency random read
workloads benefit from lower latencies and increased IOPs on the front end. Web
servers and online transaction processing (OLTP) workloads are ideal candidates
for read acceleration.
Other use cases include improved response times for cold data in an HPE 3PAR
Adaptive Optimization (AO) configuration: random read requests from the slower
tier are cached into the AFC, and response times for subsequent read requests
improve. AFC can also be used by service providers for charge-back, similar to
HPE 3PAR Priority Optimization, to offer additional performance options.

AFC data specifications

After the primary (controller) DRAM cache reaches 90% utilization, data in a
node’s DRAM read cache starts to destage to AFC, which resides in the SSD tier.
Read data moves from a node’s DRAM read cache to AFC when the caching
algorithms are looking to free up cache memory pages (CMPs) in a node’s DRAM
memory and a read cache CMP is freed up.
By design, AFC caches random reads (and destaged writes) that are less than 64 KB
(these I/O sizes are considered cache friendly) and meet all of the following
conditions:
 The data is in DRAM either as the result of a small-block (less than 64 KB)
random read, or in a CMP containing write data that becomes a read-data CMP
after the dirty write data is written to the back-end hard disk drives (HDDs).
 The data is not the result of a sequential read stream (regardless of the I/O
size); sequential reads are handled by the node’s DRAM read cache.
 The data is not in read cache as the result of a large-block (64 KB or greater)
write.
 The data is not in cache as the result of a sequential write stream.
 The data does not come from an SSD tier.
The AFC uses a Least Recently Used (LRU) algorithm to decide which flash cache data to replace.
 Data admitted initially is at “normal” temperature.
 Data is promoted to “hot” when frequently accessed.
 Data is demoted to “cold” when aged.
 “Cold” data is subject to eviction from flash cache.

Caching versus tiering


AFC and AO can coexist and complement each other on the same system.
Because AFC acts just like a cache, it does not influence AO I/O density
calculations. AFC does not preclude the need for AO on a customer’s system.
The following points contrast the two features:
 Adaptive Optimization (AO) tiers data (flash is primary storage); Adaptive
Flash Cache caches a copy of the data.
 AO operates on the principle of I/O access density; AFC accelerates random
reads smaller than 64 KB, acting as a second-layer cache.
 AO works on 128 MB regions of data; AFC works on 16 KB pages.
 AO schedules data movement; AFC reacts to dynamic load instantly.
 AO provides read and write acceleration for hot data (random and sequential
access); AFC provides read acceleration for random reads.

AFC best practices


AFC is simple to create and configure. CLI commands are used to create, enable,
disable, clear, display the status of, and remove flash cache on a StoreServ array.
 The smallest supported flash cache is 64 GB behind a node pair in a 7200,
which is 32 GB per node. Flash cache must be created in 16 GB increments.
The maximum amount varies depending on the 3PAR model.

Note
Not all consumer multilevel cell (cMLC) SSDs are supported for flash cache;
check which models are supported before you configure it.

 Flash cache can be enabled for the entire system or on a per virtual volume
set (VVset) basis.
 Flash cache cannot be increased dynamically. For example, if you add more
SSDs and want to increase the AFC size, remove the existing flash cache and
re-create it at the new size.
 AFC statistics can be monitored using the statcache command.
 AFC simulator—You can enable AFC in simulator mode (createflashcache -sim
<size>) on a system that does not have SSDs, and then use the statcache
command output to determine what impact an AFC would have on that
specific configuration.
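
For example, a minimal simulator session might look like the following (the
128 GB size is only an illustration; choose a size that is valid for your
configuration):

    createflashcache -sim 128g
    statcache -v
    removeflashcache

After the workload has run for a while, the FMP hit rates reported by statcache
indicate how effective a real flash cache of that size would be.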

Lab environment

In these labs, you will create an FC RAID5 common provisioning group (CPG)
using 450 GB 10 K hard drives. Then you will create two fully provisioned
volumes from the FC CPG (AFC_FC-r5), part of a volume set (VVAFC), and export
them to your Windows host. Using Iometer, you will generate a 32 KB, 100%
random, 100% read workload to these volumes. From the SSMC, you will chart the
number of I/Os per second and the response time/latency in milliseconds (ms) for
the volume set. You will then set up and enable AFC for this volume set and
observe the effect on the number of I/Os and the response time. The volumes will
remain unformatted to minimize file system constraints.

Note
AFC takes effect in real time, but the cache needs time to warm up. To see the
impact in this lab, allow at least 60 minutes from the time you enable AFC to
observe the results.

Exercise 1—Verifying prerequisites and setup


1. From the SSMC, verify that you have a license for AFC.

2. Create an FC CPG named AFC_FC-r5.

3. Create a fully provisioned virtual volume named VVAFC that is 100 GB in size
in the AFC_FC-r5 CPG. Under “Volume Grouping”, set “Number of volumes” to 2
and name the volume set “VVAFC”.

4. This creates two volumes and a volume set named VVAFC.

5. Export the two volumes to your Windows host.

6. Under Computer Management, make sure the volumes are visible under Disk
Management. Initialize them but do not create any volumes on them.
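
For reference, the SSMC steps in this exercise can also be sketched as CLI
commands (yourhost is a placeholder for your Windows host as it is defined on
the array):

    createcpg -t r5 -p -devtype FC AFC_FC-r5
    createvv -cnt 2 AFC_FC-r5 VVAFC 100g
    createvvset VVAFC VVAFC.0 VVAFC.1
    createvlun set:VVAFC auto yourhost

With -cnt 2, the volumes are created as VVAFC.0 and VVAFC.1. The SSMC “Volume
Grouping” option creates the volume set for you; from the CLI, an explicit
createvvset is needed.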

Using Iometer
Lab 2

Objectives
After completing this lab, you should be able to:
 Add an I/O test on a Microsoft Windows host
 Configure Iometer
 Configure monitoring in the HPE StoreServ Management Console (SSMC) to
observe the impact of HPE 3PAR Adaptive Flash Cache (AFC)

Requirements
To complete this lab, you will need:
 Knowledge of provisioning storage on a 3PAR array and basic administration
 An HPE 3PAR StoreServ array with at least two tiers of storage:
• Fast Class (FC) drives
• Solid state disk (SSD) drives
 The base license for the StoreServ to enable functionality, including the AFC
feature
 A Windows host for provisioning storage with a supported Fibre Channel host
bus adapter (HBA) and multipath I/O connected directly (or through a SAN
fabric) to the StoreServ array
 Software
• HPE 3PAR OS 3.2.1 or later
• HPE 3PAR SSMC 2.2 or later
• Iometer on the host to generate an I/O workload

Important
AFC was introduced with HPE 3PAR OS 3.2.1 and is included as part of the base
operating system license; no additional license is required. The feature requires
SSDs in the system. It can be implemented with a minimum of four dedicated SSDs
(two mirrored behind each node of a two-node 7200, which therefore requires four
drives), or by sharing an existing tier of at least eight SSDs, in which case the
SSDs are used both to provision storage and to host the AFC. With HPE 3PAR OS
3.2.1, the AFC is configured from the CLI.

Exercise 1—Adding an I/O test when Iometer is already running

If Iometer is not already running, skip to Exercise 2.

If you are already generating I/O to some volumes, as in the AO lab, you need to
adjust how you set up Iometer. Temporarily stop Iometer, add a new manager from
the main menu, and set up the I/O access specifications. Then restart Iometer as
instructed in the following steps.
1. If Iometer is running, temporarily stop all testing by clicking Stop or Stop All.

2. In the top menu bar, click the computer icon to start a new manager.
3. With the second manager selected, select the two newly created targets in the
right pane and set the # of Outstanding I/Os to 24 per target.

4. On the Access Specifications tab, create an I/O profile that is 32 KB, 100%
read, and 100% random.

5. Save the results to the desktop. You can then resume testing by clicking the
green flag.

Note:
Iometer will not display unformatted volumes if it detects any configuration
information on them. If the volumes do not show up, you will need to delete
and re-create them.

Exercise 2—Configuring Iometer


1. On the Windows host, launch Iometer. Iometer can be found in the C:\sw
directory.
2. In the left pane, with the server selected, hold down the Ctrl key, select both
PHYSICAL DRIVES, and set the # of Outstanding I/Os to 24 per target.

3. On the Access Specifications tab, create an I/O profile that is 32 KB, 100%
read, and 100% random.

4. Add the profile to the Assigned Access Specifications field.

5. On the Results tab, set the Update Frequency to 2 seconds.

6. Save the test configuration.

7. Click the green flag to launch the test and accept the default file name. Click
Save to start the test.

8. Wait for two minutes and record the start time and readings from Iometer in
the following table. You will fill in the second column at the conclusion of the
lab.

Start time: __________                        End time: __________

Metric                               At start          At end
Total I/Os per second                __________        __________
Total MBs per second                 __________        __________
Average I/O response time (ms)       __________        __________
% CPU utilization (total)            __________        __________

Exercise 3—Configuring monitoring in the SSMC

To observe the impact of AFC, you will monitor Iometer and chart the total number
of I/Os per second and the latency within the SSMC. You will then enable AFC and
observe the results.
1. From the SSMC Mega Menu, select Reports.

2. Create a Report.

3. Click the “Select” button.

4. Select the “Exported Volumes-Real Time Performance Statistics” item, and then
click “Select”.

5. Click the “x” to remove all objects, and then click “Add objects”.

6. Change to “Select objects”. For the Y axis, select “IO/s” and set Type to
“Read”. Hold the Shift key, select all of the AFC volumes that you created
earlier, and click “Add +”.

7. Change Type to “Total” and click “Add +”.

8. Change the Y axis to “Service Time” with Type “Read”, and click “Add +”. Then
change Type to “Total” and click “Add +”.

9. Your screen should look like this. Click “Create”.

10. You should see two charts. The top chart shows total IOPs and should be
registering approximately 2,000 IOPs.

The bottom chart shows service time and should be registering approximately 20 ms.
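
If you want to cross-check these numbers outside the SSMC, the CLI stat commands
report similar values; for example (the interval and iteration counts are only
examples):

    statvlun -ni -rw -d 2 -iter 5

This shows per-VLUN read and write IOPs and service times at two-second
intervals, which should roughly match the Iometer and SSMC figures.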

Creating and Enabling
HPE 3PAR Adaptive Flash Cache
Lab 3

Objective
After completing this lab, you should be able to create and enable HPE 3PAR
Adaptive Flash Cache (AFC).

Requirements
To complete this lab, you will need:
 Knowledge of provisioning storage on a 3PAR array and basic administration
 An HPE 3PAR StoreServ array with at least two tiers of storage:
• Fast Class (FC) drives
• Solid state disk (SSD) drives
 The base license for the StoreServ to enable functionality, including the AFC
feature
 A Microsoft Windows host for provisioning storage with a supported Fibre
Channel host bus adapter and multipath I/O connected directly (or through a
SAN fabric) to the StoreServ
 Software
• HPE 3PAR OS 3.2.1 or later
• HPE 3PAR SSMC 2.2 or later
• Iometer on the host to generate an I/O workload

Important
AFC was introduced with HPE 3PAR OS 3.2.1 and is included as part of the base
operating system license; no additional license is required. The feature requires
SSDs in the system. It can be implemented with a minimum of four dedicated SSDs
(two mirrored behind each node of a two-node 7200, which therefore requires four
drives), or by sharing an existing tier of at least eight SSDs, in which case the
SSDs are used both to provision storage and to host the AFC. With HPE 3PAR OS
3.2.1, the AFC is configured from the command line interface (CLI).

Exercise 1—Creating and enabling AFC


Now that an I/O load is running and you are monitoring some parameters, you can
create and enable AFC. This can be done from the CLI or from the SSMC; this
exercise uses the SSMC, with a few CLI commands along the way.

Launch the CLI and connect to the 3PAR array. Ensure that no AFC configuration
already exists by entering the following command:

removeflashcache
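
If you prefer to run this exercise from the CLI instead of the SSMC steps that
follow, the equivalent commands are sketched below (the size and VVset name are
taken from this lab):

    createflashcache -sim 128g
    setflashcache enable vvset:VVAFC

The remainder of this exercise uses the SSMC.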
From the Mega Menu, select Adaptive Flash Cache.

Click the “Actions” dropdown and select “Edit”.

Before we commit to using AFC, we have the option to test its effectiveness on
our array. We will first run the simulator and then, if the results look
worthwhile, enable AFC.

Click Mode and select “Simulation” from the dropdown.

Modify this page so that the size is 128 GB.


Add the VVset created earlier and click “OK”.

AFC is now enabled, and you should see a screen like this.

Note that the mode is “Simulation” and the capacity is just beginning to increase.

As the flash cache is used, the capacity will grow.

If the capacity does not increase, there is probably no need for flash cache on
your array.
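
You can also watch the simulated cache fill from the CLI, for example:

    showflashcache
    showflashcache -vvset

showflashcache reports the flash cache size and usage per node; the -vvset option
shows which VVsets have flash cache enabled.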

Let’s add another graph to look at. Select “Reports” from the Mega Menu. Click
“Create Report”, select “Adaptive Flash Cache – Performance Statistics”, and then
click “Select”.

Modify the “Time Settings” to “HiRes” and 1 hour.

Scroll down, keep the “Schedule” at “Once now”, and click “Create”.

When the graph appears, you should see flash cache use similar to the screenshot
below.

From the Mega Menu, click “Adaptive Flash Cache”. You should see some flash cache
use.

It’s safe to say that this array will benefit from flash cache, so let’s take it
out of simulation mode and enable it.

Use the “Actions” menu and select “Edit”.

Change the mode from “Simulation” to “Standard” and choose RAID 0, then click OK.

RAID 0? Yikes! But think about it: the flash cache only holds a copy of data that
also exists on the back end, so why waste space on RAID 1 for a cache?

Notice that the mode is now “Standard” and that the flash cache is beginning to
be used.
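
The same change can be sketched from the CLI. Flash cache cannot be resized or
converted in place, so the simulated cache is removed and a real one is created
(sizes from this lab; check the CLI reference for the RAID options available on
your HPE 3PAR OS version):

    removeflashcache
    createflashcache 128g
    setflashcache enable vvset:VVAFC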

Re-open the chart. You should notice that the service time (ms) is gradually
coming down and the number of I/Os per second is gradually going up.

In about half an hour, your graph should look like the one below. Don’t wait;
continue on.

The AFC Performance graph should look similar to the one below. Note that the dip
in the center of the graph is when we went from simulation mode to standard mode.

Note that most of the cache is being used.

Go back to the CLI window and look at the output of the statcache -v command.

The output shows the following information:


• Cache Memory Page (CMP)—A 16 KiB page of physical DRAM cache
memory that contains data for either a read I/O or write I/O.
• Flash Memory Page (FMP)—A 16 KiB page of physical flash in the flash
cache.
• In the CLI display:
 CMP Hit %—Cache hits from the node’s CMP.
 FMP Hit %—Cache hits from the AFC FMPs.
 Destaged write—Writes from the node’s cache that have been
cached into the AFC.
 Read back—Read requests from AFC of previously cached reads.
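
To take a single sample instead of a continuously updating display, you can bound
the command; for example:

    statcache -v -d 2 -iter 1

This prints one set of CMP and FMP statistics after a two-second interval.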

After about 45 to 60 minutes, return to the plots and Iometer and compare your
results.
What can you conclude about AFC?
______________________________________________________________
______________________________________________________________
______________________________________________________________

After about 30 to 50 minutes, your chart should show a significant increase in
IOPs as well as a reduction in service time.

Return to the table where you saved the Iometer performance figures at the start
of the test. Complete the table and compare the figures.

Start time: __________                        End time: __________

Metric                               At start          At end
Total I/Os per second                __________        __________
Total MBs per second                 __________        __________
Average I/O response time (ms)       __________        __________
% CPU utilization (total)            __________        __________

Make sure the Iometer button ‘Results since last update’ is selected to ensure
the most recent results are displayed.
