
Welcome to VNXe Fundamentals.

Copyright 2015 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate
as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks,
logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties.
Nothing contained in this publication should be construed as granting any license or right to use any Trademark without the prior written
permission of the party that owns the Trademark.

EMC, EMC AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica,
Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution, C-Clip,
Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert,
CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere,
Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge,
Data Protection Suite, Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture,
DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, ECS, eInput, E-Lab, Elastic Cloud Storage,
EmailXaminer, EmailXtender, EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST,
FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase,
Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Kazeon, EMC LifeLine, Mainframe
Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Multi-Band
Deduplication, Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink,
PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC
RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager,
ScaleIO, Smarts, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate,
SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale,
Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning,
Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET,
VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise
Storage.

Revision Date: August 2015

Revision Number: MR-1WN-VNXEFD.2.0



This course covers the VNXe2 Series. It includes the EMC VNXe2 Series models,
architecture, features, functionality, and management options.



The course contents are shown here.



This module focuses on the VNXe2 series product benefits and use cases.



VNXe unifies EMC's file-based and block-based offerings into a single product that can be
managed with one easy-to-use GUI. VNXe data storage solutions are designed with the IT
generalist in mind. The user manages and optimizes unified storage through the EMC
Unisphere interface. VNXe provides point-and-click wizards to provision and protect storage
for a range of applications, including VMware and Hyper-V virtualization as well as Microsoft
Exchange.

The VNXe series of storage platforms has been engineered to make it simple to deploy and
manage local application workloads for small and medium business environments, remote
or branch offices, and Federal environments, while maintaining required levels of protection
and availability for critical data.

VNXe is optimized for virtual environments, with integration and features that streamline
operations and leverage virtual server deployments at remote sites.
The VNXe series takes full advantage of the latest Intel multi-core technology with the
introduction of MCx.



The VNXe3200 has the power of EMC's flash-optimized VNXe unified storage systems,
compressed into an efficient, easy-to-use package designed for resource-constrained IT
departments in any size company.

The VNXe1600 is an entry-level platform providing purpose-built block storage using the
new Ivy Bridge-based motherboard.

MCx architecture makes use of threads and resources versus simply using more sockets and
cores at higher clock rates. It's similar to the effect of hyper-threading on processors in
general.

These systems are architected to automate the use of a fast layer of SSD drives to boost
performance and auto-tier based on hot spot recognition in the system using EMC's FAST
Suite.



VNXe models include the MCx models, VNXe3200 and VNXe1600, and the Pre-MCx models,
VNXe3150 and VNXe3300. Both the MCx and Pre-MCx models provide file and block
storage services. However, there are some differences.

For file services, all models except the VNXe1600 provide NAS services to both CIFS and
NFS clients.

For block services, the VNXe series provide storage access via Fibre Channel and iSCSI
protocols. However, the Pre-MCx models support only iSCSI services for block storage.

While the MCx and Pre-MCx models provide many of the same functions, some of the
terminology has changed and is represented on this slide.

Additionally, while the Pre-MCx models provided wizards for Microsoft Exchange and Hyper-
V provisioning, with MCx, these needs can now be serviced from the Microsoft application
using the EMC Storage Integrator for Microsoft Windows (which uses MMC plug-ins to
provision storage for Microsoft Windows Server, SharePoint and Exchange).



There is also an all-Flash VNXe3200 available, with both 200 GB SSD and 800 GB MLC
drives. It is multicore optimized and provides consistent performance in both virtualized
(VMware and Hyper-V) and database/transactional application environments.



The pressure in Exchange environments to eliminate backup windows and reduce recovery
times is stronger than ever before. With dynamic, policy-based protection management,
these dynamic environments can easily exceed demanding recovery objectives using EMC
VNXe advanced technologies such as snapshots and continuous protection technology. It
also helps efficiently use storage resources and optimize protection for Microsoft Exchange.
The VNXe series is optimized for virtualization, supports all leading hypervisors, and
simplifies desktop creation and storage configuration. VNXe leverages advanced technologies
to optimize performance for the virtual desktop environment, helping support service level
agreements. Virtualization management integration allows the Microsoft Hyper-V
administrator to extend their familiar management console for VNXe-related activities.
In the Microsoft Windows Server 2012 and Hyper-V 3.0 space, Offloaded Data Transfer
(ODX) allows the VNXe to be fully optimized for Windows virtual environments. This
technology offloads storage-related functions from the server to the storage system.
EMC Storage Integrator (ESI) for Windows provides the ability to provision block and file
storage for Microsoft Windows or for Microsoft SharePoint sites.



Online Transaction Processing (OLTP) database applications tend to be mission-critical
and usually have stringent I/O latency requirements. Traditionally, these OLTP databases
are deployed on a huge number of rotating Fibre Channel (FC) spindles to meet the low I/O
latency requirement. Consequently, the effective capacity utilization of these spindles is
very low. VNXe reduces the need to buy more drives to keep up with database growth. Also,
the VNXe automatically and nondisruptively migrates hot and cold data between the
available storage tiers, thereby improving the effective storage utilization.



The VNXe series is optimized for virtualization, supports all leading hypervisors, and
simplifies desktop creation and storage configuration. VNXe leverages advanced technologies
to optimize performance for the virtual desktop environment, helping support service level
agreements.
Virtualization management integration allows the VMware administrator to extend their
familiar management console for VNXe-related activities.
VMware vStorage APIs for Array Integration:
VAAI for both SAN and NAS connections allows the VNXe to be fully optimized for
virtualized environments. EMC Virtual Storage Integrator (VSI) is targeted towards
the VMware administrator. VSI supports VNXe provisioning within vCenter, provides full
visibility to physical storage, and increases management efficiency.
VASA allows storage vendors to obtain and display storage information through
vCenter.
VMware Aware Integration (VAI) allows end-to-end discovery of the VMware
environment from the Unisphere GUI.
EMC Storage Analytics (ESA) is a joint solution from EMC and VMware that delivers the
features and functionality of VMware vRealize Operations Manager (vR Ops),
such as configurable dashboards with drill down, notifications, and patented self-learning,
together with deep storage analytics for VNX unified storage.
vVNX is a virtual storage array for VMware vCenter environments, providing a virtual
storage array on ESXi servers. It is deployed from an OVA file in vCenter environments.



EMC VNXe unified storage provides a single storage platform that can address all of the
needs of a remote site, from SAN-based application storage to file shares to support local
users' individual and group data.

EMC Unisphere Remote provides simultaneous reporting and alerting for many VNXe
storage platforms, enabling central IT to monitor individual capacity utilization and system
health, and to access an individual array for intervention.

VNXe is optimized for virtual environments, with integration and features that streamline
operations and leverage virtual server deployments at remote sites.

Integration with Avamar provides the centralized, edge-to-core backup and recovery
capabilities that organizations require for their business-critical data without over-taxing
local, network, and data center resources.



This module covered the introduction of the VNXe solution and its benefits. It also
covered the VNXe key use cases and VNXe family integration benefits.



This module focuses on the VNXe architecture and key terminology, VNXe components,
and a high-level overview of the MCx Cache options.



VNXe storage systems consist of three main hardware components, depending on the
model:

Storage Processors (SPs) carry out the tasks of saving and retrieving block and file
data. SPs utilize I/O modules to provide connectivity to hosts and 6 Gb/s Serial
Attached SCSI (SAS) to connect to disks in Disk Array Enclosures (SPs will be covered
in detail in following slides).

Disk Processor Enclosures (DPEs) house Storage Processors (SPs), the first tray of
disks, and I/O interface modules. DPEs connect to Disk Array Enclosures. All
components within the enclosure are redundant and highly available.

Disk Array Enclosures (DAEs) house the non-volatile hard and Flash drives used in
the VNXe storage systems.



Fan modules
Power supplies
Management ports
Service ports
4 x 10 GbE connectivity ports (front-end)
2 x 6 Gb/s SAS connectivity ports (back-end)
4 x 8 Gb/s FC connectivity ports (front-end, optional)
Front view
12 x 3.5-inch drives
25 x 2.5-inch drives

In the Pre-MCx models, each SP houses the AC power supply, fan modules, a battery
backup unit (BBU), a USB eFlash device, and a 16 GB solid-state drive. The SSD and the USB
eFlash device contain the operating environment, root and swap areas, and other data. The
SSD is located inside each SP.
In the MCx models, each SP houses the fan modules, AC power supply, a battery
backup unit (BBU), the management port, the service port, the optional I/O module for FC
connections, and the mSATA drive. The 32 GB MLC Flash mSATA device (located inside each
SP) is the system boot device, programmed with standard BIOS bootable partitioning.
mSATA replaces the USB and SSD boot flash present in the Pre-MCx models. Contained in
the mSATA are the partitions for boot, root, cores, swap, Linux PMP (Permanent Memory
Persistence, or lxPMP), and the image repository for firmware.



Power supplies and cooling modules

Redundant 6 Gb/s SAS connectivity LCC ports

12 x 3.5-inch drives

25 x 2.5-inch drives

Disk Array Enclosures (DAEs) house the non-volatile hard drives and Flash drives used in the
VNXe storage systems. All DAEs accommodate two 4-lane, 6 Gb/s SAS back-end ports, and
two Power Supply/Cooling modules, PS A and PS B.

The rear view of the DAE shows the redundant Power Supplies and Link Control Cards
(LCCs).

The LCC's main function is to be a SAS expander and provide enclosure services for all drive
slots. The LCCs in a DAE connect to the DPE and other DAEs with 6 Gb/s SAS cables. The
LCCs each contain a primary and an expansion port, independently monitor the
environmental status of the entire enclosure, and communicate the status to the SPs. The
primary port is indicated by two circles and is used when adding this DAE to an existing
configuration. The expansion port is indicated by two diamonds and is used to connect
additional DAEs to this DAE.
The DAE is managed by the system software. The DAEs can house up to the same number
of drives as the DPE, 25 x 2.5-inch (2U) or 12 x 3.5-inch (2U) drives, and support Flash, SAS,
and NL-SAS drives. The slots are numbered 0 through 11 for the 12-drive DAE, and 0
through 24 for the 25-drive DAE.



The addition of FC brings a new level of connectivity performance to this system.

10GbE has also been added which delivers 10 GbE or 1 GbE speeds.

In addition, iSCSI is faster and plugs into the system directly versus going through a file
layer, as was the case with the VNXe3150.

The MCx model supports native block front-end connectivity using the 8 Gb FC Block I/O
Module. There is a maximum of four Fibre Channel ports per SP. The same type of eSLIC
must be installed on both SPA and SPB (i.e., both SPs must have a Fibre Channel I/O
module or both must be blank), so a symmetrical configuration is needed on both SPs.



The VNXe series includes two models: the VNXe3200, and VNXe1600.
The VNXe3200 and VNXe1600, with MCx architecture, leverage Flash drives to address the
high-performance, low-latency requirements of virtualized applications. They take full
advantage of the latest Intel multi-core technology with the introduction of MCx
architecture. MCx distributes all VNXe data services across all cores.

Only the VNXe3200 offers traditional NAS connectivity via NFS and CIFS protocols. Both
models support the Fibre Channel and iSCSI block access protocols.



Multi-core optimization is key to the VNXe series, and it fundamentally ensures long-term
cost/performance leadership.

Multi-core optimization delivers efficient performance through a multicore stack design. MCx
is an EMC trademark for "Multicore everything":
MCx is an important piece of the VNXe unified storage architecture for the next decade
Robust, enterprise-ready storage OS
MCx unleashes the power of all VNXe cores
Linear performance scaling across all cores
Enables better leverage of intelligent data services
MCx performance and functionality improvements:
Multicore Cache (MCC)
Multicore FAST Cache (MCF)
Multicore RAID (MCR)

The multicore initiative, MCx, is a re-architecture project that redesigns the core Block OE
stack within the VNXe series to ensure optimum performance at high scale. This is achieved
with an adaptive architecture.

Multicore Cache, Multicore FAST Cache and Multicore RAID are key engineering
enhancements.



The VNXe series with MCx introduces the concept of the Multicore Cache capability. The
cache is the most valuable storage asset in the storage subsystem, and its efficient usage is
key to the overall efficiency of the platform in handling variable and changing workloads.
The cache engine has been modularized to take advantage of all the cores available in the
system. This means that the VNXe with MCx family members scale seamlessly.

In addition, MCC removes the need to manually separate space for read versus write cache,
so there is no management overhead in ensuring the cache is working in the most effective
way regardless of the I/O mix coming into the system.

Multicore FAST Cache acts as an extension of the VNXe series multicore cache (DRAM).
FAST Cache on the system can be configured using the Flash-optimized Flash drives.



Previously known as "virtual disks" and "generic iSCSI" (Pre-MCx), LUNs are block-based
storage that provide hosts access to storage over network-based iSCSI or Fibre Channel
(FC) connections. With LUNs, the user can manage addressable partitions of block storage so
that host systems can mount and use them over FC or IP connections. After a host
connects to a LUN, it can use the LUN like a local storage drive. The user must choose a
storage pool with which to associate the LUN when creating VNXe storage for hosts to use.
The storage that the LUN uses is drawn from the specified pool.
A LUN Group (also called an application consistency group) enables the user to group a
number of LUNs together to specify a common data protection schedule across all LUNs in
the group. It contains one or more LUNs (up to 50) and is associated with one or more
hosts. LUN Groups organize the storage allocated for a particular host or hosts. When the
user configures host access to a LUN Group, the specified access extends to all LUNs within
the group.
A Storage Pool is a set of disks that provide specific storage characteristics for the
resources that use them. For example, the storage pool configuration defines the types and
capacities of the disks in the pool. It also defines the RAID configurations (RAID types and
stripe widths) for these disks. Storage pools generally provide optimized storage for a
particular set of applications or conditions.
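
A minimal sketch of how these objects relate follows; the names and fields are illustrative only, not the VNXe object model.

```python
# Toy model of the relationships described above: pools back LUNs, and LUN Groups
# bundle LUNs so host access and protection schedules apply to all members.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StoragePool:
    name: str
    raid_type: str            # e.g. "RAID 5"
    capacity_gb: int

@dataclass
class Lun:
    name: str
    size_gb: int
    pool: StoragePool         # storage is drawn from this pool

@dataclass
class LunGroup:
    name: str
    luns: List[Lun] = field(default_factory=list)     # up to 50 LUNs
    hosts: List[str] = field(default_factory=list)    # access extends to every LUN

pool = StoragePool("perf_pool", "RAID 5", 10_000)
group = LunGroup("exchange_cg",
                 [Lun("db01", 500, pool), Lun("logs01", 100, pool)],
                 ["mail-host"])
print([lun.name for lun in group.luns], "accessible to", group.hosts)
```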



This module covered the VNXe architecture and key terminology, VNXe components and
a high-level overview of the MCx Cache options.



This module focuses on the VNXe features, capabilities and the usage of these in an IT
environment.



VNXe Base Software includes: VNXe OE, Unisphere, Unisphere Central, Integrated
Online Support Ecosystem, Protocols, FAST Suite (FAST VP and FAST Cache), Monitoring and
Reporting, Unified Snapshots, Remote Protection (Native), File Deduplication and
Compression, Thin Provisioning, Event Enabler (Common AntiVirus), File Level Retention, and
EMC Secure Remote Support (ESRS).

Options include:
For deep VMware-level insight and analysis, EMC Storage Analytics is available as
a vCenter Operations Manager (vC Ops) option, using the EMC Adapter for VNXe.
RecoverPoint Advanced Protection for VNXe3200: includes the write splitter for
RecoverPoint EX. Provides local and remote Continuous Data Protection for recovery
to any point in time.
Free EMC Storage Integrator (ESI) for Windows:
Provision storage via Microsoft interfaces.
Free Virtual Storage Integrator (VSI):
Allows VMware administrators to manage VNXe3200 storage from within
VMware vCenter. The Virtual Storage Integrator (VSI) plug-in supports
VMware storage management, provisioning, and hardware off-load features.
AppSync Copy Management: fast copy and rapid restores of VMware, Exchange,
SQL, SharePoint, Oracle, and more.
EMC Storage Analytics: powerful monitoring and analytics tool for VMware
vRealize Operations Manager (EMC Adapter for VNXe).
PowerPath: intelligent load balancing and multipathing software for networked
storage environments.



VNXe arrays provide rich integration across EMC portfolio and federation offerings for
Business Continuity (BC), System Integration, and Application and Infrastructure
Management. VNXe integrates with Data Domain, Avamar, traditional NDMP backup
solutions, and RecoverPoint to provide business continuity. VNXe integration with EMC
Storage Integrator, EMC Virtual Storage Integrator, AppSync Copy Management, and EMC
Storage Analytics provides application and infrastructure integration.

The following slides discuss some of the key integration capabilities of VNXe arrays.



This lesson covers Hardware & Component Redundancy, RAID protection, MCx
architecture features, and network high availability features.



VNXe availability and redundancy features include:
Dual Storage Processors with mirrored write cache. Each SP contains both primary cached
data for its LUNs and a secondary copy of the cache for its peer SP. Even in the VNXe3150
with one SP, the second SP slot houses a battery-backed, flash-based persistent write
cache (the cache protection module).
Each disk drive has two data ports. This gives two separate paths to each drive. If an SP
fails, or any component of the path fails, the drive can still be accessed by the other SP.
RAID protection levels 0, 1/0, 5, and 6 are available and can co-exist in the same array
simultaneously to match different protection requirements.
Permanent disk sparing enhances system robustness and delivers maximum reliability and
availability.
The VNXe supports network pass-through to provide network path redundancy. If a network
path becomes unavailable due to a failed NIC, switch port, or bad cable, network traffic is
rerouted through the peer SP using an inter-SP network, and all the network connections
remain active.
Redundant power supplies.
Battery backup to allow for an orderly shutdown and flush cache contents to flash disk to
ensure data protection in the event of a power failure. In the event of a power failure, the
flash disk provides the de-stage area for data in write cache that is not yet committed to
the disk.



PowerPath is an EMC-supplied software product that may be installed on any supported
host platform. It provides the multipathing features to meet High Availability requirements
at the application level. The core features of PowerPath are automated path failover, which
includes an automated path restore function, and dynamic load-balancing. Regardless of
cause, the direct result of a path failure is the failure of I/O requests on the corresponding
native device from the host. PowerPath auto-detects such I/O failures, confirms the path
failure via subsequent retries on the same path, and then reroutes the I/O request to
alternative paths to the same device. The application is completely unaware of the I/O
rerouting. Thus, failover is fully transparent to the application.

PowerPath has built-in algorithms that attempt to balance I/O load over all available, active
paths to a LUN. Again, this is transparent to the application and ensures optimal use of the
available I/O bandwidth. Any subsequent manual reconfiguration is required only in highly
specific situations, e.g., when new LUNs or new paths are provisioned to a host on the fly
and uptime requirements prohibit a subsequent host reboot.

PowerPath supports popular operating systems. Also, it supports all EMC-branded storage
arrays and qualified non-EMC arrays. Visit http://powerlink.emc.com and access the E-Lab
Interoperability Navigator page for more information about supported operating systems
and storage arrays. (EMC's Powerlink portal requires an EMC customer, partner, or employee
account.) PowerPath requires a multipathing license.

PowerPath/VE is used in VMware virtualized environments.



Part of the MCx architecture is Multicore RAID, which defines, manages, creates, and
maintains VNXe RAID protection. Sparing is the process of rebuilding a failed drive's data
onto a system-selected compatible drive. Multicore RAID allows for the permanent
replacement of failed drives with spare drives. When a drive fails, Multicore RAID spares
it to a suitable unbound drive, assigns that drive as part of the storage pool, and rebuilds
the data onto it.
When the failed drive is replaced, the new drive is unbound and available to be a future
spare.
Any spare drive is a possible replacement for a faulted drive.



Drive mobility, also known as Portable Drives, allows users to move VNXe disks within
the array. Multicore RAID allows physically moving drives and entire DAEs inside the
same frame from their existing locations to other slots or buses. Drives can be moved to
any slot within the array, as long as they fit within available space.

Shown here is an example of a Storage Pool that has two drives moved to new locations,
including a different enclosure.



FailSafe Networking (FSN) is configured on the VNXe storage systems by default. If one of
the ports in the storage system fails, FSN automatically reroutes the I/O internally to the
corresponding physical port on the peer Storage Processor.

VLANs, or virtual LANs, are a networking feature that allows a single physical network to be
segmented into multiple virtual networks. This provides several benefits, such as improved
security and restricting broadcast domains. Trunking is a feature that allows a single
network interface to carry traffic for multiple VLANs.

LACP: Link Aggregation is a technique for combining several links to enhance availability of
network access. It applies to a single SP, not across SPs. To implement Link Aggregation,
the network switches must support the IEEE 802.3ad standard. Only file access is
supported.



This lesson covers the storage efficiency features including Thin Provisioning and File
Deduplication and Compression.



Thin provisioning is the ability to present a server with more capacity than is actually
allocated within the storage system, essentially giving the host the illusion it has capacity
that is not physically allocated from the storage system. Physical storage is assigned to the
server in a capacity-on-demand fashion from the shared pool. When the amount of storage
consumed within the LUN approaches the limit of the current allocation, the system
allocates additional storage to the LUN from the storage pool.

Thin provisioning allows multiple LUNs to subscribe to a common storage capacity within a
pool. The remaining storage is available for other LUNs to use.

Thin provisioning allows the user to improve storage efficiency while reducing the time and
effort required to monitor and rebalance existing pool resources. Organizations can
purchase less storage capacity up-front, and increase available storage capacity by adding
disks as needed, according to actual storage usage.
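
As a quick worked illustration of oversubscription, the sketch below compares the capacity presented to hosts with what a thin pool has physically allocated; all figures are invented for the example and are not VNXe limits.

```python
# Hypothetical thin-provisioned pool: figures are illustrative only.
pool_capacity_gb = 10_000                     # usable capacity in the storage pool
lun_sizes_gb = [4_000, 3_000, 5_000]          # thin LUN sizes presented to hosts
allocated_gb = [1_200, 800, 1_500]            # space actually consumed so far

subscribed = sum(lun_sizes_gb)
used = sum(allocated_gb)

print(f"subscribed: {subscribed} GB ({subscribed / pool_capacity_gb:.0%} of pool)")
print(f"allocated:  {used} GB ({used / pool_capacity_gb:.0%} of pool)")
print(f"headroom before more disks are needed: {pool_capacity_gb - used} GB")
```

In this example the pool is 120 percent subscribed but only 35 percent physically allocated, which is exactly the situation thin provisioning is designed to exploit.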



File-level Deduplication and Compression combines two technologies to provide maximum
storage efficiency by eliminating redundant data. Hence, VNXe saves storage space while
minimizing the client impact on mission-critical files.

This feature processes file data, not metadata. If multiple files contain the same data but
different names, the files are deduplicated. Deduplicated files can also have different
permissions and timestamps.

Deduplication and compression operate on whole files that are stored in the file system. For
example, if there are 10 unique files in a file system that is being deduplicated, 10 unique
files will still exist, but the data will be compressed, yielding a space savings of up to 50
percent. On the other hand, if there are 10 identical copies of a file, 10 files will still exist,
but they will share the same file data. The one instance of the shared file is also
compressed, providing further space savings.

File-level deduplication provides relatively modest space savings. It does not require
substantial CPU and memory resources to implement.

Compression is often regarded as a separate mechanism from deduplication. However,
compression can also be viewed as variable, bit-level, intra-object deduplication.
Compression alters the way data is stored, mainly to improve storage efficiency. It is
relatively CPU-intensive but requires very little memory, so its overall resource footprint is
relatively modest.
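
The following back-of-the-envelope sketch illustrates the space accounting described above (ten identical copies sharing one compressed instance); the 50 percent compression ratio is the example figure from the text, not a guarantee.

```python
# Illustrative space savings from file-level deduplication plus compression.
copies = 10                 # identical copies of the same file in the file system
file_size_mb = 100          # logical size of each copy
compression_ratio = 0.5     # example ratio from the text (up to ~50% savings)

logical_mb = copies * file_size_mb                      # what clients see: 1000 MB
stored_mb = file_size_mb * compression_ratio            # one shared, compressed instance

print(f"logical data: {logical_mb} MB, physically stored: {stored_mb} MB")
print(f"savings: {1 - stored_mb / logical_mb:.0%}")     # 95% in this example
```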



VNXe storage systems support an optional FAST Cache consisting of a storage pool of
Flash disks configured to function as FAST Cache. Multicore FAST Cache extends the
MCx storage system's existing caching capacity for better system-wide performance. This
cache provides low latency and high I/O performance without requiring a large number
of Flash disks. With MCx, all writes go directly to the DRAM cache, thus considerably
reducing host latency. As a result, host writes are mirrored and protected directly from
the DRAM cache layer, increasing performance. Multicore Cache space is shared for
both writes and reads.
The FAST Cache is based on the locality of reference of the data set. By promoting the
data set to the FAST Cache, the storage system services any subsequent requests for
this data from the Flash disks that make up the FAST Cache, thus reducing the load on
the disks in the LUNs that contain the data (the underlying disks). FAST Cache consists
of one or more pairs of mirrored disks (RAID 1) and provides both read and write
caching.



FAST VP complements FAST Cache by optimizing storage pools on a regular, scheduled
basis. Customers can define how and when data is tiered using policies that dynamically
move the most active data to high-performance drives (e.g., Flash) and less active data to
high-capacity drives, all in 256 MB increments for both block and file data.
Moves data (in 256 MB slices) between tiers
Statistics are taken on every I/O, and the UI is updated hourly

Start High then Auto-Tier (Recommended):
Sets the preferred tier for initial placement to the highest-performing drives with
available space
Relocates the data in the LUNs based on the performance statistics and the auto-
tiering algorithm
Especially effective for LUNs that exhibit skew
Use Case: Initial allocation of a database needs high performance, but over time only
a small portion of the entire data set will be active at any given time.

Auto-Tier:
Sets initial data placement to Optimized for Pool, where the distribution of data is
based on the ratio of tiers in a Storage Pool
Relocates data within LUNs based on the performance statistics, such that data is
relocated among tiers according to I/O activity
Use Case: You are deploying a test application and you have no preference for initial
placement; however, you want some data to benefit from the higher tiers.

<Continued>



Highest Available Tier:
Selected for LUNs which, although not always the most active, require high levels of
performance whenever they are accessed
Starts with the hot slices first and places them in the highest available tier
Use Case: Portions of your database may be very active, but you need the entire data
set on Flash drives so you can experience high performance.

Lowest Available Tier:
Selected for LUNs that are not performance sensitive or response-time sensitive
Sets the preferred tier for the initial data placement and subsequent data relocation (if
applicable) to the most cost-effective disk drives with available space
Use Case: Archived research data from older projects.

No Data Movement:
Only selected after the LUN is created
No slices provisioned to the LUN will be relocated across tiers
Slices will remain in their current tier but can still be relocated within that tier
Use Case: Data will be heavily used, but other LUNs are set to the Highest Available
Tier. You can create a LUN in the highest tier and, since all slices are on Flash, you
can freeze those slices in that tier.
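
To make the policy differences concrete, here is a minimal Python sketch of how initial slice placement could differ by policy. The tier names, policy labels, and selection logic are simplified assumptions for illustration; they are not the VNXe tiering implementation.

```python
# Illustrative model of FAST VP initial-placement policies (assumed behavior, not VNXe code).
TIERS = ["extreme_performance", "performance", "capacity"]  # highest to lowest

def initial_tier(policy: str, free_slices_by_tier: dict) -> str:
    """Pick the tier for a new 256 MB slice under a given (simplified) policy."""
    if policy in ("start_high_then_auto", "highest_available"):
        # Prefer the highest-performing tier that still has free slices.
        for tier in TIERS:
            if free_slices_by_tier.get(tier, 0) > 0:
                return tier
    elif policy == "lowest_available":
        for tier in reversed(TIERS):
            if free_slices_by_tier.get(tier, 0) > 0:
                return tier
    elif policy == "auto_tier":
        # Roughly "Optimized for Pool": favor tiers in proportion to free capacity.
        total = sum(free_slices_by_tier.values())
        return max(free_slices_by_tier, key=lambda t: free_slices_by_tier[t] / total)
    raise ValueError(f"unknown policy or no free capacity: {policy}")

free = {"extreme_performance": 10, "performance": 400, "capacity": 2000}
for policy in ("start_high_then_auto", "auto_tier", "lowest_available"):
    print(policy, "->", initial_tier(policy, free))
```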



This lesson covers the VNXe performance features including FAST VP and FAST Cache.



VNXe systems include the Unified Snapshot feature. VNXe Unified Snapshots is a unified
technology (for both file and block) whose source can be a LUN, LUN Group, file system, or
a VMware Datastore. VNXe Unified Snapshots is based on Redirect on Write technology
(ROW). Unified Snapshots provide VNXe administrators with comprehensive scheduling and
policy capabilities.

This type of replication does not require replication interfaces to be configured, as no data
must be transferred over the external network. Local replication where the destination
storage resource is owned by a different SP than the source storage resource will be
completed over the internal CMI bus connecting the SPs. For local replication, a default
Local System replication connection is selected, which is pre-existing and does not need to
be manually configured.

Up to 256 snaps per LUN and 96 snaps per file system are supported.

<Continued>



In general, asynchronous replication operates in the following way:

1. When a replication session is established, two internal snapshots are created on each of
the source and destination storage resources. After the snapshots are created, Source
Snap A and B each contain a current point-in-time view of the Source LUN.

2. Data from Snap A is then copied to the empty Destination LUN. This is known as the
initial synchronization, and is a full copy.

3. Once this initial synchronization is completed, Destination Snap A is refreshed to reflect
the current state of the Destination LUN. At this point, Source Snap A and Destination
Snap A contain the same data, which is reflective of the point-in-time view of the Source
LUN at the time the initial synchronization began. Snap A is now the common base
between the source and destination LUNs.

4. As hosts continuously write to Source LUN A, the data in the LUN is changed.

5. At the next automatic or manual sync, Source Snap B is refreshed to reflect the current
point-in-time view of the Source LUN. All incremental changes since the time of the
previous sync are then copied from Source Snap B to the Destination LUN.

6. After this copy is complete, Destination Snap B is refreshed to reflect the current state
of the Destination LUN. Now Snap B is the common base between the source and
destination.
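
As a rough illustration of this common-base cycle, the following Python sketch models one asynchronous sync using in-memory dictionaries of blocks. The object names and update flow are simplified assumptions, not the VNXe replication engine.

```python
# Toy model of snapshot-based asynchronous replication (illustrative only).
import copy

class Lun:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block_number -> data

def snapshot(lun: Lun) -> Lun:
    """Point-in-time copy of a LUN's contents."""
    return Lun(copy.deepcopy(lun.blocks))

source = Lun({0: "a", 1: "b"})
dest = Lun()

# Initial sync: full copy from Source Snap A to the empty destination.
snap_a = snapshot(source)
dest.blocks = dict(snap_a.blocks)
dest_snap_a = snapshot(dest)          # common base is now Snap A

# Hosts keep writing to the source LUN.
source.blocks[1] = "B2"
source.blocks[2] = "c"

# Next sync: refresh Source Snap B, copy only the delta since the common base.
snap_b = snapshot(source)
delta = {k: v for k, v in snap_b.blocks.items() if dest_snap_a.blocks.get(k) != v}
dest.blocks.update(delta)
dest_snap_b = snapshot(dest)          # Snap B becomes the new common base

print("copied blocks:", sorted(delta))                              # [1, 2]
print("destination matches source:", dest.blocks == source.blocks)  # True
```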



VNXe Remote Replication enables the user to automatically maintain an asynchronous,
complete second copy of a LUN on a remote system over designated interfaces. The user
can manage replication sessions for different LUNs in Unisphere. File-level replication is
managed entirely within the VNXe environment and is driven from Unisphere. Block-level
replication for LUNs is configured and controlled by Unisphere and is a schedule-driven
synchronization.



RecoverPoint is an appliance-based replication and disaster recovery solution packaged as
either a physical or virtual appliance. In collateral you will see a physical RecoverPoint
appliance referenced by the acronym RPA and the virtual appliance referenced as vRPA.
Unless highlighted, the use of RPA in this course implies both RPA and vRPA. RecoverPoint
uses crash-consistent snapshots for data protection, allowing data to be rolled back to any
point in time. Synchronous and asynchronous replication are supported for LUNs and VMFS
datastores.

Typical use cases are deployments for disaster recovery, assurance testing, remote
backups, and multi-vendor replication.

RecoverPoint version 4.1 SP1 supports built-in splitter integration with VNXe3200 systems.
The splitter is a key component that splits, or copies, incoming writes, sending them to
both the RecoverPoint appliances and the VNXe3200 storage. Replication operations are
performed by the RPA or vRPA to ensure minimal performance impact on the VNXe3200
systems. This graphic illustrates a Local Protection implementation, as shown by the
production and replica volumes located on the same system. Remote Protection is also
supported.



This table shows a comparison of the VNXe replication solutions.



The VNXe series simplifies backup and restore operations. In a typical data center, backups
need to be performed on multiple application servers. VNXe storage systems allow the
application data to be stored at a central location, where the data can be backed up and
restored, if required. For hosts utilizing block-level storage on the VNXe, traditional backup
methods operate from the attached host, no differently than with locally attached storage.
For file-level storage, VNXe storage systems support NDMP versions 2 through 4 over the
network, but direct-attach NDMP is not supported. This means that the tape drives need to
be connected to a media server, and the VNXe system communicates with the media server
over the network. Also, when using deduplication on a file system, the files are backed up
in their compressed state when NDMP is used.

VNXe supports EMC Data Domain for backup and archive data, and EMC Avamar for
deduplicated backup and recovery.



This lesson covers the VNXe archival, security and integration features.



File-Level Retention - Enterprise (FLR-E) on the VNXe3200 allows businesses to practice
good data governance. It protects data content from changes made by users through CIFS,
NFS, and FTP, while still allowing a VNXe admin to delete an FLR-E-enabled file system (a
system warning appears and confirmation is required). Retention periods are set on a
per-file basis and are managed at the file level.

In FLR-E-enabled file systems, files that are in the locked state cannot be modified or
deleted. The path to a file in the locked state is also protected from modification, which
means a directory on a File-Level Retention-enabled file system cannot be renamed or
deleted unless it contains no protected files.



Function

When a file is written and saved (scan on update) or read for the first time (scan on read),
the VNXe3200 places a block on that file until virus checking has been performed. It
immediately issues a remote procedure call (RPC) to a virus-checking engine. This could be
a single engine or many, depending on the volume of data being protected, thus providing
a highly scalable solution. Because EMC can easily use multiple virus-checking servers, the
performance impact of virus checking with VNXe is a small fraction of the total throughput
of the system and of systems that use a single virus-checking server.

On receipt of the request, an access is initiated from a filter driver, and the virus-checking
server performs a standard check on the file. Understand that standard virus checkers
request only a small amount of data (signatures of a few kilobytes each) to establish the
presence of a virus, so the overhead is relatively small. The exception to this is with
compressed files, in which case the entire file must be shipped across the network. The
implementation may be through the normal user network; in the case of heavy-load
environments, you may wish to dedicate a network interface to the virus-checking server
farm. If a virus is detected, the user and the Administrator will see a customizable pop-up
message.

The scan-on-read functionality is triggered when a file is opened for read that was last
scanned before a set access time. This access time is typically set when a new virus-
definition file is loaded, to rescan old files (once) that may contain undetected viruses. You
may also wish, under certain circumstances, to run anti-virus in scan-on-read mode, for
instance after a restore of data that may be infected with a latent virus.

With the VNXe3200, as part of the base software, the Common AntiVirus Agent (CAVA)
solution is a component of the Event Enabler infrastructure, an alerting API that indicates to
external applications when an event (e.g., a file save) occurs within VNXe. The Event
Enabler framework is also used for quota management tool integration.

<Continued>
The AntiVirus Sizing Tool is used after installation to identify the need for additional
anti-virus engines to ensure performance of the system. The AntiVirus Sizing Tool comes
with the Common AntiVirus Agent, which is Windows Management Instrumentation
(WMI)-enabled. There is also a pre-install sizing tool available for initial anti-virus sizing.

Scalability

You can scale the solution by adding virus-checking servers as required. Your server
vendors should be able to provide you with an understanding of how many dedicated
servers you would need. You can also use different server types (e.g., McAfee, Symantec,
Trend Micro, CA, Sophos and Kaspersky) concurrently, as per their original anti-virus
implementation.

Performance of anti-virus solutions tends to be measured in server overhead, and comes
with the typical "your mileage may vary" qualification, depending on application and
workload.

Partnerships: EMC has an ISV Program agreement with all five of the major anti-virus
vendors. Utilizing CAVA is the only anti-virus checking method supported by major
virus-checking vendors for network shares.



The VNXe2 series introduces a 64-bit scalable file system, known as UFS64. Initially, the new
file system includes features focused on virtualization workloads. For example, it scales up
to 64 TB (a limit in VMware) and supports VAAI VMDK snaps/fast clones for VDI. However,
array snapshots and deduplication are not supported for 64-bit VMware NFS datastores.
UFS64 allows users to extend and shrink both thick and thin VMware NFS Datastores.
UFS64 extend and shrink operations are transparent to the client, meaning the array can
still service I/O to a client during them.

This 64-bit file system also brings better fault tolerance. The VNXe2 series concurrently
supports both 64-bit and 32-bit file systems. The 64-bit file system is NFS only and is
aimed at virtualization use cases. The 32-bit file system is the default file system for the
VNXe2 series.



This module covered the VNXe features, capabilities, and key usage of these features in
the IT environment.



This module provides an overview of basic VNXe management options and lists the
management extension options for VMware, the on-array SMI-S API provider, and AppSync
integration.



Unisphere for VNXe provides tools to configure system settings, view system status, and
manage a VNXe storage system. Unisphere enables users to easily configure LUNs to meet
the specific needs of their applications, host operating systems, and users. Unisphere is
completely web-enabled for remote management of the storage environment.

Unisphere wizards simplify storage provisioning by automatically implementing best
practices as users provision storage. This optimizes system performance and minimizes
costs. Troubleshooting is also simplified by easily identifying failed components and by
providing direct access to EMC support options.



Administration of the VNXe system can also be performed with a command line interface
(CLI). Administrative users must authenticate to the VNXe when using the CLI interface as
well. Unisphere CLI (uemcli) enables users to run commands on a system from a
prompt on a Microsoft Windows or UNIX/Linux host. Unisphere CLI is intended for
advanced users who want to use commands in scripts for automating routine tasks as
shown in the slide. For example, create a script to create a snapshot of a LUN and delete
the older snapshots created before it, as sketched below.
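
As a hedged illustration of such a script, here is a minimal Python sketch that shells out to uemcli. The array address, credentials, resource IDs, and uemcli switches below are placeholders; the exact syntax varies by OE release, so verify it against the Unisphere CLI User Guide before use.

```python
# Illustrative automation around Unisphere CLI (uemcli); command arguments are assumed,
# not verified syntax for any particular VNXe OE version.
import subprocess

ARRAY = "10.0.0.50"                               # hypothetical VNXe management IP
CREDS = ["-u", "admin", "-p", "MyPassword"]       # hypothetical credentials

def uemcli(*args: str) -> str:
    """Run a uemcli command against the array and return its stdout."""
    cmd = ["uemcli", "-d", ARRAY, *CREDS, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Create a new snapshot of a LUN (resource ID and switches are placeholders).
uemcli("/prot/snap", "create", "-source", "sv_1", "-name", "nightly")

# List existing snapshots, then delete any older ones (output parsing is deliberately
# simplified; real uemcli output would need proper field extraction).
listing = uemcli("/prot/snap", "-source", "sv_1", "show")
old_snap_ids = [line.split()[-1] for line in listing.splitlines() if "ID" in line][:-1]
for snap_id in old_snap_ids:
    uemcli("/prot/snap", "-id", snap_id, "delete")
```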



Unisphere Remote is a centralized easy-to-use network application that provides users
with a way to centrally monitor their VNXe storage systems. Unisphere Remote enables
the user to:

Monitor up to 1,000 VNXe systems from a single instance.

View aggregated alerts, health, capacity, and CPU usage for multiple systems.

Control access to the monitoring interface by setting up local Unisphere remote users
or integrating existing LDAP enabled users and groups.

Organize views of the VNXe nodes in logical ways, including by location, type,
department, etc.

Launch Unisphere from Unisphere Remote to manage an individual VNXe system.

The Unisphere Remote environment consists of a Unisphere Remote virtual server appliance
running in a VMware virtualized environment, one or more VNXe systems, and a remote
workstation to access the Unisphere Remote server.



Beyond the things a customer can do on their own, EMC Customer Support can monitor a
VNXe3200 24x7 through the use of EMC Secure Remote Support (ESRS).

ESRS enables the entire team of EMC customer support professionals to proactively
monitor, and through careful coordination repair when needed, your customer's VNXe
information infrastructure as quickly as possible. ESRS uses industry-standard security
precautions and monitors a VNXe 24x7 with expedited response to a customer's needs, all
in an effort to free up the customer's time to focus on their business initiatives.

ESRS provides:
Automation features that include 24x7 remote monitoring to ensure EMC support
personnel are aware of potential problems. The IP connection provides the necessary
bandwidth and speed to ensure EMC can quickly remediate any issue.
Authentication that includes Advanced Encryption Standard (AES) 256-bit encryption
and RSA digital certificates to protect customer data.
Authorization features that empower customers to allow or deny remote support
sessions based on security requirements, and to assign privileges and apply policy
filters to ensure only authorized personnel are making decisions concerning remote
support activities.
Audit capabilities that enable detailed and easy review of remote support activities to
address federal, industry, and internal business requirements.



EMC Virtual Storage Integrator (VSI) is targeted towards the VMware administrator. VSI
supports VNXe provisioning within vCenter, provides full visibility to physical storage, and
increases management efficiency. VMware administrators can utilize VSI for tasks such as
creating VMFS and NFS datastores and RDM volumes, Access Control Utility support, and
many other storage management functions from their native VMware GUI.



Virtual VNX (vVNX) offers the same core and extended features as the VNXe3200, at the
same Operating Environment level, with certain exceptions. As a single-node system with
only one Storage Processor (SP), a High Availability environment cannot be supported. The
vVNX is offered with the following front-end services: block and file storage, snapshots,
local and remote replication, and file deduplication and compression. This slide shows a
table of the vVNX features.



vStorage APIs for Storage Awareness (VASA) are VMware-defined APIs that storage
vendors can implement to obtain and display storage information through vCenter. This
visibility makes it easier for virtualization and storage administrators to make decisions
about how datastores should be maintained, for example, choosing which disk should
host a particular virtual machine (VM).

VMware Aware Integration (VAI) allows end-to-end discovery of the VMware environment
from the Unisphere GUI. The user can import and view VMware Virtual Centers,
ESXi servers, virtual machines, and VM disks, and view their relationships. VAI also allows
users to create, manage, and configure VMware datastores on ESXi servers from
Unisphere.

<Continued>



VMware vStorage APIs for Array Integration (VAAI)

For Block connections:
Hardware locking operations
Hardware-assisted locking provides an alternate method to protect the
metadata for VMFS cluster file systems and improve the scalability of large
ESXi servers sharing a VMFS datastore. Atomic Test & Set (ATS) allows
locking at the block level of a logical unit (LU) instead of locking a whole
LUN. Hardware-assisted locking provides a much more efficient method to
avoid retries for getting a lock when many ESXi servers are sharing the
same datastore. It offloads the lock mechanism to the VNXe3200, and then
the array performs the lock at a very granular level. This permits significant
scalability without compromising the integrity of the VMFS-shared storage
pool metadata when a datastore is shared on a VMware cluster.
Bulk Zero Acceleration
This feature enables the VNXe3200 to zero out a large number of blocks to
speed up virtual machine provisioning. The benefit is that with Block Zero
the process of writing zeros is offloaded to the storage array. Redundant and
repetitive write commands are eliminated to reduce the server load and the
I/O load between the server and storage. This results in faster capacity
allocation.
Full copy acceleration
This feature enables the VNXe3200 to make full copies of data within the
array without the need for the VMware ESXi server to read and write the
data. The result is the copy processing is faster. The server workload and
I/O load between the server and storage are reduced.
<Continued>



VMware vStorage APIs for Array Integration (VAAI)
Thin Provisioning
With this VAAI feature, the storage device is informed that blocks are no longer
used. This leads to more accurate reporting of disk space consumption and
enables the reclamation of unused blocks on the thin LUN.

For NFS connections, VAAI allows the VNXe series to be fully optimized for virtualized
environments. This technology offloads VMware storage-related functions from the server
to the storage system, enabling more efficient use of server and network resources for
increased performance and consolidation.



The VNXe3200 systems provide an on-array SMI-S provider which can be used for both File
and Block operations. Any application which has the capability to leverage the SMI-S API
can be used to discover and configure the supported operations on the VNXe3200 systems.
The SMI-S API (Storage Management Initiative Specification) was developed and is
maintained by SNIA (Storage Networking Industry Association).
Supports over 75 software products and 800 hardware platforms
Provides an industry standard that defines a method to manage heterogeneous
storage environments

The SMI-S 1.5-compliant API is implemented in the VNXe3200.

Why support SMI-S?


Third party applications can leverage the SMI-S API to provision storage
SMI-S is the primary storage management interface for Windows Server 2012 and
System Center Virtual Machine Manager (SCVMM)
SMI-S is based on open standards:
WBEM: Web-Based Enterprise Management
CIM: Common Information Model
CIM/XML as the payload and HTTPS as the transport

<Continued>



The on-array provider has two components, a Block component and a File component. The
Block component can discover the block array, storage pools, LUNs, LUN Groups, and
snapshots. The SMI-S API allows a third-party application to modify LUNs, LUN Groups, and
snapshots on LUNs and LUN Groups.

For the File component, you can discover and configure NAS Servers, File Systems, and File
Shares. You can assign ACL privileges on file shares.

SMI-S Profiles describe what kinds of actions you can perform on the VNXe3200 array. An
example is the iSCSI Target Ports Profile, where by using the SMI-S API you could:
Discover iSCSI ports in the VNXe3200 system
Identify IP addresses tied to an iSCSI interface
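
To show what using the SMI-S API looks like in practice, here is a minimal Python sketch using the open-source pywbem client to enumerate CIM instances from an SMI-S provider. The endpoint, namespace, and class names are assumptions chosen for illustration; the classes actually exposed depend on the profiles the provider implements.

```python
# Illustrative SMI-S (CIM/XML over HTTPS) query with pywbem; the address, namespace,
# and class names below are assumptions, not verified against the VNXe3200 provider.
import pywbem

conn = pywbem.WBEMConnection(
    "https://10.0.0.50:5989",          # hypothetical array management address
    ("admin", "MyPassword"),           # hypothetical credentials
    default_namespace="root/emc",      # assumed vendor namespace
    no_verification=True,              # lab-only: skip TLS certificate checks
)

# Enumerate storage pools advertised by the provider.
for pool in conn.EnumerateInstances("CIM_StoragePool"):
    print(pool["ElementName"], pool["TotalManagedSpace"])

# Enumerate iSCSI protocol endpoints (roughly the iSCSI Target Ports Profile view).
for endpoint in conn.EnumerateInstances("CIM_iSCSIProtocolEndpoint"):
    print(endpoint["Name"])
```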

Limitations:
Non-storage content (e.g., network configuration and ESRS)
Advanced features that are not covered in latest SMI-S 1.5 standard (e.g., FAST
Cache and FAST VP)



AppSync Copy Management uses the concept of service plans (protection policies), which
are designed to meet specific service level requirements. The service plans are fully
customizable and can be applied to your application with a single click. As a result, AppSync
is very easy to use for application administrators as well as storage administrators, because
it does not require prior knowledge of data replication technology. AppSync supports
Exchange, SQL, and Oracle environments.

Some other advantages of AppSync are the lightweight agents and the added application
and Exchange Database Availability Group (DAG) awareness.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 64


This module covered the VNXe management options for the VNXe environment, VMware
management extensions, the on-array SMI-S provider options and AppSync Copy
Management options.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 65


This course covered the VNXe1600 and VNXe3200 models, the VNXe Series Solution, the
VNXe Product architecture and components, available software options for the VNXe2
Series, the features and functionality of the VNXe2 series and the management options
available for the VNXe2 Series.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 66


This appendix covers several storage topics for review: Fibre Channel Overview, iSCSI
Fundamentals and Network Requirements, VNXe RAID Levels, a Storage Pool Overview,
Advanced Networking, and the File-Level Retention Workflow.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 67


Fibre Channel is a serial data transfer interface that operates over copper wire and/or
optical fiber at data rates up to approximately 800 MB/s (an 8 Gb/s connection).

Networking and I/O protocols (such as SCSI commands) are mapped to Fibre Channel
constructs, and then encapsulated and transported within Fibre Channel frames. This
process allows high-speed transfer of multiple protocols over the same physical interface.

Fibre Channel systems are assembled from familiar types of components: adapters, hubs,
switches and storage devices.

Host Bus Adapters (HBAs) are installed in computers and servers in the same manner as
SCSI HBAs.

Fibre Channel switches provide full bandwidth connections for highly scalable systems
without a practical limit to the number of connections supported (16 million addresses are
possible).

The word fiber indicates the physical media. The word fibre indicates the Fibre Channel
protocol and standards.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 68


Note: ESXi server refers to the ESXi instance on a vSphere server. Key iSCSI terms (a
command-line example follows the list):
1. Initiator = an iSCSI client; it sends SCSI commands encapsulated in IP packets
2. Target = an iSCSI server, usually an array
3. Initiator port = the endpoint on the initiator through which the data of an iSCSI
session flows
4. Network portal = an IP address or grouping of IP addresses used by an iSCSI initiator
or target
5. Connection = carries control information, SCSI commands, and the data being read or written
6. Session = one or more connections that form an initiator-target relationship:

Link Aggregation = aggregates physical links to transmit a given connection. Link
selection is done by hashing the MAC addresses or the IP address.

Multiple Connections per Session (MC/S) = multiple connections within a single session
to an iSCSI target

MPIO = multiple sessions, each with one or more connections to any iSCSI target
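
To put these terms in context, the following sketch shows how a Linux host using the
standard open-iscsi tools discovers targets through a network portal and then logs in to
create an initiator-target session; the portal IP address and target IQN are placeholders:

    # Discover targets advertised by a network portal (placeholder IP)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50

    # Log in to one discovered target, establishing an iSCSI session
    iscsiadm -m node -T iqn.1992-04.com.emc:example-target -p 192.168.1.50 --login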

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 69


LAN configuration allows Layer 2 (switched) and Layer 3 (routed) networks. Layer 2
networks are recommended over Layer 3 networks.

The network should be dedicated solely to the iSCSI configuration. For performance
reasons, EMC recommends that no traffic apart from iSCSI traffic should be carried over
the network. If using MDS switches, EMC recommends creating a dedicated VSAN for all
iSCSI traffic.

CAT5 network cables are supported for distances up to 100 meters. If cabling is to
exceed 100 meters, you must use CAT6 network cables.

The network must be a well-engineered network with no packet loss or packet
duplication. When planning the network, care must be taken to ensure that the
utilized throughput will never exceed the available bandwidth.

VLAN tagging protocol is supported. Link Aggregation, also known as NIC teaming, is
not.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 70


This table provides a brief summary of RAID levels 5, 6, and 1/0, which are supported by
the VNXe. Please take a moment to pause and read through them all.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 71


The VNXe3200 provides the ability to mix RAID types within a storage pool.

Here is a typical use case for three tiers in a pool:

Traditional RAID 5 (4+1) for Flash, RAID 5 (8+1) for SAS, and RAID 6 (14+2) for NL-SAS
means customers will get high performance AND maximum efficiency.

This complements additional RAID types, seen on the next slide, and further improves
storage pool efficiency.

This storage pool capability dramatically improves storage capacity utilization.

The 8+1 illustration is simply to show flexible options and is not a recommendation.
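
As a rough sketch of why these widths matter for efficiency, the usable fraction of raw
capacity in each tier follows directly from its data-to-parity ratio (Python, illustrative only):

    # Usable capacity fraction for a RAID group of the given width
    def usable_fraction(data_disks, parity_disks):
        return data_disks / (data_disks + parity_disks)

    print(usable_fraction(4, 1))    # RAID 5 (4+1)  -> 0.80
    print(usable_fraction(8, 1))    # RAID 5 (8+1)  -> ~0.89
    print(usable_fraction(14, 2))   # RAID 6 (14+2) -> 0.875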

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 72


FailSafe Networking (FSN) is configured on the VNXe storage systems by default. If one of
the ports in the storage system fails, FSN automatically reroutes the I/O internally to the
corresponding physical port on the peer Storage Processor. For example, the user can have
port eth2 on SPA plugged into switch A and port eth2 on SPB plugged into switch B, if
both of these switches are on the same subnet. If a host application has data going over
port eth2 on SPA and switch A fails, FSN will route the I/O traffic over port eth2 on SPB
using switch B. FSN therefore helps provide switch-level redundancy in the
environment.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 73


Link Aggregation is a technique for combining several links to enhance the availability of
network access. It applies to a single SP, not across SPs. To implement Link Aggregation,
the network switches must support the IEEE 802.3ad standard. Only file access is
supported.

Automatic configuration with statistical load balancing is based on source and destination
MAC addresses. The Link Aggregation Control Protocol (LACP) has been implemented,
allowing peers to provide any required load balancing. Only full duplex operation is currently
supported.
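
On the switch side, the ports cabled to a single SP are typically grouped into an
802.3ad/LACP port channel. A generic Cisco-IOS-style sketch is shown below; the
interface and channel-group numbers are placeholders, and exact commands vary by
switch vendor:

    ! Bundle the two SP-facing ports into an LACP (802.3ad) port channel
    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active
    !
    ! The resulting logical interface carries the aggregated traffic
    interface Port-channel1
     switchport mode trunk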

VLANs create multiple broadcast domains. An SP's Gigabit Ethernet ports can be
connected to different switches, and each of these switches can be in a different subnet and
a different VLAN. Configurations need to be symmetrical for both SPs for all failover scenarios
to work.

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 74


VLANs, or virtual LANs, are a networking feature that allows a single physical network to
be segmented into multiple virtual networks. This provides several benefits, such as
improved security and restricting broadcast domains. Trunking is a feature that allows a
single network interface to carry traffic for multiple VLANs. Without trunking, each port
on a switch or network device would be restricted to a single VLAN, requiring multiple
interfaces for devices that need to exist on multiple networks simultaneously. Each VLAN
is assigned a unique number, or ID, within a network, and when devices are configured
to utilize trunking, the VLAN ID is inserted into the Ethernet frame so that the switch
knows on which network the data belongs.
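
To make the tagging concrete, the inserted 802.1Q tag is just four bytes: the TPID value
0x8100 followed by a Tag Control Information field whose low 12 bits carry the VLAN ID.
A small illustrative Python sketch:

    # Build the 4-byte 802.1Q tag inserted into an Ethernet frame
    def dot1q_tag(vlan_id, priority=0):
        tci = (priority << 13) | (vlan_id & 0x0FFF)
        return (0x8100).to_bytes(2, 'big') + tci.to_bytes(2, 'big')

    print(dot1q_tag(100).hex())   # '81000064' for VLAN ID 100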

On VNXe, VLAN IDs are assigned to an IP interface/configuration. The IP interface can be
configured on a single Ethernet port or an LACP port group. When configuring multiple
IP interfaces on an Ethernet port or port group, each IP interface can belong to a
different VLAN.
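
For tagged frames to reach those interfaces, the switch port (or LACP port channel) facing
the SP is typically configured as a trunk that allows the relevant VLANs. Another generic
Cisco-IOS-style sketch with placeholder interface and VLAN ID values:

    ! Allow the VLANs used by the VNXe IP interfaces on the SP-facing trunk
    interface GigabitEthernet0/3
     switchport mode trunk
     switchport trunk allowed vlan 100,200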

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 75


File-Level Retention allows you to set file-based permissions on a file system to limit write
access until a specified retention date. File-Level Retention is enabled on a specified file
system at creation time. When a new file system is created and enabled for file-level
retention, it is always marked as a File-Level Retention-enabled file system. After a file
system is created and enabled with File-Level Retention, you can apply protection on a per-
file basis.

A file in a File-Level Retention-enabled file system is always in one of four possible states,
based on the file's last access time (atime) and read-only status: not locked, locked,
append-only, or expired. A file that is not locked is treated exactly like a file in a file system
that is not enabled for File-Level Retention; it can be renamed, modified, or deleted. You manage files
in the locked state by setting retention dates that, until the dates have passed, prevent
the files from being modified or deleted. File-Level Retention files can be grouped by
directory or batch process, thus enabling you to manage the file archives on a file-system
basis, or to run a script to locate and delete files in the expired state.

File states:
Not locked: Normal file
Locked (WORM): File-Level Retention-enabled files; files cannot be deleted, renamed,
modified, or appended to
Append-only: Files cannot be deleted or renamed; existing data cannot be modified, but
new data can be added
Expired: Files cannot be renamed, modified, or appended to, but can be deleted or
re-locked

<Continued>

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 76


Transitions from one state to another occur as follows (a locking example follows this list):
A file that is not File-Level Retention enabled is committed to the locked (WORM) state
by setting a retention date/time (writing the atime [access time]) and making the file
read-only; it then becomes a File-Level Retention file
The retention date/time (atime) of a locked file can be increased, provided the new
atime is greater than the current File-Level Retention clock time
When the File-Level Retention clock time passes the file's retention date/time (atime),
the file becomes expired
A locked file can be placed in the append-only state, where new data can be appended
(e.g., log files)
A file must be expired before it can be deleted
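
Over NFS, the lock step in the first transition above is conventionally performed by writing
a future access time and then removing write permission; a minimal sketch with a
placeholder date and file name:

    # Set the retention date by writing a future atime, then remove write
    # permission to commit the file to the locked (WORM) state
    touch -a -t 203012311200 quarterly-report.pdf
    chmod a-w quarterly-report.pdf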

Copyright 2015 EMC Corporation. All rights reserved. VNXe Fundamentals 77
