Copyright 2015 EMC Corporation. All Rights Reserved. Published in the USA. EMC believes the information in this publication is accurate
as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. The trademarks,
logos, and service marks (collectively "Trademarks") appearing in this publication are the property of EMC Corporation and other parties.
Nothing contained in this publication should be construed as granting any license or right to use any Trademark without the prior written
permission of the party that owns the Trademark.
EMC, EMC AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica,
Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Bus-Tech, Captiva, Catalog Solution,
C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert,
CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere,
Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge,
Data Protection Suite, Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture,
DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, ECS, eInput, E-Lab, Elastic Cloud Storage,
EmailXaminer, EmailXtender, EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST,
FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase,
Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Kazeon, EMC LifeLine, Mainframe
Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Multi-Band
Deduplication, Navisphere, Netstorage, NetWorker, nLayers, EMC OnCourse, OnAlert, OpenScale, Petrocloud, PixTools, Powerlink,
PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC
RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager,
ScaleIO, Smarts, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate,
SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale,
Unisphere, Universal Data Consistency, Vblock, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning,
Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET,
VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise
Storage.
The VNXe series of storage platforms has been engineered to make it simple to deploy and
manage local application workloads for small and medium business environments, remote
or branch offices, and Federal deployments, while maintaining required levels of protection
and availability for critical data.
VNXe is optimized for virtual environments, with integration and features that streamline
operations and leverage virtual server deployments at remote sites.
The VNXe series takes full advantage of the latest Intel multi-core technology with the
introduction of MCx.
The VNXe1600 is an entry-level platform providing purpose-built Block storage using the
new Ivy Bridge-based motherboard.
The MCx architecture makes more effective use of threads and resources rather than simply
using more sockets and cores at higher clock rates. It's similar to the effect of
hyper-threading on processors in general.
These systems are architected to automate the use of a fast layer of SSD drives to boost
performance and to auto-tier based on hot-spot recognition in the system using EMC's FAST
Suite.
For file services, all models except the VNXe1600 provide NAS services to both CIFS and
NFS clients.
For block services, the VNXe series provides storage access via the Fibre Channel and iSCSI
protocols. However, the Pre-MCx models support only iSCSI services for block storage.
While the MCx and Pre-MCx models provide many of the same functions, some of the
terminology has changed and is represented on this slide.
Additionally, while the Pre-MCx models provided wizards for Microsoft Exchange and Hyper-
V provisioning, with MCx, these needs can now be serviced from the Microsoft application
using the EMC Storage Integrator for Microsoft Windows (which uses MMC plug-ins to
provision storage for Microsoft Windows Server, SharePoint and Exchange).
EMC Unisphere Remote provides simultaneous reporting and alerting for many VNXe
storage platforms, enabling central IT to monitor individual capacity utilization and system
health, and to access an individual array for intervention.
Integration with Avamar provides the centralized, edge-to-core backup and recovery
capabilities that
organizations require for their business-critical data without over-taxing local, network, and
data center resources.
Storage Processors (SPs) carry out the tasks of saving and retrieving block and file
data. SPs utilize I/O modules to provide connectivity to hosts and 6 Gb/s Serial
Attached SCSI (SAS) to connect to disks in Disk Array Enclosures (SPs will be covered
in detail in following slides).
Disk Processor Enclosures (DPEs) house Storage Processors (SPs), the first tray of
disks, and I/O interface modules. DPEs connect to Disk Array Enclosures. All
components within the enclosure are redundant and highly available.
Disk Array Enclosures (DAEs) house the non-volatile hard and Flash drives used in
the VNXe storage systems.
In the Pre-MCx models, each SP houses the AC power supply, fan modules, a battery
backup unit (BBU), a USB eFlash device, and a 16 GB solid-state drive (SSD). The SSD and
the USB eFlash device contain the operating environment, root and swap areas, and other
data. The SSD is located inside each SP.
In the MCx models, each SP houses the fan modules, AC power supply, a battery backup
unit (BBU), management port, service port, the optional I/O module for FC connections, and
the mSATA drive. The 32 GB MLC Flash mSATA device (located inside each SP) is the
system boot device, programmed with standard BIOS bootable partitioning. mSATA
replaces the USB and SSD boot flash present in the Pre-MCx models. Contained in the
mSATA are the partitions for boot, root, cores, swap, Linux PMP (Permanent Memory
Persistence, or lxPMP), and the image repository for firmware.
12 x 3.5-inch drives
25 x 2.5-inch drives
Disk Array Enclosures (DAEs) house the non-volatile hard and Flash drives used in the
VNXe storage systems. All DAEs accommodate two 4-lane, 6 Gb/s SAS back-end ports, and
two Power Supply/Cooling modules, PS A and PS B.
The rear view of the DAE shows the redundant Power Supplies and Link Control Cards
(LCCs).
The LCC's main function is to act as a SAS expander and provide enclosure services for all
drive slots. The LCCs in a DAE connect to the DPE and to other DAEs with 6 Gb/s SAS
cables. Each LCC contains a primary port and an expansion port, independently monitors
the environmental status of the entire enclosure, and communicates that status to the SPs.
The primary port is indicated by two circles and is used when adding this DAE to an existing
configuration. The expansion port is indicated by two diamonds and is used to connect
additional DAEs to this DAE.
The DAE is managed by the system software. The DAEs can house up to the same number
of drives as the DPE, 25 x 2.5-inch (2U) or 12 x 3.5-inch (2U) drives, and support Flash,
SAS, and NL-SAS drives. The slots are numbered 0 through 11 for the 12-drive DAE, and 0
through 24 for the 25-drive DAE.
10 GbE connectivity has also been added, which delivers 10 GbE or 1 GbE speeds.
In addition, iSCSI is faster and plugs into the system directly rather than going through a
file layer, as was the case with the VNXe3150.
The MCx model supports native block front-end connectivity using the 8 Gb FC Block I/O
Module. There is a maximum of four Fibre Channel Ports per SP. The same type of eSLIC
must be installed on both SPA and SPB (i.e. both SPs must have a Fibre Channel I/O
module or both must be blank) and a symmetrical configuration is needed on both SPs.
Only the VNXe3200 offers traditional NAS connectivity via NFS and CIFS protocols. Both
models support the Fibre Channel and iSCSI block access protocols.
Multi-core optimization delivers efficient performance through multicore stack design. MCx
is an EMC trademark for Multicore everything:
MCx is an important piece of the VNXe unified storage architecture for the next decade
Robust, enterprise ready storage OS
MCx unleashing the power of all VNX cores
Linear performance scaling across all cores
Enables better leverage of intelligent data services
MCx performance and functionality improvements:
Multicore Cache (MCC)
Multicore FAST Cache (MCF)
Multicore RAID (MCR)
The multicore initiative, MCx, is a re-architecture project that redesigns the core Block OE
stack within the VNXe series to ensure optimum performance at high scale. This is achieved
with an adaptive architecture.
Multicore Cache, Multicore FAST Cache and Multicore RAID are key engineering
enhancements.
In addition, MCC removes the need to manually separate space for Read vs. Write Cache so
there is no management overhead in ensuring the cache is working in the most effective
way regardless of the IO mix coming into the system.
Multicore FAST Cache acts as an extension of the VNXe series Multicore Cache (DRAM).
FAST Cache on the system can be configured using the Flash-optimized Flash drives.
Options include:
For deep VMware-level insight and analysis, EMC Storage Analytics is available as a
vCenter Operations Manager (vC Ops) option, using the EMC Adapter for VNXe.
RecoverPoint Advance Protection for VNXe3200: includes the write splitter for
RecoverPoint EX, and provides local and remote Continuous Data Protection for recovery
to any point in time.
Free EMC Storage Integrator (ESI) for Windows: provisions storage via Microsoft
interfaces.
Free Virtual Storage Integrator (VSI): allows VMware administrators to manage
VNXe3200 storage from within VMware vCenter. The VSI plug-in supports VMware
storage management, provisioning, and hardware off-load features.
AppSync Copy Management: fast copy and rapid restores of VMware, Exchange, SQL,
SharePoint, Oracle, and more.
EMC Storage Analytics: powerful monitoring and analytics tool for VMware vRealize
Operations Manager (uses the EMC Adapter for VNXe).
PowerPath: intelligent load balancing and multi-pathing software for networked storage
environments.
The following slides discuss some of the key integration capabilities of VNXe arrays.
PowerPath has built-in algorithms that attempt to balance I/O load over all available, active
paths to a LUN. Again, this is transparent to the application and ensures optimal use of the
available I/O bandwidth. Any subsequent manual reconfiguration is required only in highly
specific situations, e.g. when new LUNs or new paths are provisioned to a host on the fly,
and uptime requirements prohibit a subsequent host reboot.
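As a rough illustration of that idea (and not PowerPath's actual, proprietary algorithm), the following Python sketch picks the least-busy active path to a LUN; the path names and counters are hypothetical.

from dataclasses import dataclass

@dataclass
class Path:
    name: str             # hypothetical path identifier, e.g. "SPA-port0"
    active: bool          # path is alive and usable
    outstanding_ios: int  # I/Os currently queued on this path

def pick_path(paths):
    """Return the active path with the fewest outstanding I/Os."""
    candidates = [p for p in paths if p.active]
    if not candidates:
        raise RuntimeError("no active paths to the LUN")
    return min(candidates, key=lambda p: p.outstanding_ios)

paths = [Path("SPA-port0", True, 12), Path("SPA-port1", True, 3), Path("SPB-port0", False, 0)]
print(pick_path(paths).name)   # -> SPA-port1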
PowerPath supports popular operating systems. It also supports all EMC-branded storage
arrays and qualified non-EMC arrays. Visit http://powerlink.emc.com and access the E-Lab
Interoperability Navigator page for more information about supported operating systems
and storage arrays. (EMC's Powerlink portal requires an EMC customer, partner, or
employee account.) PowerPath requires a multipathing license.
Shown here is an example of a Storage Pool that has two drives moved to new locations,
including a different enclosure.
VLANs, or virtual LANs, are a networking feature that allows a single physical network to be
segmented into multiple virtual networks. This provides several benefits, such as improved
security and restricting broadcast domains. Trunking is a feature that allows a single
network interface to carry traffic for multiple VLANs.
LACP - Link Aggregation is a technique for combining several links to enhance the
availability of network access. It applies to a single SP, not across SPs. To implement Link
Aggregation, the network switches must support the IEEE 802.3ad standard. Only file
access is supported.
Thin provisioning allows multiple LUNs to subscribe to a common storage capacity within a
pool; physical capacity is consumed only as data is written, and the remaining storage is
available for other LUNs to use.
Thin provisioning allows the user to improve storage efficiency while reducing the time and
effort required to monitor and rebalance existing pool resources. Organizations can
purchase less storage capacity up-front, and increase available storage capacity by adding
disks as needed, according to actual storage usage.
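The oversubscription arithmetic behind thin provisioning is straightforward; this short Python example uses hypothetical capacities that are not taken from any particular VNXe configuration.

pool_physical_tb  = 20.0                    # usable capacity in the pool
lun_subscribed_tb = [10.0, 8.0, 6.0, 6.0]   # thin LUN sizes presented to hosts
lun_allocated_tb  = [3.5, 2.0, 1.2, 0.8]    # space actually consumed by host writes

subscribed = sum(lun_subscribed_tb)         # 30.0 TB presented to hosts
allocated  = sum(lun_allocated_tb)          # 7.5 TB physically consumed
print(f"Oversubscription ratio: {subscribed / pool_physical_tb:.2f}:1")  # 1.50:1
print(f"Free physical capacity: {pool_physical_tb - allocated:.1f} TB")  # 12.5 TB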
This feature processes file data, not metadata. If multiple files contain the same data but
have different names, the files are deduplicated. Deduplicated files can also have different
permissions and timestamps.
Deduplication and compression operate on whole files that are stored in the file system. For
example, if there are 10 unique files in a file system that is being deduplicated, 10 unique
files will still exist, but the data will be compressed, yielding a space savings of up to 50
percent. On the other hand, if there are 10 identical copies of a file, 10 files will still exist,
but they will share the same file data. The one instance of the shared file is also
compressed, providing further space savings.
File-level deduplication provides relatively modest space savings. It does not require
substantial CPU and memory resources to implement.
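As a conceptual sketch only (not the VNXe implementation), the Python fragment below shows the general single-instance idea: files are grouped by a content hash, so identical files map to one shared instance.

import hashlib, os

def file_digest(path, chunk=1 << 20):
    """Return a SHA-256 digest of the file's data (not its name or metadata)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def dedupe_report(directory):
    """Group files by content digest; identical files would share one instance."""
    instances = {}                                   # digest -> list of file paths
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            instances.setdefault(file_digest(path), []).append(path)
    total = sum(len(paths) for paths in instances.values())
    print(f"{total} files, {len(instances)} unique instances kept after deduplication")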
Auto-Tier:
Sets initial data placement to Optimized for Pool where the distribution of data is
based on the ratio of tiers in a Storage Pool
Relocates data within LUNs based on performance statistics, such that data is
relocated among tiers according to I/O activity (a conceptual sketch of this relocation
appears after the No Data Movement description below)
Use Case: You are deploying a test application and you have no preference for initial
placement; however, you want some data to benefit from the higher tiers.
<Continued>
No Data Movement:
Can only be selected after the LUN is created
No slices provisioned to the LUN will be relocated across tiers
Slices will remain in their current tier but can still be relocated within that tier
Use Case: Data will be heavily used but other LUNs are set to the Highest Available
Tier. You can create a LUN in the highest tier and, since all slices are on Flash, you
can freeze those slices in that tier.
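The Python sketch below illustrates the kind of relocation decision described for the Auto-Tier policy; the tier names, the hot threshold, and the data structures are assumptions for illustration and do not represent the FAST VP code itself.

HOT_THRESHOLD = 100   # I/Os per hour above which a slice is considered "hot" (hypothetical)
TIERS = ["extreme_performance", "performance", "capacity"]   # highest to lowest

def plan_relocations(slices, free_slots):
    """slices: list of (slice_id, current_tier, io_per_hour) tuples.
    free_slots: dict mapping tier -> free slice slots. Returns planned moves."""
    moves = []
    for slice_id, tier, io in sorted(slices, key=lambda s: s[2], reverse=True):
        if io < HOT_THRESHOLD:
            continue                          # cold slices stay where they are
        for target in TIERS:
            if target == tier:
                break                         # already in the best available tier
            if free_slots.get(target, 0) > 0:
                moves.append((slice_id, tier, target))
                free_slots[target] -= 1
                break
    return moves

slices = [("s1", "capacity", 900), ("s2", "performance", 50), ("s3", "capacity", 10)]
print(plan_relocations(slices, {"extreme_performance": 1, "performance": 1}))
# -> [('s1', 'capacity', 'extreme_performance')]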
This type of replication does not require replication interfaces to be configured, as no data
needs to be transferred over the external network. Local replication where the destination
storage resource is owned by a different SP than the source storage resource will be
completed over the internal CMI bus connecting the SPs. For local replication, a default
Local System replication connection is selected, which is pre-existing and does not need to
be manually configured.
Up to 256 snaps per LUN and 96 snaps per file system are supported.
<Continued>
1. When a replication session is established, two internal snapshots are created on each of
the source and destination storage resources. After the snapshots are created, Source
Snap A and B each contain a current point-in-time view of the Source LUN.
2. Data from Snap A is then copied to the empty Destination LUN. This is known as the
initial synchronization, and is a full copy.
3. After this initial copy is complete, Destination Snap A is refreshed to reflect the current
state of the Destination LUN. Snap A is now the common base between the source and
destination.
4. As hosts continuously write to the Source LUN, the data in the LUN is changed.
5. At the next automatic or manual sync, Source Snap B is refreshed to reflect the current
point-in-time view of the Source LUN. All incremental changes since the time of the
previous sync are then copied from Source Snap B to the Destination LUN.
6. After this copy is complete, Destination Snap B is refreshed to reflect the current state
of the Destination LUN. Now Snap B is the common base between the source and
destination.
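The following Python sketch walks through the same sequence using in-memory dictionaries as stand-ins for the LUNs and snapshots; it illustrates the common-base mechanism only, not the actual replication implementation.

import copy

source = {"blk0": "A", "blk1": "B", "blk2": "C"}   # hypothetical source LUN contents
destination = {}                                    # empty destination LUN

snap_a = copy.deepcopy(source)            # step 1: point-in-time view of the source
destination.update(snap_a)                # step 2: initial synchronization (full copy)
dest_base = copy.deepcopy(destination)    # step 3: common base on the destination

source["blk1"] = "B2"                     # step 4: host writes change the source
source["blk3"] = "D"

snap_b = copy.deepcopy(source)            # step 5: refresh the other snap ...
delta = {k: v for k, v in snap_b.items() if dest_base.get(k) != v}
destination.update(delta)                 # ... and copy only the incremental changes

dest_base = copy.deepcopy(destination)    # step 6: refreshed snap is the new common base
print(delta)   # {'blk1': 'B2', 'blk3': 'D'} - only the changed blocks were transferred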
Typical use cases are deployments for disaster recovery, assurance testing, remote
backups, and multi-vendor replication.
RecoverPoint version 4.1 SP1 supports built-in splitter integration with VNXe3200 systems.
The splitter is a key component that splits, or copies, incoming writes, sending them to
both the RecoverPoint appliances and the VNXe3200 storage. Replication operations are
performed by the RPA or vRPA to ensure minimal performance impact on the VNXe3200
systems. This graphic illustrates a Local Protection implementation, as shown by the
production and replica volumes located on the same system. Remote Protection is also
supported.
VNXe supports EMC Data Domain for backup and archive data, and EMC Avamar for
deduplicated backup and recovery.
In FLR-E-enabled file systems, files that are in the locked state cannot be modified or
deleted. The path to a file in the locked state is also protected from modification, which
means a directory on a File-Level Retention-enabled file system cannot be renamed, and
cannot be deleted unless it contains no protected files.
When a file is written and saved (scan on update), or on its first read (scan on read), the
VNXe3200 places a block on that file until virus checking has been performed. It
immediately issues a remote procedure call (RPC) to a virus-checking engine. This could be
a single engine or many, depending on the volume of data being protected, thus providing
a highly scalable solution. Because EMC can easily use multiple virus-checking servers, the
performance impact of virus checking with VNXe is a small fraction of the total throughput
of the system and of systems that use a single virus-checking server.
On receipt of the request, an access is initiated from a filter driver, and the virus-checking
server performs a standard check on the file. Understand that standard virus checkers
request only a small amount of data (signatures of a few kilobytes each) to establish the
presence of a virus, so the overhead is relatively small. The exception to this is with
compressed files, in which case the entire file must be shipped across the network. The
implementation may be through the normal user network; in the case of heavy-load
environments, you may wish to dedicate a network interface to the virus-checking server
farm. If a virus is detected, the user and the Administrator will see a customizable pop-up
message.
The scan-on-read functionality is triggered when a file is opened for read and was last
scanned before a set access time. This access time is typically set when a new virus-
definition file is loaded, to rescan old files (once) that may contain undetected viruses. You
may also wish, under certain circumstances, to run anti-virus in scan-on-read mode; for
instance, after a restore of data that may be infected with a latent virus.
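A minimal Python sketch of the scan-on-read decision is shown below; the variable names and the way the threshold is stored are illustrative assumptions, not the VNXe implementation.

from datetime import datetime

definitions_loaded_at = datetime(2015, 6, 1, 8, 0)   # hypothetical access-time threshold

def needs_rescan(last_scanned_at):
    """Return True if an opened file must be handed to a virus-checking server."""
    return last_scanned_at is None or last_scanned_at < definitions_loaded_at

print(needs_rescan(datetime(2015, 5, 20, 12, 0)))   # True: scanned before new definitions
print(needs_rescan(datetime(2015, 6, 2, 9, 30)))    # False: already scanned afterwards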
With the VNXe3200, as part of the base software, the Common AntiVirus Agent (CAVA)
solution is a component of the Event Enabler infrastructure, an alerting API that indicates to
external applications when an event (e.g., a file save) occurs within VNXe. The Event
Enabler framework is also used for quota management tool integration.
<Continued>
The AntiVirus Sizing Tool is used after installation to identify the need for additional anti-
virus engines to ensure performance of the system. The AntiVirus Sizing Tool comes with
the Common AntiVirus Agent, which is Windows Management Interface (WMI)-enabled.
There is also a pre-install sizing tool available for initial anti-virus sizing.
Scalability
You can scale the solution by adding virus-checking servers as required. Your server
vendors should be able to provide you with an understanding of how many dedicated
servers you would need. You can also use different server types (e.g., McAfee, Symantec,
Trend Micro, CA, Sophos and Kaspersky) concurrently, as per their original anti-virus
implementation.
Performance of anti-virus solutions tends to be measured in server overhead, and comes
with the typical "your mileage may vary" qualification, depending on application and workload.
Partnerships: EMC has an ISV Program agreement with all five major anti-virus vendors.
Utilizing CAVA is the only method of anti-virus checking supported by the major
virus-checking vendors for network shares.
This 64-bit file system also brings better fault tolerance. The VNXe2 Series will
concurrently support both 64-bit and 32-bit file systems. The 64-bit FS is NFS only and is
aimed at virtualization use cases. The 32-bit FS is the default file system for the VNXe2
Series.
View aggregated alerts, health, capacity, and CPU usage for multiple systems.
Control access to the monitoring interface by setting up local Unisphere Remote users
or integrating existing LDAP-enabled users and groups.
Organize views of the VNXe nodes in logical ways, including by location, type,
department, etc.
ESRS enables the entire team of EMC customer support professionals to proactively
monitor - and, through careful coordination, repair when needed - your customer's VNXe
information infrastructure as quickly as possible. ESRS uses industry-standard security
precautions and monitors a VNXe 24x7, with expedited response to a customer's needs,
all in an effort to free up the customer's time to focus on their business initiatives.
ESRS provides:
Automation features that include 24x7 remote monitoring to ensure EMC support
personnel are aware of potential problems. The IP connection provides the necessary
bandwidth and speed to ensure EMC can quickly remediate any issue.
Authentication that includes Advanced Encryption Standard (AES) 256-bit encryption
and RSA digital certificates to protect customer data.
Authorization features that empower customers to allow or deny remote support
sessions based on security requirements, and to assign privileges and apply policy
filters to ensure only authorized personnel are making decisions concerning remote
support activities.
Audit capabilities that enable detailed and easy review of remote support activities to
address federal, industry, and internal business requirements.
VMware Aware Integration (VAI) allows end-to-end discovery of the VMware environment
from the Unisphere GUI. The user can import and view VMware Virtual Centers, ESXi
servers, Virtual Machines, and VMDisks, and view their relationships. VAI also allows users
to create, manage, and configure VMware datastores on ESXi servers from Unisphere.
<Continued>
For NFS connections, this integration allows the VNXe series to be fully optimized for
virtualized environments. This technology offloads VMware storage-related functions from
the server to the storage system, enabling more efficient use of server and network
resources for increased performance and consolidation.
<Continued>
For the file component, you can discover and configure NAS Servers, File Systems, and File
Shares. You can assign ACL privileges on file shares.
SMI-S Profiles describe what kinds of actions you can perform on the VNXe3200 array. An
example is the iSCSI Target Ports Profile, where, using the SMI-S API, you could (see the
sample client query after the limitations below):
Discover iSCSI ports in the VNXe3200 system
Identify IP addresses tied to an iSCSI interface
Limitations:
Non-storage content (e.g., network configuration and ESRS)
Advanced features that are not covered in the latest SMI-S 1.5 standard (e.g., FAST
Cache and FAST VP)
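Below is a hedged sketch of how an SMI-S client could enumerate iSCSI target ports using the open-source pywbem library; the provider URL, credentials, namespace, and the exact CIM class exposed by the VNXe3200 provider are assumptions and should be confirmed against the provider documentation.

import pywbem

# Hypothetical connection details for the array's SMI-S provider
conn = pywbem.WBEMConnection(
    "https://vnxe3200.example.com:5989",
    ("admin", "password"),
    default_namespace="root/emc")

# Enumerate iSCSI protocol endpoints (target ports) advertised by the provider
for inst in conn.EnumerateInstances("CIM_iSCSIProtocolEndpoint"):
    print(inst.get("Name"), inst.get("ElementName"))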
Some other advantages of AppSync are the lightweight agents and the added application
and Exchange Database Availability Group (DAG) awareness.
Networking and I/O protocols (such as SCSI commands) are mapped to Fibre Channel
constructs, and then encapsulated and transported within Fibre Channel frames. This
process allows high-speed transfer of multiple protocols over the same physical interface.
Fibre Channel systems are assembled from familiar types of components: adapters, hubs,
switches and storage devices.
Host Bus Adapters (HBAs) are installed in computers and servers in the same manner as
SCSI HBAs.
Fibre Channel switches provide full bandwidth connections for highly scalable systems
without a practical limit to the number of connections supported (16 million addresses are
possible).
The word "fiber" indicates the physical media. The word "fibre" indicates the Fibre Channel
protocol and standards.
Multiple Connections per Session (MC/S) = multiple connections within a single session
to an iSCSI target
MPIO = multiple sessions, each with one or more connections to any iSCSI target
The network should be dedicated solely to the iSCSI configuration. For performance
reasons, EMC recommends that no traffic apart from iSCSI traffic should be carried over
the network. If using MDS switches, EMC recommends creating a dedicated VSAN for all
iSCSI traffic.
CAT5 network cables are supported for distances up to 100 meters. If cabling is to
exceed 100 meters, you must use CAT6 network cables.
VLAN tagging protocol is supported. Link Aggregation, also known as NIC teaming, is
not.
Traditional RAID 5 (4+1) for Flash, RAID 5 (8+1) for SAS, and RAID 6 (14+2) for NL-SAS
means customers will get high performance AND maximum efficiency.
This complements additional RAID types, seen on the next slide, and further improves
storage pool efficiency.
The 8+1 illustration is simply to show flexible options and is not a recommendation.
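The usable-capacity arithmetic behind these widths is simple; the short Python example below computes the parity overhead for each configuration (hot spares and other overheads excluded).

configs = {
    "RAID 5 (4+1)":  (4, 1),    # 4 data + 1 parity
    "RAID 5 (8+1)":  (8, 1),    # 8 data + 1 parity
    "RAID 6 (14+2)": (14, 2),   # 14 data + 2 parity
}
for name, (data, parity) in configs.items():
    efficiency = data / (data + parity)
    print(f"{name}: {efficiency:.0%} of raw capacity is usable")
# RAID 5 (4+1): 80%, RAID 5 (8+1): 89%, RAID 6 (14+2): 88%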
Automatic configuration with statistical load balancing is based on source and destination
MAC addresses. The Link Aggregation Control Protocol (LACP) has been implemented,
allowing peers to provide any required load balancing. Only full duplex operation is currently
supported.
VLANs create multiple broadcast domains. An SP's Gigabit Ethernet ports can be connected
to different switches, and each of these switches can be in a different subnet and a
different VLAN. Configurations need to be symmetrical on both SPs for all failover scenarios
to work.
A file in a File-Level Retention-enabled file system is always in one of four possible states,
based on the file's last access time (LAT) and read-only status: not locked, locked, append
(only), or expired. A file that is not locked is treated exactly like a file in a file system that
is not enabled for File-Level Retention; it can be renamed, modified, or deleted. You
manage files in the locked state by setting retention dates that, until the dates have
passed, prevent the files from being modified or deleted. File-Level Retention files can be
grouped by directory or batch process, thus enabling you to manage the file archives on a
file-system basis, or to run a script to locate and delete files in the expired state.
File states:
Not locked: Normal file
Locked (WORM): File-Level Retention-enabled files; files cannot be deleted, renamed,
modified, or appended to
Append: Files cannot be deleted or renamed; existing data cannot be modified, but
new data can be added
Expired: Files cannot be renamed, modified, or appended to, but can be deleted or
relocked
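The Python sketch below shows one way a file's state could be derived from its attributes, as a simplification of the rules above; the field names and the order of the checks are assumptions, not the actual VNXe logic.

from datetime import datetime

def flr_state(read_only, retention_until, append_only=False, now=None):
    """Return 'not_locked', 'locked', 'append', or 'expired' for a file."""
    now = now or datetime.utcnow()
    if append_only:
        return "append"          # new data may be added, existing data is frozen
    if not read_only or retention_until is None:
        return "not_locked"      # behaves like a normal file
    if now < retention_until:
        return "locked"          # WORM: cannot be deleted, renamed, or modified
    return "expired"             # retention passed: may be deleted or relocked

print(flr_state(True, datetime(2030, 1, 1)))   # locked
print(flr_state(True, datetime(2014, 1, 1)))   # expired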
<Continued>