
Open Industry Network Performance & Power Test for Cloud Networks: Evaluating 10/40 GbE Switches

Fall 2011 Edition

December 2011

© Lippis Enterprises, Inc. 2011

Open Industry Network Performance & Power Test for Private/Public Data Center Cloud Computing Ethernet Fabrics Report: Evaluating 10 GbE Switches

Foreword by Scott Bradner


It is now almost twenty years since the Internet Engineering Task Force (IETF) initially chartered the Benchmarking Methodology Working Group (bmwg) with me as chair. The aim of the working group was to develop standardized terminology and testing methodology for various performance tests on network devices such as routers and switches. At the time the bmwg was formed, it was almost impossible to compare products from different vendors without doing the testing yourself, because each vendor did its own testing and, too often, designed the tests to paint its products in the best light. The RFCs produced by the bmwg provided a set of standards that network equipment vendors and testing equipment vendors could use so that tests by different vendors or different test labs could be compared. Since its creation, the bmwg has produced 23 IETF RFCs that define performance testing terminology or methodology for specific protocols or situations, and it is currently working on a half dozen more. The bmwg has also had three different chairs since I resigned in 1993 to join the IETF's steering group. The performance tests in this report are the latest in a long series of similar tests produced by a number of testing labs. The testing methodology follows the same standards I was using in my own test lab at Harvard in the early 1990s; thus, the results are comparable. The comparisons would not be all that useful, since I was dealing with far slower networks, but a latency measurement I did in 1993 used the same standard methodology as the latency tests in this report. Considering the limits on what I was able to test way back then, the Ixia test setup used in these tests is very impressive indeed. It almost makes me want to get into the testing business again.

Scott is the University Technology Security Officer at Harvard University. He writes a weekly column for Network World and serves as the Secretary to the Board of Trustees of the Internet Society (ISOC). In addition, he is a trustee of the American Registry for Internet Numbers (ARIN) and author of many of the RFC network performance standards used in this industry evaluation report.
License Rights
2011 Lippis Enterprises, Inc. All rights reserved. This report is being sold solely to the purchaser set forth below. The report, including the written text, graphics, data, images, illustrations, marks, logos, sound or video clips, photographs and/or other works (singly or collectively, the Content), may be used only by such purchaser for informational purposes and such purchaser may not (and may not authorize a third party to) copy, transmit, reproduce, cite, publicly display, host, post, perform, distribute, alter, transmit or create derivative works of any Content or any portion of or excerpts from the Content in any fashion (either externally or internally to other individuals within a corporate structure) unless specifically authorized in writing by Lippis Enterprises. The purchaser agrees to maintain all copyright, trademark and other notices on the Content. The Report and all of the Content are protected by U.S. and/or international copyright laws and conventions, and belong to Lippis Enterprises, its licensors or third parties. No right, title or interest in any Content is transferred to the purchaser. Purchaser Name:


Acknowledgements
We thank the following people for their help and support in making possible this first industry evaluation of 10GbE private/public data center cloud Ethernet switch fabrics:

Atul Bhatnagar, CEO of Ixia, for that first discussion of this project back at Interop in May 2009 and his continuing support throughout its execution.

Scott O. Bradner of Harvard University for his work at the IETF on the network performance benchmarks used in this project, and for the Foreword of this report.

Leviton for the use of fiber optic cables equipped with optical SFP+ connectors to link Ixia test equipment to 10GbE switches under test.

Siemon for the use of copper and fiber optic cables equipped with QSFP+ connectors to link Ixia test equipment to 40GbE switches under test.

Michael Githens, Lab Program Manager at Ixia, for his technical competence, extra effort and dedication to fairness as he executed test week and worked with participating vendors to answer their many questions.

Henry He, Technical Product Manager; Kelly Maloit, Director of Public Relations; Tara Van Unen, Director of Market Development; and Jim Smith, VP of Marketing, all from Ixia, for their support and contributions to creating a successful industry event.

Bob Anderson, Photography and Videography at Ixia, for video podcast editing.

All participating vendors and their technical plus marketing teams for their support of not only the event but of each other, as many provided helping hands on many levels to competitors.

Tiffany O'Loughlin for her amazing eye in capturing test week with still and video photography.

Bill Nicholson for his graphic artist skills that make this report look amazing.

Jeannette Tibbetts for her editing that makes this report read as smoothly as a test report can.

Steven Cagnetta for his legal review of all agreements.


Table of Contents
Executive Summary
Market Background
Participating Suppliers
    Alcatel-Lucent OmniSwitch 10K
    Alcatel-Lucent OmniSwitch 6900-X40
    Arista 7504 Series Data Center Switch
    Arista 7124SX 10G SFP Data Center Switch
    Arista 7050S-64 10/40G Data Center Switch
    IBM BNT RackSwitch G8124
    IBM BNT RackSwitch G8264
    Brocade VDX 6720-24 Data Center Switch
    Brocade VDX 6730-32 Data Center Switch
    Extreme Networks BlackDiamond X8
    Extreme Networks Summit X670V
    Dell/Force10 S-Series S4810
    Hitachi Cable, Apresia15000-64XL-PSR
    Juniper Networks EX Series EX8200 Ethernet Switch
    Mellanox/Voltaire Vantage 6048
Cross Vendor Analysis
    Top-of-Rack Switches
        Latency
        Throughput
        Power
    Top-of-Rack Switches Industry Recommendations
    Core Switches
        Latency
        Throughput
        Power
    Core Switches Industry Recommendations
Test Methodology
Terms of Use
About Nick Lippis


Executive Summary
To assist IT business leaders with the design and procurement of their private or public data center cloud fabric, the Lippis Report and Ixia have conducted an open industry evaluation of 10GbE and 40GbE data center switches. In this report, IT architects are provided the first comparative 10 and 40 Gigabit Ethernet switch performance information to assist them in purchase decisions and product differentiation. The resources available for this test at Ixia's iSimCity are out of reach for nearly all corporate IT departments, with test equipment on the order of $9.5M, devices under test on the order of $2M, plus the costs associated with housing, powering and cooling the lab, and nearly 22 engineers from around the industry working to deliver test results. It's our hope that this report will remove performance, power consumption, virtualization scale and latency concerns from the purchase decision, allowing IT architects and IT business leaders to focus on other vendor selection criteria, such as post-sales support, platform investment, vision, company financials, etc.

The Lippis test reports, based on independent validation at Ixia's iSimCity, communicate credibility, competence, openness and trust to potential buyers of 10GbE and 40GbE data center switching equipment, as the tests are open to all suppliers and are fair, thanks to RFC-based and custom tests that are repeatable. The private/public data center cloud 10GbE and 40GbE fabric test was free for vendors to participate in and open to all industry suppliers of 10GbE and 40GbE switching equipment, both modular and fixed configurations. This report communicates the results of three sets of tests, which took place during the weeks of December 6 to 10, 2010, April 11 to 15, 2011, and October 3 to 14, 2011 in the modern Ixia test lab, iSimCity, located in Santa Clara, CA. Ixia supplied all test equipment needed to conduct the tests, while Leviton provided optical SFP+ connectors and optical cabling. Siemon provided copper and optical cables equipped with QSFP+ connectors for 40GbE connections. Each 10GbE and 40GbE supplier was allocated lab time to run the test with the assistance of an Ixia engineer. Each switch vendor configured its equipment while Ixia engineers ran the test and logged the resulting data. The tests conducted were IETF RFC-based performance and latency tests, power consumption measurements and a custom cloud-computing simulation of large north-south plus east-west traffic flows. Virtualization scale, or the number of VMs a switch can support within a layer 2 domain, was also calculated and reported. Both Top-of-Rack (ToR) and Core switches were evaluated.

ToR switches evaluated were:


Alcatel-Lucent OmniSwitch 6900-X40
Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
IBM BNT RackSwitch G8124 (BLADE Network Technologies, an IBM Company)
IBM BNT RackSwitch G8264 (BLADE Network Technologies, an IBM Company)
Brocade VDX 6720-24 Data Center Switch
Brocade VDX 6730-32 Data Center Switch
Extreme Networks Summit X670V
Dell/Force10 S-Series S4810
Hitachi Cable, Apresia15000-64XL-PSR
Mellanox/Voltaire Vantage 6048


Core switches evaluated were:


Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Extreme Networks BlackDiamond X8 Data Center Switch
Juniper Networks EX Series EX8200 Ethernet Switch

From the above lists, the following products were tested in October 2011 and added to the products evaluated in December 2010 and April 2011:

Alcatel-Lucent OmniSwitch 6900-X40
Brocade VDX 6730-32 Data Center Switch
Extreme Networks BlackDiamond X8
Extreme Networks Summit X670V

What We Have Learned from 18 Months of Testing:

Video feature: Click to view What We Have Learned video podcast


The following lists our top ten findings:


1. 10GbE ToR and Core switches are ready for mass deployment in private/public data center cloud computing facilities, as the technology is mature and the products are stable, delivering their stated goals of high performance, low latency and low power consumption.

2. There are differences between suppliers, and it's recommended to review each supplier's results along with other important information when making purchase decisions.

3. The Lippis/Ixia industry evaluations are the litmus test for products to enter the market, as only those firms who are confident in their products and ready to go to market participate. For each of the three industry tests, all ToR and Core switches evaluated in the Lippis/Ixia test were recently introduced to market, and the Lippis/Ixia test was their first open public evaluation.

4. Most ToR switches are based upon a new generation of merchant silicon that provides a single-chip forwarding engine of n-10GbE and n-40GbE ports that delivers consistent performance and low power consumption.

5. All ToR and Core switches offer low power consumption, with energy cost over a three-year period estimated between 1.3% and 4% of acquisition cost. We measure Watts per 10GbE port via ATIS methods and TEER values for all switches evaluated. It's clear that these products represent the state-of-the-art in terms of energy conservation. ToR switches consume between 1.5 and 5.5 Watts per 10GbE port, while Core switches consume between 8.1 and 21.68 Watts per 10GbE port.

6. Most switches direct airflow front-to-back or back-to-front rather than side-to-side to align with data center hot/cold aisle cooling design.

7. There are differences between suppliers in terms of the network services they offer, such as virtualization support, quality of service, etc., as well as how their Core/Spine switches connect to ToR/Leaf switches to create a data center fabric. It's recommended that IT business leaders evaluate Core switches with ToR switches and vice versa to assure that the network services and fabric attributes sought after are realized throughout the data center.

8. Most Core and ToR switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for Converged Enhanced Ethernet (CEE), Fibre Channel over Ethernet (FCoE), Internet Small Computer System Interface (iSCSI), Network-Attached Storage (NAS), etc., and while these features were not tested in the Lippis/Ixia evaluation, most of these switches demonstrated that the raw capacity for their support is built in.

9. Three ToR switches support 40GbE, while all Core switches possess the backplane capacity for some combination of 40 and 100GbE. The Core switches tested here will easily scale to support 40GbE and 100GbE, while ToR suppliers offer uplinks at these speeds. Of special note is the Extreme BlackDiamond X8, which was the first Core switch tested with 24 40GbE ports, marking a new generation of data center core products entering the market.

10. The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric exists to take full advantage of server I/O at 10GbE bandwidth and latency levels. 40GbE connections are expected to ramp up during 2012 for ToR-to-Core connections, with server connections in select high-performance environments.

Market Background
The following market background section provides perspective and context on the major changes occurring in the IT industry and their fundamental importance to the network switches that were evaluated in the Lippis/Ixia test. The IT industry is in the midst of a fundamental change toward centralization of application delivery via concentrated and dense private and public data center cloud computing sites. Corporations are concentrating IT spending in data centers and scaling them up via server virtualization to reduce complexity and associated operational cost while enjoying capital cost savings. In fact, there is a new cloud-o-nomics driven by the cost advantage of multi-core computing coupled with virtualization, which has enabled a scale of computing density not seen before. Couple this with a new tier of computing, Apple's iPad, Android tablets plus smartphones that rely upon applications served from private and public cloud computing facilities, and you have the makings of a systemic and fundamental change in data center design plus IT delivery. The world has witnessed the power of this technology and how it has changed the human condition and literally the world. On May 6, 2010, the U.S. stock market experienced a flash crash in which, in 13 short seconds, 27,000 contracts traded, accounting for 49% of trading volume; further, in 15 minutes $1 trillion of market value disappeared. Clusters of computers connected at 10 and 40GbE, programmed to perform high frequency trades based upon proprietary algorithms, did all this. The movie Avatar was produced at WETA Digital with 90% of the movie developed via giant computer clusters connected at 10GbE to process and render animation. As the cost of big science has soared, scientists have moved to high performance computing clusters connected at 10 and 40GbE to simulate rather than build scientific experiments. Biotech engineers are analyzing protein-folding simulations while military scientists simulate nuclear explosions rather than detonate these massive bombs. Netflix has seen its stock price increase and then crash as it distributes movies and TV programs over the Internet and now threatens Comcast's and others' on-demand business, thanks to its clusters of computers connected at 10GbE. At the same time, Blockbuster Video filed for bankruptcy.

A shift in IT purchasing is occurring, too. During the market crash of 2008, demand for information increased exponentially while IT business decision makers were forced to scale back budgets. These diverging trends of information demand and IT budget contraction have created a data center execution gap, where IT leaders are now forced to upgrade their infrastructure to meet information and application demand. To close the gap, IT business decision makers are investing in 10 and 40GbE switching. Over the past several quarters, sales of 10GbE fixed and modular switches have grown in the 60% to 70% range and now represent 25% of all Ethernet shipments. Note that data center Ethernet ports represent just 10% of the market, but nearly 40% of revenue. 40GbE shipments are expected to ramp during 2012. At the center of this new age in private and public data center cloud computing is networking, in particular Ethernet networking that links servers to storage and the Internet, providing high-speed transport for application traffic flows. As the data center network connects all devices, it is the single most important IT asset in assuring that end users receive an excellent experience and that service is continually available. But in the virtualization/cloud era of computing, networking, too, is fundamentally changing as demand increases for low latency, high performance and lower power consumption switches to address radically different traffic patterns. In addition, added network services, such as storage enablement, are forcing Ethernet to become not just a connectivity service in the data center but a fabric that connects computing and storage, transporting data packets plus blocks of storage traffic. In short, IT leaders are looking to Ethernet to provide a single data center fabric that connects all IT assets, with the promise of less equipment to install, simpler management and greater speed of IT service delivery. The following characterize today's modern private and public data center cloud network fabric.

Virtualization: The problems with networking in virtualized infrastructure are well documented. Move a virtual machine (VM) from one physical server to another, and the network port profile, Virtual Local Area Network (VLAN), security settings, etc., have to be reconfigured,
adding cost, delay and rigidity to what should be an adaptive infrastructure. Many networking companies are addressing this issue by making their switches virtualization aware, auto reconfiguring and/or recommending a large flat layer 2 domain. In addition, networking companies are offering network management views that cross the administrative domains of networking and virtualization in an effort to provide increased visibility and management of physical and virtual infrastructure. The fabric also needs to eliminate the boundary between physical and virtual infrastructure, both in terms of management and visibility plus automated network changes as VMs move.

Traffic Patterns Shift with Exponential Volume Increase: Data centers are experiencing a new type of traffic pattern that is fundamentally changing network design plus latency and performance attributes. Not only are traditional north-south or client-server flows growing, but east-west or server-server and storage-server flows now dominate most private and public data center cloud facilities. A dominant driver of east-west flows is the fact that the days of static web sites are over; dynamic sites are taking over, where one page request can spawn 50 to 100 connections across multiple servers. Large contributors to east-west flows are mobile devices. Mobile application use is expanding exponentially, thanks to the popularity of the iPhone, iPad and increasingly Android smartphones plus tablets. As of this writing, there are some 700,000-plus smartphone applications. The traffic load these applications are placing on data center Ethernet fabrics is immense. The vast majority of mobile applications are hosted in data centers and/or public cloud facilities. The application model of mobile devices is not to load them with thick applications like Microsoft Word, PowerPoint, Excel, etc., but to load them with thin clients that access applications and data in private and/or public data center cloud facilities. A relatively small query to a Facebook page, New York Times article, Amazon purchase, etc., sends a flurry of east-west traffic to respond to the query. Today's dynamic web spreads a web page over multiple servers and, when called upon, drives a tsunami of traffic, which must be delivered well before the person requesting the data via mobile device or desktop loses interest and terminates the request. East-west traffic will only increase, as this new tier of tablet computing is expected to ship 171 million units in 2014, up from 57 million this year, according to iSuppli. Add tens of millions of smartphones to the mix, all relying on private/public data center cloud facilities for their applications, and you have the ingredients for a sustained trend placing increased pressure for east-west traffic flows upon Ethernet data center fabrics.

Low Latency: Data experiences approximately 10 ns of latency as it traverses a computer bus passing between memory and CPU. Could networking become so fast that it will emulate a computer bus, so that a data center operates like one giant computer? The answer is no, but the industry is getting closer. Today's 10GbE switches produce 400 to 700 ns of latency. By 2014, it's anticipated that 100GbE switching will reduce latency to nearly 100 ns. Special applications, especially in high frequency trading and other financial markets, measure latency in terms of millions of dollars in revenue lost or gained, placing enormous pressure on networking gear to be as fast as engineers can design. As such, low latency is an extremely important consideration in the financial services industry. For example, aggregating all 300-plus stock market feeds would generate approximately 3.6 Gbps of traffic, but IT architects in this market choose 10GbE server connections versus 1GbE for the sole purpose of lower latency, even though the bandwidth is nearly three times greater than what theoretically would be required. They will move to 40GbE too, as 40GbE is priced at three to four times that of 10GbE for four times the bandwidth. More generally, latency is fundamental to assuring a user's excellent experience. With east-west traffic flows making up as much as 80% of data center traffic, low latency requirements are paramount, as every trip a packet makes between servers during the process of responding to an end user's query adds delay to the response and reduces the user experience or revenue potential.

Storage Traffic over Ethernet Fabric: Converged I/O or unified networking, where storage and network traffic flow over a single Ethernet network, will increasingly be adopted in 2012. Multiple storage enablement options exist, such as iSCSI over Ethernet, ATA over Ethernet (AoE), FCoE, etc. For IP-based storage approaches, such as iSCSI over Ethernet and AoE, all that is needed is a 10GbE network interface card (NIC) in a server to support both storage and data traffic, but the Ethernet fabric requires lossless Ethernet to assure the integrity of storage transport over the fabric as blocks of data transmit between server and storage farms. For FCoE, a single converged network adaptor, or CNA, plugged into a server provides the conduit for storage and application traffic flows to traverse an Ethernet fabric. The number of suppliers offering CNAs has grown significantly, including Intel, HP, Emulex, IBM, ServerEngines, QLogic, Cisco, Brocade, etc. In addition, the IEEE opened the door for mass deployment as it has ratified the key Ethernet standards for lossless Ethernet. What will drive converged I/O is the reduced cost of cabling, NIC and switching hardware.

Low Power Consumption: With energy costs increasing, plus green corporate initiatives as well as government mandates pervasive, all in an effort to reduce carbon emissions, IT leaders have been demanding lower energy consumption of all IT devices, including data center network equipment. The industry has responded with significant improvements in energy efficiency and overall lower wattage consumption per port while 10 and 40GbE switches are operating at full line rate.

Virtualized Desktops: 2012 promises to be a year of increased virtualized desktop adoption. Frustrated with enterprise desktop application licensing plus the desktop support model, IT business leaders will turn toward virtualizing desktops in increasing numbers. The application model of virtualized desktops is to deliver a wide range of corporate applications hosted in private/public data center clouds over the enterprise network. While there are no estimates of the traffic load this will place on campus and data center Ethernet networking, one can only assume it will add north-south traffic volume to already increasing and dominant east-west flows.

Fewer Network Tiers: To deliver the low latency and high throughput performance needed to support the increasing and changing traffic profiles mentioned above, plus ease VM moves without network configuration changes, a new two-tier private/public data center cloud network design has emerged. A three-tier network architecture is the dominant structure in data centers today and will likely continue as the optimal design for many smaller data center networks. For most network architects and administrators, this type of design provides the best balance of asset utilization, layer 3 routing for segmentation, scaling and services, plus efficient physical design for cabling and fiber runs. By three tiers, we mean access switches (ToR switches, or modular/End-of-Row switches) that connect to servers and IP-based storage. These access switches are connected via Ethernet to aggregation switches. The aggregation switches are connected into a set of Core switches or routers that forward traffic flows from servers to an intranet and the Internet, and between the aggregation switches. It is common in this structure to oversubscribe bandwidth in the access tier, and to a lesser degree in the aggregation tier, which can increase latency and reduce performance. Spanning Tree Protocol, or STP, between access and aggregation plus aggregation and core further drives oversubscription. Inherent in this structure is the placement of layer 2 versus layer 3 forwarding, that is, VLANs and IP routing. It is common that VLANs are constructed within access and aggregation switches, while layer 3 capabilities in the aggregation or Core switches route between them. Within the private and public data center cloud network market, where the number of servers is in the thousands to tens of thousands plus, east-west bandwidth is significant and applications require a single layer 2 domain, the existing Ethernet or layer 2 capabilities within this tiered architecture do not meet emerging demands.


An increasingly common design to scale a data center fabric is often called a fat-tree or leaf-spine and consists of two kinds of switches: one that connects servers and a second that connects switches, creating a non-blocking, low-latency fabric. We use the terms leaf or Top of Rack (ToR) switch to denote server-connecting switches and spine or core to denote switches that connect leaf/ToR switches. Together, a leaf/ToR and spine/core architecture creates a scalable data center fabric. Key to this design is the elimination of STP, with some number of multi-links between leaf and spine that eliminate oversubscription and enable a non-blocking fabric, assuming the switches are designed with enough backplane capacity to support packet forwarding equal to the sum of leaf ingress bandwidth. There are multiple approaches for connecting leaf and spine switches at high bandwidth, which fall under the category of Multi-Chassis Link Aggregation Group, or MC-LAG, covered in project IEEE 802.3ad. Paramount in the two-tier leaf-spine architecture is high spine switch performance, which collapses the aggregation layer of the traditional three-tier network. Another design is to connect every switch together in a full mesh via MC-LAG connections, with every server being one hop away from every other. In addition, protocols such as TRILL, or Transparent Interconnection of Lots of Links, and SPB, or Shortest Path Bridging, have emerged to connect core switches together at high speed in an active-active mode.
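To make the oversubscription arithmetic concrete, here is a minimal sketch (not from the report) of how a leaf/ToR tier can be checked for oversubscription toward the spine; the port counts in the example are hypothetical.

```python
# Minimal sketch: oversubscription of a leaf/ToR switch toward the spine.
# A leaf is non-blocking when its uplink capacity is at least equal to its
# server-facing (downlink) capacity; ratios above 1:1 mean oversubscription.

def leaf_oversubscription(server_ports: int, server_gbps: float,
                          uplink_ports: int, uplink_gbps: float) -> float:
    """Return the downlink-to-uplink bandwidth ratio for one leaf switch."""
    downlink = server_ports * server_gbps
    uplink = uplink_ports * uplink_gbps
    return downlink / uplink

# Hypothetical example: 48 x 10GbE server ports with 4 x 40GbE uplinks.
ratio = leaf_oversubscription(48, 10, 4, 40)
print(f"{ratio:.1f}:1")  # 3.0:1 -> oversubscribed; 1.0:1 or lower is non-blocking
```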

The above captures the major trends and demands IT business leaders are placing on the networking industry. The underpinnings of the private and public data center cloud network fabric are 10GbE switching with 40GbE and 100GbE ports/modules. 40GbE and 100GbE are in limited availability now but will be increasingly offered and adopted during 2012. Network performance, including throughput and latency, comprises fundamental switch attributes to understand and review across suppliers. If the 10/40GbE switches an IT leader selects cannot scale performance to support increasing traffic volume plus shifts in traffic profile, not only will the network fail to become a fabric able to support converged storage traffic, but business processes, application performance and user experience will suffer too. During 2012, an increasing number of servers will be equipped with 10GbE LAN on Motherboard (LOM), driving 10GbE network requirements. In addition, with nearly 80% of IT spend being consumed in data center infrastructure and all IT assets eventually running over 10/40GbE switching, the stakes could not be higher to select the right product upon which to build this fundamental corporate asset. Further, data center network equipment has the longest life span of all IT equipment; therefore, networking is a long-term investment and vendor commitment.


Participating Suppliers
This was the first time that all participating vendors' switches were tested in an open public forum. In addition, the ToR switches all utilized a new single-chip design sourced from Broadcom, Marvell or Fulcrum. This chip design supports up to 64 10GbE ports on a single chip and represents a new generation of ToR switches. These ToR or leaf switches, with price per port as low as $350 per 10GbE, are the fastest growing segment of the 10GbE fixed configuration switch market. Their list prices vary from $12K to $25K. ToR switches tested were:

Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
IBM BNT RackSwitch G8124 (BLADE Network Technologies, an IBM Company)
IBM BNT RackSwitch G8264 (BLADE Network Technologies, an IBM Company)
Brocade VDX 6720-24 Data Center Switch
Dell/Force10 S-Series S4810
Hitachi Cable, Apresia15000-64XL-PSR
Mellanox/Voltaire Vantage 6048

All Core switches, including Juniper's EX8216, Arista Networks' 7504 and Alcatel-Lucent's OmniSwitch 10K, were tested for the first time in an open and public forum too. These spine switches make up the core of cloud networking. A group of spine switches connected in partial or full mesh creates an Ethernet fabric connecting ToR or leaf switches and associated data plus storage traffic. These Core switches range in list price from $250K to $800K, with a price per 10GbE port of $1.2K to $6K depending upon port density and software license arrangement. The Core/Spine switches tested were:

Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Juniper Networks EX Series EX8200 Ethernet Switch

We present each supplier's results in alphabetical order.


Alcatel-Lucent OmniSwitch 10K


Alcatel-Lucent launched its new entry into the enterprise data center market on December 17, 2010, with the OmniSwitch 10K. The OmniSwitch was the most densely populated device tested, with 256 ports of 10GbE. The test numbers below represent the first public performance and power consumption measurements for the OmniSwitch 10K. The Alcatel-Lucent OmniSwitch 10K Modular Ethernet LAN Chassis is the first of a new generation of network-adaptable LAN switches. It exemplifies Alcatel-Lucent's approach to enabling what it calls Application Fluent Networks, which are designed to deliver a high-quality user experience while optimizing the performance of legacy, real-time and multimedia applications.

Alcatel-Lucent OmniSwitch 10K Test Configuration

Device under test: OmniSwitch 10K (www.alcatel-lucent.com); software version 7.1.1.R01.1638; port density 256
Test equipment: Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1; IxOS 6.00 EA; IxNetwork 5.70 EA
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron MM fiber; 850nm SFP+ transceivers (www.leviton.com)

Alcatel-Lucent OmniSwitch 10K* RFC 2544 Layer 2 Latency Test (ns)

Frame Size (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64                   23600         20864         8480          9
128                  23420         20561         8940          5
256                  23580         20631         9280          5
512                  23840         20936         9500          8
1024                 24040         21216         10380         7
1280                 24480         21787         10300         5
1518                 25640         22668         10960         10
2176                 42960         25255         11440         5
9216                 158600        36823         20420         10

Video feature: Click to view Alcatel-Lucent video podcast


The OmniSwitch 10K was tested across all 256 ports of 10GbE. Its average latency ranged from a low of 20,561 ns (about 20 µs) to a high of 36,823 ns (about 37 µs) at the jumbo 9216-Byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.

Alcatel-Lucent OmniSwitch 10K* RFC 2544 Layer 3 Latency Test (ns)

Frame Size (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64                   23700         20128         9160          9
128                  23600         20567         9360          5
256                  23500         20638         8760          5
512                  24000         20931         9260          8
1024                 24280         21211         9740          7
1280                 24480         21793         10340         4
1518                 25700         22658         10260         10
2176                 38580         25242         10980         5
9216                 334680        45933         20180         10

* These RFC 2544 tests were conducted with default configuration. Lower latency configuration mode will be available in the future.


For layer 3 traffic, the OmniSwitch 10K's measured average latency ranged from a low of 20,128 ns at 64-Byte frames to a high of 45,933 ns (about 46 µs) at the jumbo 9216-Byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 10 ns, providing consistent latency across all packet sizes at full line rate.
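As an illustration only (this is not Ixia's RFC 2544 implementation), the sketch below shows how the maximum, average and minimum latency and the average delay variation reported in the tables above can be summarized from per-frame latency samples; the sample values are hypothetical.

```python
# Illustrative summary of per-frame latency samples (nanoseconds) for one frame size.
from statistics import mean

def latency_summary(samples_ns):
    """Return max/avg/min latency and average delay variation (frame-to-frame jitter)."""
    variation = [abs(b - a) for a, b in zip(samples_ns, samples_ns[1:])]
    return {
        "max_ns": max(samples_ns),
        "avg_ns": round(mean(samples_ns), 1),
        "min_ns": min(samples_ns),
        "avg_delay_variation_ns": round(mean(variation), 1) if variation else 0.0,
    }

print(latency_summary([20864.0, 20870.0, 20861.0, 20875.0]))  # hypothetical samples
```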

Alcatel-Lucent OmniSwitch 10K RFC 2544 L2 & L3 Throughput Test (% line rate)

Frame Size (Bytes)   Layer 2   Layer 3
64                   100       100
128                  100       100
192                  100       100
256                  100       100
512                  100       100
1024                 100       100
1280                 100       100
1518                 100       100
2176                 100       100
9216                 100       100

The OmniSwitch 10K demonstrated 100% throughput as a percentage of line rate across all 256 10GbE ports. In other words, not a single packet was dropped while the OmniSwitch 10K was presented with enough traffic to populate its 256 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows. The OmniSwitch 10K demonstrated nearly 80% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The OmniSwitch did not use HOL blocking, which means that as the 10GbE port on the OmniSwitch became congested, it did not impact the performance of other ports. There was no back pressure detected, and the Ixia test gear did not receive flow control frames.

Alcatel-Lucent OmniSwitch 10K RFC 2889 Congestion Test

Frame Size (Bytes)   L2 Agg. Forwarding Rate (%)   L3 Agg. Forwarding Rate (%)   Head of Line Blocking   Back Pressure   Agg. Flow Control Frames
64                   77.784                        77.784                        no                      no              0
128                  77.79                         77.79                         no                      no              0
256                  77.786                        77.786                        no                      no              0
512                  77.772                        77.772                        no                      no              0
1024                 77.775                        77.775                        no                      no              0
1280                 77.813                        77.813                        no                      no              0
1518                 77.828                        77.828                        no                      no              0
2176                 77.857                        77.856                        no                      no              0
9216                 78.39                         78.39                         no                      no              0
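The sketch below is one way to interpret the congestion figures above; it is not Ixia's RFC 2889 methodology. It assumes per-port offered and forwarded frame counters are available: the aggregated forwarding rate is the share of all offered frames actually forwarded, and head-of-line (HOL) blocking is flagged when ports that are not congested also lose frames.

```python
# Sketch: aggregated forwarding rate and an HOL-blocking check from per-port counters.

def congestion_summary(offered, forwarded, congested_port):
    """offered/forwarded: frames per egress port; congested_port: the oversubscribed one."""
    agg_pct = 100.0 * sum(forwarded.values()) / sum(offered.values())
    hol = any(forwarded[p] < offered[p] for p in offered if p != congested_port)
    return {"agg_forwarding_rate_pct": round(agg_pct, 3), "head_of_line_blocking": hol}

# Hypothetical counters: "p3" is offered 150% of line rate and drops the excess,
# while the uncongested "p4" forwards everything, so no HOL blocking is reported.
offered   = {"p3": 1_500_000, "p4": 1_000_000}
forwarded = {"p3": 1_000_000, "p4": 1_000_000}
print(congestion_summary(offered, forwarded, "p3"))
```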


Alcatel-Lucent OmniSwitch 10K RFC 3918 IP Multicast Test

Frame Size (Bytes)   Agg. Throughput Rate (%)   Agg. Average Latency (µs)
64                   100                        9.6
128                  100                        9.6
256                  100                        9.8
512                  100                        10.3
1024                 100                        11.1
1280                 100                        11.6
1518                 100                        11.9
2176                 100                        13.0
9216                 100                        28.1
The OmniSwitch 10K demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 9,596 ns at the 64-Byte packet size to 28,059 ns at the 9216-Byte packet size.

Alcatel-Lucent OmniSwitch 10K Cloud Simulation Test

Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         28125
East-West           Server_to_Database         14063
East-West           HTTP                       19564
East-West           iSCSI-Server_to_Storage    17140
East-West           iSCSI-Storage_to_Server    26780
North-South         Client_to_Server           13194
North-South         Server_to_Client           11225

The OmniSwitch 10K performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 28 µs.

Alcatel-Lucent OmniSwitch 10K Power Consumption Test

WattsATIS/10GbE port:                     13.34
3-Year Cost/WattsATIS/10GbE:              $48.77
Total power cost/3-Year:                  $12,485.46
3-yr energy cost as a % of list price:    2.21%
TEER Value:                               71
Cooling:                                  Front to Back

The OmniSwitch 10K represents a new breed of cloud network spine switches with power efficiency as a core value. Its WattsATIS per 10GbE port is 13.34, and its TEER value is 71. Its power cost per 10GbE port is estimated at $16.26 per year. The three-year cost to power the OmniSwitch is estimated at $12,485.46 and represents less than 3% of its list price. In keeping with data center best practices, its cooling fans flow air front to back.
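The energy-cost arithmetic behind these figures can be sketched as follows. The electricity rate is an assumption (the report does not state it in this section); roughly $0.14 per kWh, applied around the clock, approximately reproduces the published per-port and three-year figures.

```python
# Back-of-the-envelope energy cost from Watts-ATIS per 10GbE port.
HOURS_PER_YEAR = 24 * 365

def energy_cost(watts_per_port, ports, usd_per_kwh=0.139, years=3):
    kwh = watts_per_port * ports * HOURS_PER_YEAR * years / 1000.0
    total = kwh * usd_per_kwh
    return {"cost_per_port_per_year": round(total / ports / years, 2),
            "three_year_total": round(total, 2)}

# OmniSwitch 10K: 13.34 W_ATIS per port across 256 ports of 10GbE.
print(energy_cost(13.34, 256))  # roughly $16 per port per year and ~$12.5K over three years
```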


Discussion:
The OmniSwitch 10K seeks to improve application performance and user experience with deep packet buffers, a lossless virtual output queuing (VOQ) fabric and extensive traffic management capabilities. This architecture proved its value during the RFC 2889 layer 2 and layer 3 congestion tests with a 78% aggregated forwarding rate when a single 10GbE port was oversubscribed at 150% of line rate. The OmniSwitch 10K did not use HOL blocking or back pressure, nor did it signal back to the Ixia test equipment with aggregated flow control frames to slow down traffic flow. Not tested but notable features are its security and high availability design for uninterrupted uptime. The OmniSwitch 10K was found to have low power consumption, front-to-back cooling, front-accessible components and a compact form factor. The OmniSwitch 10K is designed to meet the requirements of mid- to large-sized enterprise data centers.


Alcatel-Lucent OmniSwitch 6900-X40


The Alcatel-Lucent (ALU) OmniSwitch 6900 Stackable LAN Switch series is the latest hardware platform in Alcatel-Lucent Enterprise's network infrastructure portfolio, following the introduction of the OmniSwitch 10K and the OmniSwitch 6850E. The OmniSwitch 6900 series is a fixed form factor LAN switch designed for data centers, while at the same time offering great value as a small core switch in a converged enterprise network.

Alcatel-Lucent OmniSwitch 6900-X40 Test Configuration

Device under test: OmniSwitch 6900-X40 (www.alcatel-lucent.com); software version 7.2.1.R02; port density 64
Test equipment: Ixia XG12 High Performance Chassis; Xcellon Flex AP10G16S (16-port 10G module); Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G); Xcellon Flex 4x40GE QSFP+ (4-port 40G) (http://www.ixiacom.com/); IxOS 6.10 EA SP2; IxNetwork 6.10 EA
Cabling: 10GbE optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron MM fiber; 850nm SFP+ transceivers (www.leviton.com); Siemon QSFP+ passive copper cable, 40GbE, 3 meters (QSFP-FA-010); Siemon Moray low power active optical cable assemblies, single mode, QSFP+ 40GbE optical cable, 7 meters (QSFP30-03) (www.siemon.com)

Video feature: Click to view Alcatel-Lucent video podcast


The OmniSwitch 6900 represents the next step in realizing Alcatel-Lucent Enterprise's Application Fluent Network (AFN) vision for network infrastructure in the data center and LAN core. Alcatel-Lucent's AFN vision is based upon a resilient architecture in which the network dynamically adapts to applications, users and devices to provide a high-quality user experience, as well as to reduce operating costs and management complexity.

The OmniSwitch 6900 series consists of two models: the OmniSwitch 6900-X40 and the OmniSwitch 6900-X20. They provide 10GbE connectivity with industry-leading port density, switching capacity and low power consumption in a compact one rack unit (1RU) form factor. The OmniSwitch 6900-X40 can deliver up to 64 10GbE ports, while the OmniSwitch 6900-X20 can deliver up to 32 10GbE ports, with wire-rate, non-blocking performance and unique high availability and resiliency features as demanded in data centers and the core of converged networks.

The OmniSwitch 6900 series, with two plug-in module bays, is a flexible 10GbE fixed configuration switch, which can accommodate up to six 40GbE ports on the OS6900-X40. The OmniSwitch 6900 series provides a long-term, sustainable platform that supports 40GbE connectivity as well as specialized data center convergence features such as Data Center Bridging (DCB), Shortest Path Bridging (SPB) and Fibre Channel over Ethernet (FCoE) without any hardware change-out.


Alcatel-Lucent OmniSwitch 6900-X40* RFC 2544 Layer 2 Latency Test (ns)

Frame Size (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64                   1060          869.37        760           7.70
128                  1120          880.46        780           5.48
192                  1140          897.07        740           10.04
256                  1060          860.74        700           5.70
512                  1120          822.48        680           7.96
1024                 1080          833.37        700           7.41
1280                 1020          808.98        680           3.33
1518                 1040          796.07        640           10.33
2176                 1020          802.65        680           5.11
9216                 1020          792.39        660           9.24

Alcatel-Lucent OmniSwitch 6900-X40* RFC 2544 Layer 3 Latency Test (ns)

Frame Size (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64                   1220          891.41        760           7.20
128                  1220          885.37        760           5.89
192                  1260          910.72        740           9.22
256                  1220          880.76        700           5.44
512                  1160          838.76        680           7.65
1024                 1200          850.94        680           7.00
1280                 1160          828.33        660           3.50
1518                 1160          812.83        670           9.13
2176                 1200          819.72        665           4.94
9216                 1200          809.94        645           9.37

For the Fall 2011 Lippis/Ixia test, we populated and tested the ALU OmniSwitch 6900-X40's 10GbE and 40GbE functionality and performance. The ALU OmniSwitch 6900-X40 switch was tested across all 40 10GbE and six 40GbE ports. Its average latency ranged from a low of 792 ns to a high of 897 ns for layer 2 traffic, measured in store-and-forward mode. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate, a comforting result given the ALU OmniSwitch 6900-X40's support of converged network/storage infrastructure. For layer 3 traffic, the ALU OmniSwitch 6900-X40's measured average latency ranged from a low of 809 ns at the jumbo 9216-Byte frame size to a high of 910 ns. Its average delay variation for layer 3 traffic ranged between 3 and 9 ns, providing consistent latency across all packet sizes at full line rate.

* Test was done between far ports to test across chip performance/latency.

Alcatel-Lucent OmniSwitch 6900-X40* RFC 2544 L2 & L3 Throughput Test

The ALU OmniSwitch 6900-X40 demonstrated 100% throughput as a percentage of line rate across all 40 10GbE and six 40GbE ports. In other words, not a single packet was dropped while the ALU OmniSwitch 6900-X40 was presented with enough traffic to populate its 40 10GbE and six 40GbE ports at line rate simultaneously for both L2 and L3 traffic flows.
Frame Size (Bytes)   Layer 2 (% line rate)   Layer 3 (% line rate)
64                   100                     100
128                  100                     100
192                  100                     100
256                  100                     100
512                  100                     100
1024                 100                     100
1280                 100                     100
1518                 100                     100
2176                 100                     100
9216                 100                     100

Alcatel-Lucent OmniSwitch 6900-X40* RFC 2889 Congestion Test (150% of line rate into a single 10GbE port)

Frame Size (Bytes)   L2 Agg. Fwd Rate (%)   L3 Agg. Fwd Rate (%)   L2 HOL Blocking   L3 HOL Blocking   L2 Back Pressure   L3 Back Pressure   L2 Agg. Flow Control Frames   L3 Agg. Flow Control Frames
64                   100                    100                    no                no                yes                yes                133444                        124740
128                  100                    100                    no                no                yes                yes                76142                         76142
192                  100                    100                    no                no                yes                yes                106048                        106014
256                  100                    100                    no                no                yes                yes                81612                         81614
512                  100                    100                    no                no                yes                yes                63596                         63592
1024                 100                    100                    no                no                yes                yes                64864                         64866
1280                 100                    100                    no                no                yes                yes                60730                         60730
1518                 100                    100                    no                no                yes                yes                58794                         58796
2176                 100                    100                    no                no                yes                yes                56506                         56506
9216                 100                    100                    no                no                yes                yes                54864                         54868

Two congestion tests were conducted using the same methodology: a 10GbE and a 40GbE congestion test stressed the ALU OmniSwitch 6900-X40 ToR switch's congestion management attributes at both speeds. The ALU OmniSwitch 6900-X40 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both the 10GbE and 40GbE tests. A single 10GbE port was oversubscribed at 150% of line rate; in addition, a single 40GbE port was oversubscribed at 150% of line rate. The ALU OmniSwitch 6900-X40 did not exhibit HOL blocking behavior, which means that as 10GbE and 40GbE ports became congested, the performance of other ports was not impacted. The OmniSwitch 6900-X40 did send flow control frames to the Ixia test gear, signaling it to slow down the rate of incoming traffic flow, which is common and expected in ToR switches.

Alcatel-Lucent OmniSwitch 6900-X40* RFC 2889 Congestion Test (150% of line rate into a single 40GbE port)

Frame Size (Bytes)   L2 Agg. Fwd Rate (%)   L3 Agg. Fwd Rate (%)   L2 HOL Blocking   L3 HOL Blocking   L2 Back Pressure   L3 Back Pressure   L2 Agg. Flow Control Frames   L3 Agg. Flow Control Frames
64                   100                    100                    no                no                yes                yes                518736                        485364
128                  100                    100                    no                no                yes                yes                299492                        299470
192                  100                    100                    no                no                yes                yes                414694                        414742
256                  100                    100                    no                no                yes                yes                320628                        320650
512                  100                    100                    no                no                yes                yes                250906                        250876
1024                 100                    100                    no                no                yes                yes                255842                        255852
1280                 100                    100                    no                no                yes                yes                239894                        239892
1518                 100                    100                    no                no                yes                yes                231850                        231842
2176                 100                    100                    no                no                yes                yes                223580                        223582
9216                 100                    100                    no                no                yes                yes                217598                        217600

Alcatel-Lucent OmniSwitch 6900-X40* RFC 3918 IP Multicast Test

Frame Size (Bytes)   Agg. Throughput Rate (%)   Agg. Average Latency (ns)
64                   100                        1021.98
128                  100                        1026.76
192                  100                        1045.49
256                  100                        1043.53
512                  100                        1056.07
1024                 100                        1057.22
1280                 100                        1052.11
1518                 100                        1055.53
2176                 100                        1054.58
9216                 100                        1055.64

The ALU OmniSwitch 6900-X40 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 1,021 ns at the 64-Byte packet size to 1,055 ns (roughly 1 µs) at the 9216-Byte packet size. The ALU OmniSwitch 6900-X40 demonstrated consistently low IP multicast latency across all packet sizes.

Alcatel-Lucent OmniSwitch 6900-X40 Cloud Simulation Test

Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         3719
East-West           Server_to_Database         1355
East-West           HTTP                       2710
East-West           iSCSI-Server_to_Storage    2037
East-West           iSCSI-Storage_to_Server    3085
North-South         Client_to_Server           995
North-South         Server_to_Client           851

The ALU OmniSwitch 6900-X40 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 4 µs, measured in store-and-forward mode.


Alcatel-Lucent OmniSwitch 6900-X40 Power Consumption Test



Power Measurement: The ALU OmniSwitch 6900-X40 represents a new breed of cloud network core switches with power efficiency as a core value. Its WattsATIS per 10GbE port is 3.38 and its TEER value is 280; higher TEER values are better than lower. Note that the total number of OmniSwitch 6900-X40 10GbE ports used to calculate Watts/10GbE was 64: its six 40GbE ports were counted as 24 10GbE ports and added to its 40 10GbE ports for the WattsATIS/port calculation, since a 40GbE port is essentially four 10GbE ports from a power consumption point of view. The ALU OmniSwitch 6900-X40 power cost per 10GbE port is calculated at $12.36 over three years, or $4.12 per year. The three-year cost to power the ALU OmniSwitch 6900-X40 is estimated at $790.87 and represents approximately 2.82% of its list price. Keeping with data center best practices, its cooling fans flow air front to back, while a back-to-front option will be available in the coming months.

Virtualization Scale Calculation: Within one layer 2 domain, the ALU OmniSwitch 6900-X40 can support approximately 500 physical servers and 15,500 VMs. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch (for example, one for management and one for IP storage). Each VM requires one MAC entry, one /32 IP host route and one ARP entry in the data center switch. Clearly, the ALU OmniSwitch 6900-X40 can support a much larger virtualized infrastructure than the number of its physical ports, assuring virtualization scale in layer 2 domains.

WattsATIS/10GbE port:                    3.38
3-Year Cost/WattsATIS/10GbE:             $12.36
Total power cost/3-Year:                 $790.87
3 yr energy cost as a % of list price:   2.82%
TEER Value:                              280
Cooling:                                 Front to Back, Reversible
Power Cost/10GbE/Yr:                     $4.12
List price:                              $28,000
Price/port:                              $437.50

Alcatel-Lucent OmniSwitchTM 6900-X40 Virtual Scale Calculation


/32 IP Host Route Table Size:                      16,000*
MAC Address Table Size:                            128,000
ARP Entry Size:                                    32,000
Assume 30:1 VM:Physical Machine:                   32
Total number of Physical Servers per L2 domain:    500
Number of VMs per L2 domain:                       15,500

* /32 IP host route table size: 16,000 in Hardware. However, the software can support up to 32,000 routes for OSPFv2 and 128,000 for BGP.
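To make the arithmetic behind these figures explicit, the following sketch reproduces the per-port power cost and the virtualization scale calculation described above. The effective electricity price is an assumption (the report's energy pricing model is not restated here), chosen so the output lands near the published figures; the entry counts per server and per VM come from the text.

```python
# Hedged sketch of the power-cost and virtualization-scale arithmetic above.
# ASSUMPTION: the effective energy price (per kWh, including any overhead)
# is inferred for illustration; it is not stated by the report in this section.

WATTS_PER_10GBE_PORT = 3.38      # WattsATIS/10GbE port (from the table)
PORT_COUNT = 64                  # 40 x 10GbE plus 6 x 40GbE counted as 24 x 10GbE
ENERGY_PRICE_PER_KWH = 0.139     # assumed effective $/kWh
HOURS_PER_YEAR = 24 * 365

kwh_per_port_per_year = WATTS_PER_10GBE_PORT * HOURS_PER_YEAR / 1000.0
cost_per_port_per_year = kwh_per_port_per_year * ENERGY_PRICE_PER_KWH
cost_per_port_3yr = 3 * cost_per_port_per_year
total_cost_3yr = cost_per_port_3yr * PORT_COUNT
print(f"~${cost_per_port_per_year:.2f}/port/yr, ~${cost_per_port_3yr:.2f}/port over 3 yrs, "
      f"~${total_cost_3yr:.2f} total over 3 yrs")   # report: $4.12, $12.36, $790.87

# Virtualization scale: each physical server consumes 2 host entries plus one
# entry per VM (the MAC, /32 host route and ARP tables each scale the same way).
ROUTE_TABLE_SIZE = 16_000        # /32 IP host routes in hardware
VMS_PER_SERVER = 30
entries_per_server = 2 + VMS_PER_SERVER                           # 32, as in the table
servers_per_l2_domain = ROUTE_TABLE_SIZE // entries_per_server    # 500
vms_per_l2_domain = servers_per_l2_domain * VMS_PER_SERVER        # 15,000 (report lists ~15,500)
print(servers_per_l2_domain, vms_per_l2_domain)
```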

21

Lippis Enterprises, Inc. 2011

Evaluation conducted at Ixias iSimCity Santa Clara Lab on Ixia test equipment

Open Industry Network Performance & Power Test for Private/Public Data Center Cloud Computing Ethernet Fabrics Report: Evaluating 10 GbE Switches

Discussion:
The Alcatel-Lucent (ALU) OmniSwitch 6900 Stackable LAN Switch series offers yet another option for ToR switch connections to servers while feeding into a core OmniSwitch 10K at 40GbE. The combination of the OmniSwitch 10K at the core and the 6900 at the ToR offers multiple cloud network design options, such as a two-tier, low latency, lossless fabric capable of supporting network and storage traffic in highly dense virtualized infrastructure. The ALU OmniSwitch 6900-X40 demonstrated low, sub-900 ns latency, 100% throughput at wire speed, low 3.38 W/10GbE power consumption, consistently low (roughly 1 µs) IP multicast latency and high virtualization scale, all within a small 1RU form factor. Further, the OmniSwitch 6900-X40 demonstrated fault tolerance attributes required in private and public cloud environments: it delivered 100% traffic throughput while CLI, STP and SNMP were shut down, proving that its network operating system is resilient and that these processes can be shut down while the switch forwards traffic without packet loss. Its buffer architecture is sound, and its link aggregation group (LAG) hashing allows the OmniSwitch 6900-X40 to be connected to the OmniSwitch 10K or other core switches at 80Gbps reliably. Based upon the OmniSwitch 6900-X40's test results, this ToR switch is well targeted at both enterprise and service provider data centers as well as the enterprise campus of a converged enterprise network.


Arista 7504 Series Data Center Switch


On January 8, 2011, Arista Networks launched a new member of its 7500 series family, the 7504 modular switch, which supports 192 10GbE interfaces. The test data and graphics below represent the first public performance and power consumption measurements of the 7504. The 7504 is a smaller version of Arista's award-winning and larger 384-port 10GbE 7508 data center switch.

Arista Networks 7504 Test Configuration
Device under test:  7504 Modular Switch (http://www.aristanetworks.com), running EOS 4.5.5; port density 192
Test equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1, IxOS 6.00 EA, IxNetwork 5.70 EA
Cabling:            Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)

Arista 7504 RFC2544 Layer 2 Latency Test


The Arista 7504 was tested across all 192 ports of 10GbE. For layer 2 traffic, its average latency ranged from a low of 6,831 ns (6.8 µs) to a high of 17,791 ns (17.7 µs), the latter at 64 Byte frames. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the Arista 7504's average latency ranged from a low of 6,821 ns to a high of 12,387 ns (12.3 µs) at jumbo 9216 Byte frames. Its average delay variation for layer 3 traffic ranged between 5 and 10.6 ns, providing consistent latency across all packet sizes at full line rate.

Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    110120             17791.203          3080               9
128                   19560              6831.521           3100               5
256                   13800              7850.714           3280               5.953
512                   16020              8452.021           3420               8
1024                  13540              8635.505           3580               7
1280                  12520              8387.01            3640               6
1518                  12040              8122.078           3720               10
2176                  12500              8995.995           3960               5
9216                  15160              12392.828          6840               10.547

Arista 7504 RFC2544 Layer 3 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    69420              9994.88            3340               9
128                   15300              6821.323           3160               5
256                   13880              7858.688           3440               5.953
512                   15060              8466.526           3380               8
1024                  15340              8647.958           3660               7
1280                  12720              8387.26            3740               6
1518                  12060              8112.948           3720               10
2176                  12820              8996.547           3980               5
9216                  15160              12387.99           6600               10.599


Arista 7504 RFC 2544 L2 & L3 Throughput Test


Packet Size (Bytes):                64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100
Layer 3 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100

The Arista 7504 demonstrated 100% throughput as a percentage of line rate across all 192 10GbE ports. In other words, not a single packet was dropped while the 7504 was presented with enough traffic to populate its 192 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

Arista 7504 Cloud Simulation Test


Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         14208
East-West           Server_to_Database         4394
East-West           HTTP                       9171
East-West           iSCSI-Server_to_Storage    5656
East-West           iSCSI-Storage_to_Server    13245
North-South         Client_to_Server           5428
North-South         Server_to_Client           4225

The Arista 7504 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed at or below 14,208 ns (roughly 14 µs).

Arista 7504 Power Consumption Test


WattsATIS/10GbE port:                    10.31
3-Year Cost/WattsATIS/10GbE:             $37.69
Total power cost/3-Year:                 $7,237.17
3 yr energy cost as a % of list price:   3.14%
TEER Value:                              92
Cooling:                                 Front to Back

The Arista 7504 represents a new breed of cloud network spine switches with power efficiency as a core value. Its WattsATIS/port is 10.31 and its TEER value is 92. Its power cost per 10GbE is estimated at $12.56 per year. The three-year cost to power the 7504 is estimated at $7,237.17 and represents approximately 3% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.


Arista 7504 RFC 3918 IP Multicast Test


At the time of testing in December 2010, the Arista 7504 did not support IP Multicast. Both the 7504 and 7508 now do with EOS 4.8.2.

Arista 7504 RFC 2889 Congestion Test


Packet Size (Bytes):                64     128     256     512     1024    1280    1518    2176    9216
Layer 2 Agg. Forwarding Rate (%):   83.34  83.342  83.345  83.35   83.362  83.362  83.362  83.362  83.362
Layer 3 Agg. Forwarding Rate (%):   83.34  83.342  83.345  83.35   83.362  83.362  83.362  83.362  78.209
L2 Head of Line Blocking:           no     no      no      no      no      no      no      no      no
L3 Head of Line Blocking:           no     no      no      no      no      no      no      no      yes
L2 Back Pressure:                   yes    yes     yes     yes     yes     yes     yes     yes     yes
L3 Back Pressure:                   yes    yes     yes     yes     yes     yes     yes     yes     no
Agg Flow Control Frames:            0      0       0       0       0       0       0       0       0

The Arista 7504 demonstrated nearly 84% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. For L2 forwarding, the 7504 did not exhibit HOL blocking, which means that as the congested 10GbE port filled, it did not impact the performance of other ports. However, during L3 testing, HOL blocking was detected at the 9216 Byte packet size. Note that the 7504 was in beta testing at the time of the Lippis/Ixia test, and there was a corner case at 9216 bytes at L3 that needed further tuning. Arista states that its production code provides wire-speed L2/L3 performance at all packet sizes without any head of line blocking. Note also that while the Arista 7504 shows back pressure, in fact there is none.

Ixia and other test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7504's 2.3GB of packet buffer memory, it can absorb more packets on a port than the MOL allows; therefore, Ixia or any test equipment calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7504. The Arista 7504 can buffer up to 40 ms of traffic per port at 10GbE speeds, which is roughly 400 Mbits, or 5,425 packets of 9216 bytes. Arista 7500 switches do not use back pressure (PAUSE) to manage congestion, while other core switches in the industry, not tested here, transmit pause

frames under congestion. The Arista 7504 offers lossless forwarding for data center applications due to its deep buffer VOQ architecture. With 2.3GB of packet buffer memory, Arista could not find any use case where the 7504 needs to transmit pause frames, and hence that functionality is turned off. The Arista 7504 does honor pause frames and stops transmitting, as many hosts and other switches still cannot support wire-speed 10G traffic. Again, due to its deep buffers, the 7504 can buffer up to 40 ms of traffic per port at 10GbE speeds, a necessary factor for a lossless network. Arista's excellent congestion management results are a direct consequence of its generous buffer memory design as well as its VOQ architecture.
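The back pressure inference described above can be stated in a few lines. The sketch below is a minimal illustration of the RFC 2889 paragraph 5.5.5.2 rule, assuming simple frame counters supplied by the tester; it is not Ixia's implementation, and it shows why a deeply buffered switch can trigger the heuristic without ever transmitting PAUSE frames.

```python
# Minimal sketch of the RFC 2889 (5.5.5.2) back-pressure inference described
# above. ASSUMPTION: frame counts are supplied by the tester; not Ixia's code.

def backpressure_detected(frames_received_on_congested_port: int,
                          frames_transmittable_at_mol: int,
                          pause_frames_seen: int) -> dict:
    """Apply the RFC 2889 heuristic: if the congested port accepted more frames
    than the Maximum Offered Load (MOL) should allow, report back pressure,
    even if no PAUSE (flow control) frames were actually observed."""
    inferred = frames_received_on_congested_port > frames_transmittable_at_mol
    return {
        "back_pressure_inferred": inferred,
        "pause_frames_seen": pause_frames_seen,
        # A deep-buffered switch that absorbs the 150% overload in its own
        # packet memory can make the heuristic fire with zero PAUSE frames.
        "likely_buffering_not_pause": inferred and pause_frames_seen == 0,
    }

# Example: a port offered 150% of line rate accepts more than MOL allows, yet
# the tester counts zero PAUSE frames -- the "anomaly" described in the text.
print(backpressure_detected(1_500_000, 1_000_000, 0))
```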


Discussion:
In a compact 7 RU chassis, the Arista 7504 offers 5 Terabits of fabric capacity, 192 wire speed 10GbE ports and 2.88 BPPS of L2/3 throughput. The Arista 7500 boasts a comprehensive feature set, as well as future support for 40GbE and 100GbE standards without needing a fabric upgrade. According to Arista, with front-to-rear airflow and redundant, hot swappable supervisor, power, fabric and cooling modules, the 7500 is energy efficient, with typical power consumption of approximately 10 watts per port for a fully loaded chassis. In fact, the Lippis/Ixia test measures 10.31 WattsATIS per port for the 7504 with a TEER value of 92. The Arista 7500 is a platform targeted at building low latency, scalable data center networks. Arista EOS is a modular switch operating system with a unique state sharing architecture that separates switch state from protocol processing and application logic. Built on top of a standard Linux kernel, all EOS processes run in their own protected memory space and exchange state through a memory-based database. This multi-process state sharing architecture provides the foundation for in-service software updates and self-healing resiliency. Arista EOS automates the networking infrastructure natively with VMware vSphere via its VM Tracer to provide VM discovery, auto-VLAN provisioning and visibility into the virtual computing environment. All Arista products, including the 7504 switch, run Arista EOS software; the same binary image supports all Arista products. For the Arista 7500 series, engineers have implemented an advanced VOQ architecture that eliminates HOL blocking with a dedicated queue on the ingress side for each output port and each priority level. This is verified in the RFC 2889 layer 2 and 3 congestion Lippis/Ixia test, where there is no HOL blocking even when an egress 10GbE port is saturated with 150% of line rate traffic, and in the RFC 2544 latency Lippis/Ixia test, where the average delay variation range is 5 to 10 ns across all packet sizes. As a result of VOQ, network traffic flows through the switch smoothly and without encountering additional congestion points. In addition, the Arista 7500 VOQ fabric is buffer-less: packets are only stored once, on ingress. This achieves one of the lowest store-and-forward latencies for any core switch product, less than 10 microseconds for a 64 Byte packet at layer 3.
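To make the VOQ idea concrete, here is a toy sketch of an ingress-side virtual output queue structure with one queue per (egress port, priority) pair, the property that keeps a congested output from blocking traffic headed elsewhere. The class, queue discipline and scheduling policy are illustrative assumptions, not Arista's implementation.

```python
# Toy illustration of ingress-side Virtual Output Queues (VOQ): one queue per
# (egress port, priority) so a congested egress cannot head-of-line block
# frames destined to other egress ports. Purely illustrative; not Arista code.
from collections import defaultdict, deque

class IngressVOQ:
    def __init__(self, num_egress_ports: int, num_priorities: int = 8):
        self.queues = defaultdict(deque)   # (egress_port, priority) -> frames
        self.num_egress_ports = num_egress_ports
        self.num_priorities = num_priorities

    def enqueue(self, frame, egress_port: int, priority: int) -> None:
        # Frames are sorted by destination at ingress, so backlog for one
        # egress port never sits in front of traffic for another.
        self.queues[(egress_port, priority)].append(frame)

    def dequeue_for(self, egress_port: int):
        # The scheduler serves only the queues of an egress port that has
        # credit; other ports' queues are untouched, i.e., no HOL blocking.
        for prio in range(self.num_priorities):   # strict priority, for brevity
            q = self.queues[(egress_port, prio)]
            if q:
                return q.popleft()
        return None

voq = IngressVOQ(num_egress_ports=192)
voq.enqueue("frame-A", egress_port=7, priority=0)
voq.enqueue("frame-B", egress_port=8, priority=0)
print(voq.dequeue_for(8))   # frame-B is served even if port 7 is congested
```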


Arista 7124SX 10G SFP Data Center Switch


For this Lippis/Ixia evaluation at iSimCity, Arista submitted the newest addition to the Arista 7100 Series Top-of-Rack (ToR) switches, the 7124SX. The 7124SX is a wire-speed, 24-port cut-through switch in a 1RU (Rack Unit) form factor supporting 100Mb, 1GbE and 10GbE SFP+ optics and cables. It forwards packets at layer 2 and layer 3 in hardware. The 7124SX was built for data center and ultra low latency environments where sub-500 ns latency is required. The 7124SX runs Arista's Extensible Operating System (EOS), which is common across the entire Arista family of switches in the same binary EOS image and can be customized to customer needs, such as financial services, with access to Linux tools. With EOS, the 7124SX possesses in-service software upgrade (ISSU), self-healing stateful fault repair (SFR) and APIs to customize the software. The 7124SX supports Arista's Latency Analyzer (LANZ), a new capability to monitor and analyze network performance, generate early warning of pending congestion events, and track sources of bottlenecks. In addition, a 50GB Solid State Disk (SSD) is available as a factory option and can be used to store packet data captures on the switch and LANZ-generated historical data, and to take full advantage of the Linux-based Arista EOS. Other EOS provisioning and monitoring modules include Zero Touch Provisioning (ZTP) and VM Tracer, which ease set-up and overall manageability of networks in virtualized data center environments.

Arista Networks 7124SX Test Configuration
Device under test:  7124SX 10G SFP Switch (http://www.aristanetworks.com), running EOS-4.6.2; port density 24
Test equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1, IxOS 6.00 EA, IxNetwork 5.70 EA
Cabling:            Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)

Arista 7124SX RFC2544 Layer 2 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    600                541.083            480                8.417
128                   600                548.208            480                5.958
192                   620                553.292            460                9.667
256                   620                527.917            460                5.542
512                   620                528.958            460                7.667
1024                  620                531.583            460                7.833
1280                  620                531.417            460                5.917
1518                  620                531.042            460                9.542
2176                  620                531.417            460                6.667
9216                  620                532.792            460                9.667

Arista 7124SX RFC2544 Layer 3 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    600                561.583            500                9.958
128                   640                582.333            520                5.208
192                   640                589.5              500                9.917
256                   640                567.75             480                5.417
512                   640                570.292            480                7.917
1024                  640                568.792            480                6.875
1280                  640                568.542            480                5.5
1518                  640                564.333            480                9.25
2176                  640                563.833            480                5.25
9216                  640                562.208            480                9.333


For those in the financial services industry, pay particular attention to packet size 192 Bytes, which typically is not covered by many tests but is the most common packet size in financial trading environments. The 7124SX ToR switch was tested across all 24 ports of 10GbE at layer 2 and layer 3 forwarding. The minimum layer 2 latency observed was 460 ns. Its average layer 2 latency was measured from a low of 527 ns to a high of 553 ns, a very tight range across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.9 and 9.6 ns, providing consistent latency across all packet sizes at full line rate. The minimum layer 3 latency measured was 480 ns. The average layer 3 latency of the 7124SX was measured from a low of 561 ns to a high of 589 ns, again a very tight range across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.2 and 9.9 ns, providing consistent latency across all packet sizes at full line rate.

Arista 7124SX RFC 2544 L2 & L3 Throughput Test


Packet Size (Bytes):                64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100
Layer 3 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100

The Arista 7124SX demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Arista 7124SX was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously for L2 and L3 traffic flows. The 7124SX is indeed a 24-port wire-speed switch.

Arista 7124SX RFC 2889 Congestion Test


Packet Size (Bytes):                64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Agg. Forwarding Rate (%):   100  100  100  100  100  100   100   100   100   100
Layer 3 Agg. Forwarding Rate (%):   100  100  100  100  100  100   100   100   100   100
L2 Head of Line Blocking:           no at all packet sizes
L3 Head of Line Blocking:           no at all packet sizes
L2 Back Pressure:                   yes at all packet sizes
L3 Back Pressure:                   yes at all packet sizes
Agg Flow Control Frames:            0 at all packet sizes

The Arista 7124SX demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Arista 7124SX did not exhibit Head of Line (HOL) blocking problems, which means that as a 10GbE port on the Arista 7124SX became congested, it did not impact the performance of other ports, avoiding a cascade of dropped packets and congestion. Uniquely, the Arista 7124SX did indicate to the Ixia test gear that it was using back pressure; however, no flow control frames were detected, and in fact none were sent. Ixia and other

test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7124SX's generous and dynamic buffer allocation, it can absorb more packets on a port than the MOL allows; therefore, Ixia or any test equipment calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7124SX. The Arista 7124SX has a 2MB packet buffer pool and uses Dynamic Buffer Allocation (DBA) to manage congestion. Unlike other architectures that have per-port fixed packet memory, the 7124SX handles microbursts by allocating packet memory to the port that needs it. Such microbursts of traffic can be generated by RFC 2889 congestion tests when multiple traffic sources send traffic to the same destination for a short period of time. Using DBA, a port on the 7124SX can buffer up to 1.7MB of data in its transmit queue.
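As a rough feel for what a dynamically allocated 1.7MB transmit queue buys during an RFC 2889-style many-to-one burst, the sketch below estimates how long a single 10GbE egress port can absorb a 150% offered load before that queue fills. The burst profile is an assumption for illustration; the buffer figure comes from the text above.

```python
# Back-of-the-envelope estimate (illustrative only): how long can one 10GbE
# egress port absorb a 150% offered load using up to 1.7 MB of DBA buffer?

LINE_RATE_BPS = 10e9                       # 10GbE egress drain rate
OFFERED_LOAD_BPS = 1.5 * LINE_RATE_BPS     # RFC 2889-style 150% many-to-one burst
MAX_QUEUE_BYTES = 1.7e6                    # per-port DBA transmit queue (from the text)

excess_bps = OFFERED_LOAD_BPS - LINE_RATE_BPS        # 5 Gb/s accumulates in the buffer
seconds_until_full = MAX_QUEUE_BYTES * 8 / excess_bps
print(f"~{seconds_until_full * 1e3:.2f} ms of 150% overload absorbed losslessly")
# ~2.72 ms -- enough to ride out short microbursts without dropping frames.
```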

Arista 7124SX RFC 3918 IP Multicast Test


Packet Size (Bytes):        64       128      192      256      512      1024     1280     1518     2176     9216
Agg. Tput Rate (%):         100      100      100      100      100      100      100      100      100      100
Agg. Average Latency (ns):  562.822  590.087  592.348  558.435  570.913  561.630  569.221  569.725  569.174  565.652

The Arista 7124SX demonstrated 100% aggregated throughput for IP multicast traffic with latencies ranging from 562 ns at 64 Byte packets to 565 ns at 9216 Byte jumbo frames, the lowest IP multicast latency measured thus far in the Lippis/Ixia set of tests. Further, the Arista 7124SX demonstrated consistently low IP multicast latency.

Arista 7124SX Cloud Simulation Test


Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         7439
East-West           Server_to_Database         692
East-West           HTTP                       4679
East-West           iSCSI-Server_to_Storage    1256
East-West           iSCSI-Storage_to_Server    7685
North-South         Client_to_Server           1613
North-South         Server_to_Client           530

The Arista 7124SX performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed at or below 7,685 ns even under congestion, measured in cut-through mode. Of special note is the low latency observed during north-south server-to-client and east-west server-to-database flows.


Arista 7124SX Power Consumption Test


WattsATIS/10GbE port:                    6.76
3-Year Cost/WattsATIS/10GbE:             $24.71
Total power cost/3-Year:                 $593.15
3 yr energy cost as a % of list price:   4.56%
TEER Value:                              140
Cooling:                                 Front-to-Back, Back-to-Front

The Arista 7124SX represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is a low 6.76 and TEER value is 140. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. The Arista 7124SX consumes slightly more power than a typical 100 Watt light bulb. Its power cost per 10GbE is estimated at $8.24 per year. The three-year cost to power the Arista 7124SX is estimated at $593.15 and represents approximately 4.56% of its list price. Keeping with data center best practices, its cooling fans flow air rear to front and are reversible.


Discussion
The Arista 7124SX offers 24 wire speed 10GbE ports of layer 2 and 3 cut-through forwarding in a 1 RU footprint. Designed with performance in mind, the Arista 7124SX provides ultra low, 500 ns latency, line-rate forwarding and unique dynamic buffer allocation to keep traffic moving under congestion and microburst conditions. These claims were verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 527 and 589 ns for layer 2 and 3 forwarding. Of special note is the 7124SX's minimum latency of 460 ns across a large range of packet sizes. Further, there was minimal delay variation, between 5 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, boosting confidence that the Arista 7124SX will perform well in converged I/O configurations and consistently in demanding data center environments. In addition, the Arista 7124SX demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various EOS high availability features, ensure that the Arista 7124SX is always available and reliable for data center operations. The 7124SX is well suited for low latency applications such as financial trading, Message Passing Interface (MPI) jobs, storage interconnect, virtualized environments and other 10GbE data center deployments. The 7124SX is a single-chip 10GbE product, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The 7124SX is just one of the ToR switches in the Arista 7100 Series of fixed configuration ToR switches. In addition, Arista also offers the 7148SX, with low latency and low jitter over 48 ports, and the 7050S-64 ToR switch for denser 10GbE server connections that require 40GbE uplinks. These Arista ToR switch products connect into Arista's 7500 Series of modular core switches, which provide industry-leading density of 192 to 384 10GbE connections. Across all Arista switches is a single binary image of its EOS operating system, allowing all switches to utilize and employ its high availability features as well as unique operational modules including LANZ, VM Tracer, vEOS and ZTP. The combination of Arista's ToR and Core switches enables data center architects to build a flat, non-blocking, two-tier network with high 10GbE density, low latency, low power consumption and high availability. Arista's two-tier designs can scale to 18,000 nodes using standard Layer 2/Layer 3 protocols. All Arista ToR and Core switches support 32-port Multi-Chassis LAG (Link Aggregation Group), which enables high levels of switch interconnect to be built, creating large scale-out cloud computing facilities.


Arista Networks 7050S-64 Switch


For this Lippis/Ixia evaluation at iSimCity, Arista submitted its highest density and newest Top-of-Rack (ToR) switch, the 7050S-64. The 7050S-64 is a wire speed layer 2/3/4 switch with 48 10GbE SFP+ and four 40GbE QSFP+ ports in a 1RU form factor. Each 40GbE port can also operate as four independent 10GbE ports to provide a total of 64 10GbE ports. The Arista 7050S-64 forwards in cut-through or store-and-forward mode and boasts a shared 9 MB packet buffer pool allocated dynamically to ports that are congested. The 7050S-64 runs Arista's Extensible Operating System (EOS), which is common across the entire Arista family of switches in the same binary EOS image. With EOS, the 7050S-64 possesses in-service software upgrade (ISSU), self-healing stateful fault repair (SFR) and APIs to customize the software. The 7050S-64 supports Zero Touch Provisioning and VM Tracer to automate provisioning and monitoring of the network. In addition, a 50GB Solid State Disk (SSD) is available as a factory option and can be used to store packet data captures on the switch and syslogs over the life of the switch, and to take full advantage of the Linux-based Arista EOS. For those in the financial services industry, pay particular attention to packet size 192 Bytes, which typically is not covered by many tests but is the most common packet size in financial trading environments.

Arista Networks 7050S-64 Test Configuration
Device under test:  7050S-64 Switch (http://www.aristanetworks.com), running EOS-4.6.2; port density 64
Test equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1, IxOS 6.00 EA, IxNetwork 5.70 EA
Cabling:            Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)

Arista 7050S-64 RFC2544 Layer 2 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    1040               914.328            700                8.672
128                   1060               926.766            780                5.266
192                   1260               1110.813           980                9.594
256                   1260               1110.641           980                5.406
512                   1400               1264.906           1140               7.797
1024                  1440               1308.844           1180               6.922
1280                  1440               1309.109           1180               5.516
1518                  1440               1310.766           1180               9.734
2176                  1440               1309.766           1180               5.234
9216                  1440               1305.5             1160               9.75

Arista 7050S-64 RFC2544 Layer 3 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    1180               905.422            720                9.672
128                   1220               951.203            780                5.391
192                   1320               1038.422           860                9.641
256                   1320               1039.141           880                5.641
512                   1460               1191.641           1040               7.781
1024                  1520               1233               1060               6.75
1280                  1520               1232.453           1080               5.578
1518                  1520               1232.484           1080               9.578
2176                  1520               1229.922           1080               5.328
9216                  1520               1225.578           1060               9.859


The 7050S-64 ToR switch was tested in its 64 10GbE port configuration, using its four 40GbE ports as 16 10GbE ports. It was tested for both layer 2 and layer 3 forwarding at 10GbE. The minimum layer 2 latency was 700 ns. The average layer 2 latency was measured from a low of 914 ns to a high of 1,310 ns across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.2 and 9.7 ns, providing consistent latency across all packet sizes at full line rate. The minimum layer 3 latency was 720 ns. The 7050S-64 layer 3 latency was measured from a low of 905 ns to a high of 1,233 ns, again across packet sizes from 64 to 9216 Bytes. Its average delay variation ranged between 5.3 and 9.8 ns, providing consistent latency across all packet sizes at full line rate. The Arista 7050S-64 switch has the lowest latency amongst all 64-port switches tested so far in the Lippis/Ixia set of tests.

Arista 7050S-64 RFC 2544 L2 & L3 Throughput Test


Packet Size (Bytes):                64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100
Layer 3 Throughput (% line rate):   100  100  100  100  100  100   100   100   100   100

The Arista 7050S-64 demonstrated 100% throughput as a percentage of line rate across all 64 10GbE ports. In other words, not a single packet was dropped while the Arista 7050S-64 was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously for L2 and L3 traffic flows. The 7050S-64 is indeed a 64-port wire-speed switch.

Arista 7050S-64 RFC 2889 Congestion Test


Packet Size (Bytes):                64   128  192  256  512  1024  1280  1518  2176  9216
Layer 2 Agg. Forwarding Rate (%):   100  100  100  100  100  100   100   100   100   100
Layer 3 Agg. Forwarding Rate (%):   100  100  100  100  100  100   100   100   100   100
L2 Head of Line Blocking:           no at all packet sizes
L3 Head of Line Blocking:           no at all packet sizes
L2 Back Pressure:                   yes at all packet sizes
L3 Back Pressure:                   yes at all packet sizes
Agg Flow Control Frames:            0 at all packet sizes

The Arista 7050S-64 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Arista 7050S-64 did not exhibit Head of Line (HOL) blocking problems, which means that as a 10GbE port on the Arista 7050S-64 became congested, it did not impact the performance of other ports, preventing a cascade of packet loss. Uniquely, the Arista 7050S-64 did indicate to the Ixia test gear that it was using back pressure; however, no flow control frames were detected, and in fact none were sent. Ixia and other

test equipment calculate back pressure per RFC 2889 paragraph 5.5.5.2, which states that if the total number of received frames on the congestion port surpasses the number of transmitted frames at MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7050S-64's generous and dynamic buffer allocation, it can absorb more packets on a port than the MOL allows; therefore, Ixia or any test equipment calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7050S-64. The Arista 7050S-64 is designed with dynamic buffer pool allocation, so that during a microburst of traffic, as in the RFC 2889 congestion test when multiple traffic sources are destined to the same port, packets are buffered in packet memory. Unlike other architectures that have fixed per-port packet memory, the 7050S-64 uses Dynamic Buffer Allocation (DBA) to allocate packet memory to ports that need it. Under congestion, packets are buffered in a shared packet memory of 9 MBytes. The 7050S-64 uses DBA to allocate up to 5MB of packet memory to a single port for lossless forwarding, as observed during this RFC 2889 congestion test.

Arista 7050S-64 RFC 3918 IP Multicast Test


Packet Size (Bytes):        64      128     192      256      512      1024     1280     1518     2176     9216
Agg. Tput Rate (%):         100     100     100      100      100      100      100      100      100      100
Agg. Average Latency (ns):  894.90  951.95  1069.35  1072.35  1232.06  1272.03  1276.35  1261.49  1272.46  1300.35

The Arista 7050S-64 demonstrated 100% aggregated throughput for IP multicast traffic with latencies ranging from 894 ns at 64 Byte packets to 1,300 ns at 9216 Byte jumbo frames, the lowest IP multicast latency measured thus far among 64-port switches in the Lippis/Ixia set of tests.

Arista 7050S-64 Cloud Simulation Test


Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         8173
East-West           Server_to_Database         948
East-West           HTTP                       4609
East-West           iSCSI-Server_to_Storage    1750
East-West           iSCSI-Storage_to_Server    8101
North-South         Client_to_Server           1956
North-South         Server_to_Client           1143

The Arista 7050S-64 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed at or below 8,173 ns even under congestion, measured in cut-through mode. Of special note is the low latency observed during north-south server-to-client and east-west server-to-database flows.


Arista 7050S-64 Power Consumption Test


WattsATIS/10GbE port:                    2.34
3-Year Cost/WattsATIS/10GbE:             $8.56
Total power cost/3-Year:                 $547.20
3 yr energy cost as a % of list price:   1.82%
TEER Value:                              404
Cooling:                                 Front-to-Back, Back-to-Front

The Arista 7050S-64 represents a new breed of cloud network leaf or ToR switches with power efficiency as a core value. Its WattsATIS/port is a very low 2.34 and its TEER value is 404, the lowest power consumption we have measured thus far across all vendors. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. The Arista 7050S-64 consumes only slightly more power than a typical 100-Watt light bulb. Its power cost per 10GbE is estimated at $2.85 per year. The three-year cost to power the Arista 7050S-64 is estimated at $547.20 and represents approximately 1.82% of its list price. Keeping with data center best practices, Arista provides both front-to-rear and rear-to-front airflow options, allowing the switch to be mounted as cabling needs dictate. Arista has taken the added step of color coding the fan and power supply handles, red for hot aisle and blue for cold aisle, a very nice and unique industrial design detail. Arista's product literature states that with typical power consumption of less than 2 Watts/port with twinax copper cables and less than 3 Watts/port with SFP/QSFP lasers, the 7050S-64 provides industry-leading power efficiency for the data center. We concur with this claim and its power consumption values.


Discussion
The Arista 7050S-64 offers 64 wire speed 10GbE ports of layer 2 and 3 forwarding, in cut-through or store-and-forward mode, in a 1 RU footprint. It can also be configured with 48 ports of 10GbE and four 40GbE uplink ports. Designed with performance, low power and forward migration to 40GbE in mind, the Arista 7050S-64 provides low latency, line-rate forwarding and unique dynamic buffer allocation to keep traffic moving under congestion and microburst conditions. These claims were verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 905 and 1,310 ns for layer 2 and 3 forwarding. Of all 64-port 10GbE switches tested so far, the Arista 7050S-64 offers the lowest latency. Further, there was minimal delay variation, between 5 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, boosting confidence that the Arista 7050S-64 will perform well in converged I/O configurations and consistently in demanding data center environments. In addition, the Arista 7050S-64 demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various EOS high availability features, ensure that the Arista 7050S-64 is always available and reliable for data center operations. The 7050S-64 is a single-chip 10GbE product, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity. The 7050S-64 is but one ToR switch in the Arista fixed configuration ToR products; in addition, Arista also offers the 7100 ToR switches for ultra-low latency requirements. These Arista ToR switch products connect into Arista's 7500 Series of modular core switches, which provide industry-leading density of 192 to 384 10GbE connections. Across all Arista switches is a single binary image of its EOS operating system, allowing all switches to utilize and employ its high availability features as well as unique operational modules including VM Tracer, vEOS and ZTP. The combination of Arista's ToR and Core switches enables data center architects to build a flat, non-blocking, two-tier network with high 10GbE density, low latency, low power consumption and high availability. All Arista ToR and Core switches support 32-link Multi-Chassis LAG (Link Aggregation Group), which enables high levels of switch interconnect to be built, creating large scale-out cloud computing facilities. The 7050S-64 is well suited as a ToR switch for high-density server connections, where it would connect into Arista 7500 Core/Spine switches; this configuration scales up to 18,000 10GbE connections in a non-blocking two-tier architecture. Alternatively, for 1,500 to 3,000 10GbE connections, the 7050S-64 can serve as both ToR/leaf and Core/Spine. The 7050S-64 is well suited for low latency applications, storage interconnect, virtualized environments and other 10GbE data center deployments. Where high performance, low latency and low power consumption are all high priority attributes, the 7050S-64 should do very well.
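The two-tier scale figures quoted above come down to simple port arithmetic. The sketch below parameterizes that arithmetic for a generic leaf/spine design; the spine count, port counts and uplink counts in the example are illustrative assumptions rather than Arista's published design, so the output is an example of the method, not a reproduction of the 18,000-connection figure.

```python
# Generic leaf/spine (two-tier) scale arithmetic. All parameters below are
# illustrative assumptions; the report's 18,000-connection figure corresponds
# to a specific Arista design whose exact parameters are not spelled out here.

def two_tier_scale(spine_count: int, spine_ports: int,
                   leaf_ports: int, leaf_uplinks: int) -> dict:
    """Return how many leaves and server-facing 10GbE ports a two-tier
    leaf/spine fabric supports, given per-switch port budgets."""
    leaves = (spine_count * spine_ports) // leaf_uplinks   # uplink ports limit leaf count
    server_ports = leaves * (leaf_ports - leaf_uplinks)    # remaining leaf ports face servers
    oversub = (leaf_ports - leaf_uplinks) / leaf_uplinks   # edge oversubscription ratio
    return {"leaves": leaves, "server_ports": server_ports,
            "oversubscription": round(oversub, 2)}

# Example: four 384-port spines and 64-port leaves using 16 ports as uplinks.
print(two_tier_scale(spine_count=4, spine_ports=384, leaf_ports=64, leaf_uplinks=16))
# {'leaves': 96, 'server_ports': 4608, 'oversubscription': 3.0}
```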


BLADE Network Technologies, an IBM Company


BLADE Network Technologies is an IBM company, referred to hereafter as BLADE. BLADE submitted two ToR switches for evaluation in the Lippis/Ixia test at iSimCity. Both products are based on a single-chip 10GbE network design, which provides consistent throughput and latency over all packet ranges. Each product is profiled here along with its resulting test data. The IBM BNT RackSwitch G8124 is a 24-port 10GbE switch, specifically designed for the data center, providing a virtual, cooler and easier network solution, according to BLADE. The IBM BNT RackSwitch G8124 is equipped with enhanced processing and memory to support large-scale aggregation layer 10GbE switching environments and applications, such as high frequency financial trading and IPTV.

IBM BNT RackSwitch G8124

IBM BNT RackSwitch G8124 Test Configuration
Device under test:  IBM BNT RackSwitch G8124 (http://www.bladenetwork.net/RackSwitchG8124.html), running software version 6.5.2.3; port density 24
Test equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1, IxOS 6.00 EA, IxNetwork 5.70 EA
Cabling:            Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)

IBM BNT RackSwitch G8124* RFC2544 Layer 2 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    700                651.75             600                9.25
128                   700                671.5              620                5.083
256                   740                709.458            660                5.417
512                   740                709.042            660                7.875
1024                  740                708.25             660                7.167
1280                  740                709.708            660                5.25
1518                  740                707.917            660                10.125
2176                  740                708.042            660                5.167
9216                  740                706.625            660                10.125

IBM BNT RackSwitch G8124* RFC2544 Layer 3 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    760                642.708            600                8.75
128                   760                659.667            600                5.667
256                   800                701.292            640                5.458
512                   800                699.792            660                8.083
1024                  800                699                660                7.292
1280                  800                699.917            660                7.833
1518                  800                698.5              660                10.375
2176                  800                698.333            660                5.125
9216                  800                696.917            660                11.083

The IBM BNT RackSwitch G8124 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 651 ns to a high of 709 ns for layer 2 traffic. Its average delay variation ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.

* The IBM BLADE switches were tested differently than all other switches. The IBM BNT RackSwitch G8264 and G8124 were configured and tested via the cut-through test method, while all other switches were configured and tested via the store-and-forward method. During store-and-forward testing, the test equipment subtracts packet transmission latency, decreasing the measured latency by the time it takes to transmit a packet from input to output port. Note that other potential device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.
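To see how much the two measurement conventions can differ, the sketch below computes the frame serialization time that store-and-forward style measurements effectively remove. The 10GbE rate and packet sizes match the tests above; the function itself, including the preamble and inter-frame gap allowance, is an illustrative assumption rather than the test equipment's calculation.

```python
# Illustration of why cut-through and store-and-forward latency figures are
# hard to compare: the serialization (transmission) time that store-and-forward
# measurements subtract grows with frame size. Not the test equipment's code.

LINE_RATE_BPS = 10e9   # 10GbE

def serialization_time_ns(frame_bytes: int, preamble_ifg_bytes: int = 20) -> float:
    """Time on the wire for one frame; the preamble/SFD + inter-frame gap
    allowance is an assumption included here for illustration."""
    return (frame_bytes + preamble_ifg_bytes) * 8 / LINE_RATE_BPS * 1e9

for size in (64, 192, 1518, 9216):
    print(f"{size:>5} B frame: ~{serialization_time_ns(size):7.1f} ns of wire time")
# A 9216 B jumbo frame spends roughly 7.4 us on the wire, which is larger than
# many of the switch latencies reported here, so the convention matters.
```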


For 64 Byte packets, the Ixia test gear was configured for a staggered start. For layer 3 traffic, the IBM BNT RackSwitch G8124's average latency ranged from a low of 642 ns to a high of 701 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 ns and 11 ns, providing consistent latency across all packet sizes at full line rate.

IBM BNT RackSwitch G8124 RFC 2544 L2 & L3 Throughput Test


Packet Size (Bytes):                64   128  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput (% line rate):   100  100  100  100  100   100   100   100   100
Layer 3 Throughput (% line rate):   100  100  100  100  100   100   100   100   100

The IBM BNT RackSwitch G8124 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the IBM BNT RackSwitch G8124 was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

IBM BNT RackSwitch G8124 RFC 2889 Congestion Test


Packet Size (Bytes):                64       128      256      512      1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate (%):   100      100      100      100      100      100      100      100      100
Layer 3 Agg. Forwarding Rate (%):   100      100      100      100      100      100      100      100      100
Head of Line Blocking:              no       no       no       no       no       no       no       no       no
Back Pressure:                      yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames:         8202900  6538704  4531140  4303512  5108166  9139344  8690688  8958128  9022618
L3 Agg Flow Control Frames:         8291004  6580984  4531070  4315590  4398264  4661968  4433112  4630880  4298994

The IBM BNT RackSwitch G8124 demonstrated 100% of aggregated forwarding rate as percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The IBM BNT RackSwitch G8124 did not exhibit Head of Line (HOL) blocking problems, which means that as the IBM BNT RackSwitch G8124-10GbE port became congested, it did not impact performance of other ports. As with most ToR switches, the IBM BNT RackSwitch G8124 did use back pressure as the Ixia test gear detected flow control frames.


IBM BNT RackSwitch G8124 RFC 3918 IP Multicast Test


Packet Size (Bytes):        64   128  256  512  1024  1280  1518  2176  9216
Agg. Tput Rate (%):         100  100  100  100  100   100   100   100   100
Agg. Average Latency (ns):  650  677  713  715  713   725   707   713   712

The IBM BNT RackSwitch G8124 demonstrated 100% aggregated throughput for IP multicast traffic with latencies ranging from 650 ns at 64 Byte packets to 715 ns at 512 Byte packets.

IBM BNT RackSwitch G8124 Cloud Simulation Test


Traffic Direction   Traffic Type               Avg Latency (ns)
East-West           Database_to_Server         613
East-West           Server_to_Database         598
East-West           HTTP                       643
East-West           iSCSI-Server_to_Storage    600
East-West           iSCSI-Storage_to_Server    619
North-South         Client_to_Server           635
North-South         Server_to_Client           581

The IBM BNT RackSwitch G8124 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, with latency staying at or below 643 ns across all traffic types. Of special note is the consistency in latency across all traffic profiles observed during this test of the IBM BNT RackSwitch G8124.

IBM BNT RackSwitch G8124 Power Consumption Test


WattsATIS/10GbE port:                    5.5
3-Year Cost/WattsATIS/10GbE:             $20.11
Total power cost/3-Year:                 $482.59
3 yr energy cost as a % of list price:   4.04%
TEER Value:                              172
Cooling:                                 Front to Back

The IBM BNT RackSwitch G8124 represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is 5.5 and TEER value is 172. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE is estimated at $6.70 per year. The three-year cost to power the IBM BNT RackSwitch G8124 is estimated at $482.59 and represents approximately 4% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.

Discussion:
The IBM BNT RackSwitch G8124 offers 24 wire speed 10GbE ports in a 1 RU footprint. Designed with performance in mind, the IBM BNT RackSwitch G8124 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, and large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 651 and 709 ns for layer 2 and between 642 and 701 ns for layer 3 forwarding. Further, there was minimal delay variation, between 5 and 10 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly. In addition, the IBM BNT RackSwitch G8124 demonstrated 100% throughput at all packet sizes, independent of layer 2 or layer 3 forwarding. Redundant power and fans, along with various high availability features, ensure that the IBM BNT RackSwitch G8124 is always available for business-sensitive traffic. The low latency offered by the IBM BNT RackSwitch G8124 makes it ideal for latency sensitive applications, such as high performance computing clusters and financial applications. Further, the IBM BNT RackSwitch G8124 supports the newest data center protocols, including CEE for support of FCoE, which was not evaluated in the Lippis/Ixia test.


BLADE Network Technologies, an IBM Company


The IBM BNT RackSwitch G8264 is a wire speed 10 and 40GbE switch specifically designed for the data center. The IBM BNT RackSwitch G8264 is offered in two configurations: either as a 64-port wire speed 10GbE ToR switch, or as a 48-port wire speed 10GbE switch with four wire speed 40GbE ports. The total bandwidth of the IBM BNT RackSwitch G8264 is 1.2 Terabits per second, packaged in a 1 RU footprint. For the Lippis/Ixia test, the IBM BNT RackSwitch G8264 was configured as a 64-port wire speed 10GbE ToR switch. The IBM BNT RackSwitch G8264 ToR switch was tested across all 64 ports of 10GbE. Its average latency ranged from a low of 1081 ns to a high of 1421 ns for layer 2 traffic. Its average delay variation ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the IBM BNT RackSwitch G8264's average latency ranged from a low of 1075 ns to a high of 1422 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate.

IBM BNT RackSwitch G8264

IBM BNT RackSwitch G8264 Test Configuration
Device under test:  IBM BNT RackSwitch G8264 (http://www.bladenetwork.net/RackSwitchG8264.html), running software version 6.4.1.0; port density 64
Test equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1, IxOS 6.00 EA, IxNetwork 5.70 EA
Cabling:            Optical SFP+ connectors; laser optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)

IBM BNT RackSwitch G8264* RFC2544 Layer 2 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    1240               1081.344           940                8.688
128                   1300               1116.828           980                5.219
256                   1380               1214.922           1020               5.609
512                   1540               1374.25            1240               7.609
1024                  1580               1422               1260               6.922
1280                  1580               1421.281           1260               7.563
1518                  1580               1421.75            1260               9.641
2176                  1580               1419.891           1280               5.047
9216                  1580               1415.359           1280               9.563

IBM BNT RackSwitch G8264* RFC2544 Layer 3 Latency Test


Packet Size (Bytes)   Max Latency (ns)   Avg Latency (ns)   Min Latency (ns)   Avg. Delay Variation (ns)
64                    1240               1075.391           940                8.609
128                   1300               1121.016           980                5.172
256                   1380               1213.547           1020               5.641
512                   1540               1370.938           1240               7.672
1024                  1580               1421.938           1260               6.734
1280                  1580               1422.625           1260               6.25
1518                  1580               1421.828           1260               9.422
2176                  1580               1419.734           1280               5.078
9216                  1580               1415.781           1280               9.578

* The IBM BLADE switches were tested differently than all other switches. The IBM BNT RackSwitch G8264 and G8124 were configured and tested via the cut-through test method, while all other switches were configured and tested via the store-and-forward method. During store-and-forward testing, the test equipment subtracts packet transmission latency, decreasing the measured latency by the time it takes to transmit a packet from input to output port. Note that other potential device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.


IBM BNT RackSwitch G8264 RFC 2544 L2 & L3 Throughput Test


Packet Size (Bytes):                64   128  256  512  1024  1280  1518  2176  9216
Layer 2 Throughput (% line rate):   100  100  100  100  100   100   100   100   100
Layer 3 Throughput (% line rate):   100  100  100  100  100   100   100   100   100

The IBM BNT RackSwitch G8264 demonstrated 100% throughput as a percentage of line rate across all 64 10GbE ports. In other words, not a single packet was dropped while the IBM BNT RackSwitch G8264 was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

IBM BNT RackSwitch G8264 RFC 2889 Congestion Test


Packet Size (Bytes):                64       128      256      512      1024     1280     1518     2176     9216
Layer 2 Agg. Forwarding Rate (%):   100      100      100      100      100      100      100      100      100
Layer 3 Agg. Forwarding Rate (%):   100      100      100      100      100      100      100      100      100
Head of Line Blocking:              no       no       no       no       no       no       no       no       no
Back Pressure:                      yes      yes      yes      yes      yes      yes      yes      yes      yes
L2 Agg Flow Control Frames:         7498456  5582384  6014492  5352004  3751186  514182   844520   903536   9897178
L3 Agg Flow Control Frames:         7486426  5583128  6008396  5351836  5348868  5096720  4982224  4674228  3469558

The IBM BNT RackSwitch G8264 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The IBM BNT RackSwitch G8264 did not exhibit Head of Line (HOL) blocking, which means that as its congested 10GbE port filled, the performance of other ports was not affected. As with most ToR switches, the IBM BNT RackSwitch G8264 did use back pressure, as the Ixia test gear detected flow control frames.
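As a back-of-the-envelope illustration of why back pressure appears in this test, the sketch below works out the excess load when one egress port is offered 150% of line rate. The two-senders-at-75% split is an assumption for illustration only; the exact RFC 2889 port mapping is not spelled out in this section.

```python
# Sketch: what 150% offered load into one 10GbE egress port implies.
# Assumption (illustrative): two ingress ports each offer 75% of line rate
# toward the same egress port.

LINE_RATE_GBPS = 10.0

def congestion_summary(oversubscription: float = 1.5, senders: int = 2) -> None:
    offered = oversubscription * LINE_RATE_GBPS   # 15 Gbps offered to the egress port
    excess = offered - LINE_RATE_GBPS             # 5 Gbps cannot be forwarded
    per_sender_offered = offered / senders        # 7.5 Gbps offered by each sender
    # Without loss, flow control (802.3x PAUSE) must slow each sender so the
    # combined rate fits the 10 Gbps egress port.
    per_sender_allowed = LINE_RATE_GBPS / senders
    print(f"offered {offered} Gbps, excess {excess} Gbps")
    print(f"each sender paused from {per_sender_offered} to {per_sender_allowed} Gbps "
          f"({per_sender_allowed / per_sender_offered:.0%} of its offered rate)")

if __name__ == "__main__":
    congestion_summary()
```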


IBM BNT RackSwitch G8264 RFC 3918 IP Multicast Test


Frame Size (Bytes)  Agg Throughput Rate (%)  Agg Average Latency (ns)
64                  100                      1059
128                 100                      1108
256                 100                      1139
512                 100                      1305
1024                100                      1348
1280                100                      1348
1518                100                      1332
2176                100                      1346
9216                100                      1342

The IBM BNT RackSwitch G8264 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 1059 ns for 64-byte packets to 1348 ns for 1024- and 1280-byte packets.

IBM BNT RackSwitch G8264 Cloud Simulation Test


Traffic Direction  Traffic Type              Avg Latency (ns)
East-West          Database_to_Server        1060
East-West          Server_to_Database        1026
East-West          HTTP                      1063
East-West          iSCSI-Server_to_Storage   1026
East-West          iSCSI-Storage_to_Server   1056
North-South        Client_to_Server          1058
North-South        Server_to_Client          1017

The IBM BNT RackSwitch G8264 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed at or below 1063 ns. Of special note is the consistency in latency across all traffic profiles observed during this test of the IBM BNT RackSwitch G8264.

IBM BNT RackSwitch G8264 Power Consumption Test


WattsATIS/10GbE port: 3.92
3-Year Cost/WattsATIS/10GbE: $14.33
Total power cost/3-Year: $917.22
3-year energy cost as a % of list price: 4.08%
TEER Value: 241
Cooling: Front to Back

The IBM BNT RackSwitch G8264 represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is 3.92 and TEER value is 241. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE is estimated at $4.78 per year. The three-year cost to power the IBM BNT RackSwitch G8264 is estimated at $917.22 and represents approximately 4% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.
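The sketch below shows how the per-port and three-year power cost figures relate arithmetically. The electricity rate is an assumption (roughly $0.14/kWh back-solves from the published numbers); the rate actually used by the report is not stated in this section, so treat the output as approximate rather than a reproduction of the methodology.

```python
# Sketch: relating WattsATIS/port to the published cost figures.
# Assumption: electricity at about $0.14 per kWh.

HOURS_PER_YEAR = 24 * 365

def power_cost(watts_per_port, ports, dollars_per_kwh=0.14, years=3):
    kwh_per_port_year = watts_per_port * HOURS_PER_YEAR / 1000.0
    cost_per_port_year = kwh_per_port_year * dollars_per_kwh
    total = cost_per_port_year * years * ports
    return cost_per_port_year, total

if __name__ == "__main__":
    per_year, total_3yr = power_cost(watts_per_port=3.92, ports=64)
    print(f"~${per_year:.2f} per 10GbE port per year, ~${total_3yr:.2f} for 64 ports over 3 years")
    # Compare with the published $4.78/port/year and $917.22 three-year total.
```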

Discussion:
The IBM BNT RackSwitch G8264 was designed with top performance in mind, providing line-rate, high-bandwidth switching, filtering and traffic queuing with low, consistent latency. This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 1081 and 1422 ns for layer 2 and between 1075 and 1422 ns for layer 3 forwarding. Further, there was minimal delay variation, between 5 and 9 ns, across all packet sizes for layer 2 and layer 3 forwarding, demonstrating the switch architecture's ability to forward packets at nanosecond speeds consistently under load. In addition, the IBM BNT RackSwitch G8264 demonstrated 100% throughput at all packet sizes, independent of layer 2 or layer 3 forwarding. The IBM BNT RackSwitch G8264 also demonstrated 100% aggregated throughput at all packet sizes with latency measured in the 1059 to 1348 ns range during the Lippis/Ixia RFC 3918 IP multicast test. The nanosecond latency offered by the IBM BNT RackSwitch G8264 makes it ideal for latency-sensitive applications, such as high-performance computing clusters and financial applications. Further, the IBM BNT RackSwitch G8264 supports the newest data center protocols, including CEE for support of FCoE, which was not evaluated in the Lippis/Ixia test.


Brocade VDX™ 6720-24


For this Lippis/Ixia evaluation at iSimCity, Brocade submitted the industry's first fabric-enabled Top-of-Rack (ToR) switch, the VDX 6720-24, built with Brocade's Virtual Cluster Switching (VCS) technology, an Ethernet fabric innovation that addresses the unique requirements of evolving data center environments. The Brocade VDX 6720-24 is a 24-port 10 Gigabit Ethernet (10GbE) ToR switch. Brocade VDX 6720 Data Center Switches are 10GbE line-rate, low-latency, lossless switches available in 24-port (1U) and 60-port (2U) models. The Brocade VDX 6720 series switch is based on sixth-generation ASIC fabric switching technology and runs the feature-dense Brocade Network Operating System (NOS). The VDX 6720-24 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 510 ns to a high of 619 ns for layer 2 traffic. Its average delay variation ranged between 5 and 8 ns, providing consistent latency across all packet sizes at full line rate.

Brocade VDX™ 6720-24 Test Configuration
Device under test: VDX 6720-24 (http://www.brocade.com/), software version 2.0.0a
Test equipment: Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U application server (http://www.ixiacom.com/); IxAutomate 6.90 GA SP1; IxOS 6.00 EA; IxNetwork 5.70 EA
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)
Port density: 24

Brocade VDX TM 6720-24 RFC2544 Layer 2 Latency Test


Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  760               527.042           440               8.75
128                 640               510.708           420               5.375
256                 660               570.083           480               5.875
512                 680               619.458           560               7.75
1024                680               615.542           560               6.375
1280                680               615.583           560               8.25
1518                680               614.208           560               8.75
2176                680               612.167           560               5.167
9208                680               610.958           560               6.542

Note: The switch's maximum frame size is 9208 bytes, so 9208-byte frames were tested in place of 9216.

Brocade VDX TM 6720-24 RFC2544 Layer 3 Latency Test


The Brocade VDX 6720-24 is a layer 2 switch and thus did not participate in the layer 3 tests.


Brocade VDX TM 6720-24 RFC 2544 L2 Throughput Test



The Brocade VDX 6720-24 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Brocade VDX 6720-24 was presented with enough traffic to populate its 24 10GbE ports at line rate simultaneously for L2 traffic flows.

Frame Size (Bytes)  Layer 2 Throughput (% Line Rate)
64                  100
128                 100
256                 100
512                 100
1024                100
1280                100
1518                100
2176                100
9208                100

Brocade VDX TM 6720-24 RFC 2889 Congestion Test


Frame (Bytes)  L2 Agg Forwarding Rate (% Line Rate)  L2 Head of Line Blocking  L2 Back Pressure  Agg Flow Control Frames
64             100                                   no                        yes               6281422
128            100                                   no                        yes               7108719
256            100                                   no                        yes               6295134
512            100                                   no                        yes               5677393
1024           100                                   no                        yes               5373794
1280           100                                   no                        yes               5206676
1518           100                                   no                        yes               5163746
2176           100                                   no                        yes               4943877
9208           100                                   no                        yes               4317694

The Brocade VDX 6720-24 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was flooded at 150% of line rate. The Brocade VDX 6720-24 did not exhibit Head of Line (HOL) blocking, which means that as the Brocade VDX 6720-24 port became congested, it did not impact the performance of other ports. As with most ToR switches, the Brocade VDX 6720-24 uses back pressure, as the Ixia test gear detected flow control frames.


Brocade VDX™ 6720-24 Cloud Simulation Test

Traffic Direction  Traffic Type              Avg Latency (ns)
East-West          Database_to_Server        7653
East-West          Server_to_Database        675
East-West          HTTP                      4805
East-West          iSCSI-Server_to_Storage   1298
East-West          iSCSI-Storage_to_Server   7901
North-South        Client_to_Server          1666
North-South        Server_to_Client          572

The Brocade VDX 6720-24 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and northsouth traffic flows. Zero packet loss was observed as its latency stayed under 7901 ns, measured in cut-through mode. Of special note is the low latency of 572 ns and 675 ns observed during north-south server-to-client and east-west server-to-database flows respectively.

Brocade VDX™ 6720-24 Power Consumption Test

WattsATIS/10GbE port: 3.05
3-Year Cost/WattsATIS/10GbE: $11.15
Total power cost/3-Year: $267.62
3-year energy cost as a % of list price: 2.24%
TEER Value: 310
Cooling: Front to Back

The Brocade VDX 6720-24 represents a new class of Ethernet fabric enabled fixed port switches with power efficiency being a core value. Its WattsATIS/ port is a very low 3.05 and TEER value is 310. Note TEER values represent the ratio of work performed over energy consumption. Hence, higher TEER values are more desirable. Its power cost per 10GbE is estimated at $3.72 per year. The three-year cost to power the Brocade VDX 6720-24 is estimated at $267.62 and represents approximately 2.24% of its list price. Keeping with data center best practices, its cooling fans flow air rear to front. There is also a front to back airflow option available to meet all customer cooling requirements.


Discussion:
New Ethernet fabric-based architectures are emerging to support highly virtualized data center environments and the building of cloud infrastructures, allowing IT organizations to be more agile and customer responsive. Ethernet fabrics will transition data center networking from inefficient and cumbersome Spanning Tree Protocol (STP) networks to high-performance, flat, layer 2 networks built for the east-west traffic demanded by server virtualization and highly clustered applications. The Brocade VDX 6720-24 with VCS technology offers 24 wire-speed 10GbE ports of layer 2 cut-through forwarding in a 1 RU footprint.

Designed with performance in mind, the Brocade VDX 6720-24 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, plus large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 510 and 619 ns for layer 2 forwarding. Further, there was minimal delay variation, between 5 and 8 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, and boosting confidence that the Brocade VDX 6720-24 will perform well in converged I/O configurations. In addition, the Brocade VDX 6720-24 demonstrated 100% throughput at all packet sizes. Redundant power and fans, along with various high availability features, ensure that the Brocade VDX 6720-24 is always available and reliable for data center operations. The VDX 6720-24 is built on an innovative Brocade-designed ASIC, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity.

The VDX 6720-24 is the entry level of the VDX family, which can be configured as a single logical chassis and scales to support 16, 24, 40, 50 and 60 ports of 10GbE. Brocade VCS capabilities allow customers to start from small two-switch ToR deployments, such as the VDX 6720-24 tested here, and scale to large virtualization-optimized Ethernet fabrics with tens of thousands of ports over time. Furthermore, Brocade claims that VCS-based Ethernet fabrics are convergence ready, allowing customers to deploy one network for storage (Fibre Channel over Ethernet (FCoE), iSCSI, NAS) and IP data traffic. Brocade VCS technology boasts lower capital cost by collapsing the access and aggregation layers of a three-tier network into two, thus reducing operating cost through managing fewer devices. Customers will be able to seamlessly transition their existing multi-tier hierarchical networks to a much simpler, virtualized and converged data center network, moving at their own pace, according to Brocade.


Brocade VDX™ 6730-32


For this Lippis/Ixia evaluation at iSimCity, Brocade submitted the industry's first fabric-enabled Top-of-Rack (ToR) switch of its kind, the VDX 6730-32, built with Brocade's Virtual Cluster Switching (VCS) technology, an Ethernet fabric innovation that addresses the unique requirements of evolving data center environments. The VDX 6730-32 is a fixed-configuration ToR convergence switch supporting eight 8 Gbps native Fibre Channel ports plus 24 10GbE ports, making it well suited to consolidated I/O applications. In addition, the VDX 6730-32 supports Ethernet-based storage connectivity for Fibre Channel over Ethernet (FCoE), iSCSI and NAS, protecting investment in Fibre Channel Storage Area Networks and Ethernet fabrics. The Brocade VDX 6730 is available in two models: the 2U Brocade VDX 6730-76 with 60 10GbE LAN ports and 16 8 Gbps native Fibre Channel ports, and the 1U Brocade VDX 6730-32 with 24 10GbE LAN ports and eight 8 Gbps native Fibre Channel ports.

Brocade VDX™ 6730-32 Test Configuration*
Device under test: VDX 6730-32 (http://www.brocade.com/), software version Brocade NOS 2.1
Test equipment: Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S (16-port 10G module); Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G); Xcellon Flex 4x40GEQSFP+ (4-port 40G) (http://www.ixiacom.com/); IxOS 6.10 EA SP2; IxNetwork 6.10 EA
Cabling: Optical SFP+ connectors; laser-optimized duplex LC-LC 50 micron multimode fiber; 850nm SFP+ transceivers; Siemon QSFP+ passive copper cable, 40GbE, 3 meters, QSFP-FA-010; Siemon Moray low power active optical cable assemblies, single mode, QSFP+ 40GbE optical cable, QSFP30-03, 7 meters (www.siemon.com)
Port density: 24

* The Brocade VDX 6730-32 was configured with 24 10GbE ports.

Brocade VDX TM 6730-32 RFC2544 Layer 2 Latency Test


Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  760               521.17            440               8.71
128                 660               530.04            460               5.46
192                 680               540.75            480               9.25
256                 720               577.92            500               5.63
512                 740               606.50            580               7.71
1024                740               605.17            580               6.63
1280                720               604.92            580               4.75
1518                740               607.75            580               9.63
2176                740               606.42            580               5.17
9216                740               629.00            580               9.67

Note: The test was run between far ports in order to measure across-chip performance and latency.

Brocade VDX™ 6730-32 RFC2544 Layer 3 Latency Test


The Brocade VDX 6730-32 is a layer 2 switch and thus did not participate in the layer 3 tests.


Brocade VDX TM 6730-32 RFC 2544 L2 Throughput Test



For the Fall 2011 Lippis/Ixia test, we populated and tested the Brocade VDX 6730-32's 10GbE functionality; it is a line-rate, low-latency, lossless ToR switch in a 1U form factor. The Brocade VDX 6730 series switches are based on sixth-generation custom ASIC fabric switching technology and run the feature-dense Brocade Network Operating System (NOS).
Frame Size (Bytes)  Layer 2 Throughput (% Line Rate)
64                  100
128                 100
192                 100
256                 100
512                 100
1024                100
1280                100
1518                100
2176                100
9216                100

Brocade VDX TM 6730-32 RFC 2889 Congestion Test


Frame (Bytes)  L2 Agg Forwarding Rate (% Line Rate)  L2 Head of Line Blocking  L2 Back Pressure  L2 Agg Flow Control Frames
64             100                                   no                        yes               6192858
128            100                                   no                        yes               7126424
192            100                                   no                        yes               5278148
256            100                                   no                        yes               5745640
512            100                                   no                        yes               5706859
1024           100                                   no                        yes               5366002
1280           100                                   no                        yes               5153639
1518           100                                   no                        yes               5789807
2176           100                                   no                        yes               16982513
9216           100                                   no                        yes               4317697

The VDX 6730-32 ToR switch was tested across all 24 ports of 10GbE. Its average latency ranged from a low of 521 ns to a high of 606 ns for layer 2 traffic, measured in cut-through mode. The VDX 6730-32 demonstrates one of the most consistent latency results we have measured: its average delay variation ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate, a comforting result given the VDX 6730-32's design focus on converged network/storage infrastructure. The Brocade VDX 6730-32 demonstrated 100% throughput as a percentage of line rate across all 24 10GbE ports. In other words, not a single packet was dropped while the Brocade VDX 6730-32 was presented with enough traffic to populate its 24 10GbE ports at line rate. The Brocade VDX 6730-32 also demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. A single 10GbE port was oversubscribed at 150% of line rate. The Brocade VDX 6730-32 did not exhibit HOL blocking behavior, which means that as the 10GbE port became congested, it did not impact the performance of other ports, assuring reliable operation during congestion conditions. As with most ToR switches, the Brocade VDX 6730-32 did use back pressure, as the Ixia test gear detected flow control frames.


Brocade VDX™ 6730-32 Cloud Simulation Test

Traffic Direction  Traffic Type              Avg Latency (ns)
East-West          Database_to_Server        7988
East-West          Server_to_Database        705
East-West          HTTP                      4852
East-West          iSCSI-Server_to_Storage   1135
East-West          iSCSI-Storage_to_Server   8191
North-South        Client_to_Server          1707
North-South        Server_to_Client          593

The Brocade VDX 6730-32 performed well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 8 µs measured in cut-through mode.

The Brocade VDX 6730-32 represents a new breed of cloud network ToR switches with power efficiency being a core value. Its WattsATIS/port is 1.5 Watts and its TEER value is 631; remember that higher TEER values are better than lower ones. This is the lowest power consumption we have observed for a 24-port ToR switch. The Brocade VDX 6730-32 was equipped with one power supply, while others are usually equipped with two; even considering this, the Brocade VDX 6730-32 is still one of the lowest power consuming 24-port ToR switches on the market. Note that the eight native Fibre Channel ports were not populated or active; populating them would consume additional power. The Brocade VDX 6730-32 power cost per 10GbE is calculated at $5.48 over three years, or $1.83 per year. The three-year cost to power the Brocade VDX 6730-32 is estimated at $131.62 and represents approximately 0.82% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.

Brocade VDX TM 6730-32 Power Consumption Test


WattsATIS/10GbE port: 1.5
3-Year Cost/WattsATIS/10GbE: $5.48
Total power cost/3-Year: $131.62
3-year energy cost as a % of list price: 0.82%
TEER Value: 631
Cooling: Front to Back, Reversible
Power Cost/10GbE/Yr: $1.83
List price: $16,080
Price/port: $670.00


Brocade VDX™ 6730-32 Virtual Scale Calculation

Virtualization Scale

Within one layer 2 domain, the Brocade VDX 6730-32 can support hundreds of physical servers and tens of thousands of VMs. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch: one for management and one for IP storage. Each VM will require one MAC entry, one /32 IP host route and one ARP entry from a data center switch. Clearly, the Brocade VDX 6730-32 can support a much larger virtual network than the number of its physical ports, assuring virtualization scale in layer 2 domains. From a larger VDX perspective, the Brocade VDX data center switches can support 1,440 ports and tens of thousands of VMs.

/32 IP Host Route Table Size: 30,000*
MAC Address Table Size: 32K
ARP Entry Size: 30,000*
Assumed VM:Physical Machine ratio: 30:1
Total number of Physical Servers per L2 domain: 938
Number of VMs per L2 domain: 29,063

* This is an L2 device, so technically both the host IP and ARP tables can hold 30K entries.


Discussion:
New Ethernet fabric-based architectures are emerging to support highly virtualized data center environments and the building of cloud infrastructures, allowing IT organizations to be more agile and customer responsive. Ethernet fabrics will transition data center networking from inefficient and cumbersome Spanning Tree Protocol (STP) networks to high-performance, flat, layer 2 networks built for the east-west traffic demanded by server virtualization and highly clustered applications. Rather than STP, the Brocade VDX 6730 series utilizes a hardware-based Inter-Switch Link (ISL) trunking algorithm, which was not tested during the Fall Lippis/Ixia test. The Brocade VDX 6730-32 with VCS technology offers 24 wire-speed 10GbE ports of layer 2 cut-through forwarding in a 1 RU footprint, along with eight 8 Gbps native Fibre Channel ports.

Designed with storage and data traffic convergence in mind, the Brocade VDX 6730-32 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, with large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 521 and 606 ns for layer 2 forwarding. Further, there was minimal delay variation, between 4 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and quickly, and boosting confidence that the VDX 6730-32 will perform well in converged network/storage configurations. In addition, the VDX 6730-32 demonstrated 100% throughput at all packet sizes. Redundant power and fans, although not tested here, along with various high availability features, ensure that the VDX 6730-32 is always available and reliable for data center operations. The VDX 6730-32 is built on an innovative Brocade-designed ASIC, which provides consistent throughput and latency over all packet ranges, as verified during the Lippis/Ixia test at iSimCity.

The VDX 6730-32 is the next level up from the VDX 6720 ToR switches in the VDX family, which can be configured as a single logical chassis and scales to support 60 10GbE ports and 16 8 Gbps native Fibre Channel ports. A key design center of the VDX 6730 is its support for Ethernet storage connectivity, as it connects to Fibre Channel Storage Area Networks (SANs) in addition to Fibre Channel over Ethernet (FCoE), iSCSI and NAS storage, providing multiple unified Ethernet storage connectivity options. Brocade VCS capabilities allow customers to start from small two-switch ToR deployments, such as the VDX 6730 tested here, and scale to large virtualization-optimized Ethernet fabrics with tens of thousands of ports over time. VCS is a software upgrade feature of the VDX 6730. Furthermore, VCS-based Ethernet fabrics are convergence ready, allowing customers to deploy one network for storage (FCoE, iSCSI, NAS) and IP data traffic. Brocade VCS technology boasts lower capital cost by collapsing the access and aggregation layers of a three-tier network into two, thus reducing operating cost through managing fewer devices. Customers will be able to seamlessly transition their existing multi-tier hierarchical networks to a much simpler, virtualized and converged data center network, moving at their own pace, according to Brocade.


Extreme Networks BlackDiamond X8 Core Switch


Extreme Networks launched its new entry into the enterprise data center and cloud computing markets with its new line of BlackDiamond X series switches in May of 2011. This line of Top-of-Rack and core switches pushes the envelope on performance, power consumption, density and data center network design. Extreme Networks submitted its new BlackDiamond X8 (BDX8) core switch into the Fall Lippis/Ixia industry test. The BDX8 is the highest density 10GbE and 40GbE core switch on the market. It is capable of supporting 768 10GbE and 192 40GbE ports in the smallest packaging footprint of one third of a rack, or only 14.5 RU. This high port density core switch in a small footprint offers new flexibility in building enterprise core networks and cloud data centers. For example, a single BDX8 could support 768 servers connected directly at 10GbE, creating a non-blocking, flat layer 2 network; but this is only one of the many design options afforded by the BDX8's attributes. From an internal switching capacity point of view, the BDX8 offers 1.28 Tbps of slot capacity and more than 20 Tbps of switching fabric capacity, enough to support the 768 ports of 10GbE or 192 ports of 40GbE per chassis.

In addition to raw port density and footprint, the BDX8 offers many software features. Notable software features include automated configuration and provisioning, and XNV (ExtremeXOS Network Virtualization), which provides automation of VM (Virtual Machine) tracking, reporting and migration of VPPs (Virtual Port Profiles). These are but a few of the software features enabled through the ExtremeXOS network operating system. The BlackDiamond X8 is also key to Extreme Networks' OpenFabric architecture, which enables best-in-class choice of switching, compute, storage and virtualization solutions.

Extreme Networks BlackDiamond X8 Core Switch Test Configuration*
Device under test: BDX8 (http://www.extremenetworks.com), software version 15.1
Test equipment: Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S (16-port 10G) and Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G) line cards (http://www.ixiacom.com/); IxOS 6.10 EA; IxNetwork 6.10 EA
Cabling: 10GbE optical SFP+ connectors, laser-optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com); Siemon QSFP+ passive copper cable, 40GbE, 3 meters, QSFP-FA-010; Siemon Moray low power active optical cable assemblies, single mode, QSFP+ 40GbE optical cable, QSFP30-03, 7 meters (www.siemon.com)
Port density: 352*

* The Extreme BDX8 was configured with 256 10GbE ports and 24 40GbE ports, or an equivalent 352 10GbE ports.



The BDX8 breaks all of our previous records in core switch testing for performance, latency, power consumption, port density and packaging design. In fact, the Extreme Networks BDX8 was between 2 and 9 times faster at forwarding packets over a wide range of frame sizes than previous core switch measurements conducted during the Lippis/Ixia iSimCity tests of core switches from Alcatel-Lucent, Arista and Juniper. In other words, the BDX8 is the fastest core switch we have tested, with the lowest latency measurements. The BDX8 is based upon the latest Broadcom merchant silicon chip set.

For the Fall Lippis/Ixia test, we populated the Extreme Networks BlackDiamond X8 with 256 10GbE ports and 24 40GbE ports, about a third of its capacity. This was the highest capacity switch tested during the entire series of Lippis/Ixia cloud network tests at iSimCity to date. We tested and measured the BDX8 in both cut-through and store-and-forward modes in an effort to understand the difference these latency measurements offer. Further, the latest merchant silicon forwards smaller packets in store-and-forward mode while larger packets are forwarded in cut-through mode, making this new generation of switches hybrid cut-through/store-and-forward devices. The Extreme Networks BlackDiamond X8 is unique in that its switching architecture forwards packets in both cut-through and store-and-forward modes, so we measured BlackDiamond X8 latency with both methods. The purpose is not to compare store-and-forward versus cut-through, but to demonstrate the low latency results obtained independent of forwarding method. During store-and-forward testing, the latency of packet serialization delay (based on packet size) is removed from the reported latency number by the test equipment. Therefore, to compare a cut-through to a store-and-forward latency measurement, the packet serialization delay needs to be added back to the store-and-forward number. For example, for a store-and-forward latency reading of 800 ns on a 1,518-byte packet, the serialization delay of roughly 1,240 ns (for a 1,518-byte packet at 10Gbps) must be added to the store-and-forward measurement. This difference can be significant. Note that other, device-specific factors can affect latency too, which makes comparisons between the two testing methodologies difficult.

Extreme Networks BDX8 RFC 2544 Layer 2 Latency Test in Cut-Through Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  2510              2324.921          2100              8.314
128                 2657              2442.854          2162              5.403
192                 2862              2579.339          2222              9.179
256                 2997              2673.136          2277              5.386
512                 3542              3137.229          2522              7.636
1024                4642              3919.9            2897              7.146
1280                4877              4169.825          2977              5.632
1518                5277              4396.996          3062              9.314
2176                6402              5087.025          3317              4.986
9216                18262             12755.718         5982              9.09

The Extreme Networks BDX8 was tested across all 256 ports of 10GbE and 24 ports of 40GbE. Its average cut-through latency ranged from a low of 2324 ns, or 2.3 µs, to a high of 12,755 ns, or 12.7 µs, at the jumbo 9216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate; a welcome measurement for converged I/O implementations.



The Extreme Networks BDX8 was tested across all 256 ports of 10GbE and 24 ports of 40GbE. Its average store-and-forward latency ranged from a low of 2273 ns, or 2.2 µs, to a high of 12,009 ns, or 12 µs, at the jumbo 9216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the Extreme Networks BDX8's measured average cut-through latency ranged from a low of 2,321 ns at 64 bytes to a high of 12,719 ns, or 12.7 µs, at the jumbo 9216-byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate; again, a welcome measurement for converged I/O implementations.

Extreme Networks BDX8 RFC 2544 Layer 2 Latency Test in Store and Forward Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  2482              2273.521          2040              8.314
128                 2642              2351.750          2100              5.403
192                 2797              2436.686          2160              9.179
256                 2910              2493.575          2180              5.386
512                 3442              2767.593          2417              7.636
1024                4547              3331.511          2780              7.146
1280                4990              3588.750          2980              5.632
1518                5430              3830.261          3140              9.314
2176                6762              4542.814          3660              4.986
9216                21317             12095.05          8960              9.079

Extreme Networks BDX8 RFC 2544 Layer 3 Latency Test In Cut-Through Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  2550              2321.164          2100              9.982
128                 2722              2440.482          2162              5.454
192                 2902              2577.779          2230              9.2
256                 3162              2668.411          2302              5.411
512                 3770              3132.839          2547              7.593
1024                5102              3898.596          2870              7.164
1280                5430              4162.443          2987              5.529
1518                6010              4406.857          3070              9.321
2176                7290              5126.400          3302              4.982
9216                21970             12719.646         5997              9.118



For layer 3 traffic, the Extreme Networks BDX8's measured average store-and-forward latency ranged from a low of 2,273 ns at 64 bytes to a high of 11,985 ns, or 11.9 µs, at the jumbo 9216-byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate; again, a welcome measurement for converged I/O implementations. The above latency measures are the lowest we have measured for core switches by a large degree; the BDX8 forwards packets six to ten times faster than other core switches we've tested.

The Extreme Networks BDX8 demonstrated 100% throughput as a percentage of line rate across all 256 10GbE and 24 40GbE ports. In other words, not a single packet was dropped while the Extreme Networks BDX8 was presented with enough traffic to populate its 256 10GbE and 24 40GbE ports at line rate simultaneously for both L2 and L3 traffic flows, measured in both cut-through and store-and-forward modes; a first in these Lippis/Ixia tests.

Extreme Networks BDX8 RFC 2544 Layer 3 Latency Test in Store and Forward Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  2542              2273.346          2040              9.982
128                 2717              2353.996          2100              5.454
192                 2937              2432.400          2140              9.2
256                 3117              2492.879          2220              5.411
512                 3657              2766.289          2400              7.593
1024                5062              3324.279          2780              7.164
1280                5537              3581.018          3000              5.529
1518                6130              3826.468          3160              9.321
2176                7677              4504.139          3640              4.982
9216                25070             11985.946         8960              9.118

Extreme Networks BDX8 RFC 2544 L2 & L3 Throughput Test


Frame Size (Bytes)  Layer 2 Throughput (% Line Rate)  Layer 3 Throughput (% Line Rate)
64                  100                               100
128                 100                               100
192                 100                               100
256                 100                               100
512                 100                               100
1024                100                               100
1280                100                               100
1518                100                               100
2176                100                               100
9216                100                               100



The Extreme Networks BDX8 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. We tested congestion in both 10GbE and 40GbE configurations: a single 10GbE port was flooded at 150% of line rate, and, in another first for these Lippis/Ixia tests, a single 40GbE port was also flooded at 150% of line rate. The Extreme Networks BDX8 did not exhibit HOL blocking, which means that as the 10GbE and 40GbE ports on the BDX8 became congested, the performance of other ports was not affected. Back pressure was detected: the BDX8 sent flow control frames to the Ixia test gear, signaling it to slow the rate of incoming traffic. Priority Flow Control (PFC) was enabled on the BDX8 during the congestion test, as Extreme Networks' direction is to run Data Center Bridging protocols on the BDX8 in support of data center fabric requirements. We expect more core switches to enable and support PFC, as it allows Ethernet fabrics to carry storage and IP traffic flows under congestion conditions without packet loss, thus maintaining 100% throughput.

The Extreme Networks BDX8 demonstrated 100% aggregated throughput for IP multicast traffic measured in cut-through mode, with latencies ranging from 2,382 ns for 64-byte packets to 3,824 ns for 9216-byte packets.

Extreme Networks BDX8 RFC 2889 Congestion Test


150% of Line Rate into a single 10GbE port

Frame (Bytes)  Agg Forwarding Rate (% Line Rate)  Head of Line Blocking  Back Pressure  L2 Agg Flow Control Frames  L3 Agg Flow Control Frames
64             100                                no                     yes            8641332                     9016074
128            100                                no                     yes            5460684                     5459398
192            100                                no                     yes            6726918                     6399598
256            100                                no                     yes            7294218                     5798720
512            100                                no                     yes            8700844                     5081322
1024           100                                no                     yes            5887130                     5342640
1280           100                                no                     yes            5076130                     5076140
1518           100                                no                     yes            4841474                     4840594
2176           100                                no                     yes            5268752                     5268552
9216           100                                no                     yes            5311742                     5219964

150% of Line Rate into a single 40GbE port

Frame (Bytes)  Agg Forwarding Rate (% Line Rate)  Head of Line Blocking  Back Pressure  L2 Agg Flow Control Frames  L3 Agg Flow Control Frames
64             100                                no                     yes            11305858                    11303930
128            100                                no                     yes            9679780                     9682464
192            100                                no                     yes            9470606                     9466194
256            100                                no                     yes            29566584                    10193602
512            100                                no                     yes            23892956                    8909568
1024           100                                no                     yes            22794016                    8915370
1280           100                                no                     yes            24439074                    7944288
1518           100                                no                     yes            21749118                    9093160
2176           100                                no                     yes            20897472                    9101282
9216           100                                no                     yes            12938948                    8977442

Extreme Networks BDX8 RFC 3918 IP Multicast Test Cut Through Mode
Frame Size (Bytes)  Agg Throughput Rate (%)  Agg Average Latency (ns)
64                  100                      2382.767
128                 100                      2448.613
192                 100                      2519.792
256                 100                      2578.477
512                 100                      2835.100
1024                100                      3117.849
1280                100                      3143.043
1518                100                      3161.283
2176                100                      3217.358
9216                100                      3824.054



The Extreme Networks BDX8 performed very well under cloud simulation conditions by delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 4.6 µs and 4.8 µs measured in cut-through and store-and-forward modes, respectively. This measurement, too, breaks all previous records, as the BDX8 is between 2 and 10 times faster at forwarding cloud-based protocols under load.

The Extreme Networks BDX8 represents a new breed of cloud network core switches with power efficiency being a core value. Its WattsATIS/port is 8.1 and its TEER value is 117. To put the power measurement into perspective, other core switches are between 30% and 2.6 times less power efficient; in other words, the BDX8 is the most power efficient core switch tested to date in the Lippis/Ixia iSimCity lab. Note that the total number of BDX8 10GbE ports used to calculate Watts/10GbE was 352, as the 24 40GbE ports, equivalent to 96 10GbE ports, were added to the 256 10GbE ports for the WattsATIS/port calculation; a 40GbE port is essentially four 10GbE ports from a power consumption point of view. While these are the lowest Watts/10GbE port and highest TEER values observed for core switches, the Extreme Networks BDX8's actual Watts/10GbE port is lower still; we estimate approximately 5 Watts per 10GbE port when fully populated with 768 10GbE or 192 40GbE ports.
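The sketch below illustrates the 10GbE-equivalent port normalization described above: each 40GbE port counts as four 10GbE ports before the ATIS-weighted power draw is divided by the port count. The function names are illustrative, and the chassis total power is left as an input because it is not published in this section.

```python
# Sketch: Watts per 10GbE-equivalent port, counting each 40GbE port as four 10GbE ports.

def equivalent_10g_ports(ports_10g: int, ports_40g: int) -> int:
    return ports_10g + 4 * ports_40g

def watts_per_10g_port(total_watts_atis: float, ports_10g: int, ports_40g: int) -> float:
    return total_watts_atis / equivalent_10g_ports(ports_10g, ports_40g)

if __name__ == "__main__":
    print(equivalent_10g_ports(256, 24))  # 352, the divisor used for the BDX8 as tested
    # watts_per_10g_port(<measured WattsATIS>, 256, 24) would yield the 8.1 W/port figure.
```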

Extreme Networks BDX8 Cloud Simulation Test


Traffic Direction  Traffic Type              Cut-Through Avg Latency (ns)  Store and Forward Avg Latency (ns)
East-West          Database_to_Server        2574                          3773
East-West          Server_to_Database        1413                          1339
East-West          HTTP                      4662                          4887
East-West          iSCSI-Server_to_Storage   1945                          2097
East-West          iSCSI-Storage_to_Server   3112                          3135
North-South        Client_to_Server          3426                          3368
North-South        Server_to_Client          3576                          3808

Extreme Networks BDX8 Power Consumption Test


WattsATIS/10GbE port: 8.1
3-Year Cost/WattsATIS/10GbE: $29.61
Total power cost/3-Year: $10,424.05
3-year energy cost as a % of list price: 1.79%
TEER Value: 117
Cooling: Front to Back, Reversible

During the Lippis/Ixia test, the BDX8 was populated to only a third of its port capacity, but it was equipped with the power supplies, fans, management and switch fabric modules for full port density. Therefore, when this full power capacity is divided across a fully populated BDX8, its WattsATIS per 10GbE port will be lower than the measurement observed.

The Extreme Networks BDX8 power cost per 10GbE is calculated at $9.87 per year. The three-year cost to power the Extreme Networks BDX8 is estimated at $10,424.05 and represents approximately 1.8% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.


Virtualization Scale Calculation


Within one layer 2 domain, the Extreme Networks BlackDiamond X8 can support approximately 900 physical servers and 27,125 VMs, thanks to its support of a 28K IP host route table size and a 128K MAC address table size. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch: one for management and one for IP storage. Each VM will require one MAC entry, one /32 IP host route and one ARP entry from a data center switch. In fact, its IP host route and MAC address table sizes are larger than the number of physical ports it can support at this VM-to-physical-server ratio, so very large deployments are enabled. For example, assume 40 VMs per server: with a 28K host route table, 700 servers can be connected into a single BDX8 in an End of Row (EOR) configuration. Here the BDX8 can serve as the routed boundary for the entire row, accommodating 28,000 VMs and approximately 700 physical servers. Scaling further, multiple BDX8s, each placed in an EOR, could be connected to a core routed BDX8 that serves as the layer 3 interconnect for the EOR BDX8s. Since each EOR BDX8 is a layer 3 domain, the core BDX8 does not see the individual IP host table entries from each VM (which are hidden behind the EOR BDX8). A very large two-tier network can be created with BDX8s, with the first tier being the routing EOR BDX8s and the second tier being core BDX8(s), also routing, thus accommodating hundreds of thousands of VMs. The BDX8 can scale very large in its support of virtualized data center environments.
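The sketch below reproduces the virtualization-scale arithmetic as described in the text: each physical server consumes two table entries plus one entry per VM, and the host route table is the limiting resource. The helper name and the exact rounding are mine; the report's published figures differ slightly, so treat this as an approximation of the method rather than the exact calculation.

```python
# Sketch: how many physical servers and VMs fit in one L2 domain, given the
# host-route/MAC/ARP table budget described in the text.
# Each server consumes 2 entries (management, IP storage) + 1 entry per VM.

def virtualization_scale(table_entries, vms_per_server=30):
    entries_per_server = 2 + vms_per_server
    servers = table_entries // entries_per_server
    return servers, servers * vms_per_server

if __name__ == "__main__":
    # BDX8: 28K host routes -> roughly 900 servers and ~27,000 VMs (report: ~900 / 27,125)
    print(virtualization_scale(28 * 1024))
    # Summit X670V: 6K host routes -> roughly 190 servers (report: 188 / 5,813)
    print(virtualization_scale(6 * 1024))
```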


Discussion:
The Extreme Networks BDX8 seeks to offer high performance, low latency, high port density of 10 and 40GbE, and low power consumption for private and public cloud networks. This architecture proved its value, as it delivered the lowest latency measurements to date for core switches while populated with the equivalent of 352 10GbE ports running traffic at line rate. Not a single packet was dropped, offering 100% throughput at line rate for the equivalent of 352 10GbE ports. Even at 8 Watts per 10GbE port, the Extreme Networks BDX8 consumes the least amount of power of the core switches measured to date in the Lippis/Ixia industry test at iSimCity. Observations of the BDX8's IP multicast and congestion test performance were the best measured to date in terms of latency and throughput under congestion conditions in both 10GbE and 40GbE scenarios. The Extreme Networks BDX8 was found to have low power consumption, front-to-back cooling, front-accessible components and an ultra-compact form factor. The Extreme Networks BDX8 is designed to meet the requirements of private and public cloud networks. Based upon these Lippis/Ixia tests, it achieves its design goals.


Extreme Networks Summit X670V Top-of-Rack Switch


Extreme Networks launched its new entry into the enterprise data center and cloud computing markets with its new line of Summit X series switches in May of 2011. This line of Top-of-Rack switches pushes the envelope on performance, power consumption, density and data center network design. The Summit X670 series are purpose-built Top-of-Rack (ToR) switches designed to support emerging 10GbE-enabled servers in enterprise and cloud data centers. The Summit X670 is ready for 40GbE uplinks thanks to its optional 40GbE support. In addition, the Summit X670 provides both 1 and 10GbE physical server connections to ease the transition to high-performance server connections while providing unique features for virtualized data center environments. The ExtremeXOS operating system, with its high availability attributes, provides simplicity and ease of operation through the use of one OS everywhere in the network.

Extreme Networks submitted its new Summit X670V ToR switch into the Fall Lippis/Ixia industry test. The X670V is a member of the Summit X family of ToR switches and is capable of supporting 48 10GbE and four 40GbE ports in a small 1RU footprint. The X670V in combination with the Extreme Networks BlackDiamond X8 offers new flexibility in building enterprise and cloud data center networks, with the X670V providing server connectivity at 1 and 10GbE while connecting into the BDX8 at either 10 or 40GbE, under a single OS image in both devices.

Extreme Networks Summit X670V Test Configuration*


Device under test: X670V (http://www.extremenetworks.com), software version 12.6
Test equipment: Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S (16-port 10G) and Xcellon Flex Combo 10/40GE AP (16-port 10G / 4-port 40G) line cards (http://www.ixiacom.com/); IxOS 6.10 EA SP2; IxNetwork 6.10 EA
Cabling: 10GbE optical SFP+ connectors, laser-optimized duplex LC-LC 50 micron multimode fiber, 850nm SFP+ transceivers (www.leviton.com); Siemon QSFP+ passive copper cable, 40GbE, 3 meters, QSFP-FA-010; Siemon Moray low power active optical cable assemblies, single mode, QSFP+ 40GbE optical cable, QSFP30-03, 7 meters (www.siemon.com)
Port density: 64*

* The Extreme X670V was configured with 48 10GbE ports and four 40GbE ports, or an equivalent 64 10GbE ports.



For the Fall Lippis/Ixia test, we populated and tested the Extreme Networks Summit X670V with 48 10GbE ports and four 40GbE ports, its full capacity. Its average latency ranged from a low of 900 ns to a high of 1,415 ns for layer 2 traffic. Its average delay variation ranged between 7 and 10 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the Extreme Networks X670V's measured average latency ranged from a low of 907 ns at 64 bytes to a high of 1,375 ns. Its average delay variation for layer 3 traffic ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate.

Extreme Networks Summit X670V RFC 2544 Layer 2 Latency Test in Cut-Through Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  1020              900.192           737               9.346
128                 1040              915.519           742               7.962
192                 1200              1056.519          762               10.25
256                 1200              1067.115          782               6.942
512                 1360              1208.731          782               7.865
1024                1580              1412.808          802               9.712
1280                1560              1415.962          817               7.827
1518                1560              1415.327          817               10.192
2176                1580              1411.923          817               7.962
9216                1580              1410.019          802               10.731

Extreme Networks Summit X670V RFC 2544 Layer 3 Latency Test in Cut-Through Mode
Frame Size (Bytes)  Max Latency (ns)  Avg Latency (ns)  Min Latency (ns)  Avg Delay Variation (ns)
64                  1040              907.923           762               8.135
128                 1060              943.135           747               5.423
192                 1160              1026.615          790               8.942
256                 1180              1029.731          770               5.558
512                 1320              1176.923          810               7.019
1024                1520              1375.019          810               6.885
1280                1540              1373.865          847               4.558
1518                1540              1374.077          790               9.346
2176                1540              1372.712          810               5.058
9216                1520              1370.327          810               9.712



The Extreme Networks X670V demonstrated 100% throughput as a percentage of line rate across all 48 10GbE and four 40GbE ports. In other words, not a single packet was dropped while the Extreme Networks X670V was presented with enough traffic to populate its 48 10GbE and four 40GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

Two congestion tests were conducted using the same methodology: a 10GbE and a 40GbE congestion test stressed the X670V ToR switch's congestion management attributes at both port speeds. The Extreme Networks X670V demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both the 10GbE and 40GbE tests. A single 10GbE port was flooded at 150% of line rate; in addition, a single 40GbE port was flooded at 150% of line rate. The Extreme Networks X670V did not exhibit HOL blocking, which means that as the 10GbE and 40GbE ports on the X670V became congested, the performance of other ports was not affected. Back pressure was detected: the X670V sent flow control frames to the Ixia test gear, signaling it to slow the rate of incoming traffic, which is common in ToR switches.

Extreme Networks Summit X670V RFC 2544 L2 & L3 Throughput Test


Frame Size (Bytes)  Layer 2 Throughput (% Line Rate)  Layer 3 Throughput (% Line Rate)
64                  100                               100
128                 100                               100
192                 100                               100
256                 100                               100
512                 100                               100
1024                100                               100
1280                100                               100
1518                100                               100
2176                100                               100
9216                100                               100

Extreme Networks Summit X670V Congestion Test


150% of Line Rate into a single 10GbE port

Frame (Bytes)  Agg Forwarding Rate (% Line Rate)  Head of Line Blocking  Back Pressure  L2 Agg Flow Control Frames  L3 Agg Flow Control Frames
64             100                                no                     yes            6976188                     6641074
128            100                                no                     yes            4921806                     5720526
192            100                                no                     yes            6453360                     6751626
256            100                                no                     yes            4438204                     6167986
512            100                                no                     yes            3033068                     5425000
1024           100                                no                     yes            3125294                     5668574
1280           100                                no                     yes            17307786                    5540616
1518           100                                no                     yes            13933834                    5320560
2176           100                                no                     yes            13393656                    5339310
9216           100                                no                     yes            9994308                     5042648

150% of Line Rate into a single 40GbE port

Frame (Bytes)  Agg Forwarding Rate (% Line Rate)  Head of Line Blocking  Back Pressure  L2 Agg Flow Control Frames  L3 Agg Flow Control Frames
64             100                                no                     yes            10265682                    10273162
128            100                                no                     yes            7951820                     9517872
192            100                                no                     yes            9938718                     10655066
256            100                                no                     yes            28243060                    9508906
512            100                                no                     yes            26799154                    9534112
1024           100                                no                     yes            31017876                    7900572
1280           100                                no                     yes            25762198                    9482604
1518           100                                no                     yes            28271714                    9345090
2176           100                                no                     yes            27102102                    9294636
9216           100                                no                     yes            22865370                    9521042



The Extreme Networks X670V demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 955 ns for 64-byte packets to 1009 ns for 9216-byte packets. The Extreme Networks X670V also performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and latency stayed under 3.1 µs measured in cut-through mode.

The Extreme Networks X670V represents a new breed of cloud network ToR switches with power efficiency being a core value. Its WattsATIS/port is 3.2 and its TEER value is 295. Note that the total number of X670V 10GbE ports used to calculate Watts/10GbE was 64, as the four 40GbE ports, equivalent to 16 10GbE ports, were added to the 48 10GbE ports for the WattsATIS/port calculation; a 40GbE port is essentially four 10GbE ports from a power consumption point of view. The Extreme Networks X670V power cost per 10GbE is calculated at $11.74 per three-year period. The three-year cost to power the Extreme Networks X670V is estimated at $751.09 and represents less than 2.2% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.

Extreme Networks Summit X670V RFC 3918 IP Multicast Test Cut Through Mode
Frame Size (Bytes)  Agg Throughput Rate (%)  Agg Average Latency (ns)
64                  100                      955.824
128                 100                      968.431
192                 100                      987.51
256                 100                      985.98
512                 100                      998.157
1024                100                      1011
1280                100                      1012.529
1518                100                      1010.157
2176                100                      1009.824
9216                100                      1009.863

Extreme Networks Summit X670V Cloud Simulation Test


Traffic Direction  Traffic Type              Cut-Through Avg Latency (ns)
East-West          Database_to_Server        2568
East-West          Server_to_Database        1354
East-West          HTTP                      2733
East-West          iSCSI-Server_to_Storage   1911
East-West          iSCSI-Storage_to_Server   3110
North-South        Client_to_Server          1206
North-South        Server_to_Client          1001

Extreme Networks Summit X670V Power Consumption Test


WattsATIS/10GbE port: 3.21
3-Year Cost/WattsATIS/10GbE: $11.74
Total power cost/3-Year: $751.09
3-year energy cost as a % of list price: 2.21%
TEER Value: 295
Cooling: Front to Back, Reversible


Virtualization Scale Calculation


Within one layer 2 domain, the Extreme Networks Summit X670V can support approximately 188 physical servers and 5,813 VMs, thanks to its 6K IP host route table size and 128K MAC address table size. This assumes 30 VMs per physical server, with each physical server requiring two host IP/MAC/ARP entries in a data center switch (one for management, one for IP storage, etc.). Each VM requires one MAC entry, one /32 IP host route and one ARP entry in the data center switch. Clearly, the Extreme Networks Summit X670V can support a much larger logical network than its physical port count suggests.
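A minimal sketch of this arithmetic is shown below. The exact table sizes and rounding used in the report are not stated, so the constants here ("6K" taken as 6,144 routes, "128K" as 131,072 MACs) are assumptions, and the result approximates rather than exactly reproduces the published 188 servers / 5,813 VMs.

```python
# Minimal sketch of the virtualization-scale arithmetic described above.
# Assumed table sizes: "6K" = 6,144 IP host routes, "128K" = 131,072 MACs.
# Each physical server consumes 2 host/MAC/ARP entries (management + IP
# storage); each VM consumes 1 of each; 30 VMs run per physical server.

HOST_ROUTES = 6 * 1024
MAC_ENTRIES = 128 * 1024
VMS_PER_SERVER = 30
ENTRIES_PER_SERVER = 2
ENTRIES_PER_VM = 1

per_server_footprint = ENTRIES_PER_SERVER + VMS_PER_SERVER * ENTRIES_PER_VM   # 32

# The much smaller host-route table is the binding constraint, not the MAC table.
servers = min(HOST_ROUTES, MAC_ENTRIES) // per_server_footprint
vms = servers * VMS_PER_SERVER
print(servers, vms)   # 192 physical servers and 5,760 VMs under these assumptions
```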

Discussion:
The Extreme Networks X670V seeks to offer a range of server and ToR connectivity options to IT architects as they design their data centers and cloud network facilities. The X670V is competitive with other ToR switches from a price, performance, latency and power consumption point of view. When combined with the X8, the X670V offers a powerful two-tier cloud network architecture, bringing together the performance advantages observed during the Lippis/Ixia test and the software features found in the ExtremeXOS network operating system. The Extreme Networks X670V was found to have low power consumption, front-to-back cooling, front-accessible components and a compact 1RU form factor. The Extreme Networks X670V is designed to meet the requirements of private and public cloud networks. Based upon these Lippis/Ixia tests, it achieves its design goals.


Dell/Force10 S-Series S4810


The Dell/Force10 S-Series S4810 is a low-latency (nanosecond-scale) 10/40 GbE ToR/leaf switch purpose-built for applications in high-performance data center and computing environments. The compact S4810 design provides a port density of 48 dual-speed 1/10 GbE (SFP+) ports as well as four 40 GbE QSFP+ uplinks to conserve valuable rack space and simplify the migration to 40 Gbps in the data center core. For the Lippis/Ixia test, the S4810 was configured as a 48-port wire-speed 10 GbE ToR switch.

Dell/Force10 Networks S4810 Test Configuration
Hardware
  Device under test:  S4810 (http://www.force10networks.com)
  Test Equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/)
  Cabling:            Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)
Software Version
  Device under test:  8.3.7.0E4
  Test Equipment:     IxOS 6.00 EA; IxNetwork 5.70 EA; IxAutomate 6.90 GA SP1
Port Density:         48

Dell/Force10 Networks S4810 RFC2544 Layer 2 Latency Test



The Dell/Force10 S4810 ToR switch was tested across all 48 ports of 10GbE. Its average latency ranged from a low of 799 ns to a high of 885 ns for layer 2 traffic. Its average delay variation ranged between 5 and 9.2 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the Dell/Force10 S4810's average latency ranged from a low of 796 ns to a high of 868 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 4.6 and 9.1 ns, providing consistent latency across all packet sizes at full line rate.
Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              940           874.479       760           9.104
128             940           885           780           5.229
256             920           867.604       740           5.458
512             880           825.292       720           7.938
1024            900           846.521       780           6.729
1280            880           819.896       720           6
1518            880           805.063       720           8.979
2176            880           812.063       720           5
9216            860           799.792       720           9.271

(all values in ns)

Dell/Force10 Networks S4810 RFC2544 Layer 3 Latency Test


Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              940           862.417       740           8.458
128             940           868.542       780           5.25
256             940           863.771       680           5.521
512             880           822.625       720           7.688
1024            900           843.021       740           6.688
1280            880           817.833       700           4.646
1518            880           802.375       700           9.063
2176            880           809.708       700           5.083
9216            860           796.063       720           9.146

(all values in ns)


Dell/Force10 Networks S4810 RFC 2544 L2 & L3 Throughput Test


Frame (Bytes)            64    128   256   512   1024  1280  1518  2176  9216
Layer 2 (% Line Rate)    100   100   100   100   100   100   100   100   100
Layer 3 (% Line Rate)    100   100   100   100   100   100   100   100   100

The Dell/Force10 S4810 demonstrated 100% throughput as a percentage of line rate across all 48-10GbE ports. In other words, not a single packet was dropped while the Dell/Force10 S4810 was presented with enough traffic to populate its 48 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

Dell/Force10 Networks S4810 RFC 2889 Congestion Test


Frame    L2 Agg Fwd      L3 Agg Fwd      Head of Line   Back       L2 Agg Flow      L3 Agg Flow
(Bytes)  Rate (% Line)   Rate (% Line)   Blocking       Pressure   Control Frames   Control Frames
64       100             100             no             yes        5761872          5762860
128      100             100             no             yes        3612428          3614948
256      100             100             no             yes        3841000          3851964
512      100             100             no             yes        3073936          3070936
1024     100             100             no             yes        3158216          3155552
1280     100             100             no             yes        2958772          2957128
1518     100             100             no             yes        2876596          2875592
2176     100             100             no             yes        2730684          2730528
9216     100             100             no             yes        2681572          2677416

The Dell/Force10 S4810 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows. A single 10GbE port was flooded at 150% of line rate. The Dell/Force10 S4810 did not use HOL blocking, which means that as the congested 10GbE port filled, it did not impact the performance of other ports. As with most ToR switches, the Dell/Force10 S4810 did use back pressure, as the Ixia test gear detected flow control frames.
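The 150% load figure translates into a simple bandwidth budget: the congested port can drain at most 10 Gbps, so roughly a third of the offered traffic has to be held back at the senders, which is what the flow control (PAUSE) frames accomplish. A minimal, generic sketch of that arithmetic (not tied to the specific Ixia port configuration):

```python
# Minimal sketch: bandwidth budget when one 10GbE port is offered 150% of
# line rate and the switch applies back pressure instead of dropping frames.

LINE_RATE_GBPS = 10.0
OVERSUBSCRIPTION = 1.5

offered = LINE_RATE_GBPS * OVERSUBSCRIPTION     # 15 Gbps aimed at one port
forwarded = min(offered, LINE_RATE_GBPS)        # 10 Gbps can actually be drained
held_back = offered - forwarded                 # 5 Gbps throttled via PAUSE frames

print(f"offered {offered:.0f} Gbps, forwarded {forwarded:.0f} Gbps, "
      f"held back {held_back:.0f} Gbps ({held_back / offered:.0%} of the offered load)")
```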


Dell/Force10 Networks S4810 RFC 3918 IP Multicast Test


Frame (Bytes)             64      128     256     512     1024    1280    1518    2176    9216
Agg Tput Rate (%)         100     100     100     100     100     100     100     100     100
Agg Average Latency (µs)  0.984   1.033   1.062   1.232   1.442   1.442   1.429   1.438   1.437

The Dell/Force10 S4810 demonstrated 100% aggregated throughput for IP multicast traffic, with latencies ranging from 984 ns at 64-Byte packets to 1,442 ns at 1,024- and 1,280-Byte packets. The Dell/Force10 S4810 was configured in cut-through mode during this test.

Dell/Force10 Networks S4810 Cloud Simulation Test


Traffic Direction   Traffic Type              Avg Latency (ns)
East-West           Database_to_Server        6890
East-West           Server_to_Database        1010
East-West           HTTP                      4027
East-West           iSCSI-Server_to_Storage   771
East-West           iSCSI-Storage_to_Server   6435
North-South         Client_to_Server          1801
North-South         Server_to_Client          334

The Dell/Force10 S4810 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed under 6,900 ns. The Dell/Force10 S4810 was configured in cut-through mode during this test.

Dell/Force10 Networks S4810 Power Consumption Test


WattsATIS/10GbE port:                   4.03
3-Year Cost/WattsATIS/10GbE:            $14.73
Total power cost/3-Year:                $707.22
3-yr energy cost as a % of list price:  2.83%
TEER Value:                             235
Cooling:                                Front to Back

The Dell/Force10 S4810 represents a new breed of cloud network leaf or ToR switches with power efficiency as a core value. Its WattsATIS/port is 4.03 and its TEER value is 235. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE port is estimated at $4.91 per year. The three-year cost to power the Dell/Force10 S4810 is estimated at $707.22 and represents approximately 3% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.

Discussion:
Leveraging a non-blocking, cut-through switching architecture, the Dell/Force10 Networks S4810 delivers line-rate L2 and L3 forwarding capacity with nanosecond-scale latency to maximize network performance. These claims were substantiated during the Lippis/Ixia RFC 2544 latency test, where the S4810 demonstrated average latency of 799 to 874 ns across all packet sizes for layer 2 forwarding and 796 to 862 ns across all packet sizes for layer 3 forwarding. The S4810's architecture demonstrated its ability to forward packets in the nanosecond range consistently with little delay variation. In fact, the S4810 demonstrated between 5 and 9 ns of delay variation across all packet sizes for both layer 2 and layer 3 forwarding. The S4810 demonstrated 100% throughput at line rate for all packet sizes ranging from 64 Bytes to 9216 Bytes. In addition to the above verified and tested attributes, Dell/Force10 Networks claims that the S4810 is designed with powerful QoS features, coupled with Data Center Bridging (DCB) support via a future software enhancement, making the S4810 well suited for iSCSI storage environments. In addition, the S4810 incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including Dell/Force10's VirtualScale stacking technology, reversible front-to-back or back-to-front airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. The S4810 also supports Dell/Force10's Open Automation Framework, which provides advanced network automation and virtualization capabilities for virtual data center environments. The Open Automation Framework comprises a suite of interrelated network management tools that can be used together or independently to provide a network that is flexible, available and manageable while reducing operational expenses.


Hitachi Cable Apresia 15000-64XL-PSR


The Apresia 15000 series was developed under the new BoxCore concept of improving network efficiency by using box switches as Core switches to meet customer requirements. The Apresia 15000-64XL-PSR, referred to here as the 15K, is a 64-port wire-speed 10GbE Core switch. As a 64-port 10GbE switch, it could be deployed in ToR, End-of-Row or Core/Spine roles. For the Lippis/Ixia evaluation, Hitachi Cable requested that the Apresia 15K be grouped with Core switches, although it is more appropriate for the ToR category.

Apresia 15000-64XL-PSR Test Configuration
Hardware
  Device under test:  15000-64XL-PSR (http://www.apresia.jp/en/)
  Test Equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/)
  Cabling:            Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)
Software Version
  Device under test:  AEOS 8.09.01
  Test Equipment:     IxOS 6.00 EA; IxNetwork 5.70 EA; IxAutomate 6.90 GA SP1
Port Density:         64

Apresia 15000-64XL-PSR RFC2544 Layer 2 Latency Test



The Apresia 15K switch was tested across all 64 ports of 10GbE. Note that during testing, the Apresia 15K did not maintain the VLAN through the switch at the 64-Byte packet size, which is used to generate a packet signature to measure latency. Therefore, no 64-Byte latency test data is available. Hitachi Cable states that it fixed the issue after the test. Further, the largest frame size the Apresia 15K supports is 9,044 Bytes, which precluded 9,216-Byte testing.

Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
128             1020          940.672       860           7.984
256             1080          979.953       840           7.063
512             1040          950.063       840           8
1024            1060          973.547       800           9.453
1280            1040          944.641       800           7
1518            1020          932.516       800           10.031
2176            1020          940.344       820           8.031

(all values in ns)

Apresia 15000-64XL-PSR RFC2544 Layer 3 Latency Test


Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
128             1020          967.172       860           5.234
256             1020          941.078       780           5.609
512             980           910.844       720           7.719
1024            1000          933.047       740           6.641
1280            980           904.844       740           6.047
1518            960           889.547       740           9.234
2176            980           897.328       760           5.016

(all values in ns)


Latency data was measured for packet sizes between 128 and 9,044 Bytes. Within this range, the Apresia 15K's average latency ranged from a low of 900 ns to a high of 979 ns for layer 2 traffic. Its average delay variation ranged between 7 and 10.5 ns, providing consistent latency across these packet sizes at full line rate. For layer 3 traffic, the Apresia 15K's average latency ranged from a low of 889 ns to a high of 967 ns across 128- to 9,044-Byte frames. Its average delay variation for layer 3 traffic ranged between 5 and 9.6 ns, providing consistent latency across these packet sizes at full line rate.

Apresia 15000-64XL-PSR RFC 2544 L2 & L3 Throughput Test



The Apresia 15K demonstrated nearly 100% throughput as a percentage of line rate across all 64 10GbE ports for layer 2 traffic and 100% throughput for layer 3 flows. In other words, there was some packet loss during the RFC 2544 layer 2 test, but for layer 3 not a single packet was dropped while the Apresia 15K was presented with enough traffic to populate its 64 10GbE ports at line rate simultaneously.
Frame (Bytes)            64    128     256     512     1024    1280    1518    2176
Layer 2 (% Line Rate)    100   97.288  98.541  99.238  99.607  99.682  99.73   99.808
Layer 3 (% Line Rate)    100   100     100     100     100     100     100     100

Apresia 15000-64XL-PSR RFC 2889 Congestion Test


Frame    L2 Agg Fwd      L3 Agg Fwd      L2 HOL     L3 HOL     L2 Back    L3 Back    Agg Flow
(Bytes)  Rate (% Line)   Rate (% Line)   Blocking   Blocking   Pressure   Pressure   Control Frames
64       77.854          77.913          no         no         yes        yes        0
128      79.095          77.911          no         no         yes        yes        0
256      78.624          77.785          no         no         yes        yes        0
512      78.581          78.229          no         no         yes        yes        0
1024     79.029          67.881          no         yes        yes        no         0
1280     78.974          50.071          no         yes        yes        no         0
1518     78.609          50.029          no         yes        yes        no         0
2176     49.946          50.03           yes        yes        no         no         0

The Apresia 15K demonstrated mixed results in the congestion test. Between 64- and 1,518-Byte packets, the 15K demonstrated nearly 80% aggregated forwarding rate as a percentage of line rate during congestion conditions for L2 traffic flows. At 2,176-Byte packets, the aggregated forwarding rate dropped to nearly 50% with HOL blocking present. For L3 traffic flows between 64- and 512-Byte packets, the 15K demonstrated nearly 80% aggregated forwarding rate as a percentage of line rate during congestion conditions. At packet sizes from 1,024 to 2,176 Bytes, HOL blocking was present as the aggregated forwarding rate dropped to 67% and then 50%.

Apresia 15000-64XL-PSR RFC 3918 IP Multicast Test


The Apresia 15K does not support IP Multicast.

Apresia 15K Cloud Simulation Test


Traffic Direction   Traffic Type              Avg Latency (ns)
East-West           Database_to_Server        1698
East-West           Server_to_Database        1200
East-West           HTTP                      1573
East-West           iSCSI-Server_to_Storage   1148
East-West           iSCSI-Storage_to_Server   1634
North-South         Client_to_Server          2183
North-South         Server_to_Client          1021

The Apresia 15K delivered 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows during the cloud computing simulation test. Zero packet loss was observed, and its latency stayed at or below 2,183 ns.

Apresia 15K Power Consumption Test


WattsATIS/10GbE port:          3.58
3-Year Cost/WattsATIS/10GbE:   $13.09
Total power cost/3-Year:       $837.67
TEER Value:                    264
Cooling:                       Front to Back

The Apresia 15K is a new breed of cloud network ToR or leaf switch with power efficiency as a core value. Its WattsATIS/port is 3.58 and its TEER value is 264. Note that higher TEER values are more desirable, as they represent the ratio of work performed over energy consumption. Hitachi Cable was unable to provide list pricing information; thus there is no data on the Apresia 15K's power cost per 10GbE, three-year cost of power or three-year energy cost as a percentage of list price. Keeping with data center best practices, its cooling fans flow air front to back; Hitachi Cable states that back-to-front cooling will be available at some point in the future.


Discussion:


The Apresia 15000-64XL-PSR (hereinafter called the 64XL) is a terabit box switch that implements two 40G uplink ports and 64 1G/10G SFP/SFP+ ports in a 2U unit. The Apresia 15000-32XL-PSR (hereinafter called the 32XL) is half the size of the 64XL and implements two 40G uplink ports and 32 1G/10G SFP/SFP+ ports in a 1U unit. Switching capacities are 1.28 terabits/second for the 64XL and 640 gigabits/second for the 32XL. When configured to equal capacities, the 64XL and the 32XL are not only space-saving, but also achieve significant cost savings when compared to conventional chassis switches. Both the 64XL and 32XL are designed to be used as data center switches, or as network Core switches and broadband L2 switches for enterprises and academic institutions. Planned functions specific to data center switches that play an important role in cloud computing include FCoE; storage I/O consolidation technology; DCB, the new Ethernet technical standard that makes FCoE possible; and other functions that support virtual server environments. A data center license and an L3 license are optional and can be purchased in accordance with the intended use. Without such licenses, both the 64XL and 32XL perform as high-end L2 switches.
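The switching-capacity figures quoted above are the usual full-duplex port arithmetic; a minimal sketch follows (the 40G uplinks are left out, which is what matches the 1.28 Tbps and 640 Gbps figures above):

```python
# Minimal sketch: switching capacity as the full-duplex aggregate of the 10GbE ports.

def switching_capacity_gbps(ports: int, port_speed_gbps: float = 10.0) -> float:
    return ports * port_speed_gbps * 2   # x2 because each port sends and receives

print(switching_capacity_gbps(64))   # 1280.0 Gbps = 1.28 Tbps (64XL)
print(switching_capacity_gbps(32))   # 640.0 Gbps (32XL)
```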


Juniper Networks EX Series EX8200 Ethernet Switch, EX8216 Ethernet Switch


The 16-slot EX8216 Ethernet Switch, part of the EX8200 line of Ethernet switches from Juniper Networks, offers a high-density, high-performance platform for aggregating access switches deployed in data center ToR or end-of-row applications, as well as for supporting Gigabit Ethernet and 10 Gigabit Ethernet server access in data center end-of-row deployments. During the Lippis/Ixia test, the EX8216 was populated with 128 10GbE ports, classifying it as a Core/Spine switch for private or public data center cloud implementations.

Juniper Networks EX8216 Test Configuration
Hardware
  Device under test:  EX8216, EX8200-8XS, EX8200-40XS (http://www.juniper.net/us/en/)
  Test Equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/)
  Cabling:            Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)
Software Version
  Device under test:  JUNOS 10.3R2.11
  Test Equipment:     IxOS 6.00 EA; IxNetwork 5.70 EA; IxAutomate 6.90 GA SP1
Port Density:         128

Juniper Networks EX8216 RFC2544 Layer 2 Latency Test



The Juniper EX8216 was tested across all 128 ports of 10GbE. Its average latency ranged from a low of 11,366 ns (11.3 µs) to a high of 34,723 ns (34.7 µs) at the jumbo 9,216-Byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9.9 ns, providing consistent latency across all packet sizes at full line rate. For layer 3 traffic, the Juniper EX8216's measured average latency ranged from a low of 11,814 ns to 35,500 ns (35.5 µs) at the jumbo 9,216-Byte frame size. Its average delay variation for layer 3 traffic ranged between 5 and 10 ns, providing consistent latency across all packet sizes at full line rate.

Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              15520         11891.617     8860          9
128             14560         11366.211     9100          5
256             14540         11439.586     9500          5.375
512             15780         11966.75      10380         7.945
1024            17760         13322.914     11800         7
1280            19180         14131.773     12480         5.141
1518            19820         14789.578     12980         9.516
2176            22380         16302.711     14720         5
9216            50340         34722.688     32100         9.938

(all values in ns)

Juniper Networks EX8216 RFC2544 Layer 3 Latency Test


Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              14120         12390.375     10320         9
128             13820         12112.5       10460         5
256             13500         11814.047     10780         5.109
512             14540         12685.406     11420         8
1024            15840         13713.25      12420         7
1280            16540         14318.484     12720         5
1518            17020         14664.844     13160         9.563
2176            18660         15767.438     14280         5
9216            35500         28674.078     26220         10

(all values in ns)


Juniper Networks EX8216 RFC 2544 L2 & L3 Throughput Test


Frame (Bytes)            64    128   256   512   1024  1280  1518  2176  9216
Layer 2 (% Line Rate)    100   100   100   100   100   100   100   100   100
Layer 3 (% Line Rate)    100   100   100   100   100   100   100   100   100

The Juniper EX8216 demonstrated 100% throughput as a percentage of line rate across all 128-10GbE ports. In other words, not a single packet was dropped while the Juniper EX8216 was presented with enough traffic to populate its 128 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.
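To put "128 10GbE ports at line rate" in packet terms: a 64-Byte frame occupies 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted, which works out to about 14.88 million frames per second per port and roughly 1.9 billion packets per second across 128 ports, consistent with the figure cited in the Discussion below. A minimal sketch:

```python
# Minimal sketch: wire-speed packets/second for 64-Byte frames on 128 x 10GbE ports.
# Standard Ethernet overhead assumed: 8-byte preamble/SFD + 12-byte inter-frame gap.

LINE_RATE_BPS = 10e9
FRAME_BYTES = 64
OVERHEAD_BYTES = 8 + 12

pps_per_port = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)   # ~14.88 Mpps
aggregate_pps = pps_per_port * 128                                     # ~1.9 Bpps

print(f"{pps_per_port / 1e6:.2f} Mpps per port, {aggregate_pps / 1e9:.2f} Bpps aggregate")
```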

Juniper Networks EX8216 RFC 2889 Congestion Test


Frame    L2 Agg Fwd      L3 Agg Fwd      Head of Line   Back       Agg Flow
(Bytes)  Rate (% Line)   Rate (% Line)   Blocking       Pressure   Control Frames
64       77.005          77.814          no             no         0
128      75.009          77.835          no             no         0
256      75.016          77.825          no             no         0
512      75.031          74.196          no             no         0
1024     75.06           72.784          no             no         0
1280     75.06           77.866          no             no         0
1518     75.124          77.691          no             no         0
2176     75.06           77.91           no             no         0
9216     75.06           78.157          no             no         0

The Juniper EX8216 demonstrated nearly 80% of aggregated forwarding rate as percentage of line rate during congestion conditions for L2 and L3 forwarding. A single 10GbE port was flooded at 150% of line rate. The EX8216 did not use HOL blocking, which means that as the 10GbE port on the EX8216 became congested, it did not impact the performance of other ports. There was no back pressure detected, and the Ixia test gear did not receive flow control frames.


Juniper Networks EX8216 RFC 3918 IP Multicast Test


Frame (Bytes)             64     128    256    512    1024   1280   1518   2176   9216
Agg Tput Rate (%)         100    100    100    100    100    100    100    100    100
Agg Average Latency (µs)  27.2   26.2   25.1   26.6   28.8   30.0   30.7   33.0   60.1

The Juniper EX8216 demonstrated 100%, and at times 99.99%, aggregated throughput for IP multicast traffic, with latencies ranging from 25,135 ns (25.1 µs) to 60,055 ns (60.1 µs) at 9,216-Byte packets.

Juniper Networks EX8216 Cloud Simulation Test


Traffic Direction   Traffic Type              Avg Latency (ns)
East-West           Database_to_Server        30263
East-West           Server_to_Database        11614
East-West           HTTP                      18833
East-West           iSCSI-Server_to_Storage   19177
East-West           iSCSI-Storage_to_Server   26927
North-South         Client_to_Server          12145
North-South         Server_to_Client          11322

The Juniper EX8216 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed at or below approximately 30 µs. Of special note is the consistency with which the Juniper EX8216 performed while processing a heavy combination of east-west and north-south flows.

Juniper Networks EX8216 Power Consumption Test


WattsATIS/10GbE port:                   21.68
3-Year Cost/WattsATIS/10GbE:            $79.26
Total power cost/3-Year:                $10,145.60
3-yr energy cost as a % of list price:  1.30%
TEER Value:                             44
Cooling:                                Side-to-Side

The Juniper EX8216 represents a new breed of cloud network spine switches with power efficiency being a core value. Its WattsATIS/port is 21.68 and TEER value is 44. Its power cost per 10GbE is estimated at $26.42 per year. The three-year cost to power the EX8216 is estimated at $10,145.60 and represents less than 2% of its list price. The Juniper EX8216 cooling fans flow side-to-side.


Discussion:
The EX8216 delivers approximately 1.9 billion packets per second (Bpps) of high-density, wire-speed 10GbE performance for the largest data center networks and includes an advanced set of hardware features enabled by the Juniper-designed EX-PFE2 ASICs. Working with the carrier-class Junos operating system, which runs on all EX Series Ethernet switches, the EX-PFE2 ASICs on each EX8200 line card deliver the scalability to support high-performance data center networks. Two line cards were used in the Lippis/Ixia test, the EX8200-8XS and EX8200-40XS, described below.

EX8200-40XS Ethernet Line Card


The EX8200-40XS is a 40-port oversubscribed 10GbE solution for data center end-of-row and middle-of-row server access, as well as for data center blade switch, top-of-rack or campus uplink aggregation deployments. The EX8200-40XS supports a wide range of both SFP (GbE) and SFP+ (10 GbE) modular optical interfaces for connecting over multimode fiber, single-mode fiber and copper cabling. The 40 SFP/SFP+ ports on the EX8200-40XS are divided into eight independent groups of five ports each. Because port groups are independent of one another, each group can have its own oversubscription ratio, providing customers with deployment flexibility. Each group dedicates 10 Gbps of switching bandwidth to be dynamically shared among the ports; queues are allocated within a 1MB oversubscription buffer based on the number of active ports in a group and the types of interfaces installed. Users simply connect the cables, and the EX8200-40XS automatically provisions each port group accordingly. No manual configuration is required.
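The port-group arrangement implies a straightforward oversubscription calculation: five fully populated SFP+ ports offer 50 Gbps against the group's 10 Gbps of switching bandwidth, a 5:1 ratio, and mixing in GbE SFP interfaces lowers it. A minimal sketch, assuming the per-group bandwidth is fixed at 10 Gbps as described above:

```python
# Minimal sketch: oversubscription ratio for one EX8200-40XS port group
# (5 ports sharing an assumed fixed 10 Gbps of switching bandwidth).

GROUP_BANDWIDTH_GBPS = 10.0

def group_oversubscription(port_speeds_gbps: list[float]) -> float:
    return sum(port_speeds_gbps) / GROUP_BANDWIDTH_GBPS

print(group_oversubscription([10, 10, 10, 10, 10]))   # 5.0 -> 5:1, all SFP+
print(group_oversubscription([10, 10, 1, 1, 1]))      # 2.3 -> mixed SFP/SFP+
```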

EX8200-8XS Ethernet Line Card


The EX8200-8XS is an 8-port 10GBASE-X line card with compact, modular SFP+ fiber optic interfaces, enabling up to 128 line-rate 10 Gigabit Ethernet ports in an EX8216 modular switch chassis. The EX8200-8XS is ideal for enterprise applications such as campus or data center uplink aggregation, core and backbone interconnects, and for service provider deployments requiring high-density, wire-speed 10GbE interconnects in metro area networks, Internet exchange points and points of presence (POPs). The 10GbE port densities afforded by the EX8200-8XS line cards also enable EX8200 switches to consolidate aggregation and core layers in the data center, simplifying the network architecture and reducing power, space and cooling requirements while lowering total cost of ownership (TCO).

Virtual Chassis Technology


The EX8200 supports Juniper Networks' unique Virtual Chassis technology, which enables two interconnected chassis (any combination of EX8208s or EX8216s) to operate as a single, logical device with a single IP address. Deployed as a collapsed aggregation or core layer solution, an EX8200 Virtual Chassis configuration creates a network fabric for interconnecting access switches, routers and service-layer devices, such as firewalls and load balancers, using standards-based Ethernet LAGs. The network fabric created by an EX8200 Virtual Chassis configuration prevents loops, eliminating the need for protocols such as Spanning Tree. The fabric also simplifies the network by eliminating the need for Virtual Router Redundancy Protocol (VRRP), increasing the scalability of the network design. EX8200 Virtual Chassis configurations are highly resilient, with no single point of failure, ensuring that no single element can render the entire fabric inoperable following a failure. In an EX8200 Virtual Chassis configuration, the Routing Engine functionality is externalized to a purpose-built, server-class appliance, the XRE200, which supports the control plane processing requirements of large-scale systems and provides an extra layer of availability and redundancy. All control protocols, such as OSPF, IGMP, Link Aggregation Control Protocol (LACP), 802.3ah and VCCP, as well as all management plane functions, run or reside on the XRE200.


Mellanox/Voltaire Vantage 6048


The Mellanox/Voltaire Vantage 6048 switch is a high performance Layer 2 10GbE ToR/leaf switch optimized for enterprise data center and cloud computing environments, with 48 ports of 10GbE line-rate connectivity and 960 Gb/s non-blocking switching throughput.

Mellanox/Voltaire Vantage 6048 Test Configuration


Hardware
  Device under test:  Vantage 6048 (http://www.voltaire.com)
  Test Equipment:     Ixia XM12 High Performance Chassis; Xcellon Flex AP10G16S 10 Gigabit Ethernet LAN load module; LSM10GXM8S and LSM10GXM8XP 10 Gigabit Ethernet load modules; 1U Application Server (http://www.ixiacom.com/)
  Cabling:            Optical SFP+ connectors; laser-optimized duplex LC-LC 50-micron multimode fiber; 850nm SFP+ transceivers (www.leviton.com)
Software Version
  Device under test:  1.0.0.50
  Test Equipment:     IxOS 6.00 EA; IxNetwork 5.70 EA; IxAutomate 6.90 GA SP1
Port Density:         48


The Mellanox/Voltaire Vantage 6048 ToR switch was tested across all 48 ports of 10GbE. Its average latency ranged from a low of 2484 ns to a high of 2784 ns for layer 2 traffic. Its average delay variation ranged between 4.8 and 9.6 ns, providing consistent latency across all packet sizes at full line rate.
Mellanox/Voltaire Vantage 6048 RFC2544 Layer 2 Latency Test


Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              2620          2484.354      2400          8.625
128             2720          2495.521      2420          5.208
256             2820          2513.521      2380          5.146
512             2960          2617.563      2440          7.479
1024            3120          2783.833      2420          6.979
1280            3040          2776.146      2440          4.875
1518            3120          2757.021      2420          9.688
2176            3060          2759.813      2420          4.958
9216            3040          2737.354      2420          9.417

(all values in ns)

For layer 3 traffic, the Mellanox/Voltaire Vantage 6048's average latency ranged from a low of 2,504 ns to a high of 2,791 ns across all frame sizes. Its average delay variation for layer 3 traffic ranged between 5 and 9.5 ns, providing consistent latency across all packet sizes at full line rate.

Mellanox/Voltaire Vantage 6048 RFC2544 Layer 3 Latency Test


Frame (Bytes)   Max Latency   Avg Latency   Min Latency   Avg Delay Variation
64              2940          2504.188      2420          8.583
128             2940          2513.625      2420          5.313
256             2800          2517.979      2380          5.188
512             3240          2650.021      2440          7.667
1024            3240          2773.729      2400          6.958
1280            3140          2791.313      2420          5.479
1518            3080          2762.813      2420          9.583
2176            3100          2766.958      2420          5
9216            3260          2753.958      2420          9.313

(all values in ns)


Mellanox/Voltaire Vantage 6048 RFC 2544 L2 & L3 Throughput Test


Frame (Bytes)            64    128   256   512   1024  1280  1518  2176  9216
Layer 2 (% Line Rate)    100   100   100   100   100   100   100   100   100
Layer 3 (% Line Rate)    100   100   100   100   100   100   100   100   100

The Mellanox/Voltaire Vantage 6048 demonstrated 100% throughput as a percentage of line rate across all 48 10GbE ports. In other words, not a single packet was dropped while the Mellanox/Voltaire Vantage 6048 was presented with enough traffic to populate its 48 10GbE ports at line rate simultaneously for both L2 and L3 traffic flows.

Mellanox/Voltaire Vantage 6048 RFC 2889 Congestion Test


Frame    L2 Agg Fwd      L3 Agg Fwd      Head of Line   Back       L2 Agg Flow      L3 Agg Flow
(Bytes)  Rate (% Line)   Rate (% Line)   Blocking       Pressure   Control Frames   Control Frames
64       100             100             no             yes        7152760          7152846
128      100             100             no             yes        5201318          5215032
256      100             100             no             yes        2711124          2710746
512      100             100             no             yes        1721416          1721416
1024     100             100             no             yes        1756370          1756238
1280     100             100             no             yes        2110202          2110308
1518     100             100             no             yes        1807994          1808060
2176     100             100             no             yes        2168202          2172256
9216     100             100             no             yes        2079506          2079658

The Mellanox/Voltaire Vantage 6048 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for all packet sizes in the L2 forwarding test. A single 10GbE port was flooded at 150% of line rate. The Vantage 6048 did not use HOL blocking, which means that as the 10GbE port on the Vantage 6048 became congested, it did not impact the performance of other ports. Back pressure was present at all packet sizes, as the Mellanox/Voltaire Vantage 6048 sent flow control frames to the Ixia test gear, which is normal operation.

For layer 3 traffic, the Vantage 6048 likewise demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions for all packet sizes. No HOL blocking was present. Back pressure was present at all packet sizes, as the Mellanox/Voltaire Vantage 6048 sent flow control frames to the Ixia test gear, which is normal operation.


Mellanox/Voltaire Vantage 6048 RFC 3918 IP Multicast Test


The Mellanox/Voltaire Vantage 6048 does not support IP multicast.

Mellanox/Voltaire Vantage 6048 Cloud Simulation Test


Traffic Direction   Traffic Type              Avg Latency (ns)
East-West           Database_to_Server        11401
East-West           Server_to_Database        2348
East-West           HTTP                      10134
East-West           iSCSI-Server_to_Storage   2884
East-West           iSCSI-Storage_to_Server   10454
North-South         Client_to_Server          3366
North-South         Server_to_Client          2197

The Mellanox/Voltaire Vantage 6048 performed well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed, and its latency stayed at or below 11,401 ns (11.4 µs).

Mellanox/Voltaire Vantage 6048 Power Consumption Test


WattsATIS/10GbE port:                   5.5
3-Year Cost/WattsATIS/10GbE:            $20.11
Total power cost/3-Year:                $965.19
3-yr energy cost as a % of list price:  4.17%
TEER Value:                             172
Cooling:                                Front to Back

The Mellanox/Voltaire Vantage 6048 represents a new breed of cloud network leaf or ToR switches with power efficiency being a core value. Its WattsATIS/port is 5.5 and TEER value is 172. Note higher TEER values are more desirable as they represent the ratio of work performed over energy consumption. Its power cost per 10GbE is estimated at $6.70 per year. The three-year cost to power the Mellanox/Voltaire Vantage 6048 is estimated at $965.19 and represents approximately 4% of its list price. Keeping with data center best practices, its cooling fans flow air front to back.


Discussion
The Mellanox/Voltaire Vantage 6048 is a 48-port 10GbE ToR switch. Not tested in the Lippis/Ixia evaluation are its claimed converged Ethernet capabilities, which are intended to enable new levels of efficiency, scalability and real-time application performance while consolidating multiple/redundant network tiers and significantly reducing infrastructure expenses.


Top-of-Rack Switches Cross-Vendor Analysis


Across the three industry Lippis/Ixia tests of December 2010, April 2011 and October 2011, eleven ToR switches were evaluated for performance and power consumption. The participating products are:

Arista 7124SX 10G SFP Data Center Switch
Arista 7050S-64 10/40G Data Center Switch
Alcatel-Lucent OmniSwitch 6900-X40
Brocade VDX 6720-24 Data Center Switch
Brocade VDX 6730-32 Data Center Switch
Extreme Networks Summit X670V
Dell/Force10 S-Series S4810
Hitachi Cable Apresia 15000-64XL-PSR
IBM BNT RackSwitch G8124
IBM BNT RackSwitch G8264
Mellanox/Voltaire Vantage 6048

IT business leaders are responding favorably to ToR switches equipped with a value proposition of high performance, low acquisition price and low power consumption. These ToR switches are currently the hot boxes in the industry, with quarterly revenues for mid-size firms in the $10M to $15M-plus range. We compared each of the above firms in terms of their ability to forward packets: quickly (i.e., latency), without loss of throughput at full line rate, when ports are oversubscribed with network traffic by 150 percent, in IP multicast mode and in cloud simulation. We also measured their power consumption as described in the Lippis Test Methodology section above. In the following ToR analysis, we separate 24-port ToR switches from all others. There are four 24-port ToR switches:

Arista 7124SX 10G SFP Data Center Switch
Brocade VDX 6720-24 Data Center Switch
Brocade VDX 6730-32 Data Center Switch
IBM BNT RackSwitch G8124

The 48-port and above ToR switches evaluated are:

Arista 7050S-64 10/40G Data Center Switch
Alcatel-Lucent OmniSwitch 6900-X40
Extreme Networks Summit X670V
Dell/Force10 S-Series S4810
Hitachi Cable Apresia 15000-64XL-PSR
IBM BNT RackSwitch G8264
Mellanox/Voltaire Vantage 6048

This approach provides a consistent cross-vendor analysis. We start with a cross-vendor analysis of 24-port ToR switches, followed by the 48-port and above devices. We end with a set of industry recommendations.

All but Brocade utilize merchant silicon in their ToR switches, using new single-chip designs from Broadcom, Marvell or Fulcrum Microsystems; Brocade developed its own single-chip silicon. With a single chip provided by a chip manufacturer, vendors are free to invest resources in areas other than ASIC development, which can consume much of a company's engineering and financial resources. With merchant silicon providing the forwarding engine for their switches, these vendors are free to choose where to innovate, be it buffer architecture, network services such as virtualization support, OpenFlow, 40GbE uplink or fan-in support, etc. The Lippis/Ixia test results demonstrate that these new chip designs provide state-of-the-art performance at power consumption levels not seen before. Further, price points on a per-10GbE-port basis range from a low of $351 to a high of $670.


24 Port ToR Switches RFC 2544 Latency Test - Average Latency All Switches Performed at Zero Frame Loss
(All switches were tested in cut-through mode, across the Fall 2010, Spring 2011 and Fall 2011 Lippis/Ixia test rounds.)

Layer 2 average latency (ns)

Frame (Bytes)   IBM BNT RackSwitch G8124*   Brocade VDX 6720-24   Arista 7124SX   Brocade VDX 6730-32
64              651.75                      527.04                541.08          521.17
128             671.50                      510.71                548.21          530.04
256             709.46                      570.08                527.92          577.92
512             709.04                      619.46                528.96          606.50
1,024           708.25                      615.54                531.58          605.17
1,280           709.71                      615.58                531.42          604.92
1,518           707.93                      614.21                531.04          607.75
2,176           708.04                      612.17                531.42          606.42
9,216           706.63                      610.96                532.79          629.00

Layer 3 average latency (ns)

Frame (Bytes)   IBM BNT RackSwitch G8124*   Arista 7124SX
64              642.71                      561.58
128             659.67                      582.33
256             701.29                      567.75
512             699.79                      570.29
1,024           699.00                      568.79
1,280           699.92                      568.54
1,518           698.50                      564.33
2,176           698.33                      563.83
9,216           696.92                      562.21

* The staggered start option was configured at the 64-Byte frame size for the IBM BNT RackSwitch G8124.
** The Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.

We show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance across the single-chip design. The standout here is that these switch latencies range from as low as 510 ns to a high of 709 ns. The Arista 7124SX layer 2 latency is the lowest measured in the Lippis/Ixia set of cloud networking tests for packet sizes between 256 and 9,216 Bytes. Also of note is the consistency of low latency across all packet sizes.


ToR Switches RFC 2544 Latency Test - Average Latency All Switches Performed at Zero Frame Loss

The Arista 7124SX, 7050S-64, Brocade VDX 6720-24, 6730-32, Extreme Summit X670V and IBM BNT G8264 and G8124 switches were configured and tested via the cut-through test method, while the Dell/Force10, Mellanox/Voltaire and Apresia switches were configured and tested via the store-and-forward method. This is because some switches are store-and-forward devices while others are cut-through devices. During store-and-forward testing, the latency of packet serialization delay (based on packet size) is removed from the reported latency number by the test equipment. Therefore, to compare a cut-through measurement to a store-and-forward latency measurement, the packet serialization delay needs to be added to the store-and-forward latency number. For example, to a store-and-forward latency number of 800 ns for a 1,518-Byte packet, an additional latency of 1,240 ns (the serialization delay of a 1,518-Byte packet at 10Gbps) must be added. This difference can be significant. Note that other potential device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult.
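The serialization-delay adjustment is easy to compute. A minimal sketch is below; it uses the raw frame bits at 10 Gbps, which gives about 1,214 ns for a 1,518-Byte frame, slightly less than the 1,240 ns figure quoted above (which presumably also reflects some framing overhead or rounding):

```python
# Minimal sketch: add serialization delay to a store-and-forward latency figure
# so it can be compared with a cut-through measurement.

LINE_RATE_BPS = 10e9   # 10 Gbps

def serialization_delay_ns(frame_bytes: int) -> float:
    return frame_bytes * 8 / LINE_RATE_BPS * 1e9

store_and_forward_ns = 800      # example value from the text above
frame_bytes = 1518
adjusted = store_and_forward_ns + serialization_delay_ns(frame_bytes)
print(f"serialization delay: {serialization_delay_ns(frame_bytes):.1f} ns")   # ~1214 ns
print(f"comparable cut-through figure: {adjusted:.1f} ns")
```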

Cut-Through Mode Testing: Average Latency (ns)

Layer 2

Frame (Bytes)   Arista 7050S-64   IBM BNT RackSwitch G8264   Extreme X670V
64              914.33            1081.34                    900.19
128             926.77            1116.83                    915.52
256             1110.64           1214.92                    1067.12
512             1264.91           1374.25                    1208.73
1,024           1308.84           1422.00                    1412.81
1,280           1309.11           1421.28                    1415.96
1,518           1310.77           1421.75                    1415.33
2,176           1309.77           1419.89                    1411.92
9,216           1305.50           1415.36                    1410.02

Layer 3

Frame (Bytes)   Arista 7050S-64   IBM BNT RackSwitch G8264   Extreme X670V
64              905.42            1075.39                    907.92
128             951.20            1121.02                    943.14
256             1039.14           1213.55                    1029.73
512             1191.64           1370.94                    1176.92
1,024           1233.00           1421.94                    1375.02
1,280           1232.45           1422.63                    1373.87
1,518           1232.48           1421.83                    1374.08
2,176           1229.92           1419.73                    1372.71
9,216           1225.58           1415.78                    1370.33


As with the 24-port ToR switches, we show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance across the single-chip design. We separate cut-through and store-and-forward results. The Arista 7050S-64 offers approximately 100 ns less latency than the IBM BNT RackSwitch G8264.

As with the 24-port ToR switches, the 48-port and above ToR devices show little difference in average delay variation between the Arista 7050S-64, Extreme X670V and IBM BNT RackSwitch G8264, proving consistent latency under heavy load at zero packet loss for layer 2 and layer 3 forwarding. Differences do reside in average latency between suppliers. Just as with the 24-port ToR switches, the range of average delay variation is 5 ns to 10 ns.


ToR Switches RFC 2544 Latency Test - Average Latency All Switches Performed at Zero Frame Loss

Store & Forward Mode Testing: Average Latency (ns)

Layer 2

Frame (Bytes)   Dell/Force10 S4810   Mellanox/Voltaire 6048   Apresia 15K   Alcatel-Lucent 6900-X40
64              874.48               2484.35                  **            869.37
128             885.00               2495.52                  941           880.46
256             867.60               2513.52                  980           860.74
512             825.29               2617.56                  950           822.48
1,024           846.52               2783.83                  974           833.37
1,280           819.90               2776.15                  945           808.98
1,518           805.06               2757.02                  933           796.07
2,176           812.06               2759.81                  940           802.65
9,216           799.79               2737.35                  900.64*       792.39

Layer 3

Frame (Bytes)   Dell/Force10 S4810   Mellanox/Voltaire 6048   Apresia 15K   Alcatel-Lucent 6900-X40
64              862.42               2504.19                  **            891.41
128             868.54               2513.63                  967.17        885.37
256             863.77               2517.98                  941.08        880.76
512             822.63               2650.02                  910.84        838.76
1,024           843.02               2773.73                  933.05        850.94
1,280           817.83               2791.31                  904.84        828.33
1,518           802.38               2762.81                  889.55        812.83
2,176           809.71               2766.96                  897.33        819.72
9,216           796.06               2753.96                  900.64*       809.94

* The largest frame size supported on the Apresia 15K is 9,044 Bytes.
** There is no latency data for the Apresia 15K at 64 Bytes due to configuration difficulties during testing; the 15K could not be configured to maintain a VLAN at 64 Bytes, which eliminated the packet signature used to measure latency.

The Extreme X670V delivers the lowest latency at the smaller packet sizes. New to this category is the Alcatel-Lucent 6900-X40, which delivered the lowest layer 2 forwarding latency of all products in this category of store-and-forward ToR switches. The Mellanox/Voltaire 6048's measured latency is the highest among the store-and-forward devices, while the Dell/Force10 S4810 delivered the lowest layer 3 latency across nearly all packet sizes.


24 Port ToR Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss
(All switches were tested in cut-through mode, across the Fall 2010, Spring 2011 and Fall 2011 Lippis/Ixia test rounds.)

Layer 2 average delay variation (ns)

Frame (Bytes)   IBM BNT RackSwitch G8124*   Brocade VDX 6720-24   Arista 7124SX   Brocade VDX 6730-32
64              9.25                        8.75                  8.42            8.71
128             5.08                        5.38                  5.96            5.46
256             5.42                        5.88                  5.54            5.63
512             7.88                        7.75                  7.67            7.71
1,024           7.17                        6.38                  7.83            6.63
1,280           5.25                        8.25                  5.92            4.75
1,518           10.13                       8.75                  9.54            9.63
2,176           5.17                        5.17                  6.67            5.17
9,216           10.13                       6.54                  9.67            9.67

Layer 3 average delay variation (ns)

Frame (Bytes)   IBM BNT RackSwitch G8124*   Arista 7124SX
64              8.75                        9.96
128             5.67                        5.21
256             5.46                        5.42
512             8.09                        7.92
1,024           7.30                        6.88
1,280           7.84                        5.50
1,518           10.38                       9.25
2,176           5.13                        5.25
9,216           11.08                       9.33

* The staggered start option was configured at the 64-Byte frame size for the IBM BNT RackSwitch G8124.
** The Brocade VDX 6720-24 and VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.

Notice that there is little difference in average delay variation across all vendors, proving consistent latency under heavy load at zero packet loss for L2 and L3 forwarding. Differences do reside in average latency between suppliers. The range of average delay variation is 5 ns to 10 ns.


ToR Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss

Cut-Through Mode Testing

Layer 2

Frame (Bytes)   Arista 7050S-64   IBM BNT RackSwitch G8264   Extreme X670V
64              8.67              8.69                       9.35
128             5.27              5.22                       7.96
256             5.41              5.61                       6.94
512             7.80              7.61                       7.87
1,024           6.92              6.92                       9.71
1,280           5.52              7.56                       7.83
1,518           9.73              9.64                       10.19
2,176           5.23              5.05                       7.96
9,216           9.75              9.56                       10.73

Layer 3

Frame (Bytes)   Arista 7050S-64   IBM BNT RackSwitch G8264   Extreme X670V
64              9.67              8.61                       8.14
128             5.39              5.17                       5.42
256             5.64              5.64                       5.56
512             7.78              7.67                       7.02
1,024           6.75              6.73                       6.89
1,280           5.58              6.25                       4.56
1,518           9.58              9.42                       9.35
2,176           5.33              5.08                       5.06
9,216           9.86              9.58                       9.71

(average delay variation, ns)


ToR Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss

Store & Forward Mode Testing

Layer 2

Frame (Bytes)   Dell/Force10 S4810   Mellanox/Voltaire 6048   Apresia 15K   Alcatel-Lucent 6900-X40
64              9.10                 8.63                     **            7.70
128             5.23                 5.21                     8.00          5.48
256             5.46                 5.15                     7.00          5.70
512             7.94                 7.48                     8.00          7.96
1,024           6.73                 6.98                     9.00          7.41
1,280           6.00                 4.88                     7.00          3.33
1,518           8.98                 9.69                     10.00         10.33
2,176           5.00                 4.96                     8.00          5.11
9,216           9.27                 9.42                     10.58*        9.24

Layer 3

Frame (Bytes)   Dell/Force10 S4810   Mellanox/Voltaire 6048   Apresia 15K   Alcatel-Lucent 6900-X40
64              8.46                 8.58                     **            7.20
128             5.25                 5.31                     5.23          5.89
256             5.52                 5.19                     5.61          5.44
512             7.69                 7.67                     7.72          7.65
1,024           6.69                 6.96                     6.64          7.00
1,280           4.65                 5.48                     6.05          3.50
1,518           9.06                 9.58                     9.23          9.13
2,176           5.08                 5.00                     5.02          4.94
9,216           9.15                 9.31                     9.67*         9.37

(average delay variation, ns)

* The largest frame size supported on the Apresia 15K is 9,044 Bytes.
** There is no latency data for the Apresia 15K at 64 Bytes due to configuration difficulties during testing; the 15K could not be configured to maintain a VLAN at 64 Bytes, which eliminated the packet signature used to measure latency.

The latency of the 48-port-and-above ToR devices was measured in store-and-forward mode and shows only a slight difference in average delay variation between the Alcatel-Lucent 6900-X40, Dell/Force10 S4810, Mellanox/Voltaire Vantage 6048 and Apresia 15K. All switches demonstrate consistent latency under heavy load at zero packet loss for Layer 2 and Layer 3 forwarding. The Apresia 15K does demonstrate the largest average delay variation across all packet sizes, while results for the Alcatel-Lucent 6900-X40, Dell/Force10 and Mellanox/Voltaire are more varied. However, the range of average delay variation is 5ns to 10ns.


24 Port ToR Switches RFC 2544 Throughput Test

Chart: Throughput as a percentage of line rate for Layer 2 and Layer 3 forwarding across frame sizes from 64 to 9,216 bytes (IBM BNT RackSwitch G8124, Brocade VDX 6720-24, Arista 7124SX, Brocade VDX 6730-32). The Brocade VDX 6720-24 and VDX 6730-32 are Layer 2 switches and did not participate in the Layer 3 test.

Throughput (% line rate), frame sizes 64 to 9,216 bytes:

                            Layer 2               Layer 3
IBM BNT RackSwitch G8124    100% at all sizes     100% at all sizes
Brocade VDX 6720-24         100% at all sizes     *
Arista 7124SX               100% at all sizes     100% at all sizes
Brocade VDX 6730-32         100% at all sizes     *

* Brocade VDX 6720-24 & VDX 6730-32 are layer 2 switches and thus did not participate in layer 3 test.

As expected, all 24-port ToR switches are able to forward L2 and L3 packets at all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance, wire-speed devices.


ToR Switches RFC 2544 Throughput Test

Chart: Throughput as a percentage of line rate for Layer 2 and Layer 3 forwarding across frame sizes from 64 to 9,216 bytes (Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Apresia 15K, Extreme X670V, Alcatel-Lucent 6900-X40).

Throughput (% line rate), frame sizes 64 to 9,216 bytes:

The Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Extreme X670V and Alcatel-Lucent 6900-X40 forwarded at 100% of line rate at every frame size for both Layer 2 and Layer 3, as did the Apresia 15K at Layer 3. The Apresia 15K at Layer 2 forwarded:

Frame Size (bytes)   64     128      256      512      1,024    1,280    1,518    2,176    9,216
Apresia 15K, L2      100%   97.29%   98.54%   99.24%   99.61%   99.68%   99.73%   99.81%   100%

With the exception of the Apresia 15K, all switches, as expected, are able to forward L2 and L3 packets at all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance, wire-speed devices.


24 Port ToR Switches RFC 2889 Congestion Test 150% of line rate into single 10GbE
Chart: Aggregated forwarding rate as a percentage of line rate under 150% congestion load into a single 10GbE port, Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes (IBM BNT RackSwitch G8124, Brocade VDX 6720-24, Arista 7124SX, Brocade VDX 6730-32). The Brocade VDX 6720-24 and VDX 6730-32 are Layer 2 switches.

Aggregated Forwarding Rate (AFR, % line rate), Head of Line Blocking (HOL) and Back Pressure (BP); values were identical at every frame size from 64 to 9,216 bytes:

                            Layer 2                    Layer 3
                            AFR%    HOL    BP          AFR%    HOL    BP
IBM BNT RackSwitch G8124    100%    N      Y           100%    N      Y
Brocade VDX 6720-24         100%    N      Y           *       *      *
Arista 7124SX               100%    N      Y           100%    N      Y
Brocade VDX 6730-32         100%    N      Y           *       *      *
* Brocade VDX 6720-24 & Brocade VDX 6730-32 are layer 2 switches and thus did not participate in the layer 3 test.
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.

All 24-port ToR switches performed as expected: no HOL blocking was observed or measured during the L2 and L3 congestion tests, assuring that a congested port does not impact other ports, and thus the network; that is, congestion is contained. In addition, the IBM BNT RackSwitch G8124, Brocade VDX 6720-24 and VDX 6730-32 switches delivered 100% aggregated forwarding rate by implementing back pressure, or signaling the Ixia test equipment with control frames to slow down the rate of packets entering the congested port, a normal and best practice for ToR switches. While not evident in the above graphic, the Arista 7124SX uniquely did indicate to the Ixia
test gear that it was using back pressure; however, no flow control frames were detected and, in fact, none were sent. Ixia and other test equipment calculate back pressure per RFC 2889, paragraph 5.5.5.2, which states that if the total number of received frames on the congested port surpasses the number of frames transmittable at the MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7124SX's generous and dynamic buffer allocation, it can accept more packets on a port than the MOL; therefore, Ixia, or any test equipment, calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7124SX. The Arista 7124SX has a 2MB packet buffer pool and uses Dynamic Buffer Allocation (DBA) to manage congestion. Unlike other architectures that have fixed per-port packet memory, the 7124SX can handle microbursts by allocating packet memory to the port that needs it. Such microbursts of traffic can be generated by RFC 2889 congestion tests when multiple traffic sources send traffic to the same destination for a short period of time. Using DBA, a port on the 7124SX can buffer up to 1.7MB of data in its transmit queue.
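To make the rule concrete, the following minimal sketch illustrates the RFC 2889 paragraph 5.5.5.2 back-pressure inference described above. It is an illustration only, assuming simple per-port frame counters; the function and variable names are not Ixia's API.

```python
# Minimal sketch of the back-pressure inference rule described above
# (RFC 2889, section 5.5.5.2). Counter names are illustrative, not an
# Ixia API: the test gear only compares frame counts, it does not see
# the switch's internal buffering.

def backpressure_detected(frames_received_on_congested_port: int,
                          frames_transmittable_at_mol: int) -> bool:
    """Return True when the congested port accepted more frames than the
    Maximum Offered Load (MOL) would allow, which RFC 2889 interprets as
    back pressure even if no flow-control frames were actually sent."""
    return frames_received_on_congested_port > frames_transmittable_at_mol

# A deeply buffered switch (for example one using a shared dynamic buffer
# pool) can absorb the excess frames itself, so this rule reports "back
# pressure" even though zero PAUSE frames were observed, which is the
# anomaly noted above.
```

Because the rule looks only at counters, a deeply buffered switch that absorbs the excess frames itself is indistinguishable, to the test gear, from one that asserted flow control.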


ToR Switches RFC 2889 Congestion Test 150% of line rate into single 10GbE
Chart: Aggregated forwarding rate as a percentage of line rate under 150% congestion load into a single 10GbE port, Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes (Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Apresia 15K, Extreme X670V, Alcatel-Lucent 6900-X40).

Aggregated Forwarding Rate (AFR, % line rate), Head of Line Blocking (HOL) and Back Pressure (BP), Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes:

The Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Extreme X670V and Alcatel-Lucent 6900-X40 each recorded an AFR of 100%, no HOL blocking and back pressure in use at every frame size for both Layer 2 and Layer 3.

Apresia 15K:
Frame Size (bytes)   Layer 2 AFR%   HOL   BP      Layer 3 AFR%   HOL   BP
64                   77.9%          N     Y       77.9%          N     Y
128                  79.1%          N     Y       77.9%          N     Y
256                  78.6%          N     Y       77.8%          N     Y
512                  78.6%          N     Y       78.2%          N     Y
1,024                79.0%          N     Y       67.9%          Y     N
1,280                79.0%          N     Y       50.1%          Y     N
1,518                78.6%          N     Y       50.0%          Y     N
2,176                49.9%          Y     N       50.0%          Y     N
9,216                *              *     *       *              *     *

* Largest frame size supported on the Apresia 15K is 9,044 bytes.
AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.

The Arista 7050S-64, Alcatel-Lucent 6900-X40, IBM BNT RackSwitch G8264, Extreme X670V, Dell/Force10 S4810 and Mellanox/Voltaire Vantage 6048 performed as expected: no HOL blocking was observed during the L2 and L3 congestion tests, assuring that a congested port does not impact other ports, and thus the network; that is, congestion is contained. In addition, these switches delivered 100% aggregated forwarding rate by implementing back pressure, or signaling the Ixia test equipment with control frames to slow down the rate of packets entering the congested port, a normal and best practice for ToR switches. While not evident in the above graphic,
uniquely the Arista 7050S-64 did indicate to the Ixia test gear that it was using back pressure; however, no flow control frames were detected and, in fact, none were sent. Ixia and other test equipment calculate back pressure per RFC 2889, paragraph 5.5.5.2, which states that if the total number of received frames on the congested port surpasses the number of frames transmittable at the MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7050S-64's generous and dynamic buffer allocation, it can accept more packets on a port than the MOL; therefore, Ixia, or any test equipment, calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7050S-64. The Arista 7050S-64 is designed with dynamic buffer pool allocation, such that during a microburst of traffic, as during the RFC 2889 congestion test when multiple traffic sources are destined to the same port, packets are buffered in packet memory. Unlike other architectures that have fixed per-port packet memory, the 7050S-64 uses Dynamic Buffer Allocation (DBA) to allocate packet memory to ports that need it. Under congestion, packets are buffered in a shared packet memory of 9 MBytes. The 7050S-64 uses DBA to allocate up to 5MB of packet memory to a single port for lossless forwarding, as observed during this RFC 2889 congestion test. Apresia demonstrated HOL blocking at various packet sizes for both L2 and L3 forwarding. In addition, performance degradation beyond packet size 1,518 bytes during the L2 congestion test and beyond packet size 512 bytes during the L3 congestion test was observed.
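The following is a simplified sketch, not Arista's implementation, of the shared-pool buffering idea described above: ports draw from one common pool rather than a fixed per-port slice, with a cap so a single congested port cannot exhaust the pool. The 9MB pool and 5MB per-port figures come from the text; everything else is illustrative.

```python
# Illustrative model of dynamic buffer allocation from a shared pool.
# Sizes follow the figures quoted in the text; the class and method names
# are assumptions made for this sketch.

SHARED_POOL_BYTES = 9 * 1024 * 1024      # shared packet memory (~9 MB)
PER_PORT_CAP_BYTES = 5 * 1024 * 1024     # most one port may hold (~5 MB)

class SharedBufferPool:
    def __init__(self):
        self.free = SHARED_POOL_BYTES
        self.used_by_port = {}           # port id -> bytes currently queued

    def try_enqueue(self, port: int, frame_len: int) -> bool:
        """Accept the frame only if the shared pool has room and the port
        stays under its dynamic cap; otherwise the frame would be dropped
        or flow control asserted."""
        used = self.used_by_port.get(port, 0)
        if frame_len <= self.free and used + frame_len <= PER_PORT_CAP_BYTES:
            self.free -= frame_len
            self.used_by_port[port] = used + frame_len
            return True
        return False

    def dequeue(self, port: int, frame_len: int) -> None:
        """Return buffer space to the pool once the frame is transmitted."""
        self.used_by_port[port] -= frame_len
        self.free += frame_len
```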


ToR Switches RFC 2889 Congestion Test 150% of line rate into single 40GbE
Chart: Aggregated forwarding rate as a percentage of line rate under 150% congestion load into a single 40GbE port, Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes (Extreme X670V, Alcatel-Lucent 6900-X40).

New to the Fall 2011 Lippis/Ixia test is congestion testing at 40GbE. Both the Alcatel-Lucent 6900-X40 and Extreme X670V delivered 100% throughput as a percentage of line rate during congestion conditions thanks to back pressure, or flow control. Neither switch exhibited HOL blocking. These switches offer 40GbE congestion performance in line with their 10GbE ports, a major plus.

Aggregated Forwarding Rate (AFR, % line rate), Head of Line Blocking (HOL) and Back Pressure (BP); values were identical at every frame size from 64 to 9,216 bytes:

                          Layer 2                    Layer 3
                          AFR%    HOL    BP          AFR%    HOL    BP
Extreme X670V             100%    N      Y           100%    N      Y
Alcatel-Lucent 6900-X40   100%    N      Y           100%    N      Y

AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
Information was recorded and is available for Head of Line Blocking, Back Pressure and Agg. Flow Control Frames.


24 Port ToR Switches RFC 3918 IP Multicast Test

Chart: IP multicast throughput (% line rate) and aggregated average latency (ns) across frame sizes from 64 to 9,216 bytes (IBM BNT RackSwitch G8124, Arista 7124SX); smaller latency values are better. The Brocade VDX 6720-24 and VDX 6730-32 do not support IP Multicast as they are Layer 2 switches.

* Brocade VDX 6720-24 and VDX 6730-32 do not support IP Multicast as they are Layer 2 switches.

Frame Size (bytes)   Throughput (% Line Rate)                  Agg Average Latency (ns)
                     IBM BNT G8124    Arista 7124SX            IBM BNT G8124    Arista 7124SX
64                   100%             100%                     650.00           562.82
128                  100%             100%                     677.00           590.09
256                  100%             100%                     713.00           558.44
512                  100%             100%                     715.00           570.91
1,024                100%             100%                     713.00           561.63
1,280                100%             100%                     725.00           569.22
1,518                100%             100%                     707.00           569.73
2,176                100%             100%                     713.00           569.17
9,216                100%             100%                     712.00           565.65

Brocade VDX 6720-24 and VDX 6730-32: not tested; they do not support IP Multicast as they are Layer 2 switches.

The Arista 7124SX and IBM BNT RackSwitch G8124 performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. The Arista 7124SX demonstrated over 100ns less latency than the IBM BNT RackSwitch G8124 at various packet sizes. Notice that both the Arista 7124SX and IBM BNT RackSwitch G8124 demonstrated consistently low latency across all packet sizes. The Brocade VDX 6720-24 and 6730-32 are Layer 2 cut-through switches and thus did not participate in the IP Multicast test.


ToR Switches RFC 3918 IP Multicast Test

Chart: IP multicast throughput (% line rate) and aggregated average latency (ns) across frame sizes from 64 to 9,216 bytes (Arista 7050S-64, IBM BNT RackSwitch G8264, Extreme X670V); smaller latency values are better. Mellanox/Voltaire 6048 and Apresia 15K do not support IP Multicast at this time.

* Mellanox/Voltaire and Apresia do not support IP Multicast at this time.

Frame Size (bytes)   Throughput (% Line Rate)                       Agg Average Latency (ns)
                     Arista      IBM BNT     Extreme                Arista      IBM BNT     Extreme
                     7050S-64    G8264       X670V                  7050S-64    G8264       X670V
64                   100%        100%        100%                   894.90      1059.00     955.82
128                  100%        100%        100%                   951.95      1108.00     968.43
256                  100%        100%        100%                   1072.35     1139.00     985.98
512                  100%        100%        100%                   1232.06     1305.00     998.16
1,024                100%        100%        100%                   1272.03     1348.00     1011.00
1,280                100%        100%        100%                   1276.35     1348.00     1012.53
1,518                100%        100%        100%                   1261.49     1332.00     1010.16
2,176                100%        100%        100%                   1272.46     1346.00     1009.82
9,216                100%        100%        100%                   1300.35     1342.00     1009.86

Mellanox/Voltaire 6048 and Apresia 15K: not tested; they do not support IP Multicast at this time.

The Arista 7050S-64, Extreme X670V, IBM BNT RackSwitch G8264 and Dell/Force10 S4810 performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. In terms of IP Multicast average latency, the data for the Arista 7050S-64, Extreme X670V, IBM BNT RackSwitch G8264 and Dell/Force10 S4810 is similar, with higher latencies at larger packet sizes. The Extreme X670V offers the lowest IP Multicast latency for packet sizes greater than 128 bytes, by nearly 300ns. There is a slight difference in aggregate average latency between the Arista 7050S-64 and IBM BNT RackSwitch G8264, with Arista offering 100ns less latency. The Arista, IBM and Extreme ToR switches can be configured with a combination of 10GbE ports plus 2 or 4 40GbE uplink ports, or as 64 10GbE ports. Mellanox/Voltaire and Apresia do not support IP Multicast at this time.

ToR Switches RFC 3918 IP Multicast Test - Cut Through Mode

Chart: IP multicast throughput (% line rate) and aggregated average latency (ns) across frame sizes from 64 to 9,216 bytes (Dell/Force10 S4810, Alcatel-Lucent 6900-X40); smaller latency values are better.

Frame Size (bytes)   Throughput (% Line Rate)                     Agg Average Latency (ns)
                     Dell/Force10   Alcatel-Lucent                Dell/Force10   Alcatel-Lucent
                     S4810          6900-X40                      S4810          6900-X40
64                   100%           100%                          984.00         1021.98
128                  100%           100%                          1033.00        1026.76
256                  100%           100%                          1062.00        1043.53
512                  100%           100%                          1232.00        1056.07
1,024                100%           100%                          1442.00        1057.22
1,280                100%           100%                          1442.00        1052.11
1,518                100%           100%                          1429.00        1055.53
2,176                100%           100%                          1438.00        1054.58
9,216                100%           100%                          1437.00        1055.64

The Alcatel-Lucent 6900-X40 and Dell/Force10 S4810 were tested in cut-through mode and performed as expected in the RFC 3918 IP Multicast test, delivering 100% throughput with zero packet loss. In terms of IP Multicast average latency, the Alcatel-Lucent 6900-X40 delivered lower latencies than the Dell/Force10 S4810 at all packet sizes except 64 bytes. At the higher end of the packet-size range, which is the most important, the Alcatel-Lucent 6900-X40 offers more than 300ns faster forwarding than the Dell/Force10 S4810.

Cloud Simulation ToR Switches Zero Packet Loss: Latency Measured in ns

Chart: Cloud simulation latency (ns) at zero packet loss by traffic profile (EW Database_to_Server, EW Server_to_Database, EW HTTP, EW iSCSI-Server_to_Storage, EW iSCSI-Storage_to_Server, NS Client_to_Server, NS Server_to_Client) for the Mellanox/Voltaire 6048, Dell/Force10 S4810, Brocade VDX 6720-24, Arista 7124SX, Arista 7050S-64, Brocade VDX 6730-32, Extreme X670V and Alcatel-Lucent 6900-X40; smaller values are better.

Cloud Simulation ToR Switches Zero Packet Loss: Latency Measured in ns

Company / Product           EW Database_  EW Server_   EW HTTP   EW iSCSI-Server_  EW iSCSI-Storage_  NS Client_  NS Server_
                            to_Server     to_Database            to_Storage        to_Server          to_Server   to_Client
Mellanox/Voltaire 6048      11401         2348         10134     2884              10454              3366        2197
Dell/Force10 S4810          6890          1010         4027      771               6435               1801        334
Brocade VDX 6720-24         7653          675          4805      1298              7901               1666        572
Arista 7124SX               7439          692          4679      1256              7685               1613        530
Arista 7050S-64             8173          948          4609      1750              8101               1956        1143
Brocade VDX 6730-32         7988          705          4852      1135              8191               1707        593
Extreme X670V               2568          1354         2733      1911              3110               1206        1001
Alcatel-Lucent 6900-X40     3719          1355         2710      2037              3085               995         851

The Alcatel-Lucent 6900-X40, Arista 7124SX and 7050S-64, Brocade VDX 6720-24 and VDX 6730-32, Extreme X670V, Dell/Force10 S4810 and Mellanox/Voltaire Vantage 6048 delivered 100% throughput with zero packet loss and nanosecond latency during the cloud simulation test. Both the Alcatel-Lucent 6900-X40 and Extreme X670V offer the lowest overall latency in cloud simulation measured thus far. All demonstrated latency spikes for the EW Database-to-Server, EW HTTP and EW iSCSI-Storage-to-Server cloud protocols. The Mellanox/Voltaire Vantage 6048 added between 300ns and 600ns of latency relative to all the others.


Power Consumption 24 Port ToR Switches

Charts: WattsATIS per 10GbE port and 3-Year Cost/WattsATIS per 10GbE (IBM BNT RackSwitch G8124, Brocade VDX 6720-24, Arista 7124SX, Brocade VDX 6730-32); smaller values are better.

The Brocade VDX 6730-32 is the standout in this group of ToR switches, with a low 1.5 WattsATIS and a TEER value of 631, the lowest power consumption measured in these industry tests thus far. Note that the Brocade VDX 6730-32 was equipped with only one power supply. Also, the VDX 6730-32 supports 24 10GbE ports and 8 2/4/8 Gbps FCoE ports; this test populated the VDX 6730-32 with only the 24 10GbE ports. If the FCoE ports were operational, power consumption would rise. The Arista 7124SX and IBM BNT RackSwitch G8124 did not differ materially in their power consumption and 3-year energy cost as a percent of list pricing. All 24-port ToR switches are highly energy efficient, with high TEER values and low WattsATIS.

Charts: TEER (larger values are better) and 3-Year Energy Cost as a % of List Price for the same four switches.

Company   Product                  WattsATIS/port   Cost/WattsATIS/Year   3 yr Energy Cost per 10GbE Port   TEER   3 yr Energy Cost as a % of List Price   Cooling
IBM       BNT RackSwitch G8124     5.5              $6.70                 $20.11                            172    4.04%                                   Front-to-Back
Brocade   VDX 6720-24              3.1              $3.72                 $11.15                            310    2.24%                                   Front-to-Back, Back-to-Front
Arista    7124SX                   6.8              $8.24                 $24.71                            140    4.56%                                   Front-to-Back, Back-to-Front
Brocade   VDX 6730-32              1.5              $1.83                 $5.48                             631    0.82%                                   Front-to-Back, Back-to-Front
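As a worked example of how the columns in the table above relate to one another, the sketch below converts WattsATIS per port into a three-year energy cost per port. The electricity rate is an assumption (roughly $0.138 per kWh, 24x7 operation) chosen because it approximately reproduces the published figures; it is not stated in this section of the report.

```python
# Worked example relating WattsATIS/port to the 3-year energy cost column.
# The electricity rate below is an assumption, not a figure from the report.

HOURS_PER_YEAR = 24 * 365          # continuous operation
USD_PER_KWH = 0.138                # assumed rate

def three_year_energy_cost_per_port(watts_atis_per_port: float) -> float:
    kwh_per_year = watts_atis_per_port * HOURS_PER_YEAR / 1000.0
    return 3 * kwh_per_year * USD_PER_KWH

# IBM BNT RackSwitch G8124: 5.5 WattsATIS/port
print(round(three_year_energy_cost_per_port(5.5), 2))   # ~19.95 (table: $20.11)
# Brocade VDX 6730-32: 1.5 WattsATIS/port
print(round(three_year_energy_cost_per_port(1.5), 2))   # ~5.44 (table: $5.48)
```

Dividing the three-year per-port cost by per-port list price yields the final percentage column; list prices themselves are not stated in this section.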


Power Consumption ToR Switches

Charts: WattsATIS per 10GbE port and 3-Year Cost/WattsATIS per 10GbE (Arista 7050S-64, IBM BNT RackSwitch G8264, Dell/Force10 S4810, Mellanox/Voltaire 6048, Apresia 15K, Extreme X670V, Alcatel-Lucent 6900-X40); smaller values are better.

The Arista 7050S-64 is the standout in this group of ToR switches, with a low 2.3 WattsATIS and a TEER value of 404, respectively the lowest and highest we have ever measured, making the Arista 7050S-64 the most power-efficient 48-port-and-above ToR switch measured to date. A close second is a near tie between the Extreme Networks X670V and Alcatel-Lucent 6900-X40, at 3.2 and 3.4 WattsATIS and TEER values of 295 and 280, respectively. All switches are highly energy-efficient ToR switches with high TEER values and low WattsATIS. While there is a difference in the three-year energy cost as a percentage of list price, this is due to the fact that these products were configured differently during the Lippis/Ixia test: the Arista 7050S-64 and IBM BNT RackSwitch G8264 were configured with 64 10GbE ports, while the Dell/Force10 S4810 was configured with 48 10GbE ports, with correspondingly different list pricing. The Extreme X670V was configured with 48 10GbE and 4 40GbE ports. The Alcatel-Lucent 6900-X40 was configured with 40 10GbE and 6 40GbE ports. Hitachi Cable was unable to provide list pricing information, thus there is no data on the Apresia 15K's power cost per 10GbE or three-year energy cost as a percentage of list price.
Charts: TEER (larger values are better) and 3-Year Energy Cost as a % of List Price for the same switches. Apresia did not provide list pricing.

Company / Product                    WattsATIS/port   Cost/WattsATIS/port   3 yr Energy Cost per 10GbE Port   TEER   3 yr Energy Cost as a % of List Price   Cooling
Arista 7050S-64                      2.3              $2.85                 $8.56                             404    1.82%                                   Front-to-Back, Back-to-Front
IBM BNT Rack Switch G8264            3.9              $4.78                 $10.75                            241    4.08%                                   Front-to-Back
Dell/Force10 S4810                   4.0              $4.91                 $14.73                            235    2.83%                                   Front-to-Back
Mellanox/Voltaire 6048               5.5              $6.70                 $20.11                            172    4.17%                                   Front-to-Back
Apresia 15K*                         3.6              $13.09                $13.09                            264    --                                      Front-to-Back
Extreme X670V                        3.2              $3.91                 $11.74                            295    2.21%                                   Front-to-Back
Alcatel-Lucent OmniSwitch 6900-X40   3.4              $4.12                 $12.36                            280    2.82%                                   Front-to-Back

* Apresia did not provide list pricing


ToR Industry Recommendations

The following provides a set of recommendations to IT business leaders and network architects for their consideration as they seek to design and build their private/public data center cloud network fabric. Most of the recommendations are based upon our observations and analysis of the test data. For a few recommendations, we extrapolate from this baseline of test data to incorporate key trends and how these ToR switches may be used to support corporate advantage.

10GbE ToR Switches Ready for Deployment: 10GbE ToR switches are ready for prime time, delivering full line rate throughput at zero packet loss and nanosecond latency plus single- to double-digit delay variation. We are now in the era of 500ns-latency ToR switching. In addition, these ToR switches offer low power consumption, with energy cost over a three-year period estimated between 1.8% and 4% of acquisition cost.

Evaluate Each ToR Switch Separately: While this ToR switch category tested very well, there are differences between vendors, especially in the congestion and cloud simulation tests. Therefore, it is recommended to review each supplier's results and make purchase decisions accordingly.

Deploy ToR Switches that Demonstrate Efficient Power and Cooling: In addition to high TEER values and low WattsATIS, all ToR switches tested support front-to-back or rear-to-front cooling in support of data center hot/cold aisle airflow designs. In fact, some offer a reversible option too. Therefore, it is recommended that these ToR switches be used as part of an overall green data center initiative.

Ready for Storage Enablement: Most of the ToR switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for CEE, FCoE, iSCSI, ATA, NAS, etc., and while these features were not tested in the Lippis/Ixia evaluation, these switches demonstrated that the raw capacity to support them is built in. In fact, some are now offering storage as well as Ethernet ports.

Evaluate ToR Switches for Network Services and Fabric Differences: There are differences between suppliers in terms of the network services they offer as well as how their ToR switches connect to Core/Spine switches to create a data center fabric. It is recommended that IT business leaders evaluate ToR switches together with Core switches to assure that the network services and fabric attributes sought after are realized.

Single Chip Design Advantages: With most, but not all, ToR switches being designed with merchant silicon and proving their performance and power consumption advantages, expect many other suppliers to introduce ToR switches based upon this design. Competitive differentiation will quickly shift toward the network operating system and network services support, especially virtualization-aware, unified networking and software-defined networking with OpenFlow.

Connect Servers at 10GbE: The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric exists to take full advantage of server I/O at 10GbE bandwidth and latency levels.

40GbE Uplink Options Become Available: The Alcatel-Lucent 6900-X40, Arista 7050S-64, Extreme Networks X670V, IBM BNT RackSwitch G8264 and Dell/Force10 S4810 support a range of 40GbE ports and configuration options. The Lippis/Ixia evaluation tested the Arista 7050S-64 and IBM BNT RackSwitch G8264 using 40GbE as a fan-in of four 10GbE ports to increase 10GbE port density to 64. The Alcatel-Lucent 6900-X40 and Extreme Networks X670V were tested with six and four 40GbE links, respectively, for all tests, a first in these industry tests. These options offer design flexibility at no cost to performance, latency or power consumption, and thus we recommend their use. Further, in the spring of 2012 we expect many new 40GbE core switches to be available to accept these 40GbE uplinks.


Core/Spine Cross-Vendor Analysis

There were four Core/Spine switches evaluated for performance and power consumption in the Lippis/Ixia test. The participating vendors were:

Alcatel-Lucent OmniSwitch 10K
Arista 7504 Series Data Center Switch
Extreme Networks BlackDiamond X8
Juniper Networks EX Series EX8200 Ethernet Switch

These switches represent the state of the art of computer network hardware and software engineering, and are central to private/public data center cloud computing infrastructure. If not for this category of Ethernet switching, cloud computing would not exist. The Lippis/Ixia public test was the first evaluation for every Core switch tested. Each supplier's Core switch was evaluated for its fundamental performance and power consumption features. The Lippis/Ixia test results demonstrate that these new Core switches provide state-of-the-art performance at efficient power consumption levels not seen before. The port density tested for these Core switches ranged from 128 10GbE ports to a high of 256 10GbE ports plus, for the first time, 24 40GbE ports for the Extreme Networks BlackDiamond X8.

IT business leaders are responding favorably to Core switches equipped with a value proposition of high performance, high port density, competitive acquisition cost, virtualization-aware services, high reliability and low power consumption. These Core switches currently are in high demand, with quarterly revenues for mid-size firms in the $20 to $40M-plus range. The combined market run rate for both ToR and Core 10GbE switching is measured in the multi-billion dollar range. Further, Core switch price points on a 10GbE per-port basis range from a low of $1,200 to a high of $6,093. 40GbE core ports are priced between 3 and 4 times that of 10GbE core ports. List prices vary from $230K to $780K, with an average order usually being in the million-plus dollar range. While there is a large difference in list price as well as price per port between vendors, the reason is found in the number of network services supported by the various suppliers and in 10GbE/40GbE port density.

We compare each of the above firms in terms of their ability to forward packets: quickly (i.e., latency); without loss, at full line rate (throughput); when ports are oversubscribed with network traffic by 150%; in IP multicast mode; and in cloud simulation. We also measure their power consumption as described in the Lippis Test Methodology section.


Core Switches RFC 2544 Latency Test - Average Latency All Switches Performed at Zero Frame Loss
Chart: Average latency (ns) for Layer 2 and Layer 3 forwarding across frame sizes from 64 to 9,216 bytes, with the number of 10GbE ports populated per switch (Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216, Extreme BDX8); smaller values are better.

Layer 2 average latency (ns):
Frame Size (bytes)   Alcatel-Lucent*   Arista     Juniper     Extreme
                     OmniSwitch 10K    7504       EX8216      BDX8
64                   20864.00          17791.20   11891.60    2273.50
128                  20561.00          6831.50    11366.20    2351.80
256                  20631.00          7850.70    11439.60    2493.60
512                  20936.00          8452.00    11966.80    2767.60
1,024                21216.00          8635.50    13322.90    3331.50
1,280                21787.00          8387.00    14131.80    3588.80
1,518                22668.00          8122.10    14789.60    3830.30
2,176                25255.00          8996.00    16302.70    4542.80
9,216                36823.00          12392.80   34722.70    12095.10

Layer 3 average latency (ns):
Frame Size (bytes)   Alcatel-Lucent*   Arista     Juniper     Extreme
                     OmniSwitch 10K    7504       EX8216      BDX8
64                   20128.00          9994.88    12390.38    2273.30
128                  20567.00          6821.32    12112.50    2354.00
256                  20638.00          7858.69    11814.05    2492.90
512                  20931.00          8466.53    12685.41    2766.30
1,024                21211.00          8647.96    13713.25    3324.30
1,280                21793.00          8387.26    14318.48    3581.00
1,518                22658.00          8112.95    14664.84    3826.50
2,176                25242.00          8996.55    15767.44    4504.10
9,216                45933.00          12387.99   28674.08    11985.90

* RFC 2544 was conducted on the OmniSwitch 10K with its default configuration; however, Alcatel-Lucent has an optimized configuration for low-latency networks which it says improves these results.

We show average latency and average delay variation across all packet sizes for Layer 2 and Layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance between modules and across their backplanes. The clear standout here is the Extreme Networks BlackDiamond X8, whose latency measurements are 2 to 9 times faster at forwarding packets than all others. Another standout is the Arista 7504's large latency at the 64-byte packet size. According to Arista, the Arista 7500 series performance and design are fine-tuned for the applications its customers use; these applications use mixed packet sizes, rather than an artificial stream of wire-speed 64-byte packets.


Core Switches RFC 2544 Latency Test - Average Delay Variation All Switches Performed at Zero Frame Loss

Chart: Average delay variation (ns) for Layer 2 and Layer 3 forwarding across frame sizes from 64 to 9,216 bytes (Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216, Extreme BDX8); smaller values are better.

Layer 2 average delay variation (ns):
Frame Size (bytes)   Alcatel-Lucent*   Arista   Juniper   Extreme
                     OmniSwitch 10K    7504     EX8216    BDX8
64                   9.00              9.00     9.00      8.30
128                  5.00              5.00     5.00      5.40
256                  5.00              6.00     5.40      5.40
512                  8.00              8.00     7.90      7.60
1,024                7.00              7.00     7.00      7.10
1,280                5.00              6.00     5.10      5.60
1,518                10.00             10.00    9.50      9.30
2,176                5.00              5.00     5.00      5.00
9,216                10.00             10.50    9.90      9.10

Layer 3 average delay variation (ns):
Frame Size (bytes)   Alcatel-Lucent*   Arista   Juniper   Extreme
                     OmniSwitch 10K    7504     EX8216    BDX8
64                   9.00              9.00     9.00      9.98
128                  5.00              5.00     5.00      5.50
256                  5.00              5.95     5.11      5.40
512                  8.00              8.00     8.00      7.60
1,024                7.00              7.00     7.00      7.20
1,280                4.00              6.00     5.00      5.50
1,518                10.00             10.00    9.56      9.30
2,176                5.00              5.00     5.00      5.00
9,216                10.00             10.60    10.00     9.10

* RFC 2544 was conducted on the OmniSwitch 10K with its default configuration; however, Alcatel-Lucent has an optimized configuration for low-latency networks which it says improves these results.

Notice there is little difference in average delay variation across all vendors, proving consistent latency under heavy load at zero packet loss for L2 and L3 forwarding. Average delay variation is in the 5 to 10 ns range. Differences do reside within average latency between suppliers, thanks to different approaches to buffer management and port densities.

Core Switches RFC 2544 Throughput Test

Chart: Throughput as a percentage of line rate for Layer 2 and Layer 3 forwarding across frame sizes from 64 to 9,216 bytes, with the number of 10GbE ports populated per switch (Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216, Extreme BDX8).

Throughput (% line rate), frame sizes 64 to 9,216 bytes:

                                Layer 2               Layer 3
Alcatel-Lucent OmniSwitch 10K   100% at all sizes     100% at all sizes
Arista 7504                     100% at all sizes     100% at all sizes
Juniper EX8216                  100% at all sizes     100% at all sizes
Extreme BDX8                    100% at all sizes     100% at all sizes

As expected, all switches are able to forward L2 and L3 packets at all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance devices. Note that while all deliver 100% throughput, the Extreme BlackDiamond X8 was the only device configured with 24 40GbE ports plus 256 10GbE ports.


Core Switches RFC 2889 Layer 2 Congestion Test 150% of Line Rate into Single 10GbE
Chart: Aggregated forwarding rate as a percentage of line rate under 150% congestion load into a single 10GbE port, Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes (Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216, Extreme BDX8).

Aggregated Forwarding Rate (AFR, % line rate), Head of Line Blocking (HOL) and Back Pressure (BP), frame sizes 64 to 9,216 bytes:

Alcatel-Lucent OmniSwitch 10K (Layer 2 and Layer 3): AFR 77.8% at 64 to 1,518 bytes, 77.9% at 2,176 bytes and 78.4% at 9,216 bytes; no HOL blocking; no back pressure.
Arista 7504: Layer 2 AFR 83.34% to 83.36% at all frame sizes; Layer 3 AFR 84.34% to 84.36%, falling to 78.21% at 9,216 bytes, the only case in which HOL blocking was observed; back pressure reported (see note below).
Juniper EX8216 (Layer 2 and Layer 3): AFR between 72.78% and 78.16% depending on frame size; no HOL blocking; no back pressure.
Extreme BDX8 (Layer 2 and Layer 3): AFR 100% at all frame sizes; no HOL blocking; back pressure (flow control) used.

AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
* Note that while the Arista 7504 shows back pressure, in fact there is none. All test equipment, including Ixia, calculates back pressure per RFC 2889, paragraph 5.5.5.2, which states that if the total number of received frames on the congested port surpasses the number of frames transmittable at the MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to the 7504's 2.3GB of packet buffer memory, it can accept more packets on a port than the MOL; therefore, Ixia, or any test equipment, calculates/sees back pressure, but in reality this is an anomaly of the RFC testing method and not of the 7504. The Arista 7504 can buffer up to 40ms of traffic per port at 10GbE speeds, which is 400 Mb (50 MB), or 5,425 packets of 9,216 bytes.
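As a quick arithmetic check of the buffering figure in the note above, 40ms of traffic on a 10GbE port works out to roughly 50 MB, or about 5,425 frames of 9,216 bytes; the only inputs are the line rate, the 40ms figure and the 9,216-byte frame size already stated.

```python
# Arithmetic check of the per-port buffering claim in the note above.

line_rate_bps = 10 * 10**9           # 10 GbE
buffer_time_s = 0.040                # 40 ms per port
frame_bytes = 9216                   # jumbo frame size used in the test

buffered_bits = line_rate_bps * buffer_time_s        # 400,000,000 bits
buffered_bytes = buffered_bits / 8                   # 50 MB
frames = buffered_bytes / frame_bytes                # ~5,425 frames
print(buffered_bits, buffered_bytes, round(frames))  # 400000000.0 50000000.0 5425
```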

All Core switches performed extremely well under congestion conditions, with nearly no HOL blocking and between 77% and 100% aggregated forwarding as a percentage of line rate. It is expected that, at 150% of offered load to a port, a Core switch port would show 33% loss if it is receiving at 150% of line rate and not applying back pressure or flow control. A few standouts arose in this test. First, Arista's 7504 delivered nearly 84% aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows, the highest of all suppliers without utilizing flow control. This is due to its generous 2.3GB of buffer memory as well as its VOQ architecture. Note also that Arista is the only Core

switch that showed HOL blocking for 9,216-byte packets at L3. Note the 7504 was in beta testing at the time of the Lippis/Ixia test, and there was a corner case at 9,216 bytes at L3 that needed further tuning. Arista states that its production code provides wire-speed L2/L3 performance at all packet sizes without any head of line blocking. Second, Alcatel-Lucent and Juniper delivered consistent and perfect performance with no HOL blocking or back pressure for all packet sizes during both L2 and L3 forwarding.


Core Switches RFC 2889 Layer 2 Congestion Test 150% of Line Rate into Single 40GbE
Chart: Aggregated forwarding rate as a percentage of line rate under 150% congestion load into a single 40GbE port, Layer 2 and Layer 3, frame sizes 64 to 9,216 bytes (Extreme BDX8).

Aggregated Forwarding Rate (AFR, % line rate), Head of Line Blocking (HOL) and Back Pressure (BP); values were identical at every frame size from 64 to 9,216 bytes:

               Layer 2                    Layer 3
               AFR%    HOL    BP          AFR%    HOL    BP
Extreme BDX8   100%    N      Y           100%    N      Y

AFR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
* Extreme's BDX8 was the only Core switch to utilize back pressure to slow down the rate of traffic flow so as to maintain 100% throughput.

The Extreme Networks BDX8 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. We tested congestion in both 10GbE and 40GbE configurations, a first for these tests. The Extreme Networks BDX8 sent flow control frames to the Ixia test gear, signaling it to slow down the rate of incoming traffic flow. We anticipate that this will become a feature that most Core switch vendors adopt, as it provides reliable network forwarding when the network is the data center fabric supporting both storage and datagram flows.


Core Switches RFC 3918 IP Multicast Test

Chart: IP multicast throughput (% line rate) and aggregated average latency (ns) across frame sizes from 64 to 9,216 bytes (Alcatel-Lucent OmniSwitch 10K, Juniper EX8216, Extreme BDX8); smaller latency values are better. Arista did not support IP Multicast at the time of testing in December 2010; both the 7504 and 7508 now do with EOS 4.8.2.

* Arista does not support IP Multicast at this time.

Frame Size (bytes)   Throughput (% Line Rate)                       Agg Average Latency (ns)
                     Alcatel-Lucent   Juniper   Extreme             Alcatel-Lucent   Juniper   Extreme
                     OmniSwitch 10K   EX8216    BDX8                OmniSwitch 10K   EX8216    BDX8
64                   100%             100%      100%                9596             27210     2452
128                  100%             100%      100%                9646             26172     2518
256                  100%             100%      100%                9829             25135     2648
512                  100%             100%      100%                10264            26600     2904
1,024                100%             100%      100%                11114            28802     3417
1,280                100%             100%      100%                11625            29986     3668
1,518                100%             100%      100%                11921            30746     3915
2,176                100%             100%      100%                13023            33027     4574
9,216                100%             100%      100%                28059            60055     11622

Arista 7504: not tested; Arista did not support IP Multicast at the time of testing.

At the time of testing in December 2010, the Arista 7504 did not support IP Multicast; both the 7504 and 7508 now do with EOS 4.8.2. The Extreme Networks BDX8 offers the lowest IP Multicast latency at the highest port density tested to date in these industry tests. The Extreme Networks BDX8 is the fastest at forwarding IP Multicast thanks to its replication chip design, delivering forwarding speeds that are 3 to 10 times faster than all others. Alcatel-Lucent's OmniSwitch 10K demonstrated 100% throughput at line rate. Juniper's EX8216 demonstrated 100% throughput at line rate, with IP multicast aggregated average latency twice that of Alcatel-Lucent's OmniSwitch.


Cloud Simulation Core Switches Zero Packet Loss: Latency Measured in ns

Chart: Cloud simulation latency (ns) at zero packet loss by traffic profile (EW Database_to_Server, EW Server_to_Database, EW HTTP, EW iSCSI-Server_to_Storage, EW iSCSI-Storage_to_Server, NS Client_to_Server, NS Server_to_Client) for the Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216 and Extreme BDX8; smaller values are better. The Extreme Networks BDX8 was measured in store-and-forward plus cut-through mode; for cut-through cloud simulation latency results, see the X8 write-up.

Cloud Simulation Core Switches Zero Packet Loss: Latency Measured in ns

Company / Product               EW Database_  EW Server_   EW HTTP   EW iSCSI-Server_  EW iSCSI-Storage_  NS Client_  NS Server_
                                to_Server     to_Database            to_Storage        to_Server          to_Server   to_Client
Alcatel-Lucent OmniSwitch 10K   28125         14063        19564     17140             26780              13194       11225
Arista 7504                     14208         4394         9171      5656              13245              5428        4225
Juniper EX8216                  30263         11614        18833     19177             26927              12145       11322
Extreme BDX8*                   3773          1339         4887      2097              3135               3368        3808

* Extreme Networks BDX8 was measured in store-and-forward plus cut-through mode. For cut-through cloud simulation latency results, see the X8 write-up.

All Core switches delivered 100% throughput with zero packet loss during the cloud simulation test. Alcatel-Lucent's and Juniper's average latencies for the various traffic flows were consistent with each other, while Arista's latency was measured to be significantly lower for specific traffic types. But the clear standout here is the Extreme Networks BDX8, which forwards a range of cloud protocols 4 to 10 times faster than the others tested.


Power Consumption Core Switches

Charts: Cost/WattsATIS/Year per 10GbE and 3-Year Energy Cost per 10GbE port (Alcatel-Lucent OmniSwitch 10K, Arista 7504, Juniper EX8216, Extreme BDX8); smaller values are better.

There are differences in power efficiency across these Core switches. The Alcatel-Lucent OmniSwitch 10K, Arista 7504, Extreme X8 and Juniper EX8216 represent a breakthrough in power efficiency for Core switches, with the previous generation of switches consuming as much as 70 WattsATIS. These switches consume between 8 and 21 WattsATIS, with TEER values between 117 and 71; remember, the higher the TEER value the better. The standout here is the Extreme Networks BDX8, with the lowest WattsATIS and highest TEER values measured thus far for Core switches in these Lippis/Ixia tests. The Extreme Networks BDX8's actual Watts per 10GbE port would be lower when fully populated with 768 10GbE or 192 40GbE ports. During the Lippis/Ixia test, the BDX8 was populated to only a third of its port capacity but was equipped with power supplies, fans, management and switch fabric modules for full port density population. Therefore, when this full power capacity is divided across a fully populated BDX8, its WattsATIS per 10GbE port will be lower than the measurement observed.

Charts: TEER (larger values are better) and 3-Year Energy Cost as a % of List Price for the same four switches.

Company          Product          WattsATIS/10GbE Port   3-Yr Energy Cost per Port   TEER   3 yr Energy Cost as a % of List Price   Cooling
Alcatel-Lucent   OmniSwitch 10K   13.3                   $48.77                      71     2.21%                                   Front-to-Back
Arista           7504             10.3                   $37.69                      92     3.14%                                   Front-to-Back
Juniper          EX8216           21.7                   $79.26                      44     1.30%                                   Side-to-Side*
Extreme          BDX8             8.1                    $29.60                      117    1.79%                                   Front-to-Back

* 3rd-party cabinet options from vendors such as Chatsworth support hot-aisle/cold-aisle deployments of up to two EX8216 chassis in a single cabinet.


The three-year cost per 10GbE port shows the differences among the four Core switches, with the 3-yr estimated cost per 10GbE port ranging from a low of $29.60 to a high of $79.26. In terms of cooling, Juniper's EX8216 is the only Core switch that does not support front-to-back airflow. However, third-party cabinet options from vendors such as Chatsworth support hot-aisle/cold-aisle deployments of up to two EX8216 chassis in a single cabinet.
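As a quick cross-check of these figures, the sketch below applies the report's three-year energy cost formula (given later in the Power Consumption Test methodology) to a measured WattsATIS value; the per-port list price used for the percentage is a hypothetical placeholder, since list prices are not published in this section.

```python
# Three-year energy cost per 10GbE port, using the report's cost formula:
# (WattsATIS / 1000) * (3 * 365 * 24 hours) * $0.1046 per kWh * 1.33 cooling factor.
def three_year_cost(watts_atis: float) -> float:
    return (watts_atis / 1000.0) * (3 * 365 * 24) * 0.1046 * 1.33

watts_atis = 8.1                        # Extreme BDX8's measured WattsATIS per port (from the table)
list_price_per_port = 2000.0            # hypothetical per-port list price, for illustration only
cost = three_year_cost(watts_atis)      # roughly $29.6, matching the table above
pct_of_list = 100.0 * cost / list_price_per_port
print(round(cost, 2), round(pct_of_list, 2))
```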


Core Switch Industry Recommendations


The following provides a set of recommendations to IT business leaders and network architects for their consideration as they seek to design and build their private/public data center cloud network fabric. Most of the recommendations are based upon our observations and analysis of the test data. For a few recommendations, we extrapolate from this baseline of test data to incorporate key trends and how these Core switches may be put to use for corporate advantage.

10GbE Core Switches Ready for Deployment: 10GbE Core switches are ready for prime time, delivering full line-rate throughput at zero packet loss and nanosecond latencies, plus single- to double-digit delay variation. In addition, these Core switches offer low power consumption, with energy cost over a three-year period estimated between 1.3% and 3.14% of acquisition cost.

Evaluate Each Core Switch Separately: While this category of Core switches tested very well, there are differences between vendors, especially in the RFC 2889 congestion, RFC 3918 IP Multicast, power consumption and cloud simulation tests. Therefore, it is recommended to review each supplier's results and make purchase decisions accordingly.

Deploy Core Switches that Demonstrate Efficient Power and Cooling: In addition to solid TEER values and low WATTSATIS, all Core switches tested, except the EX8216, support front-to-back or rear-to-front cooling in support of data center hot/cold aisle airflow designs. Juniper does support front-to-back airflow via the purchase of a third-party cabinet option. Therefore, these Core switches can be used as part of an overall green data center initiative.

Ready for Storage Enablement: Most of the Core switches demonstrated the performance and latency required to support storage enablement or converged I/O. In fact, all suppliers have invested in storage enablement, such as support for CEE, FCoE, iSCSI, NAS, etc., and while these features were not tested in the Lippis/Ixia evaluation, these switches demonstrated that the raw capacity for their support is built in. Look for more vendors to support flow control in their Core switches.

Evaluate Core Switches for Network Services and Fabric Differences: There are differences between suppliers in terms of the network services they offer as well as how their Core switches connect to ToR/Leaf switches to create a data center fabric. It is recommended that IT business leaders evaluate Core switches together with ToR switches to assure that the network services and fabric attributes sought are realized.

Connect Servers at 10GbE: The Lippis/Ixia test demonstrated the performance and power consumption advantages of 10GbE networking, which can be put to work and exploited for corporate advantage. For new server deployments in private/public data center cloud networks, 10GbE is recommended as the primary network connectivity service, as a network fabric now exists to take full advantage of server I/O at 10GbE bandwidth and latency levels.

40GbE Uplink Termination Options Available: The Extreme X8 supports 192 40GbE ports, 24 of which were tested here. The OmniSwitch 10K, Juniper EX8216 and Arista 7504 possess the hardware architecture and scale to support 40GbE and 100GbE modules. While only Extreme was tested at 40GbE, we expect that more will be available during the Spring 2012 Lippis Report test at iSimCity. Further, we expect 40GbE adoption will start in earnest during 2012, thanks to 40GbE cost being only 3-4 times that of 10GbE.


Consider a Two-Tier Network Architecture: Core switch port density is on the rise, with 10GbE port densities in the 128 to 756 port range plus 40GbE densities of up to 192 ports. We tested Core switches with 256 10GbE and 24 40GbE ports and found that these products forward L2 and L3 packets at wire speed at nanosecond-to-microsecond latencies while offering excellent congestion management, and they are ready to be deployed in a two-tier network architecture made up of Core and ToR switches. This design is preferable for private/public data center cloud computing scale deployments. The OmniSwitch 10K, Arista 7504, Extreme X8 and Juniper EX8216 possess the performance and power efficiency required to be deployed in a two-tier network architecture, reducing application latency as well as capital and operational cost.

Evaluate Link Aggregation Technologies: Not tested in the Lippis/Ixia evaluation was a Core switch's ability to support MC-LAG (Multi-Chassis Link Aggregation Group), ECMP (Equal-Cost Multi-Path routing), TRILL (Transparent Interconnection of Lots of Links), SPB (802.1aq Shortest Path Bridging), Cisco's FabricPath, Juniper's QFabric or Brocade's VCS. This is a critical test to determine Ethernet fabric scale. It is recommended that IT business leaders and their network architects evaluate each supplier's MC-LAG, ECMP, TRILL, SPB, FabricPath, QFabric or VCS approach, along with its performance, congestion and latency characteristics.


The Lippis Report Test Methodology


To test products, each supplier brought its engineers to configure its equipment for test. An Ixia test engineer was available to assist each supplier through test methodologies and review test data. After testing was concluded, each supplier's engineer signed off on the resulting test data. We call the following set of tests The Lippis Test. The test methodologies included:

Throughput Performance: Throughput, packet loss and delay for layer-2 (L2) unicast, layer-3 (L3) unicast and layer-3 multicast traffic were measured for packet sizes of 64, 128, 256, 512, 1024, 1280, 1518, 2176 and 9216 bytes. In addition, a special cloud computing simulation throughput test consisting of a mix of north-south plus east-west traffic was conducted. Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used to perform all but the multicast tests. Ixia's IxAutomate RFC 3918 Throughput No Drop Rate test was used for the multicast test.

Latency: Latency was measured for all the above packet sizes plus the special mix of north-south and east-west traffic. Two latency tests were conducted: 1) latency was measured as packets flowed between two ports on different modules for modular switches, and 2) between far-away ports (port pairing) for ToR switches to demonstrate latency consistency across the forwarding engine chip. Latency test port configuration was via port pairing across the entire device rather than side-by-side. This meant that for a switch with N ports, port 1 was paired with port (N/2)+1, port 2 with port (N/2)+2, etc. (a short sketch of this pairing follows this list). Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used for validation.

Jitter: Jitter statistics were measured during the above throughput and latency tests using Ixia's IxNetwork RFC 2544 Throughput/Latency quick test.

Congestion Control Test: Ixia's IxNetwork RFC 2889 Congestion test was used to test both L2 and L3 packets. The objective of the Congestion Control Test is to determine how a Device Under Test (DUT) handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines if Head-of-Line (HOL) blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the amount of traffic to the congested port and, as a result, it is also losing frames destined to the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present.
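The far-apart port pairing described above is simple to express programmatically; the sketch below is illustrative only and is not part of the Ixia test scripts.

```python
# Illustrative sketch of the far-apart port pairing described above:
# port 1 pairs with port N/2 + 1, port 2 with N/2 + 2, and so on.
def port_pairs(n_ports: int) -> list[tuple[int, int]]:
    half = n_ports // 2
    return [(p, p + half) for p in range(1, half + 1)]

# Example: an 8-port device pairs as (1, 5), (2, 6), (3, 7), (4, 8).
print(port_pairs(8))
```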

Video feature: Click to view a discussion on the Lippis Report Test Methodology.

RFC 2544 Throughput/Latency Test

Test Objective: This test determines the processing overhead of the DUT required to forward frames and the maximum rate of receiving and forwarding frames without frame loss.

Test Methodology: The test starts by sending frames at a specified rate, usually the maximum theoretical rate of the port, while frame loss is monitored. Frames are sent from and received at all ports on the DUT, and the transmission and reception rates are recorded. A binary, step or combo search algorithm is used to identify the maximum rate at which no frame loss is experienced. To determine latency, frames are transmitted for a fixed duration. Frames are tagged once in each second, and during half of the transmission duration tagged frames are transmitted. The receiving and transmitting timestamps on the tagged frames are compared; the difference between the two timestamps is the latency. The test uses a one-to-one traffic mapping. For store-and-forward DUT switches, latency is defined in RFC 1242 as the time interval starting when the last bit of the input frame reaches the input port and ending when the first bit of the output frame is seen on the output port. Thus latency is not dependent on link speed only, but on processing time too.

Test Results: This test captures the following data: total number of frames transmitted from all ports, total number of frames received on all ports, percentage of lost frames for each frame size, plus latency, jitter, sequence errors and data integrity errors. The following graphic depicts the RFC 2544 throughput performance and latency test conducted at the iSimCity lab for each product.
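The binary search mentioned in the methodology above can be summarized as follows; this is a simplified illustration, not Ixia's IxNetwork implementation, and run_trial is a hypothetical stand-in for offering traffic at a given percentage of line rate and reporting whether any frames were lost.

```python
# Simplified RFC 2544-style binary search for the maximum no-loss rate.
# run_trial(rate_pct) is a hypothetical callback: it offers traffic at rate_pct
# percent of line rate and returns True if zero frames were lost.
def max_no_loss_rate(run_trial, low=0.0, high=100.0, resolution=0.1) -> float:
    best = 0.0
    while high - low > resolution:
        mid = (low + high) / 2.0
        if run_trial(mid):        # no frame loss: record and search higher
            best = mid
            low = mid
        else:                     # frame loss observed: search lower
            high = mid
    return best

# Example with a toy DUT whose loss threshold sits just below line rate:
print(max_no_loss_rate(lambda rate: rate <= 99.2))
```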

RFC 2889 Congestion Control Test

Test Objective: The objective of the Congestion Control Test is to determine how a DUT handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines if HOL blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the amount of traffic to the congested port and, as a result, it is also losing frames destined to the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present.

Test Methodology: If the ports are set to half duplex, collisions should be detected on the transmitting interfaces. If the ports are set to full duplex and flow control is enabled, flow control frames should be detected. This test consists of a multiple of four ports with the same Maximum Offered Load (MOL). The custom port group mapping is formed of two ports, A and B, transmitting to a third port C (the congested interface), while port A also transmits to port D (the uncongested interface).

[Diagram: Ports A and B transmit to the congested Port C; Port A also transmits to the uncongested Port D]

Test Results: This test captures the following data: intended load, offered load, number of transmitted frames, number of received frames, frame loss, number of collisions and number of flow control frames obtained for each frame size of each trial. The following graphic depicts the RFC 2889 Congestion Control test as conducted at the iSimCity lab for each product.
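The pass/fail logic described in the test objective above reduces to two checks; the sketch below is an illustrative restatement, not Ixia's code, and the counter names are hypothetical.

```python
# Illustrative classification of RFC 2889 congestion behaviour, per the logic above.
# Inputs are hypothetical per-port counters gathered after a trial.
def classify_congestion_behaviour(uncongested_port_loss: int,
                                  congested_port_loss: int,
                                  congested_rx_load_pct: float) -> str:
    if uncongested_port_loss > 0:
        # Loss on the uncongested port means congestion spilled over: HOL blocking.
        return "HOL blocking present"
    if congested_port_loss == 0 and congested_rx_load_pct > 100.0:
        # No loss on the congested port even though it received more than 100% MOL.
        return "Back pressure present"
    return "No HOL blocking or back pressure detected"

print(classify_congestion_behaviour(0, 0, 133.0))
```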

RFC 3918 IP Multicast Throughput No Drop Rate Test

Test Objective: This test determines the maximum throughput the DUT can support while receiving and transmitting multicast traffic. The input includes protocol parameters (IGMP, PIM), receiver parameters (group addressing), source parameters (emulated PIM routers), frame sizes, initial line rate and search type.

Test Methodology: This test calculates the maximum DUT throughput for IP Multicast traffic using either a binary or a linear search, and collects latency and data integrity statistics. The test is patterned after the ATSS Throughput test; however, this test uses multicast traffic. A one-to-many traffic mapping is used, with a minimum of two ports required. If OSPF or ISIS is chosen as the IGP routing protocol, the transmit port first establishes an IGP routing protocol session and a PIM session with the DUT. IGMP joins are then established for each group on each receive port. Once protocol sessions are established, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins. If no IGP routing protocol is chosen, the transmit port does not emulate routers and does not export routes to virtual sources; the source addresses are the IP addresses configured on the Tx ports in the data frame. Once the routes are configured, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins.

Test Results: This test captures the following data: maximum throughput per port, frame loss per multicast group, minimum/maximum/average latency per multicast group and data errors per port. The following graphic depicts the RFC 3918 IP Multicast Throughput No Drop Rate test as conducted at the iSimCity lab for each product.

Power Consumption Test

Port Power Consumption: Ixia's IxGreen within the IxAutomate test suite was used to test power consumption at the port level under various loads or line rates.

Test Objective: This test determines the Energy Consumption Ratio (ECR) and the ATIS (Alliance for Telecommunications Industry Solutions) Telecommunications Energy Efficiency Ratio (TEER) during L2/L3 forwarding performance. TEER is a measure of network-element efficiency, quantifying a network component's ratio of work performed to energy consumed.

Test Methodology: This test performs a calibration test to determine the no-loss throughput of the DUT. Once the maximum throughput is determined, the test runs in automatic or manual mode to determine the L2/L3 forwarding performance while concurrently taking power, current and voltage readings from the power device. Upon completion of the test, the data plane performance and green (ECR and TEER) measurements are calculated. Engineers followed the methodology prescribed by two ATIS standards documents: ATIS-0600015.03.2009, Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting for Router and Ethernet Switch Products, and ATIS-0600015.2009, Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting - General Requirements. The power consumption of each product was measured at various load points: idle 0%, 30% and 100%. The final power consumption was reported as a weighted average calculated using the formula: WATIS = 0.1*(Power draw at 0% load) + 0.8*(Power draw at 30% load) + 0.1*(Power draw at 100% load). All measurements were taken over a period of 60 seconds at each load level and repeated three times to ensure result repeatability. The final WATIS results were reported as the weighted average divided by the total number of ports per switch to arrive at a Watts-per-port measurement per the ATIS methodology, labeled here as WATTSATIS.

Test Results: The L2/L3 performance results include a measurement of WATIS and the DUT TEER value. Note that a larger TEER value is better, as it represents more work done at less energy consumption. In the graphics throughout this report, we use WATTSATIS to identify the ATIS power consumption measurement on a per-port basis. With WATTSATIS we calculate a three-year energy cost based upon the following formula: Cost/WattsATIS/3-Year = (WATTSATIS/1000)*(3*365*24)*(0.1046)*(1.33), where WATTSATIS = ATIS weighted average power in Watts; 3*365*24 = 3 years @ 365 days/yr @ 24 hrs/day; 0.1046 = U.S. average retail cost (in US$) of commercial-grade power as of June 2010 per the Dept. of Energy Electric Power Monthly (http://www.eia.doe.gov/cneaf/electricity/epm/table5_6_a.html); and 1.33 = factor to account for power costs plus cooling costs at 33% of power costs. The following graphic depicts the per-port power consumption test as conducted at the iSimCity lab for each product.
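The ATIS weighting and per-port normalization described above amount to the following arithmetic; the power readings and port count in the example are hypothetical, not values measured in this report.

```python
# ATIS weighted-average power (WATIS) and per-port normalization, as described above:
# WATIS = 0.1 * P(0% load) + 0.8 * P(30% load) + 0.1 * P(100% load), then divide by ports.
def watts_atis_per_port(p_idle: float, p_30: float, p_100: float, ports: int) -> float:
    w_atis = 0.1 * p_idle + 0.8 * p_30 + 0.1 * p_100
    return w_atis / ports

# Hypothetical chassis readings (Watts) for a 256-port configuration:
print(round(watts_atis_per_port(1900.0, 2050.0, 2200.0, 256), 2))
```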

Public Cloud Simulation Test

Test Objective: This test determines the traffic delivery performance of the DUT in forwarding a variety of north-south and east-west traffic in cloud computing applications. The input parameters include traffic types, traffic rate, frame sizes, offered traffic behavior and traffic mesh.

Test Methodology: This test measures the throughput, latency, jitter and loss on a per-application-traffic-type basis across M sets of 8-port topologies. M is an integer and is proportional to the number of ports the DUT is populated with. This test includes a mix of north-south traffic and east-west traffic, and each traffic type is configured for the following parameters: frame rate, frame size distribution, offered traffic load and traffic mesh. The following traffic types are used: web (HTTP), database-server, server-database, iSCSI storage-server, iSCSI server-storage, client-server plus server-client. The north-south client-server traffic simulates Internet browsing, the database traffic simulates server-server lookup and data retrieval, while the storage traffic simulates IP-based storage requests and retrieval. When all traffic is transmitted, the throughput, latency, jitter and loss performance are measured on a per-traffic-type basis.

Test Results: This test captures the following data: maximum throughput per traffic type, frame loss per traffic type, minimum/maximum/average latency per traffic type, minimum/maximum/average jitter per traffic type, data integrity errors per port and CRC errors per port. For this report we show average latency on a per-traffic-type basis at zero frame loss. The following graphic depicts the Cloud Simulation test as conducted at the iSimCity lab for each product.

40GbE Testing: For the test plan above, 24 40GbE ports were available for those DUTs that support 40GbE uplinks or modules. During this Lippis/Ixia test, 40GbE testing included latency, throughput, congestion, IP Multicast, cloud simulation and power consumption tests. ToR switches with 4 40GbE uplinks are supported in this test.
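To illustrate the "M sets of 8-port topologies" scaling described above, the sketch below simply partitions a DUT's populated ports into consecutive 8-port groups; the grouping scheme is an assumption for illustration, since the report does not specify how ports are assigned to each set.

```python
# Illustrative only: partition a DUT's populated ports into M sets of eight ports
# for the cloud simulation, where M grows with the number of populated ports.
def cloud_sim_port_sets(populated_ports: int) -> list[list[int]]:
    m = populated_ports // 8
    return [list(range(s * 8 + 1, s * 8 + 9)) for s in range(m)]

# Example: a 256-port configuration yields M = 32 eight-port sets.
print(len(cloud_sim_port_sets(256)))
```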


Virtualization Scale Calculation


We observe the technical specifications of MAC address table size, /32 IP host route table size and ARP entry size to calculate the largest number of VMs supported in a layer 2 domain. Each VM consumes a switch's (especially a modular/core switch's) logical resources: one MAC entry, one /32 IP host route and one ARP entry. Further, each physical server may use two MAC/IP/ARP entries, one for management and one for IP storage and/or other uses. Therefore, the lowest common denominator of the three (MAC/IP/ARP) tables determines the total number of VMs and physical machines that can reside within a layer 2 domain.

If a data center switch has a 128K MAC table size, 32K IP host routes and an 8K ARP table, then it could support 8,000 VMs (Virtual Machines) plus physical (Phy) servers. From here we utilize a VM:Phy consolidation ratio to determine the approximate maximum number of physical servers. 30:1 is a typical consolidation ratio today, but denser (12-core) processors entering the market may increase this ratio to 60:1. With 8K total IP endpoints at a 30:1 VM:Phy ratio, this calculates to approximately 250 Phy servers and 7,750 VMs. 250 physical servers can be networked into approximately 12 48-port ToR switches, assuming each server connects to two ToR switches for redundancy. If each ToR has 8x10GbE links to each core switch, that is 96 10GbE ports consumed on the core switch. So we can begin to see how the table size scalability of the switch determines how many of its 10GbE ports, for example, can be used and therefore how large a cluster can be built. This calculation assumes the core is the Layer 2/3 boundary, providing a layer 2 infrastructure for the 7,750 VMs.

More generally, for a Layer 2 switch positioned at ToR, for example, the specification that matters for virtualization scale is the MAC table size. So if an IT leader purchases a layer 2 ToR switch, say switch A, and its MAC table size is 32K, then switch A can be positioned into a layer 2 domain supporting up to ~32K VMs. For a Layer 3 core switch (aggregating ToRs), the specifications that matter are MAC table size, ARP table size and IP host route table size. The smallest of the three tables is the important and limiting virtualization scale number. If an IT leader purchases a layer 3 core switch, say switch B, and its IP host route table size is 8K while its MAC table size is 128K, then this switch can support a VM scale of ~8K. If an IT architect deploys the Layer 2 ToR switch A supporting 32K MAC addresses and connects it to core switch B, then the entire layer 2 virtualization domain scales to the lowest common denominator in the network, the 8K from switch B.
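The worked example above reduces to a few lines of arithmetic. The sketch below uses the text's example table sizes and its simplified 30:1 ratio (one table entry per VM or server); note that the text rounds its intermediate figures (about 250 servers, 7,750 VMs, 12 ToR switches and 96 core ports), so the exact integer results below differ slightly.

```python
import math

# Illustrative sketch of the virtualization scale calculation described above.
# Inputs are the example values from the text, not any vendor's datasheet figures.
def virtualization_scale(mac_entries: int, ip_host_routes: int, arp_entries: int,
                         vms_per_server: int = 30, tor_ports: int = 48,
                         uplinks_per_tor: int = 8, redundancy: int = 2):
    endpoints = min(mac_entries, ip_host_routes, arp_entries)   # smallest table limits scale
    phy_servers = endpoints // (vms_per_server + 1)             # each server = 1 host + 30 VMs
    vms = phy_servers * vms_per_server
    tor_switches = math.ceil(phy_servers * redundancy / tor_ports)
    core_ports_used = tor_switches * uplinks_per_tor
    return endpoints, phy_servers, vms, tor_switches, core_ports_used

# Example from the text: 128K MAC, 32K IP host route and 8K ARP table sizes.
print(virtualization_scale(128_000, 32_000, 8_000))
```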


Terms of Use
This document is provided to help you understand whether a given product, technology or service merits additional investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled, laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own networks. Reasonable efforts were made to ensure the accuracy of the data contained herein but errors and/or oversights can occur. The test/ audit documented herein may also rely on various test tools the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the vendors that are beyond our control to verify. Among these is that the software/ hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers. Accordingly, this document is provided as is, and Lippis Enterprises, Inc. (Lippis), gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained herein. By reviewing this document, you agree that your use of any information contained herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting directly or indirectly from any information or material available on it. Lippis Enterprises, Inc. is not responsible for, and you agree to hold Lippis Enterprises, Inc. and its related affiliates harmless from any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the information provided herein. Lippis Enterprises, Inc. makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related to any information, products or companies described herein. When foreign translations exist, the English document is considered authoritative. To assure accuracy, only use documents downloaded directly from www.lippisreport.com . No part of any document may be reproduced, in whole or in part, without the specific written permission of Lippis Enterprises, Inc. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive or in a manner that disparages us or our information, projects or developments.


About Nick Lippis


Nicholas J. Lippis III is a world-renowned authority on advanced IP networks, communications and their benefits to business objectives. He is the publisher of the Lippis Report, a resource for network and IT business decision makers to which over 35,000 executive IT business leaders subscribe. Its Lippis Report podcasts have been downloaded over 160,000 times; iTunes reports that listeners also download the Wall Street Journal's Money Matters, Business Week's Climbing the Ladder, The Economist and the Harvard Business Review's IdeaCast.

Mr. Lippis is currently working with clients to design their private and public virtualized data center cloud computing network architectures to reap maximum business value and outcome. He has advised numerous Global 2000 firms on network architecture, design, implementation, vendor selection and budgeting, with clients including Barclays Bank, Eastman Kodak Company, Federal Deposit Insurance Corporation (FDIC), Hughes Aerospace, Liberty Mutual, Schering-Plough, Camp Dresser McKee, the state of Alaska, Microsoft, Kaiser Permanente, Sprint, Worldcom, Cigitel, Cisco Systems, Hewlett-Packard, IBM, Avaya and many others. He works exclusively with CIOs and their direct reports. Mr. Lippis possesses a unique perspective of market forces and trends occurring within the computer networking industry, derived from his experience with both supply- and demand-side clients.

Mr. Lippis received the prestigious Boston University College of Engineering Alumni award for advancing the profession. He has been named one of the top 40 most powerful and influential people in the networking industry by Network World. TechTarget, an industry online publication, has named him a network design guru, while Network Computing Magazine has called him a star IT guru.

Mr. Lippis founded Strategic Networks Consulting, Inc., a well-respected and influential computer networking industry consulting concern, which was purchased by Softbank/Ziff-Davis in 1996. He is a frequent keynote speaker at industry events and is widely quoted in the business and industry press. He serves on the Board of Advisors of the Dean of Boston University's College of Engineering as well as on the advisory boards of many start-up venture firms. He delivered the commencement speech to Boston University College of Engineering graduates in 2007. Mr. Lippis received his Bachelor of Science in Electrical Engineering and his Master of Science in Systems Engineering from Boston University. His Master's thesis work included selected technical courses and advisors from the Massachusetts Institute of Technology on optical communications and computing.
