Feature Overview
Competitive Positioning
Summary
Switch Architectural Approaches

[Diagram: centralized architecture. A central Switch Fabric with CPU-based Packet Forwarding and Packet Queuing, connected over a Point-to-Point Backplane; access ports: 10/100, 10/100/1000 or 100FX; uplinks: Gigabit]
Centralized Design
[Diagram: redundant central Switch/Route/Mgmt modules and line cards on a Point-to-Point Backplane]
• Performance limited by Switch/Route/Mgmt modules
• As modules are added, overall system performance decreases
• Higher performance requires module and daughtercard upgrades
• No feedback QoS mechanism between central Switch/Router and line cards
• Limited guarantee of high-priority traffic (specifically voice) QoS
• More than two uplinks requires costly additional line cards
• Maximum 1+1 redundancy
• To achieve distributed forwarding, additional option modules are necessary, increasing overall system cost
› In one vendor's platform, the maximum central performance is 30M 64-byte packets per second, the equivalent of 20 Gbps maximum throughput
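That packet-rate figure can be sanity-checked with a short conversion. This is an illustrative sketch, not vendor math: the assumed 20-byte per-frame overhead is the standard Ethernet preamble (8 B) plus minimum inter-frame gap (12 B) that occupy the wire alongside each frame.

```python
def pps_to_gbps(pps, frame_bytes, overhead_bytes=20):
    """Convert a packet rate to on-the-wire throughput in Gbps.

    overhead_bytes models the Ethernet preamble (8 B) plus minimum
    inter-frame gap (12 B) consumed per frame on the wire.
    """
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    return pps * bits_per_frame / 1e9

# 30 million 64-byte packets per second occupy about 20 Gbps of wire
print(round(pps_to_gbps(30e6, 64), 2))  # → 20.16
```

At 64 + 20 = 84 bytes per frame, 30 Mpps works out to roughly 20 Gbps, matching the figure above.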
[Diagram: distributed architecture. Each DFE module combines Packet Forwarding, Packet Queuing, a local CPU and Switch Fabric, serving access and/or uplink ports]
• Increases performance for concurrent (and future) services
• Increases overall scalability, performance and control
• Enables high-capacity distributed switching and reliability
[Diagram: nTera™ Packet Processors with a Host Processor Accelerator, user ports and a distributed fabric backplane]
DFE Architecture
• Advantages:
High availability (N:6), no single CPU, fully distributed, passive backplane
[Diagram: chassis slots 1 through 7 interconnected by point-to-point backplane segments]
Each of the 21 backplane segments supports 20 Gbps (10 Gbps bidirectional)
› 21 segments X 20 Gb = 420 Gb
Future backplane capacity at 80 Gbps
› 21 segments X 80 Gb = 1.68 Tb
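The capacity arithmetic above is simply segments times per-segment rate; a minimal sketch:

```python
SEGMENTS = 21  # point-to-point backplane segments in the chassis

def backplane_capacity_gb(per_segment_gb):
    # Aggregate backplane capacity across all segments, in Gb.
    return SEGMENTS * per_segment_gb

print(backplane_capacity_gb(20))  # → 420  (current: 420 Gb)
print(backplane_capacity_gb(80))  # → 1680 (future: 1.68 Tb)
```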
© 2007 Enterasys Networks, Inc. All rights reserved.
Agenda
Feature Overview
Competitive Positioning
Summary
7C111: 1-Slot N Chassis
Distributed Forwarding Engine (DFE)
Last Updated August 2007
Power over Ethernet DFE Modules
• Two options
[Diagram: Matrix N3 and Matrix N1 chassis with Dragon Intrusion Defense and Enterasys NAC Appliance]

DFE module comparison:

                                     Gold (4 Series)  Platinum                 Diamond (7R Series)
10 Gigabit Ports                     6                10                       14
Interface Types                      Edge             Edge, Dist and Core      Distribution and Core
Performance (Module/System Maximum)  6.5/45.5 Mpps    13.5/94.5 Mpps           13.5/94.5 Mpps
Basic and Advanced (optional) Routing  Basic          Advanced (with license)  Advanced (large route tables)
Legacy Matrix E7 chassis support     Yes              Yes                      Yes
• Chassis Support
Gold, Platinum and Diamond DFEs can go into any slot in the Matrix N3, N5
or N7 chassis.
Multiple Gold DFEs work seamlessly in the same chassis, but cannot be mixed with
Platinum or Diamond DFEs in the same chassis.
Gold DFEs work in a Matrix E7 chassis, but without any other type of module.
Platinum and Diamond DFEs can be mixed in the same chassis; a minimum of
two Diamond DFEs is recommended in a mixed configuration.
• High Availability
By default, the Gold DFE does not provide any high availability (system redundancy).
To get 1+1 redundancy, the NEOSRED software license must be purchased and
installed. Only one 1+1 Redundancy license (NEOSRED) is required per chassis.
For redundancy, the primary and secondary Gold DFEs must be in slots 1 and 2.
• Routing
Basic EOS routing (static routes and RIP) is included with each Gold DFE.
Gold DFEs support Enterasys' Advanced Routing Package (NEOSL3), which includes
OSPF, DVMRP and PIM-SM.
Only one Advanced Routing Package (NEOSL3) is required per chassis.
Diamond DFEs ship with the Advanced Routing Package (NEOSL3).
Collapsed Backbone
[Diagram: Matrix N7 chassis providing backbone routing (tier-two environments) and server aggregation, connecting users and 10/100/1000 servers over 10 GbE fiber uplinks to the Internet and a VPN/Intranet]

Premium Edge
[Diagram: SecureStack C2 stacks and Matrix N7 chassis aggregating users and servers into a Matrix X4 core]
Feature Overview
Competitive Positioning
Summary
Why does Enterasys make the best Secure Networks™ switches in the industry?
Granularity: what can I identify, what can I control, and how can I control it?

Policy and Control
• User, Port and Device Level
• Multiple Control Features
• Granular QoS/Rate Limiting
• VLAN to Policy Mapping
• Multi-field Classification
• Flow Setup Throttling

Switching
• Spanning Trees, Multiple Spanning Trees, VLANs
• Link Aggregation/Rapid Reconfiguration
• Span Guard

Routing
• IPv4 Unicast/Multicast
• RIP 1/2, OSPF
• IGMP, DVMRP
• Multi-Path OSPF
• VRRP
• PIM-SM (Sparse Mode)
DFE Switching/VLAN Services
• High-Performance Switching
• VLAN Services Support
Link Aggregation (IEEE 802.3ad)
Multiple Spanning Trees (IEEE 802.1s)
Rapid Reconfiguration of Spanning Tree (IEEE 802.1w)
• Policy-based Switching
• User Security
Authentication (802.1X, MAC and Web), MAC Port Locking (Static and
Dynamic)
Multi-User Authentication/Policies
• Network Security
Access Control Lists (ACL) – Basic and Extended
Policy-based Security Services (examples: spoofing, unsupported protocol
access, intrusion prevention, DoS attack limits)
• Host
Secure access to the Matrix N-Series via SSH, SSL, SNMP v3
• Configuration
Industry-Standard CLI and Web Support
Multiple Images with Editable Up/Downloadable configuration files
• Network Analysis
SNMP v1/v2c/v3, RMON/RMON II, and SMON (RFC 2613) VLAN statistics
Port/VLAN Mirroring (One to one, one to many, many to many)
VLAN-based: port mapped to VLAN (with VLAN access control via ACLs), user
authenticated to port.
Issues
• Costly, time-consuming VLAN management
• Mobility becomes an issue as VLANs spread across the campus
• VLANs provide no inherent security: within the VLAN there is no control,
and all users share the same ACL
• VLAN changes for quarantine require proper end-system support (DHCP
renew, etc.)

Policy-based: access control (policies) mapped to user, user authenticated
to port.
Benefits
• Simple, quick to implement
• Rapid response to security threats
• Much more granular control
• Far more scalable
• No mobility issues
• No issues when a user is quarantined
[Diagram: in both models, users authenticate to a port on a Matrix N-Series switch]
[Diagram: a user physically connected to an edge device, aggregated by a Matrix N-Series switch between access and backbone]
Extends access and application control (for security, convergence, and
on-demand networking) to users aggregated by devices with limited features
• Supported Mirrors:
Physical ports (Front Panel, FTM1)
Virtual Ports (802.3ad Aggregated Link, Host)
VLAN
IDS
› One to many mirror
• Options to mirror:
Received frames only
Transmitted frames only
Or both
• All frames are copied to the destination port in the same format as they
were received by the switch
Any header changes performed by the switch are made after the frame
has been mirrored
• There is no restriction on the number of ports or VLANs that
can be included in the mirror to a destination port
Network Core
• Host
Hardened OS
Management VLANs
RADIUS Authentication
SSH v2
• User
802.1X User Authentication
User Personalized Networking (UPN)
MAC Based Port Locking
MAC Authentication
• Features:
Ability to authenticate multiple users on a single Matrix
N-Series port
Ability to map several different network policies
(profiles) on a single Matrix N-Series port
• Benefits:
Authenticate users even if the edge switches do not
support authentication
Deliver a Policy-Based Network even if the edge switches
do not support authentication and/or policing
[Diagram: User A and User B authenticated on the same port]
• From 8 up to 256 users per port (with N-EOS-PPC) and 2048 per system (with N-EOS-PUC)
• Different authentication methods (in any combination per port/user)
802.1X, PWA (Web), MAC authentication, Default Role
• Single physical interface
[Diagram: multi-user authentication logic on a DFE port. An 802.1X login returns a RADIUS Filter-ID that maps to a dynamic policy; a device presenting MAC credentials (e.g. SMAC = Ted) is likewise mapped to its policy (e.g. Engineering)]
• Network worms and hacker attacks rely on the ability to discover machines on a network and assess
their vulnerability.
The process of discovering machines on a network is typically done by attempting to establish ICMP
communication with randomly generated IP destination addresses (address scanning).
• Each attempt to discover a network device or assess its vulnerability requires a new flow to be
created. Since attackers want to discover susceptible machines as quickly as possible, flow build-up
is unavoidable.
Worm description     User            Duration  Packets (flows)  Fps (mean)  Packet size (mean)
Welchia: ICMP sweep 140.112.215.131 18.94 1203 63.52 110
Welchia: ICMP sweep 140.112.240.132 18.82 2361 125.36 110
Welchia: ICMP sweep 140.112.242.5 18.51 2006 108.36 110
Welchia: ICMP sweep 140.114.232.103 18.69 2061 110.28 110
Welchia: ICMP sweep 140.115.236.59 18.95 1893 99.91 110
Welchia: ICMP sweep 140.115.240.83 18.95 1894 100 110
Welchia: ICMP sweep 140.115.86.136 18.94 1855 97.3 110
Welchia: ICMP sweep 140.116.201.118 18.72 2244 119.9 110
Welchia: ICMP sweep 140.116.246.164 18.5 1967 106.3 110
Welchia: ICMP sweep 140.116.99.117 18.94 702 37.07 110
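As a sanity check on the table, the Fps (mean) column is just the flow count divided by the duration; a one-line sketch using the first row:

```python
def mean_fps(flows, duration_s):
    # Mean flow-setup rate: total flows divided by observation duration.
    return flows / duration_s

# First table row: 1203 flows observed over 18.94 seconds
print(round(mean_fps(1203, 18.94), 2))  # → 63.52
```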
• Flow Setup Throttling (FST) is an Enterasys-proprietary solution that tracks flow setup and provides
a mechanism to respond to excessive flow build-up (typically suspicious behavior).
• Using FST, a network administrator can define acceptable per-port flow counts and flow setup
rates.
When violations are detected, FST can apply reactive measures such as SNMP notifications (which can start an ASM
response via SEG) or disabling the interface.
• Flow monitoring provides additional visibility into network activities by indicating the network
communication paths and how many conversations are occurring. Like a bandwidth utilization
indicator, flow build-up can warn of suspicious behavior.
• FST provides the ability to limit the number of flows on a port.
Restricting flow usage limits the number of network activities (conversations) a user can perform
at once, but the user is not penalized in bandwidth usage (though that can be done through DIR/ASM).
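The throttling idea can be sketched in a few lines. This is a hypothetical illustration, not the DFE's actual implementation: class name, thresholds and return values are all invented. It tracks per-port active flows and recent setup timestamps, and flags a violation when either the flow count or the setup rate exceeds its configured limit.

```python
import time
from collections import defaultdict

class FlowSetupThrottle:
    """Illustrative sketch of FST-style limits (names are hypothetical)."""

    def __init__(self, max_flows=1000, max_setup_rate=100, window_s=1.0):
        self.max_flows = max_flows            # acceptable per-port flow count
        self.max_setup_rate = max_setup_rate  # acceptable setups per window
        self.window_s = window_s
        self.active_flows = defaultdict(set)  # port -> set of flow keys
        self.setup_times = defaultdict(list)  # port -> recent setup timestamps

    def on_flow_setup(self, port, flow_key, now=None):
        """Record a new flow; return 'ok' or the type of violation."""
        now = time.monotonic() if now is None else now
        self.active_flows[port].add(flow_key)
        self.setup_times[port].append(now)
        # keep only setups that fall inside the rate window
        self.setup_times[port] = [
            t for t in self.setup_times[port] if now - t <= self.window_s
        ]
        if len(self.active_flows[port]) > self.max_flows:
            return "flow-count-violation"   # e.g. SNMP notify, disable port
        if len(self.setup_times[port]) > self.max_setup_rate:
            return "setup-rate-violation"
        return "ok"
```

A Welchia-style sweep at roughly 125 flow setups per second (see the table above) would trip the rate threshold within the first second.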
• Use NetSight Policy Manager to statically define which MAC address(es) can
communicate on the port
Feature Overview
Competitive Positioning
Summary
• The Catalyst 6500 family of multilayer switches is Cisco's flagship switch product line.
5 chassis (6513, 6509-NEBS, 6509, 6507, 6503)
All 6500 series modules can be used in any chassis variant
• Cisco claims significant performance levels, very advanced functionality and low cost
• High performance
720 Gbps system performance
400 Mpps throughput
• Hardware-based IP
Wire-speed IPv4, IPv6 and MPLS
• Advanced virtual network capabilities
MPLS L2 and L3 VPNs
IP-in-IP tunneling
Generic Routing Encapsulation
• Fabric backplane, plus a Classic Bus marketed as a 32 Gbps bus
• Supervisor 720
Enterprise core, data center, service provider applications
› Hardware IPv6, MPLS; 30 Mpps Supervisor IPv4 performance
› Distributed forwarding allows for a maximum of 400 Mpps forwarding
• The Catalyst’s Fabric backplane provides a high speed interconnect for the various Catalyst
modules.
• There are two switch fabric models available for the Catalyst
The Supervisor 720 provides 16 channels which allow for up to 20 Gbps operation per direction per
channel. The channels can be clocked down to 8 Gbps per direction to
support older-generation modules.
The Switch Fabric Module (SFM) provides 16 channels with 8 Gbps per direction performance.
Newer CEF720 modules will not operate with an SFM.
All packet lookup takes place on a supervisor engine unless Distributed Forwarding Cards are
installed; the switch fabrics only act as transport. A supervisor engine can look up 30 million headers a
second whether the received frame is 64 bytes or 1500 bytes long. This allows for full
wire-speed fabric operation with large packets even if no DFCs are installed.
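The arithmetic behind that observation: a fixed header-lookup rate caps packets per second, so the achievable bit rate grows with frame size. A sketch (the 20-byte per-frame overhead for preamble plus inter-frame gap is our assumption):

```python
def lookup_bound_gbps(lookups_per_s, frame_bytes, overhead_bytes=20):
    # Throughput ceiling when every frame requires one central lookup.
    # overhead_bytes: Ethernet preamble (8 B) + inter-frame gap (12 B).
    return lookups_per_s * (frame_bytes + overhead_bytes) * 8 / 1e9

# 30M lookups/s: ~20 Gbps at 64-byte frames, hundreds of Gbps at 1500 bytes
print(round(lookup_bound_gbps(30e6, 64), 1))    # → 20.2
print(round(lookup_bound_gbps(30e6, 1500), 1))  # → 364.8
```

This is why a centrally forwarded chassis can look wire-speed in large-packet tests while falling far short with minimum-size frames.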
• Every Cisco salesperson will claim that the Cat6500 is a 720 Gbps switch with 400 Mpps throughput.
• But they will most certainly lead with Classic Bus or Generation 2 (CEF256) modules, which
never hit the 720 Gbps performance plateau and are significantly less expensive.
• Almost all of Cisco's line modules rely on the supervisor engine for packet lookup, and they will
not operate without a supervisor in the chassis.
• Fabric-enabled line cards can have local lookup engines, Distributed Forwarding Cards (DFCs),
enabling slot-to-slot communication without a supervisor engine. DFCs list for about $7,500.
• Ensure you are comparing apples with apples
General Specifications (Matrix N-Series vs. three competing chassis configurations)

Fault Tolerance
  Matrix N-Series: Distributed
  Competitors: 1+1 Supervisor Engine / 1+1 MSM / 1+1 Switch Fabric
Port Density (10/100/1000, 1000BaseX, 10Gbps)
  Matrix N-Series: 504, 168, 14
  Competitors: 577, 410, 32 / 384, 224, 32 / 384, 384, 36
Forwarding Architecture
  Matrix N-Series: Flow-based, granular policy visibility and control
  Competitors: Longest prefix match via Cisco Express Forwarding (all three)
Security Granularity
  Matrix N-Series: Port/VLAN/Flow via centrally administered policy
  Competitors: Port/VLAN via ACL (all three)
Multi-method Authentication
  Matrix N-Series: YES (802.1X, Web-based PWA, MAC address)
  Competitors: NO (802.1X only, all three)
Multi-user Authentication
  Matrix N-Series: YES (1,000 users per port using MAC, PWA or 802.1X simultaneously)
  Competitors: NO (all three)
Feature Overview
Competitive Positioning
Summary
Why customers choose N-Series…
• Secure Networks!
• Most sophisticated SN feature set in the Enterasys portfolio
• Distributed Management
• High availability
• Flexibility
  • Chassis footprints
  • Module port speeds and densities from edge to core
  • Performance and price points (Gold / Platinum / Diamond)
Last Updated September 2007