
CRS Packet Forwarding and Queuing

BRKARC-2004

What Makes the CRS Different?
• What does Carrier Class architecture mean?
  ‒ Reliability
  ‒ Scalability
  ‒ Predictability
• Where do you need it?
• How does a router deliver?
Before We Begin
• Intermediate-level session on CRS high-end router architecture
  ‒ Prior knowledge of CRS helpful but not required (quick poll)
• Clarify CRS-1 (40G) vs. CRS-3 (140G) vs. FP40/140
• Please ask questions – raise your hand
• Please answer questions – don't be shy
• May defer network-specific questions
CRS Architecture Review – System
PLIM – Physical Layer Interface Module
MSC – Modular Service Card
RP – Route Processor
Switch Fabric – 4 or 8 redundant planes, with a multi-chassis option

[System diagram: packets enter through an ingress PLIM and MSC, cross the midplane into the three-stage switch fabric (S1, S2, S3) overseen by the active and standby RPs, and exit through an egress MSC and PLIM.]
CRS Architecture Review – Line Card
[Line card diagram – ingress path: PLIM -> ingress PSE (L3 lookup and features) -> IngressQ (queuing and segmentation into cells) -> fabric (S1/S2/S3). Egress path: fabric -> FabricQs (reassembly and queuing into the PSE) -> egress PSE (L3 lookup and features) -> EgressQ (output queuing) -> PLIM. A line card CPU sits alongside the forwarding path.]
CRS Architecture – Line Card Details
[Line card detail diagram: the ingress PSE (HW CEF, TCAM) feeds the IngressQ, which holds per-interface shape queues and HP/BE fabric destination queues and segments packets into cells for the fabric (S1/S2/S3). On egress, the FabricQs hold Priority/Assured Forwarding/Best Effort queues per interface feeding the egress PSE (HW CEF, TCAM), and the EgressQ holds Priority/Assured Forwarding/Best Effort output queues toward the PLIM.]
Key Presentation Topics

[The same line card diagram, annotated with the topics covered in this session: PLIM stats, PSE operation, CEF, fabric queuing, the discard bitmap, reassembly queues, output queuing, and TX features.]
Ingress PLIM
• 1 – 4 PLIM ASICs (PLAs) depending on card type
• Nominally 40 Gbps for CRS-1, 140 Gbps for CRS-3
• Oversubscription allowed on all-Ethernet PLIMs
• 96 Gbps aggregate bandwidth into the PSE
  ‒ No bandwidth bottleneck even when oversubscribed
• CRS-3 adds PLIM QoS
CRS-1 PLIMs

PLIM          PLAs   BW to PSE (per PLA)
4xOC192 POS   2      48 Gbps
16xOC48 POS   4      24 Gbps
1xOC768 POS   1      96 Gbps
4/8xTenGE     2      48 Gbps
SIP-800       2      48 Gbps

[Diagram: the PLAs on the PLIM feed the ingress PSE across the midplane.]
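As a rough sanity check of the table above, here is a short sketch (the per-port rates are nominal round numbers, not exact line rates) comparing each PLIM's aggregate port bandwidth with its aggregate PLA-to-PSE bandwidth:

# Rough sketch: compare nominal aggregate port bandwidth against the
# aggregate PLA-to-PSE bandwidth from the CRS-1 PLIM table above.
plims = {
    # name: (nominal aggregate port Gbps, PLAs, Gbps to PSE per PLA)
    "4xOC192 POS": (4 * 10.0, 2, 48),
    "16xOC48 POS": (16 * 2.5, 4, 24),
    "1xOC768 POS": (1 * 40.0, 1, 96),
    "8xTenGE":     (8 * 10.0, 2, 48),
}

for name, (ports_gbps, plas, per_pla_gbps) in plims.items():
    to_pse = plas * per_pla_gbps
    status = "OK" if to_pse >= ports_gbps else "oversubscribed toward PSE"
    print(f"{name:12s} ports={ports_gbps:5.1f}G  to PSE={to_pse}G  {status}")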
Detecting PLIM Drops – Receive Side
show controllers plim asic stat int pos 0/1/0/0 loc 0/1/CPU0
DRP/0/0/CPU0:BUY-CRS# show contr plim asic stat int pos 0/1/0/0 loc 0/1/CPU0
Node: 0/1/CPU0
POS0/1/0/0 Rx Statistics
------------------------------------------------------------------------------
TotalOctets : 135458452621652
TotalPkts : 3069907840506 UnicastPkts : 3069907840532
MulticastPkts : 0 BroadcastPkts : 0
64Octets : 0 65to127Octets : 0
128to255Octets : 0 256to511Octets : 1
512to1023Octets : 0 1024to1518Octets : 0
1519to1548Octets : 0 1549to9216Octets : 3
>9216Octets : 0 BadCRCPkts : 0
BadCodedPkts : 0 Runt : 0
ShortPkts : 3069907840561 802.1QPkts : 0
Drop : 4941487731 PausePkts : 0
ControlPkts : 0 Jabbers : 0
BadPreamble : 0
POS0/1/0/0 Drop
------------------------------------------------------------------------------
RxFiFO Drop : 4294967295 PAR Tail Drop : 4294967295
Audience Question – What is the most likely cause of Ingress PLA drops?
Ingress PSE Topics
[Block diagram of the line card with the ingress PSE highlighted: ingress PSE, IngressQ, CPU, FabricQs, egress PSE, EgressQ.]

• ASIC Architecture
• PPE Operation
• TCAMs/ACLs
• Features
  ‒ CEF – IP/MPLS
  ‒ ACLs
  ‒ Policing*
  ‒ Netflow*
  ‒ uRPF*
CRS-1 PSE Forwarding ASIC Architecture
[ASIC diagram: packets from the PLAs enter through the packet stream interface into an input buffer; a packet distributor assigns each packet to a PPE (Packet Processing Engine). PPEs are organized in clusters of 12, each with its own instruction memory (IMEM) and a MUX used to access shared resources over the resource fabric: PLU, TLU, TCAM, STATS, and POLICER.]
Packet Path Within PSE
Assign Packet (Header) to a PPE

[Diagram: the distributor hands the packet header to an available PPE (shown in green; busy PPEs in red) while the rest of the packet waits in the packet buffers. The PPE clusters share the PLU, TLU, TCAM, STATS, and POLICER resources over the resource fabric.]
Packet Path Within PSE
Perform Lookup and Features Using Resources
[Diagram: the assigned PPE performs the lookup and features for its packet, accessing the shared PLU, TLU, TCAM, STATS, and POLICER resources over the resource fabric while the rest of the packet remains in the packet buffer.]
Packet Path Within PSE
Recombine Header and Tail and Send to IngressQ
[Diagram: when the PPE finishes, the processed header is recombined with the packet tail from the packet buffer and the complete packet is sent on to the IngressQ.]
Inside the PSE - Cisco Express Forwarding
• The PSE contains 188 PPEs (Packet Processing Engines); the CRS-3 PSE has 256
• Each packet is forwarded by a single PPE
  ‒ No pipeline (no stall risk)
  ‒ Lookups and features for a packet are executed by the same PPE
• CEF uses a Tree Bitmap forwarding table
  ‒ Low memory utilization
  ‒ Fast lookup times
  ‒ Fast update times (improves convergence)
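The forwarding table itself is Tree Bitmap; as a purely illustrative stand-in for what a CEF lookup returns, here is a minimal longest-prefix-match sketch over a plain prefix list (not the Tree Bitmap structure, and with a made-up FIB):

import ipaddress

# Illustrative longest-prefix match only -- NOT the Tree Bitmap structure the
# PSE microcode uses. The FIB contents below are invented for the example.
fib = {
    "0.0.0.0/0":   "default -> upstream",
    "10.0.0.0/8":  "core",
    "10.1.0.0/16": "pop-1",
    "10.1.2.0/24": "customer-A",
}

def lpm(dst):
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in fib.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best

print(lpm("10.1.2.99"))   # longest match is 10.1.2.0/24 -> customer-A
print(lpm("192.0.2.1"))   # falls through to the default route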
DRP/0/0/CPU1:CRSRULES# show controllers pse utilization
PPE Utilization
Node Ingress Egress
========================================
0/1/CPU0: 84.5 95.7
0/2/CPU0: 54.2 32.8
Detecting PSE Drops
Both Ingress and Egress
RP/0/RP0/CPU0:XR-ROCKS# show controllers pse statistics 0/0/CPU0
Node 0/0/CPU0 Ingress PSE Stats
--------------------------------
Punt Stats Punted Policed & Dropped
---------- ------ -----------------
CDP 2145978 0
ARP 121223275 10
Bundle Control 223294 0
IPv4 TTL expiration 3510 0
IPv4 L2LI punt 1369 0
IFIB lookup miss 51 0
ACL deny gen ICMP 21170743 18935344
IPv6 link local 478990 0
Drop Stats Dropped
---------- -------
L2 unknown 3940904
IFIB policer drop 1385009
IPv4 not enabled 6
IPv4 addr sanity 21
IPv4 length error 21
IPv4 frag offset within TCP hdr 43
Debug Stats Count
----------- -----
PPE idle counter 284018098746681
Looking at PSE Drops
RP/0/RP0/CPU0:XR#show captured packets ingress location 0/1/CPU0
-------------------------------------------------------
packets dropped in hardware in the ingress direction
buffer overflow pkt drops:3941067, current: 200, non wrapping: 0 maximum: 200
-------------------------------------------------------
Wrapping entries
-------------------------------------------------------
[1] Jun 21 18:31:55.700, len: 239, hits: 1, i/p i/f: TenGigE0/1/0/3
[Distribution header: exception: 0 error 0 dummy 0 moose_congestion 0
gpm_allocated 1 gpm_segment 903]
[Moose Header: pkt_length: 74 port_num: 580 mark: 0 vlan_result: 7
mac_da_result: 0 mac_sa_result: 0]
[punt reason: IPv4 not enabled] [PPE used: cluster=15 ppe=8]
[ether dst: 0100.5e00.000a src: 001b.5374.cab6 type/len: 0x800]
[IPV4: source 172.18.26.6, dest 224.0.0.10 ihl 5, ver 4, tos 192
id 365, len 60, prot 88, ttl 1, sum 111b, offset 0]
0205f769 00000000 00000000 00000000 00000064 0001000c 01000100 0000000f
00040008 03030102 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 000000
Security Access List Implementation
• ACLs are executed on ingress and egress in TCAMs
  ‒ Content Addressable Memory – a search returns the first matching entry
  ‒ Ternary – adds a "don't care" option to regular binary CAM (3 choices)
• Support for large ACLs without performance impact
  ‒ 16K lines per ACL and 64K lines of ACLs per card per direction
Source IP   Source Mask       Proto   Source L4 Port   Dest IP   Dest Mask          Dest L4 Port
32.0.0.0    0.255.255.255     TCP     23               any       255.255.255.255    any
191.2.0.0   0.0.255.255       IP      any              5.5.5.5   0.0.0.0            any
8.8.8.0     0.0.0.127         UDP     any              any       255.255.255.255    50000

TCAM Example
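To make the "don't care" idea concrete, here is a minimal sketch of value/mask matching with first-match-wins semantics (illustrative only, not the CRS TCAM encoding):

# Ternary match sketch: each entry is (value, mask). A mask bit of 1 means the
# key bit must match the value bit; 0 means "don't care". As in a TCAM search,
# the first matching entry wins.
def ternary_match(key, entries):
    for index, (value, mask) in enumerate(entries):
        if (key & mask) == (value & mask):
            return index        # index of the first matching entry
    return None                 # no entry matched

# Example over an 8-bit field: entry 0 matches 1010xxxx, entry 1 matches anything.
entries = [
    (0b10100000, 0b11110000),   # upper nibble must be 1010, lower nibble don't care
    (0b00000000, 0b00000000),   # catch-all: every bit is don't care
]

print(ternary_match(0b10101111, entries))   # -> 0 (specific entry)
print(ternary_match(0b01100001, entries))   # -> 1 (catch-all)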
Monitoring TCAM Utilization
Before Adding an ACL
RP/0/RP0/CPU0:XR-ROCKS# show controllers pse tcam sum loc 0/5/CPU0
TCAM Device Information for Ingress Metro, CAM channel 0:
Device size: 18M (256K array entries of 72-bits), 262051 available
Current mode of operation: Turbo
Software Initialization:
Memory management state: complete
Range block state: complete
IPv6 prefix compression state: complete            <- callout: 262051 available
Hardware Initialization:
Device registers: complete
CAM/SRAM Memory: complete
Default entries for applications: complete
Range Logic Block registers: complete
IPv6 prefix compression region: complete
Feature specific information:
QoS (id 1):
Owner client id: 1. Limit 262144 cells
Duplication enabled in Turbo mode into CAM channel 1
Fab QoS (id 2):
Owner client id: 14. Limit 262144 cells
Total 1 regions using 11 CAM cells
Duplication enabled in Turbo mode into CAM channel 1
ipv6 prefix compress (id 10):
Owner client id: 13. Limit 262144 cells
Total 1 regions using 2 CAM cells
Entry duplication enabled in Turbo and Feature modes into CAM channel 1
tcam_mgr (id 11):
Owner client id: 13. Limit 262144 cells
Total 2 regions using 80 CAM cells
Monitoring TCAM Utilization
After Adding a 6-Line Input ACL
RP/0/RP0/CPU0:XR-ROCKS# show controllers pse tcam sum loc 0/5/CPU0
TCAM Device Information for Ingress Metro, CAM channel 0:
Device size: 18M (256K array entries of 72-bits), 262045 available
Current mode of operation: Turbo
Software Initialization:
Memory management state: complete
Range block state: complete                        <- callout: 262045 available, 6 used by the ACL
IPv6 prefix compression state: complete
Hardware Initialization:
Device registers: complete
CAM/SRAM Memory: complete
<snip>
Feature specific information:
packet filtering (id 0):
Owner client id: 3. Limit 262144 cells
Total 1 regions using 6 CAM cells
QoS (id 1):
Owner client id: 1. Limit 262144 cells
Duplication enabled in Turbo mode into CAM channel 1
Fab QoS (id 2):
Owner client id: 14. Limit 262144 cells
Total 1 regions using 11 CAM cells
Duplication enabled in Turbo mode into CAM channel 1
ipv6 prefix compress (id 10):
Owner client id: 13. Limit 262144 cells
Total 1 regions using 2 CAM cells
Entry duplication enabled in Turbo and Feature modes into CAM channel 1
tcam_mgr (id 11):
Owner client id: 13. Limit 262144 cells
Total 2 regions using 80 CAM cells
IngressQ Topics
[Block diagram of the line card with the IngressQ highlighted: ingress PSE, IngressQ, CPU, FabricQs, egress PSE, EgressQ.]

• Ingress Shaping
• To-Fabric Queuing
• Discard Bitmap*
• Fabric QoS*
IngressQ
[IngressQ diagram: per-interface shape queues (HP/BE) feed the fabric destination queues (HP/BE), which segment packets into cells toward fabric stage S1.]

• Input shaping queues
  ‒ Per interface
  ‒ HP & LP if configured
• Fabric destination queues
  ‒ HP & LP for every FabricQ in the system
  ‒ 4 queues for every MSC in the entire system
  ‒ Queue determined by Ingress QoS or Fabric QoS
• Segmentation of packets into cells
• 45 Gbps limit between the shape queues and the fabric destination queues (140 Gbps on CRS-3)
• Discard bitmap (discussed later; it acts between these two sets of queues)
Monitoring Shaping Parameters
Before and After Adding Input Shaper (on gig 0/5/0/1)
RP/0/RP0/CPU0:CRS# show controllers ingressq queue all loc 0/5/CPU0
iqm queues:
legend: (*) default q, LP low priority, HP high priority, bs burst size
        bw (kbps), bs (usec), quant (10s of 2*Fab MTU).
name owner q-hd port type max_bw min_bw max_bs min_bs quant pktcnt
-----------------------------------------------------------------------------------------
default_queue(*) hfr_pm 32 8 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 33 9 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 34 10 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 35 11 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 36 12 LP 999936 0 639 0 10 0
<snip>
RP/0/RP0/CPU0:CRS# show controllers ingressq queue all loc 0/5/CPU0
iqm queues:
legend: (*) default q, LP low priority, HP high priority, bs burst size
        bw (kbps), bs (usec), quant (10s of 2*Fab MTU).
name owner q-hd port type max_bw min_bw max_bs min_bs quant pktcnt
-----------------------------------------------------------------------------------------
default_queue(*) hfr_pm 32 8 LP 999936 0 639 0 10 0
default_queue(*) QOS-MGR 33 9 LP 49920 0 9512 0 10 923
default_queue(*) hfr_pm 34 10 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 35 11 LP 999936 0 639 0 10 0
default_queue(*) hfr_pm 36 12 LP 999936 0 639 0 10 0
<snip>
Fabric Destination Queue Numbering
First Queue for Each MSC = (Rack * 128) + (Slot * 8)
Rack   Slot   FabricQ   Priority   Queue
0      0      0         LP         0
0      0      0         HP         1
0      0      1         LP         2
0      0      1         HP         3
0      1      0         LP         8
0      1      0         HP         9
0      1      1         LP         10
0      1      1         HP         11
1      0      0         LP         128
1      0      0         HP         129
1      0      1         LP         130
1      0      1         HP         131

[Diagram: the IngressQ shape queues feed the HP/BE fabric destination queues, from which cells are sent into the fabric.]
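Based on the formula in the slide title and the pattern in the table, here is a small sketch that reproduces these queue numbers (the within-MSC offsets – 2 per FabricQ, plus 1 for HP – are inferred from the table, so treat them as an assumption):

# Reproduce the fabric destination queue numbers from the table above.
# First queue for each MSC = (rack * 128) + (slot * 8); the offsets within an
# MSC (FabricQ * 2, plus 1 for HP) are inferred from the table rows.
def fabric_dest_queue(rack, slot, fabricq, priority):
    base = rack * 128 + slot * 8
    return base + fabricq * 2 + (1 if priority == "HP" else 0)

for rack, slot, fq, pri in [(0, 0, 0, "LP"), (0, 0, 0, "HP"),
                            (0, 1, 1, "HP"), (1, 0, 0, "LP")]:
    print(rack, slot, fq, pri, "->", fabric_dest_queue(rack, slot, fq, pri))
# Prints 0, 1, 11 and 128 -- matching the corresponding table rows.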
Monitoring Fabric Destination Queues
Show Controllers Ingressq Block dqs Queues
DRP/0/2/CPU0:CARRIER-CLASS#
show controllers ingressq block dqs queues 0 63 loc 0/3/CPU0
Queue Fabric Dest Num QE Q Len(bytes) Packet Seq Num
--------------------------------------------------------------
16 0/2 LP 0 0 4172
18 0/2 LP 0 0 2788
24 0/3 LP 0 0 331
26 0/3 LP 0 0 5547
27 0/3 HP 0 0 6591
62 0/RP1 LP 0 0 518
• Must specify a range of queues
• Limited to displaying 64 queues at a time
• Run on the line card via the "location" command

Audience Question – What would cause these queues to get congested?
CRS Fabric Cells
Cell format (136 bytes total):
  Header (12 bytes) | Payload (4 x 30 bytes = 120 bytes) | CRC (4 bytes)

• 136-byte cells
  ‒ 12-byte header, 120-byte payload, 4-byte CRC
• 1 or 2 packets per cell
  ‒ Packets must start on a 30-byte boundary
  ‒ Packets sharing a cell must be the same priority and cast
  ‒ The entire cell travels over one fabric plane
• Round robin among the 4 or 8 fabric planes
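A back-of-the-envelope sketch using only the figures above (120-byte cell payload, packets aligned to 30-byte boundaries); the real IngressQ segmentation logic is more involved:

import math

CELL_PAYLOAD = 120   # payload bytes per 136-byte cell
BOUNDARY = 30        # packets start on a 30-byte boundary within the payload

def cells_for_packet(pkt_len):
    """Cells needed if the packet starts a fresh cell (no sharing)."""
    return math.ceil(pkt_len / CELL_PAYLOAD)

def payload_slots(pkt_len):
    """30-byte slots consumed, since the next packet starts on a 30-byte boundary."""
    return math.ceil(pkt_len / BOUNDARY)

for size in (40, 64, 1500):
    print(f"{size:5d}-byte packet: {cells_for_packet(size)} cell(s), "
          f"{payload_slots(size)} x 30-byte slots")
# A 40-byte packet occupies 2 slots (60 bytes), so two such packets can share one cell.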
FabricQ Topics
[Block diagram of the line card with the FabricQs highlighted: ingress PSE, IngressQ, CPU, FabricQs, egress PSE, EgressQ.]

• FabricQ queues
• Monitoring queues
• Discard Bitmap
• Fabric QoS
FabricQ Queues – Before Egress PSE
[Diagram: two FabricQ ASICs (FQ0 and FQ1) sit between fabric stage S3 and the egress PSE. Each FabricQ holds per-interface unicast queues (HP/AF/BE), multicast queues (HP/AF/BE), and "to CPU" queues.]

• Cells are reassembled into packets
• Packets are queued prior to the egress PSE
• Unicast queues
  ‒ Per type of service (HP/AF/BE)
  ‒ Per output interface
• Multicast queues
  ‒ Per type of service
• Raw (to CPU) queues
  ‒ 8 queues
Monitoring FabricQs – Queue Length
RP/0/RP0/CPU0:GO-CISCO# show controllers fabricq queues loc 0/5/CPU0
Fabric Queue Manager Queue Information:
=======================================
Location: 0/5/CPU0
Asic Instance: 0                                   <- which FabricQ ASIC?
Fabric Destination Address: 0
CpuCtrl Cast range : 0 - 7
Multicast Range : 8 - 71                           <- note: 3 FabricQ queues per interface
Unicast Quanta in KBytes : 58, Multicast Quanta : 14
+-------------------------------------------------------------------------------------+
|Type/Ifname |Port|Queue| Q |P-quanta|Q-quanta|HighW |LowW |Q Len |BW |
| | num| num |pri| KBytes | KBytes |KBytes|KBytes|KBytes |(kbps) |
+-------------------------------------------------------------------------------------+
|Cpuctrl Cast | 0|0 - 7| HI| 13| 13| 1021| 919| 0| N/A|
|Multicast | 0| 9| BE| 13| 13| 5118| 4606| 0| N/A|
|Multicast | 0| 10| AF| 13| 13| 5118| 4606| 0| N/A|
|Multicast | 0| 11| HI| 13| 1905| 5118| 4606| 0| N/A|
|GigabitEthernet0/5/0/0| 1| 1025| BE| 13| 187| 3661| 3295| 0|1000000|
|GigabitEthernet0/5/0/0| 2| 2049| AF| 13| 187| 3661| 3295| 0|1000000|
|GigabitEthernet0/5/0/0| 3| 3073| HI| 1905| 187| 3661| 3295| 0|1000000|
|GigabitEthernet0/5/0/1| 1| 1026| BE| 13| 187| 3661| 3295| 0|1000000|
<snip>
Audience Question – What would cause these queues to get congested?
Monitoring FabricQs - Drops
DRP/0/0/CPU1:CRS#show controllers fabricq statistics loc 0/1/CPU0
Fabric Queue Manager Packet Statistics
======================================
Location: 0/1/CPU0
Asic Instance: 0
Fabric Destination Address: 4
BP Asserted Count : 1 (+ 0 )
MC BP Asserted Count : 0 (+ 0 )
Input Cell counters:
+----------------------------------------------------------+
Data cells : 5202891134 (+ 4 )
Control cells : 2133618291 (+ 34000 )
Idle cells : 31374246993871 (+ 499897823 )
Reassembled packet counters
+----------------------------------------------------------+
Ucast pkts : 8715884484 (+ 0 )
Mcast pkts : 0 (+ 0 )
Cpuctrlcast pkts : 1422623 (+ 2 )

Dropped packets (total drops; the "+" column is the change since the command was last run)
+----------------------------------------------------------+
Ucast pkts : 0 (+ 0 )
Mcast pkts : 0 (+ 0 )
Cpuctrlcast pkts : 0 (+ 0 )
Vital denied pkts : 0 (+ 0 )
NonVital denied pkts : 0 (+ 0 )
Unicast lost pkts : 0 (+ 0 )
Ucast partial pkts : 0 (+ 0 )
PSM OOR Drops : 0 (+ 0 )
Discard Bitmap Concept
Drop on the IngressQ when a FabricQ queue is congested
[Diagram: the line-card packet path with a congested FabricQ queue marked; when that queue congests, traffic destined to it is dropped at the IngressQ (between the shape queues and the fabric destination queues) rather than being sent across the fabric.]

Audience Question – What is the path for the notification?
Discard Bitmap Example
Sent by every FabricQ to every IngressQ
Rack MSC Fabric Q Priority Interface Congested
0 0 0 HP Pos 0/0/0/0 NO
0 0 0 HP Pos 0/0/0/1 NO
0 0 0 AF Pos 0/0/0/0 NO
0 0 0 AF Pos 0/0/0/1 YES
0 0 0 BE Pos 0/0/0/0 NO
0 0 0 BE Pos 0/0/0/1 NO
0 0 0 HP Multicast NO
0 0 0 AF Multicast NO
0 0 0 BE Multicast NO
0 0 1 HP Pos 0/0/0/2 NO
0 0 1 HP Pos 0/0/0/3 NO
0 0 1 AF Pos 0/0/0/2 NO
0 0 1 AF Pos 0/0/0/3 NO
0 0 1 BE Pos 0/0/0/2 NO
0 0 1 BE Pos 0/0/0/3 NO
0 0 1 HP Multicast NO
0 0 1 AF Multicast NO
0 0 1 BE Multicast NO
Audience Question: How many DB entries for a multi-chassis system with 128 8xTenGE Line Cards?
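One way to reason about the audience question, assuming the per-FabricQ pattern shown above (three unicast entries per interface plus three multicast entries) and an 8xTenGE card with two FabricQ ASICs serving four interfaces each; this is only a sketch of the arithmetic under those assumptions, not an authoritative answer:

# Hedged sketch: discard-bitmap entries for 128 8xTenGE line cards, assuming
# the pattern above (3 unicast entries per interface + 3 multicast entries per
# FabricQ) and 2 FabricQ ASICs per card with 4 interfaces each.
line_cards = 128
fabricqs_per_card = 2
interfaces_per_fabricq = 4
priorities = 3                      # HP, AF, BE

per_fabricq = interfaces_per_fabricq * priorities + priorities   # unicast + multicast
total_entries = line_cards * fabricqs_per_card * per_fabricq
print(per_fabricq, total_entries)   # 15 entries per FabricQ, 3840 in total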
Monitoring Discard Bitmap Drops
Show Controllers Ingressq Statistics Location <loc>
RP/0/RP0/CPU0:MULTI-CHASSIS# show controllers ingressq statistics location 0/2/CPU0
Ingressq Rx Statistics.
------------------------------------------------------------------------
rx pkts : 8098091 ( 12286872455 bytes)
rx pkts from cpu : 5350901 ( 4843791985 bytes)
<snip>
Ingressq Tx Statistics.
------------------------------------------------------------------------
tx pkts : 8098090 ( 12405816530 bytes)
tx pkts to cpu : 648410 ( 1013046860 bytes)
<snip>
tx cells to fabric : 96232534
Ingressq Drops.
------------------------------------------------------------------------
length error drops - PSE : 0
length error drops - Cpuctrl : 0
crc error drops - PSE : 0
crc error drops - Cpuctrl : 0
OOR error drops - PSE : 0
OOR error drops - Cpuctrl : 0
discard drops : 0
tail drops : 0
tail drops - no QE : 0
cell drops : 56
Fabric QoS Introduction
Normal Traffic Pattern – No Internal Congestion
[Diagram: under a normal traffic pattern, traffic from many ingress line cards crosses the three-stage fabric (S1/S2/S3) to different egress line cards without any internal congestion.]
Does This Ever Happen? Can we protect priority traffic?
• Routing problems – bugs or misconfiguration
• Link failures during peak traffic periods
• Denial of Service attacks
• "It's just like that all the time"

[Diagram: many ingress line cards all send toward the same egress line card, exceeding the egress PSE capacity (100 Gbps on CRS-1) and the port bandwidth.]
Switch Fabric QoS
Managing Internal Congestion
• Protects priority traffic against potential internal congestion
• Potential sources of internal congestion
  ‒ Multiple switch fabric failures or accidental fabric removal
  ‒ Unusual traffic patterns
  ‒ DoS attacks
  ‒ Links down
  ‒ Routing problems
• Ingress PSE classifies traffic
  ‒ Via Ingress QoS, per interface
    - Full MQC/ACL matching
    - Set priority
    - Set QoS Group
    - Set Discard Class
  ‒ Via Fabric QoS
    - Match DSCP, IP Precedence, QoS Group, Discard Class, MPLS EXP
    - Set priority and/or bandwidth remaining

[Diagram: the ingress PSE and IngressQ on the left, the three-stage fabric (S1/S2/S3) in the middle, and the FabricQs, egress PSE, and EgressQ on the right.]
Ingress QoS
Setting “Priority” in Ingress QoS Policy
[Diagram: the line-card packet path, highlighting where an ingress QoS policy on the ingress PSE marks packets as high priority so that they use the HP shape queues and HP fabric destination queues in the IngressQ.]
Switch Fabric QoS
Setting “Priority” in Fabric QoS Policy
[Diagram: the same packet path, highlighting where the fabric QoS policy's "priority" class steers traffic into the HP fabric destination queues toward the fabric.]
Switch Fabric QoS
Setting “Bandwidth Remaining Percent” in Fabric QoS Policy
[Diagram: the same packet path, highlighting where the fabric QoS policy's "bandwidth remaining percent" setting governs how the non-priority fabric destination queues share bandwidth toward the fabric.]
Configuring Switch Fabric QoS
• Priority treatment for VoIP
• Prefer Assured Forwarding over Best Effort (by a 3:1 ratio)
• Preference applies into the egress PSE
• MQC class-maps with limited match types: DSCP, IP Precedence, MPLS EXP, QoS Group (can be set by ingress QoS), Discard Class (can be set by ingress QoS)

policy-map fabric-qos
 class fab-high
  priority
 class fabric-af
  bandwidth remaining percent 75

switch-fabric
 service-policy fabric-qos     <- applied globally
Egress PSE Topics
[Block diagram of the line card with the egress PSE highlighted: ingress PSE, IngressQ, CPU, FabricQs, egress PSE, EgressQ.]

• 2-Stage CEF Lookup
• Multicast replication
• Egress Features*
  ‒ Output ACLs*
  ‒ Egress Netflow*
  ‒ WRED*
  ‒ Policing*
Understanding 2-Stage CEF Lookup
Key Benefit – Improves Scaling
Ingress PSE selects:
• Output line card
• Output interface
• Fabric destination queue (HP/LP)
• FabricQ ASIC (FQ0 or FQ1)
• FabricQ queue (HP, AF, or BE)

Egress PSE selects:
• Output interface
• Output queue
• Adjacency
• Destination MAC address

[Diagram: the ingress choices are made before segmentation into cells; the egress choices are made after reassembly, in the egress PSE, before the EgressQ.]
CLI - CEF 2-Stage Lookup
Forwarding Table        CLI Command
RP Routing              show route [IP]
RP CEF                  show cef [IP]
LC SW CEF               show cef [IP] location [X/Y/CPU0]
LC Ingress HW CEF       show cef [IP] hardware ingress location [X/Y/CPU0]
LC Egress HW CEF        show cef [IP] hardware egress location [X/Y/CPU0]
DRP/0/2/CPU0:XR# show cef 85.1.1.1 hardware egress location 0/3/CPU0
85.1.1.0/24, version 1, internal 0x40000001 (0x7c076790) [1], 0x0 (0x0), 0x0
(0x0)
Updated Mar 16 09:57:11.643
Prefix Len 24, traffic index 0, precedence routine (0)
via 2.2.13.10, 3 dependencies, recursive
next hop 2.2.13.10 via 2.2.13.0/24
EGRESS PLU
Hardware Multicast Replication
• HW replication within the fabric planes
  ‒ For cells going to multiple line cards
  ‒ No performance impact for additional replications
• HW replication on the egress PSE
  ‒ For multiple ports on a line card
• Efficient scale for high fan-out
  ‒ No increase in load on the MSC

[Diagram: one fabric plane shown (of 4 or 8) replicating a multicast cell to several MSCs and the RPs.]
EgressQ Topics
[Block diagram of the line card with the EgressQ highlighted: ingress PSE, IngressQ, CPU, FabricQs, egress PSE, EgressQ.]

• Egress QoS
• Hierarchical Queuing
• Monitoring Queues
EgressQ Overview
• Per interface/sub-interface queuing
• MQC configuration
  ‒ Strict HP queue
  ‒ Bandwidth guarantees
  ‒ Shaping
  ‒ Bandwidth remaining
• 3-level hierarchy
  ‒ Port – highest-level queuing engine
  ‒ Group – middle-level queuing engine
  ‒ Queue – lowest-level queuing engine

[Diagram: per-group Priority/Assured Forwarding/Best Effort queues roll up into group schedulers and then into the port scheduler inside the EgressQ, which feeds the PLIM.]
Monitoring EgressQ
Baseline - Interface with No QoS
RP/0/RP0/CPU0:CRS# show policy-map interface pos 0/2/0/0 output
No service policy installed

RP/0/RP0/CPU0:CRS# show qos interface pos 0/2/0/0 output
No output QoS Policy applied on this interface

RP/0/RP0/CPU0:CRS# show controllers egressq queue from-interface pos 0/2/0/0 loc 0/2/CPU0
----------------------------------
Interface POS0/2/0/0
----------------------------------
Port 8
----------------------------------
Max LB Tokens : 38880                 <- tokens for shaping; 38880 = no shaping
Max LB Limit Index : 43
Quantum : 27
Default Group : 9                     <- no BW guarantee
High Priority Group : N/A
Low Priority Group : 9
----------------------------------
Group 9
----------------------------------
Port : 8
Priority : Low
Max LB Tokens : 38880
Max LB Limit Index : 43
Min LB Tokens : 0
Min LB Limit Index : 0
Quantum : 27
Default Queue : 9
High Priority Queue : 10
Low Priority Queue : 9
----------------------------------
Queue 9
----------------------------------
Group : 9
Priority : Low
Max LB Tokens : 38880
Max LB Limit Index : 43
Min LB Tokens : 0
Min LB Limit Index : 0
Quantum : 27
Instantaneous length : 0
Length high watermark : 823           <- was it congested?
Timestamp of Queue Length high watermark : Wed Jun 17 11:39:05 2011
----------------------------------
Queue 10
----------------------------------
Group : 9
Priority : High
Max LB Tokens : 0
Max LB Limit Index : 33
Min LB Tokens : 38880
Min LB Limit Index : 57
Quantum : 27
Instantaneous length : 0
Length high watermark : 0
EgressQ QOS - the Goal - Avoid Drops!
[Image slide contrasting "dropped packet (pass!!)" with "no drops!" – the goal of egress QoS is to avoid drops.]
Monitoring EgressQ - HP Queue & BW Guarantee
Show Policy-map Interface (Maps MQC to HW Queue)
RP/0/RP0/CPU0:CRS# show policy-map interface pos 0/2/0/1 output
POS0/2/0/1 output: demo-output-1

Class demo-af-6
  Classification statistics (pkts/bytes) (rate - kbps)
    Matched : 0/0 0
    Transmitted : 0/0 0
    Total Dropped : 0/0 0
  Policing statistics (pkts/bytes) (rate - kbps)
    Policed(conform) : 0/0 0
    Policed(exceed) : 0/0 0
    Policed(violate) : 0/0 0
    Policed and dropped : 0/0
  Queueing statistics
    Vital (packets) : 0
    Queue ID : 12
    High watermark (bytes) : 0
    Inst-queue-len (bytes) : 0
    Avg-queue-len (bytes) : 0
    Taildropped(packets/bytes) : 0/0

Class demo-af-5
  Classification statistics (pkts/bytes) (rate - kbps)
    Matched : 0/0 0
    Transmitted : 0/0 0
    Total Dropped : 0/0 0
  Queueing statistics
    Vital (packets) : 0
    Queue ID : 28
    High watermark (bytes) : 0
    Inst-queue-len (bytes) : 0
    Avg-queue-len (bytes) : 0
    Taildropped(packets/bytes) : 0/0

Class demo-af-4
  Classification statistics (pkts/bytes) (rate - kbps)
    Matched : 0/0 0
    Transmitted : 0/0 0
    Total Dropped : 0/0 0
  Queueing statistics
    Vital (packets) : 0
    Queue ID : 27
    High watermark (bytes) : 0
    Inst-queue-len (bytes) : 0
    Avg-queue-len (bytes) : 0
    Taildropped(packets/bytes) : 0/0

Class class-default
  Classification statistics (pkts/bytes) (rate - kbps)
    Matched : 0/0 0
    Transmitted : 0/0 0
    Total Dropped : 0/0 0
  Queueing statistics
    Vital (packets) : 0
    Queue ID : 11
    High watermark (bytes) : 0

The Queue IDs above are the queue numbers used by the "show controllers" commands.

Configured policy:
policy-map demo-output-1
 class demo-af6
  priority
  police rate 2 gbps
 class demo-af5
  bandwidth percent 20
 class demo-af4
  bandwidth percent 10
HP Queue & BW Guarantee
Show Controllers Egressq

Configured policy (same as previous slide):
policy-map demo-output-1
 class demo-af6
  priority
  police rate 2 gbps
 class demo-af5
  bandwidth percent 20
 class demo-af4
  bandwidth percent 10

RP/0/RP0/CPU0:CRS# show controllers egressq queue from-interface pos 0/2/0/1 loc 0/2/CPU0
----------------------------------
Interface POS0/2/0/1
----------------------------------
Port 12
----------------------------------
Max LB Tokens : 38880
Max LB Limit Index : 43
Quantum : 27
Default Group : 10
High Priority Group : N/A
Low Priority Group : 10
----------------------------------
Group 10
----------------------------------
Port : 12
Priority : Low
Max LB Tokens : 19440
Max LB Limit Index : 41
Min LB Tokens : 19440
Min LB Limit Index : 54
Quantum : 27
Default Queue : 11
High Priority Queue : 12
Low Priority Queue : 28
Low Priority Queue : 27
Low Priority Queue : 11
----------------------------------
Queue 12
----------------------------------
Group : 10
Priority : High
Max LB Tokens : 0
Max LB Limit Index : 33
Min LB Tokens : 38880
Min LB Limit Index : 57
Quantum : 27
Instantaneous length : 0
Length high watermark : 0
Timestamp of Queue Length high watermark : Wed Jun 17 11:39:05 2011
----------------------------------
Queue 28 (af-5)
----------------------------------
Group : 10
Priority : Low
Max LB Tokens : 31104
Max LB Limit Index : 42
Min LB Tokens : 7776                  <- 7776 + 31104 = 38880
Min LB Limit Index : 51
Quantum : 27
Instantaneous length : 0
Length high watermark : 0
Timestamp of Queue Length high watermark : Wed Jun 17 11:39:05 2011
----------------------------------
Queue 27 (af-4)
----------------------------------
Group : 10
Priority : Low
Max LB Tokens : 34992                 <- tokens for BW guarantee
Max LB Limit Index : 43
Min LB Tokens : 3888
Min LB Limit Index : 49
Quantum : 27
Instantaneous length : 0
Length high watermark : 0
Timestamp of Queue Length high watermark : Wed Jun 17 11:39:05 2011
----------------------------------
Queue 11 (class-default)
----------------------------------
Group : 10
Priority : Low
Max LB Tokens : 38880
Max LB Limit Index : 43
Min LB Tokens : 0
Min LB Limit Index : 0
Quantum : 27
Instantaneous length : 0
Length high watermark : 0
Timestamp of Queue Length high watermark : Wed Jun 17 11:39:05 2011
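The leaky-bucket token values above line up with the configured bandwidth percentages if 38880 tokens are read as line rate (the slide's own callout notes 7776 + 31104 = 38880). Here is a quick check of that reading, which is an interpretation rather than documented behavior:

# Interpretation check: if 38880 Min/Max LB tokens correspond to line rate,
# then "bandwidth percent N" should show Min LB Tokens = N% of 38880 and
# Max LB Tokens = 38880 - Min (per the slide's 7776 + 31104 = 38880 callout).
LINE_RATE_TOKENS = 38880

for cls, percent, observed_min, observed_max in [
        ("demo-af5", 20, 7776, 31104),
        ("demo-af4", 10, 3888, 34992)]:
    expected_min = LINE_RATE_TOKENS * percent // 100
    expected_max = LINE_RATE_TOKENS - expected_min
    print(cls, expected_min == observed_min, expected_max == observed_max)
# Both classes print True True.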
CRS Packet Path Review
[Review diagram: the full line-card packet path – ingress PLIM -> ingress PSE (HW CEF, TCAM) -> IngressQ (shape queues, fabric destination queues, segmentation into cells) -> fabric (S1/S2/S3) -> FabricQs (HP/AF/BE per interface) -> egress PSE (HW CEF, TCAM) -> EgressQ (Priority/AF/BE output queues) -> egress PLIM.]
Introducing CRS-3
• Extends the CRS architecture to 140G per slot
• New MSC/FP, fabric, and line cards
  ‒ MSC140 & FP-140
• All existing chassis are upgradeable
  ‒ 4-, 8-, & 16-slot single-chassis systems
  ‒ Multi-chassis Nx16-slot systems
CRS-3 PLIMs
• 1x100GE
• 14xTenGE
• 20xTenGE (oversubscribed)
• Adds PLIM QoS to classify traffic before the ingress PSE
CRS Fabric Bandwidth Comparison
                                CRS-1 (MSC40)                        CRS-3 (MSC140)
Ingress (IngressQ -> S1)        2.5G x 4 = 10 Gbps per plane         5G x 5 = 25 Gbps per plane
                                10G x 8 planes = 80 Gbps total       25G x 8 planes = 200 Gbps total
Egress (S3 -> 2x FabricQ)       2.5G x 8 = 20 Gbps per plane         5G x 8 = 40 Gbps per plane
                                20G x 8 planes = 160 Gbps total      40G x 8 planes = 320 Gbps total
Fabric Bandwidth Calculations
CRS-3
• IngressQ to S1
‒ = 200Gbps (raw bw) * 8b/10b (encoding) * 120/136 (cell overhead) = 141Gbps
• S3 to FabricQ
‒ = 320Gbps (raw bw) * 8b/10b (encoding) * 120/136 (cell overhead) = 225Gbps
or 113Gbps per FabricQ
CRS-1
• IngressQ to S1
‒ = 80Gbps(raw bw) * 8b/10b (encoding) * 120/136 (cell overhead) = 56Gbps
• S3 to FabricQ
‒ = 160Gbps(raw bw) * 8b/10b (encoding) * 120/136 (cell overhead) = 112Gbps or
56 Gbps per FabricQ
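The same arithmetic as a small script (raw link bandwidth reduced by the 8b/10b encoding and the 120/136 cell overhead):

# Usable fabric bandwidth = raw bandwidth * 8b/10b encoding * 120/136 cell overhead.
def usable_gbps(raw_gbps):
    return raw_gbps * (8 / 10) * (120 / 136)

for label, raw in [("CRS-3 IngressQ -> S1", 200), ("CRS-3 S3 -> FabricQ", 320),
                   ("CRS-1 IngressQ -> S1", 80),  ("CRS-1 S3 -> FabricQ", 160)]:
    print(f"{label:22s} {usable_gbps(raw):6.1f} Gbps")
# Roughly 141, 226 (113 per FabricQ), 56 and 113 (56 per FabricQ) Gbps respectively.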
CRS Fabric Bandwidths and Links Between
MSC40 and MSC140
MSC40
• Ingress (IngressQ to fabric): 32 links @ 2.5G for 40G
  = 32 links * 2.5 Gbps * 8/10 (coding) * 120/136 (cell tax) = 56 Gbps (across 8 planes)
• Egress (fabric to FabricQ): 2 FabricQ ASICs per MSC, each with 2 fabric RX links per S3 ASIC – 64 links @ 2.5G
  = 64 links * 2.5 Gbps * 8/10 (coding) * 120/136 (cell tax) = 100 Gbps (across 8 planes)
  The 100 Gbps is split across the 2 FabricQ ASICs on the LC. Egress forwarding capacity is only 40 Gbps,
  so the LC must backpressure when overloaded.

MSC140
• Ingress (IngressQ to fabric ASIC S1): scaling the IngressQ to 140G requires 40 links @ 5G
  = 40 links * 5 Gbps * 8/10 (coding) * 120/136 (cell tax) = 141 Gbps (across 8 planes)
• Egress (fabric ASIC S3 to FabricQ): 64 links @ 5G
  = 64 links * 5 Gbps * 8/10 (coding) * 120/136 (cell tax) = 225 Gbps, or 113 Gbps per FabricQ (across 8 planes)
CRS-3 vs. CRS-1 Architecture Comparison
Data Path Increased to at Least 140G in All ASICs

[Annotated diagram of the CRS-3 line card: 140G PLIMs with PLIM QoS; a PSE with more and faster PPEs, more memory, and higher scale; an IngressQ/EgressQ pair with more queues (8K -> 64K); a faster line card CPU; and a 140G fabric path.]
New on CRS-3 10GE Line Cards
• Two levels of egress priority queues
• Bandwidth oversubscription
• Mandatory "police" statement in priority classes
• Dynamic queue-limit & WRED threshold adjustment
• Layer-all accounting
• Finer granularity
• Higher scalability
CRS-3 Line Card ASICs
[CRS-3 line card ASIC diagram: PLIM with PLAs, PHY/MAC, and a CPU subsystem; MSC with PSE (2x100G), IngressQ (160G) feeding the SEA fabric interface (141G toward the fabric), two FabricQ PODs (113G each from the fabric), EgressQ (160G), and the egress PSE back toward the PLIM; internal link speeds of 100G/120G/160G are marked on the paths.]
Two Levels of Egress Priority Queues
class hp-1
  police rate percent 20
  !
  priority level 1    >>>> default level if the user does not explicitly use the "level" statement
!
class hp-2
  police rate percent 40
  !
  priority level 2    >>>> only available on the Taiko line card

Note:
HP-1 queues have higher priority than HP-2 queues. Multiple HP-1 and/or HP-2 classes can be present in the
same group/policy-map; in that case the same HP-1 or HP-2 hardware queue is shared among those classes. The
egress PLIM has two queues (HP & LP) per port: the PLIM HP queue is mapped to the egress MSC HP-1 queues,
and the PLIM LP queue is mapped to the egress MSC HP-2 & LP queues.

  HP-1        -> PLIM HP queue
  HP-2 & LP   -> PLIM LP queue
Mandatory “Police” Statement in Priority Classes
• High priority propagation
  High-priority traffic uses a single high-priority FIFO queue, which is given strict priority over the LP
  queues. HP traffic under all groups is serviced before ANY low-priority traffic (across all groups). In
  other words, the scope of priority assignment at the queue level is not restricted to the parent group;
  it is global.
• To ensure guaranteed low latency for high-priority traffic and LP bandwidth guarantees, it is mandatory
  to configure a policer in priority classes on certain CRS-3 platform LCs. CLI validation ensures that the
  sum of the HP police rates of all groups under a port is less than the interface rate.
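A tiny illustration of the validation rule just described (the sum of the HP police rates across all groups on a port must stay below the interface rate); the real check is performed by the IOS XR CLI at configuration time, this only sketches the arithmetic with rates expressed as percentages of line rate:

# Sketch of the rule: the sum of HP police rates of all groups under a port
# must be less than the interface rate (expressed here in percent of line rate).
def hp_police_rates_valid(hp_police_percents):
    return sum(hp_police_percents) < 100

print(hp_police_rates_valid([20, 40]))   # True  -- e.g. hp-1 at 20%, hp-2 at 40%
print(hp_police_rates_valid([60, 50]))   # False -- would be rejected by CLI validation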
CRS-3 LC Scalability
• Ingress
  ‒ 64K ingress queues
  ‒ 16K groups
  ‒ 16 dedicated queues to the CPU (TH-PDMA)
  ‒ 16K HI queues
• Egress
  ‒ 64K egress queues (HP and LP)
  ‒ 128 ports
  ‒ 16K groups
  ‒ 16 dedicated queues to the CPU (TH-PDMA)
• Up to 512 classes per policy-map
• The maximum number of policy-maps per system depends on memory size
Complete Your Online Session Evaluation

• Give us your feedback and you could win fabulous prizes. Winners announced daily.
• Receive 20 Passport points for each session evaluation you complete.
• Complete your session evaluation online now (open a browser through our wireless network to access our
  portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live Virtual account for access to all session material, communities,
and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the
World of Solutions or visit www.ciscolive.com.
Final Thoughts

• Get hands-on experience with the Walk-in Labs located in World of Solutions, booth 1042
• Come see demos of many key solutions and products in the main Cisco booth 2924
• Visit www.ciscoLive365.com after the event for updated PDFs, on-demand session videos, networking, and more!
• Follow Cisco Live! using social media:
  ‒ Facebook: https://www.facebook.com/ciscoliveus
  ‒ Twitter: https://twitter.com/#!/CiscoLive
  ‒ LinkedIn Group: http://linkd.in/CiscoLI