The ASR9001 "Iron Man" has 1 built-in RSP with a quad-core PPC CPU and 8GB of RAM. The chassis supports 120Gbps of non-blocking full-duplex traffic (dual Typhoon chips). It has 4x 10Gbps SFP+ ports built in and 2x Modular Port Adapter (MPA) bays that support the 20x 1Gbps, 2x 10Gbps, 4x 10Gbps and 1x 40Gbps MPAs. The internals of the ASR9001 (the NPUs and the built-in 4x10G SFP+ ports) are built from the 2nd generation Typhoon line card hardware (which supports four million IPv4 or two million IPv6 prefixes in shared RLDRAM by default, without changing the scaling profile, and one million MPLS labels).

Even though the ASR9001 supports OIR, all ports on the NP attached to the bay into which a new MPA is being inserted will experience a brief outage (it should be sub-second) as those ports go through a fast reset.

The ASR9001-S is the same unit as the ASR9001 except that two of the built-in 10G SFP+ ports and the 2nd MPA bay are disabled; they can be enabled with a license upgrade and a reboot.

The 9001 chassis has two Network Processor Units (NPUs/NPs), so on the ASR9001-S the 2nd NP is disabled. The first MPA bay and two of the built-in SFP+ ports are on one NP; the 2nd bay and the remaining two built-in SFP+ ports are on the other. These are Typhoon NPs with 6x 10G lanes, giving 60Gbps of non-blocking throughput per NPU, hence 120Gbps non-blocking per chassis.

The example ASR9001 chassis below has a 20x1G MPA in bay 0 (Gi0/0/0/0 to Gi0/0/0/19) and a 4x 10G MPA in bay 1 (Te0/0/1/0 to Te0/0/1/3). The built-in SFP+ ports (Te0/0/2/0 to Te0/0/2/3) sit in "virtual" bay 2. Bay 0 (0/0/0/x) and bay 1 (0/0/1/x) sit on an internal modular line card (like an A9K-MOD80-SE), while bay 2 (0/0/2/x) sits on a Virtual Module, and they all share the same CPU, 0/0/CPU0. So the port numbering is Rack/Slot/Bay/Port.
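As an illustration of this numbering scheme, a few lines of Python (a hypothetical helper for this article, not a Cisco tool) can split an IOS XR interface name into its four numeric components:

```python
import re

def parse_xr_interface(name):
    """Split an IOS XR interface name like 'Te0/0/2/1' into its parts.

    Returns (type, rack, slot, bay, port). Illustrative helper only.
    """
    m = re.fullmatch(r"([A-Za-z]+)(\d+)/(\d+)/(\d+)/(\d+)", name)
    if not m:
        raise ValueError(f"not a rack/slot/bay/port interface name: {name}")
    kind, rack, slot, bay, port = m.groups()
    return kind, int(rack), int(slot), int(bay), int(port)

# On an ASR9001 the built-in SFP+ ports sit in "virtual" bay 2:
print(parse_xr_interface("Te0/0/2/1"))  # ('Te', 0, 0, 2, 1)
```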

For comparison, in an ASR9006 with two A9K-MOD80-SE "80G Modular Linecard, Service Edge Optimized" line cards, each carrying a single 4x10G MPA in one of its bays, the first MPA's ports show as Te0/0/0/0 to Te0/0/0/3 and the second's as Te0/1/0/0 to Te0/1/0/3:

The ASR9001 has a single Fabric Switching ASIC (FSA, "Sacramento") at location 0/0/CPU0 and two Fabric Interface ASICs (FIAs). The FSA is the same ASIC that is used on the RSP440 and on 2nd generation (Typhoon) based line cards in other ASR9000 platforms. The FSA has 4 ports, with 2 connected to each FIA; each FIA connects to one of the two Typhoon NPUs and, like the NPUs, has 60Gbps of non-blocking throughput capacity.

The following diagram shows this topology:

The following command displays the counters related to the Fabric Switching ASIC:

- For MPA ports: MPA port PHY > NP > FIA > FSA > FIA on the other NP > NP > PHY
- For the built-in SFP+ ports: SFP+ PHY > directly to the NP > FIA > FSA > FIA > NP > PHY
- Each NPU provides 40Gbps of bandwidth to its MPA bay (the other 20Gbps of its 60Gbps is used by the two built-in SFP+ ports assigned to it), so the chassis cannot be oversubscribed
- Each line card has an LC CPU accessed via the NP (PHY > NP > LC CPU) for features like programming hardware, inline NetFlow and software-switched packets
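The bandwidth budget in the list above can be sanity-checked with a short calculation (plain Python restating the figures from the text; the variable names are illustrative only):

```python
# Per-NPU bandwidth accounting on the ASR9001, using the figures above.
NPU_CAPACITY_GBPS = 60          # Typhoon NPU: 6x 10G lanes
BUILTIN_SFP_PLUS_PORTS = 2      # two of the four built-in ports per NPU
SFP_PLUS_GBPS = 10

builtin_gbps = BUILTIN_SFP_PLUS_PORTS * SFP_PLUS_GBPS   # 20Gbps to built-in ports
bay_budget_gbps = NPU_CAPACITY_GBPS - builtin_gbps      # 40Gbps left for the bay

# The supported MPAs also top out at 40Gbps, so no bay can oversubscribe its NPU:
mpa_options_gbps = {"20x1G": 20, "2x10G": 20, "4x10G": 40, "1x40G": 40}
assert all(g <= bay_budget_gbps for g in mpa_options_gbps.values())
print(bay_budget_gbps)  # 40
```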

The NP contains the lookup tables (FIB, MAC), TCAM, stats memory and frame memory, and performs LPTS (CoPP). Each line card has a CPU that provides local logic to run control plane functions like BFD, ARP, ICMP, OAM and NetFlow, all distributed at the line-card level.

Larger routers and line cards have a bridge that sits between the NPs and FIAs, acting as a non-blocking memory converter between the two; it can be checked using "show controllers fabric fia bridge stats location 0/RSP0/CPU0". On the 9001 there is one bridge per NP and they cannot be interacted with.

Down at the interface level, the four built-in SFP+ interfaces are 10G only, not dual-rate 1/10G. If fragmentation of IPv4 packets is required, the packet is punted to the LC CPU, so there is no hardware assistance for fragmented packets (continuous fragmented flows will never reach wire rate). IPv6 fragmentation is not supported: the packet is punted and ICMPv6 messages are generated back to the sender (by the LC CPU).
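To illustrate why fragmentation is expensive enough to punt, here is a minimal sketch (plain Python, not router code) of the per-packet arithmetic the LC CPU has to perform when an IPv4 packet exceeds the egress MTU:

```python
def ipv4_fragments(payload_len, mtu, header_len=20):
    """Return (offset, fragment_payload_len) pairs for an IPv4 payload
    that exceeds the egress MTU. Sketch only -- real forwarding code
    must also copy options and set the MF flag and offset fields."""
    # Fragment data length must be a multiple of 8 bytes (except the last).
    max_data = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while offset < payload_len:
        chunk = min(max_data, payload_len - offset)
        frags.append((offset, chunk))
        offset += chunk
    return frags

# A 4000-byte payload leaving a 1500-byte MTU interface needs three fragments:
print(ipv4_fragments(4000, 1500))  # [(0, 1480), (1480, 1480), (2960, 1040)]
```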