11/3/11
Are we closer to the release of a POE FEX? A Nexus 5500/2000 for user access?
5548-1(config)# fex 110
5548-1(config-fex)# type ?
B22HP Fabric Extender 16x10G SFP+ 8x10G SFP+ Module
N2148T Fabric Extender 48x1G 4x10G Module
N2224TP Fabric Extender 24x1G 2x10G SFP+ Module
N2232P Fabric Extender 32x10G 8x10G Module
N2232TM Fabric Extender 32x10GBase-T 8x10G SFP+ Module
N2248GV+P Fabric Extender 48x1G 4x10G POE Module
N2248T Fabric Extender 48x1G 4x10G Module
5548-1# sh ver | inc "system image"
system image file is: bootflash:/n5000-uk9.5.0.3.N2.2a.bin
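If a POE model does ship, attaching it should work like any other FEX. As a rough sketch (Ethernet1/17 below is just a placeholder for whichever fabric uplink you use), associating FEX 110 on the 5548 would look something like this:
interface Ethernet1/17
  ! placeholder uplink port facing the FEX
  switchport mode fex-fabric
  fex associate 110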
4/13/11
6500 Service Module BREAK
This morning I was dealing with a failed ACE-20 upgrade; the module was stuck in a reboot loop. For some reason, I was unable to send a BREAK into the ACE's console over SSH through a terminal server. Without a good way to interrupt the boot sequence, and lacking physical access to the ACE, I needed a way to force the module into ROMMON. Luckily, the 6500's Sup720 can do it over the EOBC with the hw-module module <slot> boot command:
<0-15> Specify literal value for the module's boot option
config-register Boot using the module's config-register value
eobc Boot using an image downloaded through EOBC
flash Boot using an image in module's internal flash memory
recovery Trifecta X86 recovery option
rom-monitor Stay in rom-monitor after module reset
6504-1#hw-module module 2 boot rom-monitor
Then, issue a module reset:
6504-1#hw-module module 2 reset
Apparently, this feature has been available since SXF, but I've not needed it until now.
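If you use this, remember to put the boot behavior back once the module is recovered; based on the help output above, something like this should return the module to normal booting:
6504-1#hw-module module 2 boot config-register
6504-1#hw-module module 2 reset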
Labels:
6500
4/1/11
DNS Anycasting with IP SLA
Anycasting is a common way to steer traffic to the nearest server, while providing failover and load distribution without a hardware load balancer. This technique is commonly used with DNS servers.
DNS server IPs are hard-coded in many servers, end-user machines, and a slew of other network devices, which makes server maintenance and unexpected outages very noticeable. With anycasting, we can automatically steer traffic to another server using our dynamic routing protocol.
This is often done by running the user-visible IP on a loopback adapter and letting the server inject that IP into the network with routing software like Quagga. With at least two servers injecting the same IP, any one of them can die and traffic will fail over to a working server.
Unfortunately, it's not always that easy. There are many environments in which the server admins are not network savvy or the network admins are uncomfortable extending a dynamic routing capability to the server. Luckily, IOS has some tools to help solve this problem, specifically IP SLA and reliable static routing.
First, we create an IP SLA probe that performs a DNS query against the server every 20 seconds:
ip sla 1
dns dnscheck1.ventrefamily.com name-server 172.16.1.10
timeout 20
frequency 20
ip sla schedule 1 life forever start-time now
With object tracking, we can monitor the IP SLA operation for failures. We're specifying a 61-second delay, which allows for three missed probes (20 seconds each) before the track object changes state to up or down.
track 1 ip sla 1
delay down 61 up 61
With reliable static routing, we can install a static route only while the track object is up. 192.168.1.1 is the IP on the server's loopback interface; this is the anycasted IP, and it's the address your clients should be using for DNS. 172.16.1.10 is the server's real interface IP on Vlan110, which is used as the next hop.
ip route 192.168.1.1 255.255.255.255 Vlan110 172.16.1.10 track 1
Now, with a simple redistribution, we can advertise this route into our network, and automatically remove it if the server stops responding to DNS queries.
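Here's a minimal sketch of that redistribution, assuming OSPF (the process number, prefix-list, and route-map names are just examples); filtering on the /32 keeps other statics from leaking in:
! match only the anycast /32 (names and OSPF process number are examples)
ip prefix-list ANYCAST-DNS seq 5 permit 192.168.1.1/32
!
route-map STATIC-TO-OSPF permit 10
 match ip address prefix-list ANYCAST-DNS
!
router ospf 1
 redistribute static subnets route-map STATIC-TO-OSPF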
I would recommend querying a different A record for each server's probe. Otherwise, a single accidentally deleted A record could fail every probe and leave you with no service at all.
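For example, the router in front of a second server could probe its own record and server IP (the dnscheck2 hostname and 172.16.2.10 below are made up for illustration):
! dnscheck2 record and 172.16.2.10 are hypothetical examples
ip sla 2
 dns dnscheck2.ventrefamily.com name-server 172.16.2.10
 timeout 20
 frequency 20
ip sla schedule 2 life forever start-time now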
When you deploy this with multiple servers, you can use IOS and your IGP to automate the failover, hopefully improving the availability of your infrastructure.
2/9/11
ONS Code Upgrades & Expected Packet Loss
As a consultant, I frequently work in environments that don't have regularly scheduled maintenance windows. One of the internal BUs is always doing something critical or their customers are inflexible. For critical infrastructure, it's often difficult to coordinate maintenance windows with all of the parties, even in a redundant network where no outage is expected.
I've recently been tasked to upgrade a Cisco ONS 15454 DWDM solution from Release 9.0 to 9.2, which utilizes Xponders (GE-XP and OTU-2). In talking with some friends who are solely devoted to working on very large Cisco optical networks, and an optical Cisco SE, everyone agreed that most folks won't notice the upgrade. While that's comforting, I wanted quantifiable data on how much packet loss to expect. I also wanted to know about any potential link bounces that may occur on the client interfaces. That's not only helpful for me while I'm performing the upgrade, so I know if it's proceeding as expected, but it's also an important data point I can tell my customers.
I started by taking the lab hardware and setting up my worst-case scenario. Often, in redundant networks, traffic can wrap to the protect path with very little impact to production traffic. To eliminate this as a variable in the lab, I didn't connect the two nodes redundantly; I connected them linearly. Node 1 is a WSS/ROADM config while Node 2 is an AD4C (4-Channel Add/Drop). Each node has a GE-XP (Layer 2 mode) with a SmartBits connected to a client-side port. It's a very simple topology:
During the upgrade I sent a bidirectional 1 Gbps stream between the two nodes. The test included upgrading both nodes, one at a time. The entire process took about an hour, and I'm happy to report that there was zero packet loss and no link bounces. If only my other devices could be upgraded with zero user impact.
Labels:
Optics
10/13/10
Unknown Unicast Flooding - Part 2
In the first UUF entry, geertn444 reminded me of a nagging question: does making the ARP timeout equal to the CAM timeout cause very brief UUF events? I tested this with two 6500s and a traffic generator, running the test sequence between 10 and 15 times. Setting the two timers equal to each other did not cause any flooding.
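If you want to repeat the test, setting the two timers equal only takes a couple of commands; here's a sketch (VLAN 10 and 300 seconds are example values, and the exact MAC aging syntax varies a bit by release):
! example values: VLAN 10 and 300-second timers
interface Vlan10
 arp timeout 300
!
mac-address-table aging-time 300 vlan 10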
For those of you already using 300 seconds as your ARP timeout to match the old default CAM timer, and who have since upgraded to SXI, you now have a large safety cushion: the default CAM timer has been increased to 480 seconds.
9/21/10
Auto-MDIX
If you have an environment where offices and cubicles have multiple network jacks, it's inevitable that two ports will eventually be patched together. This potential bridging loop is usually caught by running BPDU guard on all edge ports; that works well, but you can catch the issue even sooner.
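For reference, the BPDU guard safety net is typically just the following, either per edge port or globally for all PortFast ports (FastEthernet0/1 is just an example port):
interface FastEthernet0/1
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! or, enable it globally on every PortFast edge port
spanning-tree portfast bpduguard default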
Auto-MDIX (Automatic Medium-Dependent Interface Crossover) is a feature that eliminates the need for a crossover cable when connecting two similar (same OSI layer) ports, and it is enabled by default on most switches. That's great for infrastructure links where you interconnect two switches, but it's not typically needed on edge/user ports. If you disable Auto-MDIX on user ports, two ports that are accidentally connected together won't get link. It is disabled with the interface command:
interface FastEthernet0/1
no mdix auto
You can verify its status by looking at the controller:
3560-1#show controllers ethernet-controller FastEthernet 0/1 phy | inc MDIX
Auto-MDIX : Off [AdminState=0 Flags=0x00002248]
Labels:
Ethernet,
Interfaces
9/12/10
3560G/E and 3750G/E Buffers
The buffers on the 3560/3750 platforms are commonly described as small and inadequate, but the actual buffer sizes are rarely listed. I've found one document that does list them:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Video/tpqoscampus.html
Catalyst 3560G/3750G and 3560-E/3750E
[SNIP]these platforms provide (minimally) 750 KB of receive buffers and (up to) 2 MB of transmit buffers for each set of 4 ports.[SNIP]
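If you want to see how those buffers are carved up, show mls qos queue-set displays the current allocation, and the split across the four egress queues can be adjusted with something like this (the percentages below are just an example, and mls qos must be enabled):
! example allocation for queue-set 1; the four percentages must total 100
mls qos queue-set output 1 buffers 15 30 35 20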
Labels:
QoS