
Cisco UCS for Dummies – LAN and SAN Connectivity

As a class and in smaller groups, I’ve participated in several discussions trying to understand UCS connectivity and communication, both internally and externally to the LAN and the SAN. This post summarizes several diagrams and drawings from whiteboards, my notes, and the bootcamp manual to explain which hardware communicates with which protocol, and how redundancy and failover work in Cisco’s Unified Computing System. If you are comparing UCS to other blade centers, some details mentioned here will jump out at you. I’ll conclude with some thoughts on these items.

Again, I am using the terminology and acronyms established in my post from day 1. Review that post if necessary.

The following diagram illustrates the current connectivity between the UCS Blades, Fabric Extenders (FEX), and the Interconnects. For simplicity, the diagram includes only a single chassis, a single half-height blade, and a single full-height blade, while still covering all scenarios. Duplicate the same connectivity for each blade inside the chassis, and duplicate the connectivity of 2 more FEX for each additional chassis in the solution. As shown, the 2 Interconnects can manage up to 20 chassis with model 6140 and up to 10 chassis with model 6120. (The maximum number of chassis cannot be achieved here because 2 FCoE cables are used from each FEX to the Interconnects.)

UCS Bootcamp - LAN SAN Connectivity

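To make the cabling trade-off concrete, here is a back-of-the-envelope sketch in Python. It is not UCS tooling, just arithmetic; it assumes 20 fixed ports on a 6120 and 40 on a 6140, and it ignores any ports reserved for northbound uplinks, so treat the results as upper bounds:

```python
# Back-of-the-envelope chassis scaling: each chassis FEX consumes one
# Interconnect port per uplink cable, and both Interconnects are cabled
# identically. Port counts per model are assumptions for illustration.
INTERCONNECT_PORTS = {"6120": 20, "6140": 40}

def max_chassis(model: str, uplinks_per_fex: int) -> int:
    """Most chassis a pair of Interconnects can manage at this cabling."""
    if uplinks_per_fex not in (1, 2, 4):
        raise ValueError("UCS pins over 1, 2, or 4 uplinks per FEX")
    return INTERCONNECT_PORTS[model] // uplinks_per_fex

for model in ("6120", "6140"):
    for uplinks in (1, 2, 4):
        print(model, uplinks, "uplink(s)/FEX ->", max_chassis(model, uplinks), "chassis")
```

With 2 cables per FEX, that works out to the 10 (6120) and 20 (6140) chassis shown in the diagram.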

New terms to understand before continuing:

  • Northbound networking – any connectivity and communication to switches outside of the UCS solution. Port channels, allowed VLANs, and a matching native VLAN must exist on the next switch up; LACP is configured automatically. On the SAN side, northbound connectivity is represented by a VSAN object in the Service Profile.
  • Inbound networking – any connectivity and communication to the blade servers. Configured by the UCSM, assigned via Service Profiles, and represented as vNIC or vHBA objects. Includes MACs, native VLAN, allowed VLANs, WWNs, WWPNs, etc.
  • End-host mode – the UCS Interconnect default operation. No MAC tables are maintained for northbound traffic. Does not switch traffic northbound, but does switch traffic inbound – including both blade-to-blade connectivity and packets from outside UCS headed inbound.
  • Pinning – automatic or manual assignment of server traffic to specific ports. Happens both on the Interconnects and on the mezzanine cards. On the Interconnects, pinning is how northbound traffic from blades is forwarded without switching or MAC tables (see the sketch after this list).
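
To illustrate end-host mode and pinning, here is a toy Python model – not UCS code; the uplink names and the round-robin pin assignment are my own assumptions. It captures the key behavior: blade vNIC MACs are pinned to an uplink rather than learned, blade-to-blade traffic is switched locally, and anything else simply follows the pin northbound:

```python
import itertools

class EndHostInterconnect:
    """Toy model of an Interconnect in end-host mode (illustrative only)."""

    def __init__(self, uplinks):
        self._next_pin = itertools.cycle(uplinks)  # round-robin pinning (assumed)
        self.pins = {}                             # blade vNIC MAC -> pinned uplink

    def register_vnic(self, mac):
        # Pinning replaces MAC learning: the MAC is assigned an uplink up front.
        self.pins[mac] = next(self._next_pin)

    def forward(self, src_mac, dst_mac):
        if dst_mac in self.pins:
            return "switched locally (blade to blade)"
        return "sent northbound on pinned uplink " + self.pins[src_mac]

fi = EndHostInterconnect(uplinks=["uplink-1", "uplink-2"])
fi.register_vnic("00:25:b5:00:00:01")
fi.register_vnic("00:25:b5:00:00:02")
print(fi.forward("00:25:b5:00:00:01", "00:25:b5:00:00:02"))  # local switching
print(fi.forward("00:25:b5:00:00:01", "aa:bb:cc:dd:ee:ff"))  # follows the pin
```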

Yes, other switches for the LAN and SAN are needed since the Interconnects do not route or switch, and FCoE adapters in the storage device cannot be directly connected to the 6100s.

There is an option to change the mode of the entire Interconnect to “switching mode”, but it is highly recommended not to do this.

Blades cannot communicate with each other inside the same chassis via the FEX. Local traffic must travel to the Interconnects first.

There is no multipathing provided from the blade hardware (mezzanine cards). Multipathing is only possible from the blade operating system.

On the Interconnects, only ports 1 through 8 are licensed by default. Ports 9 through 20 (6120) or 9 through 40 (6140) are licensed per port as needed.

Oplin mezzanine cards provide Ethernet only. Menlo and Palo provide both LAN and SAN connectivity.

FCoE Boundary

The biggest misconception I’ve had about UCS (and it has been common among a lot of people I have talked with) is where FCoE is used in the solution. In the current version of UCS, FCoE exists only between the mezzanine cards on the blades and the Interconnects. FCoE is not possible between the Interconnects and the northbound switches. As mentioned earlier, an FCoE adapter in a storage device cannot be directly connected to the Interconnects. This is possibly on the roadmap, but today’s UCS cannot do it.

Blade mezzanine card to FEX connectivity

Each mezzanine card has 2 ports, both capable of 10 GE. Half-height blades can hold one mezzanine card and full-height blades can hold two – or 4 ports, each capable of 10 GE.

Without 2 FEX in a chassis, only one mezzanine port will be active per card. This means failover is not possible for half-height blades, and is only possible in full-height blades if two mezzanine cards exist – 1 active port on each card.

Without 2 Interconnects, having 2 FEX is useless. You cannot connect both FEX to the same 6100.
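
A tiny hypothetical helper – the function and its parameters are mine, purely illustrative – that encodes the failover rules above as stated:

```python
# One active port per mezzanine card unless the chassis has 2 FEX, and
# 2 FEX requires 2 Interconnects since both cannot cable to one 6100.
def active_mezz_ports(mezz_cards: int, fex_count: int, interconnects: int) -> int:
    if fex_count == 2 and interconnects < 2:
        raise ValueError("Both FEX cannot connect to the same 6100")
    ports_per_card = 2 if fex_count == 2 else 1
    return mezz_cards * ports_per_card

print(active_mezz_ports(mezz_cards=1, fex_count=1, interconnects=1))  # half height, 1 FEX: 1 port, no failover
print(active_mezz_ports(mezz_cards=2, fex_count=1, interconnects=1))  # full height, 1 FEX: 2 ports, card-level failover
print(active_mezz_ports(mezz_cards=1, fex_count=2, interconnects=2))  # half height, 2 FEX: 2 ports
```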

Pinning also occurs between the mezzanine card and the internal ports on the FEX (inside the chassis). This assignment of ports is automatic and depends on the number of cables between the FEX and the Interconnects. Only 1, 2, or 4 cables (ports) can be used, pinned as follows (a code sketch follows the failover note below):

  • 4 cables
    • Blade 1 and 5 to port 1
    • Blade 2 and 6 to port 2
    • Blade 3 and 7 to port 3
    • Blade 4 and 8 to port 4
  • 2 cables
    • Odd-numbered blades to port 1
    • Even-numbered blades to port 2
  • 1 cable
    • All blades to the single port

If you have 4 cables uplinked and one fails, UCS will have to re-pin the blades to a 2-cable configuration. Blades using ports 3 and 4 will temporarily lose connectivity.
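
Here is a minimal Python sketch of that pinning table and the re-pin after a cable failure; the function is hypothetical and simply encodes the list above:

```python
# Sketch of the automatic FEX pinning table for an 8-slot chassis.
def pin_blade(slot: int, cables: int) -> int:
    """FEX uplink port (1-based) that a blade in a given slot pins to."""
    if cables == 4:
        return ((slot - 1) % 4) + 1   # blades 1&5 -> 1, 2&6 -> 2, 3&7 -> 3, 4&8 -> 4
    if cables == 2:
        return ((slot - 1) % 2) + 1   # odd slots -> port 1, even slots -> port 2
    if cables == 1:
        return 1                      # every blade shares the single uplink
    raise ValueError("Only 1, 2, or 4 cables can be used")

# One of 4 cables fails: UCS falls back to the 2-cable pattern.
before = {slot: pin_blade(slot, 4) for slot in range(1, 9)}
after = {slot: pin_blade(slot, 2) for slot in range(1, 9)}
print("re-pinned:", [s for s in before if before[s] != after[s]])  # slots 3, 4, 7, 8
```

Running it shows that the blades in slots 3, 4, 7, and 8 – the ones pinned to ports 3 and 4 – are the ones re-pinned.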

The Part of Tens – UCS was built for virtualization

Blades cannot communicate inside the chassis – If the 10 GE between the FEX and the Interconnects is not enough bandwidth for an application, running ESX on the blades allows affinity rules to keep VMs that need local connectivity together on the same host.

Operating system multipathing only – VMware vSphere to the rescue again.

Hardware high availability limitations – vSphere VMotion, DRS, and HA serve this purpose.

Bandwidth reduction from the Interconnects to the northbound switches – Virtualized servers managed by the same Interconnect domain, regardless of chassis location, should rarely have northbound needs. Until physical clients have 10 GE adapters, inbound network traffic will not be an issue. Some storage devices currently have FCoE adapters, however, and Cisco is aware of the need but maintains that current virtual server loads do not need that size pipe.
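
The “size of pipe” argument is easier to see with rough arithmetic. A simplified sketch, assuming 8 half-height blades each driving a full 10 GE on one fabric (real converged LAN and SAN loads will vary):

```python
# Rough oversubscription math behind the "pushing the pipe" concern:
# 8 blades each able to drive 10 GE toward one FEX, across N uplinks.
BLADES, PORT_GBPS = 8, 10

for uplinks in (1, 2, 4):
    ratio = (BLADES * PORT_GBPS) / (uplinks * PORT_GBPS)
    print(uplinks, "uplink(s):", str(int(ratio)) + ":1 oversubscription")
# 1 uplink -> 8:1, 2 uplinks -> 4:1, 4 uplinks -> 2:1
```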

Comments


  • http://blog.scottlowe.org/ Scott Lowe

    One small correction, Rich (unless I am reading incorrectly): Keep in mind that when you have two FEXs, you must also have two fabric interconnects. Therefore, a UCS with a pair of 6120XP fabric interconnects can support up to 20 chassis, because each FEX will connect to only 1 of the fabric interconnects. With the 6140 (not released yet), you can scale to 40 chassis. Of course, in this configuration, each chassis has only a single uplink, which limits the available bandwidth from the chassis to the fabric interconnects and limits the number of virtual adapters that can be created on the Palo adapter.

  • bknudtson

    Rich-
    Did they discuss why switching mode is available on the interconnects but not recommended? Will it be supported in the future (and if so, when)?

    Great articles (as always). Making me excited for my UCS boot camp.

    brian

  • http://vmetc.com rbrambley

    Scott,

    Yes. 20/40 chassis are supported if you run only 1 FCoE uplink between the FEX and interconnects on each switch. My diagram shows 2 uplinked cables per FEX so I tried to point out that as shown only 10 chassis would be supported. I am assuming fail over works best with the same number of cables from each FEX to 6100.

    Personally, I'm not too sure about pinning up to 8 blades per FEX port on a single FCoE cable. LAN is fine, but when you add the converged SAN factor I think you may start to approach pushing the pipe. That's why I drew it with 2 cables.

    4 FCoE would be ideal for large VI environments, but now you are down to 5/10 chassis per pair of interconnects. So, logically, 2 uplinks seems like the sweet spot to me. Just my opinion.

  • http://vmetc.com rbrambley

    Brian,

    UCS is about the Service Profile and Stateless compute. I have only scratched the surface on the complexity hidden and automated by the UCSM to make stateless compute possible. Since the UCSM is on the interconnects, if you turn them into switches you lose all of that plus you end up in a default config with immediate problems such as looping, etc.

    Long story short is that Cisco will tell you the interconnects and FEX are not switches, and you have to rethink their value proposition.

    I can't wait to see your thoughts on UCS!

  • http://blog.scottlowe.org/ Scott Lowe

    Ah, yes – I didn't read completely. You do point out the limitation of not being able to hook both FEXs up to the same fabric interconnect. So, your point is absolutely correct – the more uplinks from each chassis, the smaller the overall system can grow.

    Good coverage, Rich–keep it coming!



  • Sandakaranatunga

    Has anybody connected an external end host, such as a NetApp, directly to a UCS 6120?
    Is this possible?

  • http://www.facebook.com/profile.php?id=1848532876 Ian Erikson

    I believe you may have stated “end-host mode” wrong. The 6120 does have a MAC table, an internal-only one. It does switch internal traffic.

  • La6470

    “Without 2 FEX in a chassis only one mezzanine port will be active per card. This means fail over is not possible for half height blades and only possible in full height blades if two mezzanine cards exist – 1 active port on each card.”

    – Can you please clarify failover of what will not work? FI failover will still work. And also, I think the mez ports are active-active and not active-standby.

    • http://vmetc.com rbrambley

      When I wrote this post, UCS was in its first release and I was attending a technical intro class. I’m sure a lot has changed since then, and the active-active mez ports very well could be just that. Then again, I very well may have misunderstood.

      Either way, it’s been over a year since I’ve been actively involved with the UCS, so thanks for pointing this out.

