Cisco UCS for Dummies – UCS Overview

Day 1 of the Cisco UCS Bootcamp partner training was mainly an introduction to the hardware, but it also established the concepts of UCS Server Profiles and statelessness. The Converged Network Fabric and Cisco’s CNA (Converged Network Adapter) models were covered, and the day ended with a lab exploring UCSM (Unified Computing System Manager).

I promised to report in the style of the “For Dummies” series. To do that, I am going to borrow the “Part of Tens” feature always found at the end of those books. The “Tens” are usually a list of concepts suggested for further research; I’m going to use mine more as a list of what “sticks” with me from the bootcamp topics, and, as in the books, I hope to motivate VM /ETC readers to research further. If you count my “Tens” it might not even equal 10 items – no promises. I’ll add some opinion and scenarios when motivated to do so.

Many others have already provided technical details on the Cisco chassis, blades, modules, adapters, and other components, and I’m not about to add anything new to what is already known. In fact, I found myself supplementing what I was being told in class by cross-checking various blogs and other documentation already on the web. Nevertheless, the first “Tens” covers an overview of the UCS parts and concepts.

Cisco UCS Overview Part of Tens

  • Converged Network Fabric – simply put, this means combining the Ethernet and Fibre Channel (FC) connections from the blade chassis over the same cables. To do this, Cisco’s Nexus switching uses 10 Gigabit Ethernet (10GE) to encapsulate FC frames within Ethernet frames. This is called Fibre Channel over Ethernet (FCoE). FCoE can effectively cut the number of cables needed in half (see the FCoE frame sketch after this list).
  • UCS Blades – have 2 CPU sockets, 2 internal SAS drive bays, slots for mezzanine adapters, and DDR3 DIMM slots. Half-width blades have 12 DIMM slots and full-width blades have 48, for up to 384 GB of RAM per blade with 8 GB DIMMs.
  • Converged Network Adapters (CNAs) – Cisco’s mezzanine adapter cards, inserted in each UCS blade to connect it to the Converged Network Fabric. Cisco currently has 3 adapters (Oplin, Menlo, and Palo), which appear to the blade operating system as 2 10GE NICs and 2 4 Gb FC HBAs. Half-width blades hold 1 mezzanine card and full-width blades hold 2.
  • UCS Blade Chassis – 6U tall and 32″ deep. Holds 8 half-width blades, 4 full-width blades, or a combination of both.
  • Fabric Extenders – the UCS chassis holds the Fabric Extenders (FEX), otherwise known as IO Modules (IOM), that connect the blades to the switch. There are 4 10 Gbps ports per FEX, and 1, 2, or 4 connections to the Fabric Interconnects are allowed.
  • Fabric Interconnects – the switches that the Fabric Extenders in each chassis uplink to, available in 20-port and 40-port models. The Interconnects have connectivity to both the LAN and the SAN networks.
  • The Full UCS System – consists of 2 Fabric Interconnects, 1 to 40 chassis with 2 FEX each, and 1 to 8 blades per chassis, for a maximum of 320 half-width blades.
  • UCS Blade Statelessness – based on creating Server Profiles, defined by Cisco as “the behavioral and identity elements that make the server unique to all others.” Cisco listed WWN, MAC, and UUID as identity, and boot order, firmware versions, and QoS as behavior. A server profile includes the operating system and can be applied to a bare-metal UCS blade for failover scenarios. UCS blades are required to boot from SAN to enable profiles, and the failover process is manual only: blades must be powered off before they can fail over. Cisco does provide APIs for third parties to develop automation (see the profile sketch after this list).
  • UCS Manager (UCSM) – runs on the Fabric Interconnects and is accessible via a web browser as a Java application. It is a single pane of glass for managing the entire UCS system, including the switches, the chassis, and the servers. Via roles and permissions, administrators can be assigned specific access; for example, the team that manages the Interconnect switches does not need access to the chassis and servers, and vice-versa (see the RBAC sketch after this list).
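
A quick aside on the FCoE bullet above: this minimal sketch packs a dummy FC frame inside an Ethernet frame the way FCoE does. The FCoE EtherType (0x8906) and the 0E:FC:00 FPMA MAC prefix are standard; the header layout follows the FC-BB-5 framing as I understand it, and the SOF/EOF code values are illustrative placeholders, not something from the class materials.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame in an Ethernet/FCoE frame (simplified FC-BB-5 layout)."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    # FCoE header: 4-bit version + 100 reserved bits, then a 1-byte SOF
    fcoe_header = bytes(13) + bytes([0x2E])  # trailing byte = SOF (placeholder code)
    trailer = bytes([0x41]) + bytes(3)       # EOF (placeholder code) + reserved
    return eth_header + fcoe_header + fc_frame + trailer

# A dummy 36-byte FC frame rides intact inside one Ethernet frame;
# 0E:FC:00 is the default FC-MAP prefix used to build fabric-assigned MACs.
frame = fcoe_encapsulate(b"\x0e\xfc\x00\x00\x00\x01",
                         b"\x0e\xfc\x00\x00\x00\x02",
                         bytes(36))
print(len(frame), "bytes on the wire")
```

The design consequence worth noting: because the FC frame rides intact inside Ethernet, the 10GE fabric has to behave losslessly (priority flow control) to honor FC’s no-drop assumption.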
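
The identity/behavior split behind Server Profiles is easier to see as a data structure. To be clear, this is not Cisco’s API or object model; it is a hypothetical sketch of what gets stamped onto a bare-metal blade when a profile is applied, with all names invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """The elements that make a server unique (hypothetical model)."""
    # Identity: who the server is on the LAN and SAN
    uuid: str
    macs: list[str]            # one MAC per vNIC
    wwns: list[str]            # one WWN per vHBA
    # Behavior: how the server acts
    boot_order: list[str] = field(default_factory=lambda: ["san", "lan"])
    firmware: str = "1.0(1)"   # placeholder version string
    qos_policy: str = "default"

@dataclass
class Blade:
    chassis: int
    slot: int
    profile: ServerProfile | None = None

def associate(profile: ServerProfile, blade: Blade) -> None:
    """Apply a profile to a powered-off blade; it then boots from SAN with
    the profile's identity, so the OS image never notices it moved."""
    blade.profile = profile

# Failover: move the same identity from a failed blade to a spare.
# (Manual today; Cisco provides APIs so third parties can automate it.)
profile = ServerProfile(uuid="0f0f0f0f-0000-0000-0000-000000000001",
                        macs=["00:25:b5:00:00:01"],
                        wwns=["20:00:00:25:b5:00:00:01"])
failed, spare = Blade(chassis=1, slot=3), Blade(chassis=2, slot=5)
associate(profile, spare)
```

The point of the model: the spare blade boots with the failed blade’s WWNs, MACs, and UUID, so SAN zoning and the OS never notice the move.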
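
Finally, the roles-and-permissions point from the UCSM bullet maps to a classic RBAC check. Again a hypothetical sketch, not UCSM’s actual role schema:

```python
# Hypothetical RBAC model in the spirit of UCSM roles (not Cisco's schema).
ROLES = {
    "network-admin": {"fabric-interconnect"},
    "server-admin": {"chassis", "blade", "service-profile"},
    "admin": {"fabric-interconnect", "chassis", "blade", "service-profile"},
}

def can_manage(role: str, resource: str) -> bool:
    """Return True if the role is allowed to manage the resource type."""
    return resource in ROLES.get(role, set())

assert can_manage("network-admin", "fabric-interconnect")
assert not can_manage("network-admin", "blade")  # switch team can't touch servers
assert not can_manage("server-admin", "fabric-interconnect")  # and vice-versa
```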

Final Day 1 UCS Thoughts:

Overall I was impressed with UCS and with the ability to accomplish from UCSM what requires multiple management interfaces and applications with other blade hardware. Stateless Server Profiles seem to offer simplified migration of a server’s personality between blades. I witnessed firsthand at VMworld the power of the UCS hardware and the significant reduction in cables.

One of my classmates made a great point today comparing UCS to the other blade center offerings: UCS’s advantages are greatest in a large grid deployment. One or two chassis in a rack does not seem to be a strong scenario from a competitive perspective. I’m not making a final judgment on this statement just yet, as there may be specific features not yet revealed that justify UCS for smaller environments.

FCoE is the obvious push. I’m sure there will be more about this later in the week.

UCS hardware was made for VMware virtualization, with each blade able to provide large amounts of RAM and CPU for any application.

Posts that diagram the full UCS solution:

http://rodos.haywood.org/2009/08/ucs-schematic-sketch.html

http://www.internetworkexpert.org/2009/07/05/cisco-ucs-vmware-vswitch-design-cisco-10ge-virtual-adapter/

Comments

  • http://professionalvmware.com professionalvmware

    Good stuff. Keep it coming.

  • Pingback: Cisco UCS for Dummies – UCS Overview | VM /ETC by Cisco Information Technology

  • http://twitter.com/triggan triggan

    Great info! I've been sitting in on a couple of classes with Brocade last week and this week. There's a lot of FUD coming from Brocade in regards to FCoE versus conventional FCP (which is generally expected). We'll have to compare notes when you get back in town.

  • http://www.benway.net benwaynet

    Looking forward to the next day's UCS post.
    Thanks for the information.

  • http://boche.net/blog/ Jason Boche

    Great information – thanks Rich! I'm looking forward to learning more about the Cisco UCS.

  • http://vmetc.com rbrambley

    No doubt that UCS is an FCoE solution top to bottom.

  • http://vmetc.com rbrambley

    Cody, Ben, Jason,

    Thanks! More to come …

  • http://twitter.com/mylvisaker Mark Ylvisaker

    Good information. Without having firsthand knowledge… I sorta agree with your classmate that larger deployments are more appropriate. Looking forward to learning more about UCS, and about what you think about its place in the datacenter. Thanks.

  • http://twitter.com/vseanclark vseanclark

    I'm loving the idea of stateless UCS blades. Why would anyone choose to insert SAS drives in these blades and leave all that stateless goodness on the table? I would be interested in your class's view on that, or from anyone logging in here.

    This will be a very popular series of posts, Rich. Thanks!

  • Pingback: Cisco UCS for Dummies – Managing Blades With UCS Manager | VM /ETC

  • http://vmetc.com rbrambley

    vSean,

    Tough to think of reasons other than an OS with a difficult boot-from-SAN configuration, or just not being comfortable with the design. UCS is designed for VMware, and vSphere's boot-from-SAN simplicity makes Stateless Server Profiles a no-brainer for me too. More later in the week on this.

  • http://vmetc.com rbrambley

    Mark,

    I had some “coffee machine” talks about smaller UCS deployments yesterday. I'm not up on pricing, but those who are tell me that the cost is affordable/comparable. If so, the TCO after the reduction in cables, power, and cooling makes a better case for UCS – especially for VI.

  • ThatFridgeGuy

    I am having a call with some people on our Cisco team to get the numbers and go over a config for a small UCS deployment we are considering. I have no idea what to expect yet but it will be interesting to see if this is in the ballpark of the non-Cisco configurations I have looked at.

    Looking at 8 blades with 96GB each. Not sure yet if it would be more cost effective to go with the full-size blades using 4GB DIMMs or half-size blades with 8GB DIMMs.

    Would you consider the UCS to be fully redundant w/o a single point of failure or would you say someone needs to spread the hosts across 2+ chassis for redundancy?

  • http://vmetc.com rbrambley

    FridgeGuy,

    From what I understand so far, you do not need multiple chassis to be fully redundant. Be sure to double your FEX and Interconnects. At the blade level, I'm not yet sure how redundancy is accomplished with the single mezzanine card in the half-width blade, but the 2 cards in the full-width blade provide physical failover.

    Let me know (via email if not publicly in comments) how the pricing compares.

  • ThatFridgeGuy

    Going to a TelePresence UCS Virtual Briefing tomorrow after the Wisconsin VMUG meeting. I will have a chance to provide more feedback after that.

  • Pingback: Cisco UCS for Dummies – Managing Blades With UCS Manager – Gestalt IT
