Cisco UCS for Dummies – UCS Overview
Day 1 of the Cisco UCS Bootcamp partner training was mainly an introduction to the hardware, but it also established the concepts of UCS Server Profiles and statelessness. The Converged Network Fabric and Cisco's Converged Network Adapter (CNA) models were covered, and the day ended with a lab exploring UCSM (Unified Computing System Manager).
I promised to report in the style of the “For Dummies” series. To do that, I am going to borrow “The Part of Tens,” the feature always found at the end of those books. The “Tens” are usually a list of concepts suggested for further research; I’m going to use mine more as a list of what “sticks” with me from the bootcamp topics. As in the books, I hope I motivate VM/ETC readers to research further. If you count my “Tens,” it might not even equal 10 items – no promises. I’ll add some opinion and scenarios when motivated to do so.
Many others have already provided technical details on the Cisco chassis, blades, modules, adapters, and the other components, and I’m not about to add anything new to what is already known. In fact, I found myself supplementing what I was being told in class by cross-checking various blogs and other documentation already on the web. Nevertheless, the first “Tens” covers an overview of the UCS parts and concepts.
Cisco UCS Overview Part of Tens
- Converged Network Fabric - simply put, this means carrying the Ethernet and Fibre Channel (FC) traffic from the blade chassis over the same cables. To do this, Cisco’s Nexus switching uses 10 Gigabit Ethernet (10GE) to encapsulate FC frames within Ethernet frames, a technique called Fibre Channel over Ethernet (FCoE). FCoE can effectively cut the number of cables needed in half.
- UCS Blades – have 2 CPU sockets, 2 internal SAS drive bays, slots for mezzanine adapters, and DDR3 DIMM slots. Half-width blades have 12 DIMM slots and full-width blades have 48, for up to 384 GB of RAM per blade (48 x 8 GB DIMMs).
- Converged Network Adapters (CNAs) – Cisco’s mezzanine adapter cards, inserted in each UCS blade to connect it to the Converged Network Fabric. Cisco currently has 3 adapters – Oplin, Menlo, and Palo – which appear to the blade operating system as 2 GE NICs and 2 4 Gb FC HBAs. Half-width blades hold 1 mezzanine card and full-width blades hold 2.
- UCS Blade chassis – 6U tall and 32″ deep. It can hold 8 half-width blades, 4 full-width blades, or a combination of both.
- Fabric Extenders – the UCS chassis holds the Fabric Extenders (FEX), otherwise known as IO Modules (IOM), that connect the blades to the switch. Each FEX has 4 10 Gbps ports, and 1, 2, or 4 connections are allowed to the Fabric Interconnects.
- Fabric Interconnects – the switches that the Fabric Extenders in each chassis uplink to. They come in 20-port and 40-port models and have connectivity to both the LAN and the SAN.
- The Full UCS System – consists of 2 Fabric Interconnects, 1–40 chassis with 2 FEX each, and 1–8 blades per chassis.
- UCS Blade Statelessness – based on creating Server Profiles, defined by Cisco as “the behavioral and identity elements that make the server unique to all others.” Cisco listed WWN, MAC address, and UUID as identity, and boot order, firmware versions, and QoS as behavior. A server profile includes the operating system and can be applied to a bare-metal UCS blade for failover scenarios. Blades are required to boot from SAN to enable profile mobility, the failover process is manual only, and a blade must be powered off before its profile can move to another blade. Cisco does provide APIs for third parties to develop automation.
- UCS Manager (UCSM) – runs on the Fabric Interconnects and is accessible via a web browser as a Java application. It is a single pane of glass for managing the entire UCS system, including the switches, the chassis, and the servers. Via roles and permissions, administrators can be granted specific access. For example, the team that manages the Interconnect switches does not need access to the chassis and servers, and vice versa.
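As a taste of the automation APIs mentioned above: UCSM exposes an XML API over HTTP, and every session starts with an aaaLogin request that returns a session cookie. The sketch below is a minimal, hedged illustration of that first step in Python; the UCSM hostname and credentials are hypothetical placeholders, and I'm only showing the request/response shape rather than a production client.

```python
# Minimal sketch of a UCS Manager XML API login. The hostname and
# credentials below are hypothetical placeholders, not a real system.
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "http://ucsm.example.com/nuova"  # hypothetical UCSM address


def build_login_request(username: str, password: str) -> str:
    """Build the aaaLogin request body used to open an API session."""
    elem = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(elem, encoding="unicode")


def parse_out_cookie(response_xml: str) -> str:
    """Extract the outCookie attribute from an aaaLogin response."""
    return ET.fromstring(response_xml).get("outCookie", "")


def login(username: str, password: str) -> str:
    """POST the login request to UCSM and return the session cookie.

    This performs a network call, so it only works against a live UCSM.
    """
    body = build_login_request(username, password).encode()
    with urllib.request.urlopen(UCSM_URL, data=body) as resp:
        return parse_out_cookie(resp.read().decode())


if __name__ == "__main__":
    # Show the request body we would send (no network call here).
    print(build_login_request("admin", "secret"))
```

The returned cookie is then passed along with subsequent requests (configuration queries, profile operations, and so on), which is the hook third-party tools use to automate what is otherwise a manual failover process.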
Final Day 1 UCS Thoughts:
Overall I was impressed with UCS and the ability to accomplish from UCSM what requires multiple management interfaces and applications with other blade hardware. Stateless Server Profiles seem to offer simplified migration of a server’s personality between blades. I witnessed firsthand the power of the UCS hardware and the significant reduction in cabling at VMworld.
One of my classmates made a great point today comparing UCS to the other blade offerings: UCS’ advantages are greatest in a large grid deployment. One or two chassis in a rack does not seem to be a strong scenario from a competitive perspective. I’m not passing final judgment on this just yet, as there may be features not yet revealed that justify UCS for smaller environments.
FCoE is the obvious push. I’m sure there will be more about this later in the week.
UCS hardware was made for VMware virtualization, with each blade able to provide large amounts of RAM and CPU for any application.
Posts that diagram the full UCS solution: