VMworld 2009 Virtual Infrastructure Design – Lab Manager vPODS Enable Conference Cloud

By now you’ve seen the pictures, video (VMworldTV), and posts about the hardware in the datacenters that hosted the VMworld 2009 labs. You should already know the staggering numbers: more than 37,000 virtual machines running on more than 770 ESX 4 servers. But enough about the hardware.

If you are like me, you probably would have loved the opportunity to use the vSphere Client to connect to a vCenter Server managing that entire virtual infrastructure (VI). Although I did not get to do that, I did get the next best thing – a conversation with the manager of the team that does. My VMworld ended with a talk with Randy Keener, Group Manager of VMware’s GETO (Global Engineering Technical Operations) team. Keener walked me through some of the VMworld 2009 virtual infrastructure design details that VI administrators will find interesting.

Nested ESX in the Lab Manager Cloud

What Keener revealed somewhat surprised me. Although vCenter Server 4 was a piece of the design, the true magic that supported the self-paced labs, the instructor-led labs, and the Solutions Exchange was (arguably) an example of a private cloud created by VMware Lab Manager. The majority of the vSphere 4 servers that VMworld attendees repeatedly deployed and configured, only to be reset again for the next lab session, were in fact virtual machines (VMs) themselves, grouped as templates in Lab Manager 4.

How to run an ESX server nested as a virtual guest on a physical ESX host is public knowledge; many blog posts and white papers explain the configuration steps and requirements for achieving a complete “VI in a Box” environment. Quick duplication of this nested environment, aided by the latest generation of virtualization-ready hardware (which we have already seen so much of), is what Lab Manager enabled for the throngs of lab takers at VMworld.
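
If you are curious what those write-ups boil down to, here is a minimal sketch, assuming the commonly cited .vmx tweaks from the public nested-ESX guides of that era rather than anything GETO shared with me; the keys, values, and path are illustrative assumptions.

    # Minimal sketch (Python): append the .vmx options commonly cited in public
    # "ESX inside ESX" guides of the ESX 4 era. Keys, values, and the path are
    # illustrative assumptions, not GETO's actual build.
    NESTED_ESX_OPTIONS = {
        "guestOS": "rhel5-64",                        # a 64-bit Linux guest type was the usual stand-in
        "monitor_control.restrict_backdoor": "TRUE",  # lets the virtual ESX host power on its own VMs
    }

    def add_nested_esx_options(vmx_path):
        """Append nested-ESX options to an existing .vmx file (VM powered off)."""
        with open(vmx_path, "a") as vmx:
            for key, value in NESTED_ESX_OPTIONS.items():
                vmx.write('%s = "%s"\n' % (key, value))

    add_nested_esx_options("/vmfs/volumes/datastore1/esx-vm-01/esx-vm-01.vmx")

The other piece those guides agree on is the outer network: promiscuous mode has to be allowed on the vSwitch or port group carrying the virtual ESX hosts so that the VMs running inside them can actually reach the network.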

vPODS

ESX VMs were grouped in what Keener described as vPODS: collections of complete vSphere 4 datacenters that could be quickly cloned as needed for new lab sessions. By my count from the labs listed on the VMworld.com agenda page, close to 20 fully configured lab environments/vPODS could have been created and kept available in Lab Manager’s repository (library).

Keener did explain that not all of the labs used nested ESX hosts; some vSphere Enterprise Plus features cannot be enabled on ESX running inside ESX. The labs that featured vSphere Fault Tolerance and the instructor-led labs on VMware SRM used physical ESX hosts, for example. Keener clarified that the only reason GETO did not build the instructor-led SRM labs as vPODS was older (pre-Nehalem Intel) hardware and the concern that those labs would have been very slow. VMworld did, however, offer a self-paced lab for installing and configuring SRM that ran in a vPOD.

Virtual SANs

When I asked about the shared storage needed for the datacenters in the various vPODS, I was told that the GETO team used VMs running both OpenFiler and LeftHand Networks’ VSA (Virtual SAN Appliance, now owned by HP). Keener explained that these virtual SANs were not fully encapsulated, however. Even though the storage appliances’ operating systems and management interfaces ran on virtual disks, the shared LUNs actually served to the nested ESX hosts were RDMs, for performance reasons. Serving nested storage to nested ESX hosts creates excessive I/O, and that overhead was avoided by giving the storage VMs raw LUNs.
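
As a minimal sketch of the RDM side of that design (with a placeholder device name and paths of my own, not GETO’s actual procedure), a physical-mode RDM pointer for a raw LUN can be created on the ESX host and then attached to the storage VM as an existing disk:

    # Minimal sketch (Python, run on the ESX host): create a physical-mode RDM
    # pointer file for a raw LUN destined for a storage VM such as OpenFiler.
    # The device name and datastore paths are placeholders.
    import subprocess

    raw_lun = "/vmfs/devices/disks/naa.60060160a1b2c3d4e5f60718293a4b5c"       # placeholder LUN
    rdm_pointer = "/vmfs/volumes/datastore1/openfiler01/openfiler01_rdm.vmdk"  # pointer file on VMFS

    # vmkfstools -z maps the raw device in physical compatibility mode, so guest
    # I/O goes straight to the LUN instead of through a VMDK on another VMDK.
    subprocess.check_call(["vmkfstools", "-z", raw_lun, rdm_pointer])

The pointer VMDK is then added to the OpenFiler or VSA VM as an existing disk, and the appliance presents that capacity to the nested ESX hosts as iSCSI or NFS targets.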

The VSAs were used specifically for the instructor-led SRM lab and allowed a 10-blade chassis to serve as an iSCSI SAN. The OpenFiler VMs were used for the labs that ran fully nested ESX.

Other interesting aspects of the VMworld 2009 VI design

There were three Lab Manager-centric datacenter designs:

  1. The “Enterprise” for the instructor-led labs
  2. The “Medium” for the self-paced labs
  3. The “SMB” for the Solutions Exchange

If you came by the vExpert booth in the Communities Lounge section of the Solutions Exchange, maybe you saw our demo environment. That four-host vSphere cluster was in fact built from a vPOD of VMs assigned to us in the “SMB” datacenter.

Inside the “Enterprise” design, GETO actually created nine separate datacenters, one assigned to each lab room (not each lab); some rooms hosted two different lab topics. This was done for added flexibility and as insurance that a change in one lab would not impact other labs.

Lab Manager limits ESX clusters to eight hosts when using VMFS-formatted storage, a requirement that follows VMFS performance best practice. At VMworld, Lab Manager’s storage was served via OpenFiler NFS mounts, which removed the hosts-per-cluster limit and allowed GETO to manage large ESX clusters for each of the nine classroom datacenters.
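
For reference, mounting an OpenFiler NFS export as a datastore from the ESX 4 service console looks roughly like the sketch below; the server name, export path, and label are placeholders of mine, not the actual VMworld configuration.

    # Minimal sketch (Python, run in the ESX service console): add an NFS export
    # from an OpenFiler VM as a datastore. All names are placeholders.
    import subprocess

    subprocess.check_call([
        "esxcfg-nas", "-a",             # add a NAS (NFS) datastore
        "-o", "openfiler01.lab.local",  # NFS server (placeholder)
        "-s", "/mnt/vol1/labmanager",   # exported path (placeholder)
        "labmanager-nfs",               # datastore label (placeholder)
    ])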

The Lab Manager ESX clusters did not have DRS or HA enabled. Keener said this was unnecessary because the physical ESX hardware was sized to support the maximum number of students per lab and per room. That means the only DRS- and HA-enabled clusters at VMworld were inside the nested virtual environments!
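
For what it is worth, here is what turning those features on inside a nested cluster amounts to, sketched with pyVmomi, VMware’s later Python SDK (which did not exist in 2009); the host name, credentials, and inventory path are illustrative assumptions.

    # Illustration only (Python + pyVmomi, a later SDK): enable DRS and HA on a
    # cluster, here imagined as one of the nested lab clusters managed by the
    # vCenter running inside a vPOD. Names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.vpod.local", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        # Inventory path format is "<datacenter>/host/<cluster>" (placeholder names).
        cluster = content.searchIndex.FindByInventoryPath("Lab-Datacenter/host/Nested-Cluster")

        spec = vim.cluster.ConfigSpecEx(
            drsConfig=vim.cluster.DrsConfigInfo(enabled=True),
            dasConfig=vim.cluster.DasConfigInfo(enabled=True),
        )
        cluster.ReconfigureComputeResource_Task(spec, modify=True)
    finally:
        Disconnect(si)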

Front-end scripting and customization automated the vPOD cloning process to provide a self-service interface for the self-paced labs. Keener explained that GETO calls this custom software the “Lab Cloud.”
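
I have not seen the “Lab Cloud” code, but the kind of automation it implies can be sketched against the Lab Manager SOAP API, which exposes operations for checking configurations out of the library and deploying them. The rough sketch below uses the Python suds library; the endpoint URL, credentials, configuration name, and fence-mode value are assumptions of mine, and the operation signatures and return shapes should be verified against the Lab Manager SOAP API documentation.

    # Rough sketch (Python + suds): check a vPOD configuration out of the Lab
    # Manager library and deploy it for a new lab seat. Operation names follow
    # the Lab Manager SOAP API, but treat signatures, return shapes, and the
    # fence-mode value as assumptions to verify against the API docs.
    from suds.client import Client

    LM_WSDL = "https://labmanager.example.com/LabManager/SOAP/LabManager.asmx?WSDL"  # placeholder URL
    FENCE_ALLOW_IN_AND_OUT = 4  # assumed fence-mode constant; confirm in the API reference

    client = Client(LM_WSDL)

    # The API authenticates via a SOAP header carrying the username and password.
    auth = client.factory.create("AuthenticationHeader")
    auth.username = "administrator"  # placeholder credentials
    auth.password = "secret"
    client.set_options(soapheaders=auth)

    # Look up the library vPOD, check it out into a per-student workspace,
    # and deploy the checked-out copy (fenced so cloned vPODs do not collide).
    library_cfg = client.service.GetConfigurationByName("vSphere4-Lab-vPOD")[0]
    checkout_id = client.service.ConfigurationCheckout(library_cfg.id, "student-seat-042")
    client.service.ConfigurationDeploy(checkout_id, False, FENCE_ALLOW_IN_AND_OUT)

Wrapped behind a simple web front end, with a cleanup job to undeploy and delete expired checkouts, calls like these are the sort of thing that could turn Lab Manager’s library into a self-service lab portal.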

Summary

Keener’s GETO team created a highly automated private cloud that leveraged Lab Manager’s ability to capture tiered system configurations, state and all, for mass deployment and sharing. I feel that VMware has done a much better job this year of messaging how vSphere is a private cloud, and the VMworld 2009 virtual infrastructure design proved to be the best “dog food” example possible. As over 12,000 VMworld attendees “put the cloud in their head,” few knew their seat was already there.

I was not the only person to speak to the VMware GETO team. Senior Technical Architect Dan Anderson explained a lot about the VMworld datacenter hardware in a video interview that was published on YouTube on the conference’s opening day.

VMworldTV also conducted a video interview with Curtis from the self-paced labs staff that provides some great footage and information.
