Design Challenges Of Virtualized vCenter With A vNetwork Distributed Switch

The vSphere Enterprise Plus vNetwork Distributed Switch (vDS) has been heralded as, and I might add lives up to its reputation of, an administrator's time saver and a single point of virtual networking configuration and visibility across many ESX/ESXi 4 hosts. However, the vDS presents some administrative challenges distinct from the traditional vNetwork Standard Switch (vSS) that admins are used to. Specifically, since the vCenter 4 Server actually maintains the vDS configuration, some extra design thinking needs to go into a vSphere 4 environment where a vDS will be used. If vCenter 4 Server itself will be a virtual machine in an environment with a vDS, the design gets even more involved.

There are a few possible problems to consider. In this post I'll first cover (with the help of several others) general VM and vCenter vDS networking issues, and along the way I'll explore how to design around a vDS while keeping vCenter as a VM.

The vDS: A Rock and a Hard Place

I'm not the first to recognize there are problems with using a vDS if vCenter goes down. Here are a few posts that have already addressed the pitfalls to avoid.

VMware vSphere – VMware vNetwork Distributed Switch bug or limitation

“… if you lose virtual Center you will have no way in moving virtual machines between different port groups on the vNetwork Distribute Switch. In addition, you will not be able to get a virtual machine from the traditional virtual switch to a port group on the vNetwork Distributed Switch. Extra to that, you can’t move a VM to another VMware vNetwork Distribute Switch. So that means if you are using VMware vSphere vNetwork Distributed Switches & you lose virtual center you are almost disabled on the networking part. If you lose connectivity on the classic virtual switch & your adapter on the distributed switch are OK you still can’t move your virtual machines to that distributed switch till Virtual Center is back.”

See the entire blog post for screen shot examples of vDS and vSS portgroups available to VMs with and without vCenter.

Virtualizing vCenter With vDS Catch-22 is another post that explores what happened after taking down the vCenter VM for a routine migration.

“vCenter was shut down and unavailable, therefore, I had connected my vSphere client directly to the ESX4 host in which I transferred the VM to. When trying to configure the vCenter VM to use the vNetwork Distributed Switch (vDS) port group I had set up for all VM traffic, it was unavailable in the dropdown list of networks. The vCenter server was powered down and thus the vDS Control Plane was unavailable, eliminating my view of vDS networks.

This is a dilemma. Without a network connection, the vCenter server will not be able to communicate with the back end SQL database on a different box running SQL. This will cause the vCenter server services to not start and thus I’ll never have visibility to the vDS.”

Virtualizing vCenter with vDS: Another Catch-22 is another blogger's exploration of the same problem, inspired by the previous post.

VMware has a KB article, Configuring vSwitch or vNetwork Distributed Switch uplinks from the command line in ESX 4, which explains how to manually migrate dvPortgroups back to a vSS from the Service Console when in trouble. Interestingly enough, I couldn't find a similar article for ESXi. I'll assume the same process is available via "unsupported mode" in ESXi, but the potential for having to perform these actions under fire must be considered.
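For reference, the Service Console recovery the KB describes boils down to freeing a pNIC from the vDS and re-linking it to a standard switch. A rough sketch only, not a substitute for the KB: the dvSwitch name, dvPort ID, and vmnic below are placeholders, and the real values must be read from the `esxcfg-vswitch -l` output on your own host.

```shell
# List current vSwitch and dvSwitch configuration; note the
# dvSwitch name, the dvUplink port IDs, and which vmnics are in use
esxcfg-vswitch -l

# Free a pNIC from the vDS by unlinking it from its dvUplink port
# ("dvSwitch0", port "257", and "vmnic1" are example values)
esxcfg-vswitch -Q vmnic1 -V 257 dvSwitch0

# Link the freed pNIC to the standard switch as an uplink
esxcfg-vswitch -L vmnic1 vSwitch0
```

With the uplink back on the vSS, a stranded VM can be reattached to a vSS portgroup directly from the vSphere Client pointed at the host.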

Use a Hybrid Mix of vSS and vDS

Does this mean a virtual infrastructure design should keep a vSS around? I would say "yes!" Perhaps it's now more important than ever to dedicate 2 of the ESX host's pNICs to the ESX Service Console / ESXi Management VMkernel, isolated on a vSS. The 2 pNICs are not only for redundancy anymore, but also to support one or more standby VM portgroups in case they're needed as a recovery network for VMs normally using the vDS. Of course, that means creating the appropriate trunking and VLANs ahead of time. Have everything ready for a quick and easy change of critical VMs when needed.
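Setting up such a standby portgroup ahead of time is only a couple of Service Console commands. A minimal sketch, assuming an example portgroup name and VLAN ID; match these to the VLANs actually trunked to the vSS pNICs in your environment.

```shell
# Create a standby recovery portgroup on the management vSS
# ("Recovery-VM-Network" is an example name)
esxcfg-vswitch -A "Recovery-VM-Network" vSwitch0

# Tag the portgroup with the production VLAN ID (100 is an example)
# so recovered VMs land on the same L2 segment they normally use
esxcfg-vswitch -v 100 -p "Recovery-VM-Network" vSwitch0
```

Repeat on each host (or script it) so the recovery portgroup name is identical everywhere, which keeps a VM's network label valid no matter which host it lands on.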

A hybrid design using both a vSS and a vDS is therefore a smart "safety net" to have, especially when an admin has to point the vSphere Client directly at an ESX/ESXi host. The "safety net" vSS portgroups will be available from each host, and the VMs can be easily switched over via the vSphere Client GUI.

BUT, Does VMware Support vCenter As A VM Using A vDS?

Even though VMware now fully supports running vCenter virtualized, the question is not whether to run vCenter as a VM, but whether VMware even supports a vCenter VM connected to a vDS!

VMware Communities: Virtual vCenter and vNetwork ….

"I called support about running vCenter within a distributed switch and they said point blank, "it is not supported". They said because vCenter governs the distributed switch environment, you can’t have vCenter within the distributed switch."

OK, so VMware support has not always told customers the correct support policy based on actual technical capabilities, but it's something serious to consider. In fact, based on what has already been explained, it makes sense that they wouldn't support it. Besides, placing a vCenter VM on a vDS is more like "putting all your eggs in one basket" than ever before.

I've brought up before the logistical argument about the brains of the virtual infrastructure running in the environment it is managing. Don't misunderstand: I am an advocate for virtualizing vCenter Server and do it all the time, but I make sure to adhere to the best practices. On that note, the VMware KB article Running VirtualCenter in a Virtual Machine (updated as of Aug 09) serves as a pointer to the VMware tech note, which in turn points to the old VI3 tech note on this topic. VMware definitely needs to update the tech notes to include best practices for vCenter 4 as a VM in an environment containing a vDS!

After all of that, why do I want to use a vDS again?

Finally, here is a quick reminder of why the extra design considerations are worth the trouble.

Comparing the vDS to a vSS:

vNetwork Distributed Switch on ESX 4.x – Concepts Overview

Comparing vNetwork Standard Switch with vNetwork Distributed Switch
The following features are available on both types of virtual switches:
  • Can forward L2 frames
  • Can segment traffic into VLANs
  • Can use and understand 802.1q VLAN encapsulation
  • Can have more than one uplink (NIC teaming)
  • Can shape outbound (TX) traffic
The following features are available only on the Distributed Switch:
  • Can shape inbound (RX) traffic
  • Has a central unified management interface through vCenter
  • Supports Private VLANs (PVLANs)
  • Provides potential customisation of data and control planes


  • quick

    Great article. Would it help mitigate the problem by running two vCenters in Linked Mode? Or using FT or vCenter Heartbeat?


  • rbrambley



    FT and vCenter Heartbeat definitely add HA to the vCenter server, but
    if you virtualize VC then you're not escaping the "eggs in one basket"
    scenario, especially if you put the VC on the vDS. Now you have VC
    responsible for maintaining the vDS and the FT config, and it's
    depending on itself to use what it's managing?! I'm not even sure VC
    is supported with FT, and I wouldn't spend the $ for Heartbeat on a VM.

    Linked Mode doesn't create redundancy for VC at all, however. It just
    lets you view and manage multiple ESX datacenters / clusters in a single
    vSphere Client. If you lose one of the VCs in Linked Mode, the others
    don't take over.

  • quick

    Ok, thanks. That's what I figured but I wanted to get another opinion… :)


  • VirtualizationTeam

    Hi Rich,

    This was a great post. I am sure it took you a good amount of time to aggregate all these information sources into one article. Great work & thanks for quoting my post.

    Eiad Al-Aqqad
    Founder of

  • rbrambley


    It was a weekend of research and post reading for sure!

    Thanks for documenting your findings to start with. They definitely
    helped me understand and form an opinion about a vDS design.

  • Kevin

    Hi Rich,

    Just thinking of further ways to mitigate the above issues:
    1. Does using a Cisco Nexus 1000v help if the VSM is not on the vDS?
    2. How about using VMware Heartbeat between a virtualized vCenter and a physical vCenter?


  • rbrambley


    Unless I am mistaken about how the Nexus 1000v works, the issue will be with the VMs that are stranded on the unavailable Cisco vDS. The VSM (as part of the 1000v switch itself) can't do anything to help rescue the VMs if the Nexus 1000v is down. The VMs will still need a portgroup on the vSS until the vDS is back up.

    A VMware Heartbeat config, in any flavor, will provide redundancy to keep vCenter up and running and therefore keep the vDS available. I just haven't seen many people buy vCenter Heartbeat yet because of the cost.

  • Kevin


    Thanks! This will definitely help with the testing and design of our install.


  • Petya Noname

    Hi Rich,

    Yeah you are right Cisco Nexus 1000v can't help if the VSM

  • Habibalby


    What about vCenter in MSCS, with each node sitting in the local datastore of its ESX server, and the SQL DB inside a VMDK?

    Don't you think this would solve the issue?

    • rbrambley


      Although an MSCS config seems to cover making the vCenter highly available,
      the issue is really what virtual networking switch and portgroup the vCenter
      connects to (in your case, what both vCenter nodes connect to). I'm not sure
      this accounts for the problem I was trying to point out.

  • Habibalby

    hi rbrambley,

    MSCS will keep vCenter highly available. The concern here is all about vNetwork and the DvS: what if vCenter is part of the DvS and vCenter fails, or the host the vCenter runs on fails? The answer is no access to the DvS, because vCenter is not available.

    I believe with the vCenter MSCS nodes running separately on each host, the DvS will remain available even while one vCenter node is shut down or unavailable.

