If you are looking at options to automate virtual machine (VM) IP address reconfiguration when failing over virtual machines to a disaster recovery (DR) site, this post explains an option so simple it is beautiful. To give full credit, the Vizioncore vReplicator 2.5 Best Practices document enlightened me to the strategy of using a local-only VMware vSwitch and an extra virtual NIC (vNIC) in each VM. It’s been a long time since I had a “ton of bricks” moment, but this concept crashed down on me with the realization of a configuration that works in any version of ESX, doesn’t require extra software or hardware, and, better yet, doesn’t have to be scripted. Just configure some extra virtual networking and forget about it!
Here is a general outline for automating the DR IP addressing with this method:
At the Primary Site
- For these instructions assume the production vSwitch at the primary site has a Portgroup named VM Network
- Build a new vSwitch and do not attach any physical NICs (local only isolated switch). Create a Portgroup named DR Network
- For each VM you need to fail over to a DR site, add an extra vNIC and attach it to the DR Network Portgroup
At the DR Site
- Create your DR site production vSwitch, attach physical NICs and add a Portgroup named DR Network.
- Create another vSwitch and do not attach any physical NICs (local only isolated switch). Create a Portgroup named VM Network
All you have to do for this to work is configure each VM’s second vNIC, inside the guest OS, with the IP settings it needs at the DR site. Because the portgroup names match at both sites, a failed-over VM’s production vNIC lands on the isolated VM Network while its DR vNIC automatically connects to the live DR Network. No scripts, no manual reconfiguration.
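For reference, the vSwitch and portgroup steps above can be sketched with the classic ESX Service Console commands. The vSwitch numbers and the vmnic name below are placeholders; adjust them for your own hosts:

```shell
# Primary site: isolated DR switch with no physical uplinks
esxcfg-vswitch -a vSwitch2                 # create the new vSwitch
esxcfg-vswitch -A "DR Network" vSwitch2    # add the DR portgroup (no -L, so no pNICs attached)

# DR site: production switch with an uplink, plus an isolated twin of the primary portgroup
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1          # attach a physical NIC (vmnic1 is an example)
esxcfg-vswitch -A "DR Network" vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2    # isolated stand-in for the primary's portgroup
```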
How to increase the number of simultaneous VMotions of guests allowed between VMware ESX hosts has been covered many times already. In fact, check out the following blog posts on this topic for extra information and insight not provided here.
- Increase Simultaneous VMotions as well as Increase Performance
- Guest blog entry: VMotion performance » broche.net – VMware …
- VMware Communities: ESX Tips: Increase number of VMotions per host
- Increase number of VMotions per host
One possible scenario for changing this setting would be temporarily increasing the number of allowed VMotions in order to evacuate ESX hosts within a short maintenance window. I prefer to leave the setting at the default, so for this scenario be sure to change it back after the maintenance is complete. If you read the links provided above, you will see that others have made the change permanent.
The rest of this post contains a cut and paste of the steps necessary to make the configuration change, with a brief explanation about setting the appropriate value. I am pasting from a VMware Partner PDF communication assembled by Michael White, a VMware engineer.
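The linked posts accomplish this by editing vCenter’s vpxd.cfg and raising the per-host VMotion cost budget. The fragment below is an illustration only: the ResourceManager element names are assumptions based on those posts and vary by vCenter version, so verify them against Michael White’s steps before editing, and restart the vCenter Server service afterward.

```xml
<!-- Fragment of vpxd.cfg on the vCenter Server (element names are
     assumptions for this era of vCenter; verify for your version). -->
<config>
  <vpxd>
    <ResourceManager>
      <!-- Each VMotion is commonly cited as costing 4 "units" against a
           default per-host budget of 8 (2 simultaneous VMotions); raising
           the budget to 16 would allow 4 simultaneous VMotions. -->
      <maxCostPerHost>16</maxCostPerHost>
    </ResourceManager>
  </vpxd>
</config>
```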
vCenter Client Shortcuts by Bouke Groenescheij is a post worth bookmarking for VMware admins who want to speed up their administration and management of vSphere. Check out the entire post for many, many more shortcuts than those listed here; I am highlighting some of the key navigational shortcuts for my own reference later (and making sure I have a backup link to Groenescheij’s post!).
The following screen shots show the Ctrl+Shift keystroke combinations to move between the most common VI Client management views:
Other Ctrl+Shift navigational shortcuts
The VMGuy has the scoop on all the VMware releases tonight! VMware has also made available an updated version of the GUI-based virtual machine (VM) backup and restore plugin for vCenter 4: VMware Data Recovery 1.1 (VDR). Download it here and check the Release Notes here.
VMware is saying VDR has improved performance and progress information during integrity checks, enhanced CIFS support, and that the previously experimental support status for File Level Restores of Windows VMs has been elevated to full support.
Although I could find no mention of it on the VDR web page, the data sheet, or in the new Release Notes, VDR was originally targeted for virtual infrastructure that hosted up to 100 VMs. I’m not sure if this VMware support limitation is still in effect or not.
Still, combined with VDR’s built-in de-duplication of VM backups, SMBs have a great VCB alternative that continues to improve.
It was speculated that it might not happen until next week (Monday 11/23), but VMware engineer Dave Lawrence’s post Release: ESX 4.0 Update 1 on his VMGuy blog confirms that ESX 4.0 U1 is now available for download. The most notable changes in this update include full support for VMware View 4 (expected to be available for download on 11/23), full support for Windows 7 and Windows Server 2008 R2 in both 32-bit and 64-bit flavors, and an update to the vSphere Client that fixes the problem when installing on Windows 7 desktops, eliminating the need for the workaround VI admins have had to configure until now.
Go here for the full Release Notes that explain other new changes such as enhanced MSCS support, enhanced paravirtualized SCSI support, improved Distributed Switch performance, increased vCPU core limits, Intel Xeon 3400 CPU support, and several resolved issues.
- Pre-Upgrade Checker Tool — A standalone pre-upgrade checker tool is now available as part of the vCenter Server installation media that proactively checks ESX hosts for any potential issues that you might encounter while upgrading vCenter agents on these hosts as part of the vCenter Server upgrade process. You can run this tool independently prior to upgrading an existing vCenter Server instance. The tool can help identify any configuration, networking, disk space or other ESX host-related issues that could prevent ESX hosts from being managed by vCenter Server after a successful vCenter Server upgrade.
- HA Cluster Configuration Maximum — HA clusters can now support 160 virtual machines per host in clusters of 8 hosts or fewer. The maximum number of virtual machines per host in cluster sizes of 9 hosts and above is still 40, allowing a maximum of 1,280 virtual machines per HA cluster.
As reported on vreference.com, there is a dangerous default in ESX 4. Before I expand on this potential problem, I want to point out that a bug report has been filed with VMware to correct this in future releases, but for now VI admins need to be aware of the issue: if the key combination Ctrl+Alt+Del is entered at the Service Console, the ESX host will begin a shutdown, stopping all virtual machines running on the host in the process. Read the full vreference post for more details.
I tested this on an ESX 4 host running in a VMware Player VM on my notebook and captured the shutdown and reboot in this video.
Fortunately, there is a manual workaround to disable this default behavior until VMware provides an update. I’ll use the instructions provided in the previously mentioned vreference.com post.
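The gist of the workaround is commenting out the ctrlaltdel action in /etc/inittab on the Service Console. To illustrate the edit safely, this sketch performs it on a scratch copy of the stock entry; on a real ESX host you would run the same sed against /etc/inittab itself (after backing it up) and then run `/sbin/init q` so init re-reads its configuration without a reboot:

```shell
# Demonstrate commenting out the ctrlaltdel action (on a scratch file).
tmp=$(mktemp)
echo 'ca::ctrlaltdel:/sbin/shutdown -t3 -r now' > "$tmp"   # the stock inittab entry
sed -i 's/^ca::ctrlaltdel:/#&/' "$tmp"                     # & re-inserts the matched text
cat "$tmp"    # -> #ca::ctrlaltdel:/sbin/shutdown -t3 -r now
rm -f "$tmp"
```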
Unlike its big brother VMware Workstation 7, nowhere is it published that ESX/ESXi 4 is a supported guest OS of VMware Player 3.0. In fact, ESX 4 is not even among the listed choices in the Version drop-down box when building a new virtual machine (VM), nor is it mentioned in the VMware Player Release Notes or Getting Started Guide. I was surprised when I was able to perform an Easy Install of ESXi 4, and, just like ESXi 4 VMs running on Workstation 7, the nested ESXi in VMware Player successfully hosted guests. Best of all, ESXi 4 in VMware Player 3.0 can be run without any additional manual (“ESX in a box”) configurations, just like on VMware Workstation 7.
Interestingly enough, the full Console ESX 4 install DVD is not recognized by VMware Player 3.0 for an Easy Install. However, performing a full ESX 4 Easy Install is possible with a last-minute switch of the install media. That is, first browse to the ESXi 4 .ISO and complete the new VM Easy Install wizard, but modify the hardware before booting and change to the ESX 4 DVD .ISO. Watch the video at the end of this post for a demonstration of getting full ESX 4 to work.
The rest of this post highlights the important parts of the Easy Install of ESXi 4 on VMware Player 3.0 with screen shots. To see more of the Easy Install screens, check out my post from earlier this year about installing Windows 7 as a VM in Workstation.