vSphere 5.1 vs. Hyper-V 2012 – Part 4 – “Workload Migration”

Earlier in this blog series, we compared vSphere 5.1 and Hyper-V 2012 in terms of storage management, memory handling, and CPU scheduling. In this blog post, we'll discuss how both hypervisors help manage data and workload migration and provide virtual machine (VM) mobility.

VMware® has been offering vMotion for a long time; it allows moving running VMs from one host to another with little or no downtime (typically just a few milliseconds). Though earlier versions of Hyper-V could not match vSphere's VM migration capability, Hyper-V 2012 closes the gap with its Live Migration feature, which is similar to vMotion. Let's discuss how both hypervisors execute workload migration and understand the differences and similarities between them.

vMotion in vSphere 5.1

vSphere 5.1 vMotion allows you to transfer the execution state of a running VM from the source ESXi™ host to the destination ESXi host over a high-speed network. The execution state consists of the VM's:

1.   Virtual disks

vSphere 5.1 uses Storage vMotion to transfer virtual disks. It takes a synchronous mirroring approach to migrate a virtual disk from one datastore to another on the same physical host: while the disk contents are copied, writes issued by the guest are mirrored to both the source and destination copies.
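The mirroring idea can be sketched in a few lines. This is a hypothetical simplification (a flat list of disk blocks and a list of guest writes), not VMware's implementation:

```python
# Hypothetical sketch of mirror-mode disk migration: while blocks are
# copied from source to destination, any guest write that arrives is
# applied to both copies so the destination never falls behind.

def migrate_disk(source, destination, guest_writes):
    """Copy `source` (a list of blocks) into `destination`, mirroring
    concurrent guest writes given as (block_index, data) pairs."""
    copied = 0
    while copied < len(source):
        # Copy the next block of the virtual disk.
        destination[copied] = source[copied]
        copied += 1
        # Mirror any guest writes that arrive during this copy pass.
        while guest_writes:
            idx, data = guest_writes.pop(0)
            source[idx] = data
            if idx < copied:
                # Already-copied region: mirror the write immediately.
                destination[idx] = data
            # Blocks not yet copied pick up the write when their turn comes.
    return destination
```

Because every write lands on both sides, the destination is an exact replica the moment the copy pass finishes, and the switch-over needs no final disk resynchronization.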

2.   Physical memory

Like earlier versions of vSphere, vSphere 5.1 vMotion uses an iterative pre-copy approach to transfer physical memory:

  • Guest Trace Phase: Traces are placed on the guest memory pages to track any modifications made by the guest during the migration.
  • Pre-copy Phase: The memory contents of the VM are copied from the source to the destination ESXi host in an iterative process.
  • Switch-over Phase: The last set of memory changes are copied to the target ESXi host, and the VM is resumed on the target ESXi host.
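The three phases above can be simulated in miniature. The sketch below is a toy model under assumed parameters (a dict of pages, a fixed dirtying rate, a small switch-over threshold), not VMware's algorithm:

```python
import random

# Toy simulation of iterative pre-copy migration: start with all pages
# dirty (guest trace phase), repeatedly re-copy pages the guest dirties
# during each pass (pre-copy phase), then pause and copy the final small
# dirty set (switch-over phase).

def precopy_migrate(memory, dirty_rate=3, switchover_threshold=2, max_iters=10):
    dest = {}
    dirty = set(memory)               # guest trace phase: everything is dirty
    for _ in range(max_iters):        # pre-copy phase
        for page in dirty:
            dest[page] = memory[page]
        # The guest keeps running and dirties a few pages during the copy.
        dirty = set(random.sample(list(memory), min(dirty_rate, len(memory))))
        if len(dirty) <= switchover_threshold:
            break
    # Switch-over phase: the VM is briefly paused, the last dirty pages
    # are copied, and the VM resumes on the destination host.
    for page in dirty:
        dest[page] = memory[page]
    return dest
```

The key property is that each iteration shrinks the work left for the switch-over, so the actual pause at the end is short even for large VMs.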

3.   Virtual device state

The virtual device state includes the state of the CPU, the network and disk adapters, the SVGA device, and so on. vSphere 5.1 vMotion serializes the VM's virtual device state and transfers it over a high-speed network.

4.   External network connections

vSphere's virtual networking architecture makes it easy to preserve existing network connections even after a VM is migrated to a different host. Each vNIC has its own MAC address, which is independent of the physical NIC's MAC address. This allows the VM to keep its network connections alive after migration, as long as both the source and destination hosts are on the same subnet.


Live Migration in Hyper-V 2012

Windows Server 2012 provides a capability similar to vMotion called Live Migration. It allows you to store a VM on an SMB file share and then live-migrate that VM between non-clustered servers running Hyper-V; throughout the process, the VM's storage remains on the central SMB share.

Windows Server 2012 lets you select optimal performance options when moving VMs to a different server. In a larger virtualization setup, this can reduce network overhead and CPU usage, in addition to shortening the time a live migration takes. Shared Nothing Live Migration in Hyper-V 2012 allows you to move a VM between systems that don't share common storage: between two non-clustered hosts, between a non-clustered host and a clustered host, and between two clustered hosts. It is also possible to perform multiple live migrations of VMs and queue them up so they move in sequence.
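The queued-migration behavior is just a FIFO over migration requests. A minimal sketch, assuming a caller-supplied `migrate` function that blocks until one move completes (the names are illustrative, not Microsoft's API):

```python
from collections import deque

# Hypothetical sketch of queued live migrations: requested moves are
# held in a FIFO queue and executed one after another, the way Hyper-V
# 2012 can line up multiple live migrations to run in sequence.

def run_migration_queue(requests, migrate):
    """requests: iterable of (vm_name, destination_host) pairs.
    migrate: callable that performs one migration and returns on completion."""
    queue = deque(requests)
    completed = []
    while queue:
        vm, destination = queue.popleft()
        migrate(vm, destination)   # blocks until this migration finishes
        completed.append(vm)
    return completed
```

Running migrations sequentially rather than all at once keeps any single move from being starved of network bandwidth by its neighbors.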


Similarities between vSphere 5.1 & Hyper-V 2012

Notably, shared storage is no longer required for either Live Migration in Windows Server 2012 or vMotion in vSphere 5.1.

Recent versions of both hypervisors also support workload migration over 10 Gigabit Ethernet (GbE) networks. The maximum number of concurrent vMotions per ESXi host is:

  • Four with a 1 GbE network connection
  • Eight with a 10 GbE network connection
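These per-host limits can be expressed as a simple lookup. The 1 GbE and 10 GbE figures come from the text above; anything else is rejected rather than guessed:

```python
# Per-host concurrent vMotion limits for vSphere 5.1, keyed by the
# migration network's link speed in GbE. Only the two speeds documented
# above are included; other speeds raise an error instead of guessing.

VMOTION_LIMITS = {1: 4, 10: 8}

def max_concurrent_vmotions(link_speed_gbe):
    try:
        return VMOTION_LIMITS[link_speed_gbe]
    except KeyError:
        raise ValueError("no documented limit for %s GbE" % link_speed_gbe)
```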

Both vMotion and Live Migration minimize downtime and the impact on service availability while a VM and its workload are moved between hosts.


None of this is to say that one hypervisor is better than the other. VMware pioneered server virtualization, but with the evolution of Hyper-V 2012, Microsoft has positioned itself as a challenger, and we'll have to wait and see how IT teams run and manage both in a mixed-hypervisor setup. If you are interested in virtualization performance monitoring, learn about VMware monitoring and Hyper-V monitoring.

To learn more about how vSphere 5.1 and Hyper-V 2012 compare:


Read this White Paper:


Watch this Webcast:


Other parts of the vSphere 5.1 vs. Hyper-V 2012 series: