Hello Hyper-V: Meet Reality

First off, stop reading this and go read Eric Gray’s response to the new Microsoft Virtualization Team post. He does a great job of pointing out the hypocrisy in the marketing speak from the MS Virtual Team.

I am writing this blog post to address some specific lapses in reasoning. While I have made a career out of being a Microsoft guy (along with VMware, Cisco, EMC, and Nissan sportscars), I have some serious problems with the marketing pitch around Hyper-V.

So let me attack these head-on. First off, Chris Steffen states:

VMWare claims to support 4x more OSes that Hyper-V, but what does that really mean? When Microsoft lists an OS as supported, they COMPLETELY support the actual OS installation in the VM and you can call Microsoft support on that OS. Microsoft has support agreements with Red Hat and Novell specifically for this purpose.

So let’s be clear: what does Microsoft support in this case? Do they have support staff on hand who will work with the customer on Red Hat or Novell OS configuration? Would you trust Microsoft to touch the device information files on your Linux host? And to be quite honest, since Red Hat and Novell fully support their enterprise products within the VMware environment, what is the real difference?

So let’s sum this up.

On vSphere if I have a problem I can:

  1. Call VMware for hypervisor specific issues (experts on this layer)
  2. Call Red Hat or Novell to get full support for OS specific issues (experts on this layer)

On Hyper-V if I have a problem I can:

  1. Call Microsoft for hypervisor specific issues (experts on this layer)
  2. Call Microsoft for OS issues (not-experts) and likely be transferred to step 3
  3. Call Red Hat or Novell to get full support for OS specific issues (experts on this layer)

So the real benefit Chris Steffen points out is an extra possible step. In the end my support coverage is the same at worst. I would be very curious about the actual level of Linux knowledge in Hyper-V support compared to vSphere support, but I can’t prove that point yet. And outside of these two specific operating system flavors, vSphere is light-years ahead. According to the current checklist, vSphere supports 48 flavors of OS compared to Hyper-V’s paltry 13.

Now to the next item:

Also, many of the OSes that VMWare claims to support are only supported by the Linux community – not taking a shot at the Linux community here, but most do not have a formal support organization. This leads me to question why they would be used in an enterprise environment. Also, those Linux distributions can be run under Hyper-V, using the Linux Integration Components Microsoft has available for download and the drivers which are in the 2.6.32 Linux kernel release. In this case, customers wouldn’t be able to call Microsoft for support for the OS, but would work with the Linux community, just as they would with VMware.

So this is pretty simple. The point here is: don’t use open source software. He states that VMware and Microsoft have the same community support, so it is just a case of commercial vs. OSS and not a hypervisor argument. I would point out that not only is community support robust for vSphere, but VMware also publishes guides, links, and walkthroughs on their own site (in a very easy-to-use layout) for how to implement multiple flavors. I wonder how easy it is on the Hyper-V side of things. Since OSS is not the argument here, feel free to post OSS success stories in the comments.


Now for the fun part:

Reality: The Microsoft solution does not allow for over subscription of critical resources, but you shouldn’t do it anyway.

Oh no! I did not know this. Well, I hope he is at least going to explain why.

The core of the VMWare argument is that you can somehow get “something for nothing” – that there is some kind of magic that comes with the over subscription of RAM using VMWare that is the silver bullet regarding memory management.

Wait a second, the argument is “something for nothing”? So efficiency is a zero-sum game? I guess there goes thin provisioning, thin-client computing, and every other “thin” (read: efficient) technology. I had better go shut down my Windows Terminal Services farm too, because apparently I am not really gaining anything.

So, sarcasm aside, this is utter nonsense. He does not actually attack the technology or the approach. He does not talk about direct risk or the fact that all efficiency models require management. Just as you have to manage the number of users on a Terminal Services server, you have to manage utilization on a vSphere cluster (notice I said cluster, not host. DRS much?). There is always inherent risk in higher utilization rates. That risk is managed through proper operational practices. With vSphere those are DRS clustering, which automatically moves VMs across hosts based on utilization, and vCenter alarms, which set low-water marks against memory utilization. So with vSphere I have the option to take on operational responsibility for risk in exchange for higher efficiency (see $$$). The reason this is not zero-sum is obvious: I manage out the risk with a mature hypervisor (vSphere) and gain a benefit I can never get with Hyper-V. With VDI and newer virtualization deployment models, this can be a huge cost savings.
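To put a rough number on that efficiency trade, here is a hypothetical back-of-envelope sketch. Every figure in it is a made-up assumption for illustration, not a benchmark or a sizing recommendation:

```python
# Hypothetical back-of-envelope math: how memory overcommit changes
# consolidation on one host. All numbers are illustrative assumptions.

HOST_RAM_GB = 64          # physical RAM per host (assumed)
VM_ALLOCATED_GB = 4       # RAM assigned to each VM (assumed)
ACTIVE_FRACTION = 0.6     # share of allocated RAM a typical VM actually touches (assumed)

# Without overcommit: every allocated GB must be backed by physical RAM.
vms_no_overcommit = HOST_RAM_GB // VM_ALLOCATED_GB

# With overcommit: size the host to *active* memory, keeping headroom so
# DRS and vCenter alarms can react before the host is truly out of RAM.
HEADROOM = 0.85  # only plan to consume 85% of physical RAM (assumed)
vms_overcommit = int((HOST_RAM_GB * HEADROOM) // (VM_ALLOCATED_GB * ACTIVE_FRACTION))

print(f"No overcommit:    {vms_no_overcommit} VMs/host")
print(f"With overcommit:  {vms_overcommit} VMs/host")
print(f"Overcommit ratio: {vms_overcommit * VM_ALLOCATED_GB / HOST_RAM_GB:.2f}x")
```

The point is not the exact numbers; it is that the headroom plus the DRS/alarm machinery is what turns the extra density from a gamble into a managed trade-off.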

To leverage memory management in ESX to the fullest, one would have to fully burden the host beyond the physical memory. If you don’t, you really aren’t using memory overcommit.

Burden. Got to love that word. Puts an emotional spin on it. You can picture it, right? The poor ESX host crawling across the data center with all the VMs on its poor weary back.

Efficiency = lower total cost of ownership. The “burden” is your host doing more work for less money. I wonder if trucking companies talk about weight loads as “burdens” upon their poor Mack trucks.


Ok, one more:

Let’s go back to Basic Computer Architecture 101, and the example of the water pipe. There are limits to how much water you can push through a pipe at any given time, and the more taps that you add to the pipe, the longer it will take to fill up a bucket at each of the pipes. Hyper-V uses the best practice of moving a single VM as quickly as possible, using the entire bandwidth available to complete the transfer. Also, it is important to point out that without a modification of the host setting, VMWare would limit the migration to 4 VMs at a time (presumably for the same bandwidth considerations). The idea of moving 40 VMs all at the same time (as mentioned in the article) is not something that would be recommended, ever, regardless of platform.

Nice of him to explain throughput constraints as if to a kindergarten class. I would like to show a comparison of VMotion vs. Live Migration speeds (especially on my 10GE FCoE gear), but instead I will keep it simple.

Why? Why can’t I do this with Hyper-V? Is it because they don’t trust me? Or is it because they can’t make it work without sacrificing stability?

vSphere lets you not only do more but also do less. In other words, the mechanism is stable enough that throughput is the limitation (the water pipe), not the stability of the mechanism (Hyper-V Live Migration). Microsoft’s limitation here points to a possible stability flaw, not a risky endeavor. What is also fascinating is the focus on making it “quicker”. Why does it have to be so quick? Are they afraid the VMs won’t get there on Hyper-V if it takes too long?
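For what it’s worth, the water-pipe math itself shows why the “one VM at a time” best practice is not a throughput argument. A hypothetical sketch with illustrative numbers (real VMotion/Live Migration traffic also re-copies pages dirtied mid-transfer, which this ignores):

```python
# Hypothetical "water pipe" arithmetic: moving N VMs over one shared link.
# All numbers are illustrative assumptions, not measurements.

LINK_GBPS = 10.0   # 10GbE migration network (assumed)
EFFECTIVE = 0.7    # assume ~70% of line rate is usable payload
VM_RAM_GB = 8      # memory to copy per VM (assumed)
N_VMS = 4

gb_per_sec = LINK_GBPS * EFFECTIVE / 8  # gigabytes/sec on the wire

# The pipe is the same size either way, so total evacuation time is
# identical; parallelism only changes when the *first* VM finishes.
per_vm_serial  = VM_RAM_GB / gb_per_sec
total_serial   = N_VMS * per_vm_serial
total_parallel = (N_VMS * VM_RAM_GB) / gb_per_sec

print(f"Serial:   {total_serial:.1f}s total, first VM done at {per_vm_serial:.1f}s")
print(f"Parallel: {total_parallel:.1f}s total, all VMs finish together")
```

Serial vs. parallel is a stability and scheduling choice, not a bandwidth one, which is exactly why hiding behind the pipe analogy rings hollow.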


There is a lot more to point out, but instead I will let someone else have the fun. I am not an anti-Hyper-V guy; I am an anti-F.U.D. guy. I would much rather Microsoft focus on providing a cheap product for the small-shop market. In my mind, that is what they designed in both cost and feature-set. VMware has some nice offerings for that space too – see here & here

Also, I claim originality rights to the term: “DRS much?”. Feel free to tweet it like crazy 🙂

Comments and criticisms are welcome and appreciated.




13 Comments

  1. Hi Nick,

    Just thought I would drop in to clarify some of the issues you raise around the issue of operating system support.

    What we often talk about on the Hyper-V team is “Support versus support”. Hyper-V can run Windows NT 4.0 – and I would love it if we said that we support it, but we cannot say that – because Microsoft as a whole does not support Windows NT 4.0. And it really changes the story when you are the OS maker.

    For example – let’s imagine that we did state that we supported Windows NT 4.0 on Hyper-V, and we compared the support experience between Microsoft and VMware:


    Customer encounters a problem with Windows NT 4.0 on ESX. They contact VMware – who after investigating the issue declares that “sorry – this is a bug in Windows NT 4.0, you should contact Microsoft”. When the customer contacts Microsoft they are told that Windows NT 4.0 is not supported. The customer still has a problem, but they do not feel lied to.


    Customer encounters a problem with Windows NT 4.0 on Hyper-V. They contact Microsoft – who after investigating the issue declares that “sorry – this is a bug in Windows NT 4.0, and even though we said we supported Windows NT 4.0 on Hyper-V, we did not really mean that we supported Windows NT 4.0 itself”. The customer still has a problem, and now they do feel lied to.

    Can you see why we (Microsoft) do not feel that we can make such statements? Meanwhile VMware currently states that they support every Microsoft OS from MS-DOS 6.22 onwards.

    In regards to Linux support – when we state that we support a Linux distribution on Hyper-V, it means that we have signed formal support agreements with the owner of that distribution. For example – if you have an issue running SuSE on Hyper-V, you can contact Microsoft product support. They will then work with Novell support engineers directly to solve the issue – without requiring you to bounce back and forth between the two companies.

    As an aside – the problem with this approach is that it makes it hard to support versions of Linux like CentOS, where there is no corporate entity who can sign a support agreement. That was one of the motivating points for releasing our integration components under the GPL – so that community-supported distributions could provide the same level of support for Hyper-V as they provide for other platforms.


  2. What I love so much in MS marketing – their own words.
    “Relying on a host to overcommit memory to support failover hosts is *potentially* dangerous and *incorrect* oversubscription leads to all VMs suffering from performance.”
    A kitchen knife is *potentially* dangerous, and you can *incorrectly* cut your own finger with it. So I await the day Chris Steffen throws all his kitchen knives in the trash.

    After reading this article for the third time, I was left with the strong feeling that Microsoft Marketing is actually saying: “You are not highly qualified engineers, you are stupid button-pushers, so we won’t give you any technology that can *potentially* affect something”.

  3. “Burden. Got to love that word. Puts an emotional spin on it. You can picture it, right? The poor ESX host crawling across the data center with all the VMs on its poor weary back.”

    I can just see Microsoft organizing a labor union for ESX hosts. “Hey servers, stop working for the “Man” (VMware ESX). Come work for Microsoft Hyper-V, and don’t work harder than you have to. Plus you get frequent breaks during mandatory BSOD breaktime.”

    Nick, love the point that every efficiency technology requires management. You hit the nail on the head. I also agree with the anti-FUD vs. anti-Hyper-V distinction. I was an Oracle guy in my developer/DBA life, and have no doubt that it is probably the best RDBMS out there, but I fully recognize the ubiquity of SQL Server and the “good-enough-ness” of it. We have a similar situation here with VMware vs. Hyper-V; I just wish the FUD angle could be replaced with patience and more developer time. M$ needs to let the product speak for itself when the time is right.
