VMware appears to have an interesting fight on its hands in the near future in the cloud computing realm.
Much has been said about virtualization platform interoperability. VMware has been pushing steadily to establish open industry standards for common virtualization components and functions, one of the most useful being the virtual hard disk format. In theory, I should be able to take a virtual disk from my Hyper-V environment and import it into my vSphere environment. I have seen this work on my workstation when I imported my Windows XP Mode drive into my Workstation setup!
However, the non-VMware virtualization providers have been combining forces behind a new cloud computing platform called OpenStack (slogan: “Open source software to build private and public clouds”). OpenStack is a software layer that supports the operation of multiple hypervisors… including Xen, KVM, QEMU, and UML (User-Mode Linux).
Installation (per the installation instructions) occurs on a Debian/Ubuntu-based OS. So, out of the box, it appears as though we are going to be working with a potential Type 2 hypervisor… or even multiple hypervisors on a single physical platform (?!). Resource management between multiple hypervisors should be interesting, then.
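For a rough sense of how that hypervisor selection works, the Nova compute component of this era is configured through flags. Something like the following flagfile fragment would pick the backend — but note the flag names here are illustrative, based loosely on early Nova documentation, and may not match any particular release exactly:

```ini
# Hypothetical Nova flagfile fragment -- names are illustrative, not guaranteed.
--connection_type=libvirt   # drive the hypervisor through libvirt
--libvirt_type=kvm          # could also be qemu, uml, or xen
# --connection_type=hyperv  # presumably how a Hyper-V node would be selected
```

The point is that the hypervisor is a per-host deployment choice, which is exactly what makes the mixed-hypervisor resource management question interesting.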
Much of the effort has been focused on “avoiding vendor lock-in” in virtualization services. Apparently, people do not like being locked into a single vendor for services. Although, I would reckon that any of the virtualization companies involved with OpenStack would love to be in the situation VMware finds itself in… at the forefront of the virtualization market. I will go out on a limb and state that VMware loves the success it has found, but it is still working on interoperability between virtualization providers and products. Look at the standards work VMware has done for virtualization. There is also the conscious decision to allow third-party providers to interact with the vSphere environment (ex: offloaded VM antivirus, backups, monitoring, etc.). Hyper-V and XenServer do not have this level of flexibility. The VMware ecosystem is open enough that other vendors’ products (ex: System Center Virtual Machine Manager) can manage vSphere environments. The same cannot be said for the other vendors.
Surprisingly, what is bringing this to the forefront of the virtualization industry is the inclusion of Microsoft in the mix. Suddenly, a famously closed-source company — and one of the most prolific distributors of vendor-locked products — is trying to get in on the action. Prior to this, OpenStack could be described as a collection of primarily open source hypervisor products. With the inclusion of the Hyper-V hypervisor, that model has changed.
The inclusion of Hyper-V really opens an interesting avenue of discussion, though. What is Microsoft’s intention here, and how will it go about it? Will Microsoft open Hyper-V to the open source community? Will Hyper-V run natively in the Linux environment the other hypervisors run in, or will any hosting provider need to operate a Windows environment to support it?
Supporting multiple hypervisors can and will lead to nightmares, as each hypervisor product is being developed independently of the others. One hypervisor may well evolve at twice the rate of the rest. How are updates going to be handled? What if one requires different libraries than another?
What makes the big three virtualization vendors so useful is the inclusion of sophisticated, intuitive GUIs and APIs for management. In OpenStack, however, management appears to be restricted to the CLI of the host server. Right away, this will drive away non-savvy customers, so I doubt customers will flock to the OpenStack environment en masse. Sure, some customers have the resources to handle this, but not enough to make a difference… Statement: yes. Impact: no.
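For flavor, here is roughly what that CLI-only management looks like per the install documentation of the period: euca2ools commands talking to Nova’s EC2-compatible API. This is an illustrative session sketch only — the key name and image identifier below are made up:

```shell
# Illustrative only -- assumes a running Nova cloud and euca2ools credentials.
euca-add-keypair mykey > mykey.priv          # register an SSH keypair
euca-run-instances ami-tiny -k mykey -t m1.tiny   # launch an instance
euca-describe-instances                      # check what is running
```

Workable for administrators comfortable at a prompt, but a far cry from a vCenter-style console.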
As far as cloud infrastructure is concerned, what is wrong with vendor lock-in, really? The virtualization providers know how to interact with their own products and can drastically increase their functionality and power. Sticking with VMware ensures compatibility and functionality across products. What is the need to run virtual machines in a Hyper-V environment locally and UML in the cloud? While the servers could interact with each other, management functions will differ significantly, and portability is drastically limited. Contrast this with using the VMware vCloud initiative in your local and public clouds. Management and APIs are identical regardless of where the virtual machines live (Public or Private cloud). Portability is a non-issue because the virtual machines exist on a common platform.
My feeling is that this is very much about some lesser virtualization providers ganging up to try to beat VMware in the cloud infrastructure game. While the intent is great, the complexity is much greater: there are more hypervisors to support, and more potential for instability and uneven product growth. As it stands, this is more of a proof-of-concept project for what “the cloud should be like”. However, the OpenStack project and the other virtualization providers should take note of the VMware vCloud initiative, as it is showing how true cloud operation should work.
Installation Instructions: http://wiki.openstack.org/NovaInstall