For a number of years, VMware has provided some amazing lab content during the VMworld conferences. From year to year, the labs progressed from onsite delivery, to cloud bursting, to majority cloud, and, finally, to completely cloud-provided. It was inevitable that one day… some day… the labs would be opened up to the general populace. As it turns out, Tuesday, November 13th was that fateful day. At the Portland VMUG Conference, Mr. Pablo Roesch (@heyitspablo) and Mr. Andrew Hald (@vmwarehol) released the Hands On Labs (BETA) onto the world!
Pablo and Andrew were kind enough to help me get access just before the announcement. And, I must say, I have been so impressed with what VMware has produced.
It is important to remember that this is a BETA offering (not a perpetual BETA like Google's Gmail), so experiencing some bugs and bumps in the road should be expected. That said, the quality of the content, the delivery mechanisms, and the UI are top notch.
Let's take a quick look at what the environment looks like:
Upon logging into the HOL environment, you are greeted with a couple notable components:
- Enrollments and Lab Navigation
- This section, on the left, allows you to view the labs you have enrolled for, filter the lab types (currently Cloud Infrastructure, Cloud Operations, and End-User Computing), and view your transcript.
- Standalone labs
- The actual entry to the lab content. Enroll, begin, continue, etc… from here. This is where the magic happens.
- Includes important updates on new lab content, product YouTube videos, the Twitter stream, etc…
All in all, the home page is very streamlined and efficient.
Using a Lab
Upon enrolling and beginning a lab, you are presented with:
In the background, vCloud Director is deploying your lab environment. Looks like VMware is truly eating its own dog food with this product. No more Lab Manager for this type of offering.
The interface features some very well thought out design decisions that help present the lab content, and VMs, in a very logical and convenient way. The HOL team heavily leveraged HTML5 to accomplish the magic:
The simple placement of tabs on the left and right sides allows the console of the VM to consume the majority of the screen while minimizing the number of times the user needs to switch between applications for information. All of the info is there. Plus, as the user scrolls down the screen, the tabs remain visible, always ready to be used.
Use the lab manual content to navigate through the labs at your leisure. Note, though, there is a time limit. This is shared infrastructure. So, if you are idle for too long, HOL Online will time you out and close up shop so others can use the same infrastructure. Don’t worry, though, the content will resume when you return later.
Bugs I Have Found
Yes… as mentioned above, this is a BETA. Did I mention this is a BETA? Because it is a BETA. So, running into some bugs is expected. Don't worry, though, I'll participate in the forum to report them.
Thus far, I have found a few funky bugs:
- The mouse can disappear when a console is open
- The mouse can disappear when multiple browsers are open and the HOL window is not active
- Some lab manuals are not available
- Occasionally, the HOL interface will hang at Checking Status. Force a cache refresh in your browser.
If you want to get involved in testing the functionality and getting a taste of what the HOL Online is all about:
- Acknowledge that this is a BETA at this time. Don't expect complete perfection right now.
- Sign up for the BETA at http://hol.vmware.com
- Notice that the signup site is a forum. You can check out the bugs and commentary from other beta testers to get a feel for what you're going to experience.
- Commit to participating in the beta community. Pablo and Andrew are not going to come to your house and take your vSphere licenses away for not participating. But this is a unique opportunity to contribute to the success of a public product like this. Take advantage of it. Help the VMware community by contributing to it!
Well done, HOL team. I look forward to seeing what this turns into in the future!
Alright… so in September, Cisco announced Q4 2012 availability of a new version of the Nexus 1000v virtual switch. Of course, the release was going to have so many features and functions that there would be no way we could avoid justifying the $695/CPU (plus support) that they charged for the virtual switch.
A mere 4 hours into October, a blog post was published from Cisco (http://blogs.cisco.com/datacenter/new-nexus-1000v-free-mium-pricing-model/) dramatically reducing the cost of playing with their toys to a big fat $0 (plus support). Check out the breakdown of the features:
[Note: the graphic above was taken from the blog post and can be found by following the link above]
I have three very different reactions to the announcement, and I am struggling to determine which one wins out:
1. Nice move, Cisco
This was a very smart move on Cisco's part. The adoption of the Cisco 1000v has been anything but spectacular. The functionality sounds great from a high level. However, when compared with the price and availability of the VMware Distributed Virtual Switch that is available out of the box, why make the jump to a per-CPU solution?!
Plus, with the virtual network changes coming our way, getting customers to buy into the Cisco ecosystem now will get them locked in for whatever the future holds. Nicira has the potential to shake up virtual switching in upcoming releases.
2. Great… more people using Nexus 1000v
My experiences in $CurrentJob involve an environment with 1000v deployments all over the place… and the results have been less than spectacular. All too often, a networking issue arises that we lose visibility into, ports get blocked with no explanation, VSM upgrades fail, VEM upgrades fail, etc… To say I am a fan of the 1000v would be pretty far-fetched. Probably better stated: to say I am a fan of our 1000v implementation would be pretty far-fetched. I am sure other 1000v implementations out there are more successful. I just have a hard time recommending that people implement the 1000v in their environments when the provided distributed virtual switch is good enough.
3. Great! More people using Nexus 1000v
See what I did there?! (“…” vs “!”)
In my self-admitted limited time with the Nexus 1000v, it really seems like there is a lack of people that know ANYTHING about the 1000v. Understanding the NX-OS side of the switch is critical (obviously). But understanding the nature of a virtual environment (especially with vCloud Director making more of a play in datacenters) and the specific environment implemented (the 1000v will support VMware, Microsoft, Xen, and KVM hypervisors) is equally important. The nature of the workloads and behaviors changes.
So, by having more people using the Nexus 1000v, there will be more and more people available as "experts" or at least legitimately "experienced" with the product than before. This can bode well for future implementations for sure.
Ultimately, I think this is a good move. Cisco is acknowledging that getting another "advanced" product into the ecosystem gratis helps drive purchasing for other Cisco products in the datacenter. Plus, at a much higher level, it is an acknowledgement of the direction the market is moving… and they're trying to get a toe-hold before the SDN wave takes hold. Will this fix what I am working with? Not one bit. Will this fix what I will be working with in the future? With more people having experience, it just may.
Sunday evening, many of the vExpert award recipients converged in the Casanova 503 room at the Venetian for the VMworld 2011 vExpert meeting. Mingling, meeting, and networking was fantastic.
However, there was one topic of significant discussion that really got my wheels spinning. While we were requested not to go into detail about what was said by VMware (proper), we are all familiar with the concept… the Virtual Datacenter.
It should be no surprise that VMware has been walking us down the path of virtualizing our datacenter components. Servers, storage, networking… the entire stack. All in an effort to create this nebulous “Virtual Datacenter”. But, what is the virtual datacenter and how do we get there? Well… if I had the answer, I would probably be working for VMware… right?!
Conceptually, the virtual datacenter is composed of increasingly commoditized resources. x86 compute resources are readily available at minimal cost. Auto-tiering storage is becoming more and more prevalent to help mitigate IO performance issues. 10Gb networking, and other high-bandwidth connections, provide the ever-so-necessary access to networking and network-based storage. By abstracting these resources, the virtual administrator is no longer tasked with managing them.
The fact of the matter, though, is that in many environments, management of these resources still exists. We need the network guys to maintain the network, the storage guys to handle the storage, and the server guys to handle the server hardware and connections to systemic resources. The virtual datacenter still needs management from different facets of the IT house.
My view of the virtual datacenter is the creation of a system where network, storage, and servers are all managed at a single point. We are seeing this come to fruition in Cisco UCS, vBlock, and other single-SKU solutions. That is a fantastic model. However, it targets a different market.
My dream virtual datacenter manages everything itself.
- Need more storage? Just add a hard drive. The datacenter handles data management and availability. Seriously, just walk over and add a hard drive or add another storage node to the rack.
- Need more network bandwidth? Hot-add more pNICs. The datacenter handles properly spreading the data across available links, NIC failures, etc…
- Need more compute resources? Add a new server to the rack. The datacenter handles joining the server to the available compute resources.
- Need external resources? Just point the datacenter towards a public provider and let the datacenter manage the resources.
Creating the foundation to make this work relies on all parties involved allowing the datacenter to configure and manage everything. Storage vendors need to allow the datacenter to handle array configuration and management. Network vendors need to allow the datacenter to configure trunks, link aggregation, bandwidth control, etc… Systems vendors need to allow the datacenter to jump into the boot process, grab the hardware, and auto-configure it.
Pie in the sky, right? Existing technologies hint at more elegant management that would lend itself kindly to such a model. VMware, as the datacenter enabler, would need to step up to the plate and take initiative and ownership of managing those resources… from RAID configurations to VLAN trunking on switches.
Seriously… walking up and adding new physical resources, or extending to a public provider for more capacity, and having them magically become available would be fantastic.
So… that is my vision for where I would like to see the virtual datacenter. VMware, let me know if you want to talk about this in more detail. I am sure we can work something out!
Start with a plot of land, plow it up, drop some seeds, water, care, and in no time flat, you have a viable farm. Alright… any farmer out there knows there is much more to it than that. Early mornings, mechanical breakdowns, pests, critters, varmints, etc… But, when everything is said and done correctly, that plot of land turns into something special.
The more I think about it, VMware is a farmer. No doubt they have created something very special. The ecosystem surrounding their foundational product, the hypervisor, is rich, fertile, and ready for harvesting.
Land = Hypervisor
There is no question that VMware has provided the most market-dominant virtualization layer out there. Continued development has yielded a smaller, more efficient, and more feature-rich solution that we all see benefits from. Collaboration between VMware and core hardware vendors (network, storage, processing, and memory) has resulted in a product that provides functionality we all take for granted and rely upon heavily.
Seeds = APIs
Starting with v3.5, VMware really started a push for integrations with 3rd party vendors. This effort shot through the roof with the vSphere platform (v4.0). The theory being that if VMware opens up various APIs to the community, the community will come up with some wicked cool add-ons that provide functionality above and beyond what VMware could provide. So, rather than keep a closed and controlled environment, VMware enhanced their market presence by allowing other people to play with their toys.
This effort has turned small companies into virtualization powerhouses and given larger companies avenues to expand their expertise and existing technologies. Think about it:
- Vizioncore (now a part of Quest Software)
- Hyper9 (now a part of Solarwinds)
And the list goes on and on. The API availability has fostered so many new business opportunities for the companies on the list above and the countless others that are not listed.
Water/Care = Vibrant Community
VMware has fostered an incredibly active community surrounding their company.
- Active local VMware User Groups
- Social Media – The one and only, Mr. John Troyer (@jtroyer for you Twitter folk out there) is the ring leader behind this. He has become the face of VMware for so many of us out there. Just listen to the VMware Communities Roundtable podcast, sit at the social media area at VMworld, or follow his Twitter account and you will see what I mean.
- vExperts – Those individuals that go above and beyond to promote the VMware technologies and brand.
- VMworld conferences
Ultimately, there is a great customer/vendor community that loves VMware and what they do.
How is VMware taking advantage of their farm?
The obvious answer is that they are taking all of this to the bank. Sure enough, they are.
Their stock is doing well. Q1 results show great growth, especially when compared to Q1 of 2010:
- Revenues: $844 million [+33% from Q1 2010]
- Net Income: $126 million [up from $78 million in Q1 2010]
- License Revenues: $245 million [+34% from Q1 2010]
- Professional Services: $425 million [+32% from Q1 2010]
- [Source: http://virtualization.com/2011/05/06/vmware-reports-q1-2011-earnings-revenue-up-33-to-844-million-yoy/]
But, aside from the financials, VMware is able to harvest some amazing technologies and people from the farm. Remember the list of companies waaaayyyy up at the top of the blog post? VMware has identified some level of value and innovation in some of those companies and decided to purchase them and bring the technologies in-house. Of the entries in the list, the following were acquired by VMware and integrated into their product offering in one way or another:
- Thinstall – Now known as ThinApp
- Integrien – vCenter Operations Standard/Advanced/Enterprise
- Shavlik – VMware Go!
By bringing these companies in house, VMware is able to expand their offering AND force the other companies to innovate more. Acquisitions are a way to level-set the 3rd party community and provide a new avenue for growth.
Take a hypervisor, sprinkle some APIs, apply and maintain a vibrant community, and you get what VMware is today…
- virtualization.info – A look at VMware's past acquisitions – Feb 1, 2010
- virtualization.com – VMware Reports Q1 2011 Earnings: Revenue Up 33% To $844 Million YoY
- Google Finance – VMware stock – 5-year range
All of us “Virtualization Admins” are always looking for more information about the performance of our environment. As VMware ESX products have progressed, more and more performance metrics have been gathered and presented to us… and we appreciate it oh so much.
However, one of the largest issues we need to contend with is tracking this information over the longer term and being able to correlate it to new issues and/or diagnose future capacity problems.
Previously, we needed to invest in 3rd-party products to provide this functionality. In some cases, the information was difficult to gather correctly, the alerting thresholds were very static, and the products did not integrate well into the existing virtual infrastructure.
Back on August 31, 2010, VMware announced the acquisition of Integrien… an up-and-coming company providing real-time performance analytics for all kinds of environments… including virtual environments. Acquisitions are always so curious because we, the consumers, are always trying to figure out how it is going to be used to further the product line of the purchasing company. So, while the product from Integrien was interesting, seeing how it would fit into the vSphere realm as a VMware product was up in the air.
All of those questions are answered today, as VMware has announced the availability of the vCenter Operations products. vCenter Operations is being provided in three flavors:

Standard
- Handles vSphere environments.
- Deployed as a virtual appliance that hooks into vCenter and is visible as a vSphere Client plugin.
- Handles up to 500 VMs.

Advanced
- Same as the Standard edition.
- Includes additional Capacity Planning (aka the Standard edition bundled with CapacityIQ).

Enterprise
- This is a whole new beast and includes the ability to monitor much more infrastructure than just virtual hosts and servers.
The product information from VMware will list off all sorts of neat features, functions, and purpose for their product. However, after being able to use the product during the pre-release period, I find that the following are the main value points for my environment:
1) 10,000-foot view of the Virtual Environment
All kinds of monitoring solutions exist that claim to be the single pane of glass that should be able to solve all of your problems. However, it appears as though this is the first to actually accomplish that.
Check out what I see:
Just from a glance, I can see that my environment is running well. The overview page uses the typical Green/Yellow/Red color scheme to indicate state. From this view, I can immediately see that my vCenter, Datacenters, Clusters, and ESX hosts are running within established parameters. I see there may be some problems with a handful of VMs. Instantly, I can see what is going on.
Without going into a complete demo, I can click on any object in the page and get relationship information (aka – which vCenter, which Datacenter, which Cluster, which ESX, and which VMs).
2) Defining what is normal for your environment
Determining what is normal is one of the most important aspects of what the vCenter Operations Standard product offers. What is normal to me is not normal for everyone else… we are all special in our own way and vCenter Operations Standard understands that.
When vCenter Operations Standard is installed and configured, you do not need to do much of anything. The installation guide is dead simple. Why is this?! Well, because vCenter Operations Standard learns about the behavior of your environment.
Rather than rely upon static definitions that would cause warning and critical alerts (example: Warn when RAM = 80%+ and Critical when RAM = 95%+), vCenter Operations watches what happens in your environment from day 1 and starts to determine what is normal for you. If you configure your applications to utilize almost all of the RAM assigned, the other solutions may alert you to a critical state. However, this is normal and expected. You would never expect to see an alert for when the server RAM utilization drops to 10%. However, by using dynamic thresholds in the learning and monitoring algorithms, vCenter Operations Standard is able to determine that high RAM usage is normal and alert you to when the situation is NOT normal… so, when the server drops to 10% RAM usage, you will get alerted because it is abnormal.
vCenter Operations relies on some crazy algorithms and analysis that any PhD in Rocket Science would love. Various algorithms exist in the environment that chew on the data as it is received. The result from whichever algorithm is most likely to be correct is selected and used to represent the data. So, there is a higher probability of the data and situation being statistically correct.
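VMware has not published the exact math behind these algorithms, but the dynamic-threshold idea itself can be sketched in a few lines. The sketch below is purely illustrative (the class name, window size, and 3-sigma band are my assumptions, not anything from vCenter Operations): learn a metric's recent mean and spread, then flag only the samples that fall outside that learned band.

```python
from collections import deque
from statistics import mean, stdev

class DynamicThreshold:
    """Learn what is 'normal' for a metric from its own history,
    instead of using a fixed static limit (e.g. alert at RAM >= 80%)."""

    def __init__(self, window=60, sigmas=3.0):
        self.history = deque(maxlen=window)  # rolling learning window
        self.sigmas = sigmas                 # width of the "normal" band

    def observe(self, value):
        """Record a sample; return True if it was abnormal."""
        abnormal = False
        if len(self.history) >= 10:          # need some history first
            mu = mean(self.history)
            sd = stdev(self.history) or 1e-9
            # Abnormal means far from *this metric's* usual range --
            # a server that always runs hot only alerts when it drops.
            abnormal = abs(value - mu) > self.sigmas * sd
        self.history.append(value)
        return abnormal

mon = DynamicThreshold()
for i in range(50):
    mon.observe(90.0 + (i % 5) * 0.5)  # RAM hovering 90-92%: learned as normal
print(mon.observe(91.0))   # within the learned band -> False
print(mon.observe(10.0))   # sudden drop, far outside the band -> True
```

The key property matches the RAM example above: a server that always runs at ~90% RAM learns that as normal, so the alert fires on the drop to 10%, not on the steady high usage.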
Check out what I see:
This is a view of a specific ESX host in my environment. You can see that normal is somewhere between 1-16 and is typically defined by the Memory usage. Additionally, you can see various statistics regarding the current workload, CPU, Memory, and ESX resources. Again, all from a single screen.
Now, compare that to a different ESX host that is hit a little harder:
Normal for this server is somewhere between 61-100 and is defined by the Memory usage.
3) Resource statistic aggregation to provide a more holistic view of what my environment is like on a historical basis.
This little nugget of joy made my day the first time I saw it.
vCenter Operations Standard is able to aggregate many statistics into a single value that represents your resource in the environment. So, rather than use resxtop to find NIC statistics in my environment, I can get those statistics from vCenter Operations Standard in an easy-to-read way.
For example: There have been many times where I wanted to get a good idea of how much data is passing through my NICs. Previously, I would need to get some batch data from resxtop, throw it into a spreadsheet, and process it. Or rely on some historical data from vCenter. Now, I can dig into the ESX host in vCenter Operations Standard and select the “ESX USED NETWORK INTERFACES” section at the bottom of the page.
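For the curious, that spreadsheet step is easy to script instead. Batch captures (such as resxtop's batch mode produces) are just CSV, so averaging a counter takes a few lines of Python. The column names below are invented stand-ins for illustration — real batch-mode headers are much longer compound strings:

```python
import csv
import io

# A tiny stand-in for a batch-mode capture (hypothetical column names).
batch_csv = """\
timestamp,vmnic0 MbRX/s,vmnic0 MbTX/s
10:00:00,120.4,30.1
10:00:05,118.9,28.7
10:00:10,240.2,31.0
"""

def average_column(csv_text, column):
    """Average one numeric column from a batch-mode CSV capture."""
    rows = csv.DictReader(io.StringIO(csv_text))
    values = [float(row[column]) for row in rows]
    return sum(values) / len(values)

print(round(average_column(batch_csv, "vmnic0 MbRX/s"), 1))  # -> 159.8
```

Handy in a pinch, but having the aggregation done continuously inside vCenter Operations Standard beats re-running a script every time.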
Clicking on one of the numbers shows me even more information. This is the graph for Received Rate (KBps):
How cool is that?!
Similar statistics can be gathered for CPU, Memory, and Storage as well!
4) Analytics and Fires
Analytics are presented as heat zones, sized based on their relationships to other objects. So, just looking at the graphic, you can get a sense of those relationships. I know, that is a little obscure. But, check this out:
From here, you can see that there is a fair amount of contention for the Exchange partition compared to that of the other datastores in the environment.
Having access to this data is still new to me and I am continually finding more and more ways to interpret it.
The vCenter Operations Standard product provides amazing insight into your virtual environment. Whoever at VMware decided that Integrien was a suitable acquisition should get a high-five for this one.
No doubt, this is an insanely useful product and is definitely raising the bar in what is possible in the analytical world. I look forward to seeing this product grow and mature.
VMware appears to have an interesting fight on its hands in the near future in the cloud computing realm.
Much has been said about virtualization platform interoperability. VMware is constantly pushing forward with trying to achieve ANSI standards for common virtualization components and functions. One of the most useful is the virtual hard disk format. In theory, I should be able to take a virtual disk from my Hyper-V environment and import it into my vSphere environment. I have seen this work on my workstation when I imported my Windows XP Mode drive into my Workstation setup!
However, the non-VMware virtualization providers have been combining their forces into a new cloud computing platform, called OpenStack (slogan: “Open source software to build private and public clouds”). OpenStack is a software layer that supports the operation of multiple hypervisors… including Xen, KVM, QEMU, and UML (User Mode Linux).
Installation (per installation instructions) occurs on a Debian/Ubuntu based OS. So, out of the box, it appears as though we are going to be working with a potential Type 2 hypervisor…or multiple hypervisors on a single physical platform (?!). Resource management between the multiple hypervisors looks to be interesting, then.
Much of the effort has been focused on "avoiding vendor lock-in" in virtualization services. Apparently, people do not like being locked into a single vendor for services. Although, I would reckon that any of the virtualization companies involved with OpenStack would love to be in the situation VMware finds themselves in… at the forefront of the virtualization market. I would go out on a limb and state that VMware loves the success it finds, but it is still working on interoperability between virtualization providers and products. Look at the ANSI work that VMware has gone through for virtualization standards. Consider, additionally, the conscious decision to allow 3rd-party providers to interact with the vSphere environment (ex: offloaded VM antivirus, backups, monitoring, etc…). Hyper-V and XenServer do not have this level of flexibility. The VMware ecosystem is such that many other vendor products (ex: System Center Virtual Machine Manager) can manage it. The same cannot be said for the other vendors.
Surprisingly, what is bringing this to the forefront of the virtualization industry is the inclusion of Microsoft into the mix. Suddenly, a known closed-source company, and one of the most prolific distributors of vendor lock-in products, is trying to get in on the action. Prior to this, it could be said that OpenStack was primarily composed of open source hypervisor products. However, with the inclusion of the Hyper-V hypervisor, that model has changed.
The inclusion of Hyper-V really opens an interesting avenue of discussion, though. What is Microsoft’s intention here and how is it going about it? Are they going to open Hyper-V to the open source community? Will this run natively in the Linux environment the other hypervisors are running in or will any hosting provider need to operate a Windows environment to support this?
Supporting multiple hypervisors can and will lead to nightmares as each hypervisor product is being developed independently of one another. So, it is possible that one hypervisor is being developed at 2x the rate as the others. How are updates going to be handled? What if one requires different libraries than another?
What makes the big three virtualization vendors so useful is the inclusion of sophisticated and intuitive GUIs and APIs for management. However, management of the OpenStack environment appears to be restricted to the CLI of the host server. Right away, this will drive away any non-savvy customers. So, I doubt that customers will flock to the OpenStack environment en masse. Sure, some customers have the resources to handle this, but not enough to make a difference… Statement: yes. Impact: no.
As far as cloud infrastructure is concerned, what is wrong with vendor lock-in, really? The virtualization providers know how to interact with their own products and can drastically increase their functionality and power. Sticking with VMware ensures compatibility and functionality across products. What is the need to run virtual machines in a Hyper-V environment locally and UML in the cloud? While the servers could interact with each other, the management functions will differ significantly, and portability is drastically limited. Contrast this with using the VMware vCloud initiative in your local and public clouds. Management and APIs are identical regardless of the location of the virtual machines (either in public or private clouds). Portability is a non-issue because the virtual machines exist on a common platform.
My feeling is that this is very much about some lesser virtualization providers ganging up to try and beat VMware in the cloud infrastructure game. While the intent is great, the complexity is much greater, as there are more hypervisors to support and more potential for instability and abnormal product growth. In the end, this is more of a proof-of-concept project for what "the cloud should be like". However, the OpenStack environment and the other virtualization providers should take note of the VMware vCloud initiative, as it is showing how true cloud operation should work.
Installation Instructions: http://wiki.openstack.org/NovaInstall
So… what can be virtualized? Server Loads – Check. Workstations – Check. Applications – Check. Phones – On Deck.
One of the up and coming topics in the virtualization industry is how to properly virtualize phones.
Mobile phones are quickly becoming one of the most convenient and powerful computing devices in our everyday lives. Roughly 20% of the phones in the US are smart phones… with anticipated intersection with “feature” phones coming somewhere around the end of 2011.
It is not uncommon to find a smart phone with a full QWERTY keyboard, high resolution display, multiple GB of storage, WiFi, 3G/4G, a 1+ GHz processor, etc… Increase the display size, and we could be carrying a netbook in our pockets. Heck… some even have video out built in. Install a View client, connect a monitor, and you have a perfectly working VDI client. Talk about BYOPC (Bring Your Own PC). (see Citrix Nirvana Phone)
Citrix and VMware have really taken up the task of phone virtualization. Both have been working on prototypes of how they believe phone content should be handled.
Citrix: bare metal install
This is the most similar to what we see with ESXi and XenServer… the hypervisor (or microvisor, as it is being referred to (how cute!)) is installed onto the base hardware and the Phone OS is installed on top of the stack. For the virtualization engineers out there, this is a pretty standard concept.
This concept is being promoted as a way to allow a single device to handle personal needs as well as business needs… all the time, ensuring security between those dramatically different use cases.
VMware: hosted install (sits on top of installed phone OS) – Mobile Virtualization Platform (MVP)
This concept is similar to the VMware GSX or VMware Server 1.0/2.0 concept. A hypervisor is mounted inside of the installed phone OS and allows the phone to help manage the device resources.
This concept is being promoted as a way to allow any application to run on any platform (see Java theory: write once, run anywhere) as well as allowing security by isolating applications in their own little world.
Originally, the plan for MVP was to install onto the phone hardware itself. But, due to architecture changes, the decision was made to go to a hosted environment with the base phone OS being the “personal” and “insecure” level and the ability to add the “secure” and “trusted” corporate image on top.
Each company has made investments or acquisitions of companies that can really aid them in this new environment (Citrix: OK Labs; VMware: Trango).
So… each company has drawn a line in the sand. Where does this leave us going forward, though?
I can see some major advantages to both approaches to the implementation. And I see some faults.
VMware MVP (hosted)

Advantages:
- Allows the user to provide their own device and only installs a single application to get the corporate image installed… and uninstalled upon the employee leaving the company.
- Ability to merge the applications in both environments into a single menu system, with the corporate applications able to sit alongside the personal applications.
- Similar to application virtualization, this methodology will allow for calls, data, and other application isolation to ensure that data from one environment is not leaked elsewhere and that applications cannot pull personal information to send to outside entities.

Disadvantages:
- Host OS applications can consume resources that impact performance of virtualized image applications.
- VMware will need to be able to adapt to changing OS functions and drivers as the hardware and the OS change versions.

Citrix (bare metal)

Advantages:
- Completely separate and isolated phone OS environments.
- As long as Citrix can stay up to speed on the hardware of the phones, they do not need to worry about the drivers in the phone OS offerings. Instead, phone developers can rely on a single set of drivers for their phone stacks to sit upon.

Disadvantages:
- User needs to switch between running the personal and corporate OS images.
Roadblocks to the adoption of phone virtualization
- Phone OS companies are going to have to give in and see the advantages of phone virtualization to their overall business success, versus making licensing agreements with their hardware vendors (ex: LG and Microsoft, Motorola and Android, Nokia and Symbian).
- Phone OS images are going to need to become freely/readily available for Corporate IT departments or phone enthusiasts to customize and deploy. This includes the proper deployment tools and customization utilities.
- Corporate IT departments and management are going to need to determine some kind of security policy and device ownership policy that will allow them to consume the phone resources owned by the end user and place a corporate image on the device.
- The pool of users who can realistically take advantage of mobile phone virtualization in a corporate environment is potentially smaller due to the limited capabilities of current phone functions and offerings.
I feel like VMware has the upper hand in their implementation avenue for mobile phone virtualization. While the resource management that a bare-metal hypervisor provides is great… the mobile phone user values the experience over the resource management. Having a single set of applications that launch in the appropriate environment is more important, and users do not want to switch their phone from corporate mode to personal mode.
I can see major advantages for Android and Microsoft in the phone virtualization environment. Email is the killer application for phones right now… especially in the corporate environment. Microsoft Exchange is one of the key drivers in Corporate IT supporting one phone platform over another. Android is freely available, so the cost to IT departments of obtaining and customizing the software is fairly minimal, and it includes ActiveSync Exchange integration. Microsoft is one of the most dominant corporate technology providers in the world. They have such a massive user base that the inclusion of a CAL with Enterprise Agreements would increase their revenue while ensuring their OS remains relevant in the marketplace.
Phone OS providers like Apple and Blackberry may be left behind. Apple is way too concerned with experience and their image to allow their OS to run on hardware other than Apple-provided hardware. Which is alright… that is the image they portray, and they seem to like it. Blackberry is slowly becoming irrelevant… while they have their BES and BIS services, those require intermediary hosted or in-house services to connect with corporate email. Email security is the key to their continued usage, and that is becoming less and less of a differentiator as ActiveSync is licensed elsewhere and more security policies become available for it.
Perhaps the biggest surprises could be the Palm/HP PalmOS and Symbian. While Symbian is very popular outside of the US, its usage is still highly restricted to Nokia hardware. Uncoupling the OS from the Nokia hardware could definitely benefit Nokia, as it could be licensed and used on many more platforms! PalmOS is still undergoing an identity crisis. No one is sure where the OS is heading… especially now that HP, not known for its mobile phone prowess, is its owner. They could make the jump to being a virtualization-only platform, become free or lightly licensed, and see a major jump in adoption.
In addition to the phone OS vendors that need to line up for mobile phone virtualization to work, the applications and functions that phones provide are going to need to change. Email is not the only offering that will drive this functionality. The ease of developing specialized applications for each company will be key, and existing enterprise software providers will need to deliver mobile client offerings that provide at least a subset of their standard client functionality. Perhaps it is some fancy BI reporting, hooks into the corporate CRM system, a collaboration suite, IP telephony, a thin client, etc…
I love the direction this is heading. Again, VMware has the upper hand in their implementation methodology, despite the head start Citrix has right now. However, getting the OS vendors, application vendors, and Corporate IT to buy in is going to be key to making this work.