SAN and Compute All-In-One

Let’s be honest, x86-based compute in virtualization environments is pretty darn boring. A server has become the necessary evil required for enabling the coolness that is virtualization.

But, don’t let the boringness of servers fool you. VMware has enabled a new breed of hybrid servers that are both server AND storage all-in-one! This new paradigm adds some new methods and models for virtualization design and functionality.

Conceptually, the server boots into an ESXi environment and fires up a guest OS. This guest OS is the virtual storage appliance (VSA) and provides the storage for the local server. The guest makes use of VMDirectPath functionality to take control of a locally installed storage controller connected to the local disks. The result is that the VM can access the local disks directly while ESXi itself cannot. The local disks are now directly connected to the VM. How cool is that?!

Once the guest OS has the disks, the guest creates various storage options (block or file, object or RAID, etc.). The ESXi host is then configured to connect to IP storage provided by the guest. The first, typical reaction may be to wonder why anyone would add this level of complexity. For a standalone host with local storage (think ROBO), this may be a little overkill. But, the advantage comes into play when you consider flexibility and new functionality.

By moving control of local storage into the VM, more advanced functions can be performed. Local storage use by ESXi is fairly limited. The VSA, though, can use the storage a little more liberally.

Take Pivot3, for example. Their VDI and surveillance solutions make use of this storage technique. The vSTAC OS (the Pivot3 VSA) creates a RAID across the local disks. Yawn, right?! The coolness comes when multiple nodes are "connected". vSTAC OS instances on other Pivot3 servers combine and RAID across multiple hosts. Suddenly, local storage is combined with local storage from other hosts to create a big clustered pool of available storage! This clustered environment allows for added resiliency and performance, as the data is no longer restricted to the local host and is distributed to protect against local storage failures.

Once the vSTAC OS nodes connect their storage together, data is spread across all of the other nodes to immediately protect the data and enhance performance. A new node can be added in the future. Once the new node is added, the data is automatically rebalanced across all hosts to ensure proper protection and efficient usage of the storage. Dynamic add of storage and compute is fantastic!
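As a toy illustration of the rebalancing idea (this is just even re-striping arithmetic, not Pivot3's actual placement algorithm):

```shell
# Toy illustration: evenly re-striping 12 data blocks as nodes join.
# Not Pivot3's real algorithm -- just the intuition behind rebalancing.
blocks=12
for nodes in 3 4; do
  echo "nodes=$nodes -> $((blocks / nodes)) blocks per node"
done
```

Each existing node sheds a share of its data to the newcomer, which is also why both capacity and aggregate performance grow as nodes are added.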

The VSA VM can perform additional functions if desired (and developed as such) like: deduplication, replication, compression, etc…

Bill’s Stance

I love this type of innovation. There are many use cases for solutions like this. The Pivot3 solution has a lot of potential for success in their target markets. I have concerns about the selection of RAID versus object storage, though… but that is their decision. Traditional RAID5 systems suffer heavily from a disk failure and rebuild… the performance tanks until the failed disk has been replaced. In the event of a failure in the Pivot3 solution, the entire cluster may suffer until the offending disk has been replaced. But, with that said, I believe the benefits of the technique outweigh the potential performance hit.

This style architecture really bucks the trend of needing a separate SAN/NAS in addition to compute. Adding sophistication to the VSA component and introducing more SSD/Flash-based storage could create an interesting and valid competitor to traditional SAN/NAS solutions and breathes new life into boring servers.


BYO(a)D Reaction

The other day (Nov 16, 2011 to be exact), my fellow nerd and Tech Field Day delegate, Tom Hollingsworth, crafted a great blog post on the new movement in IT, and business in general… Bring Your Own (Apple) Device to work. If you have not read the post yet… you gotta check it out.

This is Tom. Ask him about NAT!

After reading the post, I had some thoughts come to mind that I just had to throw into a reaction post.

New Culture

As new generations of individuals grow up and mature, it is expected that cultural shifts will take place. What I do not understand is how a culture of technological availability has morphed into an expectation that an individual can bring anything into the corporate environment and expect to use it for their job.

Too many times, I am approached by users bringing their personal laptop into the office and wanting to know how to connect it to the internal network. Or, users that want to connect their iPhones to the network so they can use Spotify or YouTube without using their cellular data plans… as though the corporate infrastructure and services are there to do their bidding.

This developing culture assumes that everything that works in the outside world must work the same way in the corporate world. Their iPad can connect to GMail, so why not just connect it to the Exchange server?


What the user sees.                            What IT sees!

Unknown/Unowned Devices

The IT ecosystem is a carefully designed and tightly guarded world.

None shall pass!

Systems are selected carefully to ensure a proper balance between functionality, supportability, and stability. The discovery of an unknown device is enough to throw an IT professional into a fit of rage. The environment has been compromised in some fashion and there is potential to throw off the carefully designed balancing act.

The presence of an unknown device opens up a veritable Pandora’s Box and raises a huge red flag. Suddenly, the corporate environment is vulnerable to a machine or device that may be infested with trojans, riddled with viruses, granted access to corporate resources, and not managed by IT.

IT has been assigned a critical role in modern businesses… provide tools that enable the business to function. Traditionally, this included the workstation, network, monitors, servers, etc… With more people feeling as though it is acceptable to provide their own devices, who is responsible for supporting them? What happens when the “S” key breaks off or the monitor is too blue for their liking? When IT owns and manages a device, IT is responsible. When the user owns the device, but is using it in a corporate environment, the answer is much foggier. An IT person says the user is responsible. However, the true answer lies somewhere in the depths of politics and policy.

Unknown devices also introduce the loss of data control. The moment a user is allowed to bring in a USB drive, iPod, access GMail, or Dropbox, the data is no longer under any control of the company.

Corporate IT Adaptation


First and foremost, IT has a responsibility to the company to ensure the protection and function of corporate technological resources and systems.

However, with that said, IT needs to acknowledge the changing ways of technology. Anyone who has been in IT longer than 1 month knows that times have a way of changing and the minute you buy your phone, it is obsolete. That is the way of the world, and 42 is the ultimate answer to the ultimate question of life, the universe, and everything.


Is Google “Deep Thought”?

IT departments need to be cognizant of what exists in the marketplace, the impacts (both positive and negative) to overall productivity/security, and the long-term viability of those entities. A tablet, for example, may seem like a large phone (cough iPad cough). However, for an executive that spends more time meeting customers and reading email, it is a perfect tool to enable them to get their job done without needing a laptop… but how is it secured?

Security becomes one of the most important concerns for IT in a time when users expect to provide their own devices. NAC/NAP/port security ensures only authorized devices are allowed on the network. Remote technologies (application presentation (XenApp/RemoteApp) and VDI (View, XenDesktop)) allow users to interact with applications running on protected and trusted infrastructure from unknown endpoints. Proper backups, snapshotting, and antivirus on the server and storage side ensure data is consistent and recoverable in the event of a break in security.

Finally, IT needs to engage with the business to keep them abreast of concerns. Open dialogue with the business will help ensure technological expectations meet some sort of equilibrium between what IT feels is appropriate and what the business feels is necessary.

What do you really think, Bill?!

I wholeheartedly do not like the idea of users bringing in their own devices for business use. Maybe I am cruisin’ for a bruisin’ (politically speaking), but I see my environment as known and trusted. The introduction of a new device takes some planning and testing because I have a responsibility to the company to provide a stable and operational environment. The introduction of a Mac laptop into my environment is not smooth. Exchange and SharePoint support is so horrible that Mac users need to use a Fusion VM running Windows 7 to fully function.

However, while it is possible to be completely restrictive and be more like “The Man”, I feel that the best way to manage the user owned devices converging on my environment is more political.

– I encourage the business to adopt corporate policies addressing the need to not bring personal devices into the environment.

– I encourage the business to develop a stricter definition of who needs email outside of the office, partial compensation for use of personal devices OR providing a company owned and managed phone, and which devices are supported.

– I have an open and friendly dialogue with those users who approach IT for assistance with personal devices. Being honest and frank about not supporting devices, needing management approval, and being unsure of the functionality/operation of the device goes a long way.

I love the idea of new devices and new technology in the workplace. But, I want the introduction to be more structured and tested.


Tom – Thanks for the awesome post. Definitely food for thought and got my wheels spinning!

ESXi 5.0–1.5 Hour Boot Time During Upgrade

I have to say, I am quite shocked that I am on the tail end of waiting 1.5 hours for an ESXi 5.0 upgrade to complete booting. Seriously… 1.5 hours.

I have been waiting for some time to get some ESXi 5.0 awesomeness going on in my environment. vCenter has been sitting on v5 for some time and I have been deploying ESXi 5 in a couple stand-alone situations without any issues. So, now that I have more compute capacity in the data center, it is time to start rolling the remaining hosts to ESXi 5… or so I thought!

I downloaded ESXi 5.0.0 build 469512 a while back and have been using that on my deployments. So far, so good… until today. Update Manager configured with a baseline –> Attach –> Scan –> Remediate –> back to business. Surely, the Update Manager processes should take more time than the actual upgrade. About 30 minutes after starting the process, vCenter was showing that the remediation progress was a mere 22% complete and the host was unavailable. I used my RSA (IBM’s version of HP iLO or Dell DRAC) to connect to the console. Sure enough, it was stuck loading some kernel modules. About 20 minutes later, IT WAS STILL THERE!

Restarting the host did not resolve the issue. During the ESXi 5 load screen, pressing Alt + F12 loads the kernel messages. It turns out that iSCSI was having issues loading the datastores in an acceptable amount of time. I was seeing messages similar to:


A little research turned me onto the following knowledgebase article in VMware’s KB: ESXi 5.x boot delays when configured for Software iSCSI (KB2007108)

To quote:

This issue occurs because ESXi 5.0 attempts to connect to all configured or known targets from all configured software iSCSI portals. If a connection fails, ESXi 5.0 retries the connection 9 times. This can lead to a lengthy iSCSI discovery process, which increases the amount of time it takes to boot an ESXi 5.0 host.

So, I have 13 iSCSI datastores on that specific host and multiple iSCSI VMkernel ports (5). Calling the iSCSI discovery process "lengthy" is quite the understatement.
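To put some rough numbers on it (a back-of-the-envelope sketch, assuming discovery attempts every target from every portal, as the KB describes):

```shell
# Rough worst-case math for this host, per KB2007108's description.
targets=13   # iSCSI datastores on the host
portals=5    # iSCSI VMkernel ports (network portals)
retries=9    # retries per failed connection in ESXi 5.0 GA
echo "login attempts: $((targets * portals))"
echo "worst case with retries: $((targets * portals * retries))"
```

Sixty-five sessions to establish at boot, and potentially hundreds of attempts if connections fail, goes a long way toward explaining a 1.5-hour boot.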

The knowledgebase states that the resolution is applying ESXi 5.0 Express Patch 01. Fine. I can do that. And… there is a workaround described in the article that states you can reduce the number of targets and network portals. I guess that is a workaround… after you have already dealt with the issue and the ridiculously long boot.

Finally, to help mitigate the issue going forward, VMware has released a new .ISO to download that includes the patch. However, this is currently available in parallel with the buggy .ISO ON THE SAME PAGE! Seriously. Get this… the only way to determine which one to download is:


As a virtualization admin, I know that I am using the Software iSCSI initiator in ESXi. But, why should that even matter at all?! There is a serious flaw in the boot process in build 469512, and that image should be taken offline. Just because someone is not using Software iSCSI at the current time does not mean they are not going to in the future. So, if they download the faulty .ISO, they are hosed in the future. Sounds pretty crummy to me!

My Reaction

I am quite shocked that this made it out of the Q/A process at VMware in the first place. My environment is far from complex and I expect that my usage of the ESXi 5.0 hypervisor would be within any standard testing procedure. I try to keep my environment as vanilla as possible and as close to best practices as possible. 1.5 hours for a boot definitely should have been caught before release to the general public.

Additionally, providing the option to download the faulty ISO and the fixed ISO is a complete FAIL! As mentioned on the download page, this is a special circumstance due to the nature of the issue. I would expect that if this issue is as serious as the download page makes it out to be, the faulty ISO should no longer be available. There has to be a better way!


I have since patched the faulty ESXi 5.0 host to the latest/safest version, 504890, and boot times are back to acceptable. I will proceed with the remainder of the upgrades using the new .ISO and have deleted all references to the old version from my environment.

I have never run into an issue like this with a VMware product in my environment and I still have all the confidence in the world that VMware products are excellent. In the scheme of things, this is a bump in the road.

vSphere 5–PXE Installation Using vCenter Virtual Appliance

The release of vSphere 5 has a lot of little gems. One of which is the availability of a SLES-based vCenter virtual appliance. So, while that is really cool, there is another little nugget of joy waiting for you in the vCenter virtual appliance (‘VCVA’ for all the hip kids)… specifically, your own little PXE booting environment. The oh-so-wise developers decided to include the requisite DHCP daemon and TFTP daemon. So nice of you, VMware. Now, not only do you get a Linux-based vCenter, you also get the web client, a virtual appliance form, no requirement for SQL Server, and a PXE environment. Really, how can you go wrong?

The PXE environment components included with the VCVA are turned off and not configured by default. So, if you’re ready to configure your VCVA for PXE, time to roll up your sleeves, crack those knuckles, and get ready to get your hands dirty.

Before we get started, though, a little caution (and disclaimer so I can sleep better at night):

I know nothing about your environment. This setup will impact DHCP functionality on your network. Follow these instructions at your own risk and make the appropriate adjustments to work in your environment.
Additionally, I do not know everything about everything. So, you are going to need to rely upon your sleuthing abilities to help resolve issues that may arise.

These instructions assume some knowledge of CLI-based file editing (vi). So, please research how to use it if you are unsure.


A PXE environment via the VCVA requires the following components in your environment:
– DHCP server
– TFTP server
– HTTP server on the network (for hosting kickstart files – customization during installation)
– SYSLINUX (for pxelinux.0)
– ESXi 5.0 installation .ISO (perhaps you created one using my Image Builder tutorial)
– vCenter Virtual Appliance deployed
– Blank server to PXE boot and install ESXi 5.0 on (aka – the client)

For this exercise:
– Network:
– DHCP Range: – 254
– Default Gateway:

0 – Log into the appliance as ‘root’

1 – Configure DHCP

‘dhcpd‘ will listen for IP address requests, provide an IP to use, direct the client to the “next-server” to continue PXE booting, and tell it which file (filename) to download from that server.

  • cd /var/lib/dhcp/etc
  • cp -a dhcpd.conf dhcpd.conf.orig
  • vi dhcpd.conf

Once inside of the file, ensure the following exists:

ddns-update-style ad-hoc;
allow booting;
allow bootp;

#gPXE options
option space gpxe;
option gpxe-encap-opts code 175 = encapsulate gpxe;
option gpxe.bus-id code 177 = string;

class "pxeclients" {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
filename "pxelinux.0";
}
subnet netmask {

Save the file and exit (hint: :wq)

2 – Configure TFTP

TFTP services are provided by the ‘atftpd’ daemon.

  • cd /etc/sysconfig
  • cp -a atftpd atftpd.orig
  • vi atftpd

Once inside the file, adjust the "ATFTP_OPTIONS" line to read: "--daemon --user root". Typically, the atftpd daemon runs as ‘nobody’. However, the TFTP root (/tftpboot/) is configured as owned by the ‘root’ user.

Save and exit the file.

3 – Get the SYSLINUX packages on the server

There is one package missing to make the PXE installation process work: ‘pxelinux.0’. ‘pxelinux.0‘ is an executable that is downloaded by the client in order to properly continue the PXE process (aka – download the files, execute the installer, etc…). ‘pxelinux.0‘ is provided by the SYSLINUX package. In order for PXE to work properly with the ESXi 5.0 installation, SYSLINUX version 3.86 (or higher) is needed.

Note: you can use YUM or copy the files to the server another way if you’d like. Regardless, get the files there. This example will continue to use the /tmp directory as the landing area for the SYSLINUX files.

Copy the pxelinux.0 file to your TFTP root

  • cp /tmp/syslinux-3.86/core/pxelinux.0 /tftpboot

4 – Prep the TFTP root for PXE

The TFTP root configured on the VCVA is located at /tftpboot. We are going to need to get the directory structure built out to support PXE.

  • cd /tftpboot
  • mkdir esxi50

By adding a directory, we are able to organize the TFTP server and support additional versions of ESXi going forward.

5 – Get the ESXi 5.0 CD contents onto the server

Seeing as the VCVA is a virtual appliance, it is easy to get the contents of the installation media onto the server.

  • Mount the installation CD to the VCVA as a CD-ROM drive using the vSphere Client.
  • mount /dev/cdrom /media
  • cp -a /media/* /tftpboot/esxi50/
  • umount /dev/cdrom

6 – Configure PXELINUX

pxelinux is the utility that enables the PXE functionality. As mentioned before, pxelinux.0 is an executable that the client downloads. The executable provides functionality to parse a menu system, load kernels, options, customizations, modules, etc…, and boot the server. Since PXE can be used by multiple physical servers for multiple images, we need to configure pxelinux for this specific image.

  • cd /tftpboot
  • mkdir pxelinux.cfg
  • cd pxelinux.cfg

pxelinux.0 looks for configuration files in the TFTP:/pxelinux.cfg directory.

pxelinux looks for a number of configuration files, working from most specific down to a generic default value. This allows server administrators to define a file based on a complete MAC address, a partial match, or none at all to determine which image to boot from. Since this is the first configuration on the VCVA, we are going to configure a default. Do your research if you want to adjust this from the default value.
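To make that lookup order concrete, here is a small sketch of the names pxelinux requests (the MAC address and IP below are made up for illustration; newer pxelinux builds also try a UUID-based name first):

```shell
# Names pxelinux requests from TFTP:/pxelinux.cfg/, most to least specific.
mac="01-00-0c-29-ab-cd-ef"   # ARP type (01) plus the client MAC, dashes for colons
ip_hex="C0A801FE"            # client IP 192.168.1.254 as upper-case hex
echo "$mac"                  # 1) full MAC match
i=${#ip_hex}
while [ "$i" -gt 0 ]; do     # 2) progressively shorter IP-prefix names
  echo "$ip_hex" | cut -c1-"$i"
  i=$((i - 1))
done
echo "default"               # 3) the generic fallback configured in this step
```

The first file that exists on the TFTP server wins, which is why a lone ‘default’ file is enough to get started.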

The installation media contains a file called isolinux.cfg. We can use this as the basis for our file called ‘default’. Copy it from the installation media and start customizations:

  • cp -a /tftpboot/esxi50/isolinux.cfg default
  • chmod a+w default
  • vi default
    Ensure the appropriate lines match the following lines:

DEFAULT /esxi50/menu.c32
KERNEL /esxi50/mboot.c32
APPEND -c /esxi50/boot.cfg

Save and Exit

7 – Configure the Kickstart file

Using a kickstart file, we can configure ESXi 5.0 automatically during installation. This requires that a file be placed on a server that is available to the client.  Sadly, the HTTP areas on the VCVA are not readily available… and, they may be erased during future upgrades. So, we need to use an external HTTP server somewhere on your network. (Note: NFS and FTP are options as well).

Add the following contents:

# Accept the EULA
accepteula

#Set root password
rootpw supersecretpassword

#Install on first local disk
install --firstdisk --overwritevmfs

#Config initial network settings
network --bootproto=dhcp --device=vmnic0


In this example, we are saving the file to:

8 – Configure the installation files

The CD installation media for ESXi 5.0 assumes a single installation point. Thus, all the files are placed at the root of the image. However, since we want to actually organize our installation root, we added the ‘/tftpboot/esxi50‘ directory and copied the files into it. We need to adjust the installation files in /tftpboot/esxi50 to reflect the change.

  • cd /tftpboot/esxi50
  • cp -a boot.cfg boot.cfg.orig
  • vi boot.cfg
  • Using the following picture as reference, add “/esxi50” to the paths for ‘kernel’ and ‘modules’

Save and quit
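One loose end worth noting: the installer needs to be told where the kickstart file from step 7 lives, and the ‘kernelopt’ line in boot.cfg is where that happens. A sketch, using a scratch file and a placeholder server name (adjust for your HTTP server; on the VCVA the real file is /tftpboot/esxi50/boot.cfg):

```shell
# Demonstrated on a scratch copy so nothing real gets touched.
# The ks= URL below is a placeholder -- substitute your web server.
cfg=/tmp/boot.cfg.demo
printf 'title=Loading ESXi installer\nkernelopt=\n' > "$cfg"
sed -i 's|^kernelopt=.*|kernelopt=ks=http://webserver.example.com/esxi50/ks.cfg|' "$cfg"
grep '^kernelopt=' "$cfg"   # shows the installer's ks= pointer
rm -f "$cfg"
```

With that in place, the installer fetches the kickstart over HTTP instead of prompting for input.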

9 – Restart services to load the service configurations and configure them to start with the server

  • /etc/init.d/dhcpd restart
  • /etc/init.d/atftpd restart
  • chkconfig --add dhcpd
  • chkconfig --add atftpd


10 – Take a break

You made it this far… great job. At this time, we have configured DHCP, TFTP, and pxelinux, copied the installation media to the TFTP root, and configured the installation for our organizational purposes.

11 – Start your host and install away





[BELOW] Reading the Kickstart Script. No need to enter customization info anymore.


[BELOW] Checking the contents of the Kickstart file. You will see errors here if there are errors in the file.






Where are all the Apps?!

Alright… I bit the bullet. It should be no surprise that a geeky dude went out and purchased a new iPad 2. Maybe the shocking part is just how long it took me to get one. Priorities, priorities, priorities.

Since getting said tablet device, I have been doing what every new owner does… GET APPS! But, I am quite shocked by the selection of applications out there. The number of applications available in the App Store is staggering. (insert some values here). But, it is the applications that are missing that are really interesting.

– Skype — a Skype app exists for the iPhone and can be downloaded to run on the iPad. However, there is no version of the application native to the iPad.

– Facebook — first off, I gotta say that I am not too keen on utilizing Facebook for much of anything. So, I was unaware of the missing app for the iPad. The situation was brought to my attention via a Tweet from @matthickson: ‘Facebook app for iPad reportedly coming ‘in weeks”. Sho’nuff, an app from Facebook, Inc. only exists for the iPhone platform.

– Qik — Qik provides a very popular video chat service for the iPhone platform. However, there is no option for the iPad. On top of that, Qik appears to have been purchased by Skype. So, I guess it should come as no surprise that it does not have an iPad version yet.

The lack of these apps for the iPad is of little consequence to me. I have made it this far without having them installed and I am not concerned about their timeline. But, the bigger issue is that companies, such as Skype and Facebook, are making little effort to take advantage of what is considered to be the dominant tablet platform on the market to date. Looking through the App Store, there is no shortage of what I would consider to be crap-ware. If the world of developers can throw together an ecosystem of applications numbering in the thousands, how can the major service vendors not have brought anything to the table yet?!

Tablet devices continue to be a nice-to-have and not a need-to-have. They represent a kitschy technology product, and their long-term usage and viability are yet to be seen. So, I cannot fault the vendors for focusing on the mobile phone platforms first. But, passing on the market leader just does not make sense to me.

I/O Virtualization Redux (c/o Virtensys)

Alright… back in the day, I posted an article regarding a startup in San Jose, Aprius. You can find that post here:

Without reciting everything in the post, the single sentence that encapsulates my feelings about their technology and approach was:

The biggest problem with the technology and the company direction is that there is no clear use case for this.

Until recently, I stood behind this statement. And, honestly, as it pertains to the Aprius approach I was presented, I still do. I am sure there are Aprius, Xsigo, Virtensys, and other I/O virtualization vendors/customers that dispute the statement. (note: I welcome comments!)

However, I believe I have found a great use case for I/O virtualization thanks to a presentation from Virtensys during the most recent Portland VMUG meeting.

The use case that was presented was huge, and I see it being beneficial to all sorts of environments (SMB, SME, Enterprise, Healthcare, Education, Research, etc…).

The Virtensys product utilizes PCIe extension cards in the PCIe slots on servers. Those cards connect to the Virtensys product. For this example, we will assume the Virtensys solution is configured to share 10Gb NICs with the servers. If your virtualization servers are sharing the 10Gb NIC through the Virtensys product, all network traffic is routed through the Virtensys solution. However, if a virtual machine on one server is trying to communicate with a virtual machine on another server, and those servers are sharing the same NICs in Virtensys’ solution, the communications happen at PCIe speed, not NIC speed! Additionally, the traffic never hits the standard physical network layer.

The PCIe 2.0 standard allows for 500MB/s over a single lane (minus some overhead). So, a single lane can handle roughly 4Gb/s. A full 32-lane connection can handle 16GB/s (or 128Gb/s). Now, these are all theoretical values, and some level of overhead and contention may need to be accounted for. But, the takeaway is that the PCIe bus is quicker than the 10Gb NIC that is being shared.
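Checking those numbers (per direction, ignoring protocol overhead; 500MB/s per lane is the usable PCIe 2.0 figure after encoding):

```shell
# PCIe 2.0 back-of-the-envelope, per direction, ignoring overhead.
per_lane_MB=500   # MB/s per lane
echo "1 lane:   $((per_lane_MB * 8 / 1000)) Gb/s"
echo "32 lanes: $((per_lane_MB * 32 / 1000)) GB/s"
echo "32 lanes: $((per_lane_MB * 32 * 8 / 1000)) Gb/s"
```

Even a handful of lanes comfortably exceeds a shared 10Gb NIC, which is the whole point of the use case.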

Use Case!!!: By utilizing the Virtensys solution, your network traffic is no longer hitting the physical network and can be transmitted at PCIe speeds!

I will be the first to admit that I am not entirely keen on the other offerings that exist (again, I like comments!). I would like to think that this same use case exists for the other I/O virtualization vendors out there. Assuming it does, I can see the I/O virtualization products being adopted by companies that can benefit from the higher network throughput that is allowed. Assuming this is specific to Virtensys and you have needs for higher network throughput, you may want to check these guys out.


Virtensys was a presenter at the Portland VMUG meeting in May 2011, of which I was the principal organizer. I am under no obligation to include them in any personal blogging I undertake (which this qualifies as). Virtensys provided the presentation that opened my eyes to the use case supplied above.


Who Knew VMware Would Be Such A Good Farmer?!

Start with a plot of land, plow it up, drop some seeds, water, care, and in no time flat, you have a viable farm. Alright… any farmer out there knows there is much more to it than that. Early mornings, mechanical breakdowns, pests, critters, varmints, etc… But, when everything is said and done correctly, that plot of land turns into something special.

The more and more I think about it, VMware is a farmer. No doubt they have created something very special. The ecosystem surrounding their foundation product, the hypervisor, is rich, fertile, and ready for harvesting.

Land = Hypervisor

There is no question that VMware has provided the most market-dominant virtualization layer out there. Continued development has yielded a smaller, more efficient, and more feature-rich solution that we all see benefits from. Collaboration between VMware and core hardware vendors (network, storage, processing, and memory) has resulted in a product that provides functionality we all take for granted and rely upon heavily.

Seeds = APIs

Starting with v3.5, VMware really started a push for integrations with 3rd party vendors. This effort shot through the roof with the vSphere platform (v4.0). The theory being that if VMware opens up various APIs to the community, the community will come up with some wicked cool add-ons that provide functionality above and beyond what VMware could provide. So, rather than keep a closed and controlled environment, VMware enhanced their market presence by allowing other people to play with their toys.

This effort has turned small companies into virtualization powerhouses and given larger companies areas to expand their expertise and existing technologies. Think about it:

  • Veeam
  • Xangati
  • Vizioncore (now a part of Quest Software)
  • Hyper9 (now a part of Solarwinds)
  • TrendMicro
  • Symantec
  • Pano
  • Thinstall
  • SpringSource
  • Zimbra
  • Integrien
  • TriCypher
  • SlideRocket
  • Shavlik

And the list goes on and on. The API availability has fostered so many new business opportunities for the companies on the list above and the countless others that are not listed.

Water/Care = Vibrant Community

VMware has fostered an incredibly active community surrounding their company.

  • Active local VMware User Groups
  • Social Media – The one and only, Mr. John Troyer (@jtroyer for you Twitter folk out there) is the ring leader behind this. He has become the face of VMware for so many of us out there. Just listen to the VMware Communities Roundtable podcast, sit at the social media area at VMworld, or follow his Twitter account and you will see what I mean.
  • vExperts – Those individuals that go above and beyond to promote the VMware technologies and brand.
  • VMworld conferences

Ultimately, there is a great customer/vendor community that loves VMware and what they do.

How is VMware taking advantage of their farm?

The obvious answer is that they are taking all of this to the bank. Sure enough, they are.


Their stock is doing well. Q1 results show great growth, especially when compared to Q1 of 2010:

But, aside from the financials, VMware is able to harvest some amazing technologies and people from the farm. Remember the list of companies waaaayyyy up at the top of the blog post? VMware has identified some level of value and innovation in some of the companies and decided to purchase them and bring the technologies in-house. Of the entries in the list, the following were acquired by VMware and integrated into their product offering in one way or another:
    • Thinstall  – Now known as ThinApp
    • SpringSource
    • Zimbra
    • Integrien – vCenter Operations Standard/Advanced/Enterprise
    • TriCypher
    • SlideRocket
    • Shavlik – VMware Go!

By bringing these companies in house, VMware is able to expand their offering AND force the other companies to innovate more. Acquisitions are a way to level-set the 3rd party community and provide a new avenue for growth.


Take a hypervisor, sprinkle some APIs, apply and maintain a vibrant community, and you get what VMware is today…