
ESXi 5.0–1.5 Hour Boot Time During Upgrade

November 14, 2011

I have to say, I am quite shocked that I am on the tail end of waiting 1.5 hours for an ESXi 5.0 upgrade to complete booting. Seriously… 1.5 hours.

I have been waiting for some time to get some ESXi 5.0 awesomeness going on in my environment. vCenter has been sitting on v5 for some time and I have been deploying ESXi 5 in a couple stand-alone situations without any issues. So, now that I have more compute capacity in the data center, it is time to start rolling the remaining hosts to ESXi 5… or so I thought!

I downloaded ESXi 5.0.0 build 469512 a while back and have been using that on my deployments. So far, so good… until today. Update Manager configured with a baseline –> Attach –> Scan –> Remediate –> back to business. Surely, the Update Manager processing should take more time than the actual upgrade itself. About 30 minutes after starting the process, vCenter was showing that the remediation progress was a mere 22% complete and the host was unavailable. I used my RSA (IBM’s version of HP iLO or Dell DRAC) to connect to the console. Sure enough, it was stuck loading some kernel modules. About 20 minutes later IT WAS STILL THERE!

Restarting the host did not resolve the issue. During the ESXi 5 load screen, pressing Alt + F12 loads the kernel messages. It turns out that iSCSI was having issues loading the datastores in an acceptable amount of time. I was seeing messages similar to:


A little research turned me onto the following article in VMware’s knowledgebase: ESXi 5.x boot delays when configured for Software iSCSI (KB2007108)

To quote:

This issue occurs because ESXi 5.0 attempts to connect to all configured or known targets from all configured software iSCSI portals. If a connection fails, ESXi 5.0 retries the connection 9 times. This can lead to a lengthy iSCSI discovery process, which increases the amount of time it takes to boot an ESXi 5.0 host.

So, I have 13 iSCSI datastores on that specific host and five iSCSI VMkernel ports. Calling the iSCSI discovery process “lengthy” is quite the understatement.
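A little back-of-the-napkin math shows why. The multiplication below is my own worst-case sketch, assuming every target/portal pair has to burn through its retries, not VMware's stated algorithm:

```shell
targets=13    # iSCSI datastores (targets) on this host
portals=5     # iSCSI VMkernel ports
retries=9     # retries per failed connection, per the KB
echo $((targets * portals * retries))   # → 585 connection attempts to time out
```

Even if only a fraction of those paths have to fail, the boot-time discovery adds up fast.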

The knowledgebase states that the resolution is applying ESXi 5.0 Express Patch 01. Fine. I can do that. And… there is a workaround described in the article: reduce the number of targets and network portals. I guess that is a workaround… after you have already dealt with the issue and the ridiculously long boot.

Finally, to help mitigate the issue going forward, VMware has released a new .ISO that includes the patch. However, it is currently available in parallel with the buggy .ISO ON THE SAME PAGE! Seriously. Get this… the only way to determine which one to download is:


As a virtualization admin, I know that I am using the Software iSCSI initiator in ESXi. But why should that even matter?! There is a serious flaw in the boot process in build 469512, and that image should be taken offline. Just because someone is not using Software iSCSI right now does not mean they will not in the future. So, if they download the faulty .ISO, they are hosed down the road. Sounds pretty crummy to me!

My Reaction

I am quite shocked that this made it out of the Q/A process at VMware in the first place. My environment is far from complex and I expect that my usage of the ESXi 5.0 hypervisor would be within any standard testing procedure. I try to keep my environment as vanilla as possible and as close to best practices as possible. 1.5 hours for a boot definitely should have been caught before release to the general public.

Additionally, providing the option to download the faulty ISO and the fixed ISO is a complete FAIL! As mentioned on the download page, this is a special circumstance due to the nature of the issue. I would expect that if this issue is as serious as the download page makes it out to be, the faulty ISO should no longer be available. There has to be a better way!


I have since patched the faulty ESXi 5.0 host to the latest/safest build, 504890, and boot times are back to acceptable. I will proceed with the remainder of the upgrades using the new .ISO and have deleted all references to the old version from my environment.

I have never run into an issue like this with a VMware product in my environment and I still have all the confidence in the world that VMware products are excellent. In the scheme of things, this is a bump in the road.

VMware Virtual Datacenter

August 29, 2011

Sunday evening, many of the vExpert award recipients converged in the Casanova 503 room at the Venetian for the VMworld 2011 vExpert meeting. The mingling, meeting, and networking were fantastic.

However, there was one topic of significant discussion that really got my wheels spinning. While we were asked not to go into detail about what was said by VMware (proper), we are all familiar with the concept… the Virtual Datacenter.

It should be no surprise that VMware has been walking us down the path of virtualizing our datacenter components. Servers, storage, networking… the entire stack. All in an effort to create this nebulous “Virtual Datacenter”. But, what is the virtual datacenter and how do we get there? Well… if I had the answer, I would probably be working for VMware… right?!

Conceptually, the virtual datacenter is composed of increasingly commoditized resources. x86 compute resources are readily available at minimal cost. Auto-tiering storage is becoming more and more prevalent to help mitigate IO performance bottlenecks. 10Gb networking, and other high-bandwidth connections, provide the ever-so-necessary links to the network and network-based storage. By abstracting these resources, the virtual administrator is no longer tasked with managing them individually.

The fact of the matter, though, is that in many environments, management of these resources still exists. We need the network guys to maintain the network, the storage guys to handle the storage, and the server guys to handle the server hardware and connections to systemic resources.

The point remains: the virtual datacenter still needs management from different facets of the IT house.

My view of the virtual datacenter is creation of a system where network, storage, and servers are all managed at a single point. We are seeing this come to fruition in the Cisco UCS, vBlock, and other single SKU solutions. That is a fantastic model. However, it targets a different market.

My dream virtual datacenter manages everything itself.

  • Need more storage, just add a hard drive. The datacenter handles data management and availability. Seriously, just walk over and add a hard drive or add another storage node to the rack.
  • Need more network bandwidth, hot-add more pNICs. The datacenter handles properly spreading the data across available links, NIC failures, etc…
  • Need more compute resources, add a new server to the rack. The datacenter handles joining the server to the available compute resources.
  • Need external resources, just point the datacenter towards a public provider and let the datacenter manage the resources.

Creating the foundation to make this work relies on all parties involved allowing the datacenter to configure and manage everything. Storage vendors need to allow the datacenter to handle array configuration and management. Network vendors need to allow the datacenter to configure trunks, link aggregation, bandwidth control, etc… Systems vendors need to allow the datacenter to jump into the boot process, grab the hardware, and automate configuration.

Pie in the sky, right? Existing technologies seem to allude to more elegant management that would lend itself kindly to such a model. VMware, as the datacenter enabler, would need to step up to the plate and take initiative and ownership of managing those resources… from RAID configurations to VLAN trunking on switches.

Seriously… walking up, adding new physical resources (or extending to a public provider for more), and having them become magically available would be fantastic.

So… that is my vision for where I would like to see the virtual datacenter. VMware, let me know if you want to talk about this in more detail. I am sure we can work something out!

vSphere 5–PXE Installation Using vCenter Virtual Appliance

August 26, 2011

The release of vSphere 5 has a lot of little gems. One of which is the availability of a SLES-based vCenter virtual appliance. So, while that is really cool, there is another little nugget of joy waiting for you in the vCenter virtual appliance (‘VCVA’ for all the hip kids)… specifically, your own little PXE booting environment. The oh-so-wise developers decided to include the requisite DHCP daemon and TFTP daemon. So nice of you, VMware. Now, not only do you get a Linux-based vCenter, you also get the web client, a virtual appliance form factor, no requirement for SQL Server, and a PXE environment. Really, how can you go wrong?

The PXE environment components included with the VCVA are turned off and unconfigured by default. So, if you’re ready to configure your VCVA for PXE, time to roll up your sleeves, crack those knuckles, and get ready to get your hands dirty.

Before we get started, though, a little caution (and disclaimer so I can sleep better at night):

I know nothing about your environment. You are following these instructions at your own risk. This setup will impact DHCP functionality on your network. Follow these instructions at your own risk and make the appropriate adjustments to work in your environment.
Additionally, I do not know everything about everything. So, you are going to need to rely upon your sleuthing abilities to help resolve issues that may arise.

These instructions assume some knowledge of CLI-based file editing (vi). So, please research how to use it if you are unsure.


A PXE environment via the VCVA requires the following components in your environment:
– DHCP server
– TFTP server
– HTTP server on the network (for hosting kickstart files – customization during installation; NFS and FTP work as well)
– SYSLINUX (for pxelinux.0)
– vCenter Virtual Appliance deployed
– ESXi 5.0 installation .ISO (perhaps one you created using my Image Builder tutorial)
– Blank server to PXE boot and install ESXi 5.0 on (aka – the client)

For this exercise:
– Network:
– DHCP Range: – 254
– Default Gateway:

0 – Log into the appliance as ‘root’

1 – Configure DHCP

‘dhcpd’ listens for IP address requests, provides an IP to use, directs the client to the “next-server” to continue PXE booting, and tells it which file (filename) to download from that server.

  • cd /var/lib/dhcp/etc
  • cp -a dhcpd.conf dhcpd.conf.orig
  • vi dhcpd.conf

Once inside of the file, ensure the following exists:

ddns-update-style ad-hoc;
allow booting;
allow bootp;

# gPXE options
option space gpxe;
option gpxe-encap-opts code 175 = encapsulate gpxe;
option gpxe.bus-id code 177 = string;

class "pxeclients" {
  match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
  filename "pxelinux.0";
}

subnet <your network> netmask <your netmask> {
  # your range, routers, and next-server (the VCVA IP) declarations go here
}
Save the file and exit (hint: :wq)

2 – Configure TFTP

TFTP services are provided by the ‘atftpd’ daemon

  • cd /etc/sysconfig
  • cp -a atftpd atftpd.orig
  • vi atftpd

Once inside the file, adjust the “ATFTPD_OPTIONS” line to read: “--daemon --user root”. Typically, the atftpd daemon runs as ‘nobody’. However, the TFTP root (/tftpboot/) is owned by the ‘root’ user, so the daemon needs to run as root.

Save and exit the file.

3 – Get the SYSLINUX packages on the server

There is one package missing to make the PXE installation process work: ‘pxelinux.0’. ‘pxelinux.0’ is an executable that is downloaded by the client in order to properly continue the PXE process (aka – download the files, execute the installer, etc…). ‘pxelinux.0’ is provided by the SYSLINUX package. In order for PXE to work properly with the ESXi 5.0 installation, SYSLINUX version 3.86 (or higher) is needed.

Note: you can use YUM or copy the files to the server another way if you’d like. Regardless, get the files there. This example will continue to use the /tmp directory as the landing area for the SYSLINUX files.

Copy the pxelinux.0 file to your TFTP root

  • cp /tmp/syslinux-3.86/core/pxelinux.0 /tftpboot

4 – Prep the TFTP root for PXE

The TFTP root configured on the VCVA is located at /tftpboot. We are going to need to get the directory structure built out to support PXE.

  • cd /tftpboot
  • mkdir esxi50

By adding a directory, we are able to organize the TFTP server and support additional versions of ESXi going forward.

5 – Get the ESXi 5.0 CD contents onto the server

Seeing as the VCVA is a virtual appliance, it is easy to get the contents of the installation media onto the server.

  • Mount the installation CD to the VCVA as a CD-ROM drive using the vSphere Client.
  • mount /dev/cdrom /media
  • cp -a /media/* /tftpboot/esxi50/
  • umount /dev/cdrom

6 – Configure PXELINUX

pxelinux is the utility that enables the PXE functionality. As mentioned before, pxelinux.0 is an executable that the client downloads. The executable provides functionality to parse a menu system, load kernels, options, customizations, modules, etc…, and boot the server. Since PXE can be used by multiple physical servers for multiple images, we need to configure pxelinux for this specific image.

  • cd /tftpboot
  • mkdir pxelinux.cfg
  • cd pxelinux.cfg

pxelinux.0 looks for configuration files in the TFTP:/pxelinux.cfg directory.

pxelinux looks for a large number of configuration files, from most specific to most generic. This allows server administrators to define a file based on a complete MAC address, a partial match, or nothing at all to determine which image to boot. Since this is the first configuration on the VCVA, we are going to configure a ‘default’ file. Do your research if you want to adjust this from the default value.
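For reference, the lookup order looks roughly like this. The MAC and IP values below are made up for illustration; pxelinux works down the list until it finds a file:

```
pxelinux.cfg/01-00-50-56-aa-bb-cc   (client MAC address, prefixed with the ARP hardware type "01-")
pxelinux.cfg/C0A80A14               (client IP in uppercase hex, here 192.168.10.20...)
pxelinux.cfg/C0A80A1                (...then progressively shortened)
pxelinux.cfg/default                (the catch-all we are about to create)
```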

The installation media contains a file called isolinux.cfg. We can use this as the basis for our file called ‘default’. Copy it from the installation media and start customizations:

  • cp -a /tftpboot/esxi50/isolinux.cfg default
  • chmod a+w default
  • vi default
    Ensure the appropriate lines match the following lines:

DEFAULT /esxi50/menu.c32
KERNEL /esxi50/mboot.c32
APPEND -c /esxi50/boot.cfg

Save and Exit

7 – Configure the Kickstart file

Using a kickstart file, we can configure ESXi 5.0 automatically during installation. This requires that a file be placed on a server that is available to the client.  Sadly, the HTTP areas on the VCVA are not readily available… and, they may be erased during future upgrades. So, we need to use an external HTTP server somewhere on your network. (Note: NFS and FTP are options as well).

Add the following contents:

# Accept the EULA
vmaccepteula

#Set root password
rootpw supersecretpassword

#Install on first local disk
install --firstdisk --overwritevmfs

#Config initial network settings
network --bootproto=dhcp --device=vmnic0


In this example, we are saving the file to:

8 – Configure the installation files

The CD installation media for ESXi 5.0 assumes a single installation point. Thus, all the files are placed at the root of the image. However, since we want to actually organize our installation root, we added the ‘/tftpboot/esxi50‘ directory and copied the files into it. We need to adjust the installation files in /tftpboot/esxi50 to reflect the change.

  • cd /tftpboot/esxi50
  • cp -a boot.cfg boot.cfg.orig
  • vi boot.cfg
  • Add “/esxi50” to the paths on the ‘kernel’ and ‘modules’ lines

Save and quit
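If you would rather script the path change, a sed pass like the one below works on the appliance. The boot.cfg contents here are abbreviated stand-ins; your real file lists the actual kernel and module names from the 5.0 media:

```shell
# Stand-in boot.cfg -- only the path prefixing matters for this example.
cat > /tmp/boot.cfg <<'EOF'
kernel=/tboot.b00
modules=/b.b00 --- /useropts.gz --- /k.b00
EOF

# Prefix every path with /esxi50 so the PXE client looks in our
# subdirectory instead of the TFTP root:
sed -i -e 's#/#/esxi50/#g' /tmp/boot.cfg

cat /tmp/boot.cfg
# kernel=/esxi50/tboot.b00
# modules=/esxi50/b.b00 --- /esxi50/useropts.gz --- /esxi50/k.b00
```

Run it against /tftpboot/esxi50/boot.cfg once you are happy with the result on a scratch copy.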

9 – Restart services to load the service configurations and configure to start with server

  • /etc/init.d/dhcpd restart
  • /etc/init.d/atftpd restart
  • chkconfig --add dhcpd
  • chkconfig --add atftpd


10 – Take a break

    You made it this far… great job. At this time, we have configured DHCP, TFTP, pxelinux, copied installation media to the TFTP root, and configured the installation for our organizational purposes.

11 – Start your host and install away





[BELOW] Reading the Kickstart Script. No need to enter customization info anymore.


[BELOW] Checking contents of Kickstart file. You will see errors here if there are errors in the file.






VMware vSphere 5–Using Image Builder For Custom Installation

August 19, 2011

Hard to believe the vSphere 5 release is coming down the pipe. In anticipation of the official availability of the bits and of VMworld 2011, I thought it would be a great idea for everyone to get their house in order and prep for deploying vSphere 5.

One of the cool new features of the vSphere 5 release is the inclusion of new PowerCLI functions called Image Builder.

Image Builder allows VMware Admins to customize their installation media by adding and removing components. These components, called vSphere Installation Bundles (VIBs), comprise the base image, drivers, CIM providers, and the other components necessary to make vSphere go-’round. Plus, 3rd-party vendors can release VIBs in the future for new devices, providers, or whatever (can someone make a Minesweeper VIB?). This results in:

  1. VMware not needing to keep updating just to add code for new devices.
  2. VMware Admins no longer need to kludge through cramming the driver support for 3rd party products using Linux-based utilities and concepts (although, good job for knowing how to do it)
  3. VMware Admins can create a single custom installation with the appropriate drivers without having to install ESXi on a host and immediately patch to add the components.

As mentioned above, Image Builder is included with the latest and greatest version of the PowerCLI utilities… well… the latest and greatest vSphere 5 PowerCLI utilities. So, don’t rush out and download right now.

Note: When installing the new PowerCLI for vSphere 5 over an existing PowerCLI installation, you may find that the Image Builder cmdlets do not appear to be available. If this is the case, be sure to uninstall ALL PowerCLI installations on your workstation prior to installing the new PowerCLI. I ran into this problem during the installation of the pre-release bits and it drove me crazy. Heck, why not just uninstall first to be on the safe side?!

Image Builder introduces three new terms to our VMware verbiage

  1. VIB – (as mentioned above) bundles of files that can comprise any base image, driver, CIM provider, or another component. VIBs are certified by VMware and fit a very specific format.
  2. Depot – A location where Image Builder can find installation components (aka – an offline bundle). An offline bundle is just a .zip file containing the installation files for a specific version of ESXi. These can be downloaded from the vSphere download page (typically, you are provided with the option of a .iso or .zip download of the media – the .zip is the offline bundle/depot). However, a depot can also be a URL to an offline bundle!!! During Image Builder sessions, multiple depots can be added to a session.
  3. Profile – A profile is the entity that comprises the image you are working with. Offline Bundles contain multiple profiles that can be used as a basis to copy. The profile, essentially, tells Image Builder which components to pack into a custom installation.
Finally, Image Builder understands that creating a custom image does not just involve adding and removing VIBs. Rather, you also need some way to get the custom image out in a usable format. Image Builder allows for the export of the custom image to an offline bundle (.zip) or a usable CD/DVD image (.iso). The offline bundle can, in turn, be used as a depot for future Image Builder sessions.

So, now that we know what Image Builder does and some new terminology, let’s get down and dirty with creating a new Image Builder custom installation!


  • Start up PowerCLI


  • Connect to a depot
    • This example will use a locally saved .zip file.
    • Command: Add-EsxSoftwareDepot -DepotUrl C:\Downloads\VMware\Depot\


  • The offline bundle contains a number of profiles. These profiles are read-only and cannot be edited. However, that does not mean a profile cannot be copied and the copy customized!
  • Get a list of the available depot profiles:
    • Command: Get-EsxImageProfile


    • As you can see, we have two profiles: ESXi-5.0.0-381646-no-tools and ESXi-5.0.0-381646-standard.
  • Create a copy of a profile
    • Command: New-EsxImageProfile -CloneProfile ESXi-5.0.0-381646-standard -Name "Custom_vSphere5_Installation"


  • Now that the profile has been copied, it is time to wreck… er, create a new custom installation. First, let’s check on which components are included in the depot added earlier.
    • Command: Get-ESXSoftwarePackage


    • Note: This will load all software packages for all depots loaded in the session.
    • Each of the packages listed are VIBs! (NEAT!!!)
  • At this point, the question becomes: What is it about the default installation that you do not like? Are you missing some drivers/VIBs? In most instances, you are going to be missing some VIBs. However, there may be a need to remove a VIB for some reason. In the next step, we will be removing a VIB from the custom profile.
    • Note: You are the master of your universe. This example only shows you how to do something. I do not suggest removing the VIB from the custom profile unless you know you need to. If you remove the VIB and screw up your environment, you only have yourself to blame because you are the master (right?!).
    • Note: The availability of 3rd-party VIBs prior to the vSphere 5 release is up to the 3rd parties themselves. I do not have a connection to a 3rd party that could provide a VIB (wamp wamp wamp). So, I will include the command to add one. Once a VIB is available to me, I will update the post.
    • Command (Add a VIB): Add-EsxSoftwarePackage -ImageProfile Custom_vSphere5_Installation -SoftwarePackage <VIB name>
    • Command (Remove a VIB):  Remove-EsxSoftwarePackage -ImageProfile
      Custom_vSphere5_Installation -SoftwarePackage sata-sata-promise


  • Alright, we have a depot, copied an existing ImageProfile, and messed with the clone so it looks like we want it to. Now, we need to get the profile in some form that we can do something with. How about exporting it?! Fantastic idea. Let’s do it!
  • The customized profiles need to be exported in a format that can be used for installation. Otherwise, you just wasted precious time and bandwidth on something that just dead-ended. Recall that we have 2 options for exporting:
    • ISO – Traditional disk image. These are burned onto CD/DVD media and ESXi can be installed.
    • ZIP – These can be stored on network locations and used for PXE installations, VUM upgrades, and a basis for future Image Builder customizations.
  • Exporting as a .ZIP or .ISO is as simple as changing a value and extension in the PowerCLI command:
    • Command (ISO): Export-EsxImageProfile -ImageProfile Custom_vSphere5_Installation -FilePath C:\downloads\vmware\depot\Custom_vSphere5_Installation.iso -ExportToIso
    • Command (ZIP): Export-EsxImageProfile -ImageProfile Custom_vSphere5_Installation -FilePath C:\downloads\vmware\depot\Custom_vSphere5_Installation.zip -ExportToBundle


  • Recall that earlier we removed the ‘sata-sata-promise’ VIB from our customized installation media (I would suggest going back a little bit in the post to refresh your memory). This is a great time to make sure it was removed.
    • Browse to the .zip location in Windows Explorer and open the .zip file.
    • Browse to the ‘vib20’ directory.


    • Look around for ‘sata-sata-promise’ VIB. Can you find it?


    • Nope! It’s not there! Talk about customization!

At this point, you have viable installation media to streamline your installations and save you time and headaches.

Thanks VMware for the awesome utility. Happy customizations

VMware vSphere 5 Licensure–Take 2!

August 3, 2011

The release of the vSphere 5 products on July 12th was amazing. Some very cool new features and updates to existing ones! However, all of the new features were tossed by the wayside in favor of focusing on the changes in licensure.

While there is not necessarily a need to go into major detail… a refresher should suffice…

Previously, VMware had used a license that restricted servers to 6-cores per socket. With new processor technology coming out and a need to remove themselves from that equation, the decision was made to drop the core limit on processors and move to something else. What that something else should be was a topic of discussion internally. Mathematically, the next-best option for customers and VMware was the concept of vRAM pooling.

vRAM pooling takes how much virtual RAM is assigned to your virtual machines into consideration versus how much is physically installed. The vRAM pools were based on the editions of vSphere running in your environment and pooled together. If multiple sites had Enterprise licensure, all of the RAM was pooled together in an Enterprise vRAM pool. The licensure allotment was spread across all Enterprise hosts, regardless of site.

The issue was that each edition of vSphere was really lacking in how much vRAM was allowed. The previous thought of high server consolidation by using memory-dense physical hosts was now being challenged. Licensure costs for immediate upgrades and future projects were being questioned.

VMware had done the math, and it really should only affect roughly 4% of the customer base. But the forward-looking questions still remained… and the impact to VMware was looking bad.

Those of you who work closely with VMware in one fashion or another know that VMware actually cares about customer experience and empowering customers to adopt their products. While the math penciled out for 96% of the customers, the reaction was poor and VMware recognized a need to resolve the issue…

Which brings us to the announcement from today. VMware has adjusted the licensure scheme as a response to the feedback and needs of the customers. This is the low-down:

vRAM Entitlement Changes


As you can see, the Enterprise and Enterprise Plus licensure has been doubled while the Standard, Essentials, and Essentials Plus have been increased by 1/3. The Free Edition has been quadrupled from 8GB to 32GB.

I was very pleased to see the changes above. A 48GB vRAM entitlement for the Enterprise Plus license was a little tough to swallow. Servers these days can hold a ridiculous amount of RAM. So, to think that the Enterprise Plus licensure was only 2x better than Standard was a little off-putting. However, by moving to a 96GB entitlement, those higher-consolidation projects can still continue. Seeing as most Enterprise Plus users run dual CPUs, allowing for 192GB of vRAM per host versus 96GB is a major increase in value.
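To put numbers on it, here is the pooling math as I understand it. This is a sketch with a made-up four-host cluster, not output from VMware's licensing tooling:

```shell
per_cpu_entitlement=96   # GB of vRAM per Enterprise Plus CPU license (new scheme)
hosts=4                  # hypothetical cluster size
cpus_per_host=2          # dual-socket hosts consume 2 licenses each
echo $((hosts * cpus_per_host * per_cpu_entitlement))   # → 768 GB pooled vRAM
```

Since the pool is shared, any one host can burst past its "own" 192GB as long as the cluster-wide total stays under the pool.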

Monster VM!

So, we all know that the vSphere 5 RAM max per VM is crazy. If, for some reason, you need to run a VM with 1TB of RAM allotted, you would have been hit hard. You would need to buy a ton of Enterprise Plus licensure (or even more Enterprise / Standard edition) to meet the vRAM entitlement.

The fact is that running a single virtual machine should not cost more than a single Enterprise Plus license. So, to help reduce the impact of the new licensure on the monster VMs out there, VMware came up with a new rule: the most vRAM entitlement a single VM can consume is 96GB. So, if you run that monster 1TB-RAM VM, you do not need to license a full 1TB entitlement. Rather, it will only count against the pool for 96GB.
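In other words, the per-VM charge against the pool works out to min(configured vRAM, 96GB). A quick sketch of that rule in shell (my shorthand, not a VMware utility):

```shell
# Charge against the vRAM pool for a single VM, in GB:
vram_charge() { echo $(( $1 < 96 ? $1 : 96 )); }

vram_charge 8      # → 8   (a small VM charges its configured vRAM)
vram_charge 1024   # → 96  (the 1TB monster is capped at 96GB)
```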


Major cost savings for the small group (right now). Although, these license decisions are probably made to be resilient to the future. So, if you are in 2021 and reading this (welcome to the past), maybe every VM is 1TB in size.

The 96GB entitlement cap per VM will not be included in the general release of the vSphere 5 product. An update is coming that will add it to the reporting capabilities of vCenter. Also, recall that the licensure change will not stop you from running your virtual machines. So, fire up the 1TB VM and the reporting will catch up with you later.

Average Usage vs. High Water

The previous measurement of the entitlement focused on high watermarks of usage. This was thought to have caused too much of a pain point for the TEST/DEV environments and VDI environments.

I could not agree more. In those environments where developers can spin up virtual machines on demand, the number running at any given time, even for a short while, can end up being cost prohibitive.

To combat this issue, VMware decided to calculate the average vRAM pool usage over a 12-month period. Infrequent spikes would be absorbed into the average. More frequent spikes would have a more mild impact on the bottom line.


The View on VDI vRAM Entitlement

Dealing with VDI is another beast altogether. Personally, I feel like running virtual workstations is no different than running standard servers… there is more potential for higher density per physical host, though. So, the entitlement is still valid.

However, VMware decided to take a different route and promote their product line for the vSphere Desktop edition… which is licensed per user, not per vRAM pool. So, $65 per desktop ends up being easier on the company wallet for VDI implementations.

Check out the following blog posts from Raj Mallempati (Director, Product Marketing, End-User Computing). He really dives deep into the pricing and comparison between vRAM entitlement and the per-user license models:

vSphere Desktop Licensing Overview

Desktop Virtualization with vSphere 5: Licensing Overview


Virtual Bill’s Thoughts On The Changes

This is a great move on VMware’s part. It just goes to show that VMware really listens to the customer base. They had numbers to show that it was not going to be that hard on customers but decided to adjust and allow for more flexibility.

Hindsight is 20/20, and we all think this should have been accounted for prior to the original announcement of the licensure change. However, the adjustments made now, prior to the product being available, are the result of the user community feedback. The licensure is now more refined, flexible, and easier on all of us.

One neat side effect of the change in entitlement is that there is a nice upgrade path on versioning. For example, if a company with Standard licensure needs more vRAM made available, they can get more functionality by moving to an Enterprise license rather than buying another Standard license. Same with Enterprise to Enterprise Plus. Previously, magic version upgrades happened when an edition was discontinued and users were “upgraded,” OR when a new feature was needed. Now, just by doing the math and noticing that the next edition up the ladder has the same vRAM entitlement as two of the lower licenses, users can get more functionality. It may now be easier to get VAAI in some environments, for example.

Finally, I encourage you to look at Bob Plankers’ posts on the licensure change from the first go-around. While the posts do not necessarily reflect the newest licensure changes as described in this post, Bob has put a lot of thought into the impact of vRAM pooling in his environment.

Lone Sysadmin – The Five Stages Of VMware Licensing Grief

Lone Sysadmin – A Look At VMware Licensing Environment Growth

This is all going to work out in the end. The increase in vRAM entitlement per version is a great step and provides VMware an easy way to adapt with improvements in technology.


Footnote: The images in the post were taken from a VMware licensure presentation. The content and data from the images were obtained and created by VMware. I just used it here.

SSH Tunneling – My Favorite Sys Admin Tool

Happy SysAdmin day everyone! While my title may not say “System Administrator” any longer (although my business cards do… probably need to change those), I still wield those skills all the time and have the utmost respect for those in the trenches.

I thought today would be a great time to bust out a post on what I believe to be the most useful tool in my Sys Admin belt… SSH Tunneling.

SSH Overview

SSH (aka Secure Shell) is a network protocol that allows secure connections to network-connected devices. SSH sessions are encrypted, so your communications to and from the devices are safe from eavesdropping. Typically, SSH connections are seen on Linux-based, UNIX-based, and networking devices.

Contrast SSH with Telnet… while they provide many of the same functions (e.g., remotely connecting to a device), Telnet communications are plain text and wide open. So, when you log into a server and provide your password, someone with a packet sniffer (Wireshark, for example) would see something akin to:

SSH: B12394jfjLL1055bSSJxla,;;293
Telnet: thisismysupersecretpassword

SSH Tunneling Overview

While SSH allows for remote connections to devices, that is not the only function it provides. Tunneling is an option that uses the SSH connection to a remote device as a launching point to get into another device or service. When the SSH connection is made to the device, new connections to specific hosts and TCP-based services are also made and tunneled back to you.

For those of you that have not used SSH port forwarding, the concept may be a little funky. So, please read on and learn about the coolness that is SSH tunneling. For those that use the functionality now, I hope you continue to use it wisely and appropriately… and read on for a refresher if you’d like.

The Nitty Gritty

[Note: this is all hypothetical. If you have security restrictions/policies in place, I do not suggest using this. This is just an example of how the technology works. It is up to you to figure out how to use it in your life]

This is my “environment”. This is overly simplistic, but should get the point across.

My Environment

As mentioned earlier, SSH tunneling uses the connecting host as the launching pad for the tunneled connections. When a successful connection to the SSH target is made, the specified connections are also plumbed between your workstation/client and the destination.

Once a connection is established by you, the destination will see a connection from the SSH launch pad. You will see a port on your local workstation/client as a localhost connection.


You’re visiting your in-laws in Ohio. For whatever reason, you need to check something at work on the Linux2 server. All you have available is a non-work Windows desktop. Without a VPN connection or a remote desktop client installed, how can you check on the thing? SSH tunneling through Linux1! Let me show you how.

  • Download PuTTY (Fantastic utility. I would probably die without it).
  • Start PuTTY


  • Provide the Host Name or IP of the Linux1 server


  • Walk down the Category section on the left and go to: Connection –> SSH –> Tunnels


  • This is where the magic happens. In the Source port field, enter the TCP port that will represent the tunnel on localhost.
    • This requires you to know what is already running on the workstation. Do not choose a port that is in use: the SSH connection will still succeed, but the tunnel will fail.
  • In the Destination field, enter the target of the tunneling (aka what you’re trying to get to) AND the TCP port of the service, in host:port format.
    • The destination tunnel endpoint is reached using the resources of the SSH host you are connecting to. So, DNS resolution of the destination name takes place from Linux1, not your workstation.
  • I am going to select a Source port of 22222 and a Destination of linux2:80
  • Click the Add button


  • Now, if you want other tunnels, you can add them here as well. But we’re just going to move on. Click Open to open the SSH connection and log in.
  • Once you have logged in, you can use ‘netstat -anp tcp’ to show the TCP connections on your workstation:


  • Magic! You can see that I have TCP:22222 listening on my workstation.
  • Now, it is time to check on that super important thing. Since we know this is a web-based utility, we just need to open a web browser and browse away. Open your browser du jour.
  • Go to http://localhost:22222/superimportant


  • Phew! Looks like everything is alright… Time to clean up!
  • Close the browser window and logout of the SSH session.
  • If you run the ‘netstat -anp tcp’ command again, TCP:22222 is missing.
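If you are curious what the tunnel is doing under the hood, here is a toy Python sketch of a local port forwarder: listen on a local port and relay every byte to a destination host:port. A real SSH tunnel additionally encrypts the hop to the SSH server and launches the outbound leg from that server; this sketch skips all of that and is for illustration only (the linux2 host name below is from our hypothetical environment).

```python
# Toy local port forwarder -- illustrates the relay concept behind an SSH
# tunnel, without any of the encryption. Do not use this in place of SSH.
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then close dst."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def forward(local_port, dest_host, dest_port):
    """Listen on 127.0.0.1:local_port and relay each connection onward."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", local_port))
    listener.listen(5)
    while True:
        client, _ = listener.accept()
        remote = socket.create_connection((dest_host, dest_port))
        # Shuttle traffic in both directions until either side closes.
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

# forward(22222, "linux2", 80)  # then browse http://localhost:22222/
```

For what it is worth, the OpenSSH command-line equivalent of the PuTTY setup above is a single `-L` flag: local port, destination host, destination port, then the SSH launch-pad host.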





Sys Admins have a great arsenal of tools available to them at any given point in time. SSH tunneling has provided massive functionality to me in my day job. I hope that you find some value in SSH tunneling and can add it to your tool belt for those visiting-the-in-laws moments.

Categories: How To, Systems

vSphere 5 Release–HA Improvements

July 11, 2011 2 comments

High availability has become one of those functions that many companies take for granted. The ability for a mission critical virtual machine to re-spawn elsewhere in the event of a host failure is really useful. While there is some downtime associated with the virtual machine restarting and recovering itself, the reaction time is fantastic.

This functionality is accomplished by each host maintaining a “heartbeat” with the other servers in the HA cluster. If the other servers stop receiving a server’s heartbeat signal, the cluster assumes that server is down and restarts its virtual machines on other available hosts.

Issues arise when the network is not designed properly, when a server is somehow isolated from the other servers (a specific switch failure, perhaps), or when the management network fails. Suddenly, there can be major issues with multiple copies of the same virtual machine running. Not good at all. It takes just a second of thought to understand how complicated the repercussions of this situation are. Never fear, though: VMware has heard your cries and incorporated another level of host detection in this round of vSphere versions.

Master / Slave Relationship

Gone are the days of primary and secondary nodes. Rather, all nodes in the HA cluster can participate automatically. The following criteria are used to determine which host is going to be the master in the cluster:

  • Which host has access to the most datastores in the cluster
  • The ESXi host with the highest MOID (the tie-breaker)

Master elections occur when:

    • HA functionality is enabled initially
    • The master node fails
    • The master node enters maintenance mode
    • The management network is partitioned
      • If the management network is somehow split up (a failed switch, for example), hosts that cannot see the original master will elect a new one and operate within the same HA environment.
      • Upon resolution of the management network partitioning, the multiple master nodes will consolidate down to a single master node.

In the new master/slave relationship, the master node is responsible for monitoring the activities of the slave nodes via the heartbeat. Additionally, it maintains a list of the VMs running on each ESXi host. The slave nodes, on the other hand, monitor the run state of their local VMs and monitor the health of the master node (see, even the master node needs a little love and attention sometimes… it’s hard work being the master).
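A toy sketch of the election ordering described above (the real FDM election protocol is more involved; this only illustrates the ranking criteria, with MOIDs compared as plain strings):

```python
# Toy illustration of the vSphere 5 HA master-election ranking: prefer the
# host with access to the most datastores, break ties on the highest MOID.
# Not VMware's actual election code -- just the ordering criteria.
def elect_master(hosts):
    """hosts: list of dicts like {"moid": "host-42", "datastores": 13}."""
    return max(hosts, key=lambda h: (h["datastores"], h["moid"]))

cluster = [
    {"moid": "host-10", "datastores": 13},
    {"moid": "host-42", "datastores": 13},
    {"moid": "host-7",  "datastores": 9},
]
# host-10 and host-42 tie on datastore count; the higher MOID wins.
print(elect_master(cluster)["moid"])  # host-42
```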

Storage Heartbeats

All this talk about heartbeat monitoring is a great segue into a new type of heartbeat… the storage heartbeat.

Previously, heartbeating relied upon the IP network to pass status information among the nodes. But we can all come up with ways in which this can fail. If the virtualization environment utilizes fibre channel storage, or is architected such that IP storage sits on a separate physical network, it is possible for the management network to fail while the VMs continue to run uninterrupted. From the VMs’ perspective, nothing has happened; they may see a drop in client connectivity, but they can still access their server resources. The other hosts, however, will freak out and start up duplicate copies on other ESXi hosts. Not good.

VMware has introduced a new heartbeat type to address this issue. Storage heartbeats utilize the datastore level to maintain heartbeat information. So, in the event of a network failure, the master will look to the datastores to determine if an ESXi host is still active. If so, the VMs remain in the same state. If the ESXi host is no longer actively using the datastores, the master node will start the VMs elsewhere. This is accomplished by writing heartbeats to specific locations on VMFS datastores, or to a specific file location on NAS datastores.
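Conceptually, the master’s decision combines the two heartbeat channels something like this (my illustration of the logic described above, not VMware’s actual FDM code):

```python
# Toy decision table for combining network and storage heartbeats.
# Illustrative only -- the real FDM agent handles many more states.
def assess_host(network_hb_ok, storage_hb_ok):
    if network_hb_ok:
        return "healthy"              # normal case: leave the VMs alone
    if storage_hb_ok:
        # Host is alive but unreachable over the network: isolated or
        # partitioned. Do NOT restart its VMs elsewhere.
        return "isolated/partitioned"
    return "dead"                     # no heartbeat anywhere: restart VMs

print(assess_host(False, True))   # isolated/partitioned
print(assess_host(False, False))  # dead
```

The middle case is exactly the scenario that caused duplicate VMs before: without the storage heartbeat, a network-isolated host was indistinguishable from a dead one.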

The datastores are selected automatically when the functionality is initialized. They can be changed manually, but altering the default behavior is not suggested. When a new datastore is introduced into the environment, or a change in the environment makes a datastore more or less preferable, vCenter will recalculate the proper datastores for the storage heartbeats. In manual mode, you would need to change them yourself.

The storage heartbeat is meant to be a final catch-all and ends up being a great diagnostic feature as well. It goes a long way toward protecting the integrity of your server environment and preventing multiple copies of a virtual machine from running at the same time.

This is one of those features that relies upon a properly designed network… especially if you are utilizing IP-based storage.

HA States

A new host property exists that will inform you of the ESXi host’s HA state in the HA cluster.

  • NA (HA not configured)
  • Election (Master election in process)
  • Master (remember, in the event of a network partition, there can be more than one master)
  • Connected (Connected to the master – aka “Slave”)
  • Network partitioned
  • Network isolated
  • Dead
  • Agent Unreachable
  • Initialization Error
  • Unconfig Error
These new properties can be useful in ascertaining the HA state of your virtualization infrastructure… especially useful if you are experiencing an HA failure at the moment.


The new HA operating states and functions in vSphere 5 provide for a more resilient HA environment in your virtual datacenter. The new master/slave election process allows for resiliency during a management network partition: the hosts that can see each other become a new HA sub-environment until the partitioning has been resolved. The storage heartbeat protects virtual machines in the event of a network partition or IP connectivity failure.

HA will continue to work with multiple versions of ESXi. However, the functions available are limited by the versions running. So, if HA is critical to you and you like what you see, you had better start evaluating vSphere 5 at your earliest convenience and roll it out!

vSphere 5 Release–Initial Thoughts

Well… it may not be much of a surprise that the Raising The Bar event on July 12th was regarding the release of the latest version of the vSphere product line.

So, what is this release all about? Excellent question! I am glad you asked.

When I think about all of the new features and updated functions, I cannot help but characterize the release as focusing more on storage than any release before. While the software iSCSI stack was heavily improved with vSphere 4, we are seeing more and more storage-related functions in vSphere 5.

One of my sweet spots in virtualization is how virtualization products can really benefit the SMB environment. Previously, VMware has provided SKUs for the ESXi and vCenter components. However, this time around, there are very specific product decisions and options available for the SMB markets. This is absolutely fantastic and really helps cement the role of VMware in the SMB markets. Plus, in many instances, what is good for the SMB market is also good for the Enterprise Remote Office / Branch Office configurations.

Additionally, this release of vSphere really pushes the cloud-platform along very nicely. We are going to see some more flexibility in how we can implement and operate within our private clouds as well as integrate with public clouds.

Finally, the one thing that people hate to see… changes in licensing. I promise to cover this topic in more detail in a blog post soon. However, suffice it to say that with any licensing change, some people are not going to be happy. One little goodie concerns the Advanced SKU… say goodbye to Advanced and hello to Enterprise!

When it comes down to it, I really feel like vSphere 5 is a compelling product line for new customers and existing customers to adopt. The component product lines (SRM, vCloud Director, ESXi, vSphere, etc…) have great additions and changes that should increase the adoption rate across all customer markets. Larger customers or people with higher density server deployments are going to have to evaluate the new features and licensure requirements to determine proper deployment… but, that is to be expected.

I encourage you to reach out to the vExpert community, the VMware community, local VMUGs, and VMware Account Representatives to get more information on the new product line as the posts roll out over the next couple of weeks.

I cannot wait to get my grubby hands on the production release and get it rolled out.

Happy vSphere’ing!

Categories: Systems, Virtualization

Where are all the Apps?!

June 16, 2011 1 comment

Alright… I bit the bullet. It should be no surprise that a geeky dude went out and purchased a new iPad 2. Maybe the shocking part is just how long it took me to get one. Priorities, priorities, priorities.

Since getting said tablet device, I have been doing what every new owner does… GET APPS! But I am quite surprised by the selection of applications out there. Various counts of the applications available in the App Store are quite shocking. (insert some values here). But it is the applications that are missing that are really interesting.

- Skype — a Skype app exists for the iPhone and can be downloaded to run on the iPad. However, there is no version of the application native to the iPad.

- Facebook — first off, I gotta say that I am not too keen on utilizing Facebook for much of anything. So, I was unaware of the missing app for the iPad. The situation was brought to my attention via a tweet from @matthickson: “Facebook app for iPad reportedly coming ‘in weeks’”. Sho’nuff, an app from Facebook, Inc. only exists for the iPhone platform.

- Qik — Qik provides a very popular video chat service for the iPhone platform. However, there is no option for the iPad. On top of that, Qik appears to have been purchased by Skype, so I guess it should come as no surprise that it does not have an iPad version yet.

The lack of these apps on the iPad is of little consequence to me. I have made it this far without having them installed and I am not concerned about their timeline. The bigger issue is that companies such as Skype and Facebook are making little effort to take advantage of what is considered to be the dominant tablet platform on the market to date. Looking through the App Store, there is no shortage of what I would consider crap-ware. If the world of developers can throw together an ecosystem of applications numbering in the thousands, how can the major service vendors not have brought anything to the table yet?!

Tablet devices continue to be a nice to have and not a need to have. They represent a kitschy technology product but their long-term usage and viability are yet to be seen. So, I cannot fault the vendors from focusing on the mobile platforms first. But, passing on the market leader just does not make sense to me.

Categories: GestaltIT, Systems

I/O Virtualization Redux (c/o Virtensys)

Alright… back in the day, I posted an article regarding a startup in San Jose, Aprius. You can find that post here:

Without reciting everything in the post, the single sentence that has encapsulated my feeling about their technology and approach was:

The biggest problem with the technology and the company direction is that there is no clear use case for this.

Until recently, I stood behind this statement. And, honestly, as it pertains to the Aprius approach I was presented, I still do. I am sure there are Aprius, Xsigo, Virtensys, and any other I/O Virtualization vendors/customers that dispute the statement. (note: I welcome comments!).

However, I believe I have found a great use case for I/O virtualization thanks to a presentation from Virtensys during the most recent Portland VMUG meeting.

The use case that was presented was huge, and I see it being beneficial to all sorts of environments (SMB, SME, Enterprise, Healthcare, Education, Research, etc…).

The Virtensys product utilizes PCIe extension cards in the PCIe slots on servers. Those cards connect to the Virtensys product. For this example, we will assume the Virtensys solution is configured to share 10Gb NICs with the servers. If your virtualization servers are sharing the 10Gb NIC through the Virtensys product, all network traffic is routed through the Virtensys solution. However, if a virtual machine on one server is trying to communicate with a virtual machine on another server, and those servers are sharing the same NICs in the Virtensys solution, the communications happen at PCIe speed, not NIC speed! Additionally, the traffic never hits the standard physical network layer.

The PCIe 2.0 standard allows for 500MB/s over a single lane (minus some overhead). So, a single lane can handle roughly 4Gb/s, and a full 32-lane connection can handle 16GB/s (or 128Gb/s). Now, these are theoretical values, and some level of overhead and contention may need to be accounted for. But the takeaway is that the PCIe bus is quicker than the 10Gb NIC that is being shared.
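A quick back-of-the-envelope check of those numbers (PCIe 2.0 at a nominal 500MB/s per lane, ignoring protocol overhead and contention):

```python
# Rough PCIe 2.0 throughput math: 500 MB/s per lane, 8 bits per byte.
MB_PER_LANE = 500

def pcie_gbps(lanes):
    """Approximate aggregate throughput in Gb/s for a given lane count."""
    return lanes * MB_PER_LANE * 8 / 1000

print(pcie_gbps(1))   # 4.0   -> a single lane is already ~40% of a 10Gb NIC
print(pcie_gbps(32))  # 128.0 -> 16 GB/s, well beyond a 10Gb NIC
```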

Use Case!!!: By utilizing the Virtensys solution, your network traffic is no longer hitting the physical network and can be transmitted at PCIe speeds!

I will be the first to admit that I am not entirely familiar with the other offerings that exist (again, I like comments!). I would like to think that this same use case exists for the other I/O virtualization vendors out there. Assuming it does, I can see I/O virtualization products being adopted by companies that can benefit from the higher network throughput. If this turns out to be specific to Virtensys and you have a need for higher network throughput, you may want to check these guys out.


Virtensys was a presenter at the Portland VMUG meeting in May 2011, of which I was the principal organizer. I am under no obligation to include them in any personal blogging I undertake (which this qualifies as). Virtensys provided the presentation that opened my eyes to the use case supplied above.


