
Archive for the ‘Virtualization’ Category

VMware Hands On Labs–BETA–Live!

November 14, 2012


For a number of years, VMware has provided some amazing lab content during the VMworld conferences. From year to year, the labs progressed from being delivered onsite, to cloud bursting, to majority cloud, and, finally, to being delivered entirely from the cloud. It was inevitable that one day… some day… the labs would be opened up to the general populace. It turns out that Tuesday, November 13th was that fateful day. At the Portland VMUG Conference, Mr. Pablo Roesch (@heyitspablo) and Mr. Andrew Hald (@vmwarehol) released the Hands On Labs (BETA) onto the world!

Pablo and Andrew were kind enough to help me get access just before the announcement. And, I must say, I have been so impressed with what VMware has produced.

It is important to remember that this is a BETA offering (not quite like a Google BETA, e.g. Gmail), so some bugs and bumps in the road should be expected. With that said, the quality of the content, the delivery mechanisms, and the UI is top notch.

UI Tour

Let's take a quick look at what the environment looks like:

Login


Upon logging into the HOL environment, you are greeted with a couple notable components:

  • Enrollments and Lab Navigation
    • This section, on the left, allows you to view the labs you have enrolled in, filter the lab types (currently Cloud Infrastructure, Cloud Operations, and End-User Computing), and view your transcript.
  • Standalone labs
    • The actual entry to the lab content. Enroll, begin, continue, etc… from here. This is where the magic happens.
  • Announcements
    • Includes important updates as to new lab content, product YouTube videos, Twitter stream, etc…

All in all, the home page is very streamlined and efficient.

Using a Lab

Upon enrolling in and beginning a lab, you are presented with:

[screenshot]

In the background, vCloud Director is deploying your lab environment. It looks like VMware is truly eating its own dog food with this product. No more Lab Manager for this type of offering.

The interface features some very well thought out design decisions that help present the lab content, and VMs, in a very logical and convenient way. The HOL team heavily leveraged HTML5 to accomplish the magic:

[Screenshots: Default View, Lab Manual, and Consoles]

The simple placement of tabs on the left and right sides allows the console of the VM to consume the majority of the screen while minimizing the number of times the user needs to switch between applications for information. All of the info is there. Plus, as the user scrolls down the screen, the tabs remain visible, always ready to be used.

Use the lab manual content to navigate through the labs at your leisure. Note, though, there is a time limit. This is shared infrastructure. So, if you are idle for too long, HOL Online will time you out and close up shop so others can use the same infrastructure. Don’t worry, though, the content will resume when you return later.

Bugs I Have Found

Yes… as mentioned above, this is a BETA. Did I mention this is a BETA? Because it is a BETA. So, running into some bugs is expected. Don't worry, though, I'll participate in the forum to report them.

Thus far, I have found a few funky bugs:

  • The mouse can disappear when a console is open
  • The mouse can disappear when multiple browsers are open and the HOL window is not active
  • Some lab manuals are not available
  • Occasionally, the HOL interface will hang at Checking Status. Force a cache refresh in your browser.

Getting Involved

If you want to get involved in testing the functionality and getting a taste of what HOL Online is all about:

– Acknowledge that this is a BETA at this time. Don’t expect complete perfection right now.
– Sign up for the BETA at http://hol.vmware.com
– Notice that the signup site is a forum. You can check out the bugs and commentary from other beta testers to get a feel for what you're going to experience.
– Commit to participating in the beta community. Pablo and Andrew are not going to come to your house and take your vSphere licenses away for not participating. But, this is a unique opportunity to contribute to the success of a public product like this. Take advantage of it. Help the VMware community by contributing to it!

Well done, HOL team. I look forward to seeing what this turns into in the future!

Team vGlobo – v0dgeball – VMworld 2012

August 25, 2012

Come one! Come all! Watch the spectacle that is v0dgeball at VMworld 2012!

This year, a number of awesome vPeople have come together for a brief time to create an amazing team of dodgeballers to dominate the tournament: Team vGlobo


The team is comprised of:

  • Bill Hill
  • Arjan Timmerman
  • Brandon Riley
  • Gabrie van Zanten
  • Chris Emery
  • Josh Townsend
  • Jason Shiplett
  • Mike Ellis
  • Dwayne Lessner
  • Joseph Boryczka

If you’re interested in watching the awesomeness that is Team vGlobo, please come and check out the tournament:

SUNDAY, AUGUST 26 @ 4-6PM

SOMA REC CENTER – CORNER OF FOLSOM & 6TH ST


In all seriousness (if you made it this far), the v0dgeball tournament is something that I am very proud to be a part of. The proceeds from the dodgeball tournament go to the Wounded Warrior Project.

The Wounded Warrior Project provides support for US service men and women injured while serving our country. They provide a ton of services and support, including:

  • Stress recovery
  • Family support
  • Career transition
  • Employment placement
  • Adaptive sporting events
  • Assistance with government/insurance claims
  • And so much more

At the time of this post, the v0dgeball 2012 project has raised $10,705.00 for the Wounded Warrior Project. How cool is that?! Really!? Just thinking about the impact that $10,000+ can have in thanking and supporting servicemen and servicewomen for their amazing sacrifice is awe-inspiring.

I cannot thank Chad Sakac, Fred Nix, and EMC enough for organizing such a fun and honorable activity for the VMworld participants… and for all of the participants of the tournament. 

Team vGlobo is honored to take the floor, throw some balls in the faces of opponents (there's a joke in there somewhere), and support such an amazing organization.

Go Team vGlobo!!!


SAN and Compute All-In-One

February 27, 2012 3 comments

Let’s be honest, x86-based compute in virtualization environments is pretty darn boring. A server has become the necessary evil required for enabling the coolness that is virtualization.

But, don’t let the boringness of servers fool you. VMware has enabled a new breed of hybrid servers that are both server AND storage all-in-one! This new paradigm adds some new methods and models for virtualization design and functionality.

Conceptually, the server boots into an ESXi environment and fires up a guest OS. This guest OS is the virtual storage appliance (VSA) and provides the storage for the local server. The guest makes use of VMDirectPath functionality to take control of a locally installed storage controller connected to the local disks. The result is that the VM can access the local disks while ESXi cannot; the local disks are now directly connected to the VM. How cool is that?!

Once the guest OS has the disks, the guest creates various storage options (block or file, object or RAID, etc…). The ESXi host is then configured to connect to IP storage provided by the guest. The first, typical reaction may be to wonder why anyone would add this level of complexity. For a standalone host with local storage (think ROBO), this may be a little overkill. But the advantage comes into play when you consider flexibility and new functionality.
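To make the loop-back idea concrete, here is a minimal sketch of how a host might mount NFS storage exported by its local VSA from the ESXi shell. The appliance address, export path, and datastore name are made up, and the actual workflow depends entirely on the VSA product in question:

  • esxcli storage nfs add --host=192.168.1.50 --share=/vsa-export --volume-name=vsa-ds01
  • esxcli storage nfs list          (confirm the new datastore shows up as mounted)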

By moving control of local storage into the VM, more advanced functions can be performed. Local storage use by ESXi is fairly limited. The VSA, though, can use the storage a little more liberally.

Take Pivot3, for example. Their VDI and surveillance solutions make use of this storage technique. The vSTAC OS (the Pivot3 VSA) creates a RAID across the local disks. Yawn, right?! The coolness comes in when multiple nodes are "connected": vSTAC OS instances on other Pivot3 servers combine and RAID across multiple hosts. Suddenly, local storage is combined with local storage from other hosts to create a big clustered pool of available storage! This clustered environment allows for added resiliency and performance, as the data is no longer restricted to the local host and is distributed to protect against local storage issues.

Once the vSTAC OS nodes connect their storage together, data is spread across all of the nodes to immediately protect the data and enhance performance. A new node can be added in the future; once it is, the data is automatically rebalanced across all hosts to ensure proper protection and efficient use of the storage. Dynamic addition of storage and compute is fantastic!

The VSA VM can perform additional functions if desired (and developed as such) like: deduplication, replication, compression, etc…

Bill’s Stance

I love this type of innovation. There are many use cases for solutions like this, and the Pivot3 solution has a lot of potential for success in its target markets. I do have concerns about the selection of RAID versus object storage, though… but that is their decision. Traditional RAID5 systems suffer heavily from a disk failure and rebuild… the performance tanks until the failed disk has been replaced and rebuilt. In the event of a failure in the Pivot3 solution, the entire solution may suffer until the offending disk has been replaced. With that said, I believe the benefits of the technique outweigh the potential performance hit.

This style of architecture really bucks the trend of needing a separate SAN/NAS in addition to compute. Adding sophistication to the VSA component and introducing more SSD/Flash-based storage could create an interesting and valid competitor to traditional SAN/NAS solutions and breathe new life into boring servers.

Unmount VMFS Datastore

November 21, 2011

With all the wicked-cool new functions in vSphere 5, one of the most understated but most useful is the ability to unmount a VMFS datastore backed by iSCSI. Seemingly a simple function, this was not available on pre-vSphere 5 hosts.

The problem I have faced in the past is the occasional need to remove iSCSI datastores from an ESXi host. In those rare instances, I have needed to migrate some VMs off of a SAN while keeping other VMs on the same SAN (ex: moving a development SAN to another site). svMotion handles the hard work of moving the VMs to the new datastores (easy-peasy, right?). However, unlike an NFS share, a VMFS datastore could not be unmounted. I ran into two options to remove it:

1) Right-click the datastore and select “Delete”!

[screenshot]

Uh… the point of this is to not delete these VMs!

2) Remove the initiator IP address, remove access to the ESXi host initiators via the SAN interface, vMotion VMs to other hosts (if you’re lucky), and reboot the host.

– Host downtime, SAN maintenance (which, yes, I know initiators not being used should be cleaned up… but not as a requirement to save my VMs), host downtime, etc… I can add a datastore live, why not remove it live?!

 

To my surprise this morning, while removing some iSCSI stores after an over-the-weekend SAN migration, I was presented with a new option via vSphere 5!

Unmount Datastore - vSphere 5

Following this new function leads to a prerequisite check to ensure that the unmount requirements are green and good to go:

[screenshot]
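For the CLI-inclined, the same operation is exposed through esxcli in ESXi 5.0. A quick sketch only; the volume label below is made up, so substitute the datastore you actually want to unmount:

  • esxcli storage filesystem list          (find the volume label or UUID of the datastore)
  • esxcli storage filesystem unmount -l Dev-iSCSI-01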

Now, the downside to this procedure is that in my environment, I have a couple of non-DRS-clustered hosts (thank you, Oracle VMware licensing) that I am unable to take offline to upgrade to ESXi 5.0 right now. So, the same iSCSI volumes are available on both ESXi 4.1 and 5.0 hosts, and the unmount process is only partially useful. Due to those darn ESXi 4.1 hosts, I still need to delete the datastore to get rid of the iSCSI volume!

Unmount Datastore - vSphere 5 and 4.1

Thanks Oracle Licensing!

Lucky for me, I do not have any VMs to save on the datastore!

This was a great way to start a Monday morning! I look forward to being able to unmount VMFS volumes as necessary… once everything is up to vSphere 5.0!


ESXi 5.0–1.5 Hour Boot Time During Upgrade

November 14, 2011

I have to say, I am quite shocked that I am on the tail end of waiting 1.5 hours for an ESXi 5.0 upgrade to complete booting. Seriously… 1.5 hours.

I have been waiting for some time to get some ESXi 5.0 awesomeness going on in my environment. vCenter has been sitting on v5 for some time and I have been deploying ESXi 5 in a couple stand-alone situations without any issues. So, now that I have more compute capacity in the data center, it is time to start rolling the remaining hosts to ESXi 5… or so I thought!

I downloaded ESXi 5.0.0 build 469512 a while back and have been using that on my deployments. So far, so good… until today. Update Manager configured with a baseline –> Attach –> Scan –> Remediate –> back to business. Surely, the Update Manager processes should take more time than the actual upgrade. About 30 minutes after starting the process, vCenter was showing that the remediation progress was a mere 22% complete and the host was unavailable. I used my RSA (IBM's version of HP iLO or Dell DRAC) to connect to the console. Sure enough, it was stuck loading some kernel modules. About 20 minutes later, IT WAS STILL THERE!

Restarting the host did not resolve the issue. During the ESXi 5 load screen, pressing Alt + F12 loads the kernel messages. It turns out that iSCSI was having issues loading the datastores in an acceptable amount of time. I was seeing messages similar to:

[screenshot]

A little research turned up the following article in VMware's knowledge base: ESXi 5.x boot delays when configured for Software iSCSI (KB2007108)

To quote:

This issue occurs because ESXi 5.0 attempts to connect to all configured or known targets from all configured software iSCSI portals. If a connection fails, ESXi 5.0 retries the connection 9 times. This can lead to a lengthy iSCSI discovery process, which increases the amount of time it takes to boot an ESXi 5.0 host.

I have 13 iSCSI datastores on that specific host and multiple (5) iSCSI VMkernel ports. So, calling the iSCSI discovery process lengthy is quite the understatement.
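Some rough back-of-the-envelope math shows why (worst case, assuming each datastore maps to its own target and every attempt has to burn through its retries): 13 targets across 5 portals is 65 discovery paths, and at up to 9 retries apiece that is on the order of 585 timed-out connection attempts before the host finishes booting.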

The knowledge base states that the resolution is to apply ESXi 5.0 Express Patch 01. Fine. I can do that. And… there is a workaround described in the article that says you can reduce the number of targets and network portals. I guess that is a workaround… after you have already dealt with the issue and the ridiculously long boot.
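If you do want to try the workaround before patching, unused dynamic discovery targets can be pruned from the ESXi shell. This is just a sketch; the adapter name and portal address are made up, so list first and remove only entries you know are stale:

  • esxcli iscsi adapter discovery sendtarget list
  • esxcli iscsi adapter discovery sendtarget remove -A vmhba33 -a 192.168.50.10:3260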

Finally, to help mitigate the issue going forward, VMware has released a new .ISO to download that includes the patch. However, this is currently available in parallel with the buggy .ISO ON THE SAME PAGE! Seriously. Get this… the only way to determine which one to download is:

[screenshot]

As a virtualization admin, I know that I am using the Software iSCSI initiator in ESXi. But why should that even matter at all?! There is a serious flaw in the boot process in build 469512, and that .ISO should be taken offline. Just because someone is not using Software iSCSI at the current time does not mean they are not going to in the future. So, if they download the faulty .ISO, they are hosed in the future. Sounds pretty crummy to me!

My Reaction

I am quite shocked that this made it out of the Q/A process at VMware in the first place. My environment is far from complex, and I expect that my usage of the ESXi 5.0 hypervisor falls within any standard testing procedure. I try to keep my environment as vanilla and as close to best practices as possible. A 1.5-hour boot definitely should have been caught before release to the general public.

Additionally, providing the option to download both the faulty ISO and the fixed ISO is a complete FAIL! As mentioned on the download page, this is a special circumstance due to the nature of the issue. I would expect that if the issue is as serious as the download page makes it out to be, the faulty ISO would no longer be available. There has to be a better way!

Conclusion

I have since patched the faulty ESXi 5.0 host to the latest/safest build, 504890, and boot times are back to acceptable levels. I will proceed with the remainder of the upgrades using the new .ISO and have deleted all references to the old version from my environment.

I have never run into an issue like this with a VMware product in my environment and I still have all the confidence in the world that VMware products are excellent. In the scheme of things, this is a bump in the road.

VMware Virtual Datacenter

August 29, 2011

Sunday evening, many of the vExpert award recipients converged in the Casanova 503 room at the Venetian for the VMworld 2011 vExpert meeting. The mingling, meeting, and networking were fantastic.

However, there was one topic of significant discussion that really got my wheels spinning. While we were asked not to go into detail about what was said by VMware (proper), we are all familiar with the concept… the Virtual Datacenter.

It should be no surprise that VMware has been walking us down the path of virtualizing our datacenter components. Servers, storage, networking… the entire stack. All in an effort to create this nebulous “Virtual Datacenter”. But, what is the virtual datacenter and how do we get there? Well… if I had the answer, I would probably be working for VMware… right?!

Conceptually, the virtual datacenter is comprised of increasingly commoditized resources. x86 compute resources are readily available at minimal cost. Auto-tiering storage is becoming more and more prevalent to help address IO performance demands. 10Gb networking, and other high-bandwidth connections, provide the ever-so-necessary links to the network and to network-based storage. By abstracting these resources, the virtual administrator is no longer tasked with managing them directly.

The fact of the matter, though, is that in many environments, management of these resources still exists. We need the network guys to maintain the network, the storage guys to handle the storage, and the server guys to handle the server hardware and connections to systemic resources.

In other words, the virtual datacenter still needs hands-on management from different facets of the IT house.

My view of the virtual datacenter is the creation of a system where network, storage, and servers are all managed from a single point. We are seeing this come to fruition in Cisco UCS, vBlock, and other single-SKU solutions. That is a fantastic model. However, it targets a different market.

My dream virtual datacenter manages everything itself.

  • Need more storage? Just add a hard drive. The datacenter handles data management and availability. Seriously, just walk over and add a hard drive or add another storage node to the rack.
  • Need more network bandwidth? Hot-add more pNICs. The datacenter handles properly spreading the data across available links, NIC failures, etc…
  • Need more compute resources? Add a new server to the rack. The datacenter handles joining the server to the available compute resources.
  • Need external resources? Just point the datacenter towards a public provider and let the datacenter manage the resources.

Creating the foundation to make this work relies on all parties involved allowing the datacenter to configure and manage everything. Storage vendors need to allow the datacenter to handle array configuration and management. Network vendors need to allow the datacenter to configure trunks, link aggregation, bandwidth control, etc… Systems vendors need to allow the datacenter to jump into the boot process, grab the hardware, and configure it automatically.

Pie in the sky, right? Existing technologies seem to hint at more elegant management that would lend itself kindly to such a model. VMware, as the datacenter enabler, would need to step up to the plate and take the initiative and ownership of managing those resources… from RAID configurations to VLAN trunking on switches.

Seriously… being able to walk up and add new physical resources, or extend to a public provider for more, and have them become magically available would be fantastic.

So… that is my vision for where I would like to see the virtual datacenter. VMware, let me know if you want to talk about this in more detail. I am sure we can work something out!

vSphere 5–PXE Installation Using vCenter Virtual Appliance

August 26, 2011

The release of vSphere 5 has a lot of little gems, one of which is the availability of a SLES-based vCenter virtual appliance. So, while that is really cool, there is another little nugget of joy waiting for you in the vCenter virtual appliance ('VCVA' for all the hip kids)… specifically, your own little PXE booting environment. The oh-so-wise developers decided to include the requisite DHCP daemon and TFTP daemon. So nice of you, VMware. Now, not only do you get a Linux-based vCenter, you also get the web client, a virtual appliance form factor, no requirement for a SQL server, and a PXE environment. Really, how can you go wrong?

The PXE environment components included with the VCVA are not configured and are turned off by default. So, if you're ready to configure your VCVA for PXE, it is time to roll up your sleeves, crack those knuckles, and get ready to get your hands dirty.

Before we get started, though, a little caution (and disclaimer so I can sleep better at night):

I know nothing about your environment. You are following these instructions at your own risk. This setup will impact DHCP functionality on your network. Follow these instructions at your own risk and make the appropriate adjustments to work in your environment.
Additionally, I do not know everything about everything. So, you are going to need to rely upon your sleuthing abilities to help resolve issues that may arise.

These instructions assume some knowledge of CLI-based file editing (vi). So, please research how to use it if you are unsure.

 

Overview
A PXE environment via the VCVA requires the following components in your environment:
– DHCP server
– TFTP server
– HTTP server on the network (for hosting kickstart files – customization during installation)
– SYSLINUX (for pxelinux.0)
– ESXi 5.0 installation .ISO (perhaps you created one using my Image Builder tutorial)
– vCenter Virtual Appliance deployed
– Blank server to PXE boot and install ESXi 5.0 on (aka – the client)

For this exercise:
– Network: 192.168.226.0/24
– VCVA: 192.168.226.21
– DHCP Range: 192.168.226.200 – 254
– Default Gateway: 192.168.226.1

Configuration
0 – Log into the appliance as ‘root’

1 – Configure DHCP

'dhcpd' will listen for IP address requests, provide an IP to use, direct the client to the "next-server" to continue PXE booting, and tell it which file (filename) to download from that server.

  • cd /var/lib/dhcp/etc
  • cp -a dhcpd.conf dhcpd.conf.orig
  • vi dhcpd.conf

Once inside the file, ensure the following exists:

ddns-update-style ad-hoc;
allow booting;
allow bootp;

#gPXE options
option space gpxe;
option gpxe-encap-opts code 175 = encapsulate gpxe;
option gpxe.bus-id code 177 = string;
class "pxeclients" {
match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
next-server 192.168.226.21;
filename "pxelinux.0";
}
subnet 192.168.226.0 netmask 255.255.255.0 {
range 192.168.226.200 192.168.226.254;
}

Save the file and exit (hint: :wq)
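Before moving on, it can be worth letting dhcpd test-parse the file so a typo does not bite you at restart time (standard ISC dhcpd test mode, pointed at the file edited above):

  • dhcpd -t -cf /var/lib/dhcp/etc/dhcpd.conf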

2 – Configure TFTP

TFTP services are provided by the ‘atftpd’ daemon

  • cd /etc/sysconfig
  • cp -a atftpd atftpd.orig
  • vi atftpd

Once inside the file, adjust the "ATFTP_OPTIONS" line to read: "--daemon --user root". Typically, the atftpd daemon runs as 'nobody'. However, the TFTP root (/tftpboot/) is configured as owned by the 'root' user.

Save and exit the file.

3 – Get the SYSLINUX packages on the server

There is one package missing to make the PXE installation process work: ‘pxelinux.0’. ‘pxelinux.0‘ is an executable that is downloaded by the client in order to properly continue the PXE process (aka – download the files, execute the installer, etc…). ‘pxelinux.0‘ is provided by the SYSLINUX package. In order for PXE to work properly with the ESXi 5.0 installation, SYSLINUX version 3.86 (or higher) is needed.

Note: you can use a package manager or simply copy the files to the server some other way. Regardless, get the files there. This example will continue to use the /tmp directory as the landing area for the SYSLINUX files.
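For example, one way to land the files is to grab the syslinux-3.86 tarball (kernel.org hosts the SYSLINUX releases), copy it to the appliance, and extract it. The commands below assume SSH/SCP access to the VCVA and the stock 3.86 tarball name:

  • scp syslinux-3.86.tar.bz2 root@192.168.226.21:/tmp/
  • cd /tmp
  • tar xjf syslinux-3.86.tar.bz2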

Copy the pxelinux.0 file to your TFTP root

  • cp /tmp/syslinux-3.86/core/pxelinux.0 /tftpboot

4 – Prep the TFTP root for PXE

The TFTP root configured on the VCVA is located at /tftpboot. We are going to need to get the directory structure built out to support PXE.

  • cd /tftpboot
  • mkdir esxi50

By adding a directory, we are able to organize the TFTP server and support additional versions of ESXi going forward.

5 – Get the ESXi 5.0 CD contents onto the server

Seeing as the VCVA is a virtual appliance, it is easy to get the contents of the installation media onto the server.

  • Mount the installation CD to the VCVA as a CD-ROM drive using the vSphere Client.
  • mount /dev/cdrom /media
  • cp -a /media/* /tftpboot/esxi50/
  • umount /dev/cdrom

6 – Configure PXELINUX

pxelinux is the utility that enables the PXE functionality. As mentioned before, pxelinux.0 is an executable that the client downloads. The executable provides functionality to parse a menu system, load kernels, options, customizations, modules, etc…, and boot the server. Since PXE can be used by multiple physical servers for multiple images, we need to configure pxelinux for this specific image.

  • cd /tftpboot
  • mkdir pxelinux.cfg
  • cd pxelinux.cfg

pxelinux.0 looks for configuration files in the TFTP:/pxelinux.cfg directory.

pxelinux looks for a number of possible configuration files, from very specific to generic/default. This allows server administrators to define a file based on a complete MAC address, a partial MAC address, or none at all to determine which image to boot from. Since this is the first configuration on the VCVA, we are going to configure a default. Do your research if you want to adjust this from the default value.

The installation media contains a file called isolinux.cfg. We can use this as the basis for our file called ‘default’. Copy it from the installation media and start customizations:

  • cp -a /tftpboot/esxi50/isolinux.cfg default
  • chmod a+w default
  • vi default
    Ensure the appropriate lines match the following lines:

DEFAULT /esxi50/menu.c32
KERNEL /esxi50/mboot.c32
APPEND -c /esxi50/boot.cfg

Save and Exit

7 – Configure the Kickstart file

Using a kickstart file, we can configure ESXi 5.0 automatically during installation. This requires that a file be placed on a server that is available to the client.  Sadly, the HTTP areas on the VCVA are not readily available… and, they may be erased during future upgrades. So, we need to use an external HTTP server somewhere on your network. (Note: NFS and FTP are options as well).

Create the kickstart file on that web server and add the following contents:

# Accept the EULA
vmaccepteula

# Set root password
rootpw supersecretpassword

# Install on first local disk
install --firstdisk --overwritevmfs

# Configure initial network settings
network --bootproto=dhcp --device=vmnic0

 

In this example, we are saving the file to a location on the external web server that the ESXi installer will be able to reach over HTTP.
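Once the file is saved, a quick check from the VCVA (or any machine on the same network) confirms that the installer will actually be able to fetch it. The URL below is only a placeholder for wherever you put your kickstart file:

  • wget -O - http://your-web-server/path/to/kickstart-file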

8 – Configure the installation files

The CD installation media for ESXi 5.0 assumes a single installation point. Thus, all the files are placed at the root of the image. However, since we want to actually organize our installation root, we added the ‘/tftpboot/esxi50‘ directory and copied the files into it. We need to adjust the installation files in /tftpboot/esxi50 to reflect the change.

  • cd /tftpboot/esxi50
  • cp -a boot.cfg boot.cfg.orig
  • vi boot.cfg
  • Add "/esxi50" to the paths for 'kernel' and 'modules' (see the sketch after these steps)

Save and quit
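For reference, here is an abbreviated sketch of what the edited boot.cfg might end up looking like. The kernel and module names come from your particular ISO (the list below is heavily truncated), and the kernelopt line pointing at a kickstart URL is optional and uses a made-up address; include something like it only if you want the installer to pick up the kickstart file from step 7:

bootstate=0
title=Loading ESXi installer
kernel=/esxi50/tboot.b00
kernelopt=ks=http://your-web-server/path/to/kickstart-file
modules=/esxi50/b.b00 --- /esxi50/useropts.gz --- /esxi50/k.b00 --- (…every remaining module, each prefixed with /esxi50…)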

9 – Restart services to load the service configurations and configure to start with server

  • /etc/init.d/dhcpd restart
  • /etc/init.d/atftpd restart
  • chkconfig --add dhcpd
  • chkconfig --add atftpd

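A quick sanity check that both daemons came back up and are set to start with the appliance (commands assume the stock SLES userland on the VCVA):

  • chkconfig dhcpd ; chkconfig atftpd          (both should report "on")
  • netstat -ulnp | egrep ':67 |:69 '           (dhcpd listening on 67/udp, atftpd on 69/udp)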

10 – Take a break

You made it this far… great job. At this point, we have configured DHCP, TFTP, and pxelinux, copied the installation media to the TFTP root, and adjusted the installation files for our organizational purposes.

11 – Start your host and install away

[Screenshots: the client PXE boots, pulls down the installer, and runs through the ESXi 5.0 installation.]

[Screenshot: reading the Kickstart script. No need to enter customization info anymore.]

[Screenshot: checking the contents of the Kickstart file. You will see errors here if there are errors in the file.]