Three months ago, I finished what would end up being the most important experiment of my career… I had been happily working away for 9 years at the same company, progressing through the ranks. I was in the middle of a conversation with a buddy when he asked “Do you like being a generalist or a specialist?” Simple enough question… harmless, really. However, the reality was that I could not actually answer it with any certainty. My opinion on being a specialist was pure conjecture, as I only knew 9 years of being a generalist.
Coincidentally, I was approached by a VMware TAM regarding an opportunity to work with a team in a Virtualization group in Portland. There was nothing I did not like about my original job… to this day, I cannot pinpoint anything I did not like. So, I accepted the position and, as gracefully and gratefully as possible, bowed out to join a team of some amazing individuals in the Virtualization group. The scale, the processes, the mechanics, the exposure, and the potential were all fantastic. Though, something was missing:
- Traveling around the world.
- Designing datacenters.
- Mergers and acquisitions.
- Evaluating technologies and systems that can have a significant impact on the direction and success of a company.
- Designing and implementing systems from soup to nuts.
The end of the 9 month period led to the most clarity about my career that I had had since starting at the original company 9 years prior. I love being a generalist. I can say that with a solid level of confidence now, as I have experienced what I believe to be representative of a specialist. How, then, to resolve this issue? While leaving my new virtualization post was the obvious answer, going somewhere else was going to be the tough task. Where in the world could I get my fix for everything I realized I find value in?
I approached my wife about the situation and before I could even tell her my idea, she beat me to the punch: Go back to where you came from.
As luck would have it, my position had not been filled yet. I approached my old boss and started a process that took me back to my original company. Though, in an upgraded position that is serving as the most challenging and enjoyable role yet.
Three months ago, I finished being a specialist and returned to my original company as the Lead Technical Systems Analyst / Architect. “Architect” is a title I have been striving for and, quite honestly, is flattering and the biggest challenge ever.
Aumie, Steve, Brian, Jeff, Tod, Will, Walter, Jon, and Rich were amazing coworkers and they were the most difficult thing to leave behind. But walking into the old office, sitting at my pristine (though dusty) desk, and having a myriad of people (including almost every CxO and VP) welcome me back made for such an amazing day.
I am back. I am a generalist. I could not be any happier!
For a number of years, VMware has provided some amazing lab content during the VMworld conferences. From year to year, the labs progressed from delivered onsite, to cloud bursting, to majority cloud, and, finally, to completely cloud provided. It was inevitable that one day… some day… the labs would be opened up to the general populace. It turns out that Tuesday, November 13th was that fateful day. At the Portland VMUG Conference, Mr. Pablo Roesch (@heyitspablo) and Mr. Andrew Hald (@vmwarehol) released the Hands On Labs (BETA) onto the world!
Pablo and Andrew were kind enough to help me get access just before the announcement. And, I must say, I have been so impressed with what VMware has produced.
It is important to remember that this is a BETA offering (not a perpetual BETA like Gmail’s), so experiencing some bugs and bumps in the road should be expected. With that being said, however, the quality of the content, the delivery mechanisms, and the UI are top notch.
Let’s take a quick look at what the environment looks like:
Upon logging into the HOL environment, you are greeted with a couple notable components:
- Enrollments and Lab Navigation
- This section, on the left, allows you to view the labs you have enrolled in, filter the lab types (currently Cloud Infrastructure, Cloud Operations, and End-User Computing), and view your transcript.
- Standalone labs
- The actual entry to the lab content. Enroll, begin, continue, etc… from here. This is where the magic happens.
- Includes important updates as to new lab content, product YouTube videos, Twitter stream, etc…
All in all, the home page is very streamlined and efficient.
Using a Lab
Upon enrolling and beginning a lab, you are presented with:
In the background, vCloud Director is deploying your lab environment. It looks like VMware is truly eating its own dog food with this product. No more Lab Manager for this type of offering.
The interface features some very well thought out design decisions that help present the lab content, and the VMs, in a very logical and convenient way. The HOL team heavily leveraged HTML5 to accomplish the magic:
The simple placement of tabs on the left and right sides allows the console of the VM to consume the majority of the screen while minimizing the number of times the user needs to switch between applications for information. All of the info is there. Plus, as the user scrolls down the screen, the tabs remain visible, always ready to be used.
Use the lab manual content to navigate through the labs at your leisure. Note, though, there is a time limit. This is shared infrastructure. So, if you are idle for too long, HOL Online will time you out and close up shop so others can use the same infrastructure. Don’t worry, though, the content will resume when you return later.
Bugs I Have Found
Yes… as mentioned above, this is a BETA. Did I mention this is a BETA? Because it is a BETA. So, running into some bugs is expected. Don’t worry, though; I’ll participate in the forum to report them.
Thus far, I have found a few funky bugs:
- The mouse can disappear when a console is open
- The mouse can disappear when multiple browsers are open and the HOL window is not active
- Some lab manuals are not available
- Occasionally, the HOL interface will hang at Checking Status. Force a cache refresh in your browser.
If you want to get involved in testing the functionality and getting a taste of what the HOL Online is all about:
- Acknowledge that this is a BETA at this time. Don’t expect complete perfection right now.
- Sign up for the BETA at http://hol.vmware.com
- Notice that the signup site is a forum. You can check out the bugs and commentary from other beta testers to get a feel for what you’re going to experience.
- Commit to participating in the beta community. Pablo and Andrew are not going to come to your house and take your vSphere licenses away for not participating. But this is a unique opportunity to contribute to the success of a public product like this. Take advantage of it. Help the VMware community by contributing to it!
Well done, HOL team. I look forward to seeing what this turns into in the future!
Alright… so in September, Cisco announced Q4 2012 availability of a new version of the Nexus 1000v virtual switch. Of course, the release was going to have so many features and functions that there would be no way we could not justify paying the $695/CPU (plus support) that they charged for the virtual switch.
A mere 4 hours into October, Cisco published a blog post (http://blogs.cisco.com/datacenter/new-nexus-1000v-free-mium-pricing-model/) dramatically reducing the cost of playing with their toys to a big fat $0 (plus support). Check out the breakdown of the features:
[Note: the graphic above was taken from the blog post and can be found by following the link above]
I have three very different reactions to the announcement, and I am struggling to determine which one wins out:
1. Nice move, Cisco
This was a very smart move on Cisco’s part. Adoption of the Nexus 1000v has been anything but spectacular. The functionality sounds great from a high level. However, when compared with the price and availability of the VMware distributed virtual switch that is available out of the box, why make the jump to a per-CPU solution?!
Plus, with the virtual networking changes coming our way, getting customers to buy into the Cisco ecosystem now will get them locked in for whatever the future holds. Nicira has the potential to shake up virtual switching in upcoming releases.
2. Great… more people using Nexus 1000v
My experiences in $CurrentJob involve an environment with 1000v deployments all over the place… and the results have been less than spectacular. All too often, a networking issue arises that we lose visibility into, ports get blocked with no explanation, VSM upgrades fail, VEM upgrades fail, etc… To say I am a fan of the 1000v would be pretty far-fetched. Probably better stated: to say I am a fan of our 1000v implementation would be pretty far-fetched. I am sure there are more successful 1000v implementations out there. I just have a hard time recommending that people implement the 1000v in their environments when the provided distributed virtual switch is good enough.
3. Great! More people using Nexus 1000v
See what I did there?! (“…” vs “!”)
In my self-admittedly limited time with the Nexus 1000v, it really seems like there is a lack of people who know ANYTHING about the 1000v. Understanding the NX-OS side of the switch is critical (obviously). But understanding the nature of a virtual environment (especially with vCloud Director making more of a play in datacenters), and of the specific environment implemented (the 1000v will support the VMware, Microsoft, Xen, and KVM hypervisors), is equally important. The nature of the workloads and behaviors changes.
So, by having more people using the Nexus 1000v, there will be more and more people available who are “experts”, or at least legitimately “experienced”, with the product. This can bode well for future implementations, for sure.
Ultimately, I think this is a good move. Cisco is acknowledging that getting another “advanced” product into the ecosystem gratis helps drive purchasing of other Cisco products in the datacenter. Plus, at a much higher level, it is an acknowledgement of the direction the market is moving… and they’re trying to get a toe-hold before the SDN wave takes hold. Will this fix what I am working with? Not one bit. Will this fix what I will be working with in the future? With more people having experience, it just may.
Scale Computing… the company with a name that never really made sense… until today. You see, Scale Computing began in 2007 as a storage company. Its most recent product lines, the M Series and S Series, utilize GPFS (licensed from IBM) to provide a clustered filesystem for data consumption. Need more space? Just add a new node and let the cluster work its magic. Combine the simplicity of the clustered filesystem with a highly usable web-based management utility, and you get a simple and powerful storage system.
But… isn’t that storage, not really compute? Perhaps the “Computing” was foreshadowing for the most recent Scale Computing product release: the HC3 hyperconvergence system.
HC3 represents a significant change to the direction and focus of the company. Scale is utilizing IP and existing products in the storage realm and enhancing the storage experience with server virtualization functionality.
Let’s dig into some of the nitty gritty technical details we all want to see:
The virtualization functionality is provided via the highly powerful and versatile KVM hypervisor. The use of KVM in a system like this always raises an eyebrow or two. More often than not, KVM is relegated to the “Linux/UNIX geek” realm. The hypervisor is highly functional, feature rich, and has a very solid following in the Open Source community. New functionality, enhancements, and maintenance are constantly ongoing. Plus, with KVM in the OpenStack development line, KVM is just the beginning of where the Scale Computing layer could go in the future. Additionally, as a consumer of Open Source software in its solution, Scale contributes back to the community. Scale has slightly modified KVM to allow for better caching and has released that code back to the community for consumption elsewhere.
As mentioned above, Scale is building HC3 on its existing GPFS filesystem. KVM is a natural hypervisor selection, based on the fact that it operates at the same level in the OS as the filesystem… just another service installed in the OS.
Much of the expected virtual machine management functionality is present in the HC3 product:
- Live Migration
- Resource Balancing
- Thin Provisioning
- VM Failover/Restart On Node Failure
- Storage Replication
As far as hardware is concerned, the HC3 product is built on the same hardware as the M Series storage line. You can see more details here: http://scalecomputing.com/files/documentation/series_datasheet_1.pdf. Heck, existing Scale Computing customers can install a firmware upgrade on the M Series hardware to get HC3 functionality… gratis too.
Management of the environment, like the storage, is handled in a clustered fashion. Connecting to any of the node management IP addresses gives you the ability to control the entire cluster. There is no need for a centralized controller for management services. The management connection is handled by any HTML5-compliant browser. I guess this means an iPhone browser could be used (although I question the usefulness of such a form factor… it should work nonetheless).
Once logged into the management interface, a number of components can easily be managed: Virtualization, Storage, General, etc… If you have used the interface before, the only major difference between the pre-HC3 and post-HC3 interfaces is the addition of the Virtualization option. The interface is very simple, with very few options available. Navigation is logical and easy to use.
Scale Computing has made a very conscious decision to focus on the SMB/SME markets. These markets tend to employ a small number of IT personnel with limited knowledge and high expectations placed upon them. For the business, selecting a product that performs a role, is easy to use, and provides high levels of availability is extremely desirable.
Scale has identified the market and designed HC3 to reflect what their customers want:
- Easy administration
- Few configuration options
- Small form factor (1U servers)
- Support for common operating systems
What makes Scale Computing HC3 different?
One of the most significant differentiators for the HC3 product is its starting point. While it makes sense for THIS company, starting with a solid storage foundation and following with a virtualization plan is really a different path to a converged infrastructure product. Scale has developed a solid storage product and made a hypervisor decision that complements existing efforts while providing the functionality customers are looking for.
The ability to scale compute and storage resources is becoming an expectation and the norm in virtual environments. HC3 allows for two major ways to scale:
- Addition of a new HC3 node – This adds additional footprint for executing virtual machine workloads… plus additional storage.
- Addition of a new M Series node – This adds additional storage without the compute functionality.
Scaling by adding an M Series node, while possible, just does not make sense at this time. Adding compute resources to the HC3 cluster holds so much more potential benefit to a consumer that I find it hard to believe the storage-only option would be used. But, for what it is worth, it is an option.
Simple is better. For the target market, removal of complexity results in a predictable and, hopefully, low-touch compute environment. There is less of a need to have deep knowledge just to keep the compute environment functional. Scale has made a number of configuration decisions behind the scenes to reduce the load on customers.
What is missing?
With all the hotness that HC3 provides, a number of notable features are missing from this release:
- Solid reporting – Aside from some “sparkline”-esque performance graphs (on a per VM basis), the ability to look back on a number of statistics for any number of reasons (Ex: troubleshooting performance issues) just is not there. For the target market, this may be an acceptable risk. I do not necessarily agree, though.
- VM Snapshotting – At this time, the snapshotting functionality is achieved by snapshotting the entire filesystem.
- Crash Consistent Snapshots – The snapshots of the volumes are crash consistent: in the event a VM is restored from a snapshot, it is in a state that mimics a sudden loss of power… the server has crashed. So, reliance on OS and application recovery is necessary. It is probably a good idea to have backups. Pausing the VM, if possible in your environment, prior to taking a snapshot would help with consistency… but that is a stretch.
Virtual Bill’s Take
I absolutely love the focus on the SMB/SME markets. They need some TLC, for sure. By focusing on the creation of a utility virtualization device, the IT resources in the business can focus on moving the business forward rather than messing with complicated details. Howard Marks made a comment during a briefing from Scale: “Do you care if your car has rack-and-pinion steering? I just want to get in and drive.” This solution addresses the need to get in and drive. I can see this being very appealing to quite a number of companies out there.
Scale Computing is an up-and-comer in the converged infrastructure game going on now. Converged infrastructure is making a play in the corporate IT ecosystem that challenges traditional thinking… and that is always a good thing. Selection of KVM as the hypervisor is logical, but it is going to be a hurdle to overcome. KVM just does not have the household recognition of the other hypervisor vendors. So, getting support from many directions is going to be challenging. But, after speaking with Scale Computing, they’re up for the challenge.
If they play their cards right, HC3 could help usher in a flood of new customers to Scale and a change in how SMB/SMEs operate server environments.
Come one! Come all! Watch the spectacle that is v0dgeball at VMworld 2012!
This year, a number of awesome vPeople have come together for a brief time to create an amazing team of dodgeballers to dominate the tournament: Team vGlobo.
The team is comprised of:
- Bill Hill
- Arjan Timmerman
- Brandon Riley
- Gabrie van Zanten
- Chris Emery
- Josh Townsend
- Jason Shiplett
- Mike Ellis
- Dwayne Lessner
- Joseph Boryczka
If you’re interested in watching the awesomeness that is Team vGlobo, please come and check out the tournament:
SUNDAY, AUGUST 26 @ 4-6PM
SOMA REC CENTER – CORNER OF FOLSOM & 6TH ST
In all seriousness (if you made it this far), the v0dgeball tournament is something that I am very proud to be a part of. The proceeds for the dodgeball tournament go to the Wounded Warrior Project.
The Wounded Warrior Project provides support for US servicemen and servicewomen injured while serving our country. They provide a ton of services and support, including:
- Stress recovery
- Family support
- Career transition
- Employment placement
- Adaptive sporting events
- Assistance with government/insurance claims
- And so much more
At the time of this post, the v0dgeball 2012 project has raised $10,705.00 for the Wounded Warrior Project. How cool is that?! Really!? Just thinking about the impact that $10,000+ can have in thanking and supporting servicemen and servicewomen for their amazing sacrifice is awe inspiring.
I cannot thank Chad Sakac, Fred Nix, and EMC enough for organizing such a fun and honorable activity for the VMworld attendees… and thanks to all of the participants in the tournament.
Team vGlobo is honored to take the floor, throw some balls in the faces of opponents (there’s a joke in there somewhere), and support such an amazing organization.
Go Team vGlobo!!!
For the past 9 years, I have had the privilege of working in a fast-growing, dynamic, and fun environment. I started as a summer intern and worked hard to assume full design and implementation responsibility for all IT infrastructure. I was provided with experiences I never would have thought possible… including significant travels in Asia. Management fostered an environment in which I could grow personally, as an IT professional, and in my blogging/social media life.
However, sometime earlier this year, I was approached with another opportunity by an outside group. The opportunity provided was significant and very appealing… tugging at my virtualization heart-strings. Suddenly, I was put into a position where I really needed to decide which direction I wanted to take my IT career. Continue on the IT generalist route (broad and deep in a handful of areas) or specialize more in virtualization (narrow but significantly deeper in the virtualization core pillars).
After much debate, list making, and lost sleep, I decided to take a risk and accept the offer for the new position.
Leaving my post at a company I have grown with, and that has grown with me, was tremendously difficult. The company fostered a family-like environment and I genuinely like everyone I work with. The company is full of rock stars and growing like a weed (but a good weed that will turn into something cool in the future). But I would be remiss not to take advantage of this new opportunity.
The new opportunity puts me into a Senior role working with a truly enterprise-level environment… at a scale that is just mind boggling (almost 2 physical servers for every 1 employee at my former employer). The prospect of the position is exciting and I cannot wait to start. Plus, I am going to be working in close proximity (a couple blocks apart) to some VMware/virtualization rock stars and one of my fellow PDX VMUG leaders.
I am really going to miss my former employer and opportunities there, though. This is just a blip on their radar. I am positive the IT department is going to step up and make the infrastructure their own, just like I feel I was able to while there. But, I look forward to meeting everyone at the new job and getting deeper and dirtier into VMware virtualization!
PS – I apologize for the vagueness of this post. However, I try to make a point of not calling out my employer on my blog… which the old company and the new company both appreciate. So, I hope you were able to follow along. :-)
Let’s be honest, x86-based compute in virtualization environments is pretty darn boring. A server has become the necessary evil required for enabling the coolness that is virtualization.
But, don’t let the boringness of servers fool you. VMware has enabled a new breed of hybrid servers that are both server AND storage all-in-one! This new paradigm adds some new methods and models for virtualization design and functionality.
Conceptually, the server boots into an ESXi environment and fires up a guest OS. This guest OS is the virtual storage appliance (VSA) and provides the storage for the local server. The guest makes use of VMDirectPath functionality to take control of a locally installed storage controller connected to the local disks. The result is that the VM can access the local disks directly and ESXi cannot. The local disk is now directly connected to the VM. How cool is that?!
Once the guest OS has the disks, the guest creates various storage options (block or file, object or RAID, etc…). The ESXi host is then configured to connect to IP storage provided by the guest. The first, typical reaction may be to wonder about the reason to add this level of complexity. For a standalone host with local storage (think ROBO), this may be a little overkill. But the advantage comes into play when you consider flexibility and new functionality.
By moving control of local storage into the VM, more advanced functions can be performed. Local storage use by ESXi is fairly limited. The VSA, though, can use the storage a little more liberally.
Take Pivot3, for example. Their VDI and surveillance solutions make use of this storage technique. The vSTAC OS (the Pivot3 VSA) creates a RAID across the local disks. Yawn, right?! Where the coolness comes in is when multiple nodes are "connected". vSTAC OS instances on other Pivot3 servers combine and RAID across multiple hosts. Suddenly, local storage is combined with local storage from other hosts to create a big clustered pool of available storage! This clustered environment allows for added resiliency and performance, as the data is no longer restricted to the local host and is distributed to help protect against local storage issues.
Once the vSTAC OS nodes connect their storage together, data is spread across all of the other nodes to immediately protect the data and enhance performance. A new node can be added in the future. Once the new node is added, the data is automatically rebalanced across all hosts to ensure proper protection and efficient usage of the storage. Dynamic add of storage and compute is fantastic!
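To make the rebalancing idea concrete, here is a toy sketch of how block placements might be leveled out when a node joins a cluster. To be clear, this is my own illustration, not Pivot3's (or Scale's) actual algorithm; the block and node names are made up:

```python
# Toy sketch of rebalancing after a node is added: blocks are spread as
# evenly as possible, and the new node pulls only enough blocks from the
# fullest nodes to level the load (minimizing data movement).

def rebalance(placement, nodes):
    """Return a new block -> node placement spread evenly over `nodes`,
    moving as few blocks as possible."""
    target, extra = divmod(len(placement), len(nodes))
    caps = [target + (1 if i < extra else 0) for i in range(len(nodes))]
    buckets = {n: [] for n in nodes}
    displaced = []  # blocks that must move (node removed, or over its cap)
    for blk, node in placement.items():
        if node in buckets:
            buckets[node].append(blk)
        else:
            displaced.append(blk)
    # Trim overfull nodes down to their fair share...
    for n, cap in zip(nodes, caps):
        while len(buckets[n]) > cap:
            displaced.append(buckets[n].pop())
    # ...and refill underfull nodes with the displaced blocks.
    for n, cap in zip(nodes, caps):
        while len(buckets[n]) < cap and displaced:
            buckets[n].append(displaced.pop())
    return {blk: n for n, blks in buckets.items() for blk in blks}

# Two nodes hold four blocks each; then a third node is added:
old = {f"b{i}": ("node1" if i < 4 else "node2") for i in range(8)}
new = rebalance(old, ["node1", "node2", "node3"])
moved = sum(1 for b in old if new[b] != old[b])
# Only 2 of the 8 blocks move, and every node ends up with 2 or 3 blocks.
```

Real systems layer replication and failure handling on top of this, but the core idea is the same: add a node, shift a minimal slice of the data onto it, and the whole pool is level again.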
The VSA VM can perform additional functions if desired (and developed as such) like: deduplication, replication, compression, etc…
I love this type of innovation. There are many use cases for solutions like this. The Pivot3 solution has a lot of potential for success in their target markets. I have concerns about the selection of RAID versus object storage, though… but that is their decision. Traditional RAID5 systems suffer heavily from a disk failure and rebuild… performance tanks until the failed disk has been replaced. In the event of a failure in the Pivot3 solution, the entire solution may suffer until the offending disk has been replaced. But, with that said, I believe the benefits of the technique outweigh the potential performance hit.
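To illustrate why a rebuild is so punishing, here is a minimal XOR-parity sketch. It is a simplification of real RAID5 (which also rotates parity across disks), but it shows the key point: recovering one lost block requires reading every surviving block in the stripe, and that read load is exactly what tanks performance during a rebuild.

```python
# Minimal XOR-parity sketch (simplified RAID5). The parity block is the
# XOR of all data blocks, so rebuilding one lost block means reading
# EVERY surviving block in the stripe plus the parity block.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

stripe = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(stripe)

# Disk 1 dies; recover its block from the survivors plus parity:
recovered = xor_blocks([stripe[0], stripe[2], parity])
assert recovered == stripe[1]
```

Since XOR is its own inverse, XOR-ing the parity with the surviving data blocks cancels them out and leaves the missing block; the cost is touching every remaining disk for every stripe being rebuilt.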
This style of architecture really bucks the trend of needing a separate SAN/NAS in addition to compute. Adding sophistication to the VSA component and introducing more SSD/flash-based storage could create an interesting and valid competitor to traditional SAN/NAS solutions and breathe new life into boring servers.