Very often, we take so many things for granted. Take Wi-Fi access on airplanes for example… or flying, for that matter (YouTube/Louis CK on Conan).
For those of us working in IT (vendor, consulting, customer, academia, etc…), we are surrounded by some of the most amazing things EVER. I was reminded of this fact earlier this morning when browsing Slashdot. An article, “Linux Kernel 4.7 Officially Released” was posted on the site and it just blew my mind.
To be honest, nothing in that list of new features is all that exciting to me. Radeon RX 480 GPUs, sync_file fencing mechanisms, and the “schedutil” frequency governor are all very foreign to me. Regardless of utility, the complexity of it all is mind boggling and beautiful.
The level of effort, detail, and functionality in the kernel is fantastic… and it’s not limited to Linux. Windows, UNIX, ESXi, NX-OS, etc…
The fact that we have these tiny little machines in our presence with increasingly stronger, faster, and more intelligent components with such utility is a borderline miracle. The fact that I can identify a need to connect an Xbox Elite controller to a Linux environment AND IT WILL WORK is amazing. The fact that I can sit at my desk with 3 monitors, type on a piece of plastic with buttons, click Publish, and share a little thought with whomever is reading this is an amazing feat.
Am I waxing poetic or off in my own little world? Maybe a little… but, when things get frustrating and you’re having “one of those days”, step away from whatever you’re doing and try to appreciate the little pieces of magic that happen all the time to enable you to do whatever your job is.
Back in May 2013, I composed my most recent post… something about career clarity, how I love being a generalist, and how I love being back… back from whence I came.
I cannot believe it has been about 2.5 years since my last post. Seriously. What the heck happened?
With all honesty…
1) I have been devoured by work, family, and spinning plates. Expanding family, architect role, company exploding with growth (and all the growing pains that come with it). I have not been too sure about where to find the time to write with consistency and quality.
2) Migration from virtualization. My focus for many years was around virtualization and VMware, specifically. Wherever those areas intersected with technology was fair game. Occasionally, I would dabble in other areas. But, only areas in which I felt some level of command over the information versus just being a dude with an opinion. With my role in the company, I just could not apply the same level of focus to Virtualization as before. I tried to keep up, but it just was not in the cards.
3) What is my purpose on the Internets? Again… with a focus on virtualization and other areas as exposed, it made sense. But, as I have evolved in my career, I began to question it.
So… I am back… Virtual Bill is back. What does that mean? Well… I’d like you to follow along the journey with me. There may be content about virtualization (ah… my home away from home), infrastructure (storage, network, WAN, whatever), security, cloud, project management, engineering, architecture, processes/walkthroughs, dealing with the business, or anything else that tickles my fancy. Plus, I like to help.
I realize I may have disappeared from some of the communities out there (TechFieldDay/GestaltIT, VMUG, etc…) and I apologize for that. I hope to re-enter some areas, enter new areas, or even create new areas if it makes sense.
At the end of the day, I am still a generalist… I just have different perspectives than 2.5 years ago.
So… if you’re reading this… thanks for following me, reading the 0’s and 1’s of my thoughts, and coming along with me.
Three months ago, I finished what would end up being the most important experiment of my career… I was happily working away for 9 years at the same company, progressing through the ranks. I was in the middle of a conversation with a buddy when he asked “Do you like being a generalist or a specialist?” Simple enough question… harmless really. However, the reality is that I could not actually answer it with any certainty. My opinion on being a specialist was pure conjecture as I only knew 9 years of being a generalist.
Coincidentally, I was approached by a VMware TAM regarding an opportunity to work with a team in a Virtualization group in Portland. There was nothing I did not like about my original job… to this day, I cannot pinpoint anything I did not like. So, I accepted the position and, as graceful and grateful as possible, bowed out to join a team of some amazing individuals as part of the Virtualization group. The scale, the processes, the mechanics, the exposure, and the potential were all fantastic. Though, something was missing.
- Traveling around the world.
- Designing datacenters.
- Mergers and acquisitions.
- Evaluating technologies and systems that can have a significant impact on the direction and success of a company.
- Designing and implementing systems from soup to nuts.
The end of the 9 month period led to the most clarity about my career that I had experienced since starting at the original company 9 years prior. I love being a generalist. I can say that with a solid level of confidence now as I have experienced what I believe to be representative of a specialist. How, then, to resolve this issue? While leaving my new virtualization post was the obvious answer, going somewhere else was going to be the tough task. Where in the world could I get my fix for everything I realized I find value in?
I approached my wife about the situation and before I could even tell her my idea, she beat me to the punch: Go back to where you came from.
As luck would have it, my position had not been filled yet. I approached my old boss and started a process that took me back to my original company. Though, in an upgraded position that is serving as the most challenging and enjoyable role yet.
Three months ago, I finished being a specialist and returned to my original company as the Lead Technical Systems Analyst / Architect. “Architect” is a title I have been striving for and, quite honestly, is flattering and the biggest challenge ever.
Aumie, Steve, Brian, Jeff, Tod, Will, Walter, Jon, and Rich were amazing coworkers, and leaving them behind was the most difficult part. But, walking into the old office, sitting at my pristine (though dusty) desk, and having a myriad of people (including almost every CxO and VP) welcome me back was such an amazing day.
I am back. I am a generalist. I could not be any happier!
For a number of years, VMware has provided some amazing lab content during the VMworld conferences. From year to year, the labs progressed from delivered onsite, to cloud bursting, to majority cloud, and, finally, to completely cloud-provided. It was inevitable that one day… some day… the labs would be opened up to the general populace. As it turns out, Tuesday, November 13th was that fateful day. At the Portland VMUG Conference, Mr. Pablo Roesch (@heyitspablo) and Mr. Andrew Hald (@vmwarehol) released the Hands On Labs (BETA) onto the world!
Pablo and Andrew were kind enough to help me get access just before the announcement. And, I must say, I have been so impressed with what VMware has produced.
It is important to remember that this is a BETA offering (not quite like a Google BETA (ex: Gmail)), so experiencing some bugs and bumps in the road should be expected. However, with that being said, the quality of the content, delivery mechanisms, and the UI are top notch.
Let’s take a quick look at what the environment looks like:
Upon logging into the HOL environment, you are greeted with a couple notable components:
- Enrollments and Lab Navigation
- This section, on the left, allows you to view the labs you have enrolled for, filter the lab types (currently Cloud Infrastructure, Cloud Operations, and End-User Computing), and view your transcript.
- Standalone labs
- The actual entry to the lab content. Enroll, begin, continue, etc… from here. This is where the magic happens.
- Includes important updates as to new lab content, product YouTube videos, Twitter stream, etc…
All in all, the home page is very streamlined and efficient.
Using a Lab
Upon enrolling and beginning a lab, you are presented with:
In the background, vCloud Director is deploying your lab environment. Looks like VMware is truly eating its own dog food with this product. No more Lab Manager for this type of offering.
The interface features some very well-thought-out design decisions that help present the lab content, and VMs, in a very logical and convenient way. The HOL team heavily leveraged HTML5 to accomplish the magic:
The simple placement of tabs on the left and right sides allows for the console of the VM to consume the majority of the screen while minimizing the amount of times the user needs to switch between applications for information. All of the info is there. Plus, as the user scrolls down the screen, the tabs remain visible, always ready to be used.
Use the lab manual content to navigate through the labs at your leisure. Note, though, there is a time limit. This is shared infrastructure. So, if you are idle for too long, HOL Online will time you out and close up shop so others can use the same infrastructure. Don’t worry, though, the content will resume when you return later.
Bugs I Have Found
Yes… as mentioned above, this is a BETA. Did I mention this is a BETA? Because it is a BETA. So, running into some bugs is expected. Don’t worry, though, I’ll participate in the forum to report them.
Thus far, I have found a few funky bugs:
- The mouse can disappear when a console is open
- The mouse can disappear when multiple browsers are open and the HOL window is not active
- Some lab manuals are not available
- Occasionally, the HOL interface will hang at Checking Status. Force a cache refresh in your browser.
If you want to get involved in testing the functionality and getting a taste of what the HOL Online is all about:
- Acknowledge that this is a BETA at this time. Don’t expect complete perfection right now.
- Sign up for the BETA at http://hol.vmware.com
- Notice that the signup site is a forum. You can check out the bugs and commentary from other beta testers to get a feel for what you’re going to experience
- Commit to participating in the beta community. Pablo and Andrew are not going to come to your house and take your vSphere licenses away for not participating. But, this is a unique opportunity to contribute to the success of a public product like this. Take advantage of it. Help the VMware community by contributing to it!
Well done, HOL team. I look forward to seeing what this turns into in the future!
Alright… so, in September, Cisco announced Q4 2012 availability of a new version of the Nexus 1000v virtual switch. Of course, the release was going to have so many features and functions that there would be no way not to justify paying the $695/CPU (plus support) that they charged for the virtual switch.
A mere 4 hours into October, a blog post was published from Cisco (http://blogs.cisco.com/datacenter/new-nexus-1000v-free-mium-pricing-model/) dramatically reducing the cost of playing with their toys to a big fat $0 (plus support). Check out the breakdown of the features:
[Note: the graphic above was taken from the blog post and can be found by following the link above]
I have three very different reactions to the announcement, and I am struggling to determine which one wins out:
1. Nice move, Cisco
This was a very smart move on Cisco’s part. The adoption of the Cisco 1000v has been anything but spectacular. Functionality sounds great from a high level. However, when compared with the price and availability of the VMware Distributed Virtual Switch that is available out of the box, why make the jump to a per CPU solution?!
Plus, with the up-and-coming virtual network changes headed our way, getting customers to buy into the Cisco ecosystem now will keep them locked in for whatever the future holds. Nicira has the potential to shake up virtual switching in upcoming releases.
2. Great… more people using Nexus 1000v
My experiences in $CurrentJob involve an environment with 1000v deployments all over the place… and the results have been less than spectacular. All too often, a networking issue arises that we lose visibility into: ports being blocked with no explanation, VSM upgrade failures, VEM upgrade failures, etc… To say I am a fan of the 1000v would be pretty far-fetched. Probably better stated: to say I am a fan of our 1000v implementation would be pretty far-fetched. I am sure 1000v implementations out there are more successful. I just have a hard time recommending that people implement the 1000v in their environments when the provided distributed virtual switch is good enough.
3. Great! More people using Nexus 1000v
See what I did there?! (“…” vs “!”)
In my self-admitted limited time with the Nexus 1000v, it really seems like there is a lack of people who know ANYTHING about the 1000v. Understanding the NX-OS side of the switch is critical (obviously). But understanding the nature of a virtual environment (especially with vCloud Director making more of a play in datacenters), and the specific environment implemented (the 1000v will support VMware, Microsoft, Xen, and KVM hypervisors), is equally important. The nature of the workloads and behaviors changes.
So, by having more people using the Nexus 1000v, there will be more and more people available as “experts” or at least legitimately “experienced” with the product than before. This can bode well for future implementations for sure.
Ultimately, I think this is a good move. Cisco is acknowledging that getting another “advanced” product into the ecosystem gratis helps drive purchasing for other Cisco products in the datacenter. Plus, at a much higher level, it is an acknowledgement of the direction the market is moving… and they’re trying to get a toe-hold before the SDN wave takes hold. Will this fix what I am working with? Not one bit. Will this fix what I will be working with in the future? With more people having experience, it just may.
Scale Computing… the company with a name that never really made sense… until today. You see, Scale Computing began in 2007 as a storage company. Their most recent product lines, the M Series and S Series products, utilized HGFS (licensed through IBM) to provide a clustered filesystem for data consumption. Need more space… just add a new node and let the cluster work its magic. Combine the simplicity of the cluster filesystem with the creation of a highly usable web-based management utility, and you get a simple and powerful storage system.
But… isn’t that storage? Not really compute? Perhaps the “Computing” was foreshadowing for the most recent Scale Computing product release: HC3 Hyperconvergence system.
HC3 represents a significant change to the direction and focus of the company. Scale is utilizing IP and existing products in the storage realm and enhancing the storage experience with server virtualization functionality.
Let’s dig into some of the nitty gritty technical details we all want to see:
The virtualization functionality is provided via the highly powerful and versatile KVM hypervisor. The use of KVM in a system like this always raises an eyebrow or two. More often than not, KVM is relegated to the “Linux/UNIX geek” realm. The hypervisor is highly functional, feature rich, and has a very solid following in the Open Source community. New functionality, enhancements, and maintenance are constantly ongoing. Plus, with KVM in the OpenStack development line, KVM is just the beginning of where the Scale Computing layer could go in the future. Additionally, as a consumer of Open Source software in its solution, Scale contributes back to the community. Scale has slightly modified KVM to allow for better caching and has released that code back to the community for consumption elsewhere.
As mentioned above, Scale is building the HC3 on their existing HGFS filesystem. KVM is a natural hypervisor selection based on the fact that it operates at the same level in the OS as the filesystem… just another service installed in the OS.
Much of the expected virtual machine management functionality is present in the HC3 product:
- Live Migration
- Resource Balancing
- Thin Provisioning
- VM Failover/Restart On Node Failure
- Storage Replication
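The thin provisioning item in that list is the easiest to picture. Here is a minimal, hypothetical Python sketch of the underlying idea using a sparse file: the “disk” advertises a large apparent size while real blocks are only allocated as data is written. This illustrates the concept only and is not how Scale’s filesystem actually implements it:

```python
import os
import tempfile

# A sparse file behaves like a thinly provisioned disk: the apparent size
# is large, but the filesystem only allocates blocks where data is written.
path = os.path.join(tempfile.mkdtemp(), "thin.img")
virtual_size = 10 * 1024**3  # advertise 10 GiB to the "guest"

with open(path, "wb") as f:
    f.truncate(virtual_size)  # set the apparent size without writing data
    f.seek(4096)
    f.write(b"guest data")    # only this small region consumes real blocks

st = os.stat(path)
print("apparent size:", st.st_size)           # the full 10 GiB
print("actual usage :", st.st_blocks * 512)   # just a few KiB
```

The gap between the two numbers is the whole point: the hypervisor can hand out far more virtual disk capacity than physically exists, as long as actual writes stay within the real pool.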
As far as hardware is concerned, the HC3 product is built on the same hardware as the M Series storage line. You can see more details here: http://scalecomputing.com/files/documentation/series_datasheet_1.pdf. Heck, existing Scale Computing customers can install a firmware upgrade on the M Series hardware to get HC3 functionality… gratis too.
Management of the environment, like the storage, is handled in a clustered fashion. Connection to any of the node management IP addresses results in the ability to control the entire cluster. No need for a centralized controller for management services. The connection for management is handled by any HTML5 compliant browser. I guess this means an iPhone browser could be used (although, I question the usefulness of such a form factor… but, it should work nonetheless).
Once logged into the management interface, a number of components can easily be managed: Virtualization, Storage, General, etc… If you have used the interface before, the only major difference between the pre-HC3 and post-HC3 interfaces is the addition of the Virtualization option. The interface is very simplistic with very few options available. Navigation is logical and easy to use.
Scale Computing has made a very conscious decision to focus on the SMB/SME markets. These markets tend to employ a small number of IT personnel with limited knowledge and high expectations placed upon them. For the business, selecting a product that performs a role, is easy to use, and provides high levels of availability is extremely desirable.
Scale has identified the market and designed HC3 to reflect what their customers want:
- Easy administration
- Few configuration options
- Small form factor (1U servers)
- Support for common operating systems
What makes Scale Computing HC3 different?
One of the most significant differentiators for the HC3 product is its starting point. Beginning with a solid storage foundation and following with a virtualization plan is a different path to a converged infrastructure product, but it makes sense for THIS company. Scale has developed a solid storage product and made a hypervisor decision that complements existing efforts while providing the functionality customers are looking for.
The ability to scale compute and storage resources is becoming an expectation and the norm in virtual environments. HC3 allows for two major ways to scale:
- Addition of a new HC3 node – This adds additional footprint for executing virtual machine workloads… plus additional storage.
- Addition of a new M Series node – This adds additional storage without the compute functionality.
Scaling by adding an M Series node, while possible, just does not make sense at this time. Adding compute resources to the HC3 cluster holds so much more potential benefit to a consumer that I find it hard to believe the storage-only option would be used. But, for what it is worth, it is an option.
Simple is better. For the target market, removal of complexity results in a predictable and, hopefully, low-touch compute environment. There is less of a need to have deep knowledge just to keep the compute environment functional. Scale has made a number of configuration decisions behind the scenes to reduce the load on customers.
What is missing?
With all the hotness that HC3 provides, a number of notable features are missing from this release:
- Solid reporting – Aside from some “sparkline”-esque performance graphs (on a per VM basis), the ability to look back on a number of statistics for any number of reasons (Ex: troubleshooting performance issues) just is not there. For the target market, this may be an acceptable risk. I do not necessarily agree, though.
- Per-VM snapshotting – At this time, snapshot functionality is achieved only by snapshotting the entire filesystem.
- Crash Consistent Snapshots – The snapshots of the volumes are crash consistent — in the event a VM is restored from a snapshot, it is in a state that mimics a sudden loss of power… the server has crashed. So, reliance on OS and application recovery is necessary. Probably a good idea to have backups. Pausing the VM, if possible in your environment, prior to taking a snapshot would help in stability… but, that is a stretch.
Virtual Bill’s Take
I absolutely love the focus on the SMB/SME markets. They need some TLC, for sure. By focusing on creation of a utility virtualization device, the IT resources in the business can focus on moving the business forward rather than messing with complicated details. Howard Marks made a comment during a briefing from Scale: “Do you care if your car has rack and pinion steering? I just want to get in and drive”. This solution addresses the need to get in and drive. I can see this as being very appealing to quite a number of companies out there.
Scale Computing is an up-and-comer in the converged infrastructure game going on now. Converged infrastructure is making a play in the corporate IT ecosystem that challenges traditional thinking… and that is always a good thing. Selection of KVM as the hypervisor is logical, but it is going to be a hurdle to overcome. KVM just does not have the household recognition that other hypervisors enjoy. So, getting support from many directions is going to be challenging. But, after speaking with Scale Computing, they’re up for the challenge.
If they play their cards right, HC3 could help usher in a flood of new customers to Scale and a change in how SMB/SMEs operate server environments.
Come one! Come all! Watch the spectacle that is v0dgeball at VMworld 2012!
This year, a number of awesome vPeople have come together for a brief time to create an amazing team of dodgeballers to dominate the tournament: Team vGlobo
The team is comprised of:
- Bill Hill
- Arjan Timmerman
- Brandon Riley
- Gabrie van Zanten
- Chris Emery
- Josh Townsend
- Jason Shiplett
- Mike Ellis
- Dwayne Lessner
- Joseph Boryczka
If you’re interested in watching the awesomeness that is Team vGlobo, please come and check out the tournament:
SUNDAY, AUGUST 26 @ 4-6PM
SOMA REC CENTER – CORNER OF FOLSOM & 6TH ST
In all seriousness (if you made it this far), the v0dgeball tournament is something that I am very proud to be a part of. The proceeds for the dodgeball tournament go to the Wounded Warrior Project.
The Wounded Warrior Project provides support for US service men and women injured while serving our country. They provide a ton of services and support, which includes:
- Stress recovery
- Family support
- Career transition
- Employment placement
- Adaptive sporting events
- Assistance with government/insurance claims
- And so much more
At the time of this post, the v0dgeball 2012 project has raised $10,705.00 for the Wounded Warrior Project. How cool is that?! Really!? Just thinking about the impact that $10,000+ can have to thank and support servicemen/servicewomen for their amazing sacrifice is awe inspiring.
I cannot thank Chad Sakac, Fred Nix, and EMC enough for organizing such a fun and honorable activity for the VMworld participants… and for all of the participants of the tournament.
Team vGlobo is honored to take the floor, throw some balls in faces of opponents (there’s a joke in there somewhere), and support such an amazing organization.
Go Team vGlobo!!!