VCP6-NV Network Virtualization Exam Prep And Results

Hard to believe that a mere 8 hours ago, I sat for the VCP6-NV (2V0-642) exam. 77 questions and about 60 minutes later, I walked out as a newly minted VCP!

Truth be told, I have not needed to study like this for quite some time… likely not since getting my undergrad many years ago. The type of learning I adapted to in the real world was more bursty, need-driven, and broad. So, I really needed to clean the rust off those old studyin’ routines and get to work.

The Internets were massively helpful, not only in identifying what to study but also in confirming what I thought would be the correct content. This post is my way of paying back for the help I got. If you’re here for the actual test questions and answers, you’re in the wrong place…

Please keep in mind that this is not meant to be prescriptive. Rather, this is what worked for the type of learner I am.

What did I use to study?!

There are some awesome content creators out there with amazing reviews and success stories (Pluralsight, vBrownbags, etc…). I did not use them. I felt like trying to focus on the information from VMware would be the most effective use of my time.

  • NSX: Install, Manage, and Configure [6.2] – On Demand
  • VMware Education NSX Practice Exam
  • NSX 6.2 – Admin Guide
  • NSX 6.2 – Design Guide
  • VMware Learning Zone – NSX Exam Prep
  • VCP6-NV Exam Blueprint
  • Hands on Labs

The ICM course’s On Demand structure worked really well. I was concerned about my usual preferred learning style conflicting with the presentation and lab format of the course. However, it was quite nice and I rather enjoyed it. I completed the entire course in about 1.5 weeks… and I have a crazy amount of notes to show for it. Note: if you decide to go through the On Demand course, there are some oddities about the delivery system that you can work to your benefit. Not listening to the robo-voice reading each slide was a sanity saver.

The Design Guide was surprisingly enjoyable… It has been composed in a very thoughtful and logical manner. It needs to be read from cover to cover at least one time as the pages and sections build on top of each other. I found myself re-reading chapters 3 & 5 to help drive some concepts home. Any time spent with the Design Guide was time well spent.

The Learning Zone exam prep content was really nice. Each objective and sub-objective is presented in short 5-10 minute videos. They cover the content in ways that are explanatory and show correct logic in analysis, but don’t give you the answer. They guide you to the water… but you need to drink it.

What didn’t I do?

  • Use external content providers – I felt like I had a good grasp on the concepts from the VMware materials. The external content providers would have helped explain and/or make sense of concepts that I was already getting pretty well.
  • Did not focus on speeds and feeds – Yes… knowing easily referable information like the amount of RAM and vCPU for NSX Manager is within scope of the exam. I can look that up if I need to… And I accepted that I may miss those questions on the exam. My time was more important elsewhere.
  • Did not memorize details of UI paths – Again, knowing which tab or right-click option to use is within the scope of the exam but not worth my time. Accepted risk.

Test Day

  • Did not study at all. At this point, I knew what I was going to know, and spending time on last-minute things does not yield anything but uncertainty.
  • I felt good about the test… Like I did in college… The rust was off the gears! I was calm and accepted the current state of my study and learning as it was.

Tips

  • Pay attention – very common concepts, themes, principles, rules, restrictions, limits, etc… show up over and over and over.
  • Write – our minds retain information better when we write. Writing engages an artistic portion of our brains… and information associated with artistic activities is retained better.
  • Schedule the exam – or else you will find a reason to start kicking the can and delaying the prep work.
  • Exam structure is no surprise – single choice, multiple choice, sometimes answer options are super similar, sometimes answers are super obvious. There is nothing exotic here.
  • Question wording / Answer wording – NSX, traditional networking, and network virtualization have similar verbiage, differing implications, and concepts both shared and unique. Consider the context of the question and don’t make assumptions without considering the environment.
  • Have fun! – if you can enjoy the process and the test, you will be calmer, more confident, and have a clear thought process.
  • NSX is not just L2 overlay – be sure to understand the purpose, mechanics, workflows, and other concepts for the other functions of NSX.
  • Pay attention!!! – Did I mention that already? If I were only allowed to give one piece of advice, this would be it.

Bill’s Take

This was a really enjoyable process for me. I got to do something I have not done for quite a while. Plus, I ended up on the passing side of the exam, which does not hurt.

The exam felt appropriate to the level of studying required. It’s very likely that I could have studied certain areas a little more and gotten a higher score. But, I felt like I had a solid hold on the subject matter, so no need to push it.

After going through the learning process for VCP6-NV, I feel like there is value in this certification process. Yes… I recognize that people have differing opinions on certification… and this is mine. Network virtualization is not a commodity knowledge set like other technology topics may be. The range of NSX-specific info, network architecture, and network concepts feels like a good evaluation of a necessary skill set versus a test on a specific product.

Good luck studying and don’t forget to PAY ATTENTION!!


Appreciate The Complexity

Very often, we take so many things for granted. Take Wi-Fi access on airplanes for example… or flying, for that matter (YouTube/Louis CK on Conan).

For those of us working in IT (vendor, consulting, customer, academia, etc…), we are surrounded by some of the most amazing things EVER. I was reminded of this fact earlier this morning when browsing Slashdot. An article, “Linux Kernel 4.7 Officially Released” was posted on the site and it just blew my mind.


Screenshot from the Slashdot article

To be honest, nothing in that list of new features is all that exciting to me. Radeon RX 480 GPUs, sync_file fencing mechanisms, and the “schedutil” frequency governor are all very foreign to me. Regardless of utility, the complexity of it all is mind boggling and beautiful.

Hansel-Sting Quote

The level of effort, detail, and functionality in the kernel is fantastic… and it’s not limited to Linux. Windows, UNIX, ESXi, NX-OS, etc…

The fact that we have these tiny little machines in our presence with increasingly stronger, faster, and more intelligent components with such utility is a borderline-miracle. The fact that I can identify a need to connect an Xbox Elite controller to a Linux environment AND IT WILL WORK is amazing. The fact that I can sit at my desk with 3 monitors, type on a piece of plastic with buttons, click Publish, and share a little thought with whomever is reading this is an amazing feat.

Am I waxing poetic or off in my own little world? Maybe a little… but, when things get frustrating and you’re having “one of those days”, step away from whatever you’re doing and try to appreciate the little pieces of magic that happen all the time to enable you to do whatever your job is.

VMware Hands On Labs–BETA–Live!


For a number of years, VMware has provided some amazing lab content during the VMworld conferences. From year to year, the labs progressed from onsite-delivered, to cloud-bursting, to majority-cloud, and, finally, to completely cloud-provided. It was inevitable that one day… some day… the labs would be opened up to the general populace. As it turns out, Tuesday, November 13th was that fateful day. At the Portland VMUG Conference, Mr. Pablo Roesch (@heyitspablo) and Mr. Andrew Hald (@vmwarehol) released the Hands On Labs (BETA) onto the world!

Pablo and Andrew were kind enough to help me get access just before the announcement. And, I must say, I have been so impressed with what VMware has produced.

It is important to remember that this is a BETA offering (not quite like a Google BETA (ex: Gmail)), so experiencing some bugs and bumps in the road should be expected. However, with that being said, the quality of the content, delivery mechanisms, and the UI are top notch.

UI Tour

Let’s take a quick look at what the environment looks like:



Upon logging into the HOL environment, you are greeted with a few notable components:

  • Enrollments and Lab Navigation
    • This section, on the left, allows you to view the labs you have enrolled in, filter the lab types (currently Cloud Infrastructure, Cloud Operations, and End-User Computing), and view your transcript.
  • Standalone labs
    • The actual entry to the lab content. Enroll, begin, continue, etc… from here. This is where the magic happens.
  • Announcements
    • Includes important updates as to new lab content, product YouTube videos, Twitter stream, etc…

All in all, the home page is very streamlined and efficient.

Using a Lab

Upon enrolling and beginning a lab, you are presented with:


In the background, vCloud Director is deploying your lab environment. Looks like VMware is truly eating its own dog food with this product. No more Lab Manager for this type of offering.

The interface features some very well-thought-out design decisions that help present the lab content, and VMs, in a very logical and convenient way. The HOL team heavily leveraged HTML5 to accomplish the magic:

Screenshots: Default View · Lab Manual · Consoles

The simple placement of tabs on the left and right sides allows for the console of the VM to consume the majority of the screen while minimizing the amount of times the user needs to switch between applications for information. All of the info is there. Plus, as the user scrolls down the screen, the tabs remain visible, always ready to be used.

Use the lab manual content to navigate through the labs at your leisure. Note, though, there is a time limit. This is shared infrastructure. So, if you are idle for too long, HOL Online will time you out and close up shop so others can use the same infrastructure. Don’t worry, though, the content will resume when you return later.

Bugs I Have Found

Yes… as mentioned above, this is a BETA. Did I mention this is a BETA? Because it is a BETA. So, running into some bugs is expected. Don’t worry, though, I’ll participate in the forum to report them.

Thus far, I have found a few funky bugs:

  • The mouse can disappear when a console is open
  • The mouse can disappear when multiple browsers are open and the HOL window is not active
  • Some lab manuals are not available
  • Occasionally, the HOL interface will hang at Checking Status. Force a cache refresh in your browser.

Getting Involved

If you want to get involved in testing the functionality and getting a taste of what the HOL Online is all about:

  • Acknowledge that this is a BETA at this time. Don’t expect complete perfection right now.
  • Sign up for the BETA at
  • Notice that the signup site is a forum. You can check out the bugs and commentary from other beta testers to get a feel for what you’re going to experience.
  • Commit to participating in the beta community. Pablo and Andrew are not going to come to your house and take your vSphere licenses away for not participating. But, this is a unique opportunity to contribute to the success of a public product like this. Take advantage of it. Help the VMware community by contributing to it!

Well done, HOL team. I look forward to seeing what this turns into in the future!

Cisco Nexus 1000v Free-mium Pricing Model Release Reaction/Commentary

Alright… so Cisco announced Q4 2012 availability of a new version of the Nexus 1000v virtual switch back in September. Of course, the release was going to have so many features and functions that surely we could justify paying the $695/CPU (plus support) that they charged for the virtual switch.

A mere 4 hours into October, Cisco published a blog post dramatically reducing the cost of playing with their toys to a big fat $0 (plus support). Check out the breakdown of the features:


[Note: the graphic above was taken from the blog post and can be found by following the link above]

I have three very different reactions to the announcement, and I am struggling to determine which one wins out:

1. Nice move Cisco

This was a very smart move on Cisco’s part. The adoption of the Cisco 1000v has been anything but spectacular. The functionality sounds great from a high level. However, when compared with the price and availability of the VMware Distributed Virtual Switch that is available out of the box, why make the jump to a per-CPU solution?!

Plus, with the up-and-coming virtual network changes headed our way, getting customers to buy into the Cisco ecosystem now will get them locked in for whatever the future holds. Nicira has the potential to shake up virtual switching in upcoming releases.

2. Great… more people using Nexus 1000v

My experiences in $CurrentJob involve an environment with 1000v deployments all over the place… and the results have been less than spectacular. All too often, a networking issue arises that we lose visibility into, ports are blocked with no explanation, VSM upgrades fail, VEM upgrades fail, etc… To say I am a fan of the 1000v would be pretty far-fetched. Probably better stated: to say I am a fan of our 1000v implementation would be pretty far-fetched. I am sure other 1000v implementations out there are more successful. I just have a hard time recommending that people implement the 1000v in their environments when the provided distributed virtual switch is good enough.


3. Great! More people using Nexus 1000v

See what I did there?! (“…” vs “!”)

In my self-admittedly limited time with the Nexus 1000v, it really seems like there is a lack of people that know ANYTHING about the 1000v. Understanding the NX-OS side of the switch is critical (obviously). But understanding the nature of a virtual environment (especially with vCloud Director making more of a play in datacenters) and the specific environment implemented (the 1000v will support VMware, Microsoft, Xen, and KVM hypervisors) is equally important. The nature of the workloads and behaviors changes.

So, by having more people using the Nexus 1000v, there will be more and more people available as “experts” or at least legitimately “experienced” with the product than before. This can bode well for future implementations for sure.


Ultimately, I think this is a good move. Cisco is acknowledging that getting another “advanced” product into the ecosystem gratis helps drive purchasing for other Cisco products in the datacenter. Plus, at a much higher level, it is an acknowledgement of the direction the market is moving… and they’re trying to get a toe-hold before the SDN wave takes hold. Will this fix what I am working with? Not one bit. Will this fix what I will be working with in the future? With more people having experience, it may for sure.

Scale Computing – HC3 Product Launch

Scale Computing… the company with a name that never really made sense… until today. You see, Scale Computing began in 2007 as a storage company. Their most recent product lines, the M Series and S Series products, utilize GPFS (licensed from IBM) to provide a clustered filesystem for data consumption. Need more space? Just add a new node and let the cluster work its magic. Combine the simplicity of the clustered filesystem with a highly usable web-based management utility, and you get a simple and powerful storage system.

But… isn’t that storage? Not really compute? Perhaps the “Computing” was foreshadowing for the most recent Scale Computing product release: the HC3 hyperconvergence system.

HC3 represents a significant change to the direction and focus of the company. Scale is utilizing IP and existing products in the storage realm and enhancing the storage experience with server virtualization functionality.

Let’s dig into some of the nitty gritty technical details we all want to see:

Technical low-down

The virtualization functionality is provided via the highly powerful and versatile KVM hypervisor. The use of KVM in a system like this always raises an eyebrow, or two. More often than not, KVM is relegated to the “Linux/UNIX geek” realm. The hypervisor is highly functional, feature-rich, and has a very solid following in the Open Source community. New functionality, enhancements, and maintenance are constantly ongoing. Plus, with KVM in the OpenStack development line, KVM is just the beginning of where the Scale Computing layer could go in the future. Additionally, as a consumer of Open Source software in its solution, Scale contributes back to the community. Scale has slightly modified KVM to allow for better caching and has released that code back to the community for consumption elsewhere.

As mentioned above, Scale is building HC3 on their existing GPFS filesystem. KVM is a natural hypervisor selection based on the fact that it operates at the same level in the OS as the filesystem… just another service installed in the OS.
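As an aside, part of what makes KVM so approachable is how easy it is to verify on any Linux host. None of this is Scale-specific; the commands below are standard Linux tooling, shown here just to illustrate how lightweight the hypervisor layer is:

```shell
# Check for hardware virtualization extensions in the CPU flags
# (vmx = Intel VT-x, svm = AMD-V); a count greater than 0 means
# the processor can host KVM guests.
grep -c -E 'vmx|svm' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded
lsmod | grep kvm
```

If both checks pass, KVM is just another kernel service sitting alongside the filesystem, which is exactly the property Scale is leaning on.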

Many of the expected virtual machine management functions are present in the HC3 product:

  • Live Migration
  • Resource Balancing
  • Thin Provisioning
  • VM Failover/Restart On Node Failure
  • Storage Replication
  • etc…

As far as hardware is concerned, the HC3 product is built on the same hardware as the M Series storage line. You can see more details on Scale Computing’s site. Heck, existing Scale Computing customers can install a firmware upgrade on the M Series hardware to get HC3 functionality… gratis, too.

Management of the environment, like the storage, is handled in a clustered fashion. Connecting to any of the node management IP addresses gives you the ability to control the entire cluster. There is no need for a centralized controller for management services. The management connection is handled by any HTML5-compliant browser. I guess this means an iPhone browser could be used (although I question the usefulness of such a form factor… but it should work nonetheless).
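The any-node-manages-everything design is easy to sanity check. The sketch below is my own illustration (the 10.0.0.x addresses are placeholders, not real Scale defaults): each node’s management IP should answer with the same UI, since any node can drive the whole cluster:

```shell
# Hypothetical check: every node management IP should serve the same
# HTML5 management UI. 10.0.0.11-13 are placeholder node addresses.
for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
  # -k skips cert validation (appliances often use self-signed certs),
  # -w prints the HTTP status code next to the node address
  curl -k -s -o /dev/null -w "%{http_code} $ip\n" "https://$ip/"
done
```

A matching status code from every node is the practical upside: lose a node, point your browser at any surviving one, and keep managing.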

Once logged into the management interface, a number of components can easily be managed: Virtualization, Storage, General, etc… If you have used the interface before, the only major difference between the pre-HC3 and post-HC3 interfaces is the addition of the Virtualization option. The interface is very simplistic with very few options available. Navigation is logical and easy to use.

Target Market

Scale Computing has made a very conscious decision to focus on the SMB/SME markets. These markets tend to employ a small number of IT personnel with limited knowledge and high expectations placed upon them. For the business, selecting a product that performs a role, is easy to use, and provides high levels of availability is extremely desirable.

Scale has identified the market and designed HC3 to reflect what their customers want:

  • Easy administration
  • Few configuration options
  • Expandability
  • Small form factor (1U servers)
  • Support for common operating systems
  • Resiliency
  • Performance

A solution like HC3 fits really well for the SMB/SME markets. If the business does not have a need for a highly customizable environment, HC3 may be exactly what they are looking for. Leveraging HC3 may result in IT administrators having extra time on their hands to work on other projects.

What makes Scale Computing HC3 different?

One of the most significant differentiators for the HC3 product is its starting point. Starting with a solid storage foundation and following with a virtualization play is a different path to a converged infrastructure product, but it makes sense for THIS company. Scale has developed a solid storage product and made a hypervisor decision that complements existing efforts while providing the functionality customers are looking for.

The ability to scale compute and storage resources is becoming an expectation and the norm in virtual environments. HC3 allows for two major ways to scale:

  1. Addition of a new HC3 node – This adds additional footprint for executing virtual machine workloads… plus additional storage.
  2. Addition of a new M Series node – This adds additional storage without the compute functionality.

Scaling by adding an M Series node, while possible, just does not make sense at this time. Adding compute resources to the HC3 cluster holds so much more potential benefit to a consumer that I find it hard to believe the storage-only option would see much use. But, for what it is worth, it is an option.

Simple is better. For the target market, removal of complexity results in a predictable and, hopefully, low-touch compute environment. There is less of a need to have deep knowledge just to keep the compute environment functional. Scale has made a number of configuration decisions behind the scenes to reduce the load on customers.

What is missing?

With all the hotness that HC3 provides, a number of notable features are missing from this release:

  • Solid reporting – Aside from some “sparkline”-esque performance graphs (on a per VM basis), the ability to look back on a number of statistics for any number of reasons (Ex: troubleshooting performance issues) just is not there. For the target market, this may be an acceptable risk. I do not necessarily agree, though.
  • VM Snapshotting – At this time, the snapshotting functionality is achieved by snapshotting the entire filesystem.
  • Crash-Consistent Snapshots – The snapshots of the volumes are crash consistent: in the event a VM is restored from a snapshot, it is in a state that mimics a sudden loss of power… the server has crashed. So, reliance on OS and application recovery is necessary. It is probably a good idea to have backups. Pausing the VM, if possible in your environment, prior to taking a snapshot would help with consistency… but that is a stretch.
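One way to make crash-consistent snapshots more useful is to quiesce the guest filesystem yourself around snapshot time. This is my own sketch, not a Scale-documented workflow; `fsfreeze` is a standard util-linux tool and `/data` is a placeholder mount point:

```shell
# Hypothetical quiesce step inside a Linux guest. fsfreeze blocks new
# writes and flushes dirty data so an underlying snapshot captures a
# consistent filesystem image. Requires root; /data is a placeholder.
sudo fsfreeze --freeze /data

# ... the HC3 filesystem-level snapshot would be triggered here ...

sudo fsfreeze --unfreeze /data
```

Even with this, application-level consistency (databases, etc.) still depends on the app flushing its own state, so backups remain the safety net.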

Virtual Bill’s Take

I absolutely love the focus on the SMB/SME markets. They need some TLC, for sure. By focusing on creation of a utility virtualization device, the IT resources in the business can focus on moving the business forward rather than messing with complicated details. Howard Marks made a comment during a briefing from Scale: “Do you care if your car has rack and pinion steering? I just want to get in and drive”. This solution addresses the need to get in and drive. I can see this as being very appealing to quite a number of companies out there.

Scale Computing is an up-and-comer in the converged infrastructure game going on now. Converged infrastructure is making a play in the corporate IT ecosystem that challenges traditional thinking… and that is always a good thing. Selecting KVM as the hypervisor is logical, but it is going to be a hurdle to overcome. KVM just does not have the household recognition that other hypervisors enjoy. So, getting support from many directions is going to be challenging. But, after speaking with Scale Computing, they’re up for the challenge.

If they play their cards right, HC3 could help usher in a flood of new customers to Scale and a change in how SMB/SMEs operate server environments.

Team vGlobo – v0dgeball – VMworld 2012

Come one! Come all! Watch the spectacle that is v0dgeball at VMworld 2012!

This year, a number of awesome vPeople have come together for a brief time to create an amazing team of dodgeballers to dominate the tournament: Team vGlobo


The team is comprised of:

  • Bill Hill
  • Arjan Timmerman
  • Brandon Riley
  • Gabrie van Zanten
  • Chris Emery
  • Josh Townsend
  • Jason Shiplett
  • Mike Ellis
  • Dwayne Lessner
  • Joseph Boryczka

If you’re interested in watching the awesomeness that is Team vGlobo, please come and check out the tournament:



Links (for your clicking pleasure)

In all seriousness (if you made it this far), the v0dgeball tournament is something that I am very proud to be a part of. The proceeds for the dodgeball tournament go to the Wounded Warrior Project. 

The Wounded Warrior Project provides support for US service men and women injured while serving our country. They provide a ton of services and support, which includes:

  • Stress recovery
  • Family support
  • Career transition
  • Employment placement
  • Adaptive sporting events
  • Assistance with government/insurance claims
  • And so much more

At the time of this post, the v0dgeball 2012 project has raised $10,705.00 for the Wounded Warrior Project. How cool is that?! Really!? Just thinking about the impact that $10,000+ can have to thank and support servicemen/servicewomen for their amazing sacrifice is awe inspiring.

I cannot thank Chad Sakac, Fred Nix, and EMC enough for organizing such a fun and honorable activity for the VMworld participants… and for all of the participants of the tournament. 

Team vGlobo is honored to take the floor, throw some balls in the faces of opponents (there’s a joke in there somewhere), and support such an amazing organization.

Go Team vGlobo!!!

Taking The Next Step

For the past 9 years, I have had the privilege of working in a fast-growing, dynamic, and fun environment. I started as a summer intern and worked hard to assume full design and implementation responsibility for all IT infrastructure. I was provided with experiences I never would have thought possible… including significant travels in Asia. Management fostered an environment in which I could grow personally, as an IT professional, and in my blogging/social media life.

However, sometime earlier this year, I was approached with another opportunity by an outside group. The opportunity provided was significant and very appealing… tugging at my virtualization heart-strings. Suddenly, I was put into a position where I really needed to decide which direction I wanted to take my IT career. Continue on the IT generalist route (broad and deep in a handful of areas) or specialize more in virtualization (narrow but significantly deeper in the virtualization core pillars). 

After much debate, list making, and lost sleep, I decided to take a risk and accept the offer for the new position. 

Leaving my post at a company I have grown with and that has grown with me was tremendously difficult. The company fostered a family-like environment and I genuinely like everyone I work with. The company is full of rock stars and growing like a weed (but, a good weed that will turn into something cool in the future). But, I would be remiss to not take advantage of this new opportunity. 

The new opportunity puts me into a Senior role working with a truly enterprise-level environment… at a scale that is just mind-boggling (almost 2 physical servers for every 1 employee at my former employer). The prospect of the position is exciting and I cannot wait to start there. Plus, I am going to be working in close proximity to some VMware/virtualization rock stars (a couple blocks apart) and one of my fellow PDX VMUG leaders.

I am really going to miss my former employer and opportunities there, though. This is just a blip on their radar. I am positive the IT department is going to step up and make the infrastructure their own, just like I feel I was able to while there. But, I look forward to meeting everyone at the new job and getting deeper and dirtier into VMware virtualization!


PS – I apologize for the vagueness of the post. However, I try to make a point of not calling out my employer on my blog… which both the old company and the new company appreciate. So, I hope you were able to follow along. 🙂