Author: virtualbill

VCP6-NV Network Virtualization Exam Prep And Results

Hard to believe that a mere 8 hours ago, I sat for the VCP6-NV (2V0-642) exam. 77 questions and about 60 minutes later, I walked out as a newly minted VCP!

Truth be told, I have not needed to study like this for quite some time… probably not since my undergrad many years ago. The type of learning I adapted to in the real world was more bursty, need-driven, and broad. So, I really needed to clean the rust off those old studyin’ routines and get to work.

The Internets were massively helpful, not only in identifying what to study but also in confirming what I thought was the right content. This post is my way of paying back for the help I got. If you’re here for the actual test questions and answers, you’re in the wrong place…

Please keep in mind that this is not meant to be prescriptive. Rather, this is what worked for the type of learner I am.

What did I use to study?!

There are some awesome content creators out there with amazing reviews and success stories (Pluralsight, vBrownbags, etc…). I did not use them. I felt like trying to focus on the information from VMware would be the most effective use of my time.

  • NSX: Install, Manage, and Configure [6.2] – On Demand
  • VMware Education NSX Practice Exam
  • NSX 6.2 – Admin Guide
  • NSX 6.2 – Design Guide
  • VMware Learning Zone – NSX Exam Prep
  • VCP6-NV Exam Blueprint
  • Hands on Labs

The ICM course’s On Demand structure worked really well. I was concerned about my usual preferred learning style conflicting with the presentation and lab format of the course. However, it was quite nice and I rather enjoyed it. I completed all of the course in about 1.5 weeks… and I have a crazy amount of notes to show for it. Note: if you decide to go through the On Demand course, there are some oddities about the delivery system that you can work to your benefit. Not listening to the robo-voice reading each slide was a sanity saver.

The Design Guide was surprisingly enjoyable… It has been composed in a very thoughtful and logical manner. It needs to be read from cover to cover at least one time as the pages and sections build on top of each other. I found myself re-reading chapters 3 & 5 to help drive some concepts home. Any time spent with the Design Guide was time well spent.

The Learning Zone exam prep content was really nice. Each objective and sub-objective is presented in short 5-10 minute videos. They cover the content in ways that are explanatory and show correct logic in analysis, but don’t give you the answer. They guide you to the water… But, you need to drink it.

What didn’t I do?

  • Use external content providers – I felt like I had a good grasp on the concepts from the VMware materials. The external content providers would mostly have re-explained concepts I was already getting pretty well.
  • Did not focus on speeds and feeds – Yes… knowing easily referenceable information like the amount of RAM and vCPU for NSX Manager is within scope of the exam. I can look that up if I need to… and I accepted that I might miss those questions on the exam. My time was more important elsewhere.
  • Did not memorize details of UI paths – Again, knowing which tab or right-click option to use is within the scope of the exam, but it was not worth my time. Accepted risk.

Test Day

  • Did not study at all. At this point, I knew what I was going to know, and spending time on last-minute cramming does not yield anything but uncertainty.
  • I felt good about the test… Like I did in college… The rust was off the gears! I was calm and accepted the current state of my study and learning as it was.

General Tips

  • Pay attention – very common concepts, themes, principles, rules, restrictions, limits, etc… show up over and over and over.
  • Write – our minds retain information better when we write. Writing engages an artistic portion of our brains… and information associated with artistic activities is retained better.
  • Schedule the exam – or else you will find a reason to start kicking the can and delaying the prep work
  • Exam structure is no surprise – single choice, multiple choice, sometimes answer options are super similar, sometimes answers are super obvious. There is nothing exotic here.
  • Question wording / Answer wording – NSX, traditional networking, and network virtualization have similar verbiage, differing implications, and concepts both shared and unique. Consider the context of the question and don’t make assumptions without considering the environment.
  • Have fun! – if you can enjoy the process and the test, you will be calmer, more confident, and have a clear thought process.
  • NSX is not just L2 overlay – be sure to understand the purpose, mechanics, workflows, and other concepts for the other functions of NSX.
  • Pay attention!!! – Did I mention that already? If I were only allowed to give one piece of advice, this would be it.

Bill’s Take

This was a really enjoyable process for me. I got to do something I have not done for quite a while. Plus, I ended up on the passing side of the exam, which does not hurt.

The exam felt appropriate to the level of studying required. It’s very likely that I could have studied certain areas a little more and gotten a higher score. But I felt like I had a solid hold on the subject matter, so there was no need to push it.

After going through the learning process for VCP6-NV, I feel like there is value in this certification process. Yes… I recognize that people have differing opinions on certification… and this is mine. Network Virtualization is not a commodity knowledge set like other technology topics may be. The range of NSX-specific info, network architecture, and network concepts feels like a good evaluation of a necessary skill set versus a test on a specific product.

Good luck studying and don’t forget to PAY ATTENTION!!

Fixed Block vs Variable Block Deduplication – A Quick Primer

Deduplication technology is quickly becoming the new hotness in the IT industry. Previously, deduplication was relegated to secondary storage tiers, as the controller could not always keep up with the storage IO demand. These devices were designed to handle streams of data in and out versus the random IO that may show up on primary storage devices. Heck… deduplication has been around in email environments for some time. Just not in the same form we are seeing it today.

However, deduplication is slowly sneaking into new areas of IT… and we are seeing more and more benefit elsewhere. Backup clients, backup servers, primary storage, and who-knows-where in the future.

As deduplication is deployed across the IT world, the technology continues to advance and become quicker and more efficient. So, in order to stay on top of your game, knowing a little about the techniques for deduplication may add another tool to your tool belt and allow you to make a better decision for your company/clients.

Deduplication is accomplished by sharing common blocks of data on storage environments and only storing the changes to the data versus storing a copy of the data AGAIN! This allows for some significant storage savings… especially when you consider that many file changes are minor adjustments versus major data loads (at least as far as corporate IT user data goes).

So, how is this magic accomplished? – Great question, I am glad you asked! Enter Fixed Block deduplication and Variable Block deduplication…

Fixed Block deduplication involves determining a block size and segmenting files/data into those block sizes. Then, those blocks are what are stored in the storage subsystem.

Variable Block deduplication involves using algorithms to determine a variable block size. The data is split based on the algorithm’s determination. Then, those blocks are stored in the subsystem.
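To make the “algorithm determines the boundary” idea concrete, here is a toy sketch of a content-defined chunker. This is purely illustrative and not from any specific product: real engines typically use a Rabin fingerprint rolling hash, while this sketch uses a plain byte sum, and the window, divisor, and minimum-size values are arbitrary.

```python
def cdc_chunks(data: bytes, window=16, divisor=64, min_size=8):
    """Toy content-defined chunker: cut wherever a cheap rolling value
    over the trailing window hits a target. Real engines use a Rabin
    fingerprint; a plain byte sum stands in here for readability."""
    chunks, start = [], 0
    for i in range(len(data)):
        if i - start < min_size:
            continue  # enforce a minimum chunk size
        if sum(data[max(start, i - window):i]) % divisor == 0:
            chunks.append(data[start:i])  # boundary chosen by content, not offset
            start = i
    chunks.append(data[start:])  # whatever is left becomes the final chunk
    return chunks

sample = b"deduplication technologies are becoming more and more important now." * 3
pieces = cdc_chunks(sample)
assert b"".join(pieces) == sample  # chunking is lossless
```

Because the boundaries depend on the bytes themselves rather than absolute offsets, an insertion early in the data only disturbs chunks near the edit; once the window slides past it, the later boundaries line back up.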

Check out the following example, based on this sentence: “deduplication technologies are becoming more an more important now.”

[Figure: the example sentence segmented into fixed-size blocks and into variable-size blocks]
Notice how the variable block deduplication has some funky block sizes. While this does not look too efficient compared to fixed block, check out what happens when I make a correction to the sentence. Oops… it looks like I used ‘an’ when it should have been ‘and’. Time to change the file: “deduplication technologies are becoming more and more important now.”  File –> Save

After the file was changed and deduplicated, this is what the storage subsystem saw:

[Figure: the blocks changed by the edit, highlighted for both approaches]
The red sections represent the blocks that changed. By adding a single character to the sentence, a ‘d’, the sentence length shifted and more blocks suddenly changed. The Fixed Block solution saw 4 out of 9 blocks changed. The Variable Block solution saw 1 out of 9 blocks changed. Variable block deduplication ends up providing a higher storage density.
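The shift behavior above is easy to reproduce in a few lines of Python. The 8-character block size and the space-based “variable” cut are illustrative stand-ins of my own choosing (a real variable-block engine uses a rolling hash, not word boundaries), but they recreate the 4-of-9 versus 1-of-9 result from the example:

```python
def fixed_chunks(data, size=8):
    # Fixed block: split at absolute offsets; a one-byte insert
    # shifts every block after the edit point.
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data):
    # Toy variable block: cut after each space, so boundaries follow
    # the content itself rather than absolute offsets.
    chunks, start = [], 0
    for i, ch in enumerate(data):
        if ch == " ":
            chunks.append(data[start:i + 1])
            start = i + 1
    chunks.append(data[start:])
    return chunks

def changed(old_chunks, new_chunks):
    # Count new chunks that are not already stored on the subsystem.
    stored = set(old_chunks)
    return sum(1 for c in new_chunks if c not in stored)

old = "deduplication technologies are becoming more an more important now."
new = "deduplication technologies are becoming more and more important now."

print(changed(fixed_chunks(old), fixed_chunks(new)))     # 4 fixed blocks changed
print(changed(variable_chunks(old), variable_chunks(new)))  # 1 variable block changed
```

Everything before the edit deduplicates either way; it is the blocks after the inserted ‘d’ where the fixed-offset scheme loses, because every later boundary slides by one character.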

Now, if you determine you have something doing fixed block deduplication, don’t go and return it right now. It probably rocks and you are definitely seeing value in what you have. However, if you are in the market for something that deduplicates data, it is not going to hurt to ask the vendor if they use fixed block or variable block deduplication. You should find that you get better density and maximize your storage purchase even more.

Happy storing!

Cloud Crossroads

I feel like I am at a crossroads… and trying to figure out which direction to go. In my life, I strive to know about all kinds of things. Heck, in college, I went through 4-5 different majors because I was so interested in all of them. Computer Science and a Biology minor won out. So, when I come to this crossroad, I am torn… Which Cloud to go with?

The term “Cloud” is really confusing sometimes. While the basic concept is becoming more and more clear, what is less obvious is that we have multiple Cloud types to contend with.

1) Datacenter Cloud – This makes perfect sense to me. My VMware experience and all of the VMware/virtualization kool-aid out there jibes very well with it. The Datacenter Cloud is just like what I have in my datacenter, just with some added layers of management and automation on top. I am cool with that.

2) Application Cloud – This is where I am getting lost… and, I feel like this may be where things are going, especially for environments sized like my Corporate environment. Application Clouds include Google Gmail, Salesforce, Google Docs, VMware/Salesforce VMforce, and Windows Live.

In my Corporate environment, we are trying to make a conscious decision to move towards Cloud based resources. We figure that if we can simplify the internal infrastructure to commodity components and start leveraging usage based hosted models, we can actually reap some of the benefits. Starting to acknowledge the trend now and make decisions based on the trend makes it easier to grow into a “Cloud” environment.

So, back to this darn crossroads… Datacenter Cloud or Application Cloud??

The biggest issue I am running into is my data in the Application Cloud. Like most applications, ours need an app tier and a data tier. In the Application Cloud model, the database lives in one location and the app in another. Suddenly, I need to worry not only about access times and experience for the end user getting to the application, but also about the access times between the various Cloud providers. AKA – things could be significantly slower.

Additionally, what about backing up the data and accessing those backups? We may have documented policies stating retention values, locations, etc… We all know that song and dance. However, each individual component theoretically operates independently.

Security becomes another issue to address with the Application Cloud environment. I “trust” that my data is secure. However, I am addressing security as credentials. Each service has its own authentication system. So, how do we, as IT professionals, manage these? Existing solutions provide their own management structure (aka – a web console for administration and creation of user-level accounts) or use agents that run on workstations for a pseudo-single-sign-on experience. But what I am looking for is some level of integration between my existing authentication mechanisms and what exists in the Cloud.

One of the final speedbumps in this Cloud crossroads conundrum is ensuring that our data is being backed up reliably and that restoration mechanisms are timely and accessible by my company, versus needing to hunt down a Cloud provider support person. Many companies have regulations and policies regarding data retention, and many Cloud providers cannot deal with those policies. Plus, the business may need to “feel in control” of its data.

Alright… the light is turning green… which way… WHICH WAY…???

I know, I have this awesome SUV, I am going to make my own path. Instead of left or right, I am going to forge straight ahead. With the direction we need to go, we cannot just choose one or the other. There are too many advantages on both sides to ignore them… For those systems with their own authentication methods, regardless of being hosted or internal, to the Application Cloud with you! For those that we deem important to have more control over, Datacenter Cloud for you!

As long as we make a conscious decision to move towards some kind of Cloud based solution (be it Application Cloud or Datacenter Cloud), we are moving in the right direction. I feel confident that I am not the only one in the IT world with these concerns, and the answers will come in good time. By moving towards Cloud infrastructure now, we can adapt when the technology advances and be more agile and light. The development of policies that handle external authentication systems and data access (backups/recoveries/SLAs/etc…) and business buy-in (perhaps with ROI and cost savings over alternatives) will help drive this path home… and perhaps the business will pay to pave this new road I am blazing. Otherwise, these darn bumps are going to kill me.

Happy Clouding!