
I/O Virtualization Redux (c/o Virtensys)

Alright… back in the day, I posted an article about a startup in San Jose called Aprius. You can find that post here: http://virtualbill.wordpress.com/2010/11/15/tech-field-dayaprius-high-bandwidth-ethernet-allowing-for-virtual-pcie/

Without reciting everything in that post, the single sentence that encapsulated my feelings about their technology and approach was:

The biggest problem with the technology and the company direction is that there is no clear use case for this.

Until recently, I stood behind this statement. And, honestly, as it pertains to the Aprius approach I was presented, I still do. I am sure there are folks at Aprius, Xsigo, Virtensys, and other I/O virtualization vendors (and their customers) who would dispute the statement. (Note: I welcome comments!)

However, I believe I have found a great use case for I/O virtualization thanks to a presentation from Virtensys during the most recent Portland VMUG meeting.

The use case that was presented was huge, and I see it being beneficial to all sorts of environments (SMB, SME, Enterprise, Healthcare, Education, Research, etc…).

The Virtensys product utilizes PCIe extension cards in the PCIe slots on servers. Those cards connect to the Virtensys appliance. For this example, we will assume the Virtensys solution is configured to share 10Gb NICs with the servers. If your virtualization servers are sharing the 10Gb NIC through the Virtensys product, all network traffic is routed through the Virtensys solution. However, if a virtual machine on one server is communicating with a virtual machine on another server, and those servers are sharing the same NICs in Virtensys’ solution, the communication happens at PCIe speed, not NIC speed! Additionally, that traffic never hits the standard physical network layer.

The PCIe 2.0 standard allows for 500MB/s over a single lane (after encoding overhead). So, a single lane can handle roughly 4Gb/s, and a full 32-lane connection can handle 16GB/s (or 128Gb/s). Now, these are theoretical values, and some level of overhead and contention may need to be accounted for. But the takeaway is that the PCIe bus is quicker than the 10Gb NIC that is being shared.
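The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope calculation, not anything from Virtensys: it assumes the PCIe 2.0 figures of 5 GT/s per lane with 8b/10b encoding, which is where the 500MB/s (4Gb/s) per-lane number comes from.

```python
# Back-of-the-envelope PCIe 2.0 bandwidth vs. a 10Gb NIC.
# Assumptions: PCIe 2.0 signals at 5 GT/s per lane and uses 8b/10b
# line coding, so only 8 of every 10 bits carry data.

GT_PER_LANE = 5.0             # gigatransfers/s per lane (PCIe 2.0)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding overhead

def lane_bandwidth_gbps(lanes: int) -> float:
    """Usable bandwidth in Gb/s for a PCIe 2.0 link of `lanes` lanes."""
    per_lane_gbps = GT_PER_LANE * ENCODING_EFFICIENCY  # 4 Gb/s per lane
    return lanes * per_lane_gbps

for lanes in (1, 8, 16, 32):
    gbps = lane_bandwidth_gbps(lanes)
    print(f"x{lanes:<2} link: {gbps:6.1f} Gb/s ({gbps / 8:5.1f} GB/s)")

# Compare against the shared 10Gb NIC
print(f"x8 link vs 10Gb NIC: {lane_bandwidth_gbps(8) / 10.0:.1f}x")
```

Even a modest x8 link has a few times the raw bandwidth of the 10Gb NIC, which is the point of the comparison, though real-world throughput will be lower once protocol overhead and contention are factored in.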

Use Case!!!: By utilizing the Virtensys solution, server-to-server traffic no longer hits the physical network and can be transmitted at PCIe speeds!

I will be the first to admit that I am not entirely up to speed on the other offerings that exist (again, I like comments!). I would like to think that this same use case exists for the other I/O virtualization vendors out there. Assuming it does, I can see I/O virtualization products being adopted by companies that can benefit from the higher network throughput they allow. And if this is specific to Virtensys and you have a need for higher network throughput, you may want to check these guys out.

Disclaimer:

Virtensys was a presenter at the Portland VMUG meeting in May 2011, of which I was the principal organizer. I am under no obligation to include them in any personal blogging I undertake (which this qualifies as). Virtensys provided the presentation that opened my eyes to the use case supplied above.

Resources:
