Tech Field Day – Aprius – High Bandwidth Ethernet Allowing for Virtual PCIe
Tech Field Day Presenting Company: Aprius (http://www.aprius.com/)
What is Aprius all about?
That is a good question. Check out the link above for the corporate lingo. However, my take on what they are all about is… well… the topic of the post.
Aprius, like many startups, has identified what it believes is a niche market based on the overall concepts of virtualization (not so much server virtualization, à la VMware, but the abstractions that virtualization requires) and on the I/O bandwidth increases that come as 10Gb datacenter Ethernet emerges more and more.
Their specific product addresses the physical tie a server has to specific PCIe hardware. These devices may include GPUs, NICs, HBAs, modems, etc…
This functionality is accomplished by a single device holding 8 PCIe slots. The slots can contain any number of generic PCIe cards. Ideally, the PCIe cards will have some level of resource sharing enabled (some NICs and HBAs, for example). The device can be configured to allow single or shared access to any of the PCIe cards in the chassis.
The physical servers are configured with a specific Aprius PCIe card and 10Gb NICs. When the server boots, the card calls home to the Aprius device and creates virtual PCIe slots. To any OS on the server, these appear to be standard, run-of-the-mill PCIe slots loaded with whatever device is plugged into them… in the chassis, not in the server itself.
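To make the chassis-side behavior concrete, here is a minimal Python sketch of the resource model described above: eight slots, each holding a card that is either dedicated to one server or shared by several. The class names, the `shared` flag, and the slot contents are my own illustration of the concept, not Aprius's actual management interface.

```python
# Hypothetical model of the 8-slot Aprius chassis described in the text.
# Names and the single/shared attach policy are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class PCIeSlot:
    slot_id: int
    card: str                           # e.g. "10Gb NIC", "FC HBA", "GPU"
    shared: bool = False                # may multiple servers attach?
    attached: list = field(default_factory=list)

    def attach(self, server: str) -> bool:
        """Attach a server; refuse a second attach unless the card is shared."""
        if self.attached and not self.shared:
            return False
        self.attached.append(server)
        return True

# A partially populated 8-slot chassis
chassis = [
    PCIeSlot(0, "10Gb NIC", shared=True),
    PCIeSlot(1, "FC HBA", shared=True),
    PCIeSlot(2, "GPU", shared=False),
]

assert chassis[0].attach("server-a")      # shared NIC: first server OK
assert chassis[0].attach("server-b")      # ...and a second server too
assert chassis[2].attach("server-a")      # GPU: single host only
assert not chassis[2].attach("server-b")  # second attach refused
```

The point of the sketch is the access policy: a shareable card (some NICs and HBAs) accepts multiple servers, while a non-shareable card (a GPU) is locked to whichever server attaches first.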
While the specific hardware and architecture details are not public and we did not discuss them during the Aprius Tech Field Day presentation, what I do know is:
- The virtualized PCIe over Ethernet happens at layer 2. So, the data is encapsulated in a standard Ethernet frame that can traverse the local LAN. It is not routable.
- The PCIe over Ethernet is a proprietary protocol.
- The mechanism that shares the PCIe hardware across multiple servers is similar to how a switch forwards frames to the proper ports. Very little CPU is needed on the switch (and, by proxy, the Aprius chassis).
- This appears to be some of the secret sauce that makes Aprius do what it does best. So, I would not expect to hear more on this until later… if the company decides to share.
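Since the protocol is proprietary, nothing below is the real wire format. But a short sketch shows what "layer 2, encapsulated in a standard Ethernet frame, not routable" means in practice: the frame is just destination MAC, source MAC, an EtherType, and a payload, with no IP header for a router to act on. The EtherType value and payload layout here are invented for illustration.

```python
# Illustration of a layer-2 encapsulation like the one described above.
# The EtherType (0x88FF) and payload contents are made up for this sketch;
# Aprius's actual protocol is proprietary and undocumented.

import struct

ETHERTYPE_PCIE = 0x88FF  # hypothetical value chosen for illustration only

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Wrap a payload in a bare Ethernet II frame (no IP header)."""
    header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_PCIE)
    frame = header + payload
    # Pad to the Ethernet minimum of 60 bytes (before the FCS)
    return frame + b"\x00" * max(0, 60 - len(frame))

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"fake PCIe transaction bytes")
assert len(frame) >= 60             # minimum Ethernet frame length
assert frame[12:14] == b"\x88\xff"  # EtherType sits right after the two MACs
```

Because forwarding is done entirely on MAC addresses and EtherType, a plain layer-2 switch can carry this traffic with almost no CPU work, but a router has no layer-3 header to route on, which is exactly why the traffic stays on the local LAN.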
The biggest problem with the technology and the company direction is that there is no clear use case for it. There is potential for blade server vendors to see value in expanding their blade chassis offerings by virtualizing hardware that does not normally fit into blades, or by increasing the number of things that can be “placed” into a blade. However, companies have adopted blade computing without this capability up to this point, so the market has already adapted.
With the adoption of network storage, faster CPUs, and faster networking infrastructure, the need for general-purpose PCIe cards is shrinking. Companies still use PCIe cards for HBAs and NICs, but most other uses are fading.
Plus, what are you going to place into the chassis to share?
- NIC? – Use a NIC to get to a shared NIC? What does that provide? Something is going to be a bottleneck, and finding it will become more and more difficult.
- HBA? – Only really useful if you are trying to migrate from a Fibre-ish storage network to an iSCSI/NAS style. Plus, sharing HBAs across multiple machines may easily overcommit the HBA and trash the performance.
- GPU – These are not really sharable; they get assigned to a single host. Time sharing the GPU is an option, but that requires manual coordination and can get really ugly.
- Server virtualization is an interesting use case. However, by tying the PCIe cards to the physical server rather than giving the virtual machine a virtual PCIe slot of its own, Aprius misses a market. While I know there are ways to give VMs direct access to hardware, that functionality breaks things like High Availability, vMotion/Live Migration, etc… Something like an SSL offload card is completely lost, as the VM can never reliably get to the device to use it.
- Virtualization, in its most generic form, allows for some pretty wicked things. Combine that with the higher I/O that 10Gb networking provides, and the floodgates open on what can be abstracted from the traditional server and provided in virtual form. Aprius has really latched onto that concept and created a really cool product. However, unless blade manufacturers see the product as useful, I do not see this product or company going very far.