Archive

Posts Tagged ‘GestaltIT’

Scale Computing – HC3 Product Launch

August 27, 2012

Scale Computing… the company with a name that never really made sense… until today. You see, Scale Computing began in 2007 as a storage company. Their most recent product lines, the M Series and S Series, utilize GPFS (licensed from IBM) to provide a clustered filesystem for data consumption. Need more space… just add a new node and let the cluster work its magic. Combine the simplicity of the clustered filesystem with a highly usable web-based management utility, and you get a simple and powerful storage system.

But… isn’t that storage? Not really compute? Perhaps the “Computing” was foreshadowing of the most recent Scale Computing product release: the HC3 hyperconvergence system.

HC3 represents a significant change in the direction and focus of the company. Scale is leveraging its IP and existing products in the storage realm and enhancing the storage experience with server virtualization functionality.

Let’s dig into some of the nitty-gritty technical details we all want to see:

Technical low-down

The virtualization functionality is provided via the powerful and versatile KVM hypervisor. The use of KVM in a system like this always raises an eyebrow, or two. More often than not, KVM is relegated to the “Linux/UNIX geek” realm. The hypervisor is highly functional, feature-rich, and has a very solid following in the open source community. New functionality, enhancements, and maintenance are constantly ongoing. Plus, with KVM in the OpenStack development line, KVM is just the beginning of where the Scale Computing layer could go in the future. Additionally, as a consumer of open source software in its solution, Scale contributes back to the community. Scale has slightly modified KVM to allow for better caching and has released that code back to the community for consumption elsewhere.

As mentioned above, Scale is building HC3 on their existing GPFS filesystem. KVM is a natural hypervisor choice because it operates at the same level in the OS as the filesystem… just another service installed in the OS.

Much of the expected virtual machine management functionality is present in the HC3 product:

  • Live Migration
  • Resource Balancing
  • Thin Provisioning
  • VM Failover/Restart On Node Failure
  • Storage Replication
  • etc…

As far as hardware is concerned, the HC3 product is built on the same hardware as the M Series storage line. You can see more details here: http://scalecomputing.com/files/documentation/series_datasheet_1.pdf. Heck, existing Scale Computing customers can install a firmware upgrade on the M Series hardware to get HC3 functionality… gratis too.

Management of the environment, like the storage, is handled in a clustered fashion. Connecting to any of the node management IP addresses gives you control of the entire cluster; there is no need for a centralized management controller. The management connection works from any HTML5-compliant browser. I guess this means an iPhone browser could be used (although I question the usefulness of such a form factor… it should work nonetheless).

Once logged into the management interface, you can easily manage a number of components: Virtualization, Storage, General, etc… If you have used the interface before, the only major difference between the pre-HC3 and post-HC3 interfaces is the addition of the Virtualization option. The interface is very simple, with very few options to wade through. Navigation is logical and easy to use.

Target Market

Scale Computing has made a very conscious decision to focus on the SMB/SME markets. These markets tend to employ a small number of IT personnel with limited specialized knowledge and high expectations placed upon them. For the business, a product that performs a role, is easy to use, and provides high levels of availability is extremely desirable.

Scale has identified the market and designed HC3 to reflect what their customers want:

  • Easy administration
  • Few configuration options
  • Expandability
  • Small form factor (1U servers)
  • Support for common operating systems
  • Resiliency
  • Performance

A solution like HC3 fits really well in the SMB/SME markets. If the business does not need a highly customizable environment, HC3 may be exactly what they are looking for. Leveraging HC3 may leave IT administrators with extra time on their hands to work on other projects.

What makes Scale Computing HC3 different?

One of the most significant differentiators for the HC3 product is its starting point. While it makes sense for THIS company, starting with a solid storage foundation and following with a virtualization play is really a different path to a converged infrastructure product. Scale has developed a solid storage product and made a hypervisor choice that complements existing efforts while providing the functionality customers are looking for.

The ability to scale compute and storage resources is becoming an expectation and the norm in virtual environments. HC3 allows for two major ways to scale:

  1. Addition of a new HC3 node – This adds additional footprint for executing virtual machine workloads… plus additional storage.
  2. Addition of a new M Series node – This adds additional storage without the compute functionality.

Scaling by adding an M Series node, while possible, just does not make sense at this time. Adding compute resources to the HC3 cluster holds so much more potential benefit for a consumer that I find it hard to believe the storage-only route would see much use. But, for what it is worth, it is an option.

Simple is better. For the target market, removal of complexity results in a predictable and, hopefully, low-touch compute environment. There is less of a need to have deep knowledge just to keep the compute environment functional. Scale has made a number of configuration decisions behind the scenes to reduce the load on customers.

What is missing?

With all the hotness that HC3 provides, a number of notable features are missing from this release:

  • Solid reporting – Aside from some “sparkline”-esque performance graphs (on a per-VM basis), the ability to look back at historical statistics for any number of reasons (e.g., troubleshooting performance issues) just is not there. For the target market, this may be an acceptable risk. I do not necessarily agree, though.
  • Per-VM snapshots – At this time, snapshotting is achieved by snapshotting the entire filesystem rather than individual VMs.
  • Application-consistent snapshots – The snapshots of the volumes are crash consistent: in the event a VM is restored from a snapshot, it is in a state that mimics a sudden loss of power… the server has crashed. So, reliance on OS and application recovery is necessary, and it is probably a good idea to have backups. Pausing the VM before taking a snapshot, if possible in your environment, would help with consistency… but that is a stretch; a rough sketch of the idea follows this list.
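
Because HC3 is built on KVM, the pause-then-snapshot idea would look roughly like the libvirt-based sketch below. This is purely illustrative: HC3 drives KVM through its own web interface and may not expose libvirt directly, and take_cluster_snapshot() is a hypothetical stand-in for whatever actually triggers the filesystem snapshot.

```python
# Illustrative sketch only: pause a KVM guest around a storage-level snapshot
# so the captured disk state is as quiet as possible. HC3 may not expose
# libvirt at all; take_cluster_snapshot() is a hypothetical placeholder.
import libvirt

def take_cluster_snapshot():
    # Placeholder for whatever mechanism actually snapshots the clustered filesystem.
    raise NotImplementedError("depends on the storage platform")

def snapshot_with_pause(vm_name):
    conn = libvirt.open("qemu:///system")   # connect to the local KVM host
    try:
        dom = conn.lookupByName(vm_name)
        dom.suspend()                        # pause vCPUs; guest I/O stops
        try:
            take_cluster_snapshot()          # snapshot while the VM is quiet
        finally:
            dom.resume()                     # always un-pause the guest
    finally:
        conn.close()
```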

Virtual Bill’s Take

I absolutely love the focus on the SMB/SME markets. They need some TLC, for sure. By focusing on creation of a utility virtualization device, the IT resources in the business can focus on moving the business forward rather than messing with complicated details. Howard Marks made a comment during a briefing from Scale: “Do you care if your car has rack and pinion steering? I just want to get in and drive”. This solution addresses the need to get in and drive. I can see this as being very appealing to quite a number of companies out there.

Scale Computing is an up-and-comer in the converged infrastructure game going on right now. Converged infrastructure is making a play in the corporate IT ecosystem that challenges traditional thinking… and that is always a good thing. Selecting KVM as the hypervisor is logical, but it is going to be a hurdle to overcome. KVM just does not have the household recognition of other vendors’ hypervisors. So, getting support from many directions is going to be challenging. But, after speaking with Scale Computing, they are up for the challenge.

If they play their cards right, HC3 could help usher in a flood of new customers to Scale and a change in how SMB/SMEs operate server environments.

What I Get Out Of Tech Field Day!

February 15, 2011

I find it hard to believe that another Tech Field Day has passed. It must have been the quickest 3 days in history!

I would like to take a moment to explain what Tech Field Day is, from an altitude of 50,000 feet (or so), depending on turbulence and aircraft weight.

50,000 feet – We have reached our cruising altitude

Conceptually, Tech Field Day is the brainchild of the one and only, Stephen Foskett (@sfoskett for you Twitter folks).

Technology industry conference events and seminars have become fancy, extravagant sales pitches designed to push whatever agenda the hosting company has up its sleeve. You know what, though, that is their prerogative. As long as people are paying to travel and cover the entrance fees, there certainly must be some value somewhere, right?!

Stephen had an amazing idea to counter the common template of industry events. Rather than have people go to company-organized events, how about companies paying to go to user-organized events?! This backwards, but logical, shift started a boulder rolling down a hill. That boulder is Tech Field Day. The hill is just part of the analogy.

Fast forward to February 2011… we just finished Tech Field Day #5 (technically 6th in the series, but Networking Field Day was too cool to be numbered).

You can find more history and information on the Tech Field Day website.

Who gets to go?

Tech Field Day attendees are called “Delegates”. While this may sound a little elitist, it is an excellent description of the attendees.

  • Delegates are selected by a panel of peers based on: knowledge, respectability, online presence, intelligence, independence, personality, and community development/contribution.
  • Delegates are selected to semi-match the subject matter of the presenting sponsors.
  • Delegates are used to select future Tech Field Day delegates (hence, “selected by a panel of peers”).

I feel very lucky and honored to have been selected as a delegate for 2 Tech Field Day events! I appreciate the recognition from people whom I consider to be experts and that they see value in how I am contributing to the community.


What do YOU (aka: me, Virtual Bill) get out of attending Tech Field Day?

Experiences in life are really what you make of them. On a very logical and “matter of fact” level, delegates receive a trip to the Tech Field Day location, meals, lodging, presentations from sponsoring companies, little tchotchkes,  and an opportunity to network with other delegates, organizers, and sponsors.

But, I am more of a sentimental kind of guy. So, while I appreciate the logical aspects, I appreciate the social rewards so much more.

We all have our own worlds we live in. But, just talk to a world traveler and they will tell you that there is so much else out there. The same goes for IT worlds. I am used to my SME (Small-Medium Enterprise) environment. I am subject to our budgets, systems, procedures/applications, and business decisions that shape my professional life and professional world. I definitely do not have an unlimited training budget and I fully acknowledge that I know what I know and have no idea what I do not know.

Tech Field Day allows me to travel my professional world. Technologies, architectures, concepts/ideas, and battle stories appear that I have never experienced. I happily take all of that information in… even if there is no direct correlation to what I am doing professionally. Tech Field Day allows me to grow as a professional. Now, my scope, breadth, and depth as an IT professional are that much greater.

  For Example: During Tech Field Day #5, HP informed us that they are developing their own deduplication storage system. Am I going to buy one? No. But, Curtis Preston (@wcpreston) helped drive home the advantages of variable block over fixed block deduplication techniques. THAT concept will stay with me for a very long time… another tool in my tool belt.
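
To make that concept concrete, here is a toy sketch of why variable-block (content-defined) chunking dedupes better than fixed-block chunking when data shifts. The chunk sizes and boundary rule are arbitrary demo choices, not any vendor’s actual algorithm.

```python
# Toy illustration: insert one byte at the front of a buffer and see how many
# chunks survive unchanged under fixed-block vs. content-defined chunking.
import hashlib

def fixed_chunks(data, size=8):
    # Chunks are cut at fixed offsets, so any shift changes every later chunk.
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, window=4, mask=0x0F):
    # Declare a chunk boundary wherever a cheap hash of the last `window`
    # bytes matches a pattern; boundaries track content, not absolute offsets.
    chunks, start = [], 0
    for i in range(window, len(data)):
        if hashlib.sha1(data[i - window:i]).digest()[0] & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

original = b"The quick brown fox jumps over the lazy dog, again and again. " * 4
shifted = b"X" + original   # a single byte inserted at the front shifts everything

for name, chunker in [("fixed-block", fixed_chunks), ("variable-block", variable_chunks)]:
    before, after = set(chunker(original)), set(chunker(shifted))
    print(f"{name}: {len(before & after)} of {len(after)} chunks unchanged after the insert")
```

With fixed blocks almost nothing matches after the one-byte insert; with content-defined boundaries, everything past the first chunk lines back up and dedupes.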

The group of delegates is always among the brightest and most intelligent in the field. Everyone is so friendly, willing to openly discuss geeky things, and represents an amazingly diverse pool of community knowledge. The delegate pool is the true physical representation of what Gestalt is all about. I feel as though I can reach out to every single delegate I have had the pleasure of interacting with and ask for help, advice, or just keep in touch. Tech Field Day delegates are one of a kind (or 12 of a kind) and a great group to be associated with.

Hard to believe that one day in September 2010, I grew a pair and asked to contribute to Gestalt IT and started a new period in my professional life/world. Definitely the best decision  I have made in some time. Tech Field Day is about community, knowledge, critical thinking, and getting the most out of our relationships with people and companies.

I am thankful for this experience and honor. I am happy to be associated with this group of individuals and I hope I can impart my new knowledge in my professional and personal worlds.

Morals of the story:

If you made it this far, thanks for reading. If you jumped down to the end, I guess you found a shortcut. The morals of this story are:

  • Tech Field Day and Gestalt IT are awesome
  • Connect with your IT community and join the Gestalt. You will not regret it!

Tech Field Day–Intel–10Gb Adoption In Datacenter Network

November 16, 2010

Tech Field Day Presenting Company: Intel (http://www.intel.com)

Going into Tech Field Day, I was really curious as to what Intel could possibly be presenting on. Their portfolio of products is ridiculously huge… and there are spins that you could put on each one to make it relevant for a discussion. So, I was pleasantly surprised when I found out that one of the presentations was regarding the Intel Ethernet products surrounding the emerging 10Gb datacenter Ethernet connectivity.

10Gb Ethernet is one of those topics that ends up being the elephant in the room… much like an IPv6 conversation or disaster recovery. We all know it is great… more bandwidth in the datacenter is always appreciated. However, adoption requires a massive expense in the purchase of adapters and switching, as well as a coordinated effort to migrate IP services to the new NICs and switches (maintenance windows, hardware installations, etc…).

Currently, a quick search online (froogle.google.com – criteria “10Gb NIC 10GBASE-T”) shows NICs ranging in price from ~$540 to ~$1,650. Similarly, a quick search online (froogle.google.com – criteria “10Gb Switch”) shows switches ranging from ~$1,579 (with two 10Gb uplinks) to ~$15,000. Now, this post is not meant to be a price guide by any means. However, the prices do reflect the high cost of adoption in datacenter environments.

With 1Gb networking the de facto standard in datacenter environments, asking companies to replace their existing environment with a new networking technology is a hard pill to swallow.

Additionally, a point missed by most people in the field is that the server adapters (PCIe, for example) take quite a bit of power to operate. In 2008, a $999/port NIC from Intel consumed 25W/port. In 2009, a $599/port NIC from Intel consumed 15W/port. Typical Intel 1Gb NICs appear to run at 1W/port (see the Intel 8254PI Ethernet Controller overview for an example). For single implementations, the increased power consumption may be negligible. However, when implemented en masse, the power consumption shoots through the roof. Power and cooling (heat production being a byproduct of increased power consumption) are two of the major variables that drive how much a datacenter costs to run. So, hardware costs, increased power consumption, and cooling all come into play.
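
To put the en-masse point into rough numbers, here is a quick back-of-the-envelope calculation using the per-port figures quoted above; the 500-port deployment size is purely an assumed example, not a figure from Intel.

```python
# Back-of-the-envelope math using the per-port figures quoted above.
# The 500-port deployment size is an assumption for illustration only.
PORTS = 500
HOURS_PER_YEAR = 24 * 365

for label, watts_per_port in [("1Gb (typical)", 1),
                              ("10Gb NIC, 2008", 25),
                              ("10Gb NIC, 2009", 15)]:
    total_watts = PORTS * watts_per_port
    kwh_per_year = total_watts * HOURS_PER_YEAR / 1000
    print(f"{label}: {total_watts} W total, ~{kwh_per_year:,.0f} kWh/year")

# 1Gb (typical):  500 W total,    ~4,380 kWh/year
# 10Gb NIC, 2008: 12,500 W total, ~109,500 kWh/year
# 10Gb NIC, 2009: 7,500 W total,  ~65,700 kWh/year
# ...and that is before the extra cooling needed to remove the added heat.
```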

However, Intel is looking to change all of that in the near future.

While the push to sell their adapters has already begun, the mass adoption will begin next year. In 2011, Intel expects its LAN On Motherboard (LOM) chips (aka the integrated LAN ports on your server and workstation motherboards) to be included with newly purchased servers and workstations. This, in combination with other 10Gb LOM manufacturers, is going to be the driving force behind adoption of 10Gb networks in datacenters.

Businesses are hesitant to purchase new infrastructure when there is major expense involved (see above). However, when the new infrastructure is included with the purchases they already make, suddenly the investment becomes smaller and they buy to leverage the new technology they possess. Given that the 10Gb Ethernet ports are included on their servers and are backwards compatible with the standard 1Gb Ethernet deployed in their environment, companies are going to acknowledge the technology.

As more and more ports are deployed in corporate environments, switching manufacturers are going to need to realize the benefit of lowering the price point to increase adoption and sales. It is better to lower the switch price from $10,000 to $5,000 and sell many more. Then, as products become available to handle the bandwidth, companies are going to adopt the technologies.

Various other datacenter products exist or are emerging that take advantage of the high bandwidth opportunities that are presented with 10Gb Ethernet… storage platforms and virtualization platforms being the two most likely candidates that will fully utilize the high bandwidth. As companies see the advantages and increases in performance and efficiency that the higher bandwidth connectivity provides, they will embrace the new technology and ensure it is the new standard in their specific datacenter.

As far as power consumption is concerned, the LOM modules are expected to run at 5W/port. This drastically drops power consumption compared to what 2008 had to offer (a mere 20% of the previous value). Plus, when you compare 5W per 10Gb port versus 1W per 1Gb port, the new 10Gb ports cut power consumption per gigabit in half.
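
For what it is worth, the percentages above check out; a quick sanity check of the per-port and per-gigabit math:

```python
# Sanity-check the power claims above using the quoted per-port figures.
lom_10g = 5     # W per 10Gb LOM port (expected)
nic_2008 = 25   # W per 10Gb NIC port in 2008
nic_1g = 1      # W per typical 1Gb port

print(lom_10g / nic_2008)         # 0.2 -> 20% of the 2008 per-port draw
print((lom_10g / 10) / nic_1g)    # 0.5 -> half the power per gigabit delivered
```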

It is true that the inclusion of the 10Gb NIC LOMs is going to be the driving force that pushes adoption of 10Gb Ethernet into the datacenter. Intel appears to have acknowledged the need to include their LOM modules on new purchases to help ensure their place in the market and push the datacenter network to speeds it once only dreamt of. How it fares against other server vendors and 10Gb hardware providers is yet to be seen.

Tech Field Day–Aprius – High Bandwidth Ethernet Allowing For Virtual PCIe

November 15, 2010

Tech Field Day Presenting Company: Aprius (http://www.aprius.com/)

What is Aprius all about?

That is a good question. Check out the link above for the corporate lingo. However, my take on what they are all about is… well… the topic of the post.

Aprius, like many startups, has identified what it believes is a niche market based on the overall concepts of virtualization (not so much server virtualization, à la VMware, but the abstraction that virtualization requires) and the I/O bandwidth capacity increases that come as 10Gb datacenter Ethernet emerges.

Their specific product addresses the tie that a server may have to specific PCIe hardware. These may include GPUs, NICs, HBAs, Modems, etc…

This functionality is accomplished by a single device holding 8 PCIe slots. The slots can contain any number of generic PCIe cards. Ideally, the PCIe cards will have some level of resource sharing enabled (some NICs and HBAs, for example). The device can be configured to allow single or shared access to any of the PCIe cards in the chassis.

The physical servers are configured with a specific Aprius PCIe card and 10Gb NICs. When the server boots, the card calls home to the Aprius device and creates virtual PCIe slots. To any OS on the server, these appear to be standard, run-of-the-mill PCIe slots loaded with whatever device is plugged into them… in the chassis, and not the server itself.

While the specific hardware and architecture details are not public and were not discussed during the Aprius Tech Field Day presentation, what I do know is:

  • The virtualized PCIe over Ethernet happens at layer 2. So, the data is encapsulated in a standard Ethernet frame that can traverse the local LAN. It is not routable.
  • The PCIe over Ethernet is a proprietary protocol.
  • The mechanism that handles sharing the PCIe hardware across multiple servers is similar to how a switch handles sending the proper data over switch ports. There is very little CPU needed on the switch (and, by proxy, the Aprius chassis).
    • This appears to be some of the secret sauce that makes Aprius do what it does best. So, I would not expect to hear more on this until later… if the company decides to share.

The biggest problem with the technology and the company direction is that there is no clear use case for this. There is a potential for blade server vendors to see value in being able to expand their blade chassis offerings by virtualizing hardware that does not normally get placed into blades or by increasing the number of things that can be “placed” into a blade. However, companies have adopted blade computing up to this point… So, the market has adapted.

With the adoption of network storage, faster CPUs, and faster networking infrastructure, the need for general purpose PCIe cards is shrinking. I know that companies use PCIe cards for HBAs and NICs. But, most other purposes are shrinking.

Plus, what are you going to place into the chassis to share?

  • NIC? – Use a NIC to get to a shared NIC? What is that going to provide? Something is going to be a bottleneck, and that will become more and more difficult to pin down.
  • HBA? – Only really useful if you are trying to migrate from a Fibre-ish storage network to an iSCSI/NAS style. Plus, sharing HBAs across multiple machines may easily overcommit the HBA and trash the performance.
  • GPU? – These are not really sharable… they are single-host assigned. Time-sharing the GPU is an option, but that requires manual coordination and can get really ugly.

Server virtualization is an interesting use case. However, by tying the PCIe cards to the physical server and not allowing the virtual machine to have a virtual PCIe slot, there is a missed market. While I know there are ways to give VMs direct access to hardware, that functionality breaks things like High Availability, vMotion/Live Migration, etc… Something like an SSL offload card is completely lost, as the VM can never reliably get to the device to use it.

Virtualization, in the most generic form, allows for some pretty wicked things. Combine that with the higher I/O that 10Gb networking provides and the floodgates open on what can be abstracted from the traditional server and provided in a virtual form. Aprius has really latched onto that concept and created a really cool product. However, unless blade manufacturers see the product as being useful, I do not see this product and company going too far.

Tech Field Day–Disclosure

November 15, 2010


I was the lucky recipient of a pretty cool honor… being a delegate for Gestalt IT’s Tech Field Day event in San Jose, CA (November 11 and 12, 2010).

As a selected delegate, I was the recipient of some paid-for services and goods from Gestalt IT and the presenting sponsors:

  • Gestalt IT: Transportation, lodging, food, a windbreaker, a DVD, a small Lego set, and a Hexbug Nano.
  • Actifio: 8GB USB Drive
  • Intel: Lunch
  • NetApp: Continental Breakfast, Lunch
  • Solarwinds: A myriad of buttons and stickers, a mug, a t-shirt, and a voucher for a free certification test

I appreciate the expenses required to provide the services and goods. However, it is worth noting that the goods and services above have absolutely no impact on my writing as it pertains to Tech Field Day, presenter commentary, product commentary, or anything else related to the event. Additionally, I would like to state that I am in no way, shape, or form obliged to create postings that would be considered to be positive due to my attendance in this event. This blog and my postings will be created based on the content of the presentations and my thoughts and are not impacted by the inclusion or lack of ancillary goods and services.

With all of that said, I would like to close with a HUGE thanks to Stephen and Claire for organizing such a great event!

Tech Field Day–San Jose 2010

November 4, 2010

Hard to believe that Gestalt IT’s Tech Field Day (or #TechFieldDay for you Twitter folks) is just around the corner and I am still reeling from the fact that:

a) I was added as a contributing author to Gestalt IT

and

b) I am a delegate for the Tech Field Day event.

Seriously. How cool is this!? Wicked cool!

One of the primary tenets behind Gestalt IT is that the technology industry is full of amazing people who have skills and knowledge in various markets. Individually, these are some talented people. However, combine these people into a group and, suddenly, the whole group is that much more knowledgeable… “the whole is greater than the sum of its parts”.

Tech Field Day takes the independent and community-based nature of the Gestalt IT group to a whole new level. The FAQ on gestaltit.com’s Field Day page sums it up best:

We were inspired by similar events, but came away wanting something more community-oriented. Hosting a community Field Day just seemed like a natural progression for the Gestalt IT concept, and we had the right connections and contacts to make it happen. We’re pretty happy with the results so far!

http://gestaltit.com/field-day/faq/

If there is something I have learned a lot about recently… it is the value of community. The Virtualization community I participate in is fantastic. VMware has done an amazing job fostering a community around their business and products. More than that, though, the network of people I have come to know is amazing. We all come from different fields, but we all have something to contribute to the greater community.

This is why I love the principles behind Gestalt IT and I really appreciate being included as an author on the site and a delegate for the Tech Field Day in San Jose, CA on November 11-12.

I have seen a listing of the sponsors that are lined up (well… most of them) and, while a couple are new names to me, I am absolutely stoked to see and hear what they have to say.

Even more so, though, I am really looking forward to meeting my fellow delegates:

  • Edward Aractingi – @EAractingi – Edward’s Blog
  • Brandon Carroll – @BrandonCarroll – GlobalConfig
  • Chris Dearden – @ChrisDearden – J.F.V.I
  • Robin Harris – @StorageMojo – StorageMojo and Storage Bits
  • Paul Miller – @PaulMiller – Cloud of Data
  • Frank Owen – @FOwen – TechVirtuoso
  • Jon Owings – @2vcps – Jon Owings Blog
  • Derek Schauland – @WebJunkie – Technically Speaking and TechRepublic
  • Matt Simmons – @StandaloneSA – Standalone Sysadmin and Server Fault
  • Stephen Foskett – @SFoskett – Gestalt IT and Stephen Foskett, Pack Rat

So, now it is T-minus 6 days until I leave for San Jose, and it cannot come soon enough.

Maybe a sponsor will be talking about Fibre Channel over Token Ring! 🙂